Displaying items by tag: Ubuntu

Saturday, 30 May 2020 17:35

How to : Install PHP 7.2 to 7.4 on Ubuntu


Finally, the third part of our LAMP tutorial series: how to install PHP on Ubuntu. In this tutorial, we’ll show you how to install various versions of PHP, including PHP 7.2, PHP 7.3, and the latest PHP 7.4

This tutorial should work for any Ubuntu release, as well as other Ubuntu-based distributions: Ubuntu 14.04, Ubuntu 16.04, Ubuntu 18.04, and even Ubuntu 19.10.


Tutorials here:

  • Before we begin
  • How to install PHP 7.4 on Ubuntu 18.04 or 16.04
  • How to Install PHP 7.2 on Ubuntu 16.04
  • How to Install PHP 7.2 on Ubuntu 18.04
  • How to Install PHP 7.3 on Ubuntu 18.04 or 16.04
  • How to change the PHP version you’re using
  • How to upgrade to PHP 7.3 (or 7.4) on Ubuntu
  • Speed up PHP by using an opcode cache



For the first part of our LAMP series, go to our Ubuntu: How to install Apache

And for the second part, go to How to Install MySQL/MariaDB on Ubuntu



Before we begin installing PHP on Ubuntu


  • PHP has different versions and releases you can use, starting from the oldest currently supported release – PHP 7.2 – through PHP 7.3 to the latest – PHP 7.4. We’ll include instructions for PHP 7.4, PHP 7.3, PHP 7.2 (the default in Ubuntu 18.04), and the default PHP version in the Ubuntu 16.04 repositories – PHP 7.0. We recommend that you install PHP 7.3, as it’s stable and has lots of improvements and new features. If you still use PHP 7.1, you definitely need to upgrade ASAP, because its security support ended in 2019.
  • You’ll obviously need an Ubuntu server. You can get one from Vultr. Their servers start at $2.5 per month. Or you can go with any other cloud server provider where you have root access to the server.
  • You’ll also need root access to your server. Either use the root user or a user with sudo access. We’ll use the root user in our tutorial so there’s no need to execute each command with ‘sudo’, but if you’re not using the root user, you’ll need to do that.
  • You’ll need SSH enabled if you use Ubuntu or an SSH client like MobaXterm if you use Windows.
  • Check if PHP is already installed on your server with the ‘which php’ command. If it prints a path, PHP is installed; if it prints nothing, it isn’t. You can also use the ‘php -v’ command. If one version is installed, you can still upgrade to another.
  • Some shared hosts have already implemented PHP 7.3 and PHP 7.4 in their shared servers, like Hawk Host and SiteGround.
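The check in the list above can be scripted. This is a minimal sketch (the wording of the messages is our own, not part of any standard tooling):

```shell
# Report whether a PHP binary is available, and its version if so.
if command -v php >/dev/null 2>&1; then
    # 'php -v' prints the full version banner; keep only the first line.
    echo "PHP is installed: $(php -v | head -n 1)"
else
    echo "PHP is not installed"
fi
```

Either message tells you which of the sections below applies to your server.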

Now, onto our tutorial.


How to install PHP 7 on Ubuntu 16.04

As of January 2018, the default PHP release in the Ubuntu 16.04 repositories is PHP 7.0. We’ll show you how to install it from Ubuntu’s own repository.

You should use PHP 7.2 or 7.3 instead of the default, outdated PHP version in Ubuntu 16.04. Skip these instructions and follow the instructions below for a newer version.


Update Ubuntu

First, before you do anything else, you should update your Ubuntu server:

apt-get update && apt-get upgrade


Install PHP

Next, to install PHP, just run the following command:

apt-get install php

This command will install PHP 7.0, as well as some other dependencies.


To verify if PHP is installed, run the following command:

php -v

You should get a response showing the installed version, similar to ‘PHP 7.0.x (cli)’.



And that’s it. PHP is installed on your Ubuntu server.



Install PHP 7.0 modules

You may need some additional packages and PHP modules in order for PHP to work with your applications. You can install the most commonly needed modules with:

apt-get install php-pear php7.0-dev php7.0-zip php7.0-curl php7.0-gd php7.0-mysql php7.0-mcrypt php7.0-xml libapache2-mod-php7.0


Depending on how and what you’re going to use, you may need additional PHP modules and packages. To check all the PHP modules available in Ubuntu, run:

apt-cache search --names-only ^php
You can tweak the pattern (e.g. ^php7.0-) to only show packages for a specific version.


If you want to use the latest PHP version, follow the next instructions instead.


How to Install PHP 7.2 on Ubuntu 16.04
PHP 7.2 is a stable version of PHP and has many new features, improvements, and bug fixes. You should definitely use it if you want a better, faster website/application.


Update Ubuntu
Of course, as always, first update Ubuntu:

apt-get update && apt-get upgrade


Add the PHP repository
You can use a third-party repository to install the latest version of PHP. We’ll use the repository by Ondřej Surý.


First, make sure you have the following package installed so you can add repositories:

apt-get install software-properties-common


Next, add the PHP repository from Ondřej:

add-apt-repository ppa:ondrej/php

And finally, update your package list:

apt-get update


Install PHP 7.2

After you’ve added the repository, you can install PHP 7.2 with the following command:

apt-get install php7.2


This command will install some additional dependent packages as well.


And that’s it.


To check if PHP 7.2 is installed on your server, run the following command:

php -v


Install PHP 7.2 modules
You may need additional packages and modules depending on your applications. The most commonly used modules can be installed with the following command:

apt-get install php-pear php7.2-curl php7.2-dev php7.2-gd php7.2-mbstring php7.2-zip php7.2-mysql php7.2-xml

And that’s all. You can now start using PHP on your Ubuntu server.


If you want to further tweak and configure your PHP, read our instructions below.


How to Install PHP 7.2 on Ubuntu 18.04
PHP 7.2 is included by default in Ubuntu’s repositories since version 18.04, so the instructions are pretty similar to those for PHP 7.0 on 16.04.


Update Ubuntu
Again, before doing anything, you should update your server:

apt-get update && apt-get upgrade


Install PHP 7.2


Next, to install PHP 7.2 on Ubuntu 18.04, just run the following command:

apt-get install php

This command will install PHP 7.2, as well as some other dependencies.


To verify if PHP is installed, run the following command:

php -v

You should get a response similar to this:

PHP 7.2.3-1ubuntu1 (cli) (built: Mar 14 2018 22:03:58) ( NTS )
And that’s it. PHP 7.2 is installed on your Ubuntu 18.04 server.


Install PHP 7.2 modules
These are the most common PHP 7.2 modules often used by PHP applications. You may need more or fewer, so check the requirements of the software you’re planning to use:

apt-get install php-pear php-fpm php-dev php-zip php-curl php-xmlrpc php-gd php-mysql php-mbstring php-xml libapache2-mod-php

To check all the PHP modules available in Ubuntu, run:

apt-cache search --names-only ^php


How to install PHP 7.3 on Ubuntu 18.04 or 16.04
PHP 7.3 is a stable version that you can safely use on your servers.

Update Ubuntu

First, update your Ubuntu server:

apt-get update && apt-get upgrade

Add the PHP repository
To install PHP 7.3 you’ll need to use a third-party repository. We’ll use the repository by Ondřej Surý that we previously used.

First, make sure you have the following package installed so you can add repositories:

apt-get install software-properties-common

Next, add the PHP repository from Ondřej:

add-apt-repository ppa:ondrej/php

And finally, update your package list:

apt-get update


Install PHP 7.3

After you’ve added the repository, you can install PHP 7.3 with the following command:

apt-get install php7.3


This command will install some additional dependent packages as well.

And that’s it. 


To check if PHP 7.3 is installed on your server, run the following command:

php -v


Install PHP 7.3 modules
You may need additional packages and modules depending on your applications. The most commonly used modules can be installed with the following command:

apt-get install php-pear php7.3-curl php7.3-dev php7.3-gd php7.3-mbstring php7.3-zip php7.3-mysql php7.3-xml

And that’s all. You can now start using PHP on your Ubuntu server.

If you want to further tweak and configure your PHP, read our instructions below.



How to install PHP 7.4 on Ubuntu 18.04 or 16.04

PHP 7.4 is the latest version of PHP that has lots of improvements. The instructions are pretty similar to PHP 7.3.

Update Ubuntu

First, update your Ubuntu server:

apt-get update && apt-get upgrade


Add the PHP repository

To install PHP 7.4 you’ll need to use a third-party repository. We’ll again use the repository by Ondřej Surý.


First, make sure you have the following package installed so you can add repositories:

apt-get install software-properties-common

Next, add the PHP repository from Ondřej:

add-apt-repository ppa:ondrej/php

And finally, update your package list:

apt-get update


Install PHP 7.4


After you’ve added the repository, you can install PHP 7.4 with the following command:

apt-get install php7.4

This command will install some additional dependent packages as well.

And that’s it. To check if PHP 7.4 is installed on your server, run the following command:

php -v


Install PHP 7.4 modules


You may need additional packages and modules depending on your applications. The most commonly used modules can be installed with the following command:

apt-get install php-pear php7.4-curl php7.4-dev php7.4-gd php7.4-mbstring php7.4-zip php7.4-mysql php7.4-xml

And that’s all. You can now start using PHP on your Ubuntu server.

If you want to further tweak and configure your PHP, read our instructions below.


How to change the PHP version you’re using
If you have multiple PHP versions installed on your Ubuntu server, you can change what version is the default one.

To set PHP 7.2 as the default, run:

update-alternatives --set php /usr/bin/php7.2


To set PHP 7.3 as the default, run:

update-alternatives --set php /usr/bin/php7.3


To set PHP 7.4 as the default, run:

update-alternatives --set php /usr/bin/php7.4


If you’re following our LAMP tutorials and you’re using Apache, you can configure Apache to use PHP 7.3 with the following commands (first disable the module for the version you were using, e.g. PHP 7.2):

a2dismod php7.2
a2enmod php7.3

And then restart Apache for the changes to take effect:

systemctl restart apache2


How to upgrade to PHP 7.3 or 7.4 on Ubuntu

If you’re already using an older version of PHP with some of your applications, you can upgrade by following these steps:

  • Back up everything.
  • Install the newest PHP and its required modules.
  • Change the default version you’re using.
  • (Optionally) Remove the older PHP.
  • (Required) Configure your software to use the new PHP version. You’ll most likely need to configure Nginx/Apache and many other services/applications. If you’re not sure what you need to do, contact professionals and let them do it for you.


Speed up PHP by using an opcode cache

You can improve the performance of your PHP by using a caching method. We’ll use APCu, but there are other alternatives available.


If you have the ‘php-pear’ module installed (we included it in our instructions above), you can install APCu with the following command:

pecl install apcu

There are also other ways you can install APCu, including using a package.


To start using APCu, you should run the following command for PHP 7.2:

echo "extension=apcu.so" | tee -a /etc/php/7.2/mods-available/cache.ini

Or this command for PHP 7.3:

echo "extension=apcu.so" | tee -a /etc/php/7.3/mods-available/cache.ini


And the following command for PHP 7.4:

echo "extension=apcu.so" | tee -a /etc/php/7.4/mods-available/cache.ini
If you’re following our LAMP tutorials and you’re using Apache, create a symlink for the file you’ve just created.

For PHP 7.2:

ln -s /etc/php/7.2/mods-available/cache.ini /etc/php/7.2/apache2/conf.d/30-cache.ini


For PHP 7.3:

ln -s /etc/php/7.3/mods-available/cache.ini /etc/php/7.3/apache2/conf.d/30-cache.ini


For PHP 7.4:

ln -s /etc/php/7.4/mods-available/cache.ini /etc/php/7.4/apache2/conf.d/30-cache.ini


And finally, reload Apache for the changes to take effect:

systemctl restart apache2

To further configure APCu and how it works, you can add some additional lines to the cache.ini file you previously created. The best configuration depends on what kind of server you’re using, what applications you are using etc. Either google it and find a configuration that works for you, or contact professionals and let them do it for you.

That’s it for our basic setup. Of course, there are much more options and configurations you can do, but we’ll leave them for another tutorial.


Published in GNU/Linux Rules!
Saturday, 30 May 2020 15:55

Ubuntu: How to install Apache


These instructions should work on any Ubuntu-based distro, including Ubuntu 16.04, Ubuntu 18.04, and even non-LTS Ubuntu releases. The tutorial was tested on and written for Ubuntu 18.04.

Apache (aka httpd) is the most popular and most widely used web server, so this should be useful for everyone.


Before we begin installing Apache
Some requirements and notes before we begin:



  • Apache may already be installed on your server, so check if it is first. You can do so with the “apachectl -V” command that outputs the Apache version you’re using and some other information.
  • You’ll need an Ubuntu server. You can buy one from Vultr, they’re one of the best and cheapest cloud hosting providers. Their servers start from $2.5 per month.
  • You’ll need the root user or a user with sudo access. All commands below are executed by the root user so we didn’t have to append ‘sudo’ to each command.
  • You’ll need SSH enabled if you use Ubuntu or an SSH client like MobaXterm if you use Windows.

That’s most of it. Let’s move on to the installation.


Install Apache on Ubuntu

The first thing you always need to do is update Ubuntu before you do anything else. You can do so by running:

apt-get update && apt-get upgrade

Next, to install Apache, run the following command:

apt-get install apache2

If you want to, you can also install the Apache documentation and some Apache utilities. You’ll need the Apache utilities for some of the modules we’ll install later.

apt-get install apache2-doc apache2-utils

And that’s it. You’ve successfully installed Apache.

You’ll still need to configure it.


Configure and Optimize Apache on Ubuntu
There are various configs you can do on Apache, but the main and most common ones are explained below.


Check if Apache is running
By default, Apache is configured to start automatically on boot, so you don’t have to enable it. You can check if it’s running and other relevant information with the following command:

systemctl status apache2



And you can check what version you’re using with

apachectl -V

A simpler way of checking this is by visiting your server’s IP address. If you get the default Apache page, then everything’s working fine.


Update your firewall

If you use a firewall (which you should), you’ll probably need to update your firewall rules and allow access to the default ports. The most common firewall used on Ubuntu is UFW, so the instructions below are for UFW.

To allow traffic through both the 80 (http) and 443 (https) ports, run the following command:

ufw allow 'Apache Full'

Install common Apache modules

Some modules are frequently recommended and you should install them. We’ll include instructions for the most common ones:


Speed up your website with the PageSpeed module
The PageSpeed module will optimize and speed up your Apache server automatically.

First, go to the PageSpeed download page and choose the file you need. We’re using a 64-bit Ubuntu server and we’ll install the latest stable version. Download it using wget:

wget https://dl-ssl.google.com/dl/linux/direct/mod-pagespeed-stable_current_amd64.deb

Then, install it with the following commands:

dpkg -i mod-pagespeed-stable_current_amd64.deb
apt-get -f install

Restart Apache for the changes to take effect:

systemctl restart apache2

Enable rewrites/redirects using the mod_rewrite module
This module is used for rewrites (redirects), as the name suggests. You’ll need it if you use WordPress or any other CMS for that matter. To install it, just run:

a2enmod rewrite

And restart Apache again. You may need some extra configurations depending on what CMS you’re using, if any. Google it for specific instructions for your setup.

Secure your Apache with the ModSecurity module
ModSecurity is a module used for security, again, as the name suggests. It basically acts as a firewall, and it monitors your traffic. To install it, run the following command:

apt-get install libapache2-mod-security2

(On older Ubuntu releases the package was named libapache2-modsecurity.)

And restart Apache again:

systemctl restart apache2

ModSecurity comes with a default setup that’s enough by itself, but if you want to extend it, you can use the OWASP rule set.


Block DDoS attacks using the mod_evasive module
You can use the mod_evasive module to block and prevent DDoS attacks on your server, though it’s debatable how useful it is in preventing attacks. To install it, use the following command:

apt-get install libapache2-mod-evasive

By default, mod_evasive is disabled. To enable it, edit the following file:

nano /etc/apache2/mods-enabled/evasive.conf

Then uncomment all the lines (remove the #) and configure it to your requirements. You can leave everything as-is if you don’t know what to edit.


And create a log file:

mkdir /var/log/mod_evasive
chown -R www-data:www-data /var/log/mod_evasive


That’s it. Now restart Apache for the changes to take effect:

systemctl restart apache2

There are additional modules you can install and configure, but it all depends on the software you’re using. They’re usually not required; even the 4 modules we included are not required. If a module is required for a specific application, its documentation will probably note that.


Optimize Apache with the Apache2Buddy script
Apache2Buddy is a script that will automatically fine-tune your Apache configuration. The only thing you need to do is run the following command and the script does the rest automatically:

 curl -sL https://raw.githubusercontent.com/richardforth/apache2buddy/master/apache2buddy.pl | perl


You may need to install curl if you don’t have it already installed. Use the following command to install curl:

apt-get install curl

Additional configurations
There’s more you can do with Apache, but we’ll leave it for another tutorial: things like enabling HTTP/2 support, turning KeepAlive off (or on), and tuning Apache even further. You don’t have to do any of this, but you can find tutorials online and do it if you can’t wait for our tutorials.


Create your first website with Apache
Now that we’re done with all the tuning, let’s move onto creating an actual website. Follow our instructions to create a simple HTML page and a virtual host that’s going to run on Apache.

The first thing you need to do is create a new directory for your website. Run the following command to do so:

mkdir -p /var/www/example.com/public_html

Of course, replace example.com with your desired domain. You can get a cheap domain name from Namecheap.

Don’t forget to replace example.com in all of the commands below.
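If you are scripting this setup, the directory can be created from a variable so the domain only needs changing in one place. A minimal sketch; the BASE variable and its scratch-path default are our own additions so the snippet can be tried without root (on a real server you would set BASE=/var/www):

```shell
# Create a site's document root with the domain set in one place.
# BASE defaults to a scratch path so this can be run safely anywhere;
# on a real server you would set BASE=/var/www.
BASE="${BASE:-/tmp/www-demo}"
DOMAIN="example.com"             # replace with your own domain
mkdir -p "$BASE/$DOMAIN/public_html"
echo "Created $BASE/$DOMAIN/public_html"
```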


Next, create a simple, static web page. Create the HTML file:

nano /var/www/example.com/public_html/index.html

And paste this:

<html>
<head>
<title>Simple Page</title>
</head>
<body>
If you're seeing this in your browser then everything works.
</body>
</html>

Save and close the file.


Configure the permissions of the directory:

chown -R www-data:www-data /var/www/example.com
chmod -R og-r /var/www/example.com


Create a new virtual host for your site:

nano /etc/apache2/sites-available/example.com.conf


And paste the following:

<VirtualHost *:80>
    ServerAdmin webmaster@example.com
    ServerName example.com
    ServerAlias www.example.com

    DocumentRoot /var/www/example.com/public_html

    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>

This is a basic virtual host. You may need a more advanced .conf file depending on your setup.

Save and close the file after updating everything accordingly.


Now, enable the virtual host with the following command:

a2ensite example.com.conf

And finally, restart Apache for the changes to take effect:

systemctl restart apache2

That’s it. You’re done. Now you can visit example.com and view your page.


Published in GNU/Linux Rules!
Tagged under
Friday, 28 February 2020 13:27

Linux Commands: The easy way



The find command is an essential command in the Linux operating system. We use it to locate files within the system hierarchy.

On a Windows operating system you can simply use the search option in the GUI. Likewise, on Linux you can use the find command to locate the files you need.

Like other Linux commands, find has many options: searching recursively, finding files by modification date, access date, size, ownership, permissions, and much more.

You can also use pipelines and redirection to take the output of find and pass it to another operation. For example, you can type a single command that finds some files and deletes them. For that purpose find is combined with the -exec option or the xargs command. We will discuss -exec later. Next, we will go through the find command and examples of its options. 



Syntax –

find [location] [options] [what to find]

  • location : the directories you need to search. This can be a single location or multiple locations.
    e.g. if we need to find a file under the root directory, the location should be the root directory,
    so the command should start like: find /
  • options : the find command has many options to refine the search; we’ll discuss them below.
  • what to find : the name of the file you need to find.
    e.g. to find all files with the .cpp extension under the root directory, the command should be: find / -name "*.cpp". Here -name is the option used to match the file name. When searching for a file by name, we must use the -name option. 




How to use find command with examples.


1) find files named “example.txt” in your current location

find . -name "example.txt"


2) find files named “passwd” under root directory

find / -name "passwd"



 3) find files, ignoring case sensitivity

Suppose we have two files named Text_file and text_file. These two names differ only in the capital T. If we need to ignore case sensitivity, we use the -iname option instead of -name. 

find / -name "text_file" ( case sensitive )

find / -iname "text_file" ( case insensitive )



 4) find the file “example.txt” under your home directory

find /home -iname "example.txt"


5) find files with the .php extension under your home directory

find /home -type f -name "*.php"

Here the -type option determines the type you are searching for: -type f matches regular files. If you are looking for a directory, use -type d, as shown below.



6) find directories named example under your home directory

find /home -type d -name "example"
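Examples 1–6 can be tried safely in a throwaway directory. A small sketch (the file names are made up for the demo):

```shell
# Try the basic name/type searches in a throwaway directory.
DIR=$(mktemp -d)
touch "$DIR/example.txt" "$DIR/Example.TXT" "$DIR/notes.php"
mkdir "$DIR/example"

find "$DIR" -name "example.txt"       # exact name: one match
find "$DIR" -iname "example.txt"      # case-insensitive: two matches
find "$DIR" -type f -name "*.php"     # regular files by extension
find "$DIR" -type d -name "example"   # directories only

rm -rf "$DIR"   # clean up
```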


7) find files in more than one location

find / /home -iname "student"

The above command finds files named student in both the root directory and the /home directory.


8) find empty files

find / -type f -empty


9) find empty directories

find / -type d -empty
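The -empty tests can likewise be demonstrated in a scratch directory:

```shell
# Demonstrate -empty on throwaway files and directories.
DIR=$(mktemp -d)
touch "$DIR/empty.txt"           # zero-byte file
echo "data" > "$DIR/full.txt"    # non-empty file
mkdir "$DIR/empty_dir"

find "$DIR" -type f -empty       # matches only empty.txt
find "$DIR" -type d -empty       # matches only empty_dir

rm -rf "$DIR"
```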


10) find files which have permissions of 777

find / -type f -perm 777 

The -perm option is used to match permissions with the find command. If you want to find files with 644 permissions, use -perm 644.


11) find files not having 777 permissions

find / -type f ! -perm 777


The ! is used to negate a test (NOT).

12) find files which are set to SETUID

find / -perm /u=s


13) find files which are set to SETGID

find / -perm /g=s


14) find files readable by their owner

find / -perm /u=r

Note that /u=r tests for the owner’s read bit, so nearly every file matches; it does not mean the file is read-only.
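The permission tests (-perm 777, ! -perm 777, and -perm /u=s) can be tried on throwaway files; the file names below are invented for the demo:

```shell
# Demonstrate permission-based searches in a scratch directory.
DIR=$(mktemp -d)
touch "$DIR/open.sh" "$DIR/plain.txt" "$DIR/suid.bin"
chmod 777 "$DIR/open.sh"     # world-writable
chmod 644 "$DIR/plain.txt"   # typical file permissions
chmod 4755 "$DIR/suid.bin"   # SETUID bit set

find "$DIR" -type f -perm 777     # exactly 777: open.sh
find "$DIR" -type f ! -perm 777   # everything else
find "$DIR" -type f -perm /u=s    # SETUID files: suid.bin

rm -rf "$DIR"
```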


15) find files and remove them

find /home -iname "*.cpp" -exec rm -rf {} \;


We will break this command into two parts.

  •  The find part you already know: it finds all files having the .cpp extension under your home directory.
  •  -exec rm -rf {} \; takes the output of the find section and executes rm -rf to remove the matched files. Each found path is substituted for {}, the -exec option makes find run the command, and the trailing \; (the semicolon must be escaped from the shell) terminates it.


16) find files having 755 permissions and change them to 644 permissions.

find / -iname "*.cpp" -perm 755 -exec chmod 644 {} \;
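A safe way to try -exec is on throwaway files. Note the escaped semicolon (\;), which the shell would otherwise swallow:

```shell
# Try -exec safely on throwaway files.
DIR=$(mktemp -d)
touch "$DIR/a.cpp" "$DIR/b.cpp" "$DIR/keep.txt"

# The trailing \; ends the -exec command; unescaped, the shell would eat it.
find "$DIR" -name "*.cpp" -exec rm -f {} \;

find "$DIR" -type f   # only keep.txt is left
rm -rf "$DIR"
```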


17) find files based on users

find / -iname "example" -user student

The above command finds files named example that are owned by the user student.


18) find files based on group

find / -iname "example" -group user


This finds files named example that belong to the group user.


19) find files by modification time

find / -iname "*.txt" -mtime 7

finds .txt files under the root directory that were modified exactly 7 days ago.

find / -iname "*.txt" -mtime +7

finds .txt files under the root directory that were modified more than 7 days ago.

find / -iname "*.txt" -mtime -7

finds .txt files under the root directory that were modified within the last 7 days.

find / -iname "*.txt" -mmin 7

finds .txt files under the root directory that were modified exactly 7 minutes ago.

-mtime is used to measure in days and -mmin is used to measure in minutes. 
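You can demonstrate the day-based tests by backdating a file’s timestamp (touch -d is a GNU coreutils option):

```shell
# Demonstrate -mtime with an artificially aged file.
DIR=$(mktemp -d)
touch "$DIR/new.txt"
touch -d "10 days ago" "$DIR/old.txt"   # GNU touch: backdate the mtime

find "$DIR" -type f -mtime +7   # modified more than 7 days ago: old.txt
find "$DIR" -type f -mtime -7   # modified within the last 7 days: new.txt

rm -rf "$DIR"
```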



20) find files accessed 10 days ago

find / -iname "*.txt" -atime 10


21) find files whose status changed 10 minutes ago

find / -iname "*.txt" -cmin 10 


22) find files by size

find / -size +50M

finds files larger than 50M.

find / -size +50M -size -200M 

finds files larger than 50M and smaller than 200M.
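The size tests can be sketched with sparse files created by truncate (a GNU coreutils command; the files occupy no real disk space):

```shell
# Demonstrate -size using sparse files of known sizes.
DIR=$(mktemp -d)
truncate -s 10M  "$DIR/small.bin"
truncate -s 100M "$DIR/medium.bin"
truncate -s 300M "$DIR/large.bin"

find "$DIR" -size +50M               # medium.bin and large.bin
find "$DIR" -size +50M -size -200M   # medium.bin only

rm -rf "$DIR"
```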


So, we have discussed a lot of the options that can be used with the find command. You can refer to the internet, or ‘man find’, for more.




Published in GNU/Linux Rules!
Sunday, 24 November 2019 14:26

TIMESHIFT : Backup and Restore Ubuntu Linux


Have you ever wondered how you can back up and restore your Ubuntu or Debian system? Timeshift is a free and open-source tool that allows you to create incremental snapshots of your filesystem. You can create a snapshot using either RSYNC or BTRFS.

With that, let’s delve in and install Timeshift. For this tutorial, we shall install it on an Ubuntu 18.04 LTS system.

Installing TimeShift on Ubuntu / Debian Linux

TimeShift is not hosted in the official Ubuntu and Debian repositories. With that in mind, we are going to run the command below to add the PPA:



# add-apt-repository -y ppa:teejee2008/ppa




Next, update the system packages with the command:


# apt update


After a successful system update, install timeshift by running the following apt command:


# apt install timeshift



Preparing a backup storage device

Best practice demands that we save the system snapshot on a separate storage volume, aside from the system’s hard drive. For this guide, we are using a 16 GB flash drive as the secondary drive on which we are going to save the snapshot.


# lsblk | grep sdb



For the flash drive to be used as a backup location for the snapshot, we need to create a partition table on the device. Run the following commands:


# parted /dev/sdb mklabel gpt


# parted /dev/sdb mkpart primary 0% 100%


# mkfs.ext4 /dev/sdb1




After creating a partition table on the USB flash drive, we are all set to begin creating filesystem snapshots!

Using Timeshift to create snapshots

To launch Timeshift, use the application menu to search for the Timeshift application.




Click on the Timeshift icon and the system will prompt you for the administrator’s password. Provide the password and click on Authenticate.




Next, select your preferred snapshot type.



Click ‘Next’. Select the destination drive for the snapshot. In this case, my location is the external USB drive labeled as /dev/sdb



 Next, define the snapshot levels. Levels refer to the intervals during which the snapshots are created.  You can choose to have either monthly, weekly, daily, or hourly snapshot levels.




Click ‘Finish’ On the next Window, click on the ‘Create’ button to begin creating the snapshot. Thereafter, the system will begin creating the snapshot.



Finally, your snapshot will be displayed as shown



Restoring Ubuntu / Debian from a snapshot

Having created a system snapshot, let’s now see how you can restore your system from the same snapshot. On the same Timeshift window, click on the snapshot and click on the ‘Restore’ button as shown.





Next, you will be prompted to select the target device. Leave the default selection and hit ‘Next’.



A dry run will be performed by Timeshift before the restore process commences.



In the next window, hit the ‘Next’ button to confirm actions displayed.



You’ll get a warning and a disclaimer as shown. Click ‘Next’ to initialize the restoration process.

Thereafter, the restore process will commence and, finally, the system will reboot into the earlier version defined by the snapshot.




As you have seen, it is quite easy to use TimeShift to restore your system from a snapshot. It comes in handy for backing up system files and allows you to recover in the event of a system fault. So don’t be scared to tinker with your system or mess up; TimeShift gives you the ability to go back to a point in time when everything was running smoothly.


Published in GNU/Linux Rules!




Linux is fully capable of running not for weeks, but for years, without a reboot. In some industries, that’s exactly what Linux does, thanks to advances like kpatch and kGraft.

For laptop and desktop users, though, that metric is a little extreme. While it may not be a day-to-day reality, it’s at least a weekly reality that sometimes you have a good reason to reboot your machine. And for a system that doesn’t need rebooting often, Linux offers plenty of choices for when it’s time to start over.



Understand your options

Before continuing though, a note on rebooting. Rebooting is a unique process on each operating system. Even within POSIX systems, the commands to power down and reboot may behave differently due to different initialization systems or command designs.

Despite this factor, two concepts are vital. First, rebooting is rarely requisite on a POSIX system. Your Linux machine can operate for weeks or months at a time without a reboot if that’s what you need. There’s no need to "freshen up" your computer with a reboot unless specifically advised to do so by a software installer or updater. Then again, it doesn’t hurt to reboot, either, so it’s up to you.

Second, rebooting is meant to be a friendly process, allowing time for programs to exit, files to be saved, temporary files to be removed, filesystem journals to be updated, and so on. Whenever possible, reboot using the intended interfaces, whether in a GUI or a terminal. If you force your computer to shut down or reboot, you risk losing unsaved or even recently saved data, and even corrupting important system information; you should only ever force your computer off when there’s no other option.



Click the button

The first way to reboot or shut down Linux is the most common one, and the most intuitive for most desktop users regardless of their OS: It’s the power button in the GUI. Since powering down and rebooting are common tasks on a workstation, you can usually find the power button (typically with reboot and shut down options) in a few different places. On the GNOME desktop, it's in the system tray: 


It’s also in the GNOME Activities menu:


On the KDE desktop, the power buttons can be found in the Applications menu:


You can also access the KDE power controls by right-clicking on the desktop and selecting the Leave option, which opens the window you see here:


Other desktops provide variations on these themes, but the general idea is the same: use your mouse to locate the power button, and then click it. You may have to select between rebooting and powering down, but in the end, the result is nearly identical: Processes are stopped, nicely, so that data is saved and temporary files are removed, then data is synchronized to drives, and then the system is powered down.



Push the physical button

Most computers have a physical power button. If you press that button, your Linux desktop may display a power menu with options to shut down or reboot. This feature is provided by the Advanced Configuration and Power Interface (ACPI) subsystem, which communicates with your motherboard’s firmware to control your computer’s state.

ACPI is important but it’s limited in scope, so there’s not much to configure from the user’s perspective. Usually, ACPI options are generically called Power and are set to a sane default. If you want to change this setup, you can do so in your system settings.

On GNOME, open the system tray menu and select Activities, and then Settings. Next, select the Power category in the left column, which opens the following menu:


In the Suspend & Power Button section, select what you want the physical power button to do.

The process is similar across desktops. For instance, on KDE, the Power Management panel in System Settings contains an option for Button Event Handling.




After you configure how the button event is handled, pressing your computer’s physical power button follows whatever option you chose. Depending on your computer vendor (or parts vendors, if you build your own), a button press might be a light tap, or it may require a slightly longer push, so you might have to do some tests before you get the hang of it.

Beware of an over-long press, though, since it may shut your computer down without warning.



Run the systemctl command

If you operate more in a terminal than in a GUI desktop, you might prefer to reboot with a command. Broadly speaking, rebooting and powering down are processes of the init system—the sequence of programs that bring a computer up or down after a power signal (either on or off, respectively) is received.

On most modern Linux distributions, systemd is the init system, so both rebooting and powering down can be performed through the systemd user interface, systemctl. The systemctl command accepts, among many other options, halt (halts disk activity but does not cut power), reboot (halts disk activity and sends a reset signal to the motherboard), and poweroff (halts disk activity and then cuts power). These commands are mostly equivalent to starting the target file of the same name.

For instance, to trigger a reboot:

sudo systemctl start reboot.target


Run the shutdown command

In traditional UNIX, before the days of systemd (and on some Linux distributions, like Slackware, that’s still the present), there were commands specific to stopping a system. The shutdown command, for instance, can power down your machine, but it has several options to control exactly what that means.

This command requires a time argument, in minutes, so that shutdown knows when to execute. To reboot immediately, append the -r flag:

sudo shutdown -r now

To power down immediately:

sudo shutdown -P now

Or you can use the poweroff command:

sudo poweroff

To reboot after 10 minutes:

sudo shutdown -r 10

The shutdown command is a safe way to power off or reboot your computer, allowing disks to sync and processes to end. It prevents new logins within the final 5 minutes before shutdown commences, which is particularly useful on multi-user systems. A scheduled shutdown can be cancelled at any point before it triggers with shutdown -c.

On many systems today, the shutdown command is actually just a call to systemctl with the appropriate reboot or power off option.


Run the reboot command

The reboot command, on its own, is basically a shortcut to shutdown -r now. From a terminal, this is the easiest and quickest reboot command:

sudo reboot

If your system is being blocked from shutting down (perhaps due to a runaway process), you can use the --force flag to make the system shut down anyway. However, this option skips the orderly shutdown process, which can be abrupt for running processes, so it should only be used when a normal shutdown command fails to bring the system down.

On many systems, reboot is actually a call to systemctl with the appropriate reboot or power off option.



Run the telinit command

On Linux distributions without systemd, there are up to seven runlevels your computer understands. Different distributions can assign each mode differently, but generally, 0 initiates a halt state and 6 initiates a reboot (the numbers in between denote states such as single-user mode, multi-user mode, a GUI prompt, and a text prompt).

These modes are defined in /etc/inittab on systems without systemd. On distributions using systemd as the init system, the /etc/inittab file is either missing, or it’s just a placeholder.
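
For illustration, a classic SysV-style /etc/inittab mapped runlevels to actions roughly like this (an abbreviated, hypothetical excerpt; exact script paths vary by distribution):

```
id:3:initdefault:                          # default runlevel at boot
l0:0:wait:/etc/rc.d/rc 0                   # runlevel 0: halt the system
l6:6:wait:/etc/rc.d/rc 6                   # runlevel 6: reboot the system
ca::ctrlaltdel:/sbin/shutdown -t3 -r now   # what Ctrl+Alt+Del triggers
```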

The telinit command is the front-end to your init system. If you’re using systemd, then this command is a link to systemctl with the appropriate options.

To power off your computer by sending it into runlevel 0:

sudo telinit 0

To reboot using the same method:

sudo telinit 6

How unsafe this command is for your data depends entirely on your init configuration. Most distributions try to protect you from pulling the plug (or the digital equivalent of that) by mapping runlevels to friendly commands.

You can see for yourself what happens at each runlevel by reading the init scripts found in /etc/rc.d or /etc/init.d, or by reading the systemd targets in /lib/systemd/system/.


Apply brute force

So far I’ve covered all the right ways to reboot or shut down your Linux computer. To be thorough, I include here additional methods of bringing down a Linux computer, but by no means are these methods recommended. They aren’t designed as a daily reboot or shut down command (reboot and shutdown exist for that), but they’re valid means to accomplish the task.

If you try these methods, try them in a virtual machine. Otherwise, use them only in emergencies.




A step lower than the init system is the /proc filesystem, which is a virtual representation of nearly everything happening on your computer. For instance, you can view your CPUs as though they were text files (with cat /proc/cpuinfo), view how much power is left in your laptop’s battery, or, after a fashion, reboot your system.

There’s a provision in the Linux kernel for system requests (Sysrq on most keyboards). You can communicate directly with this subsystem using key combinations, ideally regardless of what state your computer is in; it gets complex on some keyboards because the Sysrq key can be a special function key that requires a different key to access (such as Fn on many laptops).

An option less likely to fail is using echo to write to /proc manually. Note that a plain "sudo echo 1 > /proc/sys/kernel/sysrq" fails with a permission error, because the redirection is performed by your unprivileged shell, not by sudo; piping through sudo tee works instead. First, make sure that the Sysrq system is enabled:

echo 1 | sudo tee /proc/sys/kernel/sysrq

To reboot, you can use either Alt+Sysrq+B or type:

echo b | sudo tee /proc/sysrq-trigger

This method is not a reasonable way to reboot your machine on a regular basis, but it gets the job done in a pinch.



Kernel parameters can be managed during runtime with sysctl. There are lots of kernel parameters, and you can see them all with sysctl --all. Most probably don’t mean much to you until you know what to look for, and in this case, you’re looking for kernel.panic.

You can query kernel parameters using the --value option:

sudo sysctl --value kernel.panic


If you get a 0 back, then the kernel you’re running has no special setting, at least by default, to reboot upon a kernel panic. That situation is fairly typical since rebooting immediately on a catastrophic system crash makes it difficult to diagnose the cause of the crash. Then again, systems that need to stay on no matter what might benefit from an automatic restart after a kernel failure, so it’s an option that does get switched on in some cases.
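
Since sysctl values are exposed under /proc, you can read the same setting directly from there; a quick, harmless check that needs no root:

```shell
# kernel.panic holds the number of seconds the kernel waits after a panic
# before rebooting; 0 means stay halted and never auto-reboot.
cat /proc/sys/kernel/panic
```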

You can activate this feature as an experiment (if you’re following along, try this in a virtual machine rather than on your actual computer):

sudo sysctl kernel.panic=10
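
A value set with sysctl lasts only until the next boot. To make the setting persistent, the usual approach is a sysctl drop-in file (the filename below is just a common convention):

```
# /etc/sysctl.d/90-panic-reboot.conf
# Reboot 10 seconds after a kernel panic (0 disables auto-reboot)
kernel.panic = 10
```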


Now, should your computer experience a kernel panic, it is set to reboot after the configured delay instead of waiting patiently for you to diagnose the problem. You can test this by simulating a catastrophic crash with Sysrq. First, make sure that Sysrq is enabled:

echo 1 | sudo tee /proc/sys/kernel/sysrq

And then simulate a kernel panic:

echo c | sudo tee /proc/sysrq-trigger

Your computer reboots as soon as the kernel.panic timeout expires.


Reboot responsibly

Knowing all of these options doesn't mean that you should use them all. Give careful thought to what you're trying to accomplish, and what the command you've selected will do. You don't want to damage your system by being reckless. That's what virtual machines are for. However, having so many options means that you're ready for most situations.

Have I left out your favorite method of rebooting or powering down a system? List what I’ve missed in the comments!



 Source: opensource.com. Please visit and support the Linux project.


Published in GNU/Linux Rules!
Thursday, 18 July 2019 23:32

Final battle: Ubuntu vs Windows  




In this post, I will share my opinion on two operating systems: Windows and Ubuntu. I will point out the pros and cons of both, and share some tips and tricks that might be handy in your daily workflow, help you compare the two, or perhaps convince you to switch from one to the other.

Quick Comparison of Windows and Ubuntu

| Category | Windows | Ubuntu |
| --- | --- | --- |
| Hardware requirements | Standard | Standard, plus a lightweight option |
| Servers | Rarely used, worse option | Widely used, better option |
| Security | Relatively less secure | Relatively more secure |
| Privacy | Collects data by default | Doesn’t collect data |
| Live CD/USB | No official versions | Available by default |
| Installing software | App store (rarely used) and executable files | App store, executable files (rarely used), and package managers |
| Updates | Intrusive system updates; a separate update process for each app | Unintrusive, centralized updates for all apps and the system |
| Attaching hardware | Old hardware is incompatible | Old hardware is compatible |
| Features | Limited set of features available by default | Many features available by default, all customizable |
| Gaming | All games are available natively | Only some games are available natively, though there are compatibility options |
| Customizability | Moderately customizable | Highly customizable |
| Cost | Paid | Free |
| Code | Closed source | Open source |
| Support | Paid support available; community support is worse | Paid support available; community support is far better |
| Popularity | More popular for desktops | Less popular for desktops |
| Bloatware | Many unnecessary apps installed by default | No bloatware installed by default |
| Beginner-friendly | Yes | Yes |
| Editions | A few similar editions | Many editions with different features |


Windows is by far the most popular operating system for personal computers. Almost all desktop PCs and laptops are sold with Windows preinstalled. The reason for this popularity dates back to the 90s, when there really were no other user-friendly alternatives, so people got used to it. Manufacturers then continued selling PCs running Windows so as not to introduce a learning curve for their customers.

Exceptions are Apple’s products, which use Mac OS (or iOS). There are just a few manufacturers that offer desktop PCs and laptops with Ubuntu or another Linux-based operating system preinstalled. Some of the most popular devices are Dell’s XPS series of laptops, since they have all of the latest hardware features, great design and, of course, Ubuntu preinstalled.

Speaking of Ubuntu, it’s an operating system just like Windows and Mac OS are. It is developed by Canonical, a company that many geeks and nerds love to hate. However, it does a great job by offering a free and open source Linux-based operating system. Yes, it is free as in both free beer and free speech. It costs $0 to obtain a copy, but donations are welcome. Everything Canonical does is free to study, modify, and redistribute in modified form. Because of this, many variants (flavors) of the Ubuntu operating system are available today. Some of them are Xubuntu, Kubuntu, Linux Mint, Elementary OS, etc… All of them introduce some changes over the default Ubuntu to provide a slightly different feature set, while still relying on Ubuntu quite a lot. Canonical does the very same thing: the Ubuntu operating system is heavily based on the Debian operating system.



Hardware Requirements

These are the minimum hardware requirements for Windows (10) and Ubuntu.




Windows 10

  • CPU: 1 GHz
  • RAM: 1 GB (or more for the 32-bit version), 2 GB (or more for the 64-bit version)
  • GPU: 800 x 600 pixels output resolution with a color depth of 32 bits
  • Disk Space: 32 GB




Ubuntu

  • CPU: 2 GHz dual-core processor
  • RAM: 2 GB
  • GPU: 1024 x 768 screen resolution
  • Disk Space: 25 GB



Lubuntu and Xubuntu

These are essentially lite versions of Ubuntu with lightweight desktop environments.

  • CPU: 300 MHz
  • RAM: 256 MB
  • GPU: 640 x 480 screen resolution
  • Disk Space: 1.5 GB

As you can see, the hardware required to run Ubuntu or Windows is quite similar. The myth that Ubuntu uses far fewer resources is false. However, there are lightweight flavors (Lubuntu and Xubuntu) that use fewer resources, though they lack some of the features I will mention later. If you need something even lighter, there are other distros that run on 128 MB of RAM or less.




Windows and Ubuntu Servers Compared

Both Ubuntu and Windows have a specialized edition for servers.

Windows Server is mainly used when you need to run Windows-specific software, like .NET apps. Ubuntu is basically used for everything else.

By default, Windows Server does have a GUI, but Ubuntu Server doesn’t. You can easily install a graphical control panel on your Ubuntu Server, but out of the box, Windows is more beginner-friendly.

Ubuntu servers are better than Windows servers in terms of cost, security, stability, compatibility, online documentation and help, uptime, etc. The only reason to use a Windows server is if, and only if, your apps and software require it.

Most websites you visit are powered by a Linux server. Most online multiplayer games you play host their servers on a Linux server. This website is hosted on an Ubuntu server.

While you can still install a LAMP stack on a Windows server and run an app like WordPress, it’s recommended to use Ubuntu servers because of their many advantages as compared to Windows servers.



Security in Windows and Ubuntu

The fact is that the more popular an operating system is, the more people are interested in writing malicious software for it. Simply put, the effort of making malicious software pays off more when it can eventually penetrate a greater number of PCs, because the potential user base (the targets) is larger.

Since Windows is widely spread on personal computers at homes and offices, there is higher interest in stealing data or damaging those PCs. Most of the vulnerabilities come from:

  • Vendors failing to secure their proprietary software utilities (Asus is the latest company to fail at this)
  • Vendors preinstalling malicious software on purpose (I’m talking to you, Lenovo)
  • Users not encrypting their data (it’s like leaving the front door unlocked; seriously, take care of your data if you want it to be safe)
  • Users installing pirated (cracked) software (please don’t do this)
  • Users not following the recommended security procedures for scanning removable storage devices (USB sticks, external HDDs) and incoming emails (there are many posts on this topic already)

There is growth in attacks against Linux-based operating systems too, but the layered security in Linux-based operating systems is quite good. The majority of security flaws come from bugs in user-space software, not the Linux kernel itself. The nice thing is that from user-space software you can only attack user-space data: images, videos, documents, but not the operating system itself. It is quite hard to bypass authentication and gain the root privileges needed to attack the OS itself.

  • The Nvidia proprietary GPU driver had some security issues recently. The good thing is that, even though the vulnerability was present, no PCs were affected, since an attacker could not use the bug to execute malicious code remotely. Giving an attacker physical access to the PC is a security flaw of a different kind, not of the OS itself. The bad thing is that on shared PCs this can be an issue. The worst thing is that this is proprietary code, so no one other than Nvidia can fix it.
  • A ransomware attack that earned its own wiki page damaged tens of PCs. I laugh because only tens of PCs were affected, but it is quite a story, since these kinds of attacks are really rare.

Long story short, there are security issues in Linux-based operating systems like Ubuntu too. But the truth is that those vulnerabilities can very rarely do any damage in the real world. Since the majority of the source code being executed is freely available, a lot of advanced users will try to patch a vulnerability just minutes after its announcement (this is one of the reasons you should use open source software). That leaves hardly any time for an attacker to plan and do real damage.

Since Ubuntu is not so popular for personal use, attackers don’t even bother spending time there; they try to attack servers running Linux-based operating systems instead. Mostly, those servers offer features to users of other operating systems. For example, if a Linux-based computer acting as an email server is attacked, it can be used to send malicious emails to Windows users (and damage their PCs or data), while users of Linux-based operating systems like Ubuntu remain safe. As I mentioned earlier, an attacker needs to gain root privileges to do damage on a Linux-based operating system.

So personal users are quite safe, while servers, though almost as secure, still need a bit more attention.

  • Little risk from attaching malicious USB flash drives for home users
  • No risk of malicious software preinstalled by vendors
  • No risk from pirated software, since most software is already free

If a user succumbs to a phishing or social engineering attack, the operating system they’re using makes little to no difference. Same with connecting to an unsecured network or having an improperly configured system (firewalls, weak passwords, login and user control, etc.)



Privacy in Windows (or the lack thereof) vs Ubuntu

Windows doesn’t take privacy seriously. Even though there are plenty of options that can be turned on and off by the user, the truth is that the majority of the privacy-intrusive behavior cannot be disabled. For example, every keystroke on the keyboard is recorded, partly to improve spelling and other grammar-related features (which is nice), but not exclusively. The keystrokes are sent to third-party servers for analysis, and no one can be sure what else is done with them on the remote server, or what content is actually sent along with them.

Ubuntu, on the other side, is far more respectful of its users’ privacy. Similar features for tracking user input exist, but the behavior is quite different: that data is only used in debug logs when a program or the operating system crashes, and only if the user agrees to share the crash logs. That data can be very useful to developers for finding and fixing the bug.

Because the source code of the operating system is freely available, it can be freely reviewed on the user’s behalf. Users who are especially picky about privacy can even modify the source code and redistribute the modified version if desired. There are also specialized Linux distros that focus on privacy, like Tails.

As you can see, the free will of the user is far greater when the source code is freely available for modification and redistribution than if it is closed and inaccessible for general public review.



Games in Ubuntu (or the lack thereof) vs Windows

One of the areas where Windows is definitely better than Ubuntu is gaming. Although there are quite a lot of games available for Ubuntu, virtually all games are available for Windows, while only a small portion are available for Ubuntu.

The biggest influence here comes from Steam: its Proton compatibility layer, which is bundled with the Steam client, allows many Windows-exclusive titles to run on Linux-based operating systems too. Simply enable Proton (Steam Play) in the Steam settings.

Recently I was playing some older titles like Crysis 2, GTA San Andreas, and Dirt 3 (all of them made for Windows only) on Steam with Proton enabled, and I have not found any issues. Everything is as smooth as on Windows. The best thing is that only one mouse click is required to enable Proton.

The old-school way of playing Windows games on Ubuntu is with Wine, CrossOver, and/or KVM (for more advanced users). Wine and CrossOver are compatibility layers rather than full virtual machines, while KVM and VirtualBox let you run a complete Windows virtual machine in which you can install and run all games. However, issues can appear with any of them, so depending on the specific game and your hardware you might prefer one over the other.

A few open source projects have brought new life to gaming on Linux lately: DXVK, Looking Glass, and Lutris. Long story short: no lag and no frame drops. Amazing! For more details and technical information, I highly recommend the Level 1 Linux channel on YouTube; there is some excellent content on this topic for those who are interested.

Although there are a number of options and a lot of games work on Linux-based operating systems, nothing beats native support. No matter how good a workaround is, native support is always superior. You can never be 100% sure that the next game will work with the current workaround, or trust that what works today will continue to work tomorrow; anything can break at any time. That’s the nature of workarounds.

Also, there are specialized Linux distros for gaming that are optimized for games and have various emulators and games pre-installed. Linux distros are better than Windows when it comes to retro gaming with emulators.

At the end of the day, you can still play games on Ubuntu, but it’s highly unlikely that a new popular game will support Ubuntu, at least for the time being, whereas Windows support is almost guaranteed for each game.



Ubuntu’s Live CD/USB

Besides the ability to install the operating system, Ubuntu offers a Live option, which allows you to use the operating system without installing it on the hard drive. In Live mode, the user has access to a wide variety of everyday software: a web browser, media player, text editor, calculator, file manager, etc. These apps are loaded directly from the installation media (CD or USB flash drive).

This option is very handy for recovering data from a damaged PC, troubleshooting a broken PC, or simply booting a shared PC from your own CD/USB if you have privacy concerns. Even if the shared PC is affected by the Nvidia GPU driver bugs mentioned earlier, you can simply boot from a Live CD/USB and remain safe by using your own operating system.

It’s like carrying a ready-to-use operating system in your pocket. The only downside is speed: it is noticeably slower than launching apps from a properly installed operating system.

There are no official live versions of Windows.



Installing additional software on Windows vs Ubuntu

Both Windows and Linux come with some necessary software preinstalled, like a media player, a web browser, etc. Installing additional software is quite a different procedure on each.

Installing additional software on Windows can be done in two ways. If you use Windows 10, simply open the Store application, search the catalog, and click Install.



For older versions of Windows, installing additional software requires a web browser, a search engine, and an internet connection, or a different way of getting the software (CD, USB flash drive, etc.)

  • Open a web browser
  • Search for the software you want to install
  • Download the .exe file
  • Double click the .exe file to begin with the installation

As you can see, it is quite an involved process. However, people are used to it, so few complaints are heard. Beginners often make mistakes when downloading files and software from the internet, often grabbing illegitimate files and software bundled with malware.

On the other hand, installing additional software on Ubuntu is very similar to what we actually do on our smartphones (Android or iOS).

  • Open Store app (Play Store or App Store)
  • Search the app you want
  • Click install

Done. Similar to Windows 10 (and 8), but not to previous versions.

The concept of putting all available applications in one centralized place has been present since the beginning of Ubuntu (2004), and indeed originates from the early days of Linux-based operating systems in the 90s. The same principle was later adopted on Android and, starting from Windows 8, on Windows.

You can still download executable files from the internet and install them on your Ubuntu, similarly to Windows.

If you’re using an Ubuntu server, you’ll need to use the CLI (command line interface) to install software. If you’re using a GUI (a control panel), in most cases you can do it without typing any commands.
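
On an Ubuntu server, that CLI installation is typically done with apt. A minimal session might look like this (the package name htop is only an example):

```shell
sudo apt update        # refresh the package index first
sudo apt install htop  # install the package; dependencies are resolved automatically
```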

There are many different applications that allow searching through the list of available software packages on Linux distros. Some of them are designed to be run from the CLI while others have a nice GUI. Here are some examples:

  • apt



  • pacman



  • muon



  • Discover



  • GNOME Software Center




  • Elementary Apps


So basically you have multiple options to choose from when it comes to app stores.




Upgrading (updating) software on Windows vs Ubuntu

On Windows, there are multiple streams for acquiring updates. There is one stream controlled by the operating system and updates the operating system only. Beside it, every other application activates a background service to acquire updates. That’s why Windows users often see popup messages about updates just after launching an application. While it is a concept that can get the job done, it can be very annoying. Users are executing applications to get a job done, not to maintain them, so any popup message can be counted as a distraction.




If you want to update all the installed apps on Windows, you’ll need to go through each app and update it separately (if it wasn’t installed via the Store).

On Ubuntu, there is only one stream for acquiring updates. It fetches updated versions of the operating system itself and of every installed application as a scheduled task. Because of this, applications do not have to run additional background services. The end result: no annoying pop-up messages when launching an application, and no wasted resources while you are doing your job. The scheduled task fetches all updates while your PC is idle, so you will barely notice it. There are no forced updates like Windows has, and no automatic restarts after updates. In other words, everything is smooth, silent, and done in the background with as little user interaction as possible.
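
The same centralized update that runs in the background can also be triggered by hand from a terminal, assuming the standard apt tooling:

```shell
sudo apt update    # fetch the latest package lists for the OS and every installed app
sudo apt upgrade   # apply all pending updates in one pass
```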



What about downgrading to previous versions?

This is a very interesting use case. People rarely do it, but from time to time an updated version can cause problems, like incompatibility with third-party software or tools, or a newly introduced minor bug. Most of the time, issues like these are fixed in subsequent updates in almost no time, but it can still be annoying.

On Windows, downgrading to an older version is almost impossible. There is no built-in mechanism for doing this, but there is a workaround. It goes like this:

  • Uninstall the program.
  • Install an older version of the same program.

To achieve this, you must have the .exe installer of the desired program in the desired (older) version. If you don’t have it, you have to obtain it manually: open a web browser, use a search engine, download the software, and so on. The downside is that the configuration might break if the current configuration files are not completely deleted, and even if they are deleted there might be other bugs. Users then have to do the very same configuration all over again, which can be tricky for complex workflows.

Ubuntu, on the other side, does have a built-in method for downgrading software. The .deb files (installation packages) are kept on the file system by default and are removed only when the file system is low on free space or when too many older versions of a given application have accumulated. The point is that the installation file of the previous version of an application is usually still there, so users don’t have to waste time searching for it.

The sad thing is that this method works only from the command line (or at least I’m not aware of any GUI tool that can do it). Actually, that is just fine, since this task is not aimed at regular users, so there is no need for clickable buttons for something that will hardly ever be used. Downgrading from the command line is a matter of copying and pasting one command and pressing the “Enter” key.
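
As a sketch of that one-command downgrade with apt (the package name and version string here are hypothetical; check which versions your repositories still offer first):

```shell
apt-cache policy nginx                    # list the versions the repositories provide
sudo apt install nginx=1.14.0-0ubuntu1    # install that exact older version (a downgrade)
```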

Be careful with the command line! Don’t use it if you don’t feel comfortable. It’s a very powerful tool, so it must be used with care. It’s better to ask IT support for help, or at least discuss your issue on the Ubuntu forums or other online Ubuntu communities, where you will be guided until the issue is resolved.


Attaching hardware on Windows vs Ubuntu

For Windows, back in the day, this process required manually installing a driver, and you had to be careful: drivers were distributed on CDs, and there were many of them. One for Windows XP, another for Vista, and so on, each in both 32-bit and 64-bit variants. Installing the wrong driver could leave you with a broken PC. Not a pleasant experience. Windows 10 is quite nice: it can auto-detect hardware and install the required drivers automatically, but earlier versions of Windows could not.

One big problem still remains unresolved: older drivers are not compatible with newer versions of Windows. Getting older hardware set up and running can be quite a challenge, if not impossible. I have a bunch of old USB devices that are still useful, but none of them work on Windows 10.

Ubuntu (and all of its derivatives like Kubuntu, Xubuntu, Elementary OS, etc.) relies on the Linux kernel for drivers. One of the principles of Linux kernel development is that compatibility should never be broken; a patch that breaks it will not be released for public use. That’s why, if a hardware device worked once on a Linux-based operating system, it will keep working in the future.


Comparing features (workflow) on Windows and Kubuntu

An average user might be interested in a different flow for doing everyday tasks. Depending on the size and resolution of the screen, or the number of monitors attached, the very same actions can be performed more efficiently or at least made more visually appealing.

I’ll use Kubuntu as an example, but features are often the same or quite similar on more Ubuntu flavors.

So let’s compare some things here:


Virtual desktops

On Windows, this feature has been available since the release of Windows 10 (2015). It does not have many features besides spreading windows (grouping them) across virtual desktops. I often use one desktop for chat apps and multimedia, while all the others are for job-related tasks, grouped by a specific topic on each virtual desktop.





Kubuntu shares the very same feature, but has had it for much longer than Windows. I remember seeing it back in 2008 (though I won’t argue about the exact first day of availability).





ALT + TAB for switching active windows

I bet this is the most used shortcut. Windows has a single UI for this and allows cycling in two directions: forward (ALT + TAB) and backward (ALT + SHIFT + TAB).




Kubuntu (my favorite flavor of Ubuntu) has the very same functionality, but with more options.
It can have multiple UIs, so choose whichever is the most visually appealing to you.


In addition to the visually appealing UI, there is one very handy shortcut, (ALT + ~), with (ALT + SHIFT + ~) for cycling in reverse order. It filters the scope of windows that will be cycled: only instances of the currently active application are included, and all other windows are excluded.

At first, I was using (ALT + TAB) to switch between all windows (3 x Firefox, 1 x Steam and 1 x Clementine), then I used (ALT + ~) to switch between the Firefox instances only. There is also an option to assign a different shortcut for cycling through the windows of the current virtual desktop only, or through the windows of all virtual desktops.

On Windows, cycling is limited to the currently active virtual desktop only, and this can't be changed. On Kubuntu, in addition to (ALT + TAB) cycling, there is one more option, called "Present Windows". It displays all windows side by side, with an option to show windows from the current virtual desktop only or from all virtual desktops.






Moving windows on the screen

Windows lets you move a window by pointing the cursor at the window frame (title bar), pressing and holding the left mouse button, and moving the cursor around; the window follows the cursor. Very simple and intuitive.


Kubuntu can do the same with the exact same actions. In addition, the user can drag a window while pointing anywhere on it (not just the title bar), as long as that area is not an input field. For example, you can move a window around while pressing and holding the left mouse button anywhere on its toolbar. If the window consists mostly of a large input area, there is a shortcut that bypasses this: hold the Windows key (super key, or meta key) while pressing and holding the left mouse button, and you can move the window no matter where you are pointing. I use this often; it is by far the most efficient way for me.



Copy and paste

On Windows, this is achieved with the (CTRL + C) shortcut for copying and the (CTRL + V) shortcut for pasting. There is also the usual context menu: click the right mouse button and choose "Copy", then "Paste". Easy.

The very same actions work in Kubuntu too. In addition, there are some very nice features built around copying and pasting. For example, there is a history of recently copied items. The history can be viewed by clicking the notes icon in the system tray in the bottom right corner of the screen.




The very same history can be opened in a context menu next to the cursor with the CTRL + ALT + V shortcut. This is a built-in option, but the shortcut must be assigned manually.




The point of the history is that you can paste the very same item again (multiple times), even if it was copied a while ago. Once you get used to this feature, it is very handy. If you measure how long it takes to copy an item and multiply that by how many times you will paste it, you will realize how much time is literally wasted. To save even more time, there is search functionality in both the context menu and the system tray app. It filters the list of results by the text you enter in the search field; typing a few letters is way faster than scrolling until you reach something copied early in the morning.

Windows has a simpler built-in clipboard manager, where you can only view, pin to the top, and use recently copied items. You can access the clipboard manager by pressing the Windows key + V.




To get the features Kubuntu offers, like search, on Windows, you'd have to install additional software. You may also need to activate the Clipboard feature in the Windows Settings app before you can use it.



Window buttons

Windows puts the close, minimize, and maximize buttons in the top right corner of the window. It's simple, it works.




Kubuntu, on the other hand, is more flexible. You can move the buttons to the left or the right corner if desired.

You can also add more buttons, or remove them, if desired. Two of my favorite additional buttons are "keep above others" (always on top) and "pin to all desktops" (visible on all desktops). The always-on-top button prevents the window from going behind other windows. I use it frequently to keep my terminal, calculator or notes app on top while using a web browser or programming.

The pin-to-all-desktops button makes the window visible in all workspaces. I use this feature to keep a particular spreadsheet document visible on all virtual desktops, or to have access to the same terminal emulator from any virtual desktop.


Smartphone connectivity

There is no built-in option for synchronizing an Android smartphone with a Windows PC, or at least none I am aware of. It is possible with third-party software like Pushbullet and others.

On Kubuntu, there is a very handy tool called KDE Connect, and it is available by default. It is a wonderful syncing app. Its features include:

  • Browsing the smartphone file system (SD card / built-in storage) from your PC.
    So you can access all the documents stored on your smartphone from your PC.
  • Synchronizing clipboard (copy – paste) items.
    You can copy a paragraph of text on one of your devices and it will be available on the other devices ready for pasting.
  • Receive notifications.
    All notifications that appear on one device are instantly shown on the other devices too. I often use this feature and generate a notification for my long-running (compile) jobs. When the job is done, a notification is generated and I am notified on my smartphone while resting in my garden. Before KDE Connect, I was checking the PC every 10-15 minutes to see if the job was done.
  • Phone calls and SMS.
    You can receive SMS notifications and respond to them from the PC. When a phone call is ongoing, the volume of the PC is automatically turned down so as not to disturb the call, and it's turned up again when the call is over. I adore this.
  • Mouse and keyboard emulation.
    You can use your smartphone as a touchpad to control the cursor, you can also use your smartphone keyboard to input text to your PC.
  • Multimedia remote.
    There is remote-control functionality that works with almost any media player on your PC. You can use the smartphone to play/pause, switch content, etc.

The best thing is that KDE Connect is integrated as a widget in the system tray. It runs silently in the background and is enabled by default. Just install the KDE Connect Android app on your smartphone; pairing is a very simple process requiring just a few clicks.

All or some of these features are only available on Windows if you install a third-party app, while in Kubuntu they are installed by default.



File Managers

Windows has a very basic file manager. It is simple, but not feature-rich.

Kubuntu, on the other hand, has a feature-rich file manager (Dolphin). It supports tabs, similar to web browsers.




There is also a split option, so the user can see multiple locations on the same screen. A terminal emulator can be enabled for the current location; I've noticed that a lot of advanced users are not aware of this.





All toolbars can be rearranged to whatever edge desired. No restrictions at all.

Spending some time customizing the file manager can certainly improve your workflow by reducing the clutter of having multiple windows open instead. Every button on the toolbar can be removed, and new ones can be added. Almost every action the file manager can perform can be put in the toolbar for quick access.

The Windows file manager is not that customizable and doesn’t have all the features that Kubuntu’s file manager has.




Windows 10 has improved search functionality built into the start menu. It can search for applications, files, and folders on your PC. The search can also be expanded to web sources. One downside is that all web search queries go through the Bing search engine. Bing is fine, but you cannot switch to another search engine like Google or DuckDuckGo if desired. The start menu can have the traditional layout or a full-screen layout:

Kubuntu offers similar functionality. The default start menu can search for applications, files, or folders stored on your PC. The search function is also nicely integrated with the Firefox and Chromium (Google Chrome without any proprietary code) web browsers, and perhaps some others, so a user can search their bookmarks from the start menu. It can expand the search to your contacts: KMail (email client), KAddressBook (phone contacts), etc. Some popular applications are also nicely integrated into this menu, like Chromium, Firefox, and Clementine (media player), so a user can search for and execute actions built into those applications. For example, typing "incognito" in the search menu will open a new Chromium window in incognito mode. The same can be done with Firefox by using the term "private" instead, to open a private window. Besides the search capabilities built into the start menu, there is a dedicated search-only tool (activated by pressing ALT + F2).



In addition to the ALT + TAB shortcut for switching windows, there is the window spread option. Both of these were already mentioned. What hasn't been mentioned is that the window spread option has built-in search functionality:

  • Open the windows spread
  • Type any characters
  • Only windows matching those characters will remain visible

I often use this feature when I have a lot of windows open (sometimes up to 40), or when there are multiple Chromium windows open. Since one window is dedicated to personal stuff only (YouTube, forums, social networks, etc.) and all the others are for job-related tasks, I can simply type the content I want and the exact Chromium window will appear in front of me. Way faster than pressing ALT + TAB too many times.


Misc features in Kubuntu worth mentioning

Most of these features are not available in Windows by default.


There is a mute button in the taskbar icons.

Right-click the app icon and click the mute button to silence that app. I use this to mute the sounds of video games while they are minimized.



Audio from windows.

Every window that produces audio is marked in the taskbar, so you can mute that window. Very handy when there are multiple audio sources on different virtual desktops; it helps a lot in dealing with unnecessary audio sources.


The same can also be achieved from the system tray: every application that makes noise is displayed, and the volume can be decreased independently for any of them. On Windows, you'd have to open the Volume Mixer to get the same feature.





Konqueror

Konqueror is primarily a web browser, but it can also be used as a file manager, a PDF reader, etc. It is a very powerful application.

It supports tabs. Each tab can display different content, for example a document (.pdf), a web page, or the file system.

The tabs can also be split into multiple views, each displaying different content. As you can see, I have a tab with YouTube in one view, a file manager in a second view, and a photo in a third view. The second tab is dedicated to the www.andromedacomputer.net web page.

Konqueror helps a lot in building a productive workflow. I used it a lot while studying: many .pdf documents, many StackOverflow pages, etc., all organized in tabs, splits and multiple windows. A hell of an efficient flow for quickly navigating to the desired content.



Infinite scrolling

Many of the default apps support scrolling the view by pressing the mouse scroll wheel and dragging the mouse. The scrolling, however, is not limited by the size of the screen: when an edge is reached, the cursor is automatically moved to the opposite edge so the scrolling can continue without interruption.


Rename multiple files

In the file manager, select several files and press F2 to rename them all at once: the files get a common name, with an increasing number appended to each.

Switch active windows with the mouse wheel

Simply move the cursor over the taskbar and scroll the wheel.

By default, this will switch active windows, but the shortcut can be reassigned to other actions if desired.

Scrolling the mouse wheel also switches the active window in the ALT + TAB menu (not in every UI, but most of them support mouse actions).



Built-in search for .pdf documents in Okular (.pdf reader)

You can select text in a .pdf document and ask Okular to open the web browser and search for the term using one of many search engines. I often use Wikipedia, Google, DuckDuckGo, or YouTube for additional information on technical terms.


I have already mentioned the multiple choices available for the appearance of the ALT + TAB window switcher. The very same applies almost anywhere on the system. For example, the default is a taskbar at the bottom of the screen.



The position of the taskbar and the items in it can also be rearranged without restrictions. You can even remove some items, like the start menu button, application launch icons, or the clock, and place other items there. Many people prefer the taskbar at the top of the screen, or a dock at the bottom. The most popular dock is Latte Dock.




Besides the layout, the color theme can be changed too. Many are included by default, but many more can be installed manually. Here are screenshots of some nice and popular themes for the KDE desktop (Kubuntu). Note that popular color themes are available for other desktop environments too.


Many mouse cursor and icon themes are available. Some examples:

You can change the color theme in Windows too. There are a number of pre-installed themes to choose from, and you can manually install more. You can also change the cursor and icon themes.


Task automation

Advanced users don't want to waste time repeating tasks, so they often automate them. For example, I don't want to manually increase the fan speed and the clock speed of the CPU and GPU while gaming, so I modified the Steam app launcher icon.

The modified behavior is:

  • open the Steam app
  • increase the clock speed of the CPU
  • increase the fan speed of the CPU
  • increase the clock speed of the GPU
  • increase the fan speed of the GPU
  • wait for Steam to close
  • when Steam is closed, revert all the changes to default

This way I have a nice, quiet PC while working, but I squeeze out every bit of power for gaming by preventing overheating and forcing the fastest clock speed available.
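The launcher modification above boils down to a small wrapper script. Here is a minimal sketch; the `boost`/`restore` bodies are placeholders, since the real tuning commands (cpupower, nvidia-settings, fan-control tools, etc.) depend entirely on your hardware and drivers:

```shell
#!/bin/bash
# Hypothetical launcher wrapper: tune the hardware, run the app, revert on exit.
# Replace the echo placeholders with your real clock/fan tuning commands.

boost() {
    echo "boost: raising CPU/GPU clock and fan speeds"
}

restore() {
    echo "restore: reverting clock and fan speeds to defaults"
}

boost
trap restore EXIT   # revert even if the app crashes or is killed
"$@"                # run the real app, e.g. steam; blocks until it exits
```

Point the Steam launcher icon at something like `wrapper.sh steam` and the tuning is applied and reverted automatically around each gaming session.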

Do you want to read and learn more about bash scripts and automating tasks in the Linux shell? If so, please reply in the comments so I can prepare a post dedicated to this topic.



Conclusion on Windows vs Ubuntu:

As you can see, Ubuntu and other Linux-based operating systems, as compared to Windows, are:

  • Safer
  • Easier for installing and updating apps
  • Easier for downgrading apps if something breaks after an update
  • A lot more customizable
  • Better UX
  • Better for servers
  • Worse for gaming
  • Beginner-friendly
  • Free and open source
  • More respectful of your privacy and data
  • Easier to maintain
  • Free of annoying pop-up messages
  • Able to run on old PCs that cannot run Windows smoothly, while staying secure with all of the latest updates
  • and a lot more.

Kubuntu is my favorite of all the Ubuntu-based operating systems. I cannot point out any feature as a favorite because I like all of them. Everything mentioned above is part of my daily workflow.

Now that you know all of this, it is worth trying them out. I was skeptical at first, but once I built my flow and learned how to utilize these features, I could do everything faster, with fewer keystrokes. Most importantly, I now have a nicely organized desktop that helps minimize brain fatigue while doing my job.

Kubuntu is a great distro to switch to if you're coming from Windows. The two have quite similar UIs, and Kubuntu has all the features Windows has, plus more.


Published in GNU/Linux Rules!



What is a multi-user operating system? An OS that allows multiple people to use the computer at the same time, without affecting each other's files, is a multi-user OS. Linux belongs to this category: it can have multiple users and groups, each with their own personal files and preferences. This article will help you with the following tasks.



  • Managing users ( create/edit/delete accounts, suspend accounts )
  • Managing user passwords ( set password policies, expiration, further modifications )
  • Managing groups ( create/delete user groups )


In this article we will cover the most useful Linux commands for these tasks, along with their syntax.

How to create a user


1) useradd : Add a user


syntax : useradd <username>

eg : We will create a user named "jesica". The command is useradd jesica. First, I switch to the root user with the sudo su command, as I am a sudo user.


You can see that when we created the user from the root account, it just added the user without asking for a password for the newly created user. So now we will create a password for the user jesica.



2) passwd : set a password for users


syntax : passwd <username>


Here, I set a password for jesica. I set the password to "jesica" as well; you can use your own. The password you type will not be displayed, for security reasons. Since my password has only 6 characters, we get a message saying the password is shorter than 8 characters. That is a password policy; we will discuss policies later in this article.


* Now we have created a new user with the useradd command and set a password with the passwd command. This is done on CentOS, but on some other Linux distributions the adduser command is used instead of useradd.

 * If you are a normal user, you have to become a superuser to add a new user. So you have to run the commands as sudo useradd and sudo passwd.


Where do all of these users reside?

We discussed this in the "Linux File System Hierarchy" article. While /root is the root user's home directory, normal users' home directories live under /home. All user profiles are stored inside the /home directory. You can use the command ls /home to check which users currently exist on your OS. Check the image below, which shows the users on my OS.
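Besides listing /home, you can query accounts directly. As a quick sketch, getent reads the same data as /etc/passwd (and also covers accounts coming from LDAP/NIS):

```shell
# Print the username and home directory of an account.
# The root account always exists and its home directory is /root.
getent passwd root | cut -d: -f1,6   # prints: root:/root
```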




What is /etc/passwd file ?


When you create a user with the useradd command without any options, several configuration files are changed. They are:


  1. /etc/passwd
  2. /etc/shadow
  3. /etc/group
  4. /etc/gshadow


Output of the above files are as below according to my OS.


1. /etc/passwd file




2. /etc/shadow file



3. /etc/group file



When we created a new user with the useradd command without any options, reasonable defaults were set for every field of the new user's record in the /etc/passwd file. /etc/passwd is just a text file containing useful information about the users: username, user ID, group ID, home directory path, shell, etc.


Let's break down the fields of an /etc/passwd record, eg : student:x:1000:1000:student:/home/student:/bin/bash


1. student : This is the username. To login we use this name.


2. x : The password field. The actual encrypted password is stored in the /etc/shadow file; the x is just a placeholder. You can see the password record for user student in the /etc/shadow file in the image above.


3. 1000 : The user ID. Each and every user has a UID. It is zero for the root user, 1-99 is for predefined user accounts, and 100-999 is for system/administrative accounts. Normal users have user IDs starting from 1000. Extra: you can also use the id command to view user details.


4. 1000 : Primary group ID ( GID ). See the /etc/group file shown above.

5. student : Comment field

6. /home/student : User's home directory

7. /bin/bash : The shell used by the user
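The seven colon-separated fields above can be pulled apart with standard shell tools. A small sketch using the sample record from this article:

```shell
# Split the sample /etc/passwd record into its seven fields.
record='student:x:1000:1000:student:/home/student:/bin/bash'
IFS=: read -r user pass uid gid comment home shell <<< "$record"
echo "user=$user uid=$uid home=$home shell=$shell"
# prints: user=student uid=1000 home=/home/student shell=/bin/bash
```

The same `IFS=:` trick works line by line over the real file, which is handy for quick one-off reports.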



* Summary of the above


  • When a user is created, a new profile is created in /home/username by default
  • Hidden files like .bashrc , .bash_profile and .bash_logout are copied to the user's home directory. The user's environment variables are set by those hidden files; they will be covered in future articles.
  • A separate group with the user's name is created for each user


Useradd command with some options


1.) Controlling whether the home directory is created



If the home directory was accidentally not created, useradd -m <username> creates the user together with the home directory. If you want to create a user without a home directory, use the -M option: useradd -M panda.


2.) If you want to use a different home directory



In the above command, you use useradd with the -d option to change the default home directory path; /boo is the new home directory, and the username comes last: useradd -d /boo boo. As you can see in the image below, the /etc/passwd file has a different home directory entry for user boo, because we changed its home directory.



3.) Add a comment for the user at creation time, useradd -c "<comment>" <username>



In /etc/passwd file :




4.) Create a user with your own UID, useradd -u <uid> <username>

5.) Create a user with your own UID and GID, useradd -u <uid> -g <gid> <username>

6.) Create a user adding them to different groups, useradd -G <group1>,<group2> <username>. There can be one or more groups, separated by commas.

7.) Create a user but disable shell login, useradd -s /sbin/nologin <username>. With this command we disable shell interaction for the user, but the account remains active.


How to remove an account


3. userdel : Remove a user


syntax : userdel <username>


eg : userdel -r <username>


* When deleting a user, go with the -r option. Why? With -r, the user is removed along with their home directory. Without -r, the user's home directory is not deleted.


How to modify a user account


4. usermod : Modify a user


syntax : usermod [options] <username>


* Here we can use all the options used with the useradd command. Below are some options not discussed above.


1.) How to change the user's name


usermod -l <new_username> <old_username>


2.) To lock a user


usermod -L <username>


3.) To unlock a user


usermod -U <username>


4.) To change the group of a user


usermod -G <group> <username>


5.) To append a group to a user


usermod -aG <group> <username>


* Appending means adding groups without removing the already existing ones. If we use -G without -a, the user is removed from their existing supplementary groups and joined to the new ones. This distinction matters for primary groups versus supplementary groups.
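After any usermod group change, it's worth verifying the result with the id command. A sketch using root, which exists on every system:

```shell
# id shows a user's UID, primary group, and all group memberships.
id -u root    # numeric UID; prints 0 for root
id -gn root   # name of the primary group; prints root
id -Gn root   # names of all groups the user belongs to
```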


What is a group ?


A group is a collection of one or more users in a Linux OS. Just like users, groups have a group name and an ID ( GID ). The group details can be found in the /etc/group file. There are two main types of groups in a Linux OS: primary groups and supplementary groups. Every user, once created, gets a new group with the user's account name; that is the primary group. Supplementary groups are groups having one or more users as members.
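Group records in /etc/group have four colon-separated fields: group name, a password placeholder, the GID, and a comma-separated member list. A quick sketch of looking one up:

```shell
# Look up a group record; the root group always exists with GID 0.
getent group root | cut -d: -f1,3   # prints: root:0
```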


How to create a group


5. groupadd : create a Linux group


syntax : groupadd <groupname>


Few examples


1.) To create a group named "student"


groupadd student


2.) Define a different group id ( GID )


groupadd -g 5000 student


How to modify an existing group


6. groupmod : modify a group


syntax : groupmod [options] <groupname>


To change the name of a group: groupmod -n <new_name> <old_name>. To change the group ID: groupmod -g <gid> <groupname>.



How to delete an existing group


7. groupdel : delete a group


syntax : groupdel <groupname>


How to manage user passwords using password policy ?



As we discussed above, while the /etc/passwd file stores user details, the /etc/shadow file stores the users' password details (an image of /etc/shadow is attached above). Here we use a concept called password aging, and the chage command to edit the password aging policy. Look at the image below.


Refer to the image above; the options are as follows.


  • chage -d 0 <username> : Forcefully require the user to change the password at the next login.
  • chage -E Year-Month-Date <username> : Expire a user account on the given date ( it should be in the format YYYY-MM-DD ).
  • chage -M 90 <username> : Set a policy requiring the password to be renewed every 90 days.
  • chage -m 7 <username> : Require the user to wait at least 7 days before changing the password again.


* Inactive days define how many days after password expiration the account remains usable. If the user doesn't change the password within the inactive period, the account expires.


chage -l <username> : Display the user's current password policy settings.


The default values for all of the above ( password expiration days, inactive days, etc. ) are in the configuration file /etc/login.defs, a plain text file. User account ID and group account ID ranges can also be configured there. You can change the values in /etc/login.defs as your requirements dictate.
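You can inspect those aging defaults directly. A sketch (the exact values vary by distribution):

```shell
# Show the system-wide password-aging defaults that apply to new accounts.
grep -E '^PASS_(MAX_DAYS|MIN_DAYS|WARN_AGE)' /etc/login.defs
```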



Now you have learned the essentials of Linux users and groups. This is not a small topic; there are many more commands to explore under it.


You can see our previous posts on related topics



Published in GNU/Linux Rules!


Gaming on Linux has evolved a lot in the past few years. Now there are dozens of distros pre-optimized for gaming and gamers. We tested them all and hand-picked the best. There are a few other articles and lists of this type out there, but they don't really go into detail and they are pretty outdated. This is an up-to-date list with all the info you'd need.

How to choose the best Linux distro for gaming

Before we start listing out the best distros, you’d still need to choose one of them. Here are a few guidelines you can use to help you choose the right one for you:

  • Any Linux distro can be used for gaming. You can install Linux games on any distro, or you can use tools like PlayOnLinux, Wine, Steam, and a bunch of other emulators. At the end of the day, it all boils down to which one you personally prefer. Try them out. Use a live CD (flash drive) image and test a distro without even installing it. Watch some videos, check some screenshots, read some reviews…
  • The main feature that matters when choosing a distro for gaming is support for drivers. Most distros support the latest (and even oldest) hardware out of the box. Even if they don’t, you can still manually find and install the driver yourself on any distro. If you’re really unsure, you can just google some info for your hardware and see if the distro supports it out of the box.
  • Second most important feature is update frequency. Is it a rolling release distro like Manjaro (very frequent updates without a schedule, always the latest software)? Or is it a point release distro like Ubuntu (scheduled updates, not always the latest software). If you prefer to always use the latest versions of any software and apps, go with a rolling release distro. That way, you’ll always get the latest driver updates and you’ll most likely already have the latest drivers for your new GPU/CPU. If you’d like to stick with what you know and use a more stable OS, go with a point release distro.
  • Previous Linux experience should also be a deciding factor. Have you used a Linux VPS before? Which distro did you use for your server? If it was Ubuntu, then you should choose the desktop version of Ubuntu since you’ll be more familiar with it. If you’ve used CentOS for your server, go with a Fedora-based distro for gaming. Did you use an LXDE distro? Go with a gaming distro that uses LXDE.



Now, let’s move on to the main part.

Best Linux Distro for Gaming

Here are 8 examples to help you choose the best Linux distro for you:

1. SteamOS

There’s a reason why SteamOS is always the first on every Linux gaming distro list. It’s designed with gaming in mind. It comes pre-installed with Steam and it’s based on Debian. SteamOS is built, designed and maintained by Valve. By default, SteamOS only has Steam installed, but you can activate the “desktop mode” and you’ll get a fully-featured desktop OS where you can run other applications besides Steam and games. It has everything set up out of the box, so you don’t need to install or configure anything to play on Steam, which is why this is the most recommended distro for beginners and Linux gamers.

SteamOS hardware requirements

However, if you have an older machine, SteamOS is not recommended, as it has fairly high hardware requirements:

  • Intel or AMD 64-bit capable processor
  • 4GB or more RAM
  • 250GB or larger disk
  • NVIDIA, Intel, or AMD graphics card

SteamOS facts and features

Linux and Steam for gamers.

  • Steam is preinstalled out of the box
  • Ready to play games without needing to install any additional software
  • Free and open source (apart from Steam itself, which is proprietary)
  • Support for many graphic cards, controllers and other gaming-related hardware

Visit their official website for download/installation instructions and FAQ:
Download SteamOS

Let’s move on to the next distro on our list:

2. Ubuntu GamePack

It's not the default Ubuntu, but a distro based on Ubuntu. You can still use the default Ubuntu and install PlayOnLinux, Wine, Steam, or any other game you'd want to, but it would not be as optimized for gaming as Ubuntu GamePack is.

Ubuntu GamePack hardware requirements

Quite similar to the default Ubuntu, this distro requires:

  • 2 GHz or more processor (64-bit recommended)
  • 1GB RAM or more
  • 9GB disk (the more the better)
  • VGA capable of 1024×768 screen resolution. Intel HD graphics/AMD Radeon 8500 for Steam games and any other GPU for other games.

Ubuntu GamePack facts and features

Ubuntu for gamers.

  • Pre-installed with Lutris, PlayOnLinux, Wine, and Steam
  • Great hardware drivers support
  • Low(er) hardware requirements
  • Free and open source OS
  • Supports Flash and Java (great for online, browser-based games)

If you’re already familiar with Ubuntu, go with this distro.

Visit their official website for download/installation instructions and FAQ:
Download Ubuntu GamePack


The second most popular Linux distro used for desktop computers is Fedora. Luckily, Fedora also has a gaming flavor (spin):

3. Fedora – Games Spin

Fedora – Games Spin comes with thousands of games already pre-installed and ready to play. It doesn't support as much hardware as some other distros, and it doesn't come with Wine/Steam pre-installed, which is why it isn't recommended for everyone. However, if you're already familiar with Fedora, or if you like the XFCE desktop environment, this distro could be perfect for you.

Fedora Games Spin hardware requirements

Similar to the Fedora desktop distro:

  • 2 GHz or more processor (64-bit recommended)
  • 1GB RAM or more
  • 10GB disk (the more the better)
  • Intel HD graphics/AMD Radeon 8500 for Steam games and any other GPU for other games.

Fedora Games Spin facts and features

For Fedora users.

  • Has thousands of games already pre-installed
  • Stable, but not with the latest software and doesn’t have pre-installed drivers for all hardware
  • Steam and Wine are not pre-installed
  • Free and open source
  • Uses the XFCE Desktop Environment

If you’ve used Fedora (or CentOS) before, either for a server or for your desktop computer, try this Fedora spin.

Visit their official website for download/installation instructions and FAQ:
Download Fedora – Games Spin


Moving on to the next one:

4. SparkyLinux – GameOver Edition

SparkyLinux is a Linux distribution built on the "testing" branch of Debian. It uses the LXDE desktop environment and has everything you'd need pre-installed.

SparkyLinux – GameOver Edition hardware requirements

A very lightweight distro.

  • CPU i586 / amd64
  • 256 MB of RAM (some games need more than that; 500-1000MB recommended)
  • 20 GB of space for installation on a hard drive (30GB recommended)

So just about any old PC/laptop can run it without any issues.

SparkyLinux – GameOver Edition facts and features

Ready out of the box.

  • Has everything you’d need pre-installed out of the box: Wine, PlayOnLinux, Steam, etc.
  • Many open source Linux games pre-installed
  • Emulators and tools for easily installing emulators
  • Free and open source

If you’ve used an LXDE Linux distro before and you want everything pre-installed, go with SparkyLinux – GameOver Edition.

Visit their official website for download/installation instructions and FAQ:

Download SparkyLinux – GameOver Edition


Gaming doesn’t have to be all about the latest bleeding-edge titles. You may be into retro games, which is where this distro comes into play:


5. Lakka

Although it’s based on Linux (kernel), it doesn’t have any desktop environment and you can’t really use it for anything other than turning a computer into a retro gaming console.

Lakka hardware requirements

You can turn any computer into a console since Lakka doesn’t have a lot of requirements. You can even use a Raspberry Pi to run Lakka. It’s a very lightweight OS that can run on just about anything.

Lakka facts and features

For retro gamers.

  • Pre-installed and optimized with various emulators
  • Very lightweight with minimum hardware requirements
  • Beautiful, easy-to-use UI
  • Free and open source with various retro games to choose from

Visit their official website for download/installation instructions and FAQ:
Download Lakka

Game Drift Linux is no longer maintained. Though you can still use it, we wouldn’t recommend it. We’ll leave its spot here for archival purposes, but there are actively maintained alternatives if you’re looking for a gaming distro.

Want to play Windows games on a Linux distro without too many configurations?

6. Game Drift Linux

Based on Ubuntu, this distro is perfect for beginners who have previously used Ubuntu. It’s easy to install, and everything works out of the box.

Game Drift Linux hardware requirements

Although not the most lightweight Linux distro for gaming, it doesn’t require much. At least not as much as SteamOS.

  • 1-2 GHz processor (32 or 64 bit)
  • 1-2 GB RAM
  • 4 GB hard disk drive for Game Drift Linux (excluding games)
  • ATI, NVidia or Intel graphics adapter suitable for games

If you can run Ubuntu desktop, you can run Game Drift Linux.

Game Drift Linux facts and features

You can play A LOT of Windows games on Game Drift Linux. It has all the tools you need pre-installed.

  • Has a game store with free and premium games – all run perfectly on Game Drift Linux. High-quality games only
  • You can play more than 1200 Windows games (thanks to CrossOver Games technology)
  • The distro itself is free, but you need to purchase an activation key for CrossOver Games in order to play those Windows games
  • Based on Ubuntu

The game store is great – a wide choice of quality games that you can install with a single click.

Visit their official website for download/installation instructions and FAQ:

Download Game Drift Linux

Need a full-featured Linux distro for gaming, media, browsing and general use?

7. Solus

Solus recently became a rolling release distro, which means you’ll always get the latest software and updates. Solus looks great, especially with the Budgie desktop environment, and it has all the features you need in an OS for gaming, media playback, browsing, and general use. There’s an official Steam integration for Solus which greatly helps with installing and configuring Steam on your Linux system. It’s based on the Linux kernel, but it’s independent of any other distro like Ubuntu or Fedora.

Solus hardware requirements

Solus is not the most lightweight distro on this list:

  • Intel/AMD CPU (64 bit recommended). ARM-based processors won’t work
  • 2GB RAM Minimum, 4GB+ recommended
  • 10GB+ storage
  • ATI, NVidia or Intel GPU suitable for games

Requires a more powerful machine.

Solus facts and features

Everything built into one modern system.

  • Has different desktop environments to choose from: Budgie, Mate, and GNOME
  • Modern – has notification features
  • Free and open source
  • Can be used for everything – including gaming, browsing, general home use etc. Everything’s set up out of the box
  • Rolling release – you’ll get the latest updates and latest software all the time.

Solus looks great. One of the best looking Linux distros out there today, especially with its flagship desktop environment Budgie.

Visit their official website for download/installation instructions and FAQ:

Download Solus

Want to use the Manjaro (rolling release) distro?

8. Manjaro Gaming Edition (mGAMe)

mGAMe is based on Manjaro, which in turn is based on Arch Linux. It’s a rolling-release gaming distro with almost everything you need pre-installed: PlayOnLinux, Lutris, Minecraft, editing tools, and a bunch of emulators are already there. You can easily enable the “living room mode”, in which you won’t need a mouse – you can do everything with your controller or keyboard.

mGAMe hardware requirements

Not the most lightweight distro here, but lighter than Solus:

  • At least 1GHz processor
  • At least 1GB RAM
  • At Least 30GB storage
  • ATI, NVidia or Intel GPU suitable for games and HD

If your hardware isn’t powerful enough to run Solus but you still want a rolling-release distro, go with mGAMe.

mGAMe facts and features

Everything’s pre-installed and ready to play.

  • Pre-installed Software and Emulators list: Audacity, KdenLIVE, Lutris, Minecraft, Minetest, Mumble, OBS Studio, OpenShot, PlayOnLinux, Wine, DeSmuME, Dolphin Emulator (64-Bit only), DOSBox, Fceux, Kega Fusion, PCSXR, PCSX2, PPSSPP, RetroArch, Stella, VBA-M, Yabause, ZSNES…
  • Steam is not pre-installed, you’ll have to install it manually.
  • XFCE desktop environment
  • Rolling release – you’ll get the latest updates and latest software all the time.

A great, more lightweight rolling-release distro for gaming.

Visit their official website for download/installation instructions and FAQ:

Download mGAMe

Wanna turn any computer into a Linux gaming machine?

9. SuperGamer

The new v4 of SuperGamer was recently released and no longer includes some open source games pre-installed, but you can easily install them, or install an app like Steam. The distro is optimized for gaming and ready to use via a live DVD/USB. It’s a great distro for testing out a machine.

SuperGamer hardware requirements

The distro is based on Ubuntu and Linux Lite and only runs on 64-bit hardware.

  • 64-bit Intel/AMD CPU. ARM-based processors won’t work
  • 1.5GB RAM Minimum
  • 2GB+ DVD/Flash Drive
  • ATI, NVidia or Intel GPU suitable for games

Ready to use, no installation needed.

SuperGamer facts and features

A live Linux gaming distro.

  • Based on Ubuntu 18.04 and Linux Lite.
  • Free and open source.
  • Optimized for gaming.

Visit their official website for download/installation instructions and support via their forums:

Download SuperGamer

Feeling nostalgic for the good old games? Play them without installing a distro.

10. batocera.linux

batocera.linux is another live Linux gaming distro similar to Lakka.tv that you can use for retro gaming. Easy to install, easy to set up, and comes pre-installed with everything you need. A great way to go back in time and play the good old retro games.

batocera.linux hardware requirements

A live Linux distro ready for retro gaming with an active community.

  • Any supported CPU, at least 2.4GHz for some games.
  • 512MB RAM
  • 2GB+ DVD/Flash Drive
  • ATI, NVidia or Intel GPU supported by Linux

Ready to play retro games, no installation needed.

batocera.linux facts and features

A live Linux gaming distro.

  • Can run on a Raspberry Pi or any other nano PC
  • Free
  • Optimized for retro gaming (more than 50 consoles, including Dreamcast, Wii, and PS2)
  • Fully controllable from a pad
  • You don’t need to stop it and shut it down properly, it behaves like a real console
  • You can run it from a USB flash drive or SD card
  • Pre-installed with Kodi
  • Has exclusive builds for Odroid devices

Visit their official website for download/installation instructions and support via their forums:

Download batocera.linux

Honorable mentions

Here’s something extra you can explore when it comes to Linux gaming:

  • live.linuX-gamers.net – a live Linux gaming distro – hasn’t been updated since 2011
  • LinuxConsole – lightweight gaming distro with a few games pre-installed
  • RetroArch – a frontend for emulators, game engines, and media players. Great for retro games and nano PCs.
  • RetroPie – turns your nano PC into a retro gaming machine

Which distro do you use? What kind of a Linux gaming setup do you have? Did we miss something? Leave a comment below!


Published in GNU/Linux Rules!


Linux-based operating systems (often called Linux distributions, or just distros) have been popular among programmers and developers since they first appeared in the 90s. The Linux kernel itself is designed to be flexible and open for modifications and contributions, so it can run on almost any hardware. The same principle applies to almost the whole software stack above the kernel that makes up a Linux distribution as a complete product. In general, it is designed by programmers, for programmers, and is freely available to everyone.




All of the Linux-based distributions share common code – the Linux Kernel itself, but many different methods of software distribution to the end users appeared. Some of them tend to provide a stable environment while others tend to provide the very latest software available all the time.

Stable vs Rolling Distros for Development

Linux-based distributions that focus on stability achieve it by freezing the software as much as possible. An updated version of a package is shipped to end users only in critical situations – mostly security updates and bug fixes – and no new features are added until the release reaches its end of life. In short: if it’s not broken, don’t fix it.

Other Linux distributions tend to offer the latest of everything; “latest must be greatest” is in their genes. The term rolling distribution is usually used to describe them, and it’s common to receive updates every hour or two. Although this sounds like an awesome concept, in practice it tends to bring some instability with it, and in some cases it can even break the system.

There are pros and cons to both. Stable distributions prefer safety over features, which is important if you’re working on a product that must run 24/7 and be error-free. They are often used on servers, in data centers, and by home users. Developers choose this type of distribution when they need to provide long-term support for a product, or when developing it will take an extended amount of time, like 5 or more years.

Every program a developer writes relies on features offered by other programs – its dependencies. A stable environment guarantees that no bugs will appear overnight due to changes in those dependencies. In other words, if there is a bug, you don’t waste time looking for it anywhere other than in your own code. This saves a lot of time and frustration.

The downside of a stable environment is that it usually lacks all the cool new features. Sometimes this forces developers to take a harder route to achieve the desired result. It can also mean that the end product runs slower in production due to missing optimizations in some dependency, or is less secure due to unpatched exploits, etc.

Rolling distributions, by contrast, lean towards new features and bleeding-edge software. They are preferred by programmers and developers working on continuous integration – for example, when a program is tightly coupled with many dependencies.

Being informed of every change in your dependencies forces you to resolve minor conflicts on a daily basis. This also gives you the opportunity to immediately inform third parties that they are breaking your code. Sometimes it is much easier for them to release a fix on their side instead of on yours. Asking them to undo all the patches released over the last few years, because you have only just noticed a problem, is a no-go – they simply won’t accept the request, so everything would have to be fixed on your side.

Also, you have a chance to develop your product with the latest and greatest tools available at the time. This can improve the performance, reduce the cost, can introduce a new and easier way of doing things (new API) etc.

It’s worth noting that a constant stream of updates can break things. Most of the time the problem is packages not being updated simultaneously: some apps are meant to work with an exact version of another app, and breaking that pairing creates undesired behavior. In that case both apps must be updated at the same time, which a rolling release model doesn’t guarantee.

Best Linux Distros for Programming Compared

Now, to the main part, choosing the best Linux distro for you. This is an overview and comparison of the best Linux distros for programming.


1. Ubuntu


Ubuntu is the most popular Linux Distribution among all of them. It is used by programmers and most of the home users too.

There is one major release every two years, called an LTS (Long Term Support) release. LTS releases are stable and receive only bug fixes and security updates for the next 5 years. Since the release model prefers stability, the underlying layers stay mostly unchanged during this 5-year period. The latest LTS release as of writing is Ubuntu 18.04 LTS.

There is also one non-LTS release every 6 months, supported for 9 months. These releases are not considered stable; big and significant changes can occur in each one, and sometimes they carry packages that break dependencies with the previous release. They are like a playground for merging software and continuously hunting for incompatibilities, in order to provide the best-fit solution for the next LTS release. Developing software in such an unpredictable environment doesn’t sound clever, but in real life it is not so frightening: non-LTS releases rarely crash in practice, and many home users run them as a daily driver with no issues at all. The benefit is having more recent software than what’s available in the latest LTS release.

As you can see, it’s a mix of everything: you can have 5 years of stability, or stability for 9 months, depending on what fits best. Even mixing packages is possible, but not recommended. A user of an LTS release can obtain a newer version of some software from a more recent non-LTS release. This is handy as a one-time workaround, but it’s like a ticking time bomb waiting to break the system: pulling in recent packages will work until some incompatibility occurs. It’s better to switch to a non-LTS version instead.

It is worth mentioning that Ubuntu is where developers and home users meet. That makes Ubuntu the natural starting point for a company offering a product or service on a Linux-based operating system: here they find an environment that is stable and familiar to developers, but also full of target users. In addition, it’s best to develop software in the same native environment in which the product or service will be deployed and used in production. Sounds like a perfect balance.

Ubuntu is also one of the most popular Linux distros for servers, and most people use it as their main distro for cloud hosting.

Some of the companies that love Ubuntu and offer their products or services on Ubuntu as a first choice are Nvidia, Google, Dell, STMicroelectronics, etc. Most companies that sell Linux laptops offer Ubuntu as the first choice for a pre-installed distro.

  • Nvidia offers the CUDA toolkit natively on Ubuntu as a first choice. Only the LTS releases are officially tested and supported, so Ubuntu is the best fit if you rely on CUDA for your project. But it’s not exclusive: the CUDA toolkit is available on non-LTS releases and many other Linux-based distributions, just without support or guarantees that things will behave as expected.
  • Google is the company behind Android. They support developing Android applications on Windows, Mac OS, and Linux-based distributions, and Ubuntu is their first choice: Android Studio (their IDE) and all other tools are tested on LTS releases of Ubuntu before being distributed to end users.
  • STMicroelectronics is a company producing ARM-based CPUs for embedded devices. Developing software for their CPUs is possible on Windows, Mac OS, and Linux-based distributions. They support OpenSTM32, a community developing a free and cross-platform IDE, System Workbench. Again, LTS releases of Ubuntu are their first choice for a Linux distribution.
  • Dell is known for its laptops, ultrabooks, PCs, and monitors. Their products are mostly offered with Windows pre-installed, and with Ubuntu for some of them. The Dell XPS 13 Developer Edition is small, light, fast, and beautiful, and runs an Ubuntu LTS release by default.

There are more companies that offer and use Ubuntu, but this should give you an idea of how software and development companies incorporate Ubuntu.



2. Arch Linux



Arch Linux is just the opposite of Ubuntu: a rolling release distribution. There are constantly new updates – every hour or two something new arrives on your system. For some, it’s the perfect working environment. As we mentioned earlier, this type of software distribution suits developers working on software that is tightly coupled with some or many dependencies, because they receive updated versions of those dependencies with almost no delay. But this comes at a price: the instability of the system offers no guarantees about the origins of new bugs.

Arch Linux is also hard to install. An advanced user can do it in no more than 15 minutes, but it’s almost impossible for a newcomer to succeed without help. It requires a lot of knowledge because nothing is preconfigured – there is no default, everything is custom. It’s a pure mechanism for distributing software and nothing more; it’s up to the user to install and configure things according to their personal requirements. This is why many people use Arch Linux as a lightweight distro, by installing a lightweight window manager or desktop environment and only the essential software. As you can already see, Arch Linux provides a perfectly tailored environment for every developer who knows how to utilize it.

Every Arch Linux install is unique, so each one encounters unique obstacles. This is what makes it special and loved among programmers: just by using it on a daily basis, you grow. There is a giant and thorough wiki – one of the best you can find, with very detailed and strict explanations, guides for configuring things, and encouragement to follow good practice. Its necessity becomes obvious the moment you try to install Arch (as we mentioned, it’s hard the first time if you don’t follow the wiki). Reading documentation may seem like wasting time, but it’s an essential skill for every programmer – and by reading good documentation, developers also learn how to write good documentation.

Tinkering here and there with the operating system itself will teach you how one works, so you can build your own later – an important skill to have, especially if you end up working with embedded devices in your career. Every day you can read about unexpected issues on the forum, and very clever workarounds for each of them. Just being aware of what might go wrong helps an attentive developer produce better-quality code.

The best thing about Arch Linux is its huge repository of available software. Personally, I can’t think of anything I need that is unavailable. However, because of the very different configurations among users, the quality of the provided software can be lower than expected, and it’s not unusual for users to have to get their hands dirty with minor manual interventions. That’s brilliant for improving your skills, but some may struggle with the maintenance at the beginning.

It’s worth mentioning that no devices come with Arch Linux preinstalled. It would be painful to do so, since by the time the device reaches the customer the software is out of date, and performing one giant update is very likely to break the system (while constant minor updates don’t). Even if some vendor did, advanced users would find it uncomfortable and change it anyway.


3. Fedora


Fedora is another popular Linux distribution among programmers. It sits right in the middle between Ubuntu and Arch Linux: more stable than Arch Linux, but rolling faster than Ubuntu.

There is one major release every 6 months, supported for 13 months. Basically, 13 months of a stable environment is just fine, and a 6-month gap before the next big update is fine too – no software grows so fast that this becomes a problem. It’s good even for those who want to work with the latest software in a stable environment while still doing their integration work without issues. An excellent balance, like Ubuntu, but with fewer home users.

In terms of software availability, the range is not as broad as in Ubuntu or Arch Linux. If you’re looking for proprietary software, the situation is even worse: there isn’t any official support for it. But if you work with open source software, Fedora is excellent.

The people behind Fedora embrace free and open software and do their best for it, but it’s a big no-go for anything proprietary. You won’t find Java, DVD codecs, Flash Player, etc. Of course, all of those are available in third-party repositories with looser license policies, but they’re not officially supported, so there are no guarantees against incompatibilities or misbehavior. That’s a big issue if you’re working on a project that costs money or is expected to have a big impact, because you don’t want to rely on unreliable sources – you want support. On Ubuntu, for example, companies do offer support for their proprietary software.

There are several Fedora “Spins”, which are similar to Ubuntu flavors. It’s basically Fedora with software pre-installed for a specific purpose, but the main difference is the desktop environment. We featured the Games Spin of Fedora in our Best Linux Gaming Distros list.

Don’t forget that the Fedora project is sponsored by Red Hat, whose own distribution targets the enterprise sector with paid support. Fedora is like a playground – but a good one. At some point, a Fedora release becomes the basis for a Red Hat release. Everybody benefits: big companies receive a rock-solid, stable system with years of support (from Red Hat), while casual users receive a big amount of free software and a stable environment that is more recent than Ubuntu (from Fedora).

Just like with Arch Linux, there are no devices with Fedora preinstalled, because the time between major releases is so short: in 6 months there’s no time for manufacturers to produce and sell a device.

There are hundreds of Linux distros out there, each more different than the other. Though the 3 distros we mentioned are great for developers, you may find a better fit in a different distro. For example, if you’re developing an application that’s supposed to run on a server, you may need to use a server distro like Ubuntu or CentOS. So do your research and you may find a better one for you.


Overview of The Best Tools for Programmers on Linux

No distro comes with IDEs and toolkits pre-installed out of the box – neither do Windows or Mac OS – so developers have to install them manually. Only a simple text editor like gedit or nano (a command-line text editor) can be found preinstalled. Some popular IDEs are Eclipse, Qt Creator, NetBeans, etc., but many developers dislike IDEs in general and use simpler text editors like Sublime, Atom, or Vim instead.

Eclipse is the most commonly used IDE. It supports multiple programming languages like C, C++, Java etc. Its basic features can be extended by various plugins. This allows a company to develop a complete IDE for their product just by writing a small plugin and relying on Eclipse for everything else.

Until 2016, Eclipse was actively supported by Google as the recommended IDE for Android application development. Google later migrated to the IntelliJ-based Android Studio and abandoned Eclipse, but users continued developing plugins (such as gradle-android-eclipse) that still provide an easy-to-use Android IDE based on Eclipse.

Eclipse is also the recommended IDE for CUDA development on Linux-based operating systems (Visual Studio on Windows). Nvidia distributes a slightly customized version of Eclipse called Nsight Eclipse Edition. It’s much easier for them to provide an IDE by reusing components, and it’s even easier for developers, who don’t have to endure the hassle of building custom toolchains even for simple “Hello World” examples.

System Workbench is an IDE for programming ARM-based CPUs. It’s free and built by the community, but also supported and recommended by STMicroelectronics, a company that produces ARM-based CPUs. The IDE can be downloaded standalone, or added as a plugin on top of an existing Eclipse installation.

The latest stable version of Eclipse is 4.7, named Oxygen. As Arch Linux tends to have the latest of everything, version 4.7 is available in its repositories. Ubuntu 16.04 ships the much older Eclipse 3.8, while Fedora offers version 4.7, just like Arch Linux.

Qt Creator is another very popular IDE, developed by The Qt Company as an IDE for the Qt framework. Although it targets a single C++ framework, because of the nature of the language it is commonly used for developing non-Qt applications in plain C or C++ as well. It cannot be extended like Eclipse, but it is much faster because it is written in C++. It also provides better theming options than Eclipse, blending well with the native desktop environment.

On Arch Linux users can obtain the very latest version, 4.6, while Ubuntu is stuck at version 3.5 and Fedora offers version 4.5.

NetBeans is an IDE for developing in C, C++, Java, PHP, Node.js, Angular, etc. It is most popular among PHP and web developers, and some use it for Android application development with the NBAndroid plugin. Much like Eclipse, there are many other plugins that enable better integration with various technologies like WordPress, Ruby, and Ruby on Rails.

The latest stable version of NetBeans is 8.2, and the same is available in the Arch Linux repositories. Ubuntu users can obtain version 8.1, while Fedora users must do a manual installation by downloading NetBeans from the website and, if necessary, manually resolving conflicts and dependency issues. Just a warning: officially supported software is usually of better quality.

Sublime is one of the most famous text editors available. It is mature, supports extensions, and has autocomplete and code highlighting for almost any kind of programming. Even though it is just a text editor, its extensions can easily add every feature expected from a modern IDE – those are its main selling points. Once you get familiar with it, you’ll use it for everything.

Ubuntu doesn’t ship Sublime in its repositories, but the Sublime developers offer a packaged version in their own repository – just follow the instructions on their website to obtain the very latest version. Arch Linux also doesn’t distribute it in the official repositories, but it is available from the AUR (packaged by users, also unofficially). Fedora requires a manual installation too; see their website for instructions.

The Atom text editor is an alternative to Sublime. It is free as in freedom and is based on Electron. Although it has much the same feature set and capabilities, it tends to be heavier than Sublime, so some developers simply reject it.

As with Sublime, Atom on Ubuntu and Fedora can be installed manually by following the instructions on the website, while Arch Linux distributes version 1.25.

Developers who do most of their work in a terminal use the Vim text editor, especially on servers. It is a free and extensible command-line text editor – an improved version of the older vi. You can use Vim for developing in any programming language or toolkit, and with additional plugins it is the most customizable of them all. It is also the most keyboard-friendly text editor available: some developers find that using the mouse hurts their shoulders and muscles, so being able to do everything with only a keyboard is a blessing.

Vim behaves like a person: standalone it is minimal, but plugins make it evolve into anything. Many actions feel like having a conversation with the editor. For example, to delete the next three words after the cursor you just type “d3w” (delete 3 words), or to move the cursor 4 rows down you type “4j”. There are many, many more similar shortcuts that let developers do things faster and more easily than in other text editors or IDEs.

Both Arch Linux and Fedora distribute version 8.0 of Vim, while Ubuntu ships version 7.4.


Introduction to the Linux Shell for Development

Besides IDEs and text editors, Linux-based operating systems are popular because of the shell. The Bourne Again Shell, known as Bash, comes preinstalled on every major Linux distribution, but it’s not the only one available – there are zsh, tcsh, ksh, etc. They all do the same job, with minor differences that are beyond the scope of this introduction. The important thing about shells is that they are an environment for interacting with the system, and they are often used for automating things.

Some tasks in the development lifecycle are repetitive and require synchronization, in the sense of executing the next task only when the previous one is done. This is very common in embedded development: build the kernel, wait until it’s done to start building the image, wait again to start transferring the image to the device, and wait one more time until the transfer completes to finally boot the device.

The point is that no one wants to sit in front of a PC or server waiting for a job to finish, then manually starting the next one. It’s nicer for developers to just write the code and let something else manage task execution in the right order – and spare their eyes from constantly watching the execution status. A simple five-line shell script can automate all of this. The shell can even send a notification to your smartphone when all the jobs are done, or if there is a crash.
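
A sequence like that can be sketched in a few lines. The build_kernel, build_image, and flash_device functions below are hypothetical stubs standing in for the real commands (make, mkimage, scp, and so on):

```shell
#!/bin/sh
set -e  # abort the pipeline as soon as any step fails

# Hypothetical stubs; in a real project these would invoke the actual tools.
build_kernel() { echo "kernel built"; }
build_image()  { echo "image built"; }
flash_device() { echo "image transferred"; }

build_kernel    # step 1
build_image     # starts only once step 1 has succeeded
flash_device    # starts only once step 2 has succeeded

STATUS=done
echo "all steps finished"  # swap in notify-send, mail, or a push-notification call
```

Because of `set -e`, a failure in any step stops the pipeline immediately instead of silently running the next command against a broken build.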

Another use case where the shell matters is automating crash handling. Knowing that each build generates text output, and that this output can be redirected into a shell script, we can automate the crash-handling process. For example, if compiling fails due to a missing header, a shell script can search the file system for the header’s location and check whether that location is included in our build. If not, it can alter the content of a single file to add the header’s path, inform the developer of the crash and the actions taken, and retry the compile. This is handy for long-lasting projects.
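
A minimal sketch of that idea, assuming the familiar GCC “fatal error” message format. The log line and the header name are invented for illustration, and the locate-and-retry step is left commented out:

```shell
#!/bin/sh
# Simulated compiler output; in practice this would come from the build log.
log="fatal error: myproj.h: No such file or directory"

# Pull the missing header's name out of the error line.
header=$(printf '%s\n' "$log" | sed -n 's/.*fatal error: \([^:]*\):.*/\1/p')
echo "missing header: $header"

# In a real script you would now locate the header and retry the build, e.g.:
#   dir=$(dirname "$(find /usr/include -name "$header" | head -n 1)")
#   [ -n "$dir" ] && make CFLAGS="-I$dir"
```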


Windows vs Linux for Programmers

Nevertheless, developing on Windows requires installing more additional software. For example, Android development requires device drivers. Drivers sometimes crash, cannot be installed, or do not work on recent versions of Windows (particularly for older devices). But a good programmer keeps many devices around and tests the program on each one of them, and this can complicate the setup of the working environment quite a bit.

On Linux distros, this is a very smooth process. The drivers are already present in the Linux kernel (with just a few exceptions), so no additional installation is required besides the IDE. Just plug in a device and you are ready to go. As smooth as that.

Another use case is when developers have to support multiple products at the same time. This is fine until two products require software that cannot coexist: for example, version 3.1 of a given program for one project and version 4.2 of the same program for the other.

On Windows, installing a newer version often requires deleting the older one. And even when the older version is not automatically removed, the environment variables are modified automatically, so pulling the wrong dependency, or pulling a dependency twice, can occur.

On Linux distros, this is easily resolved. Extract one version into one folder and the other version into another folder, and you are halfway done. The second part is either to change the global environment variable to point to only one path, or to override the variable with a tighter scope, so that the same variable has a different value for different compilations. Isn't this great? Besides resolving the coexistence problem, a developer can even run two compiles at the same time without issues.
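Here is a minimal sketch of the scoped-variable approach. The tool, its versions, and the folder names are all hypothetical placeholders created under /tmp for illustration; the point is that the `PATH` override applies to a single command only, leaving the global environment untouched.

```shell
#!/bin/sh
# Sketch: two versions of a hypothetical tool living side by side,
# selected per command by scoping PATH instead of changing it globally.
mkdir -p /tmp/tool-3.1 /tmp/tool-4.2
printf '#!/bin/sh\necho "version 3.1"\n' > /tmp/tool-3.1/mytool
printf '#!/bin/sh\necho "version 4.2"\n' > /tmp/tool-4.2/mytool
chmod +x /tmp/tool-3.1/mytool /tmp/tool-4.2/mytool

# Each override lives only for the one command, so two builds can pick
# different tool versions at the same time.
env PATH="/tmp/tool-3.1:$PATH" mytool
env PATH="/tmp/tool-4.2:$PATH" mytool
```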


Conclusion – Best Linux Distro for Programmers

In general, Linux-based operating systems offer an excellent environment for developers. It just takes some time to learn the cool stuff. No matter which distribution you choose, you won't regret it. Just pay attention to the method of software distribution, and choose what best suits you and the projects you are working on. If you are not sure, just choose Ubuntu; overall, it is the best-balanced Linux distribution.



Published in GNU/Linux Rules!
Wednesday, 08 May 2019 23:04

Using rsync to back up your Linux system

Find out how to use rsync in a backup scenario.


Backups are an incredibly important aspect of a system administrator’s job. Without good backups and a well-planned backup policy and process, it is a near certainty that sooner or later some critical data will be irretrievably lost.

All companies, regardless of size, run on their data. Consider the financial and business cost of losing all of the data you need to run your business. No business today, from the smallest sole proprietorship to the largest global corporation, could survive the loss of all, or even a large fraction, of its data. Your place of business can be rebuilt using insurance, but your data can never be rebuilt.

By loss, here, I don't mean stolen data; that is an entirely different type of disaster. What I mean here is the complete destruction of the data.

Even if you are an individual and not running a large corporation, backing up your data is very important. I have two decades of personal financial data as well as that for my now closed businesses, including a large number of electronic receipts. I also have many documents, presentations, and spreadsheets of various types that I have created over the years. I really don't want to lose all of that.

So backups are imperative to ensure the long-term safety of my data.


Backup options


There are many options for performing backups. Most Linux distributions are provided with one or more open source programs specially designed to perform backups. There are many commercial options available as well. But none of those directly met my needs so I decided to use basic Linux tools to do the job.

In my article for the Open Source Yearbook last year, Best Couple of 2015: tar and ssh, I showed that fancy and expensive backup programs are not really necessary to design and implement a viable backup program.

Since last year, I have been experimenting with another backup option: the rsync command, which has some very interesting features that I have been able to use to good advantage. My primary objectives were to create backups from which users could locate and restore files without having to untar a backup tarball, and to reduce the amount of time taken to create the backups.

This article is intended only to describe my own use of rsync in a backup scenario. It is not a look at all of the capabilities of rsync or the many ways in which it can be used.


The rsync command

The rsync command was written by Andrew Tridgell and Paul Mackerras and first released in 1996. The primary intention for rsync is to remotely synchronize the files on one computer with those on another. Did you notice what they did to create the name there? rsync is open source software and is provided with almost all major distributions.

The rsync command can be used to synchronize two directories or directory trees whether they are on the same computer or on different computers but it can do so much more than that. rsync creates or updates the target directory to be identical to the source directory. The target directory is freely accessible by all the usual Linux tools because it is not stored in a tarball or zip file or any other archival file type; it is just a regular directory with regular files that can be navigated by regular users using basic Linux tools. This meets one of my primary objectives.

One of the most important features of rsync is the method it uses to synchronize preexisting files that have changed in the source directory. Rather than copying the entire file from the source, it uses checksums to compare blocks of the source and target files. If all of the blocks in the two files are the same, no data is transferred. If the data differs, only the block that has changed on the source is transferred to the target. This saves an immense amount of time and network bandwidth for remote sync. For example, when I first used my rsync Bash script to back up all of my hosts to a large external USB hard drive, it took about three hours. That is because all of the data had to be transferred. Subsequent syncs took 3-8 minutes of real time, depending upon how many files had been changed or created since the previous sync. I used the time command to determine this so it is empirical data. Last night, for example, it took just over three minutes to complete a sync of approximately 750GB of data from six remote systems and the local workstation. Of course, only a few hundred megabytes of data were actually altered during the day and needed to be synchronized.

The following simple rsync command can be used to synchronize the contents of two directories and any of their subdirectories. That is, the contents of the target directory are synchronized with the contents of the source directory so that at the end of the sync, the target directory is identical to the source directory.


rsync -aH sourcedir targetdir


The -a option is for archive mode which preserves permissions, ownerships and symbolic (soft) links. The -H is used to preserve hard links. Note that either the source or target directories can be on a remote host.

Now let's assume that yesterday we used rsync to synchronize two directories. Today we want to resync them, but we have deleted some files from the source directory. By default, rsync simply copies all the new or changed files to the target location and leaves the deleted files in place on the target. This may be the behavior you want, but if you would prefer that files deleted from the source also be deleted from the target, you can add the --delete option to make that happen.


Another interesting option, and my personal favorite because it increases the power and flexibility of rsync immensely, is the --link-dest option. The --link-dest option allows a series of daily backups that take up very little additional space for each day and also take very little time to create.


Specify the previous day's target directory with this option and a new directory for today. rsync then creates today's new directory and a hard link for each file in yesterday's directory is created in today's directory. So we now have a bunch of hard links to yesterday's files in today's directory. No new files have been created or duplicated. Just a bunch of hard links have been created. Wikipedia has a very good description of hard links. After creating the target directory for today with this set of hard links to yesterday's target directory, rsync performs its sync as usual, but when a change is detected in a file, the target hard link is replaced by a copy of the file from yesterday and the changes to the file are then copied from the source to the target.


So now our command looks like the following.

rsync -aH --delete --link-dest=yesterdaystargetdir sourcedir todaystargetdir
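In practice, the two target directories are usually named after dates. A minimal sketch that computes the names and prints the resulting command (the /backups root and /home/ source are illustrative, and `date -d yesterday` assumes GNU coreutils):

```shell
#!/bin/sh
# Sketch: compute dated directory names for a --link-dest rotation.
# /backups and /home/ are illustrative; date -d assumes GNU coreutils.
BACKUPROOT=/backups
TODAY=$BACKUPROOT/$(date +%Y-%m-%d)
YESTERDAY=$BACKUPROOT/$(date -d yesterday +%Y-%m-%d)

echo rsync -aH --delete --link-dest="$YESTERDAY" /home/ "$TODAY"
```

Each day's directory then looks like a complete backup, while unchanged files cost no extra space because they are hard links into the previous day's tree.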


There are also times when it is desirable to exclude certain directories or files from being synchronized. For this, there is the --exclude option. Use this option and the pattern for the files or directories you want to exclude. You might want to exclude browser cache files so your new command will look like this.


rsync -aH --delete --exclude Cache --link-dest=yesterdaystargetdir sourcedir todaystargetdir

Note that each file pattern you want to exclude requires its own separate --exclude option.

rsync can sync files with remote hosts as either the source or the target. For the next example, let's assume that the source directory is on a remote computer with the hostname remote1 and the target directory is on the local host. Even though SSH is the default communications protocol used when transferring data to or from a remote host, I always add the ssh option. The command now looks like this.


rsync -aH -e ssh --delete --exclude Cache --link-dest=yesterdaystargetdir remote1:sourcedir todaystargetdir


This is the final form of my rsync backup command.

rsync has a very large number of options that you can use to customize the synchronization process. For the most part, the relatively simple commands that I have described here are perfect for making backups for my personal needs. Be sure to read the extensive man page for rsync to learn about more of its capabilities as well as the options discussed here.

Performing backups

I automated my backups because of the mantra "automate everything." I wrote a Bash script that handles the details of creating a series of daily backups using rsync. This includes ensuring that the backup medium is mounted, generating the names for yesterday's and today's backup directories, creating the appropriate directory structures on the backup medium if they are not already there, performing the actual backups, and unmounting the medium.

I run the script daily, early every morning, as a cron job to ensure that I never forget to perform my backups.
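For reference, a crontab entry for such an early-morning run might look like the following; the script path and the 02:01 start time are just examples.

```shell
# min hour dom mon dow   command
01    02   *   *   *     /usr/local/bin/rsbu >/var/log/rsbu.log 2>&1
```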

My script, rsbu, and its configuration file, rsbu.conf, are available at https://github.com/opensourceway/rsync-backup-script


Recovery testing

No backup regimen would be complete without testing. You should regularly test recovery of random files or entire directory structures to ensure not only that the backups are working, but that the data in the backups can be recovered for use after a disaster. I have seen too many instances where a backup could not be restored for one reason or another and valuable data was lost because the lack of testing prevented discovery of the problem.

Just select a file or directory to test and restore it to a test location, such as /tmp, so that you won't overwrite a file that may have been updated since the backup was performed. Verify that the files' contents are as you expect them to be. Restoring files from a backup made using the rsync commands above is simply a matter of finding the file you want in the backup and copying it to the location where you want to restore it.
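A recovery test can be as simple as the following sketch, which uses throwaway paths under /tmp in place of a real backup tree: copy one file out of the "backup" and verify its contents against the original.

```shell
#!/bin/sh
# Sketch of a recovery test using throwaway paths under /tmp: "restore"
# one file from a mock backup tree and verify its contents.
mkdir -p /tmp/demo-backup /tmp/restore-test
echo "important data" > /tmp/demo-backup/notes.txt

cp /tmp/demo-backup/notes.txt /tmp/restore-test/     # restore to a safe spot
diff /tmp/demo-backup/notes.txt /tmp/restore-test/notes.txt \
  && echo "restore OK"
```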

I have had a few circumstances where I have had to restore individual files and, occasionally, a complete directory structure. Most of the time this has been self-inflicted when I accidentally deleted a file or directory. At least a few times it has been due to a crashed hard drive. So those backups do come in handy.


The last step

But just creating the backups will not save your business. You need to make regular backups and keep the most recent copies at a remote location, one that is not in the same building, or even within a few miles of your business location, if at all possible. This helps to ensure that a large-scale disaster does not destroy all of your backups.

A reasonable option for most small businesses is to make daily backups on removable media and take the latest copy home at night. The next morning, take an older backup back to the office. You should have several rotating copies of your backups. Even better would be to take the latest backup to the bank and place it in your safe deposit box, then return with the backup from the day before.

Source: opensource.com

Marielle Price

Published in GNU/Linux Rules!