Andromeda Computer - Blog


We all know that we use the cd command to move from one directory to another. To return to the previous directory, we use “cd ..” or “cd <location_of_previous_directory>”. This is how I mostly navigated between directories until I found this trio of commands, namely pushd, popd, and dirs. These three commands provide much faster navigation between directories. Unlike the cd command, pushd and popd manage a stack of directories: enter a directory, do whatever you need to do, and “pop” back to the previous directory quickly without having to type the long path name. The dirs command shows the current directory stack. These three commands are extremely useful when you’re working in a deep directory structure or in scripts.

Still confused? No worries! I am going to explain these commands in layman terms with some practical examples.

Use Pushd, Popd And Dirs For Faster Navigation Between Directories

The pushd, popd, and dirs commands come pre-installed, so let us forget about installation and go ahead to see how to use them in real time.

Right now, I am in /tmp directory.


I am going to create ten directories, namely test1, test2, … test10, in the /tmp directory.

As you may already know, we can easily create multiple directories at once using the mkdir command as shown below.

mkdir test1 test2 test3 test4 test5 test6 test7 test8 test9 test10

Or,

mkdir test{1,2,3,4,5,6,7,8,9,10}

Now, let us move to test3 directory. To do so, just type:

pushd test3


To know where you are now, just type:

dirs

Sample output:

/tmp/test3 /tmp /tmp


As you can see in the above output, the dirs command shows the directories currently in the stack. Do whatever you need to do in this directory. Once done, you can go back to your previous working directory using the command:

popd


There is no need to mention the full path of the previous directory. With the cd command, you would have to type “cd ..” or “cd <full_path_name>” to go back to the /tmp directory, but with popd we move back to the previous working directory instantly. It’s as simple as that.

Let us now go to the test8 directory. To do so, run:

pushd test8

Sample output:

/tmp/test8 /tmp /tmp


Let us go deeper into the stack.

pushd /tmp/test10

Sample output:

/tmp/test10 /tmp/test8 /tmp /tmp


We’re now in the test10 directory, and we have three directories (test10, test8 and tmp) in our stack. Did you also notice the direction? Each new directory is added on the left. When we start popping directories off, they will come off the left as well.
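If you want to see that ordering with index numbers, the bash builtin dirs also accepts a -v flag. In the same session it would look roughly like this:

dirs -v

Sample output:

 0  /tmp/test10
 1  /tmp/test8
 2  /tmp
 3  /tmp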

Now, if you wanted to move to the previous working directory, i.e. test8, using the cd command, you would have to type:

cd /tmp/test8

But that is not necessary. We can do it more quickly by running the popd command.

popd

Sample output:

/tmp/test8 /tmp /tmp


As you can see in the above output, we moved to the previous working directory without having to type the full path (i.e. /tmp/test8).

Now, let us pop again:

popd

Sample output:

/tmp /tmp


Finally, we are back in the directory where we started.

In this example, I have used just ten directories, so it may not seem like a big deal. But think about twenty or more directories. Would you type “cd <path_name>” or “cd ..” each time to move between them? No, it would be too time consuming. Just use the pushd command to change to any directory in the stack and move back to your previous working directory using the popd command. You can also use the dirs command at any time to show the current directory stack. You can push a series of paths onto the stack and then navigate them in reverse order, as in the sketch below. This will save you a lot of time when you are navigating around a stack of directories.
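For instance, here is a minimal sketch of that workflow, assuming you start from /tmp and using /etc and /var/log purely as example directories:

pushd /etc
pushd /var/log
dirs

Sample output:

/var/log /etc /tmp

Running popd twice then takes you back to /etc and finally to /tmp, in the reverse of the order the directories were pushed.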




You now know how to navigate between directories effectively without using the cd command. These commands come in handy when you’re working with a large directory stack. You can quickly move back and forth through any number of directories, and they are very useful in scripts too.

That’s all for now. If you know any other methods, feel free to share them in the comment section below. I will be here with another interesting guide soon.

Marielle Price

Published in GNU/Linux Rules!

Today we are going to learn some command-line productivity hacks. As you already know, we use the “cd” command to move between directories in Unix-like operating systems. In this guide I am going to show you how to navigate directories faster without having to use the “cd” command as often. There could be many ways, but I only know the following five methods right now! I will keep updating this guide whenever I come across new methods or utilities to achieve this task.

Five Different Methods To Navigate Directories Faster In Linux

Method 1: Using “Pushd”, “Popd” And “Dirs” Commands

This is the method I use most frequently to navigate between a stack of directories. The “pushd”, “popd”, and “dirs” commands come pre-installed in most Linux distributions, so don’t bother with installation. This trio of commands is quite useful when you’re working in a deep directory structure or in scripts. For more details, check our guide in the link given below.

Method 2: Using “bd” utility

The “bd” utility also helps you quickly go back to a specific parent directory without having to repeatedly type “cd ../../..” in your Bash shell.

Bd is also available in the Debian extra and Ubuntu universe repositories, so you can install it using the “apt-get” package manager on Debian, Ubuntu and other DEB-based systems as shown below:

$ sudo apt-get update
$ sudo apt-get install bd

For other distributions, you can install as shown below.

$ sudo wget --no-check-certificate -O /usr/local/bin/bd https://raw.github.com/vigneshwaranr/bd/master/bd
$ sudo chmod +rx /usr/local/bin/bd
$ echo 'alias bd=". bd -si"' >> ~/.bashrc
$ source ~/.bashrc

To enable auto completion, run:

$ sudo wget -O /etc/bash_completion.d/bd https://raw.github.com/vigneshwaranr/bd/master/bash_completion.d/bd
$ source /etc/bash_completion.d/bd

The bd utility has now been installed. Let us see a few examples to understand how to quickly move through a stack of directories using this tool.

Create some directories.

$ mkdir -p dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10

The above command will create a hierarchy of directories. Let us check directory structure using command:

$ tree dir1/
dir1/
└── dir2
    └── dir3
        └── dir4
            └── dir5
                └── dir6
                    └── dir7
                        └── dir8
                            └── dir9
                                └── dir10

9 directories, 0 files

Alright, we now have 10 directories. Let us say you’re currently in the 7th directory, i.e. dir7.

$ pwd
/home/sk/dir1/dir2/dir3/dir4/dir5/dir6/dir7

You want to move to dir3. Normally you would type:

$ cd /home/sk/dir1/dir2/dir3

Right? Yes! But it is not necessary. To go back to dir3, just type:

$ bd dir3

Now you will be in dir3.


Navigate Directories Faster In Linux Using “bd” Utility

Easy, isn’t it? It supports auto-completion, so you can just type the partial name of a directory and hit the Tab key to complete the full path.

To check the contents of a specific parent directory, you don’t need to go inside that particular directory. Instead, just type:

$ ls `bd dir1`

The above command will display the contents of dir1 from your current working directory.

For more details, check out the following GitHub page.

Method 3: Using “Up” Shell script

“Up” is a shell script that allows you to move quickly to a parent directory. It works well on many popular shells such as Bash, Fish and Zsh. Installation is absolutely easy too!

To install “Up” on Bash, run the following commands one by one:

$ curl --create-dirs -o ~/.config/up/up.sh https://raw.githubusercontent.com/shannonmoeller/up/master/up.sh
$ echo 'source ~/.config/up/up.sh' >> ~/.bashrc

The up script registers the “up” function and some completion functions via your “.bashrc” file.

Update the changes using command:

$ source ~/.bashrc

On zsh:

$ curl --create-dirs -o ~/.config/up/up.sh https://raw.githubusercontent.com/shannonmoeller/up/master/up.sh
$ echo 'source ~/.config/up/up.sh' >> ~/.zshrc

The up script registers the “up” function and some completion functions via your “.zshrc” file.

Update the changes using command:

$ source ~/.zshrc

On fish:

$ curl --create-dirs -o ~/.config/up/up.fish https://raw.githubusercontent.com/shannonmoeller/up/master/up.fish
$ source ~/.config/up/up.fish

The up script registers the “up” function and some completion functions via “funcsave”.

Now it is time to see some examples.

Let us create some directories.

$ mkdir -p dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10

Let us say you’re in the 7th directory, i.e. dir7.

$ pwd
/home/sk/dir1/dir2/dir3/dir4/dir5/dir6/dir7

You want to move to dir3. Using the “cd” command, we could do this by typing:

$ cd /home/sk/dir1/dir2/dir3

But it is much easier to go back to dir3 using the “up” script:

$ up dir3

That’s it. Now you will be in dir3. To go one directory up, just type:

$ up 1

To go back two directories, type:

$ up 2

It’s that simple. Did I type the full path? Nope. It also supports tab completion, so just type the partial directory name and hit Tab to complete the full path.

For more details, check out the GitHub page.

Please be mindful that the “bd” and “up” tools can only help you go backward, i.e. to a parent directory of the current working directory. You can’t move forward. If you want to switch to dir10 from dir5, you can’t; you need to use the “cd” command instead. These two utilities are meant for quickly moving you up to a parent directory!

Method 4: Using “Shortcut” tool

This is yet another handy method to switch between different directories quickly and easily. It is somewhat similar to the alias command: we create shortcuts to frequently used directories and use the shortcut name to go to the respective directory without having to type the path. If you’re working in a deep directory structure with a stack of directories, this method will save you a lot of time. You can learn how it works in the guide given below, and there is a small conceptual sketch after this paragraph.
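The sketch below is not the “Shortcut” tool itself; it only illustrates the underlying idea with a plain Bash alias (the shortcut name lx and the target directory are made-up examples):

$ echo 'alias lx="cd ~/ostechnix/Linux"' >> ~/.bashrc
$ source ~/.bashrc
$ lx

Running lx from anywhere now drops you straight into ~/ostechnix/Linux; the dedicated tool builds on the same idea with its own commands for adding and listing shortcuts.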

Method 5: Using “CDPATH” Environment variable

This method doesn’t require any installation. CDPATH is an environment variable. It is somewhat similar to the PATH variable, which contains many different paths concatenated using ‘:’ (colon). The main difference between the PATH and CDPATH variables is that PATH is used by all commands, whereas CDPATH works only for the cd command.

I have the following directory structure.


Directory structure

As you see, there are four child directories under a parent directory named “ostechnix”.

Now add this parent directory to CDPATH using command:

$ export CDPATH=~/ostechnix

You can now instantly cd into the sub-directories of the parent directory (i.e. ~/ostechnix in our case) from anywhere in the filesystem.

For instance, I am currently in the /var/mail/ location. To cd into the ~/ostechnix/Linux/ directory, we don’t have to use the full path of the directory as shown below:

$ cd ~/ostechnix/Linux

Instead, just mention the name of the sub-directory you want to switch to:

$ cd Linux

It will automatically cd to ~/ostechnix/Linux directory instantly.


As you can see in the above output, I didn’t use “cd <full-path-of-subdir>”. Instead, I just used “cd <subdir-name>” command.

Please note that CDPATH only lets you quickly navigate to the immediate child directories of the parent directory set in the CDPATH variable. It doesn’t help much with navigating deeper directory structures (directories inside sub-directories).

To find the values of CDPATH variable, run:

$ echo $CDPATH

Sample output would be:

/home/sk/ostechnix

Set multiple values to CDPATH

Similar to PATH variable, we can also set multiple values (more than one directory) to CDPATH separated by colon (:).

$ export CDPATH=.:~/ostechnix:/etc:/var:/opt

Make the changes persistent

Note that the above command (export) will only keep the value of CDPATH for the current shell session. To permanently set the value of CDPATH, just add it to your ~/.bashrc or ~/.bash_profile file.

$ vi ~/.bash_profile

Add the values:

export CDPATH=.:~/ostechnix:/etc:/var:/opt

Hit ESC key and type :wq to save and exit.

Apply the changes using command:

$ source ~/.bash_profile

Clear CDPATH

To clear the values of CDPATH, use export CDPATH=””. Or, simply delete the entire line from ~/.bashrc or ~/.bash_profile files.
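That is, run:

$ export CDPATH=""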

In this article, you have learned different ways to navigate a stack of directories faster and more easily in Linux. As you can see, it’s not that difficult to browse a pile of directories quickly. Now stop typing “cd ../../..” endlessly and use these tools instead. If you know any other tool or method worth trying for navigating directories faster, feel free to let us know in the comment section below. I will review and add them to this guide.

And, that’s all for now. Hope this helps. More good stuffs to come. Stay tuned!

Marielle Price

Published in GNU/Linux Rules!

This article explains how to list all the packages available in an Ubuntu, Linux Mint or Debian repository (installed and available for install), be it an official repository or a third-party source like a PPA, and so on.

Below you'll find 2 ways of listing packages from a repository: using a GUI or from the command line.

List all packages in a Debian, Ubuntu or Linux Mint repository using a GUI


If you want to list all the packages in a repository on your desktop, you can use Synaptic Package Manager. 

Synaptic is a graphical package management application for APT (APT being the main command line package manager for Debian and its derivatives).

If you don't have Synaptic installed, you can install it on Debian, Ubuntu, and any Debian or Ubuntu based Linux distribution, including elementary OS, Linux Mint and so on, by using this command:

sudo apt install synaptic


To list all the packages in a particular software repository using Synaptic, launch the application and click on Origin in the bottom left-hand side of its window. Next, select the repository for which you want to list all available packages (both installed and available for installation) from the list that's displayed in the left-hand side of Synaptic Package Manager.

For example, here's Synaptic showing all the packages available in the Google repository, listing Google Chrome stable, beta and unstable, as well as Google Earth Pro and EC:

Synaptic list all packages in a repository on Ubuntu or Debian


As you can see, all the software sources are listed here, including the official repositories. 

Launchpad PPA repositories are supported as well. Their names begin with LP-PPA, followed by the actual PPA name. Synaptic lists 2 entries for each PPA - make sure you select the PPA entry ending with the Ubuntu codename, for example /bionic, /cosmic, etc. The entry ending in /now doesn't list all the available packages in the PPA.

This is a screenshot showing all the packages available in the Ubuntu Graphics Drivers PPA (for Ubuntu 18.10 Cosmic Cuttlefish, since that's what I'm using), including showing which are installed on my system:

Synaptic list all packages in a repository


I'm not sure why, but some packages are listed multiple times for PPA sources (and only for PPA repositories). That's only a display issue, and it doesn't break any functionality.

List all packages in a repository in Ubuntu, Debian or Linux Mint from the command line


Listing all packages in a repository from the command line in Ubuntu, Debian or Linux Mint is a bit tricky, but still quite easy to do.

There are multiple ways of doing this from the command line, but I'll only list one. The command to list all packages available in repository-name is the following:

grep ^Package /var/lib/apt/lists/repository-name*_Packages | awk '{print $2}' | sort -u


I'll explain later on how to find out the repository name from /var/lib/apt/lists and how to use it. Before that, I'll explain what this command does:

  • grep ^Package ... searches for lines beginning with "Package" in the /var/lib/apt/lists/*_Packages file
  • awk '{print $2}' prints the second column for each line (so it filters out everything but the package name)
  • sort -u sorts the lines and outputs only unique lines (removes duplicates)


The first thing you need to do is find the name of the repository *_Packages file from /var/lib/apt/lists/. You can list all the repository _Packages files available in /var/lib/apt/lists/ by using a simple ls:

ls /var/lib/apt/lists/*_Packages


Since the results may be very long, you can run the command output through more for easier reading:

ls /var/lib/apt/lists/*_Packages | more


If you know part of the repository name (I'm using KEYWORD in the command below as the name), you can filter the ls results using grep, like this:

ls /var/lib/apt/lists/*_Packages | grep KEYWORD


For example, let's say you want to list all the packages in the official Tor repository, and you know the repository name must contain "tor". In this case, you'd use this command to find the _Packages filename in /var/lib/apt/lists/:

ls /var/lib/apt/lists/*_Packages | grep tor


For short queries, some unrelated repositories might be displayed, but it's still easier to see what you're looking for using grep than listing all the repositories' _Packages files.

Now that you know the _Packages filename, you can list all the packages available in that repository by issuing this command:

grep ^Package /var/lib/apt/lists/some-repository-amd64_Packages | awk '{print $2}' | sort -u


Use the file containing the architecture for which you want to list all available packages in that repository. The example above is for 64bit (amd64), but you could use i386 for 32bit, etc.

You don't need the complete repository _Packages filename. Back to my Tor repository example, the _Packages filename for Tor is deb.torproject.org_torproject.org_dists_cosmic_main_binary-amd64_Packages. In this case, you could use deb.torproject followed by *_Packages to simplify things, like this:

grep ^Package /var/lib/apt/lists/deb.torproject*_Packages | awk '{print $2}' | sort -u


Which outputs the following:

deb.torproject.org-keyring
tor
tor-geoipdb


Another example. Let's say you want to see all packages available in the Linux Uprising Oracle Java 11 PPA (ppa:linuxuprising/java). You can list them by using:

grep ^Package /var/lib/apt/lists/ppa.launchpad.net_linuxuprising_java*_Packages | awk '{print $2}' | sort -u


Which outputs this:

oracle-java11-installer
oracle-java11-set-default


To use this with other PPA repositories, replace linuxuprising with the first part of the PPA name, and java with the second part of the PPA name, and the command will list all the packages from that PPA (both installed and not installed); see the example below.
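For instance, for the Ubuntu Graphics Drivers PPA (ppa:graphics-drivers/ppa) shown in the Synaptic screenshot earlier, the command would look roughly like this (the exact _Packages filename depends on what is present in /var/lib/apt/lists/ on your system):

grep ^Package /var/lib/apt/lists/ppa.launchpad.net_graphics-drivers_ppa*_Packages | awk '{print $2}' | sort -u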

You can also list all the packages available in all the PPA repositories you have added on your system, by using:

grep ^Package /var/lib/apt/lists/ppa.launchpad.net*_Packages | awk '{print $2}' | sort -u


For easy access, you could bookmark this command using the Marker commands bookmark manager, or with HSTR (while used primarily for searching command history, HSTR can bookmark commands as well).

Published in GNU/Linux Rules!

There are multiple ways of searching for packages available in the Debian, Ubuntu or Linux Mint repositories from the command line, and in this article I'll cover apt, apt-cache and aptitude. Use these to search in both package names and package descriptions, which is useful if you're looking for a specific package but don't know the exact package name, or if you need a tool for a particular purpose or task but don't know the available options.

The major differences between using apt, apt-cache and aptitude to search for available packages are their output and the sort order, as you'll see in the examples below. Also, aptitude may not be installed by default on your Debian-based Linux distribution.

I personally prefer apt-cache because of its easier-to-read output (and I usually don't need the extra info - to see installed/available versions I can use apt-cache policy package-name); it also tends to display the results I'm looking for near the top.

Another thing to note is that apt and apt-cache search the apt software package cache, so they return both packages available in the repositories as well as DEB packages installed manually (not available in the repos), while aptitude only returns packages that are available in the repositories.

I. Search available packages using aptitude

aptitude is an Ncurses-based front-end for apt. This tool is usually not installed by default, but you can install it in Debian, Ubuntu, Linux Mint and other Debian-based Linux distributions using this command:

sudo apt install aptitude


You can use aptitude to search for packages from the command line, like this:

aptitude search KEYWORD


Example:

$ aptitude search openssh

p   libconfig-model-openssh-perl                           - configuration editor for OpenSsh                                
p   libghc-crypto-pubkey-openssh-dev                       - OpenSSH key codec  
p   libghc-crypto-pubkey-openssh-dev:i386                  - OpenSSH key codec  
v   libghc-crypto-pubkey-openssh-dev-0.2.7-6af0a           -                    
v   libghc-crypto-pubkey-openssh-dev-0.2.7-6af0a:i386      -                    
p   libghc-crypto-pubkey-openssh-doc                       - OpenSSH key codec; documentation                                
p   libghc-crypto-pubkey-openssh-prof                      - OpenSSH key codec; profiling libraries                          
p   libghc-crypto-pubkey-openssh-prof:i386                 - OpenSSH key codec; profiling libraries                          
v   libghc-crypto-pubkey-openssh-prof-0.2.7-6af0a          -                    
v   libghc-crypto-pubkey-openssh-prof-0.2.7-6af0a:i386     -                    
p   libnet-openssh-compat-perl                             - collection of compatibility modules for Net::OpenSSH            
p   libnet-openssh-parallel-perl                           - run SSH jobs in parallel                                        
p   libnet-openssh-perl                                    - Perl SSH client package implemented on top of OpenSSH           
p   lxqt-openssh-askpass                                   - OpenSSH user/password GUI dialog for LXQt                       
p   lxqt-openssh-askpass:i386                              - OpenSSH user/password GUI dialog for LXQt                       
p   lxqt-openssh-askpass-l10n                              - Language package for lxqt-openssh-askpass                       
v   lxqt-openssh-askpass-l10n:i386                         -                    
i   openssh-client                                         - secure shell (SSH) client, for secure access to remote machines 
p   openssh-client:i386                                    - secure shell (SSH) client, for secure access to remote machines 
p   openssh-client-ssh1                                    - secure shell (SSH) client for legacy SSH1 protocol              
p   openssh-client-ssh1:i386                               - secure shell (SSH) client for legacy SSH1 protocol              
p   openssh-known-hosts                                    - download, filter and merge known_hosts for OpenSSH
p   openssh-server                                         - secure shell (SSH) server, for secure access from remote machines
p   openssh-server:i386                                    - secure shell (SSH) server, for secure access from remote machines 
p   openssh-sftp-server                                    - secure shell (SSH) sftp server module, for SFTP access from remote machines
p   openssh-sftp-server:i386                               - secure shell (SSH) sftp server module, for SFTP access from remote machines


You can also use the aptitude Ncurses UI if you wish. Type aptitude to start it:

Aptitude ncurses interface


You can search packages by pressing / and then start typing the keyword.

II. Search available packages using apt-cache

Use apt-cache to search for packages available in the Debian, Ubuntu or Linux Mint repositories (as well as installed DEB packages that aren't in the repositories) like this:

apt-cache search KEYWORD


Example:

$ apt-cache search openssh

openssh-client - secure shell (SSH) client, for secure access to remote machines
openssh-server - secure shell (SSH) server, for secure access from remote machines
openssh-sftp-server - secure shell (SSH) sftp server module, for SFTP access from remote machines
python-setproctitle - Setproctitle implementation for Python 2
python3-setproctitle - Setproctitle implementation for Python 3
ssh - secure shell client and server (metapackage)
agent-transfer - copy a secret key from GnuPG's gpg-agent to OpenSSH's ssh-agent

...

ssh-askpass-gnome - interactive X program to prompt users for a passphrase for ssh-add
ssh-audit - tool for ssh server auditing
sshpass - Non-interactive ssh password authentication


I removed some of the output because it can get very long. The order of the visible results was not changed though.

III. Search available packages using apt

Using apt you can search for available packages from the command line as follows:

apt search KEYWORD


Replace KEYWORD with the keyword you want to search for (you can add multiple keywords in quotes).

Here is an example search for "openssh" together with its output:

$ apt search openssh

Sorting... Done
Full Text Search... Done
agent-transfer/bionic 0.41-1ubuntu1 amd64
  copy a secret key from GnuPG's gpg-agent to OpenSSH's ssh-agent

cme/bionic,bionic 1.026-1 all
  Check or edit configuration data with Config::Model

connect-proxy/bionic 1.105-1 amd64
  Establish TCP connection using SOCKS4/5 or HTTP tunnel

...

openssh-client/bionic,now 1:7.6p1-4 amd64 [installed]
  secure shell (SSH) client, for secure access to remote machines

openssh-client-ssh1/bionic 1:7.5p1-10 amd64
  secure shell (SSH) client for legacy SSH1 protocol

openssh-known-hosts/bionic,bionic 0.6.2-1 all
  download, filter and merge known_hosts for OpenSSH

openssh-server/bionic 1:7.6p1-4 amd64
  secure shell (SSH) server, for secure access from remote machines

openssh-sftp-server/bionic 1:7.6p1-4 amd64
  secure shell (SSH) sftp server module, for SFTP access from remote machines

putty-tools/bionic 0.70-4 amd64
  command-line tools for SSH, SCP, and SFTP

python-scp/bionic,bionic 0.10.2-1 all
  scp module for paramiko


Once again, I removed some of the results because the results list is quite long. The results order was not changed though.


For all three, the search results may be very long. In such cases, you can run them through more, for easier reading, like this:

apt-cache search KEYWORD | more


You can also exclude results that don't include a particular keyword (KEYWORD2 in this example) by using grep:

apt-cache search KEYWORD | grep KEYWORD2


grep is case sensitive by default. Add -i (grep -i KEYWORD2) to ignore case.
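For instance, to keep only the openssh results that also mention "server", ignoring case:

apt-cache search openssh | grep -i server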

Published in GNU/Linux Rules!

There are multiple ways of preventing a package from updating in Debian, Ubuntu, Linux Mint, elementary OS and other Debian/Ubuntu-based Linux distributions. This article presents 3 ways of excluding repository packages from being upgraded.

Why prevent a package from being updated? Let's say you install a package that's older than the version available in Debian, Ubuntu or Linux Mint repositories, or you know some update is causing issues, and you want to upgrade all packages minus one (or two, three...).

Here's an example. I'm using Chromium browser with hardware acceleration patches from the Saiarcot895-dev PPA, in Ubuntu 18.10. To get hardware acceleration to work with Nvidia drivers, a patched vdpau-va-driver package is needed, and this is not yet available in this PPA for the latest Ubuntu 18.10. Luckily, the Ubuntu 18.04 package can be installed in Ubuntu 18.10, but any upgrade through "apt upgrade" or using the Software Updater will upgrade this package, which I don't want. So in this case, holding this package from upgrades allows me to upgrade all other packages without having to worry about it.

It should be noted that preventing a package from future upgrades may cause issues in some situations, if the package you're holding is used as a dependency for another package that can be upgraded. So try not to prevent too many packages from upgrades, especially libraries.


Here are 3 ways of preventing a package from updating in Debian, Ubuntu, Linux Mint.

1. Prevent package updates using a GUI: Synaptic Package Manager

Synaptic Package Manager, a Gtk graphical package management program for apt, can lock packages which prevents them from being updated.

It's important to note that using Synaptic to lock packages won't keep them from being updated from the command line - running apt upgrade or apt-get upgrade will still upgrade a package locked in Synaptic. Locking packages in Synaptic will prevent package upgrades using Ubuntu's Software Updater app, and possibly other graphical package managers. It will not prevent updating packages using the Linux Mint Update Manager application though. As a result, I recommend using apt-mark or dpkg (see below) to keep packages from updating.

You can install Synaptic Package Manager using this command:

sudo apt install synaptic


To prevent a package from updating using Synaptic, search for it, select the package and from the Synaptic menu click Package -> Lock Version:

Synaptic lock package version


In the same way you can unlock the package too.

To see all locked packages in Synaptic, click Status in the bottom left-hand side, then click on Pinned above the Status section:

Synaptic show locked (pinned) packages


2. Keep a package from updating using apt-mark

Holding packages from updating with apt-mark should prevent them from updating using Ubuntu's Software Updater, as well as command line upgrades (apt upgrade / apt-get upgrade).

You can hold a package from future upgrades (and from being automatically removed) with apt-mark by using this command:

sudo apt-mark hold PACKAGE


Replacing PACKAGE with the package you want to hold from updating.

You can check which packages are marked as held by using:

apt-mark showhold


To remove a hold (so the package can be updated), use:

sudo apt-mark unhold PACKAGE


For both hold and unhold you can specify multiple packages, just like when installing software with apt (separate the packages with a space), as in the example below.
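For example, to hold (and later unhold) two packages at once - the package names here are only placeholders:

sudo apt-mark hold firefox chromium-browser
sudo apt-mark unhold firefox chromium-browser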

3. Prevent package updates with dpkg

A while back there were some graphical package managers that ignored the apt-mark hold status. I'm not sure if that's still the case, but just to be safe (and in case you're using an old Debian / Ubuntu / Linux Mint version), here's another way of preventing package updates in Ubuntu, Linux Mint or Debian: dpkg.

To prevent a package from upgrades using dpkg, use:

echo "PACKAGE hold" | sudo dpkg --set-selections


You can see all package holds using this command:

dpkg --get-selections | grep hold


To remove the hold (allow the package to be upgraded), use:

echo "PACKAGE install" | sudo dpkg --set-selections


Unlike apt-mark, this solution doesn't allow specifying multiple packages at once; a simple shell loop, shown below, works around that.
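A minimal sketch of such a loop (the package names are just placeholders):

for pkg in firefox chromium-browser; do echo "$pkg hold" | sudo dpkg --set-selections; done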

Published in GNU/Linux Rules!

There are multiple ways of finding out which package a particular file belongs to on Ubuntu, Debian or Linux Mint. This article presents two ways of achieving this, both from the command line.

1. Using apt-file to find the package that provides a file (for repository packages, either installed or not installed)


apt-file indexes the contents of all packages available in your repositories, and allows you to search for files in all these packages. 

That means you can use apt-file to search for files inside DEB packages that are installed on your system, as well as packages that are not installed on your Debian (or Debian-based, like Ubuntu) machine but are available to install from the repositories. This is useful in case you want to find which package contains a file that you need to compile some program, etc.

apt-file cannot find the package that provides a file if you downloaded a DEB package and installed it without using a repository. The package needs to be available in the repositories for apt-file to be able to find it.

apt-file may not be installed on your system. To install it in Debian, Ubuntu, Linux Mint and other Debian-based or Ubuntu-based Linux distributions, use this command:

sudo apt install apt-file


This tool finds the files belonging to a package by using a database, which needs to be updated before you can use it. To update the apt-file database, use:

sudo apt-file update


Now you can use apt-file to find the DEB package that provides a file, be it a package you've installed from the repositories, or a package available in the repositories, but not installed on your Debian / Ubuntu / Linux Mint system. To do this, run:

apt-file search filename


Replacing filename with the name of the file you want to find.

This command will list all occurrences of filename found in various packages. If you know the exact file path and filename, you can get the search results to only list the package that includes that exact file, like this:

apt-file search /path/to/filename


For example, running just apt-file search cairo.h will return a long list of search results:

$ apt-file search cairo.h
fltk1.3-doc: /usr/share/doc/fltk1.3-doc/HTML/group__group__cairo.html
ggobi: /usr/include/ggobi/ggobi-renderer-cairo.h
glabels-dev: /usr/include/libglbarcode-3.0/libglbarcode/lgl-barcode-render-to-cairo.h
glabels-dev: /usr/share/gtk-doc/html/libglbarcode-3.0/libglbarcode-3.0-lgl-barcode-render-to-cairo.html
gstreamer1.0-plugins-good-doc: /usr/share/gtk-doc/html/gst-plugins-good-plugins-1.0/gst-plugins-good-plugins-plugin-cairo.html
guile-cairo-dev: /usr/include/guile-cairo/guile-cairo.h
guitarix-doc: /usr/share/doc/guitarix-doc/namespacegx__cairo.html
ipe: /usr/share/ipe/7.2.7/doc/group__cairo.html
libcairo-ocaml-dev: /usr/share/doc/libcairo-ocaml-dev/html/Pango_cairo.html
libcairo-ocaml-dev: /usr/share/doc/libcairo-ocaml-dev/html/type_Pango_cairo.html
libcairo2-dev: /usr/include/cairo/cairo.h
...


However, if you know the file path, e.g. you want to find out to which package the file /usr/include/cairo/cairo.h belongs, run:

apt-file search /usr/include/cairo/cairo.h


This only lists the package that contains this file:

$ apt-file search /usr/include/cairo/cairo.h
libcairo2-dev: /usr/include/cairo/cairo.h


In this example, the package that includes the file I searched for (/usr/include/cairo/cairo.h) is libcairo2-dev.

apt-file may also be used to list all the files included in a package (apt-file list packagename), perform regex searches, and more. Consult its man page (man apt-file) and its help output (apt-file --help) for more information.
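For example, to list every file shipped by the libcairo2-dev package found earlier, or to do a regular-expression search for files ending in cairo.h (the -x flag is apt-file's regex option; check man apt-file on your system):

apt-file list libcairo2-dev
apt-file search -x 'cairo\.h$'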

2. Using dpkg to find the package that provides a file (only for installed DEB packages - from any source)


dpkg can also be used to find out which package a file belongs to. It can be faster to use than apt-file, because you don't need to install anything, and there's no database to update.

However, dpkg can only search for files belonging to installed packages, so if you're searching for a file in a package that's not installed on your system, use apt-file. On the other hand, dpkg can be used to find files belonging to packages that were installed without using a repository, a feature that's not available for apt-file.

To use dpkg to find the installed DEB package that provides a file, run it with the -S (or --search) flag, followed by the filename (or pattern) whose package you want to find, like this:

dpkg -S filename


For example, to find out which package the cairo.h file belongs to, use dpkg -S cairo.h:

$ dpkg -S cairo.h
libgtk2.0-dev:amd64: /usr/include/gtk-2.0/gdk/gdkcairo.h
libcairo2-dev:amd64: /usr/include/cairo/cairo.h
libpango1.0-dev: /usr/include/pango-1.0/pango/pangocairo.h
libgtk-3-dev:amd64: /usr/include/gtk-3.0/gdk/gdkcairo.h


Just like for apt-file, this may show multiple packages that have files containing the filename you're looking for. You can enter the full path of the file to get only the package that contains that specific file. Example:

$ dpkg -S /usr/include/cairo/cairo.h
libcairo2-dev:amd64: /usr/include/cairo/cairo.h


In this example, the Debian package that includes the file I searched for (/usr/include/cairo/cairo.h) is libcairo2-dev.


Other notable ways of finding the package a file belongs to are the online package search services provided by Ubuntu and Debian:


For both, you'll also find options to find the packages that contain files named exactly like your input keyword, packages containing files whose names end with the keyword, or packages containing files whose names contain the keyword.

The Linux Mint package search website doesn't include an option to search for files inside packages, but you can use the Ubuntu or Debian online package search for packages that Linux Mint imports from Debian / Ubuntu.

Published in GNU/Linux Rules!

This article explains how to show a history of recently installed, upgraded or removed packages, on Debian, Ubuntu or Linux Mint, from the command line.

To get a complete history of package changes, including installed, upgraded or removed DEB packages, along with the date on which each action was performed, in Debian or Ubuntu, one can read the dpkg log (dpkg being the low-level infrastructure for handling the installation and removal of Debian software packages) available at /var/log/dpkg.log. You can use grep to parse this file from the command line and only display installed, upgraded or removed packages, depending on what you need.

This works for DEB packages installed in any way, be it using a graphical tool such as Synaptic, Gnome Software or Update Manager, or a command-line tool like apt, apt-get, aptitude or dpkg. It does not work for other packages, like Flatpak or Snap, or for software installed from source, and so on.

Some alternative ways of showing the package manager history on Debian, Ubuntu or Linux Mint do not display a complete log. For example, Synaptic Package Manager (File -> History) can only show a history of installed, upgraded or removed software packages for which Synaptic itself was used to perform those actions, but you won't see any packages installed, updated or removed from the command line (using apt, apt-get, dpkg), using the Software Updater, or the Software application. Similarly, the /var/log/apt/history.log APT log file only lists actions performed using apt/apt-get.

Show a history of recently installed packages, their version number, and the date / time they were installed on Debian, Ubuntu or Linux Mint:

grep "install " /var/log/dpkg.log


This is how it looks:

$ grep "install " /var/log/dpkg.log
2019-01-08 13:22:15 install automathemely:all <none> 1.3
2019-01-08 13:22:29 install python3-astral:all <none> 1.6.1-1
2019-01-08 13:22:29 install python3-tzlocal:all <none> 1.5.1-1
2019-01-08 13:22:29 install python3-schedule:all <none> 0.3.2-1

...

2019-01-09 17:19:49 install libwebkit2-sharp-4.0-cil:amd64 <none> 2.10.9+git20160917-1.1
2019-01-09 17:19:49 install sparkleshare:all <none> 3.28-1
2019-01-15 15:58:20 install ffsend:amd64 <none> 0.1.2

Show a list of recently upgraded packages, the date / time they were upgraded, as well as the old and new package version, on Debian, Ubuntu or Linux Mint:

grep "upgrade " /var/log/dpkg.log


Sample output:

$ grep "upgrade " /var/log/dpkg.log
2019-01-07 11:14:10 upgrade tzdata:all 2018g-0ubuntu0.18.10 2018i-0ubuntu0.18.10
2019-01-07 11:35:14 upgrade davinci-resolve:amd64 15.2-2 15.2.2-1
2019-01-07 12:31:04 upgrade chromium-chromedriver:amd64 72.0.3626.17-0ubuntu1~ppa1~18.10.1 72.0.3626.28-0ubuntu1~ppa1~18.10.1
2019-01-07 12:31:04 upgrade chromium-browser-l10n:all 72.0.3626.17-0ubuntu1~ppa1~18.10.1 72.0.3626.28-0ubuntu1~ppa1~18.10.1
2019-01-07 12:31:08 upgrade chromium-browser:amd64 72.0.3626.17-0ubuntu1~ppa1~18.10.1 72.0.3626.28-0ubuntu1~ppa1~18.10.1
2019-01-07 12:31:12 upgrade chromium-codecs-ffmpeg-extra:amd64 72.0.3626.17-0ubuntu1~ppa1~18.10.1 72.0.3626.28-0ubuntu1~ppa1~18.10.1

...

2019-01-15 15:51:31 upgrade vlc-plugin-bittorrent:amd64 2.5-1~cosmic 2.6-1~cosmic
2019-01-15 17:30:44 upgrade virtualbox-6.0:amd64 6.0.0-127566~Ubuntu~bionic 6.0.2-128162~Ubuntu~bionic
2019-01-15 17:34:33 upgrade libarchive13:amd64 3.2.2-5 3.2.2-5ubuntu0.1
2019-01-16 12:32:43 upgrade oracle-java11-installer:amd64 11.0.1-2~linuxuprising1 11.0.2-1~linuxuprising0
2019-01-16 12:42:20 upgrade nvidiux:amd64 2.0.4 2.1
2019-01-16 13:41:05 upgrade plata-theme:all 0.4.1-0ubuntu1~cosmic1 0.5.4-0ubuntu1~cosmic1



Show a history of recently removed packages and the date / time they were removed, on Debian, Ubuntu or Linux Mint:

grep "remove " /var/log/dpkg.log


Example:

$ grep "remove " /var/log/dpkg.log
2019-01-10 12:30:55 remove automathemely:all 1.3 <none>
2019-01-11 13:16:38 remove persepolis:all 3.1.0.0 <none>
2019-01-11 13:38:52 remove python3-astral:all 1.6.1-1 <none>
2019-01-11 13:38:52 remove python3-psutil:amd64 5.4.6-1build1 <none>
2019-01-11 13:38:52 remove python3-pyxattr:amd64 0.6.0-2build3 <none>
2019-01-11 13:38:52 remove python3-schedule:all 0.3.2-1 <none>
2019-01-11 13:38:53 remove python3-tzlocal:all 1.5.1-1 <none>


/var/log/dpkg.log contains the package install, update and remove history for the current month. For the previous month, read the /var/log/dpkg.log.1 log file. For example, to see the package installation history for the previous month, use:

grep "install " /var/log/dpkg.log.1


Want to go back even further in the dpkg history? Use zgrep instead of grep, and read /var/log/dpkg.log.2.gz, /var/log/dpkg.log.3.gz, /var/log/dpkg.log.4.gz and so on, which go back two, three and four months respectively.

Example:

zgrep "upgrade " /var/log/dpkg.log.2.gz


This is because, by default on Debian, Ubuntu and Linux Mint, the dpkg log is set to rotate once a month, keeping 12 old logs (so 12 months), and rotated files are compressed using gzip (.gz). You can check the Debian/Ubuntu logrotate configuration for dpkg by using cat /etc/logrotate.d/dpkg.
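On a typical Debian or Ubuntu installation, the dpkg entry in that file looks roughly like this (your version may differ slightly):

cat /etc/logrotate.d/dpkg

/var/log/dpkg.log {
        monthly
        rotate 12
        compress
        delaycompress
        missingok
        notifempty
        create 644 root root
}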

Published in GNU/Linux Rules!

This article explains how to downgrade a package to a specific version using apt, in Debian, Ubuntu or Linux Mint (from the command line).

Sometimes you may encounter issues with a recently upgraded package, and you want to downgrade it. To be able to downgrade a package in Debian, Ubuntu or Linux Mint (and other Debian/Ubuntu-based Linux distributions), the package version to which you want to downgrade must be available in a repository.



To downgrade a package to a specific version, you'll need to append =version after the package name in the installation command, with version being the version to which you want to downgrade the package:

sudo apt install <package>=<version>


Example 1.

Let's look at a simple example. I currently have Firefox 65 installed in Ubuntu 18.10, and I want to downgrade it using apt. The first thing to do is to look at the available versions, by running apt policy firefox (apt-cache policy works as well):

$ apt policy firefox
firefox:
  Installed: 65.0+build2-0ubuntu0.18.10.1
  Candidate: 65.0+build2-0ubuntu0.18.10.1
  Version table:
 *** 65.0+build2-0ubuntu0.18.10.1 500
        500 http://security.ubuntu.com/ubuntu cosmic-security/main amd64 Packages
        500 http://archive.ubuntu.com/ubuntu cosmic-updates/main amd64 Packages
        100 /var/lib/dpkg/status
     63.0+build1-0ubuntu1 500
        500 http://archive.ubuntu.com/ubuntu cosmic/main amd64 Packages


This apt command shows that the Firefox version installed on my system is 65.0+build2-0ubuntu0.18.10.1, and it's available in the cosmic-security and cosmic-updates repositories. There is an older version, 63.0+build1-0ubuntu1, available in the main repository, so Firefox can be downgraded to this version.

To downgrade Firefox from the installed 65.0+build2-0ubuntu0.18.10.1 version, to the 63.0+build1-0ubuntu1 version from the main repository, the command would be:

sudo apt install firefox=63.0+build1-0ubuntu1


This command downgrades Firefox without having to downgrade any other packages, because Firefox doesn't depend on any strict package versions:

$ sudo apt install firefox=63.0+build1-0ubuntu1
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following packages will be DOWNGRADED:
  firefox
0 upgraded, 0 newly installed, 1 downgraded, 0 to remove and 51 not upgraded.
Need to get 46.1 MB of archives.
After this operation, 4,243 kB disk space will be freed.
Do you want to continue? [Y/n]


There are cases in which you must resolve some dependencies to be able to downgrade the package though, and we'll look at an example like that below.

Example 2.

Let's look at a more complicated example - a package that can't be directly downgraded using apt without also downgrading some of its dependencies.

$ apt policy chromium-browser
chromium-browser:
  Installed: 72.0.3626.81-0ubuntu1~ppa2~18.10.1
  Candidate: 72.0.3626.81-0ubuntu1~ppa2~18.10.1
  Version table:
 *** 72.0.3626.81-0ubuntu1~ppa2~18.10.1 500
        500 http://ppa.launchpad.net/saiarcot895/chromium-beta/ubuntu cosmic/main amd64 Packages
        100 /var/lib/dpkg/status
     71.0.3578.98-0ubuntu0.18.10.1 500
        500 http://security.ubuntu.com/ubuntu cosmic-security/universe amd64 Packages
        500 http://archive.ubuntu.com/ubuntu cosmic-updates/universe amd64 Packages
     69.0.3497.100-0ubuntu1 500
        500 http://archive.ubuntu.com/ubuntu cosmic/universe amd64 Packages


The apt policy command above shows that I currently have Chromium browser beta (version 72) installed from the Saiarcot Chromium Beta PPA, with two older versions being available in the Ubuntu security/updates and main repositories.

Let's try to downgrade chromium browser from version 72.0.3626.81-0ubuntu1~ppa2~18.10.1 to version 71.0.3578.98-0ubuntu0.18.10.1 (from the security/updates repositories) using apt and see what happens:

$ sudo apt install chromium-browser=71.0.3578.98-0ubuntu0.18.10.1
Reading package lists... Done
Building dependency tree       
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:
 chromium-browser : Depends: chromium-codecs-ffmpeg-extra (= 71.0.3578.98-0ubuntu0.18.10.1) but 72.0.3626.81-0ubuntu1~ppa2~18.10.1 is to be installed or
                             chromium-codecs-ffmpeg (= 71.0.3578.98-0ubuntu0.18.10.1) but it is not going to be installed
                    Recommends: chromium-browser-l10n but it is not going to be installed
E: Unable to correct problems, you have held broken packages.


Downgrading Chromium browser doesn't work because it depends on chromium-codecs-ffmpeg-extra or chromium-codecs-ffmpeg, with the exact same version as the chromium-browser package itself. In this case, let's also downgrade the chromium-codecs-ffmpeg-extra package to the same version:

$ sudo apt install chromium-browser=71.0.3578.98-0ubuntu0.18.10.1 chromium-codecs-ffmpeg-extra=71.0.3578.98-0ubuntu0.18.10.1
Reading package lists... Done
Building dependency tree       
Reading state information... Done
Suggested packages:
  webaccounts-chromium-extension unity-chromium-extension adobe-flashplugin
Recommended packages:
  chromium-browser-l10n
The following packages will be REMOVED:
  chromium-browser-l10n chromium-chromedriver
The following packages will be DOWNGRADED:
  chromium-browser chromium-codecs-ffmpeg-extra
0 upgraded, 0 newly installed, 2 downgraded, 2 to remove and 51 not upgraded.
Need to get 58.8 MB of archives.
After this operation, 61.5 MB disk space will be freed.
Do you want to continue? [Y/n]


The apt downgrade command output shows that chromium-browser can now be downgraded, but the command wants to remove 2 packages. Those are recommended packages that were automatically installed when chromium-browser was installed (and they too need to be the exact same version as the chromium-browser package), and while they are not required by chromium-browser, you may still need them. So it's a good idea to downgrade those as well, so they are not removed.

In this case, the apt downgrade command becomes:

sudo apt install chromium-browser=71.0.3578.98-0ubuntu0.18.10.1 chromium-codecs-ffmpeg-extra=71.0.3578.98-0ubuntu0.18.10.1 chromium-browser-l10n=71.0.3578.98-0ubuntu0.18.10.1 chromium-chromedriver=71.0.3578.98-0ubuntu0.18.10.1


Let's look at what happens when we use it:

$ sudo apt install chromium-browser=71.0.3578.98-0ubuntu0.18.10.1 chromium-codecs-ffmpeg-extra=71.0.3578.98-0ubuntu0.18.10.1 chromium-browser-l10n=71.0.3578.98-0ubuntu0.18.10.1 chromium-chromedriver=71.0.3578.98-0ubuntu0.18.10.1
Reading package lists... Done
Building dependency tree       
Reading state information... Done
Suggested packages:
  webaccounts-chromium-extension unity-chromium-extension adobe-flashplugin
The following packages will be DOWNGRADED:
  chromium-browser chromium-browser-l10n chromium-chromedriver chromium-codecs-ffmpeg-extra
0 upgraded, 0 newly installed, 4 downgraded, 0 to remove and 51 not upgraded.
Need to get 64.9 MB of archives.
After this operation, 35.8 MB disk space will be freed.
Do you want to continue? [Y/n]


As you can see, the downgrade can be performed, and no packages are about to be removed. Since it all looks good now, we can proceed with the downgrade.

Published in GNU/Linux Rules!
Monday, 14 January 2019 20:37

The State of Desktop Linux 2019

A snapshot of the current state of Desktop Linux at the start of 2019—with comparison charts and a roundtable Q&A with the leaders of three top Linux distributions.

I've never been able to stay in one place for long—at least in terms of which Linux distribution I call home. In my time as a self-identified "Linux Person", I've bounced around between a number of truly excellent ones. In my early days, I picked up boxed copies of S.u.S.E. (back before they made the U uppercase and dropped the dots entirely) and Red Hat Linux (before Fedora was a thing) from store shelves at various software outlets.

Side note: remember when we used to buy Operating Systems—and even most software—in actual boxes, with actual physical media and actual printed manuals? I still have big printed manuals for a few early Linux versions, which, back then, were necessary for getting just about everything working (from X11 to networking and sound). Heck, sometimes simply getting a successful boot required a few trips through those heavy manuals. Ah, those were the days.

Debian, Ubuntu, Fedora, openSUSE—I spent a good amount of time living in the biggest distributions around (and many others). All of them were fantastic. Truly stellar. Yet, each had their own quirks and peculiarities.

As I bounced from distro to distro, I developed a strong attachment to just about all of them, learning, as I went, to appreciate each for what it was. Just the same, when asked which distribution I recommend to others, my brain begins to melt down. Offering any single recommendation feels simply inadequate.

Choosing which one to call home, even if simply on a secondary PC, is a deeply personal choice.

Maybe you have an aging desktop computer with limited RAM and an older, but still absolutely functional, CPU. You're going to need something light on system resources that runs on 32-bit processors.

Or, perhaps you work with a wide variety of hardware architectures and need a single operating system that works well on all of them—and standardizing on a single Linux distribution would make it easier for you to administer and update all of them. But what options even are available?

To help make this process a bit easier, I've put together a handy set of charts and graphs to let you quickly glance and find the one that fits your needs (Figures 1 and 2).

Figure 1. Distribution Comparison Chart I

Figure 2. Distribution Comparison Chart II

But, let's be honest, knowing that a particular system meets your hardware needs (and preferences) simply is not enough. What is the community like? What's in store for the future of this new system you are investing in? Do the ideals of its leadership match up with your own?

In the interests of helping to answer those questions, I sat down with the leaders of three of the most prominent Linux distros of the day:

  • Chris Lamb: Debian Project Leader
  • Daniel Fore: elementary Founder
  • Matthew Miller: Fedora Project Leader

Each of these systems is unique, respected and brings something truly valuable to the world.

I asked all three leaders the exact same questions—and gave each the chance to respond to each other. The topics are all over the place and designed to help show the similarities and differences between the distributions, both in terms of goals and culture.

Note that the Fedora project leader, Matthew Miller, was having an unusually busy time (both for work and personally), but he still made time to answer as many questions as he could. That, right there, is what I call dedication.

Bryan (LJ):

Introduce your Linux distribution (the short, elevator-pitch version—just a few sentences) and your role within it.

Daniel (elementary):

elementary is focused on growing the market for open-source software and chipping away at the share of our closed-source competitors. We believe in providing a great user experience for both new users and pro users, and putting a strong emphasis on security and privacy. We build elementary OS: a consumer-focused operating system for desktops and notebooks.

My role at elementary is as Founder and CEO. I work with our various teams (like design, development, web and translation teams) to put together a cohesive vision, product roadmap and ensure that we're following an ethical path to sustainable funding.

Chris (Debian):

The Debian Project, which celebrated its 25th birthday this year, is one of the oldest and largest GNU/Linux distributions and is run on an entirely volunteer basis.

Not only does it have a stellar reputation for stability and technical excellence, it has an unwavering philosophical stance on free software (i.e., it comes with no proprietary software pre-installed, and the main repository contains only free software). As it underpins countless derivative distributions, such as Ubuntu, et al., it is uniquely poised and able to improve the Free Software world as a whole.

The Debian Project Leader (DPL) is a curious beast. Far from being a BDFL—the DPL has no authoritative or deciding say in technical matters—the project leader is elected every year to a heady mix of figurehead, spokesperson and focus/contact point, but the DPL is also responsible for the quotidian business of keeping the project moving with respect to reducing bureaucracy and smoothing any and all roadblocks to Debian Developers' productivity.

Matthew (Fedora):

The Fedora distribution brings all of the innovation of thousands of upstream projects and hundreds of thousands of upstream developers together into a polished operating system for users, with releases on a six-month cadence. We're a community project tied together through the shared project mission and through the "four Fs" of our foundations: Freedom, Friends, Features and First. Something like 3,000 people contribute directly to Fedora in any given year, with a core active group of around 400 people participating in any given week.

We just celebrated the 15th anniversary of our first release, but our history goes back even further than that to Red Hat Linux. I'm the Fedora Project Leader, a role funded by Red Hat—paying people to work on the project is the largest way Red Hat acts as a sponsor. It's not a dictatorial role; mostly, I collect good ideas and write short persuasive essays about them. Leadership responsibility is shared with the Fedora Council, which includes both funded roles, members selected by parts of the community and at-large elected representatives.

Bryan (LJ):

With introductions out of the way, let's start with this (perhaps deceptively) simple question:

How many Linux distributions should there be? And why?

Daniel (elementary):

As long as there are a set of users who aren't getting their needs met by existing options, there's a purpose for any number of distros to exist. Some come and some go, and many are very very niche, but that's okay. I think there's a lot of people who are obsessed with trying to have some dominant player take a total monopoly, but in every other market category, it's immediately apparent how silly that idea is. You wouldn't want a single clothing manufacturer or a single restaurant chain or a single internet provider (wink hint nudge) to have total market dominance. Diversity and choice in the marketplace is good for customers, and I think it's no different when it comes to operating systems.

Matthew (Fedora):

[Responding to Daniel] Yes, I agree exactly. That said, creating a distro entirely from scratch is a lot of work, and much of it is not very interesting work. If you've got something innovative at the how-we-put-the-OS-together level (like CoreOS), there's room for that, but if you're focused higher up the stack, like a new desktop environment or something else around user experience, it makes the most sense to make a derivative of one of the big community-powered distros. There's a lot of boring hard work, and it makes sense to reuse rather than carry those same rocks to the top of a slightly different hill.

In Fedora, we're aiming to make custom distro creation as easy as possible. We have "spins", which are basically mini custom distros. This is stuff like the Python Classroom Lab or Fedora Jam (which is focused on musicians). We have a framework for making those within the Fedora project—I'm all about encouraging bigger, broader sharing and collaboration in Fedora. But if you want to work outside the project—say, you really have different ideas on free and open-source vs. proprietary software—we have Fedora Remixes that let you do that.

Chris (Debian):

The competing choice of distributions is often cited as a reason preventing Linux from becoming mainstream as it robs the movement of a consistent and focused marketing push.

However, philosophical objections against monopolistic behaviour granted, the diversity and freedom that this bazaar of distributions affords is, in my view, paradoxically exactly why it has succeeded.

That people are free—but more important, feel free—to create a new distribution as a means to try experimental or outlandish approaches to perceived problems is surely sufficient justification for some degree of proliferation or even duplication of effort.

In this capacity, Debian's technical excellence, flexibility and deliberate lack of a top-down direction has resulted in it becoming the base underpinning countless derivatives, clearly and evidently able to provide the ingredients to build one's "own" distribution, often without overt credit.

Matthew wrote: "if you want to work outside the project—say, you really have different ideas on free and open source vs. proprietary software—we have Fedora Remixes that let you do that."

Given that, I would be curious to learn how you protect your reputation if you encourage, or people otherwise use your infrastructure, tools and possibly even your name to create and distribute works that are antithetical to the cause of software and user freedom?

Bryan (LJ):

Thinking about it from a slightly different angle—how many distros would be TOO many distros?

Daniel (elementary):

More than the market can sustain, I guess? The thing about Linux is that it powers all kinds of stuff, so even one non-technical person could still end up running a handful of distros for their notebook, their router, their phone someday, IoT devices, etc. So the number of distros that could exist sustainably could easily be in the hundreds or thousands, I think.

Chris (Debian):

If I may be so bold as to interpret this more widely, whilst it might look like we have "too many" distributions, I fear this might be misunderstanding the reasons why people are creating these newer offerings in the first place.

Apart from the aforementioned distros created for technical experimentation, someone spinning up their own distribution might be (subconsciously!) doing it for the delight and satisfaction in building something themselves and having their name attached to it—something entirely reasonable and justifiable IMHO.

To then read this creation through a lens of not being ideal for new users or even some silly "Linux worldwide domination" metric could therefore even be missing the point and some of the sheer delight of free software to begin with.

Besides, the "market" for distributions seems to be doing a pretty good job of correcting itself.

Bryan (LJ):

Okay, since you guys brought it up, let's talk about world domination.

How much of what you do (and what your teams do) is influenced by a desire to increase marketshare (either of your distribution specifically or desktop Linux in general)?

Daniel (elementary):

When we first started out, elementary OS was something we made for fun out of a desire to see something exist that we felt didn't yet. But as the company, and our user base, has grown, it's become clearer that our mission must be about getting open-source software into the hands of more people. As of now, our estimated user base is somewhere in the hundreds of thousands, with more than 75% of downloads coming from users of closed-source operating systems, so I think we're making good progress toward that goal. Making the company mission about reaching out to people directly has shaped the way we monetize, develop products, market and more, by ensuring we always put users' needs and experiences first.

Chris (Debian):

I think it would be fair to say that "increasing market share" is neither an overt nor an overly explicit priority for Debian.

In our 25-year history, Debian has found that if we just continue to do good work, then good things will follow.

That is not to say that other approaches can't work or are harmful, but chasing potentially chimeric concepts such as "market share" can very easily lead to negative outcomes in the long run.

Matthew (Fedora):

A project's user base is directly tied to its ability to have an effect in the world. If we were just doing cool stuff but no one used it, it really wouldn't matter much. And, no one really comes into working on a distro without having been a user first. So I guess to answer the question directly for me at least, it's pretty much all of it—even things that are not immediately related are about helping keep our community healthy and growing in the long term.

Bryan (LJ):

The three of you represent distros that are "funded" in very different ways. Fedora being sponsored (more or less) by Red Hat, elementary being its own company and Debian being, well, Debian.

I would love to hear your thoughts around funding the work that goes into building a distribution. Is there a "right" or "ideal" way to fund that work (either from an ethical perspective or a purely practical one)?

Chris (Debian):

Clearly, melding "corporate interests" with the interests of a community distribution can be fraught with issues.

I am always interested to hear how other distros separate influence and power particularly in terms of increasing transparency using tools such as Councils with community representation, etc. Indeed, this question of "optics" is often highly under-appreciated; it is simply not enough to be honest, you must be seen to be honest too.

Unfortunately, whilst I would love to be able to say that Debian is by-definition free (!) of all such problems by not having a "big sister" company sitting next to it, we have a long history of conversations regarding the role of money in funding contributors.

For example, is it appropriate to fund developers to do work that might not be done otherwise? And if it is paid for, isn't this simply a feedback loop that effectively ensures that this work will cease to be within the remit of volunteers? There are no easy answers, and we have no firm consensus, alas.

Daniel (elementary):

I'm not sure that there's a single right way, but I think we have the opinion that there are some wrong ways. The biggest questions we're always trying to ask about funding are where it's coming from and what it's incentivizing. We've taken a hard stance that advertising income is not in the interest of our users. When companies make their income from advertising, they tend to have to make compromises to display advertising content instead of the things their users actually want to see, and oftentimes they are incentivized to invade their users' privacy in order to target ads more effectively. We've also chosen to avoid big enterprise markets like server and IoT, because companies are naturally incentivized to work on the products that turn a profit, and making that our business model would result in things like the recent Red Hat acquisition or in killing products that users love, like Ubuntu's Unity.

Instead, we focus on things like individual sales of software directly to our users, bug bounties, Patreon, etc. We believe that doing business directly with our users incentivizes the company to focus on features and products that benefit those paying customers. Whenever a discussion comes up about how elementary is funded, we always make a point to evaluate whether that funding incentivizes outcomes that are ethical and in our users' favor.

Regarding paying developers, I think elementary is a little different here. We believe that people writing open-source software should be able to make a living doing it. We owe a lot to our volunteer community, and the current product would not be possible without their hard work, but we also have to recognize that there's a significant portion of work that would never get done unless someone is being paid to do it. There are important tasks that are difficult or menial, and expecting someone to volunteer their time for them after their full work day is a big ask, especially if the people knowledgeable in these domains would have to take time away from their families or personal lives to do so. Many tasks are also just more suited to sustained work and require the dedicated attention of a single person for several weeks or months instead of some attention from multiple people over the span of years. So I think we're pretty firmly in the camp that not only is it important for some work to be paid, but the eventual goal should be that anyone writing open-source code should be able to get paid for their contributions.

Chris (Debian):

Daniel wrote: "So I think we're pretty firmly in the camp that not only is it important for some work to be paid, but the eventual goal should be that anyone writing open-source code should be able to get paid."

Do you worry that you could be creating a two-tier community with this approach?

Not only in terms of hard influence (e.g., if I'm paid, I'm likely to be able to simply spend longer on my approach), but also in terms of "soft" influence during discussions, or by putting off so-called "drive-thru" contributions? Do you do anything to prevent the appearance of this?

Matthew (Fedora):

Chris wrote: "Do you worry that you could be creating a two-tier community with this approach?"

Yeah, this is a big challenge for us. We have many people who are paid by Red Hat to work on Fedora either full time or as part of their job, and that gives a freedom to just be around a lot more, which pretty much directly translates to influence. Right now, many of the community-elected positions in Fedora leadership are filled by Red Hatters, because they're people the community knows and trusts. It takes a lot of time and effort to build up that visibility when you have a different day job. But there's some important nuances here too, because many of these Red Hatters aren't actually paid to work on Fedora at all—they're doing it just like anyone else who loves the project.

Daniel (elementary):

Chris wrote: "Do you worry that you could be creating a two-tier community with this approach?"

It's possible, but I'm not sure that we've measured anything to this effect. I think you might be right that employees at elementary can have more influence just as a byproduct of having more time to participate in more discussions, but I wouldn't say that volunteers' opinions are discounted in any way or that they're underrepresented when it comes to major technical decisions. I think it's more that we can direct labor after design and architecture decisions have been discussed. As an example, we recently decided to make the switch from CMake to Meson. This was a group discussion primarily led by volunteers, but the actual implementation was then largely carried out by employees.

Chris (Debian):

Daniel wrote: "Do you worry that you could be creating a two-tier community with this approach? ... It's possible, but I'm not sure that we've measured anything to this effect."

I think it might be another one of those situations where the optics in play are perhaps as important as the reality. Do you do anything to prevent the appearance of any bias?

Not sure how best to frame it hypothetically, but if I turned up to your project tomorrow and learned that some developers were paid for their work (however fairly integrated in practice), that would perhaps put me off investing my energy.

Bryan (LJ):

What do you see as the single biggest challenge currently facing both your specific project—and desktop Linux in general?

Daniel (elementary):

Third-party apps! Our operating systems are valuable to people only if they can use them to complete the tasks that they care about. Today, that increasingly means using proprietary services that tie in to closed-source and non-native apps that often have major usability and accessibility problems. Even major open-source apps like Firefox don't adhere to free desktop standards like shipping a .desktop file, or take advantage of new cross-desktop metadata standards like AppStream. If we want to stay relevant for desktop users, we need to encourage the development of native open-source apps and invest in non-proprietary cloud services and social networks. The next set of industry-disrupting apps (like Dropbox, Sketch, Slack, etc.) need to be open source and Linux-first.

Chris (Debian):

Third-party apps/stores are perhaps the biggest challenge facing all distributions in the medium to long term, but whilst I would concede there are cultural issues in play here, I believe these are at least partly technical challenges, or at least ones that admit some technical ameliorations.

More difficult, however, is that our current paradigms of what constitutes software freedom are becoming difficult to square with the increased usage of cloud services. In the years ahead we may need to revise our perspectives, ideas and possibly even our definitions of what constitutes free software.

There will be a time when the FLOSS community will have to cease the casual mocking of "cloud" and acknowledge the reality that it is, regardless of one's view of it, here to stay.

Matthew (Fedora):

For desktop Linux, on the technical side, I'm worried about hardware enablement—not just the work dealing with driver compatibility and proprietary hardware, but more fundamentally, just being locked out. We've just seen Apple come out with hardware locked so Linux won't even boot—even with signed kernels. We're going to see more of that, and more tablets and tablet-keyboard combos with similar locked, proprietary operating systems.

A bigger worry I have is with bringing the next generation to open source—a lot of Fedora core contributors have been with the project since it started 15 years ago, which on the one hand is awesome, but also, we need to make sure that we're not going to end up with no new energy. When I was a kid, I got into computers through programming BASIC on an Apple ][. I could see commercial software and easily imagine myself making the same kind of thing. Even the fanciest games on offer—I could see the pixels and could use PEEK and POKE to make those beeps and boops. But now, with kids getting into computers via Fortnite or whatever, that's not something one can just sit down and make an approximation of as a middle-school kid. That's discouraging and makes a bigger hill to climb.

This is one reason I'm excited about Fedora IoT—you can use Linux and open source at a tinkerer's level to make something that actually has an effect on the world around you, and actually probably a lot better than a lot of off-the-shelf IoT stuff.

Bryan (LJ):

Where do you see your distribution in five years? What will its place be in the broader Linux and computing world?

Chris (Debian):

Debian naturally faces some challenges in the years ahead, but I sincerely believe that the Project remains as healthy as ever.

We are remarkably cherished and uniquely poised to improve the free software ecosystem as a whole. Moreover, our stellar reputation for technical excellence, stability and software freedom remains highly respected; losing it would surely be the beginning of the end for Debian.

Daniel (elementary):

Our short-term goals are mostly about growing our third-party app ecosystem and improving our platform. We're investing a lot of time into online accounts integration and working with other organizations, like GNOME, to make our libraries and tooling more compelling. Sandboxed packaging and Wayland will give us the tools to help keep our users' data private and to keep their operating system stable and secure. We're also working with OEMs to make elementary OS more shippable and to give users a way to get an open-source operating system when they buy a new computer. Part of that work is the new installer that we're collaborating with System76 to develop. Overall, I'd say that we're going to continue to make it easier to switch away from closed-source operating systems, and we're working on increasing collaborative efforts to do that.

Bryan (LJ):

When you go to a FOSS or Linux conference and see folks using Mac and Windows PCs, what's your reaction? Is it a good thing or a bad thing when developers of Linux software primarily use another platform?

Chris (Debian):

Rushing to label this as a "good" or "bad" thing can make it easy to miss the underlying and more interesting lessons we can learn here.

Clearly, if everyone was using a Linux-based operating system, that would be a better state of affairs, but if we are overly quick to dismiss the usage of Mac systems as "bad", then we can often fail to understand why people have chosen to adopt the trade-offs of these platforms in the first place.

By not demonstrating sufficient empathy for such users as well as newcomers or those without our experience, we alienate potential users and contributors and tragically fail to communicate our true message. Basically, we can be our own worst enemy sometimes.

Daniel (elementary):

Within elementary, we strongly believe in dogfooding, but I think when we see someone at a conference using a closed-source operating system, it's a learning opportunity. Instead of being upset about it or blaming them, we should be asking why we haven't been able to make a conversion. We need to identify whether the problem is a missing product or feature, or just outreach, and then address that.

Bryan (LJ):

How often do you interact with the leaders of other distributions? And is that the right amount?

Chris (Debian):

Whilst there are a few meta-community discussion groups around, they tend to have a wider focus, so yes, I think we could probably talk a little more, even just as a support group or a place to rant!

More seriously though, this conversation itself has been fairly insightful, and I've learned a few things that I think I "should" have known already, hinting that we could be doing a better job here.

Daniel (elementary):

With other distros, not too often. I think we're a bit more active with our partners, upstreams and downstreams. It's always interesting to hear about how someone else tackles a problem, so I would be interested in interacting more with others, but in a lot of cases, I think there are philosophical or technical differences that mean our solutions might not be relevant for other distros.

Bryan (LJ):

Is there value in the major distributions standardizing on package management systems? Should that be done? Can that be done?

Chris (Debian):

I think I would prefer to see effort go toward consistent philosophical outlooks and messaging on third-party apps and related issues before I saw energy being invested into having a single package management format.

I mean, is this really the thing that is holding us all back? I would grant there is some duplication of effort, but I'm not sure it is the most egregious example and—as you suggest—it is not even really technically feasible or is at least subject to severe diminishing returns.

Daniel (elementary):

For users, there's a lot of value in being able to sideload cross-platform, closed-source apps that they rely on. But outside of this use case, I'm not sure that packaging is much more than an implementation detail as far as our users are concerned. I do think, though, that developers can benefit from having more examples and more documentation available, and the packaging formats can benefit from having a diverse set of implementations. Having something like Flatpak or Snap become as well accepted as systemd would probably be good in the long run, but our users probably never noticed when we switched from Upstart, and they probably won't notice when we switch from Debian packages.

Bryan (LJ):

Big thanks to Daniel, Matthew and Chris for taking time out to answer questions and engage in this discussion with each other. Seeing the leadership of such excellent projects talking together about the things they differ on—and the things they align on completely—warms my little heart.

Resources
Debian Project
Debian's Unwavering Philosophical Stance on Free Software
Debian's 25th Birthday
Benevolent Dictator for Life (Wikipedia)
Debian Project Leader Elections 2017
Debian Project Leader Elections 2018
Bits from the DPL (October 2018)
Get Fedora
Fedora's Mission and Foundations
Celebrate 15 Years of Fedora
elementary OS
Publish on AppCenter

 

source: linuxjournal
 
Published in GNU/Linux Rules!
Sunday, 13 January 2019 20:42

Top 5 Linux Distributions for Productivity

I have to confess, this particular topic is a tough one to address. Why? First off, Linux is a productive operating system by design. Thanks to an incredibly reliable and stable platform, getting work done is easy. Second, to gauge effectiveness, you have to consider what type of work you need a productivity boost for. General office work? Development? School? Data mining? Human resources? You see how this question can get somewhat complicated.

That doesn’t mean, however, that some distributions aren’t able to do a better job of configuring and presenting that underlying operating system into an efficient platform for getting work done. Quite the contrary. Some distributions do a much better job of “getting out of the way,” so you don’t find yourself in a work-related hole, having to dig yourself out and catch up before the end of day. These distributions help strip away the complexity that can be found in Linux, thereby making your workflow painless.

Let’s take a look at the distros I consider to be your best bet for productivity. To help make sense of this, I’ve divided them into categories of productivity. That task itself was challenging, because everyone’s productivity varies. For the purposes of this list, however, I’ll look at:

  • General Productivity: For those who just need to work efficiently on multiple tasks.
  • Graphic Design: For those who work with the creation and manipulation of graphic images.
  • Development: For those who use their Linux desktops for programming.
  • Administration: For those who need a distribution to facilitate their system administration tasks.
  • Education: For those who need a desktop distribution to make them more productive in an educational environment.

Yes, there are more categories to be had, many of which can get very niche-y, but these five should fill most of your needs.

 

General Productivity


For general productivity, you won't get much more efficient than Ubuntu. The primary reason for choosing Ubuntu for this category is the seamless integration of apps, services, and desktop. You might be wondering why I didn't choose Linux Mint for this category. Because Ubuntu now defaults to the GNOME desktop, it gains the added advantage of GNOME Extensions (Figure 1).

productivity 1

Figure 1: The GNOME Clipboard Indicator extension in action.

These extensions go a very long way to aid in boosting productivity (so Ubuntu gets the nod over Mint). But Ubuntu didn’t just accept a vanilla GNOME desktop. Instead, they tweaked it to make it slightly more efficient and user-friendly, out of the box. And because Ubuntu contains just the right mixture of default, out-of-the-box, apps (that just work), it makes for a nearly perfect platform for productivity.
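If you prefer working from a terminal, recent GNOME releases also ship a gnome-extensions command-line tool that can list and toggle installed extensions. As a rough sketch (the Clipboard Indicator UUID below is only an example; use the list command to see what is actually installed on your system):

gnome-extensions list

gnome-extensions enable clipboard-indicator@tudmotu.com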

Whether you need to write a paper, work on a spreadsheet, code a new app, work on your company website, create marketing images, administer a server or network, or manage human resources from within your company HR tool, Ubuntu has you covered. The Ubuntu desktop distribution also doesn't require the user to jump through many hoops to get things working … it simply works (and quite well). Finally, thanks to its Debian base, Ubuntu makes installing third-party apps incredibly easy.
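For example, installing an app from the standard repositories, or from a .deb you have downloaded yourself, typically looks like this (the package and file names here are purely illustrative):

sudo apt update

sudo apt install gimp

sudo apt install ./some-downloaded-app.deb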

Although Ubuntu tends to be the go-to for nearly every list of “top distributions for X,” it’s very hard to argue against this particular distribution topping the list of general productivity distributions.

Graphic Design
If you're looking to up your graphic design productivity, you can't go wrong with Fedora Design Suite. This Fedora respin was created by the team responsible for all Fedora-related artwork. Although the default selection of apps isn't a massive collection of tools, those it does include are geared specifically for the creation and manipulation of images.

With apps like GIMP, Inkscape, Darktable, Krita, Entangle, Blender, Pitivi, Scribus, and more (Figure 2), you’ll find everything you need to get your image editing jobs done and done well. But Fedora Design Suite doesn’t end there. This desktop platform also includes a bevy of tutorials that cover countless subjects for many of the installed applications. For anyone trying to be as productive as possible, this is some seriously handy information to have at the ready. I will say, however, the tutorial entry in the GNOME Favorites is nothing more than a link to this page.

productivity 2

Figure 2: The Fedora Design Suite Favorites menu includes plenty of tools for getting your graphic design on.

Those who work with a digital camera will certainly appreciate the inclusion of the Entangle app, which allows you to control your DSLR from the desktop.
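If you would rather stay on a stock Fedora Workstation install and pull in a similar toolset yourself, a rough dnf sketch would look like the following (the package names are assumptions and may differ slightly between Fedora releases):

sudo dnf install gimp inkscape darktable krita blender scribus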

Development


Nearly all Linux distributions are great platforms for programmers. However, one particular distribution stands out, above the rest, as one of the most productive tools you'll find for the task. That OS comes from System76, and it's called Pop!_OS. Pop!_OS is tailored specifically for creators, but not of the artistic type. Instead, Pop!_OS is geared toward creators who specialize in developing, programming, and making. If you need an environment that is not only perfectly suited for your development work, but includes a desktop that's sure to get out of your way, you won't find a better option than Pop!_OS (Figure 3).

What might surprise you (given how "young" this operating system is) is that Pop!_OS is also one of the most stable GNOME-based platforms you'll ever use. This means Pop!_OS isn't just for creators and makers, but anyone looking for a solid operating system. One thing that many users will greatly appreciate with Pop!_OS is that you can download an ISO specifically for your video hardware. If you have Intel hardware, download the version for Intel/AMD. If your graphics card is NVIDIA, download that specific release. Either way, you are sure to get a solid platform on which to create your masterpiece.

 productivity 3 0

 Figure 3: The Pop!_OS take on GNOME Overview.

Interestingly enough, Pop!_OS doesn't ship with much in the way of pre-installed development tools; there's no bundled IDE, for example. You can, however, find all the development tools you need in the Pop Shop.
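Since Pop!_OS is built on Ubuntu, you can also pull a basic toolchain in from the terminal instead of the Pop Shop. A minimal sketch (the packages listed are just examples of a starting point):

sudo apt install build-essential git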

Administration
If you're looking to find one of the most productive distributions for admin tasks, look no further than Debian. Why? Because Debian is not only incredibly reliable, it's one of those distributions that gets out of your way better than most others. Debian is the perfect combination of ease of use and unlimited possibility. On top of which, because this is the distribution on which so many others are based, you can bet that if there's an admin tool you need for a task, it's available for Debian. Of course, we're talking about general admin tasks, which means most of the time you'll be using a terminal window to SSH into your servers (Figure 4) or a browser to work with web-based GUI tools on your network. Why bother making use of a desktop that's going to add layers of complexity (such as SELinux in Fedora, or YaST in openSUSE)? Instead, choose simplicity.

productivity 4

Figure 4: SSH’ing into a remote server on Debian.
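A typical session for that kind of work is nothing more exotic than the following (the user name, hostname and file name are placeholders):

ssh admin@server.example.com

scp ./site-backup.tar.gz admin@server.example.com:/tmp/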
 

And because you can select which desktop you want (from GNOME, Xfce, KDE, Cinnamon, MATE, LXDE), you can be sure to have the interface that best matches your work habits.

Education
If you are a teacher or student, or otherwise involved in education, you need the right tools to be productive. Once upon a time, there existed the likes of Edubuntu. That distribution never failed to appear at the top of education-related lists. However, that distro hasn't been updated since it was based on Ubuntu 14.04. Fortunately, there's a new education-based distribution ready to take that title, based on openSUSE. This spin is called openSUSE:Education-Li-f-e (Linux For Education - Figure 5), and is based on openSUSE Leap 42.1 (so it is slightly out of date).

openSUSE:Education-Li-f-e includes tools like:

  • Brain Workshop - A dual n-back brain exercise
  • GCompris - An educational software suite for young children
  • gElemental - A periodic table viewer
  • iGNUit - A general purpose flash card program
  • Little Wizard - Development environment for children based on Pascal
  • Stellarium - An astronomical sky simulator
  • TuxMath - A math tutor game
  • TuxPaint - A drawing program for young children
  • TuxType - An educational typing tutor for children
  • wxMaxima - A cross-platform GUI for the Maxima computer algebra system
  • Inkscape - Vector graphics program
  • GIMP - Graphic image manipulation program
  • Pencil - GUI prototyping tool
  • Hugin - Panorama photo stitching and HDR merging program.
 productivity 5
Figure 5: The openSUSE:Education-Li-f-e distro has plenty of tools to help you be productive in or for school.
 

Also included with openSUSE:Education-Li-f-e is the KIWI-LTSP Server. The KIWI-LTSP Server is a flexible, cost-effective solution aimed at empowering schools, businesses, and organizations all over the world to easily install and deploy desktop workstations. Although this might not directly aid the student to be more productive, it certainly enables educational institutions to be more productive in deploying desktops for students to use. For more information on setting up KIWI-LTSP, check out the openSUSE KIWI-LTSP quick start guide.

Learn more about Linux through the free "Introduction to Linux" course from The Linux Foundation and edX.

 
Published in GNU/Linux Rules!