Displaying items by tag: gnu

Sunday, 24 November 2019 14:26

TIMESHIFT: Backup and Restore Ubuntu Linux


Have you ever wondered how you can back up and restore your Ubuntu or Debian system? Timeshift is a free and open source tool that lets you create incremental snapshots of your filesystem. You can create a snapshot using either RSYNC or BTRFS.

With that, let’s delve in and install Timeshift. For this tutorial, we shall install it on an Ubuntu 18.04 LTS system.

Installing TimeShift on Ubuntu / Debian Linux

TimeShift is not available in the official Ubuntu and Debian repositories. With that in mind, we are going to run the command below to add the PPA:



# add-apt-repository -y ppa:teejee2008/ppa




Next, update the system packages with the command:


# apt update


After a successful system update, install Timeshift by running the following apt command:


# apt install timeshift



Preparing a backup storage device

Best practice demands that we save the system snapshot on a separate storage volume, aside from the system’s hard drive. For this guide, we are using a 16 GB flash drive as the secondary drive on which we are going to save the snapshot.


# lsblk | grep sdb



For the flash drive to be used as a backup location for the snapshot, we need to create a partition table on the device. Run the following commands:


# parted /dev/sdb mklabel gpt


# parted /dev/sdb mkpart primary 0% 100%


# mkfs.ext4 /dev/sdb1
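
To confirm that the partition was created and formatted as expected, you can inspect the device once more (a quick sanity check; /dev/sdb is the example device from above):


# lsblk -f /dev/sdb


The output should list an sdb1 partition carrying an ext4 filesystem.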




After creating a partition table on the USB flash drive, we are all set to begin creating filesystem snapshots!

Using Timeshift to create snapshots

To launch Timeshift, use the application menu to search for the Timeshift application.




Click on the Timeshift icon and the system will prompt you for the Administrator’s password. Provide the password and click on Authenticate.




Next, select your preferred snapshot type.



Click ‘Next’ and select the destination drive for the snapshot. In this case, my location is the external USB drive labeled /dev/sdb.



Next, define the snapshot levels. Levels refer to the intervals at which the snapshots are created. You can choose to have monthly, weekly, daily, or hourly snapshot levels.




Click ‘Finish’. On the next window, click on the ‘Create’ button and the system will begin creating the snapshot.



Finally, your snapshot will be displayed as shown.
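
If you prefer working from the terminal, Timeshift also ships with a command-line interface that can do the same job. A minimal sketch (run as root; confirm the available flags with timeshift --help on your version):


# timeshift --create --comments "first snapshot" --tags D

# timeshift --list


The --create option takes a snapshot immediately (tagged here as a daily one), and --list shows the snapshots Timeshift currently knows about.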



Restoring Ubuntu / Debian from a snapshot

Having created a system snapshot, let’s now see how you can restore your system from the same snapshot. On the same Timeshift window, click on the snapshot and click on the ‘Restore’ button as shown.





Next, you will be prompted to select the target device. Leave the default selection and hit ‘Next’.



A dry run will be performed by Timeshift before the restore process commences.



In the next window, hit the ‘Next’ button to confirm the actions displayed.



You’ll get a warning and a disclaimer as shown. Click ‘Next’ to initialize the restoration process.

The restore process will then commence, and the system will finally reboot into the earlier version defined by the snapshot.




As you have seen, it is quite easy to use TimeShift to restore your system from a snapshot. It comes in handy when backing up system files and allows you to recover in the event of a system fault. So don’t be scared to tinker with your system or to mess something up: TimeShift gives you the ability to go back to a point in time when everything was running smoothly.


Published in GNU/Linux Rules!

Basic rsync commands are usually enough to manage your Linux backups, but a few extra options add speed and power to large backup sets.



It seems clear that backups are always a hot topic in the Linux world. Back in 2017, David Both offered Opensource.com readers tips on "Using rsync to back up your Linux system," and earlier this year, he published a poll asking us, "What's your primary backup strategy for the /home directory in Linux?" In another poll this year, Don Watkins asked, "Which open source backup solution do you use?"

My response is rsync. I really like rsync! There are plenty of large and complex tools on the market that may be necessary for managing tape drives or storage library devices, but a simple open source command line tool may be all you need.

Basic rsync

I managed the binary repository system for a global organization with roughly 35,000 developers and multiple terabytes of files. I regularly moved or archived hundreds of gigabytes of data at a time, and rsync was the tool I used. This experience gave me confidence in this simple tool. (So, yes, I use it at home to back up my Linux systems.)



The basic rsync command is simple.

rsync -av SRC DST

Indeed, the rsync commands taught in any tutorial will work fine for most general situations. However, suppose we need to back up a very large amount of data. Something like a directory with 2,000 sub-directories, each holding anywhere from 50GB to 700GB of data. Running rsync on this directory could take a tremendous amount of time, particularly if you're using the checksum option, which I prefer.

Performance is likely to suffer if we try to sync large amounts of data or sync across slow network connections. Let me show you some methods I use to ensure good performance and reliability.


Advanced rsync

One of the first lines that appears when rsync runs is: "sending incremental file list." If you do a search for this line, you'll see many questions asking things like: "Why is it taking forever?" or "Why does it seem to hang up?"

Here's an example based on this scenario. Let's say we have a directory called /storage that we want to back up to an external USB device mounted at /media/WDPassport.

If we want to back up /storage to a USB external drive, we could use this command:

rsync -cav /storage /media/WDPassport


The c option tells rsync to use file checksums instead of timestamps to determine changed files, and this usually takes longer. In order to break down the /storage directory, I sync by subdirectory, using the find command. Here's an example:

find /storage -type d -exec rsync -cav {} /media/WDPassport \;


This looks OK, but if there are any files in the /storage directory, they will not be copied. So, how can we sync the files in /storage? There is also a small nuance where certain options will cause rsync to sync the . directory, which is the root of the source directory; this means it will sync the subdirectories twice, and we don't want that.

Long story short, the solution I settled on is a "double-incremental" script. This allows me to break down a directory, for example, breaking /home into the individual users' home directories or in cases when you have multiple large directories, such as music or family photos.

Here is an example of my script:


# HOMES and DRIVE are example values; set them to your own users and backup drive.
HOMES="user1 user2"
DRIVE=media/WDPassport

for HOME in $HOMES; do
     cd /home/$HOME
     rsync -cdlptgov --delete . /$DRIVE/$HOME
     find . -maxdepth 1 -type d -not -name "." -exec rsync -crlptgov --delete {} /$DRIVE/$HOME \;
done


The first rsync command copies the files and directories that it finds in the source directory. However, it leaves the directories empty so we can iterate through them using the find command. This is done by passing the d argument, which tells rsync not to recurse the directory.

-d, --dirs                  transfer directories without recursing


The find command then passes each directory to rsync individually. Rsync then copies the directories' contents. This is done by passing the r argument, which tells rsync to recurse the directory.

-r, --recursive             recurse into directories


This keeps the incremental file list that rsync builds at a manageable size.

Most rsync tutorials use the a (or archive) argument for convenience. This is actually a compound argument.

-a, --archive               archive mode; equals -rlptgoD (no -H,-A,-X)


The other arguments that I pass would have been included in the a; those are l, p, t, g, and o.

-l, --links                 copy symlinks as symlinks
-p, --perms                 preserve permissions
-t, --times                 preserve modification times
-g, --group                 preserve group
-o, --owner                 preserve owner (super-user only)


The --delete option tells rsync to remove any files on the destination that no longer exist on the source. This way, the result is an exact duplication. You can also add an exclude for the .Trash directories or perhaps the .DS_Store files created by MacOS.

-not -name ".Trash*" -not -name ".DS_Store"
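
Folding those predicates into the earlier find invocation, the inner loop of the script would look something like this (a sketch assembled from the pieces above):

find . -maxdepth 1 -type d -not -name "." -not -name ".Trash*" -not -name ".DS_Store" -exec rsync -crlptgov --delete {} /$DRIVE/$HOME \;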


Be careful

One final recommendation: rsync can be a destructive command. Luckily, its thoughtful creators provided the ability to do "dry runs." If we include the n option, rsync will display the expected output without writing any data.

rsync -cdlptgovn --delete . /$DRIVE/$HOME

This script is scalable to very large storage sizes and large latency or slow link situations. I'm sure there is still room for improvement, as there always is. If you have suggestions, please share them in the comments.

Source: opensource.com


Marielle Price 

Published in GNU/Linux Rules!
Sunday, 21 April 2019 11:19

Learn To Navigate Directories Faster In Linux

Today we are going to learn some command line productivity hacks. As you already know, we use the “cd” command to move between directories in Unix-like operating systems. In this guide, I am going to teach you how to navigate directories faster without having to use the “cd” command so often. There could be many ways, but I only know the following five methods right now! I will keep updating this guide when I come across any other methods or utilities to achieve this task in the days to come.

Five Different Methods To Navigate Directories Faster In Linux

Method 1: Using “Pushd”, “Popd” And “Dirs” Commands

This is the method I use most frequently, every day, to navigate between a stack of directories. The “pushd”, “popd”, and “dirs” commands come built into Bash and most other shells, so there is nothing to install. This trio of commands is quite useful when you’re working in a deep directory structure or in scripts. For more details, check our guide in the link given below.
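
Here is a quick taste of how the trio works together (a minimal sketch using the Bash builtins; the directories are just examples):

$ pushd /etc        # save the current directory on the stack and cd to /etc
$ pushd /var/log    # push again and cd to /var/log
$ dirs -v           # list the directory stack with index numbers
$ popd              # drop the top entry and jump back to /etc
$ popd              # back to where you started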

Method 2: Using “bd” utility

The “bd” utility also helps you to quickly go back to a specific parent directory without having to repeatedly type “cd ../../..” in your Bash shell.

Bd is also available in the Debian extra and Ubuntu universe repositories. So, you can install it using “apt-get” package manager in Debian, Ubuntu and other DEB based systems as shown below:

$ sudo apt-get update
$ sudo apt-get install bd

For other distributions, you can install as shown below.

$ sudo wget --no-check-certificate -O /usr/local/bin/bd https://raw.github.com/vigneshwaranr/bd/master/bd
$ sudo chmod +rx /usr/local/bin/bd
$ echo 'alias bd=". bd -si"' >> ~/.bashrc
$ source ~/.bashrc

To enable auto completion, run:

$ sudo wget -O /etc/bash_completion.d/bd https://raw.github.com/vigneshwaranr/bd/master/bash_completion.d/bd
$ source /etc/bash_completion.d/bd

The bd utility has now been installed. Let us see a few examples to understand how to quickly move through a stack of directories using this tool.

Create some directories.

$ mkdir -p dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10

The above command will create a hierarchy of directories. Let us check the directory structure using the command:

$ tree dir1/
dir1/
└── dir2
    └── dir3
        └── dir4
            └── dir5
                └── dir6
                    └── dir7
                        └── dir8
                            └── dir9
                                └── dir10

9 directories, 0 files

Alright, we now have 10 directories. Let us say you’re currently in the 7th directory, i.e., dir7.

$ pwd
/home/sk/dir1/dir2/dir3/dir4/dir5/dir6/dir7

You want to move to dir3. Normally you would type:

$ cd /home/sk/dir1/dir2/dir3

Right? Yes! But it’s not necessary! To go back to dir3, just type:

$ bd dir3

Now you will be in dir3.


Navigate Directories Faster In Linux Using “bd” Utility

Easy, isn’t it? It supports auto complete, so you can just type the partial name of a directory and hit the tab key to auto complete the full path.

To check the contents of a specific parent directory, you don’t need to be inside that particular directory. Instead, just type:

$ ls `bd dir1`

The above command will display the contents of dir1 from your current working directory.
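
The same backtick trick works with other commands, too. For example, to copy a file into dir2 without leaving your current directory (the file name here is just an example):

$ cp notes.txt `bd dir2`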

For more details, check out the following GitHub page.

Method 3: Using “Up” Shell script

“Up” is a shell script that allows you to move quickly to a parent directory. It works well on many popular shells, such as Bash, Fish, and Zsh. Installation is absolutely easy, too!

To install “Up” on Bash, run the following commands one by one:

$ curl --create-dirs -o ~/.config/up/up.sh https://raw.githubusercontent.com/shannonmoeller/up/master/up.sh
$ echo 'source ~/.config/up/up.sh' >> ~/.bashrc

The up script registers the “up” function and some completion functions via your “.bashrc” file.

Update the changes using command:

$ source ~/.bashrc

On zsh:

$ curl --create-dirs -o ~/.config/up/up.sh https://raw.githubusercontent.com/shannonmoeller/up/master/up.sh
$ echo 'source ~/.config/up/up.sh' >> ~/.zshrc

The up script registers the “up” function and some completion functions via your “.zshrc” file.

Update the changes using command:

$ source ~/.zshrc

On fish:

$ curl --create-dirs -o ~/.config/up/up.fish https://raw.githubusercontent.com/shannonmoeller/up/master/up.fish
$ source ~/.config/up/up.fish

The up script registers the “up” function and some completion functions via “funcsave”.

Now it is time to see some examples.

Let us create some directories.

$ mkdir -p dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10

Let us say you’re in the 7th directory, i.e., dir7.

$ pwd
/home/sk/dir1/dir2/dir3/dir4/dir5/dir6/dir7

You want to move to dir3. Using the “cd” command, we could do this by typing the following command:

$ cd /home/sk/dir1/dir2/dir3

But it is really easy to go back to dir3 using “up” script:

$ up dir3

That’s it. Now you will be in dir3. To go one directory up, just type:

$ up 1

To go up two directories, type:

$ up 2

It’s that simple. Did I type the full path? Nope. It also supports tab completion, so just type the partial directory name and hit the Tab key to complete the full path.

For more details, check out the GitHub page.

Please be mindful that the “bd” and “up” tools can only help you go backward, i.e., to a parent directory of the current working directory. You can’t move forward. If you want to switch to dir10 from dir5, you can’t! Instead, you need to use the “cd” command to switch to dir10. These two utilities are meant for quickly moving you to a parent directory!

Method 4: Using “Shortcut” tool

This is yet another handy method to switch between different directories quickly and easily. It is somewhat similar to the alias command: we create shortcuts to frequently used directories and use the shortcut name to go to the respective directory without having to type the full path. If you’re working in a deep directory structure with a stack of directories, this method will save you a lot of time. You can learn how it works in the guide given below.
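
As a rough illustration of the idea (plain Bash aliases, not the Shortcut tool itself; the names and paths are just examples):

$ alias work='cd ~/projects/work'
$ alias logs='cd /var/log'

After adding lines like these to your ~/.bashrc, typing work or logs jumps straight to the respective directory from anywhere.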

Method 5: Using “CDPATH” Environment variable

This method doesn’t require any installation. CDPATH is an environment variable. It is somewhat similar to the PATH variable, which contains many different paths concatenated using ‘:’ (colon). The main difference between the PATH and CDPATH variables is that PATH is searched when locating commands, whereas CDPATH is consulted only by the cd command.

I have the following directory structure.


Directory structure

As you see, there are four child directories under a parent directory named “ostechnix”.

Now add this parent directory to CDPATH using command:

$ export CDPATH=~/ostechnix

You can now instantly cd into the sub-directories of that parent directory (i.e., ~/ostechnix in our case) from anywhere in the filesystem.

For instance, currently I am in /var/mail/ location.

To cd into the ~/ostechnix/Linux/ directory, we don’t have to use the full path of the directory as shown below:

$ cd ~/ostechnix/Linux

Instead, just mention the name of the sub-directory you want to switch to:

$ cd Linux

It will automatically cd to ~/ostechnix/Linux directory instantly.


As you can see in the above output, I didn’t use the full path with the cd command. Instead, I just used the directory name.

Please note that CDPATH only allows you to quickly navigate to the direct child directories of the parent directory set in the CDPATH variable. It doesn’t help much for navigating a stack of directories (directories inside sub-directories, of course).

To find the values of CDPATH variable, run:

$ echo $CDPATH

Sample output would be:

/home/sk/ostechnix
Set multiple values to CDPATH

Similar to PATH variable, we can also set multiple values (more than one directory) to CDPATH separated by colon (:).

$ export CDPATH=.:~/ostechnix:/etc:/var:/opt
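
With that setting, cd tries each listed directory in turn when resolving a name. For example, since /etc is now in CDPATH, you can jump into /etc/apt from anywhere just by typing the following (assuming a system where /etc/apt exists):

$ cd apt
/etc/apt

Bash prints the resulting path whenever the directory is found through CDPATH rather than in the current directory.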

Make the changes persistent

As you already know, the above command (export) will only keep the value of CDPATH for the current shell session. To set the value of CDPATH permanently, just add it to your ~/.bashrc or ~/.bash_profile file.

$ vi ~/.bash_profile

Add the values:

export CDPATH=.:~/ostechnix:/etc:/var:/opt

Hit ESC key and type :wq to save and exit.

Apply the changes using command:

$ source ~/.bash_profile


To clear the value of CDPATH, use export CDPATH="". Or, simply delete the entire line from your ~/.bashrc or ~/.bash_profile file.

In this article, you have learned different ways to navigate a directory stack faster and more easily in Linux. As you can see, it’s not that difficult to browse a pile of directories faster. Now stop typing “cd ../../..” endlessly and use these tools instead. If you know any other tools or methods worth trying for navigating directories faster, feel free to let us know in the comment section below. I will review and add them to this guide.

And, that’s all for now. Hope this helps. More good stuff to come. Stay tuned!

Marielle Price

Published in GNU/Linux Rules!
Thursday, 28 February 2019 14:43

Do Linux distributions still matter with containers?


Some people say Linux distributions no longer matter with containers. Alternative approaches, like distroless and scratch containers, seem to be all the rage. It appears we are considering and making technology decisions based more on fashion sense and immediate emotional gratification than thinking through the secondary effects of our choices. We should be asking questions like: How will these choices affect maintenance six months down the road? What are the engineering tradeoffs? How does this paradigm shift affect our build systems at scale?




It's frustrating to watch. If we forget that engineering is a zero-sum game with measurable tradeoffs—advantages and disadvantages, with costs and benefits of different approaches—we do ourselves a disservice, we do our employers a disservice, and we do our colleagues who will eventually maintain our code a disservice. Finally, we do all of the maintainers (hail the maintainers!) a disservice by not appreciating the work they do.

Understanding the problem

To understand the problem, we have to investigate why we started using Linux distributions in the first place. I would group the reasons into two major buckets: kernels and other packages. Compiling kernels is actually fairly easy. Slackware and Gentoo (I still have a soft spot in my heart) taught us that. 


On the other hand, the tremendous amount of development and runtime software that needs to be packaged for a usable Linux system can be daunting. Furthermore, the only way you can ensure that millions of permutations of packages can be installed and work together is by using the old paradigm: compile it and ship it together as a thing (i.e., a Linux distribution). So, why do Linux distributions compile kernels and all the packages together? Simple: to make sure things work together.

First, let's talk about kernels. The kernel is special. Booting a Linux system without a compiled kernel is a bit of a challenge. It's the core of a Linux operating system, and it's the first thing we rely on when a system boots. Kernels have a lot of different configuration options when they're being compiled that can have a tremendous effect on how hardware and software run on one. A secondary problem in this bucket is that system software, like compilers, C libraries, and interpreters, must be tuned for the options you built into the kernel. Gentoo taught us this in a visceral way, which turned everyone into a miniature distribution maintainer.

Embarrassingly (because I have worked with containers for the last five years), I must admit that I have compiled kernels quite recently. I had to get nested KVM working on RHEL 7 so that I could run OpenShift on OpenStack virtual machines, in a KVM virtual machine on my laptop, as well as our Container Development Kit (CDK). #justsayin Suffice to say, I fired RHEL7 up on a brand new 4.X kernel at the time. Like any good sysadmin, I was a little worried that I missed some important configuration options and patches. And, of course, I had missed some things. Sleep mode stopped working right, my docking station stopped working right, and there were numerous other small, random errors. But it did work well enough for a live demo of OpenShift on OpenStack, in a single KVM virtual machine on my laptop. Come on, that's kinda' fun, right? But I digress…

Now, let's talk about all the other packages. While the kernel and associated system software can be tricky to compile, the much, much bigger problem from a workload perspective is compiling thousands and thousands of packages to give us a usable Linux system. Each package requires subject matter expertise. Some pieces of software require running only three commands: ./configure, make, and make install. Others require a lot of subject matter expertise ranging from adding users and configuring specific defaults in /etc to running post-install scripts and adding systemd unit files. The set of skills necessary for the thousands of different pieces of software you might use is daunting for any single person. But, if you want a usable system with the ability to try new software whenever you want, you have to learn how to compile and install the new software before you can even begin to learn to use it. That's Linux without a Linux distribution. That's the engineering problem you are agreeing to when you forgo a Linux distribution.

The point is that you have to build everything together to ensure it works together with any sane level of reliability, and it takes a ton of knowledge to build a usable cohort of packages. This is more knowledge than any single developer or sysadmin is ever going to reasonably learn and retain. Every problem I described applies to your container host (kernel and system software) and container image (system software and all other packages)—notice the overlap; there are compilers, C libraries, interpreters, and JVMs in the container image, too.

The solution

You already know this, but Linux distributions are the solution. Stop reading and send your nearest package maintainer (again, hail the maintainers!) an e-card (wait, did I just give my age away?). Seriously though, these people do a ton of work, and it's really underappreciated. Kubernetes, Istio, Prometheus, and Knative: I am looking at you. Your time is coming too, when you will be in maintenance mode, overused, and underappreciated. I will be writing this same article again, probably about Kubernetes, in about seven to 10 years.

First principles with container builds

There are tradeoffs to building from scratch and building from base images.

Building from base images

Building from base images has the advantage that most build operations are nothing more than a package install or update. It relies on a ton of work done by package maintainers in a Linux distribution. It also has the advantage that a patching event six months—or even 10 years—from now (with RHEL) is an operations/systems administrator event (yum update), not a developer event (that requires picking through code to figure out why some function argument no longer works).

Let's double-click on that a bit. Application code relies on a lot of libraries ranging from JSON munging libraries to object-relational mappers. Unlike the Linux kernel and Glibc, these types of libraries change with very little regard to breaking API compatibility. That means that three years from now your patching event likely becomes a code-changing event, not a yum update event. Got it, let that sink in. Developers, you are getting paged at 2 AM if the security team can't find a firewall hack to block the exploit.

Building from a base image is not perfect; there are disadvantages, like the size of all the dependencies that get dragged in. This will almost always make your container images larger than building from scratch. Another disadvantage is you will not always have access to the latest upstream code. This can be frustrating for developers, especially when you just want to get something out the door, but not as frustrating as being paged to look at a library you haven't thought about in three years that the upstream maintainers have been changing the whole time.

If you are a web developer and rolling your eyes at me, I have one word for you: DevOps. That means you are carrying a pager, my friend.

Building from scratch

Scratch builds have the advantage of being really small. When you don't rely on a Linux distribution in the container, you have a lot of control, which means you can customize everything for your needs. This is a best-of-breed model, and it's valid in certain use cases. Another advantage is you have access to the latest packages. You don't have to wait for a Linux distro to update anything. You are in control, so you choose when to spend the engineering work to incorporate new software.

Remember, there is a cost to controlling everything. Often, updating to new libraries with new features drags in unwanted API changes, which means fixing incompatibilities in code (in other words, shaving yaks). Shaving yaks at 2 AM when the application doesn't work is not fun. Luckily, with containers, you can roll back and shave the yaks the next business day, but it will still eat into your time for delivering new value to the business, new features to your applications. Welcome to the life of a sysadmin.

OK, that said, there are times that building from scratch makes sense. I will completely concede that statically compiled Golang programs and C programs are two decent candidates for scratch/distroless builds. With these types of programs, every container build is a compile event. You still have to worry about API breakage three years from now, but if you are a Golang shop, you should have the skillset to fix things over time.
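
To make that concrete, here is a minimal sketch of a scratch build for a statically compiled Go program (a multi-stage Docker build; the Go version, file names, and image tag are hypothetical examples):

cat > Dockerfile <<'EOF'
# Stage 1: compile a statically linked binary
FROM golang:1.12 AS build
WORKDIR /src
COPY main.go .
RUN CGO_ENABLED=0 go build -o /app main.go

# Stage 2: ship only the binary in an empty image
FROM scratch
COPY --from=build /app /app
ENTRYPOINT ["/app"]
EOF

docker build -t myapp-scratch .

Every container build is a compile event, and the final image contains nothing but your binary. That is exactly why the patching event three years from now lands on you instead of a package maintainer.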


Basically, Linux distributions do a ton of work to save you time—on a regular Linux system or with containers. The knowledge that maintainers have is tremendous and leveraged so much without really being appreciated. The adoption of containers has made the problem even worse because it's even further abstracted.

With container hosts, a Linux distribution offers you access to a wide hardware ecosystem, ranging from tiny ARM systems, to giant 128 CPU x86 boxes, to cloud-provider VMs. They offer working container engines and container runtimes out of the box, so you can just fire up your containers and let somebody else worry about making things work.

For container images, Linux distributions offer you easy access to a ton of software for your projects. Even when you build from scratch, you will likely look at how a package maintainer built and shipped things—a good artist is a good thief—so, don't undervalue this work.

So, thank you to all of the maintainers in Fedora, RHEL (Frantisek, you are my hero), Debian, Gentoo, and every other Linux distribution. I appreciate the work you do, even though I am a "container guy."

Original Post: opensource.com

Published in GNU/Linux Rules!

Back in 2016, I took down the shingle for my technology coaching business. Permanently. Or so I thought.

Over the last 10 months, a handful of friends and acquaintances have pulled me back into that realm. How? With their desire to dump That Other Operating System™ and move to Linux.

This has been an interesting experience, in no small part because most of the people aren't at all technical. They know how to use a computer to do what they need to do. Beyond that, they're not interested in delving deeper. That said, they were (and are) attracted to Linux for a number of reasons—probably because I constantly prattle on about it.


While bringing them to the Linux side of the computing world, I learned a few things about helping non-techies move to Linux. If someone asks you to help them make the jump to Linux, these eight tips can help you.

1. Be honest about Linux.

Linux is great. It's not perfect, though. It can be perplexing and sometimes frustrating for new users. It's best to prepare the person you're helping with a short pep talk.

What should you talk about? Briefly explain what Linux is and how it differs from other operating systems. Explain what you can and can't do with it. Let them know some of the pain points they might encounter when using Linux daily.

If you take a bit of time to ease them into Linux and open source, the switch won't be as jarring.

2. It's not about you.

It's easy to fall into what I call the power user fallacy: the idea that everyone uses technology the same way you do. That's rarely, if ever, the case.

This isn't about you. It's not about your needs or how you use a computer. It's about the needs and intentions of the person you're helping. Their needs, especially if they're not particularly technical, will be different from yours.

It doesn't matter if Ubuntu or Elementary or Manjaro aren't your distros of choice. It doesn't matter if you turn your nose up at desktop environments like GNOME, KDE, or Pantheon in favor of window managers like i3 or Ratpoison. The person you're helping might think otherwise.

Put your needs and prejudices aside and help them find the right Linux distribution for them. Find out what they use their computer for and tailor your recommendations for a distribution or three based on that.

3. Not everyone's a techie.

And not everyone wants to be. No one I've helped move to Linux in the last 10 months has any interest in compiling kernels or code, nor in editing and tweaking configuration files. Most of them will never crack open a terminal window. I don't expect them to be interested in doing any of that in the future, either.

Guess what? There's nothing wrong with that. Maybe they won't get the most out of Linux (whatever that means) by not embracing their inner geeks. Not everyone will want to take on challenges of, say, installing and configuring Slackware or Arch. They need something that will work out of the box.

4. Take stock of their hardware.

In an ideal world, we'd all have tricked-out, high-powered laptops or desktops with everything maxed out. Sadly, that world doesn't exist.

That probably includes the person you're helping move to Linux. They may have slightly (maybe more than slightly) older hardware that they're comfortable with and that works for them. Hardware that they might not be able to afford to upgrade or replace.

Also, remember that not everyone needs a system for heavy-duty development or gaming or audio and video production. They just need a computer for browsing the web, editing photos, running personal productivity software, and the like.

One person I recently helped adopt Linux had an Acer Aspire 1 laptop with 4GB of RAM and a 64GB SSD. That helped inform my recommendations, which revolved around a few lightweight Linux distributions.

5. Help them test-drive some distros.

The DistroWatch database contains close to 900 Linux distributions. You should be able to find three to five Linux distributions to recommend. Make a short list of the distributions you think would be a good fit for them. Also, point them to reviews so they can get other perspectives on those distributions.

When it comes time to take those Linux distributions for a spin, don't just hand someone a bunch of flash drives and walk away. You might be surprised to learn that most people have never run a live Linux distribution or installed an operating system. Any operating system. Beyond plugging the flash drives in, they probably won't know what to do.

Instead, show them how to create bootable flash drives and set up their computer's BIOS to start from those drives. Then, let them spend some time running the distros off the flash drives. That will give them a rudimentary feel for the distros and their window managers' quirks.
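
If you need a refresher yourself, writing a downloaded ISO to a flash drive from an existing Linux machine can be as simple as the following (a sketch; the ISO name is an example, and /dev/sdX must be replaced with the actual flash drive device, double-checked with lsblk first, because dd will overwrite whatever it points at):

$ sudo dd if=distro.iso of=/dev/sdX bs=4M status=progress && sync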

6. Walk them through an installation.

Running a live session with a flash drive tells someone only so much. They need to work with a Linux distribution for a couple or three weeks to really form an opinion of it and to understand its quirks and strengths.

There's a myth that Linux is difficult to install. That might have been true back in the mid-1990s, but today most Linux distributions are easy to install. You follow a few graphical prompts and let the software do the rest.

For someone who's never installed any operating system, installing Linux can be a bit daunting. They might not know what to choose when, say, they're asked which filesystem to use or whether or not to encrypt their hard disk.

Guide them through at least one installation. While you should let them do most of the work, be there to answer questions.

7. Be prepared to do a couple of installs.

As I mentioned a paragraph or two ago, using a Linux distribution for two weeks gives someone ample time to regularly interact with it and see if it can be their daily driver. It often works out. Sometimes, though, it doesn't.

Remember the person with the Acer Aspire 1 laptop? She thought Xubuntu was the right distribution for her. After a few weeks of working with it, that wasn't the case. There wasn't a technical reason—Xubuntu ran smoothly on her laptop. It was just a matter of feel. Instead, she switched back to the first distro she test drove: MX Linux. She's been happily using MX ever since.


8. Teach them to fish.

You can't always be there to be the guiding hand. Or to be the mechanic or plumber who can fix any problems the person encounters. You have a life, too.

Once they've settled on a Linux distribution, explain that you'll offer a helping hand for two or three weeks. After that, they're on their own. Don't completely abandon them. Be around to help with big problems, but let them know they'll have to learn to do things for themselves.

Introduce them to websites that can help them solve their problems. Point them to useful articles and books. Doing that will help make them more confident and competent users of Linux—and of computers and technology in general.

Final thoughts

Helping someone move to Linux from another, more familiar operating system can be a challenge—a challenge for them and for you. If you take it slowly and follow the advice in this article, you can make the process smoother.

Do you have other tips for helping a non-techie switch to Linux? Feel free to share them by leaving a comment.

Original post at: opensource.com

Published in GNU/Linux Rules!
Flowblade Linux video editor

Flowblade, a free and open source video editor for Linux, has had a major release, with important changes to its timeline editing workflow, new tools, and a new custom dark theme.

Flowblade features editing tools like move and trim; image compositing with 10 compositors; mix, zoom, move, and rotate animation capabilities; image and audio filtering (with 50+ image filters and 30+ audio filters); a built-in title tool; and a G'MIC effects tool, to name just a few of the things this tool can do. The video editing software supports most video and audio formats, depending on the installed MLT / FFmpeg codecs.



In previous versions, Flowblade used a film-style insert editing model as its default workflow, which some users found somewhat unintuitive, while others found it efficient and clean. To address this, Flowblade 2.0 introduces a configurable workflow that allows users to tweak the application. Two workflow presets are shown when the application first starts, from which the user can choose:

  • the Standard workflow preset, which has the Move tool as default tool, and presents a workflow similar to most editors
  • the Film Style workflow preset, which has the Insert tool as default, and uses insert-style editing (this was the default workflow in previous versions of Flowblade)

Flowblade 2.0 also includes some user interface updates aimed at cleaning up and modernizing its design. For this, a new custom dark theme was created and made the default, while some UI elements, like the panels, have been updated. Users can change to a regular dark or light theme from the Flowblade preferences.

Flowblade 2.0 also includes four new tools:

  • Keyframe tool - used for editing Volume and Brightness keyframes on the timeline with an overlay curves editor
  • Multitrim - combines the Trim, Roll, and Slip tools into a single tool that communicates the available edit action with context-sensitive cursor changes
  • Cut - an alternative way of performing cuts, in addition to the earlier method of cutting at the playhead
  • Ripple Trim - now a separate tool; previously it was part of the Trim tool



Also, the Overwrite tool name has been changed to Move.

Besides the new Keyframe tool, keyframe editing has received some other updates as well, including slider-to-keyframe editor functionality and buttons to move keyframes one frame forward or backward.

With the 2.0 series, Flowblade will continue to use Python 2, with the plan being to switch to Python 3 with the Flowblade 3.0 release.

The complete Flowblade 2.0 release announcement can be found here.

Published in GNU/Linux Rules!