
Try out furniture layouts, color schemes, and more in virtual reality before you go shopping in the real world.


There are three schools of thought on how to go about decorating a room:

  1. Buy a bunch of furniture and cram it into the room.
  2. Take careful measurements of each item of furniture, calculate the theoretical capacity of the room, then cram it all in, ignoring the fact that you've placed a bookshelf on top of your bed.
  3. Use a computer for pre-visualization.



Historically, I practiced the little-known fourth principle: don't have furniture. However, since I became a remote worker, I've found that a home office needs conveniences like a desk and a chair, a bookshelf for reference books and tech manuals, and so on. Therefore, I have been formulating a plan to populate my living and working space with actual furniture, made of actual wood rather than milk crates (or glue and sawdust, for that matter), with an emphasis on plan. The last thing I want is to bring home a great find from a garage sale only to discover that it doesn't fit through the door or that it's oversized compared to another item of furniture.

It was time to do what the professionals do. It was time to pre-viz.


Open source interior design

Sweet Home 3D is an open source (GPLv2) interior design application that helps you draw your home's floor plan and then define, resize, and arrange furniture. You can do all of this with precise measurements, down to fractions of a centimeter, without having to do any math and with the ease of basic drag-and-drop operations. And when you're done, you can view the results in 3D. If you can create a basic table (not the furniture kind) in a word processor, you can plan the interior design of your home in Sweet Home 3D.



Sweet Home 3D is a Java application, so it's universal. It runs on any operating system that can run Java, which includes Linux, Windows, MacOS, and BSD. Regardless of your OS, you can download the application from the website.

  1. On Linux, untar the archive. Right-click on the SweetHome3D file and select Properties. In the Permission tab, grant the file executable permission.
  2. On MacOS and Windows, expand the archive and launch the application. You must grant it permission to run on your system when prompted.


On Linux, you can also install Sweet Home 3D as a Snap package, provided you have snapd installed and enabled.
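For terminal users, the manual Linux install amounts to a handful of commands. The archive and directory names below are placeholders (they vary by version), and the launcher is simulated with a stub script here so the permission step can be demonstrated end-to-end:

```shell
#!/bin/sh
# Simulated walk-through of the manual Linux install.
# In real use, replace the stub below with the tarball extracted from
# the Sweet Home 3D download (file names vary by version).
set -e
cd "$(mktemp -d)"
mkdir SweetHome3D-demo
# Stand-in for the extracted SweetHome3D launcher:
printf '#!/bin/sh\necho "launching Sweet Home 3D"\n' > SweetHome3D-demo/SweetHome3D
cd SweetHome3D-demo
chmod +x SweetHome3D   # grant executable permission (the Properties dialog step)
./SweetHome3D          # prints: launching Sweet Home 3D
```

The `chmod +x` line is the command-line equivalent of ticking the executable box in your file manager's Permissions tab.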


Measures of success

First things first: Break out your measuring tape. To get the most out of Sweet Home 3D, you must know the actual dimensions of the living space you're planning for. You may or may not need to measure down to the millimeter or 16th of an inch; you know your own tolerance for variance. But you must get the basic dimensions, including measuring walls, windows, and doors.

Use common sense. For instance, when measuring doors, include the door frame; while it's not technically part of the door itself, it is part of the wall space that you probably don't want to cover with furniture.



Creating a room

When you first launch Sweet Home 3D, it opens a blank canvas in its default viewing mode, a blueprint view in the top panel, and a 3D rendering in the bottom panel. On my Slackware desktop computer, this works famously, but my desktop is also my video editing and gaming computer, so it's got a great graphics card for 3D rendering. On my laptop, this view was a lot slower. For best performance (especially on a computer not dedicated to 3D rendering), go to the 3D View menu at the top of the window and select Virtual Visit. This view mode renders your work from a ground-level point of view based on the position of a virtual visitor. That means you get to control what is rendered and when.


It makes sense to switch to this view regardless of your computer's power because an aerial 3D rendering doesn't provide you with much more detail than what you have in your blueprint plan. Once you have changed the view mode, you can start designing.


The first step is to define the walls of your home. This is done with the Create Walls tool, found to the right of the Hand icon in the top toolbar. Drawing walls is simple: Click where you want a wall to begin, click where you want it to end, and continue until your room is complete.



Once you close the walls, press Esc to exit the tool.


Defining a room

Sweet Home 3D is flexible on how you create walls. You can draw the outer boundary of your house first, and then subdivide the interior, or you can draw each room as conjoined "containers" that ultimately form the footprint of your house. This flexibility is possible because, in real life and in Sweet Home 3D, walls don't always define a room. To define a room, use the Create Rooms button to the right of the Create Walls button in the top toolbar.


If the room's floor space is defined by four walls, then all you need to do to define that enclosure as a room is double-click within the four walls. Sweet Home 3D defines the space as a room and provides you with its area in feet or meters, depending on your preference.



For irregular rooms, you must manually define each corner of the room with a click. Depending on the complexity of the room shape, you may have to experiment to find whether you need to work clockwise or counterclockwise from your origin point to avoid quirky Möbius-strip flooring. Generally, however, defining the floor space of a room is straightforward.




After you give the room a floor, you can change to the Arrow tool and double-click on the room to give it a name. You can also set the color and texture of the flooring, walls, ceiling, and baseboards.



None of this is rendered in your blueprint view by default. To enable room rendering in your blueprint panel, go to the File menu and select Preferences. In the Preferences panel, set Room rendering in plan to Floor color or texture.



Doors and windows


Once you've finished the basic floor plan, you can switch permanently to the Arrow tool.

You can find doors and windows in the left column of Sweet Home 3D, in the Doors and Windows category. You have many choices, so choose whatever is closest to what you have in your home.




To place a door or window into your plan, drag-and-drop it on the appropriate wall in your blueprint panel. To adjust its position and size, double-click the door or window.


Adding furniture

With the base plan complete, the part of the job that feels like work is over! From this point onward, you can play with furniture arrangements and other décor.

You can find furniture in the left column, organized by the room for which each is intended. You can drag-and-drop any item into your blueprint plan and control orientation and size with the tools visible when you hover your mouse over the item's corners. Double-click on any item to adjust its color and finish.


Visiting and exporting

To see what your future home will look like, drag the "person" icon in your blueprint view into a room.




You can strike your own balance between realism and just getting a feel for space, but your imagination is your only limit. You can get additional assets to add to your home from the Sweet Home 3D download page. You can even create your own furniture and textures with the Library Editor applications, which are optional downloads from the project site.

Sweet Home 3D can export your blueprint plan to SVG format for use in Inkscape, and it can export your 3D model to OBJ format for use in Blender. To export your blueprint, go to the Plan menu and select Export to SVG format. To export a 3D model, go to the 3D View menu and select Export to OBJ format.

You can also take "snapshots" of your home so that you can refer to your ideas without opening Sweet Home 3D. To create a snapshot, go to the 3D View menu and select Create Photo. The snapshot is rendered from the perspective of the person icon in the blueprint view, so adjust as required, then click the Create button in the Create Photo window. If you're happy with the photo, click Save.


Home sweet home

There are many more features in Sweet Home 3D. You can add a sky and a lawn, position lights for your photos, set ceiling height, add another level to your house, and much more. Whether you're planning for a flat you're renting or a house you're buying—or a house that doesn't even exist (yet)—Sweet Home 3D is an engaging and easy application that can entertain and help you make better purchasing choices when scurrying around for furniture, so you can finally stop eating breakfast at the kitchen counter and working while crouched on the floor.




Published in GNU/Linux Rules!
Tuesday, 15 October 2019 16:36

Benefits of centralizing GNOME in GitLab

The GNOME project's decision to centralize on GitLab is creating benefits across the community—even beyond the developers.



“What’s your GitLab?” is one of the first questions I was asked on my first day working for the GNOME Foundation—the nonprofit that supports GNOME projects, including the desktop environment, GTK, and GStreamer. The person was referring to my username on GNOME’s GitLab instance. In my time with GNOME, I’ve been asked for my GitLab a lot.

We use GitLab for basically everything. In a typical day, I get several issues and reference bug reports, and I occasionally need to modify a file. I don’t do this in the capacity of being a developer or a sysadmin. I’m involved with the Engagement and Inclusion & Diversity (I&D) teams. I write newsletters for Friends of GNOME and interview contributors to the project. I work on sponsorships for GNOME events. I don’t write code, and I use GitLab every day.


The GNOME project has been managed a lot of ways over the past two decades. Different parts of the project used different systems to track changes to code, collaborate, and share information both as a project and as a social space. However, the project decided that it needed to become more integrated, and the transition took about a year from conception to completion. There were a number of reasons GNOME wanted to switch to a single tool for use across the community. External projects touch GNOME, and providing them an easier way to interact with resources was important for the project, both to support the community and to grow the ecosystem. We also wanted to better track metrics for GNOME—the number of contributors, the type and number of contributions, and the developmental progress of different parts of the project.

When it came time to pick a collaboration tool, we considered what we needed. One of the most important requirements was that it must be hosted by the GNOME community; being hosted by a third party didn’t feel like an option, so that discounted services like GitHub and Atlassian. And, of course, it had to be free software. It quickly became obvious that the only real contender was GitLab. We wanted to make sure contribution would be easy. GitLab has features like single sign-on, which allows people to use GitHub, Google, GitLab.com, and GNOME accounts.

We agreed that GitLab was the way to go, and we began to migrate from many tools to a single tool. GNOME board member Carlos Soriano led the charge. With lots of support from GitLab and the GNOME community, we completed the process in May 2018.

There was a lot of hope that moving to GitLab would help grow the community and make contributing easier. Because GNOME previously used so many different tools, including Bugzilla and CGit, it’s hard to quantitatively measure how the switch has impacted the number of contributions. We can more clearly track some statistics though, such as the nearly 10,000 issues closed and 7,085 merge requests merged between June and November 2018. People feel that the community has grown and become more welcoming and that contribution is, in fact, easier.

People come to free software from all sorts of different starting points, and it’s important to try to even out the playing field by providing better resources and extra support for people who need them. Git, as a tool, is widely used, and more people are coming to participate in free software with those skills ready to go. Self-hosting GitLab provides the perfect opportunity to combine the familiarity of Git with the feature-rich, user-friendly environment provided by GitLab.

It’s been a little over a year, and the change is really noticeable. Continuous integration (CI) has been a huge benefit for development, and it has been completely integrated into nearly every part of GNOME. Teams that aren’t doing code development have also switched to using the GitLab ecosystem for their work. Whether it’s using issue tracking to manage assigned tasks or version control to share and manage assets, even teams like Engagement and I&D have taken up using GitLab.

It can be hard for a community, even one developing free software, to adapt to a new technology or tool. It is especially hard in a case like GNOME, a project that recently turned 22. After more than two decades of building a project like GNOME, with so many parts used by so many people and organizations, the migration was an endeavor that was only possible thanks to the hard work of the GNOME community and generous assistance from GitLab.

I find a lot of convenience in working for a project that uses Git for version control. It’s a system that feels comfortable and is familiar—it’s a tool that is consistent across workplaces and hobby projects. As a new member of the GNOME community, it was great to be able to jump in and just use GitLab. As a community builder, it’s inspiring to see the results: more associated projects coming on board and entering the ecosystem; new contributors and community members making their first contributions to the project; and increased ability to measure the work we’re doing to know it’s effective and successful.

It’s great that so many teams doing completely different things (such as what they’re working on and what skills they’re using) agree to centralize on any tool—especially one that is considered a standard across open source. As a contributor to GNOME, I really appreciate that we’re using GitLab.



DevSecOps evolves DevOps to ensure security remains an essential part of the process.


DevOps is well-understood in the IT world by now, but it's not flawless. Imagine you have implemented all of the DevOps engineering practices in modern application delivery for a project. You've reached the end of the development pipeline—but a penetration testing team (internal or external) has detected a security flaw and come up with a report. Now you have to re-initiate all of your processes and ask developers to fix the flaw.

This is not terribly tedious in a DevOps-based software development lifecycle (SDLC) system—but it does consume time and affects the delivery schedule. If security were integrated from the start of the SDLC, you might have tracked down the glitch and eliminated it on the go. But pushing security to the end of the development pipeline, as in the above scenario, leads to a longer development lifecycle.

This is the reason for introducing DevSecOps, which builds automated security checks into every stage of the software delivery cycle.

In modern DevOps methodologies, where containers are widely used by organizations to host applications, we see greater use of Kubernetes and Istio. However, these tools have their own vulnerabilities. For example, the Cloud Native Computing Foundation (CNCF) recently completed a Kubernetes security audit that identified several issues. All tools used in the DevOps pipeline need to undergo security checks while running in the pipeline, and DevSecOps pushes admins to monitor the tools' repositories for upgrades and patches.


What is DevSecOps?

Like DevOps, DevSecOps is a mindset or a culture that developers and IT operations teams follow while developing and deploying software applications. It integrates active and automated security audits and penetration testing into agile application development.



To utilize DevSecOps, you need to:

  • Introduce the concept of security right from the start of the SDLC to minimize vulnerabilities in software code.
  • Ensure everyone (including developers and IT operations teams) shares responsibility for following security practices in their tasks.
  • Integrate security controls, tools, and processes at the start of the DevOps workflow. These will enable automated security checks at each stage of software delivery.

DevOps has always been about including security—as well as quality assurance (QA), database administration, and everyone else—in the dev and release process. However, DevSecOps is an evolution of that process to ensure security is never forgotten as an essential part of the process.


Understanding the DevSecOps pipeline

A typical DevOps pipeline follows the phases of the SDLC: Plan, Code, Build, Test, Release, and Deploy. In DevSecOps, specific security checks are applied in each phase.


Plan: Execute security analysis and create a test plan to determine scenarios for where, how, and when testing will be done.

Code: Deploy linting tools and Git controls to secure passwords and API keys.

Build: While building code for execution, incorporate static application security testing (SAST) tools to track down flaws in code before deploying to production. These tools are specific to programming languages.

Test: Use dynamic application security testing (DAST) tools to test your application while in runtime. These tools can detect errors associated with user authentication, authorization, SQL injection, and API-related endpoints.

Release: Just before releasing the application, employ security analysis tools to perform thorough penetration testing and vulnerability scanning.

Deploy: After completing the above tests in runtime, send a secure build to production for final deployment.
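To make the flow concrete, the phases above can be sketched as a toy pipeline script that runs each phase's check in order and aborts delivery at the first failure. The `stage` function and its messages are illustrative placeholders, not real scanner commands:

```shell
#!/bin/sh
# Toy DevSecOps pipeline skeleton: one security check per SDLC phase.
# Tool invocations are placeholders; a real pipeline would call a linter,
# SAST/DAST scanner, or vulnerability scanner inside each stage.
set -e   # abort the whole delivery on the first failing stage

stage() {
    phase=$1; check=$2
    echo "[$phase] $check"
    # e.g. a real stage would run: some_scanner --fail-on-findings || exit 1
}

stage "Plan"    "security analysis and test plan review"
stage "Code"    "lint for hardcoded passwords and API keys"
stage "Build"   "static application security testing (SAST)"
stage "Test"    "dynamic application security testing (DAST)"
stage "Release" "penetration test and vulnerability scan"
stage "Deploy"  "promote the verified build to production"
```

Because of `set -e`, a failing check in any phase stops the pipeline before the Deploy stage runs, which is the core DevSecOps behavior: security gates every step rather than auditing the end result.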


DevSecOps tools

Tools are available for every phase of the SDLC. Some are commercial products, but most are open source. In my next article, I will talk more about the tools to use in different stages of the pipeline.

DevSecOps will play a more crucial role as enterprise security threats built on modern IT infrastructure continue to grow in complexity. However, the DevSecOps pipeline should improve incrementally over time, rather than relying on implementing all security changes simultaneously; incremental improvement reduces the risk of backtracking or failed application deliveries.



Wednesday, 09 October 2019 17:25

How to: Developing in the cloud (Eclipse Che IDE)



Eclipse Che offers Java developers an Eclipse IDE in a container-based cloud environment.




In the many, many technical interviews I've gone through in my professional career, I've noticed that I'm rarely asked questions that have definitive answers. Most of the time, I'm asked open-ended questions that do not have an absolutely correct answer but evaluate my prior experiences and how well I can explain things.

One interesting open-ended question that I've been asked several times is:


"As you start your first day on a project, what five tools do you install first and why?"


There is no single definitely correct answer to this question. But as a programmer who codes, I know the must-have tools that I cannot live without. And as a Java developer, I always include an integrated development environment (IDE)—and my two favorites are Eclipse IDE and IntelliJ IDEA.



My Java story

When I was a student at the University of Texas at Austin, most of my computer science courses were taught in Java. And as an enterprise developer working for different companies, I have mostly worked with Java to build various enterprise-level applications. So, I know Java, and most of the time I've developed with Eclipse. I have also used the Spring Tool Suite (STS), a variation of the Eclipse IDE that comes with Spring Framework plugins installed, and IntelliJ IDEA, which is not exactly open source (I prefer its paid edition), but which some Java developers favor for its faster performance and other fancy features.

Regardless of which IDE you use, installing your own developer IDE presents one common, big problem: "It works on my computer, and I don't know why it doesn't work on your computer."



Because a developer tool like Eclipse can be highly dependent on the runtime environment, library configuration, and operating system, the task of creating a unified sharing environment for everyone can be quite a challenge.


But there is a perfect solution to this. We are living in the age of cloud computing, and Eclipse Che provides an open source solution to running an Eclipse-based IDE in a container-based cloud environment.


From local development to a cloud environment

I want the benefits of a cloud-based development environment with the familiarity of my local system. That's a difficult balance to find.

When I first heard about Eclipse Che, it looked like the cloud-based development environment I'd been looking for, but I got busy with technology I needed to learn and didn't follow up with it. Then a new project came up that required a remote environment, and I had the perfect excuse to use Che. Although I couldn't fully switch to the cloud-based IDE for my daily work, I saw it as a chance to get more familiar with it.




Eclipse Che IDE has a lot of excellent features, but what I like most is that it is an open source framework that offers exactly what I want to achieve:

  • Scalable workspaces leveraging the power of cloud
  • Extensible and customizable plugins for different runtimes
  • A seamless onboarding experience to enable smooth collaboration between members


Getting started with Eclipse Che

Eclipse Che can be installed on any container-based environment. I run both CodeReady Workspaces 1.2 and Eclipse Che 7 on OpenShift, but I've also tried it on top of Minikube and Minishift.



You can also run Che on any container-based environment like OKD, Kubernetes, or Docker. Read the requirement guides to ensure your runtime is compatible with Che.



For instance, you can quickly install Eclipse Che if you launch OKD locally through Minishift, but make sure to have at least 5GB RAM to have a smooth experience.

There are various ways to install Eclipse Che; I recommend leveraging the Che command-line interface, chectl. Although it is still in an incubator stage, it is my preferred way because it gives multiple configuration and management options. You can also run the installation as an Operator. I decided to go with chectl since I did not want to take on both concepts at the same time. Che's quick-start provides installation steps for many scenarios.


Why cloud works best for me


Although the local installation of Eclipse Che works, I found the most painless way is to install it on one of the common public cloud vendors.

I like to collaborate with others in my IDE; working collaboratively is essential if you want your application to be something more than a hobby project. And when you are working at a company, there will be enterprise considerations around the application lifecycle of develop, test, and deploy for your application.

Eclipse Che's multi-user capability means each person owns an isolated workspace that does not interfere with others' workspaces, yet team members can still collaborate on application development by working in the same cluster. And if you are considering moving to Eclipse Che for something more than a hobby or testing, the cloud environment's multi-user features will enable a faster development cycle. This includes resource management to ensure resources are allocated to each environment, as well as security considerations like authentication and authorization (or specific needs like OpenID) that are important to maintaining the environment.

Therefore, moving Eclipse Che to the cloud early will be a good choice if your development experience is like mine. By moving to the cloud, you can take advantage of cloud-based scalability and resource flexibility while on the road.


Use Che and give back

I really enjoy this new development configuration that enables me to regularly code in the cloud. Open source enables me to do so in an easy way, so it's important for me to consider how to give back. All of Che's components are open source under the Eclipse Public License 2.0 and available on GitHub.

Consider using Che and giving back—either as a user by filing bug reports or as a developer to help enhance the project.




Monday, 30 September 2019 14:02

GNU Debugger: Practical tips


Learn how to use some of the lesser-known features of gdb to inspect and fix your code.



The GNU Debugger (gdb) is an invaluable tool for inspecting running processes and fixing problems while you're developing programs.

You can set breakpoints at specific locations (by function name, line number, and so on), enable and disable those breakpoints, display and alter variable values, and do all the standard things you would expect any debugger to do. But it has many other features you might not have experimented with. Here are five for you to try.

Conditional breakpoints

Setting a breakpoint is one of the first things you'll learn to do with the GNU Debugger. The program stops when it reaches a breakpoint, and you can run gdb commands to inspect it or change variables before allowing the program to continue.

For example, you might know that an often-called function crashes sometimes, but only when it gets a certain parameter value. You could set a breakpoint at the start of that function and run the program. The function parameters are shown each time it hits the breakpoint, and if the parameter value that triggers the crash is not supplied, you can continue until the function is called again. When the troublesome parameter triggers a crash, you can step through the code to see what's wrong.



(gdb) break sometimes_crashes
Breakpoint 1 at 0x40110e: file prog.c, line 5.
(gdb) run

Breakpoint 1, sometimes_crashes (f=0x7fffffffd1bc) at prog.c:5
5      fprintf(stderr,
(gdb) continue

Breakpoint 1, sometimes_crashes (f=0x7fffffffd1bc) at prog.c:5
5      fprintf(stderr,
(gdb) continue


To make this more repeatable, you could count how many times the function is called before the specific call you are interested in, and set an ignore count on that breakpoint (for example, "continue 30" to make it ignore the next 29 times it reaches the breakpoint).


But where breakpoints get really powerful is in their ability to evaluate expressions at runtime, which allows you to automate this kind of testing. Enter: conditional breakpoints.


(gdb) break sometimes_crashes if !f
Breakpoint 1 at 0x401132: file prog.c, line 5.
(gdb) run

Breakpoint 1, sometimes_crashes (f=0x0) at prog.c:5
5      fprintf(stderr,




Instead of having gdb ask what to do every time the function is called, a conditional breakpoint allows you to make gdb stop at that location only when a particular expression evaluates as true. If the execution reaches the conditional breakpoint location, but the expression evaluates as false, the debugger automatically lets the program continue without asking the user what to do.


Breakpoint commands


An even more sophisticated feature of breakpoints in the GNU Debugger is the ability to script a response to reaching a breakpoint. Breakpoint commands allow you to write a list of GNU Debugger commands to run whenever it reaches a breakpoint.

We can use this to work around the bug we already know about in the sometimes_crashes function and make it return from that function harmlessly when it provides a null pointer.

We can use silent as the first line to get more control over the output. Without this, the stack frame will be displayed each time the breakpoint is hit, even before our breakpoint commands run.


(gdb) break sometimes_crashes
Breakpoint 1 at 0x401132: file prog.c, line 5.
(gdb) commands 1
Type commands for breakpoint(s) 1, one per line.
End with a line saying just "end".
>silent
>if !f
 >printf "Skipping call\n"
 >return 0
 >continue
 >else
 >printf "Continuing\n"
 >continue
 >end
>end
(gdb) run
Starting program: /home/twaugh/Documents/GDB/prog
warning: Loadable section ".note.gnu.property" outside of ELF segments

#0  sometimes_crashes (f=0x0) at prog.c:5
5      fprintf(stderr,
Skipping call
[Inferior 1 (process 9373) exited normally]



Dump binary memory


GNU Debugger has built-in support for examining memory using the x command in various formats, including octal, hexadecimal, and so on. But I like to see two formats side by side: hexadecimal bytes on the left, and ASCII characters represented by those same bytes on the right.


When I want to view the contents of a file byte-by-byte, I often use hexdump -C (hexdump comes from the util-linux package). Here is gdb's x command displaying hexadecimal bytes:

(gdb) x/33xb mydata
0x404040 <mydata>:      0x02    0x01    0x00    0x02    0x00    0x00    0x00    0x01
0x404048 <mydata+8>:    0x01    0x47    0x00    0x12    0x61    0x74    0x74    0x72
0x404050 <mydata+16>:   0x69    0x62    0x75    0x74    0x65    0x73    0x2d    0x63
0x404058 <mydata+24>:   0x68    0x61    0x72    0x73    0x65    0x75    0x00    0x05
0x404060 <mydata+32>:   0x00
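For comparison, here is hexdump -C run on roughly the first 16 bytes from that example; the scratch-file path and the printf escapes are just for this demonstration:

```shell
# Write the first 16 bytes from the gdb example into a scratch file, then
# view them the way hexdump -C presents memory: hex on the left, ASCII on
# the right. (\107 is octal for 0x47 'G', \022 for 0x12.)
printf '\002\001\000\002\000\000\000\001\001\107\000\022attr' > /tmp/demo.bin
hexdump -C /tmp/demo.bin
# 00000000  02 01 00 02 00 00 00 01  01 47 00 12 61 74 74 72  |.........G..attr|
# 00000010
```

The side-by-side hex and ASCII columns make it much easier to spot embedded strings than the raw byte grid above.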



What if you could teach gdb to display memory just like hexdump does? You can, and in fact, you can use this method for any format you prefer.


By combining the dump command to store the bytes in a file, the shell command to run hexdump on the file, and the define command, we can make our own new hexdump command to use hexdump to display the contents of memory.



(gdb) define hexdump
Type commands for definition of "hexdump".
End with a line saying just "end".
>dump binary memory /tmp/dump.bin $arg0 $arg0+$arg1
>shell hexdump -C /tmp/dump.bin
>end





Those commands can even go in the ~/.gdbinit file to define the hexdump command permanently. Here it is in action:


(gdb) hexdump mydata sizeof(mydata)
00000000  02 01 00 02 00 00 00 01  01 47 00 12 61 74 74 72  |.........G..attr|
00000010  69 62 75 74 65 73 2d 63  68 61 72 73 65 75 00 05  |ibutes-charseu..|
00000020  00                                                |.|
00000021


Inline disassembly


Sometimes you want to understand more about what happened leading up to a crash, and the source code is not enough. You want to see what's going on at the CPU instruction level.

The disassemble command lets you see the CPU instructions that implement a function. But sometimes the output can be hard to follow. Usually, I want to see what instructions correspond to a certain section of source code in the function. To achieve this, use the /s modifier to include source code lines with the disassembly.


(gdb) disassemble/s main
Dump of assembler code for function main:
11    {
   0x0000000000401158 <+0>:     push   %rbp
   0x0000000000401159 <+1>:     mov    %rsp,%rbp
   0x000000000040115c <+4>:     sub    $0x10,%rsp

12      int n = 0;
   0x0000000000401160 <+8>:     movl   $0x0,-0x4(%rbp)

13      sometimes_crashes(&n);
   0x0000000000401167 <+15>:    lea    -0x4(%rbp),%rax
   0x000000000040116b <+19>:    mov    %rax,%rdi
   0x000000000040116e <+22>:    callq  0x401126 <sometimes_crashes>



This, along with info registers to see the current values of all the CPU registers and commands like stepi to step one instruction at a time, allow you to have a much more detailed understanding of the program.

Reverse debug


Sometimes you wish you could turn back time. Imagine you've hit a watchpoint on a variable. A watchpoint is like a breakpoint, but instead of being set at a location in the program, it is set on an expression (using the watch command). Whenever the value of the expression changes, execution stops, and the debugger takes control.

So imagine you've hit this watchpoint, and the memory used by a variable has changed value. This can turn out to be caused by something that occurred much earlier; for example, the memory was freed and is now being re-used. But when and why was it freed?

The GNU Debugger can solve even this problem because you can run your program in reverse!

It achieves this by carefully recording the state of the program at each step so that it can restore previously recorded states, giving the illusion of time flowing backward.

To enable this state recording, use the target record-full command. Then you can use impossible-sounding commands, such as:



  • reverse-step, which rewinds to the previous source line
  • reverse-next, which rewinds to the previous source line, stepping backward over function calls
  • reverse-finish, which rewinds to the point when the current function was about to be called
  • reverse-continue, which rewinds to the previous state in the program that would (now) trigger a breakpoint (or anything else that causes it to stop)


Here is an example of reverse debugging in action:

(gdb) b main
Breakpoint 1 at 0x401160: file prog.c, line 12.
(gdb) r
Starting program: /home/twaugh/Documents/GDB/prog

Breakpoint 1, main () at prog.c:12
12      int n = 0;
(gdb) target record-full
(gdb) c

Program received signal SIGSEGV, Segmentation fault.
0x0000000000401154 in sometimes_crashes (f=0x0) at prog.c:7
7      return *f;
(gdb) reverse-finish
Run back to call of #0  0x0000000000401154 in sometimes_crashes (f=0x0)
        at prog.c:7
0x0000000000401190 in main () at prog.c:16
16      sometimes_crashes(0);




These are just a handful of useful things the GNU Debugger can do. There are many more to discover. Which hidden, little-known, or just plain amazing feature of gdb is your favorite? Please share it in the comments.

Published in GNU/Linux Rules!


Access your Android device from your PC with this open source application based on scrcpy.



In the future, all the information you need will be just one gesture away, and it will all appear in midair as a hologram that you can interact with even while you're driving your flying car. That's the future, though, and until that arrives, we're all stuck with information spread across a laptop, a phone, a tablet, and a smart refrigerator. Unfortunately, that means when we need information from a device, we generally have to look at that device.

While not quite holographic terminals or flying cars, guiscrcpy by developer Srevin Saju is an application that consolidates multiple screens in one location and helps to capture that futuristic feeling.

Guiscrcpy is an open source (GNU GPLv3 licensed) project based on the award-winning scrcpy open source engine. With guiscrcpy, you can cast your Android screen onto your computer screen so you can view it along with everything else. Guiscrcpy supports Linux, Windows, and MacOS.

Unlike many scrcpy alternatives, Guiscrcpy is not a fork of scrcpy. The project prioritizes collaborating with other open source projects, so Guiscrcpy is an extension, or a graphical user interface (GUI) layer, for scrcpy. Keeping the Python 3 GUI separate from scrcpy ensures that nothing interferes with the efficiency of the scrcpy backend. You can screencast up to 1080p resolution and, because it uses ultrafast rendering and surprisingly little CPU, it works even on a relatively low-end PC.


Scrcpy, Guiscrcpy's foundation, is a command-line application, so it doesn't have GUI buttons to handle gestures, it doesn't provide a Back or Home button, and it requires familiarity with the Linux terminal. Guiscrcpy adds GUI panels to scrcpy, so any user can run it—and cast and control their device—without sending any information over the internet. Everything works over USB or WiFi (using only a local network). Guiscrcpy also adds a desktop launcher to Linux and Windows systems and provides compiled binaries for Linux and Windows.


Installing Guiscrcpy

Before installing Guiscrcpy, you must install its dependencies, most notably scrcpy. Possibly the easiest way to install scrcpy is with snap, which is available for most major Linux distributions. If you have snap installed and active, then you can install scrcpy with one easy command:


$ sudo snap install scrcpy


While it's installing, you can install the other dependencies. The Simple DirectMedia Layer (SDL 2.0) toolkit is required to display and interact with the phone screen, and the Android Debug Bridge (adb) command connects your computer to your Android phone.

On Fedora or CentOS:



$ sudo dnf install SDL2 android-tools


On Ubuntu or Debian:


$ sudo apt install SDL2 android-tools-adb


In another terminal, install the Python dependencies (run this from the guiscrcpy source directory, which provides requirements.txt):


$ python3 -m pip install -r requirements.txt --user
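Once the installs finish, you can sanity-check that the scrcpy and adb tools are actually on your PATH; this sketch only reports what it finds and changes nothing:

```shell
# Report whether each required tool is available on this machine.
for tool in scrcpy adb; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: missing"
  fi
done
```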


Setting up your phone


For your phone to accept an adb connection, it must have Developer Mode enabled. To enable Developer Mode on Android, go to Settings and select About phone. In About phone, find the Build number (it may be in the Software information panel). Believe it or not, to enable Developer Mode, tap Build number seven times in a row.




For full instructions on all the many ways you can configure your phone for access from your computer, read the Android developer documentation.

Once that's set up, plug your phone into a USB port on your computer (or ensure that you've configured it correctly to connect over WiFi).
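Before launching guiscrcpy, you can confirm that adb actually sees the phone; an authorized device shows up in the list with its serial number (the guard just keeps the snippet safe on machines without adb installed):

```shell
# List devices known to adb; an authorized phone appears as "<serial>  device".
if command -v adb >/dev/null 2>&1; then
  adb devices
else
  echo "adb not installed"
fi
```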


Using guiscrcpy

When you launch guiscrcpy, you see its main control window. In this window, click the Start scrcpy button. This connects to your phone, as long as it's set up in Developer Mode and connected to your computer over USB or WiFi.



It also includes a configuration-writing system, where you can write a configuration file to your ~/.config directory to preserve your preferences between uses.

The bottom panel of guiscrcpy is a floating window that helps you perform basic controlling actions. It has buttons for Home, Back, Power, and more. These are common functions on Android devices, but an important feature of this module is that it doesn't interact with scrcpy's SDL, so it can function with no lag. In other words, this panel communicates directly with your connected device through adb rather than scrcpy.




The project is in active development and new features are still being added. The latest build has an interface for gestures and notifications.

With guiscrcpy, you not only see your phone on your screen, but you can also interact with it, either by clicking the SDL window itself, just as you would tap your physical phone, or by using the buttons on the panels.



Guiscrcpy is a fun and useful application that provides features that ought to be official features of any modern device, especially a platform like Android. Try it out yourself, and add some futuristic pragmatism to your present-day digital life.



Published in GNU/Linux Rules!

Deepin OS is one of the most modern-looking Linux distros. If you are a fan of sleek design combined with an easy-to-use distro, get your hands on Deepin. It is also extremely easy to install. I am sure you'll love it.

The team has developed its own desktop environment based on Qt, and it uses a window manager derived from KDE Plasma's KWin, called dde-kwin. The Deepin team has also developed 30 native applications to make day-to-day tasks easier to complete.

Some of the native Deepin applications are Deepin Installer, Deepin File Manager, Deepin System Monitor, Deepin Store, Deepin Screen Recorder, Deepin Cloud Print, and so on. If you ever run out of options, do not forget that thousands of open source applications are also available in the store.

Development of Deepin started in 2004 under the name 'Hiwix', and it has been active since then. The distro's name has changed multiple times, but the motto has remained the same: provide a stable operating system that is easy to install and use.

The current version, Deepin OS 15.11, is based on the Debian stable branch. It was released on 19 July 2019 with some great features and many improvements and bug fixes.



Cloud sync

The most notable feature in this release is cloud sync. It is useful if you have multiple machines running Deepin or if you reset your Deepin installation frequently. The distro keeps your system settings in sync with cloud storage as soon as you sign in. If the installation is reset, the settings can be quickly imported from the cloud. Cloud sync covers system settings such as themes, sound settings, update settings, wallpaper, dock, and power settings. Unfortunately, the feature is currently available only to users with a Deepin ID in mainland China.

The team is testing the feature and will release it to the rest of the Deepin user base soon. Other user-friendliness-focused Linux distributions would do well to develop a similar feature. Cloud sync is especially useful for new Linux users: if they mess up their current installation, they don't have to set everything up again from scratch.




Deepin switched from dde-wm to dde-kwin in 15.10; dde-kwin consumes less memory and provides a faster, better user experience, and Deepin 15.11 brings more stability to it.

Deepin Store

The Deepin team has developed 30 native applications, among them the Deepin Store, which lets you easily browse and install applications from the distro's repositories. The new release ships with Deepin Store 5.3. The updated store app can now determine the user's region based on the Deepin ID's location. Another option has been added to the Deepin File Manager for burning files to CD/DVD. Though CDs and DVDs are largely a thing of the past, if you still need to burn data to one, it's extremely easy to do in Deepin.

To play media, the distro ships with the Deepin Movie application, which now supports loading subtitles via drag and drop: just drag the subtitle file and drop it on the player while the movie is playing. Besides these new features, there are more improvements and bug fixes in Deepin 15.11. If you're looking for a beautiful, feature-rich, and stable Linux distribution, Deepin could be the platform of your choice.

Published in GNU/Linux Rules!
Friday, 16 August 2019 19:35

GNOME desktop: Best extensions



Add functionality and features to your Linux desktop with these add-ons.


The GNOME desktop is the default graphical user interface for most of the popular Linux distributions and some of the BSD and Solaris operating systems. Currently at version 3, GNOME provides a sleek user experience, and extensions are available for additional functionality. We've covered GNOME extensions before, but to celebrate GNOME's 22nd anniversary, I decided to revisit the topic. Some of these extensions may already be installed, depending on your Linux distribution; if not, check your package manager.


 How to add extensions from the package manager
To install extensions that aren't in your distro, open the package manager and click Add-ons. Then click Shell Extensions at the top-right of the Add-ons screen, and you will see a button for Extension Settings and a list of available extensions.





1. GNOME Clocks

GNOME Clocks is an application that includes a world clock, alarm, stopwatch, and timer. You can configure clocks for different geographic locations. For example, if you regularly work with colleagues in another time zone, you can set up a clock for their location. You can access the World Clocks section in the top panel's drop-down menu by clicking the system clock. It shows your configured world clocks (not including your local time), so you can quickly check the time in other parts of the world.




2. GNOME Weather

GNOME Weather displays the weather conditions and forecast for your current location. You can access local weather conditions from the top panel's drop-down menu. You can also check the weather in other geographic locations using Weather's Places menu.



GNOME Clocks and Weather are small applications that have extension-like functionality. Both are installed by default on Fedora 30 (which is what I'm using). If you're using another distribution and don't see them, check the package manager. You can see both extensions in action in the image below.





3. Applications Menu
I think the GNOME 3 interface is perfectly enjoyable in its stock form, but you may prefer a traditional application menu. On Fedora 30, the Applications Menu extension was installed by default but not enabled. To enable it, click the Extension Settings button in the Add-ons section of the package manager and enable the Applications Menu extension.




Now you can see the Applications Menu in the top-left corner of the top panel.



4. More columns in applications view
The Applications view is set by default to six columns of icons, probably because GNOME needs to accommodate a wide array of displays. If you're using a wide-screen display, you can use the More columns in applications view extension to increase the columns. I find that setting it to eight makes better use of my screen by eliminating the empty columns on either side of the icons when I launch the Applications view.


Add system info to the top panel
The next three extensions provide basic system information to the top panel.

5. Harddisk LED shows a small hard drive icon with input/output (I/O) activity.
6. Load Average indicates Linux load averages taken over three time intervals.
7. Uptime Indicator shows system uptime; when it's clicked, it shows the date and time the system was started.


8. Sound Input and Output Device Chooser
Your system may have more than one audio device for input and output. For example, my laptop has internal speakers and sometimes I use a wireless Bluetooth speaker. The Sound Input and Output Device Chooser extension adds a list of your sound devices to the System Menu so you can quickly select which one you want to use.


9. Drop Down Terminal
Fellow writer Scott Nesbitt recommended the next two extensions. The first, Drop Down Terminal, enables a terminal window to drop down from the top panel by pressing a certain key; the default is the key above Tab; on my keyboard, that's the tilde (~) character. Drop Down Terminal has a settings menu for customizing transparency, height, the activation keystroke, and other configurations.


10. Todo.txt
Todo.txt adds a menu to the top panel for maintaining a file for Todo.txt task tracking. You can add or delete a task from the menu or mark it as completed.



11. Removable Drive Menu
The editor Seth Kenlon suggested Removable Drive Menu. It provides a drop-down menu for managing removable media, such as USB thumb drives. From the extension's menu, you can access a drive's files and eject it. The menu only appears when removable media is inserted.




12. GNOME Internet Radio
I enjoy listening to internet radio streams with the GNOME Internet Radio extension, which I wrote about in How to Stream Music with GNOME Internet Radio.


Please read more at opensource.com.



Published in GNU/Linux Rules!




Linux is fully capable of running not just for weeks but for years without a reboot. In some industries, that's exactly what Linux does, thanks to advances like kpatch and kGraft.

For laptop and desktop users, though, that metric is a little extreme. While it may not be a day-to-day reality, it’s at least a weekly reality that sometimes you have a good reason to reboot your machine. And for a system that doesn’t need rebooting often, Linux offers plenty of choices for when it’s time to start over.



Understand your options

Before continuing though, a note on rebooting. Rebooting is a unique process on each operating system. Even within POSIX systems, the commands to power down and reboot may behave differently due to different initialization systems or command designs.

Despite this factor, two concepts are vital. First, rebooting is rarely requisite on a POSIX system. Your Linux machine can operate for weeks or months at a time without a reboot if that’s what you need. There’s no need to "freshen up" your computer with a reboot unless specifically advised to do so by a software installer or updater. Then again, it doesn’t hurt to reboot, either, so it’s up to you.
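In fact, you can see exactly how long your machine has been running by reading /proc/uptime, a file the kernel provides on every Linux system:

```shell
# /proc/uptime's first field is seconds since boot; convert it to hours.
awk '{printf "up %.1f hours\n", $1 / 3600}' /proc/uptime
```

The second field, not used here, is the aggregate idle time across all CPUs.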

Second, rebooting is meant to be a friendly process, allowing time for programs to exit, files to be saved, temporary files to be removed, filesystem journals to be updated, and so on. Whenever possible, reboot using the intended interfaces, whether in a GUI or a terminal. If you force your computer to shut down or reboot, you risk losing unsaved and even recently saved data, and even corrupting important system information; you should only ever force your computer off when there's no other option.



Click the button

The first way to reboot or shut down Linux is the most common one, and the most intuitive for most desktop users regardless of their OS: It’s the power button in the GUI. Since powering down and rebooting are common tasks on a workstation, you can usually find the power button (typically with reboot and shut down options) in a few different places. On the GNOME desktop, it's in the system tray: 


It’s also in the GNOME Activities menu:


On the KDE desktop, the power buttons can be found in the Applications menu:


You can also access the KDE power controls by right-clicking on the desktop and selecting the Leave option, which opens the window you see here:


Other desktops provide variations on these themes, but the general idea is the same: use your mouse to locate the power button, and then click it. You may have to select between rebooting and powering down, but in the end, the result is nearly identical: Processes are stopped, nicely, so that data is saved and temporary files are removed, then data is synchronized to drives, and then the system is powered down.



Push the physical button

Most computers have a physical power button. If you press that button, your Linux desktop may display a power menu with options to shut down or reboot. This feature is provided by the Advanced Configuration and Power Interface (ACPI) subsystem, which communicates with your motherboard’s firmware to control your computer’s state.

ACPI is important but it’s limited in scope, so there’s not much to configure from the user’s perspective. Usually, ACPI options are generically called Power and are set to a sane default. If you want to change this setup, you can do so in your system settings.

On GNOME, open the system tray menu and select Activities, and then Settings. Next, select the Power category in the left column, which opens the following menu:


In the Suspend & Power Button section, select what you want the physical power button to do.

The process is similar across desktops. For instance, on KDE, the Power Management panel in System Settings contains an option for Button Event Handling.




After you configure how the button event is handled, pressing your computer’s physical power button follows whatever option you chose. Depending on your computer vendor (or parts vendors, if you build your own), a button press might be a light tap, or it may require a slightly longer push, so you might have to do some tests before you get the hang of it.

Beware of an over-long press, though, since it may shut your computer down without warning.



Run the systemctl command

If you operate more in a terminal than in a GUI desktop, you might prefer to reboot with a command. Broadly speaking, rebooting and powering down are processes of the init system—the sequence of programs that bring a computer up or down after a power signal (either on or off, respectively) is received.

On most modern Linux distributions, systemd is the init system, so both rebooting and powering down can be performed through the systemd user interface, systemctl. The systemctl command accepts, among many other options, halt (halts disk activity but does not cut power), reboot (halts disk activity and sends a reset signal to the motherboard), and poweroff (halts disk activity, and then cuts power). These commands are mostly equivalent to starting the target file of the same name.

For instance, to trigger a reboot:

sudo systemctl start reboot.target


Run the shutdown command

Traditional UNIX, before the days of systemd (and for some Linux distributions, like Slackware, that's still the present day), had commands specific to stopping a system. The shutdown command, for instance, can power down your machine, but it has several options to control exactly what that means.

This command requires a time argument, in minutes, so that shutdown knows when to execute. To reboot immediately, append the -r flag:

sudo shutdown -r now

To power down immediately:

sudo shutdown -P now

Or you can use the poweroff command:

sudo poweroff

To reboot after 10 minutes:

sudo shutdown -r 10

The shutdown command is a safe way to power off or reboot your computer, allowing disks to sync and processes to end. This command prevents new logins within the final 5 minutes of shutdown commencing, which is particularly useful on multi-user systems.

On many systems today, the shutdown command is actually just a call to systemctl with the appropriate reboot or power off option.


Run the reboot command

The reboot command, on its own, is basically a shortcut to shutdown -r now. From a terminal, this is the easiest and quickest reboot command:

sudo reboot

If your system is being blocked from shutting down (perhaps due to a runaway process), you can use the --force flag to make the system shut down anyway. However, this option skips the actual shutdown process, which can be abrupt for running processes, so it should only be used when the shutdown command is blocking you from powering down.

On many systems, reboot is actually a call to systemctl with the appropriate reboot or power off option.



Run the telinit command

On Linux distributions without systemd, there are up to 7 runlevels your computer understands. Different distributions can assign each mode uniquely, but generally, 0 initiates a halt state, and 6 initiates a reboot (the numbers in between denote states such as single-user mode, multi-user mode, a GUI prompt, and a text prompt).

These modes are defined in /etc/inittab on systems without systemd. On distributions using systemd as the init system, the /etc/inittab file is either missing, or it’s just a placeholder.
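You can check which of these situations applies to your own system with a quick, read-only sketch:

```shell
# On sysvinit systems, the default runlevel is the "id:N:initdefault:" line.
if [ -f /etc/inittab ]; then
  grep '^id:' /etc/inittab || echo "/etc/inittab exists but has no initdefault line"
else
  echo "/etc/inittab not present (probably a systemd system)"
fi
```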

The telinit command is the front-end to your init system. If you’re using systemd, then this command is a link to systemctl with the appropriate options.

To power off your computer by sending it into runlevel 0:

sudo telinit 0

To reboot using the same method:

sudo telinit 6

How unsafe this command is for your data depends entirely on your init configuration. Most distributions try to protect you from pulling the plug (or the digital equivalent of that) by mapping runlevels to friendly commands.

You can see for yourself what happens at each runlevel by reading the init scripts found in /etc/rc.d or /etc/init.d, or by reading the systemd targets in /lib/systemd/system/.


Apply brute force

So far I’ve covered all the right ways to reboot or shut down your Linux computer. To be thorough, I include here additional methods of bringing down a Linux computer, but by no means are these methods recommended. They aren’t designed as a daily reboot or shut down command (reboot and shutdown exist for that), but they’re valid means to accomplish the task.

If you try these methods, try them in a virtual machine. Otherwise, use them only in emergencies.




A step lower than the init system is the /proc filesystem, which is a virtual representation of nearly everything happening on your computer. For instance, you can view your CPUs as though they were text files (with cat /proc/cpuinfo), view how much power is left in your laptop’s battery, or, after a fashion, reboot your system.

There’s a provision in the Linux kernel for system requests (Sysrq on most keyboards). You can communicate directly with this subsystem using key combinations, ideally regardless of what state your computer is in; it gets complex on some keyboards because the Sysrq key can be a special function key that requires a different key to access (such as Fn on many laptops).

An option less likely to fail is using echo to insert information into /proc manually. First, make sure that the Sysrq system is enabled. Note that the shell performs the redirection before sudo runs, so use tee to write the file with root privileges:

echo 1 | sudo tee /proc/sys/kernel/sysrq

To reboot, you can use either Alt+Sysrq+B or type:

echo b | sudo tee /proc/sysrq-trigger

This method is not a reasonable way to reboot your machine on a regular basis, but it gets the job done in a pinch.
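For reference, b is only one of several documented Sysrq triggers; the comments below list a few others from the kernel's Sysrq documentation, and the command itself just reads the current Sysrq policy value without triggering anything:

```shell
# Other documented Sysrq triggers:
#   s - sync all mounted filesystems
#   u - remount all filesystems read-only
#   e - send SIGTERM to all processes except init
#   i - send SIGKILL to all processes except init
#   o - power the machine off
# (writing "b" reboots immediately, without syncing or unmounting)
cat /proc/sys/kernel/sysrq 2>/dev/null || echo "sysrq interface not available"
```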



Kernel parameters can be managed during runtime with sysctl. There are lots of kernel parameters, and you can see them all with sysctl --all. Most probably don’t mean much to you until you know what to look for, and in this case, you’re looking for kernel.panic.

You can query kernel parameters using the --value option:

sudo sysctl --value kernel.panic


If you get a 0 back, then the kernel you’re running has no special setting, at least by default, to reboot upon a kernel panic. That situation is fairly typical since rebooting immediately on a catastrophic system crash makes it difficult to diagnose the cause of the crash. Then again, systems that need to stay on no matter what might benefit from an automatic restart after a kernel failure, so it’s an option that does get switched on in some cases.
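Like other kernel parameters, kernel.panic is also exposed as a file under /proc, so this read is equivalent to the sysctl query above; a value of N greater than 0 means the kernel reboots N seconds after a panic:

```shell
# Read kernel.panic directly from the /proc filesystem.
cat /proc/sys/kernel/panic
```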

You can activate this feature as an experiment (if you're following along, try this in a virtual machine rather than on your actual computer):


sudo sysctl kernel.panic=1


Now, should your computer experience a kernel panic, it is set to reboot instead of waiting patiently for you to diagnose the problem. You can test this by simulating a catastrophic crash with sysrq. First, make sure that Sysrq is enabled:


echo 1 | sudo tee /proc/sys/kernel/sysrq

And then simulate a kernel panic:

echo c | sudo tee /proc/sysrq-trigger

Your computer reboots immediately.


Reboot responsibly

Knowing all of these options doesn't mean that you should use them all. Give careful thought to what you're trying to accomplish, and what the command you've selected will do. You don't want to damage your system by being reckless. That's what virtual machines are for. However, having so many options means that you're ready for most situations.

Have I left out your favorite method of rebooting or powering down a system? List what I’ve missed in the comments!



Source: opensource.com. Please visit and support the Linux project.


Published in GNU/Linux Rules!
Thursday, 18 July 2019 23:32

Final battle: Ubuntu vs Windows  




In this post, I will share my opinion on these two operating systems: Windows and Ubuntu. I will point out all the pros and cons of both. I will share some nice tips and tricks that might be handy in your daily workflow which will help you compare them or make you switch to one or the other.

Quick Comparison of Windows and Ubuntu

Category | Windows | Ubuntu
Hardware Requirements | Standard | Standard, and a lightweight option
Servers | Rarely used, worse option | Widely used, better option
Security | Relatively less secure | Relatively more secure
Privacy | Collects data by default | Doesn't collect data
Live CD/USB | No official versions | Available by default
Installing Software | App store (rarely used) and executable files (!) | App store, executable files (rarely used), and package managers
Updates | Intrusive system updates, separate update process for each app and the system | Unintrusive, centralized updates for all apps and the system
Attaching Hardware | Old hardware is incompatible | Old hardware is compatible
Features | Limited amount of features available by default | Many features available by default, all customizable
Gaming | All games are available for Windows natively | Only some games are available for Ubuntu natively, though there are compatibility options available
Customizability | Moderately customizable | Highly customizable
Cost | Paid | Free
Code | Closed source | Open source
Support | Paid support available, community support is worse | Paid support available, community support is far better
Popularity | More popular for desktops | Less popular for desktops
Bloatware | Many unnecessary apps installed by default | No bloatware installed by default
Beginner-Friendly | Yes | Yes
Editions | A few similar editions | Many editions with different features


Windows is by far the most popular operating system for personal computers. Almost all desktop PCs and laptops are sold with Windows preinstalled. The reason for this popularity dates back to the 90s, when there really were no other user-friendly alternatives, so people got used to it. Manufacturers then continued selling PCs running Windows so as not to introduce any learning curve to customers.

Exceptions are Apple's products that use Mac OS (or iOS). There are just a few manufacturers that offer desktop PCs and laptops with Ubuntu or another Linux-based operating system preinstalled. Some of the most popular devices are Dell's XPS series of laptops, since they have all of the latest hardware features, great design and, of course, Ubuntu preinstalled.

Speaking of Ubuntu, it's an operating system just like Windows and Mac OS are. It is developed by Canonical, a company that many geeks and nerds love to hate. However, it does a great job of offering a free and open source Linux-based operating system. Yes, it is free both as in free beer and as in freedom of speech. It costs $0 to obtain a copy, but donations are welcomed. Everything that Canonical does is free to study, modify, and distribute in modified form. Because of this, many variants (flavors) of the Ubuntu operating system are available today. Some of them are Xubuntu, Kubuntu, Linux Mint, Elementary OS, etc. All of them introduce some changes over the default Ubuntu to provide a slightly different feature set, while still relying on Ubuntu quite a lot. Canonical does the very same thing: the Ubuntu operating system is heavily based on the Debian operating system.
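If you're ever unsure which distribution a machine runs, or what it is based on, most modern distros record this in /etc/os-release; a hedged, read-only sketch:

```shell
# Print the distro name and, where declared, the distro it is based on.
if [ -r /etc/os-release ]; then
  . /etc/os-release
  echo "Distro: ${NAME:-unknown} ${VERSION_ID:-}"
  echo "Based on: ${ID_LIKE:-$ID}"
else
  echo "Distro: unknown (no /etc/os-release)"
fi
```

On an Ubuntu machine, the "Based on" line reports debian, reflecting exactly the lineage described above.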



Hardware Requirements

These are the minimum hardware requirements for Windows (10) and Ubuntu.




Windows 10

  • CPU: 1 GHz
  • RAM: 1 GB (or more) for the 32-bit version, 2 GB (or more) for the 64-bit version
  • GPU: 800 x 600 pixels output resolution with a color depth of 32 bits
  • Disk Space: 32 GB




Ubuntu

  • CPU: 2 GHz dual-core processor
  • RAM: 2 GB
  • GPU: 1024 x 768 screen resolution
  • Disk Space: 25 GB



Lubuntu and Xubuntu

These are essentially a lite version of Ubuntu with a lightweight desktop environment.

  • CPU: 300 MHz
  • RAM: 256 MB
  • GPU: 640 x 480 screen resolution
  • Disk Space: 1.5 GB

As you can see, the hardware required to run Ubuntu or Windows is quite similar. The myth that Ubuntu uses fewer resources is false. However, there are lightweight alternatives (Lubuntu and Xubuntu) that do use fewer resources, though they also lack some of the features I will mention later. If you need an even lighter alternative, there are other distros that run on 128 MB of RAM or less.
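To compare your own machine against these numbers, you can read the totals straight from /proc without any GUI tools:

```shell
# Total RAM in MB (MemTotal is reported in kB).
awk '/^MemTotal:/ {printf "RAM: %d MB\n", $2 / 1024}' /proc/meminfo
# Number of logical CPU cores.
echo "CPU cores: $(grep -c '^processor' /proc/cpuinfo)"
```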




Windows and Ubuntu Servers Compared

Both Ubuntu and Windows have a specialized edition for servers.

Windows Server is mainly used when you need to run Windows-specific software, like .NET apps. Ubuntu is basically used for everything else.

By default, Windows Server does have a GUI, but Ubuntu Server doesn’t. You can easily install a graphical control panel on your Ubuntu Server, but out of the box, Windows is more beginner-friendly.

Ubuntu servers are better than Windows servers in terms of cost, security, stability, compatibility, online documentation and help, uptime, and so on. The only reason to use a Windows server is if, and only if, your apps and software require it.

Most websites you visit are powered by a Linux server. Most online multiplayer games you play host their servers on a Linux server. This website is hosted on an Ubuntu server.

While you can still install a web stack (Apache, MySQL, and PHP, the Windows equivalent of LAMP) on a Windows server and run an app like WordPress, it's recommended to use Ubuntu servers because of their many advantages compared to Windows servers.



Security in Windows and Ubuntu

The fact is that the more popular an operating system is, the more people are interested in writing malicious software for it. Simply put, the effort of writing malware pays off better when there is a larger pool of potential targets.

Since Windows is widespread on personal computers in homes and offices, there is higher interest in stealing data from or damaging those PCs. Most of the vulnerabilities come from:

  • Vendors failing to secure their proprietary software utilities (Asus is the latest company that failed at this)
  • Vendors preinstalling malicious software on purpose (I'm talking to you, Lenovo)
  • Users not encrypting their data (it's like leaving the front door unlocked; seriously, take care of your data if you want it to be safe)
  • Users installing pirated (cracked) software; please don't do this
  • Users not following the recommended security procedures for scanning removable storage devices (USB sticks, external HDDs) and incoming emails (there are many posts on this topic already)

There is growth in attacks against Linux-based operating systems too, but their layered security is quite good. The majority of security flaws come from bugs in user space software, not the Linux kernel itself. The upside is that compromised user space software can only reach user space data (images, videos, documents), not the operating system itself. It is quite hard to bypass authentication and gain root privileges in order to attack the OS.

  • The Nvidia proprietary GPU driver had some security issues recently. The good news is that even though the vulnerability was present, the attacker could not use the bug to execute malicious code remotely, so no PCs were compromised over the network. An attacker with physical access to the PC is a different class of security problem, not a flaw of the OS itself, though on shared PCs this can be an issue. The worst part is that this is proprietary code, so no one other than Nvidia can fix it.
  • A piece of ransomware notable enough to have its own wiki page damaged tens of PCs. Tens of affected PCs is almost laughably few, but it is quite a story, because these kinds of attacks are really rare.

Long story short, there are plenty of security issues in Linux-based operating systems like Ubuntu too. But the truth is that it is very rare for those vulnerabilities to do any damage in the real world. Because the majority of the executed source code is freely available, many advanced users start patching a vulnerability within minutes of its announcement (this is one of the reasons to use open source software). That leaves the attacker hardly any time to plan and do real-world damage.

Since Ubuntu is not so popular for personal use, attackers don’t even bother spending time here. They try to attack the servers running Linux-based operating systems instead. Mostly, those servers offer features to users of other operating systems. For example, if a Linux-based computer that is acting as an email server is attacked, it can be used to send malicious emails to Windows users (and damage their PCs or data) while users of Linux-based operating systems like Ubuntu will remain safe. As I mentioned earlier, an attacker needs to gain root privilege to do damage on Linux-based operating systems.

So personal users are quite safe. Servers are almost as secure, but they need a bit more attention.

  • No risk of attaching malicious USB flash drives for home users
  • No risk of software preinstalled by vendors
  • No risk of installing pirated software, since most software is already free

If a user succumbs to a phishing or social engineering attack, the operating system they're using makes little to no difference. The same goes for connecting to an unsecured network or running an improperly configured system (firewall, weak passwords, login and user controls, etc.)



Privacy in Windows (or the lack thereof) vs Ubuntu

Windows doesn't take privacy seriously. Even though there are plenty of options that can be turned on and off by the user, the majority of privacy-intrusive behavior cannot be disabled. For example, keystrokes are recorded to improve spelling and other grammar-related features (which is nice), but not exclusively for that. The keystrokes are sent to third-party servers for analysis, and no one can be sure what else is done with them on the remote server, or what other content is sent along with them.

Ubuntu, on the other hand, is far more respectful of its users' privacy. Similar features, such as tracking user input, are present, but the behavior is quite different: that data is only used in debug logs when a program or the operating system crashes, and only if the user agrees to share the crash logs. Such data can be very useful for developers to find and fix the bug.

Because the source code of the operating system is freely available, users can review it themselves. Particularly privacy-conscious users can even modify the source code and redistribute the modified version. There are also specialized Linux distros that focus on privacy, like Tails.

As you can see, the user's freedom is far greater when the source code is freely available for modification and redistribution than when it is closed and inaccessible to public review.



Games in Ubuntu (or the lack thereof) vs Windows

One area where Windows is definitely better than Ubuntu is gaming. Although quite a lot of games are available for Ubuntu, virtually all games are available for Windows, while only a small portion of them run on Ubuntu.

The biggest influence here is Steam: the Proton compatibility layer bundled with the Steam client allows many Windows-exclusive titles to run on Linux-based operating systems too. Simply enable Proton in the Steam settings.

Recently I played some older titles like Crysis 2, GTA San Andreas and Dirt 3 (all of them made for Windows only) on Steam with Proton enabled, and I have not found any issues. Everything is as smooth as on Windows. The best thing is that enabling Proton takes only one click of the mouse.

The old-school way of playing Windows games on Ubuntu is with Wine or CrossOver (or KVM, for more advanced users). Wine and CrossOver are compatibility layers rather than full virtual machines; alternatively, you can use VirtualBox as a full virtual machine, install Windows in it, and run any game there. However, issues can appear with any of these approaches, so depending on the specific game and your hardware you might prefer one over the other.

A few open source projects have brought life to gaming on Linux lately: DXVK, Looking Glass, and Lutris. Long story short: no lag, no frame drops. Amazing! For more details and technical information, I highly recommend the Level 1 Linux channel on YouTube; there is some excellent content on this topic for those who are interested.

Although there are a number of options and a lot of games work on Linux-based operating systems, nothing beats native support. No matter how good a workaround is, native support is always superior. You can never be 100% sure that the next game will work with the current workaround, or that what works today will continue to work tomorrow; anything can break at any time. That's the nature of workarounds.

Also, there are specialized Linux distros for gaming that are optimized for games and have various emulators and games pre-installed. Linux distros are better than Windows when it comes to retro gaming with emulators.

At the end of the day, you can still play games on Ubuntu, but it’s highly unlikely that a new popular game will support Ubuntu, at least for the time being, whereas Windows support is almost guaranteed for each game.



Ubuntu’s Live CD/USB

Besides installing the operating system, Ubuntu offers a Live option, which allows you to use the operating system without installing it on the hard drive. In Live mode, the user has access to a wide variety of everyday software (a web browser, media player, text editor, calculator, file manager, etc.), all loaded directly from the installation media (CD or USB flash drive).

This option is very handy for recovering data from a damaged PC, troubleshooting a broken PC or simply to use on a CD/USB to boot on a shared PC if you have privacy concerns. So even if the PC is affected by the bugs of the Nvidia GPU driver mentioned earlier, you can simply boot from Live CD/USB on the shared PC and remain safe by using your own operating system.

It's like carrying a ready-to-use operating system in your pocket. The only downside is speed: it is noticeably slower than launching apps from a properly installed operating system.

There are no official live versions of Windows.



Installing additional software on Windows vs Ubuntu

Both Windows and Linux come with some necessary software preinstalled, like a media player, a web browser, etc. Installing additional software, however, is quite a different procedure on each.

Installing additional software on Windows can be done in two ways. If you use Windows 10, simply open the Store application, search the catalog, and click Install.



For older versions of Windows, installing additional software requires a web browser, a search engine, and an internet connection, or a different way of getting the software (CD, USB flash drive, etc.)

  • Open a web browser
  • Search for the software you want to install
  • Download the .exe file
  • Double click the .exe file to begin with the installation

As you can see, it is quite an involved process. However, people are used to it, so there are not many complaints. Beginners often make mistakes when downloading files and software from the internet, often ending up with illegitimate copies bundled with malware.

On the other hand, installing additional software on Ubuntu is very similar to what we actually do on our smartphones (Android or iOS).

  • Open Store app (Play Store or App Store)
  • Search the app you want
  • Click install

Done. Similar to Windows 10 (and 8), but not to previous versions.

The concept of putting all available applications in one centralized place has been present since the beginning of Ubuntu (2004), and it actually originates from the early days of Linux-based operating systems in the 90s. The same principle was later adopted by Android and, starting with Windows 8, by Windows.

You can still download executable files from the internet and install them on your Ubuntu, similarly to Windows.

If you’re using an Ubuntu server, you’ll need to use the CLI (command line interface) to install software. If you’re using a GUI (a control panel), in most cases, you can do it without running any code.
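As a sketch of what the CLI route looks like on an apt-based system such as Ubuntu Server: refresh the package index, then install. The wrapper below is purely illustrative and only prints the command when DRY_RUN=1, so it is safe to try; the package name is just an example.

```shell
#!/bin/sh
# Sketch: installing software from the CLI on an apt-based server.
# With DRY_RUN=1 the helper prints the command instead of running it.
install_pkg() {
  if [ "${DRY_RUN:-0}" = "1" ]; then
    echo "sudo apt update && sudo apt install -y $1"
  else
    sudo apt update && sudo apt install -y "$1"
  fi
}

DRY_RUN=1 install_pkg nginx   # → sudo apt update && sudo apt install -y nginx
```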

There are many different applications that allow searching through the list of available software packages on Linux distros. Some of them are designed to be run from the CLI while others have a nice GUI. Here are some examples:

  • apt
  • pacman
  • muon
  • Discover
  • GNOME Software Center
  • Elementary Apps

So basically you have multiple options to choose from when it comes to app stores.
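For the CLI tools in that list, searching the catalog looks roughly like this. A sketch, not a complete reference: apt belongs to Debian/Ubuntu and pacman to Arch, so only one of them will exist on a given system, and the helper function is purely illustrative.

```shell
#!/bin/sh
# Searching the package catalog from the CLI (illustrative commands):
#   apt search vlc    # Debian/Ubuntu and derivatives
#   pacman -Ss vlc    # Arch and derivatives
# A tiny helper that maps a manager name to its search invocation:
search_cmd() {
  case "$1" in
    apt)    echo "apt search $2" ;;
    pacman) echo "pacman -Ss $2" ;;
    *)      echo "unsupported: $1" >&2; return 1 ;;
  esac
}

search_cmd apt vlc      # → apt search vlc
search_cmd pacman vlc   # → pacman -Ss vlc
```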




Upgrading (updating) software on Windows vs Ubuntu

On Windows, there are multiple streams for acquiring updates. One stream is controlled by the operating system and updates the operating system only. Beside it, every other application runs its own background service to acquire updates. That's why Windows users often see popup messages about updates just after launching an application. While this approach can get the job done, it can be very annoying: users run applications to get a job done, not to maintain them, so any popup message counts as a distraction.




If you want to update all the installed apps on Windows, you'll need to go through each app and update it separately (if it wasn't installed via the Store).

On Ubuntu, there is only one stream for acquiring updates. It fetches the updated versions of the operating system itself and every other installed application as a scheduled task. Because of this, applications do not have to enable additional background services. The end result is no annoying pop-up messages when launching an application and no wasted resources while you are doing your job. The scheduled task for fetching all updates will run when your PC is idle so you will barely notice it. There are no forced updates like Windows has. No automatic restarts after updates like Windows has. In other words, everything is smooth, silent and done in the background with as little interaction as possible from the user.



What about downgrading to previous versions?

This is a very interesting use case. People rarely do it, but from time to time an updated version can cause problems, like incompatibility with third-party software or tools, or a minor bug might be introduced. Most of the time, issues like these are fixed in the next update in almost no time, but it can still be annoying.

On Windows, downgrading to an older version is almost impossible. There is no built-in mechanism for doing this but there is a workaround. It goes like this:

  • Uninstall the program.
  • Install an older version of the same program.

To do this, you must have the .exe installer for the desired (older) version of the program. If you don't have it, you have to obtain it manually: open a web browser, use a search engine, download the software, and so on. The downside is that the configuration might break if the current configuration files are not completely deleted, and even if they are deleted altogether there might be other bugs too. Users then have to redo the very same configuration, which can be tricky for complex workflows.

Ubuntu, on the other side, does have a built-in method for downgrading software. All .deb files (installation files) are cached on the file system by default and are removed only if the file system is low on free space or if too many older versions of a given application have piled up. The point is that the installation file of the previous version of an application is usually still there, so users don't have to waste time searching for it.

The sad thing is that this method works only from the command line (or at least I'm not aware of any GUI tool that can do it). Actually, that is just fine: this task is not aimed at regular users, so there is no need for clickable buttons for something that will hardly ever be used. Downgrading from the command line is a matter of copying and pasting one command and pressing Enter.
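On an apt-based system, the downgrade boils down to pinning an older version. A sketch with hypothetical package names and versions; `apt list -a` and `package=version` pinning are real apt features, while the helper function below is only illustrative.

```shell
#!/bin/sh
# Downgrading with apt. The package name "vlc" and the version string
# are examples only; substitute your own.
#   apt list -a vlc              # list the versions apt knows about
#   sudo apt install vlc=3.0.8   # pin (install) a specific older version
# Cached .deb files, when still present, can also be installed directly:
#   sudo dpkg -i /var/cache/apt/archives/<package>_<old-version>_amd64.deb
# A tiny helper that builds the pinned-install command:
downgrade_cmd() { echo "sudo apt install $1=$2"; }

downgrade_cmd vlc 3.0.8   # → sudo apt install vlc=3.0.8
```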

Be careful with the command line! Don't use it if you don't feel comfortable. It's a very powerful tool, so it must be used with care. It's better to ask IT support for help, or at least discuss your issue on the Ubuntu forums or other online Ubuntu communities, where you will be guided until the issue is resolved.


Attaching hardware on Windows vs Ubuntu

On Windows, back in the day, this process required manually installing a driver, and you had to be careful: drivers were distributed on CDs and there were many of them, one for Windows XP, another for Vista, and so on, each in both 32-bit and 64-bit variants. Installing the wrong driver could leave you with a broken PC. Not a pleasant experience. Windows 10 is quite nice in this regard: it can auto-detect hardware and install the required drivers automatically, which earlier versions of Windows could not.

One big problem remains unresolved: older drivers are not compatible with newer versions of Windows. Getting older hardware set up and running can be quite a challenge, if not impossible. I have a bunch of old USB devices that are still useful, but none of them work on Windows 10.

Ubuntu (and all of its derivatives like Kubuntu, Xubuntu, Elementary OS, etc.) rely on the Linux kernel for drivers. One of the principles of Linux kernel development is that compatibility must never be broken; a patch that breaks it will never be released for public use. That's why, if a hardware device worked once on a Linux-based operating system, it will keep working in the future.


Comparing features (workflow) on Windows and Kubuntu

An average user might be interested in a different flow for everyday tasks. Depending on the size and resolution of the screen, or the number of monitors attached, the very same actions can be performed more efficiently or at least made more visually appealing.

I'll use Kubuntu as an example, but these features are often the same or quite similar in other Ubuntu flavors.

So let’s compare some things here:


Virtual desktops

On Windows, this feature has been available since the release of Windows 10 (2015). It does not have many features besides spreading (grouping) windows across virtual desktops. I often use one desktop for chat apps and multimedia, while all the others are for job-related tasks, grouped by topic.





Kubuntu offers the very same feature, but it arrived much earlier than in Windows. I remember seeing it back in 2008 (I won't bother tracking down the exact first day of availability).





ALT + TAB for switching active windows

I bet this is the most used shortcut. Windows has a single UI for this and allows cycling in two directions: forward (ALT + TAB) and backward (ALT + SHIFT + TAB).




Kubuntu (my favorite flavor of Ubuntu) has the very same functionality, but with more options. It can have multiple UIs, so choose whichever is the most visually appealing to you.


In addition to the visually appealing UI, there is one very handy shortcut, ALT + ~ (with ALT + SHIFT + ~ for cycling in reverse order), that filters the scope of the windows being cycled: only instances of the currently active application are cycled, and all other windows are excluded.

For example, I first use ALT + TAB to switch between all windows (3 x Firefox, 1 x Steam and 1 x Clementine), then ALT + ~ to cycle through the Firefox instances only. There is an option to assign a separate shortcut for cycling through the windows of the current virtual desktop only, or through the windows of all virtual desktops.

On Windows, cycling is limited to the currently active virtual desktop and this can't be changed. On Kubuntu, in addition to ALT + TAB, there is one more option, called "Present Windows". It displays all windows side by side, with an option to show windows from the current virtual desktop only or from all virtual desktops.






Moving windows on the screen

Windows lets you move a window by pointing the cursor at the window frame (title bar), pressing and holding the left mouse button, and moving the cursor around; the window follows the cursor. Very simple and intuitive.


Kubuntu can do exactly the same. In addition, the user can drag a window by pointing anywhere on it (not just the title bar), as long as that area is not an input field; for example, you can move a window around by pressing and holding the left mouse button anywhere on its toolbar. If the window consists mostly of a big input area, there is a shortcut that bypasses this: hold the Windows key (Super key, or Meta key) while pressing and holding the left mouse button, and you can move the window no matter where you are pointing. I use this often; it is by far the most efficient way for me.



Copy and paste

On Windows, this is achieved with the CTRL + C shortcut for copying and CTRL + V for pasting. In addition, there is always the context menu: click the right mouse button and choose "Copy", then "Paste". Easy.

The very same actions work in Kubuntu too. In addition, there are some very nice features built around copying and pasting. For example, there is a history of recently copied items, which can be viewed by clicking the notes icon in the system tray menu in the bottom right corner of the screen.




The very same history can be opened in a context menu next to the cursor with the CTRL + ALT + V shortcut. This is a built-in option, but the shortcut must be assigned manually.




The point of the history is that you can paste an item that was copied in the past again, and multiple times. Once you get used to this feature, it is very handy: multiply the time it takes to copy an item by the number of times you paste it, and you will realize how much time is wasted without it. To save even more time, there is search functionality in both the context menu and the system tray app, which filters the list by the text you type in the search field. Typing a few letters is much faster than scrolling back to something copied early in the morning.

Windows has a simpler built-in clipboard manager, where you can only view, pin to the top, and use recently copied items. You can access the clipboard manager by pressing the Windows key + V.




To get the features Kubuntu has, like search, on Windows, you'd have to install additional software. You may also need to activate the Clipboard feature in the Windows Settings app before you can use it.



Window buttons

Windows puts the close, minimize and maximize buttons in the top right corner of the window. It's simple, and it works.




Kubuntu, on the other hand, is more flexible: you can move the buttons to the left or right corner if desired.

You can also add more buttons, or remove them, if desired. Two of my favorite additional buttons are "keep above others" (always on top) and "pin to all desktops" (visible on all desktops). The always-on-top button prevents the window from going behind other windows; I use it frequently to keep my terminal, calculator or notes app on top while using a web browser or programming.

The pin-to-all-desktops button makes the window visible in all workspaces. I use it to keep a particular spreadsheet document visible on all virtual desktops, or to reach the same terminal emulator from any virtual desktop.


Smartphone connectivity

There is no built-in option for synchronizing an Android smartphone with a Windows PC (or maybe I am just not aware of it). It is possible with third-party software like Pushbullet and others.

On Kubuntu, there is a very handy tool called KDE Connect, and it is available by default. This is a wonderful syncing app. Its features include:

  • Browsing the smartphone file system (SD card / built-in storage) from your PC.
    So you can access all the documents stored on your smartphone from your PC.
  • Synchronizing clipboard (copy – paste) items.
    You can copy a paragraph of text on one of your devices and it will be available on the other devices ready for pasting.
  • Receive notifications.
    All notifications that appear on one device are instantly shown on the other devices too. I often use this to generate a notification for my long-running (compile) jobs: when the job is done, a notification pops up on my smartphone while I'm resting in my garden. Before KDE Connect, I checked the PC every 10 to 15 minutes to see if the job was done.
  • Phone calls and SMS.
    You can receive SMS notifications and respond to them from the PC. When a phone call is ongoing the volume of the PC is automatically turned down to not disturb the call and it’s turned up again when the phone call is over. I adore this.
  • Mouse and keyboard emulation.
    You can use your smartphone as a touchpad to control the cursor, you can also use your smartphone keyboard to input text to your PC.
  • Multimedia remote.
    There is remote functionality that works with almost any media player on your PC. You can use the smartphone to play/pause, switch content, etc.

The best thing is that KDE Connect is integrated as a widget in the system tray. It runs silently in the background and is enabled by default. Just install the KDE Connect Android app on your smartphone; pairing is a very simple process requiring just a few clicks.

All or some of these features are available on Windows only if you install a third-party app, while on Kubuntu they are installed by default.



File Managers

Windows has a very basic File Manager. It is simple and not feature rich.

Kubuntu, on the other hand, has a feature-rich file manager (Dolphin). It supports tabs, similar to web browsers.




There is also a split option, so the user can see multiple locations on the same screen, and a terminal emulator can be opened at the current location. I've noticed that a lot of advanced users are not aware of this.





All toolbars can be rearranged to whatever edge desired. No restrictions at all.

Spending some time customizing the file manager can definitely improve your workflow by reducing the clutter of having multiple windows open. Every button on the toolbar can be removed, new ones can be added, and almost every action the file manager can perform can be placed on the toolbar for quick access.

The Windows file manager is not that customizable and doesn’t have all the features that Kubuntu’s file manager has.




Start menu and search

Windows 10 has improved search functionality built into the Start menu. It can search for applications, files and folders on your PC, and the search can be expanded to web sources. One downside is that all web queries go through the Bing search engine. Bing is fine, but you cannot switch to another search engine like Google or DuckDuckGo. The Start menu can use the traditional layout or a full-screen layout:

Kubuntu offers similar functionality. The default start menu can search for applications, files or folders stored on your PC. The search function is also nicely integrated with the Firefox and Chromium (Google Chrome without any proprietary code) web browsers, and maybe some others too, so a user can search their bookmarks from the start menu. It can expand the search to your contacts in KMail (email client), KAddressBook (phone contacts), etc. Some popular applications are also integrated into this menu, like Chromium, Firefox and Clementine (media player), so a user can search for and execute actions built into those applications. For example, typing "incognito" in the search menu will open a new Chromium window in incognito mode; the same can be done with Firefox by typing "private" instead, to open a private window. Besides the search capabilities built into the start menu, there is a dedicated search tool (activated by pressing ALT + F2).



In addition to the alt-tab shortcut for switching windows, there is the window spread option; both have already been mentioned. What I haven't mentioned is that the window spread option has built-in search functionality:

  • Open windows spread
  • Type any characters
  • Only windows that contain these characters will remain visible

I often use this feature when I have a lot of windows open (sometimes up to 40), or when there are multiple Chromium windows open. Since one window is dedicated to personal stuff only (YouTube, forums, social networks, etc.) and all the others to job-related tasks, I can simply type what I want and the right Chromium window appears in front of me. Much faster than pressing alt-tab too many times.


Misc features in Kubuntu worth mentioning

Most of these features are not available in Windows by default.


There is a mute button in the taskbar icons.

Right-click an app's icon and click the mute button to silence that specific app. I use this to mute the sounds of video games while they are minimized.



Audio from windows.

Every window that produces audio is marked in the taskbar, so you can mute that window. This is very handy when there are multiple audio sources across different virtual desktops; it helps a lot in dealing with unnecessary audio sources.


The same can also be done from the system tray: every application that makes noise is displayed, and the volume of each can be lowered independently. On Windows, you have to open the Volume Mixer to get the same feature.





Konqueror

This is primarily a web browser, but it can also be used as a file manager, a PDF reader, etc. It is a very powerful application.

It supports tabs. Each tab can display different content, for example, a document (.pdf), a web page or the file system.

The tabs can also be split into multiple views, each displaying different content. For example, I might have a tab with YouTube in one view, a file manager in a second view and a photo in a third, with a second tab dedicated to the www.andromedacomputer.net web page.

Konqueror helps a lot in building a productive workflow. I used it a lot while studying: many .pdf documents, many StackOverflow pages, etc., all organized in tabs, splits and multiple windows. A hell of an efficient flow for quickly navigating to the desired content.



Infinite scrolling

Many of the default apps let you scroll the view by pressing the mouse scroll wheel and dragging the mouse. The scrolling, however, is not limited by the size of the screen: when the cursor reaches an edge, it is automatically moved to the opposite edge so the scrolling can continue without interruption.


Rename multiple files

Select several files in the file manager, press F2 (rename), and all of them are renamed at once, getting a common name with sequential numbers.

Switch active windows with the mouse wheel

Simply move the cursor over the taskbar and scroll the wheel.

By default, this switches the active window, but the action can be reassigned if desired.

Scrolling the mouse wheel also switches windows in the ALT-TAB menu (not in every UI, but most of them support mouse actions).



Built-in search for .pdf documents in Okular (.pdf reader)

You can select text in a .pdf document and ask Okular to open the web browser and search for the term using one of many search engines. I often use Wikipedia, Google, DuckDuckGo or YouTube for additional information on technical terms.


Appearance customization

I have already mentioned the multiple choices available for the appearance of the Alt-Tab window switcher. The very same applies almost anywhere in the system. For example, the default setup uses a taskbar at the bottom of the screen.



The position of the taskbar and the items on it can also be rearranged without restrictions. You can remove items, like the start menu button, application launch icons or the clock, and place other items there instead. Many people prefer the taskbar at the top of the screen, or a dock at the bottom; the most popular dock is Latte Dock.




Besides the layout, the color theme can be changed too. Many are included by default, and many more can be installed manually. Here are screenshots of some nice, popular themes for the KDE desktop (Kubuntu). Note that popular color themes are available for other desktop environments too.


Many mouse cursor and icon themes are available. Some examples:

You can change the color theme in Windows too. There are a number of preinstalled themes to choose from, and you can manually install more. You can also change the cursor and icon themes.


Task automation

Advanced users don’t want to waste time repeating tasks, so they often automate them. For example, I don’t want to manually increase the fan speed and the clock speed of the CPU and GPU while gaming, so I modified the Steam app launcher icon.

The modified behavior is:

  • open the Steam app
  • increase the clock speed of the CPU
  • increase the fan speed of the CPU
  • increase the clock speed of the GPU
  • increase the fan speed of the GPU
  • wait for Steam to close
  • when Steam is closed, revert all the changes to default

This is how I keep a nice, quiet PC while working, yet squeeze out every bit of power for gaming by forcing the fastest available clock speed while preventing overheating.
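The steps above can be sketched as a small wrapper script that the launcher icon runs instead of `steam` directly. This is only a sketch, not my exact script: the `cpupower` governor names and the `nvidia-settings` fan attributes are assumptions for a typical NVIDIA setup, so adjust them for your own hardware.

```shell
#!/bin/bash
# Wrapper sketch for the Steam launcher icon.
# Set DRY_RUN=1 to print the commands instead of executing them.

run() { if [ -n "$DRY_RUN" ]; then echo "$*"; else "$@"; fi; }

boost() {
    # force the fastest CPU clock via the "performance" governor (needs root)
    run sudo cpupower frequency-set -g performance
    # take manual control of the GPU fan and raise its speed
    # (assumes an NVIDIA card with the Coolbits option enabled)
    run nvidia-settings -a '[gpu:0]/GPUFanControlState=1' \
                        -a '[fan:0]/GPUTargetFanSpeed=80'
}

restore() {
    # revert to the default governor and automatic fan control
    run sudo cpupower frequency-set -g schedutil
    run nvidia-settings -a '[gpu:0]/GPUFanControlState=0'
}

main() {
    trap restore EXIT      # revert even if the script is interrupted
    boost
    run steam "$@"         # blocks until the Steam client exits
}

# main "$@"                # uncomment when installing as the launcher target
```

CPU fan control is chassis-specific (often handled by `fancontrol` from lm-sensors), so it is left out of the sketch; the `trap restore EXIT` line is what makes the revert happen automatically when Steam closes.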

Do you want to read and learn more about Bash scripts and automating tasks in the Linux shell? If so, please reply in the comments so I can prepare a post dedicated to this topic.



Conclusion on Windows vs Ubuntu:

As you can see, Ubuntu and other Linux-based operating systems, as compared to Windows, are:

  • Safer
  • Easier for installing and updating apps
  • Easier for downgrading apps if something is broken after an update
  • A lot more customizable
  • Better UX
  • Better for servers
  • Worse for gaming
  • Beginner-friendly
  • Free and open source
  • More respectful of your privacy and data
  • Easier for maintenance
  • Free of annoying pop-up messages
  • Able to run on old PCs that cannot run Windows smoothly, while staying secure with all of the latest updates
  • and a lot more.

Kubuntu is my favorite of all the Ubuntu-based operating systems. I cannot point out a single favorite feature because I like all of them; everything mentioned above is part of my daily workflow.

Now that you know all of this, it is worth trying it out. I was skeptical at first, but once I built my workflow and learned how to use these features, I could do everything faster and with fewer keystrokes. Most important, I have a nicely organized desktop that helps minimize brain fatigue while I do my job.

Kubuntu is a great distro to switch to if you’re coming from Windows. The two have quite similar UIs, and Kubuntu has all the features Windows has, plus more.


Published in GNU/Linux Rules!