



Think you know everything about sudo? Think again.

Everybody knows sudo, right? This tool is installed by default on most Linux systems and is available for most BSD and commercial Unix variants.

Still, after talking to hundreds of sudo users, the most common answer I received was that sudo is a tool that complicates life.

There is a root user and there is the su command, so why have yet another tool? For many, sudo was just a prefix for administrative commands.

Only a handful mentioned that when you have multiple administrators for the same system, you can use sudo logs to see who did what.



So, what is sudo? According to the sudo website:

"Sudo allows a system administrator to delegate authority by giving certain users the ability to run some commands as root or another user while providing an audit trail of the commands and their arguments."

By default, sudo comes with a simple configuration, a single rule allowing a user or a group of users to do practically anything (more on the configuration file later in this article):

%wheel ALL=(ALL) ALL

In this example, the parameters mean the following:

The first parameter defines which users the rule applies to: here, the members of the wheel group.

The second parameter defines the host(s) the group members can run commands on.

The third parameter defines the usernames under which the command can be executed.

The last parameter defines the applications that can be run.

So, in this example, the members of the wheel group can run all applications as all users on all hosts. Even this really permissive rule is useful because it results in logs of who did what on your machine.



Of course, once it is not just you and your best friend administering a shared box, you will start to fine-tune permissions. You can replace the items in the above configuration with lists: a list of users, a list of commands, and so on. Most likely, you will copy and paste some of these lists around in your configuration.
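For example, a single rule can carry an inline list in each position; the usernames and hostnames below are made up for illustration:

```
peter, paul  webserver1, webserver2 = (root) /usr/bin/systemctl, /usr/bin/journalctl
```

Here, peter and paul may run systemctl and journalctl as root, but only on the two web servers.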

This situation is where aliases come in handy. Maintaining the same list in multiple places is error-prone; with an alias, you define the list once and then use it many times. When you lose trust in one of your administrators, you remove them from the alias and you are done. With multiple copied lists instead of aliases, it is easy to forget to remove the user from one of the lists that grants elevated privileges.
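A minimal sketch of an alias-based rule; the user and command names here are hypothetical:

```
User_Alias  ADMINS = peter, paul
Cmnd_Alias  SERVICES = /usr/bin/systemctl, /usr/bin/journalctl

ADMINS ALL = (root) SERVICES
```

Removing a user from the ADMINS alias revokes their access everywhere the alias is used.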


Enable features for a certain group of users

The sudo command comes with a huge set of defaults. Still, there are situations when you want to override some of these. This is when you use the Defaults statement in the configuration. Usually, these defaults are enforced on every user, but you can narrow the setting down to a subset of users based on host, username, and so on. Here is an example that my generation of sysadmins loves to hear about: insults. These are just some funny messages for when someone mistypes a password:

czanik@linux-mewy:~> sudo ls


[sudo] password for root:


Hold it up to the light --- not a brain in sight!


[sudo] password for root:


My pet ferret can type better than you!


[sudo] password for root:


sudo: 3 incorrect password attempts



Because not everyone is a fan of sysadmin humor, these insults are disabled by default. The following example shows how to enable this setting only for your seasoned sysadmins, who are members of the wheel group:

Defaults !insults
Defaults:%wheel insults

I do not have enough fingers to count how many people thanked me for bringing these messages back.


Digest verification

There are, of course, more serious features in sudo as well. One of them is digest verification. You can include the digest of applications in your configuration:

peter ALL = sha224:11925141bb22866afdf257ce7790bd6275feda80b3b241c108b79c88 /usr/bin/passwd

In this case, sudo checks and compares the digest of the application to the one stored in the configuration before running the application. If they do not match, sudo refuses to run the application. While it is difficult to maintain this information in your configuration—there are no automated tools for this purpose—these digests can provide you with an additional layer of protection.
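There is no official sudo tool for generating these entries, but on systems with GNU coreutils you can compute a digest with sha224sum. A minimal sketch; the username and the choice of /bin/sh as the pinned command are only examples:

```shell
#!/bin/sh
# Compute the SHA-224 digest of a command and print a matching sudoers rule.
cmd=/bin/sh                                  # stand-in for the binary you want to pin
digest=$(sha224sum "$cmd" | cut -d' ' -f1)   # first field of the output is the hex digest
printf 'peter ALL = sha224:%s %s\n' "$digest" "$cmd"
```

Keep in mind that the digest changes whenever the package owning the binary is updated, so such rules need to be regenerated after every upgrade.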


Session recording

Session recording is also a lesser-known feature of sudo. After my demo, many people leave my talk with plans to implement it in their infrastructure. Why? Because with session recording, you see not just the command name, but also everything that happened in the terminal. You can see what your admins are doing even if they have shell access and the logs only show that a shell was started.

There is currently one limitation: recordings are stored locally, so users with sufficient permissions can delete their traces. Stay tuned for upcoming features.
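Recording is switched on with a single Defaults setting:

```
Defaults log_output
```

After that, the bundled sudoreplay command lists the recorded sessions with sudoreplay -l, and replays one when you pass it the session ID shown in that listing.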



Starting with version 1.8, sudo changed to a modular, plugin-based architecture. With most features implemented as plugins, you can easily replace or extend the functionality of sudo by writing your own. There are both open source and commercial plugins already available for sudo.

In my talk, I demonstrated the sudo_pair plugin, which is available on GitHub. This plugin is developed in Rust, meaning that it is not so easy to compile, and it is even more difficult to distribute the results. On the other hand, the plugin provides interesting functionality, requiring a second admin to approve (or deny) running commands through sudo. Not just that, but sessions can be followed on-screen and terminated if there is suspicious activity.

In a demo I did during a recent talk at the All Things Open conference, I had the infamous:

czanik@linux-mewy:~> sudo rm -fr /

command displayed on the screen. Everybody was holding their breath to see whether my laptop got destroyed, but it survived.



As I already mentioned at the beginning, logging and alerting are an important part of sudo. If you do not check your sudo logs regularly, there is little value in using sudo. This tool alerts by email on events specified in the configuration and logs all events to syslog. Debug logs can be turned on and used to debug rules or report bugs.

Alerts

Email alerts are kind of old-fashioned now, but if you use syslog-ng for collecting your log messages, your sudo log messages are automatically parsed. You can easily create custom alerts and send those to a wide variety of destinations, including Slack, Telegram, Splunk, or Elasticsearch. You can learn more about this feature from my blog on syslog-ng.com.

Configuration

We talked a lot about sudo features and even saw a few lines of configuration. Now, let’s take a closer look at how sudo is configured. The configuration itself is available in /etc/sudoers, which is a simple text file. Still, it is not recommended to edit this file directly. Instead, use visudo, as this tool also does syntax checking. If you do not like vi, you can change which editor to use by pointing the EDITOR environment variable at your preferred option.
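For example, to syntax-check the current configuration, or to edit it with a different editor (assuming nano is installed):

```
visudo -c
EDITOR=nano visudo
```

The -c option only validates the sudoers file and its includes; it does not open an editor.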

Before you start editing the sudo configuration, make sure that you know the root password. (Yes, even on Ubuntu, where root does not have a password by default.) While visudo checks the syntax, it is easy to create a syntactically correct configuration that locks you out of your system.

When you have a root password at hand in case of an emergency, you can start editing your configuration. When it comes to the sudoers file, there is one important thing to remember: this file is read from top to bottom, and the last matching setting wins. In practice, this means you should start with generic settings and place exceptions at the end; otherwise, the exceptions are overridden by the generic settings.



Below is a simple sudoers file, based on the one in CentOS, with a few of the lines we discussed previously added:


Defaults !visiblepw

Defaults always_set_home

Defaults match_group_by_gid

Defaults always_query_group_plugin

Defaults env_reset



Defaults secure_path = /sbin:/bin:/usr/sbin:/usr/bin

root ALL=(ALL) ALL

%wheel ALL=(ALL) ALL

Defaults:%wheel insults

Defaults !insults

Defaults log_output

This file starts by changing a number of defaults. Then come the usual default rules: The root user and members of the wheel group have full permissions over the machine. Next, we enable insults for the wheel group, but disable them for everyone else. The last line enables session recording.

The above configuration is syntactically correct, but can you spot the logical error? Yes, there is one: Insults are disabled for everyone since the last, generic setting overrides the previous, more specific setting. Once you switch the two lines, the setup works as expected: Members of the wheel group receive funny messages, but the rest of the users do not receive them.


Configuration management

Once you have to maintain the sudoers file on multiple machines, you will most likely want to manage your configuration centrally. There are two major open source possibilities here. Both have their advantages and drawbacks.

You can use one of the configuration management applications that you also use to configure the rest of your infrastructure. Red Hat Ansible, Puppet, and Chef all have modules to configure sudo. The problem with this approach is that updating configurations is far from real-time. Also, users can still edit the sudoers file locally and change settings.

The sudo tool can also store its configuration in LDAP. In this case, configuration changes are real-time and users cannot mess with the sudoers file. On the other hand, this method also has limitations. For example, you cannot use aliases or use sudo when the LDAP server is unavailable.


New features

There is a new version of sudo right around the corner. Version 1.9 will include many interesting new features. Here are the most important planned features:

A recording service to collect session recordings centrally, which offers many advantages compared to local storage:

It is more convenient to search in one place.

Recordings are available even if the sender machine is down.

Recordings cannot be deleted by someone who wants to delete their tracks.

The audit plugin does not add new features to sudoers, but instead provides an API for plugins to easily access any kind of sudo logs. This plugin enables creating custom logs from sudo events using plugins.

The approval plugin enables session approvals without using third-party plugins.

And my personal favorite: Python support for plugins, which enables you to easily extend sudo using Python code instead of coding natively in C.

Conclusion

I hope this article proved to you that sudo is a lot more than just a simple prefix. There are tons of possibilities to fine-tune permissions on your system. You can not only fine-tune permissions, but also improve security by checking digests. Session recordings enable you to check what is happening on your systems. You can also extend the functionality of sudo using plugins, either using something already available or writing your own. Finally, given the list of upcoming features, you can see that even if sudo is decades old, it is a living project that is constantly evolving.

If you want to learn more about sudo, here are a few resources:


Published in GNU/Linux Rules!
Monday, 30 September 2019 14:02

GNU Debugger: Practical tips


Learn how to use some of the lesser-known features of gdb to inspect and fix your code.



The GNU Debugger (gdb) is an invaluable tool for inspecting running processes and fixing problems while you're developing programs.

You can set breakpoints at specific locations (by function name, line number, and so on), enable and disable those breakpoints, display and alter variable values, and do all the standard things you would expect any debugger to do. But it has many other features you might not have experimented with. Here are five for you to try.

Conditional breakpoints

Setting a breakpoint is one of the first things you'll learn to do with the GNU Debugger. The program stops when it reaches a breakpoint, and you can run gdb commands to inspect it or change variables before allowing the program to continue.

For example, you might know that an often-called function crashes sometimes, but only when it gets a certain parameter value. You could set a breakpoint at the start of that function and run the program. The function parameters are shown each time it hits the breakpoint, and if the parameter value that triggers the crash is not supplied, you can continue until the function is called again. When the troublesome parameter triggers a crash, you can step through the code to see what's wrong.



(gdb) break sometimes_crashes

Breakpoint 1 at 0x40110e: file prog.c, line 5.

(gdb) run


Breakpoint 1, sometimes_crashes (f=0x7fffffffd1bc) at prog.c:5

5 fprintf(stderr,

(gdb) continue

Breakpoint 1, sometimes_crashes (f=0x7fffffffd1bc) at prog.c:5

5 fprintf(stderr,

(gdb) continue


To make this more repeatable, you could count how many times the function is called before the specific call you are interested in, and set a counter on that breakpoint (for example, "continue 30" to make it ignore the next 29 times it reaches the breakpoint).
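The same counting can also be done declaratively with gdb's ignore command, which tells gdb to skip a breakpoint a given number of times; the counts here are only illustrative:

```
(gdb) ignore 1 29
Will ignore next 29 crossings of breakpoint 1.
(gdb) continue
```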


But where breakpoints get really powerful is in their ability to evaluate expressions at runtime, which allows you to automate this kind of testing. Enter: conditional breakpoints.


(gdb) break sometimes_crashes if !f

Breakpoint 1 at 0x401132: file prog.c, line 5.

(gdb) run


Breakpoint 1, sometimes_crashes (f=0x0) at prog.c:5

5 fprintf(stderr,




Instead of having gdb ask what to do every time the function is called, a conditional breakpoint allows you to make gdb stop at that location only when a particular expression evaluates as true. If the execution reaches the conditional breakpoint location, but the expression evaluates as false, the debugger automatically lets the program continue without asking the user what to do.
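You can also add, change, or drop a condition on an existing breakpoint with the condition command, without re-creating the breakpoint:

```
(gdb) condition 1 !f
(gdb) condition 1
Breakpoint 1 now unconditional.
```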


Breakpoint commands


An even more sophisticated feature of breakpoints in the GNU Debugger is the ability to script a response to reaching a breakpoint. Breakpoint commands allow you to write a list of GNU Debugger commands to run whenever it reaches a breakpoint.

We can use this to work around the bug we already know about in the sometimes_crashes function and make it return from that function harmlessly when it is passed a null pointer.

We can use silent as the first line to get more control over the output. Without this, the stack frame will be displayed each time the breakpoint is hit, even before our breakpoint commands run.


(gdb) break sometimes_crashes

Breakpoint 1 at 0x401132: file prog.c, line 5.

(gdb) commands 1

Type commands for breakpoint(s) 1, one per line.

End with a line saying just "end".


>silent
>if !f
>printf "Skipping call\n"
>return 0
>else
>printf "Continuing\n"
>end
>continue
>end


(gdb) run

Starting program: /home/twaugh/Documents/GDB/prog

warning: Loadable section ".note.gnu.property" outside of ELF segments




#0 sometimes_crashes (f=0x0) at prog.c:5

5 fprintf(stderr,

Skipping call

[Inferior 1 (process 9373) exited normally]



Dump binary memory


GNU Debugger has built-in support for examining memory using the x command in various formats, including octal, hexadecimal, and so on. But I like to see two formats side by side: hexadecimal bytes on the left, and ASCII characters represented by those same bytes on the right.


When I want to view the contents of a file byte-by-byte, I often use hexdump -C (hexdump comes from the util-linux package). Here is gdb's x command displaying hexadecimal bytes:

(gdb) x/33xb mydata
0x404040 <mydata>:    0x02    0x01    0x00    0x02    0x00    0x00    0x00    0x01
0x404048 <mydata+8>:    0x01    0x47    0x00    0x12    0x61    0x74    0x74    0x72
0x404050 <mydata+16>:    0x69    0x62    0x75    0x74    0x65    0x73    0x2d    0x63
0x404058 <mydata+24>:    0x68    0x61    0x72    0x73    0x65    0x75    0x00    0x05
0x404060 <mydata+32>:    0x00



What if you could teach gdb to display memory just like hexdump does? You can, and in fact, you can use this method for any format you prefer.


By combining the dump command to store the bytes in a file, the shell command to run hexdump on the file, and the define command, we can make our own new hexdump command to use hexdump to display the contents of memory.



(gdb) define hexdump

Type commands for definition of "hexdump".

End with a line saying just "end".

>dump binary memory /tmp/dump.bin $arg0 $arg0+$arg1

>shell hexdump -C /tmp/dump.bin
>end





Those commands can even go in the ~/.gdbinit file to define the hexdump command permanently. Here it is in action:


(gdb) hexdump mydata sizeof(mydata)
00000000  02 01 00 02 00 00 00 01  01 47 00 12 61 74 74 72  |.........G..attr|
00000010  69 62 75 74 65 73 2d 63  68 61 72 73 65 75 00 05  |ibutes-charseu..|
00000020  00                                                |.|
00000021


Inline disassembly


Sometimes you want to understand more about what happened leading up to a crash, and the source code is not enough. You want to see what's going on at the CPU instruction level.

The disassemble command lets you see the CPU instructions that implement a function. But sometimes the output can be hard to follow. Usually, I want to see what instructions correspond to a certain section of source code in the function. To achieve this, use the /s modifier to include source code lines with the disassembly.


(gdb) disassemble/s main
Dump of assembler code for function main:
11    {
   0x0000000000401158 <+0>:    push   %rbp
   0x0000000000401159 <+1>:    mov    %rsp,%rbp
   0x000000000040115c <+4>:    sub    $0x10,%rsp

12      int n = 0;
   0x0000000000401160 <+8>:    movl   $0x0,-0x4(%rbp)

13      sometimes_crashes(&n);
   0x0000000000401167 <+15>:    lea    -0x4(%rbp),%rax
   0x000000000040116b <+19>:    mov    %rax,%rdi
   0x000000000040116e <+22>:    callq  0x401126 <sometimes_crashes>



This, along with info registers to see the current values of all the CPU registers and commands like stepi to step one instruction at a time, allow you to have a much more detailed understanding of the program.

Reverse debug


Sometimes you wish you could turn back time. Imagine you've hit a watchpoint on a variable. A watchpoint is like a breakpoint, but instead of being set at a location in the program, it is set on an expression (using the watch command). Whenever the value of the expression changes, execution stops, and the debugger takes control.

So imagine you've hit this watchpoint, and the memory used by a variable has changed value. This can turn out to be caused by something that occurred much earlier; for example, the memory was freed and is now being re-used. But when and why was it freed?

The GNU Debugger can solve even this problem because you can run your program in reverse!

It achieves this by carefully recording the state of the program at each step so that it can restore previously recorded states, giving the illusion of time flowing backward.

To enable this state recording, use the target record-full command. Then you can use impossible-sounding commands, such as:



reverse-step, which rewinds to the previous source line

reverse-next, which rewinds to the previous source line, stepping backward over function calls

reverse-finish, which rewinds to the point when the current function was about to be called

reverse-continue, which rewinds to the previous state in the program that would (now) trigger a breakpoint (or anything else that causes it to stop)


Here is an example of reverse debugging in action:

 (gdb) b main
Breakpoint 1 at 0x401160: file prog.c, line 12.
(gdb) r
Starting program: /home/twaugh/Documents/GDB/prog

Breakpoint 1, main () at prog.c:12
12      int n = 0;
(gdb) target record-full
(gdb) c

Program received signal SIGSEGV, Segmentation fault.
0x0000000000401154 in sometimes_crashes (f=0x0) at prog.c:7
7      return *f;
(gdb) reverse-finish
Run back to call of #0  0x0000000000401154 in sometimes_crashes (f=0x0)
        at prog.c:7
0x0000000000401190 in main () at prog.c:16
16      sometimes_crashes(0);




These are just a handful of useful things the GNU Debugger can do. There are many more to discover. Which hidden, little-known, or just plain amazing feature of gdb is your favorite? Please share it in the comments.




Linux has come a long way since 1991. These events mark its evolution.


1. Linus releases Linux

Linus Torvalds initially released Linux to the world in 1991 as a hobby. It didn't remain a hobby for long!



2. Linux distributions

In 1993, several Linux distributions were founded, notably Debian, Red Hat, and Slackware. These were important because they demonstrated Linux's gains in market acceptance and development, which enabled it to survive the tumultuous OS wars, browser wars, and protocol wars of the 1990s. In contrast, many established, commercial, and proprietary products did not make it past the turn of the millennium!



3. IBM's big investment in Linux

In 2000, IBM announced it would invest US$1 billion in Linux. In his CNN Money article about the investment, Richard Richtmyer wrote: "The announcement underscores Big Blue's commitment to Linux and marks significant progress in moving the alternative operating system into the mainstream commercial market."



4. Hollywood adopts Linux

In 2002, it seemed the entire Hollywood movie industry adopted Linux. Disney, DreamWorks, and Industrial Light & Magic all began making movies with Linux that year.




5. Linux for national security

In 2003, another big moment came with the US government's acceptance of Linux. Red Hat Linux was awarded the Department of Defense Common Operating Environment (COE) certification. This is significant because the government, particularly its intelligence and military agencies, has very strict requirements for computing systems to prevent attacks and support national security. This opened the door for other agencies to use Linux. Later that year, the National Weather Service announced it would replace outdated systems with new computers running Linux.



6. The systems I managed

This "moment" is really a collection of my personal experiences. As my career progressed in the 2000s, I discovered several types of systems and devices that I managed were all running Linux. Some of the places I found Linux were VMware ESX, F5 Big-IP, Check Point UTM Edge, Cisco ASA, and PIX. This made me realize that Linux was truly viable and here to stay.



7. Ubuntu

In 2004, Canonical was founded by Mark Shuttleworth to provide an easy-to-use Linux desktop—Ubuntu Linux—based on the Debian distribution. I think Ubuntu Linux helped to expand the desktop Linux install base. It put Linux in front of many more people, from casual home users to professional software developers.



8. Google Linux

Google released two operating systems based on the Linux kernel: the Android mobile operating system in mid-2008 and Chrome OS, running on a Chromebook, in 2011. Since then, millions of Android mobile phones and Chromebooks have been sold.



9. The cloud is Linux

In the past 10 years or so, cloud computing has gone from a grandiose vision of computing on the internet to a reinvention of how we use computers personally and professionally. The big players in the cloud space are built on Linux, including Amazon Web Services, Google Cloud Services, and Linode. Even in cases where we aren't certain, such as Microsoft Azure, running Linux workloads is well supported.



10. My car runs Linux

And so will yours! Many automakers began introducing Linux a few years ago. This led to the formation of the collaborative open source project called Automotive Grade Linux. Major car makers, such as Toyota and Subaru, have joined together to develop Linux-based automotive entertainment, navigation, and engine-management systems.



Share your favorite

Source: Opensource.com

Author: Alan Formy-Duval

Marielle Price



These 19 photos prove that this room is the coolest you will see today: worthy of a true gamer and technology enthusiast, with ultra-high-definition graphics, powerful processors, and maximum comfort in a futuristic environment worthy of a James Cameron movie.

Published in It is so cool...