Archive

Archive for June, 2011

Enablement Security Policies with Sudo

June 9, 2011 Leave a comment

Policies in Linux

There are many types of security policies that allow or prevent users in a multiuser environment from performing certain tasks. Some policies allow a specific task, some disallow a task where it would otherwise be allowed, others control the frequency or limit the number of times a task can be performed, and so on.

Linux doesn't have a single system or architecture for such policies. Each piece of software usually implements them itself and provides a way to manage them if the administrator wants to enable them. That said, an enormous amount of control can be exercised with process ownership and file permissions or ACLs alone.

However, the root user (the administrator user on Unix systems) on Linux has total control. When you have a root shell there is no file you can't edit and no process you can't control. With very high security requirements this can be a problem. SELinux is a system which implements policies differently, letting you secure your system beyond what is available by default. For more information, see SELinux on Wikipedia.

Creating Policies with sudo

Let me explain with a hypothetical example. Assume you wanted to give a specific group of users the ability to clear a certain process’ log file.

To do this the file itself needs to be emptied and a USR1 signal sent to the process to have it reopen the file. The process runs as user and group foo, so the logged-in users will not be allowed to send a signal to it. The log file is also owned by the same foo user and group, so they won't be allowed to write to it, and thus can't empty it either.

To give them access to this, you can create a group named logflushers which will control access through membership. Then create a script that performs the 2 necessary tasks. Assuming the script is named /usr/bin/flushfoolog, you can add the following line to the /etc/sudoers file to enable the members of the logflushers group to execute the command as the foo user:

%logflushers ALL=(foo) NOPASSWD: /usr/bin/flushfoolog

With this in place any of logflushers group’s members can flush the log by running:

sudo -u foo flushfoolog

If the user is not part of the logflushers group and they try to run this, they’ll get an error like the following:

Sorry, user quintin is not allowed to execute '/usr/bin/flushfoolog' as
foo on localhost.
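
For reference, the /usr/bin/flushfoolog script itself could look something like the minimal sketch below. The log file path, PID file location and process details are assumptions for illustration; adjust them to your setup.

#!/bin/bash
# Minimal sketch of /usr/bin/flushfoolog. The paths are examples only:
# point them at wherever the foo process writes its log and PID file.
logfile=/var/log/foo.log
pidfile=/var/run/foo.pid

# Task 1: empty the log file.
: > "$logfile"

# Task 2: send USR1 so the process reopens its log file.
kill -USR1 "$(cat "$pidfile")"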

Simplifying the Command

The command to flush the foo log is a bit verbose. You can simplify it by adding the following to the top of the /usr/bin/flushfoolog bash script. If it's not a bash script, just implement the same logic in whatever language it is written in.

# If we're not already running as the target user, re-invoke this
# script via sudo as that user and exit with its status.
targetuser=foo
if [ "$(id -u)" -ne "$(id -u "$targetuser")" ]
then
  sudo -u "$targetuser" "$0" "$@"
  exit $?
fi

What this does is check whether the script is being run as the foo user. If it's not, it re-executes the same command as the foo user using sudo. If a user isn't granted permission to run this command via sudo (i.e. they're not in the logflushers group), they'll get the error saying they're not allowed to run this command as the foo user.
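
With that in place, a member of the logflushers group can now simply run:

flushfoolog

and the script will transparently re-invoke itself as the foo user via sudo.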

Conclusion

Sudo makes it very easy to control or grant access to commands for certain users. When combined with scripts, access control can be made even finer by performing programmatic tasks or additional checks inside the script.

So Why Love Linux? Because it’s so simple to implement flexible access control when combining sudo and scripting.

Pipes and FIFOs

June 8, 2011 Leave a comment

Overview

The basic design of the Unix command line world is also what makes it so powerful and what many people swear by, namely the pipes and FIFOs design. The idea is that you have many simple commands, each taking an input and producing an output, and you string them together into something that gives the desired effect.

Standard IO Streams or Pipes

To start out, let me give a rough explanation of the IO pipes. These are Standard Input, Standard Output and Standard Error. Standard Output and Standard Error are both output streams, writing out whatever the application puts into them. Standard Input is an input stream, giving the application whatever is written into it. Every program, when run, has all 3 of these streams available to it by default.

From this point forward I’ll refer to the 3 streams as STDOUT for Standard Output, STDERR for Standard Error and STDIN for Standard Input. These are the general short form or constant names for these streams.

Take for example the echo and cat commands. The echo command takes all text supplied as arguments on its command line and writes it out to STDOUT. For example, the following command will print the text “Hi There” to the STDOUT stream, which by default is linked to the terminal's output.

echo Hi There

Then, in its simplest form the cat command takes all data it reads from its STDIN stream and writes it back out to STDOUT exactly as it was received. You can also instruct cat to read in the contents of one or more files and write them back out to STDOUT. For example, to read in the contents of a file named namelist and write it to STDOUT (the terminal) you can do:

cat namelist

To see cat in its purest form, simply run it without arguments, as:

cat

Each line of input you type will be duplicated. This is because the input you type is sent to STDIN, where it is received by cat, which writes it back to STDOUT. The end of your input can be indicated by pressing Ctrl+D, which is the EOF or End of File key. Pressing Ctrl+D closes the STDIN stream, and the program handles this the same as if it were reading a file and came to the end of that file.

Pipes and Redirects

Now, all command line terminals allow you to do some powerful things with these IO pipes. Each type of shell has its own syntax, so I will be explaining these using the syntax for the Bash shell.

You could for instance redirect the output from a command into a file using the greater than or > operator. For example, to redirect the STDOUT of the echo command into a file called message, you would do:

echo Hi There > message

You could also read this file back into a command using the less than or < operator. This takes the contents of the file and writes it to the command's STDIN stream. For example, reading the above file into the cat program would have it written back to STDOUT. So this has the same effect as supplying the filename as an argument to cat, but instead uses the IO pipes to supply the data.

cat < message

Where things really get powerful is when you start stringing together commands. You can take the STDOUT of one command and pipe it into the STDIN of another command, with as many commands as you want. For example, the following command pipes the message “Pipes are very useful” into the cut command, instructing it to give us the 4th word of the line. This will result in the text “useful” being printed to the terminal.

echo Pipes are very useful | cut -f 4 -d " "

As you can see, commands are strung together with the pipe or | operator. The pipe operator by itself makes many powerful things possible.

Using the pipe (|) operator, let's look at a more complex example. Let's say we want to get the PID and user name of all running processes, sorted by the PID and separated by a comma. We can do something like this:

ps -ef | tail -n+2 | awk '{print $2 " " $1}' | sort -n | sed "s/ /,/"

To give an idea of what happens here, let me explain the purpose of each of these commands with the output each one produces (which becomes the input of the command that follows it).

ps -ef
Gives us a list of processes with many columns of data; the 1st column is the user and the 2nd column is the PID.

Output:

UID        PID  PPID  C STIME TTY          TIME CMD
root      4222   443  0 20:14 ?        00:00:00 udevd
quintin   3922  2488  0 20:14 pts/2    00:00:00 /bin/bash
quintin   4107  2496  0 20:18 pts/0    00:00:00 vi TODO

tail -n+2
Takes the output of ps and gives us all the lines from line 2 onwards, effectively stripping the header.

Output:

root      4222   443  0 20:14 ?        00:00:00 udevd
quintin   3922  2488  0 20:14 pts/2    00:00:00 /bin/bash
quintin   4107  2496  0 20:18 pts/0    00:00:00 vi TODO

awk '{print $2 " " $1}'
Takes the output of tail and prints the PID first, then a space, then the user name. The rest of the data is discarded here.

Output:

4222 root
3922 quintin
4107 quintin

sort -n
Sorts the lines received from awk numerically.

Output:

3922 quintin
4107 quintin
4222 root

sed "s/ /,/"
Replaces the space separating the PID and user name with a comma.

Output:

3922,quintin
4107,quintin
4222,root

Some Example Useful Commands

The above should give you a basic idea of what it’s all about. If you feel like experimenting, here are a bunch of useful commands to mess around with.

I’ll be describing the commands from the perspective of the standard IO streams. So even though I don’t mention it, some of these commands also support reading input from files specified as command line arguments.

To get more details about the usage of these commands, see the manual page for the given command by running:

man [command]


Command Description
echo Writes to STDOUT the text supplied as command line arguments.
cat Writes to STDOUT the input from STDIN.
sort Sorts all lines of input from STDIN.
uniq Strips duplicate lines. The input needs to be sorted first, so the same basic effect can be achieved with just sort -u.
cut Cuts each line on a specified character and returns the requested parts.
grep Searches for a specified pattern or string in the data supplied via STDIN.
gzip Compresses the input from STDIN and writes the result to STDOUT. Uses gzip compression.
gunzip Uncompresses the gzip input from STDIN and writes the result to STDOUT. Basically the reverse of gzip.
sed Stream editor applying basic processing and filtering operations to STDIN and writing the result to STDOUT.
awk Pattern scanning and processing language. Powerful script-like processing of lines/words from input.
column Takes the input from STDIN and formats it into columns, writing the result to STDOUT. Useful for displaying data.
md5sum Takes the input from STDIN and produces an MD5 sum of the data.
sha1sum Takes the input from STDIN and produces a SHA-1 sum of the data.
base64 Takes the input from STDIN and base64 encodes or decodes it.
xargs Takes input from STDIN and uses it as arguments to a specified command.
wc Counts the number of lines, words or characters read from input.
tee Reads input and writes it to both STDOUT and a specified file.
tr Translates or deletes characters read from input.
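
As a small exercise combining a few of these (an illustrative example, not from the original list), the following pipeline counts how many processes each user is running, busiest user first:

ps -ef | tail -n+2 | awk '{print $1}' | sort | uniq -c | sort -rn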

Conclusion

I would recommend that anyone get comfortable with these aspects of the Linux terminal as well as Bash scripting. Without knowing this, you might not even realize how many of your common tasks could be automated or simplified by it. Also remember that automation not only gets your tasks completed quicker, but also reduces the chance of errors and mistakes that come from doing repetitive tasks by hand.

So Why Love Linux? Because the pipes and FIFOs pattern gives you a lot of power for building complex instructions.

Belkin F5D8053 Support in Ubuntu – Drivers from Nowhere

June 7, 2011 Leave a comment

New Hardware

I recently purchased a new wireless ADSL router, which came with a USB WiFi network adapter. Being a fairly new release of the hardware, it wasn't recognized by the kernel and I couldn't use it.

However, after some research I found that it is indeed supported by the kernel – just not recognized, due to a change in the hardware ID reported by the device. The kernel module it should work with is the RT2870 driver.

If this is the case, it should be easy to fix.

Making it Work

If you have recently rebuilt your kernel you can skip section (A), and simply proceed with (B) and (C).

A. Preparing Build Environment

The steps for preparing the build environment are based on those needed for Ubuntu 10.04 (Lucid Lynx). If you have problems building the kernel after following these steps (especially for older versions of Ubuntu), see Kernel/Compile for instructions more tailored to your version of Ubuntu.

  1. Prepare your environment for building
    sudo apt-get install fakeroot build-essential
    sudo apt-get install crash kexec-tools makedumpfile kernel-wedge
    sudo apt-get build-dep linux
    sudo apt-get install git-core libncurses5 libncurses5-dev libelf-dev 
    sudo apt-get install asciidoc binutils-dev
  2. Create and change into a directory where you want to build the kernel
  3. Then download the kernel source:
    sudo apt-get build-dep --no-install-recommends linux-image-$(uname -r)
    apt-get source linux-image-$(uname -r)
  4. A new directory will be created containing the kernel source code. Everything else should happen from inside this directory
B. Modifying the Source
  1. Run lsusb and look for your Belkin device. It should look something like this
    Bus 002 Device 010: ID 050d:815f Belkin Components
  2. Note the hardware identifier, which in the above example is 050d:815f. Save this value for later.
  3. Inside the kernel source, edit the file drivers/staging/rt2870/2870_main_dev.c
  4. Search for the string Belkin. You should find a few lines looking like this:
      { USB_DEVICE(0x050D, 0x8053) }, /* Belkin */
      { USB_DEVICE(0x050D, 0x815C) }, /* Belkin */
      { USB_DEVICE(0x050D, 0x825a) }, /* Belkin */
  5. Duplicate one of these lines and substitute in the ID noted in (2) above. Split the ID on the colon and adapt the format of the 2 parts to conform to the syntax used in the lines from (4). In my example, 050D:815F would end up as the following line:
      { USB_DEVICE(0x050D, 0x815F) }, /* Belkin */
  6. Save the file and proceed to (C)
C. Rebuilding and Reinstalling the Kernel
  1. Run the following command in the kernel source directory:
    AUTOBUILD=1 NOEXTRAS=1 fakeroot debian/rules binary-generic
  2. This command will take a few minutes to complete. When it’s done you’ll find the kernel image .deb files one level up from the kernel source directory.
  3. To update your kernel image, you can install these using dpkg. For example:
    sudo dpkg -i linux-image-2.6.32-21-generic_2.6.32-21.32_i386.deb
  4. After installing, reboot your system and the device should be working.
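
Once rebooted, a few generic checks (not specific to this guide) can confirm that the driver picked up the device:

# Confirm the device is visible on the USB bus.
lsusb | grep -i belkin
# Check whether the rt2870 module (rt2870sta on some kernels) is loaded.
lsmod | grep -i rt2870
# Look for related kernel messages.
dmesg | grep -i rt2870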

Conclusion

Hardware manufacturers change device IDs for various reasons, mainly when changes were made to the device that require distinguishing it from previous releases (for example new functionality added to an existing model). A driver will always have a list of devices with which it is compatible. Sometimes the changes made by the manufacturer don't really change the interface to the device, which means previous drivers would work perfectly fine had they known they were able to control the device.

The change above was exactly such a case where the driver is able to work with the device, but doesn’t know it supports this exact model. So all we did was to find the hardware ID actually reported by the device and add this to the list of IDs the driver will recognize and accept.

This is a great thing about Linux and the common Linux distributions. Almost all (if not all) packages you'll find in your distribution are open source, making it possible to change whatever needs changing. Had this not been the case we would have needed to wait for whoever maintains the drivers to supply a version that would identify this device, even though the existing driver could have worked perfectly fine all along.

So in this case, it was like getting a driver for the device out of thin air.

Further, even if you spent time researching it and the change didn't work, if you're like me you'll be just as excited as if it had worked, because now you can spend time figuring out why it didn't. This is actually the case with me: the change didn't make my device work, so I'm jumping straight in and fixing it. I'll update this post when I get it working.

So Why Love Linux? Because by having the source code to your whole system available, you have complete freedom and control.

TrueCrypt – Open Source Security

June 6, 2011 Leave a comment

Overview

TrueCrypt is a very useful program. It allows you to encrypt your data either by encrypting your whole partition/hard drive, or by creating a file which is mounted as a virtual drive. I usually prefer the latter option, where I create a file of a certain size and then have it mounted somewhere in my home directory. Everything private/personal is then stored inside this directory, which means it is encrypted. At boot time I'm prompted for a password, which is needed to decrypt this file and make the directory available.

I have a second smaller encrypted file which I also carry around on my pendrive, along with a TrueCrypt installation for both Windows and Linux. This second file contains some data like my private keys, certificates, passwords and other information I might need on the road.

Encrypting your Data

When you create your encrypted drive you are given the choice of many crypto and hash algorithms and combinations of these. Each option has its own strength and speed, so with this selection you can decide on a balance between performance and security. On top of this you can also select the filesystem you wish to format the drive with, and when it comes time to format the drive you can improve the security of the initialization by supplying extra randomness in the form of random mouse movements. Some argue this isn't true randomness or doesn't add real security value, though I believe it's certainly better than relying completely on the pseudo-random generator, and most of all it gives the feeling of security, which is just as important as having security. At this level of encryption the feeling of security is probably good enough, since the real security is already so high.

Passwords and Key Files

As far as password selection goes, TrueCrypt encourages you to select a password of at least 20 characters and has the option of specifying one or more key files together with your password. A key file is a file you select from storage. It can be seen as increasing the length of your password with the contents of these files. For example, if you select the executable file of your calculator program as a key file, then the contents of this file will be used together with your password to protect your data. You can also have TrueCrypt generate a key file of a selected length for you. Key files can be of any size, though TrueCrypt will only use the first megabyte of data.

So when you mount the drive you not only have to supply the password, but also select all of the key files in the same order as they were configured. This can significantly improve security, especially if the key file is stored on a physical device like a security token or smart card. In this case, to decrypt the volume you need (on top of the password) the knowledge that a token is needed, the physical token itself, as well as its PIN.

The downside of key files is that if you lose the key file it will be very difficult to recover your data. If you select something like a file from your operating system and an update causes that file to change, then you will only be able to mount the drive if you get hold of that exact version of the file. So when using key files you need to be very careful to select files you aren't likely to lose and which won't be changed without you expecting it. It's also important not to select key files which will be obvious to an attacker. For example, don't select a key file named “keyfile.txt” that sits in the same directory as the encrypted volume.

The better option is probably to have TrueCrypt generate the key file for you, and then use physical methods like a security token with a PIN to protect it. The benefit of security tokens used in this way can be visualized as having a password, but only those who have the correct token are allowed to use the password. So even if someone discovers the password they are unable to use it. And even if the token is stolen, without having the password it can not be used.

Hidden Volumes

TrueCrypt also has a function called a hidden volume, which is a form of steganography. This is where your encrypted container file, partition or hard drive contains a secret volume inside of it. So you end up having 2 passwords for your volume. If you try and mount this volume with the first (decoy) password, it would mount the outer or decoy volume. If you enter the 2nd (true) password it would mount the true or hidden volume. It’s possible to store data in both these volumes, which if done well will not give away the fact that the first volume is a decoy.

The benefit here is that if you are forced to hand over your password, you can give the password for the outer volume and thus not have anything you wish to remain private become exposed. With whole disk encryption you can even go as far as installing an operating system in both volumes, resulting in a hidden operating system altogether. So if you were to enter the hidden volume’s password you would boot into the installation of the hidden volume, and if you were to enter the outer volume’s password you would boot into the decoy operating system.

There is no way to determine whether a hidden volume exists within a particular TrueCrypt file/disk, not even when the decoy or outer volume is mounted or decoy operating system is booted. The only way to know this or mount it is to know the hidden volume’s password.

Conclusion

The primary reason I like TrueCrypt so much is that it makes it easy for anyone to protect their data, giving you many choices in doing so and allowing you to choose the balance between security and performance. And where it gives you options for security, it gives you options for a decent amount of it (key files and hidden volumes). TrueCrypt is also very easy to install and integrates well with the environment. For certain tasks it needs administrator permissions, and on Linux many programs require you to run them as root if they need such access. TrueCrypt was implemented well enough to ask you for administrator access only when it needs it. It also makes mounting on startup easy to achieve. It's all these small things which make your life easier.

I would recommend TrueCrypt to everyone. Store all your sensitive data in a TrueCrypt drive, because you never know what might happen to it. You always have the choice of using your operating system's native data encryption functionality, though TrueCrypt certainly has more features and makes all of them easily accessible and maintainable. Its GUI is also easy to use, and more advanced functionality like mount options is available when and where it's needed.

To download or find out more, see http://www.truecrypt.org/.

So Why Love Linux? Because it has had a strong influence on the open source movement, resulting in high quality open source software like TrueCrypt.

[13 Jul 2014 EDIT: with the recent events with TrueCrypt I would probably think I was making assumptions when writing this… LOL]

Knowing the Moment a Port Opens

June 5, 2011 Leave a comment

Automated Attempts

Sometimes when a server is rebooted, whether a clean soft reboot or a hard reboot after a crash, I need to perform a task on it as quickly as possible. This can be for many reasons, from ensuring all services are started to making a quick change. Sometimes I just need to know the moment a certain service has started so I can notify everyone of the fact. The point is that every second counts.

When the server starts up and joins the network you start receiving ping responses from it. At this point not all the services have started up yet (on most configurations at least), so I can't necessarily log into the server or access the specific service yet. Attempting to do so I would get a connection refused or port closed error.

What I usually do in cases where I urgently need to log back into the server is ping the IP address and wait for the first response packet. When I receive this packet I know the server is almost finished booting up. Now I just need to wait for the remote access service to start up. For Linux boxes this is SSH and for Windows boxes it’s RDP (remote desktop protocol).
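
That first step of waiting for ping responses can itself be automated with a small loop. A generic sketch, using the same example IP address as the commands below:

until ping -c 1 -W 1 10.0.0.221 > /dev/null; do sleep 1; done; echo "host is up"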

I could try to repeatedly connect to it, but this is unnecessarily manual, and when every second counts probably less than optimal. Depending on what I’m trying to do I have different methods of automating this.

If I just needed to know that a certain service is started and available again, I would put a netcat session in a loop, which would repeatedly attempt a connection. As long as the service isn’t ready (the port is closed), the netcat command will fail and exit. The loop will then wait for 1 second and try again. As soon as the port opens the connection will succeed and netcat will print a message stating the connection is established and then wait for input (meaning the loop will stop iterating). At this point I can just cancel the whole command and notify everyone that it’s up and running. The command for doing this is as follows:

while true; do nc -v 10.0.0.221 80; sleep 1; done

If I needed remote access to the server, I would use a similar command as above, but use the remote access command instead, and add a break statement to quit the loop after the command was successful. For example, for an SSH session I would use the ssh command, and for a remote desktop session the rdesktop command. A typical SSH command will look like:

while true; do ssh 10.0.0.221 && break; sleep 1; done

This will simply keep trying the ssh command until a connection has been established. As soon as a connection is successful I will receive a shell; when I exit from it, the loop will break and return me to my local command prompt.

Automatically Running a Command

If you had to run some command the moment you are able to do so, you could use the above SSH command with some minor modifications.

Let's say you wanted to remove the file /opt/repository.lock as soon as possible. To keep it simple we're assuming the user you log in as has permission to do so.

The basic idea is that each time you fail to connect, SSH will return a non-zero status. As soon as you connect and run the command you will break out of the loop. In order to do so, we need a zero exit status to distinguish between a failed and successful connect.

The exit status during a successful connect, however, will depend on the command being run on the other end of the connection. If it fails for some reason, you don't want SSH to repeatedly try and fail, effectively ending up in a loop that won't exit by itself. So you need to ensure its exit status is 0, whether the command fails or not. You can handle the failure manually.

This can be achieved by executing the true command after the rm command. All the true command does is to immediately exit with a zero (success) exit status. It’s the same command we use to create an infinite while loop in all these examples.

The resulting command is as follows:

while true; do \
  ssh 10.0.0.221 "rm -f /opt/repository.lock ; true" && break; \
  sleep 1; \
done

This will create an infinite while loop and execute the ssh and sleep commands. As soon as an SSH connection is established, it will remove the /opt/repository.lock file and run the true command, which will return a 0 status. The SSH instance will then exit with a success status, which will cause a break from the while loop and end the command, returning you to the command prompt. As with all the previous examples, when the connection fails the loop pauses for a second and then tries again.

Conclusion

By using these commands instead of repeatedly trying to connect yourself, there is at most 1 second from the time the service starts until you're connected. This can be very useful in emergency situations where every second of a problem could cost you money or reputation.

The Linux terminal is a powerful place and I sometimes wonder if those who designed the Unix terminal knew what they were creating and how powerful it would become.

So Why Love Linux? Because the Linux terminal allows you to optimize your tasks beyond human capability.

 

Building from Source Has Never Been Easier

June 4, 2011 Leave a comment

Overview

For me, one of the greatest things Debian gave to the world was apt and dpkg, i.e. Debian's package management system. They do a brilliant job of almost everything and are very easy to use. What I'll be explaining in this post is how you would use these tools to customize a package at the source level.

If you wanted to change something in the source code of a package, you could always go check it out from the project's revision control system, or download it from the project's web site. However, this won't necessarily be the same version you received through the repositories, and it will most probably not have all the patches applied by the distribution's authors.

There are benefits in getting the latest vanilla version, though there are more cons than pros when compared to using apt and dpkg to get and build the source. Provided one is available, some of the benefits of using the source package from the repositories are:

  1. The source code you will be editing will be for the same version as the package you have installed.
  2. The source will have all the patches as applied by the distribution's authors. Some of these patches add extra functionality which would be lost if you used the vanilla source code.
  3. The package version and patches from the distribution are what was tested within that environment.
  4. You are building a .deb package file, which can be installed and/or added to a repository for easy use on multiple installations.
  5. When using a .deb file you can benefit from dependency management.
  6. Having a .deb you can control how new versions of the package are handled (like preventing new installations, safely overriding with new versions, etc.).
  7. With a .deb it's easy to remove the package and install the original again.

Points 4 to 7 are also possible to achieve when downloading the vanilla source, though it requires many more steps and is far more complicated than the technique I'm describing in this post.

Above all of these benefits, the biggest reason I like to follow this approach when hacking the source of packages on my installations is the simplicity of it. Summarized, it all comes down to the 3 commands I'll list in the conclusion of this post.

Getting the Source

Before you can start building you need to prepare your environment for it. Run the following command to install the necessary packages:

quintin:~$ sudo apt-get install build-essential fakeroot dpkg-dev

So, for all the examples I'll be using blueproximity as the package to be built. It's a Python script, so you don't really need to download separate source code to modify it, but to demonstrate this technique I figured it's a good candidate given its small size and simple structure.

So to get the source, I’ll make a directory called src and change into it.

quintin:~$ mkdir src
quintin:~$ cd src/

Then instruct apt to download the source code for the project named blueproximity.

quintin:~/src$ apt-get source blueproximity
Reading package lists... Done
Building dependency tree
Reading state information... Done
Need to get 309kB of source archives.
Get:1 http://repo/ubuntu/ lucid/universe blueproximity 1.2.5-4 (dsc) [1,377B]
Get:2 http://repo/ubuntu/ lucid/universe blueproximity 1.2.5-4 (tar) [301kB]
Get:3 http://repo/ubuntu/ lucid/universe blueproximity 1.2.5-4 (diff) [6,857B]
Fetched 309kB in 9s (32.6kB/s)
gpgv: Signature made Mon 24 Aug 2009 00:52:04 SAST using DSA key ID 7ADF9466
gpgv: Can't check signature: public key not found
dpkg-source: warning: failed to verify signature on ./blueproximity_1.2.5-4.dsc
dpkg-source: info: extracting blueproximity in blueproximity-1.2.5
dpkg-source: info: unpacking blueproximity_1.2.5.orig.tar.gz
dpkg-source: info: applying blueproximity_1.2.5-4.diff.gz

As you can see, apt

  1. Downloaded the source tarball blueproximity_1.2.5.orig.tar.gz.
  2. Downloaded a patch file blueproximity_1.2.5-4.diff.gz.
  3. It extracted the source code into a directory blueproximity-1.2.5.
  4. And then applied the patch to this directory.

At this stage the source is ready for editing.

Building the Source

In order for your build to complete successfully you might need some development dependencies. These are usually the header files or link libraries, and often named after the package with a -dev suffix. Apt can install anything needed to build a specific package using the build-dep command.

To make sure we have all these dependencies for building blueproximity, we run:

quintin:~/src$ sudo apt-get build-dep blueproximity
Reading package lists... Done
Building dependency tree
Reading state information... Done
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

In my case there weren't any build dependencies needed, and thus nothing was installed.

Once you’re happy with your changes and want to build the .deb file, you simply need to

  1. Change into the root of the extracted project source code
    quintin:~/src$ cd blueproximity-1.2.5
  2. And run the build.
    quintin:~/src/blueproximity-1.2.5$ dpkg-buildpackage -rfakeroot -uc -b
    [truncated output]
    dpkg-deb: building package `blueproximity'
    in `../blueproximity_1.2.5-4_all.deb'.
    dpkg-deb: warning: ignoring 1 warnings about the control file(s)
    
    dpkg-genchanges -b >../blueproximity_1.2.5-4_i386.changes
    dpkg-genchanges: binary-only upload - not including any source code
    dpkg-buildpackage: binary only upload (no source included)

You’ll see a lot of output which I truncated here. The result will be one or more .deb files in the directory where you downloaded the source (in the example, the one named src).
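
To install the resulting package (using the filename from the build output above), you could then run something like:

sudo dpkg -i ../blueproximity_1.2.5-4_all.deb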

Conclusion

As you can see, there is very little needed to get a package’s source code and build a .deb from it. I’ve done this a few times, and all the packages I’ve modified I’ve added to a repository of my own, and have found this approach to work very well for integrating my changes into the existing system as seamlessly as possible.

There are basically 3 commands needed to build a package from source (excluding setting up your environment for this). These are:

apt-get source [package name]
apt-get build-dep [package name]
dpkg-buildpackage -rfakeroot -uc -b

The first 2 commands you would run in a dedicated directory created for this purpose. The first command will then create a sub directory where it extracts the source code to. It’s in this sub directory where you would run the last command. The first 2 commands you would also only need to run once. The last command you can run each time you want to build a package from the same source code, perhaps when making changes for a second or third time.

It’s really that simple.

So Why Love Linux? Because apt opens up a very easy way of customizing your system at as low a level as the source code.

The Traveling Network Manager

June 3, 2011 Leave a comment

Overview

Networks are such a big part of our lives these days that when you're at a place without some form of computer network, it feels like something's off or missing, or like the place wasn't done well. You notice this especially when you travel around with a device capable of joining WiFi networks, like a smartphone, tablet or laptop, and even more so when you depend on these for internet access.

Ubuntu, and I assume most modern desktop distributions, comes with a utility called NetworkManager. It's this utility's job to join you to networks and manage those connections. It was designed to make a best attempt at configuring a network for you automatically, with as little user interaction as possible. Even in the GUI components, all input fields and configuration UIs were designed to make managing your networks as painless as possible, keeping the average user's abilities in mind. All complicated setup options were removed, so you can't configure things like multiple IP addresses or select the WiFi channel, etc.

NetworkManager is mostly used through an icon in the system tray. Clicking this icon brings up a list of all available networks. If you select a network, NetworkManager will attempt to connect to it and configure your device via DHCP. If it needs any more information from you (like a WiFi passphrase or SIM card PIN code), it will prompt you. If this connection becomes available again in the future it will automatically try to connect to it. For WiFi connections it's the user's job to make the first connection from the menu; for ethernet networks NetworkManager will automatically connect even the first time.

These automatic actions NetworkManager takes are to make things more comfortable for the end user. The more advanced user can always go and disable or fine tune these as needed. For example to disable automatically connecting to a certain network, or setting a static IP address on a connection.

Roaming Profiles

If you travel around a lot you end up with many different network “profiles”. Each location where you join a network will have its own setup. If all these locations have DHCP you rarely need to perform any manual configuration to join the network. You do get the odd location, though, where you need some specific configuration like a static IP address. NetworkManager makes this and roaming very easy and natural to implement, and seamlessly manages this “profile” for you.

You would do this by first joining the network. Once connected, and whether or not you were given an IP address, you would open the NetworkManager connections dialog and locate the connection for the network you just joined. From here you would edit it, set your static IP address (or some other configuration option) and save the connection.

By doing this you have effectively created your roaming profile for this network. None of your other connections will be affected, so whenever you join any of your other networks they will still work as they did previously, and the new network will have its own specific configuration.

This was never really intended to be a roaming profile manager, so other options related to roaming (like proxy servers) will not be configured automatically. I'm sure that with a few scripts and a bit of hacking you could automate setting up these configurations depending on the network you're joining.
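
As a starting point for such hacking, NetworkManager runs dispatcher scripts from /etc/NetworkManager/dispatcher.d whenever a connection goes up or down, passing the interface name and the action as arguments. A minimal, hypothetical sketch (the file name and the logic inside are assumptions for illustration):

#!/bin/sh
# Hypothetical /etc/NetworkManager/dispatcher.d/99-roaming
# NetworkManager invokes dispatcher scripts with the interface and action.
IFACE="$1"
ACTION="$2"

if [ "$ACTION" = "up" ]; then
  # Identify the network by its default gateway and act on it; replace
  # the logging below with proxy settings or whatever setup you need.
  GATEWAY=$(ip route | awk '/^default/ {print $3; exit}')
  logger "Joined a network via $IFACE, gateway $GATEWAY"
fi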

Conclusion

NetworkManager is maybe not the advanced user’s favorite tool. But if you don’t need any of these advanced features I would certainly recommend it.

So Why Love Linux? Because NetworkManager does a brilliant job of making networking comfortable in a very natural way.

Within the Blue Proximity

June 2, 2011 2 comments

Overview

I read about the awesome little program called Blue Proximity. It’s a Python script that repeatedly measures the signal strength from a selected Bluetooth device. It then uses this knowledge to lock your computer if you are further away from it, and unlock it or keep it unlocked when you are close to it.

It's very simple to set up. It has a little GUI from which you select the device you want to use and then specify the distance value at which to lock/unlock your computer, as well as the time delay for the lock/unlock process. The distance isn't measured in meters/feet, but rather in a generic unit. This unit is an 8-bit signed scale based on the signal strength measured from the device and isn't terribly accurate. It's not a perfect science and a lot of factors affect the reading.

So the general idea is that you get your environment as normal as you would usually have it and try different values for the lock/unlock distances until you get a configuration that works best for you. There are a few more advanced parameters to play with as well, especially the very useful ring buffer size, which lets you average the value over the last few readings instead of using the raw value each time. It's certainly worth playing around with these values until you find what gives you the best result.

You can even go as far as specifying the commands to be executed for locking/unlocking the screen. The default is probably sufficient for most purposes, but it’s definitely available for those that want to run other commands.

Beyond just locking/unlocking there is also a proximity command feature, which will ensure that the computer doesn’t lock from inactivity as long as you’re close to it. This is very useful for times where you’re watching a movie or presentation and don’t want the screen to keep locking just because you didn’t move the mouse or type on the keyboard.

My Setup

Before I had this program I would have my computer lock after a 10 minute idle period. Then when I returned it would almost be automatic for me to start typing my password. The GNOME lock screen is cleverly optimized, in that you can simply start typing your password even before the password dialog displays. It recognizes the first key press in a locked state both as an indication of your intent to unlock the screen and as the first character of your password.

After I configured and hacked Blue Proximity to my liking, the screen would lock as soon as I'm about 3 meters away from the computer and unlock when I'm right in front of it. I configured a 10 second ring buffer to average the reading over the readings for the past 10 seconds. I also made values of 0 or higher (the closest reading to the computer) count as double entries, meaning that when 0 values are being read the average drops to 0 twice as fast. This allows it to be more stable when moving around, but to unlock very quickly when standing right next to the machine. It all works very well.

It’s been a few days now, and still when I get to the computer and it unlocks by itself I’m amused. Sometimes I even start getting ready to enter my unlock password when the screen is automatically unlocked. Very amusing.

It’s not perfect, and sometimes the screen would lock while I’m busy using the computer and then immediately unlock again. This is to be expected from the nature of wireless technologies, though I’m sure a bit more hacking and tuning will get it at least as close to perfect as it can be.

Conclusion

It’s typical of the software world to always produce amusing and fun utilities like this one. This one is definitely one of my favorites.

So Why Love Linux? Because there are tons of free and open source programs and utilities of all kinds.

Managed Packages

June 1, 2011 Leave a comment

There are tons and tons of open source projects out there: something for almost every topic or task, from general-purpose, common or popular software down to highly specialized or unheard-of tools. This is one of Linux's strengths, especially with distributions like Ubuntu which have package repositories with thousands of options readily available to the user.

Package Manager

Synaptic Package Manager is Ubuntu's user interface to the underlying apt package management system. Whenever I want to install something I first check whether I can find it in Synaptic before I go looking to download it manually. More often than not I find the package in Synaptic and can then have it installed with just 2 more clicks of the mouse.

This saves a lot of time, and never goes unappreciated.

Ubuntu Repositories

The package management software for Ubuntu is brilliant. But without thorough repositories it would be nothing more than just that: package management.

Ubuntu has multiple levels of repositories by default, namely main, universe, multiverse and restricted.

  • The main repository is maintained by Canonical and contains software officially supported by them.
  • The universe repository is maintained by the community and isn't officially supported by Canonical.
  • The restricted repository contains packages that aren't available under a completely free license. A popular example is proprietary drivers, like the Nvidia or ATI graphics drivers.
  • The multiverse repository contains software that isn't free.

Canonical is doing a great job with the main repository, keeping a decent variety of packages available and up to date. On top of this the community is doing a fantastic job of keeping the universe repository filled up. With these two I rarely need to go looking for software on the internet.

Easy Repository Integration

For the few cases where the default repositories don’t have what you need, you need to get it from the internet.

There are a few ways to install packages from the internet.

  • Download an installer and run it.
  • Download an archive and either build from source or install it some manual way.
  • Download a .deb package and install via dpkg.
  • Add a 3rd party repository to your package management system and then install via Synaptic.

The Ubuntu system makes it very easy to add a 3rd party repository. If you come across a site that offers an Ubuntu (or apt) repository, it usually comes in the form of

  1. A string called an “APT line”, which you can just add using the supplied GUI in Synaptic Package Manager, or
  2. A .deb file which you install via dpkg. This will then set up the repository for you. You can usually just double click on the .deb and it will start up the installation for you.

After you’ve got their repository set up you can go into Synaptic, search for the package you want, and install it.
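
By hand, setting up such a repository and refreshing the package index might look something like this (the repository URL and file name are made-up examples):

echo "deb http://example.com/ubuntu lucid main" | sudo tee /etc/apt/sources.list.d/example.list
sudo apt-get update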

Standardized Maintenance and Management

One of the biggest benefits of installing packages via the repositories (other than it making your life easier), is that the program is now maintained by the package management system. This means that your system has a standardized way of

  1. Having on record what is installed and what files are owned by the package
  2. Reinstalling if files go missing or become corrupted
  3. Cleanly removing the package
  4. Finding and installing updates for the package.

For packages installed via other methods there is usually no uninstall or automated update support.

Some of the more advanced programs have built-in support for this. But if you installed such a program into a shared location owned by root, you won't be able to update it as a regular user. I usually get around this by temporarily changing the ownership of the directory, doing the update and restoring the ownership.
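
That workaround boils down to something like the following, where the install path and updater script are hypothetical examples:

# Temporarily take ownership, run the program's own updater, then restore root ownership.
sudo chown -R "$USER" /opt/someapp
/opt/someapp/update.sh
sudo chown -R root:root /opt/someapp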

Exploring

With the large variety of packages available via the Ubuntu repositories, you have an endless number of programs to try out if you feel like exploring. I have had some of these moments where I just pick some random spot in the list and start reading the description of each package until I find something that seems interesting. I'll then install it, play around and return to the list to find another one.

It’s a very good way of learning about new programs and projects, and certainly an amusing exercise.

Conclusion

So Why Love Linux? Because, being the product of open source communities, there are tons of projects out there, and decent repositories and package management systems make them easily available.