Archive

Posts Tagged ‘terminal’

Pipes and FIFOs

June 8, 2011 Leave a comment

Overview

The basic design of the Unix command line, the pipes and FIFOs design, is what makes it so powerful and why many people swear by it. The basic idea is that you have many simple commands, each taking an input and producing an output, and that you string them together into something that gives the desired effect.

Standard IO Streams or Pipes

To start out, let me give a rough explanation of the standard IO streams. These are Standard Input, Standard Output and Standard Error. Standard Output and Standard Error are both output streams, writing out whatever the application puts into them. Standard Input is an input stream, giving to the application whatever is written into it. Every program, when run, has these 3 streams available to it by default.

From this point forward I’ll refer to the 3 streams as STDOUT for Standard Output, STDERR for Standard Error and STDIN for Standard Input. These are the common short forms or constant names for these streams.

Take for example the echo and cat commands. The echo command takes all text supplied as arguments on its command line and writes it out to STDOUT. For example, the following command will print the text “Hi There” to the STDOUT stream, which by default is linked to the terminal’s output.

echo Hi There

Then, in its simplest form, the cat command takes all data it reads from its STDIN stream and writes it back out to STDOUT exactly as it was received. You can also instruct cat to read in the contents of one or more files and write them back out to STDOUT. For example, to read in the contents of a file named namelist and write it to STDOUT (the terminal), you can do:

cat namelist

To see cat in its purest form, simply run it without arguments, as:

cat

Each line of input typed in will be duplicated. This is because the input you type is sent to STDIN. This input is received by cat, which writes it back to STDOUT. The end of your input can be indicated by pressing Ctrl+D, the EOF or End of File key. Pressing Ctrl+D closes the STDIN stream, which the program handles the same as if it were reading a file and came to the end of that file.

Pipes and Redirects

Now, all command line shells allow you to do some powerful things with these IO streams. Each type of shell has its own syntax, so I will be explaining these using the syntax of the Bash shell.

You could for instance redirect the output from a command into a file using the greater than or > operator. For example, to redirect the STDOUT of the echo command into a file called message, you would do:

echo Hi There > message

You could also read this file back into a command using the less than or < operator. This will take the contents of the file and write it to the command’s STDIN stream. For example, reading the above file into the cat program would have it written back to STDOUT. This has the same effect as supplying the filename as an argument to cat, but instead uses the IO streams to supply the data.

cat < message
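Since STDERR is a separate stream from STDOUT, it has its own redirect operator in Bash, namely 2>. Here is a small sketch to see the two streams go to different places (missing-file is just an assumed name for a file that does not exist):

```shell
# ls writes the directory listing to STDOUT and the error about
# the nonexistent file to STDERR; send each to its own file.
ls . missing-file > out.txt 2> err.txt

# err.txt now contains only the error message.
cat err.txt
```

This is also why error messages still appear on the terminal when you redirect a command’s output to a file: only STDOUT was redirected.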

Where things really get powerful is when you start stringing together commands. You can take the STDOUT of one command and pipe it into the STDIN of another command, with as many commands as you want. For example, the following command pipes the message “Pipes are very useful” into the cut command, instructing it to give us the 4th word of the line. This will result in the text “useful” being printed to the terminal.

echo Pipes are very useful | cut -f 4 -d " "

As you can see, commands are strung together with the pipe or | operator. The pipe operator by itself makes many powerful things possible.

Using the pipe (|) and redirect (>) operators, let’s give a more complex example. Let’s say we want to get the PID and user name of all running processes, sorted by the PID and separated by a comma. We can do something like this:

ps -ef | tail -n+2 | awk '{print $2 " " $1}' | sort -n | sed "s/ /,/"

To give an idea of what happens here, let me explain the purpose of each of these commands with the output each one produces (which becomes the input of the command that follows it).

Command: ps -ef
Gives us a list of processes with many columns of data, of which the 1st column is the user and the 2nd column is the PID.

Output:

UID        PID  PPID  C STIME TTY          TIME CMD
root      4222   443  0 20:14 ?        00:00:00 udevd
quintin   3922  2488  0 20:14 pts/2    00:00:00 /bin/bash
quintin   4107  2496  0 20:18 pts/0    00:00:00 vi TODO

Command: tail -n+2
Takes the output of ps and gives us all the lines from line 2 onwards, effectively stripping the header.

Output:

root      4222   443  0 20:14 ?        00:00:00 udevd
quintin   3922  2488  0 20:14 pts/2    00:00:00 /bin/bash
quintin   4107  2496  0 20:18 pts/0    00:00:00 vi TODO

Command: awk '{print $2 " " $1}'
Takes the output of tail, and prints the PID first, a space, and then the user name. The rest of the data is discarded here.

Output:

4222 root
3922 quintin
4107 quintin

Command: sort -n
Sorts the lines received from awk numerically.

Output:

3922 quintin
4107 quintin
4222 root

Command: sed "s/ /,/"
Replaces the space separating the PID and user name with a comma.

Output:

3922,quintin
4107,quintin
4222,root

Some Example Useful Commands

The above should give you a basic idea of what it’s all about. If you feel like experimenting, here are a bunch of useful commands to mess around with.

I’ll be describing the commands from the perspective of the standard IO streams. So even though I don’t mention it, some of these commands also support reading input from files specified as command line arguments.

To get more details about the usage of these commands, see the manual page for the given command by running:

man [command]


Command Description
echo Writes to STDOUT the text supplied as command line arguments.
cat Writes to STDOUT the input from STDIN.
sort Sorts all lines of input from STDIN.
uniq Strips duplicate lines. The input needs to be sorted first, so the same basic effect can be achieved with just sort -u.
cut Splits each line by a specified character and returns the requested parts.
grep Searches for a specified pattern or string in the data supplied via STDIN.
gzip Compresses the input from STDIN and writes the result to STDOUT. Uses gzip compression.
gunzip Uncompresses the gzip input from STDIN and writes the result to STDOUT. Basically the reverse of gzip.
sed Stream editor applying basic processing and filtering operations to STDIN, writing the result to STDOUT.
awk Pattern scanning and processing language. Powerful script-like processing of lines/words from input.
column Takes the input from STDIN and formats it into columns, writing the result to STDOUT. Useful for displaying data.
md5sum Takes the input from STDIN and produces an MD5 sum of the data.
sha1sum Takes the input from STDIN and produces a SHA-1 sum of the data.
base64 Takes the input from STDIN and base64 encodes or decodes it.
xargs Takes input from STDIN and uses it as arguments to a specified command.
wc Counts the number of lines, words or characters read from input.
tee Reads input and writes it to both STDOUT and a specified file.
tr Translates or deletes characters read from input.
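As a quick illustration of combining a few of these, the following sketch counts which word occurs most often in some sample input: sort groups the duplicate lines together, uniq -c prefixes each unique line with its count, sort -rn ranks the counts in descending order and head keeps only the top line.

```shell
# Build some sample input, one word per line.
printf 'apple\nbanana\napple\ncherry\napple\nbanana\n' > fruits.txt

# Group, count, rank and take the winner.
sort fruits.txt | uniq -c | sort -rn | head -n 1
```

This prints the line “3 apple” (with some leading whitespace from uniq -c).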

Conclusion

I would recommend that anyone get comfortable with these aspects of the Linux terminal, as well as Bash scripting. Without knowing this, you might not even realize how many of your common tasks could be automated or simplified by it. Also remember that automation not only completes your tasks quicker, but also reduces the chance of the errors and mistakes that come from doing repetitive tasks by hand.

So Why Love Linux? Because the pipes and FIFOs pattern gives you a lot of power for building complex instructions.

Terminal Auto Complete

May 31, 2011 Leave a comment

Overview

Command line, or Bash completion is a Bash user’s best friend. If you had to type out everything you do on the command line, you would be doing a lot of typing. Similar to auto completion in most text fields these days, Bash completion allows you to start typing something and then have Bash complete it for you.

Basic Usage

You activate completion with the TAB key.

So let’s say you wanted to change into a directory named books, and there were 3 directories available, namely books, movies and music. If you were to type

$ cd b

and then press the TAB key, it would end up as

$ cd books

Though, if there were another directory next to books that also starts with bo, for example boxes, then Bash would only complete up to what they have in common, which is bo. You can then press the TAB key again to have it show you the available options, like so:

$ cd bo
books/ boxes/

From here you can just type one more character to distinguish them and then press TAB again to have it complete the whole word. So you would type o and press TAB, and your command will be completed.

You get used to this very quickly, and it becomes a very powerful tool.

Smart Completion

Basic completion of files and directories is the default, but you also get smart completion, which is command specific. These completions are computed by procedures, based on what you have typed so far. For example, if you type:

$ unzip

and then press the TAB key, it could complete only ZIP files, instead of suggesting all available files.

It doesn’t stop there, though. Non-file completions are also possible. For example, the apt-get command has various sub-commands like install, remove, source and upgrade. All its sub-commands are also completed. For example, typing

$ apt

and pressing the TAB key twice, gives you

$ apt-get 
autoclean  autoremove    build-dep        check
clean      dist-upgrade  dselect-upgrade  install
purge      remove        source           update
upgrade

You can now start typing one of them and press TAB to have it complete. For example, typing i and pressing TAB will complete into install.

Customizing

It gets even better. All of this is very easily customized. If I wanted the perl command to be completed only with files ending in the .pl extension, I could achieve this by creating a file at /etc/bash_completion.d/perl and giving it the following contents:

complete -f -X '!*.pl' perl

That’s it. Whenever I complete the arguments of the perl command, it will only do so for files ending in .pl.
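Completions don’t have to come from filenames at all. As another sketch, the -W option of the complete builtin takes a fixed list of words; the myservice command here is purely hypothetical:

```shell
# Complete the arguments of the hypothetical myservice command
# from a fixed word list instead of from filenames.
complete -W "start stop restart status" myservice
```

After this, typing myservice s and pressing TAB twice would offer start, stop and status.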

Other Shortcuts

There are some other shortcuts also related to command line editing and history.

You can select your previous command by pressing the Up key. Repeatedly pressing this allows you to browse through your history of commands. Pressing the Down key takes you forward through your history.

Or, if you start typing a command and realize you missed something and want to clear the whole line to start over, you can press Alt+R. This is like pressing and holding backspace until the line is cleared. An alternative is to just press Ctrl+C and return to a fresh command line.

Sometimes you want to perform a series of commands on the same file. For example, to change the ownership and permissions of the same file you could type:

chown root:root myfile
chmod 0644 myfile

This requires typing the filename twice. Bash will do this for you on the second line if you press Alt+. (Alt and the full stop character). For example, typing:

chown root:root myfile
chmod 0644

and then pressing Alt+., will complete your command by taking the last argument of the previous command (ie. myfile) and inserting it at your current position, resulting in:

chmod 0644 myfile

Conclusion

So Why Love Linux? Because the command line, one of Linux’s true powers, offers powerful editing capabilities.

Pipe a Hard Drive Through SSH

May 24, 2011 Leave a comment

Introduction

So, assume you’re trying to clone a hard drive, byte for byte. Maybe you just want to back up a drive before it fails, or you want to do some filesystem data recovery on it. The point is you need to mirror the whole drive. But what if you can’t install a second destination drive into the machine, as is the case with most laptops? Or maybe you just want to do something funky?

What you can do is install the destination drive into another machine and mirror it over the network. If both machines have Linux installed then you don’t need any extra software. Otherwise you can just boot from a Live CD Linux distribution to do the following.

Setup

We’ll assume the source hard drive on the client machine is /dev/sda, and the destination hard drive on the server machine is /dev/sdb. We’ll be mirroring from /dev/sda on the client to /dev/sdb on the server.

An SSH server instance is also installed and running on the server machine and we’ll assume you have root access on both machines.

Finally, for simplicity in these examples we’ll name the server machine server-host.

The Command

So once everything is set up, all you need to do is run the following on the client PC:

dd if=/dev/sda | ssh server-host "dd of=/dev/sdb"

And that’s it. All data will be

  1. read from /dev/sda,
  2. piped into SSH, which will in turn
  3. pipe it into the dd command on the other end, which will
  4. write it into /dev/sdb.

You can tweak the block sizes and copy rates with dd parameters to improve performance, though this is the minimum you need to get it done.
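For example, a larger block size and on-the-fly compression can speed things up considerably over a slow network. A sketch of such a tuned variant of the same command (the 4M block size is just a reasonable starting point, not a measured optimum):

```shell
# Read in 4 MiB blocks, compress before sending over the wire,
# and decompress on the server before writing to the destination.
dd if=/dev/sda bs=4M | gzip -c | ssh server-host "gunzip -c | dd of=/dev/sdb bs=4M"
```

Compression helps most when the drive holds compressible data or large empty regions; for already-compressed data it can even slow things down.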

Conclusion

So Why Love Linux? Because its Pipes and FIFOs design is very powerful.

No More Ctrl+C Echo

May 18, 2011 Leave a comment

I’ve always liked the terminal, and especially the Ctrl+C key. If I make a bad typo, forget something or want to abort some command, Ctrl+C is always an option. Sometimes I just want to remember something, like someone giving me a telephone number, so I quickly type it onto the command prompt and press Ctrl+C. Then I can put it somewhere more persistent when I have the time. The point is that Ctrl+C will immediately return me to the command prompt. I would much rather press Ctrl+C and return to the prompt immediately than have to press and hold backspace or Alt+Backspace for a couple of seconds. I tend to optimize things a lot to help me achieve my goal as fast as possible, and Ctrl+C is a tool you can use for much more than just aborting a running command.

Now, some configurations will echo the text ^C when you press Ctrl+C. I’ve always had this turned off, and got used to having it that way. So when I upgraded to Ubuntu 9.10, something changed. For some reason it was echoing ^C every time I pressed it. I figured I’d disable it later and continued with it turned on. After a while it really started irritating me, because my screen was full of ^C, which inspired me to disable it immediately.

Having never done this myself (it’s always been off by default) I did a quick Google to find out how it’s turned off. After some digging I found a terminal option I could disable with stty, called echoctl.
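For reference, stty options are toggled with a leading minus sign to disable them, so turning this one off looks as follows (it has to run in an interactive terminal, since stty operates on the current TTY):

```shell
# Disable echoing control characters such as ^C and ^Z:
stty -echoctl

# The current settings should now list -echoctl:
stty -a | grep -o -- -echoctl

# And echoctl (without the minus) turns it back on:
stty echoctl
```

To keep it across sessions, the stty -echoctl line can be added to ~/.bashrc.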

I gave this a try and it seemed to work. No more ^C when pressing it at the command prompt. Then I started up cat and pressed Ctrl+C to abort the command, and there it was again.

Failure.

After some investigation and experimentation I realized that it was turned off everywhere except when aborting a running command in the gnome-terminal with the xterm TTY configuration. Even just running a screen session inside the gnome-terminal would have it gone for good. So if I had to abort a command it would still print, and there didn’t seem to be an easy way around this. If there was, I missed it.

Now… I don’t like defeat. So I decided to play dirty and change it right where it comes from. I used the fantastic package management tool apt and prepared an environment for building the kernel. Then I jumped into the tty source and changed it to not print ^C at ALL when echoctl is turned off.

After building and installing the new kernel, I just had to make sure the -echoctl option was persistent across boot, and finally had ^C gone for good.

So why love Linux? Because it makes it easy for you to be in complete control.