Archive

Posts Tagged ‘hacking’

Belkin F5D8053 Support in Ubuntu – Drivers from Nowhere

June 7, 2011

New Hardware

I recently purchased a new wireless ADSL router, which came with a USB WiFi network adapter. Being a fairly new hardware release, it wasn’t recognized by the kernel and I couldn’t use it.

However, after some research I found that it is indeed supported by the kernel – just not recognized, due to a change in the hardware ID reported by the device. The kernel module it should work with is the RT2870 driver.

If this is the case, it should be easy to fix.

Making it Work

If you have recently rebuilt your kernel you can skip section (A), and simply proceed with (B) and (C).

A. Preparing Build Environment

The steps for preparing the build environment are based on those needed for Ubuntu 10.04 (Lucid Lynx). If you have problems building the kernel after following these steps (especially on older versions of Ubuntu), see Kernel/Compile for instructions more tailored to your version of Ubuntu.

  1. Prepare your environment for building
    sudo apt-get install fakeroot build-essential
    sudo apt-get install crash kexec-tools makedumpfile kernel-wedge
    sudo apt-get build-dep linux
    sudo apt-get install git-core libncurses5 libncurses5-dev libelf-dev 
    sudo apt-get install asciidoc binutils-dev
  2. Create and change into a directory where you want to build the kernel
  3. Then download the kernel source:
    sudo apt-get build-dep --no-install-recommends linux-image-$(uname -r)
    apt-get source linux-image-$(uname -r)
  4. A new directory will be created containing the kernel source code. Everything else should happen from inside this directory.
B. Modifying the Source
  1. Run lsusb and look for your Belkin device. It should look something like this
    Bus 002 Device 010: ID 050d:815f Belkin Components
  2. Note the hardware identifier, which in the above example is 050d:815f. Save this value for later.
  3. Inside the kernel source, edit the file drivers/staging/rt2870/2870_main_dev.c
  4. Search for the string Belkin. You should find a few lines looking like this:
      { USB_DEVICE(0x050D, 0x8053) }, /* Belkin */
      { USB_DEVICE(0x050D, 0x815C) }, /* Belkin */
      { USB_DEVICE(0x050D, 0x825a) }, /* Belkin */
  5. Duplicate one of these lines and replace the ID with the one noted in (3) above, splitting it at the colon and adapting each part to the syntax used in the lines from (4). In my example, 050D:815F ends up as the following line:
      { USB_DEVICE(0x050D, 0x815F) }, /* Belkin */
  6. Save the file and proceed to (C)
C. Rebuilding and Reinstalling the Kernel
  1. Run the following command in the kernel source directory:
    AUTOBUILD=1 NOEXTRAS=1 fakeroot debian/rules binary-generic
  2. This command will take a few minutes to complete. When it’s done you’ll find the kernel image .deb files one level up from the kernel source directory.
  3. To update your kernel image, you can install these using dpkg. For example:
    sudo dpkg -i linux-image-2.6.32-21-generic_2.6.32-21.32_i386.deb
  4. After installing, reboot your system and the device should be working. (A quick way to verify this is shown below.)
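
A quick sanity check after the reboot is to confirm the device is visible and the driver loaded. A minimal sketch, assuming the staging driver loads as rt2870sta (the exact module name may differ on your kernel):

lsusb | grep -i belkin    # the device shows up on the USB bus
dmesg | grep -i rt28      # the driver recognized and bound the device
iwconfig                  # a new wireless interface should be listed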

Conclusion

Hardware manufacturers change device IDs for various reasons, mainly when changes were made to the device that require distinguishing it from previous releases (for example, new functionality added to an existing model). A driver will always have a list of devices with which it is compatible. Sometimes the changes made by the manufacturer don’t really change the interface to the device, which means previous drivers would work perfectly fine had they known they were able to control the device.

The change above was exactly such a case: the driver is able to work with the device, but doesn’t know it supports this exact model. So all we did was find the hardware ID actually reported by the device and add it to the list of IDs the driver will recognize and accept.

This is a great thing about Linux and the common Linux distributions. Almost all (if not all) packages you’ll find in your distribution are open source, making it possible to change whatever you need changed. Had this not been the case, we would have needed to wait for whoever maintained the drivers to supply a version that identifies this device, when the existing drivers could have worked perfectly fine all along.

So in this case, it was like getting a driver for the device out of thin air.

Further, even if you spent time researching it and the change didn’t work, if you’re like me you’ll be just as excited as if it had worked, because now you can spend time figuring out why it didn’t. That is actually my situation: the change didn’t make my device work. So I’m jumping straight in and fixing it. I’ll update this post when I get it working.

So Why Love Linux? Because by having the source code to your whole system available, you have complete freedom and control.

Knowing the Moment a Port Opens

June 5, 2011

Automated Attempts

Sometimes when a server is rebooted, whether a clean soft reboot or a hard reboot after a crash, I need to perform a task on it as quickly as possible. This can be for many reasons, from ensuring all services are started to making a quick change. Sometimes I just need to know the moment a certain service is started so I can notify everyone of this fact. The point is that every second counts.

When the server starts up and joins the network, you start receiving ping responses from it. At this point not all of the services have started yet (on most configurations at least), so I can’t necessarily log into the server or access the specific service yet. Attempting to do so, I would get a connection refused or port closed error.

What I usually do in cases where I urgently need to log back into the server is ping the IP address and wait for the first response packet. When I receive this packet I know the server is almost finished booting up. Now I just need to wait for the remote access service to start up. For Linux boxes this is SSH and for Windows boxes it’s RDP (remote desktop protocol).
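
Even that first wait can be automated. A minimal sketch using Linux’s iputils ping, where -c 1 sends a single packet and -W 1 waits at most a second for the reply:

while ! ping -c 1 -W 1 10.0.0.221 > /dev/null; do sleep 1; done; echo host is up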

I could try to repeatedly connect to it myself, but this is unnecessarily manual, and when every second counts, probably less than optimal. Depending on what I’m trying to do, I have different methods of automating this.

If I just needed to know that a certain service is started and available again, I would put a netcat session in a loop that repeatedly attempts a connection. As long as the service isn’t ready (the port is closed), the netcat command will fail and exit, and the loop will wait for 1 second and try again. As soon as the port opens, the connection will succeed and netcat will print a message stating that the connection is established, then wait for input (meaning the loop stops iterating). At this point I can just cancel the whole command and notify everyone that it’s up and running. The command for doing this is as follows:

while true; do nc -v 10.0.0.221 80; sleep 1; done

If I needed remote access to the server, I would use a similar command, substituting the remote access command and adding a break statement to quit the loop once the command succeeds. For example, for an SSH session I would use the ssh command, and for a remote desktop session the rdesktop command. A typical SSH command looks like:

while true; do ssh 10.0.0.221 && break; sleep 1; done

This will simply keep trying the ssh command until a connection has been established. As soon as a connection succeeds I receive a shell, and when I exit from it the loop breaks and returns me to my local command prompt.
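
A small refinement on the notification variant, assuming your netcat supports the -z (zero-I/O scan) flag: let the loop exit by itself the moment the port opens, so there’s nothing to cancel manually:

while ! nc -z 10.0.0.221 80; do sleep 1; done; echo port 80 is open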

Automatically Running a Command

If you had to run some command the moment you are able to do so, you could use the above SSH command with some minor modifications.

Let’s say you wanted to remove the file /opt/repository.lock as soon as possible. To keep it simple, we’re assuming the user you log in as has permission to do so.

The basic idea is that each time you fail to connect, SSH will return a non-zero status. As soon as you connect and run the command, you break out of the loop. For this to work, we need a zero exit status to distinguish a successful connect from a failed one.

The exit status of a successful connect, however, will depend on the command being run on the other end of the connection. If that command fails for some reason, you don’t want SSH to repeatedly try and fail, effectively ending up in a loop that won’t exit by itself. So you need to ensure its exit status is 0, whether it fails or not. You can handle the failure manually.

This can be achieved by executing the true command after the rm command. All the true command does is exit immediately with a zero (success) exit status. It’s the same command we use to create an infinite while loop in all these examples.

The resulting command is as follows:

while true; do \
  ssh 10.0.0.221 "rm -f /opt/repository.lock ; true" && break; \
  sleep 1; \
done

This will create an infinite while loop executing the ssh and sleep commands. As soon as an SSH connection is established, it will remove the /opt/repository.lock file and run the true command, which returns a 0 status. The SSH instance then exits with a success status, which breaks the while loop and ends the command, returning you to the command prompt. As with all the previous examples, when the connection fails the loop pauses for a second and then tries again.
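
One caveat worth hedging against: while the host is still booting, a single ssh attempt can hang for a while before failing. Capping it with the standard ConnectTimeout option keeps the loop ticking over every couple of seconds:

while true; do \
  ssh -o ConnectTimeout=2 10.0.0.221 "rm -f /opt/repository.lock ; true" && break; \
  sleep 1; \
done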

Conclusion

By using these commands instead of repeatedly trying to connect yourself, there is at most 1 second between the moment the service starts and the moment you’re connected. This can be very useful in emergency situations where every second of a problem could cost you money or reputation.

The Linux terminal is a powerful place and I sometimes wonder if those who designed the Unix terminal knew what they were creating and how powerful it would become.

So Why Love Linux? Because the Linux terminal allows you to optimize your tasks beyond human capability.


Building from Source Has Never Been Easier

June 4, 2011

Overview

For me, one of the greatest things Debian gave to the world was apt and dpkg, i.e. Debian’s package management system. It does a brilliant job of almost everything and is very easy to use. What I’ll be explaining in this post is how you can use these tools to customize a package at the source level.

If you wanted to change something in the source code of a package, you could always check it out from the project’s revision control system, or download it from the project’s web site. However, this won’t necessarily be the same version you received through the repositories, and it will most probably not have all the patches applied by the distribution’s authors.

There are benefits to getting the latest vanilla version, though there are more cons than pros when compared to using apt and dpkg to get and build the source. Provided one is available, some of the benefits of using the source package from the repositories are:

  1. The source code you will be editing will be for the same version as the package you have installed.
  2. The source will have all the patches applied by the distribution’s authors. Some of these patches add extra functionality, which would be lost if you used the vanilla source code.
  3. The package version and patches from the distribution are what was tested within that environment.
  4. You are building a .deb package file, which can be installed and/or added to a repository for easy use on multiple installations.
  5. When using a .deb file you benefit from dependency management.
  6. Having a .deb, you can control how new versions of the package are handled (like preventing new installations, safely overriding with new versions, etc.).
  7. With a .deb it’s easy to remove the package and install the original again.

Points 4 to 7 are also achievable when downloading the vanilla source, though that requires many more steps and is far more complicated than the technique I’m describing in this post.
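
As a concrete example of point 6, here is a sketch using dpkg’s hold mechanism to stop a routine upgrade from silently replacing your customized build with the repository version:

echo "blueproximity hold" | sudo dpkg --set-selections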

Above all of these benefits, the biggest reason I like to follow this approach when hacking the source of packages from my installations is definitely its simplicity. When summarized, it all comes down to the 3 commands I’ll list in the conclusion of this post.

Getting the Source

Before you can start building you need to prepare your environment for it. Run the following command to install the necessary packages:

quintin:~$ sudo apt-get install build-essential fakeroot dpkg-dev

So, for all the examples I’ll be using blueproximity as the package to be built. It’s a Python script, so you don’t really need to download separate source code to modify it. Still, to demonstrate this technique I figured it’s a good candidate given its small size and simple structure.

So to get the source, I’ll make a directory called src and change into it.

quintin:~$ mkdir src
quintin:~$ cd src/

Then instruct apt to download the source code for the project named blueproximity.

quintin:~/src$ apt-get source blueproximity
Reading package lists... Done
Building dependency tree
Reading state information... Done
Need to get 309kB of source archives.
Get:1 http://repo/ubuntu/ lucid/universe blueproximity 1.2.5-4 (dsc) [1,377B]
Get:2 http://repo/ubuntu/ lucid/universe blueproximity 1.2.5-4 (tar) [301kB]
Get:3 http://repo/ubuntu/ lucid/universe blueproximity 1.2.5-4 (diff) [6,857B]
Fetched 309kB in 9s (32.6kB/s)
gpgv: Signature made Mon 24 Aug 2009 00:52:04 SAST using DSA key ID 7ADF9466
gpgv: Can't check signature: public key not found
dpkg-source: warning: failed to verify signature on ./blueproximity_1.2.5-4.dsc
dpkg-source: info: extracting blueproximity in blueproximity-1.2.5
dpkg-source: info: unpacking blueproximity_1.2.5.orig.tar.gz
dpkg-source: info: applying blueproximity_1.2.5-4.diff.gz

As you can see, apt:

  1. downloaded the source tarball blueproximity_1.2.5.orig.tar.gz,
  2. downloaded a patch file blueproximity_1.2.5-4.diff.gz,
  3. extracted the source code into the directory blueproximity-1.2.5,
  4. and then applied the patch to this directory.

At this stage the source is ready for editing.

Building the Source

In order for your build to complete successfully you might need some development dependencies. These are usually header files or link libraries, often named after the package with a -dev suffix. Apt can install everything needed to build a specific package using the build-dep command.

To make sure we have all these dependencies for building blueproximity, we run:

quintin:~/src$ sudo apt-get build-dep blueproximity
Reading package lists... Done
Building dependency tree
Reading state information... Done
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

In my case there weren’t any build dependencies needed, and thus nothing was installed.

Once you’re happy with your changes and want to build the .deb file, you simply need to

  1. Change into the root of the extracted project source code
    quintin:~/src$ cd blueproximity-1.2.5
  2. And run the build.
    quintin:~/src/blueproximity-1.2.5$ dpkg-buildpackage -rfakeroot -uc -b
    [truncated output]
    dpkg-deb: building package `blueproximity'
    in `../blueproximity_1.2.5-4_all.deb'.
    dpkg-deb: warning: ignoring 1 warnings about the control file(s)
    
    dpkg-genchanges -b >../blueproximity_1.2.5-4_i386.changes
    dpkg-genchanges: binary-only upload - not including any source code
    dpkg-buildpackage: binary only upload (no source included)

You’ll see a lot of output, which I truncated here. The result will be one or more .deb files in the directory where you downloaded the source (in the example, the one named src).
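
Installing the freshly built package is then a single dpkg call, using the file name from the output above:

quintin:~/src/blueproximity-1.2.5$ sudo dpkg -i ../blueproximity_1.2.5-4_all.deb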

Conclusion

As you can see, there is very little needed to get a package’s source code and build a .deb from it. I’ve done this a few times, adding all the packages I’ve modified to a repository of my own, and have found this approach works very well for integrating my changes into the existing system as seamlessly as possible.

There are basically 3 commands needed to build a package from source (excluding setting up your environment for this). These are:

apt-get source [package name]
apt-get build-dep [package name]
dpkg-buildpackage -rfakeroot -uc -b

The first 2 commands you run in a dedicated directory created for this purpose. The first command creates a subdirectory into which it extracts the source code, and it’s in this subdirectory that you run the last command. The first 2 commands also only need to be run once. The last command you can run each time you want to build a package from the same source code, perhaps when making changes for a second or third time.

It’s really that simple.

So Why Love Linux? Because apt opens up a very easy way of customizing your system at as low a level as the source code.

Within the Blue Proximity

June 2, 2011

Overview

I read about an awesome little program called Blue Proximity. It’s a Python script that repeatedly measures the signal strength of a selected Bluetooth device. It then uses this knowledge to lock your computer when you move away from it, and to unlock it (or keep it unlocked) when you are close to it.

It’s very simple to set up. It has a little GUI from which you select the device you want to use, then specify the distances at which to lock/unlock your computer, as well as the time delays for the lock/unlock process. The distance isn’t measured in meters or feet, but in a generic unit: an 8-bit signed scale based on the signal strength measured from the device, which isn’t terribly accurate. It’s not a perfect science, and a lot of factors affect the reading.

So the general idea is that you get your environment as close to normal as you would usually have it, then try different values for the lock/unlock distances until you find a configuration that works best for you. There are a few more advanced parameters to play with as well, especially the very useful ring buffer size, which lets you average the distance value over the last few readings instead of using the raw value each time. It’s certainly worth playing around with these values until you find what gives you the best result.

You can even go as far as specifying the commands to be executed for locking/unlocking the screen. The defaults are probably sufficient for most purposes, but the option is definitely there for those who want to run other commands.
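
For reference, on a Gnome desktop of that era the lock/unlock commands would typically look something like the following (a sketch, not necessarily Blue Proximity’s exact defaults; check what your own screensaver provides):

gnome-screensaver-command --lock          # the locking command
gnome-screensaver-command --deactivate    # the unlocking command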

Beyond just locking/unlocking, there is also a proximity command feature, which ensures the computer doesn’t lock from inactivity as long as you’re close to it. This is very useful when you’re watching a movie or a presentation and don’t want the screen to keep locking just because you didn’t move the mouse or type on the keyboard.

My Setup

Before I had this program, my computer would lock after a 10 minute idle period. When I returned, it was almost automatic for me to start typing my password. The Gnome lock screen is cleverly optimized in that you can simply start typing your password even before the password dialog displays: it recognizes the first key press in a locked state as an indication of your intent to unlock the screen, and also uses it as the first character of your password.

After I configured and hacked Blue Proximity to my liking, the screen locks as soon as I’m about 3 meters away from the computer, and unlocks when I’m right in front of it. I configured a 10 second ring buffer to average each reading over those of the past 10 seconds. I also made values of 0 or higher (the closest readings to the computer) count as double entries, meaning that when 0 values are being read the average drops to 0 twice as fast. This makes it more stable while I’m moving around, yet unlocks very quickly when I’m standing right next to the machine. It all works very well.

It’s been a few days now, and I’m still amused when I get to the computer and it unlocks by itself. Sometimes I even start getting ready to enter my unlock password just as the screen automatically unlocks. Very amusing.

It’s not perfect; sometimes the screen locks while I’m busy using the computer and then immediately unlocks again. This is to be expected given the nature of wireless technologies, though I’m sure a bit more hacking and tuning will get it at least as close to perfect as it can be.

Conclusion

It’s typical of the software world to always produce amusing and fun utilities like this one. This is definitely one of my favorites.

So Why Love Linux? Because there are tons of free and open source programs and utilities of all kinds.

Custom Boot Screen

May 30, 2011

The Look and Feel

When I installed a new Ubuntu version, the look of the boot loader screen had been changed to a plain terminal-like look and feel. I liked the high resolution look it had before and was determined to bring a similar look to my new installation.

After some investigation I found out how to set a background for Grub and configured one I liked. This was all good, except that the borders and text for Grub didn’t quite fit in with my background image. Grub, for instance, had its name and version at the top of the screen, which you can’t change or turn off. And the borders/background for selecting boot options weren’t configurable, except for changing between a handful of colors.

I was already busy with customization and figured I’d go all out and make it look just the way I wanted. Using Ubuntu (or Debian) makes this all easy, as I’m able to get the source, build it and package it into a .deb ready for installation using only 2 commands. So I started hacking at the source, and eventually came up with something I liked even more than what I had before.

My Grub Boot Screen

The List

Further, the actual list of items displayed in the boot screen is generated by a script, which detects all the kernels and supported bootable partitions. I also modified these scripts to make sure the list it generates, and the item it selects as the default, are what I wanted them to be.

I added support for a configuration entry like this:
GRUB_DEFAULT=image:vmlinuz-2.6.32-21-generic

So when anything is installed which triggers the Grub script to run, or I run it manually, this option instructs the script to use the specified Linux image as the default option. It also supports a value of “Windows”, which when set makes the first Windows partition found the default boot option.

Further, I added functionality so that if the default was configured as a Linux image, the script would also create an extra entry for it with crash dump support enabled. For all other Linux images it would just generate the standard and recovery mode entries.

Finally, all Windows and other operating system options would be generated and appended to the end of the list.
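
With these script changes in place, regenerating the boot menu after editing the configuration (on a stock Ubuntu install the GRUB_* settings live in /etc/default/grub) is the standard single command:

sudo update-grub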

Conclusion

So Why Love Linux? Because your imagination sets the boundaries of what is possible. It’s possible and easy to make your computer do exactly what you want it to do.

Easily Create In-Memory Directory

May 27, 2011

In Memory

If you have ever had to run a task which performs a lot of on-disk file operations, you know how much of a bottleneck the hard drive can be. It’s just not fast enough to keep up with the demands of the program running on the CPU.

So what I usually do when I need to run a very disk intensive task is move it all to an in-memory filesystem. Linux comes with a module called tmpfs, which allows you to mount such a filesystem.

So, assuming my file operations would be happening at /opt/work, what I would do is:

  1. Move /opt/work out of the way, to /opt/work.tmp
  2. Make a new directory called /opt/work
  3. Mount the in-memory filesystem at /opt/work
    sudo mount -t tmpfs none /opt/work
  4. Copy the contents of /opt/work.tmp into /opt/work.
  5. Start the program.
  6. Then when the program completes, copy everything back, unmount the in-memory filesystem and clean up (the full sequence is sketched below).
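
Put together, the whole sequence is only a handful of commands. A sketch following the steps above; adjust paths and permissions to your situation:

sudo mv /opt/work /opt/work.tmp         # 1. move the directory out of the way
sudo mkdir /opt/work                    # 2. recreate the mount point
sudo mount -t tmpfs none /opt/work      # 3. mount the in-memory filesystem
sudo cp -a /opt/work.tmp/. /opt/work/   # 4. copy the contents into memory
# 5. run the disk-intensive program here
sudo cp -a /opt/work/. /opt/work.tmp/   # 6. copy the results back to disk,
sudo umount /opt/work                   #    unmount the in-memory filesystem
sudo rmdir /opt/work                    #    and clean up
sudo mv /opt/work.tmp /opt/work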

Now, since the directory where all the operations happen isn’t actually on disk (but in memory), they can proceed without any hard drive bottleneck.

Here are some benchmark results showing how much faster this is. I was writing 1GB of data to each filesystem. Specifically look at the last bit of the last line.

For the memory filesystem:
quintin@quintin-VAIO:/dev/shm$ dd if=/dev/zero of=test bs=$((1024 * 1024)) count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 0.984447 s, 1.1 GB/s

And for the hard drive filesystem:
quintin@quintin-VAIO:/tmp$ dd if=/dev/zero of=test bs=$((1024 * 1024)) count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 24.2657 s, 44.2 MB/s

As you can see, the memory filesystem was more than 24 times faster than the hard drive filesystem: I was able to write at only 44.2MB/s on the hard drive, but at 1.1GB/s on the memory filesystem. Note that this isn’t even its maximum, since I had some other bottlenecks here; if you were to optimize those away you would be able to write to the memory filesystem even faster. The fact remains that filesystem intensive tasks can run much faster when done this way.

There are some risks involved: losing power to the system will cause everything in the memory filesystem to be lost. Keep this in mind when using it. In other words, don’t store any critical/important data only in an in-memory filesystem.

The Core of it All

So in the end it all comes down to the fact that you can easily create an in-memory filesystem. All you need to do is decide on a directory you want to be in memory, and mount it as such.

For example, if we were to choose /home/mem as an in-memory directory, we can mount it as follows:
sudo mount -t tmpfs none /home/mem

If we need it to be persistently mounted as such (across system boots), we can add the following line to /etc/fstab:
none /home/mem tmpfs defaults 0 0
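
Since tmpfs is backed by memory (and swap), it’s also worth capping its size so a runaway task can’t exhaust your RAM. The size= mount option does this; for example, limiting the filesystem to 512MB:
none /home/mem tmpfs defaults,size=512M 0 0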

Conclusion

So Why Love Linux? Because with a single command you can get a temporary in-memory filesystem running to help you speed up a filesystem intensive task.

Flexible Firewall

May 19, 2011

I have a very strict incoming filter firewall on my laptop, ignoring any incoming traffic except for port 113 (IDENT), which it rejects with a port-closed ICMP packet. This is to avoid delays when connecting to IRC servers.

Now, there is an online test at Gibson Research Corporation called ShieldsUP!, which tests whether your computer is stealthed on the internet. What they mean by stealth is that it doesn’t respond to any traffic originating from an external host. A computer in “stealth” is obviously a good idea, since bots, discovery scans or a stumbling attacker won’t be able to determine whether a device sits behind your IP address. And even if someone knew for sure a computer was behind the IP address, being in stealth means less information can be discovered about it. A single closed and open port is enough for NMAP to determine some frightening things.

So, since I reject port 113 traffic, I’m not completely stealthed. I wasn’t really worried about this, though. But I read an interesting thing on the ShieldsUP! page about ZoneAlarm adaptively blocking port 113 depending on whether or not your computer has an existing relationship with the IP requesting the connection. This is clever, as it ignores traffic to port 113 from an IP unless you have previously established a connection with that same IP.

Being me, I found this very interesting and decided to implement it in my iptables configuration. The perfect module for this is obviously ipt_recent, which allows you to record the address of a packet in a list, and then run checks against that list for other packets passing through the firewall. I was able to do this by adding a single rule to my OUTPUT chain and modifying my existing REJECT rule for port 113. It was really that simple.

The 2 rules can be created as follows:
-A OUTPUT ! -o lo -m recent --name "relationship" --rdest --set
-A INPUT ! -i lo -p tcp -m state --state NEW -m tcp --dport 113 -m recent --name "relationship" --rcheck --seconds 60 -j REJECT --reject-with icmp-port-unreachable

The first rule matches any packet originating from your computer and about to leave it, and records the destination address in a list named relationship. So all traffic leaving your computer is captured in this list. The second rule checks any traffic coming into the computer on port 113 against this relationship list; if the source IP is in the list and has been communicated with in the last 60 seconds, the packet is rejected with the port-closed ICMP response. If these conditions aren’t satisfied, the action is not performed and the rest of the chain is evaluated (which in my case results in the packet being ignored).
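
The recent module also exposes its lists under /proc, which is handy for verifying that the relationship list is actually being populated. Depending on your kernel version the path is one of:

cat /proc/net/ipt_recent/relationship   # older kernels
cat /proc/net/xt_recent/relationship    # newer kernels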

Note that these 2 rules alone won’t make your PC pass this “stealth test”. For steps on setting up a stealth firewall, see the Adaptive Stealth Firewall on Linux guide.

So Why Love Linux? Because built into the kernel is netfilter, an extremely powerful, secure and flexible firewall, and iptables allows you to easily bend it to your will.

No More Ctrl+C Echo

May 18, 2011

I’ve always liked the terminal, and especially the Ctrl+C key. If I make a bad typo, forget something or want to abort some command, Ctrl+C is always an option. Sometimes I just want to remember something, like someone giving me a telephone number, so I quickly type it onto the command prompt and press Ctrl+C. Then I can put it somewhere more persistent when I have the time. The point is that Ctrl+C will immediately return me to the command prompt. I would much rather press Ctrl+C and return to the prompt immediately than have to press and hold Backspace or Alt+Backspace for a couple of seconds. I tend to optimize things a lot to help me achieve my goal as fast as possible, and Ctrl+C is a tool you can use for much more than just aborting a running command.

Now, some configurations will echo the text ^C when you press Ctrl+C. I’ve always had this turned off and got used to having it that way. So when I upgraded to Ubuntu 9.10, something changed: for some reason it was echoing ^C every time I pressed it. I figured I’d disable it later and continued with it turned on. After a while it really started irritating me, because my screen was full of ^C, which inspired me to disable it immediately.

Having never done this myself (it’s always been off by default), I did a quick Google search to find out how to turn it off. After some digging I found a terminal option I could disable with stty, called echoctl.
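
Toggling it is a one-liner; the leading dash turns the option off. Put it in your shell startup file, e.g. ~/.bashrc, to apply it to every new terminal:

stty -echoctl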

I gave this a try and it seemed to work: no more ^C when pressing it at the command prompt. Then I started up cat and pressed Ctrl+C to abort the command, and there it was again.

Failure.

After some investigation and experimentation I realized that it was turned off everywhere except when aborting a command under the xterm TTY configuration in gnome-terminal. Even just running a screen session inside gnome-terminal would make it gone for good. But if I had to abort a command directly, it would still print, and there didn’t seem to be an easy way around this. If there was, I missed it somewhere.

Now… I don’t like defeat. So I decided to play dirty and change it right where it comes from. I used the fantastic package management tool apt and prepared an environment for building the kernel. Then I jumped into the tty source and changed it to not print ^C at ALL when echoctl is turned off.

After building and installing the new kernel, I just had to make sure the -echoctl option was persistent across boots, and finally had ^C gone for good.

So why love Linux? Because it makes it easy for you to be in complete control.