From the Canyon Edge -- :-Dustin

Tuesday, December 14, 2010

So Many Passwords...

Yesterday, there was an announcement that hashes of Gawker Media's account passwords had been compromised and published on the internet. I had never heard of Gawker Media.

Whoa, sucks for them!

A few hours later, I received an email from LifeHacker saying that its accounts are actually managed by Gawker and that there's a chance that my account might have been compromised.

Dang, sucks for me :-(

So I spent some time thinking about it, and I've decided I'm going to take a new approach to passwords and my hundreds of disparate accounts on the web...

The Code
  1. I am going to use even stronger passphrases for each of my primary accounts.
  2. I am going to always use different passphrases for each of those primary accounts.
  3. I am going to memorize each of those passphrases from (1) and (2).

  4. For all secondary accounts, I am going to use unique, randomly generated passphrases, perhaps created like this:
    apg -a 1 -m 15 -M SNCL -n 1 -c /dev/urandom
  5. I am not going to memorize any passphrases for secondary accounts. Rather, I will entrust my browser to save those passwords (which are stored in my encrypted home directory). I will use a password reset function any time I lose or forget or clear that database.
  6. I will maintain ~/.passwords.gpg -- an encrypted text file with all of my accounts and passwords -- and use the gnupg.vim plugin to securely edit the file.
(1), (2), and (3) are really no different from what I do now.

(4), (5) and (6) are what's really new to me. As of now, I'm separating primary and secondary accounts. I won't even attempt to remember passwords for the hundreds of secondary accounts out there. I'll randomly generate new passwords for each, cache them in my local application (which I believe is better protected), and just reset those passwords as necessary.

  • Primary accounts - the few things that I need or else I'm unable to get work done, or access other critical data (e.g. Gmail, Launchpad/Ubuntu SSO, ssh, gpg, eCryptfs)
  • Secondary accounts - everything else that has a password reset function and can be securely and locally cached in a browser's (or other application's) saved password database (e.g. Facebook, LinkedIn, Twitter, my banks, et al.)
Using the above, I will:
  1. Minimize the number of passphrases I have to remember.
  2. Strengthen and diversify the passphrases to my few primary accounts.
  3. Eliminate the possibility of any passphrase being cracked by brute force.
  4. Consolidate the risk of any one passphrase being stolen to that account alone.
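Incidentally, the random generation in (4) doesn't strictly require apg. Here's a rough equivalent drawn straight from /dev/urandom; the character set and length are arbitrary examples, not a recommendation:

```shell
# Generate a 15-character random password from /dev/urandom: keep only
# characters from an example set, then take the first 15
pw=$(head -c 4096 /dev/urandom | tr -dc 'A-Za-z0-9_@#%' | head -c 15)
echo "$pw"
```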
Does anyone else have better solutions to these problems?


Tuesday, November 30, 2010

These aren't the droids you're looking for

I recently inherited a disused Linksys wireless router. The previous owner had long since forgotten the WEP passphrase (although he still had embedded devices around the house that were connected to the WAP). Of course, it's trivial to reset a Linksys router back to the factory defaults, which I would use eventually. But before I did that, I thought I would try something else first.

Having never tried to crack a WEP key before, I thought this would be a nice opportunity to learn how. There are plenty of excellent, detailed tutorials out there. And this blog post isn't one of them.

It's merely a "note to self" -- what worked for me at this point in time. So if you're looking for a detailed explanation of the process or perhaps support in your quest, "These aren't the droids you're looking for. Move along, move along."

First, I created a directory to store the captured packets.
DIR=$(mktemp -d)
cd $DIR
I then installed the utilities from the 10.10 archive.
sudo apt-get install aircrack-ng
Next, I checked my interface.
sudo airmon-ng check wlan0
And I stopped any services using wlan0 (avahi-daemon, NetworkManager, wpa_supplicant). Then I started monitoring mode on the interface.
sudo airmon-ng start wlan0
Now, I needed to scan the airwaves, looking for my access point.
sudo airodump-ng mon0
When I recognized the ESSID I was looking for, I noted the BSSID and Channel number. Then, I started replaying ARP requests.
sudo aireplay-ng -3 -b $bssid -h 00:00:00:00:00:00 mon0
I let this run for a while in one window. At the same time, I started capturing replies in another window.
sudo airodump-ng --channel $channel --bssid $bssid --write dump mon0
And in a third window, I started analyzing the captured data, looking for the key.
sudo aircrack-ng *cap
It took roughly 7,500 ARP requests and IVs gathered over about 2 hours to divine the key, but it eventually worked like a charm!


Thursday, November 18, 2010


I was invited to speak last night at the Texas A&M University Linux Users Group (TAMULUG). Being an Aggie myself (Class of 2001, whoop), I was honored and thrilled to hang out with some Linux-loving Aggies.

My wife, Kim, joined me. We picked up a stack of pizzas and soft drinks and met the group in the brand new Mitchell Physics building on A&M's campus.

I really enjoyed the atmosphere and the conversations!

I spoke for about an hour on eCryptfs, going into deep technical detail on the design and implementation of Ubuntu's Encrypted Home Directory feature, and fielded a number of questions. I hooked my primary desktop up to the projector and we used a rescue CD to attempt an "attack" on my encrypted data. I gave a tour of the structure of the ~/.ecryptfs directory and what the encrypted contents look like. I then briefly introduced the /usr/bin/ecryptfs-* tools. We talked a bit about the cryptography involved and series of encryption/decryption/hashing that goes on.

Some of the attendees noticed that I was running Byobu in my terminal, and there was a bit of a mixed reaction. A few people noted that they liked it and had replaced their use of screen, while others smirked and shirked. I then introduced myself as the author of Byobu and the tone changed slightly. :-) I spent the next hour thoroughly exploring the inspiration, design, and features of Byobu. Many of the attendees were themselves GNU Screen experts, and we traded hacks, tips and tricks. In particular, we experimented with horizontal splits, vertical splits, and nested sessions in Byobu. Based on the response by the end of the talk, I think there were a couple of converts ;-)

I closed the talk with maybe 5 minutes of preaching about how important it is to get involved in open source while in college. I talked about opportunities within Ubuntu, Fedora, Debian, and other communities to work on packaging, development, bug triage, documentation, etc. Having conducted several dozen interviews over the years, I can speak to how important it is to have a public track record in open source, when applying for a job in open source.


RHEL6 from an Ubuntu Server Developer's Perspective

For some very good reasons, many people are quite excited about last week's RHEL6 release.

A friend and mentor of mine, Tim Burke, gives a technical introduction to some of RHEL6's features in the video here.

As an Ubuntu Core Developer working on the Ubuntu Server, I thought it prudent to take an honest look at RHEL6 and capture a few notes here, complimenting Red Hat on their new release, noting some differences between Ubuntu and RHEL, and perhaps finding a few lessons we could learn in Ubuntu.

The Download

Several years have passed since I last downloaded a RHEL ISO (I do check out Fedora from time to time). I was somewhat surprised that I had to register for an account to download the image. I was also surprised that the credentials I use to log bugs at other RH/Fedora sites were not accepted. Apparently, these identities are not yet federated into an SSO. To create this account, I had to accept Red Hat's legal statement, terms and conditions, and their export control policy documents.

I also had to agree that my email address could be shared with Red Hat's partners. :-/

Once I had my account, I was ready to download. At this point I realized that I actually wanted to save the ISO to a different machine, rather than my desktop. Since the ISO was behind an authentication wall, I wouldn't be able to use wget or rsync to pull it down to that command-line only system. Oh well. I'd have to just pull it to my desktop and then copy it over.

Once the download started, it came down really quickly, maxing out my 20Mbps cable internet downstream connection. I pulled the 3.2GB DVD ISO in a quick 22 minutes. Those guys must have some serious bandwidth! :-)

I suppose that I have learned to take for granted Ubuntu's free download policy, making heavy use of local mirrors and convenient access through wget, curl, rsync, zsync, or torrent downloads of Ubuntu ISOs from anywhere, to anywhere.

In any case, I now have a trial RHEL6 ISO in hand.

The Basic Server Installation

I used Ubuntu's TestDrive to try installing RHEL6 in a handful of KVM virtual machines on my Thinkpad x201. Each guest had 4GB of RAM, 4 CPUs, and 6GB of disk.

$ testdrive -u rhel-server-6.0-x86_64-dvd.iso

The installer starts with a graphical boot menu.

And then drops to a text mode disc integrity check application.

I accidentally chose to test the integrity of the ISO, which actually happened pretty quickly. After testing, the cursor landed back on the "test" button again, which I would have expected to have moved to "continue". Then I tried to "continue", but evidently the ISO had been "ejected" and KVM couldn't find it anymore. So I started the installation over.

This time, I skipped the disc integrity check altogether. The next stage of the installation is entirely graphical. This is in stark contrast to Ubuntu's text-only server installation. There's quite a bit more screen real estate in RHEL's graphical installer, which is used particularly well.

I chose my language and keyboard (mostly autodetected), much like Ubuntu.

Then I chose between two different backing storage options: "basic" or an advanced selection for stuff like iSCSI. I chose "basic".

Next, I initialized my virtio backing disk, which was quite simple -- just one click.

Then I set the host name. On this page, I had the option of performing a more advanced network configuration, but I just used the default. This seemed quite nicely done.

Next, I had to choose a timezone, where I landed in the painful, horrible, no good old timezone selector. It's damn near impossible to click on a city in the US Central timezone, and scrolling through the drop box of all cities is just ridiculous. Ubuntu has made some incredible leaps and bounds forward on this subtle-yet-important aspect of almost any installer.

Then I was prompted for a root password -- something I've not set in a very long time :-) Alrighty, no problem. When in Rome, do as the Romans...

Next, I landed back in a partitioning menu. I say "back" because I had earlier answered a question about "initializing" disks, and now I'm being asked again about my disk layout. No matter. In my setup, I have a qcow2 backed virtio disk, so I just told the installer to wipe it completely and give me the default layout (LVM and such). I really liked how simple this aspect of the RHEL installer is, as compared to the Ubuntu partitioning workflow.

The installer very quickly formatted my disk with the ext4 filesystem.

Next, I'm at basically a graphical equivalent of the Ubuntu Server's tasksel page. There are vastly more profiles and package sets available in this part of the RHEL installer than in the Ubuntu Server installer. This is due, in part at least, to the fact that the installation media is a 3.2GB DVD ISO, about 4.5 times the size of the Ubuntu Server's 700MB CD ISO. In any case, the breakdown is fairly logical, though some of this hasn't changed since I first installed Red Hat from a CD in 1998.

For my first installation, I simply selected the "basic server" profile, and didn't add any additional profiles, packages, repositories, or customizations at this time. The installation commenced installing 533 total packages, which took about 5 minutes.

I must say that the graphical installer looks very professional here, with a useful progress bar, information about the packages being installed, and a banner with the RHEL logo (which could just as easily be a slideshow introducing new features). The text based installer in Ubuntu looks a little too 1992 for my tastes. (He says, humming "...Here we are now/We're Nirvana...")

Also, the bootloader was installed automatically, while the Ubuntu Server installer asks if I really want a bootloader on the system. (Yes, please.)

And the installation completed just after that.

The Basic Server Runtime

The installation completed and rebooted. I hit a key to take a look at grub, where I noticed that only a single kernel was installed. Ubuntu also installs a recovery mode kernel and a memory test, which I rarely use but are nice to have. The plymouth and boot screens were very clean, minimal, and fast. Like Ubuntu, the boot process no longer scrolls the list of services being started. I rebooted, removed "rhgb" and "quiet" from the kernel boot parameters, and got my nice, juicy, informative scrolling boot screens. I think I saw most of what I would want to see if I had to debug a server with boot problems. This was quite nice.

RHEL6's default server had a somewhat meaty 1.2GB initial disk usage, which contrasts with a default Ubuntu Server footprint of about 700MB. That said, the default RHEL6 basic server includes 1MB of pure gold, the SSH server, enabled by default. (See the ongoing discussion on ubuntu-devel@ about this one.) Like Ubuntu Server, RHEL6's default does not include a graphical desktop, though you can add one easily enough during installation.

A few packages I expected to have were missing, such as GNU screen, so I tried to install it using yum. However, yum refused to add any additional packages since this installation is not registered with RHN. Bummer. I might have been able to dig around and add some free repositories, but I didn't. And I had no intention of registering with RHN, so yum would remain untested by me. Sorry.

I played around a little more in userspace. The kernel is 2.6.32, which is the same base version as Ubuntu 10.04 LTS. I only have a root user, by default. I ran $(rpm -qa | sort) and pasted the default package list here.

Strangely, I noticed that eth0 was not up and configured. I did skip the network configuration section of the installer, but I expected that the system would just be set to DHCP since it was unspecified. Well, I think I was wrong on that account. I had to run $(ifup eth0; dhclient eth0), and then I had a working network interface in my RHEL6 VM.

At this point, I deleted this VM.

The Customized Installation

So I kicked off another installation, this time choosing the "customize now" option in the installer. Here, I found some really enterprisey looking options, with tasks like: a backup client, infiniband support, legacy UNIX support, mainframe access, scientific support, smartcard support, and iSCSI. There were also a bunch of other server profiles, like web services, databases, management, virtualization, and desktop options (which included both GNOME and KDE). Each title had a one-line sentence describing the option in the text field below. Selecting any of these would then enable another button which would show the suite of packages related to that option, each of which could be selected or deselected.

Ideally, I would have been able to see (and search) a complete package list right here. That's not quite how the interface works, though. Still, for a new RHEL6 user, this is a really interesting introduction to the depth and breadth of the distribution.

For this installation, I just selected the virtualization packages.

Once the install completed, I rebooted and logged in again. I can see that qemu-kvm, libvirt 0.8.1, and virt-manager 0.8.4 are installed. It's odd, I know, so I must have been doing something wrong, but I couldn't find the kvm or the qemu binaries anywhere on the system. I'm not sure what was going on with that... [[EDIT: Thanks, tyhicks, for pointing out /usr/libexec/qemu-kvm.]]

The Minimal Installation

Since I wasn't able to play with the virtualization host much (as I was already running in a VM, and couldn't find the qemu binary), I killed that installation and restarted, with the "minimal install".

I was curious just how minimal this minimal would be. It installed a total of 219 packages, accounting for a 599M footprint. Not too bad, but I think I would expect something a bit smaller out of a minimal Linux image at this point.

That's pretty much my only experiment with the minimal installation. Again, I would have liked to have used yum to add packages and build this minimal system up to something useful, but I didn't register with RHN. I killed this VM too.

The Software Development Workstation

Finally, I performed one more installation, selecting the "software development workstation" profile. I launched each of these installations using TestDrive, which defaults to a 6GB backing disk image.

Once I selected the workstation profile, though, the installer notified me that I didn't have enough disk space to perform this installation, so I killed the VM and upped my DISK_SIZE=10G in my ~/.testdriverc.

This installation obviously took a lot longer, cranking through 1,451 packages and using several gigabytes of disk space.

After the 1st stage completed installing all of these packages, the system rebooted, and launched directly in the graphical 2nd stage of the installation. Here, I was prompted to enter my non-root user information (name, password, etc). I was also prompted for RHN credentials, which I politely declined.

At this point, I landed in a gdm login window, where I was able to log in with my non-root user. With that, I found myself in the very familiar Red Hat desktop (and felt just a tinge of nostalgia). The applications menu was thoroughly populated with development tools, such as Eclipse and friends.

Wrapping Up

I really enjoyed the 4 or 5 hours I spent test-driving RHEL6. It's pretty important for us, as a Linux community, to be aware of what's going on in our ecosystem. I thought the graphical installer enabled a much smoother installation. The base install footprint seemed a little heavy, but I was impressed to see SSH enabled and running by default (and I wish and pray we could take this plunge in Ubuntu). On the other hand, I found the registration process for downloading the ISO and RHN enablement for repository access highly annoying.

I've been around Red Hat systems for over 12 years, though not so much in the last 3 years. These guys (RH) are doing some phenomenal work on enterprise Linux, and they deserve quite a bit of praise on this front. Nice job.

For my part, I'm going to continue working hard to ensure that Ubuntu 11.04, 11.10, and eventually 12.04 LTS evolve into outstanding enterprise Linux server distributions in their own right.


Wednesday, November 17, 2010

Guarded Gorilla

Five gorillas were placed in a large cage. In the far corner of the cage, ten steps led up to a small platform. At random intervals, bananas would be lowered onto the platform.

One observant gorilla noticed the bananas, and he started up the stairs. But a sensor detected the gorilla's presence near the stairs, and all five gorillas in the cage were thoroughly drenched with ice cold water. Gorillas, like other primates, hate being sprayed with ice cold water.

This happened each and every time any individual approached the stairs. They learned to never approach the stairs, no matter how many delicious treats landed on the platform. Gorillas, like other primates, respond quickly to conditioning.

Eventually, one of the five gorillas was replaced by a new subject. This new gorilla saw the juicy bananas at the top of the stairs and started toward them. The other four beat the new gorilla senseless before he even reached the stairs. Gorillas, like other primates, can be terribly violent creatures.

The second of the five original gorillas was subsequently replaced by another new gorilla. An encore scene ensued, with the new gorilla approaching the banana platform, but he, too, was severely pummeled by the other four, including the gorilla who was most recently mauled!

The third, fourth, and fifth of the original gorillas were each replaced, one by one, until none of the five original gorillas remained in the cage. Moreover, none of these five gorillas had ever actually been sprayed with the detested ice water.

In fact, the sensor that monitored the stairs had been damaged during one of the more intense skirmishes and was no longer operational. A heap of bananas remained at the top of the stairs on the platform, free for the taking. Yet each time a new gorilla arrived, the other four never-been-sprayed gorillas provided the newcomer with their introductory thrashing and local education.

These gorillas, like other primates, have a penchant for maintaining "the way it's always been done 'round here."

Adapted from the oft-retold Parable of the Gorilla.

Byobu positive press (including one post from a Fedora user)

I was quite pleased to see two new articles on Byobu show up in my Google Alerts monitor yesterday, from and

Enhance screen with Byobu's cool functionality
by Vincent Danen

This article is particularly interesting, in that it's written by a Red Hat developer, with screen shots from a Fedora system. I'm particularly proud that Byobu is gaining some users on the Fedora side of Linux.

Use byobu for extended features in your terminal window
by Jack Wallen

This article has screen shots from an Ubuntu 10.04.1 system, and focuses on giving some powerful functionality to terminal users.

Nice articles, guys. Thanks.


Tuesday, November 16, 2010

Yet another Ubuntu Archive Proxy Solution (approx)

Many developers of Ubuntu find it useful to cache all (or at least some) of the Ubuntu Archive locally.

I certainly do.

I have maintained a full copy of the Ubuntu archive for the last ~3 years. Originally, I just used rsync and slapped logic around it to make sure it did the right thing. It did most of the time.

Eventually, Jonathan Davies' ubumirror project/package simplified my mirror situation, and really made it easy to filter out some of the architectures I didn't need.

Still, this required about 400GB of disk space, and quite a bit of overnight bandwidth to keep it perfectly in sync.

Earlier this year, I learned about the approx package, and it has become my new favorite proxy solution. I did look at apt-cacher-ng, but the configuration was more complicated than I could figure out in 5 minutes, so if you can show me how to do exactly what I've done with approx, I'm all ears ;-) I also looked at squid-deb-proxy, but I didn't want to have to install additional packages on my clients, and I really wanted this to work well for network installations of Ubuntu servers.

Here's my solution...

To install, simply:
sudo apt-get install approx
Then set the URLs you want to proxy, in /etc/approx/approx.conf:
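The exact entries will vary, but the approx.conf format maps a local name to an upstream repository, one mapping per line. As an illustration (these names and URLs are examples, not necessarily my configuration):

```
# /etc/approx/approx.conf -- example mappings (illustrative only)
ubuntu      http://archive.ubuntu.com/ubuntu
security    http://security.ubuntu.com/ubuntu
```

Clients then fetch from http://your-proxy/ubuntu, and approx caches whatever it passes through.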
I configured my proxy machine to listen on port 80:
sudo dpkg-reconfigure approx
Next, I took a little shortcut in my dd-wrt router's DNSMasq options, so that I don't have to configure each and every one of my guests to point to my local mirror. I want that to happen automatically and transparently to my guests. So I set my router to authoritatively serve my local proxy's IP address as the resolution for the archive hostnames. The additional DNSMasq options for me are:
where the address given is my proxy's static IP address.
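For illustration, the options take this shape in dnsmasq syntax. The hostnames here are the public Ubuntu archive hostnames, and is a placeholder for the proxy's static IP; both stand in for my actual values:

```
# dnsmasq: answer authoritatively for these hostnames with the proxy's IP
address=/archive.ubuntu.com/
address=/security.ubuntu.com/
```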

This ensures that all of my guests transparently use my local proxy, without having to perform custom configuration on each.

Now on the proxy itself, I don't want to point to the localhost, as that won't work very well at all! So for that one machine, I changed its DNS to point to Google's Public DNS at
echo "nameserver" | sudo tee /etc/resolv.conf
Alternatively, I could manually set the IP addresses of those hostnames in that machine's /etc/hosts.

Moreover, if I ever need to disable the use of the caching proxy on a single guest, I can simply and temporarily change that machine's DNS as above.

I'm really finding this to be a handy way of speeding up my network installs and package upgrades on my set of Ubuntu machines at home. I'm not wasting nearly as much disk space or network bandwidth, and I don't have to configure anything on each and every client or installation.

And now that I no longer need a 500GB local disk, I will probably move my proxy into a virtual machine very soon.

I also added a custom byobu status script to track the size of the approx cache, as well as the number of files in the cache, ~/.byobu/bin/61_approx:
#!/bin/sh
dir=/var/cache/approx    # approx's default cache directory
du=$(du -sh "$dir" | awk '{print $1}')
count=$(find "$dir" -type f -name "*.deb" | wc -l)
printf "Prox:%s,%s" "$du" "$count"


Monday, November 15, 2010

Landscape for the Ubuntu Evangelist-turned-Remote-Sysadmin

Like many of you here at Planet Ubuntu, I'm on a continuous quest to convert friends and family to Ubuntu. I'm proud to say that Ubuntu users now include my parents, my wife, her parents, both of my sisters, her sister, their husbands, and several friends.

On the whole, they're all quite satisfied with Ubuntu. They like that it's virus-free, never crashes, does not pathologically slow down over time, and generally just works.

That said, whenever I visit any of the above parties, I generally spend a good 30 minutes to an hour giving their Ubuntu system a good tune-up. Usually that just entails installing all updates, etc. But sometimes there's a bit more work to be done. This is time I would rather spend with my family, and them with me.

I previously have not had a use case for Canonical's Landscape service. I don't use it for my systems at home, as I have a static IP and an SSH connection to my suite of servers, desktops, and virtual machines, with which I can generally do all that I need.

But that's not the case for my relatives. I recently realized how much Landscape would help me remotely manage my extended family's Ubuntu systems.

I am, for all practical purposes, their system administrator, and Landscape gives me a really convenient way of managing each of their systems remotely, wherever they are, from wherever I am -- which is typically not one and the same.

So just an idea from left field, here... Landscape is generally targeted at enterprise users trying to manage data centers full of Ubuntu Servers. But I'm finding it really convenient to manage a few dozen machines scattered about the country while I travel around the globe. Nice job, Landscape team.
Note that unlike Ubuntu, Landscape is an optional, value-add, paid-for service on top of Ubuntu.

Thursday, November 4, 2010

Meeting your Childhood Hero

Who was your sports hero when you were 8 years old?

Mine was, without question, Chicago Cubs right fielder Andre Dawson. He had a tremendous swing, a laser rocket arm, a golden glove, and an intensity that was unmatched by most who played the game (his nickname was The Hawk). I wanted nothing more, as an 8-year-old kid, than to grow up and be just like Andre Dawson.

I was quite happy to see that he was elected to the Baseball Hall of Fame in July 2010, and excited to actually visit the Hall in Cooperstown, NY a week later and see his exhibit.

Now just imagine my surprise as I was leaving the Ubuntu Developer Summit, sitting on one of Continental Airlines' smallest prop planes in Orlando, FL, waiting to depart for Miami, FL. As I sat there, a luggage tag slid by me at eye level. There was a Florida Marlins logo, and a name: Andre Dawson. My heart skipped a beat with excitement. Andre Dawson was one of the 6 other people on my flight!

I spent the next hour reminiscing over the incredible situation. I ran the conversation over in my head a dozen times. Finally, when the plane landed, I walked two rows back and said:
"Mr. Dawson? I just wanted to say that you have been my hero since I was 8 years old. I loved the way you played the game, and wanted to play just like you. Congratulations on Cooperstown, well deserved! Would it be possible to take a picture with you?"
He said, "Oh, uh, thank you, thank you, thank you. Sure."

And, again, thank you, Andre Dawson.


Tuesday, October 26, 2010

The Completely Unofficial Beer of Ubuntu Natty

The only picture I took at UDS-N in Orlando, FL.

Only for the brave. Or the tasteless. Or those who appreciate good old-fashioned poignant humor.

I am at least one of the above.


Saturday, October 23, 2010

Ubuntu, The Restaurant, 10.10.10

My wife, Kim, and I were recently in the California wine country, in Napa and Sonoma Valleys.

While there, we had a lovely dinner one night at the Ubuntu Restaurant and Yoga Studio.

The Ubuntu restaurant has nothing to do with the software I write, other than that we share a name and a similar set of principles. While we apply the tenets of Ubuntu to software, they apply them to food. The inside of the restaurant is really quite chic (like much of Napa). It really reminded me of the set from Joss Whedon's Dollhouse TV series :-)

All of the food is vegetarian, and it's ordered and served tapas style. Everything we had was delicious.

The portions looked small, but neither of us was hungry at all by the time we left.

We did have cookies for dessert, just to make sure we wouldn't leave hungry.

We did leave our waitress with a stack of Ubuntu CDs.

Their menus are custom printed every day, so in exchange, she mailed me a copy of Sunday's 10.10.10 Ubuntu restaurant menu (at the top of this post). Check out the date in the top corner.

Cheers to the Ubuntu restaurant, and the Ubuntu 10.10.10 release!


Friday, October 22, 2010

Bikeshed: dman (download manpages from the web)

I have kept a little shell script called dman in my $HOME/bin ever since came online, in 2008.

It's a really convenient way to read manpages in your terminal, for packages that you don't have installed locally. Assuming you're internet connected, it's a really handy tool, saving lots of disk space, while giving you access to many gigabytes of excellent system level documentation.

For example:
  dman wtf
If you find this useful, install the bikeshed package from Natty, or from the Bikeshed PPA for other versions of Ubuntu.


Thursday, October 21, 2010

Bikeshed: wifi-status (monitor your wifi connection)

I work from coffee shops, pubs, and conferences quite a bit. That means lots and lots and lots of WiFi.

Modern Ubuntu desktops have a handsome indicator applet with an animation that shows the connection process.

But I'm a geek, and I need to know in more detail what's happening with my wireless connection, especially when it seems like it's taking forever to get a wireless connection.

For this, I wrote a utility called wifi-status that's now in bikeshed. Run this from a terminal and you'll see both the iwconfig and ifconfig status of your wireless interface.


Every 1.0s: iwconfig wlan0; ifconfig wlan0 Fri Oct 15 14:07:49 2010

wlan0 IEEE 802.11abg ESSID:"CampusCoffeeBean1"
Mode:Managed Frequency:2.412 GHz Access Point: 00:24:7B:21:90:A0
Bit Rate=54 Mb/s Tx-Power=14 dBm
Retry long limit:7 RTS thr:off Fragment thr:off
Power Management:off
Link Quality=64/70 Signal level=-46 dBm
Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0
Tx excessive retries:0 Invalid misc:0 Missed beacon:0

wlan0 Link encap:Ethernet HWaddr 00:11:22:33:44:55
inet addr: Bcast: Mask:
inet6 addr: fe80::221:6aff:fe50:a606/64 Scope:Link
RX packets:1820355 errors:0 dropped:0 overruns:0 frame:0
TX packets:2068354 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:620541137 (620.5 MB) TX bytes:1581840633 (1.5 GB)

If you find this useful, install the bikeshed package from Natty, or from the Bikeshed PPA for other versions of Ubuntu.


Bikeshed: bch (seed and edit bzr changelog)

Here's another tool I use every single day, many times per day: bch.

I maintain all of my projects/packages in bzr itself, using debian/changelog to describe changes to both the source code and the packaging.

I used to use dch to edit the current changelog entry, starting a new line with an asterisk, listing the path of each file I've changed, followed by a colon, and then a description of my changes.

However, bzr diff already gives me a good list of the files I've changed. bch takes that list, sorts it alphabetically, joins it into a comma-separated list, and uses dch to insert it into debian/changelog. All I have to do is describe the change, save, and close the file.
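The list-building step can be sketched like this (changed_files_csv is a name I made up, and the exact parsing of bzr's output that bch does may differ):

```shell
#!/bin/sh
# sketch of the list-building step bch automates: collapse a list of
# changed paths (one per line) into a sorted, comma-separated prefix
# for the debian/changelog entry
changed_files_csv() {
    sort | paste -s -d, -
}
# roughly what bch then does with it:
#   dch --append "$(bzr status --short | awk '{print $2}' | changed_files_csv): "
```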

Note that I typically follow bch with debcommit, which uses that same changelog entry when committing to bzr. It's really, really handy and convenient!

Try it:
  1. Grab some source code
    bzr branch lp:bikeshed

  2. Make some changes
    cd bikeshed
    echo foo > bar
    bzr add bar
    echo "" >> pbput

  3. Add a changelog entry
    bch

If you find this useful, install the bikeshed package from Natty, or from the Bikeshed PPA for other versions of Ubuntu.


Wednesday, October 20, 2010

Bikeshed: 1 .. 9 (wicked convenient awk)

I previously introduced the 1, 2, 3, 4, 5, 6, 7, 8, 9 utilities here in my blog as useful awk hacks a few months ago.

Basically, there's one script, installed at /usr/bin/1, and the rest are symbolic links back to it.

The net effect of each of these is to print the Nth column of whatever comes in on standard input. In this way, "1" is sort of an alias for:
  awk '{print $1}'
Each of these accepts a single optional argument. By default, whitespace is the input field separator; you can specify a different character or string here.
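The shared script can be sketched as a shell function (col is a made-up name for illustration; the real scripts read the column number from their own basename instead of a first argument):

```shell
#!/bin/sh
# sketch of the script behind 1..9: print column N of standard input,
# with an optional second argument overriding the field separator
col() {
    n="$1"
    if [ -n "$2" ]; then
        awk -F"$2" "{print \$$n}"
    else
        awk "{print \$$n}"
    fi
}
# ls -alF | col 5         # like: ls -alF | 5
# col 7 : < /etc/passwd   # like: cat /etc/passwd | 7 :
```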

For example:
  ls -alF | 5
  cat /etc/passwd | 7 :
If you find this useful, install the bikeshed package from Natty, or from the Bikeshed PPA for other versions of Ubuntu.


Tuesday, October 19, 2010

Bikeshed: bzrp (bzr with a sensible-pager)

I'll admit it ... I'm a huge fan of bzr. I'm conversational in git, but I really love the ease of use of bzr. It's friendly, convenient, and well documented.

I really only have one complaint... I wish it paged output to sensible-pager when running in an interactive terminal and the output is more than one screen-full.

I talked to Robert Collins about this in Wellington at LCA2010 earlier this year. He was lukewarm on the idea, asking why I don't just pipe the output to sensible-pager myself. Heh. Sure, I can do that.

Okay, okay, so I created a simple alias, and eventually this wrapper script, bzrp, which basically has that effect.

Try it for yourself!
  bzrp log --include-merges
  bzrp diff
  bzrp cdiff
This works with any bzr command that has output on standard out.
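The wrapper's core idea can be sketched generically (page_when_tty is a made-up name; the real bzrp is specific to bzr):

```shell
#!/bin/sh
# sketch of the bzrp idea: run a command, paging its output only when
# standard out is an interactive terminal
page_when_tty() {
    if [ -t 1 ]; then
        "$@" | "${PAGER:-sensible-pager}"
    else
        "$@"
    fi
}
# page_when_tty bzr log --include-merges
```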

For what it's worth, I'm pronouncing this "ba-zerp" for now :-)

If you find this useful, install the bikeshed package from Natty, or from the Bikeshed PPA for other versions of Ubuntu.


Monday, October 18, 2010

bikeshed: pbput and pbget (pastebin binary files)

I absolutely love the pastebinit tool, from Stephane Graber. Genius, I tell you. I must use it 20 times per day, to share source code and configuration files.

Sometimes, I need to share a binary file, such as a screen shot, or a tarball.

For that, I wrote the pbput (and pbget) scripts!

The pbput script works on either standard in or a file argument: it lzma-compresses the input, base64-encodes it, and then uses pastebinit to post the result.

And the pbget script reverses that process: it uses wget to retrieve the remote data, base64-decodes it, lzma-decompresses it, and writes the result to standard out.

Try it for yourself!
  pbput /tmp/Screenshot.png

  pbget > /tmp/out.png
  md5sum /tmp/*png
  f7e7ba26a2681c0666ebca022c504594 /tmp/out.png
  f7e7ba26a2681c0666ebca022c504594 /tmp/Screenshot.png
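The core transform is just a two-stage pipeline; here's a sketch with the pastebinit upload and wget download omitted so the round trip is checkable locally (pb_encode/pb_decode are made-up names):

```shell
#!/bin/sh
# the heart of pbput/pbget as a matched pair of pipelines
pb_encode() { lzma -9 -c | base64; }    # pbput: compress, then ASCII-armor
pb_decode() { base64 -d | lzma -dc; }   # pbget: decode, then decompress
# pb_encode < /tmp/Screenshot.png | pastebinit
# wget -qO- <paste-url> | pb_decode > /tmp/out.png
```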
If you find this useful, install the bikeshed package from Natty, or from the Bikeshed PPA for other versions of Ubuntu.


Friday, October 15, 2010

Introducing the Bikeshed Package!

James Westby jw+debian at
Wed Aug 11 17:41:16 BST 2010
On Wed, 11 Aug 2010 12:32:34 -0400, Dustin Kirkland wrote:
> We have some initiatives right now, trying to make it easier for
> people to get new applications into Ubuntu, and the Ubuntu Software
> Center. This is merely a 10-line, GPL'd shell script, and most
> developers agree on its usefulness. But the experience of giving this
> code away for the benefit of others is less than ideal. I'm happy to
> persevere, push it to the right place, do the right thing. But will
> the next aspiring developer who wants to share a small, useful hack
> bother themselves with the process?

Maybe as an Ubuntu core-dev you want to upload a useful-hacks package
and accept all contributions in this vein in to that?

Like many of you, I have some useful scripts in my $HOME/bin directory, and aliases in my $HOME/.bashrc.

I have contributed some of these to existing open source projects, while others have turned into stand-alone free software packages/projects themselves. In other cases, I have tried, with great heartache, to contribute useful utilities to open source projects. Sometimes these work out eventually, but it can take many months or years of persistence to win the approval of some maintainers. These are most certainly battles worth fighting, but in the meantime, there are many Ubuntu users and developers who could benefit from these tools.

Per James Westby's suggestion in the note above, I have founded the bikeshed project on Launchpad. You can grab the source code with:
  bzr branch lp:bikeshed
And you can install the package from the Bikeshed PPA for Karmic, Lucid, and Maverick if you like. It just landed in Natty today.

The mission statement of the project is:
  • While others debate where some tool should go, we put it in the bikeshed.
The package description goes into a little more detail:
  • Description: random useful tools that do not yet have a permanent home
    Bikeshed is a collection of random but useful tools and utilities that either don't quite fit anywhere else, or have not yet been accepted by a more appropriate project. Think of this package as an "orphanage", where tools live until they are adopted by loving, accepting parents.
The name of the project reflects the tremendous insight provided by Poul-Henning Kamp on a FreeBSD mailing list in 1999. If you haven't read it yet, I highly recommend you do. It's 11 years old and directed toward FreeBSD development, but it applies to ubuntu-devel@ and debian-devel@ and most other software development mailing lists just as well today.

The general concept is known as Parkinson's Law of Triviality, from 1957, when C. Northcote Parkinson described the unfortunate effects of trivial matters carrying disproportionate weight (and actually first used the bike shed example).

I'm going to describe each utility in bikeshed in a series of posts here in my blog. Hopefully you will find some of them very useful!


Tuesday, October 12, 2010

Ubuntu OpenWeek: Deploying Web Applications in the Cloud

Howdy all!

This week is once again Ubuntu Open Week!

Join me tomorrow, Wednesday, October 13, 2010 at 17:00 UTC in #ubuntu-classroom for a session on Deploying Web Applications on Ubuntu in the Cloud!

As usual (when I give education sessions), I will be communicating in #ubuntu-classroom on IRC, and you can additionally follow my keystrokes in the examples by joining a shared Byobu session on an instance I run in Amazon EC2.

I will show you how to install, configure, and run a couple of web applications that I've packaged for Ubuntu, as well as some first steps to writing and perhaps even packaging your own!


Friday, October 8, 2010

Brand Refresh of

With the great assistance of my colleagues Stuart Metcalfe and Matthew Nuzum, we have rolled out a new revision of the site, featuring the updated Ubuntu website theme and color scheme.

Thanks guys, the site and the theme look great!


Thursday, October 7, 2010

Try Ubuntu Server in the Cloud on our Dime!

At the Lucid Release Party in Austin, Texas, I bought a round of beer, and I remarked that for the price of a pint of the local micro-brew (about $5 at this particular pub), I could have bought everyone in the pub an hour of Ubuntu Server run time in the cloud. At $0.10/hour, we could have launched 50 instances, and spent part of the release party test-driving the new server release. I mentioned the idea to Scott Moser and Dave Walker (among others), who actually grabbed the idea and ran with it...
So we're celebrating the release of Ubuntu 10.10 Server this Sunday (10.10.10) by offering anyone with an account one free hour running their own Ubuntu Server instance in Amazon's EC2 cloud.

This is an absolutely unprecedented offer:

Canonical will foot the bill for you to try
Ubuntu Server in the Cloud!

To participate, here's what you need:
  1. An account on
  2. A public/private SSH key pair
  3. Your public SSH key uploaded to your account
That's it!
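If you still need the key pair from step (2), it's a single command. A sketch (writing to a scratch directory here just to be non-destructive; normally you'd keep the default ~/.ssh/id_rsa path and set a passphrase):

```shell
#!/bin/sh
# generate a passphrase-less RSA key pair in a temporary directory
key="$(mktemp -d)/id_rsa"
ssh-keygen -q -t rsa -N '' -f "$key"
# this is the public half you would upload in step (3):
cat "$key.pub"
```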

Our web application will launch a brand new Official Ubuntu Server Image in an m1.small instance, and insert your public SSH key into the instance. Within minutes, you will have root (sudo) in the instance. With root access, you can do pretty much anything you want within the instance! You can install applications, compile your code, host network services, poke around the filesystem, compare it to previous versions of Ubuntu or other UNIX or Linux distributions. We just ask that you abide by the Amazon Terms of Service, and the Ubuntu Code of Conduct.

Get in while you can, at:


Thursday, September 30, 2010

Seven Reasons to Deploy Your Enterprise Cloud on Ubuntu

Howdy all!

In case you missed my live webcast about the Ubuntu Enterprise Cloud this morning, it's now available in the Intel Cloud Builder archive at:
Note that there is a Linux-friendly Flash option now, which is actually brand new for the Cloud Builder series ;-) Yep, we helped Intel get this tested and working just for you, our Ubuntu users!


Tuesday, September 28, 2010

Seven Reasons to Deploy Your Enterprise Cloud on Ubuntu OS (Webcast)

I'm giving a live webcast on Thursday, September 30, 2010 at 7:30am US Pacific time through Intel's Cloud Builder program, where Canonical and Intel partnered to produce a whitepaper about the Ubuntu Enterprise Cloud.

The title and abstract:
Seven Reasons to Deploy Your Enterprise Cloud on Ubuntu OS

Join Canonical and Intel for a lively discussion on Canonical's Ubuntu OS, a key enabling technology for Enterprise Clouds. Learn how Ubuntu Enterprise Cloud implementations address Amazon EC2 compatibility, public/private cloud interoperability, Intel Virtualization Technology and more. Ask a question of our experts and gain insights that will guide your own cloud deployment.
To attend, go to:
Hope to see you there!


Monday, September 27, 2010

LinuxMag: Byobu #2 of 10 Essential Linux Admin Tools

Ken Hess of Linux Magazine names Byobu #2 of his 10 essential Linux admin tools.

The article is a good, quick read, and I'm quite proud of Byobu considering its company on that list, among webmin, tcpdump, nagios, and vnc.


Wednesday, September 15, 2010

My First Year of Solar Power

I've posted a few times now about the 6.7KW photo-voltaic (solar) power system we have on our roof in Austin, Texas. It was activated one year ago, today.

Many, many people ask me about it. Now that it has been operational for a full year, I can finally analyze its performance in each month of the year. This is important because the energy produced depends greatly on the position of the sun in the sky, the length of the days, and the weather; different amounts of power are produced at different times.

I'm currently using Curt Blank's aurora program to gather data from my inverter. I have packaged this for Ubuntu, by the way. You can find it in Ubuntu 10.04 and beyond.

My current inverter reading as of today looks like this:
Current date/time: 15-Sep-2010 11:30:02

Daily Energy = 7.314 KWh
Weekly Energy = 91.434 KWh
Monthly Energy = 366.448 KWh
Yearly Energy = 7123.188 KWh
Total Energy = 9433.281 KWh
Partial Energy = 1537.161 KWh

Current date/time: 15-Sep-2010 11:30:05

Input 1 Voltage = 244.048767 V
Input 1 Current = 9.157255 A
Input 1 Power = 2234.816895 W

Input 2 Voltage = 255.783203 V
Input 2 Current = 3.767791 A
Input 2 Power = 963.737732 W

Grid Voltage Reading = 239.640839 V
Grid Current Reading = 12.036012 A
Grid Power Reading = 3196.468750 W
Frequency Reading = 59.966419 Hz.

DC/AC Conversion Efficiency = 99.9 %
Inverter Temperature = 54.750835 C
Booster Temperature = 49.749878 C

The most important number above (for this post) is:

Total Energy = 9433.281 KWh

In the last 365 days, this system has produced 9.4 megawatt-hours of energy.

What does this mean in terms of cost savings? Roughly, I know that electricity in Austin is about $0.115/KWh, so that's approximately $1,085 in savings on my electric bill. The real formula is actually a far more complicated differential equation, as I buy and sell electricity at two different rates, the rates change slightly every month, etc. But this is a reasonable ballpark figure.

Austin Energy actually has a web application where I can view and analyze my usage online. Here's a screenshot of my last 2 years' usage. Note the "Solar kWh" row, as well as the year-to-year difference in "$ Billed".

I can also download these stats in a CSV format, drop it into a spreadsheet and print some pretty cool charts. Analyzing the data directly, I can see that my solar investment has saved me exactly $1,210.71 over the last 12 months -- about $100/month, which is what I expected when I purchased the system.

Accounting for both the Austin Energy PV Rebate, and the Federal Tax Credit, our system is well on its way to paying itself off in just a few short years.

Once again, thanks to the folks at Texas Solar Power Company in Austin for their outstanding service and timely installation.

As George Harrison wrote, "Here comes the sun!"

Doo do doo doo,

Java isn't Evil, but it's not for Me

First off, I apologize. In my last post, I called Java "evil". That's not fair, and several people called me out in the comments. The post has been updated to drop the "evils of Java" verbiage.

My statement was a reference to the humorous-though-irreverent Call of Codethulhu.

It's a personal taste issue. I dislike writing Java, packaging Java programs, chasing down Java dependencies, and even reading Java code. There's nothing necessarily "evil" about it. I have declined job offers that require work to be exclusively performed in Java. I just don't like being around Java and really dislike some of the habits it encourages.

Before I started my University work, I had extensive programming experience in Basic, Pascal, C, and C++ as an ambitious (dorky?) high school kid. The "Intro to Computer Programming" class most Computer Science freshmen at my University took was based in Java. And at that time, they were being taught on Windows computers.

I found it very disappointing that a more UNIX/C approach was not used to introduce most of my freshman computer science classmates to programming. The approach nursed bad habits, and many programming fundamentals were missed, in my opinion.

A few years later, while working at IBM, I again landed on a series of projects where Java was king. And once again, I found some of these Java programmers lazy in their approach, and bloat-ware abounded: 2GB of memory was required to run simple services that should fit in a few MB. Do-one-thing-and-do-it-well was nowhere to be found. And finally, write-once-run-anywhere couldn't have been further from the truth.

I exited the Java world, once again, choosing C, Python, Perl, PHP, and Shell for the projects I initiate and maintain. I'm able to use sound object-oriented practices (in Python, Perl, and PHP), and to honor the principles of UNIX (with C and Shell).

Occasionally, I'm required to deal with Java when maintaining or packaging something for Ubuntu. I generally start from scratch, trying to have an open mind, but within minutes or hours, my skin starts to boil and steam flows out of my ears. I find over-engineered code in the source, binary JARs lumped within other projects out of laziness, and memory requirements that are simply staggering for the goal of the program.

Sorry, that's not for me.

Still, thanks for keeping me honest, making me explain myself, and pruning the potentially offensive language out of the other post.


Learning to Wink, Learning to Code

My wife, Kim, isn't a hacker.

She's a kindergarten teacher. She likes to crochet, and she's pulling a needle and thread through some embroidery on the couch next to me right now.

And this is why I nearly choked on my tortilla chips at the Red Iguana in Salt Lake City a few days ago when she asked me, "When you say you're coding, what are you actually doing?", soon followed by, "So why do you hate on Java so much?"

Kim wasn't asking just to wind me up or kill time -- she was genuinely curious about my work, perhaps for the first time. Fortunately, we had a 3+ hour drive after dinner that night. In the passenger seat, she cracked open her Lenovo S10-2 netbook running Ubuntu 10.04 and wrote hello world in 5 different languages: C, Perl, Python, Shell, and Java. Kim particularly liked how Gedit color coded her syntax.

We worked through the difference between compiled and interpreted languages. Unsurprisingly, she found Perl, Python, and Shell straightforward, and C slightly more complicated.

Her favorite language after 30 minutes of experimentation was Shell, so we decided to try something slightly more interesting: input and output. Here's what she came up with:
  echo "what is your favorite color?"
  color=$(head -n1)
  echo "oh, $color is my favorite too"
Kim finally asked, "What is this Byobu thing you're always talking about?" Yep, I lit up like a light again. So I demonstrated Byobu for her, and she took to the status notifications at the bottom of the screen.

She suggested creating a plug-in that would remind me to take a break from work periodically and have dinner :-) We shelved that one for now, and instead, she made a plug-in that "winks" every few seconds. Here's her code:

#!/bin/sh
if [ -f /tmp/wink ]; then
    echo ":)"
    rm -f /tmp/wink
else
    echo ";)"
    touch /tmp/wink
fi
She made it executable with:
chmod +x /home/kim/.byobu/bin/2_wink
And a few seconds later, Byobu is winking at her!

I'm extremely proud of Kim's keen curiosity about my work, and particularly her follow-through. I'm not sure I'll be crocheting a doily any time soon, but I am running her winky-face notification in Byobu. It reminds me what a lucky guy I am. ;-)