From the Canyon Edge -- :-Dustin

Tuesday, November 30, 2010

These aren't the droids you're looking for


I recently inherited a disused Linksys wireless router. The previous owner had long since forgotten the WEP passphrase (although he still had embedded devices around the house that were connected to the WAP). Of course, it's trivial to reset a Linksys router back to the factory defaults, which I would eventually do. But before that, I thought I would try something else first.

Having never tried to crack a WEP key before, I thought this would be a nice opportunity to learn how. There are plenty of excellent, detailed tutorials out there. And this blog post isn't one of them.

It's merely a "note to self" -- what worked for me at this point in time. So if you're looking for a detailed explanation of the process or perhaps support in your quest, "These aren't the droids you're looking for. Move along, move along."

First, I created a directory to store the captured packets.
DIR=$(mktemp -d)
cd $DIR
I then installed the aircrack-ng utilities from the Ubuntu 10.10 archive.
sudo apt-get install aircrack-ng
Next, I checked my interface.
sudo airmon-ng check wlan0
And I stopped any services using wlan0 (avahi-daemon, NetworkManager, wpa_supplicant). Then I started monitoring mode on the interface.
sudo airmon-ng start wlan0
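For reference, stopping those services on Ubuntu 10.10 looks something like this (the exact commands aren't captured above, so adjust for whatever is actually holding the interface):
sudo service network-manager stop
sudo service avahi-daemon stop
sudo killall wpa_supplicant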
Now, I needed to scan the airwaves, looking for my access point.
sudo airodump-ng mon0
When I recognized the ESSID I was looking for, I noted the BSSID and Channel number. Then, I started replaying ARP requests.
sudo aireplay-ng -3 -b $bssid -h 00:00:00:00:00:00 mon0
I let this run for a while in one window. At the same time, I started capturing replies in another window.
sudo airodump-ng --channel $channel --bssid $bssid --write dump mon0
And in a third window, I started analyzing the captured data, looking for the key.
sudo aircrack-ng *cap
It took roughly 7,500 ARP requests and IVs gathered over about 2 hours to divine the key, but eventually it worked like a charm!

:-Dustin

Thursday, November 18, 2010

TAMULUG 2010


I was invited to speak last night at the Texas A&M University Linux Users Group (TAMULUG). Being an Aggie myself (Class of 2001, whoop), I was honored and thrilled to hang out with some Linux-loving Aggies.

My wife, Kim, joined me. We picked up a stack of pizzas and soft drinks and met the group in the brand new Mitchell Physics building on A&M's campus.

I really enjoyed the atmosphere and the conversations!

I spoke for about an hour on eCryptfs, going into deep technical detail on the design and implementation of Ubuntu's Encrypted Home Directory feature, and fielded a number of questions. I hooked my primary desktop up to the projector and we used a rescue CD to attempt an "attack" on my encrypted data. I gave a tour of the structure of the ~/.ecryptfs directory and what the encrypted contents look like. I then briefly introduced the /usr/bin/ecryptfs-* tools. We talked a bit about the cryptography involved and the series of encryption, decryption, and hashing operations that goes on.
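
To give a flavor of that tour, here's roughly what an Encrypted Home user can poke at on their own system (a sketch, not a transcript of the demo):
ls -l ~/.ecryptfs
# typically: auto-mount, auto-umount, Private.mnt, Private.sig, wrapped-passphrase
ls ~/.Private | head
# the encrypted lower files, typically with ECRYPTFS_FNEK_ENCRYPTED... names
ecryptfs-unwrap-passphrase ~/.ecryptfs/wrapped-passphrase
# prompts for the login passphrase and prints the randomly generated mount passphrase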

Some of the attendees noticed that I was running Byobu in my terminal, and there was a bit of a mixed reaction. A few people noted that they liked it and had replaced their use of screen with it, while others smirked and shirked. I then introduced myself as the author of Byobu and the tone changed slightly. :-) I spent the next hour thoroughly exploring the inspiration, design, and features of Byobu. Many of the attendees were themselves GNU Screen experts, and we traded hacks, tips, and tricks. In particular, we experimented with horizontal splits, vertical splits, and nested sessions in Byobu. Based on the response by the end of the talk, I think there were a couple of converts ;-)
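
For anyone who wants to try those at home, the splits are plain GNU Screen functionality underneath (byobu layers its function-key shortcuts on top): interactively, Ctrl-a S splits horizontally, Ctrl-a | splits vertically, and Ctrl-a Tab jumps between regions; for nested sessions, running byobu inside an existing session works, with Ctrl-a a passing the escape key through to the inner one. The same region commands can also be driven from a shell inside the session, something like:
screen -X split        # split the current window horizontally
screen -X split -v     # split vertically (needs a vertical-split-capable screen build)
screen -X focus        # move the focus to the next region
screen -X only         # close every region except the current one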

I closed the talk with maybe 5 minutes of preaching about how important it is to get involved in open source while in college. I talked about opportunities within Ubuntu, Fedora, Debian, and other communities to work on packaging, development, bug triage, documentation, etc. Having conducted several dozen interviews over the years, I can speak to how important it is to have a public track record in open source, when applying for a job in open source.

:-Dustin

RHEL6 from an Ubuntu Server Developer's Perspective

For some very good reasons, many people are quite excited about last week's RHEL6 release.

A friend and mentor of mine, Tim Burke, gives a technical introduction to some of RHEL6's features in the video here.

As an Ubuntu Core Developer working on Ubuntu Server, I thought it prudent to take an honest look at RHEL6 and capture a few notes here, complimenting Red Hat on their new release, noting some differences between Ubuntu and RHEL, and perhaps inspiring a few lessons we could learn in Ubuntu.

The Download

Several years have passed since I last downloaded a RHEL ISO (I do check out Fedora from time to time). I was somewhat surprised that I had to register for an account at RedHat.com to download the image. I was also surprised that the credentials I use to log bugs at bugzilla.redhat.com and other RH/Fedora sites were not accepted. Apparently, these identities are not yet federated into an SSO. To create this account, I had to accept Red Hat's legal statement, terms and conditions, and their export control policy documents.

I also had to agree that my email address could be shared with Red Hat's partners. :-/

Once I had my account, I was ready to download. At this point I realized that I actually wanted to save the ISO to a different machine, rather than my desktop. Since the ISO was behind an authentication wall, I wouldn't be able to use wget or rsync to pull it down to that command-line only system. Oh well. I'd have to just pull it to my desktop and then copy it over.

Once the download started, it came down really quickly, maxing out my 20 Mbps cable internet downstream connection. I pulled the 3.2GB DVD ISO in a quick 22 minutes. Those guys must have some serious bandwidth! :-)

I suppose that I have learned to take for granted Ubuntu's free download policy, making heavy use of local mirrors and convenient access through wget, curl, rsync, zsync, or torrent downloads of Ubuntu ISOs from anywhere, to anywhere.

In any case, I now have a trial RHEL6 ISO in hand.

The Basic Server Installation

I used Ubuntu's TestDrive to try installing RHEL6 in a handful of KVM virtual machines on my Thinkpad x201. Each guest had 4GB of RAM, 4 CPUs, and 6GB of disk.

$ testdrive -u rhel-server-6.0-x86_64-dvd.iso

The installer starts with a graphical boot menu.

And then drops to a text mode disc integrity check application.

I accidentally chose to test the integrity of the ISO, which actually happened pretty quickly. After testing, the cursor landed back on the "test" button again, which I would have expected to have moved to "continue". Then I tried to "continue", but evidently the ISO had been "ejected" and KVM couldn't find it anymore. So I started the installation over.

This time, I skipped the disc integrity check altogether. The next stage of the installation is entirely graphical. This is in stark contrast to Ubuntu's text-only server installation. There's quite a bit more screen real estate in RHEL's graphical installer, which is used particularly well.

I chose my language and keyboard (mostly autodetected), much like Ubuntu.


Then I chose between two different backing storage options: "basic", or an advanced selection for things like iSCSI. I chose "basic".

Next, I initialized my virtio backing disk, which was quite simple -- just one click.


Then I set the host name. On this page, I had the option to configure a more advanced network configuration, but I just used the default. This seemed quite nicely done.

Next, I had to choose a timezone, where I landed in the painful, horrible, no good old timezone selector. It's damn near impossible to click on a city in the US Central timezone, and scrolling through the drop box of all cities is just ridiculous. Ubuntu has made some incredible leaps and bounds forward on this subtle-yet-important aspect of almost any installer.

Then I was prompted for a root password -- something I've not set in a very long time :-) Alrighty, no problem. When in Rome, do as the Romans...

Next, I landed back in a partitioning menu. I say "back" because I had earlier answered a question about "initializing" disks, and now I'm being asked again about my disk layout. No matter. In my setup, I have a qcow2 backed virtio disk, so I just told the installer to wipe it completely and give me the default layout (LVM and such). I really liked how simple this aspect of the RHEL installer is, as compared to the Ubuntu partitioning workflow.



The installer very quickly formatted my disk with the ext4 filesystem.

Next, I'm at basically a graphical equivalent of the Ubuntu Server's tasksel page. There are vastly more profiles and package sets available in this part of the RHEL installer than in the Ubuntu Server installer. This is due, in part at least, to the fact that the installation media is a 3.2GB DVD ISO, about 4.5 times the size of the Ubuntu Server's 700MB CD ISO. In any case, the breakdown is fairly logical, though some of this hasn't changed since I first installed Red Hat from a CD in 1998.

For my first installation, I simply selected the "basic server" profile, and didn't add any additional profiles, packages, repositories, or customizations at this time. The installation commenced installing 533 total packages, which took about 5 minutes.

I must say that the graphical installer looks very professional here, with a useful progress bar, information about the packages being installed, and a banner with the RHEL logo (which could just as easily be a slideshow introducing new features). The text based installer in Ubuntu looks a little too 1992 for my tastes. (He says, humming "...Here we are now/We're Nirvana...")

Also, the bootloader was installed automatically, while the Ubuntu Server installer asks if I really want a bootloader on the system. (Yes, please.)

And the installation completed just after that.


The Basic Server Runtime

The installation completed and rebooted. I stopped at the bootloader to take a look at grub, where I noticed that only a single kernel was installed. Ubuntu also installs a recovery mode entry and a memory test, which I rarely use but are nice to have. The plymouth and boot screens were very clean, minimal, and fast. Like Ubuntu, the boot process no longer scrolls the list of services being started. I rebooted, removed "rhgb" and "quiet" from the kernel boot parameters, and got my nice, juicy, informative scrolling boot messages. I think I saw most of what I would want to see if I had to debug a server with boot problems. This was quite nice.

RHEL6's default server had a somewhat meaty 1.2GB initial disk usage, which contrasts with a default Ubuntu Server footprint of about 700MB. That said, the RHEL6 basic server includes 1MB of pure gold by default: the SSH server. (See the ongoing discussion on ubuntu-devel@ about this one.) Like Ubuntu Server, RHEL6's default does not include a graphical desktop, though you can add one easily enough during installation.

A few packages I expected to have were missing, such as GNU screen, so I tried to install it using yum. However, yum refused to add any additional packages since this installation is not registered with RHN. Bummer. I might have been able to dig around and add some free repositories, but I didn't. And I had no intention of registering with RHN, so yum would remain untested by me. Sorry.

I played around a little more in userspace. The kernel is 2.6.32, which is the same base version as Ubuntu 10.04 LTS. I only have a root user, by default. I ran $(rpm -qa | sort) and pasted the default package list here.
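
For anyone building a similar package inventory on the Ubuntu side for comparison, the rough dpkg equivalent would be something like this (file names are arbitrary):
rpm -qa | sort > rhel6-packages.txt
dpkg-query -W -f='${Package}\n' | sort > ubuntu-packages.txt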

Strangely, I noticed that eth0 was not up and configured. I did skip the network configuration section of the installer, but I expected that the system would just be set to DHCP since it was unspecified. Well, I think I was wrong on that account. I had to run $(ifup eth0; dhclient eth0), and then I had a working network interface in my RHEL6 VM.
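
That fix doesn't persist across reboots; the persistent equivalent on RHEL6 would presumably be an ifcfg file along these lines (an assumption on my part, as I didn't go back and verify it on that VM):
# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=dhcp
ONBOOT=yes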

At this point, I deleted this VM.

The Customized Installation

So I kicked off another installation, this time choosing the "customize now" option in the installer. Here, I found some really enterprisey-looking options, tasks like: a backup client, infiniband support, legacy unix support, mainframe access, scientific support, smartcard support, and iSCSI. There were also a bunch of other server profiles, like web services, databases, management, virtualization, and desktop options (which included both Gnome and KDE). Each title had a 1-line sentence describing the option in the text field below. Selecting any of these would then enable another button which would show the suite of packages related to that option, each of which could be selected or deselected.

Ideally, I would have been able to see (and search) a complete package list right here. That's not quite how the interface works, though. Still, for a new RHEL6 user, this is a really interesting introduction to the depth and breadth of the distribution.

For this installation, I just selected the virtualization packages.

Once the install completed, I rebooted and logged in again. I can see that qemu-kvm 0.12.1.2, libvirt 0.8.1, and virt-manager 0.8.4 are installed. It's odd, I know, so I must have been doing something wrong, but I couldn't find the kvm or the qemu binaries anywhere on the system. I'm not sure what was going on with that... [[EDIT: Thanks, tyhicks, for pointing out /usr/libexec/qemu-kvm.]]

The Minimal Installation

Since I wasn't able to play with the virtualization host much (as I was already running in a VM, and couldn't find the qemu binary), I killed that installation and restarted, with the "minimal install".

I was curious just how minimal this minimal would be. It installed a total of 219 packages, accounting for a 599M footprint. Not too bad, but I think I would expect something a bit smaller out of a minimal Linux image at this point.

That's pretty much my only experiment with the minimal installation. Again, I would have liked to have used yum to add packages and build this minimal system up to something useful, but I didn't register with RHN. I killed this VM too.

The Software Development Workstation

Finally, I performed one more installation, selecting the "software development workstation" profile. I launched each of these installations using TestDrive, which defaults to a 6GB backing disk image.

Once I selected the workstation profile, though, the installer notified me that I didn't have enough disk space to perform this installation, so I killed the VM and bumped DISK_SIZE up to 10G in my ~/.testdriverc.

This installation obviously took a lot longer, cranking through 1,451 packages and using several gigabytes of disk space.

After the 1st stage completed installing all of these packages, the system rebooted, and launched directly in the graphical 2nd stage of the installation. Here, I was prompted to enter my non-root user information (name, password, etc). I was also prompted for RHN credentials, which I politely declined.

At this point, I landed in a gdm login window, where I was able to log in with my non-root user. With that, I found myself in the very familiar Red Hat desktop (and felt just a tinge of nostalgia). The applications menu was thoroughly populated with development tools, such as Eclipse and friends.

Wrapping Up

I really enjoyed the 4 or 5 hours I spent test-driving RHEL6. It's pretty important for us, as a Linux community, to be aware of what's going on in our ecosystem. I thought the graphical installer enabled a much smoother installation. The base install footprint seemed a little heavy, but I was impressed to see SSH enabled and running by default (and I wish and pray we could take this plunge in Ubuntu). On the other hand, I found the registration process for downloading the ISO and RHN enablement for repository access highly annoying.

I've been around Red Hat systems for over 12 years, though not so much in the last 3 years. These guys (RH) are doing some phenomenal work on enterprise Linux, and they deserve quite a bit of praise on this front. Nice job.

For my part, I'm going to continue working hard to ensure that Ubuntu 11.04, 11.10, and eventually 12.04 LTS evolve into outstanding enterprise Linux server distributions in their own right.

:-Dustin

Wednesday, November 17, 2010

Guarded Gorilla

Five gorillas were placed in a large cage. In the far corner of the cage, ten steps led up to a small platform. At random intervals, bananas would be lowered onto the platform.

One observant gorilla noticed the bananas, and he started up the stairs. But a sensor detected the gorilla's presence near the stairs and the entire cage (all five gorillas) were thoroughly drenched with ice cold water. Gorillas, like other primates, hate being sprayed with ice cold water.

This happened each and every time any individual approached the stairs. They learned to never approach the stairs, no matter how many delicious treats landed on the platform. Gorillas, like other primates, respond quickly to conditioning.

Eventually, one of the five gorillas was replaced by a new subject. This new gorilla saw the juicy bananas at the top of the stairs and started toward them. The other four beat the new gorilla senseless before he even reached the stairs. Gorillas, like other primates, can be terribly violent creatures.

The second of the five original gorillas was subsequently replaced by another new gorilla. An encore scene ensued, with the new gorilla approaching the banana platform, but he, too, was severely pummeled by the other four, including the gorilla who was most recently mauled!

The third, fourth, and fifth of the original gorillas were each replaced, one by one, until none of the five original gorillas remained in the cage. Moreover, none of the five gorillas now in the cage had ever actually been sprayed with the detested ice water.

In fact, the sensor that monitored the stairs had been damaged during one of the more intense skirmishes and was no longer operational. A heap of bananas remained on the platform at the top of the stairs, free for the taking. Yet each time a new gorilla arrived, the other four never-been-sprayed gorillas provided the newcomer with the customary introductory thrashing and local education.

These gorillas, like other primates, have a penchant for maintaining "the way it's always been done 'round here."

:-Dustin
Adapted from the oft-retold Parable of the Gorilla.

Byobu positive press (including one post from a Fedora user)

I was quite pleased to see two new articles on Byobu show up in my Google Alerts monitor yesterday, from TechRepublic.com and ghacks.net.

Enhance screen with Byobu's cool functionality
by Vincent Danen
http://blogs.techrepublic.com.com/opensource/?p=2006

This article is particularly interesting, in that it's written by a Red Hat developer, with screen shots from a Fedora system. I'm particularly proud that Byobu is gaining some users on the Fedora side of Linux.

Use byobu for extended features in your terminal window
by Jack Wallen
http://www.ghacks.net/2010/11/16/use-byobu-for-extended-features-in-your-terminal-window/

This article has screen shots from an Ubuntu 10.04.1 system, and focuses on giving some powerful functionality to terminal users.

Nice articles, guys. Thanks.

:-Dustin

Tuesday, November 16, 2010

Yet another Ubuntu Archive Proxy Solution (approx)

Many developers of Ubuntu find it useful to cache all (or at least some) of the Ubuntu Archive locally.

I certainly do.

I have maintained a full copy of the Ubuntu archive for the last ~3 years. Originally, I just used rsync and slapped logic around it to make sure it did the right thing. It did most of the time.

Eventually, Jonathan Davies' ubumirror project/package simplified my mirror situation, and really made it easy to filter out some of the architectures I didn't need.

Still, this required about 400GB of disk space, and quite a bit of overnight bandwidth to keep it perfectly in sync.

Earlier this year, I learned about the approx package, and it has become my new favorite proxy solution. I did look at apt-cacher-ng, but the configuration was more complicated than I could figure out in 5 minutes, so if you can show me how to do exactly what I've done with approx, I'm all ears ;-) I also looked at squid-deb-proxy, but I didn't want to have to install additional packages on my clients, and I really wanted this to work well for network installations of Ubuntu servers.

Here's my solution...

To install, simply:
sudo apt-get install approx
Then set the URLs you want to proxy, in /etc/approx/approx.conf:
ubuntu http://archive.ubuntu.com/ubuntu
ubuntu-security http://security.ubuntu.com/ubuntu
I configured my proxy machine to listen on port 80:
sudo dpkg-reconfigure approx
Next, I took a little shortcut with my dd-wrt router's DNSMasq options, so that I don't have to configure each and every one of my guests to point to my local mirror. I want that to happen automatically and transparently to my guests. So I set my router to authoritatively serve my local proxy's IP address as the resolution for archive.ubuntu.com and security.ubuntu.com. The additional DNSMasq option for me is:
address=/archive.ubuntu.com/security.ubuntu.com/10.1.1.11
where "10.1.1.11" is my proxy's static IP address.

This ensures that all of my guests transparently use my local proxy, without having to perform custom configuration on each.
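
A quick sanity check from any guest is to make sure the archive hostnames now resolve to the proxy:
host archive.ubuntu.com
# archive.ubuntu.com has address 10.1.1.11
host security.ubuntu.com
# security.ubuntu.com has address 10.1.1.11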

Now on the proxy itself, I don't want archive.ubuntu.com to point to the localhost, as that won't work very well at all! So for that one machine, I changed its DNS to point to Google's Public DNS at 8.8.8.8.
echo "nameserver 8.8.8.8" | sudo tee /etc/resolv.conf
Alternatively, I could manually set the IP address of archive.ubuntu.com and security.ubuntu.com in that machine's /etc/hosts.
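
A sketch of that /etc/hosts approach (assuming dnsutils is installed for dig) would be to look up the real upstream addresses through 8.8.8.8 and pin them locally:
# pin the real upstream addresses in /etc/hosts (proxy machine only)
for h in archive.ubuntu.com security.ubuntu.com; do
    ip=$(dig +short "$h" @8.8.8.8 | grep -m1 '^[0-9]')
    echo "$ip $h" | sudo tee -a /etc/hosts
done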

Moreover, if I ever need to disable the use of the caching proxy on a single guest, I can simply and temporarily change that machine's DNS to 8.8.8.8 as above.

I'm really finding this to be a handy way of speeding up my network installs and package upgrades on my set of Ubuntu machines at home. I'm not wasting nearly as much disk space or network bandwidth, and I don't have to configure anything on each and every client or installation.

And now that I no longer need a 500GB local disk, I will probably move my proxy into a virtual machine very soon.

I also added a custom byobu status script to track the size of the approx cache, as well as the number of files in the cache, ~/.byobu/bin/61_approx:
#!/bin/sh
# Report the approx cache's disk usage and the number of cached .deb packages
dir=/var/cache/approx
du=$(du -sh "$dir" | awk '{print $1}')
count=$(find "$dir" -type f -name "*.deb" | wc -l)
printf "Prox:%s,%s" "$du" "$count"
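
To enable it, the script just needs to be executable; byobu runs custom status scripts out of ~/.byobu/bin and uses the leading number as the refresh interval in seconds, so this one updates about once a minute:
chmod +x ~/.byobu/bin/61_approx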

Cheers,
:-Dustin

Monday, November 15, 2010

Landscape for the Ubuntu Evangelist-turned-Remote-Sysadmin


Like many of you here at Planet Ubuntu, I'm on a continuous quest to convert friends and family to Ubuntu. I'm proud to say that Ubuntu's users now include my parents, my wife, her parents, both of my sisters, her sister, their husbands, and several friends.

On the whole, they're all quite satisfied with Ubuntu. They like that it's virus-free, never crashes, does not pathologically slow down over time, and generally just works.

That said, whenever I visit any of the above parties, I generally spend a good 30 minutes to an hour giving their Ubuntu system a good tune-up. Usually that just entails installing all updates, etc. But sometimes there's a bit more work to be done. This is time I would rather spend with my family, and them with me.

I previously had no use case for Canonical's Landscape service. I don't use it for my systems at home, as I have a static IP and SSH access to my suite of servers, desktops, and virtual machines, with which I can generally do all that I need.

But that's not the case for my relatives. I recently realized how much Landscape would help me remotely manage my extended family's Ubuntu systems.

I am, for all practical purposes, their system administrator, and Landscape gives me a really convenient way of managing each of their systems remotely, wherever they are, from wherever I am -- which are typically not one and the same.

So just an idea from left field, here... Landscape is generally targeted at enterprise users trying to manage data centers full of Ubuntu Servers. But I'm finding it really convenient to manage a few dozen machines scattered about the country while I travel around the globe. Nice job, Landscape team.
Note that unlike Ubuntu, Landscape is an optional, value-add, paid-for service on top of Ubuntu.
:-Dustin

Thursday, November 4, 2010

Meeting your Childhood Hero

Who was your sports hero when you were 8 years old?

Mine was, without question, Chicago Cubs right fielder Andre Dawson. He had a tremendous swing, a laser rocket arm, a golden glove, and an intensity that was unmatched by most who played the game (his nickname was The Hawk). I wanted nothing more, as an 8-year-old kid, than to grow up and be just like Andre Dawson.

I was quite happy to see him inducted into the Baseball Hall of Fame in July 2010, and excited to actually visit the Hall in Cooperstown, NY, and see his exhibit just a week later.

Now just imagine my surprise: leaving the Ubuntu Developer Summit, I'm sitting on one of Continental Airlines' smallest prop planes in Orlando, FL, waiting to depart for Miami, FL, when a luggage tag slides by me at eye level. There's a Florida Marlins logo, and a name: Andre Dawson. My heart skipped a beat with excitement. Andre Dawson was one of the 6 other people on my flight!

I spent the next hour marveling at the incredible situation. I ran the conversation over in my head a dozen times. Finally, when the plane landed, I walked two rows back and said:
"Mr. Dawson? I just wanted to say that you have been my hero since I was 8 years old. I loved the way you played the game, and wanted to play just like you. Congratulations on Cooperstown, well deserved! Would it be possible to take a picture with you?"
He said, "Oh, uh, thank you, thank you, thank you. Sure."

And, again, thank you, Andre Dawson.

:-Dustin
