From the Canyon Edge -- :-Dustin

Tuesday, July 26, 2011

The Obligatory DevOps Blog Post

Any business with half a need for computing resources has traditionally employed or contracted a team of professionals -- usually of the species Systemus Administratus (SysAdmin in the lingua franca) -- to manage those resources.  SysAdmins are distinct from their computer resource hunting/gathering predecessors in their ability to use tools, construct new ones, and, most importantly, cultivate farms of local servers.  SysAdmins have ruled the landscape of the IT industry for nearly 30 years.  But the extensive manual labor once required to provision and maintain entire data centers is being swept away by the industrial revolution of cloud technologies.  The dawn of the cloud computing age has yielded demand for a different IT skill set.
More recently, we have witnessed the rapid emergence of a successful new species, Developus Operatus, or DevOps for short.  DevOps embody a different collection of technical skills, distinct from their SysAdmin counterparts, finely honed for cloud computing environments and Agile development methodologies.  DevOps excel at data center automation, develop for cloud-scale applications, and utilize extensive configuration management to orchestrate massive systems deployments.  DevOps are not exactly pure developers, engineers, testers, or tech operators, but in fact incorporate skills from each of these areas of expertise.  Some SysAdmins have consciously migrated toward DevOps professions, while others have transformed without even realizing it.

With the accelerating adoption of cloud platforms, DevOps professionals are perhaps the most influential individuals in the technology industry.  The cloud’s first colonists and earliest adopters, DevOps technologists are thought leaders and key innovators in this thriving market.  Expert DevOps collaboration is now essential in any Agile development shop, with DevOps stakeholders providing vital guidance to design discussions, platform adoption, and even procurement decisions.

Linux and UNIX server distributions with decades of tradition are hard wired directly into the DNA and collective memory of many SysAdmins.  For veterans who measure system uptime in decades, the Ubuntu Server is still quite a newcomer to this SysAdmin camp, and is often (and unfortunately) treated with inescapable skepticism.

On the other hand, the Ubuntu Server seems rather more attractive to the DevOps guild, presenting interesting, advantageous opportunities as an ideal Linux platform.  DevOps demand dynamic, cloud-ready environments that older Linux/UNIX distributions do not yet deliver.  The Ubuntu Server is uniquely positioned to appeal to the hearts and minds of the DevOps discipline, who require a rare balance of stability, security, and timely releases, yet also the latest and greatest features.  Ubuntu builds on the foundation of Debian’s Linux/UNIX tradition, but continuously integrates the latest application enhancements with high quality, releasing every six months.  On time.  Every...single...time.

I believe that Ubuntu is already appealing to the DevOps crowd as a comprehensive, complimentary platform, particularly in contrast to some of the other industry players.  Never complete, the Ubuntu platform continues to evolve alongside the DevOps movement.

I know that we in Ubuntu are working to ensure that the Ubuntu Server is the ideal Linux platform for the greater DevOps community for many years to come.  Stay tuned to hear how Ubuntu's Orchestra and Ensemble projects are aiming to do just that...


Monday, July 25, 2011

Getting started with the CloudFoundry Client in Ubuntu

I'm pleased to introduce a powerful new cloud computing tool available in Ubuntu 11.10 (Oneiric), thanks to the hard work of my team (Brian Thomason, Juan Negron, and Marc Cluet), as well as our partners at VMWare -- the cloudfoundry-client package, which provides ruby-vmc and its command line interface, vmc.

CloudFoundry is a PaaS (Platform as a Service) cloud solution, open sourced earlier this year by VMWare.  Canonical's Systems Integration Team has been hard at work for several weeks now packaging both the client and server pieces of CloudFoundry for Ubuntu 11.10 (Oneiric).  We're at a point now where we'd love to get some feedback from early adopting Oneiric users on the client piece.

PaaS is a somewhat new area of Cloud Computing for Ubuntu.  Most of our efforts up to this point have been focused on IaaS (Infrastructure as a Service) solutions, such as Eucalyptus and OpenStack.  With IaaS, you (the end user of the service) run virtual machine instances of an OS (hopefully Ubuntu!), and build your solutions on top of that infrastructure layer.  With PaaS, you (the end user of the service) develop applications against a given software platform (within a few constraints), but you never actually touch the OS layer (the infrastructure).

CloudFoundry is one of the more interesting open source PaaS offerings I've used lately, already supporting several platforms (Ruby, NodeJS, Spring Java) and several backend databases (MySQL, MongoDB, Redis), with support for other languages and databases under rapid development.

VMWare is hosting a free, public CloudFoundry server (though you need to request an invite; mine took less than 48 hours to arrive).  However, we're rapidly converging on a cloudfoundry-server package in a PPA, as well as an Ensemble formula.  Stay tuned for a subsequent introduction on that, and a similar how-to in the next few days...

In the meantime, let's deploy a couple of basic apps!

Installing the Tools

The tool you need is vmc, which is provided by the ruby-vmc package.  We in the Canonical SI Team didn't find that package name very discoverable, so we created a meta package called cloudfoundry-client.

sudo apt-get install cloudfoundry-client

Setting the Target

First, you'll need to set the target server for the vmc command.  For this tutorial, we'll use VMWare's public server.  Very soon, you'll be able to target your locally deployed CloudFoundry server instead!

$ vmc target
Succesfully targeted to []

Logging In

Next, you'll log in with your credentials.  As I said above, it might take a few hours to receive your credentials, but once you do, you'll log in like this:

$ vmc login
Password: **********
Successfully logged into []

Deploying Your First Applications

Your friendly Canonical Systems Integration Team has developed and tested a series of simple hello-world applications in each of CloudFoundry's supported languages.  Each of these applications simply prints a welcome message and displays all of the environment variables available to the application.  The latter bit (the environment variables) is important, as several of them, those starting with VCAP_*, serve as a sort of metadata service for your applications.

Our sample apps are conveniently placed in /usr/share/doc/ruby-vmc/examples.
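At their core, the hello_env apps all do the same thing.  Here's a rough shell sketch of the idea (the packaged examples are actually written in Ruby, NodeJS, and Java, so treat this as an illustration, not their source):

```shell
#!/bin/sh
# Illustrative sketch only -- not the packaged hello_env source.
# CloudFoundry injects VCAP_* variables into the app's environment at
# runtime; we fake one here so the demo prints something outside the cloud.
VCAP_APP_PORT=${VCAP_APP_PORT:-8080}
export VCAP_APP_PORT

echo "Hello from CloudFoundry!"
# Dump the VCAP_* "metadata service" visible to the application:
env | grep '^VCAP_' | sort
```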

Deploying a Ruby Application

To deploy our sample Ruby application:

$ cd /usr/share/doc/ruby-vmc/examples/ruby/hello_env
$ vmc push
Would you like to deploy from the current directory? [Yn]: y
Application Name: example101
Application Deployed URL: ''? 
Detected a Sinatra Application, is this correct? [Yn]: y
Memory Reservation [Default:128M] (64M, 128M, 256M, 512M or 1G) 128M
Creating Application: OK
Would you like to bind any services to 'example101'? [yN]: n
Uploading Application:
  Checking for available resources: OK
  Packing application: OK
  Uploading (0K): OK   
Push Status: OK
Staging Application: OK                                                         
Starting Application: OK                                                        

And now, I can visit the deployed URL and see my application working.

Deploying a NodeJS Application

Next, I'm going to deploy our sample NodeJS application:

$ cd /usr/share/doc/ruby-vmc/examples/nodejs/hello_env
$ vmc push
Would you like to deploy from the current directory? [Yn]: y
Application Name: example102
Application Deployed URL: ''? 
Detected a Node.js Application, is this correct? [Yn]: y
Memory Reservation [Default:64M] (64M, 128M, 256M, 512M or 1G) 64M
Creating Application: OK
Would you like to bind any services to 'example102'? [yN]: n
Uploading Application:
  Checking for available resources: OK
  Packing application: OK
  Uploading (0K): OK   
Push Status: OK
Staging Application: OK                                                         
Starting Application: OK                                                       

And now, I can visit the deployed URL and see my simple NodeJS application running.

Deploying a Java Application

Now, we'll deploy our sample Java application.

$ cd /usr/share/doc/ruby-vmc/examples/springjava/hello_env

As with anything that involves Java, it's hardly as simple as our other examples :-)  First, we need to install the Java toolchain and compile our jar file.  I recommend you queue this one up and go brew yourself a gourmet pot of coffee.  (You might even make it to Guatemala and back.)  Also, note that we'll make a copy of this directory locally, because the Maven build process needs to be able to write to the local directory.

$ sudo apt-get install openjdk-6-jdk maven2
$ cd $HOME
$ cp -r /usr/share/doc/ruby-vmc/examples/springjava .
$ cd springjava/hello_env/
$ mvn clean package
$ cd target
$ vmc push
Would you like to deploy from the current directory? [Yn]: y
Application Name: example103
Application Deployed URL: ''?  
Detected a Java Web Application, is this correct? [Yn]: y
Memory Reservation [Default:512M] (64M, 128M, 256M, 512M or 1G) 512M
Creating Application: OK
Would you like to bind any services to 'example103'? [yN]: n
Uploading Application:
  Checking for available resources: OK
  Packing application: OK
  Uploading (4K): OK   
Push Status: OK
Staging Application: OK                                                         
Starting Application: OK

All that for a Java hello-world ;-)  Anyway, I now have it up and running.

Deploying a More Advanced Application

Hopefully these hello-world style applications will help you get started quickly and effortlessly with your first CloudFoundry deployments.  But let's look at one more complicated example -- one that requires a database service!

In digging around the web for some interesting NodeJS applications, I came across the Node Knockout programming competition.  I found a few interesting apps, but had a lot of trouble tracking down the source for some of them.  Anyway, I really liked a shared-whiteboard application called Drawbridge, and I did find its source on GitHub.  So I branched the code, imported it into bzr, and made a number of changes (with some awesome help from my boss, Zaid Al Hamami).  I guess that's an important point to make here -- I've had to do some fairly intense surgery on pretty much every application I've ported to run in CloudFoundry, so please do understand that you'll very likely need to modify your code to port it to the CloudFoundry PaaS.
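The single most common change in these ports (an illustration of the pattern, not a line from the actual Drawbridge diff) is to stop hardcoding the listen host and port, and read them from the VCAP_* environment instead:

```shell
# Illustrative only -- CloudFoundry tells the app where to listen via
# VCAP_APP_HOST and VCAP_APP_PORT; fall back to sane local defaults
# when running outside the platform.
HOST=${VCAP_APP_HOST:-localhost}
PORT=${VCAP_APP_PORT:-8080}
echo "app should listen on ${HOST}:${PORT}"
```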

In any case, let's deploy Drawbridge to CloudFoundry!

$ cd $HOME
$ bzr branch lp:~kirkland/+junk/drawbridge
$ cd drawbridge
$ vmc push
Would you like to deploy from the current directory? [Yn]: y
Application Name: example104
Application Deployed URL: ''? 
Detected a Node.js Application, is this correct? [Yn]: y
Memory Reservation [Default:64M] (64M, 128M, 256M or 512M) 128M
Creating Application: OK
Would you like to bind any services to 'example104'? [yN]: y
Would you like to use an existing provisioned service [yN]? n
The following system services are available:
1. mongodb
2. mysql
3. redis
Please select one you wish to provision: 2
Specify the name of the service [mysql-4a958]: 
Creating Service: OK
Binding Service: OK
Uploading Application:
  Checking for available resources: OK
  Processing resources: OK
  Packing application: OK
  Uploading (77K): OK   
Push Status: OK
Staging Application: OK                                                         
Starting Application: OK

Note that vmc provisioned and bound a new MySQL instance to the app!

Now, let's see what Drawbridge is all about.  Visiting the app in my browser, I can work on a collaborative whiteboard (much like Gobby or Etherpad, except for drawing pictures).  Brian Thomason helped me create a Pulitzer-worthy doodle.

Listing Apps and Services

Now that I have a few apps and services running, I can inspect them with a few basic vmc commands.

Here are my apps:

$ vmc apps 
| Application | #  | Health  | URLS                        | Services    |
| example102  | 1  | RUNNING | |             |
| example103  | 1  | RUNNING | |             |
| example101  | 1  | RUNNING | |             |
| example104  | 1  | RUNNING | | mysql-4a958 |

And my services (available and provisioned):

$ vmc services 
============== System Services ==============
| Service | Version | Description                   |
| redis   | 2.2     | Redis key-value store service |
| mongodb | 1.8     | MongoDB NoSQL store           |
| mysql   | 5.1     | MySQL database service        |
=========== Provisioned Services ============
| Name        | Service |
| mysql-4a958 | mysql   |
| mysql-5894b | mysql   |

In this post, I've demonstrated a few frameworks (Ruby/Sinatra, NodeJS, Spring/Java), but here I can see that several others are supported:

$ vmc frameworks 
| Name    |
| rails3  |
| sinatra |
| lift    |
| node    |
| grails  |
| spring  |

Scaling Instances

One of the huge advantages of PaaS deployment is how trivial scaling an application's resources can be.  Let's increase the memory available to one of these applications:

$ vmc mem example101
Update Memory Reservation? [Current:128M] (64M, 128M, 256M or 512M) 512M
Updating Memory Reservation to 512M: OK
Stopping Application: OK
Staging Application: OK                                                         
Starting Application: OK   
$ vmc stats example101
| Instance | CPU (Cores) | Memory (limit) | Disk (limit) | Uptime       |
| 0        | 0.1% (4)    | 16.9M (512M)   | 40.0K (2G)   | 0d:0h:1m:22s |

Done! Wow, that was easy!

Now, let's add some additional instances; I suspect the app will crash once my billions of blog readers start pounding Drawbridge, but with a few more instances maybe it'll stay up a bit longer :-)

$ vmc instances example104 4
Scaling Application instances up to 4: OK
$ vmc stats example104
| Instance | CPU (Cores) | Memory (limit) | Disk (limit) | Uptime        |
| 0        | 0.0% (4)    | 21.0M (128M)   | 28.0M (2G)   | 0d:0h:19m:33s |
| 1        | 0.0% (4)    | 15.8M (128M)   | 27.9M (2G)   | 0d:0h:2m:38s  |
| 2        | 0.0% (4)    | 16.3M (128M)   | 27.9M (2G)   | 0d:0h:2m:36s  |
| 3        | 0.0% (4)    | 15.8M (128M)   | 27.9M (2G)   | 0d:0h:2m:37s  |

In Conclusion

I hope this helps at least a few of you with an introduction to PaaS, CloudFoundry, and the CloudFoundry-Client (vmc) in Ubuntu.  As I said above, stay tuned for a post coming soon about hosting your own CloudFoundry-Server on Ubuntu!


Thursday, July 21, 2011

ASUS WL-330gE and DD-WRT

Different hotels have very different setups.  Some only have wired connections.  Some only have a single ethernet port that I need to share with my wife or roommate.

Traveling as much as I do, it's handy to bring a wireless router with me.

I bought a very, very tiny one in Taipei, Taiwan last year.  It was pretty good, very inexpensive, and rather configurable.  But it didn't run DD-WRT -- a real man's firmware! :-)

Last week, my buddy Scott Moser pointed me to the tiny ASUS WL-330gE.  I bought it on Amazon for $42, and just flashed it with DD-WRT.  Everything is working like a charm ;-)  In fact, I'm posting this entry on my Thinkpad, wirelessly connected to the device, which is in turn wirelessly connected to my home network (i.e., operating in wireless repeater bridge mode).

It's really small (3.25" x 2.5" x 0.75"), and can be powered by USB.

Pretty slick little device.  I'm quite happy with it ;-)


Wednesday, July 20, 2011

Introducing rootsign!

In my last post, I introduced the new utility bootmail, which can be configured to send you an email with the boot logs of your Ubuntu server each time it reboots.  This could prove really handy for your unattended or cloud servers.

While working on that tool, I quickly realized that any local user on the system could "forge" such an email message.  Truly, anyone can send email to anyone else.  That message can contain any data in it.  And even the sender and headers can be faked.  :-(

Thus, for bootmail to be useful, you'd need to have confidence that someone isn't faking your bootmail messages.  There's only one secure way to do that with email -- and that's a cryptographic signature of the message, signed with a private key known only to the root user of the system.

In retrospect, I realized that a generic mechanism for the root user to sign any given text could actually be a useful tool to have.  So I split that logic out of the bootmail executable and put it into its own utility, called rootsign (provided by the bootmail package).

rootsign operates on standard input, signing that data with a private key generated specifically for rootsign signatures, and outputs the ASCII-armored message and signature on standard output (suitable for piping directly to mail).

To verify the signature, you'll need to grab the public key and import it into your local gpg keyring:

cat /var/lib/rootsign/ | gpg --import

And let's say I want to post a signed copy of my dmesg to a pastebin:

dmesg | sudo rootsign | pastebinit

You can verify the signature from the public key at:

Do you have cronjobs that automatically send you email?  Have you ever wanted to assure yourself that these messages are authentic?  If so, rootsign is your friend :-)  Big thanks to Kees Cook, who helped with a few design issues around the generation of the key to be used (that's a separate post!).

Can you think of any other cool uses of rootsign?


Introducing bootmail!

I have a handful of remote Ubuntu Servers floating around the Cloud, and even a couple of co-lo's at friends' houses.  All of these machines are very much "unattended", and I really don't like it when they get rebooted (unless I pulled that trigger)!

For this reason, I've often added a cronjob to these systems to email me when they reboot.  It used to look something like this:

@reboot echo "$(hostname) rebooted on $(date)" | \
  mail -s "reboot notice [$(hostname)]"

Of course, I don't like duplicating code all over the place, and I love providing fun little utilities to all of you, so I improved upon this and just uploaded a new package to Ubuntu called bootmail.

You can already install it in Ubuntu 11.10 (Oneiric) with:

sudo apt-get install bootmail

Or you can install it on any older Ubuntu release with:

sudo apt-add-repository ppa:bootmail/ppa
sudo apt-get update
sudo apt-get install bootmail

Note that you'll be prompted by debconf to enter your email address, where bootmail will contact you when your system boots.

And now, each time I boot, I get an email that looks like this:

Subject: bootmail: [mirror] booted on [Tue Jul 19 19:19:44 CDT 2011]
Message-Id: ...
Date: Tue, 19 Jul 2011 19:19:53 -0500 (CDT)
From: noreply@mirror (Bootmail)

Hash: SHA1

bootmail: [mirror] booted on [Tue Jul 19 19:19:44 CDT 2011]

fsck from util-linux-ng 2.17.2
fsck from util-linux-ng 2.17.2
fsck from util-linux-ng 2.17.2
/dev/sda5 has been mounted 26 times without being checked, check forced.
/dev/sda6 has been mounted 26 times without being checked, check forced.
/dev/sda1: clean, 286/61312 files, 61620/244983 blocks
/dev/sda5: 222672/610800 files (0.1% non-contiguous), 1564984/2441872 blocks
/dev/sda6: 153602/29859840 files (1.3% non-contiguous), 67313156/119409129 blocks
init: ureadahead-other main process (880) terminated with status 4
/var/lib/tftpboot missing, aborting.
init: tftpd-hpa main process (907) terminated with status 1
init: tftpd-hpa main process ended, respawning
squid[931]: Squid Parent: child process 941 started
 * Starting AppArmor profiles       [80G [74G[ OK ]
 * Setting sensors limits       [80G [74G[ OK ]
 * Exporting directories for NFS kernel daemon...       [80G [74G[ OK ]
 * Starting NFS kernel daemon       [80G [74G[ OK ]
 * Not starting internet superserver: no services enabled
 * Starting Postfix Mail Transport Agent postfix       [80G [74G[ OK ]
 * Starting the landscape-client daemon       [80G [74G[ OK ]
 * Starting web server apache2       [80G apache2: Could not reliably determine the server's fully qualified domain name, using for ServerName
[74G[ OK ]

Linux mirror 2.6.32-33-server #70-Ubuntu SMP Thu Jul 7 22:28:30 UTC 2011 x86_64 GNU/Linux
Ubuntu 10.04.3 LTS

Welcome to the Ubuntu Server!
 * Documentation:

  System information as of Tue Jul 19 19:19:44 CDT 2011

  System load:  1.06              Processes:           150
  Usage of /:   63.5% of 9.17GB   Users logged in:     0
  Memory usage: 12%               IP address for eth1:
  Swap usage:   0%

  Graph this data and manage this system at


Version: GnuPG v1.4.10 (GNU/Linux)


So I get a date and timestamp, the hostname of the system that booted, and a couple of log files (which are configurable in /etc/bootmail/logs) every time the system boots (thanks for the suggestion, Clint) -- pure awesome for my unattended servers!

I also get a cryptographic signature of the entire message, signed by a GPG key uniquely generated for this local root user.  That piece is handled by another new utility that I've written for Ubuntu, called rootsign.  More about rootsign in my next post ;-)

Is there anyone out there who would use bootmail?

I was thinking about adding some support for bootmail in cloud-init, so that you could pass an email address to your instance through metadata, and it would email you as soon as it's up.  What do you think?
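To make the idea concrete, here's a purely hypothetical sketch of what the instance user-data might look like -- none of this exists yet, and the debconf key name (bootmail/email) is my guess, not a documented interface:

```yaml
#cloud-config
# HYPOTHETICAL sketch -- bootmail has no cloud-init integration today.
# The idea: install bootmail and preseed the notification address from
# user-data, so the instance emails you as soon as it boots.
packages:
  - bootmail
runcmd:
  - echo "bootmail bootmail/email string you@example.com" | debconf-set-selections
  - dpkg-reconfigure -f noninteractive bootmail
```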


Tuesday, July 19, 2011

Byobu 4.18 released, getting close to tmux profiles ;-)


I just released and uploaded Byobu 4.18 to the PPA for older Ubuntu releases and to Ubuntu Oneiric.  I'm very close to having profiles for tmux, as per the feedback to this post a few weeks ago.

In moving that direction, I needed to refactor Byobu's status framework such that it could be used by both Screen and Tmux.  So I created a byobu-statusd daemon that gathers and updates status as it expires, caching it in a run directory in tmpfs.  This has the really nice side effect that it should be much more efficient and less resource intensive, involving hundreds fewer shell forks per minute.  Thanks to Scott Moser for some of the code and much help along the way!
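The caching pattern itself is straightforward.  A simplified sketch (function and path names here are mine for illustration, not byobu-statusd's actual internals):

```shell
#!/bin/sh
# Simplified sketch of the byobu-statusd caching idea: re-run a status
# command only when its cached output has expired, saving a shell fork
# on every refresh in between.  Names and paths are illustrative only.
CACHE="${TMPDIR:-/tmp}/statusd-demo"
mkdir -p "$CACHE"

cached_status() {
    name="$1"; expiry="$2"; shift 2
    f="$CACHE/$name"
    now=$(date +%s)
    # Regenerate only if the cache file is missing or older than $expiry seconds
    if [ ! -f "$f" ] || [ $((now - $(date -r "$f" +%s))) -ge "$expiry" ]; then
        "$@" > "$f"
    fi
    cat "$f"
}

# First call runs `date` and caches it; calls within 5s just read the cache.
cached_status clock 5 date
```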

That said, a lot of code has changed with byobu-4.18.  Please, please, please test it and report bugs in the usual location in Launchpad:


Monday, July 18, 2011

Introducing keep-one-running!

I just added another utility to the run-one package -- keep-one-running.  It's already in Ubuntu Oneiric (11.10), or you can add it to any other supported Ubuntu release from the PPA, with:

sudo apt-add-repository ppa:run-one/ppa
sudo apt-get update
sudo apt-get install -y run-one

run-one is a very useful tool that ensures you never have more than one invocation of a process running on a system at a time.  I now use it in every single cron job I have, to keep long-running jobs from ever stepping on a subsequent one.
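Under the hood the idea is simple.  Roughly (a simplification -- the real run-one keeps its locks in a cache directory and has more careful error handling), it derives a lock file from the full command line and only runs if it can take the lock:

```shell
#!/bin/sh
# Roughly what run-one does (a simplification, not the real tool):
# hash the command and its arguments into a lock file name, then run
# the command under a non-blocking flock -- a second invocation of the
# same command line fails immediately instead of running concurrently.
run_one_demo() {
    lock="${TMPDIR:-/tmp}/run-one-demo.$(echo "$*" | md5sum | cut -d' ' -f1)"
    flock -n "$lock" "$@"
}

run_one_demo echo "only one of me at a time"
```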

I use a bip proxy to keep me connected to IRC and log messages even while I'm away.  Before opening xchat, I need to establish an ssh tunnel to my bip proxy.  More importantly, I need to keep that connection up (particularly when I'm on an unreliable network). 

To solve that problem generally, I added the keep-one-running mode to run-one.  And now, I've added this command to my Unity startup applications:

keep-one-running ssh -N -C -L 7778:localhost:7778

If I were a root user, I could perhaps use upstart and the respawn directive.  I guess you could look at keep-one-running as a poor man's respawn.  Give it a shot and let me know if it's useful to you!
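Conceptually, keep-one-running is just a respawn loop around the command (the sketch below caps the number of respawns so it terminates; the real tool loops indefinitely and serializes via run-one's lock):

```shell
#!/bin/sh
# Conceptual sketch of keep-one-running: re-invoke the command whenever
# it exits.  Capped at $1 respawns here so the demo terminates; the real
# tool respawns forever.
keep_running_demo() {
    respawns="$1"; shift
    i=0
    while [ "$i" -lt "$respawns" ]; do
        "$@"             # blocks while the command runs
        i=$((i + 1))
        sleep 0.1        # brief pause before respawning
    done
}

# Respawn a short-lived command three times:
keep_running_demo 3 echo "tunnel up"
```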

Enjoy ;-)