From the Canyon Edge -- :-Dustin

Thursday, October 27, 2011

Getting Started with Ubuntu Orchestra -- Servers in Concert!

Servers in Concert!

Ubuntu Orchestra is one of the most exciting features of the Ubuntu 11.10 Server release, and we're already improving upon it for the big 12.04 LTS!

I've previously given an architectural introduction to the design of Orchestra.  Now, let's take a practical look at it in this how-to guide.


To follow this particular guide, you'll need at least two physical systems and administrative access rights on your local DHCP server (perhaps on your network's router).  With a little ingenuity, you can probably use two virtual machines and work around the router configuration.  I'll follow this guide with another one using entirely virtual machines.

To build this demonstration, I'm using two older ASUS (P1AH2) desktop systems.  They're both dual-core 2.4GHz AMD processors and 2GB of RAM each.  I'm also using a Linksys WRT310n router flashed with DD-WRT.  Most importantly, at least one of the systems must be able to boot over the network using PXE.

Orchestra Installation

You will need to manually install Ubuntu 11.10 Server on one of the systems, using an ISO or a USB flash disk.  I used the 64-bit Ubuntu 11.10 Server ISO, and my no-questions-asked uquick installation method.  This took me a little less than 10 minutes.

After this system reboots, update and upgrade all packages on the system, and then install the ubuntu-orchestra-server package.

sudo apt-get update
sudo apt-get dist-upgrade -y
sudo apt-get install -y ubuntu-orchestra-server

You'll be prompted to enter a couple of configuration parameters, such as setting the cobbler user's password.  It's important to read and understand each question.  The default values are probably acceptable, except for one, which you'll want to be very careful about...the one that asks about DHCP/DNS management.

In this post, I selected "No", as I want my DD-WRT router to continue handling DHCP/DNS.  However, in a production environment (and if you want to use Orchestra with Juju), you might need to select "Yes" here.

And about five minutes later, you should have an Ubuntu Orchestra Server up and running!

Target System Setup

Once your Orchestra Server is installed, you're ready to prepare your target system for installation.  You will need to enter your target system's BIOS settings and ensure that the system is set to boot first from PXE (netboot), and then from local disk (hdd).  Orchestra uses Cobbler (a project maintained by our friends at Fedora) to prepare the network installation using PXE and TFTP, and thus your machine needs to boot from the network.  While you're in your BIOS configuration, you might also ensure that Wake on LAN (WoL) is enabled.

Next, you'll need to obtain the MAC address of the network card in your target system.  One of many ways to obtain this is by booting that Ubuntu ISO, pressing ctrl-alt-F2, and running ip addr show.
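If the system is already running Linux, you can also pull the MAC address out of the ip output with a little awk.  Here's a minimal sketch, run against a canned sample so the parsing is visible; the interface name eth0 and the address shown are just example values:

```shell
# Parse the MAC address out of `ip link` output.  The sample text below
# stands in for a live interface; on a real system you would pipe
# `ip link show eth0` (interface name is an assumption) into the same awk.
SAMPLE='2: eth0: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast
    link/ether 00:1a:92:88:b7:d9 brd ff:ff:ff:ff:ff:ff'
MAC=$(printf '%s\n' "$SAMPLE" | awk '/link\/ether/ {print $2}')
echo "$MAC"    # 00:1a:92:88:b7:d9
```

The second field of the link/ether line is the hardware address, so the same one-liner works for any interface name.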

Now, you should add the system to Cobbler.  Ubuntu 11.10 ships a feature called cobbler-enlist that automates this; however, for this guide, we'll use the Cobbler web interface.  Give the system a hostname (e.g., asus1), select its profile (e.g., oneiric-x86_64), and enter its IP address and MAC address (e.g., 00:1a:92:88:b7:d9).  In the case of this system, I needed to tweak the Kernel Options, since this machine has more than one attached hard drive and I want to ensure that Ubuntu installs onto /dev/sdc, so I set the Kernel Options to partman-auto/disk=/dev/sdc.  You might have other tweaks on a system-by-system basis that you need or want to adjust here (like IPMI configuration).
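For reference, the same registration can be done from the command line on the Orchestra Server.  This is only a sketch: the hostname, profile, MAC, and kernel options are the example values above, and the exact flags can vary between Cobbler releases.

```
sudo cobbler system add --name=asus1 --profile=oneiric-x86_64 \
    --mac=00:1a:92:88:b7:d9 \
    --kopts="partman-auto/disk=/dev/sdc"
sudo cobbler sync
```

The cobbler sync at the end regenerates the PXE configuration so the new system is actually served.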

Finally, I adjusted my DD-WRT router to add a static lease for my target system, and point dnsmasq to PXE boot against the Orchestra Server.  You'll need to do something similar-but-different here, depending on how your network handles DHCP.
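On a dnsmasq-based router like DD-WRT, the additional configuration amounts to something like the following sketch.  The MAC, the lease address, and the Orchestra Server's address (192.168.1.10 here) are assumptions for this example; substitute your own.

```
# Static DHCP lease for the target system
dhcp-host=00:1a:92:88:b7:d9,192.168.1.70
# Point PXE clients at the Orchestra Server's TFTP service
# (filename,servername,server-address)
dhcp-boot=pxelinux.0,orchestra,192.168.1.10
```

If your network uses a different DHCP server (ISC dhcpd, for instance), the equivalent settings are the fixed-address host entry and the next-server/filename options.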

NOTE: As of October 27, 2011, Bug #882726 must be manually worked around, though this should be fixed in oneiric-updates any day now.  To work around this bug, log in to the Orchestra Server and run:

RELEASES=$(distro-info --supported)
ARCHES="x86_64 i386"
for r in $RELEASES; do
  for a in $ARCHES; do
    sudo cobbler profile edit --name="$r-$a" \
      ...  # (profile edit options elided)
  done
done

Target Installation

All set!  Now, let's trigger the installation.  In the web interface, enable the machine for netbooting.

If you have WoL working for this system, you can even use the web interface to power the system on.  If not, you'll need to press the power button yourself.
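For the curious, a WoL "magic packet" is nothing exotic: 6 bytes of 0xFF followed by the target's MAC address repeated 16 times, broadcast over UDP.  A minimal sketch, using the example MAC from above (actually sending the packet, e.g. with the wakeonlan tool or netcat, is left commented out):

```shell
# Build a Wake-on-LAN magic packet: 6 bytes of 0xFF, then the target
# MAC address repeated 16 times -- 102 bytes in total.
MAC="00:1a:92:88:b7:d9"                      # example target MAC
ESCAPED=$(printf '%s' "$MAC" | sed -e 's/^/\\x/' -e 's/:/\\x/g')
{
  printf '\xff\xff\xff\xff\xff\xff'
  i=0
  while [ "$i" -lt 16 ]; do
    printf "$ESCAPED"
    i=$((i + 1))
  done
} > magic_packet.bin
wc -c < magic_packet.bin
# To actually wake the machine, broadcast the packet on UDP port 9, e.g.:
#   nc -u -b 255.255.255.255 9 < magic_packet.bin
```

Tools like wakeonlan and etherwake do exactly this for you; the sketch just shows there's no magic in the magic packet.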

Now, we can watch the installation remotely, from an SSH session into our Orchestra Server!  For extra bling, install these two packages:

sudo apt-get install -y tmux ccze

Now launch byobu-tmux (which handles splits much better than byobu-screen).  In the current window, run:

tail -f /var/log/syslog | ccze

Now, split the screen vertically with ctrl-F2.  In the new split, run:

sudo tail -f /var/log/squid/access.log | ccze

Move back and forth between splits with shift-F3 and shift-F4.  The ccze command colorizes log files.

In the left split, you'll see the syslog progress of your installation scrolling by.  In the right split, you'll see your squid logs, as your Orchestra Server caches the binary deb files it downloads.  On your first installation, you'll see a lot of TCP_MISS messages.  But if you try this installation a second time, subsequent installs will roll along much faster, and you should see lots of TCP_HIT messages.
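If you're curious how warm your cache is, you can tally the hit/miss ratio straight out of the access log.  A sketch, run here against a couple of canned log lines standing in for /var/log/squid/access.log (in squid's native log format, the fourth field carries the result code):

```shell
# Tally squid result codes (TCP_HIT, TCP_MISS, ...) from an access log.
# Canned sample lines stand in for /var/log/squid/access.log here.
cat > sample_access.log <<'EOF'
1319700000.123    45 192.168.1.70 TCP_MISS/200 20480 GET http://archive.ubuntu.com/a.deb - DIRECT/1.2.3.4 application/x-debian-package
1319700001.456     2 192.168.1.70 TCP_HIT/200 20480 GET http://archive.ubuntu.com/a.deb - NONE/- application/x-debian-package
1319700002.789     3 192.168.1.71 TCP_HIT/200 10240 GET http://archive.ubuntu.com/b.deb - NONE/- application/x-debian-package
EOF
awk '{ split($4, code, "/"); tally[code[1]]++ }
     END { for (c in tally) print c, tally[c] }' sample_access.log
```

Point the same awk at the real log on your Orchestra Server to watch the TCP_HIT count climb on second and subsequent installs.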

It takes me about 5 minutes to install these machines with a warm squid cache (and maybe 10 minutes to do that first installation, downloading all of those debs over the Internet).  More importantly, I have installed as many as 30 machines simultaneously in a little over 5 minutes with a warm cache!  I'd love to try more, but that's as much hardware as I've had concurrent access to at this point.

Post Installation

Most of what you've seen above is the provisioning aspect of Orchestra -- how to get the Ubuntu Server installed to bare metal, over the network, and at scale.  Cobbler does much of the hard work there, but remarkably, that's only the first pillar of Orchestra.

What you can do after the system is installed is even more exciting!  Each system installed by Orchestra automatically uses rsyslog to push logs back to the Orchestra server.  To keep the logs of multiple clients in sync, NTP is installed and running on every Orchestra managed system.  The Orchestra Server also includes the Nagios web front end, and each installed client runs a Nagios client.  We're working on improving the out-of-the-box Nagios experience for 12.04, but the fundamentals are already there.  Orchestra clients are running PowerNap in power-save mode, by default, so that Orchestra installed servers operate as energy efficiently as possible.
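The rsyslog forwarding boils down to a one-line directive on each client.  Conceptually it looks like this sketch; the hostname orchestra is an assumption, and the config Orchestra actually generates may differ in detail:

```
# /etc/rsyslog.d/ fragment: forward all facilities and priorities to the
# central log host over UDP (a double @@ would select TCP instead)
*.* @orchestra:514
```

With every client shipping its syslog to one place, and NTP keeping their clocks in agreement, interleaving logs from a whole rack becomes trivial.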

Perhaps most importantly, Orchestra can actually serve as a machine provider to Juju, which can then offer complete Service Orchestration to your physical servers.  I'll explain in another post soon how to point Juju to your Orchestra infrastructure, and deploy services directly to your bare metal servers.

Questions?  Comments?

I won't be able to offer support in the comments below, but if you have questions or comments, drop by the friendly #ubuntu-server IRC channel, where we have at least a dozen Ubuntu Server developers with Orchestra expertise hanging around and happy to help!


Wednesday, October 19, 2011

The Magic Number 4

We're less than two weeks away from the next Ubuntu Developer Summit, in Orlando, Florida, where nearly 700 techies will define the enterprise Linux landscape for the next decade.
You: "Come on, Dustin, you're being a bit melodramatic, here, no?"
Me: "Heh, if anything, I may be understating the importance of the Ubuntu 12.04 LTS!"
When it comes to enterprise operating systems, there's a certain magic aura that surrounds the number "4".  Let's take a stroll through enterprise operating system history...

Anyone here remember Windows NT4?  You can hate Microsoft and Windows all you want, but in 1996, NT4 became the first Windows release in 11 years that delivered an enterprise-ready server.  I was in high school working for a little PC outfit called Alpha Computer Company in Plaquemine, Louisiana, and we installed NT4 servers by the hundreds.  For all its faults and security vulnerabilities, server administration had never been point-and-click easier.

I have infinite respect for RHEL4!  I was a Red Hat and Fedora user for 10 years between 1997 and 2006 (when I switched to Ubuntu), and ran nearly every version from Red Hat 5 through Fedora Core 5, as well as RHEL2.1 and RHEL3.  It was RHEL4 in 2005 that was pure gold!  The features, the stability -- this was the first enterprise Linux release anywhere that was ready for prime time.  And it's still a great OS nearly 7 years later.  There's no shortage of hosting companies still running RHEL4.x + cPanel out there.

I dabbled in Solaris just a little in high school and eventually in my Computer Science courses at Texas A&M University.  Guess what Solaris was called, before it was rebranded in 1993?  Yep, SunOS4 became the first Solaris!  I dare say that Sun cranked out the dominant UNIX implementation right up until OpenSolaris tanked spectacularly and the aforementioned RHEL4 stole the Linux/UNIX show.

I also served 8 years hard time at IBM, where we danced to a slightly different UNIX tune -- that of AIX.  Once again, it was the AIX4 release series that established AIX as a UNIX mainstay and rose to the level of expectations of IBM customers.  AIX4 shifted the focus to IBM's innovative PowerPC processors, introduced CDE, IPv6 (remarkably in 1997!), and everyone's favorite text-based system management utility, smitty ;-)

With all this talk about UNIX, we certainly cannot overlook SVR4.  UNIX System V Release 4.0 in 1988 was basically the last (SVR5 was a SCO disaster, and SVR6 was cancelled) of the great UNIX specification releases, feeding into all of the proprietary and open UNIX distributions, from Sun, to HP, to IBM, to DEC, to the various BSD derived distributions.  SVR4 was the beginning of a new era of UNIX computing, and its legacy runs right up to our doorsteps today.

And here we are, just 6 months away from the fourth Ubuntu LTS.  Reflecting back a bit, Ubuntu 6.06 LTS (Dapper) was the first long term supported, enterprise release, and the introduction of Ubuntu as a Server platform.  Support for Dapper just ended in June of this year (2011), and provided Ubuntu users with some rock-solid stability, if lacking a bit on some modern Linux features.  The Ubuntu 8.04 LTS (Hardy) release (the first cycle on which I worked the Ubuntu Server for Canonical) introduced the enterprise Linux industry to KVM as a hypervisor and refined our ability to deliver a long term supported, heavily QA'd server release.  Hardy is still supported for another 1.5 years, and I know of many Ubuntu Server installations happily cranking along on Hardy (including my own).  Ubuntu 10.04 LTS defined the IaaS cloud market, providing a fully-functional, 100% open source cloud infrastructure with UEC, and absolutely rewrote the industry's books on Linux as a cloud guest operating system.

It's quite easy to see the progression of the Ubuntu LTS Server, from 6.06 to 8.04 to 10.04.  With that kind of momentum behind us, coupled with history's emphasis on "4th" releases of operating systems, can you imagine the quality, features, and industry impact of Ubuntu's LTS4?  I'm just beginning to wrap my head around it, and it's damn exciting!

Personally, I can't wait for UDS, to help get that chapter of history underway.


Thursday, October 13, 2011

The email I received from Dennis Ritchie (by way of maddog)

I learned earlier this morning that Dennis Ritchie, one of the fathers of the C programming language and of UNIX as we know it, passed away.  Thank you so much, Mr. Ritchie, for the immeasurable contributions you've made to the modern world of computing!  That I'm gainfully employed and love computer technology the way I do is in no small way owed to your innovation and open contributions to that world.

Sadly, I've never met "dmr", but I did have a very small conversation with him, via a mutual friend -- Jon "maddog" Hall (who wrote his own farewell in this heartfelt article).

A couple of years ago, I created the update-motd utility for Ubuntu systems, whereby the "message of the day", traditionally located at /etc/motd, could be dynamically generated, rather than being a static message composed by the system's administrator.  The initial driver for this was Canonical's Landscape project, but numerous others have found it useful, especially in Cloud environments.

A while back, a colleague of mine complimented the sheer simplicity of the idea of placing executable scripts in /etc/update-motd.d/ and collating the results at login into /etc/motd.  He asked if any Linux or UNIX distribution had ever provided a simple framework for dynamically generating the MOTD.  I've only been around Linux/UNIX for ~15 years, so I really had no idea.  This would take a bit of old school research into the origins of the MOTD!
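That simplicity is easy to demonstrate.  This sketch mimics (rather than reproduces) what happens on an Ubuntu system: executable fragments are run in lexical order and their output is concatenated into a single MOTD.  The local directory and the fragment names here are made up for the demo:

```shell
# Simulate update-motd: run executable fragments in lexical order and
# concatenate their output.  A local directory stands in for the real
# /etc/update-motd.d.
MOTD_DIR=./update-motd.d
mkdir -p "$MOTD_DIR"
cat > "$MOTD_DIR/00-header" <<'EOF'
#!/bin/sh
echo "Welcome to $(uname -sr)"
EOF
cat > "$MOTD_DIR/90-updates" <<'EOF'
#!/bin/sh
echo "0 packages can be updated."
EOF
chmod +x "$MOTD_DIR"/*
for fragment in "$MOTD_DIR"/*; do "$fragment"; done > motd
cat motd
```

The numeric prefixes (00-, 90-, ...) are what give administrators control over ordering, exactly as with run-parts style directories elsewhere in Debian and Ubuntu.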

I easily traced it back through every FHS release, back to the old fsstnd-1.0.  The earliest reference I could find in print that specifically referred to the path /etc/motd was Using the Unix System by Richard L. Gauthier (1981).

At this point, I reached out to colleagues Rusty Russell and Jon "maddog" Hall, and asked if they could help me a bit more with my search.  Rusty said that I would specifically need someone with a beard, and CC'd "maddog" (who I had also emailed :-)

Maddog did a bit of digging himself...if by "digging" you mean emailing the author of C and Unix!  I had a smile from ear to ear when this message appeared in my inbox:
Jon 'maddog' Hall to Dustin on Tue, Apr 20, 2010 at 10:08 PM: 

> A young friend of mine is investigating the origins of /etc/motd.  I
> think he is working on a mechanism to easily update that file.
> I think I can remember it in AT&T Unix of 1977, when I joined the labs,
> but we do not know how long it was in Unix before that, and if it was
> inspired by some other system.
> Can you help us out with this piece of trivia?

Ah, a softball!
MOTD is quite old.  The same thing was in CTSS and then
Multics, and doubtless in other systems.  I suspect
even the name is pretty old.  It came into Unix early on.

I haven't looked for the best  citation, but I bet it's easily
findable:  one of the startling things that happened
on CTSS was that someone was editing the password
file (at that time with no encryption) and managed
to save the password file as the MOTD.

Hope you're well,

Well sure enough, Dennis was (of course) right.  The "message of the day" does actually predate UNIX itself!  I would eventually find Time-sharing Computer Systems, by Maurice Wilkes (1968), which says:

"There is usually also a message of the day, a feature designed to keep users in touch with new facilities introduced and with other changes in the system"

As well as the Second National Symposium on Engineering Information, New York, October 27, 1965 proceedings:
"When a user sits down at his desk (console), he finds a "message of the day".  It is tailored to his specific interests, which are of course known by the system."

Brilliant!  So it wasn't so much that update-motd had introduced something that no one had ever thought of, but rather that it had re-introduced an old idea that had long since been forgotten in the annals of UNIX history.

I must express a belated "thank you" to Dennis (and maddog), for the nudges in the right direction.  Thank you for so many years of C and UNIX innovation.  Few complex technologies have stood the test of time as well as C, UNIX and the internal combustion engine.

RIP, Dennis.


Friday, October 7, 2011

Ubuntu Cloud Live

This morning, Canonical's CEO Jane Silber is delivering the first keynote address at the incredible OpenStack Conference in Boston, MA.  I've spent the entire week here in Boston -- Monday, Tuesday, and Wednesday were dedicated to an Ubuntu-style developer summit, focusing on the next OpenStack release (code named Essex), set for release in early April.  This version of OpenStack will form the IaaS basis for the Ubuntu 12.04 LTS server in April 2012.

I saw a preview of Jane's slides yesterday evening, and I'm quite sad that I'm missing her talk (I'm writing this from the Boston/Logan airport on my way back to Austin, TX).  Jorge Castro will be posting a video of her talk as soon as he can.  I think you'll hear about Ubuntu's history of leadership as the best host and guest OS in the cloud, and our revolutionary approach to Service Orchestration in the Cloud.

I've also seen a sneak preview of a demo given at the end of the talk.  Clint Byrum and Adam Gandelman have worked around the clock producing a spectacular visualization of an Ubuntu Cloud at work.  In the front of the stage, we have a portable rack of servers (a 40-core Intel Emerald Ridge, a 24-core HP Proliant, a 16-core Dell Precision, with a System76 local Ubuntu mirror, and Cisco networking hardware).  We've used Ubuntu Orchestra to remotely install the systems, and we've deployed OpenStack to the rack.  Once OpenStack is running, Clint has a series of Hadoop jobs that he spins up and runs against dozens of instances on the local Nova compute node.  And for the real whiz-bang, Clint uses gource for dynamic visualization of the Hadoop cluster, the various nodes, and their relationships.  It is absolutely stunning to behold!

We are also giving away a few hundred top-notch USB sticks, rubber-coated with the Ubuntu brandmark.  Ask Robbie Williamson how much he enjoyed dd'ing several hundred ISO images :-)  What was he loading onto the sticks, you ask?

Rewind back to May 2010: in a 5-minute lightning talk at UDS-Brussels, I demonstrated an Ubuntu LiveISO running the Ubuntu Enterprise Cloud and called it Cloud in your Pocket.  A bit later, I reworked that image to support OpenStack too, and showed that at the OpenStack Design Summit in San Antonio.  I was delighted when a couple of the Canonical OEM Server developers (Ante Karamatic, Dave Medberry, and Al Stone) picked that work up and ported it forward to Ubuntu 11.10, Unity, and OpenStack Diablo.

So this morning's OpenStack Conference attendees are walking away with the Ubuntu Cloud Live USB experience!  For the rest of you, you can freely download the image yourself, and write that to your own USB stick, or even run it in a virtual machine!

To get started, download the image from:

We're going to re-roll that image for the 11.10 official GA release.  Next, write that image to a USB stick (assuming that USB drive is sdb):

sudo dd if=binary.img of=/dev/sdb
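dd gives no feedback on success, so it's worth verifying the write.  Here's a sketch of one way to do that: compare the image against the first N bytes of the target, where N is the image size.  It's demonstrated below on plain files; for a real stick, substitute /dev/sdb (with sudo) as the target:

```shell
# Write an image and verify it by comparing the image against the first
# N bytes of the target, where N is the image size.
write_and_verify() {
    img="$1"; target="$2"
    dd if="$img" of="$target" conv=fsync 2>/dev/null
    size=$(stat -c %s "$img")
    cmp -n "$size" "$img" "$target"
}
# Demo on regular files (for a USB stick: sudo, and target=/dev/sdb):
printf 'pretend-iso-content' > demo.img
write_and_verify demo.img demo.target && echo "write verified"
```

The -n limit on cmp matters for real devices, since the stick is almost always larger than the image and the trailing bytes are garbage.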

Or just run that image in a virtual machine using TestDrive:

testdrive -u ./binary.img

The image should boot much like an Ubuntu Desktop Live, and you should end up in a very minimal Unity environment, with a command line and a web browser, and not much else.  On the desktop, there's a text document with instructions for getting started.  We could have automated all of the cloud creation, but we figured it would be educational to leave a few steps for you (key generation, image registration, instance running).

You can watch it here:

I'm hoping we contribute Ubuntu Cloud Live to the OpenStack Satellite projects (akin to Ubuntu Universe -- it's not part of Core OpenStack, but it's related and useful to some OpenStack users).

It's quite easy for you to modify and rebuild the Ubuntu Cloud Live image to your uses!  That looks something like this...

Install the live-build tools, and grab the source code from Launchpad:

sudo apt-get install live-build
bzr branch lp:cloud-live

Make your changes, if any.  And then build.

lb clean
lb build

You'll wait a while.  Internet connection speed and CPU/Memory will determine how long the build takes.  Eventually, you'll see a file called binary.img.  And there you go!  You have just re-built the Ubuntu Cloud Live image.