From the Canyon Edge -- :-Dustin

Monday, August 29, 2011

The Ubuntu Login Sound in 5.1 Channel Glory

It was way too hot here in Austin, Texas this weekend, as it hit 110F on Sunday!  So I spent most of the heat of the day inside, toying with something that I think is pretty cool :-)  I couldn't find any OS today (Mac, Windows, or Linux) that has a 5.1 channel login sound...  I'm hoping that Ubuntu might be the first!

I have 7.1 channel surround sound in my home theater, which is great for watching movies.  Hooked up to my projector is (of course) an Ubuntu nettop, which I use to stream and serve most of my media content.

I thought it would be neat to remix the Ubuntu login sound in 5.1 channels, to exercise my theater's surround sound at boot.

So I grabbed the familiar "drums and crickets" OGG file, which you can find at /usr/share/sounds/ubuntu/stereo/desktop-login.ogg, and opened it in Audacity, a phenomenal open source audio editor.  I split that stereo track into two mono tracks, and then added four more blank tracks.

The first two tracks are the Left and Right channels, respectively, followed by the Center channel, the Sub woofer channel, and then the Surround Left and Surround Right channels.  I copied the Left and Right channels to the Surround Left and Surround Right channels.

Then, I opened the original desktop-login.ogg again, and mixed that stereo track to a single mono track.  I took that mono track and copied it to the Center and Sub woofer channels.

Okay, now I had 6 tracks ... time to start playing with them!

I decided that I wanted the "crickets and wind" at the end of the clip to be exclusively in my rear, surround channels.  So I silenced the Surround Left and Surround Right tracks until about the 3.85 second mark, and then faded in from 3.85 seconds to 5.43 seconds, and faded out from 5.43 seconds until the end of the clip.  Since I wanted that sound exclusively in the rear channels, I silenced each of the Left, Right, Center, and Sub woofer channels from the 5.0 second mark, until the end of the clip.  Next, I smoothly faded out the Left and Right channels from about 2.21 until the 4.54 second marks.

For the intro, I wanted the first few drum beats to emanate from the center channel, and then spread wide to the Left and Right channels, right up to the big cymbal crash and the crescendo of the clip.  So I took the Center channel and added a very long fade, from the 0.30 second mark until about 3.97 seconds.  And then I set the Left and Right channels to slowly fade in, from 0 seconds to about 1.48 seconds.

Finally, I took the Sub woofer track and de-amplified it way down.  Then I applied a low-pass bass boost filter several times, until the lowest hits of the bass drum were the only audible parts of the track.
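
As an aside, if you'd rather assemble the final mix outside of Audacity, sox can merge separate mono tracks into a single multi-channel file.  This is just a rough sketch, assuming you've exported each track to its own wav file (hypothetical filenames), in the channel order described above:

sox -M left.wav right.wav center.wav lfe.wav surround-left.wav surround-right.wav desktop-login-5.1.wav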

Want to hear it for yourself?  Well, you'll have to have 5.1 speakers in a true Surround Sound setup...  If so, grab the [flac, ogg, or wav] file, and open it in smplayer, ensuring that you have 5.1 channel sound enabled in smplayer.
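
If you'd rather skip the GUI, mplayer (the engine underneath smplayer) can be asked for 6-channel output directly.  A minimal sketch, assuming your sound system is already configured for 5.1 and the downloaded file is named desktop-login-5.1.ogg (hypothetical filename):

mplayer -channels 6 desktop-login-5.1.ogg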



With the right equipment, you should be in for a treat!  You'll hear the first few drum beats in your Center channel, along with some solid, thumping bass hits.  The sound should spread quickly from the Center, fanning outward toward your Right and Left channels right up to the big crashing cymbal!  And with that crescendo, the Left, Right, Center, and Sub should all gracefully fall silent, while the crickets and the whooshing wind sweep back to your Rear Left and Rear Right channels!

Don't have 5.1 sound?  Well, you can still listen to each track individually.  Grab the [flac, ogg, or wav] file, and open it in audacity.  You should see 6 channels vertically down your screen.  You can click the Solo button next to each track, and listen to each track one-by-one.  Make sure you un-click the Solo button between plays.  This might give you a decent idea of how each of the channels come together.


Fancy yourself a sound producer?  Remix it again and share :-)  I have the wav sources up at lp:~kirkland/ubuntu-sounds/834802. Better yet, how about creating a whole new Ubuntu login sound?  :-)  Maybe one day....

From the right side of my brain,
:-Dustin

Thursday, August 25, 2011

Distro Breakdown in the Netflix/Linux Petition

I was pretty stoked to read earlier this month that ChromeOS and Chrome/Chromium were getting a Netflix app in the Chrome Web Store.  I installed it earlier tonight, but sadly it's not working on Chrome or Chromium on Ubuntu.  I installed it on my Cr-48, and it worked there.  Reports on the page indicate that it's working on Chrome/Windows.  But Chrome/Chromium on Linux -- no dice :-(

So the Netflix-on-Linux blues continue, unfortunately :-(

In looking for workarounds, I came across this web petition for Netflix-on-Linux support.  So I signed the petition and was impressed to see 16,518 other signatures!

In fact, I downloaded all of the signatures and did a little (far from scientific) grepping of my own to see where Ubuntu stood among the other desktops in the signature list.  Ubuntu lands at nearly 70%.  Impressive!
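
Here's roughly the sort of (again, far from scientific) grepping I mean, assuming the downloaded signatures are sitting in a hypothetical local file called signatures.txt:

# Count the signatures that mention each distro, case-insensitively
for distro in ubuntu fedora centos mint arch debian suse; do
    echo "$distro: $(grep -ci "$distro" signatures.txt)"
done
echo "total: $(wc -l < signatures.txt)"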





Distro              Signatures   Share
Ubuntu                   11433   69.2%
Fedora/RH/CentOS          1600    9.7%
Mint                      1092    6.6%
Arch                       891    5.4%
Debian                     856    5.2%
SuSE                       596    3.6%
Other                       50    0.3%
Total                    16518  100.0%


:-Dustin

Tuesday, August 23, 2011

Ubuntu ARM Servers -- At last!!!


I've long had a personal interest in the energy efficiency of the Ubuntu Server.  This interest has manifested in several ways.  From founding the PowerNap Project to using tiny netbooks and notebooks as servers, I'm just fascinated with the idea of making computing more energy efficient.

It wasn't so long ago, in December 2008 at UDS-Jaunty in Mountain View that I proposed just such a concept, and was nearly ridiculed out of the room.  (Surely no admin in his right mind would want enterprise infrastructure running on ARM processors!!! ...  Um, well, yeah, I do, actually....)  Just a little over two years ago, in July 2009, I longed for the day when Ubuntu ARM Servers might actually be a reality...

My friends, that day is here at last!  Ubuntu ARM Servers are now quite real!


The affable Chris Kenyon first introduced the world to Canonical's efforts in this space with his blog post, Ubuntu Server for ARM Processors a little over a week ago.  El Reg picked up the story quickly, and broke the news in Canonical ARMs Ubuntu for microserver wars.   And ZDNet wasn't far behind, running an article this week, Ubuntu Linux bets on the ARM server.  So the cat is now officially out of the bag -- Ubuntu Servers on ARM are here :-)

Looking for one?  This Geek.com article covers David Mandala's 42-core ARM cluster, based around TI Panda boards.  I also recently came across the ZT Systems R1801e Server, boasting 8 x Dual ARM Cortex A9 processors.  The most exciting part is that this is just the tip of the iceberg.  We've partnered with companies like Calxeda (here in Austin) and others to ensure that the ARM port of the Ubuntu Server will be the most attractive OS option for quite some time.

A huge round of kudos goes to the team of outstanding engineers at Canonical (and elsewhere) doing this work.  I'm sure I'm leaving off a ton of people (feel free to leave comments about who I've missed), but the work that's been most visible to me has been by:


So I'm looking forward to reducing my servers' energy footprint...are you?

:-Dustin

Friday, August 19, 2011

Ensemble: the Service Orchestration framework for hard core DevOps

I've seen Ensemble evolve from a series of design-level conversations (Brussels, May 2010), through a year of fast-paced Canonical-style development, and have participated in Ensemble sprints (Cape Town, March 2011, and Dublin, June 2011).  I observed Ensemble at first as an outsider, then provided feedback as a stakeholder, and have now contributed code to Ensemble as a developer and authored Formulas.


Think about bzr or git circa 2004/2005, or apt circa 1998/1999, or even dpkg circa 1993/1994...  That's where we are today with Ensemble circa 2011. 

Ensemble is a radical, outside-of-the-box approach to a problem that the Cloud ecosystem is just starting to grok: Service Orchestration.  I'm quite confident that in a few years, we're going to look back at 2011 and the work we're doing with Ensemble and Ubuntu and see a clear inflection point in the efficiency of workload management in The Cloud.

From my perspective as the leader of Canonical's Systems Integration Team, Ensemble is now the most important tool in our software tool belt when building complex cloud solutions.

Period.

Juan, Marc, Brian, and I are using Ensemble to generate modern solutions around new service deployments to the cloud.  We have contributed many formulas already to Ensemble's collection, and continue to do so every day.

There are a number of novel ideas and unique approaches in Ensemble.  You can deep dive into the technical details here.  For me, there's one broad concept in Ensemble that just rocks my world...  Ensemble deals in individual service units, with the ability to replicate, associate, and scale those units quite dynamically.  Service units in practice are cloud instances (or if you're using Orchestra + Ensemble, bare metal systems!).  Service units are federated together to deliver a (perhaps large and complicated) user-facing service.

Okay, that's a lot of words, and at a very high level.  Let me try to break that down into something a bit more digestible...

I've been around Red Hat and Debian packaging for many years now.  Debian packaging is particularly amazing at defining prerequisite packages and pre- and post-installation procedures, and is just phenomenal at rolling upgrades.  I've worked with hundreds (thousands?) of packages at this point, including some mind-bogglingly complex ones!

It's truly impressive how much can be accomplished within traditional Debian packaging.  But it has its limits.  These limits really start to bare their teeth when you need to install packages on multiple separate systems, and then federate those services together.  It's one thing if you need to install a web application on a single, local system:  depend on Apache, depend on MySQL, install, configure, restart the services...

sudo apt-get install your-web-app
...

Profit!

That's great.  But what if you need to install MySQL on two different nodes, set them up in a replicating configuration, install your web app and Apache on a third node, and put a caching reverse proxy on a fourth?  Oh, and maybe you want to do that a few times over.  And then scale them out.  Ummmm.....

sudo apt-get errrrrrr....yeah, not gonna work :-(

But these are exactly the type(s) of problems that Ensemble solves!  And quite elegantly in fact.

Once you've written your Formula, you'd simply:

ensemble bootstrap
ensemble deploy your-web-app
... 
Profit!
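
And to make that more concrete, here's a hypothetical sketch of the four-node scenario above.  The formula names (mysql, your-web-app, caching-proxy) are illustrative, assuming such formulas exist in the collection:

ensemble bootstrap                            # stand up the environment
ensemble deploy mysql                         # first database node
ensemble add-unit mysql                       # second, replicating database node
ensemble deploy your-web-app                  # Apache + your web app
ensemble deploy caching-proxy                 # the caching reverse proxy
ensemble add-relation mysql your-web-app      # wire the app to its database
ensemble add-relation your-web-app caching-proxy
ensemble add-unit your-web-app                # ...and scale out, as needed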

Stay tuned here and I'll actually show some real Ensemble examples in a series of upcoming posts.  I'll also write a bit about how Ensemble and Orchestra work together.

In the meantime, get primed on the Ensemble design and usage details here, and definitely check out some of Juan's awesome Ensemble how-to posts!

After that, grab the nearest terminal and come help out!

We are quite literally at the edge of something amazing here, and we welcome your contributions!  All of Ensemble and our Formula Repository are entirely free software, building on years of best practice open source development on Ubuntu at Canonical.  Drop into the #ubuntu-ensemble channel in irc.freenode.net, introduce yourself, and catch one of the earliest waves of something big.  Really, really big.

:-Dustin

Thursday, August 18, 2011

PowerNap Your Data Center! (LinuxCon 2011 Vancouver)


I was honored to speak at LinuxCon North America in beautiful Vancouver yesterday, about one of my favorite topics -- energy efficiency opportunities using Ubuntu Servers in the data center (something I've blogged before).

I'm pleased to share those slides with you today!  The talk is entitled PowerNap Your Data Center, and focuses on Ubuntu's innovative PowerNap suite, from the system administrator's or data center manager's perspective.

We discussed the original Cloud motivations for PowerNap, and its evolution from the basic process monitoring and suspend/hibernate methods of PowerNap1 to our complete rewrite in PowerNap2 (thanks, Andres!), which added nearly a dozen monitors and the ubiquitously useful PowerSave mode.  PowerNap is now more useful and configurable than ever!

Flip through the presentation below, or download the PDF here.





Stay tuned for another PowerNap presentation I'm giving at Linux Plumbers next month in California.  That one should be a bit of a deeper dive into the technical implementation, and hopefully generate some plumbing layer discussion and improvement suggestions.

:-Dustin

Monday, August 8, 2011

Howto: Install the CloudFoundry Server PaaS on Ubuntu 11.10



I recently gave an introduction to the CloudFoundry Client application (vmc),  which is already in Ubuntu 11.10's Universe archive.

Here, I'd like to introduce the far more interesting server piece -- how to run the CloudFoundry Server, on top of Ubuntu 11.10!  As far as I'm aware, this is the most complete PaaS solution we've made available on top of Ubuntu Servers, to date.

A big thanks to the VMware CloudFoundry team, who have been helping us along with the deployment instructions.  Also, all of the packaging credit goes straight to Brian Thomason, Juan Negron, and Marc Cluet.

For testing purposes, I'm going to run this in Amazon's EC2 Cloud.  I'll need a somewhat larger instance to handle all the services and dependencies (i.e., Java) required by the platform.  I find an m1.large seems to work pretty well, for $0.34/hour.  I'm using the Oneiric (Ubuntu 11.10) AMIs listed at http://uec-images.ubuntu.com/oneiric/current/.
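
For reference, launching one of those instances from the command line looks something like this.  The AMI id below is just a placeholder, so substitute the current Oneiric AMI for your region from the page above, and your own keypair name:

ec2-run-instances ami-xxxxxxxx --instance-type m1.large --key mykey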

Installation

To install CloudFoundry Server, add the PPA, update, and install:

sudo apt-add-repository ppa:cloudfoundry/ppa
sudo apt-get update
sudo apt-get install cloudfoundry-server

During the installation, there are a couple of debconf prompts, including:
  • a MySQL password
    • required for configuration of the MySQL database (you'll enter it twice)
All in all, the install took me less than 7 minutes!

Next, install the client tools, either on your local system, or even on the server, so that we can test our server:

sudo apt-get install cloudfoundry-client

Configuration

Now, you'll need to target your vmc client against your installed server, rather than CloudFoundry.com, as I demonstrated in my last post.

In production, you're going to need access to a wildcard based DNS server, either your own, or a DynDNS service.  If you have a DynDNS.com standard account ($30/year), CloudFoundry actually supports dynamically adding DNS entries for your applications.  We've added debconf hooks in the cloudfoundry-server Ubuntu packaging to set this up for you.  So if you have a paid DynDNS account, just sudo dpkg-reconfigure cloudfoundry-server.

For this example, though, we're going to take the poor man's approach, and just edit our /etc/hosts file, BOTH locally on our laptop and on our CloudFoundry server.

First, look up your server's external IP address.  If you're running Byobu in EC2, it'll be shown in the lower right corner.

Otherwise, grab your IPv4 address from the metadata service.

$ wget -q -O- http://169.254.169.254/latest/meta-data/public-ipv4
174.129.119.101

And you'll need to add an entry to your /etc/hosts for api.vcap.me, AND every application name you deploy.  Make sure you do this both on your laptop, and the server!  Our test application here will be called testing123. Don't forget to change my IP address to yours ;-)

echo "174.129.119.101  api.vcap.me testing123.vcap.me" | sudo tee -a /etc/hosts

Target

Now, let's target our vmc client at our vcap (CloudFoundry) server:

$ vmc target api.vcap.me
Succesfully targeted to [http://api.vcap.me]

Adding Users

And add a user.

$ vmc add-user 
Email: kirkland@example.com
Password: ********
Verify Password: ********
Creating New User: OK
Successfully logged into [http://api.vcap.me]

Logging In

Now we can login.

$ vmc login 
Email: kirkland@example.com
Password: ********
Successfully logged into [http://api.vcap.me]

Deploying an Application


At this point, you can jump over to my last post in the vmc client tool for a more comprehensive set of examples.  I'll just give one very simple one here, the Ruby/Sinatra helloworld + environment example.

Go to the examples directory, find an example, and push!

$ cd /usr/share/doc/ruby-vmc/examples/ruby/hello_env
$ vmc push
Would you like to deploy from the current directory? [Yn]: y
Application Name: testing123
Application Deployed URL: 'testing123.vcap.me'? 
Detected a Sinatra Application, is this correct? [Yn]: y
Memory Reservation [Default:128M] (64M, 128M, 256M, 512M, 1G or 2G) 
Creating Application: OK
Would you like to bind any services to 'testing123'? [yN]: n
Uploading Application:
  Checking for available resources: OK
  Packing application: OK
  Uploading (0K): OK 
Push Status: OK
Staging Application: OK
Starting Application: OK

Again, make absolutely sure that you edit your local /etc/hosts and map testing123.vcap.me to the right IP address, and then just point a browser to http://testing123.vcap.me/
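
If everything worked, you can also sanity check it from the command line on any machine with that /etc/hosts entry:

curl http://testing123.vcap.me/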


And there you have it!  An application pushed, and running on your CloudFoundry Server  -- Ubuntu's first packaged PaaS!

What's Next?

So the above setup is a package-based, all-in-one PaaS.  That's perhaps useful for your first CloudFoundry Server, and your initial experimentation.  But a production PaaS will probably involve multiple, decoupled servers, with clustered databases, highly available storage, and enterprise grade networking.

The Team is hard at work breaking CloudFoundry down to its fundamental components and creating a set of Ensemble formulas for deploying CloudFoundry itself as a scalable service.  Look for more news on that front very soon!

In the meantime, try the packages at ppa:cloudfoundry/ppa (or even the daily builds at ppa:cloudfoundry/daily) and let us know what you think!

:-Dustin

Saturday, August 6, 2011

With Open Arms

Jonathan Carter has admirably brought the topic of "Defining what upstream contributions are in terms of membership" to the Ubuntu Community Council meeting agenda.

Hats off, Jonathan, for taking a lead on this important question!

It's my sincere hope that the Ubuntu community continues to be one of the most open and accepting open source organizations in the world.

The current one-line description of Ubuntu membership is:
Membership of the Ubuntu community means recognition of a significant and sustained contribution to Ubuntu and the Ubuntu community.

I, personally, wholeheartedly welcome any and all who have contributed to Ubuntu in any way, even if indirectly, to the Ubuntu community.  (The words "significant" and "sustained", I think are too subjective for my tastes.)

Without question, to me that includes any Debian Developers, Maintainers, and countless Open Source Upstream Developers (even where those upstreams are employed by Ubuntu's competitors).

If such a person is willing to sign and abide by the Ubuntu Code of Conduct, they're an Ubuntu Member, in my book.

The more people in this (free software) world who abide by the Ubuntu Code of Conduct, the better place it will be for all!

An oldie, but goodie...




:-Dustin

Friday, August 5, 2011

A Formal Introduction to The Ubuntu Orchestra Project



Today's post by Matthew East, coupled with several discussions in IRC and on the mailing lists, has made me realize that we've not communicated the Ubuntu Orchestra Project clearly enough to some parts of the Ubuntu Community.  Within Ubuntu Server developer circles, I think the project's goals, design, and implementation are quite well understood.  But I now recognize that our community stretches both far and wide, and our messages about Orchestra have not yet reached all corners of the Ubuntu world :-)  Here's an attempt at that now!

History

Disorganized concepts of Ubuntu Orchestra have been discussed at every UDS since UDS-Intrepid in Prague, May 2008.  In its current form, I believe these were first discussed at UDS-Natty in Orlando in October 2010, in a series of sessions led by Mathias Gug and me.  Mathias left Canonical a few weeks later for a hot startup in California called Zimride, but we initiated the project during the Natty cycle based on the feedback from UDS, pulling together the bits and pieces.

The newly appointed Server Manager (and Nomenclature-Extraordinaire) Robbie Williamson suggested the name Orchestra (previously, we were calling it Ubuntu Infrastructure Services).  Everyone on the team liked the name, and it stuck.  I renamed the project, packages, branding, and everything else around Ubuntu Orchestra, or just Orchestra for short.  Hereafter, we may say Orchestra, but we always mean Ubuntu Orchestra.

We had packages in a little-publicized PPA for Natty, but we never pushed the project into the archive for Natty.  It just wasn't baked yet, and due to other priorities, it just didn't land before the cycle's Feature Freeze.  Still, it was a great idea, we had a solid foundation, and the seed had been planted in people's minds for the next UDS in Budapest...

Right around UDS-Oneiric in Budapest (May 2011), I left the Ubuntu Platform Server Team, to manage a new team in Canonical Corporate Services, called the Solutions Integration Team (we build solutions on top of Ubuntu Server).  Two rock stars on that team (Juan Negron and Marc Cluet) had been hard at work on a project called the SI-Toolchain -- a series of Puppet Modules and mCollective plugins that can automate the deployment of workloads.  This was the piece that we were missing from Orchestra, the key feature that kept us from uploading Orchestra to Natty.  I worked extensively with them in the weeks before and after UDS merging their functionality into Orchestra, at which point we had a fully functional system for Oneiric.  Since that time, some of that functionality has been replaced with Ensemble, which aligns a bit better with how we see Service Orchestration in the world of Ubuntu Servers (more on that below).

Okay, history lesson done.  Now the technical details!

The Problem


Traditionally, the Ubuntu Server ships and installs from a single ISO.  That's fine and dandy if you're installing one or two servers.  But in the Cloud IaaS world where Ubuntu competes, that just doesn't cut the mustard.  Real Cloud deployments involve installing dozens, if not hundreds or thousands of systems.  And then managing, monitoring, and logging those systems for their operational lives.

I've installed the Ubuntu Enterprise Cloud literally hundreds of times in the last 3 years.  While the UEC Installer option in the Server ISO was a landmark achievement in IaaS installations, it falls a bit short on large scale deployments.  With the move to OpenStack, we had a pressing need to rework the Ubuntu Cloud installation.  Rather than changing a bunch of hard coded values in the debian-installer (again), we opted to invest that effort instead into a scalable and automatable network installation mechanism.

Ubuntu Orchestra is an ambitious project to solve those problems for the modern system administrator, at scale, using the best of Ubuntu Server Open Source technologies.  It's tightly integrated with Ubuntu Ensemble, and OpenStack is Orchestra's foremost (but not only) workload.

The Moving Parts

In our experience, anyone who has more than, say, a dozen Ubuntu Servers has implemented some form of a local mirror (or cache), a PXE/TFTP boot server, DHCP, DNS, and probably quite a bit of Debian preseed hacking, etc. to make that happen.  Most server admins have done something like this in their past.  And almost every implementation has been different.  We wanted to bundle this process and make it trivial for an Ubuntu system administrator to install Orchestra on one server, and then deploy an entire data center effortlessly.

To do this, we wanted to write as little new code as possible and really focus on Ubuntu's strength here -- packaging and configuring the best of open source.  We reviewed several options in this space.

The Ubuntu Orchestra Server

At a general level, the pieces we decided we needed were:
  • Provisioning Server
  • Management Server
  • Monitoring Server
  • Logging Server
There exist excellent implementations of each of these in Ubuntu already.  The ultimate goal of Orchestra is to tie them all together into one big happy integrated stack.

If you're conversant in Debian control file syntax, take a look at Orchestra's control file, and you'll see how these pieces are laid out.  Much of Orchestra is just a complicated, opinionated meta package with most of the "code" being in the post installation helper scripts that get everything configured and working properly together.

As such, the ubuntu-orchestra-server package is a meta package that pulls in:

  • ubuntu-orchestra-provisioning-server
  • ubuntu-orchestra-management-server
  • ubuntu-orchestra-monitoring-server
  • ubuntu-orchestra-logging-server
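
So if you want the whole stack on a single machine, installing the meta package pulls in all four.  A minimal sketch, assuming you're on Oneiric where these packages live:

sudo apt-get update
sudo apt-get install ubuntu-orchestra-server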

Let's look at each of those components...

The Ubuntu Orchestra Provisioning Server

We looked at a hacky little project called uec-provisioning, that several of us were using to deploy our local test and development Eucalyptus clouds.  (In fact, uec-provisioning provides several of the fundamental concepts of Orchestra, going back to the Lucid development cycle -- but they were quick hacks here, and not a fully designed solution.)  We also examined FAI (Fully Automated Install) and Cobbler.  We took a high level look at several others, but really drilled down into FAI and Cobbler.

FAI was already packaged for Debian and Ubuntu, but its dependency on NFS was a real limitation on what we wanted to do with large scale enterprise deployments.

Cobbler was a Fedora project, popular with many sysadmins, with a Python API and several users on their public mailing lists asking for Ubuntu support (both as a target and host).  All things considered, we settled on Cobbler and spent much of the Natty cycle doing the initial packaging and cleaning up the Debian and Ubuntu support with the upstream Fedora maintainers.  For Natty, we ended up with a good, clean Cobbler package, but as I said above, fell a little short on delivering the full Orchestra suite.  It's well worth mentioning that Cobbler is an excellent open source project with very attentive, friendly upstreams.

Cobbler is installable as a package, all on its own, on top of Ubuntu, and can be used to deploy Debian, Ubuntu, CentOS, Fedora, Red Hat, and SuSE systems.

But the ubuntu-orchestra-provisioning-server is a special meta package that adds some excellent enhancements to the Ubuntu provisioning experience.  It includes a squid-deb-proxy server, which caches local copies of installed packages, such that subsequent installations will occur at LAN speeds.  The Ubuntu Mini ISOs are automatically mirrored by a weekly cronjob, and automatically imported and updated in Cobbler.  Orchestra also ships specially crafted and thoroughly tested preseed files for Orchestra-deployed Ubuntu Servers.  These ensure that your network installations operate reliably unattended.

The Ubuntu Orchestra Management Server

In Orchestra's earliest (1.x) implementations, the Management Server portion of Orchestra was handled by a complicated combination of Puppet, mCollective, and over a dozen mCollective plugins (all of which we have now upstreamed to the mCollective project).  This design worked very well in the traditional "configuration management" approach to data center maintenance.

Instead, we're taking a very modern, opinionated approach on the future of the data center.  In the Orchestra 2.x series, we have adjusted our design from that traditional approach to a more modern "service orchestration" approach, which integrates much better into the overarching Ubuntu Cloud strategy.  Here, we're using Ensemble to provide a modern, Cloud-ready approach to today's data center.  Like Orchestra, Ensemble is an open source project driven by Canonical and Ubuntu developers, for Ubuntu users.

The Ubuntu Orchestra Monitoring Server

We believe that Monitoring is an essential component of a modern, enterprise-ready data center, and we know that there are some outstanding open source tools in this space.  After experimentation, research, and extensive discussions at UDS in Budapest, we have settled on Nagios as our monitoring solution.  Nodes deployed by Orchestra will automatically be federated back to the Monitoring Server.  The goal is to make this as seamless and autonomic as possible, and as transparent to the system administrator as possible.

The Ubuntu Orchestra Logging Server

Similar, but slightly separate from the Monitoring Server is the need most sysadmins have for comprehensive remote logging.  Data center servers are necessarily headless.  Orchestra is currently using rsyslog to provide this capability, also configured automatically at installation time.

The Ubuntu Orchestra Client

Servers provisioned by Orchestra, before they're managed by Ensemble, should all look identical.  We have modeled this behavior after Amazon EC2.  Every instance of Ubuntu Server you run in EC2 looks more or less the same at initial login.  We want a very similar experience in Orchestra deployed servers.

The default Orchestra deployed server looks very much like a default Ubuntu Server installation, with a couple of notable additions.  The preseed also adds the ubuntu-orchestra-client meta package, which pulls in: 
  • byobu, capistrano, cloud-init, ensemble, openssh-server, nagios, powernap, rsyslog, and squid-deb-proxy-client 
Note that administrators who disagree with these additions are welcome to edit the conffile, /etc/orchestra/ubuntu-orchestra-client.seed where this is specified.  But these are the client side pieces required by the rest of Orchestra's functionality.

In Comparison to Crowbar

Crowbar is Dell's solution for deploying OpenStack, and one that we've been following for some time.  I discussed the design of Orchestra at length with Crowbar's chief architect, Rob Hirschfeld, at the 2nd OpenStack Developer Summit in San Antonio in November 2010.  I've also seen Opscode's Matt Ray give an excellent presentation/demonstration on Crowbar at the Texas Linux Fest.

Orchestra and Crowbar are similar in some respects, in that they both deploy OpenStack clouds, but differ significantly in others.  Notably:
  • Crowbar was designed to deploy OpenStack (yesterday they announced that they're working on deploying Hadoop too).  Orchestra is designed to deploy Ubuntu Servers, and then task them with jobs or roles (which might well be OpenStack compute, storage, or service nodes).
  • Crowbar was designed and optimized for Dell Servers (which allows it to automate some low-level tasks, like BIOS configuration), but has recently started deploying other hardware too.  Orchestra is designed to work with any hardware that can run Ubuntu (i386, amd64, and even ARM!).
  • Crowbar uses Chef for a configuration-management type experience, and while initially implemented on Ubuntu, should eventually work with other Linux OSes.  Orchestra uses Ensemble for a service-orchestration style experience, and while other OSes could be provisioned by Orchestra, it will always be optimized for Ubuntu.
  • Crowbar has recently been open sourced.  Orchestra is, and has been, open source (AGPL) since January 2011.
None of these points should disparage Crowbar.  It sounds like an excellent solution to a specific problem -- getting OpenStack running on a rack of Dell Servers.  In the demos we've seen of Crowbar, they're using Ubuntu as the base OS, and that's great.  We (Ubuntu) will continue to do everything in our power to ensure that Ubuntu is the best OS for running your OpenStack cloud.  In fact, we can even see Orchestra being used to deploy your Crowbar server, which then deploys OpenStack to your rack of Dell Servers, if that's your taste.  In any case, we're quite excited that others are tackling the hard problems in this space.

In Conclusion

Ensemble is how you deploy your workloads into the Cloud.  And Orchestra is how you deploy the Cloud.  Orchestra is a suite of best practices for deploying Ubuntu Servers, from Ubuntu Servers.  After deployment, it provides automatic federation and integrated management, monitoring, and logging.


Orchestra is shorthand for The Ubuntu Orchestra Project.  It's an Ubuntu Server solution.  For the Ubuntu community and users, as well as Canonical customers.  Designed and implemented by Ubuntu developers and aspiring Ubuntu developers.



:-Dustin

Thursday, August 4, 2011

Giving it a Go!


 At the urging of my pal Gustavo, I finally did some exploration of the Go programming language, so I thought I'd share my experience here...

In the course of a discussion with a handful of my colleagues this week, we identified a specific problem that I wanted to solve:
  • launching instances in Amazon EC2 using human readable aliases instead of the 8-character hexadecimal identifiers, like ami-123456ab

I wanted a precise wrapper for ec2-run-instances, identical in every way, except I wanted to support mnemonic terms like "lucid", or "natty amd64" or "oneiric daily" instead of "ami-5ec3ba0c".


With the help of Scott's new Ubuntu Cloud Images API, this problem really comes down to:
  1. Scanning argv[] for such aliases
  2. Retrieving the manifest of available images from the web service
  3. Applying a set of rules to identify the "right" or "best matching" AMI
  4. Replacing the alias values in argv[] with the identified AMI
  5. Calling execv()  on the argv[] arguments and the right AMI
Simple enough!  That's roughly an hour's worth of work for me in Python, Shell, C, Perl, PHP etc.  But Gustavo threw down the gauntlet...he challenged me to do it in Go!  He gave me a couple of pointers in IRC and then dashed off to dinner, leaving me naked and stranded in the Googles :-)
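
Just to illustrate the shape of the thing, here's a plain shell sketch of that flow.  The alias table is hard-coded and the AMI ids are placeholders; the real tool resolves them through the Ubuntu Cloud Images web service instead:

#!/bin/sh
# Hypothetical sketch: swap known release aliases for AMI ids, then hand
# everything else straight through to ec2-run-instances.
args=""
for arg in "$@"; do
    case "$arg" in
        lucid)   arg="ami-11111111" ;;   # placeholder AMI ids
        natty)   arg="ami-22222222" ;;
        oneiric) arg="ami-33333333" ;;
    esac
    args="$args $arg"
done
exec ec2-run-instances $args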

It was easy enough to install Go on Oneiric.  Just sudo apt-get install golang.  And so I put a good hour into writing the tool.  I'm more than a little ashamed to admit how badly I struggled for that first hour.  So I gave up (for a bit) and just banged out the tool in Python.  In very little time, I had it working like a charm.

It was at that point (10pm?) that I ported it to Go.  This was a much more pleasant experience!  Rather than trying to design and implement a new tool in a new language, I was simply translating the logic I already had into a new language.  The problem was more about learning the language's semantics than taking on the entire project at once.  I learned a ton about Go in the process, and now have a more informed opinion on the technology (which I'll share below).

What you have here is a small program (under 200 lines), identical in behavior and functionality, implemented in both Python and Go!  So open up your favorite multiplexing terminal or editor and split your screen vertically (Ctrl-F2 in Byobu), and check out these two programs side by side!

So my initial thoughts on Go as a first-time practitioner...let's do this the Clint Eastwood way...

The Good
  • It has a switch statement!  Woohoo, take that, Python!  :-)
  • #!/usr/bin/gorun -- This is awesome!  You have to jump through a couple of minor hoops (add a PPA, install a package, make a symlink by hand) right now, but if you do, you can just put that hash-bang at the top of your code and skip the compilation/linking steps, and run your code just like you would with Python/Shell/Perl.  This provides a nice bridge, letting you do your fast, iterative development in an interpreted style language, but you can always compile your code too, for stripped, fast binaries.
  • gofmt is also very cool!  You can run this on your source code and it will normalize your indentation and formatting exactly to the language's recommended standard.  I don't necessarily agree with all of the formatting rules, but that doesn't matter, since I can write how I want, and just run gofmt on my code before committing.  It would be sweet to have one of these for every language!
  • A function's return type is specified in the top of the function definition itself (albeit in a strange location on the line).  I like this about C, C++, and Java and tend to miss it when reading someone else's code in Python, Perl, or Shell.
  • _ is used, by convention, as a throwaway variable.  It's kind of cool to just sort of have one of those around.  I'm sure there are purists who'd disagree, and this is super minor, but I found it quite convenient.
The Bad
  • I was totally frustrated with how difficult "Go" is to search for.  Try Googling for "go array example".  Hint: you can get better results if you search for "golang array example", but it took me a while to figure that out (you're welcome!).   I found it painfully ironic that it's Google who's behind Go in the first place :-)  I just wish everyone would start calling it Golang (like Erlang) rather than Go.  Side note: I also wish every language had official documentation and user contributed examples as useful and navigable as PHP.net ;-)
  • I found it very strange and counterintuitive to put variable type declarations after the variable name.  So you would say "i int" rather than "int i".  And a hash with strings as keys and integers as values would be "myhash map[string]int".  To me, it was just a quirk and once I did it a few times, I got a little more used to it.  This is really a very minor psychological hurdle, in the end.
  • As cool as the gorun utility is, eventually you'd probably want to compile your code.  As far as I can tell, the result is a static binary after compilation and linking.  While this has some nice advantages for mobile, embedded devices that ship complete system/OS images, it seems like a huge nightmare for a distribution.  If a bug or vulnerability is found in the http library I used in my example above, I would have to recompile my code with the new library.  If Ubuntu shipped 100 such programs statically compiled, we'd have to recompile and SRU each and every one :-(  So I'm a little uneasy on this point, until I understand a bit better how to use shared libraries and dynamic linking in Go.
 The Ugly
  • Perhaps the most unattractive aspect of Go I found was how completely unreadable panics (tracebacks) are.  They look way too much like a Java shart, for my liking.  I mean, it's better than a totally useless C segmentation fault, but traceback readability is really where Python shines, to me.  Here's the first one I got from my program, last night: http://paste.ubuntu.com/658294/.  61 unreadable lines of text (which scrolled off of my screen) to tell me that I overran my buffer.  Moreover, the linguistics here leave a little to be desired:  "panic: runtime error: slice bounds out of range".  I'm hoping that these are designed to be machine readable, and that a good debugger exists that can make more sense of these?
  • The keyword "nil".  Seriously?  I mean, I'm even a lover of Latin, but this really demonstrates the length to which programming language authors go to differentiate their product.  Just seems a little vain to me.  What's wrong with null?  :-)  Okay, that one is totally comical.  I don't actually care.  It just seems silly to me.
My Verdict

A totally awesome learning experience that I recommend to any and all of my developer colleagues!  It's been several years since I learned a new programming language.  A big, sincere thanks to Gustavo for insisting that I give Go a try.

At this point, I don't really have any plans to start rewriting vast swaths of my Shell or Python code in Go.  But the next time I encounter a problem that I really should solve in C, I think I'll try doing it in Go first.  I don't yet quite see the massive advantages of Go over Shell or Python for distribution level development (which is my job at the end of the day), but I think I can see a few opportunities when we need something like C, C++, or Java.

So what about the tool you wrote???

So glad you asked!  Actually, it's landed in Oneiric, as of today!  It changed names, from ubuntu-run-instances, to ubuntu-ec2-run.  And Scott Moser rewrote it to his liking, as maintainer of the cloud-utils package and the web service API it uses.

He opted for the Python implementation, though, for two reasons...  First, Go would have introduced a new build-dependency, and since cloud-utils is in Main, that would have required an MIR for the golang packages, which are currently in Universe.  Second, the aforementioned static binary would likely prove difficult to maintain, from a distribution perspective.  We'd like to investigate this aspect of Go a bit more before heading down this route.

Happy hacking,
:-Dustin

LWN.net: Tongue in Cheek Humor, or Par for the Course?

Jon Corbet at LWN.net writes:
On the same day that Oracle announced the acquisition, CentOS developer Karanbir Singh suggested that one place the CentOS community could help out would be in the creation of a ksplice update stream. CentOS updates had been available from Ksplice Inc., on a trial basis at least; the company even somewhat snidely let it be known that they were providing updates for CentOS during the first few months of 2011, when the CentOS project itself had dropped the ball on that job. Oracle-ksplice still claims to support CentOS, but there is not even a trial service available for free; anybody wanting update service for CentOS must pay for it from the beginning. (The free service for Fedora and Ubuntu appears to still be functioning, for now - but who builds a high-availability system on those distributions?).


Quite a few people, actually!  :-)  Maybe even a few readers here.  Anyone?

:-Dustin

Tuesday, August 2, 2011

Keep One SSH Tunnel to a Bip Proxy Server Running

I've been using IRC proxies on-and-off since 1999, and consistently since 2003.

I used dircproxy until February 2008, when I joined Canonical, at which point I switched to bip, as I needed support for SSL encrypted connections.

I've also helped at least a dozen friends and colleagues construct similar setups, so this blog post is long, long overdue, and was triggered by another colleague asking me tonight to explain my setup again :-)

As you'll see below, it's not too complex, but it's really quite robust.  With this setup, all messages are logged, whether I'm attached or not.  When I'm not attached, I'm automatically marked 'away'.  All traffic between me and my server is encrypted.  Most importantly, my client marks any flagged messages/highlights that I missed each time I reconnect.

There are 4 key pieces to this setup:
  1. bip
  2. ssh
  3. keep-one-running
  4. xchat (or insert your favorite IRC client here)

The Server

I have a production, monitored Ubuntu server hosting www.divitup.com -- a side project that I authored back in college in 2000 to help split bills between roommates.  Long before the dawn of Facebook when Zuckerberg and the Winklevosses were still in high school :-)

It's an Ubuntu Server inside a VPS hosted by A2Hosting.com (with whom I've always been quite pleased!).  There's rarely downtime, but when there is, I hear about it from DivItUp users in a hurry.  It's the closest thing I have to an always-up server ;-)

Besides the DivItUp.com web service, it also runs SSH (of course) and I've installed the Bip proxy service (though the port is not open externally).  Bip installs quite trivially on Ubuntu with:

sudo apt-get install bip

Though you may need to enable it in /etc/default/bip.

Bip can be configured globally for the server in /etc/bip.conf.  See the manpage and the inline comments in your default /etc/bip.conf, but this should give a decent idea of roughly how mine is configured:

ip = "127.0.0.1";
port = 7778;
client_side_ssl = true;
log_level = 3;
pid_file="/var/run/bip/bip.pid";
log_root = "/var/log/bip/";
log_format = "%n/%Y-%m/%c.%d.log";
log_sync_interval = 5;
backlog = true;
backlog_lines = 0;              # number of lines in backlog, 0 means no limit
backlog_always = false;         # backlog even lines already backlogged
backlog_msg_only = true;
blreset_on_talk = true;
backlog_reset_on_talk = true;

# Networks
network {
        name = "canonical";
        ssl = true;
        server { host = "irc.canonical.com"; port = 6697; };
};
network {
        name = "freenode";
        server { host = "irc.freenode.net"; port = 6667; };
};

network {
        name = "oftc";
        server { host = "irc.oftc.net"; port = 6667; };
};

# Users/channels
user { 
        name = "kirkland";      # bip user name (not IRC username)
        password = "88548dff20a3b2b72852b4256a7a3544";  # bip user password, generated by bipmkpw
        ssl_check_mode = "none";
        default_nick = "kirkland";              # IRC nick
        default_user = "kirkland";              # IRC user
        default_realname = "Dustin Kirkland";   # IRC real name

        # A user can have multiple connections to irc networks.
        connection {
                name = "canonical";             # used by bip only
                network = "canonical";  # which ircnet to connect to
                user = "kirkland";
                realname = "Dustin Kirkland";
                password = "SomePassword";
                ignore_first_nick = true;
                no_client_away_msg = "currently disconnected";
                # Autojoined channels
                channel { name = "#a-channel,#another-channel,#maybe-a-third"; };
        };

        # another connection (optional)
        connection {
                name = "freenode";
                network = "freenode";
                ignore_first_nick = true;
                no_client_away_msg = "currently disconnected";
                on_connect_send = "PRIVMSG NickServ :IDENTIFY yourIRCpasswordHere";
                # Autojoined channels:
                channel { name = "#byobu"; };
                channel { name = "#ubuntu-devel"; };
                channel { name = "#ubuntu-meeting"; };
                channel { name = "#ubuntu-server"; };
                channel { name = "#ubuntu-cloud"; };
                # Password protected channel
                channel { name = "##the-good-stuff"; key = "zuperSekrit"; };
        };
};

Once you've installed and configured bip, start the service!

sudo service bip start
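
To double check that bip came up and is listening where we told it to (127.0.0.1:7778), something like this should do the trick:

sudo netstat -tlnp | grep 7778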

Now, let's take a look at the client...

The Client

Here, you really just need two things ... an always-running SSH tunnel to your server, and your IRC client.  I'll discuss Ubuntu/xchat here, but you can do the same with Android/AndChat.

There are several ways to set up an encrypted tunnel (stunnel, for instance), but here I'm going to show you the one that I'm partial toward :-)  I wrap an ssh port forwarding session with keep-one-running, and configure Unity to launch that automatically at login.

My ssh command looks like this:

ssh -N -L 7778:localhost:7778 divitup.com
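
If those flags are unfamiliar, here's the same command again, annotated:

# -N                        don't run a remote command, just hold the tunnel open
# -L 7778:localhost:7778    forward local port 7778 to port 7778 on localhost
#                           as seen from divitup.com, i.e. straight to bip
ssh -N -L 7778:localhost:7778 divitup.com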


Now I want to make sure that there's always one, and only one, of these running on my laptop client at all times.  I want it to automatically reconnect if I lose wireless connectivity, switch access points or networks, suspend-and-resume, etc.  So I wrap that command with the keep-one-running utility.

keep-one-running ssh -N -L 7778:localhost:7778 divitup.com

And I set Unity (or Gnome/KDE/XFCE) to run this command at desktop login.  Alt-F2, "Startup Applications".


At login, I can run "ps -ef | grep keep-one-running" and see the command in my list.

Finally, I need to configure my IRC client, xChat, to talk to localhost:7778, rather than irc.freenode.net.

Here, you'll add a custom "network" for each of the server connections you defined in your /etc/bip.conf on the Server.   You'll use localhost/7778 for the hostname and port, since that's where you're SSH-port-forwarding to.  You'll enter your NickServ password (if you authenticate to IRC).  And you'll use the Server Password you created with bipmkpw.



Now, if you have an Android device, you can connect to the same proxy, by following my colleague, Juan Negron's supplementary post here!

Do you think you could improve your connectivity with such a setup?  Do you have a better way of solving this problem?

:-Dustin

Monday, August 1, 2011

Canyon Edge Aurora/Solar now publishing to PVOutput.org

A big thanks to Eric Sandeen for introducing me to PVOutput.org -- a website dedicated to collecting and graphing PV/Solar data.  It has a really well documented service API.
I updated my cronjob that logs data from my Solar Inverter using Curt Blank's aurora (which I've packaged for Ubuntu), to additionally submit my data to PVOutput.org.  You can see my array's current day live data at: http://pvoutput.org/intraday.jsp?sid=3085, and from there, click through various views of daily, weekly, monthly, etc. reports.
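
For the curious, the submission half of that cronjob boils down to a single HTTP POST against PVOutput's Add Status service.  This is just a rough sketch -- the API key and system id are placeholders, and you should double check the parameter names against PVOutput's API documentation:

# Hypothetical sketch: push one live reading to PVOutput's Add Status service.
# v2 is instantaneous power in watts, as read from the inverter.
curl -s \
    -H "X-Pvoutput-Apikey: YOUR_API_KEY" \
    -H "X-Pvoutput-SystemId: YOUR_SYSTEM_ID" \
    -d "d=$(date +%Y%m%d)" \
    -d "t=$(date +%H:%M)" \
    -d "v2=4392" \
    http://pvoutput.org/service/r2/addstatus.jsp

# ...and the corresponding crontab entry, every 15 minutes:
# */15 * * * *  /usr/local/bin/solar-to-pvoutput    # hypothetical script name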

I backfilled the last 1.5 years with monthly output totals, but as of July 31, 2011, I should have much more granular outputs, corresponding to my cronjob, which runs every 15 minutes.

Having done that, I see that our array is currently ranked 3rd in overall power generation of all arrays registered at PVOutput.org!  That won't last too long, though, as there are a couple of much bigger arrays gathering sun a lot faster than ours, so I grabbed a screenshot for now :-)


A current snapshot from my inverter shows these values:

Current date/time: 01-Aug-2011 12:30:01

Daily Energy               =      11.692 KWh
Weekly Energy              =      47.445 KWh
Monthly Energy             =      11.691 KWh
Yearly Energy              =    6253.736 KWh
Total Energy               =   18596.278 KWh
Partial Energy             =   10700.143 KWh

Current date/time: 01-Aug-2011 12:30:05

Input 1 Voltage            =  279.369812 V
Input 1 Current            =   12.553665 A
Input 1 Power              = 3507.114990 W

Input 2 Voltage            =  272.658813 V
Input 2 Current            =    3.934551 A
Input 2 Power              = 1072.789917 W

Grid Voltage Reading       =  235.901169 V
Grid Current Reading       =   18.746544 A
Grid Power Reading         = 4391.979980 W
Frequency Reading          =   60.016804 Hz.

DC/AC Coversion Efficiency =        95.9 %
Inverter Temperature       =   67.880447 C
Booster Temperature        =   61.183586 C

The key number there is: Total Energy = 18596.278 KWh.  That means that we've generated 18.6 megawatt-hours of energy to date!

I also added the PVOutput widget to the bottom of the right column of this blog.

:-Dustin
