From the Canyon Edge -- :-Dustin

Thursday, November 6, 2014

Where We're Going With LXD

Earlier this week, here in Paris, at the OpenStack Design Summit, Mark Shuttleworth and Canonical introduced our vision and proof of concept for LXD.

You can find the official blog post on Canonical Insights, and a short video introduction on YouTube (by yours truly).

Our Canonical colleague Stephane Graber posted a bit more technical design detail here on the lxc-devel mailing list, which was picked up by Hacker News.  And LWN published a story yesterday covering another Canonical colleague of ours, Serge Hallyn, and his work on cgroups and CGManager, all of which feeds into LXD.  As it happens, Stephane and Serge are upstream co-maintainers of Linux Containers.  Tycho Andersen, another colleague of ours, has been working on CRIU, which was the heart of his amazing demo this week: live migrating a container running the cult classic 1st person shooter, Doom!, between two hosts, back and forth.


Moreover, we've answered a few journalists' questions for excellent articles on ZDNet and SynergyMX.  Predictably, El Reg is skeptical (which isn't necessarily a bad thing).  Unfortunately, though, The VAR Guy doesn't quite understand the technology (and uses his article to conflate LXD with other, unrelated Canonical/Ubuntu complaints).

In any case, here's a bit more about LXD, in my own words...

Our primary design goal with LXD is to extend containers into process-based systems that behave like virtual machines.

We love KVM for its total machine abstraction, as a full virtualization hypervisor.  Moreover, we love what Docker does for application level development, confinement, packaging, and distribution.

But as the makers of an operating system and Linux distribution, we find that our customers are, in fact, asking us for complete operating systems that boot and function natively within a Linux Container's execution space.

Linux Containers are essential to our OpenStack reference architecture, where we co-locate multiple services on each host.  Nearly every host is a Nova compute node as well as a Ceph storage node, and also runs a couple of units of "OpenStack overhead", such as MySQL, RabbitMQ, MongoDB, etc.  Rather than running all of those services directly on the same physical system, we put each of them in its own container, with its own IP address, namespace, cgroup, etc.  This gives us tremendous flexibility in the orchestration of those services.  We're able to move (migrate, even live migrate) those services from one host to another.  With that, it becomes possible to "evacuate" a given host by moving each contained set of services elsewhere, perhaps to a larger or smaller system, and then shut the machine down (perhaps to replace a hard drive or memory, or to repurpose it entirely).
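
To make that pattern concrete, here's a minimal sketch using the python3-lxc bindings that ship in Ubuntu.  The service names are illustrative, and a real deployment would of course be driven by Juju or another orchestration layer rather than a loop like this.

```python
# A minimal sketch of the "one container per service" pattern described
# above, using the python3-lxc bindings shipped in Ubuntu.  The service
# names are illustrative; a real deployment would be orchestrated by Juju.
import lxc

for name in ("mysql", "rabbitmq", "mongodb"):
    container = lxc.Container(name)
    if not container.defined:
        # The "ubuntu" template builds a complete Ubuntu root filesystem.
        container.create("ubuntu")
    container.start()
    container.wait("RUNNING", 30)
    # Each container has its own network namespace, and thus its own IP.
    print(name, container.state, container.get_ips(timeout=30))
```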

Containers also enable us to similarly confine services on virtual machines themselves!  Let that sink in for a second...  A contained workload can then move from one virtual machine to another, or to a bare metal system.  Even from one public cloud provider to another public or private cloud!

The last two paragraphs capture a few best practices that we've learned over the last few years implementing OpenStack for some of the largest telcos and financial services companies in the world.  What we're hearing from Internet service and cloud providers is not too dissimilar...  These customers have their own customers who want cloud instances that perform at bare-metal equivalence.  They also want to maximize the utilization of their server hardware, sometimes by more densely packing workloads onto given systems.

As such, LXD is a convergence of several different customer requirements and our experience deploying some massively complex, scalable workloads (a la OpenStack, Hadoop, and others) in enterprises.

The rapid evolution of a few key technologies under and around LXC has recently made this dream possible.  Namely: user namespaces, cgroups, SECCOMP, AppArmor, and CRIU, as well as the library abstraction that our external tools use to manage these containers as systems.
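
As a small taste of one of those pieces, here's a hedged sketch of what user namespaces look like from the LXC configuration side, again via the python3-lxc bindings.  The container name and the 100000-based uid/gid range are illustrative conventions, not requirements.

```python
# A sketch of mapping a container into user namespaces with python3-lxc.
# The container name and the 100000-based uid/gid range are illustrative.
import lxc

c = lxc.Container("demo")  # assumes a container named "demo" exists
# Map uids/gids 0-65535 inside the container onto 100000-165535 on the
# host, so that "root" inside the container is unprivileged outside it.
c.set_config_item("lxc.id_map", "u 0 100000 65536")
c.set_config_item("lxc.id_map", "g 0 100000 65536")
c.save_config()
```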

LXD is a new "hypervisor" in that it provides (REST) APIs that can manage Linux Containers.  This is a step function beyond where we've been to date: able to start and stop containers with local commands and, to a limited extent, libvirt, but not much more.  "Booting" a system in a container, running an init system, bringing up network devices (without nasty hacks in the container's root filesystem), etc., was challenging, but we've worked our way through all of these, and Ubuntu boots unmodified in Linux Containers today.
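
The API itself is still being designed, so the following is a purely hypothetical sketch of what driving such a REST hypervisor over a local unix socket might look like.  The socket path, endpoints, and payloads here are my own illustrative assumptions, not the finalized LXD API.

```python
# A hypothetical sketch of driving a REST container "hypervisor" like LXD
# over a local unix socket.  The socket path, endpoints, and payloads are
# illustrative assumptions, not a finalized API.
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """An HTTP connection over a unix domain socket."""
    def __init__(self, socket_path):
        super().__init__("localhost")
        self.socket_path = socket_path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self.socket_path)
        self.sock = sock

conn = UnixHTTPConnection("/var/lib/lxd/unix.socket")  # assumed path

# List the containers this host knows about.
conn.request("GET", "/1.0/containers")
print(json.loads(conn.getresponse().read()))

# Ask the hypervisor to start one of them.
conn.request("PUT", "/1.0/containers/web1/state",
             body=json.dumps({"action": "start"}),
             headers={"Content-Type": "application/json"})
print(json.loads(conn.getresponse().read()))
```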

Moreover, LXD is a whole new semantic for turning any machine -- Intel, AMD, ARM, POWER, physical, or even a virtual machine (e.g. your cloud instances) -- into a system that can host and manage and start and stop and import and export and migrate multiple collections of services bundled within containers.

I've received a number of questions about the "hardware assisted" containerization slide in my deck.  We're under confidentiality agreements with vendors as to the details and timelines for these features.

What (I think) I can say is that there are hardware vendors who are rapidly extending some of the key features that have made cloud computing and virtualization practical toward the exciting new world of Linux Containers.  Perhaps you might read a bit about CPU VT extensions, No-Execute bits, and similar hardware security technologies.  Use your imagination a bit, and you can probably converge on a few key concepts that will significantly extend the usefulness of Linux Containers.

As soon as such hardware technology is enabled in Linux, you have our commitment that Ubuntu will bring those features to end users faster than anyone else!

If you want to play with it today, you can certainly see the primitives within Ubuntu's LXC.  Launch Ubuntu containers within LXC and you'll start to get the general, low-level idea (there's a small sketch below).  If you want to view it from one layer above, give our new nova-compute-flex (flex was the code name before it was released as LXD) a try.  It's publicly available as a tech preview in Ubuntu OpenStack Juno (authored by Chuck Short, Scott Moser, and James Page).  Here, you can launch OpenStack instances as LXC containers (rather than KVM virtual machines), as "general purpose" system instances.
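
Here's roughly what those low-level primitives look like from the python3-lxc bindings; the container name is arbitrary, and the same steps work just as well with the lxc-* command line tools.

```python
# Roughly the low-level LXC primitives mentioned above, via the python3-lxc
# bindings.  The container name is arbitrary.
import lxc

c = lxc.Container("trusty1")
if not c.defined:
    # The "download" template fetches a pre-built Ubuntu 14.04 image.
    c.create("download", 0,
             {"dist": "ubuntu", "release": "trusty", "arch": "amd64"})

c.start()
c.wait("RUNNING", 30)

# Run a command inside the container's namespaces -- a full OS lives there.
c.attach_wait(lxc.attach_run_command, ["ps", "-ef"])

c.stop()
c.wait("STOPPED", 30)
```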

Finally, perhaps lost in all of the activity here, are a couple of things we're doing differently for the LXD project.  We at Canonical have taken our share of criticism over the years about our choice of code hosting (our own Bazaar and Launchpad.net), our preferred free software license (GPLv3/AGPLv3), and our contributor license agreement (Canonical CLA).   [For the record: I love bzr/Launchpad, prefer GPL/AGPL, and am mostly ambivalent on the CLA; but I won't argue those points here.]  With LXD:
  1. This is a public, community project under LinuxContainers.org
  2. The code and design documents are hosted on GitHub
  3. Under an Apache License
  4. Without requiring signatures of the Canonical CLA
These have been very deliberate, conscious decisions, lobbied for and won by our engineers leading the project, in the interest of collaborating with, and garnering the participation of, communities that have traditionally shunned Canonical-led projects over the objections above.  I, for one, am eager to see the contribution and collaboration that, too often, we don't.

Cheers!
:-Dustin
