You can find the official blog post on Canonical Insights, and a short video introduction on YouTube (by yours truly).
Our Canonical colleague Stephane Graber posted a bit more technical design detail here on the lxc-devel mailing list, which was picked up by HackerNews. And LWN published a story yesterday covering another Canonical colleague of ours, Serge Hallyn, and his work on cgroups and CGManager, all of which feeds into LXD. As it happens, Stephane and Serge are upstream co-maintainers of Linux Containers. Tycho Andersen, another colleague of ours, has been working on CRIU, which was at the heart of his amazing demo this week: live migrating a container running the cult classic first-person shooter, Doom!, between two machines, back and forth.
We've also answered a few journalists' questions for excellent articles on ZDNet and SynergyMX. Predictably, El Reg is skeptical (which isn't necessarily a bad thing). But unfortunately, The VAR Guy doesn't quite understand the technology (and uses the article to conflate LXD with other random Canonical/Ubuntu complaints).
In any case, here's a bit more about LXD, in my own words...
Our primary design goal with LXD is to extend containers into process-based systems that behave like virtual machines.
We love KVM for its total machine abstraction as a full virtualization hypervisor. And we love what Docker does for application-level development, confinement, packaging, and distribution.
But as the makers of an operating system and Linux distribution, we find our customers asking for complete operating systems that boot and function natively within a Linux container's execution space.
Linux Containers are essential to our reference architecture of OpenStack, where we co-locate multiple services on each host. Nearly every host is a Nova compute node as well as a Ceph storage node, and also runs a couple of units of "OpenStack overhead", such as MySQL, RabbitMQ, MongoDB, etc. Rather than running all of those services directly on the same physical system, we put each of them in its own container, with its own IP address, namespace, cgroup, etc. This gives us tremendous flexibility in the orchestration of those services. We're able to move (migrate, even live migrate) those services from one host to another. With that, it becomes possible to "evacuate" a given host by moving each contained set of services elsewhere, perhaps to a larger or smaller system, and then shut down the unit (perhaps to replace a hard drive or memory, or repurpose it entirely).
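The evacuation idea can be sketched in a few lines of Python. This is a toy placement policy, purely illustrative (the `evacuate` helper, the host names, and the round-robin choice are all my own assumptions here); real orchestration would weigh capacity, storage locality, and much more:

```python
def evacuate(placements, host):
    """Move every container off `host`, spreading them round-robin
    across the remaining hosts. `placements` maps host -> list of
    container names. Returns a list of (container, src, dst) moves."""
    targets = [h for h in placements if h != host]
    moves = []
    for i, container in enumerate(placements[host]):
        dst = targets[i % len(targets)]  # naive round-robin choice
        moves.append((container, host, dst))
    return moves

# Example: drain "node1" so it can be taken down for maintenance.
placements = {
    "node1": ["mysql", "rabbitmq"],
    "node2": ["nova-compute"],
    "node3": ["ceph-osd"],
}
print(evacuate(placements, "node1"))
```

Each move in the resulting plan would then be carried out as a (live) migration of that one container, leaving the drained host free to be serviced.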
Containers also enable us to similarly confine services on virtual machines themselves! Let that sink in for a second... A contained workload is able, then, to move from one virtual machine to another, to a bare metal system. Even from one public cloud provider, to another public or private cloud!
LXD is a new "hypervisor" in that it provides (REST) APIs that can manage Linux Containers. This is a step function beyond where we've been to date: able to start and stop containers with local commands and, to a limited extent, libvirt, but not much more. "Booting" a system in a container, running an init system, bringing up network devices (without nasty hacks in the container's root filesystem), etc. was challenging, but we've worked our way through all of these, and Ubuntu boots unmodified in Linux Containers today.
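As a rough sketch of what driving containers over such a REST API could look like, here's a client-side helper that builds a container-creation request. The endpoint path, field names, and image alias are illustrative assumptions on my part, not the finalized LXD API:

```python
import json

# Hypothetical sketch of a container-creation request for an LXD-style
# REST API. The endpoint path and payload fields are assumptions for
# illustration only.
def make_create_request(name, image_alias):
    """Build the (method, path, body) triple for creating a container."""
    body = {
        "name": name,               # container name
        "source": {
            "type": "image",        # create from a published image
            "alias": image_alias,   # e.g. an Ubuntu release alias
        },
    }
    return ("POST", "/1.0/containers", json.dumps(body))

method, path, body = make_create_request("web1", "ubuntu/trusty")
print(method, path)
print(body)
```

The point is less the exact shape of the payload than the model: containers become remote resources you create, start, stop, and migrate by talking to a daemon, from anywhere, rather than by running local commands on each host.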
Moreover, LXD is a whole new semantic for turning any machine -- Intel, AMD, ARM, POWER, physical, or even a virtual machine (e.g. your cloud instances) -- into a system that can host and manage and start and stop and import and export and migrate multiple collections of services bundled within containers.