Canonical and Microsoft have teamed up to deliver a truly special experience -- running Ubuntu containers with Hyper-V Isolation on Windows 10 and Windows Server!
We have published a fantastic tutorial at https://ubu.one/UhyperV, with screenshots and easy-to-follow instructions. You should be up and running in minutes!
Follow that tutorial, and you'll be able to launch Ubuntu containers with Hyper-V isolation by running the following directly from a Windows PowerShell:
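The tutorial has the exact invocation, but the gist is a single docker run that requests Hyper-V isolation -- something along these lines (the image and shell here are just my assumptions; follow the tutorial for the authoritative command):
PS C:\> docker run -it --isolation=hyperv ubuntu bash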
Earlier this month, I spoke at ContainerDays, part of the excellent DevOpsDays series of conferences -- this one in lovely Portland, Oregon.
I gave a live demo of Kubernetes running directly on bare metal. I was running it on an 11-node Ubuntu Orange Box -- but I used the exact same tools Canonical's world class consulting team uses to deploy Kubernetes onto racks of physical machines.
You see, the ability to run Kubernetes on bare metal, behind your firewall, is essential to the yin-yang duality of Cloud Native computing. Sometimes, what you need is actually a Native Cloud.
Deploying Kubernetes into virtual machines in the cloud is rather easy and straightforward, with dozens of tools now available to handle it.
But there's only one tool today that can deploy the exact same Kubernetes to AWS, Azure, and GCE, as well as VMware, OpenStack, and bare metal machines. That tool is conjure-up, which acts as a command line front end to several essential Ubuntu tools: MAAS, LXD, and Juju.
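If you want to try it yourself, conjure-up ships as a snap; a minimal sketch of getting started on an Ubuntu machine (the interactive menu then walks you through picking a spell and a target cloud):
$ sudo snap install conjure-up --classic
$ conjure-up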
I don't know if the presentation was recorded, but I'm happy to share with you my slides for download, and embedded here below. There are a few screenshots within that help convey the demo.
Yesterday, I delivered a talk to a lively audience at ContainerWorld in Santa Clara, California.
If I measured "the most interesting slides" by counting "the number of people who took a picture of the slide", then by far "the most interesting slides" are slides 8-11, which pose and answer the question:
"Should I run my PaaS on top of my IaaS, or my IaaS on top of my PaaS"?
In the Ubuntu world, that answer is super easy -- however you like! At Canonical, we're happy to support:
In all cases, the underlying substrate is perfectly consistent:
you've got 1 to N physical or virtual machines
which are dynamically provisioned by MAAS or your cloud provider
running a stable, minimal, secure Ubuntu server image
carved up into fast, efficient, independently addressable LXD machine containers
With that as your base, it's easy to conjure-up a Kubernetes, an OpenStack, or both. And once you have a Kubernetes or OpenStack, we'll gladly conjure-up one inside the other.
As always, I'm happy to share my slides with you here. You're welcome to download the PDF, or flip through the embedded slides below.
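For a quicker start yourself, there's a lighter-weight conjure-up spell that stands up a minimal cluster (assuming conjure-up is installed, e.g. via its snap):
$ conjure-up kubernetes-core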
Or, if you're feeling more enterprisey and want the full experience, try:
$ conjure-up canonical-kubernetes
I hope to meet some of you at ContainerWorld in Santa Clara next week. Marco Ceppi and I are running a Kubernetes installfest workshop on Tuesday, February 21, 2017, from 3pm - 4:30pm. I can guarantee that every single person who attends will succeed in deploying their own Kubernetes cluster to a public cloud (AWS, Azure, or Google), or to their Ubuntu laptop or VM.
Finally, I invite you to check out this 30-minute podcast with David Daly, from DevOpsChat, where we talked quite a bit about Containers and Kubernetes and the experience we're working on in Ubuntu...
A couple of weeks ago, I delivered a talk at Container Camp UK 2016. It was a brilliant event, on a beautiful stage at Picturehouse Central in Piccadilly Circus in London.
I had the opportunity to speak at Container World 2016 in Santa Clara yesterday. Thanks in part to the Netflix guys who preceded me, the room was absolutely packed!
Picture yourself containers on a server
With systemd trees and spawned tty's
Somebody calls you, you answer quite quickly
A world with the density so high
- Sgt. Graber's LXD Smarts Club Band
Last week, we proudly released Ubuntu 15.10 (Wily) -- the final developer snapshot of the Ubuntu Server before we focus the majority of our attention on quality, testing, performance, documentation, and stability for the Ubuntu 16.04 LTS cycle in the next 6 months.
Notably, LXD has been promoted to the Ubuntu Main archive, now commercially supported by Canonical. That has enabled us to install LXD by default on all Ubuntu Servers, from 15.10 forward.
That means that every Ubuntu server -- Intel, AMD, ARM, POWER, and even Virtual Machines in the cloud -- is now a full machine container hypervisor, capable of hosting hundreds of machine containers, right out of the box!
LXD in the Sky with Diamonds! Well, LXD is in the Cloud with Diamond level support from Canonical, anyway. You can even test it in your web browser here.
The development tree of Xenial (Ubuntu 16.04 LTS) has already inherited this behavior, and we will celebrate this feature broadly through our use of LXD containers in Juju, MAAS, and the reference platform of Ubuntu OpenStack, as well as the new nova-lxd hypervisor in the OpenStack Autopilot within Landscape.
While the young and the restless are already running Wily Ubuntu 15.10, the bold and the beautiful are still bound to their Trusty Ubuntu 14.04 LTS servers.
At Canonical, we understand both motivations, and this is why we have backported LXD to the Trusty archives, for safe, simple consumption and testing of this new generation of machine containers there, on your stable LTS.
Installing LXD on Trusty simply requires enabling the trusty-backports pocket, and installing the lxd package from there, with these 3 little commands:
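Here's a sketch of what those commands look like (on most Trusty installs the backports pocket is already present in sources.list, and you can skip the first command):
$ sudo add-apt-repository "deb http://archive.ubuntu.com/ubuntu trusty-backports main universe"
$ sudo apt-get update
$ sudo apt-get install -t trusty-backports lxd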
In minutes, you can launch your first LXD containers. First, inherit your new group permissions, so you can execute the lxc command as your non-root user. Then, import some images, and launch a new container named lovely-rita. Shell into that container, and examine the process tree, install some packages, check the disk and memory and cpu available. Finally, exit when you're done, and optionally delete the container.
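A sketch of that session (using the ubuntu: image remote; very early LXD releases imported images with a separate lxd-images tool, so your import step may differ):
$ newgrp lxd                                       # pick up your new lxd group membership
$ lxc launch ubuntu:14.04 lovely-rita              # import an image and launch a container
$ lxc exec lovely-rita -- bash                     # shell into the container
root@lovely-rita:~# ps -ef                         # examine the process tree
root@lovely-rita:~# apt-get update && apt-get install -y htop
root@lovely-rita:~# df -h; free -m; nproc          # check disk, memory, and cpu
root@lovely-rita:~# exit
$ lxc stop lovely-rita && lxc delete lovely-rita   # optionally, clean up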
I was able to run over 600 containers simultaneously on my Thinkpad (x250, 16GB of RAM), and over 60 containers on an m1.small in Amazon (1.6GB of RAM).
Canonical is delighted to sponsor ContainerCon 2015, a Linux Foundation event in Seattle next week, August 17-19, 2015. It's quite exciting to see the A-list of sponsors, many of them newcomers to this particular technology, teaming with energy around containers.
From chroots to BSD Jails and Solaris Zones, the concepts behind containers were established decades ago, and in fact traverse the spectrum of server operating systems. At Canonical, we've been working on containers in Ubuntu for more than half a decade, providing a home and resources for stewardship and maintenance of the upstream Linux Containers (LXC) project since 2010.
Last year, we publicly shared our designs for LXD -- a new stratum on top of LXC that brings the advantages of a traditional hypervisor into the faster, more efficient world of containers.
LXD is a persistent daemon that provides a clean RESTful interface to manage (start, stop, clone, migrate, etc.) any of the containers on a given host.
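Because that interface is just REST over a local unix socket, you can poke at it directly with curl (assuming the classic deb-package socket path and a curl new enough for --unix-socket), or let the lxc client do the talking for you:
$ curl -s --unix-socket /var/lib/lxd/unix.socket lxd/1.0/containers
$ lxc list        # the command line client drives the same API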
Hosts running LXD are handily federated into clusters of container hypervisors, and can work as Nova Compute nodes in OpenStack, for example, delivering Infrastructure-as-a-Service cloud technology at lower costs and greater speeds.
Here, LXD and Docker are quite complementary technologies. LXD furnishes a dynamic platform for "system containers" -- containers that behave like physical or virtual machines, supplying all of the functionality of a full operating system (minus the kernel, which is shared with the host). Such "machine containers" are the core of IaaS clouds, where users focus on instances with compute, storage, and networking that behave like traditional datacenter hardware.
LXD runs perfectly well along with Docker, which supplies a framework for "application containers" -- containers that enclose individual processes that often relate to one another as pools of micro services and deliver complex web applications.
Moreover, the Zen of LXD is that the underlying container implementation is actually decoupled from the RESTful API that drives LXD functionality. At ContainerCon next week, we are most excited to discuss our work with Microsoft around the LXD RESTful API as a cross-platform container management layer.
Ben Armstrong, a Principal Program Manager Lead at Microsoft on the core virtualization and container technologies, has this to say:
“As Microsoft is working to bring Windows Server Containers to the world – we are excited to see all the innovation happening across the industry, and have been collaborating with many projects to encourage and foster this environment. Canonical’s LXD project is providing a new way for people to look at and interact with container technologies. Utilizing ‘system containers’ to bring the advantages of container technology to the core of your cloud infrastructure is a great concept. We are looking forward to seeing the results of our engagement with Canonical in this space.”
Finally, if you're in Seattle next week, we hope you'll join us for the technical sessions we're leading at ContainerCon 2015, including: "Putting the D in LXD: Migration of Linux Containers", "Container Security - Past, Present, and Future", and "Large Scale Container Management with LXD and OpenStack". Details are below.
Date: Wednesday, August 19, 10:25am-11:15am
Title: Putting the D in LXD: Migration of Linux Containers
Speaker: Tycho Andersen
Abstract: http://sched.co/3YTz
Location: Willow A
Schedule: http://sched.co/3YTz
If you read my last post, perhaps you followed the embedded instructions and ran hundreds of LXD system containers on your own Ubuntu machine.
Or perhaps you're already a Docker enthusiast and your super savvy microservice architecture orchestrates dozens of applications among a pile of process containers.
Either way, the massive multiplication of containers everywhere introduces an interesting networking problem:
"How do thousands of containers interact with thousands of other containers efficiently over a network? What if every one of those containers could just route to one another?"
Canonical is pleased to introduce today an innovative solution that addresses this problem in perhaps the most elegant and efficient manner to date! We call it "The Fan" -- an extension of the network tunnel driver in the Linux kernel. The fan was conceived by Mark Shuttleworth and John Meinel, and implemented by Jay Vosburgh and Andy Whitcroft.
A Basic Overview
Each container host has a "fan bridge" that enables all of its containers to deterministically map network traffic to any other container on the fan network. I say "deterministically", in that there are no distributed databases, no consensus protocols, and no more overhead than IP-IP tunneling. [A more detailed technical description can be found here.] Quite simply, a /16 network gets mapped onto an unused /8 network, and container traffic is routed by the host via an IP tunnel.
A Demo
Interested yet? Let's take it for a test drive in AWS...
First, launch two instances in EC2 (or your favorite cloud) in the same VPC. Ben Howard has created special test images for AWS and GCE, which include a modified Linux kernel, a modified iproute2 package, a new fanctl package, and Docker installed by default. You can find the right AMIs here.
Now, let's create a fan bridge on each of those two instances. We can create it on the command line using the new fanctl command, or we can put it in /etc/network/interfaces.d/eth0.cfg.
We'll do the latter, so that the configuration is persistent across boots.
$ cat /etc/network/interfaces.d/eth0.cfg
# The primary network interface
auto eth0
iface eth0 inet dhcp
up fanctl up 250.0.0.0/8 eth0/16 dhcp
down fanctl down 250.0.0.0/8 eth0/16
$ sudo ifup --force eth0
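With the fan bridge up on both instances, start a container on each host. The test images above ship with Docker preinstalled; I'm assuming here that they also point Docker at the fan bridge, so a plain docker run lands on the fan network (check the image notes if yours comes up elsewhere):
$ sudo docker run -it ubuntu /bin/bash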
Now, let's send some traffic back and forth! Again, we can use ping and nc.
root@261ae39d90db:/# ping -c 3 250.0.27.3
PING 250.0.27.3 (250.0.27.3) 56(84) bytes of data.
64 bytes from 250.0.27.3: icmp_seq=1 ttl=62 time=0.563 ms
64 bytes from 250.0.27.3: icmp_seq=2 ttl=62 time=0.278 ms
64 bytes from 250.0.27.3: icmp_seq=3 ttl=62 time=0.260 ms
--- 250.0.27.3 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1998ms
rtt min/avg/max/mdev = 0.260/0.367/0.563/0.138 ms
root@261ae39d90db:/# echo "here come the bits" | nc 250.0.27.3 9876
root@261ae39d90db:/#
─────────────────────────────────────────────────────────────────────
root@ddd943163843:/# ping -c 3 250.0.28.3
PING 250.0.28.3 (250.0.28.3) 56(84) bytes of data.
64 bytes from 250.0.28.3: icmp_seq=1 ttl=62 time=0.434 ms
64 bytes from 250.0.28.3: icmp_seq=2 ttl=62 time=0.258 ms
64 bytes from 250.0.28.3: icmp_seq=3 ttl=62 time=0.269 ms
--- 250.0.28.3 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1998ms
rtt min/avg/max/mdev = 0.258/0.320/0.434/0.081 ms
root@ddd943163843:/# nc -l 9876
here come the bits
Alright, so now let's really bake your noodle...
That 250.0.0.0/8 network can actually be any /8 network. It could be a 10.* network or any other /8 that you choose. I've chosen to use something in the reserved Class E range, 240.* - 255.*, so as not to conflict with any other routable network.
Finally, let's test the performance a bit using iperf and Amazon's 10Gbps instances!
So I fired up two c4.8xlarge instances, and configured the fan bridge there.
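The invocation itself is nothing exotic -- an iperf server in a container on one host, and a client in a container on the other, pointed at its fan address (these are just the addresses from the ping example above):
root@ddd943163843:/# iperf -s
root@261ae39d90db:/# iperf -c 250.0.27.3 -P 4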
Multiple containers, on separate hosts, directly addressable to one another with nothing more than a single network device on each host. Deterministic routes. Blazing fast speeds. No distributed databases. No consensus protocols. Not an SDN. This is just amazing!
RFC
Give it a try and let us know what you think! We'd love to get your feedback and use cases as we work the kernel and userspace changes upstream.
Over the next few weeks, you'll see the fan patches landing in Wily, and backported to Trusty and Vivid. We are also drafting an RFC, as we think that other operating systems and the container world and the Internet at large would benefit from Fan Networking.
Our Canonical colleague Stephane Graber posted a bit more technical design detail here on the lxc-devel mailing list, which was picked up by HackerNews. And LWN published a story yesterday covering another Canonical colleague of ours, Serge Hallyn, and his work on Cgroups and CGManager, all of which feeds into LXD. As it happens, Stephane and Serge are upstream co-maintainers of Linux Containers. Tycho Andersen, another colleague of ours, has been working on CRIU, which was the heart of his amazing demo this week: live migrating a container running the cult classic first-person shooter, Doom!, between two hosts, back and forth.
Moreover, we've answered a few journalists' questions for excellent articles on ZDnet and SynergyMX. Predictably, El Reg is skeptical (which isn't necessarily a bad thing). But unfortunately, The Var Guy doesn't quite understand the technology (and unfortunately uses this article to conflate LXD with other random Canonical/Ubuntu complaints).
In any case, here's a bit more about LXD, in my own words...
Our primary design goal with LXD is to extend containers into process-based systems that behave like virtual machines.
We love KVM for its total machine abstraction, as a full virtualization hypervisor. Moreover, we love what Docker does for application level development, confinement, packaging, and distribution.
But as the makers of an operating system and Linux distribution, we hear our customers asking, in fact, for complete operating systems that boot and function natively within a Linux Container's execution space.
Linux Containers are essential to our reference architecture of OpenStack, where we co-locate multiple services on each host. Nearly every host is a Nova compute node, as well as a Ceph storage node, and also runs a couple of units of "OpenStack overhead", such as MySQL, RabbitMQ, MongoDB, etc. Rather than running each of those services all on the same physical system, we actually put each of them in their own container, with their own IP address, namespace, cgroup, etc. This gives us tremendous flexibility in the orchestration of those services. We're able to move (migrate, even live migrate) those services from one host to another. With that, it becomes possible to "evacuate" a given host, by moving each contained set of services elsewhere, perhaps to a larger or smaller system, and then shut down the unit (perhaps to replace a hard drive or memory, or repurpose it entirely).
Containers also enable us to similarly confine services on virtual machines themselves! Let that sink in for a second... A contained workload is able, then, to move from one virtual machine to another, to a bare metal system. Even from one public cloud provider, to another public or private cloud!
The last two paragraphs capture a few best practices that we've learned over the last few years implementing OpenStack for some of the largest telcos and financial services companies in the world. What we're hearing from Internet service and cloud providers is not too dissimilar... These customers have their own customers who want cloud instances that perform at bare metal equivalence. They also want to maximize the utilization of their server hardware, sometimes by more densely packing workloads on given systems.
As such, LXD is then a convergence of several different customer requirements, and our experience deploying some massively complex, scalable workloads (a la OpenStack, Hadoop, and others) in enterprises.
The rapid evolution of a few key technologies under and around LXC has recently made this dream possible. Namely: User namespaces, Cgroups, SECCOMP, AppArmor, CRIU, as well as the library abstraction that our external tools use to manage these containers as systems.
LXD is a new "hypervisor" in that it provides (REST) APIs that can manage Linux Containers. This is a step function beyond where we've been to date: able to start and stop containers with local commands and, to a limited extent, libvirt, but not much more. "Booting" a system, in a container, running an init system, bringing up network devices (without nasty hacks in the container's root filesystem), etc. was challenging, but we've worked our way through all of these, and Ubuntu boots unmodified in Linux Containers today.
Moreover, LXD is a whole new semantic for turning any machine -- Intel, AMD, ARM, POWER, physical, or even a virtual machine (e.g. your cloud instances) -- into a system that can host and manage and start and stop and import and export and migrate multiple collections of services bundled within containers.
I've received a number of questions about the "hardware assisted" containerization slide in my deck. We're under confidentiality agreements with vendors as to the details and timelines for these features.
What (I think) I can say, is that there are hardware vendors who are rapidly extending some of the key features that have made cloud computing and virtualization practical, toward the exciting new world of Linux Containers. Perhaps you might read a bit about CPU VT extensions, No Execute Bits, and similar hardware security technologies. Use your imagination a bit, and you can probably converge on a few key concepts that will significantly extend the usefulness of Linux Containers.
As soon as such hardware technology is enabled in Linux, you have our commitment that Ubuntu will bring those features to end users faster than anyone else!
If you want to play with it today, you can certainly see the primitives within Ubuntu's LXC. Launch Ubuntu containers within LXC and you'll start to get the general, low level idea. If you want to view it from one layer above, give our new nova-compute-flex (flex was the code name, before it was released as LXD), a try. It's publicly available as a tech preview in Ubuntu OpenStack Juno (authored by Chuck Short, Scott Moser, and James Page). Here, you can launch OpenStack instances as LXC containers (rather than KVM virtual machines), as "general purpose" system instances.
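If you'd like to kick the tires on those LXC primitives directly, a minimal session looks something like this (a sketch using the stock lxc-* tools; the template and container names here are just examples):
$ sudo lxc-create -t ubuntu -n demo1      # create a container from the ubuntu template
$ sudo lxc-start -n demo1 -d              # start it in the background
$ sudo lxc-attach -n demo1                # get a shell inside it
$ sudo lxc-stop -n demo1                  # stop it when you're done
$ sudo lxc-destroy -n demo1               # and, optionally, delete it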
Finally, perhaps lost in all of the activity here, is a couple of things we're doing different for the LXD project. We at Canonical have taken our share of criticism over the years about choice of code hosting (our own Bazaar and Launchpad.net), our preferred free software licence (GPLv3/AGPLv3), and our contributor license agreement (Canonical CLA). [For the record: I love bzr/Launchpad, prefer GPL/AGPL, and am mostly ambivalent on the CLA; but I won't argue those points here.]
These have been very deliberate, conscious decisions, lobbied for and won by our engineers leading the project, in the interest of collaborating and garnering the participation of communities that have traditionally shunned Canonical-led projects, raising the above objections. I, for one, am eager to see contribution and collaboration that too often, we don't see.
There is a design pattern, occasionally found in nature, where some of the most elegant and impressive solutions seem so intuitive in retrospect.
For me, Docker is just that sort of game changing, hyper-innovative technology, that, at its core, somehow seems straightforward, beautiful, and obvious.
Linux containers, repositories of popular base images, snapshots using modern copy-on-write filesystem features. Brilliant, yet so simple. Docker.io for the win!
I clearly recall nine long months ago, intrigued by a fervor of HackerNews excitement pulsing around a nascent Docker technology. I followed a set of instructions on a very well-designed and tastefully manicured web page, in order to launch my first Docker container. Something like: start with Ubuntu 13.04, downgrade the kernel, reboot, add an out-of-band package repository, install an oddly named package, import some images, perhaps debug or ignore some errors, and then launch. In a few moments, I could clearly see the beginnings of a brave new world of lightning fast, cleanly managed, incrementally saved, highly dense, operating system containers. Ubuntu inside of Ubuntu, Inception style. So. Much. Potential.
Fast forward to today -- April 18, 2014 -- and the combination of Docker and Ubuntu 14.04 LTS has raised the bar, introducing a new echelon of usability and convenience, and coupled with the trust and track record of enterprise grade Long Term Support from Canonical and the Ubuntu community.
Big thanks, by the way, to Paul Tagliamonte, upstream Debian packager of Docker.io, as well as all of the early testers and users of Docker during the Ubuntu development cycle.
Docker is now officially in Ubuntu. That makes Ubuntu 14.04 LTS the first enterprise grade Linux distribution to ship with Docker natively packaged, continuously tested, and instantly installable. Millions of Ubuntu servers are now never more than three commands away from launching or managing Linux container sandboxes, thanks to Docker.
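A sketch of what those three commands looked like at release (the client binary shipped as docker.io in the Ubuntu archive, to avoid a name clash with an unrelated, pre-existing docker package):
$ sudo apt-get install docker.io
$ sudo docker.io pull ubuntu
$ sudo docker.io run -it ubuntu /bin/bash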
And after that last command, Ubuntu is now running within Docker, inside of a Linux container. Brilliant. Simple. Elegant. User friendly. Just the way we've been doing things in Ubuntu for nearly a decade. Thanks to our friends at Docker.io!
Dustin Kirkland (Twitter, LinkedIn) is an engineer at heart, with a penchant for reducing complexity and solving problems at the cross-sections of technology, business, and people.
With a degree in computer engineering from Texas A&M University (2001), his full-time career began as a software engineer at IBM in the Linux Technology Center, working on the Linux kernel and security certifications, including a one-year stint as a dedicated engineer-in-residence at Red Hat in Boston (2005). Dustin was awarded the title Master Inventor at IBM, in recognition of his prolific patent work as an inventor and reviewer with IBM's intellectual property attorneys.
Dustin then first joined Canonical (2008) as an engineer (eventually, engineering manager), helping create the Ubuntu Server distribution and establishing Ubuntu as the overwhelming favorite Linux distribution in Amazon, Google, and Microsoft's cloud platforms, as well as authoring and maintaining dozens of new open source packages.
Dustin joined Gazzang (2011), a venture-backed start-up built around an open source project that he co-authored (eCryptFS), as Chief Technology Officer, and helped dozens of enterprise customers encrypt their data at rest and securely manage their keys. Gazzang was acquired by Cloudera (2014).
Having effectively monetized eCryptFS as an open source project at Gazzang, Dustin returned to Canonical (2013) as the VP of Product for Ubuntu and spent the next several years launching a portfolio of products and services (Ubuntu Advantage, Extended Security Maintenance, Canonical Livepatch, MAAS, OpenStack, Kubernetes) that continues to deliver considerable annual recurring revenue. With Canonical based in London, an 800+ work-from-home employee roster and customers spread across 40+ countries, Dustin traveled the world over, connecting with clients and colleagues steeped in rich cultural experiences.
Google Cloud (2018) recruited Dustin from Canonical to product manage Google's entrance into on-premises data centers with its GKE On-Prem (now, Anthos) offering, with a specific focus on the underlying operating system, hypervisor, and container security. This work afforded Dustin a view deep into the back end data center of many financial services companies, where he still sees tremendous opportunities for improvements in security, efficiencies, cost-reduction, and disruptive new technology adoption.
Seeking a growth-mode opportunity in the fintech sector, Dustin joined Apex Clearing (now, Apex Fintech Solutions) as the Chief Product Officer (2019), where he led several organizations including product management, field engineering, data science, and business partnerships. He drastically revamped Apex's product portfolio and product management processes, retooling away from a legacy "clearing house and custodian", and into a "software-as-a-service fintech" offering instant brokerage account opening, real-time fractional stock trading, a secure closed-network crypto solution, and led the acquisition and integration of Silver's tax and cost basis solution.
Drawn back into a large cap, Dustin joined Goldman Sachs (2021) as a Managing Director and Head of Platform Product Management, within the Consumer banking division, which included Marcus, and the Apple and GM credit cards. He built a cross-functional product management community and established numerous documented product management best practices, processes, and anti-patterns.
Dustin lives in Austin, Texas, with his wife Kim and their two wonderful daughters.