From the Canyon Edge -- :-Dustin
Showing posts with label Containers.

Wednesday, September 13, 2017

Running Ubuntu Containers with Hyper-V Isolation on Windows 10 and Windows Server


Canonical and Microsoft have teamed up to deliver a truly special experience -- running Ubuntu containers with Hyper-V Isolation on Windows 10 and Windows Server!

We have published a fantastic tutorial at https://ubu.one/UhyperV, with screenshots and easy-to-follow instructions.  You should be up and running in minutes!

Follow that tutorial, and you'll be able to launch Ubuntu containers with Hyper-V isolation by running the following directly from Windows PowerShell:
  • docker run -it ubuntu bash
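If you'd like a quick sanity check that you're really getting an Ubuntu userspace out of that command, this one-liner works from the same PowerShell prompt (my own addition, not part of the official tutorial):
  • docker run --rm ubuntu cat /etc/os-release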
Cheers!
Dustin

Monday, August 21, 2017

Bare Metal Kubernetes: More Containers, Less Overhead

Earlier this month, I spoke at ContainerDays, part of the excellent DevOpsDays series of conferences -- this one in lovely Portland, Oregon.

I gave a live demo of Kubernetes running directly on bare metal.  I was running it on an 11-node Ubuntu Orange Box -- but I used the exact same tools Canonical's world class consulting team uses to deploy Kubernetes onto racks of physical machines.
You see, the ability to run Kubernetes on bare metal, behind your firewall, is essential to the yin-yang duality of Cloud Native computing.  Sometimes, what you need is actually a Native Cloud.
Deploying Kubernetes into virtual machines in the cloud is straightforward, and there are dozens of tools now that can handle that.

But there's only one tool today that can deploy the exact same Kubernetes to AWS, Azure, and GCE, as well as to VMware, OpenStack, and bare metal machines.  That tool is conjure-up, which acts as a command line front end to several essential Ubuntu tools: MAAS, LXD, and Juju.
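As a rough sketch of what that looks like in practice, from any Ubuntu 16.04 machine (the cloud names are the ones I recall conjure-up accepting at the time -- treat them as assumptions and check conjure-up's own help):

sudo snap install conjure-up --classic
conjure-up canonical-kubernetes aws    # or azure, google, localhost (LXD), or a MAAS-backed cloud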

I don't know if the presentation was recorded, but I'm happy to share with you my slides for download, and embedded here below.  There are a few screenshots within that help convey the demo.




Cheers,
Dustin

Thursday, February 23, 2017

The Questions that You're Afraid to Ask about Containers



Yesterday, I delivered a talk to a lively audience at ContainerWorld in Santa Clara, California.

If I measured "the most interesting slides" by counting "the number of people who took a picture of the slide", then by far "the most interesting slides" are slides 8-11, which pose and answer the question:
"Should I run my PaaS on top of my IaaS, or my IaaS on top of my PaaS?"
In the Ubuntu world, that answer is super easy -- however you like!  At Canonical, we're happy to support:
  1. Kubernetes running on top of Ubuntu OpenStack
  2. OpenStack running on top of Canonical Kubernetes
  3. Kubernetes running alongside OpenStack
In all cases, the underlying substrate is perfectly consistent:
  • you've got 1 to N physical or virtual machines
  • which are dynamically provisioned by MAAS or your cloud provider
  • running a stable, minimal, secure Ubuntu server image
  • carved up into fast, efficient, independently addressable LXD machine containers
With that as your base, we can easily conjure-up a Kubernetes, an OpenStack, or both.  And once you have a Kubernetes or an OpenStack, we'll gladly conjure-up one inside the other.
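For instance, a minimal sketch of conjuring a Kubernetes onto that LXD substrate on a single Ubuntu 16.04 machine ('localhost' tells conjure-up to use local LXD containers -- verify the exact spell names against conjure-up itself):

sudo snap install conjure-up --classic
conjure-up kubernetes-core localhost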


As always, I'm happy to share my slides with you here.  You're welcome to download the PDF, or flip through the embedded slides below.



Cheers,
Dustin

Tuesday, February 14, 2017

Kubernetes InstallFest at ContainerWorld -- Feb 21, 2017!


We at Canonical have been super busy fine tuning your experience with Kubernetes, Docker, and LXD on Ubuntu!

Amazingly, you're merely two commands away from standing up a fully functional, minimal Kubernetes cluster on any Ubuntu 16.04 LTS system...

$ sudo snap install --classic conjure-up
$ conjure-up kubernetes-core

Or, if you're feeling more enterprisey and want the full experience, try:

$ conjure-up canonical-kubernetes
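Once the spell completes, it should leave you with a working kubectl configuration (conjure-up handles that step, though treat the exact mechanics as an assumption for your environment).  A couple of quick checks:

$ kubectl cluster-info
$ kubectl get nodes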

I hope to meet some of you at ContainerWorld in Santa Clara next week.  Marco Ceppi and I are running a Kubernetes installfest workshop on Tuesday, February 21, 2017, from 3pm - 4:30pm.  I can guarantee that every single person who attends will succeed in deploying their own Kubernetes cluster to a public cloud (AWS, Azure, or Google), or to their Ubuntu laptop or VM.

Also, I'm giving a talk entitled, "Using the Right Container Technology for the Job", on Wednesday, February 22, 2017 from 1:30pm - 2:10pm.

Finally, I invite you to check out this 30-minute podcast with David Daly, from DevOpsChat, where we talked quite a bit about Containers and Kubernetes and the experience we're working on in Ubuntu...


Cheers,
:-Dustin

Monday, September 26, 2016

Container Camp London: Streamlining HPC Workloads with Containers


A couple of weeks ago, I delivered a talk at Container Camp UK 2016.  It was a brilliant event, on a beautiful stage at Picturehouse Central in Piccadilly Circus in London.

You're welcome to view the slides or download them as a PDF, or watch my talk below.

And for the techies who want to skip the slide fluff and get their hands dirty, set up your OpenStack and LXD and start streamlining your HPC workloads using this guide.




Enjoy,
:-Dustin

Thursday, February 18, 2016

Container World 2016: Application and Machine Containers (slides)



I had the opportunity to speak at Container World 2016 in Santa Clara yesterday.  Thanks in part to the Netflix guys who preceded me, the room was absolutely packed!

You can download a PDF of my slides here, or flip through them embedded below.

I'd really encourage you to try the LXD demo instructions toward the end!


:-Dustin

Thursday, November 5, 2015

LXD in the Sky with Diamonds


Picture yourself containers on a server
With systemd trees and spawned tty's
Somebody calls you, you answer quite quickly
A world with the density so high

    - Sgt. Graber's LXD Smarts Club Band

Last week, we proudly released Ubuntu 15.10 (Wily) -- the final developer snapshot of the Ubuntu Server before we focus the majority of our attention on quality, testing, performance, documentation, and stability for the Ubuntu 16.04 LTS cycle in the next 6 months.

Notably, LXD has been promoted to the Ubuntu Main archive, now commercially supported by Canonical.  That has enabled us to install LXD by default on all Ubuntu Servers, from 15.10 forward.
Join us for an interactive, live webinar on November 12th at 5pm BST/12pm EST led by James Page, where he will demonstrate LXD as the fastest hypervisor in OpenStack!
That means that every Ubuntu server -- Intel, AMD, ARM, POWER, and even Virtual Machines in the cloud -- is now a full machine container hypervisor, capable of hosting hundreds of machine containers, right out of the box!

LXD in the Sky with Diamonds!  Well, LXD is in the Cloud with Diamond level support from Canonical, anyway.  You can even test it in your web browser here.

The development tree of Xenial (Ubuntu 16.04 LTS) has already inherited this behavior, and we will celebrate this feature broadly through our use of LXD containers in Juju, MAAS, and the reference platform of Ubuntu OpenStack, as well as the new nova-lxd hypervisor in the OpenStack Autopilot within Landscape.

While the young and the restless are already running Wily Ubuntu 15.10, the bold and the beautiful are still bound to their Trusty Ubuntu 14.04 LTS servers.

At Canonical, we understand both motivations, and this is why we have backported LXD to the Trusty archives, for safe, simple consumption and testing of this new generation of machine containers there, on your stable LTS.

Installing LXD on Trusty simply requires enabling the trusty-backports pocket, and installing the lxd package from there, with these 3 little commands:

sudo sed -i -e "/trusty-backports/ s/^# //" /etc/apt/sources.list
sudo apt-get update; sudo apt-get dist-upgrade -y
sudo apt-get -t trusty-backports install lxd

In minutes, you can launch your first LXD containers.  First, inherit your new group permissions, so you can execute the lxc command as your non-root user.  Then, import some images, and launch a new container named lovely-rita.  Shell into that container, and examine the process tree, install some packages, check the disk and memory and cpu available.  Finally, exit when you're done, and optionally delete the container.

newgrp lxd
lxd-images import ubuntu --alias ubuntu
lxc launch ubuntu lovely-rita
lxc list
lxc exec lovely-rita bash
  ps -ef
  apt-get update
  df -h
  free
  cat /proc/cpuinfo
  exit
lxc delete lovely-rita

I was able to run over 600 containers simultaneously on my Thinkpad (x250, 16GB of RAM), and over 60 containers on an m1.small in Amazon (1.6GB of RAM).
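If you'd like to probe container density on your own hardware, here's a minimal sketch (the container names are arbitrary; scale the count to taste and keep an eye on your free memory):

for i in $(seq 1 100); do
  lxc launch ubuntu lovely-rita-$i
done
lxc list | grep -c RUNNING
free -m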

We're very interested in your feedback, as LXD is one of the most important features of the Ubuntu 16.04 LTS.  You can learn more about LXD, view the source code, file bugs, discuss on the mailing list, and peruse the Linux Containers upstream projects.

With a little help from my friends!
:-Dustin

Monday, September 28, 2015

Container Summit Presentation and Live LXD Demo


I delivered a presentation and an exciting live demo in San Francisco this week at the Container Summit (organized by Joyent).

It was professionally recorded by the A/V crew at the conference.  The live demo begins at the 25:21 mark.


You can also find the slide deck embedded below and download the PDFs from here.


Cheers,
:-Dustin

Wednesday, August 12, 2015

Ubuntu and LXD at ContainerCon 2015


Canonical is delighted to sponsor ContainerCon 2015, a Linux Foundation event in Seattle next week, August 17-19, 2015.  It's quite exciting to see the A-list of sponsors, many of them newcomers to this particular technology, teeming with energy around containers.

From chroots to BSD Jails and Solaris Zones, the concepts behind containers were established decades ago, and in fact traverse the spectrum of server operating systems. At Canonical, we've been working on containers in Ubuntu for more than half a decade, providing a home and resources for stewardship and maintenance of the upstream Linux Containers (LXC) project since 2010.

Last year, we publicly shared our designs for LXD -- a new stratum on top of LXC that brings the advantages of a traditional hypervisor to the faster, more efficient world of containers.

Those designs are now reality, with the open source Golang code readily available on Github, and Ubuntu packages available in a PPA for all supported releases of Ubuntu, and already in the Ubuntu 15.10 beta development tree. With ease, you can launch your first LXD containers in seconds, following this simple guide.

LXD is a persistent daemon that provides a clean RESTful interface to manage (start, stop, clone, migrate, etc.) any of the containers on a given host.
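To give you a feel for that interface, here's a rough sketch of poking the local daemon over its unix socket with curl (the socket path and API version reflect the LXD packaging of this era -- treat them as assumptions; the lxc command line client is simply a friendly wrapper around these same endpoints):

curl --unix-socket /var/lib/lxd/unix.socket http://lxd/1.0
curl --unix-socket /var/lib/lxd/unix.socket http://lxd/1.0/containers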

Hosts running LXD are handily federated into clusters of container hypervisors, and can work as Nova Compute nodes in OpenStack, for example, delivering Infrastructure-as-a-Service cloud technology at lower costs and greater speeds.

Here, LXD and Docker are quite complementary technologies. LXD furnishes a dynamic platform for "system containers" -- containers that behave like physical or virtual machines, supplying all of the functionality of a full operating system (minus the kernel, which is shared with the host). Such "machine containers" are the core of IaaS clouds, where users focus on instances with compute, storage, and networking that behave like traditional datacenter hardware.

LXD runs perfectly well along with Docker, which supplies a framework for "application containers" -- containers that enclose individual processes that often relate to one another as pools of micro services and deliver complex web applications.

Moreover, the Zen of LXD is the fact that the underlying container implementation is actually decoupled from the RESTful API that drives LXD functionality. We are most excited to discuss next week at ContainerCon our work with Microsoft around the LXD RESTful API, as a cross-platform container management layer.

Ben Armstrong, a Principal Program Manager Lead at Microsoft on the core virtualization and container technologies, has this to say:
“As Microsoft is working to bring Windows Server Containers to the world – we are excited to see all the innovation happening across the industry, and have been collaborating with many projects to encourage and foster this environment. Canonical’s LXD project is providing a new way for people to look at and interact with container technologies. Utilizing ‘system containers’ to bring the advantages of container technology to the core of your cloud infrastructure is a great concept. We are looking forward to seeing the results of our engagement with Canonical in this space.”
Finally, if you're in Seattle next week, we hope you'll join us for the technical sessions we're leading at ContainerCon 2015, including: "Putting the D in LXD: Migration of Linux Containers", "Container Security - Past, Present, and Future", and "Large Scale Container Management with LXD and OpenStack". Details are below.
Date: Monday, August 17 • 2:20pm - 3:10pm
Title: Large Scale Container Management with LXD and OpenStack
Speaker: Stéphane Graber
Abstract: http://sched.co/3YK6
Location: Grand Ballroom B
Schedule: http://sched.co/3YK6
Date: Wednesday, August 19 • 10:25am - 11:15am
Title: Putting the D in LXD: Migration of Linux Containers
Speaker: Tycho Andersen
Abstract: http://sched.co/3YTz
Location: Willow A
Schedule: http://sched.co/3YTz
Date: Wednesday, August 19 • 3:00pm - 3:50pm
Title: Container Security - Past, Present and Future
Speaker: Serge Hallyn
Abstract: http://sched.co/3YTl
Location: Ravenna
Schedule: http://sched.co/3YTl
Cheers,
Dustin

Monday, June 22, 2015

Container-to-Container Networking: The Bits have Hit the Fan!

A thing of beauty
If you read my last post, perhaps you followed the embedded instructions and ran hundreds of LXD system containers on your own Ubuntu machine.

Or perhaps you're already a Docker enthusiast and your super savvy microservice architecture orchestrates dozens of applications among a pile of process containers.

Either way, the massive multiplication of containers everywhere introduces an interesting networking problem:
"How do thousands of containers interact with thousands of other containers efficiently over a network?  What if every one of those containers could just route to one another?"

Canonical is pleased to introduce today an innovative solution that addresses this problem in perhaps the most elegant and efficient manner to date!  We call it "The Fan" -- an extension of the network tunnel driver in the Linux kernel.  The fan was conceived by Mark Shuttleworth and John Meinel, and implemented by Jay Vosburgh and Andy Whitcroft.

A Basic Overview

Each container host has a "fan bridge" that enables all of its containers to deterministically map network traffic to any other container on the fan network.  I say "deterministically", in that there are no distributed databases, no consensus protocols, and no more overhead than IP-IP tunneling.  [A more detailed technical description can be found here.]  Quite simply, a /16 network gets mapped onto an unused /8 network, and container traffic is routed by the host via an IP tunnel.
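To make that mapping concrete, here's how the addresses line up in the demo below (pulled from the fanctl and ifconfig output later in this post):

Underlay host address      Fan overlay subnet       Example container address
172.30.0.28/16        -->  250.0.28.0/24       -->  250.0.28.3
172.30.0.27/16        -->  250.0.27.0/24       -->  250.0.27.3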



A Demo

Interested yet?  Let's take it for a test drive in AWS...


First, launch two instances in EC2 (or your favorite cloud) in the same VPC.  Ben Howard has created special test images for AWS and GCE, which include a modified Linux kernel, a modified iproute2 package, a new fanctl package, and Docker installed by default.  You can find the right AMIs here.
Build and Publish report for trusty 20150621.1228.
-----------------------------------
BUILD INFO:
VERSION=14.04-LTS
STREAM=testing
BUILD_DATE=
BUG_NUMBER=1466602
STREAM="testing"
CLOUD=CustomAWS
SERIAL=20150621.1228
-----------------------------------
PUBLICATION REPORT:
NAME=ubuntu-14.04-LTS-testing-20150621.1228
SUITE=trusty
ARCH=amd64
BUILD=core
REPLICATE=1
IMAGE_FILE=/var/lib/jenkins/jobs/CloudImages-Small-CustomAWS/workspace/ARCH/amd64/trusty-server-cloudimg-CUSTOM-AWS-amd64-disk1.img
VERSION=14.04-LTS-testing-20150621.1228
INSTANCE_BUCKET=ubuntu-images-sandbox
INSTANCE_eu-central-1=ami-1aac9407
INSTANCE_sa-east-1=ami-59a22044
INSTANCE_ap-northeast-1=ami-3ae2453a
INSTANCE_eu-west-1=ami-d76623a0
INSTANCE_us-west-1=ami-238d7a67
INSTANCE_us-west-2=ami-53898c63
INSTANCE_ap-southeast-2=ami-ab95ef91
INSTANCE_ap-southeast-1=ami-98e9edca
INSTANCE_us-east-1=ami-b1a658da
EBS_BUCKET=ubuntu-images-sandbox
VOL_ID=vol-678e2c29
SNAP_ID=snap-efaa288b
EBS_eu-central-1=ami-b4ac94a9
EBS_sa-east-1=ami-e9a220f4
EBS_ap-northeast-1=ami-1aee491a
EBS_eu-west-1=ami-07602570
EBS_us-west-1=ami-318c7b75
EBS_us-west-2=ami-858b8eb5
EBS_ap-southeast-2=ami-558bf16f
EBS_ap-southeast-1=ami-faeaeea8
EBS_us-east-1=ami-afa25cc4
----
6cbd6751-6dae-4da7-acf3-6ace80c01acc




Next, ensure that those two instances can talk to one another.  Here, I tested that in both directions, using both ping and nc.

ubuntu@ip-172-30-0-28:~$ ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 0a:0a:8f:f8:cc:21  
          inet addr:172.30.0.28  Bcast:172.30.0.255  Mask:255.255.255.0
          inet6 addr: fe80::80a:8fff:fef8:cc21/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:9001  Metric:1
          RX packets:2904565 errors:0 dropped:0 overruns:0 frame:0
          TX packets:9919258 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:13999605561 (13.9 GB)  TX bytes:14530234506 (14.5 GB)

ubuntu@ip-172-30-0-28:~$ ping -c 3 172.30.0.27
PING 172.30.0.27 (172.30.0.27) 56(84) bytes of data.
64 bytes from 172.30.0.27: icmp_seq=1 ttl=64 time=0.289 ms
64 bytes from 172.30.0.27: icmp_seq=2 ttl=64 time=0.201 ms
64 bytes from 172.30.0.27: icmp_seq=3 ttl=64 time=0.192 ms

--- 172.30.0.27 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1998ms
rtt min/avg/max/mdev = 0.192/0.227/0.289/0.045 ms
ubuntu@ip-172-30-0-28:~$ nc -l 1234
hi mom
─────────────────────────────────────────────────────────────────────
ubuntu@ip-172-30-0-27:~$ ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 0a:26:25:9a:77:df  
          inet addr:172.30.0.27  Bcast:172.30.0.255  Mask:255.255.255.0
          inet6 addr: fe80::826:25ff:fe9a:77df/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:9001  Metric:1
          RX packets:11157399 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1671239 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:16519319463 (16.5 GB)  TX bytes:12019363671 (12.0 GB)

ubuntu@ip-172-30-0-27:~$ ping -c 3 172.30.0.28
PING 172.30.0.28 (172.30.0.28) 56(84) bytes of data.
64 bytes from 172.30.0.28: icmp_seq=1 ttl=64 time=0.245 ms
64 bytes from 172.30.0.28: icmp_seq=2 ttl=64 time=0.185 ms
64 bytes from 172.30.0.28: icmp_seq=3 ttl=64 time=0.186 ms

--- 172.30.0.28 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1998ms
rtt min/avg/max/mdev = 0.185/0.205/0.245/0.030 ms
ubuntu@ip-172-30-0-27:~$ echo "hi mom" | nc 172.30.0.28 1234

If that doesn't work, you might have to adjust your security group until it does.


Now, import the Ubuntu image into Docker on both instances.

$ sudo docker pull ubuntu:latest
Pulling repository ubuntu
...
e9938c931006: Download complete
9802b3b654ec: Download complete
14975cc0f2bc: Download complete
8d07608668f6: Download complete

Now, let's create a fan bridge on each of those two instances.  We can create it on the command line using the new fanctl command, or we can put it in /etc/network/interfaces.d/eth0.cfg.

We'll do the latter, so that the configuration is persistent across boots.

$ cat /etc/network/interfaces.d/eth0.cfg
# The primary network interface
auto eth0
iface eth0 inet dhcp
    up fanctl up 250.0.0.0/8 eth0/16 dhcp
    down fanctl down 250.0.0.0/8 eth0/16

$ sudo ifup --force eth0

Now, let's look at our ifconfig...

$ ifconfig
docker0   Link encap:Ethernet  HWaddr 56:84:7a:fe:97:99  
          inet addr:172.17.42.1  Bcast:0.0.0.0  Mask:255.255.0.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

eth0      Link encap:Ethernet  HWaddr 0a:0a:8f:f8:cc:21  
          inet addr:172.30.0.28  Bcast:172.30.0.255  Mask:255.255.255.0
          inet6 addr: fe80::80a:8fff:fef8:cc21/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:9001  Metric:1
          RX packets:2905229 errors:0 dropped:0 overruns:0 frame:0
          TX packets:9919652 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:13999655286 (13.9 GB)  TX bytes:14530269365 (14.5 GB)

fan-250-0-28 Link encap:Ethernet  HWaddr 00:00:00:00:00:00  
          inet addr:250.0.28.1  Bcast:0.0.0.0  Mask:255.255.255.0
          inet6 addr: fe80::8032:4dff:fe3b:a108/64 Scope:Link
          UP BROADCAST MULTICAST  MTU:1480  Metric:1
          RX packets:304246 errors:0 dropped:0 overruns:0 frame:0
          TX packets:245532 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:13697461502 (13.6 GB)  TX bytes:37375505 (37.3 MB)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:1622 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1622 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:198717 (198.7 KB)  TX bytes:198717 (198.7 KB)

lxcbr0    Link encap:Ethernet  HWaddr 3a:6b:3c:9b:80:45  
          inet addr:10.0.3.1  Bcast:0.0.0.0  Mask:255.255.255.0
          inet6 addr: fe80::386b:3cff:fe9b:8045/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:648 (648.0 B)

tunl0     Link encap:IPIP Tunnel  HWaddr   
          UP RUNNING NOARP  MTU:1480  Metric:1
          RX packets:242799 errors:0 dropped:0 overruns:0 frame:0
          TX packets:302666 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:12793620 (12.7 MB)  TX bytes:13697374375 (13.6 GB)

Pay special attention to the new fan-250-0-28 device!  I've only shown this on one of my instances, but you should check both.

Now, let's tell Docker to use that device as its default bridge.

$ fandev=$(ifconfig | grep ^fan- | awk '{print $1}')
$ echo $fandev
fan-250-0-28
$ echo "DOCKER_OPTS='-d -b $fandev --mtu=1480 --iptables=false'" | \
      sudo tee -a /etc/default/docker*

Make sure you restart the docker.io service.  Note that it might be called docker.

$ sudo service docker.io restart || sudo service docker restart

Now we can launch a Docker container in each of our two EC2 instances...

$ sudo docker run -it ubuntu
root@261ae39d90db:/# ifconfig eth0
eth0      Link encap:Ethernet  HWaddr e2:f4:fd:f7:b7:f5  
          inet addr:250.0.28.3  Bcast:0.0.0.0  Mask:255.255.255.0
          inet6 addr: fe80::e0f4:fdff:fef7:b7f5/64 Scope:Link
          UP BROADCAST RUNNING  MTU:1480  Metric:1
          RX packets:7 errors:0 dropped:2 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:558 (558.0 B)  TX bytes:648 (648.0 B)


And here's a second one, on my other instance...

sudo docker run -it ubuntu
root@ddd943163843:/# ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 66:fa:41:e7:ad:44  
          inet addr:250.0.27.3  Bcast:0.0.0.0  Mask:255.255.255.0
          inet6 addr: fe80::64fa:41ff:fee7:ad44/64 Scope:Link
          UP BROADCAST RUNNING  MTU:1480  Metric:1
          RX packets:12 errors:0 dropped:2 overruns:0 frame:0
          TX packets:13 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:936 (936.0 B)  TX bytes:1026 (1.0 KB)

Now, let's send some traffic back and forth!  Again, we can use ping and nc.



root@261ae39d90db:/# ping -c 3 250.0.27.3
PING 250.0.27.3 (250.0.27.3) 56(84) bytes of data.
64 bytes from 250.0.27.3: icmp_seq=1 ttl=62 time=0.563 ms
64 bytes from 250.0.27.3: icmp_seq=2 ttl=62 time=0.278 ms
64 bytes from 250.0.27.3: icmp_seq=3 ttl=62 time=0.260 ms
--- 250.0.27.3 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1998ms
rtt min/avg/max/mdev = 0.260/0.367/0.563/0.138 ms
root@261ae39d90db:/# echo "here come the bits" | nc 250.0.27.3 9876
root@261ae39d90db:/# 
─────────────────────────────────────────────────────────────────────
root@ddd943163843:/# ping -c 3 250.0.28.3
PING 250.0.28.3 (250.0.28.3) 56(84) bytes of data.
64 bytes from 250.0.28.3: icmp_seq=1 ttl=62 time=0.434 ms
64 bytes from 250.0.28.3: icmp_seq=2 ttl=62 time=0.258 ms
64 bytes from 250.0.28.3: icmp_seq=3 ttl=62 time=0.269 ms
--- 250.0.28.3 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1998ms
rtt min/avg/max/mdev = 0.258/0.320/0.434/0.081 ms
root@ddd943163843:/# nc -l 9876
here come the bits

Alright, so now let's really bake your noodle...

That 250.0.0.0/8 network can actually be any /8 network.  It could be a 10.* network or any other /8 that you choose.  I've chosen to use something in the reserved Class E range, 240.* - 255.*, so as not to conflict with any other routable network.
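If you did want a 10.* overlay instead, the interfaces stanza would look something like this (same fanctl syntax as shown above, untested here -- and mind collisions with anything else you route in 10.0.0.0/8, such as the default lxcbr0 on 10.0.3.x):

auto eth0
iface eth0 inet dhcp
    up fanctl up 10.0.0.0/8 eth0/16 dhcp
    down fanctl down 10.0.0.0/8 eth0/16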

Finally, let's test the performance a bit using iperf and Amazon's 10Gbps instances!

So I fired up two c4.8xlarge instances, and configured the fan bridge there.
$ fanctl show
Bridge           Overlay              Underlay             Flags
fan-250-0-28     250.0.0.0/8          172.30.0.28/16       dhcp host-reserve 1

And
$ fanctl show
Bridge           Overlay              Underlay             Flags
fan-250-0-27     250.0.0.0/8          172.30.0.27/16       dhcp host-reserve 1

Would you believe 5.46 Gigabits per second, between two Docker instances, directly addressed over a network?  Witness...

Server 1...

root@84364bf2bb8b:/# ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 92:73:32:ac:9c:fe  
          inet addr:250.0.27.2  Bcast:0.0.0.0  Mask:255.255.255.0
          inet6 addr: fe80::9073:32ff:feac:9cfe/64 Scope:Link
          UP BROADCAST RUNNING  MTU:1480  Metric:1
          RX packets:173770 errors:0 dropped:2 overruns:0 frame:0
          TX packets:107628 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:6871890397 (6.8 GB)  TX bytes:7190603 (7.1 MB)

root@84364bf2bb8b:/# iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 250.0.27.2 port 5001 connected with 250.0.28.2 port 35165
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  6.36 GBytes  5.46 Gbits/sec

And Server 2...

root@04fb9317c269:/# ifconfig eth0
eth0      Link encap:Ethernet  HWaddr c2:6a:26:13:c5:95  
          inet addr:250.0.28.2  Bcast:0.0.0.0  Mask:255.255.255.0
          inet6 addr: fe80::c06a:26ff:fe13:c595/64 Scope:Link
          UP BROADCAST RUNNING  MTU:1480  Metric:1
          RX packets:109230 errors:0 dropped:2 overruns:0 frame:0
          TX packets:150164 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:28293821 (28.2 MB)  TX bytes:6849336379 (6.8 GB)

root@04fb9317c269:/# iperf -c 250.0.27.2
multicast ttl failed: Invalid argument
------------------------------------------------------------
Client connecting to 250.0.27.2, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  3] local 250.0.28.2 port 35165 connected with 250.0.27.2 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  6.36 GBytes  5.47 Gbits/sec

Multiple containers, on separate hosts, directly addressable to one another with nothing more than a single network device on each host.  Deterministic routes.  Blazing fast speeds.  No distributed databases.  No consensus protocols.  Not an SDN.  This is just amazing!

RFC

Give it a try and let us know what you think!  We'd love to get your feedback and use cases as we work the kernel and userspace changes upstream.

Over the next few weeks, you'll see the fan patches landing in Wily, and backported to Trusty and Vivid.  We are also drafting an RFC, as we think that other operating systems and the container world and the Internet at large would benefit from Fan Networking.

I'm already a fan!
Dustin

Thursday, November 6, 2014

Where We're Going With LXD

Earlier this week, here in Paris, at the OpenStack Design Summit, Mark Shuttleworth and Canonical introduced our vision and proof of concept for LXD.

You can find the official blog post on Canonical Insights, and a short video introduction on Youtube (by yours truly).

Our Canonical colleague Stephane Graber posted a bit more technical design detail here on the lxc-devel mailing list, which was picked up by HackerNews.  And LWN published a story yesterday covering another Canonical colleague of ours, Serge Hallyn, and his work on Cgroups and CGManager, all of which feeds into LXD.  As it happens, Stephane and Serge are upstream co-maintainers of Linux Containers.  Tycho Andersen, another colleague of ours, has been working on CRIU, which was the heart of his amazing demo this week: live migrating a container running the cult classic first-person shooter, Doom!, back and forth between two hosts.


Moreover, we've answered a few journalists' questions for excellent articles on ZDnet and SynergyMX.  Predictably, El Reg is skeptical (which isn't necessarily a bad thing).  But unfortunately, The Var Guy doesn't quite understand the technology (and unfortunately uses this article to conflate LXD with other random Canonical/Ubuntu complaints).

In any case, here's a bit more about LXD, in my own words...

Our primary design goal with LXD is to extend containers into process-based systems that behave like virtual machines.

We love KVM for its total machine abstraction, as a full virtualization hypervisor.  Moreover, we love what Docker does for application level development, confinement, packaging, and distribution.

But as an operating system and Linux distribution, our customers are, in fact, asking us for complete operating systems that boot and function within a Linux Container's execution space, natively.

Linux Containers are essential to our reference architecture of OpenStack, where we co-locate multiple services on each host.  Nearly every host is a Nova compute node, as well as a Ceph storage node, and also runs a couple of units of "OpenStack overhead", such as MySQL, RabbitMQ, MongoDB, etc.  Rather than running all of those services on the same physical system, we actually put each of them in their own container, with their own IP address, namespace, cgroup, etc.  This gives us tremendous flexibility in the orchestration of those services.  We're able to move (migrate, even live migrate) those services from one host to another.  With that, it becomes possible to "evacuate" a given host, by moving each contained set of services elsewhere, perhaps to a larger or smaller system, and then shut down the unit (perhaps to replace a hard drive or memory, or repurpose it entirely).
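With Juju, that placement is a one-liner per service.  A hedged sketch using the Juju 1.x placement syntax of this era (the machine number and charm names here are just examples):

juju deploy mysql --to lxc:0
juju deploy rabbitmq-server --to lxc:0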

Containers also enable us to similarly confine services on virtual machines themselves!  Let that sink in for a second...  A contained workload is able, then, to move from one virtual machine to another, to a bare metal system.  Even from one public cloud provider, to another public or private cloud!

The last two paragraphs capture a few best practices that we've learned over the last few years implementing OpenStack for some of the largest telcos and financial services companies in the world.  What we're hearing from Internet service and cloud providers is not too dissimilar...  These customers have their own customers who want cloud instances that perform at bare metal equivalence.  They also want to maximize the utilization of their server hardware, sometimes by more densely packing workloads onto given systems.

As such, LXD is a convergence of several different customer requirements and our experience deploying some massively complex, scalable workloads (a la OpenStack, Hadoop, and others) in enterprises.

The rapid evolution of a few key technologies under and around LXC has recently made this dream possible.  Namely: user namespaces, Cgroups, SECCOMP, AppArmor, and CRIU, as well as the library abstraction that our external tools use to manage these containers as systems.

LXD is a new "hypervisor" in that it provides (REST) APIs that can manage Linux Containers.  This is a step function beyond where we've been to date: able to start and stop containers with local commands and, to a limited extent, libvirt, but not much more.  "Booting" a system in a container, running an init system, bringing up network devices (without nasty hacks in the container's root filesystem), etc., was challenging, but we've worked our way through all of these, and Ubuntu boots unmodified in Linux Containers today.

Moreover, LXD is a whole new semantic for turning any machine -- Intel, AMD, ARM, POWER, physical, or even a virtual machine (e.g. your cloud instances) -- into a system that can host and manage and start and stop and import and export and migrate multiple collections of services bundled within containers.

I've received a number of questions about the "hardware assisted" containerization slide in my deck.  We're under confidentiality agreements with vendors as to the details and timelines for these features.

What (I think) I can say, is that there are hardware vendors who are rapidly extending some of the key features that have made cloud computing and virtualization practical, toward the exciting new world of Linux Containers.  Perhaps you might read a bit about CPU VT extensions, No Execute Bits, and similar hardware security technologies.  Use your imagination a bit, and you can probably converge on a few key concepts that will significantly extend the usefulness of Linux Containers.

As soon as such hardware technology is enabled in Linux, you have our commitment that Ubuntu will bring those features to end users faster than anyone else!

If you want to play with it today, you can certainly see the primitives within Ubuntu's LXC.  Launch Ubuntu containers within LXC and you'll start to get the general, low level idea.  If you want to view it from one layer above, give our new nova-compute-flex (flex was the code name before it was released as LXD) a try.  It's publicly available as a tech preview in Ubuntu OpenStack Juno (authored by Chuck Short, Scott Moser, and James Page).  Here, you can launch OpenStack instances as LXC containers (rather than KVM virtual machines), as "general purpose" system instances.
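If you do want to kick the tires on those LXC primitives directly, a minimal sketch on Ubuntu looks something like this (the container name is arbitrary, and the download template options are from memory -- double check lxc-create --help):

sudo apt-get install lxc
sudo lxc-create -t download -n demo1 -- --dist ubuntu --release trusty --arch amd64
sudo lxc-start -n demo1 -d
sudo lxc-attach -n demo1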

Finally, perhaps lost in all of the activity here, are a couple of things we're doing differently for the LXD project.  We at Canonical have taken our share of criticism over the years about our choice of code hosting (our own Bazaar and Launchpad.net), our preferred free software license (GPLv3/AGPLv3), and our contributor license agreement (Canonical CLA).   [For the record: I love bzr/Launchpad, prefer GPL/AGPL, and am mostly ambivalent on the CLA; but I won't argue those points here.]
  1. This is a public, community project under LinuxContainers.org
  2. The code and design documents are hosted on Github
  3. Under an Apache License
  4. Without requiring signatures of the Canonical CLA
These have been very deliberate, conscious decisions, lobbied for and won by our engineers leading the project, in the interest of collaborating and garnering the participation of communities that have traditionally shunned Canonical-led projects, raising the above objections.  I, for one, am eager to see contribution and collaboration that too often, we don't see.

Cheers!
:-Dustin

Friday, April 18, 2014

Docker in Ubuntu, Ubuntu in Docker





This article is cross-posted on Docker's blog as well.

There is a design pattern, occasionally found in nature, where some of the most elegant and impressive solutions seem intuitive in retrospect.



For me, Docker is just that sort of game changing, hyper-innovative technology that, at its core, somehow seems straightforward, beautiful, and obvious.



Linux containers, repositories of popular base images, snapshots using modern copy-on-write filesystem features.  Brilliant, yet so simple.  Docker.io for the win!


I clearly recall, nine long months ago, being intrigued by the fervor of HackerNews excitement pulsing around a nascent Docker technology.  I followed a set of instructions on a very well designed and tastefully manicured web page, in order to launch my first Docker container.  Something like: start with Ubuntu 13.04, downgrade the kernel, reboot, add an out-of-band package repository, install an oddly named package, import some images, perhaps debug or ignore some errors, and then launch.  In a few moments, I could clearly see the beginnings of a brave new world of lightning fast, cleanly managed, incrementally saved, highly dense, operating system containers.

Ubuntu inside of Ubuntu, Inception style.  So.  Much.  Potential.



Fast forward to today -- April 18, 2014 -- and the combination of Docker and Ubuntu 14.04 LTS has raised the bar, introducing a new echelon of usability and convenience, coupled with the trust and track record of enterprise grade Long Term Support from Canonical and the Ubuntu community.
Big thanks, by the way, to Paul Tagliamonte, upstream Debian packager of Docker.io, as well as all of the early testers and users of Docker during the Ubuntu development cycle.
Docker is now officially in Ubuntu.  That makes Ubuntu 14.04 LTS the first enterprise grade Linux distribution to ship with Docker natively packaged, continuously tested, and instantly installable.  Millions of Ubuntu servers are now never more than three commands away from launching or managing Linux container sandboxes, thanks to Docker.


sudo apt-get install docker.io
sudo docker.io pull ubuntu
sudo docker.io run -i -t ubuntu /bin/bash


And after that last command, Ubuntu is now running within Docker, inside of a Linux container.
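If you want to convince yourself of that, from inside the container shell (my own trivial check, not part of the three commands above):

cat /etc/lsb-release    # reports the Ubuntu userspace running inside the container
exit
sudo docker.io ps -a    # back on the host, the stopped container shows up here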

Brilliant.

Simple.

Elegant.

User friendly.

Just the way we've been doing things in Ubuntu for nearly a decade. Thanks to our friends at Docker.io!


Cheers,
:-Dustin
