To date, we've shaved the Bionic (18.04 LTS) minimal images down by over 53% since Ubuntu 14.04 LTS, trimming nearly 100 packages and thousands of files.
This is particularly useful for container images (Docker, LXD, Kubernetes, etc.), embedded device environments, and anywhere a developer wants to bootstrap an Ubuntu system from the smallest possible starting point. Smaller images generally:
are subject to fewer security vulnerabilities and subsequent updates
reduce overall network bandwidth consumption
and require less on-disk storage
First, a definition...
"The Ubuntu Minimal Image is the smallest base upon which a user can apt install any package in the Ubuntu archive."
By design, Ubuntu Minimal Images specifically lack the creature comforts, user interfaces and user design experience that have come to define the Ubuntu Desktop and Ubuntu Cloud images.
| Release | Bytes (compressed) | Bytes (uncompressed) | Files | Directories | Links | Packages |
|---|---|---|---|---|---|---|
| 14.04 LTS base | 65,828,262 | 188,406,508 | 9,953 | 1,306 | 1,496 | 189 |
| 16.04 LTS base | 48,296,930 | 120,370,143 | 5,655 | 751 | 1,531 | 103 |
| 18.04 LTS base | 31,089,259 | 81,270,020 | 2,589 | 596 | 190 | 95 |

(Measured with: ls -alF for compressed bytes; du -sb . for uncompressed bytes; find . -type f | wc -l, find . -type d | wc -l, and find . -type l | wc -l for files, directories, and links; sudo chroot . dpkg -l | grep -c ^i for packages.)
As of today, the Bionic (18.04 LTS) minimal image weighs in at 30MB (compressed) and 81MB (uncompressed, on disk), and comprises roughly 100 Debian packages.
We've removed things like locales and languages, which are easy to add back, but are less necessary in scale-out, container working environments. We've also removed other human-focused resources, like documentation, manpages, and changelogs, which are more easily read online (and also easy to re-enable). This base filesystem tarball also lacks a kernel and an init system, as it's intended to be used inside of a chroot or application container. Note that Canonical's Ubuntu Kernel team has also made tremendous strides tuning and minimizing Linux into various optimized kernel flavors.
I can still see another 1.2MB of savings to harvest in /usr/share/doc, /usr/share/info, and /usr/share/man, and the Foundations team is already looking into filtering out that documentation, too.
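That kind of filtering can be approximated today with dpkg's path-exclude configuration. A small sketch, with one assumption flagged: the file below is written to the current directory for illustration, while the real config belongs in /etc/dpkg/dpkg.cfg.d/:

```shell
# dpkg honors path-exclude globs at package unpack time, so these
# documentation trees never land on disk in the first place.
cat > ./excludes <<'EOF'
path-exclude=/usr/share/doc/*
path-exclude=/usr/share/man/*
path-exclude=/usr/share/info/*
EOF
grep -c '^path-exclude' ./excludes
```

Drop that file into /etc/dpkg/dpkg.cfg.d/ and every subsequent apt install skips those paths.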
Do you see any other opportunities for savings? Can you help us crop the Bionic (18.04 LTS) images any further? Is there something that we've culled, that you see as problematic? We're interested in your feedback at the form here:
Yesterday, I delivered a talk to a lively audience at ContainerWorld in Santa Clara, California.
If I measured "the most interesting slides" by counting "the number of people who took a picture of the slide", then by far "the most interesting slides" are slides 8-11, which pose and answer the question:
"Should I run my PaaS on top of my IaaS, or my IaaS on top of my PaaS"?
In the Ubuntu world, that answer is super easy -- however you like! At Canonical, we're happy to support:
In all cases, the underlying substrate is perfectly consistent:
you've got 1 to N physical or virtual machines
which are dynamically provisioned by MAAS or your cloud provider
running a stable, minimal, secure Ubuntu server image
carved up into fast, efficient, independently addressable LXD machine containers
With that as your base, we can easily conjure up a Kubernetes, an OpenStack, or both. And once you have a Kubernetes or OpenStack, we'll gladly conjure up one inside the other.
As always, I'm happy to share my slides with you here. You're welcome to download the PDF, or flip through the embedded slides below.
You can easily configure your own cloud-init configuration into your LXD instance profile.
In my case, I want cloud-init to automatically ssh-import-id kirkland, to fetch my keys from Launchpad. Alternatively, I could use gh:dustinkirkland to fetch my keys from Github.
Here's how!
First, edit your default LXD profile (or any other, for that matter):
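A minimal sketch of that edit (the kirkland Launchpad ID is from the post; substitute your own):

```shell
# Open the default profile in $EDITOR and add the cloud-config stanza...
lxc profile edit default
# ...or set the cloud-init user data non-interactively:
lxc profile set default user.user-data "$(cat <<'EOF'
#cloud-config
ssh_import_id: [kirkland]
EOF
)"
```

Every instance launched from that profile will then import those SSH keys on first boot.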
Or, if you're feeling more enterprisey and want the full experience, try:
$ conjure-up canonical-kubernetes
I hope to meet some of you at ContainerWorld in Santa Clara next week. Marco Ceppi and I are running a Kubernetes installfest workshop on Tuesday, February 21, 2017, from 3pm - 4:30pm. I can guarantee that every single person who attends will succeed in deploying their own Kubernetes cluster to a public cloud (AWS, Azure, or Google), or to their Ubuntu laptop or VM.
Finally, I invite you to check out this 30-minute podcast with David Daly, from DevOpsChat, where we talked quite a bit about Containers and Kubernetes and the experience we're working on in Ubuntu...
I've been one of DD-WRT's biggest fans, for more than 10 years. I've always flashed my router with custom firmware, fine-tuned my wired and wireless networks, and locked down a VPN back home. I've genuinely always loved tinkering with network gear.
A couple of weeks ago, I decided to re-deploy my home network. I've been hearing about Ubiquiti Networks from my colleagues at Canonical, where we use Ubiquiti gear for our many and varied company events. Moreover, it seems a number of us have taken to running the same kits in our home offices.
There's something quite unique about the UniFi Controller -- the server that "controls" your router, gateway, and access points. Rather than being built into the USG itself, you run the server somewhere else.
Sure you can buy their hardware appliance (which I'm sure is nice). But you can just as easily run it on an Ubuntu machine yourself. That machine could be a physical machine on your network, a virtual machine locally or in the cloud, or it could be an LXD machine container.
I opted for the latter. I'm happily running the UniFi Controller in an LXD machine container, and it's easy for you to set up, too.
I'm running Ubuntu 16.04 LTS 64-bit on an Intel NUC somewhere in my house. It happens to be running Ubuntu Desktop, as it's attached to one of the TVs in my house as a media playing device. In its spare time, it's a server I use for LXD, Docker, and other development purposes.
I've configured the network on the machine to "bridge" LXD to my USG router, which happens to be running DHCP and DNS. I'm going to move that to a MAAS server, but that's a post for another day.
Here's /etc/network/interfaces on that machine:
kirkland@masterbr:~⟫ cat /etc/network/interfaces
# interfaces(5) file used by ifup(8) and ifdown(8)
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet manual

auto br0
iface br0 inet dhcp
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0
    bridge_maxwait 0
So eth0 is bridged to br0, and br0 draws an address from my LAN.
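Launching the container itself looks something like this (a sketch; the image alias and container name are assumptions based on the rest of the post):

```shell
# Launch the container that will run the UniFi Controller; with LXD
# bridged onto br0, it draws its address from the LAN's DHCP server.
lxc launch ubuntu:16.04 unifi-controller
lxc list unifi-controller
```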
It's important to notice that this container drew an IP address on my 10.0.0.0/24 LAN. It will need this to detect, federate, and manage the Ubiquiti hardware.
Now, let's exec into it, and import our SSH keys, so that we can SSH into it later.
kirkland@masterbr:~⟫ lxc exec unifi-controller bash
root@unifi-controller:~# ssh-import-id kirkland
2016-12-09 21:56:36,558 INFO Authorized key ['4096', 'd3:dd:e4:72:25:18:f3:ea:93:10:1a:5b:9f:bc:ef:5e', 'kirkland@x220', '(RSA)']
2016-12-09 21:56:36,568 INFO Authorized key ['2048', '69:57:f9:b6:11:73:48:ae:11:10:b5:18:26:7c:15:9d', 'kirkland@mac', '(RSA)']
2016-12-09 21:56:36,569 INFO [2] SSH keys [Authorized]
root@unifi-controller:~# exit
exit
kirkland@masterbr:~⟫ ssh root@10.0.0.183
The authenticity of host '10.0.0.183 (10.0.0.183)' can't be established.
ECDSA key fingerprint is SHA256:we0zAxifd0dcnAE2tVE53NFbQCop61f+MmHGsyGj0Xg.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.0.0.183' (ECDSA) to the list of known hosts.
root@unifi-controller:~#
Now, let's add the Unifi repository and install the deb and all its dependencies. It's a big pile of Java and MongoDB, which I'm happy to keep nicely "contained" in this LXD instance!
root@unifi-controller:~# echo "deb http://www.ubnt.com/downloads/unifi/debian stable ubiquiti" | sudo tee -a /etc/apt/sources.list
deb http://www.ubnt.com/downloads/unifi/debian stable ubiquiti
root@unifi-controller:~# apt-key adv --keyserver keyserver.ubuntu.com --recv C0A52C50
Executing: gpg --ignore-time-conflict --no-options --no-default-keyring --homedir /tmp/tmp.hhgdd0ssJQ --no-auto-check-trustdb --trust-model always --keyring /etc/apt/trusted.gpg --primary-keyring /etc/apt/trusted.gpg --keyserver keyserver.ubuntu.com --recv C0A52C50
gpg: requesting key C0A52C50 from hkp server keyserver.ubuntu.com
gpg: key C0A52C50: public key "UniFi Developers " imported
gpg: Total number processed: 1
gpg: imported: 1 (RSA: 1)
root@unifi-controller:~# apt update >/dev/null 2>&1
root@unifi-controller:~# apt install unifi
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following package was automatically installed and is no longer required:
os-prober
Use 'apt-get autoremove' to remove it.
The following extra packages will be installed:
binutils ca-certificates-java default-jre-headless fontconfig-config
fonts-dejavu-core java-common jsvc libasyncns0 libavahi-client3
libavahi-common-data libavahi-common3 libboost-filesystem1.54.0
libboost-program-options1.54.0 libboost-system1.54.0 libboost-thread1.54.0
libcommons-daemon-java libcups2 libflac8 libfontconfig1 libgoogle-perftools4
libjpeg-turbo8 libjpeg8 liblcms2-2 libnspr4 libnss3 libnss3-nssdb libogg0
libpcrecpp0 libpcsclite1 libpulse0 libsctp1 libsnappy1 libsndfile1
libtcmalloc-minimal4 libunwind8 libv8-3.14.5 libvorbis0a libvorbisenc2
lksctp-tools mongodb-clients mongodb-server openjdk-7-jre-headless tzdata
tzdata-java
Suggested packages:
binutils-doc default-jre equivs java-virtual-machine cups-common
liblcms2-utils pcscd pulseaudio icedtea-7-jre-jamvm libnss-mdns
sun-java6-fonts fonts-dejavu-extra fonts-ipafont-gothic fonts-ipafont-mincho
ttf-wqy-microhei ttf-wqy-zenhei ttf-indic-fonts-core ttf-telugu-fonts
ttf-oriya-fonts ttf-kannada-fonts ttf-bengali-fonts
The following NEW packages will be installed:
binutils ca-certificates-java default-jre-headless fontconfig-config
fonts-dejavu-core java-common jsvc libasyncns0 libavahi-client3
libavahi-common-data libavahi-common3 libboost-filesystem1.54.0
libboost-program-options1.54.0 libboost-system1.54.0 libboost-thread1.54.0
libcommons-daemon-java libcups2 libflac8 libfontconfig1 libgoogle-perftools4
libjpeg-turbo8 libjpeg8 liblcms2-2 libnspr4 libnss3 libnss3-nssdb libogg0
libpcrecpp0 libpcsclite1 libpulse0 libsctp1 libsnappy1 libsndfile1
libtcmalloc-minimal4 libunwind8 libv8-3.14.5 libvorbis0a libvorbisenc2
lksctp-tools mongodb-clients mongodb-server openjdk-7-jre-headless
tzdata-java unifi
The following packages will be upgraded:
tzdata
1 upgraded, 44 newly installed, 0 to remove and 10 not upgraded.
Need to get 133 MB of archives.
After this operation, 287 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
...
done.
Finally, we point a web browser at this server, http://10.0.0.183:8443/ in my case, and run through the UniFi setup there.
A couple of weeks ago, I delivered a talk at Container Camp UK 2016. It was a brilliant event, on a beautiful stage at Picturehouse Central in Piccadilly Circus in London.
I reinstalled my primary laptop (Lenovo x250) about 3 months ago (June 30, 2016), when I got a shiny new SSD, with a fresh Ubuntu 16.04 LTS image.
Just yesterday, I needed to test something in KVM. Something that could only be tested in KVM.
kirkland@x250:~⟫ kvm
The program 'kvm' is currently not installed. You can install it by typing:
sudo apt install qemu-kvm
127 kirkland@x250:~⟫
I don't have KVM installed? How is that even possible? I used to be the maintainer of the virtualization stack in Ubuntu (kvm, qemu, libvirt, virt-manager, et al.)! I lived and breathed virtualization on Ubuntu for years...
Alas, it seems that I use LXD for everything these days! It's built into every Ubuntu 16.04 LTS server, and one 'apt install lxd' away from having it on your desktop. With ZFS, instances start in under 3 seconds. Snapshots, live migration, an image store, a REST API, all built in. Try it out, if you haven't; it's great!
kirkland@x250:~⟫ time lxc launch ubuntu:x
Creating supreme-parakeet
Starting supreme-parakeet
real 0m1.851s
user 0m0.008s
sys 0m0.000s
kirkland@x250:~⟫ lxc exec supreme-parakeet bash
root@supreme-parakeet:~#
But that's enough of an LXD advertisement... back to the title of the blog post.
Here, I want to download an Ubuntu cloud image, and boot into it. There's one extra step nowadays. You need to create your "user data" and feed it into cloud-init.
In the nominal case, you can now just launch KVM, and add your user data as a cdrom disk. When it boots, you can login with "ubuntu" and "passw0rd", which we set in the seed:
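A sketch of that seed step, using cloud-localds from the cloud-image-utils package (file names here are illustrative):

```shell
# Write a minimal cloud-config seed; "passw0rd" matches the post.
cat > user-data <<'EOF'
#cloud-config
password: passw0rd
chpasswd: { expire: False }
ssh_pwauth: True
EOF
# With qemu-kvm and cloud-image-utils installed, build the seed ISO and boot:
#   cloud-localds seed.img user-data
#   kvm -m 1024 -drive file=xenial-server-cloudimg-amd64-disk1.img -cdrom seed.img
wc -l < user-data
```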
Finally, let's enable more bells and whistles, and speed this VM up. Let's give it all 4 CPUs, a healthy 8GB of memory, a virtio disk, and let's port forward SSH to 5555:
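A sketch of that invocation (the image and seed file names are assumptions; the hostfwd rule maps host port 5555 to the guest's port 22):

```shell
kvm -smp 4 -m 8192 \
    -drive file=xenial-server-cloudimg-amd64-disk1.img,if=virtio \
    -cdrom seed.img \
    -net nic -net user,hostfwd=tcp::5555-:22
```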
And with that, we can now SSH into the VM, with the public SSH key specified in our seed:
kirkland@x250:~⟫ ssh -p 5555 ubuntu@localhost
The authenticity of host '[localhost]:5555 ([127.0.0.1]:5555)' can't be established.
RSA key fingerprint is SHA256:w2FyU6TcZVj1WuaBA799pCE5MLShHzwio8tn8XwKSdg.
No matching host key fingerprint found in DNS.
Are you sure you want to continue connecting (yes/no)? yes
Welcome to Ubuntu 16.04.1 LTS (GNU/Linux 4.4.0-36-generic x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
Get cloud support with Ubuntu Advantage Cloud Guest:
http://www.ubuntu.com/business/services/cloud
0 packages can be updated.
0 updates are security updates.
ubuntu@ubuntu:~⟫
I'm thrilled to introduce Docker 1.10.3, available on every Ubuntu architecture, for Ubuntu 16.04 LTS, and announce the General Availability of Ubuntu Fan Networking!
i686 (does anyone seriously still run 32-bit intel servers?)
amd64 (most servers and clouds under the sun)
ppc64el (OpenPower and IBM POWER8 machine learning super servers)
s390x (IBM System Z LinuxOne super uptime mainframes)
That's Docker-Docker-Docker-Docker-Docker-Docker, from the smallest Raspberry Pis to the biggest IBM mainframes in the world today! Never more than one 'sudo apt install docker.io' command away.
Moreover, we now have Docker running inside of LXD! Containers all the way down. Application containers (e.g. Docker), inside of Machine containers (e.g. LXD), inside of Virtual Machines (e.g. KVM), inside of a public or private cloud (e.g. Azure, OpenStack), running on bare metal (take your pick).
Let's have a look at launching a Docker application container inside of an LXD machine container:
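A sketch of that nesting, assuming LXD's security.nesting config key (required for running containers inside containers):

```shell
# Launch a machine container with nesting enabled, then run Docker inside it.
lxc launch ubuntu:16.04 docker-host -c security.nesting=true
lxc exec docker-host -- apt install -y docker.io
lxc exec docker-host -- docker run -it ubuntu bash
```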
Oh, and let's talk about networking... We're also pleased to announce the general availability of Ubuntu Fan networking -- specially designed to connect all of your Docker containers spread across your network. Ubuntu's Fan networking feature is an easy way to make every Docker container on your local network easily addressable by every other Docker host and container on the same network. It's high performance, super simple, utterly deterministic, and we've tested it on every major public cloud as well as OpenStack and our private networks.
Simply installing Ubuntu's Docker package will also install the ubuntu-fan package, which provides an interactive setup script, fanatic, should you choose to join the Fan. Simply run 'sudo fanatic' and answer the questions. You can trivially revert your Fan networking setup with 'sudo fanatic deconfigure'.
kirkland@x250:~$ sudo fanatic
Welcome to the fanatic fan networking wizard. This will help you set
up an example fan network and optionally configure docker and/or LXD to
use this network. See fanatic(1) for more details.
Configure fan underlay (hit return to accept, or specify alternative) [10.0.0.0/16]:
Configure fan overlay (hit return to accept, or specify alternative) [250.0.0.0/8]:
Create LXD networking for underlay:10.0.0.0/16 overlay:250.0.0.0/8 [Yn]: n
Create docker networking for underlay:10.0.0.0/16 overlay:250.0.0.0/8 [Yn]: Y
Test docker networking for underlay:10.0.0.45/16 overlay:250.0.0.0/8
(NOTE: potentially triggers large image downloads) [Yn]: Y
local docker test: creating test container ...
34710d2c9a856f4cd7d8aa10011d4d2b3d893d1c3551a870bdb9258b8f583246
test master: ping test (250.0.45.0) ...
test slave: ping test (250.0.45.1) ...
test master: ping test ... PASS
test master: short data test (250.0.45.1 -> 250.0.45.0) ...
test slave: ping test ... PASS
test slave: short data test (250.0.45.0 -> 250.0.45.1) ...
test master: short data ... PASS
test slave: short data ... PASS
test slave: long data test (250.0.45.0 -> 250.0.45.1) ...
test master: long data test (250.0.45.1 -> 250.0.45.0) ...
test master: long data ... PASS
test slave: long data ... PASS
local docker test: destroying test container ...
fanatic-test
fanatic-test
local docker test: test complete PASS (master=0 slave=0)
This host IP address: 10.0.0.45
I've run 'sudo fanatic' here on a couple of machines on my network -- x250 (10.0.0.45) and masterbr (10.0.0.8), and now I'm going to launch a Docker container on each of those two machines, obtain each IP address on the Fan (250.x.y.z), install iperf, and test the connectivity and bandwidth between each of them (on my gigabit home network). You'll see that we'll get 900mbps+ of throughput:
kirkland@masterbr:~⟫ sudo docker run -it ubuntu bash
root@effc8fe2513d:/# apt update >/dev/null 2>&1 ; apt install -y iperf >/dev/null 2>&1
root@effc8fe2513d:/# ifconfig eth0
eth0 Link encap:Ethernet HWaddr 02:42:fa:00:08:00
inet addr:250.0.8.0 Bcast:0.0.0.0 Mask:255.0.0.0
inet6 addr: fe80::42:faff:fe00:800/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1450 Metric:1
RX packets:7659 errors:0 dropped:0 overruns:0 frame:0
TX packets:3433 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:22131852 (22.1 MB) TX bytes:189875 (189.8 KB)
root@effc8fe2513d:/# iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[ 4] local 250.0.8.0 port 5001 connected with 250.0.45.0 port 54274
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-10.0 sec 1.05 GBytes 899 Mbits/sec
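For reference, the client side of that test, run from the container on the other host, is just one command (the target is the masterbr container's Fan address from the transcript above):

```shell
# From the x250 host's container, drive traffic at the iperf server:
iperf -c 250.0.8.0
```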
Finally, let's have another long hard look at the image from the top of this post. Download it in full resolution to study very carefully what's happening here, because it's pretty [redacted] amazing!
Here, we have a Byobu session, split into 6 panes (Shift-F2 five times, Shift-F8 six times). In each pane, we have an SSH session to Ubuntu 16.04 LTS servers spread across 6 different architectures -- armhf, arm64, i686, amd64, ppc64el, and s390x. I used the Shift-F9 key to simultaneously run the same commands in each and every window. Here are the commands I ran:
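The screenshots aren't reproduced here, but the commands were along these lines (a sketch; the image aliases are assumptions):

```shell
# Run simultaneously in each pane, on each architecture:
lxc launch ubuntu:16.04          # an LXD machine container
sudo docker run -it ubuntu bash  # a Docker application container
```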
That's right. We just launched Ubuntu LXD containers, as well as Docker containers against every Ubuntu 16.04 LTS architecture. How's that for Ubuntu everywhere!?!
I had the opportunity to speak at Container World 2016 in Santa Clara yesterday. Thanks in part to the Netflix guys who preceded me, the room was absolutely packed!
Ubuntu 16.04 LTS (Xenial) is only a few short weeks away, and with it comes one of the most exciting new features Linux has seen in a very long time...
ZFS -- baked directly into Ubuntu -- supported by Canonical.
What is ZFS?
ZFS is a combination of a volume manager (like LVM) and a filesystem (like ext4, xfs, or btrfs).
ZFS is one of the most beloved features of Solaris, universally coveted by every Linux sysadmin with a Solaris background. To our delight, we're happy to make OpenZFS available on every Ubuntu system. Ubuntu's reference guide for ZFS can be found here, and these are a few of the killer features:
snapshots
copy-on-write cloning
continuous integrity checking against data corruption
automatic repair
efficient data compression.
These features truly make ZFS the perfect filesystem for containers.
What does "support" mean?
You'll find zfs.ko automatically built and installed on your Ubuntu systems. No more DKMS-built modules!
Now, let's install lxd and zfsutils-linux, if you haven't already:
$ sudo apt install lxd zfsutils-linux
Next, let's use the interactive lxd init command to setup LXD and ZFS. In the example below, I'm simply using a sparse, loopback file for the ZFS pool. For best results (and what I use on my laptop and production servers), it's best to use a raw SSD partition or device.
$ sudo lxd init
Name of the storage backend to use (dir or zfs): zfs
Create a new ZFS pool (yes/no)? yes
Name of the new ZFS pool: lxd
Would you like to use an existing block device (yes/no)? no
Size in GB of the new loop device (1GB minimum): 2
Would you like LXD to be available over the network (yes/no)? no
LXD has been successfully configured.
We can check our ZFS pool now:
$ sudo zpool list
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
lxd 1.98G 450K 1.98G - 0% 0% 1.00x ONLINE -
$ sudo zpool status
pool: lxd
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
lxd ONLINE 0 0 0
/var/lib/lxd/zfs.img ONLINE 0 0 0
errors: No known data errors
$ lxc config get storage.zfs_pool_name
storage.zfs_pool_name: lxd
Finally, let's import the Ubuntu LXD image, and launch a few containers. Note how fast containers launch, which is enabled by the ZFS cloning and copy-on-write features:
$ newgrp lxd
$ lxd-images import ubuntu --alias ubuntu
Downloading the GPG key for http://cloud-images.ubuntu.com
Progress: 48 %
Validating the GPG signature of /tmp/tmpa71cw5wl/download.json.asc
Downloading the image.
Image manifest: http://cloud-images.ubuntu.com/server/releases/trusty/release-20160201/ubuntu-14.04-server-cloudimg-amd64.manifest
Image imported as: 54c8caac1f61901ed86c68f24af5f5d3672bdc62c71d04f06df3a59e95684473
Setup alias: ubuntu
$ for i in $(seq 1 5); do lxc launch ubuntu; done
...
$ lxc list
+-------------------------+---------+-------------------+------+-----------+-----------+
| NAME | STATE | IPV4 | IPV6 | EPHEMERAL | SNAPSHOTS |
+-------------------------+---------+-------------------+------+-----------+-----------+
| discordant-loria | RUNNING | 10.0.3.130 (eth0) | | NO | 0 |
+-------------------------+---------+-------------------+------+-----------+-----------+
| fictive-noble | RUNNING | 10.0.3.91 (eth0) | | NO | 0 |
+-------------------------+---------+-------------------+------+-----------+-----------+
| interprotoplasmic-essie | RUNNING | 10.0.3.242 (eth0) | | NO | 0 |
+-------------------------+---------+-------------------+------+-----------+-----------+
| nondamaging-cain | RUNNING | 10.0.3.9 (eth0) | | NO | 0 |
+-------------------------+---------+-------------------+------+-----------+-----------+
| untreasurable-efrain | RUNNING | 10.0.3.89 (eth0) | | NO | 0 |
+-------------------------+---------+-------------------+------+-----------+-----------+
There's no shortage of excitement, controversy, and readership, any time you can work "Docker" into a headline these days. Perhaps a bit like "Donald Trump", but for CIO tech blogs and IT news -- a real hot button. Hey, look, I even did it myself in the title of this post!
Docker's default container image is certainly Docker's decision to make. But it would be prudent to examine a few facts:
(1) Check DockerHub and you may notice that while Busybox (Alpine Linux) has surpassed Ubuntu in the number of downloads (66M to 40M), Ubuntu is still by far the most "popular" by number of "stars" -- likes, favorites, +1's, whatever (3.2K to 499).
(2) Ubuntu's compressed, minimal root tarball is 59 MB, which is what is downloaded over the Internet. That's different from the 188 MB uncompressed root filesystem, which has been quoted a number of times in the press.
(3) The real magic of Docker is that you only ever download that base image once! And you only store one copy of the uncompressed root filesystem on your disk! Just once, sudo docker pull ubuntu, on your laptop at home or work, and then launch thousands of images at a coffee shop or airport lounge with its spotty wifi. Build derivative images, FROM ubuntu, etc. and you only ever store the incremental differences.
Actually, I encourage you to test that out yourself... I just launched a t2.micro -- Amazon's cheapest instance type with the lowest networking bandwidth. It took 15.938s to sudo apt install docker.io. And it took 9.230s to sudo docker pull ubuntu. It takes less time to download Ubuntu than to install Docker!
ubuntu@ip-172-30-0-129:~⟫ time sudo apt install docker.io -y
...
real 0m15.938s
user 0m2.146s
sys 0m0.913s
As compared to...
ubuntu@ip-172-30-0-129:~⟫ time sudo docker pull ubuntu
latest: Pulling from ubuntu
f15ce52fc004: Pull complete
c4fae638e7ce: Pull complete
a4c5be5b6e59: Pull complete
8693db7e8a00: Pull complete
ubuntu:latest: The image you are pulling has been verified. Important: image verification is a tech preview feature and should not be relied on to provide security.
Digest: sha256:457b05828bdb5dcc044d93d042863fba3f2158ae249a6db5ae3934307c757c54
Status: Downloaded newer image for ubuntu:latest
real 0m9.230s
user 0m0.021s
sys 0m0.016s
Now, sure, it takes even less than that to download Alpine Linux (0.747s by my test), but again you only ever do that once! After you have your initial image, launching Docker containers takes the exact same amount of time (0.233s), with identical storage deltas. See:
ubuntu@ip-172-30-0-129:/tmp/docker⟫ time sudo docker run alpine /bin/true
real 0m0.233s
user 0m0.014s
sys 0m0.001s
ubuntu@ip-172-30-0-129:/tmp/docker⟫ time sudo docker run ubuntu /bin/true
real 0m0.234s
user 0m0.012s
sys 0m0.002s
(4) I regularly communicate sincere, warm congratulations to our friends at Docker Inc, on its continued growth. shykes publicly mentioned the hiring of the maintainer of Alpine Linux in that Hacker News post. As a long time Linux distro developer myself, I have tons of respect for everyone involved in building a high quality Linux distribution. In fact, Canonical employs over 700 people, in 44 countries, working around the clock, all calendar year, to make Ubuntu the world's most popular Linux OS. Importantly, that includes a dedicated security team with an outstanding track record over the last 12 years, keeping Ubuntu servers, clouds, desktops, laptops, tablets, and phones up-to-date and protected against the latest security vulnerabilities. I don't know Natanael personally, but I'm intimately aware of what a spectacular amount of work it is to maintain and secure an OS distribution, as it makes its way into enterprise and production deployments. Good luck!
(5) There are currently 5,854 packages available via apk in Alpine Linux (sudo docker run alpine apk search -v). There are 8,862 packages in Ubuntu Main (officially supported by Canonical), and 53,150 binary packages across all of Ubuntu Main, Universe, Restricted, and Multiverse, supported by the greater Ubuntu community. Nearly all 50,000+ packages are updated every 6 months, on time, every time, and we release an LTS version of Ubuntu and the best of open source software in the world every 2 years. Like clockwork. Choice. Velocity. Stability. That's what Ubuntu brings.
Docker holds a special place in the Ubuntu ecosystem, and Ubuntu has been instrumental in Docker's growth over the last 3 years. Where we go from here, is largely up to the cross-section of our two vibrant communities.
And so I ask you honestly...what do you want to see? How would you like to see Docker and Ubuntu operate together?
I'm Canonical's Product Manager for Ubuntu Server, I'm responsible for Canonical's relationship with Docker Inc, and I will read absolutely every comment posted below.
Cheers,
:-Dustin
p.s. I'm speaking at Container Summit in New York City today, and wrote this post from the top of the (inspiring!) One World Observatory at the World Trade Center this morning. Please come up and talk to me, if you want to share your thoughts (at Container Summit, not the One World Observatory)!
Picture yourself containers on a server
With systemd trees and spawned tty's
Somebody calls you, you answer quite quickly
A world with the density so high
- Sgt. Graber's LXD Smarts Club Band
Last week, we proudly released Ubuntu 15.10 (Wily) -- the final developer snapshot of the Ubuntu Server before we focus the majority of our attention on quality, testing, performance, documentation, and stability for the Ubuntu 16.04 LTS cycle in the next 6 months.
Notably, LXD has been promoted to the Ubuntu Main archive, now commercially supported by Canonical. That has enabled us to install LXD by default on all Ubuntu Servers, from 15.10 forward.
That means that every Ubuntu server -- Intel, AMD, ARM, POWER, and even Virtual Machines in the cloud -- is now a full machine container hypervisor, capable of hosting hundreds of machine containers, right out of the box!
LXD in the Sky with Diamonds! Well, LXD is in the Cloud with Diamond level support from Canonical, anyway. You can even test it in your web browser here.
The development tree of Xenial (Ubuntu 16.04 LTS) has already inherited this behavior, and we will celebrate this feature broadly through our use of LXD containers in Juju, MAAS, and the reference platform of Ubuntu OpenStack, as well as the new nova-lxd hypervisor in the OpenStack Autopilot within Landscape.
While the young and the restless are already running Wily Ubuntu 15.10, the bold and the beautiful are still bound to their Trusty Ubuntu 14.04 LTS servers.
At Canonical, we understand both motivations, and this is why we have backported LXD to the Trusty archives, for safe, simple consumption and testing of this new generation of machine containers there, on your stable LTS.
Installing LXD on Trusty simply requires enabling the trusty-backports pocket, and installing the lxd package from there, with these 3 little commands:
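A sketch of those three commands, assuming trusty-backports needs to be enabled in your sources first:

```shell
# Enable the backports pocket, refresh, and pull lxd from it.
sudo add-apt-repository "deb http://archive.ubuntu.com/ubuntu trusty-backports main"
sudo apt-get update
sudo apt-get install -t trusty-backports lxd
```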
In minutes, you can launch your first LXD containers. First, inherit your new group permissions, so you can execute the lxc command as your non-root user. Then, import some images, and launch a new container named lovely-rita. Shell into that container, and examine the process tree, install some packages, check the disk and memory and cpu available. Finally, exit when you're done, and optionally delete the container.
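A sketch of that workflow, using the era's lxd-images importer (the container name lovely-rita is from the post):

```shell
newgrp lxd                               # pick up the new group membership
lxd-images import ubuntu --alias ubuntu  # fetch the Ubuntu image
lxc launch ubuntu lovely-rita            # start a new container
lxc exec lovely-rita -- bash             # shell in; try ps, df, free, apt
lxc stop lovely-rita && lxc delete lovely-rita   # optional cleanup
```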
I was able to run over 600 containers simultaneously on my Thinkpad (x250, 16GB of RAM), and over 60 containers on an m1.small in Amazon (1.6GB of RAM).
Canonical is delighted to sponsor ContainerCon 2015, a Linux Foundation event in Seattle next week, August 17-19, 2015. It's quite exciting to see the A-list of sponsors, many of them newcomers to this particular technology, teaming with energy around containers.
From chroots to BSD Jails and Solaris Zones, the concepts behind containers were established decades ago, and in fact traverse the spectrum of server operating systems. At Canonical, we've been working on containers in Ubuntu for more than half a decade, providing a home and resources for stewardship and maintenance of the upstream Linux Containers (LXC) project since 2010.
Last year, we publicly shared our designs for LXD -- a new stratum on top of LXC that endows the advantages of a traditional hypervisor into the faster, more efficient world of containers.
LXD is a persistent daemon that provides a clean RESTful interface to manage (start, stop, clone, migrate, etc.) any of the containers on a given host.
Hosts running LXD are handily federated into clusters of container hypervisors, and can work as Nova Compute nodes in OpenStack, for example, delivering Infrastructure-as-a-Service cloud technology at lower costs and greater speeds.
Here, LXD and Docker are quite complementary technologies. LXD furnishes a dynamic platform for "system containers" -- containers that behave like physical or virtual machines, supplying all of the functionality of a full operating system (minus the kernel, which is shared with the host). Such "machine containers" are the core of IaaS clouds, where users focus on instances with compute, storage, and networking that behave like traditional datacenter hardware.
LXD runs perfectly well along with Docker, which supplies a framework for "application containers" -- containers that enclose individual processes that often relate to one another as pools of micro services and deliver complex web applications.
Moreover, the Zen of LXD is that the underlying container implementation is actually decoupled from the RESTful API that drives LXD functionality. We are most excited to discuss next week at ContainerCon our work with Microsoft around the LXD RESTful API, as a cross-platform container management layer.
Ben Armstrong, a Principal Program Manager Lead at Microsoft on the core virtualization and container technologies, has this to say:
“As Microsoft is working to bring Windows Server Containers to the world – we are excited to see all the innovation happening across the industry, and have been collaborating with many projects to encourage and foster this environment. Canonical’s LXD project is providing a new way for people to look at and interact with container technologies. Utilizing ‘system containers’ to bring the advantages of container technology to the core of your cloud infrastructure is a great concept. We are looking forward to seeing the results of our engagement with Canonical in this space.”
Finally, if you're in Seattle next week, we hope you'll join us for the technical sessions we're leading at ContainerCon 2015, including: "Putting the D in LXD: Migration of Linux Containers", "Container Security - Past, Present, and Future", and "Large Scale Container Management with LXD and OpenStack". Details are below.
Date: Wednesday, August 19, 10:25am-11:15am
Title: Putting the D in LXD: Migration of Linux Containers
Speaker: Tycho Andersen
Location: Willow A
Abstract & Schedule: http://sched.co/3YTz
If you read my last post, perhaps you followed the embedded instructions and ran hundreds of LXD system containers on your own Ubuntu machine.
Or perhaps you're already a Docker enthusiast and your super savvy microservice architecture orchestrates dozens of applications among a pile of process containers.
Either way, the massive multiplication of containers everywhere introduces an interesting networking problem:
"How do thousands of containers interact with thousands of other containers efficiently over a network? What if every one of those containers could just route to one another?"
Canonical is pleased to introduce today an innovative solution that addresses this problem in perhaps the most elegant and efficient manner to date! We call it "The Fan" -- an extension of the network tunnel driver in the Linux kernel. The fan was conceived by Mark Shuttleworth and John Meinel, and implemented by Jay Vosburgh and Andy Whitcroft.
A Basic Overview
Each container host has a "fan bridge" that enables all of its containers to deterministically map network traffic to any other container on the fan network. I say "deterministically", in that there are no distributed databases, no consensus protocols, and no more overhead than IP-IP tunneling. [A more detailed technical description can be found here.] Quite simply, a /16 network gets mapped onto an unused /8 network, and container traffic is routed by the host via an IP tunnel.
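To make the mapping concrete, here's a small Python sketch of the idea. The helper names are hypothetical (they are not part of the fanctl tooling), and the bit layout is an assumption inferred from the demo addresses later in this post: the 16 host bits of the underlay /16 address become the middle bits of the overlay /8, leaving each host a /24 slice for its containers.

```python
import ipaddress

# Hypothetical helpers (NOT part of fanctl) sketching the deterministic
# fan address mapping for an overlay /8 over an underlay /16,
# e.g. "fanctl up 250.0.0.0/8 eth0/16".
def fan_subnet(overlay_net: str, host_ip: str) -> ipaddress.IPv4Network:
    overlay = ipaddress.ip_network(overlay_net)
    host = ipaddress.ip_address(host_ip)
    # Assumption: the low 16 bits of the host's underlay address become
    # bits 8-23 of the fan address, so each host owns a /24 of the /8.
    host_bits = int(host) & 0xFFFF
    base = int(overlay.network_address) | (host_bits << 8)
    return ipaddress.ip_network((base, 24))

def container_addr(overlay_net: str, host_ip: str, index: int) -> str:
    # The remaining 8 bits address individual containers on that host.
    return str(fan_subnet(overlay_net, host_ip)[index])

# A host at 10.0.0.27 in a 10.0.0.0/16 VPC owns 250.0.27.0/24,
# and its container number 3 gets 250.0.27.3.
print(fan_subnet("250.0.0.0/8", "10.0.0.27"))         # 250.0.27.0/24
print(container_addr("250.0.0.0/8", "10.0.0.27", 3))  # 250.0.27.3
```

No lookups, no coordination: any host can compute any container's route from the overlay prefix and the peer's underlay address alone.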
A Demo
Interested yet? Let's take it for a test drive in AWS...
First, launch two instances in EC2 (or your favorite cloud) in the same VPC. Ben Howard has created special test images for AWS and GCE, which include a modified Linux kernel, a modified iproute2 package, a new fanctl package, and Docker installed by default. You can find the right AMIs here.
Now, let's create a fan bridge on each of those two instances. We can create it on the command line using the new fanctl command, or we can put it in /etc/network/interfaces.d/eth0.cfg.
We'll do the latter, so that the configuration is persistent across boots.
$ cat /etc/network/interfaces.d/eth0.cfg
# The primary network interface
auto eth0
iface eth0 inet dhcp
up fanctl up 250.0.0.0/8 eth0/16 dhcp
down fanctl down 250.0.0.0/8 eth0/16
$ sudo ifup --force eth0
Now, let's send some traffic back and forth! Again, we can use ping and nc.
root@261ae39d90db:/# ping -c 3 250.0.27.3
PING 250.0.27.3 (250.0.27.3) 56(84) bytes of data.
64 bytes from 250.0.27.3: icmp_seq=1 ttl=62 time=0.563 ms
64 bytes from 250.0.27.3: icmp_seq=2 ttl=62 time=0.278 ms
64 bytes from 250.0.27.3: icmp_seq=3 ttl=62 time=0.260 ms
--- 250.0.27.3 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1998ms
rtt min/avg/max/mdev = 0.260/0.367/0.563/0.138 ms
root@261ae39d90db:/# echo "here come the bits" | nc 250.0.27.3 9876
root@261ae39d90db:/#
─────────────────────────────────────────────────────────────────────
root@ddd943163843:/# ping -c 3 250.0.28.3
PING 250.0.28.3 (250.0.28.3) 56(84) bytes of data.
64 bytes from 250.0.28.3: icmp_seq=1 ttl=62 time=0.434 ms
64 bytes from 250.0.28.3: icmp_seq=2 ttl=62 time=0.258 ms
64 bytes from 250.0.28.3: icmp_seq=3 ttl=62 time=0.269 ms
--- 250.0.28.3 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1998ms
rtt min/avg/max/mdev = 0.258/0.320/0.434/0.081 ms
root@ddd943163843:/# nc -l 9876
here come the bits
Alright, so now let's really bake your noodle...
That 250.0.0.0/8 network can actually be any /8 network. It could be a 10.* network or any other /8 that you choose. I've chosen to use something in the reserved Class E range (240.0.0.0/4, i.e. 240.* - 255.*), so as not to conflict with any other routable network.
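If you want to sanity-check a candidate overlay before deploying, Python's standard ipaddress module makes the reasoning above easy to verify (the helper name here is just for illustration):

```python
import ipaddress

# Sanity-check a candidate fan overlay: it must not overlap the
# underlay network the hosts actually live on.
def overlay_is_safe(overlay: str, underlay: str) -> bool:
    return not ipaddress.ip_network(overlay).overlaps(
        ipaddress.ip_network(underlay))

class_e = ipaddress.ip_network("240.0.0.0/4")  # reserved Class E space
fan = ipaddress.ip_network("250.0.0.0/8")

print(fan.subnet_of(class_e))                         # True
print(overlay_is_safe("250.0.0.0/8", "10.0.0.0/16"))  # True
print(overlay_is_safe("10.0.0.0/8", "10.0.0.0/16"))   # False: conflicts
```

The last case is why reusing 10.0.0.0/8 as the overlay would be a poor choice inside a 10.* VPC, while a Class E /8 is safe anywhere.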
Finally, let's test the performance a bit using iperf and Amazon's 10 Gbps instances!
So I fired up two c4.8xlarge instances, and configured the fan bridge there.
Multiple containers, on separate hosts, directly addressable to one another with nothing more than a single network device on each host. Deterministic routes. Blazing fast speeds. No distributed databases. No consensus protocols. Not an SDN. This is just amazing!
RFC
Give it a try and let us know what you think! We'd love to get your feedback and use cases as we work the kernel and userspace changes upstream.
Over the next few weeks, you'll see the fan patches landing in Wily, and backported to Trusty and Vivid. We are also drafting an RFC, as we think that other operating systems and the container world and the Internet at large would benefit from Fan Networking.
Dustin Kirkland (Twitter, LinkedIn) is an engineer at heart, with a penchant for reducing complexity and solving problems at the cross-sections of technology, business, and people.
With a degree in computer engineering from Texas A&M University (2001), his full-time career began as a software engineer at IBM in the Linux Technology Center working on the Linux kernel and security certifications, including a one-year stint as a dedicated engineer-in-residence at Red Hat in Boston (2005). Dustin was awarded the title Master Inventor at IBM, in recognition of his prolific patent work as an inventor and reviewer with IBM's intellectual property attorneys.
Dustin then first joined Canonical (2008) as an engineer (eventually, engineering manager), helping create the Ubuntu Server distribution and establishing Ubuntu as the overwhelming favorite Linux distribution in Amazon, Google, and Microsoft's cloud platforms, as well as authoring and maintaining dozens of new open source packages.
Dustin joined Gazzang (2011), a venture-backed start-up built around an open source project that he co-authored (eCryptFS), as Chief Technology Officer, and helped dozens of enterprise customers encrypt their data at rest and securely manage their keys. Gazzang was acquired by Cloudera (2014).
Having effectively monetized eCryptFS as an open source project at Gazzang, Dustin returned to Canonical (2013) as the VP of Product for Ubuntu and spent the next several years launching a portfolio of products and services (Ubuntu Advantage, Extended Security Maintenance, Canonical Livepatch, MAAS, OpenStack, Kubernetes) that continues to deliver considerable annual recurring revenue. With Canonical based in London, an 800+ work-from-home employee roster and customers spread across 40+ countries, Dustin traveled the world over, connecting with clients and colleagues steeped in rich cultural experiences.
Google Cloud (2018) recruited Dustin from Canonical to product manage Google's entrance into on-premises data centers with its GKE On-Prem (now, Anthos) offering, with a specific focus on the underlying operating system, hypervisor, and container security. This work afforded Dustin a view deep into the back end data center of many financial services companies, where he still sees tremendous opportunities for improvements in security, efficiencies, cost-reduction, and disruptive new technology adoption.
Seeking a growth-mode opportunity in the fintech sector, Dustin joined Apex Clearing (now, Apex Fintech Solutions) as the Chief Product Officer (2019), where he led several organizations including product management, field engineering, data science, and business partnerships. He drastically revamped Apex's product portfolio and product management processes, retooling away from a legacy "clearing house and custodian" and into a "software-as-a-service fintech" offering instant brokerage account opening, real-time fractional stock trading, and a secure closed-network crypto solution; he also led the acquisition and integration of Silver's tax and cost basis solution.
Drawn back into a large cap, Dustin joined Goldman Sachs (2021) as a Managing Director and Head of Platform Product Management, within the Consumer banking division, which included Marcus, and the Apple and GM credit cards. He built a cross-functional product management community and established numerous documented product management best practices, processes, and anti-patterns.
Dustin lives in Austin, Texas, with his wife Kim and their two wonderful daughters.