From the Canyon Edge -- :-Dustin

Monday, June 22, 2015

Container-to-Container Networking: The Bits have Hit the Fan!

A thing of beauty
If you read my last post, perhaps you followed the embedded instructions and ran hundreds of LXD system containers on your own Ubuntu machine.

Or perhaps you're already a Docker enthusiast and your super savvy microservice architecture orchestrates dozens of applications among a pile of process containers.

Either way, the massive multiplication of containers everywhere introduces an interesting networking problem:
"How do thousands of containers interact with thousands of other containers efficiently over a network?  What if every one of those containers could just route to one another?"

Canonical is pleased to introduce today an innovative solution that addresses this problem in perhaps the most elegant and efficient manner to date!  We call it "The Fan" -- an extension of the network tunnel driver in the Linux kernel.  The fan was conceived by Mark Shuttleworth and John Meinel, and implemented by Jay Vosburgh and Andy Whitcroft.

A Basic Overview

Each container host has a "fan bridge" that enables all of its containers to deterministically map network traffic to any other container on the fan network.  I say "deterministically", in that there are no distributed databases, no consensus protocols, and no more overhead than IP-IP tunneling.  [A more detailed technical description can be found here.]  Quite simply, a /16 network gets mapped onto an unused /8 network, and container traffic is routed by the host via an IP tunnel.
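To make that mapping concrete, here's a rough sketch of the arithmetic, using the addresses from the demo below (underlay 172.30.0.0/16, overlay 250.0.0.0/8).  This is only my illustration of the scheme, not the fanctl implementation itself:

# A host's last two underlay octets select its /24 slice of the overlay:
#   172.30.0.28  ->  250.0.28.0/24  (fan bridge at 250.0.28.1, containers at 250.0.28.x)
# Conversely, a packet destined for 250.A.B.C is IP-IP tunneled to underlay host 172.30.A.B.
underlay_to_overlay() {
    local a b c d
    IFS=. read -r a b c d <<< "$1"
    echo "250.${c}.${d}.0/24"
}
underlay_to_overlay 172.30.0.28    # prints 250.0.28.0/24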



A Demo

Interested yet?  Let's take it for a test drive in AWS...


First, launch two instances in EC2 (or your favorite cloud) in the same VPC.  Ben Howard has created special test images for AWS and GCE, which include a modified Linux kernel, a modified iproute2 package, a new fanctl package, and Docker installed by default.  You can find the right AMIs here.
Build and Publish report for trusty 20150621.1228.
-----------------------------------
BUILD INFO:
VERSION=14.04-LTS
STREAM=testing
BUILD_DATE=
BUG_NUMBER=1466602
STREAM="testing"
CLOUD=CustomAWS
SERIAL=20150621.1228
-----------------------------------
PUBLICATION REPORT:
NAME=ubuntu-14.04-LTS-testing-20150621.1228
SUITE=trusty
ARCH=amd64
BUILD=core
REPLICATE=1
IMAGE_FILE=/var/lib/jenkins/jobs/CloudImages-Small-CustomAWS/workspace/ARCH/amd64/trusty-server-cloudimg-CUSTOM-AWS-amd64-disk1.img
VERSION=14.04-LTS-testing-20150621.1228
INSTANCE_BUCKET=ubuntu-images-sandbox
INSTANCE_eu-central-1=ami-1aac9407
INSTANCE_sa-east-1=ami-59a22044
INSTANCE_ap-northeast-1=ami-3ae2453a
INSTANCE_eu-west-1=ami-d76623a0
INSTANCE_us-west-1=ami-238d7a67
INSTANCE_us-west-2=ami-53898c63
INSTANCE_ap-southeast-2=ami-ab95ef91
INSTANCE_ap-southeast-1=ami-98e9edca
INSTANCE_us-east-1=ami-b1a658da
EBS_BUCKET=ubuntu-images-sandbox
VOL_ID=vol-678e2c29
SNAP_ID=snap-efaa288b
EBS_eu-central-1=ami-b4ac94a9
EBS_sa-east-1=ami-e9a220f4
EBS_ap-northeast-1=ami-1aee491a
EBS_eu-west-1=ami-07602570
EBS_us-west-1=ami-318c7b75
EBS_us-west-2=ami-858b8eb5
EBS_ap-southeast-2=ami-558bf16f
EBS_ap-southeast-1=ami-faeaeea8
EBS_us-east-1=ami-afa25cc4
----
6cbd6751-6dae-4da7-acf3-6ace80c01acc




Next, ensure that those two instances can talk to one another.  Here, I tested that in both directions, using both ping and nc.

ubuntu@ip-172-30-0-28:~$ ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 0a:0a:8f:f8:cc:21  
          inet addr:172.30.0.28  Bcast:172.30.0.255  Mask:255.255.255.0
          inet6 addr: fe80::80a:8fff:fef8:cc21/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:9001  Metric:1
          RX packets:2904565 errors:0 dropped:0 overruns:0 frame:0
          TX packets:9919258 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:13999605561 (13.9 GB)  TX bytes:14530234506 (14.5 GB)

ubuntu@ip-172-30-0-28:~$ ping -c 3 172.30.0.27
PING 172.30.0.27 (172.30.0.27) 56(84) bytes of data.
64 bytes from 172.30.0.27: icmp_seq=1 ttl=64 time=0.289 ms
64 bytes from 172.30.0.27: icmp_seq=2 ttl=64 time=0.201 ms
64 bytes from 172.30.0.27: icmp_seq=3 ttl=64 time=0.192 ms

--- 172.30.0.27 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1998ms
rtt min/avg/max/mdev = 0.192/0.227/0.289/0.045 ms
ubuntu@ip-172-30-0-28:~$ nc -l 1234
hi mom
─────────────────────────────────────────────────────────────────────
ubuntu@ip-172-30-0-27:~$ ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 0a:26:25:9a:77:df  
          inet addr:172.30.0.27  Bcast:172.30.0.255  Mask:255.255.255.0
          inet6 addr: fe80::826:25ff:fe9a:77df/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:9001  Metric:1
          RX packets:11157399 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1671239 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:16519319463 (16.5 GB)  TX bytes:12019363671 (12.0 GB)

ubuntu@ip-172-30-0-27:~$ ping -c 3 172.30.0.28
PING 172.30.0.28 (172.30.0.28) 56(84) bytes of data.
64 bytes from 172.30.0.28: icmp_seq=1 ttl=64 time=0.245 ms
64 bytes from 172.30.0.28: icmp_seq=2 ttl=64 time=0.185 ms
64 bytes from 172.30.0.28: icmp_seq=3 ttl=64 time=0.186 ms

--- 172.30.0.28 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1998ms
rtt min/avg/max/mdev = 0.185/0.205/0.245/0.030 ms
ubuntu@ip-172-30-0-27:~$ echo "hi mom" | nc 172.30.0.28 1234

If that doesn't work, you might have to adjust your security group until it does.
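If you're scripting that, something along these lines ought to do it with the AWS CLI (a sketch only -- the group ID below is a placeholder, and you can just as easily open the traffic in the EC2 console).  This allows all traffic between instances in the same security group, which conveniently also covers the fan's IP-in-IP tunnel traffic between the hosts later on:

$ SG=sg-0123456789abcdef0    # hypothetical security group ID -- substitute your own
$ aws ec2 authorize-security-group-ingress --group-id $SG --protocol=-1 --source-group $SG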


Now, import the Ubuntu image into Docker on both instances.

$ sudo docker pull ubuntu:latest
Pulling repository ubuntu
...
e9938c931006: Download complete
9802b3b654ec: Download complete
14975cc0f2bc: Download complete
8d07608668f6: Download complete

Now, let's create a fan bridge on each of those two instances.  We can create it on the command line using the new fanctl command, or we can put it in /etc/network/interfaces.d/eth0.cfg.

We'll do the latter, so that the configuration is persistent across boots.

$ cat /etc/network/interfaces.d/eth0.cfg
# The primary network interface
auto eth0
iface eth0 inet dhcp
    up fanctl up 250.0.0.0/8 eth0/16 dhcp
    down fanctl down 250.0.0.0/8 eth0/16

$ sudo ifup --force eth0
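If you'd rather not persist the configuration, the equivalent one-off invocation, judging from the stanza above, would presumably be:

$ sudo fanctl up 250.0.0.0/8 eth0/16 dhcp
$ sudo fanctl down 250.0.0.0/8 eth0/16    # ...to tear it back down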

Now, let's look at our ifconfig...

$ ifconfig
docker0   Link encap:Ethernet  HWaddr 56:84:7a:fe:97:99  
          inet addr:172.17.42.1  Bcast:0.0.0.0  Mask:255.255.0.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

eth0      Link encap:Ethernet  HWaddr 0a:0a:8f:f8:cc:21  
          inet addr:172.30.0.28  Bcast:172.30.0.255  Mask:255.255.255.0
          inet6 addr: fe80::80a:8fff:fef8:cc21/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:9001  Metric:1
          RX packets:2905229 errors:0 dropped:0 overruns:0 frame:0
          TX packets:9919652 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:13999655286 (13.9 GB)  TX bytes:14530269365 (14.5 GB)

fan-250-0-28 Link encap:Ethernet  HWaddr 00:00:00:00:00:00  
          inet addr:250.0.28.1  Bcast:0.0.0.0  Mask:255.255.255.0
          inet6 addr: fe80::8032:4dff:fe3b:a108/64 Scope:Link
          UP BROADCAST MULTICAST  MTU:1480  Metric:1
          RX packets:304246 errors:0 dropped:0 overruns:0 frame:0
          TX packets:245532 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:13697461502 (13.6 GB)  TX bytes:37375505 (37.3 MB)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:1622 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1622 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:198717 (198.7 KB)  TX bytes:198717 (198.7 KB)

lxcbr0    Link encap:Ethernet  HWaddr 3a:6b:3c:9b:80:45  
          inet addr:10.0.3.1  Bcast:0.0.0.0  Mask:255.255.255.0
          inet6 addr: fe80::386b:3cff:fe9b:8045/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:648 (648.0 B)

tunl0     Link encap:IPIP Tunnel  HWaddr   
          UP RUNNING NOARP  MTU:1480  Metric:1
          RX packets:242799 errors:0 dropped:0 overruns:0 frame:0
          TX packets:302666 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:12793620 (12.7 MB)  TX bytes:13697374375 (13.6 GB)

Pay special attention to the new fan-250-0-28 device!  I've only shown this on one of my instances, but you should check both.

Now, let's tell Docker to use that device as its default bridge.

$ fandev=$(ifconfig | grep ^fan- | awk '{print $1}')
$ echo $fandev
fan-250-0-28
$ echo "DOCKER_OPTS='-d -b $fandev --mtu=1480 --iptables=false'" | \
      sudo tee -a /etc/default/docker*

Make sure you restart the docker.io service.  Note that it might be called docker.

$ sudo service docker.io restart || sudo service docker restart
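Optionally, sanity-check that the daemon came back up with the fan options by inspecting its command line (just a quick verification, not strictly required):

$ ps -ef | grep [d]ocker    # look for '-b fan-250-0-28 --mtu=1480' in the output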

Now we can launch a Docker container in each of our two EC2 instances...

$ sudo docker run -it ubuntu
root@261ae39d90db:/# ifconfig eth0
eth0      Link encap:Ethernet  HWaddr e2:f4:fd:f7:b7:f5  
          inet addr:250.0.28.3  Bcast:0.0.0.0  Mask:255.255.255.0
          inet6 addr: fe80::e0f4:fdff:fef7:b7f5/64 Scope:Link
          UP BROADCAST RUNNING  MTU:1480  Metric:1
          RX packets:7 errors:0 dropped:2 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:558 (558.0 B)  TX bytes:648 (648.0 B)


And here's a second one, on my other instance...

sudo docker run -it ubuntu
root@ddd943163843:/# ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 66:fa:41:e7:ad:44  
          inet addr:250.0.27.3  Bcast:0.0.0.0  Mask:255.255.255.0
          inet6 addr: fe80::64fa:41ff:fee7:ad44/64 Scope:Link
          UP BROADCAST RUNNING  MTU:1480  Metric:1
          RX packets:12 errors:0 dropped:2 overruns:0 frame:0
          TX packets:13 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:936 (936.0 B)  TX bytes:1026 (1.0 KB)

Now, let's send some traffic back and forth!  Again, we can use ping and nc.



root@261ae39d90db:/# ping -c 3 250.0.27.3
PING 250.0.27.3 (250.0.27.3) 56(84) bytes of data.
64 bytes from 250.0.27.3: icmp_seq=1 ttl=62 time=0.563 ms
64 bytes from 250.0.27.3: icmp_seq=2 ttl=62 time=0.278 ms
64 bytes from 250.0.27.3: icmp_seq=3 ttl=62 time=0.260 ms
--- 250.0.27.3 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1998ms
rtt min/avg/max/mdev = 0.260/0.367/0.563/0.138 ms
root@261ae39d90db:/# echo "here come the bits" | nc 250.0.27.3 9876
root@261ae39d90db:/# 
─────────────────────────────────────────────────────────────────────
root@ddd943163843:/# ping -c 3 250.0.28.3
PING 250.0.28.3 (250.0.28.3) 56(84) bytes of data.
64 bytes from 250.0.28.3: icmp_seq=1 ttl=62 time=0.434 ms
64 bytes from 250.0.28.3: icmp_seq=2 ttl=62 time=0.258 ms
64 bytes from 250.0.28.3: icmp_seq=3 ttl=62 time=0.269 ms
--- 250.0.28.3 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1998ms
rtt min/avg/max/mdev = 0.258/0.320/0.434/0.081 ms
root@ddd943163843:/# nc -l 9876
here come the bits

Alright, so now let's really bake your noodle...

That 250.0.0.0/8 network can actually be any /8 network.  It could be a 10.* network or any other /8 that you choose.  I've chosen to use something in the reserved Class E range, 240.* - 255.* so as not to conflict with any other routable network.
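For example, assuming the same syntax as above, switching to a 10.0.0.0/8 overlay would presumably be as simple as the following -- provided nothing else on your network already uses that 10.* space:

$ sudo fanctl up 10.0.0.0/8 eth0/16 dhcp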

Finally, let's test the performance a bit using iperf and Amazon's 10 Gbps instances!

So I fired up two c4.8xlarge instances, and configured the fan bridge there.
$ fanctl show
Bridge           Overlay              Underlay             Flags
fan-250-0-28     250.0.0.0/8          172.30.0.28/16       dhcp host-reserve 1

And
$ fanctl show
Bridge           Overlay              Underlay             Flags
fan-250-0-27     250.0.0.0/8          172.30.0.27/16       dhcp host-reserve 1

Would you believe 5.46 Gigabits per second, between two Docker containers on separate hosts, directly addressed over the network?  Witness...

Server 1...

root@84364bf2bb8b:/# ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 92:73:32:ac:9c:fe  
          inet addr:250.0.27.2  Bcast:0.0.0.0  Mask:255.255.255.0
          inet6 addr: fe80::9073:32ff:feac:9cfe/64 Scope:Link
          UP BROADCAST RUNNING  MTU:1480  Metric:1
          RX packets:173770 errors:0 dropped:2 overruns:0 frame:0
          TX packets:107628 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:6871890397 (6.8 GB)  TX bytes:7190603 (7.1 MB)

root@84364bf2bb8b:/# iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 250.0.27.2 port 5001 connected with 250.0.28.2 port 35165
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  6.36 GBytes  5.46 Gbits/sec

And Server 2...

root@04fb9317c269:/# ifconfig eth0
eth0      Link encap:Ethernet  HWaddr c2:6a:26:13:c5:95  
          inet addr:250.0.28.2  Bcast:0.0.0.0  Mask:255.255.255.0
          inet6 addr: fe80::c06a:26ff:fe13:c595/64 Scope:Link
          UP BROADCAST RUNNING  MTU:1480  Metric:1
          RX packets:109230 errors:0 dropped:2 overruns:0 frame:0
          TX packets:150164 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:28293821 (28.2 MB)  TX bytes:6849336379 (6.8 GB)

root@04fb9317c269:/# iperf -c 250.0.27.2
multicast ttl failed: Invalid argument
------------------------------------------------------------
Client connecting to 250.0.27.2, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  3] local 250.0.28.2 port 35165 connected with 250.0.27.2 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  6.36 GBytes  5.47 Gbits/sec

Multiple containers, on separate hosts, directly addressable to one another with nothing more than a single network device on each host.  Deterministic routes.  Blazing fast speeds.  No distributed databases.  No consensus protocols.  Not an SDN.  This is just amazing!

RFC

Give it a try and let us know what you think!  We'd love to get your feedback and use cases as we work the kernel and userspace changes upstream.

Over the next few weeks, you'll see the fan patches landing in Wily, and backported to Trusty and Vivid.  We are also drafting an RFC, as we think that other operating systems, the container world, and the Internet at large would benefit from Fan Networking.

I'm already a fan!
Dustin

Thursday, June 11, 2015

LXD Challenge: How many containers can you run on your machine?

652 Linux containers running on a Laptop?  Are you kidding me???

A couple of weeks ago, at the OpenStack Summit in Vancouver, Canonical released the results of some scalability testing of Linux containers (LXC) managed by LXD.

Ryan Harper and James Page presented their results -- some 536 Linux containers on a very modest little Intel server (16GB of RAM), versus 37 KVM virtual machines.

Ryan has published the code he used for the benchmarking, and I've used it to reproduce the test on my dev laptop (Thinkpad x230, 16GB of RAM, Intel i7-3520M).

I managed to pack a whopping 652 Ubuntu 14.04 LTS (Trusty) containers on my Ubuntu 15.04 (Vivid) laptop!


The system load peaked at 1056 (!!!), but I was using merely 56% of 15.4GB of system memory.  Amazingly, my Unity desktop and Byobu command line were still perfectly responsive, as were the containers that I ssh'd into.  (Aside: makes me wonder if the Linux system load average is accounting for container processes correctly...)


Check out the process tree for a few hundred system containers here!

As for KVM, I managed to launch 31 virtual machines without KSM enabled, and 65 virtual machines with KSM enabled and working hard.  So that puts somewhere between 10x and 21x as many containers as virtual machines on the same laptop.

You can now repeat these tests, if you like.  Please share your results with #LXD on Google+ or Twitter!

I'd love to see someone try this in AWS, anywhere from an m3.small to an r3.8xlarge, and share your results ;-)

Density test instructions

## Install lxd
$ sudo add-apt-repository ppa:ubuntu-lxc/lxd-git-master
$ sudo apt-get update
$ sudo apt-get install -y lxd bzr
$ cd /tmp
## At this point, it's a good idea to logout/login or reboot
## for your new group permissions to get applied
## Grab the tests, disable the tools download
$ bzr branch lp:~raharper/+junk/density-check
$ cd density-check
$ mkdir lxd_tools
## Periodically squeeze your cache
$ sudo bash -x -c 'while true; do sleep 30; \
    echo 3 | sudo tee /proc/sys/vm/drop_caches; \
    free; done' &
## Run the LXD test
$ ./density-check-lxd --limit=mem:512m --load=idle release=trusty arch=amd64
## Run the KVM test
$ ./density-check-kvm --limit=mem:512m --load=idle release=trusty arch=amd64

As for the speed-of-launch test, I'll cover that in a follow-up post!

Can you contain your excitement?

Cheers!
Dustin

Thursday, May 14, 2015

Ubuntu: Make Wonderful Things Possible!


In November of 2006, Canonical held an "all hands" event, which included a team building exercise.  Several teams recorded "Ubuntu commercials".

On one of the teams, Mark "Borat" Shuttleworth amusingly proffered,
"Ubuntu make wonderful things possible, for example, Linux appliance, with Ubuntu preinstalled, we call this -- the fridge!"


Nine years later, that tongue-in-cheek parody is no longer a joke.  It's a "cold" hard reality!

GE Appliances, FirstBuild, and Ubuntu announced a collaboration around a smart refrigerator, available today for $749, running Snappy Ubuntu Core on a Raspberry Pi 2, with multiple USB ports and available in-fridge accessories.  We had one in our booth at IoT World in San Francisco this week!










While the fridge prediction is indeed pretty amazing, the line that strikes me most is actually "Ubuntu make(s) wonderful things possible!"

With emphasis on "things".  As in, "Internet of Things."  The possibilities are absolutely endless in this brave new world of Snappy Ubuntu.  And that is indeed wonderful.

So what are you making with Ubuntu?!?

:-Dustin

Thursday, April 23, 2015

1stBuild Hackathon -- GE Smart Appliances and Snappy Ubuntu

A prototype is worth a thousand meetings -- Words to live by!
A couple of weeks ago, I had the pleasure of attending the 1stBuild Hackathon -- Hack the Home -- sponsored by GE, Canonical, and a host of other smart companies in the IoT space.


Over 250 makers -- hardware and software geeks much like myself -- competed in teams for cash prizes, all night long, in a 36-hour event at the amazing hackerspace hosted by 1stBuild and the University of Louisville in Kentucky.


Mark Shuttleworth recorded this message, played in the kickoff keynote, to start the hackathon:



Several entries did in fact use Snappy Ubuntu as the base operating system, including the 3rd Place entry, a Smart Crockpot!


I'll quote Jason Chodynieki, on the team that built that device, since I couldn't write it any better:
"I wanted to highlight that this project makes use of Snappy Ubuntu Core! Using Snappy, we were able to create a very modular application that could easily be updated across multiple devices if this project ever made it to production. Snappy provided us with the ability to use popular frameworks very easily and to package our application up as a Snap to make it accessible to the world. With Snappy and the associated CrockWatch snap, we are capable of dropping CrockWatch onto any device that is receiving sensor data from a Crockpot. Because of this, the CrockWatch application can not only run on the webserver (on a Raspberry Pi 2) we used for this project, but it can also be used on other devices. Imagine if your set top box on your TV could help show you what's cooking in the Crock Pot or if the screen on your fridge was capable of displaying this information! With Ubuntu Snappy, these thoughts could soon become reality!"

My wife absolutely loves this idea!  She often starts cooking dinner in the morning, in our slow cooker, and then spends the rest of the day running around town, dropping our kids off and picking them up from two different schools.  She would love the ability to remotely "check in" on the food, look at it from a camera, and adjust the temperature and pressure while out and about around town!



GE had a whole array of appliances available at the event, including this fridge, any of which could be controlled through a special interface and a Raspberry Pi 2 running Snappy.


All in all, it was a fantastic event.  A big thanks to our hosts at 1stBuild and our colleagues at GE who introduced us to the event.  And an even bigger thanks to all the participants that worked with Ubuntu on their devices and to my colleague Massimo who helped them out!

Happy Hacking,
Dustin

Monday, March 16, 2015

SXSW 2015 Slides and Audio from Fingerprints are Usernames, Not Passwords



This morning, I led a "core conversation" session in the Security and Privacy track at the SXSW Interactive festival.  With 60 seats in the room, it was standing room only, and unfortunately, some people were turned away from the session due to a lack of space.  Amazingly, that was a packed house at 9:30am on a Sunday morning, merely stumbling distance from the late night party that is 6th Street in Austin, Texas!

I'm pleased to share with you both the slides and a rudimentary audio recording from the mic on my laptop.  The format of a "core conversation" at SXSW is not your typical conference lecture.  Rather, it's an interactive, dynamic, social exchange of ideas and thoughts.  I hope you enjoy!

Slides:


Audio:


Have a great South-by!
Dustin

Wednesday, January 28, 2015

Security and Biometrics: SXSW Preview Q&A


Rebecca: Can you give me a brief overview of why you see it as a problem that our personal biometrics, at this point mostly fingerprints, are being used to authenticate our actions rather than identify us?

Dustin: How many emails have you received, to date, from some online service or another saying, "We're sorry, but our site was attacked, and while we don't think your password was compromised, we think you should change it anyway, for good measure"?

Surely you've seen this once or twice, right?  And if you're like me, you kind of take a deep breath, and think, "Oh man, that's inconvenient..."

Now, what if that site used some form of biometrics instead?  Let's say your fingerprint.  Or your eyeball.  How would that email read?  You want me to change my fingerprints!?!  My eyeballs!?!

That's ridiculous, of course, but it perfectly shows the problem. Biometrics are not changeable.  You couldn't alter them if you tried. Being able to change, rotate, and strengthen passwords is one of the most fundamental properties of authentication tokens -- and completely missing from all forms of biometrics!

That's just one of a number of problems with biometrics.  I'll cover more in my talk ;-)

Rebecca: Is biometrics something you've worked with professionally or what has piqued your interest in the area?  What made you want to do a panel on the issue?

Dustin: Sort of.  I've long maintained and developed an encrypted filesystem for Linux, called eCryptfs.  In 2008, I was asked to add eCryptfs support for Thinkpad's fingerprint reader.  After thinking about it for a while, I refused to do so, with the core arguments being much of what I described above.  With that refusal to support fingerprint readers in 2009, I seemed to have picked a few fights and arguments with various users.

All was pretty quiet on the home front, until Apple released an iPhone with a built-in fingerprint reader in late 2013, and I blogged this piece that criticized the idea accordingly: http://blog.dustinkirkland.com/2013/10/fingerprints-are-user-names-not.html

That blog post in October 2013 sort of did the viral thing on social media, I guess, seeing almost a million unique views in about a month.

Rebecca: I feel embarrassed to admit that I had simply never thought of this issue until seeing your panel synopsis.  Then, it seemed incredibly obvious and I found myself looking at my phone's fingerprint scanner suspiciously.  Why do you think the public has had so little response to biometrics in technology, other than seeing it as a neat feature of a particular gadget?

Dustin: On the surface, it seems like such a good idea.  We've all seen Mission Impossible or 007 or countless other spy movies where Hollywood portrays biometrics as the authentication mechanism of the future.  But it's just that...  Bad pulp fiction.

There are plenty of ideas that probably seemed like a good idea at first, right?  Examples: Clippy, The Hindenburg, New Coke, Tanning beds, The Shake Weight, Subprime Mortgages, Leaded Gasoline.  Think about it for just a minute, though.  A passenger blimp filled with hydrogen?  An annoying cartoon character that always knows more than you?  Massive-scale lending to high-risk individuals, packed into mortgage-backed securities?  Dig a little deeper and these were actually misapplications from the beginning.  We'll be in the same place with biometrics, I have no doubt.

Rebecca: Have there been any instances that you're aware of where the technology has been compromised?

Dustin: The Chaos Computer Club has demonstrated a compromise of Apple's TouchID: http://arstechnica.com/apple/2013/09/chaos-computer-club-hackers-trick-apples-touchid-security-feature/

TouchID is actually pretty high resolution.  The Thinkpad fingerprint readers, until recently, could be fooled with a piece of scotch tape: https://pacsec.jp/psj06/psj06krissler-e.pdf

Rebecca: In the future, if we continue down the current path do you see identity theft including the hacking of our fingerprints and voice patterns in addition to our credit card info?

Dustin: I certainly hope we can curtail this doomed path of technology before we get to that point...

But if we don't, then yes, absolutely.  All of your biometrics are easily collected in public places, without your knowledge.


  • Your fingerprints are on your coffee mug and every beer bottle you've ever picked up with your bare hands.
  • Your hair, dandruff, and dead skin contain your DNA.
  • High resolution digital cameras can pick up your iris in incredible detail (less so for the retina, currently).
  • Facial recognition -- seriously, unless you've taken exorbitant steps, your face is all over Facebook, Google, LinkedIn, etc., and everywhere you go in public today, there are security monitors.
  • The same goes for vocal recognition.  Surely you've heard, "This call may be recorded for training purposes".  Sure, that's fine.  But do you go spilling the master password to all of your accounts to that phone support agent?  Well, if you use voice recognition for your authentication, then that's exactly what you've done.

Rebecca: Beyond crime, what are the civil liberties issues you see being entwined with biometrics technology?  Could the government theoretically access this information in much the same way they have our email and phone records in the past?

Dustin: Theoretically, yes.  And that "theoretically, yes" is enough for me to be very concerned.

Is Apple colluding with the NSA/FBI/CIA/etc?  I am most certainly NOT making that accusation.

Could they, or anyone else in the biometrics business?  Most certainly.  They could even be coerced or forced to do so.  And they could do so unknowingly.  And it might not even be "the good guys".  Anyone of this magnitude is a target for attacks by less-than-savory governments or crime organizations.

Moreover, I strongly recommend that everyone consider their biometrics compromised.  As I said above, you leave a trail of your fingerprints, DNA, face, voice, etc. everywhere you go.  Just accept that they're not secret, and don't pretend that they are :-)

Rebecca: What are some places where you see biometrics as appropriate and useful?

Dustin: Back to the title of the presentation, I think biometrics are decent as a "username", just not as a "password".

Is your name secret?  No, not really.  Is your email address secret? No, not really, either.

That's what biometrics are -- another expression of your "identity".  They can be used to replace, or rather, look up, your name, username, or email address from a list, as they're just another expression of that information.

Now, a password is something entirely different.  A password is how you "prove" your identity.  It must be long, and very hard to guess.  You have to be able to change it.  And you have to keep separate passwords for different accounts, so that a compromise of one account can't compromise another.

Rebecca: What are your thoughts on SXSW Interactive as a venue for such discussion?

Dustin: I think it's a fantastic venue!  I attended SXSW Interactive in 2014, and was very impressed with the quality of speakers and discussion around security, privacy, identity, and civil liberties.  I immediately regretted that I didn't submit this talk for the 2014 conference, and resolved to definitely do so for 2015.  Unfortunately, this subject is still important and topical in 2015 :-(  Which means we still have some work to do!

Rebecca: Finally, are there any other panels you're especially looking forward to?

Dustin: All of the Open Source ones (of which there are a lot!), as that's really my passion.  If I had to pick three right now that I'm definitely attending, it would be these:


Cheers,
Dustin

Monday, January 26, 2015

Introducing PetName libraries for Golang, Python, and Shell

Gratuitous picture of my pets, the day after we rescued them
The PetName libraries (Shell, Python, Golang) can generate infinite combinations of human readable UUIDs


Some Background

In March 2014, when I first started looking after MAAS as a product manager, I raised a minor feature request in Bug #1287224, noting that the random, 5-character hostnames that MAAS generates are not ideal. You can't read them or pronounce them or remember them easily. I'm talking about hostnames like: sldna, xwknd, hwrdz or wkrpb. From that perspective, they're not very friendly. Certainly not very Ubuntu.

We're not alone, in that respect. Amazon generates forgettable instance names like i-15a4417c, as do most virtual machine and container systems.


Meanwhile, there is a reasonably well-known concept -- Zooko's Triangle -- which says that names should be:
  • Human-meaningful: The quality of meaningfulness and memorability to the users of the naming system. Domain names and nicknaming are naming systems that are highly memorable
  • Decentralized: The lack of a centralized authority for determining the meaning of a name. Instead, measures such as a Web of trust are used.
  • Secure: The quality that there is one, unique and specific entity to which the name maps. For instance, domain names are unique because there is just one party able to prove that they are the owner of each domain name.
And, of course we know what XKCD has to say on a somewhat similar matter :-)

So I proposed a few different ways of automatically generating those names, modeled mostly after Ubuntu's own beloved code-naming scheme -- Adjective Animal. To get the number of combinations high enough to model any reasonable MAAS user, though, we used Adjective Noun instead of Adjective Animal.

I collected an Adjective list and a Noun list from a blog run by moms, in the interest of having a nice, soft, friendly, non-offensive source of words.

For the most part, the feature served its purpose. We now get memorable, pronounceable names. However, we get a few oddballs in there from time to time. Most are humorous. But some combinations would prove, in fact, to be inappropriate, or perhaps even offensive to some people.

Accepting that, I started thinking about other solutions.

In the meantime, I realized that Docker had recently launched something similar, their NamesGenerator, which pairs an Adjective with a Famous Scientist's Last Name (except they have explicitly blacklisted boring_wozniak, because "Steve Wozniak is not boring", of course!).


Similarly, Github itself now also "suggests" random repo names.



I liked one part of the Docker approach better -- the use of proper names, rather than random nouns.

On the other hand, their approach is hard-coded into the Docker Golang source itself, and not easily usable or portable elsewhere.

Moreover, there are only a few dozen Adjectives (57) and Names (76), yielding only about 4K combinations (4332) -- which is not nearly enough for MAAS's purposes, where we're shooting for 16M+, with minimal collisions (i.e., covering a Class A network).

Introducing the PetName Libraries

I decided to scrap the Nouns list, and instead build a Names list. I started with Last Names (like Docker), but instead focused on First Names, and built a list of about 6,000 names from public census data.  I also built a new list of nearly 38,000 Adjectives.

The combination actually works pretty well! While smelly-Susan isn't particularly charming, it's certainly not an ad hominem attack targeted at any particular Susan! That 6,000 x 38,000 gives us well over 228 million unique combinations!

Moreover, I also thought about how I could actually make it infinitely extensible... The simple rules of English allow Adjectives to modify Nouns, while Adverbs can recursively modify other Adverbs or Adjectives.   How convenient!

So I built a word list of Adverbs (13,000) as well, and added support for specifying the "number" of words in a PetName.
  1. If you want 1, you get a random Name 
  2. If you want 2, you get a random Adjective followed by a Name 
  3. If you want 3 or more, you get N-2 Adverbs, an Adjective and a Name 
Oh, and the separator is now optional, and can be any character or string, with a default of a hyphen, "-".
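Here's a rough sketch of that rule in shell, assuming hypothetical word-list files adverbs.txt, adjectives.txt, and names.txt with one word per line (the real petname utilities below are the proper implementations):

petname_sketch() {
    local words=${1:-2} sep=${2:--} parts="" i
    for i in $(seq 1 $((words - 2))); do
        parts="${parts}$(shuf -n1 adverbs.txt)${sep}"                        # N-2 adverbs
    done
    [ "$words" -ge 2 ] && parts="${parts}$(shuf -n1 adjectives.txt)${sep}"   # one adjective
    echo "${parts}$(shuf -n1 names.txt)"                                     # always end with a Name
}
petname_sketch 3 "-"    # e.g. something like listlessly-easygoing-Radia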

In fact:
  • 2 words will generate over 221 million unique combinations, over 2^27 combinations
  • 3 words will generate over 2.8 trillion unique combinations, over 2^41 combinations (more than 32-bit space)
  • 4 words can generate over 2^55 combinations
  • 5 words can generate over 2^68 combinations (more than 64-bit space)
Interestingly, you need 10 words to cover 128-bit space!  So it's

unstoutly-clashingly-assentingly-overimpressibly-nonpermissibly-unfluently-chimerically-frolicly-irrational-wonda

versus

b9643037-4a79-412c-b7fc-80baa7233a31
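As a quick back-of-the-envelope check of those counts (a sketch, assuming roughly 6,000 names, 38,000 adjectives, and 13,000 adverbs, as described above):

$ awk 'BEGIN {
    names = 6000; adjectives = 38000; adverbs = 13000;
    for (w = 2; w <= 10; w++) {
        bits = (w - 2) * log(adverbs) / log(2) + log(adjectives) / log(2) + log(names) / log(2);
        printf("%2d words: more than 2^%d combinations\n", w, int(bits));
    }
}'

With those list sizes, 9 words falls just short of 128 bits, and 10 words comfortably crosses it.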

Shell

So once the algorithm was spec'd out, I built and packaged a simple shell utility and text word lists, called petname, which are published at:
The packages are already in Ubuntu 15.04 (Vivid). On any other version of Ubuntu, you can use the PPA:

$ sudo apt-add-repository ppa:petname/ppa
$ sudo apt-get update

And:
$ sudo apt-get install petname
$ petname
itchy-Marvin
$ petname -w 3
listlessly-easygoing-Radia
$ petname -s ":" -w 5
onwardly:unflinchingly:debonairly:vibrant:Chandler

Python

That's only really useful from the command line, though. In MAAS, we'd want this in a native Python library. So it was really easy to create python-petname, source now published at:
The packages are already in Ubuntu 15.04 (Vivid). On any other version of Ubuntu, you can use the PPA:

$ sudo apt-add-repository ppa:python-petname/ppa
$ sudo apt-get update

And:
$ sudo apt-get install python-petname
$ python-petname
flaky-Megan
$ python-petname -w 4
mercifully-grimly-fruitful-Salma
$ python-petname -s "" -w 2
filthyLaurel

Using it in your own Python code looks as simple as this:

$ python
>>> import petname
>>> foo = petname.Generate(3, "_")
>>> print(foo)
boomingly_tangible_Mikayla

Golang


In the way that NamesGenerator is useful to Docker, I thought a Golang library might be useful for us in LXD (and perhaps even usable by Docker or others too), so I created:
Of course you can use "go get" to fetch the Golang package:

$ export GOPATH=$HOME/go
$ mkdir -p $GOPATH
$ export PATH=$PATH:$GOPATH/bin
$ go get github.com/dustinkirkland/golang-petname

And also, the packages are already in Ubuntu 15.04 (Vivid). On any other version of Ubuntu, you can use the PPA:

$ sudo apt-add-repository ppa:golang-petname/ppa
$ sudo apt-get update

And:
$ sudo apt-get install golang-petname
$ golang-petname
quarrelsome-Cullen
$ golang-petname -words=1
Vivian
$ golang-petname -separator="|" -words=10
snobbily|oracularly|contemptuously|discordantly|lachrymosely|afterwards|coquettishly|politely|elaborate|Samir

Using it in your own Golang code looks as simple as this:

package main

import (
        "flag"
        "fmt"
        "math/rand"
        "time"

        "github.com/dustinkirkland/golang-petname"
)

func main() {
        flag.Parse()
        // Seed the random number generator, so each run yields a different name
        rand.Seed(time.Now().UnixNano())
        fmt.Println(petname.Generate(2, ""))
}
Gratuitous picture of my pets, 7 years later.
Cheers,
happily-hacking-Dustin
