From the Canyon Edge -- :-Dustin
Showing posts with label Linux. Show all posts

Tuesday, October 18, 2016

Hotfix Your Ubuntu Kernels with the Canonical Livepatch Service!

Introducing the Canonical Livepatch Service
Howdy!

Ubuntu 16.04 LTS’s 4.4 Linux kernel includes an important new security capability in Ubuntu -- the ability to modify the running Linux kernel code, without rebooting, through a mechanism called kernel livepatch.

Today, Canonical has publicly launched the Canonical Livepatch Service -- an authenticated, encrypted, signed stream of Linux livepatches that apply to the 64-bit Intel/AMD architecture of the Ubuntu 16.04 LTS (Xenial) Linux 4.4 kernel, addressing the highest and most critical security vulnerabilities, without requiring a reboot in order to take effect.  This is particularly amazing for Container hosts -- Docker, LXD, etc. -- as all of the containers share the same kernel, and thus all instances benefit.



I’ve tried to answer below some questions that you might have. If you have others, you’re welcome
to add them in the comments below or on Twitter with the hashtag #Livepatch.

Retrieve your token from ubuntu.com/livepatch

Q: How do I enable the Canonical Livepatch Service?

A: Three easy steps, on a fully up-to-date 64-bit Ubuntu 16.04 LTS system.
  1. Go to https://ubuntu.com/livepatch and retrieve your livepatch token
  2. Install the canonical-livepatch snap
      $ sudo snap install canonical-livepatch 
  3. Enable the service with your token
      $ sudo canonical-livepatch enable [TOKEN] 
    And you’re done! You can check the status at any time using:

    $ canonical-livepatch status --verbose

    Q: What are the system requirements?

    A: The Canonical Livepatch Service is available for the generic and low latency flavors of the 64-bit Intel/AMD (aka x86_64, amd64) builds of the Ubuntu 16.04 LTS (Xenial) kernel, which is a Linux 4.4 kernel. Canonical livepatches work on Ubuntu 16.04 LTS Servers and Desktops, on physical machines, virtual machines, and in the cloud. Safety, security, and stability firmly depend on unmodified Ubuntu kernels and network access to the Canonical Livepatch Service (https://livepatch.canonical.com:443).  You will also need to apt update/upgrade to the latest version of snapd (at least 2.15).
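    Before enabling the service, you can sanity-check your snapd version with a quick sketch like this (the 2.15 floor comes from the answer above; the sort -V comparison is just one way to do it):

```shell
# preflight sketch: make sure snapd is at least 2.15 before enabling livepatch
required="2.15"
current=$(snap version 2>/dev/null | awk '/^snapd/ {print $2}')
if [ -z "$current" ]; then
    echo "snapd not found; run: sudo apt update && sudo apt install snapd"
elif [ "$(printf '%s\n' "$required" "$current" | sort -V | head -n1)" = "$required" ]; then
    echo "snapd $current is new enough"
else
    echo "snapd $current is too old; run: sudo apt update && sudo apt upgrade"
fi
```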

    Q: What about other architectures?

    A: The upstream Linux livepatch functionality is currently limited to the 64-bit x86 architecture. IBM is working on support for POWER8 and s390x (LinuxONE mainframe), and there’s also active upstream development on ARM64, so we do plan to support these eventually. The livepatch plumbing for 32-bit ARM and 32-bit x86 is not under upstream development at this time.

    Q: What about other flavors?

    A: We are providing the Canonical Livepatch Service for the generic and low latency (telco) flavors of the Linux kernel at this time.

    Q: What about other releases of Ubuntu?

    A: The Canonical Livepatch Service is provided for Ubuntu 16.04 LTS’s Linux 4.4 kernel. Older releases of Ubuntu will not work, because they’re missing the Linux kernel support. Interim releases of Ubuntu (e.g. Ubuntu 16.10) are targeted at developers and early adopters, rather than Long Term Support users or systems that require maximum uptime.  We will consider providing livepatches for the HWE kernels in 2017.

    Q: What about derivatives of Ubuntu?

    A: Canonical livepatches are fully supported on the 64-bit Ubuntu 16.04 LTS Desktop, Cloud, and Server operating systems. On other Ubuntu derivatives, your mileage may vary! These are not part of our automated continuous integration quality assurance testing framework for Canonical Livepatches. Canonical Livepatch safety, security, and stability will firmly depend on unmodified Ubuntu generic kernels and network access to the Canonical Livepatch Service.

    Q: How does Canonical test livepatches?

    A: Every livepatch is rigorously tested in Canonical's in-house CI/CD (Continuous Integration / Continuous Delivery) quality assurance system, which tests hundreds of combinations of livepatches, kernels, hardware, physical machines, and virtual machines.  Once a livepatch passes CI/CD and regression tests, it's rolled out on a canary testing basis, first to a tiny percentage of the Ubuntu Community users of the Canonical Livepatch Service. Based on the success of that microscopic rollout, a moderate rollout follows.  And assuming those also succeed, the livepatch is delivered to all free Ubuntu Community and paid Ubuntu Advantage users of the service.  Systemic failures are automatically detected and raised for inspection by Canonical engineers.  Ubuntu Community users of the Canonical Livepatch Service who want to eliminate the small chance of being randomly chosen as a canary should enroll in the Ubuntu Advantage program (starting at $12/month).

    Q: What kinds of updates will be provided by the Canonical Livepatch Service?

    A: The Canonical Livepatch Service is intended to address high and critical severity Linux kernel security vulnerabilities, as identified by Ubuntu Security Notices and the CVE database. Note that there are some limitations to the kernel livepatch technology -- some Linux kernel code paths cannot be safely patched while running. We will do our best to supply Canonical Livepatches for high and critical vulnerabilities in a timely fashion whenever possible. There may be occasions when the traditional kernel upgrade and reboot might still be necessary. We’ll communicate that clearly through the usual mechanisms -- USNs, Landscape, Desktop Notifications, Byobu, /etc/motd, etc.

    Q: What about non-security bug fixes, stability, performance, or hardware enablement updates?

    A: Canonical will continue to provide Linux kernel updates addressing bugs, stability issues, performance problems, and hardware compatibility on our usual cadence -- about every 3 weeks. These updates can be easily applied using ‘sudo apt update; sudo apt upgrade -y’, using the Desktop “Software Updates” application, or Landscape systems management. These standard (non-security) updates will still require a reboot, as they always have.

    Q: Can I rollback a Canonical Livepatch?

    A: Rolling back or removing an already-inserted livepatch module is currently disabled in Linux 4.4. This is because we need a way to determine whether we are currently executing inside a patched function before safely removing it. We can, however, safely apply new livepatches on top of each other, and even repatch functions over and over.

    Q: What about low and medium severity CVEs?

    A: We’re currently focusing our Canonical Livepatch development and testing resources on high and critical security vulnerabilities, as determined by the Ubuntu Security Team.  We'll livepatch other CVEs opportunistically.

    Q: Why are Canonical Livepatches provided as a subscription service?

    A: The Canonical Livepatch Service provides a secure, encrypted, authenticated connection, to ensure that only properly signed livepatch kernel modules -- and most importantly, the right modules -- are delivered directly to your system, with extremely high quality testing wrapped around it.

    Q: But I don’t want to buy UA support!

    A: You don’t have to! Canonical is providing the Canonical Livepatch Service to community users of Ubuntu, at no charge for up to 3 machines (desktop, server, virtual machines, or cloud instances). A randomly chosen subset of the free users of Canonical Livepatches will receive their Canonical Livepatches slightly earlier than the rest of the free users or UA users, as a lightweight canary testing mechanism, benefiting all Canonical Livepatch users (free and UA). Once those canary livepatches apply safely, all Canonical Livepatch users will receive their live updates.

    Q: But I don’t have an Ubuntu SSO account!

    A: An Ubuntu SSO account is free, and provides services similar to those that Google, Microsoft, and Apple provide for Android, Windows, and Mac devices, respectively. You can create your Ubuntu SSO account here.

    Q: But I don’t want to log in to ubuntu.com!

    A: You don’t have to! Canonical Livepatch is absolutely not required to maintain the security of any Ubuntu desktop or server! You may continue to freely and anonymously ‘sudo apt update; sudo apt upgrade; sudo reboot’ as often as you like, receive all of the same updates, and simply reboot after kernel updates, as you always have with Ubuntu.

    Q: But I don't have Internet access to livepatch.canonical.com:443!

    A: You should think of the Canonical Livepatch Service much like you think of Netflix, Pandora, or Dropbox.  It's an Internet streaming service for security hotfixes for your kernel.  You have access to the stream of bits when you can connect to the service over the Internet.  On the flip side, your machines are already thoroughly secured, since they're so heavily firewalled off from the rest of the world!

    Q: Where’s the source code?

    A: The source code of livepatch modules can be found here.  The source code of the canonical-livepatch client is part of Canonical's Landscape system management product and is commercial software.

    Q: What about Ubuntu Core?

    A: Canonical Livepatches for Ubuntu Core are on the roadmap, and may be available in late 2016, for 64-bit Intel/AMD architectures. Canonical Livepatches for ARM-based IoT devices depend on upstream support for livepatches.

    Q: How does this compare to Oracle Ksplice, RHEL Live Patching and SUSE Live Patching?

    A: While the concepts are largely the same, the technical implementations and the commercial terms are very different:

    • Oracle Ksplice uses its own technology, which is not in upstream Linux.
    • RHEL and SUSE currently use their own homegrown kpatch/kgraft implementations, respectively.
    • Canonical Livepatching uses the upstream Linux Kernel Live Patching technology.
    • Ksplice is free, but unsupported, for Ubuntu Desktops, and only available for Oracle Linux and RHEL servers with an Oracle Linux Premier Support license ($2,299/node/year).
    • It’s a little unclear how to subscribe to RHEL Kernel Live Patching, but it appears that you need to first be a RHEL customer, and then enroll in the SIG (Special Interest Group) through your TAM (Technical Account Manager), which requires a Red Hat Enterprise Linux Server Premium subscription at $1,299/node/year.  (I'm happy to be corrected and update this post.)
    • SUSE Live Patching is available as an add-on to SUSE Linux Enterprise Server 12 Priority Support subscription at $1,499/node/year, but does come with a free music video.
    • Canonical Livepatching is available for every Ubuntu Advantage customer, starting at our entry level UA Essential for $150/node/year, and available for free to community users of Ubuntu.

    Q: What happens if I run into problems/bugs with Canonical Livepatches?

    A: Ubuntu Advantage customers will file a support request at support.canonical.com where it will be serviced according to their UA service level agreement (Essential, Standard, or Advanced). Ubuntu community users will file a bug report on Launchpad and we'll service it on a best effort basis.

    Q: Why does canonical-livepatch client/server have a proprietary license?

    A: The canonical-livepatch client is part of the Landscape family of tools available to Canonical support customers. We are enabling free access to the Canonical Livepatch Service for Ubuntu community users as a mark of our appreciation for the broader Ubuntu community, and in exchange for occasional, automatic canary testing.

    Q: How do I build my own livepatches?

    A: It’s certainly possible to build your own Linux kernel livepatches, but doing so requires considerable skill, time, and computing power, and even more effort to test comprehensively. Rest assured that this is the real value of using the Canonical Livepatch Service! That said, Chris Arges blogged a howto for the curious a while back:

    http://chrisarges.net/2015/09/21/livepatch-on-ubuntu.html

    Q: How do I get notifications of which CVEs are livepatched and which are not?

    A: You can, at any time, query the status of the canonical-livepatch daemon using: ‘canonical-livepatch status --verbose’. This command will show any livepatches successfully applied, any outstanding/unapplied livepatches, and any error conditions. Moreover, you can monitor the Ubuntu Security Notices RSS feed and the ubuntu-security-announce mailing list.

    Q: Isn't livepatching just a big ole rootkit?

    A: Canonical Livepatches inject kernel modules to replace sections of binary code in the running kernel. This requires the CAP_SYS_MODULE capability, the same capability required to modprobe any module into the Linux kernel. If you already have that capability (root does, by default, on Ubuntu), then you already have the ability to arbitrarily modify the kernel, with or without Canonical Livepatches. If you’re an Ubuntu sysadmin and you want to disable module loading (and thereby also disable Canonical Livepatches), simply ‘echo 1 | sudo tee /proc/sys/kernel/modules_disabled’.

    Keep the uptime!
    :-Dustin

    Tuesday, August 9, 2016

    Howdy, Windows! A Six-part Series about Ubuntu-on-Windows for Linux.com


    I hope you'll enjoy a shiny new 6-part blog series I recently published at Linux.com.
    1. The first article is a bit of back story, perhaps a behind-the-scenes look at the motivations, timelines, and some of the work performed between Microsoft and Canonical to bring Ubuntu to Windows.
    2. The second article is an updated getting-started guide, with screenshots, showing a Windows 10 user exactly how to enable and run Ubuntu on Windows.
    3. The third article walks through a dozen or so examples of the most essential command line utilities a Windows user, new to Ubuntu (and Bash), should absolutely learn.
    4. The fourth article shows how to write and execute your first script, "Howdy, Windows!", in 6 different dynamic scripting languages (Bash, Python, Perl, Ruby, PHP, and NodeJS).
    5. The fifth article demonstrates how to write, compile, and execute your first program in several compiled programming languages (C, C++, Fortran, Golang).
    6. The sixth and final article conducts some performance benchmarks of the CPU, Memory, Disk, and Network, in both native Ubuntu on a physical machine, and Ubuntu on Windows running on the same system.
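    For a taste of the fourth article's exercise, the Bash variant of "Howdy, Windows!" is about as small as a script gets (a sketch in the spirit of the series, not the article's exact listing):

```shell
#!/bin/bash
# "Howdy, Windows!" -- the simplest possible Bash script to run under Ubuntu on Windows
echo "Howdy, Windows!"
```

Save it as howdy.sh, mark it executable with chmod +x howdy.sh, and run it with ./howdy.sh.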
    I really enjoyed writing these.  Hopefully you'll try some of the examples, and share your experiences using Ubuntu native utilities on a Windows desktop.  You can find the source code of the programming examples in Github and Launchpad:
    Cheers,
    Dustin

    Thursday, June 16, 2016

    sudo purge-old-kernels: Recover some disk space!


    If you have long-running Ubuntu systems (server or desktop), and you keep those systems up to date, you will, over time, accumulate a lot of Linux kernels.

    Canonical's Ubuntu Kernel Team regularly (about once a month) provides kernel updates, patching security issues, fixing bugs, and enabling new hardware drivers.  The apt utility tries its best to remove unneeded packages, from time to time, but kernels are a little tricky, due to their version strings.

    Over time, you might find your /boot directory filled with vmlinuz kernels, consuming a considerable amount of disk space.  Sometimes, sudo apt-get autoremove will clean these up.  However, it doesn't always work very well (especially if you install a version of Ubuntu that's not yet released).

    What's the safest way to clean these up?  (This question has been asked numerous times, on the UbuntuForums.org and AskUbuntu.com.)

    The definitive answer is:

    sudo purge-old-kernels
    

    You'll already have the purge-old-kernels command in Ubuntu 16.04 LTS (and later), as part of the byobu package.  In earlier releases of Ubuntu, you might need to install the bikeshed package; you can grab it directly from Launchpad or Github.

    Here, for example, I'll save almost 700MB of disk space, by removing kernels I no longer need:

    $ sudo purge-old-kernels 
    Reading package lists... Done
    Building dependency tree       
    Reading state information... Done
    The following packages will be REMOVED:
      linux-headers-4.4.0-10-generic* linux-headers-4.4.0-12-generic* linux-headers-4.4.0-15-generic* linux-headers-4.4.0-16-generic*
      linux-headers-4.4.0-17-generic* linux-headers-4.4.0-18-generic* linux-image-4.4.0-10-generic* linux-image-4.4.0-12-generic*
      linux-image-4.4.0-15-generic* linux-image-4.4.0-16-generic* linux-image-4.4.0-17-generic* linux-image-4.4.0-18-generic*
      linux-image-extra-4.4.0-17-generic* linux-image-extra-4.4.0-18-generic*
    0 upgraded, 0 newly installed, 14 to remove and 196 not upgraded.
    After this operation, 696 MB disk space will be freed.
    Do you want to continue? [Y/n] 
    

    From the manpage:
    purge-old-kernels will remove old kernel and header packages from the system, freeing disk space. It will never remove the currently running kernel. By default, it will keep at least the latest 2 kernels, but the user can override that value using the --keep parameter. Any additional parameters will be passed directly to apt-get(8).
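    Putting the manpage options together, a non-interactive cleanup might look like this sketch (the --keep value and the -qy passthrough flags are illustrative choices, not requirements):

```shell
# keep only the 3 newest kernels; -qy is passed through to apt-get (quiet, assume yes)
if command -v purge-old-kernels >/dev/null 2>&1; then
    sudo purge-old-kernels --keep 3 -qy
else
    echo "purge-old-kernels not found; it ships in byobu (16.04+) or bikeshed"
fi
```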
    Full disclosure: I'm the author of the purge-old-kernels utility.

    Enjoy,
    :-Dustin

    Thursday, February 18, 2016

    ZFS Licensing and Linux


    We at Canonical have conducted a legal review, including discussion with the industry's leading software freedom legal counsel, of the licenses that apply to the Linux kernel and to ZFS.

    And in doing so, we have concluded that we are acting within the rights granted, and in compliance with the terms, of both of those licenses.  Others have independently reached the same conclusion.  Differing opinions exist, but please bear in mind that these are opinions.

    While the CDDL and GPLv2 are both "copyleft" licenses, they have different scope.  The CDDL applies to all files under the CDDL, while the GPLv2 applies to derivative works.

    The CDDL cannot apply to the Linux kernel because zfs.ko is a self-contained file system module -- the kernel itself is quite obviously not a derivative work of this new file system.

    And zfs.ko, as a self-contained file system module, is clearly not a derivative work of the Linux kernel but rather quite obviously a derivative work of OpenZFS and OpenSolaris.  Equivalent exceptions have existed for many years, for various other stand alone, self-contained, non-GPL kernel modules.

    Our conclusion is good for Ubuntu users, good for Linux, and good for all of free and open source software.

    As we have already reached the conclusion, we are not interested in debating license compatibility, but of course welcome the opportunity to discuss the technology.

    Cheers,
    Dustin

    EDIT: This post was updated to link to the supportive position paper from Eben Moglen of the SFLC, an amicus brief from James Bottomley, as well as the contrarian position from Bradley Kuhn and the SFC.

    Monday, June 22, 2015

    Container-to-Container Networking: The Bits have Hit the Fan!

    A thing of beauty
    If you read my last post, perhaps you followed the embedded instructions and ran hundreds of LXD system containers on your own Ubuntu machine.

    Or perhaps you're already a Docker enthusiast and your super savvy microservice architecture orchestrates dozens of applications among a pile of process containers.

    Either way, the massive multiplication of containers everywhere introduces an interesting networking problem:
    "How do thousands of containers interact with thousands of other containers efficiently over a network?  What if every one of those containers could just route to one another?"

    Canonical is pleased to introduce today an innovative solution that addresses this problem in perhaps the most elegant and efficient manner to date!  We call it "The Fan" -- an extension of the network tunnel driver in the Linux kernel.  The fan was conceived by Mark Shuttleworth and John Meinel, and implemented by Jay Vosburgh and Andy Whitcroft.

    A Basic Overview

    Each container host has a "fan bridge" that enables all of its containers to deterministically map network traffic to any other container on the fan network.  I say "deterministically", in that there are no distributed databases, no consensus protocols, and no more overhead than IP-IP tunneling.  [A more detailed technical description can be found here.]  Quite simply, a /16 network gets mapped onto an unused /8 network, and container traffic is routed by the host via an IP tunnel.
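    To make that mapping concrete, here's an illustrative sketch of how a host's underlay address determines its overlay subnet (this is my reading of the description above; the real mapping is performed by the kernel and fanctl):

```shell
# illustrative only: derive a host's fan subnet from its /16 underlay address
underlay="172.30.0.28"   # host address on the /16 underlay network
overlay=250              # the chosen /8 overlay
IFS=. read -r o1 o2 host3 host4 <<EOF
$underlay
EOF
# the host bits (the last two octets of the /16) slide under the overlay /8,
# leaving a /24 of container addresses per host
echo "fan subnet for $underlay: ${overlay}.${host3}.${host4}.0/24"
```

This prints "fan subnet for 172.30.0.28: 250.0.28.0/24", matching the fan-250-0-28 bridge and 250.0.28.1 address seen in the demo below.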



    A Demo

    Interested yet?  Let's take it for a test drive in AWS...


    First, launch two instances in EC2 (or your favorite cloud) in the same VPC.  Ben Howard has created special test images for AWS and GCE, which include a modified Linux kernel, a modified iproute2 package, a new fanctl package, and Docker installed by default.  You can find the right AMIs here.
    Build and Publish report for trusty 20150621.1228.
    -----------------------------------
    BUILD INFO:
    VERSION=14.04-LTS
    STREAM=testing
    BUILD_DATE=
    BUG_NUMBER=1466602
    STREAM="testing"
    CLOUD=CustomAWS
    SERIAL=20150621.1228
    -----------------------------------
    PUBLICATION REPORT:
    NAME=ubuntu-14.04-LTS-testing-20150621.1228
    SUITE=trusty
    ARCH=amd64
    BUILD=core
    REPLICATE=1
    IMAGE_FILE=/var/lib/jenkins/jobs/CloudImages-Small-CustomAWS/workspace/ARCH/amd64/trusty-server-cloudimg-CUSTOM-AWS-amd64-disk1.img
    VERSION=14.04-LTS-testing-20150621.1228
    INSTANCE_BUCKET=ubuntu-images-sandbox
    INSTANCE_eu-central-1=ami-1aac9407
    INSTANCE_sa-east-1=ami-59a22044
    INSTANCE_ap-northeast-1=ami-3ae2453a
    INSTANCE_eu-west-1=ami-d76623a0
    INSTANCE_us-west-1=ami-238d7a67
    INSTANCE_us-west-2=ami-53898c63
    INSTANCE_ap-southeast-2=ami-ab95ef91
    INSTANCE_ap-southeast-1=ami-98e9edca
    INSTANCE_us-east-1=ami-b1a658da
    EBS_BUCKET=ubuntu-images-sandbox
    VOL_ID=vol-678e2c29
    SNAP_ID=snap-efaa288b
    EBS_eu-central-1=ami-b4ac94a9
    EBS_sa-east-1=ami-e9a220f4
    EBS_ap-northeast-1=ami-1aee491a
    EBS_eu-west-1=ami-07602570
    EBS_us-west-1=ami-318c7b75
    EBS_us-west-2=ami-858b8eb5
    EBS_ap-southeast-2=ami-558bf16f
    EBS_ap-southeast-1=ami-faeaeea8
    EBS_us-east-1=ami-afa25cc4
    ----
    6cbd6751-6dae-4da7-acf3-6ace80c01acc
    




    Next, ensure that those two instances can talk to one another.  Here, I tested that in both directions, using both ping and nc.

    ubuntu@ip-172-30-0-28:~$ ifconfig eth0
    eth0      Link encap:Ethernet  HWaddr 0a:0a:8f:f8:cc:21  
              inet addr:172.30.0.28  Bcast:172.30.0.255  Mask:255.255.255.0
              inet6 addr: fe80::80a:8fff:fef8:cc21/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:9001  Metric:1
              RX packets:2904565 errors:0 dropped:0 overruns:0 frame:0
              TX packets:9919258 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000 
              RX bytes:13999605561 (13.9 GB)  TX bytes:14530234506 (14.5 GB)
    
    ubuntu@ip-172-30-0-28:~$ ping -c 3 172.30.0.27
    PING 172.30.0.27 (172.30.0.27) 56(84) bytes of data.
    64 bytes from 172.30.0.27: icmp_seq=1 ttl=64 time=0.289 ms
    64 bytes from 172.30.0.27: icmp_seq=2 ttl=64 time=0.201 ms
    64 bytes from 172.30.0.27: icmp_seq=3 ttl=64 time=0.192 ms
    
    --- 172.30.0.27 ping statistics ---
    3 packets transmitted, 3 received, 0% packet loss, time 1998ms
    rtt min/avg/max/mdev = 0.192/0.227/0.289/0.045 ms
    ubuntu@ip-172-30-0-28:~$ nc -l 1234
    hi mom
    ─────────────────────────────────────────────────────────────────────
    ubuntu@ip-172-30-0-27:~$ ifconfig eth0
    eth0      Link encap:Ethernet  HWaddr 0a:26:25:9a:77:df  
              inet addr:172.30.0.27  Bcast:172.30.0.255  Mask:255.255.255.0
              inet6 addr: fe80::826:25ff:fe9a:77df/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:9001  Metric:1
              RX packets:11157399 errors:0 dropped:0 overruns:0 frame:0
              TX packets:1671239 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000 
              RX bytes:16519319463 (16.5 GB)  TX bytes:12019363671 (12.0 GB)
    
    ubuntu@ip-172-30-0-27:~$ ping -c 3 172.30.0.28
    PING 172.30.0.28 (172.30.0.28) 56(84) bytes of data.
    64 bytes from 172.30.0.28: icmp_seq=1 ttl=64 time=0.245 ms
    64 bytes from 172.30.0.28: icmp_seq=2 ttl=64 time=0.185 ms
    64 bytes from 172.30.0.28: icmp_seq=3 ttl=64 time=0.186 ms
    
    --- 172.30.0.28 ping statistics ---
    3 packets transmitted, 3 received, 0% packet loss, time 1998ms
    rtt min/avg/max/mdev = 0.185/0.205/0.245/0.030 ms
    ubuntu@ip-172-30-0-27:~$ echo "hi mom" | nc 172.30.0.28 1234
    

    If that doesn't work, you might have to adjust your security group until it does.


    Now, import the Ubuntu image in Docker in both instances.

    $ sudo docker pull ubuntu:latest
    Pulling repository ubuntu
    ...
    e9938c931006: Download complete
    9802b3b654ec: Download complete
    14975cc0f2bc: Download complete
    8d07608668f6: Download complete
    

    Now, let's create a fan bridge on each of those two instances.  We can create it on the command line using the new fanctl command, or we can put it in /etc/network/interfaces.d/eth0.cfg.

    We'll do the latter, so that the configuration is persistent across boots.

    $ cat /etc/network/interfaces.d/eth0.cfg
    # The primary network interface
    auto eth0
    iface eth0 inet dhcp
        up fanctl up 250.0.0.0/8 eth0/16 dhcp
        down fanctl down 250.0.0.0/8 eth0/16
    
    $ sudo ifup --force eth0
    

    Now, let's look at our ifconfig...

    $ ifconfig
    docker0   Link encap:Ethernet  HWaddr 56:84:7a:fe:97:99  
              inet addr:172.17.42.1  Bcast:0.0.0.0  Mask:255.255.0.0
              UP BROADCAST MULTICAST  MTU:1500  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0 
              RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
    
    eth0      Link encap:Ethernet  HWaddr 0a:0a:8f:f8:cc:21  
              inet addr:172.30.0.28  Bcast:172.30.0.255  Mask:255.255.255.0
              inet6 addr: fe80::80a:8fff:fef8:cc21/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:9001  Metric:1
              RX packets:2905229 errors:0 dropped:0 overruns:0 frame:0
              TX packets:9919652 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000 
              RX bytes:13999655286 (13.9 GB)  TX bytes:14530269365 (14.5 GB)
    
    fan-250-0-28 Link encap:Ethernet  HWaddr 00:00:00:00:00:00  
              inet addr:250.0.28.1  Bcast:0.0.0.0  Mask:255.255.255.0
              inet6 addr: fe80::8032:4dff:fe3b:a108/64 Scope:Link
              UP BROADCAST MULTICAST  MTU:1480  Metric:1
              RX packets:304246 errors:0 dropped:0 overruns:0 frame:0
              TX packets:245532 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0 
              RX bytes:13697461502 (13.6 GB)  TX bytes:37375505 (37.3 MB)
    
    lo        Link encap:Local Loopback  
              inet addr:127.0.0.1  Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING  MTU:65536  Metric:1
              RX packets:1622 errors:0 dropped:0 overruns:0 frame:0
              TX packets:1622 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0 
              RX bytes:198717 (198.7 KB)  TX bytes:198717 (198.7 KB)
    
    lxcbr0    Link encap:Ethernet  HWaddr 3a:6b:3c:9b:80:45  
              inet addr:10.0.3.1  Bcast:0.0.0.0  Mask:255.255.255.0
              inet6 addr: fe80::386b:3cff:fe9b:8045/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0 
              RX bytes:0 (0.0 B)  TX bytes:648 (648.0 B)
    
    tunl0     Link encap:IPIP Tunnel  HWaddr   
              UP RUNNING NOARP  MTU:1480  Metric:1
              RX packets:242799 errors:0 dropped:0 overruns:0 frame:0
              TX packets:302666 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0 
              RX bytes:12793620 (12.7 MB)  TX bytes:13697374375 (13.6 GB)
    
    
    Pay special attention to the new fan-250-0-28 device!  I've only shown this on one of my instances, but you should check both.

    Now, let's tell Docker to use that device as its default bridge.

    $ fandev=$(ifconfig | grep ^fan- | awk '{print $1}')
    $ echo $fandev
    fan-250-0-28
    $ echo "DOCKER_OPTS='-d -b $fandev --mtu=1480 --iptables=false'" | \
          sudo tee -a /etc/default/docker*
    

    Make sure you restart the docker.io service.  Note that it might be called docker.

    $ sudo service docker.io restart || sudo service docker restart
    

    Now we can launch a Docker container in each of our two EC2 instances...

    $ sudo docker run -it ubuntu
    root@261ae39d90db:/# ifconfig eth0
    eth0      Link encap:Ethernet  HWaddr e2:f4:fd:f7:b7:f5  
              inet addr:250.0.28.3  Bcast:0.0.0.0  Mask:255.255.255.0
              inet6 addr: fe80::e0f4:fdff:fef7:b7f5/64 Scope:Link
              UP BROADCAST RUNNING  MTU:1480  Metric:1
              RX packets:7 errors:0 dropped:2 overruns:0 frame:0
              TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000 
              RX bytes:558 (558.0 B)  TX bytes:648 (648.0 B)
    


    And here's a second one, on my other instance...

    sudo docker run -it ubuntu
    root@ddd943163843:/# ifconfig eth0
    eth0      Link encap:Ethernet  HWaddr 66:fa:41:e7:ad:44  
              inet addr:250.0.27.3  Bcast:0.0.0.0  Mask:255.255.255.0
              inet6 addr: fe80::64fa:41ff:fee7:ad44/64 Scope:Link
              UP BROADCAST RUNNING  MTU:1480  Metric:1
              RX packets:12 errors:0 dropped:2 overruns:0 frame:0
              TX packets:13 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000 
              RX bytes:936 (936.0 B)  TX bytes:1026 (1.0 KB)
    

    Now, let's send some traffic back and forth!  Again, we can use ping and nc.



    root@261ae39d90db:/# ping -c 3 250.0.27.3
    PING 250.0.27.3 (250.0.27.3) 56(84) bytes of data.
    64 bytes from 250.0.27.3: icmp_seq=1 ttl=62 time=0.563 ms
    64 bytes from 250.0.27.3: icmp_seq=2 ttl=62 time=0.278 ms
    64 bytes from 250.0.27.3: icmp_seq=3 ttl=62 time=0.260 ms
    --- 250.0.27.3 ping statistics ---
    3 packets transmitted, 3 received, 0% packet loss, time 1998ms
    rtt min/avg/max/mdev = 0.260/0.367/0.563/0.138 ms
    root@261ae39d90db:/# echo "here come the bits" | nc 250.0.27.3 9876
    root@261ae39d90db:/# 
    ─────────────────────────────────────────────────────────────────────
    root@ddd943163843:/# ping -c 3 250.0.28.3
    PING 250.0.28.3 (250.0.28.3) 56(84) bytes of data.
    64 bytes from 250.0.28.3: icmp_seq=1 ttl=62 time=0.434 ms
    64 bytes from 250.0.28.3: icmp_seq=2 ttl=62 time=0.258 ms
    64 bytes from 250.0.28.3: icmp_seq=3 ttl=62 time=0.269 ms
    --- 250.0.28.3 ping statistics ---
    3 packets transmitted, 3 received, 0% packet loss, time 1998ms
    rtt min/avg/max/mdev = 0.258/0.320/0.434/0.081 ms
    root@ddd943163843:/# nc -l 9876
    here come the bits
    

    Alright, so now let's really bake your noodle...

    That 250.0.0.0/8 network can actually be any /8 network.  It could be a 10.* network or any other /8 that you choose.  I've chosen to use something in the reserved Class E range, 240.* - 255.* so as not to conflict with any other routable network.
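    For instance, swapping in a 10.* overlay is just a matter of changing the /8 in the fanctl invocation shown earlier (a hypothetical variant, guarded in case fanctl isn't installed):

```shell
# hypothetical: bring up the same fan bridge on a 10.0.0.0/8 overlay instead
if command -v fanctl >/dev/null 2>&1; then
    sudo fanctl up 10.0.0.0/8 eth0/16 dhcp
else
    echo "fanctl not installed; see the fanctl package mentioned above"
fi
```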

    Finally, let's test the performance a bit using iperf and Amazon's 10Gbps instances!

    So I fired up two c4.8xlarge instances and configured the fan bridge on each.  On the first:
    $ fanctl show
    Bridge           Overlay              Underlay             Flags
    fan-250-0-28     250.0.0.0/8          172.30.0.28/16       dhcp host-reserve 1
    

    And on the second:
    $ fanctl show
    Bridge           Overlay              Underlay             Flags
    fan-250-0-27     250.0.0.0/8          172.30.0.27/16       dhcp host-reserve 1
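    The fanctl output above shows why no distributed database or consensus protocol is needed: each host's slice of the overlay is derived deterministically from its underlay address.  As a minimal sketch (a reconstruction for illustration only, not part of the fan tooling; the function name is mine), splicing the host bits of the /16 underlay address into the /8 overlay prefix yields each host's /24:

    ```python
    def fan_subnet(overlay_prefix: str, underlay_ip: str) -> str:
        """Derive a host's per-host /24 overlay subnet by splicing the
        host bits of its /16 underlay address into the /8 overlay."""
        overlay_octet = overlay_prefix.split("/")[0].split(".")[0]  # e.g. "250"
        _, _, u3, u4 = underlay_ip.split(".")  # host bits of the /16 underlay
        return "{}.{}.{}.0/24".format(overlay_octet, u3, u4)

    # Matches the two bridges shown by fanctl above:
    print(fan_subnet("250.0.0.0/8", "172.30.0.27"))  # 250.0.27.0/24
    print(fan_subnet("250.0.0.0/8", "172.30.0.28"))  # 250.0.28.0/24
    ```

    That leaves the final 8 bits of each /24 for containers on that host, which is why any host can route a packet for 250.0.28.x straight to underlay 172.30.0.28 without asking anyone.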
    

    Would you believe 5.46 Gigabits per second, between two Docker instances, directly addressed over a network?  Witness...

    Server 1...

    root@84364bf2bb8b:/# ifconfig eth0
    eth0      Link encap:Ethernet  HWaddr 92:73:32:ac:9c:fe  
              inet addr:250.0.27.2  Bcast:0.0.0.0  Mask:255.255.255.0
              inet6 addr: fe80::9073:32ff:feac:9cfe/64 Scope:Link
              UP BROADCAST RUNNING  MTU:1480  Metric:1
              RX packets:173770 errors:0 dropped:2 overruns:0 frame:0
              TX packets:107628 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000 
              RX bytes:6871890397 (6.8 GB)  TX bytes:7190603 (7.1 MB)
    
    root@84364bf2bb8b:/# iperf -s
    ------------------------------------------------------------
    Server listening on TCP port 5001
    TCP window size: 85.3 KByte (default)
    ------------------------------------------------------------
    [  4] local 250.0.27.2 port 5001 connected with 250.0.28.2 port 35165
    [ ID] Interval       Transfer     Bandwidth
    [  4]  0.0-10.0 sec  6.36 GBytes  5.46 Gbits/sec
    

    And Server 2...

    root@04fb9317c269:/# ifconfig eth0
    eth0      Link encap:Ethernet  HWaddr c2:6a:26:13:c5:95  
              inet addr:250.0.28.2  Bcast:0.0.0.0  Mask:255.255.255.0
              inet6 addr: fe80::c06a:26ff:fe13:c595/64 Scope:Link
              UP BROADCAST RUNNING  MTU:1480  Metric:1
              RX packets:109230 errors:0 dropped:2 overruns:0 frame:0
              TX packets:150164 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000 
              RX bytes:28293821 (28.2 MB)  TX bytes:6849336379 (6.8 GB)
    
    root@04fb9317c269:/# iperf -c 250.0.27.2
    multicast ttl failed: Invalid argument
    ------------------------------------------------------------
    Client connecting to 250.0.27.2, TCP port 5001
    TCP window size: 85.0 KByte (default)
    ------------------------------------------------------------
    [  3] local 250.0.28.2 port 35165 connected with 250.0.27.2 port 5001
    [ ID] Interval       Transfer     Bandwidth
    [  3]  0.0-10.0 sec  6.36 GBytes  5.47 Gbits/sec
    

    Multiple containers, on separate hosts, directly addressable to one another with nothing more than a single network device on each host.  Deterministic routes.  Blazing fast speeds.  No distributed databases.  No consensus protocols.  Not an SDN.  This is just amazing!

    RFC

    Give it a try and let us know what you think!  We'd love to get your feedback and use cases as we work the kernel and userspace changes upstream.

    Over the next few weeks, you'll see the fan patches landing in Wily, and backported to Trusty and Vivid.  We are also drafting an RFC, as we think that other operating systems and the container world and the Internet at large would benefit from Fan Networking.

    I'm already a fan!
    Dustin

    Thursday, February 7, 2013

    Linux: Won't you be our Valentine?



    It will be a lovely week next week!

    Valentine's Day is next Thursday, February 14th, of course.  Make sure you have chocolate and beautiful flowers for your sweetheart.  And remember, nothing says, "Was just thinking of you" like finding something cute on Pinterest and pinning it on their wall.  (I need to go figure out how to do that, actually.)  And, for extra bonus points, call Mom too!  She'll just love that you thought of her, too, on V-day ;-)


    Near and dear to my heart, I'm personally excited that Gazzang will be introduced as one of the newest card-carrying members of the Linux Foundation!  I've been an individual member of the Foundation for years, and have attended nearly a dozen LF events.  We're extremely, extremely proud to add Gazzang to its very impressive list of active corporate members.  What excellent company!  I feel that we at Gazzang are differentiating ourselves from our competitors with comprehensive offerings around big data security, enterprise class encryption, and innovative key management -- all built exclusively in and on top of Linux.


    And in celebration of all this love, Gazzang's fabulous marketing department has created a special Valentine's Day card for Linux, speaking on behalf of enterprises and big businesses far and wide that are just head over heels in love with the Penguin :-)  Enjoy!



    XOXO,
    :-Dustin

    Sunday, March 18, 2012

    Patchwork of Open Source Memories


    One of the biggest differences in my new job is that I have to commute into the office every day.  And with that, comes the second biggest difference -- that I can't wear a t-shirt and pajama pants as I sit and hack the day away in my Eames lounger.

    And so I drive 12 (scenic) miles from my house in the hills west of Austin right to the heart of downtown, fighting traffic if I sleep even a few minutes past 7:15am.  I wear a button-up shirt almost every day.  Not that that's formal -- I also wear jeans and cowboy boots.  But I'm dressing for the job I want, not the job I have.  A dude rancher, I reckon  :-)

    The net result is that I had a closet full of awesome Linux and open source t-shirts -- shirts I had worn for years -- that just weren't getting their due anymore.  And my Etsy-awesome lovely wife Kim convinced me to part with a number of my favorites to create a t-shirt quilt that captures my last ~7 years in the Open Source world!

    Now, mind you, I shed a tear or two as Kim's shears tore through a couple of these shirts that I've carried with me across six continents and most of the two dozen timezones...  :-/  On the other hand, a few of these weren't particularly my favorites, but did fit the color scheme she was going for.  In the end, her work was really quite beautiful!  And warm.

    For those interested, I'll document the 6 rows by 4 columns:


    Ah memories...  So Kim enjoyed making this for me, but it was a heck of a lot of work, and I don't think she'll be doing it again.  But if you're looking for a quilt made of your own favorite shirt, check out our friend Liz who has her own Etsy site for this sort of thing ;-)

    Cheers!
    :-Dustin

    Monday, February 20, 2012

    Thoughts on Hiring Linux Hackers (in 2012)


    I have interviewed hundreds of candidates and had the delight of hiring dozens of Linux and open source developers, engineers, and interns over the last 10 years -- at IBM, Canonical, and now Gazzang.  The most recent one signed his contract this morning, in fact!  It's quite a rush to bring new talent into a small team.

    Linux jobs are actually hotter now than ever before!  The Wall Street Journal picked this up recently.  And while HostGator has been running giant billboards throughout Austin for at least 2 years now, plainly asking, "Do you know Linux?  We're hiring!" -- I was impressed to see the same billboard scaled up to three stories tall right in Times Square, New York.


    Given that my own well being is so deeply invested in being an open source hacker, I selfishly love seeing the Linux and open source job market expanding so vibrantly.

    From the interviewer's chair, however, my poking and prodding of a given candidate's Linux skills have changed a bit over those 10 years.  I'm often looking for the candidate's inquisitive nature.  I want to know how interested they really are in going down the rabbit hole.






    • 10 years ago, you had to know how to deploy and run a LAMP stack, and hack your way around Apache, MySQL, PostgreSQL, PHP, Perl, and Python.  You would shriek in horror at bad HTML and CSS and could really make a website sing with a little Javascript.
    • 9 years ago, I wanted to see someone who regularly compiled their own upstream kernel, maybe tweaked a few configuration options on or off just for fun.  Bonus points for each additional software package you compiled from source.  Gentoo users were shoo-ins.
    • 8 years ago, I wanted to talk to people who were sending and receiving PGP or GPG signed, encrypted email.  I was delighted by those who had at least 1024D keys!
    • 7 years ago, I found users who were willing and able to tweak their SELinux policies and AppArmor profiles absolutely intriguing.  If you were running SELinux in enforcing mode on a production system, well, damn, you probably got the job!
    • 6 years ago, I wanted someone who had built their own Beowulf cluster, for fun, over the weekend.  If not Beowulf, then some sort of cluster computing.  Maybe Condor, or MPICH.
    • 5 years ago, I'd structure some conversation around reinstalling dd-wrt or openwrt firmware on routers.  What serious hackers would run stock router firmware?!?
    • 4 years ago, I needed you to have experience with open source virtualization, such as KVM, Xen, and QEMU.  Oh, and surely you're running MythTV on a few computers around the house, right?
    • 3 years ago, it was all about developers who had Launchpad or Github accounts, had written some open source software and packaged it for Ubuntu or Fedora.  While your friends update one another over Facebook, you're pushing updates over git and bzr.
    • 2 years ago, I was interested in people who had built or deployed their own cloud infrastructure using Eucalyptus or OpenStack.
    • And last year, it was all about the move from traditional configuration management to cloud-ready service orchestration; experience with Puppet/Chef/Juju were golden.
    Nowadays?  Well, it's additive, to an extent.  Hopefully you have the LAMP stack and kernel compilations in your pocket, and can send and receive signed/encrypted email.  No real hacker ever runs stock firmware on their router; surely you're using virtual machines and cloud computing on a daily basis, and hopefully you spend as much time on Launchpad/Github/StackExchange as on Facebook/Twitter :-)

    But you need to be on the cusp of what's next.  I'm hoping you've rooted your phone, jacked your bootloader, and installed a CyanogenMod of your choosing -- on your phone at least, if not your tablet and e-Reader too!  Hopefully you've tried out this big data business and thrown together a map-reduce Hadoop job or two, just for grins.  Clearly you'll have strong, informed opinions on Unity vs. Gnome3, upstart vs. systemd, and the UEFI secure boot mess.

    Oh, and big bonus points if you read my blog.  But you knew that already.  If you read my blog, you've seen this.  And this is what we'll talk about in our interview :-)

    :-Dustin

    Thursday, February 16, 2012

    Gazzang Bang and the SXSW Startup Pub Crawl


    The Gazzang office at 502 Baylor Street in Austin, Texas is one of the destinations of the 2012 SXSW Startup Pub Crawl, on Thursday, March 8th.

    Join us between 4 and 10 pm for an open house, drum circle, and some awesome live music from the Lost Pines bluegrass band!  Please RSVP here.  Come talk to us over free beer and food about Cloud security, data privacy, encryption, eCryptfs, key management, Linux, and Ubuntu.  Meet the entire cast of the Sh*t IT Security Guys Say short film.  And tap into the vibrant tech start-up culture that's rocking downtown Austin by day, juxtaposed against the awesome live music culture that rocks downtown Austin by night.


    Come get your bang on!

    :-Dustin
