From the Canyon Edge -- :-Dustin

Tuesday, October 18, 2016

Hotfix Your Ubuntu Kernels with the Canonical Livepatch Service!

Introducing the Canonical Livepatch Service

Ubuntu 16.04 LTS’s 4.4 Linux kernel includes an important new security capability in Ubuntu -- the ability to modify the running Linux kernel code, without rebooting, through a mechanism called kernel livepatch.

Today, Canonical has publicly launched the Canonical Livepatch Service -- an authenticated, encrypted, signed stream of Linux livepatches that apply to the 64-bit Intel/AMD architecture of the Ubuntu 16.04 LTS (Xenial) Linux 4.4 kernel, addressing the highest and most critical security vulnerabilities, without requiring a reboot in order to take effect.  This is particularly amazing for Container hosts -- Docker, LXD, etc. -- as all of the containers share the same kernel, and thus all instances benefit.

I’ve tried to answer some questions you might have below. If you have others, you’re welcome to add them in the comments below or on Twitter with the hashtag #Livepatch.

Q: How do I enable the Canonical Livepatch Service?

A: Three easy steps, on a fully up-to-date 64-bit Ubuntu 16.04 LTS system.
  1. Go to and retrieve your livepatch token
  2. Install the canonical-livepatch snap
      $ sudo snap install canonical-livepatch
  3. Enable the service with your token
      $ sudo canonical-livepatch enable [TOKEN]
    And you’re done! You can check the status at any time using:

    $ canonical-livepatch status --verbose

    Q: What are the system requirements?

    A: The Canonical Livepatch Service is available for the generic and low latency flavors of the 64-bit Intel/AMD (aka x86_64, amd64) builds of the Ubuntu 16.04 LTS (Xenial) kernel, which is a Linux 4.4 kernel. Canonical livepatches work on Ubuntu 16.04 LTS Servers and Desktops, on physical machines, virtual machines, and in the cloud. Safety, security, and stability depend firmly on unmodified Ubuntu kernels and network access to the Canonical Livepatch Service. You will also need to apt update/upgrade to the latest version of snapd (at least 2.15).
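
    For example, here's a quick sanity check of the kernel flavor and snapd version -- these are standard Ubuntu commands, nothing livepatch-specific:

    $ uname -r                                     # should show a 4.4.0-NN-generic or -lowlatency kernel
    $ apt policy snapd                             # installed version should be at least 2.15
    $ sudo apt update && sudo apt install snapd    # upgrades snapd if it's older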

    Q: What about other architectures?

    A: The upstream Linux livepatch functionality is limited to the 64-bit x86 architecture at this time. IBM is working on support for POWER8 and s390x (LinuxOne mainframe), and there’s also active upstream development on ARM64, so we do plan to support these eventually. The livepatch plumbing for 32-bit ARM and 32-bit x86 is not currently under upstream development.

    Q: What about other flavors?

    A: We are providing the Canonical Livepatch Service for the generic and low latency (telco) flavors of the Linux kernel at this time.

    Q: What about other releases of Ubuntu?

    A: The Canonical Livepatch Service is provided for Ubuntu 16.04 LTS’s Linux 4.4 kernel. Older releases of Ubuntu will not work, because they’re missing the Linux kernel support. Interim releases of Ubuntu (e.g. Ubuntu 16.10) are targeted at developers and early adopters, rather than Long Term Support users or systems that require maximum uptime.  We will consider providing livepatches for the HWE kernels in 2017.

    Q: What about derivatives of Ubuntu?

    A: Canonical livepatches are fully supported on the 64-bit Ubuntu 16.04 LTS Desktop, Cloud, and Server operating systems. On other Ubuntu derivatives, your mileage may vary! These are not part of our automated continuous integration quality assurance testing framework for Canonical Livepatches. Canonical Livepatch safety, security, and stability will firmly depend on unmodified Ubuntu generic kernels and network access to the Canonical Livepatch Service.

    Q: How does Canonical test livepatches?

    A: Every livepatch is rigorously tested in Canonical's in-house CI/CD (Continuous Integration / Continuous Delivery) quality assurance system, which tests hundreds of combinations of livepatches, kernels, hardware, physical machines, and virtual machines.  Once a livepatch passes CI/CD and regression tests, it's rolled out on a canary testing basis, first to a tiny percentage of the Ubuntu Community users of the Canonical Livepatch Service. Based on the success of that microscopic rollout, a moderate rollout follows.  And assuming those also succeed, the livepatch is delivered to all free Ubuntu Community and paid Ubuntu Advantage users of the service.  Systemic failures are automatically detected and raised for inspection by Canonical engineers.  Ubuntu Community users of the Canonical Livepatch Service who want to eliminate the small chance of being randomly chosen as a canary should enroll in the Ubuntu Advantage program (starting at $12/month).

    Q: What kinds of updates will be provided by the Canonical Livepatch Service?

    A: The Canonical Livepatch Service is intended to address high and critical severity Linux kernel security vulnerabilities, as identified by Ubuntu Security Notices and the CVE database. Note that there are some limitations to the kernel livepatch technology -- some Linux kernel code paths cannot be safely patched while running. We will do our best to supply Canonical Livepatches for high and critical vulnerabilities in a timely fashion whenever possible. There may be occasions when the traditional kernel upgrade and reboot might still be necessary. We’ll communicate that clearly through the usual mechanisms -- USNs, Landscape, Desktop Notifications, Byobu, /etc/motd, etc.

    Q: What about non-security bug fixes, stability, performance, or hardware enablement updates?

    A: Canonical will continue to provide Linux kernel updates addressing bugs, stability issues, performance problems, and hardware compatibility on our usual cadence -- about every 3 weeks. These updates can be easily applied using ‘sudo apt update; sudo apt upgrade -y’, using the Desktop “Software Updates” application, or Landscape systems management. These standard (non-security) updates will still require a reboot, as they always have.
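
    For example, here's the usual update run, plus a quick check of Ubuntu's stock reboot-required flag (written by the update-notifier machinery when a package requests a reboot):

    $ sudo apt update && sudo apt upgrade -y
    $ cat /var/run/reboot-required     # present only when a reboot is still pending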

    Q: Can I roll back a Canonical Livepatch?

    A: Rolling back or removing an already-inserted livepatch module is currently disabled in Linux 4.4, because we need a way to determine whether we are currently executing inside a patched function before it can be safely removed. We can, however, safely apply new livepatches on top of each other, and even repatch functions over and over.
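
    For the curious, the kernel itself exposes each applied livepatch under sysfs -- a standard kernel interface, independent of the canonical-livepatch client:

    $ ls /sys/kernel/livepatch/                # one directory per applied livepatch module (if any)
    $ cat /sys/kernel/livepatch/*/enabled      # 1 means that patch is currently active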

    Q: What about low and medium severity CVEs?

    A: We’re currently focusing our Canonical Livepatch development and testing resources on high and critical security vulnerabilities, as determined by the Ubuntu Security Team.  We'll livepatch other CVEs opportunistically.

    Q: Why are Canonical Livepatches provided as a subscription service?

    A: The Canonical Livepatch Service provides a secure, encrypted, authenticated connection, to ensure that only properly signed livepatch kernel modules -- and most importantly, the right modules -- are delivered directly to your system, with extremely high quality testing wrapped around it.

    Q: But I don’t want to buy UA support!

    A: You don’t have to! Canonical is providing the Canonical Livepatch Service to community users of Ubuntu, at no charge for up to 3 machines (desktop, server, virtual machines, or cloud instances). A randomly chosen subset of the free users of Canonical Livepatches will receive their Canonical Livepatches slightly earlier than the rest of the free users or UA users, as a lightweight canary testing mechanism, benefiting all Canonical Livepatch users (free and UA). Once those canary livepatches apply safely, all Canonical Livepatch users will receive their live updates.

    Q: But I don’t have an Ubuntu SSO account!

    A: An Ubuntu SSO account is free, and provides services similar to those offered by Google, Microsoft, and Apple for Android, Windows, and Mac devices, respectively. You can create your Ubuntu SSO account here.

    Q: But I don’t want to log in!

    A: You don’t have to! Canonical Livepatch is absolutely not required to maintain the security of any Ubuntu desktop or server! You may continue to freely and anonymously run ‘sudo apt update; sudo apt upgrade; sudo reboot’ as often as you like, receive all of the same updates, and simply reboot after kernel updates, as you always have with Ubuntu.

    Q: But I don't have Internet access!

    A: You should think of the Canonical Livepatch Service much like you think of Netflix, Pandora, or Dropbox.  It's an Internet streaming service for security hotfixes for your kernel.  You have access to the stream of bits when you can connect to the service over the Internet.  On the flip side, your machines are already thoroughly secured, since they're so heavily firewalled off from the rest of the world!

    Q: Where’s the source code?

    A: The source code of livepatch modules can be found here.  The source code of the canonical-livepatch client is part of Canonical's Landscape system management product and is commercial software.

    Q: What about Ubuntu Core?

    A: Canonical Livepatches for Ubuntu Core are on the roadmap, and may be available in late 2016, for 64-bit Intel/AMD architectures. Canonical Livepatches for ARM-based IoT devices depend on upstream support for livepatches.

    Q: How does this compare to Oracle Ksplice, RHEL Live Patching and SUSE Live Patching?

    A: While the concepts are largely the same, the technical implementations and the commercial terms are very different:

    • Oracle Ksplice uses its own technology, which is not in upstream Linux.
    • RHEL and SUSE currently use their own homegrown kpatch/kgraft implementations, respectively.
    • Canonical Livepatching uses the upstream Linux Kernel Live Patching technology.
    • Ksplice is free, but unsupported, for Ubuntu Desktops, and only available for Oracle Linux and RHEL servers with an Oracle Linux Premier Support license ($2299/node/year).
    • It’s a little unclear how to subscribe to RHEL Kernel Live Patching, but it appears that you need to first be a RHEL customer, and then enroll in the SIG (Special Interest Group) through your TAM (Technical Account Manager), which requires a Red Hat Enterprise Linux Server Premium Subscription at $1299/node/year.  (I'm happy to be corrected and update this post)
    • SUSE Live Patching is available as an add-on to SUSE Linux Enterprise Server 12 Priority Support subscription at $1,499/node/year, but does come with a free music video.
    • Canonical Livepatching is available for every Ubuntu Advantage customer, starting at our entry level UA Essential for $150/node/year, and available for free to community users of Ubuntu.

    Q: What happens if I run into problems/bugs with Canonical Livepatches?

    A: Ubuntu Advantage customers can file a support request, where it will be serviced according to their UA service level agreement (Essential, Standard, or Advanced). Ubuntu community users can file a bug report on Launchpad, and we'll service it on a best-effort basis.

    Q: Why does canonical-livepatch client/server have a proprietary license?

    A: The canonical-livepatch client is part of the Landscape family of tools available to Canonical support customers. We are enabling free access to the Canonical Livepatch Service for Ubuntu community users as a mark of our appreciation for the broader Ubuntu community, and in exchange for occasional, automatic canary testing.

    Q: How do I build my own livepatches?

    A: It’s certainly possible for you to build your own Linux kernel live patches, but it requires considerable skill, time, and computing power to produce, and even more effort to comprehensively test. Rest assured that this is the real value of using the Canonical Livepatch Service! That said, Chris Arges blogged a howto for the curious a while back:
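
    For a purely illustrative taste of the mechanism (not the Canonical tooling), the upstream kernel tree ships a sample livepatch module, samples/livepatch/livepatch-sample.c, which replaces the function behind /proc/cmdline. Assuming you've built that sample (CONFIG_SAMPLE_LIVEPATCH=m) against your running 4.4 kernel, exercising it looks roughly like this:

    $ sudo insmod samples/livepatch/livepatch-sample.ko    # insert the sample livepatch module
    $ cat /proc/cmdline                                    # the patched handler now prints the sample's replacement message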

    Q: How do I get notifications of which CVEs are livepatched and which are not?

    A: You can, at any time, query the status of the canonical-livepatch daemon using: ‘canonical-livepatch status --verbose’. This command will show any livepatches successfully applied, any outstanding/unapplied livepatches, and any error conditions. Moreover, you can monitor the Ubuntu Security Notices RSS feed and the ubuntu-security-announce mailing list.

    Q: Isn't livepatching just a big ole rootkit?

    A: Canonical Livepatches inject kernel modules to replace sections of binary code in the running kernel. Doing so requires the CAP_SYS_MODULE capability, the same capability required to modprobe any module into the Linux kernel. If you already have that capability (root does, by default, on Ubuntu), then you already have the ability to arbitrarily modify the kernel, with or without Canonical Livepatches. If you’re an Ubuntu sysadmin and you want to disable module loading (and thereby also disable Canonical Livepatches), simply ‘echo 1 | sudo tee /proc/sys/kernel/modules_disabled’.
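
    For example -- this is a standard procfs knob, not specific to livepatch, and note that once set, module loading stays disabled until the next reboot:

    $ echo 1 | sudo tee /proc/sys/kernel/modules_disabled
    $ cat /proc/sys/kernel/modules_disabled    # now reads 1, and can't be flipped back without a reboot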

    Keep the uptime!

    Tuesday, October 4, 2016

    A Parody within a Parody

    My wife, Kimberly, and I watch Saturday Night Live religiously.  As in, we probably haven't missed a single episode since we started dating more than 12 years ago.  And in fact, we both watched our fair share of SNL before we had even met, going back to our teenage years.

    We were catching up on SNL's 42nd season premiere late this past Sunday night, after putting the kids to bed, when I was excited to see a hilarious sketch/parody of Mr. Robot.

    If SNL is my oldest TV favorite, Mr. Robot is certainly my newest!  Just wrapping its 2nd season, it's a brilliantly written, flawlessly acted, impeccably set techno drama series on USA.  I'm completely smitten, and the story seems to be just getting started!

    Okay, so Kim and I are watching a hilarious sketch where Leslie Jones asks Elliot to track down the person who recently hacked her social media accounts.  And, as always, I take note of what's going on in the background on the computer screen.  It's just something I do.  I love to try and spot the app, the OS, the version, identify the Linux kernel oops, etc., of anything on any computer screen on TV.

    At about the 1:32 mark of the SNL/Mr. Robot skit, there was something unmistakable on the left computer, just over actor Pete Davidson's right shoulder.  Merely a fraction of a second, and I recognized it instantly!  A dark terminal, split into a dozen sections.  A light grey border, with a thicker grey highlighting one split.  The green drip of text from The Matrix in one of the splits. A flashing, bouncing yellow audio wave in another.  An instant rearrangement of all of those windows each second.

    It was Byobu and Hollywood!  I knew it.  Kim didn't believe me at first, until I proved it ;-)

    A couple of years ago, after seeing a 007 film in the theater, I created a bit of silliness -- a joke of a program that could turn any Linux terminal into a James Bond caliber hacker screen.  The result is a package called hollywood, which any Ubuntu user can install and run by simply typing:

    $ sudo apt install hollywood
    $ hollywood

    And a few months ago, Hollywood found its way into an NBC News piece that took itself perhaps a little too seriously, as it drummed up a bit of fear around "Ransomware".

    But, far more appropriately, I'm absolutely delighted to see another NBC program -- Saturday Night Live -- using Hollywood exactly as intended -- for parody!

    Enjoy a few screenshots below...


    Thursday, September 29, 2016

    OpenZFS Developer Summit Keynote: Everything Old is New Again...But Better!

    On Monday this week, I was afforded the distinct privilege to deliver the opening keynote at the OpenZFS Developer Summit in San Francisco.  It was a beautiful little event, with a full day of informative presentations and lots of networking during lunch and breaks.

    Below, you can view my slides, download the PDF, or watch the talk (starts at 31:10) and demo in its entirety.

    Hopefully you'll enjoy the demo -- especially the most interesting new raw tracing system in the Ubuntu 16.04 LTS Linux 4.4 kernel, something called the Berkeley Packet Filter, or "BPF" for short.  I used a series of open source utilities from Brendan Gregg (of Netflix), called iovisor/bcc.  Quoting the README on GitHub:

    BCC is a toolkit for creating efficient kernel tracing and manipulation programs, and includes several useful tools and examples. It makes use of extended BPF (Berkeley Packet Filters), formally known as eBPF, a new feature that was first added to Linux 3.15. Much of what BCC uses requires Linux 4.1 and above.
    I'll follow up this post with another one, formally introducing BPF and how to install and use bcc in Ubuntu 16.04 LTS, if anyone is interested...
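
    In the meantime, here's a tiny taste, assuming the bcc tools are already installed (e.g. from the upstream iovisor packages, which place the tools under /usr/share/bcc/tools):

    $ sudo /usr/share/bcc/tools/execsnoop    # trace every new process execution, system-wide, as it happens
    $ sudo /usr/share/bcc/tools/opensnoop    # trace file opens, with the process that issued each one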


    Monday, September 26, 2016

    Container Camp London: Streamlining HPC Workloads with Containers

    A couple of weeks ago, I delivered a talk at Container Camp UK 2016.  It was a brilliant event, on a beautiful stage at Picturehouse Central in Piccadilly Circus in London.

    You're welcome to view the slides or download them as a PDF, or watch my talk below.

    And for the techies who want to skip the slide fluff and get their hands dirty, set up OpenStack and LXD and start streamlining your HPC workloads using this guide.


    Wednesday, September 21, 2016

    HOWTO: Launch an Ubuntu Cloud Image with KVM from the Command Line

    I reinstalled my primary laptop (Lenovo x250) about 3 months ago (June 30, 2016), when I got a shiny new SSD, with a fresh Ubuntu 16.04 LTS image.

    Just yesterday, I needed to test something in KVM.  Something that could only be tested in KVM.

    kirkland@x250:~⟫ kvm
    The program 'kvm' is currently not installed. You can install it by typing:
    sudo apt install qemu-kvm
    127 kirkland@x250:~⟫ 

    I don't have KVM installed?  How is that even possible?  I used to be the maintainer of the virtualization stack in Ubuntu (kvm, qemu, libvirt, virt-manager, et al.)!  I lived and breathed virtualization on Ubuntu for years...

    Alas, it seems that I use LXD for everything these days!  It's built into every Ubuntu 16.04 LTS server, and one 'apt install lxd' away from having it on your desktop.  With ZFS, instances start in under 3 seconds.  Snapshots, live migration, an image store, a REST API, all built in.  Try it out, if you haven't, it's great!

    kirkland@x250:~⟫ time lxc launch ubuntu:x
    Creating supreme-parakeet
    Starting supreme-parakeet
    real    0m1.851s
    user    0m0.008s
    sys     0m0.000s
    kirkland@x250:~⟫ lxc exec supreme-parakeet bash

    But that's enough of an LXD advertisement...back to the title of the blog post.

    Here, I want to download an Ubuntu cloud image, and boot into it.  There's one extra step nowadays.  You need to create your "user data" and feed it into cloud-init.

    First, create a simple text file, called "seed":

    kirkland@x250:~⟫ cat seed
    #cloud-config
    password: passw0rd
    chpasswd: { expire: False }
    ssh_pwauth: True
    ssh_import_id: kirkland

    Now, generate a "seed.img" disk, like this:

    kirkland@x250:~⟫ cloud-localds seed.img seed
    kirkland@x250:~⟫ ls -halF seed.img 
    -rw-rw-r-- 1 kirkland kirkland 366K Sep 20 17:12 seed.img

    Next, download your image from

    kirkland@x250:~⟫ wget                                                                                                                                                          
    --2016-09-20 17:13:57--
    Resolving (, 2001:67c:1360:8001:ffff:ffff:ffff:fffe
    Connecting to (||:80... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 312606720 (298M) [application/octet-stream]
    Saving to: ‘xenial-server-cloudimg-amd64-disk1.img’
    100%[=================================] 298.12M  3.35MB/s    in 88s     
    2016-09-20 17:15:25 (3.39 MB/s) - ‘xenial-server-cloudimg-amd64-disk1.img’ saved [312606720/312606720]

    In the nominal case, you can now just launch KVM, and add your user data as a cdrom disk.  When it boots, you can log in with "ubuntu" and "passw0rd", which we set in the seed:

    kirkland@x250:~⟫ kvm -cdrom seed.img -hda xenial-server-cloudimg-amd64-disk1.img

    Finally, let's enable more bells and whistles, and speed this VM up.  Let's give it all 4 CPUs, a healthy 8GB of memory, a virtio disk, and let's port forward ssh to local port 5555:

    kirkland@x250:~⟫ kvm -m 8192 \
        -smp 4 \
        -cdrom seed.img \
        -device e1000,netdev=user.0 \
        -netdev user,id=user.0,hostfwd=tcp::5555-:22 \
        -drive file=xenial-server-cloudimg-amd64-disk1.img,if=virtio,cache=writeback,index=0

    And with that, we can now ssh into the VM, with the public SSH key specified in our seed:

    kirkland@x250:~⟫ ssh -p 5555 ubuntu@localhost
    The authenticity of host '[localhost]:5555 ([]:5555)' can't be established.
    RSA key fingerprint is SHA256:w2FyU6TcZVj1WuaBA799pCE5MLShHzwio8tn8XwKSdg.
    No matching host key fingerprint found in DNS.
    Are you sure you want to continue connecting (yes/no)? yes
    Welcome to Ubuntu 16.04.1 LTS (GNU/Linux 4.4.0-36-generic x86_64)
     * Documentation:
     * Management:
     * Support:
      Get cloud support with Ubuntu Advantage Cloud Guest:
    0 packages can be updated.
    0 updates are security updates.


    Tuesday, August 9, 2016

    Howdy, Windows! A Six-part Series about Ubuntu-on-Windows for

    I hope you'll enjoy a shiny new 6-part blog series I recently published at
    1. The first article is a bit of back story, perhaps a behind-the-scenes look at the motivations, timelines, and some of the work performed between Microsoft and Canonical to bring Ubuntu to Windows.
    2. The second article is an updated getting-started guide, with screenshots, showing a Windows 10 user exactly how to enable and run Ubuntu on Windows.
    3. The third article walks through a dozen or so examples of the most essential command line utilities a Windows user, new to Ubuntu (and Bash), should absolutely learn.
    4. The fourth article shows how to write and execute your first script, "Howdy, Windows!", in 6 different dynamic scripting languages (Bash, Python, Perl, Ruby, PHP, and NodeJS).
    5. The fifth article demonstrates how to write, compile, and execute your first program in several compiled programming languages (including C, C++, Fortran, and Golang).
    6. The sixth and final article conducts some performance benchmarks of the CPU, Memory, Disk, and Network, in both native Ubuntu on a physical machine, and Ubuntu on Windows running on the same system.
    I really enjoyed writing these.  Hopefully you'll try some of the examples, and share your experiences using Ubuntu native utilities on a Windows desktop.  You can find the source code of the programming examples in Github and Launchpad:

    Friday, June 24, 2016

    HOWTO: Host your own SNAP store!

    SNAPs are the cross-distro, cross-cloud, cross-device Linux packaging format of the future.  And we're already hosting a fantastic catalog of SNAPs in the SNAP store provided by Canonical.  Developers are welcome to publish their software for distribution across hundreds of millions of Ubuntu servers, desktops, and devices.

    Several people have asked the inevitable open source software question, "SNAPs are awesome, but how can I stand up my own SNAP store?!?"

    The answer is really quite simple...  SNAP stores are really just HTTP web servers!  Of course, you can get fancy with branding, and authentication, and certificates.  But if you just want to host SNAPs and enable downstream users to fetch and install software, well, it's pretty trivial.

    In fact, Bret Barker has published an open source (Apache License) SNAP store on GitHub.  We're already looking at how to flesh out his proof-of-concept and bring it into snapcore itself.

    Here's a little HOWTO install and use it.

    First, I launched an instance in AWS.  Of course I could have launched an Ubuntu 16.04 LTS instance, but actually, I launched a Fedora 24 instance!  In fact, you could run your SNAP store on any OS that currently supports SNAPs, or even just fork this GitHub repo and install it standalone.  See

    Now, let's find and install a snapstore SNAP.  (Note that in this AWS instance of Fedora 24, I also had to 'sudo yum install squashfs-tools kernel-modules'.)
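
    Something along these lines -- treat it as a sketch, since the snap name 'snapstore' (from Bret's proof-of-concept) is an assumption:

    $ sudo yum install squashfs-tools kernel-modules    # the Fedora prerequisites noted above
    $ snap find snapstore                               # search for the store snap
    $ sudo snap install snapstore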

    At this point, you're running a SNAP store (webserver) on port 5000.

    Now, let's reconfigure snapd to talk to our own SNAP store, and search for a SNAP.
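
    Here's roughly what that looked like at the time -- a sketch only: the SNAPPY_FORCE_API_URL override is an assumption about the snapd of that era, and 'hello' is just a stand-in for whatever snap your store serves:

    $ sudo mkdir -p /etc/systemd/system/snapd.service.d
    $ printf '[Service]\nEnvironment=SNAPPY_FORCE_API_URL=http://localhost:5000/\n' | \
        sudo tee /etc/systemd/system/snapd.service.d/local-store.conf
    $ sudo systemctl daemon-reload && sudo systemctl restart snapd.service
    $ snap find hello                                   # search the local store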

    Finally, let's install and inspect that SNAP.
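
    For example, again assuming a 'hello' snap is published in your store:

    $ sudo snap install hello
    $ snap list                                         # verify it's installed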

    How about that?  Easy enough!