“It Doesn't Look Like Ubuntu Reached Its Goal Of 200 Million Users This Year”, says Michael Larabel of Phoronix, in a post that it seems he's been itching to post for months.
Why the negativity?!? Are you sure? Did you count all of them?
No one has.
How many people in the world use Ubuntu?
Actually, no one can count all of the Ubuntu users in the world!
Canonical, unlike Apple, Microsoft, Red Hat, or Google, does not require each user to register their installation of Ubuntu.
How many "users" of Ubuntu are there ultimately? I bet there are over a billion people today, using Ubuntu -- both directly and indirectly. Without a doubt, there are over a billion people on the planet benefiting from the services, security, and availability of Ubuntu today.
More people use Ubuntu than we know.
More people use Ubuntu than you know.
More people use Ubuntu than they know.
More people use Ubuntu than anyone actually knows.
Picture yourself in containers on a server
With systemd trees and spawned tty's
Somebody calls you, you answer quite quickly
A world with the density so high
- Sgt. Graber's LXD Smarts Club Band
Last week, we proudly released Ubuntu 15.10 (Wily) -- the final developer snapshot of the Ubuntu Server before we focus the majority of our attention on quality, testing, performance, documentation, and stability for the Ubuntu 16.04 LTS cycle in the next 6 months.
Notably, LXD has been promoted to the Ubuntu Main archive, now commercially supported by Canonical. That has enabled us to install LXD by default on all Ubuntu Servers, from 15.10 forward.
That means that every Ubuntu server -- Intel, AMD, ARM, POWER, and even Virtual Machines in the cloud -- is now a full machine container hypervisor, capable of hosting hundreds of machine containers, right out of the box!
LXD in the Sky with Diamonds! Well, LXD is in the Cloud with Diamond level support from Canonical, anyway. You can even test it in your web browser here.
The development tree of Xenial (Ubuntu 16.04 LTS) has already inherited this behavior, and we will celebrate this feature broadly through our use of LXD containers in Juju, MAAS, and the reference platform of Ubuntu OpenStack, as well as the new nova-lxd hypervisor in the OpenStack Autopilot within Landscape.
While the young and the restless are already running Wily Ubuntu 15.10, the bold and the beautiful are still bound to their Trusty Ubuntu 14.04 LTS servers.
At Canonical, we understand both motivations, and this is why we have backported LXD to the Trusty archives, for safe, simple consumption and testing of this new generation of machine containers there, on your stable LTS.
Installing LXD on Trusty simply requires enabling the trusty-backports pocket, and installing the lxd package from there, with these 3 little commands:
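Those three commands aren't reproduced in this excerpt, but the gist is something like this (the backports line is an assumption -- your sources.list may already carry it, commented out):

$ echo "deb http://archive.ubuntu.com/ubuntu trusty-backports main restricted universe multiverse" | sudo tee /etc/apt/sources.list.d/backports.list
$ sudo apt-get update
$ sudo apt-get install -t trusty-backports lxd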
In minutes, you can launch your first LXD containers. First, inherit your new group permissions, so you can execute the lxc command as your non-root user. Then, import some images, and launch a new container named lovely-rita. Shell into that container, and examine the process tree, install some packages, check the disk and memory and cpu available. Finally, exit when you're done, and optionally delete the container.
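A hedged sketch of that whole workflow -- note that the image import and launch syntax changed a few times across early LXD releases, so the image name here is illustrative rather than definitive:

$ newgrp lxd                            # pick up your new group membership without logging out
$ lxc launch ubuntu:trusty lovely-rita  # downloads the image on first use, then starts the container
$ lxc list
$ lxc exec lovely-rita -- bash
root@lovely-rita:~# ps -ef
root@lovely-rita:~# apt-get update && apt-get install -y htop
root@lovely-rita:~# df -h; free -m; nproc
root@lovely-rita:~# exit
$ lxc stop lovely-rita
$ lxc delete lovely-rita                # optional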
I was able to run over 600 containers simultaneously on my Thinkpad (x250, 16GB of RAM), and over 60 containers on an m1.small in Amazon (1.6GB of RAM).
All Linux distributions -- Ubuntu, Red Hat, and others -- enable SSH for remote server login. That’s just a fact of life in a Linux-powered, cloud and server world. SSH is by far the most secure way to administer a Linux machine remotely, as it leverages both strong authentication and encryption technology, and is actively reviewed and maintained for security vulnerabilities.
Any Ubuntu machine that might be susceptible to this XOR.DDoS attack is in a very small minority of the millions of Ubuntu systems in the world. Specifically, a vulnerable Ubuntu machine has been individually and manually configured by its administrator to:
permit SSH root password authentication, AND
have set a root password, AND
have chosen a poor quality root password that is subject to a brute force attack
A poor password is generally a simple dictionary word, or a short password without numbers, mixed case, or symbols.
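If you want to verify that your own machine isn't in that minority, a quick check of the effective OpenSSH configuration looks like this (stock paths on Ubuntu):

$ sudo sshd -T | grep -Ei 'permitrootlogin|passwordauthentication'
# If that shows PermitRootLogin yes and PasswordAuthentication yes, and root has a weak
# password, you match the vulnerable profile above. Setting "PermitRootLogin without-password"
# in /etc/ssh/sshd_config and running "sudo service ssh restart" closes the door.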
Moreover, the antivirus software ClamAV is freely available in Ubuntu (sudo apt-get install clamav), and is able to detect and purge XOR.DDoS from any affected system.
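For example (a minimal sketch; scanning all of / takes a while, so narrow the path if you prefer):

$ sudo apt-get install -y clamav
$ sudo freshclam          # refresh the signature database (skip if the freshclam daemon is already running)
$ sudo clamscan -r -i /   # recursive scan, reporting only infected files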
It's 01:24am, on Tuesday, August 25, 2015. I am, again, awake in the middle of the night, due to another false alarm from Google's spitefully sentient, irascibly ignorant Nest Protect "smart" smoke alarm system.
Exactly how I feel right now. Except I'm in my pajamas.
Warning: You'll find very little profanity on this blog. However, the filter is off for this post. Apologies in advance.
ARRRRRRRRRRRRRRRRRGGGGGGGGGHHHHHHHHHHH!!!!!!!!!!! Oh. My. God. FOR FUCK'S SAKE.
"Heads up, there's smoke in the kids room," she says. Not once, but 3 times in a 30 minute period, between 12am and 1am, last night.
That's my alarm clock. Right now. I'm serious.
"Heads up, there's smoke in the guest bedroom," she says again tonight a few minutes ago, at 12:59am.
There was in fact never any smoke to clear.
Is it possible for anything to wake you up more seriously and violently, in a cold panic, than a smoke alarm detecting something amiss in your 2-year-old's bedroom?
Here's what happens (each time)...
Every Nest Protect unit in the house announces, in unison, "Heads up, there's smoke in the kids' room." Then both my phone and my wife's phone buzz on our nightstands, with the urgent incoming message from the Nest app. Another few seconds pass, and another set of alarms arrives, this time delivered by email, in case you missed the first two.
The first and second time it happens, you jump up immediately. You run into their room and make sure everyone is okay -- both the infant in the crib and toddler who's into everything. You walk the whole house, checking the oven, the stove, the toaster, the computer equipment. You open the door and check around outside. When everything is okay, you're left with a tingling in the back of your mind, wondering what went wrong. When you're a computer engineer by trade, you're trying to debug the hardware and/or software bug causing the false positive. Then you set about trying to calm your family and get them back into bed. And at some point later, you calm your own nerves and try to get some sleep. It's a work night after all.
But the third, fourth, and fifth time it happens? From 3 different units?
Well, it never ceases to scare the ever living shit out of you, waking up out of deep sleep, your mind racing, assessing the threat.
But then, reality kind of sets in. It's just the stupid Nest Protect fucking it all up again.
Roll over, go back to bed, and pray that the full alarm doesn't sound this time, waking up both kids and setting us up for a really bad night and next few days at school.
It's not over yet, though. You then wait for the same series of messages announcing the all clear -- first the bitch over the loudspeaker, followed by the Android app notification, then the email -- each with the same message: "Caution, the smoke is clearing..."
THERE WAS NEVER ANY FUCKING SMOKE, YOU STUPID CYBORG.
20 years later, and the smartest company in the world creates a smoke detector that broadcasts the IoT equivalent of PC LOAD LETTER to your smart home, mobile app, and email.
But not this time. I'm not rolling over. I'm here, typing with every ounce of anger this Thinkpad can muster. I'm mashing these keys in the guest bedroom that's supposedly on fire. I can most assuredly tell you that it's a comfy 72 F, that the air is as clean as a summer breeze.
I'm writing this, hoping that someone, somewhere hears how disturbingly defective, and dangerously disingenuous this product actually is.
It has one job to do. Detect and report smoke. And it's unable to do that effectively. If it can't reliably detect normality, what confidence should I have that it'll actually detect an emergency if that dreaded day ever comes?
The sad, sobering reality is: zero. I have zero confidence whatsoever in the Nest Protect.
What's worse is that I'm embarrassed to say I've been duped into buying 7 (yes, seven) of these broken pieces of shit, at $99 apiece. I'm a pretty savvy technical buyer, and admittedly a pretty magnanimous early adopter. But while I'm accepting of beta versions of gadgets and gizmos, I am entirely unforgiving when it comes to the safety and livelihood of my family and guests.
Michael Larabel of Phoronix recounts his similar experience here. He destroyed one with a sledgehammer, which might provide me with some catharsis when (not if, but when) this happens again.
Michael Larabel of Phoronix destroyed his malfunctioning Nest Protect with a 20 lb sledgehammer, to silence the false alarm in the middle of the night
There's a sad, long thread on Nest's customer support forum, calling for a better "silence" feature. I'm sorry, that's just wrong. The solution is not a better way to "silence" false positives. Root out the false positives to begin with. Or recall the hardware. Tut, tut, tut.
You can't be serious...
This is from me to Google and Nest on behalf of thousands of trusting families out there: You have the opportunity, and ultimately the obligation. Please make this right. Whatever that means, you owe the world that.
Ship working firmware.
Recall faulty hardware.
Refund the product.
Okay, the impassioned rant is over. Time for data. Here is the detailed, distressing timeline.
January 2015: I installed 5 Nest Protects: one in each of two kids' rooms, the master bedroom, the hallway, and the kitchen/living room
February 2015: While on a business trip to South Africa, I received notification via email and the Nest App that there was a smoke emergency at my home, half a world away, with my family in bed for the night. My wife called me immediately -- in the middle of the night in Texas. My heart raced. She assured me it was a false alarm, and that she had two screaming kids awake from the noise. I filed a support ticket with Nest (ref:_00D40Mlt9._50040jgU8y:ref) and tried to assure my wife that it was just a glitch and that I'd fix it when I got home.
May 23, 2015: We thought it was funny enough to post to Facebook, "When Nest mistakes a diaper change for a fire, that's one impressive poop, kiddo!" Not so funny now.
August 9, 2015: I installed 2 more Nest Protects, in the guest bedroom and my office
August 21, 2015, 11:26am: While on a flight home from another business trip, I receive another set of daytime warnings about smoke in the house. Another false alarm.
August 24, 2015, 12am: While asleep, I receive another 3 false alarms.
August 25, 2015, 1am: Again, asleep, another false alarm. Different room, different unit. I'm fucking done with these.
I'm counting on you Google/Nest. Please make it right.
Burning up but not on fire,
Dustin
Update #1: I was contacted directly by email and over Twitter by Nest's "Executive Relations", who offered to replace all 7 of my "v1" Nest Protects with 7 new "v2" Nest Protects, at no charge. The new "v2" Protect reportedly has an improved design with a better photoelectric detector that reduces false positives. I was initially inclined to try the new "v2" Protects; however, neither the mounting bracket nor the wiring harness is compatible from v1 to v2, so I would have to replace all of the brackets and redo all of the wiring myself. I asked, but Nest would not cover the cost of a professional (re-)installation. At that point, I expressed my disappointment in this alternative, and I was offered a full refund, in 4-6 weeks' time, after I return the 7 units. I've accepted this solution and will replace the Nest Protects with a simpler, more reliable traditional smoke detector.
Update #2: I suppose I should mention that I generally like my Nest Thermostat and (3) Dropcams. This blog post is really only complaining about the Titanic disaster that is the Nest Protect.
Canonical is delighted to sponsor ContainerCon 2015, a Linux Foundation event in Seattle next week, August 17-19, 2015. It's quite exciting to see the A-list of sponsors, many of them newcomers to this particular technology, teaming with energy around containers.
From chroots to BSD Jails and Solaris Zones, the concepts behind containers were established decades ago, and in fact traverse the spectrum of server operating systems. At Canonical, we've been working on containers in Ubuntu for more than half a decade, providing a home and resources for stewardship and maintenance of the upstream Linux Containers (LXC) project since 2010.
Last year, we publicly shared our designs for LXD -- a new stratum on top of LXC that endows the advantages of a traditional hypervisor into the faster, more efficient world of containers.
LXD is a persistent daemon that provides a clean RESTful interface to manage (start, stop, clone, migrate, etc.) any of the containers on a given host.
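To give a flavor of that API, here's a sketch of driving it by hand (endpoint paths follow LXD's 1.0 REST API; the unix socket path has moved around between releases, so treat it as an assumption):

$ curl -s --unix-socket /var/lib/lxd/unix.socket lxd/1.0/containers
# returns a JSON list of URLs, one per container on this host
$ curl -s --unix-socket /var/lib/lxd/unix.socket -X PUT -d '{"action": "stop", "timeout": 30}' lxd/1.0/containers/lovely-rita/state
# asynchronously stops the container named lovely-rita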
Hosts running LXD are handily federated into clusters of container hypervisors, and can work as Nova Compute nodes in OpenStack, for example, delivering Infrastructure-as-a-Service cloud technology at lower costs and greater speeds.
Here, LXD and Docker are quite complementary technologies. LXD furnishes a dynamic platform for "system containers" -- containers that behave like physical or virtual machines, supplying all of the functionality of a full operating system (minus the kernel, which is shared with the host). Such "machine containers" are the core of IaaS clouds, where users focus on instances with compute, storage, and networking that behave like traditional datacenter hardware.
LXD runs perfectly well along with Docker, which supplies a framework for "application containers" -- containers that enclose individual processes that often relate to one another as pools of micro services and deliver complex web applications.
Moreover, the Zen of LXD is the fact that the underlying container implementation is actually decoupled from the RESTful API that drives LXD functionality. We are most excited to discuss next week at ContainerCon our work with Microsoft around the LXD RESTful API, as a cross-platform container management layer.
Ben Armstrong, a Principal Program Manager Lead at Microsoft on the core virtualization and container technologies, has this to say:
“As Microsoft is working to bring Windows Server Containers to the world – we are excited to see all the innovation happening across the industry, and have been collaborating with many projects to encourage and foster this environment. Canonical’s LXD project is providing a new way for people to look at and interact with container technologies. Utilizing ‘system containers’ to bring the advantages of container technology to the core of your cloud infrastructure is a great concept. We are looking forward to seeing the results of our engagement with Canonical in this space.”
Finally, if you're in Seattle next week, we hope you'll join us for the technical sessions we're leading at ContainerCon 2015, including: "Putting the D in LXD: Migration of Linux Containers", "Container Security - Past, Present, and Future", and "Large Scale Container Management with LXD and OpenStack". Details are below.
Date: Wednesday, August 19, 10:25am-11:15am
Title: Putting the D in LXD: Migration of Linux Containers
Speaker: Tycho Andersen
Abstract: http://sched.co/3YTz
Location: Willow A
Schedule: http://sched.co/3YTz
The Golden Ratio is one of the oldest and most visible irrational numbers known to humanity. Pi is perhaps more famous, but the Golden Ratio is found in more of our art, architecture, and culture throughout human history.
I think of the Golden Ratio as sort of "Pi in 1 dimension". Whereas Pi is the ratio of a circle's circumference to its diameter, the Golden Ratio is the ratio of a whole to its larger part, when that ratio equals the ratio of the larger part to the smaller part.
Visually, this diagram from Wikipedia helps explain it:
We find the Golden Ratio in the architecture of antiquity, from the Egyptians to the Greeks to the Romans, right up to the Renaissance and even modern times.
While the bases of the pyramids are squares, the Golden Ratio can be observed in the relationship between the base and the hypotenuse of a basic triangular cross section, like so:
The floor plan of the Parthenon has a width/depth ratio matching the Golden Ratio...
For the first 300 years of printing, nearly all books were printed on pages whose length to width ratio matched that of the Golden Ratio.
Leonardo da Vinci used the Golden Ratio throughout his works. I'm told that his Vitruvian Man displays the Golden Ratio...
From school, you probably remember that the Golden Ratio is approximately 1.6 (and change).
There's a strong chance that your computer or laptop monitor has a 16:10 aspect ratio. Does 1280x800 or 1680x1050 sound familiar?
That ~1.6 number is only an approximation, of course. The Golden Ratio is in fact an irrational number and can be calculated to much greater precision through several different representations, including:
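The best known of those representations are the closed form and the (beautifully simple) continued fraction:

φ = (1 + √5) / 2 = 1.6180339887...

φ = 1 + 1/(1 + 1/(1 + 1/(1 + ...)))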
You can plug that number into your computer's calculator and crank out a dozen or so significant digits.
However, if you want to go much farther than that, Alexander Yee has created a program called y-cruncher, which has been used to calculate most of the famous constants to world record precision. (Sorry, free software readers of this blog -- y-cruncher is not open source code...)
I came across y-cruncher a few weeks ago when I was working on the mprime post, demonstrating how you can easily put any workload into a Docker container and then package it as both a Juju Charm and an Ubuntu Snap. While I opted to use mprime in that post, I saved y-cruncher for this one :-)
Also, while doing some network benchmark testing of The Fan Networking among Docker containers, I experimented for the first time with some of Amazon's biggest instances, which have dedicated 10gbps network links. While I had a couple of those instances up, I did some small scale benchmarking of y-cruncher.
Presently, none of the mathematical constant records are even remotely approachable with CPU and memory alone. All of them require multiple terabytes of disk, which act as a sort of swap space for temporary files, as bits are moved in and out of memory while the CPU crunches. As such, approaching these records is overwhelmingly I/O bound -- not CPU or memory bound, as you might imagine.
After a variety of tests, I settled on the AWS d2.2xlarge instance size as the most affordable instance size to break the previous Golden Ratio record (1 trillion digits, by Alexander Yee on his gaming PC in 2010). I say "affordable", in that I could have cracked that record "2x faster" with a d2.4xlarge or d2.8xlarge; however, I would have paid much more (4x) for the total instance hours. This was purely an economic decision :-)
Let's geek out on technical specifications for a second... So what's in a d2.2xlarge? 8 Xeon cores, ~60GB of memory, and 6x 2TB local hard disks.
First, I arranged all 6 of those 2TB disks into a RAID0 with mdadm, and formatted it with xfs (which performed better than ext4 or btrfs in my cursory tests).
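The exact commands aren't in this excerpt; a sketch, assuming the six instance-store disks appear as /dev/xvdb through /dev/xvdg (device names vary):

$ sudo mdadm --create /dev/md0 --level=0 --raid-devices=6 /dev/xvd[b-g]   # stripe all six disks
$ sudo mkfs.xfs /dev/md0
$ sudo mount /dev/md0 /mnt
$ sudo chown ubuntu:ubuntu /mnt   # writable by the ubuntu user, as described below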
Here's a brief look at raw read performance with hdparm:
$ sudo hdparm -tT /dev/md0
Timing cached reads: 21126 MB in 2.00 seconds = 10576.60 MB/sec
Timing buffered disk reads: 1784 MB in 3.00 seconds = 593.88 MB/sec
The beauty here of RAID0 is that each of the 6 disks can be used to read and/or write simultaneously, perfectly in parallel. 600 MB/sec is pretty quick reads by any measure! In fact, when I tested the d2.8xlarge, I put all 24x 2TB disks into the same RAID0 and saw nearly 2.4 GB/sec read performance across that 48TB array!
With /dev/md0 mounted on /mnt and writable by my ubuntu user, I kicked off y-cruncher with these parameters:
Byobu was very handy here, letting me track CPU load, memory usage, disk usage, and disk I/O in the bottom status bar, and letting me connect to and disconnect from the running session multiple times over the 4 days of the run.
And approximately 79 hours later, it finished successfully!
Start Date: Thu Jul 16 03:54:11 2015
End Date: Sun Jul 19 11:14:52 2015
Computation Time: 221548.583 seconds
Total Time: 285640.965 seconds
CPU Utilization: 315.469 %
Multi-core Efficiency: 39.434 %
Last Digits:
5027026274 0209627284 1999836114 2950866539 8538613661 : 1,999,999,999,950
2578388470 9290671113 7339871816 2353911433 7831736127 : 2,000,000,000,000
Amazingly, another person (whom I don't know), named Ron Watkins, performed the exact same computation and published his results within 24 hours, on July 22nd/23rd. As such, Ron and I are "sharing" credit for the Golden Ratio record.
Now, let's talk about the economics here, which I think are the most interesting part of this post.
Look at the above chart of records, which is published on the y-cruncher page; the vast majority of those have been calculated on physical PCs -- most of them apparently gaming PCs running Windows.
What's different about my approach is that I used Linux in the Cloud -- specifically Ubuntu in AWS. I paid hourly (actually, my employer, Canonical, reimbursed me for that expense -- thanks!). It took right at 160 hours to run the initial calculation (79 hours) and the verification calculation (81 hours), at the current rate of $1.38/hour for a d2.2xlarge, which is a grand total of $220!
$220 is a small fraction of the cost of 6x 2TB disks, 60 GB of memory, or 8 Xeon cores, not to mention the electricity and cooling required to run a system of this size (~750W) for 160 hours.
If we say the first trillion digits were already known from the previous record, that comes out to approximately 4.5 billion record-digits per dollar, and 12.5 billion record-digits per hour!
tl;dr: Your Ubuntu-based container is not a copyright violation. Nothing to see here. Carry on.
I am speaking for my employer, Canonical, when I say you are not violating our policies if you use Ubuntu with Docker in sensible, secure ways. Some have claimed otherwise, but that’s simply sensationalist and untrue.
Canonical publishes Ubuntu images for Docker specifically so that they will be useful to people. You are encouraged to use them! We see no conflict between our policies and the common sense use of Docker.
Going further, we distribute Ubuntu in many different signed formats -- ISOs, root tarballs, VMDKs, AMIs, IMGs, Docker images, among others. We take great pride in this work, and provide them to the world at large, on ubuntu.com, in public clouds like AWS, GCE, and Azure, as well as in OpenStack and on DockerHub. These images, and their signatures, are mirrored by hundreds of organizations all around the world. We would not publish Ubuntu in the DockerHub if we didn’t hope it would be useful to people using the DockerHub. We’re delighted for you to use them in your public clouds, private clouds, and bare metal deployments.
Any Docker user will recognize these, as the majority of all Dockerfiles start with these two words....
FROM ubuntu
In fact, we gave away hundreds of these t-shirts at DockerCon.
We explicitly encourage distribution and redistribution of Ubuntu images and packages! We also embrace a very wide range of community remixes and modifications. We go further than any other commercially supported Linux vendor to support developers and community members scratching their itches. There are dozens of such derivatives and many more commercial initiatives based on Ubuntu - we are definitely not trying to create friction for people who want to get stuff done with Ubuntu.
Our policy exists to ensure that when you receive something that claims to be Ubuntu, you can trust that it will work to the same standard, regardless of where you got it from. And people everywhere tell us they appreciate that - when they get Ubuntu on a cloud or as a VM, it works, and they can trust it. That concept is actually hundreds of years old, and we’ll talk more about that in a minute....
So, what do I mean by “sensible use” of Docker? In short - secure use of Docker. If you are using a Docker container then you are effectively giving the producer of that container ‘root’ on your host. We can safely assume that people sharing an Ubuntu-based Docker container know and trust one another, and their use of Ubuntu is explicitly covered as personal use in our policy. If you trust someone to give you a Docker container and have root on your system, then you can handle the risk that they inadvertently or deliberately compromise the integrity or reliability of your system.
Our policy distinguishes between personal use, which we can generalise to any group of collaborators who share root passwords, and third party redistribution, which is what people do when they exchange OS images with strangers.
Third party redistribution is more complicated because, when things go wrong, there’s a real question as to who is responsible for it. Here’s a real example: a school district buys laptops for all their students with free software. A local supplier takes their preferred Linux distribution and modifies parts of it (like the kernel) to work on their hardware, and sells them all the PCs. A month later, a distro kernel update breaks all the school laptops. In this case, the Linux distro who was not involved gets all the bad headlines, and the free software advocates who promoted the whole idea end up with egg on their faces.
We’ve seen such cases in real hardware, and in public clouds and other, similar environments. Digital Ocean very famously published some modified and very broken Ubuntu images, outside of Canonical's policies. That's inherently wrong, and easily avoidable.
So we simply say, if you’re going to redistribute Ubuntu to third parties who are trusting both you and Ubuntu to get it right, come and talk to Canonical and we’ll work out how to ensure everybody gets what they want and need.
Here’s a real exercise I hope you’ll try...
Head over to your local purveyor of fine wines and liquors.
Pick up a bottle of Champagne, Scotch, or Bourbon. That label should earn your confidence that the bottle was produced according to strict quality, format, and geographic standards.
Before you pop the cork, check the seal, to ensure it hasn’t been opened or tampered with. Now, drink it however you like.
Pour that Champagne over orange juice (if you must). Toss a couple ice cubes in your Scotch (if that’s really how you like it). Pour that Bourbon over a Coke (if that’s what you want).
Enjoy however you like -- straight up or mixed to taste -- with your own guests in the privacy of your home. Just please don’t pour those concoctions back into the bottle, shove a cork in, put them back on the shelf at your local liquor store and try to pass them off as Champagne/Scotch/Bourbon.
Rather, if that’s really what you want to do -- distribute a modified version of Ubuntu -- simply contact us and ask us first (thanks for sharing that link, mjg59). We have some amazing tools that can help you either avoid that situation entirely, or at least let’s do everyone a service and let us help you do it well.
Believe it or not, we’re really quite reasonable people! Canonical has a lengthy, public track record, donating infrastructure and resources to many derivative Ubuntu distributions. Moreover, we’ve successfully contracted mutually beneficial distribution agreements with numerous organizations and enterprises. The result is happy users and happy companies.
As you probably remember from grade school math class, primes are numbers that are only divisible by 1 and themselves. 2, 3, 5, 7, and 11 are the first 5 prime numbers, for example.
Many computer operations, such as public-key cryptography, depend entirely on prime numbers. In fact, RSA encryption, invented in 1978, uses a modulus that is the product of two very large primes for encryption and decryption. The security of asymmetric encryption is tightly coupled to the computational difficulty of factoring large numbers. I actually use prime numbers as the status update intervals in Byobu, in order to improve performance and distribute the update spikes.
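To make that concrete (textbook RSA, with padding and key-generation details glossed over): choose two large primes p and q, publish the modulus n = p·q along with a public exponent e, and keep the matching private exponent d secret, where e·d ≡ 1 (mod (p-1)(q-1)). Then:

c = m^e mod n   (encryption)
m = c^d mod n   (decryption)

Recovering d from n and e alone requires factoring n back into p and q -- which is exactly the hard problem that keeps RSA secure.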
Euclid proved that there are infinitely many prime numbers around 300 BC. But the Prime Number Theorem (proven in the 19th century) says that the probability that any given number is prime is inversely proportional to its number of digits. That means that larger prime numbers are notoriously harder to find, and it only gets harder as they get bigger!
What's the largest known prime number in the world?
Well, it has 17,425,170 decimal digits! If you wanted to print it out, size 11 font, it would take 6,543 pages -- or 14 reams of paper!
That number is actually one less than a very large power of 2: 2^57,885,161 - 1. It was discovered by Curtis Cooper on January 25, 2013, on an Intel Core2 Duo.
Actually, each of the last 14 record largest prime numbers discovered (between 1996 and today) has been of that form, 2^P - 1. Numbers of that form are called Mersenne Prime Numbers, named after Friar Marin Mersenne, a French priest who studied them in the 1600s.
Friar Mersenne's work continues today in the form of the Great Internet Mersenne Prime Search, and the mprime program, which has been used to find those 14 huge prime numbers since 1996.
mprime is a massively parallel, CPU-scavenging utility, much like SETI@home or the Protein Folding Project. It runs in the background, consuming idle resources and working on its little piece of the problem. mprime is open source code, and also distributed as a statically compiled binary. And it will make a fine example of how to package a service into a Docker container, a Juju charm, and a Snappy snap.
Docker Container
First, let's build the Docker container, which will serve as our fundamental building block. You'll first need to download the mprime tarball from here. Extract it, and the directory structure should look a little like this (or you can browse it here):
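The actual Dockerfile isn't reproduced in this excerpt, but the idea is only a few lines; a sketch (the file layout and tag are assumptions, and a real image would presumably also ship mprime's configuration):

$ cat Dockerfile
FROM ubuntu
# copy the statically linked binary from the extracted tarball into the image
ADD mprime /opt/mprime
ENTRYPOINT ["/opt/mprime/mprime"]
$ sudo docker build -t kirkland/mprime .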
Now you can run this image anywhere you can run Docker.
$ sudo docker run -d kirkland/mprime
And verify that it's running:
$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c9233f626c85 kirkland/mprime:latest "/opt/mprime/mprime 24 seconds ago Up 23 seconds furious_pike
Juju Charm
So now, let's create a Juju Charm that uses this Docker container. Actually, we're going to create a subordinate charm. Subordinate services in Juju are often monitoring and logging services, things that run along side primary services. Something like mprime is a good example of something that could be a subordinate service, attached to one or many other services in a Juju model.
Our directory structure for the charm looks like this (or you can browse it here):
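The listing itself isn't shown here; a subordinate charm this small needs very little, something like the following (the install and stop hooks are assumptions):

mprime/
  metadata.yaml
  hooks/
    install
    start
    stop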
$ cat metadata.yaml
name: mprime
summary: Search for Mersenne Prime numbers
maintainer: Dustin Kirkland
description: |
  A Mersenne prime is a prime of the form 2^P-1.
  The first Mersenne primes are 3, 7, 31, 127
  (corresponding to P = 2, 3, 5, 7).
  There are only 48 known Mersenne primes, and
  the 13 largest known prime numbers in the world
  are all Mersenne primes.
  This charm uses a Docker image that includes the
  statically built, 64-bit Linux binary mprime
  which will consume considerable CPU and Memory,
  searching for the next Mersenne prime number.
  See http://www.mersenne.org/ for more details!
tags:
  - misc
subordinate: true
requires:
  juju-info:
    interface: juju-info
    scope: container
$ cat hooks/start
#!/bin/bash
service docker restart
docker run -d kirkland/mprime
Now, we can add the mprime service to any other running Juju service. As an example here, I'll bootstrap, deploy the Apache2 charm, and attach mprime to it.
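The commands themselves aren't shown in this excerpt; with the Juju 1.x CLI of that era, a sketch looks roughly like this (the local charm repository path is an assumption):

$ juju bootstrap
$ juju deploy apache2
$ juju deploy --repository=~/charms local:trusty/mprime
$ juju add-relation apache2 mprime
$ juju status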
Ubuntu Snap
First, let's install the Docker framework, upon which we depend:
$ snappy-remote --url ssh://snappy-nuc install docker
=======================================================
Installing docker from the store
Installing docker
Name Date Version Developer
ubuntu-core 2015-04-23 2 ubuntu
docker 2015-07-20 1.6.1.002
webdm 2015-04-23 0.5 sideload
generic-amd64 2015-04-23 1.1
=======================================================
And now, we can install our locally built Snap.
$ snappy-remote --url ssh://snappy-nuc install mprime_28.5-11_amd64.snap
=======================================================
Installing mprime_28.5-11_amd64.snap from local environment
Installing /tmp/mprime_28.5-11_amd64.snap
2015/07/20 17:44:26 Signature check failed, but installing anyway as requested
Name Date Version Developer
ubuntu-core 2015-04-23 2 ubuntu
docker 2015-07-20 1.6.1.002
mprime 2015-07-20 28.5-11 sideload
webdm 2015-04-23 0.5 sideload
generic-amd64 2015-04-23 1.1
=======================================================
Alternatively, you can install the snap directly from the Ubuntu Snappy store, where I've already uploaded the mprime snap:
$ snappy-remote --url ssh://snappy-nuc install mprime.kirkland
=======================================================
Installing mprime.kirkland from the store
Installing mprime.kirkland
Name Date Version Developer
ubuntu-core 2015-04-23 2 ubuntu
docker 2015-07-20 1.6.1.002
mprime 2015-07-20 28.5-11 kirkland
webdm 2015-04-23 0.5 sideload
generic-amd64 2015-04-23 1.1
=======================================================
Conclusion
How long until this Docker image, Juju charm, or Ubuntu Snap finds a Mersenne Prime? Almost certainly never :-) I want to be clear: that was never the point of this exercise!
Rather I hope you learned how easy it is to run a Docker image inside either a Juju charm or an Ubuntu snap. And maybe learned something about prime numbers along the way ;-)
If you read my last post, perhaps you followed the embedded instructions and ran hundreds of LXD system containers on your own Ubuntu machine.
Or perhaps you're already a Docker enthusiast and your super savvy microservice architecture orchestrates dozens of applications among a pile of process containers.
Either way, the massive multiplication of containers everywhere introduces an interesting networking problem:
"How do thousands of containers interact with thousands of other containers efficiently over a network? What if every one of those containers could just route to one another?"
Canonical is pleased to introduce today an innovative solution that addresses this problem in perhaps the most elegant and efficient manner to date! We call it "The Fan" -- an extension of the network tunnel driver in the Linux kernel. The fan was conceived by Mark Shuttleworth and John Meinel, and implemented by Jay Vosburgh and Andy Whitcroft.
A Basic Overview
Each container host has a "fan bridge" that enables all of its containers to deterministically map network traffic to any other container on the fan network. I say "deterministically", in that there are no distributed databases, no consensus protocols, and no more overhead than IP-IP tunneling. [A more detailed technical description can be found here.] Quite simply, a /16 network gets mapped onto an unused /8 network, and container traffic is routed by the host via an IP tunnel.
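For example (addresses here are illustrative): with an overlay of 250.0.0.0/8 and an underlay of 172.30.0.0/16, a host at 172.30.0.27 owns the fan subnet 250.0.27.0/24, a host at 172.30.0.28 owns 250.0.28.0/24, and every host can compute the route to any container's fan address with nothing more than that bit of arithmetic.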
A Demo
Interested yet? Let's take it for a test drive in AWS...
First, launch two instances in EC2 (or your favorite cloud) in the same VPC. Ben Howard has created special test images for AWS and GCE, which include a modified Linux kernel, a modified iproute2 package, a new fanctl package, and Docker installed by default. You can find the right AMIs here.
Now, let's create a fan bridge on each of those two instances. We can create it on the command line using the new fanctl command, or we can put it in /etc/network/interfaces.d/eth0.cfg.
We'll do the latter, so that the configuration is persistent across boots.
$ cat /etc/network/interfaces.d/eth0.cfg
# The primary network interface
auto eth0
iface eth0 inet dhcp
up fanctl up 250.0.0.0/8 eth0/16 dhcp
down fanctl down 250.0.0.0/8 eth0/16
$ sudo ifup --force eth0
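The step not shown above is pointing Docker at the new fan bridge on each host and launching a container; a sketch, assuming fanctl names the bridge fan-250:

$ echo 'DOCKER_OPTS="--bridge=fan-250"' | sudo tee -a /etc/default/docker
$ sudo service docker restart
$ sudo docker run -it ubuntu bash   # this container now comes up with a 250.x.y.z address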
Now, let's send some traffic back and forth between those containers! We can use ping and nc.
root@261ae39d90db:/# ping -c 3 250.0.27.3
PING 250.0.27.3 (250.0.27.3) 56(84) bytes of data.
64 bytes from 250.0.27.3: icmp_seq=1 ttl=62 time=0.563 ms
64 bytes from 250.0.27.3: icmp_seq=2 ttl=62 time=0.278 ms
64 bytes from 250.0.27.3: icmp_seq=3 ttl=62 time=0.260 ms
--- 250.0.27.3 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1998ms
rtt min/avg/max/mdev = 0.260/0.367/0.563/0.138 ms
root@261ae39d90db:/# echo "here come the bits" | nc 250.0.27.3 9876
root@261ae39d90db:/#
─────────────────────────────────────────────────────────────────────
root@ddd943163843:/# ping -c 3 250.0.28.3
PING 250.0.28.3 (250.0.28.3) 56(84) bytes of data.
64 bytes from 250.0.28.3: icmp_seq=1 ttl=62 time=0.434 ms
64 bytes from 250.0.28.3: icmp_seq=2 ttl=62 time=0.258 ms
64 bytes from 250.0.28.3: icmp_seq=3 ttl=62 time=0.269 ms
--- 250.0.28.3 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1998ms
rtt min/avg/max/mdev = 0.258/0.320/0.434/0.081 ms
root@ddd943163843:/# nc -l 9876
here come the bits
Alright, so now let's really bake your noodle...
That 250.0.0.0/8 network can actually be any /8 network. It could be a 10.* network or any other /8 that you choose. I've chosen to use something in the reserved Class E range, 240.* - 255.* so as not to conflict with any other routable network.
Finally, let's test the performance a bit using iperf and Amazon's 10gbps instances!
So I fired up two c4.8xlarge instances, and configured the fan bridge there.
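The numbers themselves aren't reproduced here, but the benchmark is just stock iperf between a container on each host (addresses follow the demo above):

$ iperf -s                  # inside a container on one host
$ iperf -c 250.0.27.3       # inside a container on the other host, pointed at the listener's fan address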
Multiple containers, on separate hosts, directly addressable to one another with nothing more than a single network device on each host. Deterministic routes. Blazing fast speeds. No distributed databases. No consensus protocols. Not an SDN. This is just amazing!
RFC
Give it a try and let us know what you think! We'd love to get your feedback and use cases as we work the kernel and userspace changes upstream.
Over the next few weeks, you'll see the fan patches landing in Wily, and backported to Trusty and Vivid. We are also drafting an RFC, as we think that other operating systems and the container world and the Internet at large would benefit from Fan Networking.
Dustin Kirkland (Twitter, LinkedIn) is an engineer at heart, with a penchant for reducing complexity and solving problems at the cross-sections of technology, business, and people.
With a degree in computer engineering from Texas A&M University (2001), his full-time career began as a software engineer at IBM in the Linux Technology Center, working on the Linux kernel and security certifications, including a one-year stint as a dedicated engineer-in-residence at Red Hat in Boston (2005). Dustin was awarded the title Master Inventor at IBM, in recognition of his prolific patent work as an inventor and reviewer with IBM's intellectual property attorneys.
Dustin first joined Canonical (2008) as an engineer (and eventually an engineering manager), helping create the Ubuntu Server distribution and establishing Ubuntu as the overwhelming favorite Linux distribution in Amazon, Google, and Microsoft's cloud platforms, as well as authoring and maintaining dozens of new open source packages.
Dustin joined Gazzang (2011), a venture-backed start-up built around an open source project that he co-authored (eCryptFS), as Chief Technology Officer, and helped dozens of enterprise customers encrypt their data at rest and securely manage their keys. Gazzang was acquired by Cloudera (2014).
Having effectively monetized eCryptFS as an open source project at Gazzang, Dustin returned to Canonical (2013) as the VP of Product for Ubuntu and spent the next several years launching a portfolio of products and services (Ubuntu Advantage, Extended Security Maintenance, Canonical Livepatch, MAAS, OpenStack, Kubernetes) that continues to deliver considerable annual recurring revenue. With Canonical based in London, an 800+ work-from-home employee roster, and customers spread across 40+ countries, Dustin traveled the world over, connecting with clients and colleagues and steeping himself in rich cultural experiences.
Google Cloud (2018) recruited Dustin from Canonical to product manage Google's entrance into on-premises data centers with its GKE On-Prem (now, Anthos) offering, with a specific focus on the underlying operating system, hypervisor, and container security. This work afforded Dustin a view deep into the back-end data centers of many financial services companies, where he still sees tremendous opportunities for improvements in security, efficiency, cost reduction, and disruptive new technology adoption.
Seeking a growth-mode opportunity in the fintech sector, Dustin joined Apex Clearing (now, Apex Fintech Solutions) as the Chief Product Officer (2019), where he led several organizations including product management, field engineering, data science, and business partnerships. He drastically revamped Apex's product portfolio and product management processes, retooling away from a legacy "clearing house and custodian", and into a "software-as-a-service fintech" offering instant brokerage account opening, real-time fractional stock trading, a secure closed-network crypto solution, and led the acquisition and integration of Silver's tax and cost basis solution.
Drawn back into a large cap, Dustin joined Goldman Sachs (2021) as a Managing Director and Head of Platform Product Management, within the Consumer banking division, which included Marcus, and the Apple and GM credit cards. He built a cross-functional product management community and established numerous documented product management best practices, processes, and anti-patterns.
Dustin lives in Austin, Texas, with his wife Kim and their wonderful two daughters.