Tuesday, December 22, 2015

How many people in the world use Ubuntu? More than anyone actually knows!

People of earth, waving at Saturn, courtesy of NASA.
“It Doesn't Look Like Ubuntu Reached Its Goal Of 200 Million Users This Year”, says Michael Larabel of Phoronix, in a post he seems to have been itching to publish for months.

Why the negativity?!? Are you sure? Did you count all of them?

No one has.

How many people in the world use Ubuntu?

Actually, no one can count all of the Ubuntu users in the world!

Canonical, unlike Apple, Microsoft, Red Hat, or Google, does not require each user to register their installation of Ubuntu.

Of course, you can buy laptops preloaded with Ubuntu from Dell, HP, Lenovo, and Asus.  And there are millions of them out there.  And you can buy servers powered by Ubuntu from IBM, Dell, HP, Cisco, Lenovo, and Quanta, as well as hardware compatible with the OpenCompute Project.

In 2011, hardware sales might have been how Mark Shuttleworth hoped to reach 200M Ubuntu users by 2015.

But in reality, hundreds of millions of PCs, servers, devices, virtual machines, and containers have booted Ubuntu to date!

Let's look at some facts...
  • Docker users have launched Ubuntu images over 35.5 million times.
  • HashiCorp's Vagrant images of Ubuntu 14.04 LTS 64-bit have been downloaded 10 million times.
  • At least 20 million unique instances of Ubuntu have launched in public clouds, private clouds, and on bare metal in 2015 alone.
    • That's Ubuntu in clouds like AWS, Microsoft Azure, Google Compute Engine, Rackspace, Oracle Cloud, VMware, and others.
    • And that's Ubuntu in private clouds like OpenStack.
    • And Ubuntu at scale on bare metal with MAAS, often managed with Chef.
  • In fact, over 2 million new Ubuntu cloud instances launched in November 2015.
    • That's 67,000 new Ubuntu cloud instances launched per day.
    • That's 2,800 new Ubuntu cloud instances launched every hour.
    • That's 46 new Ubuntu cloud instances launched every minute.
    • That's nearly one new Ubuntu cloud instance launched every single second of every single day in November 2015.
  • And then there are Ubuntu phones from Meizu.
  • And more Ubuntu phones from BQ.
  • Of course, anyone can install Ubuntu on their Google Nexus tablet or phone.
  • Or buy a converged tablet/desktop preinstalled with Ubuntu from BQ.
  • Oh, and the Tesla entertainment system?  All electric Ubuntu.
  • Google's self-driving cars?  They're self-driven by Ubuntu.
  • George Hotz's home-made self-driving car?  It's a homebrewed Ubuntu autopilot.
  • Snappy Ubuntu downloads and updates for Raspberry Pis and BeagleBone Blacks -- the response has been tremendous.  Download numbers are astounding.
  • Drones, robots, network switches, smart devices, the Internet of Things.  More Snappy Ubuntu.
  • How about Walmart?  Everyday low prices.  Everyday Ubuntu.  Lots and lots of Ubuntu.
  • Are you orchestrating containers with Kubernetes or Apache Mesos?  There's plenty of Ubuntu in there.
  • Kicking PaaS with Cloud Foundry?  App instances are Ubuntu LXC containers.  Pivotal has lots of serious users.
  • And Heroku?  You bet your PaaS those hosted application containers are Ubuntu.  Plenty of serious users here too.
  • Tianhe-2, the world's fastest supercomputer.  Merely 80,000 Xeons, 1.4 PB of memory, 12.4 PB of disk, all number crunching on Ubuntu.
  • Ever watch a movie on Netflix?  You were served by Ubuntu.
  • Ever hitch a ride with Uber or Lyft?  Your mobile app is talking to Ubuntu servers on the backend.
  • Did you enjoy watching The Hobbit?  Hunger Games?  Avengers?  Avatar?  All rendered on Ubuntu at WETA Digital.  Among many others.
  • Do you use Instagram?  Say cheese!
  • Listen to Spotify?  Music to my ears...
  • Doing a deal on Wall Street?  Ubuntu is serious business for Bloomberg.
  • Paypal, Dropbox, Snapchat, Pinterest, Reddit. Airbnb.  Yep.  More Ubuntu.
  • Wikipedia and Wikimedia, among the busiest sites on the Internet with 8 - 18 billion page views per month, are hosted on Ubuntu.
How many "users" of Ubuntu are there ultimately?  I bet there are over a billion people today, using Ubuntu -- both directly and indirectly.  Without a doubt, there are over a billion people on the planet benefiting from the services, security, and availability of Ubuntu today.
  • More people use Ubuntu than we know.
  • More people use Ubuntu than you know.
  • More people use Ubuntu than they know.
More people use Ubuntu than anyone actually knows.

Because of who we all are.

:-Dustin

Thursday, November 5, 2015

LXD in the Sky with Diamonds


Picture yourself containers on a server
With systemd trees and spawned tty's
Somebody calls you, you answer quite quickly
A world with the density so high

    - Sgt. Graber's LXD Smarts Club Band

Last week, we proudly released Ubuntu 15.10 (Wily) -- the final developer snapshot of the Ubuntu Server before we focus the majority of our attention on quality, testing, performance, documentation, and stability for the Ubuntu 16.04 LTS cycle in the next 6 months.

Notably, LXD has been promoted to the Ubuntu Main archive, now commercially supported by Canonical.  That has enabled us to install LXD by default on all Ubuntu Servers, from 15.10 forward.

Join us for an interactive, live webinar on November 12th at 5pm BST / 12pm EST, led by James Page, where he will demonstrate LXD as the fastest hypervisor in OpenStack!

That means that every Ubuntu server -- Intel, AMD, ARM, POWER, and even Virtual Machines in the cloud -- is now a full machine container hypervisor, capable of hosting hundreds of machine containers, right out of the box!

LXD in the Sky with Diamonds!  Well, LXD is in the Cloud with Diamond level support from Canonical, anyway.  You can even test it in your web browser here.

The development tree of Xenial (Ubuntu 16.04 LTS) has already inherited this behavior, and we will celebrate this feature broadly through our use of LXD containers in Juju, MAAS, and the reference platform of Ubuntu OpenStack, as well as the new nova-lxd hypervisor in the OpenStack Autopilot within Landscape.

While the young and the restless are already running Wily Ubuntu 15.10, the bold and the beautiful are still bound to their Trusty Ubuntu 14.04 LTS servers.

At Canonical, we understand both motivations, and this is why we have backported LXD to the Trusty archives, for safe, simple consumption and testing of this new generation of machine containers there, on your stable LTS.

Installing LXD on Trusty simply requires enabling the trusty-backports pocket, and installing the lxd package from there, with these 3 little commands:

sudo sed -i -e "/trusty-backports/ s/^# //" /etc/apt/sources.list
sudo apt-get update; sudo apt-get dist-upgrade -y
sudo apt-get -t trusty-backports install lxd
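
Once those commands complete, a quick sanity check (my own suggestion, not part of the original instructions) confirms that the lxd package really was pulled from the trusty-backports pocket:

apt-cache policy lxd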

In minutes, you can launch your first LXD containers.  First, inherit your new group permissions, so you can execute the lxc command as your non-root user.  Then, import some images, and launch a new container named lovely-rita.  Shell into that container, and examine the process tree, install some packages, check the disk and memory and cpu available.  Finally, exit when you're done, and optionally delete the container.

newgrp lxd
lxd-images import ubuntu --alias ubuntu
lxc launch ubuntu lovely-rita
lxc list
lxc exec lovely-rita bash
  ps -ef
  apt-get update
  df -h
  free
  cat /proc/cpuinfo
  exit
lxc delete lovely-rita

I was able to run over 600 containers simultaneously on my Thinkpad (x250, 16GB of RAM), and over 60 containers on an m1.small in Amazon (1.6GB of RAM).
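
If you'd like to get a feel for that density yourself, a trivial loop like the following (my own sketch, not from the original post) launches a batch of containers and then shows how many are running and how much memory remains; adjust the count to suit your hardware:

# Launch 50 containers from the ubuntu image imported above
for i in $(seq 1 50); do
    lxc launch ubuntu test-${i}
done
# Count the running containers, and check remaining memory
lxc list | grep -c RUNNING
free -m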

We're very interested in your feedback, as LXD is one of the most important features of the Ubuntu 16.04 LTS.  You can learn more about LXD, view the source code, file bugs, discuss on the mailing list, and peruse the Linux Containers upstream projects.

With a little help from my friends!
:-Dustin

Wednesday, September 30, 2015

Ubuntu and XOR.DDoS -- Nothing to see here


I woke this morning to a series of questions about a somewhat sensationalist article published by ZDNet: Linux-powered botnet generates giant denial-of-service attacks

All Linux distributions -- Ubuntu, Red Hat, and others -- enable SSH for remote server login.  That’s just a fact of life in a Linux-powered, cloud and server world.  SSH is by far the most secure way to administer a Linux machine remotely, as it leverages both strong authentication and encryption technology, and is actively reviewed and maintained for security vulnerabilities.

However, in Ubuntu, we have never in 11 years asked a user to set a root password by default, and as of Ubuntu 14.04 LTS, we now explicitly disable root password logins over SSH.

Any Ubuntu machine that might be susceptible to this XOR.DDoS attack is in a very small minority of the millions of Ubuntu systems in the world.  Specifically, a vulnerable Ubuntu machine has been individually and manually configured by its administrator to:

  1. permit SSH root password authentication, AND
  2. have set a root password, AND
  3. have chosen a poor quality root password that is subject to a brute force attack 

A poor password is generally a simple dictionary word, or a short password without numbers, mixed case, or symbols.
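
If you'd like to verify where your own server stands on the first two conditions above, a quick check like the following (a sketch of my own, not from the original article) dumps sshd's effective configuration:

# Show the effective settings for root login and password authentication
sudo sshd -T | grep -Ei 'permitrootlogin|passwordauthentication'

On a default Ubuntu 14.04 LTS install, root password logins over SSH are already disabled; if your output says otherwise, tighten /etc/ssh/sshd_config and restart the ssh service.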

Moreover, the antivirus software ClamAV is freely available in Ubuntu (sudo apt-get install clamav), and is able to detect and purge XOR.DDoS from any affected system.
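
For example, a scan might look something like this (a minimal sketch; the paths you scan are up to you, and the definition update may already happen automatically via the freshclam service):

sudo apt-get install clamav
sudo freshclam                      # refresh the virus definitions
sudo clamscan -r --infected /home   # recursively scan, printing only infected files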

As a reminder, it’s important to:


For an exhaustive review of all Ubuntu security features, please refer to:



Cheers,
Dustin

Monday, September 28, 2015

Container Summit Presentation and Live LXD Demo


I delivered a presentation and an exciting live demo in San Francisco this week at the Container Summit (organized by Joyent).

It was professionally recorded by the A/V crew at the conference.  The live demo begins at the 25:21 mark.


You can also find the slide deck embedded below and download the PDFs from here.


Cheers,
:-Dustin

Tuesday, August 25, 2015

An Open Letter to Google Nest (was: OMG, F*CKING NEST PROTECT AGAIN!)

[Updates (1) and (2) at the bottom of the post]

It's 01:24am, on Tuesday, August 25, 2015.  I am, again, awake in the middle of the night, due to another false alarm from Google's spitefully sentient, irascibly ignorant Nest Protect "smart" smoke alarm system.

Exactly how I feel right now.  Except I'm in my pajamas.
Warning: You'll find very little profanity on this blog.  However, the filter is off for this post.  Apologies in advance.

ARRRRRRRRRRRRRRRRRGGGGGGGGGHHHHHHHHHHH!!!!!!!!!!!
Oh.
My.
God.
FOR FUCK'S SAKE.

"Heads up, there's smoke in the kids room," she says.  Not once, but 3 times in a 30 minute period, between 12am and 1am, last night.


That's my alarm clock.  Right now.  I'm serious.
"Heads up, there's smoke in the guest bedroom," she says again tonight a few minutes ago, at 12:59am.

There was in fact never any smoke to clear.
Is it possible for anything to wake you up more seriously and violently, in a cold panic, than a smoke alarm detecting something amiss in your 2-year-old's bedroom?

Here's what happens (each time)...

Every Nest Protect unit in the house announces, in unison, "Heads up, there's smoke in the kids' room."  Then both my phone and my wife's phone buzz on our nightstands, with the urgent incoming message from the Nest app.  Another few seconds pass, and another set of alarms arrives, this time delivered by email, in case you missed the first two.

The first and second time it happens, you jump up immediately.  You run into their room and make sure everyone is okay -- both the infant in the crib and toddler who's into everything.  You walk the whole house, checking the oven, the stove, the toaster, the computer equipment.  You open the door and check around outside.  When everything is okay, you're left with a tingling in the back of your mind, wondering what went wrong.  When you're a computer engineer by trade, you're trying to debug the hardware and/or software bug causing the false positive.  Then you set about trying to calm your family and get them back into bed.  And at some point later, you calm your own nerves and try to get some sleep.  It's a work night after all.

But the third, fourth, and fifth time it happens?  From 3 different units?

Well, it never ceases to scare the ever living shit out of you, waking up out of deep sleep, your mind racing, assessing the threat.

But then, reality kind of sets in.  It's just the stupid Nest Protect fucking it all up again.

Roll over, go back to bed, and pray that the full alarm doesn't sound this time, waking up both kids and setting us up for a really bad night and next few days at school.

It's not over yet, though.  You then wait for the same series of messages announcing the all clear -- first the bitch over the loudspeaker, followed by the Android app notification, then the email -- each with the same message:  "Caution, the smoke is clearing..."

THERE WAS NEVER ANY FUCKING SMOKE, YOU STUPID CYBORG. 

20 years later, and the smartest company in the world
creates a smoke detector that broadcasts the IoT equivalent
of PC LOAD LETTER to your smart home, mobile app, and email.
But not this time.  I'm not rolling over.  I'm here, typing with every ounce of anger this Thinkpad can muster.  I'm mashing these keys in the guest bedroom that's supposedly on fire.  I can most assuredly tell you that it's a comfy 72 F, and that the air is as clean as a summer breeze.

I'm writing this, hoping that someone, somewhere, hears how disturbingly defective and dangerously disingenuous this product actually is.

It has one job to do.  Detect and report smoke.  And it's unable to do that effectively.  If it can't reliably detect normality, what confidence should I have that it'll actually detect an emergency if that dreaded day ever comes?

The sad, sobering reality is: zero.  I have zero confidence whatsoever in the Nest Protect.

What's worse is that I'm embarrassed to say I've been duped into buying 7 (yes, seven) of these broken pieces of shit, at $99 apiece.  I'm a pretty savvy technical buyer, and admittedly a pretty magnanimous early adopter.  But while I'm accepting of beta versions of gadgets and gizmos, I am entirely unforgiving when it comes to the safety and livelihood of my family and guests.

Michael Larabel of Phoronix recounts his similar experience here.  He destroyed one with a sledgehammer, which might provide me with some catharsis when (not if, but when) this happens again.

Michael Larabel of Phoronix destroyed his malfunctioning Nest Protect
with a 20 lb sledgehammer, to silence the false alarm in the middle of the night
There's a sad, long thread on Nest's customer support forum, calling for a better "silence" feature.  I'm sorry, that's just wrong.  The solution is not a better way to "silence" false positives.  Root out the false positives to begin with.  Or recall the hardware.  Tut, tut, tut.

You can't be serious...
This is from me to Google and Nest on behalf of thousands of trusting families out there:  You have the opportunity, and ultimately the obligation.  Please make this right.  Whatever that means, you owe the world that.
  • Ship working firmware.
  • Recall faulty hardware.
  • Refund the product.
Okay, the impassioned rant is over.  Time for data.  Here is the detailed, distressing timeline.
  • January 2015: I installed 5 Nest Protects: one in each of two kids' rooms, the master bedroom, the hallway, and the kitchen/living room
  • February 2015: While on a business trip to South Africa, I received notification via email and the Nest App that there was a smoke emergency at my home, half a world away, with my family in bed for the night.  My wife called me immediately -- in the middle of the night in Texas.  My heart raced.  She assured me it was a false alarm, and that she had two screaming kids awake from the noise.  I filed a support ticket with Nest (ref:_00D40Mlt9._50040jgU8y:ref) and tried to assure my wife that it was just a glitch and that I'd fix it when I got home.

  • May 23, 2015: We thought it was funny enough to post to Facebook, "When Nest mistakes a diaper change for a fire, that's one impressive poop, kiddo!"  Not so funny now.


  • August 9, 2015: I installed 2 more Nest Protects, in the guest bedroom and my office
  • August 21, 2015, 11:26am: While on a flight home from another business trip, I received another set of daytime warnings about smoke in the house.  Another false alarm.
  • August 24, 2015, 12am: While asleep, I received another 3 false alarms.
  • August 25, 2015, 1am: Again, asleep, another false alarm.  Different room, different unit.  I'm fucking done with these.
I'm counting on you Google/Nest.  Please make it right.

Burning up but not on fire,
Dustin

Update #1: I was contacted directly by email and over Twitter by Nest's "Executive Relations", who offered to replace all 7 of my "v1" Nest Protects with 7 new "v2" Nest Protects, at no charge.  The new "v2" Protect reportedly has an improved design with a better photoelectric detector that reduces false positives.  I was initially inclined to try the new "v2" Protects; however, neither the mounting bracket nor the wiring harness is compatible from v1 to v2, so I would have to replace all of the brackets and redo all of the wiring myself.  I asked, but Nest would not cover the cost of a professional (re-)installation.  At that point, I expressed my disappointment in this alternative, and I was offered a full refund, in 4-6 weeks time, after I return the 7 units.  I've accepted this solution and will replace the Nest Protects with a simpler, more reliable traditional smoke detector.
Update #2: I suppose I should mention that I generally like my Nest Thermostat and (3) Dropcams.  This blog post is really only complaining about the Titanic disaster that is the Nest Protect.

Wednesday, August 12, 2015

Ubuntu and LXD at ContainerCon 2015


Canonical is delighted to sponsor ContainerCon 2015, a Linux Foundation event in Seattle next week, August 17-19, 2015. It's quite exciting to see the A-list of sponsors, many of them newcomers to this particular technology, teeming with energy around containers.

From chroots to BSD Jails and Solaris Zones, the concepts behind containers were established decades ago, and in fact traverse the spectrum of server operating systems. At Canonical, we've been working on containers in Ubuntu for more than half a decade, providing a home and resources for stewardship and maintenance of the upstream Linux Containers (LXC) project since 2010.

Last year, we publicly shared our designs for LXD -- a new stratum on top of LXC that brings the advantages of a traditional hypervisor to the faster, more efficient world of containers.

Those designs are now reality, with the open source Golang code readily available on GitHub, Ubuntu packages available in a PPA for all supported releases of Ubuntu, and the package already included in the Ubuntu 15.10 beta development tree. With ease, you can launch your first LXD containers in seconds, following this simple guide.

LXD is a persistent daemon that provides a clean RESTful interface to manage (start, stop, clone, migrate, etc.) any of the containers on a given host.
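
As a rough sketch of what that looks like in practice (socket path and endpoints as they stood at the time of writing, and assuming a curl new enough to support --unix-socket -- check the current documentation before relying on them):

# Query the LXD daemon's REST API directly over its local unix socket
# (requires membership in the lxd group, or root)
curl --unix-socket /var/lib/lxd/unix.socket http://localhost/1.0
curl --unix-socket /var/lib/lxd/unix.socket http://localhost/1.0/containers
# The lxc command line client is simply a friendly wrapper around the same API
lxc list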

Hosts running LXD are handily federated into clusters of container hypervisors, and can work as Nova Compute nodes in OpenStack, for example, delivering Infrastructure-as-a-Service cloud technology at lower costs and greater speeds.

Here, LXD and Docker are quite complementary technologies. LXD furnishes a dynamic platform for "system containers" -- containers that behave like physical or virtual machines, supplying all of the functionality of a full operating system (minus the kernel, which is shared with the host). Such "machine containers" are the core of IaaS clouds, where users focus on instances with compute, storage, and networking that behave like traditional datacenter hardware.

LXD runs perfectly well alongside Docker, which supplies a framework for "application containers" -- containers that enclose individual processes that often relate to one another as pools of microservices and deliver complex web applications.

Moreover, the Zen of LXD is the fact that the underlying container implementation is actually decoupled from the RESTful API that drives LXD functionality. We are most excited to discuss next week at ContainerCon our work with Microsoft around the LXD RESTful API, as a cross-platform container management layer.

Ben Armstrong, a Principal Program Manager Lead at Microsoft for core virtualization and container technologies, has this to say:
“As Microsoft is working to bring Windows Server Containers to the world – we are excited to see all the innovation happening across the industry, and have been collaborating with many projects to encourage and foster this environment. Canonical’s LXD project is providing a new way for people to look at and interact with container technologies. Utilizing ‘system containers’ to bring the advantages of container technology to the core of your cloud infrastructure is a great concept. We are looking forward to seeing the results of our engagement with Canonical in this space.”
Finally, if you're in Seattle next week, we hope you'll join us for the technical sessions we're leading at ContainerCon 2015, including: "Putting the D in LXD: Migration of Linux Containers", "Container Security - Past, Present, and Future", and "Large Scale Container Management with LXD and OpenStack". Details are below.
Date: Monday, August 17 • 2:20pm - 3:10pm
Title: Large Scale Container Management with LXD and OpenStack
Speaker: Stéphane Graber
Abstract: http://sched.co/3YK6
Location: Grand Ballroom B
Schedule: http://sched.co/3YK6
Date: Wednesday, August 19 • 10:25am - 11:15am
Title: Putting the D in LXD: Migration of Linux Containers
Speaker: Tycho Andersen
Abstract: http://sched.co/3YTz
Location: Willow A
Schedule: http://sched.co/3YTz
Date: Wednesday, August 19 • 3:00pm - 3:50pm
Title: Container Security - Past, Present and Future
Speaker: Serge Hallyn
Abstract: http://sched.co/3YTl
Location: Ravenna
Schedule: http://sched.co/3YTl
Cheers,
Dustin

Monday, August 10, 2015

The Golden Ratio calculated to a record 2 trillion digits, on Ubuntu, in the Cloud!

The Golden Ratio is one of the oldest and most visible irrational numbers known to humanity.  Pi is perhaps more famous, but the Golden Ratio is found in more of our art, architecture, and culture throughout human history.

I think of the Golden Ratio as sort of "Pi in 1 dimension".  Whereas Pi is the ratio of a circle's circumference to its diameter, the Golden Ratio is the ratio of a whole to its larger part, when that ratio equals the ratio of the larger part to the smaller part.

Visually, this diagram from Wikipedia helps explain it:


We find the Golden Ratio in the architecture of antiquity, from the Egyptians to the Greeks to the Romans, right up to the Renaissance and even modern times.



While the bases of the pyramids are squares, the Golden Ratio can be observed in the base and the hypotenuse of a basic triangular cross section, like so:


The floor plan of the Parthenon has a width/depth ratio matching the Golden Ratio...



For the first 300 years of printing, nearly all books were printed on pages whose length to width ratio matched that of the Golden Ratio.

Leonardo da Vinci used the Golden Ratio throughout his works.  I'm told that his Vitruvian Man displays the Golden Ratio...


From school, you probably remember that the Golden Ratio is approximately 1.6 (and change).
There's a strong chance that your computer or laptop monitor has a 16:10 aspect ratio.  Does 1280x800 or 1680x1050 sound familiar?



That ~1.6 number is only an approximation, of course.  The Golden Ratio is in fact an irrational number, and can be calculated to much greater precision through several different representations, the simplest of which is the closed form:

φ = (1 + √5) / 2

You can plug that number into your computer's calculator and crank out a dozen or so significant digits.
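
Or, for a few more digits without any special software, a bc one-liner (my own aside, not part of the y-cruncher toolchain) will do:

$ echo "scale=50; (1 + sqrt(5)) / 2" | bc -l
1.61803398874989484820458683436563811772030917980576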


However, if you want to go much farther than that, Alexander Yee has created a program called y-cruncher, which has been used to calculate most of the famous constants to world record precision.  (Sorry, free software readers of this blog -- y-cruncher is not open source code...)

I came across y-cruncher a few weeks ago when I was working on the mprime post, demonstrating how you can easily put any workload into a Docker container and then package it as both a Juju Charm and an Ubuntu Snap.  While I opted to use mprime in that post, I saved y-cruncher for this one :-)

Also, while doing some network benchmark testing of the Fan networking among Docker containers, I experimented for the first time with some of Amazon's biggest instances, which have dedicated 10 Gbps network links.  While I had a couple of those instances up, I did some small scale benchmarking of y-cruncher.

Presently, none of the mathematical constant records are even remotely approachable with CPU and memory alone.  All of them require multiple terabytes of disk, which act as a sort of swap space for temporary files, as bits are moved in and out of memory while the CPU crunches.  As such, approaching these records is overwhelmingly I/O bound -- not CPU or memory bound, as you might imagine.

After a variety of tests, I settled on the AWS d2.2xlarge as the most affordable instance size for breaking the previous Golden Ratio record (1 trillion digits, by Alexander Yee on his gaming PC in 2010).  I say "affordable", in that I could have cracked that record "2x faster" with a d2.4xlarge or d2.8xlarge; however, I would have paid much more (4x) for the total instance hours.  This was purely an economic decision :-)


Let's geek out on technical specifications for a second...  So what's in a d2.2xlarge?
  • 8x Intel Xeon CPUs (E5-2676 v3 @ 2.4GHz)
  • 60GB of Memory
  • 6x 2TB HDDs
First, I arranged all 6 of those 2TB disks into a RAID0 with mdadm, and formatted it with xfs (which performed better than ext4 or btrfs in my cursory tests).

$ sudo mdadm --create --verbose /dev/md0 --level=stripe --raid-devices=6 /dev/xvd?
$ sudo mkfs.xfs /dev/md0
$ sudo mount /dev/md0 /mnt
$ df -h /mnt
/dev/md0         11T   34M   11T   1% /mnt

Here's a brief look at raw read performance with hdparm:

$ sudo hdparm -tT /dev/md0
 Timing cached reads:   21126 MB in  2.00 seconds = 10576.60 MB/sec
 Timing buffered disk reads: 1784 MB in  3.00 seconds = 593.88 MB/sec

The beauty of RAID0 here is that each of the 6 disks can be used to read and/or write simultaneously, perfectly in parallel.  600 MB/sec is a pretty quick read rate by any measure!  In fact, when I tested the d2.8xlarge, I put all 24x 2TB disks into the same RAID0 and saw nearly 2.4 GB/sec read performance across that 48TB array!

With /dev/md0 mounted on /mnt and writable by my ubuntu user, I kicked off y-cruncher with these parameters:

Program Version:       0.6.8 Build 9461 (Linux - x64 AVX2 ~ Airi)
Constant:              Golden Ratio
Algorithm:             Newton's Method
Decimal Digits:        2,000,000,000,000
Hexadecimal Digits:    1,660,964,047,444
Threading Mode:        Thread Spawn (1 Thread/Task)  ? / 8
Computation Mode:      Swap Mode
Working Memory:        61,342,174,048 bytes  ( 57.1 GiB )
Logical Disk Usage:    8,851,913,469,608 bytes  ( 8.05 TiB )

Byobu was very handy here, being able to track in the bottom status bar my CPU load, memory usage, disk usage, and disk I/O, as well as connecting and disconnecting from the running session multiple times over the 4 days of running.


And approximately 79 hours later, it finished successfully!

Start Date:            Thu Jul 16 03:54:11 2015
End Date:              Sun Jul 19 11:14:52 2015

Computation Time:      221548.583 seconds
Total Time:            285640.965 seconds

CPU Utilization:           315.469 %
Multi-core Efficiency:     39.434 %

Last Digits:
5027026274 0209627284 1999836114 2950866539 8538613661  :  1,999,999,999,950
2578388470 9290671113 7339871816 2353911433 7831736127  :  2,000,000,000,000

Amazingly, another person (whom I don't know), named Ron Watkins, performed the exact same computation and published his results within 24 hours, on July 22nd/23rd.  As such, Ron and I are "sharing" credit for the Golden Ratio record.


Now, let's talk about the economics here, which I think are the most interesting part of this post.

Look at the chart of records above, as published on the y-cruncher page: the vast majority of those have been calculated on physical PCs -- most of them seem to be gaming PCs running Windows.

What's different about my approach is that I used Linux in the Cloud -- specifically Ubuntu in AWS.  I paid hourly (actually, my employer, Canonical, reimbursed me for that expense, thanks!).  It took right at 160 hours to run the initial calculation (79 hours) as well as the verification calculation (81 hours), at the current rate of $1.38/hour for a d2.2xlarge, which is a grand total of $220!

$220 is a small fraction of the cost of 6x 2TB disks, 60 GB of memory, or 8 Xeon cores, not to mention the electricity and cooling required to run a system of this size (~750W) for 160 hours.

If we say the first trillion digits were already known from the previous record, that comes out to approximately 4.5 billion record-digits per dollar, and 12.5 billion record-digits per hour!
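
For the curious, those figures fall straight out of the 1 trillion new digits, the $220 bill, and the ~80 hours of the initial computation (my own back-of-the-envelope check):

$ echo "10^12 / 220" | bc
4545454545
$ echo "10^12 / 80" | bc
12500000000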

Hopefully you find this as fascinating as I do!

Cheers,
:-Dustin

Tuesday, July 28, 2015

Appellation of Origin: FROM ubuntu

tl;dr:  Your Ubuntu-based container is not a copyright violation.  Nothing to see here.  Carry on.
I am speaking for my employer, Canonical, when I say you are not violating our policies if you use Ubuntu with Docker in sensible, secure ways.  Some have claimed otherwise, but that’s simply sensationalist and untrue.

Canonical publishes Ubuntu images for Docker specifically so that they will be useful to people. You are encouraged to use them! We see no conflict between our policies and the common sense use of Docker.

Going further, we distribute Ubuntu in many different signed formats -- ISOs, root tarballs, VMDKs, AMIs, IMGs, Docker images, among others.  We take great pride in this work, and provide them to the world at large, on ubuntu.com, in public clouds like AWS, GCE, and Azure, as well as in OpenStack and on DockerHub.  These images, and their signatures, are mirrored by hundreds of organizations all around the world. We would not publish Ubuntu in the DockerHub if we didn’t hope it would be useful to people using the DockerHub. We’re delighted for you to use them in your public clouds, private clouds, and bare metal deployments.

Any Docker user will recognize these, as the majority of all Dockerfiles start with these two words....

FROM ubuntu

In fact, we gave away hundreds of these t-shirts at DockerCon.


We explicitly encourage distribution and redistribution of Ubuntu images and packages! We also embrace a very wide range of community remixes and modifications. We go further than any other commercially supported Linux vendor to support developers and community members scratching their itches. There are dozens of such derivatives and many more commercial initiatives based on Ubuntu - we are definitely not trying to create friction for people who want to get stuff done with Ubuntu.

Our policy exists to ensure that when you receive something that claims to be Ubuntu, you can trust that it will work to the same standard, regardless of where you got it from. And people everywhere tell us they appreciate that - when they get Ubuntu on a cloud or as a VM, it works, and they can trust it.  That concept is actually hundreds of years old, and we’ll talk more about that in a minute....


So, what do I mean by “sensible use” of Docker? In short - secure use of Docker. If you are using a Docker container then you are effectively giving the producer of that container ‘root’ on your host. We can safely assume that people sharing an Ubuntu-based Docker container know and trust one another, and their use of Ubuntu is explicitly covered as personal use in our policy. If you trust someone to give you a Docker container and have root on your system, then you can handle the risk that they inadvertently or deliberately compromise the integrity or reliability of your system.

Our policy distinguishes between personal use, which we can generalise to any group of collaborators who share root passwords, and third party redistribution, which is what people do when they exchange OS images with strangers.

Third party redistribution is more complicated because, when things go wrong, there’s a real question as to who is responsible for it. Here’s a real example: a school district buys laptops for all their students with free software. A local supplier takes their preferred Linux distribution and modifies parts of it (like the kernel) to work on their hardware, and sells them all the PCs. A month later, a distro kernel update breaks all the school laptops. In this case, the Linux distro, which was not involved, gets all the bad headlines, and the free software advocates who promoted the whole idea end up with egg on their faces.

We’ve seen such cases in real hardware, and in public clouds and other, similar environments.  Digital Ocean very famously published some modified and very broken Ubuntu images, outside of Canonical's policies.  That's inherently wrong, and easily avoidable.

So we simply say, if you’re going to redistribute Ubuntu to third parties who are trusting both you and Ubuntu to get it right, come and talk to Canonical and we’ll work out how to ensure everybody gets what they want and need.

Here’s a real exercise I hope you’ll try...

  1. Head over to your local purveyor of fine wines and liquors.
  2. Pick up a nice bottle of Champagne, Single Malt Scotch Whisky, Kentucky Straight Bourbon Whiskey, or my favorite -- a rare bottle of Lambic Oude Gueze.
  3. Carefully check the label, looking for a seal of Appellation d'origine contrôlée.
  4. In doing so, that bottle should earn your confidence that it was produced according to strict quality, format, and geographic standards.
  5. Before you pop the cork, check the seal, to ensure it hasn’t been opened or tampered with.  Now, drink it however you like.
  6. Pour that Champagne over orange juice (if you must).  Toss a couple ice cubes in your Scotch (if that’s really how you like it).  Pour that Bourbon over a Coke (if that’s what you want).
  7. Enjoy however you like -- straight up or mixed to taste -- with your own guests in the privacy of your home.  Just please don’t pour those concoctions back into the bottle, shove a cork in, put them back on the shelf at your local liquor store and try to pass them off as Champagne/Scotch/Bourbon.


Rather, if that’s really what you want to do -- distribute a modified version of Ubuntu -- simply contact us and ask us first (thanks for sharing that link, mjg59).  We have some amazing tools that can help you either avoid that situation entirely, or at least do everyone a service and let us help you do it well.

Believe it or not, we’re really quite reasonable people!  Canonical has a lengthy, public track record, donating infrastructure and resources to many derivative Ubuntu distributions.  Moreover, we’ve successfully contracted mutually beneficial distribution agreements with numerous organizations and enterprises. The result is happy users and happy companies.

FROM ubuntu,
Dustin

The one and only Champagne region of France

Monday, July 20, 2015

Prime Time: Docker, Juju, and Snappy Ubuntu Core


As you probably remember from grade school math class, primes are numbers that are only divisible by 1 and themselves.  2, 3, 5, 7, and 11 are the first 5 prime numbers, for example.

Many computing operations, such as public-key cryptography, depend entirely on prime numbers.  In fact, RSA encryption, invented in 1978, uses a modulus that is the product of two very large primes for encryption and decryption.  The security of asymmetric encryption is tightly coupled with the computational difficulty of factoring large numbers.  I actually use prime numbers as the status update intervals in Byobu, in order to improve performance and distribute the update spikes.

Around 300 BC, Euclid proved that there are infinitely many prime numbers.  But the Prime Number Theorem (proven in the 19th century) says that the probability that a given number is prime is inversely proportional to its number of digits.  That means that larger prime numbers are notoriously harder to find, and it gets harder as they get bigger!
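
To put a rough number on that density (my own aside, not from the original post): near a 100-digit number n, the density of primes is about 1/ln(n), or roughly 1 in 230, which a quick bc calculation confirms:

$ echo "scale=6; 1 / (100 * l(10))" | bc -l
.004342

For a random number with over 17 million digits, the odds drop to roughly 1 in 40 million.
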
What's the largest known prime number in the world?

Well, it has 17,425,170 decimal digits!  If you wanted to print it out in a size 11 font, it would take 6,543 pages -- or 14 reams of paper!

That number is actually one less than a very large power of 2: 2^57,885,161 - 1.  It was discovered by Curtis Cooper on January 25, 2013, on an Intel Core2 Duo.

Actually, each of the last 14 record largest prime numbers discovered (between 1996 and today) has been of that form, 2^P - 1.  Numbers of that form are called Mersenne Prime Numbers, named after Friar Marin Mersenne, a French priest who studied them in the 1600s.


Friar Mersenne's work continues today in the form of the Great Internet Mersenne Prime Search, and the mprime program, which has been used to find those 14 huge prime numbers since 1996.

mprime is a massively parallel, CPU-scavenging utility, much like SETI@home or the Protein Folding Project.  It runs in the background, consuming resources, working on its little piece of the problem.  mprime is open source code, and also distributed as a statically compiled binary.  And it will make a fine example of how to package a service into a Docker container, a Juju charm, and a Snappy snap.


Docker Container

First, let's build the Docker container, which will serve as our fundamental building block.  You'll first need to download the mprime tarball from here.  Extract it, and the directory structure should look a little like this (or you can browse it here):

├── license.txt
├── local.txt
├── mprime
├── prime.log
├── prime.txt
├── readme.txt
├── results.txt
├── stress.txt
├── undoc.txt
├── whatsnew.txt
└── worktodo.txt

And then, create a Dockerfile that copies the files we need into the image.  Here's our example.

FROM ubuntu
MAINTAINER Dustin Kirkland email@example.com
COPY ./mprime /opt/mprime/
COPY ./license.txt /opt/mprime/
COPY ./prime.txt /opt/mprime/
COPY ./readme.txt /opt/mprime/
COPY ./stress.txt /opt/mprime/
COPY ./undoc.txt /opt/mprime/
COPY ./whatsnew.txt /opt/mprime/
CMD ["/opt/mprime/mprime", "-w/opt/mprime/"]

Now, build your Docker image with:

$ sudo docker build -t kirkland/mprime .
Sending build context to Docker daemon 36.02 MB
Sending build context to Docker daemon 
Step 0 : FROM ubuntu
...
Successfully built de2e817b195f

Then publish the image to Dockerhub.

$ sudo docker push kirkland/mprime

You can see that image, which I've publicly shared here: https://registry.hub.docker.com/u/kirkland/mprime/



Now you can run this image anywhere you can run Docker.

$ sudo docker run -d kirkland/mprime

And verify that it's running:

$ sudo docker ps
CONTAINER ID        IMAGE                    COMMAND                CREATED             STATUS              PORTS               NAMES
c9233f626c85        kirkland/mprime:latest   "/opt/mprime/mprime    24 seconds ago      Up 23 seconds                           furious_pike     

Juju Charm

So now, let's create a Juju Charm that uses this Docker container.  Actually, we're going to create a subordinate charm.  Subordinate services in Juju are often monitoring and logging services, things that run alongside primary services.  mprime is a good example of a workload that could run as a subordinate service, attached to one or many other services in a Juju model.

Our directory structure for the charm looks like this (or you can browse it here):

└── trusty
    └── mprime
        ├── config.yaml
        ├── copyright
        ├── hooks
        │   ├── config-changed
        │   ├── install
        │   ├── juju-info-relation-changed
        │   ├── juju-info-relation-departed
        │   ├── juju-info-relation-joined
        │   ├── start
        │   ├── stop
        │   └── upgrade-charm
        ├── icon.png
        ├── icon.svg
        ├── metadata.yaml
        ├── README.md
        └── revision
3 directories, 15 files

The three key files we should look at here are metadata.yaml, hooks/install and hooks/start:

$ cat metadata.yaml
name: mprime
summary: Search for Mersenne Prime numbers
maintainer: Dustin Kirkland 
description: |
  A Mersenne prime is a prime of the form 2^P-1.
  The first Mersenne primes are 3, 7, 31, 127
  (corresponding to P = 2, 3, 5, 7).
  There are only 48 known Mersenne primes, and
  the 13 largest known prime numbers in the world
  are all Mersenne primes.
  This charm uses a Docker image that includes the
  statically built, 64-bit Linux binary mprime
  which will consume considerable CPU and Memory,
  searching for the next Mersenne prime number.
  See http://www.mersenne.org/ for more details!
tags:
  - misc
subordinate: true
requires:
  juju-info:
    interface: juju-info
    scope: container

And:

$ cat hooks/install
#!/bin/bash
apt-get install -y docker.io
docker pull kirkland/mprime

And:

$ cat hooks/start
#!/bin/bash
service docker restart
docker run -d kirkland/mprime

Now, we can add the mprime service to any other running Juju service.  As an example here, I'll bootstrap an environment, deploy the Apache2 charm, and attach mprime to it.

$ juju bootstrap
$ juju deploy apache2
$ juju deploy cs:~kirkland/mprime
$ juju add-relation apache2 mprime

Looking at our services, we can see everything deployed and running here:

$ juju status
services:
  apache2:
    charm: cs:trusty/apache2-14
    exposed: false
    service-status:
      current: unknown
      since: 20 Jul 2015 11:55:59-05:00
    relations:
      juju-info:
      - mprime
    units:
      apache2/0:
        workload-status:
          current: unknown
          since: 20 Jul 2015 11:55:59-05:00
        agent-status:
          current: idle
          since: 20 Jul 2015 11:56:03-05:00
          version: 1.24.2
        agent-state: started
        agent-version: 1.24.2
        machine: "1"
        public-address: 23.20.147.158
        subordinates:
          mprime/0:
            workload-status:
              current: unknown
              since: 20 Jul 2015 11:58:52-05:00
            agent-status:
              current: idle
              since: 20 Jul 2015 11:58:56-05:00
              version: 1.24.2
            agent-state: started
            agent-version: 1.24.2
            upgrading-from: local:trusty/mprime-1
            public-address: 23.20.147.158
  mprime:
    charm: local:trusty/mprime-1
    exposed: false
    service-status: {}
    relations:
      juju-info:
      - apache2
    subordinate-to:
    - apache2


Snappy Ubuntu Core Snap

Finally, let's build a Snap.  Snaps are applications that run in Ubuntu's transactional, atomic OS, Snappy Ubuntu Core.

We need the simple directory structure below (or you can browse it here):

├── meta
│   ├── icon.png
│   ├── icon.svg
│   ├── package.yaml
│   └── readme.md
└── start.sh
1 directory, 5 files

The package.yaml describes what we're actually building, and what capabilities the service needs.  It looks like this:

name: mprime
vendor: Dustin Kirkland 
architecture: [amd64]
icon: meta/icon.png
version: 28.5-11
frameworks:
  - docker
services:
  - name: mprime
    description: "Search for Mersenne Prime Numbers"
    start: start.sh
    caps:
      - docker_client
      - networking

And the start.sh launches the service via Docker.

#!/bin/sh
# Make sure the Docker client shipped by the docker framework is on the PATH
PATH=$PATH:/apps/docker/current/bin/
# Clean up any stale container (and its volumes) left over from a previous run
docker rm -v -f mprime
# Launch mprime detached in the background...
docker run --name mprime -d kirkland/mprime
# ...then block on the container, so Snappy sees a long-running service
docker wait mprime

Now, we can build the snap like so:

$ snappy build .
Generated 'mprime_28.5-11_amd64.snap' snap
$ ls -halF *snap
-rw-rw-r-- 1 kirkland kirkland 9.6K Jul 20 12:38 mprime_28.5-11_amd64.snap

First, let's install the Docker framework, upon which we depend:

$ snappy-remote --url ssh://snappy-nuc install docker
=======================================================
Installing docker from the store
Installing docker
Name          Date       Version   Developer 
ubuntu-core   2015-04-23 2         ubuntu    
docker        2015-07-20 1.6.1.002           
webdm         2015-04-23 0.5       sideload  
generic-amd64 2015-04-23 1.1                 
=======================================================

And now, we can install our locally built Snap.
$ snappy-remote --url ssh://snappy-nuc install mprime_28.5-11_amd64.snap
=======================================================
Installing mprime_28.5-11_amd64.snap from local environment
Installing /tmp/mprime_28.5-11_amd64.snap
2015/07/20 17:44:26 Signature check failed, but installing anyway as requested
Name          Date       Version   Developer 
ubuntu-core   2015-04-23 2         ubuntu    
docker        2015-07-20 1.6.1.002           
mprime        2015-07-20 28.5-11   sideload  
webdm         2015-04-23 0.5       sideload  
generic-amd64 2015-04-23 1.1                 
=======================================================

Alternatively, you can install the snap directly from the Ubuntu Snappy store, where I've already uploaded the mprime snap:

$ snappy-remote --url ssh://snappy-nuc install mprime.kirkland
=======================================================
Installing mprime.kirkland from the store
Installing mprime.kirkland
Name          Date       Version   Developer 
ubuntu-core   2015-04-23 2         ubuntu    
docker        2015-07-20 1.6.1.002           
mprime        2015-07-20 28.5-11   kirkland  
webdm         2015-04-23 0.5       sideload  
generic-amd64 2015-04-23 1.1                 
=======================================================

Conclusion

How long until this Docker image, Juju charm, or Ubuntu Snap finds a Mersenne Prime?  Almost certainly never :-)  I want to be clear: that was never the point of this exercise!

Rather, I hope you learned how easy it is to run a Docker image inside either a Juju charm or an Ubuntu snap.  And maybe you learned something about prime numbers along the way ;-)

Join us in #docker, #juju, and #snappy on irc.freenode.net.

Cheers,
Dustin