From the Canyon Edge -- :-Dustin

Wednesday, February 10, 2016

Docker, Alpine, Ubuntu, and You


There's no shortage of excitement, controversy, and readership any time you can work "Docker" into a headline these days.  Perhaps a bit like "Donald Trump", but for CIO tech blogs and IT news -- a real hot button.  Hey, look, I even did it myself in the title of this post!

Sometimes an article even starts out about CoreOS, but gets diverted into a discussion about Docker, like this one, where shykes (Docker's founder and CTO) announced that Docker's default image would be moving away from Ubuntu to Alpine Linux.


I have personally been Canonical's business and technical point of contact with Docker Inc since September of 2013, when I co-presented at an OpenStack Meetup in Austin, Texas, with Ben Golub and Nick Stinemates of Docker.  I can tell you that this casual declaration, in an unrelated Hacker News thread, came as a surprise to nearly all of us in the Docker community!

Docker's default container image is certainly Docker's decision to make.  But it would be prudent to examine a few facts:

(1) Check DockerHub and you may notice that while Busybox (Alpine Linux) has surpassed Ubuntu in the number of downloads (66M to 40M), Ubuntu is still by far the most "popular" by number of "stars" -- likes, favorites, +1's, whatever (3.2K to 499).

(2) Ubuntu's compressed, minimal root tarball is 59 MB, which is what is downloaded over the Internet.  That's different from the 188 MB uncompressed root filesystem, which has been quoted a number of times in the press.

(3) The real magic of Docker is that you only ever download that base image one time!  And you only store one copy of the uncompressed root filesystem on your disk!  Just once, sudo docker pull ubuntu, on your laptop at home or work, and then launch thousands of images at a coffee shop or airport lounge with its spotty wifi.  Build derivative images, FROM ubuntu, etc., and you only ever store the incremental differences.

Actually, I encourage you to test that out yourself...  I just launched a t2.micro -- Amazon's cheapest instance type with the lowest networking bandwidth.  It took 15.938s to sudo apt install docker.io.  And it took 9.230s to sudo docker pull ubuntu.  It takes less time to download Ubuntu than to install Docker!

ubuntu@ip-172-30-0-129:~⟫ time sudo apt install docker.io -y
...
real    0m15.938s
user    0m2.146s
sys     0m0.913s

As compared to...

ubuntu@ip-172-30-0-129:~⟫ time sudo docker pull ubuntu
latest: Pulling from ubuntu
f15ce52fc004: Pull complete 
c4fae638e7ce: Pull complete 
a4c5be5b6e59: Pull complete 
8693db7e8a00: Pull complete 
ubuntu:latest: The image you are pulling has been verified. Important: image verification is a tech preview feature and should not be relied on to provide security.
Digest: sha256:457b05828bdb5dcc044d93d042863fba3f2158ae249a6db5ae3934307c757c54
Status: Downloaded newer image for ubuntu:latest
real    0m9.230s
user    0m0.021s
sys     0m0.016s

Now, sure, it takes even less time than that to download Alpine Linux (0.747s by my test), but again, you only ever do that once!  After you have your initial image, launching Docker containers takes the exact same amount of time (0.233s), and the incremental storage behavior is identical.  See:

ubuntu@ip-172-30-0-129:/tmp/docker⟫ time sudo docker run alpine /bin/true
real    0m0.233s
user    0m0.014s
sys     0m0.001s
ubuntu@ip-172-30-0-129:/tmp/docker⟫ time sudo docker run ubuntu /bin/true
real    0m0.234s
user    0m0.012s
sys     0m0.002s
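
And you can watch the incremental layering at work yourself: build a trivial derivative image, and note that the ubuntu base layers are shared, not duplicated -- only the delta is stored.  (A quick sketch; the image name and package here are arbitrary examples:)

cat > Dockerfile <<EOF
FROM ubuntu
RUN apt-get update && apt-get install -y curl
EOF
sudo docker build -t ubuntu-curl .
sudo docker history ubuntu-curl    # the shared ubuntu base layers appear once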

(4) I regularly communicate sincere, warm congratulations to our friends at Docker Inc on their continued growth.  shykes publicly mentioned the hiring of the maintainer of Alpine Linux in that Hacker News post.  As a long time Linux distro developer myself, I have tons of respect for everyone involved in building a high quality Linux distribution.  In fact, Canonical employs over 700 people, in 44 countries, working around the clock, all calendar year, to make Ubuntu the world's most popular Linux OS.  Importantly, that includes a dedicated security team with an outstanding track record over the last 12 years, keeping Ubuntu servers, clouds, desktops, laptops, tablets, and phones up-to-date and protected against the latest security vulnerabilities.  I don't know Natanael personally, but I'm intimately aware of what a spectacular amount of work it is to maintain and secure an OS distribution as it makes its way into enterprise and production deployments.  Good luck!

(5) There are currently 5,854 packages available via apk in Alpine Linux (sudo docker run alpine apk search -v).  There are 8,862 packages in Ubuntu Main (officially supported by Canonical), and 53,150 binary packages across all of Ubuntu Main, Universe, Restricted, and Multiverse, supported by the greater Ubuntu community.  Nearly all 50,000+ packages are updated every 6 months, on time, every time, and we release an LTS version of Ubuntu and the best of open source software in the world every 2 years.  Like clockwork.  Choice.  Velocity.  Stability.  That's what Ubuntu brings.
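
(If you'd like to reproduce those counts yourself, something like the following works; the exact numbers will of course vary by release and date:)

sudo docker run alpine apk search -v | wc -l
sudo docker run ubuntu bash -c "apt-get update -qq >/dev/null; apt-cache pkgnames | wc -l"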

Docker holds a special place in the Ubuntu ecosystem, and Ubuntu has been instrumental in Docker's growth over the last 3 years.  Where we go from here, is largely up to the cross-section of our two vibrant communities.

And so I ask you honestly...what do you want to see?  How would you like to see Docker and Ubuntu operate together?

I'm Canonical's Product Manager for Ubuntu Server, I'm responsible for Canonical's relationship with Docker Inc, and I will read absolutely every comment posted below.

Cheers,
:-Dustin

p.s. I'm speaking at Container Summit in New York City today, and wrote this post from the top of the (inspiring!) One World Observatory at the World Trade Center this morning.  Please come up and talk to me, if you want to share your thoughts (at Container Summit, not the One World Observatory)!


Wednesday, January 27, 2016

adapt install [anything]


As always, I enjoyed speaking at the SCALE14x event, especially at the new location in Pasadena, California!

What if you could adapt a package from a newer version of Ubuntu onto your stable LTS desktop or server?

Or, as a developer, what if you could provide your latest releases to your users running an older LTS version of Ubuntu?

Introducing adapt!

adapt is a lot like apt...  It’s a simple command that installs packages.

But it “adapts” a requested version to run on your current system.

It's a simple command that installs any package from any release of Ubuntu into any version of Ubuntu.

How does adapt work?

Simple… Containers!

More specifically, LXD system containers.

Why containers?

Containers can run anywhere: physical or virtual, desktop or server, on any CPU architecture.

And containers are light and fast!  Zero latency and no virtualization overhead.

Most importantly, system containers are perfect copies of the released distribution, the operating system itself.

And all of that continuous integration testing we perform on every single Ubuntu release?

We leverage that!
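
In other words, conceptually, something like this (a rough sketch using today's LXD commands; the release and package names are just examples):

lxc launch ubuntu:xenial adapt-xenial              # a pristine container of a newer release
lxc exec adapt-xenial -- apt-get update
lxc exec adapt-xenial -- apt-get install -y htop   # the newer package, from the newer release
lxc exec adapt-xenial -- htop                      # run it, right from your LTS host

adapt wraps plumbing like that up into one friendly command.
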
You can download a PDF of the slides for my talk here, or flip through them here:



I hope you enjoy some of the magic that LXD is making possible ;-)

Cheers!
Dustin

Tuesday, January 19, 2016

Data Driven Analysis: /tmp on tmpfs

tl;dr

  • Put /tmp on tmpfs and you'll improve your Linux system's I/O, reduce your carbon footprint and electricity usage, stretch the battery life of your laptop, extend the longevity of your SSDs, and provide stronger security.
  • In fact, we should do that by default on Ubuntu servers and cloud images.
  • Having tested 502 physical and virtual servers in production at Canonical, I found that 96.6% of them could immediately fit all of /tmp in half of the free memory available, and 99.2% could fit all of /tmp in (free memory + free swap).

Try /tmp on tmpfs Yourself

$ echo "tmpfs /tmp tmpfs rw,nosuid,nodev" | sudo tee -a /etc/fstab
$ sudo reboot
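
After the reboot, verify the mount.  By default, a tmpfs is sized at half of your physical RAM; if you want a different ceiling, add a size= option (e.g., size=2G) to that fstab line.

$ df -h /tmp
$ mount | grep '/tmp '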

Background

In April 2009, I proposed putting /tmp on tmpfs (an in-memory filesystem) on Ubuntu servers by default -- under certain conditions, like, well, having enough memory.  The proposal was "approved", but got hung up for various reasons.  Now, again in 2016, I proposed the same improvement to Ubuntu here in a bug, and there's a lively discussion on the ubuntu-cloud and ubuntu-devel mailing lists.

The benefits of /tmp on tmpfs are:
  • Performance: reads, writes, and seeks are insanely fast in a tmpfs; as fast as accessing RAM
  • Security: data leaks to disk are prevented (especially when swap is disabled), and since /tmp is its own mount point, we should add the nosuid and nodev options (and motivated sysadmins could add noexec, if they desire).
  • Energy efficiency: disk wake-ups are avoided
  • Reliability: fewer NAND writes to SSD disks
In the interest of transparency, I'll summarize the downsides:
  • There's sometimes less space available in memory than in the root filesystem where /tmp traditionally resides
  • Writing to tmpfs could evict other information from memory to make space
You can learn more about Linux tmpfs here.

Not Exactly Uncharted Territory...

Fedora proposed and implemented this in Fedora 18 a few years ago, citing that Solaris has been doing this since 1994.  I just installed Fedora 23 into a VM and confirmed that /tmp is a tmpfs in the default installation, and ArchLinux does the same.  Debian debated doing so, in this thread, which starts with all the reasons not to put /tmp on a tmpfs; do make sure you read the whole thread, though, and digest both the pros and cons, as both are represented throughout.

Full Data Treatment

In the current thread on ubuntu-cloud and ubuntu-devel, I was asked for some "real data"...

In fact, across the many debates for and against this feature in Ubuntu, Debian, Fedora, ArchLinux, and others, there is plenty of supposition, conjecture, guesswork, and presumption.  But seeing as we're talking about data, let's look at some real data!

Here's an analysis of a (non-exhaustive) set of 502 of Canonical's production servers that run Ubuntu.com, Launchpad.net, and hundreds of related services, including OpenStack, dozens of websites, code hosting, databases, and more.  The servers sampled are slightly biased toward physical machines over virtual machines, but both are present in the survey, and a wide variety of uptime is represented, from less than a day, to 1306 days (with live-patched kernels, of course).

I humbly invite further study and analysis of the raw, tab-separated data, which you can find at:
The column headers are:
  • Column 1: The host names have been anonymized to sequential index numbers
  • Column 2: `du -s /tmp` disk usage of /tmp as of 2016-01-17 (ie, this is one snapshot in time)
  • Columns 3-8: The output of the `free` command, memory in KB for each server
  • Columns 9-11: The output of the `free` command, swap in KB for each server
  • Column 12: The number of inodes in /tmp
I have imported it into a Google Spreadsheet to do some data treatment. You're welcome to do the same, or use the spreadsheet of your choice.

For the numbers below, 1 MB = 1000 KB, and 1 GB = 1000 MB, per Wikipedia. (Let's argue MB and MiB elsewhere, shall we?)  The mean is the arithmetic average.  The median is the middle value in a sorted list of numbers.  The mode is the number that occurs most often.  If you're confused, this article might help.  All calculations are accurate to at least 2 significant digits.

Statistical summary of /tmp usage:

  • Max: 101 GB
  • Min: 4.0 KB
  • Mean: 453 MB
  • Median: 16 KB
  • Mode: 4.0 KB
Looking at all 502 servers, there are two extreme outliers in terms of /tmp usage.  One server has 101 GB of data in /tmp, and the other has 42 GB.  The latter is a very noisy django.log.  There are 4 more servers using between 10 GB and 12 GB of /tmp.  The remaining 496 servers surveyed (98.8%) are using less than 4.8 GB of /tmp.  In fact, 483 of the servers surveyed (96.2%) use less than 1 GB of /tmp.  454 of the servers surveyed (90.4%) use less than 100 MB of /tmp.  414 of the servers surveyed (82.5%) use less than 10 MB of /tmp.  And actually, 370 of the servers surveyed (73.7%) -- the overwhelming majority -- use less than 1 MB of /tmp.

Statistical summary of total memory available:

  • Max: 255 GB
  • Min: 1.0 GB
  • Mean: 24 GB
  • Median: 10.2 GB
  • Mode: 4.1 GB
All of the machines surveyed (100%) have at least 1 GB of RAM.  495 of the machines surveyed (98.6%) have at least 2 GB of RAM.  437 of the machines surveyed (87%) have at least 4 GB of RAM.  255 of the machines surveyed (50.8%) have at least 10 GB of RAM.  157 of the machines surveyed (31.3%) have more than 24 GB of RAM.  74 of the machines surveyed (14.7%) have at least 64 GB of RAM.

Statistical summary of total swap available:

  • Max: 201 GB
  • Min: 0.0 KB
  • Mean: 13 GB
  • Median: 6.3 GB
  • Mode: 2.96 GB
485 of the machines surveyed (96.6%) have at least some swap enabled, while 17 of the machines surveyed (3.4%) have zero swap configured.  One of these swap-less machines is using 415 MB of /tmp; that machine happens to have 32 GB of RAM.  All of the rest of the swap-less machines are using between 4 KB and 52 KB of /tmp (inconsequential), and have between 2 GB and 28 GB of RAM.  5 machines (1.0%) have over 100 GB of swap space.

Statistical summary of swap usage:

  • Max: 19 GB
  • Min: 0.0 KB
  • Mean: 657 MB
  • Median: 18 MB
  • Mode: 0.0 KB
476 of the machines surveyed (94.8%) are using less than 4 GB of swap. 463 of the machines surveyed (92.2%) are using less than 1 GB of swap. And 366 of the machines surveyed (72.9%) are using less than 100 MB of swap.  There are 18 "swappy" machines (3.6%), using 10 GB or more swap.

Modeling /tmp on tmpfs usage

Next, I took the total memory (RAM) in each machine, divided it in half (which is the default size of a tmpfs), and subtracted the total /tmp usage on each system, to determine whether all of that system's /tmp could actually fit into its tmpfs using free memory alone (i.e., without swapping, and without evicting anything from memory).
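
If you want to run the same check against the raw data yourself, a one-liner gets you most of the way there.  (A sketch: it assumes the tab-separated file described above, with /tmp usage in column 2 and total memory in column 3, both in KB; the file name is hypothetical.)

awk -F'\t' '$2 <= $3 / 2 { fit++ } END { printf "%d of %d machines (%.1f%%) fit\n", fit, NR, 100 * fit / NR }' tmp-survey.tsv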

485 of the machines surveyed (96.6%) could store all of their /tmp in a tmpfs, in free memory alone -- i.e. without evicting anything from cache.

Now, if we take each machine, and sum each system's "Free memory" and "Free swap", and check its /tmp usage, we'll see that 498 of the systems surveyed (99.2%) could store the entire contents of /tmp in tmpfs free memory + swap available. The remaining 4 are our extreme outliers identified earlier, with /tmp usages of [101 GB, 42 GB, 13 GB, 10 GB].

Performance of tmpfs versus ext4-on-SSD

Finally, let's look at some raw (albeit rough) read and write performance numbers, using a simple dd model.

My /tmp is on a tmpfs:
kirkland@x250:/tmp⟫ df -h .
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           7.7G  2.6M  7.7G   1% /tmp

Let's write 2 GB of data:
kirkland@x250:/tmp⟫ dd if=/dev/zero of=/tmp/zero bs=2G count=1
0+1 records in
0+1 records out
2147479552 bytes (2.1 GB) copied, 1.56469 s, 1.4 GB/s

And let's write it completely synchronously:
kirkland@x250:/tmp⟫ dd if=/dev/zero of=./zero bs=2G count=1 oflag=dsync
0+1 records in
0+1 records out
2147479552 bytes (2.1 GB) copied, 2.47235 s, 869 MB/s

Let's try the same thing to my Intel SSD:
kirkland@x250:/local⟫ df -h .
Filesystem      Size  Used Avail Use% Mounted on
/dev/dm-0       217G  106G  100G  52% /

And write 2 GB of data:
kirkland@x250:/local⟫ dd if=/dev/zero of=./zero bs=2G count=1
0+1 records in
0+1 records out
2147479552 bytes (2.1 GB) copied, 7.52918 s, 285 MB/s

And let's redo it completely synchronously:
kirkland@x250:/local⟫ dd if=/dev/zero of=./zero bs=2G count=1 oflag=dsync
0+1 records in
0+1 records out
2147479552 bytes (2.1 GB) copied, 11.9599 s, 180 MB/s

Let's go back and read the tmpfs data:
kirkland@x250:~⟫ dd if=/tmp/zero of=/dev/null bs=2G count=1
0+1 records in
0+1 records out
2147479552 bytes (2.1 GB) copied, 1.94799 s, 1.1 GB/s

And let's read the SSD data:
kirkland@x250:~⟫ dd if=/local/zero of=/dev/null bs=2G count=1
0+1 records in
0+1 records out
2147479552 bytes (2.1 GB) copied, 2.55302 s, 841 MB/s

Now, let's create 10,000 small files (1 KB) in tmpfs:
kirkland@x250:/tmp/foo⟫ time for i in $(seq 1 10000); do dd if=/dev/zero of=$i bs=1K count=1 oflag=dsync ; done
real    0m15.518s
user    0m1.592s
sys     0m7.596s

And let's do the same on the SSD:
kirkland@x250:/local/foo⟫ time for i in $(seq 1 10000); do dd if=/dev/zero of=$i bs=1K count=1 oflag=dsync ; done
real    0m26.713s
user    0m2.928s
sys     0m7.540s

For better or worse, I don't have any spinning disks, so I couldn't repeat the tests there.

So on these rudimentary read/write tests via dd, I got 869 MB/s - 1.4 GB/s write to tmpfs and 1.1 GB/s read from tmpfs, and 180 MB/s - 285 MB/s write to SSD and 841 MB/s read from SSD.

Surely there are more scientific ways of measuring I/O to tmpfs and physical storage, but I'm confident that, by any measure, you'll find tmpfs extremely fast when tested against even the fastest disks and filesystems.

Summary

  • /tmp usage
    • 98.8% of the servers surveyed use less than 4.8 GB of /tmp
    • 96.2% use less than 1.0 GB of /tmp
    • 73.7% use less than 1.0 MB of /tmp
    • The mean/median/mode are [453 MB / 16 KB / 4 KB]
  • Total memory available
    • 98.6% of the servers surveyed have at least 2.0 GB of RAM
    • 88.0% have at least 4.0 GB of RAM
    • 57.4% have at least 8.0 GB of RAM
    • The mean/median/mode are [24 GB / 10 GB / 4 GB]
  • Swap available
    • 96.6% of the servers surveyed have some swap space available
    • The mean/median/mode are [13 GB / 6.3 GB / 3 GB]
  • Swap used
    • 94.8% of the servers surveyed are using less than 4 GB of swap
    • 92.2% are using less than 1 GB of swap
    • 72.9% are using less than 100 MB of swap
    • The mean/median/mode are [657 MB / 18 MB / 0 KB]
  • Modeling /tmp on tmpfs
    • 96.6% of the machines surveyed could store all of the data they currently have stored in /tmp, in free memory alone, without evicting anything from cache
    • 99.2% of the machines surveyed could store all of the data they currently have stored in /tmp in free memory + free swap
    • 4 of the 502 machines surveyed (0.8%) would need special handling, reconfiguration, or more swap

Conclusion


  • Can /tmp be mounted as a tmpfs always, everywhere?
    • No, we did identify a few systems (4 out of 502 surveyed, 0.8% of total) consuming inordinately large amounts of data in /tmp (101 GB, 42 GB), and with insufficient available memory and/or swap.
    • But those were very much the exception, not the rule.  In fact, 96.6% of the systems surveyed could fit all of /tmp in half of the freely available memory in the system.
  • Is this the first time anyone has suggested or tried this as a Linux/UNIX system default?
    • Not even remotely.  Solaris has used tmpfs for /tmp for 22 years, and Fedora and ArchLinux for at least the last 4 years.
  • Is tmpfs really that much faster, more efficient, more secure?
    • Damn skippy.  Try it yourself!
:-Dustin

Sunday, January 17, 2016

Intercession -- Check out this book!

https://www.inkshares.com/projects/intercession
A couple of years ago, a good friend of mine (who now works for Canonical) bought a book recommended in a byline in Wired magazine.  The book was called Daemon, by Leinad Zeraus.  I devoured it in one sitting.  It was so...different, so...technically correct, so....exciting and forward looking.  Self-driving cars and autonomous drones, as weapons in the hands of an anonymous network of hackers.  Yikes.  A thriller, for sure, but a thinker as well.  I loved it!

I blogged about it here in September of 2008, and that blog was actually read by the author of Daemon, who reached out to thank me for the review.  He sent me a couple of copies of the book, which I gave away to readers of my blog, who solved a couple of crypto-riddles in a series of blog posts linking eCryptfs to some of the techniques and technology used in Daemon.

I can now count Daniel Suarez (the award winning author who originally published Daemon under a pseudonym) as one of my most famous and interesting friends, and I'm always excited to receive an early draft of each of his new books.  I've enjoyed each of Daemon, Freedom™, Kill Decision, and Influx, and gladly recommend them to anyone interested in cutting edge, thrilling fiction.

Knowing my interest in the genre, another friend of mine quietly shared that they were working on their very first novel.  They sent me an early draft, which I loaded on my Kindle and read in a couple of days while on a ski vacation in Utah in February.  While it took me a few chapters to stop thinking about it as-a-story-written-by-a-friend-of-mine, once I did, I was in for a real treat!  I ripped through it in 3 sittings over two snowy days, on a mountain ski resort in Park City, Utah.

The title is Intercession.  It's an adventure story -- a hero, a heroine, and a villain.  It's a story about time -- intriguingly non-linear and thoughtfully complex.  There's subtle, deliberate character development, and a couple of face-palming big reveals, constructed through flashbacks across time.

They have published it now, under a pseudonym, Nyneve Ransom, on InkShares.com -- a super cool self-publishing platform (I've spent hours browsing and reading stories there now!).  If you love sci-fi, adventure, time, heroes, and villains, I'm sure you'll enjoy Intercession!  You can read a couple of chapters for free right now ;-)

Happy reading!
:-Dustin

Tuesday, December 22, 2015

How many people in the world use Ubuntu? More than anyone actually knows!

People of earth, waving at Saturn, courtesy of NASA.
“It Doesn't Look Like Ubuntu Reached Its Goal Of 200 Million Users This Year”, says Michael Larabel of Phoronix, in a post he seems to have been itching to publish for months.

Why the negativity?!? Are you sure? Did you count all of them?

No one has.

How many people in the world use Ubuntu?

Actually, no one can count all of the Ubuntu users in the world!

Canonical, unlike Apple, Microsoft, Red Hat, or Google, does not require each user to register their installation of Ubuntu.

Of course, you can buy laptops preloaded with Ubuntu from Dell, HP, Lenovo, and Asus.  And there are millions of them out there.  And you can buy servers powered by Ubuntu from IBM, Dell, HP, Cisco, Lenovo, Quanta, and compatible with the OpenCompute Project.

In 2011, hardware sales might have been how Mark Shuttleworth hoped to reach 200M Ubuntu users by 2015.

But in reality, hundreds of millions of PCs, servers, devices, virtual machines, and containers have booted Ubuntu to date!

Let's look at some facts...
  • Docker users have launched Ubuntu images over 35.5 million times.
  • HashiCorp's Vagrant images of Ubuntu 14.04 LTS 64-bit have been downloaded 10 million times.
  • At least 20 million unique instances of Ubuntu have launched in public clouds, private clouds, and on bare metal in 2015 alone.
    • That's Ubuntu in clouds like AWS, Microsoft Azure, Google Compute Engine, Rackspace, Oracle Cloud, VMware, and others.
    • And that's Ubuntu in private clouds like OpenStack.
    • And Ubuntu at scale on bare metal with MAAS, often managed with Chef.
  • In fact, over 2 million new Ubuntu cloud instances launched in November 2015.
    • That's 67,000 new Ubuntu cloud instances launched per day.
    • That's 2,800 new Ubuntu cloud instances launched every hour.
    • That's 46 new Ubuntu cloud instances launched every minute.
    • That's nearly one new Ubuntu cloud instance launched every single second of every single day in November 2015.
  • And then there are Ubuntu phones from Meizu.
  • And more Ubuntu phones from BQ.
  • Of course, anyone can install Ubuntu on their Google Nexus tablet or phone.
  • Or buy a converged tablet/desktop preinstalled with Ubuntu from BQ.
  • Oh, and the Tesla entertainment system?  All electric Ubuntu.
  • Google's self-driving cars?  They're self-driven by Ubuntu.
  • George Hotz's home-made self-driving car?  It's a homebrewed Ubuntu autopilot.
  • Snappy Ubuntu downloads and updates for Raspberry Pis and BeagleBone Blacks -- the response has been tremendous.  Download numbers are astounding.
  • Drones, robots, network switches, smart devices, the Internet of Things.  More Snappy Ubuntu.
  • How about Walmart?  Everyday low prices.  Everyday Ubuntu.  Lots and lots of Ubuntu.
  • Are you orchestrating containers with Kubernetes or Apache Mesos?  There's plenty of Ubuntu in there.
  • Kicking PaaS with Cloud Foundry?  App instances are Ubuntu LXC containers.  Pivotal has lots of serious users.
  • And Heroku?  You bet your PaaS those hosted application containers are Ubuntu.  Plenty of serious users here too.
  • Tianhe-2, the world's largest supercomputer.  Merely 80,000 Xeons, 1.4 PB of memory, 12.4 PB of disk, all number crunching on Ubuntu.
  • Ever watch a movie on Netflix?  You were served by Ubuntu.
  • Ever hitch a ride with Uber or Lyft?  Your mobile app is talking to Ubuntu servers on the backend.
  • Did you enjoy watching The Hobbit?  Hunger Games?  Avengers?  Avatar?  All rendered on Ubuntu at WETA Digital.  Among many others.
  • Do you use Instagram?  Say cheese!
  • Listen to Spotify?  Music to my ears...
  • Doing a deal on Wall Street?  Ubuntu is serious business for Bloomberg.
  • Paypal, Dropbox, Snapchat, Pinterest, Reddit. Airbnb.  Yep.  More Ubuntu.
  • Wikipedia and Wikimedia, among the busiest sites on the Internet with 8 - 18 billion page views per month, are hosted on Ubuntu.
How many "users" of Ubuntu are there ultimately?  I bet there are over a billion people today, using Ubuntu -- both directly and indirectly.  Without a doubt, there are over a billion people on the planet benefiting from the services, security, and availability of Ubuntu today.
  • More people use Ubuntu than we know.
  • More people use Ubuntu than you know.
  • More people use Ubuntu than they know.
More people use Ubuntu than anyone actually knows.

Because of who we all are.

:-Dustin

Thursday, November 5, 2015

LXD in the Sky with Diamonds


Picture yourself containers on a server
With systemd trees and spawned tty's
Somebody calls you, you answer quite quickly
A world with the density so high

    - Sgt. Graber's LXD Smarts Club Band

Last week, we proudly released Ubuntu 15.10 (Wily) -- the final developer snapshot of the Ubuntu Server before we focus the majority of our attention on quality, testing, performance, documentation, and stability for the Ubuntu 16.04 LTS cycle in the next 6 months.

Notably, LXD has been promoted to the Ubuntu Main archive, now commercially supported by Canonical.  That has enabled us to install LXD by default on all Ubuntu Servers, from 15.10 forward.
Join us for an interactive, live webinar on November 12th at 5pm BST/12pm EST led by James Page, where he will demonstrate LXD as the fastest hypervisor in OpenStack!
That means that every Ubuntu server -- Intel, AMD, ARM, POWER, and even Virtual Machines in the cloud -- is now a full machine container hypervisor, capable of hosting hundreds of machine containers, right out of the box!

LXD in the Sky with Diamonds!  Well, LXD is in the Cloud with Diamond level support from Canonical, anyway.  You can even test it in your web browser here.

The development tree of Xenial (Ubuntu 16.04 LTS) has already inherited this behavior, and we will celebrate this feature broadly through our use of LXD containers in Juju, MAAS, and the reference platform of Ubuntu OpenStack, as well as the new nova-lxd hypervisor in the OpenStack Autopilot within Landscape.

While the young and the restless are already running Wily Ubuntu 15.10, the bold and the beautiful are still bound to their Trusty Ubuntu 14.04 LTS servers.

At Canonical, we understand both motivations, and this is why we have backported LXD to the Trusty archives, for safe, simple consumption and testing of this new generation of machine containers there, on your stable LTS.

Installing LXD on Trusty simply requires enabling the trusty-backports pocket, and installing the lxd package from there, with these 3 little commands:

sudo sed -i -e "/trusty-backports/ s/^# //" /etc/apt/sources.list
sudo apt-get update; sudo apt-get dist-upgrade -y
sudo apt-get -t trusty-backports install lxd

In minutes, you can launch your first LXD containers.  First, inherit your new group permissions, so you can execute the lxc command as your non-root user.  Then, import some images, and launch a new container named lovely-rita.  Shell into that container, and examine the process tree, install some packages, check the disk and memory and cpu available.  Finally, exit when you're done, and optionally delete the container.

newgrp lxd                                 # inherit your new lxd group permissions
lxd-images import ubuntu --alias ubuntu    # import an Ubuntu image
lxc launch ubuntu lovely-rita              # launch a container named lovely-rita
lxc list                                   # list your containers
lxc exec lovely-rita bash                  # shell into the container
  ps -ef                                   # examine the process tree
  apt-get update                           # install some packages
  df -h                                    # check the disk...
  free                                     # ...memory...
  cat /proc/cpuinfo                        # ...and cpu available
  exit                                     # exit when you're done
lxc delete lovely-rita                     # optionally, delete the container

I was able to run over 600 containers simultaneously on my Thinkpad (x250, 16 GB of RAM), and over 60 containers on an m1.small in Amazon (1.6 GB of RAM).
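
If you'd like to try a density test of your own, a naive loop is enough to get started (the container names are arbitrary; keep an eye on free memory as you go):

for i in $(seq 1 600); do lxc launch ubuntu c$i; done
lxc list | wc -l
free -m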

We're very interested in your feedback, as LXD is one of the most important features of the Ubuntu 16.04 LTS.  You can learn more about LXD, view the source code, file bugs, discuss on the mailing list, and peruse the Linux Containers upstream projects.

With a little help from my friends!
:-Dustin

Wednesday, September 30, 2015

Ubuntu and XOR.DDoS -- Nothing to see here


I woke this morning to a series of questions about a somewhat sensationalist article published today by ZDNet: Linux-powered botnet generates giant denial-of-service attacks

All Linux distributions -- Ubuntu, Red Hat, and others -- enable SSH for remote server login.  That’s just a fact of life in a Linux-powered, cloud and server world.  SSH is by far the most secure way to administer a Linux machine remotely, as it leverages both strong authentication and encryption technology, and is actively reviewed and maintained for security vulnerabilities.

However, in Ubuntu, we have never in 11 years asked a user to set a root password by default, and as of Ubuntu 14.04 LTS, we now explicitly disable root password logins over SSH.

Any Ubuntu machine that might be susceptible to this XOR.DDoS attack is in a very small minority of the millions of Ubuntu systems in the world.  Specifically, a vulnerable Ubuntu machine has been individually and manually configured by its administrator to:

  1. permit SSH root password authentication, AND
  2. have set a root password, AND
  3. have chosen a poor quality root password that is subject to a brute force attack 

A poor password generally uses a simple dictionary word, or is short and lacks numbers, mixed case, or symbols.
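
If you'd like to audit your own machines, the effective sshd setting is easy to check; a quick sketch:

sudo sshd -T | grep -i permitrootlogin     # "no" or "without-password" is safe
# To enforce it explicitly, set "PermitRootLogin without-password" in
# /etc/ssh/sshd_config, and then restart with: sudo service ssh restart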

Moreover, the antivirus software ClamAV is freely available in Ubuntu (sudo apt-get install clamav), and is able to detect and purge XOR.DDoS from any affected system.
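
For example, a basic scan might look like this (the scan target and options are illustrative):

sudo apt-get install clamav
sudo freshclam                    # refresh the virus signature database
sudo clamscan -r --infected /     # recursively scan, reporting only infected files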

As a reminder, it’s important to:


For an exhaustive review of all Ubuntu security features, please refer to:



Cheers,
Dustin
