Sunday, February 27, 2011
So long, Netflix!
Dear Netflix,
Frankly, I am tired of being treated as a second class citizen with respect to your watch-instantly technology.
Your inability to deliver a streaming client that is compatible with Ubuntu and Linux web browsers (such as Firefox or Chromium) is costing you this long-time customer.
Your choice of Microsoft Silverlight as the delivery platform for video streaming was ill-advised and very unfortunate. In today's world of HTML5 and Flash, it's just downright sad to use such an incompatible, exclusionary format (hint: the root of the problem is Microsoft's PlayReady DRM, which is explicitly not Linux-compatible). I have patiently waited, hoped, and communicated my frustrations in comments on your blog.
I was relieved earlier this week to see that the Amazon Prime video streaming service supports Ubuntu's web browsers quite well, in fact! I am inconsolably frustrated that you have been unable to do the same over the last 3 years, and I'm afraid that you've just been lapped by the competition.
So as of today, my household no longer has an active Netflix account. I suggest that you make an effort to be a little less discriminatory toward your customers' choices of operating systems and web browsers in the future.
So long,
Dustin Kirkland
Wednesday, February 23, 2011
ChromiumOS uses eCryptfs for Home Directories
While looking for something else today, I came across this ChromiumOS design document:
http://www.chromium.org/chromium-os/chromiumos-design-docs/protecting-cached-user-data
This is a very interesting read, about how the good folks at Google are using eCryptfs to secure user data on ChromiumOS devices. I found a few of the design points particularly interesting, such as the hashing of user names and integration with the TPM. I was also pleased to see that eCryptfs was chosen, in part, in accordance with their design needs for both performance and power consumption.
There are detractors out there who regularly snipe at Canonical and Ubuntu for a perceived lack of contribution to the core engineering of Linux and free software. Such attacks continue to sadden and frustrate me.
I'm really quite proud of the early work we did on eCryptfs in Ubuntu in 2008-2010, with our Encrypted Private Directory and eventually our Encrypted Home Directory features. It's quite clear to me that Google's use of eCryptfs for per-user home directory encryption in ChromiumOS is an extension of one of Ubuntu's pioneering technical advancements for desktop Linux.
Cheers,
:-Dustin
Thursday, February 17, 2011
mcollective now in Ubuntu
The Marionette Collective
I thought some Ubuntu Server users and Linux system administrators out there might be interested in the fact that mcollective (from Puppet Labs) is now packaged and in the Ubuntu Natty (11.04) archive.
The Marionette Collective (mcollective) is a framework for server orchestration and parallel job execution. Have you ever needed to run the same command on thousands of systems? mcollective is a very flexible, powerful way of doing just that (and much more).
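To give a rough idea of what that looks like in practice, here is a minimal sketch, assuming the mcollectived daemon and its messaging middleware are already configured on your nodes (the agent and action names below are the stock ones that ship with mcollective):
$ mc-ping                                # discover which nodes respond, with round-trip times
$ mc-find-hosts                          # list every host known to the collective
$ mc-rpc --agent rpcutil --action ping   # call the built-in rpcutil agent on every node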
If you run into issues, you can file bugs here. Otherwise, the good people from Puppet Labs have some excellent documentation and demos.
Finally, if perhaps you'd like to get involved in Ubuntu or upstream development, mcollective is in need of a few basic manpages: mcollectived, mc-call-agent, mc-controller, mc-facts, mc-find-hosts, mc-inventory, mc-ping, mc-rpc.
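If you've never written a manpage before, a minimal groff skeleton is most of the battle; here's a hypothetical starting point for mc-ping (the description text is my own rough summary, not official Puppet Labs wording):
.TH MC\-PING 1 "February 2011" mcollective "User Commands"
.SH NAME
mc\-ping \- ping all hosts in the collective and report response times
.SH SYNOPSIS
.B mc\-ping
.SH DESCRIPTION
.B mc\-ping
broadcasts a ping to every mcollective node and prints each responding
host along with its response time.
.SH SEE ALSO
.BR mcollectived (8)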
If you're interested, I can help get you started! Please ping kirkland on irc.freenode.net.
Cheers,
:-Dustin
Labels:
Canonical,
mcollective,
puppet,
Ubuntu,
Ubuntu-Server
The Computer History Museum in Mountain View, California
I think some of Planet Ubuntu might enjoy this post from my family/travel blog, about a recent visit to the Computer History Museum in Mountain View, California.
http://www.thekirklands.net/2011/02/computer-history-museum-in-mountain.html
Enjoy!
:-Dustin
Trip Report: O'Reilly Strata 2011
I attended the O'Reilly Strata Conference in Santa Clara, CA. It was quite a different conference than the sort that I have frequented over the last ~10 years (which have overwhelmingly been Linux and open source focused). This conference focused on a new industry buzz term -- BIG DATA. Many of the speakers bragged about how large their databases were (100M rows, 1B rows, 10M columns (WTF?), etc.). If Ubuntu is a darling of Linux conferences, Hadoop was undoubtedly the head honcho in this crowd. Speakers were employed by the likes of Amazon, Microsoft, IBM, LinkedIn, and others in that vein.
A healthy subset of the companies here were selling open source or at least open core solutions, most of which ran in Amazon's cloud, and most of those that I could gather were Ubuntu-based. The attendees, though, were hardly open source or free software zealots. I'd estimate that fewer than 2% of the 1,700 attendees' laptops were running Linux.
This was very much a Mac crowd (and Windows to perhaps a lesser extent).
There was one notable exception, though, with regard to Ubuntu... I attended a half-day tutorial on Karmasphere -- an Eclipse-based UI that configures and manages Hadoop. The presenters passed out 200+ 4GB USB keys, each of which had a VMware image of a stripped-down Ubuntu 10.10 Desktop (i386), plus the Eclipse SDK and their software. The USB key also included the free (as in beer) VMware Player for Windows and 32-bit Ubuntu. Of course, I'm running 64-bit Ubuntu, so I spent the next half hour downloading the tools I needed to concatenate the 2 VMware disks into 1 and boot their VM in KVM instead ;-)
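(For the curious, one way to do that kind of conversion -- a rough sketch with a made-up image filename, assuming qemu-utils and kvm are installed -- is to point qemu-img at the VMDK descriptor file, which flattens the split extents into a single image, and then boot the result with KVM:)
$ qemu-img convert -O qcow2 karmasphere.vmdk karmasphere.qcow2
$ kvm -m 2048 -hda karmasphere.qcow2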
So this was pretty cool -- at least 100+ Ubuntu VMs booting to the drumbeat noise throughout the room (at least for the people whose sound was not muted). The focus was not on Ubuntu, but I'm sure a few people saw it for the first time and perhaps realized its potential as a development platform for the cloud through the Eclipse SDK.
I chatted with the Rackspace folks at their booth briefly, and attended Eric Day's talk on OpenStack. Remarkably, he spent at least half of his OpenStack talk describing the Ubuntu development process that they have adopted (6-month cycles, design summits, Launchpad, bzr, IRC, wiki, mailing lists). I took it that this was a new set of concepts for some contingent of the crowd.
I didn't take any real "action items" away from this conference for the Ubuntu Server or Platform teams. Chief technical guys and gals from LinkedIn, Google, and Bit.ly talked about their methods of using MapReduce and various other "big data" techniques to tear through hundreds of millions of records and still deliver a real-time user experience. So as a server geek, I certainly enjoyed the conference and learned a lot, but I doubt I'll need to attend again in the future.
Cheers
:-Dustin
Labels:
Canonical,
Conference,
Ubuntu,
Ubuntu-Server
Working on Free Software while Traveling the World
Earlier this week, I passed my 3rd anniversary working for Canonical full time on Ubuntu!
I've tried to track the fun and interesting places to which I've traveled on behalf of Canonical or Ubuntu in a (nearly) yearly blog post here and here. Actually, my sister asked me to Skype with her junior high classes this week to talk a little bit about the places I've traveled for work.
I spent a few minutes looking for those two posts and decided to organize the lists a little better for my own future reference...
2008:
- Boston, Massachusetts in February for my first Canonical sprint
- Austin, Texas in April for the Linux Collaboration Summit
- Prague, Czech Republic in May for UDS Intrepid
- Boston, Massachusetts (again) in July for another sprint
- London, United Kingdom in August for a sprint
- Montreal, Canada in September to sprint on Landscape
- Paris, France in November for a little pre-UDS planning
- Mountain View, California in December for UDS Jaunty
2009:
- Berlin, Germany in February for the Jaunty Distro Sprint
- San Francisco, California in April for Linux Filesystems and Collaboration Summit
- Barcelona, Spain in May for UDS Karmic
- Santa Barbara, California in July for a Eucalyptus Sprint
- Dublin, Ireland in August for the Karmic Distro Sprint
- Portland, Oregon in September for LinuxCon
- Dallas, Texas in November for UDS Lucid
- Portland, Oregon in December (again) to meet with a customer
2010:
- Wellington, New Zealand for LCA 2010
- Portland, Oregon in February for the Lucid Distro Sprint
- Santa Barbara, California in March for a Eucalyptus Sprint
- Austin, Texas in April for the Texas Linux Fest
- Brussels, Belgium in May for UDS Maverick
- Austin, Texas in June for the OpenStack Design Summit (Austin)
- Prague, Czech Republic in July for the Maverick Distro Rally
- Boston, Massachusetts in August for LinuxCon
- Taipei, Taiwan in September for Canonical OEM Summit
- Portland, Oregon in September for a customer meeting
- Orlando, Florida in October for UDS Natty
- San Antonio, Texas in November for the OpenStack Design Summit (Bexar)
2011:
- Santa Clara, California in January for O'Reilly Strata
- Los Angeles, California in February for SCALE9x
- Cape Town, South Africa for a Canonical Sprint
- Montreal, Quebec, Canada for a Canonical Sprint
- Budapest, Hungary for UDS-Oneiric (then Croatia for vacation)
- Montreal, Quebec, Canada
- ...
:-Dustin
Tuesday, February 15, 2011
A Long Overdue Introduction: ecryptfs-migrate-home
One of my most popular (by number of hits) posts on eCryptfs is the one on Migrating to an Encrypted Home Directory. That post contains a lengthy set of instructions that, if followed correctly, allow you to migrate to an encrypted home directory.
About a year ago, Yan Li, an engineer from Intel and the Gnome project, contributed an outstanding script to the eCryptfs project that simplifies this process considerably: ecryptfs-migrate-home.
At this point, I have tested this script thoroughly, and have used it to migrate several friends and family (as well as the rest of my own systems) to encrypted home directories.
The invocation is simple, though it does require root privileges:
# ecryptfs-migrate-home -u USER
This will set up the encrypted home directory for USER and use rsync to do the migration. Critically important: USER must log in before the next reboot to complete the migration. USER's randomly generated mount passphrase is temporarily stored until they log in, at which point eCryptfs picks it up and wraps (encrypts) it with their login passphrase.
The usual warnings apply... Make a complete backup copy of the non-encrypted data to another system or external media, just in case. Though unlikely, an unforeseen error could somehow result in data loss, or lock you out of your system. (I haven't seen that happen yet, but beware.)
Here's an example dialog with the utility:
$ sudo ecryptfs-migrate-home -u testuser
INFO: Checking disk space, this may take a few moments. Please be patient.
INFO: Checking for open files in /home/testuser
************************************************************************
YOU SHOULD RECORD YOUR MOUNT PASSPHRASE AND STORE IT IN A SAFE LOCATION.
ecryptfs-unwrap-passphrase ~/.ecryptfs/wrapped-passphrase
THIS WILL BE REQUIRED IF YOU NEED TO RECOVER YOUR DATA AT A LATER TIME.
************************************************************************
Done configuring.
INFO: Encrypted home has been set up, encrypting files now...this may take a while.
========================================================================
Some Important Notes!
 1. The file encryption appears to have completed successfully, however,
    testuser MUST LOGIN IMMEDIATELY, _BEFORE_THE_NEXT_REBOOT_,
    TO COMPLETE THE MIGRATION!!!
 2. If testuser can log in and read and write their files, then the
    migration is complete, and you should remove /home/testuser.W5LaceTJ.
    Otherwise, restore /home/testuser.W5LaceTJ back to /home/testuser.
 3. testuser should also run 'ecryptfs-unwrap-passphrase' and record
    their randomly generated mount passphrase as soon as possible.
 4. To ensure the integrity of all encrypted data on this system, you
    should also encrypt swap space with 'ecryptfs-setup-swap'.
========================================================================
$ sudo login testuser
Password:
$ mount | grep ecryptfs
/home/testuser/.Private on /home/testuser type ecryptfs (ecryptfs_sig=d9256e30b9034083,ecryptfs_fnek_sig=3a2c12c00d60accf,ecryptfs_cipher=aes,ecryptfs_key_bytes=16,ecryptfs_unlink_sigs)
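Two follow-up steps are worth doing right away; both commands are named in the notes above and ship with ecryptfs-utils. First, record the randomly generated mount passphrase somewhere safe, and second, encrypt your swap space so cleartext data cannot leak there:
$ ecryptfs-unwrap-passphrase ~/.ecryptfs/wrapped-passphrase
$ sudo ecryptfs-setup-swap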
Thanks again, Yan Li. Enjoy!
:-Dustin
Wednesday, February 9, 2011
PC Pro Magazine runs an Ubuntu cover story, and goes Ubuntu for the day
For the first time in 17 years, PC Pro magazine will be running a Linux cover story.
Even more impressive than that, the company itself will migrate all of its web servers and employees over to Ubuntu Servers and Desktops for the day!
More info here:
http://www.pcpro.co.uk/blogs/2011/02/09/can-we-run-pc-pro-on-ubuntu/
And you can follow the #ubuntupro Twitter feed. I'm quite intrigued!
:-Dustin
Monday, February 7, 2011
Update on errno, ssh-import-id, and bikeshed
If you read my post from earlier today about run-one, you might notice that I used a new source and binary package to publish the run-one utility. This is a new practice that I'm going to use for stand-alone tools like this.
errno
It's worth mentioning that the errno utility has also moved out of ubuntu-dev-tools, at the strong request of the maintainer of ubuntu-dev-tools. I tried to get errno into various other packages and upstream projects, but failed in all cases. As of Natty, you can:
sudo apt-get install errno
For older releases:
sudo apt-add-repository ppa:errno/ppa
sudo apt-get update
sudo apt-get install errno
As a reminder, you can use errno in these ways:
$ errno font
EBFONT 59 /* Bad font file format */
$ errno 36
ENAMETOOLONG 36 /* File name too long */
$ errno EPERM
EPERM 1 /* Operation not permitted */
You can find the sources with:
bzr branch lp:errno
And the launchpad project is at http://launchpad.net/errno.
ssh-import-id
Similarly, the maintainer of the openssh package in Ubuntu urged the removal of the ssh-import-id utility. Once again, I offered the tool to the upstream openssh project, to no avail. So ssh-import-id now lives in its own source and binary packages. As of Natty, you can:
sudo apt-get install ssh-import-id
For older releases:
sudo apt-add-repository ppa:ssh-import-id/ppa
sudo apt-get update
sudo apt-get install ssh-import-id
As a reminder, you can use ssh-import-id in this way:
$ ssh-import-id kirkland smoser
INFO: Successfully authorized [kirkland]
INFO: Successfully authorized [smoser]
You can find sources with:
bzr branch lp:ssh-import-id
And the launchpad project is at http://launchpad.net/ssh-import-id.
bikeshed
"So why didn't you just use bikeshed?" Great question! When I showed run-one to one of my colleagues, he said, "Neat, I'd use that, where can I get it?" And I pointed him to install bikeshed, to which he responded, "Oh, well, I just want run-one, but not all the other cruft you put into bikeshed." :-)
I tried not to be offended, but in the end, he was right. I thought about splitting bikeshed into a series of bikeshed-$FOO binary packages. This wasn't ideal, though, in my opinion, from the perspective of developing code or handling bugs/questions.
Thus, I've decided to create a new Launchpad project, team, and Ubuntu package for each of these stand-alone utilities.
I'll continue to use bikeshed to incubate new tools, and as soon as they're ready to stand alone, then I'll split them out to their own branch/project/team/package.
Cheers,
:-Dustin
Labels:
Bikeshed,
Canonical,
run-one,
ssh-import-id,
Ubuntu,
Ubuntu-Server
Sunday, February 6, 2011
Introducing: run-one and run-this-one
I love cronjobs! They wake me up in the morning, fetch my mail, backup my data, sync my mirrors, update my systems, check the health of my hardware and RAIDs, transcode my MythTV recordings, and so many other things...
The robotic precision of cron ensures that each subsequent job runs, on time, every time.
But cron doesn't check that the previous execution of that same job completed first -- and that can cause big trouble.
This often happens to me when I'm traveling and my backup cronjob fires while I'm on a slow uplink. It's bad news when an hourly rsync takes longer than an hour to run, and my system heads down a nasty spiral, soon seeing 2 or 3 or 10 rsyncs all running simultaneously. Dang.
For this reason, I found myself putting almost all of my cronjobs in a wrapper script, managing and respecting a pid file lock according to the typical UNIX sysvinit daemon method. Unfortunately, this led to extensively duplicated lock handling code spread across my multiple workstations and servers.
I'm proud to say, however, that I have now solved this problem on all of my servers, at least for myself, and perhaps for you too!
In Ubuntu 11.04 (Natty), you can now find a pair of utilities in the run-one package: run-one and run-this-one.
run-one
You can simply prepend the run-one utility to the beginning of any command (just like time or sudo). The tool calculates the md5sum $HASH of $0 and $@ (the command and its arguments), and then tries to obtain a lock on a file in $HOME/.cache/$HASH using flock. If it can obtain the lock, your command is simply executed, and the lock is released when it's done. If not, then another copy of your command is already running, and this one quietly exits non-zero.
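For the curious, here is a rough, simplified sketch of that idea using flock(1) directly -- not the actual run-one source, just an illustration of the hash-then-lock approach:
#!/bin/sh
# Hash the full command line, then take a non-blocking lock on a per-command
# file; run the command only if we won the lock, otherwise exit non-zero.
HASH=$(printf "%s" "$*" | md5sum | cut -d' ' -f1)
mkdir -p "$HOME/.cache"
exec flock -n "$HOME/.cache/$HASH" "$@"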
I can now be safely assured that there will only ever be one copy of this cronjob running on my local system as $USER at a time:
*/60 * * * * run-one rsync -azP $HOME example.com:/srv/backup
If a copy is already running, subsequent calls of the same invocation will quietly exit non-zero.
run-this-one
run-this-one is a slightly more forceful take on the same idea. Using pgrep, it finds any matching invocations owned by the user in the process table and kills those first, then continues, behaving just as run-one (establishing the lock and executing your command).
I rely on a handful of ssh tunnels and proxies, but I often suspend and resume my laptop many times a day, which can cause those ssh connections to go stale and hang around for a while before the connection times out. For these, I want to kill any old instances of the invocation, and then start a fresh one.
I now use this code snippet in a wrapper script to establish my ssh SOCKS proxy and a pair of local port-forwarding tunnels (for my squid and bip proxies):
run-this-one ssh -N -C -D 1080 -L 3128:localhost:3128 \
-L 7778:localhost:7778 example.com
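Under the hood, that behaves roughly like the following sketch (again, just an illustration, not the actual implementation): kill any stale matching instances first, then start a locked, fresh one.
$ pkill -u "$USER" -f "ssh -N -C -D 1080"
$ run-one ssh -N -C -D 1080 -L 3128:localhost:3128 -L 7778:localhost:7778 example.com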
Have you struggled with this before? Do you have a more elegant solution? Would you use run-one and/or run-this-one to solve a similar problem?
You can find the code in Launchpad/bzr here, and packages for Lucid, Maverick, and Natty in a PPA here.
bzr branch lp:run-one
sudo apt-add-repository ppa:run-one/ppa
sudo apt-get update
sudo apt-get install run-one
Cheers,
:-Dustin
Labels:
Canonical,
run-one,
Ubuntu,
Ubuntu-Server