Friday, April 29, 2011

dotdee: A modern proposal for improving Debian Conffile Management

SOME BACKGROUND

Debian's management of conffiles is a truly inspired approach to a very difficult topic in any Linux/UNIX distribution.

Conffiles are files in /etc that the Debian package management system (dpkg) won't automatically overwrite when you upgrade a package.  Local modifications made by the system administrator are preserved.  This is a critical policy expectation of Debian and Ubuntu system administrators, and it ensures the proper in-place upgrade of packages on a running system.  Moreover, handling conffiles correctly as a package maintainer, in accordance with Debian policy, requires considerable expertise.  Debian Developer Raphaël Hertzog has one of the best, most concise explanations of how dpkg handles conffiles in this blog post.

THE ISSUE

Having now read the Debian policies on conffile management in detail several times, I see some room for improvement with respect to modern Debian and Ubuntu deployments.  I believe there are two key points that the current policy does not sufficiently handle...
  1. Particularly in modern, massive Debian/Ubuntu deployments (think Cloud, Grid, and Cluster computing), it's no longer humans that are managing individual systems, but rather systems (puppet, chef, cfengine, et al.) managing systems.  (Insert your Skynet jokes here...)  As such, it's difficult (if not impossible) for a Debian package or a distribution to make configuration changes to another package's conffiles without violating Debian policy -- even when the change might objectively be the "right" or the "best" thing to do, in terms of end user experience and package operation (especially when the given system is only ever managed by a configuration management system).
  2. In other cases, one local package has a run-time dependency on a second package on the local system, but requires the package it depends on to be configured in a particular way.  Again, if that configuration lives in a conffile owned by the second package, the first package cannot automatically make that configuration change without violating said policy.
As the exception rather than the rule, there are a couple of very mature packages that provide smart frameworks enabling system administrators, packagers, and distributions to configure them entirely within policy.

A good example is apache2's /etc/apache2 directory, which allows those admins, packagers, and distributions to drop their own configuration modifications as files (or symbolic links) into sourced directories such as /etc/apache2/conf.d, /etc/apache2/mods-available, and /etc/apache2/sites-available.
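
For instance, an administrator (or another package) can change Apache's behavior just by dropping a small file into one of those directories; a minimal illustration, where the directive and file name are only examples of mine:

    # drop in a configuration snippet (the directive here is only an example)
    echo "ServerTokens Prod" | sudo tee /etc/apache2/conf.d/local-tweaks
    # then have apache2 re-read its configuration
    sudo service apache2 reload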

A PROPOSAL

The concept of a ".d" directory in /etc is very well understood in most Linux/UNIX circles.  Check your own /etc directory with the command find /etc -type d -name "*.d" and you should see quite a number of ".d" style configuration directories.

Here, I'm proposing a tool that I think would greatly benefit Debian and Debian-derived distributions such as Ubuntu.  For the purposes of this discussion, let's call the tool "dotdee".  Its stated goal is to turn any given flat file on your system into a dynamically concatenated flat file, generated from a unique ".d" directory dedicated to that file.  With such a dedicated, well-formed directory, system administrators, Debian packagers, and distributions could conveniently place additional configuration snippets in a particular conffile's dedicated ".d" directory.

I have written a first prototype of the tool dotdee, which you can examine here.  It's a very simple, straightforward shell script, inspired a bit by Debian's incredibly useful update-alternatives tool.

The script runs in 3 different modes:
  1. sudo dotdee --setup /etc/path/to/some.conf
  2. sudo dotdee --update /etc/path/to/some.conf
  3. sudo dotdee --undo /etc/path/to/some.conf

SETUP MODE

First, the setup mode takes a flat file as its target.  Assuming the file is not already managed by dotdee, a new directory structure is created under /etc/dotdee.  In the example above, that would be /etc/dotdee/etc/path/to/some.conf.d.  So "/etc/dotdee" is prepended, and ".d" is appended, to the path to be managed.  It's trivial to get back to the name of the managed file by stripping /etc/dotdee from the head, and .d from the tail, of the string.
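
In shell terms, that mapping is mechanical in both directions; a quick sketch (the variable names here are mine, not necessarily the prototype's):

    CONFFILE="/etc/path/to/some.conf"
    DOTDEE_DIR="/etc/dotdee${CONFFILE}.d"   # /etc/dotdee/etc/path/to/some.conf.d
    # ...and back again, stripping the head and the tail:
    ORIGINAL="${DOTDEE_DIR#/etc/dotdee}"    # /etc/path/to/some.conf.d
    ORIGINAL="${ORIGINAL%.d}"               # /etc/path/to/some.conf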

Next, the actual managed flat file is moved from /etc/path/to/some.conf to /etc/dotdee/etc/path/to/some.conf.d/50-dpkg-original.  Again, this is a well-formed path, with "/etc/dotdee" prepended, ".d" appended, and the file itself renamed to "50-dpkg-original".  This is intended to clearly denote that this is the original, base file, as installed by dpkg itself.  The number "50" sits halfway between "00" and "99", leaving plenty of room for other file snippets to be placed in ordered positions before and/or after the original file.

After this, we run the update function, which will concatenate in alphanumeric order all of the files in /etc/dotdee/etc/path/to/some.conf.d/* and write the output into /etc/dotdee/etc/path/to/some.conf.
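
Because the shell expands globs in sorted order, the entire update step can be as simple as this one-liner (a sketch of the idea, not necessarily the prototype's exact code):

    # glob expansion is already in alphanumeric (collating) order
    cat /etc/dotdee/etc/path/to/some.conf.d/* > /etc/dotdee/etc/path/to/some.conf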

Finally, the update-alternatives system is used to place a symlink at the original location, /etc/path/to/some.conf, pointing to /etc/dotdee/etc/path/to/some.conf.  Additionally, a second, lower-priority alternative is also set, pointing to dpkg's original at /etc/dotdee/etc/path/to/some.conf.d/50-dpkg-original.
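
Using update-alternatives' actual syntax (--install link name path priority), that might look roughly like the following; the alternative name and the priorities are my assumption, not settled details:

    # higher priority: the dynamically concatenated, dotdee-managed version
    update-alternatives --install /etc/path/to/some.conf some.conf \
        /etc/dotdee/etc/path/to/some.conf 80
    # lower priority: dpkg's untouched original
    update-alternatives --install /etc/path/to/some.conf some.conf \
        /etc/dotdee/etc/path/to/some.conf.d/50-dpkg-original 50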

UPDATE MODE

As mentioned above, the update function performs the concatenation immediately, rebuilding the targeted path from its dotdee-managed collection of snippets.  This should be run any time a file is added, removed, or modified in the dotdee directory for a given managed file.  As a convenience, this could easily and automatically be performed by an inotify watch of the /etc/dotdee directory.  That, itself, would be a dotdee configuration option.
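
With the inotify-tools package, such a watch could be a small loop along these lines (my sketch of a possible implementation, not an existing dotdee feature):

    # print the watched directory of every event under /etc/dotdee
    inotifywait -m -r -e create,delete,modify,move --format '%w' /etc/dotdee |
    while read dir; do
        case "$dir" in
            *.d/)                              # only act on the *.d directories
                managed="${dir#/etc/dotdee}"   # strip the /etc/dotdee head
                managed="${managed%.d/}"       # strip the trailing .d/
                dotdee --update "$managed"
                ;;
        esac
    done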

UNDO MODE

The undo function is something I added for my own development and debugging while working on the tool; however, I quickly realized that it might be an important tool for other system administrators and packagers (think postrm maintainer scripts!).

DPKG INTEGRATION

This would require some (minor?) integration with dpkg itself.  On package installation or upgrade, dpkg would need to detect when the target path of a file it wants to create or update is in fact a symbolic link referencing an /etc/dotdee path.  It would then need to drill down into that path and place the file it wants to write on top of the 50-dpkg-original file instead.  I have not yet contacted the dpkg maintainers, so I don't know whether this is a reasonable proposal or not.

IN PRACTICE

So what would this look like in practice?

Once integrated with dpkg, I'd like dotdee to be a utility that human system administrators could run to manually turn a generic conffile into a ".d" style configuration directory, such that they could append their own changes as some numbered file in the dotdee directory, avoiding the interactive conffile-changed prompts from dpkg.
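
In practice, that local workflow might be as short as this (the path and the snippet are illustrative):

    sudo dotdee --setup /etc/path/to/some.conf
    echo "my_local_setting = true" | \
        sudo tee /etc/dotdee/etc/path/to/some.conf.d/60-local
    sudo dotdee --update /etc/path/to/some.conf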

More importantly, I would like one package's postinst maintainer script to be able to take another package that it depends upon and turn its conffile into a dotdee-managed file, such that it could append or prepend configuration information necessary for proper operation.
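
A hypothetical postinst fragment for a package "foo" that depends on "bar" might look something like this; the package names, paths, and configuration snippet are all illustrative:

    #!/bin/sh
    set -e
    if [ "$1" = "configure" ]; then
        # take over bar's conffile, if not already dotdee-managed
        dotdee --setup /etc/bar/bar.conf
        # drop in the configuration foo needs, sorted after the original
        printf '%s\n' "# appended by the foo package" \
            "enable_foo_integration = true" \
            > /etc/dotdee/etc/bar/bar.conf.d/60-foo
        # regenerate the flat file
        dotdee --update /etc/bar/bar.conf
    fi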

COMMENTS?

I plan to lead a session on this topic at the Ubuntu Developer Summit in May 2011 in Budapest, and I have also proposed it on the debian-dpkg@ list.

But in the meantime, what do you think?  Have you encountered this problem before?  How have you solved it?  What parts of this proposal do you think are reasonable?  Are any parts completely unreasonable to you?  Can you think of any extensions or changes you'd make?  Would you use something like this?

Cheers,
:-Dustin

Tuesday, April 26, 2011

Introducing ecryptfs-recover-private -- Recover your Encrypted Private Directory!



Once again, this post is long, long, long overdue ;-)

I'm pleased to announce the general availability of a new utility -- ecryptfs-recover-private!

For several years now, we in the #ecryptfs IRC channel and in the eCryptfs community on Launchpad have been pointing people to this blog post of mine, which explains how to manually mount an Encrypted Home or Private directory from an Ubuntu LiveCD.

I'm quite happy to say that this is now an automated process, with the release of the Ubuntu 11.04 (Natty Narwhal) Desktop later this week!

If you find yourself in a situation where you need to recover your Encrypted Home or Encrypted Private directory, simply:
  1. boot the target system using an Ubuntu 11.04 (or newer) Desktop LiveCD
  2. make sure that your target system's hard drive is mounted
  3. open a terminal
  4. install ecryptfs-utils with 'sudo apt-get install -y ecryptfs-utils'
  5. run 'sudo ecryptfs-recover-private'
  6. follow the prompts
  7. access your decrypted data and save it somewhere else
  8. alternatively, launch the graphical file browser with 'sudo nautilus' and navigate to the temporary directory
The utility will do a deep find of the system's hard disk, looking for directories named ".Private", and will interactively ask you whether each one is the folder you'd like to recover.  If you answer "yes", you will then be prompted for the login passphrase that's used to decrypt your wrapped mount passphrase.  Assuming you have the correct credentials, it will mount your Encrypted Home or Private directory in read-only mode, and point you at the temporary directory where it's mounted.
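
Under the hood, the search phase amounts to something like the following (simplified; the real script also validates each candidate and handles the mount for you):

    # look across the whole disk for encrypted .Private directories
    sudo find / -type d -name .Private 2>/dev/null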

Here's a video demonstration...





Tossing you a life raft,
:-Dustin

Friday, April 15, 2011

The New Look of the Ubuntu 11.04 Server Installer!

With Natty Beta2, the Ubuntu 11.04 Server Installer received a little bit of the same aubergine love that the Ubuntu Desktop has enjoyed for the last few releases.  Moving away from that 1980s MSDOS/PCDOS VGA blue look, our Server installer now sports a distinctively Ubuntu color scheme!




Note that I used the Ubuntu quick install preseed to install the Ubuntu Server 11.04 Beta2 release, in a 64-bit KVM with 4 virtual CPUs, 2GB of memory, and with both the ISO and the backing qcow2 image in a tmpfs.  I captured the video with xvidcap, increased the frame rate a bit to match the music with avidemux, and used pitivi to add the music to the video.  The music, of course, is Purple Haze by Jimi Hendrix, live from the San Diego Sports Arena, May 24, 1969, on The Jimi Hendrix Experience album.
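
For the curious, a KVM invocation matching that description would look something like this (the exact paths and flags here are illustrative, not a transcript of the actual command line):

    kvm -m 2048 -smp 4 \
        -cdrom /dev/shm/natty-beta2-server-amd64.iso \
        -drive file=/dev/shm/natty-server.qcow2,if=virtio \
        -boot d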

:-Dustin

Thursday, April 14, 2011

apply-patch $URL

Two updates about the slick new apply-patch tool in Ubuntu's bikeshed...

First, it can now take a URL as an argument, retrieving the patch via wget and then iterating over it to automatically detect the patch strip level.
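
The detection logic boils down to trying successive strip levels with --dry-run until one applies cleanly; here's a minimal sketch of that approach (not the actual bikeshed source):

    url="$1"
    tmp=$(mktemp)
    wget -q -O "$tmp" "$url"              # fetch the patch
    for p in 0 1 2 3 4; do                # try increasing -p strip levels
        if patch --dry-run -p"$p" < "$tmp" >/dev/null 2>&1; then
            patch -p"$p" < "$tmp"         # apply at the first level that fits
            break
        fi
    done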

Second, I spent several hours hacking on the source of patch itself, trying to add support for automatic strip-level detection.  I haven't yet succeeded, though, as the iterative approach I use with --dry-run in my wrapper script unfortunately doesn't translate: there's no reentrant function that I can call over and over again.  Dry-run is a global variable that's either on or off for the length of the program's run, with special behavior keyed on its boolean value.  Anyway, I have filed a bug with the upstream patch project, showing them what I have, and asking whether they have advice on if, and how, it might be implemented directly in the patch source.  Stay tuned...

:-Dustin

Tuesday, April 12, 2011

Austin's Natty Release Party


Ubuntu's 11.04 Natty Narwhal release is just around the corner, so it's time for another Austin Release Party!

Let's celebrate alliteration and the Natty Narwhal at North by Northwest restaurant and brewery with some locally brewed frosty beverages on Thursday, April 28th, starting at 6pm.


Please forward this invitation on to your LoCo, LUG, and your favorite Ubuntu-loving geeks :-)

:-Dustin

Tuesday, April 5, 2011

A Lesson Learned the Hard Way about SSDs

Everyone told me, when I started looking at SSD hard drives, "Buy Intel."

But I didn't listen.  And boy, did I pay for it.  Not once, but twice :-(



As of yesterday, my 1+ year saga with Patriot SSDs is finally over.  Stay tuned for the next post, where I'll talk about a few really important lessons learned, in terms of data backup, and some tools I now use to avoid this situation ever again.  Until then, here's a timeline, meticulously reconstructed from my email and system logs.
  • 17 December 2009
    • Paid $406.97 at Amazon.com for a Patriot SSD; expensive, but Merry Christmas to me!
    • Patriot Torqx 2.5-Inch 128 GB SATAII Solid State Drive with 220MB/s Read - PFZ128GS25SSDR
    • Received and installed Ubuntu Lucid a few days later
    • Read/write benchmarks were very close to advertised rates, and I bragged to my Intel-SSD-wielding colleagues
  • 3 March 2010
    • Hard drive simply "disappeared", doh!
    • Neither the BIOS nor kernel could see the hard drive
    • Patriot acknowledged the issue as a firmware bug, and provided a Windows executable to flash the controller on the hard drive
    • Flashing the controller would discard all data on the hard drive, no way to recover
    • There was no Linux alternative for the magic Windows executable
    • I had reasonable backups (within the last week or so), so I started the RMA process
  • 4 March 2010
    • Returned to Patriot via Fedex (at their expense)
  • 24 March 2010
    • Received replacement drive, 3+ weeks later
    • Re-installed Ubuntu Lucid
  • 19 November 2010
    • Another crash; again hard drive just "disappeared"
    • I was traveling at the time, and did not have a current backup :-(
    • I wrote the run-one utility days later (more on that in the next post), and redesigned where and how I store and backup data
  • 21 November 2010
    • Reinstalled Ubuntu Maverick onto an old, spare 5400rpm drive
    • Wow, I had not realized until now how much local hard drive performance directly affects my development productivity!
  • 22 November 2010
    • 2nd RMA filed with Patriot
  • 23 November 2010
    • Since I was traveling when the error occurred, my backups were way out of date, and I stood to lose quite a bit of valuable, irreplaceable data
    • So I shipped the dead drive (and a working 5400rpm drive for the recovered data) to a data recovery facility specializing in SSD/Flash -- A+ Perfect Computers
  • 24 November 2010
    • I paid $245.98 for a 120GB Intel SSD on Amazon.com, which is exactly what I should have done a year earlier :-(
  • 29 November 2010
    • I paid $475 for the recovery, which was explicitly not reimbursed by Patriot
    • If A+ Perfect Computers could recover my data, I fail to see how or why Patriot could not do the same, at their expense -- very disappointing
    • I received a phone call from a friendly, knowledgeable, Linux-savvy A+ technologist, who emailed me a few of my eCryptfs encrypted files, for my verification
    • This technologist explained how their recovery worked, at a high level, bypassing Patriot's faulty on-board controller/firmware with a working one, for the duration of the recovery
    • Note that I very much appreciated having my private data encrypted, in this case, as I'm quite literally sharing my hard drive with an untrusted 3rd party
      • Ubuntu Encrypted Home for the win!!!
  • 3 December 2010
    • I received the original, broken Patriot hard drive back from A+ Perfect Computers, as well as my 5400rpm drive with a complete copy of the recovered data
    • The recovery appeared to be perfect, up until minutes before the drive disappeared
  • 5 December 2010
    • I received my 120GB Intel SSD and installed Ubuntu Natty
  • 6 December 2010
    • I shipped the broken Patriot hard drive back to the manufacturer for replacement
  • 22 November 2010 - 3 March 2011
    • 24 emails sent or received between myself and Patriot, during which I learned:
      • 128GB Torqx was no longer manufactured
      • 120GB Inferno was the only option for a replacement
      • The Inferno was in short supply, and shipments were delayed by months
  • 10 March 2011
    • 3+ months later, finally received a replacement drive
  • 4 April 2011
    • I sold my factory sealed, brand new Inferno replacement on eBay
This whole saga has cost me several hundred dollars, between the original price I paid for the Torqx, the data recovery fee, and the huge loss at which I sold the replacement Inferno.

However, I believe my backup scheme today is absolutely better than ever!  And perhaps more importantly, the entire Ubuntu world now has the run-one and run-this-one utilities at its disposal ;-)
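
If you haven't seen it yet, run-one guarantees that no more than one instance of a given command (with the same arguments) runs at a time, which is exactly what you want for backup jobs fired from cron.  For example (the hostname and paths are illustrative):

    # safe to fire every few minutes from cron; overlapping runs simply exit
    run-one rsync -az $HOME/ backup-server:/backups/$(hostname)/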

:-Dustin

Monday, April 4, 2011

Austin Blind Cafe

Kim and I experienced something truly unique and entirely perspective-changing a couple of weeks ago...

Accepting an invitation from our friends Christian and Julia, we attended the first presentation of the Blind Cafe in Austin.

I had heard of something similar in London, where several of my colleagues attended a "team building" dinner some time ago.  But as far as I'm aware, this was a first for Austin.

We attended with a big group; I think there were 10 of us in our party (of which Kim and I really only knew one other couple).  We spent a few minutes in anxious anticipation in the hallway, and received a few instructions from our hosts.

Eventually, we were led through two ante-chambers of darkness by our blind guide, toward our seats in a banquet room, completely and absolutely devoid of all light!  You could open your eyes as wide as you could, and strain as hard as you wanted, but you would see absolutely nothing :-)  It took quite a while to get used to it, even after the immediate nervousness subsided.

Kim managed to put her hand directly into someone's salad as we made our way to our seats -- oops :-)  She sat on the end of a table, and had the security of a wall to her right.  I sat on her left, with the rest of our new friends to my left and across the table.

Eating in blindness was a distinct, unique experience.  Most of my friends and family know how much I hate salad, but here I had a huge salad sitting right in front of me :-)  Under normal circumstances, I might have been able to pick through and eat the "good" parts.  But here, I couldn't :-)  And I couldn't really tell how much I had left to eat.  I think that was the longest salad of my entire life!

Now, the salad plate was already served for us when we sat down, but we had to help ourselves to the main course -- rice and curry.  We immediately dropped the rice serving utensil on the ground, never to be found again.  And so we improvised, with each of us sort of grabbing a handful of sticky rice, cupping it against our forks.  The curry was a little easier to serve, as thankfully we didn't drop that spoon.  The food was good, if a little bland for my tastes.  Everyone was served the same thing, which sort of necessitates a lowest common denominator of accessibility, so the food was vegan and not very spicy.  Dessert, though, was absolutely delicious!  Vegan cuisine is exceptionally good at chocolate and dessert :-)  Mmm.

Okay, food aside, the experience was second to none.  It was particularly disarming in that most of our conversations were with people whom we had never really met (friends of Christian's and Julia's).  This was really interesting: talking and listening, without the aids of reading or displaying body language.

After dinner and still in the dark, we were treated to an excellent concert, mostly led by the organizer/producer of the Blind Cafe, Rosh.  He has sort of a John Mayer style, with a great voice and an acoustic guitar.  The additional acoustic instrumentation -- a pair of violins, a cello, and a viola -- really filled out the ensemble.  As a musician myself, it was engrossing to listen, and only listen, to the music in the room around us.  With no visual distractions, the music just seemed to pour through me.

At the end of the evening, Kim said that she really didn't want to leave.  She was having a great time, and was still in the process of absorbing the experience.  The organizers lit one lone candle in the middle of the room before we left.  The revelation of the room around us was mesmerizing, finally being able to "see" how the room was arranged, how big the space was, how far (or near) the next table was, and where that damn rice spoon went! :-)

Listen, if you ever get a chance to attend a presentation of the Blind Cafe in your town, don't hesitate -- do it!  Proceeds benefit charity and help raise blindness awareness, and you'll enjoy an experience that may change your perspective entirely.  It seems that the event has been held in Portland OR, Boulder CO, and Austin TX so far, and there are events on the calendar for Cincinnati OH and Seattle WA.

:-Dustin

Friday, April 1, 2011

Windows in EC2 takes 15-30 minutes to generate a password? What the...?!?

I needed to check something on Windows today.  I don't have any Windows installations locally, so my good buddy Scott Moser suggested that I just launch one in EC2.  A t1.micro Windows instance costs something like $0.03/hour.  Good idea.  That can't be too hard...

Here's what I did...
  1. I started at the web console, http://aws.amazon.com/console/
  2. Logged in, and then clicked on the EC2 tab
  3. Then I clicked on Launch Instance
  4. There was a popup for Quick Start, which listed a few AMIs: mostly Amazon's ripoff of CentOS, a couple of SUSE images, and Windows.  Notably, there are no Ubuntu AMIs here...
  5. I selected Windows Server 2008 Base (ami-c3e40daa), 32-bit
  6. I used a t1.micro, and clicked Launch Instance
  7. I clicked continue enough times to make a Canonical Design Team member drive a stake through their MacBook Pro
  8. I selected my ec2 keypair
  9. I accepted the default Security Group configuration, which opens the RDP port 3389
  10. I clicked Launch again (I think this was the 3rd button in this process labeled "Launch")
  11. Then I clicked a link to View your instances on the Instances page
  12. From there, I could see my instance running, and was given the hostname, and instructions on how to connect to the instance through Windows
  13. Instead, I dropped to an Ubuntu shell and ran:
    rdesktop ec2-NN-NN-NN-NN.compute-1.amazonaws.com
  14. Alternatively, I could have clicked Applications -> Internet -> Terminal Server Client
  15. Now I tried to login
  16. I wasn't able to do so, as I needed a password, so I went back to my AWS web page, right-clicked on my running instance, and my jaw hit the floor when I saw this:

  17. Wow.  Wow.  Wow.  15-30 minutes to generate a 10-character password.  All I can think is that it takes this long to gather enough entropy to seed their equivalent of /dev/random.  Still, this seems broken, in so many ways...
  18. So I waited the obligatory 15-30 minutes, right-clicking and checking multiple times whether my password was ready.  Eventually, it was.  I needed to dig up the clear text of my private ec2-keypair.pem to decrypt that 10-character password; see the sketch after this list.  (Another thing that seems so broken to me about AWS ... they generated my private key and gave it to me, rather than me giving them my public key, and us operating with a true public/private asymmetric scheme.)
  19. Anyway, once this was all said and done, I had a Windows machine running in EC2.  That 30 minutes spent waiting for a password was kind of a waste, though...  :-/
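
For what it's worth, the decryption itself can be done locally with openssl, since EC2 returns the Administrator password base64-encoded and encrypted to your keypair; a sketch, with the variable and file names illustrative:

    # the encrypted password data, as returned by EC2, is base64-encoded
    echo "$ENCRYPTED_PASSWORD_DATA" | base64 -d | \
        openssl rsautl -decrypt -inkey ec2-keypair.pem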


If nothing else, it reminds me why I love me some Ubuntu and ssh-import-id :-)

:-Dustin