From the Canyon Edge -- :-Dustin

Tuesday, October 30, 2012

Seed /dev/urandom through Metadata (EC2, OpenStack, Eucalyptus)


If you attended my talk about Entropy at the OpenStack Summit in San Diego earlier this month, or you read my post and slides here on my blog, you noticed that I had a few suggestions as to how we might improve entropy in Linux cloud instances and virtual machines.

There's one very easy approach that you can handle entirely on your end, when launching an instance, if you use Ubuntu's cloud-init utility, which consumes the user-data field from the metadata service.

You simply need to use ec2-run-instances or euca-run-instances with the --user-data-file option.
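For example, a launch with locally supplied user-data looks roughly like this (the file name and image ID are placeholders; the rest of this post shows what goes into that file):

euca-run-instances --user-data-file user-data.txt emi-12345678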

Cloud-init supports a directive called write_files.  Here, you can specify the path, ownership, permissions, encoding, and content of a given file, which cloud-init will write at boot time.  Leveraging this, you can simply "create" (actually, just append to) the pseudo-random device that the Linux kernel provides at /dev/urandom, which is owned by root:root and permissioned rw-rw-rw-.  The stanza should look like this:

write_files:
-   encoding: b64
    content: $SEED
    owner: root:root
    path: /dev/urandom
    perms: '0666'


Now, you'll need to generate this using a script on your end, and populate the $SEED variable.  To do that, simply use this on your host system where you launch your cloud instance:

SEED="$(head -c 512 /dev/urandom | base64 -w 0)"

This command will read 512 bytes from your local system's /dev/urandom and base64 encode it without wrapping lines.  You could, alternatively, read from your local system's /dev/random if you have enough time and entropy.
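Putting the two pieces together, here's a minimal sketch of how you might assemble the complete user-data file on your workstation (user-data.txt is just an example name):

# Generate a fresh seed locally and wrap it in a complete cloud-config file
SEED="$(head -c 512 /dev/urandom | base64 -w 0)"
cat > user-data.txt <<EOF
#cloud-config
write_files:
-   encoding: b64
    content: $SEED
    owner: root:root
    path: /dev/urandom
    perms: '0666'
EOF

That user-data.txt file is then exactly what you'd pass to --user-data-file when launching.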

Using the recipe above, you can ensure that your instance has at least some randomness (4,096 bits of it, in fact) that was collected outside of your cloud provider's environment.

I'm representing Gazzang this week at the Ubuntu Developer Summit in Copenhagen, Denmark, pushing for better security, entropy, and key management inside of Ubuntu's cloud images.

Cheers,
:-Dustin

Friday, October 19, 2012

Encrypt Everything Everywhere (in OpenStack)


Here are the slides from the first of my two presentations on behalf of Gazzang today at the OpenStack Summit in San Diego, CA.

In this presentation, we started out examining the landscape of security within cloud computing, what's changed, and why encryption is essential to the integrity of the industry.  We talked about the types of sensitive data that need protection, and looked at some astounding statistics about data breaches in the past 3 years that could have been easily thwarted with comprehensive encryption.

We then discussed the several layers where encryption is essential:

  1. Network layer
  2. Filesystem layer
  3. Block storage layer

Within the network layer, SSH seems to be well understood and practiced, though we did talk a little about some tools that can help with SSH public key management, namely ssh-copy-id, ssh-import-id, and SSHFP DNS records.  We also talked about TLS and SSL, the fact that many applications and services support SSL, but that it's rarely configured or enabled (even in OpenStack!).  PKI and key management tend to be the hard part here...
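For instance, each of those SSH key management helpers is a one-liner (the user and host names below are placeholders):

# Push your existing public key to a remote host's authorized_keys
ssh-copy-id user@remote-host
# Pull a user's published public keys (e.g., from Launchpad) into authorized_keys
ssh-import-id launchpad-username
# Print SSHFP resource records for this host's keys, suitable for publishing in DNS
ssh-keygen -r myhost.example.com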

At the block storage layer, we discussed dmcrypt, and how it can be used to efficiently protect entire block devices.  We discussed several places within OpenStack where dmcrypt could and should be used.  We also discussed some of the shortcomings or limitations of dmcrypt (single key for the entire volume, hard to back up incrementally, all-or-nothing encryption).
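As a point of reference, protecting a spare block device with dmcrypt/LUKS takes only a few commands; this sketch assumes an unused volume at /dev/vdb:

# Initialize the device as a LUKS container (destroys existing data; prompts for a passphrase)
sudo cryptsetup luksFormat /dev/vdb
# Open the container as a device-mapper target named "secure"
sudo cryptsetup luksOpen /dev/vdb secure
# Create a filesystem on the mapping and mount it
sudo mkfs.ext4 /dev/mapper/secure
sudo mount /dev/mapper/secure /mnt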

I then introduced overlayroot to the OpenStack crowd, as a convenient way of encrypting all local changes and data within an OpenStack guest.
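For a taste of what that looks like, overlayroot is driven by a single configuration line; this sketch assumes Ubuntu's overlayroot package and a spare /dev/vdb device (double-check the exact syntax against the package documentation for your release):

# /etc/overlayroot.conf
# Back the writable overlay with a dmcrypt-protected volume on /dev/vdb;
# a throw-away, randomly keyed alternative is overlayroot="tmpfs"
overlayroot="crypt:dev=/dev/vdb"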

At the filesystem layer, we discussed per-file encryption with eCryptfs, as well as Gazzang's commercially supported distribution of eCryptfs called zNcrypt.  We compared and contrasted eCryptfs with dmcrypt, and discussed the most appropriate places to use each within OpenStack.
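A minimal eCryptfs example, for comparison, stacks an encrypted mount over an existing directory so that files underneath are encrypted individually (the path is a placeholder; the mount prompts interactively for a passphrase and cipher options):

# Stack an eCryptfs mount over a directory; new files are encrypted per-file
sudo mount -t ecryptfs /srv/data /srv/data
# Or, on Ubuntu, set up an encrypted Private directory for the current user
ecryptfs-setup-private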

Finally, we touched on the piece of technology required to bring all of this together -- key management. To actually secure any encrypted data, you need to safely and securely store the keys necessary to access that data somewhere other than the systems holding the encrypted data.  We talked a little about OpenStack Keystone, and how it does and doesn't solve this problem.  We also introduced zTrustee, Gazzang's commercially supported, turnkey key management solution in this space.

Enjoy!



:-Dustin

Thursday, October 18, 2012

Entropy, or Lack Thereof, in OpenStack Instances (and how to improve that)


I gave two presentations today at the OpenStack Design Summit in sunny San Diego, CA, as we prepare for the Grizzly development cycle.

In this presentation, I spent about 40 minutes discussing several research papers over the last 6 years showing the problems with entropy and randomness in cloud computing.  Namely:

  1. The Analysis of the Linux Random Number Generator (2006)
  2. The iSEC Partners Presentation at BlackHat (2009)
  3. Minding your P's and Q's (2012)

There are two pieces of the entropy problem in OpenStack and cloud computing that I'm interested in helping improve:

  1. Better initial seeds for the pseudo-random number generator at instance initialization
  2. Better ongoing entropy gathering throughout the lifetime of the instance

To the first point (better seeds), I suggested a series of technologies that could significantly improve the situation in OpenStack in the near term:
  1. The hypervisor could provide a random seed through a block device to the guest
  2. The hypervisor could expose a urandom device through the metadata service (a rough sketch follows this list)
    • Actually, I'm sitting next to Scott Moser right now, who attended my talk earlier today and, mere hours later, has already hacked this into the OpenStack metadata service :-)  His merge proposal is here.  This is why I love open source software...
  3. The user can pass their own locally generated seed to the instance through cloud-init and the userdata
  4. Additional seed data can be assembled through the aNerd protocol
    • There's lots more to say about this one...I'll have another post on this soon!
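To make the second idea above concrete, here's a rough guest-side sketch; the random-seed metadata path below is purely hypothetical, used only to illustrate the shape of the interface:

# Hypothetical: at boot (as root), fetch a seed from the metadata service and
# stir it into the kernel's pool (the random-seed path is illustrative only)
curl -s http://169.254.169.254/latest/meta-data/random-seed > /dev/urandom
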
As for improving the ongoing entropy gathering...
  1. Eventually, a new wave of cloud servers with modern CPUs will have Intel's DRNG feature and leverage the new rdrand instruction
    • Unfortunately, we're probably a little ways off from that being widely available
    • Colin King has benchmarked it -- really impressive performance!
  2. KVM's new virtio-rng driver is pretty cool too, allowing a server to pass through access to a hardware random number generator
  3. HAVEGE simply rocks, and should be installed in every cloud instance (see the quick sketch after this list)
  4. Gazzang's zTrustee encryption key manager also supports a secure, authenticated entropy service (as a commercial offering from my employer)
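Here's that quick HAVEGE sketch; the haveged package in the Ubuntu archive is all you need:

# Check the kernel's entropy estimate (often just a few hundred bits in a fresh guest)
cat /proc/sys/kernel/random/entropy_avail
# Install the HAVEGE daemon, then check again after a minute or so
sudo apt-get install haveged
cat /proc/sys/kernel/random/entropy_avail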

Enjoy!


:-Dustin
