From the Canyon Edge -- :-Dustin

Monday, August 24, 2009

Solar Installation - Part 3

Alright, so with the rails up, we're ready for Part 3...installing the panels. The panels are mounted on the aluminum frames and connected to a DC voltage collector, which feeds into the DC-to-AC power inverter.

We have a total of 38 modules going on the roof, in 3 separate "strings" or "arrays" (the solar guy's terms). For those of us who are programmers:


power_t *solar1, *solar2, *solar3;  /* "arrays"? more like pointers... */
// versus
power_t solar1[16];  /* three fixed-size "strings": 16 + 11 + 11 = 38 modules */
power_t solar2[11];
power_t solar3[11];


Heh, okay, bad joke!

A little over half of the panels are installed now; we're waiting on the rest to ship. Hopefully, they'll all be installed by the end of the week. Stay tuned...

For other articles in this series, see:
http://blog.dustinkirkland.com/search/label/Solar

:-Dustin


Friday, August 21, 2009

qemu-kvm: call for testing

If you're running Karmic and you use KVM, I'd appreciate your help testing the qemu-kvm package.

sudo apt-get install qemu-kvm

And then just use KVM as you normally do. Please file bugs at:
Thanks,
:-Dustin

Wednesday, August 19, 2009

Solar Installation - Part 2


Step 2 is installing aluminum brackets on the roof, to which the solar panels will actually mount.

I was chatting with one of the two installers. He noticed that I was wearing a Linux Foundation t-shirt, and he mentioned that he ran Linux on his older computer. Ubuntu, as it turned out. He said that he found it mostly user-friendly, and was able to do almost everything he could do under Windows, except use RealPlayer to listen to his favorite college radio station, kdvs.org.

So I checked out kdvs.org, and found multiple m3u streams, for both mp3 and ogg formats. Within minutes, we were streaming to the outdoor speakers while they were working on the roof ;-) When he got home, he confirmed that he was able to do the same with Amarok.

For other articles in this series, see:
http://blog.dustinkirkland.com/search/label/Solar

:-Dustin

Tuesday, August 18, 2009

A Statistical Analysis of Potential PowerNap Energy Savings


I was asked recently how much energy savings administrators might actually expect from an Ubuntu Enterprise Cloud, powered by Eucalyptus and using PowerNap.

I have blogged before about how much I enjoy mathematics and statistical analysis, and this is in fact another exciting, hard question!

The question was presented as the following hypothetical...
Given 10 quad-core servers in a cloud running 40 virtual machines, 20 of the virtual machines are simultaneously discarded. What power savings would you expect from PowerNap?
This is actually a rather complex combinatorics problem. I'll restate it as such:
Given 10 buckets, each of which can hold 0 - 4 items, how many ways can 20 items be distributed among those 10 buckets? Furthermore, what is the statistical distribution of empty buckets?
It's been 10 years since my university statistics classes, and my brain melted trying to derive the permutation formula. But a big thanks to my good friend Matt Dobson for solving that! If you're interested, you can view his combinatorics formula here.
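For reference, one standard way to count these scenarios (a textbook inclusion-exclusion argument, and not necessarily the exact form Matt derived) goes like this: the number of ways to distribute g guests among h hosts, each holding at most p guests, is

sum over j = 0, 1, 2, ... (while g - j*(p+1) >= 0) of: (-1)^j * C(h, j) * C(g - j*(p+1) + h - 1, h - 1)

where C(n, k) is the binomial coefficient "n choose k".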

It took me about an hour to hack a Python script that could empirically solve this problem by brute force, generating a comprehensive list of all of the permutations. You can now download and run /usr/bin/powernap_calculator provided by the powernap package in Karmic.

Here are the results running on the parameters above:

$ time powernap_calculator --hosts 10 --guests-per-host 4 --guests 20
Calculating...99%

In a cloud with [10] hosts, which can handle [4] guests-per-host, currently running [20] guests,
you may expect the following:

[ 5.2%] likely that [0/10] of your hosts would powernap, for a [0%] power savings
[ 27.5%] likely that [1/10] of your hosts would powernap, for a [10%] power savings
[ 42.5%] likely that [2/10] of your hosts would powernap, for a [20%] power savings
[ 21.8%] likely that [3/10] of your hosts would powernap, for a [30%] power savings
[ 2.9%] likely that [4/10] of your hosts would powernap, for a [40%] power savings
[ <1%] likely that [5/10] of your hosts would powernap, for a [50%] power savings

The overall expected value is [19.0%] power savings.

real 0m46.919s
user 0m46.227s
sys 0m0.276s


So at this snapshot in time, when your cloud suddenly dropped to 50% utilization, the expected value is a 19% power savings. Note that expected value is a very specific statistics term. Wikipedia says:
In probability theory and statistics, the expected value (or expectation value, or mathematical expectation, or mean, or first moment) of a random variable is the integral of the random variable with respect to its probability measure. For discrete random variables this is equivalent to the probability-weighted sum of the possible values, and for continuous random variables with a density function it is the probability density-weighted integral of the possible values.
Right, so we might expect about a 19% power savings for this random moment in time, when we simultaneously reduce our utilization by 50%.
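Spelled out for the output above, that's just the probability-weighted sum of the savings column:

0%*.052 + 10%*.275 + 20%*.425 + 30%*.218 + 40%*.029 ≈ 19.0%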

However, if we observe the cloud over time, and with perhaps more realistic usage patterns, the distribution should be much better than random. VMs will come and go at more staggered intervals than simultaneous destruction of half the instances.

And you will have another tremendous factor working in your favor -- Eucalyptus' greedy scheduling algorithm. This ensures that each time a new VM is launched, it will land on a system that's already running. This is known as an annealing system -- one that's constantly correcting itself -- since under-utilized systems will automatically powernap, and new VMs will fill in the gaps on running hardware. Pretty cool, I think.

So I'm curious...
  • What does (or would) your cloud look like?
  • How many -h|--hosts, -p|--guests-per-host, and -g|--guests do you usually have?
  • What does powernap_calculator say about your parameters?
  • Post your results!
Note that the powernap_calculator algorithm is exponential, O((p+1)^h), so large values of (p, h) will take a very long time to compute! For the 10-host example above, that's 5^10, or nearly 10 million scenarios to enumerate, which is why the run took about 47 seconds. I'm totally open to code review of powernap_calculator and any performance enhancements ;-)

In my cloud, I have 8 dual-core hosts. I typically limit my usage to 2 guests-per-host. And I rarely run more than 4 VMs at a time.
$ powernap_calculator -h 8 -p 2 -g 4
In a cloud with [8] hosts, which can handle [2] guests-per-host, currently running [4] guests,
you may expect the following power savings:
[ 26.3%] likely that [4/8] of your hosts would powernap, for a [50%] power savings
[ 63.2%] likely that [5/8] of your hosts would powernap, for a [62%] power savings
[ 10.5%] likely that [6/8] of your hosts would powernap, for a [75%] power savings
The overall expected value is [60.5%] power savings.
My servers run at about 80 Watts fully loaded. My electricity is about $0.10 per kilowatt-hour. So a year's worth of electricity with all 8 servers running all of the time is 8 * 0.08 kW * 24 hr/day * 365 days/year * $0.10/kWh ≈ $560/year.

I like the prospects of saving approximately 60.5% of that, or $339/year!

So how does the calculator work?

Let's use a slightly smaller example: hosts h=4, guests-per-host p=2, guests g=3.

Since each system can hold 0, 1, or 2 virtual machines, we're going to work in base 3 (that is, base p+1). We generate all possible 4-digit base-3 numbers, of which there are (p+1)^h; in our case, that's 3^4, or 81 scenarios to consider. For each of those scenarios, we write the number out as a list of its base-3 digits (each 0-2) and sum the list, throwing out any "invalid scenarios" where the sum does not add up to the target number of guests, g (3 here). The set of valid scenarios is actually much smaller, just the following 16:
[2, 1, 0, 0]
[1, 2, 0, 0]
[2, 0, 1, 0]
[1, 1, 1, 0]
[0, 2, 1, 0]
[1, 0, 2, 0]
[0, 1, 2, 0]
[2, 0, 0, 1]
[1, 1, 0, 1]
[0, 2, 0, 1]
[1, 0, 1, 1]
[0, 1, 1, 1]
[0, 0, 2, 1]
[1, 0, 0, 2]
[0, 1, 0, 2]
[0, 0, 1, 2]
Now, of these, we are interested in how many 0's are in each row, as this indicates a host that has no guests, and can therefore be powernapped.

Our calculations yield: [0, 4, 12, 0, 0], or:
  • 0 possible scenarios have no unused hosts
  • 4 possible scenarios have 1 unused host
  • 12 possible scenarios have 2 unused hosts
  • 0 possible scenarios have 3 unused hosts
  • 0 possible scenarios have 4 unused hosts
From this we can generate the following probabilities:
[ 25.0%] likely that [1/4] of your hosts would powernap, for a [25%] power savings
[ 75.0%] likely that [2/4] of your hosts would powernap, for a [50%] power savings
And weighting these probabilities, we can generate the expected value:
25%*.25 + 50%*.75 = 43.75%
The overall expected value is [43.8%] power savings.
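
If you'd rather see it in code, here's a minimal Python sketch of that brute-force enumeration (just an illustration of the idea above, not the actual powernap_calculator source):

#!/usr/bin/env python
# Minimal illustration of the brute-force approach described above
# (not the actual powernap_calculator source).
from itertools import product

def powernap_distribution(hosts, guests_per_host, guests):
    # Enumerate every way to put 0..guests_per_host guests on each host,
    # i.e. all (guests_per_host+1)^hosts base-(p+1) "numbers".
    counts = [0] * (hosts + 1)
    valid = 0
    for scenario in product(range(guests_per_host + 1), repeat=hosts):
        if sum(scenario) != guests:
            continue                      # throw out invalid scenarios
        valid += 1
        counts[scenario.count(0)] += 1    # hosts with 0 guests can powernap
    return [float(c) / valid for c in counts]

probs = powernap_distribution(4, 2, 3)
expected = sum(100.0 * i / 4 * p for i, p in enumerate(probs))
for i, p in enumerate(probs):
    if p:
        print("[%5.1f%%] likely that [%d/4] hosts powernap" % (100 * p, i))
print("Expected savings: [%.1f%%]" % expected)

Run with hosts=4, guests-per-host=2, guests=3, it reproduces the 25%/75% split and the 43.8% expected value shown above.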

Comments appreciated!

:-Dustin

Monday, August 17, 2009

Help translate eCryptfs to your language!



eCryptfs is now hooked into Launchpad's Translations functionality for internationalization of text strings (at least the shell scripts, for now).

If you are fluent in another language and would like to help translate eCryptfs, please help out at:

* https://translations.launchpad.net/ecryptfs

Also, a belated congratulations, and thank you to the Launchpad team on their open sourcing of all of Launchpad's functionality, under the AGPL!

Cheers,
:-Dustin

Thursday, August 6, 2009

Moving your Encrypted Home Meta Data out of /var/lib/ecryptfs

In the spirit of the FHS, Encrypted Home Directories in Ubuntu 9.04 stored certain configuration information about your Encrypted Home setup in /var/lib/ecryptfs.
However "correct" this location might be, it has caused considerable pain to a number of users, mostly because people don't backup /var/lib, generally. That said, it is totally possible to re-generate all of the information in your /var/lib/ecryptfs directory if you recorded your all-important mount passphrase.

In any case, this is not the most user-friendly place to store this information.

Thus, in Karmic, we are using /home/.ecryptfs instead of /var/lib/ecryptfs. Each user encrypting their home directory will have a directory in /home/.ecryptfs/$USER which will contain the "real" .ecryptfs and .Private directories.
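So the layout looks roughly like this (with $USER standing in for your login name):

/home/.ecryptfs/$USER/.ecryptfs   <- your eCryptfs configuration (formerly /var/lib/ecryptfs/$USER)
/home/.ecryptfs/$USER/.Private    <- the "real" encrypted data
$HOME/.ecryptfs                   -> symlink to /home/.ecryptfs/$USER/.ecryptfs
$HOME/.Private                    -> symlink to /home/.ecryptfs/$USER/.Private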

This provides a couple of advantages.

First, your /home directory is completely self-contained. You can back up that entire hierarchy and save all of the data necessary (excepting your secret passphrase, of course). Many users already make /home a separate partition, which makes this especially convenient.

Second, having access to /home/.ecryptfs/$USER/.Private means that you can perform backups of your encrypted data much more easily. This feature has been requested many, many times.

You can actually take advantage of this same configuration in Ubuntu 9.04, if you follow the guide below. I recommend doing so ;-)

As always, you should log out of all desktop sessions, and perform these instructions from a tty terminal, or an ssh session.


#!/bin/sh -e

# Move out of your home directory
cd /

# If your encrypted home is not mounted, try to mount it
grep -qs " $HOME ecryptfs " /proc/mounts || ecryptfs-mount-private

# With root privilege, create a /home/.ecryptfs/$USER directory
sudo mkdir -p /home/.ecryptfs/$USER

# Make sure $USER owns that
sudo chown $USER:$USER /home/.ecryptfs/$USER

# Rename your /var/lib/ecryptfs/$USER dir to the new location
sudo mv -f /var/lib/ecryptfs/$USER /home/.ecryptfs/$USER/.ecryptfs

# Remove the two symlinks in your mounted home, to .ecryptfs and .Private
rm -f $HOME/.ecryptfs $HOME/.Private

# Establish links to these two dirs
ln -sf /home/.ecryptfs/$USER/.ecryptfs $HOME/.ecryptfs
ln -sf /home/.ecryptfs/$USER/.Private $HOME/.Private

# Unmount home
while ecryptfs-umount-private | grep "Sessions still open"; do
    true
done

# Make your unmounted home writable (briefly)
sudo chmod 700 $HOME

# Move the *real* .Private directory to the new location
mv -f $HOME/.Private /home/.ecryptfs/$USER/

# Remove the .ecryptfs and .Private links
rm -f $HOME/.ecryptfs $HOME/.Private

# Re-establish the .ecryptfs and .Private links
ln -sf /home/.ecryptfs/$USER/.ecryptfs $HOME/.ecryptfs
ln -sf /home/.ecryptfs/$USER/.Private $HOME/.Private

# Mount your home directory again
ecryptfs-mount-private
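
# Optional sanity check (not part of the migration itself): confirm the
# mount came back and the links point at the new location
mount | grep "$HOME"
ls -ld $HOME/.ecryptfs $HOME/.Private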


:-Dustin
