RoBlog – Refactoring Reality

Discrete thought packets on .Net, software development and the universe; transmitted by Rob Levine

Refix – .NET dependency management tool

by Rob Levine on 14-Jun-2010

Just a short blog entry to highlight an exceptionally cool project that a colleague of mine has been working on in his spare time: Refix, a .NET dependency management tool.

This tool attacks the problem of a project whose dependencies themselves rely on common assemblies, but at different versions.

I’ve certainly had this problem many times: you have a solution that uses several third-party assemblies, all of which share a common dependency (such as log4net, perhaps), but different assemblies rely on different versions of that common item.

Refix helps you tackle this problem in two ways:

Firstly, it goes through your solution to work out whether there is actually a common set of dependency versions that can be used. If so, it can rewrite your project files accordingly.

If there isn’t, you can supply it with a list of compatible versions (e.g. tell it that v1.1.1.0 is compatible with v1.1.2.0), and it can automatically insert the correct assembly redirects into the application configuration files for you.
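
For anyone who hasn’t hand-edited one of these before, an assembly binding redirect in an app.config looks something like the fragment below (the log4net identity, token and version numbers here are purely illustrative placeholders, not output from Refix):

<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <!-- illustrative values only; substitute the real assembly name and publicKeyToken -->
        <assemblyIdentity name="log4net" publicKeyToken="aaaabbbbccccdddd" culture="neutral" />
        <bindingRedirect oldVersion="1.1.1.0" newVersion="1.1.2.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>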

It is looking to be an excellent tool so far. It is in the early stages, but completely usable, and it tackles a problem that I’ve not seen solved in .Net before.

Check it out:

http://refix.codeplex.com/

Backing up VMs in VMWare ESXi 3.5

by Rob Levine on 30-Apr-2010

I’m a massive fan of VMWare products. I use the excellent VMWare workstation for most of my development on the desktop, and I use VMWare ESXi (3.5) for hosting my virtual servers.

VMWare ESXi is a truly fantastic product, and it is free, but with this free version you are limited in the management toolset available. One thing that is missing is an easy way to back up your VMs.

Over time, I’ve tried a few approaches to backing up – none really good enough:

  • Using the download option in the "datastore browser". Not reliable for large files, and most of my VMs are > 30GB
  • Enabling ssh in the VMWare ESXi hypervisor, and using SCP to download the vm files. Again, not reliable for large files
  • Using the “VMWare vCenter Converter Standalone” to copy/convert my VMs to local copies. This is often slow though, and I don’t really understand what aspects of my VMs are being "converted", and into what. In short, I don’t really understand what is happening and whether it might impact the stability or performance of any restored VMs, so I am nervous about it.
  • I’ve investigated and given up on trying to get smbfs support running in the hypervisor shell. The idea here was to allow me to mount a remote Windows share, and just copy the vm files over. However, no smbfs support built in. No installer mechanism (e.g. no rpm and no apt-get), no compiler and no iso9660 support that would let me at least mount a cd-rom. This hypervisor is thin!
  • I tried using an Ubuntu 9.04 live CD, so I could boot the ESXi host machine into Ubuntu (which, of course, means the ESXi VM host server is unavailable during this time), but then realised there was no vmfs support in Ubuntu, so I wouldn’t be able to mount the disks containing the VMs.

All is not lost though; ESXi does support NFS natively (as a client), so it is relatively easy to create an NFS "backup share" on another server.

One thing worthy of note – I have tried creating an NFS server on my Windows XP box (using Windows Services for UNIX 3.0) and on my Windows 2008 server (using Services for Network File System). In both cases I could not get ESXi to play nice with the share, so I switched back to using Linux to host my NFS server. Right tool, right job, and all that.

My setup will be this – a VM running Ubuntu (10.04) which exports an NFS share that can be used to copy the backup VM files to and from ESXi (from within the ESXi shell). Optionally (and beyond the scope of this article), this exported directory can also be shared out via Samba, to give your Windows clients access to the backed up files.

These notes are a list of how I did this specifically. Your mileage may vary:

Initial setup of your Ubuntu environment

  • Install Ubuntu (10.04) to a VM, a physical box, or somewhere.
  • Make sure you have plenty of disk space to host the backups on, and create your target shared directory. For this example I’ll be using /backups
  • Set the permissions accordingly on this directory

Installing NFS and exporting your share

  • Install your NFS server:
    sudo apt-get install portmap nfs-kernel-server
  • Export your shared directory by editing /etc/exports and adding the following (noting that 192.168.0.1 should be replaced by the ip address of your ESXi server):
    /backups 192.168.0.1(rw,async,no_subtree_check)
  • Issue the command
    sudo exportfs -ra
  • Start/Restart the NFS service by issuing the command:
    sudo /etc/init.d/nfs-kernel-server restart
  • You should now have an exported NFS share!
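
To double-check that the export has taken effect, you can query the server from the Ubuntu box itself using showmount (part of the standard NFS client tools); it should list /backups along with the client address you allowed:

    showmount -e localhost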

Mounting your NFS share from ESXi

  • Start up the VMWare Infrastructure client and log into your ESXi server
  • Select Configuration->Storage and choose "Add Storage"
  • Choose "Network File System"
  • For the "server", enter the IP address of your Ubuntu server
  • For the "folder", enter the full path to the exported folder: /backups
  • For the "datastore name", enter a meaningful name, e.g. nfsshare
  • You should now be able to see the remote NFS share from the ESXi host.
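
As an aside, the same datastore can be added from the unsupported ESXi console rather than the Infrastructure Client. If my memory of the esxcfg-nas syntax serves (treat this as a sketch rather than gospel), it is along these lines, where 192.168.0.2 stands in for your Ubuntu server’s IP address:

    esxcfg-nas -a -o 192.168.0.2 -s /backups nfsshare
    esxcfg-nas -l

The second command simply lists the NFS datastores the host knows about.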

Backing up your VM

I prefer to perform this step from the command line rather than using the VMWare Infrastructure Client. Again, your mileage may vary.

  • Log onto the ESXi host’s console by hitting <ctrl>-<alt>-F1 at the VMWare splash screen, typing "unsupported" (without the quotes) and hitting return, and then entering your password.
  • Move to the volumes directory with
    cd /vmfs/volumes
  • Note that this directory contains subdirectories of the physical drives, and the NFS share
  • Ensuring the power is off on the VM you are about to back up, copy the VM’s directory to the NFS share. For example:
    cp Disk2/MyVM nfsshare -R
  • Check the folder has been copied to the target share, et voila – you have a backup of your VM
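
One aside: cp just copies the VM’s files verbatim. ESXi also includes vmkfstools, which understands the VMDK format and can clone a virtual disk; something along these lines (paths invented to match the example above) would clone just the disk rather than copying the whole directory:

    vmkfstools -i Disk2/MyVM/MyVM.vmdk nfsshare/MyVM/MyVM.vmdk

For a straightforward backup the plain cp is fine; I only mention vmkfstools as an alternative worth exploring.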

Restoring your VM

Again, I prefer to perform this step from the command line rather than using the VMWare Infrastructure Client. This is the same approach as backing up, except that we copy the files from the NFS share to the ESXi server. In other words, the last step becomes something like:

cp nfsshare/MyVM Disk2 -R

Finally, you may need to re-import the restored VM into the ESXi host’s inventory. To do this, navigate to the VM’s files in the VMWare Infrastructure Client, right-click on the VM’s .vmx file, and click "Add To Inventory".

Imaging a hard disk over the network

by Rob Levine on 19-Apr-2010

Yesterday I found myself in need of a way to take a hard disk image over the network; that is to image the entire hard disk to a remote Windows share.

It’s not a particularly complex operation, but it is nevertheless useful, so I thought I’d jot down the steps here.

In my case, the machine is a Windows Server (2008), which I am about to rebuild. However, I wanted to be able to image the drive so I could roll back to its current state if the new install didn’t go so well.

Bear in mind, my aim is to take a snapshot of the drive, a “saved position”, in its current state so I can restore it. I am not attempting to take a backup with a view to being able to retrieve data from it later. As such I am taking an image of the whole drive, rather than creating mountable copies of the individual partitions or backing up files.

Anyway – to perform this imaging, I downloaded the latest Ubuntu as a "Live CD" with the aim of using this to snapshot the drive to a remote Windows share.

Here are the steps I went through:

  1. Create a share on the target machine (another Windows box) and set the share and NTFS permissions to grant read/write access to a user on the Windows box.
  2. Boot the computer containing the drive to be backed up into Ubuntu
  3. Optionally zero the free space on the drive (see below for more information)
  4. Install smbfs; this allows you to mount windows shares:
    sudo apt-get install smbfs 
  5. Create a mount point for the remote Windows share:
    cd /mnt
    sudo mkdir remoteshare
  6. Mount your remote Windows share: 
    sudo smbmount //<remoteip>/<remoteshare> /mnt/remoteshare -o username=<username>

    in my case:

    sudo smbmount //192.168.1.1/backup /mnt/remoteshare -o username="robert levine"
  7. Finally, perform the disk image itself:
    sudo dd if=/dev/sda | gzip > /mnt/remoteshare/backup.dd.gz

    [Note: the drive I am backing up is /dev/sda. Your drive may not be – YOU MUST USE THE CORRECT DRIVE DEVICE NAME OR YOU WILL BACK UP THE WRONG DISK!]
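
One tweak worth considering (not something the steps above used, so treat it as an optional extra): dd defaults to a 512-byte block size, which makes a whole-disk read slower than it needs to be. A larger block size usually speeds things up considerably:

    sudo dd if=/dev/sda bs=1M | gzip > /mnt/remoteshare/backup.dd.gz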

Restoring the backup

To restore the backup, you follow the same steps of booting the target into Ubuntu, installing “smbfs”, creating the mount point for the remote share, and mounting the remote share.

To actually perform the restore, issue the following command:

gzip -dc /mnt/remoteshare/backup.dd.gz | dd of=/dev/sda

As with the backup – I am restoring to the drive /dev/sda. Your drive may not be – YOU MUST USE THE CORRECT DRIVE DEVICE NAME OR YOU WILL RESTORE OVER THE WRONG DISK AND LOSE YOUR DATA!
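
Before trusting an image enough to wipe a disk with it, it is also cheap to let gzip verify that the compressed file itself is intact (this doesn’t prove the disk was read correctly, only that the archive isn’t corrupt):

    gzip -t /mnt/remoteshare/backup.dd.gz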

Well – it worked for me!

Zeroing the free space on the drive

You may wish to zero the free space on the drive. The backup operation detailed above uses gzip to compress the drive image. However, if your drive has a lot of free space that previously contained files, then this space will actually contain the detritus of those files and may not be very compressible.

In my case it contained many tens of gigs of mp3s that had been deleted. Writing a single file of zeros will clean this and make the free space compressible.

However, it may also take a bit of time 🙂

Here are the steps – you may want to repeat this for each partition on the drive. In my case, only one partition contained "deleted free space": /dev/sda2, so this is the only one I zeroed.

  1. Create a mount point for the source drive in Ubuntu:
    cd /mnt
    sudo mkdir srcdrive
  2. Mount the source drive:
    sudo mount /dev/X /mnt/srcdrive

    where X is the device name and partition of the drive (in my case "sda2"). If you are not sure what the device name of the drive is, try listing all the drives/partitions on the system with:

    sudo fdisk -l

    This should help you to identify the drive/partition

  3. Once you have mounted the drive, write your zero file:
    cat /dev/zero > /mnt/srcdrive/zero.dat

    This may take some time!

  4. Once done, delete the file, and then unmount the drive:
    sudo rm /mnt/srcdrive/zero.dat
    sudo umount /mnt/srcdrive
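
As an alternative to cat, dd can do the zeroing with an explicit block size (a variation, not what I ran above); it will stop with a "no space left on device" complaint once the partition is full, which is exactly the point:

    sudo dd if=/dev/zero of=/mnt/srcdrive/zero.dat bs=1M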

Vista install disc and the .wim files

by Rob Levine on 17-Apr-2008

Yesterday I needed to reinstall a Windows font that seemed to be misbehaving. Over the years I have built up a fair bit of knowledge about Windows, so I thought I knew what to do:

  1. Delete the font from Windows\Fonts
  2. Locate the i386 directory on the install media for Windows (or my local drive if I ever copied it over)
  3. Find the compressed font in question; e.g. for Wingdings, the compressed file would be WINGDINGS.TT_
  4. Use the command line expand.exe tool to expand this to WINGDINGS.TTF
  5. Install the font by copying it to the Windows\Fonts directory.
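
For reference, step 4 in command form looks something like this (drive letters are whatever your install media and Windows directory happen to be; quoting from memory, so double-check against expand /? first):

    expand D:\i386\WINGDINGS.TT_ C:\Windows\Fonts\WINGDINGS.TTF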

If I remember correctly, the process has been the same for a long time. Back in the Windows 9x days there was no i386 (this was of Windows NT provenance), so you’d have to look in the cabinet (.cab) files. Before that, in the Win3.x/DOS days, you’d have to search the install media for any files you wished to reinstall. But overall, the process has been with us for some time, and the repository of files that is the i386 folder was the place to start.

The problem was that my Vista install did not contain an i386 directory. As I perused the directory structure of the DVD I discovered a directory named sources which contained a fair few files, but not nearly enough for a Windows install, and it didn’t contain the file I was looking for. In fact, the entire DVD didn’t seem to contain the file I was looking for, which brought me to the question: “WHERE ARE ALL THE BL**DY INSTALL FILES?”.

And then I noticed install.wim, all 2,412,507,182 bytes of it (that is 2.24GB folks).

It is bound to be in there. It must be like the .cab file of yesteryear.

So I double-clicked it; no default viewer for this file type.

WinZip – “Not a valid archive”.

UltraISO (great utility) – “Invalid or unknown image file format”.

And then I thought “hang on – there must be a userspace tool that ships with Windows for this file format, even if it is command line”; after all, who’d ship an O.S. in a format that you can’t access unless you were the installer program? Clearly that would be silly.

So I searched my Windows installation, and I searched the install disc, and then I searched the web for info.

No.

Either I missed something obvious, or there is no quick way to get at the files contained in a .wim image file.

From all my searching, apart from the odd shareware utility that might have been able to do it, I found one official answer: the IMAGEX.EXE utility, a Microsoft tool that lets you mount a .wim file as a drive.

And where was this tool?

Available as a free download in the Windows Automated Installation Kit.

And how big is this download?

Glad you asked.

It is 992.2MB.

3 observations to make here:

  • If I am right and there really isn’t a userspace tool to read .wim files that comes with Windows then that is absolutely pathetic.
  • I’m glad I got my broadband speed doubled last week.
  • Naturally the font I reinstalled didn’t fix the problem, so I still can’t print Franklin Gothic Book type in bold to my HP Photosmart 3310 printer.

Implementing a simple hashing algorithm (pt II)

by Rob Levine on 3-Apr-2008

In my last blog article I looked at implementing a hashing algorithm by trying different bitwise operations on the constituent fields of our class. It was very clear that out of AND, OR and XOR, only XOR provided us with anything like a balanced hash code. However, although it worked well in the previous example (a music library), all is not quite as it seems. On closer inspection of its behaviour, it turns out that the XOR hashing algorithm has flaws of its own and is not ideal in most situations.

 

The Commutativity Issue

The most obvious problem with the "XOR all fields" approach to hash codes is that any two fields XOR’d together will give you the same value regardless of the order in which they are XOR’d (i.e. XOR is a commutative operation). This increases the chance that you will get hash code collisions, which of course is a bad thing. Consider the following example (all fields contributing to the hash):

Forename    Middle name    Surname
John        Paul           Jones
Paul        John           Jones

Clearly the two people above are not the same person, but they will have the same hashcode if we generate our hash with a simple XOR of the hash codes of the constituent fields.

As mentioned in previous posts, a hashcode is not a unique identifier, and the fact both people have the same hash-code won’t break a hashtable. However, it will lead to a less efficient hashtable, as both our people will end up in the same bucket of the hashtable when, ideally, they shouldn’t.
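
To make that concrete, here is a minimal sketch (the Person class and its fields are invented for illustration; Equals is omitted for brevity, and only the GetHashCode body reflects the XOR approach under discussion):

public class Person
{
    public string Forename;
    public string MiddleName;
    public string Surname;

    // XOR-all-fields hash: XOR is commutative, so swapping Forename
    // and MiddleName produces exactly the same hash code.
    public override int GetHashCode()
    {
        return Forename.GetHashCode()
             ^ MiddleName.GetHashCode()
             ^ Surname.GetHashCode();
    }
}

With that in place, "John Paul Jones" and "Paul John Jones" are guaranteed to land in the same hashtable bucket.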

 

The Range Issue

Another problem that the XOR approach doesn’t address is that of the potential range of values of a hashcode. In my previous post I showed that the XOR implementation of hash code seemed acceptable for my music library example. However, in reality I was relying on the properties of the individual hash codes that made up my overall hash code. Specifically, many of the fields that contributed to the hash were strings, and the BCL implementation of System.String.GetHashCode() has a pretty good distribution. Had my music track entity not contained several string fields, things would have looked very different.

But what about fields that may have a poor "innate" distribution? Given that System.Int32.GetHashCode() returns the integer itself, what happens if I have a field representing a person’s age? The spread of values is, at best, 0 < age < 120, which is hardly the distribution across integer-space that you might want. A person’s height in cm? 0 < height < 250.

You see the problem? I don’t really want to be combining all these hash codes in the very low integer ranges using the XOR approach, because it means my final "cumulative" hash code will be stuck within this range as well (XOR an age of 35 with a height of 180, for example, and you get 151 – still a tiny number).

 

Examining the flaws in more detail

I don’t pretend to be an expert on hashing algorithms, but I can see that we have issues here and so the best way of discovering more (for me, at least) is to work through a broken example and see what I can do to fix it.

I very quickly settled on the idea of trying to write a replacement String.GetHashCode() implementation. I would get a dictionary of words (all in lower case), and try to write a hash code implementation based on the ASCII code of each letter in the word. Given that the hashcode of each ASCII code would be the ASCII code itself (since the hashcode of an integer is the integer itself), we would have a poor distribution of individual character hash codes; all in the range 97 (a) – 122 (z). This approach would also highlight problems such as commutativity (since many words contain the same letters, just in a different order).

My XOR implementation for this approach would look like this:

public int GetHashCode(string word)
{
    char[] chars = word.ToCharArray();
    int hash = 0;
    foreach (char c in chars)
    {
        int ascii = (int)c;
        hash ^= ascii.GetHashCode();
    }
    return hash;
}

I sourced my dictionary from here, and de-duplicated it (and converted the words to lower case) to produce this list of words.

As expected, this algorithm (referred to as AsciiChar_XOR in the diagram) has a spectacularly bad value distribution, as shown in this histogram:

Histogram for ASCIIXOR.

[Note that the width of each bucket here is 67108864, being 2^32 / 64]

Surprise, surprise – all 2898 words fall into the same histogram bucket! In fact they all fall into the far narrower range of 0-127 – the ASCII code range.

All of a sudden the simple XOR approach to hashing looks like a very poor performer indeed.

 

Examining better algorithms

Since we’ve already discussed the weaknesses of XOR, we should have a fairly good idea of where to focus our attention to create better algorithms. Firstly we should be choosing an algorithm that is non-commutative, and secondly we should be choosing an algorithm that uses the full range of integer-space, rather than limiting a hashcode to the range of its constituent members.

A bit of a search around reveals two approaches that are often discussed. The first one, given to me as a boiler-plate example by a Java developer friend, looks like this:

public override int GetHashCode()
{
    int hash = 23;
    hash = hash * 37 + (field1 == null ? 0 : field1.GetHashCode());
    hash = hash * 37 + (field2 == null ? 0 : field2.GetHashCode());
    hash = hash * 37 + (field3 == null ? 0 : field3.GetHashCode());
    return hash;
} 

[I shall refer to this type of algorithm as JavaStyleAddition as it seems to be a very common implementation in the Java world]

The second common pattern looks something like this:

public override int GetHashCode()
{
    int hash = 23;
    hash = (hash << 5) + (field1 == null ? 0 : field1.GetHashCode());
    hash = (hash << 5) + (field2 == null ? 0 : field2.GetHashCode());
    hash = (hash << 5) + (field3 == null ? 0 : field3.GetHashCode());
    return hash;
} 

[I shall refer to this type of algorithm as ShiftAdd]

Both of these have some similar characteristics. By taking the cumulative hash so far, applying an operation (multiply for JavaStyleAddition and left-shift-5 for ShiftAdd) and then adding the new field’s hash code, they both avoid the commutativity issue since

(x * n) + y is not generally equal to ( y * n ) + x.

They also both increase the range of the cumulative hash code above that of the constituent hash codes due to the effect of multiplying the cumulative hash code each time (remember that a single left shift is a multiplication by two).

You will also notice that both approaches use a prime number (23 in the examples shown) as the starting value for the hash, and JavaStyleAddition uses another prime (37) as the multiplier. My guess is that this, statistically, makes collisions less likely as you multiply up your hash code because if one side of the multiplication has no factors (other than 1 and itself), then you are lowering the statistical average number of factors of the result. Of course, I may be wrong about that :-s

A variant of ShiftAdd that I have seen during my Google journeys is one in which the hash codes are shifted and then XOR’d (rather than added). I shall refer to this as ShiftXOR.
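
I didn’t show code for ShiftXOR above, so here is my reconstruction – simply ShiftAdd with the addition swapped for an XOR:

public override int GetHashCode()
{
    int hash = 23;
    hash = (hash << 5) ^ (field1 == null ? 0 : field1.GetHashCode());
    hash = (hash << 5) ^ (field2 == null ? 0 : field2.GetHashCode());
    hash = (hash << 5) ^ (field3 == null ? 0 : field3.GetHashCode());
    return hash;
}

Here is how ShiftXOR, JavaStyleAddition and ShiftAdd fared against the same word list: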

Histogram for SX_JSA_SA.

This certainly looks better than the histogram for AsciiChar_XOR 😀

However, all three algorithms still cluster around the centre of the number range, and all exhibit other major spikes in distribution.

What I could have done at this point is break into a major mathematical and statistical analysis of these three hashing algorithms, but I decided against it for two key reasons. Firstly – I would have got bored, and secondly – I wouldn’t have had the first idea where to start!

Nope – I felt it better to fall back on my hacker instincts and munge various forms of the above algorithms to see if I could produce a better algorithm for my particular use case.

I quickly came up with two further algorithms, both combining the left-shift approach with the prime-add approach of the above algorithms. The only difference between them is that one adds the hash codes each time, while the other XORs them:

 

public override int GetHashCode()
{
    int hash = 23;
    hash = ((hash << 5) * 37) + (field1 == null ? 0 : field1.GetHashCode());
    hash = ((hash << 5) * 37) + (field2 == null ? 0 : field2.GetHashCode());
    hash = ((hash << 5) * 37) + (field3 == null ? 0 : field3.GetHashCode());
    return hash;
} 

[ShiftPrimeAdd]

and

 

public override int GetHashCode()
{
    int hash = 23;
    hash = ((hash << 5) * 37) ^ (field1 == null ? 0 : field1.GetHashCode());
    hash = ((hash << 5) * 37) ^ (field2 == null ? 0 : field2.GetHashCode());
    hash = ((hash << 5) * 37) ^ (field3 == null ? 0 : field3.GetHashCode());
    return hash;
} 

[ShiftPrimeXOR]

For good measure, I threw these into the mix alongside the standard BCL implementation of System.String.GetHashCode() [labelled as StringGetHashCode in the diagram] to see how they would fare:

Histogram for SPX_SGHC_SPA.

[Note that the y-axis for this histogram is half that of the previous diagram]

Now – that is MUCH more like it. We have a well balanced hash code distribution across the entire range of integer space. There are a few minor spikes, but all three seem to compare favourably with each other. The approach of including the prime and ‘multiplying’ up each time really does seem to do the trick.

Which would I choose out of ShiftPrimeXOR and ShiftPrimeAdd? Not sure – I’d have to benchmark them first and see which was fastest!

 

Conclusion

In summary, just XORing fields together may well produce an awful hash code distribution, unless the constituent field hash codes are themselves well balanced.

However, during the course of writing this article, I have realised that there are other relatively simple implementations that provide a good hash code distribution (for this example at least). More than that, I have reinforced my belief that these things are best checked out if you have any doubts. It doesn’t take long to put together a test harness and profile your algorithms with a sample of your data.

One thing I have omitted from my investigation is any discussion of the speed of the algorithms. It would be worth benchmarking each one, because if a class is being used in a hashtable, its .GetHashCode() method is called for every .Add, .Remove and .Contains call. However, the following thought does occur to me: with these sorts of repetitive mathematical operations, the relative speed of XOR vs. shift vs. multiply (etc.) may well depend on your CPU architecture.
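
The benchmark itself needn’t be anything fancy; a rough sketch along these lines (the hasher delegate stands in for whichever implementation is under test, and the word list is the same dictionary used above) would give a first-order comparison:

using System;
using System.Diagnostics;

public static class HashBenchmark
{
    // Times a candidate hashing function over a word list.
    public static TimeSpan Run(Func<string, int> hasher, string[] words, int iterations)
    {
        Stopwatch watch = Stopwatch.StartNew();
        int dummy = 0;
        for (int i = 0; i < iterations; i++)
        {
            foreach (string word in words)
            {
                dummy ^= hasher(word);   // combine the result so the loop isn't optimised away
            }
        }
        watch.Stop();
        Console.WriteLine(dummy);        // use the accumulated value for the same reason
        return watch.Elapsed;
    }
}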

On reflection, there is a lot more to hashing than the small amount I know and I’m sure many mathematical research papers have been written on the subject.

In the future, my default choice will probably lean towards ShiftPrimeXOR or ShiftPrimeAdd as a starting point. It would be a waste of time to spend days up front trying to work out the perfect hashing algorithm. My approach would be to choose one, use it, and keep an eye on its performance. If it proves too problematic, then consider optimising it; otherwise leave it alone.

Right – enough already about hashing algorithms!

A Visual Studio 2008 project containing the console application I used to generate these results can be found here.