RoBlog – Refactoring Reality

Discrete thought packets on .Net, software development and the universe; transmitted by Rob Levine

Implementing a simple hashing algorithm (pt II)

by Rob Levine on 3-Apr-2008

In my last blog article I looked at implementing a hashing algorithm by trying different boolean mathematical operations on the constituent fields of our class. It was very clear that out of AND, OR and XOR, only XOR provided us with anything like a balanced hash code. However, although it worked well in the previous example (a music library), all is not quite as it seems. On closer inspection of the behaviour of the XOR hash, it turns out that this hashing algorithm has its own flaws and is not ideal in most situations.

 

The Commutativity Issue

The most obvious problem with the "XOR all fields" approach to hash codes is that any two fields XOR'd together will give you the same value, regardless of the order in which they are XOR'd (i.e. XOR is a commutative operation). This increases the chance that you will get hash code collisions, which of course is a bad thing. Consider the following example (all fields contributing to the hash):

Forename   Middle name   Surname
John       Paul          Jones
Paul       John          Jones

Clearly the two people above are not the same person, but they will have the same hashcode if we generate our hash with a simple XOR of the hash codes of the constituent fields.
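To see this concretely, here is a minimal sketch (the Person class and its fields are purely illustrative, not taken from the original example) showing that XORing the field hash codes cannot tell the two people apart:

using System;

class Person
{
    public string Forename;
    public string MiddleName;
    public string Surname;

    // Hash by XORing the constituent field hash codes, as discussed above.
    // (Equals() omitted for brevity.)
    public override int GetHashCode()
    {
        return Forename.GetHashCode() ^ MiddleName.GetHashCode() ^ Surname.GetHashCode();
    }
}

class Program
{
    static void Main()
    {
        Person a = new Person { Forename = "John", MiddleName = "Paul", Surname = "Jones" };
        Person b = new Person { Forename = "Paul", MiddleName = "John", Surname = "Jones" };

        // XOR is commutative, so both lines print the same value.
        Console.WriteLine(a.GetHashCode());
        Console.WriteLine(b.GetHashCode());
    }
}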

As mentioned in previous posts, a hashcode is not a unique identifier, and the fact that both people have the same hashcode won't break a hashtable. However, it will lead to a less efficient hashtable, as both our people will end up in the same bucket when, ideally, they shouldn't.

 

The Range Issue

Another problem that the XOR approach doesn’t address is that of the potential range of values of a hashcode. In my previous post I showed that the XOR implementation of hash code seemed acceptable for my music library example. However, in reality I was relying on the properties of the individual hash codes that made up my overall hash code. Specifically, many of the fields that contributed to the hash were strings, and the BCL implementation of System.String.GetHashCode() has a pretty good distribution. Had my music track entity not contained several string fields, things would have looked very different.

But what about fields that may have a poor "innate" distribution? Given that System.Int32.GetHashCode() returns the integer itself, what happens if I have a field representing a person's age? The spread of values is, at best, 0 < age < 120, which is hardly the distribution across integer-space that you might want. A person's height in cm? 0 < height < 250.

You see the problem? I don’t really want to be combining all these hash codes in the very low integer ranges using the XOR approach because it means my final "cumulative" hash code will be stuck within this range as well.
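A tiny sketch of the point (the values are invented for illustration):

using System;

class Program
{
    static void Main()
    {
        int age = 34;
        int heightInCm = 178;

        // Int32.GetHashCode() just returns the value itself...
        Console.WriteLine(age.GetHashCode());          // 34
        Console.WriteLine(heightInCm.GetHashCode());   // 178

        // ...so XORing them together still leaves us stuck in the low integer range.
        Console.WriteLine(age.GetHashCode() ^ heightInCm.GetHashCode());   // 144
    }
}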

 

Examining the flaws in more detail

I don’t pretend to be an expert on hashing algorithms, but I can see that we have issues here and so the best way of discovering more (for me, at least) is to work through a broken example and see what I can do to fix it.

I very quickly settled on the idea of trying to write a replacement String.GetHashCode() implementation. I would get a dictionary of words (all in lower case), and try to write a hash code implementation based on the ASCII code of each letter in the word. Given that the hashcode of each ASCII code would be the ASCII code itself (since the hashcode of an integer is the integer itself), we would have a poor distribution of individual character hash codes, all in the range 97 (a) – 122 (z). This approach would also highlight problems such as commutativity (since many words have the same letters, just in a different order).

My XOR implementation for this approach would look like this:

public int GetHashCode(string word)
{
    char[] chars = word.ToCharArray();
    int hash = 0;
    foreach (char c in chars)
    {
        int ascii = (int)c;
        // int.GetHashCode() just returns the value, so this XORs in the raw ASCII code.
        hash ^= ascii.GetHashCode();
    }
    return hash;
}

I sourced my dictionary from here, and de-duplicated it (and converted the words to lower case) to produce this list of words.

As expected, this algorithm (referred to as AsciiChar_XOR in the diagram) has a spectacularly bad value distribution, as shown in this histogram:

Histogram for ASCIIXOR.

[Note that the width of each bucket here is 67108864, being 2^32 / 64]

Surprise, surprise – all 2898 words fall into the same histogram bucket! In fact they all fall into the far narrower range of 0-127 – the ASCII code range.

All of a sudden the simple XOR approach to hashing looks like a very poor performer indeed.

 

Examining better algorithms

Since we’ve already discussed the weaknesses of XOR, we should have a fairly good idea of where to focus our attention to create better algorithms. Firstly we should be choosing an algorithm that is non-commutative, and secondly we should be choosing an algorithm that uses the full range of integer-space, rather than limiting a hashcode to the range of its constituent members.

A bit of a search around reveals two approaches that are often discussed. The first one, given to me as a boiler-plate example by a Java developer friend, looks like this:

public override int GetHashCode()
{
    int hash = 23;
    hash = hash * 37 + (field1 == null ? 0 : field1.GetHashCode());
    hash = hash * 37 + (field2 == null ? 0 : field2.GetHashCode());
    hash = hash * 37 + (field3 == null ? 0 : field3.GetHashCode());
    return hash;
} 

[I shall refer to this type of algorithm as JavaStyleAddition as it seems to be a very common implementation in the Java world]

The second common pattern looks something like this:

public override int GetHashCode()
{
    int hash = 23;
    hash = (hash << 5) + (field1 == null ? 0 : field1.GetHashCode());
    hash = (hash << 5) + (field2 == null ? 0 : field2.GetHashCode());
    hash = (hash << 5) + (field3 == null ? 0 : field3.GetHashCode());
    return hash;
} 

[I shall refer to this type of algorithm as ShiftAdd]

Both of these have some similar characteristics. By taking the cumulative hash so far, applying an operation (multiply for JavaStyleAddition and left-shift-5 for ShiftAdd) and then adding the new field’s hash code, they both avoid the commutativity issue since

(x * n) + y is not generally equal to (y * n) + x.

They also both increase the range of the cumulative hash code above that of the constituent hash codes due to the effect of multiplying the cumulative hash code each time (remember that a single left shift is a multiplication by two).
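As a quick sanity check on both points, here is a small sketch using the JavaStyleAddition constants and two made-up field hash codes of 10 and 20:

using System;

class Program
{
    // JavaStyleAddition, reduced to two fields with made-up hash codes.
    static int Combine(int h1, int h2)
    {
        int hash = 23;
        hash = hash * 37 + h1;
        hash = hash * 37 + h2;
        return hash;
    }

    static void Main()
    {
        // Same two inputs in a different order give different results,
        // and both are already well outside the 0-20 range of the inputs.
        Console.WriteLine(Combine(10, 20));   // (23*37 + 10)*37 + 20 = 31877
        Console.WriteLine(Combine(20, 10));   // (23*37 + 20)*37 + 10 = 32237
    }
}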

You will also notice that both approaches use a prime number (23 in the examples shown) as the starting value for the hash, and JavaStyleAddition uses another prime (37) as the multiplier. My guess is that this, statistically, makes collisions less likely as you multiply up your hash code because if one side of the multiplication has no factors (other than 1 and itself), then you are lowering the statistical average number of factors of the result. Of course, I may be wrong about that :-s

A variant of ShiftAdd that I have seen during my Google journeys is one in which the hash codes are shifted and then XOR'd (rather than added). I shall refer to this as ShiftXOR.
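I don't have a canonical source to hand for it, but based on the ShiftAdd example above, ShiftXOR would look something like this (field1 to field3 are the same placeholder fields as in the earlier snippets):

public override int GetHashCode()
{
    int hash = 23;
    hash = (hash << 5) ^ (field1 == null ? 0 : field1.GetHashCode());
    hash = (hash << 5) ^ (field2 == null ? 0 : field2.GetHashCode());
    hash = (hash << 5) ^ (field3 == null ? 0 : field3.GetHashCode());
    return hash;
}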

Histogram for SX_JSA_SA.

This certainly looks better than the histogram for AsciiChar_XOR 😀

However, all three algorithms still cluster around the centre of the number range, and all exhibit other major spikes in distribution.

What I could have done at this point is break into a major mathematical and statistical analysis of these three hashing algorithms, but I decided against it for two key reasons. Firstly – I would have got bored, and secondly – I wouldn’t have had the first idea where to start!

Nope – I felt it better to fall back on my hacker instincts and munge various forms of the above algorithms to see if I could produce a better algorithm for my particular use case.

I quickly came up with two further algorithms, both combining the left-shift approach with the prime-add approach of the above algorithms. The only difference between them is that one adds the hash codes each time, while the other XORs them:

 

public override int GetHashCode()
{
    int hash = 23;
    hash = ((hash << 5) * 37 ) + (field1 == null ? 0 : field1.GetHashCode());
    hash = ((hash << 5) * 37 ) + (field2 == null ? 0 : field2.GetHashCode());
    hash = ((hash << 5) * 37 ) + (field3 == null ? 0 : field3.GetHashCode());
    return hash;
} 

[ShiftPrimeAdd]

and

 

public override int GetHashCode()
{
    int hash = 23;
    hash = ((hash << 5) * 37) ^ (field1 == null ? 0 : field1.GetHashCode());
    hash = ((hash << 5) * 37) ^ (field2 == null ? 0 : field2.GetHashCode());
    hash = ((hash << 5) * 37) ^ (field3 == null ? 0 : field3.GetHashCode());
    return hash;
} 

[ShiftPrimeXOR]

For good measure, I threw these into the mix alongside the standard BCL implementation of System.String.GetHashCode() [labelled as StringGetHashCode in the diagram] to see how they would fare:

Histogram for SPX_SGHC_SPA.

[Note that the y-axis for this histogram is half that of the previous diagram]

Now – that is MUCH more like it. We have a well balanced hash code distribution across the entire range of integer space. There are a few minor spikes, but all three seem to compare favourably with each other. The approach of including the prime and ‘multiplying’ up each time really does seem to do the trick.

Which would I choose out of ShiftPrimeXOR and ShiftPrimeAdd? Not sure – I'd have to benchmark them first and see which was fastest!

 

Conclusion

In summary, just XORing fields together may well produce an awful hash code distribution, unless the constituent field hash codes are themselves well balanced.

However, during the course of writing this article, I have realised that there are other relatively simple implementations that provide a good hash code distribution (for this example at least). More than that, I have reinforced my belief that these things are best checked out if you have any doubts. It doesn’t take long to put together a test harness and profile your algorithms with a sample of your data.

One thing I have omitted from my investigation is any discussion of the speed of the algorithms. It would be worth benchmarking each one, because if a class is being used in a hashtable, its .GetHashCode() method is called for each .Add, .Remove and .Contains call. However, the following thought does occur to me: with these sorts of repetitive mathematical operations, the relative speed of XOR vs. shift vs. multiply (etc.) may well depend on your CPU architecture.
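If I were to benchmark them, the harness would be a rough sketch along these lines ("words.txt" and the method name are just placeholders; this times the per-character ShiftPrimeXOR variant over the same word list):

using System;
using System.Diagnostics;
using System.IO;

class Program
{
    // The ShiftPrimeXOR algorithm from above, applied per character as in the word example.
    static int ShiftPrimeXor(string word)
    {
        int hash = 23;
        foreach (char c in word)
        {
            hash = ((hash << 5) * 37) ^ (int)c;
        }
        return hash;
    }

    static void Main()
    {
        // "words.txt" is a stand-in for the de-duplicated word list used earlier.
        string[] words = File.ReadAllLines("words.txt");

        int sink = 0;   // stops the JIT discarding the calls
        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < 1000; i++)
        {
            foreach (string word in words)
            {
                sink ^= ShiftPrimeXor(word);
            }
        }
        sw.Stop();

        Console.WriteLine("ShiftPrimeXOR: {0} ms (sink={1})", sw.ElapsedMilliseconds, sink);
    }
}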

On reflection, there is a lot more to hashing than the small amount I know and I’m sure many mathematical research papers have been written on the subject.

In the future, my default choice will probably lean towards ShiftPrimeXOR or ShiftPrimeAdd as a starting point. It would be a waste of time to spend days up front trying to work out the perfect hashing algorithm. My approach would be to choose one, use it, and keep an eye on its performance. If it proves too problematic then consider optimising it; otherwise leave it alone.

Right – enough already about hashing algorithms!

A Visual Studio 2008 project containing the console application I used to generate these results can be found here.

Implementing a simple hashing algorithm

by Rob Levine on 14-Mar-2008

In my last blog article I outlined a gotcha whereby a developer overrides .Equals() without providing a similarly meaningful override to .GetHashCode(). I gave a description, and illustration, of why it is so important to ensure that the same fields are considered for the two methods.

In this article I discuss a simple strategy for providing a .GetHashCode() implementation and compare it to some flawed equivalents. As with the previous blog article, my primary reason for writing about this is that I’ve seen some rather poor implementations of this, but it isn’t really complicated or difficult to come up with a reasonable solution.

Referring back to the same three requirements discussed previously in the MSDN object.GetHashCode documentation, the point that is most pertinent to this article is point three (since we’ve already discussed points one and two):

A hash function must have the following properties:

  • If two objects compare as equal, the GetHashCode method for each object must return the same value. However, if two objects do not compare as equal, the GetHashCode methods for the two objects do not have to return different values.

  • The GetHashCode method for an object must consistently return the same hash code as long as there is no modification to the object state that determines the return value of the object’s Equals method. Note that this is true only for the current execution of an application, and that a different hash code can be returned if the application is run again.

  • For the best performance, a hash function must generate a random distribution for all input.

Points one and two above are absolute rules regarding the behaviour of .GetHashCode(). If you break either of these, then consumers of your hashcode (e.g. a hashtable) will not function correctly. As discussed in the previous article, a broken implementation may cause a hashtable to forever lose instances of your key placed within it and then report that it does not contain the key when it does. However, as long as points one and two are obeyed, your hash code (and hence hashtables) will work.

Point three is a recommendation. If you fail to adhere to this point, a consumer of your hash code will still function, it just won’t function very well.

 

A functional (but very bad) hashing algorithm

Consider the following implementation:

public override int GetHashCode()
{
    return 1;
}

This implementation does behave correctly with regard to the first two points above. If two objects are equal, then the method does return the same value; it just so happens that it also returns this value if they are not equal. That is fine and allowable – we are implementing a hash code, not a unique identifier. Point one above is satisfied. Additionally, our object will consistently return the same hash code (whether or not state has been modified). Point two above is satisfied.

It fails on point three though. Far from generating a random distribution for all input, it always returns the same number. Using this implementation will mean that all instances in a hashtable end up in the same bucket. This, consequently, means that the hashtable will have to do a .Equals() check on the full set of data to find a match. In other words, we have just reduced our hashtable to nothing more than a simple list (e.g. an ArrayList). No better (in fact probably marginally worse) than doing a foreach loop over the data set and manually looking for a match!
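A quick illustration of the "functional but very bad" behaviour, where lookups still return the right answers but degenerate to a linear Equals() scan (the AlwaysOne class is purely for demonstration):

using System;
using System.Collections.Generic;

class AlwaysOne
{
    public string Value;

    public override bool Equals(object obj)
    {
        AlwaysOne other = obj as AlwaysOne;
        return other != null && other.Value == this.Value;
    }

    // Legal, but it forces every key into the same bucket.
    public override int GetHashCode()
    {
        return 1;
    }
}

class Program
{
    static void Main()
    {
        Dictionary<AlwaysOne, string> map = new Dictionary<AlwaysOne, string>();
        map.Add(new AlwaysOne { Value = "a" }, "first");
        map.Add(new AlwaysOne { Value = "b" }, "second");

        // Correct answer, but found by an Equals() scan of one big bucket.
        Console.WriteLine(map[new AlwaysOne { Value = "b" }]);   // prints "second"
    }
}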

 

Other approaches to hashing

In order to examine possible hashing algorithms more closely, I decided to take a real world set of data, and use this as a basis of further investigation. The data set that sprang to mind was my music library, probably because I was staring at Winamp while trying to think of a suitable data source!

My music collection consists of a number of music tracks; for each of these there are several pieces of metadata. I exported my music library as iTunes xml and created the following interface to represent a single music track:

public interface IMusicTrackFile
{
    string TrackName { get; set; }
    string AlbumName { get; set; }
    string Artist { get; set; }
    string Format { get; set; }
    int? Year { get; set; }
    int? BitRate { get; set; }
    int? Size { get; set; }
    int? Time { get; set; }
    int? PlayCount { get; set; }
}

The aim of this interface is to represent track information that can then be stored as the key within a hashtable. Note that I am writing the word “hashtable” in lower case, because I am thinking of the general concept of a hashtable collection; in the .Net framework this includes System.Collections.Hashtable, as well as the generic System.Collections.Generic.Dictionary<> collection, and others.

Given that I will consider two tracks to be equal only if all the properties on the music track instance are the same, then ideally I want my hash code to be based on all these properties. The obvious approach here is to use boolean maths to combine the hash codes from the constituent properties to give one overall hash code.

Three immediate choices spring to mind to combine my constituent hash codes into one overall hash code: AND, OR or XOR. It doesn’t take a rocket scientist to realise that the only choice of these three worth considering seriously is XOR.

Why? Because ANDing many integers together will converge on all bits being zero; the number zero itself:

Hashcode graph for AND.

Every single track here gave a hash-code of zero – that is a 100% hash code collision rate. No better, in this particular case, than our “return 1” implementation above.

ORing many integers together will converge on all bits being set; the number -1 in the world of two's-complement integer representation:

Hashcode graph for OR.

[Note: The Y-Axis range is 0 to -25 million here]

There is some distribution of values here, due to "flutter" in two or three bits; overall, though, there are only 311 (out of 6142) unique values, all in the negative number range. This is better than the AND scenario, but still hardly a good spread of values.

Only XOR gives anything like a decent spread of values:

Hashcode graph for XOR.

[Note: The Y-Axis range here is +25 billion to -25 billion; over 2000 times larger than the OR case]

This is much more like it. 6133 unique values (out of 6142). Note that these numbers are also spread throughout the entire range of a signed integer.
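For reference, an "XOR everything" implementation on a class implementing IMusicTrackFile would look something like this (a sketch of the approach rather than the exact code I used; note that Nullable<int>.GetHashCode() returns 0 for null values):

public override int GetHashCode()
{
    // XOR together the hash codes of every property that takes part in the equality check.
    int hash = 0;
    hash ^= (TrackName == null ? 0 : TrackName.GetHashCode());
    hash ^= (AlbumName == null ? 0 : AlbumName.GetHashCode());
    hash ^= (Artist == null ? 0 : Artist.GetHashCode());
    hash ^= (Format == null ? 0 : Format.GetHashCode());
    hash ^= Year.GetHashCode();
    hash ^= BitRate.GetHashCode();
    hash ^= Size.GetHashCode();
    hash ^= Time.GetHashCode();
    hash ^= PlayCount.GetHashCode();
    return hash;
}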

 

Summary

In the example above, XORing the hash-codes of all constituent fields that form part of an equality check produced a functional and well balanced hash-code. This may not always be the case, and I am not advocating using an “XOR everything blindly” approach, but it often does produce a very reasonable hash. It isn’t particularly difficult to output and plot hash code distributions for a sample range of your data, and see whether you have a reasonable algorithm.

Bear in mind, as well, that even a hashing algorithm that is halfway good is probably good enough to start off with. If your hashtables turn out not to be performing well enough, revisit the algorithm (rather than spend too much time indulging in the root of all evil)!

In case anyone is interested, my sample music library can be found here, and the Visual Studio 2005 solution I used during this analysis is here.

A cache key with a bad GetHashCode() is another name for a memory leak

by Rob Levine on 7-Mar-2008

[With apologies to Raymond Chen for the title]

.GetHashCode() – the method exists on every single object you create; and yet I’ve come to the conclusion that a lot of developers neither know much about it, nor care about it. It is almost as though it is considered of no consequence.

Hash codes are not really very complicated in principle, but far from being of no consequence, they are critical whenever you are designing a class that may be used as a key in a collection (like a hashtable or generic dictionary). Failure to implement this method correctly can cause havoc.

The tale of the bad hash code

Some time ago, at a previous gig, we were developing a web application and we decided that we would provide a crude cache for small amounts of data that were used repeatedly but never changed (by "never" I mean that the data changes so rarely that a recycle of the web app would be permissible; e.g. a list of U.S. counties). The aim here was to minimise the number of database calls for these small, frequently used and immutable sets of data. For reasons lost in the mists of time, this cache was provided via a static Hashtable (rather than, for instance, the System.Web.Caching.Cache).

We were building on top of some building blocks "donated" to us by another project, one of which was a class called CacheKey.

The idea was simple enough; each collection in the cache would have a name (e.g. "USCounties"), and zero or more optional parameter fields (e.g. if you were caching U.S. counties in Texas only, you might have the name "USCounties" and a single field with the value "TX").

The class looked something like this:

public class CacheKey
{
   public string Name;
   public object[] Fields;
}

Whoever wrote this class had overridden the .Equals() method in order to implement a notion of key equivalence; two keys are equal if their Names are equal, and their collection of Fields also have equal contents (ignoring null checking on Fields for brevity):

public override bool Equals(object obj)
{
    CacheKey supplied = obj as CacheKey;  

    if (supplied == null)
        return false;  

    if (this.Name != supplied.Name)
        return false;  

    if (this.Fields.Length != supplied.Fields.Length)
        return false;  

    for (int i = 0; i < this.Fields.Length; i++)
    {
        if (this.Fields[i] != supplied.Fields[i])
        {
            return false;
        }
    }
    return true;
}

However, they had overridden .GetHashCode() with the following implementation:

public override int GetHashCode()
{
    return base.GetHashCode();
}

It may be that no-one intentionally coded this method, but that they responded to the compiler warning "'MyProj.CacheKey' overrides Object.Equals(object o) but does not override Object.GetHashCode()" by letting the IDE auto-generate the method (which gives exactly this implementation).

The problem is that, whether auto-generated or hand crafted, this is probably the worst implementation of GetHashCode possible. (Not that I am blaming the IDE; I’m sure the motivation for this auto-complete is to allow your code to compile, not to provide a meaningful implementation).

 

Why is this implementation so bad?

The golden rule of a hash code is that your GetHashCode implementation must match your Equals implementation. What does this mean? It means exactly the same fields that contribute to the equality check must also contribute to the hash code.

This is covered by the first two in the list of three requirements stated in the MSDN object.GetHashCode documentation:

A hash function must have the following properties:

  • If two objects compare as equal, the GetHashCode method for each object must return the same value. However, if two objects do not compare as equal, the GetHashCode methods for the two objects do not have to return different values.

  • The GetHashCode method for an object must consistently return the same hash code as long as there is no modification to the object state that determines the return value of the object’s Equals method. Note that this is true only for the current execution of an application, and that a different hash code can be returned if the application is run again.

  • For the best performance, a hash function must generate a random distribution for all input.

There is no need to regurgitate the many descriptions of how hashtables actually work, but the very lightweight overview is this:

  1. Keys in the hash table are stored in individual buckets to reduce the overall search space when looking for an item in the hash table.
  2. The result of .GetHashCode() determines which bucket a key is stored in (i.e. when you Add an item to the hashtable, it calls .GetHashCode() on the key and uses the resulting value to determine which bucket to place the key in).
  3. When a “contains” check [e.g. hashtable.Contains(newKey)] is performed, the key being checked (newKey) has its .GetHashCode() method called. Based on this result, the “contains” method determines which bucket any possible matches may exist in, and performs an equality check on every item in this bucket until a match is found.

Point three means it is critical that two “equal” keys return the same hash code, otherwise the whole basis on which a hash table works comes crashing down. If we get a different value every time we call .GetHashCode() on different instances of equal keys, then our hashtable will forever be looking in the wrong buckets to see if we have a match; inevitably it will keep finding that we don’t have a match.
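To make the mechanics concrete, here is a highly simplified sketch of what a "contains" check does internally. This is illustrative pseudologic only (not the real BCL code), and it assumes buckets is an array of per-bucket key lists:

// Illustrative only; real hashtables are more sophisticated than this.
bool Contains(object newKey)
{
    // 1. Ask the key for its hash code and map it onto a bucket.
    int bucketIndex = (newKey.GetHashCode() & 0x7fffffff) % buckets.Length;

    // 2. Only the keys in that one bucket are ever compared for equality.
    foreach (object existingKey in buckets[bucketIndex])
    {
        if (existingKey.Equals(newKey))
        {
            return true;
        }
    }
    return false;
}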

To put it another way, imagine trying to find your name in the phone book if you couldn’t rely on your brain’s ability to determine the first letter of your surname.

If every time I called brain.GetFirstLetterOfSurname() I got a random letter (instead of “L” for Levine), then most of the time I’d be trying to find the name “Levine” in the section for another letter.

 

What is the effect of this broken implementation?

So, back to the original issue; what is the effect of calling base.GetHashCode()? Our class inherits directly from System.Object, so we are calling System.Object's implementation of .GetHashCode(). I don't know the detail of its implementation, but calling it 5000 times, each time on a new instance of a CacheKey with a name of "USCounties", produced this:

Hashcode graph for broken implementation

which is bad; a different value for each call. What we wanted to see was the same hash code being returned for every instance (since they are all equal). In other words we wanted this (a straight line):

Hashcode graph for working implementation

What is the net effect of this in our cache? When I first do a call to the cache to get my list of counties, the cache informs me the data isn’t already cached (i.e. a cache miss) and so a call to the database is made, the data is retrieved, and this new collection is added to the cache. This is correct behaviour.

But on the second call to get my list of counties, the cache still informs me the data isn’t already cached and so we go to the database again, retrieve the data and add it to the cache. And again on the third call, and so on.

This gives us two issues. Firstly, my cache is broken; I am making a database call every time I request the data, which defeats the purpose of the cache. But much, much worse than that – every time, I am also caching the data and then losing track of it. So after 5000 requests for the data, I have it cached 5000 times*:

I have one great big memory leak.
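A stripped-down reproduction of the effect, using the broken CacheKey above as the key into a plain Hashtable cache (LoadCountiesFromDatabase() is a hypothetical stand-in for the real data-access call):

using System;
using System.Collections;

class Program
{
    static void Main()
    {
        Hashtable cache = new Hashtable();

        for (int i = 0; i < 5000; i++)
        {
            CacheKey key = new CacheKey();
            key.Name = "USCounties";
            key.Fields = new object[0];

            // base.GetHashCode() differs for every instance, so this is (almost) always a miss...
            if (!cache.ContainsKey(key))
            {
                // ...and we "cache" yet another copy of the same data.
                cache[key] = LoadCountiesFromDatabase();
            }
        }

        // Roughly 5000 entries for what should have been a single cached collection.
        Console.WriteLine(cache.Count);
    }

    static object LoadCountiesFromDatabase()
    {
        return new string[] { "Travis", "Harris" };   // stand-in data
    }
}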

The moral of the story here is don’t implement your own cache-key type classes unless you actually implement .Equals() and .GetHashCode() together in a compatible way. In fact, ninety-nine times out of a hundred you probably won’t need to do this; using a string or integer as a cache-key has no such issues – it only becomes an issue when you implement your own.
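For completeness, a GetHashCode() that does honour the golden rule for the CacheKey above might look something like this. It is a minimal sketch using a plain XOR combination; the distribution could certainly be improved, but equal keys now always produce equal hash codes:

public override int GetHashCode()
{
    // Consider exactly the same state as Equals(): the Name and the contents of Fields.
    int hash = (Name == null ? 0 : Name.GetHashCode());

    if (Fields != null)
    {
        foreach (object field in Fields)
        {
            hash ^= (field == null ? 0 : field.GetHashCode());
        }
    }

    return hash;
}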

Implementing a simple but reasonable hashing algorithm isn't hard, and I'll look at this in my next post.

* Due to the statistical likelihood of the occasional “random” hash code collision I may have the odd cache hit, so maybe I am only wasting 4998 instances instead of 4999!

Making the .Net XmlTextReader accept colons in element names.

by Rob Levine on 6-Mar-2008

This started as an addendum to What is a valid XML element name?, but then I discovered something that made it worth breaking out into a separate post!

Ayende added a comment to his blog (under my comment) to say that he tried the ‘bad’ xml in question on three parsers and none of them could handle it. Naturally I thought I’d have a quick go too.

First I tried the .Net System.Xml.XmlDocument and System.Xml.XmlTextReader classes, and neither of these would handle the "double-colon" element names. Next I tried two commercial XML editors, XmlSpy and StylusStudio, both of which were happy to let it pass their well-formed check without complaint (and they do both start complaining if you add other non-allowed characters). I don't know what parsers either of these products are built on, but on the surface they seemed to be more compliant than .Net.

Or so it appeared. One thing I noticed was that both the System.Xml.XmlDocument and System.Xml.XmlTextReader classes barf with the same exception, raised from within System.Xml.XmlTextReaderImpl.ParseElement().

A quick look at this method using Lutz Roeder’s excellent Reflector revealed something new and interesting. This class (System.Xml.XmlTextReaderImpl) has an internal boolean property, Namespaces, which changes the behaviour of this element parsing method to allow or disallow multiple colons. This makes sense when you think about it; if you don’t support namespaces then there is no issue with multiple colons. If you do support namespaces then the colon is reserved to separate the namespace prefix from the element’s local name. It is this very point that the XML RFC refers to regarding colons, and which I quoted in the previous article.

A closer look still revealed that a Namespaces property is exposed on the System.Xml.XmlTextReader class. And guess what? Setting this property to false allows the reader to start accepting “multi-colon” element names! Well – that is certainly a new one to me.
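A quick sketch of the experiment (the XML string here is just an illustrative stand-in for the Subversion output):

using System;
using System.IO;
using System.Xml;

class Program
{
    static void Main()
    {
        string xml = "<C:bugtraq:label>Issue number</C:bugtraq:label>";

        XmlTextReader reader = new XmlTextReader(new StringReader(xml));
        reader.Namespaces = false;   // without this line, Read() throws an XmlException

        while (reader.Read())
        {
            Console.WriteLine("{0}: {1}", reader.NodeType, reader.Name);
        }
    }
}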

However, I couldn’t find an equivalent way of changing the System.Xml.XmlDocument‘s behaviour to accept this type of xml. To be honest, I’m not too bothered, because I can’t imagine using this particular style of xml any time soon!

What is a valid XML element name?

by Rob Levine on 5-Mar-2008

Ayende Rahien has noted some peculiar looking XML that is output by Subversion.

Specifically, he takes issue with a start tag of

<C:bugtraq:label>

Well – I can honestly say that I've never seen XML like this before (and I've been using XML since the late Cretaceous Period) but I'm not so sure that it is actually wrong! Although I too balk at the sight of it, the XML RFC does seem to permit this as valid and parsable XML.

Taking a look at the section on start tags would appear to allow for this:

A start tag is defined by

STag ::= '<' Name (S Attribute)* S? '>'

where Name is defined as

Name ::= (Letter | '_' | ':') (NameChar)*

and NameChar is defined as

NameChar ::= Letter | Digit | '.' | '-' | '_' | ':' | CombiningChar | Extender

What this says to me is that this is an allowable start tag. In fact, from a syntactical standpoint, there seems to be no limit to the use of colons in element names.

However, the RFC does also have this to say about the use of colons:

“The Namespaces in XML Recommendation [XML Names] assigns a meaning to names containing colon characters. Therefore, authors should not use the colon in XML names except for namespace purposes, but XML processors must accept the colon as a name character.”

In other words – other parts of related XML specifications reserve the colon, but from a purely XML markup standpoint this would appear to be valid.

It may be a WTF, and it certainly isn't nice, but I think it is actually valid and is not "wrong, period" (as claimed), just "wrong, it makes me feel weird"!