For those of you who missed it, Engine Yard sponsored a contest that involved cracking a SHA1 hash (or, more precisely, finding the closest bit-match to one). For the price of an iPhone or two, plus credits on their own cluster, they captured a small army of developers and an insane amount of mind share.
One of the big surprises was the emergence of the GPU as a huge factor. I've been excited about the potential of various GPU wrappers for a while now, but it was cool seeing them in action. For certain operations, the video card processors are insanely fast. Nvidia's CUDA was first on the scene, but Intel, ATI, and even Apple have their own wrappers. Write your code in C, but run it on the multiple, pipelined, insanely fast GPUs. No specialized graphics experience needed. (Is this why current Apple laptops have two video cards in them?)
Here are a few links to stir your imagination.
Engine Yard's winners page
A very nice summary and write-up of one person's efforts. BTW, he tells you how he got 690,822,188 hashes/second on one machine's video cards.
Another competitor's Flickr pic of his results, separated by GPU and CPU.
A few teams even released a browser-based engine for distributed cracking. Very, very cool, but too slow to be really competitive.
I started on a small effort myself, but I wrote it in Ruby, and it generated and tested a million SHA1 hashes in the insanely slow time of 3.5 seconds. Not even worth reporting on. I did have quite a bit of fun thinking about the approach and coming up with some very pragmatic trade-offs. For example, if you know you can't come up with a solution that lets you cover the entire solution space, there's no point in spending hours on a perfect solution. (The contest only had 30 hours of run time.) I started to integrate the Polaris SSH C library, but decided to just code for fun instead. In hindsight, I should've gotten the C code embedded.
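For the curious, the core of such a Ruby attempt is only a few lines. This is a sketch, not my actual contest code: the target phrase, the word list, and the random-phrase strategy here are all made up for illustration. The scoring metric, though, is the contest's: Hamming distance in bits between a candidate's SHA1 and the target hash.

```ruby
require 'digest/sha1'

# Hypothetical target -- the real contest target hash is not reproduced here.
TARGET_BITS = Digest::SHA1.hexdigest('example target phrase').hex

# Contest-style score: how many bits differ between the candidate's
# SHA1 digest and the target digest (0 = exact match, 160 = worst case).
def distance(candidate)
  (Digest::SHA1.hexdigest(candidate).hex ^ TARGET_BITS).to_s(2).count('1')
end

# A tiny illustrative dictionary; the real entries were contest-supplied words.
WORDS = %w[engine yard sha1 gpu ruby crack fast slow]

# Brute-force a slice of the search space with random phrases,
# keeping the best (lowest-distance) candidate seen so far.
def search(iterations)
  best = [161, nil] # anything beats a distance of 161
  iterations.times do
    phrase = Array.new(6) { WORDS.sample }.join(' ')
    d = distance(phrase)
    best = [d, phrase] if d < best[0]
  end
  best
end

best_distance, best_phrase = search(10_000)
puts "best so far: #{best_distance} bits -- #{best_phrase}"
```

Wrap the `search` call in a timing block and you get exactly the kind of hashes-per-second number I was (unfavorably) comparing against the GPU results.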
Also, I used half a dozen virtual machine instances at Mosso/Rackspace. At 1.5 cents an hour, it's easier than dragging my old dual Opteron out of the closet!
So when's the last time your company put out a thousand dollars or so, and got a few thousand developers to think about something interesting?
And when's the last time ~you~ looked at something non-traditional, like GPUs, for your high performance computing?