Firstly, welcome back. I think. It's nice to see you posting, even if the jcwren-bot isn't hovering about in the CB.
If I remember correctly, this came up as a way to defeat votebots, and a lot of people liked it and cited examples (a long-dead news-archiving website used a similar scheme), but it's clear, especially now that you've implemented the idea, that simple fixed-width fonts are too easy to break for use in this type of verification.
My suggestions were:
- to mix up fonts of varying proportions and styles -- this removes the attacker's ability to chop the image into segments of a fixed dimension (9 x 17 in this case) for easier processing. It may also allow characters to overlap each other's boundaries (is this called kerning? I'm not down with fonts like that).
- to introduce noise into the image, ruining an OCR engine's ability to detect the outlines of the characters. Minimal noise might be foiled by applying some sort of smoothing algorithm over the image, though, while in large amounts the noise may overtake the signal.
- to provide contextual data about an image, like "how many blocks in this image are hollow?" or "how many stars are pointing up?" and similar challenges.
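To illustrate the noise point above, here's a toy sketch (pure Python, with a made-up single-glyph "image" -- none of this is from jcwren's actual code): sparse salt-and-pepper noise can largely be undone by a 3x3 median filter, which is exactly the kind of smoothing an OCR attacker would try, while heavy noise survives the filter and keeps corrupting the glyph.

```python
import random

def add_noise(img, p, rng):
    """Flip each 0/1 pixel with probability p (salt-and-pepper noise)."""
    return [[1 - px if rng.random() < p else px for px in row] for row in img]

def median3(img):
    """3x3 median filter with edge clamping -- the attacker's smoothing step."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = sorted(vals)[4]  # middle of 9 values
    return out

def errors(a, b):
    """Count pixels where the two images disagree."""
    return sum(pa != pb for ra, rb in zip(a, b) for pa, pb in zip(ra, rb))

rng = random.Random(0)
# A crude 9 x 17 glyph: a vertical bar, standing in for a character bitmap.
img = [[1 if 3 <= x <= 5 else 0 for x in range(9)] for _ in range(17)]

light = add_noise(img, 0.05, rng)   # minimal noise
heavy = add_noise(img, 0.40, rng)   # noise overtaking the signal

print("light, after smoothing:", errors(median3(light), img), "errors")
print("heavy, after smoothing:", errors(median3(heavy), img), "errors")
```

The clean glyph is essentially a fixed point of the median filter, so isolated flipped pixels get wiped out -- which is why light noise alone isn't much protection. But at 40% flip probability the filtered result still differs badly from the original, at which point a human reader starts struggling too.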
Of course, any type of image recognition should take into account potential user handicaps -- a blind person could never register his favorite ice cream, someone who's color-blind may be foiled if the challenge relies on sorting things by color, and so on.
Off topic -- I think this makes you a terrorist in the U.S. now.
Update: as for laziness, jcwren does note in his comments that he has ...
"A small 'C' program then read the .BMP files, and built Perl code for the characters."
So, no foul there :-)