I wrote an internal web-based tool to track data entry statistics for a large scanning bureau - anything the OCR wasn't able to correctly identify had to be keyed manually.
A late request came in to add support for "benchmarks" - highlighting statistics in red or green to identify operators that seemed to have consistently high or low performance. We classified documents this way too, so we could track jobs that, regardless of operator, seemed to be above or below expected keying rates.
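The benchmark logic boils down to a simple threshold check around an expected rate. A minimal sketch - the function name, the ±25% band, and the colour labels are all illustrative assumptions, not the tool's actual code:

```python
# Hypothetical benchmark check: flag a keying rate as "green" (high),
# "red" (low), or None (within the normal band) relative to the
# expected rate for a job. The 25% band is an assumed value.
def classify(rate, benchmark, band=0.25):
    if rate >= benchmark * (1 + band):
        return "green"   # consistently above expected rate
    if rate <= benchmark * (1 - band):
        return "red"     # consistently below expected rate
    return None          # unremarkable - no highlight

# e.g. against a benchmark of 100 keystrokes/min:
print(classify(130, 100))  # green
print(classify(70, 100))   # red
```

The same check works for either axis: pass an operator's average rate to spot people, or a job's average rate across all operators to spot problem documents.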
Two side-effects of this:
- a fight between the union and HR over fears this would become a tool to sack low performers - it wasn't intended for that and isn't used that way; supervisors say that when someone isn't performing there are plenty of other indicators aside from keying rates. In fact the operators (who have zero knowledge of how the rates are collected or calculated) quickly worked out that keying stats referred to just that - no keying meant no stat. They would key as quickly as possible, then turn to answer a question or make a comment, key like the wind, then comment again. I was amazed at how quickly they adapted their work practices to maximise their reported stats.
- it highlighted almost immediately where a new job was going badly wrong, because its rate was down at the 10% level - a review of the scanning and the operators resulted in a new work instruction, and the job was back on track. The tool went from a simple performance monitor to an active feedback instrument overnight.