|Perl: the Markov chain saw|
All I have to offer is yet more questions.
My first guess, if no single function (aside from parse_file) is a gobbler of CPU, is that the algorithms making all those calls are doing more work than they need to. Perhaps the code is recalculating values over and over rather than caching them? Or perhaps some part of it is traversing a graph inefficiently.
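I don't know installman's internals, so this is just a sketch of the caching idea; the function name and workload below are made up for illustration:

```perl
use strict;
use warnings;

# Hypothetical example: cache results keyed by argument so a value
# is computed once rather than on every call.
my %cache;
sub slow_lookup {
    my ($n) = @_;
    return $cache{$n} //= do {
        my $total = 0;
        $total += $_ for 1 .. $n;   # stand-in for the real expensive work
        $total;
    };
}

print slow_lookup(1000), "\n";   # computed the hard way
print slow_lookup(1000), "\n";   # served straight from %cache
```

If the profile shows the same routine being called with the same arguments thousands of times, something along these lines (or the Memoize module) can collapse all those calls into one.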
I notice that one of the routines is called guesswork. Is that the only place where guessing happens, or is it a small piece of a larger heuristic algorithm? If the latter, what sort of heuristic are they using, and what is your best guess at its big-O? Could that be the culprit?
Also, what is the memory profile? Do you have any reason to believe that the CPU consumption is actually the result of page thrashing? I once had a program that took somewhere between 10 and 30 minutes to run. When I altered it to write data out to disk as soon as it was produced, rather than waiting until all the results were collected, the runtime dropped to 2-3 minutes.
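For what it's worth, the change I made looked roughly like this; the producer and file name here are stand-ins, but the shape is the point: stream each record out instead of collecting them all in memory first.

```perl
use strict;
use warnings;

# Sketch: write each result as soon as it is produced instead of
# accumulating everything and dumping it at the end.
my @pending = map { "result $_" } 1 .. 5;
sub next_result { shift @pending }     # pretend this is the real producer

open my $out, '>', 'results.txt' or die "open results.txt: $!";
while (defined(my $r = next_result())) {
    print {$out} $r, "\n";             # goes to disk incrementally
}
close $out or die "close: $!";
```

Keeping the working set small this way can make the pager stop fighting you, which shows up as a CPU-time win even though the real problem was memory.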
Could you provide a link to the source code? It might help us put the results you posted in context.
In reply to Re: More questions than observations on installman