While driving into work I realized the problem with a pure database solution: backtracking. If I have to do much backtracking across several tables, disk access times are going to completely kill the performance. Imagine trying to issue a new SQL call every time I need to backtrack. Yuck. Of course, I can cache the SQL statements, but the only way to get around this problem (that occurs to me right now) is to preload the data. For the issues with that, see the root post of this thread :)
I think the database solution may be okay for small-to-medium-sized datasets, though. I'll have to play with it.
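To make the preloading idea concrete, here's a toy sketch. It's Python with an in-memory SQLite table standing in for the real setup (the thread is presumably about Perl/DBI, and the `rules` table with `lhs`/`rhs` columns is just an invented example), but the tradeoff is the same: either every backtrack costs a round trip to the database, or you pay the query cost once up front and backtrack against a hash in memory.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE rules (lhs TEXT, rhs TEXT)")
conn.executemany("INSERT INTO rules VALUES (?, ?)",
                 [("expr", "term"), ("term", "factor"), ("factor", "num")])

# Naive approach: one SQL call per lookup -- every backtrack is a round trip.
def lookup_db(lhs):
    cur = conn.execute("SELECT rhs FROM rules WHERE lhs = ?", (lhs,))
    return [row[0] for row in cur]

# Preloaded approach: query once, then backtrack against an in-memory dict.
cache = {}
for lhs, rhs in conn.execute("SELECT lhs, rhs FROM rules"):
    cache.setdefault(lhs, []).append(rhs)

def lookup_cached(lhs):
    return cache.get(lhs, [])
```

The catch, as noted in the root post, is that preloading trades memory for speed: the whole working set has to fit in RAM, which is exactly the problem the database was supposed to solve.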
Join the Perlmonks Setiathome Group or just click on the link and check out our stats.