Yeah, more or less.
“A million records” is a volume that is only “vaguely interesting” to SQLite: importing the data is a one-time cost of a few minutes (and might not require a program at all), plus a couple of minutes more to add some indexes. From that point on, you can use any tool or combination of tools with a DB interface (including Perl, of course ...) to get the job done, and now the query engine is the one doing all the heavy lifting. So long as the “transactions” caveat is carefully adhered to, especially when doing updates, it’s really quite a remarkable piece of software engineering. (It’s rare that a piece of software genuinely surprises me, let alone blows me away. Perl/CPAN did it. So did this.)

I suspect that most of the things the O.P. is right now “writing programs to do” can probably be reduced to a query (and perhaps a now-trivial program to digest the results). A further bonus is that you can put query results into a different table, which is of course a self-describing data structure. (A single SQLite file can contain any number of tables.)
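To make the workflow concrete, here is a minimal sketch of the whole cycle: bulk import inside one transaction (the “transactions” caveat above), index afterward, and dump query results into a new table in the same file. It's shown in Python's built-in `sqlite3` for portability, but Perl's DBI with DBD::SQLite does exactly the same thing; the table and column names are made up for illustration.

```python
import sqlite3

# Hypothetical schema; use a filename like "records.db" for a real file.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, name TEXT, score REAL)")

# Stand-in for a million-row import (kept small here).
rows = ((i, f"name{i}", i * 0.5) for i in range(100_000))

# One big transaction: without it, every INSERT becomes its own
# synced-to-disk transaction and a bulk load crawls.
with con:
    con.executemany("INSERT INTO records VALUES (?, ?, ?)", rows)

# Add indexes after the bulk load; building them once is cheaper
# than maintaining them during every insert.
con.execute("CREATE INDEX idx_records_score ON records(score)")

# Query results can land in a brand-new table in the same file,
# instead of in some ad-hoc in-memory data structure.
con.execute(
    "CREATE TABLE top_scores AS "
    "SELECT name, score FROM records WHERE score > 49990"
)

print(con.execute("SELECT COUNT(*) FROM top_scores").fetchone()[0])
```

From here, any SQLite-aware tool (the `sqlite3` shell, a Perl script, a spreadsheet import) can read `top_scores` directly, because the file itself describes its tables.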