PerlMonks |
Re: Matching hash keys from different hashes and utilizing in new hash
by afoken (Chancellor) on Oct 21, 2017 at 21:17 UTC ( [id://1201814] )
How about reading the tables into a database and using SQL instead? Your files look close enough to CSV that you should use Text::CSV (ideally with its fast XS backend, Text::CSV_XS) to read them instead of parsing by hand. Add DBI and DBD::SQLite and you have a performant, serverless database. Part one of your program would read the CSV files and write them into the SQLite database; part two would just query the database. Or, even easier but slower, use DBI with DBD::CSV (which sits on top of Text::CSV) to make your CSV files appear directly as tables in a relational database.

Update: Why a database? Because it can easily handle input files significantly larger than your available RAM, whereas with pure hashes you are limited by available RAM. You don't have to use SQLite, but it is a good starting point for tests. If things grow bigger, I would recommend PostgreSQL; if you have a commercial RDBMS around (Oracle, MS SQL Server, ...), you may as well use that.

Alexander
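To make the two-part idea concrete, here is a minimal sketch. The table names (samples, scores), column names, and key values are all made up for illustration; I hold the "files" in in-memory scalars so the example is self-contained, but a real script would open the actual CSV files the same way.

```perl
use strict;
use warnings;
use Text::CSV_XS;
use DBI;

# Made-up stand-ins for the two input files. A real script would
# open the files on disk instead of these in-memory scalars.
my %files = (
    samples => "id,sample\nk1,foo\nk2,bar\n",
    scores  => "id,score\nk1,10\nk2,20\n",
);

# Part one: read each "file" with Text::CSV_XS and load it into SQLite.
my $dbh = DBI->connect( 'dbi:SQLite:dbname=:memory:', '', '',
    { RaiseError => 1, AutoCommit => 1 } );
my $csv = Text::CSV_XS->new( { binary => 1, auto_diag => 1 } );

for my $table ( sort keys %files ) {
    open my $fh, '<', \$files{$table} or die "Cannot open $table: $!";
    my $cols = $csv->getline($fh);    # header row names the columns
    $dbh->do( "CREATE TABLE $table (" . join( ', ', @$cols ) . ')' );
    my $ins = $dbh->prepare(
        "INSERT INTO $table VALUES (" . join( ',', ('?') x @$cols ) . ')' );
    while ( my $row = $csv->getline($fh) ) {
        $ins->execute(@$row);
    }
}

# Part two: a JOIN does the "matching keys from two hashes" work in SQL.
my $rows = $dbh->selectall_arrayref(q{
    SELECT samples.id, sample, score
      FROM samples JOIN scores ON scores.id = samples.id
     ORDER BY samples.id
});
print "@$_\n" for @$rows;    # prints "k1 foo 10" then "k2 bar 20"
```

With the DBD::CSV variant, part one disappears entirely: connect with `DBI->connect('dbi:CSV:', undef, undef, { f_dir => '.' })` and the CSV files in that directory become the tables you query.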
-- Today I will gladly share my knowledge and experience, for there are no sweeter words than "I told you so". ;-)
In Section: Seekers of Perl Wisdom