Parallel::ForkManager is certainly a good tool for managing a bunch of processes all under the control of a single "master" process, which in your case would be the one that reads the 100MB file. However, you need to be careful.
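The basic shape of that pattern is only a few lines (a minimal sketch; the print is a stand-in for the real per-job work):

```perl
use strict;
use warnings;
use Parallel::ForkManager;

my $pm = Parallel::ForkManager->new(8);    # at most 8 children at once

for my $job (1 .. 20) {
    $pm->start and next;                   # parent gets the child's PID and loops on; child gets 0
    print "child $$ handling job $job\n";  # stand-in for the real work
    $pm->finish;                           # child exits here
}
$pm->wait_all_children;                    # parent reaps all remaining children
```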
Things to consider include:
- How many parallel clients can the database handle before it becomes a significant bottleneck?
- What is the overhead of forking? It is almost certainly too high to justify naively forking a new process for each line of the file; batch the lines into reasonably large chunks so each child does enough work to pay for its fork (see the sketch after this list).
- What do you need to do with the data retrieved from the DB? Parallel::ForkManager can return a data structure from each forked process, but it does so by serializing the structure to a temporary file on disk, which the parent then reads back. Will that turn into an I/O bottleneck?
- What is the overhead of connecting to the DB, and how can you reduce it? Each child should connect once and reuse that handle for its whole batch; note that a DBI handle cannot safely be shared across a fork, so every child needs its own connection.
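Putting those pieces together, here is a minimal sketch of one way to structure it, assuming a lookup-per-line workload. The filename, DSN, credentials, table, and query are all placeholders for whatever your real job uses:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;
use Parallel::ForkManager;

my $MAX_WORKERS = 8;      # tune to what the database can tolerate
my $BATCH_SIZE  = 1000;   # lines per child; amortizes fork and connect costs

my $pm = Parallel::ForkManager->new($MAX_WORKERS);

# Collect whatever each child passes to finish(). Parallel::ForkManager
# serializes the structure to a temporary file on disk and hands it back
# here, in the parent, keyed by the ident we gave to start().
my %results;
$pm->run_on_finish(sub {
    my ($pid, $exit_code, $ident, $signal, $core, $data) = @_;
    $results{$ident} = $data if defined $data;
});

open my $fh, '<', 'bigfile.txt' or die "bigfile.txt: $!";  # the 100MB input

my @batch;
my $batch_no = 0;
while (my $line = <$fh>) {
    chomp $line;
    push @batch, $line;
    if (@batch == $BATCH_SIZE) {
        process_batch($pm, $batch_no++, [@batch]);
        @batch = ();
    }
}
process_batch($pm, $batch_no++, [@batch]) if @batch;  # leftover partial batch
$pm->wait_all_children;

sub process_batch {
    my ($pm, $ident, $lines) = @_;
    $pm->start($ident) and return;   # parent returns to keep reading the file

    # Child: open our own handle -- a DBI connection must not be shared
    # across a fork. The DSN, credentials, and query are placeholders.
    my $dbh = DBI->connect('dbi:Pg:dbname=mydb', 'user', 'pass',
                           { RaiseError => 1, AutoCommit => 1 });
    my $sth = $dbh->prepare('SELECT value FROM lookup WHERE key = ?');

    my %summary;
    for my $line (@$lines) {
        $sth->execute($line);
        while (my ($value) = $sth->fetchrow_array) {
            $summary{$line} = $value;   # keep what we send back small
        }
    }
    $dbh->disconnect;

    $pm->finish(0, \%summary);   # serialized to disk for the parent to pick up
}
```

Tune $MAX_WORKERS against what the database will tolerate and $BATCH_SIZE so each fork and connect pays for a useful amount of work, and keep the structure passed to finish() small, since it makes a round trip through the disk.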