in reply to Suggestions for simplifying script to parse csv data

As with most common tasks in Perl, someone has already written an excellent (and fast!) CSV parser and made it available on CPAN as Text::CSV_XS. Using that module, it's easy to build a generic solution that parses each row of your CSV file into a hash keyed by the column names on the first row:

#!/usr/bin/env perl
use strict;
use warnings;
use Text::CSV_XS;
use IO::File;

my $io = IO::File->new('/tmp/Allcontrol.csv', '<')   # open file
    or die("Cannot open data source file: $!");      # or die trying

my $csv = Text::CSV_XS->new();
my $head_row = $csv->getline($io);   # first line holds the column names

my @dataset;
while (my $row = $csv->getline($io)) {
    # pair each header name with the corresponding field in this row
    my %row_hash = map { $head_row->[$_] => $row->[$_] } (0 .. $#{$head_row});
    push @dataset, \%row_hash;
}

At this point, @dataset is an array of hash references, where each hash represents one row of the CSV file, keyed by column name. From that general solution, you should be able to derive your specific one. For example, you can filter or transform the data before pushing each row onto the set.
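A minimal sketch of that kind of per-row manipulation, using made-up header names and a made-up sample row (the columns 'name', 'status', and 'count' are assumptions for illustration, not columns from your actual file):

```perl
#!/usr/bin/env perl
use strict;
use warnings;

# Hypothetical header and row, standing in for what getline() would return.
my $head_row = [ 'name', 'status', 'count' ];
my $row      = [ 'pump1', 'active', '42' ];

# Build the row hash exactly as in the loop above.
my %row_hash = map { $head_row->[$_] => $row->[$_] } (0 .. $#{$head_row});

# Example manipulations: normalize case, coerce a string field to a number.
$row_hash{status} = uc $row_hash{status};
$row_hash{count} += 0;

# Example filter: only keep rows that aren't marked INACTIVE.
my @dataset;
push @dataset, \%row_hash unless $row_hash{status} eq 'INACTIVE';

print "$dataset[0]{name}: $dataset[0]{status} ($dataset[0]{count})\n";
# prints "pump1: ACTIVE (42)"
```

The same pattern works inside the while loop: massage %row_hash first, then decide whether to push it.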

Ramblings and references
The Code that can be seen is not the true Code
I haven't found a problem yet that can't be solved by a well-placed trebuchet