in reply to Count the Duplicate Entries and make them uniq
A good rule of thumb is to always make the database do as much of the work as possible. For example, instead of reading data and then sorting it, add an 'ORDER BY' clause to your SELECT statement and read the already sorted data.
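For instance, a sort pushed into the SELECT looks like this. A minimal sketch only: the in-memory SQLite database and the `survey` table name are stand-ins for whatever database you actually use, not anything from the thread.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Stand-in database: an in-memory SQLite handle; swap in your own DSN.
my $dbh = DBI->connect( 'dbi:SQLite:dbname=:memory:', '', '',
                        { RaiseError => 1 } );
$dbh->do(q{CREATE TABLE survey (os TEXT, "release" TEXT)});
my $ins = $dbh->prepare('INSERT INTO survey VALUES (?, ?)');
$ins->execute(@$_)
    for ['Windows', 'XP PRO'], ['Ubuntu', 'Warty'], ['Fedora', 'Yarrow'];

# The ORDER BY does the sorting server-side: the rows arrive sorted,
# so there is nothing left to sort in Perl.
my $sth = $dbh->prepare(q{SELECT os, "release" FROM survey
                          ORDER BY os, "release"});
$sth->execute();
while ( my @row = $sth->fetchrow_array ) {
    print join(', ', @row), "\n";   # Fedora first, Windows last
}
```

(`"release"` is double-quoted because RELEASE is an SQL keyword in some databases.)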
In this case, you don't really need a 'database' at all. Reading the columns of data from a text file would look something like this:
use strict;
use warnings;

my %data;
my $headers = <DATA>;              # skip the header line
while ( my $line = <DATA> ) {
    chomp $line;
    my ($os, $release, $like) = split /,/, $line;
    ++$data{$os}{$release}{ uc $like };    # normalise Yes/YES/yes
}

for my $os ( sort keys %data ) {
    for my $release ( sort keys %{ $data{$os} } ) {
        my $yes = $data{$os}{$release}{'YES'} || '0';
        my $no  = $data{$os}{$release}{'NO'}  || '0';
        print "$os, $release, $yes, $no\n";
    }
}

__DATA__
OS,RELEASE,LIKE
Ubuntu,Warty,No
Ubuntu,Hoary,No
Ubuntu,Breezy,Yes
Ubuntu,Breezy,Yes
Fedora,Yarrow,Yes
Fedora,Stentz,No
Fedora,Yarrow,Yes
Fedora,Yarrow,Yes
Windows,XP PRO,Yes
Windows,XP PRO,Yes
Windows,XP Home,No
Windows,XP PRO,Yes
If you really want to use a database, then you could pull the data out like this:
my $sth = $dbh->prepare(qq{SELECT os, "release", "like" FROM $table});
$sth->execute();
while ( my ($os, $release, $like) = $sth->fetchrow_array ) {
    ++$data{$os}{$release}{ uc $like };
}

Note that LIKE (and, in some databases, RELEASE) is an SQL reserved word, so those column names need quoting, and fetchrow_array is the documented name for the old fetchrow alias.
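To follow the "make the database do the work" advice all the way, the counting itself can also be pushed into SQL with GROUP BY. Again a sketch only: the in-memory SQLite database and the `survey` table name are stand-ins, not anything from the thread.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Stand-in database; "release" and "like" are quoted because both are
# SQL keywords.
my $dbh = DBI->connect( 'dbi:SQLite:dbname=:memory:', '', '',
                        { RaiseError => 1 } );
$dbh->do(q{CREATE TABLE survey (os TEXT, "release" TEXT, "like" TEXT)});
my $ins = $dbh->prepare('INSERT INTO survey VALUES (?, ?, ?)');
$ins->execute(@$_)
    for ['Fedora', 'Yarrow', 'Yes'],
        ['Fedora', 'Yarrow', 'Yes'],
        ['Fedora', 'Stentz', 'No'];

# GROUP BY collapses the duplicates and SUM() tallies the yes/no
# votes, so no Perl hash is needed at all.
my $sth = $dbh->prepare(q{
    SELECT os, "release",
           SUM("like" = 'Yes') AS yes,
           SUM("like" = 'No')  AS no
    FROM survey
    GROUP BY os, "release"
    ORDER BY os, "release"
});
$sth->execute();
while ( my ($os, $release, $yes, $no) = $sth->fetchrow_array ) {
    print "$os, $release, $yes, $no\n";   # e.g. "Fedora, Yarrow, 2, 0"
}
```

The `SUM("like" = 'Yes')` trick works because the comparison evaluates to 1 or 0 per row; databases without boolean arithmetic would need `SUM(CASE WHEN ...)` instead.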
Re^2: Count the Duplicate Entries and make them uniq
by slayedbylucifer (Scribe) on Aug 29, 2012 at 15:19 UTC
Re^2: Count the Duplicate Entries and make them uniq
by slayedbylucifer (Scribe) on Aug 30, 2012 at 04:25 UTC
In Section
Seekers of Perl Wisdom