http://www.perlmonks.org?node_id=990469


in reply to Count the Duplicate Entries and make them uniq

A good rule of thumb is to always make the database do as much of the work as possible. For example, instead of reading data and then sorting it, add an 'ORDER BY' clause to your SELECT statement and read the already sorted data.
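
For instance, instead of fetching rows and sorting them in Perl, the sort can ride along with the query itself. A hypothetical one-liner, assuming a table named survey with the same three columns (LIKE is a reserved word in SQL, so it is quoted, backtick style as in MySQL):

SELECT OS, RELEASE, `LIKE` FROM survey ORDER BY OS, RELEASE;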

In this case, you don't really need a 'database' at all. Reading the columns of data from a text file would look something like this:

use strict;
use warnings;

my %data;
my $headers = <DATA>;                      # discard the header row
while ( my $line = <DATA> ) {
    chomp $line;
    my ($os, $release, $like) = split /,/, $line;
    ++$data{$os}{$release}{ uc $like };    # tally Yes/No per OS and release
}

for my $os ( sort keys %data ) {
    for my $release ( sort keys %{ $data{$os} } ) {
        my $yes = $data{$os}{$release}{'YES'} || 0;
        my $no  = $data{$os}{$release}{'NO'}  || 0;
        print "$os, $release, $yes, $no\n";
    }
}

__DATA__
OS,RELEASE,LIKE
Ubuntu,Warty,No
Ubuntu,Hoary,No
Ubuntu,Breezy,Yes
Ubuntu,Breezy,Yes
Fedora,Yarrow,Yes
Fedora,Stentz,No
Fedora,Yarrow,Yes
Fedora,Yarrow,Yes
Windows,XP PRO,Yes
Windows,XP PRO,Yes
Windows,XP Home,No
Windows,XP PRO,Yes
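
With the sample data above, that prints one line per OS/release pair, with the Yes and No counts:

Fedora, Stentz, 0, 1
Fedora, Yarrow, 3, 0
Ubuntu, Breezy, 2, 0
Ubuntu, Hoary, 0, 1
Ubuntu, Warty, 0, 1
Windows, XP Home, 0, 1
Windows, XP PRO, 3, 0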

If you really want to use a database, then you could pull the data out like this:

# %data is the same hash as in the text-file version above.
# "LIKE" is a reserved word in SQL, so the column name has to be quoted
# (backticks shown here, MySQL style).
my $sth = $dbh->prepare("select OS, RELEASE, `LIKE` from $table");
$sth->execute();
while ( my ($os, $release, $like) = $sth->fetchrow_array ) {
    ++$data{$os}{$release}{ uc $like };   # uppercase so the 'YES'/'NO' lookups above match
}
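
Following the same rule of thumb, the counting itself can also be pushed into the database. This is only a sketch, not the solution used in the thread: it assumes a MySQL-style server (where a comparison inside SUM evaluates to 0 or 1) and reuses the $dbh and $table from above, again quoting the LIKE column because it is a reserved word:

my $sql = qq{
    SELECT OS, RELEASE,
           SUM(`LIKE` = 'Yes') AS yes_count,
           SUM(`LIKE` = 'No')  AS no_count
    FROM $table
    GROUP BY OS, RELEASE
    ORDER BY OS, RELEASE
};
my $sth = $dbh->prepare($sql);
$sth->execute();
while ( my ($os, $release, $yes, $no) = $sth->fetchrow_array ) {
    print "$os, $release, $yes, $no\n";
}

On a database without that shortcut, each sum would be spelled SUM(CASE WHEN `LIKE` = 'Yes' THEN 1 ELSE 0 END) instead.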

Replies are listed 'Best First'.
Re^2: Count the Duplicate Entries and make them uniq
by slayedbylucifer (Scribe) on Aug 29, 2012 at 15:19 UTC
    Thanks. I am working on it. Will keep you posted. Thanks for your time.
Re^2: Count the Duplicate Entries and make them uniq
by slayedbylucifer (Scribe) on Aug 30, 2012 at 04:25 UTC
    Hello corpio17, thanks for your response. For the time being, I am good with the solution provided by philiprbrenan, but I will definitely explore what you have suggested, since it works entirely without SQL statements. Thanks.