Re^5: Data manipulation on a file

by sstruthe (Novice)
on Oct 02, 2015 at 20:50 UTC


in reply to Re^4: Data manipulation on a file
in thread Data manipulation on a file

Small problem now: when run against a large number of Linux hosts, the output contains duplicate mount points for different shares. This is because many mounts land on the same mountpoint.

like this $outputstring: host1: mountpoint1 mountpoint1 mountpoint1 mountpoint2 host25: mountpoint5 mountpoint5 mountpoint5 mountpoint6 m......78.

It would be really nice to remove the duplicates, but that means adding logic either before updating the array elements or after the dump. As I am just dumping the full hash into a scalar, this could get tricky. Does anyone have suggestions on removing the duplicate mount points once the hash has been dumped to the scalar, or should I put the logic in the dump, or even in the push? Perl is so powerful, but I am still a newbie here, so any help is appreciated. Many thanks.
#!/usr/bin/perl
use strict;
use warnings;

my %filer_hash;
my $outputstring = '';

# Group the NFS mountpoints by host, stripping the filer name prefixes
for ( qx(mount -t nfs | awk -F/ '{print \$1,\$3}' | sed -r 's/(blah.*:)|(bblah.*:)//g' | sort) ) {
    chomp;
    my ( $host, $mp ) = split;
    push @{ $filer_hash{$host} }, $mp;
}

# List the hosts with the most mountpoints first
foreach my $filer ( sort { @{ $filer_hash{$b} } <=> @{ $filer_hash{$a} } } keys %filer_hash ) {
    $outputstring .= "$filer:@{$filer_hash{$filer}},";
}

print "This is my output : $outputstring\n";
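
One thing I am wondering about (an untested sketch on my side; the %seen hash is just an illustrative name) is whether the duplicates could simply be skipped at push time, so the hash never stores them in the first place:

my %seen;
for ( qx(mount -t nfs | awk -F/ '{print \$1,\$3}' | sed -r 's/(blah.*:)|(bblah.*:)//g' | sort) ) {
    chomp;
    my ( $host, $mp ) = split;
    # only store a mountpoint the first time it shows up for this host
    push @{ $filer_hash{$host} }, $mp unless $seen{$host}{$mp}++;
}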

Replies are listed 'Best First'.
Re^6: Data manipulation on a file
by CountZero (Bishop) on Oct 02, 2015 at 21:23 UTC
    Have a look at the uniq and distinct functions from List::MoreUtils.

    For example:

    use Modern::Perl qw/2015/;
    use List::MoreUtils qw/uniq/;

    my @mountpoints = qw/one two three three two four one one five six/;
    print join ' ', uniq sort @mountpoints;

    Output: five four one six three two

    CountZero

    "A program should be light and agile, its subroutines connected like a string of pearls. The spirit and intent of the program should be retained throughout. There should be neither too little nor too much, neither needless loops nor useless variables, neither lack of structure nor overwhelming rigidity." - The Tao of Programming, 4.1 - Geoffrey James

    My blog: Imperial Deltronics

      Thanks, this looks really cool and straightforward. Should I just run this on the output string that has been dumped from the hash of arrays? Wow, you guys are great, Perl is getting cool.

        No, you run it on the array. The argument to uniq is an array or list, not a string.

        In your script it will be $outputstring .= "$filer:" . (join ' :', uniq sort @{$filer_hash{$filer}}) . ','; (untested)
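
        In context, the whole loop would then look roughly like this (an untested sketch; I use a plain space as the join separator here to keep your original space-separated output format):

        use List::MoreUtils qw/uniq/;

        foreach my $filer ( sort keys %filer_hash ) {
            # uniq drops the duplicate mountpoints before they are joined into the string
            $outputstring .= "$filer:" . ( join ' ', uniq sort @{ $filer_hash{$filer} } ) . ',';
        }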

        CountZero

        "A program should be light and agile, its subroutines connected like a string of pearls. The spirit and intent of the program should be retained throughout. There should be neither too little nor too much, neither needless loops nor useless variables, neither lack of structure nor overwhelming rigidity." - The Tao of Programming, 4.1 - Geoffrey James

        My blog: Imperial Deltronics
Re^6: Data manipulation on a file
by sstruthe (Novice) on Oct 04, 2015 at 18:20 UTC

    Thanks for the wisdom. Unfortunately, if I use a module I would have to make sure that module is installed on every host I want to run this on, and sadly that by itself is a huge problem. It really has to be core Perl; I do not have the luxury of most of the CPAN modules, unless of course "uniq" and "distinct" come with core Perl. Any ideas are most welcome.
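
    For reference, as far as I can tell uniq does not ship with core Perl of that era (it lives in List::MoreUtils, and only much newer List::Util releases provide one), but the same result can be had with nothing more than a %seen hash and grep. A minimal core-only sketch, using the same sample data as above (the variable names are just examples):

    use strict;
    use warnings;

    my @mountpoints = qw/one two three three two four one one five six/;

    # %seen remembers values already emitted, so grep keeps only the
    # first occurrence of each mountpoint
    my %seen;
    my @unique = grep { !$seen{$_}++ } sort @mountpoints;

    print join( ' ', @unique ), "\n";   # five four one six three two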
