themonk has asked for the wisdom of the Perl Monks concerning the following question:


Problem statement: ping 1000 hosts on port 80 over TCP. Framework: Mojolicious.

For 1000 hosts, the best I could optimize it to (duh!) was around 1000 seconds (~16 minutes), which is not so good.

I tried the following modules:
1) Net::Ping (tried default and also tcp)
2) AnyEvent
3) AnyEvent::Ping
4) AnyEvent::Ping::TCP
5) Mojo::IOLoop

Can anyone please point me to a better approach?

Sample Net::Ping code below.

use Net::Ping;

my $p = Net::Ping->new('tcp', 1);
my $port = '3000';
$p->port_number($port);
my $timeout = 10;
my @ip = ('', '', '', '', '5', '', '', '', '225.235.31.150', '',
          '', '', '', '', '219.186.233.148', '', '');
foreach (@ip) {
    if ($p->ping($_)) {
        print "$_ is alive.\n";
    }
    else {
        print "$_ is not alive\n";
    }
}
***************************AnyEvent Code******************
use AnyEvent;
use AnyEvent::Ping;

my $c = AnyEvent->condvar;
my @ip = ('', '', '', '', '5', '', '', '', '225.235.31.150', '',
          '', '', '', '', '219.186.233.148', '', '');
my $ping = AnyEvent::Ping->new();
foreach my $ip (@ip) {
    $c->begin;    # count each outstanding ping so recv waits for all
    $ping->ping($ip, 1, sub {
        my $result = shift;
        print "$ip Result: ", $result->[0][0],
              " in ", $result->[0][1], " seconds\n";
        $c->end;
    });
}
$c->recv;

Replies are listed 'Best First'.
Re: Optimized remote ping (syn/ack)
by BrowserUk (Pope) on Jul 17, 2015 at 11:40 UTC

    Try Net::Ping in syn/ack mode. With this method you ping all the hosts in the first pass and then gather the responses in the second pass:

# Like the tcp protocol, but for many hosts at once
$p = Net::Ping->new("syn");
$p->port_number( getservbyname("http", "tcp") );

### Send all the pings first...
foreach $host (@host_array) {
    $p->ping($host);
}

### ...then check which hosts responded.
while ( ($host, $rtt, $ip) = $p->ack ) {
    print "HOST: $host [$ip] ACKed in $rtt seconds.\n";
}

In this way all the delays overlap, which reduces the overall runtime significantly.

It is harder to use, so read the docs carefully.


      Hi BrowserUk,

For my problem, you have hit the solution right on target. I checked the code and its response time is really what I needed. But I couldn't understand the line below:

      $p->port_number( getservbyname( "http", "tcp" ) );
I wanted to check reachability on port 3000, so I modified it to this:
      $p->port_number( getservbyname( "3000", "tcp" ) );

But I think I made some mistake with the parameters, as the code is still using the echo port (i.e., 7).

Please explain the above line.

      Thanks in advance.

        I got the correct parameters,

        $p->port_number("3000", "tcp");


The getservbyname function takes a service *name* and returns the port registered for it under the given protocol (as listed in /etc/services), e.g. "ftp" => 21, "http" => 80. If a custom numeric port is to be used, don't use getservbyname; pass the number directly.

        Correct me if my understanding is wrong.

Thanks, guys, for all the help. BTW, the response time now is 5 seconds for 50 IPs. GREAT response time. :-)
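For anyone else tripped up by the same thing, here is a small sketch of what getservbyname actually does (assuming a standard /etc/services; the variable names are mine):

```perl
use strict;
use warnings;

# getservbyname() looks up a *service name* in /etc/services and
# returns the port registered for it under the given protocol.
my $http  = getservbyname("http", "tcp");   # 80 on a standard system
my $bogus = getservbyname("3000", "tcp");   # "3000" is not a service
                                            # name, so this is undef

printf "http => %s\n", defined $http  ? $http  : "undef";
printf "3000 => %s\n", defined $bogus ? $bogus : "undef";
```

That undef result is why Net::Ping silently fell back to its default port; for a literal port number, just call `$p->port_number(3000)` directly.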

Re: Optimized remote ping
by afoken (Canon) on Jul 17, 2015 at 08:47 UTC
    can anyone please point me out to a better approach.



    Today I will gladly share my knowledge and experience, for there are no sweeter words than "I told you so". ;-)
Re: Optimized remote ping
by QM (Parson) on Jul 17, 2015 at 09:13 UTC
    There's probably a more unixish way, but have you tried Parallel::ForkManager?

    Update: Try Nmap::Scanner (see Batch mode).
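In case it helps, a minimal sketch of the Parallel::ForkManager idea applied to this problem (assuming the module is installed from CPAN; the host list, port, and child limit are placeholders):

```perl
use strict;
use warnings;
use Net::Ping;
use Parallel::ForkManager;    # CPAN module; not in core Perl

my @hosts = map "192.168.0.$_", 1 .. 20;    # substitute your own IPs
my $port  = 3000;                           # port from the question

my $pm = Parallel::ForkManager->new(50);    # at most 50 children at once

foreach my $host (@hosts) {
    $pm->start and next;               # parent: move on to the next host
    my $p = Net::Ping->new('tcp', 1);  # child: 1-second timeout per host
    $p->port_number($port);
    printf "%s is %s\n", $host, $p->ping($host) ? 'alive' : 'not alive';
    $p->close;
    $pm->finish;                       # child exits here
}
$pm->wait_all_children;
```

Each child blocks on its own connect, so the 1-second timeouts overlap across up to 50 hosts at a time instead of adding up serially.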


Re: Optimized remote ping
by shmem (Chancellor) on Jul 17, 2015 at 09:37 UTC

    Use Net::Oping. I'm not sure if it's possible to set the destination port.

I checked the CPAN description; the module doesn't support TCP connections.
Re: Optimized remote ping
by Anonymous Monk on Jul 19, 2015 at 23:01 UTC

    I haven't played with non-blocking socket connects much, so I took this problem as a trial run. Note that Net::Ping in tcp mode just does a connect, nothing special, so I thought I'd try a simple non-blocking IO::Socket::INET and use IO::Select to keep track of outstanding connect attempts. Linux will show completion of a non-blocking connect by indicating "write ready" in select (see man 2 connect).

I don't have a decent enough setup for testing this, since I don't have 1000 machines, but it does run on what I can fake up with my two Arch Linux machines.

Plug in your own IPs. Double-check the number of open files allowed on your system (ulimit -a) and make $max somewhat smaller than that. It's 1024 by default on my system, but it is changeable.

All the connects are done in parallel, and I can get it to run in under 4 seconds on my (very) kludged-up test system. I'm hoping I can get results from "themonk" and any other interested monks.

#!/usr/bin/perl
use strict;
use warnings;
use IO::Socket;
use IO::Select;

my @ips  = map "192.168.0.$_", 1 .. 11;  # your IPs here
my $port = 80;                           # your port here
my $max  = 1000;     # somewhat smaller than max "open files"
my %handles;
my $n   = 0;
my $sel = IO::Select->new;

while ( @ips or $sel->count ) {
    if ( @ips and $sel->count < $max ) {
        my $ip = shift @ips;
        my $fh = IO::Socket::INET->new(
            PeerAddr => "$ip:$port",
            Proto    => 'tcp',
            Blocking => 0,
        );
        $handles{$fh} = "$ip\t\t" . ++$n;
        $sel->add($fh);
    }
    elsif ( @ips ? $sel->count >= $max : $sel->count ) {
        for my $fh ( $sel->can_write ) {
            print $fh->connected ? ' ' : 'not', " alive $handles{$fh}\n";
            $sel->remove($fh);
            delete $handles{$fh};
        }
    }
}
Re: Optimized remote ping
by thargas (Deacon) on Jul 20, 2015 at 13:33 UTC
Re: Optimized remote ping
by phillipo (Novice) on Sep 03, 2015 at 14:16 UTC


    I realise that this is a fairly old post now, and that you've probably moved on - but I'm posting this for the benefit of those who come after.

    I'm the author of AnyEvent::Ping::TCP - it was specifically designed to handle large numbers of hosts. If doing a TCP ping on 1,000 hosts is taking 1000 seconds, you're probably doing it with the synchronous 'tcp_ping' routine.

    AnyEvent::Ping::TCP does support an asynchronous mode, similar to Net::Ping's syn/ack mode.

    The maximum length of time it should take to ping any number of hosts is determined largely by the timeout specified, as that is what determines the maximum length of time it will wait for a host that is not responding - usually, a 1 second timeout will be sufficient for a TCP ping.

Based on this, with 1000 hosts, a 1-second timeout, and 1 port, it shouldn't take much more than 1 second. Adding a second port shouldn't push it up much more. Here's my test script:

use AnyEvent::Ping::TCP;
use Time::HiRes qw(time);

my @hosts   = ();
my %results = ();
foreach my $prefix (qw(192.168.40. 192.168.41. 192.168.50. 192.168.150.)) {
    for (my $i = 1; $i < 255; $i++) {
        push(@hosts, $prefix . $i);
    }
}

my $start_time = time;
foreach my $host (@hosts) {
    tcp_ping_syn $host, 80, 1;
    # tcp_ping_syn $host, 443, 1;
    # tcp_ping_syn $host, 22, 1;
}

my $mid_time = time;
foreach my $host (@hosts) {
    $results{$host . ':80'} = tcp_ping_ack $host, 80;
    # $results{$host . ':443'} = tcp_ping_ack $host, 443;
    # $results{$host . ':22'} = tcp_ping_ack $host, 22;
}
my $end_time = time;

foreach my $result (keys %results) {
    print "\t$result: "
        . (defined($results{$result})
            ? sprintf("%.2f", $results{$result}) . " milliseconds"
            : "timed out")
        . "\n";
}
print scalar(keys %results) . " pings sent in " . ($mid_time - $start_time) . " seconds\n";
print scalar(keys %results) . " ping results received in: " . ($end_time - $mid_time) . " seconds\n";
print "Total time: " . ($end_time - $start_time) . " seconds\n";
    Output with just the port 80 test:
    $ perl | grep -v 'timed out'
   124.04 milliseconds
   113.96 milliseconds
   240.34 milliseconds
   181.19 milliseconds
   163.90 milliseconds
   165.69 milliseconds
   124.71 milliseconds
   124.38 milliseconds
   181.62 milliseconds
   111.27 milliseconds
    1016 pings sent in 0.243088960647583 seconds
    1016 ping results received in: 0.84105396270752 seconds
    Total time: 1.0841429233551 seconds
    Output including the port 443 test:
    $ perl | grep -v 'timed out'
   428.93 milliseconds
   278.25 milliseconds
   198.34 milliseconds
   196.59 milliseconds
   307.20 milliseconds
   281.97 milliseconds
   197.48 milliseconds
   307.89 milliseconds
   311.91 milliseconds
    2032 pings sent in 0.398411989212036 seconds
    2032 ping results received in: 0.694780111312866 seconds
    Total time: 1.0931921005249 seconds
    Output including the port 22 test:
    $ perl | grep -v 'timed out'
   462.20 milliseconds
   642.97 milliseconds
   413.96 milliseconds
   413.18 milliseconds
   417.56 milliseconds
   462.76 milliseconds
   422.05 milliseconds
   421.42 milliseconds
   463.17 milliseconds
   642.03 milliseconds
   635.39 milliseconds
   420.34 milliseconds
   436.70 milliseconds
   463.82 milliseconds
   469.35 milliseconds
    3048 pings sent in 0.566689968109131 seconds
    3048 ping results received in: 0.543115139007568 seconds
    Total time: 1.1098051071167 seconds
Note that the reported latencies do go up as more hosts/ports are added. An unfortunate side effect of queuing up so many hosts.