PerlMonks  

Re^2: making NET:SSH quiet

by DeadPoet (Scribe)
on Jan 12, 2011 at 14:03 UTC ( #881893=note )


in reply to Re: making NET:SSH quiet
in thread making NET:SSH quiet

Yes, there are some Secure Shell modules that provide multi-server operations. However, many of these modules can be difficult to compile on various platforms, require extensive dependencies such as the crypt-style modules, address parallelism only when running the remote command, do not easily account for hung Secure Shell sessions, etc... Moreover, what about pre- and post-processing of the information? In the typical post-processing case, all information is returned to the central point of execution, which then becomes single threaded--a loss of efficiency. Respectfully, that may be what you want, or you can have each thread perform its portion of the post-processing as part of its responsibility. Extend this thought to pre-processing: what about checking whether the server is pingable, or whether the name provided is in DNS? These are just a couple of examples of the typical issues at hand.

The example provided is a more robust solution, which one can extend to suit their design goals. This solution can also easily be extended to Secure Copy, as that would just be another command to run.

Finally, if you look at the 'SSH_OPT' which I have defined, these options can also be used within shell-based scripts, as an alias in the user environment, etc... Such options should almost always be used with a batch-style Secure Shell script: they account for connection timeouts, known_hosts issues, remote agent death, and more.
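The full 'SSH_OPT' string from the post is not shown here, so the following is only a reconstruction of a plausible batch-friendly option set; the exact options and values are assumptions, not the author's verbatim definition.

```shell
# Illustrative only: a plausible batch-style option set.
# ConnectTimeout bounds the connect phase, BatchMode suppresses
# interactive prompts (so a password prompt cannot stall a batch run),
# and the ServerAlive* pair detects a dead peer.
SSH_OPT="-o ConnectTimeout=8 -o BatchMode=yes -o ServerAliveInterval=15 -o ServerAliveCountMax=3"

# usage would be along the lines of:  ssh $SSH_OPT user@host 'df -k'
echo "$SSH_OPT"
```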

--Poet


Re^3: making NET:SSH quiet
by salva (Abbot) on Jan 12, 2011 at 16:52 UTC
    Finally if you look at the 'SSH_OPT' which I have defined...
    UserKnownHostsFile=/dev/null StrictHostKeyChecking=no
    Such options should almost always be used with a batch style Secure Shell script
    Don't tell anybody to use those options by default, please!

    They make SSH quite insecure!

      with a batch style Secure Shell script.

      You defeat the purpose of automation. A simple upgrade to SSH on UNIX, and all your automation is no longer automated until the known_hosts entries are cleared.

      Use the option or don't use the option; it is the implementer's choice, based on an understanding of their environment. To your point, which I agree with if you are implying "sparingly": this option should not be used just to be used; it should have a purpose--such as the case of automation. If you do not care about failed connections, or you are going to write some type of exception process, then yes, you do not need it. But when a connection just silently fails, as with '-q', why did it fail? Was it a true connection issue? Was it a bad command? Was it just the known_hosts file? Who knows!

      Moreover, before attempting a connection I would check for host availability, as I mentioned above. So, what would I know using this process?

      * Server is available.

      * Secure Shell is available.

      * If the command fails then it is probably on the target side.
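The pre-processing checks mentioned above (name resolution and reachability) can be sketched roughly as follows; the host name and the use of Net::Ping's TCP probe are illustrative choices, not the author's actual code.

```perl
#!/usr/bin/perl
# Sketch of the preliminary checks: is the name resolvable,
# and does the host answer a probe? Host name is illustrative.
use strict;
use warnings;
use Net::Ping;
use Socket qw(inet_ntoa);

my $host = 'localhost';

# DNS (or hosts-file) resolution check
my $addr = gethostbyname($host);
die "$host is not resolvable\n" unless defined $addr;
printf "%s resolves to %s\n", $host, inet_ntoa($addr);

# Reachability check; the TCP probe avoids needing raw-socket privileges
my $p = Net::Ping->new( 'tcp', 2 );
print $p->ping($host) ? "$host is reachable\n" : "$host did not answer\n";
$p->close;
```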

      Okay, now one is probably going to say: well, what about firewalls and port blockage? As part of the preliminary checks, which include ping, I would also check whether the port is open. One manner, and not the only way, is to use IO::Socket::INET. For example:

      # code snip...
      my $remote = IO::Socket::INET->new(
          PeerAddr => $o->{'host'},
          PeerPort => $o->{'port'},
          Proto    => q{tcp},
          Timeout  => 8,
      );
      if ( ! $remote ) {
          return RET_FAILURE;
      }
      # code snip...

      It all just depends on what one needs and wants to accomplish. If one is going to account for issues with some type of exception process, then you have it covered. If the script is going to be left unattended, then you probably need these checks.

      --Poet
        You defeat the purpose of automation

        So, you are advocating for ignoring any security issue just to make your work easier, right?

        That may be acceptable if you use ssh just to check that your machines are up and run some dummy commands, or if you are in a very controlled environment. But in general, telling ssh to ignore the known_hosts file is a very bad idea. Automation is not an excuse.

        A simple upgrade to SSH on UNIX and all your automation is no longer automated until known_host entries are cleared

        No SSH software that I know of changes the server keys on upgrade. That should only happen the first time you install it.

        Anyway, handling host keys properly may be a lot of work; right, that's life. Security is not something you get for free, and those uppercased warnings you get from SSH really do mean something:

        @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
        @ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!        @
        @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
        IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
        Someone could be eavesdropping on you right now (man-in-the-middle attack)!
        It is also possible that the RSA host key has just been changed.
        The fingerprint for the RSA key sent by the remote host is
        15:a9:45:01:49:6c:64:10:3a:78:02:3d:52:39:2d:bf.
Re^3: making NET:SSH quiet
by salva (Abbot) on Jan 12, 2011 at 18:02 UTC
    Your comment talks about SSH modules as if they were all the same, but they are not. There are several of them; every one has its own strengths and weaknesses, and none is better than the others in all matters.
    The example provided accounts for a more robust solution, which allows for one to extend to suit their design goals

    The piece of code you have posted suffers from the same problem: it may suit your needs, but it is not the solution to all Perl & SSH tasks.

    It has several limitations: no connection reuse, no password authentication, no passphrases, no synchronization between different workers, no command quoting, usage of highly insecure options, wrong handling of hung sessions, etc.

    Also, this solution can easily be extended to Secure Copy, as this would just be another command to run.

    Almost any SSH module available from CPAN already supports SCP and/or SFTP out of the box.

    Really, you should get more familiar with the SSH modules available from CPAN, especially the new ones. They are better than you think!

      I am more than willing to explore...

      The topic that I see as most interesting is wrong handling of hung sessions. Thus far, in our 2300+ server environment, I have NOT come across a set of SSH options that can account for a process that becomes hung on the remote server--such as rpm, df, bdf, getconf, ioscan, a flaky sshd, etc... That is why one may need to kill the process and prevent the script from locking up.
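The kill-the-process-to-prevent-script-lock pattern described above can be sketched with a fork plus an alarm timer; this is a minimal local illustration (a `sleep` stands in for a hung remote ssh command), not the author's actual implementation, and as noted elsewhere in the thread it does not clean up anything left running on the remote side.

```perl
#!/usr/bin/perl
# Minimal sketch of killing a hung child after a timeout.
# A local `sleep 60` stands in for a hung ssh session.
use strict;
use warnings;
use POSIX ();

sub run_with_timeout {
    my ($timeout, @cmd) = @_;
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;
    if ( $pid == 0 ) { exec @cmd or POSIX::_exit(127); }   # child

    local $SIG{ALRM} = sub {
        kill 'TERM', $pid;      # reap the hung session
        waitpid $pid, 0;
        die "timeout\n";
    };
    alarm $timeout;
    waitpid $pid, 0;
    alarm 0;
    return $? >> 8;             # child's exit status
}

my $rc = eval { run_with_timeout( 2, 'sleep', '60' ) };
print defined $rc ? "exit=$rc\n" : "killed: $@";
```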

      Oh please, provide an example and I will run tests.

      Next, I am not knocking the various authors of CPAN Secure Shell modules. These folks have done wonderful work, including your own. But I did say:

      However, many of these modules can be difficult to compile on various platforms...

      And, one may not be able to meet the requirements needed to compile. If I remember correctly, each module that I have used ships with ActiveState Perl.

      Passwords and passphrases could easily be addressed, as the IPC::Open3 call provides STDIN, STDOUT, and STDERR. Synchronization between workers: what are they working on together? Are they working on the same server or different ones? Once again, the %RESULTS shared hash is an example of bringing data back to the parent. Could server X see server Y's data and act upon it? Yes--the structure is shared. Could one just report on the collective information, as illustrated? Yes. It is a concept, and I stated: adjust as needed.
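The IPC::Open3 mechanism referred to above looks roughly like the following sketch, which drives a local child command in place of ssh. Note that this demonstrates only the three pipes; as pointed out later in the thread, stock OpenSSH reads passwords from /dev/tty rather than STDIN, so open3 alone does not drive an interactive authentication dialog.

```perl
#!/usr/bin/perl
# Sketch of capturing a child's STDOUT and STDERR with IPC::Open3,
# using a local perl one-liner in place of an ssh command.
use strict;
use warnings;
use IPC::Open3;
use Symbol qw(gensym);

my $err = gensym;   # open3 needs a pre-made glob for stderr
my $pid = open3( my $in, my $out, $err,
                 'perl', '-e', 'print "out\n"; warn "err\n";' );
close $in;          # nothing to feed the child on STDIN

my @stdout = <$out>;
my @stderr = <$err>;
waitpid $pid, 0;

print "STDOUT: @stdout";
print "STDERR: @stderr";
```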

      So now I will add to my original statement and say: Check your local listing and your mileage may vary.

      --Poet
        Thus far, in our 2300+ server environment I have NOT come across a set of SSH options that can account for when a process becomes hung on the remote server, such as rpm, df, bdf, getconf, ioscan, flaky sshd, etc... That is why one may need to kill the process and prevent script lock

        There isn't an easy and general way to handle hung remote commands. Killing the local ssh process is not enough, as it may leave the remote processes running.

        IMO, the best approach is to try to identify the PID of the remote process and kill it by running kill, also via ssh, on the remote host (setting ServerAliveInterval and ServerAliveCountMax should be enough to detect hung sessions due to connection errors).
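That approach might be sketched as below; the host, command name, and pid-file idiom are all illustrative assumptions, not a tested recipe from the thread.

```shell
# Sketch only: capture the remote pid, then kill it over a second ssh
# connection if the command overruns. The ServerAlive* options let the
# first connection itself fail on a dead link.
#
# ssh -o ServerAliveInterval=15 -o ServerAliveCountMax=3 user@host \
#     'long_cmd & echo $! > /tmp/cmd.pid; wait' &
# LOCAL_PID=$!
#
# sleep 60   # whatever timeout policy applies
# if kill -0 "$LOCAL_PID" 2>/dev/null; then      # still running: hung?
#     ssh user@host 'kill "$(cat /tmp/cmd.pid)"'  # kill the remote side
#     kill "$LOCAL_PID"                           # then the local ssh
# fi
```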

        Passwords and Pass Phrases could easily be addressed as the IPC::Open3 call provides STDIN, STDOUT, and STDERR

        It is not that easy! ssh does not use STDIN/STDOUT for authentication dialogs but /dev/tty. Getting that right is pretty tricky.

        Try to do it while limiting yourself to the modules provided with ActiveState Perl or try to make it work under Windows!
