PerlMonks: The Monastery Gates
New Questions
How can a script use a password without making the password visible?
9 direct replies — Read more / Contribute
by Cody Fendant
on Mar 01, 2017 at 06:07
    Not strictly a Perl problem.

    Say you have a script, on a Linux machine, which needs to communicate with a database. It needs a username and password.

    Hard-coding the password into the script isn't secure, and also the password will change.

    I'm sure this is a solved problem, but I can't think what the solution is. Can the script request a password from another place without a coder who is editing the script being able to view the password?
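    One common approach, sketched below, is to keep the credentials in a separate file readable only by the user the script runs as; the path and password here are hypothetical, and this demo creates the file itself just so it can run end-to-end. In real use an administrator creates the file once, mode 0600, outside the web root and excluded from version control.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::Temp qw(tempdir);

# For this self-contained demo we create the credentials file ourselves;
# in practice it already exists, owned by the service user, chmod 0600.
my $dir       = tempdir( CLEANUP => 1 );
my $cred_file = "$dir/db-password";

open my $out, '>', $cred_file or die "Cannot write $cred_file: $!";
print {$out} "s3cret\n";
close $out;
chmod 0600, $cred_file;

# The script itself only ever reads the file:
open my $in, '<', $cred_file or die "Cannot read $cred_file: $!";
chomp( my $password = <$in> );
close $in;

print "got password of length ", length($password), "\n";
```

    Note that a coder editing the script can still read the password if the OS lets them read the file: it is the file permissions, not the code, that enforce secrecy. The benefit is that the password never appears in the script or its revision history, and rotating it means editing one file.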
Using guards for script execution?
4 direct replies — Read more / Contribute
by R0b0t1
on Feb 28, 2017 at 16:58

    I've come from a Java, C, and Python background. My first inclination was to look up a pattern similar to the following:

    #!/usr/bin/env python3

    def main():
        print('Hello, world!')

    if __name__ == '__main__':
        main()

    And, indeed, it exists:

    #!/usr/bin/env perl
    use warnings;
    use strict;

    sub main {
        print "Hello, world!\n";
    }

    unless (caller) {
        main;
    }

    I attempted to find as much justification for the above as possible, and the main argument seems to be to give an enclosed scope to the main body of the program. However, based on Perl scripts that I have read, I would agree with the detractors who say it is not idiomatic Perl, at least for very short programs and programs which provide a Unix-like interface.

    While this may end up being a matter of opinion I was hoping there may be people who can comment on the extensibility of small utilities which start in one form or the other, or perhaps have experience to offer which goes in a direction I haven't thought of.
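    One concrete payoff of the caller() guard, sketched below under the assumption that the script ends with a true value (`1;`): a test file can load the script's subroutines without running main(). The demo writes a tiny hypothetical script to a temp file just so the example is self-contained.

```perl
use strict;
use warnings;
use File::Temp qw(tempdir);

# Write a minimal guarded script (hypothetical hello.pl) for the demo.
my $dir    = tempdir( CLEANUP => 1 );
my $script = "$dir/hello.pl";

open my $fh, '>', $script or die "Cannot write $script: $!";
print {$fh} <<'EOF';
use strict;
use warnings;
sub main { print "Hello, world!\n" }
main() unless caller;   # the guard: skipped when loaded, not run
1;
EOF
close $fh;

# Inside do(), caller() is true, so main() is defined but NOT executed.
do $script or die "do $script failed: ", $@ || $!;
print defined &main ? "main() loaded, not run\n" : "main() missing\n";
```

    This is essentially what makes the "modulino" style attractive for anything you might want to unit-test later, even if it is overkill for a ten-line utility.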

length of string in array
3 direct replies — Read more / Contribute
by SavannahLion
on Feb 28, 2017 at 16:16

    After three days of trying to track down an annoying bug, I finally traced it to this innocuous-looking line (I've stripped all the cruft out):

    my @t = qw/aA bB cC dD eE fF gG hH iI jJ kK lL mMmM nN oO pP qQ rR sS
               tT uU vV wW xX yY Zz/;
    my $x = 0;
    $x = length for @t[0 .. 1];
    print $x;

    What the code is supposed to do is print out "4". Instead it prints out "2". Clearly there is something amiss here.

    What I thought the code does is get a slice of @t, loop through the sliced elements in @t, get the length of each one and assign the sum to $x.

    I've tweaked this portion of my code for three days and I can't seem to get it behaving right. Ironically, I've actually gotten similar (albeit more convoluted) code working in a different language. So there is clearly some nuance I'm missing here.
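    For reference, a minimal sketch of the difference between assigning and accumulating in that statement modifier: `length` with no argument operates on `$_`, so `$x = length` overwrites `$x` on every pass and only the last element's length survives, while `$x += length` sums them.

```perl
use strict;
use warnings;

my @t = qw/aA bB cC dD/;

# Assignment: $x is overwritten each pass, ending as the length of the
# LAST element of the slice.
my $x = 0;
$x = length for @t[0 .. 1];
print "$x\n";    # prints 2

# Accumulation: += sums the lengths of both elements (2 + 2).
my $y = 0;
$y += length for @t[0 .. 1];
print "$y\n";    # prints 4
```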

problem in printing the result in the output file
4 direct replies — Read more / Contribute
by hegaa
on Feb 28, 2017 at 10:58

    Hi, I'm using this code:

    #!/usr/bin/perl
    use IO::Socket;

    my $ip   = $ARGV[0];
    my $port = $ARGV[1];

    open DAT, $in_file2;
    my @ip = <DAT>;
    close DAT;
    chomp(@ip);

    foreach my $ip (@ip) {
        $host = IO::Socket::INET->new(
            PeerAddr => $ip,
            PeerPort => $port,
            proto    => 'tcp',
            Timeout  => 0.01,
        ) and open(OUT, ">>port.txt");
        print OUT "\n$ip:$port\n";
        close(OUT);
    }

    it's printing the result like this

    192.168.1.2 :22
    192.168.1.2 :22
    192.168.1.2 :22
    192.168.1.2 :22

    but i want it to print it like this

    192.168.1.2:22
    192.168.1.2:22
    192.168.1.2:22
    192.168.1.2:22
    192.168.1.2:22
    192.168.1.2:22
    192.168.1.2:22
    192.168.1.2:22
    192.168.1.2:22
    192.168.1.2:22
    192.168.1.2:22
    192.168.1.2:22
    Please help me.
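    A sketch of the likely fix, assuming the stray blank lines come from the leading "\n" in the print: drop it and keep a single trailing newline, so each entry lands on its own line. The values below are hypothetical stand-ins for the loop variables in the post.

```perl
use strict;
use warnings;

my $ip   = '192.168.1.2';   # stand-in for the loop variable
my $port = 22;              # stand-in for $ARGV[1]

open my $out, '>>', 'port.txt' or die "open port.txt: $!";
print {$out} "$ip:$port\n";   # no leading "\n", so no blank lines between entries
close $out;

# show what was appended
open my $in, '<', 'port.txt' or die "open port.txt: $!";
my @lines = <$in>;
close $in;
print $lines[-1];
```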
error: "Value too large for defined data type" while checking file existence in Perl
4 direct replies — Read more / Contribute
by abhishekv
on Feb 28, 2017 at 06:10

    I am using Perl v5.20.3 on a QNAP machine.

    I have a problem checking the existence of a file. Error: "Value too large for defined data type"

    #!/opt/bin/perl
    use strict;
    use warnings;

    my $fileName = '/share/CACHEDEV1_DATA/Download/testFile.avi';

    if (! -e $fileName) {
        print "Error " . $! . "\n";
    }
    else {
        print "\nSuccess $!\n";
    }

    For text files it's working fine. How do I check the existence of '.avi' files?
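    That error is EOVERFLOW: the stat() underlying -e fails because the file's size doesn't fit the data type the Perl build expects (typically a Perl without large-file support hitting a file over 2 GB), which is why small text files work while large .avi files don't. The real fix is a Perl built with large-file support; a hedged workaround in the meantime is to delegate the check to the system's test(1):

```perl
use strict;
use warnings;

# Workaround sketch: delegate the stat to the external test(1) command,
# avoiding Perl's own (possibly non-large-file-aware) stat call.
# Using the list form of system() avoids shell-quoting issues.
sub file_exists {
    my ($path) = @_;
    return system( 'test', '-e', $path ) == 0;
}

print file_exists('/')             ? "exists\n" : "huh?\n";
print file_exists('/no/such/file') ? "huh?\n"   : "missing\n";
```

    This spawns a process per check, so it is only sensible where the broken -e is the alternative.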

searching keywords
2 direct replies — Read more / Contribute
by pdahal
on Feb 28, 2017 at 03:07
    I am trying to list the protein names that appear in PubMed abstracts. I made a file containing the list of proteins I want to search for. The code runs well, but I have a problem. For example, I have a protein name "NOV" in the keyword list, but even if the abstract doesn't contain NOV, it lists NOV when there are words like "novel". How can I solve this problem?
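    The usual fix is to anchor the keyword with \b word boundaries (and \Q...\E to neutralise any regex metacharacters in the keyword), so "NOV" no longer matches inside "novel". A minimal sketch with hypothetical abstracts:

```perl
use strict;
use warnings;

my $keyword   = 'NOV';
my @abstracts = (
    'The NOV protein was upregulated.',
    'We describe a novel mechanism.',
);

for my $abstract (@abstracts) {
    # \b...\b requires the keyword to stand alone as a word;
    # \Q...\E quotes metacharacters in case a protein name contains any.
    if ( $abstract =~ /\b\Q$keyword\E\b/ ) {
        print "match: $abstract\n";
    }
    else {
        print "no match: $abstract\n";
    }
}
```

    If you need case-insensitive matching, adding /i is still safe here: the \b boundaries prevent "novel" from matching even then.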
Integrating codebases: namespace pollution, passing objects, and more
2 direct replies — Read more / Contribute
by Anonymous Monk
on Feb 27, 2017 at 11:06

    Hello Monks!

    I'm integrating two sets of code, one written for a Dancer web app, the other for CGI.pm. There's core code in the Dancer app that runs, and then the request parameters are parsed, the appropriate modules loaded, and the route-specific code is run, returning either a data structure (Dancer app) or printing to STDOUT (CGI.pm code). The content is then rendered in a template that also has page header, footer, navigation, etc., generated by the Dancer app.

    I'm running into various problems because of the way that the CGI.pm-based code is written, e.g. some of the modules auto-export all their functions into the global namespace, most of the modules initialise a load of request-specific variables on load, there are function name clashes between modules, and most were written without warnings (and some without strict mode) turned on. From my understanding of the way Dancer is operating, the Dancer app stays alive between requests, so the CGI.pm modules really need to be reloaded to reinitialise these request-specific variables. Additionally, some of the functionality in the CGI.pm modules takes a long time to execute (it was written before the days of AJAX) and might be better dealt with by presenting a 'loading' page in the UI and retrieving the data via javascript when it is ready.

    Is there a way to temporarily / safely execute the CGI.pm-based code within the Dancer framework without running into the issues above? Some kind of sandbox around the code, like a big eval but which also prevents namespace pollution (and ideally "unloads" the modules after use)? To recap, I'd like to:

    • load old modules without namespace pollution in the Dancer app (???)
    • override various functions, e.g. using Sub::Override (OK)
    • capture STDOUT into a variable (OK)
    • pass in objects (session, environment, etc.) from Dancer app, some of which (e.g. the session) will get modified by the CGI.pm app (mostly OK)
    • fork a new process or deal with long-running processes asynchronously (not yet)
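    For what it's worth, the STDOUT-capture point can be done in core Perl without any modules (Capture::Tiny wraps the same idea more robustly); a minimal sketch:

```perl
use strict;
use warnings;

# Temporarily point STDOUT at an in-memory scalar, run the legacy code,
# then let the localized glob restore the real STDOUT at block exit.
my $buffer = '';
{
    local *STDOUT;
    open STDOUT, '>', \$buffer or die "cannot redirect STDOUT: $!";
    print "output from the CGI.pm code\n";   # stand-in for the legacy call
}
print "captured: $buffer";
```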

    Apologies if this is not clearly enough described. I can provide more detail if helpful. Thank you in advance for any suggestions!

Is there a free IDE for Strawberry Perl, running in Windows, that is interactive?
9 direct replies — Read more / Contribute
by Amphiaraus
on Feb 27, 2017 at 10:25

    Is there a free IDE for Strawberry Perl, running in Windows, that is both visual and interactive?

    Some of our build servers have a 32-bit Windows OS, others have a 64-bit Windows OS. Is there a free IDE for Strawberry Perl that is compatible with both 32-bit and 64-bit Windows operating systems?

    Currently we debug our Perl scripts by using the "-d" debug option on the command line, to go through the Perl code line-by-line. However, the command-line option "perl -d" is text-oriented, not visual, and is very cumbersome to use.

    Is there a Perl IDE that can be downloaded free, and would provide a visual interactive mode that is less cumbersome than the "perl -d" command line mode?

    I have studied the Perl IDEs "Padre" and "Perl Express". I found a webpage saying the "Padre" interactive mode has quit working. I found another webpage saying that "Perl Express" contains some bugs and has not been updated since December 2005.

    Currently I am studying the "Netbeans" IDE, which is freeware, and which has a plugin making it compatible with Perl. For Netbeans see https://netbeans.org/downloads/; for the Netbeans Perl plugin see http://plugins.netbeans.org/plugin/36183/perl-on-netbeans. Has anyone tried Netbeans + the Netbeans Perl plugin to debug Perl programs in a visual, interactive manner? Did it work successfully?

    Our requirements for a Perl IDE are: a) it is visually oriented (instead of text-oriented like "perl -d"); b) it allows a Perl program to be run and debugged line-by-line, in an easy-to-understand visual display; c) it is freeware; d) it is compatible with both 32-bit and 64-bit Windows.

    Is there an existing Perl IDE that can be downloaded free and meets the requirements listed above? I need to download and test Perl IDEs on my home computer, not my work computer, so I can't do such testing at the moment.

Critique of my "WebServerRemote" module
2 direct replies — Read more / Contribute
by nysus
on Feb 27, 2017 at 07:37

    If you are bored and looking for someone to beat some sense into, read on. Note: this post is somewhat of a followup to my question yesterday.

    First, a little background to put this in context. I started learning Perl in the late 90s. I'm a very on again/off again programmer. I don't code for a living and I write some hellacious spaghetti code. But every year or two I get the programming bug but usually end up biting off more than I can chew and/or get sidetracked with other stuff. But the last couple of weeks I've decided to pour my heart into getting as good as I can get at programming with perl so I can take on some larger projects I'd like to work on. First, to sharpen my chops, I decided to work on a smaller project, a family of modules and roles to manipulate my webserver from my local machine. The primary purpose of this project is not to write the best possible mechanism for issuing commands to a remote server. While I want this program to be useful, its primary purpose is to help me cut my teeth more with Moose, seeing what it can do, and getting more adept with it and other tools (like testing, vim, etc.).

    So anyway, what I'm looking for is a critique of what I have to see if I'm way out to lunch. I'm particularly concerned with how I'm using roles, which seems very convoluted. I'll explain in more detail as I show the code below. I have more to code but I'm far enough along to have enough shape to it. What I have written so far works and has been tested. Feel free to bash me. I can take it.

    So first I have my WebServerRemote class. It is intended to be the kind of glue that holds my family of my modules together and does odd tasks and delegates other tasks out to other modules and subclasses:
    package WebServerRemote 0.000001;

    use Carp;
    use Moose;
    use Data::Dumper;
    use Modern::Perl;
    use Params::Validate;
    use MyWebsite;
    use namespace::autoclean;

    with 'MyOpenSSH';
    with 'Apache2Info';

    sub get_file {
        validate_pos(@_, 1, 1);
        my $self      = shift;
        my $file_path = shift;
        return $self->capture("cat $file_path");
    }

    # get website objects based on domain name
    sub get_websites {
        validate_pos(@_, 1, 1);
        my ($self, $domain) = @_;
        my @websites = ();                                      # website objects
        my $config_files = $self->lookup_config_files($domain); # list of config files
        croak 'No config files found with ' . $domain if !@$config_files;
        foreach my $file (@$config_files) {
            my $config = $self->get_file($file);
            #my @cmds = qw( servername, suexecusergroup, customlog, serveralias );
            foreach my $docroot (@{$self->get_docroots_from_string($config)}) {
                my $vh = $self->get_vh_context($config, 'documentroot', $docroot);
                my @aliases = ();
                my $aliases = '';
                while (my $alias = $vh->cmd_config('serveralias')) {
                    push @aliases, $alias;
                }
                $aliases = join ', ', @aliases;
                my $suexec     = $vh->cmd_config('suexecusergroup') || '';
                my $servername = $vh->cmd_config('servername')      || '';
                my $errorlog   = $vh->cmd_config('errorlog')        || '';
                push @websites, MyWebsite->new(
                    docroot            => $docroot,
                    apache_config_path => $file,
                    domain             => $servername,
                    suexecgroup        => $suexec,
                    aliases            => $aliases,
                    error_log          => $errorlog,
                    ssh                => $self->ssh->get_user . '@' . $self->ssh->get_host,
                );
            }
        }
        return \@websites;
    }

    sub check_dir_for_files {
        validate_pos(@_, 1, 1, 1);
        my $self    = shift;
        my $dir     = shift;
        my $files   = shift;
        my $listing = $self->capture('ls -1 ' . $dir);
        my %files   = map { $_ => 1 } split /\n/, $listing;
        my @fail    = ();
        # $files can be a scalar or an array
        if (ref $files) {
            push @fail, grep { !exists $files{$_} } @$files;
            return !@fail;
        }
        else {
            return $files{$files};
        }
    }
    ##########################################

    The module above consumes two roles. First, there is MyOpenSSH which is just a wrapper for Net::OpenSSH:

    package MyOpenSSH 0.000001;

    use Carp;
    use Data::Dumper;
    use Moose::Role;
    use Modern::Perl;
    use Net::OpenSSH;
    use Params::Validate;

    has 'ssh' => (
        is       => 'rw',
        isa      => 'Net::OpenSSH',
        required => 1,
        lazy     => 0,
        handles  => qr/[^(capture)]/,
    );

    around BUILDARGS => sub {
        my $orig  = shift;
        my $class = shift;
        my %args  = ref $_[0] ? %{$_[0]} : @_;
        croak 'a host must be supplied for ssh: ssh => (\'<user>@<host>\', %opts)'
            if !%args;
        my ($host, %opts) = $args{ssh};
        return $class->$orig(%args) if ref $host eq 'Net::OpenSSH';
        delete $args{ssh};
        my $ssh = Net::OpenSSH->new($host, %opts);
        $ssh->error and croak "could not connect to host: $ssh->error";
        return $class->$orig( ssh => $ssh, %args );
    };

    # wrapper for system method
    sub exec {
        validate_pos(@_, 1, 1);
        my $self = shift;
        my $cmd  = shift;
        $self->ssh->system($cmd) || carp 'Command failed: ' . $self->ssh->error;
    }

    # wrapper for capture method
    sub capture {
        validate_pos(@_, 1, 1);
        my $self = shift;
        my $cmd  = shift;
        $self->ssh->capture($cmd) || carp 'Command failed: ' . $self->ssh->error;
    }
    ###########################################

    Now, I'm sure the first question will be, "Why is he using Net::OpenSSH and not just doing it directly on the machine?" Well, mostly because I wanted to get familiar with it and also because I want to be able to develop everything on my local machine to see if it can be done. I'm sure the other question will be "Why a wrapper for Net::OpenSSH?" The answer to that is twofold: one, I wanted to see how it might be done, and two, I don't want to have to remember how to construct a Net::OpenSSH object. I can now create a WebServerRemote object with something as simple as $wsr = WebServerRemote->new(ssh => 'me@host'). Yeah, I'm that lazy.

    I found some nice side benefits to wrapping Net::OpenSSH. For example, I can automatically check for errors every time I run a command on the remote server.

    Now, the BUILDARGS is the most interesting (convoluted?) feature of the Net::OpenSSH role. It was hacked together with trial and error until I got it to work. I will get back to this later. It's a doozy.

    So, the other role I have is called Apache2Info. Its job is to do boring things related to retrieving information from Apache config files. So far, it mostly has methods I will use for reporting. I've left out a lot of the code of this role because it's not very interesting or pertinent to this post:

    package Apache2Info 0.000001;

    use Carp;
    use Try::Tiny;
    use File::Spec;
    use File::Util;
    use Moose::Role;
    use Data::Dumper;
    use Modern::Perl;
    use File::Listing;
    use File::Basename;
    use Params::Validate;
    use Apache::ConfigFile;

    requires 'ssh';

    # get document roots of a config file
    sub get_docroots_from_string { --snip-- }

    # searches a config file string for a command and returns virtual host config
    # if a match is found
    sub get_vh_context { --snip-- }

    # Apache::ConfigParser requires a path to a file as an argument
    # so we save contents to a file first and then read it
    sub _read { --snip-- }

    # get list of absolute, non-canonical paths to all apache configuration files
    sub get_enabled_apache_config_filenames { --snip-- }

    # get a listing of all directories where config files reside
    sub get_config_file_dirs { --snip-- }

    # get all docroots
    sub get_all_docroots { --snip-- }

    # find the config files for a given domain name
    sub lookup_config_files { --snip-- }
    ###############################################

    The only really interesting thing here is the requires 'ssh' bit because this role needs a Net::OpenSSH functionality to get stuff from the server. I satisfy that in my consumers by having an ssh attribute. You'll notice the ssh attribute is supplied by the MyOpenSSH role. This is also where stuff gets kind of convoluted.

    The last piece of the puzzle is the MyWebsite class which extends WebServerRemote. I'm not sure if this is a good idea or not but in the interest of experimenting I decided to try it and see what happens. My thinking on this was that since I want my website objects to use the MyOpenSSH role and Apache2Info, that I could just extend WebServerRemote and have those features automatically included. Also, there are methods in WebServerRemote that will be used by my MyWebsite. So, anyway, here is the code:

    package MyWebsite 0.000001;

    use Carp;
    use Moose;
    use Modern::Perl;
    use Drupal;
    use WordPress;

    extends 'WebServerRemote';
    with 'MyOpenSSH';

    use namespace::autoclean;

    has 'db'        => (is => 'rw', isa => 'Str', required => 0, lazy => 1, default => '');
    has 'ver'       => (is => 'rw', isa => 'Str', required => 0, lazy => 1, default => '');
    has 'type'      => (is => 'ro', isa => 'Str', required => 0, lazy => 1, builder => '_set_type');
    has 'domain'    => (is => 'rw', isa => 'Str', required => 0, lazy => 1, default => '');
    has 'aliases'   => (is => 'rw', isa => 'Str', required => 0, lazy => 1, default => '');
    has 'docroot'   => (is => 'rw', isa => 'Str', required => 1, lazy => 0);
    has 'db_user'   => (is => 'ro', isa => 'Str', required => 0, lazy => 1, default => '', writer => '_set_dbuser');
    has 'db_pass'   => (is => 'rw', isa => 'Str', required => 0, lazy => 1, default => '');
    has 'root_dir'  => (is => 'rw', isa => 'Str', required => 0, lazy => 1, default => '');
    has 'error_log' => (is => 'rw', isa => 'Str', required => 0, lazy => 1, default => '');
    has 'site_config'        => (is => 'rw', isa => 'Str', required => 0, lazy => 1, default => '');
    has 'suexecgroup'        => (is => 'rw', isa => 'Str', required => 0, lazy => 1, default => '');
    has 'apache_config'      => (is => 'rw', isa => 'Str', required => 0, lazy => 1, default => '');
    has 'site_config_path'   => (is => 'rw', isa => 'Str', required => 0, lazy => 1, default => '');
    has 'apache_config_path' => (is => 'rw', isa => 'Str', required => 0, lazy => 1, default => '');

    sub _set_type {
        my $self = shift;
        # check for drupal multi site
        if ($self->check_dir_for_files($self->docroot, ['sites', 'includes', 'modules'])) {
            $self = Drupal->meta->rebless_instance($self);
            return 'drupal';
        }
        if ($self->check_dir_for_files($self->docroot, 'wp-config.php')) {
            $self = WordPress->meta->rebless_instance($self);
            return 'wordpress';
        }
        if ($self->check_dir_for_files($self->docroot, 'settings.php')) {
            $self = Drupal->meta->rebless_instance($self);
            return 'drupal';
        }
    }
    ##############################################

    So, I have to have the with 'MyOpenSSH'; bit in there. I thought by extending WebServerRemote I wouldn't need that. However, when I remove that line, things break when the _set_type method gets run: I get an error saying there is no capture method, which is found in MyOpenSSH. But putting this line in wreaked all kinds of other havoc. I think the ssh attribute was trying to get set twice; I'm not sure. Anyway, after fiddling with the BUILDARGS method of MyOpenSSH and playing with the order of the with statements, I was able to get it to work somehow. I feel in my bones this is a horrible hack, but I don't know how to fix it properly.

    The other thing I do is apparently what's called an "object factory" where the MyWebsite object detects what kind of website it is and then subclasses itself when the _set_type method is called. Perhaps this is a bad idea. I'm not sure if there's a real good reason to do it except to see if it can be done. But I'm thinking it may come in handy because different kinds of websites will have different methods.

    Alright, that's it. Feel free to beat on me if this is a ridiculous mess. :)

    $PM = "Perl Monk's";
    $MCF = "Most Clueless Friar Abbot Bishop Pontiff Deacon Curate";
    $nysus = $PM . ' ' . $MCF;
    Click here if you love Perl Monks

Recursive Module Dependencies
1 direct reply — Read more / Contribute
by 13gurpreetsingh
on Feb 27, 2017 at 06:03
    Hi Monks,

    I tried searching many times, but couldn't find an answer. I believe there isn't a real solution to my problem, but there might be!

    So, I am working on company servers where I deploy modules in a separate directory, due to root permission issues, and use them via PERL5LIB or 'use lib'.

    But the problem comes when installing: each module pulls in more and more recursive dependencies. I have proxy restrictions, so I can't connect to the internet from the shell and install directly via cpan. Each module, and then each of its dependencies, I have to download, unzip, and install manually.

    Is there a way I can view all the dependencies of a module I am going to install in the browser itself, along with the order of installation?

    If it is possible, I will go and download them one by one and install them in that order, instead of opening multiple sessions to a server and multiple browser windows, doing scp from my local Windows machine, unzipping, and so on. Although this is also manual work, it is at least a bit better.
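    One offline-friendly angle, sketched below: every CPAN distribution you download ships a META.json whose "prereqs" section declares what must be installed first, so you can inspect dependencies before unpacking further. (MetaCPAN's release pages also list dependencies in the browser.) The sample document here is hypothetical, standing in for a real distribution's META.json.

```perl
use strict;
use warnings;
use JSON::PP qw(decode_json);   # in core since Perl 5.14

# Hypothetical META.json fragment standing in for a downloaded dist's file;
# in real use, read the META.json from the unpacked tarball instead.
my $json = <<'EOF';
{ "prereqs": {
    "runtime":   { "requires": { "Moo": "2.0", "perl": "5.008" } },
    "configure": { "requires": { "ExtUtils::MakeMaker": "0" } }
} }
EOF

my $meta = decode_json($json);
for my $phase (qw/configure build runtime/) {
    my $req = $meta->{prereqs}{$phase}{requires} or next;
    print "$phase: $_ => $req->{$_}\n" for sort keys %$req;
}
```

    This only shows one level of dependencies, of course; each prerequisite's own META.json must be consulted in turn, which is exactly the recursion the poster is fighting.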

    Thanks for your help in advance.

Design question for berrybrew update
1 direct reply — Read more / Contribute
by stevieb
on Feb 26, 2017 at 17:36

    Hey all,

    So thankfully, Strawberry Perl is in the beginning stages of providing a JSON document with all of their releases. Here is their first example/mockup.

    Currently, in berrybrew, I hand pick releases, add them into an existing JSON file within the install, and allow others to manually edit this file as they see fit.

    The entire list is quite long, even using just the portable editions. For instance, each version has a 32-bit and a 64-bit cut, and each 32-bit cut has both a "with USE_64_BIT_INT" and "without USE_64_BIT_INT". I like the entire default list to show up in one cmd window without scrolling.

    My question is essentially about how I should decide which versions to include. First, the list file will be included in each distribution, frozen as it was when that release was cut. The user will have to manually run a command to fetch any updates.

    I'm thinking about including only all 32 and 64 bit portable editions in the berrybrew available command, with some options to include others:

    berrybrew update_perls                       # only look for new default includes
    berrybrew update_perls include PDL
    berrybrew update_perls include 64_BIT_INT
    berrybrew update_perls all

    etc. After the new JSON data is fetched, we'll run a routine that will reformat everything to how it is used internally.

    What are your thoughts on this? If you use Perlbrew, is there anything you wish was/wasn't being done?

    All suggestions welcome, as I'm in the extremely early stages of drumming up a design on how this will be approached (and hopefully, make decent decisions early on, as to minimize work after if it needs to be modified).

    berrybrew is developed in C#. It is currently being reviewed for porting to C++ because I desire to get rid of the .Net requirement, if possible. However, that doesn't affect the outcome of this particular question. That said, any and all suggestions to how the software operates or acts is welcome, as I'm a *nix person by default, and would love feedback of all sorts from my fellow Monks who use Windows.

UP-TO-DATE Comparison of CGI Alternatives
6 direct replies — Read more / Contribute
by iaw4
on Feb 26, 2017 at 17:18

    A comparison of the tradeoffs among various web technologies should probably be a FAQ, updated once a year. The web is important, and unlike Ruby and Rails (or Python and Django?), there is really not one recommended, dominant web framework in Perl to start with.

    I am going to start this post with what I understand.

    • CGI.pm was a simple low- or mid-level framework. It has been deprecated. It is still supported for existing projects, but no one should start a new web project with it.
    • PSGI/Plack is expressly middleware. While powerful and stable, it really is not designed for writing websites directly, but for use in higher-level frameworks. The authors are not eager (or equipped) to handle a large number of newbie requests on how to use it, and the examples in the documentation are modest.
    • The two primary choices for new modest-size websites are Dancer2 and Mojolicious. They have good documentation and are suitable for newbies. (Both frameworks are or can be users of PSGI/Plack, but this is transparent to the user programmer.) They are good high-level choices, but not stable. In particular, I know that Mojolicious is evolving---projects can break upon Mojolicious updates. I don't know about Dancer2.
    • For large projects, Catalyst becomes a third alternative.

    So, for someone new who wants to learn how to code a website, there seem to be two primary Perl choices. If my reading of the landscape is not correct, then please correct it. And if someone could please post the pros and cons of Dancer2 and Mojolicious---so that one does not have to learn both first to pick one---it would be helpful.

    Personal Observation: What I liked about CGI.pm and Plack/PSGI over the frameworks was that lower-level code makes it easier to determine what perl code was responsible for displaying a given web page. With the frameworks, by the time all routes, templates, injections, etc., are considered, it becomes hard to trace how the given web page has been built. Where web programs are one's primary responsibility and used every day for years, the linkage within the frameworks is not a problem. One remembers instantly what was where. Where web programs are occasional tasks, separated by long periods of neglect, this becomes more difficult.

    Thanks in advance to the experts for illuminating the issues.

New Meditations
Holy Crap! Programming Well is Hard Work
4 direct replies — Read more / Contribute
by nysus
on Feb 24, 2017 at 12:50

    As a hobbyist programmer and someone fascinated with the world of programming and learning what I can about it, it strikes me more and more just how obsessive about detail good programmers are. That's never been a strong suit of mine, unfortunately. I'm impatient and I often make wrong and bad assumptions that make it tough for me to write solid code. And I think what I like about programming so much–even though it often humbles me by making me feel like a bit of a dunce–is that it forces me to think like an engineer. But I have to work pretty damn hard at it and the process is slow and frequently frustrating.

    In the programming field, there is an extraordinary amount of information to take in, process, and put into practice. It seems the more I learn, the less I feel like I know. I marvel at the programmers who are able to do that and who have a natural knack for it. I'd like nothing better than to spend 15 hours a day lost in code (which I've been doing lately) but it still sometimes feels like I'm pushing a rock up a mountain. I have a few projects I want to write and write well but I end up getting diverted by having to learn some new skill first. Every day there seem to be countless new idioms, tools, and concepts that I need to learn and put into practice.

    But I'm eager for the day when it all just clicks, when I can look at someone else's piece of code and read it like a newspaper and know everything that's going on with it (or at least have a pretty good idea). It's frustrating to go down three dead ends or spend an hour figuring out why your code won't do what you want because of a stupid mental error. I'm guessing most programmers like me have gone through a similar phase where they can write code that gets simple stuff accomplished but aren't good enough to take on a really large or complex project.

    Anyway, just needed to vent. I feel better now. And thanks to the Perlmonks who have helped me on my journey toward my goal of achieving programming excellence. I could not keep pushing on this rock without you.

    $PM = "Perl Monk's";
    $MCF = "Most Clueless Friar Abbot Bishop Pontiff Deacon Curate";
    $nysus = $PM . ' ' . $MCF;
    Click here if you love Perl Monks

New Cool Uses for Perl
Lower-Level Serial Port Access on *NIX
No replies — Read more | Post response
by haukex
on Mar 01, 2017 at 10:07

    Dear Monks,

    Most likely, everyone who's needed to access a serial port on *NIX systems has used, or at least come across, Device::SerialPort. It's nice because it provides a decent level of portability, being designed to be a replacement for Win32::SerialPort. However, it's always bugged me a little bit that the module is a bit unwieldy, with a lot of configuration and functions I never use, several documented as being experimental, and that its filehandle interface is tied instead of native. So, I'd like to present an alternative that has been working well for me over the past months, IO::Termios. It's a subclass of IO::Handle, and the handles can be used directly in IO::Select loops, which can be used to implement nonblocking I/O and timeouts, or for example POE's POE::Wheel::ReadWrite, just to mention two possibilities. (Note: I'm not saying IO::Termios is "better" than Device::SerialPort, just that so far it has been a viable alternative.)

    Here's a basic example:

    use IO::Termios ();

    my $handle = IO::Termios->open('/tmp/fakepty', '4800,8,n,1')
        or die "IO::Termios->open: $!";

    while (<$handle>) {   # read the port line-by-line
        chomp;
        print time . " <$_>\n";
        # write something to the port
        print {$handle} "Three!\n" if /3/;
    }
    close $handle;

    An Aside: Fake Serial Ports on *NIX

    You may have noticed that in the above example, instead of the usual device names like e.g. /dev/ttyAMA*, /dev/ttyS*, or /dev/ttyUSB*, I used "/tmp/fakepty". I created this for testing using the versatile tool socat; here are two examples:

    # connect the fake pty to a process that generates output
    $ socat pty,raw,echo=0,link=/tmp/fakepty \
        exec:'perl -e "$|=1;while(1){print q{Foo },$x++,qq{\n};sleep 2}"'

    # connect the fake pty to the current terminal
    $ socat pty,raw,echo=0,link=/tmp/fakepty -,icanon=0,min=1

    More Fine-Grained Control

    It's also possible to use sysopen for the ports, if you want to have control over the exact flags used to open the port. Also, if you need to set some stty modes, you can do so with IO::Stty. I've found that for several of the USB-to-Serial converters I've used that it's necessary to set the mode -echo for them to work correctly, and raw is necessary for binary data streams.

    use Fcntl qw/:DEFAULT/;
    use IO::Termios ();
    use IO::Stty ();

    sysopen my $fh, '/tmp/fakepty', O_RDWR or die "sysopen: $!";
    my $handle = IO::Termios->new($fh) or die "IO::Termios->new: $!";
    $handle->set_mode('4800,8,n,1');
    IO::Stty::stty($handle, qw/ raw -echo /);

    my $tosend = "Hello, World!\n";
    $handle->syswrite($tosend) == length($tosend) or die "syswrite";

    for (1..3) {
        my $toread = 1;
        $handle->sysread(my $in, $toread) == $toread or die "sysread";
        print "Read $_: <$in>\n";
    }
    $handle->close;

    My error checking in the above example is a little simplistic, but I just wanted to demonstrate that using sysread and syswrite is possible like on any other handle.

    I've noticed that there is some interaction between IO::Termios and IO::Stty - for example, when I had to connect to a serial device using 7-bit and even parity, I had to set the termios mode to 4800,7,e,1 and set the stty modes cs7 parenb -parodd raw -echo for things to work correctly.

    I have written a module that wraps an IO::Termios handle and provides read timeout, flexible readline, signal handling support, and a few other things. However, I need to point out that while I've been using the module successfully in several data loggers over the past few months in a research environment, it should not yet be considered production quality! The major reason is that it's not (yet?) a real CPAN distro, and it has zero tests! But if you're still curious, for example how I implemented a read timeout with IO::Select, you can find the code here.
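    One way such a read timeout can be built with IO::Select (not necessarily how the author's module does it) is sketched below, demonstrated on a plain pipe so it is self-contained; the same pattern applies to an IO::Termios handle.

```perl
use strict;
use warnings;
use IO::Select;

# Demonstrate a read timeout on a pipe; an IO::Termios handle works the same.
pipe my $r, my $w or die "pipe: $!";
my $sel = IO::Select->new($r);

# Nothing written yet: can_read() returns nothing after 0.2 s.
print $sel->can_read(0.2) ? "ready\n" : "timed out\n";

# Write a byte and the handle becomes readable immediately.
syswrite $w, 'x';
if ( $sel->can_read(0.2) ) {
    sysread $r, my $buf, 1;
    print "read <$buf>\n";
}
```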

    Update: Added mention of some /dev/* device names.

Fast gzip log reader with MCE
No replies — Read more | Post response
by marioroy
on Mar 01, 2017 at 05:54

    Greetings, fellow Monks.

    I came across an old thread. One might do the following to consume extra processing cores. The pigz binary is useful and depending on the data, may run faster than gzip.

    use strict;
    use warnings;

    use MCE::Loop chunk_size => 1, max_workers => 4;

    my @files = glob '*.gz';

    mce_loop {
        my ($mce, $chunk_ref, $chunk_id) = @_;

        ## $file = $_; same thing when chunk_size => 1
        my $file = $chunk_ref->[0];

        ## http://www.zlib.net/pigz/ remember to specify -p1
        ## open my $fh, "pigz -dc -p1 $file |" or do { ... }

        open my $fh, "gzip -dc $file |" or do {
            warn "open error ($file): $!\n";
            MCE->next();
        };

        my $count = 0;
        while ( my $line = <$fh> ) {
            $count++;   # simulate filtering or processing
        }
        close $fh;

        MCE->say("$file: $count lines");
    } @files;

    Kind regards, Mario.

New Monk Discussion
Fine grained "a day ago" or "a week ago"
3 direct replies — Read more / Contribute
by stevieb
on Feb 22, 2017 at 23:27

    Request for new clicky-availability...

    When pointing at the arrows << and <, it's a week ago and a day ago respectively. We need something more fine-grained than that. I do not have a solution, so this is a throw-out for discussion.

    This is, I suppose, a formal (public) application to become a pmdev, so I may become part of the team that can see what is happening, and potentially be part of new ideas.
