PerlMonks  

The Monastery Gates



New Questions
Convert string to variable
4 direct replies — Read more / Contribute
by ShermW0829
on Feb 16, 2018 at 15:27

    Name: Sherman, 71 years old, retired, and creating my own poker visits database

    Platform: HP Compaq 6710b

    Operating System: Ubuntu 17.10

    Perl Version: 5.26

    postgresql Version: 9.6

    Problem: Need to assign a string to act as a variable

    I am building a front end for postgresql entries. I have fourteen fields for which I need to prompt the user and store each response in a variable. I am using IO::Prompt.

    If I straight-line the code, everything works, e.g.:

    my ($date_in, $date_out);
    $date_in = prompt("Start Date: ");
    $date_in =~ s/^\s+|\s+$//g;
    $date_out = prompt("End Date: ");
    $date_out =~ s/^\s+|\s+$//g;

    And twelve more user inputs, each with its no-space check, means 24 more lines of code. That's a lot of code, so I tried the following:

    my ($k, $date_in, $date_out);
    my %var_list = (
        'date_in'  => 'Start Date',
        'date_out' => 'End Date',
    );
    foreach $k ( keys(%var_list) ) {
        ## $k prints date_out and then date_in
        print( "\$k: \"$k\"\n" );
        ## $var_list prints End Date and then Start Date
        print( "\$var_list{k}: \"$var_list{$k}\"\n" );
        ## Below is where I want to go:
        ## $k = prompt($var_list{k});
        ## and have $date_out = user's entry
        ##
        ## Now how do I assign a beginning $
        ## to change date_out to $date_out?
        ## The below does not work
        ## ${$k} = prompt(${$var_list{k}});
        ##
        ## I've tried $$k, $($k), and ${$k}
        ## An example error is the following:
        ## Can't use string ("date_out") as a SCALAR
        ## ref while "strict refs" in use at
        ## ./build_sql_entries line 25
    }

    Thank you;

    Sherman
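    A minimal sketch of the usual strict-safe alternative: rather than turning the string into a variable name with symbolic references, keep all the answers in one hash keyed by field name. The ask() sub below is a hypothetical stand-in for IO::Prompt's prompt(), fed canned input so the sketch runs without a terminal.

```perl
use strict;
use warnings;

# Field key => prompt label, in the order the user should be asked.
my @fields = (
    [ date_in  => 'Start Date' ],
    [ date_out => 'End Date'   ],
    # ... twelve more pairs ...
);

# Hypothetical stand-in for IO::Prompt's prompt("$label: ");
# swap in the real call when wiring this up.
my @canned = ( " 2018-02-01 ", "2018-02-16\n" );
sub ask { return shift @canned }

my %answer;
for my $pair (@fields) {
    my ($key, $label) = @$pair;
    my $val = ask("$label: ");
    $val =~ s/^\s+|\s+$//g;      # same whitespace trim as before
    $answer{$key} = $val;        # $answer{date_in} instead of $date_in
}

print "$_ => $answer{$_}\n" for map { $_->[0] } @fields;
```

    Everywhere the straight-line version used $date_in, this version uses $answer{date_in}; no symbolic references, so strict stays happy.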

Appending download file LWP
1 direct reply — Read more / Contribute
by Anonymous Monk
on Feb 16, 2018 at 13:52

    Hello

    I am trying to improve some code. Basically, I want to build a downloader with a progress bar. I know how to download a file and how to build a progress bar (for example in Tk). What I cannot solve is appending each portion of the file I have downloaded to the main (final) file. Here is what I've got (stripped of the GUI part). The code is based on a solution I found on the web.

    use strict;
    use warnings;
    use LWP::Simple;
    use LWP::UserAgent;

    my $url = 'www.myurl.com';
    my $ttlDown = 0;

    my $resp = LWP::UserAgent->new()->get($url,
        ':content_cb' => sub {
            my ($data, $response) = @_;
            my $size = $response->content_length;
            $ttlDown += length $data;
            printf("%7.1f KB of %7.1f (%5.1f%%)$/",
                $ttlDown / 1024.0, $size / 1024.0, $ttlDown * 100.0 / $size);
            # Here I need to append the bit of file downloaded,
            # but I am downloading the entire file over and over...
            my $file = 'myfile.exe';
            getstore($url, $file);
        }
    );
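    A sketch of the appending part, assuming the goal is one growing file: open the output handle once before the request, and have the :content_cb callback print each chunk as it arrives. Calling getstore() inside the callback starts a brand-new download each time it fires. The callback is driven with fake chunks here so the sketch runs without a network.

```perl
use strict;
use warnings;

my $out = 'myfile.exe';
open my $fh, '>:raw', $out or die "open $out: $!";

my $ttlDown = 0;
my $cb = sub {
    my ($data, $response) = @_;
    print {$fh} $data;          # append this chunk to the open handle
    $ttlDown += length $data;
};

# With LWP it would be wired up as:
#   LWP::UserAgent->new->get($url, ':content_cb' => $cb);

# Fake chunks stand in for the network here:
$cb->($_, undef) for "chunk1-", "chunk2";
close $fh;
```

    The progress-bar printf from the original callback slots in alongside the print, since both see the same $data and running total.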
including variables
6 direct replies — Read more / Contribute
by LloydRice
on Feb 16, 2018 at 11:32

    I have a short bit of Perl variable definitions that I would like to use in two different scripts. The simple plan is that I would not need to maintain 2 copies of that fragment. I have tried do, exec, use, module, a few more things. It all seems designed to defeat my simple goal by imposing all of the module structure. As I read the new Camel book, "do" should do what I want. But the variables do not get into the calling script. What am I missing?
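    do does read and run the file, but any my variables inside it are lexical to that file and vanish when it finishes; declaring the shared variables with our (package globals) is one minimal way to make them visible to both calling scripts. A self-contained sketch (the shared.pl file is written out here only so the example runs standalone):

```perl
use strict;
use warnings;

# shared.pl would normally live on disk; written here so the
# sketch is self-contained.
my $shared = 'shared.pl';
open my $fh, '>', $shared or die $!;
print {$fh} <<'EOF';
our $db_name = 'poker';
our @tables  = qw(visits players);
1;   # a do'd file should end with a true value
EOF
close $fh;

# 'do' compiles the file in the current package. 'my' variables in
# it are lexical to that file and disappear afterwards; 'our'
# variables are package globals that survive.
our ($db_name, @tables);
do "./$shared" or die "couldn't run $shared: " . ($@ || $!);

print "$db_name: @tables\n";
unlink $shared;
```

    The same effect can be had without globals by having the shared file return a hash reference of settings as its last expression.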

HASH Hierarchy
3 direct replies — Read more / Contribute
by dirtdog
on Feb 16, 2018 at 10:40

    Hi Monks, I've got a working snippet of code that will take parameters and stuff them into a hash, to be used later in the program to keep only the highest-ranked value (0 is top-ranked) if it exists.

    hierarchy.pl POPE PATRIARCH ARCHBISHOP CARDINAL
    #!/usr/bin/env perl
    use strict;

    my @hierarchy;
    my %hierarchy;
    my $order = 0;

    while ($#ARGV >= 0) {
        push @hierarchy, shift;
    }

    for (@hierarchy) {
        $hierarchy{$_} = $order;
        $order++;
    }

    while ( my ($key, $value) = each %hierarchy ) {
        print "$key: $value\n";
    }

    The code appears to work, but I thought there might be a better way to do it so thought I'd check with the Monks.

    Much appreciated!
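    For what it's worth, the whole build can collapse into a single hash-slice assignment; a sketch of the same ranking, with the command-line arguments hard-coded so it runs standalone:

```perl
use strict;
use warnings;

my @titles = qw(POPE PATRIARCH ARCHBISHOP CARDINAL);   # or simply @ARGV

# Hash slice: assign each title its position in one step.
my %rank;
@rank{@titles} = 0 .. $#titles;

print "$_: $rank{$_}\n"
    for sort { $rank{$a} <=> $rank{$b} } keys %rank;
```

    The slice replaces both the while-shift loop and the counter: the keys come from the list, and the values are the matching indexes.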

tinyDNS deconstruct.
2 direct replies — Read more / Contribute
by 0xdeadbad
on Feb 15, 2018 at 12:40

    Could somebody lend a hand with deconstructing tinydns CAA records please.

    "\000\005issueentrust.net" > "CAA IN 0 issue entrust.net" [flag] [tag] [issuer]

    The first octal escape ( \000 ) is the flag byte.
    The second ( \005 ) is the tag length, used to split the tag and issuer out of the ASCII string.

    If I could just get "\000\005issueentrust.net" translated to "0,5,issueentrust.net" that would be a great help. I've been looking at pack and unpack but cannot work it out as I'm not a real programmer.

    many thanks
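    A sketch with unpack: the template 'C C a*' pulls out the flag byte, the tag-length byte, and the remaining ASCII, and substr then splits the tag from the issuer using that length. This assumes the record is exactly flag + tag-length + tag + value, per the CAA RDATA layout.

```perl
use strict;
use warnings;

my $record = "\000\005issueentrust.net";

# CAA RDATA: one flag byte, one tag-length byte, then tag + value.
my ($flag, $taglen, $rest) = unpack 'C C a*', $record;
my $tag    = substr $rest, 0, $taglen;
my $issuer = substr $rest, $taglen;

print "$flag,$taglen,$tag$issuer\n";   # 0,5,issueentrust.net
print "CAA IN $flag $tag $issuer\n";   # CAA IN 0 issue entrust.net
```

    'C' is an unsigned byte, so the escapes need no special handling; they are just bytes 0 and 5 in the string.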
Use function as a regex
4 direct replies — Read more / Contribute
by stevieb
on Feb 15, 2018 at 11:22

    I've got a very large and complex distribution where several of the modules use a pretty high number of somewhat complex regexes. I have decided instead of having them peppered throughout the code, I'd create a new module, Regex.pm that would house and return these regexes based on name.

    Now, this all works well and fine after some fiddling and learning where certain flags need to be set. Here is a basic example:

    use warnings;
    use strict;

    package Re;
    {
        my %h = (
            re => qr/
                [Pp]erl-\d\.\d+\.\d+(?:_\w+)?
                \s+===.*?
                (?=(?:[Pp]erl-\d\.\d+\.\d+(?:_\w+)?\s+===|$))
            /xs,
        );
        sub re { return $h{re}; }
    }

    package main;
    {
        my $str;
        {
            local $/;
            $str = <DATA>;
        }
        my $re = Re::re();
        my @results = $str =~ /$re/g;
        print scalar @results;
    }

    __DATA__
    perl-5.26.1
    ==========
    Reading '/home/spek/.cpan/Metadata'
    Database was generated on Tue, 13 Feb 2018 15:29:02 GMT
    App::cpanminus is up to date (1.7043).
    --> Working on .
    Configuring /home/spek/repos/mock-sub ... OK
    <== Installed dependencies for .. Finishing.
    --> Working on .
    Configuring /home/spek/repos/mock-sub ... Generating a Unix-style Makefile
    Writing Makefile for Mock::Sub
    Writing MYMETA.yml and MYMETA.json
    OK
    Building and testing Mock-Sub-1.10 ... Skip blib/lib/Mock/Sub.pm (unchanged)
    Skip blib/lib/Mock/Sub/Child.pm (unchanged)
    Manifying 2 pod documents
    PERL_DL_NONLAZY=1 "/home/spek/perl5/perlbrew/perls/perl-5.26.1/bin/perl" "-MExtUtils::Command::MM" "-MTest::Harness" "-e" "undef *Test::Harness::Switches; test_harness(0, 'blib/lib', 'blib/arch')" t/*.t
    t/00-load.t .................... ok
    t/01-called.t .................. ok
    t/02-called_count.t ............ ok
    t/03-instantiate.t ............. ok
    t/04-return_value.t ............ ok
    t/05-side_effect.t ............. ok
    t/06-reset.t ................... ok
    t/07-name.t .................... ok
    t/08-called_with.t ............. ok
    t/09-void_context.t ............ ok
    t/10-unmock.t .................. ok
    t/11-state.t ................... ok
    t/12-mocked_subs.t ............. ok
    t/13-mocked_objects.t .......... ok
    t/14-core_subs.t ............... ok
    t/15-remock.t .................. ok
    t/16-non_exist_warn.t .......... ok
    t/17-no_warnings.t ............. ok
    t/18-bug_25-retval_override.t .. ok
    t/19-return_params.t ........... ok
    t/manifest.t ................... skipped: Author tests not required for installation
    t/pod-coverage.t ............... skipped: Author tests not required for installation
    t/pod.t ........................ skipped: Author tests not required for installation
    All tests successful.
    Files=23, Tests=243, 2 wallclock secs ( 0.13 usr 0.04 sys + 1.75 cusr 0.13 csys = 2.05 CPU)
    Result: PASS
    OK
    Successfully tested Mock-Sub-1.10

    In the code, I've got this:

    my $re = Re::re();
    my @results = $str =~ /$re/g;

    What I'm wondering, and haven't been able to sort out whether it's possible, is skipping the variable instantiation and using the function call directly inside the match, like this:

    my @results = $str =~ /Re::re()/g;

    Doable, or am I chasing down something impossible?
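    It can't be written as the literal /Re::re()/, which would match that text, but since a compiled regex interpolates like any scalar, the call can be embedded with the usual inline-dereference trick @{[ ... ]} (or ${ \ ... }). A small self-contained sketch, with a toy Re package standing in for the real Regex.pm:

```perl
use strict;
use warnings;

package Re;
my %h = ( num => qr/(\d+)/ );
sub re { return $h{num} }

package main;
my $str = 'abc 42 def 7';

# A compiled regex interpolates like any scalar, so the function
# call can be embedded directly inside the match:
my @results = $str =~ /@{[ Re::re() ]}/g;

print "@results\n";   # 42 7
```

    Note the call then happens on every match-statement execution, so caching it in a variable (as in the original) is still the cheaper choice inside hot loops.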

Use Net::OpenSSH and File::Find::Rule together.
1 direct reply — Read more / Contribute
by garcimo
on Feb 15, 2018 at 09:52
    Hello. I would like to search a remote Solaris system for files matching a certain pattern that are older than one hour. The find on Solaris does not have parameters like GNU find's -mmin, so I created this script, which does more or less what I want when run on the remote system.
    #!/usr/bin/perl
    use File::Find::Rule;
    use POSIX qw(strftime);

    my $today   = time();
    my $onehour = $today - (60 * 60);

    my @files = File::Find::Rule->file()
                                ->name("*.0")
                                ->mtime("<$onehour")
                                ->in("/mypath/");

    for my $file (@files) {
        print "$file\n";
    }
    Now I want to run this from a central system using Net::OpenSSH, so that it connects to the Solaris host over ssh, finds the files, and gives me the output.
    use Net::OpenSSH;

    my $dir  = '/mypath';
    my $host = 'myhost';

    my $ssh = Net::OpenSSH->new($host, user => 'adm_garcimo');
    $ssh->error
        and die "Couldn't establish SSH connection: " . $ssh->error;
    Is there a way to merge the two scripts on one central server without having to scp the script to every host it needs to search? I hope my question is clear.
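    One way to keep everything on the central server is to feed the finder script to the remote perl on stdin, so nothing has to be scp'd: Net::OpenSSH's capture method accepts a stdin_data option for exactly this. In the sketch below the connection is guarded behind a REMOTE_FIND_HOST environment variable (a hypothetical name) so it compiles and runs without a live host; it also still assumes File::Find::Rule is installed on the remote side.

```perl
use strict;
use warnings;

# The remote half, kept as a string on the central server.
my $script = <<'EOF';
use File::Find::Rule;
my $cutoff = time() - 60 * 60;
print "$_\n" for File::Find::Rule->file
                                 ->name('*.0')
                                 ->mtime("<$cutoff")
                                 ->in('/mypath/');
EOF

if ( my $host = $ENV{REMOTE_FIND_HOST} ) {
    require Net::OpenSSH;
    my $ssh = Net::OpenSSH->new($host, user => 'adm_garcimo');
    $ssh->error
        and die "Couldn't establish SSH connection: " . $ssh->error;

    # 'perl -' reads its program from stdin; stdin_data supplies it.
    my @files = $ssh->capture({ stdin_data => $script }, 'perl', '-');
    print @files;
}
```

    If File::Find::Rule is not available on the Solaris hosts, the remote snippet would need rewriting against core File::Find, which ships with every perl.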
Running a perl script with root, changing to another user and then back to root?
3 direct replies — Read more / Contribute
by morthed
on Feb 15, 2018 at 09:13

    I've tried to search a lot around the web for this question, to no avail.

    Currently I'm building a Perl script that needs to do a specific job for each line in some database. The records contain several arguments, including a "user" field. The script runs as root, and for each line in the db it needs to switch to the user provided and operate as that user, finishing the round and exiting back to root, and then, on the next iteration, running as a different user.

    So far I've tried:

    using POSIX::setuid($User) in order to switch to the user within the same script, meaning:

    while (my $line = <$DB>) {
        # ... some code ...
        POSIX::setuid($User);   # switch to user
        # ... do stuff ...
        POSIX::setuid(0);       # return to root
    }

    But this solution didn't work so well: after changing once, it was stuck as that user through all remaining iterations of the loop and couldn't return to root, which is obviously logical because it no longer has the required permissions.

    Since this didn't work, I thought maybe I should split the job into two scripts. One script runs as root, while the other is called by the first and then uses POSIX::setuid. I thought that because it's a subprocess/shell it would return itself to root once the second script's job is done, but that doesn't work either... I'm kind of out of ideas; anything you guys think will help?

    ANOTHER THING: I know this is probably incorrect; that's why I'm asking around... please keep mean comments away, or stuff like "you don't know what you're doing"; if I did, I wouldn't have asked... Basically I'm looking for a safe way to change the euid/uid of the SECOND SCRIPT only, keeping root as the user for the main one. Is there a way to do that? Thanks!
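    Once a process that started as root calls setuid(), its real, effective, and saved UIDs all change at once, so there is no way back; that is why the loop got stuck. The usual pattern is to fork a child per record, drop privileges only in the child, and let the parent (still root) wait for it. A sketch; the current UID stands in for the DB's user field so it runs unprivileged:

```perl
use strict;
use warnings;
use POSIX ();

# In the real script each UID would come from the DB row; the current
# UID is used here so the sketch works without root.
my @uids = ( $< );

my $last_status;
for my $uid (@uids) {
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;

    if ($pid == 0) {                        # child
        defined POSIX::setuid($uid)
            or die "setuid($uid): $!";
        # ... do the per-user work here, running as $uid ...
        exit 0;
    }

    waitpid $pid, 0;                        # parent: still root, just waits
    $last_status = $? >> 8;
}
print "child exit status: $last_status\n";
```

    This is also what the two-script idea amounts to: the child can exec the second script after dropping privileges, and the root parent is never affected.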

Blockchain and Perl
1 direct reply — Read more / Contribute
by baxy77bax
on Feb 15, 2018 at 05:46
    Hi

    This is just an info question. In light of the blockchain craze, I was wondering: are there any tutorials on how to make your own with Perl, or anything related? How to make a Dapp, maybe? I saw there was a post (here) two years ago on the subject, but I could not find any recent posts. I myself am not very familiar with the tech, but I would love to learn, and if I could learn it through Perl, that would be awesome :)

    Thank you !

Strange memory growth
5 direct replies — Read more / Contribute
by spica1001
on Feb 14, 2018 at 13:02

    Hi all. Here's an odd one, to me anyway. I'm running a script to process very large XML files with embedded JSON. When I run it, memory grows indefinitely (the file can be hundreds of GB, and RAM usage reaches 25 GB+). Boiling it down, I reach the code below. If I remove the line marked "## THIS LINE", memory stays static; leave it in and it grows again. Adding in a load of undefs seems to make no difference. It's evidently keeping the hash around, but I can't see how to 'free' it.

    Why would accessing a non-existent hash value cause that, or of course even better how do I prevent it?! Thanks for any help...

    use JSON;

    open(IN, "<:utf8", "$ARGV[0]");
    while (<IN>) {
        if (m!^\s+<text.*?>({[^\{\|].+})</text>!) {
            my $jt = $+;
            $jt =~ s/\&quot;/\"/g;
            my $json = new JSON;
            my $jp = $json->allow_nonref->utf8->relaxed->decode($jt);
            my $c = $jp->{'claims'};
            # "claims":{"P31":[{"mainsnak":{"snaktype":"value","property":"P31","hash":"...","datavalue":{"value":{"entity-type":"item","numeric-id":5},...}...}...}],
            if (ref($c) eq 'HASH') {
                foreach my $ch (keys %$c) {
                    if (ref($c->{$ch}) eq 'ARRAY') {
                        foreach my $cg (@{$c->{$ch}}) {
                            if (defined $cg->{'mainsnak'}->{'datavalue'}->{'value'}->{'notexist'}) {}  ## THIS LINE
                        }
                    }
                }
            }
        }
    }
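    The growth is very likely autovivification: merely reading a deep path like $cg->{mainsnak}{datavalue}{value}{notexist} creates every missing intermediate hash as a side effect, so each "skipped" lookup still plants new empty hashes in the decoded structure. A small demonstration, along with a guarded lookup that short-circuits instead of vivifying:

```perl
use strict;
use warnings;

my $cg = { mainsnak => {} };

# Guarded lookup: step down one level at a time with &&, so a missing
# level short-circuits instead of being autovivified by the next arrow.
my $snak = $cg->{mainsnak};
my $dv   = $snak && $snak->{datavalue};
my $v    = $dv   && $dv->{value};
print "found\n" if $v && defined $v->{notexist};

my $keys_after_guarded = scalar keys %{ $cg->{mainsnak} };   # still 0

# The original chained lookup creates the missing levels as it goes:
my $x = $cg->{mainsnak}{datavalue}{value}{notexist};

my $keys_after_chained = scalar keys %{ $cg->{mainsnak} };   # now 1
print "$keys_after_guarded -> $keys_after_chained\n";        # 0 -> 1
```

    The CPAN pragma `no autovivification;` is another route, turning the vivification off lexically without rewriting the lookups.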
crypt sometimes returning undef
4 direct replies — Read more / Contribute
by Beaker
on Feb 14, 2018 at 04:39

    I've moved to a new server, which has changed my Perl version from v5.10.1 to v5.16.3

    The following code is designed to generate a temporary user login password.

    srand(time ^ $$);
    my @passset = ('a'..'k', 'm'..'n', 'p'..'z', '2'..'9');
    my $passwrd1 = "";
    for (my $i = 0; $i < 8; $i++) {
        $passwrd1 .= $passset[ int(rand($#passset + 1)) ];
    }
    my $cryptedpass = crypt($passwrd1, substr($never_blank_value, 0, 2));

    use Babel;
    my $y = new Babel;
    my $newpasswrd = $y->encode($cryptedpass, "$passwrd1");

    Since moving servers, the value of $cryptedpass is sometimes undef, which throws warnings from Babel.pm.
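    A likely culprit, assuming nothing else changed: newer glibc crypt() implementations return undef (EINVAL) when the salt contains characters outside [a-zA-Z0-9./], where older versions silently accepted them, and substr($never_blank_value, 0, 2) can easily produce such a salt. A sketch that builds the two-character salt from the valid set explicitly:

```perl
use strict;
use warnings;

# Build a two-character salt only from characters crypt() accepts.
my @salt_chars = ('a' .. 'z', 'A' .. 'Z', '0' .. '9', '.', '/');
my $salt = join '', map { $salt_chars[ rand @salt_chars ] } 1 .. 2;

my $cryptedpass = crypt('s3kr1tpw', $salt);
defined $cryptedpass
    or die "crypt returned undef for salt '$salt'";
print "$salt => $cryptedpass\n";
```

    Logging the salt whenever $cryptedpass comes back undef would confirm whether $never_blank_value is the source of the bad characters.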

system(find) unable to find a file even if the file exists
5 direct replies — Read more / Contribute
by SitAllDay
on Feb 14, 2018 at 03:48

    Hi guys, I am new to Perl. Recently I received a task to find a file using system("find ..."), however the output keeps displaying "File Not Found"...

    use strict;
    use warnings;

    my $destination = 'C:\Users\Documents';

    if (-d ${destination}) {
        print "Info: Detected folder exists. \n";
    }
    else {
        print "Folder not found!\n";
    }

    print "Unzip destination folder\n";
    system("find $destination -name '*.txt'");

    And here is what shown in my terminal:

    C:\Users\Documents>perl test3.pl

    Info: Detected folder exists.

    Unzip destination folder

    File not found - '*.txt'

    Hopefully someone can help me with this. Thanks a lot!
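    The catch is that on Windows, find.exe is a text-search tool, not the Unix file finder: it looks for the string '*.txt' *inside* files, which is why it reports "File not found - '*.txt'". Core File::Find walks directories portably instead; a sketch, with a scratch folder created so it runs self-contained (in the real script $destination would stay 'C:\Users\Documents'):

```perl
use strict;
use warnings;
use File::Find;
use File::Temp qw(tempdir);

# Scratch folder with one .txt file, so the sketch is self-contained.
my $destination = tempdir( CLEANUP => 1 );
open my $fh, '>', "$destination/notes.txt" or die $!;
print {$fh} "hello\n";
close $fh;

# Windows find.exe searches text inside files; File::Find walks the
# directory tree and lets us test each filename ourselves.
my @txt;
find( sub { push @txt, $File::Find::name if /\.txt\z/i }, $destination );

print "$_\n" for @txt;
```

    `my @txt = glob "$destination/*.txt";` is a one-line alternative when only the top level of the folder matters.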

New Meditations
Do you like Perl?
4 direct replies — Read more / Contribute
by choroba
on Feb 13, 2018 at 15:21
    Do you like Perl? Do you count yourself among people?

    If both your answers are "Yes", you might want to add your reasons to the discussion Why do people like Perl? on dev.to.

    I guess we've had similar threads here over the years, but talking to a broader audience can be different.

    ($q=q:Sq=~/;[c](.)(.)/;chr(-||-|5+lengthSq)`"S|oS2"`map{chr |+ord }map{substrSq`S_+|`|}3E|-|`7**2-3:)=~y+S|`+$1,++print+eval$q,q,a,
New Cool Uses for Perl
MCE gather and relay demonstrations
No replies — Read more | Post response
by marioroy
on Feb 13, 2018 at 00:32

    Fellow Monks,

    I received a request from John Martel to process a large flat file and expand each record to many records based on splitting out items in field 4 delimited by semicolons. Each row in the output is given a unique ID starting with one while preserving output order.

    Thank you, John. This is a great use-case for MCE::Relay (2nd example).

    Input File -- Possibly larger than 500 GiB in size

    foo|field2|field3|item1;item2;item3;item4;itemN|field5|field6|field7
    bar|field2|field3|item1;item2;item3;item4;itemN|field5|field6|field7
    baz|field2|field3|item1;item2;item3;item4;itemN|field5|field6|field7
    ...

    Output File

    000000000000001|item1|foo|field2|field3|field5|field6|field7
    000000000000002|item2|foo|field2|field3|field5|field6|field7
    000000000000003|item3|foo|field2|field3|field5|field6|field7
    000000000000004|item4|foo|field2|field3|field5|field6|field7
    000000000000005|itemN|foo|field2|field3|field5|field6|field7
    000000000000006|item1|bar|field2|field3|field5|field6|field7
    000000000000007|item2|bar|field2|field3|field5|field6|field7
    000000000000008|item3|bar|field2|field3|field5|field6|field7
    000000000000009|item4|bar|field2|field3|field5|field6|field7
    000000000000010|itemN|bar|field2|field3|field5|field6|field7
    000000000000011|item1|baz|field2|field3|field5|field6|field7
    000000000000012|item2|baz|field2|field3|field5|field6|field7
    000000000000013|item3|baz|field2|field3|field5|field6|field7
    000000000000014|item4|baz|field2|field3|field5|field6|field7
    000000000000015|itemN|baz|field2|field3|field5|field6|field7
    ...

    Example One

    This example configures a custom function for preserving output order. Unfortunately, the sprintf function alone involves extra CPU time causing the manager process to fall behind. The workers may idle while waiting for the manager process to respond to the gather request.

    use strict;
    use warnings;

    use MCE::Loop;

    my $infile  = shift or die "Usage: $0 infile\n";
    my $newfile = 'output.dat';

    open my $fh_out, '>', $newfile or die "open error $newfile: $!\n";

    sub preserve_order {
        my ($fh) = @_;
        my ($order_id, $start_idx, $idx, %tmp) = (1, 1);

        return sub {
            my ($chunk_id, $aref) = @_;
            $tmp{ $chunk_id } = $aref;

            while ( my $aref = delete $tmp{ $order_id } ) {
                foreach my $line ( @{ $aref } ) {
                    $idx = sprintf "%015d", $start_idx++;
                    print $fh $idx, $line;
                }
                $order_id++;
            }
        }
    }

    MCE::Loop::init {
        chunk_size => 'auto', max_workers => 3,
        gather => preserve_order($fh_out)
    };

    mce_loop_f {
        my ($mce, $chunk_ref, $chunk_id) = @_;
        my @buf;

        foreach my $line (@{ $chunk_ref }) {
            $line =~ s/\r//g;
            chomp $line;

            my ($f1, $f2, $f3, $items, $f5, $f6, $f7) = split /\|/, $line;
            my @items_array = split /;/, $items;

            foreach my $item (@items_array) {
                push @buf, "|$item|$f1|$f2|$f3|$f5|$f6|$f7\n";
            }
        }

        MCE->gather($chunk_id, \@buf);

    } $infile;

    MCE::Loop::finish();
    close $fh_out;

    Example Two

    To take the sprintf work out of the manager process, another way is to use MCE::Relay to increment the ID value. Workers obtain the current ID value and increment/relay it for the next worker, ordered by chunk ID behind the scenes. Workers call sprintf in parallel. This allows the manager process (out_iter_fh) to accommodate up to 32 workers and not fall behind. It also depends on IO performance, of course.

    The MCE::Relay module is loaded automatically whenever the MCE init_relay option is specified.

    use strict;
    use warnings;

    use MCE::Loop;
    use MCE::Candy;

    my $infile  = shift or die "Usage: $0 infile\n";
    my $newfile = 'output.dat';

    open my $fh_out, '>', $newfile or die "open error $newfile: $!\n";

    MCE::Loop::init {
        chunk_size => 'auto', max_workers => 8,
        gather => MCE::Candy::out_iter_fh($fh_out),
        init_relay => 1
    };

    mce_loop_f {
        my ($mce, $chunk_ref, $chunk_id) = @_;
        my @lines;

        foreach my $line (@{ $chunk_ref }) {
            $line =~ s/\r//g;
            chomp $line;

            my ($f1, $f2, $f3, $items, $f5, $f6, $f7) = split /\|/, $line;
            my @items_array = split /;/, $items;

            foreach my $item (@items_array) {
                push @lines, "$item|$f1|$f2|$f3|$f5|$f6|$f7\n";
            }
        }

        my $idx = MCE::relay { $_ += scalar @lines };

        my $buf = '';
        foreach my $line ( @lines ) {
            $buf .= sprintf "%015d|%s", $idx++, $line;
        }

        MCE->gather($chunk_id, $buf);

    } $infile;

    MCE::Loop::finish();
    close $fh_out;

    Relay accounts for the worker handling the next chunk_id value. Therefore, do not call relay more than once inside the block. Doing so will cause IPC to stall.

    Regards, Mario

Easily back up all of your Github repositories and/or issues
No replies — Read more | Post response
by stevieb
on Feb 11, 2018 at 16:25

    It's been in the works at the lower end of my priority list, but after a bit of a bug-closing weekend, I thought I'd tackle getting out an initial release of Github::Backup.

    The cloud is a great thing, until the sun evaporates it one way or another. Github, although fantastically reliable, is prone to issues just like any other site on the Internet. I'd go as far as to say that even they could be prone to data loss in very rare circumstances.

    This distribution, which provides a command-line binary, allows you to quickly and easily back up your repositories and issues to your local machine. The repositories are cloned, so all data is retrieved as-is as legitimate Git repos, and the issues are fetched and stored as JSON data. Useful if there is ever a catastrophic issue at Github, or simply for offline perusal of your information.

    At a basic level, you need to send in your Github username, API token (see this), a directory to stash the data retrieved, and a flag to signify you want to back up either your repos, issues or both.

    github_backup \
        -u stevieb9 \
        -t 003e12e0780025889f8da286d89d144323c20c1ff7 \
        -d /home/steve/github_backup \
        -r \
        -i

    That'll back up both repos and issues. The structure of the backup directory is as follows:

    backup_dir/
        - issues/
            - repo1/
                - issue_id_x
                - issue_id_y
            - repo2/
                - issue_id_a
        - repo1/
            - repository data
        - repo2/
            - repository data

    Now, most don't like supplying keys/tokens/passwords on the command-line or within a script, so you can stash your Github API token into the GITHUB_TOKEN environment variable, and we'll fetch it from there instead:

    github_backup -u stevieb9 -d /home/steve/github_backup -r -i

    Full usage for the binary:

    Usage: github_backup -u username -t github_api_token -d /backup/directory -r -i

    Options:

    -u | --user     Your Github username
    -t | --token    Your Github API token
    -d | --dir      The backup directory
    -p | --proxy    Optional proxy (https://proxy.example.com:PORT)
    -r | --repos    Back up all of your repositories
    -i | --issues   Back up all of your issues
    -h | --help     Display this help page

    The API is very straightforward as well:

    use warnings;
    use strict;

    use Github::Backup;

    # token stashed in GITHUB_TOKEN env var
    my $gh = Github::Backup->new(
        api_user => 'stevieb9',
        dir      => '/home/steve/github_backup'
    );

    # back up all repos
    $gh->repos;

    # back up all issues
    $gh->issues;

    This is one distribution that I've released before being happy with my unit-test regimen, so that's on the imminent to-do list. There are tests, but as always, there can never be enough. In this case, I myself am not even happy, so if you run into any issues, please open a ticket, or reply back here.

    Going forward, I plan on adding functionality to independently back up *all* Github data for a user, not just repos and issues. I also plan to test restore operations, but that's not anything I'm considering short-term.

    Have fun!

    -stevieb

    Disclaimer: Also posted on my blog.

As of 2018-02-18 05:55 GMT