Re: While Loops
by BrowserUk (Patriarch) on Jan 08, 2013 at 14:18 UTC
If you just want to page through a file 100 lines at a time: perl -pE"$. % 100 or <STDIN>" yourfile.
The same technique can be used within a script.
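Inside a script, the same trick, keyed off the input line counter $. and a throwaway read from STDIN, might look like this (a sketch; the filename is assumed):

open my $fh, '<', 'yourfile' or die "yourfile: $!";
while (<$fh>) {
    print;
    <STDIN> unless $. % 100;    # pause for Enter after every 100th line
}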
With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.
Re: While Loops
by Riales (Hermit) on Jan 08, 2013 at 13:48 UTC
Assuming you don't need @data for anything else after the printing:
while (scalar @data) {
    for (1 .. 100) {
        formatted_print(shift @data) if scalar @data;
    }
    # Do whatever else between runs of 100
}

sub formatted_print {
    my $line_arrayref = shift;
    # Print the line in whatever format you want
}
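If emptying @data as you go is acceptable anyway, splice does the same destructive chunking a little more compactly; a minimal sketch:

while (my @chunk = splice(@data, 0, 100)) {
    formatted_print($_) for @chunk;
    # Do whatever else between runs of 100
}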
Re: While Loops
by CountZero (Bishop) on Jan 08, 2013 at 14:54 UTC
Wait. Stop. Take one step back and ask yourself: "Why do I need to print the records 100 at a time?" In the end there is no difference between printing the records as soon as they are read in, waiting till the end and then printing them all at once, or anything in between these two extremes. Either way you end up with a screen (or several screens) full of data. Output is usually buffered anyhow, so printing in "blocks" of 100 is unlikely to speed things up or make your program more performant.
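For instance, both of the following end up in the same STDOUT buffer, and if flushing behaviour ever does matter, it is controlled separately (a sketch, with @records standing in for your data):

print "$_\n" for @records;          # one print per record
print map { "$_\n" } @records;      # one print for the lot

$| = 1;    # autoflush STDOUT, only if unbuffered output is really wanted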
CountZero
"A program should be light and agile, its subroutines connected like a string of pearls. The spirit and intent of the program should be retained throughout. There should be neither too little nor too much, neither needless loops nor useless variables, neither lack of structure nor overwhelming rigidity." - The Tao of Programming, 4.1 - Geoffrey James
My blog: Imperial Deltronics
I'm not really trying to print; I was just using that as an example. I apologize for not being specific enough. My code is below. I'm basically opening a file and storing it into an array. I then take the first column and query our QuickBase database to return the primary key, then I map that primary key onto @data and import it back into QuickBase. The problem is that I can only make the gen_results_table call for 100 stores at a time, so I have to run that gen_results_table statement once for every 100 stores.
#!/usr/bin/perl
require 'c:/quickbase.pm';    # REF Perl SDK
use LWP;
my $url = "http://somesite.com/forcast.txt";
my $ua = LWP::UserAgent->new();
$ua->proxy(['http'],'http://ourproxy.oursite.com:80');
$ua->credentials($netloc, $realm, $uname, $pass);
my $response = $ua->get($url);
open (FILE, ">C:\\snow.txt") || die "Can't open Snow.txt $!\n";
print FILE $response->content;
close(FILE);
open (FILE, "<C:\\snow.txt") || die "Can't open Snow.txt $!\n";
my $column=0;
while ($line = <FILE>) {
    chomp($line);
    push @data, [split /\t/, $line];
}
close(FILE);
@column = map {$$_[0]} @data;
shift(@column);
$stores = join( ' OR ',@column);
my $qdb = HTTP::QuickBase->new();
$qdb->proxy('http://ourproxy.oursite.com:80');
$qdb->url_prefix("https://www.quickbase.com/db");
$qdb->authenticate("MyUserName","MyPass");
$qdb->apptoken('bj6azdybdeg8akbcu5ggabfdedsi');
my $recid = $qdb->gen_results_table("babvce7n4", {query=>"{'6'.EX.$stores}", clist=>"3", options=>"csv"});
my @recids = split /\n|\r\n/,$recid;
my $i = 0;
@data = map{unshift(@{$_},$recids[$i++]);$_} @data;
$a = join( "\n", map {join(',',@{$_})} @data);
my $import = $qdb->import_from_csv("bhabcde7f", $a, "6.7.8.9.10.11.12.13.14.15.16.17.18.19.20.21.22.23.24.25", "1");
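What I think I need is to run that query in batches, something like this rough, untested sketch (assuming gen_results_table can simply be called once per 100 stores, and that it returns the recids in the same order as the stores in each batch):

my @recids;
my @stores = @column;                     # store numbers from the file
while (my @batch = splice(@stores, 0, 100)) {
    my $stores = join ' OR ', @batch;
    my $result = $qdb->gen_results_table("babvce7n4",
        {query=>"{'6'.EX.$stores}", clist=>"3", options=>"csv"});
    push @recids, split /\n|\r\n/, $result;
}
# then map @recids onto @data as before

Is that the right idea?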
Keep in mind I am new to Perl.
Re: While Loops
by blue_cowdawg (Monsignor) on Jan 08, 2013 at 13:50 UTC
If I am understanding your question correctly, how about:
#!/usr/bin/perl -w
use strict;

my @data = ();
my $file = 'c:\text.txt';

open FILE, "< $file" or die "$file: $!";
while (@data < 100 and defined(my $line = <FILE>)) {
    chomp $line;
    push @data, [split /\t/, $line];
}
Peter L. Berghold -- Unix Professional
Peter -at- Berghold -dot- Net; AOL IM redcowdawg Yahoo IM: blue_cowdawg
I would like to read the first 100 records from the text file, print them, then read the next 100 and print them, and so on until I run out of records in the text file.
With the improved spec, you'd want something more like:
#!/usr/bin/perl -w
use strict;

my @data = ();
my $file = 'c:\text.txt';

open my $fh, "<", $file or die "$file: $!";
while (my $line = <$fh>) {
    push @data, $line;
    if (@data == 100) {
        print @data;
        @data = ();
    }
}
print @data;    # flush the final, possibly short, batch
#11929 First ask yourself `How would I do this without a computer?' Then have the computer do it the same way.
#!/usr/bin/perl -w
use strict;

my @data = ();
my $file = 'c:\text.txt';
my $count = 0;

open FILE, "< $file" or die "$file: $!";
while (my $line = <FILE>) {
    chomp $line;
    push @data, [split /\t/, $line];
    $count++;
    next if $count < 100;
    while (@data) {
        printf "%s\n", join(",", @{ shift @data });
    }
    $count = 0;
}
printf "%s\n", join(",", @{ shift @data }) while @data;    # leftovers
So... how much more of your homework do you need me to do?
Peter L. Berghold -- Unix Professional
Peter -at- Berghold -dot- Net; AOL IM redcowdawg Yahoo IM: blue_cowdawg
We're really guessing at what you are trying to do, so in that spirit here's a real-life example that has enough stuff in it to be useful on a few fronts...
Test file, generated with # ls -la > test.txt (abbreviated):
-rw-r--r-- 1 OQ91 Domain Users 273532 Sep 25 15:29 spy.csv.bak
-rw-r--r-- 1 OQ91 Domain Users 14704 Jan 8 12:08 spy.sdf
-rw-r--r-- 1 OQ91 Domain Users 4329 Oct 17 10:28 spy.sdf.bak
-rwxr-xr-x 1 OQ91 Domain Users 1196465 Sep 26 03:36 spywork.ods
-rw-r--r-- 1 OQ91 Domain Users 312739 Jan 8 09:40 spyx.csv
-rwxrwxr-x 1 OQ91 Domain Users 4875 Dec 26 22:05 stone.pl
-rwxr-xr-x 1 OQ91 Domain Users 2396 Oct 25 09:57 stone.pl.121025
-rwxr-xr-x 1 OQ91 Domain Users 3375 Oct 29 10:27 stone.pl.121026
-rwxr-xr-x 1 OQ91 Domain Users 3811 Oct 29 07:52 stone.pl.121029
-rwxr-xr-x 1 OQ91 Domain Users 3809 Nov 8 11:06 stone.pl.121108
-rwxr-xr-x 1 OQ91 Domain Users 77687808 Sep 26 03:46 Taylor_Book_08.xls
-rw-r--r-- 1 OQ91 Domain Users 9095 Dec 26 22:14 test15.pl
-rw-r--r-- 1 OQ91 Domain Users 4350 Nov 7 01:07 test15.pl.121107
-rw-r--r-- 1 OQ91 Domain Users 4424 Nov 7 11:57 test15.pl.121107a
-rw-r--r-- 1 OQ91 Domain Users 8630 Nov 7 13:57 test15.pl.121107b
-rw-r--r-- 1 OQ91 Domain Users 9039 Nov 8 08:59 test15.pl.121108
-rw-r--r-- 1 OQ91 Domain Users 9156 Dec 26 22:10 test15.pl.121226
-rw-r--r-- 1 OQ91 Domain Users 46987 Dec 4 13:25 zzz.out
$ cat test.txt | perl -l -n -e 'BEGIN {$x="OQ91";$linecount=0;$sum=0;} if((split)[2] eq $x){$linecount++; $sum += (split)[5]; printf "%s\t %s\t\t %s\n",(split)[2],(split)[5],(split)[9]} END {printf "linecount: %d Sum: %d",$linecount,$sum}'
Here's some output (abbreviated):
OQ91 312739 spyx.csv
OQ91 4875 stone.pl
OQ91 2396 stone.pl.121025
OQ91 3375 stone.pl.121026
OQ91 3811 stone.pl.121029
OQ91 3809 stone.pl.121108
OQ91 77687808 Taylor_Book_08.xls
OQ91 9095 test15.pl
OQ91 4350 test15.pl.121107
OQ91 4424 test15.pl.121107a
OQ91 8630 test15.pl.121107b
OQ91 9039 test15.pl.121108
OQ91 9156 test15.pl.121226
OQ91 46987 zzz.out
linecount: 61 Sum: 88200728
It sums the 5th column when the test is true, counts the lines, prints the 2nd, 5th and 9th columns (zero-based split indices), AND it's a one-liner...
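Unrolled into a script, with split called once per line instead of four times, the same logic reads roughly like this (a sketch; tally.pl is a hypothetical name, and the field indices match the one-liner above):

#!/usr/bin/perl
use strict;
use warnings;

my $x = "OQ91";                        # owner we're matching on
my ($linecount, $sum) = (0, 0);

while (<>) {                           # run as: perl tally.pl test.txt
    my @f = split;                     # whitespace-split the ls -la line
    next unless defined $f[2] and $f[2] eq $x;
    $linecount++;
    $sum += $f[5];                     # the size column, (split)[5]
    printf "%s\t %s\t\t %s\n", $f[2], $f[5], $f[9];
}
printf "linecount: %d Sum: %d\n", $linecount, $sum;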
Re: While Loops
by kennethk (Abbot) on Jan 08, 2013 at 13:53 UTC
Your spec is a little ambiguous to me, but if I follow, you can do what you need using Slices and %, perhaps like:
for my $i (0 .. @data/100) {
    my $max = $i * 100 + 99;
    $max = @data - 1 if $max > @data - 1;
    print @data[$i * 100 .. $max];
}
Of course, you'd presumably want to do something between those prints.
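If a module is an option, List::MoreUtils::natatime packages exactly this n-at-a-time chunking as an iterator; a minimal sketch:

use List::MoreUtils qw(natatime);

my $it = natatime 100, @data;
while (my @chunk = $it->()) {
    print @chunk;
    # do something between those prints
}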
#11929 First ask yourself `How would I do this without a computer?' Then have the computer do it the same way.
foreach $a (0 .. @data-1) {
    print or do something ;
} ## end of foreach
If you have the length of the array, it knows when to stop.
- You should not use $a as your index variable, or really ever as a generic variable. $a and $b have special meaning for sort.
- I'm unclear on how your strategy splits the array into chunks of 100 records. There are certainly a myriad of ways to get this done, but it's not obvious to me how this is one of them.
#11929 First ask yourself `How would I do this without a computer?' Then have the computer do it the same way.
Re: While Loops (100 records chunk)
by LanX (Saint) on Jan 08, 2013 at 14:03 UTC
Re: While Loops
by sundialsvc4 (Abbot) on Jan 08, 2013 at 14:36 UTC
The way that I have traditionally approached this problem is this: the ruling logic is the while-loop as written, and the exception to that loop is “100 records at a time.”
Therefore, I would precede that loop by initializing a counter:
my $records_so_far = 0;
Then, within that loop (at the bottom) I would add, e.g.:
if (++$records_so_far >= 100) {
    $records_so_far = 0;
    <<do something interesting here>>
}
... and then, after the loop(!) ...
if ($records_so_far > 0) <<do it again i.e. call the same subroutine>>
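Put together, with a hypothetical process_batch() standing in for the interesting part, and assuming the loop collects each record into @batch from some handle $fh, the shape is:

my @batch;
my $records_so_far = 0;

while (my $record = <$fh>) {          # $fh: whatever handle feeds the loop
    push @batch, $record;
    if (++$records_so_far >= 100) {
        process_batch(@batch);        # <<do something interesting here>>
        @batch = ();
        $records_so_far = 0;
    }
}
process_batch(@batch) if $records_so_far > 0;    # ... and then, after the loop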
Re: While Loops
by tobyink (Canon) on Jan 10, 2013 at 08:25 UTC
open my $fh, '<', 'myfile.log' or die "myfile.log: $!";
until (eof $fh) {
    # grab up to 100 lines; grep drops the undefs <$fh> returns at EOF
    my @hundred_lines = grep defined, map scalar <$fh>, 1 .. 100;
    chomp @hundred_lines;
    ...;    # do something with the lines
}
perl -E'sub Monkey::do{say$_,for@_,do{($monkey=[caller(0)]->[3])=~s{::}{ }and$monkey}}"Monkey say"->Monkey::do'
Block processing of lines
by parv (Parson) on Jan 10, 2013 at 05:36 UTC