
Re^2: Reading File and Seperating into columns

by Jalcock501 (Sexton)
on Sep 26, 2013 at 08:03 UTC ( #1055798=note )

in reply to Re: Reading File and Seperating into columns
in thread Reading File and Seperating into columns

Haha, you've almost nailed it, but it's not different vehicles, it's different types of cover.

I'm just stuck and don't know how I'm going to separate the fields into a readable format. Thanks for pointing out that 99HEADER appeared twice; that slipped by me.

So this is what I have so far:
#!/usr/bin/perl -w
use strict;

my @files = <*.in>;
for my $file (@files) {
    open my $handle, '<', $file or die "Can't read $file: $!";
    chomp( my @lines = <$handle> );
    close $handle;

    my @enr_data = grep { /^99/ } @lines;
    s/99/\n99/g for @enr_data;

    open my $write, '>', "$file.sep" or die "Can't write $file.sep: $!";
    print $write "$_\n" for @enr_data;    # the selected records were never written out
    close $write;                         # was: close($handle) a second time
}
This separates the lines I need from the file. After more data analysis I realised that there are more areas where 99 Factors appear. Basically I just need to cut the fields up so that they can be read by your standard user.
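To show what "cutting the fields up" could look like, here is a minimal sketch. It assumes the records are pipe-delimited with the record type in the first field; the sample lines are invented, not taken from the real files:

```perl
#!/usr/bin/perl
# Hedged sketch: assumed pipe-delimited layout, made-up sample records.
use strict;
use warnings;

my @enr_data = (
    '99HEADER|2013-09-26|POLICY123',
    '99INSFAC|FIRE|2013-09-26|1.25',
);

for my $line (@enr_data) {
    my @fields = split /\|/, $line;
    # Pad every field to 12 characters so the columns line up for a human reader
    printf "%-12s" x @fields, @fields;
    print "\n";
}
```

With a real delimiter and field widths substituted in, the same `split` plus `printf` pattern turns each raw record into an aligned, readable row.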

Replies are listed 'Best First'.
Re^3: Reading File and Seperating into columns
by boftx (Deacon) on Sep 26, 2013 at 22:31 UTC

    This is just a crude outline, but given the large number of different record types (judging by the values in field 1) I would use a hash keyed on the record types you are interested in, with each value being a hashref that includes a formatting string for sprintf. Something like this:

    # This is NOT real code, but just a concept
    my %record_types = (
        '99HEADER' => {
            format => "%s %s",
            code   => undef,
        },
        '99INSFAC' => {
            format => "%s %s %s %04.2f",
            code   => \&process_99insfac,
        },
    );

    for my $line ( @input_lines ) {
        my ($rec_type, @rec_data) = split /\|/, $line;
        next unless exists $record_types{$rec_type};

        # Call a pre-processor if present, maybe skip empty records.
        # Note, the syntax for a proper dispatch table might be wrong here. You
        # will probably need to play with this a bit, but it is nifty when it works.
        next if defined( $record_types{$rec_type}{code} )
             && !$record_types{$rec_type}{code}->( data => \@rec_data );

        say sprintf( $record_types{$rec_type}{format}, @rec_data );
    }
    exit;

    sub process_99insfac {
        my %args     = @_;
        my @rec_data = @{ $args{data} };
        return unless $rec_data[2];          # no date? nothing to do
        # $rec_data[3] = some calculation;   # do something nifty here
        return 1;
    }

    This is a very crude presentation, but I think you can get the idea: you can take advantage of the type hash by adding more info, such as code references to subroutines, to do any special processing if needed. You would need to track entering and leaving each new record structure, but I doubt you would have much trouble with that logic. This approach should give you a lot of flexibility for layout and for dealing with the different sub-record types.
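    To make the dispatch-table idea concrete, here is a small self-contained sketch that actually runs; the record layouts, sample lines, and the doubling "calculation" in the pre-processor are all invented for illustration:

```perl
#!/usr/bin/perl
# Runnable sketch of the dispatch-table pattern; all record layouts,
# sample data, and the factor calculation below are made up.
use strict;
use warnings;
use feature 'say';

my %record_types = (
    '99HEADER' => { format => "%-10s %s",          code => undef },
    '99INSFAC' => { format => "%-10s %-6s %05.2f", code => \&process_99insfac },
);

my @input_lines = (
    '99HEADER|2013-09-26',
    '99INSFAC|FIRE|1.25',
    'IGNORED|whatever',          # unknown type: silently skipped
);

for my $line (@input_lines) {
    my ($rec_type, @rec_data) = split /\|/, $line;
    next unless exists $record_types{$rec_type};

    # Run the pre-processor if one is registered; skip the record if it declines
    my $code = $record_types{$rec_type}{code};
    next if defined $code && !$code->( data => \@rec_data );

    say sprintf $record_types{$rec_type}{format}, $rec_type, @rec_data;
}

sub process_99insfac {
    my %args     = @_;
    my $rec_data = $args{data};
    return unless defined $rec_data->[1];                  # no factor? skip it
    $rec_data->[1] = sprintf '%.2f', $rec_data->[1] * 2;   # made-up calculation
    return 1;
}
```

    Because the pre-processor receives a reference to the record's fields, anything it writes into them is visible when the formatted line is printed, which is what makes the table easy to extend with per-type logic.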

    Update: added example for a preprocessor code ref.

    On time, cheap, compliant with final specs. Pick two.
