in reply to parsing text files continued

Assuming you always have the same fields for each record, here's a lazy way to obtain your CSV:
$ perl -F, -lane '@F % 2 and push @D, [@F] or push @{$D[-1]}, pop @F; }{ $,=q{,}; print @{$_} for @D' input.txt
... running it against your sample input, you will get:
1/3/2007 12:20:01 AM,12.588309,9.432586,20:0.196329,7.418672,3.616305,2.066482,6.873061,0.784989,1.859894,3.249620,0.450952,0.305768,0.823402
1/3/2007 12:49:22 AM,10.958312,13.644527,41:0.483233,7.027840,4.222601,0.305821,7.443877,1.552915,1.202711,5.285398,0.233119,0.425521,0.560862
I repeat, this is very lazy as it assumes you always have the same fields, in the same order ;-)
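In case the one-liner is hard to follow, here is roughly the same logic written out as a readable script. The `records_to_csv` name is mine, not part of the one-liner; it carries the same lazy assumption that every record has the same fields in the same order.

```perl
use strict;
use warnings;

# Readable sketch of the one-liner above. A timestamp line splits into an
# odd number of comma-separated fields and starts a new record; a
# "Name,value" line splits into an even number, and we keep only the value.
sub records_to_csv {
    my @lines = @_;
    my @D;
    for my $line (@lines) {
        my @F = split /,/, $line;
        if (@F % 2) {                    # odd field count: timestamp, new record
            push @D, [@F];
        } else {                         # "Name,value" line: keep the value only
            push @{ $D[-1] }, pop @F;
        }
    }
    return map { join ',', @$_ } @D;
}

my @csv = records_to_csv(
    '1/3/2007 12:20:01 AM',
    'Login,12.588309',
    'SearchLoad,9.432586',
);
print "$_\n" for @csv;    # prints "1/3/2007 12:20:01 AM,12.588309,9.432586"
```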

A better/fancier way would be to gather everything in some hashes or something, and then use some "proper" formatter to dump results as CSV. Here's a way to obtain your hash:

$ perl -MData::Dumper -F, -lane '@F % 2 and ($k)=@F or push @{$D{$k}}, @F; }{ $D{$_} = {@{$D{$_}}} for keys %D; print Dumper \%D' input.txt
$VAR1 = {
          '1/3/2007 12:49:22 AM' => {
                                      'ClientAdd' => '1.552915',
                                      'CMALoad' => '1.202711',
                                      'SearchDelete' => '0.305821',
                                      'CMASave' => '5.285398',
                                      'ClientDelete' => '0.425521',
                                      'CMADelete' => '0.233119',
                                      'Login' => '10.958312',
                                      'SearchCount' => '41:0.483233',
                                      'SearchDetails' => '7.443877',
                                      'Logout' => '0.560862',
                                      'SearchResults' => '7.027840',
                                      'SearchSave' => '4.222601',
                                      'SearchLoad' => '13.644527'
                                    },
          '1/3/2007 12:20:01 AM' => {
                                      'ClientAdd' => '0.784989',
                                      'CMALoad' => '1.859894',
                                      'SearchDelete' => '2.066482',
                                      'CMASave' => '3.249620',
                                      'ClientDelete' => '0.305768',
                                      'CMADelete' => '0.450952',
                                      'Login' => '12.588309',
                                      'SearchCount' => '20:0.196329',
                                      'SearchDetails' => '6.873061',
                                      'Logout' => '0.823402',
                                      'SearchResults' => '7.418672',
                                      'SearchSave' => '3.616305',
                                      'SearchLoad' => '9.432586'
                                    }
        };
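As a script, the hash-gathering approach looks like this. The `gather` name is mine; in real code you would hand the resulting hash-of-hashes to a proper CSV writer such as Text::CSV, but a plain join keeps this sketch dependency-free.

```perl
use strict;
use warnings;

# Build a hash of hashes keyed by timestamp, then print one CSV line per
# timestamp. Column order here is just "sorted key names" for the demo.
sub gather {
    my @lines = @_;
    my (%D, $k);
    for my $line (@lines) {
        my @F = split /,/, $line;
        if (@F % 2) { ($k) = @F }              # timestamp becomes the hash key
        else        { push @{ $D{$k} }, @F }   # accumulate Name,value pairs
    }
    # turn each accumulated pair list into a nested hash
    $D{$_} = { @{ $D{$_} } } for keys %D;
    return \%D;
}

my $D = gather(
    '1/3/2007 12:20:01 AM',
    'Login,12.588309',
    'Logout,0.823402',
);
for my $stamp (sort keys %$D) {
    print join(',', $stamp,
        map { $D->{$stamp}{$_} } sort keys %{ $D->{$stamp} }), "\n";
}
```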


Re^2: parsing text files continued
by grashoper (Monk) on Jul 21, 2008 at 17:08 UTC
    Actually, I need to make it a little more intelligent than what I currently have, as it assumes the fields are all there, and sometimes they are not. For example, my test script might die at some point (say at SearchLoad), and the next iteration would start a new row. When I aggregate this, I get "rows" that don't actually line up with the others, munging my data in Excel and freaking out my plans to put it into a db. So how would I account for this? I am guessing: set up a hash, test for a complete row, and if the row is not complete then dump the data and move on to the next one. But I am just not sure how to do this, as I am not that skilled. Since the rows would have a more or less fixed width, testing for a value in each field might help with this, but again I'm unsure how to do it.

    Thanks AltBlue, I tried running your code but it doesn't run. I get the following errors: string found where operator expected at end of line (do you need to predeclare lane?); bareword found where operator expected near "input" (missing operator before input?); string found where operator expected at line 7 near "Logout" "Login" (do you need to predeclare logout?); syntax error at line 1, next token ???; execution aborted due to compilation errors. Do these switches not work in Windows? I guess "lane" is really a set of command-line switches:

    -l    This option turns on line-ending processing. It can be used to set the output line terminator variable ($\) by specifying an octal value. If no octal number is specified, the output line terminator is set equal to the input line terminator (such as $\ = $/;).

    -a    This option must be used in conjunction with either the -n or -p option. Using the -a option will automatically feed input lines to the split function. The results of the split are placed into the @F variable.

    -n    This option places a loop around your script. It will automatically read a line from the diamond operator and then execute the script. It is most often used with the -e option.

    -e    This option lets you specify a single line of code on the command line. This line of code will be executed in lieu of a script file. You can use multiple -e options to create a multiple-line program, although given the probability of a typing mistake, I'd create a script file instead. Semicolons must be used to end Perl statements just like in a normal script.
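To see what those switches do together, here is roughly what `perl -F, -lane 'BODY'` expands to, written as a plain script (compare the output of `perl -MO=Deparse`). The `fake_lane` helper is purely illustrative, not something from the thread:

```perl
use strict;
use warnings;

# Rough expansion of "perl -F, -lane 'BODY'":
#   -n : wraps BODY in a read loop over the input lines
#   -l : chomps each input line (and makes print append "\n")
#   -a : autosplits each line into @F
#   -F,: sets the autosplit pattern to a comma
sub fake_lane {
    my ($body, @input) = @_;
    for (@input) {               # -n: loop over input lines
        chomp;                   # -l: drop the line terminator
        my @F = split /,/, $_;   # -a with -F,: autosplit into @F
        $body->(@F);             # the code given to -e runs here
    }
}

my @values;
fake_lane(sub { push @values, $_[1] },
          "Login,12.588309\n", "Logout,0.823402\n");
print "@values\n";    # prints "12.588309 0.823402"
```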

      Hm, at first glance your reply sounds kinda messy, so, I guess providing some more details could help, still trying to avoid doing your homework at the same time (you know, the rules) ;-)

      First, let's drop some lines from your second record, so we can see what happens when fields are missing:

      1/3/2007 12:20:01 AM
      1/3/2007 12:49:22 AM

      Now let's rewrite my previous HoH solution and move the "key" (time stamp) *inside* the hash, using an AoH:

      $ perl -MData::Dumper -F, -lane '@F % 2 and push @D, {q{Stamp},@F} or $D[-1] = { %{$D[-1]}, @F } }{ print Dumper @D' input.txt
      $VAR1 = {
                'Stamp' => '1/3/2007 12:20:01 AM',
                'ClientAdd' => '0.784989',
                'CMALoad' => '1.859894',
                'SearchDelete' => '2.066482',
                'CMASave' => '3.249620',
                'CMADelete' => '0.450952',
                'Login' => '12.588309',
                'ClientDelete' => '0.305768',
                'SearchCount' => '20:0.196329',
                'SearchDetails' => '6.873061',
                'Logout' => '0.823402',
                'SearchResults' => '7.418672',
                'SearchSave' => '3.616305',
                'SearchLoad' => '9.432586'
              };
      $VAR2 = {
                'Stamp' => '1/3/2007 12:49:22 AM',
                'Login' => '10.958312',
                'SearchCount' => '41:0.483233'
              };

      And now, let's print the fields we need from this data as CSV.

      $ perl -F, -lane '@F % 2 and push @D, {q{Stamp},@F} or $D[-1] = { %{$D[-1]}, @F } }{ $,=","; print @{$_}{qw(Stamp Login SearchResults SearchLoad SearchCount Logout)} for @D' input.txt
      1/3/2007 12:20:01 AM,12.588309,7.418672,9.432586,20:0.196329,0.823402
      1/3/2007 12:49:22 AM,10.958312,,,41:0.483233,

      As you may notice, the values that are missing from a record generate empty fields, which should be just fine for CSV.

      Obviously, this lazy toy will trigger "undefined" warnings, but I'm sure you'll know how to handle them in your real/production code. ;-)
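One way to handle those warnings is to default each missing field to an empty string with the defined-or operator (perl 5.10+) before printing. A minimal sketch, with a record like the truncated second one above:

```perl
use strict;
use warnings;

# Record with most fields missing, as after a mid-run die.
my %rec = (
    Stamp => '1/3/2007 12:49:22 AM',
    Login => '10.958312',
    # SearchResults, SearchLoad, SearchCount, Logout are absent
);

# Fixed column order; // '' replaces each undef with an empty CSV field.
my @cols = qw(Stamp Login SearchResults SearchLoad SearchCount Logout);
my $csv  = join ',', map { $rec{$_} // '' } @cols;
print "$csv\n";    # prints "1/3/2007 12:49:22 AM,10.958312,,,,"
```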

      My apologies if this code looked too messy for you, I'll try adding some spoilers...

      Finally, I have to warn you again: DON'T use this code in production, this is just a proof of concept :)



        I see you ran this from the command line, didn't you? I tried it as a separate script initially; just now I tried typing it all in, and I am down to one error, "unable to find string terminator", but I am not sure where the problem is. I am really impressed with your example, though; I wish I had as deep an understanding and command of the language as you possess. :)

        My error is: can't find string terminator "'" anywhere before EOF at -e line 1. Doh, single versus double quotes... OK, now it does run, but all it outputs is 6 followed by 9 commas, then 7 followed by 9 commas. I don't understand why it's not showing all of it.
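For what it's worth, cmd.exe on Windows does not treat single quotes as delimiters, which is why the one-liners above fall apart there; rather than fighting the quoting, putting the code in a script file sidesteps it entirely. A sketch of the AoH pipeline as a script (the `parse_records` name is mine):

```perl
use strict;
use warnings;

# Same logic as the AoH one-liner: a timestamp line (odd field count)
# starts a new record hash; a "Name,value" line is merged into it.
sub parse_records {
    my @D;
    for my $line (@_) {
        my @F = split /,/, $line;
        if (@F % 2) { push @D, { Stamp => @F } }
        else        { $D[-1] = { %{ $D[-1] }, @F } }
    }
    return @D;
}

my @D = parse_records('1/3/2007 12:20:01 AM', 'Login,12.588309');
{
    local ($,, $\) = (',', "\n");    # comma-separated output, newline-terminated
    print @{$_}{qw(Stamp Login)} for @D;
}
```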