PerlMonks
Re^3: Reading files n lines a time

by ww (Bishop)
on Dec 06, 2012 at 20:27 UTC ( #1007643=note )


in reply to Re^2: Reading files n lines a time
in thread Reading files n lines a time

Perhaps you can post a real (or bowdlerized) sample snippet of your actual data. It's amazing what a bit of exposure to regular expressions can help one spot, and here you'll have many well-educated eyes looking for proxy paragraph markers.

I'm actually surprised -- no, very surprised -- that this request hasn't been posted higher in the thread.


Re^4: Reading files n lines a time
by naturalsciences (Beadle) on Dec 07, 2012 at 17:45 UTC
    Right now it is simply a FASTA file. FASTA files store DNA sequence information and are formatted as follows.

    >nameofsequence\n

    ATCGTACGTTGCTE\n

    >anothername\n

    GTCTGT\n

    so that a line starting with > and containing a sequence name is followed by a line containing that sequence's nucleotide information

    I am thinking of reading them in four lines at a time, because I have reason to suspect that, due to certain previous operations, there might be sequences directly following each other with different names (on the >sequencename\n line) but exactly the same sequence information (on the following ATGCTGT\n line). Right now I'm looking to identify and remove such duplicates, but I might also make use of scripts dealing with comparison, extraction, etc. of neighbouring sequences in my files. (Two neighbours means four lines.)
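A minimal sketch of that neighbour-dedup idea (assuming strict two-line records with no wrapped sequence lines; `dedup_neighbours` is just an illustrative name, not anything from the thread):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Walk the lines two at a time (one record) and skip any record whose
# sequence line is identical to that of the record just kept.
sub dedup_neighbours {
    my @lines = @_;                 # header, seq, header, seq, ...
    my @out;
    my $i = 0;
    while ($i < @lines) {
        my ($hdr, $seq) = @lines[ $i, $i + 1 ];
        push @out, $hdr, $seq;      # keep the first copy
        $i += 2;
        # skip immediately following records with the same sequence
        $i += 2 while $i < @lines && $lines[ $i + 1 ] eq $seq;
    }
    return @out;
}

my @records = (">a", "ATCG", ">b", "ATCG", ">c", "GTCT");
print join("\n", dedup_neighbours(@records)), "\n";
</code>
```

Reading the whole file into an array keeps the sketch short; for very large files the same comparison works line-pair by line-pair from a filehandle.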
      SuperSearch (done already; here's the link: ?node_id=3989;BIT=FASTA) will give you a short list of recent discussions on dealing with FASTA files.

      My notion that your paragraphing might be identifiable with a regex is pretty useless here. However, there's no reason you can't read two lines at a time and use hashes to ensure the two "neighbours'" values are distinct.

      That too, however, breaks down if the dups appear anywhere other than adjacent to one another, given the size of your data.
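If non-adjacent dupes ever do matter, a hash keyed on the sequence line handles them in one pass. A sketch under the same two-line-record assumption (`dedup_anywhere` is an illustrative name); the cost is one hash entry per distinct sequence, which is exactly the memory concern for large data mentioned above:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Keep only the first record carrying each distinct sequence,
# wherever the duplicates occur in the file.
sub dedup_anywhere {
    my @lines = @_;
    my (%seen, @out);
    while (@lines) {
        my ($hdr, $seq) = splice @lines, 0, 2;
        push @out, $hdr, $seq unless $seen{$seq}++;
    }
    return @out;
}

my @records = (">a", "ATCG", ">b", "GTCT", ">c", "ATCG");
print join("\n", dedup_anywhere(@records)), "\n";
```

The `$seen{$seq}++` idiom returns the pre-increment count, so the test is false (record kept) only the first time a sequence is seen.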

      So if none of the above helps, you may wish to read about BioPerl, both at the Wikipedia article, http://en.wikipedia.org/wiki/BioPerl, and at the project page, http://www.bioperl.org/wiki/Main_Page.

        Yes, it would break down, but right now I'm specifically looking for neighbouring dupes :) (There is an actual reason to suspect they are positioned that way in those files.) It wouldn't be hard for me to write push/shift code that would, in effect, slide a four-line reading frame over the whole text file.

        But I ran totally into a ditch trying to do the same so that the frame wouldn't "slide" but would be "lifted" four lines at a time.

        Then I could just

        <code>print @frame if $frame[1] ne $frame[3];</code>

        But for some reason I keep messing up populating, emptying, and moving the frame.
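For what it's worth, the frame bookkeeping can be kept to a push, a length check, and a splice. A sketch (`slide_dedup` is an illustrative name; note that with the print-when-different logic it keeps the later copy of a duplicated pair):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Slide a four-line frame (two records) over the lines, emitting the
# leading record only when its sequence differs from its neighbour's.
sub slide_dedup {
    my @lines = @_;
    my (@frame, @out);
    for my $line (@lines) {
        push @frame, $line;
        next if @frame < 4;                        # frame not full yet
        push @out, @frame[0, 1] if $frame[1] ne $frame[3];
        splice @frame, 0, 2;                       # advance one record
    }
    push @out, @frame;                             # flush the last record
    return @out;
}

my @records = (">a", "ATCG", ">b", "ATCG", ">c", "GTCT");
print join("\n", slide_dedup(@records)), "\n";
```

Advancing by two lines per step (one record) rather than four means every adjacent pair gets compared, so a run of three or more identical neighbours still collapses to a single survivor.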

        edit: disregard all that - a sliding frame is exactly what I need. So I guess we're done here :D. Thank you all! I learned a lot of other stuff on the side, too :)
