PerlMonks  

Re^4: Reading files n lines a time

by naturalsciences (Beadle)
on Dec 07, 2012 at 17:45 UTC


in reply to Re^3: Reading files n lines a time
in thread Reading files n lines a time

Right now it is simply a FASTA file. FASTA files are for storing DNA sequence information and they are formatted as follows:

>nameofsequence\n

ATCGTACGTTGCTE\n

>anothername\n

GTCTGT\n

so that a line starting with > and containing a sequence name is followed by a line containing that sequence's nucleotide information.

I am thinking of reading them in four lines at a time, because I have reason to suspect that, due to some previous operations, there might be sequences directly following each other with different names (on the >sequencename\n line) but exactly the same sequence information (on the following ATGCTGT\n line). Right now I'm looking to identify and remove such duplicates, but I might also make use of scripts doing comparison, extraction, etc. of neighbouring sequences in my files. (Two neighbours means four lines.)
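A minimal sketch of that neighbour-only duplicate check, assuming the strict two-line layout shown above (the sub name dedup_neighbours is illustrative, not from the thread):

```perl
use strict;
use warnings;

# Take a flat list of lines (header, sequence, header, sequence, ...)
# and drop any record whose sequence line repeats the previous
# record's sequence line.
sub dedup_neighbours {
    my @lines = @_;
    my ( @out, $prev_seq );
    while ( @lines >= 2 ) {
        my ( $header, $seq ) = splice @lines, 0, 2;
        push @out, $header, $seq
            unless defined $prev_seq && $seq eq $prev_seq;
        $prev_seq = $seq;
    }
    return @out;
}
```

Reading from a filehandle two lines at a time instead of slurping would keep memory flat for big files; the list form just makes the comparison easy to see.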


Re^5: Reading files n lines a time
by ww (Bishop) on Dec 07, 2012 at 19:22 UTC
    SuperSearch (done already -- here's the link: ?node_id=3989;BIT=FASTA) will give you a short list of recent discussions on dealing with FASTA files.

    My notion that your paragraphing might be identifiable with a regex is pretty useless here. However, there's no reason you can't read two lines at a time and use a hash to ensure the two "neighbors'" values are discrete.

    That too, however, breaks down if the dups appear other than adjacent to one another, given the size of your data.
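If the dups can turn up anywhere, a hash keyed on the sequence line catches them regardless of position, at the cost of holding every distinct sequence in memory. A sketch under the same two-line layout (the sub name is illustrative):

```perl
use strict;
use warnings;

# Keep the first record carrying each distinct sequence line;
# %seen grows with the number of distinct sequences seen.
sub dedup_anywhere {
    my @lines = @_;
    my ( %seen, @out );
    while ( @lines >= 2 ) {
        my ( $header, $seq ) = splice @lines, 0, 2;
        push @out, $header, $seq unless $seen{$seq}++;
    }
    return @out;
}
```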

    So if none of the above help, you may wish to read about BioPerl at both the Wikipedia article, http://en.wikipedia.org/wiki/BioPerl, and at the project page, http://www.bioperl.org/wiki/Main_Page.

      Yes, it would break down, but right now I'm specifically looking for neighbouring dupes :) (There is an actual reason to suspect they are positioned that way in those files.) It wouldn't be hard for me to write push/shift code that would, in effect, slide a four-line reading frame over the whole text file.

      But I ran totally into a ditch trying to do the same so that the frame wouldn't "slide" but would be "lifted" four lines at a time.

      Then I could just

      <code> if ($frame[1] ne $frame[3]) { print @frame } </code>

      But for some reason I keep messing up the populating, emptying, and moving of the frame.
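One way the push/shift frame could be made to work, keeping the poster's @frame name and using ne (string inequality) for the comparison; this slides by one record (two lines) per step, and keeps the later copy of a duplicated pair. A sketch, with the sub name and dedup policy being assumptions:

```perl
use strict;
use warnings;

# Slide a four-line frame (header, seq, header, seq) over the lines,
# emitting the first record only when its sequence line differs from
# the next record's; the final record is always kept.
sub lift_frame {
    my @lines = @_;
    my ( @frame, @out );
    for my $line (@lines) {
        push @frame, $line;
        next if @frame < 4;
        push @out, @frame[ 0, 1 ] if $frame[1] ne $frame[3];
        splice @frame, 0, 2;    # advance by one record
    }
    push @out, @frame;          # flush the last record
    return @out;
}
```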

      edit: disregard all that -- a sliding frame is exactly what I need. So I guess we're done here :D. Thank you all! Learned a lot of other stuff on the side too :)
