PerlMonks  

Re^2: Comparing a large set of DNA sequences

by roboticus (Canon)
on Nov 10, 2011 at 00:24 UTC


in reply to Re: Comparing a large set of DNA sequences
in thread Comparing a large set of DNA sequences

shamshersingh:

Here's what I tried:

#!/usr/bin/perl
use strict;
use warnings;

my %H;
open my $FH, '<', 'DNA_strings.dat' or die $!;
while (<$FH>) {
    s/\s+$//;
    # For each position, build a key with that character wildcarded,
    # and file the original string under it.
    for my $i (0 .. length($_)-1) {
        my $k = $_;
        substr($k,$i,1) = '*';
        push @{$H{$k}}, $_;
    }
}
for my $k (sort keys %H) {
    if ($#{$H{$k}} > 0) {   # two or more strings share this key
        print "$k: ", join(", ", @{$H{$k}}), "\n";
    }
}

A quick experiment with a short file:

$ cat DNA_strings.dat
CTGAG
CGAGT
ACGCT
TATAC
CTGAA
GGAGC
ATACA
AAAAA
ACAAA
AGAAA
AATAA
AAAGA
ACCAA
AGCAC
CCACG
GCCAT
AGCAA
GGCAT
GTTTG
$ perl DNA_cmp.pl
A*AAA: AAAAA, ACAAA, AGAAA
A*CAA: ACCAA, AGCAA
AA*AA: AAAAA, AATAA
AAA*A: AAAAA, AAAGA
AC*AA: ACAAA, ACCAA
AG*AA: AGAAA, AGCAA
AGCA*: AGCAC, AGCAA
CTGA*: CTGAG, CTGAA
G*CAT: GCCAT, GGCAT

It seems pretty fast, too: it took less than a minute to scan through 100,000 20-character strings, though it didn't find anything. (gen_random_strings.pl is on my scratchpad.)

$ perl gen_random_strings.pl 100000 20 20 ACGT >DNA_strings.dat
$ time perl DNA_cmp.pl

real    0m47.659s
user    0m46.395s
sys     0m1.252s
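gen_random_strings.pl itself isn't reproduced here. A minimal stand-in that accepts the same arguments (count, minimum length, maximum length, alphabet; the argument meanings are assumed from the invocation above, not taken from the real script) might look like:

```perl
#!/usr/bin/perl
# Hypothetical stand-in for gen_random_strings.pl (the real one is on
# roboticus's scratchpad). Usage: count min_len max_len alphabet
use strict;
use warnings;

die "usage: $0 count min_len max_len alphabet\n" if @ARGV != 4;
my ($count, $min, $max, $alphabet) = @ARGV;

my @chars = split //, $alphabet;
for (1 .. $count) {
    # Pick a length in [min, max], then draw that many random characters.
    my $len = $min + int rand($max - $min + 1);
    print map { $chars[int rand @chars] } 1 .. $len;
    print "\n";
}
```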

Update: I tried to make this node a reply to the OP, but for some reason, it kept failing on me, and when I'd refresh, it would show my node replacing BrowserUk's node. It was odd looking...

...roboticus

When your only tool is a hammer, all problems look like your thumb.


Re^3: Comparing a large set of DNA sequences
by aaron_baugher (Deacon) on Nov 10, 2011 at 16:17 UTC

    That's a brilliant way to do it. My own brute force method of comparing all the strings character by character required 5,000,050,000 comparisons and took almost 4 hours. Yours loops through 100,000 items once, then loops through a maximum of 2,000,000 keys once. It's not even close.
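To put rough numbers on that comparison (a sketch; the pairwise count assumes each of the 100,000 strings is compared against every other, and the 2,000,000 figure assumes one wildcard key per position in each 20-character string):

```perl
#!/usr/bin/perl
use strict;
use warnings;

my $n   = 100_000;   # number of strings
my $len = 20;        # characters per string

# Brute force: one comparison per pair of strings, n(n+1)/2 pairs.
my $pairwise = $n * ($n + 1) / 2;

# Hash method: at most one key per string per wildcard position.
my $keys = $n * $len;

printf "pairwise comparisons: %d\n", $pairwise;   # 5000050000
printf "maximum hash keys:    %d\n", $keys;       # 2000000
```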

    Aaron B.
    My Woefully Neglected Blog, where I occasionally mention Perl.

      aaron baugher:

      We were lucky the problem was amenable to such a nice simplification!

      ...roboticus

      When your only tool is a hammer, all problems look like your thumb.

Re^3: Comparing a large set of DNA sequences
by BrowserUk (Pope) on Nov 10, 2011 at 16:26 UTC

    I concur. This is extremely clever.

    It has the downside that it doesn't scale so well if you need to allow more than one char difference. But for the OP's stated problem, quite brilliant.


    With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority".
    In the absence of evidence, opinion is indistinguishable from prejudice.

      BrowserUk:

      Thanks!

      Yes, this method breaks down pretty quickly when the criteria are relaxed much. But I found a nifty variation that's a little more flexible (and still very limited): rather than replacing a character with an asterisk, we can remove the character entirely. That way the key not only matches strings differing at a single position, it can also find some two-character differences (such as a character deleted from one string and inserted into another at a different location). I still need to find a simple way to group the replicated matches before I'm happy with it.

      I'm still playing around with it, but I'm at work and need to get stuff done. So I'm posting what I have so far, just in case I never get back around to it:

      $ cat 937249.pl
#!/usr/bin/perl
use strict;
use warnings;

my %H;
open my $FH, '<', 'DNA_strings.dat' or die $!;
while (<$FH>) {
    next if /^\s*(#.*)?$/;
    s/\s+$//;
    # For each position, build a key with that character deleted,
    # and count the original string under it.
    for my $i (0 .. length($_)-1) {
        my $k = $_;
        substr($k,$i,1) = '';
        $H{$k}{$_}++;
    }
}
for my $k (sort keys %H) {
    if (keys %{$H{$k}} > 1) {
        print "$k\t", join("\n\t\t", keys %{$H{$k}}), "\n";
    }
}
      $ cat DNA_strings.dat
# Add a random character to GTTAACCGGA in various positions
GTxTAACCGGA
GTTAAyCCGGA
GTTAACCGzGA
# Delete a random character from GAGGGTGATC in various positions
GAGGGTGAT
GAGGGTGTC
GAGGTGATC
AGGGTGATC
GCAATTTGTC
GCAAATTGTC
GCAATTGGTC
GTTTATAAGT
TGGACAAGCT
TCAGCGGATC
CTACATAACT
TTACTTCAGG
CGGACCTTGG
TGCGTGTGAC
      $ perl 937249.pl
AGGGTGAT	GAGGGTGAT
		AGGGTGATC
AGGGTGTC	GAGGGTGTC
		AGGGTGATC
AGGTGATC	AGGGTGATC
		GAGGTGATC
GAGGGTGT	GAGGGTGAT
		GAGGGTGTC
GAGGTGAT	GAGGGTGAT
		GAGGTGATC
GAGGTGTC	GAGGGTGTC
		GAGGTGATC
GCAATTGTC	GCAATTTGTC
		GCAATTGGTC
		GCAAATTGTC
GGGTGATC	AGGGTGATC
		GAGGTGATC
GTTAACCGGA	GTTAAyCCGGA
		GTxTAACCGGA
		GTTAACCGzGA

      I'd like to eliminate the duplicate groups, probably by building another hash from the original results. But I haven't figured out a nice way to do so yet.
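One way to sketch that "another hash" idea (an assumption about the intended approach, not roboticus's final version): canonicalize each group by sorting its members and joining them into a single key, so a group that appears under several different deletion keys prints only once. Note this collapses only exactly-identical member sets; overlapping groups would still need merging.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Assume %H maps deletion-key => { original string => count }, as
# built by 937249.pl above; a tiny hand-made example stands in here.
my %H = (
    'AGGGTGAT' => { 'GAGGGTGAT' => 1, 'AGGGTGATC' => 1 },
    'GGGTGAT'  => { 'GAGGGTGAT' => 1, 'AGGGTGATC' => 1 },  # same members, different key
);

my %seen;
for my $k (sort keys %H) {
    next unless keys %{$H{$k}} > 1;
    my $group = join ',', sort keys %{$H{$k}};   # canonical form of the group
    next if $seen{$group}++;                     # skip groups already printed
    print "$group\n";                            # prints AGGGTGATC,GAGGGTGAT once
}
```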

      ...roboticus

      When your only tool is a hammer, all problems look like your thumb.

      Update: Minor formatting & text update.
