Re^3: Comparing a large set of DNA sequences

by BrowserUk (Patriarch)
on Nov 10, 2011 at 16:26 UTC


in reply to Re^2: Comparing a large set of DNA sequences
in thread Comparing a large set of DNA sequences

I concur. This is extremely clever.

It has the downside that it doesn't scale well if you need to allow more than one character of difference. But for the OP's stated problem, it's quite brilliant.
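For anyone who hasn't read the parent node: the trick, as I understand it (a minimal sketch with made-up data, not roboticus's actual code), is to hash every one-position-masked variant of each sequence, so that any two equal-length sequences differing in at most one character collide on at least one key:

    use strict;
    use warnings;

    my %H;
    while (my $seq = <DATA>) {
        chomp $seq;
        for my $i (0 .. length($seq) - 1) {
            my $k = $seq;
            substr($k, $i, 1) = '*';    # mask position $i with a wildcard
            $H{$k}{$seq}++;             # sequences sharing a key differ by <= 1 char
        }
    }
    for my $k (sort keys %H) {
        my @group = sort keys %{$H{$k}};
        print "$k: @group\n" if @group > 1;
    }

    __DATA__
    GATTACA
    GATCACA
    GTTTACA

Allowing two differences would mean masking every pair of positions: O(L^2) keys per sequence of length L, which is exactly where the scaling breaks down.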


With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.

Re^4: Comparing a large set of DNA sequences
by roboticus (Chancellor) on Nov 10, 2011 at 18:39 UTC

    BrowserUk:

    Thanks!

    Yes, this method breaks down pretty quickly once the criteria are relaxed much further. But I found a nifty variation that's a little more flexible (and still very limited): rather than replacing the character with an asterisk, we can remove the character entirely. That way it not only matches a single wildcard position, but can also find some two-character differences (such as a character deleted or inserted in two different locations). I need to find a simple way to group the replicated matches before I'm happy with it.

    I'm still playing around with it, but I'm at work and need to get stuff done. So I'm posting what I have so far, just in case I never get back around to it:

    $ cat 937249.pl
    #!/usr/bin/perl
    use strict;
    use warnings;

    my %H;
    open my $FH, '<', 'DNA_strings.dat' or die $!;
    while (<$FH>) {
        next if /^\s*(#.*)?$/;
        s/\s+$//;
        for my $i (0 .. length($_)-1) {
            my $k = $_;
            substr($k,$i,1) = '';
            $H{$k}{$_}++;
        }
    }
    for my $k (sort keys %H) {
        if (keys %{$H{$k}} > 1) {
            print "$k\t", join("\n\t\t", keys %{$H{$k}}), "\n";
        }
    }

    $ cat DNA_strings.dat
    # Add a random character to GTTAACCGGA in various positions
    GTxTAACCGGA
    GTTAAyCCGGA
    GTTAACCGzGA
    # Delete a random character from GAGGGTGATC in various positions
    GAGGGTGAT
    GAGGGTGTC
    GAGGTGATC
    AGGGTGATC
    GCAATTTGTC
    GCAAATTGTC
    GCAATTGGTC
    GTTTATAAGT
    TGGACAAGCT
    TCAGCGGATC
    CTACATAACT
    TTACTTCAGG
    CGGACCTTGG
    TGCGTGTGAC

    $ perl 937249.pl
    AGGGTGAT        GAGGGTGAT
                    AGGGTGATC
    AGGGTGTC        GAGGGTGTC
                    AGGGTGATC
    AGGTGATC        AGGGTGATC
                    GAGGTGATC
    GAGGGTGT        GAGGGTGAT
                    GAGGGTGTC
    GAGGTGAT        GAGGGTGAT
                    GAGGTGATC
    GAGGTGTC        GAGGGTGTC
                    GAGGTGATC
    GCAATTGTC       GCAATTTGTC
                    GCAATTGGTC
                    GCAAATTGTC
    GGGTGATC        AGGGTGATC
                    GAGGTGATC
    GTTAACCGGA      GTTAAyCCGGA
                    GTxTAACCGGA
                    GTTAACCGzGA

    I'd like to eliminate the duplicate groups, probably by building another hash from the original results. But I haven't figured out a nice way to do so yet.
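    One direction that might work (a sketch only, replacing the final loop of 937249.pl above; hash and variable names as in that script, untested beyond the data shown): build a second hash keyed on each group's sorted member list, and print each distinct set only once:

        my %seen;
        for my $k (sort keys %H) {
            next unless keys %{$H{$k}} > 1;
            my $sig = join ',', sort keys %{$H{$k}};   # canonical form of the group
            next if $seen{$sig}++;                     # this member set was already printed
            print "$k\t", join("\n\t\t", keys %{$H{$k}}), "\n";
        }

    With that, e.g. AGGTGATC and GGGTGATC above (which both group AGGGTGATC with GAGGTGATC) would be reported once.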

    ...roboticus

    When your only tool is a hammer, all problems look like your thumb.

    Update: Minor formatting & text update.
