PerlMonks

### Turning A Problem Upside Down

by Limbic~Region (Chancellor)
on Aug 20, 2009 at 23:10 UTC

All,
Recently, I have been writing algorithms to play games on Facebook. That is the real game for me, and I often explore hunches to see if they teach me anything. The most recent game is called Word Twister: you are presented with 7 randomly selected characters (dups allowed), and the object is to come up with as many words as possible made up of 3 or more of those characters within a certain time-frame.

A naive approach to this might be:

1. Generate the powerset of the given characters
2. For each subset, generate all the permutations
3. For each permutation, check to see if it is in the dictionary
This is obviously not a very efficient approach but, assuming you can fit the dictionary in memory as a hash, it will work on a small number of characters such as this game's 7. Update: if you are interested in the math with duplicates excluded - see this.
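As a rough sketch (my code, not the author's), those three naive steps might look like this; the toy %dict stands in for a real dictionary hash:

```
use strict;
use warnings;

# Toy dictionary hash and letter list; a real run would load a full word list.
my %dict = map { $_ => 1 } qw(ape tap tape apple lap pea);
my @letters = split //, 'apetpxl';

# Step 1: powerset of the given characters (by position, so dups are handled).
sub powerset {
    my @set = @_;
    my @subsets = ([]);
    for my $item (@set) {
        push @subsets, map { [ @$_, $item ] } @subsets;
    }
    return @subsets;
}

# Step 2: all permutations of a subset.
sub permutations {
    my @items = @_;
    return ([]) unless @items;
    my @perms;
    for my $i (0 .. $#items) {
        my @rest = @items;
        my ($picked) = splice @rest, $i, 1;
        push @perms, map { [ $picked, @$_ ] } permutations(@rest);
    }
    return @perms;
}

# Step 3: check each permutation against the dictionary.
my %found;
for my $subset (powerset(@letters)) {
    next if @$subset < 3;
    for my $perm (permutations(@$subset)) {
        my $word = join '', @$perm;
        $found{$word} = 1 if $dict{$word};
    }
}
print "$_\n" for sort keys %found;
```

With 7 letters this is only a few thousand permutations, which is why the naive approach is tolerable at this size and hopeless at larger ones.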

A smarter approach to this might be:

```
my $given = $ARGV[0];
# ...
while (<$dict_fh>) {
    chomp;
    # true if every letter of the word is available in $given
    print "$_\n" if is_subset($given, $_);
}

sub is_subset {
    my ($str1, $str2) = @_;
    my (%h1, %h2);
    $h1{$_}++ for split //, $str1;
    $h2{$_}++ for split //, $str2;
    for my $chr (keys %h2) {
        return if ! $h1{$chr} || $h2{$chr} > $h1{$chr};
    }
    return 1;
}
```
There are obvious optimizations to be had, such as pre-processing the dictionary and loading it in hash form rather than creating it each time. If you don't look too closely, the algorithm scales with the number of characters in the dictionary and the number of characters in the given string.

I wanted to see if I could do better. My hunch told me there should be a way to

1. Convert the strings to numbers
2. Subtract the two numbers
3. Apply a test to the difference to determine if it was a subset
This would turn the problem into:
```
my $input = $ARGV[0];
$input = str2val($input);
# create hash of possible subset values
# ...
while (<$dict_fh>) {
    chomp;
    my ($word, $val) = split /\t/;
    print "$word\n" if $is_subset{$input - $val};
}
```

I found that the answer was yes if you set a limit on the maximum length of input you would accept. Here is a brief explanation of the math:

```
a = 1
b = (a * max) + 1
c = (b * max) + 1
...
z = (y * max) + 1
```

What this does is count the total occurrences of each letter (order isn't important) and lets you represent a string as a single number. For instance, if max = 5 and your alphabet were only A .. F:
```
sub str2val {
    my ($str) = @_;
    my %convert = (A => 1, B => 6, C => 31, D => 156, E => 781, F => 3906);
    my $val = 0;
    for my $char (split //, $str) {
        $val += $convert{$char};
    }
    return $val;
}

sub val2str {
    my ($val) = @_;
    my %convert = (A => 1, B => 6, C => 31, D => 156, E => 781, F => 3906);
    my $str = '';
    return $str if $val < 1;
    for my $char (sort {$convert{$b} <=> $convert{$a}} keys %convert) {
        my $count = int($val / $convert{$char});
        $str .= ($char x $count);
        $val %= $convert{$char};
    }
    return $str;
}
```
Ok, but how is the magic %is_subset constructed? It is simply the values of the powerset. Wait a minute, I said: if I am generating the powerset, why do I jump through the hoops of converting to a number instead of just looking up strings? Then I remembered that in strings the order of letters matters (hence the permutations in the naive brute force above). Ok, if I don't have to generate the permutations, this might be a practical approach.

I start looking at the math and I realize the approach, without modification, is not practical. To support a maximum string length of 7, the value of Z would be 1_564_580_056_274_625_717_608. The reason the numbers get so high so quickly is that you are summing max**N for N = 0 .. 25. Even if I am only doing subtraction, these numbers are too big to work with, and I want to be able to support strings longer than 7.
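As a sanity check of that constant (my verification, using the core Math::BigInt module): the recurrence a = 1, b = (a * max) + 1, ... telescopes to z = 1 + max + max**2 + ... + max**25 = (max**26 - 1) / (max - 1), and for max = 7 that reproduces the quoted value exactly:

```
use strict;
use warnings;
use Math::BigInt;

# z = (7**26 - 1) / 6, the value of Z when the maximum string length is 7.
my $z = Math::BigInt->new(7)->bpow(26)->bsub(1)->bdiv(6);
print "$z\n";   # 1564580056274625717608
```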

I started to think about a couple of ways that might still make this practical. The first was to not consider the general case (the entire alphabet) but to work with only the characters in the given string. This was doomed to failure once I ran the numbers. First, you would have to generate all combinations of the various string lengths you wanted to support. Remember the formula for N choose K: C = N! / (K! * (N - K)!). So for strings of 7 distinct letters from a 26-letter alphabet, there are C(26, 7) = 657,800 unique combinations. You have to split the dictionary into that many files, and then when you get your input string you can open the dictionary that contains only those letters (a smaller alphabet). You can see this solves one problem by creating a new one, because you would need 1,562,275 additional files to support an input string of length 8. Of course, these needn't be actual files (a database would do), but the problem is still not scalable and pre-processing will take forever.

The second approach was to go with a hybrid option. First, perform frequency analysis on the database to order the letters from most to least frequent. This means that instead of Z having the highest numerical value, Q would. We could then shave off the bottom X letters from the alphabet. The idea was to have two dictionary files (one with words containing uncommon letters and one with only common letters). Once the input is examined, we dispatch to the straightforward brute force approach with the alternate dictionary if an uncommon letter is observed, and the weird math approach otherwise. After checking the numbers I realized it was still untenable.

What am I doing again?

Suddenly it dawns on me that a hybrid approach is what I want, just not the one I was working on. The reason the naive brute force didn't work is that after generating the powerset, generating the permutations was necessary because the order of letters matters in hash lookups. The weird math approach solved the order problem but created a new one (numbers too big to work with). Could I eliminate the order-of-letters issue without using the weird math approach? Of course: just pre-process the dictionary by sorting the letters of each word.

```
my $input = $ARGV[0];
my %is_subset = map {$_ => 1} powerset(split //, $input);
while (<$dict_fh>) {
    chomp;
    my ($word, $normalized) = split /\t/;
    print "$word\n" if $is_subset{$normalized};
}

sub powerset {
    # ...
    return map {join '', sort @$_} @subsets;
}
```
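The split /\t/ above assumes the dictionary was pre-processed into "word, tab, normalized key" lines. A minimal sketch of that one-time pass (the file names and toy word list here are my assumptions, not from the post):

```
use strict;
use warnings;

# Write a toy word list; a real run would use the full dictionary file.
open my $seed, '>', 'words.txt' or die "words.txt: $!";
print {$seed} "$_\n" for qw(tape apple at Pea);
close $seed;

# One-time pass: emit "word\tnormalized" lines, where the normalized key
# is the word's letters in sorted order, so anagrams share one hash key.
open my $in,  '<', 'words.txt'      or die "words.txt: $!";
open my $out, '>', 'words_norm.txt' or die "words_norm.txt: $!";
while (my $word = <$in>) {
    chomp $word;
    next if length($word) < 3;    # the game requires 3+ letters
    my $normalized = join '', sort split //, lc $word;
    print {$out} "$word\t$normalized\n";
}
close $out;
```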

This is what I call turning a problem on its (?:head|side|ear). I look at it in a completely different way to see if I learn anything. In this case, I learned several ways not to solve the problem but that's ok - I had fun and this was the real game for me anyway. I just wonder if other people do this? If so, what approaches do you take and do you have any examples to share?

Cheers - L~R

Replies are listed 'Best First'.
Re: Turning A Problem Upside Down
by ack (Deacon) on Aug 21, 2009 at 16:31 UTC

Wow! Limbic~Region, that is awesome! Thanks so much for sharing that; it really has me thinking.

I have, over the last several weeks, been struggling with a mentoring issue from one of my mentorees.

My mentoree is a budding systems engineer who is on the brink of becoming a Chief Systems Engineer on a project of his own. In that capacity much of the "engineering" is done by the systems engineers working for the "Chief", and the "Chief" takes on responsibility for a much broader range of challenges (e.g., politics, customer interactions, security, safety, etc.). The transition from largely pure engineering to this broader challenge is the normal stumbling block, and is why I spend a lot of time mentoring them to help them "find their stride".

Finding that "stride" is unique to each individual, and it is up to me to try to "bring it out." I find that the Socratic Method largely works, but occasionally I encounter unique challenges; this was one of those cases.

The mentoree's inquiry (in response to a question of mine to him) was "I watch you in meetings and I am struck by the frequency with which you innocuously turn the meeting completely around and embark on a completely different solution direction. How do you do that? How do I learn to do that?"

At the time, I was a bit taken aback because I hadn't ever really realized that I did that. But as I thought about it more and more, I realized that it has always been apparent to me how easily folks, especially in groups, tend to lock into a single solution and then beat it to death...especially if it doesn't really solve the problem. This is true not just for engineering/technical challenges, but for human interactions (e.g., politics, inter-personal challenges, etc.), too.

Hence, over the years, I find myself almost always, and almost unconsciously, taking a problem and making myself step back and, as Limbic~Region says, "turn the problem on its head." I try to find ways to look at or envision the problem from completely different perspectives.

An important aspect for me, and I see that one of the responders similarly talked of working with his friend, is getting other people to consider the problem and to listen carefully to their "take" on it. For me, the hardest thing to do...and to get others to do...is to not "tune out" other people's approaches. I have found that the vast majority of "leaps" into new solutions for me have come from ideas (sometimes just small, seemingly insignificant, comments) that came from someone with an entirely different perspective on the problem.

So, at least for me, the first step is to force myself to step back and try to think about the problem in entirely different ways. This did not come naturally and took me many years to begin to do it effectively and somewhat automatically.

The second step (not always possible, of course) is to listen to others' perspectives and ideas on the problem. In this, I don't usually focus so much on their particular ideas of how to solve the problem; rather, I try to think about how and why they are solving it the way that they are. This is one of the tremendous values, for example, that the Monastery brings to me. It is not the particular solutions that the Monks offer that is most important to me. Rather, it is their thinking and perspective on the problems that teaches me so much.

I have, on more than one occasion, even found myself repeatedly...for the same problem...stepping back and looking for yet more ways of seeing the problem...i.e., "standing it on its head, on its side, on its back, on its other side...." The need for such is rare, but I think that is what Limbic~Region did in his quest, too. And it paid off, obviously.

I think, at its heart, that innovation flows significantly from that ability to "turn a problem on its head."

So that is how I answered my mentoree, and it has formed the basis of some little exercises that I've been doing with him. I've picked out a few challenges and spent about half an hour or so on each with him, eliciting a solution and then embarking on the exercise of "turning the problem on its head" to see what he might discover. I don't know yet if that will help him. But it always makes for some fun discussions, and it seems to have yielded some "Ah-ha" moments for him, too. He is a budding Perl programmer, so I am thinking about trying to formulate a Perl example. Limbic~Region's example is a bit too complex, and my mentoree probably doesn't have the math background to tackle that specific one. But it gave me some food for thought, and I think I can find an appropriate Perl challenge for him.

Thanks, again, Limbic~Region. Sharing your thoughts really crystallized my own thoughts and was uncannily timely.

ack Albuquerque, NM
ack,
Thanks for your response. I too use it for actual issues and not just academic problems. It might not be obvious, but I don't think this is a strategy that should be applied frivolously to real world problems. This is why I stressed that this was the real game to me. Conventional wisdom should only be set aside when it makes sense. It became "conventional" and "wisdom" for a reason. I guess what I am trying to say here is that not every problem should be attacked from all angles for the sake of doing it when the obvious straightforward approach works. I do advocate practicing the technique on your own liberally, even as just a thought experiment, because it is a skill that is improved with practice and becomes invaluable when needed for actual problems.

For me, the hardest thing to do...and to get others to do...is to not "tune out" other people's approaches. I have found that the vast majority of "leaps" into new solutions for me have come from ideas (sometimes just small, seemingly insignificant, comments) that came from someone with an entirely different perspective on the problem.

I find that a far easier task than not tuning out someone who is describing an approach I have already considered and dismissed. I have to force myself to keep listening to make sure they do not bring new information to the idea that I hadn't considered before dismissing it. Otherwise, I find myself abruptly cutting them off and explaining why it won't work. In comparison, it is relatively easy for me to listen to a completely new or different idea. I agree though that even that is a struggle when I believe I have already thought of the best solution.

Cheers - L~R

It might not be obvious, but I don't think this is a strategy that should be applied frivolously to real world problems. This is why I stressed that this was the real game to me. Conventional wisdom should only be set aside when it makes sense. It became "conventional" and "wisdom" for a reason.

"That's brilliant," Cargill said warmly. "A brilliant suggestion, Mr. Staley."

"Then we'll do it?"

"We will not. [...] And besides, it's a nitwit idea."

"Yes, sir."

"Nitwit ideas are for emergencies. You use them when you've got nothing else to try. If they work, they go in the Book. Otherwise, you follow the Book, which is largely a collection of nitwit ideas that worked."

The Mote In God's Eye
Larry Niven & Jerry Pournelle

Limbic~Region,
Wow! You wrote:

...It might not be obvious, but I don't think this is a strategy that should be applied frivolously to real world problems.

I absolutely agree. I also agree with your words:

...I do advocate practicing the technique on your own liberally - even as just a thought experiment, because it is a skill that is improved with practice and becomes invaluable when needed for actual problems...

You "hit the nail on the head" regarding it being a skill that is improved with practice. That is what I fervently hope for with my mentorees...that with practice they will become better at it.

I have been contemplating your comment about being careful not to apply it "frivolously". I think that is an incredibly important observation, and one that is always just under my radar screen but that should be more above that screen.

I have occasionally seen the newbies trying to use the technique when it isn't necessary or when it is counter-productive. Each time it just felt "wrong" to me, but I couldn't quite put my finger on what was making it feel that way. Your words crystallized what I guess I felt instinctively...one also has to learn when and how best to apply the technique(s).

I think I need to work up some exercises/"posers" (as Number 5 in the movie "Short Circuit" said) for the mentorees, to try to help them begin to learn when it is appropriate and beneficial to use the "turn the problem on its head" strategy.

Thanks so much for that insight.

ack Albuquerque, NM
Re: Turning A Problem Upside Down
by gwadej (Chaplain) on Aug 21, 2009 at 13:47 UTC

I have a programmer friend who is spectacular at this sort of approach. When he and I worked together, I often would talk to him about problems I was having trouble solving. It never failed that Rick would make suggestions that were in a direction that had never occurred to me.

In most cases, he would not suggest anything obvious like looking at the problem backwards or sideways. He would come at the problem from a direction I would never have considered. (I wish I could think of an example right now.<shrug/>)

As a general rule, these suggestions would not solve the problem, but they would invariably change the way I saw the problem. A solution inevitably followed.

When in doubt, redefine the problem.
Re: Turning A Problem Upside Down
by BrowserUk (Pope) on Aug 22, 2009 at 03:30 UTC

Seems to me that you've fallen into a pattern of looking at things in terms of combinations, permutations & powersets et al.

The basic problem is simply a lookup problem, but as you point out, hashes don't work for this because of ordering. You could build a trie, but they are not efficiently built in terms of Perl's available data structures.

The alternative is to use a set of bitstrings to index the words containing each of the letters. The bitstring for 'a' contains a set bit at the offset corresponding to any word in the dictionary that contains an 'a'. Same for 'b', etc.

To find all the words in the dictionary that contain only the given letters, you first OR all the bitstrings for the given letters together, and then AND NOT the result with each of the remaining alphabet. You end up with a mask where each set bit corresponds to a compliant word in the dictionary.

Not sure how my crude implementation stacks up against yours, but it should compare favourably (assuming I understood the rules):

```
#! perl -slw
use strict;
use Data::Dump qw[ pp ];

sub uniq{ my %x; @x{@_} = (); keys %x }

my @words = do{ local *ARGV = ['words.txt']; <> };
chomp @words;

@words = grep length() > 2, @words;

my %index;

@index{ 'a' .. 'z' } = map chr(0) x int( ( @words + 8 )/8 ), 1 .. 26;

for my $iWords ( 0 .. $#words ) {
    for my $char ( sort uniq split '', $words[ $iWords ] ) {
        vec( $index{ $char }, $iWords, 1 ) = 1;
    }
}

while( chomp( my $given = <STDIN> ) ) {
    my @given = split '', $given;
    my @excludes = grep{ !(1+index $given, $_ ) } 'a'..'z';

    my $mask = chr(0) x int( ( @words + 8 )/8 );

    $mask |= $_ for @index{ @given };
    $mask &= ~ $index{ $_ } for @excludes;

    my $count = unpack "%32b*", $mask;

    print "Found $count words:\n";

    vec( $mask, $_, 1 ) and print $words[ $_ ]
        for 0 .. $#words;

    print "\n\n";
}

__END__
c:\test>790206
fred
Found 30 words:

deed
deeded
deer
def
defer
deferred
deffer
ere
err
erred
fed
fee
feed
feeder
free
freed
freer
red
redder
reed
reef
reefed
reefer
ref
refer
referee
refereed
referred
referrer
reffed
```
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.
BrowserUk,
assuming I understood the rules

I was just rudely awoken on my day off by work (brain not yet engaged), but I don't think you have. That's probably my fault and not yours. The object of the game is in fact to find subsets of the given set of letters (which is probably why I was focused there). Let me adjust the rules to see if they make more sense, and I will give an example as well:

Original: You are presented with 7 randomly selected characters (dups allowed). The object is to come up with as many words comprised of 3 or more of those characters within a certain time-frame.

Updated: You are presented with 7 randomly selected characters, which may contain duplicates. The object is to come up with as many words as possible, at least 3 characters long, that are composed entirely of a subset of the given characters, within a certain time-frame.

```
Given:  apetpxl

# The following are all acceptable (albeit not the entire list)
ape
tap
tape
apple
lap

# The following are not acceptable because they contain letters not in given
apples
sex
```

Cheers - L~R

Ah! So a final check is required to eliminate words that use too many of any given letter. It could probably be achieved more efficiently, but since it's only run on a very short list and short-circuits...

```
#! perl -slw
use strict;
use Data::Dump qw[ pp ];

sub finalCheck{
    my( $poss, $given ) = @_;
    $given =~ s[$_][] or return for split '', $poss;
    return 1;
}

sub uniq{ my %x; @x{@_} = (); keys %x }

my @words = do{ local *ARGV = ['words.txt']; <> };
chomp @words;

@words = grep length() > 2, @words;

my %index;

@index{ 'a' .. 'z' } = map chr(0) x int( ( @words + 8 )/8 ), 1 .. 26;

for my $iWords ( 0 .. $#words ) {
    for my $char ( sort uniq split '', $words[ $iWords ] ) {
        vec( $index{ $char }, $iWords, 1 ) = 1;
    }
}

while( chomp( my $given = <STDIN> ) ) {
    my @given = split '', $given;
    my @excludes = grep{ !(1+index $given, $_ ) } 'a'..'z';

    my $mask = chr(0) x int( ( @words + 8 )/8 );

    $mask |= $_ for @index{ @given };
    $mask &= ~ $index{ $_ } for @excludes;

    vec( $mask, $_, 1 )
        and finalCheck( $words[ $_ ], $given )
        and print $words[ $_ ]
            for 0 .. $#words;

    print "\n\n";
}

__END__
fred
def
fed
red
ref

apetpxl
ale
alp
ape
apex
apple
applet
apt
ate
axe
axle
eat
eta
exalt
lap
lappet
late
latex
lax
lea
leap
leapt
lept
lepta
let
pal
pale
pap
pat
pate
pea
peal
peat
pelt
pep
pet
petal
plat
plate
plea
pleat
tale
tap
tape
tax
tea
teal
```

BTW: Your example helped, but it is badly chosen. The problem was never including words that contained characters not in the supplied list (your 's'), but rather words that contained too many of one or more of the given letters, e.g. the second 'e' in 'freed' given 'fred'.

Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.
BrowserUk,
Seems to me that you've fallen into a pattern of looking at things in terms of combinations, permutations & powersets et al.

You are absolutely correct. I did look at the problem from many different sides but, with the subconscious objective of precomputing everything, they all turned out to be different variations on the same theme.

You could build a trie, but they are not efficiently built in terms of Perl's available data structures.

Assuming that the word list file will remain fairly static, and assuming it transforms into a data structure (trie or trie-like) small enough to stay memory resident, this seems like a reasonable approach. Using the word list you linked to earlier (TWL06.txt) with 178,590 eligible words, I use just under 80MB with the following solution:

The alternative is to use a set of bitstrings to to index the words containing each of the letters.

I have had a note to go back and figure out what you were doing here for over a year now. Today, I sat down to do just that. Would you mind reviewing what I have and correcting anything I got wrong? Note: I rewrote it in my own style as a mechanism for understanding it.

Assuming I understood it correctly, there isn't a lot of room for optimization. Instead of recreating the zeroed bitstring 27 times, just do it once. The finalCheck() could be inlined (or converted to Inline::C). It may be faster to skip candidate words that are longer than the input string. You could also use Storable the same way I did to reduce the constant-time construction of the data structure. I feel silly that I didn't spend some time a year ago trying to properly understand this, as it is quite beautiful.
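The Storable idea mentioned above, as a hedged sketch (the cache file name and the stand-in build step are mine, not from the thread): build the structure once, freeze it to disk, and thaw it on subsequent runs.

```
use strict;
use warnings;
use Storable qw(store retrieve);

# Build-once, load-many: cache an expensive-to-build structure between
# runs. 'index.stor' and the dummy build step are placeholders for the
# real per-letter bitstring index.
my $cache = 'index.stor';
my $index;
if (-e $cache) {
    $index = retrieve($cache);          # fast path on later runs
}
else {
    $index = { map { $_ => '' } 'a' .. 'z' };   # stand-in for the real build
    store($index, $cache) or die "Unable to store index in '$cache'";
}
printf "%d letters indexed\n", scalar keys %$index;
```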

Cheers - L~R

Using the word list you linked to earlier (TWL06.txt) with 178,590 eligible words, I use just under 80MB with the following solution:

You might be able to reduce that a bit and gain a little speed by using arrays instead of hashes. I tested this by replacing your trie builder with:

```
if (defined $opt{w}) {
    my @data;
    open(my $fh, '<', $opt{w}) or die "Unable to open '$opt{w}' for reading\n";
    while (<$fh>) {
        chomp;
        next if length($_) < 3 || /[^a-zA-Z]/;
        $_ = lc($_);
        my $code = join('', 'push @{$data',
            (map { $_ -= 97; "[$_]" } sort { $a <=> $b } unpack 'C*', $_),
            "}, '$_';");
        eval $code;
    }
    store(\@data, $opt{d}) or die "Can't store '\@data' in '$opt{d}'\n";
}
```

And the resultant file is just 8MB rather than 80MB. I started to modify the rest of the code to use it, but then got lost and gave up; still, arrays should be quicker than hashes.

Note: I rewrote it in my own style as a mechanism for understanding it.

I do it myself all the time. (I hate what you did with it! But that's a (probably pointless) argument for another time. :)

Assuming I understood it correctly

You're spot on.

there isn't a lot of room for optimizations.

It only takes 5 seconds to build the index, so that doesn't seem to warrant the effort.

And if I feed it the top 7 characters ordered by frequency in the dictionary, esiarnt (which should be close to the worst case), it only takes 0.7 of a second to find the 243 compliant words, so there's not much reason to optimise there either.

it is quite beautiful

I think that Perl's bitwise logical operators are one of its best-kept secrets.

They lend themselves to performing a huge variety of set-type operations, very quickly, in a single operation. Effectively 'in parallel'. They can perform a million SELECT-style operations on sets of 8000 items in 1.3 seconds:

```
$a = ~( $b = chr(0) x 1e3 );
$t = time;
my $x = $a ^ $b for 1 .. 1e6;
printf "Took %.6f seconds\n", time() - $t;

Took 1.297000 seconds
```

Or a thousand such operations upon sets of 8 million items in 1.7 seconds:

```
$a = ~( $b = chr(0) x 1e6 );
$t = time;
my $x = $a ^ $b for 1 .. 1e3;
printf "Took %.6f seconds\n", time() - $t;

Took 1.645000 seconds
```

And storage usage doesn't get much better than 1-bit per item. I've never actually looked to see how tightly coded the underlying opcodes are. I've never found the need to.

Maybe I should one day though, as these tend to be the dark corners of the codebase that never get revisited. It's quite possible that they are currently written to operate byte-by-byte, which is generally far slower than dealing with data in register-sized chunks. With the advent of 64-bit registers, it is quite possible that they could be sped up significantly.

Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.
Re: Turning A Problem Upside Down
by QM (Parson) on Aug 21, 2009 at 21:29 UTC

It's interesting to follow the development of a hard problem like this, to see what works and what doesn't, and more importantly to understand why.

This particular case appears to be a less strict version of the Scrabble problem (adding your letters to the existing board, instead of just forming words). I've looked at that space a bit myself.

A Word Twister solution might be
(1) create a sorted anagram lookup for the dictionary,
(2) join the anagram keys into a string with an appropriate separator,
(3) determine the letter counts in the given letters,
(4) create regex snippet strings for each letter count (e.g., "a{0,3}"),
(5) permute the regex snippets into larger regex strings,
(6) match the anagram key string against the larger regex strings,
(7) print the resulting words
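Steps (3) and (4) of the outline above might be sketched like this (a hypothetical fragment of mine, not QM's actual code):

```
use strict;
use warnings;

# Steps (3) and (4): count each letter of the given string, then build
# per-letter regex snippets such as "o{0,2}" bounding how many of that
# letter an anagram key may consume.
my $given = 'posterboy';
my %count;
$count{$_}++ for split //, $given;
my @snippets = map { sprintf '%s{0,%d}', $_, $count{$_} } sort keys %count;
print join('', @snippets), "\n";   # b{0,1}e{0,1}o{0,2}p{0,1}r{0,1}s{0,1}t{0,1}y{0,1}
```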

Maybe something like this:

Using the word list at http://perl.plover.com/qotw/words/Web2.gz, with posterboy as the given:

```
> time word_twister.pl posterboy web2* >! posterboy.txt
729 words in dictionary (after filtering)
Creating dictionary anagrams
512 anagram keys
Creating given regexes
Permuting given regexes
40320 regex permutations
Matching anagram keys
460888 matching anagram keys
Looking up matching words
763232 words matched
137.060u 0.290s 2:20.51 97.7%   0+0k 0+0io 2871pf+0w
```

Now this is pretty slow at filtering the dictionary against the given, and even slower for longer given strings. Improvements welcome.

For your particular case, you might want it to loop, asking for a new given, and not reading in the dictionary every time. (But that doesn't save much work, as the dictionary filtering against the given is expensive the way I've done it here.)

-QM
--
Quantum Mechanics: The dreams stuff is made of

I realized I didn't need the permutation either, so the outline becomes

A Word Twister solution might be
(1) create a sorted anagram lookup for the dictionary,
(2) join the anagram keys into a string with an appropriate separator,
(3) determine the letter counts in the given letters,
(4) create regex snippet strings for each letter count (e.g., "a{0,3}"),
(5) match the given regex string against the monster dictionary anagram string,
(6) lookup the words from the matched anagrams,
(7) print the resulting words

I've also been toying with this idea more, and realized the previous result was bogus. So I've fixed that and cleaned it up:

-QM
--
Quantum Mechanics: The dreams stuff is made of

Node Type: perlmeditation [id://790206]
Approved by planetscape
Front-paged by planetscape