Re: Challenge: Design A Better Hangman Algorithm

by NiJo (Friar)
on Jul 15, 2009 at 20:40 UTC ( [id://780472] )


in reply to Challenge: Design A Better Hangman Algorithm

A higher-order approach would be to look at probabilities for chains of letters. E.g. in English the chance of a 'u' should be higher than average after a 'q'.

This is based on the concept of apes using a typewriter to produce texts (Sir Arthur Eddington, 1927). I own a 1988 special edition of Scientific American on computer problems (translated into German), where a BASIC program is discussed that collects statistics from longer input texts and then produces a random text, weighting the probabilities for the next letter by the collected statistics for the N preceding letters. Punctuation and space characters are included as letters in this randomly produced string. At N=4 one entry might be ('Per', 'l', 50%). At N=5 most of the words are correct and resemble the input text, but the sentences are nonsense.

Perl really shines at these applications. The author, Brian Hayes, hints at the large memory requirements of the N-dimensional array, where many fields remain empty. A perfect application for a hash...
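
A minimal sketch of that letter-chain idea in Perl, with the counts kept in a hash (the chain length, the set of characters kept, and reading the training text from STDIN are assumptions for illustration, not details from the article):

    use strict;
    use warnings;

    my $N    = 4;                      # length of the letter chain used as context
    my $text = lc join '', <STDIN>;    # training text read from STDIN
    $text =~ s/[^a-z .,]//g;           # keep letters, spaces and basic punctuation

    # Count how often each letter follows each N-letter context.
    my %next;                          # $next{$context}{$letter} = count
    for my $i ( 0 .. length($text) - $N - 1 ) {
        my $context = substr $text, $i, $N;
        my $letter  = substr $text, $i + $N, 1;
        $next{$context}{$letter}++;
    }

    # Produce random text, weighting each next letter by its observed count.
    my $out = substr $text, 0, $N;
    for ( 1 .. 500 ) {
        my $context = substr $out, -$N;
        my $counts  = $next{$context} or last;
        my $total   = 0;
        $total += $_ for values %$counts;
        my $pick = int rand $total;
        for my $letter ( sort keys %$counts ) {
            $pick -= $counts->{$letter};
            if ( $pick < 0 ) { $out .= $letter; last }
        }
    }
    print "$out\n";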


Replies are listed 'Best First'.
Re^2: Challenge: Design A Better Hangman Algorithm
by Limbic~Region (Chancellor) on Jul 15, 2009 at 22:08 UTC
    NiJo,
    There is value in what you are suggesting - in fact, I suggested the very same thing to Lawliet before working out my solution. Let me explain why it isn't as great an idea as you might think. After each guess (right or wrong), the candidate list is pruned to only the possible remaining words. If 'q' is guessed and is correct, then the next time around 'u' will already jump to the top of the list of letters most likely to be correct. You don't need any special knowledge about letter frequency (alone or in chains) to get this benefit.

    In other words, the data you are presented with already takes into consideration only the words that can possibly fit given the letters provided, and frequency analysis has been done on them. If you have something else in mind though, I am both open to and interested in hearing it. A sketch of this pruning-plus-frequency idea follows below.
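
    A minimal sketch of the pruning-plus-frequency idea (the word list, the underscore pattern format, and the function name are illustrative assumptions, not the actual solution):

        use strict;
        use warnings;

        sub best_guess {
            my ( $pattern, $guessed, @candidates ) = @_;    # e.g. 'q___', { q => 1 }

            # Prune: keep only words where the revealed letters match and no
            # already-guessed letter hides behind a blank.
            my $blank = %$guessed ? '[^' . join( '', keys %$guessed ) . ']' : '.';
            my $re    = join '', map { $_ eq '_' ? $blank : $_ } split //, $pattern;
            my @alive = grep { /^$re$/ } @candidates;

            # Count, for each unguessed letter, how many remaining words contain it.
            my %in_words;
            for my $word (@alive) {
                my %seen = map { $_ => 1 } split //, $word;
                $in_words{$_}++ for grep { !$guessed->{$_} } keys %seen;
            }

            # The letter that appears in the most remaining words is the next guess.
            my ($best) = sort { $in_words{$b} <=> $in_words{$a} || $a cmp $b }
                keys %in_words;
            return $best;
        }

        # After a correct 'q', nearly every remaining word contains a 'u',
        # so 'u' rises to the top without any chain statistics.
        my @words = qw(quit quiz quay raft);
        print best_guess( 'q___', { q => 1 }, @words ), "\n";    # prints 'u'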

    Cheers - L~R

Re^2: Challenge: Design A Better Hangman Algorithm
by JavaFan (Canon) on Jul 16, 2009 at 10:09 UTC
    A higher-order approach would be to look at probabilities for chains of letters. E.g. in English the chance of a 'u' should be higher than average after a 'q'.
    That's the wrong way of approaching it. One shouldn't look at the frequency of chains of letters in the language as a whole, but at the frequency of chains of letters in the list of words that may still match. But then, I don't see where this algorithm gives you any better guesses than looking for the letter that occurs in the most words. If, for instance, the current guess has a 'Q', then almost any word in the list of possible matches will contain a 'U'.

    Now, I'd be surprised to see an efficient hangman guessing algorithm guess 'Q' before guessing 'U'.
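
    A quick way to check that claim against a word list (the dictionary path is an assumption and varies by system):

        use strict;
        use warnings;

        # Count how many dictionary words containing a 'q' also contain a 'u'.
        open my $fh, '<', '/usr/share/dict/words' or die "no word list: $!";
        my ( $q_words, $qu_words ) = ( 0, 0 );
        while (<$fh>) {
            next unless /q/i;
            $q_words++;
            $qu_words++ if /u/i;
        }
        printf "%d of %d words with a 'q' also have a 'u'\n", $qu_words, $q_words;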
