Recently I was reading through a thread about games, and it reminded me of a story I had read about the gentleman who invented the Connect Four gaming machine.

It seems that when he tried to solve the game going forwards, playing move by move and attempting to compute all possible moves, it became a staggering task that defied the computational resources available to gaming machines. This sprang to mind reading tilly's comment regarding doing something similar with Go or Othello:

Be warned. While you might build a prototype in Perl, you will not have any possibility of succeeding in your eventual goal without rewriting in a different language. Perl is just too wasteful on memory to be great at this task.

I quite agree.

So... Keeping that in mind, the resolution to the Connect Four problem was reached by computing the problem backwards.

What I was considering was to start by computing the combinatorial variations that define the set of winning board states. This would consist, for example, of more than 50% of the board consisting of a single color when completely filled. This limits the computational space so that not every potential state need be explored.
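
To make that concrete, here is a rough sketch in Python (rather than Perl, purely for brevity) of what "the set of winning board states" is built from: the four-in-a-row lines on the standard 7x6 Connect Four board. This is my own illustration, not the original solver's code.

```python
# Enumerate every four-in-a-row line on a standard 7x6 Connect Four
# board. Any winning board state must contain at least one of these.

COLS, ROWS = 7, 6

def winning_lines():
    """Return all sets of four cells forming a win, as (col, row) tuples."""
    lines = []
    for c in range(COLS):
        for r in range(ROWS):
            # right, up, up-right diagonal, down-right diagonal
            for dc, dr in ((1, 0), (0, 1), (1, 1), (1, -1)):
                cells = [(c + i * dc, r + i * dr) for i in range(4)]
                if all(0 <= x < COLS and 0 <= y < ROWS for x, y in cells):
                    lines.append(tuple(cells))
    return lines

def is_win(board, player):
    """board maps (col, row) -> 'X' or 'O'; missing keys are empty cells."""
    return any(all(board.get(cell) == player for cell in line)
               for line in winning_lines())
```

There are exactly 69 such lines (24 horizontal, 21 vertical, 24 diagonal), which is what makes enumerating the winning *patterns*, as opposed to the full set of winning *positions*, tractable.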

Work backwards from these to define potential paths to attain them. This is a one-time operation which will require vast amounts of storage and a long time to run. (This is where it gets fun... or completely breaks down.)

The results are then subject to statistical analysis, which is once again a one-time operation.

By thereafter resolving which board states are most common and which paths are shortest to a winning resolution, you create a string of potential board-state targets to try to attain at every move. In this case a move is defined as a search space related to the number of empty spaces left on the board.

My limited understanding of this subject leads me to believe that this would turn the task into more of a pattern-matching problem layered on top of playing the game.

This has something to do with human thinking also, since our brains are better at pattern recognition than repetitious computation.

The question then becomes how to filter out the 'line noise' of the competitor, who will naturally not take the path that is optimal for his/her opponent to win. That is where I see the traditional min-max tree approach taking hold, trying to return the state of the board to one which leads to a winning combination.
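
The min-max idea can be sketched in miniature. This Python toy (my own, and on tic-tac-toe rather than the far larger games discussed here) shows how the opponent's 'line noise' is handled: at every level you assume the other side picks the reply that is worst for you.

```python
# Negamax (the symmetric form of min-max) on tic-tac-toe. Board is a
# flat list of 9 cells holding 'X', 'O', or None; the score is always
# from the point of view of the side to move.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def negamax(board, player):
    """Value of `board` for `player` (to move): +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w:
        return 1 if w == player else -1
    if all(board):
        return 0  # board full, no winner: draw
    other = 'O' if player == 'X' else 'X'
    best = -1
    for i in range(9):
        if not board[i]:
            board[i] = player
            best = max(best, -negamax(board, other))  # opponent minimizes us
            board[i] = None
    return best
```

From the empty board this returns 0, the well-known result that perfect play draws; the approach is exact but clearly cannot scale to Go-sized search spaces.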

Something to play with anyway...

Wait! This isn't a Parachute, this is a Backpack!

Re: Looking backwards to GO forwards
by dragonchild (Archbishop) on Sep 05, 2001 at 16:45 UTC
    Unfortunately, Go is a game that has no easily-defined end-state. Unlike Connect-Four or Chess or Othello (or pretty much every other game most people know), a game of Go ends when both players agree it ends. Yes, there is the possibility that all 361 points (on a 19x19 board, which is tournament size) could be filled, but the chances of that happening are so slight that it's a non-issue.

    There are a number of excellent Go tutorials out there, for people who don't know the rules, and I would suggest you read up on them. It's a fascinating game. I mean, the game has only eight(!) rules!

    1. Black moves first onto an empty board.
    2. You can place a piece on any open spot on the board (save when noted below).
    3. You capture a group of pieces when there is no spot on the board that the opponent can play that would connect to that group, either horizontally or vertically. (These are called liberties.)
    4. The edge of the board does not count as a liberty for anyone.
    5. You cannot repeat the same board position with the same person to move.
    6. You cannot move so that you occupy the last liberty of your own group.
    7. Anyone may pass their turn at any time.
    8. A game ends when both players pass in succession.
    That's it for the game. Everything else is dependent on your choice of how to score a game. Most games are scored using a territory-surrounded + pieces-captured formula. Territory that you control is defined as any open points that can only trace a path to the edge of the board or to your pieces. Any open space that can trace back to pieces of both sides isn't controlled by anyone.
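
    Rules 3 and 4 are mechanical enough to sketch directly. Here is a rough Python illustration (mine, not taken from any Go program) of flood-filling a group and collecting its liberties; a group whose liberty set comes up empty is captured, and the board edge contributes nothing:

```python
# Find the group containing `start` and its liberties (empty points
# orthogonally adjacent to the group). Board is a dict mapping
# (x, y) -> 'B' or 'W'; absent keys are empty points.

def group_and_liberties(board, start, size=19):
    color = board[start]
    group, liberties, frontier = {start}, set(), [start]
    while frontier:
        x, y = frontier.pop()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if not (0 <= nx < size and 0 <= ny < size):
                continue  # rule 4: the edge is not a liberty for anyone
            p = board.get((nx, ny))
            if p is None:
                liberties.add((nx, ny))
            elif p == color and (nx, ny) not in group:
                group.add((nx, ny))
                frontier.append((nx, ny))
    return group, liberties
```

    A capture check is then just `len(liberties) == 0` for the group in question.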

    Most games on a 19x19 board generally end after about 150-200 moves. At this point, it is usually clear who has won, but not always. A tournament game of Go is usually untimed (though I think custom dictates a certain limit, which I'm not sure about).

    Thus, retrograde analysis really wouldn't be able to offer much, unfortunately.

    We are the carpenters and bricklayers of the Information Age.

    Vote paco for President!

      I think that you have hit on the key as to why this is such a difficult task: the definition of the winning conditions. You would think that the game is ideally suited to programming, as there are only two types of pieces and it is played on a grid, but how do you determine that a computer does not want to put down another piece, or passes on a move? Any time there is a group of nine points (in a square) there is a possibility of surrounding an opponent and, therefore, a point. Whether or not it makes sense to do so is a matter of the player's opinion, so how do you write code for an opinion? There must be a way to do it. I have been working on this problem for some time and you have shed some light. Thanks for the insight!

      Let us chant our mantras...

      Laziness is a virtue... If you have to do something more than once, automate it!

      When a player cannot move, that means that a certain set of conditions has evolved. There should be recognizable patterns that can be detected. Otherwise, how would we as human thinking machines play the game?

      What you say is true, so we are adding another layer of complexity to the problem. This will make the solution set larger and more difficult to define, since an iterative approach will be necessary to find every winning combination, containing every possible 'blocked' pattern.

      Therein lies the inherent challenge in Go.

      But also, commenting on Johnny Golden's reply: these 'opinions' you mention might also be described as stratagems. A way to capture this sort of hint-based, event/decision-driven system is generally described as an expert system, wherein expert methodology and rulesets are compiled for automatic response with some degree of subtlety.

      Wait! This isn't a Parachute, this is a Backpack!

        The crux of the problem is that there is a huge disjunct between patterns that we can recognize and act on, and patterns that we can describe in sufficient detail to train a computer to act on.

        Bridging this gap is the main goal of AI research.

        As so-called expert systems demonstrate, it is possible in many cases to straightforwardly break down complex decisions into many simple ones. And so a computer program with a good rule-set can do routine diagnostics quite effectively.
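
        A toy illustration of that expert-system style, with entirely made-up rules (not any real diagnostic product): the complex decision is broken into simple condition-to-conclusion rules that fire repeatedly until nothing new can be derived, i.e. forward chaining:

```python
# A minimal forward-chaining rule engine. Each rule is a predicate over
# the current set of facts plus the conclusion it licenses; the engine
# keeps applying rules until the fact set stops growing.

RULES = [
    (lambda f: 'fever' in f and 'cough' in f, 'respiratory infection'),
    (lambda f: 'respiratory infection' in f and 'short of breath' in f,
     'see a doctor'),
]

def forward_chain(facts):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for cond, conclusion in RULES:
            if conclusion not in facts and cond(facts):
                facts.add(conclusion)
                changed = True
    return facts
```

        The point is the architecture, not the rules: the knowledge lives in the rule table, which domain experts can extend without touching the engine.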

        As chess-playing computers demonstrate, it is possible in many cases to give the computer very little general knowledge, but have them at each step work out from first principles how things will happen.

        The problem is that Go has too many states for working much out from first principles to be feasible. You are talking about how large the solution set is. Unfortunately the solution set is already far too large for us to calculate statically once, or to calculate on the fly enough of the local decision tree to recognize what a human would see on general principles.

        It isn't a question of just doing it intelligently. It is a question of being able to do it at all.

        At a guess, the problem shares a lot of problems with those we deal with in voice recognition. The raw signal comes in a form that is essentially useless to the task at hand. The rules that people actually use operate at a level which they cannot even think about consciously. What can we do?

        Well we can search for a transformation of the raw signal into alternate signals in which patterns are easier to state. For instance patterns in speech tend to involve features that are localized in both space and frequency. So we search for some way of describing that signal in terms of components that are themselves localized in both space and frequency. And we find that wavelets do that. Using wavelets, our signal is massively clearer. Still a mess, but oh well.

        We then improved on that by finding ways to make wavelet transforms dynamically choose the trade-off between localizing in space and in frequency. We chose the basis with the minimum entropy, meaning the shortest internal description. What did that do? Well, it made the wavelet transform naturally break on the phonemes of speech, and gave each one a recognizable "signature" we could work with. And now we had something to work with.
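
        The flavor of this can be shown with the simplest wavelet of all, the Haar wavelet. This Python sketch (mine, and far cruder than real best-basis selection) does one level of the transform and measures the entropy of the resulting energy distribution; a signal the transform describes compactly scores lower:

```python
import math

def haar_step(signal):
    """One Haar level: pairwise averages (coarse) and differences (detail)."""
    avg = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    det = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return avg, det

def entropy(values):
    """Shannon entropy of the normalized energy distribution of `values`."""
    energy = [v * v for v in values]
    total = sum(energy) or 1.0
    probs = [e / total for e in energy if e > 0]
    return -sum(p * math.log2(p) for p in probs)
```

        For a smooth signal the detail coefficients come out near zero, the energy concentrates in a few coefficients, and the entropy drops; minimum-entropy best-basis selection generalizes this by choosing, per region, the decomposition with the shortest description.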

        Something similar is true with games. Humans don't just sit down and learn a bunch of rules. Humans engage in the twin activities of learning how to recognize patterns, and learning how to react to them.

        A good player of the board game of your choice does not merely know more rules about that board, they actually see a different board. For instance consider the task of memorizing a chessboard. If you take good chess players and poor chess players, and hand them a board with pieces randomly arranged, they perform about equally. If you take both and put down random positions from middle-games of real tournament games, the good chess players are substantially faster and more accurate at memorizing the board.

        Why? Well, when I heard about this, we didn't know for sure. But here is my best guess: the good chess player has developed an internal model of pieces on a board that allows them to describe the board efficiently. (The efficiency of our internal descriptions is highly significant in memorization tasks. For instance, native Welsh speakers can memorize more numbers if asked to do it in English than if they are asked in Welsh. That is because the same number said in Welsh has more syllables.) How efficiently? Well, some components are obvious because they are features you have learned to analyze; others because they are features characteristic of the opening.

        Therefore I would guess that there will turn out to be a deep connection between having a computer that can play Go well, and the task of describing positions which arise in real Go games efficiently...

        It's not a matter of when a player cannot move ... it's when a player should not move. Your list of "winning" positions is calculable in theory, but not in practice. My initial reaction is that you would have to go forwards to find the positions, then go backwards.

        One of the bigger issues is that Go, unlike Chess, has a number of equivalent positions. It's not just that the board is rotatable (which quadruples the number of "winning positions"): you could also swap the NE and SW corners and have the same "winning position", or rotate a subsection of the board.
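
        The rotation/mirror part of that is straightforward to exploit when storing positions (the corner-swap and subsection cases would need extra work, and may not be exact symmetries in general). A rough Python sketch of my own: generate the eight standard symmetries of a square board and keep the lexicographically smallest as the canonical form, so equivalent positions share one table entry.

```python
# Canonicalize a square board (list of rows) under the dihedral group:
# four rotations, each with and without a horizontal mirror.

def rotate(board):
    """Rotate a list-of-rows board 90 degrees clockwise."""
    return [list(row) for row in zip(*board[::-1])]

def symmetries(board):
    variants = []
    b = [list(row) for row in board]
    for _ in range(4):
        variants.append(b)
        variants.append([row[::-1] for row in b])  # mirrored copy
        b = rotate(b)
    return variants

def canonical(board):
    """The lexicographically smallest of the eight symmetric variants."""
    return min(symmetries(board))
```

        Any two boards in the same orbit produce the same canonical form, which is how a "winning position" table avoids storing each position up to eight times.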

        The rules and starting position of Chess dictate that a huge majority of the potential board arrangements can never be reached, which is why it's a much simpler game.

        We are the carpenters and bricklayers of the Information Age.

        Vote paco for President!

Re: Looking backwards to GO forwards
by Hofmator (Curate) on Sep 05, 2001 at 13:49 UTC

    The method you mention here is called retrograde analysis and is applied, for example, to chess endgames. There have been quite a few advances in recent years, proving for example a forced mate in 223 (!) moves from any starting position for a six-piece endgame with king, rook and bishop defeating king and two knights.
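
    For readers who haven't met the term, here is retrograde analysis in miniature (my own toy, not an endgame tablebase): in the subtraction game "take 1 to 3 stones, taking the last stone wins", the terminal position is labeled first and every other position is derived backwards from positions it can reach.

```python
# Retrograde win/loss labeling for the subtraction game. A position is
# a win for the side to move iff some legal move leads to a position
# already labeled a loss for the opponent.

def solve(max_stones):
    win = [False] * (max_stones + 1)  # win[0] = False: no move, you lose
    for n in range(1, max_stones + 1):
        win[n] = any(not win[n - take] for take in (1, 2, 3) if take <= n)
    return win
```

    This reproduces the known result that multiples of four are losing positions; chess tablebases do exactly this over astronomically more positions, which is why they stop at a handful of pieces.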

    -- Hofmator