# AI Engine

We all play games. They can be small games like Super Mario or Candy Crush, or huge titles like Far Cry, Crysis, or Assassin’s Creed.

But have you ever played a game that changes when you replay the same level? For example, in an FPS, when you return to a place, the enemies may not come from the same direction as last time. That is because the game has dynamic play: either it was programmed to pick a sequence from a pre-defined set, or it actually adapts by itself. We say such games have an “AI Engine” alongside their Game Engine.

These AI Engines are made to learn from the way the game is played, which differs from player to player. In some games we barely feel the effect. However, if the game prompts the player to choose one option out of many and the gameplay has to change accordingly, the developer simply can’t program the game for all those possibilities. Hence, the game has to be programmed to generate the next possibility on its own.

If you’re confused, think of Chess. In a given position, the human player (X) has many possible moves. When X considers a move, he thinks of all the possible replies his opponent could make to each of the moves he is considering. If X is good enough, he can think a few more steps ahead as well. X then plays the (theoretically) “best” move after weighing all those possibilities. The computer must respond, and its response depends on the move X made: if X had played a different move, the computer may have to play differently too. Explicitly programming the game for all the possible moves obviously sounds insane, because it produces a HUGE, exponentially growing decision tree.

So the idea is to calculate the next best move on-the-fly from the current board configuration. Still, if we were to calculate ALL the possible moves needed to win the game, the work would remain exponential. The optimization is to consider only a limited number of steps ahead and pick the best move based on them.

This logic works well for two-player (or multiplayer) theoretical games. Here are some features of such games.

- Two (or a finite number of) players
- Turn based
  - Each player gets a chance after the other.
- Perfect information
  - The rules of the game are well defined (ex: a Bishop can only move along diagonals).
- Deterministic
  - No “luck” is involved; you always know the outcome of a move.
  - Counterexample: Backgammon is played with the help of dice, so the next position is random.
- Zero sum
  - The loss (or gain) of utility of one player is exactly equal to the gain (or loss) of the other player(s).
- Finite
  - The game MUST end at some point.

In order to represent the possible decisions (in this case, moves) that can be taken by a player, and the others’ responses to them, we can use a tree structure.

# Game Tree

This kind of tree is called a “**Game Tree**”. Here are some features of a game tree.

- A deterministic, turn-based, finite game with perfect information can be represented by a game tree.
- The root is the initial position.
- The children of the root are the possible positions after one move.
- The grandchildren of the root are the possible positions after two moves, and so on.
- The leaves represent possible endings.
  - They carry the value (utility) of the ending to each of the players, calculated from the perspective of the player who has the move at that step.
  - Intermediate nodes can carry such values too, but they are mostly not considered, since the leaves already capture their effect.
- In a zero-sum game, what one player gains, the other player loses.
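To make this concrete, here is a sketch in Python that builds the full game tree for a toy game (a tiny Nim variant, assumed here purely for illustration): a pile of sticks, each move removes one or two sticks, and the game ends when the pile is empty. Each node is a position; its children are the positions reachable in one move.

```python
# Game tree for a toy Nim variant (an assumed example): a pile of
# `sticks`, each move removes 1 or 2 of them, the game ends at 0.
# Each node is a position; children are the positions after one move.

def game_tree(sticks):
    """Build the full game tree as nested dicts."""
    if sticks == 0:
        return {"sticks": 0, "children": []}  # leaf: a possible ending
    moves = [1, 2] if sticks >= 2 else [1]
    return {"sticks": sticks,
            "children": [game_tree(sticks - m) for m in moves]}

def count_endings(node):
    """Count the leaves, i.e. the possible endings of the game."""
    if not node["children"]:
        return 1
    return sum(count_endings(c) for c in node["children"])
```

Starting from 3 sticks, the root’s children are the positions with 2 and 1 sticks left, and the tree has three leaves, one per move sequence (1+1+1, 1+2, 2+1).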

But creating a full game tree for a game is really hard. For example, if the average number of possible moves is 35 (as in chess) and the average number of moves needed to end the game is 100, you get 35^100 possible endings. So for games like chess, the practical solution is to build only a partial tree, a few levels deep, and decide the next move from it.
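A quick Python computation shows how hopeless the full tree is (the branching factor of 35 and the 100-move game length are the rough chess estimates above, not exact values):

```python
# Rough number of leaves in a full chess game tree, assuming an
# average branching factor of 35 and games about 100 moves long.
branching_factor = 35
game_length = 100
endings = branching_factor ** game_length

# The count has 155 decimal digits -- far beyond anything
# a computer could ever enumerate.
print(len(str(endings)))
```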

At each level, each player tries to **maximize his own earnings** and **minimize his opponent’s**. Hence, at a given point, a player (X) builds a game tree from his side. That tree has maximize nodes for his turns and minimize nodes for the opponent’s turns. Assuming both players play optimally, **each player tries to maximize his (X’s) earnings and assumes that the opponent will try to minimize X’s earnings**. Each leaf node contains the utility for **Player 1**, so the move he takes should (theoretically) maximize his final utility.

- If the node is a **max** node, the child with the **maximum utility** is selected.
- If the node is a **min** node, the child with the **minimum utility** is selected.

At the root, the child that gives the **best** utility is selected:

- the **maximum utility** if the root is a max node,
- the **minimum utility** if the root is a min node.
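The selection rule above can be sketched as a depth-limited minimax in Python. `GameNode` here is a hypothetical stand-in for a real game state: in a real engine, `children()` would generate the positions reachable by the legal moves, and `utility()` would be the heuristic evaluation discussed in the next section.

```python
# Depth-limited minimax sketch. `GameNode` is a hypothetical
# stand-in for a real game state.

class GameNode:
    def __init__(self, utility=0, children=None):
        self._utility = utility
        self._children = children or []

    def children(self):
        return self._children

    def utility(self):
        # Heuristic value of this position for Player 1.
        return self._utility


def minimax(node, depth, maximizing):
    """Best utility reachable from `node`, looking at most `depth`
    moves ahead; utilities are from Player 1's point of view."""
    kids = node.children()
    if depth == 0 or not kids:
        return node.utility()  # leaf, or the depth cut-off
    if maximizing:
        # Max node: Player 1 picks the child with the maximum utility.
        return max(minimax(c, depth - 1, False) for c in kids)
    # Min node: the opponent picks the child that minimizes it.
    return min(minimax(c, depth - 1, True) for c in kids)
```

For example, a two-level tree whose leaf utilities are (3, 5) under one move and (2, 9) under the other evaluates to 3 at the root: the opponent would answer the second move with the 2, so the first move is safer.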

## Utility

This defines *“how beneficial the current position is to the current player”*. It can be calculated as a heuristic value based on the configuration at the node: we consider many aspects of the position, give them values, and take a weighted sum for each leaf node.

I know you’re confused by that, so let’s consider the following value points for Chess.

- How many pieces I have, and how many my opponent has
  - It’s better to count the pieces of each type separately, since having a rook is far more beneficial than having a pawn.

- How many threats I can make against the opponent’s pieces with the current move, or even the next one
- How many of my pieces are in strong positions
- Whether I can give a “check” on the next move
- Whether I can capture one of the opponent’s pieces on the next move
- Whether I can get any of the opponent’s pieces isolated
- And we have to calculate the corresponding negative points for each of those by considering the opponent’s side.
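As a minimal sketch of such a weighted sum, here is a pure material-count utility in Python. The piece weights are the conventional chess values; the `counts` dictionary format and the function name are assumptions for illustration, and a real engine would add weighted terms for the other criteria above.

```python
# Toy material-count utility for chess, as a weighted sum.
# Piece weights are the conventional values (an assumption here;
# real engines tune these and add positional terms).
PIECE_VALUES = {"pawn": 1, "knight": 3, "bishop": 3,
                "rook": 5, "queen": 9}

def material_utility(my_counts, opponent_counts):
    """Positive when the current player has more material.
    Each argument maps a piece name to how many are on the board."""
    mine = sum(PIECE_VALUES[p] * n for p, n in my_counts.items())
    theirs = sum(PIECE_VALUES[p] * n for p, n in opponent_counts.items())
    return mine - theirs
```

Note that the function is automatically zero-sum: swapping the two arguments flips the sign, which is exactly the property the min nodes in the game tree rely on.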

The weak point of a fixed evaluation criterion is that it will make the same choice for the same board configuration, even in different games.

More sophisticated AI game engines have a learning mechanism and store the winning positions. There are two main ways of doing this.

- Evaluate the board configuration by estimating the probability of winning from the current configuration.
  - The data can come from the same player or even from *other* players.
  - The engine analyses its database of game plays, along with the rankings of those players, and assigns values by matching the pattern the current player has played so far.
  - For example, games played by world champions get more weight than those of ordinary players.
  - The criteria for labelling someone a “good player” are controversial, but such engines use the players with the highest winning rates and the similarities between them (as a graph).
- Evaluate the game play and the current position against well-known strategies.
  - For example, there are several standard opening moves (i.e. the Queen’s Gambit), special moves (i.e. castling, forks), etc.
  - But this is also influenced by the first approach.

For your own game engine, you can define whatever utility function you like, but make sure it is **not too over-realistic**. 😛

In the next article, we will see the Mini-Max algorithm and Alpha-Beta Pruning which can be used to implement this.
