This week I worked on adding some randomness to the Tron AI. The first random element is strategy selection: the AI randomly chooses one of three strategies, follows it for a set amount of time (I have been experimenting with giving some strategies more time to run than others), and then switches to another. The first two are the aggressive and evasive methods I mentioned in my last post. The third chooses a random direction to move in, and I'm still having a few problems with it. One problem is deciding how long to let it make random moves, since these can easily result in the AI trapping itself. The other is that it will still sometimes run straight into a wall. Beyond that, the AI almost never wins and sometimes kills itself, but it will often tie the human player because of the way I compute the straight-line distance to the human. That isn't so bad: it effectively turns this Tron implementation into a cat-and-mouse sort of game. I will be trying a few more things tonight, but this is probably close to the finished product for now.
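As a rough illustration of the rotation logic, here is a minimal sketch in plain Java. The strategy names, frame budgets, and the idea of giving the random walk a shorter budget are my assumptions for the example, not the actual project code.

```java
import java.util.Random;

// Sketch of the strategy-rotation idea: pick one of three strategies at
// random, run it for a per-strategy budget of frames, then pick again.
// Names and budgets are illustrative, not the real project's values.
public class StrategyRotator {
    public enum Strategy { AGGRESSIVE, EVASIVE, RANDOM_WALK }

    private final Random rng = new Random();
    private Strategy current;
    private int framesLeft;

    // Shorter budget for the random walk, since it can trap itself quickly.
    private int budgetFor(Strategy s) {
        return s == Strategy.RANDOM_WALK ? 20 : 60;
    }

    // Called once per game frame; returns the strategy to follow this frame.
    public Strategy next() {
        if (current == null || framesLeft <= 0) {
            current = Strategy.values()[rng.nextInt(Strategy.values().length)];
            framesLeft = budgetFor(current);
        }
        framesLeft--;
        return current;
    }
}
```

In a Processing sketch this would be called once per `draw()` frame to decide which movement routine runs that frame.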
I now have a working aggressive AI. Maybe too aggressive. The bot tries each of the possible next moves and checks which one has the shortest straight-line distance to the other player, then moves in that direction as long as it isn't into a wall or a reversal of direction. This means the AI chases me down right away, and I am forced to try evading it. However, it will almost always sacrifice itself to defeat me. I will try to solve this by having it behave differently when it is close to me. This strategy won't work in all cases, but so far it appears to yield good results.
Also, switching it to an evasive bot by maximizing the straight-line distance instead works well, but that bot encloses itself more easily, forcing me to waste time while it slowly but surely dies.
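The aggressive and evasive behaviors differ only in whether they minimize or maximize the distance score, so they can share one routine. Here is a sketch of that greedy heuristic; the grid representation (a `walls` array with the border cells marked as walls) and all the names are illustrative assumptions, not the actual project code.

```java
// Score each legal next move by straight-line distance to the opponent,
// then take the minimum (aggressive) or maximum (evasive).
public class GreedyBot {
    // dx/dy offsets for north, east, south, west
    static final int[] DX = {0, 1, 0, -1};
    static final int[] DY = {-1, 0, 1, 0};

    static double dist(int x1, int y1, int x2, int y2) {
        return Math.hypot(x1 - x2, y1 - y2);
    }

    // walls[x][y] is true where a trail or border blocks movement;
    // the outermost rows and columns are assumed to be marked as walls.
    // lastDir is the bot's previous direction; reversing is never legal.
    // Returns the chosen direction index, or -1 if no move is legal.
    static int chooseMove(boolean[][] walls, int bx, int by,
                          int hx, int hy, int lastDir, boolean aggressive) {
        int best = -1;
        double bestScore = aggressive ? Double.MAX_VALUE : -1;
        for (int d = 0; d < 4; d++) {
            if (d == (lastDir + 2) % 4) continue;        // no reversing
            int nx = bx + DX[d], ny = by + DY[d];
            if (walls[nx][ny]) continue;                 // blocked
            double s = dist(nx, ny, hx, hy);
            if (aggressive ? s < bestScore : s > bestScore) {
                bestScore = s;
                best = d;
            }
        }
        return best;
    }
}
```

Flipping the single `aggressive` flag is what turns the chaser into the evader described above.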
While looking for backgrounds for my blog I found this. It's a tutorial for making a 3D model and a short video clip of a light cycle. I might give it a try after the school year.
Still working on understanding the code and the provided methods from https://code.google.com/p/tron-ai-challenge/. Right now the AI just turns left or right when it hits a wall, and I am working on prioritizing the direction of the turn so that it tries to move away from me. This will likely be the foundation for the rest of the algorithm. My goal is to implement a Voronoi-territory and minimax approach, but so far my AI only looks one move ahead for a wall. I currently have a bug where the game freezes when the AI gets into a corner. Hope I figure that out soon.
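A sketch of this one-move lookahead with turn prioritization, assuming the same hypothetical `walls` grid representation as before (border cells marked as walls). It also shows why the corner case needs explicit handling: when both turns are blocked, there is no legal move left, which is exactly the situation where a naive loop could spin forever.

```java
// Keep going straight; when the next cell is blocked, turn left or right,
// preferring whichever turn ends farther from the human player.
public class WallAvoider {
    static final int[] DX = {0, 1, 0, -1};   // N, E, S, W
    static final int[] DY = {-1, 0, 1, 0};

    static int chooseDir(boolean[][] walls, int x, int y, int dir,
                         int humanX, int humanY) {
        // If straight ahead is open, keep going.
        if (!walls[x + DX[dir]][y + DY[dir]]) return dir;
        int left = (dir + 3) % 4, right = (dir + 1) % 4;
        boolean leftOpen = !walls[x + DX[left]][y + DY[left]];
        boolean rightOpen = !walls[x + DX[right]][y + DY[right]];
        if (leftOpen && rightOpen) {
            // Prefer the turn whose next cell is farther from the human.
            double dl = Math.hypot(x + DX[left] - humanX, y + DY[left] - humanY);
            double dr = Math.hypot(x + DX[right] - humanX, y + DY[right] - humanY);
            return dl >= dr ? left : right;
        }
        if (leftOpen) return left;
        if (rightOpen) return right;
        return dir; // boxed into a corner: no legal move (the freeze case)
    }
}
```

Returning something explicit in the all-blocked branch, instead of looping until an open cell appears, is one way a corner could freeze the game if it isn't handled.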
TRON LightCycle Game
My project will be to construct an AI to play against a human in a game of light cycles. This is similar to two-player Snake, except that there is no food and your tail doesn't grow. Your bike leaves a trail of light behind it, and that trail doesn't dissolve over time. A player loses by getting trapped in their own trail, running into an opponent's trail, or hitting a wall at the edge of the grid.
Google sponsored an AI challenge with the University of Waterloo Computer Science Club in which contestants designed an AI on top of a provided starter-pack Tron game. I have been trying to locate this code, although I found another similar contest where everything is already set up and there is a separate AI driver class, which makes it easy to start designing an AI.
Currently I have a working version of Tron light cycles in Eclipse, and I have tried a tunnel-vision AI, which looks for the direction in which it can travel furthest, and an AI that simply mimics the human player's turns. I look forward to experimenting with some different ideas.
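The tunnel-vision idea can be sketched in a few lines: walk outward in each direction, count open cells until a wall, and head toward the longest run. The grid representation (border cells marked as walls) is again an assumption for the example, not the actual sketch code.

```java
// "Tunnel vision" bot: for each direction, count how many cells are open
// in a straight line, then move toward the longest open run.
public class TunnelVisionBot {
    static final int[] DX = {0, 1, 0, -1};   // N, E, S, W
    static final int[] DY = {-1, 0, 1, 0};

    static int chooseDir(boolean[][] walls, int x, int y) {
        int bestDir = 0, bestRun = -1;
        for (int d = 0; d < 4; d++) {
            int run = 0, nx = x + DX[d], ny = y + DY[d];
            while (!walls[nx][ny]) {          // count open cells in a line
                run++;
                nx += DX[d];
                ny += DY[d];
            }
            if (run > bestRun) { bestRun = run; bestDir = d; }
        }
        return bestDir;
    }
}
```

Its obvious weakness is that a long open line can still lead into a dead-end region, which is what the territory-based approaches below try to account for.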
From reading forums and searching Google, I see that many contestants' bots in the Google AI challenge used the minimax search algorithm, some with pruning and some without, and many also used a Voronoi territory approach: partitioning the board into the squares the human can reach first, the squares the bot can reach first, and those both can reach in the same number of moves. This was the approach of the contestant whose blog I mentioned in the last post.
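The territory partition described above can be computed with two breadth-first searches, one from each player, comparing the resulting distances cell by cell. Here is a sketch under the same assumed grid representation; the scoring convention (positive means the bot controls more cells) is my choice for illustration.

```java
import java.util.ArrayDeque;

// Voronoi territory heuristic: BFS from both players, then count the
// cells each would reach first. Ties count for neither player.
public class VoronoiCounter {
    static int[][] bfs(boolean[][] walls, int sx, int sy) {
        int w = walls.length, h = walls[0].length;
        int[][] dist = new int[w][h];
        for (int[] row : dist) java.util.Arrays.fill(row, Integer.MAX_VALUE);
        ArrayDeque<int[]> q = new ArrayDeque<>();
        dist[sx][sy] = 0;
        q.add(new int[]{sx, sy});
        int[] dx = {0, 1, 0, -1}, dy = {-1, 0, 1, 0};
        while (!q.isEmpty()) {
            int[] c = q.poll();
            for (int d = 0; d < 4; d++) {
                int nx = c[0] + dx[d], ny = c[1] + dy[d];
                if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                if (walls[nx][ny] || dist[nx][ny] != Integer.MAX_VALUE) continue;
                dist[nx][ny] = dist[c[0]][c[1]] + 1;
                q.add(new int[]{nx, ny});
            }
        }
        return dist;
    }

    // Positive score: the bot controls more territory than the human.
    static int score(boolean[][] walls, int botX, int botY, int humX, int humY) {
        int[][] db = bfs(walls, botX, botY);
        int[][] dh = bfs(walls, humX, humY);
        int score = 0;
        for (int x = 0; x < walls.length; x++)
            for (int y = 0; y < walls[0].length; y++) {
                if (walls[x][y]) continue;
                if (db[x][y] < dh[x][y]) score++;        // bot reaches it first
                else if (dh[x][y] < db[x][y]) score--;   // human reaches it first
                // equal distances (or both unreachable) count for neither
            }
        return score;
    }
}
```

In the contest write-ups, a score like this typically serves as the leaf evaluation inside a minimax search rather than being used on its own.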
Daniel brought it to my attention that creating an AI for a TRON game was a recent Google AI challenge.
At https://code.google.com/p/tron-ai-challenge/ you can find code for a TRON game and help with creating an AI.
http://aichallenge.org/ is where the AI challenge is posted, although the link for TRON is dead.
I also found a neat example of someone who created a TRON implementation in the functional programming language Haskell. The algorithm uses a tree structure with alpha-beta pruning and an iterative-deepening search.
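For reference, here is a generic sketch of the minimax-with-alpha-beta-pruning search that project uses (its iterative-deepening wrapper omitted). The `Node` interface is a toy stand-in of my own, not the Haskell project's representation; in a real Tron bot the leaves would be game states scored by a heuristic such as the territory count.

```java
import java.util.List;

// Minimax with alpha-beta pruning over an abstract game tree.
public class AlphaBeta {
    interface Node {
        boolean isLeaf();
        int value();                 // heuristic score at a leaf
        List<Node> children();
    }

    static int search(Node n, int alpha, int beta, boolean maximizing) {
        if (n.isLeaf()) return n.value();
        if (maximizing) {
            int best = Integer.MIN_VALUE;
            for (Node c : n.children()) {
                best = Math.max(best, search(c, alpha, beta, false));
                alpha = Math.max(alpha, best);
                if (beta <= alpha) break;   // prune: opponent won't allow this
            }
            return best;
        } else {
            int best = Integer.MAX_VALUE;
            for (Node c : n.children()) {
                best = Math.min(best, search(c, alpha, beta, true));
                beta = Math.min(beta, best);
                if (beta <= alpha) break;   // prune symmetrically
            }
            return best;
        }
    }

    // Small helpers for building toy trees.
    static Node leaf(int v) {
        return new Node() {
            public boolean isLeaf() { return true; }
            public int value() { return v; }
            public List<Node> children() { return List.of(); }
        };
    }

    static Node inner(Node... cs) {
        return new Node() {
            public boolean isLeaf() { return false; }
            public int value() { throw new UnsupportedOperationException(); }
            public List<Node> children() { return List.of(cs); }
        };
    }
}
```

Iterative deepening just calls a depth-limited version of this search with depth 1, 2, 3, ... until a time budget runs out, keeping the best result found so far.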
The fact that this was used in a big competition should make it easier to find examples of a few different search strategies.
I spent time last week trying out the Unity game engine, but I have decided that since I am more familiar with Processing, I will use that. Unity will still be a nice program to have, though; it can do quite a lot.
Spent some time today looking at implementations of Snake in Processing, and also came across a post about the game I'm trying to make: http://www.processing.org/discourse/beta/num_1131050071.html