ABSTRACT

May 11, 1997, was a watershed moment in the history of artificial intelligence (AI): IBM's chess-playing supercomputer, Deep Blue, defeated the world chess champion, Garry Kasparov. It was the first time a machine had beaten a reigning world champion in a match played under standard tournament conditions. Fast-forward 19 years to March 9, 2016, when DeepMind's AlphaGo beat the world Go champion Lee Sedol. AI again stole the spotlight and generated a media frenzy. This time, a different kind of AI, machine learning (ML), was the driving force behind the game strategies.

What exactly is ML? How is it related to AI? Why is deep learning (DL) so popular these days? This book explains how traditional rule-based AI and ML work and how to implement them in everyday games such as Last Coin Standing, Tic Tac Toe, and Connect Four. Because the rules of these three games are easy to code, readers can learn rule-based AI, deep reinforcement learning, and, more importantly, how to combine the two into powerful game strategies (the whole is indeed greater than the sum of its parts) without getting bogged down in complicated game rules.

Implementing rule-based AI and ML in these straightforward games is quick and not computationally intensive. Consequently, game strategies can be trained in minutes or hours without GPUs or supercomputing facilities, while still showcasing AI's ability to achieve superhuman performance. More importantly, readers will gain a thorough understanding of the principles behind rule-based AI, such as the MiniMax algorithm, alpha-beta pruning, and Monte Carlo Tree Search (MCTS), and learn how to integrate these with cutting-edge ML techniques such as convolutional neural networks and deep reinforcement learning, so they can apply the methods in their own fields and tackle real-world challenges.
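As a taste of the first of these ideas, below is a minimal sketch of MiniMax with alpha-beta pruning in Python, applied to a simplified coin game. The rules assumed here (players alternate removing 1 or 2 coins from a pile of 21, and whoever takes the last coin wins) are an illustrative assumption, not necessarily the exact variant used in the book; the sketch is meant only to show how pruning cuts off branches the opponent would never allow.

```python
def minimax(coins, alpha, beta, maximizing):
    """Score the position for the maximizing player: +1 win, -1 loss.

    Assumed rules (for illustration only): remove 1 or 2 coins per
    turn; the player who takes the last coin wins.
    """
    if coins == 0:
        # The previous player took the last coin and won, so the
        # side to move now has lost.
        return -1 if maximizing else 1
    if maximizing:
        best = float("-inf")
        for take in (1, 2):
            if take <= coins:
                best = max(best, minimax(coins - take, alpha, beta, False))
                alpha = max(alpha, best)
                if alpha >= beta:   # beta cutoff: prune remaining moves
                    break
        return best
    else:
        best = float("inf")
        for take in (1, 2):
            if take <= coins:
                best = min(best, minimax(coins - take, alpha, beta, True))
                beta = min(beta, best)
                if beta <= alpha:   # alpha cutoff: prune remaining moves
                    break
        return best

# Under these rules, any pile that is a multiple of 3 is a loss for
# the player to move, so 21 coins scores -1 for the first player.
print(minimax(21, float("-inf"), float("inf"), True))  # -1
```

The same search skeleton carries over to Tic Tac Toe and Connect Four; only the move generator and the terminal-position test change.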

Written with clarity from the ground up, this book appeals to general readers and industry professionals who want to learn about rule-based AI and deep reinforcement learning, as well as to students and educators in computer science and programming courses.

Part I | 140 pages

Rule-Based AI

Chapter 1 | 18 pages

Rule-Based AI in the Coin Game

Chapter 2 | 19 pages

Look-Ahead Search in Tic Tac Toe

Chapter 3 | 23 pages

Planning Three Steps Ahead in Connect Four

Chapter 4 | 11 pages

Recursion and MiniMax Tree Search

Chapter 5 | 15 pages

Depth Pruning in MiniMax

Chapter 6 | 17 pages

Alpha-Beta Pruning

Chapter 7 | 14 pages

Position Evaluation in MiniMax

Chapter 8 | 21 pages

Monte Carlo Tree Search

Part II | 56 pages

Deep Learning

Chapter 9 | 23 pages

Deep Learning in the Coin Game

Chapter 10 | 16 pages

Policy Networks in Tic Tac Toe

Chapter 11 | 15 pages

A Policy Network in Connect Four

Part III | 70 pages

Reinforcement Learning

Chapter 12 | 14 pages

Tabular Q-Learning in the Coin Game

Chapter 13 | 16 pages

Self-Play Deep Reinforcement Learning

Chapter 14 | 23 pages

Vectorization to Speed Up Deep Reinforcement Learning

Chapter 15 | 15 pages

A Value Network in Connect Four

Part IV | 100 pages

AlphaGo Algorithms

Chapter 16 | 16 pages

Implementing AlphaGo in the Coin Game

Chapter 17 | 15 pages

AlphaGo in Tic Tac Toe and Connect Four

Chapter 18 | 18 pages

Hyperparameter Tuning in AlphaGo

Chapter 19 | 17 pages

The Actor-Critic Method and AlphaZero

Chapter 20 | 18 pages

Iterative Self-Play and AlphaZero in Tic Tac Toe

Chapter 21 | 14 pages

AlphaZero in Unsolved Games