Our goal was to evaluate and quantify the performance of various intelligent agent approaches applied to chess. We explored three approaches:
- Tree traversal;
- Neural networks;
- Reinforcement learning.

With the resources available, our best agent was our Negamax search agent, with an Elo of around 1300. The neural network approach produced an excellent agent for choosing the best move mid-game, but not for playing a complete game. Finally, our results for the reinforcement learning approach are comparable to those initially obtained by the open-source project Leela Zero, i.e., an Elo of around 500.
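As an illustration of the tree-traversal approach, here is a minimal Negamax sketch built on the python-chess package. The material-count evaluation is only a placeholder for illustration, not the heuristic our agent actually uses.

```python
# Minimal Negamax sketch with python-chess. The material-count
# evaluation is a placeholder, not the project's actual heuristic.
import chess

PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9}

def evaluate(board: chess.Board) -> float:
    """Material balance from the side to move's perspective."""
    score = 0
    for piece_type, value in PIECE_VALUES.items():
        score += value * len(board.pieces(piece_type, board.turn))
        score -= value * len(board.pieces(piece_type, not board.turn))
    return score

def negamax(board: chess.Board, depth: int) -> float:
    if board.is_checkmate():
        return -float("inf")          # the side to move has been mated
    if depth == 0 or board.is_game_over():
        return evaluate(board)
    best = -float("inf")
    for move in board.legal_moves:
        board.push(move)
        best = max(best, -negamax(board, depth - 1))
        board.pop()
    return best

print(negamax(chess.Board(), 2))      # 0: material is balanced at the start
```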
This section lists the major frameworks we used to build the project.
To get a local copy up and running, follow these simple steps.

Run one of the following two commands in your terminal to install all the requirements for the project.

- pip

  ```sh
  pip install -r requirements.txt
  ```

- conda

  ```sh
  conda install --file requirements.txt
  ```

- Clone the repo

  ```sh
  git clone https://github.com/MAliElakhrass/DeepYed.git
  ```

- Download the Stockfish engine from https://stockfishchess.org/download/ and place it under the `/engines` folder
- Download the opening books from https://rebel13.nl/download/books.html and uncompress the content of the `books` folder under the `/books` folder
You can read a PGN file in any text editor. However, if you want to watch the game, you have to download a Graphical User Interface (GUI) for chess. We recommend Scid or Arena.
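Alternatively, a game can be replayed programmatically with the python-chess package (assumed to be among the project requirements); the file name below is a placeholder.

```python
# Replay a saved PGN file move by move with python-chess.
import chess.pgn

with open("game.pgn") as f:        # placeholder: any PGN produced by the scripts
    game = chess.pgn.read_game(f)

board = game.board()
for move in game.mainline_moves():
    board.push(move)
    print(board, "\n")             # ASCII board after each move
```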
For this approach, there is nothing to train. To play this agent against Stockfish, run the following command:

```sh
python3 Heuristic/play.py 1 3 10
```

The first argument represents the Stockfish level, the second represents the depth of our algorithm, and the third represents the number of games to play.

At the end of each game, a PGN file will be created and you'll be able to watch the game.
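For reference, the skeleton of such a script can be sketched with python-chess's UCI engine API. The engine path, the `Skill Level` option value, and the random stand-in agent below are our assumptions, not the project's actual code.

```python
# Sketch: pit a move-picking agent against Stockfish over UCI.
import random
import chess
import chess.engine
import chess.pgn

def agent_move(board: chess.Board) -> chess.Move:
    """Stand-in for the real agent's move selection."""
    return random.choice(list(board.legal_moves))

engine = chess.engine.SimpleEngine.popen_uci("engines/stockfish")  # assumed path
engine.configure({"Skill Level": 1})            # maps to the Stockfish level

board = chess.Board()
while not board.is_game_over():
    if board.turn == chess.WHITE:
        board.push(agent_move(board))
    else:
        board.push(engine.play(board, chess.engine.Limit(time=0.1)).move)

print(chess.pgn.Game.from_board(board))         # PGN you can save and replay
engine.quit()
```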
- If you don't want to retrain the model: you can play against Stockfish by running `experiment.py`

  ```sh
  python3 NeuralNetKeras/experiment.py 1 3 10
  ```

  Again, the first argument represents the Stockfish level, the second represents the depth of our algorithm, and the third represents the number of games to play.
- If you want to retrain the model, follow the steps below (illustrative sketches of what each step does appear after this list):

  - Download the data from CCRL and uncompress the 7z file into the `/data` folder
  - The first step is to generate the data:

    ```sh
    python3 NeuralNetKeras/DataGenerator.py
    ```

    Once this step is over, you'll have two new files in your `/data` folder: `black.npy` and `white.npy`
  - The second step is to train the autoencoder (this step can be skipped if you want; we already saved our encoder model):

    ```sh
    python3 NeuralNetKeras/AutoEncoder.py
    ```

    Once this is over, the encoder will be saved in your `/weights` folder under the name `encoder.h5`
  - Finally, the last step is to train the Siamese network:

    ```sh
    python3 NeuralNetKeras/SiameseNetwork.py
    ```

    Once this step is over, the Siamese model will be saved in your `/model` folder under the name `DeepYed.h5`
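As promised above, here are sketches of what each training step might look like. First, data generation: one common encoding (which we assume here; `DataGenerator.py` may use a different feature set) turns each position into a fixed-length bit vector and groups positions by the game's winner.

```python
# Sketch: encode positions as 768-bit vectors (one bit per
# color/piece-type/square pair) and save them as .npy files.
import os
import chess
import numpy as np

def board_to_vector(board: chess.Board) -> np.ndarray:
    vec = np.zeros(768, dtype=np.float32)
    for square, piece in board.piece_map().items():
        plane = piece.color * 6 + (piece.piece_type - 1)
        vec[plane * 64 + square] = 1.0
    return vec

# Positions from games won by White would go to white.npy, and
# positions from games won by Black to black.npy.
os.makedirs("data", exist_ok=True)
np.save("data/white.npy", np.stack([board_to_vector(chess.Board())]))
```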
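Second, the autoencoder step, which compresses each position vector into a small embedding; the layer sizes, epochs, and loss below are assumptions, not the project's actual architecture.

```python
# Sketch: dense autoencoder over the position vectors; only the
# encoder half is saved, matching the step above.
import numpy as np
from tensorflow.keras import layers, models

inputs = layers.Input(shape=(768,))
encoded = layers.Dense(512, activation="relu")(inputs)
encoded = layers.Dense(128, activation="relu")(encoded)
decoded = layers.Dense(512, activation="relu")(encoded)
decoded = layers.Dense(768, activation="sigmoid")(decoded)

autoencoder = models.Model(inputs, decoded)
encoder = models.Model(inputs, encoded)
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")

positions = np.load("data/white.npy")        # produced by the previous step
autoencoder.fit(positions, positions, epochs=10, batch_size=256)
encoder.save("weights/encoder.h5")
```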
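Third, the Siamese step. We assume a pairwise design here: the shared encoder embeds two positions, and a small head predicts which one is preferable; the head layers are our guess, not the actual `SiameseNetwork.py` architecture.

```python
# Sketch: Siamese head over the shared encoder. Given two encoded
# positions, predict which one is preferable.
from tensorflow.keras import layers, models

encoder = models.load_model("weights/encoder.h5")

left = layers.Input(shape=(768,))
right = layers.Input(shape=(768,))
merged = layers.Concatenate()([encoder(left), encoder(right)])
hidden = layers.Dense(128, activation="relu")(merged)
output = layers.Dense(2, activation="softmax")(hidden)  # [left better, right better]

siamese = models.Model([left, right], output)
siamese.compile(optimizer="adam", loss="categorical_crossentropy")
# ...fit on labeled (position, position) pairs, then:
siamese.save("model/DeepYed.h5")
```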
For this part, we based our code on the AlphaZero General framework.

- If you don't want to retrain the model: you can play against Stockfish by running `pit.py`. The first argument represents the Stockfish level, and the second represents the number of games to play.

  ```sh
  python3 ReinforcementLearning/pit.py 1 10
  ```

- If you want to retrain the model (a simplified sketch of the self-play loop follows below):

  ```sh
  python3 ReinforcementLearning/main.py
  ```
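As mentioned above, here is a highly simplified sketch of the self-play loop that retraining revolves around. The random policy and the ply cap are stand-ins: the real framework drives move selection with MCTS guided by the network and alternates self-play, training, and pitting new against old models.

```python
# Sketch: AlphaZero-style self-play data collection with stand-ins.
import random
import chess

def self_play_game(choose_move, max_plies=200):
    """Play one self-play game; return (fen, result) training pairs."""
    board = chess.Board()
    fens = []
    while not board.is_game_over(claim_draw=True) and board.ply() < max_plies:
        fens.append(board.fen())
        board.push(choose_move(board))
    result = board.result(claim_draw=True)  # "1-0", "0-1", "1/2-1/2", or "*"
    return [(fen, result) for fen in fens]

examples = []
for _ in range(2):                          # a real run plays many games
    examples.extend(
        self_play_game(lambda b: random.choice(list(b.legal_moves))))
print(f"collected {len(examples)} training positions")
# main.py would train the network on such examples and repeat
```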
Distributed under the MIT License. See `LICENSE` for more information.