
IFT6756


Logo

Final project for IFT6756

Table of Contents
  1. About The Project
  2. Getting Started
  3. Negamax Approach
  4. Neural Network
  5. Reinforcement Learning
  6. License
  7. Acknowledgements

About The Project

Our goal was to evaluate and quantify the performance of various intelligent-agent approaches applied to chess. We explored three approaches:

  1. Tree traversal;
  2. Neural networks;
  3. Reinforcement learning.

With the resources available, our best agent was our Negamax search agent, with an Elo rating of around 1300. The neural network approach produced an agent that is good at choosing the best move mid-game, but not at playing a complete game. Finally, our results for the reinforcement learning approach are comparable to those obtained initially by the open-source project Leela Zero, i.e. an Elo of around 500.

Built With

This section lists the major frameworks the project was built with.

Getting Started

To get a local copy up and running, follow these simple steps.

Prerequisites

Run one of the following two commands in your terminal to install all the requirements for the project.

  • pip

    pip install -r requirements.txt
  • conda

    conda install --file requirements.txt

Installation

  1. Clone the repo
    git clone https://github.com/MAliElakhrass/DeepYed.git
  2. Download the Stockfish engine from https://stockfishchess.org/download/ and place it under the /engines folder
  3. Download the opening books from https://rebel13.nl/download/books.html and uncompress the content of the folder books under the /books folder

Open a PGN file

You can read a PGN file in any text editor. However, if you want to watch the game, you need a chess Graphical User Interface (GUI). We recommend Scid or Arena.
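For quick scripting outside a GUI, the move list of a simple game can also be pulled out of PGN text with a few lines of Python. This is an illustrative sketch, not part of the project; it ignores comments, variations, and other subtleties of real-world PGN files:

```python
import re

def pgn_moves(pgn_text):
    """Extract the SAN moves from a simple single-game PGN string.

    Minimal illustration, not a full PGN parser: tag pairs and {...}
    comments are stripped, variations are not handled.
    """
    body = re.sub(r"\[[^\]]*\]", "", pgn_text)   # drop tag pairs like [Event "..."]
    body = re.sub(r"\{[^}]*\}", "", body)        # drop comments
    moves = []
    for tok in body.split():
        if re.match(r"^\d+\.", tok):             # move numbers like "1." or "1.e4"
            tok = re.sub(r"^\d+\.+", "", tok)
        if tok in ("1-0", "0-1", "1/2-1/2", "*") or not tok:
            continue                             # skip results and empty tokens
        moves.append(tok)
    return moves

game = '[Event "Casual"]\n1. e4 e5 2. Nf3 Nc6 1-0'
print(pgn_moves(game))  # ['e4', 'e5', 'Nf3', 'Nc6']
```

For anything beyond quick inspection, a dedicated parser (such as the python-chess library) is the safer choice.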

Negamax approach

For this approach, there is nothing to train. To play this agent against Stockfish, run the following command:

 python3 Heuristic/play.py 1 3 10

The first argument represents the Stockfish level, the second the search depth of our algorithm, and the third the number of games to play.

At the end of each game, a PGN file will be created and you'll be able to watch the game.
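For reference, the negamax scheme this agent is named after can be sketched over an abstract game tree. The toy `children`/`evaluate` callbacks below are placeholders for illustration, not the project's actual move generation or board evaluation:

```python
def negamax(node, depth, alpha, beta, color, children, evaluate):
    """Negamax with alpha-beta pruning over an abstract game tree.

    `children(node)` yields successor positions; `evaluate(node)` scores a
    position from White's point of view; `color` is +1 when White is to
    move, -1 when Black is. Illustrative sketch only.
    """
    kids = children(node)
    if depth == 0 or not kids:
        return color * evaluate(node)
    best = float("-inf")
    for child in kids:
        score = -negamax(child, depth - 1, -beta, -alpha, -color, children, evaluate)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break  # beta cutoff: the opponent will avoid this line
    return best

# Toy tree: inner nodes are strings, leaves are ints scored directly.
tree = {"root": ["a", "b"], "a": [3, 5], "b": [2, 9]}
children = lambda n: tree.get(n, []) if isinstance(n, str) else []
evaluate = lambda n: n if isinstance(n, int) else 0
print(negamax("root", 2, float("-inf"), float("inf"), 1, children, evaluate))  # 3
```

The `depth` command-line argument above plays the same role as the `depth` parameter here: deeper search is stronger but exponentially slower.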

Neural Network

  • If you don't want to retrain the model: You can play against Stockfish by running experiment.py

     python3 NeuralNetKeras/experiment.py 1 3 10

    Again, the first argument represents the Stockfish level, the second the search depth of our algorithm, and the third the number of games to play.

  • If you want to retrain the model:

    1. Download the data from CCRL and uncompress the 7z file into the /data folder

    2. The first step is to generate the data

      python3 NeuralNetKeras/DataGenerator.py

      Once this step is over, you'll have two new files in your /data folder: black.npy and white.npy

    3. The second step is to train the autoencoder. (This step can be skipped; we already provide our saved encoder model.)

      python3 NeuralNetKeras/AutoEncoder.py

      Once this is over, the encoder will be saved in your /weights folder under the name encoder.h5

    4. Finally, the last step is to train the Siamese network.

      python3 NeuralNetKeras/SiameseNetwork.py

      Once this step is over, the Siamese model will be saved in your /model folder under the name DeepYed.h5
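As background on what the generated .npy files contain: chess networks typically consume positions as fixed-length numeric vectors. A common scheme, shown here purely for illustration (the exact encoding used by DataGenerator.py may differ), one-hot encodes 12 piece types over 64 squares into a 768-bit vector:

```python
import numpy as np

# 12 piece planes: white P N B R Q K, then black p n b r q k
PIECES = "PNBRQKpnbrqk"

def fen_board_to_bits(fen):
    """Encode the piece-placement field of a FEN string as a 768-bit vector.

    One bit per (square, piece-type) pair -- a common input format for
    chess networks. Illustrative encoding, not necessarily the project's.
    """
    planes = np.zeros((12, 8, 8), dtype=np.float32)
    placement = fen.split()[0]
    for rank, row in enumerate(placement.split("/")):
        file = 0
        for ch in row:
            if ch.isdigit():
                file += int(ch)  # digit = run of empty squares
            else:
                planes[PIECES.index(ch), rank, file] = 1.0
                file += 1
    return planes.reshape(768)

start = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1"
bits = fen_board_to_bits(start)
print(bits.shape, int(bits.sum()))  # (768,) 32
```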
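The autoencoder step learns a compressed representation of those position vectors. The idea can be sketched with a one-hidden-layer linear autoencoder in plain NumPy on toy data; the actual model in AutoEncoder.py is a Keras network, so this is only a conceptual sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((256, 32))                      # toy "position" vectors, 32 features

d_in, d_hid, lr = 32, 8, 0.01
W_enc = rng.normal(0.0, 0.1, (d_in, d_hid))    # encoder weights
W_dec = rng.normal(0.0, 0.1, (d_hid, d_in))    # decoder weights

def loss():
    # Mean squared reconstruction error of encode-then-decode
    return float(np.mean((X @ W_enc @ W_dec - X) ** 2))

first = loss()
for _ in range(300):                           # plain batch gradient descent
    H = X @ W_enc                              # encode
    R = H @ W_dec                              # reconstruct
    G = 2.0 * (R - X) / len(X)                 # gradient of the loss w.r.t. R
    grad_dec = H.T @ G
    grad_enc = X.T @ (G @ W_dec.T)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc
print(first, "->", loss())                     # reconstruction error shrinks
```

After training, only the encoder half is kept (the project saves it as encoder.h5) and used to compress positions before comparison.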
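The Siamese step trains a comparator rather than an absolute scorer: one shared encoder embeds two candidate positions, and a head scores their difference, which makes the comparison antisymmetric by construction. A minimal NumPy sketch with hypothetical dimensions and random weights (the project's actual architecture lives in SiameseNetwork.py):

```python
import numpy as np

rng = np.random.default_rng(1)
d_in, d_emb = 768, 16
W = rng.normal(0.0, 0.05, (d_in, d_emb))  # shared encoder weights
w = rng.normal(0.0, 0.05, d_emb)          # comparison head

def encode(x):
    return np.tanh(x @ W)                 # the SAME encoder for both inputs

def compare(a, b):
    """Positive if position `a` looks better than position `b`."""
    return float(w @ (encode(a) - encode(b)))

a, b = rng.random(d_in), rng.random(d_in)
# Scoring the difference guarantees compare(a, b) == -compare(b, a)
print(round(compare(a, b) + compare(b, a), 10))  # 0.0
```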

Reinforcement learning

For this part, we based our code on the AlphaZero General framework.

  • If you don't want to retrain the model: You can play against Stockfish by running pit.py
    python3 ReinforcementLearning/pit.py 1 10
    The first argument represents the Stockfish level, and the second the number of games to play.
  • If you want to retrain the model:
    python3 ReinforcementLearning/main.py
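As background on what the self-play loop in main.py does: AlphaZero-style search descends the game tree by repeatedly picking the child that maximizes Q + U under the PUCT rule, where the priors come from the policy network. A sketch with made-up priors and visit counts, not the framework's actual code:

```python
import math

def puct_select(children, c_puct=1.0):
    """Pick the child index maximizing Q + U, as in AlphaZero-style MCTS.

    Each child is (prior, visit_count, total_value); priors would come
    from the policy network. Illustrative values only.
    """
    total_visits = sum(n for _, n, _ in children)
    best, best_score = None, float("-inf")
    for i, (p, n, w) in enumerate(children):
        q = w / n if n else 0.0                            # mean value so far
        u = c_puct * p * math.sqrt(total_visits + 1) / (1 + n)  # exploration bonus
        if q + u > best_score:
            best, best_score = i, q + u
    return best

# Three candidate moves: (prior, visits, total value)
children = [(0.5, 10, 4.0), (0.3, 2, 1.5), (0.2, 0, 0.0)]
print(puct_select(children))  # 1
```

Training alternates many self-play games guided by this search with network updates on the resulting (position, outcome) pairs.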

License

Distributed under the MIT License. See LICENSE for more information.

Acknowledgements
