Deep Contact - Improving the iterative process of solving contacts for rigid bodies with the use of neural networks

This repository contains the report as well as the full implementation created in connection with the master thesis done at the University of Copenhagen in 2018 by me, Lukas Engedal.

This repository is forked from the GitHub repository where Jian Wu, Lucian Tirca and I originally shared the work we did on the initial parts of the project. I created a separate fork because of the significant number of changes I made to much of the code in the final months of the project, which I decided not to upload to the shared repository in order to avoid interfering with any code the other two participants may have written but not yet uploaded themselves.

The thesis is available as thesis.pdf in the main folder.

The main folder also contains two subfolders, pybox2d and src. The pybox2d folder contains our modified version of the Pybox2D simulator, which can be installed by running setup.py twice: first with the build flag, which builds the code, and then with the install flag, which installs the simulator as a Python package. The Pybox2D code is written mainly in C++.
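
For example, the two setup.py invocations could look like this (a sketch, assuming the commands are run from inside the pybox2d folder; depending on your setup, the install step may require sudo or a virtual environment):

```
cd pybox2d
python setup.py build    # compile the modified Pybox2D code
python setup.py install  # install the built simulator into the active Python environment
```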

The src folder contains all of the code that we have created, organized rather haphazardly into another set of folders. The code is structured as a number of files containing the various functions we have created, plus another set of files intended to be run as scripts. The intended use of these script files is to pick the one you want to run, open it, edit the various parameters defined therein to fit your particular needs, and then run the file in a terminal. All of our code is written in Python 3.6 and requires the latest versions of NumPy, SciPy, OpenCV, Pandas, TensorFlow, TensorBoard and Matplotlib, as well as the Pybox2D simulator installed as described above.
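
For reference, the Python dependencies can typically be installed with pip. The package names below are assumptions (in particular, OpenCV is usually installed as opencv-python), and since the code was written in 2018 a TensorFlow 1.x release may be required rather than the newest version:

```
pip install numpy scipy opencv-python pandas tensorflow tensorboard matplotlib
```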

Let us assume that the intention is to test the full functionality of the code: to train and run a neural network. Currently, all of the relevant script files are set up to create training data, create a neural network, test the network and plot the results, in exactly the way we used to create the Peak model and its results.

The first file to run is generate_xml.py, found in the gen_data folder. This file generates 110 sets of training data as xml files: 100 for training the network and 10 for validation. The next file to run is generate_grids, in the same folder, which loads the 110 xml files, converts all of the data to sets of grids and then stores the grids. In total this should take around an hour or so on an average PC.
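
The two data-generation steps would then be run roughly as follows (a sketch; the src/gen_data path and the .py extension on generate_grids are assumptions, so adjust them to match the actual folder layout):

```
cd src/gen_data           # path assumed; adjust to wherever the gen_data folder lives
python generate_xml.py    # generate the 110 xml files (100 training, 10 validation)
python generate_grids.py  # convert the xml data to grids and store them
```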

The next step is to run the run_training file found in the TensorFlow folder, which will create and train the neural network. Note that this can take a long time, something like 10-20 hours on an average PC running the code on a CPU. Currently TensorFlow is told to train the neural network on the CPU; this can be changed by commenting out line 83 in peak.py, causing it to run on the GPU instead. This of course requires that TensorFlow has been installed correctly with GPU support, which is not trivial, and even then it might not work if the GPU does not have sufficient memory for the large tensors used by our dense layers. While the neural network is training, the process can be monitored using TensorBoard.
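
The training step would then look something like the following (a sketch; the src/TensorFlow path, the .py extension on run_training and the log directory passed to TensorBoard are assumptions, so point TensorBoard at wherever the training script actually writes its summaries):

```
cd src/TensorFlow
python run_training.py              # create and train the network; expect 10-20 hours on a CPU
tensorboard --logdir=<summary-dir>  # monitor training; replace <summary-dir> with the log directory
```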

After training the neural network, one then needs to run the performance tests with the model. This is done by running the run_model file in the performance folder. The results can then be plotted, similarly to the results section of the thesis, by running the plot_results file in the same folder. In order to compare the results to those of other models, in particular the various simple models that we used, one first has to open the run_model file and change which model is used, by commenting out the line that sets the model to the Peak model and uncommenting one of the other model lines instead. When all the desired models have been run, one then has to open the plot_results file and specify which models to include in the plot.
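
The evaluation and plotting steps would then be run along these lines (again a sketch; the src/performance path and the .py extensions are assumptions, and the model selection itself is done by editing the files as described above):

```
cd src/performance       # path assumed
python run_model.py      # run the performance tests with the model selected inside the file
python plot_results.py   # plot the results for the models selected inside the file
```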
