yinjiayang/DL-Gesture-Recognition

Overview

A real-time hand gesture recognition system based on deep learning. The project separates the front end (client) from the back end (server), and the client has the following three implementations:

  • Manual gesture recognition, where the user marks the start and end of each continuous gesture
  • Dynamic gesture recognition based on the frame difference method
  • Dynamic gesture recognition based on object tracking

The server encapsulates Temporal Relation Networks (TRN); the sketch below illustrates the idea.
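
The core idea of TRN is to score relations over ordered frame subsets at multiple time scales. Below is a minimal PyTorch sketch of that idea, not the code the server actually runs; the layer sizes and the feature/class dimensions in the usage comment are illustrative assumptions.

import itertools
import torch
import torch.nn as nn

# Minimal multi-scale temporal relation module (illustrative, not the
# project's implementation): per-frame CNN features are pooled over ordered
# frame subsets of several sizes, with one small MLP per scale.
class SimpleTRN(nn.Module):
    def __init__(self, feat_dim, num_classes, num_frames=8, scales=(2, 3)):
        super().__init__()
        self.subsets = {k: [list(c) for c in
                            itertools.combinations(range(num_frames), k)]
                        for k in scales}
        self.heads = nn.ModuleDict({
            str(k): nn.Sequential(nn.Linear(k * feat_dim, 256), nn.ReLU(),
                                  nn.Linear(256, num_classes))
            for k in scales})

    def forward(self, feats):              # feats: (batch, num_frames, feat_dim)
        logits = 0
        for k, subsets in self.subsets.items():
            head = self.heads[str(k)]
            # average the relation score over all ordered k-frame subsets
            logits = logits + torch.stack(
                [head(feats[:, idx, :].flatten(1)) for idx in subsets]).mean(0)
        return logits

# Example: 8 frames of 256-d features -> class scores
# scores = SimpleTRN(feat_dim=256, num_classes=27)(torch.randn(1, 8, 256))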

Server

Tested on Ubuntu 16.04 + Python 3.6 + CUDA 9.0 + cuDNN 7.0.5 + PyTorch 0.3.1 + OpenCV 3.4 (Aliyun NVIDIA P100); make sure you have installed this environment.

Dependencies

$ pip install flask
$ pip install pillow
$ pip install moviepy
$ sudo apt-get install ffmpeg
$ pip install -U scikit-learn
$ pip install scipy
$ pip install flask_uploads

Then download the weight file and the configuration file and place them in the server/model folder. Finally, run server.py:

$ python server.py
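
A minimal sketch of what a Flask inference endpoint like this might look like; the /predict route, the "video" field name, and the run_trn() helper are assumptions, not the project's actual interface:

from flask import Flask, request, jsonify

app = Flask(__name__)

def run_trn(path):
    # placeholder for the TRN forward pass over the uploaded clip
    return "doing_other_things"

@app.route("/predict", methods=["POST"])
def predict():
    clip = request.files["video"]          # gesture clip uploaded by the client
    clip.save("/tmp/clip.mp4")
    return jsonify({"label": run_trn("/tmp/clip.mp4")})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)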

The .placeholder files under empty folders can be deleted.

Client

Tested on Ubuntu 16.04 / macOS + Python 3.6 + OpenCV 3.4 + opencv_contrib.

Dependencies

$ pip install pillow
$ pip install requests

Manual

  • server-address: gesture recognition server address
$ python run_manual.py -s [server-address]

Interactive mode: press the s key before each gesture begins, then press s again once the gesture is complete to trigger recognition. A sketch of this capture loop follows.
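
A minimal sketch of the capture-and-recognize loop, assuming a webcam source; the /predict route and "video" field mirror the server sketch above and are likewise assumptions:

import cv2
import requests

SERVER = "http://localhost:5000/predict"   # replace with your [server-address]
cap = cv2.VideoCapture(0)
recording, writer = False, None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("gesture", frame)
    key = cv2.waitKey(1) & 0xFF
    if key == ord("s"):
        if not recording:                  # first press: start a new clip
            fourcc = cv2.VideoWriter_fourcc(*"mp4v")
            writer = cv2.VideoWriter("/tmp/gesture.mp4", fourcc, 20.0,
                                     (frame.shape[1], frame.shape[0]))
            recording = True
        else:                              # second press: finish and recognize
            writer.release()
            recording = False
            with open("/tmp/gesture.mp4", "rb") as f:
                print(requests.post(SERVER, files={"video": f}).json())
    elif key == ord("q"):                  # quit
        break
    if recording:
        writer.write(frame)

cap.release()
cv2.destroyAllWindows()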

Frame difference

You can choose between two background subtraction methods (a sketch of the detector follows the command):

  • method: knn or mog
  • threshold: a contour whose width plus height exceeds this value is considered to be the hand
$ python run_frameDifferent.py -s [server-address] --method [method] --threshold [threshold]
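
A minimal sketch of the detector described above, assuming a webcam source; the real script's pre- and post-processing may differ:

import cv2

method, threshold = "knn", 200             # mirrors --method / --threshold
if method == "knn":
    subtractor = cv2.createBackgroundSubtractorKNN()
else:                                      # "mog" requires opencv_contrib
    subtractor = cv2.bgsegm.createBackgroundSubtractorMOG()

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    # [-2] keeps this compatible with both OpenCV 3.x and 4.x return values
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w + h > threshold:              # large moving contour -> candidate hand
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("frame difference", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()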

Object detection

GPU support is required to run this version; we tested on Ubuntu 16.04 + CUDA 9.0 + cuDNN 7.0.5 + TensorFlow 1.6. You additionally need to install tensorflow-gpu 1.6 and darkflow (you can download darkflow from its GitHub repository).

$ pip install tensorflow-gpu
$ pip install Cython
$ cd darkflow
$ pip install .

# Check whether the installation is complete
$ flow --h

Then download the weight file and the configuration file and place them in the model folder and the cfg folder respectively. Finally, run:

$ python run_objectDetection.py -s [server-address]
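
For reference, this is how darkflow's Python API runs a single detection pass; the cfg and weight file names below are placeholders for the files downloaded above:

import cv2
from darkflow.net.build import TFNet

options = {"model": "cfg/yolo-hand.cfg",       # placeholder names: use the
           "load": "model/yolo-hand.weights",  # downloaded cfg/weight files
           "threshold": 0.4, "gpu": 0.8}
tfnet = TFNet(options)

frame = cv2.imread("hand.jpg")                 # any test image
# return_predict() yields dicts with label, confidence, and box corners
for det in tfnet.return_predict(frame):
    tl = (det["topleft"]["x"], det["topleft"]["y"])
    br = (det["bottomright"]["x"], det["bottomright"]["y"])
    cv2.rectangle(frame, tl, br, (0, 255, 0), 2)
cv2.imwrite("hand_detected.jpg", frame)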
