Peg_in_hole_assembly

  • Our work is based on the OpenAI Baselines code, especially the DDPG framework.

  • The automatic completion of multiple peg-in-hole assembly tasks by robots remains a formidable challenge because the traditional control strategies require a complex analysis of the contact model.

  • We propose a model-driven deep deterministic policy gradient (MDDPG) algorithm that accomplishes the assembly task through the learned policy, without analyzing the contact states.

  • To improve learning efficiency, we design a fuzzy reward system for the complex assembly process. Simulations and real-world experiments on a dual peg-in-hole assembly demonstrate the effectiveness of the proposed algorithm.
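The fuzzy reward idea can be sketched as follows. The input signals (insertion depth and contact force), the membership functions, and the rule weights below are illustrative assumptions for a minimal sketch, not the paper's exact design:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function on [a, c], peaking at b."""
    return max(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def fuzzy_reward(depth, force, max_depth=0.03, max_force=20.0):
    """Illustrative fuzzy reward: encourage insertion depth, penalize contact force.

    depth: current insertion depth in metres (0 .. max_depth)
    force: contact-force magnitude in newtons
    """
    d = np.clip(depth / max_depth, 0.0, 1.0)   # normalized depth
    f = np.clip(force / max_force, 0.0, 1.0)   # normalized force

    # Fuzzy sets for depth: shallow / medium / deep
    shallow = tri(d, -0.5, 0.0, 0.5)
    medium  = tri(d,  0.0, 0.5, 1.0)
    deep    = tri(d,  0.5, 1.0, 1.5)

    # Fuzzy sets for force: low / high
    low  = tri(f, -1.0, 0.0, 1.0)
    high = tri(f,  0.0, 1.0, 2.0)

    # Rule base (weights are illustrative): deep insertion with low force is best
    rules = [
        (shallow * high, -1.0),
        (shallow * low,  -0.2),
        (medium  * high, -0.5),
        (medium  * low,   0.3),
        (deep    * high,  0.1),
        (deep    * low,   1.0),
    ]
    num = sum(w * r for w, r in rules)
    den = sum(w for w, _ in rules) + 1e-9
    return num / den  # defuzzified reward, roughly in [-1, 1]
```

The weighted-average defuzzification gives a smooth reward over the whole contact state space, which is the point of using fuzzy rules instead of a sparse success/failure signal.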

OpenAI Baselines is a set of high-quality implementations of reinforcement learning algorithms.

These algorithms will make it easier for the research community to replicate, refine, and identify new ideas, and will create good baselines to build research on top of. Our DQN implementation and its variants are roughly on par with the scores in published papers. We expect they will be used as a base around which new ideas can be added, and as a tool for comparing a new approach against existing ones.

You can install it by typing:

git clone https://github.com/hzm2016/Peg_in_hole_assembly
cd Peg_in_hole_assembly
pip install -e .

DDPG
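DDPG keeps slowly moving target copies of the actor and critic, updated by Polyak averaging θ′ ← τθ + (1 − τ)θ′. A minimal NumPy sketch of that soft update (τ and the toy parameters are illustrative):

```python
import numpy as np

def soft_update(target_params, source_params, tau=0.001):
    """Polyak averaging used by DDPG so target networks slowly track the learned ones."""
    return [tau * s + (1.0 - tau) * t for t, s in zip(target_params, source_params)]

# Toy example: the target parameters drift toward the source parameters
target = [np.zeros(3)]
source = [np.ones(3)]
for _ in range(5):
    target = soft_update(target, source, tau=0.5)
# after 5 steps each entry equals 1 - 0.5**5 = 0.96875
```

In the full algorithm these target networks are used to compute the critic's training target y = r + γ Q′(s′, μ′(s′)), which stabilizes learning.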

Papers

First, a simulation environment is set up to demonstrate the effectiveness of the algorithm.

  • Simulate_main.py

Second, a dual peg-in-hole assembly experiment is carried out on a real robot.

  • Experiment_main.py
