ECE470 Final Project SP18 (Will be updating throughout the semester)
Checkpoint 1 (3/5) User log
To begin this endeavor, I first downloaded the V-REP executable for Windows 64-bit cited by a student. After following the installer prompts...
- Download V-REP from Piazza link
- (I already have Anaconda Python 2.7 from past projects, so I will forego the step of obtaining Anaconda)
- Explored the V-REP environment by first placing in UR3 arms
- Removed the arms and placed in Baxter
- Added a gripper and suction cup and mated them with the robot body
- Added environment objects (table and cup), translating the cup onto the table
- Copied over vrep.py and vrepConst.py to my workspace folder that I had created in the V-REP scenes folder
- Saved scene file with .ttt ext
- Played with the initial code by replacing the UR3_armjoint name with Baxter_leftArm_joint1
- Used different theta values to see the range of motion
- Ran all the joint motions together without the wait call (realized it is quite important!!)
- Placed all joint handles into a list so they can be called for future reference (will use a dictionary in the future)
- Using a for loop, iterated over all joints to move each by a specified amount
- Repeated the previous step, going the other direction
- Move the main base and monitor joints
Future Plans: Migrate to GitHub, implement a full joint dictionary so that values aren't hard-coded, calibrate the robot
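The handle-list-plus-loop pattern from this checkpoint can be sketched in plain Python. The handle values and the `sweep_joints` helper below are hypothetical; in the actual project the setter would be a wrapper around the V-REP remote API call `vrep.simxSetJointTargetPosition`, and handles would come from `vrep.simxGetObjectHandle`.

```python
import math

# Hypothetical joint names and placeholder integer handles; in V-REP the
# handles would be returned by vrep.simxGetObjectHandle.
JOINT_NAMES = ["Baxter_leftArm_joint1", "Baxter_leftArm_joint2",
               "Baxter_leftArm_joint3"]
joint_handles = list(range(len(JOINT_NAMES)))

def sweep_joints(handles, theta, set_joint):
    """Command every joint in `handles` to `theta` radians using the
    supplied setter (e.g. a wrapper around simxSetJointTargetPosition)."""
    for h in handles:
        set_joint(h, theta)

# Record the commands instead of talking to the simulator.
commands = []
sweep_joints(joint_handles, math.pi / 6, lambda h, t: commands.append((h, t)))
```

Passing the setter in as a function keeps the loop logic testable without a running simulator.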
Checkpoint 2 (3/12)
This week involved implementing forward kinematics for the Baxter robot
- Consolidated all joint handles into one joint library
- Separated the joint library into three separate joint dictionaries, one for each set of joints
- Drew up a generalized schematic of the Baxter robot using unit distances for each joint (assumed each arm was on the same plane)
- Wrote down the initial T_01(0) homogeneous transformation matrices for each set of joints
- Calculated the initial spatial screw axes (the farthest we shall go on paper)
- Since the end-effector pose can be described as the product of the sequential exponentials and the home-configuration homogeneous transformation matrix, only one set of transformation matrices was written down (base to monitor).
- Organized the code so that the setup and simulation-running pieces are in separate functions.
- Wrote user-input commands that take in the theta positions for whichever robot joints the user picks.
- Added a basic dummy to represent the tool frame that will serve as the target.
- Placed dummy frames at the tool-frame positions, as well as a base frame (stopped here for the deadline; will update later in the week with a full forward kinematics module)
Note: I will not be moving forward with implementing a full forward kinematics module, but many of the concepts used to derive forward kinematics appear in the following derivations for inverse kinematics; much of the process is interpreting the various axes and dimensions within the simulator and putting them into valid matrices for kinematics.
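The product-of-exponentials idea mentioned above (sequential exponentials times the home-configuration matrix) can be illustrated with a short numpy sketch. This is not the project code; the `fk` and `exp_twist` names and the one-joint example are illustrative, assuming unit (or zero) rotation axes.

```python
import numpy as np

def skew(w):
    """3x3 skew-symmetric matrix of a 3-vector."""
    return np.array([[0, -w[2], w[1]],
                     [w[2], 0, -w[0]],
                     [-w[1], w[0], 0]])

def exp_twist(S, theta):
    """Matrix exponential of a screw axis S = (w, v), with w a unit
    rotation axis (or zero for a prismatic joint), at angle theta."""
    w, v = S[:3], S[3:]
    W = skew(w)
    # Rodrigues' formula for the rotation part.
    R = np.eye(3) + np.sin(theta) * W + (1 - np.cos(theta)) * W @ W
    # Translation part: G(theta) @ v.
    G = (np.eye(3) * theta + (1 - np.cos(theta)) * W
         + (theta - np.sin(theta)) * W @ W)
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = G @ v
    return T

def fk(M, screws, thetas):
    """Product-of-exponentials FK: T = e^[S1]th1 ... e^[Sn]thn M."""
    T = np.eye(4)
    for S, th in zip(screws, thetas):
        T = T @ exp_twist(np.asarray(S, float), th)
    return T @ M
```

With a single revolute joint about z at the origin and home pose M a unit x-translation, `fk(M, [[0, 0, 1, 0, 0, 0]], [np.pi / 2])` places the tool at (0, 1, 0).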
Checkpoint 3 (3/26)
To make up for the lack of forward kinematics derivations, the main functions that could be used are in this checkpoint, as they are needed for the inverse kinematics implementation...
- Implemented a library of useful matrix operations in the new Checkpoint 3 Python code
- Created a spreadsheet of joint x, y, z distances
- Rewrote the user prompt describing which frames the user wants to move
- Added initial screw calculations derived from the joint spreadsheet
- Created desired transformation pose algorithm
- Implemented FK module for test transformation matrix
- Implemented full IK module
- Implemented joint-movement logic that determines where the arm will go while searching for the goal pose
- Added code for showing where the goal pose is, as well as having it change color when the arm reaches the correct pose
- Bug fixing...
Current bugs: I would like to check over the screw matrix and initial transformation matrix derivations, as the logic of the IK and FK functions appears to be correct.
Checkpoint 4 (4/2)
- Looked into what type of collision detector to use (currently planning to work directly with V-REP)
- Ported existing code into the new module.
- Decided on an algorithm for how the robot should handle a collision (have the joint go back to its original position and move on)
- Compiled a set of joint thetas for the robot to perform; currently have 4 sets of thetas that will yield at least 30 configurations, with additional configurations occurring when the robot has been in collision
- Uploaded the robot scene
Current Issues: The simulation may sometimes be wonky when the robot returns to its original position, as it can get stuck on the table.
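The revert-on-collision policy described above can be sketched like this. `check_collision` stands in for a V-REP collision query (e.g. a wrapper around `simxReadCollision`); the `try_configs` name and the tuple configurations are hypothetical so the logic can run without the simulator.

```python
def try_configs(configs, start, check_collision):
    """Walk through candidate joint configurations; if one collides,
    stay at (return to) the last safe configuration and move on."""
    current = start
    visited = []
    for cfg in configs:
        if check_collision(cfg):
            visited.append(current)  # go back to the last safe pose
        else:
            current = cfg
            visited.append(current)
    return visited
```

For example, with three candidate configurations where the second collides, the arm holds the first safe pose and then continues to the third.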
Checkpoint 5 (4/11)
- Uploaded framework that will be used for path planning
- Added in valid theta checking function
- Implemented a tree structure for node placement
- Added in iterations for selecting thetas
- Put in logic for moving the joints while collision checking at the same time.
Note: with an additional dynamic script left in the scene from debugging, the robot may start off awkwardly. The generated path can be found in the YouTube description.
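The tree structure plus theta-sampling iterations above can be sketched as a minimal RRT-style planner in plain Python. The `Node`, `nearest`, and `rrt` names and all parameters are illustrative, not the project's actual structures; `is_valid` stands in for the collision check.

```python
import math

class Node:
    """Tree node holding a joint configuration and a parent pointer."""
    def __init__(self, thetas, parent=None):
        self.thetas = thetas
        self.parent = parent

def nearest(nodes, sample):
    """Node whose configuration is closest to the sampled thetas."""
    return min(nodes, key=lambda n: math.dist(n.thetas, sample))

def extract_path(node):
    """Walk parent pointers back to the root, then reverse."""
    path = []
    while node is not None:
        path.append(node.thetas)
        node = node.parent
    return path[::-1]

def rrt(start, goal, sample_fn, is_valid, step=0.2, max_iter=500):
    nodes = [Node(start)]
    for _ in range(max_iter):
        sample = sample_fn()
        near = nearest(nodes, sample)
        d = math.dist(near.thetas, sample)
        if d == 0:
            continue
        # Step from the nearest node toward the sample.
        new = tuple(a + step * (b - a) / d
                    for a, b in zip(near.thetas, sample))
        if not is_valid(new):
            continue  # collision: discard this sample
        node = Node(new, near)
        nodes.append(node)
        if math.dist(new, goal) < step:
            return extract_path(Node(goal, node))
    return None  # no path found within the iteration budget
```

Storing parent pointers in each node makes path extraction a simple walk back to the root once a node lands near the goal.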
Final Video (5/1)
- Redo ALL of the major issues from the past five weeks.
- Compose code for the Baxter robot to grasp an object
- Test initial gripping ability for Baxter
- Try orienting Baxter so that it can attempt to grasp the cup
- Upon finishing tests, begin writing general algorithm for video
- Find optimal position for Baxter to properly grasp the cup
- Decide on where to place the cup
- Add in additional obstacles for Baxter to avoid, complicating the path
- Write in final code implementation and run simulation
Areas of improvement: Consult final report