In the final project of the Udacity Self-Driving Car Nanodegree, we were tasked with implementing core autonomous subsystem functionality to allow Udacity’s self-driving Lincoln MKZ to navigate a test track autonomously. The test verifies the vehicle's ability to follow a path of provided waypoints and to stop safely at illuminated red traffic lights.
Name |
---|
Jeremy Matson |
Mario Lüder |
Emilio Moyers |
Yang Sun |
Three major subsystems were configured to communicate with each other utilizing the Robot Operating System (ROS). Each subsystem is comprised of multiple components to fulfill the greater task.
The first subsystem to implement was the Planning subsystem. It consists of the Waypoint Loader and the Waypoint Updater nodes.
The Waypoint Loader node (/waypoint_loader) loads the initial waypoints for the track that the vehicle will be tested on. These waypoints contain information about the target pose of the vehicle (x, y, and heading) and the target velocity.
The Waypoint Updater node (/waypoint_updater) is responsible for adjusting the longitudinal velocity component of the waypoints to account for deceleration events. These events are determined by the Control and Perception subsystems.
As braking events are demanded when a RED traffic light is detected, the function implemented in this node starts by calculating the distance of each waypoint from the vehicle to the target stop line for the traffic light. Then, the time required to decelerate to a complete stop is calculated based upon the maximum deceleration rate configured for limiting jerk and passenger discomfort. Finally, the target velocity under braking is calculated and applied to each waypoint. This allows for a linear adaptive braking function that can scale with vehicle velocity.
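The per-waypoint velocity assignment described above can be sketched as follows, assuming a constant-deceleration profile (the function and parameter names here are illustrative, not the project's exact ones):

```python
import math

MAX_DECEL = 0.5  # m/s^2, an assumed comfort/jerk limit


def decelerate_waypoints(target_velocities, distances_to_stop):
    """Assign a braking velocity to each waypoint ahead of the stop line.

    distances_to_stop[i] is the remaining distance (m) from waypoint i to
    the stop line. Under constant deceleration a, v = sqrt(2 * a * d)
    brings the vehicle to rest exactly at d = 0.
    """
    braking = []
    for wp_v, d in zip(target_velocities, distances_to_stop):
        v = math.sqrt(2.0 * MAX_DECEL * max(d, 0.0))
        if v < 1.0:                # snap very small targets to a full stop
            v = 0.0
        braking.append(min(v, wp_v))  # never exceed the planned velocity
    return braking
```

Because the braking velocity is capped by each waypoint's original target velocity, the profile naturally scales with vehicle speed.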
The Control Subsystem was the next to implement. It consists of the Drive-By-Wire (/twist_controller) and Waypoint Follower (/waypoint_follower) nodes.
The DBW node (/dbw_node.py) is responsible for providing new proposed linear and angular velocities to allow the vehicle to maintain the path planned by the Waypoint Updater node. It consists of PID controller functions for throttle control (twist_controller.py, pid.py), a brake torque calculation, a yaw-controller (yaw_controller.py) to adjust heading direction, and a low-pass filter to reduce sensor noise (lowpass.py).
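The throttle PID loop in pid.py can be sketched roughly as follows (a minimal sketch; the gains, output limits, and names are assumptions rather than the project's exact implementation):

```python
class PID(object):
    """Minimal PID controller with output clamping (illustrative sketch)."""

    def __init__(self, kp, ki, kd, mn=0.0, mx=1.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.mn, self.mx = mn, mx      # output limits (e.g. throttle 0..1)
        self.int_val = 0.0             # accumulated integral term
        self.last_error = 0.0

    def step(self, error, dt):
        """error: target velocity minus current velocity; dt: seconds."""
        self.int_val += error * dt
        derivative = (error - self.last_error) / dt
        self.last_error = error
        out = (self.kp * error
               + self.ki * self.int_val
               + self.kd * derivative)
        return max(self.mn, min(self.mx, out))  # clamp to actuator range
```

Each DBW cycle, the velocity error is fed through `step()` to produce a throttle command in the actuator's valid range.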
The braking function implemented calculates the required brake torque from the vehicle mass, the wheel radius, and the demanded deceleration.
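The brake torque calculation mentioned above can be sketched as follows (a minimal sketch assuming torque = mass × deceleration × wheel radius; the parameter values shown are illustrative, not the project's configured ones):

```python
def brake_torque(vehicle_mass, wheel_radius, decel):
    """Brake torque (N*m) needed to achieve a target deceleration.

    vehicle_mass : kg
    wheel_radius : m
    decel        : m/s^2 (sign is ignored; magnitude is what matters)
    """
    return vehicle_mass * abs(decel) * wheel_radius
```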
The Waypoint Follower node (/waypoint_follower.py) is Autoware open-source code that is responsible for outputting the control commands to the vehicle that have been provided by the DBW node.
The Perception subsystem consists of a Traffic Light Detector. Note: the Obstacle Detection node has not been implemented, but its framework is in place for future use.
The Traffic Light Detection node (/tl_detector) consists of a light detector, and a classifier. The Light Detector (tl_detector.py) subscribes to images published by the vehicle’s forward-facing camera, dynamically adjusts the image processing rate, sends images to the classifier for light state detection (RED, YELLOW, GREEN, or UNKNOWN), and publishes the location of the stop line for the detected stop light for the Planning Subsystem to act upon in the event of a RED light.
It was found that the simulator can lag, which manifests as a trail of waypoints extending behind the vehicle as it traverses the test track. This is heavily dependent upon machine resources, but can be mitigated by classifying images at extended intervals or by dropping images. In this project, 9 out of every 10 images were dropped while the vehicle's position was more than 100 waypoints from the traffic light stop line; within that threshold, the drop rate was reduced to 3 out of every 4 images.
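The frame-dropping policy described above can be sketched as follows (the threshold and ratios mirror the text; the function and parameter names are illustrative):

```python
def should_classify(frame_idx, waypoints_to_light, near_threshold=100):
    """Decide whether to run the classifier on this camera frame.

    Far from the light, process 1 in every 10 frames (drop 9 of 10);
    near it, process 1 in every 4 (drop 3 of 4).
    """
    skip = 4 if waypoints_to_light <= near_threshold else 10
    return frame_idx % skip == 0
```

This keeps classification cheap on the long straights while still reacting quickly as the vehicle approaches a stop line.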
The Traffic Light Classifier is a TensorFlow model that is fed the forward-facing camera image from the Traffic Light Detector and returns the state of the traffic light if one is found. The model chosen was the “Single Shot Detection Inception V2” algorithm, which offers better detection accuracy than the “Single Shot Detection Mobilenet V1” algorithm at a slight cost in speed. This model performed very well for our application.
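The detector turns the model's raw detections into a light state along these lines (a sketch; the label map and score threshold here are hypothetical, not the trained model's actual values):

```python
# Hypothetical label map; the real IDs depend on how the model was trained.
LABELS = {1: 'GREEN', 2: 'RED', 3: 'YELLOW'}


def classify_state(classes, scores, min_score=0.5):
    """Return the state of the highest-scoring confident detection.

    classes/scores are parallel lists from the detector's output;
    anything below min_score is ignored. Falls back to 'UNKNOWN'.
    """
    best = None
    for cls, score in zip(classes, scores):
        if score >= min_score and (best is None or score > best[1]):
            best = (cls, score)
    return LABELS.get(best[0], 'UNKNOWN') if best else 'UNKNOWN'
```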
Additional information on the SSD Inception V2 Model can be found at:
A video of the simulator run can be found below:
This is the project repo for the final project of the Udacity Self-Driving Car Nanodegree: Programming a Real Self-Driving Car. For more information about the project, see the project introduction here.
Please use one of the two installation options: native installation or Docker installation.
- Be sure that your workstation is running Ubuntu 16.04 Xenial Xerus or Ubuntu 14.04 Trusty Tahr. Ubuntu downloads can be found here.
- If using a Virtual Machine to install Ubuntu, use the following configuration as a minimum:
- 2 CPU
- 2 GB system memory
- 25 GB of free hard drive space
The Udacity provided virtual machine has ROS and Dataspeed DBW already installed, so you can skip the next two steps if you are using this.
- Follow these instructions to install ROS:
  - ROS Kinetic if you have Ubuntu 16.04.
  - ROS Indigo if you have Ubuntu 14.04.
- Install the Dataspeed DBW SDK. Use this option to install the SDK on a workstation that already has ROS installed: One Line SDK Install (binary)
- Download the Udacity Simulator.
Build the docker container

```bash
docker build . -t capstone
```

Run the docker file

```bash
docker run -p 4567:4567 -v $PWD:/capstone -v /tmp/log:/root/.ros/ --rm -it capstone
```
To set up port forwarding, please refer to the instructions from term 2.
- Clone the project repository

```bash
git clone https://github.com/udacity/CarND-Capstone.git
```

- Install python dependencies

```bash
cd CarND-Capstone
pip install -r requirements.txt
```

- Make and run styx

```bash
cd ros
catkin_make
source devel/setup.sh
roslaunch launch/styx.launch
```
- Run the simulator
- Download the training bag that was recorded on the Udacity self-driving car.
- Unzip the file

```bash
unzip traffic_light_bag_file.zip
```

- Play the bag file

```bash
rosbag play -l traffic_light_bag_file/traffic_light_training.bag
```

- Launch your project in site mode

```bash
cd CarND-Capstone/ros
roslaunch launch/site.launch
```
- Confirm that traffic light detection works on real life images