CarND-Capstone-Project

Programming a Real Self Driving Car



Overview

This is the final project of the Udacity Self Driving Car Nanodegree. The task of this Capstone project was to create ROS nodes implementing core functionality of an autonomous vehicle system, including traffic light detection, vehicle control and waypoint path following. A simulator is used to evaluate the performance of the code; once ready, the code could also be run on a real car, the Udacity autonomous driving vehicle Carla.

System Architecture Diagram

The following system diagram shows the architecture of the code that was implemented. The architecture is split into 3 main areas:

  • Perception (Traffic Light Detection)
  • Planning (Waypoint Following)
  • Control (Vehicle longitudinal and lateral control)

The diagram shows the ROS topics used to communicate between the ROS nodes; information is also passed on these topics to the car simulator.

[System architecture diagram]

Installation steps

Usage

  1. Make a project directory: mkdir project_udacity && cd project_udacity
  2. Clone this repository into the project_udacity directory: git clone https://github.com/nutmas/CarND-Capstone.git
  3. Install the Python dependencies: cd CarND-Capstone-Project/ and pip install -r requirements.txt
  4. Build the code: cd ros/, then catkin_make and source devel/setup.sh
  5. Create a directory for the simulator: cd && mkdir Sim && cd Sim
  6. Download the simulator from here: Udacity Simulator
  7. Run the simulator: cd linux_sys_int and ./sys_int.x86_64 (for a 64-bit Linux system)

[Screenshot: simulator]

  8. Launch the code: cd CarND-Capstone-Project/ros/ and roslaunch launch/styx.launch

[Screenshot: launching the code]

  9. Clicking the Camera checkbox will ready the car for autonomous mode. A green planned path appears.

[Screenshot: camera enabled, green planned path shown]

  10. Now the vehicle is ready to drive autonomously around the track. Click the Manual checkbox to deselect it and the vehicle will start to drive.

Traffic Light Detection and Classification: End-to-End Approach using TensorFlow

Development Overview

In the traffic light detection context, an end-to-end approach means passing the classifier a raw camera image; the network then identifies the location of the traffic lights in the scene and also categorises the traffic light state as RED, YELLOW or GREEN.
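As a minimal sketch of that contract (not the code from this repository), the following shows how a frozen TensorFlow 1.x detection graph exported by the Object Detection API can be queried: an image goes in, and bounding boxes, class ids and confidence scores come out. The graph path is illustrative; the tensor names follow the Object Detection API's export conventions.

```python
import numpy as np
import tensorflow as tf

# Load a frozen detection graph exported by the TF Object Detection API.
detection_graph = tf.Graph()
with detection_graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile('frozen_inference_graph.pb', 'rb') as f:  # illustrative path
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name='')

sess = tf.Session(graph=detection_graph)

def classify(image_rgb):
    """End-to-end pass: one uint8 RGB camera image in, detections out."""
    batched = np.expand_dims(image_rgb, axis=0)  # shape [1, height, width, 3]
    boxes, scores, classes = sess.run(
        [detection_graph.get_tensor_by_name('detection_boxes:0'),
         detection_graph.get_tensor_by_name('detection_scores:0'),
         detection_graph.get_tensor_by_name('detection_classes:0')],
        feed_dict={detection_graph.get_tensor_by_name('image_tensor:0'): batched})
    return boxes[0], scores[0], classes[0]  # class ids map to the RED/YELLOW/GREEN/UNKNOWN labels
```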

To achieve this I decided to develop a network model by retraining an existing model from the TensorFlow model zoo.

The models selected were deemed suitable for the traffic light task, based on their performance and output:

  • faster_rcnn_inception_v2_coco - Speed: 60ms, Quality: 28mAP, Output: Boxes
  • faster_rcnn_resnet101_coco - Speed: 106ms, Quality: 32mAP, Output: Boxes

The following process was utilised to retrain the models to enable them to classify traffic lights in the simulator.

  • Drive around the simulator track and log the images received from the camera on the rostopic /image_color. To capture a range of traffic light conditions, three laps of track data were gathered.
  • A dataset was compiled using labelImg. Bounding boxes were drawn around the front-facing traffic lights and labelled as RED, YELLOW, GREEN or UNKNOWN. Images with no traffic lights were not labelled.
  • The Object Detection libraries in TensorFlow v1.12 were required to enable re-training of the models. The dataset was converted to the TensorFlow TFRecord format to proceed with training (a minimal sketch of this conversion appears after this list).
  • The basic configuration for each model in the training setup was:
    • Inception v1: Epochs: 2000, Input dimensions: min 600, max 800
    • Inception v2: Epochs: 20000, Input dimensions: min 600, max 800
    • Resnet: Epochs: 80000, Input dimensions: min 600, max 800
  • Training the models was performed using the scripts available in the TensorFlow Object Detection library.
  • I created a Python notebook pipeline to test each model against a set of images which the models had not seen during training. The notebook painted bounding boxes on each image, showing the classification and confidence. The results from passing 500 images through the Inception v2 model are shown in this Video.
  • After successful static image evaluation, all models were frozen; for compatibility with the Udacity environment, freezing was performed using TensorFlow v1.4.
  • The frozen models were integrated into the tl_classifier.py node of the pipeline (see the threading sketch after this list):
    • The camera image received from ROS by tl_detector.py is passed into a shared, lockable variable.
    • The function get_classification() is run in a parallel thread to process the image and utilise the classifier. This prevents the classifier from blocking the node's other ROS processing.
    • The classifier processes the image and returns the detection and classification results.
    • The array of classification scores for the traffic light detections is evaluated, and the highest-confidence classification is taken as the result passed back to tl_detector.py.
    • In parallel to the classification thread, the tl_detector.py function run_main() continuously calculates the nearest traffic light based on the current pose, to determine the distance to the next stop line. When a position and classification are aligned, the node only outputs a waypoint representing the stop line if the traffic light is RED or YELLOW.
    • waypoint_updater.py receives the stop line waypoint and controls the vehicle to bring it to a stop at the stop line position. Once a green light is present the waypoint is removed and the vehicle accelerates back to the set speed.
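For reference, the dataset-to-TFRecord conversion mentioned above can look roughly like this, following the TensorFlow 1.x Object Detection API conventions. The helper name, label map and box handling are illustrative and not taken from this repository.

```python
import tensorflow as tf
from object_detection.utils import dataset_util

# Illustrative label map; the ids must match the label_map.pbtxt used for training.
LABELS = {'RED': 1, 'YELLOW': 2, 'GREEN': 3, 'UNKNOWN': 4}

def create_tf_example(image_path, width, height, boxes):
    """boxes: list of (xmin, ymin, xmax, ymax, class_text) in pixel coordinates."""
    with tf.gfile.GFile(image_path, 'rb') as f:
        encoded_jpg = f.read()

    xmins = [b[0] / float(width) for b in boxes]    # normalise to [0, 1]
    ymins = [b[1] / float(height) for b in boxes]
    xmaxs = [b[2] / float(width) for b in boxes]
    ymaxs = [b[3] / float(height) for b in boxes]
    classes_text = [b[4].encode('utf8') for b in boxes]
    classes = [LABELS[b[4]] for b in boxes]

    return tf.train.Example(features=tf.train.Features(feature={
        'image/height': dataset_util.int64_feature(height),
        'image/width': dataset_util.int64_feature(width),
        'image/encoded': dataset_util.bytes_feature(encoded_jpg),
        'image/format': dataset_util.bytes_feature(b'jpg'),
        'image/object/bbox/xmin': dataset_util.float_list_feature(xmins),
        'image/object/bbox/xmax': dataset_util.float_list_feature(xmaxs),
        'image/object/bbox/ymin': dataset_util.float_list_feature(ymins),
        'image/object/bbox/ymax': dataset_util.float_list_feature(ymaxs),
        'image/object/class/text': dataset_util.bytes_list_feature(classes_text),
        'image/object/class/label': dataset_util.int64_list_feature(classes),
    }))
```

Each example produced this way is then written to the record file, for instance with tf.python_io.TFRecordWriter.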
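The threading sketch referenced above is an illustrative outline (not the actual tl_detector.py) of the shared, lockable image variable and the parallel classification loop that keeps the classifier from blocking the node's other callbacks. It assumes get_classification() returns per-detection classes and confidence scores, as described above.

```python
import threading

import rospy
from sensor_msgs.msg import Image

class TLDetectorOutline(object):
    """Illustrative outline of the image hand-off between tl_detector.py and the classifier."""

    def __init__(self, classifier):
        self.classifier = classifier
        self.lock = threading.Lock()        # guards the shared image
        self.latest_image = None
        self.latest_state = None
        rospy.Subscriber('/image_color', Image, self.image_cb)
        # Run classification in its own thread so it never blocks other ROS processing.
        worker = threading.Thread(target=self.classification_loop)
        worker.daemon = True
        worker.start()

    def image_cb(self, msg):
        with self.lock:                     # shared, lockable variable
            self.latest_image = msg

    def classification_loop(self):
        rate = rospy.Rate(10)
        while not rospy.is_shutdown():
            with self.lock:
                image = self.latest_image
            if image is not None:
                classes, scores = self.classifier.get_classification(image)
                if len(scores) > 0:
                    # Keep the highest-confidence classification as the light state.
                    best = max(range(len(scores)), key=lambda i: scores[i])
                    self.latest_state = classes[best]
            rate.sleep()
```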

Performance Evaluation

  • The Inception v1 model has lower accuracy but runs faster, producing a result in ~330ms per classification (on a 1050Ti GPU). However, it required more classification outputs to establish a confirmed traffic light state (see the sketch after this list).
  • The Inception v2 model has very high accuracy but runs much slower, at ~1.5s per classification (on a 1050Ti GPU). It can work from a single state result.
  • Both models could successfully navigate the track and obey the traffic lights. However, both took over 1 second to confirm a state. v1 would sometimes mis-classify a number of times and, due to its higher state-change requirements, could miss a red light.
  • The simulator, and with it the styx server, would sometimes crash at a certain point; this occurred more frequently with the v2 model. Videos showing the performance of each model:
  • I evaluated the models on a 1080Ti GPU, which has a similar specification to the Udacity hardware. This hardware change significantly improved the speed of the classifiers: the v2 model dropped from 1.5s to 650ms per classification while maintaining its quality, making it a good solution for successfully navigating the simulator. The results can be seen in this Video.
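The state-confirmation sketch referenced above is a minimal debounce that only acts on a classification once it has been seen several times in a row. The threshold of 3 is purely illustrative and not the value used in this project.

```python
class StateDebouncer(object):
    """Confirm a traffic light state only after it has been seen N consecutive times."""

    def __init__(self, threshold=3):        # illustrative threshold
        self.threshold = threshold
        self.candidate = None
        self.count = 0
        self.confirmed = None

    def update(self, state):
        if state == self.candidate:
            self.count += 1
        else:
            self.candidate = state
            self.count = 1
        if self.count >= self.threshold:
            self.confirmed = state
        return self.confirmed
```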

Conclusion for end-to-end classifier

The v1 and v2 Inception models are a similar size once frozen (52MB vs 55MB). However, the model trained for 10x more epochs is significantly slower but has much higher classification reliability. The v2 model was chosen as it could meet the requirements of the simulator track. No training or testing on real-world data has been performed on the classifier yet; to take this end-to-end classifier forward it would need retraining on real-world data, along with a switch in the launch file to select between real-world and simulator models.
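One possible shape for the launch-file switch suggested above reads a ROS parameter to pick the frozen model; the parameter name and file paths are illustrative.

```python
import rospy

def select_model_path():
    # A flag such as <param name="is_site" value="true"/> in a real-world launch
    # file would select the real-world model; the simulator launch file would
    # leave it false. Names and paths here are hypothetical.
    is_site = rospy.get_param('~is_site', False)
    if is_site:
        return 'models/real_world_frozen_inference_graph.pb'
    return 'models/simulator_frozen_inference_graph.pb'
```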

PID Tuning Parameters

The final values of the PID controller used with the end-to-end TensorFlow model were: KP = 0.25, KI = 0.0, KD = 0.15, MN = 0.0, MX = 0.5.
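For reference, here is a minimal sketch of the clamped PID step these parameters configure, assuming the conventional discrete form with the output limited to [MN, MX]; this is illustrative, not a copy of the project's controller code.

```python
class SimplePID(object):
    def __init__(self, kp, ki, kd, mn, mx):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.min_out, self.max_out = mn, mx
        self.integral = 0.0
        self.last_error = 0.0

    def step(self, error, dt):
        """One control update; dt is the elapsed time in seconds (assumed > 0)."""
        self.integral += error * dt
        derivative = (error - self.last_error) / dt
        self.last_error = error
        output = self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(self.min_out, min(self.max_out, output))  # clamp to [MN, MX]

controller = SimplePID(kp=0.25, ki=0.0, kd=0.15, mn=0.0, mx=0.5)
```

With MN = 0.0 and MX = 0.5, the controller output is always kept in the range 0 to 0.5.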

Results

This repo shows the TensorFlow traffic light classifier implementation. The model has not been trained for real-world traffic lights; it will successfully navigate the simulator track using the Faster R-CNN Inception network as the end-to-end traffic light classifier.

This Video shows the end-to-end net in operation while the vehicle navigates around the simulator track.

License

For license information please see the LICENSE file.

