
Advanced Lane Finding

Final result video

Goal: Find the lane lines and highlight the lane from a video.

Algorithm overview:

  • Compute the camera calibration matrix and distortion coefficients given a set of chessboard images.
  • Apply a distortion correction to raw images.
  • Use color transforms, gradients, etc., to create a thresholded binary image.
  • Apply a perspective transform to rectify binary image ("birds-eye view").
  • Detect lane pixels and fit to find the lane boundary.
  • Determine the curvature of the lane and vehicle position with respect to center.
  • Warp the detected lane boundaries back onto the original image.
  • Output visual display of the lane boundaries and numerical estimation of lane curvature and vehicle position.

Dependencies

This project was run with the dependencies installed by the requirements script (see How to run below).

Files

  1. Calibration.py, get_calibration_factors.py, calibration.p: Calibration.py and get_calibration_factors.py are run together once to compute the calibration factors, which are stored in calibration.p (see the loading sketch after this list) and used by the rest of the program.
  2. pipeline.py: Used to run the algorithm on a single image at a time.
  3. videos.py: Used to run the algorithm on a video.
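
The .p extension suggests a Python pickle. A minimal sketch of loading the stored factors (the key names mtx and dist are assumptions, not necessarily the ones used here):

import pickle

# Load the calibration factors saved once by get_calibration_factors.py.
with open('calibration.p', 'rb') as f:
    cal = pickle.load(f)
mtx = cal['mtx']    # camera matrix
dist = cal['dist']  # distortion coefficients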

How to run

Locally on your machine (tested on Ubuntu 16.04 LTS):

  1. Clone this repository: https://github.com/martinezedwin/AdvancedLaneLines.git
  2. Go into the repository: cd AdvancedLaneLines
  3. Install dependencies: chmod +x requirements ; ./requirements
  4. Run the image pipeline: python3 pipeline.py, or the video pipeline: python3 videos.py

Using Docker:

  1. Install Docker
  2. Clone this repository: https://github.com/martinezedwin/AdvancedLaneLines.git
  3. Go into the repository: cd AdvancedLaneLines
  4. Build the docker image: sudo docker build . -t advancedlanelines
  5. Run the docker container: sudo docker run --user=$(id -u) --env="DISPLAY" -p 4567:4567 -v $PWD:/AdvancedLaneLines -v /etc/group:/etc/group:ro -v /etc/passwd:/etc/passwd:ro -v /etc/sudoers.d:/etc/sudoers.d:ro -v /tmp/.X11-unix:/tmp/.X11-unix:rw --rm -it advancedlanelines
  6. Go into the repository: cd AdvancedLaneLines
  7. Run the image pipeline: python3 pipeline.py, or the video pipeline: python3 videos.py

Details

Camera Calibration

Images of a black-and-white chessboard were taken from different angles using the same camera used to record the lane video.

The code for this step is contained in lines #32 through #61 of Calibration.py, in conjunction with get_calibration_factors.py.

I start by preparing "object points", which will be the (x, y, z) coordinates of the chessboard corners in the world. Here I am assuming the chessboard is fixed on the (x, y) plane at z=0, such that the object points are the same for each calibration image. Thus, objp is just a replicated array of coordinates, and objpoints will be appended with a copy of it every time I successfully detect all chessboard corners in a test image. imgpoints will be appended with the (x, y) pixel position of each of the corners in the image plane with each successful chessboard detection.

I then used the output objpoints and imgpoints to compute the camera calibration and distortion coefficients using the cv2.calibrateCamera() function. I applied this distortion correction to the test image using the cv2.undistort() function and obtained this result:

| Before | After |
| --- | --- |
| Calibration image | Undistorted calibration image |
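
A minimal sketch of this calibration step, assuming a 9x6 inner-corner chessboard and a camera_cal/ folder of calibration images (both assumptions):

import glob
import cv2
import numpy as np

nx, ny = 9, 6  # inner corners per row and column (assumed)

# One template of object points: (0,0,0), (1,0,0), ..., (nx-1,ny-1,0).
objp = np.zeros((nx * ny, 3), np.float32)
objp[:, :2] = np.mgrid[0:nx, 0:ny].T.reshape(-1, 2)

objpoints = []  # 3D points in world space
imgpoints = []  # 2D points in the image plane

for fname in glob.glob('camera_cal/*.jpg'):
    img = cv2.imread(fname)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, (nx, ny), None)
    if found:  # keep only images where every corner was detected
        objpoints.append(objp)
        imgpoints.append(corners)

# Calibrate once, then undistort any image from the same camera.
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(
    objpoints, imgpoints, gray.shape[::-1], None, None)
undistorted = cv2.undistort(img, mtx, dist, None, mtx)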

Example of a distortion-corrected image.

We will be using test_images/straight_lines1.jpg as an example for the rest of this tutorial:

| Before | After |
| --- | --- |
| Test image | Undistorted test image |

Color transform and gradients to highlight lane lines.

A combination of color transforms and gradients was tested to see which would bring out the lane lines best in binary images. Thresholding the L channel from the HLS color space and the B channel from the LAB color space, as shown in lines 51 through 79 of box.py, gave the best results.
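
A minimal sketch of this kind of channel thresholding (the threshold values and the exact combination are assumptions, not the values in box.py):

import cv2
import numpy as np

def threshold_l_b(img, l_thresh=(220, 255), b_thresh=(190, 255)):
    # HLS L channel picks up white lines; LAB B channel picks up yellow lines.
    l_channel = cv2.cvtColor(img, cv2.COLOR_BGR2HLS)[:, :, 1]
    b_channel = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)[:, :, 2]
    binary = np.zeros_like(l_channel)
    binary[((l_channel >= l_thresh[0]) & (l_channel <= l_thresh[1])) |
           ((b_channel >= b_thresh[0]) & (b_channel <= b_thresh[1]))] = 1
    return binary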

In the end the output looked something like this:

| Before | After |
| --- | --- |
| Undistorted | Color transform and gradient |

Perspective transform.

To perform a perspective transform to a "birds-eye view", a trapezoid was defined by four vertices that correspond to coordinates on the image.

These vertices became the src points. The destination points dst were also defined using the shape of the image. A birds-eye view was obtained with the Unwarp.unwarp function, which wraps cv2.getPerspectiveTransform() and cv2.warpPerspective() in lines 94 through 121 of box.py.

# Trapezoid corners on the road surface, ordered bottom-right, bottom-left,
# top-left, top-right (the *_h/*_v values are pixel coordinates).
vertices = np.array([[(BR_h, BR_v), (BL_h, BL_v), (TL_h, TL_v), (TR_h, TR_v)]], dtype=np.float32)

h, w = img.shape[:2]

# Map the trapezoid (src) onto the full image rectangle (dst).
src = vertices
dst = np.array([[w, h], [0, h], [0, 0], [w, 0]], dtype=np.float32)

warped = Unwarp.unwarp(color_combined, src, dst)
| Before | After |
| --- | --- |
| Color transform and gradient | Birds-eye view |
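
Since Unwarp.unwarp wraps cv2.getPerspectiveTransform() and cv2.warpPerspective(), a minimal sketch of such a function (src and dst are four-point np.float32 arrays as built above):

import cv2

def unwarp(img, src, dst):
    # Compute the 3x3 transform that maps the src quadrilateral onto dst,
    # then warp the whole image with it to get the birds-eye view.
    h, w = img.shape[:2]
    M = cv2.getPerspectiveTransform(src.reshape(-1, 2), dst.reshape(-1, 2))
    return cv2.warpPerspective(img, M, (w, h), flags=cv2.INTER_LINEAR)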

Identified lane-line pixels and fit their positions with a polynomial.

To identify the lane-line pixels in a given binary image, a histogram with high peaks where the lane lines appear serves as a guide.

For a new image with no previously detected pixels, we start from the bottom of the image (closest to the car) and step "windows" of a specified size up the image, searching for lane-line pixels along each lane line.

For videos where pixels were already identified in the previous frame, a search focused around where the previous lane pixels were found speeds things up. This is shown in FindPix.py in find_lane_pixels() and search_around_poly(). The detected pixels are then fit with a polynomial by the fit_poly() functions.
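
A condensed sketch of the histogram and sliding-window search (the window count, margin, and minimum-pixel values are assumed defaults, not necessarily those in FindPix.py):

import numpy as np

def sliding_window_fit(binary_warped, nwindows=9, margin=100, minpix=50):
    # Peaks of the bottom-half histogram mark where each lane line starts.
    h = binary_warped.shape[0]
    histogram = np.sum(binary_warped[h // 2:, :], axis=0)
    midpoint = histogram.shape[0] // 2
    bases = [np.argmax(histogram[:midpoint]),
             np.argmax(histogram[midpoint:]) + midpoint]

    nonzeroy, nonzerox = binary_warped.nonzero()
    fits = []
    for base in bases:  # left line, then right line
        x_current, lane_inds = base, []
        for window in range(nwindows):
            # Window bounds: step upward from the bottom of the image.
            y_low = h - (window + 1) * (h // nwindows)
            y_high = h - window * (h // nwindows)
            good = ((nonzeroy >= y_low) & (nonzeroy < y_high) &
                    (nonzerox >= x_current - margin) &
                    (nonzerox < x_current + margin)).nonzero()[0]
            lane_inds.append(good)
            if len(good) > minpix:  # recenter on the pixels just found
                x_current = int(nonzerox[good].mean())
        inds = np.concatenate(lane_inds)
        # Fit x = A*y**2 + B*y + C through the collected pixels.
        fits.append(np.polyfit(nonzeroy[inds], nonzerox[inds], 2))
    return fits  # [left_fit, right_fit]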

Lane pixels

Calculated the radius of curvature of the lane and the position of the vehicle with respect to center.

Using the radius of curvature equation (see https://www.intmath.com/applications-differentiation/8-radius-curvature.php) and the coefficients calculated by the np.polyfit() function in fit_polynomial_cr(), the curvature for both lane lines was calculated (FindPix.py, lines 287 through 307).
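
Assuming a second-order fit x = A*y**2 + B*y + C, that equation reduces to R = (1 + (2*A*y + B)**2)**1.5 / |2*A|, evaluated at the y closest to the car. A minimal sketch, with typical (assumed) meters-per-pixel conversions:

import numpy as np

# Assumed pixel-to-meter conversions for a 720x1280 birds-eye image.
ym_per_pix = 30 / 720   # meters per pixel along y
xm_per_pix = 3.7 / 700  # meters per pixel along x

def radius_of_curvature(ploty, fitx):
    # Refit in world space so the coefficients carry meter units.
    A, B, _ = np.polyfit(ploty * ym_per_pix, fitx * xm_per_pix, 2)
    y_eval = np.max(ploty) * ym_per_pix  # bottom of image, nearest the car
    return (1 + (2 * A * y_eval + B) ** 2) ** 1.5 / abs(2 * A)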

Assuming that the camera is mounted at the center of the car, and therefore at the center of the image, the car's offset to the left or right of the lane center can be calculated by comparing the image center to the computed lane center. This is done in FindPix.py in the get_offset() function.
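
A minimal sketch of that offset calculation (get_offset() in FindPix.py is the real version; xm_per_pix is the same assumed conversion as above):

def lane_offset(image_width, left_x, right_x, xm_per_pix=3.7 / 700):
    # left_x/right_x: fitted lane-line x positions at the bottom of the image.
    lane_center = (left_x + right_x) / 2.0
    camera_center = image_width / 2.0  # camera assumed at the car's center
    return (camera_center - lane_center) * xm_per_pix  # >0 means right of center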

Example of the result plotted back down onto the road, with the lane area clearly identified.

Final Output
