Robust Lane Tracking
This summer I’ve been taking some time off and working on a few side projects. The most interesting work has come from Udacity’s Self-Driving Car class, which has been a wonderful way to build on and exercise what I learned doing robotics at Duke (I graduated just a few months ago!).
Today I’m going to talk about the “advanced” lane following project. The goals Udacity set were simple to state, but open-ended in how to achieve them.
Given a video:
- Highlight the current lane the car is in,
- Determine the position of the car in the lane,
- Determine the lane’s radius of curvature (a rough sketch of this computation follows the list).
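
For the curvature goal, the usual approach (and the one I'd assume here; the actual code lives in the repo linked below) is to fit a second-order polynomial to the detected lane pixels and evaluate the standard radius-of-curvature formula. A minimal sketch, assuming a fit of the form x = Ay² + By + C and using hypothetical toy points in place of real detections:

```python
import numpy as np

def radius_of_curvature(fit, y_eval):
    """Radius of the curve x = A*y^2 + B*y + C at y = y_eval:
    R = (1 + (2*A*y + B)^2)^(3/2) / |2*A|."""
    A, B, _ = fit
    return (1 + (2 * A * y_eval + B) ** 2) ** 1.5 / abs(2 * A)

# Hypothetical lane pixels as (y, x) pairs -- a real pipeline would
# extract these from a bird's-eye view of the road.
ys = np.array([0, 360, 719])
xs = np.array([205, 215, 250])
fit = np.polyfit(ys, xs, 2)  # coefficients [A, B, C]

# Evaluate at the bottom of a 720-pixel-tall frame, nearest the car.
print(radius_of_curvature(fit, y_eval=719))
```

In a real pipeline the pixel coordinates would first be scaled by meters-per-pixel factors so the radius comes out in meters rather than pixels.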
These goals cover some of the core information you would need for Level 2 automation features like lane centering, drift alerts, and highway lane following, all of which exist in production cars today. Of course, lane tracking is also used at higher levels of autonomy, but there it’s paired with a ton more information to make it robust in more situations. For this project, there is no LIDAR, no high-resolution map with known lane geometry, no GPS, no inertial data. All we have is video.
If you’re looking for the code, head on over to the GitHub repository.