How I Learn (For the Long Term)

I enjoy learning how other people learn. Their ideas are battle-tested, cohesive, and manageable, at least to them. Experimenting with their ideas keeps me grounded in practical advice and helps me find what works alongside my own approaches. In fact, I’ve picked up a few new habits over the summer that I think are worth sharing and could be worth your time to try.

I’ve focused on developing habits that identify and close gaps in understanding while helping me internalize knowledge for the long term. These goals hit directly at what I want out of learning: to understand topics from the low-level details up to the high-level abstractions, and to retain it all for the long term.

Continue reading “How I Learn (For the Long Term)”

Exploring Udacity’s Open-Source Lidar Dataset

Knowing how to work with lidar sensors and the algorithms that use them is a highly sought-after skill in robotics, but it is also a hard one to get started with. Lidar sensors (and depth-mapping cameras) can be prohibitively expensive and a pain to set up, delaying and distracting developers from what they are actually interested in: coding something cool that makes use of the data. Compare that to computer vision, where all you have to do to get started is connect a cheap webcam, or load up a video from the internet, and you are ready to start coding. Someone studying computer vision can go from having nothing to a working face tracking program in a couple of hours or less.
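To make that concrete, here is roughly what those couple of hours buy you. This is a minimal sketch using OpenCV’s bundled Haar cascade; the cv2.data.haarcascades path assumes a reasonably recent OpenCV install, so adjust it for your system:

```python
import cv2

# Load OpenCV's stock frontal-face Haar cascade.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)  # first connected webcam

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Draw a box around each detected face.
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.3, 5):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
```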

There is nothing like that for lidar. No easy way to get some data and start working with it out of the box.

With the goal of making it easy for developers to start working with lidar data, I went looking for datasets to work with and eventually develop into a guide. After a quick Twitter conversation with David Silver (head of the Udacity SDC program) and Oliver Cameron (CEO of Voyage), I decided to begin by investigating Udacity’s 3.5 hour driving dataset.

In this post I’ll walk you through my exploration of the Udacity dataset and some of the troubleshooting needed to get it running. Ultimately, I was not able to get the point cloud data playing back with full accuracy, but I hope that with this work log and a bit of help from the community, we can get the data clean enough to start running algorithms on and to use as a teaching tool.
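If you want to poke at the data alongside me, a sketch like the one below is a reasonable starting point. It assumes the dataset is distributed as a ROS bag and that point clouds are published on /velodyne_points; both the filename and the topic name here are placeholders, so check rosbag info for the real ones:

```python
import rosbag
from sensor_msgs import point_cloud2

# Hypothetical filename; substitute the bag you actually downloaded.
bag = rosbag.Bag("udacity-driving.bag")

# /velodyne_points is the conventional topic for the ROS Velodyne driver,
# but it is an assumption here; run `rosbag info` to see the real topics.
for topic, msg, t in bag.read_messages(topics=["/velodyne_points"]):
    # Each PointCloud2 message is one lidar sweep; extract (x, y, z).
    points = list(point_cloud2.read_points(
        msg, field_names=("x", "y", "z"), skip_nans=True))
    print("t=%s: %d points" % (t, len(points)))

bag.close()
```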

Continue reading “Exploring Udacity’s Open-Source Lidar Dataset”

Robust Lane Tracking

This summer I’ve been taking some time off and working on a few side projects. The most interesting work has come from Udacity’s Self-Driving Car class, which has been a wonderful way to build on and exercise what I learned doing robotics at Duke (I graduated just a few months ago!).

Today I’m going to talk about the “advanced” lane following project. The goals Udacity set out were direct, but certainly open-ended.

Given a video:

  • Highlight the current lane the car is in,
  • Determine the position of the car in the lane,
  • Determine the lane’s radius of curvature.

These goals cover some of the core information you would need for Level 2 automation, such as lane centering, drift alerts, and highway lane following, all of which we can see in existing production cars. Of course, lane tracking is also used at higher levels of autonomy, but there it is paired with far more information to make it robust in more situations. For this project, there is no lidar, no high-resolution map with known lane information, no GPS, and no inertial data. All we are using is video.
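As a taste of the math involved, the radius of curvature usually falls out of a polynomial fit. The sketch below is my own illustration rather than the project code: it assumes the lane pixels have already been detected, fits x as a second-order polynomial in y (lane lines are closer to vertical in the image), and applies the standard curvature formula R = (1 + (2Ay + B)^2)^(3/2) / |2A|:

```python
import numpy as np

def radius_of_curvature(lane_xs, lane_ys, y_eval):
    """Radius of curvature of a lane line at y_eval, in input units.

    Fits x = A*y**2 + B*y + C to the detected lane pixel positions,
    then evaluates the standard radius-of-curvature formula.
    """
    A, B, _ = np.polyfit(lane_ys, lane_xs, 2)
    return (1 + (2 * A * y_eval + B) ** 2) ** 1.5 / abs(2 * A)
```

In practice you would convert pixel coordinates to meters before the fit so the radius comes out in real-world units.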

If you’re looking for the code, head on over to the GitHub repository.

Continue reading “Robust Lane Tracking”

CSI Cameras on the TX2 (The Easy Way)

I love Nvidia’s new embedded computers. The Nvidia Jetson embedded computing product line, which includes the TK1, TX1, and TX2, is a series of small computers made to smoothly run software for computer vision, neural networks, and artificial intelligence without using tons of energy. Better yet, their developer kits make excellent single-board computers, so if you’ve ever wished for a beefed-up Raspberry Pi, this is what you are looking for. I personally use the Jetson TX2, which is the most powerful module available and is widely used.

One of the big drawbacks of Jetson devices is that the documentation does not (and cannot) cover every use case. The community has yet to mature to the point where you can find a random blog’s guide for whatever you need to do (à la Raspberry Pi and Arduino), so you’ll often have to figure things out for yourself.

But I am here to dispel the mystery around at least one thing: using CSI cameras on your TX2. These methods should work on other Jetson devices too!

We’re going to look at using the Jetson’s image processing powers to capture video from the TX2’s own special CSI camera port. Specifically, I’ll show you:

  • Why you’d even want a CSI camera.
  • Where to get a good CSI camera.
  • How to get high resolution, high framerate video off your CSI cameras using gstreamer and the Nvidia multimedia pipeline.
  • How to use that video in OpenCV and ROS (see the sketch after this list for a taste).
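As that taste, here’s a minimal sketch of the OpenCV side, assuming an OpenCV build with GStreamer support. The exact camera source element depends on your L4T/JetPack release (nvcamerasrc on older releases, nvarguscamerasrc on newer ones), so treat this pipeline as a starting point rather than the definitive version:

```python
import cv2

# GStreamer pipeline: pull frames from the CSI camera through Nvidia's
# hardware path (NVMM memory), convert to BGR, and hand them to OpenCV.
# Swap nvarguscamerasrc for nvcamerasrc on older JetPack releases.
pipeline = (
    "nvarguscamerasrc ! "
    "video/x-raw(memory:NVMM), width=1920, height=1080, framerate=30/1 ! "
    "nvvidconv ! video/x-raw, format=BGRx ! "
    "videoconvert ! video/x-raw, format=BGR ! appsink"
)
cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("CSI camera", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
```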

Continue reading “CSI Cameras on the TX2 (The Easy Way)”