Totally agree with the sentiments here that this is a first pass and not for production use. In fact, that's exactly the intent of this project. It's meant to be a very first pass for people to get a sense of what's possible with simple off the shelf tools and a basic understanding of CV. We expect students to do this within 7 days of starting the program!
There will be a far more advanced lane detection project further in the Nanodegree that is intended to cover all the use cases that people have pointed out here.
This isn't lane detection; it's line detection. Don't let the best-case scenario (clear weather, a well-marked freeway) fool you into thinking it's more than that.
In any case, there is nowadays a pretty dramatic gulf between 1) discrete transforms stacked on top of each other like here (Mobileye style) and 2) just DNNs. I feel 1 is falling out of favor, until we eventually realize 2 alone isn't the duck's guts either.
Interesting tidbit: The way we did this on the Stanford DARPA Urban Challenge vehicle was to use the intensity of the returned LIDAR signal ... the little reflective markers between the white lane paint are actually coated with a retro-reflective material. You can get extremely good detection rates that way. It's much more of a hack :)
While this is great and serves as a good starter for those looking to get a basic understanding of computer vision, I'm not so sure that's what real self-driving cars use. Applying a Canny edge detector, which is taught in 3rd/4th-year computer vision classes, seems too basic. I'm skeptical, but also not very knowledgeable about the tech that goes into self-driving cars, so it would be great if anyone with real experience could weigh in.
I've written a survey paper about various edge detection algorithms and while Canny is one of the better ones, it still leaves a lot to be desired.
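For anyone who wants to see what the first stage of such a detector boils down to, here's a minimal, illustrative sketch in pure NumPy (my own toy code, not from any course or paper): just Sobel gradient magnitude plus a threshold. A real Canny adds Gaussian smoothing, non-maximum suppression, and hysteresis on top of this, which is exactly where a lot of the "left to be desired" lives.

```python
import numpy as np

def sobel_edges(img, thresh=1.0):
    """Gradient-magnitude edge map: the first stage of a Canny-style
    detector (before non-maximum suppression and hysteresis)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T  # vertical-gradient kernel is the transpose
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    # naive per-pixel convolution; border pixels are left at zero
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    mag = np.hypot(gx, gy)          # gradient magnitude
    return mag > thresh             # boolean edge map

# synthetic image: dark left half, bright right half -> one vertical edge
img = np.zeros((8, 8))
img[:, 4:] = 1.0
edges = sobel_edges(img, thresh=1.0)
```

The per-pixel Python loop is exactly the kind of thing that's slow outside of C++, which is why in practice you'd call the library routine instead.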
I don't see anything wrong with using those techniques. They're just not enough on their own to make the system sufficiently general to be useful. Extra steps are needed, and even then, there would still be cases where the lane markings can't be reliably made out. In those cases, human drivers would fall back on more general driving skills.
My layman's understanding is that it's mostly deep learning these days (what a surprise, I'm sure). For example, this article says that deep learning is behind most of Mobileye's computer vision stuff:
I am starting this program in the coming session, and having taken a computer vision course, I'm wondering how well they covered Canny edge detection and the Hough transform in just about a week.
Did they go over all the math involved, or was the treatment superficial? When I first learned about the Hough transform, it blew my mind! I am not very familiar with deep learning, so I am a bit concerned about whether this program will teach me deep learning the way I want to learn it, or whether the treatment will be a bit superficial. Nonetheless, I'm very excited.
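For anyone who hasn't had that mind-blown moment yet, the core idea fits in a few lines. This is my own pure-NumPy sketch (illustrative only, nothing to do with the course's actual code): every edge point votes for all the (rho, theta) line parameters it could lie on, and collinear points pile their votes into the same accumulator cell.

```python
import numpy as np

def hough_lines(points, rho_max, n_rho=100, n_theta=180):
    """Vote each point into a (rho, theta) accumulator.
    A line is parameterised as rho = x*cos(theta) + y*sin(theta)."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_rho, n_theta), dtype=int)
    cols = np.arange(n_theta)
    for x, y in points:
        rhos = x * np.cos(thetas) + y * np.sin(thetas)
        # map rho in [-rho_max, rho_max] to an accumulator row
        rows = np.round((rhos + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        valid = (rows >= 0) & (rows < n_rho)
        acc[rows[valid], cols[valid]] += 1
    return acc, thetas

# 20 collinear points on the vertical line x = 5
pts = [(5, y) for y in range(20)]
acc, thetas = hough_lines(pts, rho_max=30.0)

r, t = np.unravel_index(acc.argmax(), acc.shape)
rho_rec = r / 99 * 60 - 30   # invert the row -> rho mapping (n_rho=100, rho_max=30)
# peak is at theta ~ 0 (a vertical line) with rho ~ 5
```

The magic is that a noisy, broken, dashed line still produces a strong peak, because the votes accumulate regardless of gaps.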
I implemented a Hough circle detector in JavaScript that might be of interest - it's got a UI to tweak the accumulator threshold, and it's all first principles with no calls to black-box functions:
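In the same first-principles spirit, a circle accumulator is only a few lines. This is my own NumPy sketch with a fixed, known radius for simplicity (the real problem adds radius as a third accumulator dimension, which is where the tuning pain starts): each edge point votes for every centre that could have produced it.

```python
import numpy as np

def hough_circle(points, radius, size, n_angles=90):
    """Accumulate votes for circle centres at a fixed radius:
    each edge point votes for all centres exactly `radius` away."""
    acc = np.zeros((size, size), dtype=int)
    angles = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    for x, y in points:
        cx = np.round(x - radius * np.cos(angles)).astype(int)
        cy = np.round(y - radius * np.sin(angles)).astype(int)
        for a, b in zip(cx, cy):
            if 0 <= a < size and 0 <= b < size:
                acc[b, a] += 1   # row = y, col = x
    return acc

# 36 edge points sampled from a circle of radius 10 centred at (20, 20)
ts = np.linspace(0, 2 * np.pi, 36, endpoint=False)
pts = [(20 + 10 * np.cos(t), 20 + 10 * np.sin(t)) for t in ts]
acc = hough_circle(pts, radius=10, size=40)
cy, cx = np.unravel_index(acc.argmax(), acc.shape)
# the true centre collects a vote from every edge point
```

An accumulator-threshold slider, like in the demo above, is just a cutoff on these vote counts.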
I could be wrong, but my understanding is that in the first fatal Tesla crash, the system had issues because of too much light (it was facing the sun). So apparently sensors being blinded is another failure mode.
Snow is also a problem for several reasons. Also, roads where the painted lines are so faded as to make a poor contrast with surrounding features.
I've also had a lane sensor that worked well during the day, but was confused by the taillights of vehicles that passed me at night, apparently thinking that was the new left boundary and so warning me I was out of the lane on the right.
Good for educational purposes. But for practical purposes, this may behave unpredictably under different lighting conditions, debris, broken markings, etc.
I wouldn't be surprised if it is real-time. It uses OpenCV, which is heavily optimized and therefore very fast. However, if you ever want to use your own algorithms to do per-pixel processing, instead of relying on the OpenCV library ones, you'd have to write them in C++ instead of Python.