Simple Lane Detection (autojazari.com)
119 points by edward on Nov 1, 2016 | hide | past | favorite | 40 comments


Totally agree with the sentiments here that this is a first pass and not for production use. In fact, that's exactly the intent of this project. It's meant to be a very first pass for people to get a sense of what's possible with simple off the shelf tools and a basic understanding of CV. We expect students to do this within 7 days of starting the program!

There will be a far more advanced lane detection project further in the Nanodegree that is intended to cover all the use cases that people have pointed out here.


To what degree will the recent NHTSA self-driving guidelines be covered in your program?


Why do you think that should be part of the curriculum?


If you're writing code for a self driving mechanism, the guidelines become part of the requirements for that code.

At a minimum, being aware of the guidelines would be a good thing to inform discussions about self-driving technology.


No, if you're writing a production self-driving system that becomes important. Class time should be spent on fundamentals.


This isn't lane detection, this is line detection. Don't mistake results in best-case weather on a straight freeway for the general problem.

In any case, there is nowadays a pretty dramatic gulf between 1) discrete transforms heaped upon each other like here (Mobileye style) and 2) just DNNs. I feel 1 is falling out of favor, until we eventually realize 2 alone isn't the duck's guts either.
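The "discrete transforms heaped upon each other" pipeline typically ends with a step like this: take the raw Hough line segments, split them by slope sign, and average each group into a single extrapolated lane line. A minimal NumPy sketch (function name, slope thresholds, and image dimensions are my own illustrative choices, not from the article):

```python
import numpy as np

def average_lanes(segments, img_height=540, horizon=330):
    """Classify Hough line segments into left/right lane by slope sign,
    then average each group into one extrapolated lane line.

    segments: rows of (x1, y1, x2, y2), image origin at top-left.
    Returns (x_bottom, y_bottom, x_top, y_top) per side, or None if a
    side got no segments.
    """
    left, right = [], []
    for x1, y1, x2, y2 in segments:
        if x2 == x1:  # vertical segment: slope undefined, skip it
            continue
        slope = (y2 - y1) / (x2 - x1)
        intercept = y1 - slope * x1
        # In image coordinates y grows downward, so the left lane line
        # has negative slope and the right lane line positive slope.
        if slope < -0.3:
            left.append((slope, intercept))
        elif slope > 0.3:
            right.append((slope, intercept))

    def extrapolate(group):
        if not group:
            return None
        slope, intercept = np.mean(group, axis=0)
        x_bottom = (img_height - intercept) / slope
        x_top = (horizon - intercept) / slope
        return (int(x_bottom), img_height, int(x_top), horizon)

    return extrapolate(left), extrapolate(right)
```

The 0.3 slope cutoff also doubles as a crude filter against near-horizontal clutter (shadows, car bumpers), which is part of why this approach only survives in the easy freeway setting.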



Interesting tidbit: the way we did this on the Stanford DARPA Urban Challenge vehicle was to use the intensity of the returned LIDAR signal ... the little reflective markers between stretches of white lane paint are coated with a retro-reflective material. You can get extremely good detection rates. It's much more of a hack :)
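For the curious, the core of the trick is just a threshold on return intensity, since retro-reflective coatings bounce far more light straight back at the sensor than asphalt does. A toy sketch (the function name and normalized-intensity assumption are mine, not from the Stanford codebase):

```python
import numpy as np

def marker_hits(intensities, angles, threshold=0.8):
    """Pick out retro-reflective lane markers from one LIDAR sweep by
    thresholding return intensity (assumed normalized to 0..1).

    intensities, angles: parallel 1-D arrays for a single scan line.
    Returns the beam angles whose returns exceeded the threshold.
    """
    intensities = np.asarray(intensities, dtype=float)
    angles = np.asarray(angles, dtype=float)
    return angles[intensities > threshold]
```

A real system would also cluster the hits across sweeps and reject spurious reflectors (signs, license plates), but the detection itself really is this simple.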


That's cool, but the little reflective markers don't get used in places that get snow (like Canada) because of snow plows.


Recessed markers are becoming increasingly common.

https://www.azdot.gov/mobile/media/blog/posts/2013/01/25/tra...

Edit: anecdotally, in the NE US.


The Toronto area had them everywhere last time I was there.


While this is great and serves as a good starter for those looking to get a basic understanding of computer vision, I'm not sure that's what real self-driving cars use.

It seems too basic to apply a Canny edge detector, which is taught in 3rd/4th-year computer vision classes. I'm skeptical but also not very knowledgeable about the tech that goes into self-driving cars, so it would be great if anyone with real experience could weigh in.

I've written a survey paper about various edge detection algorithms and while Canny is one of the better ones, it still leaves a lot to be desired.
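For readers who haven't met Canny: two of its middle stages, Sobel gradients followed by double thresholding, can be sketched in a few lines of NumPy. This is a deliberately simplified illustration that omits the Gaussian smoothing, non-maximum suppression, and hysteresis edge linking the full algorithm needs:

```python
import numpy as np

def sobel_edges(img, low=50, high=150):
    """Two of Canny's stages: Sobel gradient magnitude, then double
    thresholding into strong and weak edge pixels."""
    img = np.asarray(img, dtype=float)
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])  # horizontal gradient
    ky = kx.T                                            # vertical gradient
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    # Naive convolution over interior pixels (borders left at zero)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    mag = np.hypot(gx, gy)
    strong = mag >= high
    weak = (mag >= low) & (mag < high)
    return strong, weak
```

In full Canny, weak pixels are only kept if they connect to a strong pixel; that hysteresis step is a big part of why Canny beats plain gradient thresholding on noisy road images.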


I don't see anything wrong with using those techniques. They're just not enough on their own to make the system sufficiently general to be useful. Extra steps are needed, and even then, there would still be cases where the lane markings can't be reliably made out. In those cases, human drivers would fall back on more general driving skills.


Yeah, there are plenty of two-lane roads (one- and two-way) in the real world without markings.


My layman's understanding is that it's mostly deep learning these days (what a surprise, I'm sure). For example, this article says that deep learning is behind most of Mobileye's computer vision stuff:

http://www.computervisionblog.com/2015/03/mobileyes-quest-to...

(They're the ones who did Tesla's original Autopilot hardware, although not the new stuff they just started shipping.)


RIP MBLY. Greed got the best of them.


The one that thinks a trailer in the middle of a road is just a billboard. Safe to proceed.


Can you share the link to your survey paper?


I am starting this program in the coming session, and having taken a computer vision course, I am wondering how well they cover Canny edge detection and the Hough Transform in just about a week. Do they go over all the math involved, or is the treatment superficial? When I first learned about the Hough Transform, it blew my mind! I am not very familiar with Deep Learning, so I am a bit concerned whether this program is going to teach me Deep Learning the way I want to learn it, or whether the treatment will be a bit superficial. Nonetheless, I'm very excited.
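The mind-blowing part of the Hough Transform is the voting step: every edge point votes for all the (rho, theta) lines that could pass through it, and collinear points pile their votes onto a single accumulator cell. A bare-bones sketch (parameter names and the 1-degree/1-pixel bin sizes are my own choices):

```python
import numpy as np

def hough_lines(points, diag, n_theta=180):
    """Vote in (rho, theta) parameter space: each point votes once per
    theta bin, for the rho of the line through it at that angle.
    diag: bound on |rho|, used to shift rho indices to be non-negative."""
    thetas = np.deg2rad(np.arange(n_theta))
    acc = np.zeros((2 * diag, n_theta), dtype=int)
    for x, y in points:
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1
    return acc

# Every point on the horizontal line y = 5 votes for the cell
# (rho=5, theta=90 degrees), so that cell collects one vote per point.
points = [(x, 5) for x in range(10)]
acc = hough_lines(points, diag=20)
```

Finding peaks in `acc` then recovers the dominant lines in the image, which is exactly what OpenCV's `cv2.HoughLines` does under the hood (with smarter binning).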


The Lane-Finding Project is a fun introduction to see how quickly students can get something working.

There are longer and more in-depth modules on both Deep Learning and Computer Vision later in the program.


Thanks, David!


Very cool, but I wouldn't say simple, haha!

I implemented a Hough circle detector in JavaScript that might be of interest - it's got a UI to tweak the accumulator threshold and is all first principles, without any calls to black-box functions:

https://github.com/alexanderconway/Hough-Circle-Detection


How well does this work

    (a) in the presence of heavy traffic/clutter
    (b) at night
    (c) in heavy rain
    (d) on roads with frequent bends?


I could be wrong, but my understanding is that in the first fatal Tesla crash the system had issues because of too much light (it was facing the sun), so sensors being blinded is apparently another failure mode.


It doesn't.


Snow is also a problem for several reasons, as are roads where the painted lines are so faded that they make poor contrast with surrounding features.

I've also had a lane sensor that worked well during the day, but was confused by the taillights of vehicles that passed me at night, apparently thinking that was the new left boundary and so warning me I was out of the lane on the right.


I have trouble with snow.


a car of yours with lane detection or you personally?


It claims to be 'Simple' lane detection - it is not claiming to be robust or exhaustive.


Good for educational purposes. But for practical use, this may have unpredictable issues under different lighting conditions, debris, broken markings, etc.


http://comma.ai also has some code and materials on their site about CV for driving.


I wonder how a deep learning solution (convnet) would compare, both in development-time (training-time) and in accuracy.


Several orders of magnitude harder.


Are there any good videos of driving in varied conditions to test this out?


How fast does it work? Is the video “real-time” detection?


I wouldn't be surprised if it is real-time. It uses OpenCV, which is heavily optimized and therefore very fast. However, if you ever want to use your own algorithms to do per-pixel processing, instead of relying on the OpenCV library ones, you'd have to write them in C++ instead of Python.
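To illustrate the speed concern: here is the same thresholding operation written as a per-pixel Python loop versus a vectorized NumPy call. Both produce identical output, but the vectorized version runs its loop in C. (A toy example to make the point, not OpenCV itself:)

```python
import numpy as np

def threshold_loop(img, t):
    """Naive per-pixel Python loop -- the slow path the comment warns about."""
    out = np.zeros_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = 255 if img[i, j] > t else 0
    return out

def threshold_vec(img, t):
    """Same operation pushed down into NumPy's compiled loops."""
    return np.where(img > t, 255, 0).astype(img.dtype)
```

For large frames the gap is typically two to three orders of magnitude, which is why per-pixel algorithms that can't be expressed as array operations usually get rewritten in C++ (or Cython/Numba) rather than looped in pure Python.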


>A collection of projects detailing how I learned to be a Self-Drive Car Engineer.

Easy typo fix there. "Selft-Driving Car Engineer" makes him sound like an adult, and not a toddler.


There's no need to be insulting, and you have a typo yourself.


You got me.


> "Selft-Driving Car Engineer"

"Self-Driving Car Engineer".

Every time. https://en.wikipedia.org/wiki/Muphry%27s_law



