
I want to see an AI that can improve itself by developing new algorithms for arbitrary tasks. I wonder how far off we are from that now?



You know, if you're at the point where you can give a human-readable spec of the problem and the AI can make a passable attempt at it, that's basically the Turing Test -- which is why I think it deserves its status as a holy grail. Something that passes would really give the impression of "there's a ghost inside here".


Rather than a ghost, I wonder if we'll ever have the average person looking at brains and thinking "there's a program inside here."

And then to reverse it, imagine that the world really is some kind of massive simulation... and that somewhere there are backups of the save()d state :)



The problem is that fundamentally all our AI techniques are heavily data-driven. It's not clear what sort of data to feed in to represent good/bad algorithm design.
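One way to make the point concrete (this is just my own toy sketch, not any real system): you can frame the "data" for algorithm design as input/output examples plus a score, then search a space of candidate programs for the one that fits best. The function family and examples below are made up for illustration.

```python
# Toy illustration: the "training data" for algorithm discovery framed as
# input/output examples, with a fitness score over a tiny program space.
import itertools

# Example behavior we want the discovered program to reproduce: f(x) = 2x + 1.
examples = [(0, 1), (1, 3), (2, 5), (3, 7)]

# A tiny program space: f(x) = a*x + b for small integer a, b.
def score(a, b):
    """Count how many examples the candidate program (a, b) reproduces."""
    return sum(1 for x, y in examples if a * x + b == y)

# Exhaustive search over the space, keeping the best-scoring candidate.
best = max(itertools.product(range(-3, 4), repeat=2), key=lambda ab: score(*ab))
print(best)  # → (2, 1)
```

The hard part the comment is pointing at is that for real algorithm design neither the program space nor the scoring function is this easy to write down.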



