This is pretty common in ML projects, and a big reason why there aren't many major companies whose core product is built on a complex ML algorithm that hasn't already been fully baked by the academic community.
In theory, if the approach to self-driving that Tesla is pursuing in any given year actually worked... then the release would be about two years away. In reality it hasn't been working well enough, and every year a new plan is drawn up to reach full autonomy in 2 years.
This is also coincidentally slightly longer than the average tenure for an engineer/scientist, and as such the champions of a given strategy/approach will have departed the company before someone observes the strategy not panning out.
As an ML researcher, I endorse this message. Casual readers may want to re-read what they wrote, because it's really true.
Exploratory AI should be thought of as "potentially kills your company if it doesn't work and you gamble on it working."
The ultimate truth is that you're outsourcing your thinking to ML researchers, much of the time. And as someone on the ground floor, let me tell you that we often don't have a clue how to achieve X any more than you do. We have ideas, testable theories, plans that may or may not work, and have a higher chance of success than random people. But it's still only a chance.
I don't think a lot of companies have fully internalized that. If your company is based around the premise of exploratory AI, you are literally gambling. And it's not the type of gambling you can win in the long run with. It's closer to "put it all on black" than "let's take a principled approach to poker."
I hope as an ML researcher, you're sensitive to the ML/AI usage here.
There are lots of ML algorithms that definitely work, but none of them involve claims of intelligence. So it's not just "exploratory AI" that you're talking about. It's "any AI": anything claiming to be "artificially intelligent" and to fully replace a human in a key decision-making position (but not "any ML", since ML encompasses much more boring stuff).
The number of companies that seem to be charging forward towards AGI is small. Most of the companies are doing what you might call "realistic AI": some definable goal (like self-driving) which everyone agrees will eventually happen (and probably will) but no one is quite sure how to get there in every detail.
I try to be an optimist, mostly because of how many counterexamples you see from history. Didn't some newspaper claim that human flight "may be invented in the next several thousand years" shortly before Kitty Hawk?
But for some reason, rich people keep coming to me with what they're trying to do. My advice is the same: you're going to lose your money unless you bet on proven techniques that exist today.
Take AI dungeon as the prime example. That is what I would call a grade-A idea. It was incredibly obvious that it would become a company (now Latitude) and that it would be successful, if someone was willing to put in the effort to make it so (which Nick Walton did). Once AI dungeon worked, and it had a "minimum viable working thing," the rest was a matter of execution.
But a lot of the ideas seem to fall into the category of... Well, for example, someone came to me saying they wanted to build a "virtual universe, filled with virtual people that you can talk to."
It sounds lovely on paper. But what are you selling, really? There has to be some specific value proposition. So strike one is that it's an unproven market. You yourself want a virtual universe. But is that virtual universe going to lead to something that will solve a lot of people's problems? And we haven't even begun to discuss how you're going to get there. What do you mean exactly by "virtual person"?
It's easy to pick on some of the outliers. But unfortunately, the problem runs much deeper. There are people who genuinely believe that AGI is within reach within our lifetimes, or perhaps within one generation. But whenever I try to corner them into giving specific details on how precisely to get there, the handwaving begins.
This is now a complete tangent, but, I found myself excited and enthusiastic to pursue AGI after a long conversation with a certain someone. They had "fresh eyes" -- a new way of viewing the situation, unlike anything that people were currently trying.
Unfortunately, after throwing myself into that mindset for several weeks, I had no choice but to conclude that their chance was closer to zero than 1%. And I was really trying hard to find that 1%, with all my mental effort (such as it is).
So what choice do we have but to let people pursue impossible dreams, and return to the work that we feel we can make an impact on? Live and let live. And of course, there's the likely outcome: our predictions will be incorrect, and we'll be talking with AGI in a virtual universe sooner than we think. But I wouldn't fall in love with their dream.
(I also think you were unfairly jumped on, and that you had a fine point, for what it's worth. Thanks for the prompt.)
Take AI dungeon as the prime example. That is what I would call a grade-A idea. It was incredibly obvious that it would become a company (now Latitude) and that it would be successful...
Not to further rain on the parade (without good reason) but I should mention...
"Recently we’ve learned that the server costs incurred for both the Griffin and Dragon models are higher than our total revenue. Nobody has ever deployed AI models this powerful before, and it requires extremely expensive hardware. If we don’t make changes to cover the costs of that hardware soon, AI Dungeon will not be financially viable."
AI dungeon and gpt-3 are exactly the kind of superficial BS that fails over the longer run, or only serves to fool people. A more advanced Eliza, but with content lifted from real people.
Most of the companies are doing what you might call "realistic AI": some definable goal (like self-driving)
Self-driving is absolutely the key problem, I'd say. I sympathize with your optimism. I'm optimistic about what "computers can do," but I'm pessimistic about DNNs + standard control becoming able to navigate the human-machine border.
Basically, I think a lot of problems are "AGI-complete", especially problems around human-computer interaction, more problems than people like to admit. And remember, even for problems just in the NP-complete class, an "average" instance can be easy; it's the few hard instances that prove troublesome. It seems to me that AGI-complete problems are similar (and yes, I realize the term is a neologism, defined only by extension, etc., but I'd still say it's valid).
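To illustrate what I mean, here's a toy sketch: plain DPLL on random 3-SAT, nothing fancy. Under-constrained instances are dispatched almost instantly, while in my experience instances near the critical clause/variable ratio (~4.3) tend to need far more branching, even though every instance belongs to the same NP-complete problem.

```python
# Toy illustration: average random 3-SAT instances are easy for a plain DPLL
# solver, but instances near the critical clause/variable ratio need far more
# search, even though all of them are instances of the same NP-complete problem.
import random

def random_3sat(n_vars, n_clauses, rng):
    """Generate a random 3-SAT formula as a list of 3-literal clauses."""
    clauses = []
    for _ in range(n_clauses):
        variables = rng.sample(range(1, n_vars + 1), 3)
        clauses.append([v if rng.random() < 0.5 else -v for v in variables])
    return clauses

def unit_propagate(clauses, assignment):
    """Simplify clauses under `assignment`; return None if a clause is falsified."""
    changed = True
    while changed:
        changed = False
        remaining = []
        for clause in clauses:
            lits, satisfied = [], False
            for lit in clause:
                val = assignment.get(abs(lit))
                if val is None:
                    lits.append(lit)
                elif (lit > 0) == val:
                    satisfied = True
                    break
            if satisfied:
                continue
            if not lits:
                return None                      # conflict: clause falsified
            if len(lits) == 1:
                assignment[abs(lits[0])] = lits[0] > 0
                changed = True                   # forced value, re-simplify
            else:
                remaining.append(lits)
        clauses = remaining
    return clauses

def dpll(clauses, assignment, stats):
    """Return True iff satisfiable; count branching decisions in stats."""
    assignment = dict(assignment)
    clauses = unit_propagate(clauses, assignment)
    if clauses is None:
        return False
    if not clauses:
        return True
    stats["branches"] += 1
    var = abs(clauses[0][0])                     # naive branching choice
    return any(dpll(clauses, {**assignment, var: value}, stats)
               for value in (True, False))

rng = random.Random(0)
n = 30
for ratio in (2.0, 3.0, 4.3, 6.0):
    counts = []
    for _ in range(20):
        stats = {"branches": 0}
        dpll(random_3sat(n, int(ratio * n), rng), {}, stats)
        counts.append(stats["branches"])
    print(f"clauses/vars = {ratio}: median branches {sorted(counts)[10]}, max {max(counts)}")
```

My hunch is that AGI-complete problems have the same shape: the median interaction is easy to fake, and the tail cases are where everything falls apart.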
I also think you were unfairly jumped on, and that you had a fine point, for what it's worth. Thanks for the prompt.
Theoretically having lots of "karma" should make me not care about it. Theoretically...
Basically, I think a lot of problems are "AGI-complete", especially problems around human-computer interaction, more problems than people like to admit.
Having recently become a parent, I have a newfound appreciation for how complex spatial navigation tasks are. Children learn to recognize faces and objects within the first 6 weeks to 4 months of their lives. They learn to navigate environments over the next 1-5 years, and language is fully understood for non-gibberish use cases over the next 5-20 years. It's not a fair comparison, but it provides roughly the only yardstick we know of.
What real world neural network algorithm is "fully baked by the academic community"? I don't think there are any.
I don't think there are companies with products based on AI where the AI has to work for the company to succeed. Google uses AI for search, but search can screw up, and search returns a lot of just-indexed results. There's no "real world application" where AI works reliably (i.e., gives a result that you can count on). That doesn't stop deep networks from being an improvement on applications that were previously a combination of database queries. But this same only-relative usefulness can be problematic when companies and institutions delegate decisions to AI where it doesn't hurt them to be wrong but it can mightily screw some random person (from credit to parole to whatever).
The relative improvement is both an oversell and an undersell depending on the context. For many applications the correct answer may be that a reasoned set of DB queries is about as good as it gets, owing to lack of data, no better algorithm existing, or the product experience being only mildly impacted by changes to the DB-fetching component.
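For instance, a "reasoned set of DB queries" for search is often just a hand-tuned scoring function over a few fields. A minimal sketch, with the schema and weights entirely made up for illustration:

```python
# Sketch of a hand-tuned ranking baseline; the schema and weights are made up.
# Often the honest question is whether an ML model beats something this simple
# by enough to justify the investment.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Doc:
    title: str
    body: str
    updated_at: datetime  # assumed timezone-aware (UTC)
    clicks: int

def score(query: str, doc: Doc) -> float:
    terms = query.lower().split()
    title_hits = sum(t in doc.title.lower() for t in terms)
    body_hits = sum(t in doc.body.lower() for t in terms)
    age_days = (datetime.now(timezone.utc) - doc.updated_at).days
    # Hand-picked weights: title matches dominate; clicks and recency break ties.
    return 3.0 * title_hits + 1.0 * body_hits + 0.1 * doc.clicks - 0.01 * age_days

def search(query: str, docs: list[Doc], k: int = 10) -> list[Doc]:
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]
```

The point isn't that this baseline is good; it's that you need evidence an ML model beats it before the investment makes sense.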
When confronted with these uncertainties, internal stakeholders will often swing from "we just need more scientists working on this problem" to "it works fine, why would we spend time on this?" attitudes. The former almost always leads to over-investment, where 3 teams of people are working on what should be one individual's project. The latter can sometimes be right, but I've also seen Fortune 500 search rankings that have never been tuned, let alone leveraged an ML model.