You are right - for specific tasks. In computer vision, for example, they are able to surpass humans at static image recognition. For image segmentation they are getting close already. For generating art they have IMO surpassed some "artists" already ;-)
If you think about it, even at 90% accuracy they are worth it: they could almost fully automate some cognitively difficult tasks, and for the instances they can't handle properly, a human can intervene. So instead of 10 humans watching over something you end up with a single one. It will take some time until everything we have in research makes it into production, and that will change a lot of things. After that there might be another AI winter (who knows if we can do general intelligence?), but with the current pipeline it might take a while to reach a plateau.
What's the second AI winter then? I only know about the 1986-1990 era, when "AI" was used in a broad sense covering everything from Prolog, expert systems, LISP, heuristic search algorithms, and fuzzy logic to ML, neural networks, and their generalizations.
I've still got great shiny magazines from that time, though.
From MIT[1]:
"The first chill occurred in the 1970s, as progress slowed and government funding dried up; another struck in the 1980s as the latest trends failed to have the expected commercial impact."
I would not generalize from Watson to the current trends in AI.
I researched Watson when it won Jeopardy, as much as I could find publicly, and concluded even back then that it wouldn't stand up to the hype IBM was creating.