I remember the first AI winter. And the second. And the third.
From The Verge: Self-driving cars are headed toward an AI roadblock:
The dream of a fully autonomous car may be further than we realize. There’s growing concern among AI experts that it may be years, if not decades, before self-driving systems can reliably avoid accidents. As self-trained systems grapple with the chaos of the real world, experts like NYU’s Gary Marcus are bracing for a painful recalibration in expectations, a correction sometimes called “AI winter.”
From the Wikipedia entry on “AI winter”:
The term first appeared in 1984 as the topic of a public debate at the annual meeting of AAAI (then called the “American Association of Artificial Intelligence”). It is a chain reaction that begins with pessimism in the AI community, followed by pessimism in the press, followed by a severe cutback in funding, followed by the end of serious research. At the meeting, Roger Schank and Marvin Minsky—two leading AI researchers who had survived the “winter” of the 1970s—warned the business community that enthusiasm for AI had spiraled out of control in the 1980s and that disappointment would certainly follow. Three years later, the billion-dollar AI industry began to collapse.
My take: As the Wikipedia article demonstrates, the history of AI is littered with cycles of hype and disappointment. Early machine translation. Connectionism. Lisp machines. Expert systems. Fifth generation computers. The Strategic Computing Initiative.
As a young staff writer at Time I tried several times to produce a story on AI that could run on the cover of the magazine—which was still a big thing in those days—but never managed to write an editable draft. I came to the conclusion that the human mind, or at least the mind of a Time editor, is by its nature threatened by the concept of a machine that can do what it—the mind—does.
The current cycle of AI hype is centered on machine learning and autonomous systems—which Tim Cook last year called “the mother of all AI projects” and declared (belatedly) an area of intense interest for Apple.
If history is any guide, disappointment will follow. The limits of machine learning—as the MIT Technology Review periodically reminds us—are well known to academics. For one thing, today’s machine learning models are brittle, narrowly focused, and don’t generalize well. They are also hard to edit because at their core their “learning” is incomprehensible, in the sense of being beyond human understanding.
That may be the real AI roadblock of the Verge’s headline. I’m not sure investors—never mind the general public—have wrapped their minds around what it means to trust your life to an autonomous system moving at highway speeds. After all, what is a self-driving car but a machine that can kill in the hands of a computer whose operations are beyond human ken?