
April 13, 2012

IBM Watson system referencing complete, indexed, and annotated whole-lifetime video records to try to pass the Turing Test

Wired - “Two revolutionary advances in information technology may bring the Turing test out of retirement,” wrote Robert French, a cognitive scientist at the French National Center for Scientific Research, in an Apr. 12 Science essay. “The first is the ready availability of vast amounts of raw data — from video feeds to complete sound environments, and from casual conversations to technical documents on every conceivable subject. The second is the advent of sophisticated techniques for collecting, organizing, and processing this rich collection of data.”

“Is it possible to recreate something similar to the subcognitive low-level association network that we have? That’s experiencing largely what we’re experiencing? Would that be so impossible?” French said.

Science - Dusting Off the Turing Test

Hold up both hands and spread your fingers apart. Now put your palms together and fold your two middle fingers down till the knuckles on both fingers touch each other. While holding this position, one after the other, open and close each pair of opposing fingers by an inch or so. Notice anything? Of course you did. But could a computer without a body and without human experiences ever answer that question or a million others like it? And even if recent revolutionary advances in collecting, storing, retrieving, and analyzing data lead to such a computer, would this machine qualify as “intelligent”?




By the mid-1980s, the Turing test had been largely abandoned as a research goal (though it survives today in the annual Loebner Prize for realistic chatbots, and momentarily realistic advertising bots are a regular feature of online life). However, it helped spawn the two dominant themes of modern cognition and artificial intelligence: calculating probabilities and producing complex behavior from the interaction of many small, simple processes.
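The second theme can be illustrated with a toy sketch (hypothetical, not any researcher's actual model): each "unit" does nothing more than count which words appear together, yet a useful association network emerges from the accumulated counts.

```python
# Minimal sketch of complex associative behavior emerging from many small,
# simple processes: co-occurring words strengthen their mutual link.
from collections import defaultdict
from itertools import combinations

def build_associations(sentences):
    """Hebbian-style rule: words appearing in the same sentence gain weight."""
    weights = defaultdict(int)
    for sentence in sentences:
        words = set(sentence.lower().split())
        for a, b in combinations(words, 2):
            weights[frozenset((a, b))] += 1
    return weights

def strongest_associate(weights, word):
    """Return the word most strongly linked to `word`, if any."""
    best, best_weight = None, 0
    for pair, weight in weights.items():
        if word in pair and weight > best_weight:
            (other,) = pair - {word}
            best, best_weight = other, weight
    return best

corpus = [
    "coffee tastes bitter without sugar",
    "coffee in the morning",
    "morning coffee with milk",
]
weights = build_associations(corpus)
print(strongest_associate(weights, "coffee"))  # "morning" co-occurs twice
```

No single rule here "knows" anything about coffee; the association falls out of many independent, local counting steps, which is the flavor of behavior the connectionist theme refers to.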

Unlike the so-called brute force computational approaches seen in programs like Deep Blue, the computer that famously defeated chess champion Garry Kasparov, these are considered accurate reflections of at least some of what occurs in human thought.



As of now, so-called probabilistic and connectionist approaches inform many real-world artificial intelligences: autonomous cars, Google searches, automated language translation, the IBM-developed Watson program that so thoroughly dominated at Jeopardy!. They remain limited in scope — “If you say, ‘Watson, make me dinner,’ or ‘Watson, write a sonnet,’ it explodes,” said Noah Goodman, a cognitive scientist at Stanford — but raise the alluring possibility of applying them to unprecedentedly large, detailed datasets.

“Suppose, for a moment, that all the words you have ever spoken, heard, written, or read, as well as all the visual scenes and all the sounds you have ever experienced, were recorded and accessible, along with similar data for hundreds of thousands, even millions, of other people. Ultimately, tactile and olfactory sensors could also be added to complete this record of sensory experience over time,” wrote French in Science, with a nod to MIT researcher Deb Roy’s recordings of 200,000 hours of his infant son’s waking development.

He continued, “Assume also that the software exists to catalog, analyze, correlate, and cross-link everything in this sea of data. These data and the capacity to analyze them appropriately could allow a machine to answer heretofore computer-unanswerable questions” and even pass a Turing test.
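A toy sketch of that thought experiment (all names and data here are hypothetical): treat a lifetime of annotated recordings as timestamped records, build an inverted index over their annotation tags, and cross-link tags by intersecting the index — the kind of cataloging and correlating software French assumes.

```python
# Hypothetical sketch: index timestamped "experience" records by tag,
# then cross-link tags to find moments where they co-occurred.
from collections import defaultdict

def index_experiences(records):
    """Build an inverted index: tag -> set of record ids."""
    index = defaultdict(set)
    for rid, (timestamp, tags) in enumerate(records):
        for tag in tags:
            index[tag].add(rid)
    return index

def cross_link(index, *tags):
    """Record ids where all the given tags co-occur."""
    sets = [index[t] for t in tags]
    return set.intersection(*sets) if sets else set()

# Tiny stand-in for a lifetime of annotated video/audio.
records = [
    (100, {"hands", "fingers", "touch"}),
    (205, {"rain", "window", "sound"}),
    (310, {"fingers", "piano", "touch"}),
]
index = index_experiences(records)
print(cross_link(index, "fingers", "touch"))  # {0, 2}
```

At a lifetime scale the indexing and annotation problems are of course enormously harder; the sketch only shows the shape of the retrieval step that would let a machine ground answers in recorded experience.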

