17 ExaFLOP Deep Learning Optical Computer by 2020?

Baidu, Google, and others are competing to build deep learning artificial intelligence systems.

Baidu built a neural network that roughly matched the Google Brain system for a fiftieth of the cost, only $20,000, using off-the-shelf graphics chips from Nvidia. That ratio implies the original Google system cost on the order of $1 million.

The Google Brain was designed to test the potential of deep learning, which involves feeding data through networks of simulated brain cells to mimic the electrical activity of real neurons in the neocortex, the seat of thought and perception. Such software can learn to identify patterns in images, sounds, and other sensory data. In one now-famous experiment, the researchers built a “brain” with one billion connections among its virtual neurons; it ran on 1,000 computers with 16 processors apiece. By processing 10 million images taken from YouTube videos, it learned to recognize cats, human faces, and other objects without any human help. The result validated deep learning as a practical way to make software that was smarter than anything possible with established approaches to machine learning. It led Google to invest heavily in the technology—quickly moving the Google Brain software into some of its products, hiring experts in the technique, and acquiring startups.
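To make the idea concrete, here is a minimal sketch of what "feeding data through networks of simulated brain cells" looks like as code: a single forward pass through a tiny fully connected network. The layer sizes, weights, and input are arbitrary placeholders, not the actual Google Brain or Baidu architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Simple nonlinearity applied between layers
    return np.maximum(0.0, x)

# Arbitrary, illustrative layer sizes: 784 inputs -> 128 hidden -> 10 outputs
W1 = rng.normal(scale=0.01, size=(784, 128))
W2 = rng.normal(scale=0.01, size=(128, 10))

def forward(x):
    # Each layer is a matrix multiply over the "connections",
    # followed by a nonlinearity standing in for neuron firing
    h = relu(x @ W1)
    return h @ W2

x = rng.normal(size=(1, 784))   # stand-in for one flattened image
scores = forward(x)
print(scores.shape)             # (1, 10): one score per candidate category
```

Nearly all of the arithmetic in such a network is matrix multiplication, which is exactly the workload GPUs accelerate today, and the workload any optical processor would have to accelerate to matter for deep learning.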

The GPGPUs that implement the Baidu deep learning brain may eventually be replaced by new optical computers.

A startup company called Optalysys is trying to build a fully optical computer aimed at many of the same tasks for which GPUs are currently used. Remarkably, Optalysys claims it can deliver an optical-solver supercomputer reaching an astonishing 17 exaFLOPS by 2020.

Deep learning + 17 exaFLOP optical computer = 17 exaFLOP deep learning system by 2020.

This would be about 1 million times faster than the current Baidu brain and might approach the human brain in scale.
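A quick back-of-the-envelope check of that scaling claim. Note that the ~17 teraFLOPS figure for Baidu's GPU system is not a published benchmark; it is the throughput implied by the article's own ratio.

```python
# Back-of-the-envelope check of the "1 million times faster" claim.
optical_flops = 17e18          # claimed 17 exaFLOPS optical machine
speedup       = 1e6            # "about 1 million times faster"

implied_baidu_flops = optical_flops / speedup
print(f"{implied_baidu_flops:.0e} FLOPS")   # 2e+13, i.e. ~17 teraFLOPS

# ~17 teraFLOPS is a plausible aggregate throughput for a ~$20,000
# rack of 2014-era Nvidia GPUs, so the ratio is at least self-consistent.
```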

Alternative neuromorphic approach to human-brain-scale computing

The recent 1-million-neuron IBM neuromorphic chip (TrueNorth) is another approach to human-brain-scale computing.

It was actually built in 2013.
IBM already has a mini-board with 16 million neurons (16 chips).
They should have a rack with 4 billion neurons by 2015.
They hope to market it with Watson.
They are planning improvements in neuron quality to achieve synaptic plasticity.
They are looking to scale to 20 billion neurons and 1 trillion synapses by 2019, a roadmap sketched out below.
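To put that roadmap in perspective, here is a small sketch comparing each milestone against commonly cited human-brain estimates. The milestone figures are the article's; the brain numbers (~86 billion neurons, on the order of 100 trillion synapses) are rough textbook estimates, not precise measurements.

```python
# IBM neuromorphic roadmap from the article vs. rough human-brain estimates.
HUMAN_NEURONS  = 86e9    # commonly cited estimate (~86 billion neurons)
HUMAN_SYNAPSES = 100e12  # order-of-magnitude estimate (~100 trillion synapses)

milestones = [
    ("TrueNorth chip (2013)",  1e6),
    ("16-chip board",          16e6),
    ("Rack (2015 target)",     4e9),
    ("2019 target",            20e9),
]

for name, neurons in milestones:
    print(f"{name:24s} {neurons:>8.0e} neurons "
          f"({neurons / HUMAN_NEURONS:.4%} of human-brain estimate)")

# Even the 2019 target of 1 trillion synapses would be ~1% of the
# human brain's estimated synapse count.
```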
