This article from the NeuroLogica Blog looks at the technological hurdles facing brain simulation and artificial intelligence research.
Artificial Brain in 10 Years?
I have written previously about the various attempts to reverse engineer the brain and to develop artificial intelligence (AI). This is an exciting area of research. On the one hand, researchers are trying to model the workings of a mammalian brain, eventually a human brain, down to the tiniest detail. On the other, researchers are also trying to build an AI – either in hardware or virtually in software.
In the middle are attempts at interfacing brains and computer chips. Remember the monkeys who can move robot arms with their minds?
All these efforts are synergistic – modeling the mammalian brain will help AI researchers build their AI, and building AI computers and applications can teach us about brain function. The more we learn about both, the easier it will be to interface them.
However, it is very difficult to say how far away specific milestones and applications are on all these fronts. Progress is steady and promising, but predicting the future is hard.
That has not stopped Henry Markram from predicting at the current TED Global Conference that we can have a virtual model of the human brain in 10 years.
“It is not impossible to build a human brain and we can do it in 10 years,” he said.
“And if we do succeed, we will send a hologram to TED to talk.”
I hope he’s right – that would be a huge milestone. Markram works on the Blue Brain project, and right now they have managed to build a virtual model of a rat cortical column. This simulates about 10,000 neurons. Markram says that each neuron requires the processing power of a laptop to model, so they use an IBM Blue Gene machine with 10,000 processors. This is definitely not a desktop application.
And that’s just one cortical column. Modeling the entire brain would mean simulating about 100 billion neurons – which gives you an idea of the processing power of the human brain. At one cortical column (10,000 neurons) per supercomputer, that works out to roughly 10 million supercomputers.
It seems to me that hardware is going to be the biggest limiting factor on Markram’s prediction. If we assume Moore’s Law of doubling processing power every 18 months, then closing that 10-million-fold gap takes about 24 doublings, or roughly 36 years, before a single supercomputer can match the power of 100 billion neurons.
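For those who want to check the arithmetic, here is a minimal back-of-envelope sketch in Python. It simply restates the assumptions above (about 10,000 neurons per present-day supercomputer, about 100 billion neurons in the brain, and one doubling of processing power every 18 months); none of it comes from the Blue Brain project itself.

```python
import math

# Back-of-envelope check of the numbers above. Assumptions (from the post):
#  - one present-day supercomputer simulates ~10,000 neurons (one cortical column)
#  - the human brain has ~100 billion neurons
#  - processing power doubles every 18 months (Moore's Law)
neurons_per_supercomputer = 10_000
neurons_in_brain = 100_000_000_000

# How many of today's supercomputers would a whole-brain simulation need?
supercomputers_needed = neurons_in_brain / neurons_per_supercomputer
print(f"Supercomputers needed today: {supercomputers_needed:,.0f}")   # 10,000,000

# How long until one machine could do it all, at Moore's Law pace?
doublings = math.log2(supercomputers_needed)
years = doublings * 1.5
print(f"Doublings required: {doublings:.1f}")    # ~23.3
print(f"Years of doubling needed: {years:.0f}")  # ~35, in line with the ~36 above
```

However you round the roughly 23 doublings, the answer lands in the mid-thirties of years, which is why the hardware alone makes a 10-year timeline look optimistic.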
Of course, simulating virtual neurons in software requires much more processing power than building artificial neurons directly – that is, building a massively parallel processor that duplicates a brain in hardware, rather than creating a virtual brain. But that path is also hard to predict, because it requires the development of new hardware, not just software and information.
While I share Markram’s enthusiasm for this technology, and his optimism that all the components will eventually fall into place (modeling the human brain and developing fast enough computers), 10 years seems to be pushing it. I hope to be proven wrong.