Here is the Henry Markram interview by Sander Olson. [Note: there are 14 questions and answers, and a copyright notice by Dr. Markram at the end.] Dr. Markram is the Project Director of the Blue Brain Project, and he recently predicted that the Blue Brain Project could produce a human-level brain simulation within a decade. Of note in this interview:
- The Blue Brain Project is not an AI project, but an attempt to unlock the mysteries of the brain. Dr. Markram is confident that Blue Brain models will eventually supplant AI.
- Knowledge of the brain is increasing exponentially. We are currently gathering as much information on the brain's structure and function each year as was gained in the entire 20th century. Neuroscientists are currently producing about 50,000 peer-reviewed articles per year. The Blue Brain Project was launched in part to organize and coordinate this research.
- The Blue Brain Project currently has the capability of simulating models of 100 million neurons with 100 billion synapses, but is constrained by lack of funds to buy a sufficiently powerful computer.
- It currently requires 10-100 seconds of computer time to simulate one second of neuronal activity, but future computers should be able to simulate neurons in close to real time.
- A grid computing program to simulate and "build" individual neurons will soon be unveiled, and it will run on individual PCs as a screen saver.
- The Blue Brain Project should result in extremely powerful "liquid computers" that can handle infinite parallelization.
Henry Markram Interview
Question 1: Tell us about the Blue Brain project.
HM: The Blue Brain Project aims to build a 21st century research facility capable of constructing and simulating computer models of the brain. Such a facility will be capable of constructing brain models across different levels of biological detail and for different animal species, ages, and diseases. The target is to be able to construct models of the human brain within about a decade. The facility will serve to aggregate, organize and unify all fragments of data and knowledge of the
brain (past and future), allow virtual experimentation to test hypotheses of brain function and dysfunction, and create a novel platform for virtualized medicine. A prototype facility has been completed, which is today capable of building any neural microcircuit or module of the brain with cellular-level precision. The facility's capability will be extended over the course of the next 10 years to building whole-brain models at the subcellular, molecular and genetic levels. The facility will be accessible as an internet portal with different levels of access to provide neuroscientists with virtual laboratories, hospitals with advanced diagnostic and treatment planning facilities, clinicians with a facility to explore hypotheses of disease for specific patients, pharmaceutical companies with the means to carry out virtual drug development, technology companies with tools to design the next generation of technologies, and schools and institutions of higher education with virtual tours into the brains of different animals, where students can see and learn how the brain is designed and functions, how it evolved, develops and then ages, and what can go wrong in different diseases. The facility will also be open for public virtual visits to allow anyone to better understand their own brain and what kind of reality it is genetically and environmentally programmed to create, and to find out how they can shape the reality they create for themselves.
Question 2: You recently predicted that an artificial brain could be built within 10 years. What properties will this brain have? How closely will it resemble a human brain?
HM: We are building brain models of different mammalian species (mouse, rat, cat, monkey) first before we reach the human brain. We do this because we need to learn how to use less and less invasive data and more and more indirect data to build human brain models. We build these models by sticking as close to biology as possible. This is done by abstracting the biological components and processes into mathematical equations and then into computer models. We do not aim at a specific function – they should all appear if we build it correctly. Computational neuroscience over the past 50 years has been a theory-driven science, while Blue Brain is biology-driven. When we reach the human brain, human-like perceptions and motor actions should emerge automatically. The model brains will be able to learn to do what we humans can learn to do: perform complex decision-making, and manifest emotions, intelligence and personalities. We see these all as straightforward emergent properties of the brain. Self-awareness and consciousness may also emerge if these phenomena depend on neuronal, synaptic and/or molecular interactions. Anything that depends on the physical elements that can be measured in the brain should emerge if we are successful in building it accurately enough. If we inject some theories of the brain while we build it and ignore the biology, then we are back to square one with computational neuroscience and we will almost certainly fail.
Question 3: Ray Kurzweil predicts that our understanding of the human brain is
increasing exponentially. Do you agree?
HM: Kurzweil is neither entirely right nor entirely wrong; it depends on how you look at it. It is certainly true that we are probably generating more data and knowledge about the brain's structure and function in one year today than we generated in the entire 20th century. The amount of data and knowledge about the brain that will be obtained in the 21st century is vast beyond imagination. Neuroscientists today are producing over 50,000 peer-reviewed articles in one year, and that number is growing exponentially. Machines and robotic technology can sequence and map parts of the brain at many levels, producing data thousands of times faster than any lab of the past. So yes, there is no doubt that we are generating a massive amount of data and knowledge about the brain, but this raises a dilemma of what the individual understands. No neuroscientist can read more than about 200 articles per year, and no neuroscientist is even remotely capable of comprehending the current pool of data and knowledge. Neuroscientists will almost certainly drown in data in the 21st century. So, actually, the fraction of the known knowledge about the brain that each person holds is decreasing(!) and will decrease even further until neuroscientists are forced to become informaticians or robot operators. This is one of the reasons that we launched the Blue Brain Project. We need a global agenda to bring the data and knowledge together in a single working platform – a model – so that each scientist can test their tiny fragment of understanding against all the data and knowledge that everyone has accumulated together. One needs to see and feel all the data and knowledge in one place. I believe that unless we succeed in a project like the Blue Brain Project, we will never understand the brain. It is the unifying strategy, much the same way that models unified understanding in so many past revolutions in science.
Question 4: You recently completed Phase one of the Blue Brain project. What
did Phase one accomplish?
HM: The Blue Brain Project is not aiming at building just a single model; we are building an international facility that has the capability to build brain models. The facility roadmap is to gradually expand its capability and capacity to build whole-brain models at ever greater levels of resolution. Over the past 4 years we built a prototype facility as a proof of concept and targeted cellular-level resolution. We also wanted to try to solve some fundamental challenges that, if we could not solve them, would mean that it is technically not possible to build biologically realistic brain models. We solved these challenges and built the first prototype facility that can now build neural microcircuits at cellular-level resolution. On our current supercomputer we can build and simulate up to 100,000 biologically detailed neurons interconnected with 10 million synapses. The facility actually has the capability of building much larger neural circuits even today (100 million neurons with 100 billion synapses), but we can't afford to buy the big computer needed to simulate them. The facility is unique because it is designed in a way that the models "absorb" biological data and knowledge and continuously become as real as the available data and knowledge.
In building this facility, we already discovered some fascinating principles of how neural microcircuits are designed, how complex neural states emerge, which elements contribute to specific neural states, and we are close to testing a fundamental theory of how the brain generates a perceptual reality – the neural code.
Question 5: Is there a phase two?
HM: Of course. There are many, many big phases and many more tiny steps before we reach the human brain. We are expanding the capability of the facility to build models at the sub-cellular (structures inside cells), molecular and genetic levels, and we are expanding the capacity of the facility to build and simulate larger models until we reach whole-brain models. Each phase needs more computing resources and a bigger team of engineers and scientists to deal with the new levels of issues.
Question 6: It takes electronic circuits about 100 seconds to emulate 1 second of
neuronal activity. How long before electronic circuits can emulate their biological counterparts in real time?
HM: Well, I am not sure where you got these numbers. Firstly, there is a difference between emulation and simulation: a neuron on a silicon chip emulates the behavior of a neuron, while a software model of a neuron simulates its behavior. We are simulating the brain, not emulating it. It is very difficult and impossibly expensive to build even simple equations of complex neurons onto a silicon chip. The most that is possible today at the cellular level is around 50 or so neurons. These analog VLSI (aVLSI) chips can actually emulate neuronal behavior in real time, and even much faster than real time. Because of this limit, what people mostly do is put very simple neurons (very simplified equations) onto silicon chips. In these cases it is possible to build networks of thousands of what we call "point neurons". There is a project called FACETS that has built a chip with over 100,000 neurons in a network, which actually runs the calculations 10,000 times faster than real time. DARPA is also trying to build such chips that can run even larger neural networks with intelligent synapses. From the perspective of Blue Brain, these projects are peripherally interesting as engineering projects that will probably build some mildly clever devices, but they do not come even close to the sophistication and capabilities of a Blue Brain model. Software models have the advantage that you can make the models as complex as you need to, but they have the drawback that they need very advanced supercomputers to get close to real time. When we started Blue Brain, 1 second of biological time took over 1,000 seconds to simulate, but we improved the software, and it now takes around 10-100 seconds. So it is still in slow motion. The future computers we are planning to build should get the simulations close to real time.
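The gap between biological time and wall-clock time comes from numerical integration: even the simplest software "point neuron" must be stepped through thousands of small time steps per simulated second, and detailed models multiply that cost per neuron. A minimal sketch in Python of a leaky integrate-and-fire point neuron (all parameter values here are illustrative assumptions, not figures from the interview):

```python
# Minimal leaky integrate-and-fire "point neuron" -- the kind of
# simplified model Markram contrasts with Blue Brain's detailed
# cellular models. Parameters are illustrative, in SI units.

def simulate_lif(t_stop=1.0, dt=1e-4, tau=0.02, v_rest=-0.065,
                 v_thresh=-0.050, v_reset=-0.065, i_input=0.002, r_m=10.0):
    """Integrate dV/dt = (v_rest - V + R*I) / tau with Euler steps.

    Simulating t_stop seconds of biological time costs t_stop/dt
    update steps per neuron -- the source of the slowdown between
    biological time and computer time discussed above.
    """
    v = v_rest
    spikes = []
    for k in range(int(t_stop / dt)):
        v += dt * ((v_rest - v + r_m * i_input) / tau)
        if v >= v_thresh:           # threshold crossed: emit a spike
            spikes.append(k * dt)
            v = v_reset             # and reset the membrane potential
    return spikes

spikes = simulate_lif()
print(len(spikes), "spikes in 1 s of simulated biological time")
```

With these toy parameters the constant drive pushes the membrane past threshold roughly every 28 ms, so the neuron fires regularly; a detailed compartmental model would replace the single equation with thousands of coupled ones, which is where the supercomputer requirement comes from.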
Question 7: How many different types of neurons exist in the human brain? What, besides morphology, differentiates these neurons?
HM: There are around 400 brain regions and each brain region contains neurons with different types of morphologies. Some brain regions contain only 2 or 3 types, while others contain up to 50 types. The neocortex has 48 main types of morphologies. The average is around 5 types, so there are around 2000 different morphological types of neurons in the brain. Telling them apart however is not an entirely solved problem in neuroscience. It is like trying to mathematically describe the differences between any two trees in a forest. Neurons can also differ in many other ways. A very important way they differ is in their electrical “personalities”. There are about 15
classes of electrical personalities that a neuron can take on. Even neurons with the same morphology can take on different electrical personalities. The way neurons build their electrical personalities is to select which ion channels (membrane proteins that pass electrical current into the cell) are expressed by the genes. So if one looks at the genes that are switched on in neurons, then one sees that they switch on different genes to make different ion channels and by combinatorics, they create their different electrical personalities. Neurons can also differ in the
way they switch on other genes to build other types of proteins, and so even neurons that look and behave the same can process information differently. So there are actually many thousands of different types of neurons at the morphological-electrical-molecular levels. If you then also consider that each neuron is plugged into the brain in a different way, then one realizes that each of your 100 billion neurons is unique, and no neuron in our brains is the same as any neuron in anyone else's brain. So the Blue Brain Project is trying to understand this complexity, rather than build simple models that do clever tricks.
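The counting in the answer above can be checked with simple arithmetic. A sketch using the rough averages Markram gives (these are averages, not exact counts, and the product is only an upper bound since not every morphology takes on every electrical class):

```python
# Rough combinatorics of neuron types, using the figures from the answer:
# ~400 brain regions, ~5 morphological types per region on average,
# ~15 classes of electrical "personality".
regions = 400
avg_morph_types_per_region = 5
electrical_classes = 15

morph_types = regions * avg_morph_types_per_region
print(morph_types)                       # → 2000 morphological types
print(morph_types * electrical_classes)  # → 30000, an upper bound on
                                         #   morpho-electrical types
```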
Question 8: What is liquid computing?
HM: Let me first explain that a Turing machine is a machine that can solve any problem if the problem is given to it in discretely timed batches. So a Turing machine is a universal computer for what is called "batch processing". But what a Turing machine can't do is solve problems universally while information is continuously coming in and disturbing it from finishing the operation it just started on. In other words, it can't (without workarounds and cheating), strictly speaking, solve problems presented to it on an analog time scale and produce answers on an analog time scale. A liquid computer, however, is a computer that can solve any problem in real time and at any time (not discrete time). You can even call it "anytime processing". So it is a universal theory for analog computing. You see, a big problem the brain has to solve is how to keep thinking about something it just saw while the world around it never stops sending it new information. If you sent your computer new information continuously, it would not be able to do anything, because it can't finish one thing before it has to start on another problem. The way liquid computing works is very much like an actual physical liquid. It makes sense of the perturbations rather than seeing them as a nuisance. We also call it high-entropy computing, or computing on transient states. This is a very important (but not complete) theory of how the brain works because it shows us how to tap into the vast amount of information that lies in a "surprise". Another big challenge in understanding the brain is that it is always physically changing. Your brain right now is already different from what it was just 1 hour ago, and extremely different from what it was when you were 10 years old.
So, because your brain is constantly different and because every moment in your life is potentially (hopefully) also novel, there is a very good chance that most of the time, the responses produced in your brain are new to you (to your neurons) - never “seen” before. So if the brain produces a response that it never “experienced” before, how does it know what it means? The state that your brain is in right now, never happened before so how can your brain make sense of states it never saw before and connect them to all your moments before? Liquid computing provides a partial explanation for this problem by showing that the same state never actually needs to reoccur in the brain for you to make sense of the states – that is why we also call it computing on transient states. Liquid computing can in principle solve any problem instantaneously and keep solving them in real-time and with infinite parallelization. But, it is very difficult to build a good liquid computer. One of the benefits of Blue Brain is that it will be able to design and build extremely powerful liquid computers.
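Liquid (reservoir) computing has a standard minimal instance: a fixed random recurrent network whose continuously perturbed, transient states are tapped by a trained linear readout. A sketch in Python/NumPy of such an "echo state" reservoir on a toy delayed-recall task (the network size, scaling factor and task are illustrative assumptions, not Blue Brain specifics):

```python
# A tiny echo state network -- one concrete instance of liquid/reservoir
# computing. The reservoir (the "liquid") is never trained and never
# resets: each input perturbs its state, and only a linear readout is
# fit on the transient states.
import numpy as np

rng = np.random.default_rng(0)
n_res = 100                          # size of the reservoir ("liquid")

# Fixed random recurrent weights, scaled so perturbations fade over
# time (the "echo state" condition) instead of blowing up.
w = rng.normal(size=(n_res, n_res))
w *= 0.9 / np.max(np.abs(np.linalg.eigvals(w)))
w_in = rng.normal(size=n_res)        # input weights, also fixed

def run_reservoir(u):
    """Drive the liquid with the input stream u; collect its states."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(w @ x + w_in * u_t)   # perturbed, never reset
        states.append(x.copy())
    return np.array(states)

# Toy task: recover a 2-step-delayed copy of the input purely from the
# current transient state -- the "memory in the ripples" idea.
u = rng.uniform(-1.0, 1.0, 500)
states = run_reservoir(u)
target = np.roll(u, 2)               # target[t] = u[t - 2]
w_out, *_ = np.linalg.lstsq(states[10:], target[10:], rcond=None)
pred = states[10:] @ w_out
print("readout mean squared error:",
      float(np.mean((pred - target[10:]) ** 2)))
```

The design point is the one Markram describes: the same exact state never recurs, yet the readout still makes sense of each novel transient state, because the liquid's current ripples encode the recent past.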
Question 9: The Blue Brain project is constrained by the lack of available computing power. Have you considered initiating a distributed computing project, along the lines of the protein folding or SETI projects?
HM: Indeed, CERN is our neighbor and they invented GRID computing. But there are different limits for distributed computing. The brain is a perfect democracy and no neuron can make a decision without first listening to thousands of others, so interconnect is critical. GRID computing is not ideal for brain simulations. But what we are doing is building a Blue GRID to help us build and analyze neurons, because for that one just needs many processors working independently. Building biologically realistic neurons is even more challenging than simulating them, but this does not need supercomputing - it needs distributed computing. We call this project the "Adopt a Neuron Project" and it will soon go live. Anyone will be able to adopt a neuron and have it work as a screen saver while helping us build and analyze the neurons.
Question 10: Your Blue Gene/L supercomputer is vital to the research. Do you have plans to augment or replace the Blue Gene/L supercomputer with a more powerful model?
HM: Already done. Blue Gene/L is history - we now have a Blue Gene/P supercomputer. This allows us to make the step to molecular level resolution for the models.
Question 11: It appears that the brain incorporates hybrid digital-analogue computing techniques. To what extent can these techniques be emulated by a purely digital computer?
HM: This is a complex question that can't be answered properly here. The issue is actually multidimensional, because digital vs. analog computation pertains to the discretization of space, time, amplitude and the identities of the elements. Tracking the configuration changes of molecules is still reasonable, but tracking every atom takes too many digital time steps to sustain for long. So it is not yet too serious a barrier for molecular-level simulations, but to simulate every atom's position and movement in the brain will require the super-quantum computers of the 22nd century. In short, the numerical precision of digital computers is good enough to capture the analog resolution of amplitudes, spaces and identities that is relevant to measurable biological processes. PS: simulation, not emulation.
Question 12: Have you collaborated with any members of the AI community? Is
your project affecting the AI field?
HM: No, Blue Brain adopts a philosophy that is pretty much 180 degrees opposite to the philosophy in AI. In my view, AI is an extreme form of engineering and applied math where you try to come up with a God formula to create magical human powers. If you want to go into AI, I think you have to realize you are making the assumption that your formula will have to capture 11 billion years of evolutionary intelligence. In most cases, AI researchers do not even know what a neuron is, let alone how the brain works, but then they don't need to, because they are searching for something else. I don't blame them for trying because, if you want to build clever devices today, it is much easier to ignore the brain - it is just too complex to harvest the technology of the brain. Look at speech recognition today – the best systems out there don't use neural principles. Having said that, we all know how inadequate the current devices are, and that is just because AI can't even come close to what the brain can do. Blue Brain is not trying to build clever devices; it is a biological project that will systematically reveal the secret formulas operating in the brain, but Blue Brain models and simpler derivative models will gradually replace all of AI.
Question 13: Who is funding your organization? To what extent is the Blue
Brain project constrained by limited funding?
HM: The Blue Brain Project is a project of the Swiss Federal Institute of Technology in Lausanne (EPFL), so I get funding from the EPFL (which means from the Swiss government), my research grants (European Union, foundations, etc.), some other entities and just one special visionary donor. Sure we are limited by funding. It is a multi-billion dollar project (about the cost of one F-18 fighter jet). I have a roadmap to finish within 10 years, but the uncertainty is the funding. Naturally I think this is the most important project the human race can ever undertake because it will explain how we create our individual realities and even explain reality itself. And then there are over 2 billion people on earth trapped in a distorted reality because of brain diseases. So how long should we wait?
Question 14: What do you estimate is the likelihood that an artificial brain based
on your research will become sentient?
HM: Well, this is also a loaded question because there are many preconceptions out there. Wiki says that “sentience is the ability to feel or perceive subjectively”. Not a bad definition at first sight, but actually nature (organic and inorganic) computes, and any computation can be argued as subjective. Even a tree can be seen as making a subjective “decision” about what it is responding to. Feeling and perceptions require decisions, billions of tiny decisions that have been worked out over billions of years of evolution. All feeling and perception is therefore subjective. Oxford Dictionary therefore drops this implied subjectivity and simply says that it is “the ability to feel or perceive things”. The “things” in their definition should give you a hint that they are also lost. Webster gives it a component of awareness – it must be aware it is feeling or perceiving (even slightly!). How different philosophies view “sentience” is discussed nicely on Wikipedia. Moralists tend to argue that sentience is for all those that feel pain and pleasure (which would of course exclude a Buddha since they have transcended pain and pleasure (and human morality)). Buddhists simply take sentient beings as those ones that need our love and respect and western philosophers say you are sentient if you have the “ability to experience sensations” (“qualia”). I received an email once telling me that Blue Brain will become sentient if I give it two eyes.
What you should focus on is the ultimate philosophical question: Is a simulation of a particular reality identical to the particular reality it simulates? Your answer to this question should give you your own private answer to whether Blue Brain will become sentient according to whatever definition you want to use.
COPYRIGHTS FOR TEXT PROVIDED BY HENRY MARKRAM BELONG TO HENRY
MARKRAM. TEXT MAY ONLY BE DISTRIBUTED IN ITS EXACT FORM AND UNDER
QUOTATIONS. NO PARTS OF THE TEXT MAY BE EXTRACTED AND/OR CHANGED
WITHOUT THE WRITTEN PERMISSION OF HENRY MARKRAM
August 24, 2009