In December 2008, a video post was published on Abovetopsecret.com with the title “DARPA & IBM building a ‘global brain’ ‘cognitive computer’ for ‘monitoring people’”. In this video, Dharmendra Modha, the leader of the IBM SyNAPSE project, talks about SyNAPSE.
This is an excerpt from the video:
“The quest is cognitive computing, which is about Engineering aspects of mind such as emotion, perception, sensation, cognition, emotion [again …], action, interaction, by reverse engineering the brain. And then, to deploy this technology by connecting to a vast array of sensors, billions, trillions of sensors, such as sight, hearing, taste, touch and smell, but even going further to non biological sensors, sensors such as monitoring the forest, sensors such as monitoring the ocean, sensors such as monitoring people, animals, organization, homes, cars, and to stream this vast amount of data in real time or near real time, to build a brain that can extract patterns, large scale invariant patterns from the sensory overload, and to act and respond to these data.”
But let’s turn to some of the comments in that thread:
– “….if a “global brain” were ever developed (and the researchers seem confident that it will be), SyNAPSE will inevitably become “self-aware” at some point.” [by whitewave]
– “Why should the likes of IBM not push the boundaries? It’s thinking like that which has got us the tech we have today.” [celticniall]
In the spirit of Neurdon, this post urges a clarification. What Modha is referring to is neither new nor easy.
NOT NEW: Research groups around the world have been working on reverse engineering the brain for decades. What SyNAPSE is about, and what IBM is good at, is building a hardware platform appropriate for these computational models. The models themselves, by the way, are not designed by IBM, which is not a leader in brain-based or cognitive modeling: IBM will team up with several universities to achieve this goal, as will HP and HRL. In fact, even reverse engineering “emotion”, one of the categories of mental function named by Modha, is…
NOT EASY: While achieving high performance on specific brain processes, such as visually perceiving and tracking objects, or segmenting and classifying speech sounds, is within the grasp of current systems, the integrative step of putting all these pieces together and “magically” getting a working brain is not only difficult, it is overall a loose concept. No current model is complex enough to tell us how cortical and subcortical structures and dynamics should be coupled. Another issue is the loosely defined goal Modha states. In other words, the human brain is a solution to a problem: the world around us. The architecture Modha is picturing is a solution to what world? If it is an array of sensors monitoring the forest, the architecture (or the algorithm…) might not look like a mammalian brain.
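To make the integration point concrete, here is a deliberately trivial sketch (my own illustration, not anything from SyNAPSE): two hypothetical, specialized “modules” each solve their narrow task, and the “integration” step is nothing but a naive merge of their outputs. Module names and data formats are invented for illustration only.

```python
def vision_module(frame):
    # Stand-in for an object tracker: report where the object is.
    return {"object_at": (frame["x"], frame["y"])}

def speech_module(sound):
    # Stand-in for a phoneme classifier: label the speech sound.
    return {"phoneme": "a" if sound["pitch"] > 100 else "o"}

def naive_integration(vision_out, speech_out):
    # "Putting it all together" here is just dictionary merging.
    # Nothing brain-like emerges from concatenating module outputs:
    # there is no shared state, no feedback, no coupled dynamics.
    merged = {}
    merged.update(vision_out)
    merged.update(speech_out)
    return merged

frame = {"x": 3, "y": 7}
sound = {"pitch": 120}
print(naive_integration(vision_module(frame), speech_module(sound)))
# -> {'object_at': (3, 7), 'phoneme': 'a'}
```

Each module “works” in isolation, yet the glue between them is exactly the part the grand vision leaves unspecified, and it is the part where no current model tells us what the coupling should look like.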