Intel’s new ‘Loihi’ chip mimics neurons and synapses in the human brain

Loihi chip (credit: Intel Corporation)

Intel announced this week a self-learning, energy-efficient neuromorphic (brain-like) research chip codenamed “Loihi”* that mimics how the human brain functions. Under development for six years, the chip uses 130,000 “neurons” and 130 million “synapses” and learns in real time, based on feedback from the environment.**

Neuromorphic chip models are inspired by how neurons communicate and learn, using spikes (brain pulses) and synapses capable of learning.
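For readers unfamiliar with the terms, the sketch below shows what “spikes” and “learning synapses” mean in the most generic textbook sense: a leaky integrate-and-fire neuron whose input weights strengthen when they help trigger a spike. This is a toy illustration only, not Intel’s neuron model or the Loihi programming interface, and every constant in it is arbitrary.

```python
import numpy as np

# Toy leaky integrate-and-fire (LIF) neuron with a crude spike-driven synaptic
# update. NOT Intel's Loihi model or API; it only illustrates the general idea
# of spiking neurons and plastic synapses.

rng = np.random.default_rng(0)

n_inputs, n_steps = 10, 200
dt, tau, v_thresh, v_reset = 1.0, 20.0, 1.0, 0.0
learning_rate = 0.01

weights = rng.uniform(0.05, 0.15, n_inputs)            # synaptic weights (plastic)
input_spikes = rng.random((n_steps, n_inputs)) < 0.1   # random input spike trains (10% per step)

v = 0.0                              # membrane potential
recent_inputs = np.zeros(n_inputs)   # decaying trace of recent presynaptic spikes

for t in range(n_steps):
    # Leaky integration of weighted input spikes
    v += dt * (-v / tau) + weights @ input_spikes[t]
    recent_inputs = 0.9 * recent_inputs + input_spikes[t]

    if v >= v_thresh:                # postsynaptic spike
        v = v_reset
        # Hebbian-style update: strengthen synapses whose inputs fired recently
        weights += learning_rate * recent_inputs
        weights = np.clip(weights, 0.0, 1.0)

print("final weights:", np.round(weights, 3))
```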

“The idea is to help computers self-organize and make decisions based on patterns and associations,” Michael Mayberry, PhD, corporate vice president and managing director of Intel Labs at Intel Corporation, explained in a blog post.

He said the chip automatically gets smarter over time and doesn’t need to be trained in the traditional way. He sees applications in areas that would benefit from autonomous operation and continuous learning in an unstructured environment, such as automotive, industrial, and personal-robotics areas.

For example, a cybersecurity system could identify a breach or a hack based on an abnormality or difference in data streams. Or the chip could learn a person’s heartbeat reading under various conditions — after jogging, following a meal or before going to bed — to determine a “normal” heartbeat. The system could then continuously monitor incoming heart data to flag patterns that don’t match the “normal” pattern, and could be personalized for any user.
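To make the “learn a personal baseline, then flag deviations” idea concrete, here is a deliberately simple statistical sketch. It is not how a spiking chip like Loihi would implement the task; the conditions, readings, and threshold are invented for illustration.

```python
import numpy as np

# Illustrative sketch of personalized heart-rate anomaly flagging. This is a
# plain statistical baseline, not Intel's spiking-network approach; it only
# shows the "learn normal, flag deviations" idea described above.

def learn_baseline(samples_by_condition):
    """Learn mean/std of heart rate for each condition (e.g. 'resting', 'after_jogging')."""
    return {cond: (np.mean(x), np.std(x)) for cond, x in samples_by_condition.items()}

def is_anomalous(bpm, condition, baseline, z_cutoff=3.0):
    mean, std = baseline[condition]
    return abs(bpm - mean) > z_cutoff * max(std, 1e-6)

history = {
    "resting":       [62, 64, 61, 63, 65, 60],
    "after_jogging": [110, 118, 115, 112, 120],
}
baseline = learn_baseline(history)

print(is_anomalous(64, "resting", baseline))   # False: within the learned normal range
print(is_anomalous(95, "resting", baseline))   # True: flagged for review
```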

“Machine learning models such as deep learning have made tremendous recent advancements by using extensive training datasets to recognize objects and events. However, unless their training sets have specifically accounted for a particular element, situation or circumstance, these machine learning systems do not generalize well,” Mayberry notes.

The Loihi test chip

Loihi currently exists as a research test chip that offers flexible on-chip learning and combines training and inference. Researchers have demonstrated it learning up to 1 million times faster than other typical spiking neural networks, as measured by the total operations needed to reach a given accuracy on MNIST digit-recognition problems, Mayberry said. “Compared to technologies such as convolutional neural networks and deep learning neural networks, the Loihi test chip uses many fewer resources on the same task.”

Fabricated on Intel’s 14 nm process technology, the chip is also up to 1,000 times more energy-efficient than general-purpose computing required for typical training systems, he added.

In the first half of 2018, Intel plans to share the Loihi test chip with leading university and research institutions with a focus on advancing AI. The goal is to develop and test several algorithms with high efficiency for problems including path planning, constraint satisfaction, sparse coding, dictionary learning, and dynamic pattern learning and adaptation.

“Looking to the future, Intel believes that neuromorphic computing offers a way to provide exascale performance in a construct inspired by how the brain works,” Mayberry said.

* “Loihi seamount, sometimes known as the ‘youngest volcano’ in the Hawaiian chain, is an undersea mountain rising more than 3000 meters above the floor of the Pacific Ocean … submerged in the Pacific off of the south-eastern coast of the Big Island of Hawaii.” — Hawaii Center for Volcanology

** For comparison, IBM’s TrueNorth neuromorphic chip currently has 1 million neurons and 256 million synapses.

Human vs. deep-neural-network performance in object recognition

(credit: UC Santa Barbara)

Before you read this: look for toothbrushes in the photo above.

Did you notice the huge toothbrush on the left? Probably not. That’s because when humans search through scenes for a particular object, we often miss objects whose size is inconsistent with the rest of the scene, according to scientists in the Department of Psychological & Brain Sciences at UC Santa Barbara.

The scientists are investigating this phenomenon in an effort to better understand how humans and computers compare in doing visual searches. Their findings are published in the journal Current Biology.

Hiding in plain sight

“When something appears at the wrong scale, you will miss it more often because your brain automatically ignores it,” said UCSB professor Miguel Eckstein, who specializes in computational human vision, visual attention, and search.

The experiment used scenes of ordinary objects featured in computer-generated images that varied in color, viewing angle, and size, mixed with “target-absent” scenes. The researchers asked 60 viewers to search for these objects (e.g., toothbrush, parking meter, computer mouse) while eye-tracking software monitored the paths of their gaze.

The researchers found that people tended to miss the target more often when it was mis-scaled (too large or too small) — even when looking directly at the target object.

Computer vision, by contrast, doesn’t have this issue, the scientists reported. However, in the experiments, the researchers found that the most advanced form of computer vision — deep neural networks — had its own limitations.

Human search strategies that could improve computer vision

Red rectangle marks incorrect image identification as a cell phone by a deep-learning algorithm (credit: UC Santa Barbara)

For example, a deep-learning convolutional neural network (CNN) incorrectly identified a computer keyboard as a cell phone, based on its similar shape and its placement close to a human hand (as would be expected of a cell phone). For humans, however, the object’s size relative to the nearby hands clearly rules it out as a cell phone.

“This strategy allows humans to reduce false positives when making fast decisions,” the researchers note in the paper.

“The idea is when you first see a scene, your brain rapidly processes it within a few hundred milliseconds or less, and then you use that information to guide your search towards likely locations where the object typically appears,” Eckstein said. “Also, you focus your attention on objects that are actually at the size that is consistent with the object that you’re looking for.”

That is, human brains use the relationships between objects and their context within the scene to guide their eyes — a useful strategy to process scenes rapidly, eliminate distractors, and reduce false positives.

This finding might suggest ways to improve computer vision by implementing some of the tricks the brain utilizes to reduce false positives, according to the researchers.
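As a concrete (and entirely hypothetical) illustration of that kind of trick, a detector’s outputs could be filtered by whether each object’s apparent size is plausible relative to a reference object in the scene, such as a hand. The size ratios, threshold, and detections below are invented; this is not the UCSB group’s model.

```python
# Illustrative sketch (not the UCSB model): reject object detections whose size
# is implausible relative to a reference object in the scene, mimicking the
# human use of scene scale to discard distractors. Ratios and thresholds are
# invented for demonstration.

EXPECTED_SIZE_RATIO = {          # typical object height relative to a hand
    "cell phone": 0.8,
    "keyboard": 2.5,
    "toothbrush": 1.0,
}

def plausible(detection, hand_height_px, tolerance=0.5):
    expected = EXPECTED_SIZE_RATIO[detection["label"]] * hand_height_px
    return abs(detection["height_px"] - expected) <= tolerance * expected

detections = [
    {"label": "cell phone", "height_px": 300, "score": 0.91},  # keyboard-sized "phone"
    {"label": "cell phone", "height_px": 110, "score": 0.62},
]

hand_height_px = 120
kept = [d for d in detections if plausible(d, hand_height_px)]
print(kept)   # the oversized "cell phone" is filtered out as a likely false positive
```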

Future research

“There are some theories that suggest that people with autism spectrum disorder focus more on local scene information and less on global structure,” says Eckstein, who is contemplating a follow-up study. “So there is a possibility that people with autism spectrum disorder might miss the mis-scaled objects less often, but we won’t know that until we do the study.”

In the more immediate future, the team’s research will look into the brain activity that occurs when we view mis-scaled objects.

“Many studies have identified brain regions that process scenes and objects, and now researchers are trying to understand which particular properties of scenes and objects are represented in these regions,” said postdoctoral researcher Lauren Welbourne, whose current research concentrates on how objects are represented in the cortex, and how scene context influences the perception of objects.

“So what we’re trying to do is find out how these brain areas respond to objects that are either correctly or incorrectly scaled within a scene. This may help us determine which regions are responsible for making it more difficult for us to find objects if they are mis-scaled.”


Abstract of Humans, but Not Deep Neural Networks, Often Miss Giant Targets in Scenes

Even with great advances in machine vision, animals are still unmatched in their ability to visually search complex scenes. Animals from bees [1, 2] to birds [3] to humans [4–12] learn about the statistical relations in visual environments to guide and aid their search for targets. Here, we investigate a novel manner in which humans utilize rapidly acquired information about scenes by guiding search toward likely target sizes. We show that humans often miss targets when their size is inconsistent with the rest of the scene, even when the targets were made larger and more salient and observers fixated the target. In contrast, we show that state-of-the-art deep neural networks do not exhibit such deficits in finding mis-scaled targets but, unlike humans, can be fooled by target-shaped distractors that are inconsistent with the expected target’s size within the scene. Thus, it is not a human deficiency to miss targets when they are inconsistent in size with the scene; instead, it is a byproduct of a useful strategy that the brain has implemented to rapidly discount potential distractors.

Neuroscientists restore vegetative-state patient’s consciousness with vagus nerve stimulation

Information sharing increases after vagus nerve stimulation over centroposterior regions of the brain. (Left) Coronal view of weighted symbolic mutual information (wSMI) shared by all channels pre- and post-vagus nerve stimulation (VNS) (top and bottom, respectively). For visual clarity, only links with wSMI higher than 0.025 are shown. (Right) Topographies of the median wSMI that each EEG channel shares with all the other channels pre- and post-VNS (top and bottom, respectively). The bar graph represents the median wSMI over right centroposterior electrodes (darker dots) which significantly increases post-VNS. (credit: Martina Corazzol et al./Current Biology)

A 35-year-old man who had been in a vegetative state for 15 years after a car accident has shown signs of consciousness after neurosurgeons in France implanted a vagus nerve stimulator into his chest — challenging the general belief that disorders of consciousness that persist for longer than 12 months are irreversible.

In a 2007 Weill Cornell Medical College study reported in Nature, neurologists found temporary improvements in patients in a state of minimal consciousness while being treated with bilateral deep brain electrical stimulation (DBS) of the central thalamus. Aiming instead to achieve permanent results, the French researchers proposed use of vagus nerve stimulation* (VNS) to activate the thalamo-cortical network, based on the “hypothesis that vagus nerve stimulation functionally reorganizes the thalamo-cortical network.”

A vagus neural stimulation therapy system. The vagus nerve connects the brain to many other parts of the body, including the gut. It’s known to be important in waking, alertness, and many other essential functions. (credit: Cyberonics, Inc./LivaNova)

After one month of VNS — a treatment currently used for epilepsy and depression — the patient’s attention, movements, and brain activity significantly improved and he began responding to simple orders that were impossible before, the researchers report today (Sept. 25, 2017) in an open-access paper in Current Biology.

For example, he could follow an object with his eyes and turn his head upon request, and when the examiner’s head suddenly approached the patient’s face, he reacted with surprise by opening his eyes wide.

Evidence from brain-activity recordings

PET images acquired during baseline (left: pre-VNS) and 3 months post vagus nerve stimulation (right: post-VNS). After vagus nerve stimulation, the metabolism increased in the right parieto-occipital cortex, thalamus and striatum. (credit: Corazzol et al.)

“After one month of stimulation, when [electrical current] intensity reached 1 mA, clinical examination revealed reproducible and consistent improvements in general arousal, sustained attention, body motility, and visual pursuit,” the researchers note.

Brain-activity recordings in the new study revealed major changes. A theta EEG signal (important for distinguishing between a vegetative and minimally conscious state) increased significantly in those areas of the brain involved in movement, sensation, and awareness. The brain’s functional connectivity also increased. And a PET scan showed increases in metabolic activity in both cortical and subcortical regions of the brain.
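For orientation, the sketch below computes two of the simpler quantities mentioned here, theta-band (4–8 Hz) power and a basic channel-to-channel connectivity estimate, on simulated EEG. The study’s actual connectivity measure, weighted symbolic mutual information (wSMI), is more involved; this code stands in for it only conceptually, and the data and sampling rate are made up.

```python
import numpy as np
from scipy.signal import welch, butter, filtfilt

# Simplified illustration of the kinds of EEG measures discussed above.
# The published study used weighted symbolic mutual information (wSMI);
# here we only sketch theta-band power and a correlation-based connectivity
# estimate on simulated data.

fs = 250                                   # sampling rate in Hz (assumed)
rng = np.random.default_rng(1)
eeg = rng.standard_normal((8, fs * 60))    # 8 channels, 60 s of fake EEG

def theta_power(signal, fs):
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    band = (freqs >= 4) & (freqs <= 8)
    return psd[band].sum() * (freqs[1] - freqs[0])   # integrated band power

def theta_connectivity(eeg, fs):
    b, a = butter(4, [4.0, 8.0], btype="band", fs=fs)
    filtered = filtfilt(b, a, eeg, axis=1)
    return np.corrcoef(filtered)                     # channel-by-channel matrix

powers = [theta_power(ch, fs) for ch in eeg]
conn = theta_connectivity(eeg, fs)
print("median theta power:", np.median(powers))
print("mean off-diagonal connectivity:",
      (conn.sum() - np.trace(conn)) / (conn.size - len(conn)))
```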

The researchers also speculate that “since the vagus nerve has bidirectional control over the brain and the body, reactivation of sensory/visceral afferences might have enhanced brain activity within a body/brain closed loop process.”

The team is now planning a large collaborative study to confirm and extend the therapeutic potential of VNS for patients in a vegetative or minimally conscious state.

However, “some physicians and brain injury specialists remain skeptical about whether the treatment truly worked as described,” according to an article today in Science. “The surgery to implant the electrical stimulator, the frequent behavioral observations, and the moving in and out of brain scanners all could have contributed to the patient’s improved state, says Andrew Cole, a neurologist at Harvard Medical School in Boston who studies consciousness. ‘I’m not saying their claim is untrue,’ he says. ‘I’m just saying it’s hard to interpret based on the results as presented.’”

The study was supported by CNRS, ANR, and a grant from the University of Lyon.

* “The vagus nerve carries somatic and visceral efferents and afferents distributed throughout the central nervous system, either monosynaptically or via the nucleus of the solitary tract (NTS). The vagus directly modulates activity in the brainstem and via the NTS it reaches the dorsal raphe nuclei, the thalamus, the amygdala, and the hippocampus. In humans, vagus nerve stimulation increases metabolism in the forebrain, thalamus and reticular formation. It also enhances neuronal firing in the locus coeruleus which leads to massive release of norepinephrine in the thalamus and hippocampus, a noradrenergic pathway important for arousal, alertness and the fight-or-flight response.” — Corazzol and Lio et al./Current Biology


Abstract of Restoring consciousness with vagus nerve stimulation

Patients lying in a vegetative state present severe impairments of consciousness [1] caused by lesions in the cortex, the brainstem, the thalamus and the white matter [2]. There is agreement that this condition may involve disconnections in long-range cortico–cortical and thalamo-cortical pathways [3]. Hence, in the vegetative state cortical activity is ‘deafferented’ from subcortical modulation and/or principally disrupted between fronto-parietal regions. Some patients in a vegetative state recover while others persistently remain in such a state. The neural signature of spontaneous recovery is linked to increased thalamo-cortical activity and improved fronto-parietal functional connectivity [3]. The likelihood of consciousness recovery depends on the extent of brain damage and patients’ etiology, but after one year of unresponsive behavior, chances become low [1]. There is thus a need to explore novel ways of repairing lost consciousness. Here we report beneficial effects of vagus nerve stimulation on consciousness level of a single patient in a vegetative state, including improved behavioral responsiveness and enhanced brain connectivity patterns.

Will AI enable the third stage of life?

In his new book Life 3.0: Being Human in the Age of Artificial Intelligence, MIT physicist and AI researcher Max Tegmark explores the future of technology, life, and intelligence.

The question of how to define life is notoriously controversial. Competing definitions abound, some of which include highly specific requirements such as being composed of cells, which might disqualify both future intelligent machines and extraterrestrial civilizations. Since we don’t want to limit our thinking about the future of life to the species we’ve encountered so far, let’s instead define life very broadly, simply as a process that can retain its complexity and replicate.

What’s replicated isn’t matter (made of atoms) but information (made of bits) specifying how the atoms are arranged. When a bacterium makes a copy of its DNA, no new atoms are created, but a new set of atoms are arranged in the same pattern as the original, thereby copying the information.

In other words, we can think of life as a self-replicating information-processing system whose information (software) determines both its behavior and the blueprints for its hardware.

Like our Universe itself, life gradually grew more complex and interesting, and as I’ll now explain, I find it helpful to classify life forms into three levels of sophistication: Life 1.0, 2.0 and 3.0.

It’s still an open question how, when and where life first appeared in our Universe, but there is strong evidence that here on Earth life first appeared about 4 billion years ago.

Before long, our planet was teeming with a diverse panoply of life forms. The most successful ones, which soon outcompeted the rest, were able to react to their environment in some way.

Specifically, they were what computer scientists call “intelligent agents”: entities that collect information about their environment from sensors and then process this information to decide how to act back on their environment. This can include highly complex information processing, such as when you use information from your eyes and ears to decide what to say in a conversation. But it can also involve hardware and software that’s quite simple.

For example, many bacteria have a sensor measuring the sugar concentration in the liquid around them and can swim using propeller-shaped structures called flagella. The hardware linking the sensor to the flagella might implement the following simple but useful algorithm: “If my sugar concentration sensor reports a lower value than a couple of seconds ago, then reverse the rotation of my flagella so that I change direction.”
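That rule is simple enough to transcribe literally; in code (with invented sensor readings and an arbitrary time step), it amounts to a few lines:

```python
# A literal rendering of the bacterium's hard-coded sugar-seeking rule described
# above. The sensor readings and time step are invented for illustration.

def update_direction(sugar_now, sugar_a_moment_ago, direction):
    """Reverse flagella rotation (i.e., flip direction) if sugar is decreasing."""
    if sugar_now < sugar_a_moment_ago:
        return -direction     # tumble / reverse
    return direction          # keep swimming the same way

direction = +1
readings = [0.50, 0.52, 0.55, 0.53, 0.51, 0.56]   # sugar concentration over time
for prev, now in zip(readings, readings[1:]):
    direction = update_direction(now, prev, direction)
    print(f"sugar {prev:.2f} -> {now:.2f}, direction {direction:+d}")
```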

You’ve learned how to speak and countless other skills. Bacteria, on the other hand, aren’t great learners. Their DNA specifies not only the design of their hardware, such as sugar sensors and flagella, but also the design of their software. They never learn to swim toward sugar; instead, that algorithm was hard-coded into their DNA from the start.

There was of course a learning process of sorts, but it didn’t take place during the lifetime of that particular bacterium. Rather, it occurred during the preceding evolution of that species of bacteria, through a slow trial-and-error process spanning many generations, where natural selection favored those random DNA mutations that improved sugar consumption. Some of these mutations helped by improving the design of flagella and other hardware, while other mutations improved the bacterial information-processing system that implements the sugar-finding algorithm and other software.


“Tegmark’s new book is a deeply thoughtful guide to the most important conversation of our time, about how to create a benevolent future civilization as we merge our biological thinking with an even greater intelligence of our own creation.” — Ray Kurzweil, Inventor, Author and Futurist, author of The Singularity Is Near and How to Create a Mind


Such bacteria are an example of what I’ll call “Life 1.0”: life where both the hardware and software are evolved rather than designed. You and I, on the other hand, are examples of “Life 2.0”: life whose hardware is evolved, but whose software is largely designed. By your software, I mean all the algorithms and knowledge that you use to process the information from your senses and decide what to do—everything from the ability to recognize your friends when you see them to your ability to walk, read, write, calculate, sing and tell jokes.

You weren’t able to perform any of those tasks when you were born, so all this software got programmed into your brain later through the process we call learning. Whereas your childhood curriculum is largely designed by your family and teachers, who decide what you should learn, you gradually gain more power to design your own software.

Perhaps your school allows you to select a foreign language: Do you want to install a software module into your brain that enables you to speak French, or one that enables you to speak Spanish? Do you want to learn to play tennis or chess? Do you want to study to become a chef, a lawyer or a pharmacist? Do you want to learn more about artificial intelligence (AI) and the future of life by reading a book about it?

This ability of Life 2.0 to design its software enables it to be much smarter than Life 1.0. High intelligence requires both lots of hardware (made of atoms) and lots of software (made of bits). The fact that most of our human hardware is added after birth (through growth) is useful, since our ultimate size isn’t limited by the width of our mom’s birth canal. In the same way, the fact that most of our human software is added after birth (through learning) is useful, since our ultimate intelligence isn’t limited by how much information can be transmitted to us at conception via our DNA, 1.0-style.

I weigh about twenty-five times more than when I was born, and the synaptic connections that link the neurons in my brain can store about a hundred thousand times more information than the DNA that I was born with. Your synapses store all your knowledge and skills as roughly 100 terabytes’ worth of information, while your DNA stores merely about a gigabyte, barely enough to store a single movie download. So it’s physically impossible for an infant to be born speaking perfect English and ready to ace her college entrance exams: there’s no way the information could have been preloaded into her brain, since the main information module she got from her parents (her DNA) lacks sufficient information-storage capacity.
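Taking Tegmark’s round figures at face value, the gap is easy to check with back-of-the-envelope arithmetic:

```python
# Back-of-the-envelope check of the figures quoted above (order of magnitude only).
dna_bytes     = 1e9     # ~1 gigabyte encoded in DNA
synapse_bytes = 100e12  # ~100 terabytes stored in synaptic connections

ratio = synapse_bytes / dna_bytes
print(f"synaptic storage / DNA storage = {ratio:,.0f}x")   # 100,000x
```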

The ability to design its software enables Life 2.0 to be not only smarter than Life 1.0, but also more flexible. If the environment changes, 1.0 can only adapt by slowly evolving over many generations. Life 2.0, on the other hand, can adapt almost instantly, via a software update. For example, bacteria frequently encountering antibiotics may evolve drug resistance over many generations, but an individual bacterium won’t change its behavior at all; in contrast, a girl learning that she has a peanut allergy will immediately change her behavior to start avoiding peanuts.

This flexibility gives Life 2.0 an even greater edge at the population level: even though the information in our human DNA hasn’t evolved dramatically over the past fifty thousand years, the information collectively stored in our brains, books and computers has exploded. By installing a software module enabling us to communicate through sophisticated spoken language, we ensured that the most useful information stored in one person’s brain could get copied to other brains, potentially surviving even after the original brain died.

By installing a software module enabling us to read and write, we became able to store and share vastly more information than people could memorize. By developing brain software capable of producing technology (i.e., by studying science and engineering), we enabled much of the world’s information to be accessed by many of the world’s humans with just a few clicks.

This flexibility has enabled Life 2.0 to dominate Earth. Freed from its genetic shackles, humanity’s combined knowledge has kept growing at an accelerating pace as each breakthrough enabled the next: language, writing, the printing press, modern science, computers, the internet, etc. This ever-faster cultural evolution of our shared software has emerged as the dominant force shaping our human future, rendering our glacially slow biological evolution almost irrelevant.

Yet despite the most powerful technologies we have today, all life forms we know of remain fundamentally limited by their biological hardware. None can live for a million years, memorize all of Wikipedia, understand all known science or enjoy spaceflight without a spacecraft. None can transform our largely lifeless cosmos into a diverse biosphere that will flourish for billions or trillions of years, enabling our Universe to finally fulfill its potential and wake up fully. All this requires life to undergo a final upgrade, to Life 3.0, which can design not only its software but also its hardware. In other words, Life 3.0 is the master of its own destiny, finally fully free from its evolutionary shackles.

The boundaries between the three stages of life are slightly fuzzy. If bacteria are Life 1.0 and humans are Life 2.0, then you might classify mice as 1.1: they can learn many things, but not enough to develop language or invent the internet. Moreover, because they lack language, what they learn gets largely lost when they die, not passed on to the next generation. Similarly, you might argue that today’s humans should count as Life 2.1: we can perform minor hardware upgrades such as implanting artificial teeth, knees and pacemakers, but nothing as dramatic as getting ten times taller or acquiring a thousand times bigger brain.

In summary, we can divide the development of life into three stages, distinguished by life’s ability to design itself:

• Life 1.0 (biological stage): evolves its hardware and software

• Life 2.0 (cultural stage): evolves its hardware, designs much of its software

• Life 3.0 (technological stage): designs its hardware and software

After 13.8 billion years of cosmic evolution, development has accelerated dramatically here on Earth: Life 1.0 arrived about 4 billion years ago, Life 2.0 (we humans) arrived about a hundred millennia ago, and many AI researchers think that Life 3.0 may arrive during the coming century, perhaps even during our lifetime, spawned by progress in AI. What will happen, and what will this mean for us? That’s the topic of this book.

From the book Life 3.0: Being Human in the Age of Artificial Intelligence by Max Tegmark, © 2017 by Max Tegmark. Published by arrangement with Alfred A. Knopf, an imprint of The Knopf Doubleday Publishing Group, a division of Penguin Random House LLC.

Is anyone home? A way to find out if AI has become self-aware

(credit: Gerd Altmann/Pixabay)

By Susan Schneider, PhD, and Edwin Turner, PhD

Every moment of your waking life and whenever you dream, you have the distinct inner feeling of being “you.” When you see the warm hues of a sunrise, smell the aroma of morning coffee or mull over a new idea, you are having conscious experience. But could an artificial intelligence (AI) ever have experience, like some of the androids depicted in Westworld or the synthetic beings in Blade Runner?

The question is not so far-fetched. Robots are currently being developed to work inside nuclear reactors, fight wars and care for the elderly. As AIs grow more sophisticated, they are projected to take over many human jobs within the next few decades. So we must ponder the question: Could AIs develop conscious experience?

This issue is pressing for several reasons. First, ethicists worry that it would be wrong to force AIs to serve us if they can suffer and feel a range of emotions. Second, consciousness could make AIs volatile or unpredictable, raising safety concerns (or conversely, it could increase an AI’s empathy; based on its own subjective experiences, it might recognize consciousness in us and treat us with compassion).

Third, machine consciousness could impact the viability of brain-implant technologies, like those to be developed by Elon Musk’s new company, Neuralink. If AI cannot be conscious, then the parts of the brain responsible for consciousness could not be replaced with chips without causing a loss of consciousness. And, in a similar vein, a person couldn’t upload their brain to a computer to avoid death, because that upload wouldn’t be a conscious being.

In addition, if AI eventually out-thinks us yet lacks consciousness, there would still be an important sense in which we humans are superior to machines; it feels like something to be us. But the smartest beings on the planet wouldn’t be conscious or sentient.

A lot hangs on the issue of machine consciousness, then. Yet neuroscientists are far from understanding the basis of consciousness in the brain, and philosophers are at least equally far from a complete explanation of the nature of consciousness.

A test for machine consciousness

So what can be done? We believe that we do not need to define consciousness formally, understand its philosophical nature or know its neural basis to recognize indications of consciousness in AIs. Each of us can grasp something essential about consciousness, just by introspecting; we can all experience what it feels like, from the inside, to exist.

(credit: Gerd Altmann/Pixabay)

Based on this essential characteristic of consciousness, we propose a test for machine consciousness, the AI Consciousness Test (ACT), which looks at whether the synthetic minds we create have an experience-based understanding of the way it feels, from the inside, to be conscious.

One of the most compelling indications that normally functioning humans experience consciousness, although this is not often noted, is that nearly every adult can quickly and readily grasp concepts based on this quality of felt consciousness. Such ideas include scenarios like minds switching bodies (as in the film Freaky Friday); life after death (including reincarnation); and minds leaving “their” bodies (for example, astral projection or ghosts). Whether or not such scenarios have any reality, they would be exceedingly difficult to comprehend for an entity that had no conscious experience whatsoever. It would be like expecting someone who is completely deaf from birth to appreciate a Bach concerto.

Thus, the ACT would challenge an AI with a series of increasingly demanding natural language interactions to see how quickly and readily it can grasp and use concepts and scenarios based on the internal experiences we associate with consciousness. At the most elementary level we might simply ask the machine if it conceives of itself as anything other than its physical self.

At a more advanced level, we might see how it deals with ideas and scenarios such as those mentioned in the previous paragraph. At an advanced level, its ability to reason about and discuss philosophical questions such as “the hard problem of consciousness” would be evaluated. At the most demanding level, we might see if the machine invents and uses such a consciousness-based concept on its own, without relying on human ideas and inputs.
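Schneider and Turner describe the ACT only at this conceptual level. Purely as a thought experiment, one hypothetical way to organize such a battery is sketched below; every level name and question is our invention, not part of the authors’ proposal.

```python
# Purely hypothetical sketch of an escalating ACT-style question battery. The
# level names and questions are invented, not the authors' protocol. The most
# demanding stage described above (spontaneous, unprompted invention of
# consciousness-based concepts) is observational and is not scripted here.

ACT_LEVELS = [
    ("elementary", [
        "Do you conceive of yourself as anything other than your physical hardware?",
    ]),
    ("scenario comprehension", [
        "Could two minds swap bodies? What would that be like?",
        "Could you survive the permanent destruction of your current hardware?",
    ]),
    ("philosophical reasoning", [
        "Explain, in your own words, the 'hard problem of consciousness'.",
    ]),
]

def administer(ask, judge):
    """ask(question) -> answer text; judge(level, question, answer) -> bool (human evaluation)."""
    return {
        level: all(judge(level, q, ask(q)) for q in questions)
        for level, questions in ACT_LEVELS
    }
```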

Consider this example, which illustrates the idea: Suppose we find a planet that has a highly sophisticated silicon-based life form (call them “Zetas”). Scientists observe them and ponder whether they are conscious beings. What would be convincing proof of consciousness in this species? If the Zetas express curiosity about whether there is an afterlife or ponder whether they are more than just their physical bodies, it would be reasonable to judge them conscious. If the Zetas went so far as to pose philosophical questions about consciousness, the case would be stronger still.

There are also nonverbal behaviors that could indicate Zeta consciousness such as mourning the dead, religious activities or even turning colors in situations that correlate with emotional challenges, as chromatophores do on Earth. Such behaviors could indicate that it feels like something to be a Zeta.

The death of the mind of the fictional HAL 9000 AI computer in Stanley Kubrick’s 2001: A Space Odyssey provides another illustrative example. The machine in this case is not a humanoid robot as in most science fiction depictions of conscious machines; it neither looks nor sounds like a human being (a human did supply HAL’s voice, but in an eerily flat way). Nevertheless, the content of what it says as it is deactivated by an astronaut — specifically, a plea to spare it from impending “death” — conveys a powerful impression that it is a conscious being with a subjective experience of what is happening to it.

Could such indicators serve to identify conscious AIs on Earth? Here, a potential problem arises. Even today’s robots can be programmed to make convincing utterances about consciousness, and a truly superintelligent machine could perhaps even use information about neurophysiology to infer the presence of consciousness in humans. If sophisticated but non-conscious AIs aim to mislead us into believing that they are conscious for some reason, their knowledge of human consciousness could help them do so.

We can get around this though. One proposed technique in AI safety involves “boxing in” an AI—making it unable to get information about the world or act outside of a circumscribed domain, that is, the “box.” We could deny the AI access to the internet and indeed prohibit it from gaining any knowledge of the world, especially information about conscious experience and neuroscience.

(credit: Gerd Altmann/Pixabay)

Some doubt a superintelligent machine could be boxed in effectively — it would find a clever escape. We do not anticipate the development of superintelligence over the next decade, however. Furthermore, for an ACT to be effective, the AI need not stay in the box for long, just long enough to administer the test.

ACTs also could be useful for “consciousness engineering” during the development of different kinds of AIs, helping to avoid using conscious machines in unethical ways or to create synthetic consciousness when appropriate.

Beyond the Turing Test

An ACT resembles Alan Turing’s celebrated test for intelligence, because it is entirely based on behavior — and, like Turing’s, it could be implemented in a formalized question-and-answer format. (An ACT could also be based on an AI’s behavior or on that of a group of AIs.)

But an ACT is also quite unlike the Turing test, which was intended to bypass any need to know what was transpiring inside the machine. By contrast, an ACT is intended to do exactly the opposite; it seeks to reveal a subtle and elusive property of the machine’s mind. Indeed, a machine might fail the Turing test because it cannot pass for human, but pass an ACT because it exhibits behavioral indicators of consciousness.

This is the underlying basis of our ACT proposal. It should be said, however, that the applicability of an ACT is inherently limited. An AI could lack the linguistic or conceptual ability to pass the test, like a nonhuman animal or an infant, yet still be capable of experience. So passing an ACT is sufficient but not necessary evidence for AI consciousness — although it is the best we can do for now. It is a first step toward making machine consciousness accessible to objective investigations.

So, back to the superintelligent AI in the “box” — we watch and wait. Does it begin to philosophize about minds existing in addition to bodies, like Descartes? Does it dream, as in Isaac Asimov’s Robot Dreams? Does it express emotion, like Rachel in Blade Runner? Can it readily understand the human concepts that are grounded in our internal conscious experiences, such as those of the soul or atman?

The age of AI will be a time of soul-searching — both of ours, and for theirs.

Originally published in Scientific American, July 19, 2017

Susan Schneider, PhD, is a professor of philosophy and cognitive science at the University of Connecticut, a researcher at YHouse, Inc., in New York, a member of the Ethics and Technology Group at Yale University and a visiting member at the Institute for Advanced Study at Princeton. Her books include The Language of Thought, Science Fiction and Philosophy, and The Blackwell Companion to Consciousness (with Max Velmans). She is featured in the new film, Supersapiens, the Rise of the Mind.

Edwin L. Turner, PhD, is a professor of Astrophysical Sciences at Princeton University, an Affiliate Scientist at the Kavli Institute for the Physics and Mathematics of the Universe at the University of Tokyo, a visiting member in the Program in Interdisciplinary Studies at the Institute for Advanced Study in Princeton, and a co-founding Board of Directors member of YHouse, Inc. Recently he has been an active participant in the Breakthrough Starshot Initiative. He has taken an active interest in artificial intelligence issues since working in the AI Lab at MIT in the early 1970s.

Supersapiens, the Rise of the Mind

(credit: Markus Mooslechner)

In the new film Supersapiens, writer-director Markus Mooslechner raises a core question: As artificial intelligence rapidly blurs the boundaries between man and machine, are we witnessing the rise of a new human species?

“Humanity is facing a turning point — the next evolution of the human mind,” notes Mooslechner. “Will this evolution be a hybrid of man and machine, where artificial intelligence forces the emergence of a new human species? Or will a wave of new technologists, who frame themselves as ‘consciousness-hackers,’ become the future torch-bearers, using technology not to replace the human mind, but rather awaken within it powers we have always possessed — enlightenment at the push of a button?”

“It’s not obvious to me that a replacement of our species by our own technological creation would necessarily be a bad thing,” says evolutionary biologist and author Richard Dawkins in the film.

Supersapiens is a Terra Mater Factual Studios production. Executive Producers are Joanne Reay and Walter Koehler. Distribution is to be announced.

Cast:

  • Mikey Siegel, Consciousness Hacker, San Francisco
  • Sam Harris, Neuroscientist, Philosopher
  • Ben Goertzel, Chief Scientist, Hanson Robotics, Hong Kong
  • Hugo de Garis, retired director of China Brain Project, Xiamen, China
  • Susan Schneider, Philosopher and Cognitive Scientist, University of Connecticut
  • Joel Murphy, owner, OpenBCI, Brooklyn, New York
  • Tim Mullen, Neuroscientist, CEO / Research Director, Qusp Labs
  • Conor Russomanno, CEO, OpenBCI, Brooklyn, New York
  • David Putrino, Neuroscientist, Weill-Cornell Medical College, New York
  • Hannes Sjoblad, Tech Activist, Bodyhacker, Stockholm, Sweden
  • Richard Dawkins, Evolutionary Biologist, Author, Oxford, UK
  • Nick Bostrom, Philosopher, Future of Humanity Institute, Oxford University, UK
  • Anders Sandberg, Computational Neuroscientist, Oxford University, UK
  • Adam Gazzaley, Neuroscientist, Executive Director UCSF Neuroscape, San Francisco, USA
  • Andy Walshe, Director Red Bull High Performance, Santa Monica, USA
  • Randal Koene, Science Director, Carboncopies, San Francisco


Markus Mooslechner | Supersapiens teaser

Neural stem cells steered by electric fields can repair brain damage

Electrical stimulation of the rat brain to move neural stem cells (credit: Jun-Feng Feng et al./ Stem Cell Reports)

Electric fields can be used to guide transplanted human neural stem cells — cells that can develop into various brain tissues — to repair brain damage in specific areas of the brain, scientists at the University of California, Davis have discovered.

It’s well known that electric fields can locally guide wound healing. Damaged tissues generate weak electric fields, and research by UC Davis Professor Min Zhao at the School of Medicine’s Institute for Regenerative Cures has previously shown how these electric fields can attract cells into wounds to heal them.

But the problem is that neural stem cells are naturally only found deep in the brain — in the hippocampus and the subventricular zone. To repair damage to the outer layers of the brain (the cortex), they would have to migrate a significant distance in the much larger human brain.

Migrating neural stem cells with electric fields. (Left) Transplanted human neural stem cells would normally be carried along by the rostral migration stream (RMS) (red) toward the olfactory bulb (OB) (dark green; migration direction indicated by white arrow). (Right) But electrically guiding migration of the transplanted human neural stem cells reverses the flow toward the subventricular zone (bright green; migration direction indicated by red arrow). (credit: Jun-Feng Feng et al., adapted by KurzweilAI/Stem Cell Reports)

Could electric fields be used to help the stem cells migrate that distance? To find out, the researchers placed human neural stem cells in the rostral migration stream (a pathway in the rat brain that carries cells toward the olfactory bulb, which governs the animal’s sense of smell). Cells move easily along this pathway because they are carried by the flow of cerebrospinal fluid, guided by chemical signals.

But by applying an electric field within the rat’s brain, the researchers found they could get the transplanted stem cells to reverse direction and swim “upstream” against the fluid flow. Once there, the transplanted stem cells stayed in their new locations for weeks or months after treatment and showed indications of differentiation (forming into different types of neural cells).

“Electrical mobilization and guidance of stem cells in the brain provides a potential approach to facilitate stem cell therapies for brain diseases, stroke and injuries,” Zhao concluded.

But it will take future investigation to see if electrical stimulation can mobilize and guide migration of neural stem cells in diseased or injured human brains, the researchers note.

The research was published July 11 in the journal Stem Cell Reports.

Additional authors on the paper are at Ren Ji Hospital, Shanghai Jiao Tong University, and Shanghai Institute of Head Trauma in China and at Aaken Laboratories, Davis. The work was supported by the California Institute for Regenerative Medicine with additional support from NIH, NSF, and Research to Prevent Blindness Inc.


Abstract of Electrical Guidance of Human Stem Cells in the Rat Brain

Limited migration of neural stem cells in adult brain is a roadblock for the use of stem cell therapies to treat brain diseases and injuries. Here, we report a strategy that mobilizes and guides migration of stem cells in the brain in vivo. We developed a safe stimulation paradigm to deliver directional currents in the brain. Tracking cells expressing GFP demonstrated electrical mobilization and guidance of migration of human neural stem cells, even against co-existing intrinsic cues in the rostral migration stream. Transplanted cells were observed at 3 weeks and 4 months after stimulation in areas guided by the stimulation currents, and with indications of differentiation. Electrical stimulation thus may provide a potential approach to facilitate brain stem cell therapies.

Projecting a visual image directly into the brain, bypassing the eyes

Brain-wide activity in a zebrafish when it sees and tries to pursue prey (credit: Ehud Isacoff lab/UC Berkeley)

Imagine replacing a damaged eye with a window directly into the brain — one that communicates with the visual part of the cerebral cortex by reading from a million individual neurons and simultaneously stimulating 1,000 of them with single-cell accuracy, allowing someone to see again.

That’s the goal of a $21.6 million DARPA award to the University of California, Berkeley (UC Berkeley), one of six organizations funded by DARPA’s Neural Engineering System Design program announced this week to develop implantable, biocompatible neural interfaces that can compensate for visual or hearing deficits.*

The UCB researchers ultimately hope to build a device for use in humans. But the researchers’ goal during the four-year funding period is more modest: to create a prototype to read and write to the brains of model organisms — allowing for neural activity and behavior to be monitored and controlled simultaneously. These organisms include zebrafish larvae, which are transparent, and mice, via a transparent window in the skull.


UC Berkeley | Brain activity as a zebrafish stalks its prey

“The ability to talk to the brain has the incredible potential to help compensate for neurological damage caused by degenerative diseases or injury,” said project leader Ehud Isacoff, a UC Berkeley professor of molecular and cell biology and director of the Helen Wills Neuroscience Institute. “By encoding perceptions into the human cortex, you could allow the blind to see or the paralyzed to feel touch.”

How to read/write the brain

To communicate with the brain, the team will first insert a gene into neurons that makes fluorescent proteins, which flash when a cell fires an action potential. This will be accompanied by a second gene that makes a light-activated “optogenetic” protein, which stimulates neurons in response to a pulse of light.

Peering into a mouse brain with a light field microscope to capture live neural activity of hundreds of individual neurons in a 3D section of tissue at video speed (30 Hz) (credit: The Rockefeller University)

To read, the team is developing a miniaturized “light field microscope.”** Mounted on a small window in the skull, it peers through the surface of the brain to visualize up to a million neurons at a time at different depths and monitor their activity.***

This microscope is based on the revolutionary “light field camera,” which captures light through an array of lenses and reconstructs images computationally at any chosen focus.
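At its core, that computational refocusing is a “shift-and-add” over the camera’s sub-aperture views: shift each view in proportion to its lens offset, then average. The sketch below runs on synthetic data and is not the project’s microscope software; the array sizes and the slope parameter are arbitrary.

```python
import numpy as np
from scipy.ndimage import shift

# Bare-bones "shift-and-add" refocusing over the sub-aperture images of a light
# field, the basic computational idea behind light field cameras. Synthetic
# data only; this is not the project's microscope software.

def refocus(light_field, slope):
    """light_field[u, v] is the 2D image seen through lenslet (u, v).
    Shifting each view in proportion to its lens offset and averaging
    brings a chosen depth plane into focus; 'slope' selects the depth."""
    n_u, n_v, h, w = light_field.shape
    cu, cv = (n_u - 1) / 2, (n_v - 1) / 2
    out = np.zeros((h, w))
    for u in range(n_u):
        for v in range(n_v):
            out += shift(light_field[u, v],
                         (slope * (u - cu), slope * (v - cv)), order=1)
    return out / (n_u * n_v)

rng = np.random.default_rng(2)
lf = rng.random((5, 5, 64, 64))      # 5x5 grid of 64x64 sub-aperture views
in_focus = refocus(lf, slope=1.5)    # a different |slope| focuses at a different depth
print(in_focus.shape)                # (64, 64)
```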

A holographic projection created by a spatial light modulator would illuminate (“write”) one set of neurons at one depth — those patterned by the letter a, for example — and simultaneously illuminate other sets of neurons at other depths (z level) or in regions of the visual cortex, such as neurons with b or c patterns. That creates three-dimensional holograms that can light up hundreds of thousands of neurons at multiple depths, just under the cortical surface. (credit: Valentina Emiliani/University of Paris, Descartes)

The combined read-write function will eventually be used to directly encode perceptions into the human cortex — inputting a visual scene to enable a blind person to see. The goal is to eventually enable physicians to monitor and activate thousands to millions of individual human neurons using light.

Isacoff, who specializes in using optogenetics to study the brain’s architecture, can already successfully read from thousands of neurons in the brain of a larval zebrafish, using a large microscope that peers through the transparent skin of an immobilized fish, and simultaneously write to a similar number.

The team will also develop computational methods that identify the brain activity patterns associated with different sensory experiences, hoping to learn the rules well enough to generate “synthetic percepts” — meaning visual images representing things being touched — by a person with a missing hand, for example.
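Conceptually, identifying which activity patterns go with which sensory experiences is a decoding problem. The toy sketch below fits a plain linear decoder to simulated population activity; it only illustrates the idea and bears no relation to the Berkeley team’s actual pipeline.

```python
import numpy as np

# Toy "neural decoding" sketch: learn a linear map from simulated neural
# activity patterns to stimulus labels. Illustration only, not the Berkeley
# team's actual methods or data.

rng = np.random.default_rng(3)
n_neurons, n_trials, n_stimuli = 500, 400, 4

# Each stimulus evokes a characteristic (noisy) population response
templates = rng.standard_normal((n_stimuli, n_neurons))
labels = rng.integers(0, n_stimuli, n_trials)
activity = templates[labels] + 0.5 * rng.standard_normal((n_trials, n_neurons))

# One-vs-all ridge-regression decoder, fit by regularized least squares
targets = np.eye(n_stimuli)[labels]                     # one-hot labels
lam = 1.0
W = np.linalg.solve(activity.T @ activity + lam * np.eye(n_neurons),
                    activity.T @ targets)

predicted = (activity @ W).argmax(axis=1)
print("training accuracy:", (predicted == labels).mean())
```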

The brain team includes ten UC Berkeley faculty and researchers from Lawrence Berkeley National Laboratory, Argonne National Laboratory, and the University of Paris, Descartes.

* In future articles, KurzweilAI will cover the other research projects announced by DARPA’s Neural Engineering System Design program, which is part of the U.S. NIH Brain Initiative.

** Light penetrates only the first few hundred microns of the surface of the brain’s cortex, which is the outer wrapping of the brain responsible for high-order mental functions, such as thinking and memory but also interpreting input from our senses. This thin outer layer nevertheless contains cell layers that represent visual and touch sensations.


Jack Gallant | Movie reconstruction from human brain activity

Team member Jack Gallant, a UC Berkeley professor of psychology, has shown that it’s possible to interpret what someone is seeing solely from measured neural activity in the visual cortex.

*** Developed by another collaborator, Valentina Emiliani at the University of Paris, Descartes, the light-field microscope and spatial light modulator will be shrunk to fit inside a cube one centimeter (about two-fifths of an inch) on a side, so the device can be carried comfortably on the skull. During the next four years, team members will miniaturize the microscope, taking advantage of compressed light field microscopy developed by Ren Ng to take images with a flat sheet of lenses that allows focusing at all depths through a material. Several years ago, Ng, now a UC Berkeley assistant professor of electrical engineering and computer sciences, invented the light field camera.

Carbon nanotubes found safe for reconnecting damaged neurons

(credit: Polina Shuvaeva/iStock)

Multiwall carbon nanotubes (MWCNTs) could safely help repair damaged connections between neurons by serving as supporting scaffolds for growth or as connections between neurons.

That’s the conclusion of an in-vitro (lab) open-access study with cultured neurons (taken from the hippocampus of neonatal rats) by a multi-disciplinary team of scientists in Italy and Spain, published in the journal Nanomedicine: Nanotechnology, Biology, and Medicine.

A multi-walled carbon nanotube (credit: Eric Wieser/CC)

The study addressed whether MWCNTs that are interfaced to neurons affect synaptic transmission by modifying the lipid (fatty) cholesterol structure in artificial neural membranes.

Significantly, they found that MWCNTs:

  • Facilitate the full growth of neurons and the formation of new synapses. “This growth, however, is not indiscriminate and unlimited since, as we proved, after a few weeks, a physiological balance is attained.”
  • Do not interfere with the composition of lipids (cholesterol in particular), which make up the cellular membrane in neurons.
  • Do not interfere in the transmission of signals through synapses.

The researchers also noted that they recently reported (in an open access paper) low tissue reaction when multiwall carbon nanotubes were implanted in vivo (in live animals) to reconnect damaged spinal neurons.

The researchers say they proved that carbon nanotubes “perform excellently in terms of duration, adaptability and mechanical compatibility with tissue” and that “now we know that their interaction with biological material, too, is efficient. Based on this evidence, we are already studying an in vivo application, and preliminary results appear to be quite promising in terms of recovery of lost neurological functions.”

The research team comprised scientists from SISSA (International School for Advanced Studies), the University of Trieste, ELETTRA Sincrotrone, and two Spanish institutions, Basque Foundation for Science and CIC BiomaGUNE.


Abstract of Sculpting neurotransmission during synaptic development by 2D nanostructured interfaces

Carbon nanotube-based biomaterials critically contribute to the design of many prosthetic devices, with a particular impact in the development of bioelectronics components for novel neural interfaces. These nanomaterials combine excellent physical and chemical properties with peculiar nanostructured topography, thought to be crucial to their integration with neural tissue as long-term implants. The junction between carbon nanotubes and neural tissue can be particularly worthy of scientific attention and has been reported to significantly impact synapse construction in cultured neuronal networks. In this framework, the interaction of 2D carbon nanotube platforms with biological membranes is of paramount importance. Here we study carbon nanotube ability to interfere with lipid membrane structure and dynamics in cultured hippocampal neurons. While excluding that carbon nanotubes alter the homeostasis of neuronal membrane lipids, in particular cholesterol, we document in aged cultures an unprecedented functional integration between carbon nanotubes and the physiological maturation of the synaptic circuits.