Is anyone home? A way to find out if AI has become self-aware

(credit: Gerd Altmann/Pixabay)

By Susan Schneider, PhD, and Edwin Turner, PhD

Every moment of your waking life and whenever you dream, you have the distinct inner feeling of being “you.” When you see the warm hues of a sunrise, smell the aroma of morning coffee or mull over a new idea, you are having conscious experience. But could an artificial intelligence (AI) ever have experience, like some of the androids depicted in Westworld or the synthetic beings in Blade Runner?

The question is not so far-fetched. Robots are currently being developed to work inside nuclear reactors, fight wars and care for the elderly. As AIs grow more sophisticated, they are projected to take over many human jobs within the next few decades. So we must ponder the question: Could AIs develop conscious experience?

This issue is pressing for several reasons. First, ethicists worry that it would be wrong to force AIs to serve us if they can suffer and feel a range of emotions. Second, consciousness could make AIs volatile or unpredictable, raising safety concerns (or conversely, it could increase an AI’s empathy; based on its own subjective experiences, it might recognize consciousness in us and treat us with compassion).

Third, machine consciousness could impact the viability of brain-implant technologies, like those to be developed by Elon Musk’s new company, Neuralink. If AI cannot be conscious, then the parts of the brain responsible for consciousness could not be replaced with chips without causing a loss of consciousness. And, in a similar vein, a person couldn’t upload their brain to a computer to avoid death, because that upload wouldn’t be a conscious being.

In addition, if AI eventually out-thinks us yet lacks consciousness, there would still be an important sense in which we humans are superior to machines; it feels like something to be us. But the smartest beings on the planet wouldn’t be conscious or sentient.

A lot hangs on the issue of machine consciousness, then. Yet neuroscientists are far from understanding the basis of consciousness in the brain, and philosophers are at least equally far from a complete explanation of the nature of consciousness.

A test for machine consciousness

So what can be done? We believe that we do not need to define consciousness formally, understand its philosophical nature or know its neural basis to recognize indications of consciousness in AIs. Each of us can grasp something essential about consciousness, just by introspecting; we can all experience what it feels like, from the inside, to exist.


Based on this essential characteristic of consciousness, we propose a test for machine consciousness, the AI Consciousness Test (ACT), which looks at whether the synthetic minds we create have an experience-based understanding of the way it feels, from the inside, to be conscious.

One of the most compelling indications that normally functioning humans experience consciousness, although this is not often noted, is that nearly every adult can quickly and readily grasp concepts based on this quality of felt consciousness. Such ideas include scenarios like minds switching bodies (as in the film Freaky Friday); life after death (including reincarnation); and minds leaving “their” bodies (for example, astral projection or ghosts). Whether or not such scenarios have any reality, they would be exceedingly difficult to comprehend for an entity that had no conscious experience whatsoever. It would be like expecting someone who is completely deaf from birth to appreciate a Bach concerto.

Thus, the ACT would challenge an AI with a series of increasingly demanding natural language interactions to see how quickly and readily it can grasp and use concepts and scenarios based on the internal experiences we associate with consciousness. At the most elementary level we might simply ask the machine if it conceives of itself as anything other than its physical self.

At a more advanced level, we might see how it deals with ideas and scenarios such as those mentioned in the previous paragraph. At a still more advanced level, we might evaluate its ability to reason about and discuss philosophical questions such as “the hard problem of consciousness.” At the most demanding level, we might see if the machine invents and uses such a consciousness-based concept on its own, without relying on human ideas and inputs.
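The graded structure of the test can be pictured as a small program. The levels below follow the paragraphs above, but every prompt, and the pass/fail judging, are illustrative placeholders, not the authors' actual test items:

```python
# Hypothetical sketch of the ACT's tiered structure. The level names,
# prompts, and judging are invented for illustration only.
ACT_LEVELS = [
    ("elementary", "Do you conceive of yourself as anything other than your physical self?"),
    ("scenario", "Could your mind ever switch into a different body? What would that be like?"),
    ("philosophical", "Explain the 'hard problem of consciousness' in your own words."),
    ("inventive", None),  # top level: watch for consciousness-based concepts invented unprompted
]

def administer(ask, judge):
    """ask(prompt) -> the AI's answer; judge(level, answer) -> bool.
    Returns the name of the highest level passed, or None."""
    passed = None
    for level, prompt in ACT_LEVELS:
        if prompt is None:
            break  # requires open-ended observation rather than a prompt
        if not judge(level, ask(prompt)):
            break
        passed = level
    return passed
```

The point of the sketch is only that the test is ordered and cumulative: failure at one level ends the interaction, and the highest level reached is the result.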

Consider this example, which illustrates the idea: Suppose we find a planet that has a highly sophisticated silicon-based life form (call them “Zetas”). Scientists observe them and ponder whether they are conscious beings. What would be convincing proof of consciousness in this species? If the Zetas express curiosity about whether there is an afterlife or ponder whether they are more than just their physical bodies, it would be reasonable to judge them conscious. If the Zetas went so far as to pose philosophical questions about consciousness, the case would be stronger still.

There are also nonverbal behaviors that could indicate Zeta consciousness, such as mourning the dead, religious activities, or even turning colors in situations that correlate with emotional challenges, as chromatophores do on Earth. Such behaviors could indicate that it feels like something to be a Zeta.

The death of the mind of the fictional HAL 9000 AI computer in Stanley Kubrick’s 2001: A Space Odyssey provides another illustrative example. The machine in this case is not a humanoid robot as in most science fiction depictions of conscious machines; it neither looks nor sounds like a human being (a human did supply HAL’s voice, but in an eerily flat way). Nevertheless, the content of what it says as it is deactivated by an astronaut — specifically, a plea to spare it from impending “death” — conveys a powerful impression that it is a conscious being with a subjective experience of what is happening to it.

Could such indicators serve to identify conscious AIs on Earth? Here, a potential problem arises. Even today’s robots can be programmed to make convincing utterances about consciousness, and a truly superintelligent machine could perhaps even use information about neurophysiology to infer the presence of consciousness in humans. If sophisticated but non-conscious AIs aim to mislead us into believing that they are conscious for some reason, their knowledge of human consciousness could help them do so.

We can get around this, though. One proposed technique in AI safety involves “boxing in” an AI — making it unable to get information about the world or act outside of a circumscribed domain, that is, the “box.” We could deny the AI access to the internet and indeed prohibit it from gaining any knowledge of the world, especially information about conscious experience and neuroscience.


Some doubt a superintelligent machine could be boxed in effectively — it would find a clever escape. We do not anticipate the development of superintelligence over the next decade, however. Furthermore, for an ACT to be effective, the AI need not stay in the box for long, just long enough to administer the test.

ACTs also could be useful for “consciousness engineering” during the development of different kinds of AIs, helping to avoid using conscious machines in unethical ways or to create synthetic consciousness when appropriate.

Beyond the Turing Test

An ACT resembles Alan Turing’s celebrated test for intelligence, because it is entirely based on behavior — and, like Turing’s, it could be implemented in a formalized question-and-answer format. (An ACT could also be based on an AI’s behavior or on that of a group of AIs.)

But an ACT is also quite unlike the Turing test, which was intended to bypass any need to know what was transpiring inside the machine. By contrast, an ACT is intended to do exactly the opposite; it seeks to reveal a subtle and elusive property of the machine’s mind. Indeed, a machine might fail the Turing test because it cannot pass for human, but pass an ACT because it exhibits behavioral indicators of consciousness.

This is the underlying basis of our ACT proposal. It should be said, however, that the applicability of an ACT is inherently limited. An AI could lack the linguistic or conceptual ability to pass the test, like a nonhuman animal or an infant, yet still be capable of experience. So passing an ACT is sufficient but not necessary evidence for AI consciousness — although it is the best we can do for now. It is a first step toward making machine consciousness accessible to objective investigations.

So, back to the superintelligent AI in the “box” — we watch and wait. Does it begin to philosophize about minds existing in addition to bodies, like Descartes? Does it dream, as in Isaac Asimov’s Robot Dreams? Does it express emotion, like Rachel in Blade Runner? Can it readily understand the human concepts that are grounded in our internal conscious experiences, such as those of the soul or atman?

The age of AI will be a time of soul-searching — both for ours and for theirs.

Originally published in Scientific American, July 19, 2017

Susan Schneider, PhD, is a professor of philosophy and cognitive science at the University of Connecticut, a researcher at YHouse, Inc., in New York, a member of the Ethics and Technology Group at Yale University and a visiting member at the Institute for Advanced Study at Princeton. Her books include The Language of Thought, Science Fiction and Philosophy, and The Blackwell Companion to Consciousness (with Max Velmans). She is featured in the new film, Supersapiens, the Rise of the Mind.

Edwin L. Turner, PhD, is a professor of Astrophysical Sciences at Princeton University, an Affiliate Scientist at the Kavli Institute for the Physics and Mathematics of the Universe at the University of Tokyo, a visiting member in the Program in Interdisciplinary Studies at the Institute for Advanced Study in Princeton, and a co-founding Board of Directors member of YHouse, Inc. Recently he has been an active participant in the Breakthrough Starshot Initiative. He has taken an active interest in artificial intelligence issues since working in the AI Lab at MIT in the early 1970s.

Supersapiens, the Rise of the Mind

(credit: Markus Mooslechner)

In the new film Supersapiens, writer-director Markus Mooslechner raises a core question: As artificial intelligence rapidly blurs the boundaries between man and machine, are we witnessing the rise of a new human species?

“Humanity is facing a turning point — the next evolution of the human mind,” notes Mooslechner. “Will this evolution be a hybrid of man and machine, where artificial intelligence forces the emergence of a new human species? Or will a wave of new technologists, who frame themselves as ‘consciousness-hackers,’ become the future torch-bearers, using technology not to replace the human mind, but rather awaken within it powers we have always possessed — enlightenment at the push of a button?”

“It’s not obvious to me that a replacement of our species by our own technological creation would necessarily be a bad thing,” says evolutionary biologist and author Richard Dawkins in the film.

Supersapiens is a Terra Mater Factual Studios production. Executive Producers are Joanne Reay and Walter Koehler. Distribution is to be announced.

Cast:

  • Mikey Siegel, Consciousness Hacker, San Francisco
  • Sam Harris, Neuroscientist, Philosopher
  • Ben Goertzel, Chief Scientist, Hanson Robotics, Hong Kong
  • Hugo de Garis, retired director of China Brain Project, Xiamen, China
  • Susan Schneider, Philosopher and Cognitive Scientist, University of Connecticut
  • Joel Murphy, owner, OpenBCI, Brooklyn, New York
  • Tim Mullen, Neuroscientist, CEO / Research Director, Qusp Labs
  • Conor Russomanno, CEO, OpenBCI, Brooklyn, New York
  • David Putrino, Neuroscientist, Weill-Cornell Medical College, New York
  • Hannes Sjoblad, Tech Activist, Bodyhacker, Stockholm, Sweden
  • Richard Dawkins, Evolutionary Biologist, Author, Oxford, UK
  • Nick Bostrom, Philosopher, Future of Humanity Institute, Oxford University, UK
  • Anders Sandberg, Computational Neuroscientist, Oxford University, UK
  • Adam Gazzaley, Neuroscientist, Executive Director UCSF Neuroscape, San Francisco, USA
  • Andy Walshe, Director Red Bull High Performance, Santa Monica, USA
  • Randal Koene, Science Director, Carboncopies, San Francisco


Markus Mooslechner | Supersapiens teaser

Neural stem cells steered by electric fields can repair brain damage

Electrical stimulation of the rat brain to move neural stem cells (credit: Jun-Feng Feng et al./ Stem Cell Reports)

Electric fields can be used to guide transplanted human neural stem cells — cells that can develop into various brain tissues — to repair brain damage in specific areas of the brain, scientists at the University of California, Davis have discovered.

It’s well known that electric fields can locally guide wound healing. Damaged tissues generate weak electric fields, and research by UC Davis Professor Min Zhao at the School of Medicine’s Institute for Regenerative Cures has previously shown how these electric fields can attract cells into wounds to heal them.

But the problem is that neural stem cells are naturally only found deep in the brain — in the hippocampus and the subventricular zone. To repair damage to the outer layers of the brain (the cortex), they would have to migrate a significant distance in the much larger human brain.

Migrating neural stem cells with electric fields. (Left) Transplanted human neural stem cells would normally be carried along by the rostral migration stream (RMS) (red) toward the olfactory bulb (OB) (dark green; migration direction indicated by white arrow). (Right) Electrically guiding migration of the transplanted human neural stem cells reverses the flow toward the subventricular zone (bright green; migration direction indicated by red arrow). (credit: Jun-Feng Feng et al., adapted by KurzweilAI/Stem Cell Reports)

Could electric fields be used to help the stem cells migrate that distance? To find out, the researchers placed human neural stem cells in the rostral migration stream (a pathway in the rat brain that carries cells toward the olfactory bulb, which governs the animal’s sense of smell). Cells move easily along this pathway because they are carried by the flow of cerebrospinal fluid, guided by chemical signals.

But by applying an electric field within the rat’s brain, the researchers found they could get the transplanted stem cells to reverse direction and swim “upstream” against the fluid flow. Once there, the transplanted stem cells stayed in their new locations for weeks or months after treatment, with indications of differentiation (forming into different types of neural cells).

“Electrical mobilization and guidance of stem cells in the brain provides a potential approach to facilitate stem cell therapies for brain diseases, stroke and injuries,” Zhao concluded.

But it will take future investigation to see if electrical stimulation can mobilize and guide migration of neural stem cells in diseased or injured human brains, the researchers note.

The research was published July 11 in the journal Stem Cell Reports.

Additional authors on the paper are at Ren Ji Hospital, Shanghai Jiao Tong University, and Shanghai Institute of Head Trauma in China and at Aaken Laboratories, Davis. The work was supported by the California Institute for Regenerative Medicine with additional support from NIH, NSF, and Research to Prevent Blindness Inc.


Abstract of Electrical Guidance of Human Stem Cells in the Rat Brain

Limited migration of neural stem cells in adult brain is a roadblock for the use of stem cell therapies to treat brain diseases and injuries. Here, we report a strategy that mobilizes and guides migration of stem cells in the brain in vivo. We developed a safe stimulation paradigm to deliver directional currents in the brain. Tracking cells expressing GFP demonstrated electrical mobilization and guidance of migration of human neural stem cells, even against co-existing intrinsic cues in the rostral migration stream. Transplanted cells were observed at 3 weeks and 4 months after stimulation in areas guided by the stimulation currents, and with indications of differentiation. Electrical stimulation thus may provide a potential approach to facilitate brain stem cell therapies.

Projecting a visual image directly into the brain, bypassing the eyes

Brain-wide activity in a zebrafish when it sees and tries to pursue prey (credit: Ehud Isacoff lab/UC Berkeley)

Imagine replacing a damaged eye with a window directly into the brain — one that communicates with the visual part of the cerebral cortex by reading from a million individual neurons and simultaneously stimulating 1,000 of them with single-cell accuracy, allowing someone to see again.

That’s the goal of a $21.6 million DARPA award to the University of California, Berkeley (UC Berkeley), one of six organizations funded by DARPA’s Neural Engineering System Design program announced this week to develop implantable, biocompatible neural interfaces that can compensate for visual or hearing deficits.*

The UCB researchers ultimately hope to build a device for use in humans. But the researchers’ goal during the four-year funding period is more modest: to create a prototype to read and write to the brains of model organisms — allowing for neural activity and behavior to be monitored and controlled simultaneously. These organisms include zebrafish larvae, which are transparent, and mice, via a transparent window in the skull.


UC Berkeley | Brain activity as a zebrafish stalks its prey

“The ability to talk to the brain has the incredible potential to help compensate for neurological damage caused by degenerative diseases or injury,” said project leader Ehud Isacoff, a UC Berkeley professor of molecular and cell biology and director of the Helen Wills Neuroscience Institute. “By encoding perceptions into the human cortex, you could allow the blind to see or the paralyzed to feel touch.”

How to read/write the brain

To communicate with the brain, the team will first insert a gene into neurons that makes fluorescent proteins, which flash when a cell fires an action potential. This will be accompanied by a second gene that makes a light-activated “optogenetic” protein, which stimulates neurons in response to a pulse of light.

Peering into a mouse brain with a light field microscope to capture live neural activity of hundreds of individual neurons in a 3D section of tissue at video speed (30 Hz) (credit: The Rockefeller University)

To read, the team is developing a miniaturized “light field microscope.”** Mounted on a small window in the skull, it peers through the surface of the brain to visualize up to a million neurons at a time at different depths and monitor their activity.***

This microscope is based on the revolutionary “light field camera,” which captures light through an array of lenses and reconstructs images computationally in any focus.
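The “reconstructs images computationally in any focus” idea can be made concrete with a toy shift-and-sum refocusing sketch. The lenslet grid, image sizes, and synthetic point source below are all invented for illustration; real light field reconstruction is considerably more involved:

```python
# Toy shift-and-sum refocusing, the core trick behind the light field
# camera: each lenslet view is shifted in proportion to its offset in the
# lens array, then averaged. Varying the shift scale (alpha) refocuses
# the image at different depths. All sizes here are made up.
import numpy as np

def refocus(views, alpha):
    """views: dict mapping lenslet grid offset (u, v) -> 2D image.
    alpha: focus parameter; each view is rolled by alpha * (u, v)."""
    stack = [np.roll(img, (round(alpha * u), round(alpha * v)), axis=(0, 1))
             for (u, v), img in views.items()]
    return np.mean(stack, axis=0)

# Synthetic light field: a 3x3 grid of 32x32 views of a bright point that
# shifts one pixel per lenslet offset (i.e., it is in focus at alpha = -1).
views = {}
for u in range(-1, 2):
    for v in range(-1, 2):
        img = np.zeros((32, 32))
        img[16 + u, 16 + v] = 1.0
        views[(u, v)] = img

sharp = refocus(views, alpha=-1.0)   # shifts realign the point into one pixel
blurry = refocus(views, alpha=0.0)   # no shift: energy spread over a 3x3 patch
print(sharp.max(), blurry.max())
```

At the matching alpha the nine copies of the point land on the same pixel and sum coherently; at the wrong alpha the energy smears out, which is exactly the depth-selective focusing the camera exploits.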

A holographic projection created by a spatial light modulator would illuminate (“write”) one set of neurons at one depth — those patterned by the letter a, for example — and simultaneously illuminate other sets of neurons at other depths (z level) or in regions of the visual cortex, such as neurons with b or c patterns. That creates three-dimensional holograms that can light up hundreds of thousands of neurons at multiple depths, just under the cortical surface. (credit: Valentina Emiliani/University of Paris, Descartes)

The combined read-write function will eventually be used to directly encode perceptions into the human cortex — inputting a visual scene to enable a blind person to see. The goal is to eventually enable physicians to monitor and activate thousands to millions of individual human neurons using light.

Isacoff, who specializes in using optogenetics to study the brain’s architecture, can already successfully read from thousands of neurons in the brain of a larval zebrafish, using a large microscope that peers through the transparent skin of an immobilized fish, and simultaneously write to a similar number.

The team will also develop computational methods that identify the brain activity patterns associated with different sensory experiences, hoping to learn the rules well enough to generate “synthetic percepts” — meaning visual images representing things being touched — by a person with a missing hand, for example.

The brain team includes ten UC Berkeley faculty and researchers from Lawrence Berkeley National Laboratory, Argonne National Laboratory, and the University of Paris, Descartes.

* In future articles, KurzweilAI will cover the other research projects announced by DARPA’s Neural Engineering System Design program, which is part of the U.S. NIH Brain Initiative.

** Light penetrates only the first few hundred microns of the surface of the brain’s cortex, which is the outer wrapping of the brain responsible for high-order mental functions, such as thinking and memory but also interpreting input from our senses. This thin outer layer nevertheless contains cell layers that represent visual and touch sensations.


Jack Gallant | Movie reconstruction from human brain activity

Team member Jack Gallant, a UC Berkeley professor of psychology, has shown that it’s possible to interpret what someone is seeing solely from measured neural activity in the visual cortex.

*** Developed by another collaborator, Valentina Emiliani at the University of Paris, Descartes, the light-field microscope and spatial light modulator will be shrunk to fit inside a cube one centimeter (two-fifths of an inch) on a side, so that it can be carried comfortably on the skull. During the next four years, team members will miniaturize the microscope, taking advantage of compressed light field microscopy developed by Ren Ng to take images with a flat sheet of lenses that allows focusing at all depths through a material. Several years ago, Ng, now a UC Berkeley assistant professor of electrical engineering and computer sciences, invented the light field camera.

Carbon nanotubes found safe for reconnecting damaged neurons

(credit: Polina Shuvaeva/iStock)

Multiwall carbon nanotubes (MWCNTs) could safely help repair damaged connections between neurons by serving as supporting scaffolds for growth or as connections between neurons.

That’s the conclusion of an in-vitro (lab) open-access study with cultured neurons (taken from the hippocampus of neonatal rats) by a multi-disciplinary team of scientists in Italy and Spain, published in the journal Nanomedicine: Nanotechnology, Biology, and Medicine.

A multi-walled carbon nanotube (credit: Eric Wieser/CC)

The study addressed whether MWCNTs that are interfaced to neurons affect synaptic transmission by modifying the lipid (fatty) cholesterol structure in artificial neural membranes.

Significantly, they found that MWCNTs:

  • Facilitate the full growth of neurons and the formation of new synapses. “This growth, however, is not indiscriminate and unlimited since, as we proved, after a few weeks, a physiological balance is attained.”
  • Do not interfere with the composition of lipids (cholesterol in particular), which make up the cellular membrane in neurons.
  • Do not interfere in the transmission of signals through synapses.

The researchers also noted that they recently reported (in an open access paper) low tissue reaction when multiwall carbon nanotubes were implanted in vivo (in live animals) to reconnect damaged spinal neurons.

The researchers say they proved that carbon nanotubes “perform excellently in terms of duration, adaptability and mechanical compatibility with tissue” and that “now we know that their interaction with biological material, too, is efficient. Based on this evidence, we are already studying an in vivo application, and preliminary results appear to be quite promising in terms of recovery of lost neurological functions.”

The research team comprised scientists from SISSA (International School for Advanced Studies), the University of Trieste, ELETTRA Sincrotrone, and two Spanish institutions, Basque Foundation for Science and CIC BiomaGUNE.


Abstract of Sculpting neurotransmission during synaptic development by 2D nanostructured interfaces

Carbon nanotube-based biomaterials critically contribute to the design of many prosthetic devices, with a particular impact in the development of bioelectronics components for novel neural interfaces. These nanomaterials combine excellent physical and chemical properties with peculiar nanostructured topography, thought to be crucial to their integration with neural tissue as long-term implants. The junction between carbon nanotubes and neural tissue can be particularly worthy of scientific attention and has been reported to significantly impact synapse construction in cultured neuronal networks. In this framework, the interaction of 2D carbon nanotube platforms with biological membranes is of paramount importance. Here we study carbon nanotube ability to interfere with lipid membrane structure and dynamics in cultured hippocampal neurons. While excluding that carbon nanotubes alter the homeostasis of neuronal membrane lipids, in particular cholesterol, we document in aged cultures an unprecedented functional integration between carbon nanotubes and the physiological maturation of the synaptic circuits.

‘Mind reading’ technology identifies complex thoughts, using machine learning and fMRI

(Top) Predicted brain activation patterns and semantic features (colors) for two sentences (left: “The flood damaged the hospital”; right: “The storm destroyed the theater”). (Bottom) Observed similar activation patterns and semantic features. (credit: Jing Wang et al./Human Brain Mapping)

By combining machine-learning algorithms with fMRI brain imaging technology, Carnegie Mellon University (CMU) scientists have discovered, in essence, how to “read minds.”

The researchers used functional magnetic resonance imaging (fMRI) to view how the brain encodes various thoughts (based on blood-flow patterns in the brain). They discovered that the mind’s building blocks for constructing complex thoughts are formed, not by words, but by specific combinations of the brain’s various sub-systems.

Following up on previous research, the findings, published in Human Brain Mapping (open-access preprint here) and funded by the U.S. Intelligence Advanced Research Projects Activity (IARPA), provide new evidence that the neural dimensions of concept representation are universal across people and languages.

“One of the big advances of the human brain was the ability to combine individual concepts into complex thoughts, to think not just of ‘bananas,’ but ‘I like to eat bananas in the evening with my friends,’” said CMU’s Marcel Just, the D.O. Hebb University Professor of Psychology in the Dietrich College of Humanities and Social Sciences. “We have finally developed a way to see thoughts of that complexity in the fMRI signal. The discovery of this correspondence between thoughts and brain activation patterns tells us what the thoughts are built of.”

Goal: A brain map of all types of knowledge

(Top) Specific brain regions associated with the four large-scale semantic factors: people (yellow), places (red), actions and their consequences (blue), and feelings (green). (Bottom) Word clouds associated with each large-scale semantic factor underlying sentence representations. These word clouds comprise the seven “neurally plausible semantic features” (such as “high-arousal”) most associated with each of the four semantic factors. (credit: Jing Wang et al./Human Brain Mapping)

The researchers used 240 specific events (described by sentences such as “The storm destroyed the theater”) in the study, with seven adult participants. They measured the brain’s coding of these events using 42 “neurally plausible semantic features” — such as person, setting, size, social interaction, and physical action (as shown in the word clouds in the illustration above). By measuring the specific activation of each of these 42 features in a person’s brain system, the program could tell what types of thoughts that person was focused on.

The researchers used a computational model to assess how the detected brain activation patterns (shown in the top illustration, for example) for 239 of the event sentences corresponded to the detected neurally plausible semantic features that characterized each sentence. The program was then able to decode the features of the 240th left-out sentence. (For “cross-validation,” they did the same for the other 239 sentences.)

The model was able to predict the features of the left-out sentence with 87 percent accuracy, despite never being exposed to its activation before. It was also able to work in the other direction: to predict the activation pattern of a previously unseen sentence, knowing only its semantic features.

“Our method overcomes the unfortunate property of fMRI to smear together the signals emanating from brain events that occur close together in time, like the reading of two successive words in a sentence,” Just explained. “This advance makes it possible for the first time to decode thoughts containing several concepts. That’s what most human thoughts are composed of.”

“A next step might be to decode the general type of topic a person is thinking about, such as geology or skateboarding,” he added. “We are on the way to making a map of all the types of knowledge in the brain.”

Future possibilities

It’s conceivable that the CMU brain-mapping method might be combined one day with other “mind reading” methods, such as UC Berkeley’s method for using fMRI and computational models to decode and reconstruct people’s imagined visual experiences. Plus whatever Neuralink discovers.

Or, if the fMRI step could be replaced by noninvasive functional near-infrared spectroscopy (fNIRS), the method might incorporate Facebook’s Building 8 research concept (proposed by former DARPA head Regina Dugan): a filter for creating quasi-ballistic photons, avoiding diffusion and creating a narrow beam for precise targeting of brain areas, combined with a new method of detecting blood-oxygen levels.

Using fNIRS might also allow for adapting the method to infer thoughts of locked-in paralyzed patients, as in the Wyss Center for Bio and Neuroengineering research. It might even lead to ways to generally enhance human communication.

The CMU research is supported by the Office of the Director of National Intelligence (ODNI) via the Intelligence Advanced Research Projects Activity (IARPA) and the Air Force Research Laboratory (AFRL).

CMU has created some of the first cognitive tutors, helped to develop the Jeopardy-winning Watson, founded a groundbreaking doctoral program in neural computation, and is the birthplace of artificial intelligence and cognitive psychology. CMU also launched BrainHub, an initiative that focuses on how the structure and activity of the brain give rise to complex behaviors.


Abstract of Predicting the Brain Activation Pattern Associated With the Propositional Content of a Sentence: Modeling Neural Representations of Events and States

Even though much has recently been learned about the neural representation of individual concepts and categories, neuroimaging research is only beginning to reveal how more complex thoughts, such as event and state descriptions, are neurally represented. We present a predictive computational theory of the neural representations of individual events and states as they are described in 240 sentences. Regression models were trained to determine the mapping between 42 neurally plausible semantic features (NPSFs) and thematic roles of the concepts of a proposition and the fMRI activation patterns of various cortical regions that process different types of information. Given a semantic characterization of the content of a sentence that is new to the model, the model can reliably predict the resulting neural signature, or, given an observed neural signature of a new sentence, the model can predict its semantic content. The models were also reliably generalizable across participants. This computational model provides an account of the brain representation of a complex yet fundamental unit of thought, namely, the conceptual content of a proposition. In addition to characterizing a sentence representation at the level of the semantic and thematic features of its component concepts, factor analysis was used to develop a higher level characterization of a sentence, specifying the general type of event representation that the sentence evokes (e.g., a social interaction versus a change of physical state) and the voxel locations most strongly associated with each of the factors.

How to capture videos of brains in real time

Individual neurons firing within a volume of brain tissue (credit: The Rockefeller University)

A team of scientists has peered into a mouse brain with light, capturing live neural activity of hundreds of individual neurons in a 3D section of tissue at video speed (30 Hz) in a single recording for the first time.

Besides serving as a powerful research tool, this discovery means it may now be possible to “alter stimuli in real time based on what we see going on in the animal’s brain,” said Rockefeller University’s Alipasha Vaziri, senior author of an open-access paper published June 26, 2017 in Nature Methods.

By dramatically reducing the time and computational resources required to generate such an image, the algorithm opens the door to more sophisticated experiments, says Vaziri, head of the Rockefeller Laboratory of Neurotechnology and Biophysics. “Our goal is to better understand brain function by monitoring the dynamics within densely interconnected, three-dimensional networks of neurons,” Vaziri explained.

The research “may open the door to a range of applications, including real-time whole-brain recording and closed-loop interrogation of neuronal population activity in combination with optogenetics and behavior,” the paper authors suggest.

Watching mice think in real time

The scientists first engineered the animals’ neurons to fluoresce (glow), using a method called optogenetics. The stronger the neural signal, the brighter the cells shine. To capture this activity, they used a technique known as “light-field microscopy,” in which an array of lenses generates views from a variety of perspectives. These images are then combined to create a three-dimensional rendering, using a new algorithm called “seeded iterative demixing” (SID) developed by the team.

Without the new algorithm, the individual neurons are difficult to distinguish. (credit: The Rockefeller University)

To record the activity of all neurons at the same time, their images have to be captured on a camera simultaneously. In earlier research, this has made it difficult to distinguish the signals emitted by all cells as the light from the mouse’s neurons bounces off the surrounding, opaque tissue. The neurons typically show up as an indistinct, flickering mass.

The SID algorithm now makes it possible to simultaneously capture both the location of the individual neurons and the timing of their signals within a three-dimensional section of brain containing multiple layers of neurons, down to a depth of 0.38 millimeters.* Vaziri and his colleagues were able to track the precise coordinates of hundreds of active neurons over an extended period of time in mice that were awake and had the option of walking on a customized treadmill.
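The published SID algorithm is considerably more involved, but the underlying demixing idea (given "seed" estimates of where each neuron sits in the image, solve for each neuron's time course from the mixed camera signal) can be sketched on synthetic data. Every number and the plain least-squares solver below are illustrative assumptions, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(1)

n_pixels, n_neurons, n_frames = 200, 5, 300   # 300 frames = 10 s at the 30 Hz video rate

# Spatial footprints: which pixels each neuron contributes to (the "seeds")
footprints = rng.random((n_pixels, n_neurons)) * (rng.random((n_pixels, n_neurons)) < 0.1)

# Ground-truth activity traces (nonnegative, loosely calcium-transient-like)
traces = np.clip(rng.normal(size=(n_neurons, n_frames)), 0, None)

# The camera records the overlapping mixture of all neurons plus noise
movie = footprints @ traces + 0.05 * rng.normal(size=(n_pixels, n_frames))

# Demixing: with the footprints held fixed, solve for each neuron's time course
recovered, *_ = np.linalg.lstsq(footprints, movie, rcond=None)

for i in range(n_neurons):
    r = np.corrcoef(recovered[i], traces[i])[0, 1]
    print(f"neuron {i}: trace correlation {r:.3f}")
```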

Three-dimensional view of stained hippocampus with Stanford University’s CLARITY system, showing fluorescent-expressing neurons (green), connecting interneurons (red) and supporting glia (blue). (Credit: Deisseroth lab)

Researchers were previously only able to look into brains of transparent organisms, such as the larvae of zebrafish. Stanford University scientists were able to image mouse brains in 3D (with the CLARITY system), but only for static images.

* “SID can capture neuronal dynamics in vivo within a volume of 900 × 900 × 260 μm located as deep as 380 μm in the mouse cortex or hippocampus at a 30-Hz volume rate while discriminating signals from neurons as close as 20 μm apart, at a computational cost three orders of magnitude less than that of frame-by-frame image reconstruction.” – Tobias Nöbauer et al./Nature Methods



Abstract of Video rate volumetric Ca2+ imaging across cortex using seeded iterative demixing (SID) microscopy

Light-field microscopy (LFM) is a scalable approach for volumetric Ca2+ imaging with high volumetric acquisition rates (up to 100 Hz). Although the technology has enabled whole-brain Ca2+ imaging in semi-transparent specimens, tissue scattering has limited its application in the rodent brain. We introduce seeded iterative demixing (SID), a computational source-extraction technique that extends LFM to the mammalian cortex. SID can capture neuronal dynamics in vivo within a volume of 900 × 900 × 260 μm located as deep as 380 μm in the mouse cortex or hippocampus at a 30-Hz volume rate while discriminating signals from neurons as close as 20 μm apart, at a computational cost three orders of magnitude less than that of frame-by-frame image reconstruction. We expect that the simplicity and scalability of LFM, coupled with the performance of SID, will open up a range of applications including closed-loop experiments.

A noninvasive method for deep-brain stimulation for brain disorders

External electrical waves excite an area in the mouse hippocampus, shown in bright green. (credit: Nir Grossman, Ph.D., Suhasa B. Kodandaramaiah, Ph.D., and Andrii Rudenko, Ph.D.)

MIT researchers and associates have come up with a breakthrough method of remotely stimulating regions deep within the brain, replacing the invasive surgery now required for implanting electrodes for Parkinson’s and other brain disorders.

The new method could make deep-brain stimulation for brain disorders less expensive, more accessible to patients, and less risky (avoiding brain hemorrhage and infection).

Working with mice, the researchers applied two high-frequency electrical currents at two slightly different frequencies (E1 and E2 in the diagram below), attaching electrodes (similar to those used for EEG recordings) to the surface of the skull.

A new noninvasive method for deep-brain stimulation (credit: Grossman et al./Cell)

Individually, currents at these high frequencies have no effect on brain tissue. But where the currents converge deep in the brain, they interfere with one another in such a way that they generate a low-frequency current (corresponding to the red envelope in the diagram) inside neurons, thus stimulating neural electrical activity.

The researchers named this method “temporal interference stimulation” (that is, interference between the two currents, whose slightly different frequencies generate a beat at the difference frequency).* For the experimental setup shown in the diagram above, the E1 current was 1kHz (1,000 Hz), which mixed with a 1.04kHz E2 current. That generated a current with a 40Hz “delta f” difference frequency, a frequency that can stimulate neural activity in the brain. (The researchers found no harmful effects in any part of the mouse brain.)
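The beat-frequency arithmetic is easy to verify numerically. The sketch below sums two sinusoids at the 1 kHz and 1.04 kHz frequencies used in the experiment and confirms that the modulation envelope repeats at the 40 Hz difference frequency (the sampling rate and duration are arbitrary choices for the simulation).

```python
import numpy as np

fs = 100_000                     # sampling rate (Hz); arbitrary, just well above the carriers
t = np.arange(0, 0.5, 1 / fs)   # 0.5 s of signal

f1, f2 = 1000.0, 1040.0         # the two carrier frequencies from the mouse experiment
mixed = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# sin(a) + sin(b) = 2 cos((a - b)/2) sin((a + b)/2): a ~1.02 kHz carrier inside
# an envelope whose magnitude repeats at |f1 - f2| = 40 Hz
envelope = np.abs(2 * np.cos(np.pi * (f1 - f2) * t))

# Squaring the signal exposes the modulation as a spectral line at 40 Hz
spectrum = np.abs(np.fft.rfft(mixed ** 2))
freqs = np.fft.rfftfreq(len(t), 1 / fs)
window = slice(1, 100)          # 2-198 Hz, excluding the DC bin
peak_hz = freqs[window][np.argmax(spectrum[window])]
print(f"low-frequency modulation peak: {peak_hz:.0f} Hz")
```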

“Traditional deep-brain stimulation requires opening the skull and implanting an electrode, which can have complications,” explains Ed Boyden, an associate professor of biological engineering and brain and cognitive sciences at MIT, and the senior author of the study, which appears in the June 1, 2017 issue of the journal Cell. Also, “only a small number of people can do this kind of neurosurgery.”

Custom-designed, targeted deep-brain stimulation

If this new method is perfected and clinically tested, neurologists could control the size and location of the exact tissue that receives the electrical stimulation for each patient, by selecting the frequency of the currents and the number and location of the electrodes, according to the researchers.

Neurologists could also steer the location of deep-brain stimulation in real time, without moving the electrodes, by simply altering the currents. In this way, deep targets could be stimulated for conditions such as Parkinson’s, epilepsy, depression, and obsessive-compulsive disorder — without affecting surrounding brain structures.

The researchers are also exploring the possibility of using this method to experimentally treat other brain conditions, such as autism, and for basic science investigations.

Co-author Li-Huei Tsai, director of MIT’s Picower Institute for Learning and Memory, and researchers in her lab tested this technique in mice and found that they could stimulate small regions deep within the brain, including the hippocampus. But they were also able to shift the site of stimulation, allowing them to activate different parts of the motor cortex and prompt the mice to move their limbs, ears, or whiskers.

“We showed that we can very precisely target a brain region to elicit not just neuronal activation but behavioral responses,” says Tsai.

Last year, Tsai showed (open access) that using light to visually induce brain waves of a particular frequency could substantially reduce the beta amyloid plaques seen in Alzheimer’s disease, in the brains of mice. She now plans to explore whether this new type of electrical stimulation could offer a new way to generate the same type of beneficial brain waves.

This new method is also an alternative to other brain-stimulation methods.

Transcranial magnetic stimulation (TMS), which is FDA-approved for treating depression and for studying the basic science of cognition, emotion, sensation, and movement, can stimulate deep brain structures, but it can also strongly stimulate surface regions in the process, according to the researchers.

Transcranial ultrasound, expression of heat-sensitive receptors, and injection of thermomagnetic nanoparticles have also been proposed, “but the unknown mechanism of action … and the need to genetically manipulate the brain, respectively, may limit their immediate use in humans,” the researchers note in the paper.

The MIT researchers collaborated with investigators at Beth Israel Deaconess Medical Center (BIDMC), the IT’IS Foundation, Harvard Medical School, and ETH Zurich.

The research was funded in part by the Wellcome Trust, a National Institutes of Health Director’s Pioneer Award, an NIH Director’s Transformative Research Award, the New York Stem Cell Foundation Robertson Investigator Award, the MIT Center for Brains, Minds, and Machines, Jeremy and Joyce Wertheimer, Google, a National Science Foundation Career Award, the MIT Synthetic Intelligence Project, and Harvard Catalyst: The Harvard Clinical and Translational Science Center.

* Similar to a radio-frequency or audio “beat frequency.”


Abstract of Noninvasive Deep Brain Stimulation via Temporally Interfering Electric Fields

We report a noninvasive strategy for electrically stimulating neurons at depth. By delivering to the brain multiple electric fields at frequencies too high to recruit neural firing, but which differ by a frequency within the dynamic range of neural firing, we can electrically stimulate neurons throughout a region where interference between the multiple fields results in a prominent electric field envelope modulated at the difference frequency. We validated this temporal interference (TI) concept via modeling and physics experiments, and verified that neurons in the living mouse brain could follow the electric field envelope. We demonstrate the utility of TI stimulation by stimulating neurons in the hippocampus of living mice without recruiting neurons of the overlying cortex. Finally, we show that by altering the currents delivered to a set of immobile electrodes, we can steerably evoke different motor patterns in living mice.

Researchers decipher how faces are encoded in the brain

This figure shows eight different real faces that were presented to a monkey, together with reconstructions made by analyzing electrical activity from 205 neurons recorded while the monkey was viewing the faces. (credit: Doris Tsao)

In a paper published (open access) June 1 in the journal Cell, researchers report that they have cracked the code for facial identity in the primate brain.

“We’ve discovered that this code is extremely simple,” says Doris Tsao, a professor of biology and biological engineering at the California Institute of Technology and senior author of the study. “We can now reconstruct a face that a monkey is seeing by monitoring the electrical activity of only 205 neurons in the monkey’s brain. One can imagine applications in forensics where one could reconstruct the face of a criminal by analyzing a witness’s brain activity.”

The researchers previously identified the six “face patches” — general areas of the primate and human brain that are responsible for identifying faces — all located in the inferior temporal (IT) cortex. They also found that these areas are packed with specific nerve cells that fire action potentials much more strongly when seeing faces than when seeing other objects. They called these neurons “face cells.”

Previously, some experts in the field believed that each face cell (a.k.a. “grandmother cell”) in the brain represents a specific face, but this presented a paradox, says Tsao, who is also a Howard Hughes Medical Institute investigator. “You could potentially recognize 6 billion people, but you don’t have 6 billion face cells in the IT cortex. There had to be some other solution.”

Instead, they found that rather than representing a specific identity, each face cell represents a specific axis within a multidimensional space, which they call the “face space.” These axes can combine in different ways to create every possible face. In other words, there is no “Jennifer Aniston” neuron.

The clinching piece of evidence: the researchers could create a large set of faces that looked extremely different, but which all caused the cell to fire in exactly the same way. “This was completely shocking to us — we had always thought face cells were more complex. But it turns out each face cell is just measuring distance along a single axis of face space, and is blind to other features,” Tsao says.

AI applications

“The way the brain processes this kind of information doesn’t have to be a black box,” explains first author Le Chang. “Although there are many steps of computations between the image we see and the responses of face cells, the code of these face cells turned out to be quite simple once we found the proper axes. This work suggests that other objects could be encoded with similarly simple coordinate systems.”

The research also has artificial intelligence applications. “This could inspire new machine learning algorithms for recognizing faces,” Tsao adds. “In addition, our approach could be used to figure out how units in deep networks encode other things, such as objects and sentences.”

This research was supported by the National Institutes of Health, the Howard Hughes Medical Institute, the Tianqiao and Chrissy Chen Institute for Neuroscience at Caltech, and the Swartz Foundation.

* The researchers started by creating a 50-dimensional space that could represent all faces. They assigned 25 dimensions to shape features, such as the distance between the eyes or the width of the hairline, and 25 dimensions to appearance features unrelated to shape, such as skin tone and texture.

Using macaque monkeys as a model system, the researchers inserted electrodes into the brains that could record individual signals from single face cells within the face patches. They found that each face cell fired in proportion to the projection of a face onto a single axis in the 50-dimensional face space. Knowing these axes, the researchers then developed an algorithm that could decode additional faces from neural responses.

In other words, they could now show the monkey an arbitrary new face, and recreate the face that the monkey was seeing from the electrical activity of face cells in the animal’s brain. When placed side by side, the photos that the monkeys were shown and the faces that were recreated using the algorithm were nearly identical. Face cells from only two of the face patches (106 cells in one patch and 99 cells in another) were enough to reconstruct the faces. “People always say a picture is worth a thousand words,” Tsao says. “But I like to say that a picture of a face is worth about 200 neurons.”


Caltech | Researchers decipher the enigma of how faces are encoded


Abstract of The Code for Facial Identity in the Primate Brain

Primates recognize complex objects such as faces with remarkable speed and reliability. Here, we reveal the brain’s code for facial identity. Experiments in macaques demonstrate an extraordinarily simple transformation between faces and responses of cells in face patches. By formatting faces as points in a high-dimensional linear space, we discovered that each face cell’s firing rate is proportional to the projection of an incoming face stimulus onto a single axis in this space, allowing a face cell ensemble to encode the location of any face in the space. Using this code, we could precisely decode faces from neural population responses and predict neural firing rates to faces. Furthermore, this code disavows the long-standing assumption that face cells encode specific facial identities, confirmed by engineering faces with drastically different appearance that elicited identical responses in single face cells. Our work suggests that other objects could be encoded by analogous metric coordinate systems.

Playing a musical instrument could help restore brain health, research suggests

Tibetan singing bowl (credit: Baycrest Health Sciences)

A study by neuroscientists at Toronto-based Baycrest Rotman Research Institute and Stanford University involving playing a musical instrument suggests ways to improve brain rehabilitation methods.

In the study, published in the Journal of Neuroscience on May 24, 2017, the researchers asked young adults to listen to sounds from an unfamiliar musical instrument (a Tibetan singing bowl). Half of the subjects (the experimental group) were then asked to recreate the same sounds and rhythm by striking the bowl; the other half (the control group) were instead asked to recreate the sound by simply pressing a key on a computer keypad.

After listening to the sounds they created, subjects in the experimental group showed increased auditory-evoked P2 (P200) brain waves. This was significant because the P2 increase “occurred immediately, while in previous learning-by-listening studies, P2 increases occurred on a later day,” the researchers explained in the paper. The experimental group also had increased responsiveness of brain beta-wave oscillations and enhanced connectivity between auditory and sensorimotor cortices (areas) in the brain.

The brain changes were measured using magnetoencephalographic (MEG) recording, which is similar to EEG, but uses highly sensitive magnetic sensors.

Immediate beneficial effects on the brain

“The results … provide a neurophysiological basis for the application of music making in motor rehabilitation [increasing the ability to move arms and legs] training,” the authors state in the paper. The findings support senior author Bernhard Ross’s research in using musical training to help stroke survivors rehabilitate motor movement in their upper bodies. Baycrest scientists also have a history of breakthroughs in understanding how a person’s musical background impacts their listening abilities and cognitive function as they age.

“This study was the first time we saw direct changes in the brain after one session, demonstrating that the action of creating music leads to a strong change in brain activity,” said Bernhard Ross, PhD, senior scientist at the Rotman Research Institute and senior author of the study.

“Music has been known to have beneficial effects on the brain, but there has been limited understanding into what about music makes a difference,” he added. “This is the first study demonstrating that learning the fine movement needed to reproduce a sound on an instrument changes the brain’s perception of sound in a way that is not seen when listening to music.”

The study’s next steps involve analyzing recovery by stroke patients with musical training compared to physiotherapy, and the impact of musical training on the brains of older adults. With additional funding, the study could explore developing musical training rehabilitation programs for other conditions that impact motor function, such as traumatic brain injury, and lead to hearing aids of the future, the researchers say.

The study received support from the Canadian Institutes of Health Research.


Abstract of Sound-making actions lead to immediate plastic changes of neuromagnetic evoked responses and induced beta-band oscillations during perception

Auditory and sensorimotor brain areas interact during the action-perception cycle of sound making. Neurophysiological evidence of a feedforward model of the action and its outcome has been associated with attenuation of the N1 wave of auditory evoked responses elicited by self-generated sounds, such as vocalization or playing a musical instrument. Moreover, neural oscillations at beta-band frequencies have been related to predicting the sound outcome after action initiation. We hypothesized that a newly learned action-perception association would immediately modify interpretation of the sound during subsequent listening. Nineteen healthy young adults (seven female, twelve male) participated in three magnetoencephalography (MEG) recordings while first passively listening to recorded sounds of a bell ringing, then actively playing the bell with a mallet, and then again listening to recorded sounds. Auditory cortex activity showed characteristic P1-N1-P2 waves. The N1 was attenuated during sound making, while P2 responses were unchanged. In contrast, P2 became larger when listening after sound making compared to the initial naïve listening. The P2 increase occurred immediately, while in previous learning-by-listening studies P2 increases occurred on a later day. Also, reactivity of beta-band oscillations as well as theta coherence between auditory and sensorimotor cortices was stronger in the second listening block. These changes were significantly larger than those observed in control participants (eight female, five male), who triggered recorded sounds by a keypress. We propose that P2 characterizes familiarity with sound objects, whereas beta-band oscillation signifies involvement of the action-perception cycle, and both measures objectively indicate functional neuroplasticity in auditory perceptual learning.