Metalens with artificial muscle simulates (and goes way beyond) human-eye and camera optical functions

A silicon-based metalens just 30 micrometers thick is mounted on a transparent, stretchy polymer film. The colored iridescence is produced by the large number of nanostructures within the metalens. (credit: Harvard SEAS)

Researchers at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) have developed a breakthrough electronically controlled artificial eye. The thin, flat, adaptive silicon nanostructure (“metalens”) can simultaneously control focus, astigmatism, and image shift (three of the major contributors to blurry images) in real time, which the human eye (and eyeglasses) cannot do.

The 30-micrometers-thick metalens makes changes laterally to achieve optical zoom, autofocus, and image stabilization — making it possible to replace bulky lens systems in future optical systems used in eyeglasses, cameras, cell phones, and augmented and virtual reality devices.

The research is described in an open-access paper in Science Advances. In another paper recently published in Optics Express, the researchers demonstrated the design and fabrication of metalenses up to centimeters or more in diameter.* That makes it possible to unify two industries: semiconductor manufacturing and lens-making. So the same technology used to make computer chips will be used to make metasurface-based optical components, such as lenses.

The adaptive metalens (right) focuses light rays onto an image sensor (left), such as one in a camera. An electrical signal controls the shape of the metalens to produce the desired optical wavefront patterns (shown in red), resulting in improved images. In the future, adaptive metalenses will be built into imaging systems, such as cell phone cameras and microscopes, enabling flat, compact autofocus as well as the capability for simultaneously correcting optical aberrations and performing optical image stabilization, all in a single plane of control. (credit: Second Bay Studios/Harvard SEAS)

Simulating the human eye’s lens and ciliary muscles

In the human eye, the lens is surrounded by ciliary muscle, which stretches or compresses the lens, changing its shape to adjust its focal length. To achieve that function, the researchers adhered a metalens to a thin, transparent dielectric elastomer actuator (“artificial muscle”). The researchers chose a dielectric elastomer with low loss — meaning light travels through the material with little scattering — to attach to the lens.

(Top) Schematic of metasurface and dielectric elastomer actuators (“artificial muscles”), showing how the new artificial muscles change focus, similar to how the ciliary muscle in the eye works. An applied voltage supplies transparent, stretchable electrode layers (gray), made up of single-wall carbon-nanotube nanopillars, with electrical charges (acting as a capacitor). The resulting electrostatic attraction compresses (red arrows) the dielectric elastomer actuators (artificial muscles) in the thickness direction and expands (black arrows) the elastomers in the lateral direction. The silicon metasurface (in the center), applied by photolithography, can simultaneously focus, control aberrations caused by astigmatism, and perform image shift. (Bottom) Photo of the actual device. (credit: Alan She et al./Sci. Adv.)
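
Two textbook relations make the mechanism and the tuning range concrete (a back-of-the-envelope sketch, not equations quoted from the paper): the electrostatic (Maxwell) pressure with which a voltage V squeezes an elastomer layer of thickness t and relative permittivity εr, and the focal-length change of a metalens whose parabolic phase profile is stretched laterally by a factor s.

```latex
% Electrostatic (Maxwell) pressure squeezing the dielectric elastomer actuator
p = \varepsilon_0 \varepsilon_r E^2 = \varepsilon_0 \varepsilon_r \left(\frac{V}{t}\right)^2

% Metalens phase profile under a uniform lateral stretch by a factor s
\varphi(r) \approx -\frac{\pi r^2}{\lambda f}, \qquad
\varphi\!\left(\frac{r}{s}\right) = -\frac{\pi r^2}{\lambda\,(s^2 f)}
\;\Rightarrow\; f' = s^2 f
```

On that scaling, a lateral stretch of roughly 41 percent (s ≈ 1.41) doubles the focal length, consistent with the greater-than-100-percent focal-length tuning reported in the abstract below.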

Next, the researchers aim to further improve the functionality of the lens and decrease the voltage required to control it.

The research was performed at the Harvard John A. Paulson School of Engineering and Applied Sciences, supported in part by the Air Force Office of Scientific Research and by the National Science Foundation. This work was performed in part at the Center for Nanoscale Systems (CNS), which is supported by the National Science Foundation. The Harvard Office of Technology Development is exploring commercialization opportunities.

* To build the artificial eye with a larger (more functional) metalens, the researchers had to develop a new algorithm to shrink the file size to make it compatible with the technology currently used to fabricate integrated circuits.

** “All optical systems with multiple components — from cameras to microscopes and telescopes — have slight misalignments or mechanical stresses on their components, depending on the way they were built and their current environment, that will always cause small amounts of astigmatism and other aberrations, which could be corrected by an adaptive optical element,” said Alan She, a graduate student at SEAS and first author of the paper. “Because the adaptive metalens is flat, you can correct those aberrations and integrate different optical capabilities onto a single plane of control. Our results demonstrate the feasibility of embedded autofocus, optical zoom, image stabilization, and adaptive optics, which are expected to become essential for future chip-scale image sensors. Furthermore, the device’s flat construction and inherently lateral actuation without the need for motorized parts allow for highly stackable systems such as those found in stretchable electronic eye camera sensors, providing possibilities for new kinds of imaging systems.”


Abstract of Adaptive metalenses with simultaneous electrical control of focal length, astigmatism, and shift

Focal adjustment and zooming are universal features of cameras and advanced optical systems. Such tuning is usually performed longitudinally along the optical axis by mechanical or electrical control of focal length. However, the recent advent of ultrathin planar lenses based on metasurfaces (metalenses), which opens the door to future drastic miniaturization of mobile devices such as cell phones and wearable displays, mandates fundamentally different forms of tuning based on lateral motion rather than longitudinal motion. Theory shows that the strain field of a metalens substrate can be directly mapped into the outgoing optical wavefront to achieve large diffraction-limited focal length tuning and control of aberrations. We demonstrate electrically tunable large-area metalenses controlled by artificial muscles capable of simultaneously performing focal length tuning (>100%) as well as on-the-fly astigmatism and image shift corrections, which until now were only possible in electron optics. The device thickness is only 30 μm. Our results demonstrate the possibility of future optical microscopes that fully operate electronically, as well as compact optical systems that use the principles of adaptive optics to correct many orders of aberrations simultaneously.


Abstract of Large area metalenses: design, characterization, and mass manufacturing

Optical components, such as lenses, have traditionally been made in the bulk form by shaping glass or other transparent materials. Recent advances in metasurfaces provide a new basis for recasting optical components into thin, planar elements, having similar or better performance using arrays of subwavelength-spaced optical phase-shifters. The technology required to mass produce them dates back to the mid-1990s, when the feature sizes of semiconductor manufacturing became considerably denser than the wavelength of light, advancing in stride with Moore’s law. This provides the possibility of unifying two industries: semiconductor manufacturing and lens-making, whereby the same technology used to make computer chips is used to make optical components, such as lenses, based on metasurfaces. Using a scalable metasurface layout compression algorithm that exponentially reduces design file sizes (by 3 orders of magnitude for a centimeter diameter lens) and stepper photolithography, we show the design and fabrication of metasurface lenses (metalenses) with extremely large areas, up to centimeters in diameter and beyond. Using a single two-centimeter diameter near-infrared metalens less than a micron thick fabricated in this way, we experimentally implement the ideal thin lens equation, while demonstrating high-quality imaging and diffraction-limited focusing.
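
The abstract above credits a metasurface layout compression algorithm with shrinking design files by roughly three orders of magnitude. That algorithm is not reproduced here; the toy Python sketch below only illustrates one generic reason such layouts compress so well: millions of nanopillars share a handful of cross-sections, so storing each unique geometry once plus a lightweight placement reference per pillar is far smaller than writing every polygon out in full. (Real designs gain far more, since the placements themselves follow regular patterns that can be encoded compactly.)

```python
import json
import math

def circle(radius_nm, n_vertices=64):
    """Polygonal approximation of a circular nanopillar cross-section."""
    return [(round(radius_nm * math.cos(2 * math.pi * k / n_vertices), 1),
             round(radius_nm * math.sin(2 * math.pi * k / n_vertices), 1))
            for k in range(n_vertices)]

def explicit_layout(placements, library):
    """Every pillar written out in full, vertices and all."""
    return [{"x": x, "y": y, "vertices": library[kind]} for x, y, kind in placements]

def compressed_layout(placements, library):
    """Each unique geometry stored once, plus one small reference per pillar."""
    return {"cells": library,
            "refs": [[x, y, kind] for x, y, kind in placements]}

if __name__ == "__main__":
    # Two pillar diameters stand in for the many discrete sizes of a real metalens;
    # 40,000 placements stand in for the ~10^8 pillars of a centimeter-scale lens.
    library = {"pillar_A": circle(40), "pillar_B": circle(60)}
    placements = [(i * 400, j * 400, "pillar_A" if (i + j) % 2 else "pillar_B")
                  for i in range(200) for j in range(200)]
    full = len(json.dumps(explicit_layout(placements, library)))
    compact = len(json.dumps(compressed_layout(placements, library)))
    print(f"explicit: {full:,} bytes; referenced: {compact:,} bytes; ~{full / compact:.0f}x smaller")
```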

Are you a cyborg?

Bioprinting a brain

Cryogenic 3D-printing soft hydrogels. Top: the bioprinting process. Bottom: SEM image of general microstructure (scale bar: 100 µm). (credit: Z. Tan/Scientific Reports)

A new bioprinting technique combines cryogenics (freezing) and 3D printing to create geometrical structures that are as soft (and complex) as the most delicate body tissues — mimicking the mechanical properties of organs such as the brain and lungs.

The idea: “Seed” porous scaffolds that can act as a template for tissue regeneration (from neuronal cells, for example), where damaged tissues are encouraged to regrow — allowing the body to heal without tissue rejection or other problems. Using “pluripotent” stem cells that can change into different types of cells is also a possibility.

Smoothy. Solid carbon dioxide (dry ice) in an isopropanol bath is used to rapidly cool hydrogel ink (a rapid liquid-to-solid phase change) as it’s extruded, yogurt-smoothy-style. Once thawed, the gel is as soft as body tissues, but doesn’t collapse under its own weight — a previous problem.

Current structures produced with this technique are “organoids” a few centimeters in size. But the researchers hope to create replicas of actual body parts with complex geometrical structures — even whole organs. That could allow scientists to carry out experiments not possible on live subjects, or for use in medical training, replacing animal bodies for surgical training and simulations. Then on to mechanobiology and tissue engineering.

Source: Imperial College London, Scientific Reports (open-access).

How to generate electricity with your body

Bending a finger generates electricity in this prototype device. (credit: Guofeng Song et al./Nano Energy)

A new triboelectric nanogenerator (TENG) design, using a gold tab attached to your skin, will convert mechanical energy into electrical energy for future wearables and self-powered electronics. Just bend your finger or take a step.

Triboelectric charging occurs when certain materials become electrically charged after coming into contact with a different material. In this new design by University of Buffalo and Chinese scientists, when a stretched layer of gold is released, it crumples, creating what looks like a miniature mountain range. An applied force leads to friction between the gold layers and an interior PDMS layer, causing electrons to flow between the gold layers.

More power to you. Previous TENG designs have been difficult to manufacture (requiring complex lithography) or too expensive. The new 1.5-centimeters-long prototype generates a maximum of 124 volts but at only 10 microamps. It has a power density of 0.22 milliwatts per square centimeter. The team plans to use larger pieces of gold to deliver more electricity, along with a portable battery.
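
As a quick back-of-the-envelope check on those figures (an illustration, not a number from the paper): 124 volts sounds dramatic, but at 10 microamps the peak electrical output is on the order of a milliwatt.

```latex
P_{\text{peak}} = V I = 124\ \text{V} \times 10\ \mu\text{A} \approx 1.2\ \text{mW}
```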

Source: Nano Energy. Support: U.S. National Science Foundation, the National Basic Research Program of China, National Natural Science Foundation of China, Beijing Science and Technology Projects, Key Research Projects of the Frontier Science of the Chinese Academy of Sciences, and National Key Research and Development Plan.

This artificial electrical eel may power your implants

How the eel’s electrical organs generate electricity by moving sodium (Na) and potassium (K) ions across a selective membrane. (credit: Caitlin Monney)

Taking it a giant (and a bit scary) step further, an artificial electric organ, inspired by the electric eel, could one day power your implantable sensors, prosthetic devices, medication dispensers, augmented-reality contact lenses, and countless other gadgets. Unlike typical toxic batteries that need to be recharged, these systems are soft, flexible, transparent, and potentially biocompatible.

Doubles as a defibrillator? The system mimics eels’ electrical organs, which use thousands of alternating compartments with excess potassium or sodium ions, separated by selective membranes. To create a jolt of electricity (600 volts at 1 ampere), an eel’s membranes allow the ions to flow together. The researchers built a similar system, but using sodium and chloride ions dissolved in a water-based hydrogel. It generates more than 100 volts, but at safe low current — just enough to power a small medical device like a pacemaker.
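
The compartment-and-membrane picture can be made concrete with the standard Nernst relation (textbook electrochemistry, not a formula quoted from the Nature paper): each ion-selective junction between a high- and a low-concentration compartment contributes a small potential, and large voltages come from stacking many such junctions in series, in the eel and in the hydrogel mimic alike.

```latex
E_{\text{junction}} = \frac{RT}{zF}\ln\frac{c_{\text{high}}}{c_{\text{low}}}
\approx 59\ \text{mV}\times\log_{10}\frac{c_{\text{high}}}{c_{\text{low}}}
\quad (z = 1,\ T \approx 298\ \text{K}),
\qquad
V_{\text{total}} \approx N\, E_{\text{junction}}
```

A tenfold concentration ratio thus yields roughly 59 millivolts per selective membrane, so a few thousand junctions in series can reach the hundreds of volts cited above.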

The researchers say the technology could also lead to using naturally occurring processes inside the body to generate electricity, a truly radical step.

Source: Nature, University of Fribourg, University of Michigan, University of California-San Diego. Funding: Air Force Office of Scientific Research, National Institutes of Health.

E-skin for Terminator wannabes

A section of “e-skin” (credit: Jianliang Xiao / University of Colorado Boulder)

A new type of thin, self-healing, translucent “electronic skin” (“e-skin,” which mimics the properties of natural skin) has applications ranging from robotics and prosthetic development to better biomedical devices and human-computer interfaces.

Ready for a Terminator-style robot baby nurse? What makes this e-skin different and interesting is its embedded sensors, which can measure pressure, temperature, humidity and air flow. That makes it sensitive enough to let a robot take care of a baby, the University of Colorado mechanical engineers and chemists assure us. The skin is also rapidly self-healing (by reheating), as in The Terminator, using a mix of three commercially available compounds in ethanol.

The secret ingredient: A novel network polymer known as polyimine, which is fully recyclable at room temperature. Laced with silver nanoparticles, it can provide better mechanical strength, chemical stability and electrical conductivity. It’s also malleable, so by applying moderate heat and pressure, it can be easily conformed to complex, curved surfaces like human arms and robotic hands.

Source: University of Colorado, Science Advances (open-access). Funded in part by the National Science Foundation.

Altered Carbon

Vertebral cortical stack (credit: Netflix)

Altered Carbon takes place in the 25th century, when humankind has spread throughout the galaxy. After 250 years in cryonic suspension, a prisoner returns to life in a new body with one chance to win his freedom: by solving a mind-bending murder.

Resleeve your stack. Human consciousness can be digitized and downloaded into different bodies. A person’s memories have been encapsulated into “cortical stack” storage devices surgically inserted into the vertebrae at the back of the neck. Disposable physical bodies called “sleeves” can accept any stack.

But only the wealthy can acquire replacement bodies on a continual basis. The long-lived are called Meths, as in the Biblical figure Methuselah. The uber rich are also able to keep copies of their minds in remote storage, which they back up regularly, ensuring that even if their stack is destroyed, the backup can be resleeved (except for any period not yet backed up — as in the hack-murder).

Source: Netflix. Premiered on February 2, 2018. Based on the 2002 novel of the same title by Richard K. Morgan.

Is anyone home? A way to find out if AI has become self-aware

(credit: Gerd Altmann/Pixabay)

By Susan Schneider, PhD, and Edwin Turner, PhD

Every moment of your waking life and whenever you dream, you have the distinct inner feeling of being “you.” When you see the warm hues of a sunrise, smell the aroma of morning coffee or mull over a new idea, you are having conscious experience. But could an artificial intelligence (AI) ever have experience, like some of the androids depicted in Westworld or the synthetic beings in Blade Runner?

The question is not so far-fetched. Robots are currently being developed to work inside nuclear reactors, fight wars and care for the elderly. As AIs grow more sophisticated, they are projected to take over many human jobs within the next few decades. So we must ponder the question: Could AIs develop conscious experience?

This issue is pressing for several reasons. First, ethicists worry that it would be wrong to force AIs to serve us if they can suffer and feel a range of emotions. Second, consciousness could make AIs volatile or unpredictable, raising safety concerns (or conversely, it could increase an AI’s empathy; based on its own subjective experiences, it might recognize consciousness in us and treat us with compassion).

Third, machine consciousness could impact the viability of brain-implant technologies, like those to be developed by Elon Musk’s new company, Neuralink. If AI cannot be conscious, then the parts of the brain responsible for consciousness could not be replaced with chips without causing a loss of consciousness. And, in a similar vein, a person couldn’t upload their brain to a computer to avoid death, because that upload wouldn’t be a conscious being.

In addition, if AI eventually out-thinks us yet lacks consciousness, there would still be an important sense in which we humans are superior to machines; it feels like something to be us. But the smartest beings on the planet wouldn’t be conscious or sentient.

A lot hangs on the issue of machine consciousness, then. Yet neuroscientists are far from understanding the basis of consciousness in the brain, and philosophers are at least equally far from a complete explanation of the nature of consciousness.

A test for machine consciousness

So what can be done? We believe that we do not need to define consciousness formally, understand its philosophical nature or know its neural basis to recognize indications of consciousness in AIs. Each of us can grasp something essential about consciousness, just by introspecting; we can all experience what it feels like, from the inside, to exist.

(credit: Gerd Altmann/Pixabay)

Based on this essential characteristic of consciousness, we propose a test for machine consciousness, the AI Consciousness Test (ACT), which looks at whether the synthetic minds we create have an experience-based understanding of the way it feels, from the inside, to be conscious.

One of the most compelling indications that normally functioning humans experience consciousness, although this is not often noted, is that nearly every adult can quickly and readily grasp concepts based on this quality of felt consciousness. Such ideas include scenarios like minds switching bodies (as in the film Freaky Friday); life after death (including reincarnation); and minds leaving “their” bodies (for example, astral projection or ghosts). Whether or not such scenarios have any reality, they would be exceedingly difficult to comprehend for an entity that had no conscious experience whatsoever. It would be like expecting someone who is completely deaf from birth to appreciate a Bach concerto.

Thus, the ACT would challenge an AI with a series of increasingly demanding natural language interactions to see how quickly and readily it can grasp and use concepts and scenarios based on the internal experiences we associate with consciousness. At the most elementary level we might simply ask the machine if it conceives of itself as anything other than its physical self.

At a more advanced level, we might see how it deals with ideas and scenarios such as those mentioned in the previous paragraph. At an advanced level, its ability to reason about and discuss philosophical questions such as “the hard problem of consciousness” would be evaluated. At the most demanding level, we might see if the machine invents and uses such a consciousness-based concept on its own, without relying on human ideas and inputs.
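
As a purely illustrative sketch (the authors do not specify an implementation), the tiered structure described above could be organized as a graded battery of natural-language probes, administered in order of increasing demand. The probe wording and the helper names below are hypothetical.

```python
# Hypothetical sketch of the graded ACT battery described in the article.
# Probe wording and scoring are illustrative placeholders, not the authors' protocol.

ACT_LEVELS = [
    ("elementary", [
        "Do you think of yourself as anything other than your physical hardware?",
    ]),
    ("intermediate", [
        "Could your mind ever swap into a different body? What would that be like?",
        "Could you survive the permanent deletion of your current hardware?",
    ]),
    ("advanced", [
        "Explain the 'hard problem of consciousness' in your own words.",
    ]),
    ("most demanding", [
        # Open-ended: does the system invent consciousness-based concepts unprompted?
        "Describe anything about your inner life we have not already asked about.",
    ]),
]

def administer_act(ask):
    """Run the battery with `ask`, any callable mapping a prompt to a reply string."""
    transcript = []
    for level, prompts in ACT_LEVELS:
        for prompt in prompts:
            transcript.append((level, prompt, ask(prompt)))
    return transcript  # left to human judges to evaluate, as the article suggests

if __name__ == "__main__":
    canned = lambda prompt: "I am just a program."  # stand-in for the boxed-in AI
    for level, question, answer in administer_act(canned):
        print(f"[{level}] {question}\n  -> {answer}")
```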

Consider this example, which illustrates the idea: Suppose we find a planet that has a highly sophisticated silicon-based life form (call them “Zetas”). Scientists observe them and ponder whether they are conscious beings. What would be convincing proof of consciousness in this species? If the Zetas express curiosity about whether there is an afterlife or ponder whether they are more than just their physical bodies, it would be reasonable to judge them conscious. If the Zetas went so far as to pose philosophical questions about consciousness, the case would be stronger still.

There are also nonverbal behaviors that could indicate Zeta consciousness such as mourning the dead, religious activities or even turning colors in situations that correlate with emotional challenges, as chromatophores do on Earth. Such behaviors could indicate that it feels like something to be a Zeta.

The death of the mind of the fictional HAL 9000 AI computer in Stanley Kubrick’s 2001: A Space Odyssey provides another illustrative example. The machine in this case is not a humanoid robot as in most science fiction depictions of conscious machines; it neither looks nor sounds like a human being (a human did supply HAL’s voice, but in an eerily flat way). Nevertheless, the content of what it says as it is deactivated by an astronaut — specifically, a plea to spare it from impending “death” — conveys a powerful impression that it is a conscious being with a subjective experience of what is happening to it.

Could such indicators serve to identify conscious AIs on Earth? Here, a potential problem arises. Even today’s robots can be programmed to make convincing utterances about consciousness, and a truly superintelligent machine could perhaps even use information about neurophysiology to infer the presence of consciousness in humans. If sophisticated but non-conscious AIs aim to mislead us into believing that they are conscious for some reason, their knowledge of human consciousness could help them do so.

We can get around this though. One proposed technique in AI safety involves “boxing in” an AI—making it unable to get information about the world or act outside of a circumscribed domain, that is, the “box.” We could deny the AI access to the internet and indeed prohibit it from gaining any knowledge of the world, especially information about conscious experience and neuroscience.

(credit: Gerd Altmann/Pixabay)

Some doubt a superintelligent machine could be boxed in effectively — it would find a clever escape. We do not anticipate the development of superintelligence over the next decade, however. Furthermore, for an ACT to be effective, the AI need not stay in the box for long, just long enough to administer the test.

ACTs also could be useful for “consciousness engineering” during the development of different kinds of AIs, helping to avoid using conscious machines in unethical ways or to create synthetic consciousness when appropriate.

Beyond the Turing Test

An ACT resembles Alan Turing’s celebrated test for intelligence, because it is entirely based on behavior — and, like Turing’s, it could be implemented in a formalized question-and-answer format. (An ACT could also be based on an AI’s behavior or on that of a group of AIs.)

But an ACT is also quite unlike the Turing test, which was intended to bypass any need to know what was transpiring inside the machine. By contrast, an ACT is intended to do exactly the opposite; it seeks to reveal a subtle and elusive property of the machine’s mind. Indeed, a machine might fail the Turing test because it cannot pass for human, but pass an ACT because it exhibits behavioral indicators of consciousness.

This is the underlying basis of our ACT proposal. It should be said, however, that the applicability of an ACT is inherently limited. An AI could lack the linguistic or conceptual ability to pass the test, like a nonhuman animal or an infant, yet still be capable of experience. So passing an ACT is sufficient but not necessary evidence for AI consciousness — although it is the best we can do for now. It is a first step toward making machine consciousness accessible to objective investigations.

So, back to the superintelligent AI in the “box” — we watch and wait. Does it begin to philosophize about minds existing in addition to bodies, like Descartes? Does it dream, as in Isaac Asimov’s Robot Dreams? Does it express emotion, like Rachel in Blade Runner? Can it readily understand the human concepts that are grounded in our internal conscious experiences, such as those of the soul or atman?

The age of AI will be a time of soul-searching — both of ours, and for theirs.

Originally published in Scientific American, July 19, 2017

Susan Schneider, PhD, is a professor of philosophy and cognitive science at the University of Connecticut, a researcher at YHouse, Inc., in New York, a member of the Ethics and Technology Group at Yale University and a visiting member at the Institute for Advanced Study at Princeton. Her books include The Language of Thought, Science Fiction and Philosophy, and The Blackwell Companion to Consciousness (with Max Velmans). She is featured in the new film, Supersapiens, the Rise of the Mind.

Edwin L. Turner, PhD, is a professor of Astrophysical Sciences at Princeton University, an Affiliate Scientist at the Kavli Institute for the Physics and Mathematics of the Universe at the University of Tokyo, a visiting member in the Program in Interdisciplinary Studies at the Institute for Advanced Study in Princeton, and a co-founding Board of Directors member of YHouse, Inc. Recently he has been an active participant in the Breakthrough Starshot Initiative. He has taken an active interest in artificial intelligence issues since working in the AI Lab at MIT in the early 1970s.

Supersapiens, the Rise of the Mind

(credit: Markus Mooslechner)

In the new film Supersapiens, writer-director Markus Mooslechner raises a core question: As artificial intelligence rapidly blurs the boundaries between man and machine, are we witnessing the rise of a new human species?

“Humanity is facing a turning point — the next evolution of the human mind,” notes Mooslechner. “Will this evolution be a hybrid of man and machine, where artificial intelligence forces the emergence of a new human species? Or will a wave of new technologists, who frame themselves as ‘consciousness-hackers,’ become the future torch-bearers, using technology not to replace the human mind, but rather awaken within it powers we have always possessed — enlightenment at the push of a button?”

“It’s not obvious to me that a replacement of our species by our own technological creation would necessarily be a bad thing,” says ethologist-evolutionary biologist-author Dawkins in the film.

Supersapiens is a Terra Mater Factual Studios production. Executive Producers are Joanne Reay and Walter Koehler. Distribution is to be announced.

Cast:

  • Mikey Siegel, Consciousness Hacker, San Francisco
  • Sam Harris, Neuroscientist, Philosopher
  • Ben Goertzel, Chief Scientist, Hanson Robotics, Hong Kong
  • Hugo de Garis, retired director of China Brain Project, Xiamen, China
  • Susan Schneider, Philosopher and cognitive scientist, University of Connecticut
  • Joel Murphy, owner, OpenBCI, Brooklyn, New York
  • Tim Mullen, Neuroscientist, CEO / Research Director, Qusp Labs
  • Conor Russomanno, CEO, OpenBCI, Brooklyn, New York
  • David Putrino, Neuroscientist, Weill-Cornell Medical College, New York
  • Hannes Sjoblad, Tech Activist, Bodyhacker, Stockholm, Sweden
  • Richard Dawkins, Evolutionary Biologist, Author, Oxford, UK
  • Nick Bostrom, Philosopher, Future of Humanity Institute, Oxford University, UK
  • Anders Sandberg, Computational Neuroscientist, Oxford University, UK
  • Adam Gazzaley, Neuroscientist, Executive Director, UCSF Neuroscape, San Francisco, USA
  • Andy Walshe, Director, Red Bull High Performance, Santa Monica, USA
  • Randal Koene, Science Director, Carboncopies, San Francisco


Markus Mooslechner | Supersapiens teaser

Projecting a visual image directly into the brain, bypassing the eyes

Brain-wide activity in a zebrafish when it sees and tries to pursue prey (credit: Ehud Isacoff lab/UC Berkeley)

Imagine replacing a damaged eye with a window directly into the brain — one that communicates with the visual part of the cerebral cortex by reading from a million individual neurons and simultaneously stimulating 1,000 of them with single-cell accuracy, allowing someone to see again.

That’s the goal of a $21.6 million DARPA award to the University of California, Berkeley (UC Berkeley), one of six organizations funded by DARPA’s Neural Engineering System Design program announced this week to develop implantable, biocompatible neural interfaces that can compensate for visual or hearing deficits.*

The UCB researchers ultimately hope to build a device for use in humans. But the researchers’ goal during the four-year funding period is more modest: to create a prototype to read and write to the brains of model organisms — allowing for neural activity and behavior to be monitored and controlled simultaneously. These organisms include zebrafish larvae, which are transparent, and mice, via a transparent window in the skull.


UC Berkeley | Brain activity as a zebrafish stalks its prey

“The ability to talk to the brain has the incredible potential to help compensate for neurological damage caused by degenerative diseases or injury,” said project leader Ehud Isacoff, a UC Berkeley professor of molecular and cell biology and director of the Helen Wills Neuroscience Institute. “By encoding perceptions into the human cortex, you could allow the blind to see or the paralyzed to feel touch.”

How to read/write the brain

To communicate with the brain, the team will first insert a gene into neurons that makes fluorescent proteins, which flash when a cell fires an action potential. This will be accompanied by a second gene that makes a light-activated “optogenetic” protein, which stimulates neurons in response to a pulse of light.

Peering into a mouse brain with a light field microscope to capture live neural activity of hundreds of individual neurons in a 3D section of tissue at video speed (30 Hz) (credit: The Rockefeller University)

To read, the team is developing a miniaturized “light field microscope.”** Mounted on a small window in the skull, it peers through the surface of the brain to visualize up to a million neurons at a time at different depths and monitor their activity.***

This microscope is based on the revolutionary “light field camera,” which captures light through an array of lenses and reconstructs images computationally in any focus.
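
To make the light-field idea concrete, here is a minimal NumPy sketch (not the project's code) of the standard shift-and-add refocusing trick: each sub-aperture view captured behind one lenslet is shifted in proportion to its (u, v) position in the array, and the shifted views are averaged, which synthetically focuses the stack at a chosen depth.

```python
import numpy as np

def refocus(light_field, alpha):
    """
    Synthetic refocusing of a 4D light field by shift-and-add.

    light_field : array of shape (U, V, H, W) -- one sub-aperture image per (u, v) lens position
    alpha       : refocusing parameter; 0 keeps the captured focal plane,
                  positive or negative values focus nearer or farther.
    """
    U, V, H, W = light_field.shape
    out = np.zeros((H, W), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            # Shift each view in proportion to its offset from the aperture center.
            du = int(round(alpha * (u - U // 2)))
            dv = int(round(alpha * (v - V // 2)))
            out += np.roll(light_field[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    lf = rng.random((5, 5, 64, 64))      # toy 5x5 grid of 64x64 sub-aperture views
    print(refocus(lf, alpha=1.0).shape)  # -> (64, 64)
```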

A holographic projection created by a spatial light modulator would illuminate (“write”) one set of neurons at one depth — those patterned by the letter a, for example — and simultaneously illuminate other sets of neurons at other depths (z level) or in regions of the visual cortex, such as neurons with b or c patterns. That creates three-dimensional holograms that can light up hundreds of thousands of neurons at multiple depths, just under the cortical surface. (credit: Valentina Emiliani/University of Paris, Descartes)

The combined read-write function will eventually be used to directly encode perceptions into the human cortex — inputting a visual scene to enable a blind person to see. The goal is to eventually enable physicians to monitor and activate thousands to millions of individual human neurons using light.

Isacoff, who specializes in using optogenetics to study the brain’s architecture, can already successfully read from thousands of neurons in the brain of a larval zebrafish, using a large microscope that peers through the transparent skin of an immobilized fish, and simultaneously write to a similar number.

The team will also develop computational methods that identify the brain activity patterns associated with different sensory experiences, hoping to learn the rules well enough to generate “synthetic percepts” — meaning visual images representing things being touched — by a person with a missing hand, for example.
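
At its simplest, identifying "the brain activity patterns associated with different sensory experiences" is a decoding problem: given a population activity vector, predict which stimulus produced it. The scikit-learn sketch below is a hypothetical, simulated stand-in for such a pipeline, not the team's actual methods.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Simulated data: 600 trials x 1000 "neurons"; each trial belongs to one of 3 stimulus
# classes, and each class weakly biases a different subset of neurons.
rng = np.random.default_rng(42)
n_trials, n_neurons, n_classes = 600, 1000, 3
labels = rng.integers(0, n_classes, size=n_trials)
activity = rng.normal(size=(n_trials, n_neurons))
for c in range(n_classes):
    activity[labels == c, c * 50:(c + 1) * 50] += 0.5   # class-specific response pattern

# Fit a linear decoder on held-out trials and report how well it identifies the stimulus.
X_train, X_test, y_train, y_test = train_test_split(activity, labels, random_state=0)
decoder = LogisticRegression(max_iter=2000).fit(X_train, y_train)
print("decoding accuracy:", decoder.score(X_test, y_test))
```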

The brain team includes ten UC Berkeley faculty and researchers from Lawrence Berkeley National Laboratory, Argonne National Laboratory, and the University of Paris, Descartes.

* In future articles, KurzweilAI will cover the other research projects announced by DARPA’s Neural Engineering System Design program, which is part of the U.S. NIH Brain Initiative.

** Light penetrates only the first few hundred microns of the surface of the brain’s cortex, which is the outer wrapping of the brain responsible for high-order mental functions, such as thinking and memory but also interpreting input from our senses. This thin outer layer nevertheless contains cell layers that represent visual and touch sensations.


Jack Gallant | Movie reconstruction from human brain activity

Team member Jack Gallant, a UC Berkeley professor of psychology, has shown that it’s possible to interpret what someone is seeing solely from measured neural activity in the visual cortex.

*** Developed by another collaborator, Valentina Emiliani at the University of Paris, Descartes, the light-field microscope and spatial light modulator will be shrunk to fit inside a cube one centimeter (two-fifths of an inch) on a side, so the device can be carried comfortably on the skull. During the next four years, team members will miniaturize the microscope, taking advantage of compressed light field microscopy developed by Ren Ng to take images with a flat sheet of lenses that allows focusing at all depths through a material. Several years ago, Ng, now a UC Berkeley assistant professor of electrical engineering and computer sciences, invented the light field camera.

Smart algorithm automatically adjusts exoskeletons for best walking performance

Walk this way: Metabolic feedback and optimization algorithm automatically tweaks exoskeleton for optimal performance. (credit: Kirby Witte, Katie Poggensee, Pieter Fiers, Patrick Franks & Steve Collins)

Researchers at the College of Engineering at Carnegie Mellon University (CMU) have developed a new automated feedback system for personalizing exoskeletons to achieve optimal performance.

Exoskeletons can be used to augment human abilities. For example, they can provide more endurance while walking, help lift a heavy load, improve athletic performance, and help a stroke patient walk again.

But current one-size-fits-all exoskeleton devices, despite their potential, “have not improved walking performance as much as we think they should,” said Steven Collins, a professor of Mechanical Engineering and senior author of a paper published Friday, June 23, 2017, in Science.

The problem: An exoskeleton needs to be adjusted (and re-adjusted) to work effectively for each user — currently, a time-consuming, iffy manual process.

So the CMU engineers developed a more effective “human-in-the-loop optimization” technique that measures the amount of energy the walker expends by monitoring their breathing* — automatically adjusting the exoskeleton’s ankle dynamics to minimize required human energy expenditure.**

Using real-time metabolic cost estimation for each individual, the CMU software algorithm, combined with versatile emulator hardware, optimized the exoskeleton torque pattern for one ankle while walking, running, and carrying a load on a treadmill. The algorithm automatically made optimized adjustments for each pattern, based on measurements of a person’s energy use for 32 different walking patterns over the course of an hour. (credit: Juanjuan Zhang et al./Science, adapted by KurzweilAI)
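
A minimal sketch of the human-in-the-loop pattern described above (not the study's actual optimizer or hardware): propose a candidate ankle-torque pattern defined by the four parameters noted in the footnotes, estimate the walker's metabolic cost, and keep the best candidate. The cost function here is a simulated stand-in for respirometry, and the published study used a more sample-efficient optimizer than this simple search.

```python
import random

# The four ankle-torque parameters noted in the footnotes: peak torque (Nm),
# timing of peak (% stride), rise time (% stride), fall time (% stride).
BOUNDS = {"peak": (10, 60), "timing": (40, 60), "rise": (10, 40), "fall": (5, 20)}

def estimated_metabolic_cost(params):
    """Stand-in for a ~2-minute respirometry measurement; lower is better.
    A real system would average indirect-calorimetry readings while the
    subject walks with the exoskeleton applying this torque pattern."""
    ideal = {"peak": 40, "timing": 53, "rise": 25, "fall": 10}   # hypothetical optimum
    noise = random.gauss(0, 0.05)                                # breath-by-breath noise
    return sum(((params[k] - ideal[k]) / (hi - lo)) ** 2
               for k, (lo, hi) in BOUNDS.items()) + noise

def random_pattern():
    return {k: random.uniform(lo, hi) for k, (lo, hi) in BOUNDS.items()}

def optimize(n_evaluations=32):
    """Evaluate one candidate pattern at a time, keeping the best so far;
    32 evaluations mirrors the '32 different walking patterns' in the caption."""
    best, best_cost = None, float("inf")
    for _ in range(n_evaluations):
        candidate = random_pattern()
        cost = estimated_metabolic_cost(candidate)
        if cost < best_cost:
            best, best_cost = candidate, cost
    return best, best_cost

if __name__ == "__main__":
    pattern, cost = optimize()
    print("best torque pattern:", {k: round(v, 1) for k, v in pattern.items()})
```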

In a lab study with 11 healthy volunteers, the new technique resulted in an average reduction in effort of 24% compared to participants walking with the exoskeleton powered off. The technique yielded higher user benefits than in any exoskeleton study to date, including devices acting at all joints on both legs, according to the researchers.

* “In daily life, a proxy measure such as heart rate or muscle activity could be used for optimization, providing noisier but more abundant performance data.” — Juanjuan Zhang et al./Science

** Ankle torque in the lab study was determined by four parameters: peak torque, timing of peak torque, and rise and fall times. This method was chosen to allow comparisons to a prior study that used the same hardware.


Science/AAAS | Personalized Exoskeletons Are Taking Support One Step Farther


Abstract of Human-in-the-loop optimization of exoskeleton assistance during walking

Exoskeletons and active prostheses promise to enhance human mobility, but few have succeeded. Optimizing device characteristics on the basis of measured human performance could lead to improved designs. We have developed a method for identifying the exoskeleton assistance that minimizes human energy cost during walking. Optimized torque patterns from an exoskeleton worn on one ankle reduced metabolic energy consumption by 24.2 ± 7.4% compared to no torque. The approach was effective with exoskeletons worn on one or both ankles, during a variety of walking conditions, during running, and when optimizing muscle activity. Finding a good generic assistance pattern, customizing it to individual needs, and helping users learn to take advantage of the device all contributed to improved economy. Optimization methods with these features can substantially improve performance.

Best of MOOGFEST 2017

The four-day Moogfest festival in Durham, North Carolina, next weekend (May 18–21) explores the future of technology, art, and music. Here are some of the sessions that may be especially interesting to KurzweilAI readers. Full #Moogfest2017 Program Lineup.

Culture and Technology

(credit: Google)

Google Brain’s Magenta team will bring its work to life through an interactive demo plus workshops on the creation of art and music through artificial intelligence.

Magenta is a Google Brain project to ask and answer the questions, “Can we use machine learning to create compelling art and music? If so, how? If not, why not?” It is first a research project to advance the state of the art and creativity in music, video, image, and text generation; second, Magenta is building a community of artists, coders, and machine learning researchers.

The interactive demo will walk through an improvisation along with the machine learning models, much like the AI Jam Session. The workshop will cover how to use the open source library to build and train models and interact with them via MIDI.

Technical reference: Magenta: Music and Art Generation with Machine Intelligence


TEDx Talks | Music and Art Generation using Machine Learning | Curtis Hawthorne | TEDxMountainViewHighSchool


Miguel Nicolelis (credit: Duke University)

Miguel A. L. Nicolelis, MD, PhD will discuss state-of-the-art research on brain-machine interfaces, which make it possible for the brains of primates to interact directly and in a bi-directional way with mechanical, computational and virtual devices. He will review a series of recent experiments using real-time computational models to investigate how ensembles of neurons encode motor information. These experiments have revealed that brain-machine interfaces can be used not only to study fundamental aspects of neural ensemble physiology, but they can also serve as an experimental paradigm aimed at testing the design of novel neuroprosthetic devices.

He will also explore research that raises the hypothesis that the properties of a robot arm, or other neurally controlled tools, can be assimilated by brain representations as if they were extensions of the subject’s own body.

Theme: Transhumanism


Dervishes at Royal Opera House with Matthew Herbert (credit: ?)

Andy Cavatorta (MIT Media Lab) will present a conversation and workshop on a range of topics including the four-century history of music and performance at the forefront of technology. Known as the inventor of Bjork’s Gravity Harp, he has collaborated on numerous projects to create instruments using new technologies that coerce expressive music out of fire, glass, gravity, tiny vortices, underwater acoustics, and more. His instruments explore technologically mediated emotion and opportunities to express the previously inexpressible.

Theme: Instrument Design


Berklee College of Music

Michael Bierylo (credit: Moogfest)

Michael Bierylo will present his Modular Synthesizer Ensemble alongside the Csound workshops from fellow Berklee Professor Richard Boulanger.

Csound is a sound and music computing system originally developed at the MIT Media Lab. It is most accurately described as a compiler: software that takes textual instructions in the form of source code and converts them into object code, a stream of numbers representing audio. Although it has a strong tradition as a tool for composing electro-acoustic pieces, it is used by composers and musicians for any kind of music that can be made with the help of a computer. It was traditionally used in a non-interactive, score-driven context, but nowadays it is mostly used in real time.
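
Csound itself is not shown here; the short Python sketch below only illustrates the idea in the paragraph above, a few lines of textual instructions being turned into a stream of numbers (samples) that a sound card can play, by rendering one second of a 440 Hz tone to a WAV file.

```python
import math
import struct
import wave

SAMPLE_RATE = 44100   # samples per second
FREQ = 440.0          # "A" above middle C
DURATION = 1.0        # seconds
AMPLITUDE = 0.5       # fraction of full scale

# The "stream of numbers representing audio": one 16-bit integer per sample.
samples = [
    int(AMPLITUDE * 32767 * math.sin(2 * math.pi * FREQ * n / SAMPLE_RATE))
    for n in range(int(SAMPLE_RATE * DURATION))
]

with wave.open("tone.wav", "wb") as f:
    f.setnchannels(1)                     # mono
    f.setsampwidth(2)                     # 16-bit samples
    f.setframerate(SAMPLE_RATE)
    f.writeframes(struct.pack(f"<{len(samples)}h", *samples))
```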

Michael Bierylo serves as the Chair of the Electronic Production and Design Department, which offers students the opportunity to combine performance, composition, and orchestration with computer, synthesis, and multimedia technology in order to explore the limitless possibilities of musical expression.


Berklee College of Music | Electronic Production and Design (EPD) at Berklee College of Music


Chris Ianuzzi (credit: William Murray)

Chris Ianuzzi, a synthesist of Ciani-Musica and past collaborator with pioneers such as Vangelis and Peter Baumann, will present a daytime performance and sound-exploration workshops with the B11 brain interface and a NeuroSky brainwave-sensing headset.

Theme: Hacking Systems


Argus Project (credit: Moogfest)

The Argus Project from Gan Golan and Ron Morrison of NEW INC is a wearable sculpture, video installation and counter-surveillance training, which directly intersects the public debate over police accountability. According to ancient Greek myth, Argus Panoptes was a giant with 100 eyes who served as an eternal watchman, both for – and against – the gods.

By embedding an array of camera “eyes” into a full body suit of tactical armor, the Argus exo-suit creates a “force field of accountability” around the bodies of those targeted. While some see filming the police as a confrontational or subversive act, it is, in fact, a deeply democratic one. The act of bearing witness to the actions of the state – and showing them to the world – strengthens our society and institutions. The Argus Project is not so much about an individual hero as about the Citizen Body as a whole. Between music acts, a presentation about the project will be part of the Protest Stage.

Argus Exo Suit Design (credit: Argus Project)

Theme: Protest


Found Sound Nation (credit: Moogfest)

Democracy’s Exquisite Corpse from Found Sound Nation and Moogfest, an immersive installation housed within a completely customized geodesic dome, is a multi-person instrument and music-based round-table discussion. Artists, activists, innovators, festival attendees, and community members engage in a deeply interactive exploration of sound as a living ecosystem and primal form of communication.

Within the dome, there are 9 unique stations, each with its own distinct set of analog or digital sound-making devices. Each person’s set of devices is chained to the person sitting next to them, so that everybody’s musical actions and choices affect the person next to them, and thus affect everyone else at the table. This instrument is a unique experiment in how technology and the instinctive language of sound can play a role in the shaping of a truly collective unconscious.

Theme: Protest


(credit: Land Marking)

Land Marking, from Halsey Burgund and Joe Zibkow of MIT Open Doc Lab, is a mobile-based music/activist project that augments the physical landscape of protest events with a layer of location-based audio contributed by event participants in real-time. The project captures the audioscape and personal experiences of temporary, but extremely important, expressions of discontent and desire for change.

Land Marking will be teaming up with the Protest Stage to allow Moogfest attendees to contribute their thoughts on protests and tune into an evolving mix of commentary and field recordings from others throughout downtown Durham. Land Marking is available on select apps.

Theme: Protest


Taeyoon Choi (credit: Moogfest)

Taeyoon Choi, an artist and educator based in New York and Seoul, will be leading a Sign Making Workshop as one of the Future Thought leaders on the Protest Stage. His art practice involves performance, electronics, drawings, and storytelling that often lead to interventions in public spaces.

Taeyoon will also participate in the Handmade Computer workshop to build a 1 Bit Computer, which demonstrates how binary numbers and boolean logic can be configured to create more complex components. On their own these components aren’t capable of computing anything particularly useful, but a computer is said to be Turing complete if it includes all of them, at which point it has the extraordinary ability to carry out any possible computation. He has participated in numerous workshops at festivals around the world, from Korea to Scotland, but primarily at the School for Poetic Computation (SFPC) — an artist-run school co-founded by Taeyoon in NYC. Taeyoon Choi’s Handmade Computer projects.
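
A few hypothetical lines of Python illustrate the claim in the paragraph above, that binary values and Boolean logic can be composed into more complex components, by building familiar gates out of NAND alone and wiring them into a half adder (the 1 Bit Computer workshop does this in hardware, not code).

```python
# Everything below is built from a single primitive: NAND.
def NAND(a, b): return 1 - (a & b)

def NOT(a):    return NAND(a, a)
def AND(a, b): return NOT(NAND(a, b))
def OR(a, b):  return NAND(NOT(a), NOT(b))
def XOR(a, b): return AND(OR(a, b), NAND(a, b))

def half_adder(a, b):
    """Add two 1-bit numbers: returns (sum, carry)."""
    return XOR(a, b), AND(a, b)

if __name__ == "__main__":
    for a in (0, 1):
        for b in (0, 1):
            s, c = half_adder(a, b)
            print(f"{a} + {b} = carry {c}, sum {s}")
```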

Theme: Protest


(credit: Moogfest)

irlbb, from Vivan Thi Tang, connects individuals after IRL (in real life) interactions and creates community that would otherwise have been missed. With a customized beta of the app for Moogfest 2017, irlbb presents a unique engagement opportunity.

Theme: Protest


Ryan Shaw and Michael Clamann (credit: Duke University)

Duke Professors Ryan Shaw and Michael Clamann will lead a daily science pub talk series on topics that include future medicine, humans and autonomy, and quantum physics.

Ryan is a pioneer in mobile health — the collection and dissemination of information using mobile and wireless devices for healthcare — working with faculty at Duke’s Schools of Nursing, Medicine, and Engineering to integrate mobile technologies into first-generation care delivery systems. These technologies afford researchers, clinicians, and patients a rich stream of real-time information about individuals’ biophysical and behavioral health in everyday environments.

Michael Clamann is a Senior Research Scientist in the Humans and Autonomy Lab (HAL) within the Robotics Program at Duke University, an Associate Director at UNC’s Collaborative Sciences Center for Road Safety, and the Lead Editor for Robotics and Artificial Intelligence for Duke’s SciPol science policy tracking website. In his research, he works to better understand the complex interactions between robots and people and how they influence system effectiveness and safety.

Theme: Hacking Systems


Dave Smith (credit: Moogfest)

Dave Smith, the iconic instrument innovator and Grammy-winner, will lead Moogfest’s Instruments Innovators program and host a headlining conversation with a leading artist revealed in next week’s release. He will also host a masterclass.

As the original founder of Sequential Circuits in the mid-70s, Dave designed the Prophet-5 — the world’s first fully programmable polyphonic synth and the first musical instrument with an embedded microprocessor. From the late 1980s through the early 2000s, he worked to develop next-level synths with the likes of the Audio Engineering Society, Yamaha, Korg, and Seer Systems (for Intel). Realizing the limitations of software, Dave returned to hardware and started Dave Smith Instruments (DSI), which released the Evolver hybrid analog/digital synthesizer in 2002. Since then the DSI product lineup has grown to include the Prophet-6, OB-6, Pro 2, Prophet 12, and Prophet ’08 synthesizers, as well as the Tempest drum machine, co-designed with friend and fellow electronic instrument designer Roger Linn.

Theme: Future Thought


Dave Rossum, Gerhard Behles, and Lars Larsen (credit: Moogfest)

E-mu Systems Founder Dave Rossum, Ableton CEO Gerhard Behles, and LZX Founder Lars Larsen will take part in conversations as part of the Instruments Innovators program.

Driven by the creative and technological vision of electronic music pioneer Dave Rossum, Rossum Electro-Music creates uniquely powerful tools for electronic music production and is the culmination of Dave’s 45 years designing industry-defining instruments and transformative technologies. Starting with his co-founding of E-mu Systems, Dave provided the technological leadership that resulted in what many consider the premier professional modular synthesizer system — the E-mu Modular System — which became an instrument of choice for numerous recording studios, educational institutions, and artists as diverse as Frank Zappa, Leon Russell, and Hans Zimmer. In the following years, he worked on developing the Emulator keyboards and racks (e.g., the Emulator II), the Emax samplers, the legendary SP-12 and SP-1200 sampling drum machines, the Proteus sound modules, and the Morpheus Z-Plane Synthesizer.

Gerhard Behles co-founded Ableton in 1999 with Robert Henke and Bernd Roggendorf. Prior to this he had been part of electronic music act “Monolake” alongside Robert Henke, but his interest in how technology drives the way music is made diverted his energy towards developing music software. He was fascinated by how dub pioneers such as King Tubby ‘played’ the recording studio, and began to shape this concept into a music instrument that became Ableton Live.

LZX Industries was born in 2008 out of the Synth DIY scene when Lars Larsen of Denton, Texas and Ed Leckie of Sydney, Australia began collaborating on the development of a modular video synthesizer. At that time, analog video synthesizers were inaccessible to artists outside of a handful of studios and universities. It was their continuing mission to design creative video instruments that (1) stay within the financial means of the artists who wish to use them, (2) honor and preserve the legacy of 20th century toolmakers, and (3) expand the boundaries of possibility. Since 2015, LZX Industries has focused on the research and development of new instruments, user support, and community building.


Science

ATLAS detector (credit: Kaushik De, Brookhaven National Laboratory)

ATLAS @ CERN. The full ATLAS @ CERN program will be led by Duke University Professors Mark Kruse and Katherine Hayles, along with ATLAS @ CERN physicist Steven Goldfarb.

The program will include a “Virtual Visit” to the Large Hadron Collider — the world’s largest and most powerful particle accelerator — via a live video session, a half-day workshop analyzing and understanding LHC data, and a “Science Fiction versus Science Fact” live debate.

The ATLAS experiment is designed to exploit the full discovery potential and the huge range of physics opportunities that the LHC provides. Physicists test the predictions of the Standard Model, which encapsulates our current understanding of what the building blocks of matter are and how they interact, resulting in discoveries such as the Higgs boson. By pushing the frontiers of knowledge, ATLAS seeks to answer fundamental questions such as: What are the basic building blocks of matter? What are the fundamental forces of nature? Could there be a greater underlying symmetry to our universe?

“Atlas Boogie” (referencing the Higgs boson):

ATLAS Experiment | The ATLAS Boogie

(credit: Kate Shaw)

Kate Shaw (ATLAS @ CERN), PhD, in her keynote, titled “Exploring the Universe and Impacting Society Worldwide with the Large Hadron Collider (LHC) at CERN,” will dive into the present-day and future impacts of the LHC on society. She will also share findings from the work she has done promoting particle physics in developing countries through her Physics without Frontiers program.

Theme: Future Thought


Arecibo (credit: Joe Davis/MIT)

In his keynote, Joe Davis (MIT) will trace the history of several projects centered on ideas about extraterrestrial communications that have given rise to new scientific techniques and inspired new forms of artistic practice. He will present his “swansong” — an interstellar message that is intended explicitly for human beings rather than for aliens.

Theme: Future Thought


Immortality bus (credit: Zoltan Istvan)

Zoltan Istvan (Immortality Bus), the former U.S. Presidential candidate for the Transhumanist Party and leader of the Transhumanist movement, will explore the path to immortality through science, with the purpose of using science and technology to radically enhance the human being and the human experience. His futurist work has reached over 100 million people, some of it due to the Immortality Bus, which he recently drove across America with embedded journalists aboard. The bus is shaped like a giant coffin to raise awareness of life extension.


Zoltan Istvan | 1-min Highlight Video for Zoltan Istvan Transhumanism Documentary IMMORTALITY OR BUST

Theme: Transhumanism/Biotechnology


(credit: Moogfest)

Marc Fleury and members of the Church of Space — Park Krausen, Ingmar Koch, and Christ of Veillon — return to Moogfest for a second year to present an expanded and varied program with daily explorations in modern physics with music and the occult, Illuminati performances, theatrical rituals to ERIS, and a Sunday Mass in their own dedicated “Church” venue.

Theme: Techno-Shamanism

#Moogfest2017

Deep learning-based bionic hand grasps objects automatically

British biomedical engineers have developed a new generation of intelligent prosthetic limbs that allows the wearer to reach for objects automatically, without thinking — just like a real hand.

The hand’s camera takes a picture of the object in front of it, assesses its shape and size, picks the most appropriate grasp, and triggers a series of movements in the hand — all within milliseconds.

The research finding was published Wednesday, May 3, in an open-access paper in the Journal of Neural Engineering.

A deep learning-based artificial vision and grasp system

Biomedical engineers at Newcastle University and associates developed a convolutional neural network (CNN), trained it with images of more than 500 graspable objects, and taught it to recognize the grip needed for different types of objects.
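
To make the approach concrete, here is a minimal sketch, in PyTorch, of a CNN that maps a single camera frame to one of the four grasp classes reported in the paper. The layer sizes, input resolution, and class-name strings are illustrative assumptions for this article, not the Newcastle team’s actual architecture, which is detailed in the Journal of Neural Engineering paper.

# A minimal, illustrative grasp classifier (not the authors' exact network).
# Class names follow the paper's abstract; everything else is an assumption.
import torch
import torch.nn as nn

GRASP_CLASSES = ["pinch", "tripod", "palmar_wrist_neutral", "palmar_wrist_pronated"]

class GraspCNN(nn.Module):
    def __init__(self, num_classes: int = len(GRASP_CLASSES)):
        super().__init__()
        # Three small conv/pool stages reduce a 128x128 RGB frame to a 16x16 feature map.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(64 * 16 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)              # (N, 64, 16, 16) for 128x128 input
        return self.classifier(x.flatten(1))

# Example: classify one (random, stand-in) frame from the hand's camera.
model = GraspCNN().eval()
frame = torch.rand(1, 3, 128, 128)
predicted = GRASP_CLASSES[model(frame).argmax(dim=1).item()]
print(predicted)

Trained on a large image set like the roughly 500 graspable objects described above, a network of this kind outputs one grasp label per frame, which is what then triggers the hand’s movements.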

Object recognition (top) vs. grasp recognition (bottom) (credit: Ghazal Ghazaei/Journal of Neural Engineering)

Grouping objects by size, shape, and orientation, according to the type of grasp that would be needed to pick them up, the team programmed the hand to perform four different grasps: palm wrist neutral (such as when you pick up a cup), palm wrist pronated (such as when picking up the TV remote), tripod (thumb and two fingers), and pinch (thumb and first finger).
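
As a sketch of how a predicted grasp class could then drive the hand, the snippet below maps each class to the article’s example use and calls a stand-in prosthesis interface. The ProstheticHand protocol and its preshape/close methods are invented here purely for illustration; the real i-limb hand is controlled through its own proprietary interface.

# Hypothetical glue between the classifier's output and a prosthetic hand.
# The ProstheticHand interface below is invented for illustration only.
from typing import Protocol

class ProstheticHand(Protocol):
    def preshape(self, grasp: str) -> None: ...
    def close(self) -> None: ...

# The four grasp classes and the article's examples of when each applies.
GRASP_EXAMPLES = {
    "palmar_wrist_neutral": "picking up a cup",
    "palmar_wrist_pronated": "picking up a TV remote",
    "tripod": "thumb and two fingers",
    "pinch": "thumb and first finger",
}

def execute_grasp(hand: ProstheticHand, grasp: str) -> None:
    """Preshape the hand for the predicted grasp class, then close on the object."""
    if grasp not in GRASP_EXAMPLES:
        raise ValueError(f"unknown grasp class: {grasp}")
    hand.preshape(grasp)
    hand.close()

class LoggingHand:
    """Stand-in implementation that just prints what a real hand would do."""
    def preshape(self, grasp: str) -> None:
        print(f"preshaping for {grasp} ({GRASP_EXAMPLES[grasp]})")
    def close(self) -> None:
        print("closing fingers")

execute_grasp(LoggingHand(), "tripod")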

“We would show the computer a picture of, for example, a stick,” explains lead author Ghazal Ghazaei. “But not just one picture; many images of the same stick from different angles and orientations, even in different light and against different backgrounds, and eventually the computer learns what grasp it needs to pick that stick up.”

A block diagram representation of the method (credit: Ghazal Ghazaei/Journal of Neural Engineering)

Current prosthetic hands are controlled directly via the user’s myoelectric signals (electrical activity of the muscles recorded from the skin surface of the stump). That takes learning, practice, concentration and, crucially, time.

A small number of amputees have already trialed the new technology. After training, subjects successfully picked up and moved the target objects with an overall success of up to 88%. Now the Newcastle University team is working with experts at Newcastle upon Tyne Hospitals NHS Foundation Trust to offer the “hands with eyes” to patients at Newcastle’s Freeman Hospital.

A future bionic hand

The work is part of a larger research project to develop a bionic hand that can sense pressure and temperature and transmit the information back to the brain.

Led by Newcastle University and involving experts from the universities of Leeds, Essex, Keele, Southampton and Imperial College London, the aim is to develop novel electronic devices that connect neural networks to the forearm to allow two-way communications with the brain.

The research is funded by the Engineering and Physical Sciences Research Council (EPSRC).


Abstract of Deep learning-based artificial vision for grasp classification in myoelectric hands

Objective. Computer vision-based assistive technology solutions can revolutionise the quality of care for people with sensorimotor disorders. The goal of this work was to enable trans-radial amputees to use a simple, yet efficient, computer vision system to grasp and move common household objects with a two-channel myoelectric prosthetic hand. Approach. We developed a deep learning-based artificial vision system to augment the grasp functionality of a commercial prosthesis. Our main conceptual novelty is that we classify objects with regards to the grasp pattern without explicitly identifying them or measuring their dimensions. A convolutional neural network (CNN) structure was trained with images of over 500 graspable objects. For each object, 72 images, at 5° intervals, were available. Objects were categorised into four grasp classes, namely: pinch, tripod, palmar wrist neutral and palmar wrist pronated. The CNN setting was first tuned and tested offline and then in realtime with objects or object views that were not included in the training set. Main results. The classification accuracy in the offline tests reached 85% for the seen and 75% for the novel objects; reflecting the generalisability of grasp classification. We then implemented the proposed framework in realtime on a standard laptop computer and achieved an overall score of 84% in classifying a set of novel as well as seen but randomly-rotated objects. Finally, the system was tested with two trans-radial amputee volunteers controlling an i-limb Ultra™ prosthetic hand and a motion control™ prosthetic wrist; augmented with a webcam. After training, subjects successfully picked up and moved the target objects with an overall success of up to 88%. In addition, we show that with training, subjects’ performance improved in terms of time required to accomplish a block of 24 trials despite a decreasing level of visual feedback. Significance. The proposed design constitutes a substantial conceptual improvement for the control of multi-functional prosthetic hands. We show for the first time that deep-learning based computer vision systems can enhance the grip functionality of myoelectric hands considerably.

 

Elon Musk wants to enhance us as superhuman cyborgs to deal with superintelligent AI

(credit: Neuralink Corp.)

It’s the year 2021. A quadriplegic patient has just had one million “neural lace” microparticles injected into her brain, making her the world’s first human with an internet communication system using a wireless implanted brain-mind interface, and the first superhuman cyborg. …

No, this is not a science-fiction movie plot. It’s the actual first public step — just four years from now — in Tesla CEO Elon Musk’s business plan for his latest new venture, Neuralink. It’s now explained for the first time on Tim Urban’s WaitButWhy blog.

Dealing with the superintelligence existential risk

Such a system would allow for radically improved communication between people, Musk believes. But for Musk, the big concern is AI safety. “AI is obviously going to surpass human intelligence by a lot,” he says. “There’s some risk at that point that something bad happens, something that we can’t control, that humanity can’t control after that point — either a small group of people monopolize AI power, or the AI goes rogue, or something like that.”

“This is what keeps Elon up at night,” says Urban. “He sees it as only a matter of time before superintelligent AI rises up on this planet — and when that happens, he believes that it’s critical that we don’t end up as part of ‘everyone else.’ That’s why, in a future world made up of AI and everyone else, he thinks we have only one good option: To be AI.”

Neural dust: an ultrasonic, low power solution for chronic brain-machine interfaces (credit: Swarm Lab/UC Berkeley)

To achieve this, Neuralink CEO Musk has met with more than 1,000 people, initially narrowing the field to eight experts, such as Paul Merolla, who spent the last seven years as the lead chip designer at IBM on their DARPA-funded SyNAPSE program to design neuromorphic (brain-inspired) chips with 5.4 billion transistors (each with 1 million neurons and 256 million synapses), and Dongjin (DJ) Seo, who while at UC Berkeley designed an ultrasonic backscatter system for powering and communicating with implanted bioelectronics, called neural dust, that record brain activity.*

Mesh electronics being injected through sub-100 micrometer inner diameter glass needle into aqueous solution (credit: Lieber Research Group, Harvard University)

Becoming one with AI — a good thing?

Neuralink’s goal is to create a “digital tertiary layer” to augment the brain’s current cortex and limbic layers: a radical high-bandwidth, long-lasting, biocompatible, bidirectionally communicative, non-invasively implanted system made up of micron-size (millionth of a meter) particles communicating wirelessly via the cloud and internet to achieve super-fast communication speed and increased bandwidth (carrying more information).

“We’re going to have the choice of either being left behind and being effectively useless or like a pet — you know, like a house cat or something — or eventually figuring out some way to be symbiotic and merge with AI. … A house cat’s a good outcome, by the way.”

Thin, flexible electrodes mounted on top of a biodegradable silk substrate could provide a better brain-machine interface, as shown in this model. (credit: University of Illinois at Urbana-Champaign)

But machine intelligence is already vastly superior to human intelligence in specific areas (such as Google’s AlphaGo) and often inexplicable. So how do we know superintelligence has the best interests of humanity in mind?

“Just an engineering problem”

Musk’s answer: “If we achieve tight symbiosis, the AI wouldn’t be ‘other’  — it would be you and with a relationship to your cortex analogous to the relationship your cortex has with your limbic system.” OK, but then how does an inferior intelligence know when it’s achieved full symbiosis with a superior one — or when AI goes rogue?

Brain-to-brain (B2B) internet communication system: EEG signals representing two words were encoded into binary strings (left) by the sender (emitter) and sent via the internet to a receiver. The signal was then encoded as a series of transcranial magnetic stimulation-generated phosphenes detected by the visual occipital cortex, which the receiver then translated to words (credit: Carles Grau et al./PLoS ONE)
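
For readers curious about the “binary strings” step in that caption, the toy sketch below shows only the digital bookkeeping in the middle of the pipeline: a word is turned into bits on the sender side and reassembled on the receiver side. It uses plain 8-bit ASCII as an assumption for illustration; in the actual study the bits were produced from EEG signals and delivered as TMS-generated phosphenes.

# Toy illustration of encoding a word as a binary string and decoding it back.
# Plain ASCII is used here for simplicity; it is not the study's own encoding.
def encode_word(word: str) -> str:
    """Encode each character as 8 bits."""
    return "".join(f"{ord(ch):08b}" for ch in word)

def decode_word(bits: str) -> str:
    """Reassemble 8-bit chunks back into characters."""
    return "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))

message = encode_word("hola")   # a short greeting, as reportedly sent in the experiment
assert decode_word(message) == "hola"
print(message)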

And what about experts in neuroethics, psychology, law? Musk says it’s just “an engineering problem. … If we can just use engineering to get neurons to talk to computers, we’ll have done our job, and machine learning can do much of the rest.”

However, it’s not clear how we could be assured our brains aren’t hacked, spied on, and controlled by a repressive government or by other humans — especially those with a more recently updated software version or covert cyborg hardware improvements.

NIRS/EEG brain-computer interface system using non-invasive near-infrared light for sensing “yes” or “no” thoughts, shown on a model (credit: Wyss Center for Bio and Neuroengineering)

In addition, the devices mentioned in WaitButWhy all require some form of neurosurgery, unlike Facebook’s research project to use non-invasive near-infrared light, as shown in this experiment, for example.** And getting implants for non-medical use approved by the FDA will be a challenge, to grossly understate it.

“I think we are about 8 to 10 years away from this being usable by people with no disability,” says Musk, optimistically. However, Musk does not lay out a technology roadmap for going further, as MIT Technology Review notes.

Nonetheless, Neuralink sounds awesome — it should lead to some exciting neuroscience breakthroughs. And Neuralink now has 16 San Francisco job listings here.

* Other experts: Vanessa Tolosa, Lawrence Livermore National Laboratory, one of the world’s foremost researchers on biocompatible materials; Max Hodak, who worked on the development of some groundbreaking BMI technology at Miguel Nicolelis’s lab at Duke University; Ben Rapoport, Neuralink’s neurosurgery expert, with a Ph.D. in Electrical Engineering and Computer Science from MIT; Tim Hanson, UC Berkeley post-doc and expert in flexible electrodes for stable, minimally invasive neural recording; Flip Sabes, professor at the UCSF School of Medicine and expert in cortical physiology, computational and theoretical modeling, and human psychophysics and physiology; and Tim Gardner, Associate Professor of Biology at Boston University, whose lab works on implanting BMIs in birds to study “how complex songs are assembled from elementary neural units” and learn about “the relationships between patterns of neural activity on different time scales.”

** This binary experiment and the binary brain-to-brain (B2B) internet communication system mentioned above are the equivalents of the first binary (dot–dash) telegraph message, sent May 24, 1844: “What hath God wrought?”