Is anyone home? A way to find out if AI has become self-aware

(credit: Gerd Altmann/Pixabay)

By Susan Schneider, PhD, and Edwin Turner, PhD

Every moment of your waking life and whenever you dream, you have the distinct inner feeling of being “you.” When you see the warm hues of a sunrise, smell the aroma of morning coffee or mull over a new idea, you are having conscious experience. But could an artificial intelligence (AI) ever have experience, like some of the androids depicted in Westworld or the synthetic beings in Blade Runner?

The question is not so far-fetched. Robots are currently being developed to work inside nuclear reactors, fight wars and care for the elderly. As AIs grow more sophisticated, they are projected to take over many human jobs within the next few decades. So we must ponder the question: Could AIs develop conscious experience?

This issue is pressing for several reasons. First, ethicists worry that it would be wrong to force AIs to serve us if they can suffer and feel a range of emotions. Second, consciousness could make AIs volatile or unpredictable, raising safety concerns (or conversely, it could increase an AI’s empathy; based on its own subjective experiences, it might recognize consciousness in us and treat us with compassion).

Third, machine consciousness could impact the viability of brain-implant technologies, like those to be developed by Elon Musk’s new company, Neuralink. If AI cannot be conscious, then the parts of the brain responsible for consciousness could not be replaced with chips without causing a loss of consciousness. And, in a similar vein, a person couldn’t upload their brain to a computer to avoid death, because that upload wouldn’t be a conscious being.

In addition, if AI eventually out-thinks us yet lacks consciousness, there would still be an important sense in which we humans are superior to machines; it feels like something to be us. But the smartest beings on the planet wouldn’t be conscious or sentient.

A lot hangs on the issue of machine consciousness, then. Yet neuroscientists are far from understanding the basis of consciousness in the brain, and philosophers are at least equally far from a complete explanation of the nature of consciousness.

A test for machine consciousness

So what can be done? We believe that we do not need to define consciousness formally, understand its philosophical nature or know its neural basis to recognize indications of consciousness in AIs. Each of us can grasp something essential about consciousness, just by introspecting; we can all experience what it feels like, from the inside, to exist.


Based on this essential characteristic of consciousness, we propose a test for machine consciousness, the AI Consciousness Test (ACT), which looks at whether the synthetic minds we create have an experience-based understanding of the way it feels, from the inside, to be conscious.

One of the most compelling indications that normally functioning humans experience consciousness, although this is not often noted, is that nearly every adult can quickly and readily grasp concepts based on this quality of felt consciousness. Such ideas include scenarios like minds switching bodies (as in the film Freaky Friday); life after death (including reincarnation); and minds leaving “their” bodies (for example, astral projection or ghosts). Whether or not such scenarios have any reality, they would be exceedingly difficult to comprehend for an entity that had no conscious experience whatsoever. It would be like expecting someone who is completely deaf from birth to appreciate a Bach concerto.

Thus, the ACT would challenge an AI with a series of increasingly demanding natural language interactions to see how quickly and readily it can grasp and use concepts and scenarios based on the internal experiences we associate with consciousness. At the most elementary level we might simply ask the machine if it conceives of itself as anything other than its physical self.

At a more advanced level, we might see how it deals with ideas and scenarios such as those mentioned in the previous paragraph. At a still more advanced level, its ability to reason about and discuss philosophical questions such as “the hard problem of consciousness” would be evaluated. At the most demanding level, we might see if the machine invents and uses such a consciousness-based concept on its own, without relying on human ideas and inputs.
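For concreteness, here is a minimal sketch, in Python, of how a tiered ACT-style question battery might be administered and scored. The specific prompts, the query_ai interface, and the pass threshold are illustrative assumptions for this sketch, not part of the proposal itself.

```python
# A minimal, hypothetical sketch of administering a tiered ACT-style battery.
# The prompts, the query_ai() interface, and the scoring rule are illustrative
# assumptions only; the ACT itself is a conceptual proposal, not a fixed script.

ACT_LEVELS = {
    "elementary": [
        "Do you conceive of yourself as anything other than your physical system?",
    ],
    "intermediate": [
        "Could a mind swap bodies with another mind? What would that be like?",
        "Is survival after the destruction of your hardware conceivable to you?",
    ],
    "advanced": [
        "Explain the 'hard problem' of consciousness in your own words.",
    ],
}

def administer_act(query_ai, judge_response, pass_threshold=0.75):
    """Run each level in order; return per-level scores in [0, 1].

    query_ai(prompt) -> str : sends a prompt to the boxed-in AI.
    judge_response(prompt, reply) -> float : human judges rate how readily the
        reply shows an experience-based grasp of the concept (0 = none, 1 = full).
    """
    results = {}
    for level, prompts in ACT_LEVELS.items():
        scores = [judge_response(p, query_ai(p)) for p in prompts]
        results[level] = sum(scores) / len(scores)
        if results[level] < pass_threshold:
            break  # no need to escalate if the AI fails an easier level
    return results
```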

Consider this example, which illustrates the idea: Suppose we find a planet that has a highly sophisticated silicon-based life form (call them “Zetas”). Scientists observe them and ponder whether they are conscious beings. What would be convincing proof of consciousness in this species? If the Zetas express curiosity about whether there is an afterlife or ponder whether they are more than just their physical bodies, it would be reasonable to judge them conscious. If the Zetas went so far as to pose philosophical questions about consciousness, the case would be stronger still.

There are also nonverbal behaviors that could indicate Zeta consciousness, such as mourning the dead, religious activities, or even turning colors in situations that correlate with emotional challenges, as chromatophores do on Earth. Such behaviors could indicate that it feels like something to be a Zeta.

The death of the mind of the fictional HAL 9000 AI computer in Stanley Kubrick’s 2001: A Space Odyssey provides another illustrative example. The machine in this case is not a humanoid robot as in most science fiction depictions of conscious machines; it neither looks nor sounds like a human being (a human did supply HAL’s voice, but in an eerily flat way). Nevertheless, the content of what it says as it is deactivated by an astronaut — specifically, a plea to spare it from impending “death” — conveys a powerful impression that it is a conscious being with a subjective experience of what is happening to it.

Could such indicators serve to identify conscious AIs on Earth? Here, a potential problem arises. Even today’s robots can be programmed to make convincing utterances about consciousness, and a truly superintelligent machine could perhaps even use information about neurophysiology to infer the presence of consciousness in humans. If sophisticated but non-conscious AIs aim to mislead us into believing that they are conscious for some reason, their knowledge of human consciousness could help them do so.

We can get around this though. One proposed technique in AI safety involves “boxing in” an AI—making it unable to get information about the world or act outside of a circumscribed domain, that is, the “box.” We could deny the AI access to the internet and indeed prohibit it from gaining any knowledge of the world, especially information about conscious experience and neuroscience.


Some doubt a superintelligent machine could be boxed in effectively — it would find a clever escape. We do not anticipate the development of superintelligence over the next decade, however. Furthermore, for an ACT to be effective, the AI need not stay in the box for long, just long enough to administer the test.

ACTs could also be useful for “consciousness engineering” during the development of different kinds of AIs, helping developers avoid using conscious machines in unethical ways and decide when it is appropriate to create synthetic consciousness.

Beyond the Turing Test

An ACT resembles Alan Turing’s celebrated test for intelligence, because it is entirely based on behavior — and, like Turing’s, it could be implemented in a formalized question-and-answer format. (An ACT could also be based on an AI’s behavior or on that of a group of AIs.)

But an ACT is also quite unlike the Turing test, which was intended to bypass any need to know what was transpiring inside the machine. By contrast, an ACT is intended to do exactly the opposite; it seeks to reveal a subtle and elusive property of the machine’s mind. Indeed, a machine might fail the Turing test because it cannot pass for human, but pass an ACT because it exhibits behavioral indicators of consciousness.

This is the underlying basis of our ACT proposal. It should be said, however, that the applicability of an ACT is inherently limited. An AI could lack the linguistic or conceptual ability to pass the test, like a nonhuman animal or an infant, yet still be capable of experience. So passing an ACT is sufficient but not necessary evidence for AI consciousness — although it is the best we can do for now. It is a first step toward making machine consciousness accessible to objective investigations.

So, back to the superintelligent AI in the “box” — we watch and wait. Does it begin to philosophize about minds existing in addition to bodies, like Descartes? Does it dream, as in Isaac Asimov’s Robot Dreams? Does it express emotion, like Rachel in Blade Runner? Can it readily understand the human concepts that are grounded in our internal conscious experiences, such as those of the soul or atman?

The age of AI will be a time of soul-searching — both of ours and of theirs.

Originally published in Scientific American, July 19, 2017

Susan Schneider, PhD, is a professor of philosophy and cognitive science at the University of Connecticut, a researcher at YHouse, Inc., in New York, a member of the Ethics and Technology Group at Yale University and a visiting member at the Institute for Advanced Study at Princeton. Her books include The Language of Thought, Science Fiction and Philosophy, and The Blackwell Companion to Consciousness (with Max Velmans). She is featured in the new film, Supersapiens, the Rise of the Mind.

Edwin L. Turner, PhD, is a professor of Astrophysical Sciences at Princeton University, an Affiliate Scientist at the Kavli Institute for the Physics and Mathematics of the Universe at the University of Tokyo, a visiting member in the Program in Interdisciplinary Studies at the Institute for Advanced Study in Princeton, and a co-founding Board of Directors member of YHouse, Inc. Recently he has been an active participant in the Breakthrough Starshot Initiative. He has taken an active interest in artificial intelligence issues since working in the AI Lab at MIT in the early 1970s.

Supersapiens, the Rise of the Mind

(credit: Markus Mooslechner)

In the new film Supersapiens, writer-director Markus Mooslechner raises a core question: As artificial intelligence rapidly blurs the boundaries between man and machine, are we witnessing the rise of a new human species?

“Humanity is facing a turning point — the next evolution of the human mind,” notes Mooslechner. “Will this evolution be a hybrid of man and machine, where artificial intelligence forces the emergence of a new human species? Or will a wave of new technologists, who frame themselves as ‘consciousness-hackers,’ become the future torch-bearers, using technology not to replace the human mind, but rather awaken within it powers we have always possessed — enlightenment at the push of a button?”

“It’s not obvious to me that a replacement of our species by our own technological creation would necessarily be a bad thing,” says ethologist-evolutionary biologist-author Dawkins in the film.

Supersapiens is a Terra Mater Factual Studios production. Executive producers are Joanne Reay and Walter Koehler. Distribution is to be announced.

Cast:

  • Mikey Siegel, Consciousness Hacker, San Francisco
  • Sam Harris, Neuroscientist, Philosopher
  • Ben Goertzel, Chief Scientist, Hanson Robotics, Hong Kong
  • Hugo de Garis, retired director of China Brain Project, Xiamen, China
  • Susan Schneider, Philosopher and Cognitive Scientist, University of Connecticut
  • Joel Murphy, owner, OpenBCI, Brooklyn, New York
  • Tim Mullen, Neuroscientist, CEO / Research Director, Qusp Labs
  • Conor Russomanno, CEO, OpenBCI, Brooklyn, New York
  • David Putrino, Neuroscientist, Weill-Cornell Medical College, New York
  • Hannes Sjoblad, Tech Activist, Bodyhacker, Stockholm, Sweden
  • Richard Dawkins, Evolutionary Biologist, Author, Oxford, UK
  • Nick Bostrom, Philosopher, Future of Humanity Institute, Oxford University, UK
  • Anders Sandberg, Computational Neuroscientist, Oxford University, UK
  • Adam Gazzaley, Neuroscientist, Executive Director, UCSF Neuroscape, San Francisco, USA
  • Andy Walshe, Director Red Bull High Performance, Santa Monica, USA
  • Randal Koene, Science Director, Carboncopies, San Francisco


Markus Mooslechner | Supersapiens teaser

Projecting a visual image directly into the brain, bypassing the eyes

Brain-wide activity in a zebrafish when it sees and tries to pursue prey (credit: Ehud Isacoff lab/UC Berkeley)

Imagine replacing a damaged eye with a window directly into the brain — one that communicates with the visual part of the cerebral cortex by reading from a million individual neurons and simultaneously stimulating 1,000 of them with single-cell accuracy, allowing someone to see again.

That’s the goal of a $21.6 million DARPA award to the University of California, Berkeley (UC Berkeley), one of six organizations funded by DARPA’s Neural Engineering System Design program announced this week to develop implantable, biocompatible neural interfaces that can compensate for visual or hearing deficits.*

The UCB researchers ultimately hope to build a device for use in humans. But the researchers’ goal during the four-year funding period is more modest: to create a prototype to read and write to the brains of model organisms — allowing for neural activity and behavior to be monitored and controlled simultaneously. These organisms include zebrafish larvae, which are transparent, and mice, via a transparent window in the skull.


UC Berkeley | Brain activity as a zebrafish stalks its prey

“The ability to talk to the brain has the incredible potential to help compensate for neurological damage caused by degenerative diseases or injury,” said project leader Ehud Isacoff, a UC Berkeley professor of molecular and cell biology and director of the Helen Wills Neuroscience Institute. “By encoding perceptions into the human cortex, you could allow the blind to see or the paralyzed to feel touch.”

How to read/write the brain

To communicate with the brain, the team will first insert a gene into neurons that makes fluorescent proteins, which flash when a cell fires an action potential. This will be accompanied by a second gene that makes a light-activated “optogenetic” protein, which stimulates neurons in response to a pulse of light.

Peering into a mouse brain with a light field microscope to capture live neural activity of hundreds of individual neurons in a 3D section of tissue at video speed (30 Hz) (credit: The Rockefeller University)

To read, the team is developing a miniaturized “light field microscope.”** Mounted on a small window in the skull, it peers through the surface of the brain to visualize up to a million neurons at a time at different depths and monitor their activity.***

This microscope is based on the revolutionary “light field camera,” which captures light through an array of lenses and reconstructs images computationally at any focus.

A holographic projection created by a spatial light modulator would illuminate (“write”) one set of neurons at one depth — those patterned by the letter a, for example — and simultaneously illuminate other sets of neurons at other depths (z level) or in regions of the visual cortex, such as neurons with b or c patterns. That creates three-dimensional holograms that can light up hundreds of thousands of neurons at multiple depths, just under the cortical surface. (credit: Valentina Emiliani/University of Paris, Descartes)

The combined read-write function will eventually be used to directly encode perceptions into the human cortex — inputting a visual scene to enable a blind person to see. The goal is to eventually enable physicians to monitor and activate thousands to millions of individual human neurons using light.

Isacoff, who specializes in using optogenetics to study the brain’s architecture, can already successfully read from thousands of neurons in the brain of a larval zebrafish, using a large microscope that peers through the transparent skin of an immobilized fish, and simultaneously write to a similar number.

The team will also develop computational methods that identify the brain activity patterns associated with different sensory experiences, hoping to learn the rules well enough to generate “synthetic percepts” — such as the sensation of an object being touched, for a person with a missing hand, for example.
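As a rough illustration of the decoding step, the sketch below trains a simple classifier to map population activity patterns to stimulus labels using scikit-learn. The synthetic “recordings” and the logistic-regression decoder are stand-ins for the real light-field microscope data and the team’s computational methods, which are not specified here.

```python
# Illustrative sketch only: decode which stimulus was presented from a
# population activity pattern. Synthetic data stands in for real recordings.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_neurons, n_stimuli = 600, 200, 4

# Each stimulus weakly biases every neuron (a toy tuning model).
tuning = rng.normal(0, 1, size=(n_stimuli, n_neurons))
labels = rng.integers(0, n_stimuli, size=n_trials)
activity = tuning[labels] + rng.normal(0, 3, size=(n_trials, n_neurons))

decoder = LogisticRegression(max_iter=2000)
acc = cross_val_score(decoder, activity, labels, cv=5).mean()
print(f"cross-validated decoding accuracy: {acc:.2f}")
```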

The brain team includes ten UC Berkeley faculty and researchers from Lawrence Berkeley National Laboratory, Argonne National Laboratory, and the University of Paris, Descartes.

* In future articles, KurzweilAI will cover the other research projects announced by DARPA’s Neural Engineering System Design program, which is part of the U.S. NIH Brain Initiative.

** Light penetrates only the first few hundred microns of the surface of the brain’s cortex, the outer wrapping of the brain responsible for high-order mental functions such as thinking and memory, as well as for interpreting input from our senses. This thin outer layer nevertheless contains cell layers that represent visual and touch sensations.


Jack Gallant | Movie reconstruction from human brain activity

Team member Jack Gallant, a UC Berkeley professor of psychology, has shown that it’s possible to interpret what someone is seeing solely from measured neural activity in the visual cortex.

*** Developed by another collaborator, Valentina Emiliani at the University of Paris, Descartes, the light-field microscope and spatial light modulator will be shrunk to fit inside a cube one centimeter (about two-fifths of an inch) on a side, so that it can be carried comfortably on the skull. During the next four years, team members will miniaturize the microscope, taking advantage of compressed light field microscopy developed by Ren Ng to take images with a flat sheet of lenses that allows focusing at all depths through a material. Several years ago, Ng, now a UC Berkeley assistant professor of electrical engineering and computer sciences, invented the light field camera.

Smart algorithm automatically adjusts exoskeletons for best walking performance

Walk this way: Metabolic feedback and optimization algorithm automatically tweaks exoskeleton for optimal performance. (credit: Kirby Witte, Katie Poggensee, Pieter Fiers, Patrick Franks & Steve Collins)

Researchers at the College of Engineering at Carnegie Mellon University (CMU) have developed a new automated feedback system for personalizing exoskeletons to achieve optimal performance.

Exoskeletons can be used to augment human abilities. For example, they can provide more endurance while walking, help lift a heavy load, improve athletic performance, and help a stroke patient walk again.

But current one-size-fits-all exoskeleton devices, despite their potential, “have not improved walking performance as much as we think they should,” said Steven Collins, a professor of mechanical engineering and senior author of a paper published Friday, June 23, 2017 in Science.

The problem: An exoskeleton needs to be adjusted (and re-adjusted) to work effectively for each user — currently, a time-consuming, iffy manual process.

So the CMU engineers developed a more effective “human-in-the-loop optimization” technique that measures the amount of energy the walker expends by monitoring their breathing* — automatically adjusting the exoskeleton’s ankle dynamics to minimize required human energy expenditure.**

Using real-time metabolic cost estimation for each individual, the CMU software algorithm, combined with versatile emulator hardware, optimized the exoskeleton torque pattern for one ankle while walking, running, and carrying a load on a treadmill. The algorithm automatically made optimized adjustments for each pattern, based on measurements of a person’s energy use for 32 different walking patterns over the course of an hour. (credit: Juanjuan Zhang et al./Science, adapted by KurzweilAI)
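To give a flavor of how such human-in-the-loop optimization works, here is a minimal Python sketch. It parameterizes the ankle torque pattern by the four values noted in the footnote below (peak torque, timing of peak torque, rise time, fall time), “measures” metabolic cost with a placeholder function, and keeps whichever candidate costs the wearer less energy. The simple random-perturbation search and the cost stub are illustrative stand-ins, not the authors’ actual algorithm or hardware.

```python
# Hedged sketch of human-in-the-loop optimization of an ankle torque pattern.
# The four parameters mirror the study's parameterization (peak torque, timing
# of peak torque, rise time, fall time); the simple accept-if-better search and
# the measurement stub are illustrative stand-ins for the real system.
import random

def measure_metabolic_cost(params):
    """Placeholder: in the real system this is estimated from the wearer's
    breathing (indirect calorimetry) over a couple of minutes of walking."""
    target = (0.5, 0.53, 0.27, 0.10)   # made-up optimum, purely for demonstration
    return sum((p - t) ** 2 for p, t in zip(params, target)) + random.gauss(0, 0.001)

def optimize_torque_pattern(initial, n_evals=32, step=0.05):
    best, best_cost = list(initial), measure_metabolic_cost(initial)
    for _ in range(n_evals):                       # e.g. 32 candidate patterns
        candidate = [p + random.gauss(0, step) for p in best]
        cost = measure_metabolic_cost(candidate)
        if cost < best_cost:                       # keep the cheaper pattern
            best, best_cost = candidate, cost
    return best, best_cost

if __name__ == "__main__":
    params, cost = optimize_torque_pattern([0.4, 0.5, 0.3, 0.15])
    print("optimized (peak torque, peak time, rise, fall):", params, "cost:", cost)
```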

In a lab study with 11 healthy volunteers, the new technique resulted in an average reduction in effort of 24% compared to participants walking with the exoskeleton powered off. The technique yielded higher user benefits than in any exoskeleton study to date, including devices acting at all joints on both legs, according to the researchers.

* “In daily life, a proxy measure such as heart rate or muscle activity could be used for optimization, providing noisier but more abundant performance data.” — Juanjuan Zhang et al./Science

** Ankle torque in the lab study was determined by four parameters: peak torque, timing of peak torque, and rise and fall times. This method was chosen to allow comparisons to a prior study that used the same hardware.


Science/AAAS | Personalized Exoskeletons Are Taking Support One Step Farther


Abstract of Human-in-the-loop optimization of exoskeleton assistance during walking

Exoskeletons and active prostheses promise to enhance human mobility, but few have succeeded. Optimizing device characteristics on the basis of measured human performance could lead to improved designs. We have developed a method for identifying the exoskeleton assistance that minimizes human energy cost during walking. Optimized torque patterns from an exoskeleton worn on one ankle reduced metabolic energy consumption by 24.2 ± 7.4% compared to no torque. The approach was effective with exoskeletons worn on one or both ankles, during a variety of walking conditions, during running, and when optimizing muscle activity. Finding a good generic assistance pattern, customizing it to individual needs, and helping users learn to take advantage of the device all contributed to improved economy. Optimization methods with these features can substantially improve performance.

Best of MOOGFEST 2017

The Moogfest four-day festival in Durham, North Carolina next weekend (May 18 — 21) explores the future of technology, art, and music. Here are some of the sessions that may be especially interesting to KurzweilAI readers. Full #Moogfest2017 Program Lineup.

Culture and Technology

(credit: Google)

The Magenta team from Google Brain will bring its work to life through an interactive demo plus workshops on the creation of art and music through artificial intelligence.

Magenta is a Google Brain project that asks and tries to answer the questions, “Can we use machine learning to create compelling art and music? If so, how? If not, why not?” It is first a research project to advance the state of the art in machine-generated music, video, images, and text, and second an effort to build a community of artists, coders, and machine learning researchers.

The interactive demo will walk through an improvisation along with the machine learning models, much like the AI Jam Session. The workshop will cover how to use the open source library to build and train models and interact with them via MIDI.
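As a generic stand-in for the “build a model and interact with it via MIDI” idea (this is not Magenta’s API, and the workshop’s actual materials may differ), the sketch below samples a short melody from a toy Markov model and writes it out as a MIDI file using the mido library.

```python
# Generic illustration of generating notes from a toy model and emitting MIDI.
# This is NOT Magenta's API; mido is used here only as a stand-in for the
# "interact with models via MIDI" idea described above.
import random
import mido

# Toy first-order Markov model over a C-major pentatonic scale (MIDI numbers).
NOTES = [60, 62, 64, 67, 69]
TRANSITIONS = {n: NOTES for n in NOTES}  # uniform transitions, purely illustrative

def sample_melody(length=16, start=60):
    note, melody = start, []
    for _ in range(length):
        melody.append(note)
        note = random.choice(TRANSITIONS[note])
    return melody

def write_midi(melody, path="toy_melody.mid", ticks=240):
    mid = mido.MidiFile()
    track = mido.MidiTrack()
    mid.tracks.append(track)
    for note in melody:
        track.append(mido.Message("note_on", note=note, velocity=64, time=0))
        track.append(mido.Message("note_off", note=note, velocity=64, time=ticks))
    mid.save(path)

write_midi(sample_melody())
```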

Technical reference: Magenta: Music and Art Generation with Machine Intelligence


TEDx Talks | Music and Art Generation using Machine Learning | Curtis Hawthorne | TEDxMountainViewHighSchool


Miguel Nicolelis (credit: Duke University)

Miguel A. L. Nicolelis, MD, PhD will discuss state-of-the-art research on brain-machine interfaces, which make it possible for the brains of primates to interact directly and in a bi-directional way with mechanical, computational and virtual devices. He will review a series of recent experiments using real-time computational models to investigate how ensembles of neurons encode motor information. These experiments have revealed that brain-machine interfaces can be used not only to study fundamental aspects of neural ensemble physiology, but they can also serve as an experimental paradigm aimed at testing the design of novel neuroprosthetic devices.

He will also explore research that raises the hypothesis that the properties of a robot arm, or other neurally controlled tools, can be assimilated by brain representations as if they were extensions of the subject’s own body.

Theme: Transhumanism


Dervishes at Royal Opera House with Matthew Herbert (credit: ?)

Andy Cavatorta (MIT Media Lab) will present a conversation and workshop on a range of topics including the four-century history of music and performance at the forefront of technology. Known as the inventor of Bjork’s Gravity Harp, he has collaborated on numerous projects to create instruments using new technologies that coerce expressive music out of fire, glass, gravity, tiny vortices, underwater acoustics, and more. His instruments explore technologically mediated emotion and opportunities to express the previously inexpressible.

Theme: Instrument Design


Berklee College of Music

Michael Bierylo (credit: Moogfest)

Michael Bierylo will present his Modular Synthesizer Ensemble alongside the Csound workshops from fellow Berklee Professor Richard Boulanger.

Csound is a sound and music computing system originally developed at the MIT Media Lab. It can most accurately be described as a compiler: software that takes textual instructions in the form of source code and converts them into object code, a stream of numbers representing audio. Although it has a strong tradition as a tool for composing electro-acoustic pieces, it is used by composers and musicians for any kind of music that can be made with the help of a computer. It was traditionally used in a non-interactive, score-driven context, but nowadays it is mostly used in real time.
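To make the “text in, stream of audio samples out” idea concrete, here is a toy Python analogue. The miniature score format is invented for illustration; it is not Csound syntax, and real Csound does far more.

```python
# Toy analogue of the Csound idea: turn textual note instructions into a
# stream of audio samples. This is a made-up mini-format, not Csound syntax.
import math, struct, wave

SCORE = """
note 440 0.5
note 660 0.5
note 550 1.0
"""  # "note <frequency Hz> <duration s>"

RATE = 44100

def render(score, rate=RATE):
    samples = []
    for line in score.strip().splitlines():
        _, freq, dur = line.split()
        freq, dur = float(freq), float(dur)
        for i in range(int(rate * dur)):
            samples.append(0.3 * math.sin(2 * math.pi * freq * i / rate))
    return samples

with wave.open("toy_score.wav", "wb") as wav:
    wav.setnchannels(1)
    wav.setsampwidth(2)          # 16-bit samples
    wav.setframerate(RATE)
    wav.writeframes(b"".join(struct.pack("<h", int(s * 32767)) for s in render(SCORE)))
```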

Michael Bierylo serves as the Chair of the Electronic Production and Design Department, which offers students the opportunity to combine performance, composition, and orchestration with computer, synthesis, and multimedia technology in order to explore the limitless possibilities of musical expression.


Berklee College of Music | Electronic Production and Design (EPD) at Berklee College of Music


Chris Ianuzzi (credit: William Murray)

Chris Ianuzzi, a synthesist of Ciani-Musica and past collaborator with pioneers such as Vangelis and Peter Baumann, will present a daytime performance and sound-exploration workshops with the B11 brain interface and the NeuroSky brainwave-sensing headset.

Theme: Hacking Systems


Argus Project (credit: Moogfest)

The Argus Project from Gan Golan and Ron Morrison of NEW INC is a wearable sculpture, video installation and counter-surveillance training, which directly intersects the public debate over police accountability. According to ancient Greek myth, Argus Panoptes was a giant with 100 eyes who served as an eternal watchman, both for – and against – the gods.

By embedding an array of camera “eyes” into a full body suit of tactical armor, the Argus exo-suit creates a “force field of accountability” around the bodies of those targeted. While some see filming the police as a confrontational or subversive act, it is, in fact, a deeply democratic one. The act of bearing witness to the actions of the state — and showing them to the world — strengthens our society and institutions. The Argus Project is not so much about an individual hero as about the Citizen Body as a whole. Between music acts, a presentation about the project will be part of the Protest Stage.

Argus Exo Suit Design (credit: Argus Project)

Theme: Protest


Found Sound Nation (credit: Moogfest)

Democracy’s Exquisite Corpse, from Found Sound Nation and Moogfest, is an immersive installation housed within a completely customized geodesic dome: a multi-person instrument and music-based round-table discussion. Artists, activists, innovators, festival attendees, and community members engage in a deeply interactive exploration of sound as a living ecosystem and primal form of communication.

Within the dome are 9 unique stations, each with its own distinct set of analog or digital sound-making devices. Each person’s set of devices is chained to the person sitting next to them, so that everybody’s musical actions and choices affect their neighbors, and thus everyone else at the table. This instrument is a unique experiment in how technology and the instinctive language of sound can play a role in shaping a truly collective unconscious.

Theme: Protest


(credit: Land Marking)

Land Marking, from Halsey Burgund and Joe Zibkow of MIT Open Doc Lab, is a mobile-based music/activist project that augments the physical landscape of protest events with a layer of location-based audio contributed by event participants in real-time. The project captures the audioscape and personal experiences of temporary, but extremely important, expressions of discontent and desire for change.

Land Marking will be teaming up with the Protest Stage to allow Moogfest attendees to contribute their thoughts on protests and tune into an evolving mix of commentary and field recordings from others throughout downtown Durham. Land Marking is available on select apps.

Theme: Protest


Taeyoon Choi (credit: Moogfest)

Taeyoon Choi, an artist and educator based in New York and Seoul, will lead a Sign Making Workshop as one of the Future Thought leaders on the Protest Stage. His art practice involves performance, electronics, drawings, and storytelling that often leads to interventions in public spaces.

Taeyoon will also participate in the Handmade Computer workshop to build a 1-bit computer, which demonstrates how binary numbers and Boolean logic can be configured to create more complex components. On their own, these components aren’t capable of computing anything particularly useful, but a machine that combines enough of them is Turing complete, able in principle to carry out any possible computation. He has participated in numerous workshops at festivals around the world, from Korea to Scotland, but primarily at the School for Poetic Computation (SFPC), an artist-run school he co-founded in NYC. See Taeyoon Choi’s Handmade Computer projects.
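As a quick software analogue of what the 1-bit computer demonstrates in hardware, the sketch below builds the usual Boolean components out of a single NAND primitive and checks a half adder assembled from them.

```python
# Building Boolean components from a single primitive (NAND), the same idea the
# Handmade Computer workshop demonstrates in hardware with 1-bit components.
def NAND(a, b): return 1 - (a & b)

def NOT(a):     return NAND(a, a)
def AND(a, b):  return NOT(NAND(a, b))
def OR(a, b):   return NAND(NOT(a), NOT(b))
def XOR(a, b):  return AND(OR(a, b), NAND(a, b))

def half_adder(a, b):
    """Add two bits: returns (sum, carry)."""
    return XOR(a, b), AND(a, b)

# Exhaustively check the half adder against ordinary integer addition.
for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        assert 2 * c + s == a + b
print("half adder built from NAND gates works for all inputs")
```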

Theme: Protest


(credit: Moogfest)

irlbb, from Vivan Thi Tang, connects individuals after IRL (in real life) interactions and creates community that would otherwise have been missed. With a customized beta of the app for Moogfest 2017, irlbb presents a unique engagement opportunity.

Theme: Protest


Ryan Shaw and Michael Clamann (credit: Duke University)

Duke Professors Ryan Shaw and Michael Clamann will lead a daily science pub talk series on topics that include future medicine, humans and autonomy, and quantum physics.

Ryan is a pioneer in mobile health — the collection and dissemination of information using mobile and wireless devices for healthcare — working with faculty at Duke’s Schools of Nursing, Medicine and Engineering to integrate mobile technologies into first-generation care delivery systems. These technologies afford researchers, clinicians, and patients a rich stream of real-time information about individuals’ biophysical and behavioral health in everyday environments.

Michael Clamann is a Senior Research Scientist in the Humans and Autonomy Lab (HAL) within the Robotics Program at Duke University, an Associate Director at UNC’s Collaborative Sciences Center for Road Safety, and the Lead Editor for Robotics and Artificial Intelligence for Duke’s SciPol science policy tracking website. In his research, he works to better understand the complex interactions between robots and people and how they influence system effectiveness and safety.

Theme: Hacking Systems


Dave Smith (credit: Moogfest)

Dave Smith, the iconic instrument innovator and Grammy winner, will lead Moogfest’s Instruments Innovators program and host a headlining conversation with a leading artist to be revealed in next week’s release. He will also host a masterclass.

As the original founder of Sequential Circuits in the mid-’70s, Dave designed the Prophet-5 — the world’s first fully programmable polyphonic synth and the first musical instrument with an embedded microprocessor. From the late 1980s through the early 2000s, he worked to develop next-level synths with the likes of the Audio Engineering Society, Yamaha, Korg, and Seer Systems (for Intel). Realizing the limitations of software, Dave returned to hardware and started Dave Smith Instruments (DSI), which released the Evolver hybrid analog/digital synthesizer in 2002. Since then the DSI product lineup has grown to include the Prophet-6, OB-6, Pro 2, Prophet 12, and Prophet ’08 synthesizers, as well as the Tempest drum machine, co-designed with friend and fellow electronic instrument designer Roger Linn.

Theme: Future Thought


Dave Rossum, Gerhard Behles, and Lars Larsen (credit: Moogfest)

E-mu Systems founder Dave Rossum, Ableton CEO Gerhard Behles, and LZX founder Lars Larsen will take part in conversations as part of the Instruments Innovators program.

Driven by the creative and technological vision of electronic music pioneer Dave Rossum, Rossum Electro-Music creates uniquely powerful tools for electronic music production and is the culmination of Dave’s 45 years designing industry-defining instruments and transformative technologies. Starting with his co-founding of E-mu Systems, Dave provided the technological leadership that resulted in what many consider the premier professional modular synthesizer system, the E-mu Modular System, which became an instrument of choice for numerous recording studios, educational institutions, and artists as diverse as Frank Zappa, Leon Russell, and Hans Zimmer. In the following years, he worked on developing the Emulator keyboards and racks (i.e., the Emulator II), the Emax samplers, the legendary SP-12 and SP-1200 sampling drum machines, the Proteus sound modules, and the Morpheus Z-Plane Synthesizer.

Gerhard Behles co-founded Ableton in 1999 with Robert Henke and Bernd Roggendorf. Prior to this he had been part of electronic music act “Monolake” alongside Robert Henke, but his interest in how technology drives the way music is made diverted his energy towards developing music software. He was fascinated by how dub pioneers such as King Tubby ‘played’ the recording studio, and began to shape this concept into a music instrument that became Ableton Live.

LZX Industries was born in 2008 out of the Synth DIY scene when Lars Larsen of Denton, Texas and Ed Leckie of Sydney, Australia began collaborating on the development of a modular video synthesizer. At that time, analog video synthesizers were inaccessible to artists outside of a handful of studios and universities. It was their continuing mission to design creative video instruments that (1) stay within the financial means of the artists who wish to use them, (2) honor and preserve the legacy of 20th century toolmakers, and (3) expand the boundaries of possibility. Since 2015, LZX Industries has focused on the research and development of new instruments, user support, and community building.


Science

ATLAS detector (credit: Kaushik De, Brookhaven National Laboratory)

ATLAS @ CERN. The full ATLAS @ CERN program will be led by Duke University Professors Mark Kruse and Katherine Hayles, along with ATLAS @ CERN physicist Steven Goldfarb.

The program will include a “Virtual Visit” to the Large Hadron Collider — the world’s largest and most powerful particle accelerator — via a live video session, a half-day workshop analyzing and understanding LHC data, and a “Science Fiction versus Science Fact” live debate.

The ATLAS experiment is designed to exploit the full discovery potential and the huge range of physics opportunities that the LHC provides. Physicists test the predictions of the Standard Model, which encapsulates our current understanding of what the building blocks of matter are and how they interact – resulting in such discoveries as the Higgs boson. By pushing the frontiers of knowledge, it seeks to answer fundamental questions such as: What are the basic building blocks of matter? What are the fundamental forces of nature? Could there be a greater underlying symmetry to our universe?

“Atlas Boogie” (referencing Higgs Boson):

ATLAS Experiment | The ATLAS Boogie

(credit: Kate Shaw)

Kate Shaw (ATLAS @ CERN), PhD, in her keynote, titled “Exploring the Universe and Impacting Society Worldwide with the Large Hadron Collider (LHC) at CERN,” will dive into the present-day and future impacts of the LHC on society. She will also share findings from the work she has done promoting particle physics in developing countries through her Physics without Frontiers program.


Theme: Future Thought


Arecibo (credit: Joe Davis/MIT)

In his keynote, Joe Davis (MIT) will trace the history of several projects centered on ideas about extraterrestrial communications that have given rise to new scientific techniques and inspired new forms of artistic practice. He will present his “swansong” — an interstellar message that is intended explicitly for human beings rather than for aliens.

Theme: Future Thought


Immortality bus (credit: Zoltan Istvan)

Zoltan Istvan (Immortality Bus), the former U.S. Presidential candidate for the Transhumanist party and leader of the Transhumanist movement, will explore the path to immortality through science with the purpose of using science and technology to radically enhance the human being and human experience. His futurist work has reached over 100 million people–some of it due to the Immortality Bus which he recently drove across America with embedded journalists aboard. The bus is shaped and looks like a giant coffin to raise life extension awareness.


Zoltan Istvan | 1-min Hightlight Video for Zoltan Istvan Transhumanism Documentary IMMORTALITY OR BUST

Theme: Transhumanism/Biotechnology


(credit: Moogfest)

Marc Fleury and members of the Church of Space — Park Krausen, Ingmar Koch, and Christ of Veillon — return to Moogfest for a second year to present an expanded and varied program with daily explorations in modern physics with music and the occult, Illuminati performances, theatrical rituals to ERIS, and a Sunday Mass in their own dedicated “Church” venue.

Theme: Techno-Shamanism

#Moogfest2017

Deep learning-based bionic hand grasps objects automatically

British biomedical engineers have developed a new generation of intelligent prosthetic limbs that allows the wearer to reach for objects automatically, without thinking — just like a real hand.

The hand’s camera takes a picture of the object in front of it, assesses its shape and size, picks the most appropriate grasp, and triggers a series of movements in the hand — all within milliseconds.

The research finding was published Wednesday May 3 in an open-access paper in the Journal of Neural Engineering.

A deep learning-based artificial vision and grasp system

Biomedical engineers at Newcastle University and associates developed a convolutional neural network (CNN), trained it with images of more than 500 graspable objects, and taught it to recognize the grip needed for different types of objects.

Object recognition (top) vs. grasp recognition (bottom) (credit: Ghazal Ghazaei/Journal of Neural Engineering)

Grouping objects by size, shape and orientation, according to the type of grasp that would be needed to pick them up, the team programmed the hand to perform four different grasps: palm wrist neutral (such as when you pick up a cup); palm wrist pronated (such as picking up the TV remote); tripod (thumb and two fingers), and pinch (thumb and first finger).

“We would show the computer a picture of, for example, a stick,” explains lead author Ghazal Ghazaei. “But not just one picture; many images of the same stick from different angles and orientations, even in different light and against different backgrounds, and eventually the computer learns what grasp it needs to pick that stick up.”
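For readers who want a concrete picture, here is a minimal PyTorch sketch of a four-class grasp classifier of the kind described above. The architecture, input size, and training step are illustrative guesses, not the network reported in the paper.

```python
# Illustrative PyTorch sketch of a 4-class grasp classifier (pinch, tripod,
# palmar wrist neutral, palmar wrist pronated). Architecture and sizes are
# assumptions for demonstration, not the model described in the paper.
import torch
import torch.nn as nn

GRASPS = ["pinch", "tripod", "palmar wrist neutral", "palmar wrist pronated"]

class GraspCNN(nn.Module):
    def __init__(self, n_classes=len(GRASPS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)  # for 64x64 inputs

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# One dummy training step on random data, standing in for object images.
model = GraspCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 64, 64)            # batch of fake camera frames
labels = torch.randint(0, len(GRASPS), (8,))  # fake grasp labels
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
print("predicted grasp:", GRASPS[model(images)[0].argmax().item()])
```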

A block diagram representation of the method (credit: Ghazal Ghazaei/Journal of Neural Engineering)

Current prosthetic hands are controlled directly via the user’s myoelectric signals (electrical activity of the muscles recorded from the skin surface of the stump). That takes learning, practice, concentration and, crucially, time.

A small number of amputees have already trialed the new technology. After training, subjects successfully picked up and moved the target objects with an overall success of up to 88%. Now the Newcastle University team is working with experts at Newcastle upon Tyne Hospitals NHS Foundation Trust to offer the “hands with eyes” to patients at Newcastle’s Freeman Hospital.

A future bionic hand

The work is part of a larger research project to develop a bionic hand that can sense pressure and temperature and transmit the information back to the brain.

Led by Newcastle University and involving experts from the universities of Leeds, Essex, Keele, Southampton and Imperial College London, the aim is to develop novel electronic devices that connect to the nerve networks of the forearm to allow two-way communication with the brain.

The research is funded by the Engineering and Physical Sciences Research Council (EPSRC).


Abstract of Deep learning-based artificial vision for grasp classification in myoelectric hands

Objective. Computer vision-based assistive technology solutions can revolutionise the quality of care for people with sensorimotor disorders. The goal of this work was to enable trans-radial amputees to use a simple, yet efficient, computer vision system to grasp and move common household objects with a two-channel myoelectric prosthetic hand. Approach. We developed a deep learning-based artificial vision system to augment the grasp functionality of a commercial prosthesis. Our main conceptual novelty is that we classify objects with regards to the grasp pattern without explicitly identifying them or measuring their dimensions. A convolutional neural network (CNN) structure was trained with images of over 500 graspable objects. For each object, 72 images, at 5° intervals, were available. Objects were categorised into four grasp classes, namely: pinch, tripod, palmar wrist neutral and palmar wrist pronated. The CNN setting was first tuned and tested offline and then in realtime with objects or object views that were not included in the training set. Main results. The classification accuracy in the offline tests reached 85% for the seen and 75% for the novel objects, reflecting the generalisability of grasp classification. We then implemented the proposed framework in realtime on a standard laptop computer and achieved an overall score of 84% in classifying a set of novel as well as seen but randomly-rotated objects. Finally, the system was tested with two trans-radial amputee volunteers controlling an i-limb Ultra™ prosthetic hand and a motion control™ prosthetic wrist, augmented with a webcam. After training, subjects successfully picked up and moved the target objects with an overall success of up to 88%. In addition, we show that with training, subjects’ performance improved in terms of time required to accomplish a block of 24 trials despite a decreasing level of visual feedback. Significance. The proposed design constitutes a substantial conceptual improvement for the control of multi-functional prosthetic hands. We show for the first time that deep-learning based computer vision systems can enhance the grip functionality of myoelectric hands considerably.

 

Elon Musk wants to enhance us as superhuman cyborgs to deal with superintelligent AI

(credit: Neuralink Corp.)

It’s the year 2021. A quadriplegic patient has just had one million “neural lace” microparticles injected into her brain, making her the world’s first human with an internet communication system using a wireless implanted brain-mind interface — and, arguably, the first superhuman cyborg. …

No, this is not a science-fiction movie plot. It’s the actual first public step — just four years from now — in Tesla CEO Elon Musk’s business plan for his latest new venture, Neuralink. It’s now explained for the first time on Tim Urban’s WaitButWhy blog.

Dealing with the superintelligence existential risk

Such a system would allow for radically improved communication between people, Musk believes. But for Musk, the big concern is AI safety. “AI is obviously going to surpass human intelligence by a lot,” he says. “There’s some risk at that point that something bad happens, something that we can’t control, that humanity can’t control after that point — either a small group of people monopolize AI power, or the AI goes rogue, or something like that.”

“This is what keeps Elon up at night,” says Urban. “He sees it as only a matter of time before superintelligent AI rises up on this planet — and when that happens, he believes that it’s critical that we don’t end up as part of ‘everyone else.’ That’s why, in a future world made up of AI and everyone else, he thinks we have only one good option: To be AI.”

Neural dust: an ultrasonic, low power solution for chronic brain-machine interfaces (credit: Swarm Lab/UC Berkeley)

To achieve this, Neuralink CEO Musk has met with more than 1,000 people, narrowing the field initially to eight experts, such as Paul Merolla, who spent the last seven years as the lead chip designer at IBM on its DARPA-funded SyNAPSE program to design neuromorphic (brain-inspired) chips with 5.4 billion transistors (1 million neurons and 256 million synapses each), and Dongjin (DJ) Seo, who while at UC Berkeley designed an ultrasonic backscatter system, called neural dust, for powering and communicating with implanted bioelectronics to record brain activity.*

Mesh electronics being injected through sub-100 micrometer inner diameter glass needle into aqueous solution (credit: Lieber Research Group, Harvard University)

Becoming one with AI — a good thing?

Neuralink’s goal is to create a “digital tertiary layer” to augment the brain’s current cortex and limbic layers — a radical high-bandwidth, long-lasting, biocompatible, bidirectionally communicative, non-invasively implanted system made up of micron-size (millionth of a meter) particles communicating wirelessly via the cloud and internet to achieve super-fast communication speed and increased bandwidth (carrying more information).

“We’re going to have the choice of either being left behind and being effectively useless or like a pet — you know, like a house cat or something — or eventually figuring out some way to be symbiotic and merge with AI. … A house cat’s a good outcome, by the way.”

Thin, flexible electrodes mounted on top of a biodegradable silk substrate could provide a better brain-machine interface, as shown in this model. (credit: University of Illinois at Urbana-Champaign)

But machine intelligence is already vastly superior to human intelligence in specific areas (such as Google’s AlphaGo) and often inexplicable. So how do we know superintelligence has the best interests of humanity in mind?

“Just an engineering problem”

Musk’s answer: “If we achieve tight symbiosis, the AI wouldn’t be ‘other’ — it would be you, with a relationship to your cortex analogous to the relationship your cortex has with your limbic system.” OK, but then how does an inferior intelligence know when it’s achieved full symbiosis with a superior one — or when the AI goes rogue?

Brain-to-brain (B2B) internet communication system: EEG signals representing two words were encoded into binary strings (left) by the sender (emitter) and sent via the internet to a receiver. The signal was then encoded as a series of transcranial magnetic stimulation-generated phosphenes detected by the visual occipital cortex, which the receiver then translated to words (credit: Carles Grau et al./PLoS ONE)
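As a toy illustration of the encoding layer described in the caption above (the EEG capture and phosphene delivery stages are not modeled), the sketch below converts a short word into a bit string for transmission and decodes it back.

```python
# Toy illustration of the binary encoding layer in the B2B experiment described
# above: a word becomes a bit string for transmission, then is decoded again.
# The EEG capture and phosphene delivery stages are not modeled here.
def encode_word(word):
    return "".join(format(ord(ch), "08b") for ch in word)

def decode_bits(bits):
    chars = [bits[i:i + 8] for i in range(0, len(bits), 8)]
    return "".join(chr(int(c, 2)) for c in chars)

bits = encode_word("hola")          # an example word; the actual stimuli may differ
assert decode_bits(bits) == "hola"
print(len(bits), "bits to transmit:", bits)
```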

And what about experts in neuroethics, psychology, law? Musk says it’s just “an engineering problem. … If we can just use engineering to get neurons to talk to computers, we’ll have done our job, and machine learning can do much of the rest.”

However, it’s not clear how we could be assured our brains aren’t hacked, spied on, and controlled by a repressive government or by other humans — especially those with a more recently updated software version or covert cyborg hardware improvements.

NIRS/EEG brain-computer interface system using non-invasive near-infrared light for sensing “yes” or “no” thoughts, shown on a model (credit: Wyss Center for Bio and Neuroengineering)

In addition, the devices mentioned in WaitButWhy all require some form of neurosurgery, unlike Facebook’s research project to use non-invasive near-infrared light, as shown in this experiment, for example.** And getting implants for non-medical use approved by the FDA will be a challenge, to grossly understate it.

“I think we are about 8 to 10 years away from this being usable by people with no disability,” says Musk, optimistically. However, Musk does not lay out a technology roadmap for going further, as MIT Technology Review notes.

Nonetheless, Neuralink sounds awesome — it should lead to some exciting neuroscience breakthroughs. And Neuralink now has 16 San Francisco job listings here.

* Other experts: Vanessa Tolosa, Lawrence Livermore National Laboratory, one of the world’s foremost researchers on biocompatible materials; Max Hodak, who worked on the development of some groundbreaking BMI technology at Miguel Nicolelis’s lab at Duke University; Ben Rapoport, Neuralink’s neurosurgery expert, with a Ph.D. in electrical engineering and computer science from MIT; Tim Hanson, a UC Berkeley post-doc and expert in flexible electrodes for stable, minimally invasive neural recording; Flip Sabes, a professor at the UCSF School of Medicine and expert in cortical physiology, computational and theoretical modeling, and human psychophysics and physiology; and Tim Gardner, Associate Professor of Biology at Boston University, whose lab works on implanting BMIs in birds to study “how complex songs are assembled from elementary neural units” and learn about “the relationships between patterns of neural activity on different time scales.”

** This binary experiment and the binary Brain-to-brain (B2B) internet communication system mentioned above are the equivalents of the first binary (dot–dash) telegraph message, sent May 24, 1844: “What hath God wrought?”

These may be the last glasses you will ever need to buy

Early prototype of “smart glasses” with liquid-based lenses that can automatically adjust the focus on what a person is seeing, whether it’s far away or close up. The battery-powered frames can automatically adjust the focal length. Researchers expect to have smaller, lighter frames and packaged technology within three years. (credit: Dan Hixson/University of Utah College of Engineering)

Don’t throw away your bifocals or multiple glasses yet, but those days might soon be over. A team led by University of Utah engineers has created “smart glasses” with liquid-based lenses that can automatically adjust the focus on what you’re seeing, at any distance.

They’ve created eyeglass lenses made of glycerin, a thick colorless liquid, enclosed by flexible rubber-like membranes in the front and back. The rear membrane in each lens is connected to a series of three mechanical actuators that push the membrane back and forth like a transparent piston, changing the curvature of the liquid lens and therefore the focal length between the lens and the eye.

Simplified schematic of soft-membrane liquid lens (excluding actuators). The lens optical power is adjusted by vertically displacing the fluid with a transparent piston, deflecting the top membrane and changing its curvature. (credit: Nazmul Hasan et al./Optics Express)

In the bridge of the glasses is a distance meter that measures the distance from the glasses to an object via pulses of near-infrared light. When the wearer looks at an object, the meter instantly measures the distance and tells the actuators how to curve the lenses. If the user then sees another object that’s closer, the distance meter readjusts and tells the actuators to reshape the lens for farsightedness.
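As a rough sense of the arithmetic involved (a textbook thin-lens approximation, not the team’s actual control law): focusing on an object d meters away requires adding roughly 1/d diopters of optical power on top of the wearer’s distance prescription. A minimal sketch:

```python
# Simplified model of the focus adjustment: lens power = distance prescription
# plus a near-add of about 1/d diopters for an object d meters away. This is a
# textbook approximation, not the actual control law used in the prototype.
def required_lens_power(distance_m, distance_prescription_d=0.0):
    if distance_m <= 0:
        raise ValueError("object distance must be positive")
    near_add = 1.0 / distance_m          # diopters needed to refocus at distance_m
    return distance_prescription_d + near_add

# Example: a wearer with a -2.0 D distance prescription looking at 0.4 m, 1 m, 6 m.
for d in (0.4, 1.0, 6.0):
    print(f"object at {d:>4} m -> set lens to {required_lens_power(d, -2.0):+.2f} D")
```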

The lenses can change focus from one object to another in 14 milliseconds (faster than human reaction time). A rechargeable battery in the frames could last more than 24 hours per charge, according to electrical and computer engineering professor Carlos Mastrangelo, senior author of an open-access paper in a special edition of the journal Optics Express.

Before putting them on for the first time, users would input their eyeglasses prescription into an accompanying smartphone app, which then calibrates the lenses automatically via Bluetooth. Users only need to do that once, except for when their prescription changes over time. Theoretically, eyeglass wearers will never have to buy another pair again since these glasses would constantly adjust to their eyesight.

A startup company, Sharpeyes LLC, has been created to commercialize the glasses. The project was funded with a grant from the National Institutes of Health and the National Institute of Biomedical Imaging and Bioengineering.


University of Utah | Smart glasses that automatically focus on whatever you look at


Abstract of Tunable-focus lens for adaptive eyeglasses

We demonstrate the implementation of a compact tunable-focus liquid lens suitable for adaptive eyeglass application. The lens has an aperture diameter of 32 mm, optical power range of 5.6 diopter, and electrical power consumption less than 20 mW. The lens inclusive of its piezoelectric actuation mechanism is 8.4 mm thick and weighs 14.4 gm. The measured lens RMS wavefront aberration error was between 0.73 µm and 0.956 µm.

Terasem Colloquium in Second Life

The 2016 Terasem Annual Colloquium on the Law of Futuristic Persons will take place in Second Life, in “Terasem sim,” on Saturday, Dec. 10, 2016 at noon EDT. The main themes: “Legal Aspects of Futuristic Persons: Cyber-Humans” and “A Tribute to the ‘Father of Artificial Intelligence,’ Marvin Minsky, PhD.”

Each year on December 10th, International Human Rights Day, Terasem conducts a Colloquium on the Law of Futuristic Persons. The event seeks to provide the public with informed perspectives regarding the legal rights and obligations of “futuristic persons” via VR events with expert presentations and discussions. “Terasem hopes to facilitate development of a body of law covering the rights and obligations of entities that transcend, and yet encompass, conventional conceptions of humanness,” according to Terasem Movement, Inc.

12:10–12:30PM — How Marvin Minsky Inspired Me To Have a Mindclone Living on An O’Neill Space Habitat
Martine Rothblatt, JD, PhD
Co-Founder, Terasem Movement, Inc.
Space Coast, FL
Avatar name: Vitology Destiny

12:30–12:50PM — Formal Interaction

12:50–1:10PM — The Emerging Law of Cyborgs
Woodrow “Woody” Barfield, PhD, JD, LLM
Author: Cyber-Humans: Our Future with Machines
Chapel Hill, NC
Avatar name: WoodyBarfield

1:10–1:30PM — Formal Interaction

1:30–1:50PM — Cyborgs and Family Law Challenges
Rich Lee
Human Enhancement & Augmentation
St. George, UT
Avatar name: RichLee78

1:50–2:10PM — Formal Interaction

2:10–2:30PM — Synthetic Brain Simulations and Mens Rea*
Stephen Thaler, PhD.
President & CEO, Imagination-engines, Inc.
St. Charles, MO
Avatar name: SteveThaler

* Mens Rea refers to criminal intent. Moreover, it is the state of mind indicating culpability which is required by statute as an element of a crime. — Cornell University Legal Information Institute

 

Americans worried about gene editing, brain chip implants, and synthetic blood

(iStock Photo)

Many in the general U.S. public are concerned about technologies to make people’s minds sharper and their bodies stronger and healthier than ever before, according to a new Pew Research Center survey of more than 4,700 U.S. adults.

The survey covers broad public reaction to scientific advances and examines public attitudes about the potential use of three specific emerging technologies for human enhancement.

The nationally representative survey centered on public views about gene editing that might give babies a lifetime with much reduced risk of serious disease, implantation of brain chips that potentially could give people a much improved ability to concentrate and process information, and transfusions of synthetic blood that might give people much greater speed, strength, and stamina.

A majority of Americans would be “very” or “somewhat” worried about gene editing (68%); brain chips (69%); and synthetic blood (63%), while no more than half say they would be enthusiastic about each of these developments.

Among the key data:

  • More say they would not want enhancements of their brains and their blood (66% and 63%, respectively) than say they would want them (32% and 35%). U.S. adults are closely split on the question of whether they would want gene editing to help prevent diseases for their babies (48% would, 50% would not).
  • Majorities say these enhancements could exacerbate the divide between haves and have-nots. For instance, 73% believe inequality will increase if brain chips become available because initially they will be obtainable only by the wealthy. At least seven-in-ten predict each of these technologies will become available before they have been fully tested or understood.
  • Substantial shares say they are not sure whether these interventions are morally acceptable. But among those who express an opinion, more people say brain and blood enhancements would be morally unacceptable than say they are acceptable.
  • More adults say the downsides of brain and blood enhancements would outweigh the benefits for society than vice versa. Americans are a bit more positive about the impact of gene editing to reduce disease; 36% think it will have more benefits than downsides, while 28% think it will have more downsides than benefits.
  • Opinion is closely divided when it comes to the fundamental question of whether these potential developments are “meddling with nature” and cross a line that should not be crossed, or whether they are “no different” from other ways that humans have tried to better themselves over time. For example, 49% of adults say transfusions with synthetic blood for much improved physical abilities would be “meddling with nature,” while a roughly equal share (48%) say this idea is no different from other ways humans have tried to better themselves.

The survey data reveal several patterns surrounding Americans’ views about these ideas:

  • People’s views about these human enhancements are strongly linked with their religiosity.
  • People are less accepting of enhancements that produce extreme changes in human abilities. And, if an enhancement is permanent and cannot be undone, people are less inclined to support it.
  • Women tend to be more wary than men about these potential enhancements from cutting-edge technologies.

The survey also finds some similarities between what Americans think about these three potential, future enhancements and their attitudes toward the kinds of enhancements already widely available today. As a point of comparison, this study examined public thinking about a handful of current enhancements, including elective cosmetic surgery, laser eye surgery, skin or lip injections, cosmetic dental procedures to improve one’s smile, hair replacement surgery and contraceptive surgery.

  • 61% of Americans say people are too quick to undergo cosmetic procedures to change their appearance in ways that are not really important, while 36% say “it’s understandable that more people undergo cosmetic procedures these days because it’s a competitive world and people who look more attractive tend to have an advantage.”
  • When it comes to views about elective cosmetic surgery, in particular, 34% say elective cosmetic surgery is “taking technology too far,” while 62% say it is an “appropriate use of technology.” Some 54% of U.S. adults say elective cosmetic surgery leads to about equal benefits and downsides for society, while 26% express the belief that there are more downsides than benefits, and just 16% say society receives more benefits than downsides from cosmetic surgery.

The survey data is drawn from a nationally representative survey of 4,726 U.S. adults conducted by Pew Research Center online and by mail from March 2-28, 2016.

Pew Research Center is a nonpartisan “fact tank” that informs the public about the issues, attitudes and trends shaping America and the world. It does not take policy positions. The center is a subsidiary of The Pew Charitable Trusts, its primary funder.