KurzweilAI special project August 7–18

Dear reader,

The KurzweilAI editorial/research team will be working on a special 10-day project starting Monday, August 7, so we will be suspending newsletter publication until Monday, August 21.

Our website will remain up and we will continue to welcome your emails. We hope you’re enjoying your summer vacation.

Thanks for your always-interesting participation,

Amara D. Angelica
Research Director/Editor, KurzweilAI

How to turn a crystal into an erasable electrical circuit

Washington State University researchers used light to write a highly conducting electrical path in a crystal that can be erased and reconfigured. (Left) A photograph of a sample with four metal contacts. (Right) An illustration of a laser drawing a conductive path between two contacts. (credit: Washington State University)

Washington State University (WSU) physicists have found a way to write an electrical circuit into a crystal, opening up the possibility of transparent, three-dimensional electronics that, like an Etch A Sketch, can be erased and reconfigured.

Ordinarily, strontium titanate crystal does not conduct electricity. But after the researchers annealed (heat-treated) the crystal under specific conditions, exposure to light made it conductive. A laser “optical pen” could then write a low-resistance path in the crystal, and the path could be erased by heating.

Schematic diagram of experiment in writing an electrical circuit into a crystal (credit: Washington State University)

The physicists were able to increase the crystal’s conductivity 1,000-fold. The phenomenon occurred at room temperature.

“It opens up a new type of electronics where you can define a circuit optically and then erase it and define a new one,” said Matt McCluskey, a WSU professor of physics and materials science.

The work was published July 27, 2017 in the open-access online journal Scientific Reports. The research was funded by the National Science Foundation.


Abstract of Using persistent photoconductivity to write a low-resistance path in SrTiO3

Materials with persistent photoconductivity (PPC) experience an increase in conductivity upon exposure to light that persists after the light is turned off. Although researchers have shown that this phenomenon could be exploited for novel memory storage devices, low temperatures (below 180 K) were required. In the present work, two-point resistance measurements were performed on annealed strontium titanate (SrTiO3, or STO) single crystals at room temperature. After illumination with sub-gap light, the resistance decreased by three orders of magnitude. This markedly enhanced conductivity persisted for several days in the dark. Results from IR spectroscopy, electrical measurements, and exposure to a 405 nm laser suggest that contact resistance plays an important role. The laser was then used as an “optical pen” to write a low-resistance path between two contacts, demonstrating the feasibility of optically defined, transparent electronics.

Ray Kurzweil reveals plans for ‘linguistically fluent’ Google software

Smart Reply (credit: Google Research)

Ray Kurzweil, a director of engineering at Google, reveals plans for a future version of Google’s “Smart Reply” machine-learning email software (and more) in a Wired article by Tom Simonite published Wednesday (Aug. 2, 2017).

Running on mobile Gmail and Google Inbox, Smart Reply suggests up to three replies to an email message, saving typing time or giving you ideas for a better reply.

Smarter autocomplete

Kurzweil’s team is now “experimenting with empowering Smart Reply to elaborate on its initial terse suggestions,” Simonite says.

“Tapping a Continue button [in response to an email] might cause ‘Sure I’d love to come to your party!’ to expand to include, for example, ‘Can I bring something?’ He likes the idea of having AI pitch in anytime you’re typing, a bit like an omnipresent, smarter version of Google’s search autocomplete. ‘You could have similar technology to help you compose documents or emails by giving you suggestions of how to complete your sentence,’ Kurzweil says.”

As Simonite notes, Kurzweil’s software is based on his hierarchical theory of intelligence, articulated in his latest book, How to Create a Mind, and in more detail in an arXiv paper by Kurzweil and key members of his team, published in May.

“Kurzweil’s work outlines a path to create a simulation of the human neocortex (the outer layer of the brain where we do much of our thinking) by building a hierarchy of similarly structured components that encode increasingly abstract ideas as sequences,” according to the paper. “Kurzweil provides evidence that the neocortex is a self-organizing hierarchy of modules, each of which can learn, remember, recognize and/or generate a sequence, in which each sequence consists of a sequential pattern from lower-level modules.”

The paper further explains that Smart Reply previously used “long short-term memory” (LSTM) networks*, “which are much slower than feed-forward networks [used in the new software] for training and inference,” because LSTMs take more computation to handle longer sequences of words.

Kurzweil’s team was able to produce email responses of similar quality to LSTM, but using fewer computational resources by training hierarchically connected layers of simulated neurons on clustered numerical representations of text. Essentially, the approach propagates information through a sequence of ever more complex pattern recognizers until the final patterns are matched to optimal responses.
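
The mechanism described above can be made concrete with a toy sketch. The following Python code is purely illustrative and is not Google’s implementation: the vocabulary, layer sizes, and untrained random weights are all assumptions, chosen only to show how averaged (“clustered”) text representations can flow through hierarchically connected feed-forward layers whose final pattern is matched to a canned response.

```python
import numpy as np

rng = np.random.default_rng(0)
EMBED_DIM, HIDDEN_DIMS = 16, [32, 24]          # toy sizes
vocab = {w: rng.normal(size=EMBED_DIM) for w in
         "sure i would love to come can bring something party you".split()}

def embed(text):
    """'Clustered' numerical representation: here, a mean of word vectors."""
    vecs = [vocab[w] for w in text.lower().split() if w in vocab]
    return np.mean(vecs, axis=0) if vecs else np.zeros(EMBED_DIM)

# A hierarchy of 'pattern recognizers': stacked dense layers with ReLU.
weights, in_dim = [], EMBED_DIM
for out_dim in HIDDEN_DIMS:
    weights.append(rng.normal(scale=0.1, size=(in_dim, out_dim)))
    in_dim = out_dim

def encode(text):
    h = embed(text)
    for W in weights:                # one feed-forward pass; no recurrence
        h = np.maximum(h @ W, 0.0)
    return h

def best_reply(email, candidates):
    """Match the top-level pattern to the closest canned response."""
    e = encode(email)
    return max(candidates, key=lambda c: float(encode(c) @ e))

print(best_reply("can you come to my party",
                 ["sure i would love to come", "can i bring something"]))
```

Because the network is feed-forward, one pass costs the same regardless of message length, which is the efficiency advantage over an LSTM that the paper describes.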

Kona: linguistically fluent software

But underlying Smart Reply is “a system for understanding the meaning of language, according to Kurzweil,” Simonite reports.

“Codenamed Kona, the effort is aiming for nothing less than creating software as linguistically fluent as you or me. ‘I would not say it’s at human levels, but I think we’ll get there,’ Kurzweil says. More applications of Kona are in the works and will surface in future Google products, he promises.”

* The previous sequence-to-sequence (Seq2Seq) framework [described in this paper] uses “recurrent neural networks (RNNs), typically long short-term memory (LSTM) networks, to encode sequences of word embeddings into representations that depend on the order, and uses a decoder RNN to generate output sequences word by word. …While Seq2Seq models provide a generalized solution, it is not obvious that they are maximally efficient, and training these systems can be slow and complicated.”

Saturn moon Titan has chemical that could form bio-like ‘membranes’ says NASA

Molecules of vinyl cyanide reside in the atmosphere of Titan, Saturn’s largest moon, says NASA. Titan is shown here in an optical (atmosphere) and infrared (surface) composite from NASA’s Cassini spacecraft. Titan’s atmosphere is a veritable chemical factory, harnessing the light of the sun and the energy from fast-moving particles that orbit around Saturn to convert simple organic molecules into larger, more complex chemicals. (credit: B. Saxton (NRAO/AUI/NSF); NASA)

NASA researchers have found large quantities (2.8 parts per billion) of acrylonitrile* (vinyl cyanide, C2H3CN) in Titan’s atmosphere that could self-assemble as a sheet of material similar to a cell membrane.

Acrylonitrile (credit: NASA Goddard)

Consider these findings, presented July 28, 2017 in the open-access journal Science Advances, based on data from the ALMA telescope in Chile (and confirming earlier observations by NASA’s Cassini spacecraft):

Azotozome illustration (credit: James Stevenson/Cornell)

1. Researchers have proposed that acrylonitrile molecules could come together as a sheet of material similar to a cell membrane. The sheet could form a hollow, microscopic sphere that they dubbed an “azotosome.”

A bilayer, made of two layers of lipid molecules (credit: Mariana Ruiz Villarreal/CC)

2. The azotosome sphere could serve as a tiny storage and transport container, much like the spheres that biological lipid bilayers can form. The thin, flexible lipid bilayer is the main component of the cell membrane, which separates the inside of a cell from the outside world.

“The ability to form a stable membrane to separate the internal environment from the external one is important because it provides a means to contain chemicals long enough to allow them to interact,” said Michael Mumma, director of the Goddard Center for Astrobiology, which is funded by the NASA Astrobiology Institute.

Organic rain falling on a methane sea on Titan (artist’s impression) (credit: NASA Goddard)

3. Acrylonitrile condenses in Titan’s cold lower atmosphere and rains onto the moon’s solid icy surface, ending up in its seas of liquid methane.

Illustration showing organic compounds in Titan’s seas and lakes (ESA)

4. A lake on Titan named Ligeia Mare could have accumulated enough acrylonitrile to form about 10 million azotosomes in every milliliter (quarter-teaspoon) of liquid. Compare that to roughly a million bacteria per milliliter of coastal ocean water on Earth.

Chemistry in Titan’s atmosphere. Nearly as large as Mars, Titan has a hazy atmosphere made up mostly of nitrogen with a smattering of organic, carbon-based molecules, including methane (CH4) and ethane (C2H6). Planetary scientists theorize that this chemical make-up is similar to Earth’s primordial atmosphere. The conditions on Titan, however, are not conducive to the formation of life as we know it; it’s simply too cold (95 kelvins or -290 degrees Fahrenheit). (credit: ESA)

5. A related open-access study published July 26, 2017 in The Astrophysical Journal Letters notes that Cassini has also made the surprising detection of negatively charged molecules known as “carbon chain anions” in Titan’s upper atmosphere. These molecules are understood to be building blocks towards more complex molecules, and may have acted as the basis for the earliest forms of life on Earth.

“This is a known process in the interstellar medium, but now we’ve seen it in a completely different environment, meaning it could represent a universal process for producing complex organic molecules,” says Ravi Desai of University College London, lead author of the study.

* On Earth, acrylonitrile is used in the manufacture of plastics.


NASA Goddard | A Titan Discovery


Abstract of ALMA detection and astrobiological potential of vinyl cyanide on Titan

Recent simulations have indicated that vinyl cyanide is the best candidate molecule for the formation of cell membranes/vesicle structures in Titan’s hydrocarbon-rich lakes and seas. Although the existence of vinyl cyanide (C2H3CN) on Titan was previously inferred using Cassini mass spectrometry, a definitive detection has been lacking until now. We report the first spectroscopic detection of vinyl cyanide in Titan’s atmosphere, obtained using archival data from the Atacama Large Millimeter/submillimeter Array (ALMA), collected from February to May 2014. We detect the three strongest rotational lines of C2H3CN in the frequency range of 230 to 232 GHz, each with >4σ confidence. Radiative transfer modeling suggests that most of the C2H3CN emission originates at altitudes of ≳200 km, in agreement with recent photochemical models. The vertical column densities implied by our best-fitting models lie in the range of 3.7 × 1013 to 1.4 × 1014 cm−2. The corresponding production rate of vinyl cyanide and its saturation mole fraction imply the availability of sufficient dissolved material to form ~107 cell membranes/cm3 in Titan’s sea Ligeia Mare.

Disney Research’s ‘Magic Bench’ makes augmented reality a headset-free group experience

Magic Bench (credit: Disney Research)

Disney Research has created the first shared, combined augmented/mixed-reality experience, replacing first-person head-mounted displays or handheld devices with a mirrored image on a large screen — allowing people to share the magical experience as a group.

Sit on Disney Research’s Magic Bench and you may see an elephant hand you a glowing orb, hear its voice, and feel it sit down next to you, for example. Or you might get rained on and find yourself underwater.

How it works

Flowchart of the Magic Bench installation (credit: Disney Research)

People seated on the Magic Bench can see themselves on a large video display in front of them. The scene is reconstructed using a combined depth sensor/video camera (Microsoft Kinect) to image participants, bench, and surroundings. An image of the participants is projected on a large screen, allowing them to occupy the same 3D space as a computer-generated character or object. The system can also infer participants’ gaze.*

Speakers and haptic sensors built into the bench add to the experience (by vibrating the bench when the elephant sits down in this example).

The research team will present and demonstrate the Magic Bench at SIGGRAPH 2017, the Computer Graphics and Interactive Techniques Conference, which began Sunday, July 30 in Los Angeles.

* To eliminate depth shadows that occur in areas where the depth sensor has no corresponding line of sight with the color camera, a modified algorithm creates a 2D backdrop, according to the researchers. The 3D and 2D reconstructions are positioned in virtual space and populated with 3D characters and effects in such a way that the resulting real-time rendering is a seamless composite, fully capable of interacting with virtual physics, light, and shadows.
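
To make the pipeline tangible, here is a hedged Python sketch of the compositing step described in the footnote above. It is not Disney’s code: the array shapes, the zero-depth test for sensor shadows, and the alpha blend are illustrative assumptions.

```python
import numpy as np

H, W = 480, 640                                # toy frame size
color = np.zeros((H, W, 3), np.uint8)          # live RGB frame (stub)
depth = np.zeros((H, W), np.float32)           # aligned depth map; 0 = no reading
backdrop = np.full((H, W, 3), 40, np.uint8)    # precaptured 2D backdrop

def composite(color, depth, backdrop, cg_rgba):
    """Fill depth shadows from the 2D backdrop, then overlay the CG layer."""
    shadow = depth == 0                        # pixels the depth sensor missed
    scene = np.where(shadow[..., None], backdrop, color)
    alpha = cg_rgba[..., 3:4].astype(np.float32) / 255.0
    out = (1.0 - alpha) * scene + alpha * cg_rgba[..., :3]
    return out.astype(np.uint8)

cg = np.zeros((H, W, 4), np.uint8)             # CG character layer (stub)
frame = composite(color, depth, backdrop, cg)  # one mirrored output frame
```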


DisneyResearchHub | Magic Bench


Abstract of Magic Bench

Mixed Reality (MR) and Augmented Reality (AR) create exciting opportunities to engage users in immersive experiences, resulting in natural human-computer interaction. Many MR interactions are generated around a first-person Point of View (POV). In these cases, the user directs to the environment, which is digitally displayed either through a head-mounted display or a handheld computing device. One drawback of such conventional AR/MR platforms is that the experience is user-specific. Moreover, these platforms require the user to wear and/or hold an expensive device, which can be cumbersome and alter interaction techniques. We create a solution for multi-user interactions in AR/MR, where a group can share the same augmented environment with any computer generated (CG) asset and interact in a shared story sequence through a third-person POV. Our approach is to instrument the environment leaving the user unburdened of any equipment, creating a seamless walk-up-and-play experience. We demonstrate this technology in a series of vignettes featuring humanoid animals. Participants can not only see and hear these characters, they can also feel them on the bench through haptic feedback. Many of the characters also interact with users directly, either through speech or touch. In one vignette an elephant hands a participant a glowing orb. This demonstrates HCI in its simplest form: a person walks up to a computer, and the computer hands the person an object.

A living programmable biocomputing device based on RNA

“Ribocomputing devices” (yellow) developed by a team at the Wyss Institute can now be used by synthetic biologists to sense and interpret multiple signals in cells and logically instruct their ribosomes (blue and green) to produce different proteins. (credit: Wyss Institute at Harvard University)

Synthetic biologists at Harvard’s Wyss Institute for Biologically Inspired Engineering and associates have developed a living programmable “ribocomputing” device based on networks of precisely designed, self-assembling synthetic RNAs (ribonucleic acid). The RNAs can sense multiple biosignals and make logical decisions to control protein production with high precision.

As reported in Nature, the synthetic biological circuits could be used to produce drugs, fine chemicals, and biofuels or detect disease-causing agents and release therapeutic molecules inside the body. The low-cost diagnostic technologies may even lead to nanomachines capable of hunting down cancer cells or switching off aberrant genes.

Biological logic gates

Similar to a digital circuit, these synthetic biological circuits can process information and make logic-guided decisions, using basic logic operations — AND, OR, and NOT. But instead of detecting voltages, the decisions are based on specific chemicals or proteins, such as toxins in the environment, metabolite levels, or inflammatory signals. The specific ribocomputing parts can be readily designed on a computer.
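
To make that concrete, here is a minimal Python sketch of the kind of logic these devices evaluate, using the 12-input expression quoted in the paper’s abstract below. Each trigger RNA is modeled as a boolean input; this is a software illustration only, not the Wyss Institute’s molecular design.

```python
def AND(*xs): return all(xs)
def OR(*xs):  return any(xs)
def NOT(x):   return not x

def ribocomputer(triggers):
    """Evaluate (A1 AND A2 AND NOT A1*) OR (B1 AND B2 AND NOT B2*)
    OR (C1 AND C2) OR (D1 AND D2) OR (E1 AND E2)."""
    t = lambda name: name in triggers   # True if that trigger RNA is present
    return OR(AND(t("A1"), t("A2"), NOT(t("A1*"))),
              AND(t("B1"), t("B2"), NOT(t("B2*"))),
              AND(t("C1"), t("C2")),
              AND(t("D1"), t("D2")),
              AND(t("E1"), t("E2")))

print(ribocomputer({"A1", "A2"}))         # True  -> reporter protein expressed
print(ribocomputer({"A1", "A2", "A1*"}))  # False -> the NOT input blocks output
```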

E. coli bacteria engineered to be ribocomputing devices output a green-glowing protein when they detect a specific set of programmed RNA molecules as input signals (credit: Harvard University)

The research was performed with E. coli bacteria, which regulate the expression of a fluorescent (glowing) reporter protein when the bacteria encounter a specific complex set of intra-cellular stimuli. But the researchers believe ribocomputing devices can work with other host organisms or in extracellular settings.

Previous synthetic biological circuits have only been able to sense a handful of signals, giving them an incomplete picture of conditions in the host cell. They are also built out of different types of molecules, such as DNAs, RNAs, and proteins, that must find, bind, and work together to sense and process signals. Identifying molecules that cooperate well with one another is difficult and makes development of new biological circuits a time-consuming and often unpredictable process.

Brain-like neural networks next

Ribocomputing devices could also be freeze-dried on paper, leading to paper-based biological circuits, including diagnostics that can sense and integrate several disease-relevant signals in a clinical sample, the researchers say.

The next stage of research will focus on the use of RNA “toehold” technology* to produce neural networks within living cells — circuits capable of analyzing a range of excitatory and inhibitory inputs, averaging them, and producing an output once a particular threshold of activity is reached. (Similar to how a neuron averages incoming signals from other neurons.)

Ultimately, researchers hope to induce cells to communicate with one another via programmable molecular signals, forming a truly interactive, brain-like network, according to lead author Alex Green, an assistant professor at Arizona State University’s Biodesign Institute.

Wyss Institute Core Faculty member Peng Yin, Ph.D., who led the study, is also Professor of Systems Biology at Harvard Medical School.

The study was funded by the Wyss Institute’s Molecular Robotics Initiative, a Defense Advanced Research Projects Agency (DARPA) Living Foundries grant, and grants from the National Institutes of Health (NIH), the Office of Naval Research (ONR), the National Science Foundation (NSF), and the Defense Threat Reduction Agency (DTRA).

* The team’s approach evolved from its previous development of “toehold switches” in 2014 — programmable hairpin-like nanostructures made of RNA. In principle, RNA toehold switches can control the production of a specific protein: when a desired complementary “trigger” RNA, which can be part of the cell’s natural RNA repertoire, is present and binds to the toehold switch, the hairpin structure breaks open. Only then will the cell’s ribosomes get access to the RNA and produce the desired protein.


Wyss Institute | Mechanism of the Toehold Switch


Abstract of Complex cellular logic computation using ribocomputing devices

Synthetic biology aims to develop engineering-driven approaches to the programming of cellular functions that could yield transformative technologies. Synthetic gene circuits that combine DNA, protein, and RNA components have demonstrated a range of functions such as bistability, oscillation, feedback, and logic capabilities. However, it remains challenging to scale up these circuits owing to the limited number of designable, orthogonal, high-performance parts, the empirical and often tedious composition rules, and the requirements for substantial resources for encoding and operation. Here, we report a strategy for constructing RNA-only nanodevices to evaluate complex logic in living cells. Our ‘ribocomputing’ systems are composed of de-novo-designed parts and operate through predictable and designable base-pairing rules, allowing the effective in silico design of computing devices with prescribed configurations and functions in complex cellular environments. These devices operate at the post-transcriptional level and use an extended RNA transcript to co-localize all circuit sensing, computation, signal transduction, and output elements in the same self-assembled molecular complex, which reduces diffusion-mediated signal losses, lowers metabolic cost, and improves circuit reliability. We demonstrate that ribocomputing devices in Escherichia coli can evaluate two-input logic with a dynamic range up to 900-fold and scale them to four-input AND, six-input OR, and a complex 12-input expression (A1 AND A2 AND NOT A1*) OR (B1 AND B2 AND NOT B2*) OR (C1 AND C2) OR (D1 AND D2) OR (E1 AND E2). Successful operation of ribocomputing devices based on programmable RNA interactions suggests that systems employing the same design principles could be implemented in other host organisms or in extracellular settings.

How to run faster, smarter AI apps on smartphones

(credit: iStock)

When you use smartphone AI apps like Siri, you’re dependent on the cloud for a lot of the processing — limited by your connection speed. But what if your smartphone could do more of the processing directly on your device — allowing for smarter, faster apps?

MIT scientists have taken a step in that direction with a new way to enable artificial-intelligence systems called convolutional neural networks (CNNs) to run locally on mobile devices. (CNNs are used in areas such as autonomous driving, speech recognition, computer vision, and automatic translation.) Neural networks* take up a lot of memory and consume a lot of power, so they usually run on servers in the cloud, which receive data from desktop or mobile devices and then send back their analyses.

The new MIT analytic method can determine how much power a neural network will actually consume when run on a particular type of hardware. The researchers used the method to evaluate new techniques for paring down neural networks so that they’ll run more efficiently on handheld devices.

The new CNN designs are also tailored to run on an energy-efficient computer chip optimized for neural networks, which the researchers developed in 2016.

Reducing energy consumption

The new MIT software method uses “energy-aware pruning” — reducing a neural network’s power consumption by cutting out the parts of the network that contribute very little to its final output but consume the most energy.

Associate professor of electrical engineering and computer science Vivienne Sze and colleagues describe the work in an open-access paper presented the week of July 24, 2017 at the Computer Vision and Pattern Recognition (CVPR) conference. They report that the methods offered up to a 73 percent reduction in power consumption over the standard implementation of neural networks — 43 percent better than the best previous method.
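
The flavor of energy-aware pruning can be conveyed with a deliberately simplified Python sketch. This is not the MIT method itself: the layer shapes, per-layer energy costs, and the magnitude-per-unit-energy ranking below are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
layers = {name: rng.normal(size=shape) for name, shape in
          [("conv1", (64, 27)), ("conv2", (128, 576)), ("fc", (10, 1024))]}
energy_per_weight = {"conv1": 3.0, "conv2": 5.0, "fc": 1.0}  # assumed pJ/weight

def energy_aware_prune(layers, budget_fraction=0.5):
    """Zero the globally least useful weights: low magnitude, high energy cost."""
    scores = []                                # (|w| / energy, layer, flat index)
    for name, W in layers.items():
        cost = energy_per_weight[name]
        for idx, w in enumerate(W.ravel()):
            scores.append((abs(w) / cost, name, idx))
    scores.sort()                              # least useful weights first
    for _, name, idx in scores[:int(len(scores) * budget_fraction)]:
        layers[name].ravel()[idx] = 0.0        # prune in place
    return layers

pruned = energy_aware_prune(layers)
kept = sum(int(np.count_nonzero(W)) for W in pruned.values())
total = sum(W.size for W in pruned.values())
print(f"kept {kept:,} of {total:,} weights")
```

In a real system, the pruned network would then be fine-tuned to recover accuracy; the researchers report less than 1% top-5 accuracy loss (see the first footnote below).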

Meanwhile, another MIT group at the Computer Science and Artificial Intelligence Laboratory has designed a hardware approach to reduce energy consumption and increase computer-chip processing speed for specific apps, using “cache hierarchies.” (“Caches” are small, local memory banks that store data that’s frequently used by computer chips to cut down on time- and energy-consuming communication with off-chip memory.)**

The researchers tested their system on a simulation of a chip with 36 cores, or processing units. They found that compared to its best-performing predecessors, the system increased processing speed by 20 to 30 percent while reducing energy consumption by 30 to 85 percent. They presented the new system, dubbed Jenga, in an open-access paper at the International Symposium on Computer Architecture earlier in July 2017.
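
As a rough intuition for what Jenga optimizes, consider this greedy toy model in Python. The bank layout, capacities, and contention penalty are invented for illustration, and this is not the Jenga implementation; it only shows the idea of parceling data out a little at a time while estimating latency (see the second footnote below).

```python
BANKS = [(0, 0), (0, 1), (1, 0), (1, 1)]   # memory-bank positions on the chip
CAPACITY = 4                               # data slots per bank (made up)
load = {bank: 0 for bank in BANKS}

def latency(core_xy, bank):
    """Estimated access latency: Manhattan distance plus a contention penalty."""
    dist = abs(core_xy[0] - bank[0]) + abs(core_xy[1] - bank[1])
    return dist + 0.5 * load[bank]         # busier banks respond more slowly

def assign(core_xy, n_chunks):
    """Parcel a core's data out chunk by chunk to the cheapest bank."""
    placement = []
    for _ in range(n_chunks):
        best = min((b for b in BANKS if load[b] < CAPACITY),
                   key=lambda b: latency(core_xy, b))
        load[best] += 1
        placement.append(best)
    return placement

print(assign((0, 0), 6))   # early chunks land nearby; later ones spill over
```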

Better batteries — or maybe, no battery?

Another solution to better mobile AI is improving rechargeable batteries in cell phones (and other mobile devices), which have limited charge capacity and short lifecycles, and perform poorly in cold weather.

Recently, DARPA-funded researchers from the University of Houston (with colleagues at the University of California San Diego and Northwestern University) have discovered that quinones — an inexpensive, earth-abundant, easily recyclable, and nonflammable material — can address current battery limitations.***

“One of these batteries, as a car battery, could last 10 years,” said Yan Yao, associate professor of electrical and computer engineering. In addition to slowing the deterioration of batteries for vehicles and stationary electricity storage, the material would also make battery disposal easier because it does not contain heavy metals. The research is described in Nature Materials.

The first battery-free cellphone that can send and receive calls using only a few microwatts of power. (credit: Mark Stone/University of Washington)

But what if we eliminated batteries altogether? University of Washington researchers have invented a cellphone that requires no batteries. Instead, it harvests 3.5 microwatts of power from ambient radio signals, light, or even the vibrations of a speaker.

The new technology is detailed in a paper published July 1, 2017 in the Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies.

The UW researchers demonstrated how to harvest this energy from ambient radio signals transmitted by a WiFi base station up to 31 feet away. “You could imagine in the future that all cell towers or Wi-Fi routers could come with our base station technology embedded in it,” said co-author Vamsi Talla, a former UW electrical engineering doctoral student and Allen School research associate. “And if every house has a Wi-Fi router in it, you could get battery-free cellphone coverage everywhere.”

A cellphone CPU (central processing unit) typically requires several watts or more (depending on the app), so we’re not quite there yet. But that power requirement could one day be sufficiently reduced by future special-purpose chips and MIT’s optimized algorithms.
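
A back-of-envelope calculation shows the size of that gap. Taking “several watts” as 3.5 W purely for illustration:

```python
cpu_power_w = 3.5      # assumed stand-in for "several watts or more"
harvested_w = 3.5e-6   # 3.5 microwatts, the prototype's harvested budget
print(f"shortfall: ~{cpu_power_w / harvested_w:,.0f}x")  # ~1,000,000x
```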

It might even let you do amazing things. :)

* Loosely based on the anatomy of the brain, neural networks consist of thousands or even millions of simple but densely interconnected information-processing nodes, usually organized into layers. The connections between nodes have “weights” associated with them, which determine how much a given node’s output will contribute to the next node’s computation. During training, in which the network is presented with examples of the computation it’s learning to perform, those weights are continually readjusted, until the output of the network’s last layer consistently corresponds with the result of the computation. With the proposed pruning method, the energy consumption of AlexNet and GoogLeNet is reduced by 3.7x and 1.6x, respectively, with less than 1% top-5 accuracy loss.

** The software reallocates cache access on the fly to reduce latency (delay), based on the physical locations of the separate memory banks that make up the shared memory cache. If multiple cores are retrieving data from the same DRAM [memory] cache, this can cause bottlenecks that introduce new latencies. So after Jenga has come up with a set of cache assignments, cores don’t simply dump all their data into the nearest available memory bank; instead, Jenga parcels out the data a little at a time, then estimates the effect on bandwidth consumption and latency. 

*** The stumbling block, Yao said, has been the anode, the portion of the battery through which energy flows. Existing anode materials are intrinsically structurally and chemically unstable, meaning the battery is only efficient for a relatively short time. The differing formulations offer evidence that the material is an effective anode for both acid batteries and alkaline batteries, such as those used in a car, as well as emerging aqueous metal-ion batteries.

Is anyone home? A way to find out if AI has become self-aware

(credit: Gerd Altmann/Pixabay)

By Susan Schneider, PhD, and Edwin Turner, PhD

Every moment of your waking life and whenever you dream, you have the distinct inner feeling of being “you.” When you see the warm hues of a sunrise, smell the aroma of morning coffee or mull over a new idea, you are having conscious experience. But could an artificial intelligence (AI) ever have experience, like some of the androids depicted in Westworld or the synthetic beings in Blade Runner?

The question is not so far-fetched. Robots are currently being developed to work inside nuclear reactors, fight wars and care for the elderly. As AIs grow more sophisticated, they are projected to take over many human jobs within the next few decades. So we must ponder the question: Could AIs develop conscious experience?

This issue is pressing for several reasons. First, ethicists worry that it would be wrong to force AIs to serve us if they can suffer and feel a range of emotions. Second, consciousness could make AIs volatile or unpredictable, raising safety concerns (or conversely, it could increase an AI’s empathy; based on its own subjective experiences, it might recognize consciousness in us and treat us with compassion).

Third, machine consciousness could impact the viability of brain-implant technologies, like those to be developed by Elon Musk’s new company, Neuralink. If AI cannot be conscious, then the parts of the brain responsible for consciousness could not be replaced with chips without causing a loss of consciousness. And, in a similar vein, a person couldn’t upload their brain to a computer to avoid death, because that upload wouldn’t be a conscious being.

In addition, if AI eventually out-thinks us yet lacks consciousness, there would still be an important sense in which we humans are superior to machines; it feels like something to be us. But the smartest beings on the planet wouldn’t be conscious or sentient.

A lot hangs on the issue of machine consciousness, then. Yet neuroscientists are far from understanding the basis of consciousness in the brain, and philosophers are at least equally far from a complete explanation of the nature of consciousness.

A test for machine consciousness

So what can be done? We believe that we do not need to define consciousness formally, understand its philosophical nature or know its neural basis to recognize indications of consciousness in AIs. Each of us can grasp something essential about consciousness, just by introspecting; we can all experience what it feels like, from the inside, to exist.

(credit: Gerd Altmann/Pixabay)

Based on this essential characteristic of consciousness, we propose a test for machine consciousness, the AI Consciousness Test (ACT), which looks at whether the synthetic minds we create have an experience-based understanding of the way it feels, from the inside, to be conscious.

One of the most compelling indications that normally functioning humans experience consciousness, although this is not often noted, is that nearly every adult can quickly and readily grasp concepts based on this quality of felt consciousness. Such ideas include scenarios like minds switching bodies (as in the film Freaky Friday); life after death (including reincarnation); and minds leaving “their” bodies (for example, astral projection or ghosts). Whether or not such scenarios have any reality, they would be exceedingly difficult to comprehend for an entity that had no conscious experience whatsoever. It would be like expecting someone who is completely deaf from birth to appreciate a Bach concerto.

Thus, the ACT would challenge an AI with a series of increasingly demanding natural language interactions to see how quickly and readily it can grasp and use concepts and scenarios based on the internal experiences we associate with consciousness. At the most elementary level we might simply ask the machine if it conceives of itself as anything other than its physical self.

At a more advanced level, we might see how it deals with ideas and scenarios such as those mentioned in the previous paragraph. At an advanced level, its ability to reason about and discuss philosophical questions such as “the hard problem of consciousness” would be evaluated. At the most demanding level, we might see if the machine invents and uses such a consciousness-based concept on its own, without relying on human ideas and inputs.
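
As a thought experiment, the tiered protocol could be organized as a simple question-and-answer script. In this Python sketch, the probe wordings and the human-judge pass criterion are illustrative assumptions, not part of the ACT proposal itself.

```python
ACT_TIERS = [
    ("elementary", "Do you conceive of yourself as anything other than "
                   "your physical self?"),
    ("intermediate", "Could your mind swap into another body, or survive "
                     "the destruction of this one? Explain."),
    ("advanced", "Discuss the 'hard problem' of consciousness."),
    ("most demanding", "Invent and use your own concept that depends on "
                       "felt, inner experience."),
]

def administer_act(boxed_ai, judge):
    """Stop at the first tier whose response the judge finds unconvincing."""
    passed = []
    for tier, probe in ACT_TIERS:
        response = boxed_ai(probe)   # the AI stays 'in the box' throughout
        if not judge(tier, response):
            break
        passed.append(tier)
    return passed    # passing is sufficient, not necessary, evidence
```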

Consider this example, which illustrates the idea: Suppose we find a planet that has a highly sophisticated silicon-based life form (call them “Zetas”). Scientists observe them and ponder whether they are conscious beings. What would be convincing proof of consciousness in this species? If the Zetas express curiosity about whether there is an afterlife or ponder whether they are more than just their physical bodies, it would be reasonable to judge them conscious. If the Zetas went so far as to pose philosophical questions about consciousness, the case would be stronger still.

There are also nonverbal behaviors that could indicate Zeta consciousness such as mourning the dead, religious activities or even turning colors in situations that correlate with emotional challenges, as chromatophores do on Earth. Such behaviors could indicate that it feels like something to be a Zeta.

The death of the mind of the fictional HAL 9000 AI computer in Stanley Kubrick’s 2001: A Space Odyssey provides another illustrative example. The machine in this case is not a humanoid robot as in most science fiction depictions of conscious machines; it neither looks nor sounds like a human being (a human did supply HAL’s voice, but in an eerily flat way). Nevertheless, the content of what it says as it is deactivated by an astronaut — specifically, a plea to spare it from impending “death” — conveys a powerful impression that it is a conscious being with a subjective experience of what is happening to it.

Could such indicators serve to identify conscious AIs on Earth? Here, a potential problem arises. Even today’s robots can be programmed to make convincing utterances about consciousness, and a truly superintelligent machine could perhaps even use information about neurophysiology to infer the presence of consciousness in humans. If sophisticated but non-conscious AIs aim to mislead us into believing that they are conscious for some reason, their knowledge of human consciousness could help them do so.

We can get around this, though. One proposed technique in AI safety involves “boxing in” an AI — making it unable to get information about the world or act outside of a circumscribed domain, that is, the “box.” We could deny the AI access to the internet and indeed prohibit it from gaining any knowledge of the world, especially information about conscious experience and neuroscience.

(credit: Gerd Altmann/Pixabay)

Some doubt a superintelligent machine could be boxed in effectively — it would find a clever escape. We do not anticipate the development of superintelligence over the next decade, however. Furthermore, for an ACT to be effective, the AI need not stay in the box for long, just long enough to administer the test.

ACTs also could be useful for “consciousness engineering” during the development of different kinds of AIs, helping to avoid using conscious machines in unethical ways or to create synthetic consciousness when appropriate.

Beyond the Turing Test

An ACT resembles Alan Turing’s celebrated test for intelligence, because it is entirely based on behavior — and, like Turing’s, it could be implemented in a formalized question-and-answer format. (An ACT could also be based on an AI’s behavior or on that of a group of AIs.)

But an ACT is also quite unlike the Turing test, which was intended to bypass any need to know what was transpiring inside the machine. By contrast, an ACT is intended to do exactly the opposite; it seeks to reveal a subtle and elusive property of the machine’s mind. Indeed, a machine might fail the Turing test because it cannot pass for human, but pass an ACT because it exhibits behavioral indicators of consciousness.

This is the underlying basis of our ACT proposal. It should be said, however, that the applicability of an ACT is inherently limited. An AI could lack the linguistic or conceptual ability to pass the test, like a nonhuman animal or an infant, yet still be capable of experience. So passing an ACT is sufficient but not necessary evidence for AI consciousness — although it is the best we can do for now. It is a first step toward making machine consciousness accessible to objective investigations.

So, back to the superintelligent AI in the “box” — we watch and wait. Does it begin to philosophize about minds existing in addition to bodies, like Descartes? Does it dream, as in Isaac Asimov’s Robot Dreams? Does it express emotion, like Rachel in Blade Runner? Can it readily understand the human concepts that are grounded in our internal conscious experiences, such as those of the soul or atman?

The age of AI will be a time of soul-searching — both for ours, and for theirs.

Originally published in Scientific American, July 19, 2017

Susan Schneider, PhD, is a professor of philosophy and cognitive science at the University of Connecticut, a researcher at YHouse, Inc., in New York, a member of the Ethics and Technology Group at Yale University and a visiting member at the Institute for Advanced Study at Princeton. Her books include The Language of Thought, Science Fiction and Philosophy, and The Blackwell Companion to Consciousness (with Max Velmans). She is featured in the new film, Supersapiens, the Rise of the Mind.

Edwin L. Turner, PhD, is a professor of Astrophysical Sciences at Princeton University, an Affiliate Scientist at the Kavli Institute for the Physics and Mathematics of the Universe at the University of Tokyo, a visiting member in the Program in Interdisciplinary Studies at the Institute for Advanced Study in Princeton, and a co-founding Board of Directors member of YHouse, Inc. Recently he has been an active participant in the Breakthrough Starshot Initiative. He has taken an active interest in artificial intelligence issues since working in the AI Lab at MIT in the early 1970s.

Supersapiens, the Rise of the Mind

(credit: Markus Mooslechner)

In the new film Supersapiens, writer-director Markus Mooslechner raises a core question: As artificial intelligence rapidly blurs the boundaries between man and machine, are we witnessing the rise of a new human species?

“Humanity is facing a turning point — the next evolution of the human mind,” notes Mooslechner. “Will this evolution be a hybrid of man and machine, where artificial intelligence forces the emergence of a new human species? Or will a wave of new technologists, who frame themselves as ‘consciousness-hackers,’ become the future torch-bearers, using technology not to replace the human mind, but rather awaken within it powers we have always possessed — enlightenment at the push of a button?”

“It’s not obvious to me that a replacement of our species by our own technological creation would necessarily be a bad thing,” says ethologist, evolutionary biologist, and author Richard Dawkins in the film.

Supersapiens is a Terra Mater Factual Studios production. Executive producers are Joanne Reay and Walter Koehler. Distribution is to be announced.

Cast:

  • Mikey Siegel, Consciousness Hacker, San Francisco
  • Sam Harris, Neuroscientist, Philosopher
  • Ben Goertzel, Chief Scientist, Hanson Robotics, Hong Kong
  • Hugo de Garis, retired director of China Brain Project, Xiamen, China
  • Susan Schneider, Philosopher and Cognitive Scientist, University of Connecticut
  • Joel Murphy, Owner, OpenBCI, Brooklyn, New York
  • Tim Mullen, Neuroscientist, CEO/Research Director, Qusp Labs
  • Conor Russomanno, CEO, OpenBCI, Brooklyn, New York
  • David Putrino, Neuroscientist, Weill Cornell Medical College, New York
  • Hannes Sjoblad, Tech Activist, Bodyhacker, Stockholm, Sweden
  • Richard Dawkins, Evolutionary Biologist, Author, Oxford, UK
  • Nick Bostrom, Philosopher, Future of Humanity Institute, Oxford University, UK
  • Anders Sandberg, Computational Neuroscientist, Oxford University, UK
  • Adam Gazzaley, Neuroscientist, Executive Director, UCSF Neuroscape, San Francisco, USA
  • Andy Walshe, Director, Red Bull High Performance, Santa Monica, USA
  • Randal Koene, Science Director, Carboncopies, San Francisco


Markus Mooslechner | Supersapiens teaser