Two new wearable sensors may replace traditional medical diagnostic devices

Throat-motion sensor monitors stroke effects more effectively

A radical new type of stretchable, wearable sensor that measures vocal-cord movements could be a “game changer” for stroke rehabilitation, according to Northwestern University scientists. The sensors can also measure swallowing ability (which may be affected by stroke), heart function, muscle activity, and sleep quality. Developed in the lab of engineering professor John A. Rogers, Ph.D., in partnership with Shirley Ryan AbilityLab in Chicago, the new sensors have already been used with dozens of patients.

“One of the biggest problems we face with stroke patients is that their gains tend to drop off when they leave the hospital,” said Arun Jayaraman, Ph.D., research scientist at the Shirley Ryan AbilityLab and a wearable-technology expert. “With the home monitoring enabled by these sensors, we can intervene at the right time, which could lead to better, faster recoveries for patients.”

(credit: Elliott Abel/ Shirley Ryan AbilityLab)

Monitoring movements, not sounds. The new band-aid-like stretchable throat sensor (two are applied) measures speech patterns by detecting throat movements to improve diagnosis and treatment of aphasia, a communication disorder associated with stroke.

Speech-language pathologists currently use microphones to monitor patients’ speech functions, but microphones can’t distinguish patients’ voices from ambient noise.

(credit: Elliott Abel/ Shirley Ryan AbilityLab)

Full-body kinematics. AbilityLab also uses similar electronic biosensors (developed in Rogers’ lab) on the legs, arms and chest to monitor stroke patients’ recovery progress. The sensors stream data wirelessly to clinicians’ phones and computers, providing a quantitative, full-body picture of patients’ advanced physical and physiological responses in real time.

Patients can wear them even after they leave the hospital, allowing doctors to understand how their patients are functioning in the real world.

 

(credit: Elliott Abel/ Shirley Ryan AbilityLab)

Mobile displays. Data from the sensors will be presented in a simple iPad-like display that is easy for both clinicians and patients to understand. It will send alerts when patients are under-performing on a certain metric and allow them to set and track progress toward their goals. A smartphone app can also help patients make corrections.

The researchers plan to test the sensors on patients with other conditions, such as Parkinson’s disease.

 

(credit: Elliott Abel/ Shirley Ryan AbilityLab)

Body-chemical sensor. Another patch developed by the Rogers Lab does colorimetric analysis — determining the concentration of a chemical from its color — for measuring sweat rate/loss and electrolyte loss. The Rogers Lab has a contract with Gatorade, and is testing this technology with the U.S. Air Force, the Seattle Mariners, and other unnamed sports teams.

Companion phone apps will capture the patch’s precise colors and use algorithms to extract the data.
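The colorimetric idea can be sketched in a few lines: a camera reads the patch color, and a calibration curve maps a color-channel value to a concentration. Everything below — the channel choice, the calibration points, the function name — is an invented illustration, not the Rogers Lab’s actual algorithm.

```python
# Hypothetical colorimetric data extraction: linearly interpolate a
# concentration from a camera color-channel reading, using calibration
# points measured beforehand. All numbers here are invented.

def estimate_concentration(green_value, calibration):
    """Interpolate concentration (mM) from a green-channel reading (0-255)."""
    pts = sorted(calibration)  # (green_value, concentration_mM) pairs
    if green_value <= pts[0][0]:
        return pts[0][1]       # clamp below the calibrated range
    if green_value >= pts[-1][0]:
        return pts[-1][1]      # clamp above the calibrated range
    for (g0, c0), (g1, c1) in zip(pts, pts[1:]):
        if g0 <= green_value <= g1:
            t = (green_value - g0) / (g1 - g0)
            return c0 + t * (c1 - c0)

# Example calibration: darker green = higher chloride (values invented)
calibration = [(60, 100.0), (120, 50.0), (180, 10.0)]
print(estimate_concentration(90, calibration))  # halfway between 100 and 50
```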

A wearable electrocardiogram

Electrocardiogram on a prototype skin sensor (credit: 2018 Takao Someya Research Group)

Wearing your heart on your sleeve. Imagine looking at an electrocardiogram displayed on your wrist, using a simple skin sensor (replacing the usual complex array of EKG body electrodes), linked wirelessly to a smartphone or the cloud.

That’s the concept for a new wearable device developed by a team headed by Professor Takao Someya at the University of Tokyo’s Graduate School of Engineering and Dai Nippon Printing (DNP). It’s designed to provide continuous, non-invasive health monitoring.

 

The soft, flexible skin display is about 1 millimeter thick. (credit: 2018 Takao Someya Research Group.)

Stretchable nanomesh. The device uses a lightweight sensor made from a nanomesh electrode and a display made from a 16 x 24 array of micro LEDs and stretchable wiring, mounted on a rubber sheet. It’s stretchable by up to 45 percent of its original length and can be worn on the skin continuously for a week without causing inflammation.

The sensor can also measure temperature, pressure, and the electrical properties of muscle, and can display messages on skin.

DNP hopes to bring the integrated skin display to market within three years.

Neuroscientists reverse Alzheimer’s disease in mice

The brain of a 10-month-old mouse with Alzheimer’s disease (left) is full of amyloid plaques (red). These hallmarks of Alzheimer’s disease are reversed in animals that have gradually lost the BACE1 enzyme (right). (credit: Hu et al., 2018)

Researchers from the Cleveland Clinic Lerner Research Institute have completely reversed the formation of amyloid plaques in the brains of mice with Alzheimer’s disease by gradually depleting an enzyme called BACE1. The procedure also improved the animals’ cognitive function.

The study, published February 14 in the Journal of Experimental Medicine, raises hopes that drugs targeting this enzyme will be able to successfully treat Alzheimer’s disease in humans.


Background: Serious side effects

One of the earliest events in Alzheimer’s disease is an abnormal buildup of beta-amyloid peptide, which can form large amyloid plaques in the brain and disrupt the function of neuronal synapses. The BACE1 (aka beta-secretase) enzyme helps produce beta-amyloid peptide by cleaving (splitting) amyloid precursor protein (APP). So drugs that inhibit BACE1 are being developed as potential Alzheimer’s disease treatments. But that’s a problem because BACE1 also controls many important neural processes; accidental cleaving of other proteins instead of APP could lead these drugs to have serious side effects. For example, mice completely lacking BACE1 suffer severe neurodevelopmental defects.


A genetic-engineering solution

To deal with the serious side effects, the researchers generated mice that gradually lose the BACE1 enzyme as they grow older. These mice developed normally and appeared to remain perfectly healthy over time. The researchers then bred these rodents with mice that start to develop amyloid plaques and Alzheimer’s disease when they are 75 days old.

The resulting offspring’s BACE1 levels were approximately 50% lower than normal, and they still formed plaques at 75 days old. However, as these mice continued to age and lose BACE1 activity, beta-amyloid peptide levels fell and the plaques began to disappear. At 10 months old, the mice had no plaques in their brains. Loss of BACE1 also improved the learning and memory of mice with Alzheimer’s disease.

“To our knowledge, this is the first observation of such a dramatic reversal of amyloid deposition in any study of Alzheimer’s disease mouse models,” says senior author Riqiang Yan, who will become chair of the department of neuroscience at the University of Connecticut this spring.

Decreasing BACE1 activity also reversed other hallmarks of Alzheimer’s disease, such as activation of microglial cells and the formation of abnormal neuronal processes.

However, the researchers also found that depletion of BACE1 only partially restored synaptic function, suggesting that BACE1 may be required for optimal synaptic activity and cognition.

“Our study provides genetic evidence that preformed amyloid deposition can be completely reversed after sequential and increased deletion of BACE1 in the adult,” says Yan. “Our data show that BACE1 inhibitors have the potential to treat Alzheimer’s disease patients without unwanted toxicity. Future studies should develop strategies to minimize the synaptic impairments arising from significant inhibition of BACE1 to achieve maximal and optimal benefits for Alzheimer’s patients.”


Abstract of BACE1 deletion in the adult mouse reverses preformed amyloid deposition and improves cognitive functions

BACE1 initiates the generation of the β-amyloid peptide, which likely causes Alzheimer’s disease (AD) when accumulated abnormally. BACE1 inhibitory drugs are currently being developed to treat AD patients. To mimic BACE1 inhibition in adults, we generated BACE1 conditional knockout (BACE1fl/fl) mice and bred BACE1fl/fl mice with ubiquitin-CreER mice to induce deletion of BACE1 after passing early developmental stages. Strikingly, sequential and increased deletion of BACE1 in an adult AD mouse model (5xFAD) was capable of completely reversing amyloid deposition. This reversal in amyloid deposition also resulted in significant improvement in gliosis and neuritic dystrophy. Moreover, synaptic functions, as determined by long-term potentiation and contextual fear conditioning experiments, were significantly improved, correlating with the reversal of amyloid plaques. Our results demonstrate that sustained and increasing BACE1 inhibition in adults can reverse amyloid deposition in an AD mouse model, and this observation will help to provide guidance for the proper use of BACE1 inhibitors in human patients.

How to train a robot to do complex abstract thinking

Robot inspects cooler, ponders next step (credit: Intelligent Robot Lab / Brown University)

Robots are great at following programmed steps. But asking a robot to “move the green bottle from the cooler to the cupboard” would require it to have abstract representations of these things and actions, plus knowledge of its surroundings.

(“Hmm, which of those millions of pixels is a ‘cooler,’ whatever that means? How do I get inside it and also the ‘cupboard’? …”)

To help robots answer these kinds of questions and plan complex multi-step tasks, robots can construct two kinds of abstract representations of the world around them, say Brown University and MIT researchers:

  • “Procedural abstractions”: bundling sequences of low-level movements into higher-level skills (such as opening a door). Most of those robots doing fancy athletic tricks are explicitly programmed with such procedural abstractions, say the researchers.
  • “Perceptual abstractions”: making sense out of the millions of confusing pixels in the real world.

Building truly intelligent robots

According to George Konidaris, Ph.D., an assistant professor of computer science at Brown and the lead author of the new study, there’s been less progress in perceptual abstraction — the focus of the new research.

To explore this, the researchers trained a robot they called “Anathema” (aka “Ana”). They started by teaching Ana “procedural abstractions” in a room containing a cupboard, a cooler, a switch that controls a light inside the cupboard, and a bottle that could be left in either the cooler or the cupboard. They gave Ana a set of high-level motor skills for manipulating the objects in the room, such as opening and closing both the cooler and the cupboard, flipping the switch, and picking up a bottle.

Ana was also able to learn a very abstract description of the visual environment that contained only what was necessary for her to be able to perform a particular skill. Once armed with these learned abstract procedures and perceptions, the researchers gave Ana a challenge: “Take the bottle from the cooler and put it in the cupboard.”


Ana’s dynamic concept of a “cooler,” based on configurations of pixels in open and closed positions. (credit: Intelligent Robot Lab / Brown University)

Accepting the challenge, Ana navigated to the cooler. She had learned the configuration of pixels in her visual field associated with the cooler lid being closed (the state from which it could be opened). She had also learned how to open it: stand in front of it with nothing in her gripper (because she needed both hands to open the lid).

She opened the cooler and sighted the bottle. But she didn’t pick it up. Not yet.

She realized that if she had the bottle in her gripper, she wouldn’t be able to open the cupboard — that requires both hands. Instead, she went directly to the cupboard.

There, she saw that the light switch was in the “on” position, and instantly realized that opening the cupboard would block the switch. So she turned the switch off before opening the cupboard. Finally, she returned to the cooler, retrieved the bottle, and placed it in the cupboard.

She had developed the entire plan in about four milliseconds.
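The kind of symbolic plan Ana constructs can be illustrated with a minimal STRIPS-style planner: facts, operators with preconditions and effects, and a search over abstract states. The fact and operator names below are my own toy encoding of Ana’s room, not the representation the robot actually learned.

```python
from collections import deque

# Operators as (preconditions, add-effects, delete-effects) over symbolic
# facts. "hands_free" encodes the two-hands constraint from the story:
# picking up the bottle consumes it, so the cupboard must be opened first.
OPS = {
    "open_cooler":   ({"cooler_closed", "hands_free"},
                      {"cooler_open"}, {"cooler_closed"}),
    "switch_off":    ({"light_on"}, {"light_off"}, {"light_on"}),
    "open_cupboard": ({"cupboard_closed", "hands_free", "light_off"},
                      {"cupboard_open"}, {"cupboard_closed"}),
    "pick_bottle":   ({"cooler_open", "bottle_in_cooler", "hands_free"},
                      {"holding_bottle"}, {"bottle_in_cooler", "hands_free"}),
    "place_bottle":  ({"holding_bottle", "cupboard_open"},
                      {"bottle_in_cupboard", "hands_free"}, {"holding_bottle"}),
}

def plan(start, goal):
    """Breadth-first search over symbolic states; returns a shortest plan."""
    frontier = deque([(frozenset(start), [])])
    seen = {frozenset(start)}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:
            return steps
        for name, (pre, add, delete) in OPS.items():
            if pre <= state:
                nxt = frozenset((state - delete) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))

start = {"cooler_closed", "cupboard_closed", "light_on",
         "bottle_in_cooler", "hands_free"}
print(plan(start, {"bottle_in_cupboard"}))
```

The search reproduces Ana’s reasoning: the light goes off before the cupboard opens, and the bottle is picked up only after both doors are open, because holding it would leave no free hands.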


“She learned these abstractions on her own”

Once a robot has high-level motor skills, it can automatically construct a compatible high-level symbolic representation of the world by making sense of its pixelated surroundings, according to Konidaris. “We didn’t provide Ana with any of the abstract representations she needed to plan for the task,” he said. “She learned those abstractions on her own, and once she had them, planning was easy.”

Her entire knowledge and skill set was represented in a text file just 126 lines long.

Konidaris says the research provides an important theoretical building block for applying artificial intelligence to robotics. “We believe that allowing our robots to plan and learn in the abstract rather than the concrete will be fundamental to building truly intelligent robots,” he said. “Many problems are often quite simple, if you think about them in the right way.”

Source: Journal of Artificial Intelligence Research (open-access). Funded by DARPA and MIT’s Intelligence Initiative.


IRL Lab | Learning Symbolic Representations for High-Level Robot Planning

Ray Kurzweil’s ‘singularity’ prediction supported by prominent AI scientists


According to an article today in the web magazine Futurism, two prominent artificial intelligence (AI) experts have agreed with inventor, author, and futurist Ray Kurzweil’s prediction of the singularity — a future period during which the pace of technological change will be so rapid, and its impact so deep, that human life will be irreversibly transformed — arriving in about 30 years. The two are Patrick Winston, Ph.D., Ford Professor of Artificial Intelligence and Computer Science at the Massachusetts Institute of Technology, and Jürgen Schmidhuber, Ph.D., Chief Scientist of the company NNAISENSE, which aims to build the first practical general-purpose AI.

Schmidhuber is confident that the singularity “is just 30 years away, if the trend doesn’t break, and there will be rather cheap computational devices that have as many connections as your brain, but are much faster. There is no doubt in my mind that AIs are going to become super smart.”


Bioprinting a brain

Cryogenic 3D-printing soft hydrogels. Top: the bioprinting process. Bottom: SEM image of general microstructure (scale bar: 100 µm). (credit: Z. Tan/Scientific Reports)

A new bioprinting technique combines cryogenics (freezing) and 3D printing to create geometrical structures that are as soft (and complex) as the most delicate body tissues — mimicking the mechanical properties of organs such as the brain and lungs.

The idea: “Seed” porous scaffolds that can act as a template for tissue regeneration (from neuronal cells, for example), where damaged tissues are encouraged to regrow — allowing the body to heal without tissue rejection or other problems. Using “pluripotent” stem cells that can change into different types of cells is also a possibility.

Smoothie. Solid carbon dioxide (dry ice) in an isopropanol bath is used to rapidly cool the hydrogel ink (a rapid liquid-to-solid phase change) as it’s extruded, yogurt-smoothie-style. Once thawed, the gel is as soft as body tissues, but doesn’t collapse under its own weight — a previous problem.

Current structures produced with this technique are “organoids” a few centimeters in size. But the researchers hope to create replicas of actual body parts with complex geometrical structures — even whole organs. That could allow scientists to carry out experiments not possible on live subjects, or be used in medical training, replacing animal bodies for surgical training and simulations. Then on to mechanobiology and tissue engineering.

Source: Imperial College London, Scientific Reports (open-access).

How to generate electricity with your body

Bending a finger generates electricity in this prototype device. (credit: Guofeng Song et al./Nano Energy)

A new triboelectric nanogenerator (TENG) design, using a gold tab attached to your skin, will convert mechanical energy into electrical energy for future wearables and self-powered electronics. Just bend your finger or take a step.

Triboelectric charging occurs when certain materials become electrically charged after coming into contact with a different material. In this new design by University of Buffalo and Chinese scientists, when a stretched layer of gold is released, it crumples, creating what looks like a miniature mountain range. An applied force leads to friction between the gold layers and an interior PDMS layer, causing electrons to flow between the gold layers.

More power to you. Previous TENG designs have been difficult to manufacture (requiring complex lithography) or too expensive. The new 1.5-centimeter-long prototype generates a maximum of 124 volts, but at only 10 microamps. It has a power density of 0.22 milliwatts per square centimeter. The team plans to use larger pieces of gold to deliver more electricity, and to add a portable battery to store it.
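The quoted figures invite a one-line sanity check. Note this is only a back-of-the-envelope upper bound: the peak voltage and peak current need not occur at the same instant, so their product is not the device’s reported operating power.

```python
# Back-of-the-envelope upper bound on the prototype's instantaneous
# power, assuming (optimistically) that the peak voltage and peak
# current coincided.
V_peak = 124       # volts
I_peak = 10e-6     # amperes (10 microamps)
p_upper_mw = V_peak * I_peak * 1e3   # watts -> milliwatts
print(p_upper_mw)  # at most ~1.24 mW if the peaks coincided
```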

Source: Nano Energy. Support: U.S. National Science Foundation, the National Basic Research Program of China, National Natural Science Foundation of China, Beijing Science and Technology Projects, Key Research Projects of the Frontier Science of the Chinese Academy of Sciences, and National Key Research and Development Plan.

This artificial electric eel may power your implants

How the eel’s electrical organs generate electricity by moving sodium (Na) and potassium (K) ions across a selective membrane. (credit: Caitlin Monney)

Taking it a giant (and a bit scary) step further, an artificial electric organ, inspired by the electric eel, could one day power your implantable sensors, prosthetic devices, medication dispensers, augmented-reality contact lenses, and countless other gadgets. Unlike typical toxic batteries that need to be recharged, these systems are soft, flexible, transparent, and potentially biocompatible.

Doubles as a defibrillator? The system mimics eels’ electrical organs, which use thousands of alternating compartments with excess potassium or sodium ions, separated by selective membranes. To create a jolt of electricity (600 volts at 1 ampere), an eel’s membranes allow the ions to flow together. The researchers built a similar system, but using sodium and chloride ions dissolved in a water-based hydrogel. It generates more than 100 volts, but at a safe low current — just enough to power a small medical device like a pacemaker.

The researchers say the technology could also lead to using naturally occurring processes inside the body to generate electricity, a truly radical step.

Source: Nature, University of Fribourg, University of Michigan, University of California-San Diego. Funding: Air Force Office of Scientific Research, National Institutes of Health.

E-skin for Terminator wannabes

A section of “e-skin” (credit: Jianliang Xiao / University of Colorado Boulder)

A new type of thin, self-healing, translucent “electronic skin” (“e-skin,” which mimics the properties of natural skin) has applications ranging from robotics and prosthetic development to better biomedical devices and human-computer interfaces.

Ready for a Terminator-style robot baby nurse? What makes this e-skin different and interesting is its embedded sensors, which can measure pressure, temperature, humidity and air flow. That makes it sensitive enough to let a robot take care of a baby, the University of Colorado mechanical engineers and chemists assure us. The skin is also rapidly self-healing (by reheating), as in The Terminator, using a mix of three commercially available compounds in ethanol.

The secret ingredient: A novel network polymer known as polyimine, which is fully recyclable at room temperature. Laced with silver nanoparticles, it can provide better mechanical strength, chemical stability and electrical conductivity. It’s also malleable, so by applying moderate heat and pressure, it can be easily conformed to complex, curved surfaces like human arms and robotic hands.

Source: University of Colorado, Science Advances (open-access). Funded in part by the National Science Foundation.

Altered Carbon

Vertebral cortical stack (credit: Netflix)

Altered Carbon takes place in the 25th century, when humankind has spread throughout the galaxy. After 250 years in cryonic suspension, a prisoner returns to life in a new body with one chance to win his freedom: by solving a mind-bending murder.

Resleeve your stack. Human consciousness can be digitized and downloaded into different bodies. A person’s memories have been encapsulated into “cortical stack” storage devices surgically inserted into the vertebrae at the back of the neck. Disposable physical bodies called “sleeves” can accept any stack.

But only the wealthy can acquire replacement bodies on a continual basis. The long-lived are called Meths, after the Biblical figure Methuselah. The uber-rich are also able to keep copies of their minds in remote storage, which they back up regularly, ensuring that even if their stack is destroyed, the backup can be resleeved (minus any period of time not backed up — as in the hack-murder).

Source: Netflix. Premiered on February 2, 2018. Based on the 2002 novel of the same title by Richard K. Morgan.

How to shine light deeper into the brain

Near-infrared (NIR) light can easily pass through brain tissue with minimal scattering, allowing it to reach deep structures. There, up-conversion nanoparticles (UCNPs; blue) previously inserted in the tissue can absorb this light to generate shorter-wavelength blue-green light that can activate nearby neurons. (credit: RIKEN)

An international team of researchers has developed a way to shine light at new depths in the brain. It may lead to development of new, non-invasive clinical treatments for neurological disorders and new research tools.

The new method extends the depth that optogenetics — a method for stimulating neurons with light — can reach. With optogenetics, blue-green light is used to turn on “light-gated ion channels” in neurons to stimulate neural activity. But blue-green light is heavily scattered by tissue. That limits how deep the light can reach and currently requires insertion of invasive optical fibers.

The researchers took a new approach to brain stimulation, as they reported in Science on February 9.

  1. They used longer-wavelength (650 to 1,350 nm) near-infrared (NIR) light, which can penetrate deeper into the brain (through the skull) of mice.
  2. The NIR light illuminated “upconversion nanoparticles” (UCNPs), which absorbed the near-infrared laser light and glowed blue-green in formerly inaccessible (deep) targeted neural areas.*
  3. The blue-green light then triggered (via chromophores, light-responsive molecules) ion channels in the neurons to turn on memory cells in the hippocampus and other areas. These included the medial septum, where nanoparticle-emitted light contributed to synchronizing neurons in a brain wave called the theta cycle.**
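The “upconversion” in step 2 has a simple energy constraint worth making explicit: a blue photon carries more energy than a single 980-nm pump photon, so several pump photons must contribute per emitted photon. The calculation below uses only the photon-energy formula E = hc/λ and gives the energetic minimum; the real count also depends on the lanthanide energy levels involved.

```python
# Energy bookkeeping for upconversion: how many 980-nm pump photons
# must pool their energy to produce one blue photon? (Energetic lower
# bound only, ignoring losses and the actual level structure.)
import math

HC_EV_NM = 1239.842  # Planck constant x speed of light, in eV*nm

def photon_energy_ev(wavelength_nm):
    return HC_EV_NM / wavelength_nm

pump = photon_energy_ev(980)       # ~1.27 eV per near-infrared photon
for emission_nm in (450, 475):     # the two reported blue emission peaks
    out = photon_energy_ev(emission_nm)
    n_min = math.ceil(out / pump)  # minimum pump photons per blue photon
    print(emission_nm, round(out, 2), n_min)
```

Both blue peaks come out above twice the pump-photon energy, so at least three near-infrared photons are needed per emitted blue photon, which is why upconversion efficiency (the ~2.5% yield quoted in the footnote) matters so much.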

Non-invasive activation of neurons in the VTA, a reward center of the mouse brain. The blue-light sensitive ChR2 chromophores (green) were expressed (from an injection) on both sides of the VTA. But upconversion nanoparticles (blue) were only injected on the right. So when near-IR light was applied to both sides, it only activated the expression of the activity-induced chromophore cFos gene (red) on the side with the nanoparticles. (credit: RIKEN)

This study was a collaboration between scientists at the RIKEN Brain Science Institute, the National University of Singapore, the University of Tokyo, Johns Hopkins University, and Keio University.

Non-invasive light therapy

“Nanoparticles effectively extend the reach of our lasers, enabling ‘remote’ delivery of light and potentially leading to non-invasive therapies,” says Thomas McHugh, research group leader at the RIKEN Brain Science Institute in Japan. In addition to activating neurons, UCNPs can also be used for inhibition. In this study, UCNPs were able to quell experimental seizures in mice by emitting yellow light to silence hyperexcitable neurons.

Schematic showing near-infrared radiation (NIR) being absorbed by upconversion nanoparticles (UCNPs) and re-radiated as shorter-wavelength (peaking at 450 and 475 nm) blue light that triggers a previously injected chromophore (a light-responsive molecule expressed by neurons) — in this case, channelrhodopsin-2 (ChR2). In one experiment, the chromophore triggered a calcium ion channel in neurons in the ventral tegmental area (VTA) of the mouse brain (a region located ~4.2 mm below the skull), causing stimulation of neurons. (credit: Shuo Chen et al./Science)

While current deep brain stimulation is effective in alleviating specific neurological symptoms, it lacks cell-type specificity and requires permanently implanted electrodes, the researchers note.

The nanoparticles described in this study are compatible with the various light-activated channels currently in use in the optogenetics field and can be employed for neural activation or inhibition in many deep brain structures. “The nanoparticles appear to be quite stable and biocompatible, making them viable for long-term use. Plus, the low dispersion means we can target neurons very specifically,” says McHugh.

However, “a number of challenges must be overcome before this technique can be used in patients,” say Neus Feliu et al. in “Toward an optically controlled brain,” Science, 9 Feb 2018. “Specifically, neurons have to be transfected with light-gated ion channels … a substantial challenge [and] … placed close to the target neurons. … Neuronal networks undergo continuous changes [so] the stimulation pattern and placement of [nanoparticles] may have to be adjusted over time. … Potent upconverting NPs are also needed … [which] may change properties over time, such as structural degradation and loss of functional properties. … Long-term toxicity studies also need to be carried out.”

* “The lanthanide-doped up-conversion nanoparticles (UCNPs) were capable of converting low-energy incident NIR photons into high-energy visible emission with an efficiency orders of magnitude greater than that of multiphoton processes. … The core-shell UCNPs exhibited a characteristic up-conversion emission spectrum peaking at 450 and 475 nm upon excitation at 980 nm. Upon transcranial delivery of 980-nm CW laser pulses at a peak power of 2.0 W (25-ms pulses at 20 Hz over 1 s), an upconverted emission with a power density of ~0.063 mW/mm2 was detected. The conversion yield of NIR to blue light was ~2.5%. NIR pulses delivered across a wide range of laser energies to living tissue result in little photochemical or thermal damage.” — Shuo Chen et al./Science

** “Memory recall in mice also persisted in tests two weeks later. This indicates that the UCNPs remained at the injection site, which was confirmed through microscopy of the brains.” — Shuo Chen et al./Science

Abstract of Near-infrared deep brain stimulation via upconversion nanoparticle–mediated optogenetics

Optogenetics has revolutionized the experimental interrogation of neural circuits and holds promise for the treatment of neurological disorders. It is limited, however, because visible light cannot penetrate deep inside brain tissue. Upconversion nanoparticles (UCNPs) absorb tissue-penetrating near-infrared (NIR) light and emit wavelength-specific visible light. Here, we demonstrate that molecularly tailored UCNPs can serve as optogenetic actuators of transcranial NIR light to stimulate deep brain neurons. Transcranial NIR UCNP-mediated optogenetics evoked dopamine release from genetically tagged neurons in the ventral tegmental area, induced brain oscillations through activation of inhibitory neurons in the medial septum, silenced seizure by inhibition of hippocampal excitatory cells, and triggered memory recall. UCNP technology will enable less-invasive optical neuronal activity manipulation with the potential for remote therapy.

AI algorithm with ‘social skills’ teaches humans how to collaborate

(credit: Iyad Rahwan)

An international team has developed an AI algorithm with social skills that has outperformed humans in the ability to cooperate with people and machines in playing a variety of two-player games.

The researchers, led by Iyad Rahwan, PhD, an MIT Associate Professor of Media Arts and Sciences, tested humans and the algorithm, called S# (“S sharp”), in three types of interactions: machine-machine, human-machine, and human-human. In most instances, machines programmed with S# outperformed humans in finding compromises that benefit both parties.

“Two humans, if they were honest with each other and loyal, would have done as well as two machines,” said lead author BYU computer science professor Jacob Crandall. “As it is, about half of the humans lied at some point. So essentially, this particular algorithm is learning that moral characteristics are better [since it’s programmed to not lie] and it also learns to maintain cooperation once it emerges.”

“The end goal is that we understand the mathematics behind cooperation with people and what attributes artificial intelligence needs to develop social skills,” said Crandall. “AI needs to be able to respond to us and articulate what it’s doing. It has to be able to interact with other people.”

How casual talk by AI helps humans be more cooperative

One important finding: colloquial phrases (called “cheap talk” in the study) doubled the amount of cooperation. In tests, if human participants cooperated with the machine, the machine might respond with a “Sweet. We are getting rich!” or “I accept your last proposal.” If the participants tried to betray the machine or back out of a deal with it, they might be met with a trash-talking “Curse you!”, “You will pay for that!” or even an “In your face!”

And when machines used cheap talk, their human counterparts were often unable to tell whether they were playing a human or machine — a sort of mini “Turing test.”
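The pairing of moves with signals can be illustrated with a deliberately tiny toy (nothing like the S# algorithm itself, which couples reinforcement learning with signaling mechanisms): a tit-for-tat player in an iterated prisoner’s dilemma that attaches one of the study’s phrases to each move.

```python
# Toy illustration of "cheap talk": an agent chooses a move AND a
# message in response to the opponent's history. Not the S# algorithm;
# just a sketch of attaching signals to game actions.

def tit_for_tat_with_talk(history):
    """history: list of opponent moves, 'C' (cooperate) or 'D' (defect).
    Returns (our_move, our_message)."""
    if not history or history[-1] == "C":
        return "C", "Sweet. We are getting rich!"
    return "D", "You will pay for that!"

print(tit_for_tat_with_talk([]))           # opens by cooperating
print(tit_for_tat_with_talk(["C", "D"]))   # punishes a betrayal, and says so
```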

The research findings, Crandall hopes, could have long-term implications for human relationships. “In society, relationships break down all the time,” he said. “People that were friends for years all of a sudden become enemies. Because the machine is often actually better at reaching these compromises than we are, it can potentially teach us how to do this better.”

The research is described in an open-access paper in Nature Communications.

A human-machine collaborative chatbot system 

An actual conversation on Evorus, combining multiple chatbots and workers. (credit: T. Huang et al.)

In a related study, Carnegie Mellon University (CMU) researchers have created a new collaborative chatbot called Evorus that goes beyond Siri, Alexa, and Cortana by adding humans in the loop.

Evorus combines a chatbot called Chorus with inputs by paid crowd workers at Amazon Mechanical Turk, who answer questions from users and vote on the best answer. Evorus keeps track of the questions asked and answered and, over time, begins to suggest these answers for subsequent questions. It can also use multiple chatbots, such as vote bots, Yelp Bot (restaurants) and Weather Bot to provide enhanced information.
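The voting mechanism at the core of this architecture can be sketched in a few lines. This is purely illustrative: the source names and replies below are invented, and the real Evorus additionally learns to approve response candidates automatically.

```python
# Sketch of crowd-plus-bots answer selection: candidate replies come
# from several sources (chatbots, reused prior answers, crowd workers),
# and the reply whose source gets the most approval votes wins.
from collections import Counter

def pick_reply(candidates, votes):
    """candidates: {source_name: reply}; votes: list of approved sources."""
    tally = Counter(votes)
    best_source = max(candidates, key=lambda s: tally[s])
    return candidates[best_source]

candidates = {
    "weather_bot": "Rain likely this afternoon.",
    "prior_answer": "It was sunny yesterday.",
    "crowd_worker": "Showers expected after 2pm.",
}
votes = ["crowd_worker", "weather_bot", "crowd_worker"]
print(pick_reply(candidates, votes))  # the crowd worker's reply wins, 2 votes to 1
```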

Humans are simultaneously training the system’s AI, making it gradually less dependent on people, says Jeff Bigham, associate professor in the CMU Human-Computer Interaction Institute.

The hope is that as the system grows, the AI will be able to handle an increasing percentage of questions, while the number of crowd workers necessary to respond to “long tail” questions will remain relatively constant.

Keeping humans in the loop also reduces the risk that malicious users will manipulate the conversational agent inappropriately, as occurred when Microsoft briefly deployed its Tay chatbot in 2016, noted co-developer Ting-Hao Huang, a Ph.D. student in the Language Technologies Institute (LTI).

The preliminary system is available for download and use by anyone willing to be part of the research effort. It is deployed via Google Hangouts, which allows for voice input as well as access from computers, phones, and smartwatches. The software architecture can also accept automated question-answering components developed by third parties.

An open-access research paper on Evorus, available online, will be presented at CHI 2018, the Conference on Human Factors in Computing Systems, in Montreal, April 21–26, 2018.



Abstract of Cooperating with machines

Since Alan Turing envisioned artificial intelligence, technical progress has often been measured by the ability to defeat humans in zero-sum encounters (e.g., Chess, Poker, or Go). Less attention has been given to scenarios in which human–machine cooperation is beneficial but non-trivial, such as scenarios in which human and machine preferences are neither fully aligned nor fully in conflict. Cooperation does not require sheer computational power, but instead is facilitated by intuition, cultural norms, emotions, signals, and pre-evolved dispositions. Here, we develop an algorithm that combines a state-of-the-art reinforcement-learning algorithm with mechanisms for signaling. We show that this algorithm can cooperate with people and other algorithms at levels that rival human cooperation in a variety of two-player repeated stochastic games. These results indicate that general human–machine cooperation is achievable using a non-trivial, but ultimately simple, set of algorithmic mechanisms.


Abstract of A Crowd-powered Conversational Assistant Built to Automate Itself Over Time

Crowd-powered conversational assistants have been shown to be more robust than automated systems, but do so at the cost of higher response latency and monetary costs. A promising direction is to combine the two approaches for high quality, low latency, and low cost solutions. In this paper, we introduce Evorus, a crowd-powered conversational assistant built to automate itself over time by (i) allowing new chatbots to be easily integrated to automate more scenarios, (ii) reusing prior crowd answers, and (iii) learning to automatically approve response candidates. Our 5-month-long deployment with 80 participants and 281 conversations shows that Evorus can automate itself without compromising conversation quality. Crowd-AI architectures have long been proposed as a way to reduce cost and latency for crowd-powered systems; Evorus demonstrates how automation can be introduced successfully in a deployed system. Its architecture allows future researchers to make further innovation on the underlying automated components in the context of a deployed open domain dialog system.

Superconducting ‘synapse’ could enable powerful future neuromorphic supercomputers

NIST’s artificial synapse, designed for neuromorphic computing, mimics the operation of a switch between two neurons. One artificial synapse is located at the center of each X. This chip is 1 square centimeter in size. (The thick black vertical lines are electrical probes used for testing.) (credit: NIST)

A superconducting “synapse” that “learns” like a biological system, operating like the human brain, has been built by researchers at the National Institute of Standards and Technology (NIST).

The NIST switch, described in an open-access paper in Science Advances, provides a missing link for neuromorphic (brain-like) computers, according to the researchers. Such “non-von Neumann architecture” future computers could significantly speed up analysis and decision-making for applications such as self-driving cars and cancer diagnosis.

The research is supported by the Intelligence Advanced Research Projects Activity (IARPA) Cryogenic Computing Complexity Program, which was launched in 2014 with the goal of paving the way to “a new generation of superconducting supercomputer development beyond the exascale.”*

A synapse is a connection or switch between two neurons, controlling transmission of signals. (credit: NIST)

NIST’s artificial synapse is a metallic cylinder 10 micrometers in diameter — about 10 times larger than a biological synapse. It simulates a real synapse by processing incoming electrical spikes (pulsed current from a neuron) and customizing spiking output signals. The more firing between cells (or processors), the stronger the connection. That process enables both biological and artificial synapses to maintain old circuits and create new ones.

Dramatically faster and far lower-energy than human synapses

But the NIST synapse has two unique features that the researchers say are superior to human synapses and to other artificial synapses:

  • It can fire at a rate much faster than the human brain: at least 1 billion times per second, compared to a brain cell’s rate of about 50 times per second. (The junctions’ Josephson plasma frequencies, which set their dynamical time scales, all exceed 100 GHz.)
  • It uses only about one ten-thousandth as much energy as a human synapse. The spiking energy is less than 1 attojoule** — roughly equivalent to the minuscule chemical energy bonding two atoms in a molecule — compared to the roughly 10 femtojoules (10,000 attojoules) per synaptic event in the human brain. Current neuromorphic platforms are orders of magnitude less efficient than the human brain. “We don’t know of any other artificial synapse that uses less energy,” NIST physicist Mike Schneider said.
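Assuming the round numbers quoted above (spiking energy under ~1 aJ at ~1 billion firings per second, versus ~10 fJ and ~50 firings per second for a biological synapse), the two ratios can be sanity-checked directly:

```python
# Order-of-magnitude check on the figures quoted in the article.
nist_energy_j = 1e-18    # spiking energy: less than ~1 attojoule
brain_energy_j = 10e-15  # ~10 femtojoules per synaptic event

nist_rate_hz = 1e9       # ~1 billion firings per second
brain_rate_hz = 50       # ~50 firings per second

energy_ratio = brain_energy_j / nist_energy_j  # ~10,000: "one ten-thousandth"
speed_ratio = nist_rate_hz / brain_rate_hz     # ~20 million times faster

print(f"energy: ~1/{energy_ratio:,.0f} of a biological synapse")
print(f"rate:   ~{speed_ratio:,.0f}x a biological synapse")
```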

Superconducting devices mimicking brain cells and transmission lines have been developed, but until now, efficient synapses — a crucial piece — have been missing. The new Josephson junction-based artificial synapse would be used in neuromorphic computers made of superconducting components (which can transmit electricity without resistance), so they would be more efficient than designs based on semiconductors or software. Data would be transmitted, processed, and stored in units of magnetic flux.

The brain is especially powerful for tasks like image recognition because it processes data both in sequence and simultaneously and it stores memories in synapses all over the system. A conventional computer processes data only in sequence and stores memory in a separate unit.

The new NIST artificial synapses combine small size, superfast spiking signals, and low energy needs, and could be stacked into dense 3D circuits for creating large systems. They could provide a unique route to a far more complex and energy-efficient neuromorphic system than has been demonstrated with other technologies, according to the researchers.

Nature News does raise some concerns about the research, quoting neuromorphic-technology experts: “Millions of synapses would be necessary before a system based on the technology could be used for complex computing; it remains to be seen whether it will be possible to scale it to this level. … The synapses can only operate at temperatures close to absolute zero, and need to be cooled with liquid helium. This might make the chips impractical for use in small devices, although a large data centre might be able to maintain them. … We don’t yet understand enough about the key properties of the [biological] synapse to know how to use them effectively.”


Inside a superconducting synapse 

The NIST synapse is a customized Josephson junction***, long used in NIST voltage standards. These junctions are a sandwich of superconducting materials with an insulator as a filling. When an electrical current through the junction exceeds a level called the critical current, voltage spikes are produced.

Illustration showing the basic operation of NIST’s artificial synapse, based on a Josephson junction. Very weak electrical current pulses are used to control the number of nanoclusters (green) pointing in the same direction. Shown here: a “magnetically disordered state” (left) vs. “magnetically ordered state” (right). (credit: NIST)

Each artificial synapse uses standard niobium electrodes but has a unique filling made of nanoscale clusters (“nanoclusters”) of manganese in a silicon matrix. The nanoclusters — about 20,000 per square micrometer — act like tiny bar magnets with “spins” that can be oriented either randomly or in a coordinated manner. The number of nanoclusters pointing in the same direction can be controlled, which affects the superconducting properties of the junction.

Diagram of circuit used in the simulation. The blue and red areas represent pre- and post-synapse neurons, respectively. The X symbol represents the Josephson junction. (credit: Michael L. Schneider et al./Science Advances)

The synapse rests in a superconducting state, except when it’s activated by incoming current and starts producing voltage spikes. Researchers apply current pulses in a magnetic field to boost the magnetic ordering — that is, the number of nanoclusters pointing in the same direction.

This magnetic effect progressively reduces the critical current level, making it easier to create a normal conductor and produce voltage spikes. The critical current is the lowest when all the nanoclusters are aligned. The process is also reversible: Pulses are applied without a magnetic field to reduce the magnetic ordering and raise the critical current. This design, in which different inputs alter the spin alignment and resulting output signals, is similar to how the brain operates.
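The tunable-threshold behavior described above can be caricatured in a few lines: a magnetic-order parameter from 0 (disordered) to 1 (fully aligned) lowers the critical current, and the junction spikes only when the drive current exceeds that threshold. The numbers and the linear dependence are illustrative, not measured device values:

```python
def critical_current(order, ic_disordered=100.0, ic_aligned=10.0):
    """Critical current (illustrative units) falls as the fraction of
    co-aligned nanoclusters ('order', 0 to 1) rises; it is lowest when
    all clusters point the same way."""
    return ic_disordered - order * (ic_disordered - ic_aligned)

def fires(drive_current, order):
    """The junction stays superconducting until the drive current exceeds
    the order-dependent critical current, then emits voltage spikes."""
    return drive_current > critical_current(order)

# The same drive is subthreshold for a disordered synapse but
# suprathreshold after training pulses raise the magnetic order:
drive = 60.0
print(fires(drive, order=0.0))  # False: threshold is 100.0
print(fires(drive, order=0.9))  # True: threshold has dropped to 19.0
```

This mirrors how a synaptic weight works: the stored magnetic order, not the incoming signal, decides whether a given input produces output spikes.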

Synapse behavior can also be tuned by changing how the device is made and its operating temperature. By making the nanoclusters smaller, researchers can reduce the pulse energy needed to raise or lower the magnetic order of the device. Raising the operating temperature slightly from minus 271.15 degrees C (minus 456.07 degrees F) to minus 269.15 degrees C (minus 452.47 degrees F), for example, results in more and higher voltage spikes.


* Future exascale supercomputers would run at 10^18 flops (“flops” = floating point operations per second), i.e., one exaflop, or more. The current fastest supercomputer — the Sunway TaihuLight — operates at about 0.1 exaflops; zettascale computers, the next step beyond exascale, would run 10,000 times faster than that.

** An attojoule is 10^−18 joule, a unit of energy, and is one-thousandth of a femtojoule.

*** The Josephson effect is the phenomenon of supercurrent — i.e., a current that flows indefinitely long without any voltage applied — across a device known as a Josephson junction, which consists of two superconductors coupled by a weak link. — Wikipedia


Abstract of Ultralow power artificial synapses using nanotextured magnetic Josephson junctions

Neuromorphic computing promises to markedly improve the efficiency of certain computational tasks, such as perception and decision-making. Although software and specialized hardware implementations of neural networks have made tremendous accomplishments, both implementations are still many orders of magnitude less energy efficient than the human brain. We demonstrate a new form of artificial synapse based on dynamically reconfigurable superconducting Josephson junctions with magnetic nanoclusters in the barrier. The spiking energy per pulse varies with the magnetic configuration, but in our demonstration devices, the spiking energy is always less than 1 aJ. This compares very favorably with the roughly 10 fJ per synaptic event in the human brain. Each artificial synapse is composed of a Si barrier containing Mn nanoclusters with superconducting Nb electrodes. The critical current of each synapse junction, which is analogous to the synaptic weight, can be tuned using input voltage spikes that change the spin alignment of Mn nanoclusters. We demonstrate synaptic weight training with electrical pulses as small as 3 aJ. Further, the Josephson plasma frequencies of the devices, which determine the dynamical time scales, all exceed 100 GHz. These new artificial synapses provide a significant step toward a neuromorphic platform that is faster, more energy-efficient, and thus can attain far greater complexity than has been demonstrated with other technologies.

Cancer ‘vaccine’ eliminates all traces of cancer in mice

Effects of in situ vaccination with CpG and anti-OX40 agents. Left: Mice genetically engineered to spontaneously develop breast cancers in all 10 of their mammary pads were injected into the first arising tumor (black arrow) with either a vehicle (inactive fluid) (left) or with CpG and anti-OX40 (right). Pictures were taken on day 80. (credit: Idit Sagiv-Barfi et al./ Sci. Transl. Med.)

Injecting minute amounts of two immune-stimulating agents directly into solid tumors in mice was able to eliminate all traces of cancer in the animals — including distant, untreated metastases (spreading cancer locations), according to a study by Stanford University School of Medicine researchers.

The researchers believe this new “in situ vaccination” method could serve as a rapid and relatively inexpensive cancer therapy — one that is unlikely to cause the adverse side effects often seen with bodywide immune stimulation.

The approach works for many different types of cancers, including those that arise spontaneously, the study found.

“When we use these two agents together, we see the elimination of tumors all over the body,” said Ronald Levy*, MD, professor of oncology and senior author of the study, which was published Jan. 31 in Science Translational Medicine. “This approach bypasses the need to identify tumor-specific immune targets and doesn’t require wholesale activation of the immune system or customization of a patient’s immune cells.”

Many current immunotherapy approaches have been successful, but they each have downsides — from difficult-to-handle side effects to high-cost and lengthy preparation or treatment times.** “Our approach uses a one-time application of very small amounts of two agents to stimulate the immune cells only within the tumor itself,” Levy said. “In the mice, we saw amazing, bodywide effects, including the elimination of tumors all over the animal.”

Cancer-destroying T cells that target other tumors in the body

Levy’s method reactivates cancer-specific T cells (a type of white blood cell) by injecting microgram (one-millionth of a gram) amounts of the two agents directly into the tumor site.*** Because the two agents are injected directly into the tumor, only T cells that have infiltrated the tumor are activated. In effect, these T cells are “prescreened” by the body to recognize only cancer-specific proteins.

Some of these tumor-specific, activated T cells then leave the original tumor to find and destroy other identical tumors throughout the body.


“I don’t think there’s a limit to the type of tumor we could potentially treat, as long as it has been infiltrated by the immune system.” — Ronald Levy, MD.


The approach worked “startlingly well” in laboratory mice with transplanted mouse lymphoma tumors in two sites on their bodies, the researchers say. Injecting one tumor site with the two agents caused the regression not just of the treated tumor, but also of the second, untreated tumor. In this way, 87 of 90 mice were cured of the cancer. Although the cancer recurred in three of the mice, the tumors again regressed after a second treatment. The researchers saw similar results in mice bearing breast, colon and melanoma tumors.

Mice genetically engineered to spontaneously develop breast cancers in all 10 of their mammary pads also responded to the treatment. Treating the first tumor that arose often prevented the occurrence of future tumors and significantly increased the animals’ life span, the researchers found.

Finally, researchers explored the specificity of the T cells. They transplanted two types of tumors into the mice. They transplanted the same lymphoma cancer cells in two locations, and transplanted a colon cancer cell line in a third location. Treatment of one of the lymphoma sites caused the regression of both lymphoma tumors but did not affect the growth of the colon cancer cells.

“This is a very targeted approach,” Levy said. “Only the tumor that shares the protein targets displayed by the treated site is affected. We’re attacking specific targets without having to identify exactly what proteins the T cells are recognizing.”

Lymphoma clinical trial

The current clinical trial is expected to recruit about 15 patients with low-grade lymphoma. If successful, Levy believes the treatment could be useful for many tumor types. He envisions a future in which clinicians inject the two agents into solid tumors in humans prior to surgical removal of the cancer. This would prevent recurrence of cancer due to unidentified metastases or lingering cancer cells, or even head off the development of future tumors that arise due to genetic mutations like BRCA1 and 2.

* Levy, who holds the Robert K. and Helen K. Summy Professorship in the School of Medicine, is also a member of the Stanford Cancer Institute and Stanford Bio-X. Levy is a pioneer in the field of cancer immunotherapy, in which researchers try to harness the immune system to combat cancer. Research in his laboratory formerly led to the development of rituximab, one of the first monoclonal antibodies approved for use as an anticancer treatment in humans. Professor of radiology Sanjiv Gambhir, MD, PhD, senior author of the paper, is the founder and equity holder in CellSight Inc., which develops and translates multimodality strategies to image cell trafficking and transplantation. The research was supported by the National Institutes of Health, the Leukemia and Lymphoma Society, the Boaz and Varda Dotan Foundation, and the Phil N. Allen Foundation. Stanford’s Department of Medicine also supported the work.

** Some immunotherapy approaches rely on stimulating the immune system throughout the body. Others target naturally occurring checkpoints that limit the anti-cancer activity of immune cells. Still others, like the CAR T-cell therapy recently approved to treat some types of leukemia and lymphomas, require a patient’s immune cells to be removed from the body and genetically engineered to attack the tumor cells. Immune cells like T cells recognize the abnormal proteins often present on cancer cells and infiltrate to attack the tumor. However, as the tumor grows, it often devises ways to suppress the activity of the T cells.

*** One agent, CpG — a short stretch of DNA called a CpG oligonucleotide, which induces an immune response — works with other nearby immune cells to amplify the expression of an activating receptor called OX40 on the surface of the T cells. The other agent, an antibody that binds to OX40, activates the T cells to lead the charge against the cancer cells.


Abstract of Eradication of spontaneous malignancy by local immunotherapy

It has recently become apparent that the immune system can cure cancer. In some of these strategies, the antigen targets are preidentified and therapies are custom-made against these targets. In others, antibodies are used to remove the brakes of the immune system, allowing preexisting T cells to attack cancer cells. We have used another noncustomized approach called in situ vaccination. Immunoenhancing agents are injected locally into one site of tumor, thereby triggering a T cell immune response locally that then attacks cancer throughout the body. We have used a screening strategy in which the same syngeneic tumor is implanted at two separate sites in the body. One tumor is then injected with the test agents, and the resulting immune response is detected by the regression of the distant, untreated tumor. Using this assay, the combination of unmethylated CG–enriched oligodeoxynucleotide (CpG)—a Toll-like receptor 9 (TLR9) ligand—and anti-OX40 antibody provided the most impressive results. TLRs are components of the innate immune system that recognize molecular patterns on pathogens. Low doses of CpG injected into a tumor induce the expression of OX40 on CD4+T cells in the microenvironment in mouse or human tumors. An agonistic anti-OX40 antibody can then trigger a T cell immune response, which is specific to the antigens of the injected tumor. Remarkably, this combination of a TLR ligand and an anti-OX40 antibody can cure multiple types of cancer and prevent spontaneous genetically driven cancers.

Penn researchers create first optical transistor comparable to an electronic transistor

By precisely controlling the mixing of optical signals, Ritesh Agarwal’s research team says they have taken an important step toward photonic (optical) computing. (credit: Sajal Dhara)

In an open-access paper published in Nature Communications, Ritesh Agarwal, a professor at the University of Pennsylvania School of Engineering and Applied Science, and his colleagues say that they have made significant progress in photonic (optical) computing by creating a prototype of a working optical transistor with properties similar to those of a conventional electronic transistor.

Optical transistors, using photons instead of electrons, promise to one day be more powerful than the electronic transistors currently used in computers.

Agarwal’s research on photonic computing has been focused on finding the right combination and physical configuration of nonlinear materials that can amplify and mix light waves in ways that are analogous to electronic transistors. “One of the hurdles in doing this with light is that materials that are able to mix optical signals also tend to have very strong background signals as well. That background signal would drastically reduce the contrast and on/off ratios leading to errors in the output,” Agarwal explained.

How the new optical transistor works

Schematic of a cadmium sulfide nanobelt device with source (S) and drain (D) electrodes. The fundamental wave at the frequency of ω, which is normally incident upon the belt, excites the second-harmonic (twice the frequency) wave at 2ω, which is back-scattered. (credit: Ming-Liang Ren et al./Nature Communications)

To address this issue, Agarwal’s research group started by creating a system with no disruptive optical background signal. To do that, they used a “nanobelt”* made out of cadmium sulfide. Then, by applying an electrical field across the nanobelt, the researchers were able to introduce optical nonlinearities (similar to the nonlinearities in electronic transistors), which enabled a signal mixing output that was otherwise zero.

“Our system turns on from zero to extremely large values,” Agarwal said.** “For the first time, we have an optical device with output that truly resembles an electronic transistor.”
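One way to picture this “turns on from zero” behavior is through the textbook scaling of second-harmonic generation: output intensity goes as the square of the effective nonlinear coefficient, which the applied field tunes from zero up to the 151 pm/V reported in the paper. The function and unit normalization below are illustrative assumptions, not taken from the paper:

```python
def shg_output(pump, d_eff):
    """Second-harmonic intensity scales as the square of the effective
    nonlinear coefficient and of the pump intensity (arbitrary units)."""
    return (d_eff ** 2) * (pump ** 2)

d_off = 0.0    # no applied field: the CdS nanobelt has no background signal
d_on = 151.0   # field applied: coefficient reported in the paper (pm/V)

off_signal = shg_output(1.0, d_off)
on_signal = shg_output(1.0, d_on)
print(off_signal, on_signal)  # OFF state is exactly zero; contrast is noise-limited
```

Because the OFF state is truly zero rather than a strong background, the measurable on/off ratio is limited only by the noise floor, which is what enables the transistor-like contrast described above.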

The next steps toward a fully functioning photonic computer will involve integrating optical circuits with optical interconnects, modulators, and detectors to achieve actual on-chip integrated photonic computation.

The research was supported by the US Army Research Office and the National Science Foundation.

* “Made of semiconducting metal oxides, nanobelts are extremely thin and flat structures. They are chemically pure, structurally uniform, largely defect-free, with clean surfaces that do not require protection against oxidation. Each is made up of a single crystal with specific surface planes and shape.” — Reade International Corp.

** That is, the system was capable of precisely controlling the mixing of optical signals via controlled electric fields, producing outputs with near-perfect contrast and extremely large on/off ratios. “Our study demonstrates a new way to dynamically control nonlinear optical signals in nanoscale materials with ultrahigh signal contrast and signal saturation, which can enable the development of nonlinear optical transistors and modulators for on-chip photonic devices with high-performance metrics and small-form factors, which can be further enhanced by integrating with nanoscale optical cavities,” the researchers note in the paper.


Abstract of Strong modulation of second-harmonic generation with very large contrast in semiconducting CdS via high-field domain

Dynamic control of nonlinear signals is critical for a wide variety of optoelectronic applications, such as signal processing for optical computing. However, controlling nonlinear optical signals with large modulation strengths and near-perfect contrast remains a challenging problem due to intrinsic second-order nonlinear coefficients via bulk or surface contributions. Here, via electrical control, we turn on and tune second-order nonlinear coefficients in semiconducting CdS nanobelts from zero to up to 151 pm V^−1, a value higher than other intrinsic nonlinear coefficients in CdS. We also observe ultrahigh ON/OFF ratio of >10^4 and modulation strengths ~200% V^−1 of the nonlinear signal. The unusual nonlinear behavior, including super-quadratic voltage and power dependence, is ascribed to the high-field domain, which can be further controlled by near-infrared optical excitation and electrical gating. The ability to electrically control nonlinear optical signals in nanostructures can enable optoelectronic devices such as optical transistors and modulators for on-chip integrated photonics.