Using ‘cooperative perception’ between intelligent vehicles to reduce risks

Networked intelligent vehicles (credit: EPFL)

Researchers at École polytechnique fédérale de Lausanne (EPFL) have combined data from two vehicles to create a wider field of view, extend situational awareness, and improve safety.

Autonomous vehicles get their intelligence from cameras, radar, light detection and ranging (LIDAR) sensors, and navigation and mapping systems. But there are ways to make them even smarter. Researchers at EPFL are working to improve the reliability and fault tolerance of these systems by sharing data between vehicles. For example, this can extend the field of view of a car that is behind another car.

Using simulators and road tests, the team has developed a flexible software framework for networking intelligent vehicles so that they can interact.

Cooperative perception

“Today, intelligent vehicle development is focused on two main issues: the level of autonomy and the level of cooperation,” says Alcherio Martinoli, who heads EPFL’s Distributed Intelligent Systems and Algorithms Laboratory (DISAL). As part of his PhD thesis, Milos Vasic has developed cooperative perception algorithms, which extend an intelligent vehicle’s situational awareness by fusing data from onboard sensors with data provided by cooperative vehicles nearby.

Milos Vasic, PhD, and Alcherio Martinoli made two regular cars intelligent using off-the-shelf equipment. (credit: Alain Herzog/EPFL)

The researchers used cooperative perception algorithms as the basis for the software framework. Cooperative perception means that an intelligent vehicle can combine its own data with that of another vehicle to help make driving decisions.

They developed an assistance system that assesses the risk of passing, for example. The risk assessment factors in the probability of an oncoming car in the opposite lane as well as kinematic conditions such as driving speeds, the distance required to overtake, and the distance to the oncoming car.
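
As a rough illustration of how such a passing-risk estimate might be computed, here is a minimal sketch in Python. The function, the fixed overtaking margin, and the risk weighting are illustrative assumptions, not the EPFL system's actual model.

```python
# Minimal sketch of an overtaking risk estimate (illustrative only; not the
# EPFL framework). All constants and the risk model are assumptions.

def overtaking_risk(p_oncoming, own_speed, lead_speed, oncoming_speed,
                    gap_to_lead, oncoming_distance):
    """Return a rough risk score in [0, 1] for an overtaking maneuver.

    p_oncoming        -- estimated probability that a car occupies the opposite lane
    own_speed         -- ego vehicle speed (m/s)
    lead_speed        -- speed of the car being overtaken (m/s)
    oncoming_speed    -- assumed speed of the oncoming car (m/s)
    gap_to_lead       -- distance to the car ahead (m)
    oncoming_distance -- distance to the oncoming car, if detected (m)
    """
    closing_speed = max(own_speed - lead_speed, 0.1)        # avoid divide-by-zero
    overtake_distance = gap_to_lead + 20.0                  # assumed re-entry margin (m)
    time_to_overtake = overtake_distance / closing_speed    # seconds

    # Road length consumed by both cars while the maneuver is in progress.
    distance_consumed = (own_speed + oncoming_speed) * time_to_overtake
    margin = oncoming_distance - distance_consumed

    # Risk grows as the spatial margin shrinks, weighted by occupancy probability.
    geometric_risk = 1.0 if margin <= 0 else min(1.0, 100.0 / margin)
    return p_oncoming * geometric_risk


# Example: 60% chance of an oncoming car 400 m away while closing slowly on the lead.
print(overtaking_risk(0.6, 22.0, 19.0, 22.0, gap_to_lead=15.0, oncoming_distance=400.0))
```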

Difficulties in fusing data

The team retrofitted two Citroën C-Zero electric cars with a Mobileye camera, an accurate localization system, a router to enable Wi-Fi communication, a computer to run the software, and an external battery to power everything. “These were not autonomous vehicles,” says Martinoli, “but we made them intelligent using off-the-shelf equipment.”

One of the difficulties in fusing data from the two vehicles involved relative localization. The cars needed to know precisely where they were in relation to each other as well as to objects in the vicinity.

For example, if a single pedestrian does not appear to both cars to be in exactly the same spot, there is a risk that, together, they will see two pedestrians instead of one. By using other signals, particularly those provided by the LIDAR sensors and cameras, the researchers were able to correct flaws in the navigation system and adjust their algorithms accordingly. This exercise was even more challenging because the data had to be processed in real time while the vehicles were in motion.
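
The following sketch illustrates the general idea of fusing detections from two vehicles: transform the other car's detections into the ego frame using the estimated relative pose, then merge any detection that lands within a small gate of an existing one, so a shared pedestrian is counted once. The transform, the gating threshold, and the averaging rule are assumptions for illustration, not the framework's actual implementation.

```python
# Illustrative sketch of merging object detections from two cooperating cars.
import math

def to_ego_frame(detection, other_pose):
    """Transform a detection (x, y) from the other car's frame into ours,
    given the other car's relative pose (dx, dy, heading in radians)."""
    x, y = detection
    dx, dy, heading = other_pose
    xr = math.cos(heading) * x - math.sin(heading) * y + dx
    yr = math.sin(heading) * x + math.cos(heading) * y + dy
    return (xr, yr)

def fuse_detections(own, remote, other_pose, gate=1.0):
    """Merge two detection lists; a remote point within `gate` meters of an
    own detection is treated as the same object (averaged), not duplicated."""
    fused = list(own)
    for det in remote:
        p = to_ego_frame(det, other_pose)
        match = next((q for q in fused
                      if math.hypot(p[0] - q[0], p[1] - q[1]) < gate), None)
        if match:
            fused[fused.index(match)] = ((match[0] + p[0]) / 2, (match[1] + p[1]) / 2)
        else:
            fused.append(p)
    return fused

# A pedestrian seen by both cars at slightly different positions is kept as one object.
print(fuse_detections([(10.0, 2.0)], [(4.8, 2.1)], other_pose=(5.0, 0.0, 0.0)))
```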

Although the tests involved only two vehicles, the longer-term goal is to create a network between multiple vehicles as well as with the roadway infrastructure.

In addition to driving safety and comfort, cooperative networks of this sort could eventually be used to optimize a vehicle’s trajectory, save energy, and improve traffic flows.

Of course, determining liability in case of an accident becomes more complicated when vehicles cooperate. “The answers to these issues will play a key role in determining whether autonomous vehicles are accepted,” says Martinoli.


École polytechnique fédérale de Lausanne (EPFL) | Networked intelligent vehicles

Ray Kurzweil on The Age of Spiritual Machines: A 1999 TV interview

Dear readers,

For your interest, this 1999 interview with me, which I recently re-watched, describes some interesting predictions that are still coming true today. It’s intriguing to look back at the last 18 years to see what actually unfolded. This video is a compelling glimpse into the future, as we’re living it today.

Enjoy!

— Ray


Dear readers,

This interview by Harold Hudson Channer was recorded on Jan. 14, 1999 and aired February 1, 1999 on a Manhattan Neighborhood Network cable-access show, Conversations with Harold Hudson Channer.

In the discussion, Ray explains many of the ahead-of-their-time ideas presented in The Age of Spiritual Machines*, such as the “law of accelerating returns” (how technological change is exponential, contrary to the common-sense “intuitive linear” view); the forthcoming revolutionary impacts of AI; nanotech brain and body implants for increased intelligence, improved health, and life extension; and technological impacts on economic growth.

I was personally inspired by the book in 1999 and by Ray’s prophetic, uplifting vision of the future. I hope you also enjoy this blast from the past.

— Amara D. Angelica, Editor

* First published in hardcover January 1, 1999 by Viking. The series also includes The Age of Intelligent Machines (The MIT Press, 1992) and The Singularity Is Near (Penguin Books, 2006).

Intel’s new ‘Loihi’ chip mimics neurons and synapses in the human brain

Loihi chip (credit: Intel Corporation)

Intel announced this week a self-learning, energy-efficient neuromorphic (brain-like) research chip codenamed “Loihi”* that mimics how the human brain functions. Under development for six years, the chip uses 130,000 “neurons” and 130 million “synapses” and learns in real time, based on feedback from the environment.**

Neuromorphic chip models are inspired by how neurons communicate and learn, using spikes (brain pulses) and synapses capable of learning.
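
For a rough picture of the spiking abstraction such chips build on, here is a toy leaky integrate-and-fire neuron in Python. It is a generic textbook model, not Intel's actual Loihi neuron or learning rule, and the threshold and leak values are arbitrary.

```python
# A toy leaky integrate-and-fire neuron, a common abstraction behind spiking
# hardware. Generic illustration only; not Intel's Loihi neuron model.

def simulate_lif(input_currents, threshold=1.0, leak=0.9):
    """Integrate the input each timestep, leak a fraction of the membrane
    potential, and emit a spike (1) whenever the threshold is crossed."""
    potential = 0.0
    spikes = []
    for current in input_currents:
        potential = potential * leak + current
        if potential >= threshold:
            spikes.append(1)
            potential = 0.0          # reset after spiking
        else:
            spikes.append(0)
    return spikes

print(simulate_lif([0.3, 0.4, 0.5, 0.1, 0.6, 0.7]))   # -> [0, 0, 1, 0, 0, 1]
```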

“The idea is to help computers self-organize and make decisions based on patterns and associations,” Michael Mayberry, PhD, corporate vice president and managing director of Intel Labs at Intel Corporation, explained in a blog post.

He said the chip automatically gets smarter over time and doesn’t need to be trained in the traditional way. He sees applications in areas that would benefit from autonomous operation and continuous learning in an unstructured environment, such as automotive, industrial, and personal-robotics areas.

For example, a cybersecurity system could identify a breach or a hack based on an abnormality or difference in data streams. Or the chip could learn a person’s heartbeat reading under various conditions — after jogging, following a meal or before going to bed — to determine a “normal” heartbeat. The system could then continuously monitor incoming heart data to flag patterns that don’t match the “normal” pattern, and could be personalized for any user.
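
A conventional-software sketch of that heartbeat idea might look like the following, using simple running statistics to flag readings that deviate from a personalized baseline. This only illustrates the behavior described above; the class, threshold, and statistics are assumptions, and Loihi would learn such a baseline on-chip rather than through explicit statistics.

```python
# Sketch of the heartbeat-monitoring idea using simple running statistics.
import statistics

class HeartbeatMonitor:
    def __init__(self, z_threshold=3.0):
        self.readings = []             # baseline heart-rate samples (bpm)
        self.z_threshold = z_threshold

    def learn(self, bpm):
        """Accumulate readings taken under varied conditions (resting, after
        jogging, after a meal) to personalize the notion of 'normal'."""
        self.readings.append(bpm)

    def is_abnormal(self, bpm):
        """Flag a reading that deviates strongly from the learned baseline."""
        if len(self.readings) < 10:
            return False               # not enough data to judge yet
        mean = statistics.mean(self.readings)
        stdev = statistics.stdev(self.readings) or 1.0
        return abs(bpm - mean) / stdev > self.z_threshold

monitor = HeartbeatMonitor()
for sample in [62, 65, 70, 90, 95, 64, 66, 72, 88, 61, 67]:
    monitor.learn(sample)
print(monitor.is_abnormal(150))   # True: far outside the personal baseline
print(monitor.is_abnormal(75))    # False
```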

“Machine learning models such as deep learning have made tremendous recent advancements by using extensive training datasets to recognize objects and events. However, unless their training sets have specifically accounted for a particular element, situation or circumstance, these machine learning systems do not generalize well,” Mayberry notes.

The Loihi test chip

Loihi currently exists as a research test chip that offers flexible on-chip learning and combines training and inference. Researchers have demonstrated it learning at a rate that represents a 1 million-fold improvement over other typical spiking neural nets, as measured by the total operations needed to achieve a given accuracy on MNIST digit-recognition problems, Mayberry said. “Compared to technologies such as convolutional neural networks and deep learning neural networks, the Loihi test chip uses many fewer resources on the same task.”

Fabricated on Intel’s 14 nm process technology, the chip is also up to 1,000 times more energy-efficient than general-purpose computing required for typical training systems, he added.

In the first half of 2018, Intel plans to share the Loihi test chip with leading university and research institutions with a focus on advancing AI. The goal is to develop and test several algorithms with high efficiency for problems including path planning, constraint satisfaction, sparse coding, dictionary learning, and dynamic pattern learning and adaptation.

“Looking to the future, Intel believes that neuromorphic computing offers a way to provide exascale performance in a construct inspired by how the brain works,” Mayberry said.

* “Loihi seamount, sometimes known as the ‘youngest volcano’ in the Hawaiian chain, is an undersea mountain rising more than 3000 meters above the floor of the Pacific Ocean … submerged in the Pacific off of the south-eastern coast of the Big Island of Hawaii.” — Hawaii Center for Volcanology

** For comparison, IBM’s TrueNorth neuromorphic chip currently has 1 million neurons and 256 million synapses.

Why futurist Ray Kurzweil isn’t worried about technology stealing your job — Fortune

1985: Ray Kurzweil looks on as Stevie Wonder experiences the Kurzweil 250, the first synthesizer to accurately reproduce the sounds of the piano — replacing piano-maker jobs but adding many more jobs for musicians (credit: Kurzweil Music Systems)

Last week, Fortune magazine asked Ray Kurzweil to comment on some often-expressed questions about the future.

Does AI pose an existential threat to humanity?

Kurzweil sees the future as nuanced, notes writer Michal Lev-Ram. “A lot of people think things are getting worse, partly because that’s actually an evolutionary adaptation,” Kurzweil said. “It’s very important for your survival to be sensitive to bad news. … I think if you look at history, though, we’re being helped [by new technology] more than we’re being hurt.”

How will artificial intelligence and other technologies impact jobs?

“We have already eliminated all jobs several times in human history,” said Kurzweil, pointing out that “for every job we eliminate, we’re going to create more jobs at the top of the skill ladder. … You can’t describe the new jobs, because they’re in industries and concepts that don’t exist yet.”

Why are we so bad at predicting certain things? For example, Donald Trump winning the presidency?

Kurzweil: “He’s not technology.”

Read Fortune article here.


Human vs. deep-neural-network performance in object recognition

(credit: UC Santa Barbara)

Before you read this: look for toothbrushes in the photo above.

Did you notice the huge toothbrush on the left? Probably not. That’s because when humans search through scenes for a particular object, we often miss objects whose size is inconsistent with the rest of the scene, according to scientists in the Department of Psychological & Brain Sciences at UC Santa Barbara.

The scientists are investigating this phenomenon in an effort to better understand how humans and computers compare in doing visual searches. Their findings are published in the journal Current Biology.

Hiding in plain sight

“When something appears at the wrong scale, you will miss it more often because your brain automatically ignores it,” said UCSB professor Miguel Eckstein, who specializes in computational human vision, visual attention, and search.

The experiment used scenes of ordinary objects featured in computer-generated images that varied in color, viewing angle, and size, mixed with “target-absent” scenes. The researchers asked 60 viewers to search for these objects (e.g., toothbrush, parking meter, computer mouse) while eye-tracking software monitored the paths of their gaze.

The researchers found that people tended to miss the target more often when it was mis-scaled (too large or too small) — even when looking directly at the target object.

Computer vision, by contrast, doesn’t have this issue, the scientists reported. However, in the experiments, the researchers found that the most advanced form of computer vision — deep neural networks — had its own limitations.

Human search strategies that could improve computer vision

Red rectangle marks a computer keyboard incorrectly identified as a cell phone by a deep-learning algorithm (credit: UC Santa Barbara)

For example, a deep convolutional neural network (CNN) incorrectly identified a computer keyboard as a cell phone, based on its similar shape and its location close to a human hand (as would be expected of a cell phone). But for humans, the object’s size (compared to the nearby hands) is clearly inconsistent with a cell phone.

“This strategy allows humans to reduce false positives when making fast decisions,” the researchers note in the paper.

“The idea is when you first see a scene, your brain rapidly processes it within a few hundred milliseconds or less, and then you use that information to guide your search towards likely locations where the object typically appears,” Eckstein said. “Also, you focus your attention on objects that are actually at the size that is consistent with the object that you’re looking for.”

That is, human brains use the relationships between objects and their context within the scene to guide their eyes — a useful strategy to process scenes rapidly, eliminate distractors, and reduce false positives.

This finding might suggest ways to improve computer vision by implementing some of the tricks the brain utilizes to reduce false positives, according to the researchers.
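
One way to borrow that trick in a conventional detection pipeline would be a post-filter that discards candidates whose apparent size is inconsistent with the scene, as in the hypothetical sketch below. The expected sizes, the tolerance, and the detection format are illustrative assumptions, not the authors' model.

```python
# Illustrative post-filter for object detections, discounting candidates whose
# apparent size is inconsistent with scene context. Sizes and ratios are assumptions.

def filter_by_size_consistency(detections, expected_heights, tolerance=2.0):
    """Keep detections whose bounding-box height is within `tolerance`x of the
    height expected for that class given the scene context.

    detections       -- list of (label, box_height_px, score)
    expected_heights -- dict mapping label -> expected height in pixels,
                        e.g. derived from nearby reference objects such as hands
    """
    kept = []
    for label, height, score in detections:
        expected = expected_heights.get(label)
        if expected is None:
            kept.append((label, height, score))   # no prior: keep as-is
            continue
        ratio = height / expected
        if 1.0 / tolerance <= ratio <= tolerance:
            kept.append((label, height, score))
    return kept

# A "cell phone" the size of a keyboard (300 px vs. an expected ~80 px) is discarded.
detections = [("cell phone", 300, 0.91), ("computer mouse", 40, 0.77)]
print(filter_by_size_consistency(detections, {"cell phone": 80, "computer mouse": 35}))
```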

Future research

“There are some theories that suggest that people with autism spectrum disorder focus more on local scene information and less on global structure,” says Eckstein, who is contemplating a follow-up study. “So there is a possibility that people with autism spectrum disorder might miss the mis-scaled objects less often, but we won’t know that until we do the study.”

In the more immediate future, the team’s research will look into the brain activity that occurs when we view mis-scaled objects.

“Many studies have identified brain regions that process scenes and objects, and now researchers are trying to understand which particular properties of scenes and objects are represented in these regions,” said postdoctoral researcher Lauren Welbourne, whose current research concentrates on how objects are represented in the cortex, and how scene context influences the perception of objects.

“So what we’re trying to do is find out how these brain areas respond to objects that are either correctly or incorrectly scaled within a scene. This may help us determine which regions are responsible for making it more difficult for us to find objects if they are mis-scaled.”


Abstract of Humans, but Not Deep Neural Networks, Often Miss Giant Targets in Scenes

Even with great advances in machine vision, animals are still unmatched in their ability to visually search complex scenes. Animals from bees [1, 2] to birds [3] to humans [4–12] learn about the statistical relations in visual environments to guide and aid their search for targets. Here, we investigate a novel manner in which humans utilize rapidly acquired information about scenes by guiding search toward likely target sizes. We show that humans often miss targets when their size is inconsistent with the rest of the scene, even when the targets were made larger and more salient and observers fixated the target. In contrast, we show that state-of-the-art deep neural networks do not exhibit such deficits in finding mis-scaled targets but, unlike humans, can be fooled by target-shaped distractors that are inconsistent with the expected target’s size within the scene. Thus, it is not a human deficiency to miss targets when they are inconsistent in size with the scene; instead, it is a byproduct of a useful strategy that the brain has implemented to rapidly discount potential distractors.


Artificial ‘skin’ gives robotic hand a sense of touch

University of Houston researchers have reported a development in stretchable electronics that can serve as artificial skin for a robotic hand and biomedical devices (credit: University of Houston)

A team of researchers from the University of Houston has reported a development in stretchable electronics that can serve as an artificial skin, allowing a robotic hand to sense the difference between hot and cold, and also offering advantages for a wide range of biomedical devices.

The work, reported in the open-access journal Science Advances, describes a new mechanism for producing stretchable electronics, a process that relies upon readily available materials and could be scaled up for commercial production.

Cunjiang Yu, Bill D. Cook Assistant Professor of mechanical engineering and lead author of the paper, said the work is the first to create a semiconductor in a rubber composite format, designed to allow the electronic components to retain functionality even after the material is stretched by 50 percent.

He noted that traditional semiconductors are brittle and using them in otherwise stretchable materials has required a complicated system of mechanical accommodations. That’s both more complex and less stable than the new discovery, as well as more expensive, he said. “Our strategy has advantages for simple fabrication, scalable manufacturing, high-density integration, large strain tolerance, and low cost,” he said.

Photograph of a robotic hand with intrinsically stretchable rubbery sensors (credit: Hae-Jin Kim et al./Science Advances)

The team used the skin to demonstrate that a robotic hand could sense the temperature of hot and iced water in a cup. The skin also was able to interpret computer signals sent to the hand and reproduce the signals as American Sign Language.

Uses of the stretchable skin include soft wearable electronics such as health monitors, medical implants, and human-machine interfaces.

The stretchable composite semiconductor was prepared by using a silicon-based polymer known as polydimethylsiloxane (PDMS) and tiny nanowires to create a solution that was then hardened into a material that used the nanowires to transport electric current.


Abstract of Rubbery electronics and sensors from intrinsically stretchable elastomeric composites of semiconductors and conductors

A general strategy to impart mechanical stretchability to stretchable electronics involves engineering materials into special architectures to accommodate or eliminate the mechanical strain in nonstretchable electronic materials while stretched. We introduce an all solution–processed type of electronics and sensors that are rubbery and intrinsically stretchable as an outcome from all the elastomeric materials in percolated composite formats with P3HT-NFs [poly(3-hexylthiophene-2,5-diyl) nanofibrils] and AuNP-AgNW (Au nanoparticles with conformally coated silver nanowires) in PDMS (polydimethylsiloxane). The fabricated thin-film transistors retain their electrical performances by more than 55% upon 50% stretching and exhibit one of the highest P3HT-based field-effect mobilities of 1.4 cm²/V·s, owing to crystallinity improvement. Rubbery sensors, which include strain, pressure, and temperature sensors, show reliable sensing capabilities and are exploited as smart skins that enable gesture translation for sign language alphabet and haptic sensing for robotics to illustrate one of the applications of the sensors.

A battery-free origami robot powered and controlled by external magnetic fields

Wirelessly powered and controlled magnetic folding robot arm can grasp and bend (credit: Wyss Institute at Harvard University)

Harvard University researchers have created a battery-free, folding robot “arm” with multiple “joints,” gripper “hand,” and actuator “muscles” — all powered and controlled wirelessly by an external resonant magnetic field.

The design is inspired by the traditional Japanese art of origami (used to transform a simple sheet of paper into complex, three-dimensional shapes through a specific pattern of folds, creases, and crimps). The prototype device is capable of complex, repeatable movements at millimeter to centimeter scales.

The research, by scientists at the Wyss Institute for Biologically Inspired Engineering and the John A. Paulson School of Engineering and Applied Sciences (SEAS), is reported in Science Robotics.

How it works

Design of small-scale-structure prototype of wirelessly controlled robotic arm (credit: Mustafa Boyvat et al./Science Robotics)

The researchers designed a 0.8-gram small-scale-structure* prototype robotic “arm” capable of bending and of opening or closing a gripper around an object. The “arm” is constructed with a special origami-like pattern that uses hinges (“joints”) to permit it to bend. There is also a “hand” (gripper — left panel in above image) that opens or closes.

To power the device, an external coil with its own power source (see video below) generates a low-frequency magnetic field that induces an electrical current in three magnetic coils on the robot. The current heats the coiled shape-memory-alloy (SMA) actuator wires (shown in the inset above), causing the actuator “muscles” to contract, bending the attached “joints” and folding the robot body.

Mechanism of the origami gripper (small-scale prototype design). (Left) The coil SMA actuator pushes the center link connected to both fingers, and the gripper opens its fingers, enabled by dynamic folding at the joints. (Center) The plate spring, a passive compression spring, pulls the link back as the gripper closes its fingers, again by rotation at the folding joints. (Right) A photo of the gripper showing the SMA actuator wire attached at the center link. (credit: Mustafa Boyvat et al./Science Robotics)

By changing the resonant frequency of the external electromagnetic field, the two longer actuator wires (coiled wires shown in above illustration) are instead heated and stretched, opening the gripper (“hand”).

In both cases, when the external field-induced current stops, the actuators relax, springing back to their “memory” positions and causing the robot body to straighten out or the gripper’s outer triangles to close.
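
The addressing principle can be illustrated with a toy model: each on-board coil is tuned to a different resonant frequency, so only the actuator whose coil matches the external field's frequency absorbs enough power to heat its wire. The resonance values, the response curve, and the power threshold below are made-up illustrations, not the paper's circuit parameters.

```python
# Toy model of frequency-addressed actuation: the external field's frequency
# selects which actuator coil receives enough power to heat its SMA wire.

def received_power(field_freq_khz, coil_resonance_khz, bandwidth_khz=10.0):
    """Simple resonance-style response: power falls off as the field frequency
    moves away from the coil's resonant frequency."""
    detune = (field_freq_khz - coil_resonance_khz) / bandwidth_khz
    return 1.0 / (1.0 + detune ** 2)

def actuated_parts(field_freq_khz, coils, power_threshold=0.5):
    """Return the names of actuators whose coils pick up enough power to act."""
    return [name for name, resonance in coils.items()
            if received_power(field_freq_khz, resonance) >= power_threshold]

# Hypothetical resonances for the folding joints vs. the gripper coil.
coils = {"joint_fold": 120.0, "gripper_open": 180.0}
print(actuated_parts(120.0, coils))   # ['joint_fold']   -> arm bends
print(actuated_parts(180.0, coils))   # ['gripper_open'] -> gripper opens
```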

Minimally invasive medicine and surgery applications

As an example of a practical future application, instead of having an uncomfortable endoscope put down their throat to assist a doctor with surgery, a patient could just swallow a micro-robot that could move around and perform simple tasks, like holding tissue or filming, powered by a coil outside their body.

Using a much larger source coil — on the order of yards in diameter — could enable wireless, battery-free communication between multiple “smart” objects in a room or building.

“Medical devices today are commonly limited by the size of the batteries that power them, whereas these remotely powered origami robots can break through that size barrier and potentially offer entirely new, minimally invasive approaches for medicine and surgery in the future,” says Wyss Founding Director Donald Ingber, who is also the Judah Folkman Professor of Vascular Biology at Harvard Medical School and the Vascular Biology Program at Boston Children’s Hospital, as well as a Professor of Bioengineering at Harvard’s School of Engineering and Applied Sciences.

This work was supported by the National Science Foundation, the U.S. Army Research Laboratory, and the Swiss National Science Foundation.

* A large-scale-structure prototype version has minor differences, including 12-cm folding lines vs. 1.7-cm folding lines in the smaller version.

Wyss Institute | Battery-Free Folding Robots


Abstract of Addressable wireless actuation for multijoint folding robots and devices

“Printing” robots and other complex devices through a process of origami-like folding is an emerging and promising manufacturing method due to the inherent simplicity and low cost of folding-based assembly. Folding is used in this class of device to create both complex static structures and flexure-based compliant mechanisms. Dependency on batteries to power these folds with no external wires is a hurdle to giving small-scale folding robots and devices functionality. We demonstrate a battery-free wireless folding method for dynamic multijoint structures, achieving addressable folding motions—both individual and collective folding—using only basic passive electronic components on the device. The method is based on electromagnetic power transmission and resonance selectivity for actuation of resistive shape memory alloy actuators without the need for physical connection or line of sight. We demonstrate the utility of this approach using two folded devices at different sizes using different circuit approaches.

Scientists remove one of the final barriers to making lifelike robots

(L) The electrically actuated muscle with thin resistive wire in a rest position; (R) The muscle is expanded using only a low voltage (8V). (credit: Aslan Miriyev/Columbia Engineering)

Researchers at the Columbia Engineering Creative Machines lab have developed a 3D-printable, synthetic soft muscle that can mimic natural biological systems, lifting 1000 times its own weight. The artificial muscle is three times stronger than natural muscle and can push, pull, bend, twist, and lift weight — no external devices required.

Existing soft-actuator technologies are typically based on bulky pneumatic or hydraulic inflation of elastomer skins that expand when air or liquid is supplied to them, which require external compressors and pressure-regulating equipment.

“We’ve been making great strides toward making robot minds, but robot bodies are still primitive,” said Hod Lipson, PhD, a professor of mechanical engineering. “This is a big piece of the puzzle and, like biology, the new actuator can be shaped and reshaped a thousand ways. We’ve overcome one of the final barriers to making lifelike robots.”

The research findings are described in an open-access study published Tuesday, Sept. 19, 2017, in Nature Communications.

Replicating natural motion

Inspired by living organisms, soft-material robotics hold promise for areas where robots need to contact and interact with humans, such as manufacturing and healthcare. Unlike rigid robots, soft robots can replicate natural motion — grasping and manipulation — to provide medical and other types of assistance, perform delicate tasks, or pick up soft objects.

Structure and principle of operation of the soft composite material (stereoscope image scale bar is 1 mm). Upon heating the composite to a temperature of 78.4 °C, ethanol boils and the local pressure inside the micro-bubbles grows, forcing the elastic silicone elastomer matrix to comply by expansion in order to reduce the pressure. (credit: Aslan Miriyev et al./Nature Communications)

To achieve an actuator with high stress and high strain coupled with low density, the researchers used a silicone rubber matrix with ethanol (alcohol) distributed throughout in micro-bubbles. This design combines the elastic properties and extreme volume change attributes of other material systems while also being easy to fabricate, low cost, and made of environmentally safe materials.*
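
A minimal control sketch for an electrothermal actuator of this kind might simply drive the resistive wire until the composite nears the ethanol boiling point, then cut power to let it relax, as below. The sensor and heater-driver callables and the mock thermal model are hypothetical placeholders, not the authors' setup.

```python
# Bang-bang actuation sketch for an electrothermal soft muscle: heat toward
# ~80 C (just above the reported 78.4 C boiling point) to expand, then cool
# to contract. Interfaces are hypothetical placeholders.
import time

TARGET_C = 80.0   # actuation temperature reported for the tests

def actuate(read_temp_c, set_heater_voltage, hold_seconds=1.0):
    """Heat to the target (expansion), hold, then cut power (contraction)."""
    deadline = time.monotonic() + hold_seconds
    while time.monotonic() < deadline:
        if read_temp_c() < TARGET_C:
            set_heater_voltage(8.0)    # low-voltage resistive heating
        else:
            set_heater_voltage(0.0)    # hold near the target temperature
        time.sleep(0.05)
    set_heater_voltage(0.0)            # cool below boiling point -> relax

# Example with a crude mock thermal model (purely illustrative).
state = {"temp": 25.0}
def read_temp_c():
    return state["temp"]
def set_heater_voltage(v):
    state["temp"] += 0.5 if v > 0 else -0.2   # heating / cooling per tick

actuate(read_temp_c, set_heater_voltage, hold_seconds=1.0)
print(round(state["temp"], 1))
```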

The researchers next plan to use conductive (heatable) materials to replace the embedded wire, accelerate the muscle’s response time, and increase its shelf life. Long-term, they plan to involve artificial intelligence to learn to control the muscle — perhaps a final milestone towards replicating natural human motion.

* After being 3D-printed into the desired shape, the artificial muscle was electrically actuated using a thin resistive wire and low power (8 V). It was tested in a variety of robotic applications, where it showed significant expansion-contraction ability and was capable of expansion up to 900% when electrically heated to 80°C. The new material has a strain density (the amount of deformation in the direction of an applied force without damage) that is 15 times greater than that of natural muscle.


Columbia Engineering | Soft Materials for Soft Actuators

Roboticists show off their new advances in “soft robots” (credit: Reuters TV)


Abstract of Soft material for soft actuators

Inspired by natural muscle, a key challenge in soft robotics is to develop self-contained electrically driven soft actuators with high strain density. Various characteristics of existing technologies, such as the high voltages required to trigger electroactive polymers (>1 kV), low strain (<10%) of shape memory alloys and the need for external compressors and pressure-regulating components for hydraulic or pneumatic fluidic elastomer actuators, limit their practicality for untethered applications. Here we show a single self-contained soft robust composite material that combines the elastic properties of a polymeric matrix and the extreme volume change accompanying liquid–vapor transition. The material combines a high strain (up to 900%) and correspondingly high stress (up to 1.3 MPa) with low density (0.84 g cm⁻³). Along with its extremely low cost (about 3 cent per gram), simplicity of fabrication and environment-friendliness, these properties could enable new kinds of electrically driven entirely soft robots.