Smart algorithm automatically adjusts exoskeletons for best walking performance

Walk this way: Metabolic feedback and optimization algorithm automatically tweaks exoskeleton for optimal performance. (credit: Kirby Witte, Katie Poggensee, Pieter Fiers, Patrick Franks & Steve Collins)

Researchers at the College of Engineering at Carnegie Mellon University (CMU) have developed a new automated feedback system for personalizing exoskeletons to achieve optimal performance.

Exoskeletons can be used to augment human abilities. For example, they can provide more endurance while walking, help lift a heavy load, improve athletic performance, and help a stroke patient walk again.

But current one-size-fits-all exoskeleton devices, despite their potential, “have not improved walking performance as much as we think they should,” said Steven Collins, a professor of Mechanical Engineering and senior author of a paper published Friday, June 23, 2017, in Science.

The problem: An exoskeleton needs to be adjusted (and re-adjusted) to work effectively for each user — currently, a time-consuming, iffy manual process.

So the CMU engineers developed a more effective “human-in-the-loop optimization” technique that measures the amount of energy the walker expends by monitoring their breathing* — automatically adjusting the exoskeleton’s ankle dynamics to minimize required human energy expenditure.**

Using real-time metabolic cost estimation for each individual, the CMU software algorithm, combined with versatile emulator hardware, optimized the exoskeleton torque pattern for one ankle while walking, running, and carrying a load on a treadmill. The algorithm automatically made optimized adjustments for each pattern, based on measurements of a person’s energy use for 32 different walking patterns over the course of an hour. (credit: Juanjuan Zhang et al./Science, adapted by KurzweilAI)
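To make the feedback loop concrete, here is a minimal Python sketch of the general idea: propose a few candidate torque patterns, measure the wearer’s energy cost for each, and keep the cheapest. The parameter values, cost function, and keep-the-best update rule are illustrative stand-ins, not the authors’ actual algorithm.

```python
import numpy as np

# Minimal sketch of a human-in-the-loop optimization cycle. The four torque-pattern
# parameters (peak torque, peak timing, rise time, fall time), the cost function,
# and the simple keep-the-best update are all hypothetical stand-ins for the
# study's optimizer, which was driven by real respirometry measurements.

def measure_metabolic_cost(params):
    """Stand-in for a short metabolic measurement of one torque pattern."""
    target = np.array([0.5, 0.53, 0.25, 0.10])    # hypothetical "ideal" pattern
    return float(np.sum((params - target) ** 2))  # lower = less energy expended

def optimize_torque_pattern(generations=8, pop_size=4, step=0.05, seed=0):
    rng = np.random.default_rng(seed)
    best = np.array([0.40, 0.50, 0.30, 0.15])     # initial guess (normalized units)
    best_cost = measure_metabolic_cost(best)
    for _ in range(generations):                  # 8 generations x 4 candidates = 32 patterns
        candidates = best + step * rng.standard_normal((pop_size, best.size))
        costs = [measure_metabolic_cost(c) for c in candidates]
        i = int(np.argmin(costs))
        if costs[i] < best_cost:                  # keep the lowest-cost pattern found so far
            best, best_cost = candidates[i], costs[i]
    return best

print(optimize_torque_pattern())
```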

In a lab study with 11 healthy volunteers, the new technique reduced walking effort by an average of 24% compared with walking with the exoskeleton powered off. According to the researchers, that benefit is larger than in any exoskeleton study to date, including studies of devices acting at all joints on both legs.

* “In daily life, a proxy measure such as heart rate or muscle activity could be used for optimization, providing noisier but more abundant performance data.” — Juanjuan Zhang et al./Science

** Ankle torque in the lab study was determined by four parameters: peak torque, timing of peak torque, and rise and fall times. This method was chosen to allow comparisons to a prior study that used the same hardware.
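For illustration, the four-parameter description in the footnote above could be turned into a torque-versus-gait-cycle curve along these lines; the curve shape, numbers, and units below are hypothetical and are not taken from the study.

```python
import numpy as np

def ankle_torque_profile(t, peak_torque, peak_time, rise_time, fall_time):
    """Sketch of a torque pattern defined by four parameters: peak torque,
    timing of the peak, rise time, and fall time. The gait-cycle variable t
    and the timing parameters are fractions of one stride (0 to 1); the
    smooth cosine ramps are an arbitrary choice for illustration."""
    start, end = peak_time - rise_time, peak_time + fall_time
    torque = np.zeros_like(t)
    rising = (t >= start) & (t < peak_time)
    falling = (t >= peak_time) & (t <= end)
    torque[rising] = peak_torque * 0.5 * (1 - np.cos(np.pi * (t[rising] - start) / rise_time))
    torque[falling] = peak_torque * 0.5 * (1 + np.cos(np.pi * (t[falling] - peak_time) / fall_time))
    return torque

gait = np.linspace(0.0, 1.0, 101)                       # one stride, 0% to 100%
profile = ankle_torque_profile(gait, peak_torque=40.0,  # hypothetical values
                               peak_time=0.52, rise_time=0.25, fall_time=0.10)
```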


Science/AAAS | Personalized Exoskeletons Are Taking Support One Step Farther


Abstract of Human-in-the-loop optimization of exoskeleton assistance during walking

Exoskeletons and active prostheses promise to enhance human mobility, but few have succeeded. Optimizing device characteristics on the basis of measured human performance could lead to improved designs. We have developed a method for identifying the exoskeleton assistance that minimizes human energy cost during walking. Optimized torque patterns from an exoskeleton worn on one ankle reduced metabolic energy consumption by 24.2 ± 7.4% compared to no torque. The approach was effective with exoskeletons worn on one or both ankles, during a variety of walking conditions, during running, and when optimizing muscle activity. Finding a good generic assistance pattern, customizing it to individual needs, and helping users learn to take advantage of the device all contributed to improved economy. Optimization methods with these features can substantially improve performance.

Two drones see through walls in 3D using WiFi signals

Transmit and receive drones perform 3D imaging through walls using WiFi (credit: Chitra R. Karanam and Yasamin Mostofi/ACM)

Researchers at the University of California Santa Barbara have demonstrated the first three-dimensional imaging of objects through walls using an ordinary wireless signal.

Applications could include emergency search-and-rescue, archaeological discovery, and structural monitoring, according to the researchers. Other applications could include military and law-enforcement surveillance.

Calculating 3D images from WiFi signals

In the research, two octocopters (drones) took off and flew outside an enclosed, four-sided brick structure whose interior was unknown to them. One drone continuously transmitted a WiFi signal; the other drone (located on a different side of the structure) received that signal and relayed the changes in received signal strength (“RSSI”) during the flight to a computer, which then calculated high-resolution 3D images of the objects inside (the objects do not need to move).

Structure and resulting 3D image (credit: Chitra R. Karanam and Yasamin Mostofi/ACM)

Interestingly, the equipment is all commercially available: two drones with “yagi” antennas, a WiFi router, a Tango tablet (for real-time localization), and a Raspberry Pi computer with a network interface to record measurements.

This development builds on previous 2D work by professor Yasamin Mostofi’s lab, which has pioneered sensing and imaging with everyday radio frequency signals such as WiFi. Mostofi says the success of the 3D experiments is due to the drones’ ability to approach the area from several angles, and to new methodology* developed by her lab.

The research is described in an open-access paper published April 2017 in proceedings of the Association for Computing Machinery/Institute of Electrical and Electronics Engineers International Conference on Information Processing in Sensor Networks (IPSN).

A later paper by Technical University of Munich physicists also reported a system intended for 3D imaging with WiFi, but with only simulated (and cruder) images. (An earlier 2009 paper by Mostofi et al. also reported simulated results for 3D see-through imaging of structures.)

Block diagram of the 3D through-wall imaging system (credit: Chitra R. Karanam and Yasamin Mostofi/ACM)

* The researchers’ approach to enabling 3D through-wall imaging utilizes four tightly integrated key components, according to the paper.

(1) They proposed robotic paths that can capture the spatial variations in all three dimensions as much as possible, while maintaining the efficiency of the operation. 

(2) They modeled the three-dimensional unknown area of interest as a Markov Random Field to capture the spatial dependencies, and utilized a graph-based belief propagation approach to update the imaging decision of each voxel (the smallest unit of a 3D image) based on the decisions of the neighboring voxels. (A toy version of this neighbor-based update is sketched after this list.)

(3) To approximate the interaction of the transmitted wave with the area of interest, they used a linear wave model.

(4) They took advantage of the compressibility of the information content to image the area with a very small number of WiFi measurements (less than 4 percent).
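As a rough illustration of component (2), the toy Python sketch below repeatedly smooths a grid of per-voxel occupancy evidence using neighboring voxels, in the spirit of a belief-propagation update. Everything here is invented for illustration; the real system couples a proper Markov Random Field model with the linear wave model of (3) and the sparse recovery of (4).

```python
import numpy as np

# Toy neighbor-based voxel update, loosely inspired by the paper's MRF /
# belief-propagation step. The "evidence" array stands in for per-voxel occupancy
# likelihoods that would, in the real system, come from WiFi RSSI measurements
# passed through a linear wave model.

def toy_voxel_update(evidence, n_iters=10, neighbor_weight=0.5):
    belief = evidence.copy()
    for _ in range(n_iters):
        # average of the 6 face-adjacent neighbors (np.roll wraps at the edges)
        neighbor_avg = sum(np.roll(belief, shift, axis)
                           for axis in range(3) for shift in (-1, 1)) / 6.0
        belief = (1 - neighbor_weight) * evidence + neighbor_weight * neighbor_avg
    return belief > 0.5          # final occupied / empty decision for each voxel

rng = np.random.default_rng(0)
occupancy = toy_voxel_update(rng.random((16, 16, 8)))   # 16 x 16 x 8 voxel grid
```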


Mostofi Lab | X-ray Eyes in the Sky: Drones and WiFi for 3D Through-Wall Imaging


Abstract of 3D Through-Wall Imaging with Unmanned Aerial Vehicles Using WiFi

In this paper, we are interested in the 3D through-wall imaging of a completely unknown area, using WiFi RSSI and Unmanned Aerial Vehicles (UAVs) that move outside of the area of interest to collect WiFi measurements. It is challenging to estimate a volume represented by an extremely high number of voxels with a small number of measurements. Yet many applications are time-critical and/or limited on resources, precluding extensive measurement collection. In this paper, we then propose an approach based on Markov random field modeling, loopy belief propagation, and sparse signal processing for 3D imaging based on wireless power measurements. Furthermore, we show how to design efficient aerial routes that are informative for 3D imaging. Finally, we design and implement a complete experimental testbed and show high-quality 3D robotic through-wall imaging of unknown areas with less than 4% of measurements.

Crystal ‘domain walls’ may lead to tinier electronic devices

Abstract art? No, nanoscale crystal sheets with moveable conductive “domain walls” that can modify a circuit’s electronic properties (credit: Queen’s University Belfast)

Queen’s University Belfast physicists have discovered a radical new way to modify the conductivity (ease of electron flow) of electronic circuits — reducing the size of future devices.

The two latest KurzweilAI articles on graphene cited faster/lower-power performance and device-compatibility features. This new research takes another approach: Altering the properties of a crystal to eliminate the need for multiple circuits in devices.

Reconfigurable nanocircuitry

To do that, the scientists used “ferroelectric copper-chlorine boracite” crystal sheets, which are almost as thin as graphene. The researchers discovered that squeezing the crystal sheets with a sharp needle at a precise location causes a jigsaw-puzzle-like pattern of “domain walls” to develop around the contact point.

Then, using externally applied electric fields, these writable, erasable domain walls can be repeatedly moved around in the crystal to create a variety of new electronic properties. They can appear, disappear, or move around within the crystal, all without permanently altering the crystal itself.

Eliminating the need for multiple circuits may reduce the size of future computers and other devices, according to the researchers.

The team’s findings have been published in an open-access paper in Nature Communications.


Abstract of Injection and controlled motion of conducting domain walls in improper ferroelectric Cu-Cl boracite

Ferroelectric domain walls constitute a completely new class of sheet-like functional material. Moreover, since domain walls are generally writable, erasable and mobile, they could be useful in functionally agile devices: for example, creating and moving conducting walls could make or break electrical connections in new forms of reconfigurable nanocircuitry. However, significant challenges exist: site-specific injection and annihilation of planar walls, which show robust conductivity, has not been easy to achieve. Here, we report the observation, mechanical writing and controlled movement of charged conducting domain walls in the improper-ferroelectric Cu3B7O13Cl. Walls are straight, tens of microns long and exist as a consequence of elastic compatibility conditions between specific domain pairs. We show that site-specific injection of conducting walls of up to hundreds of microns in length can be achieved through locally applied point-stress and, once created, that they can be moved and repositioned using applied electric fields.

New chemical method could revolutionize graphene use in electronics

Adding a molecular structure containing carbon, chromium, and oxygen atoms retains graphene’s superior conductive properties. The metal atoms (silver, in this experiment) to be bonded are then added to the oxygen atoms on top. (credit: Songwei Che et al./Nano Letters)

University of Illinois at Chicago scientists have solved a fundamental problem that has held back the use of wonder material graphene in a wide variety of electronics applications.

When graphene is bonded (attached) to metal atoms (such as molybdenum) in devices such as solar cells, graphene’s superior conduction properties degrade.

The solution: Instead of adding molecules directly to the individual carbon atoms of graphene, the new method first adds a sort of buffer (consisting of chromium, carbon, and oxygen atoms) to the graphene, and then adds the metal atoms to this buffer material instead. That enables the graphene to retain its unique properties of electrical conduction.

In an experiment, the researchers successfully added silver nanoparticles to graphene with this method. That boosted the power-conversion efficiency of graphene-based solar cells roughly 11-fold, said Vikas Berry, associate professor and department head of chemical engineering and senior author of a paper on the research, published in Nano Letters.

Researchers at Indian Institute of Technology and Clemson University were also involved in the study. The research was funded by the National Science Foundation.


Abstract of Retained Carrier-Mobility and Enhanced Plasmonic-Photovoltaics of Graphene via ring-centered η6 Functionalization and Nanointerfacing

Binding graphene with auxiliary nanoparticles for plasmonics, photovoltaics, and/or optoelectronics, while retaining the trigonal-planar bonding of sp2 hybridized carbons to maintain its carrier-mobility, has remained a challenge. The conventional nanoparticle-incorporation route for graphene is to create nucleation/attachment sites via “carbon-centered” covalent functionalization, which changes the local hybridization of carbon atoms from trigonal-planar sp2 to tetrahedral sp3. This disrupts the lattice planarity of graphene, thus dramatically deteriorating its mobility and innate superior properties. Here, we show large-area, vapor-phase, “ring-centered” hexahapto (η6) functionalization of graphene to create nucleation-sites for silver nanoparticles (AgNPs) without disrupting its sp2 character. This is achieved by the grafting of chromium tricarbonyl [Cr(CO)3] with all six carbon atoms (sigma-bonding) in the benzenoid ring on graphene to form an (η6-graphene)Cr(CO)3 complex. This nondestructive functionalization preserves the lattice continuum with a retention in charge carrier mobility (9% increase at 10 K); with AgNPs attached on graphene/n-Si solar cells, we report an ∼11-fold plasmonic-enhancement in the power conversion efficiency (1.24%).

Graphene-based computer would be 1,000 times faster than silicon-based, use 100th the power

How a graphene-based transistor would work. A graphene nanoribbon (GNR) is created by unzipping (opening up) a portion of a carbon nanotube (CNT) (the flat area, shown with pink arrows above it). The GNR switching is controlled by two surrounding parallel CNTs. The magnitudes and relative directions of the control current, ICTRL (blue arrows) in the CNTs determine the rotation direction of the magnetic fields, B (green). The magnetic fields then control the GNR magnetization (based on the recent discovery of negative magnetoresistance), which causes the GNR to switch from resistive (no current) to conductive, resulting in current flow, IGNR (pink arrows) — in other words, causing the GNR to act as a transistor gate. The magnitude of the current flow through the GNR functions as the binary gate output — with binary 1 representing the current flow of the conductive state and binary 0 representing no current (the resistive state). (credit: Joseph S. Friedman et al./Nature Communications)

A future graphene-based transistor using spintronics could lead to tinier computers that are a thousand times faster and use a hundredth of the power of silicon-based computers.

The radical transistor concept, created by a team of researchers at Northwestern University, The University of Texas at Dallas, University of Illinois at Urbana-Champaign, and University of Central Florida, is explained this month in an open-access paper in the journal Nature Communications.

Transistors act as on and off switches. A series of transistors in different arrangements act as logic gates, allowing microprocessors to solve complex arithmetic and logic problems. But the speed of computer microprocessors that rely on silicon transistors has been relatively stagnant since around 2005, with clock speeds mostly in the 3 to 4 gigahertz range.

Clock speeds approaching the terahertz range

The researchers discovered that by applying a magnetic field to a graphene ribbon (created by unzipping a carbon nanotube), they could change the resistance of current flowing through the ribbon. The magnetic field — controlled by increasing or decreasing the current through adjacent carbon nanotubes — increased or decreased the flow of current.

A cascading series of graphene transistor-based logic circuits could produce a massive jump, with clock speeds approaching the terahertz range — a thousand times faster.* They would also be smaller and substantially more efficient, allowing device-makers to shrink technology and squeeze in more functionality, according to Ryan M. Gelfand, an assistant professor in The College of Optics & Photonics at the University of Central Florida.

The researchers hope to inspire the fabrication of these cascaded logic circuits to stimulate a future transformative generation of energy-efficient computing.

* Unlike other spintronic logic proposals, these new logic gates can be cascaded directly through the carbon materials without requiring intermediate circuits and amplification between gates. That would result in compact circuits with reduced area that are far more efficient than with CMOS switching, which is limited by charge transfer and accumulation from RLC (resistance-inductance-capacitance) interconnect delays.
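A highly simplified way to picture the switching and cascading described above is as a sign-controlled on/off element. The toy Python model below captures only that logical abstraction (the threshold, polarity convention, and inverter wiring are invented); it does not model any of the magnetics or device physics.

```python
# Toy behavioral model of the proposed GNR switch: the direction of the control
# current sets the magnetic field, which puts the graphene nanoribbon into a
# conductive (1) or resistive (0) state. Illustration only.

def gnr_switch(control_current: float) -> int:
    """Return 1 (current flows) if the control field magnetizes the GNR into its
    low-resistance state, else 0. Threshold and sign convention are made up."""
    return 1 if control_current > 0 else 0

def cascaded_inverter(input_bit: int) -> int:
    # Cascading: the output current of one GNR stage directly drives the control
    # CNT of the next stage (no intermediate amplification); here the drive
    # polarity is flipped so the stage behaves as a logical NOT gate.
    control = 1.0 if input_bit == 0 else -1.0
    return gnr_switch(control)

assert cascaded_inverter(0) == 1 and cascaded_inverter(1) == 0
```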


Abstract of Cascaded spintronic logic with low-dimensional carbon

Remarkable breakthroughs have established the functionality of graphene and carbon nanotube transistors as replacements to silicon in conventional computing structures, and numerous spintronic logic gates have been presented. However, an efficient cascaded logic structure that exploits electron spin has not yet been demonstrated. In this work, we introduce and analyse a cascaded spintronic computing system composed solely of low-dimensional carbon materials. We propose a spintronic switch based on the recent discovery of negative magnetoresistance in graphene nanoribbons, and demonstrate its feasibility through tight-binding calculations of the band structure. Covalently connected carbon nanotubes create magnetic fields through graphene nanoribbons, cascading logic gates through incoherent spintronic switching. The exceptional material properties of carbon materials permit Terahertz operation and two orders of magnitude decrease in power-delay product compared to cutting-edge microprocessors. We hope to inspire the fabrication of these cascaded logic circuits to stimulate a transformative generation of energy-efficient computing.

High-speed light-based systems could replace supercomputers for certain ‘deep learning’ calculations

(a) Optical micrograph of an experimentally fabricated on-chip optical interference unit; the physical region where the optical neural network program exists is highlighted in gray. A programmable nanophotonic processor uses a field-programmable gate array (similar to an FPGA integrated circuit) — an array of interconnected waveguides, allowing the light beams to be modified as needed for a specific deep-learning matrix computation. (b) Schematic illustration of the optical neural network program, which performs matrix multiplication and amplification fully optically. (credit: Yichen Shen et al./Nature Photonics)

A team of researchers at MIT and elsewhere has developed a new approach to deep learning systems — using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep-learning computations.

Deep-learning systems are based on artificial neural networks that mimic the way the brain learns from an accumulation of examples. They can enable technologies such as face- and voice-recognition software, or scour vast amounts of medical data to find patterns that could be useful diagnostically, for example.

But the computations these systems carry out are highly complex and demanding, even for supercomputers. Traditional computer architectures are not very efficient for calculations needed for neural-network tasks that involve repeated multiplications of matrices (arrays of numbers). These can be computationally intensive for conventional CPUs or even GPUs.

Programmable nanophotonic processor

Instead, the new approach uses an optical device that the researchers call a “programmable nanophotonic processor.” Multiple light beams are directed in such a way that their waves interact with each other, producing interference patterns that “compute” the intended operation.

The optical chips using this architecture could, in principle, carry out dense matrix multiplications (the most power-hungry and time-consuming part of AI algorithms) for learning tasks much faster than conventional electronic chips. The researchers expect a computational speed enhancement of at least two orders of magnitude over the state of the art, and three orders of magnitude in power efficiency.
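One way to see how a mesh of interferometers can “compute” a matrix multiplication: any real weight matrix can be factored (by singular value decomposition) into two unitary transformations, which suit meshes of Mach-Zehnder interferometers, plus a diagonal scaling, which suits optical attenuation or gain. The short numerical sketch below just checks that factorization; it is not a simulation of the fabricated chip.

```python
import numpy as np

# Numerical illustration of the principle behind optical matrix multiplication:
# M = U * diag(s) * Vh (SVD), so light passing through a mesh implementing Vh,
# then per-channel attenuators/amplifiers implementing diag(s), then a mesh
# implementing U, emerges carrying M @ x.

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))          # a layer's weight matrix
x = rng.standard_normal(4)               # input "light amplitudes"

U, s, Vh = np.linalg.svd(M)
y_optical = U @ (s * (Vh @ x))           # mesh -> attenuators -> mesh
y_direct = M @ x

assert np.allclose(y_optical, y_direct)  # the factorized path reproduces M @ x
```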

“This chip, once you tune it, can carry out matrix multiplication with, in principle, zero energy, almost instantly,” says Marin Soljacic, one of the MIT researchers on the team.

To demonstrate the concept, the team set the programmable nanophotonic processor to implement a neural network that recognizes four basic vowel sounds. Even with the prototype system, they were able to achieve a 77 percent accuracy level, compared to about 90 percent for conventional systems. There are “no substantial obstacles” to scaling up the system for greater accuracy, according to Soljacic.

The team says it will still take a lot more time and effort to make this system useful. However, once the system is scaled up and fully functioning, the low-power system should find many uses, especially for situations where power is limited, such as in self-driving cars, drones, and mobile consumer devices. Other uses include signal processing for data transmission and computer centers.

The research was published Monday (June 12, 2017) in a paper in the journal Nature Photonics (open-access version available on arXiv).

The team also included researchers at Elenion Technologies of New York and the Université de Sherbrooke in Quebec. The work was supported by the U.S. Army Research Office through the Institute for Soldier Nanotechnologies, the National Science Foundation, and the Air Force Office of Scientific Research.


Abstract of Deep learning with coherent nanophotonic circuits

Artificial neural networks are computational network models inspired by signal processing in the brain. These models have dramatically improved performance for many machine-learning tasks, including speech and image recognition. However, today’s computing hardware is inefficient at implementing neural networks, in large part because much of it was designed for von Neumann computing schemes. Significant effort has been made towards developing electronic architectures tuned to implement artificial neural networks that exhibit improved computational speed and accuracy. Here, we propose a new architecture for a fully optical neural network that, in principle, could offer an enhancement in computational speed and power efficiency over state-of-the-art electronics for conventional inference tasks. We experimentally demonstrate the essential part of the concept using a programmable nanophotonic processor featuring a cascaded array of 56 programmable Mach–Zehnder interferometers in a silicon photonic integrated circuit and show its utility for vowel recognition.

A noninvasive method for deep-brain stimulation for brain disorders

External electrical waves excite an area in the mouse hippocampus, shown in bright green. (credit: Nir Grossman, Ph.D., Suhasa B. Kodandaramaiah, Ph.D., and Andrii Rudenko, Ph.D.)

MIT researchers and associates have come up with a breakthrough method of remotely stimulating regions deep within the brain, replacing the invasive surgery now required for implanting electrodes for Parkinson’s and other brain disorders.

The new method could make deep-brain stimulation for brain disorders less expensive, more accessible to patients, and less risky (avoiding brain hemorrhage and infection).

Working with mice, the researchers applied two high-frequency electrical currents at two slightly different frequencies (E1 and E2 in the diagram below), attaching electrodes (similar to those used for EEG recordings) to the surface of the skull.

A new noninvasive method for deep-brain stimulation (credit: Grossman et al./Cell)

At these high frequencies, the currents individually have no effect on brain tissue. But where the currents converge deep in the brain, they interfere with one another in such a way that they generate a low-frequency current (corresponding to the red envelope in the diagram) inside neurons, stimulating neural electrical activity.

The researchers named this method “temporal interference stimulation”: the two currents, at slightly different frequencies, interfere to produce an envelope that oscillates at the difference frequency.* For the experimental setup shown in the diagram above, the E1 current was 1 kHz (1,000 Hz), which mixed with a 1.04 kHz E2 current. That generated a 40 Hz “delta f” difference frequency, a frequency that can stimulate neural activity in the brain. (The researchers found no harmful effects in any part of the mouse brain.)
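The 40 Hz figure is simply a beat frequency (see the footnote below). The short sketch that follows sums a 1.00 kHz and a 1.04 kHz sinusoid and shows that the envelope of the sum oscillates at the 40 Hz difference frequency; amplitudes are arbitrary.

```python
import numpy as np

# Numerical illustration of temporal interference: two currents that individually
# oscillate too fast to drive neurons combine into a signal whose slow envelope
# oscillates at the difference frequency.

fs = 100_000                               # sampling rate, Hz
t = np.arange(0, 0.2, 1 / fs)              # 200 ms of signal
e1 = np.sin(2 * np.pi * 1000 * t)          # E1: 1.00 kHz
e2 = np.sin(2 * np.pi * 1040 * t)          # E2: 1.04 kHz
combined = e1 + e2

# sin(a) + sin(b) = 2 * sin((a+b)/2) * cos((a-b)/2), so the envelope of the sum
# is |2 * cos(2*pi*20*t)|, which repeats 40 times per second.
envelope = np.abs(2 * np.cos(2 * np.pi * 0.5 * (1040 - 1000) * t))
print("difference frequency:", 1040 - 1000, "Hz")
```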

“Traditional deep-brain stimulation requires opening the skull and implanting an electrode, which can have complications,” explains Ed Boyden, an associate professor of biological engineering and brain and cognitive sciences at MIT, and the senior author of the study, which appears in the June 1, 2017 issue of the journal Cell. Also, “only a small number of people can do this kind of neurosurgery.”

Custom-designed, targeted deep-brain stimulation

If this new method is perfected and clinically tested, neurologists could control the size and location of the exact tissue that receives the electrical stimulation for each patient, by selecting the frequency of the currents and the number and location of the electrodes, according to the researchers.

Neurologists could also steer the location of deep-brain stimulation in real time, without moving the electrodes, by simply altering the currents. In this way, deep targets could be stimulated for conditions such as Parkinson’s, epilepsy, depression, and obsessive-compulsive disorder — without affecting surrounding brain structures.

The researchers are also exploring the possibility of using this method to experimentally treat other brain conditions, such as autism, and for basic science investigations.

Co-author Li-Huei Tsai, director of MIT’s Picower Institute for Learning and Memory, and researchers in her lab tested this technique in mice and found that they could stimulate small regions deep within the brain, including the hippocampus. But they were also able to shift the site of stimulation, allowing them to activate different parts of the motor cortex and prompt the mice to move their limbs, ears, or whiskers.

“We showed that we can very precisely target a brain region to elicit not just neuronal activation but behavioral responses,” says Tsai.

Last year, Tsai showed (open access) that using light to visually induce brain waves of a particular frequency could substantially reduce the beta amyloid plaques seen in Alzheimer’s disease, in the brains of mice. She now plans to explore whether this new type of electrical stimulation could offer a new way to generate the same type of beneficial brain waves.

This new method is also an alternative to other brain-stimulation methods.

Transcranial magnetic stimulation (TMS), which is FDA-approved for treating depression and to study the basic science of cognition, emotion, sensation, and movement, can stimulate deep brain structures but can result in surface regions being strongly stimulated, according to the researchers.

Transcranial ultrasound and expression of heat-sensitive receptors and injection of thermomagnetic nanoparticles have been proposed, “but the unknown mechanism of action … and the need to genetically manipulate the brain, respectively, may limit their immediate use in humans,” the researchers note in the paper.

The MIT researchers collaborated with investigators at Beth Israel Deaconess Medical Center (BIDMC), the IT’IS Foundation, Harvard Medical School, and ETH Zurich.

The research was funded in part by the Wellcome Trust, a National Institutes of Health Director’s Pioneer Award, an NIH Director’s Transformative Research Award, the New York Stem Cell Foundation Robertson Investigator Award, the MIT Center for Brains, Minds, and Machines, Jeremy and Joyce Wertheimer, Google, a National Science Foundation Career Award, the MIT Synthetic Intelligence Project, and Harvard Catalyst: The Harvard Clinical and Translational Science Center.

* Similar to a radio-frequency or audio “beat frequency.”


Abstract of Noninvasive Deep Brain Stimulation via Temporally Interfering Electric Fields

We report a noninvasive strategy for electrically stimulating neurons at depth. By delivering to the brain multiple electric fields at frequencies too high to recruit neural firing, but which differ by a frequency within the dynamic range of neural firing, we can electrically stimulate neurons throughout a region where interference between the multiple fields results in a prominent electric field envelope modulated at the difference frequency. We validated this temporal interference (TI) concept via modeling and physics experiments, and verified that neurons in the living mouse brain could follow the electric field envelope. We demonstrate the utility of TI stimulation by stimulating neurons in the hippocampus of living mice without recruiting neurons of the overlying cortex. Finally, we show that by altering the currents delivered to a set of immobile electrodes, we can steerably evoke different motor patterns in living mice.

How to design and build your own robot

Two robots — robot calligrapher and puppy — produced using an interactive display tool and selecting off-the-shelf components and 3D-printed parts (credit: Carnegie Mellon University)

Carnegie Mellon University (CMU) Robotics Institute researchers have developed a simplified interactive design tool that lets you design and make your own customized legged or wheeled robot, using a mix of 3D-printed parts and off-the-shelf components.

The current process of creating new robotic systems is challenging, time-consuming, and resource-intensive. So the CMU researchers have created a visual design tool with a simple drag-and-drop interface that lets you choose from a library of standard building blocks (such as actuators and mounting brackets that are either off-the-shelf/mass-produced or can be 3D-printed) that you can combine to create complex functioning robotic systems.

(a) The design interface consists of two workspaces. The left workspace allows for designing the robot. It displays a list of various modules at the top. The leftmost menu provides various functions that allow users to define preferences for the search process visualization and for physical simulation. The right workspace (showing the robot design on a plane) runs a physics simulation of the robot for testing. (b) When you select a new module from the modules list, the system automatically makes visual suggestions (shown in red) about possible connections for this module that are relevant to the current design. (credit: Carnegie Mellon University)

An iterative design process lets you experiment by changing the number and location of actuators and adjusting the physical dimensions of your robot. An auto-completion feature can automatically generate assemblies of components by searching through possible component arrangements. It even suggests components that are compatible with each other, points out where actuators should go, and automatically generates 3D-printable structural components to connect those actuators.
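To give a feel for what such an auto-completion search might do, the toy sketch below enumerates chains of parts whose connectors fit together. The module library, connector names, and brute-force search are invented for illustration and do not represent the CMU tool’s actual component database or algorithm.

```python
from itertools import product

# Invented mini component library: each module lists the connector type it
# accepts ("in") and the connector type it exposes ("out").
MODULES = {
    "servo":    {"out": "servo_horn"},
    "bracket":  {"in": "servo_horn", "out": "flat_mount"},
    "leg_link": {"in": "flat_mount", "out": "foot_mount"},
    "foot":     {"in": "foot_mount"},
}

def compatible(a, b):
    """Module b can follow module a if a's output connector matches b's input."""
    return MODULES[a].get("out") == MODULES[b].get("in")

def complete_chains(start, end, max_len=4):
    """Brute-force search for module chains from `start` to `end` whose
    adjacent parts are mutually compatible."""
    names = list(MODULES)
    chains = []
    for length in range(2, max_len + 1):
        for middle in product(names, repeat=length - 2):
            chain = (start, *middle, end)
            if all(compatible(a, b) for a, b in zip(chain, chain[1:])):
                chains.append(chain)
    return chains

print(complete_chains("servo", "foot"))  # [('servo', 'bracket', 'leg_link', 'foot')]
```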

Automated design process. (a) Start with a guiding mesh for the robot you want to make and select the orientations of its motors, using the drag and drop interface. (b) The system then searches for possible designs that connect a given pair of motors in user-defined locations, according to user-defined preferences. You can reject the solution and re-do the search with different preferences anytime. A proposed search solution connecting the root motor to the target motor (highlighted in dark red) is shown in light blue. Repeat this process for each pair of motors. (c) Since the legs are symmetric in this case, you would only need to use the search process for two legs. The interface lets you create the other pair of legs by simple editing operations. Finally, attach end-effectors of your choice and create a body plate to complete your awesome robot design. (d) shows the final design (with and without the guiding mesh). The dinosaur head mesh was manually added after this particular design, for aesthetic appeal. (credit: Carnegie Mellon University)

The research team, headed by Stelian Coros, CMU Robotics Institute assistant professor of robotics, designed a number of robots with the tool and verified its feasibility by fabricating two test robots (shown above) — a wheeled robot with a manipulator arm that can hold a pen for drawing, and a four-legged “puppy” robot that can walk forward or sideways. “Our work aims to make robotics more accessible to casual users,” says Coros.

Robotics Ph.D. student Ruta Desai presented a report on the design tool at the IEEE International Conference on Robotics and Automation (ICRA 2017) May 29–June 3 in Singapore. No date for the availability of this tool has been announced.

This work was supported by the National Science Foundation.


Ruta Desai | Computational Abstractions for Interactive Design of Robotic Devices (ICRA 2017)


Abstract of Computational Abstractions for Interactive Design of Robotic Devices

We present a computational design system that allows novices and experts alike to easily create custom robotic devices using modular electromechanical components. The core of our work consists of a design abstraction that models the way in which these components can be combined to form complex robotic systems. We use this abstraction to develop a visual design environment that enables an intuitive exploration of the space of robots that can be created using a given set of actuators, mounting brackets and 3d-printable components. Our computational system also provides support for design auto-completion operations, which further simplifies the task of creating robotic devices. Once robot designs are finished, they can be tested in physical simulations and iteratively improved until they meet the individual needs of their users. We demonstrate the versatility of our computational design system by creating an assortment of legged and wheeled robotic devices. To test the physical feasibility of our designs, we fabricate a wheeled device equipped with a 5-DOF arm and a quadrupedal robot.

3D-printed ‘bionic skin’ could give robots and prosthetics the sense of touch

Schematic of a new kind of 3D printer that can print touch sensors directly on a model hand. (credit: Shuang-Zhuang Guo and Michael McAlpine/Advanced Materials)

Engineering researchers at the University of Minnesota have developed a process for 3D-printing stretchable, flexible, and sensitive electronic sensory devices that could give robots or prosthetic hands — or even real skin — the ability to mechanically sense their environment.

One major use would be to give surgeons the ability to feel during minimally invasive surgeries instead of using cameras, or to increase the sensitivity of surgical robots. The process could also make it easier for robots to walk and interact with their environment.

Printing electronics directly on human skin could be used for pulse monitoring, energy harvesting (of movements), detection of finger motions (on a keyboard or other devices), or chemical sensing (for example, by soldiers in the field to detect dangerous chemicals or explosives). Or imagine a future computer mouse built into your fingertip, with haptic touch on any surface.

“While we haven’t printed on human skin yet, we were able to print on the curved surface of a model hand using our technique,” said Michael McAlpine, a University of Minnesota mechanical engineering associate professor and lead researcher on the study.* “We also interfaced a printed device with the skin and were surprised that the device was so sensitive that it could detect your pulse in real time.”

The researchers also visualize use in “bionic organs.”

A unique skin-compatible 3D-printing process

(left) Schematic of the tactile sensor. (center) Top view. (right) Optical image showing the conformally printed 3D tactile sensor on a fingertip. Scale bar = 4 mm. (credit: Shuang-Zhuang Guo et al./Advanced Materials)

McAlpine and his team made the sensing fabric with a one-of-a kind 3D printer they built in the lab. The multifunctional printer has four nozzles to print the various specialized “inks” that make up the layers of the device — a base layer of silicone**, top and bottom electrodes made of a silver-based piezoresistive conducting ink, a coil-shaped pressure sensor, and a supporting layer that holds the top layer in place while it sets (later washed away in the final manufacturing process).

Surprisingly, all of the layers of “inks” used in the flexible sensors can set at room temperature. Conventional 3D printing using liquid plastic is too hot and too rigid to use on the skin. The sensors can stretch up to three times their original size.

The researchers say the next step is to move toward semiconductor inks and printing on a real surface. “The manufacturing is built right into the process, so it is ready to go now,” McAlpine said.

The research was published online in the journal Advanced Materials. It was funded by the National Institute of Biomedical Imaging and Bioengineering of the National Institutes of Health.

* McAlpine integrated electronics and novel 3D-printed nanomaterials to create a “bionic ear” in 2013.

** The silicone rubber has a low modulus of elasticity of 150 kPa, similar to that of skin, and lower hardness (Shore A 10) than that of human skin, according to the Advanced Materials paper.


College of Science and Engineering, UMN | 3D Printed Stretchable Tactile Sensors


Abstract of 3D Printed Stretchable Tactile Sensors

The development of methods for the 3D printing of multifunctional devices could impact areas ranging from wearable electronics and energy harvesting devices to smart prosthetics and human–machine interfaces. Recently, the development of stretchable electronic devices has accelerated, concomitant with advances in functional materials and fabrication processes. In particular, novel strategies have been developed to enable the intimate biointegration of wearable electronic devices with human skin in ways that bypass the mechanical and thermal restrictions of traditional microfabrication technologies. Here, a multimaterial, multiscale, and multifunctional 3D printing approach is employed to fabricate 3D tactile sensors under ambient conditions conformally onto freeform surfaces. The customized sensor is demonstrated with the capabilities of detecting and differentiating human movements, including pulse monitoring and finger motions. The custom 3D printing of functional materials and devices opens new routes for the biointegration of various sensors in wearable electronics systems, and toward advanced bionic skin applications.

Precision typing on a smartwatch with finger gestures

The “Watchsense” prototype uses a small depth camera attached to the arm, mimicking a depth camera on a smartwatch. It could make it easy to type, or in a music program, volume could be increased by simply raising a finger. (credit: Srinath Sridhar et al.)

If you wear a smartwatch, you know how limiting it is to type on it or otherwise operate it. Now European researchers have developed an input method that uses a depth camera (similar to the Kinect game controller) to track fingertip touch and location on the back of the hand or in mid-air, allowing for precision control.

The researchers have created a prototype called “WatchSense,” worn on the user’s arm. It captures the movements of the thumb and index finger on the back of the hand or in the space above it. It would also work with smartphones, smart TVs, and virtual-reality or augmented reality devices, explains Srinath Sridhar, a researcher in the Graphics, Vision and Video group at the Max Planck Institute for Informatics.

KurzweilAI has covered a variety of attempts to use depth cameras for controlling devices, but developers have been plagued by the lack of precise control possible with current camera devices and software.

The new software, based on machine learning, recognizes the exact positions of the thumb and index finger in the 3D image from the depth sensor, says Sridhar, identifying specific fingers and dealing with the unevenness of the back of the hand and the fact that fingers can occlude each other when they are moved.
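To give a rough sense of the input and output involved (and only that; this is not WatchSense’s learned model), the toy sketch below picks the points nearest the camera in a depth map as fingertip candidates. The real software uses machine learning to tell the thumb from the index finger and to cope with fingers occluding each other.

```python
import numpy as np

# Toy fingertip-candidate detector on a per-pixel depth map. Thresholds and the
# nearest-point heuristic are invented for illustration.

def fingertip_candidates(depth, near=0.05, far=0.30, n_tips=2):
    """depth: 2D array of distances from the sensor in meters.
    Returns (row, col, depth) for the n_tips in-range points closest to the camera."""
    mask = (depth > near) & (depth < far)      # keep points within a plausible hand range
    if not mask.any():
        return []
    flat = np.where(mask.ravel(), depth.ravel(), np.inf)
    order = np.argsort(flat)[:n_tips]          # closest points = fingertip guesses
    rows, cols = np.unravel_index(order, depth.shape)
    return [(int(r), int(c), float(depth[r, c])) for r, c in zip(rows, cols)]

# Synthetic 8x8 depth frame with two "fingertips" closer to the sensor
frame = np.full((8, 8), 0.25)
frame[2, 3] = 0.10    # thumb-like point
frame[5, 6] = 0.12    # index-like point
print(fingertip_candidates(frame))            # [(2, 3, 0.1), (5, 6, 0.12)]
```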

A smartwatch (or other device) could have an embedded depth sensor on its side, aimed at the back of the hand and the space above it, allowing for easy typing and control. (credit: Srinath Sridhar et al.)

“The currently available depth sensors do not fit inside a smartwatch, but from the trend it’s clear that in the near future, smaller depth sensors will be integrated into smartwatches,” Sridhar says.

The researchers, who include Christian Theobalt, head of the Graphics, Vision and Video group at MPI, Anders Markussen and Sebastian Boring at the University of Copenhagen, and Antti Oulasvirta at Aalto University in Finland, will present WatchSense at the ACM CHI Conference on Human Factors in Computing Systems in Denver (May 6–11, 2017). Their open-access paper is also available.


Srinath Sridhar et al. | WatchSense: On- and Above-Skin Input Sensing through a Wearable Depth Sensor


Abstract of WatchSense: On- and Above-Skin Input Sensing through a Wearable Depth Sensor

This paper contributes a novel sensing approach to support on- and above-skin finger input for interaction on the move. WatchSense uses a depth sensor embedded in a wearable device to expand the input space to neighboring areas of skin and the space above it. Our approach addresses challenging camera-based tracking conditions, such as oblique viewing angles and occlusions. It can accurately detect fingertips, their locations, and whether they are touching the skin or hovering above it. It extends previous work that supported either mid-air or multitouch input by simultaneously supporting both. We demonstrate feasibility with a compact, wearable prototype attached to a user’s forearm (simulating an integrated depth sensor). Our prototype—which runs in real-time on consumer mobile devices—enables a 3D input space on the back of the hand. We evaluated the accuracy and robustness of the approach in a user study. We also show how WatchSense increases the expressiveness of input by interweaving mid-air and multitouch for several interactive applications.