How to turn a crystal into an erasable electrical circuit

Washington State University researchers used light to write a highly conducting electrical path in a crystal that can be erased and reconfigured. (Left) A photograph of a sample with four metal contacts. (Right) An illustration of a laser drawing a conductive path between two contacts. (credit: Washington State University)

Washington State University (WSU) physicists have found a way to write an electrical circuit into a crystal, opening up the possibility of transparent, three-dimensional electronics that, like an Etch A Sketch, can be erased and reconfigured.

Ordinarily, a crystal does not conduct electricity. But when the researchers annealed (heat-treated) a strontium titanate crystal under specific conditions, the crystal was altered so that light made it conductive. A laser could then serve as an "optical pen" to write a conducting path, and the circuit could later be erased by heating the crystal.

Schematic diagram of experiment in writing an electrical circuit into a crystal (credit: Washington State University)

The physicists were able to increase the crystal’s conductivity 1,000-fold. The phenomenon occurred at room temperature.

“It opens up a new type of electronics where you can define a circuit optically and then erase it and define a new one,” said Matt McCluskey, a WSU professor of physics and materials science.

The work was published July 27, 2017 in the open-access online journal Scientific Reports. The research was funded by the National Science Foundation.


Abstract of Using persistent photoconductivity to write a low-resistance path in SrTiO3

Materials with persistent photoconductivity (PPC) experience an increase in conductivity upon exposure to light that persists after the light is turned off. Although researchers have shown that this phenomenon could be exploited for novel memory storage devices, low temperatures (below 180 K) were required. In the present work, two-point resistance measurements were performed on annealed strontium titanate (SrTiO3, or STO) single crystals at room temperature. After illumination with sub-gap light, the resistance decreased by three orders of magnitude. This markedly enhanced conductivity persisted for several days in the dark. Results from IR spectroscopy, electrical measurements, and exposure to a 405 nm laser suggest that contact resistance plays an important role. The laser was then used as an “optical pen” to write a low-resistance path between two contacts, demonstrating the feasibility of optically defined, transparent electronics.

How to run faster, smarter AI apps on smartphones

(credit: iStock)

When you use smartphone AI apps like Siri, you’re dependent on the cloud for a lot of the processing — limited by your connection speed. But what if your smartphone could do more of the processing directly on your device — allowing for smarter, faster apps?

MIT scientists have taken a step in that direction with a new way to enable artificial-intelligence systems called convolutional neural networks (CNNs) to run locally on mobile devices. (CNNs are used in areas such as autonomous driving, speech recognition, computer vision, and automatic translation.) Neural networks take up a lot of memory and consume a lot of power, so they usually run on servers in the cloud, which receive data from desktop or mobile devices and then send back their analyses.

The new MIT analytic method can determine how much power a neural network will actually consume when run on a particular type of hardware. The researchers used the method to evaluate new techniques for paring down neural networks so that they’ll run more efficiently on handheld devices.

The new CNN designs are also tuned to run on an energy-efficient computer chip optimized for neural networks that the researchers developed in 2016.

Reducing energy consumption

The new MIT software method uses "energy-aware pruning": it reduces a neural network's power consumption by cutting out the parts of the network that contribute little to the final output but consume the most energy.

Associate professor of electrical engineering and computer science Vivienne Sze and colleagues describe the work in an open-access paper presented the week of July 24, 2017 at the Computer Vision and Pattern Recognition Conference. They report that the methods offered up to a 73 percent reduction in power consumption over the standard implementation of neural networks — 43 percent better than the best previous method.
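The idea behind energy-aware pruning can be sketched in a few lines. The energy model, its constants, and the layer sizes below are illustrative stand-ins (a real energy model is hardware-specific, and MIT's actual method is more sophisticated), but the core loop is the same: estimate which layers cost the most energy, and prune those first.

```python
import numpy as np

def estimate_layer_energy(weights, acts_per_weight, e_mac=1.0, e_mem=2.0):
    """Toy energy model: multiply-accumulate cost plus weight-fetch cost,
    both proportional to the number of nonzero weights (illustrative
    constants; a real model is hardware-specific)."""
    n = np.count_nonzero(weights)
    return n * acts_per_weight * e_mac + n * e_mem

def energy_aware_prune(layers, acts, keep_fraction=0.5):
    """Zero out the smallest-magnitude weights, visiting the most
    energy-hungry layers first."""
    order = sorted(range(len(layers)),
                   key=lambda i: estimate_layer_energy(layers[i], acts[i]),
                   reverse=True)
    for i in order:
        w = layers[i]
        k = int(w.size * keep_fraction)          # number of weights to keep
        thresh = np.sort(np.abs(w.ravel()))[-k]  # k-th largest magnitude
        w[np.abs(w) < thresh] = 0.0              # prune everything smaller
    return layers

rng = np.random.default_rng(0)
layers = [rng.normal(size=(64, 64)), rng.normal(size=(64, 10))]
acts = [32 * 32, 1]   # assumed activation counts per weight for each layer
before = sum(estimate_layer_energy(w, a) for w, a in zip(layers, acts))
energy_aware_prune(layers, acts)
after = sum(estimate_layer_energy(w, a) for w, a in zip(layers, acts))
print(f"estimated energy reduced to {after / before:.0%} of baseline")
```

In a real system the pruned network would then be fine-tuned to recover any lost accuracy, which is how the paper reports large energy savings with under 1% top-5 accuracy loss.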

Meanwhile, another MIT group at the Computer Science and Artificial Intelligence Laboratory has designed a hardware approach to reduce energy consumption and increase computer-chip processing speed for specific apps, using “cache hierarchies.” (“Caches” are small, local memory banks that store data that’s frequently used by computer chips to cut down on time- and energy-consuming communication with off-chip memory.)**

The researchers tested their system on a simulation of a chip with 36 cores, or processing units. They found that compared to its best-performing predecessors, the system increased processing speed by 20 to 30 percent while reducing energy consumption by 30 to 85 percent. They presented the new system, dubbed Jenga, in an open-access paper at the International Symposium on Computer Architecture earlier in July 2017.

Better batteries — or maybe, no battery?

Another solution to better mobile AI is improving rechargeable batteries in cell phones (and other mobile devices), which have limited charge capacity and short lifecycles, and perform poorly in cold weather.

Recently, DARPA-funded researchers from the University of Houston (with the University of California-San Diego and Northwestern University) have discovered that quinones — an inexpensive, earth-abundant, easily recyclable, and nonflammable class of materials — can address current battery limitations.

“One of these batteries, as a car battery, could last 10 years,” said Yan Yao, associate professor of electrical and computer engineering. In addition to slowing the deterioration of batteries for vehicles and stationary electricity storage, the material would also make battery disposal easier because it does not contain heavy metals. The research is described in Nature Materials.***

The first battery-free cellphone that can send and receive calls using only a few microwatts of power. (credit: Mark Stone/University of Washington)

But what if we eliminated batteries altogether? University of Washington researchers have invented a cellphone that requires no batteries. Instead, it harvests 3.5 microwatts of power from ambient radio signals, light, or even the vibrations of a speaker.

The new technology is detailed in a paper published July 1, 2017 in the Proceedings of the Association for Computing Machinery on Interactive, Mobile, Wearable and Ubiquitous Technologies.

The UW researchers demonstrated how to harvest this energy from ambient radio signals transmitted by a WiFi base station up to 31 feet away. “You could imagine in the future that all cell towers or Wi-Fi routers could come with our base station technology embedded in it,” said co-author Vamsi Talla, a former UW electrical engineering doctoral student and Allen School research associate. “And if every house has a Wi-Fi router in it, you could get battery-free cellphone coverage everywhere.”

A cellphone CPU (central processing unit) typically requires several watts or more (depending on the app), so we’re not quite there yet. But that power requirement could one day be sufficiently reduced by future special-purpose chips and MIT’s optimized algorithms.
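A back-of-envelope calculation makes the gap concrete. The harvested figure is from the UW paper; the CPU figure is an assumed ballpark, not from any of the papers above:

```python
# Rough power-budget comparison between harvested power and CPU demand.
harvested_w = 3.5e-6   # 3.5 microwatts harvested from ambient RF/light (UW paper)
cpu_demand_w = 2.0     # assumption: ~2 W for a phone CPU running a heavy app
gap = cpu_demand_w / harvested_w
print(f"CPU demand is roughly {gap:,.0f}x the harvested power budget")
```

That five-to-six-orders-of-magnitude gap is why the battery-free phone uses analog techniques and offloads heavy processing, and why efficiency gains from pruning and specialized chips matter so much.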

It might even let you do amazing things. :)

* Loosely based on the anatomy of the brain, neural networks consist of thousands or even millions of simple but densely interconnected information-processing nodes, usually organized into layers. The connections between nodes have “weights” associated with them, which determine how much a given node’s output will contribute to the next node’s computation. During training, in which the network is presented with examples of the computation it’s learning to perform, those weights are continually readjusted, until the output of the network’s last layer consistently corresponds with the result of the computation. With the proposed pruning method, the energy consumption of AlexNet and GoogLeNet is reduced by 3.7x and 1.6x, respectively, with less than 1% top-5 accuracy loss.

** The software reallocates cache access on the fly to reduce latency (delay), based on the physical locations of the separate memory banks that make up the shared memory cache. If multiple cores are retrieving data from the same DRAM [memory] cache, this can cause bottlenecks that introduce new latencies. So after Jenga has come up with a set of cache assignments, cores don’t simply dump all their data into the nearest available memory bank; instead, Jenga parcels out the data a little at a time, then estimates the effect on bandwidth consumption and latency. 

*** The stumbling block, Yao said, has been the anode, the portion of the battery through which energy flows. Existing anode materials are intrinsically structurally and chemically unstable, meaning the battery is only efficient for a relatively short time. The differing formulations offer evidence that the material is an effective anode for both acid batteries and alkaline batteries, such as those used in a car, as well as emerging aqueous metal-ion batteries.

Neural stem cells steered by electric fields can repair brain damage

Electrical stimulation of the rat brain to move neural stem cells (credit: Jun-Feng Feng et al./ Stem Cell Reports)

Electric fields can be used to guide transplanted human neural stem cells — cells that can develop into various brain tissues — to repair brain damage in specific areas of the brain, scientists at the University of California, Davis have discovered.

It’s well known that electric fields can locally guide wound healing. Damaged tissues generate weak electric fields, and research by UC Davis Professor Min Zhao at the School of Medicine’s Institute for Regenerative Cures has previously shown how these electric fields can attract cells into wounds to heal them.

But the problem is that neural stem cells are naturally only found deep in the brain — in the hippocampus and the subventricular zone. To repair damage to the outer layers of the brain (the cortex), they would have to migrate a significant distance in the much larger human brain.

Migrating neural stem cells with electric fields. (Left) Transplanted human neural stem cells would normally be carried along by the rostral migration stream (RMS) (red) toward the olfactory bulb (OB) (dark green, migration direction indicated by white arrow). (Right) But electrically guiding migration of the transplanted human neural stem cells reverses the flow toward the subventricular zone (bright green, migration direction indicated by red arrow). (credit: Jun-Feng Feng et al., adapted by KurzweilAI/Stem Cell Reports)

Could electric fields be used to help the stem cells migrate that distance? To find out, the researchers placed human neural stem cells in the rostral migration stream (a pathway in the rat brain that carries cells toward the olfactory bulb, which governs the animal’s sense of smell). Cells move easily along this pathway because they are carried by the flow of cerebrospinal fluid, guided by chemical signals.

But by applying an electric field within the rat’s brain, the researchers found they could get the transplanted stem cells to reverse direction and swim “upstream” against the fluid flow. Once there, the transplanted stem cells stayed in their new locations for weeks or months after treatment, with indications of differentiation (forming into different types of neural cells).

“Electrical mobilization and guidance of stem cells in the brain provides a potential approach to facilitate stem cell therapies for brain diseases, stroke and injuries,” Zhao concluded.

But it will take future investigation to see if electrical stimulation can mobilize and guide migration of neural stem cells in diseased or injured human brains, the researchers note.

The research was published July 11 in the journal Stem Cell Reports.

Additional authors on the paper are at Ren Ji Hospital, Shanghai Jiao Tong University, and Shanghai Institute of Head Trauma in China and at Aaken Laboratories, Davis. The work was supported by the California Institute for Regenerative Medicine with additional support from NIH, NSF, and Research to Prevent Blindness Inc.


Abstract of Electrical Guidance of Human Stem Cells in the Rat Brain

Limited migration of neural stem cells in adult brain is a roadblock for the use of stem cell therapies to treat brain diseases and injuries. Here, we report a strategy that mobilizes and guides migration of stem cells in the brain in vivo. We developed a safe stimulation paradigm to deliver directional currents in the brain. Tracking cells expressing GFP demonstrated electrical mobilization and guidance of migration of human neural stem cells, even against co-existing intrinsic cues in the rostral migration stream. Transplanted cells were observed at 3 weeks and 4 months after stimulation in areas guided by the stimulation currents, and with indications of differentiation. Electrical stimulation thus may provide a potential approach to facilitate brain stem cell therapies.

How to turn audio clips into realistic lip-synced video


UW (University of Washington) | UW researchers create realistic video from audio files alone

University of Washington researchers at the UW Graphics and Image Laboratory have developed new algorithms that turn audio clips into a realistic, lip-synced video of a person, starting from an existing video of that person speaking on a different topic.

As detailed in a paper to be presented Aug. 2 at SIGGRAPH 2017, the team successfully generated a highly realistic video of former president Barack Obama talking about terrorism, fatherhood, job creation and other topics, using audio clips of those speeches and existing weekly video addresses in which he originally spoke on different topics.

Realistic audio-to-video conversion has practical applications like improving video conferencing for meetings (streaming audio over the internet takes up far less bandwidth than video, reducing video glitches), or holding a conversation with a historical figure in virtual reality, said Ira Kemelmacher-Shlizerman, an assistant professor at the UW’s Paul G. Allen School of Computer Science & Engineering.


Supasorn Suwajanakorn | Teaser — Synthesizing Obama: Learning Lip Sync from Audio

This beats previous audio-to-video conversion processes, which involved filming multiple people in a studio saying the same sentences over and over to capture how particular sounds correlate with different mouth shapes — a process that is expensive, tedious, and time-consuming. The new machine learning tool may also help overcome the “uncanny valley” problem, which has dogged efforts to create realistic video from audio.

How to do it

A neural network first converts the sounds from an audio file into basic mouth shapes. Then the system grafts and blends those mouth shapes onto an existing target video and adjusts the timing to create a realistic, lip-synced video of the person delivering the new speech. (credit: University of Washington)

1. Find or record a video of the person (or use video chat tools like Skype to create a new video) for the neural network to learn from. There are millions of hours of video that already exist from interviews, video chats, movies, television programs and other sources, the researchers note. (Obama was chosen because there were hours of presidential videos in the public domain.)

2. Train the neural network to watch videos of the person and translate different audio sounds into basic mouth shapes.

3. The system then uses the audio of an individual’s speech to generate realistic mouth shapes, which are then grafted onto and blended with the head of that person. Use a small time shift to enable the neural network to anticipate what the person is going to say next.

4. Currently, the neural network is designed to learn on one individual at a time, meaning that Obama’s voice — speaking words he actually uttered — is the only information used to “drive” the synthesized video. Future steps, however, include helping the algorithms generalize across situations to recognize a person’s voice and speech patterns with less data, with only an hour of video to learn from, for instance, instead of 14 hours.
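The audio-to-mouth-shape mapping in steps 2–3 can be sketched as a toy regression on synthetic data. The feature dimensions, the linear "network," and the window sizes below are illustrative stand-ins for the paper's recurrent network trained on real audio features and lip landmarks:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins for per-frame audio features and mouth-shape
# coefficients (the paper uses real audio features and PCA coefficients
# of lip landmarks; the dimensions here are arbitrary).
T, n_audio, n_mouth = 500, 13, 5
audio = rng.normal(size=(T, n_audio))
true_map = rng.normal(size=(n_audio, n_mouth))
mouth = audio @ true_map + 0.01 * rng.normal(size=(T, n_mouth))

def make_windows(x, past=3, future=3):
    """Stack each frame with context frames. The `future` frames play the
    role of the small time shift that lets the model anticipate what the
    person is about to say."""
    pad = np.pad(x, ((past, future), (0, 0)), mode="edge")
    return np.stack([pad[t:t + past + future + 1].ravel()
                     for t in range(len(x))])

X = make_windows(audio)                        # (T, (past+future+1)*n_audio)
W, *_ = np.linalg.lstsq(X, mouth, rcond=None)  # "train" a linear stand-in
pred = X @ W                                   # inference: audio -> mouth shapes
err = np.abs(pred - mouth).mean()
print(f"mean mouth-shape reconstruction error: {err:.3f}")
```

The predicted mouth shapes would then be rendered as mouth texture and composited onto the target video, which is the graphics half of the UW pipeline.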

Fakes of fakes

So the obvious question is: Can you use someone else’s voice on a video (assuming enough videos)? The researchers said they decided against going down that path, but they didn’t say it was impossible.

Even more pernicious: the words (not just the voice) of the person in the original video could be faked using Princeton/Adobe’s “VoCo” software (when available) — simply by editing a text transcript of their voice recording — or the fake voice itself could be modified.

Or Disney Research’s FaceDirector could be used to edit recorded substitute facial expressions (along with the fake voice) into the video.

However, by reversing the process — feeding video into the neural network instead of just audio — one could also potentially develop algorithms that could detect whether a video is real or manufactured, the researchers note.

The research was funded by Samsung, Google, Facebook, Intel, and the UW Animation Research Labs. You can contact the research team at audiolipsync@cs.washington.edu.


Abstract of Synthesizing Obama: Learning Lip Sync from Audio

Given audio of President Barack Obama, we synthesize a high quality video of him speaking with accurate lip sync, composited into a target video clip. Trained on many hours of his weekly address footage, a recurrent neural network learns the mapping from raw audio features to mouth shapes. Given the mouth shape at each time instant, we synthesize high quality mouth texture, and composite it with proper 3D pose matching to change what he appears to be saying in a target video to match the input audio track. Our approach produces photorealistic results.

How to ‘talk’ to your computer or car with hand or body poses

Researchers at Carnegie Mellon University’s Robotics Institute have developed a system that can detect and understand body poses and movements of multiple people from a video in real time — including, for the first time, the pose of each individual’s fingers.

The ability to recognize finger or hand poses, for instance, will make it possible for people to interact with computers in new and more natural ways, such as simply pointing at things.

That will also allow robots to perceive what you’re doing, what mood you’re in, and whether you can be interrupted, for example. Your self-driving car could get an early warning that a pedestrian is about to step into the street by monitoring body language. The technology could also be used for behavioral diagnosis and rehabilitation for conditions such as autism, dyslexia, and depression, the researchers say.

This new method was developed at CMU’s NSF-funded Panoptic Studio, a two-story dome embedded with 500 video cameras, but the researchers can now do the same thing with a single camera and laptop computer.

The researchers have released their computer code. It’s already being widely used by research groups, and more than 20 commercial groups, including automotive companies, have expressed interest in licensing the technology, according to Yaser Sheikh, associate professor of robotics.

Tracking multiple people in real time, particularly in social situations where they may be in contact with each other, presents a number of challenges. Sheikh and his colleagues took a bottom-up approach, which first localizes all the body parts in a scene — arms, legs, faces, etc. — and then associates those parts with particular individuals.
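The bottom-up strategy (detect all parts first, then associate them into individuals) can be illustrated with a toy example. The proximity-based affinity below is a simplistic stand-in for the learned association scores a real system such as CMU's uses:

```python
import numpy as np

# Toy bottom-up association: all "neck" and "wrist" detections in a scene
# are found first, person-agnostic, then greedily paired by an affinity
# score (here just proximity; real systems use learned part affinities).
necks  = np.array([[10.0, 10.0], [50.0, 12.0]])
wrists = np.array([[52.0, 30.0], [12.0, 28.0]])

def greedy_associate(parts_a, parts_b):
    """Pair each part in `parts_a` with its best free match in `parts_b`,
    taking the highest-affinity pairs first."""
    # affinity = negative distance: closer parts more likely belong together
    aff = -np.linalg.norm(parts_a[:, None, :] - parts_b[None, :, :], axis=-1)
    pairs, used_a, used_b = [], set(), set()
    for i, j in sorted(np.ndindex(aff.shape), key=lambda ij: -aff[ij]):
        if i not in used_a and j not in used_b:
            pairs.append((i, j))
            used_a.add(i)
            used_b.add(j)
    return sorted(pairs)

print(greedy_associate(necks, wrists))  # each neck pairs with its nearest wrist
```

Associating parts after detecting them all is what makes the approach robust when people overlap or touch: the detector never has to decide up front how many people are in the scene.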

Sheikh and his colleagues will present reports on their multiperson and hand-pose detection methods at CVPR 2017, the Computer Vision and Pattern Recognition Conference, July 21–26 in Honolulu.

Radical new vertically integrated 3D chip design combines computing and data storage

Four vertical layers in new 3D nanosystem chip. Top (fourth layer): sensors and more than one million carbon-nanotube field-effect transistor (CNFET) logic inverters; third layer, on-chip non-volatile RRAM (1 Mbit memory); second layer, CNFET logic with classification accelerator (to identify sensor inputs); first (bottom) layer, silicon FET logic. (credit: Max M. Shulaker et al./Nature)

A radical new 3D chip that combines computation and data storage in vertically stacked layers — allowing for processing and storing massive amounts of data at high speed in future transformative nanosystems — has been designed by researchers at Stanford University and MIT.

The new 3D-chip design* replaces silicon with carbon nanotubes (sheets of 2-D graphene formed into nanocylinders) and integrates resistive random-access memory (RRAM) cells.

Carbon-nanotube field-effect transistors (CNFETs) are an emerging transistor technology that can scale beyond the limits of silicon MOSFETs (conventional chips), and promise an order-of-magnitude improvement in energy-efficient computation. However, experimental demonstrations of CNFETs so far have been small-scale and limited to integrating only tens or hundreds of devices (see earlier 2015 Stanford research, “Skyscraper-style carbon-nanotube chip design…”).

The researchers integrated more than 1 million RRAM cells and 2 million carbon-nanotube field-effect transistors in the chip, making it the most complex nanoelectronic system ever made with emerging nanotechnologies, according to the researchers. RRAM is an emerging memory technology that promises high-capacity, non-volatile data storage, with improved speed, energy efficiency, and density, compared to dynamic random-access memory (DRAM).

Instead of requiring separate components, the RRAM cells and carbon nanotubes are built vertically over one another, creating a dense new 3D computer architecture** with interleaved layers of logic and memory. Ultradense through-chip vias (electrical interconnecting wires passing between layers) eliminate the long delays of conventional wiring between separate computer components.

The new 3D nanosystem can capture massive amounts of data every second, store it directly on-chip, perform in situ processing of the captured data, and produce “highly processed” information. “Such complex nanoelectronic systems will be essential for future high-performance, highly energy-efficient electronic systems,” the researchers say.

How to combine computation and storage

Illustration of separate CPU (bottom) and RAM memory (top) in current computer architecture (images credit: iStock)

The new chip design aims to replace current chip designs, which separate computing and data storage, resulting in limited-speed connections.

Separate 2D chips have been required because “building conventional silicon transistors involves extremely high temperatures of over 1,000 degrees Celsius,” explains Max Shulaker, an assistant professor of electrical engineering and computer science at MIT and lead author of a paper published July 5, 2017 in the journal Nature. “If you then build a second layer of silicon circuits on top, that high temperature will damage the bottom layer of circuits.”

Instead, carbon nanotube circuits and RRAM memory can be fabricated at much lower temperatures: below 200 degrees Celsius. “This means they can be built up in layers without harming the circuits beneath,” says Shulaker.

Overcoming communication and computing bottlenecks

As applications analyze increasingly massive volumes of data, the limited rate at which data can be moved between different chips is creating a critical communication “bottleneck.” And with limited real estate on increasingly miniaturized chips, there is not enough room to place chips side-by-side.

At the same time, embedded intelligence in areas ranging from autonomous driving to personalized medicine is now generating huge amounts of data, but silicon transistors are no longer improving at the historic rate that they have for decades.

Instead, three-dimensional integration is the most promising approach to continue the technology-scaling path set forth by Moore’s law, allowing an increasing number of devices to be integrated per unit volume, according to Jan Rabaey, a professor of electrical engineering and computer science at the University of California at Berkeley, who was not involved in the research.

Three-dimensional integration “leads to a fundamentally different perspective on computing architectures, enabling an intimate interweaving of memory and logic,” he says. “These structures may be particularly suited for alternative learning-based computational paradigms such as brain-inspired systems and deep neural nets, and the approach presented by the authors is definitely a great first step in that direction.”

The new 3D design provides several benefits for future computing systems, including:

  • Logic circuits made from carbon nanotubes can be an order of magnitude more energy-efficient compared to today’s logic made from silicon.
  • RRAM memory is denser, faster, and more energy-efficient compared to conventional DRAM (dynamic random-access memory) devices.
  • The dense through-chip vias (wires) can enable vertical connectivity that is 1,000 times more dense than conventional packaging and chip-stacking solutions allow, which greatly improves the data communication bandwidth between vertically stacked functional layers. For example, each sensor in the top layer can connect directly to its respective underlying memory cell with an inter-layer via. This enables the sensors to write their data in parallel directly into memory and at high speed.
  • The design is compatible in both fabrication and design with today’s CMOS silicon infrastructure.

Shulaker next plans to work with Massachusetts-based semiconductor company Analog Devices to develop new versions of the system.

This work was funded by the Defense Advanced Research Projects Agency, the National Science Foundation, Semiconductor Research Corporation, STARnet SONIC, and member companies of the Stanford SystemX Alliance.

* As a working-prototype demonstration of the potential of the technology, the researchers took advantage of the ability of carbon nanotubes to also act as sensors. On the top layer of the chip, they placed more than 1 million carbon nanotube-based sensors, which they used to detect and classify ambient gases for detecting signs of disease by sensing particular compounds in a patient’s breath, says Shulaker. By layering sensing, data storage, and computing, the chip was able to measure each of the sensors in parallel, and then write directly into its memory, generating huge bandwidth in just one device, according to Shulaker. The top layer could be replaced with additional computation or data storage subsystems, or with other forms of input/output, he explains.

** Previous R&D in 3D chip technologies and their limitations are covered here, noting that “in general, 3D integration is a broad term that includes such technologies as 3D wafer-level packaging (3DWLP); 2.5D and 3D interposer-based integration; 3D stacked ICs (3D-SICs), monolithic 3D ICs; 3D heterogeneous integration; and 3D systems integration.” The new Stanford-MIT nanosystem design significantly expands this definition.


Abstract of Three-dimensional integration of nanotechnologies for computing and data storage on a single chip

The computing demands of future data-intensive applications will greatly exceed the capabilities of current electronics, and are unlikely to be met by isolated improvements in transistors, data storage technologies or integrated circuit architectures alone. Instead, transformative nanosystems, which use new nanotechnologies to simultaneously realize improved devices and new integrated circuit architectures, are required. Here we present a prototype of such a transformative nanosystem. It consists of more than one million resistive random-access memory cells and more than two million carbon-nanotube field-effect transistors—promising new nanotechnologies for use in energy-efficient digital logic circuits and for dense data storage—fabricated on vertically stacked layers in a single chip. Unlike conventional integrated circuit architectures, the layered fabrication realizes a three-dimensional integrated circuit architecture with fine-grained and dense vertical connectivity between layers of computing, data storage, and input and output (in this instance, sensing). As a result, our nanosystem can capture massive amounts of data every second, store it directly on-chip, perform in situ processing of the captured data, and produce ‘highly processed’ information. As a working prototype, our nanosystem senses and classifies ambient gases. Furthermore, because the layers are fabricated on top of silicon logic circuitry, our nanosystem is compatible with existing infrastructure for silicon-based technologies. Such complex nano-electronic systems will be essential for future high-performance and highly energy-efficient electronic systems.

Smart algorithm automatically adjusts exoskeletons for best walking performance

Walk this way: Metabolic feedback and optimization algorithm automatically tweaks exoskeleton for optimal performance. (credit: Kirby Witte, Katie Poggensee, Pieter Fiers, Patrick Franks & Steve Collins)

Researchers at the College of Engineering at Carnegie Mellon University (CMU) have developed a new automated feedback system for personalizing exoskeletons to achieve optimal performance.

Exoskeletons can be used to augment human abilities. For example, they can provide more endurance while walking, help lift a heavy load, improve athletic performance, and help a stroke patient walk again.

But current one-size-fits-all exoskeleton devices, despite their potential, “have not improved walking performance as much as we think they should,” said Steven Collins, a professor of Mechanical Engineering and senior author of a paper published Friday, June 23, 2017 in Science.

The problem: An exoskeleton needs to be adjusted (and re-adjusted) to work effectively for each user — currently, a time-consuming, iffy manual process.

So the CMU engineers developed a more effective “human-in-the-loop optimization” technique that measures the amount of energy the walker expends by monitoring their breathing* — automatically adjusting the exoskeleton’s ankle dynamics to minimize required human energy expenditure.**

Using real-time metabolic cost estimation for each individual, the CMU software algorithm, combined with versatile emulator hardware, optimized the exoskeleton torque pattern for one ankle while walking, running, and carrying a load on a treadmill. The algorithm automatically made optimized adjustments for each pattern, based on measurements of a person’s energy use for 32 different walking patterns over the course of an hour. (credit: Juanjuan Zhang et al./Science, adapted by KurzweilAI)

In a lab study with 11 healthy volunteers, the new technique reduced walking effort by an average of 24% compared to participants walking with the exoskeleton powered off. That is a greater user benefit than any previous exoskeleton study has achieved, including studies of devices acting at all joints on both legs, according to the researchers.
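The human-in-the-loop optimization can be sketched as a small evolution-strategy loop over the four torque parameters. The cost function, its optimum, and the search settings below are made up for illustration; in the study the "cost" was the subject's measured metabolic rate, and the optimizer was a more sophisticated evolution strategy:

```python
import numpy as np

rng = np.random.default_rng(2)
OPTIMUM = np.array([0.5, 0.53, 0.3, 0.1])  # hypothetical best (peak torque,
                                           # peak timing, rise time, fall time)

def metabolic_cost(params, noise=0.01):
    """Stand-in for a noisy metabolic measurement: in the study this came
    from the subject's respirometry, not from a formula."""
    return np.sum((params - OPTIMUM) ** 2) + noise * rng.normal()

def optimize(generations=40, pop=16, sigma=0.2):
    """Minimal evolution-strategy loop: sample torque patterns around the
    current estimate, measure each one's (noisy) cost on the walker, and
    move toward the best few. A crude stand-in for the study's optimizer."""
    mean = np.array([0.8, 0.3, 0.6, 0.4])  # arbitrary starting pattern
    for _ in range(generations):
        samples = mean + sigma * rng.normal(size=(pop, 4))
        costs = [metabolic_cost(s) for s in samples]
        elite = samples[np.argsort(costs)[:pop // 4]]
        mean = elite.mean(axis=0)
        sigma *= 0.9                       # narrow the search over time
    return mean

best = optimize()
print("optimized torque parameters:", np.round(best, 2))
```

The key design point is that the optimizer only needs noisy scalar cost measurements per candidate pattern, which is exactly what breath-by-breath metabolic estimates provide.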

* “In daily life, a proxy measure such as heart rate or muscle activity could be used for optimization, providing noisier but more abundant performance data.” — Juanjuan Zhang et al./Science

** Ankle torque in the lab study was determined by four parameters: peak torque, timing of peak torque, and rise and fall times. This method was chosen to allow comparisons to a prior study that used the same hardware.


Science/AAAS | Personalized Exoskeletons Are Taking Support One Step Farther


Abstract of Human-in-the-loop optimization of exoskeleton assistance during walking

Exoskeletons and active prostheses promise to enhance human mobility, but few have succeeded. Optimizing device characteristics on the basis of measured human performance could lead to improved designs. We have developed a method for identifying the exoskeleton assistance that minimizes human energy cost during walking. Optimized torque patterns from an exoskeleton worn on one ankle reduced metabolic energy consumption by 24.2 ± 7.4% compared to no torque. The approach was effective with exoskeletons worn on one or both ankles, during a variety of walking conditions, during running, and when optimizing muscle activity. Finding a good generic assistance pattern, customizing it to individual needs, and helping users learn to take advantage of the device all contributed to improved economy. Optimization methods with these features can substantially improve performance.

Two drones see through walls in 3D using WiFi signals

Transmit and receive drones perform 3D imaging through walls using WiFi (credit: Chitra R. Karanam and Yasamin Mostofi/ACM)

Researchers at the University of California Santa Barbara have demonstrated the first three-dimensional imaging of objects through walls using an ordinary wireless signal.

Applications could include emergency search-and-rescue, archaeological discovery, and structural monitoring, according to the researchers. Other applications could include military and law-enforcement surveillance.

Calculating 3D images from WiFi signals

In the research, two octo-copters (drones) took off and flew outside an enclosed, four-sided brick structure whose interior was unknown to the drones. One drone continuously transmitted a WiFi signal; the other drone, located on a different side of the structure, received that signal and relayed the changes in received signal strength (RSSI) during the flight to a computer, which then calculated high-resolution 3D images of the objects inside (which do not need to move).

Structure and resulting 3D image (credit: Chitra R. Karanam and Yasamin Mostofi/ACM)

Interestingly, the equipment is all commercially available: two drones with Yagi antennas, a WiFi router, a Tango tablet (for real-time localization), and a Raspberry Pi computer with a network interface to record measurements.

This development builds on previous 2D work by professor Yasamin Mostofi’s lab, which has pioneered sensing and imaging with everyday radio frequency signals such as WiFi. Mostofi says the success of the 3D experiments is due to the drones’ ability to approach the area from several angles, and to new methodology* developed by her lab.

The research is described in an open-access paper published April 2017 in proceedings of the Association for Computing Machinery/Institute of Electrical and Electronics Engineers International Conference on Information Processing in Sensor Networks (IPSN).

A later paper by Technical University of Munich physicists also reported a system intended for 3D imaging with WiFi, but with only simulated (and cruder) images. (An earlier 2009 paper by Mostofi et al. also reported simulated results for 3D see-through imaging of structures.)

Block diagram of the 3D through-wall imaging system (credit: Chitra R. Karanam and Yasamin Mostofi/ACM)

* The researchers’ approach to enabling 3D through-wall imaging utilizes four tightly integrated key components, according to the paper.

(1) They proposed robotic paths that can capture the spatial variations in all three dimensions as much as possible, while maintaining the efficiency of the operation. 

(2) They modeled the three-dimensional unknown area of interest as a Markov Random Field to capture the spatial dependencies, and utilized a graph-based belief propagation approach to update the imaging decision of each voxel (the smallest unit of a 3D image) based on the decisions of the neighboring voxels. 

(3) To approximate the interaction of the transmitted wave with the area of interest, they used a linear wave model.

(4) They took advantage of the compressibility of the information content to image the area with a very small number of WiFi measurements (less than 4 percent).
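Steps (3) and (4) can be illustrated with a minimal sparse-recovery sketch: model the measurements as a linear system y = Ax over voxel occupancies x and recover a sparse x from far fewer measurements than voxels. This is an illustrative stand-in, not the paper’s pipeline; the actual method also applies the MRF prior with loopy belief propagation from step (2), and the matrix A here is random rather than a physical wave model.

```python
import numpy as np

def ista(A, y, lam=0.5, iters=3000):
    """Iterative soft-thresholding (ISTA) for the lasso problem:
    minimize 0.5*||Ax - y||^2 + lam*||x||_1.

    A generic sparse-signal-processing solver, used here in place of
    the paper's full MRF + belief-propagation pipeline.
    """
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - A.T @ (A @ x - y) / L       # gradient step on the quadratic term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft-threshold
    return x

# Toy scene: 100 "voxels", only 3 occupied; 30 RSSI-difference measurements.
rng = np.random.default_rng(0)
n_voxels, n_meas = 100, 30
x_true = np.zeros(n_voxels)
x_true[[7, 42, 81]] = 1.0                    # occupied voxels
A = rng.standard_normal((n_meas, n_voxels))  # stand-in for the linear wave model
y = A @ x_true                               # noiseless measurements
x_hat = ista(A, y)
occupied = np.flatnonzero(np.abs(x_hat) > 0.5)  # imaging decision per voxel
```

The point of the example is the compressibility argument in step (4): because the scene is sparse, far fewer measurements than unknowns (here 30 versus 100) can still pin down which voxels are occupied.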


Mostofi Lab | X-ray Eyes in the Sky: Drones and WiFi for 3D Through-Wall Imaging


Abstract of 3D Through-Wall Imaging with Unmanned Aerial Vehicles Using WiFi

In this paper, we are interested in the 3D through-wall imaging of a completely unknown area, using WiFi RSSI and Unmanned Aerial Vehicles (UAVs) that move outside of the area of interest to collect WiFi measurements. It is challenging to estimate a volume represented by an extremely high number of voxels with a small number of measurements. Yet many applications are time-critical and/or limited on resources, precluding extensive measurement collection. In this paper, we then propose an approach based on Markov random field modeling, loopy belief propagation, and sparse signal processing for 3D imaging based on wireless power measurements. Furthermore, we show how to design efficient aerial routes that are informative for 3D imaging. Finally, we design and implement a complete experimental testbed and show high-quality 3D robotic through-wall imaging of unknown areas with less than 4% of measurements.

Crystal ‘domain walls’ may lead to tinier electronic devices

Abstract art? No, nanoscale crystal sheets with moveable conductive “domain walls” that can modify a circuit’s electronic properties (credit: Queen’s University Belfast)

Queen’s University Belfast physicists have discovered a radical new way to modify the conductivity (ease of electron flow) of electronic circuits — reducing the size of future devices.

The two latest KurzweilAI articles on graphene cited its faster, lower-power performance and device-compatibility features. This new research takes another approach: altering the properties of a crystal to eliminate the need for multiple circuits in devices.

Reconfigurable nanocircuitry

To do that, the scientists used “ferroelectric copper-chlorine boracite” crystal sheets, which are almost as thin as graphene. The researchers discovered that squeezing the crystal sheets with a sharp needle at a precise location causes a jigsaw-puzzle-like pattern of “domain walls” to develop around the contact point.

Then, using externally applied electric fields, these writable, erasable domain walls can be repeatedly moved around within the crystal to create a variety of new electronic properties. They can appear, disappear, or relocate, all without permanently altering the crystal itself.

Eliminating the need for multiple circuits may reduce the size of future computers and other devices, according to the researchers.

The team’s findings have been published in an open-access paper in Nature Communications.


Abstract of Injection and controlled motion of conducting domain walls in improper ferroelectric Cu-Cl boracite

Ferroelectric domain walls constitute a completely new class of sheet-like functional material. Moreover, since domain walls are generally writable, erasable and mobile, they could be useful in functionally agile devices: for example, creating and moving conducting walls could make or break electrical connections in new forms of reconfigurable nanocircuitry. However, significant challenges exist: site-specific injection and annihilation of planar walls, which show robust conductivity, has not been easy to achieve. Here, we report the observation, mechanical writing and controlled movement of charged conducting domain walls in the improper-ferroelectric Cu3B7O13Cl. Walls are straight, tens of microns long and exist as a consequence of elastic compatibility conditions between specific domain pairs. We show that site-specific injection of conducting walls of up to hundreds of microns in length can be achieved through locally applied point-stress and, once created, that they can be moved and repositioned using applied electric fields.

New chemical method could revolutionize graphene use in electronics

Adding a molecular structure containing carbon, chromium, and oxygen atoms retains graphene’s superior conductive properties. The metal atoms (silver, in this experiment) to be bonded are then added to the oxygen atoms on top. (credit: Songwei Che et al./Nano Letters)

University of Illinois at Chicago scientists have solved a fundamental problem that has held back the use of wonder material graphene in a wide variety of electronics applications.

When graphene is bonded (attached) to metal atoms (such as molybdenum) in devices such as solar cells, graphene’s superior conduction properties degrade.

The solution: Instead of adding molecules directly to the individual carbon atoms of graphene, the new method first adds a sort of buffer (consisting of chromium, carbon, and oxygen atoms) to the graphene, and then adds the metal atoms to this buffer material instead. That enables the graphene to retain its unique properties of electrical conduction.

In an experiment, the researchers successfully added silver nanoparticles to graphene with this method. That boosted the efficiency of graphene-based solar cells about 11-fold, said Vikas Berry, associate professor and head of chemical engineering and senior author of a paper on the research, published in Nano Letters.

Researchers at Indian Institute of Technology and Clemson University were also involved in the study. The research was funded by the National Science Foundation.


Abstract of Retained Carrier-Mobility and Enhanced Plasmonic-Photovoltaics of Graphene via ring-centered η6 Functionalization and Nanointerfacing

Binding graphene with auxiliary nanoparticles for plasmonics, photovoltaics, and/or optoelectronics, while retaining the trigonal-planar bonding of sp2 hybridized carbons to maintain its carrier-mobility, has remained a challenge. The conventional nanoparticle-incorporation route for graphene is to create nucleation/attachment sites via “carbon-centered” covalent functionalization, which changes the local hybridization of carbon atoms from trigonal-planar sp2 to tetrahedral sp3. This disrupts the lattice planarity of graphene, thus dramatically deteriorating its mobility and innate superior properties. Here, we show large-area, vapor-phase, “ring-centered” hexahapto (η6) functionalization of graphene to create nucleation-sites for silver nanoparticles (AgNPs) without disrupting its sp2 character. This is achieved by the grafting of chromium tricarbonyl [Cr(CO)3] with all six carbon atoms (sigma-bonding) in the benzenoid ring on graphene to form an (η6-graphene)Cr(CO)3 complex. This nondestructive functionalization preserves the lattice continuum with a retention in charge carrier mobility (9% increase at 10 K); with AgNPs attached on graphene/n-Si solar cells, we report an ∼11-fold plasmonic-enhancement in the power conversion efficiency (1.24%).