Using ‘cooperative perception’ between intelligent vehicles to reduce risks

Networked intelligent vehicles (credit: EPFL)

Researchers at École polytechnique fédérale de Lausanne (EPFL) have combined data from two autonomous cars to create a wider field of view, extended situational awareness, and greater safety.

Autonomous vehicles get their intelligence from cameras, radar, light detection and ranging (LIDAR) sensors, and navigation and mapping systems. But there are ways to make them even smarter. Researchers at EPFL are working to improve the reliability and fault tolerance of these systems by sharing data between vehicles. For example, this can extend the field of view of a car that is behind another car.

Using simulators and road tests, the team has developed a flexible software framework for networking intelligent vehicles so that they can interact.

Cooperative perception

“Today, intelligent vehicle development is focused on two main issues: the level of autonomy and the level of cooperation,” says Alcherio Martinoli, who heads EPFL’s Distributed Intelligent Systems and Algorithms Laboratory (DISAL). As part of his PhD thesis, Milos Vasic has developed cooperative perception algorithms, which extend an intelligent vehicle’s situational awareness by fusing data from onboard sensors with data provided by cooperative vehicles nearby.

Milos Vasic, PhD, and Alcherio Martinoli made two regular cars intelligent using off-the-shelf equipment. (credit: Alain Herzog/EPFL)

The researchers used cooperative perception algorithms as the basis for the software framework. Cooperative perception means that an intelligent vehicle can combine its own data with that of another vehicle to help make driving decisions.

They developed an assistance system that assesses the risk of passing, for example. The risk assessment factors in the probability of an oncoming car in the opposite lane as well as kinematic conditions such as driving speeds, the distance required to overtake, and the distance to the oncoming car.
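A minimal sketch of how such a risk assessment might combine these factors. The formula, thresholds, and safety margin here are illustrative assumptions, not EPFL's actual algorithm:

```python
def overtake_risk(p_oncoming, own_speed, lead_speed, oncoming_speed,
                  gap_to_lead, gap_to_oncoming, margin=10.0):
    """Estimate the risk of an overtaking maneuver (0 = safe, 1 = maximal).

    p_oncoming      -- fused probability that a car occupies the opposite lane
    own_speed, lead_speed, oncoming_speed -- speeds in m/s
    gap_to_lead     -- distance to the car being overtaken (m)
    gap_to_oncoming -- distance to the oncoming car (m)
    margin          -- extra clearance required after merging back (m)
    """
    closing = own_speed - lead_speed
    if closing <= 0:          # cannot overtake a faster or equal-speed car
        return 1.0
    # Time needed to clear the lead car plus a safety margin.
    t_overtake = (gap_to_lead + margin) / closing
    # Gap to the oncoming car consumed during that time (closing speeds add).
    d_used = (own_speed + oncoming_speed) * t_overtake
    if d_used >= gap_to_oncoming:
        kinematic_risk = 1.0
    else:
        kinematic_risk = d_used / gap_to_oncoming
    # Weight kinematic feasibility by the occupancy probability.
    return min(1.0, p_oncoming * kinematic_risk)
```

With no oncoming car detected the risk collapses to zero, while an impossible closing speed pins it at one regardless of probability.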

Difficulties in fusing data

The team retrofitted two Citroen C-Zero electric cars with a Mobileye camera, an accurate localization system, a router to enable Wi-Fi communication, a computer to run the software and an external battery to power everything. “These were not autonomous vehicles,” says Martinoli, “but we made them intelligent using off-the-shelf equipment.”

One of the difficulties in fusing data from the two vehicles involved relative localization. The cars needed to know precisely where they were in relation to each other as well as to objects in the vicinity.

For example, if a single pedestrian does not appear to both cars to be in the same exact spot, there is a risk that, together, they will see two figures instead of one. By using other signals, particularly those provided by the LIDAR sensors and cameras, the researchers were able to correct flaws in the navigation system and adjust their algorithms accordingly. This exercise was even more challenging because the data had to be processed in real time while the vehicles were in motion.
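The pedestrian-duplication problem is essentially a data-association problem. A toy sketch, assuming both cars' detections are already expressed in a shared world frame; the gating distance and simple averaging rule are illustrative, not the team's actual method:

```python
import math

def fuse_detections(dets_a, dets_b, gate=1.5):
    """Merge object detections from two cooperating vehicles.

    dets_a, dets_b -- lists of (x, y) positions in a shared world frame
    gate           -- max distance (m) at which two detections are
                     treated as the same physical object
    Returns a fused list: matched pairs are averaged, unmatched
    detections are kept as-is.
    """
    fused = []
    unmatched_b = list(dets_b)
    for ax, ay in dets_a:
        best, best_d = None, gate
        for b in unmatched_b:
            d = math.hypot(ax - b[0], ay - b[1])
            if d < best_d:
                best, best_d = b, d
        if best is not None:                  # same object seen twice
            unmatched_b.remove(best)
            fused.append(((ax + best[0]) / 2, (ay + best[1]) / 2))
        else:                                 # seen by car A only
            fused.append((ax, ay))
    fused.extend(unmatched_b)                 # seen by car B only
    return fused
```

If the localization error exceeds the gate, the two sightings fail to match and the "one pedestrian becomes two" artifact appears, which is why correcting relative localization mattered so much.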

Although the tests involved only two vehicles, the longer-term goal is to create a network between multiple vehicles as well as with the roadway infrastructure.

In addition to driving safety and comfort, cooperative networks of this sort could eventually be used to optimize a vehicle’s trajectory, save energy, and improve traffic flows.

Of course, determining liability in case of an accident becomes more complicated when vehicles cooperate. “The answers to these issues will play a key role in determining whether autonomous vehicles are accepted,” says Martinoli.


École polytechnique fédérale de Lausanne (EPFL) | Networked intelligent vehicles

Controlled by a synthetic gene circuit, self-assembling bacteria build working electronic sensors

Bacteria create a functioning 3D pressure-sensor device. A gene circuit (left) triggers the production of an engineered protein that enables pattern-forming bacteria on growth membranes (center) to assemble gold nanoparticles into a hybrid organic-inorganic dome structure whose size and shape can be controlled by altering the growth environment. In this proof-of-concept demonstration, the gold structure serves as a functioning pressure switch (right) that responds to touch. (credit: Yangxiaolu Cao et al./Nature Biotechnology)

Using a synthetic gene circuit, Duke University researchers have programmed self-assembling bacteria to build useful electronic devices — a first.

Other experiments have successfully grown materials using bacterial processes (for example, MIT engineers have coaxed bacterial cells to produce biofilms that can incorporate nonliving materials, such as gold nanoparticles and quantum dots). However, they have relied entirely on external control over where the bacteria grow and they have been limited to two dimensions.

In the new study, the researchers demonstrated the production of a composite structure by programming the cells themselves and controlling their access to nutrients, but still leaving the bacteria free to grow in three dimensions.*

As a demonstration, the bacteria were programmed to assemble into a finger-pressure sensor.

To create the pressure sensor, two identical arrays of domes were grown on a membrane (left) on two substrate surfaces. The two substrates were then sandwiched together (center) so that each dome was positioned directly above its counterpart on the other substrate. A battery was connected to the domes by copper wiring. When pressure was applied to the sandwich (right), the domes pressed into one another and deformed, increasing conductivity and thus current (as shown by the arrow on the ammeter). (credit: Yangxiaolu Cao et al./Nature Biotechnology)
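Electrically, the caption describes a pressure-dependent conductance read out via Ohm's law. A toy model of that behavior; the constants are invented for illustration and are not from the paper:

```python
def sensor_current(pressure, v=3.0, g0=1e-6, k=5e-4):
    """Toy model of the bacterially grown dome switch.

    Conductance rises with applied pressure as the gold-coated domes
    deform into each other; current then follows Ohm's law (I = V * G).

    pressure -- applied pressure (arbitrary units)
    v        -- battery voltage (V)
    g0       -- baseline conductance with no pressure (S)
    k        -- illustrative conductance gain per unit pressure (S/unit)
    """
    g = g0 + k * pressure      # conductance increases under deformation
    return v * g               # current registered by the ammeter
```

Pressing harder raises the conductance and therefore the measured current, which is exactly the ammeter deflection the figure shows.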

Inspired by nature, but going beyond it

“This technology allows us to grow a functional device from a single cell,” said Lingchong You, the Paul Ruffin Scarborough Associate Professor of Engineering at Duke. “Fundamentally, it is no different from programming a cell to grow an entire tree.”

Nature is full of examples of life combining organic and inorganic compounds to make better materials. Mollusks grow shells consisting of calcium carbonate interlaced with a small amount of organic components, resulting in a microstructure three times tougher than calcium carbonate alone. Our own bones are a mix of organic collagen and inorganic minerals made up of various salts.

Harnessing such construction abilities in bacteria would have many advantages over current manufacturing processes. In nature, biological fabrication uses raw materials and energy very efficiently. In this synthetic system, for example, tweaking growth instructions to create different shapes and patterns could theoretically be much cheaper and faster than casting the new dies or molds needed for traditional manufacturing.

“Nature is a master of fabricating structured materials consisting of living and non-living components,” said You. “But it is extraordinarily difficult to program nature to create self-organized patterns. This work, however, is a proof-of-principle that it is not impossible.”

Self-healing materials

According to the researchers, if the bacteria are kept alive, it may also be possible to create materials that heal themselves and respond to environmental changes.

“Another aspect we’re interested in pursuing is how to generate much more complex patterns,” said You. “Bacteria can create complex branching patterns, we just don’t know how to make them do that ourselves — yet.”

It’s “very exciting work,” Timothy Lu, a synthetic biologist at MIT who was not involved in the research, told The Register. “I think this represents a major step forward in the field of living materials.” Lu believes self-assembling materials “could create new manufacturing processes that may use less energy or be better for the environment than the ones today,” the article said, though he cautioned that “the design rules for enabling bottoms-up assembly of novel materials are still not well understood.”

The study appeared online on October 9, 2017 in Nature Biotechnology. This study was supported by the Office of Naval Research, the National Science Foundation, the Army Research Office, the National Institutes of Health, the Swiss National Science Foundation, and a David and Lucile Packard Fellowship.

* The gene circuit is like a biological package of instructions that researchers embed into a bacterium’s DNA. The directions first tell the bacteria to produce a protein called T7 RNA polymerase (T7RNAP), which then activates its own expression in a positive feedback loop. It also produces a small molecule called AHL that can diffuse into the environment like a messenger. As the cells multiply and grow outward, the concentration of the small messenger molecule hits a critical concentration threshold, triggering the production of two more proteins called T7 lysozyme and curli. The former inhibits the production of T7RNAP while the latter acts as sort of biological Velcro, which grabs onto gold nanoparticles supplied by the researchers, forming a dome shell (the structure of the sensor). The researchers were able to alter the size and shape of the dome by controlling the properties of the porous membrane it grows on. For example, changing the size of the pores or how much the membrane repels water affects how many nutrients are passed to the cells, altering their growth pattern.
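The feedback-and-threshold logic of the circuit described above can be caricatured in a few lines of code. All rates, thresholds, and time steps here are invented for illustration; the real dynamics are continuous, spatial, and far more complex:

```python
def simulate_circuit(steps=400, dt=0.1, threshold=1.0,
                     production=0.05, decay=0.01):
    """Toy simulation of the circuit's messenger-molecule logic.

    T7RNAP drives AHL production; AHL accumulates as the colony grows.
    Once AHL crosses `threshold`, curli (the 'biological Velcro') is
    switched on and T7 lysozyme inhibits T7RNAP. Returns the time
    series of (AHL, T7RNAP, curli) levels.
    """
    ahl, t7rnap, curli = 0.0, 1.0, 0.0
    history = []
    for _ in range(steps):
        # AHL produced in proportion to T7RNAP, lost to decay/diffusion.
        ahl += (production * t7rnap - decay * ahl) * dt
        if ahl >= threshold:                        # quorum-like switch
            t7rnap = max(0.0, t7rnap - 0.05 * dt)   # lysozyme inhibition
            curli += 0.02 * dt                      # Velcro accumulates
        history.append((ahl, t7rnap, curli))
    return history
```

Running the simulation shows the expected switch: AHL climbs past the threshold, after which curli accumulates while T7RNAP is throttled back.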


Abstract of Programmed assembly of pressure sensors using pattern-forming bacteria

Conventional methods for material fabrication often require harsh reaction conditions, have low energy efficiency, and can cause a negative impact on the environment and human health. In contrast, structured materials with well-defined physical and chemical properties emerge spontaneously in diverse biological systems. However, these natural processes are not readily programmable. By taking a synthetic-biology approach, we demonstrate here the programmable, three-dimensional (3D) material fabrication using pattern-forming bacteria growing on top of permeable membranes as the structural scaffold. We equip the bacteria with an engineered protein that enables the assembly of gold nanoparticles into a hybrid organic-inorganic dome structure. The resulting hybrid structure functions as a pressure sensor that responds to touch. We show that the response dynamics are determined by the geometry of the structure, which is programmable by the membrane properties and the extent of circuit activation. Taking advantage of this property, we demonstrate signal sensing and processing using one or multiple bacterially assembled structures. Our work provides the first demonstration of using engineered cells to generate functional hybrid materials with programmable architecture.

3D ‘body-on-a-chip’ project aims to accelerate drug testing, reduce costs

Scientists created miniature models (“organoids”) of heart, liver, and lung in dishes and combined them into an integrated “body-on-a-chip” system fed with nutrient-rich fluid, mimicking blood. (credit: Wake Forest Baptist Medical Center)

A team of scientists at Wake Forest Institute for Regenerative Medicine and nine other institutions has engineered miniature 3D human hearts, lungs, and livers to achieve more realistic testing of how the human body responds to new drugs.

The “body-on-a-chip” project, funded by the Defense Threat Reduction Agency, aims to help reduce the estimated $2 billion cost and 90 percent failure rate that pharmaceutical companies face when developing new medications. The research is described in an open-access paper in Scientific Reports, published by Nature.

Using the same expertise they’ve employed to build new organs for patients, the researchers connected together micro-sized 3D liver, heart, and lung organs-on-a chip (or “organoids”) on a single platform to monitor their function. They selected heart and liver for the system because toxicity to these organs is a major reason for drug candidate failures and drug recalls. And lungs were selected because they’re the point of entry for toxic particles and for aerosol drugs such as asthma inhalers.

The integrated three-tissue organ-on-a-chip platform combines liver, heart, and lung organoids. (Top) Liver and cardiac modules are created by bioprinting spherical organoids using customized bioinks, resulting in 3D hydrogel constructs (upper left) that are placed into the microreactor devices. (Bottom) Lung modules are formed by creating layers of cells over porous membranes within microfluidic devices. TEER (trans-endothelial [or epithelial] electrical resistance) sensors allow for monitoring tissue barrier function integrity over time. The three organoids are placed in a sealed, monitored system with a real-time camera. A nutrient-filled liquid that circulates through the system keeps the organoids alive and is used to introduce potential drug therapies into the system. (credit: Aleksander Skardal et al./Scientific Reports)

Why current drug testing fails

Drug compounds are currently screened in the lab using human cells and then tested in animals. But these methods don’t adequately replicate how drugs affect human organs. “If you screen a drug in livers only, for example, you’re never going to see a potential side effect to other organs,” said Aleks Skardal, Ph.D., assistant professor at Wake Forest Institute for Regenerative Medicine and lead author of the paper.

In many cases during testing of new drug candidates — and sometimes even after the drugs have been approved for use — drugs also have unexpected toxic effects in tissues not directly targeted by the drugs themselves, he explained. “By using a multi-tissue organ-on-a-chip system, you can hopefully identify toxic side effects early in the drug development process, which could save lives as well as millions of dollars.”

“There is an urgent need for improved systems to accurately predict the effects of drugs, chemicals and biological agents on the human body,” said Anthony Atala, M.D., director of the institute and senior researcher on the multi-institution study. “The data show a significant toxic response to the drug as well as mitigation by the treatment, accurately reflecting the responses seen in human patients.”

Advanced drug screening, personalized medicine

The scientists conducted multiple scenarios to ensure that the body-on-a-chip system mimics a multi-organ response.

For example, they introduced a drug used to treat cancer into the system. Known to cause scarring of the lungs, the drug also unexpectedly affected the system’s heart. (A control experiment using only the heart failed to show a response.) The scientists theorize that the drug caused inflammatory proteins from the lung to be circulated throughout the system. As a result, the heart beat faster and then stopped altogether, indicating a toxic side effect.

“This was completely unexpected, but it’s the type of side effect that can be discovered with this system in the drug development pipeline,” Skardal noted.

Test of “liver on a chip” response to two drugs to demonstrate clinical relevance. Liver construct toxicity response was assessed following exposure to acetaminophen (APAP) and the clinically-used APAP countermeasure N-acetyl-L-cysteine (NAC). Liver constructs in the fluidic system (left) were treated with no drug (b), 1 mM APAP (c), and 10 mM APAP (d) — showing progressive loss of function and cell death, compared to 10 mM APAP +20 mM NAC (e), which mitigated those negative effects. The data shows both a significant cytotoxic (cell-damage) response to APAP as well as its mitigation by NAC treatment — accurately reflecting the clinical responses seen in human patients. (credit: Aleksander Skardal et al./Scientific Reports)

The scientists are now working to increase the speed of the system for large scale screening and add additional organs.

“Eventually, we expect to demonstrate the utility of a body-on-a-chip system containing many of the key functional organs in the human body,” said Atala. “This system has the potential for advanced drug screening and also to be used in personalized medicine — to help predict an individual patient’s response to treatment.”

Several patent applications comprising the technology described in the paper have been filed.

The international collaboration included researchers at Wake Forest Institute for Regenerative Medicine at the Wake Forest School of Medicine, Harvard-MIT Division of Health Sciences and Technology, Wyss Institute for Biologically Inspired Engineering at Harvard University, Biomaterials Innovation Research Center at Harvard Medical School, Bloomberg School of Public Health at Johns Hopkins University, Virginia Tech-Wake Forest School of Biomedical Engineering and Sciences, Brigham and Women’s Hospital, University of Konstanz, Konkuk University (Seoul), and King Abdulaziz University.


Abstract of Multi-tissue interactions in an integrated three-tissue organ-on-a-chip platform

Many drugs have progressed through preclinical and clinical trials and have been available – for years in some cases – before being recalled by the FDA for unanticipated toxicity in humans. One reason for such poor translation from drug candidate to successful use is a lack of model systems that accurately recapitulate normal tissue function of human organs and their response to drug compounds. Moreover, tissues in the body do not exist in isolation, but reside in a highly integrated and dynamically interactive environment, in which actions in one tissue can affect other downstream tissues. Few engineered model systems, including the growing variety of organoid and organ-on-a-chip platforms, have so far reflected the interactive nature of the human body. To address this challenge, we have developed an assortment of bioengineered tissue organoids and tissue constructs that are integrated in a closed circulatory perfusion system, facilitating inter-organ responses. We describe a three-tissue organ-on-a-chip system, comprised of liver, heart, and lung, and highlight examples of inter-organ responses to drug administration. We observe drug responses that depend on inter-tissue interaction, illustrating the value of multiple tissue integration for in vitro study of both the efficacy of and side effects associated with candidate drugs.

Teleoperating robots with virtual reality: getting inside a robot’s head

A new VR system from MIT’s Computer Science and Artificial Intelligence Laboratory could make it easy for factory workers to telecommute. (credit: Jason Dorfman, MIT CSAIL)

Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a virtual-reality (VR) system that lets you teleoperate a robot using an Oculus Rift or HTC Vive VR headset.

CSAIL’s “Homunculus Model” system (the classic notion of a small human sitting inside the brain and controlling the actions of the body) embeds you in a VR control room with multiple sensor displays, making it feel like you’re inside the robot’s head. By using gestures, you can control the robot’s matching movements to perform various tasks.

The system can be connected either via a wired local network or via a wireless network connection over the Internet. (The team demonstrated that the system could pilot a robot from hundreds of miles away, testing it on a hotel’s wireless network in Washington, DC to control Baxter at MIT.)

According to CSAIL postdoctoral associate Jeffrey Lipton, lead author on an open-access arXiv paper about the system (presented this week at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) in Vancouver), “By teleoperating robots from home, blue-collar workers would be able to telecommute and benefit from the IT revolution just as white-collar workers do now.”

Jobs for video-gamers too

The researchers imagine that such a system could even help employ jobless video-gamers by “game-ifying” manufacturing positions. (Users with gaming experience had the most ease with the system, the researchers found in tests.)

Homunculus Model system. A Baxter robot (left) is outfitted with a stereo camera rig and various end-effector devices. A virtual control room (user’s view, center), generated on an Oculus Rift CV1 headset (right), allows the user to feel like they are inside Baxter’s head while operating it. Using VR device controllers, including Razer Hydra hand trackers used for inputs (right), users can interact with controls that appear in the virtual space — opening and closing the hand grippers to pick up, move, and retrieve items. A user can plan movements based on the distance between the arm’s location marker and their hand while looking at the live display of the arm. (credit: Jeffrey I. Lipton et al./arXiv).

To make these movements possible, the human’s space is mapped into the virtual space, and the virtual space is then mapped into the robot space to provide a sense of co-location.
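This chain of mappings is, in essence, a composition of coordinate transforms. A 2D sketch using homogeneous matrices; the specific rotations and offsets are made up for illustration and do not come from the paper:

```python
import numpy as np

def transform(rotation_deg, translation):
    """2D homogeneous transform: rotation (degrees) then translation (x, y)."""
    th = np.radians(rotation_deg)
    c, s = np.cos(th), np.sin(th)
    return np.array([[c, -s, translation[0]],
                     [s,  c, translation[1]],
                     [0,  0, 1.0]])

# Illustrative frames: human tracker space -> virtual control room -> robot.
human_to_virtual = transform(0, (0.5, 0.0))   # recenter the user in the room
virtual_to_robot = transform(90, (1.0, 2.0))  # room mapped into robot frame

# Composing the two yields the direct human-to-robot mapping.
human_to_robot = virtual_to_robot @ human_to_virtual

hand = np.array([0.2, 0.1, 1.0])              # user's hand, homogeneous coords
robot_target = human_to_robot @ hand          # where the gripper should go
```

Because the maps compose, the user's hand motion in tracker space lands at a consistent point in the robot's workspace, which is what produces the sense of co-location.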

The team demonstrated the Homunculus Model system using the Baxter humanoid robot from Rethink Robotics, but the approach could work on other robot platforms, the researchers said.

In tests involving pick-and-place, assembly, and manufacturing tasks (such as “pick an item and stack it for assembly”) comparing the Homunculus Model system with existing state-of-the-art automated remote control, CSAIL’s Homunculus Model system achieved a 100% success rate, compared with 66% for state-of-the-art automated systems. The CSAIL system also grasped objects successfully 95 percent of the time and performed tasks 57 percent faster.*

“This contribution represents a major milestone in the effort to connect the user with the robot’s space in an intuitive, natural, and effective manner,” says Oussama Khatib, a computer science professor at Stanford University who was not involved in the paper.

The team plans to eventually focus on making the system more scalable, with many users and different types of robots that are compatible with current automation technologies.

* The Homunculus Model system solves a delay problem with existing systems, which use a GPU or CPU, introducing delay. 3D reconstruction from the stereo HD cameras is instead done by the human’s visual cortex, so the user constantly receives visual feedback from the virtual world with minimal latency (delay). This also avoids user fatigue and nausea caused by motion sickness (known as simulator sickness) generated by “unexpected incongruities, such as delays or relative motions, between proprioception and vision [that] can lead to the nausea,” the researchers explain in the paper.


MITCSAIL | Operating Robots with Virtual Reality


Abstract of Baxter’s Homunculus: Virtual Reality Spaces for Teleoperation in Manufacturing

Expensive specialized systems have hampered development of telerobotic systems for manufacturing systems. In this paper we demonstrate a telerobotic system which can reduce the cost of such system by leveraging commercial virtual reality(VR) technology and integrating it with existing robotics control software. The system runs on a commercial gaming engine using off the shelf VR hardware. This system can be deployed on multiple network architectures from a wired local network to a wireless network connection over the Internet. The system is based on the homunculus model of mind wherein we embed the user in a virtual reality control room. The control room allows for multiple sensor display, dynamic mapping between the user and robot, does not require the production of duals for the robot, or its environment. The control room is mapped to a space inside the robot to provide a sense of co-location within the robot. We compared our system with state of the art automation algorithms for assembly tasks, showing a 100% success rate for our system compared with a 66% success rate for automated systems. We demonstrate that our system can be used for pick and place, assembly, and manufacturing tasks.

Fast-moving spinning magnetized nanoparticles could lead to ultra-high-speed, high-density data storage

Artist’s impression of skyrmion data storage (credit: Moritz Eisebitt)

An international team led by MIT associate professor of materials science and engineering Geoffrey Beach has demonstrated a practical way to use “skyrmions” to create a radical new high-speed, high-density data-storage method that could one day replace disk drives — and even replace high-speed RAM memory.

Rather than reading and writing data one bit at a time by changing the orientation of magnetized nanoparticles on a surface, a skyrmion-based system could store data using only a tiny area of a magnetic surface — perhaps just a few atoms across — and hold it for long periods of time, without the need for further energy input (unlike disk drives and RAM).

Beach and associates conceive skyrmions as little sub-nanosecond spin-generating eddies of magnetism controlled by electric fields — replacing the magnetic-disk system of reading and writing data one bit at a time. In experiments, skyrmions have been generated on a thin metallic film sandwiched with non-magnetic heavy metals and transition-metal ferromagnetic layers — exploiting a defect, such as a constriction in the magnetic track.*

Skyrmions are also highly stable to external magnetic and mechanical perturbations, unlike the individual magnetic poles in a conventional magnetic storage device — allowing for vastly more data to be written onto a surface of a given size.

A practical data-storage system

Google data center (credit: Google Inc.)

Beach has recently collaborated with researchers at MIT and others in Germany** to demonstrate experimentally for the first time that it’s possible to create skyrmions in specific locations, which is needed for a data-storage system. The new findings were reported October 2, 2017 in the journal Nature Nanotechnology.

Conventional magnetic systems are now reaching speed and density limits set by the basic physics of their existing materials. The new system, once perfected, could provide a way to continue that progress toward ever-denser data storage, Beach says.

However, the researchers note that a commercialized system will require an efficient, reliable way to create skyrmions when and where they are needed, along with a way to read out the data (which now requires sophisticated, expensive X-ray magnetic spectroscopy). The team is now pursuing possible strategies to accomplish that.***

* The system focuses on the boundary region between atoms whose magnetic poles are pointing in one direction and those with poles pointing the other way. This boundary region can move back and forth within the magnetic material, Beach says. What he and his team found four years ago was that these boundary regions could be controlled by placing a second sheet of nonmagnetic heavy metal very close to the magnetic layer. The nonmagnetic layer can then influence the magnetic one, with electric fields in the nonmagnetic layer pushing around the magnetic domains in the magnetic layer. Skyrmions are little swirls of magnetic orientation within these layers. The key to being able to create skyrmions at will in particular locations lies in material defects. By introducing a particular kind of defect in the magnetic layer, the skyrmions become pinned to specific locations on the surface, the team found. Those surfaces with intentional defects can then be used as a controllable writing surface for data encoded in the skyrmions.

** The team also includes researchers at the Max Born Institute and the Institute of Optics and Atomic Physics, both in Berlin; the Institute for Laser Technologies in Medicine and Metrology at the University of Ulm, in Germany; and the Deutsches Elektronen-Synchrotron (DESY), in Hamburg. The work was supported by the U.S. Department of Energy and the German Science Foundation.

*** The researchers believe an alternative way of reading the data is possible, using an additional metal layer added to the other layers. By creating a particular texture on this added layer, it may be possible to detect differences in the layer’s electrical resistance depending on whether a skyrmion is present or not in the adjacent layer.


Abstract of Field-free deterministic ultrafast creation of magnetic skyrmions by spin–orbit torques

Magnetic skyrmions are stabilized by a combination of external magnetic fields, stray field energies, higher-order exchange interactions and the Dzyaloshinskii–Moriya interaction (DMI). The last favours homochiral skyrmions, whose motion is driven by spin–orbit torques and is deterministic, which makes systems with a large DMI relevant for applications. Asymmetric multilayers of non-magnetic heavy metals with strong spin–orbit interactions and transition-metal ferromagnetic layers provide a large and tunable DMI. Also, the non-magnetic heavy metal layer can inject a vertical spin current with transverse spin polarization into the ferromagnetic layer via the spin Hall effect. This leads to torques that can be used to switch the magnetization completely in out-of-plane magnetized ferromagnetic elements, but the switching is deterministic only in the presence of a symmetry-breaking in-plane field. Although spin–orbit torques led to domain nucleation in continuous films and to stochastic nucleation of skyrmions in magnetic tracks, no practical means to create individual skyrmions controllably in an integrated device design at a selected position has been reported yet. Here we demonstrate that sub-nanosecond spin–orbit torque pulses can generate single skyrmions at custom-defined positions in a magnetic racetrack deterministically using the same current path as used for the shifting operation. The effect of the DMI implies that no external in-plane magnetic fields are needed for this aim. This implementation exploits a defect, such as a constriction in the magnetic track, that can serve as a skyrmion generator. The concept is applicable to any track geometry, including three-dimensional designs.

New transistor design enables flexible, high-performance wearable/mobile electronics

Advanced flexible transistor developed at UW-Madison (photo credit: Jung-Hun Seo/University at Buffalo, State University of New York)

A team of University of Wisconsin–Madison (UW–Madison) engineers has created “the most functional flexible transistor in the world,” along with a fast, simple, inexpensive fabrication process that’s easily scalable to the commercial level.

The development promises to allow manufacturers to add advanced, smart-wireless capabilities to wearable and mobile devices that curve, bend, stretch and move.*

The UW–Madison group’s advance is based on a BiCMOS (bipolar complementary metal oxide semiconductor) thin-film transistor, combining speed, high current, and low power dissipation (heat and wasted energy) on just one surface (a silicon nanomembrane, or “Si NM”).**

BiCMOS transistors are the chip of choice for “mixed-signal” devices (combining analog and digital capabilities), which include many of today’s portable electronic devices such as cellphones. “The [BiCMOS] industry standard is very good,” says Zhenqiang (Jack) Ma, the Lynn H. Matthias Professor and Vilas Distinguished Achievement Professor in electrical and computer engineering at UW–Madison. “Now we can do the same things with our transistor — but it can bend.”

The research was described in the inaugural issue of Nature Publishing Group’s open-access journal Flexible Electronics, published Sept. 27, 2017.***

Making traditional BiCMOS flexible electronics is difficult, in part because the process takes several months and requires a multitude of delicate, high-temperature steps. Even a minor variation in temperature at any point could ruin all of the previous steps.

Ma and his collaborators fabricated their flexible electronics on a single-crystal silicon nanomembrane on a single bendable piece of plastic. The secret to their success is their unique process, which eliminates many steps and slashes both the time and cost of fabricating the transistors.

“In industry, they need to finish these in three months,” he says. “We finished it in a week.”

He says his group’s much simpler, high-temperature process can scale to industry-level production right away.

“The key is that parameters are important,” he says. “One high-temperature step fixes everything — like glue. Now, we have more powerful mixed-signal tools. Basically, the idea is for [the flexible electronics platform] to expand with this.”

* Some companies (such as Samsung) have developed flexible displays, but not other flexible electronic components in their devices, Ma explained to KurzweilAI.

** “Flexible electronics have mainly focused on their form factors such as bendability, lightweight, and large area with low-cost processability…. To date, all the [silicon, or Si]-based thin-film transistors (TFTs) have been realized with CMOS technology because of their simple structure and process. However, as more functions are required in future flexible electronic applications (i.e., advanced bioelectronic systems or flexible wireless power applications), an integration of functional devices in one flexible substrate is needed to handle complex signals and/or various power levels.” — Jung Hun Seo et al./Flexible Electronics. The n-channel, p-channel metal-oxide semiconductor field-effect transistors (N-MOSFETs & P-MOSFETs), and NPN bipolar junction transistors (BJTs) were realized together on a 340-nm thick Si NM layer. 

*** Co-authors included researchers at the University at Buffalo, State University of New York, and the University of Texas at Arlington. This work was supported by the Air Force Office Of Scientific Research.


Abstract of High-performance flexible BiCMOS electronics based on single-crystal Si nanomembrane

In this work, we have demonstrated for the first time integrated flexible bipolar-complementary metal-oxide-semiconductor (BiCMOS) thin-film transistors (TFTs) based on a transferable single crystalline Si nanomembrane (Si NM) on a single piece of bendable plastic substrate. The n-channel, p-channel metal-oxide semiconductor field-effect transistors (N-MOSFETs & P-MOSFETs), and NPN bipolar junction transistors (BJTs) were realized together on a 340-nm thick Si NM layer with minimized processing complexity at low cost for advanced flexible electronic applications. The fabrication process was simplified by thoughtfully arranging the sequence of necessary ion implantation steps with carefully selected energies, doses and anneal conditions, and by wisely combining some costly processing steps that are otherwise separately needed for all three types of transistors. All types of TFTs demonstrated excellent DC and radio-frequency (RF) characteristics and exhibited stable transconductance and current gain under bending conditions. Overall, Si NM-based flexible BiCMOS TFTs offer great promises for high-performance and multi-functional future flexible electronics applications and is expected to provide a much larger and more versatile platform to address a broader range of applications. Moreover, the flexible BiCMOS process proposed and demonstrated here is compatible with commercial microfabrication technology, making its adaptation to future commercial use straightforward.

Ray Kurzweil on The Age of Spiritual Machines: A 1999 TV interview

Dear readers,

For your interest, this 1999 interview with me, which I recently re-watched, describes some interesting predictions that are still coming true today. It’s intriguing to look back at the last 18 years to see what actually unfolded. This video is a compelling glimpse into the future, as we’re living it today.

Enjoy!

— Ray


Dear readers,

This interview by Harold Hudson Channer was recorded on Jan. 14, 1999 and aired February 1, 1999 on a Manhattan Neighborhood Network cable-access show, Conversations with Harold Hudson Channer.

In the discussion, Ray explains many of the ahead-of-their-time ideas presented in The Age of Spiritual Machines*, such as the “law of accelerating returns” (how technological change is exponential, contrary to the common-sense “intuitive linear” view); the forthcoming revolutionary impacts of AI; nanotech brain and body implants for increased intelligence, improved health, and life extension; and technological impacts on economic growth.
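The exponential-versus-linear contrast at the heart of the “law of accelerating returns” can be made concrete with a toy calculation (the starting value, step, and doubling rate below are illustrative assumptions, not figures from the book):

```python
# Toy comparison: linear progress adds a fixed amount per period,
# while exponential progress multiplies by a fixed factor (here, doubling).
def linear_progress(start, step, periods):
    """Value after adding `step` once per period."""
    return start + step * periods

def exponential_progress(start, factor, periods):
    """Value after multiplying by `factor` once per period."""
    return start * factor ** periods

# Over the 18 years since the interview, doubling each year dwarfs
# adding one unit per year.
print(linear_progress(1, 1, 18))       # → 19
print(exponential_progress(1, 2, 18))  # → 262144
```

The gap between the two curves is what makes the “intuitive linear” view underestimate long-range technological change.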

I was personally inspired by the book in 1999 and by Ray’s prophetic, uplifting vision of the future. I hope you also enjoy this blast from the past.

— Amara D. Angelica, Editor

* First published in hardcover January 1, 1999 by Viking. The series also includes The Age of Intelligent Machines (The MIT Press, 1992) and The Singularity Is Near (Penguin Books, 2006).

Intel’s new ‘Loihi’ chip mimics neurons and synapses in the human brain

Loihi chip (credit: Intel Corporation)

Intel announced this week a self-learning, energy-efficient neuromorphic (brain-like) research chip codenamed “Loihi”* that mimics how the human brain functions. Under development for six years, the chip uses 130,000 “neurons” and 130 million “synapses” and learns in real time, based on feedback from the environment.**

Neuromorphic chip models are inspired by how neurons communicate and learn, using spikes (brain pulses) and synapses capable of learning.
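Intel has not published Loihi’s internal neuron model, but the spiking behaviour such chips build on can be sketched with a toy leaky integrate-and-fire neuron (all parameter values here are illustrative assumptions, not Intel’s):

```python
# Toy leaky integrate-and-fire (LIF) neuron: the membrane potential leaks
# toward a resting value, integrates injected current, and emits a spike
# (then resets) whenever it crosses a threshold.
def simulate_lif(currents, tau=10.0, v_rest=0.0, v_thresh=1.0, dt=1.0):
    """Return the list of time steps at which the neuron spiked."""
    v = v_rest
    spikes = []
    for t, i_in in enumerate(currents):
        # Leaky integration: decay toward rest plus injected current.
        v += (dt / tau) * (v_rest - v) + i_in * dt
        if v >= v_thresh:
            spikes.append(t)  # emit a spike...
            v = v_rest        # ...and reset the membrane potential
    return spikes

# A constant supra-threshold drive produces a regular spike train.
print(simulate_lif([0.3] * 20))  # → [3, 7, 11, 15, 19]
```

Information is carried by the timing of these spikes rather than by continuous activations, which is what distinguishes spiking neural networks from conventional deep-learning models.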

“The idea is to help computers self-organize and make decisions based on patterns and associations,” Michael Mayberry, PhD, corporate vice president and managing director of Intel Labs at Intel Corporation, explained in a blog post.

He said the chip automatically gets smarter over time and doesn’t need to be trained in the traditional way. He sees applications in areas that would benefit from autonomous operation and continuous learning in an unstructured environment, such as automotive, industrial, and personal robotics.

For example, a cybersecurity system could identify a breach or a hack based on an abnormality or difference in data streams. Or the chip could learn a person’s heartbeat reading under various conditions — after jogging, following a meal or before going to bed — to determine a “normal” heartbeat. The system could then continuously monitor incoming heart data to flag patterns that don’t match the “normal” pattern, and could be personalized for any user.
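Intel has not disclosed how Loihi would implement this, but the underlying pattern-matching idea can be illustrated with a minimal sketch that learns a “normal” heart-rate band from samples and flags readings outside it (the data, margin, and function names are all made up for illustration):

```python
# Learn a per-context "normal" band as mean +/- margin * standard deviation,
# then flag readings that fall outside it.
def learn_normal_band(samples, margin=1.5):
    """Return (low, high) bounds of the learned normal range."""
    mean = sum(samples) / len(samples)
    var = sum((x - mean) ** 2 for x in samples) / len(samples)
    std = var ** 0.5
    return (mean - margin * std, mean + margin * std)

def is_anomalous(reading, band):
    """True if the reading falls outside the learned band."""
    lo, hi = band
    return not (lo <= reading <= hi)

# Illustrative resting heart-rate samples (beats per minute).
resting = [62, 64, 61, 63, 65, 62, 63]
band = learn_normal_band(resting)
print(is_anomalous(64, band), is_anomalous(110, band))  # → False True
```

A separate band could be learned for each context (after jogging, following a meal, before bed), matching the personalization described above.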

“Machine learning models such as deep learning have made tremendous recent advancements by using extensive training datasets to recognize objects and events. However, unless their training sets have specifically accounted for a particular element, situation or circumstance, these machine learning systems do not generalize well,” Mayberry notes.

The Loihi test chip

Loihi currently exists as a research test chip that offers flexible on-chip learning and combines training and inference. Researchers have demonstrated it learning at a rate 1 million times faster than other typical spiking neural networks, as measured by the total operations needed to achieve a given accuracy on MNIST digit-recognition problems, Mayberry said. “Compared to technologies such as convolutional neural networks and deep learning neural networks, the Loihi test chip uses many fewer resources on the same task.”

Fabricated on Intel’s 14 nm process technology, the chip is also up to 1,000 times more energy-efficient than general-purpose computing required for typical training systems, he added.

In the first half of 2018, Intel plans to share the Loihi test chip with leading university and research institutions with a focus on advancing AI. The goal is to develop and test several algorithms with high efficiency for problems including path planning, constraint satisfaction, sparse coding, dictionary learning, and dynamic pattern learning and adaptation.

“Looking to the future, Intel believes that neuromorphic computing offers a way to provide exascale performance in a construct inspired by how the brain works,” Mayberry said.

* “Loihi seamount, sometimes known as the ‘youngest volcano’ in the Hawaiian chain, is an undersea mountain rising more than 3000 meters above the floor of the Pacific Ocean … submerged in the Pacific off of the south-eastern coast of the Big Island of Hawaii.” — Hawaii Center for Volcanology

** For comparison, IBM’s TrueNorth neuromorphic chip currently has 1 million neurons and 256 million synapses.

Why futurist Ray Kurzweil isn’t worried about technology stealing your job — Fortune

1985: Ray Kurzweil looks on as Stevie Wonder experiences the Kurzweil 250, the first synthesizer to accurately reproduce the sounds of the piano — replacing piano-maker jobs but adding many more jobs for musicians (credit: Kurzweil Music Systems)

Last week, Fortune magazine asked Ray Kurzweil to comment on some often-expressed questions about the future.

Does AI pose an existential threat to humanity?

Kurzweil sees the future as nuanced, notes writer Michal Lev-Ram. “A lot of people think things are getting worse, partly because that’s actually an evolutionary adaptation,” Kurzweil said. “It’s very important for your survival to be sensitive to bad news. … I think if you look at history, though, we’re being helped [by new technology] more than we’re being hurt.”

How will artificial intelligence and other technologies impact jobs?

“We have already eliminated all jobs several times in human history,” said Kurzweil, pointing out that “for every job we eliminate, we’re going to create more jobs at the top of the skill ladder. … You can’t describe the new jobs, because they’re in industries and concepts that don’t exist yet.”

Why are we so bad at predicting certain things? For example, Donald Trump winning the presidency?

Kurzweil: “He’s not technology.”

Read the Fortune article here.