3D-printing biocompatible living bacteria

3D-printing with an ink containing living bacteria (credit: Bara Krautz/bara@scienceanimated.com)

Researchers at ETH Zurich have developed a technique for 3D-printing biocompatible living bacteria for the first time, making it possible to produce high-purity cellulose for biomedical applications, as well as nanofilters that can break down toxic substances (in drinking water, for example) or help clean up oil spills.

The technique, called “Flink” (“functional living ink”), allows for printing mini biochemical factories with properties that vary based on which species of bacteria are used. Up to four different inks containing different species of bacteria at different concentrations can be printed in a single pass.

Schematics of the Flink 3D bacteria-printing process for creating two types of functional living materials. (Left and center) Bacteria are embedded in a biocompatible hydrogel (which provides the supporting structure). (Right) The inclusion of P. putida* or A. xylinum* bacteria in the ink yields 3D-printed materials capable of degrading environmental pollutants (top) or forming bacterial cellulose in situ for biomedical applications (bottom), respectively. (credit: Manuel Schaffner et al./Science Advances)

The technique was described Dec. 1, 2017 in the open-access journal Science Advances.

(Left) A. xylinum bacteria were used in printing a cellulose nanofibril network (scanning electron microscope image), which was deposited (Right) on a doll face, forming a cellulose-reinforced hydrogel that, after removal of all biological residues, could serve as a skin transplant. (credit: Manuel Schaffner et al./Science Advances)

“The in situ formation of reinforcing cellulose fibers within the hydrogel is particularly attractive for regions under mechanical tension, such as the elbow and knee, or when administered as a pouch onto organs to prevent fibrosis after surgical implants and transplantations,” the researchers note in the paper. “Cellulose films grown in complex geometries precisely match the topography of the site of interest, preventing the formation of wrinkles and entrapments of contaminants that could impair the healing process. We envision that long-term medical applications will benefit from the presented multimaterial 3D printing process by locally deploying bacteria where needed.”

* Pseudomonas putida breaks down the toxic chemical phenol, which is produced on an industrial scale by the chemical industry; Acetobacter xylinum secretes high-purity nanocellulose, which relieves pain, retains moisture, and is stable, opening up potential applications in the treatment of burns.


Abstract of 3D printing of bacteria into functional complex materials

Despite recent advances to control the spatial composition and dynamic functionalities of bacteria embedded in materials, bacterial localization into complex three-dimensional (3D) geometries remains a major challenge. We demonstrate a 3D printing approach to create bacteria-derived functional materials by combining the natural diverse metabolism of bacteria with the shape design freedom of additive manufacturing. To achieve this, we embedded bacteria in a biocompatible and functionalized 3D printing ink and printed two types of “living materials” capable of degrading pollutants and of producing medically relevant bacterial cellulose. With this versatile bacteria-printing platform, complex materials displaying spatially specific compositions, geometry, and properties not accessed by standard technologies can be assembled from bottom up for new biotechnological and biomedical applications.

New technology allows robots to visualize their own future


UC Berkeley | Vestri the robot imagines how to perform tasks

UC Berkeley researchers have developed a robotic learning technology that enables robots to imagine the future of their actions so they can figure out how to manipulate objects they have never encountered before. It could help self-driving cars anticipate future events on the road and produce more intelligent robotic assistants in homes.

The initial prototype focuses on learning simple manual skills entirely from autonomous play — similar to how children can learn about their world by playing with toys, moving them around, grasping, etc.

Using this technology, called visual foresight, the robots can predict what their cameras will see if they perform a particular sequence of movements. These robotic imaginations are still relatively simple for now — predictions made only several seconds into the future — but they are enough for the robot to figure out how to move objects around on a table without disturbing obstacles.

The robot can learn to perform these tasks without any help from humans or prior knowledge about physics, its environment, or what the objects are. That’s because the visual imagination is learned entirely from scratch through unattended, unsupervised exploration (no humans involved), in which the robot plays with objects on a table.

After this play phase, the robot builds a predictive model of the world, and can use this model to manipulate new objects that it has not seen before.

“In the same way that we can imagine how our actions will move the objects in our environment, this method can enable a robot to visualize how different behaviors will affect the world around it,” said Sergey Levine, assistant professor in Berkeley’s Department of Electrical Engineering and Computer Sciences, whose lab developed the technology. “This can enable intelligent planning of highly flexible skills in complex real-world situations.”

The research team demonstrated the visual foresight technology at the Neural Information Processing Systems conference in Long Beach, California, on Monday, December 4, 2017.

Learning by playing: how it works

Robot’s imagined predictions (credit: UC Berkeley)

At the core of this system is a deep learning technology based on convolutional recurrent video prediction, or dynamic neural advection (DNA). DNA-based models predict how pixels in an image will move from one frame to the next, based on the robot’s actions. Recent improvements to this class of models, as well as greatly improved planning capabilities, have enabled robotic control based on video prediction to perform increasingly complex tasks, such as sliding toys around obstacles and repositioning multiple objects.

“In the past, robots have learned skills with a human supervisor helping and providing feedback. What makes this work exciting is that the robots can learn a range of visual object manipulation skills entirely on their own,” said Chelsea Finn, a doctoral student in Levine’s lab and inventor of the original DNA model.

With the new technology, a robot pushes objects on a table, then uses the learned prediction model to choose motions that will move an object to a desired location. Robots use the learned model from raw camera observations to teach themselves how to avoid obstacles and push objects around obstructions.
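In pseudocode terms, that planning step amounts to imagining many candidate action sequences and keeping the one whose predicted outcome moves the object closest to the goal. The sketch below is only an illustration under our own simplifying assumptions (a random-sampling planner and a stand-in model that predicts how a designated object pixel moves per action); it is not the Berkeley group’s code.

```python
import numpy as np

# Illustrative sketch of "visual foresight" planning, assuming a learned
# model that predicts how a designated object pixel moves for a candidate
# action. The model interface, action space, and cost are assumptions.

HORIZON = 5            # imagined steps into the future
NUM_CANDIDATES = 200   # random action sequences to score

def imagined_pixel(flow_model, pixel, actions):
    """Propagate the designated object pixel through the motion the
    learned model predicts for each action in the sequence."""
    pos = np.array(pixel, dtype=float)
    for a in actions:
        pos = pos + flow_model(pos, a)   # model returns predicted (dy, dx)
    return pos

def plan(flow_model, object_pixel, goal_pixel, rng=np.random.default_rng(0)):
    """Sample random action sequences, imagine where each would push the
    object, and keep the one whose imagined end point is closest to the
    user-specified goal pixel."""
    goal = np.array(goal_pixel, dtype=float)
    best_cost, best_actions = np.inf, None
    for _ in range(NUM_CANDIDATES):
        actions = rng.uniform(-1.0, 1.0, size=(HORIZON, 2))  # e.g. planar pushes
        end = imagined_pixel(flow_model, object_pixel, actions)
        cost = np.linalg.norm(end - goal)
        if cost < best_cost:
            best_cost, best_actions = cost, actions
    return best_actions

# Toy stand-in for the learned model: each action nudges the pixel
# roughly in the action's direction.
toy_model = lambda pos, a: 3.0 * a
best = plan(toy_model, object_pixel=(32, 32), goal_pixel=(48, 20))
print("first planned action:", best[0])
```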

Since control through video prediction relies only on observations that can be collected autonomously by the robot, such as through camera images, the resulting method is general and broadly applicable. Building video prediction models only requires unannotated video, which can be collected by the robot entirely autonomously.

That contrasts with conventional computer-vision methods, which require humans to manually label thousands or even millions of images.

Why (most) future robots won’t look like robots

A future robot’s body could combine soft actuators and stiff structure, with distributed computation throughout — an example of the new “material robotics.” (credit: Nikolaus Correll/University of Colorado)

Future robots won’t be limited to humanoid form (like Boston Dynamics’ formidable backflipping Atlas). They’ll be invisibly embedded everywhere in common objects.

Such as a shoe that can intelligently support your gait, change stiffness as you’re running or walking, and adapt to different surfaces — or even help you do backflips.

That’s the vision of researchers at Oregon State University, the University of Colorado, Yale University, and École Polytechnique Fédérale de Lausanne, who describe the burgeoning new field of  “material robotics” in a perspective article published Nov. 29, 2017 in Science Robotics. (The article cites nine articles in this special issue, three of which you can access for free.)

Disappearing into the background of everyday life

The authors challenge a widespread basic assumption: that robots are either “machines that run bits of code” or “software ‘bots’ interacting with the world through a physical instrument.”

“We take a third path: one that imbues intelligence into the very matter of a robot,” says Oregon State University researcher Yiğit Mengüç, an assistant professor of mechanical engineering in OSU’s College of Engineering and part of the college’s Collaborative Robotics and Intelligent Systems Institute.

On that path, materials scientists are developing new bulk materials with the inherent multifunctionality required for robotic applications, while roboticists are working on new material systems with tightly integrated components, disappearing into the background of everyday life. “The spectrum of possible approaches spans from soft grippers with zero knowledge and zero feedback all the way to humanoids with full knowledge and full feedback,” the authors note in the paper.

For example, “In the future, your smartphone may be made from stretchable, foldable material so there’s no danger of it shattering,” says Mengüç. “Or it might have some actuation, where it changes shape in your hand to help with the display, or it can be able to communicate something about what you’re observing on the screen. All these bits and pieces of technology that we take for granted in life will be living, physically responsive things, moving, changing shape in response to our needs, not just flat, static screens.”

Soft robots get superpowers

Origami-inspired artificial muscles capable of lifting up to 1,000 times their own weight, simply by applying air or water pressure (credit: Shuguang Li/Wyss Institute at Harvard University)

As a good example of material-enabled robotics, researchers at the Wyss Institute at Harvard University and MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed origami-inspired, programmable, super-strong artificial muscles that will allow future soft robots to lift objects that are up to 1,000 times their own weight — using only air or water pressure.

The actuators are “programmed” by the structural design itself. The skeleton folds define how the whole structure moves — no control system required.

That allows the muscles to be very compact and simple, which makes them more appropriate for mobile or body-mounted systems that can’t accommodate large or heavy machinery, says Shuguang Li, Ph.D., a Postdoctoral Fellow at the Wyss Institute and MIT CSAIL and first author of an open-access article on the research published Nov. 21, 2017 in Proceedings of the National Academy of Sciences (PNAS).

Each artificial muscle consists of an inner “skeleton” that can be made of various materials, such as a metal coil or a sheet of plastic folded into a certain pattern, surrounded by air or fluid and sealed inside a plastic or textile bag that serves as the “skin.” The structural geometry of the skeleton itself determines the muscle’s motion. A vacuum applied to the inside of the bag initiates the muscle’s movement by causing the skin to collapse onto the skeleton, creating tension that drives the motion. Incredibly, no other power source or human input is required to direct the muscle’s movement — it’s automagically determined entirely by the shape and composition of the skeleton. (credit: Shuguang Li/Wyss Institute at Harvard University)

Resilient, multipurpose, scalable

Not only can the artificial muscles move in many ways, they do so with impressive resilience. They can generate about six times more force per unit area than mammalian skeletal muscle can, and are also incredibly lightweight. A 2.6-gram muscle can lift a 3-kilogram object, which is the equivalent of a mallard duck lifting a car. Additionally, a single muscle can be constructed within ten minutes using materials that cost less than $1, making them cheap and easy to test and iterate.
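As a quick check of the weight-lifting figure (back-of-the-envelope arithmetic, not taken from the paper):

```latex
\frac{\text{load}}{\text{muscle mass}} \;=\; \frac{3\ \text{kg}}{2.6\ \text{g}} \;=\; \frac{3000\ \text{g}}{2.6\ \text{g}} \;\approx\; 1150
```

which is consistent with the “up to 1,000 times their own weight” claim.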

These muscles can be powered by a vacuum, which makes them safer than most of the other artificial muscles currently being tested. The muscles have been built in sizes ranging from a few millimeters up to a meter. So the muscles can be used in numerous applications at multiple scales, from miniature surgical devices to wearable robotic exoskeletons, transformable architecture, and deep-sea manipulators for research or construction, up to large deployable structures for space exploration.

The team could also construct the muscles out of the water-soluble polymer PVA. That opens the possibility of bio-friendly robots that can perform tasks in natural settings with minimal environmental impact, or ingestible robots that move to the proper place in the body and then dissolve to release a drug.

The team constructed dozens of muscles using materials ranging from metal springs to packing foam to sheets of plastic, and experimented with different skeleton shapes to create muscles that can contract down to 10% of their original size, lift a delicate flower off the ground, and twist into a coil, all simply by sucking the air out of them.

This research was funded by the Defense Advanced Research Projects Agency (DARPA), the National Science Foundation (NSF), and the Wyss Institute for Biologically Inspired Engineering.


Wyss Institute | Origami-Inspired Artificial Muscles


Abstract of Fluid-driven origami-inspired artificial muscles

Artificial muscles hold promise for safe and powerful actuation for myriad common machines and robots. However, the design, fabrication, and implementation of artificial muscles are often limited by their material costs, operating principle, scalability, and single-degree-of-freedom contractile actuation motions. Here we propose an architecture for fluid-driven origami-inspired artificial muscles. This concept requires only a compressible skeleton, a flexible skin, and a fluid medium. A mechanical model is developed to explain the interaction of the three components. A fabrication method is introduced to rapidly manufacture low-cost artificial muscles using various materials and at multiple scales. The artificial muscles can be programed to achieve multiaxial motions including contraction, bending, and torsion. These motions can be aggregated into systems with multiple degrees of freedom, which are able to produce controllable motions at different rates. Our artificial muscles can be driven by fluids at negative pressures (relative to ambient). This feature makes actuation safer than most other fluidic artificial muscles that operate with positive pressures. Experiments reveal that these muscles can contract over 90% of their initial lengths, generate stresses of ∼600 kPa, and produce peak power densities over 2 kW/kg—all equal to, or in excess of, natural muscle. This architecture for artificial muscles opens the door to rapid design and low-cost fabrication of actuation systems for numerous applications at multiple scales, ranging from miniature medical devices to wearable robotic exoskeletons to large deployable structures for space exploration.

Using light instead of electrons promises faster, smaller, more-efficient computers and smartphones

Trapped light for optical computation (credit: Imperial College London)

By forcing light to go through a smaller gap than ever before, a research team at Imperial College London has taken a step toward computers based on light instead of electrons.

Light would be preferable for computing because it can carry much-higher-density information, is much faster, and is more efficient (it generates little to no heat). But light beams don’t easily interact with one another. So information on high-speed fiber-optic cables (provided by your cable TV company, for example) currently has to be converted (via a modem or other device) into slower signals (electrons on wires or wireless signals) before the data can be processed on devices such as computers and smartphones.

Electron-microscope image of an optical-computing nanofocusing device that is 25 nanometers wide and 2 micrometers long, using grating couplers (vertical lines) to interface with fiber-optic cables. (credit: Nielsen et al., 2017/Imperial College London)

To overcome that limitation, the researchers used metamaterials to squeeze light into a metal channel only 25 nanometers (billionths of a meter) wide, increasing its intensity and allowing photons to interact over the range of micrometers (millionths of meters) instead of centimeters.*

That means optical computation that previously required a centimeters-size device can now be realized on the micrometer (one millionth of a meter) scale, bringing optical processing into the size range of electronic transistors.

The results were published Thursday Nov. 30, 2017 in the journal Science.

* Normally, when two light beams cross each other, the individual photons do not interact or alter each other, as two electrons do when they meet. That means a long span of material is needed to gradually accumulate the effect and make it useful. Here, a “plasmonic nanofocusing” waveguide is used, strongly confining light within a nonlinear organic polymer.


Abstract of Giant nonlinear response at a plasmonic nanofocus drives efficient four-wave mixing

Efficient optical frequency mixing typically must accumulate over large interaction lengths because nonlinear responses in natural materials are inherently weak. This limits the efficiency of mixing processes owing to the requirement of phase matching. Here, we report efficient four-wave mixing (FWM) over micrometer-scale interaction lengths at telecommunications wavelengths on silicon. We used an integrated plasmonic gap waveguide that strongly confines light within a nonlinear organic polymer. The gap waveguide intensifies light by nanofocusing it to a mode cross-section of a few tens of nanometers, thus generating a nonlinear response so strong that efficient FWM accumulates over wavelength-scale distances. This technique opens up nonlinear optics to a regime of relaxed phase matching, with the possibility of compact, broadband, and efficient frequency mixing integrated with silicon photonics.

New nanomaterial, quantum encryption system could be ultimate defenses against hackers

New physically unclonable nanomaterial (credit: Abdullah Alharbi et al./ACS Nano)

Recent advances in quantum computers may soon give hackers access to machines powerful enough to crack even the toughest of standard internet security codes. With these codes broken, all of our online data — from medical records to bank transactions — could be vulnerable to attack.

Now, a new low-cost nanomaterial developed by New York University Tandon School of Engineering researchers can be tuned to act as a secure authentication key to encrypt computer hardware and data. The layered molybdenum disulfide (MoS2) nanomaterial cannot be physically cloned (duplicated) — replacing programming, which can be hacked.

In a paper published in the journal ACS Nano, the researchers explain that the new nanomaterial has the highest possible level of structural randomness, making it physically unclonable. It achieves this with randomly occurring regions that alternately emit or do not emit light. When exposed to light, this pattern can be used to create a one-of-a-kind binary cryptographic authentication key that could secure hardware components at minimal cost.
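The final step described there, turning the random optical response into bits by thresholding, can be sketched as follows. The sketch is illustrative only: the simulated intensities and the median threshold are stand-ins (the paper stresses that the threshold must be chosen carefully), and only the 2048-pixel array size is taken from the abstract below.

```python
import numpy as np

# Illustrative sketch of converting a physically random optical response
# into a binary key, in the spirit of the MoS2 primitive described above.
# The simulated intensities and threshold choice are stand-ins only.

rng = np.random.default_rng()

# Stand-in for the measured photoemission intensity of each pixel in a
# 2048-pixel array (in the real device this randomness comes from the
# random number of MoS2 layers at each site, not from a software RNG).
intensities = rng.random(2048)

# Threshold the analog response into bits; the paper notes that choosing
# this threshold well is what maximizes the randomness of the key.
threshold = np.median(intensities)
key_bits = (intensities > threshold).astype(np.uint8)

# Pack the 2048 bits into a 256-byte key for storage or comparison.
key_bytes = np.packbits(key_bits).tobytes()
print(len(key_bytes), "bytes;", key_bits[:16], "...")
```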

The research team envisions a future in which similar nanomaterials can be inexpensively produced at scale and applied to a chip or other hardware component. “No metal contacts are required, and production could take place independently of the chip fabrication process,” according to Davood Shahrjerdi, Assistant Professor of Electrical and Computer Engineering. “It’s maximum security with minimal investment.”

The National Science Foundation and the U.S. Army Research Office supported the research.

A high-speed quantum encryption system to secure the future internet

Schematic of the experimental quantum key distribution setup (credit: Nurul T. Islam et al./Science Advances)

Another approach to the hacker threat is being developed by scientists at Duke University, The Ohio State University and Oak Ridge National Laboratory. It would use the properties that drive quantum computers to create theoretically hack-proof forms of quantum data encryption.

Called quantum key distribution (QKD), it takes advantage of one of the fundamental properties of quantum mechanics: Measuring tiny bits of matter like electrons or photons automatically changes their properties, which would immediately alert both parties to the existence of a security breach. However, current QKD systems can only transmit keys at relatively low rates — up to hundreds of kilobits per second — which are too slow for most practical uses on the internet.
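To see why an eavesdropper’s measurement gives the game away, here is a toy simulation of the classic BB84 protocol with an intercept-and-resend attacker. It is a conceptual illustration only; the system described below actually uses higher-dimensional time-bin states, not this protocol.

```python
import numpy as np

# Toy BB84 simulation: an intercept-resend eavesdropper who measures every
# photon raises the error rate in the sifted key to about 25%, revealing
# the intrusion. Conceptual sketch only, not the time-bin qudit protocol.

rng = np.random.default_rng(1)
N = 20000

alice_bits = rng.integers(0, 2, N)
alice_bases = rng.integers(0, 2, N)         # 0 = rectilinear, 1 = diagonal

# Eve measures every photon in a random basis and resends her result.
eve_bases = rng.integers(0, 2, N)
eve_bits = np.where(eve_bases == alice_bases,
                    alice_bits,
                    rng.integers(0, 2, N))  # wrong basis -> random outcome

# Bob measures what Eve resent, again in a random basis.
bob_bases = rng.integers(0, 2, N)
bob_bits = np.where(bob_bases == eve_bases,
                    eve_bits,
                    rng.integers(0, 2, N))

# Keep only events where Alice's and Bob's bases match (sifting),
# then check how often their bits disagree.
sifted = alice_bases == bob_bases
qber = np.mean(alice_bits[sifted] != bob_bits[sifted])
print(f"error rate with eavesdropper: {qber:.1%}")   # ~25%, far above normal
```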

The new experimental QKD system is capable of creating and distributing encryption codes at megabit-per-second rates — five to 10 times faster than existing methods and on a par with current internet speeds when running several systems in parallel. In an online open-access article in Science Advances, the researchers show that the technique is secure from common attacks, even in the face of equipment flaws that could open up leaks.

This research was supported by the Office of Naval Research, the Defense Advanced Research Projects Agency, and Oak Ridge National Laboratory.


Abstract of Physically Unclonable Cryptographic Primitives by Chemical Vapor Deposition of Layered MoS2

Physically unclonable cryptographic primitives are promising for securing the rapidly growing number of electronic devices. Here, we introduce physically unclonable primitives from layered molybdenum disulfide (MoS2) by leveraging the natural randomness of their island growth during chemical vapor deposition (CVD). We synthesize a MoS2 monolayer film covered with speckles of multilayer islands, where the growth process is engineered for an optimal speckle density. Using the Clark–Evans test, we confirm that the distribution of islands on the film exhibits complete spatial randomness, hence indicating the growth of multilayer speckles is a spatial Poisson process. Such a property is highly desirable for constructing unpredictable cryptographic primitives. The security primitive is an array of 2048 pixels fabricated from this film. The complex structure of the pixels makes the physical duplication of the array impossible (i.e., physically unclonable). A unique optical response is generated by applying an optical stimulus to the structure. The basis for this unique response is the dependence of the photoemission on the number of MoS2 layers, which by design is random throughout the film. Using a threshold value for the photoemission, we convert the optical response into binary cryptographic keys. We show that the proper selection of this threshold is crucial for maximizing combination randomness and that the optimal value of the threshold is linked directly to the growth process. This study reveals an opportunity for generating robust and versatile security primitives from layered transition metal dichalcogenides.


Abstract of Provably secure and high-rate quantum key distribution with time-bin qudits

The security of conventional cryptography systems is threatened in the forthcoming era of quantum computers. Quantum key distribution (QKD) features fundamentally proven security and offers a promising option for quantum-proof cryptography solution. Although prototype QKD systems over optical fiber have been demonstrated over the years, the key generation rates remain several orders of magnitude lower than current classical communication systems. In an effort toward a commercially viable QKD system with improved key generation rates, we developed a discrete-variable QKD system based on time-bin quantum photonic states that can generate provably secure cryptographic keys at megabit-per-second rates over metropolitan distances. We use high-dimensional quantum states that transmit more than one secret bit per received photon, alleviating detector saturation effects in the superconducting nanowire single-photon detectors used in our system that feature very high detection efficiency (of more than 70%) and low timing jitter (of less than 40 ps). Our system is constructed using commercial off-the-shelf components, and the adopted protocol can be readily extended to free-space quantum channels. The security analysis adopted to distill the keys ensures that the demonstrated protocol is robust against coherent attacks, finite-size effects, and a broad class of experimental imperfections identified in our system.

Space dust may transport life between worlds

Imagine what this amazingly resilient microscopic (0.2 to 0.7 millimeter) Milnesium tardigradum animal could evolve into on another planet. (credit: Wikipedia)

Life on our planet might have originated from biological particles brought to Earth in streams of space dust, according to a study published in the journal Astrobiology.

A huge amount of space dust (~10,000 kilograms — about the weight of two elephants) enters our atmosphere every day — possibly delivering organisms from far-off worlds, according to Professor Arjun Berera from the University of Edinburgh School of Physics and Astronomy, who led the study.

The dust streams could also collide with bacteria and other biological particles at 150 km or higher above Earth’s surface with enough energy to knock them into space, carrying Earth-based organisms to other planets and perhaps beyond.
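For a sense of the speeds involved (a back-of-the-envelope figure, not taken from the study), the velocity a particle needs to escape Earth’s gravity from an altitude of about 150 km is

```latex
v_{\mathrm{esc}} = \sqrt{\frac{2\,G\,M_{\oplus}}{R_{\oplus} + h}}
 = \sqrt{\frac{2 \times (6.67\times 10^{-11}) \times (5.97\times 10^{24})}
              {6.371\times 10^{6} + 1.5\times 10^{5}}}\ \mathrm{m/s}
 \approx 11\ \mathrm{km/s},
```

while incoming space dust routinely travels at tens of kilometers per second, so a grazing collision can in principle transfer enough speed for escape.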

The finding suggests that large asteroid impacts may not be the sole mechanism by which life could transfer between planets, as previously thought.

“The streaming of fast space dust is found throughout planetary systems and could be a common factor in proliferating life,” said Berera. Some bacteria, plants, and even microscopic animals called tardigrades* are known to be able to survive in space, so it is possible that such organisms — if present in Earth’s upper atmosphere — might collide with fast-moving space dust and withstand a journey to another planet.**

The study was partly funded by the U.K. Science and Technology Facilities Council.

* “Some tardigrades can withstand extremely cold temperatures down to 1 K (−458 °F; −272 °C) (close to absolute zero), while others can withstand extremely hot temperatures up to 420 K (300 °F; 150 °C)[12] for several minutes, pressures about six times greater than those found in the deepest ocean trenches, ionizing radiation at doses hundreds of times higher than the lethal dose for a human, and the vacuum of outer space. They can go without food or water for more than 30 years, drying out to the point where they are 3% or less water, only to rehydrate, forage, and reproduce.” — Wikipedia

** “Over the lifespan of the Earth of four billion years, particles emerging from Earth by this manner in principle could have traveled out as far as tens of kiloparsecs [one kiloparsec = 3,260 light years; our galaxy is about 100,000 light-years across]. This material horizon, as could be called the maximum distance on pure kinematic grounds that a material particle from Earth could travel outward based on natural processes, would cover most of our Galactic disk [the "Milky Way"], and interestingly would be far enough out to reach the Earth-like or potentially habitable planets that have been identified.” — Arjun Berera/Astrobiology


Abstract of Space Dust Collisions as a Planetary Escape Mechanism

It is observed that hypervelocity space dust, which is continuously bombarding Earth, creates immense momentum flows in the atmosphere. Some of this fast space dust inevitably will interact with the atmospheric system, transferring energy and moving particles around, with various possible consequences. This paper examines, with supporting estimates, the possibility that by way of collisions the Earth-grazing component of space dust can facilitate planetary escape of atmospheric particles, whether they are atoms and molecules that form the atmosphere or larger-sized particles. An interesting outcome of this collision scenario is that a variety of particles that contain telltale signs of Earth’s organic story, including microbial life and life-essential molecules, may be “afloat” in Earth’s atmosphere. The present study assesses the capability of this space dust collision mechanism to propel some of these biological constituents into space. Key Words: Hypervelocity space dust—Collision—Planetary escape—Atmospheric constituents—Microbial life.

Using microrobots to diagnose and treat illness in remote areas of the body

Spirulina algae coated with magnetic particles to form a microrobot. Devices such as these could be developed to diagnose and treat illness in hard-to-reach parts of the body. (credit: Yan et al./Science Robotics)

Imagine a swarm of remote-controlled microrobots, a few micrometers in length (blood-vessel-sized), unleashed into your body to swim through your intestinal tract or blood vessels, for example. Goal: to diagnose illness and treat it in hard-to-reach areas of the body.

An international team of researchers, led by the Chinese University of Hong Kong, is now experimenting with this idea (starting with rats) — using microscopic Spirulina algae coated with biocompatible magnetic nanoparticles to form the microswimmers.

Schematic of dip-coating S. platensis algae in a suspension of magnetite nanoparticles and growing microrobots. The time taken for the robots to function and biodegrade within the body could be tailored by adjusting the thickness of the coating. (credit: Xiaohui Yan et al./Science Robotics)

There are two methods being studied: (1) track the microswimmers in tissue close to the skin’s surface by imaging the algae’s natural luminescence; and (2) track them in hard-to-reach deeper tissue by coating with magnetite (Fe3O4) to make them detectable with magnetic resonance imaging (MRI). The devices could also sense chemical changes linked to the onset of illness.

In lab tests, during degradation, the microswimmers were able to release potent compounds from the algae core that selectively attacked cancer cells while leaving healthy cells unharmed. Further research could show whether this might have potential as a treatment, the researchers say.

The study, published in an open-access paper in Science Robotics, was carried out in collaboration with the Universities of Edinburgh and Manchester and was supported by the Research Grants Council of Hong Kong.


Abstract of Multifunctional biohybrid magnetite microrobots for imaging-guided therapy

Magnetic microrobots and nanorobots can be remotely controlled to propel in complex biological fluids with high precision by using magnetic fields. Their potential for controlled navigation in hard-to-reach cavities of the human body makes them promising miniaturized robotic tools to diagnose and treat diseases in a minimally invasive manner. However, critical issues, such as motion tracking, biocompatibility, biodegradation, and diagnostic/therapeutic effects, need to be resolved to allow preclinical in vivo development and clinical trials. We report biohybrid magnetic robots endowed with multifunctional capabilities by integrating desired structural and functional attributes from a biological matrix and an engineered coating. Helical microswimmers were fabricated from Spirulina microalgae via a facile dip-coating process in magnetite (Fe3O4) suspensions, superparamagnetic, and equipped with robust navigation capability in various biofluids. The innate properties of the microalgae allowed in vivo fluorescence imaging and remote diagnostic sensing without the need for any surface modification. Furthermore, in vivo magnetic resonance imaging tracked a swarm of microswimmers inside rodent stomachs, a deep organ where fluorescence-based imaging ceased to work because of its penetration limitation. Meanwhile, the microswimmers were able to degrade and exhibited selective cytotoxicity to cancer cell lines, subject to the thickness of the Fe3O4 coating, which could be tailored via the dip-coating process. The biohybrid microrobots reported herein represent a microrobotic platform that could be further developed for in vivo imaging–guided therapy and a proof of concept for the engineering of multifunctional microrobotic and nanorobotic devices.

Take a fantastic 3D voyage through the brain with immersive VR system


Wyss Center for Bio and Neuroengineering/Lüscher lab (UNIGE) | Brain circuits related to natural reward

What happens when you combine access to unprecedented huge amounts of anatomical data of brain structures with the ability to display billions of voxels (3D pixels) in real time, using high-speed graphics cards?

Answer: An awesome new immersive virtual reality (VR) experience for visualizing and interacting with up to 10 terabytes (trillions of bytes) of anatomical brain data.

Developed by researchers from the Wyss Center for Bio and Neuroengineering and the University of Geneva, the system is intended to allow neuroscientists to highlight, select, slice, and zoom in on the data, down to individual neurons at the micrometer (cellular) scale.

This 2D image of a mouse brain injected with a fluorescent retrograde virus in the brain stem, captured with a lightsheet microscope, represents the kind of rich, detailed visual data that can be explored with the new VR system. (credit: Courtine Lab/EPFL/Leonie Asboth, Elodie Rey)

The new VR system grew out of a problem with using the Wyss Center’s lightsheet microscope (one of only three in the world): how can you navigate and make sense of the immense volume of neuroanatomical data?

“The system provides a practical solution to experience, analyze and quickly understand these exquisite, high-resolution images,” said Stéphane Pages, PhD, Staff Scientist at the Wyss Center and Senior Research Associate at the University of Geneva, senior author of a dynamic poster presented November 15 at the annual meeting of the Society for Neuroscience 2017.

For example, using “mini-brains,” researchers will be able to see how new microelectrode probes behave in brain tissue, and how tissue reacts to them.
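The abstract below notes that the raw volumes are rendered directly with voxel-based algorithms such as ray marching, avoiding segmentation and meshing. A minimal single-ray sketch of that idea follows; the toy volume, transfer function, and step size are our own assumptions, and a real system runs this per pixel on the GPU rather than in Python.

```python
import numpy as np

# Minimal sketch of front-to-back volume ray marching through a voxel
# grid, the kind of direct rendering mentioned in the abstract below.
# The toy volume, transfer function, and step size are illustrative.

def ray_march(volume, origin, direction, step=0.5, n_steps=256):
    """March a single ray through a 3D intensity volume, accumulating
    color and opacity until the ray saturates or leaves the volume."""
    direction = np.asarray(direction, dtype=float)
    direction = direction / np.linalg.norm(direction)
    pos = np.asarray(origin, dtype=float)
    color, alpha = 0.0, 0.0
    for _ in range(n_steps):
        idx = np.round(pos).astype(int)
        if np.any(idx < 0) or np.any(idx >= volume.shape):
            break                              # ray left the volume
        sample = volume[tuple(idx)]            # nearest-neighbor lookup
        a = np.clip(sample * 0.05, 0.0, 1.0)   # toy transfer function
        color += (1.0 - alpha) * a * sample    # front-to-back compositing
        alpha += (1.0 - alpha) * a
        if alpha > 0.99:                       # early ray termination
            break
        pos = pos + step * direction
    return color

# Toy volume: a bright "cell body" in the middle of a dark cube.
vol = np.zeros((64, 64, 64))
vol[24:40, 24:40, 24:40] = 1.0
print(ray_march(vol, origin=(0, 32, 32), direction=(1, 0, 0)))
```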

Journey to the center of the cell: VR movies

A team of researchers in Australia has taken the next step: allowing scientists, students, and members of the public to explore these kinds of images — even interact with cells and manipulate models of molecules.

As described in a paper published in the journal Traffic, the researchers built a 3D virtual model of a cell, combining lightsheet microscope images (for super-resolution, real-time, single-molecule detection of fluorescent proteins in cells and tissues) with scanning electron microscope imaging data (for a more complete view of the cellular architecture).

To demonstrate this, they created VR movies (shown below) of the surface of a breast cancer cell. The movies can be played on a Samsung Gear VR or Google Cardboard device, or using the built-in YouTube 360 player in the Chrome, Firefox, MS Edge, or Opera browsers. The movies will also play on a conventional smartphone (but without 3D immersion).

UNSW 3D Visualisation Aesthetics Lab | The cell “paddock” view puts the user on the surface of the cell and demonstrates different mechanisms by which nanoparticles can be internalized into cells.

UNSW 3D Visualisation Aesthetics Lab | The cell “cathedral” view takes the user inside the cell and allows them to explore key cellular compartments, including the mitochondria (red), lysosomes (green), early endosomes (light blue), and the nucleus (purple).


Abstract of Analyzing volumetric anatomical data with immersive virtual reality tools

Recent advances in high-resolution 3D imaging techniques allow researchers to access unprecedented amounts of anatomical data of brain structures. In parallel, the computational power of commodity graphics cards has made rendering billions of voxels in real-time possible. Combining these technologies in an immersive virtual reality system creates a novel tool wherein observers can physically interact with the data. We present here the possibilities and demonstrate the value of this approach for reconstructing neuroanatomical data. We use a custom built digitally scanned light-sheet microscope (adapted from Tomer et al., Cell, 2015), to image rodent clarified whole brains and spinal cords in which various subpopulations of neurons are fluorescently labeled. Improvements of existing microscope designs allow us to achieve an in-plane submicronic resolution in tissue that is immersed in a variety of media (e.g. organic solvents, Histodenz). In addition, our setup allows fast switching between different objectives and thus changes image resolution within seconds. Here we show how the large amount of data generated by this approach can be rapidly reconstructed in a virtual reality environment for further analyses. Direct rendering of raw 3D volumetric data is achieved by voxel-based algorithms (e.g. ray marching), thus avoiding the classical step of data segmentation and meshing along with its inevitable artifacts. Visualization in a virtual reality headset together with interactive hand-held pointers allows the user to interact rapidly and flexibly with the data (highlighting, selecting, slicing, zooming etc.). This natural interface can be combined with semi-automatic data analysis tools to accelerate and simplify the identification of relevant anatomical structures that are otherwise difficult to recognize using screen-based visualization. Practical examples of this approach are presented from several research projects using the lightsheet microscope, as well as other imaging techniques (e.g., EM and 2-photon).


Abstract of Journey to the centre of the cell: Virtual reality immersion into scientific data

Visualization of scientific data is crucial not only for scientific discovery but also to communicate science and medicine to both experts and a general audience. Until recently, we have been limited to visualizing the three-dimensional (3D) world of biology in 2 dimensions. Renderings of 3D cells are still traditionally displayed using two-dimensional (2D) media, such as on a computer screen or paper. However, the advent of consumer grade virtual reality (VR) headsets such as Oculus Rift and HTC Vive means it is now possible to visualize and interact with scientific data in a 3D virtual world. In addition, new microscopic methods provide an unprecedented opportunity to obtain new 3D data sets. In this perspective article, we highlight how we have used cutting edge imaging techniques to build a 3D virtual model of a cell from serial block-face scanning electron microscope (SBEM) imaging data. This model allows scientists, students and members of the public to explore and interact with a “real” cell. Early testing of this immersive environment indicates a significant improvement in students’ understanding of cellular processes and points to a new future of learning and public engagement. In addition, we speculate that VR can become a new tool for researchers studying cellular architecture and processes by populating VR models with molecular data.

Disturbing video depicts near-future ubiquitous lethal autonomous weapons


Campaign to Stop Killer Robots | Slaughterbots

In response to growing concerns about autonomous weapons, the Campaign to Stop Killer Robots, a coalition of AI researchers and advocacy organizations, has released a fictional video that depicts a disturbing future in which lethal autonomous weapons have become cheap and ubiquitous worldwide.

UC Berkeley AI researcher Stuart Russell presented the video at the United Nations Convention on Certain Conventional Weapons in Geneva, hosted by the Campaign to Stop Killer Robots earlier this week. Russell, in an appearance at the end of the video, warns that the technology described in the film already exists* and that the window to act is closing fast.

Support for a ban against autonomous weapons has been mounting. On Nov. 2, more than 200 Canadian scientists and more than 100 Australian scientists in academia and industry penned open letters to Prime Ministers Justin Trudeau and Malcolm Turnbull, urging them to support the ban.

Earlier this summer, more than 130 leaders of AI companies signed a letter in support of this week’s discussions. These letters follow a 2015 open letter released by the Future of Life Institute and signed by more than 20,000 AI/robotics researchers and others, including Elon Musk and Stephen Hawking.

“Many of the world’s leading AI researchers worry that if these autonomous weapons are ever developed, they could dramatically lower the threshold for armed conflict, ease and cheapen the taking of human life, empower terrorists, and create global instability,” according to an article published by the Future of Life Institute, which funded the video. “The U.S. and other nations have used drones and semi-automated systems to carry out attacks for several years now, but fully removing a human from the loop is at odds with international humanitarian and human rights law.”

“The Campaign to Stop Killer Robots is not trying to stifle innovation in artificial intelligence and robotics and it does not wish to ban autonomous systems in the civilian or military world,” explained Noel Sharkey of the International Committee for Robot Arms Control. “Rather, we see an urgent need to prevent automation of the critical functions for selecting targets and applying violent force without human deliberation and to ensure meaningful human control for every attack.”


* As suggested in this U.S. Department of Defense video:


Perdix Drone Swarm – Fighters Release Hive-mind-controlled Weapon UAVs in Air | U.S. Naval Air Systems Command

How to open the blood-brain barrier with precision for safer drug delivery

Schematic representation of the feedback-controlled focused ultrasound drug delivery system. Serving as the acoustic indicator of drug-delivery dosage, the microbubble emission signal was sensed and compared with the expected value. The difference was used as feedback to the ultrasound transducer for controlling the level of the ultrasound transmission. The ultrasound transducer and sensor were located outside the rat skull. The microbubbles were generated in the bloodstream at the target location in the brain. (credit: Tao Sun/Brigham and Women’s Hospital; adapted by KurzweilAI)

Researchers at Brigham and Women’s Hospital have developed a safer way to use focused ultrasound to temporarily open the blood-brain barrier* to allow for delivering vital drugs for treating glioma brain tumors — an alternative to invasive incision or radiation.

Focused ultrasound drug delivery to the brain uses “cavitation” (the formation and oscillation of microbubbles under ultrasound) to temporarily open the blood-brain barrier. The problem with this method has been that if these bubbles destabilize and collapse, they could damage the critical vasculature in the brain.

To create a finer degree of control over the microbubbles and improve safety, the researchers placed a sensor outside of the rat brain to listen to ultrasound echoes bouncing off the microbubbles, as an indication of how stable the bubbles were.** That data was used to modify the ultrasound intensity, stabilizing the microbubbles to maintain safe ultrasound exposure.
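Conceptually, that is a closed-loop controller: sense the bubble emission, compare it with the target level, and adjust the transducer drive accordingly. The sketch below is a simplified proportional controller in assumed, normalized units; the controller reported in the paper (which sustains stable cavitation while suppressing inertial cavitation) is more sophisticated.

```python
# Illustrative sketch of a feedback loop in the spirit described above:
# listen to the microbubble emission signal, compare it with a target
# level, and nudge the transducer drive pressure accordingly. The gain,
# limits, and simulated bubble response are assumptions, not the paper's.

TARGET_EMISSION = 1.0     # desired (normalized) stable-cavitation signal
GAIN = 0.2                # proportional feedback gain
MAX_PRESSURE = 2.0        # safety cap on transducer output (normalized)

def control_step(measured_emission, pressure):
    """One feedback iteration: adjust drive pressure toward the target."""
    error = TARGET_EMISSION - measured_emission
    pressure = pressure + GAIN * error
    return min(max(pressure, 0.0), MAX_PRESSURE)

def simulated_emission(pressure):
    """Toy stand-in for the physics: emission grows with drive pressure."""
    return 0.8 * pressure

pressure = 0.5
for _ in range(20):
    emission = simulated_emission(pressure)   # sensed echo from the bubbles
    pressure = control_step(emission, pressure)

print(f"settled pressure: {pressure:.2f}, emission: {simulated_emission(pressure):.2f}")
```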

The team tested the approach in both healthy rats and in an animal model of glioma brain cancer. Further research will be needed to adapt the technique for humans, but the approach could offer improved safety and efficacy control for human clinical trials, which are now underway in Canada.

The research, published this week in the journal Proceedings of the National Academy of Sciences, was supported by the U.S. National Institutes of Health.

* The blood-brain barrier is an impassable obstacle for 98% of drugs, which it treats as pathogens, blocking them from passing from the patient’s bloodstream into the brain. Using focused ultrasound, drugs can be administered with the help of an intravenous injection of innocuous lipid-coated gas microbubbles.

** For the ultrasound transducer, the researchers combined two spherically curved transducers (operating at a resonant frequency of 274.3 kHz) to double the effective aperture size and provide significantly improved focusing in the axial direction.


Abstract of Closed-loop control of targeted ultrasound drug delivery across the blood–brain/tumor barriers in a rat glioma model

Cavitation-facilitated microbubble-mediated focused ultrasound therapy is a promising method of drug delivery across the blood–brain barrier (BBB) for treating many neurological disorders. Unlike ultrasound thermal therapies, during which magnetic resonance thermometry can serve as a reliable treatment control modality, real-time control of modulated BBB disruption with undetectable vascular damage remains a challenge. Here a closed-loop cavitation controlling paradigm that sustains stable cavitation while suppressing inertial cavitation behavior was designed and validated using a dual-transducer system operating at the clinically relevant ultrasound frequency of 274.3 kHz. Tests in the normal brain and in the F98 glioma model in vivo demonstrated that this controller enables reliable and damage-free delivery of a predetermined amount of the chemotherapeutic drug (liposomal doxorubicin) into the brain. The maximum concentration level of delivered doxorubicin exceeded levels previously shown (using uncontrolled sonication) to induce tumor regression and improve survival in rat glioma. These results confirmed the ability of the controller to modulate the drug delivery dosage within a therapeutically effective range, while improving safety control. It can be readily implemented clinically and potentially applied to other cavitation-enhanced ultrasound therapies.