Metalens with artificial muscle simulates (and goes way beyond) human-eye and camera optical functions

A silicon-based metalens just 30 micrometers thick is mounted on a transparent, stretchy polymer film. The colored iridescence is produced by the large number of nanostructures within the metalens. (credit: Harvard SEAS)

Researchers at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) have developed a breakthrough electronically controlled artificial eye. The thin, flat, adaptive silicon nanostructure (“metalens”) can simultaneously control focus, astigmatism, and image shift (three of the major contributors to blurry images) in real time, which the human eye (and eyeglasses) cannot do.

The 30-micrometer-thick metalens adjusts laterally to achieve optical zoom, autofocus, and image stabilization — making it possible to replace bulky lens systems in future eyeglasses, cameras, cell phones, and augmented- and virtual-reality devices.

The research is described in an open-access paper in Science Advances. In another paper recently published in Optics Express, the researchers demonstrated the design and fabrication of metalenses up to centimeters in diameter and beyond.* That makes it possible to unify two industries: semiconductor manufacturing and lens-making. So the same technology used to make computer chips will be used to make metasurface-based optical components, such as lenses.

The adaptive metalens (right) focuses light rays onto an image sensor (left), such as one in a camera. An electrical signal controls the shape of the metalens to produce the desired optical wavefront patterns (shown in red), resulting in improved images. In the future, adaptive metalenses will be built into imaging systems, such as cell phone cameras and microscopes, enabling flat, compact autofocus as well as the capability for simultaneously correcting optical aberrations and performing optical image stabilization, all in a single plane of control. (credit: Second Bay Studios/Harvard SEAS)

Simulating the human eye’s lens and ciliary muscles

In the human eye, the lens is surrounded by ciliary muscle, which stretches or compresses the lens, changing its shape to adjust its focal length. To achieve that function, the researchers adhered a metalens to a thin, transparent dielectric elastomer actuator (“artificial muscle”). They chose a dielectric elastomer with low loss — meaning light travels through the material with little scattering.

(Top) Schematic of metasurface and dielectric elastomer actuators (“artificial muscles”), showing how the new artificial muscles change focus, similar to how the ciliary muscle in the eye works. An applied voltage supplies transparent, stretchable electrode layers (gray), made up of single-wall carbon-nanotube nanopillars, with electrical charges (acting as a capacitor). The resulting electrostatic attraction compresses (red arrows) the dielectric elastomer actuators (artificial muscles) in the thickness direction and expands (black arrows) the elastomers in the lateral direction. The silicon metasurface (in the center), applied by photolithography, can simultaneously focus, control aberrations caused by astigmatism, and perform image shift. (Bottom) Photo of actual device. (credit: Alan She et al./Sci. Adv.)
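
The squeezing described in this caption is the standard electrostatic (Maxwell) stress on a charged dielectric film, p = ε0 εr (V/d)^2. As a rough illustration, with assumed (not published) values for voltage, permittivity, and film thickness:

```python
# Electrostatic (Maxwell) pressure on a dielectric elastomer actuator:
# p = eps0 * eps_r * (V / d)^2. The voltage, relative permittivity, and
# film thickness are illustrative assumptions, not values from the paper.

EPS0 = 8.854e-12   # vacuum permittivity, F/m

def maxwell_pressure(voltage_v, thickness_m, eps_r):
    """Compressive stress produced by the charged electrode layers (Pa)."""
    e_field = voltage_v / thickness_m     # electric field across the film, V/m
    return EPS0 * eps_r * e_field ** 2

p = maxwell_pressure(voltage_v=3e3, thickness_m=50e-6, eps_r=3.0)
print(f"Maxwell pressure: {p / 1e3:.0f} kPa")   # ~100 kPa for these assumptions
```

The quadratic dependence on V/d is why the researchers' stated goal of lowering the drive voltage hinges on thinner films or higher-permittivity elastomers.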

Next, the researchers aim to further improve the functionality of the lens and decrease the voltage required to control it.

The research was performed at the Harvard John A. Paulson School of Engineering and Applied Sciences, supported in part by the Air Force Office of Scientific Research and by the National Science Foundation. This work was performed in part at the Center for Nanoscale Systems (CNS), which is supported by the National Science Foundation. The Harvard Office of Technology Development is exploring commercialization opportunities.

* To build the artificial eye with a larger (more functional) metalens, the researchers had to develop a new algorithm to shrink the file size to make it compatible with the technology currently used to fabricate integrated circuits.
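
To see why file size explodes, note that a centimeter-scale metalens contains hundreds of millions of nanopillars, and writing each one as an explicit polygon makes a naive design file unmanageable. The sketch below is a cartoon of the general idea of referencing repeated geometry (for a rotationally symmetric lens, every pillar on a given ring can be identical); it is not the authors' actual algorithm:

```python
# Toy comparison: explicit per-pillar polygons vs. exploiting the lens's
# rotational symmetry (one entry per ring of identical pillars).
# All byte costs and dimensions are illustrative assumptions.

import math

lens_diameter = 1e-2      # 1 cm lens
lattice_pitch = 400e-9    # pillar spacing (assumed)

n_pillars = math.pi * (lens_diameter / 2) ** 2 / lattice_pitch ** 2
bytes_per_polygon = 100   # rough cost of one explicit polygon in a layout file
naive_size = n_pillars * bytes_per_polygon

n_rings = (lens_diameter / 2) / lattice_pitch   # concentric rings of pillars
bytes_per_ring = 30                             # radius + pillar shape + count
symmetric_size = n_rings * bytes_per_ring

print(f"{n_pillars:.1e} pillars")
print(f"explicit polygons: {naive_size / 1e9:.0f} GB")
print(f"ring-referenced:   {symmetric_size / 1e3:.0f} KB")
```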

** “All optical systems with multiple components — from cameras to microscopes and telescopes — have slight misalignments or mechanical stresses on their components, depending on the way they were built and their current environment, that will always cause small amounts of astigmatism and other aberrations, which could be corrected by an adaptive optical element,” said Alan She, a graduate student at SEAS and first author of the paper. “Because the adaptive metalens is flat, you can correct those aberrations and integrate different optical capabilities onto a single plane of control. Our results demonstrate the feasibility of embedded autofocus, optical zoom, image stabilization, and adaptive optics, which are expected to become essential for future chip-scale image sensors […]. Furthermore, the device’s flat construction and inherently lateral actuation without the need for motorized parts allow for highly stackable systems such as those found in stretchable electronic eye camera sensors, providing possibilities for new kinds of imaging systems.”


Abstract of Adaptive metalenses with simultaneous electrical control of focal length, astigmatism, and shift

Focal adjustment and zooming are universal features of cameras and advanced optical systems. Such tuning is usually performed longitudinally along the optical axis by mechanical or electrical control of focal length. However, the recent advent of ultrathin planar lenses based on metasurfaces (metalenses), which opens the door to future drastic miniaturization of mobile devices such as cell phones and wearable displays, mandates fundamentally different forms of tuning based on lateral motion rather than longitudinal motion. Theory shows that the strain field of a metalens substrate can be directly mapped into the outgoing optical wavefront to achieve large diffraction-limited focal length tuning and control of aberrations. We demonstrate electrically tunable large-area metalenses controlled by artificial muscles capable of simultaneously performing focal length tuning (>100%) as well as on-the-fly astigmatism and image shift corrections, which until now were only possible in electron optics. The device thickness is only 30 μm. Our results demonstrate the possibility of future optical microscopes that fully operate electronically, as well as compact optical systems that use the principles of adaptive optics to correct many orders of aberrations simultaneously.


Abstract of Large area metalenses: design, characterization, and mass manufacturing

Optical components, such as lenses, have traditionally been made in the bulk form by shaping glass or other transparent materials. Recent advances in metasurfaces provide a new basis for recasting optical components into thin, planar elements, having similar or better performance using arrays of subwavelength-spaced optical phase-shifters. The technology required to mass produce them dates back to the mid-1990s, when the feature sizes of semiconductor manufacturing became considerably denser than the wavelength of light, advancing in stride with Moore’s law. This provides the possibility of unifying two industries: semiconductor manufacturing and lens-making, whereby the same technology used to make computer chips is used to make optical components, such as lenses, based on metasurfaces. Using a scalable metasurface layout compression algorithm that exponentially reduces design file sizes (by 3 orders of magnitude for a centimeter diameter lens) and stepper photolithography, we show the design and fabrication of metasurface lenses (metalenses) with extremely large areas, up to centimeters in diameter and beyond. Using a single two-centimeter diameter near-infrared metalens less than a micron thick fabricated in this way, we experimentally implement the ideal thin lens equation, while demonstrating high-quality imaging and diffraction-limited focusing.

The Princess Leia project: ‘volumetric’ 3D images that float in ‘thin air’

Inspired by the iconic Star Wars scene with Princess Leia in distress, Brigham Young University engineers and physicists have created the “Princess Leia project” — a new technology for creating 3D “volumetric images” that float in the air and that you can walk all around and see from almost any angle.*

“Our group has a mission to take the 3D displays of science fiction and make them real,” said electrical and computer engineering professor and holography expert Daniel Smalley, lead author of a Jan. 25 Nature paper on the discovery.

The image of Princess Leia portrayed in the movie is actually not a hologram, he explains. A holographic display scatters light only on a 2D surface. So you have to be looking at a limited range of angles to see the image, which is also normally static. Instead, a moving volumetric display can be seen from any angle and you can even reach your hand into it. Examples include the 3D displays Tony Stark interacts with in Iron Man and the massive image-projecting table in Avatar.*

How to create a 3D volumetric image from a single moving particle

BYU student Erich Nygaard, depicted as a moving 3D image, mimics the Princess Leia projection in the iconic Star Wars scene (“Help me Obi Wan Kenobi, you’re my only hope”). (credit: Dan Smalley Lab)

The team’s free-space volumetric display technology, called “Optical Trap Display,” is based on photophoretic** optical trapping (controlled by a laser beam) of a rapidly moving particle (of a plant fiber called cellulose in this case). This technique takes advantage of human persistence of vision (at more than 10 images per second we don’t see a moving point of light, just the pattern it traces in space — the same phenomenon that makes movies and video work).

As the laser beam moves the trapped particle around, three more laser beams illuminate the particle with RGB (red-green-blue) light. The resulting fast-moving dot traces out a color image in three dimensions (you can see the vertical scan lines in one vertical slice in the Princess Leia image above) — producing a full-color, volumetric (3D) still image in air with 10-micrometer resolution, which allows for fine detail. The technology also features low apparent speckle (the annoying specks seen in holograms).***
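
Those numbers imply a concrete point budget: within one persistence-of-vision window, the scanned particle can only visit so many 10-micrometer spots. A quick sketch (the particle speed here is an assumption, not the paper's measured scan rate):

```python
# Rough point budget for a persistence-of-vision volumetric display:
# how many 10-micrometer image points can one scanned particle trace
# within a single ~0.1 s refresh window? Scan speed is an assumption.

refresh_window = 0.1   # s (persistence of vision needs >10 images/s)
scan_speed = 1.0       # m/s, assumed average particle speed
point_size = 10e-6     # m, reported image-point resolution

path_length = scan_speed * refresh_window
n_points = path_length / point_size
print(f"~{n_points:.0f} image points per refresh")   # ~10,000 for these values
```

This budget is why the researchers point to parallelism (many trapped particles at once) as the route to larger images.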

Applications in the real (and virtual) world

So far, Smalley and his student researchers have 3D light-printed a butterfly, a prism, the stretch-Y BYU logo, rings that wrap around an arm, and an individual in a lab coat crouched in a position similar to Princess Leia as she begins her projected message. The images in this proof-of-concept prototype are still in the range of millimeters. But in the Nature paper, the researchers say they anticipate that the device “can readily be scaled using parallelism and [they] consider this platform to be a viable method for creating 3D images that share the same space as the user, as physical objects would.”

What about augmented and virtual-reality uses? “While I think this technology is not really AR or VR but just ‘R,’  there are a lot of interesting ways volumetric images can enhance and augment the world around us,” Smalley told KurzweilAI in an email. “A very-near-term application could be the use of levitated particles as ‘streamers’ to show the expected flow of air over actual physical objects. That is, instead of looking at a computer screen to see fluid flow over a turbine blade, you could set a volumetric projector next to the actual turbine blade and see particles form ribbons to show expected fluid flow juxtaposed on the real object.

“In a scaled-up version of the display, a projector could place a superimposed image of a part on an engine showing a technician the exact location and orientation of that part. An even more refined version could create a magic portal in your home where you could see the size of shoes you just ordered and set your foot inside to (visually) check the fit. Other applications would include sparse telepresence, satellite tracking, command and control surveillance, surgical planning, tissue tagging, catheter guidance and other medical visualization applications.”

How soon? “I won’t make a prediction on exact timing but if we make as much progress in the next four years as we did in the last four years (a big ‘if’), then we would have a display of usable size by the end of that period. We have had a number of interested parties from a variety of fields. We are open to an exclusive agreement, given the right partner.”

* Smalley says he has long dreamed of building the kind of 3D holograms that pepper science-fiction films. But watching inventor Tony Stark thrust his hands through ghostly 3D body armor in the 2008 film Iron Man, Smalley realized that he could never achieve that using holography, the current standard for high-tech 3D display, because Stark’s hand would block the hologram’s light source. “That irritated me,” he says. He immediately tried to work out how to get around that.

** “Photophoresis denotes the phenomenon that small particles suspended in gas (aerosols) or liquids (hydrocolloids) start to migrate when illuminated by a sufficiently intense beam of light.” — Wikipedia

*** Previous researchers have created volumetric imagery, but the Smalley team says it’s the first to use optical trapping and color effectively. “Among volumetric systems, we are aware of only three such displays that have been successfully demonstrated in free space: induced plasma displays, modified air displays, and acoustic levitation displays. Plasma displays have yet to demonstrate RGB color or occlusion in free space. Modified air displays and acoustic levitation displays rely on mechanisms that are too coarse or too inertial to compete directly with holography at present.” — D.E. Smalley et al./Nature


Nature video | Pictures in the air: 3D printing with light


Abstract of A photophoretic-trap volumetric display

Free-space volumetric displays, or displays that create luminous image points in space, are the technology that most closely resembles the three-dimensional displays of popular fiction. Such displays are capable of producing images in ‘thin air’ that are visible from almost any direction and are not subject to clipping. Clipping restricts the utility of all three-dimensional displays that modulate light at a two-dimensional surface with an edge boundary; these include holographic displays, nanophotonic arrays, plasmonic displays, lenticular or lenslet displays and all technologies in which the light scattering surface and the image point are physically separate. Here we present a free-space volumetric display based on photophoretic optical trapping that produces full-colour graphics in free space with ten-micrometre image points using persistence of vision. This display works by first isolating a cellulose particle in a photophoretic trap created by spherical and astigmatic aberrations. The trap and particle are then scanned through a display volume while being illuminated with red, green and blue light. The result is a three-dimensional image in free space with a large colour gamut, fine detail and low apparent speckle. This platform, named the Optical Trap Display, is capable of producing image geometries that are currently unobtainable with holographic and light-field technologies, such as long-throw projections, tall sandtables and ‘wrap-around’ displays.

Space dust may transport life between worlds

Imagine what this amazingly resilient microscopic (0.2 to 0.7 millimeter) Milnesium tardigradum animal could evolve into on another planet. (credit: Wikipedia)

Life on our planet might have originated from biological particles brought to Earth in streams of space dust, according to a study published in the journal Astrobiology.

A huge amount of space dust (~10,000 kilograms — about the weight of two elephants) enters our atmosphere every day — possibly delivering organisms from far-off worlds, according to Professor Arjun Berera from the University of Edinburgh School of Physics and Astronomy, who led the study.

The dust streams could also collide with bacteria and other biological particles at 150 km or higher above Earth’s surface with enough energy to knock them into space, carrying Earth-based organisms to other planets and perhaps beyond.

The finding suggests that large asteroid impacts may not be the sole mechanism by which life could transfer between planets, as previously thought.

“The streaming of fast space dust is found throughout planetary systems and could be a common factor in proliferating life,” said Berera. Some bacteria, plants, and even microscopic animals called tardigrades* are known to be able to survive in space, so it is possible that such organisms — if present in Earth’s upper atmosphere — might collide with fast-moving space dust and withstand a journey to another planet.**
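
A back-of-envelope collision estimate shows why the kinematics are plausible. For a head-on elastic collision with a target initially at rest, the target leaves with speed 2·v·m_dust/(m_dust + m_target), so a dust grain much heavier than a bacterium can kick it to nearly twice the grain's speed, comfortably above Earth's ~11.2 km/s escape speed. A sketch with assumed masses (real collisions are not elastic, and the paper's estimates are more careful):

```python
# Elastic-collision kinematics: can fast space dust kick a microbe to
# escape speed? v_target = 2 * v * m_dust / (m_dust + m_target).
# Masses and dust speed below are illustrative assumptions.

m_dust = 1e-9      # kg, roughly a 0.1-mm grain (assumed)
m_microbe = 1e-15  # kg, roughly a 1-micrometer bacterium (assumed)
v_dust = 30e3      # m/s, within the fast space-dust range
v_escape = 11.2e3  # m/s, Earth escape speed

v_kick = 2 * v_dust * m_dust / (m_dust + m_microbe)
print(f"post-collision speed: {v_kick / 1e3:.0f} km/s "
      f"(escape needs {v_escape / 1e3:.1f} km/s)")
```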

The study was partly funded by the U.K. Science and Technology Facilities Council.

* “Some tardigrades can withstand extremely cold temperatures down to 1 K (−458 °F; −272 °C) (close to absolute zero), while others can withstand extremely hot temperatures up to 420 K (300 °F; 150 °C) for several minutes, pressures about six times greater than those found in the deepest ocean trenches, ionizing radiation at doses hundreds of times higher than the lethal dose for a human, and the vacuum of outer space. They can go without food or water for more than 30 years, drying out to the point where they are 3% or less water, only to rehydrate, forage, and reproduce.” — Wikipedia

** “Over the lifespan of the Earth of four billion years, particles emerging from Earth by this manner in principle could have traveled out as far as tens of kiloparsecs [one kiloparsec = 3,260 light years; our galaxy is about 100,000 light-years across]. This material horizon, as could be called the maximum distance on pure kinematic grounds that a material particle from Earth could travel outward based on natural processes, would cover most of our Galactic disk [the "Milky Way"], and interestingly would be far enough out to reach the Earth-like or potentially habitable planets that have been identified.” — Arjun Berera/Astrobiology


Abstract of Space Dust Collisions as a Planetary Escape Mechanism

It is observed that hypervelocity space dust, which is continuously bombarding Earth, creates immense momentum flows in the atmosphere. Some of this fast space dust inevitably will interact with the atmospheric system, transferring energy and moving particles around, with various possible consequences. This paper examines, with supporting estimates, the possibility that by way of collisions the Earth-grazing component of space dust can facilitate planetary escape of atmospheric particles, whether they are atoms and molecules that form the atmosphere or larger-sized particles. An interesting outcome of this collision scenario is that a variety of particles that contain telltale signs of Earth’s organic story, including microbial life and life-essential molecules, may be “afloat” in Earth’s atmosphere. The present study assesses the capability of this space dust collision mechanism to propel some of these biological constituents into space. Key Words: Hypervelocity space dust—Collision—Planetary escape—Atmospheric constituents—Microbial life. Astrobiology 17, xxx–xxx.

Scientists report first detection of gravitational waves produced by colliding neutron stars

Astronomers detect gravitational waves and a gamma-ray burst from two colliding neutron stars. (credit: National Science Foundation/LIGO/Sonoma State University/A. Simonnet)

Scientists reported today (Oct. 16, 2017) the first simultaneous detection of both gravitational waves and light — an astounding collision of two neutron stars.

The discovery was made nearly simultaneously by three gravitational-wave detectors, followed by observations by some 70 ground- and space-based light observatories.

Neutron stars are the smallest, densest stars known to exist and are formed when massive stars explode in supernovas.


MIT | Neutron Stars Collide

As these neutron stars spiraled together, they emitted gravitational waves that were detectable for about 100 seconds. When they collided, a flash of light in the form of gamma rays was emitted and seen on Earth about two seconds after the gravitational waves. In the days and weeks following the smashup, other forms of light, or electromagnetic radiation — including X-ray, ultraviolet, optical, infrared, and radio waves — were detected.

The stars were estimated at around 1.1 to 1.6 times the mass of the sun — within the mass range of neutron stars. A neutron star is about 20 kilometers, or 12 miles, in diameter and is so dense that a teaspoon of neutron star material has a mass of about a billion tons.
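
The teaspoon figure is easy to sanity-check from the quoted size and a typical neutron-star mass (assumed here to be 1.4 solar masses):

```python
# Sanity check of the "teaspoon of neutron star material" figure.
# Assumes a typical 1.4-solar-mass star with the ~20 km diameter quoted above.
import math

M_SUN = 1.989e30            # kg
m_star = 1.4 * M_SUN        # kg (assumed typical mass)
r_star = 10e3               # m (20 km diameter)

density = m_star / (4 / 3 * math.pi * r_star ** 3)   # kg/m^3
teaspoon_volume = 5e-6                               # m^3 (about 5 mL)
teaspoon_tons = density * teaspoon_volume / 1000     # metric tons

print(f"density: {density:.1e} kg/m^3")
print(f"teaspoon: {teaspoon_tons:.1e} tons")   # a few billion tons
```

The result is a few billion metric tons, the same ballpark as the article's figure.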

The initial gamma-ray measurements, combined with the gravitational-wave detection, provide confirmation for Einstein’s general theory of relativity, which predicts that gravitational waves should travel at the speed of light. The observations also reveal signatures of recently synthesized material, including gold and platinum, solving a decades-long mystery of where about half of all elements heavier than iron are produced.
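
The speed-of-light confirmation can be made quantitative with just the numbers in this article: a roughly two-second arrival gap after a journey of about 130 million years bounds any fractional speed difference at around one part in 10^15, assuming (conservatively) that the whole gap is due to propagation rather than emission timing:

```python
# Bound on |v_gw - c| / c from GW170817, assuming the entire ~2 s
# gamma-ray delay reflects propagation speed rather than emission timing.

travel_time_years = 130e6          # light-travel time quoted in the article
seconds_per_year = 3.156e7
delay_s = 2.0                      # gamma rays arrived ~2 s after the GWs

travel_time_s = travel_time_years * seconds_per_year
bound = delay_s / travel_time_s
print(f"|v_gw - c| / c  <~  {bound:.0e}")   # ~5e-16
```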


Georgia Tech | The Collision of Two Neutron Stars (audible frequencies start at ~25 seconds)

“This detection has genuinely opened the doors to a new way of doing astrophysics,” said Laura Cadonati, professor of physics at Georgia Tech and deputy spokesperson for the LIGO Scientific Collaboration. “I expect it will be remembered as one of the most studied astrophysical events in history.”

In the weeks and months ahead, telescopes around the world will continue to observe the afterglow of the neutron star merger and gather further evidence about various stages of the merger, its interaction with its surroundings, and the processes that produce the heaviest elements in the universe.

The research was published today in Physical Review Letters and in an open-access paper in The Astrophysical Journal Letters.

Timeline

KurzweilAI has assembled this timeline of the observations from various reports:

  • About 130 million years ago: Two neutron stars are in their final moments of orbiting each other, separated only by about 300 kilometers (200 miles) and gathering speed while closing the distance between them. As the stars spiral faster and closer together, they stretch and distort the surrounding space-time, giving off energy in the form of powerful gravitational waves, before smashing into each other. At the moment of collision, the bulk of the two neutron stars merge into one ultradense object, emitting a “fireball” of gamma rays.
  • Aug. 17, 2017, 12:41:04 UTC: The Virgo detector in Pisa, Italy picks up a new strong “chirp” gravitational wave signal, designated GW170817. The LIGO detector in Livingston, Louisiana detects the signal just 22 milliseconds later, then the twin LIGO detector in Hanford, Washington, 3 milliseconds after that. Based on the signal duration (about 100 seconds) and the signal frequencies, scientists at the three facilities conclude it’s likely from neutron stars — not from more massive black holes (as in the previous three gravitational-wave detections). And based on the signal strengths and timing between the three detectors, scientists are able to precisely triangulate the position in the sky, making this the most precise gravitational-wave localization so far (see the timing sketch after this timeline).
  • 1.7 seconds later: NASA’s Fermi Gamma-ray Space Telescope and the European INTEGRAL satellite detect a gamma-ray burst (GRB) lasting nearly 2 seconds from the same general direction of the sky. Both the Fermi and LIGO teams quickly alert astronomers around the world to search for an afterglow.
  • Hours later: Armed with these precise coordinates, a handful of observatories around the world start searching the region of the sky where the signal was thought to originate. Optical telescopes are the first to find a new point of light, resembling a new star. Known as a “kilonova,” it is the glowing material left over from the neutron star collision, blown out of the immediate region and far out into space.
  • Days and weeks following: About 70 observatories on the ground and in space observe the event at various longer wavelengths (starting at gamma and then X-ray, ultraviolet, optical, infrared, and ending up at radio wave frequencies).
  • In the weeks and months ahead: Telescopes around the world will continue to observe the radio-wave afterglow of the neutron star merger and gather further evidence about various stages of the merger, its interaction with its surroundings, and the processes that produce the heaviest elements in the universe.
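
To make the triangulation step in the timeline concrete, here is a toy sketch of the timing idea: a plane wave traveling along unit vector k reaches a detector at position r at time t0 + k·r/c, so the pairwise arrival-time differences constrain the sky direction. The detector positions, noise level, and grid search below are illustrative stand-ins, not the real LIGO/Virgo geometry or analysis:

```python
# Toy time-difference-of-arrival (TDOA) localization with three detectors.
import numpy as np

C = 3.0e8                                  # m/s
rng = np.random.default_rng(0)

# Illustrative detector positions (m), roughly continental separations.
detectors = np.array([
    [0.0, 0.0, 0.0],                       # detector A
    [3.0e6, 1.0e6, 0.0],                   # detector B
    [7.0e6, -2.0e6, 1.0e6],                # detector C
])

def arrival_delays(k_hat):
    """Arrival times at B and C relative to A for propagation direction k_hat."""
    t = detectors @ k_hat / C
    return t[1:] - t[0]

# Simulate a source direction and the (noisy) delays it would produce.
true_dir = np.array([0.4, -0.5, 0.77])
true_dir /= np.linalg.norm(true_dir)
measured = arrival_delays(true_dir) + rng.normal(0.0, 1e-4, 2)  # ~0.1 ms noise

# Grid-search the sky for the direction that best matches the delays.
# Note: three detectors generically leave a mirror-image degeneracy across
# the detector plane; for GW170817 the electromagnetic counterpart broke it.
best_k, best_err = None, np.inf
for theta in np.linspace(0.0, np.pi, 181):
    for phi in np.linspace(0.0, 2 * np.pi, 361):
        k = np.array([np.sin(theta) * np.cos(phi),
                      np.sin(theta) * np.sin(phi),
                      np.cos(theta)])
        err = np.sum((arrival_delays(k) - measured) ** 2)
        if err < best_err:
            best_k, best_err = k, err

print("true:", np.round(true_dir, 2), " recovered:", np.round(best_k, 2))
```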

“Multimessenger” astronomy

Caltech’s David H. Reitze, executive director of the LIGO Laboratory, puts the observations in context: “This detection opens the window of a long-awaited ‘multimessenger’ astronomy. It’s the first time that we’ve observed a cataclysmic astrophysical event in both gravitational waves and electromagnetic waves — our cosmic messengers. Gravitational-wave astronomy offers new opportunities to understand the properties of neutron stars in ways that just can’t be achieved with electromagnetic astronomy alone.”


Caltech | Variety of Gravitational Waves and a Chirp (audible sound for GW170817 starts ~30 seconds)

Astronomers detect 15 high-frequency ‘fast radio bursts’ from distant galaxy

Green Bank Telescope in West Virginia (credit: Geremia/CC)

Using the Green Bank radio telescope, astronomers at Breakthrough Listen, a $100 million initiative to find signs of intelligent life in the universe, have detected 15 brief but powerful “fast radio bursts” (FRBs). These microwave radio pulses are from a mysterious source known as FRB 121102* in a dwarf galaxy about 3 billion light years from Earth, transmitting at record high frequencies (4 to 8 GHz), according to the researchers.

This sequence of 14 of the 15 detected fast radio bursts illustrates their dispersed spectrum and extreme variability. The streaks across the colored energy plot are the bursts appearing at different times and different energies because of dispersion caused by 3 billion years of travel through intergalactic space. In the top frequency spectrum, the dispersion has been removed to show the 300 microsecond pulse spike. (credit: Berkeley SETI Research Center)

Andrew Siemion, director of the Berkeley SETI Research Center and of the Breakthrough Listen program, and his team alerted the astronomical community to the high-frequency activity via an Astronomer’s Telegram on Monday evening, Aug. 28.

A schematic illustration of CSIRO’s Parkes radio telescope in Australia receiving a fast radio burst signal in 2014 (credit: Swinburne Astronomy Productions)

First detected in 2007, fast radio bursts are brief, bright pulses of radio emission detected from distant but largely unknown sources.

Breakthrough Starshot’s plan to use powerful laser pulses to propel nano-spacecraft to Proxima Centauri (credit: Breakthrough Initiatives)

Possible explanations for the repeating bursts range from outbursts from magnetars (rotating neutron stars with extremely strong magnetic fields) to directed energy sources — powerful bursts used by extraterrestrial civilizations to power exploratory spacecraft, akin to Breakthrough Starshot’s plan to use powerful laser pulses to propel nano-spacecraft to Earth’s nearest star, Proxima Centauri.

* FRB 121102 was discovered Nov. 2, 2012 (hence its name) with the Arecibo radio telescope, and in 2015 it was the first fast radio burst seen to repeat. More than 150 high-energy bursts have been observed so far. (The repetition ruled out the possibility that FRBs were caused by catastrophic events.)


FRB 121102: Detection at 4 – 8 GHz band with Breakthrough Listen backend at Green Bank

On Saturday, August 26 at 13:51:44 UTC we initiated observations of the well-known repeating fast radio burst FRB 121102 [Spitler et al., Nature, 531, 7593 202-205, 2016] using the Breakthrough Listen Digital Backend with the C-band receiver at the Green Bank Telescope. We recorded baseband voltage data across 5.4375 GHz of bandwidth, completely covering the C-band receiver’s nominal 4-8 GHz band [MacMahon et al. arXiv:1707.06024v2]. Observations were conducted over ten 30-minute scans, as detailed in Table 1. Immediately after observations, the baseband data were reduced to form high time resolution (300 us integration) Stokes-I products using a GPU-accelerated spectroscopy suite. These reduced products were searched for dispersed pulses consistent with the known dispersion measure of FRB 121102 (557 pc cm^-3); baseband voltage data were preserved. We detected 15 bursts above our detection threshold of 10 sigma in the first two 30-minute scans, denoted 11A-L and 12A-B in Table 2. In Table 2, we include the detection signal-to-noise ratio (SNR) of each burst, along with a very rough estimate of pulse energy density assuming a 12 Jy system equivalent flux density, 300 us pulse width, and uniform 3800 MHz bandwidth. We note the following phenomenological properties of the detected bursts: 1. Bursts show marked changes in spectral extent, with characteristic spectral structure in the 100 MHz – 1 GHz range. 2. Several bursts appear to peak in brightness at frequencies above 6 GHz.
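
The dispersion measure quoted in the telegram (DM = 557 pc cm^-3) fixes how far the burst arrival time is smeared across the observing band, via the standard cold-plasma delay Δt ≈ 4.15 ms × DM × [(f_lo/GHz)^-2 − (f_hi/GHz)^-2]. A quick check for the 4 – 8 GHz band:

```python
# Dispersion smearing of FRB 121102 across the Green Bank C band,
# using the standard cold-plasma dispersion-delay formula.

DM = 557.0              # pc cm^-3, from the telegram
f_lo, f_hi = 4.0, 8.0   # GHz, band edges

delay_ms = 4.149 * DM * (f_lo ** -2 - f_hi ** -2)   # 4.149 ms GHz^2 cm^3 / pc
print(f"arrival spread across 4-8 GHz: {delay_ms:.0f} ms")   # ~108 ms
```

Removing exactly this frequency-dependent delay is the de-dispersion step that turns the streaks in the figure above into the sharp 300-microsecond pulse.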


‘Wearable’ PET brain scanner enables studies of moving patients

Julie Brefczynski-Lewis, a neuroscientist at West Virginia University, places a helmet-like PET scanner on a research subject. The mobile scanner enables studies of human interaction, movement disorders, and more. (credit: West Virginia University)

Two scientists have developed a miniaturized positron emission tomography (PET) brain scanner that can be “worn” like a helmet.

The new Ambulatory Microdose Positron Emission Tomography (AMPET) scanner allows research subjects to stand and move around as the device scans, instead of having to lie completely still or be administered anesthesia — restrictions that make it impossible to find associations between movement and brain activity.

Conventional positron emission tomography (PET) scanners immobilize patients (credit: Jens Maus/CC)

The AMPET scanner was developed by Julie Brefczynski-Lewis, a neuroscientist at West Virginia University (WVU), and Stan Majewski, a physicist at WVU and now at the University of Virginia. It could make possible new psychological and clinical studies on how the brain functions when affected by diseases from epilepsy to addiction, and during ordinary and dysfunctional social interactions.

Helmet support prototype with weighted helmet, allowing for freedom of movement. The counterbalance currently supports up to 10 kg but can be upgraded. Digitizing electronics will be mounted to the support above the patient. (credit: Samantha Melroy et al./Sensors)

Because AMPET sits so close to the brain, it can also “catch” more of the photons stemming from the radiotracers used in PET than larger scanners can. That means researchers can administer a lower dose of radioactive material and still get a good biological snapshot. Catching more signals also allows AMPET to create higher resolution images than regular PET.

The AMPET idea was sparked by the Rat Conscious Animal PET (RatCAP) scanner for studying rats at the U.S. Department of Energy’s (DOE) Brookhaven National Laboratory. The scanner is a 250-gram ring that fits around the head of a rat, suspended by springs to support its weight and let the rat scurry about as the device scans. (credit: Brookhaven Lab)

The researchers plan to build a laboratory-ready version next.

Seeing more deeply into the brain

A patient or animal about to undergo a PET scan is injected with a low dose of a radiotracer — a radioactive form of a molecule that is regularly used in the body. These molecules emit anti-matter particles called positrons, which travel only a tiny distance through the body. As soon as one of these positrons meets an electron in biological tissue, the pair annihilates and converts their mass to energy. This energy takes the form of two high-energy light rays, called gamma photons, that shoot off in opposite directions. PET machines detect these photons and track their paths backward to their point of origin — the tracer molecule. By measuring levels of the tracer, for instance, doctors can map areas of high metabolic activity. Mapping of different tracers provides insight into different aspects of a patient’s health. (credit: Brookhaven Lab)
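
A toy two-dimensional version of that caption's reconstruction idea: each detected photon pair defines a line of response through its annihilation point, and simply back-projecting many such lines onto a grid makes the tracer concentration reappear as the brightest region. This is a cartoon of the principle, not a clinical reconstruction algorithm:

```python
# Toy 2D PET back-projection: photon pairs leave a "hot spot" along random
# back-to-back directions; accumulating their lines of response on a grid
# recovers the source location.
import numpy as np

rng = np.random.default_rng(1)
N = 64                            # image grid size
image = np.zeros((N, N))
source = np.array([40.0, 24.0])   # assumed tracer hot spot

for _ in range(2000):
    angle = rng.uniform(0.0, np.pi)               # orientation of the photon pair
    d = np.array([np.cos(angle), np.sin(angle)])
    # Back-project: add counts along the full line of response.
    for s in np.linspace(-N, N, 4 * N):
        x, y = np.round(source + s * d).astype(int)
        if 0 <= x < N and 0 <= y < N:
            image[x, y] += 1

peak = np.unravel_index(np.argmax(image), image.shape)
print("brightest back-projected pixel:", peak)    # near the true (40, 24)
```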

PET scans allow researchers to see farther into the body than other imaging tools. This lets AMPET reach deep neural structures while the research subjects are upright and moving. “A lot of the important things that are going on with emotion, memory, and behavior are way deep in the center of the brain: the basal ganglia, hippocampus, amygdala,” Brefczynski-Lewis notes.

“Currently we are doing tests to validate the use of virtual reality environments in future experiments,” she said. In this virtual reality, volunteers would read from a script designed to make the subject angry, for example, as his or her brain is scanned.

In the medical sphere, the scanning helmet could help explain what happens during drug treatments. Or it could shed light on conditions such as epilepsy, watching what happens in the brain during a seizure, or on the sub-population of Parkinson’s patients who have great difficulty walking but can ride a bicycle.

The RatCAP project at Brookhaven was funded by the DOE Office of Science. Brookhaven Lab physicists use technology similar to PET scanners at the Relativistic Heavy Ion Collider (RHIC), a DOE Office of Science User Facility for nuclear physics research, where they must track the particles that fly out of near-light speed collisions of charged nuclei. PET research at the Lab dates back to the early 1960s and includes the creation of the first single-plane scanner as well as various tracer molecules.


Abstract of Development and Design of Next-Generation Head-Mounted Ambulatory Microdose Positron-Emission Tomography (AM-PET) System

Several applications exist for a whole brain positron-emission tomography (PET) brain imager designed as a portable unit that can be worn on a patient’s head. Enabled by improvements in detector technology, a lightweight, high performance device would allow PET brain imaging in different environments and during behavioral tasks. Such a wearable system that allows the subjects to move their heads and walk—the Ambulatory Microdose PET (AM-PET)—is currently under development. This imager will be helpful for testing subjects performing selected activities such as gestures, virtual reality activities and walking. The need for this type of lightweight mobile device has led to the construction of a proof of concept portable head-worn unit that uses twelve silicon photomultiplier (SiPM) PET module sensors built into a small ring which fits around the head. This paper is focused on the engineering design of mechanical support aspects of the AM-PET project, both of the current device as well as of the coming next-generation devices. The goal of this work is to optimize design of the scanner and its mechanics to improve comfort for the subject by reducing the effect of weight, and to enable diversification of its applications amongst different research activities.

‘Negative mass’ created at Washington State University

Experimental images of an expanding spin-orbit superfluid Bose-Einstein condensate at different expansion times (credit: M. A. Khamehchi et al./Physical Review Letters)

Washington State University (WSU) physicists have created a fluid with “negative mass,” which means that if you push it, it accelerates toward you instead of away, in apparent violation of Newton’s laws.

The phenomenon can be used to explore some of the more challenging concepts of the cosmos, said Michael Forbes, PhD, a WSU assistant professor of physics and astronomy and an affiliate assistant professor at the University of Washington. The research appeared Monday (April 17, 2017) in the journal Physical Review Letters.

How to create negative mass

The researchers created the conditions for negative mass by cooling about 10,000 rubidium atoms to just above absolute zero, creating a Bose-Einstein condensate (in which individual atoms move as one object). In this state, particles move extremely slowly and, following the principles of quantum mechanics, behave like waves. They also synchronize and move in unison as a “superfluid” that flows without losing energy.

Lasers trapped the atoms as if they were in a bowl measuring less than a hundred micrometers across. At this point, the rubidium superfluid has regular mass. Breaking the “bowl” allows the rubidium to rush out, expanding as the rubidium in the center pushes outward.

To create negative mass, the researchers applied a second set of lasers that kicked the atoms back and forth and changed the way they spin. Now when the rubidium rushes out fast enough, it behaves as if it has negative mass.
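
What “behaves as if it has negative mass” means here is that the effective mass, m_eff = ħ² / (d²E/dk²), flips sign where the engineered dispersion E(k) curves downward. A schematic sketch with a model lower band of a Raman-coupled (spin-orbit) dispersion, in dimensionless units that are assumptions rather than the experiment's calibrated parameters:

```python
# Effective mass from a model spin-orbit-coupled band:
# m_eff = hbar^2 / (d^2 E / d k^2), negative where the band curves down.
import numpy as np

k = np.linspace(-3.0, 3.0, 2001)
omega = 1.0   # Raman coupling strength (dimensionless, assumed)

# Schematic lower band of a Raman-coupled dispersion.
E = k ** 2 / 2 - np.sqrt(k ** 2 + (omega / 2) ** 2)

curvature = np.gradient(np.gradient(E, k), k)   # d^2 E / d k^2
negative = k[curvature < 0]
print(f"negative effective mass for k in "
      f"[{negative.min():.2f}, {negative.max():.2f}]")   # around k = 0
```

When the expanding condensate is pushed into that region of the band, its acceleration opposes the applied force, which is the behavior described above.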

The technique used by the WSU researchers avoids some of the underlying defects encountered in previous attempts to create negative mass. It could hold clues to the behavior occurring in the heart of ultracold neutron stars, which also act as superfluids, and cosmological phenomena like black holes and dark energy, said Forbes.

The work was supported in part by a WSU New Faculty Seed Grant and the National Science Foundation.


Abstract of Negative-Mass Hydrodynamics in a Spin-Orbit–Coupled Bose-Einstein Condensate

A negative effective mass can be realized in quantum systems by engineering the dispersion relation. A powerful method is provided by spin-orbit coupling, which is currently at the center of intense research efforts. Here we measure an expanding spin-orbit coupled Bose-Einstein condensate whose dispersion features a region of negative effective mass. We observe a range of dynamical phenomena, including the breaking of parity and of Galilean covariance, dynamical instabilities, and self-trapping. The experimental findings are reproduced by a single-band Gross-Pitaevskii simulation, demonstrating that the emerging features—shock waves, soliton trains, self-trapping, etc.—originate from a modified dispersion. Our work also sheds new light on related phenomena in optical lattices, where the underlying periodic structure often complicates their interpretation.

Astronomers detect atmosphere around Earth-like planet

Artist’s impression of atmosphere around super-Earth planet GJ 1132b (credit: MPIA)

Astronomers have detected an atmosphere around an Earth-like planet beyond our solar system for the first time: the super-Earth planet GJ 1132b in the Southern constellation Vela, at a distance of 39 light-years from Earth.

The team, led by Keele University’s John Southworth, PhD, used the 2.2 m ESO/MPG telescope in Chile to take images of the planet’s host star, GJ 1132. The astronomers made the detection by measuring the slight decrease in the star’s brightness as the planet transited (passed in front of) it, finding that the planet’s atmosphere absorbed some of the starlight. Previous detections of exoplanet atmospheres all involved gas giants reminiscent of a high-temperature Jupiter.
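
The “slight decrease in brightness” is the transit depth, roughly (R_planet/R_star)². A quick estimate with approximate literature values for GJ 1132b and its red-dwarf host (the exact radii here are assumptions for illustration):

```python
# Transit depth estimate for GJ 1132b: depth ~ (R_planet / R_star)^2.
# Radii below are approximate literature values, used for illustration.

R_EARTH = 6.371e6   # m
R_SUN = 6.957e8     # m

r_planet = 1.4 * R_EARTH   # ~1.4 Earth radii (assumed)
r_star = 0.21 * R_SUN      # ~0.21 solar radii (assumed)

depth = (r_planet / r_star) ** 2
print(f"transit depth: {depth * 100:.2f}% of the star's light")   # ~0.4%
```

A deeper transit at wavelengths where the atmosphere absorbs is exactly the signature the team measured.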

Possible “water world”

“With this research, we have taken the first tentative step into studying the atmospheres of smaller, Earth-like, planets,” said Southworth. “We simulated a range of possible atmospheres for this planet, finding that those rich in water and/or methane would explain the observations of GJ 1132b. The planet is significantly hotter and a bit larger than Earth, so one possibility is that it is a ‘water world’ with an atmosphere of hot steam.”

Very low-mass stars are extremely common (much more so than Sun-like stars), and are known to host lots of small planets. But they also show a lot of magnetic activity, causing high levels of X-rays and ultraviolet light to be produced, which might completely evaporate the planets’ atmospheres. The properties of GJ 1132b show that an atmosphere can endure for a billion years without being destroyed, the astronomers say.

Given the huge number of very low-mass stars and planets, this could mean the conditions suitable for life are common in the Universe, the astronomers suggest.

The discovery, reported March 31 in the Astronomical Journal, makes GJ 1132b one of the highest-priority targets for further study by current top facilities, such as the Hubble Space Telescope and ESO’s Very Large Telescope, as well as the James Webb Space Telescope, slated for launch in 2018.

The team also included Luigi Mancini of the Max Planck Institute for Astronomy (MPIA), as well as astronomers at the University of Rome, the University of Cambridge, and Stockholm University.

Neural networks promise sharpest-ever telescope images

From left to right: an example of an original galaxy image; the same image deliberately degraded; the image after recovery by the neural network; and, for comparison, the result of conventional deconvolution. This figure visually illustrates the neural network’s ability to recover features that conventional deconvolution cannot. (credit: K. Schawinski / C. Zhang / ETH Zurich)

Swiss researchers are using neural networks to achieve the sharpest-ever images in optical astronomy. The work appears in an open-access paper in Monthly Notices of the Royal Astronomical Society.

The resolution of any telescope is fundamentally limited by the aperture (diameter) of its lens or mirror. The bigger the mirror or lens, the more light it gathers, allowing astronomers to detect fainter objects and to observe them more clearly. Other factors affecting image quality are noise and atmospheric distortion.
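
The aperture limit is the diffraction limit: the smallest resolvable angle is roughly θ ≈ 1.22 λ/D. A quick worked example for a Hubble-class 2.4 m mirror at visible wavelengths:

```python
# Diffraction-limited angular resolution: theta ~ 1.22 * wavelength / aperture.
import math

wavelength = 550e-9   # m, mid-visible light
aperture = 2.4        # m, e.g., a Hubble-sized mirror

theta_rad = 1.22 * wavelength / aperture
theta_arcsec = math.degrees(theta_rad) * 3600
print(f"diffraction limit: {theta_arcsec:.3f} arcseconds")   # ~0.06"
```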

The Swiss study uses “generative adversarial network” (GAN) machine-learning technology (see this KurzweilAI article) to go beyond this limit by using two neural networks that compete with each other to create a series of more realistic images. The researchers first train the neural network to “see” what galaxies look like (using blurred and sharp images of the same galaxy), and then ask it to automatically fix the blurred images of a galaxy, converting them to sharp ones.

Schematic illustration of the neural-network training process. The input is a set of original images. From these, the researchers automatically generate degraded images, and train a GAN. In the testing phase, only the generator will be used to recover images. (credit: K. Schawinski / C. Zhang / ETH Zurich)
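
A minimal sketch of that training loop in PyTorch, with tiny placeholder networks and random stand-in data (this illustrates the GAN idea, not the authors' architecture):

```python
# Minimal conditional-GAN sketch: a generator maps degraded galaxy images to
# recovered ones; a discriminator learns to tell recoveries from real sharp
# images. Architectures and data here are illustrative placeholders.
import torch
import torch.nn as nn

generator = nn.Sequential(               # degraded image -> recovered image
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
discriminator = nn.Sequential(           # image -> real/fake logit
    nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Flatten(), nn.Linear(64 * 16 * 16, 1),
)

bce = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(100):
    # Placeholder batch; in the paper, sharp galaxy images and artificially
    # degraded versions of the same galaxies.
    sharp = torch.rand(8, 1, 64, 64)
    degraded = torch.rand(8, 1, 64, 64)

    # Discriminator step: real sharp images -> 1, generator outputs -> 0.
    fake = generator(degraded).detach()
    d_loss = (bce(discriminator(sharp), torch.ones(8, 1)) +
              bce(discriminator(fake), torch.zeros(8, 1)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: fool the discriminator while staying close to the truth.
    recovered = generator(degraded)
    g_loss = (bce(discriminator(recovered), torch.ones(8, 1)) +
              nn.functional.l1_loss(recovered, sharp))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

In testing, only the trained generator is kept, matching the caption above.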

The trained neural networks were able to recognize and reconstruct features that the telescope could not resolve, such as star-forming regions and dust lanes in galaxies. The scientists checked the reconstructed images against the original high-resolution images to test the network’s performance, finding it better able to recover features than any method used to date.

“We can start by going back to sky surveys made with telescopes over many years, see more detail than ever before, and, for example, learn more about the structure of galaxies,” said lead author Prof. Kevin Schawinski of ETH Zurich in Switzerland. “There is no reason why we can’t then apply this technique to the deepest images from Hubble, and the coming James Webb Space Telescope, to learn more about the earliest structures in the Universe.”

ETH Zurich is hosting this work on the space.ml cross-disciplinary astrophysics/computer-science initiative, where the code is available to the general public.


Abstract of Generative adversarial networks recover features in astrophysical images of galaxies beyond the deconvolution limit

Observations of astrophysical objects such as galaxies are limited by various sources of random and systematic noise from the sky background, the optical system of the telescope and the detector used to record the data. Conventional deconvolution techniques are limited in their ability to recover features in imaging data by the Shannon–Nyquist sampling theorem. Here, we train a generative adversarial network (GAN) on a sample of 4550 images of nearby galaxies at 0.01 < z < 0.02 from the Sloan Digital Sky Survey and conduct 10× cross-validation to evaluate the results. We present a method using a GAN trained on galaxy images that can recover features from artificially degraded images with worse seeing and higher noise than the original with a performance that far exceeds simple deconvolution. The ability to better recover detailed features such as galaxy morphology from low signal to noise and low angular resolution imaging data significantly increases our ability to study existing data sets of astrophysical objects as well as future observations with observatories such as the Large Synoptic Sky Telescope (LSST) and the Hubble and James Webb space telescopes.