Scientists report first detection of gravitational waves produced by colliding neutron stars

Astronomers detect gravitational waves and a gamma-ray burst from two colliding neutron stars. (credit: National Science Foundation/LIGO/Sonoma State University/A. Simonnet)

Scientists reported today (Oct. 16, 2017) the first simultaneous detection of both gravitational waves and light, produced by an astounding collision of two neutron stars.

The discovery was made nearly simultaneously by three gravitational-wave detectors, followed by observations by some 70 ground- and space-based light observatories.

Neutron stars are the smallest, densest stars known to exist and are formed when massive stars explode in supernovas.


MIT | Neutron Stars Collide

As these neutron stars spiraled together, they emitted gravitational waves that were detectable for about 100 seconds. When they collided, a flash of light in the form of gamma rays was emitted and seen on Earth about two seconds after the gravitational waves. In the days and weeks following the smashup, other forms of light, or electromagnetic radiation — including X-ray, ultraviolet, optical, infrared, and radio waves — were detected.

The stars were estimated to be in a range from around 1.1 to 1.6 times the mass of the sun, in the mass range of neutron stars. A neutron star is about 20 kilometers, or 12 miles, in diameter and is so dense that a teaspoon of neutron star material has a mass of about a billion tons.
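The "teaspoon weighs about a billion tons" figure follows directly from the numbers above. A quick back-of-the-envelope check, assuming an illustrative 1.4-solar-mass star with the quoted 20 km diameter:

```python
import math

M_SUN = 1.989e30           # kg
mass = 1.4 * M_SUN         # kg, a typical mass in the quoted 1.1-1.6 solar-mass range
radius = 10e3              # m, half the ~20 km diameter quoted above

volume = 4 / 3 * math.pi * radius**3    # m^3
density = mass / volume                 # kg/m^3, roughly 6.6e17

TEASPOON = 5e-6                         # m^3 (a 5 mL teaspoon)
teaspoon_mass_tons = density * TEASPOON / 1000   # metric tons
print(f"a teaspoon of neutron star holds ~{teaspoon_mass_tons:.1e} metric tons")
```

The result comes out at a few billion metric tons, consistent with the "about a billion tons" order of magnitude in the text.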

The initial gamma-ray measurements, combined with the gravitational-wave detection, provide confirmation for Einstein’s general theory of relativity, which predicts that gravitational waves should travel at the speed of light. The observations also reveal signatures of recently synthesized material, including gold and platinum, solving a decades-long mystery of where about half of all elements heavier than iron are produced.
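The strength of that confirmation is easy to quantify: if the 1.7-second gamma-ray delay (see the timeline below) is attributed entirely to propagation over ~130 million years of travel, the fractional difference between the speed of gravitational waves and the speed of light is bounded at roughly one part in 10^15. A rough sketch of the arithmetic:

```python
# Rough bound on |v_gw - c| / c, assuming the entire 1.7 s gamma-ray lag
# accumulated over the ~130-million-year light-travel time from the source.
SECONDS_PER_YEAR = 3.156e7
travel_time = 130e6 * SECONDS_PER_YEAR   # s, light-travel time from the merger
delay = 1.7                              # s, gamma rays arrived after the GW signal

fractional_difference = delay / travel_time
print(f"|v_gw - c| / c  <~  {fractional_difference:.1e}")
```

This is a simplified bound; the published analysis also accounts for possible emission-time offsets between the gravitational waves and the gamma rays.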


Georgia Tech | The Collision of Two Neutron Stars (audible frequencies start at ~25 seconds)

“This detection has genuinely opened the doors to a new way of doing astrophysics,” said Laura Cadonati, professor of physics at Georgia Tech and deputy spokesperson for the LIGO Scientific Collaboration. I expect it will be remembered as one of the most studied astrophysical events in history.”

In the weeks and months ahead, telescopes around the world will continue to observe the afterglow of the neutron star merger and gather further evidence about various stages of the merger, its interaction with its surroundings, and the processes that produce the heaviest elements in the universe.

The research was published today in Physical Review Letters and in an open-access paper in The Astrophysical Journal Letters.

Timeline

KurzweilAI has assembled this timeline of the observations from various reports:

  • About 130 million years ago: Two neutron stars are in their final moments of orbiting each other, separated only by about 300 kilometers (200 miles) and gathering speed while closing the distance between them. As the stars spiral faster and closer together, they stretch and distort the surrounding space-time, giving off energy in the form of powerful gravitational waves, before smashing into each other. At the moment of collision, the bulk of the two neutron stars merge into one ultradense object, emitting a “fireball” of gamma rays.
  • Aug. 17, 2017, 12:41:04 UTC: The Virgo detector near Pisa, Italy, picks up a new strong “chirp” gravitational wave signal, designated GW170817. The LIGO detector in Livingston, Louisiana detects the signal just 22 milliseconds later, then the twin LIGO detector in Hanford, Washington, 3 milliseconds after that. Based on the signal duration (about 100 seconds) and the signal frequencies, scientists at the three facilities conclude it’s likely from neutron stars — not from more massive black holes (as in the previous three gravitational wave detections). And based on the signal strengths and timing between the three detectors, scientists are able to precisely triangulate the position in the sky. (The most precise gravitational-wave detection so far.)
  •  1.7 seconds later: NASA’s Fermi Gamma-ray Space Telescope and the European INTEGRAL satellite detect a gamma-ray burst (GRB) lasting nearly 2 seconds from the same general direction of sky. Both the Fermi and LIGO teams quickly alert astronomers around the world to search for an afterglow.
  • Hours later: Armed with these precise coordinates, a handful of observatories around the world starts searching the region of the sky where the signal was thought to originate. A new point of light, resembling a new star, is found by optical telescopes first. Known as a “kilonova,” this glow comes from material left over from the neutron star collision, which shines as it is blown out of the immediate region and far into space.
  • Days and weeks following: About 70 observatories on the ground and in space observe the event at progressively longer wavelengths (starting with gamma rays, then X-ray, ultraviolet, optical, and infrared, and ending at radio frequencies).
  •  In the weeks and months ahead: Telescopes around the world will continue to observe the radio-wave afterglow of the neutron star merger and gather further evidence about various stages of the merger, its interaction with its surroundings, and the processes that produce the heaviest elements in the universe.
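The triangulation step in the timeline works because a gravitational wave sweeps past the detectors at the speed of light: the arrival-time difference between two sites fixes the angle between the wave's direction and the baseline joining them. A toy sketch of the geometry (baseline and delay values are rough illustrations, not the published GW170817 fit, which also uses signal amplitudes and phases):

```python
# One inter-detector delay confines the source to a cone around the baseline:
# cos(theta) = c * dt / baseline. A third detector adds a second, independent
# cone, and the intersection shrinks to a small patch of sky.
import math

C = 2.998e8              # m/s
baseline = 3.0e6         # m, roughly the Hanford-Livingston separation
dt = 3.0e-3              # s, a hypothetical arrival-time difference

cos_theta = C * dt / baseline
theta = math.degrees(math.acos(cos_theta))
print(f"source lies on a cone ~{theta:.0f} deg from the baseline")
```

With all three detectors reporting, the two independent delays narrowed GW170817's position enough for optical telescopes to find the kilonova within hours.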

“Multimessenger” astronomy

Caltech’s David H. Reitze, executive director of the LIGO Laboratory, puts the observations in context: “This detection opens the window of a long-awaited ‘multimessenger’ astronomy. It’s the first time that we’ve observed a cataclysmic astrophysical event in both gravitational waves and electromagnetic waves — our cosmic messengers. Gravitational-wave astronomy offers new opportunities to understand the properties of neutron stars in ways that just can’t be achieved with electromagnetic astronomy alone.”


Caltech | Variety of Gravitational Waves and a Chirp (audible sound for GW170817 starts ~30 seconds)

Astronomers detect 15 high-frequency ‘fast radio bursts’ from distant galaxy

Green Bank Telescope in West Virginia (credit: Geremia/CC)

Using the Green Bank radio telescope, astronomers at Breakthrough Listen, a $100 million initiative to find signs of intelligent life in the universe, have detected 15 brief but powerful “fast radio bursts” (FRBs). These microwave radio pulses are from a mysterious source known as FRB 121102* in a dwarf galaxy about 3 billion light years from Earth, transmitting at record high frequencies (4 to 8 GHz), according to the researchers.

This sequence of 14 of the 15 detected fast radio bursts illustrates their dispersed spectrum and extreme variability. The streaks across the colored energy plot are the bursts appearing at different times and different energies because of dispersion caused by 3 billion years of travel through intergalactic space. In the top frequency spectrum, the dispersion has been removed to show the 300 microsecond pulse spike. (credit: Berkeley SETI Research Center)

Andrew Siemion, director of the Berkeley SETI Research Center and of the Breakthrough Listen program, and his team alerted the astronomical community to the high-frequency activity via an Astronomer’s Telegram on Monday evening, Aug. 28.

A schematic illustration of CSIRO’s Parkes radio telescope in Australia receiving a fast radio burst signal in 2014 (credit: Swinburne Astronomy Productions)

First detected in 2007, fast radio bursts are brief, bright pulses of radio emission detected from distant but largely unknown sources.

Breakthrough Starshot’s plan to use powerful laser pulses to propel nano-spacecraft to Proxima Centauri (credit: Breakthrough Initiatives)

Possible explanations for the repeating bursts range from outbursts from magnetars (rotating neutron stars with extremely strong magnetic fields) to directed energy sources — powerful bursts used by extraterrestrial civilizations to power exploratory spacecraft, akin to Breakthrough Starshot’s plan to use powerful laser pulses to propel nano-spacecraft to Earth’s nearest star, Proxima Centauri.

* FRB 121102 was discovered Nov. 2, 2012 (hence its name) with the Arecibo radio telescope, and in 2015 it was the first fast radio burst seen to repeat. More than 150 high-energy bursts have been observed so far. (The repetition ruled out the possibility that FRBs were caused by catastrophic events.)


FRB 121102: Detection at 4 – 8 GHz band with Breakthrough Listen backend at Green Bank

On Saturday, August 26 at 13:51:44 UTC we initiated observations of the well-known repeating fast radio burst FRB 121102 [Spitler et al., Nature, 531, 7593 202-205, 2016] using the Breakthrough Listen Digital Backend with the C-band receiver at the Green Bank Telescope. We recorded baseband voltage data across 5.4375 GHz of bandwidth, completely covering the C-band receiver’s nominal 4-8 GHz band [MacMahon et al. arXiv:1707.06024v2].

Observations were conducted over ten 30-minute scans, as detailed in Table 1. Immediately after observations, the baseband data were reduced to form high time resolution (300 us integration) Stokes-I products using a GPU-accelerated spectroscopy suite. These reduced products were searched for dispersed pulses consistent with the known dispersion measure of FRB 121102 (557 pc cm^-3); baseband voltage data were preserved.

We detected 15 bursts above our detection threshold of 10 sigma in the first two 30-minute scans, denoted 11A-L and 12A-B in Table 2. In Table 2, we include the detection signal-to-noise ratio (SNR) of each burst, along with a very rough estimate of pulse energy density assuming a 12 Jy system equivalent flux density, 300 us pulse width, and uniform 3800 MHz bandwidth.

We note the following phenomenological properties of the detected bursts:
  1. Bursts show marked changes in spectral extent, with characteristic spectral structure in the 100 MHz – 1 GHz range.
  2. Several bursts appear to peak in brightness at frequencies above 6 GHz.
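The "dispersed pulses" the telegram searches for are delayed by the cold interstellar and intergalactic plasma: lower frequencies arrive later, by an amount set by the dispersion measure (DM). Using the standard approximation dt ≈ 4.149 ms × DM × (f_lo⁻² − f_hi⁻²), with frequencies in GHz, the expected smearing across Green Bank's C band for FRB 121102 can be sketched as:

```python
# Cold-plasma dispersion delay across the 4-8 GHz band for the known
# dispersion measure of FRB 121102 (557 pc cm^-3).
DM = 557.0               # pc cm^-3
f_lo, f_hi = 4.0, 8.0    # GHz, edges of the C-band receiver

delay_s = 4.149e-3 * DM * (f_lo**-2 - f_hi**-2)
print(f"the 4 GHz edge lags the 8 GHz edge by ~{delay_s * 1e3:.0f} ms")
```

The search pipeline "de-disperses" the data by shifting each frequency channel by this frequency-dependent delay before looking for the aligned ~300 us pulse.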


‘Wearable’ PET brain scanner enables studies of moving patients

Julie Brefczynski-Lewis, a neuroscientist at West Virginia University, places a helmet-like PET scanner on a research subject. The mobile scanner enables studies of human interaction, movement disorders, and more. (credit: West Virginia University)

Two scientists have developed a miniaturized positron emission tomography (PET) brain scanner that can be “worn” like a helmet.

The new Ambulatory Microdose Positron Emission Tomography (AMPET) scanner allows research subjects to stand and move around as the device scans, instead of having to lie completely still, sometimes under anesthesia — restrictions that make it impossible to find associations between movement and brain activity.

Conventional positron emission tomography (PET) scanners immobilize patients (credit: Jens Maus/CC)

The AMPET scanner was developed by Julie Brefczynski-Lewis, a neuroscientist at West Virginia University (WVU), and Stan Majewski, a physicist at WVU and now at the University of Virginia. It could make possible new psychological and clinical studies on how the brain functions when affected by diseases from epilepsy to addiction, and during ordinary and dysfunctional social interactions.

Helmet support prototype with weighted helmet, allowing for freedom of movement. The counterbalance currently supports up to 10 kg but can be upgraded. Digitizing electronics will be mounted to the support above the patient. (credit: Samantha Melroy et al./Sensors)

Because AMPET sits so close to the brain, it can also “catch” more of the photons stemming from the radiotracers used in PET than larger scanners can. That means researchers can administer a lower dose of radioactive material and still get a good biological snapshot. Catching more signals also allows AMPET to create higher resolution images than regular PET.

The AMPET idea was sparked by the Rat Conscious Animal PET (RatCAP) scanner for studying rats at the U.S. Department of Energy’s (DOE) Brookhaven National Laboratory.** The scanner is a 250-gram ring that fits around the head of a rat, suspended by springs to support its weight and let the rat scurry about as the device scans. (credit: Brookhaven Lab)

The researchers plan to build a laboratory-ready version next.

Seeing more deeply into the brain

A patient or animal about to undergo a PET scan is injected with a low dose of a radiotracer — a radioactive form of a molecule that is regularly used in the body. These molecules emit anti-matter particles called positrons, which travel only a tiny distance through the body. As soon as one of these positrons meets an electron in biological tissue, the pair annihilates, converting its mass to energy. This energy takes the form of two high-energy light rays, called gamma photons, that shoot off in opposite directions. PET machines detect these photons and track their paths backward to their point of origin — the tracer molecule. By measuring levels of the tracer, for instance, doctors can map areas of high metabolic activity. Mapping of different tracers provides insight into different aspects of a patient’s health. (credit: Brookhaven Lab)
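The "track their paths backward" step can be sketched in a few lines: each pair of coincident gamma hits defines a line of response, and the annihilation point lies somewhere on it. Accumulating many such lines over an image grid (a crude back-projection) makes the tracer location stand out where the lines cross. The geometry and source position below are invented for illustration; real scanners use far more sophisticated reconstruction.

```python
# Toy back-projection: 500 random lines of response through one tracer spot.
import math, random

random.seed(1)
src = (0.32, -0.17)      # true tracer location in a [-1, 1]^2 field of view

grid = [[0.0] * 41 for _ in range(41)]   # 41x41 accumulation image

for _ in range(500):
    phi = random.uniform(0, math.pi)     # random back-to-back emission axis
    dx, dy = math.cos(phi), math.sin(phi)
    # accumulate the full line of response through `src` into the grid
    for i in range(401):
        t = i / 200 - 1
        x, y = src[0] + t * dx, src[1] + t * dy
        if abs(x) <= 1 and abs(y) <= 1:
            grid[int((y + 1) * 20)][int((x + 1) * 20)] += 1

# the brightest pixel sits where all the lines cross: the tracer
peak_val, peak_pos = max(
    (v, (r, c)) for r, row in enumerate(grid) for c, v in enumerate(row))
print(peak_pos)   # near row 16 (y = -0.17), column 26 (x = 0.32)
```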

PET scans allow researchers to see farther into the body than other imaging tools. This lets AMPET reach deep neural structures while the research subjects are upright and moving. “A lot of the important things that are going on with emotion, memory, and behavior are way deep in the center of the brain: the basal ganglia, hippocampus, amygdala,” Brefczynski-Lewis notes.

“Currently we are doing tests to validate the use of virtual reality environments in future experiments,” she said. In this virtual reality, volunteers would read from a script designed to make the subject angry, for example, as his or her brain is scanned.

In the medical sphere, the scanning helmet could help explain what happens during drug treatments. It could also shed light on disorders such as epilepsy by watching what happens in the brain during a seizure, or be used to study the sub-population of Parkinson’s patients who have great difficulty walking but can ride a bicycle.

** The RatCAP project at Brookhaven was funded by the DOE Office of Science. RHIC is a DOE Office of Science User Facility for nuclear physics research. Brookhaven Lab physicists use technology similar to PET scanners at the Relativistic Heavy Ion Collider (RHIC), where they must track the particles that fly out of near-light speed collisions of charged nuclei. PET research at the Lab dates back to the early 1960s and includes the creation of the first single-plane scanner as well as various tracer molecules.


Abstract of Development and Design of Next-Generation Head-Mounted Ambulatory Microdose Positron-Emission Tomography (AM-PET) System

Several applications exist for a whole brain positron-emission tomography (PET) brain imager designed as a portable unit that can be worn on a patient’s head. Enabled by improvements in detector technology, a lightweight, high performance device would allow PET brain imaging in different environments and during behavioral tasks. Such a wearable system that allows the subjects to move their heads and walk—the Ambulatory Microdose PET (AM-PET)—is currently under development. This imager will be helpful for testing subjects performing selected activities such as gestures, virtual reality activities and walking. The need for this type of lightweight mobile device has led to the construction of a proof of concept portable head-worn unit that uses twelve silicon photomultiplier (SiPM) PET module sensors built into a small ring which fits around the head. This paper is focused on the engineering design of mechanical support aspects of the AM-PET project, both of the current device as well as of the coming next-generation devices. The goal of this work is to optimize design of the scanner and its mechanics to improve comfort for the subject by reducing the effect of weight, and to enable diversification of its applications amongst different research activities.

‘Negative mass’ created at Washington State University

Experimental images of an expanding spin-orbit superfluid Bose-Einstein condensate at different expansion times (credit: M. A. Khamehchi et al./Physical Review Letters)

Washington State University (WSU) physicists have created a fluid with “negative mass,” which means that if you push it, it accelerates toward you instead of away, in apparent violation of Newton’s laws.

The phenomenon can be used to explore some of the more challenging concepts of the cosmos, said Michael Forbes, PhD, a WSU assistant professor of physics and astronomy and an affiliate assistant professor at the University of Washington. The research appeared Monday (April 17, 2017) in the journal Physical Review Letters.

How to create negative mass

The researchers created the conditions for negative mass by cooling about 10,000 rubidium atoms to just above absolute zero, creating a Bose-Einstein condensate (in which individual atoms move as one object). In this state, particles move extremely slowly and, following the principles of quantum mechanics, behave like waves. They also synchronize and move in unison as a “superfluid” that flows without losing energy.

Lasers trapped the atoms as if they were in a bowl measuring less than a hundred micrometers across; at this point, the rubidium superfluid had regular mass. Breaking the bowl allowed the rubidium to rush out, expanding as the rubidium in the center pushed outward.

To create negative mass, the researchers applied a second set of lasers that kicked the atoms back and forth and changed the way they spin. Now, when the rubidium rushes out fast enough, it behaves as if it has negative mass.

The technique used by the WSU researchers avoids some of the underlying defects encountered in previous attempts to create negative mass. It could hold clues to the behavior occurring in the heart of ultracold neutron stars, which also act as superfluids, and cosmological phenomena like black holes and dark energy, said Forbes.
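The underlying idea (also stated in the abstract below) is that effective mass is set by the curvature of the energy-momentum dispersion: m* ∝ 1/(d²E/dq²). Spin-orbit coupling bends the lower band so that its curvature turns negative over a range of momenta. A numerical sketch in dimensionless units (ħ = m = k_so = 1, with an illustrative Raman coupling Ω = 1; not the experiment's actual parameters):

```python
# Where the band curvature d2E/dq2 is negative, the condensate responds to a
# push as if its mass were negative.
import math

def E_lower(q, omega=1.0):
    """Lower spin-orbit-coupled band in dimensionless units."""
    return (q * q + 1) / 2 - math.sqrt(q * q + omega * omega / 4)

def curvature(q, h=1e-4):
    """Second derivative of E_lower by central finite differences."""
    return (E_lower(q + h) - 2 * E_lower(q) + E_lower(q - h)) / h**2

print(f"d2E/dq2 at q = 0:  {curvature(0.0):+.2f}   (negative -> m* < 0)")
print(f"d2E/dq2 at q = 2:  {curvature(2.0):+.2f}   (positive -> normal mass)")
```

In the experiment, the expanding condensate is pushed into the negative-curvature region of this dispersion, producing the self-trapping and shock-wave behavior described in the abstract.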

The work was supported in part by a WSU New Faculty Seed Grant and the National Science Foundation.


Abstract of Negative-Mass Hydrodynamics in a Spin-Orbit–Coupled Bose-Einstein Condensate

A negative effective mass can be realized in quantum systems by engineering the dispersion relation. A powerful method is provided by spin-orbit coupling, which is currently at the center of intense research efforts. Here we measure an expanding spin-orbit coupled Bose-Einstein condensate whose dispersion features a region of negative effective mass. We observe a range of dynamical phenomena, including the breaking of parity and of Galilean covariance, dynamical instabilities, and self-trapping. The experimental findings are reproduced by a single-band Gross-Pitaevskii simulation, demonstrating that the emerging features—shock waves, soliton trains, self-trapping, etc.—originate from a modified dispersion. Our work also sheds new light on related phenomena in optical lattices, where the underlying periodic structure often complicates their interpretation.

Astronomers detect atmosphere around Earth-like planet

Artist’s impression of atmosphere around super-Earth planet GJ 1132b (credit: MPIA)

Astronomers have detected an atmosphere around an Earth-like planet beyond our solar system for the first time: the super-Earth planet GJ 1132b in the Southern constellation Vela, at a distance of 39 light-years from Earth.

The team, led by Keele University’s John Southworth, PhD, used the 2.2 m ESO/MPG telescope in Chile to take images of the planet’s host star, GJ 1132. The astronomers made the detection by measuring the slight decrease in the star’s brightness as the planet transited (passed in front of) it, finding that the planet’s atmosphere absorbed some of the starlight. Previous detections of exoplanet atmospheres all involved gas giants reminiscent of a high-temperature Jupiter.
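The measurement rests on transit depth: the fraction of starlight blocked is (R_planet / R_star)², so an atmosphere that absorbs at certain wavelengths makes the planet look slightly larger, and the transit slightly deeper, at those wavelengths. A sketch with roughly the published values for GJ 1132b (~1.4 Earth radii) and its small M-dwarf host (~0.21 solar radii); the 10% atmosphere extent is an arbitrary illustration:

```python
R_EARTH = 6.371e6      # m
R_SUN = 6.957e8        # m

r_planet = 1.4 * R_EARTH     # approximate solid-body radius of GJ 1132b
r_star = 0.21 * R_SUN        # approximate radius of the host star

depth = (r_planet / r_star) ** 2
print(f"baseline transit depth: {depth * 100:.2f}%")

# with an opaque atmosphere extending ~10% above the surface at some wavelength:
depth_atm = (1.1 * r_planet / r_star) ** 2
print(f"depth in an absorbing band: {depth_atm * 100:.2f}%")
```

Comparing depths across wavelength bands is what let the team infer an absorbing atmosphere rather than a bare rock.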

Possible “water world”

“With this research, we have taken the first tentative step into studying the atmospheres of smaller, Earth-like, planets,” said Southworth. “We simulated a range of possible atmospheres for this planet, finding that those rich in water and/or methane would explain the observations of GJ 1132b. The planet is significantly hotter and a bit larger than Earth, so one possibility is that it is a ‘water world’ with an atmosphere of hot steam.”

Very low-mass stars are extremely common (much more so than Sun-like stars), and are known to host lots of small planets. But they also show a lot of magnetic activity, causing high levels of X-rays and ultraviolet light to be produced, which might completely evaporate the planets’ atmospheres. The properties of GJ 1132b show that an atmosphere can endure for a billion years without being destroyed, the astronomers say.

Given the huge number of very low-mass stars and planets, this could mean the conditions suitable for life are common in the Universe, the astronomers suggest.

The discovery, reported March 31 in The Astronomical Journal, makes GJ 1132b one of the highest-priority targets for further study by current top facilities, such as the Hubble Space Telescope and ESO’s Very Large Telescope, as well as the James Webb Space Telescope, slated for launch in 2018.

The team also included Luigi Mancini of the Max Planck Institute for Astronomy (MPIA), along with astronomers at the University of Rome, University of Cambridge, and Stockholm University.

Neural networks promise sharpest-ever telescope images

From left to right: an example of an original galaxy image; the same image deliberately degraded; the image after recovery by the neural network; and, for comparison, deconvolution. This figure visually illustrates the neural network’s ability to recover features that conventional deconvolutions cannot. (credit: K. Schawinski / C. Zhang / ETH Zurich)

Swiss researchers are using neural networks to achieve the sharpest-ever images in optical astronomy. The work appears in an open-access paper in Monthly Notices of the Royal Astronomical Society.

The resolution of any telescope is fundamentally limited by the aperture (diameter) of its lens or mirror. The bigger the mirror or lens, the more light it gathers, allowing astronomers to detect fainter objects and observe them more clearly. Other factors affecting image quality are noise and atmospheric distortion.
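The aperture limit in question is diffraction: the smallest angle a telescope can resolve is roughly 1.22 λ/D. A quick illustration with a Hubble-sized 2.4 m mirror at a visible wavelength:

```python
# Diffraction-limited angular resolution, theta ~ 1.22 * lambda / D.
import math

wavelength = 550e-9    # m, green visible light
diameter = 2.4         # m, e.g. Hubble's primary mirror

theta_rad = 1.22 * wavelength / diameter
theta_arcsec = math.degrees(theta_rad) * 3600
print(f"resolution limit: ~{theta_arcsec:.3f} arcsec")
```

Features on the sky smaller than this angle are blurred together in the raw image, which is the gap the neural-network approach described next tries to close.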

The Swiss study uses “generative adversarial network” (GAN) machine-learning technology (see this KurzweilAI article) to go beyond this limit by using two neural networks that compete with each other to create a series of more realistic images. The researchers first train the neural network to “see” what galaxies look like (using blurred and sharp images of the same galaxy), and then ask it to automatically fix the blurred images of a galaxy, converting them to sharp ones.

Schematic illustration of the neural-network training process. The input is a set of original images. From these, the researchers automatically generate degraded images, and train a GAN. In the testing phase, only the generator will be used to recover images. (credit: K. Schawinski / C. Zhang / ETH Zurich)
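The "automatically generate degraded images" step in the schematic can be sketched simply: convolve each sharp image with a blur kernel (a stand-in for telescope seeing) and add noise, yielding degraded/sharp training pairs for the GAN. A pure-Python toy version on a tiny point-source "image" (kernel and noise level are arbitrary choices for illustration):

```python
import random

def degrade(img, kernel, noise=0.02, seed=0):
    """Blur a 2D image with a kernel (clamped edges) and add Gaussian noise."""
    rng = random.Random(seed)
    h, w = len(img), len(img[0])
    k = len(kernel) // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for dy in range(-k, k + 1):
                for dx in range(-k, k + 1):
                    yy = min(max(y + dy, 0), h - 1)
                    xx = min(max(x + dx, 0), w - 1)
                    acc += kernel[dy + k][dx + k] * img[yy][xx]
            out[y][x] = acc + rng.gauss(0, noise)
    return out

# 3x3 normalized blur kernel (crude Gaussian) and a 5x5 point-source image
kernel = [[1/16, 2/16, 1/16], [2/16, 4/16, 2/16], [1/16, 2/16, 1/16]]
sharp = [[1.0 if (x, y) == (2, 2) else 0.0 for x in range(5)] for y in range(5)]
blurred = degrade(sharp, kernel)
print(f"central pixel: {sharp[2][2]:.2f} -> {blurred[2][2]:.2f} (flux spread out)")
```

The generator network is then trained to invert this mapping, while the discriminator judges whether its outputs look like real sharp galaxies.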

The trained neural networks were able to recognize and reconstruct features that the telescope could not resolve, such as star-forming regions and dust lanes in galaxies. The scientists checked the reconstructed images against the original high-resolution images to test the network’s performance, finding it better able to recover features than any method used to date.

“We can start by going back to sky surveys made with telescopes over many years, see more detail than ever before, and, for example, learn more about the structure of galaxies,” said lead author Prof. Kevin Schawinski of ETH Zurich in Switzerland. “There is no reason why we can’t then apply this technique to the deepest images from Hubble, and the coming James Webb Space Telescope, to learn more about the earliest structures in the Universe.”

ETH Zurich is hosting this work on the space.ml cross-disciplinary astrophysics/computer-science initiative, where the code is available to the general public.


Abstract of Generative adversarial networks recover features in astrophysical images of galaxies beyond the deconvolution limit

Observations of astrophysical objects such as galaxies are limited by various sources of random and systematic noise from the sky background, the optical system of the telescope and the detector used to record the data. Conventional deconvolution techniques are limited in their ability to recover features in imaging data by the Shannon–Nyquist sampling theorem. Here, we train a generative adversarial network (GAN) on a sample of 4550 images of nearby galaxies at 0.01 < z < 0.02 from the Sloan Digital Sky Survey and conduct 10× cross-validation to evaluate the results. We present a method using a GAN trained on galaxy images that can recover features from artificially degraded images with worse seeing and higher noise than the original with a performance that far exceeds simple deconvolution. The ability to better recover detailed features such as galaxy morphology from low signal to noise and low angular resolution imaging data significantly increases our ability to study existing data sets of astrophysical objects as well as future observations with observatories such as the Large Synoptic Survey Telescope (LSST) and the Hubble and James Webb space telescopes.

Compact new microscope chemically identifies micrometer-sized particles

Multiple types of micrometer-sized particles are simultaneously illuminated by a far-infrared laser and a green laser beam. Absorption of the infrared laser energy by the particles increases their temperatures, causing them to expand and slightly altering their visible-light optical properties. These changes are unique to the material composition of each particle and can be measured by examining the modulation of scattered green light from each particle. (credit: Ryan Sullenberger, MIT Lincoln Laboratory)

MIT researchers have developed a radical design for a low-cost, miniaturized microscope that can chemically identify individual micrometer-sized particles. It could one day be used in airports or other high-security venues as a highly sensitive and low-cost way to rapidly screen people for microscopic amounts of potentially dangerous materials. It could also be used for scientific analysis of very small samples or for measuring the optical properties of materials.

Optical setup for PMMS measurement scheme. A tunable far-IR laser (QCL) or pump projects laser light (shown in this illustration as red, but actually invisible far-IR, or thermal energy) and a 532 nm laser (probe) projects green light onto the same location on a sample, which consists of microspheres deposited onto a ZnSe substrate in this experiment. A visible-light camera fitted with a 16× microscopic lens images the particles directly. The white LED is used to help locate the particles. (credit: R. M. Sullenberger et al./ Optics Letters)

In an open-access paper in the journal Optics Letters, from The Optical Society (OSA), the researchers demonstrated their new “photothermal modulation of Mie scattering” (PMMS) microscope by measuring infrared spectra of individual 3-micrometer spheres made of silica or acrylic. The new technique uses a simple optical setup consisting of compact components that will allow the instrument to be miniaturized into a portable device about the size of a shoebox.

The new microscope’s use of visible wavelengths for imaging gives it a spatial resolution of around 1 micrometer, compared to the roughly 10-micrometer resolution of traditional infrared spectroscopy methods. This increased resolution allows the new technique to distinguish and identify individual particles that are extremely small and close together.*

“If there are two very different particles in the field of view, we’re able to identify each of them,” said co-author Alexander Stolyarov.

“The most important advantage of our new technique is its highly sensitive, yet remarkably simple design,” said Ryan Sullenberger, associate staff at MIT Lincoln Laboratory and first author of the paper. “It provides new opportunities for nondestructive chemical analysis while paving the way towards ultra-sensitive and more compact instrumentation.”

Probing spectral fingerprints

A typical far-IR spectrometer (credit: NYU)

Infrared spectroscopy is typically used to identify unknown materials because almost every material can be identified by its unique far-infrared absorption spectrum, or fingerprint. The new method detects this fingerprint without using actual far-infrared detectors, which add significant bulk to traditional instruments and require cooling, limiting their use in portable devices.

The new technique works by illuminating particles with both a far-infrared laser and a green laser. The far-infrared laser deposits energy into the particles, causing them to heat up and expand. The green laser light is then scattered by these heated particles. A visible-wavelength camera is used to monitor this scattering, tracking physical changes of the individual particles through the microscope’s lens.

The instrument can be used to identify the material composition of individual particles by tuning the far-infrared laser to different wavelengths and collecting the visible scattered light at each wavelength. The slight heating of the particles doesn’t impart any permanent changes to the material, making the technique ideal for non-destructive analysis.
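The identification loop just described — sweep the pump wavelength, record the modulation amplitude of the scattered green light at each step, and match the resulting spectrum against known fingerprints — can be sketched as follows. The fingerprint values below are invented toy numbers, not real spectra:

```python
# Toy PMMS identification: least-squares match of a measured sweep against
# reference absorption fingerprints (arbitrary units, 5 pump wavelengths).
REFERENCE = {
    "silica":  [0.9, 0.2, 0.1, 0.7, 0.3],
    "acrylic": [0.1, 0.8, 0.6, 0.2, 0.9],
}

def identify(measured):
    """Return the reference material whose fingerprint best matches."""
    return min(REFERENCE, key=lambda name: sum(
        (m - r) ** 2 for m, r in zip(measured, REFERENCE[name])))

measured = [0.85, 0.25, 0.15, 0.65, 0.35]   # simulated modulation amplitudes
print(identify(measured))
```

Because the camera images the whole field, this matching can run independently for every particle in view, which is what lets the instrument identify different materials side by side.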

The ability to excite particles with infrared light and then look at their scattering with visible wavelengths — a process called photothermal modulation of Mie scattering — has been used since the 1980s. This new work uses more advanced optical components to create and detect the Mie scattering and is the first to use an imaging configuration to detect multiple species of particles.

“We’re actually imaging the area that we’re interrogating,” said Alexander Stolyarov, technical staff and a co-author of the paper. “This means we can simultaneously probe multiple particles on the surface at the same time.”

Compact, tunable infrared laser

The development of compact, tunable quantum-cascade infrared lasers was a key enabling technology for the new technique. The researchers combined a quantum-cascade laser with a very stable visible laser source and a commercially available scientific-grade camera.

“We are hoping to see an improvement in high-power wavelength-tunable quantum cascade lasers,” said Sullenberger. “A more powerful infrared laser enables us to interrogate larger areas in the same amount of time, allowing more particles to be probed simultaneously.”

The researchers plan to test their microscope on additional materials, including particles that are not spherical in shape. They also want to test their setup in more realistic environments that might contain interfering particles.

The work was supported by the U.S. Assistant Secretary of Defense for Research and Engineering under an Air Force contract.

* “By using a visible probe beam and camera for registering the particle absorption, we are able to spectroscopically identify individual particles that are spaced closer than the IR diffraction limit, which represents a significant improvement over conventional IR spectroscopic imaging techniques,” the authors note.


Abstract of Spatially-resolved individual particle spectroscopy using photothermal modulation of Mie scattering

We report a photothermal modulation of Mie scattering (PMMS) method that enables concurrent spatial and spectral discrimination of individual micron-sized particles. This approach provides a direct measurement of the “fingerprint” infrared absorption spectrum with the spatial resolution of visible light. Trace quantities (tens of picograms) of material were deposited onto an infrared-transparent substrate and simultaneously illuminated by a wavelength-tunable intensity-modulated quantum cascade pump laser and a continuous-wave 532 nm probe laser. Absorption of the pump laser by the particles results in direct modulation of the scatter field of the probe laser. The probe light scattered from the interrogated region is imaged onto a visible camera, enabling simultaneous probing of spatially-separated individual particles. By tuning the wavelength of the pump laser, the IR absorption spectrum is obtained. Using this approach, we measured the infrared absorption spectra of individual 3 μm PMMA and silica spheres. Experimental PMMS signal amplitudes agree with modeling using an extended version of the Mie scattering theory for particles on substrates, enabling the prediction of the PMMS signal magnitude based on the material and substrate properties.

How to 3D-print your own baby universe

Cosmic microwave background radiation — 2D view (credit: NASA)

Researchers have created a 3D-printed cosmic microwave background (CMB) — a map of the oldest light in the universe — and have provided the files for download.

The cosmic microwave background (CMB) is the “glow” that the universe had in the microwave range. It maps the oldest light in the universe and tells astronomers more about the early universe and the formation of structures within it, such as galaxies. It was imprinted when the universe was only 380,000 years old — when the universe first became transparent, instead of an opaque fog of plasma and radiation.

The Planck satellite has produced increasingly detailed maps of the CMB, which are correspondingly difficult to view and explore. To address this issue, Dave Clements, PhD, from the Department of Physics at Imperial College London and his team have created plans for 3D-printing the CMB. The work is published (open access) in the European Journal of Physics.

3D-printed CMB model. The bumps (not to scale) and associated colors represent both higher temperatures and higher densities of matter (credit: D. L. Clements et al./European Journal of Physics)

The 3D-printed model represents differences in temperature as bumps and dips on a spherical surface, and also as colors (from blue for the coldest to red for the warmest, matching the colors in the flat view). These temperature differences correspond to differences in the density of matter, which seeded the formation of structures in the universe, including stars, galaxies, galaxy clusters, and superclusters.

The CMB can be printed from a range of 3D printers, and two file types have been created by the team: one for simple single-color structures and one that includes the temperature differences represented as colors as well as bumps and dips. The files are free to download.
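The bump-and-dip mapping described above can be sketched in a few lines: each point on the sphere has its radius displaced in proportion to its (exaggerated) temperature fluctuation. This is a simplified illustration assuming a plain latitude-longitude temperature grid, not the Planck HEALPix pipeline the authors actually used; the bump scale is an arbitrary choice for printability:

```python
import numpy as np

def cmb_to_mesh(d_t, r0=1.0, bump_scale=0.15):
    """Map a (n_lat, n_lon) grid of zero-mean CMB temperature
    fluctuations to 3D vertex positions on a sphere: hotter spots
    become bumps, colder spots become dips (not to scale).
    """
    n_lat, n_lon = d_t.shape
    theta = np.linspace(0, np.pi, n_lat)                  # colatitude
    phi = np.linspace(0, 2 * np.pi, n_lon, endpoint=False)
    th, ph = np.meshgrid(theta, phi, indexing="ij")
    # Radius modulated by the normalized temperature fluctuation.
    r = r0 * (1 + bump_scale * d_t / np.max(np.abs(d_t)))
    x = r * np.sin(th) * np.cos(ph)
    y = r * np.sin(th) * np.sin(ph)
    z = r * np.cos(th)
    return np.stack([x, y, z], axis=-1)                   # (n_lat, n_lon, 3)

# Example with a random fluctuation field standing in for real data.
rng = np.random.default_rng(0)
d_t = rng.standard_normal((10, 20))
verts = cmb_to_mesh(d_t)
```

The vertex grid could then be triangulated and exported to STL for printing, and the same normalized fluctuation value can drive the blue-to-red color map for the full-color file.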

Dave Clements’ latest book, Infrared Astronomy — Seeing the Heat: from William Herschel to the Herschel Space Observatory, is available now.


Abstract of Cosmic sculpture: a new way to visualise the cosmic microwave background

3D printing presents an attractive alternative to visual representation of physical datasets such as astronomical images that can be used for research, outreach or teaching purposes, and is especially relevant to people with a visual disability. We here report the use of 3D printing technology to produce a representation of the all-sky cosmic microwave background (CMB) intensity anisotropy maps produced by the Planck mission. The success of this work in representing key features of the CMB is discussed as is the potential of this approach for representing other astrophysical data sets. 3D printing such datasets represents a highly complementary approach to the usual 2D projections used in teaching and outreach work, and can also form the basis of undergraduate projects. The CAD files used to produce the models discussed in this paper are made available.

Zapping deep tumors with microwave-heated photosensitizer nanoparticle

A schematic illustration of microwave-induced photodynamic therapy for cancer treatment (credit: UTA)

Physicists at The University of Texas at Arlington have invented a new photosensitizer nanoparticle called copper-cysteamine (Cu-Cy) that, when activated by microwave energy, can precisely zap cancer cells deep in the body.

Photodynamic therapy kills cancer cells when a photosensitizer* nanoparticle introduced into tumor tissue is stimulated by (typically) near-infrared light, generating toxic reactive oxygen species (ROS), such as singlet oxygen, by photoexcitation. However, near-IR light cannot penetrate deeper than 10 mm in tissue while retaining enough energy to activate ROS.**
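The depth limit comes from the roughly exponential attenuation of light in tissue, which a simple Beer-Lambert-style estimate illustrates; the attenuation coefficient below is an assumed, illustrative value, not a number from the study:

```python
import math

def transmitted_fraction(depth_mm, mu_eff_per_mm=0.5):
    """Fraction of incident light intensity surviving to a given
    tissue depth, I/I0 = exp(-mu_eff * d). mu_eff_per_mm is an
    illustrative effective attenuation coefficient for tissue."""
    return math.exp(-mu_eff_per_mm * depth_mm)

# Intensity falls off exponentially with depth, which is why
# conventional near-IR photodynamic therapy is limited to shallow
# tumors while microwaves can reach much deeper tissue.
shallow = transmitted_fraction(1.0)   # roughly 61% remains at 1 mm
deep = transmitted_fraction(10.0)     # under 1% remains at 10 mm
```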

The new “microwave-induced photodynamic therapy (MIPDT)” method can “propagate through all types of tissues and target deeply situated tumors, without harming surrounding tissue,” according to Wei Chen, UTA professor of physics and lead author of the study, published in the October 2016 issue of The Journal of Biomedical Nanotechnology.***

TEM images of Cu-Cy particles (left) and particles after uptake by osteosarcoma cells (right) (credit: Mengyu Yao et al./J. Biomed. Nanotechnol.)

The new nanoparticle demonstrates “very low toxicity, is easy to make and inexpensive, and also emits intense luminescence, which means it can also be used as an imaging agent,” said Chen.

*  A photosensitizer is a molecule that can be activated by light to a high-energy state. It may then collide with oxygen and transfer its extra energy to oxygen, forming toxic singlet oxygen.

** In previous research, the researchers found that the Cu-Cy nanoparticle could be activated by X-rays to produce singlet oxygen and slow the growth of tumors. X-ray radiation, however, poses significant risks to patients and can harm healthy tissue. Other photodynamic therapy activation methods that have been explored, with limited results, include upconversion nanoparticles that can be excited at NIR and emit light in the UV-visible range, scintillation or afterglow nanoparticles, and Cerenkov light (generated by radionuclide decay).

*** Scientists from the Guangdong Key Laboratory of Orthopaedic Technology and Implant Materials, Department of Orthopedics, Guangzhou General Hospital of Guangzhou Military Command, Guangzhou, China, and the Physics Department at Beihang University in Beijing, China, were also involved in the research. The research was supported by the U.S. Army Medical Research Acquisition Activity, the National Science Foundation, the Department of Homeland Security’s joint Academic Research Initiative program, the National Basic Research Program of China, the National Natural Science Foundation of China, and the five-year plan of the Chinese Military.


Abstract of A New Modality for Cancer Treatment—Nanoparticle Mediated Microwave Induced Photodynamic Therapy

Photodynamic therapy (PDT) has attracted ever-growing attention as a promising modality for cancer treatment. However, due to poor tissue penetration by light, photodynamic therapy has rarely been used for deeply situated tumors. This problem can be solved if photosensitizers are activated by microwaves (MW) that are able to penetrate deeply into tissues. Here, for the first time, we report microwave-induced photodynamic therapy and exploit copper cysteamine nanoparticles as a new type of photosensitizer that can be activated by microwaves to produce singlet oxygen for cancer treatment. Both in vitro and in vivo studies on a rat osteosarcoma cell line (UMR 106-01) have shown significant cell destruction using copper cysteamine (Cu-Cy) under microwave activation. The heating effects and the release of copper ions from Cu-Cy upon MW stimulation are the main mechanisms for the generation of reactive oxygen species that are lethal bullets for cancer destruction. The copper cysteamine nanoparticle-based microwave-induced photodynamic therapy opens a new door for treating cancer and other diseases.

Nobel Prize in Chemistry 2016 awarded to three pioneers of molecular machines

The Nobel Prize in Chemistry 2016 was awarded today to Jean-Pierre Sauvage, PhD, Sir J. Fraser Stoddart, PhD, and Bernard L. Feringa, PhD, for their design and production of molecular machines. They have developed molecules with controllable movements, which can perform a task when energy is added.

Jean-Pierre Sauvage used a copper ion to interlock molecules using a mechanical bond. (credit: The Royal Swedish Academy of Sciences)

The first step towards a molecular machine was taken by Jean-Pierre Sauvage in 1983, when he succeeded in linking two ring-shaped molecules together to form a chain, called a catenane. Normally, molecules are joined by strong covalent bonds in which the atoms share electrons, but in the chain they were instead linked by a freer mechanical bond. For a machine to be able to perform a task it must consist of parts that can move relative to each other. The two interlocked rings fulfilled exactly this requirement.

Fraser Stoddart created a rotaxane cyclophane ring that could act as a molecular shuttle, moving along an axle in a controlled manner. (credit: The Royal Swedish Academy of Sciences)

The second step was taken by Fraser Stoddart in 1991, when he developed a rotaxane. He threaded a cyclophane molecular ring onto a thin molecular axle and demonstrated that the ring was able to move along the axle — the start of applying mechanically interlocked molecules in the development of molecular machinery.

Fraser Stoddart’s rotaxane-based “molecular elevator” (left) and “artificial muscle” (right), using extension and contraction in a daisy-chain rotaxane structure (credit: The Royal Swedish Academy of Sciences)

Among his other developments based on rotaxanes are a molecular lift, a molecular muscle and a molecule-based computer chip.

Ben Feringa’s molecular motor, the first of its kind, was mechanically constructed to spin in a particular direction. His research group has since optimized the motor so that it now spins at 12 million revolutions per second. (credit: The Royal Swedish Academy of Sciences)

Bernard Feringa was the first person to develop a molecular motor; in 1999 he got a molecular rotor blade to spin continually in the same direction.

Ben Feringa’s four-wheel drive nanocar, with a molecular chassis and four motors that functioned as wheels (credit: The Royal Swedish Academy of Sciences)

Using molecular motors, he has also rotated a glass cylinder that is 10,000 times bigger than the motor and also designed a nanocar.

2016’s Nobel Laureates in Chemistry have taken molecular systems out of equilibrium’s stalemate and into energy-filled states in which their movements can be controlled. In terms of development, the molecular motor is at the same stage as the electric motor was in the 1830s, when scientists displayed various spinning cranks and wheels, unaware that they would lead to electric trains, washing machines, fans and food processors. Molecular machines will most likely be used in the development of things such as new materials, sensors and energy storage systems.

Jean-Pierre Sauvage, born 1944 in Paris, France. Ph.D. 1971 from the University of Strasbourg, France. Professor Emeritus at the University of Strasbourg and Director of Research Emeritus at the National Center for Scientific Research (CNRS), France.
https://isis.unistra.fr/laboratory-of-inorganic-chemistry-jean-pierre-sauvage

Sir J. Fraser Stoddart, born 1942 in Edinburgh, UK. Ph.D. 1966 from Edinburgh University, UK. Board of Trustees Professor of Chemistry at Northwestern University, Evanston, IL, USA.
http://stoddart.northwestern.edu

Bernard L. Feringa, born 1951 in Barger-Compascuum, the Netherlands. Ph.D. 1978 from the University of Groningen, the Netherlands. Professor in Organic Chemistry at the University of Groningen, the Netherlands.
www.benferinga.com