Brain-imaging headband measures how our minds mirror a speaker when we communicate

A cartoon image of brain “coupling” during communication (credit: Drexel University)

Drexel University biomedical engineers and Princeton University psychologists have used a wearable brain-imaging technology called functional near-infrared spectroscopy (fNIRS) to measure brain synchronization when humans interact. fNIRS uses light to measure neural activity in the cortex of the brain (based on blood-oxygenation changes) during real-life situations, and the device can be worn like a headband.
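fNIRS systems typically shine near-infrared light at two wavelengths through the scalp and convert the measured changes in light attenuation into changes in oxygenated and deoxygenated hemoglobin using the modified Beer-Lambert law. The sketch below illustrates that conversion step only; the extinction coefficients, source-detector distance, and differential pathlength factors are illustrative placeholders, not calibrated values from any real device.

```python
# Minimal sketch of how fNIRS converts light measurements into hemoglobin
# signals via the modified Beer-Lambert law. The extinction coefficients,
# source-detector distance, and differential pathlength factors below are
# illustrative placeholders, not calibrated values for any real device.
import numpy as np

def hemoglobin_changes(dOD_wl1, dOD_wl2,
                       ext=np.array([[1.5, 3.8],    # [HbO, HbR] at ~760 nm (placeholder)
                                     [2.5, 1.8]]),  # [HbO, HbR] at ~850 nm (placeholder)
                       distance_cm=3.0,             # source-detector separation
                       dpf=np.array([6.0, 6.0])):   # differential pathlength factors
    """Convert optical-density changes at two wavelengths into changes in
    oxygenated (HbO) and deoxygenated (HbR) hemoglobin concentration."""
    L = distance_cm * dpf                 # effective pathlength per wavelength
    A = ext * L[:, None]                  # dOD = A @ [dHbO, dHbR]
    dHbO, dHbR = np.linalg.solve(A, np.array([dOD_wl1, dOD_wl2]))
    return dHbO, dHbR

print(hemoglobin_changes(0.01, 0.015))
```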

(KurzweilAI recently covered research with an fNIRS brain-computer interface that allows completely locked-in patients to communicate.)

An fNIRS headband (credit: Wyss Center for Bio and Neuroengineering)

Mirroring the speaker’s brain activity

The researchers found that a listener’s brain activity (in brain areas associated with speech comprehension) mirrors the speaker’s brain activity when he or she is telling a story about a real-life experience, with about a five-second delay. They also found that higher coupling is associated with better understanding.
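One simple way to picture the analysis behind such a finding is a lagged correlation between the speaker’s and listener’s signals, scanning over candidate delays and noting where the correlation peaks. The sketch below does this on synthetic time series; the signals, 10 Hz sampling rate, and 0–10 second lag range are illustrative assumptions, not the authors’ actual pipeline.

```python
# A toy version of the lagged speaker-listener correlation described above.
# The synthetic signals, 10 Hz sampling rate, and 0-10 s lag range are
# illustrative assumptions, not the authors' analysis pipeline.
import numpy as np

rng = np.random.default_rng(0)
fs = 10.0                                   # assumed sampling rate (Hz)
n = int(300 * fs)                           # 5 minutes of "story"

speaker = rng.standard_normal(n).cumsum()   # toy speaker time series
listener = np.roll(speaker, int(5 * fs))    # listener lags by ~5 seconds...
listener = listener + 0.5 * rng.standard_normal(n)  # ...plus measurement noise

def lagged_correlation(x, y, lag):
    """Pearson correlation between x(t) and y(t + lag), lag in samples."""
    if lag > 0:
        x, y = x[:-lag], y[lag:]
    return float(np.corrcoef(x, y)[0, 1])

lags = np.arange(0, int(10 * fs))           # scan delays from 0 to 10 s
r = [lagged_correlation(speaker, listener, int(k)) for k in lags]
best = int(np.argmax(r))
print(f"peak coupling r = {r[best]:.2f} at a listener delay of {best / fs:.1f} s")
```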

The researchers believe the system can be used to provide important information about how to communicate better in many different environments, such as how people learn in classrooms and how to improve business meetings and doctor-patient communication. They also mentioned uses in analyzing political rallies and how people respond to cable news.

“We now have a tool that can give us richer information about the brain during everyday tasks — such as person-to-person communication — that we could not receive in artificial lab settings or from single brain studies,” said Hasan Ayaz, PhD, an associate research professor in Drexel’s School of Biomedical Engineering, Science and Health Systems, who led the research team.

Traditional brain-imaging methods like fMRI have limitations. In particular, fMRI requires subjects to lie motionless in a noisy scanning environment, which makes it impossible to simultaneously scan the brains of multiple individuals speaking face-to-face. That is why the Drexel researchers turned to a portable fNIRS system, which can probe the brain-to-brain coupling question in natural settings.

For their study, a native English speaker and two native Turkish speakers each told an unrehearsed, real-life story in their native language. Their stories were recorded and their brains were scanned using fNIRS. Fifteen English speakers then listened to the recordings, along with a story recorded at a live storytelling event.

The researchers targeted the prefrontal and parietal areas of the brain, which include cognitive and higher-order regions involved in a person’s capacity to discern the beliefs, desires, and goals of others. They hypothesized that a listener’s brain activity would correlate with the speaker’s only when the listener heard a story he or she understood (the stories told in English). A second objective was to compare the two methods by checking the fNIRS results against data from a similar study that had used fMRI.

They found that when fNIRS measured the changes in oxygenated and deoxygenated hemoglobin in the test subjects’ brains, the listeners’ brain activity matched only that of the English speakers.* These results also correlated with the findings of the previous fMRI study.

The researchers believe these findings support fNIRS as a viable future tool for studying brain-to-brain coupling during social interaction. One can also imagine more intrusive uses in areas such as law enforcement and military interrogation.

The research was published in open-access Scientific Reports on Monday, Feb. 27.

* “During brain-to-brain coupling, activity in areas of prefrontal [in the speaker] and parietal cortex [in the listeners] previously reported to be involved in sentence comprehension were robustly correlated across subjects, as revealed in the inter-subject correlation analysis. As these are task-related (active listening) activation periods (not resting, etc.), the correlations reflect modulation of these regions by the time-varying content of the narratives, and comprise linguistic, conceptual and affective processing.” — Yichuan Liu et al./Scientific Reports


Abstract of Measuring speaker–listener neural coupling with functional near infrared spectroscopy

The present study investigates brain-to-brain coupling, defined as inter-subject correlations in the hemodynamic response, during natural verbal communication. We used functional near-infrared spectroscopy (fNIRS) to record brain activity of 3 speakers telling stories and 15 listeners comprehending audio recordings of these stories. Listeners’ brain activity was significantly correlated with speakers’ with a delay. This between-brain correlation disappeared when verbal communication failed. We further compared the fNIRS and functional Magnetic Resonance Imaging (fMRI) recordings of listeners comprehending the same story and found a significant relationship between the fNIRS oxygenated-hemoglobin concentration changes and the fMRI BOLD in brain areas associated with speech comprehension. This correlation between fNIRS and fMRI was only present when data from the same story were compared between the two modalities and vanished when data from different stories were compared; this cross-modality consistency further highlights the reliability of the spatiotemporal brain activation pattern as a measure of story comprehension. Our findings suggest that fNIRS can be used for investigating brain-to-brain coupling during verbal communication in natural settings.

Billionaire Softbank CEO Masayoshi Son plans to invest in singularity

Masayoshi Son (credit: Softbank Group)

Billionaire Softbank Group Chairman and CEO Masayoshi Son revealed Monday (Feb. 27) at Mobile World Congress his plan to invest in the singularity. “In next 30 years [the singularity] will become a reality,” he said, TechCrunch reports.

“If superintelligence goes inside the moving device then the world, our lifestyle dramatically changes,” he said. “There will be many kinds. Flying, swimming, big, micro, run, 2 legs, 4 legs, 100 legs,” referring to robots. “I truly believe it’s coming, that’s why I’m in a hurry — to aggregate the cash, to invest.”

“Son said his personal conviction in the looming rise of billions of superintelligent robots both explains his acquisition of UK chipmaker ARM last year, and his subsequent plan to establish the world’s biggest VC fund,” noted TechCrunch — a new $100BN fund called the Softbank Vision Fund, announced last October.

TechCrunch said that despite additional contributors including Foxconn, Apple, Qualcomm and Oracle co-founder Larry Ellison’s family office, the fund has evidently not yet hit Son’s target of $100BN, so he used the keynote as a sales pitch for additional partners.

Addressing existential threats

“Son said his haste is partly down to a belief that superintelligent AIs can be used for ‘the goodness of humanity,’ going on to suggest that only AI has the potential to address some of the greatest threats to humankind’s continued existence — be it climate change or nuclear annihilation,” said TechCrunch.

“It will be so much more capable than us — what will be our job? What will be our life? We have to ask philosophical questions,” Son said. “Is it good or bad? I think this superintelligence is going to be our partner. If we misuse it, it’s a risk. If we use it in good spirits, it will be our partner for a better life. So the future can be better predicted, people will live healthier, and so on.”

“With the coming of singularity, I believe we will benefit from new ideas and wisdom that people were previously incapable of thanks to big data and other analytics,” Son said on the Softbank Group website. “At some point I am sure we will see the birth of a ‘Super-intelligence’ that will contribute to humanity. This paradigm shift has only accelerated in recent years as both a worldwide and irreversible trend.”

Neural networks promise sharpest-ever telescope images

From left to right: an example of an original galaxy image; the same image deliberately degraded; the image after recovery by the neural network; and, for comparison, the result of conventional deconvolution. This figure visually illustrates the neural network’s ability to recover features that conventional deconvolution cannot. (credit: K. Schawinski / C. Zhang / ETH Zurich)

Swiss researchers are using neural networks to achieve the sharpest-ever images in optical astronomy. The work appears in an open-access paper in Monthly Notices of the Royal Astronomical Society.

The resolution of any telescope is fundamentally limited by its aperture (the diameter of its lens or mirror). The bigger the mirror or lens, the more light it gathers, allowing astronomers to detect fainter objects and to observe them more clearly. Image quality is also degraded by noise and atmospheric distortion.
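As a back-of-the-envelope illustration (not taken from the paper), the sharpest angle a circular aperture can resolve follows the Rayleigh diffraction criterion, roughly 1.22 × wavelength / diameter. The wavelength and mirror diameters in the sketch below are example values.

```python
# Back-of-the-envelope illustration (not from the paper): the Rayleigh
# diffraction criterion, theta ~ 1.22 * wavelength / aperture, gives the
# smallest angle a telescope can resolve. Wavelength and diameters below
# are example values.
import math

def diffraction_limit_arcsec(wavelength_m, aperture_m):
    """Smallest resolvable angle (arcseconds) for a circular aperture."""
    theta_rad = 1.22 * wavelength_m / aperture_m
    return math.degrees(theta_rad) * 3600

for name, diameter_m in [("2.5 m ground-based telescope", 2.5),
                         ("Hubble (2.4 m mirror)", 2.4)]:
    limit = diffraction_limit_arcsec(550e-9, diameter_m)   # visible light, ~550 nm
    print(f"{name}: ~{limit:.3f} arcsec")
```

For ground-based telescopes, atmospheric seeing of roughly an arcsecond typically blurs images far beyond this diffraction limit, which is one reason post-processing approaches like the one described next are attractive.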

The Swiss study uses “generative adversarial network” (GAN) machine-learning technology (see this KurzweilAI article) to go beyond this limit: two neural networks compete with each other to produce increasingly realistic images. The researchers first train the network to “see” what galaxies look like (using blurred and sharp versions of the same galaxy images), and then ask it to automatically fix blurred images of a galaxy, converting them into sharp ones.
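As a rough illustration of that training loop, here is a minimal PyTorch sketch of an adversarial image-recovery setup: a generator maps degraded images to recovered ones, while a discriminator tries to tell recoveries apart from genuine sharp images. The network sizes, losses, and synthetic stand-in data are assumptions for illustration, not the architecture used in the study.

```python
# Minimal PyTorch sketch of the adversarial image-recovery idea described
# above. The network sizes, losses, and synthetic stand-in data are
# illustrative assumptions, not the architecture used in the study.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a degraded (blurred, noisy) galaxy image to a recovered one."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Scores how much an image looks like a genuine sharp galaxy image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Flatten(), nn.LazyLinear(1),
        )

    def forward(self, x):
        return self.net(x)

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

# Stand-in for (sharp, degraded) training pairs of 64x64 galaxy cutouts.
sharp = torch.rand(8, 1, 64, 64)
degraded = sharp + 0.1 * torch.randn_like(sharp)   # toy "degradation"

for step in range(100):
    # 1) Train the discriminator to separate sharp images from recoveries.
    fake = G(degraded).detach()
    loss_d = bce(D(sharp), torch.ones(8, 1)) + bce(D(fake), torch.zeros(8, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # 2) Train the generator to fool the discriminator while staying close
    #    to the true sharp image (pixel-wise L1 term).
    recovered = G(degraded)
    loss_g = bce(D(recovered), torch.ones(8, 1)) + (recovered - sharp).abs().mean()
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
```

After training, only the generator is needed: feed it a degraded image and it outputs a recovered one, which matches the testing phase shown in the schematic below.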

Schematic illustration of the neural-network training process. The input is a set of original images. From these, the researchers automatically generate degraded images and train a GAN. In the testing phase, only the generator is used to recover images. (credit: K. Schawinski / C. Zhang / ETH Zurich)

The trained neural network was able to recognize and reconstruct features that the telescope could not resolve, such as star-forming regions and dust lanes in galaxies. The scientists checked the reconstructed images against the original high-resolution images to test the network’s performance, finding it better able to recover features than any method used to date.
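One common way to quantify how closely a reconstruction matches the original image (not necessarily the metric used in this study) is peak signal-to-noise ratio. A minimal sketch, assuming images scaled to [0, 1]:

```python
# Peak signal-to-noise ratio: a standard image-similarity measure (not
# necessarily the one used in this study). Images assumed scaled to [0, 1].
import numpy as np

def psnr(original, reconstructed, max_value=1.0):
    """PSNR in decibels between two images on the same [0, max_value] scale."""
    mse = np.mean((original - reconstructed) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(max_value ** 2 / mse)

rng = np.random.default_rng(1)
original = rng.random((64, 64))
reconstructed = np.clip(original + 0.02 * rng.standard_normal((64, 64)), 0, 1)
print(f"PSNR: {psnr(original, reconstructed):.1f} dB")
```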

“We can start by going back to sky surveys made with telescopes over many years, see more detail than ever before, and, for example, learn more about the structure of galaxies,” said lead author Prof. Kevin Schawinski of ETH Zurich in Switzerland. “There is no reason why we can’t then apply this technique to the deepest images from Hubble, and the coming James Webb Space Telescope, to learn more about the earliest structures in the Universe.”

ETH Zurich is hosting this work on the space.ml cross-disciplinary astrophysics/computer-science initiative, where the code is available to the general public.


Abstract of Generative adversarial networks recover features in astrophysical images of galaxies beyond the deconvolution limit

Observations of astrophysical objects such as galaxies are limited by various sources of random and systematic noise from the sky background, the optical system of the telescope and the detector used to record the data. Conventional deconvolution techniques are limited in their ability to recover features in imaging data by the Shannon–Nyquist sampling theorem. Here, we train a generative adversarial network (GAN) on a sample of 4550 images of nearby galaxies at 0.01 < z < 0.02 from the Sloan Digital Sky Survey and conduct 10× cross-validation to evaluate the results. We present a method using a GAN trained on galaxy images that can recover features from artificially degraded images with worse seeing and higher noise than the original with a performance that far exceeds simple deconvolution. The ability to better recover detailed features such as galaxy morphology from low signal to noise and low angular resolution imaging data significantly increases our ability to study existing data sets of astrophysical objects as well as future observations with observatories such as the Large Synoptic Sky Telescope (LSST) and the Hubble and James Webb space telescopes.