This voice-authentication wearable could block voice-assistant or bank spoofing

“Alexa, turn off my security system.” (credit: Amazon)

University of Michigan (U-M) scientists have developed a voice-authentication system that reduces the risk of being spoofed when you use your voice as a biometric to log into secure services or to control a voice assistant (such as Amazon Echo or Google Home).

A hilarious example of spoofing a voice assistant happened during a Google commercial during the 2017 Super Bowl. When actors voiced “OK Google” commands on TV, viewers’ Google Home devices obediently began to play whale noises, flip lights on, and take other actions.

More seriously, the U-M scientists point out, an adversary could bypass current voice-as-biometric authentication mechanisms, such as Nuance’s “FreeSpeech” customer-authentication platform (used in call centers and banks), by simply impersonating the user’s voice (possibly by using Adobe Voco software).*

The VAuth system

VAuth system (credit: Kassem Fawaz/ACM Mobicom 2017)

The U-M VAuth (continuous voice authentication, pronounced “vee-auth”) system aims to make that a lot more difficult. It uses a tiny wearable device (which could be built into a necklace, earbud/earphones/headset, or eyeglasses) containing an accelerometer (or a special microphone) that detects and measures vibrations on the skin of a person’s face, throat, or chest.

VAuth prototype features accelerometer chip for detecting body voice vibrations and Bluetooth transmitter (credit: Huan Feng et al./ACM)

The team has built a prototype using an off-the-shelf accelerometer and a Bluetooth transmitter, which sends the vibration signal to a real-time matching engine in a device (such as Google Home). The engine matches these vibrations with the sound of the person’s voice to form a unique, secure signature that is verified continuously throughout a session (not just at the beginning). The team has also developed matching algorithms and software for Google Now.
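The matching idea can be illustrated with a toy sketch: treat authentication as checking that the body-vibration (accelerometer) signal and the microphone signal are strongly correlated, and reject commands whose audio doesn’t match the wearer’s vibrations. This is only an illustration of the concept, not the team’s actual matching engine; the signals, threshold, and `accept_command` helper are hypothetical.

```python
import numpy as np

def normalized_correlation(a: np.ndarray, b: np.ndarray) -> float:
    """Peak normalized cross-correlation between two equal-length signals."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    corr = np.correlate(a, b, mode="full") / len(a)
    return float(corr.max())

def accept_command(vibration: np.ndarray, audio: np.ndarray,
                   threshold: float = 0.6) -> bool:
    """Accept a voice command only if the body-vibration signal
    matches the microphone audio above the threshold."""
    return normalized_correlation(vibration, audio) >= threshold

rng = np.random.default_rng(0)
# A genuine command: the audio is a noisy copy of the wearer's vibrations.
speech = np.sin(np.linspace(0, 40 * np.pi, 2000))
genuine_audio = speech + 0.2 * rng.standard_normal(2000)
# A spoofed command: audio unrelated to the wearer's vibrations.
spoof_audio = rng.standard_normal(2000)

assert accept_command(speech, genuine_audio)
assert not accept_command(speech, spoof_audio)
```

A real continuous-authentication engine would run this kind of check segment by segment over the whole session, so a spoofed command injected mid-session would also be rejected.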

Security holes in voice authentication systems

“Increasingly, voice is being used as a security feature but it actually has huge holes in it,” said Kang Shin, the Kevin and Nancy O’Connor Professor of Computer Science and professor of electrical engineering and computer science at U-M. “If a system is using only your voice signature, it can be very dangerous. We believe you have to have a second channel to authenticate the owner of the voice.”

VAuth doesn’t require training and is also immune to voice changes over time and different situations, such as sickness (a sore throat) or tiredness — a major limitation of voice biometrics, which require training from each individual who will use them, says the team.

The team tested VAuth with 18 users and 30 voice commands. It achieved 97 percent detection accuracy and a false-positive rate below 0.1 percent, regardless of its position on the body and the user’s language, accent, or even mobility. The researchers say it also successfully thwarts various practical attacks, such as replay attacks, mangled-voice attacks, and impersonation attacks.

A study on VAuth was presented Oct. 19 at the International Conference on Mobile Computing and Networking (MobiCom 2017) in Snowbird, Utah, and is available for open-access download.

The work was supported by the National Science Foundation. The researchers have applied for a patent and are seeking commercialization partners to help bring the technology to market.

* As explained in this KurzweilAI article, Adobe Voco technology (aka “Photoshop for voice”) makes it easy to add or replace a word in an audio recording of a human voice by simply editing a text transcript of the recording. New words are automatically synthesized in the speaker’s voice — even if they don’t appear anywhere else in the recording.


Abstract of Continuous Authentication for Voice Assistants

Voice has become an increasingly popular User Interaction (UI) channel, mainly contributing to the current trend of wearables, smart vehicles, and home automation systems. Voice assistants such as Alexa, Siri, and Google Now, have become our everyday fixtures, especially when/where touch interfaces are inconvenient or even dangerous to use, such as driving or exercising. The open nature of the voice channel makes voice assistants difficult to secure, and hence exposed to various threats as demonstrated by security researchers. To defend against these threats, we present VAuth, the first system that provides continuous authentication for voice assistants. VAuth is designed to fit in widely-adopted wearable devices, such as eyeglasses, earphones/buds and necklaces, where it collects the body-surface vibrations of the user and matches it with the speech signal received by the voice assistant’s microphone. VAuth guarantees the voice assistant to execute only the commands that originate from the voice of the owner. We have evaluated VAuth with 18 users and 30 voice commands and find it to achieve 97% detection accuracy and less than 0.1% false positive rate, regardless of VAuth’s position on the body and the user’s language, accent or mobility. VAuth successfully thwarts various practical attacks, such as replay attacks, mangled voice attacks, or impersonation attacks. It also incurs low energy and latency overheads and is compatible with most voice assistants.

New transistor design enables flexible, high-performance wearable/mobile electronics

Advanced flexible transistor developed at UW-Madison (photo credit: Jung-Hun Seo/University at Buffalo, State University of New York)

A team of University of Wisconsin–Madison (UW–Madison) engineers has created “the most functional flexible transistor in the world,” along with a fast, simple, inexpensive fabrication process that’s easily scalable to the commercial level.

The development promises to allow manufacturers to add advanced, smart-wireless capabilities to wearable and mobile devices that curve, bend, stretch and move.*

The UW–Madison group’s advance is based on a BiCMOS (bipolar complementary metal oxide semiconductor) thin-film transistor, combining speed, high current, and low power dissipation (heat and wasted energy) on just one surface (a silicon nanomembrane, or “Si NM”).**

BiCMOS is the chip technology of choice for “mixed-signal” devices (combining analog and digital capabilities), which include many of today’s portable electronics, such as cellphones. “The [BiCMOS] industry standard is very good,” says Zhenqiang (Jack) Ma, the Lynn H. Matthias Professor and Vilas Distinguished Achievement Professor in electrical and computer engineering at UW–Madison. “Now we can do the same things with our transistor — but it can bend.”

The research was described in the inaugural issue of Nature Publishing Group’s open-access journal Flexible Electronics, published Sept. 27, 2017.***

Making traditional BiCMOS flexible electronics is difficult, in part because the process takes several months and requires a multitude of delicate, high-temperature steps. Even a minor variation in temperature at any point could ruin all of the previous steps.

Ma and his collaborators fabricated their flexible electronics on a single-crystal silicon nanomembrane on a single bendable piece of plastic. The secret to their success is their unique process, which eliminates many steps and slashes both the time and cost of fabricating the transistors.

“In industry, they need to finish these in three months,” he says. “We finished it in a week.”

He says his group’s much simpler, high-temperature process can scale to industry-level production right away.

“The key is that parameters are important,” he says. “One high-temperature step fixes everything — like glue. Now, we have more powerful mixed-signal tools. Basically, the idea is for [the flexible electronics platform] to expand with this.”

* Some companies (such as Samsung) have developed flexible displays, but not other flexible electronic components in their devices, Ma explained to KurzweilAI.

** “Flexible electronics have mainly focused on their form factors such as bendability, lightweight, and large area with low-cost processability…. To date, all the [silicon, or Si]-based thin-film transistors (TFTs) have been realized with CMOS technology because of their simple structure and process. However, as more functions are required in future flexible electronic applications (i.e., advanced bioelectronic systems or flexible wireless power applications), an integration of functional devices in one flexible substrate is needed to handle complex signals and/or various power levels.” — Jung Hun Seo et al./Flexible Electronics. The n-channel, p-channel metal-oxide semiconductor field-effect transistors (N-MOSFETs & P-MOSFETs), and NPN bipolar junction transistors (BJTs) were realized together on a 340-nm thick Si NM layer. 

*** Co-authors included researchers at the University at Buffalo, State University of New York, and the University of Texas at Arlington. This work was supported by the Air Force Office Of Scientific Research.


Abstract of High-performance flexible BiCMOS electronics based on single-crystal Si nanomembrane

In this work, we have demonstrated for the first time integrated flexible bipolar-complementary metal-oxide-semiconductor (BiCMOS) thin-film transistors (TFTs) based on a transferable single crystalline Si nanomembrane (Si NM) on a single piece of bendable plastic substrate. The n-channel, p-channel metal-oxide semiconductor field-effect transistors (N-MOSFETs & P-MOSFETs), and NPN bipolar junction transistors (BJTs) were realized together on a 340-nm thick Si NM layer with minimized processing complexity at low cost for advanced flexible electronic applications. The fabrication process was simplified by thoughtfully arranging the sequence of necessary ion implantation steps with carefully selected energies, doses and anneal conditions, and by wisely combining some costly processing steps that are otherwise separately needed for all three types of transistors. All types of TFTs demonstrated excellent DC and radio-frequency (RF) characteristics and exhibited stable transconductance and current gain under bending conditions. Overall, Si NM-based flexible BiCMOS TFTs offer great promises for high-performance and multi-functional future flexible electronics applications and is expected to provide a much larger and more versatile platform to address a broader range of applications. Moreover, the flexible BiCMOS process proposed and demonstrated here is compatible with commercial microfabrication technology, making its adaptation to future commercial use straightforward.

New system allows near-zero-power sensors to communicate data over long distances

This low-cost, flexible epidermal medical-data patch prototype successfully transmitted information at up to 37,500 bits per second across a 3,300-square-foot atrium. (credit: Dennis Wise/University of Washington)

University of Washington (UW) researchers have developed a low-cost, long-range data-communication system that could make it possible for medical sensors or billions of low-cost “internet of things” objects to connect via radio signals at long distances (up to 2.8 kilometers) and with 1000 times lower required power (9.25 microwatts in an experiment) compared to existing technologies.

“People have been talking about embedding connectivity into everyday objects … for years, but the problem is the cost and power consumption to achieve this,” said Vamsi Talla, chief technology officer of Jeeva Wireless, which plans to market the system within six months. “This is the first wireless system that can inject connectivity into any device with very minimal cost.”

The new system relies on “backscatter,” in which a passive sensor draws energy from ambient transmissions (from WiFi, for example) to encode data and reflect the signal. (This article explains how ambient backscatter, developed at UW, works.) Backscatter systems, used with RFID chips, are very low-cost, but limited in range.

So the researchers combined backscatter with a “chirp spread spectrum” technique, used in LoRa (long-range) wireless data-communication systems.
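The chirp idea can be sketched in a few lines. In LoRa-style modulation, each data symbol is a frequency-shifted chirp, and the receiver recovers it by “dechirping” (multiplying by the conjugate of the base chirp) and finding the FFT peak, which makes the scheme very robust to noise over long ranges. This is a simplified baseband model for illustration, not the backscatter hardware or UW’s actual implementation; the symbol size and helper names are assumptions.

```python
import numpy as np

N = 128  # chips per symbol; a symbol carries a value 0..N-1

def modulate(symbol: int, n: int = N) -> np.ndarray:
    """Encode a symbol as a chirp whose starting frequency offset is the symbol."""
    k = np.arange(n)
    return np.exp(2j * np.pi * k * (k / (2 * n) + symbol / n))

def demodulate(signal: np.ndarray, n: int = N) -> int:
    """Dechirp (remove the quadratic phase), then the FFT peak bin is the symbol."""
    k = np.arange(n)
    dechirped = signal * np.exp(-2j * np.pi * k * k / (2 * n))
    return int(np.argmax(np.abs(np.fft.fft(dechirped))))

# Symbols survive substantial added noise thanks to the FFT's coherent gain.
rng = np.random.default_rng(1)
for sym in (0, 5, 97):
    noisy = modulate(sym) + 0.5 * (rng.standard_normal(N)
                                   + 1j * rng.standard_normal(N))
    assert demodulate(noisy) == sym
```

The coherent gain is the point: the FFT concentrates the chirp’s energy into a single bin while spreading the noise across all bins, which is roughly why chirp spread spectrum can decode signals far below the noise floor.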

This tiny off-the-shelf spread-spectrum receiver enables extremely-low-power cheap sensors to communicate over long distances. (credit: Dennis Wise/University of Washington)

The new system has three components: a source of a radio signal (WiFi or other ambient transmissions, or cheap flexible printed batteries, expected to cost 10 to 20 cents each in bulk); cheap sensors (less than 10 cents at scale) that modulate (encode) information in scattered reflections of that signal; and an inexpensive, off-the-shelf spread-spectrum receiver, located as far as 2.8 kilometers away, that decodes the sensor information.

Applications could include, for example, medical monitoring devices that wirelessly transmit information about a heart patient’s condition to doctors; sensor arrays that monitor pollution, noise, or traffic in “smart” cities; and affordable sensor coverage of entire farm fields, letting farmers measure soil temperature or moisture to determine how to efficiently plant seeds or water.

The research team built a contact-lens prototype and a flexible epidermal patch that attaches to human skin, both of which successfully used long-range backscatter to transmit information across a 3,300-square-foot building.

The research, which was partially funded by the National Science Foundation, is detailed in an open-access paper presented Sept. 13, 2017 at UbiComp 2017. More information: longrange@cs.washington.edu.


UW (University of Washington) | UW team shatters long-range communication barrier for devices that consume almost no power


Abstract of LoRa Backscatter: Enabling The Vision of Ubiquitous Connectivity

The vision of embedding connectivity into billions of everyday objects runs into the reality of existing communication technologies — there is no existing wireless technology that can provide reliable and long-range communication at tens of microwatts of power as well as cost less than a dime. While backscatter is low-power and low-cost, it is known to be limited to short ranges. This paper overturns this conventional wisdom about backscatter and presents the first wide-area backscatter system. Our design can successfully backscatter from any location between an RF source and receiver, separated by 475 m, while being compatible with commodity LoRa hardware. Further, when our backscatter device is co-located with the RF source, the receiver can be as far as 2.8 km away. We deploy our system in a 4,800 ft² (446 m²) house spread across three floors, a 13,024 ft² (1,210 m²) office area covering 41 rooms, as well as a one-acre (4,046 m²) vegetable farm and show that we can achieve reliable coverage, using only a single RF source and receiver. We also build a contact lens prototype as well as a flexible epidermal patch device attached to the human skin. We show that these devices can reliably backscatter data across a 3,328 ft² (309 m²) room. Finally, we present a design sketch of a LoRa backscatter IC that shows that it costs less than a dime at scale and consumes only 9.25 μW of power, which is more than 1000x lower power than LoRa radio chipsets.

‘Fog computing’ could improve communications during natural disasters

Hurricane Irma at peak intensity near the U.S. Virgin Islands on September 6, 2017 (credit: NOAA)

Researchers at the Georgia Institute of Technology have developed a system that uses edge computing (also known as fog computing) to deal with the loss of internet access in natural disasters such as hurricanes, tornados, and floods.

The idea is to create an ad hoc decentralized network that uses computing power built into mobile phones, routers, and other hardware to provide actionable data to emergency managers and first responders.

In a flooded area, for example, search and rescue personnel could continuously ping enabled phones, surveillance cameras, and “internet of things” devices in an area to determine their exact locations. That data could then be used to create density maps of people to prioritize and guide emergency response teams.
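The density-map step is simple to sketch: bin the reported device locations into a coarse grid so responders can see where people are concentrated. This is a minimal illustration of the idea, not the Georgia Tech system; the grid size, coordinates, and `density_map` helper are hypothetical.

```python
import numpy as np

def density_map(pings, grid_size=10, extent=100.0):
    """Bin device ping coordinates (x, y in meters) into a coarse grid,
    giving responders a people-density heatmap of the area."""
    grid = np.zeros((grid_size, grid_size), dtype=int)
    cell = extent / grid_size
    for x, y in pings:
        i = min(int(y // cell), grid_size - 1)  # row index from y
        j = min(int(x // cell), grid_size - 1)  # column index from x
        grid[i, j] += 1
    return grid

# Five pinged devices: two near the origin, a cluster in the far corner.
pings = [(5, 5), (7, 3), (95, 95), (96, 97), (97, 94)]
grid = density_map(pings)
assert grid[0, 0] == 2
assert grid[9, 9] == 3
assert grid.sum() == len(pings)
```

In a real deployment the fog layer would aggregate these counts locally, so the map stays available even when the wide-area internet link is down.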

Situational awareness for first responders

“We believe fog computing can become a potent enabler of decentralized, local social sensing services that can operate when internet connectivity is constrained,” said Kishore Ramachandran, PhD, computer science professor at Georgia Tech and senior author of a paper presented in April this year at the 2nd International Workshop on Social Sensing*.

“This capability will provide first responders and others with the level of situational awareness they need to make effective decisions in emergency situations.”

The team has proposed a generic software architecture for social sensing applications that is capable of exploiting the fog-enabled devices. The design has three components: a central management function that resides in the cloud, a data processing element placed in the fog infrastructure, and a sensing component on the user’s device.

Beyond emergency response during natural disasters, the team believes its proposed fog architecture can also benefit communities with limited or no internet access — for public transportation management, job recruitment, and housing, for example.

To monitor far-flung devices in areas with no internet access, a bus or other vehicle could be outfitted with fog-enabled sensing capabilities, the team suggests. As it travels in remote areas, it would collect data from sensing devices. Once in range of internet connectivity, the “data mule” bus would upload that information to centralized cloud-based platforms.

* “Social sensing has emerged as a new paradigm for collecting sensory measurements by means of “crowd-sourcing” sensory data collection tasks to a human population. Humans can act as sensor carriers (e.g., carrying GPS devices that share location data), sensor operators (e.g., taking pictures with smart phones), or as sensors themselves (e.g., sharing their observations on Twitter). The proliferation of sensors in the possession of the average individual, together with the popularity of social networks that allow massive information dissemination, heralds an era of social sensing that brings about new research challenges and opportunities in this emerging field.” — SocialSens2017

Ray Kurzweil reveals plans for ‘linguistically fluent’ Google software

Smart Reply (credit: Google Research)

Ray Kurzweil, a director of engineering at Google, reveals plans for a future version of Google’s “Smart Reply” machine-learning email software (and more) in a Wired article by Tom Simonite published Wednesday (Aug. 2, 2017).

Running on mobile Gmail and Google Inbox, Smart Reply suggests up to three replies to an email message, saving typing time or giving you ideas for a better reply.

Smarter autocomplete

Kurzweil’s team is now “experimenting with empowering Smart Reply to elaborate on its initial terse suggestions,” Simonite says.

“Tapping a Continue button [in response to an email] might cause ‘Sure I’d love to come to your party!’ to expand to include, for example, ‘Can I bring something?’ He likes the idea of having AI pitch in anytime you’re typing, a bit like an omnipresent, smarter version of Google’s search autocomplete. ‘You could have similar technology to help you compose documents or emails by giving you suggestions of how to complete your sentence,’ Kurzweil says.”

As Simonite notes, Kurzweil’s software is based on his hierarchical theory of intelligence, articulated in Kurzweil’s latest book, How to Create a Mind, and in more detail in an arXiv paper by Kurzweil and key members of his team, published in May.

“Kurzweil’s work outlines a path to create a simulation of the human neocortex (the outer layer of the brain where we do much of our thinking) by building a hierarchy of similarly structured components that encode increasingly abstract ideas as sequences,” according to the paper. “Kurzweil provides evidence that the neocortex is a self-organizing hierarchy of modules, each of which can learn, remember, recognize and/or generate a sequence, in which each sequence consists of a sequential pattern from lower-level modules.”

The paper further explains that Smart Reply previously used “long short-term memory” (LSTM) networks*, “which are much slower than feed-forward networks [used in the new software] for training and inference,” because LSTMs take more computation to handle longer sequences of words.

Kurzweil’s team was able to produce email responses of similar quality to LSTM, but using fewer computational resources by training hierarchically connected layers of simulated neurons on clustered numerical representations of text. Essentially, the approach propagates information through a sequence of ever more complex pattern recognizers until the final patterns are matched to optimal responses.
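A heavily simplified sketch of that approach — encoding a message with feed-forward layers over averaged word embeddings, then scoring it against candidate replies — might look like the following. Everything here is a hypothetical toy: the vocabulary, the responses, and the weights (random and untrained here; in a real system they would be learned from data).

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB = {"lunch": 0, "tomorrow": 1, "report": 2, "deadline": 3, "party": 4}
RESPONSES = ["Sounds good, see you then!",
             "I'll send it over today.",
             "I'd love to come!"]

EMBED = rng.standard_normal((len(VOCAB), 16))   # word embeddings (learned in practice)
W1 = rng.standard_normal((16, 16)) * 0.1        # feed-forward "hierarchy" layers
W2 = rng.standard_normal((16, 16)) * 0.1
R = rng.standard_normal((len(RESPONSES), 16))   # one embedding per candidate reply

def encode(text: str) -> np.ndarray:
    """Average the word embeddings, then pass through two feed-forward layers."""
    ids = [VOCAB[w] for w in text.lower().split() if w in VOCAB]
    x = EMBED[ids].mean(axis=0)
    x = np.maximum(W1.T @ x, 0)   # ReLU layers stand in for the module hierarchy
    x = np.maximum(W2.T @ x, 0)
    return x

def suggest(text: str) -> str:
    """Return the candidate reply whose embedding best matches the message."""
    scores = R @ encode(text)
    return RESPONSES[int(np.argmax(scores))]

assert suggest("lunch tomorrow") in RESPONSES
```

Unlike an LSTM, nothing here is processed word by word at inference time, which is the efficiency point the paper makes: the whole message is collapsed into one vector and pushed through fixed feed-forward layers.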

Kona: linguistically fluent software

But underlying Smart Reply is “a system for understanding the meaning of language, according to Kurzweil,” Simonite reports.

“Codenamed Kona, the effort is aiming for nothing less than creating software as linguistically fluent as you or me. ‘I would not say it’s at human levels, but I think we’ll get there,’ Kurzweil says. More applications of Kona are in the works and will surface in future Google products, he promises.”

* The previous sequence-to-sequence (Seq2Seq) framework [described in this paper] uses “recurrent neural networks (RNNs), typically long short-term memory (LSTM) networks, to encode sequences of word embeddings into representations that depend on the order, and uses a decoder RNN to generate output sequences word by word. …While Seq2Seq models provide a generalized solution, it is not obvious that they are maximally efficient, and training these systems can be slow and complicated.”

How to run faster, smarter AI apps on smartphones

(credit: iStock)

When you use smartphone AI apps like Siri, you’re dependent on the cloud for a lot of the processing — limited by your connection speed. But what if your smartphone could do more of the processing directly on your device — allowing for smarter, faster apps?

MIT scientists have taken a step in that direction with a new way to enable artificial-intelligence systems called convolutional neural networks (CNNs) to run locally on mobile devices. (CNNs are used in areas such as autonomous driving, speech recognition, computer vision, and automatic translation.) Neural networks take up a lot of memory and consume a lot of power, so they usually run on servers in the cloud, which receive data from desktop or mobile devices and then send back their analyses.

The new MIT analytic method can determine how much power a neural network will actually consume when run on a particular type of hardware. The researchers used the method to evaluate new techniques for paring down neural networks so that they’ll run more efficiently on handheld devices.

The new CNN designs are also tuned to run on an energy-efficient computer chip, optimized for neural networks, that the researchers developed in 2016.

Reducing energy consumption

The new MIT software method uses “energy-aware pruning” — reducing a neural network’s power consumption by cutting out the parts of the network that contribute very little to its final output yet consume the most energy.
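A toy sketch of the pruning idea: score each weight by its magnitude relative to the energy cost of its layer, then zero out the lowest-scoring weights first, so energy-hungry layers are pruned hardest. The actual MIT method models measured hardware energy per operation; the scoring rule, energy numbers, and `energy_aware_prune` helper here are illustrative assumptions.

```python
import numpy as np

def energy_aware_prune(weights, energy_per_layer, budget=0.5):
    """Zero out the fraction `budget` of all weights, removing first those
    with small |weight| in energy-expensive layers (low contribution,
    high energy cost)."""
    # Score each weight: small magnitude / high layer energy => prune first.
    scores = [np.abs(w) / e for w, e in zip(weights, energy_per_layer)]
    flat = np.concatenate([s.ravel() for s in scores])
    cutoff = np.quantile(flat, budget)
    return [np.where(s <= cutoff, 0.0, w) for s, w in zip(scores, weights)]

rng = np.random.default_rng(2)
weights = [rng.standard_normal((8, 8)), rng.standard_normal((8, 8))]
energies = [1.0, 10.0]  # assume the second layer costs 10x more energy per access
pruned = energy_aware_prune(weights, energies, budget=0.5)

kept = [np.count_nonzero(p) for p in pruned]
assert sum(kept) <= 64   # at least half of the 128 weights were removed
assert kept[1] < kept[0] # the energy-hungry layer is pruned harder
```

The design choice this illustrates: pure magnitude pruning treats all layers alike, while folding an energy term into the score shifts the sparsity toward wherever it saves the most power.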

Associate professor of electrical engineering and computer science Vivienne Sze and colleagues describe the work in an open-access paper presented this week (of July 24, 2017) at the Computer Vision and Pattern Recognition Conference. They report that the methods offered up to a 73 percent reduction in power consumption over the standard implementation of neural networks — 43 percent better than the best previous method.

Meanwhile, another MIT group at the Computer Science and Artificial Intelligence Laboratory has designed a hardware approach to reduce energy consumption and increase computer-chip processing speed for specific apps, using “cache hierarchies.” (“Caches” are small, local memory banks that store data that’s frequently used by computer chips to cut down on time- and energy-consuming communication with off-chip memory.)**

The researchers tested their system on a simulation of a chip with 36 cores, or processing units. They found that compared to its best-performing predecessors, the system increased processing speed by 20 to 30 percent while reducing energy consumption by 30 to 85 percent. They presented the new system, dubbed Jenga, in an open-access paper at the International Symposium on Computer Architecture earlier in July 2017.

Better batteries — or maybe, no battery?

Another solution to better mobile AI is improving rechargeable batteries in cell phones (and other mobile devices), which have limited charge capacity and short lifecycles, and perform poorly in cold weather.

Recently, DARPA-funded researchers from the University of Houston (with the University of California-San Diego and Northwestern University) discovered that quinones — an earth-abundant, easily recyclable material that is inexpensive and nonflammable — can address current battery limitations.

“One of these batteries, as a car battery, could last 10 years,” said Yan Yao, associate professor of electrical and computer engineering. In addition to slowing the deterioration of batteries for vehicles and stationary electricity storage batteries, it also would make battery disposal easier because the material does not contain heavy metals. The research is described in Nature Materials.

The first battery-free cellphone that can send and receive calls using only a few microwatts of power. (credit: Mark Stone/University of Washington)

But what if we eliminated batteries altogether? University of Washington researchers have invented a cellphone that requires no batteries. Instead, it harvests 3.5 microwatts of power from ambient radio signals, light, or even the vibrations of a speaker.

The new technology is detailed in a paper published July 1, 2017 in the Proceedings of the Association for Computing Machinery on Interactive, Mobile, Wearable and Ubiquitous Technologies.

The UW researchers demonstrated how to harvest this energy from ambient radio signals transmitted by a WiFi base station up to 31 feet away. “You could imagine in the future that all cell towers or Wi-Fi routers could come with our base station technology embedded in it,” said co-author Vamsi Talla, a former UW electrical engineering doctoral student and Allen School research associate. “And if every house has a Wi-Fi router in it, you could get battery-free cellphone coverage everywhere.”

A cellphone CPU (central processing unit) typically requires several watts or more (depending on the app), so we’re not quite there yet. But that power requirement could one day be sufficiently reduced by future special-purpose chips and MIT’s optimized algorithms.
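To put that gap in numbers: the prototype harvests about 3.5 microwatts, while a phone CPU draws watts. The 2-watt CPU figure below is an assumed order of magnitude for illustration, not a number from the paper.

```python
# Rough gap between harvested power and a typical phone CPU's needs.
harvested_w = 3.5e-6   # ~3.5 microwatts harvested from ambient RF/light (UW prototype)
cpu_w = 2.0            # assumed typical smartphone CPU draw (illustrative)

gap = cpu_w / harvested_w
assert gap > 100_000   # power use must fall by roughly six orders of magnitude
```

That is why the battery-free phone offloads nearly all computation to the base station and keeps only microwatt-scale analog circuitry on the device itself.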

It might even let you do amazing things. :)

* Loosely based on the anatomy of the brain, neural networks consist of thousands or even millions of simple but densely interconnected information-processing nodes, usually organized into layers. The connections between nodes have “weights” associated with them, which determine how much a given node’s output will contribute to the next node’s computation. During training, in which the network is presented with examples of the computation it’s learning to perform, those weights are continually readjusted, until the output of the network’s last layer consistently corresponds with the result of the computation. With the proposed pruning method, the energy consumption of AlexNet and GoogLeNet is reduced by 3.7x and 1.6x, respectively, with less than 1% top-5 accuracy loss.

** The software reallocates cache access on the fly to reduce latency (delay), based on the physical locations of the separate memory banks that make up the shared memory cache. If multiple cores are retrieving data from the same DRAM [memory] cache, this can cause bottlenecks that introduce new latencies. So after Jenga has come up with a set of cache assignments, cores don’t simply dump all their data into the nearest available memory bank; instead, Jenga parcels out the data a little at a time, then estimates the effect on bandwidth consumption and latency. 

*** The stumbling block, Yao said, has been the anode, the portion of the battery through which energy flows. Existing anode materials are intrinsically structurally and chemically unstable, meaning the battery is only efficient for a relatively short time. The differing formulations offer evidence that the material is an effective anode for both acid batteries and alkaline batteries, such as those used in a car, as well as emerging aqueous metal-ion batteries.

Google rolls out new ‘smart reply’ machine-learning email software to more than 1 billion Gmail mobile users

A smarter version of Smart Reply (credit: Google Research)

Google is rolling out an enhanced version of its “smart reply” machine-learning email software to “over 1 billion Android and iOS users of Gmail,” Google CEO Sundar Pichai said today (May 17, 2017) in a keynote at the annual Google I/O conference.

Smart Reply suggests up to three replies to an email message — saving you typing time, or giving you time to think through a better reply. Smart Reply was previously only available to users of Google Inbox (an app that helps Gmail users organize their email messages and reply efficiently).

Hierarchical model

Developed by a team headed by Ray Kurzweil, a Google director of engineering, “the new version of Smart Reply increases the percentage of usable suggestions and is much more algorithmically efficient than the original system,” said Kurzweil in a Google Research blog post with research colleague Brian Strope today. “And that efficiency now makes it feasible for us to provide Smart Reply for Gmail.”

A hierarchy of modules (credit: Google Research)

The team was inspired by how humans understand languages and concepts, based on hierarchical models of language, Kurzweil and Strope explained. The new approach uses “hierarchies of modules, each of which can learn, remember, and recognize a sequential pattern,” as described in Kurzweil’s 2012 book, How to Create a Mind.

For example, a sentence like “That interesting person at the cafe we like gave me a glance” is difficult to interpret. Was it a positive or negative gesture? But “given enough examples of language, a machine learning approach can discover many of these subtle distinctions,” they write.

Elon Musk wants to enhance us as superhuman cyborgs to deal with superintelligent AI

(credit: Neuralink Corp.)

It’s the year 2021. A quadriplegic patient has just had one million “neural lace” microparticles injected into her brain, making her the world’s first human with an internet communication system using a wireless implanted brain-machine interface — and the first superhuman cyborg. …

No, this is not a science-fiction movie plot. It’s the actual first public step — just four years from now — in Tesla CEO Elon Musk’s business plan for his latest new venture, Neuralink. It’s now explained for the first time on Tim Urban’s WaitButWhy blog.

Dealing with the superintelligence existential risk

Such a system would allow for radically improved communication between people, Musk believes. But for Musk, the big concern is AI safety. “AI is obviously going to surpass human intelligence by a lot,” he says. “There’s some risk at that point that something bad happens, something that we can’t control, that humanity can’t control after that point — either a small group of people monopolize AI power, or the AI goes rogue, or something like that.”

“This is what keeps Elon up at night,” says Urban. “He sees it as only a matter of time before superintelligent AI rises up on this planet — and when that happens, he believes that it’s critical that we don’t end up as part of ‘everyone else.’ That’s why, in a future world made up of AI and everyone else, he thinks we have only one good option: To be AI.”

Neural dust: an ultrasonic, low power solution for chronic brain-machine interfaces (credit: Swarm Lab/UC Berkeley)

To achieve this, Neuralink CEO Musk has met with more than 1,000 people, initially narrowing the field to eight experts, such as Paul Merolla, who spent the last seven years as lead chip designer at IBM on its DARPA-funded SyNAPSE program, designing neuromorphic (brain-inspired) chips with 5.4 billion transistors, 1 million neurons, and 256 million synapses each, and Dongjin (DJ) Seo, who while at UC Berkeley designed “neural dust,” an ultrasonic backscatter system for powering and communicating with implanted bioelectronics that record brain activity.*

Mesh electronics being injected through sub-100 micrometer inner diameter glass needle into aqueous solution (credit: Lieber Research Group, Harvard University)

Becoming one with AI — a good thing?

Neuralink’s goal is to create a “digital tertiary layer” to augment the brain’s current cortex and limbic layers: a radical high-bandwidth, long-lasting, biocompatible, bidirectional, non-invasively implanted communication system made up of micron-size (millionth of a meter) particles communicating wirelessly via the cloud and internet to achieve super-fast communication speed and increased bandwidth (carrying more information).

“We’re going to have the choice of either being left behind and being effectively useless or like a pet — you know, like a house cat or something — or eventually figuring out some way to be symbiotic and merge with AI. … A house cat’s a good outcome, by the way.”

Thin, flexible electrodes mounted on top of a biodegradable silk substrate could provide a better brain-machine interface, as shown in this model. (credit: University of Illinois at Urbana-Champaign)

But machine intelligence is already vastly superior to human intelligence in specific areas (such as Google DeepMind’s AlphaGo), and its decisions are often inexplicable. So how do we know a superintelligence would have the best interests of humanity in mind?

“Just an engineering problem”

Musk’s answer: “If we achieve tight symbiosis, the AI wouldn’t be ‘other’ — it would be you and with a relationship to your cortex analogous to the relationship your cortex has with your limbic system.” OK, but then how does an inferior intelligence know when it’s achieved full symbiosis with a superior one — or when AI goes rogue?

Brain-to-brain (B2B) internet communication system: EEG signals representing two words were encoded into binary strings (left) by the sender (emitter) and sent via the internet to a receiver. The signal was then encoded as a series of transcranial magnetic stimulation-generated phosphenes detected by the visual occipital cortex, which the receiver then translated to words (credit: Carles Grau et al./PLoS ONE)

And what about experts in neuroethics, psychology, law? Musk says it’s just “an engineering problem. … If we can just use engineering to get neurons to talk to computers, we’ll have done our job, and machine learning can do much of the rest.”

However, it’s not clear how we could be assured our brains aren’t hacked, spied on, and controlled by a repressive government or by other humans — especially those with a more recently updated software version or covert cyborg hardware improvements.

NIRS/EEG brain-computer interface system using non-invasive near-infrared light for sensing “yes” or “no” thoughts, shown on a model (credit: Wyss Center for Bio and Neuroengineering)

In addition, the devices mentioned in WaitButWhy all require some form of neurosurgery, unlike Facebook’s research project to use non-invasive near-infrared light, as shown in this experiment, for example.** And getting implants for non-medical use approved by the FDA will be a challenge, to grossly understate it.

“I think we are about 8 to 10 years away from this being usable by people with no disability,” says Musk, optimistically. However, Musk does not lay out a technology roadmap for going further, as MIT Technology Review notes.

Nonetheless, Neuralink sounds awesome — it should lead to some exciting neuroscience breakthroughs. And Neuralink now has 16 San Francisco job listings here.

* Other experts: Vanessa Tolosa, Lawrence Livermore National Laboratory, one of the world’s foremost researchers on biocompatible materials; Max Hodak, who worked on the development of groundbreaking BMI technology at Miguel Nicolelis’s lab at Duke University; Ben Rapoport, Neuralink’s neurosurgery expert, with a Ph.D. in electrical engineering and computer science from MIT; Tim Hanson, UC Berkeley post-doc and expert in flexible electrodes for stable, minimally invasive neural recording; Flip Sabes, professor at the UCSF School of Medicine and expert in cortical physiology, computational and theoretical modeling, and human psychophysics and physiology; and Tim Gardner, Associate Professor of Biology at Boston University, whose lab works on implanting BMIs in birds to study “how complex songs are assembled from elementary neural units” and learn about “the relationships between patterns of neural activity on different time scales.”

** This binary experiment and the binary brain-to-brain (B2B) internet communication system mentioned above are the equivalents of the first binary (dot–dash) telegraph message, sent May 24, 1844: “What hath God wrought?”

Infrared-light-based Wi-Fi network is 100 times faster

Schematic of a beam of white light being dispersed by a prism into different wavelengths, similar in principle to how the new near-infrared WiFi system works (credit: Lucas V. Barbosa/CC)

A new infrared-light WiFi network can provide more than 40 gigabits per second (Gbps) for each user* — about 100 times faster than current WiFi systems — say researchers at Eindhoven University of Technology (TU/e) in the Netherlands.

The TU/e WiFi design was inspired by experimental systems using ceiling LED lights (such as Oregon State University’s experimental WiFiFO, or WiFi Free space Optic, system), which can increase the total per-user speed of WiFi systems and extend the range to multiple rooms, while avoiding interference from neighboring WiFi systems. (However, WiFiFO is limited to 100 Mbps.)

Experimental Oregon State University system uses LED lighting to boost the bandwidth of Wi-Fi systems and extend range (credit: Thinh Nguyen/Oregon State University)

Near-infrared light

Instead of visible light, the TU/e system uses invisible near-infrared light.** Supplied with light by a fiber-optic cable, a few central “light antennas” (mounted on the ceiling, for instance) each use a pair of “passive diffraction gratings” that radiate light rays of different wavelengths at different angles.

That allows for directing the light beams to specific users. The network tracks the precise location of every wireless device, using a radio signal transmitted in the return direction.***
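As a rough illustration of how a passive grating maps wavelength to direction, the standard grating equation d·sin(θ) = m·λ can be evaluated for nearby infrared wavelengths. The grating pitch and diffraction order below are assumed values for the sketch, not TU/e specifications.

```python
import math

# Sketch: a passive diffraction grating sends different wavelengths out at
# different angles, per the grating equation d*sin(theta) = m*lambda.
# The pitch d and order m are illustrative assumptions, not TU/e specs.

d = 3.0e-6   # grating pitch: 3 micrometers (assumed)
m = 1        # first diffraction order

def beam_angle_deg(wavelength_m):
    """Diffraction angle for a given wavelength, at normal incidence."""
    return math.degrees(math.asin(m * wavelength_m / d))

# Nearby infrared wavelengths leave the grating at distinct angles, so each
# user can be assigned a wavelength, and therefore a direction.
for wl_nm in (1490, 1500, 1510):
    print(f"{wl_nm} nm -> {beam_angle_deg(wl_nm * 1e-9):.2f} degrees")
```

Because the mapping from wavelength to angle is fixed by the grating geometry, steering a beam to a tracked user is just a matter of choosing which wavelength to feed the antenna.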

The TU/e system uses infrared light with a wavelength of 1500 nanometers (a frequency of 200 terahertz, or 40,000 times higher than 5GHz), allowing for significantly increased capacity. The system has so far used the light rays only for downloading; uploads are still done using WiFi radio signals, since much less capacity is usually needed for uploading.
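A quick back-of-envelope check of those figures, using f = c/λ:

```python
# Converting the 1500 nm wavelength to frequency via f = c / lambda,
# and comparing it with the 5 GHz WiFi band mentioned above.

c = 299_792_458          # speed of light, m/s
wavelength = 1500e-9     # 1500 nanometers

freq = c / wavelength
print(f"{freq / 1e12:.0f} THz")              # ~200 THz
print(f"{freq / 5e9:,.0f}x the 5 GHz band")  # ~40,000x
```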

The researchers expect it will take five years or more for the new technology to be commercially available. The first devices to be connected will likely be high-data devices like video monitors, laptops, and tablets.

* That speed is 67 times the current 802.11n WiFi system’s max theoretical speed of 600 Mbps — which must be shared among users, so the per-user ratio is actually about 100 times, according to TU/e researchers. It is also 16 times the 2.5 Gbps real-world performance of the best current (802.11ac) WiFi system — which is likewise shared (so actually lower per user) — and which, in addition, uses the 5GHz wireless band, which has limited range. “The theoretical max speed of 802.11ac is eight 160MHz 256-QAM channels, each of which are capable of 866.7Mbps, for a total of 6,933Mbps, or just shy of 7Gbps,” notes ExtremeTech. “In the real world, thanks to channel contention, you probably won’t get more than two or three 160MHz channels, so the max speed comes down to somewhere between 1.7Gbps and 2.5Gbps. Compare this with 802.11n’s max theoretical speed, which is 600Mbps.”
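The speed ratios in this footnote are easy to verify:

```python
# Checking the footnote's ratios: 40 Gbps per user vs. 802.11n and 802.11ac.

per_user_gbps = 40
wifi_n_mbps = 600     # 802.11n max theoretical speed
wifi_ac_gbps = 2.5    # realistic 802.11ac ceiling cited above

print(per_user_gbps * 1000 / wifi_n_mbps)  # ~67x over 802.11n
print(per_user_gbps / wifi_ac_gbps)        # 16x over 802.11ac
```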

** The TU/e system was designed by Joanne Oh as a doctoral thesis and part of the wider BROWSE project headed up by professor of broadband communication technology Ton Koonen, with funding from the European Research Council, under the auspices of the noted TU/e Institute for Photonic Integration.

*** According to TU/e researchers, a few other groups are investigating network concepts in which infrared light rays are directed using movable mirrors. The disadvantage is that this requires active control of the mirrors, and therefore energy, and each mirror can handle only one ray of light at a time. The passive gratings used by Koonen and Oh can cope with many rays of light, and therefore many devices, at the same time.


SpaceX plans global space internet

(credit: SpaceX)

SpaceX has applied to the FCC to launch 11,943 satellites into low-Earth orbit, providing “ubiquitous high-bandwidth (up to 1Gbps per user, once fully deployed) broadband services for consumers and businesses in the U.S. and globally,” according to FCC applications.

Recent meetings with the FCC suggest that the plan now looks like “an increasingly feasible reality — particularly with 5G technologies just a few years away, promising new devices and new demand for data,” Verge reports.

Such a service will be particularly useful to rural areas, which have limited access to internet bandwidth.

Low-Earth orbit (at up to 2,000 kilometers, or 1,200 miles) ensures lower latency (communication delay between Earth and satellite) — making the service usable for voice communications via Skype, for example — compared with geosynchronous orbit (at 35,786 kilometers, or 22,000 miles), used by Dish Network and other satellite ISP services.* The downside: it takes many more satellites to provide the same coverage.
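The latency advantage follows directly from the geometry. A best-case propagation-delay estimate (satellite directly overhead, ignoring processing and routing delays, which is why real figures are higher):

```python
# Rough round-trip propagation delay (ground -> satellite -> ground) for the
# two orbit altitudes mentioned above, speed-of-light only.

c = 299_792_458  # speed of light, m/s

def round_trip_ms(altitude_km):
    # Up and back down, straight overhead: best case.
    return 2 * altitude_km * 1000 / c * 1000

print(f"LEO (2,000 km): {round_trip_ms(2000):.1f} ms")    # ~13 ms
print(f"GEO (35,786 km): {round_trip_ms(35786):.1f} ms")  # ~239 ms
```

This is why a geostationary link can never get far below a quarter-second round trip, whatever the hardware, while LEO constellations can approach wired-internet latencies.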

Boeing, Softbank-backed OneWeb (which hopes to “connect every school to the Internet by 2022”), Telesat, and others** have proposed similar services, possibly bringing the total number of satellites in low and mid Earth orbits to about 20,000 in the 2020s, estimates Next Big Future.

* “SpaceX expects its latencies between 25 and 35ms, similar to the latencies measured for wired Internet services. Current satellite ISPs have latencies of 600ms or more, according to FCC measurements,” notes Ars Technica.

** Audacy, Karousel, Kepler Communications, LeoSat, O3b, Space Norway, Theia Holdings, and ViaSat, according to Space News. The ITU [the international counterpart of the FCC] has set rules preventing new constellations from interfering with established ground and satellite systems operating in the same frequencies. OneWeb, for example, has said it will basically switch off power as its satellites cross the equator, so as not to disturb transmissions from geostationary-orbit satellites directly above that use Ku-band frequencies.