New system allows near-zero-power sensors to communicate data over long distances

This low-cost, flexible epidermal medical-data patch prototype successfully transmitted information at up to 37,500 bits per second across a 3,300-square-foot atrium. (credit: Dennis Wise/University of Washington)

University of Washington (UW) researchers have developed a low-cost, long-range data-communication system that could let medical sensors or billions of low-cost “internet of things” objects connect via radio signals at distances of up to 2.8 kilometers, while using roughly 1,000 times less power than existing technologies (9.25 microwatts in an experiment).

“People have been talking about embedding connectivity into everyday objects … for years, but the problem is the cost and power consumption to achieve this,” said Vamsi Talla, chief technology officer of Jeeva Wireless, which plans to market the system within six months. “This is the first wireless system that can inject connectivity into any device with very minimal cost.”

The new system relies on “backscatter,” which harvests energy from ambient transmissions (from WiFi, for example) to power a passive sensor that encodes data onto, and reflects, the signal. (This article explains how ambient backscatter, developed by UW, works.) Backscatter systems, used with RFID chips, are very low cost, but their range is limited.

So the researchers combined backscatter with a “chirp spread spectrum” technique, used in LoRa (long-range) wireless data-communication systems.
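To make the chirp idea concrete, here is a minimal sketch (illustrative only; the spreading factor and noise level are assumed values, not the UW/Jeeva parameters) of how a LoRa-style chirp-spread-spectrum symbol encodes a data value as a cyclically shifted frequency sweep, and how a receiver recovers it by dechirping and taking an FFT:

```python
import numpy as np

# Minimal chirp-spread-spectrum (CSS) illustration. The spreading factor and
# noise level are assumed values for demonstration, not UW/Jeeva or LoRa
# hardware settings.
SF = 7                 # spreading factor: 2**SF distinct symbol values
N = 2 ** SF            # samples per symbol (one chip per sample here)

def css_symbol(value: int) -> np.ndarray:
    """Baseband CSS symbol: an up-chirp cyclically shifted by `value`."""
    k = (np.arange(N) + value) % N           # the cyclic shift carries the data
    return np.exp(1j * np.pi * k * k / N)    # quadratic phase = linear frequency sweep

def decode(symbol: np.ndarray) -> int:
    """Multiply by a down-chirp ("dechirp"), then locate the FFT peak bin."""
    n = np.arange(N)
    downchirp = np.exp(-1j * np.pi * n * n / N)
    return int(np.argmax(np.abs(np.fft.fft(symbol * downchirp))))

sent = 42
noisy = css_symbol(sent) + 0.5 * (np.random.randn(N) + 1j * np.random.randn(N))
print(decode(noisy))   # typically recovers 42 despite the added noise
```

Because the data rides on the timing of the frequency sweep rather than on signal strength, the dechirp-and-FFT correlation can pull symbols out of very weak, noisy reflections — which is what lets a passive backscatter device reach ranges that backscatter alone could not.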

This tiny off-the-shelf spread-spectrum receiver enables extremely low-power, cheap sensors to communicate over long distances. (credit: Dennis Wise/University of Washington)

This new system has three components: a power source for the radio signal (which can be WiFi or other ambient transmissions, or cheap flexible printed batteries, with an expected bulk cost of 10 to 20 cents each); cheap sensors (less than 10 cents at scale) that modulate (encode) information into scattered reflections of that signal; and an inexpensive, off-the-shelf spread-spectrum receiver, located as far away as 2.8 kilometers, that decodes the sensor information.

Applications could include, for example, medical monitoring devices that wirelessly transmit information about a heart patient’s condition to doctors; sensor arrays that monitor pollution, noise, or traffic in “smart” cities; and farmers looking to measure soil temperature or moisture, who could affordably blanket an entire field to determine how to efficiently plant seeds or water.

The research team built a contact lens prototype and a flexible epidermal patch that attaches to human skin, both of which successfully used long-range backscatter to transmit information across a 3,300-square-foot building.

The research, which was partially funded by the National Science Foundation, is detailed in an open-access paper presented Sept. 13, 2017 at UbiComp 2017. More information: longrange@cs.washington.edu.


UW (University of Washington) | UW team shatters long-range communication barrier for devices that consume almost no power


Abstract of LoRa Backscatter: Enabling The Vision of Ubiquitous Connectivity

The vision of embedding connectivity into billions of everyday objects runs into the reality of existing communication technologies — there is no existing wireless technology that can provide reliable and long-range communication at tens of microwatts of power as well as cost less than a dime. While backscatter is low-power and low-cost, it is known to be limited to short ranges. This paper overturns this conventional wisdom about backscatter and presents the first wide-area backscatter system. Our design can successfully backscatter from any location between an RF source and receiver, separated by 475 m, while being compatible with commodity LoRa hardware. Further, when our backscatter device is co-located with the RF source, the receiver can be as far as 2.8 km away. We deploy our system in a 4,800 ft² (446 m²) house spread across three floors, a 13,024 ft² (1210 m²) office area covering 41 rooms, as well as a one-acre (4046 m²) vegetable farm and show that we can achieve reliable coverage, using only a single RF source and receiver. We also build a contact lens prototype as well as a flexible epidermal patch device attached to the human skin. We show that these devices can reliably backscatter data across a 3,328 ft² (309 m²) room. Finally, we present a design sketch of a LoRa backscatter IC that shows that it costs less than a dime at scale and consumes only 9.25 μW of power, which is more than 1000x lower power than LoRa radio chipsets.

‘Fog computing’ could improve communications during natural disasters

Hurricane Irma at peak intensity near the U.S. Virgin Islands on September 6, 2017 (credit: NOAA)

Researchers at the Georgia Institute of Technology have developed a system that uses edge computing (also known as fog computing) to deal with the loss of internet access in natural disasters such as hurricanes, tornados, and floods.

The idea is to create an ad hoc decentralized network that uses computing power built into mobile phones, routers, and other hardware to provide actionable data to emergency managers and first responders.

In a flooded area, for example, search and rescue personnel could continuously ping enabled phones, surveillance cameras, and “internet of things” devices in an area to determine their exact locations. That data could then be used to create density maps of people to prioritize and guide emergency response teams.

Situational awareness for first responders

“We believe fog computing can become a potent enabler of decentralized, local social sensing services that can operate when internet connectivity is constrained,” said Kishore Ramachandran, PhD, computer science professor at Georgia Tech and senior author of a paper presented in April this year at the 2nd International Workshop on Social Sensing*.

“This capability will provide first responders and others with the level of situational awareness they need to make effective decisions in emergency situations.”

The team has proposed a generic software architecture for social sensing applications that is capable of exploiting the fog-enabled devices. The design has three components: a central management function that resides in the cloud, a data processing element placed in the fog infrastructure, and a sensing component on the user’s device.
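As a rough sketch of how those three components might interact (class names, methods, and the grid-based density map are illustrative assumptions, not the paper’s actual architecture or API), the following shows device-level sensing feeding a fog node that aggregates data locally and syncs a summary to the cloud only when connectivity is available:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the three-tier design described above: device-level
# sensing -> fog-node processing -> cloud management. Names and structure are
# illustrative assumptions, not the paper's API.

@dataclass
class DeviceSensor:
    device_id: str
    location: tuple  # (latitude, longitude)

    def ping(self) -> dict:
        """Report this device's location in response to a responder's query."""
        return {"id": self.device_id, "location": self.location}

@dataclass
class FogNode:
    """Runs on local hardware (router, phone, vehicle), even with no internet."""
    reports: list = field(default_factory=list)

    def collect(self, sensors):
        self.reports.extend(s.ping() for s in sensors)

    def density_map(self, cell=0.01) -> dict:
        """Bucket reported locations into a coarse grid for response teams."""
        grid = {}
        for r in self.reports:
            lat, lon = r["location"]
            key = (round(lat / cell), round(lon / cell))
            grid[key] = grid.get(key, 0) + 1
        return grid

class CloudManager:
    """Central management function; reachable only when connectivity returns."""
    def __init__(self):
        self.summaries = []

    def sync(self, fog_node: FogNode):
        self.summaries.append(fog_node.density_map())

# Usage: a fog node pings nearby devices, builds a density map entirely
# offline, and uploads the summary once internet access is restored.
sensors = [DeviceSensor("phone-1", (33.7490, -84.3880)),
           DeviceSensor("cam-7", (33.7495, -84.3879))]
fog = FogNode()
fog.collect(sensors)
print(fog.density_map())
cloud = CloudManager()
cloud.sync(fog)
```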

Beyond emergency response during natural disasters, the team believes its proposed fog architecture can also benefit communities with limited or no internet access — for public transportation management, job recruitment, and housing, for example.

To monitor far-flung devices in areas with no internet access, a bus or other vehicle could be outfitted with fog-enabled sensing capabilities, the team suggests. As it travels in remote areas, it would collect data from sensing devices. Once in range of internet connectivity, the “data mule” bus would upload that information to centralized cloud-based platforms.

* “Social sensing has emerged as a new paradigm for collecting sensory measurements by means of “crowd-sourcing” sensory data collection tasks to a human population. Humans can act as sensor carriers (e.g., carrying GPS devices that share location data), sensor operators (e.g., taking pictures with smart phones), or as sensors themselves (e.g., sharing their observations on Twitter). The proliferation of sensors in the possession of the average individual, together with the popularity of social networks that allow massive information dissemination, heralds an era of social sensing that brings about new research challenges and opportunities in this emerging field.” — SocialSens2017

Ray Kurzweil reveals plans for ‘linguistically fluent’ Google software

Smart Reply (credit: Google Research)

Ray Kurzweil, a director of engineering at Google, reveals plans for a future version of Google’s “Smart Reply” machine-learning email software (and more) in a Wired article by Tom Simonite published Wednesday (Aug. 2, 2017).

Running on mobile Gmail and Google Inbox, Smart Reply suggests up to three replies to an email message, saving typing time or giving you ideas for a better reply.

Smarter autocomplete

Kurzweil’s team is now “experimenting with empowering Smart Reply to elaborate on its initial terse suggestions,” Simonite says.

“Tapping a Continue button [in response to an email] might cause ‘Sure I’d love to come to your party!’ to expand to include, for example, ‘Can I bring something?’ He likes the idea of having AI pitch in anytime you’re typing, a bit like an omnipresent, smarter version of Google’s search autocomplete. ‘You could have similar technology to help you compose documents or emails by giving you suggestions of how to complete your sentence,’ Kurzweil says.”

As Simonite notes, Kurzweil’s software is based on his hierarchical theory of intelligence, articulated in his latest book, How to Create a Mind, and in more detail in an arXiv paper by Kurzweil and key members of his team, published in May.

“Kurzweil’s work outlines a path to create a simulation of the human neocortex (the outer layer of the brain where we do much of our thinking) by building a hierarchy of similarly structured components that encode increasingly abstract ideas as sequences,” according to the paper. “Kurzweil provides evidence that the neocortex is a self-organizing hierarchy of modules, each of which can learn, remember, recognize and/or generate a sequence, in which each sequence consists of a sequential pattern from lower-level modules.”

The paper further explains that Smart Reply previously used “long short-term memory” (LSTM) networks*, “which are much slower than feed-forward networks [used in the new software] for training and inference” because with LSTM, it takes more computation to handle longer sequences of words.

Kurzweil’s team was able to produce email responses of similar quality to LSTM, but using fewer computational resources by training hierarchically connected layers of simulated neurons on clustered numerical representations of text. Essentially, the approach propagates information through a sequence of ever more complex pattern recognizers until the final patterns are matched to optimal responses.
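The following toy sketch (not Google’s code; the hashed bag-of-words features, layer sizes, and random weights are assumptions for illustration) shows the general shape of such a feed-forward approach — the message is encoded in a single pass through stacked layers and matched against a fixed set of candidate replies, rather than being decoded word by word by a recurrent network:

```python
import numpy as np

# Illustrative sketch (not Google's implementation): a feed-forward hierarchy
# maps an email to a fixed-length vector in a single pass and matches it
# against candidate replies -- contrast with an LSTM, which would process the
# text word by word with recurrent state. Weights are random here; in practice
# they would be trained on (message, reply) pairs.
rng = np.random.default_rng(0)
DIM = 64

def featurize(text: str) -> np.ndarray:
    """Hashed bag-of-words: a crude stand-in for clustered text representations."""
    v = np.zeros(DIM)
    for word in text.lower().split():
        v[hash(word) % DIM] += 1.0
    return v

# Two stacked feed-forward layers ("pattern recognizers" of increasing abstraction).
W1, W2 = rng.standard_normal((DIM, 64)), rng.standard_normal((64, 32))

def encode(text: str) -> np.ndarray:
    h1 = np.maximum(featurize(text) @ W1, 0.0)   # layer 1: low-level patterns
    return np.maximum(h1 @ W2, 0.0)              # layer 2: more abstract patterns

candidates = ["Sure, I'd love to come!", "Sorry, I can't make it.", "What time?"]
cand_vecs = np.stack([encode(c) for c in candidates])

def suggest(message: str) -> str:
    scores = cand_vecs @ encode(message)         # match final pattern to replies
    return candidates[int(np.argmax(scores))]

print(suggest("Are you free for the party on Saturday?"))
```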

Kona: linguistically fluent software

But underlying Smart Reply is “a system for understanding the meaning of language, according to Kurzweil,” Simonite reports.

“Codenamed Kona, the effort is aiming for nothing less than creating software as linguistically fluent as you or me. ‘I would not say it’s at human levels, but I think we’ll get there,’ Kurzweil says. More applications of Kona are in the works and will surface in future Google products, he promises.”

* The previous sequence-to-sequence (Seq2Seq) framework [described in this paper] uses “recurrent neural networks (RNNs), typically long short-term memory (LSTM) networks, to encode sequences of word embeddings into representations that depend on the order, and uses a decoder RNN to generate output sequences word by word. …While Seq2Seq models provide a generalized solution, it is not obvious that they are maximally efficient, and training these systems can be slow and complicated.”

How to run faster, smarter AI apps on smartphones

(credit: iStock)

When you use smartphone AI apps like Siri, you’re dependent on the cloud for a lot of the processing — limited by your connection speed. But what if your smartphone could do more of the processing directly on your device — allowing for smarter, faster apps?

MIT scientists have taken a step in that direction with a new way to enable artificial-intelligence systems called convolutional neural networks (CNNs) to run locally on mobile devices. (CNNs are used in areas such as autonomous driving, speech recognition, computer vision, and automatic translation.) Neural networks* take up a lot of memory and consume a lot of power, so they usually run on servers in the cloud, which receive data from desktop or mobile devices and then send back their analyses.

The new MIT analytic method can determine how much power a neural network will actually consume when run on a particular type of hardware. The researchers used the method to evaluate new techniques for paring down neural networks so that they’ll run more efficiently on handheld devices.

The new CNN designs are also optimized to run on an energy-efficient computer chip, designed specifically for neural networks, that the researchers developed in 2016.

Reducing energy consumption

The new MIT software method uses “energy-aware pruning” — reducing a neural network’s power consumption by cutting out the parts of the network that contribute very little to the final output yet consume the most energy.
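The toy sketch below conveys the flavor of that idea (the energy model and pruning threshold are made-up assumptions, not MIT’s hardware-derived estimates): score each weight by how much it contributes to the output, charge energy per multiply, and drop the low-contribution weights:

```python
import numpy as np

# Toy "energy-aware pruning" sketch -- the energy model and thresholds here are
# invented assumptions for illustration; the MIT method uses hardware-derived
# energy estimates.
rng = np.random.default_rng(1)

# A tiny linear layer standing in for a trained network: y = W @ x
W = rng.standard_normal((10, 100)) * (rng.random((10, 100)) < 0.3)  # sparse-ish weights
x_batch = rng.standard_normal((100, 256))                           # sample inputs

def energy(weights: np.ndarray) -> float:
    """Assumed energy model: one energy unit per nonzero multiply."""
    return float(np.count_nonzero(weights))

def contribution(weights: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Average magnitude each weight contributes to the layer's output."""
    return np.abs(weights) * np.mean(np.abs(x), axis=1)  # broadcasts over output rows

baseline_out = W @ x_batch
scores = contribution(W, x_batch)

# Prune the 50% of (nonzero) weights with the smallest contribution: they cost
# energy but add little to the output.
threshold = np.quantile(scores[W != 0], 0.5)
W_pruned = np.where(scores >= threshold, W, 0.0)

err = np.linalg.norm(W_pruned @ x_batch - baseline_out) / np.linalg.norm(baseline_out)
print(f"energy: {energy(W):.0f} -> {energy(W_pruned):.0f}, relative output error: {err:.3f}")
```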

Associate professor of electrical engineering and computer science Vivienne Sze and colleagues describe the work in an open-access paper they’re presenting this week (of July 24, 2017) at the Computer Vision and Pattern Recognition Conference. They report that the methods offered up to 73 percent reduction in power consumption over the standard implementation of neural networks — 43 percent better than the best previous method.

Meanwhile, another MIT group at the Computer Science and Artificial Intelligence Laboratory has designed a hardware approach to reduce energy consumption and increase computer-chip processing speed for specific apps, using “cache hierarchies.” (“Caches” are small, local memory banks that store data that’s frequently used by computer chips to cut down on time- and energy-consuming communication with off-chip memory.)**

The researchers tested their system on a simulation of a chip with 36 cores, or processing units. They found that compared to its best-performing predecessors, the system increased processing speed by 20 to 30 percent while reducing energy consumption by 30 to 85 percent. They presented the new system, dubbed Jenga, in an open-access paper at the International Symposium on Computer Architecture earlier in July 2017.
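To see the kind of trade-off such a system weighs (a toy cost model with invented numbers, not Jenga’s actual algorithm), consider two cores choosing memory banks: dumping both working sets into the nearest bank creates contention, so spreading the data can lower total latency — the effect the ** footnote below describes:

```python
# Toy cost model -- all numbers are invented for illustration; Jenga's real
# decisions use measured bandwidth and latency estimates (see ** note below).

BASE_LATENCY = {"near": 10, "far": 25}   # cycles per access to each memory bank
CONTENTION_PENALTY = 30                  # extra cycles per additional core sharing a bank

def total_latency(assignment):
    """assignment maps core -> bank; returns summed per-access latency across cores."""
    sharers = {}
    for bank in assignment.values():
        sharers[bank] = sharers.get(bank, 0) + 1
    return sum(BASE_LATENCY[bank] + CONTENTION_PENALTY * (sharers[bank] - 1)
               for bank in assignment.values())

naive = {"core0": "near", "core1": "near"}   # both cores grab the nearest bank
spread = {"core0": "near", "core1": "far"}   # one core accepts a farther, uncontended bank
print(total_latency(naive), total_latency(spread))   # 80 vs 35: spreading wins here
```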

Better batteries — or maybe, no battery?

Another solution to better mobile AI is improving rechargeable batteries in cell phones (and other mobile devices), which have limited charge capacity and short lifecycles, and perform poorly in cold weather.

Recently, DARPA-funded researchers from the University of Houston (and at the University of California-San Diego and Northwestern University) have discovered that quinones — an inexpensive, earth-abundant, easily recyclable, and nonflammable material — can address current battery limitations.***

“One of these batteries, as a car battery, could last 10 years,” said Yan Yao, associate professor of electrical and computer engineering. In addition to slowing the deterioration of batteries for vehicles and stationary electricity storage, it also would make battery disposal easier because the material does not contain heavy metals. The research is described in Nature Materials.

The first battery-free cellphone that can send and receive calls using only a few microwatts of power. (credit: Mark Stone/University of Washington)

But what if we eliminated batteries altogether? University of Washington researchers have invented a cellphone that requires no batteries. Instead, it harvests 3.5 microwatts of power from ambient radio signals, light, or even the vibrations of a speaker.

The new technology is detailed in a paper published July 1, 2017 in the Proceedings of the Association for Computing Machinery on Interactive, Mobile, Wearable and Ubiquitous Technologies.

The UW researchers demonstrated how to harvest this energy from ambient radio signals transmitted by a WiFi base station up to 31 feet away. “You could imagine in the future that all cell towers or Wi-Fi routers could come with our base station technology embedded in it,” said co-author Vamsi Talla, a former UW electrical engineering doctoral student and Allen School research associate. “And if every house has a Wi-Fi router in it, you could get battery-free cellphone coverage everywhere.”

A cellphone CPU (central processing unit) typically requires several watts or more (depending on the app), so we’re not quite there yet. But that power requirement could one day be sufficiently reduced by future special-purpose chips and MIT’s optimized algorithms.

It might even let you do amazing things. :)

* Loosely based on the anatomy of the brain, neural networks consist of thousands or even millions of simple but densely interconnected information-processing nodes, usually organized into layers. The connections between nodes have “weights” associated with them, which determine how much a given node’s output will contribute to the next node’s computation. During training, in which the network is presented with examples of the computation it’s learning to perform, those weights are continually readjusted, until the output of the network’s last layer consistently corresponds with the result of the computation. With the proposed pruning method, the energy consumption of AlexNet and GoogLeNet are reduced by 3.7x and 1.6x, respectively, with less than 1% top-5 accuracy loss.

** The software reallocates cache access on the fly to reduce latency (delay), based on the physical locations of the separate memory banks that make up the shared memory cache. If multiple cores are retrieving data from the same DRAM [memory] cache, this can cause bottlenecks that introduce new latencies. So after Jenga has come up with a set of cache assignments, cores don’t simply dump all their data into the nearest available memory bank; instead, Jenga parcels out the data a little at a time, then estimates the effect on bandwidth consumption and latency. 

*** The stumbling block, Yao said, has been the anode, the portion of the battery through which energy flows. Existing anode materials are intrinsically structurally and chemically unstable, meaning the battery is only efficient for a relatively short time. The differing formulations offer evidence that the material is an effective anode for both acid batteries and alkaline batteries, such as those used in a car, as well as emerging aqueous metal-ion batteries.

Google rolls out new ‘smart reply’ machine-learning email software to more than 1 billion Gmail mobile users

A smarter version of Smart Reply (credit: Google Research)

Google is rolling out an enhanced version of its “smart reply” machine-learning email software to “over 1 billion Android and iOS users of Gmail,” Google CEO Sundar Pichai said today (May 17, 2017) in a keynote at the annual Google I/O conference.

Smart Reply suggests up to three replies to an email message — saving you typing time, or giving you time to think through a better reply. Smart Reply was previously only available to users of Google Inbox (an app that helps Gmail users organize their email messages and reply efficiently).

Hierarchical model

Developed by a team headed by Ray Kurzweil, a Google director of engineering, “the new version of Smart Reply increases the percentage of usable suggestions and is much more algorithmically efficient than the original system,” said Kurzweil in a Google Research blog post with research colleague Brian Strope today. “And that efficiency now makes it feasible for us to provide Smart Reply for Gmail.”

A hierarchy of modules (credit: Google Research)

The team was inspired by how humans understand languages and concepts, based on hierarchical models of language, Kurzweil and Strope explained. The new approach uses “hierarchies of modules, each of which can learn, remember, and recognize a sequential pattern,” as described in Kurzweil’s 2012 book, How to Create a Mind.

For example, a sentence like “That interesting person at the cafe we like gave me a glance” is difficult to interpret. Was it a positive or negative gesture? But “given enough examples of language, a machine learning approach can discover many of these subtle distinctions,” they write.

Elon Musk wants to enhance us as superhuman cyborgs to deal with superintelligent AI

(credit: Neuralink Corp.)

It’s the year 2021. A quadriplegic patient has just had one million “neural lace” microparticles injected into her brain, the world’s first human with an internet communication system using a wireless implanted brain-mind interface — and empowering her as the first superhuman cyborg. …

No, this is not a science-fiction movie plot. It’s the actual first public step — just four years from now — in Tesla CEO Elon Musk’s business plan for his latest new venture, Neuralink. It’s now explained for the first time on Tim Urban’s WaitButWhy blog.

Dealing with the superintelligence existential risk

Such a system would allow for radically improved communication between people, Musk believes. But for Musk, the big concern is AI safety. “AI is obviously going to surpass human intelligence by a lot,” he says. “There’s some risk at that point that something bad happens, something that we can’t control, that humanity can’t control after that point — either a small group of people monopolize AI power, or the AI goes rogue, or something like that.”

“This is what keeps Elon up at night,” says Urban. “He sees it as only a matter of time before superintelligent AI rises up on this planet — and when that happens, he believes that it’s critical that we don’t end up as part of ‘everyone else.’ That’s why, in a future world made up of AI and everyone else, he thinks we have only one good option: To be AI.”

Neural dust: an ultrasonic, low power solution for chronic brain-machine interfaces (credit: Swarm Lab/UC Berkeley)

To achieve this, Neuralink CEO Musk has met with more than 1,000 people, narrowing it down initially to eight experts, such as Paul Merolla, who spent the last seven years as the lead chip designer at IBM on their DARPA-funded SyNAPSE program to design neuromorphic (brain-inspired) chips with 5.4 billion transistors (each with 1 million neurons and 256 million synapses), and Dongjin (DJ) Seo, who while at UC Berkeley designed an ultrasonic backscatter system, called neural dust, for powering and communicating with implanted bioelectronics that record brain activity.*

Mesh electronics being injected through sub-100 micrometer inner diameter glass needle into aqueous solution (credit: Lieber Research Group, Harvard University)

Becoming one with AI — a good thing?

Neuralink’s goal is to create a “digital tertiary layer” to augment the brain’s current cortex and limbic layers — a radical high-bandwidth, long-lasting, biocompatible, bidirectional, non-invasively implanted communication system made up of micron-size (millionth of a meter) particles communicating wirelessly via the cloud and internet to achieve super-fast communication speed and increased bandwidth (carrying more information).

“We’re going to have the choice of either being left behind and being effectively useless or like a pet — you know, like a house cat or something — or eventually figuring out some way to be symbiotic and merge with AI. … A house cat’s a good outcome, by the way.”

Thin, flexible electrodes mounted on top of a biodegradable silk substrate could provide a better brain-machine interface, as shown in this model. (credit: University of Illinois at Urbana-Champaign)

But machine intelligence is already vastly superior to human intelligence in specific areas (such as Google’s AlphaGo) and often inexplicable. So how do we know superintelligence has the best interests of humanity in mind?

“Just an engineering problem”

Musk’s answer: “If we achieve tight symbiosis, the AI wouldn’t be ‘other’ — it would be you and with a relationship to your cortex analogous to the relationship your cortex has with your limbic system.” OK, but then how does an inferior intelligence know when it’s achieved full symbiosis with a superior one — or when AI goes rogue?

Brain-to-brain (B2B) internet communication system: EEG signals representing two words were encoded into binary strings (left) by the sender (emitter) and sent via the internet to a receiver. The signal was then encoded as a series of transcranial magnetic stimulation-generated phosphenes detected by the visual occipital cortex, which the receiver then translated to words (credit: Carles Grau et al./PLoS ONE)

And what about experts in neuroethics, psychology, law? Musk says it’s just “an engineering problem. … If we can just use engineering to get neurons to talk to computers, we’ll have done our job, and machine learning can do much of the rest.”

However, it’s not clear how we could be assured our brains aren’t hacked, spied on, and controlled by a repressive government or by other humans — especially those with a more recently updated software version or covert cyborg hardware improvements.

NIRS/EEG brain-computer interface system using non-invasive near-infrared light for sensing “yes” or “no” thoughts, shown on a model (credit: Wyss Center for Bio and Neuroengineering)

In addition, the devices mentioned in WaitButWhy all require some form of neurosurgery, unlike Facebook’s research project to use non-invasive near-infrared light, as shown in this experiment, for example.** And getting implants for non-medical use approved by the FDA will be a challenge, to grossly understate it.

“I think we are about 8 to 10 years away from this being usable by people with no disability,” says Musk, optimistically. However, Musk does not lay out a technology roadmap for going further, as MIT Technology Review notes.

Nonetheless, Neuralink sounds awesome — it should lead to some exciting neuroscience breakthroughs. And Neuralink now has 16 San Francisco job listings here.

* Other experts: Vanessa Tolosa, Lawrence Livermore National Laboratory, one of the world’s foremost researchers on biocompatible materials; Max Hodak, who worked on the development of some groundbreaking BMI technology at Miguel Nicolelis’s lab at Duke University; Ben Rapoport, Neuralink’s neurosurgery expert, with a Ph.D. in Electrical Engineering and Computer Science from MIT; Tim Hanson, UC Berkeley post-doc and expert in flexible electrodes for stable, minimally invasive neural recording; Flip Sabes, professor at the UCSF School of Medicine and expert in cortical physiology, computational and theoretical modeling, and human psychophysics and physiology; and Tim Gardner, Associate Professor of Biology at Boston University, whose lab works on implanting BMIs in birds, to study “how complex songs are assembled from elementary neural units” and learn about “the relationships between patterns of neural activity on different time scales.”

** This binary experiment and the binary Brain-to-brain (B2B) internet communication system mentioned above are the equivalents of the first binary (dot–dash) telegraph message, sent May 24, 1844: “What hath God wrought?”

Infrared-light-based Wi-Fi network is 100 times faster

Schematic of a beam of white light being dispersed by a prism into different wavelengths, similar in principle to how a new near-infrared WiFi system works (credit: Lucas V. Barbosa/CC)

A new infrared-light WiFi network can provide more than 40 gigabits per second (Gbps) for each user* — about 100 times faster than current WiFi systems — say researchers at Eindhoven University of Technology (TU/e) in the Netherlands.

The TU/e WiFi design was inspired by experimental systems using ceiling LED lights (such as Oregon State University’s experimental WiFiFO, or WiFi Free space Optic, system), which can increase the total per-user speed of WiFi systems and extend the range to multiple rooms, while avoiding interference from neighboring WiFi systems. (However, WiFiFO is limited to 100 Mbps.)

Experimental Oregon State University system uses LED lighting to boost the bandwidth of Wi-Fi systems and extend range (credit: Thinh Nguyen/Oregon State University)

Near-infrared light

Instead of visible light, the TU/e system uses invisible near-infrared light.** Supplied by a fiber optic cable, a few central “light antennas” (mounted on the ceiling, for instance) each use a pair of “passive diffraction gratings” that radiate light rays of different wavelengths at different angles.

That allows for directing the light beams to specific users. The network tracks the precise location of every wireless device, using a radio signal transmitted in the return direction.***

The TU/e system uses infrared light with a wavelength of 1500 nanometers (a frequency of 200 terahertz, or 40,000 times higher than 5GHz), allowing for significantly increased capacity. The system has so far used the light rays only for downloading; uploads are still done using WiFi radio signals, since much less capacity is usually needed for uploading.
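The 200-terahertz figure follows directly from frequency = speed of light ÷ wavelength; a quick check (a worked calculation, not taken from the TU/e paper):

```python
# Quick check of the figures quoted above: f = c / wavelength.
c = 299_792_458            # speed of light, m/s
wavelength = 1500e-9       # 1500 nm, in meters

f = c / wavelength
print(f / 1e12)            # ~199.9 THz, i.e. roughly 200 terahertz
print(f / 5e9)             # ~40,000 times the 5 GHz WiFi band
```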

The researchers expect it will take five years or more for the new technology to be commercially available. The first devices to be connected will likely be high-data devices like video monitors, laptops, and tablets.

* That speed is 67 times higher than the current 802.11n WiFi system’s max theoretical speed of 600Mbps capacity — which has to be shared between users, so the ratio is actually about 100 times, according to TU/e researchers. That speed is also 16 times higher than the 2.5 Gbps performance with the best (802.11ac) Wi-Fi system — which also has to be shared (so actually lower) — and in addition, uses the 5GHz wireless band, which has limited range. “The theoretical max speed of 802.11ac is eight 160MHz 256-QAM channels, each of which are capable of 866.7Mbps, for a total of 6,933Mbps, or just shy of 7Gbps,” notes Extreme Tech. “In the real world, thanks to channel contention, you probably won’t get more than two or three 160MHz channels, so the max speed comes down to somewhere between 1.7Gbps and 2.5Gbps. Compare this with 802.11n’s max theoretical speed, which is 600Mbps.”
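A quick arithmetic check of those ratios, using the figures quoted in this note:

```python
# Checking the ratios quoted above, using the figures in this footnote.
per_user = 40e9            # 40 Gbps per user (TU/e system)
wifi_n = 600e6             # 802.11n max theoretical speed, shared between users
wifi_ac = 2.5e9            # realistic shared 802.11ac throughput

print(per_user / wifi_n)   # ~66.7, i.e. the "67 times" figure
print(per_user / wifi_ac)  # 16.0, i.e. the "16 times" figure
```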

** The TU/e system was designed by Joanne Oh as a doctoral thesis and part of the wider BROWSE project headed up by professor of broadband communication technology Ton Koonen, with funding from the European Research Council, under the auspices of the noted TU/e Institute for Photonic Integration.

*** According to TU/e researchers, a few other groups are investigating network concepts in which infrared-light rays are directed using movable mirrors. The disadvantage is that this requires active control of the mirrors, and therefore energy, and each mirror can handle only one ray of light at a time. The grating used by Koonen and Oh can cope with many rays of light and, therefore, many devices at the same time.


SpaceX plans global space internet

(credit: SpaceX)

SpaceX has applied to the FCC to launch 11,943 satellites into low-Earth orbit, providing “ubiquitous high-bandwidth (up to 1Gbps per user, once fully deployed) broadband services for consumers and businesses in the U.S. and globally,” according to FCC applications.

Recent meetings with the FCC suggest that the plan now looks like “an increasingly feasible reality — particularly with 5G technologies just a few years away, promising new devices and new demand for data,” Verge reports.

Such a service will be particularly useful to rural areas, which have limited access to internet bandwidth.

Low-Earth orbit (at up to 2,000 kilometers, or 1,200 mi) ensures lower latency (communication delay between Earth and satellite) — making the service usable for voice communications via Skype, for example — compared to geosynchronous orbit (at 35,786 kilometers, or 22,000 miles), offered by Dish Network and other satellite ISP services.* The downside: it takes a lot more satellites to provide the coverage.
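A back-of-the-envelope comparison makes the latency difference concrete (speed-of-light propagation only; real latencies add processing and routing delays, so they are higher — compare SpaceX’s expected 25–35 ms in the footnote below):

```python
# Rough propagation-delay comparison (speed of light only; ignores processing,
# queuing, and ground-network delays, so real latencies are higher).
c_km_per_ms = 299_792.458 / 1000   # ~300 km per millisecond

def round_trip_ms(altitude_km: float) -> float:
    """Up to the satellite and back down, assuming it is straight overhead."""
    return 2 * altitude_km / c_km_per_ms

print(round_trip_ms(2_000))    # low-Earth orbit at 2,000 km: about 13 ms
print(round_trip_ms(35_786))   # geosynchronous orbit: about 239 ms
```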

Boeing, Softbank-backed OneWeb (which hopes to “connect every school to the Internet by 2022”), Telesat, and others** have proposed similar services, possibly bringing the total number of satellites to about 20,000 in low and mid Earth orbits in the 2020s, estimates Next Big Future.

* “SpaceX expects its latencies between 25 and 35ms, similar to the latencies measured for wired Internet services. Current satellite ISPs have latencies of 600ms or more, according to FCC measurements,” notes Ars Technica.

** Audacy, Karousel, Kepler Communications, LeoSat, O3b, Space Norway, Theia Holdings, and ViaSat, according to Space News. The ITU [international counterpart of the FCC] has set rules preventing new constellations from interfering with established ground and satellite systems operating in the same frequencies. OneWeb, for example, has said it will basically switch off power as its satellites cross the equator so as not to disturb transmissions from geostationary-orbit satellites directly above and using Ku-band frequencies.

 

Brain-computer interface advance allows paralyzed people to type almost as fast as some smartphone users

Typing with your mind. You are paralyzed. But now, tiny electrodes have been surgically implanted in your brain to record signals from your motor cortex, the brain region controlling muscle movement. As you think of mousing over to a letter (or clicking to choose it), those electrical brain signals are transmitted via a cable to a computer (replacing your spinal cord and muscles). There, advanced algorithms decode the complex electrical brain signals, converting them instantly into screen actions. (credit: Chethan Pandarinath et al./eLife)

Stanford University researchers have developed a brain-computer interface (BCI) system that can enable people with paralysis* to type (using an on-screen cursor) at speeds and accuracy levels about three times higher than reported to date.

Simply by imagining their own hand movements, one participant was able to type 39 correct characters per minute (about eight words per minute); the other two participants averaged 6.3 and 2.7 words per minute, respectively — all without auto-complete assistance (so it could be much faster).

Those are communication rates that people with arm and hand paralysis would also find useful, the researchers suggest. “We’re approaching the speed at which you can type text on your cellphone,” said Krishna Shenoy, PhD, professor of electrical engineering, a co-senior author of the study, which was published in an open-access paper online Feb. 21 in eLife.

Braingate and beyond

The three study participants used a brain-computer interface called the “BrainGate Neural Interface System.” On KurzweilAI, we first discussed BrainGate in 2011, followed by a 2012 clinical trial that allowed a paralyzed patient to control a robot.

BrainGate in 2012 (credit: Brown University)

The new research, led by Stanford, takes the BrainGate technology way further**. Participants can now move a cursor (by just thinking about a hand movement) on a computer screen that displays the letters of the alphabet, and they can “point and click” on letters, computer-mouse-style, to type letters and sentences.

The new BCI uses a tiny silicon chip, just over one-sixth of an inch square, with 100 electrodes that penetrate the brain to about the thickness of a quarter and tap into the electrical activity of individual nerve cells in the motor cortex.

As the participant thinks of a specific hand-to-mouse movement (pointing at or clicking on a letter), neural electrical activity is recorded using 96-channel silicon microelectrode arrays implanted in the hand area of the motor cortex. These signals are then filtered to extract multiunit spiking activity and high-frequency field potentials, then decoded (using two algorithms) to provide “point-and-click” control of a computer cursor.
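As a rough sketch of that signal chain (a toy pipeline with made-up filter settings, thresholds, and a plain linear read-out; the study’s actual decoding algorithms are more sophisticated):

```python
import numpy as np
from scipy import signal

# Toy version of the decoding chain described above: filter raw voltages,
# threshold to get multiunit spikes, bin the counts, and map them to cursor
# velocity with a linear decoder. Filter settings, thresholds, and the linear
# decoder are illustrative assumptions, not the study's implementation.
FS = 30_000                      # samples/s, typical for intracortical arrays
N_CH, DUR_S = 96, 1.0
rng = np.random.default_rng(0)
raw = rng.standard_normal((N_CH, int(FS * DUR_S)))   # stand-in for recorded voltages

# 1. High-pass filter to isolate spiking-band activity.
b, a = signal.butter(4, 250, btype="highpass", fs=FS)
filtered = signal.filtfilt(b, a, raw, axis=1)

# 2. Threshold crossings -> multiunit spike counts in 20 ms bins.
thresholds = -4.0 * np.median(np.abs(filtered), axis=1, keepdims=True) / 0.6745
spikes = filtered < thresholds                       # boolean spike raster
bin_samples = int(0.020 * FS)
n_bins = spikes.shape[1] // bin_samples
counts = spikes[:, : n_bins * bin_samples].reshape(N_CH, n_bins, bin_samples).sum(axis=2)

# 3. Linear decode: map each 96-channel count vector to a (vx, vy) cursor velocity.
W = rng.standard_normal((2, N_CH)) * 0.01            # would be fit from training data
velocity = W @ counts                                # shape (2, n_bins)
print(velocity.shape)                                # one (vx, vy) estimate per 20 ms bin
```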

What’s next

The team next plans to adapt the system so that brain-computer interfaces can control commercial computers, phones, and tablets — perhaps extending out to the internet.

Beyond that, Shenoy predicted that a self-calibrating, fully implanted wireless BCI system with no required caregiver assistance and no “cosmetic impact” would be available in five to 10 years from now (“closer to five”).

Perhaps a future wireless, noninvasive version could let anyone simply think to select letters, words, ideas, and images — replacing the mouse and finger touch — along the lines of Elon Musk’s neural lace concept?

* Millions of people with paralysis reside in the U.S.

** The study’s results are the culmination of the long-running multi-institutional BrainGate consortium, which includes scientists at Massachusetts General Hospital, Brown University, Case Western University, and the VA Rehabilitation Research and Development Center for Neurorestoration and Neurotechnology in Providence, Rhode Island. The study was funded by the National Institutes of Health, the Stanford Office of Postdoctoral Affairs, the Craig H. Neilsen Foundation, the Stanford Medical Scientist Training Program, Stanford BioX-NeuroVentures, the Stanford Institute for Neuro-Innovation and Translational Neuroscience, the Stanford Neuroscience Institute, Larry and Pamela Garlick, Samuel and Betsy Reeves, the Howard Hughes Medical Institute, the U.S. Department of Veterans Affairs, the MGH-Dean Institute for Integrated Research on Atrial Fibrillation and Stroke and Massachusetts General Hospital.


Stanford | Stanford researchers develop brain-controlled typing for people with paralysis


Abstract of High performance communication by people with paralysis using an intracortical brain-computer interface

Brain-computer interfaces (BCIs) have the potential to restore communication for people with tetraplegia and anarthria by translating neural activity into control signals for assistive communication devices. While previous pre-clinical and clinical studies have demonstrated promising proofs-of-concept (Serruya et al., 2002; Simeral et al., 2011; Bacher et al., 2015; Nuyujukian et al., 2015; Aflalo et al., 2015; Gilja et al., 2015; Jarosiewicz et al., 2015; Wolpaw et al., 1998; Hwang et al., 2012; Spüler et al., 2012; Leuthardt et al., 2004; Taylor et al., 2002; Schalk et al., 2008; Moran, 2010; Brunner et al., 2011; Wang et al., 2013; Townsend and Platsko, 2016; Vansteensel et al., 2016; Nuyujukian et al., 2016; Carmena et al., 2003; Musallam et al., 2004; Santhanam et al., 2006; Hochberg et al., 2006; Ganguly et al., 2011; O’Doherty et al., 2011; Gilja et al., 2012), the performance of human clinical BCI systems is not yet high enough to support widespread adoption by people with physical limitations of speech. Here we report a high-performance intracortical BCI (iBCI) for communication, which was tested by three clinical trial participants with paralysis. The system leveraged advances in decoder design developed in prior pre-clinical and clinical studies (Gilja et al., 2015; Kao et al., 2016; Gilja et al., 2012). For all three participants, performance exceeded previous iBCIs (Bacher et al., 2015; Jarosiewicz et al., 2015) as measured by typing rate (by a factor of 1.4–4.2) and information throughput (by a factor of 2.2–4.0). This high level of performance demonstrates the potential utility of iBCIs as powerful assistive communication devices for people with limited motor function.

Someone is learning how to take down the Internet

Submarine cables map (credit: Teleography)

“Over the past year or two, someone has been probing the defenses of the companies that run critical pieces of the Internet,” according to a blog post by security expert Bruce Schneier.

“These probes take the form of precisely calibrated attacks designed to determine exactly how well these companies can defend themselves, and what would be required to take them down. It feels like a nation’s military cybercommand trying to calibrate its weaponry in the case of cyberwar.”

Schneier said major companies that provide the basic infrastructure that makes the Internet work [presumably, ones such as Cisco] have seen an increase in distributed denial of service (DDoS) attacks against them, and the attacks are significantly larger, last longer, and are more sophisticated.

“They look like probing — being forced to demonstrate their defense capabilities for the attacker.” This is similar to flying reconnaissance planes over a country to detect capabilities by making the enemy turn on air-defense radars.

Who might do this? “The size and scale of these probes — and especially their persistence — point to state actors. … China or Russia would be my first guesses.”