Graphene-based computer would be 1,000 times faster than silicon-based, use 1/100th the power

How a graphene-based transistor would work. A graphene nanoribbon (GNR) is created by unzipping (opening up) a portion of a carbon nanotube (CNT) (the flat area, shown with pink arrows above it). The GNR switching is controlled by two surrounding parallel CNTs. The magnitudes and relative directions of the control current, ICTRL (blue arrows), in the CNTs determine the rotation direction of the magnetic fields, B (green). The magnetic fields then control the GNR magnetization (based on the recent discovery of negative magnetoresistance), which causes the GNR to switch from resistive (no current) to conductive, resulting in current flow, IGNR (pink arrows) — in other words, causing the GNR to act as a transistor gate. The magnitude of the current flow through the GNR functions as the binary gate output — with binary 1 representing the current flow of the conductive state and binary 0 representing no current (the resistive state). (credit: Joseph S. Friedman et al./Nature Communications)

A future graphene-based transistor using spintronics could lead to tinier computers that are a thousand times faster and use a hundredth of the power of silicon-based computers.

The radical transistor concept, created by a team of researchers at Northwestern University, The University of Texas at Dallas, University of Illinois at Urbana-Champaign, and University of Central Florida, is explained this month in an open-access paper in the journal Nature Communications.

Transistors act as on and off switches. A series of transistors in different arrangements act as logic gates, allowing microprocessors to solve complex arithmetic and logic problems. But the speed of computer microprocessors that rely on silicon transistors has been relatively stagnant since around 2005, with clock speeds mostly in the 3 to 4 gigahertz range.
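The switch-to-gate idea can be sketched in a few lines of code. This is a purely illustrative model (the function names are ours, not from the article): a transistor passes a signal only while its gate is on, and series or parallel arrangements of such switches behave as AND and OR gates.

```python
# Illustrative model only: a transistor passes its source signal
# only while its gate input is on.

def transistor(gate: bool, source: bool) -> bool:
    return source if gate else False

def and_gate(a: bool, b: bool) -> bool:
    # Two switches in series: current flows only if both gates are on.
    return transistor(b, transistor(a, True))

def or_gate(a: bool, b: bool) -> bool:
    # Two switches in parallel: current flows if either gate is on.
    return transistor(a, True) or transistor(b, True)
```

A microprocessor composes millions of such gates to evaluate the arithmetic and logic described above.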

Clock speeds approaching the terahertz range

The researchers discovered that by applying a magnetic field to a graphene ribbon (created by unzipping a carbon nanotube), they could change the ribbon's resistance to current flowing through it. The magnetic field — controlled by increasing or decreasing the current through adjacent carbon nanotubes — increased or decreased that current flow.
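As a toy illustration of the switching described above (an invented sketch for clarity, not the paper's physics; the alignment rule below is our assumption): the relative directions of the two control currents decide whether the graphene nanoribbon ends up conductive (binary 1) or resistive (binary 0).

```python
# Toy model (invented for illustration): the two control currents flow
# either parallel (+1, +1 or -1, -1) or antiparallel, and we assume that
# parallel currents set the GNR into its low-resistance, conductive state.

def gnr_output(i_ctrl_a: int, i_ctrl_b: int) -> int:
    """Return the binary gate output: 1 = conductive (current flows),
    0 = resistive (no current). Inputs are +1 or -1 flow directions."""
    return 1 if i_ctrl_a == i_ctrl_b else 0
```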

A cascading series of graphene transistor-based logic circuits could produce a massive jump, with clock speeds approaching the terahertz range — a thousand times faster.* They would also be smaller and substantially more efficient, allowing device-makers to shrink technology and squeeze in more functionality, according to Ryan M. Gelfand, an assistant professor in The College of Optics & Photonics at the University of Central Florida.

The researchers hope to inspire the fabrication of these cascaded logic circuits to stimulate a future transformative generation of energy-efficient computing.

* Unlike other spintronic logic proposals, these new logic gates can be cascaded directly through the carbon materials, without requiring intermediate circuits and amplification between gates. The result would be compact, reduced-area circuits that are far more efficient than CMOS switching circuits, which are limited by charge transfer and accumulation from RLC (resistance-inductance-capacitance) interconnect delays.


Abstract of Cascaded spintronic logic with low-dimensional carbon

Remarkable breakthroughs have established the functionality of graphene and carbon nanotube transistors as replacements to silicon in conventional computing structures, and numerous spintronic logic gates have been presented. However, an efficient cascaded logic structure that exploits electron spin has not yet been demonstrated. In this work, we introduce and analyse a cascaded spintronic computing system composed solely of low-dimensional carbon materials. We propose a spintronic switch based on the recent discovery of negative magnetoresistance in graphene nanoribbons, and demonstrate its feasibility through tight-binding calculations of the band structure. Covalently connected carbon nanotubes create magnetic fields through graphene nanoribbons, cascading logic gates through incoherent spintronic switching. The exceptional material properties of carbon materials permit Terahertz operation and two orders of magnitude decrease in power-delay product compared to cutting-edge microprocessors. We hope to inspire the fabrication of these cascaded logic circuits to stimulate a transformative generation of energy-efficient computing.

High-speed light-based systems could replace supercomputers for certain ‘deep learning’ calculations

(a) Optical micrograph of an experimentally fabricated on-chip optical interference unit; the physical region where the optical neural network program exists is highlighted in gray. A programmable nanophotonic processor works like an optical analog of an FPGA (field-programmable gate array): an array of interconnected waveguides, allowing the light beams to be modified as needed for a specific deep-learning matrix computation. (b) Schematic illustration of the optical neural network program, which performs matrix multiplication and amplification fully optically. (credit: Yichen Shen et al./Nature Photonics)

A team of researchers at MIT and elsewhere has developed a new approach to deep learning systems — using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep-learning computations.

Deep-learning systems are based on artificial neural networks that mimic the way the brain learns from an accumulation of examples. They can enable technologies such as face- and voice-recognition software, or scour vast amounts of medical data to find patterns that could be useful diagnostically, for example.

But the computations these systems carry out are highly complex and demanding, even for supercomputers. Traditional computer architectures are not very efficient for calculations needed for neural-network tasks that involve repeated multiplications of matrices (arrays of numbers). These can be computationally intensive for conventional CPUs or even GPUs.

Programmable nanophotonic processor

Instead, the new approach uses an optical device that the researchers call a “programmable nanophotonic processor.” Multiple light beams are directed in such a way that their waves interact with each other, producing interference patterns that “compute” the intended operation.
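A single building block of such a processor, the Mach-Zehnder interferometer, applies a 2×2 unitary matrix to two optical amplitudes; meshes of these interferometers compose larger matrix multiplications. The sketch below uses an illustrative parameterization (not necessarily the one in the paper) to simulate one interferometer and confirm that optical power is conserved.

```python
import cmath
import math

# Sketch: one Mach-Zehnder interferometer as a 2x2 unitary acting on two
# optical mode amplitudes. Interference of the beams "computes" the
# matrix-vector product. Parameterization is illustrative.

def mzi(theta: float, phi: float):
    """2x2 unitary: a phase shifter (phi) plus a tunable splitter (theta)."""
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    p = cmath.exp(1j * phi)
    return [[p * c, -s],
            [p * s,  c]]

def apply(u, x):
    """Interfere the input amplitudes x through the unitary u."""
    return [sum(u[r][k] * x[k] for k in range(2)) for r in range(2)]

# Send all light into the first mode and pass it through the device.
x = [1 + 0j, 0j]
y = apply(mzi(math.pi / 3, 0.7), x)
```

Because the transformation is unitary, the total output power, the sum of |amplitude|², equals the input power: a quick sanity check that the "computation" itself is lossless.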

The optical chips using this architecture could, in principle, carry out dense matrix multiplications (the most power-hungry and time-consuming part in AI algorithms) for learning tasks much faster, compared to conventional electronic chips. The researchers expect a computational speed enhancement of at least two orders of magnitude over the state-of-the-art and three orders of magnitude in power efficiency.

“This chip, once you tune it, can carry out matrix multiplication with, in principle, zero energy, almost instantly,” says Marin Soljacic, one of the MIT researchers on the team.

To demonstrate the concept, the team set the programmable nanophotonic processor to implement a neural network that recognizes four basic vowel sounds. Even with the prototype system, they were able to achieve a 77 percent accuracy level, compared to about 90 percent for conventional systems. There are “no substantial obstacles” to scaling up the system for greater accuracy, according to Soljacic.

The team says it will still take a lot more time and effort to make this system useful. However, once the system is scaled up and fully functioning, the low-power system should find many uses, especially in situations where power is limited, such as self-driving cars, drones, and mobile consumer devices. Other uses include signal processing for data transmission and data centers.

The research was published Monday (June 12, 2017) in a paper in the journal Nature Photonics (open-access version available on arXiv).

The team also included researchers at Elenion Technologies of New York and the Université de Sherbrooke in Quebec. The work was supported by the U.S. Army Research Office through the Institute for Soldier Nanotechnologies, the National Science Foundation, and the Air Force Office of Scientific Research.


Abstract of Deep learning with coherent nanophotonic circuits

Artificial neural networks are computational network models inspired by signal processing in the brain. These models have dramatically improved performance for many machine-learning tasks, including speech and image recognition. However, today’s computing hardware is inefficient at implementing neural networks, in large part because much of it was designed for von Neumann computing schemes. Significant effort has been made towards developing electronic architectures tuned to implement artificial neural networks that exhibit improved computational speed and accuracy. Here, we propose a new architecture for a fully optical neural network that, in principle, could offer an enhancement in computational speed and power efficiency over state-of-the-art electronics for conventional inference tasks. We experimentally demonstrate the essential part of the concept using a programmable nanophotonic processor featuring a cascaded array of 56 programmable Mach–Zehnder interferometers in a silicon photonic integrated circuit and show its utility for vowel recognition.

Princeton/Adobe technology will let you edit voices like text

Technology developed by Princeton University computer scientists may do for audio recordings of the human voice what word processing software did for the written word and Adobe Photoshop did for images.

“VoCo” software, still in the research stage, makes it easy to add or replace a word in an audio recording of a human voice by simply editing a text transcript of the recording. New words are automatically synthesized in the speaker’s voice — even if they don’t appear anywhere else in the recording.

The system uses a sophisticated algorithm to learn and recreate the sound of a particular voice. It could one day make editing podcasts and narration in videos much easier, or in the future, create personalized robotic voices that sound natural, according to co-developer Adam Finkelstein, a professor of computer science at Princeton. Or people who have lost their voices due to injury or disease might be able to recreate their voices through a robotic system, but one that sounds natural.

An earlier version of VoCo was announced in November 2016. A paper describing the current VoCo development will be published in the July issue of the journal Transactions on Graphics (an open-access preprint is available).


How it works (technical description)

VoCo allows people to edit audio recordings with the ease of changing words on a computer screen. The system inserts new words in the same voice as the rest of the recording. (credit: Professor Adam Finkelstein)

VoCo’s user interface looks similar to other audio editing software such as the podcast editing program Audacity, with a waveform of the audio track and cut, copy and paste tools for editing. But VoCo also augments the waveform with a text transcript of the track and allows the user to replace or insert new words that don’t already exist in the track by simply typing in the transcript. When the user types the new word, VoCo updates the audio track, automatically synthesizing the new word by stitching together snippets of audio from elsewhere in the narration.

VoCo is based on an optimization algorithm that searches the voice recording and chooses the best possible combinations of phonemes (partial word sounds) to build new words in the speaker's voice. To do this, it needs to find individual phonemes, and sequences of phonemes, that stitch together without abrupt transitions. The new word also needs to fit into the existing sentence so that it blends in seamlessly: words are pronounced with different emphasis and intonation depending on where they fall in a sentence, so context is important.
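The snippet-selection step can be viewed as a shortest-path problem. The following dynamic-programming sketch is our simplification (VoCo's actual cost model also weighs pitch, duration, and sentence context): pick one candidate snippet per phoneme so that the total transition cost between consecutive snippets is minimal.

```python
# Simplified stand-in for VoCo's snippet selection: dynamic programming
# over candidate snippets, minimizing the summed join cost between
# consecutive choices.

def best_sequence(candidates, join_cost):
    """candidates: one list of snippet ids per target phoneme.
    join_cost(a, b): how abrupt the transition from snippet a to b sounds."""
    # best[s] = (total cost, path of snippets ending at s)
    best = {s: (0.0, [s]) for s in candidates[0]}
    for stage in candidates[1:]:
        best = {
            s: min(
                ((cost + join_cost(prev, s), path + [s])
                 for prev, (cost, path) in best.items()),
                key=lambda t: t[0],
            )
            for s in stage
        }
    return min(best.values(), key=lambda t: t[0])[1]

# Tiny example with invented snippet ids: the ("a2", "b1") join is the
# only smooth transition, so the algorithm should pick that pair.
path = best_sequence(
    [["a1", "a2"], ["b1", "b2"]],
    lambda a, b: 0.0 if (a, b) == ("a2", "b1") else 1.0,
)
```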

Advanced VoCo editors can manually adjust pitch profile, amplitude and snippet duration. Novice users can choose from a predefined set of pitch profiles (bottom), or record their own voice as an exemplar to control pitch and timing (top). (credit: Professor Adam Finkelstein)

For clues about this context, VoCo looks to an audio track of the sentence that is automatically synthesized in an artificial voice from the text transcript — one that sounds robotic to human ears. This recording is used as a point of reference in building the new word. VoCo then adjusts snippets of sound from the real human voice recording to match the word in the synthesized track — a technique known as “voice conversion,” which inspired the project name, VoCo.

In case the synthesized word isn’t quite right, VoCo offers users several versions of the word to choose from. The system also provides an advanced editor to modify pitch and duration, allowing expert users to further polish the track.

To test how effective their system was at producing authentic-sounding edits, the researchers asked people to listen to a set of audio tracks, some of which had been edited with VoCo and others that were completely natural. The fully automated versions were mistaken for real recordings more than 60 percent of the time.

The Princeton researchers are currently refining the VoCo algorithm to improve the system’s ability to integrate synthesized words more smoothly into audio tracks. They are also working to expand the system’s capabilities to create longer phrases or even entire sentences synthesized from a narrator’s voice.


Fake news videos?

Disney Research’s FaceDirector allows for editing recorded facial expressions and voice into a video (credit: Disney Research)

A key use for VoCo might be in intelligent personal assistants like Apple’s Siri, Google Assistant, Amazon’s Alexa, and Microsoft’s Cortana, or for using movie actors’ voices from old films in new ones, Finkelstein suggests.

But there are obvious concerns about fraud. It might even be possible to create a convincing fake video. Video clips with different facial expressions and lip movements (using Disney Research’s FaceDirector, for example) could be edited in and matched to associated fake words and other audio (such as background noise and talking), along with green screen to create fake backgrounds.

With billions of people now getting their news online and unfiltered, augmented-reality coming, and hacking way out of control, things may get even weirder. …

Zeyu Jin, a Princeton graduate student advised by Finkelstein, will present the work at the Association for Computing Machinery SIGGRAPH conference in July. The work at Princeton was funded by the Project X Fund, which provides seed funding to engineers for pursuing speculative projects. The Princeton researchers collaborated with scientists Gautham Mysore, Stephen DiVerdi, and Jingwan Lu at Adobe Research. Adobe has not announced availability of a commercial version of VoCo, or plans to integrate VoCo into Adobe Premiere Pro (or FaceDirector).


Abstract of VoCo: Text-based Insertion and Replacement in Audio Narration

Editing audio narration using conventional software typically involves many painstaking low-level manipulations. Some state of the art systems allow the editor to work in a text transcript of the narration, and perform select, cut, copy and paste operations directly in the transcript; these operations are then automatically applied to the waveform in a straightforward manner. However, an obvious gap in the text-based interface is the ability to type new words not appearing in the transcript, for example inserting a new word for emphasis or replacing a misspoken word. While high-quality voice synthesizers exist today, the challenge is to synthesize the new word in a voice that matches the rest of the narration. This paper presents a system that can synthesize a new word or short phrase such that it blends seamlessly in the context of the existing narration. Our approach is to use a text to speech synthesizer to say the word in a generic voice, and then use voice conversion to convert it into a voice that matches the narration. Offering a range of degrees of control to the editor, our interface supports fully automatic synthesis, selection among a candidate set of alternative pronunciations, fine control over edit placements and pitch profiles, and even guidance by the editor’s own voice. The paper presents studies showing that the output of our method is preferred over baseline methods and often indistinguishable from the original voice.

Best of MOOGFEST 2017

The Moogfest four-day festival in Durham, North Carolina next weekend (May 18 — 21) explores the future of technology, art, and music. Here are some of the sessions that may be especially interesting to KurzweilAI readers. Full #Moogfest2017 Program Lineup.

Culture and Technology

(credit: Google)

The Magenta team from Google Brain will bring its work to life through an interactive demo plus workshops on the creation of art and music through artificial intelligence.

Magenta is a Google Brain project to ask and answer the questions, “Can we use machine learning to create compelling art and music? If so, how? If not, why not?” It is first a research project to advance the state of the art in music, video, image, and text generation, and second a community of artists, coders, and machine-learning researchers.

The interactive demo will walk through an improvisation along with the machine-learning models, much like the AI Jam Session. The workshop will cover how to use the open-source library to build and train models and interact with them via MIDI.

Technical reference: Magenta: Music and Art Generation with Machine Intelligence


TEDx Talks | Music and Art Generation using Machine Learning | Curtis Hawthorne | TEDxMountainViewHighSchool


Miguel Nicolelis (credit: Duke University)

Miguel A. L. Nicolelis, MD, PhD will discuss state-of-the-art research on brain-machine interfaces, which make it possible for the brains of primates to interact directly and in a bi-directional way with mechanical, computational and virtual devices. He will review a series of recent experiments using real-time computational models to investigate how ensembles of neurons encode motor information. These experiments have revealed that brain-machine interfaces can be used not only to study fundamental aspects of neural ensemble physiology, but they can also serve as an experimental paradigm aimed at testing the design of novel neuroprosthetic devices.

He will also explore research that raises the hypothesis that the properties of a robot arm, or other neurally controlled tools, can be assimilated by brain representations as if they were extensions of the subject’s own body.

Theme: Transhumanism


Dervishes at Royal Opera House with Matthew Herbert (credit: ?)

Andy Cavatorta (MIT Media Lab) will present a conversation and workshop on a range of topics including the four-century history of music and performance at the forefront of technology. Known as the inventor of Bjork’s Gravity Harp, he has collaborated on numerous projects to create instruments using new technologies that coerce expressive music out of fire, glass, gravity, tiny vortices, underwater acoustics, and more. His instruments explore technologically mediated emotion and opportunities to express the previously inexpressible.

Theme: Instrument Design


Berklee College of Music

Michael Bierylo (credit: Moogfest)

Michael Bierylo will present his Modular Synthesizer Ensemble alongside the Csound workshops from fellow Berklee Professor Richard Boulanger.

Csound is a sound and music computing system originally developed at the MIT Media Lab. It can most accurately be described as a compiler: software that takes textual instructions in the form of source code and converts them into object code, a stream of numbers representing audio. Although it has a strong tradition as a tool for composing electro-acoustic pieces, it is used by composers and musicians for any kind of music that can be made with the help of a computer. It was traditionally used in a non-interactive, score-driven context, but nowadays it is mostly used in real time.

Michael Bierylo serves as the Chair of the Electronic Production and Design Department, which offers students the opportunity to combine performance, composition, and orchestration with computer, synthesis, and multimedia technology in order to explore the limitless possibilities of musical expression.


Berklee College of Music | Electronic Production and Design (EPD) at Berklee College of Music


Chris Ianuzzi (credit: William Murray)

Chris Ianuzzi, a synthesist of Ciani-Musica and past collaborator with pioneers such as Vangelis and Peter Baumann, will present a daytime performance and sound-exploration workshops with the B11 brain interface and the NeuroSky brainwave-sensing headset.

Theme: Hacking Systems


Argus Project (credit: Moogfest)

The Argus Project from Gan Golan and Ron Morrison of NEW INC is a wearable sculpture, video installation and counter-surveillance training, which directly intersects the public debate over police accountability. According to ancient Greek myth, Argus Panoptes was a giant with 100 eyes who served as an eternal watchman, both for – and against – the gods.

By embedding an array of camera “eyes” into a full-body suit of tactical armor, the Argus exo-suit creates a “force field of accountability” around the bodies of those targeted. While some see filming the police as a confrontational or subversive act, it is, in fact, a deeply democratic one. The act of bearing witness to the actions of the state – and showing them to the world – strengthens our society and institutions. The Argus Project is not so much about an individual hero as about the Citizen Body as a whole. Between music acts, a presentation about the project will be part of the Protest Stage.

Argus Exo Suit Design (credit: Argus Project)

Theme: Protest


Found Sound Nation (credit: Moogfest)

Democracy’s Exquisite Corpse from Found Sound Nation and Moogfest, an immersive installation housed within a completely customized geodesic dome, is a multi-person instrument and music-based round-table discussion. Artists, activists, innovators, festival attendees and community engage in a deeply interactive exploration of sound as a living ecosystem and primal form of communication.

Within the dome are nine unique stations, each with its own distinct set of analog or digital sound-making devices. Each person’s set of devices is chained to the person sitting next to them, so that everybody’s musical actions and choices affect their neighbors, and thus everyone else at the table. This instrument is a unique experiment in how technology and the instinctive language of sound can play a role in the shaping of a truly collective unconscious.

Theme: Protest


(credit: Land Marking)

Land Marking, from Halsey Burgund and Joe Zibkow of MIT Open Doc Lab, is a mobile-based music/activist project that augments the physical landscape of protest events with a layer of location-based audio contributed by event participants in real-time. The project captures the audioscape and personal experiences of temporary, but extremely important, expressions of discontent and desire for change.

Land Marking will be teaming up with the Protest Stage to allow Moogfest attendees to contribute their thoughts on protests and tune into an evolving mix of commentary and field recordings from others throughout downtown Durham. Land Marking is available on select apps.

Theme: Protest


Taeyoon Choi (credit: Moogfest)

Taeyoon Choi, an artist and educator based in New York and Seoul, will be leading a Sign Making Workshop as one of the Future Thought leaders on the Protest Stage. His art practice involves performance, electronics, drawings, and storytelling that often leads to interventions in public spaces.

Taeyoon will also participate in the Handmade Computer workshop to build a 1-Bit Computer, which demonstrates how binary numbers and Boolean logic can be configured to create more complex components. On their own these components aren’t capable of computing anything particularly useful, but a computer is said to be Turing-complete if it includes all of them, at which point it has the extraordinary ability to carry out any possible computation. He has participated in numerous workshops at festivals around the world, from Korea to Scotland, but primarily at the School for Poetic Computation (SFPC) — an artist-run school in NYC that Taeyoon co-founded. Taeyoon Choi’s Handmade Computer projects.
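The workshop's premise can be demonstrated in a few lines of code: NAND alone suffices to construct every other Boolean gate, which is the first step toward a general-purpose machine. A minimal sketch:

```python
# NAND is functionally complete: every other Boolean gate can be wired
# from it. This is the core idea behind building up a 1-bit computer
# from simple components.

def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

def not_(a: int) -> int:
    return nand(a, a)

def and_(a: int, b: int) -> int:
    return not_(nand(a, b))

def or_(a: int, b: int) -> int:
    return nand(not_(a), not_(b))

def xor(a: int, b: int) -> int:
    # XOR = (a OR b) AND NOT (a AND b); this is the sum bit of a half adder.
    return and_(or_(a, b), nand(a, b))
```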

Theme: Protest


(credit: Moogfest)

irlbb, from Vivan Thi Tang, connects individuals after IRL (in real life) interactions and creates community that would otherwise have been missed. With a customized beta of the app for Moogfest 2017, irlbb presents a unique engagement opportunity.

Theme: Protest


Ryan Shaw and Michael Clamann (credit: Duke University)

Duke Professors Ryan Shaw and Michael Clamann will lead a daily science pub talk series on topics that include future medicine, humans and autonomy, and quantum physics.

Ryan is a pioneer in mobile health — the collection and dissemination of information using mobile and wireless devices for healthcare. He works with faculty at Duke’s Schools of Nursing, Medicine, and Engineering to integrate mobile technologies into first-generation care-delivery systems. These technologies afford researchers, clinicians, and patients a rich stream of real-time information about individuals’ biophysical and behavioral health in everyday environments.

Michael Clamann is a Senior Research Scientist in the Humans and Autonomy Lab (HAL) within the Robotics Program at Duke University, an Associate Director at UNC’s Collaborative Sciences Center for Road Safety, and the Lead Editor for Robotics and Artificial Intelligence for Duke’s SciPol science policy tracking website. In his research, he works to better understand the complex interactions between robots and people and how they influence system effectiveness and safety.

Theme: Hacking Systems


Dave Smith (credit: Moogfest)

Dave Smith, the iconic instrument innovator and Grammy-winner, will lead Moogfest’s Instruments Innovators program and host a headlining conversation with a leading artist revealed in next week’s release. He will also host a masterclass.

As the original founder of Sequential Circuits in the mid-’70s, Dave designed the Prophet-5 — the world’s first fully programmable polyphonic synth and the first musical instrument with an embedded microprocessor. From the late 1980s through the early 2000s, he worked to develop next-level synths with the likes of the Audio Engineering Society, Yamaha, Korg, and Seer Systems (for Intel). Realizing the limitations of software, Dave returned to hardware and started Dave Smith Instruments (DSI), which released the Evolver hybrid analog/digital synthesizer in 2002. Since then the DSI product lineup has grown to include the Prophet-6, OB-6, Pro 2, Prophet 12, and Prophet ’08 synthesizers, as well as the Tempest drum machine, co-designed with friend and fellow electronic-instrument designer Roger Linn.

Theme: Future Thought


Dave Rossum, Gerhard Behles, and Lars Larsen (credit: Moogfest)

E-mu Systems founder Dave Rossum, Ableton CEO Gerhard Behles, and LZX founder Lars Larsen will take part in conversations as part of the Instruments Innovators program.

Driven by the creative and technological vision of electronic music pioneer Dave Rossum, Rossum Electro-Music creates uniquely powerful tools for electronic music production — the culmination of Dave’s 45 years designing industry-defining instruments and transformative technologies. Starting with his co-founding of E-mu Systems, Dave provided the technological leadership that resulted in what many consider the premier professional modular synthesizer system, the E-mu Modular System, which became an instrument of choice for numerous recording studios, educational institutions, and artists as diverse as Frank Zappa, Leon Russell, and Hans Zimmer. In the following years, he worked on the Emulator keyboards and racks (e.g., the Emulator II), the Emax samplers, the legendary SP-12 and SP-1200 sampling drum machines, the Proteus sound modules, and the Morpheus Z-Plane Synthesizer.

Gerhard Behles co-founded Ableton in 1999 with Robert Henke and Bernd Roggendorf. Prior to this he had been part of electronic music act “Monolake” alongside Robert Henke, but his interest in how technology drives the way music is made diverted his energy towards developing music software. He was fascinated by how dub pioneers such as King Tubby ‘played’ the recording studio, and began to shape this concept into a music instrument that became Ableton Live.

LZX Industries was born in 2008 out of the Synth DIY scene when Lars Larsen of Denton, Texas and Ed Leckie of Sydney, Australia began collaborating on the development of a modular video synthesizer. At that time, analog video synthesizers were inaccessible to artists outside of a handful of studios and universities. It was their continuing mission to design creative video instruments that (1) stay within the financial means of the artists who wish to use them, (2) honor and preserve the legacy of 20th century toolmakers, and (3) expand the boundaries of possibility. Since 2015, LZX Industries has focused on the research and development of new instruments, user support, and community building.


Science

ATLAS detector (credit: Kaushik De, Brookhaven National Laboratory)

ATLAS @ CERN. The full ATLAS @ CERN program will be led by Duke University Professors Mark Kruse and Katherine Hayles, along with ATLAS @ CERN physicist Steven Goldfarb.

The program will include a “Virtual Visit” to the Large Hadron Collider — the world’s largest and most powerful particle accelerator — via a live video session, a half-day workshop analyzing and understanding LHC data, and a “Science Fiction versus Science Fact” live debate.

The ATLAS experiment is designed to exploit the full discovery potential and the huge range of physics opportunities that the LHC provides. Physicists test the predictions of the Standard Model, which encapsulates our current understanding of what the building blocks of matter are and how they interact – resulting in such discoveries as the Higgs boson. By pushing the frontiers of knowledge, ATLAS seeks to answer fundamental questions such as: What are the basic building blocks of matter? What are the fundamental forces of nature? Could there be a greater underlying symmetry to our universe?

“Atlas Boogie” (referencing Higgs Boson):

ATLAS Experiment | The ATLAS Boogie

(credit: Kate Shaw)

Kate Shaw (ATLAS @ CERN), PhD, in her keynote, titled “Exploring the Universe and Impacting Society Worldwide with the Large Hadron Collider (LHC) at CERN,” will dive into the present-day and future impacts of the LHC on society. She will also share findings from the work she has done promoting particle physics in developing countries through her Physics without Frontiers program.


Theme: Future Thought


Arecibo (credit: Joe Davis/MIT)

In his keynote, Joe Davis (MIT) will trace the history of several projects centered on ideas about extraterrestrial communications that have given rise to new scientific techniques and inspired new forms of artistic practice. He will present his “swansong” — an interstellar message that is intended explicitly for human beings rather than for aliens.

Theme: Future Thought


Immortality bus (credit: Zoltan Istvan)

Zoltan Istvan (Immortality Bus), the former U.S. Presidential candidate for the Transhumanist Party and leader of the Transhumanist movement, will explore the path to immortality through science, with the purpose of using science and technology to radically enhance the human being and human experience. His futurist work has reached over 100 million people, some of it due to the Immortality Bus, which he recently drove across America with embedded journalists aboard. The bus is shaped like a giant coffin to raise life-extension awareness.


Zoltan Istvan | 1-min Highlight Video for Zoltan Istvan Transhumanism Documentary IMMORTALITY OR BUST

Theme: Transhumanism/Biotechnology


(credit: Moogfest)

Marc Fleury and members of the Church of Space — Park Krausen, Ingmar Koch, and Christ of Veillon — return to Moogfest for a second year to present an expanded and varied program with daily explorations in modern physics with music and the occult, Illuminati performances, theatrical rituals to ERIS, and a Sunday Mass in their own dedicated “Church” venue.

Theme: Techno-Shamanism

#Moogfest2017

A deep-learning tool that lets you clone an artistic style onto a photo

The Deep Photo Style Transfer tool lets you add artistic style and other elements from a reference photo onto your photo. (credit: Cornell University)

“Deep Photo Style Transfer” is a cool new artificial-intelligence image-editing software tool that lets you transfer a style from another (“reference”) photo onto your own photo, as shown in the above examples.

An open-access arXiv paper by Cornell University computer scientists and Adobe collaborators explains that the tool can transpose the look of one photo (such as the time of day, weather, season, and artistic effects) onto your photo, making it reminiscent of a painting while remaining photorealistic.

The algorithm also handles extreme mismatch of forms, such as transferring a fireball to a perfume bottle. (credit: Fujun Luan et al.)

“What motivated us is the idea that style could be imprinted on a photograph, but it is still intrinsically the same photo,” said Cornell computer science professor Kavita Bala. “This turned out to be incredibly hard. The key insight finally was about preserving boundaries and edges while still transferring the style.”

To do that, the researchers created deep-learning software that can add a neural network layer that pays close attention to edges within the image, like the border between a tree and a lake.
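The paper’s abstract (below) describes the key constraint more precisely: the output must be a *locally affine* transform of the input in color space. A toy sketch of why that preserves photorealism — an affine color map can recolor a region but cannot invent or erase edges. All numbers here are illustrative, not from the paper:

```python
# Sketch (hedged): apply an affine color transform, out = A @ rgb + b,
# as the paper's constraint requires within each local region. Such a map
# changes colors but preserves the contrast between neighboring pixels,
# so boundaries like a tree/lake edge survive the style transfer.
def affine_color_map(rgb, A, b):
    """Apply a 3x3 color matrix A and an offset b to one RGB pixel."""
    return tuple(
        sum(A[i][j] * rgb[j] for j in range(3)) + b[i]
        for i in range(3)
    )

# A toy "style": cool the image by damping red and boosting blue.
A = [[0.8, 0.0, 0.0],
     [0.0, 1.0, 0.0],
     [0.1, 0.0, 1.2]]
b = [0.0, 0.02, 0.05]

dark, bright = (0.1, 0.1, 0.1), (0.9, 0.9, 0.9)  # two pixels across an edge
out_dark = affine_color_map(dark, A, b)
out_bright = affine_color_map(bright, A, b)

# The mapped pixels still differ strongly, so the edge between them survives.
contrast = sum(abs(x - y) for x, y in zip(out_bright, out_dark))
print(round(contrast, 3))  # prints 2.48
```

A painterly-style network with no such constraint could map both pixels to similar colors and smear the boundary, which is exactly the distortion the authors set out to suppress.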

The software is still in the research stage.

Bala, Cornell doctoral student Fujun Luan, and Adobe collaborators Sylvain Paris and Eli Shechtman will present their paper at the Conference on Computer Vision and Pattern Recognition on July 21–26 in Honolulu.

This research is supported by a Google Faculty Research Award and NSF awards.


Abstract of Deep Photo Style Transfer

This paper introduces a deep-learning approach to photographic style transfer that handles a large variety of image content while faithfully transferring the reference style. Our approach builds upon the recent work on painterly transfer that separates style from the content of an image by considering different layers of a neural network. However, as is, this approach is not suitable for photorealistic style transfer. Even when both the input and reference images are photographs, the output still exhibits distortions reminiscent of a painting. Our contribution is to constrain the transformation from the input to the output to be locally affine in colorspace, and to express this constraint as a custom fully differentiable energy term. We show that this approach successfully suppresses distortion and yields satisfying photorealistic style transfers in a broad variety of scenarios, including transfer of the time of day, weather, season, and artistic edits.


Future ‘lightwave’ computers could run 100,000 times faster

Terahertz pulses in semiconductor crystal (credit: Fabian Langer, Regensburg University)

Using extremely short pulses of terahertz (THz) radiation instead of electrical currents could lead to future computers that run ten to 100,000 times faster than today’s state-of-the-art electronics, according to an international team of researchers, writing in the journal Nature Photonics.

In a conventional computer, electrons moving through a semiconductor occasionally run into other electrons, releasing energy in the form of heat and slowing them down. With the proposed “lightwave electronics” approach, electrons could be guided by ultrafast THz pulses (the part of the electromagnetic spectrum between microwaves and infrared light). That means the travel time can be so short that the electrons would be statistically unlikely to hit anything, according to senior author Rupert Huber, a professor of physics at the University of Regensburg who led the experiment.

In the experiment, the researchers shined THz pulses into a crystal of the semiconductor gallium selenide.* These pulses were ultra-short (less than 100 femtoseconds, or 100 quadrillionths of a second). Each pulse popped electrons in the semiconductor into a higher energy level — which meant that they were free to move around.

When the electrons emitted light as they came down from the higher energy level, they emitted much shorter pulses than the electromagnetic radiation going in — just a few femtoseconds long — quick enough to read and write information to electrons at ultra-high speed.

But first, researchers need to be able to control electrons in a semiconductor. This work takes a step toward this by mobilizing groups of electrons inside a semiconductor crystal.
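The “ten to 100,000 times faster” range can be sanity-checked with rough numbers. The pulse durations are the article’s; treating one pulse as one switching event, and the comparison against a ~3 GHz silicon clock, are assumptions for this back-of-envelope sketch:

```python
# Back-of-envelope scaling (hedged): one optical pulse ~ one switching event.
silicon_clock_hz = 3e9       # typical CPU clock speed today
drive_pulse_s = 100e-15      # the <100 fs THz driving pulses in the experiment
emitted_pulse_s = 5e-15      # "just a few femtoseconds" for the emitted bursts

drive_rate_hz = 1 / drive_pulse_s        # 10 THz
emitted_rate_hz = 1 / emitted_pulse_s    # 200 THz

print(round(drive_rate_hz / silicon_clock_hz))     # ~3333x a 3 GHz clock
print(round(emitted_rate_hz / silicon_clock_hz))   # ~66667x a 3 GHz clock
```

Both figures fall inside the article’s quoted ten-to-100,000-fold range, which suggests the claim is a straight ratio of optical pulse rates to electronic clock rates.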

Quantum computation

Because femtosecond pulses are fast enough to trap an electron between being put into an excited state and coming down from that state, they can potentially also be used for quantum computations, using electrons in excited states as qubits. The researchers managed to launch one electron simultaneously via two excitation pathways, which is not classically possible.

An electron is small enough that it behaves like a wave as well as a particle, and when it is in an excited state, its wavelength changes. Because the electron was in two excited states at once, those two waves interfered with one another and left a fingerprint in the femtosecond pulse that the electron emitted.

The research is funded by the European Research Council and the German Research Foundation.

* “We generated high harmonics by irradiating a 40-μm-thick crystal of gallium selenide with intense, multi-THz pulses. These pulses were obtained by difference frequency mixing of two phase-correlated near-infrared pulse trains from a dual optical parametric amplifier pumped by a titanium sapphire amplifier. … The centre frequency was tunable and set to 33 THz in the experiments.” — F. Langer et al./Nature Photonics

Abstract of Symmetry-controlled temporal structure of high-harmonic carrier fields from a bulk crystal

High-harmonic (HH) generation in crystalline solids marks an exciting development, with potential applications in high-efficiency attosecond sources, all-optical bandstructure reconstruction and quasiparticle collisions. Although the spectral and temporal shape of the HH intensity has been described microscopically, the properties of the underlying HH carrier wave have remained elusive. Here, we analyse the train of HH waveforms generated in a crystalline solid by consecutive half cycles of the same driving pulse. Extending the concept of frequency combs to optical clock rates, we show how the polarization and carrier-envelope phase (CEP) of HH pulses can be controlled by the crystal symmetry. For certain crystal directions, we can separate two orthogonally polarized HH combs mutually offset by the driving frequency to form a comb of even and odd harmonic orders. The corresponding CEP of successive pulses is constant or offset by π, depending on the polarization. In the context of a quantum description of solids, we identify novel capabilities for polarization- and phase-shaping of HH waveforms that cannot be accessed with gaseous sources.

Brain has more than 100 times higher computational capacity than previously thought, say UCLA scientists

Neuron (blue) with dendrites (credit: Shelley Halpain/UC San Diego)

The brain has more than 100 times higher computational capacity than was previously thought, a UCLA team has discovered.

Obsoleting neuroscience textbooks, this finding suggests that our brains are both analog and digital computers and could lead to new approaches for treating neurological disorders and developing brain-like computers, according to the researchers.

Illustration of neuron and dendrites. Dendrites receive electrochemical stimulation (via synapses, not shown here) from neurons (not shown here), and propagate that stimulation to the neuron cell body (soma). A neuron sends electrochemical stimulation via an axon to communicate with other neurons via telodendria (purple, right) at the end of the axon and synapses (not shown here). (credit: Quasar/CC).

Dendrites have been considered simple passive conduits of signals. But by working with animals that were moving around freely, the UCLA team showed that dendrites are in fact electrically active — generating nearly 10 times more spikes than the soma (neuron cell body).

Fundamentally changes our understanding of brain computation

The finding, reported in the March 9 issue of the journal Science, challenges the long-held belief that spikes in the soma are the primary way in which perception, learning and memory formation occur.

“Dendrites make up more than 90 percent of neural tissue,” said UCLA neurophysicist Mayank Mehta, the study’s senior author. “Knowing they are much more active than the soma fundamentally changes the nature of our understanding of how the brain computes information.”

“This is a major departure from what neuroscientists have believed for about 60 years,” said Mehta, a UCLA professor of physics and astronomy, of neurology and of neurobiology.

Because the dendrites are nearly 100 times larger in volume than the neuronal centers, Mehta said, the large number of dendritic spikes taking place could mean that the brain has more than 100 times the computational capacity previously attributed to it.

Study with moving rats made discovery possible

Previous studies have been limited to stationary rats, because scientists have found that placing electrodes in the dendrites themselves while the animals were moving actually killed those cells. But the UCLA team developed a new technique that involves placing the electrodes near, rather than in, the dendrites.

Using that approach, the scientists measured dendrites’ activity for up to four days in rats that were allowed to move freely within a large maze. Taking measurements from the posterior parietal cortex, the part of the brain that plays a key role in movement planning, the researchers found far more activity in the dendrites than in the somas — approximately five times as many spikes while the rats were sleeping, and up to 10 times as many when they were exploring.

Looking at the soma to understand how the brain works has provided a framework for numerous medical and scientific questions — from diagnosing and treating diseases to how to build computers. But, Mehta said, that framework was based on the understanding that the cell body makes the decisions, and that the process is digital.

“What we found indicates that such decisions are made in the dendrites far more often than in the cell body, and that such computations are not just digital, but also analog,” Mehta said. “Due to technological difficulties, research in brain function has largely focused on the cell body. But we have discovered the secret lives of neurons, especially in the extensive neuronal branches. Our results substantially change our understanding of how neurons compute.”

Funding was provided by the University of California.

Complete neuron cell diagram (credit: LadyofHats/CC)


Abstract of Dynamics of cortical dendritic membrane potential and spikes in freely behaving rats

Neural activity in vivo is primarily measured using extracellular somatic spikes, which provide limited information about neural computation. Hence, it is necessary to record from neuronal dendrites, which generate dendritic action potentials (DAP) and profoundly influence neural computation and plasticity. We measured neocortical sub- and suprathreshold dendritic membrane potential (DMP) from putative distal-most dendrites using tetrodes in freely behaving rats over multiple days with a high degree of stability and sub-millisecond temporal resolution. DAP firing rates were several fold larger than somatic rates. DAP rates were modulated by subthreshold DMP fluctuations which were far larger than DAP amplitude, indicating hybrid, analog-digital coding in the dendrites. Parietal DAP and DMP exhibited egocentric spatial maps comparable to pyramidal neurons. These results have important implications for neural coding and plasticity.

IBM-led international research team stores one bit of data on a single atom

Scanning tunneling microscope image of a single atom of holmium, an element that researchers used as a magnet to store one bit of data. (credit: IBM Research — Almaden)

An international team led by IBM has created the world’s smallest magnet, using a single atom of rare-earth element holmium, and stored one bit of data on it over several hours.

The achievement represents the ultimate limit of the classical approach to high-density magnetic storage media, according to a paper published March 8 in the journal Nature.

Currently, hard disk drives use about 100,000 atoms to store a single bit. The ability to read and write one bit on one atom may lead to significantly smaller and denser storage devices in the future. (The researchers are currently working in an ultrahigh vacuum at 1.2 K, a temperature near absolute zero.)

Using a scanning tunneling microscope* (STM), the researchers also showed that a device using two magnetic atoms could be written and read independently, even when they were separated by just one nanometer.

IBM microscope mechanic Bruce Melior at scanning tunneling microscope, used to view and manipulate atoms (credit: IBM Research — Almaden)

The researchers believe this tight spacing could eventually yield magnetic storage that is 1,000 times denser than today’s hard disk drives and solid state memory chips. So they could one day store 1,000 times more information in the same space. That means data centers, computers, and personal devices would be radically smaller and more powerful.
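The gap between the raw physics and the engineering estimate is worth spelling out; the figures are the article’s, and the interpretation of the leftover factor as practical overhead is an assumption:

```python
# Article figures (hedged sketch): ~100,000 atoms per bit on today's drives
# vs 1 atom per bit in the experiment, but a practical-density estimate of
# "only" 1,000x for real storage media.
atoms_per_bit_today = 100_000
atoms_per_bit_demo = 1
raw_gain = atoms_per_bit_today // atoms_per_bit_demo   # 100,000x in principle

practical_gain = 1_000          # the researchers' density estimate
overhead = raw_gain // practical_gain

print(raw_gain)    # 100000
print(overhead)    # 100x headroom for spacing, read-out sensors, error margins
```

In other words, the 1,000x claim already budgets two orders of magnitude for everything a working device needs around each atom.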

Single-atom write and read operations. (Left) To write the data onto the holmium atom, a pulse of electric current from the magnetized tip of a scanning tunneling microscope (STM) is used to flip the orientation of the atom’s field between a 0 or 1. The STM is also used to read it. (Right) A second read-out method used an iron atom as a magnetic sensor, which also allowed the team to read out multiple bits at the same time, making it more practical than an STM. (credit: IBM Research and Fabian D. Natterer et al./Nature)

Researchers at EPFL in Switzerland, University of Chinese Academy of Sciences in Hong Kong, University of Göttingen in Germany, Universität Zürich in Switzerland, Institute of Basic Science, Center for Quantum Nanoscience in South Korea, and Ewha Womans University in South Korea were also on the research team.

* The STM was developed in 1981, earning its inventors, Gerd Binnig and Heinrich Rohrer (at IBM Zürich), the Nobel Prize in Physics in 1986. IBM is planning future scanning tunneling microscope studies to investigate the potential of performing quantum information processing using individual magnetic atoms. Earlier this week, IBM announced it will be building the world’s first commercial quantum computers for business and science.


IBM Research | IBM Research Created the World’s Smallest Magnet — an Atom


An ultra-low-power artificial synapse for neural-network computing

(Left) Illustration of a synapse in the brain connecting two neurons. (Right) Schematic of artificial synapse (ENODe), which functions as a transistor. It consists of two thin, flexible polymer films (black) with source, drain, and gate terminals, connected by an electrolyte of salty water that permits ions to cross. A voltage pulse applied to the “presynaptic” layer (top) alters the level of oxidation in the “postsynaptic layer” (bottom), triggering current flow between source and drain. (credit: Thomas Splettstoesser/CC and Yoeri van de Burgt et al./Nature Materials)

Stanford University and Sandia National Laboratories researchers have developed an organic artificial synapse based on a new memristor (resistive memory device) design that mimics the way synapses in the brain learn. The new artificial synapse could lead to computers that better recreate the way the human brain processes information. It could also one day directly interface with the human brain.

The new artificial synapse is an electrochemical neuromorphic organic device (dubbed “ENODe”) — a mixed ionic/electronic design that is fundamentally different from existing and other proposed resistive memory devices, which are limited by noise, high required write voltages, and other factors*, the researchers note in a paper published online Feb. 20 in Nature Materials.

Like a neural path in a brain being reinforced through learning, the artificial synapse is programmed by discharging and recharging it repeatedly. Through this training, the researchers have been able to predict, within 1 percent uncertainty, what voltage will be required to get the synapse to a specific electrical state and, once there, remain at that state.

“The working mechanism of ENODes is reminiscent of that of natural synapses, where neurotransmitters diffuse through the cleft, inducing depolarization due to ion penetration in the postsynaptic neuron,” the researchers explain in the paper. “In contrast, other memristive devices switch by melting materials at relatively high temperatures (PCMs) or by voltage-induced breakdown/filament formation and ion diffusion in dense oxide layers (FFMOs).”

The ENODe achieves significant energy savings** in two ways:

  • Unlike a conventional computer, where you save your work to the hard drive before you turn it off, the artificial synapse can recall its programming without any additional actions or parts. Traditional computing requires separately processing information and then storing it into memory. Here, the processing creates the memory.
  • When we learn, electrical signals are sent between neurons in our brain. The most energy is needed the first time a synapse is traversed. Every time afterward, the connection requires less energy. This is how synapses efficiently facilitate both learning something new and remembering what we’ve learned. The artificial synapse, unlike most other versions of brain-like computing, also fulfills these two tasks simultaneously, and does so with substantial energy savings.

“More and more, the kinds of tasks that we expect our computing devices to do require computing that mimics the brain because using traditional computing to perform these tasks is becoming really power hungry,” said A. Alec Talin, distinguished member of technical staff at Sandia National Laboratories in Livermore, California, and co-senior author of the paper. “We’ve demonstrated a device that’s ideal for running these type of algorithms and that consumes a lot less power.”

A future brain-like computer with 500 states

Only one artificial synapse has been produced so far, but researchers at Sandia used 15,000 measurements to simulate how an array of them would work in a neural network. They tested the simulated network’s ability to recognize handwriting of digits 0 through 9. Tested on three datasets, the simulated array was able to identify the handwritten digits with an accuracy between 93 and 97 percent.

This artificial synapse may one day be part of a brain-like computer, which could be especially useful for processing visual and auditory signals, as in voice-controlled interfaces and driverless cars, but without energy-consuming computer hardware.

This device is also well suited for the kind of signal identification and classification that traditional computers struggle to perform. Whereas digital transistors can be in only two states, such as 0 and 1, the researchers successfully programmed 500 states in the artificial synapse, which is useful for neuron-type computation models. In switching from one state to another they used about one-tenth as much energy as a state-of-the-art computing system needs to move data from the processing unit to the memory.

However, this is still about 10,000 times as much energy as the minimum a biological synapse needs in order to fire**. The researchers hope to attain neuron-level energy efficiency once they test the artificial synapse in smaller devices.
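Both headline numbers check out against the article’s own figures — a quick arithmetic sketch, using the low end of the biological energy range quoted in footnote **:

```python
import math

# 1) Information per device: 500 programmable conductance states store
#    log2(500) bits, versus 1 bit for a binary (0/1) transistor.
bits_per_enode = math.log2(500)
print(round(bits_per_enode, 2))   # ~8.97 bits per device

# 2) Energy gap to biology: <10 pJ per ENODe switching event (footnote **)
#    vs ~1-100 fJ per biological synaptic event; using the 1 fJ lower end:
enode_joules = 10e-12
synapse_joules = 1e-15
print(round(enode_joules / synapse_joules))  # 10000 -> the "10,000 times" figure
```

So each ENODe carries roughly nine transistors’ worth of state, but closing the four-orders-of-magnitude energy gap to a real synapse is the remaining challenge the researchers hope smaller devices will address.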

Linking to live organic neurons

Past efforts in this field have produced high-performance neural networks supported by artificially intelligent algorithms, but these depend on energy-consuming traditional computer hardware.

Every part of the device is made of inexpensive organic materials. These aren’t found in nature but they are largely composed of hydrogen and carbon and are compatible with the brain’s chemistry. Cells have been grown on these materials and they have even been used to make artificial pumps for neural transmitters. The switching voltages applied to train the artificial synapse (about 0.5 mV) are also the same as those that move through human neurons — about 1,000 times lower than the “write” voltage for a typical memristor.

That means it’s possible that the artificial synapse could communicate with live neurons, leading to improved brain-machine interfaces. The softness and flexibility of the device also lends itself to being used in biological environments.
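The voltage comparison works out as follows; the ~0.5 V memristor write voltage is an assumed typical value, consistent with the footnote’s remark that pushing memristors below 0.3 V has proven difficult:

```python
# Hedged check of the "about 1,000 times lower" voltage claim.
enode_switch_v = 0.5e-3    # ~0.5 mV training pulses, per the article
memristor_write_v = 0.5    # assumed typical memristor write voltage

print(round(memristor_write_v / enode_switch_v))  # 1000
```

Operating at millivolt rather than volt scales is what makes direct electrical contact with living neurons plausible without damaging them.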

This research was funded by the National Science Foundation, the Keck Faculty Scholar Funds, the Neurofab at Stanford, the Stanford Graduate Fellowship, Sandia’s Laboratory-Directed Research and Development Program, the U.S. Department of Energy, the Holland Scholarship, the University of Groningen Scholarship for Excellent Students, the Hendrik Muller National Fund, the Schuurman Schimmel-van Outeren Foundation, the Foundation of Renswoude (The Hague and Delft), the Marco Polo Fund, the Instituto Nacional de Ciência e Tecnologia/Instituto Nacional de Eletrônica Orgânica in Brazil, the Fundação de Amparo à Pesquisa do Estado de São Paulo and the Brazilian National Council.

* “A resistive memory device has not yet been demonstrated with adequate electrical characteristics to fully realize the efficiency and performance gains of a neural architecture. State-of-the-art memristors suffer from excessive write noise, write non-linearities, and high write voltages and currents.  Reducing the noise and lowering the switching voltage significantly below 0.3 V (~10 kT) in a two-terminal device without compromising long-term data retention has proven difficult.” … Organic memristive devices have been recently proposed, but are limited by “the slow kinetics of ion diffusion through a polymer to retain their states or on charge storage in metal nanoparticles, which inherently limits performance and stability.” — Yoeri van de Burgt et al., Nature Materials

** ENODe switches at low voltage and energy (< 10 pJ for 1000-square-micrometer devices), compared to an estimated ∼ 1–100 fJ per synaptic event for the human brain.
 

Abstract of A non-volatile organic electrochemical device as a low-voltage artificial synapse for neuromorphic computing

The brain is capable of massively parallel information processing while consuming only ~1–100 fJ per synaptic event. Inspired by the efficiency of the brain, CMOS-based neural architectures and memristors are being developed for pattern recognition and machine learning. However, the volatility, design complexity and high supply voltages for CMOS architectures, and the stochastic and energy-costly switching of memristors complicate the path to achieve the interconnectivity, information density, and energy efficiency of the brain using either approach. Here we describe an electrochemical neuromorphic organic device (ENODe) operating with a fundamentally different mechanism from existing memristors. ENODe switches at low voltage and energy (<10 pJ for 103 μm2 devices), displays >500 distinct, non-volatile conductance states within a ~1 V range, and achieves high classification accuracy when implemented in neural network simulations. Plastic ENODes are also fabricated on flexible substrates enabling the integration of neuromorphic functionality in stretchable electronic systems. Mechanical flexibility makes ENODes compatible with three-dimensional architectures, opening a path towards extreme interconnectivity comparable to the human brain.

Brain-computer interface advance allows paralyzed people to type almost as fast as some smartphone users

Typing with your mind. You are paralyzed. But now, tiny electrodes have been surgically implanted in your brain to record signals from your motor cortex, the brain region controlling muscle movement. As you think of mousing over to a letter (or clicking to choose it), those electrical brain signals are transmitted via a cable to a computer (replacing your spinal cord and muscles). There, advanced algorithms decode the complex electrical brain signals, converting them instantly into screen actions. (credit: Chethan Pandarinath et al./eLife)

Stanford University researchers have developed a brain-computer interface (BCI) system that can enable people with paralysis* to type (using an on-screen cursor) about three times faster, and more accurately, than any system reported to date.

Simply by imagining their own hand movements, one participant was able to type 39 correct characters per minute (about eight words per minute); the other two participants averaged 6.3 and 2.7 words per minute, respectively — all without auto-complete assistance (so it could be much faster).
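The character-to-word conversion uses the standard five-characters-per-word typing convention — an assumption here, since the article doesn’t state it:

```python
# 39 correct characters per minute -> "about eight words per minute",
# assuming the usual 5-characters-per-word convention from typing tests.
chars_per_word = 5
print(39 / chars_per_word)   # 7.8
```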

Those are communication rates that people with arm and hand paralysis would also find useful, the researchers suggest. “We’re approaching the speed at which you can type text on your cellphone,” said Krishna Shenoy, PhD, professor of electrical engineering, a co-senior author of the study, which was published in an open-access paper online Feb. 21 in eLife.

Braingate and beyond

The three study participants used a brain-computer interface called the “BrainGate Neural Interface System.” On KurzweilAI, we first discussed BrainGate in 2011, followed by a 2012 clinical trial that allowed a paralyzed patient to control a robot.

BrainGate in 2012 (credit: Brown University)

The new research, led by Stanford, takes the BrainGate technology significantly further**. Participants can now move a cursor (just by thinking about a hand movement) on a computer screen that displays the letters of the alphabet, and they can “point and click” on letters, computer-mouse-style, to type letters and sentences.

The new BCI uses a tiny silicon chip, just over one-sixth of an inch square, with 100 electrodes that penetrate the brain to about the thickness of a quarter and tap into the electrical activity of individual nerve cells in the motor cortex.

As the participant thinks of a specific hand-to-mouse movement (pointing at or clicking on a letter), neural electrical activity is recorded using 96-channel silicon microelectrode arrays implanted in the hand area of the motor cortex. These signals are then filtered to extract multiunit spiking activity and high-frequency field potentials, then decoded (using two algorithms) to provide “point-and-click” control of a computer cursor.

What’s next

The team next plans to adapt the system so that brain-computer interfaces can control commercial computers, phones and tablets — perhaps extending out to the internet.

Beyond that, Shenoy predicted that a self-calibrating, fully implanted wireless BCI system with no required caregiver assistance and no “cosmetic impact” would be available in five to 10 years from now (“closer to five”).

Perhaps a future wireless, noninvasive version could let anyone simply think to select letters, words, ideas, and images — replacing the mouse and finger touch — along the lines of Elon Musk’s neural lace concept?

* Millions of people with paralysis reside in the U.S.

** The study’s results are the culmination of the long-running multi-institutional BrainGate consortium, which includes scientists at Massachusetts General Hospital, Brown University, Case Western University, and the VA Rehabilitation Research and Development Center for Neurorestoration and Neurotechnology in Providence, Rhode Island. The study was funded by the National Institutes of Health, the Stanford Office of Postdoctoral Affairs, the Craig H. Neilsen Foundation, the Stanford Medical Scientist Training Program, Stanford BioX-NeuroVentures, the Stanford Institute for Neuro-Innovation and Translational Neuroscience, the Stanford Neuroscience Institute, Larry and Pamela Garlick, Samuel and Betsy Reeves, the Howard Hughes Medical Institute, the U.S. Department of Veterans Affairs, the MGH-Dean Institute for Integrated Research on Atrial Fibrillation and Stroke and Massachusetts General Hospital.


Stanford | Stanford researchers develop brain-controlled typing for people with paralysis


Abstract of High performance communication by people with paralysis using an intracortical brain-computer interface

Brain-computer interfaces (BCIs) have the potential to restore communication for people with tetraplegia and anarthria by translating neural activity into control signals for assistive communication devices. While previous pre-clinical and clinical studies have demonstrated promising proofs-of-concept (Serruya et al., 2002; Simeral et al., 2011; Bacher et al., 2015; Nuyujukian et al., 2015; Aflalo et al., 2015; Gilja et al., 2015; Jarosiewicz et al., 2015; Wolpaw et al., 1998; Hwang et al., 2012; Spüler et al., 2012; Leuthardt et al., 2004; Taylor et al., 2002; Schalk et al., 2008; Moran, 2010; Brunner et al., 2011; Wang et al., 2013; Townsend and Platsko, 2016; Vansteensel et al., 2016; Nuyujukian et al., 2016; Carmena et al., 2003; Musallam et al., 2004; Santhanam et al., 2006; Hochberg et al., 2006; Ganguly et al., 2011; O’Doherty et al., 2011; Gilja et al., 2012), the performance of human clinical BCI systems is not yet high enough to support widespread adoption by people with physical limitations of speech. Here we report a high-performance intracortical BCI (iBCI) for communication, which was tested by three clinical trial participants with paralysis. The system leveraged advances in decoder design developed in prior pre-clinical and clinical studies (Gilja et al., 2015; Kao et al., 2016; Gilja et al., 2012). For all three participants, performance exceeded previous iBCIs (Bacher et al., 2015; Jarosiewicz et al., 2015) as measured by typing rate (by a factor of 1.4–4.2) and information throughput (by a factor of 2.2–4.0). This high level of performance demonstrates the potential utility of iBCIs as powerful assistive communication devices for people with limited motor function.