Tactile sensor lets robots gauge objects’ hardness and manipulate small tools

A GelSight sensor attached to a robot’s gripper enables the robot to determine precisely where it has grasped a small screwdriver, and to remove it from and reinsert it into a slot, even when the gripper hides the screwdriver from the robot’s camera. (credit: Robot Locomotion Group at MIT)

Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have added sensors to grippers on robot arms to give robots greater sensitivity and dexterity. The sensor can judge the hardness of surfaces it touches, enabling a robot to manipulate smaller objects than was previously possible.

The “GelSight” sensor consists of a block of transparent soft rubber — the “gel” of its name — with one face coated with metallic paint. It is mounted on one side of a robotic gripper. When the paint-coated face is pressed against an object, the face conforms to the object’s shape and the metallic paint makes the object’s surface reflective. Mounted on the sensor opposite the paint-coated face of the rubber block are three colored lights at different angles and a single camera.

Humans gauge hardness by the degree to which the contact area between the object and our fingers changes as we press on it. Softer objects tend to flatten more, increasing the contact area. The MIT researchers used the same approach.

A GelSight sensor, pressed against each object manually, recorded how the contact pattern changed over time, essentially producing a short movie for each object. A neural network was then used to look for correlations between changes in contact patterns and hardness measurements. The resulting system takes frames of video as inputs and produces hardness scores with very high accuracy.
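As a rough illustration of the underlying idea (not the authors’ pipeline), the toy sketch below infers a hardness score from how fast the contact patch grows across the frames of a press: softer objects flatten more, so their contact area grows faster. The deformation maps, threshold, and scoring formula here are invented for the example; the real system learns this mapping with a neural network.

```python
import numpy as np

def contact_area(frame, threshold=0.5):
    """Fraction of pixels deformed beyond a threshold, given a 2D map of
    per-pixel gel deformation (a hypothetical preprocessing step)."""
    return float((frame > threshold).mean())

def hardness_score(frames, k=1.0):
    """Toy stand-in for the neural network: invert the growth rate of the
    contact patch over the press, so a larger score means a harder object."""
    areas = [contact_area(f) for f in frames]
    growth = areas[-1] - areas[0]
    return k / (growth + 1e-6)

# Simulated press "movies": the contact patch grows to 60% coverage for a
# soft object but only to 15% for a hard one.
rng = np.random.default_rng(0)
soft = [(rng.random((32, 32)) < a).astype(float) for a in (0.05, 0.30, 0.60)]
hard = [(rng.random((32, 32)) < a).astype(float) for a in (0.05, 0.10, 0.15)]
```

Running `hardness_score` on the two simulated presses ranks the hard object above the soft one, mirroring the cue the sensor exploits.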

The researchers also designed control algorithms that use a computer vision system to guide the robot’s gripper toward a tool and then turn location estimation over to a GelSight sensor once the robot has the tool in hand.
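A minimal sketch of that handover logic follows, with invented class and method names (the papers’ controllers are considerably more sophisticated): the camera’s pose estimate is trusted until contact is detected, after which the tactile estimate takes over, since the tactile sensor still “sees” the tool once the gripper occludes it.

```python
from enum import Enum, auto

class Phase(Enum):
    APPROACH = auto()   # camera guides the gripper toward the tool
    IN_HAND = auto()    # tactile sensor takes over localization

class HandoverController:
    """Toy vision-to-touch handover: switch to the tactile pose estimate
    once contact is made, and stay there even if contact flickers."""
    def __init__(self):
        self.phase = Phase.APPROACH

    def estimate_pose(self, camera_pose, tactile_pose, in_contact):
        if in_contact:
            self.phase = Phase.IN_HAND
        return tactile_pose if self.phase is Phase.IN_HAND else camera_pose
```

The one design point worth noting: the controller latches into the in-hand phase rather than toggling, since the camera estimate is unreliable precisely when the gripper occludes the tool.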

“I think that the GelSight technology, as well as other high-bandwidth tactile sensors, will make a big impact in robotics,” says Sergey Levine, an assistant professor of electrical engineering and computer science at the University of California at Berkeley. “For humans, our sense of touch is one of the key enabling factors for our amazing manual dexterity. Current robots lack this type of dexterity and are limited in their ability to react to surface features when manipulating objects. If you imagine fumbling for a light switch in the dark, extracting an object from your pocket, or any of the other numerous things that you can do without even thinking — these all rely on touch sensing.”

The researchers presented their work in two papers at the International Conference on Robotics and Automation.

Wenzhen Yuan | Measuring hardness of fruits with GelSight sensor

Abstract of Tracking Objects with Point Clouds from Vision and Touch

We present an object-tracking framework that fuses point cloud information from an RGB-D camera with tactile information from a GelSight contact sensor. GelSight can be treated as a source of dense local geometric information, which we incorporate directly into a conventional point-cloud-based articulated object tracker based on signed-distance functions. Our implementation runs at 12 Hz using an online depth reconstruction algorithm for GelSight and a modified second-order update for the tracking algorithm. We present data from hardware experiments demonstrating that the addition of contact-based geometric information significantly improves the pose accuracy during contact, and provides robustness to occlusions of small objects by the robot’s end effector.

High-speed light-based systems could replace supercomputers for certain ‘deep learning’ calculations

(a) Optical micrograph of an experimentally fabricated on-chip optical interference unit; the physical region where the optical neural network program exists is highlighted in gray. The programmable nanophotonic processor is similar to a field-programmable gate array (FPGA) integrated circuit: an array of interconnected waveguides that allows the light beams to be modified as needed for a specific deep-learning matrix computation. (b) Schematic illustration of the optical neural network program, which performs matrix multiplication and amplification fully optically. (credit: Yichen Shen et al./Nature Photonics)

A team of researchers at MIT and elsewhere has developed a new approach to deep-learning systems that uses light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep-learning computations.

Deep-learning systems are based on artificial neural networks that mimic the way the brain learns from an accumulation of examples. They can enable technologies such as face- and voice-recognition software, or scour vast amounts of medical data to find patterns that could be useful diagnostically, for example.

But the computations these systems carry out are highly complex and demanding, even for supercomputers. Traditional computer architectures are not efficient at the calculations neural-network tasks require: repeated multiplications of matrices (arrays of numbers), which are computationally intensive even for conventional CPUs and GPUs.

Programmable nanophotonic processor

Instead, the new approach uses an optical device that the researchers call a “programmable nanophotonic processor.” Multiple light beams are directed in such a way that their waves interact with each other, producing interference patterns that “compute” the intended operation.

The optical chips using this architecture could, in principle, carry out dense matrix multiplications (the most power-hungry and time-consuming operations in AI algorithms) much faster than conventional electronic chips. The researchers expect a computational speedup of at least two orders of magnitude over the state of the art, along with three orders of magnitude better power efficiency.
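To see why interference can “compute,” here is a small numpy simulation of a single Mach-Zehnder interferometer, the programmable building block of such a mesh. The phase conventions below are one common textbook choice, not necessarily the chip’s; the point is that propagating a vector of light amplitudes through the programmed device is the matrix multiply.

```python
import numpy as np

def mzi(theta, phi):
    """Transfer matrix of one Mach-Zehnder interferometer: two 50:50
    couplers around an internal phase shifter, preceded by an external
    phase shifter. A cascaded mesh of these can realize any unitary
    matrix (up to output phases)."""
    bs = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)   # 50:50 coupler
    inner = np.diag([np.exp(1j * theta), 1.0])        # internal phase
    outer = np.diag([np.exp(1j * phi), 1.0])          # external phase
    return bs @ inner @ bs @ outer

# Light entering the chip is a vector of complex amplitudes; the mesh
# applies its matrix as the light propagates, with no clocked arithmetic.
x = np.array([1.0, 0.0], dtype=complex)
y = mzi(theta=np.pi / 2, phi=0.0) @ x
```

Because each transfer matrix is unitary, optical power is conserved through the multiply, which is one reason the operation can be close to zero-energy once the phases are set.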

“This chip, once you tune it, can carry out matrix multiplication with, in principle, zero energy, almost instantly,” says Marin Soljacic, one of the MIT researchers on the team.

To demonstrate the concept, the team set the programmable nanophotonic processor to implement a neural network that recognizes four basic vowel sounds. Even with the prototype system, they were able to achieve a 77 percent accuracy level, compared to about 90 percent for conventional systems. There are “no substantial obstacles” to scaling up the system for greater accuracy, according to Soljacic.

The team says it will still take a lot more time and effort to make this system practically useful. Once the system is scaled up and fully functioning, however, the low-power approach should find many uses, especially in situations where power is limited, such as self-driving cars, drones, and mobile consumer devices. Other uses include signal processing for data transmission and data centers.

The research was published Monday (June 12, 2017) in a paper in the journal Nature Photonics (open-access version available on arXiv).

The team also included researchers at Elenion Technologies of New York and the Université de Sherbrooke in Quebec. The work was supported by the U.S. Army Research Office through the Institute for Soldier Nanotechnologies, the National Science Foundation, and the Air Force Office of Scientific Research.

Abstract of Deep learning with coherent nanophotonic circuits

Artificial neural networks are computational network models inspired by signal processing in the brain. These models have dramatically improved performance for many machine-learning tasks, including speech and image recognition. However, today’s computing hardware is inefficient at implementing neural networks, in large part because much of it was designed for von Neumann computing schemes. Significant effort has been made towards developing electronic architectures tuned to implement artificial neural networks that exhibit improved computational speed and accuracy. Here, we propose a new architecture for a fully optical neural network that, in principle, could offer an enhancement in computational speed and power efficiency over state-of-the-art electronics for conventional inference tasks. We experimentally demonstrate the essential part of the concept using a programmable nanophotonic processor featuring a cascaded array of 56 programmable Mach–Zehnder interferometers in a silicon photonic integrated circuit and show its utility for vowel recognition.

How to design and build your own robot

Two robots — robot calligrapher and puppy — produced using an interactive display tool and selecting off-the-shelf components and 3D-printed parts (credit: Carnegie Mellon University)

Carnegie Mellon University (CMU) Robotics Institute researchers have developed a simplified interactive design tool that lets you design and make your own customized legged or wheeled robot, using a mix of 3D-printed parts and off-the-shelf components.

The current process of creating new robotic systems is challenging, time-consuming, and resource-intensive. So the CMU researchers have created a visual design tool with a simple drag-and-drop interface that lets you choose from a library of standard building blocks (such as actuators and mounting brackets that are either off-the-shelf/mass-produced or can be 3D-printed) that you can combine to create complex functioning robotic systems.

(a) The design interface consists of two workspaces. The left workspace allows for designing the robot. It displays a list of various modules at the top. The leftmost menu provides various functions that allow users to define preferences for the search process visualization and for physical simulation. The right workspace (showing the robot design on a plane) runs a physics simulation of the robot for testing. (b) When you select a new module from the modules list, the system automatically makes visual suggestions (shown in red) about possible connections for this module that are relevant to the current design. (credit: Carnegie Mellon University)

An iterative design process lets you experiment by changing the number and location of actuators and adjusting the physical dimensions of your robot. An auto-completion feature can automatically generate assemblies of components by searching through possible component arrangements. It even suggests components that are compatible with each other, points out where actuators should go, and automatically generates 3D-printable structural components to connect those actuators.
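The auto-completion can be pictured as a search over a compatibility graph of modules. The sketch below uses an invented compatibility table, not CMU’s component library, and a plain breadth-first search to propose a chain of parts connecting two user-chosen components:

```python
from collections import deque

# Hypothetical compatibility table: which module types can attach to
# which. The real tool uses a richer design grammar plus geometry.
COMPATIBLE = {
    "bracket": {"actuator", "bracket"},
    "actuator": {"bracket", "wheel", "gripper"},
    "wheel": set(),
    "gripper": set(),
}

def complete(start, goal, max_depth=4):
    """Breadth-first search for a chain of modules from `start` to `goal`,
    mirroring the auto-complete that proposes assemblies connecting a
    pair of user-placed components."""
    queue = deque([[start]])
    while queue:
        chain = queue.popleft()
        if chain[-1] == goal:
            return chain
        if len(chain) < max_depth:
            for nxt in sorted(COMPATIBLE.get(chain[-1], ())):
                queue.append(chain + [nxt])
    return None
```

For example, asking the search to connect a bracket to a wheel yields the shortest chain through an actuator, much as the tool suggests where actuators should go.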

Automated design process. (a) Start with a guiding mesh for the robot you want to make and select the orientations of its motors, using the drag and drop interface. (b) The system then searches for possible designs that connect a given pair of motors in user-defined locations, according to user-defined preferences. You can reject the solution and re-do the search with different preferences anytime. A proposed search solution connecting the root motor to the target motor (highlighted in dark red) is shown in light blue. Repeat this process for each pair of motors. (c) Since the legs are symmetric in this case, you would only need to use the search process for two legs. The interface lets you create the other pair of legs by simple editing operations. Finally, attach end-effectors of your choice and create a body plate to complete your awesome robot design. (d) shows the final design (with and without the guiding mesh). The dinosaur head mesh was manually added after this particular design, for aesthetic appeal. (credit: Carnegie Mellon University)

The research team, headed by Stelian Coros, CMU Robotics Institute assistant professor of robotics, designed a number of robots with the tool and verified its feasibility by fabricating two test robots (shown above) — a wheeled robot with a manipulator arm that can hold a pen for drawing, and a four-legged “puppy” robot that can walk forward or sideways. “Our work aims to make robotics more accessible to casual users,” says Coros.

Robotics Ph.D. student Ruta Desai presented a report on the design tool at the IEEE International Conference on Robotics and Automation (ICRA 2017) May 29–June 3 in Singapore. No date for the availability of this tool has been announced.

This work was supported by the National Science Foundation.

Ruta Desai | Computational Abstractions for Interactive Design of Robotic Devices (ICRA 2017)

Abstract of Computational Abstractions for Interactive Design of Robotic Devices

We present a computational design system that allows novices and experts alike to easily create custom robotic devices using modular electromechanical components. The core of our work consists of a design abstraction that models the way in which these components can be combined to form complex robotic systems. We use this abstraction to develop a visual design environment that enables an intuitive exploration of the space of robots that can be created using a given set of actuators, mounting brackets and 3d-printable components. Our computational system also provides support for design auto-completion operations, which further simplifies the task of creating robotic devices. Once robot designs are finished, they can be tested in physical simulations and iteratively improved until they meet the individual needs of their users. We demonstrate the versatility of our computational design system by creating an assortment of legged and wheeled robotic devices. To test the physical feasibility of our designs, we fabricate a wheeled device equipped with a 5-DOF arm and a quadrupedal robot.

Alpha Go defeats world’s top Go player. What’s next?

What does the research team behind AlphaGo do next after winning the three-game match Saturday (May 27) against Ke Jie — the world’s top Go player — at the Future of Go Summit in Wuzhen, China?

“Throw their energy into the next set of grand challenges, developing advanced general algorithms that could one day help scientists as they tackle some of our most complex problems, such as finding new cures for diseases, dramatically reducing energy consumption, or inventing revolutionary new materials,” says DeepMind Technologies CEO Demis Hassabis.

Academic paper, Go teaching tool

But it’s “not the end of our work with the Go community,” he adds. “We plan to publish one final academic paper later this year that will detail the extensive set of improvements we made to the algorithms’ efficiency and potential to be generalised across a broader set of problems.”

Already in the works (with Jie’s collaboration): a teaching tool that “will show AlphaGo’s analysis of Go positions, providing an insight into how the program thinks, and hopefully giving all players and fans the opportunity to see the game through the lens of AlphaGo.”

Ke Jie plays the final match (credit: DeepMind)

DeepMind is also “publishing a special set of 50 AlphaGo vs AlphaGo games, played at full-length time controls, which we believe contain many new and interesting ideas and strategies.”

DeepMind | The Future of Go Summit, Match Three: Ke Jie & AlphaGo

DeepMind | Exploring the mysteries of Go with AlphaGo and China’s top players

DeepMind | Demis Hassabis on AlphaGo: its legacy and the ‘Future of Go Summit’ in Wuzhen, China

3D-printed ‘bionic skin’ could give robots and prosthetics the sense of touch

Schematic of a new kind of 3D printer that can print touch sensors directly on a model hand. (credit: Shuang-Zhuang Guo and Michael McAlpine/Advanced Materials)

Engineering researchers at the University of Minnesota have developed a process for 3D-printing stretchable, flexible, and sensitive electronic sensory devices that could give robots or prosthetic hands — or even real skin — the ability to mechanically sense their environment.

One major use would be to give surgeons the ability to feel during minimally invasive surgeries instead of using cameras, or to increase the sensitivity of surgical robots. The process could also make it easier for robots to walk and interact with their environment.

Printing electronics directly on human skin could be used for pulse monitoring, energy harvesting (of movements), detection of finger motions (on a keyboard or other devices), or chemical sensing (for example, by soldiers in the field to detect dangerous chemicals or explosives). Or imagine a future computer mouse built into your fingertip, with haptic touch on any surface.

“While we haven’t printed on human skin yet, we were able to print on the curved surface of a model hand using our technique,” said Michael McAlpine, a University of Minnesota mechanical engineering associate professor and lead researcher on the study.* “We also interfaced a printed device with the skin and were surprised that the device was so sensitive that it could detect your pulse in real time.”

The researchers also visualize use in “bionic organs.”

A unique skin-compatible 3D-printing process

(left) Schematic of the tactile sensor. (center) Top view. (right) Optical image showing the conformally printed 3D tactile sensor on a fingertip. Scale bar = 4 mm. (credit: Shuang-Zhuang Guo et al./Advanced Materials)

McAlpine and his team made the sensing fabric with a one-of-a-kind 3D printer they built in the lab. The multifunctional printer has four nozzles to print the various specialized “inks” that make up the layers of the device: a base layer of silicone**, top and bottom electrodes made of a silver-based piezoresistive conducting ink, a coil-shaped pressure sensor, and a supporting layer that holds the top layer in place while it sets (washed away in the final manufacturing step).

Surprisingly, all of the layers of “inks” used in the flexible sensors can set at room temperature. Conventional 3D printing with molten plastic is too hot, and its output too rigid, for use on skin. The printed sensors can stretch up to three times their original size.

The researchers say the next step is to move toward semiconductor inks and printing on a real surface. “The manufacturing is built right into the process, so it is ready to go now,” McAlpine said.

The research was published online in the journal Advanced Materials. It was funded by the National Institute of Biomedical Imaging and Bioengineering of the National Institutes of Health.

* McAlpine integrated electronics and novel 3D-printed nanomaterials to create a “bionic ear” in 2013.

** The silicone rubber has a low modulus of elasticity of 150 kPa, similar to that of skin, and lower hardness (Shore A 10) than that of human skin, according to the Advanced Materials paper.

College of Science and Engineering, UMN | 3D Printed Stretchable Tactile Sensors

Abstract of 3D Printed Stretchable Tactile Sensors

The development of methods for the 3D printing of multifunctional devices could impact areas ranging from wearable electronics and energy harvesting devices to smart prosthetics and human–machine interfaces. Recently, the development of stretchable electronic devices has accelerated, concomitant with advances in functional materials and fabrication processes. In particular, novel strategies have been developed to enable the intimate biointegration of wearable electronic devices with human skin in ways that bypass the mechanical and thermal restrictions of traditional microfabrication technologies. Here, a multimaterial, multiscale, and multifunctional 3D printing approach is employed to fabricate 3D tactile sensors under ambient conditions conformally onto freeform surfaces. The customized sensor is demonstrated with the capabilities of detecting and differentiating human movements, including pulse monitoring and finger motions. The custom 3D printing of functional materials and devices opens new routes for the biointegration of various sensors in wearable electronics systems, and toward advanced bionic skin applications.

How Google’s ‘smart reply’ is getting smarter

(credit: Google Research)

Last week, KurzweilAI reported that Google is rolling out an enhanced version of its “smart reply” machine-learning email software to “over 1 billion Android and iOS users of Gmail” — quoting Google CEO Sundar Pichai.

We noted that the new smart-reply version is now able to handle challenging sentences like “That interesting person at the cafe we like gave me a glance,” as Google research scientist Brian Strope and engineering director Ray Kurzweil noted in a Google Research blog post.

But “given enough examples of language, a machine learning approach can discover many of these subtle distinctions,” they wrote.

How does it work? “The content of language is deeply hierarchical, reflected in the structure of language itself, going from letters to words to phrases to sentences to paragraphs to sections to chapters to books to authors to libraries, etc.,” they explained.

So a hierarchical approach to learning “is well suited to the hierarchical nature of language. We have found that this approach works well for suggesting possible responses to emails. We use a hierarchy of modules, each of which considers features that correspond to sequences at different temporal scales, similar to how we understand speech and language.”*
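A hedged illustration of the multi-scale idea: combine features computed over the whole message with features computed over shorter spans (bigrams here), then compare messages in that joint space. The embeddings, scales, and scoring below are invented for the sketch; Google’s system learns its representations rather than averaging random vectors.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy word embeddings standing in for learned representations.
VOCAB = {w: rng.normal(size=8) for w in
         "thanks for the update see you tomorrow sounds good".split()}

def embed(tokens):
    return np.mean([VOCAB[t] for t in tokens], axis=0)

def hierarchical_features(tokens):
    """Stack features at two temporal scales (whole message and word
    bigrams), loosely mirroring 'sequences at different temporal
    scales'. Assumes at least two tokens."""
    whole = embed(tokens)
    bigrams = [embed(tokens[i:i + 2]) for i in range(len(tokens) - 1)]
    return np.concatenate([whole, np.mean(bigrams, axis=0)])

def score(message, reply):
    """Cosine similarity between multi-scale representations."""
    a, b = hierarchical_features(message), hierarchical_features(reply)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

The efficiency claim in the quote comes from exactly this shape of computation: fixed-size vectors per scale, compared all at once, instead of decoding a reply word by word.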

Simplifying communication

“With Smart Reply, Google is assuming users want to offload the burdensome task of communicating with one another to our more efficient counterparts,” says Wired writer Liz Stinson.

“It’s not wrong. The company says the machine-generated replies already account for 12 percent of emails sent; expect that number to boom once everyone with the Gmail app can send one-tap responses.

“In the short term, that might mean more stilted conversations in your inbox. In the long term, the growing number of people who use these canned responses is only going to benefit Google, whose AI grows smarter with every email sent.”

Another challenge is that our emails, particularly from mobile devices, “tend to be riddled with idioms [such as urban lingo] that make no actual sense,” suggests Washington Post writer Hayley Tsukayama. “Things change depending on context: Something ‘wicked’ could be good or very bad, for example. Not to mention, sarcasm is a thing.

“Which is all to warn you that you may still get a wildly random and even potentially inappropriate suggestion — I once got an ‘Oh no!’ suggestion to a friend’s self-deprecating pregnancy announcement, for example. If the email only calls for a one- or two-sentence response, you’ll probably find Smart Reply useful. If it requires any nuance, though, it’s still best to use your own human judgment.”

* The initial release of Smart Reply encoded input emails word-by-word with a long short-term memory (LSTM) recurrent neural network, and then decoded potential replies with yet another word-level LSTM. While this type of modeling is very effective in many contexts, even with Google infrastructure, it’s an approach that requires substantial computation resources. Instead of working word-by-word, we found an effective and highly efficient path by processing the problem more all-at-once, by comparing a simple hierarchy of vector representations of multiple features corresponding to longer time spans. — Brian Strope and Ray Kurzweil, Google Research Blog.

When AI improves human performance instead of taking over

The game results show that placing slightly “noisy” bots in a central location (high-degree nodes) improves human coordination by reducing same-color neighbor nodes (the goal of the game). Square nodes show the bots and round nodes show human players; thick red lines show color conflicts, which are reduced with bot participation (right). (credit: Hirokazu Shirado and Nicholas A. Christakis/Nature)

It’s not about artificial intelligence (AI) taking over — it’s about AI improving human performance, a new study by Yale University researchers has shown.

“Much of the current conversation about artificial intelligence has to do with whether AI is a substitute for human beings. We believe the conversation should be about AI as a complement to human beings,” said Nicholas Christakis, co-director of the Yale Institute for Network Science (YINS) and senior author of the study.*

AI doesn’t even have to be super-sophisticated to make a difference in people’s lives; even “dumb AI” can help human groups, based on the study, which appears in the May 18, 2017 edition of the journal Nature.

How bots can boost human performance

In a series of experiments using teams of human players and autonomous software agents (“bots”), the bots boosted the performance of human groups and the individual players, the researchers found.

The experiment was built around an online color-coordination game that required groups of people to work toward a collective goal: every node had to end up a different color from all of its neighboring nodes. Subjects were paid a US$2 show-up fee plus a declining bonus of up to US$3, depending on how quickly the group reached a global solution (one in which every player had chosen a color different from all connected neighbors). If no global solution was reached within five minutes, the game stopped and the subjects earned no bonus.

The human players also interacted with anonymous bots that were programmed with three levels of behavioral randomness — meaning the AI bots sometimes deliberately made mistakes (introduced “noise”). In addition, sometimes the bots were placed in different parts of the social network to try different strategies.

The result: The bots reduced the median time for groups to solve problems by 55.6%. The experiment also showed a cascade effect: People whose performance improved when working with the bots then influenced other human players to raise their game. More than 4,000 people participated in the experiment, which used Yale-developed software called breadboard.
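The setup can be sketched as a toy simulation: agents on a ring network greedily pick colors unused by their neighbors, with a tunable probability of random (“noisy,” bot-like) moves that can shake the group out of local optima. The network, parameters, and update rule below are simplifications for illustration, not the study’s breadboard experiment.

```python
import random

def play(n=20, colors=3, noise=0.1, rounds=200, seed=1):
    """Run the toy coordination game on an n-node ring and return the
    number of same-color edges remaining (0 means solved)."""
    rng = random.Random(seed)
    col = [rng.randrange(colors) for _ in range(n)]
    nbrs = lambda i: ((i - 1) % n, (i + 1) % n)
    for _ in range(rounds):
        i = rng.randrange(n)
        taken = {col[j] for j in nbrs(i)}
        if rng.random() < noise:
            col[i] = rng.randrange(colors)     # noisy, bot-like move
        else:
            free = [c for c in range(colors) if c not in taken]
            if free:
                col[i] = rng.choice(free)       # greedy human-like move
        if all(col[k] not in {col[j] for j in nbrs(k)} for k in range(n)):
            break                               # global solution reached
    return sum(col[k] == col[(k + 1) % n] for k in range(n))
```

On richer networks than a ring, purely greedy play can stall in configurations no single player wants to leave, which is where the study found a small dose of randomness most valuable.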

The findings have implications for a variety of situations in which people interact with AI technology, according to the researchers. Examples include human drivers who share roadways with autonomous cars and operations in which human soldiers work in tandem with AI.

“There are many ways in which the future is going to be like this,” Christakis said. “The bots can help humans to help themselves.”

Practical business AI tools

One example: Salesforce CEO Marc Benioff uses a bot called Einstein to help him run his company, Business Insider reported Thursday (May 18, 2017).

“Powered by advanced machine learning, deep learning, predictive analytics, natural language processing and smart data discovery, Einstein’s models will be automatically customised for every single customer,” according to the Salesforce blog. “It will learn, self-tune and get smarter with every interaction and additional piece of data. And most importantly, Einstein’s intelligence will be embedded within the context of business, automatically discovering relevant insights, predicting future behavior, proactively recommending best next actions and even automating tasks.”

Benioff says he also uses a version called Einstein Guidance for forecasting and modeling. It even helps end internal politics at executive meetings, calling out under-performing executives.

“AI is the next platform. All future apps for all companies will be built on AI,” Benioff predicts.

* Christakis is a professor of sociology, ecology & evolutionary biology, biomedical engineering, and medicine at Yale. Grants from the Robert Wood Johnson Foundation and the National Institute of Social Sciences supported the research.

Abstract of Locally noisy autonomous agents improve global human coordination in network experiments

Coordination in groups faces a sub-optimization problem and theory suggests that some randomness may help to achieve global optima. Here we performed experiments involving a networked colour coordination game in which groups of humans interacted with autonomous software agents (known as bots). Subjects (n = 4,000) were embedded in networks (n = 230) of 20 nodes, to which we sometimes added 3 bots. The bots were programmed with varying levels of behavioural randomness and different geodesic locations. We show that bots acting with small levels of random noise and placed in central locations meaningfully improve the collective performance of human groups, accelerating the median solution time by 55.6%. This is especially the case when the coordination problem is hard. Behavioural randomness worked not only by making the task of humans to whom the bots were connected easier, but also by affecting the gameplay of the humans among themselves and hence creating further cascades of benefit in global coordination in these heterogeneous systems.

Google rolls out new ‘smart reply’ machine-learning email software to more than 1 billion Gmail mobile users

A smarter version of Smart Reply (credit: Google Research)

Google is rolling out an enhanced version of its “smart reply” machine-learning email software to “over 1 billion Android and iOS users of Gmail,” Google CEO Sundar Pichai said today (May 17, 2017) in a keynote at the annual Google I/O conference.

Smart Reply suggests up to three replies to an email message — saving you typing time, or giving you time to think through a better reply. Smart Reply was previously only available to users of Google Inbox (an app that helps Gmail users organize their email messages and reply efficiently).

Hierarchical model

Developed by a team headed by Ray Kurzweil, a Google director of engineering, “the new version of Smart Reply increases the percentage of usable suggestions and is much more algorithmically efficient than the original system,” said Kurzweil in a Google Research blog post with research colleague Brian Strope today. “And that efficiency now makes it feasible for us to provide Smart Reply for Gmail.”

A hierarchy of modules (credit: Google Research)

The team was inspired by how humans understand languages and concepts, based on hierarchical models of language, Kurzweil and Strope explained. The new approach uses “hierarchies of modules, each of which can learn, remember, and recognize a sequential pattern,” as described in Kurzweil’s 2012 book, How to Create a Mind.

For example, a sentence like “That interesting person at the cafe we like gave me a glance” is difficult to interpret. Was it a positive or negative gesture? But “given enough examples of language, a machine learning approach can discover many of these subtle distinctions,” they write.

Best of MOOGFEST 2017

The Moogfest four-day festival in Durham, North Carolina next weekend (May 18–21) explores the future of technology, art, and music. Here are some of the sessions that may be especially interesting to KurzweilAI readers. Full #Moogfest2017 Program Lineup.

Culture and Technology

(credit: Google)

The Magenta by Google Brain team will bring its work to life through an interactive demo plus workshops on the creation of art and music through artificial intelligence.

Magenta is a Google Brain project to ask and answer the questions, “Can we use machine learning to create compelling art and music? If so, how? If not, why not?” It is first a research project, advancing the state of the art in machine-generated music, video, image, and text; second, it is an effort to build a community of artists, coders, and machine learning researchers.

The interactive demo will walk through an improvisation along with the machine-learning models, much like the AI Jam Session. The workshop will cover how to use the open-source library to build and train models and interact with them via MIDI.

Technical reference: Magenta: Music and Art Generation with Machine Intelligence

TEDx Talks | Music and Art Generation using Machine Learning | Curtis Hawthorne | TEDxMountainViewHighSchool

Miguel Nicolelis (credit: Duke University)

Miguel A. L. Nicolelis, MD, PhD, will discuss state-of-the-art research on brain-machine interfaces, which make it possible for the brains of primates to interact directly and bi-directionally with mechanical, computational, and virtual devices. He will review a series of recent experiments using real-time computational models to investigate how ensembles of neurons encode motor information. These experiments have revealed that brain-machine interfaces can be used not only to study fundamental aspects of neural ensemble physiology, but also to serve as an experimental paradigm for testing the design of novel neuroprosthetic devices.

He will also explore research that raises the hypothesis that the properties of a robot arm, or other neurally controlled tools, can be assimilated by brain representations as if they were extensions of the subject’s own body.

Theme: Transhumanism

Dervishes at Royal Opera House with Matthew Herbert (credit: ?)

Andy Cavatorta (MIT Media Lab) will present a conversation and workshop on a range of topics including the four-century history of music and performance at the forefront of technology. Known as the inventor of Bjork’s Gravity Harp, he has collaborated on numerous projects to create instruments using new technologies that coerce expressive music out of fire, glass, gravity, tiny vortices, underwater acoustics, and more. His instruments explore technologically mediated emotion and opportunities to express the previously inexpressible.

Theme: Instrument Design

Berklee College of Music

Michael Bierylo (credit: Moogfest)

Michael Bierylo will present his Modular Synthesizer Ensemble alongside the Csound workshops from fellow Berklee Professor Richard Boulanger.

Csound is a sound and music computing system originally developed at the MIT Media Lab. It can most accurately be described as a compiler: software that takes textual instructions in the form of source code and converts them into object code, a stream of numbers representing audio. Although it has a strong tradition as a tool for composing electro-acoustic pieces, it is used by composers and musicians for any kind of music that can be made with the help of a computer. It was traditionally used in a non-interactive, score-driven context, but nowadays it is mostly used in real time.
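The compiler idea is easy to illustrate. The following Python sketch is illustrative only: it is not Csound syntax, and the `sine` instruction format is invented. It turns one textual score line into "object code," a list of numbers representing audio samples.

```python
import math

# Illustrative sketch: "compile" a textual score line into a stream of
# numbers representing audio, the way Csound compiles source code.

SAMPLE_RATE = 8000  # samples per second (assumed for this sketch)

def compile_score_line(line):
    """Parse 'sine <freq_hz> <duration_s> <amplitude>' into audio samples."""
    op, freq, dur, amp = line.split()
    assert op == "sine", "only one opcode in this toy compiler"
    freq, dur, amp = float(freq), float(dur), float(amp)
    n = int(SAMPLE_RATE * dur)
    return [amp * math.sin(2 * math.pi * freq * t / SAMPLE_RATE)
            for t in range(n)]

samples = compile_score_line("sine 440 0.5 0.8")  # a 440 Hz tone, half a second
print(len(samples))  # → 4000 numbers of audio
```

Writing those numbers to a sound device, at the chosen sample rate, is what turns the "object code" into audible sound.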

Michael Bierylo serves as the Chair of the Electronic Production and Design Department, which offers students the opportunity to combine performance, composition, and orchestration with computer, synthesis, and multimedia technology in order to explore the limitless possibilities of musical expression.

Berklee College of Music | Electronic Production and Design (EPD) at Berklee College of Music

Chris Ianuzzi (credit: William Murray)

Chris Ianuzzi, a synthesist of Ciani-Musica and past collaborator with pioneers such as Vangelis and Peter Baumann, will present a daytime performance and sound-exploration workshops with the B11 brain interface and the NeuroSky brainwave-sensing headset.

Theme: Hacking Systems

Argus Project (credit: Moogfest)

The Argus Project from Gan Golan and Ron Morrison of NEW INC is a wearable sculpture, video installation and counter-surveillance training, which directly intersects the public debate over police accountability. According to ancient Greek myth, Argus Panoptes was a giant with 100 eyes who served as an eternal watchman, both for – and against – the gods.

By embedding an array of camera “eyes” into a full body suit of tactical armor, the Argus exo-suit creates a “force field of accountability” around the bodies of those targeted. While some see filming the police as a confrontational or subversive act, it is, in fact, a deeply democratic one. The act of bearing witness to the actions of the state, and showing them to the world, strengthens our society and institutions. The Argus Project is not so much about an individual hero as about the Citizen Body as a whole. Between music acts, a presentation about the project will be part of the Protest Stage.

Argus Exo Suit Design (credit: Argus Project)

Theme: Protest

Found Sound Nation (credit: Moogfest)

Democracy’s Exquisite Corpse from Found Sound Nation and Moogfest, an immersive installation housed within a completely customized geodesic dome, is a multi-person instrument and music-based round-table discussion. Artists, activists, innovators, festival attendees and community engage in a deeply interactive exploration of sound as a living ecosystem and primal form of communication.

Within the dome are 9 unique stations, each with its own distinct set of analog or digital sound-making devices. Each person’s set of devices is chained to that of the person sitting next to them, so that everybody’s musical actions and choices affect their neighbor, and thus everyone else at the table. This instrument is a unique experiment in how technology and the instinctive language of sound can play a role in shaping a truly collective unconscious.
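The chained topology can be sketched as a tiny simulation. This Python sketch is hypothetical: the nine-station count comes from the text, but the mixing rule is invented for illustration. Each station mixes its own signal with its neighbor's output, so one player's choices propagate around the whole ring.

```python
# Hypothetical sketch of the dome's chained topology: nine stations in a
# ring, each mixing its own signal with the previous station's output.

N_STATIONS = 9

def step(signals):
    """One update: each station averages its own signal with its
    predecessor's, so sound 'travels' around the ring."""
    return [0.5 * signals[i] + 0.5 * signals[(i - 1) % N_STATIONS]
            for i in range(N_STATIONS)]

signals = [0.0] * N_STATIONS
signals[0] = 1.0              # station 0 plays something
for _ in range(3):            # a few update steps later...
    signals = step(signals)
print([round(s, 3) for s in signals[:4]])  # → [0.125, 0.375, 0.375, 0.125]
```

After a few steps the sound has spread downstream to neighboring stations, which is the point of the installation: no station's output is independent of the others.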

Theme: Protest

(credit: Land Marking)

Land Marking, from Halsey Burgund and Joe Zibkow of MIT Open Doc Lab, is a mobile-based music/activist project that augments the physical landscape of protest events with a layer of location-based audio contributed by event participants in real-time. The project captures the audioscape and personal experiences of temporary, but extremely important, expressions of discontent and desire for change.

Land Marking will be teaming up with the Protest Stage to allow Moogfest attendees to contribute their thoughts on protests and tune into an evolving mix of commentary and field recordings from others throughout downtown Durham. Land Marking is available on select apps.

Theme: Protest

Taeyoon Choi (credit: Moogfest)

Taeyoon Choi, an artist and educator based in New York and Seoul, will be leading a Sign Making Workshop as one of the Future Thought leaders on the Protest Stage. His art practice involves performance, electronics, drawings, and storytelling that often lead to interventions in public spaces.

Taeyoon will also participate in the Handmade Computer workshop to build a 1 Bit Computer, which demonstrates how binary numbers and boolean logic can be configured to create more complex components. On their own, these components aren’t capable of computing anything particularly useful, but a computer is said to be Turing complete if it includes all of them, at which point it has the extraordinary ability to carry out any possible computation. He has participated in numerous workshops at festivals around the world, from Korea to Scotland, but primarily at the School for Poetic Computation (SFPC), an artist-run school co-founded by Taeyoon in NYC. Taeyoon Choi’s Handmade Computer projects.
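The workshop's premise, that simple boolean components can be configured into more complex ones, can be shown concretely. This Python sketch is our own illustration, unrelated to the workshop's actual hardware: it builds a one-bit full adder entirely from NAND, the classic functionally complete primitive.

```python
# Boolean logic configured into more complex components: NAND alone can
# express every other gate, and from those gates we build a full adder.

def NAND(a, b):
    return 1 - (a & b)

def NOT(a):    return NAND(a, a)
def AND(a, b): return NOT(NAND(a, b))
def OR(a, b):  return NAND(NOT(a), NOT(b))
def XOR(a, b): return AND(OR(a, b), NAND(a, b))

def full_adder(a, b, carry_in):
    """Add three bits; return (sum_bit, carry_out)."""
    sum_bit = XOR(XOR(a, b), carry_in)
    carry_out = OR(AND(a, b), AND(carry_in, XOR(a, b)))
    return sum_bit, carry_out

print(full_adder(1, 1, 1))  # → (1, 1): 1 + 1 + 1 = 3 = binary 11
```

Chain a full adder per bit, with each carry-out feeding the next carry-in, and you have a multi-bit adder, one of the "more complex components" the workshop description refers to.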

Theme: Protest

(credit: Moogfest)

irlbb, from Vivan Thi Tang, connects individuals after IRL (in real life) interactions, creating community that would otherwise have been missed. With a customized beta of the app for Moogfest 2017, irlbb presents a unique engagement opportunity.

Theme: Protest

Ryan Shaw and Michael Clamann (credit: Duke University)

Duke Professors Ryan Shaw and Michael Clamann will lead a daily science pub talk series on topics that include future medicine, humans and autonomy, and quantum physics.

Ryan is a pioneer in mobile health, the collection and dissemination of information using mobile and wireless devices for healthcare. He works with faculty at Duke’s Schools of Nursing, Medicine, and Engineering to integrate mobile technologies into first-generation care delivery systems. These technologies afford researchers, clinicians, and patients a rich stream of real-time information about individuals’ biophysical and behavioral health in everyday environments.

Michael Clamann is a Senior Research Scientist in the Humans and Autonomy Lab (HAL) within the Robotics Program at Duke University, an Associate Director at UNC’s Collaborative Sciences Center for Road Safety, and the Lead Editor for Robotics and Artificial Intelligence for Duke’s SciPol science policy tracking website. In his research, he works to better understand the complex interactions between robots and people and how they influence system effectiveness and safety.

Theme: Hacking Systems

Dave Smith (credit: Moogfest)

Dave Smith, the iconic instrument innovator and Grammy-winner, will lead Moogfest’s Instruments Innovators program and host a headlining conversation with a leading artist revealed in next week’s release. He will also host a masterclass.

As the original founder of Sequential Circuits in the mid-70s, Dave designed the Prophet-5, the world’s first fully programmable polyphonic synth and the first musical instrument with an embedded microprocessor. From the late 1980s through the early 2000s, he worked to develop next-level synths with the likes of the Audio Engineering Society, Yamaha, Korg, and Seer Systems (for Intel). Realizing the limitations of software, Dave returned to hardware and started Dave Smith Instruments (DSI), which released the Evolver hybrid analog/digital synthesizer in 2002. Since then, the DSI product lineup has grown to include the Prophet-6, OB-6, Pro 2, Prophet 12, and Prophet ’08 synthesizers, as well as the Tempest drum machine, co-designed with friend and fellow electronic instrument designer Roger Linn.

Theme: Future Thought

Dave Rossum, Gerhard Behles, and Lars Larsen (credit: Moogfest)

E-mu Systems Founder Dave Rossum, Ableton CEO Gerhard Behles, and LZX Founder Lars Larsen will take part in conversations as part of the Instruments Innovators program.

Driven by the creative and technological vision of electronic music pioneer Dave Rossum, Rossum Electro-Music creates uniquely powerful tools for electronic music production, the culmination of Dave’s 45 years designing industry-defining instruments and transformative technologies. Starting with his co-founding of E-mu Systems, Dave provided the technological leadership behind what many consider the premier professional modular synthesizer system, the E-mu Modular System, which became an instrument of choice for numerous recording studios, educational institutions, and artists as diverse as Frank Zappa, Leon Russell, and Hans Zimmer. In the following years, he developed the Emulator keyboards and racks (e.g., the Emulator II), the Emax samplers, the legendary SP-12 and SP-1200 sampling drum machines, the Proteus sound modules, and the Morpheus Z-Plane Synthesizer.

Gerhard Behles co-founded Ableton in 1999 with Robert Henke and Bernd Roggendorf. Prior to this he had been part of electronic music act “Monolake” alongside Robert Henke, but his interest in how technology drives the way music is made diverted his energy towards developing music software. He was fascinated by how dub pioneers such as King Tubby ‘played’ the recording studio, and began to shape this concept into a music instrument that became Ableton Live.

LZX Industries was born in 2008 out of the Synth DIY scene when Lars Larsen of Denton, Texas and Ed Leckie of Sydney, Australia began collaborating on the development of a modular video synthesizer. At that time, analog video synthesizers were inaccessible to artists outside of a handful of studios and universities. It was their continuing mission to design creative video instruments that (1) stay within the financial means of the artists who wish to use them, (2) honor and preserve the legacy of 20th century toolmakers, and (3) expand the boundaries of possibility. Since 2015, LZX Industries has focused on the research and development of new instruments, user support, and community building.


ATLAS detector (credit: Kaushik De, Brookhaven National Laboratory)

ATLAS @ CERN. The full ATLAS @ CERN program will be led by Duke University Professors Mark Kruse and Katherine Hayles along with ATLAS @ CERN physicist Steven Goldfarb.

The program will include a “Virtual Visit” to the Large Hadron Collider — the world’s largest and most powerful particle accelerator — via a live video session, a half-day workshop analyzing and understanding LHC data, and a “Science Fiction versus Science Fact” live debate.

The ATLAS experiment is designed to exploit the full discovery potential and the huge range of physics opportunities that the LHC provides. Physicists test the predictions of the Standard Model, which encapsulates our current understanding of what the building blocks of matter are and how they interact, resulting in discoveries such as the Higgs boson. By pushing the frontiers of knowledge, ATLAS seeks to answer fundamental questions: What are the basic building blocks of matter? What are the fundamental forces of nature? Could there be a greater underlying symmetry to our universe?

“Atlas Boogie” (referencing the Higgs boson):

ATLAS Experiment | The ATLAS Boogie

(credit: Kate Shaw)

Kate Shaw (ATLAS @ CERN), PhD, in her keynote, titled “Exploring the Universe and Impacting Society Worldwide with the Large Hadron Collider (LHC) at CERN,” will dive into the present-day and future impacts of the LHC on society. She will also share findings from the work she has done promoting particle physics in developing countries through her Physics without Frontiers program.


Theme: Future Thought

Arecibo (credit: Joe Davis/MIT)

In his keynote, Joe Davis (MIT) will trace the history of several projects centered on ideas about extraterrestrial communications that have given rise to new scientific techniques and inspired new forms of artistic practice. He will present his “swansong” — an interstellar message that is intended explicitly for human beings rather than for aliens.

Theme: Future Thought

Immortality bus (credit: Zoltan Istvan)

Zoltan Istvan (Immortality Bus), the former U.S. Presidential candidate for the Transhumanist Party and leader of the Transhumanist movement, will explore the path to immortality through science, with the purpose of using science and technology to radically enhance the human being and human experience. His futurist work has reached over 100 million people, partly thanks to the Immortality Bus, which he recently drove across America with embedded journalists aboard. The bus is shaped like a giant coffin to raise life-extension awareness.

Zoltan Istvan | 1-min Highlight Video for Zoltan Istvan Transhumanism Documentary IMMORTALITY OR BUST

Theme: Transhumanism/Biotechnology

(credit: Moogfest)

Marc Fleury and members of the Church of Space — Park Krausen, Ingmar Koch, and Christ of Veillon — return to Moogfest for a second year to present an expanded and varied program with daily explorations in modern physics with music and the occult, Illuminati performances, theatrical rituals to ERIS, and a Sunday Mass in their own dedicated “Church” venue.

Theme: Techno-Shamanism


Do robots creep you out?

Which of these presentation methods makes the robot look most real: live, VR, 3D TV, or 2D TV? (credit: Constanze Schreiner/University of Koblenz-Landau, Martina Mara/Ars Electronica Futurelab, and Markus Appel/University of Wurzburg)

How do you make humanoid robots look least creepy? With the increasing use of industrial (and soon, service) robots, it’s a good question.

Researchers at the University of Koblenz-Landau, the University of Wurzburg, and Ars Electronica Futurelab decided to find out with an experiment. They created a skit with a human actor and the Roboy robot, and presented scripted human-robot interactions (HRIs) using four types of presentations: live, virtual reality (VR), 3D TV, and 2D TV. Participants saw Roboy assisting the human in organizing appointments, conducting web searches, and finding a birthday present for the human’s mother.

People who watched live interactions with the robot were most likely to consider the robot as real, followed by viewing the same interaction via VR. Robots presented in VR also scored high in human likeness, but lower than in the live presentation.

The researchers will present their findings at the 67th Annual Conference of the International Communication Association in San Diego, CA, May 25–29, 2017.