AlphaZero: DeepMind’s ‘alien’ program masters chess at a superhuman level in 24 hours with no domain knowledge

AlphaZero vs. Stockfish chess program | Round 1 (credit: Chess.com)

Demis Hassabis, the founder and CEO of DeepMind, announced at the Neural Information Processing Systems conference (NIPS 2017) last week that DeepMind’s new AlphaZero program achieved a superhuman level of play in chess within 24 hours.

The program started from random play, given no domain knowledge except the game rules, according to an arXiv paper by DeepMind researchers published Dec. 5.

“It doesn’t play like a human, and it doesn’t play like a program,” said Hassabis, an expert chess player himself. “It plays in a third, almost alien, way. It’s like chess from another dimension.”

AlphaZero also mastered both shogi (Japanese chess) and Go within 24 hours, defeating a world-champion program in all three cases. The original AlphaGo mastered Go by learning from thousands of example games and then practicing against another version of itself.

“AlphaZero was not ‘taught’ the game in the traditional sense,” explains Chess.com. “That means no opening book, no endgame tables, and apparently no complicated algorithms dissecting minute differences between center pawns and side pawns. This would be akin to a robot being given access to thousands of metal bits and parts, but no knowledge of a combustion engine, then it experiments numerous times with every combination possible until it builds a Ferrari. … The program had four hours to play itself many, many times, thereby becoming its own teacher.”
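To make the “becoming its own teacher” idea concrete, here is a minimal sketch (in Python) of tabula-rasa self-play on the toy game of 5-stone Nim. A lookup table stands in for AlphaZero’s deep network and Monte Carlo tree search, and every name and number below is invented for the illustration; this shows only the self-play pattern, not DeepMind’s algorithm.

```python
import random
from collections import defaultdict

# Toy sketch of tabula-rasa self-play learning (not DeepMind's code): a tabular value
# function stands in for AlphaZero's deep network and tree search, and the game is
# 5-stone Nim (take 1 or 2 stones per turn; whoever takes the last stone wins).
# The program starts from random play and learns only from games against itself.

value = defaultdict(float)        # value of a position for the player about to move
EPSILON, ALPHA = 0.2, 0.1         # exploration rate and learning rate

def legal_moves(stones):
    return [m for m in (1, 2) if m <= stones]

def choose_move(stones):
    if random.random() < EPSILON:
        return random.choice(legal_moves(stones))
    # greedy: leave the opponent in the position that is worst for them
    return min(legal_moves(stones), key=lambda m: value[stones - m])

def self_play_game():
    stones, visited = 5, []
    while stones > 0:
        visited.append(stones)
        stones -= choose_move(stones)
    outcome = 1.0                 # the player who made the last move has just won
    for state in reversed(visited):
        value[state] += ALPHA * (outcome - value[state])
        outcome = -outcome        # flip perspective for the previous player

for _ in range(5000):
    self_play_game()

print({s: round(value[s], 2) for s in range(1, 6)})
# positions 1, 2, 4 and 5 drift toward +1 (winnable); position 3 toward -1 (lost with best play)
```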

“What’s also remarkable, though, Hassabis explained, is that it sometimes makes seemingly crazy sacrifices, like offering up a bishop and queen to exploit a positional advantage that led to victory,” MIT Technology Review notes. “Such sacrifices of high-value pieces are normally rare. In another case the program moved its queen to the corner of the board, a very bizarre trick with a surprising positional value.”


Abstract of Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm

The game of chess is the most widely-studied domain in the history of artificial intelligence. The strongest programs are based on a combination of sophisticated search techniques, domain-specific adaptations, and handcrafted evaluation functions that have been refined by human experts over several decades. In contrast, the AlphaGo Zero program recently achieved superhuman performance in the game of Go, by tabula rasa reinforcement learning from games of self-play. In this paper, we generalise this approach into a single AlphaZero algorithm that can achieve, tabula rasa, superhuman performance in many challenging domains. Starting from random play, and given no domain knowledge except the game rules, AlphaZero achieved within 24 hours a superhuman level of play in the games of chess and shogi (Japanese chess) as well as Go, and convincingly defeated a world-champion program in each case.

New technology allows robots to visualize their own future


UC Berkeley | Vestri the robot imagines how to perform tasks

UC Berkeley researchers have developed a robotic learning technology that enables robots to imagine the future of their actions so they can figure out how to manipulate objects they have never encountered before. It could help self-driving cars anticipate future events on the road and produce more intelligent robotic assistants in homes.

The initial prototype focuses on learning simple manual skills entirely from autonomous play — similar to how children can learn about their world by playing with toys, moving them around, grasping, etc.

Using this technology, called visual foresight, the robots can predict what their cameras will see if they perform a particular sequence of movements. These robotic imaginations are still relatively simple for now — predictions made only several seconds into the future — but they are enough for the robot to figure out how to move objects around on a table without disturbing obstacles.

The robot can learn to perform these tasks without any help from humans or prior knowledge about physics, its environment, or what the objects are. That’s because the visual imagination is learned entirely from scratch from unattended and unsupervised (no humans involved) exploration, where the robot plays with objects on a table.

After this play phase, the robot builds a predictive model of the world, and can use this model to manipulate new objects that it has not seen before.

“In the same way that we can imagine how our actions will move the objects in our environment, this method can enable a robot to visualize how different behaviors will affect the world around it,” said Sergey Levine, assistant professor in Berkeley’s Department of Electrical Engineering and Computer Sciences, whose lab developed the technology. “This can enable intelligent planning of highly flexible skills in complex real-world situations.”

The research team demonstrated the visual foresight technology at the Neural Information Processing Systems conference in Long Beach, California, on Monday, December 4, 2017.

Learning by playing: how it works

Robot’s imagined predictions (credit: UC Berkeley)

At the core of this system is a deep learning technology based on convolutional recurrent video prediction, or dynamic neural advection (DNA). DNA-based models predict how pixels in an image will move from one frame to the next, based on the robot’s actions. Recent improvements to this class of models, as well as greatly improved planning capabilities, have enabled robotic control based on video prediction to perform increasingly complex tasks, such as sliding toys around obstacles and repositioning multiple objects.
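As a rough illustration of that pixel-motion idea, the sketch below uses plain NumPy to build a next frame by letting a small kernel describe where each pixel’s content comes from. The kernel here is hand-written rather than predicted, and nothing below is the Berkeley code; it is only meant to show what “predicting how pixels move” means.

```python
import numpy as np

# Toy illustration of the pixel-motion idea behind DNA-style models (hand-written kernel,
# not the Berkeley code): the next frame is assembled by letting a small kernel describe
# where each output pixel's content comes from. The real model predicts such kernels from
# the current image and the robot's commanded action.

def apply_motion_kernel(frame, kernel):
    """Each output pixel is a kernel-weighted sum of its neighbourhood in the old frame."""
    k = kernel.shape[0] // 2
    padded = np.pad(frame, k)
    out = np.zeros_like(frame)
    for i in range(frame.shape[0]):
        for j in range(frame.shape[1]):
            out[i, j] = np.sum(padded[i:i + 2 * k + 1, j:j + 2 * k + 1] * kernel)
    return out

frame = np.zeros((5, 5))
frame[2, 1] = 1.0                       # one bright "object" pixel

move_right = np.zeros((3, 3))
move_right[1, 0] = 1.0                  # each pixel takes its content from its left neighbour

print(np.argwhere(apply_motion_kernel(frame, move_right) > 0.5))   # [[2 2]]: moved one column right
```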

“In the past, robots have learned skills with a human supervisor helping and providing feedback. What makes this work exciting is that the robots can learn a range of visual object manipulation skills entirely on their own,” said Chelsea Finn, a doctoral student in Levine’s lab and inventor of the original DNA model.

With the new technology, a robot pushes objects on a table, then uses the learned prediction model to choose motions that will move an object to a desired location. Robots use the learned model from raw camera observations to teach themselves how to avoid obstacles and push objects around obstructions.
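A minimal sketch of that planning step, under stated assumptions: the learned video-prediction model is replaced by a stub that simply integrates the commanded pushes, and a random-shooting planner keeps whichever sampled action sequence is predicted to bring the object closest to the goal. All function names and numbers are invented for the illustration.

```python
import numpy as np

# Minimal sketch of "visual foresight" planning (illustrative assumptions throughout):
# predict_positions() is a stand-in for the learned video-prediction model and simply
# integrates the commanded pushes; the real model predicts whole camera frames.

rng = np.random.default_rng(0)

def predict_positions(start_xy, actions):
    """Stand-in predictor: roll the object's position forward under a sequence of pushes."""
    pos = np.array(start_xy, dtype=float)
    for a in actions:
        pos = pos + a
    return pos

def plan(start_xy, goal_xy, horizon=5, n_samples=256):
    """Random-shooting planner: sample candidate action sequences and keep the one whose
    predicted outcome lands closest to the goal (the cost such systems minimize)."""
    best_cost, best_seq = np.inf, None
    for _ in range(n_samples):
        actions = rng.uniform(-1.0, 1.0, size=(horizon, 2))    # candidate pushes (dx, dy)
        cost = np.linalg.norm(predict_positions(start_xy, actions) - goal_xy)
        if cost < best_cost:
            best_cost, best_seq = cost, actions
    return best_seq, best_cost

seq, cost = plan(start_xy=(0.0, 0.0), goal_xy=(3.0, -2.0))
print("first planned push:", np.round(seq[0], 2), "| predicted distance to goal:", round(cost, 3))
```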

Since control through video prediction relies only on observations that can be collected autonomously by the robot, such as through camera images, the resulting method is general and broadly applicable. Building video prediction models only requires unannotated video, which can be collected by the robot entirely autonomously.

That contrasts with conventional computer-vision methods, which require humans to manually label thousands or even millions of images.

Why (most) future robots won’t look like robots

A future robot’s body could combine soft actuators and stiff structure, with distributed computation throughout — an example of the new “material robotics.” (credit: Nikolaus Correll/University of Colorado)

Future robots won’t be limited to humanoid form (like Boston Dynamics’ formidable backflipping Atlas). They’ll be invisibly embedded everywhere in common objects.

Such as a shoe that can intelligently support your gait, change stiffness as you’re running or walking, and adapt to different surfaces — or even help you do backflips.

That’s the vision of researchers at Oregon State University, the University of Colorado, Yale University, and École Polytechnique Fédérale de Lausanne, who describe the burgeoning new field of “material robotics” in a perspective article published Nov. 29, 2017 in Science Robotics. (The article cites nine articles in this special issue, three of which you can access for free.)

Disappearing into the background of everyday life

The authors challenge a widespread basic assumption: that robots are either “machines that run bits of code” or “software ‘bots’ interacting with the world through a physical instrument.”

“We take a third path: one that imbues intelligence into the very matter of a robot,” says Oregon State University researcher Yiğit Mengüç, an assistant professor of mechanical engineering in OSU’s College of Engineering and part of the college’s Collaborative Robotics and Intelligent Systems Institute.

On that path, materials scientists are developing new bulk materials with the inherent multifunctionality required for robotic applications, while roboticists are working on new material systems with tightly integrated components, disappearing into the background of everyday life. “The spectrum of possible approaches spans from soft grippers with zero knowledge and zero feedback all the way to humanoids with full knowledge and full feedback,” the authors note in the paper.

For example, “In the future, your smartphone may be made from stretchable, foldable material so there’s no danger of it shattering,” says Mengüç. “Or it might have some actuation, where it changes shape in your hand to help with the display, or it can be able to communicate something about what you’re observing on the screen. All these bits and pieces of technology that we take for granted in life will be living, physically responsive things, moving, changing shape in response to our needs, not just flat, static screens.”

Soft robots get superpowers

Origami-inspired artificial muscles capable of lifting up to 1,000 times their own weight, simply by applying air or water pressure (credit: Shuguang Li/Wyss Institute at Harvard University)

As a good example of material-enabled robotics, researchers at the Wyss Institute at Harvard University and MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed origami-inspired, programmable, super-strong artificial muscles that will allow future soft robots to lift objects that are up to 1,000 times their own weight — using only air or water pressure.

The actuators are “programmed” by the structural design itself. The skeleton folds define how the whole structure moves — no control system required.

That allows the muscles to be very compact and simple, which makes them more appropriate for mobile or body-mounted systems that can’t accommodate large or heavy machinery, says Shuguang Li, Ph.D., a Postdoctoral Fellow at the Wyss Institute and MIT CSAIL and first author of an open-access article on the research published Nov. 21, 2017 in Proceedings of the National Academy of Sciences (PNAS).

Each artificial muscle consists of an inner “skeleton” that can be made of various materials, such as a metal coil or a sheet of plastic folded into a certain pattern, surrounded by air or fluid and sealed inside a plastic or textile bag that serves as the “skin.” The structural geometry of the skeleton itself determines the muscle’s motion. A vacuum applied to the inside of the bag initiates the muscle’s movement by causing the skin to collapse onto the skeleton, creating tension that drives the motion. Incredibly, no other power source or human input is required to direct the muscle’s movement — it’s automagically determined entirely by the shape and composition of the skeleton. (credit: Shuguang Li/Wyss Institute at Harvard University)

Resilient, multipurpose, scalable

Not only can the artificial muscles move in many ways, they do so with impressive resilience. They can generate about six times more force per unit area than mammalian skeletal muscle can, and are also incredibly lightweight. A 2.6-gram muscle can lift a 3-kilogram object, which is the equivalent of a mallard duck lifting a car. Additionally, a single muscle can be constructed within ten minutes using materials that cost less than $1, making them cheap and easy to test and iterate.
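A quick sanity check on those figures (illustrative arithmetic only):

```python
# Quick sanity check on the quoted figures (illustrative arithmetic only).
muscle_mass_kg = 0.0026                     # a 2.6-gram muscle ...
payload_kg = 3.0                            # ... lifting a 3-kilogram object
print(round(payload_kg / muscle_mass_kg))   # ~1154, i.e. on the order of 1,000x its own weight
```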

These muscles can be powered by a vacuum, which makes them safer than most of the other artificial muscles currently being tested. The muscles have been built in sizes ranging from a few millimeters up to a meter. So the muscles can be used in numerous applications at multiple scales, from miniature surgical devices to wearable robotic exoskeletons, transformable architecture, and deep-sea manipulators for research or construction, up to large deployable structures for space exploration.

The team could also construct the muscles out of the water-soluble polymer PVA. That opens the possibility of bio-friendly robots that can perform tasks in natural settings with minimal environmental impact, or ingestible robots that move to the proper place in the body and then dissolve to release a drug.

The team constructed dozens of muscles using materials ranging from metal springs to packing foam to sheets of plastic, and experimented with different skeleton shapes to create muscles that can contract down to 10% of their original size, lift a delicate flower off the ground, and twist into a coil, all simply by sucking the air out of them.

This research was funded by the Defense Advanced Research Projects Agency (DARPA), the National Science Foundation (NSF), and the Wyss Institute for Biologically Inspired Engineering.


Wyss Institute | Origami-Inspired Artificial Muscles


Abstract of Fluid-driven origami-inspired artificial muscles

Artificial muscles hold promise for safe and powerful actuation for myriad common machines and robots. However, the design, fabrication, and implementation of artificial muscles are often limited by their material costs, operating principle, scalability, and single-degree-of-freedom contractile actuation motions. Here we propose an architecture for fluid-driven origami-inspired artificial muscles. This concept requires only a compressible skeleton, a flexible skin, and a fluid medium. A mechanical model is developed to explain the interaction of the three components. A fabrication method is introduced to rapidly manufacture low-cost artificial muscles using various materials and at multiple scales. The artificial muscles can be programed to achieve multiaxial motions including contraction, bending, and torsion. These motions can be aggregated into systems with multiple degrees of freedom, which are able to produce controllable motions at different rates. Our artificial muscles can be driven by fluids at negative pressures (relative to ambient). This feature makes actuation safer than most other fluidic artificial muscles that operate with positive pressures. Experiments reveal that these muscles can contract over 90% of their initial lengths, generate stresses of ∼600 kPa, and produce peak power densities over 2 kW/kg—all equal to, or in excess of, natural muscle. This architecture for artificial muscles opens the door to rapid design and low-cost fabrication of actuation systems for numerous applications at multiple scales, ranging from miniature medical devices to wearable robotic exoskeletons to large deployable structures for space exploration.

Disturbing video depicts near-future ubiquitous lethal autonomous weapons


Campaign to Stop Killer Robots | Slaughterbots

In response to growing concerns about autonomous weapons, the Campaign to Stop Killer Robots, a coalition of AI researchers and advocacy organizations, has released a fictional video that depicts a disturbing future in which lethal autonomous weapons have become cheap and ubiquitous worldwide.

UC Berkeley AI researcher Stuart Russell presented the video at the United Nations Convention on Certain Conventional Weapons meeting in Geneva earlier this week, at an event hosted by the Campaign to Stop Killer Robots. Russell, in an appearance at the end of the video, warns that the technology described in the film already exists* and that the window to act is closing fast.

Support for a ban on autonomous weapons has been mounting. On Nov. 2, more than 200 Canadian scientists and more than 100 Australian scientists in academia and industry penned open letters to Canadian Prime Minister Justin Trudeau and Australian Prime Minister Malcolm Turnbull, respectively, urging them to support the ban.

Earlier this summer, more than 130 leaders of AI companies signed a letter in support of this week’s discussions. These letters follow a 2015 open letter released by the Future of Life Institute and signed by more than 20,000 AI/robotics researchers and others, including Elon Musk and Stephen Hawking.

“Many of the world’s leading AI researchers worry that if these autonomous weapons are ever developed, they could dramatically lower the threshold for armed conflict, ease and cheapen the taking of human life, empower terrorists, and create global instability,” according to an article published by the Future of Life Institute, which funded the video. “The U.S. and other nations have used drones and semi-automated systems to carry out attacks for several years now, but fully removing a human from the loop is at odds with international humanitarian and human rights law.”

“The Campaign to Stop Killer Robots is not trying to stifle innovation in artificial intelligence and robotics and it does not wish to ban autonomous systems in the civilian or military world,” explained Noel Sharkey of the International Committee for Robot Arms Control. “Rather we see an urgent need to prevent automation of the critical functions for selecting targets and applying violent force without human deliberation and to ensure meaningful human control for every attack.”

For more information about autonomous weapons:

* As suggested in this U.S. Department of Defense video:


Perdix Drone Swarm – Fighters Release Hive-mind-controlled Weapon UAVs in Air | U.S. Naval Air Systems Command

A tool to debug ‘black box’ deep-learning neural networks

Oops! A new debugging tool called DeepXplore generates real-world test images meant to expose logic errors in deep neural networks. The darkened photo at right tricked one set of neurons into telling the car to turn into the guardrail. After catching the mistake, the tool retrains the network to fix the bug. (credit: Columbia Engineering)

Researchers at Columbia and Lehigh universities have developed a method for error-checking the reasoning of the thousands or millions of neurons in unsupervised (self-taught) deep-learning neural networks, such as those used in self-driving cars.

Their tool, DeepXplore, feeds confusing, real-world inputs into the network to expose rare instances of flawed reasoning, such as the incident last year when a Tesla operating on Autopilot collided with a truck it failed to distinguish from the bright sky, killing the driver. Deep-learning systems don’t explain how they make their decisions, which makes them hard to trust.

Modeled after the human brain, deep learning uses layers of artificial neurons that process and consolidate information. This results in a set of rules to solve complex problems, from recognizing friends’ faces online to translating email written in Chinese. The technology has achieved impressive feats of intelligence, but as more tasks become automated this way, concerns about safety, security, and ethics are growing.

Finding bugs by generating test images

Debugging the neural networks in self-driving cars is an especially slow and tedious process, with no way to measure how thoroughly logic within the network has been checked for errors. Current limited approaches include randomly feeding manually generated test images into the network until one triggers a wrong decision (telling the car to veer into the guardrail, for example); and “adversarial testing,” which automatically generates test images that it alters incrementally until one image tricks the system.

The new DeepXplore solution — presented Oct. 29, 2017 in an open-access paper at ACM’s Symposium on Operating Systems Principles in Shanghai — can find a wider variety of bugs than random or adversarial testing by using the network itself to generate test images likely to cause neuron clusters to make conflicting decisions, according to the researchers.

To simulate real-world conditions, photos are lightened and darkened, and made to mimic the effect of dust on a camera lens, or a person or object blocking the camera’s view. A photo of the road may be darkened just enough, for example, to cause one set of neurons to tell the car to turn left, and two other sets of neurons to tell it to go right.

After inferring that the first set misclassified the photo, DeepXplore automatically retrains the network to recognize the darker image and fix the bug. Using optimization techniques, researchers have designed DeepXplore to trigger as many conflicting decisions with its test images as it can while maximizing the number of neurons activated.
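In toy form, the differential-testing idea looks like this: two stand-in “driving” models with near-identical behavior are fed progressively darker copies of an image until their decisions diverge. The real DeepXplore is gradient-based and also maximizes neuron coverage; the models, weights, and thresholds below are invented for the sketch.

```python
import numpy as np

# Toy sketch of differential testing (not the authors' code): two stand-in "driving" models
# share weights but have slightly different biases, mimicking independently trained networks
# with similar behaviour. We darken the input step by step until their steering decisions
# diverge; DeepXplore instead follows gradients that jointly maximize such disagreement and
# neuron coverage.

image = np.full(16, 0.8)                    # a fake, uniformly bright "road" image
weights = np.zeros((16, 2))
weights[:, 0] = 1.0                         # shared weights: bright pixels vote "steer left"
bias_a = np.array([0.1, 0.0])               # model A leans left when the evidence is weak
bias_b = np.array([0.0, 0.1])               # model B leans right when the evidence is weak

def decide(img, bias):
    return int(np.argmax(img @ weights + bias))     # 0 = steer left, 1 = steer right

for shift in np.linspace(0.0, 1.0, 101):    # progressively darker test images
    test = np.clip(image - shift, 0.0, 1.0)
    a, b = decide(test, bias_a), decide(test, bias_b)
    if a != b:
        print(f"disagreement at darkening {shift:.2f}: A steers {a}, B steers {b}")
        break
```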

“You can think of our testing process as reverse-engineering the learning process to understand its logic,” said co-developer Suman Jana, a computer scientist at Columbia Engineering and a member of the Data Science Institute. “This gives you some visibility into what the system is doing and where it’s going wrong.”

Testing their software on 15 state-of-the-art neural networks, including Nvidia’s Dave 2 network for self-driving cars, the researchers uncovered thousands of bugs missed by previous techniques. They report activating up to 100 percent of network neurons — 30 percent more on average than either random or adversarial testing — and bringing overall accuracy up to 99 percent in some networks, a 3 percent improvement on average.*

The ultimate goal: certifying a neural network is bug-free

Still, a high level of assurance is needed before regulators and the public are ready to embrace robot cars and other safety-critical technology like autonomous air-traffic control systems. One limitation of DeepXplore is that it can’t certify that a neural network is bug-free. That requires isolating and testing the exact rules the network has learned.

A new tool developed at Stanford University, called ReluPlex, uses the power of mathematical proofs to do this for small networks. Costly in computing time, but offering strong guarantees, this small-scale verification technique complements DeepXplore’s full-scale testing approach, said ReluPlex co-developer Clark Barrett, a computer scientist at Stanford.

“Testing techniques use efficient and clever heuristics to find problems in a system, and it seems that the techniques in this paper are particularly good,” he said. “However, a testing technique can never guarantee that all the bugs have been found, or similarly, if it can’t find any bugs, that there are, in fact, no bugs.”

DeepXplore has applications beyond self-driving cars. It can find malware disguised as benign code in anti-virus software, and uncover discriminatory assumptions baked into predictive policing and criminal sentencing software, for example.

The team has made their open-source software public for other researchers to use, and launched a website to let people upload their own data to see how the testing process works.

* The team evaluated DeepXplore on real-world datasets (Udacity self-driving car challenge data, image data from ImageNet and MNIST, Android malware data from Drebin, and PDF malware data from Contagio/VirusTotal) and on production-quality deep neural networks trained on these datasets, such as those ranked at the top of the Udacity self-driving car challenge. Their results show that DeepXplore found thousands of incorrect corner-case behaviors (e.g., self-driving cars crashing into guard rails) in 15 state-of-the-art deep-learning models with a total of 132,057 neurons, trained on five popular datasets containing around 162 GB of data.


Abstract of DeepXplore: Automated Whitebox Testing of Deep Learning Systems

Deep learning (DL) systems are increasingly deployed in safety- and security-critical domains including self-driving cars and malware detection, where the correctness and predictability of a system’s behavior for corner case inputs are of great importance. Existing DL testing depends heavily on manually labeled data and therefore often fails to expose erroneous behaviors for rare inputs.

We design, implement, and evaluate DeepXplore, the first whitebox framework for systematically testing real-world DL systems. First, we introduce neuron coverage for systematically measuring the parts of a DL system exercised by test inputs. Next, we leverage multiple DL systems with similar functionality as cross-referencing oracles to avoid manual checking. Finally, we demonstrate how finding inputs for DL systems that both trigger many differential behaviors and achieve high neuron coverage can be represented as a joint optimization problem and solved efficiently using gradient-based search techniques.

DeepXplore efficiently finds thousands of incorrect corner case behaviors (e.g., self-driving cars crashing into guard rails and malware masquerading as benign software) in state-of-the-art DL models with thousands of neurons trained on five popular datasets including ImageNet and Udacity self-driving challenge data. For all tested DL models, on average, DeepXplore generated one test input demonstrating incorrect behavior within one second while running only on a commodity laptop. We further show that the test inputs generated by DeepXplore can also be used to retrain the corresponding DL model to improve the model’s accuracy by up to 3%.

Researchers watch video images people are seeing, decoded from their fMRI brain scans in near-real-time

Purdue Engineering researchers have developed a system that can show what people are seeing in real-world videos, decoded from their fMRI brain scans — an advanced new form of  “mind-reading” technology that could lead to new insights in brain function and to advanced AI systems.

The research builds on previous pioneering research at UC Berkeley’s Gallant Lab, which created a computer program in 2011 that translated fMRI brain-activity patterns into images that loosely mirrored a series of images being viewed.

The new system decodes moving images that subjects see in videos, and does so in near-real-time. The researchers were also able to determine the subjects’ interpretations of the images they saw (for example, interpreting an image as a person or a thing) and could even reconstruct a version of the original images that the subjects saw.

Deep-learning AI system for watching what the brain sees

Watching in near-real-time what the brain sees. Visual information generated by a video (a) is processed in a cascade from the retina through the thalamus (LGN area) to several levels of the visual cortex (b), detected from fMRI activity patterns (c) and recorded. A powerful deep-learning technique (d) then models this detected cortical visual processing. Called a convolutional neural network (CNN), this model transforms every video frame into multiple layers of features, ranging from orientations and colors (the first visual layer) to high-level object categories (face, bird, etc.) in semantic (meaning) space (the eighth layer). The trained CNN model can then be used to reverse this process, reconstructing the original videos — even creating new videos that the CNN model had never watched. (credit: Haiguang Wen et al./Cerebral Cortex)

The researchers acquired 11.5 hours of fMRI data from each of three female subjects watching 972 video clips, including clips showing people or animals in action and nature scenes.

To decode the fMRI images, the researchers pioneered the use of a deep-learning technique called a convolutional neural network (CNN). The trained CNN model was able to accurately decode the fMRI blood-flow data to identify specific image categories (such as the face, bird, ship, and scene examples in the above figure). The researchers could compare (in near-real-time) these viewed video images side-by-side with the computer’s visual interpretation of what the person’s brain saw.
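In schematic form, the decoding step amounts to learning a map from voxel responses back to CNN feature space. The sketch below uses synthetic data and a plain ridge regression standing in for the paper’s trained decoding models; it is an assumption-laden simplification, not the published pipeline.

```python
import numpy as np

# Schematic decoder (synthetic data; a plain ridge regression standing in for the study's
# trained decoding models): learn a linear map from fMRI voxel responses back to CNN
# feature vectors, then check how well the decoded features match the true ones.

rng = np.random.default_rng(0)
n_samples, n_voxels, n_features = 200, 50, 8

true_map = rng.normal(size=(n_voxels, n_features))      # pretend voxel-to-feature relationship
features = rng.normal(size=(n_samples, n_features))     # CNN features of the viewed frames
voxels = features @ true_map.T + 0.5 * rng.normal(size=(n_samples, n_voxels))  # noisy "fMRI"

lam = 10.0                                              # ridge penalty
W = np.linalg.solve(voxels.T @ voxels + lam * np.eye(n_voxels), voxels.T @ features)

decoded = voxels @ W                                    # decoded feature vectors
corr = np.corrcoef(decoded.ravel(), features.ravel())[0, 1]
print(f"correlation between decoded and true features: {corr:.2f}")
```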

Reconstruction of a dynamic visual experience in the experiment. The top row shows the example movie frames seen by one subject; the bottom row shows the reconstruction of those frames based on the subject’s cortical fMRI responses to the movie. (credit: Haiguang Wen et al./ Cerebral Cortex)

The researchers were also able to figure out how certain locations in the visual cortex were associated with specific information a person was seeing.

Decoding how the visual cortex works

CNNs have been used to recognize faces and objects, and to study how the brain processes static images and other visual stimuli. But the new findings represent the first time CNNs have been used to see how the brain processes videos of natural scenes. This is “a step toward decoding the brain while people are trying to make sense of complex and dynamic visual surroundings,” said doctoral student Haiguang Wen.

Wen was first author of a paper describing the research, appearing online Oct. 20 in the journal Cerebral Cortex.

“Neuroscience is trying to map which parts of the brain are responsible for specific functionality,” Wen explained. “This is a landmark goal of neuroscience. I think what we report in this paper moves us closer to achieving that goal. Using our technique, you may visualize the specific information represented by any brain location, and screen through all the locations in the brain’s visual cortex. By doing that, you can see how the brain divides a visual scene into pieces, and re-assembles the pieces into a full understanding of the visual scene.”

The researchers also were able to use models trained with data from one human subject to predict and decode the brain activity of a different human subject, a process called “cross-subject encoding and decoding.” This finding is important because it demonstrates the potential for broad applications of such models to study brain function, including people with visual deficits.

The research has been funded by the National Institute of Mental Health. The work is affiliated with the Purdue Institute for Integrative Neuroscience. Data reported in this paper are also publicly available at the Laboratory of Integrated Brain Imaging website.

UPDATE Oct. 28, 2017 — Additional figure added, comparing the original images and those reconstructed from the subject’s cortical fMRI responses to the movie; subhead revised to clarify the CNN function. Two references also added.


Abstract of Neural Encoding and Decoding with Deep Learning for Dynamic Natural Vision

Convolutional neural network (CNN) driven by image recognition has been shown to be able to explain cortical responses to static pictures at ventral-stream areas. Here, we further showed that such CNN could reliably predict and decode functional magnetic resonance imaging data from humans watching natural movies, despite its lack of any mechanism to account for temporal dynamics or feedback processing. Using separate data, encoding and decoding models were developed and evaluated for describing the bi-directional relationships between the CNN and the brain. Through the encoding models, the CNN-predicted areas covered not only the ventral stream, but also the dorsal stream, albeit to a lesser degree; single-voxel response was visualized as the specific pixel pattern that drove the response, revealing the distinct representation of individual cortical location; cortical activation was synthesized from natural images with high-throughput to map category representation, contrast, and selectivity. Through the decoding models, fMRI signals were directly decoded to estimate the feature representations in both visual and semantic spaces, for direct visual reconstruction and semantic categorization, respectively. These results corroborate, generalize, and extend previous findings, and highlight the value of using deep learning, as an all-in-one model of the visual cortex, to understand and decode natural vision.

IBM scientists say radical new ‘in-memory’ computing architecture will speed up computers by 200 times

(Left) Schematic of conventional von Neumann computer architecture, where the memory and computing units are physically separated. To perform a computational operation and to store the result in the same memory location, data is shuttled back and forth between the memory and the processing unit. (Right) An alternative architecture where the computational operation is performed in the same memory location. (credit: IBM Research)

IBM Research announced Tuesday (Oct. 24, 2017) that its scientists have developed the first “in-memory computing” or “computational memory” computer system architecture, which is expected to yield 200x improvements in computer speed and energy efficiency — enabling ultra-dense, low-power, massively parallel computing systems.

Their concept is to use one device (such as phase change memory or PCM*) for both storing and processing information. That design would replace the conventional “von Neumann” computer architecture, used in standard desktop computers, laptops, and cellphones, which splits computation and memory into two different devices. That requires moving data back and forth between memory and the computing unit, making them slower and less energy-efficient.

The researchers used PCM devices made from a germanium antimony telluride alloy, which is stacked and sandwiched between two electrodes. When the scientists apply a tiny electric current to the material, they heat it, which alters its state from amorphous (with a disordered atomic arrangement) to crystalline (with an ordered atomic configuration). The IBM researchers have used the crystallization dynamics to perform computation in memory. (credit: IBM Research)

Especially useful in AI applications

The researchers believe this new prototype technology will enable ultra-dense, low-power, and massively parallel computing systems that are especially useful for AI applications. The researchers tested the new architecture using an unsupervised machine-learning algorithm running on one million phase change memory (PCM) devices, successfully finding temporal correlations in unknown data streams.
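The flavor of that correlation-finding computation can be sketched in ordinary software (a loose paraphrase, not IBM’s algorithm or a model of the PCM hardware): each binary process gets a “device” whose conductance is nudged upward whenever the process fires during a burst of unusually high collective activity, so weakly correlated processes accumulate conductance faster than independent ones.

```python
import numpy as np

# Software sketch of the correlation-detection idea (a loose paraphrase, not IBM's
# algorithm or a model of the PCM hardware). Each binary process is assigned a "device"
# whose conductance is nudged upward whenever that process fires during a burst of
# unusually high collective activity; weakly correlated processes therefore accumulate
# conductance faster than independent ones with a similar firing rate.

rng = np.random.default_rng(0)
n_corr, n_uncorr, steps = 20, 80, 4000

common = rng.random(steps) < 0.1                          # hidden shared driver
correlated = (rng.random((n_corr, steps)) < 0.05) | common
uncorrelated = rng.random((n_uncorr, steps)) < 0.14       # similar rate, but independent
processes = np.vstack([correlated, uncorrelated]).astype(float)

conductance = np.zeros(processes.shape[0])
mean_activity = 0.0
for t in range(steps):
    activity = processes[:, t].sum()
    if t > 0 and activity > mean_activity:                # collective burst: potentiate firing devices
        conductance += processes[:, t]
    mean_activity += (activity - mean_activity) / (t + 1) # running average of total activity

print("mean conductance, correlated processes:  ", round(conductance[:n_corr].mean(), 1))
print("mean conductance, uncorrelated processes:", round(conductance[n_corr:].mean(), 1))
```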

“This is an important step forward in our research of the physics of AI, which explores new hardware materials, devices and architectures,” says Evangelos Eleftheriou, PhD, an IBM Fellow and co-author of an open-access paper in the peer-reviewed journal Nature Communications. “As the CMOS scaling laws break down because of technological limits, a radical departure from the processor-memory dichotomy is needed to circumvent the limitations of today’s computers.”

“Memory has so far been viewed as a place where we merely store information,” said Abu Sebastian, PhD, exploratory memory and cognitive technologies scientist at IBM Research and lead author of the paper. “But in this work, we conclusively show how we can exploit the physics of these memory devices to also perform a rather high-level computational primitive. The result of the computation is also stored in the memory devices, and in this sense the concept is loosely inspired by how the brain computes.” Sebastian also leads a European Research Council funded project on this topic.

* To demonstrate the technology, the authors chose two time-based examples and compared their results with traditional machine-learning methods such as k-means clustering:

  • Simulated Data: one million binary (0 or 1) random processes organized on a 2D grid based on a 1000 x 1000-pixel, black-and-white profile drawing of famed British mathematician Alan Turing. The IBM scientists then made the pixels blink on and off at the same rate, but the black pixels turned on and off in a weakly correlated manner. This means that when a black pixel blinks, there is a slightly higher probability that another black pixel will also blink. The random processes were assigned to a million PCM devices, and a simple learning algorithm was implemented. With each blink, the PCM array learned, and the PCM devices corresponding to the correlated processes went to a high conductance state. In this way, the conductance map of the PCM devices recreates the drawing of Alan Turing.
  • Real-World Data: actual rainfall data, collected over a period of six months from 270 weather stations across the USA in one-hour intervals. If it rained within the hour, the station was labelled “1”; if it didn’t, “0”. Classical k-means clustering and the in-memory computing approach agreed on the classification of 245 out of the 270 weather stations. In-memory computing classified 12 stations as uncorrelated that had been marked correlated by the k-means clustering approach. Similarly, the in-memory computing approach classified 13 stations as correlated that had been marked uncorrelated by k-means clustering.


Abstract of Temporal correlation detection using computational phase-change memory

Conventional computers based on the von Neumann architecture perform computation by repeatedly transferring data between their physically separated processing and memory units. As computation becomes increasingly data centric and the scalability limits in terms of performance and power are being reached, alternative computing paradigms with collocated computation and storage are actively being sought. A fascinating such approach is that of computational memory where the physics of nanoscale memory devices are used to perform certain computational tasks within the memory unit in a non-von Neumann manner. We present an experimental demonstration using one million phase change memory devices organized to perform a high-level computational primitive by exploiting the crystallization dynamics. Its result is imprinted in the conductance states of the memory devices. The results of using such a computational memory for processing real-world data sets show that this co-existence of computation and storage at the nanometer scale could enable ultra-dense, low-power, and massively-parallel computing systems.

AlphaGo Zero trains itself to be most powerful Go player in the world

(credit: DeepMind)

DeepMind has just announced AlphaGo Zero, an evolution of AlphaGo, the first computer program to defeat a world champion at the ancient Chinese game of Go. Zero is even more powerful and is now arguably the strongest Go player in history, according to the company.

While previous versions of AlphaGo initially trained on thousands of human amateur and professional games to learn how to play Go, AlphaGo Zero skips this step. It learns to play from scratch, simply by playing games against itself, starting from completely random play.

(credit: DeepMind)

It surpassed the level of AlphaGo Lee within three days, defeating that previously published, champion-defeating version of AlphaGo by 100 games to 0, and within 40 days it went on to surpass all previous versions of AlphaGo.

The achievement is described in the journal Nature today (Oct. 18, 2017).


DeepMind | AlphaGo Zero: Starting from scratch


Abstract of Mastering the game of Go without human knowledge

A long-standing goal of artificial intelligence is an algorithm that learns, tabula rasa, superhuman proficiency in challenging domains. Recently, AlphaGo became the first program to defeat a world champion in the game of Go. The tree search in AlphaGo evaluated positions and selected moves using deep neural networks. These neural networks were trained by supervised learning from human expert moves, and by reinforcement learning from self-play. Here we introduce an algorithm based solely on reinforcement learning, without human data, guidance or domain knowledge beyond game rules. AlphaGo becomes its own teacher: a neural network is trained to predict AlphaGo’s own move selections and also the winner of AlphaGo’s games. This neural network improves the strength of the tree search, resulting in higher quality move selection and stronger self-play in the next iteration. Starting tabula rasa, our new program AlphaGo Zero achieved superhuman performance, winning 100–0 against the previously published, champion-defeating AlphaGo.

Using ‘cooperative perception’ between intelligent vehicles to reduce risks

Networked intelligent vehicles (credit: EPFL)

Researchers at École polytechnique fédérale de Lausanne (EPFL) have combined data from two autonomous cars to create a wider field of view, extended situational awareness, and greater safety.

Autonomous vehicles get their intelligence from cameras, radar, light detection and ranging (LIDAR) sensors, and navigation and mapping systems. But there are ways to make them even smarter. Researchers at EPFL are working to improve the reliability and fault tolerance of these systems by sharing data between vehicles. For example, this can extend the field of view of a car that is behind another car.

Using simulators and road tests, the team has developed a flexible software framework for networking intelligent vehicles so that they can interact.

Cooperative perception

“Today, intelligent vehicle development is focused on two main issues: the level of autonomy and the level of cooperation,” says Alcherio Martinoli, who heads EPFL’s Distributed Intelligent Systems and Algorithms Laboratory (DISAL). As part of his PhD thesis, Milos Vasic has developed cooperative perception algorithms, which extend an intelligent vehicle’s situational awareness by fusing data from onboard sensors with data provided by cooperative vehicles nearby.

Milos Vasic, PhD, and Alcherio Martinoli made two regular cars intelligent using off-the-shelf equipment. (credit: Alain Herzog/EPFL)

The researchers used cooperative perception algorithms as the basis for the software framework. Cooperative perception means that an intelligent vehicle can combine its own data with that of another vehicle to help make driving decisions.

They developed an assistance system that, for example, assesses the risk of overtaking another vehicle. The risk assessment factors in the probability of an oncoming car in the opposite lane, as well as kinematic conditions such as driving speeds, the distance required to overtake, and the distance to the oncoming car.
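A back-of-the-envelope version of that kinematic check might compare the time needed to complete the pass with the time until the oncoming car arrives. This is illustrative only; the EPFL system additionally fuses the other vehicle’s perception and reasons about the probability that an oncoming car is present at all, and every parameter below is invented.

```python
# Back-of-the-envelope overtaking check (illustrative only; all parameters invented).
# Compare the time needed to pull safely ahead of the lead vehicle with the time until
# the oncoming car arrives, and require a safety factor between the two.

def overtake_is_risky(own_speed, lead_speed, oncoming_speed,
                      gap_to_lead, overtake_margin, dist_to_oncoming,
                      safety_factor=1.5):
    """Speeds in m/s, distances in m. Returns True if passing looks unsafe."""
    closing_speed = own_speed - lead_speed
    if closing_speed <= 0:
        return True                                   # cannot pass at all
    t_overtake = (gap_to_lead + overtake_margin) / closing_speed
    t_oncoming = dist_to_oncoming / (own_speed + oncoming_speed)
    return safety_factor * t_overtake > t_oncoming

# e.g. doing 25 m/s behind a truck at 20 m/s, with an oncoming car 400 m away at 25 m/s:
print(overtake_is_risky(own_speed=25, lead_speed=20, oncoming_speed=25,
                        gap_to_lead=20, overtake_margin=15, dist_to_oncoming=400))   # True
```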

Difficulties in fusing data

The team retrofitted two Citroen C-Zero electric cars with a Mobileye camera, an accurate localization system, a router to enable Wi-Fi communication, a computer to run the software and an external battery to power everything. “These were not autonomous vehicles,” says Martinoli, “but we made them intelligent using off-the-shelf equipment.”

One of the difficulties in fusing data from the two vehicles involved relative localization. The cars needed to know precisely where they were in relation to each other, as well as to objects in the vicinity.

For example, if a single pedestrian does not appear to both cars to be in the same exact spot, there is a risk that, together, they will see two figures instead of one. By using other signals, particularly those provided by the LIDAR sensors and cameras, the researchers were able to correct flaws in the navigation system and adjust their algorithms accordingly. This exercise was even more challenging because the data had to be processed in real time while the vehicles were in motion.
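A tiny sketch of that association problem (illustrative, not the EPFL algorithm): detections from the two cars, expressed in a shared reference frame, are merged whenever they fall within a gating distance, so the same pedestrian is counted once.

```python
import numpy as np

# Tiny sketch of the association step (illustrative, not the EPFL algorithm): pedestrian
# detections from the two cars, expressed in a shared reference frame, are merged when
# they fall within a gating distance, so the same person is not counted twice.

def fuse_detections(dets_a, dets_b, gate=1.0):
    fused = [np.asarray(d, dtype=float) for d in dets_a]
    for det in dets_b:
        det = np.asarray(det, dtype=float)
        dists = [np.linalg.norm(det - f) for f in fused]
        if fused and min(dists) < gate:
            i = int(np.argmin(dists))
            fused[i] = (fused[i] + det) / 2.0        # average the two views of the same person
        else:
            fused.append(det)                        # genuinely new object
    return fused

# The same pedestrian seen at slightly different spots by each car, plus one object
# only the second car can see:
print(fuse_detections([(4.0, 1.2)], [(4.3, 1.0), (10.0, -2.0)]))
```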

Although the tests involved only two vehicles, the longer-term goal is to create a network between multiple vehicles, as well as with the roadway infrastructure.

In addition to driving safety and comfort, cooperative networks of this sort could eventually be used to optimize a vehicle’s trajectory, save energy, and improve traffic flows.

Of course, determining liability in case of an accident becomes more complicated when vehicles cooperate. “The answers to these issues will play a key role in determining whether autonomous vehicles are accepted,” says Martinoli.


École polytechnique fédérale de Lausanne (EPFL) | Networked intelligent vehicles

Ray Kurzweil on The Age of Spiritual Machines: A 1999 TV interview

Dear readers,

For your interest, this 1999 interview with me, which I recently re-watched, describes some interesting predictions that are still coming true today. It’s intriguing to look back at the last 18 years to see what actually unfolded. This video is a compelling glimpse into the future, as we’re living it today.

Enjoy!

— Ray


Dear readers,

This interview by Harold Hudson Channer was recorded on Jan. 14, 1999 and aired February 1, 1999 on a Manhattan Neighborhood Network cable-access show, Conversations with Harold Hudson Channer.

In the discussion, Ray explains many of the ahead-of-their-time ideas presented in The Age of Spiritual Machines*, such as the “law of accelerating returns” (how technological change is exponential, contrary to the common-sense “intuitive linear” view); the forthcoming revolutionary impacts of AI; nanotech brain and body implants for increased intelligence, improved health, and life extension; and technological impacts on economic growth.

I was personally inspired by the book in 1999 and by Ray’s prophetic, uplifting vision of the future. I hope you also enjoy this blast from the past.

— Amara D. Angelica, Editor

* First published in hardcover January 1, 1999 by Viking. The series also includes The Age of Intelligent Machines (The MIT Press, 1992) and The Singularity Is Near (Penguin Books, 2006).