Berkeley Lab announces first transistor with a working 1-nanometer gate

Schematic of a transistor with molybdenum disulfide semiconductor and 1-nanometer carbon nanotube gate. (credit: Sujay Desai/Berkeley Lab)

The first transistor with a working 1-nanometer (nm) gate* has been created by a team led by Lawrence Berkeley National Laboratory (Berkeley Lab) scientists. Until now, a transistor gate size less than 5 nanometers has been considered impossible because of quantum tunneling effects. (One nanometer is the diameter of a glucose molecule.)

The breakthrough was achieved by creating a 2D (flat) semiconductor field-effect transistor using molybdenum disulfide (MoS2) instead of silicon and a 1D single-walled carbon nanotube (SWCNT) as a gate electrode, instead of various metals. (SWCNTs are hollow cylindrical tubes with diameters as small as 1 nanometer.)

The MoS2 advantage

Compared with MoS2, electrons flowing through silicon are lighter and encounter less resistance. But with a gate length below 5 nanometers, a quantum mechanical phenomenon called tunneling kicks in, and the gate barrier is no longer able to keep electrons from barging through from the source to the drain terminal, so the transistor cannot be turned off.

Electrons flowing through MoS2 are heavier, so their flow can be controlled with smaller gate lengths. MoS2 can also be scaled down to atomically thin sheets, about 0.65 nanometers thick, with a larger band gap and a lower dielectric constant, a measure of a material’s ability to store energy in an electric field (similar to a capacitor). These properties help improve the control of the flow of current inside the transistor when the gate length is reduced to 1 nanometer.
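
To make the electrostatics argument concrete, device engineers often use a textbook scale-length estimate: gate control degrades once the gate length approaches a characteristic length lambda = sqrt((eps_channel / eps_oxide) * t_channel * t_oxide), so a thinner body and a lower dielectric constant shrink lambda and permit shorter gates. Here is a minimal sketch of that rule of thumb (a standard approximation, not the paper’s model; the material numbers are illustrative assumptions):

```python
import math

def scale_length(eps_channel, eps_oxide, t_channel_nm, t_oxide_nm):
    """Rule-of-thumb FET scale length (nm): lambda = sqrt((eps_ch/eps_ox) * t_ch * t_ox).
    Gates much shorter than a few lambda lose electrostatic control of the channel."""
    return math.sqrt((eps_channel / eps_oxide) * t_channel_nm * t_oxide_nm)

# Illustrative numbers only (assumed, not taken from the paper):
# monolayer MoS2 ~0.65 nm thick, eps ~ 4, with a thin ZrO2 dielectric (eps ~ 25).
lam_mos2 = scale_length(eps_channel=4.0, eps_oxide=25.0, t_channel_nm=0.65, t_oxide_nm=1.0)

# A thicker silicon body with SiO2 (eps ~ 3.9) for comparison.
lam_si = scale_length(eps_channel=11.7, eps_oxide=3.9, t_channel_nm=5.0, t_oxide_nm=1.0)

print(f"MoS2 scale length ~{lam_mos2:.2f} nm vs. Si ~{lam_si:.2f} nm")
```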

Transistors consist of three terminals: a source (left), a drain (right), and a gate (the carbon nanotube, black, below). Current flows through the semiconductor (MoS2, represented by the yellow molecular model) from the source to the drain. Based on the voltage applied to the gate, it switches the channel (the portion of the MoS2 semiconductor just above the carbon nanotube) on and off, via a dielectric (zirconium oxide, green), operating in a manner similar to a capacitor. (credit: Sujay Desai/Berkeley Lab)

“We made the smallest transistor reported to date,” said faculty scientist Ali Javey at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) and lead principal investigator of the Electronic Materials program in Berkeley Lab’s Materials Science Division. “The gate length is considered a defining dimension of the transistor. We demonstrated a 1-nanometer-gate transistor, showing that with the choice of proper materials, there is a lot more room to shrink our electronics.”

The development could be key to keeping alive Intel co-founder Gordon Moore’s prediction that the density of transistors on integrated circuits would double every two years, enabling the increased performance of our laptops, mobile phones, televisions, and other electronics.

“The semiconductor industry has long assumed that any gate below 5 nanometers wouldn’t work, so anything below that was not even considered,” said study lead author Sujay Desai, a graduate student in Javey’s lab. “This research shows that sub-5-nanometer gates should not be discounted. Industry has been squeezing every last bit of capability out of silicon. By changing the material from silicon to MoS2, we can make a transistor with a gate that is just 1 nanometer in length, and operate it like a switch.”

Transmission electron microscope image of a cross section of the transistor, showing the edge of a 1-nanometer carbon nanotube gate and the molybdenum disulfide semiconductor separated by zirconium dioxide, which is a dielectric insulator. (credit: Sujay B. Desai/Science)

Continuing Moore’s law

“This work demonstrated the shortest transistor ever,” said Javey, who is also a UC Berkeley professor of electrical engineering and computer sciences. “However, it’s a proof of concept. We have not yet packed these transistors onto a chip, and we haven’t done this billions of times over. We also have not developed self-aligned fabrication schemes for reducing parasitic resistances in the device. But this work is important to show that we are no longer limited to a 5-nanometer gate for our transistors. Moore’s Law can continue a while longer by proper engineering of the semiconductor material and device architecture.”

The findings appeared in the Oct. 7 issue of the journal Science. Researchers at the University of Texas at Dallas, Stanford University, and the University of California, Berkeley, were also involved. The work at Berkeley Lab was primarily funded by the Department of Energy’s Basic Energy Sciences program.

According to an earlier article in CTimes on Sept. 30, Taiwan Semiconductor Manufacturing Co., Ltd. (TSMC) said the company is working toward a 1-nanometer manufacturing process, starting with a “5 nanometers process technology, while putting about 300 to 400 R&D personnel in developing more advanced 3-nanometer process.” However, TSMC spokesperson Elizabeth Sun told KurzweilAI that “no further information regarding any technology either under development or in path-finding stage will be disclosed to the public at this point.”

* Gate length is the length of the gate portion of the transistor, not to be confused with “node,” which was initially a measure of “half pitch” (half of the distance between features of a transistor) but has since lost the exact meaning it once held. Gate length was 26 nm for Intel’s 22 nm node and 20 nm for Intel’s more recent 14 nm node. — S. Natarajan et al., “A 14nm logic technology featuring 2nd-generation FinFET, air-gapped interconnects, self-aligned double patterning and a 0.0588 µm² SRAM cell size,” 2014 IEEE International Electron Devices Meeting, San Francisco, CA, 2014, pp. 3.7.1-3.7.3. doi: 10.1109/IEDM.2014.7046976


Abstract of MoS2 transistors with 1-nanometer gate lengths

Scaling of silicon (Si) transistors is predicted to fail below 5-nanometer (nm) gate lengths because of severe short channel effects. As an alternative to Si, certain layered semiconductors are attractive for their atomically uniform thickness down to a monolayer, lower dielectric constants, larger band gaps, and heavier carrier effective mass. Here, we demonstrate molybdenum disulfide (MoS2) transistors with a 1-nm physical gate length using a single-walled carbon nanotube as the gate electrode. These ultrashort devices exhibit excellent switching characteristics with near ideal subthreshold swing of ~65 millivolts per decade and an On/Off current ratio of ~10⁶. Simulations show an effective channel length of ~3.9 nm in the Off state and ~1 nm in the On state.
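
The abstract’s two figures of merit are linked by simple arithmetic: at ~65 millivolts per decade, each tenfold reduction in current costs about 65 mV of gate voltage, so the reported ~10⁶ On/Off ratio implies a gate-voltage swing of roughly 0.4 V. A quick check:

```python
# Gate-voltage swing implied by the reported figures of merit:
# subthreshold swing SS ~ 65 mV/decade, On/Off current ratio ~ 1e6.
import math

ss_mv_per_decade = 65
on_off_ratio = 1e6

decades = math.log10(on_off_ratio)     # 6 decades of current suppression
swing_mv = ss_mv_per_decade * decades  # ~390 mV of gate voltage
print(f"~{swing_mv:.0f} mV to switch over {decades:.0f} decades")
```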

D-Wave Systems previews 2000-qubit quantum processor

D-Wave 2000-qubit processor (credit: D-Wave Systems)

D-Wave Systems announced Tuesday (Sept. 28, 2016) a new 2000-qubit processor, doubling the number of qubits over the previous-generation D-Wave 2X system. The new system will enable larger problems to be solved, with performance improvements of up to 1000 times.

D-Wave’s quantum system runs a quantum-annealing algorithm to find the lowest points in a virtual energy landscape representing a computational problem to be solved. The lowest points in the landscape correspond to optimal or near-optimal solutions to the problem. The increase in qubit count enables larger and more difficult problems to be solved, and the ability to tune the rate of annealing of individual qubits will enhance application performance.
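
For intuition, the classical cousin of this process, simulated annealing, can be sketched in a few lines: a random spin configuration is gradually “cooled” until it settles into a low point of an Ising energy landscape. This is only an analogy with assumed toy couplings; D-Wave’s hardware anneals quantum-mechanically rather than by thermal hops:

```python
import math
import random

def ising_energy(spins, J, h):
    """Energy of an Ising configuration: E = sum_ij J[i][j]*s_i*s_j + sum_i h[i]*s_i."""
    n = len(spins)
    e = sum(h[i] * spins[i] for i in range(n))
    e += sum(J[i][j] * spins[i] * spins[j] for i in range(n) for j in range(i + 1, n))
    return e

def simulated_anneal(J, h, steps=5000, t_start=5.0, t_end=0.01):
    """Classical analogy only: cool a random spin configuration toward low energy."""
    n = len(h)
    spins = [random.choice([-1, 1]) for _ in range(n)]
    energy = ising_energy(spins, J, h)
    for step in range(steps):
        t = t_start * (t_end / t_start) ** (step / steps)  # geometric cooling schedule
        i = random.randrange(n)
        spins[i] *= -1                                     # propose a single spin flip
        new_energy = ising_energy(spins, J, h)
        if new_energy > energy and random.random() >= math.exp((energy - new_energy) / t):
            spins[i] *= -1                                 # reject: revert the flip
        else:
            energy = new_energy                            # accept the move
    return spins, energy

# Toy 4-spin problem with assumed couplings (illustrative only).
J = [[0, 1, 0, 0], [0, 0, -1, 0], [0, 0, 0, 1], [0, 0, 0, 0]]
h = [0.5, -0.5, 0.5, -0.5]
print(simulated_anneal(J, h))
```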

According to D-Wave, users will be able to tune the quantum computational process to solve problems faster and find more diverse solutions when they exist. They will have the ability to sample the state of the quantum computer during the quantum annealing process, which will power hybrid quantum-classical machine learning algorithms that were not possible before.

The system will also allow for combining quantum processing with classical processing, improving the quality of optimization and sampling results returned from the system.

D-Wave’s first users conference, being held September 28–29 in Santa Fe, New Mexico, features speakers from Los Alamos National Laboratory, NASA, Lockheed Martin, the Roswell Park Cancer Center, Oak Ridge National Laboratory, USC, and D-Wave, as well as a number of quantum software and services companies.

Google’s secret plan for quantum computer supremacy

UCSB Martinis Group’s superconducting five-qubit array (credit: Erik Lucero)

Google* is developing a quantum computer that it believes will outperform the world’s top supercomputers, according to an August 31 New Scientist article based on interviews with researchers contacted by the magazine.

Google’s ambitious goal is “quantum supremacy,” which would be reached when “quantum devices without error correction can perform a well-defined computational task beyond the capabilities of state-of-the-art classical computers,” as the authors of an arXiv paper (open access) explain.

The task in this case: simulate the behavior of random quantum circuits on a 48-qubit grid, which would require 2.252 petabytes of memory, almost double that of the world’s top supercomputer.
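
That memory figure follows directly from the size of the state vector: a 48-qubit state has 2^48 complex amplitudes, and at 8 bytes per single-precision complex amplitude that comes to about 2.25 petabytes. A quick check of the arithmetic:

```python
# Memory needed to store a full 48-qubit state vector.
n_qubits = 48
bytes_per_amplitude = 8                       # complex64: two 4-byte floats
total_bytes = (2 ** n_qubits) * bytes_per_amplitude
print(f"{total_bytes / 1e15:.3f} petabytes")  # -> 2.252 petabytes
```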

To do that, Google plans to build a whopping 50-qubit computer. So far, Google has only announced a modest nine-qubit computer, but it has hired arXiv paper co-author John M. Martinis at the University of California, Santa Barbara (see “Google partners with UC Santa Barbara team to build new superconductor-based quantum information processors” on KurzweilAI) to try.

“Success may prepare Google to construct something even bigger: a fully scalable machine,” says Ian Walmsley at the University of Oxford.

* With partners NASA Ames, SGT, and University of California, Santa Barbara

How creating defective nanodiamonds could revolutionize nanotechnology and quantum computing

This electron microscope image shows a hybrid nanoparticle consisting of a nanodiamond (roughly 50 nanometers wide) covered in smaller silver nanoparticles that enhance the diamond’s optical properties. (credit: Min Ouyang)

University of Maryland researchers have developed a method to quickly and inexpensively assemble diamond-based hybrid nanoparticles from the ground up in large quantities while avoiding many of the problems with current methods.

These hybrid nanoparticles could speed the design of room-temperature qubits for quantum computers and create brighter dyes for biomedical imaging or highly sensitive magnetic and temperature sensors, for example.

When impurities are better

Synthetic diamonds of various colors (from defects) grown by the high-pressure high-temperature technique (credit: Wikipedia/public domain)

The basic trick in creating an interesting or useful diamond is, ironically, to add a defect to the diamond’s crystal lattice. It’s similar to doping silicon to give it special electronic properties (such as making it work as a transistor).

Pure diamonds consist of an orderly lattice of carbon atoms and are completely transparent. However, pure diamonds are quite rare in natural diamond deposits; most have defects resulting from non-carbon impurities such as nitrogen, boron and phosphorus. Such defects create the subtle and desirable color variations seen in gemstone diamonds.

The altered bonds around such a defect are also the source of the optical, electromagnetic, and quantum physical properties that make a nanodiamond useful when paired with other nanomaterials.

Nitrogen vacancy impurity

Model of nitrogen-vacancy center in diamond (credit: Wikipedia/public domain)

The most useful impurity, and the one used in the Maryland study, is the famous “nitrogen-vacancy” defect: a single nitrogen atom sits where a carbon atom should be, with an empty space right next to it.

As KurzweilAI has reported in several articles, a nitrogen vacancy in a diamond (or other crystalline materials) can lead to a variety of interesting new properties, such as a highly sensitive way to detect neural signals, an ultrasensitive real-time magnetic field detector, and, importantly, making a nanodiamond behave as a quantum bit (qubit) for use in quantum computing and other applications.

Nearly all qubits studied to date require ultra-cold temperatures to function properly. A qubit that works at room temperature would represent a significant step forward, helping bring quantum circuits into industrial, commercial, and consumer-level electronics. That’s of special interest to Ouyang’s team.

Volume production of hybrid nanoparticles

A synthetic route for hybrid nanodiamond nanoparticles. (a) Different growth stages, ending in (S6) growth of metal nanoparticles on the nanodiamond surface. (b) Transmission electron microscope image showing hybrid nanodiamond-silver nanostructures made by following the synthetic scheme in (a). Scale bar, 200 nm. (credit: J. Gong et al./Nature Communications)

The main breakthrough from Ouyang and his colleagues, though, is their method for constructing the hybrid nanoparticles. Other researchers have paired nanodiamonds with complementary nanoparticles using relatively imprecise methods, such as manually installing the diamonds and particles next to each other onto a larger surface, one by one.

These top-down methods are costly and time consuming, and they introduce a host of complications. “Our key innovation is that we can now reliably and efficiently produce these freestanding hybrid particles in large numbers,” explained Ouyang, who also has appointments in the UMD Center for Nanophysics and Advanced Materials and the Maryland NanoCenter, with an affiliate professorship in the UMD Department of Materials Science and Engineering.

His team’s method also enables precise control of the hybrid particles’ properties, such as the composition and total number of non-diamond particles.

“A major strength of our technique is that it is broadly useful and can be applied to a variety of diamond types and paired with a variety of other nanomaterials,” Ouyang said. “It can also be scaled up fairly easily. We are interested in studying the basic physics further, but also moving toward specific applications.”


Abstract of Nanodiamond-based nanostructures for coupling nitrogen-vacancy centres to metal nanoparticles and semiconductor quantum dots

The ability to control the interaction between nitrogen-vacancy centres in diamond and photonic and/or broadband plasmonic nanostructures is crucial for the development of solid-state quantum devices with optimum performance. However, existing methods typically employ top-down fabrication, which restrict scalable and feasible manipulation of nitrogen-vacancy centres. Here, we develop a general bottom-up approach to fabricate an emerging class of freestanding nanodiamond-based hybrid nanostructures with external functional units of either plasmonic nanoparticles or excitonic quantum dots. Precise control of the structural parameters (including size, composition, coverage and spacing of the external functional units) is achieved, representing a pre-requisite for exploring the underlying physics. Fine tuning of the emission characteristics through structural regulation is demonstrated by performing single-particle optical studies. This study opens a rich toolbox to tailor properties of quantum emitters, which can facilitate design guidelines for devices based on nitrogen-vacancy centres that use these freestanding hybrid nanostructures as building blocks.

Machine learning outperforms physicists in experiment

The experiment, featuring the small red glow of a BEC trapped in infrared laser beams (credit: Stuart Hay, ANU)

Australian physicists have used an online optimization process based on machine learning to produce effective Bose-Einstein condensates (BECs) in a fraction of the time it would normally take the researchers.

A BEC is a state of matter of a dilute gas of atoms trapped in a laser beam and cooled to temperatures just above absolute zero. BECs are extremely sensitive to external disturbances, which makes them ideal for research into quantum phenomena or for making very precise measurements such as tiny changes in the Earth’s magnetic field or gravity.

The experiment, developed by physicists from ANU, University of Adelaide and UNSW ADFA, demonstrated that “machine-learning online optimization” can discover optimized condensation methods “with less experiments than a competing optimization method and provide insight into which parameters are important in achieving condensation,” the physicists explain in an open-access paper in the Nature group journal Scientific Reports.
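
For readers curious what a “machine-learning online optimization” loop looks like in code, here is a minimal sketch using a Gaussian-process model, in the spirit of the paper: fit a statistical model to the runs so far, pick the most promising settings, measure, and repeat. The objective function below is a made-up stand-in for measured BEC quality, and the loop details (kernel, acquisition rule) are illustrative assumptions, not the group’s actual implementation:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def run_experiment(x):
    """Stand-in for one experimental run: returns a noisy 'BEC quality' score.
    (Made-up objective; the real learner scores actual condensates.)"""
    return -np.sum((x - 0.6) ** 2) + rng.normal(0, 0.01)

dim = 3                                    # e.g., three laser-ramp parameters
X = rng.uniform(0, 1, size=(5, dim))       # a few random initial settings
y = np.array([run_experiment(x) for x in X])

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), alpha=1e-4)
for _ in range(20):                        # online loop: model, pick, measure
    gp.fit(X, y)
    candidates = rng.uniform(0, 1, size=(200, dim))
    mu, sigma = gp.predict(candidates, return_std=True)
    pick = candidates[np.argmax(mu + sigma)]   # upper-confidence-bound choice
    X = np.vstack([X, pick])
    y = np.append(y, run_experiment(pick))

print("best parameters found:", X[np.argmax(y)])
```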

Faster, cheaper than a physicist

Optical dipole trap used in the experiment, showing the three laser beams and the condensate (red-yellow oval in blue square) (credit: P. B. Wigley et al./Scientific Reports)

The team first cooled the gas to around 5 microkelvin. To cool the trapped gas (containing about 40 million rubidium atoms) further, down to the nanokelvin range*, they then handed control of the three laser beams** over to the machine-learning program.

The physicists were surprised by the clever methods the system came up with to create a BEC, such as changing one laser’s power up and down and compensating with another laser.

“I didn’t expect the machine could learn to do the experiment itself, from scratch, in under an hour,” said co-lead researcher Paul Wigley from ANU Research School of Physics and Engineering. “A simple computer program would have taken longer than the age of the universe to run through all the combinations and work this out.”

Wigley suggested that one could build a working gravity-measuring device to carry in the back of a car, with the AI automatically recalibrating and fixing itself.

“It’s cheaper than taking a physicist everywhere with you,” he said.

* A billionth of a degree above absolute zero, where a phase transition occurs and a macroscopic number of atoms start to occupy the same quantum state, called Bose-Einstein condensation.

** The 1064 nm beam is controlled by varying the current to the laser, while the 1090 nm beam is controlled using the current and a waveplate rotation stage combined with a polarizing beamsplitter to provide additional power attenuation while maintaining mode stability.


Abstract of Fast machine-learning online optimization of ultra-cold-atom experiments

We apply an online optimization process based on machine learning to the production of Bose-Einstein condensates (BEC). BEC is typically created with an exponential evaporation ramp that is optimal for ergodic dynamics with two-body s-wave interactions and no other loss rates, but likely sub-optimal for real experiments. Through repeated machine-controlled scientific experimentation and observations our ‘learner’ discovers an optimal evaporation ramp for BEC production. In contrast to previous work, our learner uses a Gaussian process to develop a statistical model of the relationship between the parameters it controls and the quality of the BEC produced. We demonstrate that the Gaussian process machine learner is able to discover a ramp that produces high quality BECs in 10 times fewer iterations than a previously used online optimization technique. Furthermore, we show the internal model developed can be used to determine which parameters are essential in BEC creation and which are unimportant, providing insight into the optimization process of the system.

‘Primitive’ quantum computer outperforms classical computers

A simulation of Brownian motion (random walk) of a dust particle (yellow) that collides with a large set of smaller particles (molecules of a gas) moving with different velocities in different random directions (credit: Lookang et al./CC)

Researchers at the Universities of Bristol and Western Australia have demonstrated a practical use of a “primitive” quantum computer, using an algorithm known as “quantum walk.” They showed that a two-qubit photonics quantum processor can outperform classical computers for this type of algorithm, without requiring more sophisticated quantum computers, such as IBM’s five-qubit cloud-based quantum processor (see IBM makes quantum computing available free on IBM Cloud).

Quantum walk is the quantum-mechanical analog of “random-walk” models such as Brownian motion (for example, the random motion of a dust particle in air). The researchers implemented “continuous-time quantum walk” computations on circulant graphs* in a proof-of-principle experiment.
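
A continuous-time quantum walk evolves the walker’s state under the graph’s adjacency matrix A as psi(t) = exp(-iAt) psi(0); the output distribution is the squared magnitude of the amplitudes. Here is a small sketch on an assumed 8-vertex circulant graph (direct matrix exponentiation for illustration only; the experiment implements the walk with a two-qubit photonic circuit):

```python
import numpy as np
from scipy.linalg import expm

def circulant_adjacency(n, connections):
    """Adjacency matrix of a circulant graph: vertex i links to i +/- c (mod n)
    for each offset c, so every vertex sees the same set of relative neighbors."""
    A = np.zeros((n, n))
    for i in range(n):
        for c in connections:
            A[i, (i + c) % n] = 1
            A[i, (i - c) % n] = 1
    return A

n = 8
A = circulant_adjacency(n, connections=[1, 2])   # assumed example graph
psi0 = np.zeros(n, dtype=complex)
psi0[0] = 1.0                                    # walker starts at vertex 0

t = 1.0
psi_t = expm(-1j * A * t) @ psi0                 # psi(t) = exp(-iAt) psi(0)
probabilities = np.abs(psi_t) ** 2               # distribution to be sampled
print(np.round(probabilities, 3), "sum =", probabilities.sum())
```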

The probability distribution of quantum walk on an example circulant graph. Sampling this probability distribution is generally hard for a classical computer, but simple on a primitive quantum computer. (credit: University of Bristol)

Jonathan Matthews, PhD, EPSRC Early Career Fellow and Lecturer in the School of Physics and the Centre for Quantum Photonics, explained in an open-access paper in Nature Communications: “An exciting outcome of our work is that we may have found a new example of quantum walk physics that we can observe with a primitive quantum computer, that otherwise a classical computer could not see. These otherwise hidden properties have practical use, perhaps in helping to design more sophisticated quantum computers.”


Microsoft | Quantum Computing 101

* A circulant graph is a graph where every vertex is connected to the same set of relative vertices, as explained in an open-access paper by Salisbury University student Shealyn Tucker, including a practical example of the use of a circulant graph:

Example of a circulant graph depicting how products could be optimally collocated in a grocery store, based on which products customers buy together (credit: Shealyn Tucker/Salisbury University)


Abstract of Efficient quantum walk on a quantum processor

The random walk formalism is used across a wide range of applications, from modelling share prices to predicting population genetics. Likewise, quantum walks have shown much potential as a framework for developing new quantum algorithms. Here we present explicit efficient quantum circuits for implementing continuous-time quantum walks on the circulant class of graphs. These circuits allow us to sample from the output probability distributions of quantum walks on circulant graphs efficiently. We also show that solving the same sampling problem for arbitrary circulant quantum circuits is intractable for a classical computer, assuming conjectures from computational complexity theory. This is a new link between continuous-time quantum walks and computational complexity theory and it indicates a family of tasks that could ultimately demonstrate quantum supremacy over classical computers. As a proof of principle, we experimentally implement the proposed quantum circuit on an example circulant graph using a two-qubit photonics quantum processor.

IBM makes quantum computing available free on IBM Cloud

Layout of IBM’s five superconducting quantum bit device. (credit: IBM Research)

IBM Research has announced that, effective Wednesday, May 4, it is making quantum computing available free to members of the public, who can access and run experiments on IBM’s quantum processor via the IBM Cloud from any desktop or mobile device.

IBM believes quantum computing is the future of computing and has the potential to solve certain problems that are impossible to solve on today’s supercomputers.

The cloud-enabled quantum computing platform, called IBM Quantum Experience, will allow users to run algorithms and experiments on IBM’s quantum processor, work with the individual quantum bits (qubits), and explore tutorials and simulations around what might be possible with quantum computing.

The quantum processor is composed of five superconducting qubits and is housed at the IBM T.J. Watson Research Center in New York. IBM says this architecture can scale to larger quantum systems and is a step toward building a universal quantum computer, one that can be programmed to perform any computing task and will be exponentially faster than classical computers for a number of important applications in science and business.


IBM | Explore our 360 Video of the IBM Research Quantum Lab

IBM envisions medium-sized quantum processors of 50–100 qubits becoming possible in the next decade. None of today’s TOP500 supercomputers could successfully emulate even a 50-qubit quantum computer, reflecting the tremendous potential of this technology.

“Quantum computing is becoming a reality and it will extend computation far beyond what is imaginable with today’s computers,” said Arvind Krishna, senior vice president and director, IBM Research. “This moment represents the birth of quantum cloud computing. By giving hands-on access to IBM’s experimental quantum systems, the IBM Quantum Experience will make it easier for researchers and the scientific community to accelerate innovations in the quantum field, and help discover new applications for this technology.”

This leap forward in computing could lead to the discovery of new pharmaceutical drugs and completely safeguard cloud computing systems, IBM believes. It could also unlock new facets of artificial intelligence (which could lead to future, more powerful Watson technologies), develop new materials science to transform industries, and search large volumes of big data.

The IBM Quantum Experience


IBM | Running an experiment in the IBM Quantum Experience

Drawing on software expertise from across the IBM Research ecosystem, the team has built a dynamic user interface on the IBM Cloud platform that allows users to easily connect to the quantum hardware via the cloud.

In the future, users will have the opportunity to contribute and review their results in the community hosted on the IBM Quantum Experience, and IBM scientists will be directly engaged to offer more research and insights on new advances. IBM plans to add more qubits and different processor arrangements to the IBM Quantum Experience over time, so users can expand their experiments and help uncover new applications for the technology.

IBM employs superconducting qubits, which are made with superconducting metals on a silicon chip and can be designed and manufactured using standard silicon fabrication techniques. Last year, IBM scientists demonstrated critical breakthroughs in detecting quantum errors by combining superconducting qubits in latticed arrangements, using a quantum circuit design that IBM says is the only physical architecture that can scale to larger dimensions.


IBM | IBM Brings Quantum Computing to the Cloud

Now, IBM scientists have achieved a further advance by combining five qubits in the lattice architecture, which demonstrates a key operation known as a parity measurement — the basis of many quantum error correction protocols.
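
A parity measurement asks only whether two qubits agree, without revealing either qubit’s individual value, which is why it can detect errors without destroying the encoded information. Here is a small sketch of a two-qubit Z-basis parity check using projectors (conceptual linear algebra only; IBM’s hardware extracts the parity with an ancilla qubit in its lattice):

```python
import numpy as np

# Z (x) Z parity operator for two qubits; eigenvalue +1 = even parity, -1 = odd.
Z = np.diag([1, -1])
ZZ = np.kron(Z, Z)

# Projectors onto the even- and odd-parity subspaces.
I4 = np.eye(4)
P_even = (I4 + ZZ) / 2
P_odd = (I4 - ZZ) / 2

# Example state: (|00> + |11>)/sqrt(2), an even-parity Bell state.
psi = np.zeros(4, dtype=complex)
psi[0] = psi[3] = 1 / np.sqrt(2)

p_even = np.real(psi.conj() @ P_even @ psi)            # probability of "even"
p_odd = np.real(psi.conj() @ P_odd @ psi)              # probability of "odd"
print(f"P(even) = {p_even:.2f}, P(odd) = {p_odd:.2f}")  # -> 1.00, 0.00
```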

By giving users access to IBM’s experimental quantum systems, IBM believes it will help businesses and organizations begin to understand the technology’s potential, for universities to grow their teaching programs in quantum computing and related subjects, and for students (IBM’s potential future customers) to become aware of promising new career paths. And of course, to raise IBM’s marketing profile in this emerging field.

Capturing a single photon

Capturing a single photon from a pulse of light (credit: Weizmann Institute of Science)

Weizmann Institute of Science researchers have managed to isolate a single photon out of a pulse of light. Single photons may be the backbone of future quantum communication systems, the researchers say.

The mechanism relies on a physical effect that they call “single-photon Raman interaction” (SPRINT). “The advantage of SPRINT is that it is completely passive; it does not require any control fields — just the interaction between the atom and the optical pulse,” said Barak Dayan, PhD, head of the Weizmann Institute Quantum Optics group.

The experimental setup involves laser cooling and trapping of atoms (in this case, rubidium), optical nanofibers, and fabrication of chip-based, ultrahigh-quality glass microspheres.

Previous approaches used a low-reflectivity beam splitter to direct a small fraction of the incoming light toward a detector, with inherently low success rates.
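
The low success rate of that approach follows from simple binomial statistics: each of the n photons in the pulse is independently diverted with probability equal to the splitter’s reflectivity r, so the chance of diverting exactly one is n * r * (1 - r)^(n - 1), and r must be kept small to avoid removing more than one photon. A quick sketch with assumed numbers:

```python
from math import comb

def p_exactly_one(n, r):
    """Probability that a beam splitter of reflectivity r diverts exactly
    one of n incident photons (each photon reflects independently)."""
    return comb(n, 1) * r * (1 - r) ** (n - 1)

# Assumed illustrative numbers: low reflectivity keeps multi-photon
# subtraction unlikely, but it makes the overall success rate low too.
for n in (1, 2, 5):
    print(f"n={n}: P(subtract one) = {p_exactly_one(n, 0.05):.3f}")
# n=1: 0.050, n=2: 0.095, n=5: 0.204 -- versus near-unity for SPRINT
```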

“The ability to divert a single photon from a flow could be harnessed for various tasks, from creating nonclassical states of light that are useful for basic scientific research, through eavesdropping on imperfect quantum-cryptography systems that rely on single photons, to increasing the security of your own quantum-communication systems,” Dayan said.

The findings of this research appeared Nov. 23, 2015 in Nature Photonics.


Abstract of Extraction of a single photon from an optical pulse

Removing a single photon from a pulse is one of the most elementary operations that can be performed on light, having both fundamental significance and practical applications in quantum communication and computation. So far, photon subtraction, in which the removed photon is detected and therefore irreversibly lost, has been implemented in a probabilistic manner with inherently low success rates using low-reflectivity beam splitters. Here we demonstrate a scheme for the deterministic extraction of a single photon from an incoming pulse. The removed photon is diverted to a different mode, enabling its use for other purposes, such as a photon number-splitting attack on quantum key distribution protocols. Our implementation makes use of single-photon Raman interaction (SPRINT) with a single atom near a nanofibre-coupled microresonator. The single-photon extraction probability in our current realization is limited mostly by linear loss, yet probabilities close to unity should be attainable with realistic experimental parameters.