Are you ready for pop-up, shape-shifting food? Just add water.

Fun with food: These pasta shapes were generated by immersing a 2D flat gelatin film into water. (credit: Michael Indresano Photography)

Researchers at MIT’s Tangible Media Group are exploring ways to make your dining experience interactive and fun, with food that can transform its shape by just adding water.

Think of it as edible origami or culinary performance art — flat sheets of gelatin and starch that instantly sprout into three-dimensional structures, such as macaroni and rotini, or the shape of a flower.

But the researchers suggest it’s also a practical way to reduce food-shipping costs. Edible films could be stacked together, IKEA-style, and shipped to consumers, then morph into their final shape later when immersed in water.

“We did some simple calculations, such as for macaroni pasta, and even if you pack it perfectly, you still will end up with 67 percent of the volume as air,” says Wen Wang, a co-author on the paper and a former graduate student and research scientist in MIT’s Media Lab. “We thought maybe in the future our shape-changing food could be packed flat and save space.”
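Wang's 67-percent figure is easy to sanity-check with a little geometry. Even a perfect hexagonal packing of cylinders covers only about 90.7 percent of a cross-section, and each macaroni tube is mostly hollow bore. The bore-to-outer-radius ratio below is an assumed illustrative value, not a measurement from the paper:

```python
import math

# Densest possible packing of equal circles (hexagonal) covers
# pi / (2 * sqrt(3)) of the plane, about 90.7%, so ~9.3% of the
# volume is air before accounting for each tube's hollow bore.
hex_packing = math.pi / (2 * math.sqrt(3))

# Assumed geometry (illustrative, not measured): the bore radius
# of a macaroni tube is ~80% of its outer radius.
bore_ratio = 0.8

# Fraction of each cylinder's cross-section that is solid pasta wall.
wall_fraction = 1 - bore_ratio ** 2

air_fraction = 1 - hex_packing * wall_fraction
print(f"air fraction with perfect packing: {air_fraction:.0%}")  # ~67%
```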

Programmable pasta, anyone?

At MIT, Wang and associates had been investigating the response of various materials to moisture. They started playing around with gelatin (as in Jello), a substance that naturally expands when it absorbs water. Gelatin can expand to varying degrees depending on its density — a characteristic that the team exploited in creating their shape-transforming structures.

They created a flat, two-layer film from gelatin of two different densities. The top layer was more densely packed and so, in theory, should absorb more water than the bottom layer. Sure enough, when they immersed the structure in water, the top layer curled over the bottom layer, forming a slowly rising arch — creative pasta.*
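The curling can be estimated with the classic Timoshenko bimetal-strip analysis, which for two layers of equal thickness and equal stiffness collapses to κ = (3/2)·Δε/h. The numbers below (a 10% swelling mismatch, a 1 mm film) are assumed purely for illustration, not the team's measured values:

```python
# Curvature of a swelling bilayer, after Timoshenko's bimetal-strip
# analysis. For two layers of equal thickness and equal stiffness the
# general formula collapses to kappa = 1.5 * delta_eps / h, where
# delta_eps is the swelling-strain mismatch and h the total thickness.

def bilayer_curvature(delta_eps: float, h: float) -> float:
    """Curvature (1/m) of an equal-thickness, equal-stiffness bilayer."""
    return 1.5 * delta_eps / h

# Assumed values for illustration only: the denser top layer swells
# 10% more than the bottom layer; total film thickness is 1 mm.
kappa = bilayer_curvature(delta_eps=0.10, h=1e-3)   # 1/m
radius_mm = 1e3 / kappa                             # arch radius in mm
print(f"curvature = {kappa:.0f} 1/m, arch radius = {radius_mm:.1f} mm")
```

A larger density (swelling) mismatch or a thinner film gives a tighter curl, which is the knob the researchers turn to program different final shapes.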

Culinary performance art by MIT researchers. (left) Phytoplankton pasta salad with heirloom tomatoes and wild sorrel. (right) Flowering pasta with west-coast foraged mushrooms and fermented burgundy truffle. (credit: Michael Indresano Photography)

To see how their designs might be implemented in a professional kitchen, the researchers showed their engineered edibles to Matthew Delisle, the head chef of high-end Boston restaurant L’Espalier. They jointly designed two culinary creations: transparent discs of gelatin flavored with plankton and squid ink that instantly wrap around small beads of caviar, and long fettuccine-like strips made from two gelatins that melt at different temperatures, causing the noodles to spontaneously divide when hot broth melts away certain sections. “They had great texture and tasted pretty good,” says co-author Lining Yao.

DIY food 

The researchers used a laboratory 3-D printer to pattern cellulose onto films of gelatin. But in an open-access paper presented at the Association for Computing Machinery’s 2017 CHI Conference on Human Factors in Computing Systems (CHI 2017), they suggest that users can reproduce similar effects with more common techniques, such as screen printing.

They envision that their “online software can provide design instructions, and a startup company can ship the materials to your home,” Yao says.

This research was funded, in part, by the MIT Media Lab and Food + Future, a startup accelerator sponsored by Target Corporation, IDEO, and Intel.

* The team recorded the cellulose patterns and the dimensions of all of the structures they were able to produce, and also tested mechanical properties such as toughness, organizing all this data into a database. Co-authors Zhang and Cheng then built computational models of the material’s transformations, which they used to design an online interface for users to design their own edible, shape-transforming structures. “We did many lab tests and collected a database, within which you can pick different shapes, with fabrication instructions,” Wang says. “Conversely, you can also select a basic pattern from the database and adjust the distribution or thickness, and see how the final transformation will look.”


Tangible Media Group | Transformative Appetite


Abstract of Transformative Appetite: Shape-Changing Food Transforms from 2D to 3D by Water Interaction through Cooking

We developed a concept of transformative appetite, where edible 2D films made of common food materials (protein, cellulose or starch) can transform into 3D food during cooking. This transformation process is triggered by water adsorption, and it is strongly compatible with the ‘flat packaging’ concept for substantially reducing shipping costs and storage space. To develop these transformable foods, we performed material-based design, established a hybrid fabrication strategy, and conducted performance simulation. Users can customize food shape transformations through a pre-defined simulation platform, and then fabricate these designed patterns using additive manufacturing. Three application techniques are provided – 2D-to-3D folding, hydration-induced wrapping, and temperature-induced self-fragmentation, to present the shape, texture, and interaction with food materials. Based on this concept, several dishes were created in the kitchen, to demonstrate the futuristic dining experience through materials-based interaction design.

Best of MOOGFEST 2017

The Moogfest four-day festival in Durham, North Carolina, next weekend (May 18–21) explores the future of technology, art, and music. Here are some of the sessions that may be especially interesting to KurzweilAI readers. See the full #Moogfest2017 program lineup.

Culture and Technology

(credit: Google)

The Magenta by Google Brain team will bring its work to life through an interactive demo plus workshops on the creation of art and music through artificial intelligence.

Magenta is a Google Brain project that asks and answers the questions, “Can we use machine learning to create compelling art and music? If so, how? If not, why not?” It is first a research project to advance the state of the art in music, video, image, and text generation, and second an effort to build a community of artists, coders, and machine learning researchers.

The interactive demo will walk through an improvisation along with the machine learning models, much like the AI Jam Session. The workshop will cover how to use the open-source library to build and train models and interact with them via MIDI.

Technical reference: Magenta: Music and Art Generation with Machine Intelligence


TEDx Talks | Music and Art Generation using Machine Learning | Curtis Hawthorne | TEDxMountainViewHighSchool


Miguel Nicolelis (credit: Duke University)

Miguel A. L. Nicolelis, MD, PhD will discuss state-of-the-art research on brain-machine interfaces, which make it possible for the brains of primates to interact directly and in a bi-directional way with mechanical, computational and virtual devices. He will review a series of recent experiments using real-time computational models to investigate how ensembles of neurons encode motor information. These experiments have revealed that brain-machine interfaces can be used not only to study fundamental aspects of neural ensemble physiology, but they can also serve as an experimental paradigm aimed at testing the design of novel neuroprosthetic devices.

He will also explore research that raises the hypothesis that the properties of a robot arm, or other neurally controlled tools, can be assimilated by brain representations as if they were extensions of the subject’s own body.

Theme: Transhumanism


Dervishes at Royal Opera House with Matthew Herbert (credit: ?)

Andy Cavatorta (MIT Media Lab) will present a conversation and workshop on a range of topics, including the four-century history of music and performance at the forefront of technology. Known as the inventor of Björk’s Gravity Harp, he has collaborated on numerous projects to create instruments using new technologies that coerce expressive music out of fire, glass, gravity, tiny vortices, underwater acoustics, and more. His instruments explore technologically mediated emotion and opportunities to express the previously inexpressible.

Theme: Instrument Design


Berklee College of Music

Michael Bierylo (credit: Moogfest)

Michael Bierylo will present his Modular Synthesizer Ensemble alongside the Csound workshops from fellow Berklee Professor Richard Boulanger.

Csound is a sound and music computing system originally developed at the MIT Media Lab. It is best described as a compiler: software that takes textual instructions in the form of source code and converts them into a stream of numbers representing audio. Although it has a strong tradition as a tool for composing electro-acoustic pieces, it is used by composers and musicians for any kind of music that can be made with a computer. Traditionally it was used in a non-interactive, score-driven context, but nowadays it is mostly used in real time.

Michael Bierylo serves as the Chair of the Electronic Production and Design Department, which offers students the opportunity to combine performance, composition, and orchestration with computer, synthesis, and multimedia technology in order to explore the limitless possibilities of musical expression.


Berklee College of Music | Electronic Production and Design (EPD) at Berklee College of Music


Chris Ianuzzi (credit: William Murray)

Chris Ianuzzi, a synthesist of Ciani-Musica and past collaborator with pioneers such as Vangelis and Peter Baumann, will present a daytime performance and sound-exploration workshops with the B11 brain interface and the NeuroSky brainwave-sensing headset.

Theme: Hacking Systems


Argus Project (credit: Moogfest)

The Argus Project from Gan Golan and Ron Morrison of NEW INC is a wearable sculpture, video installation and counter-surveillance training, which directly intersects the public debate over police accountability. According to ancient Greek myth, Argus Panoptes was a giant with 100 eyes who served as an eternal watchman, both for – and against – the gods.

By embedding an array of camera “eyes” into a full-body suit of tactical armor, the Argus exo-suit creates a “force field of accountability” around the bodies of those targeted. While some see filming the police as a confrontational or subversive act, it is, in fact, a deeply democratic one. The act of bearing witness to the actions of the state — and showing them to the world — strengthens our society and institutions. The Argus Project is not so much about an individual hero as about the Citizen Body as a whole. In between music acts, a presentation about the project will be part of the Protest Stage.

Argus Exo Suit Design (credit: Argus Project)

Theme: Protest


Found Sound Nation (credit: Moogfest)

Democracy’s Exquisite Corpse, from Found Sound Nation and Moogfest, is an immersive installation housed within a completely customized geodesic dome — a multi-person instrument and music-based round-table discussion. Artists, activists, innovators, festival attendees, and community members engage in a deeply interactive exploration of sound as a living ecosystem and primal form of communication.

Within the dome are nine unique stations, each with its own distinct set of analog or digital sound-making devices. Each person’s set of devices is chained to the person sitting next to them, so that everybody’s musical actions and choices affect their neighbors, and thus everyone else at the table. This instrument is a unique experiment in how technology and the instinctive language of sound can play a role in the shaping of a truly collective unconscious.

Theme: Protest


(credit: Land Marking)

Land Marking, from Halsey Burgund and Joe Zibkow of MIT Open Doc Lab, is a mobile-based music/activist project that augments the physical landscape of protest events with a layer of location-based audio contributed by event participants in real-time. The project captures the audioscape and personal experiences of temporary, but extremely important, expressions of discontent and desire for change.

Land Marking will be teaming up with the Protest Stage to allow Moogfest attendees to contribute their thoughts on protests and tune into an evolving mix of commentary and field recordings from others throughout downtown Durham. Land Marking is available on select apps.

Theme: Protest


Taeyoon Choi (credit: Moogfest)

Taeyoon Choi, an artist and educator based in New York and Seoul, will be leading a Sign Making Workshop as one of the Future Thought leaders on the Protest Stage. His art practice involves performance, electronics, drawings, and storytelling that often leads to interventions in public spaces.

Taeyoon will also participate in the Handmade Computer workshop to build a 1 Bit Computer, which demonstrates how binary numbers and Boolean logic can be configured to create more complex components. On their own, these components aren’t capable of computing anything particularly useful, but a computer that includes all of them is said to be Turing complete, at which point it has the extraordinary ability to carry out any possible computation. He has led numerous workshops at festivals around the world, from Korea to Scotland, but primarily at the School for Poetic Computation (SFPC), an artist-run school he co-founded in NYC. See Taeyoon Choi’s Handmade Computer projects.
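The idea that trivially simple components compose into arbitrary computation can be sketched in software: NAND is functionally complete, so every other Boolean gate, and ultimately adders, memory, and a whole computer, can be wired from it alone. A hypothetical sketch (not Choi's actual workshop circuit):

```python
# NAND is functionally complete: every other Boolean gate can be
# built from it alone. A 1-bit half adder, the first step toward
# arithmetic hardware, assembled purely from NAND:

def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

def not_(a: int) -> int:
    return nand(a, a)

def and_(a: int, b: int) -> int:
    return not_(nand(a, b))

def or_(a: int, b: int) -> int:
    return nand(not_(a), not_(b))

def xor(a: int, b: int) -> int:
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

def half_adder(a: int, b: int) -> tuple:
    """Add two bits: returns (sum, carry)."""
    return xor(a, b), and_(a, b)

for a in (0, 1):
    for b in (0, 1):
        print(a, "+", b, "=", half_adder(a, b))  # e.g. 1 + 1 = (0, 1)
```

Chain half adders into full adders and add a clock and some feedback for memory, and you have all the ingredients the workshop's 1-bit machine demonstrates in hardware.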

Theme: Protest


(credit: Moogfest)

irlbb, from Vivan Thi Tang, connects individuals after IRL (in real life) interactions, creating communities that would otherwise have been missed. With a customized beta of the app for Moogfest 2017, irlbb presents a unique engagement opportunity.

Theme: Protest


Ryan Shaw and Michael Clamann (credit: Duke University)

Duke professors Ryan Shaw and Michael Clamann will lead a daily science pub-talk series on topics that include future medicine, humans and autonomy, and quantum physics.

Ryan is a pioneer in mobile health — the collection and dissemination of information using mobile and wireless devices for healthcare — working with faculty at Duke’s Schools of Nursing, Medicine, and Engineering to integrate mobile technologies into first-generation care-delivery systems. These technologies afford researchers, clinicians, and patients a rich stream of real-time information about individuals’ biophysical and behavioral health in everyday environments.

Michael Clamann is a Senior Research Scientist in the Humans and Autonomy Lab (HAL) within the Robotics Program at Duke University, an Associate Director at UNC’s Collaborative Sciences Center for Road Safety, and the Lead Editor for Robotics and Artificial Intelligence for Duke’s SciPol science policy tracking website. In his research, he works to better understand the complex interactions between robots and people and how they influence system effectiveness and safety.

Theme: Hacking Systems


Dave Smith (credit: Moogfest)

Dave Smith, the iconic instrument innovator and Grammy winner, will lead Moogfest’s Instruments Innovators program and host a headlining conversation with a leading artist to be revealed in next week’s release. He will also host a masterclass.

As the original founder of Sequential Circuits in the mid-’70s, Dave designed the Prophet-5 — the world’s first fully programmable polyphonic synth and the first musical instrument with an embedded microprocessor. From the late 1980s through the early 2000s, he worked to develop next-level synths with the likes of the Audio Engineering Society, Yamaha, Korg, and Seer Systems (for Intel). Realizing the limitations of software, Dave returned to hardware and started Dave Smith Instruments (DSI), which released the Evolver hybrid analog/digital synthesizer in 2002. Since then the DSI product lineup has grown to include the Prophet-6, OB-6, Pro 2, Prophet 12, and Prophet ’08 synthesizers, as well as the Tempest drum machine, co-designed with friend and fellow electronic instrument designer Roger Linn.

Theme: Future Thought


Dave Rossum, Gerhard Behles, and Lars Larsen (credit: Moogfest)

E-mu Systems founder Dave Rossum, Ableton CEO Gerhard Behles, and LZX founder Lars Larsen will take part in conversations as part of the Instruments Innovators program.

Driven by the creative and technological vision of electronic music pioneer Dave Rossum, Rossum Electro-Music creates uniquely powerful tools for electronic music production and is the culmination of Dave’s 45 years designing industry-defining instruments and transformative technologies. Starting with his co-founding of E-mu Systems, Dave provided the technological leadership that resulted in what many consider the premier professional modular synthesizer system — the E-mu Modular System — which became an instrument of choice for numerous recording studios, educational institutions, and artists as diverse as Frank Zappa, Leon Russell, and Hans Zimmer. In the following years, he developed the Emulator keyboards and racks (e.g., the Emulator II), the Emax samplers, the legendary SP-12 and SP-1200 sampling drum machines, the Proteus sound modules, and the Morpheus Z-Plane Synthesizer.

Gerhard Behles co-founded Ableton in 1999 with Robert Henke and Bernd Roggendorf. Prior to this he had been part of electronic music act “Monolake” alongside Robert Henke, but his interest in how technology drives the way music is made diverted his energy towards developing music software. He was fascinated by how dub pioneers such as King Tubby ‘played’ the recording studio, and began to shape this concept into a music instrument that became Ableton Live.

LZX Industries was born in 2008 out of the Synth DIY scene when Lars Larsen of Denton, Texas and Ed Leckie of Sydney, Australia began collaborating on the development of a modular video synthesizer. At that time, analog video synthesizers were inaccessible to artists outside of a handful of studios and universities. It was their continuing mission to design creative video instruments that (1) stay within the financial means of the artists who wish to use them, (2) honor and preserve the legacy of 20th century toolmakers, and (3) expand the boundaries of possibility. Since 2015, LZX Industries has focused on the research and development of new instruments, user support, and community building.


Science

ATLAS detector (credit: Kaushik De, Brookhaven National Laboratory)

ATLAS @ CERN. The full ATLAS @ CERN program will be led by Duke University professors Mark Kruse and Katherine Hayles, along with ATLAS @ CERN physicist Steven Goldfarb.

The program will include a “Virtual Visit” to the Large Hadron Collider — the world’s largest and most powerful particle accelerator — via a live video session, a half-day workshop on analyzing and understanding LHC data, and a “Science Fiction versus Science Fact” live debate.

The ATLAS experiment is designed to exploit the full discovery potential and the huge range of physics opportunities that the LHC provides. Physicists test the predictions of the Standard Model, which encapsulates our current understanding of what the building blocks of matter are and how they interact — resulting in such discoveries as the Higgs boson. By pushing the frontiers of knowledge, ATLAS seeks to answer fundamental questions: What are the basic building blocks of matter? What are the fundamental forces of nature? Could there be a greater underlying symmetry to our universe?

“Atlas Boogie” (referencing the Higgs boson):

ATLAS Experiment | The ATLAS Boogie

(credit: Kate Shaw)

Kate Shaw (ATLAS @ CERN), PhD, in her keynote, titled “Exploring the Universe and Impacting Society Worldwide with the Large Hadron Collider (LHC) at CERN,” will dive into the present-day and future impacts of the LHC on society. She will also share findings from the work she has done promoting particle physics in developing countries through her Physics without Frontiers program.


Theme: Future Thought


Arecibo (credit: Joe Davis/MIT)

In his keynote, Joe Davis (MIT) will trace the history of several projects centered on ideas about extraterrestrial communications that have given rise to new scientific techniques and inspired new forms of artistic practice. He will present his “swansong” — an interstellar message that is intended explicitly for human beings rather than for aliens.

Theme: Future Thought


Immortality bus (credit: Zoltan Istvan)

Zoltan Istvan (Immortality Bus), the former U.S. presidential candidate for the Transhumanist Party and a leader of the transhumanist movement, will explore the path to immortality through science, with the purpose of using science and technology to radically enhance the human being and human experience. His futurist work has reached over 100 million people — some of it thanks to the Immortality Bus, which he recently drove across America with embedded journalists aboard. The bus is shaped like a giant coffin, to raise life-extension awareness.


Zoltan Istvan | 1-min Highlight Video for Zoltan Istvan Transhumanism Documentary IMMORTALITY OR BUST

Theme: Transhumanism/Biotechnology


(credit: Moogfest)

Marc Fleury and members of the Church of Space — Park Krausen, Ingmar Koch, and Christ of Veillon — return to Moogfest for a second year to present an expanded and varied program with daily explorations in modern physics with music and the occult, Illuminati performances, theatrical rituals to ERIS, and a Sunday Mass in their own dedicated “Church” venue.

Theme: Techno-Shamanism

#Moogfest2017

Elon Musk’s Los Angeles tunnel-boring machine plan revealed

Musk’s plan for tunnels under Los Angeles (credit: The Boring Company)

Things happen fast with Elon Musk, CEO of Tesla Motors and CEO/CTO of SpaceX. It starts on December 17, 2016, when he’s stuck in Los Angeles traffic:

On February 3, Musk reveals he has already begun digging a “demo tunnel” in the SpaceX parking lot, Bloomberg reports.

Bloomberg also reports that Musk plans to build an underground network that “includes as many as 30 levels of tunnels for cars and high-speed trains such as the Hyperloop.”

Fast-forward to Thursday April 27: a SpaceX employee posts this on Instagram:

Then on Friday April 28, Musk’s latest venture — The Boring Company — posts this video:


The Boring Company | Tunnels

Also on Friday, Musk reveals at TED 2017 his plan to connect the tunnels to a coast-to-coast Hyperloop: “You could dig as much as you want,” he says. “I think if you were to do something like D.C. to New York Hyperloop, I think you’d probably want to go underground the entire way because it’s a high-density area.”

“The Hyperloop system built by SpaceX at its headquarters in Hawthorne, California, is already approximately one mile in length with a six-foot outer diameter,” says SpaceX.


SpaceXHyperloop | Hyperloop Pod Flights | 1-29-17


Inverse | Elon Musk’s TED 2017 Full Interview

To be continued. …

Elon Musk wants to enhance us as superhuman cyborgs to deal with superintelligent AI

(credit: Neuralink Corp.)

It’s the year 2021. A quadriplegic patient has just had one million “neural lace” microparticles injected into her brain, the world’s first human with an internet communication system using a wireless implanted brain-mind interface — and empowering her as the first superhuman cyborg. …

No, this is not a science-fiction movie plot. It’s the actual first public step — just four years from now — in Tesla CEO Elon Musk’s business plan for his latest new venture, Neuralink. It’s now explained for the first time on Tim Urban’s WaitButWhy blog.

Dealing with the superintelligence existential risk

Such a system would allow for radically improved communication between people, Musk believes. But for Musk, the big concern is AI safety. “AI is obviously going to surpass human intelligence by a lot,” he says. “There’s some risk at that point that something bad happens, something that we can’t control, that humanity can’t control after that point — either a small group of people monopolize AI power, or the AI goes rogue, or something like that.”

“This is what keeps Elon up at night,” says Urban. “He sees it as only a matter of time before superintelligent AI rises up on this planet — and when that happens, he believes that it’s critical that we don’t end up as part of ‘everyone else.’ That’s why, in a future world made up of AI and everyone else, he thinks we have only one good option: To be AI.”

Neural dust: an ultrasonic, low power solution for chronic brain-machine interfaces (credit: Swarm Lab/UC Berkeley)

To achieve this, Neuralink CEO Musk has met with more than 1,000 people, initially narrowing the field to eight experts, such as Paul Merolla, who spent the last seven years as the lead chip designer at IBM on its DARPA-funded SyNAPSE program, designing neuromorphic (brain-inspired) chips with 5.4 billion transistors, 1 million neurons, and 256 million synapses each, and Dongjin (DJ) Seo, who while at UC Berkeley designed “neural dust,” an ultrasonic backscatter system for powering and communicating with implanted bioelectronics to record brain activity.*

Mesh electronics being injected through sub-100 micrometer inner diameter glass needle into aqueous solution (credit: Lieber Research Group, Harvard University)

Becoming one with AI — a good thing?

Neuralink’s goal is to create a “digital tertiary layer” to augment the brain’s existing cortical and limbic layers — a radical, high-bandwidth, long-lasting, biocompatible, bidirectionally communicative, non-invasively implanted system made up of micron-size (millionth-of-a-meter) particles communicating wirelessly via the cloud and internet to achieve super-fast communication speed and increased bandwidth (carrying more information).

“We’re going to have the choice of either being left behind and being effectively useless or like a pet — you know, like a house cat or something — or eventually figuring out some way to be symbiotic and merge with AI. … A house cat’s a good outcome, by the way.”

Thin, flexible electrodes mounted on top of a biodegradable silk substrate could provide a better brain-machine interface, as shown in this model. (credit: University of Illinois at Urbana-Champaign)

But machine intelligence is already vastly superior to human intelligence in specific areas (such as Google’s AlphaGo) and often inexplicable. So how do we know superintelligence has the best interests of humanity in mind?

“Just an engineering problem”

Musk’s answer: “If we achieve tight symbiosis, the AI wouldn’t be ‘other’ — it would be you, with a relationship to your cortex analogous to the relationship your cortex has with your limbic system.” OK, but then how does an inferior intelligence know when it’s achieved full symbiosis with a superior one — or when the AI goes rogue?

Brain-to-brain (B2B) internet communication system: EEG signals representing two words were encoded into binary strings (left) by the sender (emitter) and sent via the internet to a receiver. The signal was then encoded as a series of transcranial magnetic stimulation-generated phosphenes detected by the visual occipital cortex, which the receiver then translated to words (credit: Carles Grau et al./PLoS ONE)

And what about experts in neuroethics, psychology, law? Musk says it’s just “an engineering problem. … If we can just use engineering to get neurons to talk to computers, we’ll have done our job, and machine learning can do much of the rest.”

However, it’s not clear how we could be assured our brains aren’t hacked, spied on, and controlled by a repressive government or by other humans — especially those with a more recently updated software version or covert cyborg hardware improvements.

NIRS/EEG brain-computer interface system using non-invasive near-infrared light for sensing “yes” or “no” thoughts, shown on a model (credit: Wyss Center for Bio and Neuroengineering)

In addition, the devices mentioned in WaitButWhy all require some form of neurosurgery, unlike Facebook’s research project to use non-invasive near-infrared light, as shown in this experiment, for example.** And getting implants for non-medical use approved by the FDA will be a challenge, to grossly understate it.

“I think we are about 8 to 10 years away from this being usable by people with no disability,” says Musk, optimistically. However, Musk does not lay out a technology roadmap for going further, as MIT Technology Review notes.

Nonetheless, Neuralink sounds awesome — it should lead to some exciting neuroscience breakthroughs. And Neuralink now has 16 job listings in San Francisco.

* Other experts: Vanessa Tolosa, Lawrence Livermore National Laboratory, one of the world’s foremost researchers on biocompatible materials; Max Hodak, who worked on the development of some groundbreaking BMI technology at Miguel Nicolelis’s lab at Duke University; Ben Rapoport, Neuralink’s neurosurgery expert, with a Ph.D. in electrical engineering and computer science from MIT; Tim Hanson, a UC Berkeley post-doc and expert in flexible electrodes for stable, minimally invasive neural recording; Flip Sabes, a professor at the UCSF School of Medicine and expert in cortical physiology, computational and theoretical modeling, and human psychophysics and physiology; and Tim Gardner, an associate professor of biology at Boston University, whose lab implants BMIs in birds to study “how complex songs are assembled from elementary neural units” and to learn about “the relationships between patterns of neural activity on different time scales.”

** This binary experiment and the binary brain-to-brain (B2B) internet communication system mentioned above are the equivalents of the first binary (dot–dash) telegraph message, sent May 24, 1844: “What hath God wrought?”
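The telegraph analogy can be made concrete. The sketch below uses the modern International Morse alphabet (Morse's actual 1844 transmission used the earlier American Morse code, which differs slightly), restricted to the letters the message needs:

```python
# International Morse code for the letters in the 1844 message.
# (Note: the original 1844 transmission used the earlier American
# Morse alphabet, which differs slightly; this is the modern table.)
MORSE = {
    "A": ".-",  "D": "-..", "G": "--.", "H": "....",
    "O": "---", "R": ".-.", "T": "-",   "U": "..-", "W": ".--",
}

def encode(message: str) -> str:
    """Encode a message: letters -> dot/dash groups, spaces -> '/'."""
    words = message.upper().split()
    return " / ".join(" ".join(MORSE[c] for c in word) for word in words)

print(encode("What hath God wrought"))
# "What" alone encodes as ".-- .... .- -"
```

Like the EEG phosphene scheme above, this is just an agreed-upon binary alphabet plus a channel; only the channel has changed.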

Dear President Trump: Here’s How to Make Space Great Again

(Credit: NASA Innovative Advanced Concepts)

By Brent Ziarnick, Peter Garretson, Everett Dolman, and Coyote Smith

President-elect Donald Trump often says that Americans no longer dream and must do so again. Nowhere can dreams be more inspiring and profitable than in space. But today, expanding space enterprise is not foremost on the minds of Americans or military strategists. As a recent CNN special showed, defense thinkers feel embattled in space, focused on protecting our existing investments rather than developing new ones that seize strategic advantage.


Major Brent Ziarnick, Lieutenant Colonel Peter Garretson, Everett Dolman, and Coyote Smith are members of the United States Air Force’s Space Horizons team. Space Horizons is a research group chartered by the Air University to explore the future of American space activity. The opinions herein are those of the authors alone and are not necessarily the views of Air University, the U.S. Air Force, or the U.S. government.


The first step to make space great again is for the United States to offer a constructive vision that can satisfy many American space needs, including defense. The Trump administration has an opportunity to transcend pessimism in space and focus America where it thrives: aggressive yet peaceful competition. Interested readers can view our complete recommendations, but a new Trump national space policy should declare:

The U.S. will be the first nation to mine an asteroid. The trillions of dollars in mineral wealth from asteroids can fuel a vibrant in-space economy capable of lifting up all humankind. America must lead this process.

The U.S. will be the first nation to extract resources from Earth’s moon to operate a commercial transportation service to and from the lunar surface. Our moon offers vast resources and tremendous logistical advantages for the development of that in-space economy. The U.S. will conduct research and establish public-private partnerships to advance the technology and the development of self-sustaining commercial services. The U.S. should also commit to being an early customer of such services, and it should take a leadership role in helping private industry develop businesses based on lunar exploration.

The U.S. will be the first nation to operate a propellant depot and on-orbit refueling service. Being able to refuel on orbit is key to an agile and fully reusable space transportation system. The United States will be the first to prove this technology and offer it as a commercial service to others.

The U.S. will be the first nation to operate a private space station. A thriving space economy must provide broad, affordable access to space across society, and it must have ordinary citizens living and working there permanently. As someone deeply knowledgeable about the hotel industry, the president-elect might understand the value of a U.S.-branded orbital tower.

The U.S. will operate the first fleet of fully reusable launch vehicles. Central to assured access for our citizenry is the ability to come and go to space with aircraft-like operations. A fully reusable architecture, technically feasible but never championed by the government, makes private spaceflight and even greater projects possible. America will provide the transportation system that fuels the larger global ecosystem of innovation.

The U.S. will build the first profitable solar power satellite. No single innovation in space could be as transformational as unlocking the vast potential of space-based solar energy generation to power Earth’s electrical needs; that could provide the hundreds of terawatts of renewable energy necessary to provide first-world living standards to the entire planet in a green and environmentally sustainable manner. The logistics system to create this space-power grid would require moving millions of metric tons of satellites to geostationary orbit and, consequently, will be orders of magnitude larger than any envisioned government-centric space program.

The U.S. will build the first comprehensive system to defend Earth from hazardous asteroids and comets. This planetary defense capability will start small, providing adequate defense against both 50-meter and 300-meter diameter objects with years of advance warning, and will be built out to provide comprehensive protection against extinction-level events. The United States will design, construct, and seek to test this capability within the current administration, and aim to maintain a standby global defense capability soon thereafter.

The U.S. will fly the first mission to another star. Interstellar spaceflight will be the ultimate expression of humanity mastering space travel. The American people must be the first to be ready.

This list of goals sounds audacious, perhaps outrageous, but it is entirely within the capability and character of the people who built the Transcontinental Railroad and the Hoover Dam and conquered a continent. Americans are leaders in every one of these fields. It is only necessary for the new President to unleash America’s potential; once unleashed, American innovators will move these dreams toward reality faster than anyone can imagine.

‘Bits & Watts’: integrating inexpensive energy sources into the electric grid

Bits & Watts initiative (credit: SLAC National Accelerator Laboratory)

Stanford University and DOE’s SLAC National Accelerator Laboratory launched today an initiative called “Bits & Watts” aimed at integrating low-carbon, inexpensive energy sources, like wind and solar, into the electric grid.

The interdisciplinary initiative hopes to develop “smart” technology that will bring the grid into the 21st century while delivering reliable, efficient, affordable power to homes and businesses.

That means you’ll be able to feed extra power from a home solar collector, for instance, into the grid — without throwing it off balance and triggering potential outages.

The three U.S. power grids (credit: Microsoft Encarta Encyclopedia)

A significant challenge. For starters, the U.S. electric grid is actually two giant, continent-spanning networks, plus a third, smaller network in Texas, that connect power sources and consumers via transmission lines. Each network runs like a single machine, with all its parts humming along at the same frequency, and their operators try to avoid unexpected surges and drops in power that could set off a chain reaction of disruptions and even wreck equipment or hurt people.

Remember the Northeast blackout of 2003, the second largest in history? It knocked out power for an estimated 45 million people in eight U.S. states and 10 million people in the Canadian province of Ontario, some for nearly a week.
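The “single machine” behavior can be sketched with a toy balance model: whenever generation and load are mismatched, the shared frequency drifts away from its 60 Hz setpoint. The model and all numbers below are purely illustrative (a real interconnection adds governor response, droop control, and much more):

```python
# Toy single-bus model: grid frequency drifts when generation and
# load are mismatched. All numbers are invented for illustration.

def simulate_frequency(gen_mw, load_mw, inertia_mws_per_hz=5000.0,
                       f0=60.0, dt=1.0, steps=10):
    """Return the frequency trajectory over `steps` time steps.
    A surplus of generation raises frequency; a deficit lowers it,
    scaled by the system's rotating inertia."""
    f = f0
    trajectory = []
    for _ in range(steps):
        imbalance_mw = gen_mw - load_mw   # surplus (+) or deficit (-)
        f += imbalance_mw / inertia_mws_per_hz * dt
        trajectory.append(round(f, 4))
    return trajectory

balanced = simulate_frequency(10000, 10000)  # holds steady at 60 Hz
deficit = simulate_frequency(9900, 10000)    # frequency sags step by step
```

The point of the sketch is that every generator on the network sees the same frequency, so one region’s imbalance is everyone’s problem — which is why a single failure can cascade the way it did in 2003.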

“The first challenge was to bring down the cost of wind, solar and other forms of distributed power. The next challenge is to create an integrated system. We must develop the right technologies, financial incentives and investment atmosphere to take full advantage of the lowering costs of clean energy.” — Steven Chu, a Stanford professor, Nobel laureate, former U.S. Energy Secretary, and one of the founding researchers of Bits & Watts. (credit: U.S. Department of Energy)

“Today’s electric grid is … an incredibly complex and finely balanced ecosystem that’s designed to handle power flows in only one direction — from centralized power plants to the consumer,” explained Arun Majumdar, a Stanford professor of mechanical engineering who co-directs both Bits & Watts and the university’s Precourt Institute for Energy, which oversees the initiative.

“As we incorporate more low-carbon, highly variable sources like wind and solar — including energy generated, stored and injected back into the grid by individual consumers — we’ll need a whole new set of tools, from computing and communications to controls and data sciences, to keep the grid stable, efficient and secure and provide affordable electricity.”

Coordination and integration of transmission and distribution systems  (credit: SLAC National Accelerator Laboratory)

The initiative also plans to develop market structures, regulatory frameworks, business models and pricing mechanisms that are crucial for making the grid run smoothly, working with industry and policymakers to identify and solve problems that stand in the way of grid modernization.

(Three bigger grid problems the Stanford announcement today didn’t mention: a geomagnetic solar storm-induced Carrington event, an EMP attack, and a grid cyber attack.)

Simulating the Grid in the Lab

Sila Kiliccote, head of SLAC’s GISMo (Grid Integration, Systems and Mobility) lab, and Stanford graduate student Gustavo Cezar look at a computer dashboard showing how appliances, batteries, lighting and other systems in a “home hub” network could be turned on and off in response to energy prices, consumer preferences and demands on the grid. The lab is part of the Bits & Watts initiative. (credit: SLAC National Accelerator Laboratory)

Researchers will develop ways to use digital sensors and controls to collect data from millions of sources, from rooftop solar panels to electric car charging stations, wind farms, factory operations and household appliances and thermostats, and provide the real-time feedback grid operators need to seamlessly incorporate variable sources of energy and automatically adjust power distribution to customers.

All of the grid-related software developed by Bits & Watts will be open source, so it can be rapidly adopted by industry and policymakers and used by other researchers.

The initiative includes research projects that will:

  • Simulate the entire smart grid, from central power plants to networked home appliances (Virtual Megagrid).
  • Analyze data on electricity use, weather, geography, demographic patterns, and other factors to get a clear understanding of customer behavior via an easy-to-understand graphical interface (VISDOM).
  • Develop a “home hub” system that controls and monitors a home’s appliances, heating and cooling and other electrical demands and can switch them on and off in response to fluctuating electricity prices, demands on the power grid, and the customer’s needs (Powernet).
  • Gather vast and growing sources of data from buildings, rooftop solar modules, electric vehicles, utility equipment, energy markets and so on, and analyze it in real time to dramatically improve the operation and planning of the electricity grid (VADER). This project will incorporate new data science tools such as machine learning, and validate those tools using data from utilities and industry.
  • Create a unique data depository for the electricity ecosystem (DataCommons).
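As a rough sketch of the kind of price-responsive switching a “home hub” like Powernet might perform, consider a planner that runs each flexible device only when the current electricity price is at or below its owner-set threshold. Device names and thresholds here are invented for illustration:

```python
# Hypothetical "home hub" planner: essential devices always run;
# flexible devices wait for the price to drop to their threshold.
# Device names and thresholds are invented for illustration.

def plan_switching(price_cents_per_kwh, devices):
    """Return an on/off plan for each device at the given price."""
    plan = {}
    for name, info in devices.items():
        if info["essential"]:
            plan[name] = "on"
        else:
            plan[name] = "on" if price_cents_per_kwh <= info["max_price"] else "off"
    return plan

devices = {
    "refrigerator": {"essential": True, "max_price": None},
    "ev_charger":   {"essential": False, "max_price": 12},  # cents/kWh
    "water_heater": {"essential": False, "max_price": 18},
}

peak_plan = plan_switching(25, devices)     # pricey: only essentials run
offpeak_plan = plan_switching(10, devices)  # cheap: everything runs
```

A real system would layer on consumer preferences and grid signals, but the core idea is the same: shift flexible load toward the hours when clean power is abundant and cheap.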

Through the Grid Modernization Initiative, initial Bits & Watts projects are being funded for a combined $8.6 million from two DOE programs, the Advanced Research Projects Agency-Energy (ARPA-E) and the Grid Modernization Laboratory Consortium; $2.2 million from the California Energy Commission; and $1.6 million per year from industrial members, including China State Grid, PG&E (Pacific Gas & Electric), innogy SE (formerly RWE), Schneider Electric and Meidensha Corp.

 

Musk’s new master plan for Tesla

Tesla Autopilot (credit: Tesla Motors)

Elon Musk revealed his new master plan for Tesla today (July 20) in a blog post published on Tesla’s website:

  • Create stunning solar roofs with seamlessly integrated battery storage.
  • Expand the electric vehicle product line to address all major segments.
  • Develop a self-driving capability that is 10X safer than manual via massive fleet learning.
  • Enable your car to make money for you when you aren’t using it.

Increasing safety: “morally reprehensible to delay”

In the context of the recent Autopilot problem, Musk clarified why Tesla is deploying partial autonomy now, rather than waiting until some point in the future: “When used correctly, it is already significantly safer than a person driving by themselves and it would therefore be morally reprehensible to delay release simply for fear of bad press or some mercantile calculation of legal liability.

“According to the recently released 2015 NHTSA report, automotive fatalities increased by 8% to one death every 89 million miles. Autopilot miles will soon exceed twice that number and the system gets better every day. It would no more make sense to disable Tesla’s Autopilot, as some have called for, than it would to disable autopilot in aircraft, after which our system is named.”

Another way to increase safety, he says, is new heavy-duty trucks and high passenger-density urban transport, both planned for unveiling next year. “With the advent of autonomy, it will probably make sense to shrink the size of buses and transition the role of bus driver to that of fleet manager. … Traffic congestion would improve due to increased passenger areal density by eliminating the center aisle and putting seats where there are currently entryways, and matching acceleration and braking to other vehicles, thus avoiding the inertial impedance to smooth traffic flow of traditional heavy buses. It would also take people all the way to their destination.”

Lowering the cost of an autonomous car

Musk said that when true self-driving is approved by regulators, “it will mean that you will be able to summon your Tesla from pretty much anywhere. Once it picks you up, you will be able to sleep, read, or do anything else en route to your destination.

“You will also be able to add your car to the Tesla shared fleet just by tapping a button on the Tesla phone app and have it generate income for you while you’re at work or on vacation, significantly offsetting and at times potentially exceeding the monthly loan or lease cost. This dramatically lowers the true cost of ownership to the point where almost anyone could own a Tesla. Since most cars are only in use by their owner for 5% to 10% of the day, the fundamental economic utility of a true self-driving car is likely to be several times that of a car which is not.”
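Musk’s utilization argument is easy to check with back-of-envelope arithmetic. Every figure below is invented for illustration; none comes from Tesla:

```python
# Back-of-envelope check of the fleet-sharing economics, using
# made-up example figures (hypothetical, not Tesla's numbers).

monthly_loan = 900.0         # hypothetical monthly loan payment, $
owner_use_fraction = 0.07    # owners use cars ~5-10% of the day
idle_hours_per_day = 24 * (1 - owner_use_fraction)  # ~22.3 hours idle
fleet_hours_per_day = 6.0    # suppose ~a quarter of idle time is rented
net_income_per_hour = 5.0    # hypothetical net $/hour after Tesla's cut

monthly_income = fleet_hours_per_day * net_income_per_hour * 30
effective_cost = monthly_loan - monthly_income
# With these (made-up) numbers, fleet income fully offsets the payment.
```

The structure of the argument, not the specific numbers, is what matters: a car that sits idle 90+ percent of the time has many hours available to earn income against its fixed cost.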

Musk said that in cities where demand exceeds the supply of customer-owned cars, “Tesla will operate its own fleet, ensuring you can always hail a ride from us no matter where you are.”

The top 10 emerging technologies of 2016

(credit: WEF)

The World Economic Forum’s annual list of this year’s breakthrough technologies, published today, includes a “socially aware” open AI ecosystem, grid-scale energy storage, perovskite solar cells, and other technologies with the potential to “transform industries, improve lives, and safeguard the planet.” The WEF’s specific interest is to “close gaps in investment and regulation.”

“Horizon scanning for emerging technologies is crucial to staying abreast of developments that can radically transform our world, enabling timely expert analysis in preparation for these disruptors. The global community needs to come together and agree on common principles if our society is to reap the benefits and hedge the risks of these technologies,” said Bernard Meyerson, PhD, Chief Innovation Officer of IBM and Chair of the WEF’s Meta-Council on Emerging Technologies.

The list also provides an opportunity to debate human, societal, economic or environmental risks and concerns that the technologies may pose — prior to widespread adoption.

One of the criteria used by council members during their deliberations was the likelihood that 2016 represents a tipping point in the deployment of each technology. So the list includes some technologies that have been known for a number of years, but are only now reaching a level of maturity where their impact can be meaningfully felt.

The top 10 technologies that make this year’s list are:

  1. Nanosensors and the Internet of Nanothings  — With the Internet of Things expected to comprise 30 billion connected devices by 2020, one of the most exciting areas of focus is nanosensors capable of circulating in the human body or being embedded in construction materials. They could use DNA and proteins to recognize specific chemical targets, store a few bits of information, and then report their status by changing color or emitting some other easily detectable signal.
  2. Next-Generation Batteries — One of the greatest obstacles holding renewable energy back is matching supply with demand, but recent advances in energy storage using sodium-, aluminum-, and zinc-based batteries make mini-grids feasible, providing clean, reliable, around-the-clock energy to entire villages.
  3. The Blockchain — With venture investment related to the online currency Bitcoin exceeding $1 billion in 2015 alone, the economic and social impact of blockchain’s potential to fundamentally change the way markets and governments work is only now emerging.
  4. 2D Materials — Plummeting production costs mean that 2D materials like graphene are emerging in a wide range of applications, from air and water filters to new generations of wearables and batteries.
  5. Autonomous Vehicles — The potential of self-driving vehicles for saving lives, cutting pollution, boosting economies, and improving quality of life for the elderly and other segments of society has led to rapid deployment of key technology forerunners along the way to full autonomy.
  6. Organs-on-chips — Miniature models of human organs could revolutionize medical research and drug discovery by allowing researchers to observe the behavior of biological mechanisms in ways never before possible.
  7. Perovskite Solar Cells — This new photovoltaic material offers three improvements over the classic silicon solar cell: it is easier to make, can be used virtually anywhere and, to date, keeps on generating power more efficiently.
  8. Open AI Ecosystem — Shared advances in natural language processing and social awareness algorithms, coupled with an unprecedented availability of data, will soon allow smart digital assistants to help with a vast range of tasks, from keeping track of one’s finances and health to advising on wardrobe choice.
  9. Optogenetics — Recent developments mean light can now be delivered deeper into brain tissue, something that could lead to better treatment for people with brain disorders.
  10. Systems Metabolic Engineering — Advances in synthetic biology, systems biology, and evolutionary engineering mean that the list of building block chemicals that can be manufactured better and more cheaply by using plants rather than fossil fuels is growing every year.

To compile this list, the World Economic Forum’s Meta-Council on Emerging Technologies, a panel of global experts, “drew on the collective expertise of the Forum’s communities to identify the most important recent technological trends. By doing so, the Meta-Council aims to raise awareness of their potential and contribute to closing gaps in investment, regulation and public understanding that so often thwart progress.”

You can read 10 expert views on these technologies here or download the series as a PDF.

What happens when drones and people sync their vision?

Multiple recon drones in the sky all suddenly aim their cameras at a person of interest on the ground, synced to what observers on the ground see …

That could be a reality soon, thanks to an agreement just announced by the mysterious SICdrone, an unmanned aircraft system manufacturer, and CrowdOptic, an “interactive streaming platform that connects the world through smart devices.”

A CrowdOptic “cluster” — multiple people focused on the same object.  (credit: CrowdOptic)

CrowdOptic’s technology lets a “cluster” (multiple people or objects) point their cameras or smartphones at the same thing (say, at a concert or sporting event), with different views, allowing for group chat or sharing content.
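One simplified way to detect such a cluster (our illustration, not CrowdOptic’s patented method) is to test whether each device’s compass heading points, within a tolerance, at a shared point of interest:

```python
import math

# Sketch of cluster detection: a set of devices forms a "cluster" on a
# target if every device's heading is aimed (within a tolerance) at it.
# This is a simplified illustration, not CrowdOptic's patented method.

def bearing_deg(p_from, p_to):
    """Bearing in degrees from one (x, y) point to another."""
    dx, dy = p_to[0] - p_from[0], p_to[1] - p_from[1]
    return math.degrees(math.atan2(dy, dx)) % 360

def is_cluster(devices, target, tolerance_deg=10.0):
    """True if every (position, heading) device points at the target."""
    for pos, heading in devices:
        # Smallest angular difference between heading and target bearing.
        diff = abs((bearing_deg(pos, target) - heading + 180) % 360 - 180)
        if diff > tolerance_deg:
            return False
    return True

# Two drones at different positions, both aimed at the point (5, 5):
drones = [((0, 0), 45.0), ((10, 0), 135.0)]
```

Running `is_cluster(drones, (5, 5))` returns `True`; turn either camera away and the cluster dissolves. Coordinating multiple drones on one target is essentially this check run continuously, with the drones steered to keep it true.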

Drone air control

For SICdrone, the idea is to use CrowdOptic tech to automatically orchestrate the drones’ onboard cameras to track and capture multiple camera angles (and views) of a single point of interest.* Beyond that, this tech could provide vital flight-navigation systems to coordinate multiple drones without having them conflict (or crash), says CrowdOptic CEO Jon Fisher.

This disruptive innovation might become essential (and mandated by law?) as Amazon, Flirtey, and others compete to dominate drone delivery. It could also help address the growing concern about drone risk to airplanes.**

Other current (and possible) uses of CrowdOptic tech include first responders, news and sports reporting, advertising analytics (seeing what people focus on), linking up augmented-reality and VR headset users, and “social TV” (live attendees — using the Periscope app, for example — provide the most interesting video to people watching at home), Fisher explained to KurzweilAI.

* This uses several CrowdOptic patents (U.S. Patents 8,527,340, 9,020,832, and 9,264,474).

** Drone Comes Within 200 Feet Of Passenger Jet Coming In To Land At LAX

Creating custom drugs on a portable refrigerator-size device

This device built by MIT researchers can be reconfigured to manufacture several different types of pharmaceuticals (credit: courtesy of the researchers)

MIT researchers have developed a compact, portable pharmaceutical manufacturing system that can be reconfigured to produce a variety of drugs on demand — if you have the right chemicals.

The device could be rapidly deployed to produce drugs needed to handle an unexpected disease outbreak, to prevent a drug shortage caused by a manufacturing plant shutdown, or to produce small quantities of drugs needed for clinical trials or to treat rare diseases, the researchers say.

Traditional “batch processing” drug manufacturing can take weeks or months. Active pharmaceutical ingredients are synthesized in chemical manufacturing plants and then shipped to other sites to be converted into a form that can be given to patients, such as tablets, drug solutions, or suspensions.

With research funded by DARPA’s Make-It program, the researchers built a prototype system that can produce four drugs formulated as solutions or suspensions: Benadryl, lidocaine, Valium, and Prozac. Using this apparatus, they can manufacture about 1,000 doses of a given drug in 24 hours.

The key to the new system: chemical reactions that can take place as the reactants flow through relatively small tubes as opposed to the huge vats in which most pharmaceutical reactions now take place. Traditional batch processing is limited by the difficulty of cooling these vats, but the flow system allows reactions that produce a great deal of heat to be run safely.*
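The cooling advantage of narrow tubes follows from geometry: a cylinder’s surface-to-volume ratio scales as 2/r, so shrinking the radius multiplies the relative area available for heat removal. The dimensions below are illustrative guesses, not the MIT system’s:

```python
# Geometry behind flow chemistry's cooling advantage: a cylinder's
# lateral surface-to-volume ratio is 2/r, so a narrow tube exposes far
# more cooling area per unit of reacting liquid than a large vat.
# Dimensions are illustrative guesses, not the MIT system's.

def surface_to_volume(radius_m):
    """Lateral area / volume for a cylinder: (2*pi*r*L)/(pi*r^2*L) = 2/r."""
    return 2.0 / radius_m

tube = surface_to_volume(0.001)  # 1 mm radius reactor tubing
vat = surface_to_volume(1.0)     # 1 m radius batch vessel
ratio = tube / vat               # narrow tube wins by a large factor
```

With these example dimensions the tube has 1,000 times more cooling surface per unit volume, which is why strongly exothermic reactions that would be hazardous in a vat can be run safely in flow.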

Personalized “orphan drugs”

One of the advantages of this small-scale system is that it could be used to make small amounts of drugs that would be prohibitively expensive to make in a large-scale plant. This would be useful for “orphan drugs” — drugs needed by a small number of patients. “Sometimes it’s very difficult to get those drugs, because economically it makes no sense to have a huge production operation for those,” says Klavs Jensen, the Warren K. Lewis Professor of Chemical Engineering at MIT and a senior author of a paper describing the new system in the March 31 online edition of Science.

The researchers are now working on the second phase of the project, which includes making the system about 40 percent smaller and producing drugs whose chemical syntheses are more complex. They are also working on producing tablets, which are more complicated to manufacture than liquid drugs.

*The chemical reactions required to synthesize each drug take place in the first of two modules. The reactions were designed so that they can take place at temperatures up to 250 degrees Celsius and pressures up to 17 atmospheres. By swapping in different module components, the researchers can easily reconfigure the system to produce different drugs. “Within a few hours we could change from one compound to the other,” Jensen says.

In the second module, the crude drug solution is purified by crystallization, filtered, and dried to remove solvent, then dissolved or suspended in water as the final dosage form. The researchers also incorporated an ultrasound monitoring system that ensures the formulated drug solution is at the correct concentration.


Abstract of On-demand continuous-flow production of pharmaceuticals in a compact, reconfigurable system

Pharmaceutical manufacturing typically uses batch processing at multiple locations. Disadvantages of this approach include long production times and the potential for supply chain disruptions. As a preliminary demonstration of an alternative approach, we report here the continuous-flow synthesis and formulation of active pharmaceutical ingredients in a compact, reconfigurable manufacturing platform. Continuous end-to-end synthesis in the refrigerator-sized [1.0 meter (width) × 0.7 meter (length) × 1.8 meter (height)] system produces sufficient quantities per day to supply hundreds to thousands of oral or topical liquid doses of diphenhydramine hydrochloride, lidocaine hydrochloride, diazepam, and fluoxetine hydrochloride that meet U.S. Pharmacopeia standards. Underlying this flexible plug-and-play approach are substantial enabling advances in continuous-flow synthesis, complex multistep sequence telescoping, reaction engineering equipment, and real-time formulation.