Disturbing video depicts near-future ubiquitous lethal autonomous weapons

Campaign to Stop Killer Robots | Slaughterbots

In response to growing concerns about autonomous weapons, the Campaign to Stop Killer Robots, a coalition of AI researchers and advocacy organizations, has released a fictional video that depicts a disturbing future in which lethal autonomous weapons have become cheap and ubiquitous worldwide.

UC Berkeley AI researcher Stuart Russell presented the video earlier this week at an event hosted by the Campaign to Stop Killer Robots at the United Nations Convention on Certain Conventional Weapons in Geneva. In an appearance at the end of the video, Russell warns that the technology described in the film already exists* and that the window to act is closing fast.

Support for a ban on autonomous weapons has been mounting. On Nov. 2, more than 200 Canadian scientists and more than 100 Australian scientists in academia and industry penned open letters to Canadian Prime Minister Justin Trudeau and Australian Prime Minister Malcolm Turnbull urging them to support the ban.

Earlier this summer, more than 130 leaders of AI companies signed a letter in support of this week’s discussions. These letters follow a 2015 open letter released by the Future of Life Institute and signed by more than 20,000 AI/robotics researchers and others, including Elon Musk and Stephen Hawking.

“Many of the world’s leading AI researchers worry that if these autonomous weapons are ever developed, they could dramatically lower the threshold for armed conflict, ease and cheapen the taking of human life, empower terrorists, and create global instability,” according to an article published by the Future of Life Institute, which funded the video. “The U.S. and other nations have used drones and semi-automated systems to carry out attacks for several years now, but fully removing a human from the loop is at odds with international humanitarian and human rights law.”

“The Campaign to Stop Killer Robots is not trying to stifle innovation in artificial intelligence and robotics, and it does not wish to ban autonomous systems in the civilian or military world,” explained Noel Sharkey of the International Committee for Robot Arms Control. “Rather, we see an urgent need to prevent automation of the critical functions for selecting targets and applying violent force without human deliberation, and to ensure meaningful human control for every attack.”

For more information about autonomous weapons:

* As suggested in this U.S. Department of Defense video:

Perdix Drone Swarm – Fighters Release Hive-mind-controlled Weapon UAVs in Air | U.S. Naval Air Systems Command

Why futurist Ray Kurzweil isn’t worried about technology stealing your job — Fortune

1985: Ray Kurzweil looks on as Stevie Wonder experiences the Kurzweil 250, the first synthesizer to accurately reproduce the sounds of the piano — replacing piano-maker jobs but adding many more jobs for musicians (credit: Kurzweil Music Systems)

Last week, Fortune magazine asked Ray Kurzweil to comment on some often-expressed questions about the future.

Does AI pose an existential threat to humanity?

Kurzweil sees the future as nuanced, notes writer Michal Lev-Ram. “A lot of people think things are getting worse, partly because that’s actually an evolutionary adaptation,” Kurzweil said. “It’s very important for your survival to be sensitive to bad news. … I think if you look at history, though, we’re being helped [by new technology] more than we’re being hurt.”

How will artificial intelligence and other technologies impact jobs?

“We have already eliminated all jobs several times in human history,” said Kurzweil, pointing out that “for every job we eliminate, we’re going to create more jobs at the top of the skill ladder. … You can’t describe the new jobs, because they’re in industries and concepts that don’t exist yet.”

Why are we so bad at predicting certain things? For example, Donald Trump winning the presidency?

Kurzweil: “He’s not technology.”

Read Fortune article here.

‘Fog computing’ could improve communications during natural disasters

Hurricane Irma at peak intensity near the U.S. Virgin Islands on September 6, 2017 (credit: NOAA)

Researchers at the Georgia Institute of Technology have developed a system that uses edge computing (also known as fog computing) to deal with the loss of internet access in natural disasters such as hurricanes, tornados, and floods.

The idea is to create an ad hoc decentralized network that uses computing power built into mobile phones, routers, and other hardware to provide actionable data to emergency managers and first responders.

In a flooded area, for example, search and rescue personnel could continuously ping enabled phones, surveillance cameras, and “internet of things” devices in an area to determine their exact locations. That data could then be used to create density maps of people to prioritize and guide emergency response teams.
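
As a toy illustration of that last step — turning raw pings into a people-density map — here is a minimal Python sketch. The coordinates, grid size, and function name are my own assumptions, not details from the Georgia Tech system:

```python
from collections import Counter

def density_map(device_locations, cell_deg=0.001):
    """Bin pinged (lat, lon) fixes into grid cells; higher counts suggest
    more people. A cell of ~0.001 degrees is roughly 100 m on a side
    (an illustrative choice, not from the paper)."""
    return Counter(
        (int(lat / cell_deg), int(lon / cell_deg))
        for lat, lon in device_locations
    )

# Hypothetical pings: three devices clustered on one flooded block, one elsewhere
pings = [(29.7604, -95.3698), (29.7605, -95.3699),
         (29.7604, -95.3697), (29.7800, -95.3500)]
cell, count = density_map(pings).most_common(1)[0]
print(count)  # 3 devices in the densest cell -> prioritize that area
```

A real deployment would of course weight by device type and ping freshness; the point is only that simple local aggregation yields actionable density data without any cloud round trip.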

Situational awareness for first responders

“We believe fog computing can become a potent enabler of decentralized, local social sensing services that can operate when internet connectivity is constrained,” said Kishore Ramachandran, PhD, computer science professor at Georgia Tech and senior author of a paper presented in April this year at the 2nd International Workshop on Social Sensing*.

“This capability will provide first responders and others with the level of situational awareness they need to make effective decisions in emergency situations.”

The team has proposed a generic software architecture for social sensing applications that is capable of exploiting the fog-enabled devices. The design has three components: a central management function that resides in the cloud, a data processing element placed in the fog infrastructure, and a sensing component on the user’s device.
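
A toy sketch of how those three roles might divide. The class and method names are my invention — the paper specifies an architecture, not this code:

```python
class DeviceSensor:
    """On-device component: produces raw readings (e.g., GPS fixes)."""
    def __init__(self, device_id):
        self.device_id = device_id

    def read(self):
        return {"device": self.device_id, "loc": (29.76, -95.37)}  # stubbed fix


class FogNode:
    """Fog-layer component: aggregates nearby devices, works without internet."""
    def __init__(self):
        self.readings = []

    def collect(self, reading):
        self.readings.append(reading)               # processed locally, offline

    def summarize(self):
        return {"device_count": len(self.readings)} # e.g., local people density


class CloudManager:
    """Cloud component: global coordination once connectivity is available."""
    def __init__(self):
        self.summaries = []

    def ingest(self, summary):
        self.summaries.append(summary)


fog = FogNode()
for i in range(3):
    fog.collect(DeviceSensor(f"phone-{i}").read())

cloud = CloudManager()
cloud.ingest(fog.summarize())   # uploaded only when a link to the cloud exists
```

The design point is that the fog node can keep collecting and summarizing even when the cloud tier is unreachable — exactly the disaster scenario the article describes.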

Beyond emergency response during natural disasters, the team believes its proposed fog architecture can also benefit communities with limited or no internet access — for public transportation management, job recruitment, and housing, for example.

To monitor far-flung devices in areas with no internet access, a bus or other vehicle could be outfitted with fog-enabled sensing capabilities, the team suggests. As it travels in remote areas, it would collect data from sensing devices. Once in range of internet connectivity, the “data mule” bus would upload that information to centralized cloud-based platforms.
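
The “data mule” pattern reduces to a few lines — buffer while offline, flush when a link appears. The class name and sample readings here are hypothetical:

```python
class DataMule:
    """Vehicle-mounted collector: buffers readings gathered while offline,
    then flushes them to the cloud when connectivity returns. A sketch of
    the idea in the article, not code from the Georgia Tech project."""
    def __init__(self):
        self.buffer = []

    def collect(self, reading):
        self.buffer.append(reading)      # gathered along a remote route

    def on_connectivity(self, upload):
        while self.buffer:               # back in range: drain in arrival order
            upload(self.buffer.pop(0))


mule = DataMule()
for reading in ["well-3: ok", "pump-7: low", "bridge-2: flooded"]:
    mule.collect(reading)

cloud = []
mule.on_connectivity(cloud.append)
print(cloud)  # all three readings delivered, in order
```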

* “Social sensing has emerged as a new paradigm for collecting sensory measurements by means of “crowd-sourcing” sensory data collection tasks to a human population. Humans can act as sensor carriers (e.g., carrying GPS devices that share location data), sensor operators (e.g., taking pictures with smart phones), or as sensors themselves (e.g., sharing their observations on Twitter). The proliferation of sensors in the possession of the average individual, together with the popularity of social networks that allow massive information dissemination, heralds an era of social sensing that brings about new research challenges and opportunities in this emerging field.” — SocialSens2017

Leading AI country will be ‘ruler of the world,’ says Putin

DoD autonomous drone swarms concept (credit: U.S. Dept. of Defense)

Russian President Vladimir Putin warned Friday (Sept. 1, 2017) that the country that becomes the leader in developing artificial intelligence will be “the ruler of the world,” reports the Associated Press.

AI development “raises colossal opportunities and threats that are difficult to predict now,” Putin said in a lecture to students, warning that “it would be strongly undesirable if someone wins a monopolist position.”

Future wars will be fought by autonomous drones, Putin suggested, and “when one party’s drones are destroyed by drones of another, it will have no other choice but to surrender.”

U.N. urged to address lethal autonomous weapons

AI experts worldwide are also concerned. On August 20, 116 founders of robotics and artificial intelligence companies from 26 countries, including Elon Musk* and Google DeepMind’s Mustafa Suleyman, signed an open letter asking the United Nations to “urgently address the challenge of lethal autonomous weapons (often called ‘killer robots’) and ban their use internationally.”

“Lethal autonomous weapons threaten to become the third revolution in warfare,” the letter states. “Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close.”

Unfortunately, the box may have already been opened. Three examples:

Russia. In 2014, Dmitry Andreyev of the Russian Strategic Missile Forces announced that mobile robots would be standing guard over five ballistic missile installations, New Scientist reported. Armed with a heavy machine gun, this “mobile robotic complex … can detect and destroy targets, without human involvement.”

Uran-9 unmanned combat ground vehicle (credit: Vitaly V. Kuzmin/CC)

In 2016, Russian military equipment manufacturer JSC 766 UPTK announced what appears to be the commercial version: the Uran-9 multipurpose unmanned ground combat vehicle. “In autonomous mode, the vehicle can automatically identify, detect, track and defend [against] enemy targets based on the pre-programmed path set by the operator,” the company said.

United States. In a 2016 report, the U.S. Department of Defense advocated self-organizing “autonomous unmanned” (UA) swarms of small drones that would assist frontline troops in real time by surveillance, jamming/spoofing enemy electronics, and autonomously firing against the enemy.

The authors warned that “autonomy — fueled by advances in artificial intelligence — has attained a ‘tipping point’ in value. Autonomous capabilities are increasingly ubiquitous and are readily available to allies and adversaries alike.” The report advised that the Department of Defense “must take immediate action to accelerate its exploitation of autonomy while also preparing to counter autonomy employed by adversaries.”**

South Korea. Initially developed for the DMZ, the Super aEgis II, a robot-sentry machine gun made by Dodaam Systems, can identify, track, and automatically destroy a human target 3 kilometers away, assuming that capability is turned on.

* “China, Russia, soon all countries w strong computer science. Competition for AI superiority at national level most likely cause of WW3 imo.” — Elon Musk tweet 2:33 AM – 4 Sep 2017

** While it doesn’t use AI, the U.S. Navy’s computer-controlled, radar-guided Phalanx gun system can automatically detect, track, evaluate, and fire at incoming missiles and aircraft that it judges to be a threat.

UPDATE Sept. 5, 2017: Added Musk tweet in footnote

Elon Musk wants to enhance us as superhuman cyborgs to deal with superintelligent AI

(credit: Neuralink Corp.)

It’s the year 2021. A quadriplegic patient has just had one million “neural lace” microparticles injected into her brain, making her the world’s first human with an internet communication system using a wireless implanted brain-machine interface — and the first superhuman cyborg. …

No, this is not a science-fiction movie plot. It’s the actual first public step — just four years from now — in Tesla CEO Elon Musk’s business plan for his latest new venture, Neuralink. It’s now explained for the first time on Tim Urban’s WaitButWhy blog.

Dealing with the superintelligence existential risk

Such a system would allow for radically improved communication between people, Musk believes. But for Musk, the big concern is AI safety. “AI is obviously going to surpass human intelligence by a lot,” he says. “There’s some risk at that point that something bad happens, something that we can’t control, that humanity can’t control after that point — either a small group of people monopolize AI power, or the AI goes rogue, or something like that.”

“This is what keeps Elon up at night,” says Urban. “He sees it as only a matter of time before superintelligent AI rises up on this planet — and when that happens, he believes that it’s critical that we don’t end up as part of ‘everyone else.’ That’s why, in a future world made up of AI and everyone else, he thinks we have only one good option: To be AI.”

Neural dust: an ultrasonic, low power solution for chronic brain-machine interfaces (credit: Swarm Lab/UC Berkeley)

To achieve this, Neuralink CEO Musk has met with more than 1,000 people, initially narrowing the field to eight experts, such as Paul Merolla, who spent the last seven years as lead chip designer at IBM on its DARPA-funded SyNAPSE program to design neuromorphic (brain-inspired) chips with 5.4 billion transistors (each chip implementing 1 million neurons and 256 million synapses), and Dongjin (DJ) Seo, who while at UC Berkeley designed “neural dust,” an ultrasonic backscatter system for powering and communicating with implanted bioelectronics to record brain activity.*

Mesh electronics being injected through sub-100 micrometer inner diameter glass needle into aqueous solution (credit: Lieber Research Group, Harvard University)

Becoming one with AI — a good thing?

Neuralink’s goal is to create a “digital tertiary layer” to augment the brain’s current cortex and limbic layers — a radical high-bandwidth, long-lasting, biocompatible, bidirectional, non-invasively implanted communication system made up of micron-size (millionth of a meter) particles communicating wirelessly via the cloud and internet to achieve super-fast communication speed and increased bandwidth (carrying more information).

“We’re going to have the choice of either being left behind and being effectively useless or like a pet — you know, like a house cat or something — or eventually figuring out some way to be symbiotic and merge with AI. … A house cat’s a good outcome, by the way.”

Thin, flexible electrodes mounted on top of a biodegradable silk substrate could provide a better brain-machine interface, as shown in this model. (credit: University of Illinois at Urbana-Champaign)

But machine intelligence is already vastly superior to human intelligence in specific areas (such as Google DeepMind’s AlphaGo), and its reasoning is often inexplicable. So how do we know superintelligence would have the best interests of humanity in mind?

“Just an engineering problem”

Musk’s answer: “If we achieve tight symbiosis, the AI wouldn’t be ‘other’ — it would be you and with a relationship to your cortex analogous to the relationship your cortex has with your limbic system.” OK, but then how does an inferior intelligence know when it’s achieved full symbiosis with a superior one — or when AI goes rogue?

Brain-to-brain (B2B) internet communication system: EEG signals representing two words were encoded into binary strings (left) by the sender (emitter) and sent via the internet to a receiver. The signal was then encoded as a series of transcranial magnetic stimulation-generated phosphenes detected by the visual occipital cortex, which the receiver then translated to words (credit: Carles Grau et al./PLoS ONE)

And what about experts in neuroethics, psychology, law? Musk says it’s just “an engineering problem. … If we can just use engineering to get neurons to talk to computers, we’ll have done our job, and machine learning can do much of the rest.”

However, it’s not clear how we could be assured our brains aren’t hacked, spied on, and controlled by a repressive government or by other humans — especially those with a more recently updated software version or covert cyborg hardware improvements.

NIRS/EEG brain-computer interface system using non-invasive near-infrared light for sensing “yes” or “no” thoughts, shown on a model (credit: Wyss Center for Bio and Neuroengineering)

In addition, the devices mentioned in WaitButWhy all require some form of neurosurgery, unlike Facebook’s research project to use non-invasive near-infrared light, as shown in this experiment, for example.** And getting implants for non-medical use approved by the FDA will be a challenge, to grossly understate it.

“I think we are about 8 to 10 years away from this being usable by people with no disability,” says Musk, optimistically. However, Musk does not lay out a technology roadmap for going further, as MIT Technology Review notes.

Nonetheless, Neuralink sounds awesome — it should lead to some exciting neuroscience breakthroughs. And Neuralink now has 16 San Francisco job listings here.

* Other experts: Vanessa Tolosa, Lawrence Livermore National Laboratory, one of the world’s foremost researchers on biocompatible materials; Max Hodak, who worked on the development of some groundbreaking BMI technology at Miguel Nicolelis’s lab at Duke University; Ben Rapoport, Neuralink’s neurosurgery expert, with a Ph.D. in electrical engineering and computer science from MIT; Tim Hanson, UC Berkeley post-doc and expert in flexible electrodes for stable, minimally invasive neural recording; Flip Sabes, professor at the UCSF School of Medicine and expert in cortical physiology, computational and theoretical modeling, and human psychophysics and physiology; and Tim Gardner, associate professor of biology at Boston University, whose lab works on implanting BMIs in birds to study “how complex songs are assembled from elementary neural units” and learn about “the relationships between patterns of neural activity on different time scales.”

** This binary experiment and the binary brain-to-brain (B2B) internet communication system mentioned above are the equivalents of the first binary (dot–dash) telegraph message, sent May 24, 1844: “What hath God wrought?”

Beneficial AI conference develops ‘Asilomar AI principles’ to guide future AI research

Beneficial AI conference (credit: Future of Life Institute)

At the Beneficial AI 2017 conference, held January 5–8 at a conference center in Asilomar, California — a sequel to the 2015 AI Safety conference in Puerto Rico — the Future of Life Institute (FLI) brought together more than 100 AI researchers from academia and industry, along with thought leaders in economics, law, ethics, and philosophy, to formulate principles of beneficial AI.

FLI hosted a two-day workshop for its grant recipients, followed by a 2.5-day conference, in which people from various AI-related fields hashed out opportunities and challenges related to the future of AI and steps we can take to ensure that the resulting technology is beneficial.

Beneficial AI conference participants (credit: Future of Life Institute)

The result was 23 Asilomar AI Principles, intended to suggest AI research guidelines, such as “The goal of AI research should be to create not undirected intelligence, but beneficial intelligence” and “An arms race in lethal autonomous weapons should be avoided”; identify ethics and values, such as safety and transparency; and address longer-term issues — notably, “Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.”

To date, 2,515 AI researchers and others are signatories of the Principles. The process is described here.

The conference location has historic significance. In 2009, the Association for the Advancement of Artificial Intelligence held the Asilomar Meeting on Long-Term AI Futures to address similar concerns. And in 1975, the Asilomar Conference on Recombinant DNA was held to discuss potential biohazards and regulation of emerging biotechnology.

The non-profit Future of Life Institute was founded in March 2014 by MIT cosmologist Max Tegmark, Skype co-founder Jaan Tallinn, DeepMind research scientist Viktoriya Krakovna, Boston University Ph.D. candidate in Developmental Sciences Meia Chita-Tegmark, and UCSC physicist Anthony Aguirre. Its mission is “to catalyze and support research and initiatives for safeguarding life and developing optimistic visions of the future, including positive ways for humanity to steer its own course considering new technologies and challenges.”

FLI’s scientific advisory board includes physicist Stephen Hawking, SpaceX CEO Elon Musk, Astronomer Royal Martin Rees, and UC Berkeley Professor of Computer Science/Smith-Zadeh Professor in Engineering Stuart Russell.

Future of Life Institute | Superintelligence: Science or Fiction? | Elon Musk & Other Great Minds

Elon Musk, Stuart Russell, Ray Kurzweil, Demis Hassabis, Sam Harris, Nick Bostrom, David Chalmers, Bart Selman, and Jaan Tallinn discuss with Max Tegmark (moderator) what likely outcomes might be if we succeed in building human-level AGI [artificial general intelligence] (and beyond), and also what we would like to happen.


‘Bits & Watts’: integrating inexpensive energy sources into the electric grid

Bits & Watts initiative (credit: SLAC National Accelerator Laboratory)

Stanford University and DOE’s SLAC National Accelerator Laboratory today launched an initiative called “Bits & Watts,” aimed at integrating low-carbon, inexpensive energy sources, like wind and solar, into the electric grid.

The interdisciplinary initiative hopes to develop “smart” technology that will bring the grid into the 21st century while delivering reliable, efficient, affordable power to homes and businesses.

That means you’ll be able to feed extra power from a home solar collector, for instance, into the grid — without throwing it off balance and triggering potential outages.

The three U.S. power grids (credit: Microsoft Encarta Encyclopedia)

A significant challenge. For starters, the U.S. electric grid is actually two giant, continent-spanning networks, plus a third, smaller network in Texas, that connect power sources and consumers via transmission lines. Each network runs like a single machine, with all its parts humming along at the same frequency, and their operators try to avoid unexpected surges and drops in power that could set off a chain reaction of disruptions and even wreck equipment or hurt people.

Remember the Northeast blackout of 2003, the second largest in history? It knocked out power for an estimated 45 million people in eight U.S. states and 10 million people in the Canadian province of Ontario, some for nearly a week.

“The first challenge was to bring down the cost of wind, solar and other forms of distributed power. The next challenge is to create an integrated system. We must develop the right technologies, financial incentives and investment atmosphere to take full advantage of the lowering costs of clean energy.” — Steven Chu, a Stanford professor, Nobel laureate, former U.S. Energy Secretary, and one of the founding researchers of Bits & Watts. (credit: U.S. Department of Energy)

“Today’s electric grid is … an incredibly complex and finely balanced ecosystem that’s designed to handle power flows in only one direction — from centralized power plants to the consumer,” explained Arun Majumdar, a Stanford professor of mechanical engineering who co-directs both Bits & Watts and the university’s Precourt Institute for Energy, which oversees the initiative.

“As we incorporate more low-carbon, highly variable sources like wind and solar — including energy generated, stored and injected back into the grid by individual consumers — we’ll need a whole new set of tools, from computing and communications to controls and data sciences, to keep the grid stable, efficient and secure and provide affordable electricity.”

Coordination and integration of transmission and distribution systems  (credit: SLAC National Accelerator Laboratory)

The initiative also plans to develop market structures, regulatory frameworks, business models and pricing mechanisms that are crucial for making the grid run smoothly, working with industry and policymakers to identify and solve problems that stand in the way of grid modernization.

(Three bigger grid problems the Stanford announcement today didn’t mention: a geomagnetic solar storm-induced Carrington event, an EMP attack, and a grid cyber attack.)

Simulating the grid in the lab

Sila Kiliccote, head of SLAC’s GISMo (Grid Integration, Systems and Mobility) lab, and Stanford graduate student Gustavo Cezar look at a computer dashboard showing how appliances, batteries, lighting and other systems in a “home hub” network could be turned on and off in response to energy prices, consumer preferences and demands on the grid. The lab is part of the Bits & Watts initiative. (credit: SLAC National Accelerator Laboratory)

Researchers will develop ways to use digital sensors and controls to collect data from millions of sources, from rooftop solar panels to electric car charging stations, wind farms, factory operations and household appliances and thermostats, and provide the real-time feedback grid operators need to seamlessly incorporate variable sources of energy and automatically adjust power distribution to customers.

All of the grid-related software developed by Bits & Watts will be open source, so it can be rapidly adopted by industry and policymakers and used by other researchers.

The initiative includes research projects that will:

  • Simulate the entire smart grid, from central power plants to networked home appliances (Virtual Megagrid).
  • Analyze data on electricity use, weather, geography, demographic patterns, and other factors to get a clear understanding of customer behavior via an easy-to-understand graphical interface (VISDOM).
  • Develop a “home hub” system that controls and monitors a home’s appliances, heating and cooling and other electrical demands and can switch them on and off in response to fluctuating electricity prices, demands on the power grid, and the customer’s needs (Powernet).
  • Gather vast and growing sources of data from buildings, rooftop solar modules, electric vehicles, utility equipment, energy markets and so on, and analyze it in real time to dramatically improve the operation and planning of the electricity grid (VADER). This project will incorporate new data science tools such as machine learning, and validate those tools using data from utilities and industry.
  • Create a unique data depository for the electricity ecosystem (DataCommons).
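
The Powernet “home hub” idea in the list above can be sketched as a simple price-threshold controller. The appliance names and price thresholds here are illustrative assumptions, not details from the project:

```python
def schedule_appliances(price_per_kwh, appliances):
    """Keep an appliance running if it is non-deferrable, or if the current
    electricity price is at or below what its owner is willing to pay.
    A sketch of price-responsive load control, not Powernet's actual logic."""
    return {
        name
        for name, (deferrable, max_price) in appliances.items()
        if not deferrable or price_per_kwh <= max_price
    }

home = {
    "refrigerator": (False, 0.00),   # must always run
    "ev_charger":   (True,  0.12),   # defer charging when power is pricey
    "dishwasher":   (True,  0.20),
}
print(sorted(schedule_appliances(0.15, home)))  # ['dishwasher', 'refrigerator']
```

A production system would also weigh grid signals and user schedules, but the core mechanism — shedding deferrable loads when prices spike — is this simple comparison.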

Through the Grid Modernization Initiative, initial Bits & Watts projects are being funded for a combined $8.6 million from two DOE programs, the Advanced Research Projects Agency-Energy (ARPA-E) and the Grid Modernization Laboratory Consortium; $2.2 million from the California Energy Commission; and $1.6 million per year from industrial members, including China State Grid, PG&E (Pacific Gas & Electric), innogy SE (formerly RWE), Schneider Electric and Meidensha Corp.


Mars-bound astronauts face brain damage from galactic cosmic ray exposure, says NASA-funded study

An (unshielded) view of Mars (credit: SpaceX)

A NASA-funded study of rodents exposed to highly energetic charged particles — similar to the galactic cosmic rays that will bombard astronauts during extended spaceflights — found that the rodents developed long-term memory deficits, anxiety, depression, and impaired decision-making (not to mention long-term cancer risk).

The study by University of California, Irvine (UCI) scientists* appeared Oct. 10 in Nature’s open-access Scientific Reports. It follows a study published last year in the May issue of the open-access Science Advances, which showed somewhat shorter-term brain effects of galactic cosmic rays.

The rodents were subjected to charged particle irradiation (ionized charged atomic nuclei from oxygen and titanium) at the NASA Space Radiation Laboratory at New York’s Brookhaven National Laboratory.

Digital imaging revealed a reduction of dendrites (green) and spines (red) on neurons of irradiated rodents, disrupting the transmission of signals among brain cells and thus impairing the brain’s neural network. Left: dendrites in unirradiated brains. Center: dendrites exposed to 0.05 Gy** ionized oxygen. Right: dendrites exposed to 0.30 Gy ionized oxygen. (credit: Vipan K. Parihar et al./Scientific Reports)

Six months after exposure, the researchers still found significant levels of brain inflammation and damage to neurons, poor performance on behavioral tasks designed to test learning and memory, and reduced “fear extinction” (an active process in which the brain suppresses prior unpleasant and stressful associations) — leading to elevated anxiety.

Similar types of more severe cognitive dysfunction (“chemo brain”) are common in brain cancer patients who have received high-dose, photon-based radiation treatments.

“The space environment poses unique hazards to astronauts,” said Charles Limoli, a professor of radiation oncology in UCI’s School of Medicine. “Exposure to these particles can lead to a range of potential central nervous system complications that can occur during and persist long after actual space travel. Many of these adverse consequences to cognition may continue and progress throughout life.”

NASA health hazards advisory

“During a 360-day round trip [to Mars], an astronaut would receive a dose of about 662 millisieverts (0.662 Gy) [twice the highest amount of radiation used in the UCI experiment with rodents] according to data from the Radiation Assessment Detector (RAD) … piggybacking on Curiosity,” said Cary Zeitlin, PhD, a principal scientist in Southwest Research Institute Space Science and Engineering Division and lead author of an article published in the journal Science in 2013. “In terms of accumulated dose, it’s like getting a whole-body CT scan once every five or six days [for a year],” he said in a NASA press release. There’s also the risk from increased radiation during periodic solar storms.
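
Zeitlin’s CT-scan comparison checks out with back-of-envelope arithmetic; the ~10 mSv figure for a whole-body CT scan is a commonly cited value that I am assuming here, not one given in the article:

```python
trip_dose_msv = 662      # Mars round-trip dose, per the RAD data quoted above
trip_days = 360
ct_scan_msv = 10         # typical whole-body CT dose -- assumed figure

daily_dose = trip_dose_msv / trip_days   # ~1.84 mSv per day in transit
days_per_ct = ct_scan_msv / daily_dose   # ~5.4 days per CT-equivalent
print(round(days_per_ct, 1))             # matches "once every five or six days"
```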

In addition, as dramatized in the movie The Martian (and explained in this analysis), there’s a radiation risk on the surface of Mars too — though less than in space, thanks to the atmosphere and to the planet itself blocking solar radiation at night.

In October 2015, the NASA Office of Inspector General issued a health hazards report related to space exploration, including a human mission to Mars.

“There’s going to be some risk of radiation, but it’s not deadly,” claimed SpaceX CEO Elon Musk Sept. 27 in an announcement of plans to establish a permanent, self-sustaining civilization of a million people on Mars (with an initial flight as soon as 2024). “There will be some slightly increased risk of cancer, but I think it’s relatively minor. … Are you prepared to die? If that’s OK, you’re a candidate for going.”

Sightseers expose themselves to galactic cosmic radiation on Europa, a moon of Jupiter, shown in the background (credit: SpaceX)

Not to be one-upped by Musk, President Obama said in an op-ed on the CNN blog on Oct. 11 (perhaps channeling JFK) that “we have set a clear goal vital to the next chapter of America’s story in space: sending humans to Mars by the 2030s and returning them safely to Earth, with the ultimate ambition to one day remain there for an extended time.”

In a follow-up explainer, NASA Administrator Charles Bolden and John Holdren, Director of the White House Office of Science and Technology Policy, noted that in August, NASA selected six companies (under the Next Space Technologies for Exploration Partnerships-2 (NextSTEP-2) program) to produce ground prototypes for deep-space habitat modules. There was no mention of plans for avoiding astronaut brain damage, and the NextSTEP-2 illustrations don’t appear to address it either.

Concept image of Sierra Nevada Corporation’s habitation prototype, based on its Dream Chaser cargo module. No multi-ton shielding is apparent. (credit: Sierra Nevada)

Hitchhiking on an asteroid

So what are the solutions (if any)? Material shielding can be effective against galactic cosmic rays, but the mass required makes it expensive and impractical for space travel. For instance, a NASA design study for a large space station envisioned four metric tons per square meter of shielding to drop radiation exposure to 2.5 millisieverts (mSv), or 0.0025 Gy, annually. (For comparison, the annual global average dose from natural background radiation is 2.4 mSv — 3.6 mSv in the U.S., including X-rays — according to a 2008 United Nations report.)
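To see why four metric tons per square meter is prohibitive, here’s a rough Python sketch that applies that areal density to a hypothetical habitat the size of an ISS-class module; the dimensions are illustrative assumptions, not from the NASA study:

```python
import math

# Hypothetical habitat: a cylinder roughly the size of an ISS module.
radius_m, length_m = 2.2, 8.0
SHIELD_AREAL_DENSITY = 4000.0  # kg/m^2, from the NASA design study

# Total surface to cover: cylinder side plus two end caps.
surface_area = 2 * math.pi * radius_m * length_m + 2 * math.pi * radius_m**2
shield_mass_tonnes = surface_area * SHIELD_AREAL_DENSITY / 1000.0

print(f"Surface area: {surface_area:.0f} m^2; "
      f"shield mass: {shield_mass_tonnes:.0f} metric tons")
```

The result, on the order of 560 metric tons for the shield alone, exceeds the roughly 420-metric-ton mass of the entire International Space Station, which is why passive shielding at this level is considered impractical for a Mars transit vehicle.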

Various alternate shielding schemes have been proposed. NASA scientist Geoffrey A. Landis suggested in a 1991 paper the use of magnetic deflection of charged radiation particles (imitating the Earth’s magnetosphere***). Improvements in superconductors since 1991 may make this more practical today, and possibly more so in the future.
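A quick gyroradius calculation suggests why magnetic deflection is at least plausible. The sketch below assumes an illustrative 1-tesla field and a 1 GeV proton (a typical galactic-cosmic-ray energy); neither number comes from Landis’s paper:

```python
import math

# Gyroradius r = p / (qB) for a proton in a deflector field.
KE_MEV = 1000.0      # assumed kinetic energy of a GCR proton, MeV
M_P_MEV = 938.272    # proton rest mass, MeV/c^2
B_TESLA = 1.0        # assumed deflector field strength

# Relativistic momentum: (pc)^2 = E_total^2 - (mc^2)^2
total_e = KE_MEV + M_P_MEV
pc_mev = math.sqrt(total_e**2 - M_P_MEV**2)

# r = (pc expressed in volts) / (c * B)
r_m = (pc_mev * 1e6) / (299_792_458 * B_TESLA)
print(f"pc = {pc_mev:.0f} MeV; gyroradius = {r_m:.1f} m")
```

A field of order one tesla bends GeV protons on a scale of a few meters, within reach of modern superconducting coils; more energetic particles have proportionally larger gyroradii and would need stronger fields or larger coil geometries.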

In a 2011 paper in Acta Astronautica, Gregory Matloff of New York City College of Technology suggested that a Mars-bound spacecraft could tunnel into a passing asteroid for shielding, as long as the asteroid is at least 33 feet wide (if the asteroid were especially iron-rich, the necessary width would be smaller), National Geographic reported.
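Matloff’s 33-foot (about 10 m) minimum can be sanity-checked against the four-tons-per-square-meter figure from the NASA space-station study mentioned above. The densities below are assumed typical values, not from his paper:

```python
# Areal density of ~10 m (33 ft) of asteroid material vs. the
# 4 t/m^2 shield from the NASA space-station design study.
ROCK_DENSITY = 2000.0   # kg/m^3, typical stony asteroid (assumed)
IRON_DENSITY = 7800.0   # kg/m^3, iron-rich asteroid (assumed)
STATION_STUDY_AREAL = 4000.0  # kg/m^2

for name, rho in [("stony", ROCK_DENSITY), ("iron-rich", IRON_DENSITY)]:
    areal = rho * 10.0  # kg/m^2 behind 10 m of material
    print(f"{name}: {areal / 1000:.0f} t/m^2 "
          f"({areal / STATION_STUDY_AREAL:.1f}x the station-study shield)")
```

Ten meters of stony material provides about five times the areal density of the station-study shield, and an iron-rich body provides far more at the same thickness, which is why a denser asteroid could be narrower.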

The calculated orbit of (357024) 1999 YR14 (credit: Lowell Observatory Near-Earth-Object Search)

“There are five known asteroids that fit the criteria and will pass from Earth to Mars before the year 2100. … The asteroids 1999YR14 and 2007EE26, for example, will both pass Earth in 2086, and they’ll make the journey to Mars in less than a year,” he said. Downside: it would be five years before either asteroid would swing around Mars as it heads back toward Earth.

Meanwhile, future preventive treatments may help. Limoli’s group is working on pharmacological strategies involving compounds that scavenge free radicals and protect neurotransmission.

* An Eastern Virginia Medical School researcher also contributed to the study.

** The Scientific Reports paper gives these values in centigray (cGy), one hundredth (0.01) of the gray (Gy), the SI derived unit of absorbed dose and specific energy (energy per unit mass). Such energies are usually associated with ionizing radiation such as gamma rays or X-rays.

*** Astronauts working for extended periods on the International Space Station do not face the same level of bombardment with galactic cosmic rays because they are still within the Earth’s protective magnetosphere. Astronauts on Apollo and Skylab missions received on average 1.2 mSv (0.0012 Gy) per day and 1.4 mSv (0.0014 Gy) per day respectively, according to a NASA study.

Abstract of Cosmic radiation exposure and persistent cognitive dysfunction

The Mars mission will result in an inevitable exposure to cosmic radiation that has been shown to cause cognitive impairments in rodent models, and possibly in astronauts engaged in deep space travel. Of particular concern is the potential for cosmic radiation exposure to compromise critical decision making during normal operations or under emergency conditions in deep space. Rodents exposed to cosmic radiation exhibit persistent hippocampal and cortical based performance decrements using six independent behavioral tasks administered between separate cohorts 12 and 24 weeks after irradiation. Radiation-induced impairments in spatial, episodic and recognition memory were temporally coincident with deficits in executive function and reduced rates of fear extinction and elevated anxiety. Irradiation caused significant reductions in dendritic complexity, spine density and altered spine morphology along medial prefrontal cortical neurons known to mediate neurotransmission interrogated by our behavioral tasks. Cosmic radiation also disrupted synaptic integrity and increased neuroinflammation that persisted more than 6 months after exposure. Behavioral deficits for individual animals correlated significantly with reduced spine density and increased synaptic puncta, providing quantitative measures of risk for developing cognitive impairment. Our data provide additional evidence that deep space travel poses a real and unique threat to the integrity of neural circuits in the brain.

Abstract of What happens to your brain on the way to Mars

As NASA prepares for the first manned spaceflight to Mars, questions have surfaced concerning the potential for increased risks associated with exposure to the spectrum of highly energetic nuclei that comprise galactic cosmic rays. Animal models have revealed an unexpected sensitivity of mature neurons in the brain to charged particles found in space. Astronaut autonomy during long-term space travel is particularly critical as is the need to properly manage planned and unanticipated events, activities that could be compromised by accumulating particle traversals through the brain. Using mice subjected to space-relevant fluences of charged particles, we show significant cortical- and hippocampal-based performance decrements 6 weeks after acute exposure. Animals manifesting cognitive decrements exhibited marked and persistent radiation-induced reductions in dendritic complexity and spine density along medial prefrontal cortical neurons known to mediate neurotransmission specifically interrogated by our behavioral tasks. Significant increases in postsynaptic density protein 95 (PSD-95) revealed major radiation-induced alterations in synaptic integrity. Impaired behavioral performance of individual animals correlated significantly with reduced spine density and trended with increased synaptic puncta, thereby providing quantitative measures of risk for developing cognitive decrements. Our data indicate an unexpected and unique susceptibility of the central nervous system to space radiation exposure, and argue that the underlying radiation sensitivity of delicate neuronal structure may well predispose astronauts to unintended mission-critical performance decrements and/or longer-term neurocognitive sequelae.

Elon Musk unveils plans for Mars civilization

(credit: SpaceX)

In a talk on Tuesday at the International Astronautical Congress in Guadalajara, Mexico, SpaceX CEO Elon Musk laid out engineering details to establish a permanent, self-sustaining civilization of a million people on Mars, with an initial flight as soon as 2024.

SpaceX is designing a massive, reusable Interplanetary Transport System spacecraft with passenger cabins. The trip would initially cost $500,000 per person, with a long-term goal of 100 passengers per trip.

Musk plans to make humanity a “multiplanetary species” to ensure survival in case of a calamity like an asteroid strike. “This is really about minimizing existential risk and having a tremendous sense of adventure,” he said.

Artist’s impression of Interplanetary Transport System on Europa (note humans for scale) (credit: SpaceX)

The new rocket could also be used for other interplanetary trips to places like Europa, the icy moon of Jupiter.

(credit: SpaceX)

SpaceX | SpaceX Interplanetary Transport System

SpaceX | Making Humans a Multiplanetary Species