Ray Kurzweil on The Age of Spiritual Machines: A 1999 TV interview

Dear readers,

For your interest: I recently re-watched this 1999 interview with me, which describes several predictions that are still coming true. It’s intriguing to look back over the past 18 years to see what actually unfolded. The video is a compelling glimpse into the future we’re now living.

Enjoy!

— Ray


Dear readers,

This interview by Harold Hudson Channer was recorded on Jan. 14, 1999 and aired February 1, 1999 on a Manhattan Neighborhood Network cable-access show, Conversations with Harold Hudson Channer.

In the discussion, Ray explains many of the ahead-of-their-time ideas presented in The Age of Spiritual Machines*, such as the “law of accelerating returns” (how technological change is exponential, contrary to the common-sense “intuitive linear” view); the forthcoming revolutionary impacts of AI; nanotech brain and body implants for increased intelligence, improved health, and life extension; and technological impacts on economic growth.
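
The gap between the “intuitive linear” view and exponential change is easy to see with a toy calculation (a hypothetical illustration with made-up growth rates, not Kurzweil’s own figures):

```python
# Toy illustration of the "law of accelerating returns":
# linear vs. exponential projections from the same starting point.
# Growth rates here are hypothetical, chosen only for illustration.

def linear(start: float, increment: float, years: float) -> float:
    """Capability grows by a fixed increment per year ("intuitive linear" view)."""
    return start + increment * years

def exponential(start: float, doubling_time: float, years: float) -> float:
    """Capability doubles every `doubling_time` years (accelerating returns)."""
    return start * 2 ** (years / doubling_time)

for years in (10, 20, 30):
    print(years, linear(1.0, 1.0, years), exponential(1.0, 2.0, years))
```

With a two-year doubling time, the exponential projection is roughly 1,000× the starting point after 20 years, while the linear one has grown only about 20-fold — the divergence the interview keeps returning to.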

I was personally inspired by the book in 1999 and by Ray’s prophetic, uplifting vision of the future. I hope you also enjoy this blast from the past.

— Amara D. Angelica, Editor

* First published in hardcover January 1, 1999 by Viking. The series also includes The Age of Intelligent Machines (The MIT Press, 1992) and The Singularity Is Near (Penguin Books, 2006).

Why futurist Ray Kurzweil isn’t worried about technology stealing your job — Fortune

1985: Ray Kurzweil looks on as Stevie Wonder experiences the Kurzweil 250, the first synthesizer to accurately reproduce the sounds of the piano — replacing piano-maker jobs but adding many more jobs for musicians (credit: Kurzweil Music Systems)

Last week, Fortune magazine asked Ray Kurzweil to comment on some often-expressed questions about the future.

Does AI pose an existential threat to humanity?

Kurzweil sees the future as nuanced, notes writer Michal Lev-Ram. “A lot of people think things are getting worse, partly because that’s actually an evolutionary adaptation,” Kurzweil said. “It’s very important for your survival to be sensitive to bad news. … I think if you look at history, though, we’re being helped [by new technology] more than we’re being hurt.”

How will artificial intelligence and other technologies impact jobs?

“We have already eliminated all jobs several times in human history,” said Kurzweil, pointing out that “for every job we eliminate, we’re going to create more jobs at the top of the skill ladder. … You can’t describe the new jobs, because they’re in industries and concepts that don’t exist yet.”

Why are we so bad at predicting certain things? For example, Donald Trump winning the presidency?

Kurzweil: “He’s not technology.”

Read Fortune article here.

Elon Musk wants to enhance us as superhuman cyborgs to deal with superintelligent AI

(credit: Neuralink Corp.)

It’s the year 2021. A quadriplegic patient has just had one million “neural lace” microparticles injected into her brain, making her the world’s first human with an internet communication system using a wireless implanted brain-mind interface — and the first superhuman cyborg. …

No, this is not a science-fiction movie plot. It’s the actual first public step — just four years from now — in Tesla CEO Elon Musk’s business plan for his new venture, Neuralink, now explained for the first time on Tim Urban’s WaitButWhy blog.

Dealing with the superintelligence existential risk

Such a system would allow for radically improved communication between people, Musk believes. But for Musk, the big concern is AI safety. “AI is obviously going to surpass human intelligence by a lot,” he says. “There’s some risk at that point that something bad happens, something that we can’t control, that humanity can’t control after that point — either a small group of people monopolize AI power, or the AI goes rogue, or something like that.”

“This is what keeps Elon up at night,” says Urban. “He sees it as only a matter of time before superintelligent AI rises up on this planet — and when that happens, he believes that it’s critical that we don’t end up as part of ‘everyone else.’ That’s why, in a future world made up of AI and everyone else, he thinks we have only one good option: To be AI.”

Neural dust: an ultrasonic, low power solution for chronic brain-machine interfaces (credit: Swarm Lab/UC Berkeley)

To achieve this, Neuralink CEO Musk has met with more than 1,000 people, initially narrowing the team down to eight experts. They include Paul Merolla, who spent the last seven years as lead chip designer at IBM on its DARPA-funded SyNAPSE program, designing neuromorphic (brain-inspired) chips with 5.4 billion transistors (each chip with 1 million neurons and 256 million synapses), and Dongjin (DJ) Seo, who while at UC Berkeley designed “neural dust” — an ultrasonic backscatter system for powering and communicating with implanted bioelectronics to record brain activity.*

Mesh electronics being injected through sub-100 micrometer inner diameter glass needle into aqueous solution (credit: Lieber Research Group, Harvard University)

Becoming one with AI — a good thing?

Neuralink’s goal is to create a “digital tertiary layer” to augment the brain’s current cortex and limbic layers: a radical high-bandwidth, long-lasting, biocompatible, bidirectional, non-invasively implanted communication system made up of micron-size (millionth of a meter) particles that communicate wirelessly via the cloud and internet, achieving super-fast communication speed and increased bandwidth (carrying more information).

“We’re going to have the choice of either being left behind and being effectively useless or like a pet — you know, like a house cat or something — or eventually figuring out some way to be symbiotic and merge with AI. … A house cat’s a good outcome, by the way.”

Thin, flexible electrodes mounted on top of a biodegradable silk substrate could provide a better brain-machine interface, as shown in this model. (credit: University of Illinois at Urbana-Champaign)

But machine intelligence is already vastly superior to human intelligence in specific areas (such as Google DeepMind’s AlphaGo) and often inexplicable. So how do we know superintelligence has the best interests of humanity in mind?

“Just an engineering problem”

Musk’s answer: “If we achieve tight symbiosis, the AI wouldn’t be ‘other’ — it would be you, with a relationship to your cortex analogous to the relationship your cortex has with your limbic system.” OK, but then how does an inferior intelligence know when it’s achieved full symbiosis with a superior one — or when AI goes rogue?

Brain-to-brain (B2B) internet communication system: EEG signals representing two words were encoded into binary strings (left) by the sender (emitter) and sent via the internet to a receiver. The signal was then encoded as a series of transcranial magnetic stimulation-generated phosphenes detected by the visual occipital cortex, which the receiver then translated to words (credit: Carles Grau et al./PLoS ONE)
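
The coding layer of that pipeline — words rendered as binary strings at the sender and translated back at the receiver — can be sketched in miniature (a toy model of the binary encode/decode step only; the actual Grau et al. study used EEG capture and TMS-generated phosphenes, not text I/O):

```python
# Toy sketch of the brain-to-brain (B2B) binary coding path.
# Only the encode/transmit/decode layer is modeled; EEG acquisition
# and phosphene stimulation are outside the scope of this illustration.

def encode(word: str) -> str:
    """Encode a word as a binary string (8 bits per character)."""
    return "".join(format(ord(c), "08b") for c in word)

def decode(bits: str) -> str:
    """Decode an 8-bit-per-character binary string back to text."""
    chars = [bits[i:i + 8] for i in range(0, len(bits), 8)]
    return "".join(chr(int(b, 2)) for b in chars)

message = encode("hola")  # "hola" was reportedly one of the words transmitted
print(message)
print(decode(message))
```

The round trip is lossless by construction; the hard engineering problems in the real system live entirely in the two brain-facing ends the sketch omits.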

And what about experts in neuroethics, psychology, law? Musk says it’s just “an engineering problem. … If we can just use engineering to get neurons to talk to computers, we’ll have done our job, and machine learning can do much of the rest.”

However, it’s not clear how we could be assured our brains aren’t hacked, spied on, and controlled by a repressive government or by other humans — especially those with a more recently updated software version or covert cyborg hardware improvements.

NIRS/EEG brain-computer interface system using non-invasive near-infrared light for sensing “yes” or “no” thoughts, shown on a model (credit: Wyss Center for Bio and Neuroengineering)

In addition, the devices mentioned in WaitButWhy all require some form of neurosurgery, unlike Facebook’s research project to use non-invasive near-infrared light, as shown in this experiment, for example.** And getting implants for non-medical use approved by the FDA will be a challenge, to grossly understate it.

“I think we are about 8 to 10 years away from this being usable by people with no disability,” says Musk, optimistically. However, Musk does not lay out a technology roadmap for going further, as MIT Technology Review notes.

Nonetheless, Neuralink sounds awesome — it should lead to some exciting neuroscience breakthroughs. And Neuralink now has 16 San Francisco job listings here.

* Other experts: Vanessa Tolosa, Lawrence Livermore National Laboratory, one of the world’s foremost researchers on biocompatible materials; Max Hodak, who worked on the development of some groundbreaking BMI technology at Miguel Nicolelis’s lab at Duke University, Ben Rapoport, Neuralink’s neurosurgery expert, with a Ph.D. in Electrical Engineering and Computer Science from MIT; Tim Hanson, UC Berkeley post-doc and expert in flexible Electrodes for Stable, Minimally-Invasive Neural Recording; Flip Sabes, professor, UCSF School of Medicine expert in cortical physiology, computational and theoretical modeling, and human psychophysics and physiology; and Tim Gardner, Associate Professor of Biology at Boston University, whose lab works on implanting BMIs in birds, to study “how complex songs are assembled from elementary neural units” and learn about “the relationships between patterns of neural activity on different time scales.”

** This binary experiment and the binary Brain-to-brain (B2B) internet communication system mentioned above are the equivalents of the first binary (dot–dash) telegraph message, sent May 24, 1844: ”What hath God wrought?”
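
As an aside, that first dot-dash message is easy to reproduce (a minimal sketch using the standard International Morse table, restricted to just the characters the message needs):

```python
# Toy Morse encoder for the 1844 telegraph message quoted above.
# The table covers only the characters needed (International Morse code).

MORSE = {
    "A": ".-", "D": "-..", "G": "--.", "H": "....", "O": "---",
    "R": ".-.", "T": "-", "U": "..-", "W": ".--", "?": "..--..",
    " ": "/",  # conventional word separator in written Morse
}

def to_morse(text: str) -> str:
    """Encode text as dot-dash Morse, characters separated by spaces."""
    return " ".join(MORSE[c] for c in text.upper())

print(to_morse("What hath God wrought?"))
```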

Join the around-the-world 24-hour conversation on the future to celebrate World Future Day March 1

Futurists from the 55 Millennium Project nodes worldwide will join other organizations and the public on March 1 to exchange ideas about the future

Futurists worldwide plan to celebrate March 1 as World Future Day with a 24-hour conversation about the world’s potential futures, challenges, and opportunities.

At 12 noon your local time on March 1, you can click on a Google hangout at goo.gl/4hCJq3 and join the conversation* (log in with a Google account). It starts at 12 noon local time in Auckland, New Zealand and moves westward across the world, ending at 12 noon in Honolulu.

The World Futures Studies Federation, Association of Professional Futurists, and Humanity+ have joined forces with The Millennium Project** to invite their members and the public to participate.

“This is an open discussion about the future,” says Jerome Glenn, CEO of The Millennium Project. “People will be encouraged to share their ideas about how to build a better future.”

This is the fourth year The Millennium Project has done this. Previous World Future Days have discussed issues like:

  • Has the world become too complex to understand and manage?
  • Can collective intelligence and smart cities anticipate and manage such complexity?
  • Will there be a phase shift of global attitudes in the near future about what is important about the future?
  • Can new concepts of employment be created to prevent increasing unemployment caused by the acceleration of technological changes?
  • Can self-organization on the Internet reduce dependence on ill-informed politicians?
  • Can virtual currencies work without supporting organized crime?
  • How can we break free from mental constraints preventing truly innovative valuable ideas and understand how our brains might sabotage us (rational vs. irrational fear, traumatic memories, and defense mechanisms)?
  • How can we connect our brains to become more intelligent?

* If you join the video conference and see that the limit of interactive video participation has been reached, you will still be able to see and hear, as well as type in the chat box, but your video will not be seen until some leave the conversation. As people drop out, new video slots will open up. You can also tweet a comment to @millenniumproj and facilitators will read it live in the video conference.

** The Millennium Project is an independent non-profit global participatory futures research think tank of futurists, scholars, business planners, and policy makers who work for international organizations, governments, corporations, non-governmental organizations, and universities. It produces the annual “State of the Future” reports, the “Futures Research Methodology” series, the Global Futures Intelligence System (GFIS), and special studies. 

Billionaire Softbank CEO Masayoshi Son plans to invest in singularity

Masayoshi Son (credit: Softbank Group)

Billionaire Softbank Group Chairman and CEO Masayoshi Son revealed Monday (Feb. 27) at Mobile World Congress his plan to invest in the singularity. “In next 30 years [the singularity] will become a reality,” he said, TechCrunch reports.

“If superintelligence goes inside the moving device then the world, our lifestyle dramatically changes,” he said. “There will be many kinds. Flying, swimming, big, micro, run, 2 legs, 4 legs, 100 legs,” referring to robots. “I truly believe it’s coming, that’s why I’m in a hurry — to aggregate the cash, to invest.”

“Son said his personal conviction in the looming rise of billions of superintelligent robots both explains his acquisition of UK chipmaker ARM last year, and his subsequent plan to establish the world’s biggest VC fund,” noted TechCrunch — a new $100BN fund called the Softbank Vision Fund, announced last October.

TechCrunch said that despite additional contributors including Foxconn, Apple, Qualcomm and Oracle co-founder Larry Ellison’s family office, the fund has evidently not yet hit Son’s target of $100BN, so he used the keynote as a sales pitch for additional partners.

Addressing existential threats

“Son said his haste is partly down to a belief that superintelligent AIs can be used for ‘the goodness of humanity,’ going on to suggest that only AI has the potential to address some of the greatest threats to humankind’s continued existence — be it climate change or nuclear annihilation,” said TechCrunch.

“It will be so much more capable than us — what will be our job? What will be our life? We have to ask philosophical questions,” Son said. “Is it good or bad? I think this superintelligence is going to be our partner. If we misuse it, it’s a risk. If we use it in good spirits, it will be our partner for a better life. So the future can be better predicted, people will live healthier, and so on.”

“With the coming of singularity, I believe we will benefit from new ideas and wisdom that people were previously incapable of thanks to big data and other analytics,” Son said on the Softbank Group website. “At some point I am sure we will see the birth of a ‘Super-intelligence’ that will contribute to humanity. This paradigm shift has only accelerated in recent years as both a worldwide and irreversible trend.”

Beneficial AI conference develops ‘Asilomar AI principles’ to guide future AI research

Beneficial AI conference (credit: Future of Life Institute)

At the Beneficial AI 2017 conference, held January 5–8 at a conference center in Asilomar, California — a sequel to the 2015 AI Safety conference in Puerto Rico — the Future of Life Institute (FLI) brought together more than 100 AI researchers from academia and industry, along with thought leaders in economics, law, ethics, and philosophy, to address and formulate principles of beneficial AI.

FLI hosted a two-day workshop for its grant recipients, followed by a 2.5-day conference, in which people from various AI-related fields hashed out opportunities and challenges related to the future of AI and steps we can take to ensure that the resulting technology is beneficial.

Beneficial AI conference participants (credit: Future of Life Institute)

The result was the 23 Asilomar AI Principles, intended to suggest AI research guidelines, such as “The goal of AI research should be to create not undirected intelligence, but beneficial intelligence” and “An arms race in lethal autonomous weapons should be avoided”; identify ethics and values, such as safety and transparency; and address longer-term issues — notably, “Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.”

To date, 2515 AI researchers and others are signatories of the Principles. The process is described here.

The conference location has historic significance. In 2009, the Association for the Advancement of Artificial Intelligence held the Asilomar Meeting on Long-Term AI Futures to address similar concerns. And in 1975, the Asilomar Conference on Recombinant DNA was held to discuss potential biohazards and regulation of emerging biotechnology.

The non-profit Future of Life Institute was founded in March 2014 by MIT cosmologist Max Tegmark, Skype co-founder Jaan Tallinn, DeepMind research scientist Viktoriya Krakovna, Boston University Ph.D. candidate in Developmental Sciences Meia Chita-Tegmark, and UCSC physicist Anthony Aguirre. Its mission is “to catalyze and support research and initiatives for safeguarding life and developing optimistic visions of the future, including positive ways for humanity to steer its own course considering new technologies and challenges.”

FLI’s scientific advisory board includes physicist Stephen Hawking, SpaceX CEO Elon Musk, Astronomer Royal Martin Rees, and UC Berkeley Professor of Computer Science/Smith-Zadeh Professor in Engineering Stuart Russell.


Future of Life Institute | Superintelligence: Science or Fiction? | Elon Musk & Other Great Minds

Elon Musk, Stuart Russell, Ray Kurzweil, Demis Hassabis, Sam Harris, Nick Bostrom, David Chalmers, Bart Selman, and Jaan Tallinn discuss with Max Tegmark (moderator) what likely outcomes might be if we succeed in building human-level AGI [artificial general intelligence] (and beyond), and also what we would like to happen.

 

IBM announces AI-powered decision-making

Project DataWorks predictive model (credit: IBM)

IBM today announced Watson-based “Project DataWorks,” the first cloud-based data and analytics platform to integrate all types of data and enable AI-powered decision-making.

Project DataWorks is designed to make it simple for business leaders and data professionals to collect, organize, govern, and secure data, and become a “cognitive business.”

Achieving data insights is increasingly complex, and most of this work is done by highly skilled data professionals who work in silos with disconnected tools and data services that may be difficult to manage, integrate, and govern, says IBM. Businesses must also continually iterate their data models and products — often manually — to benefit from the most relevant, up-to-date insights.

IBM says Project DataWorks can help businesses break down these barriers by connecting all data and insights for their users into an integrated, self-service platform.

Available on Bluemix, IBM’s Cloud platform, Project DataWorks is designed to help organizations:

  • Automate the deployment of data assets and products using cognitive-based machine learning and Apache Spark;
  • Ingest data faster than any other data platform, from 50 to hundreds of Gbps, and all endpoints: enterprise databases, Internet of Things, weather, and social media;
  • Leverage an open ecosystem of more than 20 partners and technologies, such as Confluent, Continuum Analytics, Galvanize, Alation, NumFOCUS, RStudio, Skymind, and more.

 

Seth Rogen plans FX TV comedy series on the Singularity

Seth Rogen in poster for “The Interview” (credit: Columbia Pictures)

Seth Rogen (Freaks and Geeks, Knocked Up, Superbad) and collaborator Evan Goldberg are writing the script for a pilot for a new “half-hour comedy television series about the Singularity for FX,” Rogen revealed Thursday (August 11) on Nerdist podcast: Seth Rogen Returns (at 55:20 mark), while promoting his latest film, Sausage Party (an animated movie that apparently sets a new world record for f-bombs, based on the trailer).

“Yeah, it’s happening, I just read an article about neural dust,” said host Chris Hardwick.

“Oh, it’s happening, it’s super scary, and we’re trying to make a comedy about it,” said Rogen. “We’ll film that in the next year, basically.”

“Neural dust are, like, small particles, kind of like nano-mites, that work in your systems,” Hardwick said, “and can …” — “wipe out whole civilizations,” Rogen interjected. “But, you know, they always kinda pitch you the good stuff first: it could help your body,” Hardwick added.

(credit: Vanity Fair)

Also mentioned on the podcast: a “prank show [All People Are Famous] next week where the guy we’re pranking thinks he’s responsible for the Singularity … goes nuts, destroying everything. …”