Elon Musk wants to enhance us as superhuman cyborgs to deal with superintelligent AI

(credit: Neuralink Corp.)

It’s the year 2021. A quadriplegic patient has just had one million “neural lace” microparticles injected into her brain, making her the world’s first human to communicate over the internet via a wireless implanted brain-machine interface — and the first superhuman cyborg. …

No, this is not a science-fiction movie plot. It’s the actual first public step — just four years from now — in Tesla CEO Elon Musk’s business plan for his latest venture, Neuralink, explained for the first time on Tim Urban’s WaitButWhy blog.

Dealing with the superintelligence existential risk

Such a system would allow for radically improved communication between people, Musk believes. But for Musk, the big concern is AI safety. “AI is obviously going to surpass human intelligence by a lot,” he says. “There’s some risk at that point that something bad happens, something that we can’t control, that humanity can’t control after that point — either a small group of people monopolize AI power, or the AI goes rogue, or something like that.”

“This is what keeps Elon up at night,” says Urban. “He sees it as only a matter of time before superintelligent AI rises up on this planet — and when that happens, he believes that it’s critical that we don’t end up as part of ‘everyone else.’ That’s why, in a future world made up of AI and everyone else, he thinks we have only one good option: To be AI.”

Neural dust: an ultrasonic, low power solution for chronic brain-machine interfaces (credit: Swarm Lab/UC Berkeley)

To achieve this, Neuralink CEO Musk has met with more than 1,000 people, initially narrowing the team down to eight experts. They include Paul Merolla, who spent the last seven years as lead chip designer at IBM on its DARPA-funded SyNAPSE program, designing neuromorphic (brain-inspired) chips with 5.4 billion transistors, 1 million neurons, and 256 million synapses each; and Dongjin (DJ) Seo, who while at UC Berkeley designed “neural dust,” an ultrasonic backscatter system for powering and communicating with implanted bioelectronics that record brain activity.*

Mesh electronics being injected through sub-100 micrometer inner diameter glass needle into aqueous solution (credit: Lieber Research Group, Harvard University)

Becoming one with AI — a good thing?

Neuralink’s goal is to create a “digital tertiary layer” to augment the brain’s existing cortical and limbic layers: a radical, high-bandwidth, long-lasting, biocompatible, bidirectional, non-invasively implanted communication system made up of micron-size (millionth of a meter) particles that communicate wirelessly via the cloud and the internet, achieving far faster communication speeds and greater bandwidth (carrying more information).

“We’re going to have the choice of either being left behind and being effectively useless or like a pet — you know, like a house cat or something — or eventually figuring out some way to be symbiotic and merge with AI. … A house cat’s a good outcome, by the way.”

Thin, flexible electrodes mounted on top of a biodegradable silk substrate could provide a better brain-machine interface, as shown in this model. (credit: University of Illinois at Urbana-Champaign)

But machine intelligence is already vastly superior to human intelligence in specific areas (such as Google DeepMind’s AlphaGo) and is often inexplicable. So how do we know a superintelligence would have the best interests of humanity in mind?

“Just an engineering problem”

Musk’s answer: “If we achieve tight symbiosis, the AI wouldn’t be ‘other’ — it would be you and with a relationship to your cortex analogous to the relationship your cortex has with your limbic system.” OK, but then how does an inferior intelligence know when it has achieved full symbiosis with a superior one — or when the AI goes rogue?

Brain-to-brain (B2B) internet communication system: EEG signals representing two words were encoded into binary strings (left) by the sender (emitter) and sent via the internet to a receiver, where the signal was delivered as a series of transcranial magnetic stimulation-generated phosphenes detected by the visual occipital cortex, which the receiver then translated back into words (credit: Carles Grau et al./PLoS ONE)
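The sender-side encoding step can be illustrated with a minimal sketch. It assumes plain 7-bit ASCII per character — the published study used its own coding scheme — so it shows only the general idea of turning words into the binary strings that would be transmitted:

```python
def encode_words(words):
    """Map each word to a binary string, 7 bits per ASCII character.
    (Illustrative only; the Grau et al. study used its own coding scheme.)"""
    return {w: "".join(f"{ord(c):07b}" for c in w) for w in words}

def decode_bits(bits):
    """Reverse the encoding: read 7 bits at a time back into characters."""
    return "".join(chr(int(bits[i:i + 7], 2)) for i in range(0, len(bits), 7))

bits = encode_words(["hola"])["hola"]
print(bits)               # 28-bit string for the 4-letter word
print(decode_bits(bits))  # round-trips back to the original word
```

On the receiving side, each bit would be rendered as the presence or absence of a phosphene, then read back 7 bits at a time.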

And what about experts in neuroethics, psychology, law? Musk says it’s just “an engineering problem. … If we can just use engineering to get neurons to talk to computers, we’ll have done our job, and machine learning can do much of the rest.”

However, it’s not clear how we could be assured our brains aren’t hacked, spied on, and controlled by a repressive government or by other humans — especially those with a more recently updated software version or covert cyborg hardware improvements.

NIRS/EEG brain-computer interface system using non-invasive near-infrared light for sensing “yes” or “no” thoughts, shown on a model (credit: Wyss Center for Bio and Neuroengineering)

In addition, the devices mentioned in WaitButWhy all require some form of neurosurgery — unlike, for example, Facebook’s research project to use non-invasive near-infrared light, as shown in this experiment.** And getting implants for non-medical use approved by the FDA will be a challenge, to grossly understate it.

“I think we are about 8 to 10 years away from this being usable by people with no disability,” says Musk, optimistically. However, Musk does not lay out a technology roadmap for going further, as MIT Technology Review notes.

Nonetheless, Neuralink sounds awesome — it should lead to some exciting neuroscience breakthroughs. The company currently has 16 job listings in San Francisco.

* Other experts: Vanessa Tolosa, Lawrence Livermore National Laboratory, one of the world’s foremost researchers on biocompatible materials; Max Hodak, who worked on the development of some groundbreaking BMI technology at Miguel Nicolelis’s lab at Duke University; Ben Rapoport, Neuralink’s neurosurgery expert, with a Ph.D. in electrical engineering and computer science from MIT; Tim Hanson, UC Berkeley post-doc and expert in flexible electrodes for stable, minimally invasive neural recording; Flip Sabes, professor at the UCSF School of Medicine and expert in cortical physiology, computational and theoretical modeling, and human psychophysics and physiology; and Tim Gardner, Associate Professor of Biology at Boston University, whose lab works on implanting BMIs in birds to study “how complex songs are assembled from elementary neural units” and learn about “the relationships between patterns of neural activity on different time scales.”

** This binary experiment and the binary brain-to-brain (B2B) internet communication system mentioned above are the equivalents of the first binary (dot–dash) telegraph message, sent May 24, 1844: “What hath God wrought?”

Join the around-the-world 24-hour conversation on the future to celebrate World Future Day March 1

Futurists from the 55 Millennium Project nodes worldwide will join other organizations and the public on March 1 to exchange ideas about the future

Futurists worldwide plan to celebrate March 1 as World Future Day with a 24-hour conversation about the world’s potential futures, challenges, and opportunities.

At 12 noon your local time on March 1, you can click on a Google hangout at goo.gl/4hCJq3 and join the conversation* (log in with a Google account). The conversation starts at 12 noon in Auckland, New Zealand, and moves west across the world, ending at 12 noon in Honolulu.

The World Futures Studies Federation, Association of Professional Futurists, and Humanity+ have joined forces with The Millennium Project** to invite their members and the public to participate.

“This is an open discussion about the future,” says Jerome Glenn, CEO of The Millennium Project. “People will be encouraged to share their ideas about how to build a better future.”

This is the fourth year The Millennium Project has organized the event. Previous World Future Days have discussed issues like:

  • Has the world become too complex to understand and manage?
  • Can collective intelligence and smart cities anticipate and manage such complexity?
  • Will there be a phase shift of global attitudes in the near future about what is important about the future?
  • Can new concepts of employment be created to prevent increasing unemployment caused by the acceleration of technological changes?
  • Can self-organization on the Internet reduce dependence on ill-informed politicians?
  • Can virtual currencies work without supporting organized crime?
  • How can we break free from mental constraints preventing truly innovative valuable ideas and understand how our brains might sabotage us (rational vs. irrational fear, traumatic memories, and defense mechanisms)?
  • How can we connect our brains to become more intelligent?

* If you join the video conference and see that the limit of interactive video participation has been reached, you will still be able to see and hear, as well as type in the chat box, but your video will not be seen until some leave the conversation. As people drop out, new video slots will open up. You can also tweet a comment to @millenniumproj and facilitators will read it live in the video conference.

** The Millennium Project is an independent non-profit global participatory futures research think tank of futurists, scholars, business planners, and policy makers who work for international organizations, governments, corporations, non-governmental organizations, and universities. It produces the annual “State of the Future” reports, the “Futures Research Methodology” series, the Global Futures Intelligence System (GFIS), and special studies. 

Billionaire Softbank CEO Masayoshi Son plans to invest in the singularity

Masayoshi Son (credit: Softbank Group)

Billionaire Softbank Group Chairman and CEO Masayoshi Son revealed Monday (Feb. 27) at Mobile World Congress his plan to invest in the singularity. “In next 30 years [the singularity] will become a reality,” he said, TechCrunch reports.

“If superintelligence goes inside the moving device then the world, our lifestyle dramatically changes,” he said. “There will be many kinds. Flying, swimming, big, micro, run, 2 legs, 4 legs, 100 legs,” referring to robots. “I truly believe it’s coming, that’s why I’m in a hurry — to aggregate the cash, to invest.”

“Son said his personal conviction in the looming rise of billions of superintelligent robots both explains his acquisition of UK chipmaker ARM last year, and his subsequent plan to establish the world’s biggest VC fund,” noted TechCrunch — a new $100BN fund called the Softbank Vision Fund, announced last October.

TechCrunch said that despite additional contributors including Foxconn, Apple, Qualcomm and Oracle co-founder Larry Ellison’s family office, the fund has evidently not yet hit Son’s target of $100BN, so he used the keynote as a sales pitch for additional partners.

Addressing existential threats

“Son said his haste is partly down to a belief that superintelligent AIs can be used for ‘the goodness of humanity,’ going on to suggest that only AI has the potential to address some of the greatest threats to humankind’s continued existence — be it climate change or nuclear annihilation,” said TechCrunch.

“It will be so much more capable than us — what will be our job? What will be our life? We have to ask philosophical questions,” Son said. “Is it good or bad? I think this superintelligence is going to be our partner. If we misuse it it’s a risk. If we use it in good spirits it will be our partner for a better life. So the future can be better predicted, people will live healthier, and so on.”

“With the coming of singularity, I believe we will benefit from new ideas and wisdom that people were previously incapable of thanks to big data and other analytics,” Son said on the Softbank Group website. “At some point I am sure we will see the birth of a ‘Super-intelligence’ that will contribute to humanity. This paradigm shift has only accelerated in recent years as both a worldwide and irreversible trend.”

Beneficial AI conference develops ‘Asilomar AI principles’ to guide future AI research

Beneficial AI conference (credit: Future of Life Institute)

At the Beneficial AI 2017 conference, held January 5–8 at a conference center in Asilomar, California — a sequel to the 2015 AI Safety conference in Puerto Rico — the Future of Life Institute (FLI) brought together more than 100 AI researchers from academia and industry, along with thought leaders in economics, law, ethics, and philosophy, to formulate principles of beneficial AI.

FLI hosted a two-day workshop for its grant recipients, followed by a 2.5-day conference, in which people from various AI-related fields hashed out opportunities and challenges related to the future of AI and steps we can take to ensure that the resulting technology is beneficial.

Beneficial AI conference participants (credit: Future of Life Institute)

The result was the 23 Asilomar AI Principles, intended to suggest AI research guidelines, such as “The goal of AI research should be to create not undirected intelligence, but beneficial intelligence” and “An arms race in lethal autonomous weapons should be avoided”; identify ethics and values, such as safety and transparency; and address longer-term issues — notably, “Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.”

To date, 2,515 AI researchers and others have signed the Principles; FLI describes the signature process on its website.

The conference location has historic significance. In 2009, the Association for the Advancement of Artificial Intelligence held the Asilomar Meeting on Long-Term AI Futures to address similar concerns. And in 1975, the Asilomar Conference on Recombinant DNA was held to discuss potential biohazards and regulation of emerging biotechnology.

The non-profit Future of Life Institute was founded in March 2014 by MIT cosmologist Max Tegmark, Skype co-founder Jaan Tallinn, DeepMind research scientist Viktoriya Krakovna, Boston University Ph.D. candidate in Developmental Sciences Meia Chita-Tegmark, and UCSC physicist Anthony Aguirre. Its mission is “to catalyze and support research and initiatives for safeguarding life and developing optimistic visions of the future, including positive ways for humanity to steer its own course considering new technologies and challenges.”

FLI’s scientific advisory board includes physicist Stephen Hawking, SpaceX CEO Elon Musk, Astronomer Royal Martin Rees, and UC Berkeley Professor of Computer Science/Smith-Zadeh Professor in Engineering Stuart Russell.

Future of Life Institute | Superintelligence: Science or Fiction? | Elon Musk & Other Great Minds

Elon Musk, Stuart Russell, Ray Kurzweil, Demis Hassabis, Sam Harris, Nick Bostrom, David Chalmers, Bart Selman, and Jaan Tallinn discuss with Max Tegmark (moderator) what likely outcomes might be if we succeed in building human-level AGI [artificial general intelligence] (and beyond), and also what we would like to happen.


IBM announces AI-powered decision-making

Project DataWorks predictive model (credit: IBM)

IBM announced today the Watson-based “Project DataWorks,” which it calls the first cloud-based data and analytics platform to integrate all types of data and enable AI-powered decision-making.

Project DataWorks is designed to make it simple for business leaders and data professionals to collect, organize, govern, and secure data, and become a “cognitive business.”

Achieving data insights is increasingly complex, and most of this work is done by highly skilled data professionals who work in silos with disconnected tools and data services that may be difficult to manage, integrate, and govern, says IBM. Businesses must also continually iterate their data models and products — often manually — to benefit from the most relevant, up-to-date insights.

IBM says Project DataWorks can help businesses break down these barriers by connecting all data and insights for their users into an integrated, self-service platform.

Available on Bluemix, IBM’s Cloud platform, Project DataWorks is designed to help organizations:

  • Automate the deployment of data assets and products using cognitive-based machine learning and Apache Spark;
  • Ingest data faster than any other data platform, from 50 to hundreds of Gbps, and all endpoints: enterprise databases, Internet of Things, weather, and social media;
  • Leverage an open ecosystem of more than 20 partners and technologies, such as Confluent, Continuum Analytics, Galvanize, Alation, NumFOCUS, RStudio, Skymind, and more.


Seth Rogen plans FX TV comedy series on the Singularity

Seth Rogen in poster for “The Interview” (credit: Columbia Pictures)

Seth Rogen (Freaks and Geeks, Knocked Up, Superbad) and collaborator Evan Goldberg are writing the script for a pilot for a new “half-hour comedy television series about the Singularity for FX,” Rogen revealed Thursday (August 11) on Nerdist podcast: Seth Rogen Returns (at 55:20 mark), while promoting his latest film, Sausage Party (an animated movie that apparently sets a new world record for f-bombs, based on the trailer).

“Yeah, it’s happening, I just read an article about neural dust,” said host Chris Hardwick.

“Oh, it’s happening, it’s super scary, and we’re trying to make a comedy about it,” said Rogen. “We’ll film that in the next year, basically.”

“Neural dust are, like, small particles, kind of like nano-mites, that work in your systems,” Hardwick said, “and can …” — “wipe out whole civilizations,” Rogen interjected. “But, you know, they always kinda pitch you the good stuff first: it could help your body,” Hardwick added.

(credit: Vanity Fair)

Also mentioned on the podcast: a “prank show [All People Are Famous] next week where the guy we’re pranking thinks he’s responsible for the Singularity … goes nuts, destroying everything. …”

Futurists worldwide celebrate ‘Future Day’ March 1st

(credit: Adam Ford)

Today, March 1, five international futurist organizations will conduct a 24-hour global online conversation about the world’s potential futures, challenges, and opportunities. The objective is to support humanity in thinking about a more positive future.

Already started in New Zealand, the conversation is moving across the world, with people entering and leaving whenever they want. The five organizations (The Millennium Project; the Association of Professional Futurists; “Science, Technology & the Future”; the World Future Society; and the World Futures Studies Federation) will provide facilitators for each of the 24 time zones when possible, ending March 1 at 1:00 pm Hawaii time (GMT-10). Join the current Google hangout via the Millennium Project website, which posts updates.

(credit: Millennium Project)

If the limit of interactive video conference participation is reached, new arrivals will be able to see and hear, but their own video and voice will not be carried; they can tweet questions and comments to @MillenniumProj (#FutureDay2016, #FutureWeCreate, #FutureWeShare, #FutureWeWant, #FUTUREDAY). (As people drop out, new video slots will open up.)

Four years ago, on March 1, 2012, groups around the globe celebrated Future Day for the first time. This year, Joyce Gioia, CEO of The Herman Group, and Claire A. Nelson, PhD, Founder of The Futures Forum, will join Jerome Glenn, CEO of The Millennium Project, to make this global online event a success.

More information:

  • Association of Professional Futurists, Joyce Gioia (for Cindy Frewen, Chair) 336.210.3548
  • Future Day, Science, Technology & the Future: Adam Ford, tech101@gmail.com
  • The Millennium Project, Jerome Glenn, CEO, Jerome.Glenn@Millennium-Project.org
  • World Future Society: Julie Friedman Steele, julie@wfs.org
  • World Futures Studies Federation: Jennifer Gidley, PhD, President, wfsf.president@jennifergidley.com

Can human-machine superintelligence solve the world’s most dire problems?

Human Computation Institute | Dr. Pietro Michelucci

“Human computation” — combining human and computer intelligence in crowd-powered systems — might be what we need to solve the “wicked” problems of the world, such as climate change and geopolitical conflict, say researchers from the Human Computation Institute (HCI) and Cornell University.

In an article published in the journal Science, the authors present a new vision of human computation that takes on hard problems that until recently have remained out of reach.

Humans surpass machines at many things, ranging from visual pattern recognition to creative abstraction. And with the help of computers, these cognitive abilities can be effectively combined into multidimensional collaborative networks that achieve what traditional problem-solving cannot, the authors say.


Microtasking: Crowdsourcing breaks large tasks down into microtasks, which can be things at which humans excel, like classifying images. The microtasks are delivered to a large crowd via a user-friendly interface, and the data are aggregated for further processing. (credit: Pietro Michelucci and Janis L. Dickinson/Science)

Most of today’s human-computation systems rely on “microtasking” — sending micro-tasks to many individuals and then stitching together the results. For example, 165,000 volunteers in EyeWire have analyzed thousands of images online to help build the world’s most complete map of retinal neurons.

Another example is reCAPTCHA, a Web widget used by 100 million people a day when they transcribe distorted text into a box to prove they are human.

“Microtasking is well suited to problems that can be addressed by repeatedly applying the same simple process to each part of a larger data set, such as stitching together photographs contributed by residents to decide where to drop water during a forest fire,” the authors note.
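The stitching-together step can be sketched as a simple majority vote over redundant crowd answers. This is an illustrative sketch, not any specific platform’s actual pipeline; the image IDs and labels below are made up:

```python
from collections import Counter

def aggregate_labels(responses):
    """Majority-vote aggregation: responses maps item id -> list of crowd labels.
    Returns the consensus label for each item."""
    return {item: Counter(labels).most_common(1)[0][0]
            for item, labels in responses.items()}

# Three hypothetical volunteers classify two images; the consensus label wins.
crowd = {
    "img_001": ["neuron", "neuron", "blood_vessel"],
    "img_002": ["blood_vessel", "blood_vessel", "blood_vessel"],
}
print(aggregate_labels(crowd))
# {'img_001': 'neuron', 'img_002': 'blood_vessel'}
```

Real systems add refinements — weighting annotators by past accuracy, or requesting more labels when votes are split — but the redundancy-plus-aggregation pattern is the core of microtasking.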

But this microtasking approach alone cannot address the tough challenges we face today, say the authors. A radically new approach is needed to solve “wicked problems” — those that are dynamic, involve many constantly changing, interacting systems, and have non-obvious secondary effects and unforeseen consequences, such as climate change, disease, and geopolitical conflict (for example, political exploitation of a pandemic crisis).

New human-computation technologies

New human-computation technologies: In creating problem-solving ecosystems, researchers are beginning to explore how to combine the cognitive processing of many human contributors with machine-based computing to build faithful models of the complex, interdependent systems that underlie the world’s most challenging problems. (credit: Pietro Michelucci and Janis L. Dickinson/Science)

The authors say new human computation technologies can help build flexible collaborative environments. Recent techniques provide real-time access to crowd-based inputs, where individual contributions can be processed by a computer and sent to the next person for improvement or analysis of a different kind.

This idea is already taking shape in several human-computation projects:

  • YardMap.org, launched by Cornell in 2012, maps global conservation efforts. It allows participants to interact and build on each other’s work — something that crowdsourcing alone cannot achieve.
  • WeCureAlz.com accelerates Cornell-based Alzheimer’s disease research by combining two successful microtasking systems into an interactive analytic pipeline that builds blood-flow models of mouse brains. The stardust@home system, which was used to search for comet dust in one million images of aerogel, is being adapted to identify stalled blood vessels, which will then be pinpointed in the brain by a modified version of the EyeWire system.

“By enabling members of the general public to play some simple online game, we expect to reduce the time to treatment discovery from decades to just a few years,” says HCI director and lead author, Pietro Michelucci, PhD. “This gives an opportunity for anyone, including the tech-savvy generation of caregivers and early stage AD patients, to take the matter into their own hands.”

Abstract of The power of crowds

Human computation, a term introduced by Luis von Ahn, refers to distributed systems that combine the strengths of humans and computers to accomplish tasks that neither can do alone. The seminal example is reCAPTCHA, a Web widget used by 100 million people a day when they transcribe distorted text into a box to prove they are human. This free cognitive labor provides users with access to Web content and keeps websites safe from spam attacks, while feeding into a massive, crowd-powered transcription engine that has digitized 13 million articles from The New York Times archives. But perhaps the best known example of human computation is Wikipedia. Despite initial concerns about accuracy, it has become the key resource for all kinds of basic information. Information science has begun to build on these early successes, demonstrating the potential to evolve human computation systems that can model and address wicked problems (those that defy traditional problem-solving methods) at the intersection of economic, environmental, and sociopolitical systems.

Feeling like things are speeding up?

(credit: NASA)

That may be because they are: Earth is rushing along right now at about 30 kilometers per second (almost 19 miles per second) — about a kilometer per second faster than it will be moving on July 4, when Earth is farthest from the Sun, notes Bruce McClure of EarthSky Tonight.*

Or maybe it’s the accelerating pace of new developments? Tech predictions for 2016 are ranging from “Intelligent agents that will talk to you actively, reminding you of things that are happening and giving you a unique form of augmented reality” (TechCrunch) and “3-D printing’s inflection point: 3-D printed guns, 3-D printed vital organs” (Inc.) to “The Hyperloop will become fully operational” (Computerworld).

What are your predictions?

Head on over to our Forums — where folks are making predictions for 2016 ranging from “Amazon and Walmart will begin experimental delivery of parcels using drones” (Wiccidor) to “Low cost, bendable hi-def displays will make significant inroads in consumer electronics” (beachmike) — and make yours!

We’ll do a reality check on Jan. 2, 2017.

* The reason: today (January 2) our planet Earth reached its closest point to the Sun for this year — Earth’s perihelion. Earth is now about 5 million kilometers (3 million miles) closer to the Sun than it will be on July 4 at aphelion. “Though not responsible for the seasons, Earth’s closest and farthest points to the sun do affect seasonal lengths,” McClure explains.
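McClure’s roughly one-kilometer-per-second figure can be sanity-checked with conservation of angular momentum: at perihelion and aphelion Earth’s velocity is purely tangential, so speed × distance is the same at both points. The distances and perihelion speed below are approximate textbook values, not figures from the article:

```python
# Sanity check of the ~1 km/s speed difference between perihelion and aphelion
# using conservation of angular momentum: v_peri * r_peri = v_aph * r_aph.
R_PERIHELION_KM = 147.1e6   # Earth-Sun distance in early January (approx.)
R_APHELION_KM = 152.1e6     # Earth-Sun distance in early July (approx.)
V_PERIHELION_KM_S = 30.3    # orbital speed at perihelion (approx.)

v_aphelion = V_PERIHELION_KM_S * R_PERIHELION_KM / R_APHELION_KM
print(f"aphelion speed ~ {v_aphelion:.2f} km/s, "
      f"difference ~ {V_PERIHELION_KM_S - v_aphelion:.2f} km/s")
```

The ~5-million-kilometer difference in distance works out to just about a kilometer per second, matching the figure in the article.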

Gartner identifies the top 10 strategic IT technology trends for 2016

Top 10 strategic trends 2016 (credit: Gartner, Inc.)

At the Gartner Symposium/ITxpo today (Oct. 8), Gartner, Inc. highlighted the top 10 technology trends that will be strategic for most organizations in 2016 and will shape digital business opportunities through 2020.

The Device Mesh

The device mesh refers to how people access applications and information, or interact with people, social communities, governments, and businesses. It includes mobile devices; wearable, consumer, and home electronic devices; automotive devices; and environmental devices, such as sensors in the Internet of Things (IoT) — allowing for greater cooperative interaction between devices.

Ambient User Experience

The device mesh creates the foundation for a new continuous and ambient user experience. Immersive environments delivering augmented and virtual reality hold significant potential but are only one aspect of the experience. The ambient user experience preserves continuity across boundaries of device mesh, time and space. The experience seamlessly flows across a shifting set of devices — such as sensors, cars, and even factories — and interaction channels blending physical, virtual and electronic environment as the user moves from one place to another.

3D Printing Materials

Advances in 3D printing will drive user demand and a compound annual growth rate of 64.1 percent for enterprise 3D-printer shipments through 2019, which will require a rethinking of assembly line and supply chain processes to exploit 3D printing.
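As a rough check of what a 64.1 percent compound annual growth rate implies: growth compounds multiplicatively, so over the four years from 2015 to 2019 shipments would multiply by (1 + 0.641)^4, roughly 7.25×. A minimal sketch, with a hypothetical 2015 base of 100,000 units (not a Gartner figure):

```python
def cagr(start_value, end_value, years):
    """Compound annual growth rate between two values over a span of years."""
    return (end_value / start_value) ** (1 / years) - 1

# Growing at 64.1% per year compounds to about 7.25x over four years.
growth_factor = (1 + 0.641) ** 4
projected_2019 = 100_000 * growth_factor  # hypothetical 2015 base of 100,000 units
print(f"4-year growth factor: {growth_factor:.2f}x")
```

The same `cagr` helper inverts the calculation: given observed start and end shipment figures, it recovers the annualized rate.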

Information of Everything

Everything in the digital mesh produces, uses and transmits information, including sensory and contextual information. “Information of everything” addresses this influx with strategies and technologies to link data from all these different data sources. Advances in semantic tools such as graph databases as well as other emerging data classification and information analysis techniques will bring meaning to the often chaotic deluge of information.

Advanced Machine Learning

In advanced machine learning, deep neural nets (DNNs) move beyond classic computing and information management to create systems that can autonomously learn to perceive the world, making it possible to address key challenges related to the information-of-everything trend.

DNNs (an advanced form of machine learning particularly applicable to large, complex datasets) are what make smart machines appear “intelligent.” DNNs enable hardware- or software-based machines to learn for themselves all the features in their environment, from the finest details to broad, sweeping abstract classes of content. This area is evolving quickly, and organizations must assess how they can apply these technologies to gain competitive advantage.

Autonomous Agents and Things

Machine learning gives rise to a spectrum of smart machine implementations — including robots, autonomous vehicles, virtual personal assistants (VPAs) and smart advisors — that act in an autonomous (or at least semiautonomous) manner.

VPAs such as Google Now, Microsoft’s Cortana, and Apple’s Siri are becoming smarter and are precursors to autonomous agents. The emerging notion of assistance feeds into the ambient user experience in which an autonomous agent becomes the main user interface. Instead of interacting with menus, forms and buttons on a smartphone, the user speaks to an app, which is really an intelligent agent.

Adaptive Security Architecture

The complexities of digital business and the algorithmic economy combined with an emerging “hacker industry” significantly increase the threat surface for an organization. Relying on perimeter defense and rule-based security is inadequate, especially as organizations exploit more cloud-based services and open APIs for customers and partners to integrate with their systems. IT leaders must focus on detecting and responding to threats, as well as more traditional blocking and other measures to prevent attacks. Application self-protection, as well as user and entity behavior analytics, will help fulfill the adaptive security architecture.

Advanced System Architecture

The digital mesh and smart machines place intense demands on computing architecture. Providing the required boost are high-powered, ultraefficient neuromorphic (brain-like) architectures fueled by GPUs (graphics processing units) and field-programmable gate arrays (FPGAs). These architectures offer significant gains, such as running at speeds greater than a teraflop with high energy efficiency.

Mesh App and Service Architecture

Monolithic, linear application designs (e.g., the three-tier architecture) are giving way to a more loosely coupled integrative approach: the apps and services architecture. Enabled by software-defined application services, this new approach enables Web-scale performance, flexibility and agility. Microservice architecture is an emerging pattern for building distributed applications that support agile delivery and scalable deployment, both on-premises and in the cloud. Containers are emerging as a critical technology for enabling agile development and microservice architectures. Bringing mobile and IoT elements into the app and service architecture creates a comprehensive model to address back-end cloud scalability and front-end device mesh experiences. Application teams must create new modern architectures to deliver agile, flexible and dynamic cloud-based applications that span the digital mesh.

Internet of Things Platforms

IoT platforms complement the mesh app and service architecture. The management, security, integration and other technologies and standards of the IoT platform are the base set of capabilities for building, managing, and securing elements in the IoT. The IoT is an integral part of the digital mesh and ambient user experience and the emerging and dynamic world of IoT platforms is what makes them possible.

* Gartner defines a strategic technology trend as one with the potential for significant impact on the organization. Factors that denote significant impact include a high potential for disruption to the business, end users or IT, the need for a major investment, or the risk of being late to adopt. These technologies impact the organization’s long-term plans, programs and initiatives.