
Smart Society: How AI and Robots Will Change the World



KLAUS Æ. MOGENSEN

Senior Futurist, Editor

Posted Apr 25, 2019 in Technology · Article from Scenario 01:2018

There are more devices connected to the internet than there are people on the planet, and this number is expected to increase more than tenfold in the coming decade, reaching 75 billion units according to the market analysis firm IHS Markit. These devices produce mountains of data, of which only a small percentage is currently being used. Much can potentially be gained from greater use of artificial intelligence and deep learning systems that can analyse more data and spot new patterns and connections.

Here, we present an excerpt from the Copenhagen Institute for Futures Studies’ recent report Smart Society: How AI and Robots Will Change the World, which looks at the broad societal implications of intelligent information technologies such as digital assistants, mixed reality and neural interfaces.

Digital Assistants

Over the coming decades, the surface of the earth will be covered with an increasingly dense layer of data-collecting and data-sharing devices that, with data analysis, provide a growing amount of real-time information. Only a minority of these devices will be personal devices like laptops, smartphones, and wearables (or the future successors of these devices), while the vast majority will be things like vehicles, weather stations, air-quality sensors, traffic sensors, surveillance cameras, biometric scanners, and embedded trackers in all sorts of objects. Today, we have lightbulbs that can be remote-controlled to change colour or lighting pattern, diapers that tell you when they need to be changed, and even pills that tell your doctor whether or not you have swallowed them. Extending this into the future, it is hard to imagine anything that will not be equipped with internet-connected sensors.

We are witnessing the early days of digital personal assistants that help us navigate this flood of information: Apple’s Siri, Amazon’s Alexa, Google Assistant, and more. In the future, such digital assistants will take on more of the functions we associate with a skilled secretary and/or butler. They will be able to find news items, deals, and events that might interest you, and will soon come to know you better than you know yourself. The question is who will get access to the information they collect. Most likely, your assistant will be an application provided by a company, as is the case for Siri, Alexa, and Google Assistant. If so, this company will store all the data collected by your assistant about your needs and behaviour, ostensibly to provide better service in the future, but also to build a detailed customer profile and possibly sell it to third parties. Government agencies may acquire access to this information, and hackers may steal it. As a result, data privacy may become a hot political topic in the future, with authorities regulating what and how much data a company may collect and store about citizens and who they may share it with, whether or not the citizen has agreed by clicking an end-user license agreement (EULA).

Digital assistants may be a strong driver for the sharing economy, since they can easily and quickly find suitable, available resources in a closed or open network, with evaluations of the quality and reliability of the offerings. This can also work the other way around, with the assistant offering its user’s resources. For instance, if a user has to drive from A to B, the assistant can suggest taking on a trusted passenger who has indicated interest in the same trip.
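To make the ride-sharing example concrete, here is a minimal sketch in Python of the kind of filtering logic such an assistant might run behind the scenes. Everything in it (the Trip and Passenger types, the trust threshold, the matching rules) is a hypothetical illustration, not the API of any real assistant.

```python
from dataclasses import dataclass

@dataclass
class Trip:
    """A planned journey, offered by a driver or requested by a passenger."""
    origin: str
    destination: str
    departure_hour: float  # e.g. 7.5 means 07:30

@dataclass
class Passenger:
    name: str
    trust_score: float  # 0.0-1.0, e.g. aggregated peer ratings
    requested: Trip

def match_passengers(drive: Trip, candidates: list[Passenger],
                     min_trust: float = 0.8,
                     max_wait_hours: float = 0.5) -> list[Passenger]:
    """Suggest trusted passengers whose requested trip fits the driver's plan."""
    return [
        p for p in candidates
        if p.requested.origin == drive.origin
        and p.requested.destination == drive.destination
        and abs(p.requested.departure_hour - drive.departure_hour) <= max_wait_hours
        and p.trust_score >= min_trust
    ]

# Example: the assistant checks who could join a morning drive from A to B.
drive = Trip("A", "B", departure_hour=8.0)
candidates = [
    Passenger("Ida", 0.93, Trip("A", "B", 8.25)),
    Passenger("Lars", 0.65, Trip("A", "B", 8.0)),   # filtered out: low trust
    Passenger("Mette", 0.90, Trip("A", "C", 8.0)),  # filtered out: wrong destination
]
print([p.name for p in match_passengers(drive, candidates)])  # ['Ida']
```

A real assistant would of course weigh many more signals (detours, pricing, past behaviour), but the core of the matching step is this kind of constrained search over a network of offers and requests.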

Superhuman Data Analysis

Even today, computers can analyse vastly greater data volumes in seconds than a human being could in a lifetime. IBM claims that its analytical system Watson can “read 800 million pages per second and know findings from over 23 million global research studies.” Through machine learning, Watson has been taught – or arguably has taught itself – to handle a variety of tasks better than human experts, from playing Jeopardy! to diagnosing cancer, and Google’s AlphaGo has beaten the world’s best players at Go, a board game often considered the world’s most demanding strategy game. These are fairly narrow (if complex) applications of machine intelligence, but with advances in both computer capacity and machine learning, we can expect increasingly broad applications of AI in a variety of fields over the coming decades, not only excelling in fields that today need highly trained human experts, but also performing tasks that lie beyond the limits of the human brain.

‘Strong AI’ (based on clustering and associations that come from experience gained through trial and error) is currently using machine learning to solve increasingly complex problems like image recognition, real-time translation, driving cars, and much more, and within a decade or two it will be as good as most (or all) human experts at handling these and other tasks. This development will rewrite the Fortune 500 list as new companies enter the market with AI-driven solutions and business models that outcompete incumbents who have delayed AI implementation for fear of cannibalising their existing core business.

Using AI for basic decision making and knowledge services will free up resources for more important tasks that require analysis of a complexity AI can’t yet handle – or tasks requiring original thinking, which an AI trained on past examples can’t provide.
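To make “AI trained on past examples” concrete, the sketch below fits a classifier to a handful of invented historical decisions and applies it to new cases. The loan-decision framing, the data, and the use of scikit-learn are assumptions for illustration only; the point is that such a model can only reproduce patterns present in its training examples, which is exactly why original thinking remains out of reach.

```python
# A toy illustration of learning from past examples (assumes scikit-learn is installed).
from sklearn.tree import DecisionTreeClassifier

# Hypothetical historical loan decisions: [income (kEUR), existing debt (kEUR)]
past_cases = [[60, 5], [30, 20], [80, 10], [25, 25], [55, 30], [90, 2]]
past_decisions = ["approve", "reject", "approve", "reject", "reject", "approve"]

model = DecisionTreeClassifier(random_state=0)
model.fit(past_cases, past_decisions)

# The model handles routine cases that resemble its training data...
print(model.predict([[70, 8]]))   # likely 'approve', matching past patterns
# ...but it can only extrapolate from those patterns; a genuinely novel
# kind of case has no precedent in the data it was taught from.
print(model.predict([[10, 0]]))
```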

Mixed Reality and Neural Interfaces

Finding, understanding, and sharing information will also be improved by increasingly advanced and ‘intelligent’ interfaces. Virtual reality and augmented reality (collectively called mixed reality) are on the verge of a general breakthrough that will move the technologies from specialist and hobbyist use to mainstream use. With its wearable HoloLens augmented reality system, Microsoft has shown that virtual images can be ‘attached’ to a real setting, allowing the user to examine a model on a table from all angles and even manipulate it by (virtual) touch. This will impact innovation and design, with virtual prototyping partly replacing 3D-printed instant prototyping.

In the past, mixed reality has mainly been used for entertainment purposes, but the technology has now advanced enough for serious, professional use. The German elevator manufacturer Thyssenkrupp is using Microsoft HoloLens to improve elevator repairs, allegedly allowing repairs that traditionally take two hours to be done in as little as 20 minutes.

Virtual reality can be combined with motion-capture interfaces to remote-control drones or robots. The Swiss Federal Institute of Technology (EPFL) is, for example, experimenting with a rig consisting of a VR headset, a jacket, and a chair that allows you to control a drone just by moving your body and provides sensory feedback, creating a more intuitive way to control the drone than with a joystick. The company iKinema has developed a low-cost, laser-based full-body motion capture system that can be used for gaming or movie making. With such a system, a person’s movements can be copied into a virtual setting, allowing natural movement and manipulation of objects (though without a sense of touch, unless haptic feedback gear is used).
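The principle behind such body-motion control can be sketched in a few lines: orientation readings from the rig are scaled and clamped into flight commands. The names, axes, and gains below are hypothetical; this is a sketch of the general idea, not EPFL’s actual control system.

```python
from dataclasses import dataclass

@dataclass
class BodyPose:
    """Torso orientation from a motion-capture rig, in degrees."""
    lean_forward: float   # positive = leaning forward, negative = leaning back
    lean_sideways: float  # positive = leaning right, negative = leaning left

def pose_to_command(pose: BodyPose, gain: float = 0.5,
                    max_deg: float = 30.0) -> dict:
    """Map torso lean to pitch/roll commands, clamped to safe limits."""
    def clamp(x: float) -> float:
        return max(-max_deg, min(max_deg, x))
    return {
        "pitch": clamp(gain * pose.lean_forward),   # lean forward -> nose down
        "roll": clamp(gain * pose.lean_sideways),   # lean right -> bank right
    }

# Example: a pilot leaning gently forward and slightly to the left.
print(pose_to_command(BodyPose(lean_forward=10.0, lean_sideways=-4.0)))
# {'pitch': 5.0, 'roll': -2.0}
```

The clamping step is the interesting design choice: a direct, unbounded mapping from body to aircraft would let a stumble become a violent manoeuvre, so the interface deliberately limits how much authority the body is given.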

You can use augmented or virtual reality to shape three-dimensional objects by hand, like a potter moulding clay into a pot, and then have the virtual objects turned into physical ones by 3D printers. Or, going the other way, you can 3D-scan a physical object and copy it into virtual reality, as is currently being done with fragile artefacts or art objects. A shared virtual work environment may make it far easier for people living far apart to work together on a project, and with augmented reality, people may be virtually present in an otherwise physical environment, as an alternative to using the increasingly common telepresence robots.

In the longer term, we may see interfaces where people can interact with machines or virtual environments simply by thinking, using neural implants. In 2016, an American woman, Melissa Loomis, who had lost her right arm to an infection the previous year, underwent surgery that attached muscles and nerves in her stump to implants that could transmit signals to an armband and – more significantly – receive signals from it. With this interface, Loomis can remote-control a prosthetic arm just by thinking about it, and sensors in the fingers of the prosthetic can send signals back, giving her a sense of touch that registers not only pressure, but also temperature.

The complicated surgery necessary for implanted neural interfaces like Melissa Loomis’ means that such interfaces are far from common and not likely to become mainstream any time soon. Unless you ask business magnate and entrepreneur Elon Musk, who in 2016 founded a new company, Neuralink, that aims to do just that. Musk says that he wants to develop cranial implants called ‘neural laces’ that allow easy brain-machine communication, eliminating the need for any external interfaces. He believes that his company will have such implants ready for use by disabled people as soon as 2021, but goes on to say that a handful of years after this, with the timing depending heavily on regulatory approval, perfectly healthy people will choose to receive such implants to better interface with machines.

In time, Musk says, such neural implants may even allow a sort of digital telepathy in which thoughts are transmitted directly from mind to mind, bypassing the process of transforming ideas into language that must then be decoded by another brain. They might even integrate cloud-based AI computing within ourselves in a way that’s indistinguishable from our core selves. However, others are sceptical about these claims, particularly in the given timeframe.

Scepticism or no, neural interfaces shouldn’t be entirely dismissed. The US military’s research agency DARPA, which works with brain-computer interfaces, e.g. for fighter pilots, has backed the development of a device that with minimal surgery could allow machines to be controlled through thought, much the way Elon Musk envisions. The device, known as a stentrode, can be inserted through the neck via a catheter and guided up to the brain, where it attaches to the wall of a blood vessel; its electrodes can then monitor neuron activity in that part of the brain and translate it into commands.
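In outline, translating neuron activity into commands is a classification pipeline: record a window of signal, reduce it to a feature such as signal power, and map that onto a discrete command. The sketch below illustrates the idea with an invented two-threshold decoder; the feature, thresholds, and command names are all assumptions, and real stentrode decoding is far more sophisticated.

```python
import math

def band_power(samples: list[float]) -> float:
    """Crude proxy for neural activity level: mean squared amplitude."""
    return sum(s * s for s in samples) / len(samples)

def decode_command(samples: list[float],
                   move_threshold: float = 1.0,
                   click_threshold: float = 4.0) -> str:
    """Map a window of recorded activity to a command (thresholds are illustrative)."""
    power = band_power(samples)
    if power >= click_threshold:
        return "click"
    if power >= move_threshold:
        return "move_cursor"
    return "idle"

# Example: a burst of high-amplitude activity decodes as a 'click',
# while near-baseline activity decodes as 'idle'.
rest = [0.1 * math.sin(0.3 * t) for t in range(100)]
burst = [3.0 * math.sin(0.3 * t) for t in range(100)]
print(decode_command(rest), decode_command(burst))  # idle click
```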

Remote-controlling computers and other devices just by thinking seems innocuous enough, though given the array of interfaces we already have, including voice command, it may seem unnecessary except in special cases such as for disabled people or fighter pilots subjected to massive G-forces. Hacking of technology that feeds signals back into the brain, however, should be a cause for concern. Imagine if a hacker could subject a user to constant pain or even insert ideas and thoughts that are indistinguishable from the user’s own. Such concerns may mean that an otherwise feasible technology is delayed indefinitely.
