White Paper, Global Risk Institute: “AI Frontiers: Where We are Today and the Risks and Benefits of an AI Enabled Future”

EDITOR’S NOTE

“AI Frontiers: Where We are Today and the Risks and Benefits of an AI Enabled Future” is the next in a series of Global Risk Institute expert papers on the evolving world of machine learning. The authors, Michael Durland and Matthew Killi, are also preparing a GRI paper on the impact of AI on financial services for the fall of 2017.

EXECUTIVE SUMMARY

“Artificial Intelligence” is a very powerful narrative. In fact, today, many leading thinkers envision a future where machines surpass humans in intelligence. Many of those individuals worry about the abuses of AI, and although they don’t dispute the potential good, they dwell more on the potential bad. Others have a more constructive imagination of the future. They see AI more as a powerful set of tools, with the potential to significantly augment human productivity. They see the risk of “singularity” as over-hyped and distracting. In Part One of our two-part series, we assess the potential near-term risks and benefits of Artificial Intelligence. Later, in Part Two, we explore how AI is expected to impact Financial Services and what specific use-cases we expect to see over the next 1-3 years.

The paper begins by differentiating between the concepts of automation and innovation. Here we define automation as the use of technology as a substitute for an existing process, for example one that is carried out using human labour, and innovation as the use of technology to augment human productivity, enabling humans to do things they could not do before. AI has the potential to both automate and innovate. Although subtle, this distinction is important. Automation and innovation have very different potential implications for both the future of labour and human progress. We discuss the semantics behind AI, and how the phrase “Artificial Intelligence” creates a powerful fictional image that serves to both inspire innovation and evoke fear. The inspiration is important. Fictional narratives such as “Artificial Intelligence” are a vital component of driving forward human progress. Yet, at times this particular narrative acts more as a negative, evoking fears that to date seem mostly unfounded.

We then discuss the current optimism surrounding AI. We provide a brief primer on the most important technology underlying AI today: machine learning. We contrast the various learning paradigms: supervised, unsupervised and reinforcement learning. These concepts are important because they are at the center of an emerging issue in AI, namely, who is accountable for the actions of an AI and how the developer must take a deeper role in curation.

Following this section, we introduce the concepts of Narrow AI, General AI and Artificial Super Intelligence. In Narrow AI, machines perform a narrow set of tasks applied to a narrowly defined problem. Narrow AIs can be integrated to produce highly powerful applications. An example of this is the autonomous vehicle. General AI refers to a machine that is capable of performing the broad array of intellectual tasks of a human. In General AI, machines have human-like cognitive abilities and are capable of reasoning, making decisions, learning and communicating in natural language, and are able to operate in an open system. Creating General AI is a much different and more difficult challenge than creating Narrow AI. Artificial Super Intelligence refers to a computer that is “smarter than a human”, a machine that is capable of performing more than the broad array of intellectual tasks of a human. In this fictional form of machine intelligence, the computer would have the cognitive ability to outperform human brains across a large number of disciplines, even possessing scientific creativity and social skills. Today, all forms of artificial intelligence are instances of Narrow AI. In a world of Narrow AI, we can eliminate from our concern the notion of AI as an existential threat and instead focus on the impact that Narrow AI is likely to have on the world we live in today.

We begin our assessment of the risks and benefits of AI by introducing four key factors that are likely to shape the future of AI in the near term: 1) the identification and application of suitable use cases, 2) the access to large data sets, 3) the scarcity of talent, and 4) the lack of platform technologies. In other words, in order to successfully create AI today you must identify a suitable problem, have access to the data required to train the AI to solve the problem you identify, and have access to the talent and the tools required to develop the AI.

We build three broad scenarios that help us think about the future of AI: “AI Winter”, “Winner Takes All”, and “Collaborative AI”. These scenarios are used to assess the potential benefits and risks associated with AI in the near future. We consider two potential benefits: an increase in human productivity and efficiency, and an increase in our ability to drive future innovation. The latter benefit is a broad category but is meant to capture the tremendous potential for AI to drive future scientific innovation. We consider six potential risks: scope erosion, unemployment, wealth inequality, the exploitation of data, black-box vulnerability and the creation of new systemic risk. The results of this scenario analysis are summarized in Table 1.

The “AI Winter” scenario, in which AI does not live up to its hype, provides the least benefits. Although we deem a “full” AI winter unlikely, we do believe that, given the considerable hype surrounding AI, some type of cooling-off period is likely in the near future.

The “Winner Takes All” scenario, in which a number of companies exploit the potential of AI to achieve an early monopoly position, provides moderate benefits and material risks. We believe these risks are tolerable, and indeed likely necessary for society to further the development of important innovations in AI. We believe that such innovations will increase the potential for material long-term benefits. However, we do believe this scenario should grab our attention. The current discourse of disruption and creative destruction must be understood in the context of a tenuous balance. The outcome we want for society is not disruption but progress. One way we can achieve this objective is the democratization of AI.

The “Collaborative AI” scenario attempts to assess what needs to occur in order for society to maximize the future benefit of AI while minimizing its future risks. To achieve this objective it is helpful for us to perceive artificial intelligence not as the automation of human cognition, but rather as an innovation capable of augmenting human productivity and efficiency. In this scenario, AI is not perceived as a substitute for human intelligence in the possession of a concentrated set of large corporations, but as a complement to human intelligence available to the masses. This scenario can be defined as the state in which we have successfully begun the democratization of the benefits of AI.

Automation vs Innovation

From the dawn of the industrial revolution, machines have improved human productivity and efficiency by automating existing processes and innovating new ones. Automation and innovation in this context are different concepts. Automation can be thought of as a substitute for an existing process, for example one that is carried out using human labour. Innovation, on the other hand, can be thought of as the creation of new technologies that augment human productivity, enabling humans to do things they could not do before. During the industrial revolution, automation and innovation were constrained to physical processes. More recently, the digital revolution has brought exciting new potential. Advances in computer and information technology have enabled society to imagine the potential of automation and innovation in the realm of cognitive processes.
The Semantics

In the mid-1950s, John McCarthy, an American computer scientist, invited leading researchers from a broad array of disciplines to Dartmouth College in Hanover, New Hampshire, to assess the rapid evolution of computer science, and whether someday computers would have the resources to be as intelligent as human beings [1]. This concept was so new to the human imagination that McCarthy felt it deserved a powerful name: “Artificial Intelligence”.

The label Artificial Intelligence (AI) is powerful. Semantics are important. The science fiction fantasy of AI has served to shape our aspirations and inspire our efforts. Talent has been drawn into the discipline of AI because it captures their imagination. These individuals are inspired to be part of the construction of the future, one in which AI is a central feature. Words help create images, and from the very onset of this label, these fantasies, and the fears associated with them, have greatly influenced our perceptions of artificial intelligence. For many, there is a lot to fear about AI. If we automate human cognition, do we not, as humans, now compete with machines for employment? And who controls the machine? Who benefits from the creation of AI? This lack of controllability and perceived participation in benefits reinforces the notion that AI is a threat, rather than an innovation for augmenting human productivity.

Why Now?

More than 60 years after that inaugural gathering at Dartmouth, attention-grabbing headlines like the New York Times’ “The Great AI Awakening” [2] suggest that great progress has been made and that today we are at the cusp of a watershed moment in Artificial Intelligence. The combination of vast volumes of data, unprecedented processing power, and increasingly sophisticated algorithms that enable machines to perceive images and sounds, and to discern complex patterns [3], leads many proponents to believe that AI is now poised to fundamentally transform human life.

Not everyone shares this view. Skeptics like to point out that during the past 60 years computer scientists have regularly claimed to be on the cusp of a breakthrough in AI, only to disappoint. In fact, these cycles of boom and bust became so consistent that they came to be known as “AI summers” and “AI winters” [4]. So what makes us think that the current hype will not suffer the same fate?
Historically, there have been three broad impediments that have limited the success of AI: computational power, access to large data sets, and the development of effective computational algorithms. Today, we have an abundance of both data and economical high-speed processing power – both of which are growing at an extraordinary rate. In fact, some 90% of all the data in the world today was collected in the past 2 years [5]. Now in the era of big data and cheap processing power, two of AI’s biggest constraints are no longer binding.

While the removal of these two constraints is clearly a necessary condition for AI to evolve, their removal alone is not sufficient for AI to thrive. The promise of AI, and the hope that it will someday transform human progress, now rests on the evolution of algorithms, specifically machine learning algorithms.

Primer on machine learning

Artificial Intelligence is often used to describe the appearance of human intelligence exhibited by computers. In the “early years”, AI was primarily based on rules-based programs that delivered rudimentary displays of ‘intelligence’ in a very narrow or specific context. Early progress was limited because real world problems are far too complex to program using a rules-based approach. As a result, these so-called “expert systems” failed to achieve intelligence in any but the narrowest definition. To advance AI, we must be able to “create” intelligence without the need to enumerate the complex rules or concepts that govern intelligent behaviour. This is precisely the goal of machine learning.
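To make this distinction concrete, the following minimal sketch (our illustration, not from the paper) contrasts a hand-enumerated rule of the kind an “expert system” relies on with a model that learns a comparable rule from labeled examples. The messages, labels and library choice (the open-source scikit-learn toolkit) are all assumptions made for the example.

```python
# Contrast: a hand-coded "expert system" rule vs. a rule learned from data.
# All messages and labels below are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Rules-based approach: a human enumerates the logic explicitly.
def rules_based_spam_filter(message: str) -> bool:
    suspicious_words = {"winner", "free", "prize"}  # hand-picked rules
    return any(word in message.lower() for word in suspicious_words)

# Machine learning approach: the logic is inferred from labeled examples.
messages = [
    "You are a winner, claim your free prize now",
    "Meeting moved to 3pm, see agenda attached",
    "Free entry, reply to win a prize",
    "Quarterly risk report is ready for review",
]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam (toy labels)

vectorizer = CountVectorizer()
features = vectorizer.fit_transform(messages)  # word counts as features

model = MultinomialNB()
model.fit(features, labels)  # patterns are learned, not hand-written

test = "Claim your free prize"
print(rules_based_spam_filter(test))                       # True
print(model.predict(vectorizer.transform([test])))         # [1], i.e. spam
```

The point of the sketch is the division of labour: in the first function a human must anticipate every rule, while in the second the developer supplies examples and the algorithm derives the rules, which is why the learned approach scales to problems too complex to enumerate by hand.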

Machine learning is the study of computer programs that can be trained to learn patterns, rules, and concepts from data, yielding trained models that describe the domain from which the data originated. Breakthroughs in subfields of machine learning like deep learning have recently allowed extremely complex models to be constructed, including models of written and spoken language, of our visual and
acoustic world, and of rational and irrational human behaviour. Oftentimes, the ultimate goal of these complex models is to help us make more accurate predictions about the world and to make decisions with the best chances of success. Such models will receive input data about a domain (say, the films a person has watched in the past) and weigh the inputs to make useful predictions (the probability of the person enjoying a different film in the future). Endowed with the ability to learn complex patterns from today’s abundance of data, computers are beginning to mimic humans’ ability to learn from our environment, propelling us towards a new era of artificial intelligence.
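As a hedged sketch of the film example above (the features, data and model choice are all invented for illustration; the paper prescribes no particular implementation), a simple classifier can weigh inputs about a viewer’s history to estimate the probability that they will enjoy a new film:

```python
# A minimal sketch of the film example: predict whether a viewer will enjoy
# a film from features of their viewing history. All data here is invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row describes one (viewer, film) pair:
# [fraction of similar films the viewer finished,
#  average rating they gave the film's genre,
#  film runtime in hours]
X = np.array([
    [0.9, 4.5, 2.0],
    [0.2, 2.0, 3.0],
    [0.8, 4.0, 1.5],
    [0.1, 1.5, 2.5],
    [0.7, 3.5, 2.0],
    [0.3, 2.5, 3.5],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = enjoyed the film, 0 = did not

model = LogisticRegression()
model.fit(X, y)  # the model learns how to weigh each input

# Estimated probability that a new viewer-film pair is an enjoyable watch.
new_pair = np.array([[0.85, 4.2, 1.8]])
print(model.predict_proba(new_pair)[0, 1])
```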

There are many machine learning models, and there is usually more than one way to train each type of model. Moreover, new models and training algorithms are continually being discovered, and at an increasing rate. Needless to say, each pairing of a model and training algorithm has its advantages and disadvantages, and the pair selected to solve a particular problem will depend on many factors:

• size and richness of the data sets
• available computing resources
• model’s ability to scale with increasing size of data set and computing resources
• type of feedback signal used during training
• complexity of the task
• performance requirements of the trained model
• speed and responsiveness of the trained model
• risks associated with mistakes and failures
• stakeholder risk tolerance
• cost of the end-to-end development of the model
• requirements on interpretability of the model

Of the factors listed above, the type of training feedback signal is one of the first considerations to be made, since it can significantly focus the model search and inform the feasibility of the project. There are three basic categories of feedback signals [6]:

In the case of supervised learning, the training data consists of “labeled” examples. During training, the computer ingests a large number of examples that contain both input data as well as the correct answer. Over time, the algorithm finds patterns in the input data to help it predict the correct answer.

In the case of unsupervised learning, there is no prior knowledge of a correct answer. The learning algorithm must draw inferences from the data through techniques such as clustering and hidden Markov models.

In the case of reinforcement learning, the machine interacts directly with an environment and is rewarded when it achieves a desirable outcome and punished when the outcome is negative. Over time, the machine adjusts its decision making to maximize the rewards it receives. The first two feedback signals are contrasted in the sketch below.
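The following minimal sketch (our illustration; the data points and library calls are assumptions, again using scikit-learn) contrasts the first two feedback signals on the same toy data. Reinforcement learning is omitted because it requires an interactive environment rather than a fixed data set.

```python
# Contrasting feedback signals: supervised learning sees labeled examples,
# while unsupervised learning must find structure on its own.
# The two-dimensional points below are invented for illustration.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

points = np.array([
    [1.0, 1.2], [0.8, 1.0], [1.1, 0.9],   # one natural group
    [5.0, 5.1], [4.8, 5.3], [5.2, 4.9],   # another natural group
])

# Supervised: each training point comes with the "correct answer".
labels = np.array([0, 0, 0, 1, 1, 1])
classifier = KNeighborsClassifier(n_neighbors=3)
classifier.fit(points, labels)
print(classifier.predict([[1.0, 1.0]]))   # predicts label 0

# Unsupervised: no labels are given; the algorithm infers the grouping.
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(points)
print(clusters)  # two clusters recovered from the data alone
```

Note that the supervised model can only predict the labels it was shown, while the clustering algorithm discovers the grouping itself but cannot name what the groups mean; that interpretive step falls to the developer.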

Detailing these feedback mechanisms highlights the shifting role of the AI developer. We no longer need to explicitly write down the rules and decisions that an AI needs to follow. Instead, our primary role is to curate the data that the machine will learn from and to decide which model and learning algorithm are best suited to the problem at hand.

Where we are today

So, where exactly are we today in the evolution of AI? To answer this we must first distinguish between three broad stages of development: Narrow AI, General AI and Artificial Super Intelligence. In doing so, we can have a better sense of not only where we are, but where we are going. This will inform our discussion on the imminent risks and benefits, and help avoid overly hypothetical scenarios.

Narrow AI, sometimes referred to as Weak AI, is AI that specializes in one area. These machines perform a very narrow set of tasks that apply to a narrowly defined problem. Narrow AIs can be integrated to produce highly powerful applications.

General AI, sometimes referred to as Strong AI, or Human-Level AI, refers to a machine that is capable of performing the same broad array of intellectual tasks as a human. With General AI, machines have human-like cognitive abilities and are capable of reasoning, making decisions, learning and communicating in natural language, and are able to operate in an open system. Obviously, creating General AI is a much different and more difficult challenge than creating Narrow AI.

Artificial Super Intelligence refers to a computer that is “smarter than a human”, a machine that is capable of performing more than the broad array of intellectual tasks of a human. In this imaginary form of machine intelligence, the computer would have the cognitive ability to outperform human brains across a large number of disciplines, even possessing scientific creativity and social skills.

Today, we are still in the early stages of AI development – all forms of artificial intelligence are instances of Narrow AI. This may sound disappointing, but in reality it is not. Narrow AI, in and of itself, presents tremendous potential. For example, the automobiles we drive today are full of Narrow AI systems, from the computer that figures out when the anti-lock brakes should kick in to the computer that tunes the parameters of the fuel injection systems. Our mobile phones are filled with little use cases of Narrow AI in the form of our favorite apps. Google search is another great example of the application of Narrow AI, enabling the ranking and matching of webpages to your particular need or interest. Sophisticated Narrow AI systems are now widely used across a variety of sectors and industries. Many of these applications combine several Narrow AI algorithms into an integrated system. This approach has resulted in the creation of a number of increasingly sophisticated applications of Narrow AI that we might mistake for General AI, perhaps the best example being the autonomous vehicle, which contains a variety of Narrow AI systems that allow it to perceive and react to the world around it.

The recent visible progress made in AI technologies, such as the autonomous automobile, has spurred the emergence of narratives prophesying the obsolescence of not only jobs, but also the human race itself. This discourse is not limited to headline-grabbing articles but includes industry leaders as well. Elon Musk stated, “With artificial intelligence we are summoning the demon” and “If I had to guess at what our biggest existential threat is, it’s probably [artificial intelligence]”. Other prominent figures, such as Stephen Hawking and Bill Gates, agree.

Although it is tempting to delve into the realm of AI science fiction, it is becoming increasingly important for decision makers, business people, and regulators to take a step back. In a world of Narrow AI, we can eliminate from our concern the notion of AI as an existential threat and instead focus on the impact that Narrow AI is likely to have on the world we live in today.
Key factors that will shape the future of AI

How fast AI will progress and become a commercial reality will depend on several key factors. For the purposes of this paper, we consider four of these factors: 1) the identification and application of suitable use cases, 2) the access to large data sets, 3) the scarcity of talent, and 4) the lack of platform technologies. This is to say that in order to successfully create AI today, you must identify a suitable problem, have access to the data required to train the AI to solve the problem you identify, and have access to the talent and the tools required to develop the AI.

The identification and application of suitable use cases

Somewhat harshly, Box and Draper said, “Essentially, all models are wrong, but some are useful” [7]. Models, like humans, are imperfect. And indeed, many are useful. Models are abstractions of our complex, uncertain and ambiguous world. The more complex, uncertain or ambiguous the use case, the more our models will be imperfect. And this is true for AI. The simpler the problem, the better the training data, the more stable the training data, the better the performance of the AI.

Today’s highly inflated expectations for AI probably exceed its near-term potential. As a result, it is reasonable to expect a period of disillusionment regarding AI that could inhibit its development. The mitigation for this is awareness and honesty. We need to be honest about what AI can do today and be less focused on the fictional expectations of what it may be able to do many years hence. Taking a more pragmatic and critical lens will help identify suitable use-cases and help quell unrealistic expectations.

Access to large data sets

Data is the lifeblood of AI. Training data is a necessary condition for the development of AI. As a result, it is reasonable to expect that in the near term applications of AI will evolve where the data resides. Today, much of the data used for the development of AI is our own personal information, data collected from our own personal transactions and countless interactions with technology. These may or may not be the best use cases for AI from a societal perspective, but it is where the data resides, and it is where many of the near-term applications are likely to evolve.


Matthew Killi

March 1, 2017