Todd Carpenter Talks AI at the Frankfurt Book Fair 2020

While attending the Frankfurt Book Fair in October, Todd Carpenter, Executive Director of NISO, participated in a panel discussion where his role was to offer some brief thoughts about artificial intelligence (AI). While his allotted time of just eight minutes prevented him from exploring some of the more complex elements of AI -- Bayesian probabilities, Markov chains, and the like -- his commentary looked at the more mundane aspects of AI as it currently exists: not the AI of a distant future, but what one might find today in a cell phone, in a discovery system, or in a business. What follows is an edited version of his comments that day, which accompanied the slides displayed below.

Will a Google of AIs Transform the Future of Communications?

AI today may be defined as an algorithmic, statistical approach to problem-solving, powered by the rapid replication of a process and grounded in an explicit understanding of the desired outcome.

Digging into this definition highlights some key characteristics:

  • Algorithmic and statistical, entailing a combination of rules and logic, aimed at deriving an answer.

  • Powered by rapid replication, meaning that AI is a brute-force approach to problem solving, largely based on trial and error, and that it requires an iterative process to generate an answer.

  • An explicit understanding of the desired outcome, meaning that we know the nature of the response we are seeking. “I want to produce something like this.” But the system is not necessarily performing creative problem solving. It is running through a combination of calculations that yield a specific response (a minimal sketch of this kind of trial-and-error process follows this list).
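To make that concrete, here is a minimal sketch, in Python, of that kind of brute-force, trial-and-error loop. The target word, scoring function, and mutation scheme are all invented for illustration; the point is simply that the system iterates rapidly toward an outcome a human defined in advance.

```python
import random
import string

# The "desired outcome" is explicit: a human supplies the target up front.
TARGET = "machine"
ALPHABET = string.ascii_lowercase

def score(candidate):
    """Count the positions where the candidate already matches the target."""
    return sum(c == t for c, t in zip(candidate, TARGET))

guess = [random.choice(ALPHABET) for _ in TARGET]
trials = 0
while "".join(guess) != TARGET:
    trials += 1
    i = random.randrange(len(TARGET))   # mutate one position at random
    old, before = guess[i], score(guess)
    guess[i] = random.choice(ALPHABET)
    if score(guess) < before:           # keep only trials that don't worsen
        guess[i] = old

print(f'Reached "{TARGET}" after {trials} trials')
```

Nothing here is creative: the loop replicates the same calculation thousands of times, keeping whatever happens to move it closer to the answer it was told to find.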

Much of what is considered to be AI is centered around machine learning, the means whereby a system works out a solution to a pre-defined problem. To do so, it needs input, a goal set by the human programmer, and a process by which it can improve its performance.
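As a sketch of those three ingredients, consider fitting a line to a handful of points with gradient descent; the data points, learning rate, and step count below are invented for illustration. The points are the input, minimizing squared error is the goal the programmer sets, and the repeated parameter updates are the process by which performance improves.

```python
# Inputs: points that happen to lie near the line y = 2x + 1 (made-up data).
data = [(0.0, 1.1), (1.0, 2.9), (2.0, 5.2), (3.0, 6.8)]

w, b = 0.0, 0.0        # the model's two adjustable parameters
learning_rate = 0.02

for step in range(2000):              # the iterative improvement process
    grad_w = grad_b = 0.0
    for x, y in data:
        error = (w * x + b) - y       # the goal: drive this error toward zero
        grad_w += 2 * error * x
        grad_b += 2 * error
    w -= learning_rate * grad_w / len(data)
    b -= learning_rate * grad_b / len(data)

print(f"learned y = {w:.2f}x + {b:.2f}")   # close to y = 2x + 1
```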

When thinking about inputs, these could be text, or images, or fish (a total non sequitur). But machines don’t have an understanding of anything fed into them when they start this process. They need to be taught the difference between text and images and fish, thereby introducing the human programmer’s individual biases. We can make this input easier or harder. Machines have trouble understanding nuance, ambiguity, inference, and non sequiturs. Unfortunately, some of the barriers we build into AI aren’t technical; in some instances, they may be legal, such as with copyright.
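One way to picture where those biases enter is a tiny nearest-neighbor sketch; the feature vectors and categories below are entirely made up. The machine can only hand back a label some human chose to assign, and both the categories and the features we decided to measure carry the labeler’s judgments with them.

```python
# Each example pairs a made-up feature vector with a human-assigned label.
labeled_examples = [
    ((0.9, 0.1), "fish"),
    ((0.8, 0.2), "fish"),
    ((0.1, 0.9), "text"),
    ((0.2, 0.8), "text"),
]

def classify(features):
    """1-nearest-neighbor: copy the label of the closest known example."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(labeled_examples, key=lambda ex: distance(ex[0], features))[1]

print(classify((0.85, 0.15)))   # "fish" -- but only because a human said so
```

Relabel the training examples and the identical arithmetic produces different answers; the bias lives in the data, not in the loop.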

As previously noted, machine learning is based on logic, on concrete inputs and outputs, and on probabilities. Decades of game theory and behavioral economics point to the fact that humans don’t always think that way. We also don’t operate at the scale and speed of computers. This is why machines may be better at games like chess than we are. A cutting-edge laptop can calculate many moves ahead in the game and leave all but the best players in the dust. Importantly, it isn’t creative in the way human beings are creative. The machine is built to be logical.
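Chess engines are far more sophisticated, but the brute-force lookahead they rely on can be sketched with a much smaller game. The example below uses Nim (players alternately take one to three stones; whoever takes the last stone wins), with an arbitrary pile size; the machine simply tries every line of play to the end.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def best_move(stones):
    """Return (move, can_the_player_to_move_force_a_win) by exhaustive search."""
    for take in (1, 2, 3):
        if take == stones:
            return take, True        # taking the last stone wins outright
        if take < stones and not best_move(stones - take)[1]:
            return take, True        # leave the opponent a losing position
    return 1, False                  # every line of play loses

move, winning = best_move(21)
print(f"From 21 stones, take {move}; forced win: {winning}")
```

That is logic and speed at scale, not creativity: the program never invents a strategy, it only exhausts the possibilities faster than we can.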

The truth of the matter is that we really have a poor understanding of human intelligence, and no clear definition even of what human intelligence means. For centuries, philosophers, psychologists, and neuroscientists have tried and have failed (so far) to define what thought and consciousness are. We assume that because humans can readily make the mental shift from one thing to another, machines can as well, which is absolutely not the case. Even really good AI can do one thing quite well, but asked to do something else, it will likely fail. These systems are tools, often designed to do one thing well. Think of a robot in a factory that is bolting on the tires. It can do that really well, but don’t ask it to dance.

So here’s my own model of a hierarchy of knowledge, from the most basic task at the bottom to the highest thought at the top -- granted, this is not exhaustive, given my time limitations today. You can see the basics of memorization, vocalization, and seeing/feeling here at the bottom. Memory and recognition are one step up. Connecting the dots, adding 4+4, is next up the pyramid. Then comes higher-level analysis, then understanding/comprehension, and, hopefully, even wisdom.

Associated with these various levels are different capacities of understanding and thought: again, from perception to awareness, processing, consideration, and then up to creativity. The question, then, is what we do within the information services and scholarly communications ecosystem. What can current AI capabilities accomplish?

This is by no means an exhaustive list of the AI tools that could be envisioned. From the bottom to the top…

The question is which of these services are currently in use, which are partially deployed or in their earliest stages of development, and which really aren’t there yet. This, too, goes up the list. Machines are really good at memory; this is why kids these days aren’t taught to memorize multiplication tables in school. Character recognition and text-to-speech are widely deployed; the device in your pocket right now can probably do many of these things. Building up, you see taxonomy development, data protection, and fraud matching. Some of these things are easy; some are more nuanced and complicated. In many ways, the harder the case, the more human intervention is required. And so on up the chain. We don’t really have any creative outputs. Yes, there have been artistic endeavors, but normally those are highly curated efforts supported by machine work at the more basic levels.

Despite these successes, problems abound. Look at the legal and policy frameworks within which these systems operate. Current copyright law is ill-suited to an AI world. Most policy makers are themselves insufficiently trained in the nature and operation of the technology.

There is a lack of transparency in these systems, which unsurprisingly leads to a certain lack of trust. Consider the black-box nature of AI. How does it function? What data is being used to train the AI? It’s hard to know whether the AI is looking at the picture or at the picture’s supporting metadata to solve a particular problem.

And perhaps most troubling are the questions of privacy and ethics. The aforementioned biases, whether in the system’s training or in its algorithmic development, compound existing issues.

Again, machines aren’t as smart as you might think.