How smart is artificial intelligence really?

AI can perform amazing feats or fail spectacularly. Experts say we have to be aware of its limitations.

First unbridled euphoria, then deep disappointment: the history of artificial intelligence (AI) has repeated this pattern several times since the 1950s. Even today, AI sometimes amazes us with what it can do, and sometimes with how stupid it still is.

An example: researchers tested the limits of an AI system that recognizes objects in photos. To do this, they manipulated the image of a living room, inserting an elephant in various places. A person would certainly find the scene odd, but would still recognize the elephant along with the sofa and the books. Not so the AI: depending on where the elephant was placed, it was sometimes overlooked completely or even identified as a chair. The intruding elephant also confused the system's detection of other objects in the picture.
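To see what such a test looks like in practice, here is a minimal, hypothetical sketch in the spirit of that experiment: a pretrained off-the-shelf detector is run on a scene before and after a foreign object is pasted in. The choice of detector, the file names, the paste position and the score threshold are all illustrative assumptions, not the researchers' actual setup.

```python
# A minimal sketch of an "elephant in the room" style test: run a pretrained
# object detector on a scene, paste a foreign object into a copy, and compare
# what is detected. The image files are placeholders, not real data.
import torch
from PIL import Image
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn,
    FasterRCNN_ResNet50_FPN_Weights,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
preprocess = weights.transforms()
categories = weights.meta["categories"]

def detect(img: Image.Image, threshold: float = 0.7) -> set[str]:
    """Return the class names the detector reports above a score threshold."""
    with torch.no_grad():
        out = model([preprocess(img)])[0]
    return {categories[i] for i, s in zip(out["labels"], out["scores"]) if s > threshold}

scene = Image.open("living_room.jpg").convert("RGB")  # placeholder scene photo
elephant = Image.open("elephant_cutout.png")          # placeholder RGBA cutout

modified = scene.copy()
modified.paste(elephant, (200, 150), elephant)  # insert the elephant somewhere

print("before:", detect(scene))
print("after: ", detect(modified))  # the elephant may vanish or be mislabeled
```

Repeating the paste at different positions is what exposes the odd behavior the researchers describe: the same object is found in one spot and missed or mislabeled in another.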

It doesn't take drastic changes to produce such failures. Even a shadow in the image or other small disturbances can lead to false results or to the total failure of an AI system. Google's AI specialists at DeepMind, for example, made headlines by announcing that one of their systems had learned the classic computer game “Breakout” from scratch and reached the level of experienced human players after just four hours of training. As other researchers have shown, however, even the slightest changes to the game can cause the system to fail miserably.
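The mechanism behind many of these image failures can be sketched in a few lines. The following is a minimal, hypothetical illustration of the well-known fast gradient sign method (FGSM): every pixel is nudged slightly in the direction that most increases the model's error. The pretrained network and the random stand-in image are assumptions for illustration, not the systems discussed above.

```python
# A minimal sketch of an adversarial perturbation via the fast gradient sign
# method (FGSM). Model and input are illustrative stand-ins: a stock ResNet-18
# and a random "image", not the systems from the article.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18, ResNet18_Weights

model = resnet18(weights=ResNet18_Weights.DEFAULT).eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in for a photo
label = torch.tensor([207])  # assumed "true" ImageNet class for this image

# Compute the loss of the current prediction against the true label, then
# backpropagate to get the loss gradient with respect to the pixels.
loss = F.cross_entropy(model(image), label)
loss.backward()

# FGSM: move every pixel a tiny step (here ~3% of the value range) in the
# direction that increases the loss the most.
epsilon = 0.03
adversarial = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0)

with torch.no_grad():
    print("prediction before:", model(image).argmax(dim=1).item())
    print("prediction after: ", model(adversarial).argmax(dim=1).item())
```

On a real photograph, the two predictions will typically differ even though the perturbed image looks unchanged to a human eye; that is precisely what makes such attacks worrying.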

One reason for this is that today's AI has no abstract concept of the world. Through deep learning it can recognize patterns in data, but it does not understand what that data means. As Prof. Gary Marcus of New York University writes:

“For many reasons deep learning cannot be considered (as it sometimes is in the popular press) as a general solution to artificial intelligence.”


Where are the self-driving cars?

That may also be one of the reasons why autonomous vehicles are not making the progress predicted just a few years ago. Even Waymo, the self-driving subsidiary of Google's parent company Alphabet, is still far from this goal. The company has indeed launched a self-driving taxi service in Phoenix, Arizona, but the cars operate only in a very limited part of the city and under numerous other restrictions. And Waymo is considered far ahead of every other company in this field.

Driving without accidents requires, above all, that road markings, signs and traffic lights are reliably interpreted at all times. As researchers have demonstrated, even that cannot be taken for granted: adding just a few stickers is enough to make an AI system interpret a stop sign as a speed limit sign. And that's not all. A driver also has to judge a multitude of situations by comparing them with previous experience, something today's AI systems are still poor at.


The difficulty of obtaining clean data

Moreover, today's AI applications only work well if they have plenty of suitable training data at their disposal, and an algorithm can only be as neutral as the data it is trained on. Silicon Valley, for example, has a well-known problem: the people who work there are predominantly white, heterosexual men. It is therefore no wonder that some AI systems fail conspicuously when it comes to dark-skinned people. AI tools for personnel management can likewise reinforce existing prejudices if those prejudices are already present in the training data. Clean, neutral, representative data is much sought after and not always easy to find.
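This mechanism is easy to reproduce on synthetic data. In the following sketch, the groups, distributions and sample sizes are all invented purely for illustration: a classifier trained on data dominated by one group ends up noticeably less accurate on the group it rarely saw.

```python
# A minimal synthetic sketch of data bias: a classifier trained mostly on one
# group performs well on that group and poorly on the underrepresented one.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n: int, shift: float):
    """Two-feature samples; the label depends on feature 0 relative to a
    group-specific shift, so each group needs a different decision rule."""
    X = rng.normal(shift, 1.0, size=(n, 2))
    y = (X[:, 0] > shift).astype(int)
    return X, y

# Training set: 1000 samples from group A, only 20 from group B.
Xa, ya = make_group(1000, shift=0.0)  # majority group
Xb, yb = make_group(20, shift=3.0)    # underrepresented group
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.hstack([ya, yb]))

# Evaluate on fresh, equally sized samples from both groups. Typically the
# score is near-perfect for group A and close to chance for group B.
Xa_test, ya_test = make_group(500, shift=0.0)
Xb_test, yb_test = make_group(500, shift=3.0)
print("accuracy, majority group:        ", model.score(Xa_test, ya_test))
print("accuracy, underrepresented group:", model.score(Xb_test, yb_test))
```

Nothing in the model is "prejudiced" in itself; it simply fits the data it was given, which is exactly why skewed training sets produce skewed systems.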

The great risk of AI today is therefore not that we accidentally create a superintelligence that then sets out to wipe out humanity. The acute danger is rather that we place too much trust in AI without being aware of its limits. Prof. Melanie Mitchell of Portland State University, for example, says:

“The most dangerous aspect of AI systems is that we will trust them too much and give them too much autonomy while not being fully aware of their limitations.”


The bottom line

Human-like AI still seems a long way off. New breakthroughs, methods or possibilities may change that sooner than expected; then again, it could take several decades. None of this should obscure the fact that AI can already perform amazing tasks and support us in many ways. We simply have to keep in mind that these machines can make mistakes and can be manipulated.