AI Ethics I: What do we want to use AI for?

Artificial intelligence can help solve many problems, but it can also create new ones if we are not careful.

Artificial intelligence is rapidly becoming one of the fundamental technologies of the digitalized world. The decades-long dream of scientists and researchers is increasingly becoming reality, even if we are still largely unaware of it. Real-time translations at the touch of a button, facial recognition and voice control would all be inconceivable without AI. But the unquestionably enormous potential of artificial intelligence also poses ethical questions that we urgently need to discuss.

Essentially, two questions stand out:

  1. How will we use AI in the future? Where should boundaries be drawn?
  2. Who controls the development of AI and ensures its social compatibility?

Together, these lead to a third question: how can we steer AI in an ethical direction?


Why we shouldn’t use AI for everything

There is no question that AI can and will make a valuable contribution to the economy and to society as a whole. However, we must also be aware that artificial intelligence is a technology that knows almost no natural boundaries. Facial recognition illustrates the problem well. Amazon originally developed its deep learning service Rekognition as a cloud offering for recognizing, analyzing, and labeling objects and people. Its expansion into facial recognition, however, provoked mixed reactions: US Immigration and Customs Enforcement (ICE) wants to use the software to identify people from a distance at the border. When these plans became known, hundreds of Amazon employees spoke out against selling the software to the US government, fearing that facial recognition could quickly become a tool for mass surveillance.

The issue is therefore not only where to draw the limits of commercially motivated AI use within corporations, but also, and especially, what social consequences follow when state authorities deploy AI technologies.

The example of China: welcome to Orwell’s 1984

We need only look to China to see that such fears are by no means unfounded. Several developments there show how AI is being used for blanket surveillance. An estimated 176 million surveillance cameras record the movements of the population and identify individuals via facial recognition. By 2020, 600 million cameras are planned to tighten the net further, so that a wide range of behaviors can be evaluated and fed into a so-called social score. This is comparable to a gigantic credit rating system for the entire population of China, except that it is not limited to financial matters: cross the street at a red light, for example, and you receive a negative rating.

This social score affects loans, promotions, and housing applications, so many “little sins” can have devastating effects. Even access to trains or planes can be restricted if the score is poor: “Righteous and trustworthy citizens should be able to move freely under the sky. But those who fall into disrepute should have their freedom of movement severely restricted,” says the government. And the next stage of expansion makes the system independent of facial recognition altogether: a new type of AI is intended to identify people by their gait, even if the face is hidden under a hood.
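The mechanics described above, individual actions aggregated into a single score that then gates access to services, can be illustrated with a deliberately simplistic sketch. Every rule, weight, and threshold below is invented for illustration and bears no relation to the actual system:

```python
# Toy model of a rule-based "social score": everyday actions adjust a
# baseline score, and a threshold gates access to services such as
# train or plane travel. All values here are purely illustrative.

BASELINE = 1000

# Hypothetical action -> score adjustment rules
RULES = {
    "crossed_at_red_light": -10,
    "paid_bills_on_time": +5,
    "volunteered": +15,
}

TRAVEL_THRESHOLD = 950  # below this, travel is restricted in the sketch


def social_score(actions):
    """Aggregate observed actions into a single score."""
    return BASELINE + sum(RULES.get(a, 0) for a in actions)


def may_travel(score):
    """Gate freedom of movement on the aggregated score."""
    return score >= TRAVEL_THRESHOLD


citizen = ["crossed_at_red_light"] * 6  # six "little sins"
score = social_score(citizen)
print(score, may_travel(score))  # 940 False
```

Even this trivial version shows why such a system is so troubling: a handful of minor infractions, each insignificant on its own, compound into a hard restriction on basic freedoms, with no context or appeal built into the rules.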

AI personalization is a walk on the razor’s edge

In the world of marketing, AI has many tasks. Alongside numerous automations and data analyses, personalization tops the list. Services like Netflix and Spotify have taken personal recommendations to a new level with their algorithms, and since consumers tend to adopt the best user experience as the new standard, it is now up to every other company to follow suit. The task is highly complex: it involves not only differing technological requirements but also a great deal of sensitivity.

On the one hand, too much personalization can unsettle consumers, especially when transparency is lacking. If customers are addressed in a personalized way in contexts where they are not used to it and do not expect it, the effect on the brand can even be negative, because they may feel they are being followed. On the other hand, personalization also meets with rejection if it is perceived as primarily benefiting the company. The goal is not more effective advertising penetration but greater relevance from the customer’s perspective.

Outlook: Part 2 of this AI Ethics series deals with the quality of AI and why it can become a problem.