Algorithms are the product of human programming and of training with data selected by humans. As such, they can be just as biased as the people involved in their creation. Beyond that, the intentions pursued with an algorithm can themselves violate social values and norms. Algorithms can pick up patterns, explicitly or implicitly, and entrench them as standards that are undesirable when considered as a whole.
This cannot be prevented automatically, but we can establish quality control that detects and corrects unwanted biases. The greater the impact of an AI-based decision on social processes, the more important such a monitoring body becomes. It is also important to understand that disclosing the source code can only be part of the required control process: an algorithm may be programmed with the best intentions, but if the training data contains unwanted patterns, the result can still be problematic. Effective control of AI applications therefore also requires insight into the data they use and documentation of their intended objectives.
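One simple form of such quality control is to measure outcome disparities across groups in the data an algorithm produces or is trained on. The sketch below, using entirely hypothetical data, computes the demographic parity difference, i.e. the gap in favourable-outcome rates between groups, which is a common first check for unwanted bias; it is a minimal illustration, not a complete audit.

```python
# Hypothetical quality-control check: demographic parity difference.
# Records are (group, decision) pairs; decision 1 = favourable outcome.
# All data here is invented for illustration.

def demographic_parity_difference(records):
    """Return (gap, per-group rates): the absolute gap in
    favourable-outcome rates between the best- and worst-treated group."""
    totals, positives = {}, {}
    for group, decision in records:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + decision
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

records = [
    ("A", 1), ("A", 1), ("A", 0), ("A", 1),   # group A: 3/4 favourable
    ("B", 1), ("B", 0), ("B", 0), ("B", 0),   # group B: 1/4 favourable
]
gap, rates = demographic_parity_difference(records)
print(rates)  # {'A': 0.75, 'B': 0.25}
print(gap)    # 0.5 -- a large gap flags the data or model for review
```

A monitoring body could run checks like this against both the training data and the system's live decisions; a large gap does not prove discrimination, but it marks exactly the kind of pattern that human reviewers should examine.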
AI ethics initiatives
There are already some initiatives that have given concrete thought to the requirements AI control should fulfill.
The #algorules code of ethics
Together with the Bertelsmann Stiftung, the iRights.Lab think tank is developing a list of quality criteria for socially relevant algorithms. Since the end of November 2018, an accompanying online citizen survey has also been underway. The following criteria have already been established:
1. Develop competence
The functions and effects of algorithms must be understood.
2. Assign responsibility
A natural person must be responsible for the effects of an algorithmic system.
3. Make goals and expected impact comprehensible
The goals and the expected effect of an algorithm must be made comprehensible.
4. Ensure safety
The use of algorithmic systems must be safe.
5. Increase transparency
The use of an algorithmic system must be indicated as such.
6. Ensure controllability
The use of algorithmic systems must be controllable.
7. Check effect
The effects of an algorithmic system on humans must be checked regularly.
8. Establish correctability
The decisions of an algorithm must never be irreversible.
D64: paper on basic values for artificial intelligence
The think tank “D64 – Zentrum für digitalen Fortschritt” (D64 – Center for Digital Progress) has published a paper on the basic values of AI, in which it formulates 18 concrete demands to ensure that AI, as an essential part of digitalization, contributes to social progress. These include, for example, the creation of ethics councils at national and global level, the adaptation of the legal framework, and the modernization of data protection laws in line with citizens’ rights.
AI Now Institute
The AI Now Institute is a research facility at New York University, founded in 2017 by the researchers Kate Crawford (Microsoft) and Meredith Whittaker (Google), to ensure that AI systems are sensitive to the complex social contexts in which they are deployed. In its 2018 annual report, the institute makes a number of clear recommendations. First and foremost is the demand for state regulation of AI, so that the same rules, standards, and certifications apply across areas of application such as health care, education, criminal law, and social services. It also argues that strict regulation of facial recognition is in the public interest, and it gives the verifiability of AI software higher priority than the trade secrets of AI companies.
The bottom line: We need control instances for AI
The problem areas presented make it clear that the development of AI also needs a political and social discourse. The case of Amazon Rekognition, in particular, is both a warning and a sign of hope. It is a warning because AI can be used even by a constitutional state in ways strongly reminiscent of the totalitarian surveillance state of Orwell’s dystopian novel 1984. It is a sign of hope because the Amazon workforce clearly opposed such use. The demands of the various initiatives are a first step towards a broad discourse in which business, science and research, politics, and the public must be equally involved. If this discourse does not succeed, we risk losing people’s fundamental trust in an otherwise promising technology.
First of all, every company that uses AI should inform itself comprehensively about these problems and hold itself to the highest quality criteria. Assessment by a neutral authority can help achieve an even higher level of trust. The utmost transparency about the code, the data, and the objectives makes algorithm-based decisions less error-prone and more trustworthy.