AI regulation – smart move or real challenge?
Does the development and use of artificial intelligence need a legal framework? Katharina Rieke from the BVDW thinks so. In this interview, she discusses new guidelines and why companies especially should have AI regulation on their radar.
Ethics and AI in Europe
AI technologies are gaining ground, and the European Union wants to become a forerunner in the development and use of artificial intelligence. To this end, the EU Commission recently proposed a comprehensive legal framework that primarily focuses on excellence and trust. But what does that mean? We asked Katharina Rieke, Director of Politics and Society at the German Association for the Digital Economy (BVDW):
What does the term “ethical AI” mean?
Katharina Rieke: Ethics has always been an important issue in our society. The datafication and networking of today’s world are bringing up these old questions again. Especially in the context of AI, these questions are obviously being raised more often. But that doesn’t mean we have to abandon old standards and invent completely new ones. Quite the opposite: we need to translate tried-and-tested ethical standards for the digital age – and revisit certain ethical debates that we may not have deemed so relevant until now.
For instance, when it comes to AI systems we also have to act responsibly and transparently, while being mindful of legal and societal requirements. Against this backdrop, AI systems need to be designed in a values-based way that gives people greater scope for action, while respecting their rights, autonomy, and self-determination.
Why does artificial intelligence need legal regulation? In what areas (of life) does this play a role?
Katharina Rieke: Digitalization comes with opportunities, but also risks. Particularly in the field of AI, these two sides are very pronounced. We can benefit incredibly from AI in terms of our economy and society. Not only does it enable us to establish and improve innovative business models, but it also gives rise to new possibilities, for example in the area of medical diagnosis thanks to AI-driven health services. The opportunities offered by AI permeate virtually all areas of life.
However, besides the many innovative opportunities, there are of course also risks associated with this technology if it isn’t used properly. That’s exactly why regulation is both wanted and needed: its specific aim is to ensure that the fundamental values of the European Union and the fundamental rights of users are observed in all circumstances. Only then can we use the technology to benefit everyone.
The European Commission recently proposed a framework for the use of artificial intelligence. What does it entail and what do the rules set out to do?
Katharina Rieke: The EU Commission has been addressing the topic of AI more intensively since 2018, resulting in the regulation proposal for a European AI approach, published on April 21, 2021, which is to be understood as a legal framework for artificial intelligence at the EU level. Among other things, the proposal contains bans on certain practices in the field of AI, specific requirements relating to “high-risk AI systems”, and obligations to be followed by the operators of such systems. It also lays down harmonized transparency rules for AI systems as well as rules for market monitoring and market surveillance.
Although the BVDW welcomes this European approach and also agrees with the need for regulation, we believe that the rules in their current form are not conducive to the goal. In particular, we would like to stress three aspects:
- Too broad a definition of AI
- Legal uncertainties, especially with regard to high-risk AI
- Requirements and bureaucracy that are in part disproportionate, backed by hefty fines in the proposal
To balance the opportunities and risks of AI now instead of later, the BVDW also feels that it would be more appropriate to opt for a regulatory approach differentiated by sector, since AI is used differently in every industry and thus presents different risks.
In early 2019, the BVDW already published its own AI guideline package. How do these guidelines differ from the new EU rules?
Katharina Rieke: In principle, the guidelines issued by the BVDW and the EU Commission’s measures complement each other well. In its total of eight guidelines, the BVDW proposes, for example, that business, society, and science should be included in the discourse on artificial intelligence, that the European Union must act together, and that we should view AI from a European standpoint. In addition, the BVDW’s overriding goal is to create trust and clarify ethical questions in relation to AI.
However, there are also some aspects where we are at variance with each other: As already mentioned, the EU Commission has opted for a horizontal, across-the-board approach in its AI regulation. The BVDW believes it is more appropriate to take a sectoral approach with precisely defined fields of application for AI. We have also identified topics that aren’t yet part of the discussion, namely changes in the job market brought about by AI and the further establishment of qualified AI specialists in Europe.
How can companies benefit from the BVDW guidelines?
Katharina Rieke: The AI guidelines were a first step to encourage companies to address all the major aspects of artificial intelligence, take a closer look at the topic, and not just view it from their business perspective. We then fleshed out these guidelines and addressed the topic of artificial intelligence more intensively as an association. This gave rise to the AI monitor, which the BVDW has issued annually together with the German Economic Institute (IW) since 2020. The AI monitor allows us to measure how artificial intelligence is developing in Germany – both in terms of the framework conditions and the developments in business and society. Its findings encourage self-reflection at every level, and our recommended actions pave the way for concrete measures.
The EU Commission has said it wants Europe to be at the forefront of trustworthy AI systems. How do you think we will achieve that?
Katharina Rieke: In our view, we will only manage to do that if we strike the right balance between innovation and security in Europe. We must communicate clear fundamental values to companies and give them a framework they can act within. Only then will we establish the required trust in such services. At the same time, though, this framework must be flexible enough not to stifle companies’ capacity to innovate or overwhelm them with obligations and bureaucracy. It’s a fine line that needs to be walked here. That’s why the BVDW is of the opinion that it makes more sense to deal with AI according to the specific industry and application, because the two opposing sides can then be reconciled in a more precise and targeted manner.