Last week the European Union published a series of guidelines to help companies and governments develop ethical AI applications.
The boom in artificial intelligence
Artificial Intelligence, or AI, has dominated many (legal) debates for some time now. The technology has evolved to the point where smart, self-learning computer algorithms can take over many tasks that people traditionally performed. This usually involves software that learns to recognize patterns in large amounts of data and derives decisions or actions from those patterns. AI applications, for example, allow certain medical diagnoses to be automated faster and more efficiently than if a doctor had to do the same work personally. Recent tests show, for instance, that AI software can diagnose skin cancer with great precision.
But Artificial Intelligence and Machine Learning also find useful applications in other sectors. AI can help guide traffic flows, predict individual driving behavior (very interesting for insurers, who can suddenly estimate a given person's accident risk with great precision), predict consumer behavior, tailor education more closely to the needs of students, or detect potential fraud in large amounts of data, for example in the financial sector or the online gambling world.
Risks and concerns
So many potentially useful applications, but at the same time there are many risks and concerns regarding the use of Artificial Intelligence, particularly in the areas of privacy, non-discrimination and individual freedom. After all, AI makes it possible to process data about people on a very large scale and to link data from different sources in order to reach sometimes very unexpected decisions. If those decisions in turn automatically lead to certain consequences (someone is refused insurance or a loan, someone is not allocated a home, etc.), this can result in discrimination that the person concerned can only contest with great difficulty. The hilarious “Computer says no” from the legendary TV series Little Britain suddenly becomes a very real situation.
The EU intervenes
The EU is now trying to intervene preventively and to ensure that governments and companies working with AI apply at least a number of ethical and democratic principles that must guarantee the individual rights and freedoms of citizens in the European Union.
To this end, the EU brought together a group of 52 experts. Based on their professional knowledge and experience, they drew up a list of seven basic requirements that they believe all future AI systems should meet. This list has been incorporated into a report with guidelines that was made public this week.
- Human agency and oversight
AI must not undermine human autonomy. People must not be manipulated or coerced into certain behaviors by AI systems, and people must be able to intervene in or monitor any decision that the software makes.
- Technical robustness and safety
AI must be safe and accurate. AI systems must be able to withstand external attacks and their operation must be reliable.
- Privacy and data governance
Personal data collected by AI systems must be processed securely and confidentially. It must not be accessible to unauthorized persons and must be adequately secured. Incidentally, this also follows logically from a correct application of the GDPR (known in Dutch as the AVG), which already imposes these principles in any case.
- Transparency
The data and algorithms used to create an AI application must be accessible, and the decisions made by the software must be understandable and traceable by people.
- Diversity, non-discrimination and fairness
Services offered on the basis of AI must be available to everyone, regardless of age, gender, race or other characteristics, and, naturally, the software must not discriminate on the basis of these criteria.
- Societal and environmental well-being
AI systems must be sustainable (i.e. they must be ecologically responsible) and must “promote positive social change”.
- Accountability
AI systems must be auditable. Potential negative effects of systems must be investigated and reported in advance. This mirrors the mandatory Data Protection Impact Assessment (DPIA) under the GDPR (AVG).

In addition to the above principles, the report also contains what it calls an “assessment list for trustworthy AI”. This is a checklist of questions that experts can use to identify potential weaknesses or hazards in AI software, and that should help them carry out a prior impact assessment, whether or not one is required under the GDPR.
No binding legislation
These directives are not legally binding, but they can shape future European Union legislation. The EU has repeatedly said it wants to be a leader in ethical AI and has shown with the GDPR that it is prepared to create far-reaching laws that protect digital rights.
The assessment list is also only preliminary, but the EU will collect feedback from companies in the coming years, with a final report on its usefulness due in 2020.
In the meantime, the EU has established the European AI Alliance, a think tank and consultative body in which AI experts from a variety of fields can engage in discussion with the EU High Level Expert Group on Artificial Intelligence within the European Commission. Sirius Legal is, together with representatives from many universities, governments and professional federations, an active member of this AI Alliance.
Questions about artificial intelligence, new technology or software development in general?
We are happy to help you. Feel free to contact Bart Van den Brande or one of the other members of our team at firstname.lastname@example.org or on +32 486 901 931.