Artificial Intelligence («AI») has an increasing impact on our everyday lives, and that impact will continue to grow. Although AI in general is still in its infancy, in medicine it already has a track record, in particular in pattern recognition. The application of machine learning systems may even help to fight the current Covid-19 crisis. At the same time, this adds to the need to clarify how AI should be addressed from a legal perspective, which the European Union («EU») White Paper on Artificial Intelligence intends to outline.
Even before the current coronavirus challenges, the EU Commission had presented a European strategy for AI (2018), and several other EU institutions have added guidelines for dealing with the technology. On 19 February 2020, the EU Commission finally published its White Paper on Artificial Intelligence, presenting policy options intended to enable the trustworthy and secure development of AI in Europe. In light of the coronavirus crisis, the consultation period for the White Paper has been extended from 31 May 2020 to 14 June 2020, and the EU has reaffirmed its commitment that AI will remain a priority.
1. What does the EU White Paper cover?
In its White Paper, the EU Commission stresses the importance of grounding European AI in EU values and fundamental rights such as human dignity and privacy protection. The EU Commission also recognises the significant role AI systems can play in achieving the Sustainable Development Goals, supporting democratic processes and social rights, and enabling the goals of the EU Green Deal.
To foster AI and generate trust, the EU focuses its efforts in the White Paper on two areas, referred to as the «ecosystem of excellence» and the «ecosystem of trust». The ecosystem of excellence focuses on how best to align policy efforts at a European level. The ecosystem of trust sets out the future regulatory framework for AI and highlights where developers or investors will have to put their focus when boosting AI solutions. The implications AI, IoT and other new digital technologies will have for safety and liability legislation are covered in the EU Commission Report accompanying the White Paper. We do not explicitly deal with these additional topics in this blog.
1.1 The ecosystem of excellence
The ecosystem of excellence deals with the framework that intends to align the efforts to regulate and develop trustworthy AI at European, regional and national level. To this end, the White Paper puts a focus on a partnership between the private and the public sector to mobilise the resources needed to achieve this «ecosystem of excellence» along the entire value chain. The outlined measures start with research and innovation and create the right incentives to accelerate the adoption of AI-based solutions. The ecosystem of excellence defines six key actions, namely:
- To foster cooperation between the EU and its Member States, which the EU will explain in more detail in a Coordinated Plan on AI, intended to be published by the end of 2020.
- To focus on research and innovation to strengthen and connect AI research excellence centres as well as to set up testing and experimentation sites on a regional level.
- To improve skills and foster talent so that the necessary resources are available to become a centre of excellence.
- To help SMEs by providing Digital Innovation Hubs and increasing equity financing for AI development and deployment through InvestEU.
- To work with the private sector through new public-private-partnerships («PPP») on AI, data and robotics.
- To promote AI in the public sector by boosting sector dialogues.
1.2 The ecosystem of trust
The White Paper not only defines a framework for the development of AI but also outlines the key elements of a future regulatory framework for AI in Europe, referred to as the «ecosystem of trust». In the paper, the EU argues that creating trust is paramount to successfully fostering AI development, which means the EU's regulatory approach must protect fundamental rights and consumer rights. To do so, the EU intends to pursue a proportionate, risk-based approach, distinguishing in particular between high-risk and non-high-risk applications in its regulatory efforts.
Whether an AI system is considered high-risk shall depend on two factors: the sector of application (e.g. healthcare, transport, public sector) and its specific use and effect (e.g. the impact on the affected parties, legal or similar effects on the rights of an individual or a company, injury, death, or significant material or immaterial damage). Notwithstanding this general approach, the White Paper acknowledges that an AI system may be considered high-risk irrespective of the sector of its application. This has often been discussed in the media in connection with recruitment systems and/or remote biometric identification.
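For developers, the two-factor test can be pictured as a simple decision rule. The sketch below is an illustration only: the sector list, the exceptional uses and the criterion names are our placeholders, not the Commission's definitive wording.

```python
# Hypothetical sketch of the White Paper's two-factor high-risk test.
# Sector and use names are illustrative placeholders, not official lists.

HIGH_RISK_SECTORS = {"healthcare", "transport", "energy", "public sector"}

# Uses the White Paper flags as potentially high-risk irrespective of sector,
# such as recruitment tools or remote biometric identification.
ALWAYS_HIGH_RISK_USES = {"recruitment", "remote biometric identification"}

def is_high_risk(sector: str, use: str, significant_impact: bool) -> bool:
    """Return True if an AI application would fall under the stricter regime."""
    if use in ALWAYS_HIGH_RISK_USES:
        return True  # exceptional uses are high-risk regardless of sector
    # Otherwise both factors must be met: a high-risk sector AND a use with
    # significant (legal, physical or material) effects on those affected.
    return sector in HIGH_RISK_SECTORS and significant_impact

print(is_high_risk("retail", "recruitment", False))   # True: exceptional use
print(is_high_risk("healthcare", "diagnosis", True))  # True: both factors met
print(is_high_risk("retail", "chatbot", False))       # False
```

The point of the sketch is that a sectoral classification alone is not enough: the concrete use and its effects must be assessed as well, and some uses trigger the stricter regime on their own.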
Regarding non-high-risk AI systems, the White Paper states that these will remain entirely subject to the existing EU rules, such as the General Data Protection Regulation («GDPR»). Yet it also envisions a voluntary labelling system for non-high-risk systems, under which an economic operator can decide to obtain an AI quality label to signal to the market that its AI system is trustworthy. However, once the developer or economic operator decides to be subject to such a labelling system, the relevant requirements become binding.
2. What does this regulatory approach mean for a start-up developer or business?
Although the White Paper gives some guidance on how AI will be regulated in the EU, it is not yet a concrete policy proposal.
Nevertheless, it highlights areas that can already be taken into account when developing or implementing an AI tool. In particular, it shows the need to carry out a risk assessment and appropriately address the identified risks while at the same time ensuring compliance with the current legal framework.
In our opinion, at least the following (legal) requirements need to be taken into account when developing an AI system:
- Training data needs to be of high quality as well as respect EU rules and values;
- Records of the relevant data sets and the programming and training methodologies need to be kept;
- Information about the AI system’s performance and existence needs to be provided;
- Robustness and accuracy of the AI system needs to be ensured;
- Human oversight needs to be included at all times.
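For a development team, these points can be tracked as a simple compliance checklist. The sketch below is only an illustration: the field names are our paraphrase of the White Paper's requirements, not legal categories.

```python
# Sketch of a compliance checklist based on the requirements listed above.
# Field names paraphrase the White Paper; they are not statutory terms.
from dataclasses import dataclass, fields

@dataclass
class AIComplianceChecklist:
    training_data_quality: bool = False   # high-quality data respecting EU rules
    record_keeping: bool = False          # data sets, programming/training methods
    transparency: bool = False            # information on performance and existence
    robustness_and_accuracy: bool = False
    human_oversight: bool = False

    def open_items(self) -> list[str]:
        """Return the requirements not yet addressed."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

checklist = AIComplianceChecklist(training_data_quality=True, human_oversight=True)
print(checklist.open_items())
# ['record_keeping', 'transparency', 'robustness_and_accuracy']
```

Keeping such a record from the start of a project also supports the documentation duties that a future regulatory framework is likely to impose on high-risk systems.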
To read the entire EU White Paper on Artificial Intelligence, please click here.
If you are interested to know more about the legal implications of AI systems and how to address them in practice, join us for our Start-up Stories Night on 19 August 2020 in Zurich.
To learn more about the legal aspects of AI and what to look out for when implementing AI solutions in your company sign up to our blog.
Artificial Intelligence throws up legal challenges across disciplines. Froriep’s Artificial Intelligence Focus Area allows us to build teams of lawyers from very different disciplines, all with a burning interest and practical experience in the fast-evolving areas of machine learning and deep learning.
Our team advises clients active in a wide range of fields on regulatory compliance, liability issues, intellectual property, data protection and security, ethical considerations, taxation, contractual and litigious matters as they apply to emerging and disruptive technologies.
Any questions? Please get in touch with Nicola Benz, Head of Focus Area AI & Digitalisation.
Photo credit: Markus Spiske - Pixabay