Dealing safely with AI in your company: 5 core steps towards safe use of AI tools

Reading time: 6 minutes

How do you prepare your company for the use of AI within your teams, and what risks do you need to take into account? We recently shared our insights on this burning topic with companies at various venues, including AI5050 and our own AI’s got talent event. We also had the opportunity to discuss the matter with fellow lawyers at the Cardiff Conference of our partner network Consulegis.

We thought it would be useful to wrap up the key takeaways of those presentations in a blog post. After all, the adoption of AI in companies is happening very quickly, and there is a good chance that your teams are already experimenting with one of the many cloud-based AI tools that seem to be released on a daily basis. That in itself is fantastic, because the benefits are enormous. However, the use of AI also carries risks that you should not overlook.

Risks of AI for your company

The first thing to consider is ethical risk. The upcoming AI Act places great emphasis on ethics, transparency and the protection of fundamental rights.

The improper or incorrect use of AI applications can lead to discrimination, (unintentional) exclusion or unequal treatment. It is therefore essential that you understand the decision-making of your AI applications and can ensure that they consistently follow ethical principles. In the near future, the AI Act will impose a series of obligations on companies that develop or use AI. For example, in many cases you will be required to carry out an AI Impact Assessment before you start working with AI, and in a number of cases you will also have to register AI tools in advance with the (yet to be established) supervisory authority.

Apart from that, data protection and the GDPR are obviously crucial points of attention when using AI tools. After all, many AI applications process personal data, and as soon as that is the case, you have to take into account all the (sometimes heavy) obligations that the GDPR entails. Unfortunately, many AI tools, and many companies that use those tools today, do not take sufficient account of GDPR obligations, resulting in a serious risk of fines, liabilities and reputational damage.

Intellectual property rights are also a real challenge for those who bring AI into their company. We already described these risks in a previous blog post, and there are plenty of widely publicized court cases, such as the Getty case in the US, that show how easily intellectual property can be infringed using AI.

Not only is the danger of infringing the intellectual property of others real when using generative AI; your own IP and trade secrets can also be at risk if tools such as ChatGPT are used improperly by the teams within your company. Even large multinationals are not immune to this, as evidenced by the very painful incident at Samsung, where confidential R&D information was leaked after an employee entered highly sensitive data as part of a prompt to ChatGPT. Unintentionally leaking confidential information and trade secrets is not only a concern for multinational corporations; lawyers, too, have to be extremely careful when using tools such as ChatGPT, to ensure that the sensitive information of their clients is not revealed without their consent.

Protecting your IP as a company is crucial. Your intellectual property is of vital value to your company, and infringements of other people’s rights can cost a lot of money. Caution is therefore required here.

In addition to the points we have already mentioned, AI entails many other legal risks, among which is the question of liability. If an AI system makes a mistake, who is responsible? The legal waters surrounding AI liability are still murky, so it’s important to consider this in advance and protect yourself as best you can.

Protect your company against unnecessary risks in 5 steps

1. Internal processes and transparency

How do you protect yourself against these risks? Start by setting up internal processes and ensuring transparency. This helps everyone in the company understand how and why AI is being used. Make sure you have a complete inventory of all AI used in your company and avoid so-called “shadow AI”: AI tools that are used without prior approval.
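To make that inventory concrete, it can help to keep a simple structured register of every tool in use. The sketch below shows one possible shape for such a register, in Python for illustration; the fields, the second tool name and the example entries are purely hypothetical, not a prescribed format.

```python
from dataclasses import dataclass

@dataclass
class AIToolRecord:
    """One entry in the company-wide AI tool inventory (illustrative fields)."""
    name: str                      # e.g. "ChatGPT"
    vendor: str                    # who supplies the tool
    used_by: str                   # team or department using it
    purpose: str                   # what it is used for
    processes_personal_data: bool  # triggers GDPR obligations (DPIA etc.)
    approved: bool                 # False = potential "shadow AI"

# Hypothetical example entries, for illustration only
inventory = [
    AIToolRecord("ChatGPT", "OpenAI", "Marketing", "drafting copy", False, True),
    AIToolRecord("TranscribeBot", "Acme AI", "HR", "interview notes", True, False),
]

# Flag any tool in use without prior approval ("shadow AI")
shadow_ai = [t.name for t in inventory if not t.approved]
print("Shadow AI to review:", shadow_ai)
```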


2. Vendor assessments

Make sure you have a solid vendor assessment process. Before working with an AI vendor, verify that they meet your standards for safety, ethics and reliability. You do this by setting up a solid purchasing process and maintaining a good checklist with minimum criteria that AI suppliers must meet in terms of transparency, security, GDPR compliance and the necessary preliminary audits (AI Impact Assessments, Data Protection Impact Assessments), and in the future also compliance with the AI Act.
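To illustrate, such a minimum-criteria checklist can be as simple as a set of yes/no questions that every vendor must pass. The sketch below is a minimal example based on the criteria named above; the specific criteria and the all-or-nothing pass rule are our own illustrative assumptions, not an official or exhaustive list.

```python
# A hypothetical minimum-criteria checklist for AI vendors,
# based on the points mentioned above (illustrative, not exhaustive).
VENDOR_CHECKLIST = {
    "transparency_about_model_and_data": None,  # fill in True/False per vendor
    "security_certifications": None,
    "gdpr_compliance_documented": None,
    "ai_impact_assessment_available": None,
    "dpia_supported_or_performed": None,
    "prepared_for_ai_act_obligations": None,
}

def vendor_passes(answers: dict) -> bool:
    """A vendor only passes if every minimum criterion is met."""
    return all(answers.get(criterion) is True for criterion in VENDOR_CHECKLIST)

# Example: one unmet criterion is enough to fail the assessment
answers = dict.fromkeys(VENDOR_CHECKLIST, True)
answers["prepared_for_ai_act_obligations"] = False
print(vendor_passes(answers))  # False
```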

Furthermore, ensure that solid standard procurement contracts are in place. Those contracts should provide sound safeguards and guarantees from the supplier that limit your risk as an entrepreneur as much as possible. In that context, the standard AI procurement contracts for governments drawn up by the European Commission are especially interesting. We will certainly return to those standard contractual clauses in a future blog post.


3. Impact Assessments

Carrying out regular impact assessments is essential. This ensures that you understand the technical and ethical impact of your AI and can respond quickly to any related issues.

AI Impact Assessments (AIIA) are not yet mandatory. Until the AI Act comes into effect, you can continue without such an assessment, but keep in mind that an AIIA will often be mandatory in the future. The AI Act will also, like the GDPR, impose an accountability obligation on you as an entrepreneur: you will have to be able to demonstrate at any time that you are handling AI responsibly and in accordance with your legal obligations. In view of these upcoming obligations, carrying out an AIIA in good time is already a wise decision today.

We already mentioned the GDPR above: if your AI tool also processes personal data, a prior Data Protection Impact Assessment (DPIA) is almost inevitable. It is a legal obligation that follows directly from the GDPR and that many companies have wrongly (and often unknowingly) ignored to date. Ignoring it exposes you to significant legal and financial liability.


4. Internal guidelines and policies

Fewer than 10% of companies working with AI today have an “AI policy” for their staff. However, a good internal policy with clear guidelines is the very first step towards safely handling AI within your company, so it is essential to set up clear internal guidelines. By establishing such policies and rules, everyone within the company will be sufficiently informed on how to use AI safely and effectively. In such guidelines you can specify, for instance, whether AI tools may be used and under which conditions, who is in charge of validating or authorizing their use, which data may or may not be entrusted to AI, what security measures are required, and whether a prior impact assessment must be carried out.
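By way of illustration, the elements listed above can even be captured in a structured, machine-readable form, which makes the rules easier to communicate and to check consistently. The sketch below is one hypothetical way to express such a policy; the tool names, data categories and rules are invented for the example.

```python
# A hypothetical internal AI policy expressed as structured data,
# mirroring the elements listed above (illustrative only).
AI_POLICY = {
    "allowed_tools": ["ChatGPT"],        # tools approved for use
    "approver": "IT & Legal",            # who validates/authorizes new tools
    "forbidden_data": [                  # data that may never go into an AI tool
        "personal data",
        "trade secrets",
        "client-confidential information",
    ],
    "security_measures": ["SSO", "no training on our prompts"],
    "impact_assessment_required": True,  # prior AIIA/DPIA before adoption
}

def may_use(tool: str, data_category: str) -> bool:
    """Check a proposed use of an AI tool against the policy."""
    return (tool in AI_POLICY["allowed_tools"]
            and data_category not in AI_POLICY["forbidden_data"])

print(may_use("ChatGPT", "trade secrets"))  # False: blocked by policy
```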


5. Strong contracts

Finally, conduct regular contract audits. Check the underlying contracts you enter into with AI providers and other technical parties, such as your website builder and web hosting provider, your online marketing partner, and the providers of the various software tools you use internally (CRM, e-mailing, accounting package, HR management, etc.). Pay special attention to whether the necessary transparency, guarantees and restrictions around AI are provided. Ensure that your legal and financial risks are sufficiently addressed and covered contractually.

Questions about the legal framework surrounding AI?

Feel free to email bart@siriuslegal.be or schedule a meeting right here. Soon we will offer AI-related packages that will help you implement the above points correctly within your company.

Schedule a free appointment

About the author

Bart Van den Brande

I am the founder and Managing Partner of Sirius Legal. In 2010, I decided to leave the Brussels big city law scene behind me to start practi...