
AI policy for businesses: a tool to avoid costly mistakes when using AI in your business

The widespread availability of Artificial Intelligence (AI) is changing the way we work. Even if you don’t actively use ChatGPT, AI is already embedded in everyday digital office tools such as Word, Slides, Docs and Sheets. The winners will be those who successfully harness the benefits of AI: increasing personal and team productivity and reducing routine tasks while improving performance.

But like any innovative tool, AI must be used properly and safely. Only then can we reduce the risk of costly mistakes and leaks of sensitive information. Yet many Lithuanian companies have not defined rules for the safe and responsible use of AI in their operations, which inevitably leaves them vulnerable.

As tech giants race to develop and deliver cutting-edge AI solutions, US and EU regulators are grappling with how to regulate AI developers effectively. Meanwhile, companies that want to use these technologies to streamline their operations must manage the potential risks themselves. It is therefore essential that company leaders clearly understand those risks so they can adopt a strong, sustainable internal policy defining the use of AI within the organisation.

How will we ensure the quality of the results produced by AI?

While AI can certainly contribute to excellent results, there is still a risk of biased, inappropriate or incorrect outputs. This is a major challenge for companies that rely on AI to make important business decisions or to interact with customers. For example, using a text generator to respond to customer complaints or queries may produce inaccurate or inappropriate responses, leading to negative customer experiences or even legal consequences. In addition, employees who rely solely on AI outputs rather than their own experience can directly damage the quality of their work or even their employer’s image.

Well-drafted rules on the use of AI should require employees to apply the same level of quality control to AI-generated outputs as they do to other work, and to use these tools only when they genuinely improve the workflow. This ensures that the results are in line with the organisation’s values, standards and internal rules.

These rules should also address quality control, confidentiality, digital security, intellectual property rights, the protection of personal data, compliance with legislation and disclosure of information to clients. Finally, the policy should not omit liability for non-compliance with the rules it sets.

How will we ensure confidentiality, digital security and the protection of personal data?

The internal policy should explain that entering text into AI text generators carries the same risks and responsibilities as sharing information with any other third party. Data is not only shared with the company that owns the AI product, but is often included in the tool’s training dataset, and may therefore be disclosed to other users. Companies should ensure that employees do not inadvertently disclose confidential information and trade secrets to third parties, and should treat data entered into AI text generators as external disclosures covered by the organisation’s data confidentiality and security policy.

The incident at Samsung Electronics, where an employee uploaded confidential software code into ChatGPT, is a well-publicised example of the potential risks. Following it, Samsung Electronics banned the use of any generative AI solutions entirely for a time. This could have been avoided if the company had put a policy on employees’ use of AI solutions in place beforehand.

Digital security is closely linked to confidentiality. Sharing confidential information with third parties through AI tools can compromise a company’s security systems. In addition, AI’s ability to absorb information and act autonomously makes it vulnerable to manipulation by hackers, which could lead to data breaches or the disclosure of sensitive information.

In principle, the same rules apply to the protection of personal data. Employees need to be very clear that uploading personal data to AI systems means sharing it with a third party, which in turn requires a specific legal basis. In many cases of AI-assisted data processing no such basis will exist, so company managers should ensure that information is de-personalised when AI tasks are designed.
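
To illustrate what de-personalisation might look like in practice, here is a minimal sketch in Python, assuming a simple regex-based redaction step run before a prompt is sent to any external AI tool. The patterns and placeholder tags are illustrative assumptions, not a complete solution: they catch only obvious identifiers, and names or other context-dependent personal data would still need human review or dedicated anonymisation tooling.

```python
import re

# Illustrative sketch only (not legal or security advice): a naive,
# regex-based redaction step applied to a prompt before it leaves the
# organisation. The patterns below are assumptions and catch only
# obvious identifiers; names and other context-dependent personal data
# still require human review or dedicated anonymisation tooling.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def depersonalise(text: str) -> str:
    """Replace matched identifiers with placeholder tags such as [EMAIL]."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Reply to the complaint from jonas@example.lt, tel. +370 612 34567."
print(depersonalise(prompt))
# Prints: "Reply to the complaint from [EMAIL], tel. [PHONE]."
```

A policy can then require that prompts containing client or employee data pass through such a redaction step, with anything the automated check cannot classify escalated for human review.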

How will we protect intellectual property?

AI systems are often trained on publicly available data, and users may not even realise that generated content reproduces proprietary material. Developers of AI solutions generally disclaim liability for such infringements, so it falls to companies themselves to ensure that their use of AI respects the rights of third parties.

Another important aspect is that, to date, there is still no firm answer as to who should be considered the author of a work or other intellectual property created with the help of an AI system. This means that companies using AI systems for creative activities, innovation and the like must take into account that intellectual property created with the help of AI may not be effectively protected.

How will we ensure compliance with the law?

In the most heavily regulated sectors, such as healthcare and finance, using AI means ensuring compliance with sector-specific legislation, for example by establishing policies that define employees’ roles and responsibilities when using AI text generators.

How will we inform clients about the use of AI?

Ultimately, companies need to decide whether and how they will disclose their use of AI to customers. There are different ways to do this. At one end of the scale, a company may stay silent about the fact that an AI system contributed to a result; at the other, it may add a disclaimer to every document created with the help of AI, or include a contractual clause stating that certain information may be AI-generated. Employees have the right to know clearly which path their company has chosen.