The proposed legislation will require all GenAI developers to provide detailed summaries of the content used to train their models
The EU Parliament has approved its AI Act, the world's first comprehensive set of rules designed to regulate this technology. Under the act, AI systems will be divided into risk categories, including an 'unacceptable risk' tier under which models posing systemic risk would be banned. The act will come into force 12 months after it becomes law and is subject to formal approval from ministers of EU member states.
It’s expected the act will have an impact beyond the EU’s borders, much as GDPR has shaped the management of people’s data worldwide. Generative AI (GenAI) regulation will be split into two tiers, the first of which will require compliance with EU copyright law and summaries of the content used to train models. The second tier, reserved for models deemed high risk, involves regular incident reports and more stringent testing.
According to Yohan Lobo, Industry Solutions Manager, Financial Services at M-Files, ensuring GenAI tools only use accurate and well-structured data – typically the same data the company already manages and uses – is vital for companies bound to comply with the EU AI Act.
Yohan said: “Now that GenAI has broken into the mainstream, businesses across industries are rushing to implement these solutions and get ahead of the competition. However, firms can only adopt a GenAI tool if they are sure it is safe and reliable.
“The EU’s AI Act adds another layer of complexity for business leaders with concerns about the safety of their models. The majority of GenAI solutions will fall into the first tier of regulation, where organisations must prove that the data the model is grounded in can be relied upon.
“The impact of the bill is likely to permeate beyond the EU, with other nations and governing bodies sure to follow suit if the legislation is a success. As a result, it’s crucial that companies developing GenAI models consider how they can better align with any upcoming regulatory changes.
“Satisfying the requirements of the EU AI Act is dependent upon three key pillars: trust, security, and accuracy. The easiest way to comply with the legislation is by deploying a solution that operates on reliable internal data. A question all companies should ask themselves is: do they trust their data? If so, they can count on the results their GenAI tool produces.
“Correctly implementing an approach driven by internal data means businesses can act with certainty on their AI outputs, improving productivity by equipping knowledge workers with tools to quickly search for, access and analyse the information they need. When a model is given time to adapt to the data it is built upon and learn more about the requirements of the individual employees it services, it will grow in intelligence and intuition and further support the needs of its users.”
Yohan concluded: “Trust, security, and accuracy are all intrinsically linked, and companies looking to embed a GenAI strategy that complies with the EU AI Act should begin by organising data across all operations. In doing so, they can lay the foundation for a GenAI tool that protects their customers while delivering vital work automation that will increase efficiency and streamline processes for employees.”