12 September 2024

Roadmap to AI compliance now a top priority

Artificial Intelligence | The European | Ana Paula Assis

The EU AI Act is an important step for the future of artificial intelligence, yet only an effective compliance strategy will allow organisations to capitalise on the advantages it will bring, says Ana Paula Assis of IBM



The EU AI Act has ushered in a new age for artificial intelligence in Europe, creating a long-awaited legal framework that all organisations operating in the EU will soon be required to follow. As the world’s first comprehensive legal framework for AI, the act is intended to ensure the safety and transparency of AI systems across the EU, with its implementation setting a critical precedent for countries and organisations worldwide. 

For those organisations that currently deploy AI applications within their operations, these new regulations might appear daunting and will undoubtedly require an initial increase in investment to ensure risk and technical assessments are completed in the designated timelines. However, the new framework should be viewed as hugely beneficial for businesses, bringing certainty and clear guardrails for AI strategies for the first time. 

By following a series of critical steps, organisations can invest in and accelerate their AI strategies with confidence, building robust frameworks that both ensure compliance with the AI Act now and prepare them for future developments. 

A risk-based approach

The crucial first step for companies seeking compliance is understanding the Act’s risk-based approach, which determines the level of preparation required. Under this approach, AI systems are analysed and classified according to the risk their application poses, with higher risk levels attracting progressively stricter obligations.

At the top sits the “unacceptable risk” level, covering prohibited practices such as social scoring. The “high-risk” level encompasses areas such as critical infrastructure, recruitment, and credit scoring. “Limited-risk” use cases involve technologies like chatbots and deepfakes, which carry transparency obligations. Finally, the “minimal-risk” level includes AI-enabled games and spam filters.
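
To make the taxonomy concrete, the short Python sketch below maps these tiers to the example use cases named above. It is purely illustrative: the labels and category lists are shorthand for this article, not legal definitions, and any real classification would follow the Act’s own criteria.

```python
# Illustrative only: a simplified mapping of the AI Act's risk tiers to the
# example use cases mentioned above. Labels are shorthand, not legal text.
RISK_TIERS = {
    "unacceptable": {"social scoring"},
    "high": {"critical infrastructure", "recruitment", "credit scoring"},
    "limited": {"chatbots", "deepfakes"},
    "minimal": {"AI-enabled games", "spam filters"},
}

def tier_for(use_case: str) -> str:
    """Return the illustrative tier for a use case, defaulting to
    'unclassified' so unknown systems are flagged for manual review."""
    for tier, examples in RISK_TIERS.items():
        if use_case in examples:
            return tier
    return "unclassified"

print(tier_for("credit scoring"))  # -> high
```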

Importantly, generative AI itself will not automatically be classified as high-risk, but its use will need to comply with transparency requirements and EU copyright law, such as disclosing that content was generated by AI and publishing summaries of the copyrighted data used for training. What’s more, high-impact general-purpose AI models that might pose a systemic risk, such as GPT-4, the model behind the most advanced version of ChatGPT, will have to undergo additional evaluations.

As with most major pieces of legislation, the EU AI Act includes an implementation period, giving organisations a defined timeframe in which to meet all requirements. The EU has adopted a phased approach, with prohibitions taking effect after six months and most provisions becoming applicable after two years.

At IBM, we consider this approach and framework appropriate to ensure transparency, accountability, and human oversight in developing and deploying AI while fostering healthy innovation and competitiveness in Europe. It aligns with our commitment to ethical AI practices and will serve to promote trust and confidence in an open AI landscape, something we can all benefit from. 

Critical steps to compliance

Of course, with set timelines now in place, organisations should start preparing to meet the standards as soon as possible. 

The first step is completing a comprehensive model inventory. Most businesses will already have several AI applications running in various areas of the company, but these will likely be siloed and operating on different systems. Many companies will also have underestimated the extent of the use cases already in operation. Getting your house in order by completing an internal inventory of all AI and machine learning (ML) applications within your business is vital.
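
As a rough illustration of what such an inventory might capture, the sketch below defines a minimal record per system. The field names are assumptions made for this article rather than a prescribed schema; the point is simply that every application, wherever it is siloed, ends up in one central register.

```python
# A minimal sketch of an internal AI/ML inventory record. Field names are
# illustrative assumptions, not a prescribed schema.
from dataclasses import dataclass

@dataclass
class ModelRecord:
    name: str               # e.g. a hypothetical "cv-screening-ranker"
    owner: str              # accountable team or individual
    business_unit: str      # where the system is deployed
    purpose: str            # what the system is used for
    platform: str           # the (possibly siloed) system it runs on
    risk_tier: str = "unclassified"  # to be set during the risk assessment

inventory: list[ModelRecord] = []

def register(record: ModelRecord) -> None:
    """Add a system to the central inventory so nothing stays siloed."""
    inventory.append(record)

# Hypothetical example entry:
register(ModelRecord(
    name="cv-screening-ranker",
    owner="talent-acquisition",
    business_unit="HR",
    purpose="rank incoming CVs",
    platform="internal ML platform",
))
```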

The second step is undertaking a risk assessment. Once the inventory is complete, organisations will need to establish the risk level of each system to ensure they are fulfilling all relevant obligations. For example, high-risk use cases come with seven essential requirements relating to human oversight, technical robustness, privacy and data governance, transparency, diversity and fairness, societal wellbeing, and accountability. A complete risk assessment will also consider reputational and operational risks, elevating the assessment beyond AI Act compliance and taking a long-term, holistic approach to AI governance.
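
One way to operationalise this step, again as an illustrative sketch rather than a definitive implementation, is to track the seven essential requirements as a checklist and surface whatever evidence is still missing for a high-risk system.

```python
# A hedged sketch: track evidence against the seven essential requirements
# for high-risk systems named above. The structure is illustrative only.
HIGH_RISK_REQUIREMENTS = [
    "human oversight",
    "technical robustness",
    "privacy and data governance",
    "transparency",
    "diversity and fairness",
    "societal wellbeing",
    "accountability",
]

def outstanding_items(risk_tier: str, evidence: dict[str, str]) -> list[str]:
    """Return the requirements still lacking documented evidence.
    Only high-risk systems are held to the full list in this sketch."""
    if risk_tier != "high":
        return []
    return [req for req in HIGH_RISK_REQUIREMENTS if req not in evidence]

print(outstanding_items("high", {"transparency": "model card v2"}))
```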

The third step in the process is to implement the ten technical standards being developed to support the requirements of the Act. European standardisation organisations are currently drafting these, and the International Organization for Standardization (ISO) has recently released ISO/IEC 42001, an AI management system standard that may be adopted by the EU as a framework for risk management systems. Companies will be obligated to implement these standards and conduct a conformity self-assessment; alternatively, they can opt for a third-party conformity assessment. It is a complicated process, which is why companies must start familiarising themselves with the standards now to ensure they are ready for implementation.

Building a trusted ecosystem 

Organisations going through the compliance process should use this opportunity to bolster AI governance strategies in tandem. Building a framework for responsible, governed AI will allow organisations to operationalise with confidence, manage risk and reputation, build employee trust, and meet stakeholder demand. 

Much of this relies on introducing cross-company workflow management tools. Until now, AI has mostly belonged to data science teams, but the onset of new regulations, in Europe and in jurisdictions worldwide, means AI compliance will extend into more areas of an organisation, such as legal and risk. These departments might be new to the AI lifecycle, so building workflow management structures will ensure stakeholders are aligned and transparency remains front and centre.

Once your organisation’s AI applications and processes have gone through a model inventory, building a series of automated governance workflows in line with compliance requirements will establish in-built robustness and agility, boosting compliance and ensuring everything remains in place as requirements evolve.
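
What might one of those automated workflows look like? The sketch below, which assumes the checklist idea from the earlier examples, shows a simple deployment gate that blocks release while compliance items remain open; the logic and names are illustrative assumptions, not the Act’s wording.

```python
# Illustrative deployment gate for a governance workflow: block release
# until no compliance items remain open. Logic and names are assumptions.
def deployment_gate(risk_tier: str, open_items: list[str]) -> bool:
    """Return True only when a system may be deployed under this sketch."""
    if risk_tier == "unacceptable":
        return False  # prohibited practices are never deployed
    if open_items:
        print("Blocked; outstanding items: " + ", ".join(open_items))
        return False
    return True

print(deployment_gate("high", ["human oversight"]))  # blocked -> False
```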

Perhaps the most important part of building a successful AI programme is establishing an AI ethics board. Compliance with the Act itself demands a certain level of ethical consideration, but companies must also define their own ethical approach to AI, establishing a set of guidelines to dictate its implementation and future innovation. This approach has underpinned our own AI development at IBM, where a stringent ethical framework dictates which use cases we advance, which clients we work with, and our trusted approach to data and copyright. By carving out their own ethical framework, organisations can create alignment with all stakeholders from the start of their AI journey and mitigate reputational risks that may appear further down the line.

Potential challenges

We are, of course, in the early days of the EU AI Act, and the risk-based approach is not expected to pose significant compliance challenges for most organisations. That said, some sectors may face a steeper learning curve than others. Financial services firms, for example, are accustomed to stringent and comprehensive regulation and will already have the legal and risk frameworks in place to incorporate the new rules. Less regulated sectors, such as leisure and the arts, or even HR departments within companies, may need to take additional steps to meet the standards, such as creating new regulatory workflows or providing additional staff training.

Challenges of harmonisation

Another challenge that could arise is the emergence of diverging global frameworks. The EU has been clear that it wants the Act to set a global standard, hoping that multinational organisations will eventually need to comply with only one set of standards across their operations. The World Trade Organization has also encouraged its members to adopt ISO standards, which are aligned with the EU’s approach. However, there are early indications this may not happen straight away. For example, to push innovation and competitiveness and to attract AI companies, the UK has chosen a different approach to regulation, which may be supported by different technical standards. UK regulations are likely to be broadly aligned with the EU’s, given shared priorities and the desire for ease of cross-border business, but there may be key differences that organisations will need to navigate.

Top priority

The EU AI Act signifies an important, responsible step for the future of AI. For organisations looking to capitalise on the regulatory certainty and increased competitiveness it will bring, setting out a roadmap to compliance as soon as possible should be a top priority in 2024. 

About the author
Ana Paula Assis is Chair and General Manager, EMEA, IBM.
