This blog post comes with a health warning!
Disclaimer: We do not claim to be professional advisers in the legal and regulatory space, BUT we do have a great deal of practical, hands-on experience of data and AI, and of the regulations our clients need to work within.
Currently, many of our clients at Beyond are in the midst of significant data and digital transformations—updating data systems, driving innovation, and leveraging AI for both decision-making and delivering new customer experiences.
So, as AI continues to gain traction and awareness, we are well placed to help organisations get to grips with managing the regulatory landscape that comes with it.
This blog post therefore aims to give our clients an approachable primer on AI regulation in the EU, with insights into what the UK might do and practical guidance to help you navigate this complex environment.
We focus on the EU regulations because they are ahead of the UK's, and we expect the UK to follow the EU pretty closely.
The AI Regulatory Landscape in the EU
The European Union is pioneering its AI regulation with the goal of balancing innovation and risk management.
For a layman's view, it helps to break the EU's approach down into its key components: the Act itself, its compliance requirements, and the associated penalties.
The AI Act
The AI Act, the centrepiece of the European Commission's proposal, is designed to manage the risks associated with AI while fostering innovation. In simple terms, it takes a risk-based approach, categorising AI systems into four risk levels (a short illustrative sketch in code follows the list):
Unacceptable Risk: Banned systems, such as social scoring by governments.
High Risk: Systems used in critical areas like healthcare and transportation, requiring strict compliance.
Limited Risk: Systems needing transparency, like chatbots.
Minimal Risk: Most AI systems, with no additional regulatory oversight.
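To make the taxonomy concrete, here is a minimal, purely illustrative Python sketch of how the four tiers might be modelled. The use-case labels are hypothetical examples of ours; a real classification depends on the Act's annexes and proper legal analysis.

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four risk tiers, paired with their regulatory consequence."""
    UNACCEPTABLE = "banned outright"
    HIGH = "strict compliance obligations"
    LIMITED = "transparency obligations"
    MINIMAL = "no additional regulatory oversight"

# Hypothetical example mappings only; real assessments need legal analysis.
EXAMPLE_USE_CASES = {
    "government social scoring": RiskTier.UNACCEPTABLE,
    "diagnostic support in healthcare": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLE_USE_CASES.items():
    print(f"{use_case}: {tier.name} ({tier.value})")
```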
Compliance Requirements: High-risk AI systems must adhere to stringent requirements around data governance, documentation, transparency, human oversight, and robustness.
Timelines: These rules come into effect on a staggered timetable, measured from the date the law is enacted (the short sketch after this list shows the date arithmetic):
Prohibited AI (unacceptable risk) Systems – 6 months
General purpose AI and penalties – 12 months
All rules (excluding Article 6.1/Annex II) – 24 months
Rules in Article 6.1/Annex II – 36 months (this is the section covering obligations for high-risk systems; it needs the longest lead time because it requires more complex investigation, new processes, and standards development)
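To see how the staggered timetable plays out in practice, here is a small sketch computing the application dates from an assumed enactment date. The date below is a placeholder of ours; substitute the actual entry-into-force date.

```python
from datetime import date
from dateutil.relativedelta import relativedelta

ENACTMENT = date(2024, 8, 1)  # placeholder enactment date, for illustration only

# Transition periods in months, per the timetable above.
MILESTONES = {
    "Prohibited (unacceptable-risk) systems": 6,
    "General-purpose AI and penalties": 12,
    "All rules except Article 6.1 / Annex II": 24,
    "Article 6.1 / Annex II (high-risk obligations)": 36,
}

for rules, months in MILESTONES.items():
    print(f"{rules}: apply from {ENACTMENT + relativedelta(months=months)}")
```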
Penalties: Non-compliance can result in substantial fines, similar to GDPR penalties. These are tiered, ranging from €10 million or 2% of total worldwide annual turnover up to €30 million or 6% of total worldwide annual turnover, whichever is higher in each tier. In addition, there will be specific penalties for banned practices and for failures in documentation and transparency.
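As a worked example of the tiers, here is a short sketch of the fine arithmetic, assuming the GDPR-style "whichever is higher" rule and a hypothetical turnover figure:

```python
def max_fine(turnover_eur: float, fixed_eur: float, pct: float) -> float:
    """Return the larger of the fixed cap and the turnover-based cap."""
    return max(fixed_eur, turnover_eur * pct)

turnover = 2_000_000_000  # hypothetical company: €2bn worldwide annual turnover

# Lower tier: €10m or 2% of turnover; upper tier: €30m or 6%.
print(f"Lower tier: €{max_fine(turnover, 10_000_000, 0.02):,.0f}")  # €40,000,000
print(f"Upper tier: €{max_fine(turnover, 30_000_000, 0.06):,.0f}")  # €120,000,000
```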
For more detailed information, we recommend you check out the EU AI Act Proposal.
Episode 2 of our podcast "Tyfano", hosted by our CEO, Paul Alexander, discusses the EU AI Act in detail and investigates how this new legislation could shape the future of Artificial Intelligence, echoing the transformative influence GDPR has had on data privacy.
GDPR's Role in AI
The General Data Protection Regulation (GDPR) will remain a critical cornerstone in the AI regulatory landscape, and its role will include governing how personal data is used in AI systems. There are three concepts you should be aware of:
1. Data Minimisation - Data minimisation is the principle that only the minimum amount of personal data necessary for a specific purpose should be collected and processed. AI systems must be designed to limit the personal data they process to what is strictly necessary, reducing the risk of misuse or breach of sensitive information (see the first sketch after this list).
2. Transparency - Transparency is about providing clear and accessible information to individuals about how their personal data is being collected, processed, and used, especially in the context of AI decision-making. Users must be informed when an AI system is processing their data, the purposes of that processing, and how the AI reaches its decisions. The aim is to foster trust and allow users to make informed choices about their data and its use.
3. Rights of Individuals – this covers four key principles (the second sketch after the list shows how such requests might be handled in practice):
Access: Individuals will have the right to access their personal data being processed by AI systems. This means they can request and receive a copy of their data.
Rectification: Individuals can request corrections to their personal data if it is inaccurate or incomplete.
Erasure: Also known as the "right to be forgotten," individuals can request their personal data be deleted under certain conditions, such as when the data is no longer needed for the originally intended purpose.
Objection to Data Processing: Individuals have the right to object to the processing of their personal data, particularly if it is used for direct marketing or profiling purposes.
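To illustrate the data minimisation principle, here is a minimal sketch in which only the fields a stated purpose actually needs survive. The field names and the purpose map are hypothetical examples of ours.

```python
# Hypothetical allow-lists: the fields each processing purpose strictly needs.
PURPOSE_FIELDS = {
    "churn_prediction": {"tenure_months", "plan", "monthly_spend"},
    "support_routing": {"product", "issue_category"},
}

def minimise(record: dict, purpose: str) -> dict:
    """Drop every field not strictly necessary for the stated purpose."""
    allowed = PURPOSE_FIELDS[purpose]
    return {field: value for field, value in record.items() if field in allowed}

raw = {
    "name": "Alice",               # not needed for churn scoring
    "email": "alice@example.com",  # not needed for churn scoring
    "tenure_months": 18,
    "plan": "pro",
    "monthly_spend": 42.0,
}
print(minimise(raw, "churn_prediction"))
# {'tenure_months': 18, 'plan': 'pro', 'monthly_spend': 42.0}
```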
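And for the rights themselves, here is a deliberately simplified, in-memory sketch of a request handler covering access, rectification, erasure, and objection. The store, field names, and request labels are all hypothetical; a production system would sit on real storage with identity checks and audit trails.

```python
# Toy in-memory store of personal data, keyed by subject id (hypothetical).
STORE = {"u1": {"email": "alice@example.com", "plan": "pro", "objected": False}}

def handle_request(kind: str, subject_id: str, changes: dict | None = None):
    record = STORE.get(subject_id)
    if record is None:
        raise KeyError(f"No personal data held for subject {subject_id}")
    if kind == "access":          # supply a copy of the personal data held
        return dict(record)
    if kind == "rectification":   # correct inaccurate or incomplete fields
        record.update(changes or {})
        return dict(record)
    if kind == "erasure":         # the "right to be forgotten"
        return STORE.pop(subject_id)
    if kind == "objection":       # halt profiling / direct-marketing processing
        record["objected"] = True
        return dict(record)
    raise ValueError(f"Unknown request type: {kind}")

print(handle_request("access", "u1"))
print(handle_request("rectification", "u1", {"email": "new@example.com"}))
```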
These principles protect consumers and ensure that AI systems operating in the EU are designed and deployed with data protection and privacy at their core, aligning with the GDPR's overarching goal of safeguarding personal data and individual rights.
Explore more about GDPR here.
Ethical Guidelines and Other Regulations
The EU has also developed guidelines to ensure AI is trustworthy and ethical. The Ethics Guidelines for Trustworthy AI emphasise principles such as human agency, privacy, transparency, and accountability.
Read about these guidelines here.
The UK's Approach to AI Regulation
Post-Brexit, the UK is navigating its own way in AI regulation. Whilst it is aligning closely with the EU's frameworks due to the interconnected nature of both our physical and digital economies, the style and approach reflect a slightly different attitude.
The National AI Strategy
The UK's strategy is centred on three pillars: investing in AI, effective governance, and leveraging AI opportunities across various sectors.
Investment: This includes funding for AI research and development, programmes to foster innovation, and targeted support for AI startups and businesses.
Governance: The strategy commits to establishing robust governance frameworks that ensure AI is developed and used responsibly, with a focus on ethical considerations and public trust.
Exploitation of Opportunities: The strategy places particular emphasis on using AI to drive growth and efficiency in key sectors such as healthcare, finance, and transportation.
Read more about this here: UK National AI Strategy
Regulatory Sandbox
The UK is exploring a regulatory sandbox model, which allows businesses to test AI innovations in a controlled environment with regulatory oversight.
The point of this approach is to provide a safe space for experimentation, helping to identify and mitigate risks early while promoting innovation.
Regulatory support in the sandbox helps businesses understand and comply with legal requirements, enhancing the development of compliant and ethical AI technologies.
Information about this can be found here: UK Information Commissioner's Office (ICO) Sandbox
How does the Sandbox work in practice?
In practice, a regulatory sandbox is an experiment operating in a controlled live environment rather than a closed system using dummy data.
It works as follows:
Businesses apply to participate in the sandbox, outlining their technology and the regulatory questions or challenges they seek to address.
Regulatory authorities review the applications and select participants based on criteria such as the potential for innovation, consumer benefit, and regulatory implications.
Participants operate in a controlled live environment, meaning they can test their AI applications with real users and data under regulatory oversight. This environment is designed to mitigate risks and ensure that any potential issues are identified and managed effectively.
Throughout the testing period, participants receive support and feedback from regulators. This includes clarifying legal requirements, helping to address compliance issues, and advising on best practices.
The sandbox allows for iterative development, enabling the innovators to optimise their solution in a safe space.
Once testing is complete, successful participants exit the sandbox with a clearer understanding of regulatory expectations and a more mature product.
They can then scale their AI technology in the real world with greater confidence in its compliance and safety.
The Office for AI
The Office for AI is responsible for overseeing AI regulation in the UK, ensuring it aligns with ethical standards and maintains the trust of the general public.
Its responsibilities include coordinating AI policy across government departments, engaging with stakeholders, and promoting the UK's AI capabilities on the global stage.
The Office for AI aims to ensure that AI development and deployment adhere to ethical guidelines, focusing on fairness, transparency, and accountability.
Read more about the Office for AI
What are the differences between the UK and EU Approaches to AI Regulation?
As mentioned at the start of this blog, we have made the EU Act our primary focus because the UK is closely following its principles and, given our close ties, we are bound by its requirements for practical purposes if nothing else.
However, post-Brexit, the UK and the EU have developed their own approaches to AI regulation. Whilst very similar, each reflects its own regulatory philosophy and strategic priorities, and it is helpful to touch on these to get a feel for the nuances.
1. Regulatory Framework
EU Approach: The EU takes a risk-based regulatory framework, with its four risk categories, strict requirements, and rigorous compliance standards.
UK Approach: The UK has taken what it sees as a more flexible, principles-based regulatory framework. It aims to strike a balance between promoting innovation and ensuring safety, with less prescriptive rules than the EU's. The UK wants to emphasise agile, adaptive regulation that keeps pace with technological advancements. The aim is ultimately to make the UK an attractive place to develop solutions, and potentially to attract the riskier but higher-value innovations that may require a more flexible framework within which to develop.
2. Governance and Oversight
EU Approach: The EU has established robust mechanisms, including national competent authorities and a European Artificial Intelligence Board (EAIB) to ensure consistent application and enforcement across member states.
UK Approach: The UK has set up the Office for AI to oversee AI regulation, but with a focus on coordination and stakeholder engagement rather than enforcement. The UK's approach is more decentralised, relying on existing regulators such as the Information Commissioner’s Office (ICO) for data protection aspects. It will be interesting to see which model is most effective.
3. Innovation and Experimentation
EU Approach: While the EU encourages innovation, its regulatory framework has been criticised by some as more restrictive due to the strict compliance requirements for high-risk AI systems.
UK Approach: The UK is seeking to promote innovation through initiatives like the regulatory sandbox to foster experimentation and iterative development, encouraging businesses to innovate with regulatory support. They are hoping this makes the UK more attractive for developers.
4. Focus Areas and Strategic Priorities
EU Approach: The EU places a strong emphasis on protecting fundamental rights, ensuring safety, and addressing ethical concerns related to AI.
UK Approach: The UK's strategy is more focused on economic growth and competitiveness, with a sector focus that looks to position the UK as a global leader in AI innovation. Ethical considerations remain critical, but the UK wants to set a different tone, one in favour of innovation.