The EU Artificial Intelligence Act

Jonathan Armstrong of Punter Southall Law


What’s the EU Artificial Intelligence Act all about?

The EU Artificial Intelligence Act – the so-called EU AI Act – came into force on 1 August 2024. It will change the way in which AI is regulated across Europe, and it has extra-territorial effect too.

But what will the new Act do and how should organisations prepare?

What is the EU AI Act?

The EU AI Act was passed by the European Parliament on 13 March 2024 and formally adopted by the EU Council on 21 May 2024. It was published in the Official Journal on 12 July 2024 and came into force 20 days later, on 1 August 2024.

The EU AI Act is in the form of an EU Regulation. The EU AI Act aims to ensure that AI systems placed on the EU market and used in the EU are safe. The EU claims that the EU AI Act is the first-ever comprehensive legal framework on AI worldwide.

What’s the EU approach?

The first thing to say is that even before the passing of the EU AI Act, AI was not completely unregulated in the EU. There has already been enforcement activity against AI under GDPR, including:

  1. The Italian Data Protection Authority (DPA) ban on the Replika AI chatbot;
  2. The delayed EU launch of Google’s Bard AI tool after intervention from the Irish DPA;
  3. Italian DPA fines for Deliveroo and a food delivery start-up over AI algorithm use;
  4. Clearview AI fines under GDPR.

The EU’s regulatory approach in the EU AI Act aims to be risk-based, which, according to the EU, works as follows:

Minimal risk

Most AI systems present only minimal or no risk to citizens’ rights or safety. There are no mandatory requirements, but organisations may nevertheless voluntarily commit to additional codes of conduct for these systems if they wish. Minimal risk AI systems generally perform simple automated tasks with no direct human interaction, such as email spam filtering.

High-risk

Those AI systems identified as high-risk will be required to comply with strict requirements, including: (i) risk-mitigation systems; (ii) obligation to ensure high quality of data sets; (iii) logging of activity; (iv) detailed documentation; (v) clear user information; (vi) human oversight; and, (vii) a high level of robustness, accuracy and cybersecurity.

Providers and deployers will be subject to additional obligations regarding high-risk AI. Providers of high-risk AI systems (and of the GPAI models discussed below) established outside the EU will be required to appoint an authorised representative in the EU in writing. In many respects this is similar to the Data Protection Representative (DPR) provisions in GDPR. There is also a registration requirement for high-risk AI systems under Article 49.

Examples of high-risk AI systems include:

  1. some critical infrastructures, for example, for water, gas and electricity;
  2. medical devices;
  3. systems to determine access to educational institutions or for recruiting people; or
  4. some systems used in law enforcement, border control, administration of justice and democratic processes.

In addition, biometric identification, categorisation and emotion recognition systems are also considered high-risk.

There are some exemptions for AI systems which would ordinarily be high-risk, but where these exemptions apply there is still a record-keeping requirement, a little like the DPIA process under GDPR. It will be important to have proper assessment tools in place to record this assessment, as it must be produced to a regulator on demand.

Unacceptable risk

AI systems considered to constitute a clear threat to the fundamental rights of people will be banned outright six months after the Act entered into force (i.e. from 2 February 2025). This includes:

  • AI systems or applications that manipulate human behaviour to circumvent users’ free will, such as toys using voice assistance that encourage dangerous behaviour in minors, systems that allow so-called “social scoring” by governments or companies, and some applications of predictive policing;
  • Some uses of biometric systems, for example emotion recognition systems used in the workplace, some systems for categorising people, and real-time remote biometric identification for law enforcement purposes in publicly accessible spaces, subject to some narrow exceptions.

Specific transparency risk

These are also called limited-risk AI systems, and they must comply with transparency requirements. When AI systems such as chatbots are used, users need to be aware that they are interacting with a machine. So-called “deep fakes” and other AI-generated content will have to be labelled as such, and users will have to be informed when biometric categorisation or emotion recognition systems are being used.

In addition, service providers will have to design systems so that synthetic audio, video, text and image content is marked in a machine-readable format, and detectable as artificially generated or manipulated.
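The Act does not prescribe a particular marking technology; provenance metadata and watermarking are the directions most often discussed. Purely as a toy sketch of what “machine-readable marking” can mean in practice, the Python snippet below embeds and reads back an AI-generated flag as a PNG text chunk using Pillow. The tag names are our own invention, not any official standard.

```python
# Toy illustration only: embed and read back a machine-readable
# "AI-generated" flag as a PNG text chunk. The tag names are invented
# for this sketch; real-world marking would follow an agreed standard
# (e.g. provenance metadata or watermarking), which the Act does not
# prescribe.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Stand-in for some synthetic image content.
img = Image.new("RGB", (64, 64), "white")

meta = PngInfo()
meta.add_text("ai_generated", "true")           # hypothetical tag
meta.add_text("generator", "example-model-v1")  # hypothetical tag
img.save("synthetic.png", pnginfo=meta)

# A downstream system can then detect the marking programmatically.
loaded = Image.open("synthetic.png")
print(loaded.text.get("ai_generated"))  # -> "true"
```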

What is a risk-based approach?

Put simply, the higher the risk of harm to society, the stricter the rules.

The European Commission’s materials accompanying the Act illustrate this with a pyramid diagram: a small number of unacceptable-risk uses at the top, then high-risk and transparency-risk systems, with the bulk of minimal-risk systems at the base.
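As a rough sketch of how the four tiers might be captured in an internal compliance tool (our own modelling, not anything prescribed by the Act), consider the following; real classification of a system needs legal analysis against Article 5 and Annex III.

```python
# A minimal sketch of the Act's four risk tiers for an internal
# compliance inventory. The tier names follow the Commission's summary
# above; the descriptions are paraphrases, and actual classification
# requires legal analysis against Article 5 (prohibitions) and
# Annex III (high-risk systems).
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict requirements: risk management, logging, human oversight"
    TRANSPARENCY = "disclosure duties, e.g. chatbots and deep fakes"
    MINIMAL = "no mandatory requirements; voluntary codes of conduct"

# Example: tagging systems in an inventory with an assumed tier.
inventory = {
    "email spam filter": RiskTier.MINIMAL,
    "CV screening tool": RiskTier.HIGH,
    "customer service chatbot": RiskTier.TRANSPARENCY,
}
for system, tier in inventory.items():
    print(f"{system}: {tier.name} - {tier.value}")
```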

What about general purpose AI?

The EU AI Act introduces dedicated rules for so-called “general purpose” AI (GPAI) models aimed at ensuring transparency. Generally speaking, a “general purpose AI system” means an AI system that is intended by the provider to perform generally applicable functions such as image and speech recognition, audio and video generation, pattern detection, question answering, translation and others.

For very powerful models that could pose systemic risks there will be additional binding obligations related to managing risks and monitoring serious incidents, performing model evaluation, and adversarial testing – a bit like red teaming to test for information security issues. These obligations will come about through codes of practice developed by a number of interested parties.

What is systemic risk?

Systemic risk:

  1. Is specific to the high-impact capabilities of general purpose AI models;
  2. Has a significant impact on the EU market due to its reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or society as a whole;
  3. Can be propagated at scale.

Broadly speaking, there are two categories of GPAI: conventional GPAI and systemic risk GPAI. There are specific requirements for providers of GPAI models, and additional, more rigorous, requirements for providers of GPAI models with systemic risk, for example extra assessment and reporting obligations.
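One concrete marker comes from the Act itself: under Article 51, a GPAI model is presumed to have the “high-impact capabilities” that trigger the systemic-risk regime when the cumulative compute used for its training exceeds 10^25 floating-point operations (the Commission can also designate models individually). A minimal sketch of that presumption:

```python
# Article 51 presumption, sketched: training compute above 1e25 FLOPs
# means a GPAI model is presumed to have high-impact capabilities and
# falls into the systemic-risk regime. This toy check ignores the
# Commission's separate power to designate models individually.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def presumed_systemic_risk(training_flops: float) -> bool:
    """True if the model is presumed to pose systemic risk."""
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

print(presumed_systemic_risk(5e25))  # True: extra assessment and reporting duties
print(presumed_systemic_risk(1e24))  # False: conventional GPAI obligations apply
```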

GPAI models can be integrated into a wide variety of systems and processes to conduct a wide variety of tasks. These additional requirements seek to address concerns that highly powerful models could cause wide-ranging harm, such as disruption of critical sectors, damage to public health, and dissemination of illegal or false content.

What about enforcement?

National so-called “market surveillance authorities” (MSAs) will supervise the implementation of the EU AI Act at the national level. Member States are to designate at least one MSA and one notifying authority as their national competent authorities, and must appoint their MSAs by 2 August 2025 to supervise the application and implementation of the Act. It is by no means guaranteed that each Member State will appoint its DPA as the in-country MSA, but the European Data Protection Board pushed for them to do so in its plenary session in July 2024.

In addition to in-country enforcement across the EU, a new European AI Office within the European Commission will coordinate matters at the EU level and will also supervise the implementation and enforcement of the EU AI Act as it concerns general purpose AI models.

With regard to GPAI, the European Commission, and not individual Member States, has sole authority to oversee and enforce the rules on GPAI models. The newly created AI Office will assist the Commission in carrying out various tasks.

In some respects this system mirrors the current regime in competition law with in-country enforcement together with EU co-ordination. But this could still lead to differences in enforcement activity across the EU as we’ve seen with GDPR, especially if the same in-country enforcement bodies have responsibility for both GDPR and the EU AI Act.

Might I be subject to dawn raids?

Yes, in certain circumstances. The first is in relation to testing high-risk AI systems in real-world conditions. Under Article 60 of the Act, MSAs will have the power to conduct unannounced inspections, both remote and on-site, to check on that type of testing.

The second is that competition authorities may conduct dawn raids as a result of this Act. MSAs will report annually to national competition authorities any information identified in their market surveillance activities that may be of interest to them. Competition authorities have had the power to conduct dawn raids under anti-trust laws for many years, and they might therefore conduct raids based on information or reports received under this Act.

What are the penalties for non-compliance?

When a national authority or MSA finds that an AI system is not compliant, they have the power to:

  • Require corrective actions to make that system compliant;
  • Withdraw, restrict, or recall the system from the market.

Similarly, the Commission may also request the above actions to enforce GPAI compliance.

Non-compliant organisations can be fined under the new rules, as follows:

  • €35 million (around US$37.5 million at today’s rate) or 7% of global annual turnover for the preceding year, whichever is higher, for violations of banned AI applications;
  • €15 million (around US$16 million at today’s rate) or 3%, whichever is higher, for violations of other obligations, including the rules on general purpose AI models;
  • €7.5 million (around US$8 million at today’s rate) or 1.5%, whichever is higher, for supplying incorrect, incomplete, or misleading information in reply to a request.

For SMEs, including start-ups, each fine is instead capped at the lower of the two amounts rather than the higher.
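As a worked illustration of the arithmetic (the figures come from the tiers listed above; the function itself is our own sketch, not an official calculator):

```python
# Sketch of the fine-cap arithmetic: for each tier the cap is the higher
# of the fixed amount and the turnover percentage, while for SMEs it is
# the lower of the two. Illustrative only.
def max_fine(turnover_eur: float, fixed_cap_eur: float, pct: float,
             sme: bool = False) -> float:
    pct_cap = turnover_eur * pct
    return min(fixed_cap_eur, pct_cap) if sme else max(fixed_cap_eur, pct_cap)

# A company with EUR 2bn global annual turnover breaching a prohibition:
print(max_fine(2_000_000_000, 35_000_000, 0.07))  # 140000000.0 (the 7% limb applies)

# The same breach by an SME with EUR 10m turnover:
print(max_fine(10_000_000, 35_000_000, 0.07, sme=True))  # 700000.0
```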

What is the EU AI Office?

The European AI Office was established by the Commission in January 2024 as a new EU-level regulator. It was established with the aim of being the centre of AI expertise and the foundation for a single EU AI governance system, and will support and collaborate with Member States and experts. The European AI Office will also seek to facilitate uniform application of the AI Act across Member States. Despite its name, the remit of the Office is the EU and not the whole of Europe.

The AI Office will monitor, supervise, enforce, and evaluate compliance with the EU AI Act GPAI requirements across Member States. This is also the body that will produce the Codes of Practice for GPAI.

The Commission has granted the AI Office powers to conduct evaluations of GPAI models, investigate possible infringements of the rules on GPAI models, request information from model providers, and apply sanctions.

The AI Office will also act as Secretariat for the AI Board and convene meetings.

What is the EU AI Board?

The EU AI Board was established to support and facilitate the implementation of the AI regulations, and to assist the AI Office. The Board is composed of one representative per Member State and will be responsible for advisory tasks such as issuing opinions and recommendations and providing advice to the Commission and Member State authorities. 

In some respects then the EU AI Board mirrors the functions of the EDPB in GDPR enforcement.

What about data protection/privacy?

The relationship between AI regulation and data privacy regulation is important for a number of reasons, one of the most significant being that AI systems receive and use vast data inputs throughout their lifecycle, and a significant amount of that data may be personal data.

The EU AI Act will run alongside existing EU data protection rules including GDPR. While GDPR does not explicitly mention AI, the EU AI Act does consider the relationship between AI and data privacy, stating that the Act is without prejudice to existing EU law on data protection.

As well as the cases mentioned above, there is also a significant volume of guidance from EU data protection authorities which will need to be taken into account when designing or implementing an application featuring AI. An influential group of German data protection authorities, the Datenschutzkonferenz (DSK), has already expressed concerns about issues like the allocation of responsibilities, and we may see conflicts between the new Act and GDPR.

Does the EU AI Act have extraterritorial reach?

Yes, its extraterritorial application is quite similar to that of the GDPR. The EU AI Act may affect organisations in the UK, and elsewhere including the US. Broadly, the EU AI Act will apply to organisations outside the EU if their AI systems or AI generated output are on the EU market, or their use affects people in the EU, directly or indirectly.

For example, if a US business’s website has a chatbot function which is available for people in the EU to use, that US business will likely be subject to the EU AI Act. Similarly, if a non-EU organisation does not provide AI systems to the EU market but does make AI-generated output available to people in the EU (such as media content), that organisation will be subject to the Act.

The UK, the US, China and other jurisdictions are addressing AI issues in their own particular ways.

What about the UK?

The UK government published its white paper on its approach to AI regulation in March 2023, which set out its proposed “pro-innovation” regulatory framework for AI, and subsequently held a public consultation on the proposals. The government response to the consultation was published in February 2024.

However, since then the UK Government has changed, and its position on AI has changed too. The position of the new Labour Government was set out in the King’s Speech in July 2024, with the new Government saying it would “seek to establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models.”

The new Government will also set up a new Regulatory Innovation Office which will look at the challenges of AI and support existing regulators including the Information Commissioner’s Office and the Competition and Markets Authority in using their existing powers to regulate AI. 

We don’t yet know the shape of the new AI law (and no draft Bill was referred to in the speech), but this could be a simplified version of the EU AI Act.

What happens next?

It is important to stress that although the EU AI Act has entered into force, most of its obligations do not yet apply.

The EU AI Act was published in the Official Journal on 12 July 2024 and entered into force 20 days after publication, on 1 August 2024. It becomes fully applicable two years after that, apart from some specific provisions: the prohibitions apply after six months and the rules on general purpose AI after 12 months.

The official EU timetable looks like this:

  • 12 July 2024 – Publication in the Official Journal of the EU.
  • 1 August 2024 (entry into force) – The Act entered into force 20 days after publication in the Official Journal. The milestones that follow are set out in Article 113.
  • 2 November 2024 – Member States to identify and publish the list of authorities/bodies responsible (Article 77(2)).
  • 2 February 2025 (entry into force + 6 months) – Prohibitions on unacceptable-risk AI – the so-called Prohibited Artificial Intelligence Practices – apply, along with the general provisions, including AI literacy (Chapters I and II).
  • 2 May 2025 (+ 9 months) – Codes of Practice for GPAI from the EU AI Office to be ready. The plan is for providers of GPAI models and other experts to work jointly on a Code of Practice (Article 56).
  • 2 August 2025 (+ 12 months) – The main body of rules starts to apply: notifying authorities, GPAI models, governance, penalties and confidentiality (except the rules on fines for GPAI providers). MSAs should also be appointed by Member States by this date (Chapter III Section 4; Chapters V, VII and XII; Article 78; Articles 99 and 100).
  • 2 August 2026 (+ 24 months) – The remainder of the Act applies, except Article 6(1). The majority of the EU AI Act therefore applies two years after entry into force, including matters such as transparency notification and labelling. High-risk AI systems under Annex III (AI systems in the fields of biometrics, critical infrastructure, education, employment, access to essential private and public services, law enforcement, migration and border control management, democratic processes and the administration of justice) are regulated from this date (Article 113).
  • 2 August 2027 (+ 36 months) – Article 6(1) and the corresponding obligations apply. These relate to some high-risk AI systems covered by existing EU harmonisation legislation (Annex I systems, e.g. those covered by existing EU product safety legislation) and GPAI models that were on the market before 2 August 2025. However, some high-risk AI systems already subject to sector-specific regulation (listed in Annex I) will remain regulated by the authorities that oversee them today (e.g. for medical devices).
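The staggered dates can be checked from the entry-into-force date using nothing but the standard library; note that the Act fixes each milestone on the day after the bare anniversary (hence 2 February 2025 rather than 1 February 2025).

```python
# Reproducing the staggered application dates from entry into force
# (1 August 2024). The Act sets each milestone on the day after the
# anniversary, hence the one-day offset below.
from datetime import date, timedelta

def add_months(d: date, months: int) -> date:
    years, month_index = divmod(d.month - 1 + months, 12)
    return d.replace(year=d.year + years, month=month_index + 1)

ENTRY_INTO_FORCE = date(2024, 8, 1)
milestones = [
    ("prohibitions apply", 6),
    ("GPAI Codes of Practice ready", 9),
    ("GPAI rules, governance, penalties", 12),
    ("most remaining rules", 24),
    ("Article 6(1)", 36),
]
for label, months in milestones:
    print(label, "->", add_months(ENTRY_INTO_FORCE, months) + timedelta(days=1))
```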

What is the AI Pact?

Before the EU AI Act becomes generally applicable, the European Commission will launch the so-called “AI Pact”, aimed at bringing together AI developers from Europe and around the world to commit, on a voluntary basis, to implementing key obligations of the EU AI Act ahead of the legal deadlines.

The European Commission has said that over 550 organisations have responded to the first call for interest in the AI Pact, but whether that leads to widespread adoption remains to be seen. In July 2024 the Commission published draft details of the AI Pact to a select group, outlining a series of voluntary commitments.

The Commission is currently aiming to launch the AI Pact in October 2024.

The EU Artificial Intelligence Act Summary

Legal issues concerning AI are not new, and we are already seeing issues come to the fore, including through litigation such as that regarding the use of ChatGPT, most notoriously concerning hallucinated case law. Organisations should consider reviewing what they are doing about AI in the workplace and, at the very least, set out the dos and don’ts for their employees.

It is also wise to develop a formal process to look at issues like fairness and transparency, both to meet existing legal obligations and to help comply with the new EU AI Act as its obligations begin to apply.

What can I do to prepare for AI regulation?

Organisations should start looking at the impact that this Act may have on their operations and governance.

The first step an organisation can take is to review the current position. Pertinent questions include: Are we currently using any AI systems? Are we planning to use any AI systems? Do we have any existing policies and procedures that are relevant?

Organisations can then conduct compliance gap analyses to identify the key issues to address and identify the key business areas or activities that will be affected.
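A sketch of what an inventory record supporting that review and gap analysis might look like follows; the field names are our own assumptions, as the Act does not prescribe an inventory format.

```python
# A hedged sketch of an AI-system inventory record to support the review
# and gap analysis described above. Field names are assumptions; the Act
# does not prescribe any particular format.
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    name: str
    role: str                 # "provider" or "deployer" under the Act
    risk_tier: str            # e.g. "high", "transparency", "minimal"
    uses_personal_data: bool  # flags a parallel GDPR analysis
    gaps: list[str] = field(default_factory=list)

inventory = [
    AISystemRecord("CV screening tool", "deployer", "high", True,
                   gaps=["human oversight process", "logging"]),
    AISystemRecord("website chatbot", "deployer", "transparency", True,
                   gaps=["user-facing AI disclosure"]),
]
for record in inventory:
    print(record.name, "->", record.risk_tier, record.gaps)
```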

How can Punter Southall Law help?

For most organisations the first step will be to build a bespoke Action Plan with key action points.  We have experience of helping clients develop their response to the EU AI Act and in developing their AI strategy. 

The work our team does includes:

  • Training employees on AI risk and responsibilities.
  • Helping with awareness campaigns on AI use.
  • Board level briefings on AI risks and opportunities.
  • Helping you take an inventory of your current AI systems to identify what AI systems are being used and their risk level.
  • Drafting and amending your internal policies and procedures on AI compliance. For example, updating your data breach plans to include EU AI Act reporting.
  • Preparing materials and notices to inform your customers of your AI use to meet transparency and other legal obligations.
  • Creating templates for required documents under the EU AI Act, e.g. technical documentation and declarations of conformity.
  • Suggesting standardized clauses or addendums to add to your client and supplier agreements.
  • Assisting and advising you on appointing an authorised representative as needed.
  • Reviewing your marketing materials that reference AI systems and compliance.

Learn more about our services at Governance, Risk and Compliance Services.


Jonathan Armstrong

Partner

Jonathan is an experienced lawyer based in London with a concentration on compliance & technology.  He is also a Professor at Fordham Law School teaching a new post-graduate course on international compliance.

Jonathan’s professional practice includes advising multinational companies on risk and compliance across Europe.  Jonathan gives legal and compliance advice to household name corporations on:

  • Prevention (e.g. putting in place policies and procedures);
  • Training (including state of the art video learning); and
  • Cure (such as internal investigations and dealing with regulatory authorities).

Jonathan has handled legal matters in more than 60 countries covering a wide range of compliance issues. He made one of the first GDPR data breach reports, on behalf of a lawyer who had compromised sensitive personal data, and he has been particularly active in advising clients on their response to GDPR. He has conducted a wide range of investigations of various shapes and sizes (some as a result of whistleblowers), worked on data breaches (including major ransomware attacks) and a request to appear before a UK Parliamentary enquiry, advised on the UK Bribery Act 2010, slavery, and ESG & supply chain issues, helped businesses move sales online or enter new markets, and managed ethics & compliance code implementation. Clients include Fortune 250 organisations & household names in manufacturing, technology, healthcare, luxury goods, automotive, construction & financial services. Jonathan is also regarded as an acknowledged expert in AI and currently serves on the New York State Bar Association’s AI Task Force looking at the impact of AI on law and regulation. Jonathan also sits on the Law Society AI Group.

Jonathan is a co-author of LexisNexis’ definitive work on technology law, “Managing Risk: Technology & Communications”.  He is a frequent broadcaster for the BBC and appeared on BBC News 24 as the studio guest on the Walport Review.  He is also a regular contributor to the Everything Compliance & Life with GDPR podcasts.  In addition to being a lawyer, Jonathan is a Fellow of The Chartered Institute of Marketing.  He has spoken at conferences in the US, Japan, Canada, China, Brazil, Singapore, Vietnam, Mexico, the Middle East & across Europe.

Jonathan qualified as a lawyer in the UK in 1991 and has focused on technology and risk and governance matters for more than 25 years.  He is regarded as a leading expert in compliance matters.  Jonathan has been selected as one of the Thomson Reuters stand-out lawyers for 2024 – an honour bestowed on him every year since the survey began.  In April 2017 Thomson Reuters listed Jonathan as the 6th most influential figure in risk, compliance and fintech in the UK.  In 2016 Jonathan was ranked as the 14th most influential figure in data security worldwide by Onalytica.  In 2019 Jonathan was the recipient of a Security Serious Unsung Heroes Award for his work in Information Security.  Jonathan is listed as a Super Lawyer and has been listed in Legal Experts from 2002 to date. 

Jonathan is the former trustee of a children’s music charity and the longstanding Co-Chair of the New York State Bar Association’s Rapid Response Taskforce which has led the response to world events in a number of countries including Afghanistan, France, Pakistan, Poland & Ukraine.

Some of Jonathan’s recent projects (including projects he worked on prior to joining Punter Southall) are:

  • Helping a global healthcare organisation with its data strategy. The work included data breach simulations and assessments for its global response team.
  • Helping a leading tech hardware, software and services business on its data protection strategy.
  • Leading an AI risk awareness session with one of the world’s largest tech businesses.
  • Looking at AI and connected vehicle related risk with a major vehicle manufacturer.
  • Helping a leading global fashion brand with compliance issues for their European operations.
  • Helping a global energy company on their compliance issues in Europe including dealing with a number of data security issues.
  • Working with one of the world’s largest chemical companies on their data protection program. The work involved managing a global program of audit, risk reduction and training to improve global-privacy, data-protection and data-security compliance.
  • Advising a French multinational on the launch of a new technology offering in 37 countries and coordinating the local advice in each.
  • Advising a well-known retailer on product safety and reputation issues.
  • Advising an international energy company in implementing whistleblower helplines across Europe.
  • Advising a number of Fortune 100 corporations on strategies and programs to comply with the UK Bribery Act 2010.
  • Advising a financial services business on its cyber security strategy. This included preparing a data breach plan and assistance in connection with a data breach response simulation.
  • Advising a U.S.-based engineering company on its entry into the United Kingdom, including compliance issues across the enterprise. Areas covered in our representation include structure, health and safety, employment, immigration and contract templates.
  • Assisting an industry body on submissions to the European Commission (the executive function of the EU) and UK government on next-generation technology laws. Jonathan’s submissions included detailed analysis of existing law and proposals on data privacy, cookies, behavioural advertising, information security, cloud computing, e-commerce, distance selling and social media.
  • Helping a leading pharmaceutical company formulate its social media strategy.
  • Served as counsel to a UK listed retailer and fashion group, in its acquisition of one of the world’s leading lingerie retailers.
  • Advising a leading U.S. retailer on its proposed entry into Europe, including advice on likely issues in eight countries.
  • Working with a leading UK retailer on its proposed expansion into the United States, including advice on online selling, advertising strategy and marketing.
  • Dealing with data export issues with respect to ediscovery in ongoing court and arbitration proceedings.
  • Advising a dual-listed entity on an FCPA investigation in Europe.
  • Acting for a U.S.-listed pharmaceutical company in connection with a fraud investigation of its Europe subsidiaries.
  • Acting for a well-known sporting-goods manufacturer on setting up its mobile commerce offerings in Europe.
  • Comprehensive data protection/privacy projects for a number of significant U.S. corporations, including advice on Safe Harbor, Privacy Shield and the DPF.
  • Risk analysis for an innovative software application.
  • Assisting a major U.S. corporation on its response to one of the first reported data breaches.
  • Work on the launch of an innovative new online game for an established board game manufacturer in more than 15 countries.
  • Advice on the setting up of Peoplesoft and other online HR programs in Europe, including data protection and Works Council issues.
  • Advising a leading fashion retailer in its blogging strategy.
  • Advising one of the world’s largest media companies on its data-retention strategy.
  • Advising a multinational software company on the marketing, development and positioning of its products in Europe.
