European Financial Review: Jonathan Armstrong on the implications of the EU AI Act

Jonathan Armstrong, Partner in our Compliance Department, recently contributed an article to The European Financial Review, a trusted source for essential market insights and strategies, discussing the EU Artificial Intelligence Act. We are pleased to share the full text of the article below.

What are the Implications of the EU AI Act? 

By Jonathan Armstrong

The new EU AI Act entered into force on 1 August 2024, though there is a staggered compliance timetable, and it will not be fully applicable until August 2027.   

But what significance will it have for finance companies and what do they need to do to prepare? 

The financial services sector is one of the areas most impacted by AI. The industry’s heavy use of data-driven models and algorithms offers rich opportunities for AI processes and systems.  

A survey published by the Bank of England in 2022 said that over 72% of respondent firms were using or developing machine learning applications, and that trend is expected to increase. In addition, AI applications are becoming increasingly embedded in day-to-day operations and more critical to various business areas in the UK financial services sector.

The EU AI Act categorises AI risk, with prohibited AI systems and high-risk AI systems at the top of the risk pyramid. The Act specifically lists as high-risk two types of AI systems of relevance to financial institutions: systems used to evaluate creditworthiness of natural persons, and systems for risk assessment and pricing in relation to natural persons in the case of life and health insurance. Financial institutions should also be aware that the Act’s obligations fall not only on providers or developers of AI systems, but also organisations that use them, for functions such as customer service chat bots or robo-investment advisors.  

How can financial services firms prepare? 

Building a bespoke Action Plan will be essential. Key action points must also include training employees, raising awareness, and briefing boards on AI risks and opportunities.  

Training board members will be especially important, as many current boards lack people with adequate AI and technology skills or knowledge. This gap needs to be addressed if boards are to properly understand the risks, and, of course, opting out of AI is not an option.

Financial institutions are already heavily regulated and are expected to have robust compliance models and procedures in place, with adequate resources to meet their regulatory obligations. As such, many organisations can use existing regulatory change frameworks and governance structures to help implement the Act.   

Firms will also need to take inventory of their current AI systems to identify the AI systems being used and their risk level. Keep in mind that the two types of AI systems above aren’t the only high-risk systems relevant to financial institutions. For example, AI systems intended for filtering job applications, evaluating candidates, monitoring performance, and decision making for promotions and terminations are also high-risk.  

The AI Act and DORA 

Another key concern for finance companies will be integrating their work on implementing their AI Act obligations with their obligations under DORA (the Digital Operational Resilience Act). There is an opportunity for efficiency here, as some of this work can be done in parallel. DORA seeks to address cyber and ICT-related risks in financial services firms and will be fully applicable from January 2025.

Naturally, as AI systems become increasingly critical to an institution’s operations, so too will its dependence on third-party ICT service providers grow. In a similar vein, financial institutions should also keep in mind the MiFID II obligations on governance, knowledge and competence, conduct of business, transparency, and most importantly the overarching imperative to act in the best interests of the client.

It would also be wrong to regard this as a purely EU-focused exercise. The EU AI Act has extra-territorial effect, and equivalent rules to DORA are coming in the UK too. And we know that the new UK Government plans its own AI regulation.

More regulation, less innovation? Not necessarily 

There are still a number of areas of uncertainty and confusion. The EU AI Act will likely suffer from some of the same issues as the GDPR: regulators learning their way around new powers, patchy enforcement, and struggles to keep up with innovation both in and out of the tech industry. As we’ve said, financial services is already a heavily regulated industry, which raises the question: will increasing its regulatory obligations discourage innovation in financial institutions?

However, like the GDPR, the EU AI Act has also raised the profile of, and changed the thinking around, AI on a global scale, particularly in relation to the fundamental rights of natural persons. Already one big tech company has said it won’t bring some new products to market in the EU while it seeks to understand the complexities the new legislation may bring.

Finally, institutions may also want to keep an eye on the European Commission’s targeted consultation on AI in the financial sector, which closes on 13 September 2024.

Every organisation needs a plan to respond to the Act. Firms should start with training and awareness, since many of the AI systems being bought in or developed now will still be in use when enforcement starts. Most financial institutions are likely already using, or planning to use, AI systems, and they need to establish the risk level of those systems and perform their initial analysis. Whilst enforcement will not start until next year, the time to start a compliance program is now.

Jonathan Armstrong

Partner

Jonathan is an experienced lawyer based in London with a concentration on compliance & technology.  He is also a Professor at Fordham Law School teaching a new post-graduate course on international compliance.

Jonathan’s professional practice includes advising multinational companies on risk and compliance across Europe.  Jonathan gives legal and compliance advice to household name corporations on:

  • Prevention (e.g. putting in place policies and procedures);
  • Training (including state of the art video learning); and
  • Cure (such as internal investigations and dealing with regulatory authorities).

Jonathan has handled legal matters in more than 60 countries covering a wide range of compliance issues.  He made one of the first GDPR data breach reports on behalf of a lawyer who had compromised sensitive personal data and he has been particularly active in advising clients on their response to GDPR.  He has conducted a wide range of investigations of various shapes and sizes (some as a result of whistleblowers), worked on data breaches (including major ransomware attacks), a request to appear before a UK Parliamentary enquiry, UK Bribery Act 2010, slavery, ESG & supply chain issues, helped businesses move sales online or enter new markets and managed ethics & compliance code implementation.  Clients include Fortune 250 organisations & household names in manufacturing, technology, healthcare, luxury goods, automotive, construction & financial services.  Jonathan is also regarded as an acknowledged expert in AI and he currently serves on the New York State Bar Association’s AI Task Force looking at the impact of AI on law and regulation.  Jonathan also sits on the Law Society AI Group.

Jonathan is a co-author of LexisNexis’ definitive work on technology law, “Managing Risk: Technology & Communications”.  He is a frequent broadcaster for the BBC and appeared on BBC News 24 as the studio guest on the Walport Review.  He is also a regular contributor to the Everything Compliance & Life with GDPR podcasts.  In addition to being a lawyer, Jonathan is a Fellow of The Chartered Institute of Marketing.  He has spoken at conferences in the US, Japan, Canada, China, Brazil, Singapore, Vietnam, Mexico, the Middle East & across Europe.

Jonathan qualified as a lawyer in the UK in 1991 and has focused on technology and risk and governance matters for more than 25 years.  He is regarded as a leading expert in compliance matters.  Jonathan has been selected as one of the Thomson Reuters stand-out lawyers for 2024 – an honour bestowed on him every year since the survey began.  In April 2017 Thomson Reuters listed Jonathan as the 6th most influential figure in risk, compliance and fintech in the UK.  In 2016 Jonathan was ranked as the 14th most influential figure in data security worldwide by Onalytica.  In 2019 Jonathan was the recipient of a Security Serious Unsung Heroes Award for his work in Information Security.  Jonathan is listed as a Super Lawyer and has been listed in Legal Experts from 2002 to date. 

Jonathan is the former trustee of a children’s music charity and the longstanding Co-Chair of the New York State Bar Association’s Rapid Response Taskforce which has led the response to world events in a number of countries including Afghanistan, France, Pakistan, Poland & Ukraine.

Some of Jonathan’s recent projects (including projects he worked on prior to joining Punter Southall) are:

  • Helping a global healthcare organisation with its data strategy.  The work included data breach simulations and assessments for its global response team.
  • Helping a leading tech hardware, software and services business on its data protection strategy.
  • Leading an AI risk awareness session with one of the world’s largest tech businesses.
  • Looking at AI and connected vehicle related risk with a major vehicle manufacturer.
  • Helping a leading global fashion brand with compliance issues for their European operations.
  • Helping a global energy company on their compliance issues in Europe including dealing with a number of data security issues.
  • Working with one of the world’s largest chemical companies on their data protection program. The work involved managing a global program of audit, risk reduction and training to improve global-privacy, data-protection and data-security compliance.
  • Advising a French multinational on the launch of a new technology offering in 37 countries and coordinating the local advice in each.
  • Advising a well-known retailer on product safety and reputation issues.
  • Advising an international energy company in implementing whistleblower helplines across Europe.
  • Advising a number of Fortune 100 corporations on strategies and programs to comply with the UK Bribery Act 2010.
  • Advising a financial services business on its cyber security strategy.  This included preparing a data breach plan and assistance in connection with a data breach response simulation.
  • Advising a U.S.-based engineering company on its entry into the United Kingdom, including compliance issues across the enterprise. Areas covered in our representation include structure, health and safety, employment, immigration and contract templates.
  • Assisting an industry body on submissions to the European Commission (the executive function of the EU) and UK government on next-generation technology laws. Jonathan’s submissions included detailed analysis of existing law and proposals on data privacy, cookies, behavioural advertising, information security, cloud computing, e-commerce, distance selling and social media.
  • Helping a leading pharmaceutical company formulate its social media strategy.
  • Served as counsel to a UK listed retailer and fashion group, in its acquisition of one of the world’s leading lingerie retailers.
  • Advising a leading U.S. retailer on its proposed entry into Europe, including advice on likely issues in eight countries.
  • Working with a leading UK retailer on its proposed expansion into the United States, including advice on online selling, advertising strategy and marketing.
  • Dealing with data export issues with respect to ediscovery in ongoing court and arbitration proceedings.
  • Advising a dual-listed entity on an FCPA investigation in Europe.
  • Acting for a U.S.-listed pharmaceutical company in connection with a fraud investigation of its Europe subsidiaries.
  • Acting for a well-known sporting-goods manufacturer on setting up its mobile commerce offerings in Europe.
  • Comprehensive data protection/privacy projects for a number of significant U.S. corporations, including advice on Safe Harbor, Privacy Shield and the DPF.
  • Risk analysis for an innovative software application.
  • Assisting a major U.S. corporation on its response to one of the first reported data breaches.
  • Work on the launch of an innovative new online game for an established board game manufacturer in more than 15 countries.
  • Advice on the setting up of Peoplesoft and other online HR programs in Europe, including data protection and Works Council issues.
  • Advising a leading fashion retailer in its blogging strategy.
  • Advising one of the world’s largest media companies on its data-retention strategy.
  • Advising a multinational software company on the marketing, development and positioning of its products in Europe.

This article was first published in The European Financial Review on 6th October 2024. You can view the original article at The European Financial Review: What are the Implications of the EU AI Act? 
