Article 4 of the EU AI Act requires providers and deployers of AI systems to take measures to ensure that their staff (and any other persons dealing with AI systems on their behalf) have a sufficient level of AI literacy.
This obligation started to apply from 2 February 2025.
The European Commission (EC) helpfully released its own FAQs on AI literacy in May 2025, and we’ve summarised those FAQs below. There is also a TechLaw10 podcast episode on AI literacy with Jonathan Armstrong: AI Literacy – What is it & how do we get there?
What does “AI Literacy” mean?
This term is formally defined in the EU AI Act as: “skills, knowledge and understanding that allow providers, deployers and affected persons, taking into account their respective rights and obligations in the context of [the EU AI Act], to make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and possible harm it can cause”.
For more AI related definitions, see our EU AI Act Glossary.
Key takeaways from the European Commission’s FAQs
Who needs to be AI Literate?
According to Article 4, it’s the staff of all providers and deployers of AI systems. The EC has also explained that this broadly means anyone in the organisation directly dealing with an AI system.
In our view, it may be easy for some organisations or teams to assume that the AI literacy provisions don’t apply to them because they do not work in the tech industry. However, it’s important to note that deployers of AI systems are also included. This could catch a large number of organisations that otherwise don’t think they deal with AI at all. For example, given the popularity of AI recruitment and screening tools, most HR teams would probably be considered deployers of AI systems, meaning that recruiting staff count among those dealing with AI systems.
Article 4 also covers “other persons dealing with the operation and use of AI systems” on the organisation’s behalf. These people could include non-employees such as contractors, but they can also include clients and service providers.
That said, in our experience it’s probably worth making sure that everyone receives some level of AI awareness training. Considering the prevalence of Generative AI in everyday life, and popular social media recommendations to use Generative AI for work emails (e.g. to adjust the tone), it’s important that all staff members and other persons are aware of the risks of AI and some dos and don’ts when using AI for work purposes. It’s important not to underestimate the issues with Shadow AI.
Even if an organisation bans AI use for work purposes, the evidence shows that it is still likely to happen, for example on employees’ own personal devices. In many respects Shadow AI can be riskier for an organisation, and including everyone in an AI literacy program can help reduce some of these risks.
What are the AI literacy requirements?
There is no specific training or action mandated to comply with the Article 4 AI literacy requirement. The EC explained that a certain degree of flexibility was necessary, and it has explicitly declined to mandate any particular training format, noting that there is no one-size-fits-all approach. In the same vein, the EC has decided not to mandate specific tests or certificates to prove an individual’s AI literacy.
However, the FAQs do set out some actions to address as a minimum to comply with Article 4:
- Ensure a general understanding of AI within the organisation: What is AI? How does it work? What AI is used in the organisation? What are its opportunities and dangers? As mentioned, AI isn’t just ChatGPT and Midjourney. Often, common and routine tasks rely on AI systems, such as the recruitment screening tools mentioned above, meeting transcription services, translation services, commonly used search engines, or predictive maintenance algorithms that help maintain machinery.
- Consider the role of the organisation (provider or deployer of AI systems): Is my organisation developing AI systems or just using AI systems developed by another organisation?
- Consider the risk of the AI systems provided or deployed: What do employees need to know when dealing with such an AI system? What are the risks they need to be aware of, and do they need to be aware of mitigation measures?
- Concretely build AI literacy actions on the preceding analysis, considering:
- differences in the technical knowledge, experience, education and training of staff and other persons – How much do the employees or other persons know about AI and the organisation’s systems they use? What else should they know?
- as well as the context the AI systems are to be used in and the persons on whom the AI systems are to be used – In which sector and for which purpose/service is the AI system being used?
Can’t I just rely on the providers of the AI systems?
The short answer is no. The FAQs explicitly state that relying on the AI system’s instructions for use is not sufficient and that further measures are necessary.
The FAQs also make clear that one of the intentions of Article 4 is that organisations provide training and guidance on AI. Furthermore, for some AI systems, specifically high-risk ones, there is a specific obligation under the EU AI Act for the staff who deal with the AI to be sufficiently trained.
Does this apply to organisations outside the EU?
Yes. As we said in our EU AI Act FAQs, the EU AI Act’s extraterritorial application is quite similar to that of the GDPR. The EU AI Act may affect organisations in the UK and elsewhere, including the US. Broadly, the EU AI Act will apply to organisations outside the EU if their AI systems or AI-generated output are on the EU market, or their use affects people in the EU, directly or indirectly.
How will these be enforced?
As mentioned, the AI literacy obligation has applied since 2 February 2025. Currently, the main risk is civil action, with a number of pressure groups actively scrutinising the use of AI.
While there is an EU AI Office, it aims to be seen as a centre of AI expertise, to provide advice, foster innovation, and coordinate regulatory approaches. The AI Office will work with Member States on facilitating the application and enforcement of the AI Act, but the supervision and enforcement of the AI Literacy obligations fall under the remit of national authorities. So far, there are no plans for a central EU AI authority to directly take enforcement actions for Article 4 of the EU AI Act.
EU Member States should appoint the relevant authorities by 2 August 2025, and those national authorities will start supervising and enforcing the EU AI Act from 3 August 2026.
What is the maximum fine?
This will depend on national law. National authorities can impose penalties and other enforcement measures for AI literacy infringements, and Member States should adopt their laws on AI literacy penalties by 2 August 2025.
The EC has highlighted that enforcement must be proportionate and based on the individual case. Factors such as gravity, intention, and negligence should be taken into account.
What about specific sector guidance?
It is important to remember that AI literacy is on the agenda for other regulators too, not just those dealing with the EU AI Act. For example, the courts in England & Wales recently looked at AI literacy across law firms, with the King’s Bench Division effectively imposing an AI literacy requirement on the heads of law firms and barristers’ chambers. UK financial regulators have also reminded financial services firms of the need for education, warning that the senior managers regime could be used to impose personal sanctions if risks are not addressed. A good literacy program will need to take sector-specific guidance into account too.
Next steps
It is important to note that the EU’s guidance is just guidance; it would be open to a regulator or a court to take a different approach. For those organisations yet to start a literacy program, however, doing so is likely to be a priority, given that the legal obligation has now been in place for some time.
Organisations will likely want to consider:
- General awareness training across the organisation – this will include contractors etc. too. We’ve done this training for clients, and in our experience keeping things simple is key. It’s also important to look at risks and opportunities, and to consider the different ways in which different groups use technology, especially as the workforce includes growing numbers of digital natives who are more likely to look for their own workarounds.
- Specific role-based training for high-risk areas, e.g. HR and other employees using AI to process sensitive personal data.
- A thorough program to review new AI applications and their risks – we’ve found that a modified DPIA process can work well for this. Pay special attention to high-risk systems such as those using surveillance technology, facial recognition and biometrics.
- A simple, easy-to-read AI policy to reinforce what’s expected.
- Board-level training and awareness. Statistics show that even the largest organisations lack knowledge of AI risks and opportunities. Including the board will be key so that it can properly assess risk and allocate appropriate resources.
Punter Southall Law’s regulatory compliance lawyers
Punter Southall Law has extensive experience in providing bespoke in-house training on regulatory compliance matters, including board-level briefings, general awareness training for all staff, and niche training for specialised departments. We have also helped clients draft internal policies and memos such as “AI Dos and Don’ts”.
There’s more information on our recent work on AI at Artificial Intelligence (AI) Lawyers.
For further information: