
EU AI Act Glossary: Key terms, Definitions & Acronyms


This is a glossary of terms referred to in the EU AI Act and other relevant AI terms and concepts. 

We also have a companion glossary on GDPR terms which can be found at: GDPR Glossary: EU data protection key terms & acronyms.

Agentic AI = AI that is capable of autonomous decision-making, action, and task execution. In general, compared to the generative AI that most people are accustomed to, Agentic AI is far more independent and proactive. Additionally, unlike generative AI, its focus is on decision-making rather than content generation, and it acts on high-level goals rather than prompts. For example, Agentic AI can act as a project manager working towards a specific goal, outsourcing specific tasks to other tools or even to other AI agents, such as a language translation AI, all without human intervention or oversight. Agentic AI is not specifically referenced in the EU AI Act, though it would still fall within its scope as an AI system.

You can view a discussion of Agentic AI between Punter Southall Law Partner Jonathan Armstrong and Professor Eric Sinrod at: TechLaw10: Agentic AI – what is it & what are the risks?

AI Agent = A system or software programme that uses AI to interact autonomously or semi-autonomously with its environment and perform tasks to achieve goals.

Algorithm = Colloquially, algorithms can be understood as a set of instructions used to perform tasks such as calculations and data analysis, usually using a computer or another smart device; this is the definition provided in the English Judicial Guidance on AI. However, in the fields of mathematics and computer science, the work to formally define this word is much more abstract and complex. Algorithms can be expressed in different kinds of notation, including natural human language, flowcharts, tables, and programming languages (common examples are Java and Python).
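As an illustrative sketch (not drawn from the Act or the Guidance), the same set of instructions can be expressed in plain language – "keep replacing the larger number with the remainder until nothing is left over" – or as a short Python function. This is Euclid's algorithm for finding the greatest common divisor:

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeatedly replace the pair (a, b)
    with (b, remainder of a divided by b) until b is zero."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(48, 18))  # prints 6
```

The plain-language description, a flowchart of the loop, and the code above are all notations for the same underlying algorithm.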

Biometric Data = The EU AI Act uses the same definition as in GDPR which is “personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, which allow or confirm the unique identification of that natural person, such as facial images or dactyloscopic data.”

Dactyloscopic data = fingerprint data. As an example, a gym introduces an electronic fingerprint scanning system which uses AI to match fingerprints against those held in its membership records. Members scan a fingerprint in order to get through the entrance turnstiles. This system is processing biometric data to identify individual members.

EDPS = The European Data Protection Supervisor. More details can be found in our GDPR Glossary: EU data protection key terms & acronyms.

Emotion Recognition = According to the EU AI Act, an emotion recognition system is an AI system for the purpose of identifying or inferring the emotions or intentions of people on the basis of their biometric data. Note that emotions here do not include physical states such as pain or fatigue. Additionally, simply detecting a physical gesture does not count as emotion recognition either. For example, detecting that a person is smiling is not emotion recognition, but concluding that a person is happy or sad is emotion recognition.

Evaluation = According to the Commission Guidelines on Prohibited AI practices (C(2025) 884): ‘evaluation’ suggests the involvement of some form of an assessment or judgement about a person or group of persons. However, a simple classification of persons or groups of persons based on characteristics, such as their age, sex, and height, is not necessarily an evaluation. Additionally, the Guidelines mention that evaluation relates to the concept of profiling (see below).

Explainability = There is no formal definition in the EU AI Act or the Guidelines. In the context of AI, explainability is the capacity to provide clear and coherent explanations for how or why an AI enabled system led to a specific output, such as a decision, recommendation, or prediction, similar to an audit trail. It aims to answer questions like “Why did the AI system make this particular prediction?” by offering human-understandable justifications or reasons for a specific outcome.

General purpose AI model (GPAI) = A form of AI model, including where an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications. 

However, AI models are excluded from the GPAI definition if they are used for research, development or prototyping activities before they are placed on the market. Recital 99 of the EU AI Act states that large generative AI models are a type of GPAI. ChatGPT is an example of a GPAI.

Generative AI = Commonly called GenAI, a type of AI that can create new content which can include but is not limited to text, images, sounds, videos, and computer code based on a prompt (see below). GenAI relies on sophisticated machine learning models (see below). Businesses can use GenAI for chatbots, media creation, product development and more. As mentioned above, the EU AI Act considers large generative AI models as an example of a GPAI model.

Generative AI Chatbot = A program which simulates an online human conversation using Generative AI (see above).

Hallucination = AI hallucinations are incorrect or misleading results that AI models generate. Hallucinations can be caused by a variety of factors; common causes include biases or insufficiencies in the training data and incorrect assumptions made by the model.

Large Language model (LLM) = The EDPS describes LLMs as AI systems designed to learn grammar, syntax, and semantics of one or more languages to generate coherent and context-relevant language, and as a type of generative AI system, LLMs create new content in response to user commands based on their training data. LLMs are trained on vast quantities of text from a huge range of sources. Well-known AI systems such as ChatGPT and Bing Chat are built on OpenAI's LLMs.

Machine Learning = A branch of AI that uses data and algorithms to imitate the way that humans learn, gradually improving performance and accuracy through exposure to more data. Through statistical methods, algorithms are trained to make classifications or predictions, identify patterns, and to uncover key insights in data mining projects with minimal human intervention. LLMs (see above) are a type of machine learning model designed for natural language processing (see below) tasks such as language generation.
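As a hypothetical, stripped-down illustration of the "train on data, then predict" pattern described above (not any particular product's method), the following Python sketch learns one average point per label from toy labelled examples and classifies new points by the nearest learned average:

```python
from collections import defaultdict

def train(examples):
    """Learn one centroid (average feature vector) per label
    from labelled training examples."""
    sums = defaultdict(lambda: [0.0, 0.0])
    counts = defaultdict(int)
    for (x, y), label in examples:
        sums[label][0] += x
        sums[label][1] += y
        counts[label] += 1
    return {label: (s[0] / counts[label], s[1] / counts[label])
            for label, s in sums.items()}

def predict(centroids, point):
    """Classify a point by its nearest learned centroid."""
    def dist2(c):
        return (c[0] - point[0]) ** 2 + (c[1] - point[1]) ** 2
    return min(centroids, key=lambda label: dist2(centroids[label]))

# Toy training data: (feature_vector, label) pairs.
data = [((1, 1), "cat"), ((2, 1), "cat"), ((8, 9), "dog"), ((9, 8), "dog")]
model = train(data)
print(predict(model, (1.5, 1.2)))  # prints cat
```

Adding more labelled examples refines the learned centroids, which is a miniature version of the "improving through exposure to more data" idea in the definition.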

Natural Language Processing = A field of computer science and AI that enables computer systems to recognise, understand and generate human language. This field of research helped the growth of generative AI by helping image generation models understand requests and helping LLMs (see above) communicate. Most people probably encounter natural language processing applications in everyday life, from customer service chatbots to voice operated GPS systems and even digital assistants such as Siri and Alexa.

Profiling = The EU AI Act uses the GDPR definition of profiling: any form of automated processing of personal data consisting of the use of personal data to evaluate some personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that person’s performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements. Note that the Guidelines say that profiling constitutes a specific form of evaluation.

Prompt (in AI) = The input given to an AI system (for example, ChatGPT) which will generate a specific response or result. A prompt can be thought of as an instruction, command, or question, for example: “write a 500-word summary about King Lear by William Shakespeare” or “show me an illustration of a kitten wearing a hat”. Prompts are typically typed or spoken.

Remote biometric identification system = an AI system for the purpose of identifying people, without their active involvement, typically at a distance through the comparison of a person’s biometric data with the biometric data contained in a reference database.

Responsible AI = The practice or framework of developing and deploying AI with certain values, such as being trustworthy, ethical, transparent, explainable, fair, robust and upholding privacy rights. Responsible AI involves the consideration of broader societal impacts of AI.

Social scoring = According to the Commission Guidelines, social scoring is the evaluation or classification of people based on their social behaviour or personal or personality characteristics over a period of time. This breaks down into three criteria, all of which must be satisfied for the definition of social scoring to apply:

  • Evaluation or classification of natural persons or groups of persons;
  • Over a certain period of time; and
  • Based on their social behaviour or known, inferred or predicted personal or personality characteristics.

Technology Assisted Review (TAR) = AI tools used in the e-discovery process to identify potentially relevant documents, also sometimes referred to as computer assisted review or predictive coding. In summary, a machine learning system is “trained” on a set of documents, known as a seed set. Lawyers and/or reviewers manually identify the documents in the seed set as relevant or irrelevant, and the tool uses the learned criteria to identify other relevant documents from the large disclosure data sets.
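Real TAR platforms use far more sophisticated models, but the "train on a seed set, then score the rest" workflow can be sketched in a few lines of illustrative Python (all document text and labels below are invented for the example):

```python
from collections import Counter

def train_seed_set(seed_docs):
    """Build word-frequency profiles from manually labelled seed documents.
    seed_docs: list of (text, is_relevant) pairs."""
    relevant, irrelevant = Counter(), Counter()
    for text, is_relevant in seed_docs:
        target = relevant if is_relevant else irrelevant
        target.update(text.lower().split())
    return relevant, irrelevant

def score(profiles, text):
    """Crude relevance score: net count of words appearing more often
    in the relevant seed documents than in the irrelevant ones."""
    relevant, irrelevant = profiles
    return sum(1 if relevant[w] > irrelevant[w] else -1
               for w in text.lower().split())

# Seed set labelled by human reviewers (invented example data).
seed = [
    ("contract breach damages claim", True),
    ("invoice payment dispute contract", True),
    ("office party catering menu", False),
    ("holiday rota and parking", False),
]
profiles = train_seed_set(seed)
print(score(profiles, "contract damages claim"))  # prints 3
```

Documents from the wider disclosure set with high scores would be queued for human review first; a production system would use a proper statistical model rather than this naive word count.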

Transparency = The EU AI Act recitals state that transparency means that AI systems are developed and used in a way that allows appropriate traceability and explainability, while making humans aware that they communicate or interact with an AI system, as well as duly informing deployers of the capabilities and limitations of that AI system and affected persons about their rights.

If there’s a term you think we should add, please let us know.
