We have put together this glossary to help explain some of the terms used in and around the EU AI Act. If there’s a term you think we should add, please let us know.
Biometric Data = the EU AI Act uses the same definition as the GDPR, which is “personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, which allow or confirm the unique identification of that natural person, such as facial images or dactyloscopic data.”
Dactyloscopic data = fingerprint data. As an example, a gym introduces an electronic fingerprint-scanning system which uses AI to match scanned fingerprints against the fingerprints of its members held in its records. Members scan their fingerprint in order to get through the entrance turnstiles. This system is processing biometric data to identify individual members.
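Purely as an illustration of the kind of one-to-many comparison such a system performs (a minimal sketch with invented member data; real fingerprint systems rely on specialised minutiae matching rather than simple numeric templates):

```python
# Illustrative sketch only: the templates and threshold are invented,
# and real fingerprint matching uses specialised minutiae algorithms.
import math

# Hypothetical reference database: member ID -> stored fingerprint template.
member_templates = {
    "member_001": [0.12, 0.80, 0.33, 0.54],
    "member_002": [0.91, 0.15, 0.62, 0.07],
}

def identify_member(scanned_template, threshold=0.1):
    """Return the member whose stored template best matches the scan,
    or None if no stored template is close enough to open the turnstile."""
    best_id, best_distance = None, float("inf")
    for member_id, template in member_templates.items():
        distance = math.dist(scanned_template, template)
        if distance < best_distance:
            best_id, best_distance = member_id, distance
    return best_id if best_distance <= threshold else None

print(identify_member([0.13, 0.79, 0.34, 0.53]))  # matches "member_001"
print(identify_member([0.50, 0.50, 0.50, 0.50]))  # no close match -> None
```

The key point is the comparison against a database of stored biometric templates: it is that matching step which makes this a system that identifies individual members from their biometric data.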
Emotion Recognition = According to the EU AI Act, an emotion recognition system is an AI system for the purpose of identifying or inferring the emotions or intentions of people on the basis of their biometric data. Note that emotions here do not include physical states such as pain or fatigue. Simply detecting a physical gesture also does not count as emotion recognition. For example, detecting that a person is smiling is not emotion recognition, but concluding that a person is happy or sad is emotion recognition.
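To make the distinction concrete, here is a hypothetical sketch (the feature names and thresholds are invented for illustration): the first function merely detects a physical gesture, while the second infers an emotional state from biometric features, which is what the Act treats as emotion recognition.

```python
# Hypothetical illustration of the gesture-versus-emotion distinction;
# the features and thresholds are invented for this example.

def detect_smile(mouth_curvature: float) -> bool:
    """Detecting a physical gesture (a smile) is not emotion recognition."""
    return mouth_curvature > 0.6

def infer_emotion(mouth_curvature: float, brow_tension: float) -> str:
    """Inferring an emotional state from biometric features is
    emotion recognition within the meaning of the EU AI Act."""
    if mouth_curvature > 0.6 and brow_tension < 0.3:
        return "happy"
    if mouth_curvature < 0.2 and brow_tension > 0.7:
        return "sad"
    return "neutral"

print(detect_smile(0.8))        # True: a gesture was detected
print(infer_emotion(0.8, 0.1))  # "happy": an emotion was inferred
```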
Evaluation = According to the Guidelines on Prohibited AI practices: ‘evaluation’ suggests the involvement of some form of an assessment or judgement about a person or group of persons. However, a simple classification of persons or groups of persons based on characteristics, such as their age, sex, and height, is not necessarily an evaluation. Additionally, the Guidelines mention that evaluation relates to the concept of profiling (see below).
Explainability = There is no formal definition in the EU AI Act or the Guidelines. In the context of AI, explainability is the capacity to provide clear and coherent explanations for how or why an AI-enabled system produced a specific output, such as a decision, recommendation, or prediction, similar to an audit trail. It aims to answer questions like “Why did the AI system make this particular prediction?” by offering human-understandable justifications or reasons for a specific outcome.
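As a loose illustration only (neither the Act nor the Guidelines prescribe any particular method), the sketch below shows one simple way a scoring tool could surface human-understandable reasons for its output, by reporting how much each input feature contributed to a linear score; the weights and feature names are invented.

```python
# Minimal sketch of explainability for a linear scoring model:
# the "explanation" lists each feature's contribution to the output.

weights = {"income": 0.5, "existing_debt": -0.8, "years_employed": 0.3}

def score_with_explanation(applicant: dict):
    contributions = {
        feature: weights[feature] * value
        for feature, value in applicant.items()
    }
    total = sum(contributions.values())
    # Rank features by how strongly they pushed the score up or down.
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    reasons = [f"{feature} contributed {impact:+.2f}" for feature, impact in ranked]
    return total, reasons

score, reasons = score_with_explanation(
    {"income": 3.0, "existing_debt": 2.0, "years_employed": 4.0}
)
print(round(score, 2))  # 1.1
print(reasons)          # ['existing_debt contributed -1.60', 'income contributed +1.50', ...]
```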
General purpose AI model (GPAI) = A form of AI model, including where an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market, and that can be integrated into a variety of downstream systems or applications. However, AI models are excluded from the GPAI definition if they are used for research, development or prototyping activities before they are placed on the market. The GPT models underlying ChatGPT are examples of GPAI models.
Profiling = The EU AI Act uses the GDPR definition of profiling: any form of automated processing of personal data consisting of the use of personal data to evaluate some personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that person’s performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements. Note that the Guidelines say that profiling constitutes a specific form of evaluation.
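As a hypothetical example of automated processing used to evaluate a personal aspect (the attributes and scoring rule are invented), the snippet below predicts a worker's "reliability" from recorded behavioural data, which is the kind of processing the profiling definition is aimed at.

```python
# Invented example of profiling: automated processing of personal data
# to evaluate a personal aspect (here, "reliability") of a natural person.

def reliability_score(person: dict) -> float:
    """Predict reliability from behavioural data: a form of profiling."""
    on_time_rate = person["deliveries_on_time"] / person["deliveries_total"]
    missed_penalty = 0.05 * person["missed_shifts"]
    return max(0.0, min(1.0, on_time_rate - missed_penalty))

worker = {"deliveries_on_time": 180, "deliveries_total": 200, "missed_shifts": 2}
print(round(reliability_score(worker), 2))  # 0.8: an evaluation of a personal aspect
```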
Remote biometric identification system = an AI system for the purpose of identifying people, without their active involvement, typically at a distance through the comparison of a person’s biometric data with the biometric data contained in a reference database.
Social scoring = According to the Guidelines, social scoring is the evaluation or classification of people based on their social behaviour or personal or personality characteristics over a period of time. This is separated into three criteria that must be satisfied for the definition of social scoring: (1) the AI system evaluates or classifies natural persons or groups of persons over a certain period of time based on their social behaviour or known, inferred or predicted personal or personality characteristics; (2) the social score leads to detrimental or unfavourable treatment in social contexts unrelated to the contexts in which the data was originally generated or collected; and/or (3) the social score leads to detrimental or unfavourable treatment that is unjustified or disproportionate to the social behaviour or its gravity.
Transparency = The EU AI Act recitals state that transparency means that AI systems are developed and used in a way that allows appropriate traceability and explainability, while making humans aware that they communicate or interact with an AI system, as well as duly informing deployers of the capabilities and limitations of that AI system and affected persons about their rights.