
Criteria issued in the European Union on the use of personal data for the development and implementation of Artificial Intelligence models

January 22, 2025 /

In the context of artificial intelligence ("AI") models and their relationship with the protection of personal data, a new criterion has been issued in Europe on their legal implications. In September 2024, the Irish Data Protection Commission requested that the European Data Protection Board ("EDPB") issue an opinion on matters of general application pursuant to Article 64(2) of the General Data Protection Regulation ("GDPR"). Opinions issued by the EDPB are binding on one, several, or all of the public and independent authorities established by the Member States of the European Union ("Supervisory Authorities"), as indicated in the corresponding opinion.

The request concerned the processing of personal data in the context of the development and implementation phases of AI models; specifically, the following questions were raised:

  1. When and how can an AI model be considered "anonymous"?
  2. How can data controllers demonstrate the appropriateness of legitimate interest as a legal basis for processing in the development phases, under the GDPR?
  3. How can data controllers demonstrate the appropriateness of legitimate interest as a legal basis for processing in the implementation phases, under the GDPR?
  4. What are the consequences of the unlawful processing of personal data in the development phase of an AI model for the subsequent processing or operation of that model?

In light of the above, on December 17, 2024, the EDPB issued Opinion 28/2024 on certain data protection aspects related to the processing of personal data in the context of AI models (the "Opinion"), which is binding on all Supervisory Authorities. The EDPB provided the following responses:

  • Answer to the first question: The Opinion establishes that the anonymisation claims of an AI model must be assessed by the Supervisory Authorities, since the EDPB considers that AI models trained with personal data cannot, in all cases, be considered anonymous.

For such purposes, in order for an AI model to be considered anonymous, both the probability of direct (including probabilistic) extraction of personal data relating to the subjects whose personal data were used to develop the model, and the probability of obtaining such personal data, intentionally or not, through queries, must be negligible.

To carry out their assessment, the Supervisory Authorities will review the documentation provided by the controller to demonstrate the anonymisation of the AI model. To prove the above, elements that controllers may rely on include: (i) the sources used to collect the information to train the AI model; (ii) the preparation and minimisation applied to the personal data used to train the AI model; (iii) the audits carried out to estimate and predict the probability of identification of the personal data; and (iv) the tests performed on the AI model against external attacks, among others.

  • Answer to the second and third questions: The Opinion sets out general considerations that the Supervisory Authorities should take into account when assessing whether controllers can invoke legitimate interest as an appropriate legal basis for processing carried out in the context of the development and implementation of AI models.

The Opinion outlines three steps that must be undertaken when assessing legitimate interest as a legal basis for processing personal data, namely: (1) identifying the legitimate interest pursued by the controller; (2) assessing whether the processing is necessary for the purposes of the legitimate interest(s) pursued (necessity test); and (3) assessing that the interests or fundamental rights and freedoms of the data subjects do not override the legitimate interest(s) pursued (balancing test). The reasonable expectations of data subjects will also be an important factor in the balancing test, since, given the complexity of the technologies used in AI models, it may be difficult for data subjects to understand the variety of processing activities involved.

The Opinion also stresses that where the interests, rights and freedoms of data subjects appear to prevail over the legitimate interest(s) pursued by the controller, the controller may consider introducing mitigating measures to limit the impact of the processing on those data subjects; for example, technical measures, pseudonymisation measures, measures to facilitate the exercise of data subjects' rights, transparency measures, and measures against web scraping, among others.

  • Answer to the fourth question: The Opinion indicates that the Supervisory Authorities have discretionary powers to assess the potential infringement arising from the unlawful processing of personal data for these purposes, as well as to impose corrective measures that are appropriate, necessary and proportionate, taking into account the circumstances of each case. The Opinion analyses three hypotheses regarding the processing of personal data in AI models:
  • In the first hypothesis, the Opinion assesses the subsequent processing of personal data by the same controller that developed and implemented the AI model, as well as its implications depending on whether the development and implementation phases pursue different purposes.
  • In the second hypothesis, the Opinion analyses whether the controller in charge of the implementation phase carried out an adequate assessment to ensure that, during the development phase, there was no unlawful processing of the personal data stored in the AI model, which was developed by a third party.
  • In the third hypothesis, the Opinion examines cases in which personal data are processed unlawfully during the development of the AI model, but the model is anonymised before its implementation and subsequent use. In that case, the AI model could operate without the GDPR applying, provided that no additional personal data are subsequently processed. However, the GDPR would apply where, after the anonymisation of the AI model, additional personal data are processed in the implementation phase. Even then, the unlawfulness of the initial processing in the development phase should not affect the lawfulness of the processing carried out in the implementation phase of the AI model.

Finally, although the Opinion is only effective and applicable in the Member States of the European Union, it may come to have persuasive influence as a non-binding criterion in other jurisdictions, including Mexico, as has happened with the standards established in the GDPR. It will therefore be important to monitor the practical impact of the Opinion in the European Union.
