Ireland’s Data Protection Commission (DPC) has been active on AI over the past few weeks. The DPC published a blog post on AI, Large Language Models and Data Protection, following recent guidance from the CNIL and the Hamburg DPA, highlighting risks for businesses implementing AI tools, for example biases in AI systems caused by inaccurate or incomplete training data. The blog also provides guidelines for AI product providers, including a reminder to put processes in place to facilitate the exercise of data subject rights. The DPC had also brought a High Court application under Section 134 of the Data Protection Act 2018, having expressed concerns about the processing of personal data contained in the public posts of X’s EU/EEA users for the purpose of training its AI model ‘Grok’. After engagement with the DPC, X gave an undertaking and agreed to suspend its processing of this personal data, which it had processed between 7 May 2024 and 1 August 2024.

Following these proceedings, the DPC made a request to the European Data Protection Board (EDPB) for an opinion pursuant to Article 64(2) GDPR. The request invites the EDPB to consider, amongst other things, the extent to which personal data is processed at various stages of the training and operation of an AI model, including both first-party and third-party data, and the related question of what particular considerations arise in the assessment of the legal basis relied upon by the data controller to ground that processing. Commissioner Dale Sunderland commented: “The DPC hopes that the resulting opinion will enable proactive, effective and consistent Europe-wide regulation of this area more broadly. It will also support the handling of a number of complaints that have been lodged with/transmitted to the DPC in relation to a range of different data controllers, for purposes connected with the training and development of various AI models.” The EDPB will have a maximum of 14 weeks to produce the opinion.

This request is particularly timely given the stances on AI training taken by several Supervisory Authorities (SAs), including the French, Dutch, Belgian and Hamburg SAs, and the EDPB’s report on its ChatGPT Taskforce (focussing on web-scraped datasets used to train AI models). Now the DPC has also launched a cross-border statutory inquiry into Google, assessing whether Google complied with any obligation it may have had to undertake a Data Protection Impact Assessment, pursuant to Article 35 GDPR, before processing the personal data of EU/EEA data subjects in connection with the development of its foundational AI model, Pathways Language Model 2 (PaLM 2).

The DPC now has a sharp focus on AI (both for users and providers), in particular on personal data used to train AI and the underlying risk assessments. While the statutory inquiry into Google will likely take an extended period before the DPC reaches a decision, it will be very interesting to see how the EDPB wrestles with, inter alia, the applicable lawful bases for training and operating AI models.

Watch this space!