The Italian data protection regulator (the “Garante”) announced on 28 April 2023 that ChatGPT had been made accessible to Italian users again. This follows the temporary ban the Garante imposed just over a month earlier.
In its announcement, the Garante noted a number of improvements that OpenAI (the US company that manages ChatGPT) had implemented following the Garante’s requests. These improvements fall into three buckets:

Transparency:
- The ChatGPT website now has an information page addressed both to users and to non-users of ChatGPT (e.g. individuals whose personal data is contained in one of the publicly available resources that ChatGPT uses to train its models). This page explains what personal data (or what OpenAI calls ‘personal information’) is used to train the models and how that personal data is used, and reminds individuals that in certain jurisdictions they can object to such processing;
- The information on the processing of data is now accessible from the homepage before an individual registers with the service; and
- The OpenAI privacy notice (applicable only to users of the service) clarifies that while OpenAI will continue to process certain information in order to provide the service, relying on contract as its legal basis, it relies instead on its legitimate interests as the legal basis for processing personal data to train its algorithms, meaning individuals can opt out.
Control over data:
- The ChatGPT website has an easily accessible online form which allows users and non-users to object to their data being used for the training of algorithms (we’ve checked, and the link to the form works; whether anything inside the system actually happens after opting out is another matter…);
- Interestingly, the OpenAI privacy notice also flags that given the ‘technical complexity’ of how the models work, OpenAI may not be able to comply with a request to correct inaccurate personal data, and therefore individuals should make a request for data deletion instead;
- OpenAI have implemented a module which allows European users to disable training, which means their conversations and history won’t be used to train ChatGPT algorithms.
Age verification:
- Italian users already registered with the service are asked to confirm that they are over the age of 13 and, where they are minors, that they have parental consent; and
- For new users, the same age gate is in place.
These changes have been implemented with lightning speed (the last update to the privacy notice came the day before the Garante lifted the ban), and the Garante’s announcement strikes a much more conciliatory tone than the original stop announcement. The regulator ‘expresses satisfaction’ with the measures taken and ‘acknowledges the progress made to combine technological progress with respect for people’s rights.’ This may be a response to some of the critical commentary the Garante received after its initial stop announcement. However, the speed at which OpenAI has implemented these changes leaves some grey areas: for example, it is not clear whether the historical personal data used to train the ChatGPT model was collected with a valid lawful basis, or whether it will be deleted if you make a request now. Further, there are still knotty issues around valid lawful bases for training models, and around providing transparency information to data subjects whose personal data is being used to train a model but who have no idea this is happening. These issues are not limited to OpenAI and ChatGPT; they will need to be considered and resolved (or perhaps “fudged”) for many AI systems.
In the meantime, the Garante has encouraged OpenAI to implement an age verification system and to run an information campaign telling Italians what changes have been made and that they have the right to opt out of the processing of their personal data for training algorithms.
In parallel, the Garante will continue to cooperate as part of the special task force that has been set up by the European Data Protection Board to discuss possible regulatory frameworks for AI chatbots such as ChatGPT.
And for those of you interested in AI, please join us at our next In House Data Club on Artificial Intelligence – Realising the Rewards and Risks, taking place on 24 May at 4.00pm - 5.00pm on Zoom. We will be joined by expert speakers Daniel Hulme, CEO of Satalia, and Reva Schwartz of the National Institute of Standards and Technology (NIST). For more information and to sign up please click here.