Just days after receiving a €390 million fine from the Irish Data Protection Commission, Meta announced new restrictions - effective from February 2023 - on the types of teenagers' data advertisers can access to target their ads. Meta also took the opportunity to highlight the additional controls and resources that will be available to teenagers from March 2023.
What's changing?
Back in 2021, Meta initially restricted advertisers' targeting options by removing their ability to target teenagers based on their interests or what they were doing on other apps/websites.
Now, in a similar vein, the way teenagers have engaged with Facebook and Instagram - for example, whether they have followed a particular post - will no longer be used to personalise the ads they are shown. This batch of restrictions goes somewhat further, also removing advertisers' ability to target ads on the basis of a teenager's gender. This means that, moving forward, the only information available to target ads to teenagers will be their age and location.
Paired with these new restrictions, Meta also announced a new tool to give teenagers more control over the ads they see. Recognising that just because an ad complies with its Advertising Standards doesn't mean a teenager should have to see it, Meta will now allow teenagers to choose not to see particular types of ad. Meta gives the example of ads about a particular genre of TV programme, but in any case teenagers will now be able to go into their Ad Preferences settings and choose 'See Less' or 'No Preference' for the types of ads they are shown. Much like the new restrictions announced above, this is an additional layer of control for teenagers, who will still be able to hide any or all ads from particular advertisers.
This new suite of restrictions and tools is particularly significant because it represents a clear separation in the way users under the age of 18 are treated for targeted advertising. The monitoring of young users' behaviour is removed entirely, as is targeting based on non-essential characteristics. That's not to say ads tailored to users' age and location won't still be hugely beneficial to advertisers, but Meta recognises that "teens aren’t necessarily as equipped as adults to make decisions about how their online data is used for advertising". It's for this reason that users are notified when they turn 18 about how advertisers will then be able to target them, allowing them to make informed decisions in relation to their data.
Why now?
In the last 16 months, Meta (or the companies it owns) has been fined over €1.3 billion - €5.5 million of which was added to the tally only last week in relation to WhatsApp's violations of the GDPR. It must be stressed that only one of these fines specifically related to children's data, but it was the second-highest fine ever imposed under the GDPR. We've previously discussed the details of that decision: the Irish Data Protection Commission fined Instagram €405 million for numerous infringements of the GDPR relating to the way Instagram had dealt with children's data. The substantial fine stemmed from the fact that the default settings for business account users allowed children's email addresses and phone numbers to be visible, and led to findings that the safety measures on the platform were not adequate to protect children's data. Clearly, failings in connection with children's data can warrant potentially eye-watering fines.
Against this difficult backdrop, it's not overly surprising to see Meta remove the gender targeting option. This is likely one of a host of preparatory measures to get its 'ducks in a row'. The obligations under the Digital Services Act may not take effect until 2024, but they include (amongst other things) a prohibition on the use of minors' personal data for targeted advertising, as well as a requirement for platforms to implement special protection measures for minors' safety. Similarly, the Online Safety Bill is still working its way through the legislative process but will likely (in the not too distant future) impose a duty on platforms to protect children from content which is harmful to them. It's for this reason that the new restrictions stop short of removing all targeting options (i.e. age and location remain). Ads need to be targeted, to a degree, to ensure that the ads children see are age appropriate, particularly as some suggest that "inappropriate ads can cause as much harm as offensive or abusive content posted by others". So, while this is pre-emptive action, it seems inevitable that other platforms will follow suit shortly - if they haven't already.
Online regulation is evidently both a regulatory focal point and a digital minefield. New rules and obligations on how you can target advertising and use the data of a platform's users are going to come thick and fast over the coming years. Now more than ever, it's crucial to keep up with the changes and what you need to do, to avoid potentially significant penalties. For an easy-to-digest summary of all the significant online regulation changes on their way, see our Horizon Scanner or reach out to your usual Lewis Silkin contact, who would be happy to discuss.
Source: https://about.fb.com/news/2023/01/age-appropriate-ads-for-teens/