Generative AI is still in its relative infancy, but it is growing exponentially and is set to transform the ad industry. The possibilities are many and varied, including (as CAP and the ASA note) generating tailored ads that resonate uniquely with each individual viewer, as well as churning out dynamic campaigns that evolve in response to consumer behaviour. It can replace many human hours with the click of a button - but we'll leave aside the potential negative impact on those working hard in the industry for the moment, even if it's an issue close to our hearts!
There are doubtless other ways AI will affect the ad industry that are hard to imagine just yet - we don't know what we don't know. In the meantime, however, CAP and the ASA have issued updated guidance on how they will regulate the use of generative AI in advertising.
The CAP guidance covers the basics, but is useful:
What is generative AI?
CAP offers the following handy definition: "To put it simply, generative AI is a form of artificial intelligence that can create original content such as images, text or music tailored to a user’s requests. It learns from existing data that it has been trained on, to produce new things that have not explicitly been programmed in."
What you see AIs what you get
The ASA isn't aware of ever having ruled on an ad created using AI, though it is surely only a matter of time before it does.
The CAP Code is media-neutral and does not distinguish between ads created by humans or organisations and ads created using generative AI.
The guidance reminds us that if an ad falls within the ASA's scope, the CAP Code rules will apply (or the BCAP Code in the case of TV and radio ads), regardless of how the ad was created. As usual, claims must be truthful, not misleading, and substantiated. Images or videos that portray a product, its use or its efficacy must not create a misleading impression - in much the same way as photoshopped images or those subjected to social media filters might.
Even if marketing campaigns are entirely generated or distributed using automated methods, the advertiser remains responsible for the ad. Normally, 'advertiser' means the brand being promoted, but it can also include the publisher in the case of advertorials, joint ads or content promoting affiliated products.
So it seems that, for now at least, a human eye might need to be cast over AI-generated ads to ensure they are compliant.
Bias
Another area of potential concern is the risk of some AI models amplifying biases already present in the data on which they are trained. CAP points out that this could lead to socially irresponsible ads, including breaching the rules on harmful gender stereotypes, or causing harm or offence by perpetuating other stereotypes.
The CAP guidance refers to documented examples of specific generative AI tools tending to portray people in higher-paying jobs as men or as individuals with lighter skin tones. These tools also often portray idealised (and sometimes unrealistic) body standards that could be harmful or irresponsible.
Again, the advice is to have a human sense-check these ads wherever possible for bias, social responsibility and reliance on stereotypes.