This month the IPA and ISBA have launched their industry principles for the use of generative AI in advertising.
As noted by Sir Patrick Vallance, the Government’s Chief Scientific Adviser, in his March 2023 report for the Government as part of its Pro-innovation Regulation of Technologies Review: “The advent of generative AI globally represents both an opportunity and a challenge to the creative industries.”
The UK advertising industry’s success is due in no small part to its ability to innovate and adapt to new technologies. While it should embrace the use of generative AI, it should do so in an ethical way that protects both consumers and those working in the creative sector.
AI has the potential both to fuel an explosion in creativity and quality in advertising, and to exacerbate existing challenges around issues such as the monetisation of harmful content, intellectual property infringement, employment, and the handling of personal data. The IPA and ISBA will continue to disseminate best practice, inform on points of controversy or regulatory/legislative risk, and consider how we can help our members build their understanding.
The following principles focus on the use of generative AI by advertisers and their agencies in the creation of advertisements (and strategic insights). They are in addition to the industry’s legal and regulatory obligations.
The principles are:
1. AI should be used responsibly and ethically.
2. AI should not be used in a manner that is likely to undermine public trust in advertising (for example, through the use of undisclosed deepfakes, or fake, scam or otherwise fraudulent advertising).
3. Advertisers and agencies should ensure that their use of AI is transparent where it features prominently in an ad and is unlikely to be obvious to consumers.
4. Advertisers and agencies should consider the potential environmental impact when using generative AI.
5. AI should not be used in a manner likely to discriminate or show bias against individuals or particular groups in society.
6. AI should not be used in a manner that is likely to undermine the rights of individuals (including with respect to the use of their personal data).
7. Advertisers and agencies should consider the potential impact of the use of AI on intellectual property rights holders and the sustainability of publishers and other content creators.
8. Advertisers and agencies should consider the potential impact of AI on employment and talent. AI should be additive and an enabler, helping rather than replacing people.
9. Advertisers and agencies should perform appropriate due diligence on the AI tools they work with and only use AI when confident it is safe and secure to do so.
10. Advertisers and agencies should ensure appropriate human oversight and accountability in their use of AI (for example, fact- and permission-checking, so that AI-generated output is not used without adequate clearance and accuracy assurances).
11. Advertisers and agencies should be transparent with each other about their use of AI. Neither should include AI-generated content in materials provided to the other without the other's agreement.
12. Advertisers and agencies should commit to continual monitoring and evaluation of their use of AI, including but not limited to the potential negative impacts described above.
These principles apply to the use of generative AI in content creation. There are many other contexts in which AI may be used and where abuses should be guarded against, such as the large-scale creation of poor-quality clickbait on Made for Advertising (MFA) sites, or AI algorithms deciding to whom online ads are served. There may also be other legal issues to consider, such as the onward use or accessibility of sensitive consumer data fed into an AI system.