The Battle Against AI-Generated Deepfakes in Marketing

DALL-E generated image viewed in the C2PA content verifier

As the volume of AI-generated content increases, marketers need ways of distinguishing approved marketing content from deepfakes.

Why it matters: AI is enabling a proliferation of deepfakes, deployed through popular marketing channels, that are intended to be misleading or communicate misinformation. Some have the potential to cause harm to both consumers and brands.

Introduction

AI-generated marketing content is here to stay. Few dispute the benefits it can provide in efficiently creating personalized content across a wide range of audiences. In the future, we can also expect marketing platforms to generate dynamic content on the fly (with no human review) for newly detected personalization scenarios along the customer journey.

The ethical and legal issues of AI-generated content remain, but will likely be resolved in time, whether through constraints on AI engines or through mechanisms for attribution and licensing of original content sources.

And for those who still believe that AI-generated content is easy to spot with the naked eye, the level of realism across all forms of media will continue to increase. Last week's announcement from OpenAI of the Sora engine, which creates video from text prompts, is a mind-boggling example.

What’s the challenge?

With the growing proliferation of deepfakes, brands have to deal with misleading content entering their marketing channels. For example, content generated by users or social influencers is likely to include high volumes of AI-generated material that cannot practically be screened by humans (although machine-based screening is currently a popular area of research).

In January, Taylor Swift appeared to endorse a Le Creuset cookware giveaway — a deepfake that had some credible context. Ms. Swift does indeed have a collection of Le Creuset cookware that has appeared on social channels. But this ad was a pure scam, backed up with fake web properties designed to look like well-known retail websites, all intended to get consumers to part ways with their money.

This is one example of many that surface continually. Not all involve audacious attempts with A-list celebrities, but most try to communicate legitimacy through context plus the simple application of labels claiming bona fide sponsorships or verified account sources.

The concept of watermarking

I came across the concept of digital watermarking about 30 years ago when I was working on color image compression. The concept has renewed interest today, where the approach remains largely the same. Using an image as an example, information or provenance data about that image is embedded within the pixels of the image itself. These watermarks are not visible to the human eye, but they can be read and displayed using special software.

The idea sounds ingenious. Assuming you can implement these techniques in popular creator tools and platforms that display content, this could be an efficient way to manage authenticity. There’s just one problem…

It has proven fairly straightforward to remove these digital watermarks. Taking our image example, performing basic image processing such as contrast adjustment or a Gaussian blur changes the pixels sufficiently for the watermark to be lost. Image editing operations like cropping and rotating can also be successful in making a digital watermark unreadable.
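The embed-and-destroy cycle described above can be illustrated with a toy least-significant-bit (LSB) scheme. This is a minimal sketch, not a production watermarking algorithm: the image is modeled as a flat list of grayscale pixel values, and a simple box blur stands in for the "basic image processing" that wipes the mark out.

```python
def embed_watermark(pixels, message):
    """Hide each bit of `message` in the least significant bit of a pixel."""
    bits = [(byte >> i) & 1 for byte in message.encode() for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for message")
    marked = list(pixels)
    for i, bit in enumerate(bits):
        marked[i] = (marked[i] & ~1) | bit  # overwrite only the lowest bit
    return marked

def extract_watermark(pixels, length):
    """Read `length` bytes back out of the pixel LSBs."""
    data = bytearray()
    for b in range(length):
        byte = 0
        for i in range(8):
            byte |= (pixels[b * 8 + i] & 1) << i
        data.append(byte)
    return data.decode(errors="replace")

def box_blur(pixels):
    """3-tap box blur: exactly the kind of edit that destroys an LSB mark."""
    blurred = []
    for i in range(len(pixels)):
        window = pixels[max(0, i - 1): i + 2]
        blurred.append(sum(window) // len(window))
    return blurred
```

Embedding a short string into a flat gray image and reading it back succeeds, but a single blur pass scrambles the recovered bits — the visible pixels barely change, while the hidden payload is gone.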

Image scientists continue to look for solutions that can make a digital watermark immutable, but the problem may remain for the foreseeable future.

The 'opt-in' opportunity for marketing

Despite these limitations, digital watermarking (and other associated techniques) may still have merit for marketers based on adopting an ‘opt-in’ approach. The argument is that good actors (e.g. commercial platforms, brands, agencies etc.), who want to mark their content as authentic and approved, can still do so using current generation tools. The watermark will remain effective as long as the content remains under the control of the brand and is displayed on channels that preserve it.

Trusted platforms can be modified to decode the watermark and display it in a form that consumers can consistently recognize and look for. Of course, consumers are left to make their own judgment about content that is either unmarked or marked by an unknown third-party, yet appears to be associated with a brand (similar to what most of us have to do with phishing emails today).

Supporting Technology

Several technology efforts are attempting to address content authenticity. Here are three worth looking at to get an understanding of what is coming:

  1. SynthID: Developed by the DeepMind team at Google, SynthID provides watermarking capabilities for images and, more recently, music and audio. Currently in beta for users of the Google Cloud Vertex AI service, it can add watermarks to images with some level of robustness against basic image filters and compression techniques. SynthID can also scan images for a digital watermark and assess the likelihood that an image (or part of it) was created with the Google Imagen engine.
  2. Coalition for Content Provenance and Authenticity (C2PA): C2PA has developed a standard for labeling digital content that identifies who created it, when it was created and how. The coalition, led by Adobe, includes Meta, Microsoft and, more recently, Google. Today, C2PA credentials are embedded in a file as metadata (in the form of a digitally signed manifest) and, like many other authenticity techniques, this can be removed. However, with an 'opt-in' approach and adoption from platforms like Facebook and YouTube, C2PA could provide benefit for consumers and brands alike.
  3. Digimarc: Digimarc has developed sophisticated digital watermarking technology which they sell in a variety of commercial applications. They claim their current watermarking is more resilient to tampering, making it of interest to a large number of potential users. They have also demonstrated a unique integration where a C2PA manifest was interlinked with their digital watermark scheme making it easier to identify C2PA manifests that have been removed or modified.
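The signed-manifest idea behind C2PA can be sketched as follows. This is a hypothetical illustration of the shape of the check, not the real standard: actual C2PA manifests use X.509 certificates and COSE signatures rather than the shared-key HMAC used here, and the field names below are invented.

```python
import hashlib
import hmac
import json

# Assumption: a stand-in shared key; real C2PA signing uses certificate-based keys.
SIGNING_KEY = b"brand-signing-key"

def make_manifest(asset_bytes, creator, tool):
    """Bind provenance claims to the asset by hashing it, then sign the claims."""
    claim = {
        "creator": creator,
        "tool": tool,
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": signature}

def verify_manifest(asset_bytes, manifest):
    """Check the signature first, then check the asset still matches the claim."""
    payload = json.dumps(manifest["claim"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, manifest["signature"]):
        return False  # claims altered, or signed by a different key
    return manifest["claim"]["asset_sha256"] == hashlib.sha256(asset_bytes).hexdigest()
```

Editing the asset or tampering with the claims makes verification fail, while stripping the manifest entirely just leaves "no credentials present" — which is why Digimarc's interlinking of manifest and watermark, so that removal itself is detectable, is an interesting step.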

Getting started

Digital watermarking is an important technology for content creators to understand. Take a look at some of the current technologies and try the C2PA tools listed below on some of your own AI-generated content to get a feel for how this is going to work.

This year, we can expect some of the popular creator tools and social platforms to add support for this type of technology. Hopefully, this will be a seamless experience for content creators with little extra work involved beyond some configuration and workflow adjustments.

An 'opt-in' approach still requires mass adoption to provide substantial benefit. Standards like C2PA need to become as pervasive as SSL did for secure Internet connections. Adoption will be a marathon, not a sprint, but one worth running: it gives consumers a fighting chance at distinguishing legitimate content from deepfakes and helps brands maintain their reputations.

Related Tools

C2PA Content Verifier

Digimarc C2PA Validator Chrome Extension

