Adobe accelerates rollout of AI image watermarks for transparency


Adobe is ramping up its efforts to label AI-generated images with watermarks, aiming to make them the “ultimate signal of transparency in digital content.” In 2021, Adobe introduced Content Credentials, a CR symbol embedded in an image’s metadata that records information about the image’s creator and any manipulations. Initially available only in the Photoshop beta, the service is now being rolled out at scale.

Microsoft is joining the initiative by adding Content Credentials to all AI-generated images produced by its Bing Image Creator; these watermarks record the date and time of the original creation. Global communications company Publicis Groupe will also participate, deploying Content Credentials across its worldwide network of designers, marketers, and creatives as part of a “trusted campaign of the future”. Camera makers Leica and Nikon are incorporating the watermarks into their upcoming camera models.

Content Credentials can be added from within Adobe’s editing applications. A symbol is embedded in the image’s metadata to identify the creator and owner; when users hover over this icon, information about the creator and any AI tools used is displayed. Adobe believes these watermarks will help restore trust online and become an “icon of transparency” for digital content.
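Under the hood, Content Credentials follow the open C2PA standard, which stores a cryptographically signed manifest inside the image file itself; in JPEGs, that manifest travels as JUMBF boxes inside APP11 marker segments. As a rough illustration only (this is not Adobe’s tooling, and real verification requires parsing and signature-checking the manifest), a minimal scan for those segments might look like:

```python
def find_app11_segments(jpeg_bytes: bytes) -> list:
    """Return the payloads of APP11 (0xFFEB) marker segments in a JPEG.

    Per the C2PA specification, Content Credentials manifests are embedded
    in JPEG files as JUMBF boxes carried in APP11 segments. This sketch only
    locates those segments; it does not parse or verify the manifest.
    """
    segments = []
    i = 2  # skip the SOI marker (0xFFD8) at the start of the file
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            break  # not at a marker; stop the simple scan
        marker = jpeg_bytes[i + 1]
        if marker in (0xD9, 0xDA):  # EOI or SOS: no more metadata segments
            break
        # Segment length is big-endian and includes the two length bytes
        length = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        if marker == 0xEB:  # APP11 carries JUMBF / C2PA data
            segments.append(jpeg_bytes[i + 4:i + 2 + length])
        i += 2 + length
    return segments
```

In practice, anyone inspecting credentials would use a full C2PA implementation (such as the open-source c2patool from the Content Authenticity Initiative) rather than a byte-level scan like this, since the manifest’s JUMBF structure and signatures must be validated to mean anything.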

Adobe stated in a blog post, “We believe Content Credentials empower a basic right for everyone to understand context and more about a piece of content. We see a future where consumers will see the icon on social platforms, news sites, and digital brand campaigns, and habitually check for Content Credentials just like they look for nutrition labels on food”.

The proliferation of free AI image generation tools has raised concerns about deepfakes and misinformation. Tech companies, including Adobe, have pledged to develop watermarking technology to combat the spread of fake images. Google has introduced its own tool, SynthID, which embeds invisible digital watermarks directly into an image’s pixels to identify AI-generated content. Digimarc has developed a digital watermark tool that carries copyright information.

However, there are concerns that these digital watermarks may be susceptible to removal or manipulation. Some experts believe that current watermarking methods are not entirely reliable. Soheil Feizi, a computer science professor at the University of Maryland, has stated, “We don’t have any reliable watermarking at this point”. Ben Colman, the CEO of AI detection startup Reality Defender, added that watermarks could be easily faked, removed, or ignored, posing challenges to their effectiveness.
