ChatGPT will digitally tag images generated by DALL-E 3 to help battle misinformation

At a time when fraudsters are using generative AI to scam people out of money or tarnish reputations, tech companies are offering methods to help users verify content — at least still images, to start. As teased in its 2024 disinformation strategy, OpenAI now includes provenance metadata in images generated with ChatGPT on the web and via the DALL-E 3 API, with their mobile counterparts receiving the same upgrade by February 12.

The metadata follows the open C2PA (Coalition for Content Provenance and Authenticity) standard, and when one of these images is uploaded to the Content Credentials Verify tool, you can trace its provenance. For example, an image generated with ChatGPT will carry a first metadata manifest showing that it originated from the DALL-E 3 API, followed by a second manifest showing that it surfaced in ChatGPT.

Despite the sophisticated cryptographic technology behind the C2PA standard, this verification method only works when the metadata is intact; the tool is of no use if you upload an AI-generated image that lacks metadata — as is the case with any screenshot or image uploaded to social media. Unsurprisingly, the current sample images on the official DALL-E 3 page also came back empty. In its FAQ, OpenAI admits this is no silver bullet in the fight against disinformation, but believes the key is to encourage users to actively seek out such signals.
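To see why a screenshot defeats this check: in a JPEG, C2PA manifests are carried in APP11 (0xFFEB) marker segments as JUMBF boxes, and anything that re-encodes only the pixels simply never writes those segments back. The sketch below is illustrative, not OpenAI's or C2PA's code; it walks JPEG marker segments and reports whether an APP11 segment is present, using hand-built byte strings as stand-ins for a tagged and an untagged image.

```python
# Illustrative sketch: C2PA manifests in JPEGs live in APP11 (0xFFEB)
# marker segments. A screenshot or social-media re-compress rebuilds the
# file from pixels alone, so these segments disappear.
import struct

def has_app11_segment(jpeg_bytes: bytes) -> bool:
    """Walk JPEG marker segments; True if an APP11 segment exists."""
    if jpeg_bytes[:2] != b"\xff\xd8":          # must start with SOI marker
        raise ValueError("not a JPEG")
    i = 2
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:              # lost marker sync: stop
            break
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:                     # SOS: entropy-coded data follows
            break
        length = struct.unpack(">H", jpeg_bytes[i + 2:i + 4])[0]
        if marker == 0xEB:                     # APP11: where C2PA/JUMBF lives
            return True
        i += 2 + length                        # skip marker + segment payload
    return False

# Minimal fake JPEGs: SOI + (optional APP11 segment) + SOS.
with_meta = b"\xff\xd8" + b"\xff\xeb" + struct.pack(">H", 10) + b"JUMBF..." + b"\xff\xda"
without_meta = b"\xff\xd8\xff\xda"

print(has_app11_segment(with_meta))     # True
print(has_app11_segment(without_meta))  # False
```

A real verifier such as the Content Credentials tool goes much further, parsing the JUMBF box and validating the cryptographic signatures inside, but the presence check above is the step that fails first once metadata has been stripped.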

While OpenAI's latest efforts to thwart fake content are currently limited to still images, Google DeepMind has already launched SynthID for digitally watermarking AI-generated images and audio. Meanwhile, Meta has tested invisible watermarking via its AI image generator, which may be less susceptible to tampering.

