Meta Aims to Set Standards for AI-Generated Content

[Image: Meta AI transparency. Photo credit: Meta]

In an effort to position itself as a leader in the fast-evolving world of AI, Meta has revealed that it’s working with industry partners to develop “common technical standards” for spotting and labeling AI-generated video and audio content at scale.

Although the tech giant already automatically labels any content created with its own AI tools, it is exploring ways to broaden that reach and alert users to AI-generated content they come across on Facebook, Instagram and Threads. Meta is in the process of building this capability and will start applying labels in all languages supported by each app, according to Nick Clegg, President of Global Affairs for Meta, who shared further details in a company blog post.

Harnessing the Power of Partnership

Meta is tapping into forums such as the Partnership on AI (PAI) to develop these common standards. Currently, the invisible markers used for Meta AI images – IPTC metadata and invisible watermarks – are in line with PAI’s best practices. Additionally, the company is building tools designed to identify invisible markers at scale – specifically, the “AI generated” information in the C2PA and IPTC technical standards – so Meta can label images from Google, OpenAI, Microsoft, Adobe, Midjourney and Shutterstock as those companies implement their plans for adding metadata to images created by their tools.

However, there is still plenty to come in this journey. Clegg noted that while many companies have begun including these signals in their image generators, they haven’t yet done so at the same scale for AI-generated audio and video. To address this, Meta is allowing people to manually disclose when they share AI-generated video or audio. Should users fail to indicate when they have shared photorealistic video or realistic-sounding audio, Meta “may apply penalties,” according to Clegg.

“These are early days for the spread of AI-generated content,” Clegg said. “As it becomes more common in the years ahead, there will be debates across society about what should and shouldn’t be done to identify both synthetic and non-synthetic content. Industry and regulators may move towards ways of authenticating content that hasn’t been created using AI as well as content that has. What we’re setting out today are the steps we think are appropriate for content shared on our platforms right now. But we’ll continue to watch and learn, and we’ll keep our approach under review as we do.”

Published: Feb 8, 2024
