TikTok Partners with Industry to Label AI-Generated Content

In a significant move to combat the spread of misleading information, TikTok announced on Thursday that it will begin automatically labeling videos and images generated using artificial intelligence (AI) on its platform. The popular video-sharing app is partnering with the Coalition for Content Provenance and Authenticity (C2PA) to implement the coalition's digital watermarking technology, Content Credentials.

Key Takeaways

TikTok is implementing automatic labeling for AI-generated content uploaded from external sources using C2PA's Content Credentials technology.
The move is part of an industry-wide effort to promote transparency and combat the spread of misleading information, particularly in light of the upcoming 2024 election.
While experts recommend AI content labeling, they also caution that labels can cause confusion without proper context, prompting TikTok to invest in media literacy campaigns.
Watermarking may not be a foolproof solution, as researchers have found ways to break existing watermarks, and smaller AI groups may continue to produce unlabeled content.
The future of TikTok in the U.S. remains uncertain, with the company facing a potential ban if it is not sold within nine months.
The U.S. government is taking steps to regulate AI and protect the public against its potential risks, with the Department of Commerce requesting additional funds for the AI Safety Institute.

The new feature will attach metadata to AI-produced content, enabling TikTok to recognize and label it automatically. The capability is currently rolling out for images and videos, with plans to extend it to audio-only content in the near future. TikTok already labels content created using its in-app AI effects and requires creators to label any content containing realistic AI-generated material.

With this latest update, automatic labeling will now also be applied to AI-generated content uploaded from external sources. The announcement comes at a critical time, as concerns mount about deepfakes and AI-generated misinformation ahead of the 2024 US election.

Industry-Wide Collaboration to Promote AI Transparency

In addition to implementing automatic labeling, TikTok announced that it has joined the Content Authenticity Initiative, an industry-wide group spearheaded by Adobe that is working to establish standards for transparency and traceability in how digital images, videos, and audio clips are produced.

Back in February, TikTok was among the 20 leading tech companies that pledged to combat AI-driven misinformation during the 2024 election cycle. Microsoft, Meta, Google, Amazon, and OpenAI were among the other signatories of the pact.

The Content Authenticity Initiative (CAI) promotes an industry standard for provenance metadata defined by the C2PA. The CAI cites curbing disinformation as one of the key motivations behind its activities. By joining forces with the CAI and C2PA, TikTok aims to drive the adoption of Content Credentials across the industry.

How Content Credentials Work

[Image: TikTok labeling AI-generated content]

Content Credentials is an open-source technology that attaches specific metadata to content, which platforms like TikTok can then use to instantly recognize and label AI-generated content. When a person uses an AI tool like OpenAI's DALL-E to generate an image, the tool attaches a watermark to the resulting image and adds data to the file that can later indicate whether it has been tampered with.

If that marked image is then uploaded to TikTok, it will be automatically labeled as AI-generated. This system relies on both the maker of the generative AI tool and the platform distributing the content to agree to use the industry standard.
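
To make that flow concrete, here is a minimal Python sketch of the kind of platform-side check described above. It assumes the upload is a JPEG and that the generating tool embedded its Content Credentials manifest in the standard C2PA location for JPEG files (APP11 marker segments carrying JUMBF boxes); the file name and function name are illustrative, and a production system would use a C2PA SDK to parse and cryptographically verify the manifest rather than merely detect its presence.

```python
import struct

def has_c2pa_manifest(path: str) -> bool:
    """Rough check for an embedded C2PA manifest in a JPEG file.

    C2PA stores its manifest in APP11 (0xFFEB) marker segments as JUMBF
    boxes. This sketch only detects that such a segment is present; a real
    pipeline would hand the file to a C2PA SDK to parse the manifest and
    verify its signature before trusting (and labeling) the content.
    """
    with open(path, "rb") as f:
        if f.read(2) != b"\xff\xd8":                  # SOI marker: not a JPEG
            return False
        while True:
            marker = f.read(2)
            if len(marker) < 2 or marker[0] != 0xFF:
                return False                          # malformed or end of stream
            if marker[1] == 0xDA:                     # SOS: image data begins
                return False
            size = f.read(2)
            if len(size) < 2:
                return False
            length = struct.unpack(">H", size)[0]
            payload = f.read(length - 2)
            if marker[1] == 0xEB and b"jumb" in payload[:32]:
                return True                           # APP11 segment carrying a JUMBF box

if __name__ == "__main__":
    # A platform could run a check like this at upload time and surface an
    # "AI-generated" label when the manifest attributes the content to a
    # generative tool. "upload.jpg" is an illustrative file name.
    print(has_c2pa_manifest("upload.jpg"))
```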

While experts widely recommend AI content labeling as a way to support responsible content creation, they also caution that labels can cause confusion if viewers lack context about what they mean. To address this, TikTok has been working with experts to develop media literacy campaigns that help its community identify and think critically about AI-generated content and misinformation.

Challenges and Limitations

Despite the proactive measures being taken by TikTok and other tech giants, watermarking may not be a foolproof solution to the problem of AI-generated misinformation. Researchers at the University of Maryland found that watermarking techniques for digital images are not always robust, and they were able to develop an attack to break every existing watermark they encountered.

Moreover, smaller, less commercially focused, and less scrupulous AI groups will likely continue to produce unlabeled content. Open-source tools such as Stable Diffusion, whose underlying code is freely available, can be modified to strip out any labeling. Midjourney, a popular image-generation tool, is not a member of the C2PA coalition.

Critics of C2PA argue that its labels can be tampered with, as the system relies on embedding provenance data within the metadata of digital files, which can easily be stripped or swapped by bad actors. Some argue that only blockchain's immutable ledger can give true confidence in content provenance.
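
To illustrate that criticism, here is a minimal Python sketch (using the Pillow imaging library; the file names are hypothetical) showing how re-encoding an image from its raw pixels silently discards metadata-carried provenance, including any C2PA manifest, so the copy would arrive at a platform with nothing left to trigger a label.

```python
from PIL import Image

# Rebuilding the image from raw pixel data drops every metadata segment
# (EXIF, XMP, and the APP11/JUMBF segments that carry a C2PA manifest).
# "labeled.jpg" and "stripped.jpg" are illustrative file names.
with Image.open("labeled.jpg") as original:
    pixels_only = Image.frombytes(original.mode, original.size, original.tobytes())
    pixels_only.save("stripped.jpg", quality=95)   # saved copy has no provenance metadata
```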

The Future of TikTok and AI Regulation

The future of TikTok in the U.S. remains uncertain, with President Joe Biden signing legislation in April that grants ByteDance, TikTok's parent company, nine months to sell the app or face a ban in the country. TikTok and ByteDance have sued to block the law, arguing it violates the First Amendment.

As the battle over TikTok's future unfolds, the U.S. government is also taking steps to regulate AI and protect the public against its potential risks. The Department of Commerce requested an additional $62.1 million in fiscal year 2025 to safeguard, regulate, and promote AI, including protecting the American public against its societal risks.

United States Secretary of Commerce Gina Raimondo explained that these funds would go toward the AI Safety Institute, which will develop standards for adequate watermarking and red teaming to ensure the safety of Americans in the face of AI-generated content.

As the AI landscape continues to evolve rapidly, it is crucial for tech companies, policymakers, and the public to work together to ensure the responsible development and deployment of these powerful technologies. TikTok's move to automatically label AI-generated content is a step in the right direction, but it is clear that much more needs to be done to address the complex challenges posed by AI.
