YouTube Will Soon Require Creators To Disclose Their AI-Generated 'Realistic' Content
In a recent blog post, YouTube announced that creators will soon be required to disclose when they publish altered or synthetically generated content that appears realistic.
The move comes as the Google-owned video platform looks to address growing concerns that AI-generated content could mislead viewers.
According to the blog post, "This is especially important in cases where the content discusses sensitive topics, such as elections, ongoing conflicts and public health crises, or public officials."
YouTube will introduce a warning label in the description panel for altered or synthetic content. Additionally, sensitive topics will have a more noticeable label on the video player.
The platform will also allow people to request the removal of manipulated video “that simulates an identifiable individual, including their face or voice”.
However, not all removal requests will be granted. YouTube says it will weigh various factors, such as whether the content is parody or satire, and whether it involves a public official or other well-known individual.
This development comes as YouTube is gearing up to introduce its own AI products on the platform, which it unveiled in September.
These include generating AI-made backgrounds for short videos from typed prompts, helping creators brainstorm fresh video concepts, and expanding their audience with features like automatic dubbing.