How does YouTube remove harmful content?
Our commitment to responsibility starts with our Community Guidelines. These policies are designed to ensure that our community stays protected. They set out what's allowed and not allowed on YouTube, and apply to all types of content on our platform, including videos, comments, links and thumbnails. Our policies cover areas such as hate speech, harassment, child safety and violent extremism, amongst others.
Each of our policies is developed in partnership with a wide range of external industry and policy experts, as well as YouTube creators, and we systematically review our policies to make sure that they are current. Examples include the major updates to our hate speech and harassment policies in 2019; the rollout of our 2020 policy to address harmful conspiracy theory content; and our COVID-19 medical misinformation policy, which has evolved throughout the pandemic.
We remove content that violates our policies as quickly as possible, using a combination of people and machine learning to detect potentially problematic content on a massive scale. In addition, we rely on the YouTube community as well as experts in our Trusted Flagger programme to help us spot potentially problematic content by reporting it directly to us. We also go to great lengths to make sure that content that violates our policies isn't widely viewed, or even viewed at all, before it's removed. Our automated flagging systems help us detect and review content even before it's seen by our community.
Once such content is identified, human content reviewers evaluate whether it violates our policies. If it does, we remove the content and use it to train our machines for better coverage in the future. Our content reviewers also protect content that has a clear educational, documentary, scientific or artistic (EDSA) purpose.
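The flow described above — automated flagging, human review (including EDSA exceptions), removal, and feeding confirmed decisions back into training — can be sketched roughly as follows. This is a minimal illustration, not YouTube's actual system; every name and threshold here is an assumption.

```python
from dataclasses import dataclass

@dataclass
class Video:
    video_id: str
    flag_score: float        # score from an automated classifier (illustrative, 0..1)
    human_verdict: str = ""  # reviewer decision: "violates", "edsa_exception", or ""

# Assumed cut-off above which flagged content is routed to human reviewers.
REVIEW_THRESHOLD = 0.7

def moderate(videos, training_set):
    """Route flagged videos to review; remove confirmed violations and
    record reviewer decisions as future training examples."""
    removed = []
    for v in videos:
        if v.flag_score < REVIEW_THRESHOLD:
            continue  # not flagged strongly enough for review
        if v.human_verdict == "violates":
            removed.append(v.video_id)             # remove the content
            training_set.append((v, "violation"))  # use the decision to retrain
        elif v.human_verdict == "edsa_exception":
            training_set.append((v, "edsa"))       # protected EDSA content stays up
    return removed
```

The key property this sketch captures is that machines do detection at scale while humans make the final policy call, and both outcomes (violation and EDSA exception) improve future automated coverage.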
How does YouTube reduce the spread of harmful misinformation and borderline content?
While our Community Guidelines set the rules of the road for content on YouTube, there will always be content that brushes up against our policies, but doesn't quite cross the line. This borderline content represents a fraction of 1% of what's watched on YouTube. That said, even a fraction of a percent is too much.
So in 2019, we announced changes to our recommendation systems to reduce the spread of borderline content, resulting in a 70% drop in watch time on non-subscribed, recommended borderline content in the US that year. We also saw a drop in watch time of borderline content coming from recommendations in other markets. And as of March 2021, we rolled out changes to our recommendation system to reduce borderline content in every market where we operate. We are committed to continuing our work to reduce recommendations of borderline content. While algorithmic changes take time to ramp up and you might see consumption of borderline content go up and down, our goal is to keep views of non-subscribed, recommended borderline content below 0.5%.
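The stated goal — keeping views of non-subscribed, recommended borderline content below 0.5% — is a simple share-of-views metric. A hypothetical sketch of how such a share could be computed is below; the field names are assumptions for illustration only.

```python
def borderline_recommendation_share(views):
    """Fraction of views that are borderline videos recommended to
    viewers who are not subscribed to the channel.

    views: list of dicts with boolean 'borderline', 'recommended',
    and 'subscribed' keys (illustrative schema).
    """
    if not views:
        return 0.0
    hits = sum(
        1 for v in views
        if v["borderline"] and v["recommended"] and not v["subscribed"]
    )
    return hits / len(views)

# The goal described above corresponds to keeping this value below 0.005.
```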
How does YouTube raise authoritative content?
There are a lot of signals – such as relevance and popularity – that matter in determining which videos you typically see in YouTube search and recommendations. However, when it comes to topics such as news, politics, and medical and scientific information, we know that there is no substitute for authoritativeness. That's why we have introduced a range of features to tackle this challenge holistically.
For example, in search results and recommended videos, we raise authoritative voices for newsworthy events and topics prone to misinformation. We also have dedicated product features such as the Breaking News shelf and Top News shelf, which feature relevant videos from authoritative news sources.
Context is critical when evaluating information, so we also provide information panels that feature text-based information alongside certain search results and videos to help you make your own decisions about the content that you find on YouTube.
How does YouTube reward trusted creators and artists?
Being accepted into the YouTube Partner Programme (YPP) is a major milestone in any creator's journey. As part of YPP, creators can start monetising their content and gain access to dedicated support and benefits.
Over the last few years, we have taken steps to strengthen the requirements for monetisation so that spammers, impersonators and other offenders can't hurt the ecosystem or take advantage of creators who have put their time, energy and passion into producing high-quality content.
To apply for membership in YPP, channels must meet eligibility thresholds related to watch time and subscribers. After they have applied, YouTube's review team ensures that only channels that meet the eligibility thresholds and follow all of our guidelines are admitted to the programme, which grants them access to ads and other monetisation products.
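The two-stage admission described above — a hard threshold gate followed by a human review of the channel against all guidelines — can be sketched as a simple check. The threshold values below are assumptions for the example, not YouTube's published numbers.

```python
# Assumed eligibility thresholds (illustrative only).
YPP_MIN_SUBSCRIBERS = 1_000
YPP_MIN_WATCH_HOURS = 4_000

def meets_eligibility(subscribers: int, watch_hours: float) -> bool:
    """First gate: hard eligibility thresholds on subscribers and watch
    time. A human review of the channel against all guidelines still
    follows before admission to the programme."""
    return (subscribers >= YPP_MIN_SUBSCRIBERS
            and watch_hours >= YPP_MIN_WATCH_HOURS)
```

Keeping the threshold check separate from the guideline review mirrors the text: meeting the numbers only qualifies a channel to apply; it does not by itself grant monetisation.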
Advertisers typically do not want to be associated with controversial or sensitive content on YouTube, as defined in our advertiser-friendly content guidelines. If a creator has turned on ads monetisation for a video but our reviewers and automated systems determine that the video does not comply with our advertiser-friendly content guidelines, the video will have limited or no ads appear against it, which means that the creator won't be able to make money on that video. We may also suspend a creator's channel from the YPP for severe or repeated violations of our YouTube monetisation policies.
Responsibility is our number one focus, and everything we do is seen through that lens: the downsides of getting this wrong, from both a user and a business perspective, drastically outweigh any other consideration.