
How does YouTube address misinformation?

With billions of people visiting us every day – whether they're looking to be informed, to catch up on the latest news or to learn more about the topics they care about – we have a responsibility to connect people to high-quality content. So the most important thing we can do is increase the good and decrease the bad. That's why we address misinformation on our platform based on our '4 Rs' principles: we remove content that violates our policies, reduce recommendations of borderline content, raise up authoritative sources for news and information, and reward trusted creators. Learn more about how we treat misinformation on YouTube.

Fighting misinformation

What policies exist to fight misinformation on YouTube?

Several policies in our Community Guidelines are directly applicable to misinformation.

The COVID-19 Medical Misinformation policy doesn't allow content that contradicts medical guidance about COVID-19 from local health authorities or the World Health Organisation (WHO).

Our guidelines against deceptive practices include tough policies against users who misrepresent themselves or engage in other deceptive practices. This includes deceptive use of manipulated media (e.g. 'deepfakes'), which may pose serious risks of harm. We also work to protect elections from attacks and interference, including by combating political influence operations.

We also have a policy against impersonation. Accounts that seek to spread misinformation by misrepresenting who they are violate our policies and will be removed.

And finally, our hate speech policy prohibits content that denies that well-documented, major violent events took place.

How does YouTube deal with borderline content and harmful misinformation?

Content that comes close to violating our Community Guidelines, but doesn't quite cross the line, makes up a fraction of 1% of what's watched on YouTube in the US. Our recommendation systems do not proactively recommend such content, which helps limit the spread of borderline videos that could misinform users in harmful ways.
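
To make the mechanism concrete: borderline content stays available on the site but is excluded from proactive recommendation. Below is a minimal sketch of that filtering step, assuming each candidate video already carries a model-estimated borderline score from an upstream classifier; the score, threshold and field names are hypothetical, not YouTube's actual system.

```python
def eligible_for_recommendation(candidates, borderline_threshold=0.5):
    """Keep borderline content out of proactive recommendations without removing it."""
    return [c for c in candidates if c["borderline_score"] < borderline_threshold]

candidates = [
    {"id": "v1", "borderline_score": 0.1},
    {"id": "v2", "borderline_score": 0.8},  # stays on the site, but is not recommended
]

print([c["id"] for c in eligible_for_recommendation(candidates)])  # ['v1']
```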

How does YouTube determine what counts as harmful misinformation?

We have careful systems in place to help us determine what is harmful misinformation across the wide variety of videos on YouTube. As part of this, we ask external human evaluators and experts to assess whether content promotes unsubstantiated conspiracy theories or inaccurate information. These evaluators are trained using public guidelines and provide critical input on the quality of a video. Based on the consensus input from the evaluators, we use well-tested machine learning systems to build models that inform our recommendations. These models help review hundreds of thousands of hours of video every day in order to find and limit the spread of harmful misinformation.
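
As a rough illustration of the aggregation step described above, here is a minimal sketch that turns several evaluators' independent ratings of a video into a single consensus training label. The majority-vote rule, the 0.7 agreement threshold and the sample ratings are all assumptions for illustration, not YouTube's actual method.

```python
from collections import Counter

def consensus_label(ratings):
    """Majority evaluator rating, or None when there is no clear consensus."""
    label, votes = Counter(ratings).most_common(1)[0]
    return label if votes / len(ratings) >= 0.7 else None

# Each video was rated independently by several trained evaluators.
rated_videos = {
    "video_a": ["harmful", "harmful", "harmful", "ok"],
    "video_b": ["ok", "ok", "ok", "ok"],
    "video_c": ["harmful", "harmful", "ok", "ok"],  # split vote: unusable for training
}

training_labels = {
    video: label
    for video, ratings in rated_videos.items()
    if (label := consensus_label(ratings)) is not None
}

print(training_labels)  # {'video_a': 'harmful', 'video_b': 'ok'}
```

Only videos with clear evaluator agreement become training data; ambiguous cases are dropped rather than guessed at, which keeps the resulting models anchored to human consensus.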

How does YouTube provide more quality information to users?

For content where accuracy and authoritativeness are key, including news, politics, medical and scientific information, we use machine learning systems that prioritise information from authoritative sources in search results and recommendations.
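
As a sketch of what 'prioritising authoritative sources' can mean mechanically, consider a ranking function that blends topical relevance with a source-authoritativeness score, weighting authority heavily for news and medical queries. The weights, scores and sample data below are hypothetical, not YouTube's actual ranking.

```python
def rank_results(candidates, authority_weight=0.6):
    """Order candidates by a blend of topical relevance and source authority."""
    def score(c):
        return (1 - authority_weight) * c["relevance"] + authority_weight * c["authority"]
    return sorted(candidates, key=score, reverse=True)

candidates = [
    {"title": "BREAKING: shocking vaccine claims", "relevance": 0.95, "authority": 0.2},
    {"title": "Health authority briefing on vaccines", "relevance": 0.80, "authority": 0.9},
]

# With a high authority weight, the authoritative briefing (score 0.86)
# outranks the more 'clickable' but low-authority video (score 0.50).
for result in rank_results(candidates):
    print(result["title"])
```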

For certain types of content that tend to be accompanied by misinformation online, we show information panels alongside those videos, providing additional information from authoritative third-party sources to give you more context to make informed decisions.
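
A minimal sketch of how such topic-triggered panels might be wired up: a lookup from misinformation-prone topics to third-party context that gets attached to a video's watch page. The topics, sources and matching rule here are illustrative assumptions.

```python
# Topics known to attract misinformation, mapped to third-party context.
CONTEXT_PANELS = {
    "moon landing": "Encyclopaedia-style background on the Apollo programme",
    "covid-19": "Links to local health authority and WHO guidance",
}

def panel_for(video_topics):
    """Return extra context to show alongside the video, if any topic matches."""
    for topic in video_topics:
        if topic in CONTEXT_PANELS:
            return CONTEXT_PANELS[topic]
    return None  # most videos get no panel

print(panel_for(["space history", "moon landing"]))
```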

How does YouTube evaluate what is an authoritative source?

We use a number of signals to determine authoritativeness, including inputs from Google Search and Google News, such as the relevance and freshness of the content and the expertise of the source. These signals decide which articles you see in our officially labelled news surfaces. Additionally, we use external raters and experts to provide critical input and guidance on the accuracy of videos.
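
To make the idea of combining signals concrete, here is a minimal sketch that blends three of the signals named above: content relevance, freshness and source expertise. The weights, the exponential freshness decay and the sample inputs are illustrative assumptions, not the actual formula.

```python
import math
import time

def freshness(published_ts, half_life_hours=24.0):
    """Decay from 1.0 (just published) toward 0.0 as an article ages."""
    age_hours = (time.time() - published_ts) / 3600
    return math.exp(-math.log(2) * age_hours / half_life_hours)

def authoritativeness(relevance, published_ts, source_expertise):
    """Weighted blend of relevance, freshness and the source's expertise."""
    return 0.4 * relevance + 0.2 * freshness(published_ts) + 0.4 * source_expertise

two_hours_ago = time.time() - 2 * 3600
print(round(authoritativeness(0.9, two_hours_ago, 0.8), 2))  # ~0.87
```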

How is misinformation different from disinformation?

To decide whether content is misinformation or disinformation, it's important to take into account the content's creator and their intentions. If someone shares false information, whatever their intent, it's misinformation. If someone deliberately tries to deceive or mislead people, using the speed, scale and technologies of the Internet, we refer to it as disinformation. Because disinformation is a subset of misinformation, we use 'misinformation' as the primary term for both throughout this site.