
How does YouTube address misinformation?

With billions of people visiting us every day – whether they're looking to be informed, to catch up on the latest news or to learn more about the topics they care about – we have a responsibility to connect people to high-quality content. So the most important thing we can do is increase the good and decrease the bad. That's why we address misinformation on our platform based on our '4 Rs' principles: we remove content that violates our policies, reduce recommendations of borderline content, raise up authoritative sources for news and information and reward trusted creators. Learn more about how we treat misinformation on YouTube.

Fighting misinformation

What type of misinformation does YouTube remove?

As detailed in our Community Guidelines, YouTube does not allow misleading or deceptive content that poses a serious risk of egregious harm. When it comes to misinformation, we need a clear set of facts to base our policies on. For example, for COVID-19 medical misinformation policies, we rely on expert consensus from both international health organisations and local health authorities.

Our policies are developed in partnership with a wide range of external experts as well as YouTube Creators. We enforce our policies consistently using a combination of content reviewers and machine learning to remove content that violates our policies as quickly as possible.

What types of misinformation are not allowed on YouTube?

Several policies in our Community Guidelines are directly applicable to misinformation, for example:

  • Misinformation policies

These misinformation policies apply to certain types of misinformation that can cause egregious real-world harm such as promoting harmful remedies or treatments, certain types of technically manipulated content or content interfering with democratic processes such as census participation.

  • Elections misinformation policies

Our elections misinformation policies do not allow misleading or deceptive content with serious risk of egregious real-world harm like content containing hacked information which may interfere with democratic processes, false claims that could materially discourage voting or content with false claims related to candidate eligibility.

  • COVID-19 medical misinformation policy

The COVID-19 medical misinformation policy doesn't allow content that spreads medical misinformation which contradicts local and global health authorities' medical information about COVID-19. For example, we don't allow content that denies the existence of COVID-19 or promotes unapproved treatment or prevention methods.

  • Vaccine misinformation policy

The vaccine misinformation policy doesn't allow content that poses a serious risk of egregious harm by spreading medical misinformation about currently administered vaccines that are approved and confirmed to be safe and effective by local health authorities and by the World Health Organization (WHO). This is limited to content that contradicts local health authorities' or the WHO's guidance on vaccine safety, efficacy and ingredients.

How does YouTube limit the spread of borderline content and potentially harmful misinformation?

Sometimes, we see content that comes close to – but doesn't quite cross the line of – violating our Community Guidelines. We call this borderline content. Globally, consumption of borderline content or potentially harmful misinformation that comes from our recommendations is significantly below 1% of all consumption of content from recommendations. That said, even a fraction of a percent is too much. So, we do not proactively recommend such content on YouTube, thereby limiting its spread.

We have careful systems in place to help us determine what is borderline content and potentially harmful misinformation across the wide variety of videos on YouTube. As part of this, we ask external evaluators and experts to provide critical input on the quality of a video. And these evaluators use public guidelines to guide their work. Based on the consensus input from the evaluators, we use well-tested machine learning systems to build models. These models help review hundreds of thousands of hours of videos every day in order to find and limit the spread of borderline content and potentially harmful misinformation. And over time, the accuracy of these systems will continue to improve.

How does YouTube raise authoritative content?

For topics such as news, politics, and medical and scientific information, the quality of information is key. That's why we have continued to invest in our efforts to connect viewers with quality information and have introduced a suite of features to elevate quality information from authoritative sources and provide context to help you make informed decisions.

How does YouTube elevate quality information for viewers?

For content where accuracy is key, including news, politics, medical and scientific information, we use machine learning systems that prioritise information from authoritative sources in search results and recommendations.

To help you stay connected with the latest news, we highlight authoritative sources in news shelves that appear on the YouTube homepage during breaking news moments, as well as above YouTube search results to show top news when you are looking for news-related topics.

News content shelves

How does YouTube determine what is an authoritative source?

We use a number of signals to determine authoritativeness. External raters and experts are trained using public guidelines to provide critical input and guidance on the authoritativeness of videos.

Additionally, we use input from Google Search and Google News such as the relevance and freshness of the content, as well as the expertise of the source, to determine the content you see in our officially labelled news surfaces.

How does YouTube provide more context to viewers to help them evaluate information?

We highlight text-based information from authoritative third-party sources using information panels. As you navigate YouTube, you might see a variety of different information panels providing additional context, each of which is designed to help you make your own decisions about the content you find.

For example, in developing news situations, when high quality video may not be immediately available, we display links to text-based news articles from authoritative sources in YouTube search results.

Developing news information panel

We display information panels above certain search results to highlight relevant fact checks from third-party fact-checking experts.

Fact-check information panel

For well-established historical, scientific and health topics that are often subject to misinformation, such as 'Apollo 11' or 'COVID-19 vaccine', you may see information panels alongside related search results and videos linking to independent third-party sources including Encyclopedia Britannica, the World Health Organization and locally relevant health officials.

Topical information panel

Since knowledge around funding sources can provide context when assessing an organisation's background and help you become a more informed viewer, we also show government or public funding for news publishers via information panels alongside their videos.

Publisher funding information panel

Information panels alongside health videos provide health source context and can help you better evaluate if a source is an accredited organisation or government health source.

Information panel that provides health source context

How does YouTube encourage trustworthy creators?

YouTube’s unique business model only works when our community believes that we are living up to our responsibility as a business. Not only does controversial content not perform well on YouTube, it also erodes trust with viewers, advertisers, and trusted creators themselves.

All channels on YouTube must comply with our Community Guidelines. We set an even higher bar for creators to be eligible to make money on our platform via the YouTube Partner Program (YPP). In order to monetise, channels must also comply with the YouTube channel monetisation policies, which include our Advertiser-friendly content guidelines; these do not allow ads on content promoting or advocating for harmful health or medical claims, or on content advocating for groups that promote harmful misinformation. Violation of our YouTube channel monetisation policies may result in monetisation being suspended. Creators can re-apply to join YPP after a certain time period.

Putting users in control

While YouTube addresses misinformation on our platform with policies and products based on the '4 Rs' principles, we also empower the YouTube community by giving users controls to flag misinformation and by investing in media literacy efforts.

How can the broader community help flag misinformation on YouTube?

YouTube removes content that violates our Community Guidelines; however, creators and viewers may still come across content that might need to be deleted or blocked. Anyone who is signed in can use our flagging features to submit content – such as a video, comment or playlist – for review if they think that it is inappropriate and in violation of our Community Guidelines. We also have tools and filters that allow creators to review or remove comments that they find offensive to themselves and their community.

What are YouTube and Google doing to help people build media literacy skills?

While YouTube tackles misinformation on the platform by applying the '4 Rs' principles, we also want to support users in thinking critically about the content that they see on YouTube and the online world so that they can make their own informed decisions.

We do this in three ways: helping users build media literacy skills; enabling the work of organisations that run media literacy initiatives (such as the Alannah and Madeline Foundation’s Media Literacy Lab); and investing in thought leadership to understand the broader context of misinformation.

Enabling organisations

In 2020, the Alannah and Madeline Foundation launched the Media Literacy Lab thanks to a $1.4m grant from Google. The Media Literacy Lab seeks to empower young people to think critically, create responsibly and be effective voices and active citizens online.

With support from the Google News Initiative, First Draft launched its first bureau in APAC based at the Centre for Media Transition at the University of Technology Sydney. First Draft’s mission is to protect communities from harmful misinformation through knowledge, understanding and tools.

Alongside the Museum of Australian Democracy, we funded research undertaken by Western Sydney University and the Queensland University of Technology seeking to identify the knowledge and skills needed to advance the media literacy of young Australians.

Investing in thought leadership

As the nature of misinformation rapidly evolves, it is critical that people understand the broader context of misinformation on the internet. Jigsaw, a unit within Google, has developed research, technology, and thought leadership in collaboration with academics and journalists to explore how misinformation campaigns work and spread in today’s open societies.