How does YouTube protect the community from hate and harassment?

We have developed these policies in consultation with creators who shared their perspectives, as well as expert organisations that study online bullying and the spread of hateful ideas. We have also met with policy organisations from all sides of the political spectrum. There is no place for hate speech and harassment on YouTube, and we work quickly to remove content that violates these policies.

Standing up to hate

What's the difference between hate speech and harassment?

Our hate speech policy protects specific groups and members of those groups, and we remove content that violates it. We consider content to be hate speech when it incites hatred or violence against groups based on protected attributes such as age, gender, race, caste, religion, sexual orientation or veteran status. This policy also covers common forms of online hate, such as dehumanising members of these groups; characterising them as inherently inferior or ill; promoting hateful ideologies like Nazism; promoting conspiracy theories about these groups; or denying that well-documented violent events, like a school shooting, took place.

Video: YouTube executives from the Trust & Safety team explain how we're protecting our community from hate speech.

Our harassment policy protects identifiable individuals, and we remove policy-violating content. We consider content to be harassment when it targets an individual with prolonged or malicious insults based on intrinsic attributes, including their protected group status or physical traits. This policy also covers harmful behaviour such as deliberately insulting or shaming minors, threats, bullying, doxxing or encouraging abusive fan behaviour.

Video: YouTube executives from the Trust & Safety team explain how we're protecting our community from harassment.

How does YouTube manage harmful conspiracy theories?

As part of our hate and harassment policies, we prohibit content that targets an individual or group with conspiracy theories that have been used to justify real-world violence. One example would be content that threatens or harasses someone by suggesting that they are complicit in a harmful conspiracy theory such as QAnon or Pizzagate. As always, context matters, so news coverage of these issues, or content discussing them without targeting individuals or protected groups, may stay up. Because the groups promoting these conspiracy theories evolve and shift tactics, we will continue to adapt our policies to stay current, and we remain committed to taking the steps needed to live up to this responsibility.

How does YouTube enforce its hate speech and harassment policies?

Hate speech and harassment are complex policy areas to enforce at scale, as decisions require a nuanced understanding of local languages and contexts. To help us enforce our policies consistently, we have review teams with linguistic and subject matter expertise. We also deploy machine learning to proactively detect potentially hateful content and send it for human review. We remove tens of thousands of videos and channels each quarter that violate our policies. For channels that repeatedly brush up against our policies, we take more severe action, including removing them from the YouTube Partner Programme (which prevents the channel from monetising), issuing strikes (content removal) or terminating the channel altogether.
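
The paragraph above describes a two-stage flow: automated systems surface candidates, and trained reviewers make the final call. The sketch below is a minimal, purely illustrative Python version of that routing; the names, the keyword-based scorer and the threshold are hypothetical stand-ins, not YouTube's actual system.

```python
from dataclasses import dataclass

@dataclass
class Video:
    video_id: str
    channel_id: str
    transcript: str

def hate_risk_score(video: Video) -> float:
    """Stand-in scorer: a production system would run trained classifiers
    over audio, visuals and text, not a keyword heuristic."""
    flagged_terms = {"placeholder_term_a", "placeholder_term_b"}
    words = video.transcript.lower().split()
    hits = sum(word in flagged_terms for word in words)
    return min(1.0, hits / 3)

REVIEW_THRESHOLD = 0.6  # hypothetical operating point

def triage_for_human_review(videos: list[Video]) -> list[Video]:
    """Machine learning only surfaces candidates; removal decisions stay
    with human reviewers."""
    return [v for v in videos if hate_risk_score(v) >= REVIEW_THRESHOLD]
```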

Do these policies disproportionately affect political voices that YouTube disagrees with?

When developing and refreshing our policies, we make sure that we hear from a range of different voices, including creators, subject-area experts, free speech proponents and policy organisations from all sides of the political spectrum. Once a policy has been developed, we invest significant time in making sure that newly developed policies are consistently enforced by our global team of reviewers, based on objective guidelines, regardless of who is posting the content. We have created a platform for authentic voices that empowers our diverse community of creators to engage in a vigorous exchange of ideas.

Are there any exceptions to enforcing the hate speech policy?

YouTube is a platform for free expression. While we do not allow hate speech, we make exceptions for videos that have a clear educational, documentary, scientific or artistic purpose. This would include, for example, a documentary about a hate group: while the documentary may contain hate speech, we may allow it if the documentary intent is evident in the content, the content does not promote hate speech and viewers are given sufficient context to understand what is being documented and why. This, however, is not a free pass to promote hate speech; if you believe you've seen content that violates our hate speech policies, you can flag it to our teams for review.

How does YouTube address repeated harassment?

We remove videos that violate our harassment policy. We also recognise that harassment sometimes occurs through a pattern of repeated behaviour across multiple videos or comments, even when no individual video crosses our policy line. Channels that repeatedly brush up against our harassment policy will be suspended from the YouTube Partner Programme (YPP), eliminating their ability to make money on YouTube, to ensure that we reward only trusted creators. These channels may also receive strikes (which can lead to content removal) or have their accounts suspended.
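
To make the channel-level view above concrete, here is a hedged sketch that counts borderline events per channel and escalates repeat offenders. The threshold, the data shape and the idea of a pre-computed "borderline" signal are assumptions for illustration, not YouTube's implementation.

```python
from collections import Counter

BORDERLINE_LIMIT = 3  # hypothetical tolerance before channel-level action

def channels_to_escalate(borderline_events: list[str]) -> set[str]:
    """borderline_events holds one channel ID per video or comment that
    approached, but did not cross, the policy line. Channels that keep
    brushing up against the line are escalated for channel-level review."""
    counts = Counter(borderline_events)
    return {channel for channel, n in counts.items() if n >= BORDERLINE_LIMIT}
```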

What tools are available for creators to protect themselves and shape the tone of conversations on their channel?

While the goal of our policies and systems is to minimise the burden placed on creators to protect themselves from hate and harassment, we have also built tools to help them manage their experience, summarised below.

We provide creators with moderation tools for comments so that they can shape the tone of the conversation on their channels. We hold potentially inappropriate comments for review, so that creators can best decide what is appropriate for their audience. We also have other tools that empower creators to block certain words in comments, block certain individuals from commenting or assign moderation privileges to other people so that they can more efficiently monitor comments on their channel.
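
To make the flow of these controls concrete, the sketch below shows how per-channel settings (blocked words, blocked commenters, moderators and a held-for-review queue) might route an incoming comment. The data model and routing rules are illustrative assumptions, not YouTube's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class ChannelModerationSettings:
    blocked_words: set[str] = field(default_factory=set)
    blocked_users: set[str] = field(default_factory=set)
    moderators: set[str] = field(default_factory=set)  # may approve or remove held comments

@dataclass
class Comment:
    author: str
    text: str

def route_comment(comment: Comment, settings: ChannelModerationSettings) -> str:
    """Return 'rejected', 'held_for_review' or 'published'."""
    if comment.author in settings.blocked_users:
        return "rejected"  # blocked individuals cannot comment at all
    if set(comment.text.lower().split()) & settings.blocked_words:
        return "held_for_review"  # the creator or a moderator decides
    return "published"
```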

To encourage respectful conversations on YouTube, we also have a feature that will warn users if their comment might seem offensive to others, giving them the option to reflect and edit before posting.
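
As a minimal sketch of that warn-before-posting nudge, the snippet below flags a potentially offensive draft and only posts it if the author explicitly chooses to proceed; the heuristic scorer and all names are hypothetical stand-ins for a trained model.

```python
def looks_potentially_offensive(draft: str) -> bool:
    """Placeholder heuristic; a real system would use a trained classifier."""
    return any(term in draft.lower() for term in ("placeholder_insult_a", "placeholder_insult_b"))

def submit_comment(draft: str, post_anyway: bool = False) -> str | None:
    """Warn-and-reflect flow: surface a warning and give the author the
    option to edit before the comment is posted."""
    if looks_potentially_offensive(draft) and not post_anyway:
        print("Your comment may seem offensive to others. Edit it, or post anyway.")
        return None  # author can revise and resubmit
    return draft  # posted as written
```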

Finally, we have a list of resources to help creators feel safe on YouTube. We know that there is a lot more work to be done, and we are committed to moving this work forwards.