Hate Speech Regulation & Human Rights

How is hate speech regulated, and how does it relate to freedom of expression and other fundamental rights? This page presents a research project that explores these questions. It examines international and regional human rights frameworks governing hate speech and provides an overview of how hate speech is regulated in South Africa. 

What types of speech are prohibited under international law?

How can hurtful or harmful content be distinguished from content that amounts to unlawful hate speech? As a general principle, restricting speech should be considered an exception and only adopted when necessary to protect a legitimate interest (OHCHR, 2011). Any prohibition must therefore balance the need to prevent harm with the fundamental right to freedom of expression.

Several major international law instruments contain prohibitions against hate speech:

  • Incitement to hatred
    Article 20 of the International Covenant on Civil and Political Rights (ICCPR) prohibits:
    “Any advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence.”
    The ICCPR prohibits incitement; however, each state decides whether this prohibition should take the form of criminalisation (UNHCR, 2013).

  • Incitement to genocide
    Regarded as the most severe form of hate speech, incitement to genocide is prohibited by Article 3(c) of the 1948 Genocide Convention (“direct and public incitement to commit genocide”), and
    Article 25(3)(e) of the Rome Statute establishes individual criminal responsibility for anyone who “directly and publicly incites others to commit genocide”.

While legal prohibitions can be a reactive measure to combat hate speech, they can also be misused to suppress political dissent and silence differing opinions. This risk underscores the importance of clear, narrowly tailored, and human-rights–based criteria for determining when content crosses the line into unlawful hate speech. To help make that determination, the Rabat Plan of Action (2012), developed under the Office of the UN High Commissioner for Human Rights (OHCHR), outlines a six-part threshold test for assessing when an expression may be prohibited as incitement to hatred, weighing the context, the position of the speaker, intent, the content and form of the expression, the extent of its dissemination, and the likelihood of harm, including its imminence.

Hate Speech in Regional Frameworks

Several regional frameworks address information harms such as hate speech, misinformation, and disinformation. In Europe, the Digital Services Act (2022) imposes due diligence obligations on online platforms to foster safer digital environments, including mechanisms for removing illegal content such as hate speech.

In Africa, Principle 23 of the Declaration of Principles on Freedom of Expression and Access to Information in Africa offers guidance to African states on determining when a communication crosses the threshold into hate speech. Moreover, emerging human rights–based regional initiatives aim to regulate information in the digital age.

The African Commission on Human and Peoples’ Rights (ACHPR) has tasked the African Union’s Special Rapporteur on Freedom of Expression with developing guidelines to support member states in regulating major digital platforms:

  • Resolution 620 of 2024: Calls for the creation of standards for accountable data collection including online information.
  • Resolution 630 of 2025: Mandates the creation of state guidelines for information integrity and independent fact-checking online.
  • Resolution 631 of 2025: Calls for the creation of a focused regional policy framework for access to information as public interest content.

The three guidelines are expected to be launched between late 2025 and 2026. Resolution 630 represents a significant advancement in the regional response to hate speech, since information integrity is essential for effective content moderation.

The Regulation of Hate Speech in South Africa

Hate speech regulation in South Africa is shaped by the country’s unique cultural, historical, and social context, which informs the legal discourse around what constitutes hate speech.

Nationally, groups such as the vigilante movement Operation Dudula use xenophobic rhetoric and online mobilisation to incite violence. Hate narratives also circulate online, including claims of an alleged “white genocide,” further fuelling social tensions (Rousset et al., 2022).

Within this context, South Africa maintains a multilayered framework to combat hate speech, encompassing constitutional, civil, and criminal provisions.

Section 16 of the South African Constitution guarantees the right to freedom of expression, while explicitly excluding from protection any “advocacy of hatred that is based on race, ethnicity, gender or religion, and that constitutes incitement to cause harm”.

Moreover, in 2000, the Promotion of Equality and Prevention of Unfair Discrimination Act (PEPUDA) was introduced as a civil framework to prevent and prohibit unfair discrimination and hate speech. Under PEPUDA, for an expression to amount to hate speech it must be “(a) harmful or to incite harm, and (b) to promote or propagate hatred”. Importantly, the Act also expanded the protected grounds beyond those listed in the Constitution to include additional identity factors such as sexual orientation, disability, and belief, among others. It also established Equality Courts, the competent bodies mandated to hear cases involving unfair discrimination and hate speech.

Complementary to PEPUDA, a recent legislative development is the Prevention and Combating of Hate Crimes and Hate Speech Act, signed into law in 2024. The Act differs from PEPUDA by introducing criminal provisions and penalties for hate speech offences.

The criminalisation of hate speech was actively debated during public hearings at the National Council of Provinces and the National Assembly. These discussions highlighted the tension between protecting fundamental human rights, such as the right to non-discrimination, and safeguarding other rights, including freedom of expression and freedom of religion or belief (FoRB). Some civil society organisations representing LGBTQIA+ communities supported the Act, citing its role in strengthening protections against hate crimes and hate speech. Conversely, some religious groups raised concerns that the bill could restrict freedom of expression and FoRB.

The South African approach to hate speech regulation highlights a central dilemma: balancing limitations on freedom of expression with the protection of other fundamental rights. Moreover, it illustrates how hate speech regulations are contextual, shaped by each country’s own history and evolving social dynamics.

Current Challenges

While hate speech is regulated through international, regional, and national frameworks, challenges exist in the effective enforcement of relevant provisions. In the digital age, social media companies operate as third-party actors, sometimes challenging traditional state-centred models of internet governance. Key issues include:

  • Access to information
    Enforcement of hate speech laws can be hindered by platforms’ reluctance to share data. Jurisdictional issues and the absence of local data storage or physical presence of tech companies make it difficult for authorities to substantiate legal claims.
  • Policy misalignment
    Social media platforms may apply content removal standards that differ from national laws, undermining enforcement. For instance, in South Africa, the gratuitous display of the apartheid-era flag has been declared to constitute hate speech since 2019. Yet in 2025, Meta’s Oversight Board ruled that two Facebook posts showing the flag during the 2024 elections did not meet its removal criteria.

These challenges intersect with broader issues of social participation online. One major barrier is the digital divide, described by the United Nations as “the gap between those who have access to and use ICTs—including internet connectivity, internet-enabled devices, and digital literacy skills—and those who do not” (United Nations, 2021).

Without adequate digital access and literacy, individuals may be unable to identify harmful forms of information, report violations, or protect themselves from online risks. This is why Media and Information Literacy (MIL) is essential: it empowers people to understand and critically evaluate information and to help create safer online spaces.

If you would like to learn more about this research case, you can read the policy brief: Hate Speech Regulation in Africa: Overview and Current Issues

Related Resources

Find digital tools produced in cooperation with partners and researchers from different regions.

Cover photo: Andrew Renneisen/Getty Images