
How can Media and Information Literacy (MIL) help us navigate the challenges of hate speech in an increasingly digital world? By fostering critical thinking and responsible engagement, MIL provides tools to question harmful narratives and promote ethical online behavior.
This page explores key concepts for empowering individuals and researchers to address hate speech online, including practical examples of how technologies like Artificial Intelligence can combat online hate speech.
What is Media and Information Literacy?
Media and Information Literacy (MIL) is a concept introduced by UNESCO to describe the skills and competencies essential for navigating the digital landscape. It empowers individuals to critically assess information, use digital tools effectively, and engage ethically with diverse content.
This concept covers areas such as assessing online information, using digital resources effectively, and engaging with artificial intelligence. A central element of MIL is its focus on how media shape societal narratives, which is essential in combating hate speech and fostering inclusive dialogue.
Learn more about Media and Information Literacy with UNESCO’s report “Media and Information Literate Citizens: Think Critically, Click Wisely!” (2021). This resource offers practical insights into fostering critical thinking and ethical engagement in the digital age.
Defining Online Hate Speech
Online hate speech is a growing challenge, with no single definition universally agreed upon. Definitions of what qualifies as hate speech often vary by jurisdiction, reflecting differences in cultural and legal contexts. However, the United Nations provides a widely cited understanding in its 2019 Strategy and Plan of Action on Hate Speech, defining hate speech as:
“Any kind of communication in speech, writing or behaviour, that attacks or uses pejorative or discriminatory language with reference to a person or a group on the basis of who they are, in other words, based on their religion, ethnicity, nationality, race, colour, descent, gender or other identity factor.”
This definition highlights the diverse forms hate speech can take and underscores the importance of addressing it within specific cultural and contextual frameworks.
Hate speech on digital platforms often spreads rapidly due to the anonymity and speed of online communication. It can perpetuate hostility toward minority groups, using harmful narratives, symbols, and memes to sow division and incite violence.
The Role of AI in Combating Online Hate Speech
Artificial Intelligence (AI) is emerging as a key tool in the fight against online hate speech. Tools powered by AI and machine learning can help identify harmful content at scale, providing a means to counter hate speech before it spreads.
A notable example is the research of Dr. Ethan Roberts at the University of Cape Town (LINK). His work focuses on refining algorithms to improve the detection of harmful online content, specifically using Large Language Models (LLMs) to classify hateful comments on social media platforms such as X (formerly Twitter). Dr. Roberts’ project also highlights the significance of local contexts, as it analyzes data specific to South Africa.
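To make the idea of LLM-based classification concrete, here is a minimal sketch of how a hateful-comment classifier might frame the task as a prompt-and-parse step. The prompt template, label set, and function names are illustrative assumptions, not Dr. Roberts’ actual pipeline, and the call to the model itself is omitted.

```python
# Illustrative sketch of LLM-based hate-speech classification.
# The labels, prompt wording, and helpers are assumptions for
# illustration only; a real system would send the prompt to an
# LLM and apply human review before acting on the result.

LABELS = ("HATEFUL", "NOT_HATEFUL", "UNSURE")

def build_classification_prompt(comment: str) -> str:
    """Wrap a social-media comment in a zero-shot classification prompt."""
    return (
        "You are a content-moderation assistant. Classify the comment "
        f"below as exactly one of: {', '.join(LABELS)}. "
        "Consider local context and coded language.\n\n"
        f"Comment: {comment}\n"
        "Label:"
    )

def parse_label(model_output: str) -> str:
    """Map a raw model response back to one of the expected labels."""
    cleaned = model_output.strip().upper()
    for label in LABELS:
        if cleaned.startswith(label):
            return label
    return "UNSURE"  # fall back when the model answers off-format

# Example: build a prompt and parse a (mocked) model response.
prompt = build_classification_prompt("example comment text")
label = parse_label("NOT_HATEFUL - the comment is benign")
```

Framing detection as explicit labels plus a tolerant parser keeps the moderation decision auditable: the prompt, the raw model output, and the final label can all be logged and reviewed, which matters when local context (as in the South African data Dr. Roberts studies) affects what counts as hateful.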
How is generative AI used to create hate speech, and how can it be harnessed to detect harmful online content? Interview with Dr. Ethan Roberts from the University of Cape Town.
For further exploration, check out the article “AI and the Holocaust: Rewriting History? The Impact of Artificial Intelligence on Understanding the Holocaust” (UNESCO, 2024). It provides valuable insights into how AI is reshaping our understanding of historical narratives and its implications for combating hate speech.
Related Resources
Below is a selection of recent resources related to this topic.
Cover photo: Andrew Renneisen/Getty Images