Algospeak
Obfuscated speech on social media
In social media, algospeak is a self-censorship phenomenon in which users adopt coded expressions to evade automated content moderation.[1][2] It allows users to discuss topics deemed sensitive to moderation algorithms while avoiding penalties such as shadow banning, downranking, or de-monetization of content.[3] A type of netspeak, algospeak primarily serves to bypass censorship, though it can also reinforce group belonging, especially in marginalized communities.[3] Algospeak has been identified as one source of linguistic change in the modern era, with some terms spreading into everyday offline speech and writing.
History
The term algospeak, a blend of algorithm and -speak,[4] appears to date back to 2021, though related ideas have existed for much longer; for example, voldemorting,[5] referencing the fictional character also known as "He-Who-Must-Not-Be-Named", refers to the use of coded expressions to avoid giving attention to objectionable figures or platforms and to avoid receiving algorithmic attention from unwanted audiences.[6][7][8] The term algospeak gained wider recognition in 2022 after Taylor Lorenz featured it in an article for The Washington Post.[9] In 2025, Adam Aleksic published Algospeak, the first monograph dedicated to the phenomenon; it proposes an expanded definition that encompasses any language change primarily driven by the constraints of digital platforms.[10][11][12]
Causes and motivations
Many social media platforms rely on automated content moderation systems to enforce their guidelines; users have no control over these systems, and their rules may change at any time.[1] TikTok in particular uses artificial intelligence to proactively moderate content, in addition to responding to user reports and employing human moderators. In colloquial usage, such systems are called "algorithms" or "bots". TikTok has faced criticism for uneven enforcement on topics such as LGBTQ people and obesity, leading to a perception that social media moderation is contradictory and inconsistent.[3]
Between July and September 2024, TikTok reported removing 150 million videos, 120 million of which were flagged by automated systems.[13] Automated moderation may miss important context; for example, benign communities that aid people struggling with self-harm, suicidal thoughts, or past sexual violence may receive unwarranted penalties.[14][15][3] TikTok users have used algospeak to discuss self-harm and to provide support to those who self-harm.[16] An interview with nineteen TikTok creators revealed that they felt TikTok's moderation lacked contextual understanding, appeared random, was often inaccurate, and exhibited bias against marginalized communities.[3]
Algospeak is also used in communities promoting harmful behaviors. Anti-vaccination Facebook groups began renaming themselves to "dance party" or "dinner party" to avoid being flagged for misinformation. Likewise, communities that encourage the eating disorder anorexia nervosa have been employing algospeak.[17] Euphemisms like "cheese pizza" and "touch the ceiling" are used to promote child sexual abuse material (CSAM).[18]
On TikTok, moderation decisions can result in consequences such as account bans, deletion of videos, or delisting of videos from the main video discovery feed, the "For You" page. In response, a TikTok spokeswoman told The New York Times that users' fears are misplaced, saying that many popular videos discuss sex-adjacent topics.[19]
Methods
Algospeak uses techniques akin to those of Aesopian language to conceal the intended meaning from automated content filters while remaining understandable to human readers.[2] Similar forms of obfuscated speech include Cockney rhyming slang and Polari, historically used by London gangs and British gay men respectively.[14] Unlike earlier forms of obfuscated speech, however, the global reach of social media has allowed the language to spread beyond local settings.[2]
Techniques used in algospeak are extremely diverse. In orthography, users may draw from leetspeak, where letters are replaced with lookalike characters (e.g. $3X for sex).[2] Certain words or names may be censored, or, in the case of auditory media, cut off or bleeped,[a] e.g. s*icide instead of suicide. Another technique, "pseudo-substitution", censors an item in one form while it remains present in another form at the same time, as used in videos.[20] Some techniques involve intersemiotic translation, where non-linguistic signs are interpreted linguistically, in addition to further obfuscation; for example, the corn emoji "🌽" signifies pornography by means of porn → corn → 🌽.[2] Others rely on phonological similarity or variation, such as homophobic → hydrophobic, and sexy → seggsy via intervocalic voicing.[1] On Chinese social media, users sometimes replace sensitive terms with characters that differ only in tone. For example, 细颈瓶 (xì jǐng píng, literally "narrow-necked bottle") is used as a stand-in for the name of the General Secretary of the Chinese Communist Party, Xí Jìnpíng.[21]
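The character-substitution technique described above can be illustrated with a short sketch. The following Python snippet is a minimal, illustrative example that assumes a small hypothetical substitution table; it is not drawn from any platform's filter or from the cited studies.

```python
# Minimal sketch of leetspeak-style substitution as used in algospeak.
# The substitution table below is a hypothetical example for illustration only.

SUBSTITUTIONS = {"a": "4", "e": "3", "i": "1", "o": "0", "s": "$"}

def obfuscate(word: str) -> str:
    """Replace selected letters with lookalike characters (e.g. 'sex' -> '$3x')."""
    return "".join(SUBSTITUTIONS.get(ch.lower(), ch) for ch in word)

def normalize(text: str) -> str:
    """Reverse the same substitutions, as a naive keyword filter might do."""
    reverse = {v: k for k, v in SUBSTITUTIONS.items()}
    return "".join(reverse.get(ch, ch) for ch in text.lower())

print(obfuscate("sex"))      # -> $3x
print(normalize("$3x"))      # -> sex
print(normalize("s*icide"))  # -> s*icide (asterisk censoring defeats this simple reversal)
```

As the last line suggests, each obfuscation style generally needs its own countermeasure, which is one reason automated filters struggle to keep pace with new coinages.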
In an interview study, most of the creators interviewed suspected that TikTok's automated moderation also scans audio, leading them to use algospeak terms in speech as well. Some also label sensitive images with innocuous captions using algospeak, such as captioning a scantily-dressed body as "fake body".[3] The use of gestures and emojis is common in algospeak, showing that it is not limited to written communication.[22] A notable example is the use of the watermelon emoji on social media as a pro-Palestinian symbol in place of the Palestinian flag, in order to avoid censorship by Facebook and Instagram.[23] Black creators may simply present their light-colored palms to the camera to stand in for white people, and flip them over to stand in for black people.[1]
Impact and detection
A 2022 poll showed that nearly a third of American social media users reported using "emojis or alternative phrases" to subvert content moderation.[18]
Algospeak can lead to misunderstandings. A high-profile incident occurred when American actress Julia Fox made a seemingly unsympathetic comment on a TikTok post mentioning "mascara", not knowing its obfuscated meaning of sexual assault. Fox later apologized for her comment.[14][24] In an interview study, creators shared that the evolving nature of content moderation pressures them to constantly innovate their use of algospeak, which makes them feel less authentic.[22]
A 2024 study showed that GPT-4, a large language model, can often identify and decipher algospeak, especially when given example sentences.[25] Another study showed that sentiment analysis models often rate negative comments more positively when they incorporate simple letter–number substitution and extraneous hyphenation.[26]
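The failure mode described in the second study can be sketched as follows. This is a toy illustration under assumed rules, with a made-up negative-word lexicon and made-up normalization steps; it does not reproduce the sentiment models or preprocessing used in the cited work.

```python
# Toy illustration: a naive lexicon-based sentiment scorer misses negative words
# once letters are swapped for digits and hyphens are inserted.
# The lexicon and normalization rules are illustrative assumptions only.

import re

NEGATIVE_WORDS = {"hate", "awful", "terrible"}  # tiny hypothetical lexicon

def naive_sentiment(text: str) -> int:
    """Count negative lexicon hits; a lower score means a more negative comment."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return -sum(token in NEGATIVE_WORDS for token in tokens)

def normalize(text: str) -> str:
    """Undo simple letter-number substitution and strip extraneous hyphens."""
    table = str.maketrans({"4": "a", "3": "e", "1": "i", "0": "o"})
    return text.translate(table).replace("-", "")

comment = "I h-4-t-e this, it is t3rr1ble"
print(naive_sentiment(comment))             # 0  -> obfuscation hides the negativity
print(naive_sentiment(normalize(comment)))  # -2 -> normalization restores detection
```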
Examples
According to The New York Times:[19]
- (to) unalive, unalived – to kill; killed, dead.[27]
- accountant – sex worker
- cornucopia – homophobia
- le dollar bean – lesbian, as derived from the written form Le$bian
- leg booty – the LGBTQ+ community
- nip nops – nipples
- panini, panoramic – a pandemic, especially the COVID-19 pandemic[28]
- seggs – sex
Other examples:[29][3][30][31]
- acoustic – autistic
- blink in lio – link in bio
- camping – abortion
- cheese pizza – child pornography
- fork – fuck
- grape – rape
- music festival – protest[32]
- opposite of love – hatred
- ouid – weed
- Panda Express – pandemic
- PDF file, PDF – pedophile
- pew pew – firearm
- restarted – retarded
- sewer slide – suicide
- shmex – sex
- tism – autism spectrum conditions
- yt – White people, though yt is also a common abbreviation for YouTube
See also
- AAVE – Variety of American English
- Bronyspeak – Vernacular of My Little Pony fans
- Cant (language) – Linguistic term for jargon of a group
- Chilling effect – Discouragement of exercising rights by threats of legal sanctions
- Dog whistle (politics) – Political messaging using coded language
- DoggoLingo – Internet "language" and slang
- Enshittification – Systematic decline in online platform quality
- Internet censorship – Legal control of the internet
- Koalang – Fictional language
- Leet – Online slang and alternative orthography
- Newspeak – Fictional language in the novel "Nineteen Eighty-Four"
- Un-word of the year – German ironic award
Notes
- Bleep censoring is traditionally applied during the editing of TV productions, mostly to censor obscene language, rather than by YouTube and TikTok video authors themselves.
References