Algospeak changes conversations’ meaning

If you frequent any social media site, you’ve probably seen plenty of words that are referred to as “algospeak.”

Algospeak, according to an article from the Washington Post, is code words or turns of phrase social media users have adopted to create a brand-safe lexicon that will avoid getting their posts removed or down-ranked by content moderation systems.

To simplify, it’s words used in place of others to avoid breaking social media guidelines or having videos removed.

For example, users might say “unalive” instead of “dead” or “kill.” Another example would be saying “opposite of love” instead of “hate.”

The list of algospeak words is quite long.

People will use emojis to communicate, as well, like using the chain emoji to refer to a link, because saying "link in bio" could cause the content to be shown to fewer viewers.

Or people find ways to use numbers and symbols to adjust their wording. Examples of that would be “bl00d” for the content-sensitive word “blood” and “auti$m” for the developmental disability “autism.”

But the ways of adjustment don’t end there, with initialism and abbreviation also playing into it, like referring to an eating disorder as “ED” or same-sex attraction as “SSA.”

Platforms such as TikTok, YouTube, Instagram, and Twitch are seeing algospeak become increasingly common as people use it to bypass content moderation filters.

The start of the coronavirus pandemic ("coronavirus" itself is a sensitive word, often replaced with "panda express" or "panorama") pushed more people to communicate and express themselves online, giving rise to a new form of internet-driven language that eventually became algospeak.

YouTube was one of the first platforms to face these issues, in 2017, during a period of uproar often referred to as the "Adpocalypse."

The change was sparked by advertisers seeing their companies' ads paired with hate speech and violent content. Rather than relying on its former algorithm, YouTube adopted a new policy of automated demonetization that was meant to reassure advertisers but sometimes did more harm than good.

In 2017, some LGBTQ+ creators spoke about having videos taken down for saying the word “gay.”

As a result, they tried to avoid the word altogether or came up with their own algospeak so their videos wouldn't be removed or down-ranked.

It's a legitimate word they use, in a literal sense, to describe themselves, causing no harm to others.

Still, seven years later, we’re seeing similar issues, showcasing multiple problems with the use of algospeak.

First of all, my biggest pet peeve with algospeak is how it dissolves the seriousness of the words being replaced and diminishes topics with heavy meanings.

It truly irks me and makes me worry about those affected by these subjects and what the demeaning phrases mean to them.

For example, victims of sexual assault see the term reduced to "SA," and those with depression often see it spelled "depressi0n."

And, as I mentioned before, the word "autism" is often treated as content-sensitive, yet it's the clinical name of a developmental disorder. How can social media sites see it any differently?

Adding to the lack of sincerity, algospeak is less professional.

Even when users try to make genuine content about serious topics, they still find it necessary to adjust their words so their informational and supportive content is still seen. So, on TikTok, for example, a video may share something true and educational, but its captions contain odd spellings that take away from the overall meaning of the video and often undercut the person or company who created it.

In a time when abortion is a widely discussed topic, it can be seen on social media as "@b0rt!0n." How are we supposed to address the gravity of that subject when the spelling is honestly and sadly ridiculous?

I think there’s also something to be said about our freedom of speech. This could just be another way for companies to try taking that away, telling us what we can and can’t talk about.

There’s a tough line to follow and not cross.

There can be some good associated with algospeak.

For one, it helps keep certain topics away from younger users. However, when people find ways around the filters, that content can still be seen.

It also can be useful for graphic or sensitive content that was never meant to be uploaded or shared.

Aside from appeasing advertisers, I can see how censoring words could be helpful.

Yet, I am not a fan of algospeak. It doesn’t even work most of the time. People find ways to share their content, one way or another.

If we stop using algospeak or instead find a helpful way to censor content, we could have proper conversations and acknowledge the true connotations that come with the words we say.

The more that we play into algospeak, the more we conform to those sites, and the more we let these words change in meaning and lose effect.

Torianna Marasco can be reached at 989-358-5686 or tmarasco@TheAlpenaNews.com.
