
Are We Caught in ‘Misinformation Limbo’? Here’s What Social Media Sites Are (And Aren’t) Doing to Combat Misinformation

Let’s face it, we’ve all been—or are currently—obsessed with social media. Scrolling, swiping, liking, retweeting, and pinning have become everyday verbs, a measure of just how pervasive social media has become. It’s not just an app or a site—it’s a lifestyle; a way to keep in touch with your family, friends, and co-workers, and perhaps even a place to run your business or personal brand.

When social media is good—it’s really good. It gives you little drops of serotonin with every new like or follow you receive.

However, as with anything good, there is always a dark side—and that dark side of the moon is mis/disinformation.

Social media platforms are taking initiatives to tackle misinformation, but are they working?

Are social media platforms left in a sort of weightless limbo still trying to figure out successful ways to combat misinformation?

Summary of TikTok’s Efforts To Combat Misinformation

While TikTok has community guidelines that prohibit various kinds of misinformation, the platform also takes additional precautions, knowing that some misinformation will inevitably slip through.

https://www.tiktok.com/@tiktoktips/video/7024340171952803078?is_copy_url=1&is_from_webapp=v1&lang=en

TikTok uses human and AI moderators to “identify and remove false or misleading content as well as accounts that spread misinformation.”

Within the platform’s policy, misinformation is prohibited and going against this policy will result in the content being removed or reduced in terms of “discoverability”. If the violation is “severe enough” TikTok will consider suspending or permanently deleting the account.

TikTok partners with fact-checkers such as “Politifact, Lead Stories, Science Feedback, and the AFP” to verify the accuracy of content across 55 markets and 16 languages. If the fact-checking is “inconclusive”, TikTok claims it reduces the content’s visibility to “reduce the potential for misinformation to spread.”

The platform enlists community members to counter misinformation by allowing them to flag content through the app, and it also uses “industry-leading threat detection platforms to identify networks and suspicious activity.” While it’s not clear which detection platforms the social media site is using, most threat detection platforms have dark web monitoring capabilities and can detect malicious activity from within.

TikTok is additionally working alongside the U.S. Department of Homeland Security to combat “foreign influences”—such as bad actors fueling disinformation campaigns.

To combat misinformation from a different angle, TikTok has partnered with NAMLE to produce a video series featuring TikTok creators that teaches users media literacy skills, such as questioning sources and graphics and distinguishing fact from opinion.

Examples of How Policies & Procedures Are Used

During the 2020 election, TikTok removed “over 340,000 videos” containing “election misinformation, manipulation, or disinformation.” An additional 51,505 videos containing COVID-19 misinformation were removed. These figures come from TikTok’s transparency report.

Within its transparency report for 2020, TikTok states 70.5% of videos containing misinformation “were removed before they were reported, and 91.3% were removed within 24 hours of being posted.”

Out of the total number of videos removed in 2020, just 2.4% of those were related to misinformation. It’s unclear whether this is because mis/disinformation isn’t as prevalent on TikTok as news reports have suggested or if this is because mis/disinformation on the site has gone undetected. TikTok does admit in their transparency report that they “are investing in our infrastructure to improve our proactive detection, especially when it comes to identifying misinformation.”

Merit of Policies & Procedures

While TikTok does flag videos that haven’t been fact-checked as “unverified content”, I haven’t personally witnessed this on any videos containing conspiracies or misleading information.

For example, a slew of TikToks claim the tragedy at Astroworld was a satanic sacrifice—a claim that has been debunked by Politifact. There are also still TikToks claiming the 2020 election was “stolen” despite that having been debunked as well.

From my own research, I found all of these TikToks to not only be visible on the platform but to be without any “unverified content” label—seemingly unscathed.

I have, however, seen many TikToks about COVID (regardless if the information is true or not) with a label attached that urges users to learn more about the COVID-19 vaccine.

Of the videos taken down in 2020, 2,927,391 were reinstated after they were appealed. Does this affect the merit of TikTok’s policy of removing videos that violate community guidelines? Consider Facebook’s finding that users, on average, rated seeing violating content like hate speech as a more negative experience than having their own content taken down by mistake; if the same holds on TikTok, the risk of accidentally removing videos may be less of a concern than the misinformation itself.

The policies appear to work to a certain degree—the transparency report shows that. However, from a user’s point of view, there are still a myriad of TikTok videos containing misinformation without so much as a warning label attached.

As The Conversation states, however, there are many nuances with detecting misinformation—such as “opinion, call to action and speculation.” It may be difficult for TikTok’s AI to detect subjective statements which would explain why the videos go unscathed.

The Conversation also raises the difficulty of fact-checking audiovisual content on TikTok with AI technology, given varying contexts such as “language, nonverbal cues, terms, images.” There is also the issue of racial and gender bias within AI to consider.

Suggestions to Improve TikTok’s Efforts

Firstly, I find it commendable that TikTok is working with NAMLE to educate TikTok users on media literacy skills—this shows that TikTok cares about eliminating the problem of misinformation and equipping its users with tools to do just that on their own without necessarily censoring.

However, TikTok doesn’t post these collaborative videos very often—the most recent was posted a month ago. TikTok could be putting more time and energy into making these media literacy videos more interesting, engaging, and shareable.

Based on my own observations, warning labels are used only minimally. Even the TikToks claiming the ‘2020 Election was stolen’ are firmly holding their place on the site without any label whatsoever—well over a year after that claim was proven false.

If topics such as this are too subjective to qualify for a fact-check, then I’m curious which topics do qualify.

Because social media is community based, TikTok should encourage its users to report misleading information for it to be flagged.

This can help in two ways— one, as community members become more encouraged to report misleading information, they then become more aware of what misleading information looks like. And two, in turn, more videos featuring misinformation will be tagged as “unverified content” to help combat misinformation being shared and believed.

As Claire Wardle is quoted saying, “it’s the sharing that is so damaging.”

Summary of Twitter’s Efforts To Combat Misinformation

Twitter allows users to report misleading information, and from there, the platform uses “automated technology” to review those reports. The platform also has a team that reviews misleading COVID-19 information manually, because tweets often depend on context.

There is a strike policy in place for those who continually post misinformation which results in an account lockout and may escalate to a permanent suspension if the individual reaches four strikes.

Twitter counteracts misinformation that isn’t deemed “dangerous enough” to be removed from the platform by imposing a “misleading content” label.

Overall, Twitter’s community guidelines on misinformation call for labeling misleading tweets with a warning, decreasing the tweet’s visibility, or removing the tweet altogether. As a last resort, Twitter will permanently suspend the account in question after repeated offenses.

To counteract state-backed disinformation campaigns, Twitter uses “a range of open-source and proprietary signals and tools to identify when attempted coordinated manipulation may be taking place, as well as the actors responsible for it.”

I found Twitter’s description of its efforts to combat mis/disinformation to be rather vague in comparison to TikTok’s thorough bullet points. However, Twitter does state that the teams working on eliminating disinformation campaigns are made up of “data scientists, linguists, policy analysts, political scientists, and technical experts.” It’s the ‘how’ that is so vague.

Examples of How Policies & Procedures Are Used

In 2020, Twitter removed 14,900 tweets on COVID-19 that violated their misinformation policy. The platform has also banned and/or suspended numerous public figures for repeated offenses against their misinformation policy.

Last year, Twitter took down a “misinformation campaign” regarding Kenya’s President—this resulted in 230 accounts being suspended due to ‘platform manipulation’. However, these accounts were only suspended after researchers had brought it to Twitter’s policy team, according to The New York Times.

The platform has been open and transparent about its successes in removing government-backed disinformation campaigns.

The Guardian reported last year that Twitter removed “thousands of China state-linked accounts spreading propaganda.”

Within Twitter’s own blog, the platform states it removed “state-linked information operations” totaling 3,465 accounts in the latter half of 2021. Twitter’s ongoing transparency reports detail how many state-backed disinformation campaigns are taken down each year.

Merit of Policies & Procedures

Twitter states the platform “requires deletion of tweets that contain…false claims about COVID-19 that invoke a deliberate conspiracy by malicious and/or powerful forces, such as: The pandemic is a hoax, or part of a deliberate attempt at population control, or that 5G wireless technology is causing COVID-19.”

However, I’ve found Twitter often does not remove pandemic hoax related tweets. A quick Twitter search of ‘covid hoax’ is evidence enough of this.

Not only are the tweets very visible and not at all deleted, but there are no warning labels attached to them.

To be fair, this is an account of just 100 followers. Twitter tends to crack down harder on accounts with a generous following. Marjorie Taylor Greene, for instance, was given the boot for repeatedly spreading misinformation on Twitter.

As uncovered by The Wall Street Journal, “Facebook measures the prevalence of certain types of content…by the number of views that content attracts. The company says this is a more accurate way of measuring the true impact of a piece of content that violates its policies. In other words, hate speech viewed a million times is more of a problem than hate speech viewed just once.”

Perhaps Twitter takes a similar stance when it comes to removing or flagging misinformation.

Famously, Twitter permanently suspended Donald Trump for incitement of violence. Interestingly, The Washington Post reported that misinformation on Twitter decreased “dramatically the week after Twitter banned Trump.”

This suggests Twitter’s misinformation procedures work to a certain degree. But from my experience on the site, the platform’s procedures for flagging content (regardless of following) seem lacking.

Suggestions to Improve Twitter’s Efforts

One aspect of Twitter’s efforts to combat misinformation—that I think they do really well— is implementing pop-ups to warn users before sharing an article. These pop-ups also warn users against sharing news links from websites that are known for spreading false information or conspiracies.

Since sharing is what is so damaging, warning users about sharing an article before reading is a way to nip misinformation at the very start of the cycle without worrying so much about censorship. It’s a friendly reminder for users to be conscious of what they are sharing.

In terms of better addressing the issue of misinformation, Twitter could benefit from a similar collaboration to that of TikTok and NAMLE.

Considering that 59% of Twitter users regularly get news on the site, there is little doubt that a media literacy campaign on Twitter’s platform would help address the “sharing” aspect of misinformation.

While education isn’t free, social media is—which creates an opportunity to learn media literacy skills for those who otherwise couldn’t or haven’t been able to.

*All images, headers, quotes and fonts made with Canva and are free to use
