There seems to be a lack of effective guardrails to prevent the spread of negativity and misinformation on social media. But who is – or who should be – keeping tabs on the quality and reliability of social media info? We’ll explore this complex issue.
The Challenges of Debunking Misinformation at Scale
Preventing the dissemination of dangerous misinformation seems a laudable goal, and one that social media platforms are working towards. On September 29, 2021, YouTube announced it would ban all content that spreads vaccine misinformation: “Our Community Guidelines already prohibit certain types of medical misinformation. We’ve long removed content that promotes harmful remedies, such as saying drinking turpentine can cure diseases.”
However, enforcing these policies at scale is problematic. For example, YouTube videos devoted to debunking prevalent misinformation have ended up being removed. It’s difficult to debunk misinformation without restating the claims being debunked, and algorithms haven’t proven effective at differentiating between criticism and advocacy.
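The failure mode is easy to reproduce. Below is a minimal sketch of a naive keyword filter (the phrase list and function are hypothetical, not any platform’s actual system). Because it matches text rather than intent, it flags a debunking post just as readily as the harmful claim itself:

```python
# Hypothetical phrase list; real systems use far larger, evolving lists.
BANNED_CLAIMS = ["drinking turpentine can cure diseases"]

def naive_flag(post: str) -> bool:
    """Flag any post containing a banned claim, regardless of intent."""
    text = post.lower()
    return any(claim in text for claim in BANNED_CLAIMS)

advocacy = "Drinking turpentine can cure diseases. Try it!"
debunking = "Fact check: the claim that drinking turpentine can cure diseases is FALSE."

print(naive_flag(advocacy))   # True: the harmful post is flagged, as intended
print(naive_flag(debunking))  # True: the debunking is flagged too
```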
Social media platforms have also struggled to keep pace as official guidance changes quickly. Twitter warnings have been applied to factual tweets from prominent public health experts, even tweets aligned with WHO guidance. For example, Martin Kulldorff, PhD, professor of medicine at Harvard Medical School, tweeted that those with prior natural infection, as well as children, do not need COVID vaccines. The tweet was labeled “misleading” and can no longer be replied to, shared, or liked.
On June 21, he followed up by stating, “For not following WHO guidelines, Twitter put a misleading warning on a March 15 tweet when I wrote that children do not need the COVID vaccine. Since WHO now reached the same conclusion, maybe Twitter can remove the warning.” (Spoiler alert: they haven’t removed it.)
The Difficulty of Tackling Hate
Although misinformation has been a major focus lately, other social media challenges include hate speech and increasing polarization, both of which threaten democracy. As SWI swissinfo.ch reports, different countries are trying to deal with these problems by adopting new laws and regulations. Germany has taken a pioneering role with its Network Enforcement Act (NetzDG). Multiple countries have enacted legislation inspired by the NetzDG, but the underlying concept can easily be misused by less democratic governments.
“In Switzerland, there are as of yet no regulations specifically aimed at social media. Web activist Jolanda Spiess-Hegglin is spearheading efforts [to] change this, and to fight hatred on the internet, mainly with the organization Netzcourage.”
The Conscious Influence Hub Code of Conduct was developed for influencers and people who work on social media. The code supports them in acting with respect, empathy and transparency, and is an important tool to encourage the community to consciously use its influence. Such guidance can be particularly beneficial where rules fall short or are in short supply.
Rather than Moderating the Content, Regulate the Algorithms?
As written recently in The Washington Post, Facebook whistleblower Frances Haugen identified highly personalized, attention-seeking algorithms as the crux of the threat that social media poses to society. “And as lawmakers and advocates cast about for solutions, there’s growing interest in an approach that’s relatively new on the policy scene: regulating algorithms themselves, or at least making companies more responsible for their effects… Forcing tech companies to be more careful about what they amplify might sound straightforward. But it poses a challenge to tech companies because the ranking algorithms themselves, while sophisticated, generally aren’t smart enough yet to fully grasp the message of every post.”
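To make the concern concrete, here is a minimal sketch of engagement-based ranking (the weights are invented for illustration and do not reflect any platform’s actual algorithm). Notice that the score never reads the text of a post, only how much reaction it provokes, which is precisely the amplification behavior regulators are now targeting:

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    comments: int
    shares: int

def engagement_score(post: Post) -> float:
    # Invented weights: reactions that spread content (comments, shares)
    # count for more than passive likes.
    return post.likes + 3 * post.comments + 5 * post.shares

def rank_feed(posts: list[Post]) -> list[Post]:
    # The ranking never inspects post.text, so a false but inflammatory
    # post can outrank an accurate but unremarkable one.
    return sorted(posts, key=engagement_score, reverse=True)
```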
Who Should Moderate?
Who is paying attention and moderating to ensure social media isn’t used for violence and hate? Currently, it seems to be mostly the platforms themselves, but there are a few options:
1. Government?
Many are justifiably concerned about elected officials acting to stifle criticism of themselves and their party, or to suppress speech according to the whims of their base. The focus should be on removing false information, not unpopular facts.
At least in the United States, where the majority of social media platforms are headquartered, freedom of speech is enshrined in the Constitution and vigorously protected by law, which limits the government’s ability to censor. Of course, these businesses could relocate to other nations, exploiting the lack of global regulatory alignment and various loopholes, and changing what users see in different locations.
2. The Social Media Platforms?
In 2019, Facebook created the Oversight Board to help “answer some of the most difficult questions around freedom of expression online: what to take down, what to leave up, and why.” However, the platforms themselves haven’t proven to be up to this task alone. For one thing, their very business model conflicts with the objective of minimizing hate: outrage drives clicks, and clicks generate revenue.
Additionally, there are numerous difficulties in executing this colossal task, and nuance is too often ignored. In one lawsuit recently filed against Facebook, a journalist claims he was defamed by fact-checkers who misrepresented his content and labeled it “misleading.”
The platforms need to take far more responsibility for moderating, and for doing so accurately.
3. The Public?
Like a jury of one’s peers, forums and panels could democratize the moderation of social media. A similar concept is already in use on Reddit, where posts and comments are upvoted or downvoted, affecting what users see in their feeds.
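As a rough illustration of how vote-based ranking works, here is a simplified version of the “hot” formula from Reddit’s formerly open-source codebase: the vote margin is log-scaled, so the first ten votes count as much as the next hundred, and a recency term keeps newer posts ahead of older ones with similar margins.

```python
import math
from datetime import datetime, timezone

# Reddit's published epoch for the "hot" formula (December 8, 2005, UTC).
EPOCH = datetime.fromtimestamp(1134028003, tz=timezone.utc)

def hot_score(upvotes: int, downvotes: int, posted: datetime) -> float:
    """Simplified "hot" score: log-scaled vote margin plus a recency
    bonus worth one order of magnitude per 45,000 seconds (12.5 hours)."""
    margin = upvotes - downvotes
    order = math.log10(max(abs(margin), 1))
    sign = 1 if margin > 0 else -1 if margin < 0 else 0
    age = (posted - EPOCH).total_seconds()
    return round(sign * order + age / 45000, 7)
```

Because the score is driven entirely by the vote margin, a coordinated wave of downvotes can bury a post regardless of its accuracy.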
Unfortunately, people are inclined to vote against claims they dislike, regardless of whether those claims are true. There are also challenges in assembling a diverse, unbiased group of moderators without creating echo chambers. Additionally, the most passionate advocates for an initiative are often the most vocal; when those who are neutral remain silent, the result is a distorted perception of actual public sentiment.
Team Efforts are Needed, Including Influencers
Building structures to safeguard against misinformation and hate speech requires multiple entities taking more responsibility. On its own, each of these stakeholders is flawed, but combining them can create checks and balances with a positive impact. Each should play its own role and compensate for the shortcomings of the others.
A myriad of voices, including NGOs, specialists, influencers, and other thought leaders, should set the tone for proper social media conduct. Additionally, governments and other organizations can collaborate with influencers to shape how people act on a daily basis. Reportedly, 87% of consumers have made purchases based on influencer recommendations, and consumers seem to trust shopping recommendations from influencers more than those from family and friends. This makes influencer marketing a lucrative channel for brands, as well as an important pillar in the effort to uphold truth and positivity on social media. In addition to battling hate speech, influencers can combat misinformation by encouraging followers to use reliable sources of information and by promoting independent news verification services.
Top 5 Tips for Brands Navigating Contentious Issues
- Avoid controversial topics not relevant to your brand and/or audience
There’s no reason for a fashion brand to weigh in on election fraud, and people don’t need to hear about abortion rights from their favorite bottled-drink brand. While your brand may wish to take a stand on certain issues, it’s best not to fill your feed with frequent statements on numerous controversies.
- Always stick with respected, official guidance
If you are going to weigh in on a topic, such as vaccinations, refer to the public health authority in your nation and share their updated, official guidance along with direct links to their online resources.
- Be certain your messages are clear
When addressing a controversial topic, even indirectly, avoid sarcasm and ambiguity. Make certain your stance is obvious, and leave no room for misinterpretation.
- Carefully vet influencers
When conducting due diligence, also be on the lookout for influencers who might offend your audience, or who are prone to impulsive or deliberately shocking behavior. Working with an influencer marketing agency can also help avoid disasters, since agencies conduct extensive research before doing business with any brand or influencer.
- Avoid disseminating misinformation
Of course, most brands don’t need to be reminded not to spread conspiracy theories, such as the claim that the Earth is flat. Still, misinformation can slip through accidentally, so when in doubt, fact-check yourself.
Individual Responsibility is Vital
While team efforts are needed, they’re still not the whole solution. The most important element is to be the change you want to see.
Individual responsibility is crucial. Brands should follow and promote responsible social media use, and work with influencers and leaders to spread constructive, positive messaging. The Conscious Influence Hub Code of Conduct can serve as a template. It’s always beneficial for brands to assert their commitment to truth and wellbeing, and to stand in opposition to hate speech and negativity.
Author: Megan Bozman, Owner @Boz Content Marketing