Social media platforms are under constant pressure to regulate misinformation. We analysed how the three most popular social media platforms are tackling fake news related to the pandemic.
During past pandemics such as the ‘Spanish flu’, people did not have access to information the way we do now, during COVID-19. Today, social media allows information to be shared seamlessly. But this ease of communication has, in turn, given rise to an avalanche of fake news and misinformation. Health experts, policymakers and even the general public are now dealing not just with a pandemic but with an “infodemic”, in which fake news is harming human lives.
Since people depend heavily on social media and chat apps for daily updates, even about a deadly disease like coronavirus, it is important for these platforms to respond appropriately to the current situation.
Social media platforms can also be used to gauge how far fake news has spread, using indicators such as the number of “Likes” and shares – that is, by looking at how people interact with and respond to a particular post, image or video. For social media sites and apps, it therefore becomes imperative to regulate content and join hands in fighting fake news.
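To illustrate the idea, here is a minimal sketch in Python of ranking flagged posts by a simple engagement score. The records, field names and weights below are hypothetical – real platforms expose engagement counts through their own APIs.

```python
# Minimal sketch: rank flagged posts by engagement to estimate their reach.
# The records, field names and weights are hypothetical illustrations.

def engagement_score(post):
    """Weight shares highest, since sharing is what propagates a post."""
    return post["shares"] * 3 + post["comments"] * 2 + post["likes"]

flagged_posts = [
    {"id": "p1", "likes": 120, "shares": 45,  "comments": 30},
    {"id": "p2", "likes": 15,  "shares": 2,   "comments": 4},
    {"id": "p3", "likes": 500, "shares": 210, "comments": 95},
]

# The highest-engagement posts are spreading fastest, so they are the
# most urgent candidates for fact-checking.
for post in sorted(flagged_posts, key=engagement_score, reverse=True):
    print(post["id"], engagement_score(post))
```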
We analysed how the three most popular social media platforms in India – Facebook, WhatsApp and Twitter – responded to the current pandemic.
Facebook:
The world’s largest and most popular social media platform has come under scrutiny time and again for various reasons, including allowing the promotion of ads with misleading claims, but it appears to be putting in extra effort to tackle the misinformation being spread on its platform.
On March 18th, 2020, Facebook launched the Coronavirus (COVID-19) Information Center, which appears at the top of the News Feed and provides a central place for people to get the latest news and information, as well as resources and tips.
Furthermore, it has banned ads for hand sanitizer, disinfecting wipes, and COVID-19 testing kits. When users search for ‘coronavirus’, Facebook directs them to the WHO resource page. It has also launched the Messenger Coronavirus Community Hub, with tips and resources to keep people connected to their friends, family, colleagues, and community and to prevent the spread of misinformation, including advice on how to recognise and avoid scams and misinformation online. Facebook is labelling coronavirus misinformation with “fact check” labels and has pledged $100 million to support newsrooms’ fact-checking initiatives, in addition to previous grants to local news outlets and fact-checkers. It is also offering $25 million in emergency grants for local news through the Facebook Journalism Project.
Overall, Facebook is pairing health and relief efforts with access to accurate information in order to limit the spread of COVID-19 misinformation.
WhatsApp:
WhatsApp has emerged as a hotbed of misinformation since the start of the pandemic. Fake messages, images, and videos spread like wildfire in private WhatsApp groups. Because WhatsApp uses end-to-end encryption, these messages cannot be monitored or scrutinised by the platform.
To tackle this, WhatsApp announced a strict new limit on message forwarding: if a message has already been forwarded many times, users can only send it on to one chat at a time, rather than five as before.
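As a rough illustration, the rule might look like the following Python sketch. The threshold of five forwards matches WhatsApp’s public description of ‘frequently forwarded’ messages, but the function and constant names here are hypothetical.

```python
# Rough sketch of a forwarding-limit rule like the one described above.
# The threshold and function name are hypothetical illustrations.

FREQUENTLY_FORWARDED_THRESHOLD = 5

def max_forward_recipients(forward_count: int) -> int:
    """How many chats a message may be forwarded to at once."""
    if forward_count >= FREQUENTLY_FORWARDED_THRESHOLD:
        return 1  # frequently forwarded: one chat at a time
    return 5      # ordinary messages: up to five chats, as before

print(max_forward_recipients(2))  # -> 5
print(max_forward_recipients(7))  # -> 1
```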
On March 18th, 2020, WhatsApp launched the WhatsApp Coronavirus Information Hub, working with the WHO, UNICEF, and UNDP to keep its global users informed about the pandemic and thereby limit the spread of rumours.
On March 20th, it launched the World Health Organization’s Health Alert on WhatsApp. The WHO Health Alert is free to use and answers common questions about COVID-19. To contact the WHO Health Alert, save the number +41 79 893 1892 in your phone contacts and then text the word ‘Hi’ in a WhatsApp message to get started.
It has also donated $1 million to the International Fact-Checking Network (IFCN) to expand the presence of local fact-checkers on WhatsApp.
Twitter:
Twitter started by directing users who search for information related to ‘coronavirus’ to the WHO or CDC website.
On March 18th, Twitter broadened its guidelines on unverified claims that incite people to engage in harmful activities or could lead to widespread panic; in effect, it can now remove tweets that might put people at a higher risk of transmitting COVID-19. The policy guidelines noted: “we’ve adjusted our search prompt in key countries across the globe to feature authoritative health sources when you search for terms related to the novel coronavirus”.
From March 18th to April 23rd, Twitter removed more than 2,230 tweets containing misleading and potentially harmful content.
Twitter also started using labels and warning messages to provide additional explanations or clarifications where the risk of harm from a Tweet is less severe but people may still be confused or misled by its content. Labels now appear on Tweets containing potentially harmful, misleading information related to COVID-19.
The platform is continuously working to share credible coronavirus updates and has launched a section that collates the latest fact checks in order to get health information to the widest possible audience. The page is a timeline of Tweets from organisations certified by the Poynter Institute’s International Fact-Checking Network (IFCN), helping users understand whether information about the pandemic is credible and accurate.