Social media companies have long been criticized for not doing enough to stop the spread of misinformation on their platforms.
However, since the onset of the coronavirus pandemic, several countries have imposed lockdowns, and most people now rely on social media for the latest information about the outbreak.
When it comes to coronavirus, however, some social media companies have competed to be responsible and reliable sources of information, while others remain reluctant.
Could this be social media’s time to rebuild trust with the public and with regulators?
In this article, The New Times looks at some of the latest strict interventions made by tech giants to stem the massive amount of misinformation being spread about coronavirus, and at how they are rising to the challenge.
Facebook, which has some 2.4 billion monthly users, has taken the strongest and most consistent actions during the unfolding coronavirus crisis.
Mark Zuckerberg, the company’s CEO, announced on April 16 that the platform would start warning users directly if they engage with harmful misinformation about the coronavirus, such as claims that drinking bleach will help patients or that social distancing is ineffective at stopping the pandemic’s spread.
He said the red-flag notices will appear in users’ News Feeds and direct them to information provided by the World Health Organization.
Social media companies can demote, block or elevate posts. According to Facebook, the average user sees only 10 per cent of their News Feed, and the platforms determine what users see by reordering how stories appear.
Therefore, the new feature will rely on third-party fact-checkers and health authorities flagging problematic content, and posts that fail the test will be removed. Facebook also blocks or restricts hashtags that spread coronavirus misinformation on its sister platform, Instagram.
Additionally, Facebook expanded its fact-checking system, which also extends to Instagram, and put warnings on 40 million posts last month. This appears to be an effective tool: according to Facebook, 95 per cent of the time users did not click through to the original content after seeing these warnings.
"On Facebook and Instagram, we've now directed more than 2 billion people to authoritative health resources via our COVID-19 Information Center and educational pop-ups, with more than 350 million people clicking through to learn more," reads part of the blog post published on April 16.
Other radical steps taken by social media companies include Twitter’s new policy of removing misinformation that contradicts official public health advice, such as tweets encouraging people not to observe physical-distancing guidelines, and WhatsApp’s new limits on message forwarding.
Instagram, on the other hand, delivers a pop-up urging its users to go to the website of the Centers for Disease Control and Prevention (CDC) or to the NHS.
Though Google-owned YouTube has not announced a transparent policy for blocking misinformation or a solid fact-checking system, the company removes videos that make false claims about preventing infection.
The coronavirus has become a pandemic, with more cases being registered around the world. Only 11 of the world’s 196 countries have no confirmed cases.
As it stands, more than 2,172,030 cases have been confirmed, while 146,201 patients have lost their lives in 185 countries.