Nick Clegg, former Deputy Prime Minister, has provided an update on the steps Facebook is taking to remove misinformation online.
Facebook has come in for criticism in recent weeks about whether, alongside other social media platforms, it is doing enough to combat misinformation around Covid-19 online. The update comes in the same week that a sub-committee of the Digital, Culture, Media and Sport (DCMS) Committee in the UK asked members of the public to flag up misinformation on social media platforms.
Clegg, Vice President of Global Affairs and Communications for Facebook since 2018, has provided an update on Facebook’s steps to tackle the issue in a piece titled ‘Combating Covid-19 misinformation across our apps’.
It was announced on Wednesday that Facebook was expanding its partnership with Reuters’ fact-checking unit to specifically target misinformation on social media in the UK. Reuters will offer its media verification expertise in reviewing false or misleading content posted by UK users on Facebook and Instagram for its UK-based audience.
Clegg said that in the wake of the World Health Organization (WHO) declaring Covid-19 a global public health emergency, Facebook has continued to work to connect people to accurate information and is taking aggressive steps to stop misinformation and harmful content from spreading.
In relation to Facebook and Instagram specifically, he said, the platforms continue to direct people to the WHO and other official sources of information. A new COVID-19 Information Centre would also soon be rolled out globally.
He added: “Last week, we launched the COVID-19 Information Center, which is now featured at the top of News Feed on Facebook in several countries and includes real-time updates from national health authorities and global organizations, such as the WHO. The COVID-19 Information Center will be available globally soon.”
Clegg said that Facebook was continuing to take active steps to remove Covid-19 related misinformation that could contribute to “imminent physical harm”.
He added: “We’ve removed harmful misinformation since 2018, including false information about measles in Samoa where it could have furthered an outbreak and rumours about the polio vaccine in Pakistan where it risked harm to health aid workers. Since January, we’ve applied this policy to misinformation about COVID-19 to remove posts that make false claims about cures, treatments, the availability of essential services or the location and severity of the outbreak. We regularly update the claims that we remove based on guidance from the WHO and other health authorities. For example, we recently started removing claims that physical distancing doesn’t help prevent the spread of the coronavirus. We’ve also banned ads and commerce listings that imply a product guarantees a cure or prevents people from contracting COVID-19.”
In relation to conspiracy theories, Clegg said the network continued to work with fact-checkers to limit the reach and distribution of false information.
He said: “For claims that don’t directly result in physical harm, like conspiracy theories about the origin of the virus, we continue to work with our network of over 55 fact-checking partners covering over 45 languages to debunk these claims. To support the global fact-checking community’s work on COVID-19, we partnered with the International Fact-Checking Network to launch a $1 million grant program to increase their capacity during this time.
“Once a post is rated false by a fact-checker, we reduce its distribution so fewer people see it, and we show strong warning labels and notifications to people who still come across it, try to share it or already have. This helps give more context when these hoaxes appear elsewhere online, over SMS or offline in conversations with friends and family. On Instagram, we remove COVID-19 accounts from recommendations and we’re working to remove some COVID-19 related content from Explore, unless posted by a credible health organization.”
In announcing yesterday that the DCMS sub-committee on online harms and misinformation was asking the public to flag up posts on social media, Julian Knight, chair of the committee, said that tech giants should face penalties if they did not take responsibility for the content on their sites.
He added: “The deliberate spreading of false information about COVID-19 could have serious consequences. Much of this is happening on social media through private channels, putting the onus on friends and family to identify whether the information they are seeing is misleading.
“There have been some shocking examples in recent weeks and we want people to send us what they’ve come across.
“We will call in social media companies as soon as the House returns to explain what they’re doing to deal with harmful content like this to help give people the reassurances they need at this difficult time. Tech giants who allow this to proliferate on their platforms are morally responsible for tackling disinformation and should face penalties if they don’t.”