The Society of Editors (SoE) has called for more clarity over who will decide what conversations are to be censored under new rules brought in by social media giant Twitter.
Twitter has announced a new strategy for combatting tweets it recognises as spreading disinformation and fake news about the Covid-19 pandemic.
The online platform has introduced a three-pronged approach which tackles tweets and links it identifies as a potential risk.
However, while recognising the need for social media companies to play their part in tackling the spread of disinformation, the SoE has called for clarity and caution in any attempt to stifle debate on Covid-19 and other news topics.
“No one can doubt this is a huge challenge for the digital giants and the spread of dangerous misinformation must be tackled, but companies like Twitter must be careful not to stifle legitimate debate in the process,” commented SoE executive director Ian Murray.
“Mainstream news media in many countries rely on Twitter and other social media sites to alert readers to their articles, which may be tackling issues surrounding Covid-19 where the experts do not agree. To remove or place warnings on such posts would not be in the public interest.”
The Twitter initiative follows similar announcements from other social media giants on tackling the spread of fake coronavirus information.
In a blog post today, Twitter outlined steps it will take once a post has been identified as a risk. The specific action taken will depend on whether the company deems the claims in a tweet as “misleading”, “disputed”, or “unverified”.
Depending on the severity of the claims made in the tweet, the company will apply a label, add a warning, or remove the message altogether.
Twitter says the changes will also be applied to tweets sent before today.
For tweets deemed moderately harmful, Twitter will insert a link underneath the tweet directing users to reliable information. This strategy follows the company’s approach to so-called ‘deepfakes’, which was announced earlier this year.
Announcing the new initiative, Twitter said: “In serving the public conversation, our goal is to make it easy to find credible information on Twitter and to limit the spread of potentially harmful and misleading content. Starting today, we’re introducing new labels and warning messages that will provide additional context and information on some Tweets containing disputed or misleading information related to COVID-19.
“Moving forward, we may use these labels and warning messages to provide additional explanations or clarifications in situations where the risks of harm associated with a Tweet are less severe but where people may still be confused or misled by the content. This will make it easier to find facts and make informed decisions about what people see on Twitter.”
The company said it will take action based on three broad categories:
Misleading information — statements or assertions that have been confirmed to be false or misleading by subject-matter experts, such as public health authorities.
Disputed claims — statements or assertions in which the accuracy, truthfulness, or credibility of the claim is contested or unknown.
Unverified claims — information (which could be true or false) that is unconfirmed at the time it is shared.
“Our teams are using and improving on internal systems to proactively monitor content related to COVID-19. These systems help ensure we’re not amplifying Tweets with these warnings or labels and detecting the high-visibility content quickly. Additionally, we’ll continue to rely on trusted partners to identify content that is likely to result in offline harm.”