News

Society warns of harm to online investigative journalism

Posted on: September 15, 2020 by Claire Meadows

The Society of Editors has expressed concern following a warning by a leading human rights organisation that social media companies are removing content in a way that could threaten future journalistic investigations.

The report by Human Rights Watch, “Video Unavailable: Social Media Platforms Remove Evidence of War Crimes”, warns that platforms such as Twitter, Facebook and YouTube are increasingly using artificial intelligence systems to remove content that breaches their guidelines, without archiving the material in a manner accessible to investigators, journalists and researchers seeking to hold perpetrators to account.

The report warns that social media content, particularly photographs and videos posted by perpetrators, victims, and witnesses to abuses, has become increasingly central to some prosecutions of war crimes. Yet law enforcement officers and others are likely to be “missing important information and evidence that would have traditionally been in the public domain” because increasingly sophisticated artificial intelligence systems are taking down content before any of them have a chance to see it or even know that it exists.

Ian Murray, Executive Director of the Society, agreed that while it is right that platforms continue to remove content that incites or promotes violence, more needs to be done to ensure that accountability is maintained and that future criminal or journalistic investigations are granted access to such material.

He said: “While the Society supports, and has welcomed, discussions around the need to crack down on illegal and harmful content online, we remain deeply concerned that platforms are increasingly permanently removing content without careful thought or attention to the matter of future accountability through both criminal and journalistic investigations.

“The Society has already warned that in seeking to adhere to regulations and guidelines – whether through proposed Online Harms legislation or efforts to tackle disinformation – social media companies will likely use cheap algorithms to seek out and remove content, an approach that will prove far too blunt an instrument. As recognised by Human Rights Watch, such algorithms may already be acting as a barrier to future debate and accountability.

“As recognised in the report, journalists have played a vital role in documenting atrocities in Iraq, Myanmar, Syria, Yemen and Sudan, to name but a few, and without their investigations, criminal proceedings may never have taken place. It is not beyond the realm of possibility that similar journalistic and criminal investigations could be hampered if content continues to be permanently removed with no thought to accountability or any means of accessing material down the line.”

The report by the US-based human rights organisation calls for deleted content to be made available through an independent mechanism to criminal investigators, journalists, academics and non-governmental organisations.

The report pointed to the existing mechanism in the US to preserve potential evidence of child sexual exploitation as an example of how any new mechanism could work. US-registered companies operating social media platforms are required to take down content that shows child sexual exploitation, but also preserve it on their platforms for 90 days and share a copy of the content, as well as all relevant metadata—for example, the name of the content’s author, the date it was created, and the location—and user data, with the National Center for Missing and Exploited Children (NCMEC).

Human Rights Watch suggests that any mechanism to preserve publicly posted content that is potential evidence of serious crimes could be established through collaboration with an independent organisation, which would be responsible for storing the material and sharing it with relevant actors.