PARIS: Major tech firms pledged on Wednesday to pursue a range of fresh measures aimed at stamping out violent extremist content on the internet, amid growing pressure from governments in the wake of the massacres at two New Zealand mosques in March that left over 50 people dead.
The “Christchurch Call” was spearheaded by New Zealand’s premier Jacinda Ardern and French leader Emmanuel Macron, who gathered executives and world leaders to launch the initiative at a meeting in Paris.
The US government did not endorse the Christchurch Call and was not represented at the Paris meeting.
The pledge comes amid mounting outrage over the massacre, during which the gunman broadcast his rampage live on Facebook via a head-mounted camera.
“The dissemination of such content online has adverse impact on the human rights of the victims, on our collective security and on people all over the world,” the signatories said in a statement.
Facebook, in particular, has faced withering criticism since the Christchurch attack, after the horrific footage was uploaded and shared millions of times despite efforts to remove it.
The social media giant, which participated in crafting the new commitments, said earlier in the day that it would tighten access to its livestreaming feature.
Google and its YouTube unit also joined the pledge, along with Twitter, Wikipedia, Dailymotion and Microsoft.
The companies said they would cooperate on finding new tools to identify and quickly remove extremist content, such as sharing databases of violent posts or images to ensure they don’t spread across multiple platforms.
“For the first time, governments, international organisations, companies and digital agencies have agreed on a series of measures and a long-term collaboration to make the internet safer,” Macron’s office said in a statement.
“We need to get in front of this [problem] before harm is done,” Jacinda Ardern told CNN in an interview on Wednesday. “This is not just about regulation, but bringing companies to the table and saying they have a role too.”
Many countries have already tightened legislation to introduce penalties for companies that fail to take down offensive content once it is flagged, by either users or the authorities.
Facebook said it would ban users of its livestreaming service who shared extremist content, and reinforce its internal controls to stop the spread of offensive videos.
“Following the horrific recent terrorist attacks in New Zealand, we’ve been reviewing what more we can do to limit our services from being used to cause harm or spread hate,” Guy Rosen, the firm’s vice president for integrity, said in a statement.
But analysts say the tighter controls pledged on Wednesday can only go so far: users have long found ways to circumvent the rules and policies already in place against disseminating violence and hate speech.
“You can’t prevent content from being uploaded: it would require the resources for tracking everything put online by all internet users,” said Marc Rees, editor in chief of the technology site Next INpact.
“Can you imagine trying to get TV or radio to prevent libellous, abusive or violent speech that someone might say?” he wondered.
In an opinion piece in The New York Times over the weekend, Ardern said the Christchurch massacre underlined “a horrifying new trend” in extremist atrocities.
She said Facebook removed 1.5 million copies of the video within 24 hours of the attack, but she still found herself among those who inadvertently saw the footage when it auto-played on their social media feeds.
Around 8,000 New Zealanders called a mental health hotline after seeing the video, Ardern told CNN.