Facebook, YouTube and Twitter have agreed a deal with major advertisers on how they define harmful content.
The agreement – with the World Federation of Advertisers (WFA) – will see the social networks use common definitions for things such as hate speech, aggression and bullying.
Brands will also have better tools to control where their ads appear.
It follows an advertising boycott of Facebook earlier this year, involving more than 1,000 companies.
The boycott included some of the world’s biggest brands – such as Unilever and Coca-Cola.
It was driven in part by the Stop Hate for Profit campaign, a coalition of non-profit organisations urging brands to pull advertising to encourage radical reform of Facebook’s hate speech policies.
But this latest agreement is between the advertisers themselves and the social networks, and does not involve the non-profit groups.
It is also specifically about advertising – content policies do not need to change, and decisions about what to take down remain separate.
But the US Anti-Defamation League, responding on behalf of Stop Hate for Profit, gave a cautious welcome to the “early step”.
“These social media platforms have finally committed to doing a better job tracking and auditing hateful content,” chief executive Jonathan Greenblatt said.
But he warned that the deal must be followed through, “to ensure they are not the kind of empty promises that we have seen too often from Facebook” – and he said his group would continue to push for further change.

Rob Rakowitz from the WFA said the agreement “sets a boundary on content that absolutely should have no ads supporting it, therefore removing harmful content and bad actors from receiving funding from legitimate advertising.”
Independent audits
The details are being set by a group established by the WFA, called the Global Alliance for Responsible Media (Garm).

It was set up in 2019, long before the boycott, to create a “responsible digital environment”, and it says the new deal is the result of 15 months of negotiations.
Garm will decide the definitions for harmful content, setting what it calls “a common baseline”, rather than the current situation where they “vary by platform”. That makes it difficult for brands to choose where to place their adverts, it said.

The group will also create what it calls “harmonised reporting” methodologies, so that statistics on harmful content can be compared between platforms.

By 2021, there will be “a set of harmonised metrics on issues around platform safety, advertiser safety, platform effectiveness in addressing harmful content,” it said.

Independent audits will double-check the figures. And, crucially for advertisers, the new deal calls for control over how close an advert will appear to certain types of content.

“Advertisers need to have visibility and control so that their advertising does not appear adjacent to harmful or unsuitable content, and take corrective action if necessary and to be able to do so quickly,” it explained.

All three social networks publicly welcomed the agreement. None, however, said they were making any immediate changes to their wider content policies.