Facebook has come under fire for many things in the last few years. It has been accused of failing to take action on hate speech and misinformation; of encouraging addiction to social media; and of infringing on the privacy of millions.


But Nick Clegg, Facebook’s VP of global affairs and communications – and former deputy prime minister of the UK – is keen to stress that the social media giant has taken far more action in the past few years than it is given credit for.


“Over the last year we have spent, and I'm now talking about billions of dollars, more money on integrity on our platforms, including keeping elections as safe as we possibly can, than the total revenues of the company when it was floated back in 2012,” he says, speaking at TNW2020 ahead of the US presidential election.


“I challenge you to find almost any other corporation which has made such an aggressive pivot, worth so much money with so many extra resources, in order to play our responsible role in keeping elections safe.”


Taking a combative tone with interviewer Richard Waters, West Coast editor of the Financial Times, Clegg rejects the idea that Facebook simply isn’t doing enough to tackle hate speech and misinformation, arguing that there are limits to what is practically possible.


“I do think we need to be quite candid with ourselves: you are never going to eliminate everything you don't want online. It is never, ever going to happen,” he says.


“I personally feel, as someone who has moved from politics to Silicon Valley relatively recently, that the ambition and scale at which Facebook is operating now is probably more sophisticated, technologically and at scale, than anything that exists anywhere else in the industry.


“But I will never, ever claim, and no one ever can claim, that a journalist will not be able to scour the internet and find a piece of content which has got through the net and then slap it on the front page.”

Nick Clegg: Facebook is making progress on hate speech

One of the biggest criticisms levelled at Facebook is a perceived lack of action on hate speech. However, Clegg argues that the company is making “significant progress” on the issue.


“A central point is the fact that two years ago we were only able to identify 23% of hate speech before it was reported to us, but now we do so in over 90% of cases,” he says.

“Is it 100%? No. Will it ever be 100%? I doubt it. Is it progressing in the right direction? Yes.”

“[It] shows that our machine learning tools, in particular our AI systems, are able to operate at scale.”


He also points to progress on the company’s action against bot accounts, which are often used to spread misinformation or hate speech, proudly stating that the company’s automated systems had removed “six and a half billion fake accounts” in 2019.


“Our advertisers do not like clickbait”

On the issue of addictiveness, it has been suggested that Facebook benefits significantly from highly polarised and incendiary content. Clegg, however, is vehement that this is not the case.


“Just because it's repeated often doesn't make it true,” he says.


“Our business model is based on advertisers. Advertisers do not like polarising, clickbaity, unpleasant content.”


He also argues that while users may react to highly negative clickbait in much the same way as to a “lurid tabloid headline”, that does not mean that they return to the site because of it.

“Far from being something which promotes engagement, we know from our own research that clickbaity, unpleasant, polarising, hateful content discourages long-term sustainable use,” he says.


Clegg also points to efforts made back in 2018 to tackle the problem, when the company updated its algorithm to promote “local and friends and family content” and “meaningful social interactions”. Organisations, including news outlets and brands, saw their content deprioritised in favour of content shared by the people users were following.


"[It] led, by the way, to the largest drop in Facebook share price,” he says.


“Facebook did that at the time because it knows that in the long run, if you want to retain users so they come back to enjoy the pleasurable experience of sharing content with their friends and family, you need to try and suppress clickbaity, polarising material.”

Cover image courtesy of Ms Jane Campbell / Shutterstock.com
