Social media giants Facebook, Twitter, and YouTube appeared before the US Senate cyber-terrorism committee yesterday, and everyone seemed very happy with the progress the companies have made in tackling extremist content.
The session, entitled “Terrorism and Social Media: #IsBigTechDoingEnough?”, was expected to be a grilling of the three platforms many see as most responsible for the spread of extremist content online.
Social media stats laid bare
But the social media behemoths came well prepared, with armfuls of statistics purporting to show just how well they are doing.
YouTube claimed that 98% of the videos it removes for violent extremism are now flagged by automated algorithms. Facebook claimed to remove 99% of ISIS and Al Qaeda-related terror content before it is even reported.
Facebook’s head of product policy and counterterrorism, Monika Bickert, went on to claim that “Once we are aware of a piece of terrorist content, we remove 83 percent of subsequently uploaded copies within one hour of upload.”
Twitter also made big claims, saying that its “in-house proprietary” technology accounted for “more than 90 percent of suspensions” and that “three-quarters of those suspensions were flagged before the account had a chance to tweet even once.”
Of course, all of these stats can be flipped on their heads too. Read the other way, Facebook’s figure means it fails to remove 17% of re-uploaded copies within an hour, and the company said nothing about how much content it misses altogether. YouTube’s statistic, meanwhile, only tells us how its removals were flagged, not how many violent videos slip through undetected.
None of the companies commented on how many innocent videos have been blocked after being classified as extremist by these faceless algorithms. No automated system is perfect, and it is likely that thousands of completely innocent posts have already been blocked, with more to follow.
And of course, all of these statistics are compiled by the social media sites themselves rather than by an independent body, and the companies have a vested interest in making themselves look good and certainly not in highlighting their own censorship practices.
Fortunately for the triumvirate, the US Senate Committee seemed content that they were already going above and beyond the call of duty and showed nothing but obedient interest in their latest plan to tackle radical content: the ‘Redirect Method’.
The ‘Redirect Method’
The new ‘Redirect Method’ is the result of a partnership between YouTube and Jigsaw, Google’s in-house think-tank.
It is, in theory, a simple idea: if users search YouTube for a video the site deems extremist in nature, they will automatically be redirected to content that debunks extremist propaganda and seeks to stop individuals from being radicalised.
The idea is very much in its infancy and was actually raised not by the social media companies themselves, but by Senator John Thune (R-SD), the chairman of the Committee.
To date, the approach is untested and little more than an idea, but the committee chairman at least seems convinced that it is the answer.
Social media: our self-appointed censors
Of course, this approach too runs the risk of social media sites censoring innocent content, as it will also be a fully automated system.
All of which means that at the end of a Committee hearing which appeared to satisfy everyone present, there was very little satisfactory information to emerge.
The world’s biggest social media sites continue to claim they are working hard to block extremist content, but the statistics they provide to back up this claim suggest their actions to date are far from perfect.
Meanwhile, their algorithms continue to hoover up and block any content which they deem to be extremist without any consideration being given to just how accurate they are.
The situation leaves a sour taste in the mouth, with individuals rapidly losing faith as social media sites become the self-appointed censors of the western world. It goes against the free and open internet that most internet users advocate.
But sadly, for now at least, the only question users have to answer is whether they prefer their censorship to be carried out by a heavy-handed state or by an automated tech corporation. A choice, at best, of the lesser of two evils.