Why coronavirus means more online censorship


While remote working is proving to be a lifeline for many companies and businesses around the world, it is also throwing up no shortage of unintended consequences and problems.

One of those issues appears to be the prospect of more arbitrary and indiscriminate censorship. But for once, the reason for this is not over-zealous governments but automation.

How tech companies are turning to automation during the coronavirus crisis

An example of this is YouTube. The company announced this week that it was reducing staffing in a number of areas of its business so that employees could practise social distancing. This is, of course, perfectly understandable.

But instead of allowing people to do their work remotely, YouTube is turning to automated systems. This appears to be particularly the case for staff who review and censor content.

“With fewer people to review content, our automated systems will be stepping in to keep YouTube safe,” they said in a statement.

This is extremely troubling because, even by YouTube's own admission, these automated systems simply do not work as intended. The statement made this clear: "More videos will be removed than normal during this time, including content that does not violate our Community Guidelines. We know this will be hard for all of you."

YouTube users were quick to question why it wasn’t possible for staff to review and approve content remotely; something that would seem to be perfectly achievable. So far, there has been no response from YouTube to this question.

The big concerns people have are why the use of automation for this task is necessary and if and when YouTube will switch back to manual reviewing again.

The coronavirus crisis is going to result in huge changes in the way we live our lives, not just in the short-term but in the long term as well. We have already written about the prospects of a remote working revolution that lasts well beyond the crisis.

But there is also the concern that once governments and big tech companies like YouTube make changes like this, they will be very reluctant to change things back again once the crisis has ended.

If they don’t, the result will be an increase in arbitrary and unnecessary censorship across these platforms.

YouTube is not the only company turning to automation to tackle the challenges of the coronavirus crisis.

Both Facebook and Twitter are doing much the same, turning to automated systems that they know to be flawed.

Facebook issued a statement similar to YouTube’s earlier this week, admitting that “We may see some longer response times and make more mistakes as a result.”

The dangers of automating content reviews

There are a number of different reasons why these automated systems are so concerning.

The biggest concern is that these automated systems simply don’t work very well. Their algorithms are programmed to spot certain keywords and phrases, and the end result is almost always the blocking and censorship of far more content than is necessary.
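To see why keyword-based filtering over-blocks, consider this minimal sketch. It is a hypothetical illustration of the general technique described above, not YouTube's actual system; the blocklist words are invented for the example. Even this tiny filter produces a false positive because substring matching cannot tell harmful content from harmless content that merely contains the same letters:

```python
# Hypothetical sketch of a naive keyword-based content filter
# (illustrative only; not any real platform's moderation system).

BLOCKLIST = {"attack", "virus"}  # invented example keywords


def should_block(text: str) -> bool:
    """Flag content if any blocklisted keyword appears as a substring."""
    lowered = text.lower()
    return any(word in lowered for word in BLOCKLIST)


# A post about hacking is caught, as intended...
print(should_block("How to attack a web server"))       # True
# ...but so is legitimate health advice, because "virus"
# appears inside "antivirus" -- a false positive.
print(should_block("Antivirus tips for home workers"))  # True
```

This lack of context is exactly the over-blocking problem: the filter cannot distinguish a discussion *about* a topic from content that should genuinely be removed, so anything ambiguous gets censored.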

Content is also posted in all sorts of different languages, but these systems are invariably better with some languages than others. This can mean users who speak certain languages end up with far more content being blocked than others. This is not an issue when native speakers review content manually.

Alternative content providers also argue that these automated systems are inherently biased against them and in favour of mainstream media. While it is necessary to be strict on the issue of fake news, especially during times like these, it is important not to silence those who express genuine opinions and different points of view.

Striking this balance is extremely delicate and undoubtedly beyond the capability of any automated system we have created so far.

Automated systems will always lean towards censorship when there is any doubt, and that lack of nuance and inability to place content in context is why they are so unreliable.

A rise in global online censorship

If internet users shout loud enough, it is possible that, in time, the big tech companies will listen. But in the current climate of panic and battening down the hatches, this seems unlikely.

The grim reality is that these automated systems are going to play a big role in monitoring and reviewing online content in the short term. Sadly, that is likely to mean a rise in online censorship across the globe.

What is most important is that this move is not allowed to go unchecked. If your content is blocked on social media and you disagree with the decision, don’t hesitate to go through the process of appealing the decision and seek a manual review if that is possible.

But most importantly, we mustn’t forget that this switch to automation has happened, nor the ill effects it has caused. And once coronavirus is under control and we can all get back to our normal lives, we must ensure big tech companies revert to their old ways: reviewing content manually and censoring only in the most extreme of circumstances.

Author: David Spencer

Cyber-security & Technology Reporter, David monitors everything going on in the privacy world. He has been fighting for a less restricted internet as a member of the VPNCompare team for over 7 years.

Away from writing, he enjoys reading and politics. He is currently learning Mandarin too... slowly.
