There are many things policy-makers can do to fight fake news and propaganda, but they must be careful to ensure they don’t find themselves mimicking the behaviour of authoritarian states.
Author: William Echikson
When I worked at Google, I was proud to promote one of the company’s most innovative products. It wasn’t the tech giant’s magical search engine. Nor was it its efficient Android mobile operating system or its crystal-clear Hangouts video calls. It was the Google Transparency Report. The report, the first of its kind, shone a penetrating spotlight on government censorship. It recorded the number of demands Google received from governments around the globe, whether for information about users or for takedowns of content. The goal was to make the authorities think twice before making such requests and to show how Google defended free speech.
The more requests Google turned down, the more delighted I was. Given the report’s powerful message, many other internet and telephone companies soon began to publish their own transparency reports.
Fast forward a decade, and we find democracies are now agonising over fake news and terrorist propaganda. Earlier this month, the European Commission published a new recommendation demanding that internet companies remove extremist and other objectionable content within an hour of being notified — or face legislation forcing them to do so. The Commission also endorsed transparency reports as a way for a company to demonstrate its compliance with the law.
Indeed, Google and other big tech companies still publish transparency reports, but they now seem to serve a different purpose: to convince authorities in Europe and elsewhere that the internet giants are serious about cracking down on illegal content. The more takedowns they can show, the better.
Having once fought Europe’s “right to be forgotten” as a threat to free expression, the company recently updated its reports to promote its success in allowing Europeans to exercise this right. Since 2014, Google has received 2.4 million requests to de-list web links. While the company rejected more than half of these requests, it did take down links to articles accusing a Finn of sex crimes and an Irishman of domestic violence, both of whom were subsequently acquitted.
We can also expect additional “transparency” designed to underline Google’s own content crackdown. The company’s transparency report does not yet include a full accounting for YouTube, the main vehicle for illegal content on Google’s services. But Google has hinted that it will soon produce a dedicated report for the video-sharing site. If and when it does, it promises to show not thousands but millions of annual takedowns, many for copyright violations but also many for breaking “community rules”.
YouTube recently announced that it has begun removing all videos from groups designated as terrorists by the US or British government, even those that do not depict violence or preach hate. The pace of private-sector censorship is astounding — and it’s growing exponentially.
Only a few years ago, some six hours of video were going up every minute on YouTube. Today, it is 300 hours of video per minute. In June 2017, Facebook counted 2.01 billion monthly active users worldwide. Every 60 seconds, 510,000 comments are posted, 293,000 statuses are updated and 136,000 photos are uploaded. The only possible way to monitor such a huge volume of content is by using machines. Google is devising algorithms, ranging from keyword filters to artificial intelligence, to identify potentially objectionable content.
These tools work by matching new uploads or browsing activity against patterns of behaviour and previously identified illegal content. But machine-filtering represents a danger to free speech. According to Emma Llansó, Director of the Free Expression Project at the Center for Democracy and Technology (CDT), machines find it difficult to distinguish between fake and real news, as well as between what is appropriate and what is not.
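To make the mechanics concrete, here is a minimal sketch, in Python, of the two simplest techniques in that range: a keyword blocklist and a lookup of new uploads against digests of previously identified illegal content. Everything in it, the keyword list, the hash set and the function names, is invented for illustration and bears no relation to any platform’s actual systems.

    import hashlib

    # Hypothetical blocklist and hash database, invented for illustration.
    # Real platforms use vastly larger databases and perceptual
    # (similarity) hashes rather than exact cryptographic digests.
    BANNED_KEYWORDS = {"example-banned-phrase"}
    KNOWN_ILLEGAL_HASHES = {
        # SHA-256 digests of content already reviewed and flagged
        # (this one is the digest of the bytes b"test")
        "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
    }

    def keyword_filter(text: str) -> bool:
        """Flag an upload whose text contains any blocklisted keyword."""
        lowered = text.lower()
        return any(keyword in lowered for keyword in BANNED_KEYWORDS)

    def hash_match(payload: bytes) -> bool:
        """Flag an upload whose digest matches previously flagged content."""
        digest = hashlib.sha256(payload).hexdigest()
        return digest in KNOWN_ILLEGAL_HASHES

    def should_review(text: str, payload: bytes) -> bool:
        # Either signal routes the upload to takedown or human review.
        return keyword_filter(text) or hash_match(payload)

    if __name__ == "__main__":
        print(should_review("some caption", b"test"))  # True: digest matches

Production filters typically swap the exact digest for a perceptual hash so that re-encoded copies still match, but the weakness Llansó points to is visible even in this toy version: the filter matches form, not meaning, so a news report quoting extremist material can trip the same rules as the material itself.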
In Europe, incentives are now aligned to take down first and ask questions later. Since January 1st, Germany’s new Network Enforcement Act (NetzDG) has required social media networks to check and remove false and hateful speech or face fines of up to €50 million. Legal content is being censored. When Justice Minister Heiko Maas tweeted that an author who opposes immigration was an “idiot”, Twitter removed the post. When Beatrix von Storch, an MP for the far-right Alternative for Germany, criticised German police for publishing a New Year’s greeting in Arabic, Twitter suspended her account.
YouTube recently took down a video from the esteemed op-ed syndicate Project Syndicate examining the revival of Holocaust revisionism. Why? Holocaust revisionism is illegal in 16 European countries, and the video-sharing platform couldn’t distinguish between revisionism and an examination of it.

There are many things policy-makers can do to fight fake news and propaganda. New legislation could require websites to be transparent about sponsored content and about who finances it, and spending on sponsored content could be capped. Policy-makers could also attempt to define illegal hate speech clearly. But they must be careful to avoid creating incentives for mass removals, and to ensure they don’t find themselves mimicking the behaviour of authoritarian states.
Turkey demands that internet companies hire locals whose main task is to take calls from the government and then take down content. Russia is reportedly threatening to ban YouTube unless it takes down opposition videos. China’s Great Firewall already blocks almost all Western sites as well as much domestic content.
Against this disturbing trend of curbing internet freedom, companies should return to the original purpose of transparency reports — shedding light on government overreach and demonstrating how few, rather than how many, pieces of content they are willing to take down.
William Echikson, a former senior policy manager at Google, is head of the Digital Forum at the Centre for European Policy Studies. An earlier version of this commentary was published on Euractiv, 23 January 2018.
CEPS Commentaries offer concise, policy-oriented insights into topical issues in European affairs. As an institution, CEPS takes no official position on questions of EU policy. The views expressed are attributable only to the author and not to any institution with which he is associated.
© CEPS 2018