How Fake Fixes To Fake News Could Lead To Real Problems

Updated: 27 March 2017, 09:41

In his recent manifesto, Mark Zuckerberg asserts that the response to our dysfunctional and conflict-ridden politics is to build a stronger global community based on ubiquitous interconnection. We know of course that Facebook stands to profit from this utopian vision, and we should be skeptical of the motives underlying Zuck’s position. But it’s worth taking a second look at the idea of working on underlying economic and political issues in our societies, rather than focusing on the effects of online expression—particularly in the context of the moral panic over “fake news.”

The consternation about fake news from Western journalists, scholars of propaganda, and policymakers has inspired waves of stories and talk shops addressing its growth as a threat to our public discourse, our journalism, and our systems of governance. And we see many attempts to understand, fix, or apportion blame. Yet many of the proposed fixes are deeply problematic because they advocate overly broad and vague restrictions on expression. Solutions that would limit suspected “fake” expression or strongly encourage private intermediaries to restrict some kinds of speech and prioritize or “whitelist” others are particularly troubling.

This week, Germany became the latest country to introduce a plan that would force social media companies to monitor and censor some kinds of online expression. Justice Minister Heiko Maas wants to put regulatory pressure on social media companies, especially Facebook and Twitter, to police expression, asserting that they have failed to do so voluntarily. Draft legislation proposes to fine social media companies up to €50 million for failing to quickly delete hate speech, fake news, and other types of misleading speech.

In this context, we can look to countries that have created regulatory regimes to control online expression—such as China—not as entirely “other”, but perhaps as cautionary examples. When proposing solutions to fix fake news, we should be extremely careful not to build our own self-censorship machines.

“FAKE” NEWS AND THE ROLE OF STATES

Many recent false news stories have come from groups not affiliated with states, but examples from Russia, China, Iran, and many other countries should remind us that the biggest threat to our public discourse is false information used by and to the advantage of governments. Governments, after all, have the authority to couple shifting narratives of truth to state mechanisms of control. We ought to be especially attuned to states that restrict the “false” expression of their citizens, while at the same time creating misleading narratives and stories about themselves. When states attempt to control narratives, it’s time to start looking for signs of tyranny.

For the past 20 years, we have seen states or their affiliates use internet-based false news and disinformation as part of broader agendas to shape public opinion for political ends. Well-researched examples include China’s 50 Cent Party, Russia’s troll factories, and astroturfing bot engines contracted by the U.S. government, all of which are designed to flood internet forums and social media with falsities and distractions.

At the same time, some states have taken steps to regulate, restrict, and even criminalize “false” stories produced by citizens and journalists as a punitive method of controlling expression. In Bahrain, China, Egypt, Turkey, Russia, Venezuela, Iran, and elsewhere, social media users have been arrested and prosecuted for sharing information deemed by governments to be false or misinformed. New regulations in China forbid the use of “unverified facts distributed via social media platforms” and prohibit websites “from quoting from unnamed or fake news sources and fabricating news based on hearsay, guesswork, or imagination.”

A recent declaration issued by a group of intergovernmental free-expression monitors, including David Kaye, the UN Special Rapporteur on Freedom of Expression, discusses these regulatory efforts from the perspective of international law and norms. They emphasize that international human rights doctrine explicitly protects expression that may differ from or counter governmental positions, even when it is factually inaccurate. Regulatory and technical approaches to reducing fake news should, they argue, continue to safeguard the diversity and abundance of speech. They write:

the human right to impart information and ideas is not limited to “correct” statements…the right also protects information and ideas that may shock, offend, and disturb, and that prohibitions on disinformation may violate international human rights standards, while, at the same time, this does not justify the dissemination of knowingly or recklessly false statements by official or State actors…

WHAT’S THE PROBLEM, EXACTLY?

The real-life consequences of fake news are unclear. A recent study by the MIT/Harvard research project Media Cloud, led by Yochai Benkler and Ethan Zuckerman, examines the effects of right-wing information sources in the U.S. It suggests that rather than wringing our hands over “fake news”, we should focus on disinformation networks that are insulated from mainstream public conversations. Benkler and his colleagues challenge the idea that “the internet as a technology is what fragments public discourse and polarizes opinions” and instead argue that “human choices and political campaigning, not one company’s algorithm” are the more likely drivers of the construction and dissemination of disinformation.

Nevertheless, projects seeking to control fake news are running full steam ahead. These efforts have the potential to affect what information is easily available to the public, and if we aren’t careful, could even diminish our rights to expression. Approaches tend to fall into three broad categories:

  • Fix online discourse by deploying technologies that nudge, control, or censor some categories of speech
  • Fix the public by making us better at distinguishing fact from fallacy
  • Fix journalism, generally with massive cash transfers from the technology sector

Notably, these approaches all focus on mitigating effects rather than confronting the underlying economic or technical incentives in the structure of media, or the broader social, economic, and political incentives that motivate speech.

FIX ONLINE DISCOURSE

In seeking to build systems to manage false news, technology companies will end up creating systems to monitor and police speech. We will quickly find that they need ever-more granular, vigilant, and therefore continuously updated semantic analysis to find and restrict expression. These proposed solutions to fake news would be partly technological, based on AI and natural language processing: they would automate the search for, and flagging of, certain terms, word associations, and linguistic formulations. But language is more malleable than any algorithm, and people will invent alternative terms and locutions to express their ends.
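
To make the cat-and-mouse dynamic concrete, here is a minimal sketch, in Python with an invented blocklist, of the term-flagging approach described above. A trivial respelling or a fresh locution slips straight past it, which is why such lists demand continuous updating:

    # Minimal sketch of term-based flagging. The patterns are invented for
    # illustration; real systems use richer semantic models, but they face
    # the same evasion dynamic shown below.
    import re

    FLAGGED_PATTERNS = [
        r"\bmiracle cure\b",       # hypothetical "fake news" tell
        r"\bdoctors hate this\b",
    ]

    def flag(text: str) -> bool:
        """Return True if any blocklisted pattern appears in the text."""
        lowered = text.lower()
        return any(re.search(p, lowered) for p in FLAGGED_PATTERNS)

    print(flag("This miracle cure is real!"))       # True: caught
    print(flag("This m1racle kure is real!"))       # False: trivially evaded
    print(flag("This so-called 'wonder remedy'"))   # False: new locution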

The slipperiness of language could cause the hunt for “fake” or hurtful speech to become an end in itself. We have already seen this in the hunt for “toxic” language in a recent project called Perspective, made by Google’s Jigsaw, and other efforts will surely follow.
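
Perspective works by scoring text against trained “toxicity” models over a public REST interface. As a minimal sketch, assuming the v1alpha1 endpoint and request fields from Jigsaw's published documentation and a placeholder API key, a comment is reduced to a single probability that a moderation system can then threshold:

    # Hedged sketch of a Perspective API call. Endpoint and field names
    # follow Jigsaw's documented v1alpha1 API; API_KEY is a placeholder.
    import requests

    API_KEY = "YOUR_API_KEY"  # placeholder: obtained from Google Cloud
    URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
           f"comments:analyze?key={API_KEY}")

    def toxicity(text: str) -> float:
        """Ask Perspective for a 0..1 'toxicity' probability for `text`."""
        body = {
            "comment": {"text": text},
            "languages": ["en"],
            "requestedAttributes": {"TOXICITY": {}},
        }
        resp = requests.post(URL, json=body, timeout=10)
        resp.raise_for_status()
        scores = resp.json()["attributeScores"]
        return scores["TOXICITY"]["summaryScore"]["value"]

    # A moderation pipeline would then threshold the score, for example:
    # if toxicity(comment) > 0.8: hold_for_review(comment)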

Companies are likely to supplement their automated processes with human monitoring—from social media platform users flagging suspect content to contractor armies interpreting those flags and implementing restrictions. Added to this, perhaps, will be ombudspeople, feedback loops, legal processes, and policy controls upon the censors. Those systems are already in place to deal with terrorism, extreme hate speech, extreme violence, child pornography, and nudity and sexually explicit content. They can be further refined and expanded to police other types of expression.
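
The human layer can be pictured as a flag-and-review pipeline. The sketch below is purely illustrative: the threshold, queue, and verdict handling are invented, not any platform's actual design:

    # Illustrative flag-and-review pipeline: user flags accumulate per post,
    # a threshold escalates the post to a human review queue, and a reviewer
    # verdict either restricts the post or clears it.
    from collections import Counter, deque

    FLAG_THRESHOLD = 5             # invented escalation threshold
    flags: Counter = Counter()     # post_id -> number of user flags
    review_queue: deque = deque()  # posts awaiting a human decision

    def user_flags(post_id: str) -> None:
        flags[post_id] += 1
        if flags[post_id] == FLAG_THRESHOLD:
            review_queue.append(post_id)  # escalate once, at the threshold

    def human_review(verdicts: dict) -> None:
        """Reviewer drains the queue; verdicts maps post_id -> restrict?"""
        while review_queue:
            post_id = review_queue.popleft()
            action = "restricted" if verdicts.get(post_id) else "cleared"
            print(f"{post_id}: {action}")  # an appeals loop would hook in here

    for _ in range(5):
        user_flags("post-123")
    human_review({"post-123": True})  # prints: post-123: restricted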

Proposed solutions in this vein mostly fail to acknowledge that the technological incentives that encourage fake news are the same as the forces that currently finance the digital media industry—that is, advertising technology masquerading as editorial content.

The internet theorist Doc Searls calls this “adtech”, emphasizing that it is a form of direct marketing or spamming. The rise of fake news is driven in part by organizations seeking revenues or political influence by creating sensational and misleading stories packaged for highly polarized audiences. Producers of this content benefit from a system already designed to segment and mobilize audiences for commercial ends. That system includes the monitoring of consumer habits, targeted advertising, direct marketing, and the creation of editorial products appealing to specific consumer segments. These forces coalesce in a dance of editorial and advertising incentives that leads to further polarization and segmentation.
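
A toy version of that segmentation machinery makes the shared incentive concrete. The segments, interest weights, and topics below are all invented; the point is that the same matching logic serves a shoe ad or a fabricated outrage story to whichever audience will engage with it most:

    # Toy audience-segmentation matcher. Segments, interest weights, and
    # story topics are invented; real adtech stacks do the same matching
    # with behavioral profiles built from tracking data.
    SEGMENTS = {
        "polarized_left":  {"scandal": 0.9, "celebrity": 0.2},
        "polarized_right": {"scandal": 0.8, "immigration": 0.9},
        "sports_fans":     {"celebrity": 0.6, "scandal": 0.1},
    }

    def best_segment(story_topics: dict) -> str:
        """Pick the segment whose interests overlap most with the story."""
        def expected_engagement(segment: dict) -> float:
            return sum(weight * segment.get(topic, 0.0)
                       for topic, weight in story_topics.items())
        return max(SEGMENTS, key=lambda name: expected_engagement(SEGMENTS[name]))

    # A fabricated outrage story and a celebrity puff piece route through
    # the same logic:
    print(best_segment({"scandal": 1.0, "immigration": 0.7}))  # polarized_right
    print(best_segment({"celebrity": 1.0}))                    # sports_fans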

FIX THE PUBLIC

The next approach — that we fix ourselves — relies on the Victorian idea that our media systems would work if only people behaved in ways expected of them by the builders of those systems. Media literacy campaigns, public education, fact-checking, call-out and shaming tactics, media diets, whitelists of approved media: these solutions require that we blame ourselves for failing to curb our appetites. It is not wrong to suggest that we are susceptible to the allure of the media’s endorphin-injection strategy to hook us on the sensational and trivial, or that education is important to a healthy civic life. To focus blame primarily on individuals, however, is victim-blaming of a sort.

FIX JOURNALISM

The third approach, devoting more resources to better journalism, is an example of the journalism community seizing the current moment to reassert its expertise and value. While a more proactive, better-resourced media is vital for the long-term health of our civic life, conversations about journalism need to start with the trust deficit many journalistic outfits have accumulated over the past decades. That deficit exists precisely because of ever-more sensational and facile reporting, news as entertainment, and the corporate drive to maximize profits over the interests of audiences and readers.

Given that the business model of the liberal, capitalist media is primarily to sell eyeballs to advertisers, its outlets should not be surprised to find those of us being sold becoming wise to the approach. And while efforts to strengthen journalism and public trust in the media are important and much-needed, they will not make fake news go away.

SO WHAT ARE WE REALLY TALKING ABOUT?

The technological and human-based approaches proposed to date for controlling inaccurate online speech mostly do not address the underlying social, political, or communal causes of hateful or false expression. Instead, they seek to restrict behaviors and control effects, and they rely on the good offices of our technology intermediaries for that service. They do not ask us to look more closely at the social and political construction of our communities. They do not examine and propose solutions to address hate, discrimination, and bias in our societies, in areas such as income disparity, urban planning, educational opportunity, or, indeed, our structures of governance.

Frustratingly, we have seen these approaches before, in efforts to reduce online “extremism”, with similarly dubious results. Countering violent extremism (CVE) projects suffer from similar definitional flaws about the nature of the problem, but that doesn’t stop governments from creating misguided responses. For examples, look to the many “counter-narrative” projects, such as “Welcome to ISIS Land”, funded by the U.S. State Department. These projects, supported by governments, international organizations, and companies, pursue an array of technical, communications, and policy-based approaches to controlling extremism.

David Kaye, in an earlier joint declaration on CVE, notes the “fail[ure] to provide definitions for key terms, such as ‘extremism’ or ‘radicalization’. In the absence of a clear definition, these terms can be used to restrict a wide range of lawful expression.” Even where expression is not formally restricted, such programs inflict collateral damage: pervasive surveillance and tracking trigger the self-censor in all of us, reducing civic participation and dialogue.

How do we begin to tackle the larger challenges, those beyond simple technological fixes or self-blame? There are no easy solutions for the economic and social inequities that create divisions, and the technological and economic incentives that underpin our current information ecosystem are deeply entrenched. Yet we need to find a way to start serious conversations about these systemic challenges, rather than tinkering with their effects or simply assigning responsibility to the newest players on the field.

Sir Tim Berners-Lee, the inventor of the World Wide Web, has urged that we reform the systems and business models we have created to fund our online lives. He points, for example, to companies’ use of personal data as a driver of surveillance societies, which chills free expression. He suggests seeking alternatives to the concentration of attention and power in the hands of a small number of social media companies that derive profit from showing us content that is “surprising, shocking, or designed to appeal to our biases.” He is concerned by the use of these same tactics in political advertising, and by their effect on our systems of electoral politics.

Confronting our social and economic inequities is even harder. It is the challenge of our time to find the language to conduct honest and frank debate about how we construct our economies and our states, how we apportion benefits, and which values guide us. Building civic communities that are rooted in trust, both online and off, is the ongoing and vital work necessary for public conversations about our collective future.

It is no small irony that the communications systems that we built to support such debate are imperiled, both by those who would explode the social norms of civic discourse for their ideological ends, and through resultant attempts to control extreme or misleading expression. It is easy to find fault with the technologies that facilitate our collective civic life. It is much more difficult to look at our civic life as a whole and determine whether and how it may be failing.
