The Unbearable Weight of Defining Disinformation and Misinformation on the Internet


The past few weeks have been pivotal for the regulation of internet disinformation and misinformation (D&M): Elon Musk agreed to buy Twitter largely to change its approach to D&M; the US government announced – and then suspended – a disinformation governance board to oversee certain D&M; the European Union completed landmark new Internet laws, some of which regulate D&M; and former President Obama changed his longstanding hands-off approach and called for government regulation of D&M.

No exchange better illustrates the difficulty of defining D&M than the recent one between President Biden and Jeff Bezos, founder of Amazon and owner of the Washington Post. Following Biden’s tweet “Want to lower inflation? Let’s make sure the wealthiest companies pay their fair share,” Bezos replied: “The new misinformation board should review this tweet, or maybe they should form a new non sequitur board instead.” A global industry and regulatory framework is emerging to address D&M on the Internet, but just how difficult is the task?

Two conditions of D&M regulation are important to note. The first is that most advocacy for D&M regulation relates only to very large platforms, generally defined as those with several million subscribers, leaving smaller platforms less regulated; this is the approach taken by the European Union, many countries and several US states. The second is that D&M regulation would sit alongside a series of pre-existing Internet content regulations covering areas that have been regulated or prohibited both on and off the Internet for centuries – including copyright infringement, child pornography, false advertising, defamation, threats of imminent harm, obscenity, sedition, etc. These areas have a long history of national definition, refinement, legislation and litigation.

The majority of content moderation that occurs on Internet platforms today involves these existing forms of illegal or regulated content, and definitions tend to be similar across nations.

Regulating or banning D&M breaks new ground by moving into previously less defined categories such as politics, health, science, etc. – and trying to do it on a global scale.

When looking at something this complex, it’s often best to start at the beginning – and the beginning is July 3, 1995, the day everyone in the checkout line at any American supermarket came across a striking Time magazine cover showing a young boy behind a computer keyboard, obviously in shock as he stared at the screen, under a huge headline screaming “CYBERPORN”.

An explosion of political concern over content on this new medium called the Internet followed, leading to groundbreaking Internet content laws, rules and regulations, the most significant of which insulated Internet platforms from liability for content created by others and allowed platforms to edit content in any way they wished, with virtually no oversight or accountability.

As I explained in a previous article, almost all of this initial attention to Internet content was about cyberpornography, and it firmly established the government’s right to oversee Internet content. Previously, the government’s role in managing the content of message boards and chat rooms was much less clear.

Twenty-seven years later, few people talk about regulating cyberporn: the focus is almost entirely on D&M — but those initial cyberporn laws laid the groundwork for government regulation of D&M, and they lead to some of the same difficult questions.

Specifically: if governments or platforms ban D&M, then they need to define with some accuracy what is – and isn’t – D&M, just as governments have tried to define obscene porn over the last century. Precisely defining D&M today is much more complicated than defining obscenity in the 1900s, because major Internet platforms serve a myriad of different nations, societies, religions, jurisdictions, languages, etc. Accordingly, it is tempting to simply revert to Justice Potter Stewart’s 1964 definition of obscene pornography – “I know it when I see it” – and to rely on “fact checkers” instead of judges to call out D&M “when they see it”.

It is not surprising that there is no universally accepted definition of “disinformation” or “misinformation”, although many definitions of disinformation center on the concept of “deliberately false” and of misinformation on “misleading”. Webster defines disinformation as “false information deliberately and often covertly spread (as by the planting of rumors) in order to influence public opinion or obscure the truth” and misinformation as “incorrect or misleading information”. Sometimes establishing truth or falsity is simple, but we all know that often it is not. My fourth-grade teacher explained this to us by showing us a partially filled glass and asking whether it was “half full” or “half empty” … we immediately split into the respective camps. In seventh grade, we learned in debate club that advocates emphasize the true facts that support their position and discredit the true facts that don’t.

More eloquently, President Obama explained that “Any rules we propose to govern the distribution of content on the Internet will involve value judgments. None of us are perfectly objective. What we consider unshakable truth today may turn out to be totally false tomorrow. But that doesn’t mean that some things aren’t truer than others or that we can’t draw lines between opinion, fact, honest error, intentional deception.” As Obama noted, what is considered true or false can change over time. As evidence of this evolution of D&M “truth” on the Internet, Evelyn Douek of the Knight First Amendment Institute recently described in Wired magazine how several D&M classifications have subsequently been revised or even reversed.

Be that as it may, dozens of governments have criminalized or regulated D&M on the internet and made major platforms liable for illegal postings of D&M by third parties. According to the Poynter Institute, posting “false information” on internet platforms is a crime in many countries and more are on the way. In these situations, governments – through their courts or bureaucracies – will decide what is and is not disinformation or misinformation. At the same time, the public is increasingly asking Internet platform company executives to more actively regulate or prohibit D&M outside of (or in conflict with?) government regulation.

Governments or business leaders face a very difficult task.

NOTE: This post has been updated from the original to correct the Time magazine cover date referenced in the sixth paragraph.

Roger Cochetti provides consultancy and advisory services in Washington, D.C. He was a senior executive at Communications Satellite Corporation (COMSAT) from 1981 to 1994. He also led Internet public policy for IBM from 1994 to 2000 and later served as Senior Vice President and Chief Policy Officer for VeriSign and Group Policy Director for CompTIA. He served on the State Department’s Advisory Committee on International Communications and Information Policy during the Bush and Obama administrations, testified extensively on Internet policy issues, and served on advisory committees to the FTC and various United Nations agencies. He is the author of the Mobile Satellite Communications Handbook.
