Combatting Political Disinformation
On January 6th approximately 10,000 Americans stormed the U.S. Capitol in an effort to prevent what they believed would be the final step in the stealing of the Presidential election from its rightful winner, Donald J. Trump. Still, they were just a small fraction of the Americans who embraced the view that Joe Biden had won the election by fraudulent means. Polling data at the time revealed that roughly 75% of the nation’s over 35 million registered Republicans believed this to have been the case. This came after election officials from each of the five hotly contested states, many of whom were Republicans, had certified that Biden had won the election. In addition, U.S. Attorney General William Barr had already conceded that the Justice Department’s investigation had failed to turn up any evidence of fraud that could have altered the outcome of the election. That same conclusion had also been voiced by the officials within the U.S. Department of Homeland Security charged with securing the nation’s elections. On top of this, over sixty lawsuits asserting that the election had been stolen from Trump had been dismissed.
It is astounding that so many people could continue to harbor such an erroneous belief in the face of overwhelming independent findings. It wasn’t simply a matter of their accepting this falsehood. Thousands of Trump supporters were so taken in by it that they were willing to become involved in a clearly unlawful insurrection. They were victims of two powerful forces: a concerted disinformation campaign orchestrated by Donald Trump for his own personal benefit and their own willingness to believe that the nation’s electoral system had been corrupted. Still, truthful information about the outcome of the election was readily available, defying the conventional wisdom that truthful information will always overcome false information.
The problem is that the entire nation has become highly polarized along political lines, with those on the left getting their news only from liberal news organizations and websites and those on the right getting theirs only from conservative ones. The result is that few Americans receive a balance of reporting, so there is little opportunity for true information to drive out disinformation. Two factors reinforce this division. First, most individuals come to trust the information they receive from their favorite news sources and distrust the information disseminated by news sources catering to the other side of the political spectrum. This tendency is amplified by news organizations advertising themselves as “trusted” or “reliable” and by politicians characterizing news organizations that criticize them or their policies as “fake news.” The second factor is that social media sites have become so technologically sophisticated that they can target political messages to the very persons whose views their advertisers are trying to influence.
Political disinformation campaigns pose three distinct dangers to any democracy. The first is that they allow politicians to avoid addressing important and controversial issues. As I pointed out last spring in an open letter to Joe Biden, the world has become so complex that voters tend to be overwhelmed and make their voting decisions on the basis of their assessments of the candidates’ personas rather than their policies. Disinformation campaigns are particularly effective because they can be targeted to undermine the character of an opposing candidate and to excite fears and hatreds among one’s own supporters. In this way, candidates running for public office never have to actually inform voters as to how they intend to conduct the affairs of the government after they are elected.
The second danger is that disinformation campaigns can be employed to delegitimize the election of an opposition candidate, thereby impeding his or her ability to govern. This is what Donald Trump did to Barack Obama in his campaign to convince voters that Obama was not born in the U.S. This made it easier for Republican politicians to justify their efforts to obstruct Obama’s presidency. You will also recall that it is what President Trump accused the House Democrats of doing via their investigation of his campaign’s association with the Russian government’s efforts to interfere in the 2016 election. Now Trump and his supporters are trying to do the same thing to President Biden with their claims that he won the election by fraudulent means.
A third danger posed by disinformation campaigns is that they may incite dangerous activity, whether intentionally or unintentionally. The obvious example is Trump’s “Stop the Steal” disinformation campaign, which led to the storming of the U.S. Capitol on January 6th, causing the deaths of five individuals, untold property damage, and millions of dollars in unnecessary security costs.
Political disinformation campaigns are not new in America. They can be found as far back as 1800 when John Adams ran against Thomas Jefferson for the presidency. However, they have become more problematic in recent years for a number of reasons. Advances in information technology enable false information to be disseminated to broad segments of the population. Disinformation campaigns can also now be easily conducted from afar by foreign governments as was the case in the 2016 and 2020 elections. Lastly, and perhaps most importantly, we now have a political party that has spent the past thirty years establishing and organizing a media chorus that currently serves as an echo chamber for disinformation, rendering such campaigns highly effective.
This raises a fundamental question: Can a democracy function under circumstances in which falsehoods that can alter the outcome of elections persist among large portions of the electorate? This past year we came very close to finding that the answer to this question could be “No.” Still, there is the converse consideration: Can a democracy continue if freedom of speech is largely inhibited? Sadly, totalitarian governments around the world have proven time and again that the answer to this question is a definite “No.” This latter concern is what impelled the drafters of the Constitution to add the First Amendment guaranteeing freedom of speech.
Countless Supreme Court decisions have upheld the principle of freedom of speech in a wide variety of circumstances. One of the very few circumstances in which governments in this country are permitted to impair freedom of speech is when speech “is likely to incite imminent lawless action.” (Brandenburg v. Ohio). Even this exception is narrowly confined which means that any laws or regulations that impair free speech, even maliciously false speech, are not likely to survive judicial scrutiny. Accordingly, the antidote for political disinformation cannot take the form of a law or regulation prohibiting such communications. Disinformation can only be addressed retrospectively.
Civil legal actions seeking compensation for damages resulting from libelous and slanderous statements are a product of British Common Law and have long been a feature in American jurisprudence. They are not deemed to be a violation of the First Amendment and are tolerated because they do not seek to prevent someone from expressing a point of view. Such actions only tend to hold the declarer responsible for the substance of his or her previously voiced remarks.
Newspapers and broadcast radio and television are generally careful not to publish defamatory statements because they can face civil liability, as well as sanctions by the FCC, for having done so. With respect to defamatory statements made about public figures, such as politicians, the Supreme Court held in New York Times Co. v. Sullivan that such civil suits may succeed only if the defamatory statement was made with “actual malice,” a very high bar for a successful damage action. There is currently pending a $1.6 billion civil damage action against Fox News and individuals appearing on that network for their assertions that voting machines manufactured by Dominion Voting Systems altered the results of the 2020 presidential election. Prohibitions against impairing freedom of speech should not prevent this or Dominion’s three other similar claims from proceeding. Laws and regulations facilitating such retrospective claims can thus provide a means for at least discouraging the dissemination of political disinformation.
In the last election, both Facebook and Twitter began labeling some false statements posted on their platforms as being of doubtful validity. Twitter even went so far as to ban Donald Trump from further use of his Twitter account in reaction to the events of January 6th. Similarly, the right-wing website Parler was temporarily removed from Apple’s and Google’s app stores for having incited violence in connection with the January 6th insurrection. These actions were taken voluntarily and not for the purpose of complying with any laws or governmental regulations. While news organizations and social media sites have the capability of identifying a high percentage of political disinformation, performing that function would entail a significant burden on them, one they would not happily undertake, particularly if it meant losing advertising revenues. Moreover, they would likely seek to avoid adverse consequences for failing to carry out that function by arguing that the steps they had undertaken (but which failed to identify the offending disinformation) were reasonable under the circumstances.
A more acceptable system would place the burden of identifying disinformation on the affected parties, empowering them to notify the news organizations or social media sites on which the disinformation appeared. In so doing, they would be required to submit the correct information along with authoritative statements of the facts. The entity or entities publishing the disinformation would then have 24 hours in which to publish the corrected information. In the case of broadcast news organizations, the correction would have to be published on the same program at the same time of day as the original disinformation was published. If the disinformation were published multiple times, the remedy would have to be repeated the same number of times. In the case of disinformation appearing on a social media site, the notice containing the correct information would have to be similarly posted for an equal duration. In this way, corrected information can be assured of reaching those impacted by disinformation campaigns. Requiring the person or entity originating the disinformation to pay for such additional publications might also discourage them from such activity.
The failure to comply with this requirement could result in one or more civil damages claims against the news organization or social media site, brought by the Department of Justice and/or the person(s) adversely affected by the disinformation. It’s important that adversely affected individuals have standing to assert such actions, as the Department of Justice might be reluctant to pursue claims if the disinformation was disseminated for the benefit of the party in control of the government.
The amount of such claims would be established by statute and would be directly proportional to the length of the news organization’s delay in taking remedial action and the number of persons receiving the disinformation. Because the magnitude of the potential claim(s) in large measure would be controlled by the number of people receiving the disinformation, it would place an added incentive on news organizations and websites to be particularly careful in policing broadcasts and social media postings directed to large numbers of people. It might also be expected that news organizations and social media sites would become more wary of the people and organizations seeking to utilize their publication services if one or more of their messages had been the focus of prior demands for corrective action. Consequently, persons or organizations whose disinformation had given rise to prior requests for corrective publications might encounter difficulty securing future outlets for disseminating their messages.
For the most part, this remedial action would not be effective with respect to false information published by the government unless individuals outside the government had access to the actual facts. For example, it would probably not be effective against an administration’s publication of erroneous employment statistics in an effort to enhance its chances of re-election. Nevertheless, it should serve as a powerful tool in combatting disinformation relating to an opposing political candidate or party.
This antidote to disinformation campaigns would appear to merit the support of Democrats, who are frequently the targets of disinformation campaigns. As for the Republicans, President Trump has railed against “fake news” for the past six years and has voiced support for the government’s regulation of social media websites. Therefore, even though he has also been a significant beneficiary of disinformation campaigns, some of which he has personally instigated, it might at the very least be embarrassing for him to speak out against this form of regulation. Former President George W. Bush has also recently spoken out against disinformation campaigns, raising the hope that moderate Republicans would support this form of regulation as well.
The most likely source of opposition would be the media organizations, even though the proposed regulatory scheme would relieve them of the burden of having to identify political disinformation being disseminated through their facilities. That’s because they would frequently be caught in the middle of disputes between political opponents, requiring them to devote substantial time and resources to resolving those disputes. This concern could be alleviated in two ways. First, the party or entity that had generated the alleged disinformation would not have standing to participate in any legal proceeding seeking to compel the dissemination of corrected information. Nor would that party have a legal cause of action (A) against the media organization for having published the corrections sought by the government or the notifying party or parties or (B) against the complaining party(ies) for having demanded the publication of corrected information. Second, should the matter be submitted to a court for resolution, the unsuccessful litigant(s) would be required to reimburse the legal expenses of the prevailing party(ies). This regulatory framework would likely create a bias in favor of the media organizations’ simply agreeing to publish the requested corrections. That, in turn, might discourage some revenue-producing advertising from both sides of the political spectrum, but such losses might be partially offset by requiring the sources of disinformation to pay for the corrected messaging.
It should also be anticipated that the courts will not welcome the task of resolving disputes involving political messages and will look for ways to limit their roles in doing so. For example, they might impose limitations by defining who is an aggrieved party entitled to the protection afforded by this regulatory scheme, and they might establish guidelines for determining when a statement is sufficiently misleading to require a correction. Such limitations are likely to be necessary in order to prevent the regulatory process from becoming bogged down with pettiness and to keep it a functional deterrent to disinformation.