Most people can't discern fact from fiction
If most people are not intelligent, what hope is there for them to learn to discern fact from fiction?
ChatGPT helped formulate some of the points in this post. Some of the text below was taken from a ChatGPT session; however, it has been heavily edited and revised by me. The majority of the words you read below were written by me. Consider this writing a synthesis of man and machine.
Gurwinder wrote a provocative post recently, The Cure to Misinformation is More Misinformation. The thrust of his piece is that efforts to combat AI-generated misinformation are bound to fail, so we should instead focus on providing people with the tools to discern fact from fiction.
In our new age of infinite misinformation, the counter-misinformation complex’s methods are obsolete, and its goal of a world where people can trust what they see is hopelessly naive. Its attempts to censor the misinformation pandemic will achieve nothing but momentarily quarantine a few online platforms from the inevitable, lulling their users into a false sense of trust, delaying the development of their doubt, and ultimately making them more vulnerable to bullshit.
We should therefore strive to do the opposite—to let misinformation spread so it becomes a clear and constant presence in everyone’s life, a perpetual reminder of the fundamental dishonesty of the world. Deceit is part of nature, from the chameleon’s complexion to the Instagram model’s beauty filters, and it will never be legislated away while life still exists, so let’s stop trying to prevent people from seeing lies, and instead teach people to see through them.
This sounds fine in principle, and I agree with his conclusion that efforts to combat AI-generated misinformation are doomed to fail. This observation is especially pertinent given the near-infinitely scalable content-generating abilities of AI tools. In his words, a tsunami of bullshit awaits: it is futile to legislate against it.
Nonetheless, I take issue with a number of points made in the post.
The post argues that the threat posed by synthetic media, such as deepfakes, to reality itself is one of the most pressing concerns in the AI age. It highlights various instances where synthetic media has been used for deception and manipulation, from fake photographs winning contests to the spread of deepfakes for political purposes. The author contends that regulation of misinformation, advocated by the “counter-misinformation complex” of government agencies, tech giants, think tanks, and fact-checking organizations, is not an effective solution.
He further argues that censorship and regulation will only exacerbate the problem and make it harder to discern truth from falsehood. He proposes an alternative approach inspired by the concepts of vaccination and hormesis. Rather than trying to eliminate misinformation, he suggests exposing people to controlled doses of it to fortify their ability to recognize and resist it: turn the internet into a game-like environment where people are constantly exposed to misinformation, in order to harden their defenses.
But there are at least four refutations of his argument that bear consideration:
While the post raises valid concerns about the challenges posed by synthetic media and the limitations of regulation, it downplays the harm and risk that accompany the spread of misinformation. Misinformation can have serious consequences, including the erosion of trust, the amplification of divisive narratives, and the manipulation of public opinion, which can impact democratic processes and threaten societal stability.
He argues that there is no evidence that misinformation leads to a significant increase in conspiracy theories or real-world harms. However, we need only look at Fox News to see that this isn’t true1. Innumerable events over the past several years have shown the detrimental effects of misinformation on individuals and society, including its role in fueling violence, fostering polarization, and undermining public health efforts.
The proposed approach of exposing people to misinformation as a form of inoculation assumes that individuals will develop the critical thinking skills to discern truth from falsehood. It further assumes that one’s mind works the way one’s immune system does. While this analogy sounds neat, it’s a facile one. People are susceptible to cognitive biases and may struggle to differentiate between accurate and inaccurate information, particularly when the information confirms their pre-existing beliefs. We may use the biological language of viral infection to understand mimesis: “virality,” “viral spread,” and so on. These phrases, and others like them, have entered the lexicon as metaphorical shorthand for the notion that certain ideas take hold in people’s minds much the way a virus or bacterium takes hold in the body. But, barring immunocompromised individuals, the immune system functions similarly from person to person. Cognitive biases, and the ability to critically engage with them, do not. Understanding cognitive biases, and, more importantly, being able to think about thinking, is a heavy cognitive load. Not all people are able to carry it.
Finally, while the idea of using synthetic media to warn people about synthetic media is intriguing, it may not be a foolproof solution. Misinformation and deepfakes can be highly persuasive and difficult to detect, even for individuals who are aware of their existence. Relying solely on synthetic media to counteract the effects of misinformation may create a cat-and-mouse game in which people are constantly deceived and must constantly adapt to new forms of deception. Again, the analogy to biology is tempting: to immunize you against the flu, you are injected with inactivated virus particles so that your immune system develops the antibodies required to fight the real infection. And, again: it is not clear that the mind works as the immune system does.
Gurwinder makes a convincing argument for the futility of trying to rein in misinformation. He’s correct that no legislature, government body, or industry association can rid the world of AI-generated bullshit. One can philosophize about teaching people how to discern truth from falsehood, but this exercise in philosophical musing butts up against the hard reality that many (most?) people simply don’t have the desire or capacity to reason about their own biases.
Yes, your observation that the other side is equally capable of concocting misinformation is correct, and duly noted.