The growing popularity of conspiracy theories and the hype around artificial intelligence converge on a key similarity: magical thinking.
You’ll often see magical thinking invoked in anthropology, where it’s applied to practices like prayer and ritual, or in psychology, where it’s coded as the belief that one’s thoughts alone can be made manifest in the world (à la The Secret). But a broader definition of magical thinking might simply call it a blurring of cause and effect, intentional or not, a blurring that often grants agency to something or someone that doesn’t deserve it.
Arthur C. Clarke’s Third Law states that “any sufficiently advanced technology is indistinguishable from magic.” Computers, machine learning, and AI are charting a course that is becoming harder and harder for their human builders to understand, analyze, and interpret. This is as much a problem for distinguishing cat pictures from dog pictures as it is for determining prison sentences in real life.
AI has a black box problem: we can’t peer into the decision-making logic of the algorithm, so how do we interpret its findings? Magically. Well, not magically per se, but we still tend to give the algorithm the benefit of the doubt, and the black box nature of these systems almost seems to lend them even more credibility in the eyes of the lay person reading the interpretation. This has in fact been demonstrated: we are crossing a threshold of trusting algorithms over human judgment, a phenomenon referred to as algorithm appreciation. The algorithm takes in the relevant data, far more than I can understand or know what to do with, so the results must be sophisticated, accurate, and fair…so the thinking goes. We assume an intelligence behind the decision.
This is where we see the similarity with conspiracy thinking: the belief that a group or groups of people are controlling things behind the scenes, that events we interpret as random or coincidental simply can’t be so, and that there must be an intelligence behind them, a coordination, even if for nefarious purposes.
The problem with magical thinking in AI is that if you spend five minutes talking to anyone who develops algorithms or builds machine learning models, you’ll realize how little these folks actually trust the output, let alone its interpretability. Engineers and data scientists are deeply skeptical about the infallibility of their models. Unfortunately, that skepticism doesn’t always get communicated to, or picked up by, the policymaker, the CEO, or the sales team.
Wishful thinking becomes magical thinking.
We are increasingly surrounded by algorithms that recommend what we listen to and watch, and that determine which news and friend feeds we are exposed to. The algorithms that manage and guide our lives are only going to grow and grow. I hope, for our sakes, that we are skeptical enough to demand answers from the black boxes, brave enough to expose the Wizards of Oz who would have us believe they are magical, and resolute enough not to choose shortcuts of convenience over the long, hard path of teasing out the messy, disorganized data befitting a chaotic and random world filled with complicated and complex cause and effect.