How analysis dies
https://benn.substack.com/p/how-analysis-dies
When we read some argument or piece of analytical work—be it a proposal for a business strategy, a pitch for an investment, or an op-ed making a political point—we typically fancy ourselves as rational thinkers and impartial jurors. But we aren’t good at being either. As Randy Au called out in his most recent piece, sometimes “there are different hypotheses that all fit the data to a similar extent and there’s no way to tell which is the valid one.” Even trained analysts struggle to find the capital-T truth, and instead “fall into the trap of finding some initial findings that confirms a story,” with “no guarantee that any of the stories we weave out of data have any actual truthful basis!”
There is, however, one useful protection against our constantly being fooled by questionable reasoning: It’s a lot harder to come up with decent arguments for something that’s wrong than for something that’s right. Yes, we can torture data to make it say anything—but making it confess a lie takes a lot more effort than getting it to tell the truth. Reality, insofar as there is such a thing in a dataset, is more readily available. We are protected from misleading conclusions and wild conspiracy theories not because we’re smart enough to see through them, but because misleading conclusions and wild conspiracy theories are hard to create.
This difficulty—that, to make any kind of argument, we have to connect a bunch of dots in a seemingly reasonable way—gives most analysis a kind of foundational legitimacy: Someone had to figure it out. Someone had to weave a story that other people’s logical calculators, underpowered and imprecise as they are, would accept.
In the 1970s, researchers at Stanford found that the mere presence of an argument supporting a particular conclusion made people more likely to believe that conclusion was true, even when they were explicitly told the argument was made up. This suggests that, even in the absolute best-case scenarios—when we know some analysis was manufactured by a chatbot, with potentially no basis in fact or reason—we’ll struggle to evaluate the legitimacy of the story.
That’s bad enough on its own. How, then, can we possibly protect ourselves when the deception is less apparent? If we can’t unsee patterns that we know are illusory, what hope do we have of evaluating the “seemingly-feasible” arguments that can now be generated for anything (both ones we ask of ChatGPT, and ones that others create)? Do we have any chance of making sense of that funhouse of mirrors?
As Sarah Guo suggests, I’m not sure we’re ready for this.