But the objective world is different. Here, we traffic in literal facts—but the permanence of those facts matters less than the means by which they are generated. What follows is an imperfect example, but it’s one of the few scientific realms that I (and many people like me) happen to have an inordinate amount of knowledge about: the Age of Dinosaurs.

  In 1981, when I was reading every dinosaur book I could locate, the standard belief was that dinosaurs were cold-blooded lizards, with the marginalized caveat that “some scientists” were starting to believe they may have been more like warm-blooded birds. There were lots of reasons for this alternative theory, most notably the amount of time in the sun required to heat the blood of a sixty-ton sauropod and the limitations of a reptilian three-chambered heart. But I rejected these alternatives. When I was nine, people who thought dinosaurs were warm-blooded actively made me angry. By the time I hit the age of nineteen, however, this line of thinking had become accepted by everyone, myself included. Dinosaurs were warm-blooded, and I didn’t care that I’d once thought otherwise. Such intellectual reinventions are just part of being interested in a group of animals that were already extinct ten million years before the formation of the Himalayan mountains. You naturally grow to accept that you can’t really know certain things everyone considers absolute, since these are very hard things to know for sure. For almost one hundred years, one of the earmarks of a truly dino-obsessed kid was his or her realization that there actually wasn’t such a thing as a brontosaurus—that beast was a fiction, based on a museum’s nineteenth-century mistake. The creature uninformed dilettantes referred to as a “brontosaurus” was technically an “apatosaurus” . . . until the year 2015. In 2015, a paleontologist in Colorado declared that there really was a species of dinosaur that should rightfully be classified as a brontosaurus, and that applying that name to the long-necked animal we imagine is totally acceptable, and that all the dolts who had used the wrong term out of ignorance for all those years had been correct the whole time. What was (once) always true was (suddenly) never true and then (suddenly) accidentally true.

  Yet these kinds of continual reversals don’t impact the way we think about paleontology. Such a reversal doesn’t impact the way we think about anything, outside of the specialized new data that replaced the specialized old data. If any scientific concept changes five times in five decades, the perception is that we’re simply refining what we thought we knew before, and every iteration is just a “more correct” depiction of what was previously considered “totally correct.” In essence, we anchor our sense of objective reality in science itself—its laws and methods and sagacity. If certain ancillary details turn out to be specifically wrong, it just means the science got better.

  But what if we’re really wrong, about something really big?

  I’m not talking about things like the relative blood temperature of a stegosaurus or whether Pluto can be accurately classified as a planet, or even the nature of motion and inertia. What I’m talking about is the possibility that we think we’re playing checkers when we’re really playing chess. Or maybe even that metaphor is too conservative for what I’m trying to imagine—maybe we think we’re playing checkers, but we’re actually playing goddamn Scrabble. Every day, our understanding of the universe incrementally increases. New questions are getting answered. But are these the right questions? Is it possible that we are mechanically improving our comprehension of principles that are all components of a much larger illusion, in the same way certain eighteenth-century Swedes believed they had finally figured out how elves and trolls caused illness? Will our current understanding of how space and time function eventually seem as absurd as Aristotle’s assertion that a brick doesn’t float because the ground is the “natural” place a brick wants to be?

  No. (Or so I am told.)

  “The only examples you can give of complete shifts in widely accepted beliefs—beliefs being completely thrown out the window—are from before 1600,” says superstar astrophysicist Neil deGrasse Tyson. We are sitting in his office in the upper deck of the American Museum of Natural History. He seems mildly annoyed by my questions. “You mentioned Aristotle, for example. You could also mention Copernicus and the Copernican Revolution. That’s all before 1600. What was different from 1600 onward was how science got conducted. Science gets conducted by experiment. There is no truth that does not exist without experimental verification of that truth. And not only one person’s experiment, but an ensemble of experiments testing the same idea. And only when an ensemble of experiments statistically agree do we then talk about an emerging truth within science. And that emerging truth does not change, because it was verified. Previous to 1600—before Galileo figured out that experiments matter—Aristotle had no clue about experiments, so I guess we can’t blame him. Though he was so influential and so authoritative, one might say some damage was done, because of how much confidence people placed in his writing and how smart he was and how deeply he thought about the world . . . I will add that in 1603 the microscope was invented, and in 1609 the telescope was invented. So these things gave us tools to replace our own senses, because our own senses are quite feeble when it comes to recording objective reality. So it’s not like this is a policy. This is, ‘Holy shit, this really works. I can establish an objective truth that’s not a function of my state of mind, and you can do a different experiment and come up with the same result.’ Thus was born the modern era of science.”

  This is all accurate, and I would never directly contradict anything Neil deGrasse Tyson says, because—compared to Neil deGrasse Tyson—my skull is a bag of hammers. I’m the functional equivalent of an idiot. But maybe it takes an idiot to pose this non-idiotic question: How do we know we’re not currently living in our own version of the year 1599?

  According to Tyson, we have not reinvented our understanding of scientific reality since the seventeenth century. Our beliefs have been relatively secure for roughly four hundred years. That’s a long time—except in the context of science. In science, four hundred years is a grain in the hourglass. Aristotle’s ideas about gravity were accepted for more than twice that long. Granted, we’re now in an era where repeatable math can confirm theoretical ideas, and that numeric confirmation creates a sense that—this time—what we believe to be true is not going to change. We will learn much more in the coming years, but mostly as an extension of what we already know now. Because—this time—what we know is actually right.

  Of course, we are not the first society to reach this conclusion.

  [2]If I spoke to one hundred scientists about the topic of scientific wrongness, I suspect I’d get one hundred slightly different answers, all of which would represent different notches on a continuum of confidence. And if this were a book about science, that’s what I’d need to do. But this is not a book about science; this is a book about continuums. Instead, I interviewed two exceptionally famous scientists who exist (or at least appear to exist) on opposite ends of a specific psychological spectrum. One of these was Tyson, the most conventionally famous astrophysicist alive. He hosted the Fox reboot of the science series Cosmos and created his own talk show on the National Geographic Channel. The other was string theorist Brian Greene at Columbia University (Greene is the person mentioned in this book’s introduction, speculating on the possibility that “there is a very, very good chance that our understanding of gravity will not be the same in five hundred years”).

  Talking to only these two men, I must concede, is a little like writing about debatable ideas in pop music and interviewing only Taylor Swift and Beyoncé Knowles. Tyson and Greene are unlike the overwhelming majority of working scientists. They specialize in translating ultra-difficult concepts into a language that can be understood by mainstream consumers; both have written bestselling books for general audiences, and I assume they both experience a level of envy and skepticism among their professional peers. That’s what happens to any professional the moment he or she appears on TV. Still, their academic credentials cannot be questioned. Moreover, they represent the competing poles of this argument almost perfectly. Which might have been a product of how they chose to hear the questions.

  When I sat down in Greene’s office and explained the premise of my book—in essence, when I explained that I was interested in considering the likelihood that our most entrenched assumptions about the universe might be wrong—he viewed the premise as playful. His unspoken reaction came across as “This is a fun, non-crazy hypothetical.” Tyson’s posture was different. His unspoken attitude was closer to “This is a problematic, silly supposition.” But here again, other factors might have played a role: As a public intellectual, Tyson spends a great deal of his time representing the scientific community in the debate over climate change. In certain circles, he has become the face of science. It’s entirely possible Tyson assumed my questions were veiled attempts at debunking scientific thought, prompting him to take an inflexibly hard-line stance. (It’s also possible this is just the stance he always takes with everyone.) Conversely, Greene’s openness might be a reflection of his own academic experience: His career is punctuated by research trafficking in the far edges of human knowledge, which means he’s accustomed to people questioning the validity of ideas that propose a radical reconsideration of everything we think we know.

  One of Greene’s high-profile signatures is his support for the concept of “the multiverse.” Now, what follows will be an oversimplification—but here’s what that connotes: Generally, we work from the assumption that there is one universe, and that our galaxy is a component of this one singular universe that emerged from the Big Bang. But the multiverse notion suggests there are infinite (or at least numerous) universes beyond our own, existing as alternative realities. Imagine an endless roll of bubble wrap; our universe (and everything in it) would be one tiny bubble, and all the other bubbles would be other universes that are equally vast. In his book The Hidden Reality, Greene maps out nine types of parallel universes within this hypothetical system. It’s a complicated way to think about space, not to mention an inherently impossible thing to prove; we can’t get (or see) outside our own universe any more than a man can get (or see) outside his own body. And while the basic concept of a limited multiverse might not seem particularly insane, the logical extensions of what a limitless multiverse would entail are almost impossible to fathom.

  Here’s what I mean: Let’s say there are infinite universes that exist over the expanse of infinite time (and the key word here is “infinite”). Within infinity, everything that could happen will happen. Everything. Which would mean that—somewhere, in an alternative universe—there is a planet exactly like Earth, which has existed for the exact same amount of time, and where every single event has happened exactly as it has on the Earth that we know as our own . . . except that on Christmas Eve of 1962, John F. Kennedy dropped a pen. And there is still another alternative universe with a planet exactly like Earth, surrounded by an exact replica of our moon, with all the same cities and all the same people, except that—in this reality—you read this sentence yesterday instead of today. And there is still another alternative universe where everything is the same, except you are slightly taller. And there is still another alternative universe beyond that one where everything is the same, except you don’t exist. And there is still another alternative reality beyond that where a version of Earth exists, but it’s ruled by robotic wolves with a hunger for liquid cobalt. And so on and so on and so on. In an infinite multiverse, everything we have the potential to imagine—as well as everything we can’t imagine—would exist autonomously. It would require a total recalibration of every spiritual and secular belief that ever was. Which is why it’s not surprising that many people don’t dig a transformative hypothesis that even its proponents concede is impossible to verify.

  “There really are some highly decorated physicists who have gotten angry with me, and with people like me, who have spoken about the multiverse theory,” Greene says. “They will tell me, ‘You’ve done some real damage. This is nuts. Stop it.’ And I’m a completely rational person. I don’t speak in hyperbole to get attention. My true feeling is that these multiverse ideas could be right. Now, why do I feel that way? I look at the mathematics. The mathematics lead in this direction. I also consider the history of ideas. If you described quantum physics to Newton, he would have thought you were insane. Maybe if you give Newton a quantum textbook and five minutes, he sees it completely. But as an idea, it would seem insane. So I guess my thinking is this: I think it’s extraordinarily unlikely that the multiverse theory is correct. I think it’s extraordinarily likely that my colleagues who say the multiverse concept is crazy are right. But I’m not willing to say the multiverse idea is wrong, because there is no basis for that statement. I understand the discomfort with the idea, but I nevertheless allow it as a real possibility. Because it is a real possibility.”

  Greene delivered a TED talk about the multiverse in 2012, a twenty-two-minute lecture translated into more than thirty languages and watched by 2.5 million people. It is, for all practical purposes, the best place to start if you want to learn what the multiverse would be like. Greene has his critics, but the concept is taken seriously by most people who understand it (including Tyson, who has said, “We have excellent theoretical and philosophical reasons to think we live in a multiverse”). He is the recognized expert on this subject. Yet he’s still incredulous about his own ideas, as illustrated by the following exchange:

  Q: What is your level of confidence that—in three hundred years—someone will reexamine your TED talk and do a close reading of the information, and conclude you were almost entirely correct?

  A: Tiny. Less than one percent. And you know, if I was really being careful, I wouldn’t have even given that percentage a specific number, because a number requires data. But take that as my loose response. And the reason my loose response is one percent just comes from looking at the history of ideas and recognizing that every age thinks they were making real headway toward the ultimate answer, and every next generation comes along and says, “You were really insightful, but now that we know X, Y, and Z, here is what we actually think.” So, humility drives me to anticipate that we will look like people from the age of Aristotle who believed stones fell to earth because stones wanted to be on the ground.

  Still, as Greene continues to explain the nature of his skepticism, a concentration of optimism slowly seeps back in.

  In the recesses of my mind, where I would not want to be out in public—even though I realize you’re recording this, and this is a public conversation—I do hold out hope that in one hundred or five hundred years, people will look back on our current work and say, “Wow.” But I love to be conservative in my estimates. Still, I sometimes think I’m being too conservative, and that makes me excited. Because look at quantum mechanics. In quantum mechanics, you can do a calculation and predict esoteric properties of electrons. And you can do the calculation—and people have done these calculations, heroically, over the span of decades—and compare [those calculations] to actual experiments, and the numbers agree. They agree up to the tenth digit beyond the decimal point. That is unprecedented—that we can have a theory that agrees with observation to that degree. That makes you feel like “This is different.” It makes you feel like you’re closing in on truth.

  So here is the hinge point where skepticism starts to reverse itself. Are we the first society to conclude that this time we’re finally right about how the universe works? No—and every previous society that thought it was correct ended up hopelessly mistaken. That, however, doesn’t mean the goal is innately hopeless. Yes, we are not the first society to conclude that our version of reality is objectively true. But we could be the first society to express that belief and never be contradicted, because we might be the first society to really get there. We might be the last society, because—now—we translate absolutely everything into math. And math is an obdurate bitch.

  [3]The “history of ideas,” as Greene notes, is a pattern of error, with each new generation reframing and correcting the mistakes of the one that came before. But “not in physics, and not since 1600,” insists Tyson. In the ancient world, science was fundamentally connected to philosophy. Since the age of Newton, it’s become fundamentally connected to math. And in any situation where the math zeroes out, the possibility of overturning the idea becomes borderline impossible. We don’t know—and we can’t know—if the laws of physics are the same everywhere in the universe, because we can’t access most of the universe. But there are compelling reasons to believe this is indeed the case, and those reasons can’t be marginalized as egocentric constructions that will wax and wane with the attitudes of man. Tyson uses an example from 1846, during a period when the laws of Newton had seemed to reach their breaking point. For reasons no one could comprehend, Newtonian principles were failing to describe the orbit of Uranus. The natural conclusion was that the laws of physics must work only within the inner solar system (and since Uranus represented the known edge of that system, it must be operating under a different set of rules).

  “But then,” Tyson explains, “someone said: ‘Maybe Newton’s laws still work. Maybe there’s an unseen force of gravity operating on this planet that we have not accounted for in our equations.’ So let’s assume Newton’s law is correct and ask, ‘If there is a hidden force of gravity, where would that force be coming from? Maybe it’s coming from a planet we have yet to discover.’ This is a very difficult math problem, because it’s one thing to say, ‘Here’s a planetary mass and here’s the value of its gravity.’ Now we’re saying we have the value of gravity, so let’s infer the existence of a mass. In math, this is called an inversion problem, which is way harder than starting with the object and calculating its gravitational field. But great mathematicians engaged in this, and they said, ‘We predict, based on Newton’s laws that work on the inner solar system, that if Newton’s laws are just as accurate on Uranus as they are anywhere else, there ought to be a planet right here—go look for it.’ And the very night they put a telescope in that part of the sky, they discovered the planet Neptune.”