Whoa, Dude, We're Not Inside a Computer Right Now!

By Johannes Niederhauser

Professor Massimo Pigliucci.

We recently published an interview with NASA scientist Rich Terrile about how incredibly likely it is that we’re all Sims. You all loved it for three reasons: 1) It’s simple; 2) You smoke waaay too much weed; 3) You don’t want to die. But, in reality, comprehending life is a bit trickier than relying on the notion that we’re all living in a reality coded by a programmer from the future. Plus, the simulation theory is basically nothing but a rehashed version of medieval philosophy, which held that everything exists only in the mind of God.

Because I don’t want to make everything too easy for you, I thought I’d try to debunk Terrile’s theory by having a chat with Massimo Pigliucci, author of Answers for Aristotle and professor of philosophy at the City University of New York.

To quote Massimo: we really don’t know shit. So stop dreaming about connecting your brain to your Playstation and get serious. Life is harder than pressing restart.

VICE: Hi Massimo. What do you think of theories like Terrile’s?
Massimo Pigliucci: I’m not directly familiar with Terrile’s work in particular, but that’s an old idea, which has been proposed by philosophers in many guises. In general philosophy, it’s known as “idealism,” meaning the idea that what we call reality is the manifestation of a mind—i.e., it’s an idea. George Berkeley—the namesake of the Californian university—was a proponent of idealism, although he thought that the mind in question was that of God, not of a computer.

I actually think the argument is interesting, philosophically speaking, but it has two problems: it is entirely empirically untestable—i.e., it’s not science—and is based on what I think is a fundamental flaw. The philosopher Nick Bostrom argued something along the lines of what Terrile is proposing quite recently.

What was that?
Well, the argument is that, if it’s possible to simulate minds inside a computer, if there’s an existing civilization capable of doing so and if at least some of these civilizations are curious enough to actually do so, then they’re likely to do it many times over. Just like we don’t only make one copy of The Sims, we make millions. It follows that there are likely many simulated universes and only one or comparatively few physical ones.

Given that, the odds that we’re inside a simulated universe are much greater than the odds that we’re in a physical one, and Bostrom concludes that it’s likely we are inside someone else’s simulation. The argument is actually very clever.
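Bostrom’s counting move is, at bottom, simple arithmetic: if simulated universes vastly outnumber physical ones, a randomly placed observer is almost certainly inside a simulation. Here’s an illustrative sketch (the counts are invented for illustration, not taken from Bostrom):

```python
# Bostrom-style counting: if civilizations run many simulations,
# simulated universes vastly outnumber physical ones.
physical_universes = 1
simulated_universes = 1_000_000  # hypothetical: each capable civilization runs many

# Probability that a randomly chosen observer is in a simulation
p_simulated = simulated_universes / (physical_universes + simulated_universes)
print(f"Chance we're simulated: {p_simulated:.6f}")
```

Of course, the whole argument hinges on the premises Pigliucci goes on to question: that minds can be simulated at all, and that civilizations actually do it.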

So what do you find problematic about it?
Bostrom—and, I assume, Terrile—accepts a strong version of the so-called computational theory of mind, i.e., the idea that the mind is like computer software and can be “run” on many different substrates, including computer chips. While some version of the computational theory is widely accepted by neuroscientists and philosophers, I think this is far from established, and I agree with the minority view voiced most famously by John Searle.

According to Searle, the mind simply isn’t like a computer, because “minding” is a particular biological activity—like, say, breathing—that is likely tied to specific biological substrates and the result of a specific process of biological evolution. This isn’t to say that there is anything mystical about consciousness, of course. Nor that we couldn’t reproduce the phenomenon artificially. But it doesn’t seem to make much sense to say that we could “simulate” it inside a computer, or “upload” it in a computer, so that we become immortal, as supporters of Singularitarianism claim.

These kinds of theories seem to be getting extremely popular at the moment. My personal guess is that it’s because it would make this life a whole lot easier if it were all just a simulation.
Yeah, I see it as religion for nerds. If you think about it, the big simulator in the sky is like God, but you don’t have to go for that yucky Old Testament stuff.

That’s what I find so ridiculous about these kinds of speculations. It’s often self-styled “atheists” who believe it, and then they talk about some “programmer” who has created this simulation we all live in. Anyway, Terrile claims that the world is a simulation because, when one looks at the smallest parts through a microscope, they appear pixelated. Is that even an argument?
I’m not sure what he means by that. I don’t think fundamental physicists think of quarks, say, as “pixelated.” Now, if you want a real mind-bender, check out James Ladyman and Don Ross’s book Every Thing Must Go. They extrapolate what all currently viable fundamental theories in physics—relativity, quantum mechanics, string theory, M-theory, loop quantum gravity—have in common and reach the conclusion that, according to the best science, there is no “bottom” to reality, which means it can’t be “pixelated.”

Would it be appropriate to call the kind of skepticism that people like Terrile use “pseudo-skepticism”?
No, I wouldn’t go as far as that. Radical skepticism, which is what this is called in philosophy, has a long and interesting pedigree. Just think of Descartes’ famous thought experiment that led to his “I think, therefore I am” bit, as well as countless variations printed on t-shirts. But it’s well understood in philosophy that, although radical skepticism cannot be logically or empirically refuted, it’s a dead end.

There is nothing else you can do about it. So, I think of radical skepticism as a humility check—as in, “yeah, we really don’t know shit”—and then still get my coffee in the morning, under the reasonable assumption that it is real and hopefully tasty.

Do you think it’s impossible to create artificial intelligence and artificial consciousness?
I’m not sure it’s impossible; I just don’t think that the so-called “strong” AI program is the way to do it. That program has pretty much come to a halt in recent years, anyway. From one perspective, we are thinking “machines,” the result of a natural process of biological evolution. So there is no reason why—in principle—we couldn’t replicate substantive bits of that process and create artificial intelligence.

But there are problems there, too?
Yes, there are likely a lot more constraints—physical constraints—than enthusiasts of artificial intelligence seem to be taking seriously. When they say, for instance, that the mind is “computable,” they are equivocating on the meaning of computation. Pretty much everything in the universe is “computable” in the broad sense of being tractable by a Turing-like machine. But that doesn’t mean that we can “upload” rocks, planets, and stars into a computer without losing the most important thing about them: their physicality.

So yes, we might be able to eventually generate AI, and it may even evolve to be smarter than us, but it isn’t going to be a simple matter of “simulating” a purely logical string of symbolic operations.

The question of whether what we perceive as reality is, in fact, “real,” is as old as philosophy. Don’t theories like Terrile’s lead away from the problem of “true” knowledge? Or even harm philosophy and science?
Ah, if you were to ask Plato, “true knowledge” is a redundant concept, since knowledge is justified true belief, meaning that truth is already built into the idea of knowledge. That aside, I think one of the best aspects of philosophy is the ability to run thought experiments that are outside the strictly empirical realm of science—to explore logical space, if you will. The problem begins when people make a big deal of comparatively bad or uninformative thought experiments.

That makes me feel like claims such as Terrile’s are pseudoscientific, and that those dreaming of simulations cannot deal with the fact that they will die someday, which is why they dream of AI and simulations.
I wouldn’t use the word pseudoscientific too lightly. Terrile, for instance, is a legitimate scientist doing legitimate science. When he doesn’t talk about pixelated reality, that is. That’s different from, say, Deepak Chopra and his nonsense about “quantum healing.” That said, academics—be they scientists or philosophers—have a special responsibility when they talk to the public and should make clear where the science or philosophy ends and personal speculation begins.

But yes, I suspect that a lot of what we are talking about does have to do with the human inability to deal with death.

Why do you think the more esoteric branches of science and philosophy, like this, have always attracted the masses?
Because that’s where the fun is. The human imagination likes to run wild, so we buy books about string theory rather than Newtonian mechanics, or about zombies rather than the basics of logic. The frontier stuff is always most exciting. And that’s not a problem at all, as long as the people who write about these things do it responsibly, which is not always the case, unfortunately.

So how could science and philosophy fruitfully work together? 
Well, despite recent public clashes between scientists and philosophers, there are plenty of very good examples of fruitful collaboration between the two, especially in fields like the philosophies of biology, physics and mathematics. In those scholarly areas, philosophers take the science very seriously and scientists openly engage with the philosophy.

But we also have to understand that philosophy is a different type of intellectual activity from science. Scientists are in the business of discovering things about the nature of the physical universe, and what they propose about that nature has to be empirically verifiable. Philosophers are concerned with exploring logical possibilities, as well as with examining the meaning and coherent—or not—deployment of concepts in both everyday and scientific language. So, ideally, good philosophy should be informed by the best available science and science should be aware of the fact that—as Dan Dennett famously put it—it takes on quite a bit of unexamined philosophical “baggage.” But, other than that, it simply boils down to respecting the value of two different intellectual enterprises, neither of which is going to be reduced by the other.

Follow Johannes on Twitter: @JohnVouloir