Good Conversation Project
Joe Carlsmith - Deep Atheism, Death, and Sincerity

Joe Carlsmith is a philosopher and writer at Anthropic whose concept of “deep atheism” has changed how I think about religion. In this conversation we explore what it means to trust or distrust the universe, why death brings life into focus, and why it can be hard to be sincere. We discuss Teilhard de Chardin, the problem of evil, Elinor Ostrom, Nick Land, and end on grace as a model for how a whole system can work together.

Links: Joe on deep atheism, part of a series on otherness and control in the age of AGI, and on death, sincerity, and “fake thinking” and “real thinking”.

Transcript

The following transcript has been cleaned up and filler removed. It was generated by AI, and checked by a human, but errors may remain. Check the video before quoting.

Introduction and defining deep atheism

Michael Nielsen: Hi, welcome to the Good Conversation Project. I’m Michael Nielsen. I’m a research fellow at the Astera Institute. I’m joined today by Joe Carlsmith. Joe is a writer and philosopher at Anthropic. His interest is in helping humanity orient wisely toward the long-term future. Joe, we’ve talked a little bit before about this idea of deep atheism, which I think is a really interesting concept.

You wrote this long essay as part of an even longer book-length essay. And the idea as I understand it is you’re a deep atheist if your fundamental belief or fundamental orientation toward the universe is that it is not benevolent, that it is somehow malevolent. So you give this great example of Timothy Treadwell, the grizzly man who lived with grizzly bears for ten-plus years, in a very trusting kind of a way, maybe too trusting, as a canonical example of somebody who’s not a deep atheist. And then you have Eliezer Yudkowsky, who’s one of the prophets of AI doom, as an example of somebody who’s much more suspicious towards the universe.

Something that I really struggled with while reading the essay was: what type of a concept is it? I’m trying to think of it as a scientific concept. Is this something that we can deduce from the laws of nature? And it maybe doesn’t seem like you can, because most of the universe is vacuum, which is incredibly hostile. And so in some sense, we should all be deep atheists. But how do you think about that? What’s the right kind of conceptual frame for it?

Joe Carlsmith: I think we can try to get a somewhat quantitative-ish definition of at least one aspect of it, though I haven’t tried to work this out, so this is going to be rough. I will say, I think the most important concept—you’d said not benevolent. And I think not benevolent is importantly different from malevolent.

The important thing about deep atheism for me is more about the notion of trust. Basically, the deep atheist is someone who doesn’t trust the universe to take care of them, to go towards good outcomes by their lights by default.

And correspondingly—and this is why I got interested in it—feels the need to exert a kind of control over the direction that the universe goes. And that’s different from thinking that the universe is actively trying to mess with you or actively hostile in some more intentional sense. And maybe that’s not what you mean, but I just thought I’d clarify that as an important backdrop.

Michael: That’s very helpful actually. The way you’re describing it, it sounds like it’s actually quite a psychological state, maybe primarily. It’s not the same as paranoia, but it’s a little bit related in some sense.

Joe: Not necessarily. I think there is a component. If we were going to try to define it somewhat rigorously, what I would do is say, okay, let’s say you’ve got some values. And we’re going to use those values to rank possible states that the universe ends up in. And we have some notion of the default that happens if you don’t try to control stuff. I think my baseline if I was going to try to define deep atheism would be: deep atheism is the view that the world will be bad by your lights by default, if some sort of gentle control is not exerted over it.

Michael: Yeah.

Joe: And then I think that has a component of the empirical direction of the universe by default, whatever that means. And then there’s a different thing, which is about your values. An interesting counterexample, or a case that’s sort of an edge case of deep atheism: suppose your value system is such that the main thing you care about is the absence of suffering. In a sense this worldview is often associated with a kind of deep pessimism and a kind of aversion towards the world, because in some sense there’s all this suffering, it would be better if there was nothing. But actually there’s a way in which this view is not very deep atheist, in the sense that most states—if you imagine the world sort of rolling forward—most states don’t have much suffering in them.

And so by default, unless someone is really steering towards suffering, the universe kind of doesn’t create suffering. Most stuff is just blank, and on that ethics, blank is great. It’s the top you can get. So there’s a sense in which this worldview is very optimistic. If you just don’t do anything, the universe probably won’t have that much suffering, and that’s the best you can get. So that’s an example of a case where it really matters what the value system is. You can have the same empirical picture of where the universe might go by default, but the value system changes your read as to whether that’s good or bad.

Michael: I just want to check that I understood, because I’m not actually a hundred percent sure I did. You’re saying that whether or not one is a deep atheist does depend fundamentally on your other set of values.

Joe: Yep, that’s right.

Michael: Okay, that makes a lot of sense. And it’s still a response then. Actually, for most sets of values, somebody who’s living near grizzly bears is going to think, I should probably be somewhere else. Their response is to control the situation, to say, I need to get out of here. Living in a tent 300 metres away from grizzly bears is probably not the best idea. And it probably wasn’t Timothy Treadwell’s values that made the difference there. He probably wasn’t thinking it doesn’t matter whether or not he lives or dies. He actually seems to be a person who quite enjoys life. Maybe struggles with conventional life, but he does actually really enjoy his life. And he doesn’t want to suffer either. So he has quite conventional values in those ways.

But he still had very limited need for control in some interesting way. And that’s what made him—I don’t know what the right term is, the opposite term to deep atheist—but he was certainly not deep atheist.

Joe: Yeah. The term I use in the series that’s the closest is this notion of “green,” where green is associated—it’s a term from the Magic: The Gathering ontology, somewhat remixed and reconceptualized—but where green is understood as an attitude that has some kind of fundamental trust in the notion of nature. And nature is a kind of funky term here, but it’s importantly the other—the thing that’s prior to you, the universe, the thing beyond yourself.

Michael: Do you mind if we stick with just me saying “the opposite of deep atheist”? Because in my reading of your essay, it didn’t seem to me that you wanted the term green to be exactly the opposite of deep atheism. It was doing a lot of the same work as that opposite, but was doing importantly different work as well.

Joe: Yeah, that’s true. I should say, one of the things I really appreciate about your engagement with this concept is it’s pushing me to try to figure out what are the actual boundaries of this, how does it actually fit together. And so I’m very open to what we learn being, actually, there’s a bunch of different things here and it goes different directions. I’m not pretending to have worked all this out.

I think the actual person—yes. Opposite of deep atheist: there’s a sense in which Timothy Treadwell is not a deep atheist in that he’s very trusting in this context towards a certain kind of natural process, namely the grizzly bears. And the way in which I would see him as differing from someone like Eliezer in that context is not so much the values he’s bringing to the table, but rather the empirical bet. I think he’s somewhere saying that he doesn’t expect to get eaten. So there’s a way in which it’s not—if he had said something like “I’m down to get eaten because I want to feed nature,” then it’s more of a values difference that’s driving the trust.

Michael: He was on Letterman, and he was interviewed by a bunch of people. And many of them asked variations on more or less this question. He’s maybe not a hundred percent consistent in his answers, but he certainly doesn’t come across as somebody who—he’s not committing suicide here. He’s not even tacitly oriented that way.

Joe: That’s right. And so the deep atheist has similarly conventional values. And the mistake they’re trying to avoid is trusting in the universe to care for those values. And Treadwell is at least some example of someone who is trusting in the universe to care for—in this case, the value of his life. And it didn’t.

Deep atheism beyond the local: Teilhard de Chardin and cosmic teleology

Michael: It seems like in the most immediate case, it is a psychological attitude towards one’s ambient surroundings. You take a plane halfway across the world, you’re in some city, and maybe you actually feel quite unsafe, or maybe you feel surprisingly safe. You walk into Kyoto at 11 PM and you’re like, wow, this place feels amazing. Different people have different levels of trust under those circumstances, and some people will be very distrustful habitually. It’s kind of situational, but you’re talking about something quite broad—broad expectations, a broad model relative to your values.

Joe: Yeah, I think you’re pointing at something that does seem important, and it’s a way in which I actually wouldn’t think of Treadwell as a paradigm person to think about in the context of deep atheism. I agree that he’s not a deep atheist, but I do often think of deep atheism and its contraries as more oriented towards the trajectory of the development of the universe and of life on a broader scale. Or at least that’s where I started thinking about the concept, because I was thinking about it in the context of AI risk and the aspiration a lot of people have to somehow have control over the future. The thought is you have to exert that sort of control, otherwise you’re going to get a loss of all value.

I think it’s separable in the following sense: you could be someone who has both the values and the empirical views that I’m associating with Eliezer here, and nevertheless land in a place that’s reasonably safe and be calibrated about how safe it is. To the extent you’re mostly talking about the local environment, that’s more a matter of local epistemology than it is about your convictions about the structure and dynamics that the universe tends towards at a fundamental level.

Michael: That’s actually quite helpful. So it’s—in some sense, much of your local environment, obviously it’s incredibly important for you wherever you are, but it’s also a little bit shallow. It’s not expressing all of the range of possibilities of the universe. And you’re more interested in that broader range of possibilities—what does the universe have in store for us potentially? You’re more pushing in that direction if I understand correctly.

Joe: Yeah, that’s right. And so I actually think that the paradigm non-deep atheist that I think about is—well, there’s sort of two. I think maybe the most salient is this Jesuit priest named Pierre Teilhard de Chardin.

Michael: Really? Wow.

Joe: He’s, to my mind, the least deep atheist—or like Bergson or these people who are not just Christian in the sense that they believe that there’s a God somewhere. It’s really that there’s a God as manifest in the inevitable evolution of complexity—

Michael: Yeah. The Omega Point and all of this. It’s a very strong singularity kind of a notion.

Joe: Yes, and a very strong inevitability. There’s this crazy passage in Teilhard’s book, The Phenomenon of Man, which I think illustrates—I don’t like his view. But he has this passage where he looks out at evolution and he sees this progress towards complexity and greater mental connection. He has this whole teleology that he’s extrapolating from the paleontology and the record. And then he says, okay, some people say maybe we could go extinct, maybe humanity would get wiped out. And he’s like, no way. Why not? Because the universe clearly wants to get to the Omega Point. And if humans got wiped out, then it wouldn’t get there. And so it’s going to get there. This is a very intense kind of retroactive application of his teleology to basic causal dynamics. I think he’s actually talking about asteroids—could we get hit by an asteroid? And he’s like, nah, because then the progress of evolution would be halted.

So in his case there’s this very strong inevitability to the default dynamics that we are already caught up in. It’s not even that you could mess it up. And you will get to the Omega Point, and Jesus will look back on history and smile—or sorry, there’s a bit at the end of the book where it looks like he thinks Jesus is present at the Omega Point.

Michael: I wonder—he was writing that in the 1930s or something like that. This is before the atomic bomb, and before MAD [ed.: I got this wrong – it was published in 1955]. I’m thinking about Petrov and Arkhipov—single individuals who could or could not essentially push a button. It wasn’t actually a button, but—either send the human species back to 5,000 people living in caves, or not. And fortunately for us, his argument would almost be something like: it was inevitable that Petrov would decide it’s probably just a bug in our early warning system, we shouldn’t launch a counterattack and end civilization.

That’s almost the point of view he would have to take with that teleology.

Joe: Yeah. I think probably the more charitable version of his view is one that you do see. That particular intensity of inevitability of the good outcome is, I think, rare. A view that I would see as more nearby is one that treats something like the process of evolution, capitalism, natural selection—some analog—and sees that process, if uninterrupted, as inevitably leading to a good outcome. Teilhard goes further and says it won’t be interrupted, but we can let that go and say, well, maybe you get hit by an asteroid, maybe you have a global pandemic, but if you don’t, and you just let some kind of emergent, decentralized process of selection and competition and cooperation do its thing, then very robustly you will end up in an approximately optimal outcome, or at least a good outcome.

Selection, progress, and Steven Pinker

Michael: That’s really interesting. You can sort of view it at some level as just a point about what selection filters are being applied. If your selection filters are actually pretty positive, then it’s actually not such a terrible—it’s kind of believable that you can make up a story where this makes a lot of sense. Certainly, I don’t think somebody like Darwin would have believed this, but maybe you go long enough and the selection filters actually become positive enough. It’s almost a self-reinforcing thing where if the environment is already ambiently good enough, the selection filters will actually ensure that things get better. And I suppose you could imagine making that kind of argument.

Joe: Yes, I think this way of rejecting deep atheism is in fact empirically on the table, depending on different sorts of values and how high your standards are for satisfaction with the outcome. I gave a talk which you and I discussed a while back about whether goodness can survive intense processes of competition.

One vision of optimism there is: you have a bunch of competition, but inevitably in order to be competitive in many processes—and I agree, it matters what the selection filter is—being competitive requires something like being friendly with other agents and making deals and abiding by norms that involve respecting boundaries and avoiding zero-sum conflict. If something like the thing that falls out of reasonable game theory or cooperative norms applies more and more as you go into the process of greater evolution and competition—or sometimes people talk about our global mind getting more synthesized together, more connections, maybe the thing to do if you’re trying to be competitive is to build a giant coalition with more and more people being happy and more and more people contributing their part of the cognition to some collective intelligence. I think that’s quite close to what Teilhard is talking about. And I think variants of that, you still see this crop up tons of places.

Michael: Can I just make a connection? I never would have thought of Teilhard as being similar to somebody like Steven Pinker or Max Roser, but actually you’re pointing out that there is something very important that they have in common.

Joe: Yes, though I feel like Steven Pinker does less in terms of—he doesn’t extrapolate out the graphs enough. He sort of says, check out my graph, but he doesn’t do a bunch of “and what happens in the limit of competition” or “what is the teleology of evolution.” But yep.

Michael: Yeah, it’s much more local and much more empirically focused. He’s just interested in this question of, well, have things actually been getting better? And that’s interesting—they are, at least on some axes. Why? And then he has a theory which is essentially about selection filters. He doesn’t extrapolate it. He’s not trying to claim that in a hundred thousand years we’ll all have ascended to the sky or whatever.

Joe: But I do think you’re right that there is a similar shape to the trajectory of history as being presented, which is up and to the right. As you go forward, there’s some sense of trust in the default. And then there’s also a question of how concerned are you about that project being knocked off course? I helped at one point with this book, The Precipice by Toby Ord, which is actually in many respects in a Pinkerian historical vision, but is very concerned about this: well, there’s a default trajectory of progress that just needs to be not interrupted. But it’s not so much about steering it one direction versus another. It’s more like, don’t die. If you don’t die, you’ll get to the good place.

Michael: Yeah. It’s sort of asking, how can you die?

Joe: And then interestingly, you start to sneak in more and more ways of dying. For example, what if you get the bad values? And is that death, or—the AIs take over, but the AIs in some sense are quite continuous with the process of capitalism and whatever else. So there’s a weird way in which you can start sneaking in more and more deaths, until it’s a little bit unclear whether there’s really a default here, or whether you just want to steer to a very particular outcome in the end.

Deep atheism, religion, and the problem of evil

Michael: That’s very interesting. It’s an interesting way of thinking about Toby’s book. Just in general, I think part of why the deep atheism stuff has remained so much with me is that it helped me understand better than anything ever before—and I’ve mentioned this to you—that the question of whether or not you believe in God isn’t actually that important for religion. It’s very important in the Abrahamic religions. Moses and Jesus and Muhammad are all very clear on this—”I am the Lord your God and you shall have no other gods before me.” Jesus’s great commandment is to love the Lord God with all your heart, all your mind, and all your soul.

Very important in the Abrahamic religions, but not so important in some other religions. There’s this fact I love about the Buddha, that there are these 14 unanswerable questions in Buddhism, questions like: How did the cosmos start? How will it end? What is its fundamental nature? And the Buddha is like, just quit wasting your time. These questions are a complete waste of time. Focus on suffering. This is what’s important, ameliorating suffering.

He’s not interested in a lot of these big cosmic questions that the Abrahamic religions are very interested in. Buddhism is certainly much less interested in the question of whether or not there’s a God or what the nature of God is. And at first when I read about deep atheism, I was like, it’s so badly named, what’s it got to do with whether or not God exists? And I think I’ve come around a little bit to the term. It’s still hard—my first 20 minutes of conversation about it with anybody is always them complaining about the term. So better branding might be helpful. But I do understand: it is this fundamental question about your orientation towards the cosmos, what do you think the cosmos is about, what do you think its nature is? And you can’t be agnostic on this. Every human being has some orientation of this type. And it is a kind of religious orientation.

I’ve spent a little time thinking about essentially ontologies—what else has this character, big broad questions about the nature of the cosmos? Just one example, which I think is very common among scientists. I find it particularly moving in reading Einstein, more than anybody else. You read Einstein’s letters and the extent to which he believes that the universe is comprehensible and simple—it is just astounding.

He has deeply internalized this point of view, and I get teary reading it, not because I necessarily agree, but just because of the intensity of the conviction. And you realize, oh, this is part of what made him a great, great, great scientist—this intensity of belief. And again, there’s nowhere in the Bible where it says this, there’s nowhere in the Torah or any other scriptures—I mean, you can make arguments, but they’re—

Joe: Well, Job—famously, “who were you to be—were you there when I made the hurricane?” I forget the exact words—

Michael: I don’t think so. You’ve got to admit that it’s something of a reach. People do try to connect Judeo-Christianity to modern science, and obviously Newton was deeply religious and whatever, but it’s a different kind of a thing. You don’t get general relativity out of this point of view, I don’t think. It’s somehow much deeper, the notion of this incredibly deep simplicity. I’m just curious if there are other broad questions which seem to you to have this same kind of flavor as deep atheism, or for that matter, this belief in the simplicity and comprehensibility of the universe.

Joe: Yeah, I like this. “Is the universe comprehensible?” is a very interesting fundamental stance. There’s some question—I think it’s an interesting point—how easily do you give up if there’s something you don’t understand? And that can come from “I’m personally not going to understand it,” or it can come from some backdrop sense that the world is full of fundamental incomprehensibleness.

Michael: Do you know the story of Einstein when he was four? He saw a compass for the first time and it oriented towards north and he wanted to know what it was doing. And then he reports it as an epiphany—to realize that there was this fundamental order in the world that could be detected in this way. And I think it’s interesting that even at age four, he had that—my God, there are hidden forces here and we can understand them. I don’t think most four-year-olds—compasses are cool, but they’re maybe not that cool.

Joe: Right. Yeah, four is early for that degree of ecstasy.

Michael: Yeah.

Joe: My point with Job was more actually in the opposite direction. I think the Christian tradition has a—if you’re going to read a view about the comprehensibility versus incomprehensibility of the world onto the Judeo-Christian tradition, I would have said it leans incomprehensible, at least in the context of the problem of evil. I think the problem of evil is the big problem for religion. And in that context, you’re stuck either saying something like “it’s incomprehensible, sorry, you just don’t get to know,” or you end up talking about the demons doing it, and you end up doing this quite unsatisfying game of trying to explain it.

Michael: I think it’s a problem for monotheistic religions. Much less of a problem for everybody else. Would you agree with that?

Joe: Yes and then no. And this gets to some of the stuff about fundamental stances, where I think the problem of evil threatens religious orientations at a more fundamental level than at the level of belief in a perfect, all-powerful God. Because there’s a different thing you can have, which is a fundamentally positive stance towards being itself.

I would associate that with many religions, many kinds of spirituality-adjacent vibes, including some associated with quite pessimistic religions like Buddhism. There’s a way in which—Allen Ginsberg has this poem where he’s like, holy is everything, holy the saxophone, holy the city, holy the cock, holy all these things. He wants to say everything is holy. And I think there’s a lot of spiritual traditions that aren’t explicitly monotheistic in the Christian sense, but nevertheless want to attribute a kind of sacredness or value just to being as such, or reality as such. And I think the horribleness of much of reality is a barrier to that kind of stance, even independent of the metaphysics at stake.

Michael: I’m just thinking about Shinto, where absolutely everything seems to have some kind of deity associated to it. And they don’t seem to have any problem with the fact that some of these are pretty bad. The bear out the back that’s going to rip your head off—okay, it’s got a deity associated to it, but maybe not a particularly friendly one.

Joe: Yeah, I agree. And the question is, do you end up at some other level viewing this bad deity as sacred or not?

There’s actually a really interesting scene in the movie Princess Mononoke. Have you seen it?

Michael: I think I have, although the Ghibli movies do shade into one another a bit for me.

Joe: It’s the one where it opens with this boar that has been infected with some horrible demon—it used to be a boar god and it’s rampaging with all these horrible bloody worms coming out of its skin. It’s clearly a demon, labeled as a demon. Bad. The hero kills it with an arrow.

It eventually dies in front of the village, and the village wise woman comes out and kneels before the demon and says something like, “great Lord of destruction and death”—some kind of honoring of this demon. And the demon interestingly replies, “you know, filthy humans, you will suffer as I have suffered.” It has no—it doesn’t bow back. It’s just raw hatred. And this is expected. I think it’s a really interesting scene of confronting a thing that is being labeled as genuinely bad in the world, and nevertheless holding a sense of honoring it, some deeper sacredness, some part-of-the-totality sense.

Michael: And you would view that as an attempt at a resolution of the problem of evil? I think that’s more or less what you’re saying?

Joe: It’s the version that seems best to me. I think the balance I feel is necessary is: you need to be able to say that bad things are bad. Really say it. If you see some horrific suffering, torture—I think there’s something that feels really wrong about honoring that and saying, this is holy. There’s something important about being able to say, no, this is bad and we will end it, and it is not sacred.

And then there’s some other level at which one wants to be able to affirm—or maybe not affirm, but love, or forgive. There’s a sense of grace. This is the synthesis of deep atheism with spirituality that I attempt in the series. You want to be able to both mistrust the world at an empirical level, and nevertheless have some kind of love for it in the same way you love a flawed thing.

Michael: I think you’ve written about this at the level of individual aging and death as well, where you’re quite sympathetic to the idea of celebrating that process and accepting that process, but also rejecting it—you’re doing both. Particularly at the end of a life. And somewhere in one of your notes you talk about, with I think quite justified disdain, the fact that at funerals sometimes people will tell half-truths, will just not be honest and frank. And that’s a moment at which, as far as I can tell, you want a lot of frankness.

There’s this interesting tension—at least the tension I feel when you’re talking about individual deaths—between wanting to fight it and regard it as something where we do want to exert control, we do want to defer things. And then at some point, there is a moment at which one accepts it and starts to integrate it and starts to celebrate it even. And you want to be able to have very frank, almost joyful conversations about it. I’m putting words into your mouth—that’s maybe more my interpretation. But that seems like the single individual level analog of the deep atheism point. Is that fair?

Joe: Death is definitely one of these areas where many deep atheists think that we go wrong in our degree of trust and affirmation and acceptance towards the world. Yudkowsky and others come out of this—actually prior to AI, a lot of that tradition is rooted in this really strong “death is bad, death is the enemy, let us end death.” And they see with horror the way in which people apply a kind of trust, acceptance, yin, sacredness vibe to death. They see this as almost a kind of learned—well, there’s different diagnoses, but they think this is really misleading, to be like, oh, death is sacred, death lends meaning to life. That’s a kind of cope that causes you to exert less control, when in fact—and maybe that makes sense when you can’t control this, but if you actually can, then you need to be able to see that it’s bad and that you should change it.

Dylan Thomas, Wendell Berry, and the synthesis

Michael: Two of my favorite poems—one is Dylan Thomas, “Do Not Go Gentle Into That Good Night,” the famous “Rage, rage against the dying of the light.” This is the deep atheist point of view you’re talking about—fight back.

Joe: Yeah, totally.

Michael: And then there’s Wendell Berry, who’s very much a green in your description. He has this wonderful poem about the peace of wild things, who do not tax themselves with forethought of grief. And I think in some sense, part of why I respond to a lot of your writing and really enjoy it is I feel like you’re attempting a synthesis of these two points of view, where you regard them as a consequence of a single deeper point of view, and you’re searching for what that deeper point of view is. Obviously I’m massively putting words into your mouth here, but instinctively that’s part of what I feel in it.

Joe: I appreciate that. I am trying to synthesize this stuff.

I think there’s two aspects. There’s an empirical question about the nature of the universe, and also about our values at the level of what philosophers would call axiology—you’re looking at states of the universe and ranking them. A particular way of evaluating an outcome.

And then—and you’ve written about this and we’ve spoken about it a little—there’s a different thing, which is: what is the subtlety of the shape of your spiritual orientation towards this? Both this empirical picture and this picture of your values, but there’s some other component: what is the integrative pattern of resistance and acceptance and love and rejection? There’s a bunch of ways your mind responds to particular things in the world. I am trying to somehow hold space both for a genuine hard no—this is bad, to be fought against, to be killed, this should be ended, it’s not sacred—and then underneath that, there needs to be love, or there needs to be some grace, forgiveness. At least that’s the synthesis I’m trying for.

I sometimes think about, does Jesus ever kill something? How does Jesus kill something evil? And I sort of think he kills it with love somehow. He sees its true nature, there’s a way in which it is holy for him. And also he’s doing something that needs to be done.

Michael: Can I give an example, which is in a very different direction but the same person? Jesus—the story of the Gospels does this incredible thing of making death into something to be celebrated. You have a story of a terrible death which ends in triumph. And that's a very interesting rhetorical move. It's quite an incredible thing to have made that into an occasion for incredible joy. My understanding is that's the way many Christians relate to it—the worst defeat followed by an even greater triumph.

Joe: Though it is tough, because it’s a very particular empirical story. And that is the concern about the way in which a lot of Christians get around some of this deep atheism. I’m kind of like, well, it’s different if there’s heaven at the end and all this stuff.

Michael: A get-out-of-jail-free card, kind of. So you’re saying this is of course a very conventional Christian response to the problem of evil—to punt and say, it’s okay, here’s the big reward at the end, no matter what.

Joe: I think there’s at least—I don’t think that is enough as a response to the problem of evil. Even with heaven, people are like, well, couldn’t God have created a world where you just didn’t have forest fires burning these deer alive? Why you gotta do that, God? But I think there is a way in which if you have heaven in the background, it is easier to see the fundamental stance towards reality as one of trust and affirmation. Because ultimately, all shall be well and all manner of things shall be well.

And a nice thing about Christianity is that heaven is bifurcated from earth—at least in the more traditional Christian view. People like Teilhard are doing something different, but the traditional view is: earth has fallen. The natural world is beautiful but broken. That's how you accommodate all the ways in which this is bad, and then heaven is a different story. So you can actually have a horror towards the actual empirics of the world as a theist. There's a theist blogger named Bentham's Bulldog who has a post affirming exactly that deep-atheist stance of horror towards the world's actual empirics—that's possible as a theist, if you've put your stock in heaven.

Michael: That’s a good story. It doesn’t exactly solve the problem, but it certainly makes things a lot easier.

Joe: I would feel radically different as a—there’s also a way in which if you’re a theist, you think that the ultimate explanation for things grounds out in a perfect being. Everything comes from God. So even in the actual world with all its pain, there’s some way in which it is easier to say things like, this must be part of the plan. There’s a comprehensibility analog to Einstein’s view, but with a moral component. It’s not just that the universe is comprehensible at an intellectual level. It’s morally comprehensible—ultimately this was worth it, this was good, even if it really doesn’t seem good. If I had that view, I would have a very different attitude towards the universe.

Michael: A book I absolutely love and find very moving is C.S. Lewis’s A Grief Observed, basically his private diary after his wife died.

Something that I find so moving, even though I myself am an atheist, is that his Christianity is so important to him, and yet he's so angry at his God for allowing this terrible event. There's a particular passage that you highlight, and I could barely continue to read at that point—where he's seriously considering the notion that God is evil, that God is bad. This deeply religious man who sacrificed so much of his life. It's both a beautiful articulation and so deeply felt. I think the question I want to ask is just for your response to it, and also whether there are any other similarly sharp examples where it's been articulated so beautifully and with so much depth of real feeling.

Joe: I think it’s his most raw book. It’s also plausibly his best. The Great Divorce is also really good, different. I recommend it. It’s his vision of heaven, this endless journey into God.

I think what strikes me about that passage is that it illustrates how a certain sort of religious affirmation towards the universe can be separated from one’s metaphysics of God. It could be that you still believe in God, but you’ve lost a kind of affirmation of God, basically. And this also happens in The Brothers Karamazov, which is another example—again, it’s the problem of evil that does it. Ivan, in virtue of the suffering of children, hands back the ticket. He hands it back even if it’s justified. He doesn’t want to hear the story for why it’s okay for these children to suffer. Maybe in some sense it is okay—I don’t care, God, I’m giving you back the ticket. And it’s a little elusive exactly what’s going on there, but at the least there seems to be some rejection of God that is nevertheless compatible with still believing in God.

There’s this interesting pattern of loss of deference or dependence on God. I write about this in the series—the play Angels in America has a similar thing, Philip Pullman as well. There are these patterns of stories where there is a God and nevertheless, your role is to not defer or trust in God, but rather to be an independent agent in your own right. And we see that happen even in theistic contexts. Lewis—he eventually goes back to Christianity, but there is this brief moment where—he never doubts God, he claims—but he does start to doubt whether God is worthy of his allegiance.

Michael: This is reminding me why deep atheism is actually quite a good term. It's not really about belief in God or not. But in that frame, wanting a tremendous amount of control in the world is a very natural deep atheist response. If you don't believe that God is good, or if you don't believe that the universe is benevolent, wanting more control is very natural.

It’s interesting to think about. It’s such a psychological thing. You meet different people and you very quickly realize how much control and power they do or do not want. I’d never really thought of it in theological terms before, but it is that type of thing.

AI risk, e/acc, and Nick Land

Joe: Yeah. I got interested in this because of AI risk and the aspiration to somehow have control over the future. The thought is you have to exert that control, otherwise you’ll get a loss of all value. The traditional story is that the AIs will steer the world in some other direction. But the AIs are really just a kind of amplified version of the indifference of nature. Nature doesn’t care. Intelligence doesn’t care either, because goals are orthogonal to intelligence. So everything—intelligence, nature—it’s all orthogonal to value. There’s a very specific vector you need to be steering along, and all the rest go in bad or valueless directions.

People talk about how AI risk is some theological thing. Sometimes that's not true, but in this case there actually is a real, deep connection between your fundamental metaphysics of the universe and your relationship to the control at stake in AI alignment. You see that too in the context of—I started writing this when e/acc was just coming out. And e/acc has a version of a rejection of deep atheism that is rooted in the writings of Nick Land and an allegiance to capitalism.

There’s some notion that capitalism, competition, evolution will just inevitably go in some direction. In this sense, it’s kind of like Teilhard, but Nick Land is much more of a nihilist. Teilhard is like, it’ll go and it’ll be good—there’ll be love, joy, consciousness. Nick Land is like, no, it will kill everyone and maybe drive out consciousness.

Michael: And “what are you going to do? [apathetically]” seems to be the response.

Joe: Something like that. And there's an important way in which that's not deep atheism: there's a canceling of the value component. Normally deep atheism is: something will go in a bad direction by default. Nick Land, at least understood as nihilistic, is just saying something will go in some direction, and I'm not going to do the value thing—or I'm going to affirm that in some weird way.

Michael: Well, within the terms you explained earlier, you can understand it as—maybe he just has a different value system, and relative to that value system, he could be a deep atheist. That seems at least consistent.

Joe: I guess to the extent he has a value system that affirms the arbitrary destructive potential of unbridled competition, then I would say he's not a deep atheist in the relevant sense. Similarly, there are some parts of the e/acc writing where it sounds a little bit like they're really into the dissipation of energy as waste heat. There's some weird stuff about the death drive, some lineage that goes through Land—various things about burning the sun and getting rid of the energy. It's almost a pro-entropy stance. And the reason you're into capitalism and life, or at least this is the suggestion, is that it's the fastest, most efficient way of converting energy to waste heat.

There’s something about this where—again, it’s a radically different value system. If you’re really into entropy, then the universe is great, because entropy is this really strong force. You’ve got a really strong god operating on your side. It’s just that that’s a radically alien value system.

Michael: It's a funny thing to want to be maximizing. Actually, I'm just thinking about the fact that in information theory, if you compress information as much as possible, it starts to look like blackbody radiation in one dimension.

So there are interesting connections between maximizing entropy and desires for efficiency. There's a great paper, from I think 1996, by Cris Moore and collaborators—they're interested in what the signatures of alien civilizations would look like. Let's assume that [aliens are] very concerned with maximizing the efficiency of how they communicate. They're going to be limited by Shannon's results in information theory, and you can ask what their transmissions will look like. An old joke, going back many decades even prior to that, is that it's going to look like blackbody radiation, so we're not going to be able to tell. But they point out it doesn't look like blackbody radiation in three dimensions—it looks like it in one dimension, so you could actually get a signature.

I haven’t thought about this in 15 years and I’m probably butchering it, but there are funny connections between things looking a lot like heat and things actually being done extremely efficiently. The point is you can compress down so that all of the redundancy is eliminated, so of course it starts to look like noise.

Joe: In some sense, that picture is quite continuous with Land’s thing, in that he’s really into efficiency in some sense. But the problem is that efficiency qua efficiency doesn’t look on its face like it’s especially good. You need something that it’s efficient for.

I think there are some interesting questions here as to whether the process of life and growth and intelligence might be really quite nice and benign by default, even as it becomes more efficient or more powerful. Maybe power is actually deeply enough unified with goodness—that's the best-guess version in my head of an actual empirical worldview that could falsify deep atheism at the empirical level.

Competition, cooperation, and protecting the weak

Michael: This is a very Steven Pinker kind of argument—actually, the way to succeed in a world once you’ve achieved some minimal level of law and order is to be very trustworthy. It’s to trade, it’s to communicate well, all the good boundaries kind of stuff. And it is interesting that there are at least some simple models in which you get an emergent ethics which is surprisingly positive out of what seems like quite a brutish world.
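
A minimal sketch of the kind of simple model alluded to here, in the spirit of Axelrod's iterated prisoner's dilemma tournaments (the strategies and payoffs below are the textbook ones, chosen purely for illustration): reciprocating strategies come out ahead of unconditional defection in a round-robin against a mixed field, even though defection wins any single round.

```python
# Payoff table for one round of the prisoner's dilemma:
# mutual cooperation beats mutual defection, but defecting
# against a cooperator pays best in that single round.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(strat_a, strat_b, rounds=200):
    """Iterate the dilemma; each strategy sees only the opponent's history."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

def tit_for_tat(opp):
    return "C" if not opp else opp[-1]   # reciprocate the last move

def grudger(opp):
    return "D" if "D" in opp else "C"    # cooperate until first betrayed

def always_defect(opp):
    return "D"

def always_cooperate(opp):
    return "C"

strategies = {"tit_for_tat": tit_for_tat, "grudger": grudger,
              "always_defect": always_defect, "always_cooperate": always_cooperate}

# Round-robin tournament: each strategy accumulates its own score
# against every strategy (including itself).
totals = {name: 0 for name in strategies}
for name_a, sa in strategies.items():
    for _, sb in strategies.items():
        score_a, _ = play(sa, sb)
        totals[name_a] += score_a

print(sorted(totals.items(), key=lambda kv: -kv[1]))
```

With this particular field, the reciprocators finish ahead of always-defect: an "emergent ethics" of conditional cooperation, out of nothing but self-interested play.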

Joe: Yes, that’s the sort of thing I’m talking about. The question is how does that work? Does that sufficiently protect the weak? I think that’s an important aspect. We do see predation, raw predation.

Michael: Have you ever read the Code of Hammurabi? It has the most amazing passage very early on where he explicitly says that the purpose of it is for the strong to protect the weak. My understanding is that it was probably codified very early, as he was getting started. To some extent it was justification, but it does this amazing rhetorical move of getting its moral force from that argument. I’m going to rule over you. And part of my moral justification for doing it is I am going to protect the weak from the strong. Whatever that is—3,800 years old or something.

That’s a very sophisticated and at some level very encouraging idea. It suggests that human beings are sufficiently good at coordination and talking to one another that the way for me to get power is to act on behalf of the powerless. That’s kind of the implicit theory, at a very early stage. You can go forward in time to Locke and whoever—maybe there are similar ideas much later. But I think it’s very interesting that it’s already so strong. You can tell a story about Christianity that’s a bit similar.

Joe: Certainly. Christianity is famously interested in this pattern of concern for the weak. The question is whether that pattern of concern is suitably connected with power such that, if you really selected for power in the world over time, you would persistently get beings that cared about the powerless. The argument for why you wouldn't is: the powerless, by hypothesis, don't help you. They're not very useful trade partners. And we do see this. Concretely, we fund vaccines for diseases in developing countries far less, because people don't pay for them in the same way. And there are non-human animals, and all these patterns of ignoring the interests of tons of beings are stitched into even the relatively cooperative structures we see.

Michael: I don't know the history of the Code of Hammurabi—it's so remote in time that probably nobody does. But you can ask, what's actually going on? It's not that the individuals are powerless. They have a little bit of power. And Hammurabi has realized that he can aggregate that and use it to support his claim by acting on their behalf. So there are two distinct ways of looking at it. The other would be that he really is concerned out of some kind of pre-Christian ethic, doing this out of the goodness of his heart and his sense of empathy. And both of those things may actually be true.

Joe: Yeah, it’s probably a mix. The question is—you could think, it’s really hard for people to have no trade value or to be bringing nothing to a coalition. And so you would expect that it’s quite advantageous to be nice to tons of people in a way that allows for productive trade and other forms of cooperative relationship.

But there’s a bunch of gnarly empirics there. It’s sort of related to comparative advantage. And unfortunately, it seems contingent. It wouldn’t necessarily be a super deep feature of the universe, because the costs of cooperation and subsistence—you don’t hire everyone at your company. There are lots of people who could contribute, but it’s not actually worth it. We do see cases where domination might just work. I’m Pinker-curious, but not Pinker-confident about where this actually goes in the limit. And I’m concerned that there’s a shallowness of the atheism at stake in some of the Pinker/Roser liberal optimism discourse, where they’re kind of like, I’ve looked over a century of history and it sure looks like something like liberal democracy also converges with the most economically or militarily competitive arrangements. And I’m like, well, yeah, maybe until the ‘90s. But even if it’s true, it’s been true for a few centuries over this very small set of selection processes and different experiments. Trusting in that over the longer-term selection processes is a much tougher and higher standard.

Michael: I think I understand you. What you’re saying is we are unfortunately in an n-equals-one case here. You can maybe make a case that China developed somewhat independently of India developed somewhat independently of the West [etcetera]. But there was a tremendous amount of mixing. And it might be that if something slightly different had happened 10,000 years ago in the Indus Valley, the world would actually look completely different. Even in the last hundred years—Philip Roth, I think, wrote this alternate history in which the Nazis won World War II. Even that, on the scale of 10,000 years, is probably not the largest intervention that was possible, but it does lead to a very different world and a very different set of values. Same biology, different culture. And yet there does seem to be that level of contingency.

Joe: Yeah. There are a few different ways that example could function. It could be that there's a bunch of contingency—the Nazis win World War II, and it sticks because there just isn't that much competition. At least when I think about this, it's in the context of whether goodness is compatible with competition, which I think of as quite related to deep atheism. Competition is one of these fundamental forces driving the evolution of interacting agents. And if you can't trust in competition, then you need to do a bunch more control. You have to shut down competition in a bunch of ways if it ultimately leads to very bad places.

The concern would be: if things can just go in all sorts of directions, if you're concerned about competition, then path dependence is your friend. You're hoping you can be the path-dependent one, where you create a nice world and it just doesn't get selected against super hard, because the selection pressures aren't that bad. My concern would be something more like: if you looked across the universe, the prediction of someone who's really optimistic about liberalism or norms of cooperation is that everyone's being treated nicely, everyone has property rights, there aren't unhappy workers, there are no slaves anywhere, because slavery is economically disadvantageous. And I'm just not sure. I think empirically, if you surveyed a bunch of alien civilizations, it totally could be that slavery is part of the most efficient arrangement. And that's bad. It's not that that's good.

Michael: You think about E.O. Wilson and his two great examples of eusocial animals on Earth—the primates and the insects. I don’t know how good life is for an individual ant. I don’t think it’s particularly good. And there are a lot more ants than there are people.

Joe: It certainly doesn’t look like a paragon of liberal property rights in an anthill.

Michael: No. And you have to go out to a separate level. And maybe at some other level it actually makes sense. You can repeat that argument at the primate level as well. An individual cell in my body does what it’s told at some level. It doesn’t have a whole lot of agency.

Joe: It's true. And the immune cells killing themselves for you all day—who's paying them? What are they getting out of it?

Michael: Exactly. This is definitely some sort of very extreme dictatorship, much more along the lines of the ant colony. There are different levels of abstraction. We’re focusing at one particular level and saying we get property rights there, but it seems as though in order to get it there, at some other levels you actually have these little dictatorships going on. The cytokine response is doing its thing and your immune system is responding. I don’t understand very well how to think about those separations. You have a particular type of morality at one level and then it goes away completely at other levels. And that maybe might actually be necessary.

Joe: I think to some extent it’s unsurprising if you think that the right kind of norms—the dynamics that shape the equilibrium at a given level—are empirically contingent and alter at different levels. Then you should just expect lots of variation in terms of what morality emerges at a given level of organization, because different dynamics apply.

This would be my best guess. If you look at nature: sometimes there's a bunch of cooperation and sometimes there's a bunch of predation, and it really depends on the case. If you say it's a fundamental law of nature that everything is predatory—well, no, there's some nice stuff too. And if you say everything is cooperative—no, that's not right either.

There is an interesting question of whether there’s a step change at some level of intelligence. My friend Katja Grace has this great post about why we don’t trade with ants. And her take is: well, you can’t talk to them. If you could talk to the ants, we could totally trade with them, get them doing stuff for us.

Michael: But we do know a lot about the chemicals. We know an awful lot about how they communicate. I’m not sure—maybe the trouble is on the other side. Maybe we can sort of send information to them, but they’re not necessarily that interested in talking to us.

Joe: Well, there’s some set of—we can manipulate the ants, and we do this with sheepdogs and stuff. We can make an animal useful to our purposes by training them. And I can imagine a case where we put down little chemical trails—I don’t know, what would you want an ant to do? Clean out your pipe or something?

Michael: Very often people just want them out of their house. They do use signaling to do that.

Joe: Right, you could just lead them out. As opposed to killing them. I was thinking something a little more like, if you’re trying to apply comparative advantage arguments to the ants, why—

Michael: Yeah, but I’m just poking a little bit at Katja’s argument to say that in fact we can communicate in one relatively limited way, but maybe not vice versa. They don’t really have any concept of us, as far as I know.

Joe: Yeah, that’s interesting. I think the barrier is that even if you can do some communication, there’s some notion of all the cognitive structures that need to be in place to have something akin to a deal. As I’m saying that, though, I’m like, well, how do we understand other forms of symbiosis? There’s a way in which the deal can take place at the level of selection rather than at the level of communication.

But I think it’s at least interesting: if you think there’s something unique about the cognitive equipment a certain level of intelligence gets you, such that you can do deals in some more full-throated sense, then maybe as we go forward, there will be more and more deal-making such that you’ll see more advantage to respecting boundaries or incorporating someone in a—

The fundamental dynamic that I think you would expect to crop up, that seems very real, is: it’ll often be the case that two agents will do better by both their lights by not fighting. I think that’s going to be a robustly real feature of the world. And it’s a happy feature. You have this great writing about happy features of the universe—things that aren’t deep-atheist-y. The existence of gains from trade, of mutually beneficial stuff.

Michael: Even just the fact that threat displays are so often much better than actually fighting, in so much of the animal kingdom, is at some level surprisingly encouraging. It all militates against a deep atheist point of view. There are a lot of non-trivial, very surprising ways in which the universe is actually better than you would have naively predicted.
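
The textbook model of this observation is the hawk-dove game from evolutionary game theory (a toy sketch of my own, not something discussed in the episode): when a fight costs more than the prize is worth, a population of pure fighters is unstable, and display-only "dove" behavior persists at the equilibrium. The parameters here are illustrative.

```python
V, C = 2.0, 4.0  # value of the contested resource; cost of an escalated fight (C > V)

def expected_payoffs(p):
    """Expected payoff to a hawk and to a dove when fraction p of the population are hawks."""
    hawk = p * (V - C) / 2 + (1 - p) * V  # costly fights vs hawks; take V from doves
    dove = (1 - p) * V / 2                # yield to hawks; share the resource with doves
    return hawk, dove

p = 0.9  # start with a population of mostly fighters
for _ in range(2000):
    hawk, dove = expected_payoffs(p)
    mean = p * hawk + (1 - p) * dove
    p += 0.05 * p * (hawk - mean)  # discrete-time replicator update
    p = min(max(p, 0.0), 1.0)

print(f"equilibrium hawk fraction: {p:.3f}")  # settles near V/C = 0.5
```

The population converges to a mix of fighting and displaying rather than all-out war, which is the formal version of "threat displays so often beat actual fighting."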

Joe: Yeah. Some of the Adam Smithy invisible hand stuff is a bit like that. There’s a bunch of this. It starts to feel especially non-deep-atheist if it’s a decentralized, uncoordinated process that nevertheless leads to a good outcome. Because then it’s a kind of uncontrolled default.

Elinor Ostrom and governing the commons

Michael: One of my heroes is Elinor Ostrom, who wrote this amazing book, Governing the Commons—about exactly this, how decentralized groups can sometimes govern themselves for hundreds or even thousands of years, despite the fact that it looks like they should be in bitter competition with each other. Very interesting. Definitely very green, not deep atheist at all.

Joe: Elinor Ostrom is very green.

Michael: Yeah. It’s a funny word to apply to her book. She reads in many ways as a political economist—she won the Nobel Prize for economics. But fundamentally, really quite green in your terms. Not just in terms of the attitude she brings to it, but maybe more in what she discovers in the world. She goes out and points out things like, there are these irrigation areas in parts of Spain which have been very functional for nearly a thousand years—I think it’s 800 years or something. And there’s no real governance structure in the conventional way. It should have failed long ago, but they’ve figured out how to do it. They’ve developed these amazing governance structures that don’t eliminate competition and don’t eliminate argument. Instead, they’ve figured out how to resolve that, in ways that have basically worked for 800 years around Valencia.

That’s a discovered type of greenness in the world. And maybe even more positively, it’s interesting that you can take what she’s done and start to apply it in other contexts. I’ve heard people say this is the best book written about open source software.

Joe: So what do you take from it? Because it strikes me as a little bit like the predation-symbiosis case, where you have things that go in one direction and things in the other. And the real question is how do you predict or cultivate the conditions under which you’ll get the good one?

Michael: It’s a while since I’ve read it, but I don’t remember her talking very much about the instigation process. She has these—I think it’s six principles—which these structures have to satisfy. She’s more interested in finding extant examples in the world. She goes through a ton of different cases, cases where things have utterly failed, cases where things have worked, and tries to sort out what makes it work versus not.

But she's not so interested in the genesis question. My general impression is that very often it didn't require the larger group of people to be really focused on establishing a lot of the preconditions. Some of them are just classic liberal things—property rights which are respected. She wants very well-defined resources. She points out that when there's a lot of ambiguity about what counts or what doesn't, it tends to rip the systems apart. So she wants very clear boundaries. And she has a number of examples where that fails to hold, with disastrous consequences. So from a philosopher's point of view, conceptual clarity is very important.

Joe: But not actually—that’s slightly incongruous with a broader green aesthetic. Interesting.

Michael: Yeah, it’s very much: you can make small, apparently self-organized groups do much better than you might expect without a traditional notion of property rights. There’s this discovered set of design principles that you can use. But she doesn’t really do a deep history of how you set these systems up from scratch, how they come into being. And that seems like an incredibly interesting question.

Joe: I’ve been trying to figure out how pessimistic I am about some of these self-organizing decentralized processes. I want to name that you can actually go the other direction in terms of deep atheism, where you have more allegiance to decentralized competitive uncontrolled processes but you don’t think they’re inevitable in their victory. Both of us have written about this in different ways. There’s a way in which some of the threats to the future come from a kind of uncontrolledness of the world. You’ve written a lot about the vulnerable world—having a lot of actors who have access to very dangerous technology that can destroy everything. That discourse leads very quickly to this very intensive need for control.

Michael: Yeah.

Joe: And in particular, political control—an especially scary form of control where you need to lock down the access of more and more actors to more and more stuff.

Michael: Or do surveillance. There doesn’t seem to be any particularly good solution.

Joe: Yeah. It'd be interesting to see if you get an Ostrom-like thing. There's no iterative potential with these one-strike-and-you're-out cases. I imagine self-organizing systems do better when there's a chance to be robust and iterative and messy; a rule of the form "you're just never allowed to have this thing happen, ever" ends up being a worse fit.

Michael: No, yeah, all the violations in her examples are relatively small and relatively local. So you get a lot of benefit from that. You also get a tremendous amount of surveillance, but in this very friendly way where a lot of it is just being done by neighbors. They know each other. And the rules tend to be of the form: if two people have a conflict, get a third-party witness. If you’re dealing with fisheries or forests or these kinds of things, it’s not going to work a hundred percent of the time. But if it’s working 95% of the time, you’ve solved 95% of your problem. She leans a lot on that kind of argument in a very compelling empirical way. She’s not just talking about it in theory—she’s done the hard slog of finding out how it works in practice.

There is actually a really interesting thing in connection with your suggestion. The examples she chooses involve between 50 and I think 15,000 people, and they’re never huge. And I think maybe that’s quite discouraging for attempts to do it at real scale.

Joe: You’d mentioned open source software as a potential example.

Michael: Yeah, but again, open source projects tend not to involve many people. I don’t know how many people are contributing to the Linux kernel—I’m sure it’s a large number. And even there, they’ve transitioned more to something like a government model with relatively centralized arbiters who have a lot of power. The ability to veto or accept commits is already a governance structure, which maybe doesn’t have quite the same analog in an irrigation system around a city where you just have a lot of farmers separately irrigating. There’s certainly still some centralization of power, but the whole point was it wasn’t being delegated to the city—it was actually being done collectively by the farmers.

Vulnerable worlds and the limits of control

Joe: I'm curious—I feel some concern that I go around saying: there are risks from too much control and risks from too little control, you've got to look at it case by case, reality has a lot of detail. And then from there, people have an intrinsic allegiance to one versus the other; they want to come in with a prior of, well, that involves control. I'm kind of like, is this even productive as a frame? You can notice these tensions, notice the dialectic, but it's not very productive to be like, whose side are you on?

Michael: I think my instinctive response is: you want an allegiance to closely watching the world and then responding contingently, not holding on too closely to your ideological priors. If it turns out that classical liberalism is taking you in the right direction, great. But if you start to see a whole lot of exceptions, you want to think pretty hard about that.

Joe: Yeah, I agree. Though—for example, the vulnerable world stuff is interesting, and I feel like you’ve been admirable in your attention to the painfulness of some of the tradeoffs that the vulnerable world dynamics implicate. Though I haven’t heard you go as hard as Bostrom on, the only solution to this is some intense—I remember you had a deep discussion of privacy at one point. Depending on how you set up the empirical assumptions, the vulnerable world thing can just—Bostrom does this by definition. The vulnerable world hypothesis is basically that set of circumstances such that you need this horrible surveillance apparatus in order to avoid existential catastrophe. And he’s just like, guys, there are ways the world could be such that that’s the only way you avoid existential catastrophe.

Michael: Or his other solution is a singleton, more or less. So you get—

Joe: Or is it? I think there’s some detail where he’s like, either you lock down the motivations or you lock down the access to the tech. He has a disjunctive definition.

Michael: Yeah, I think you’re remembering it slightly better than I do, but there are multiple alternatives, and you read through them and think, I’m not keen on any of this, thank you very much.

Joe: None of them are good. And sometimes the empirical circumstances are such that one is not keen on the necessary solution and one is just in a bad place.

There’s an empirical bet that one makes, which goes to your point about being attentive to the actual circumstance. But interestingly, when you talk about it, you often draw on your intuition. When we’ve talked about this in the past, you’re like, well, as a physicist, it’s very hard for me to argue about exactly why the world would be vulnerable, but I just have this deep intuition that physics will make this sort of technology available. And that’s an interesting case where it is a question of how much we want to bet on that intuition. Often, when people’s threat models for the future become sufficiently like the vulnerable world, I sort of tap out, where I’m kind of like, well, that’s the vulnerable world case, that’s going to lead us to a global surveillance state. There’s some way in which I’m actually bracketing some of those as an especially hard case—let’s see if we can win in the other ones. But it’s interestingly difficult to make those bets at the level of rigorous empirics, because you just don’t have that.

Michael: Yeah, it’s of course about any possible future at this point, and that’s part of what makes it difficult. You’re implicitly taking points of view which are quite strong and quite hard to justify, no matter what your intuition says.

Death and sincerity

Michael: I want to come to a set of stuff around death and sincerity that you’ve written about that I found really interesting. You have this great phrase—I think it’s your review of Atul Gawande’s book Being Mortal—you talk about this observation that death often brings life into focus. Encounters with death: the person who gets cancer and all of a sudden they know exactly what they need to do, or they have very strong goals where maybe before they didn’t. It brings people into focus, and it also—I don’t know if you used the word there, but you’ve used it a lot elsewhere—it makes people more sincere sometimes.

I thought this was really interesting. You talk about having it as an aspiration for yourself. The most basic question, which I have some instinct for but don’t quite understand, is: why does it do this? And on the other side, why is it so hard to be sincere some of the time? You talk about fake thinking versus real thinking as well—a separate line of thought, but clearly very connected. It is very tempting to do fake thinking when you’re in creative work, which is a little bit detached from the world. Why is that the case? Why is the threat of death so helpful here, and how do we simulate it in other ways?

Joe: I love this question. Partly—it’s interesting, with some of these terms like sincerity or fake thinking, it’s almost this question of: why would you do the other thing? What’s good about the other thing?

Michael: You can give signaling answers to that question—it’s about achieving social goals, so you pretend. And you give some nice examples where it’s very hard to believe that’s really what’s going on. Somebody has chosen to do something low-status that other people aren’t very interested in, that they’re not going to get paid for. And then they still find themselves doing fake thinking. Why? The social explanations just don’t make sense to me. I think you point this out quite compellingly.

Joe: I think one hypothesis is that—

When I’m doing bad fake thinking as a philosopher, the reason I’m doing it is because it’s easier. It takes a kind of energy to—instead of working with some framework that is familiar and kind of chopping the logic—be like, no, what is true here? And then I’m like, but that concept is a little fake, isn’t it? And I can see myself wanting to be like, can’t we just work with that? And I’m like, what if we really—is that really right? There’s a way in which it’s burdensome and effortful to have to try to refactor your ontology to make sure that you really believe in the way you’re thinking about something.

This sometimes happens with other forms of spirituality too, where people—I don’t want to think about death today, I don’t want to try to live my best life, I just want to watch TV. And there’s a way in which that is easier. If you’re going to get out there and have a bunch of meaning, you’re putting pressure on yourself.

But the death case—let’s focus on that. That’s a great example. Why does the consciousness of death—and this is Heidegger, he has this whole thing about being-unto-death, it throws you out of das Man—I think das Man is actually his analog of inauthenticity, similar to fake thinking. You’re caught up in some social plane, and the contact with your own death throws you into your bare existence and consciousness of your existence as Dasein, something like that.

But what is this thing where you get caught up in das Man? What is inauthenticity? What is bad faith? What is shallowness? There are all sorts of forms of non-ness—non-there, non-real—that are a deep part of our lives and our baseline ways of being. Things I would guess: one is there’s clearly something going on with not wanting to look at things. If your brain is constantly doing “I can’t look at that, I can’t look at that”—or if it has a bunch of inner conflict that it doesn’t want to resolve—there’s a way in which, if your mind is not able to be coherent and sit with itself, and is instead bouncing off of things all the time, that contributes to various forms of unreality.

Michael: Can I give what might be a concrete example? A lot of scientists, particularly often in an academic context, will get quite frustrated because they’re very curious about a set of things, and they get used to the idea that if they apply for grants to do this, they get turned down. Instead they’re doing something else that they know they can get funded for. But it’s adjacent—it’s not actually what they want. I have some sense they’re having to fool themselves all the time, and so the work becomes a little bit insincere, it’s a little bit more fake thinking. Does that seem like an example where this kind of bouncing off is likely almost a survival strategy?

Joe: Yeah. The way I would put it is: when something is harnessing fewer aspects of your being, it often becomes faker. The paradigm of being really present with real thinking in my head is: your whole system is firing on all cylinders. You’re really fully present. I think same with something like an encounter with a terminal diagnosis. There’s a way in which it unifies your system towards an end. Whereas something like doing work that you don’t actually want to do—there’s some part of you that is like, I’m not bought into this project, I’m out. And so you’re losing a part of your brain.

This is a hazy model, but I think at least one dimension of being real, authentic, sincere is getting all parts of your cognitive resources to be together and look at each other and be willing to be in the same room and doing something together. And if half of them are noping out, or pretending they’re not there, or going to sleep because they’re bored—then you’re less alive.

Michael: It seems as though at least part of what’s going on—I mean, we both do this very abstract creative work, and at least some of the time, you don’t know what you’re doing. You’re pursuing something because you have to. Parts of this conversation have been very speculative, but you’re entertaining possibilities in the sincere hope that it does lead to progress on goals which you really deeply care about.

I’m just thinking about death—I suspect a lot of people essentially quit their job when they get a terminal diagnosis. Sometimes just for practical reasons, but sometimes for meaning reasons. It’s interesting that for some other people, it’s the opposite. I believe George Orwell wrote 1984 after being diagnosed with tuberculosis. And there are a number of examples where somebody really—it seems to have concentrated their attention enormously in the way you’re describing.

Actually, both of those things are consistent with what you’re saying. It’s forcing you to choose. Am I really all in on writing this book? Or do I really want to go and spend every waking minute with this person or that group or in this location? It’s a symmetry breaker.

Joe: I still love this question of why does it do that, and why do we not ordinarily do it? Because we’re all dying all the time.

I think one thing playing a role is resource preservation. When I’m doing bad fake thinking as a philosopher, it’s because it’s easier. It takes energy. And I think there’s a way in which trying to have a certain kind of spiritual orientation stably—it’s burdensome and effortful.

Michael: Kevin Kelly has this great story about how he decided to set himself a deadline six months in the future, and that was going to be his death date. And he was going to try and live literally as though he was going to die then. He gave away his money. He really massively restructured his life and he tried as hard as he could to prevent himself from thinking about things after that date. So he’s gone through a very—it’s a LARPy kind of thing, but which he took very seriously—a practice death. And I wonder to what extent that becomes a reusable resource, in the way you’ve talked about wanting for yourself to live with that kind of sincerity all the time. Maybe that’s a way of permanently increasing your sincerity or living more fully.

Joe: That’s really interesting. How much in the future was it?

Michael: Six months. Which is a good period of time. You can do a lot in six months, but you’re also going to be thinking a fair bit about it if you know you’re going to die in six months.

Joe: Yeah, that’s crazy. I feel like if you were actually doing that, you would just make all sorts of mistakes. There’s something often wrong about pretending that something is high stakes when it isn’t. But it certainly seems interesting.

I’m sort of skeptical that it’s possible to—at least in my experience, the effort and the project of trying to have a certain kind of spiritual orientation stably does not ever involve a single durable “and now I am always something.” It’s more like diverting a river, where your mind has some—you can have epiphanies that change how you are, and it has some structure, almost like a key that opened some lock in your mind. That does change the structure of your mind. But if you keep hitting that motion over and over, it wanes in its impact on your orientation.

That idea—at the time you might have thought, if I just remember that I’m going to die in 30 years—sometimes I’ll watch a movie and come out with some intense sense of existential urgency. And then why does that go away? There’s some way in which your brain—it’s a novelty thing maybe, or it gets the message and moves on, or your physiology changes. But in my experience, it’s much more about having an ongoing injection of effort and new ways of coming at it, and continually, slowly, patiently becoming more the type of person you want to be, rather than a thing that always works.

Michael: For Kelly, at least in this particular case, there was an ongoing process. There was always this event slowly marching closer one day at a time, where every day he had to get up and think about how he was restructuring his life. He apparently sent a whole bunch of checks for $500 or $1,000 to friends anonymously—basically spending down his relatively meager wealth at the time. And just the choice of where am I going to be, who am I going to spend time with—it sounds like he considered that very deeply over six months, reconsidering and actually moving himself around. So in some sense it was a stable focus that required continual decision-making, which is a slightly different orientation than what you’ve described. Maybe it was very helpful.

Joe: Yeah, I’m often more optimistic about practices—something where you do a thing and you actually have to be doing something each day that is renewing. Just having had a thought, or even an experience, or even a quite profound experience—at least it’s harder for that to have a lasting impact. And even the thing I was thinking—I don’t know what happened to Kevin’s attitude towards death after he passed this point—I imagine it was a quite formative experience, but I would also imagine that it is nevertheless an ongoing challenge, insofar as there was some sincerity that he gleaned from that, to hold that going forward over many years.

Michael: Can I ask you then: what do you feel has increased your own sincerity or your own depth of engagement with the world the most? Is there any practice that you’ve found that you’re unexpectedly pleased with?

Joe: Certainly death is one instigator. I’ve done a bunch of meditation and adjacent practices, which I think were a little less about cultivating sincerity and more about something like presence—a kind of prerequisite poise in engaging with the world.

I think actually some kind of ethical inquiry has made a difference for me, in the following sense. A lot of what can feel like insincerity is if you have a kind of background agenda that you’re not acknowledging. There’s some exercise where you go: what actually am I doing, all of it? And you try to have a full accounting, where there’s no secret extra thing that you’re not looking at. You’ve brought to mind and integrated all of your motivations. Obviously you can’t excavate every unconscious whatever, but something where you’re not holding back.

There’s no “I’ve got my life, and then there’s this other thing I’m doing on the side.” It’s like, no—there’s only this one chance. If the thing you want is power and status, or you want to play video games—whatever your secret agenda that you just want—you can do that. Go for it. Sincerely. You can integrate the full range of your motivations into a sincere orientation towards your life. I think that helps partly because it starts to seem less like some motivations are privileged as part of your official internal narrative and some have to be in the background. It’s more like you can have your whole life and really ask yourself, what actually matters? How do I actually want to make tradeoffs? You can ask that question with your whole being.

Grace as integration

Michael: I just happened to see a little video involving the soccer great Ronaldinho. Something that has struck me often about him is how relaxed he is. He shows no hesitation in his physical movements. He seems to speak very freely. Obviously I have no idea, but it seems very consonant with what you’re describing. He seems very integrated in at least some ways. I wonder if that’s connected with his ability to bring everything he had to his football.

Joe: I think the notion of grace is a very interesting analog of some kind of integration. I always use the word poise, and this is often a word I use for a kind of mental integration. But I think you see it in really beautiful dance and beautiful physical movement—a kind of non-conflict between parts of the system. The whole thing is working together.

Maybe he’s managed it at the mental and the physical level, and they’re integrated. It’s interesting: with really beautiful, graceful movement, there’s often a deep connection between the mental and the physical.

Michael: I think we’ll end there, on a balance between mental and physical grace. Seems like a very nice place to end.

Joe: Awesome.
