In today's episode, I have the joy of interviewing Julia Galef. Julia and I talk about updating your beliefs, the difficulty of fighting our biases, seeking truth, and her new book, The Scout Mindset.
Today's episode is sponsored by LaunchDarkly. LaunchDarkly is today’s leading feature management platform, empowering your teams to safely deliver and control software through feature flags. By separating code deployments from feature releases, you can deploy faster, reduce risk, and rest easy.
If you enjoyed this episode and would like me to discuss a question that you have on the show, drop it over at: developertea.com/contact.
If you would like to join the new experimental Discord group, reach out at developertea.com/contact, developertea@gmail.com, or @developertea on Twitter.
If you're enjoying the show and want to support the content, head over to iTunes and leave a review! It helps other developers discover the show and keeps us focused on what matters to you.
Transcript (Generated by OpenAI Whisper)
Hey everyone, welcome to Developer Tea. Today's episode of Developer Tea is the second part of my interview with Julia Galef. If you missed out on the first part, make sure you go back and listen to it before you listen to this one. Julia is the author of a book called The Scout Mindset, which is available on Amazon and at your local bookseller, retail stores, that kind of thing. Julia is also the host of Rationally Speaking, which has been around much longer than this podcast. Go and check that out in whatever podcasting app you're currently using. If you don't want to miss out on the next episode of Developer Tea, which should be coming out in just a couple of days (it should be a Friday refill coming up next), then go ahead and subscribe to this podcast in your podcasting app of choice. Thanks so much for listening. Let's get straight into this interview with Julia Galef. Yeah, I do believe that a lot of the important work on this is not so much about whether we can fix our brains to stop being biased, and I may be wrong about this. I think the important work is being done to understand how we subvert that in our real actions in the world, or in the things that we care about. How can we, you know, bias ourselves toward something else to balance it? Or compensate? Yeah, exactly. Yeah. Which I think is not so much to be... Right, exactly. His point is, I think, to some extent, that he's trying to be humble. And you know, it's an important point, because when you write a book about rationality or irrationality, people often have this kind of suspicion that you think you're rational and you're telling other people they're irrational. And so I totally understand the impulse to try to avert that suspicion by saying, you know, oh, I can't overcome these biases myself. Which lends credibility to the work in, you know, the eyes of most people, right? Right. 
I think it helps make people more receptive, especially since, you know, Kahneman's book, Thinking, Fast and Slow, was about the existence of these biases. He wasn't trying to offer a solution. Yeah, right. So he didn't actually need to convince people; it was very descriptive in nature. Right. Yeah. But, you know, I would be surprised if he hasn't made any progress at all in noticing these biases in himself and overcoming them. Just talking to him... I've had lunch with him a couple times, and he came to a couple workshops that I ran. He's quite good at avoiding overconfidence, which is another bias that he talks about a lot, and saying, well, you know, this is speculative, or, I can't be totally sure about this. Right. And so I think that's a good example of someone overcoming the innate human tendency toward overconfidence. So I would give him more credit than he would give himself, is what I'm saying. Yeah, there is pretty good evidence that merely having vocabulary changes behavior. Right. And knowing, understanding, you know, that there is a term for this can give you a chance to label something, which gives you another chance to observe it. And this has actually been shown in kind of a parallel way: if you have the name for, let's say, a type of plant that is otherwise foreign to you, right (this is so strange, it's actually fruits that they used in the study), people who live where that plant grows and who have vocabulary for that fruit notice the fruit more often. They believe that it exists more readily than people who don't have that vocabulary. Right. In other words, it makes total sense when you apply... and I'm taking a little bit of liberty with the study, but you can apply that to other things and say, hey, you know what, overconfidence is more prevalent than we think, because I know what it is, so I can recognize it, because I have the words for it. That's right. 
Yeah, I think that's absolutely true. And I think even better than having the words is having kind of salient examples in your mind of what it looks like to be in soldier mindset and what it looks like to be in scout mindset. And so part of my goal in writing the book was... you know, I don't think that there's a set of words I can give someone that will magically make them change the way they think. But I did pack the book with lots of examples. And so I was hoping that just increasing the salience of these examples would make people better at noticing themselves in soldier mindset, and also better at having kind of templates for: okay, this is a way you could react to criticism, or this is a way you could react to evidence that contradicts something you believe, that's different from my default way of reacting. So just having those templates as role models in your mind, to do instead of your default, I think is really helpful. I found it helpful, anyway. I did. That is such an interesting point you make. Many times on this show, I've talked about the importance of having a story to attach things to. How so? In the sense that if I tell you what overconfidence is using just kind of clinical, you know, definitions... Oh, yeah. Then you might, you know, understand it. You might try to draw those connections to something that you know, but our brains are not really... if I understand it correctly. Yeah, they're not designed for that. We're designed to understand more practically how things connect and how they impact us directly. And so when we hear a story, it's no wonder that stories communicate much more effectively to people and move people, both emotionally and in terms of changing their actions. Yeah. Than, let's say, pure data, right? Yeah. Even data we need to wrap in some kind of more tangible descriptor. Exactly. Something that provides a padding for that data. Yeah. 
That's so well put. I think this is a really important and underappreciated point. And when people ask what something is, or for the definition of something, I've been trying to get better at explaining myself by way of pointing at examples instead of just giving an abstract definition. And it seems to be a lot more effective. And I think so. Yeah. Absolutely. That's how we remember things, right? Yeah. Absolutely. We know that's how we remember things and learn things, and we know that's what makes for memorization also. Yeah. And yet that alone didn't cause me to remember that principle when I was trying to communicate to people. So I had to have someone make that connection for me explicitly. And yeah, I think especially because we humans are kind of social creatures and social learners, we do seem to be built for really easily copying the behaviors and the attitudes of people around us. And so I was trying to exploit that property of human psychology as well, and give a bunch of examples of people behaving in ways that could be more easily copied once you have that example in mind. And so, for example, one small moment that really stuck with me and has helped me change the way I react was when a friend of mine... I guess someone was arguing with him, and he realized he was wrong. And he said so, but in just this very cheerful, nonchalant way. He was like, oh, yep, I take back what I said before, you're right about this, never mind. But he said it in such a relaxed and truly matter-of-fact way that it didn't... you know, often when people quote-unquote admit they were wrong about something, it sounds... well, very often they don't even do it at all, but even when they do, yeah, it's kind of sheepish or defensive, or it sounds like they're confessing a sin and they're kind of trying to atone. 
It's kind of a big deal morally or something, right? Right. And, you know, sometimes I think, yes, being wrong means you screwed up somehow. But most of the time I think being wrong just means... no, you didn't do anything wrong. You were processing the information you had the best you could, with the limited time and computational power that your brain has. And so you formed a conclusion that was wrong, but it was a perfectly justifiable thing to believe given the information you had. And you should not feel sheepish when you learn new information, or when it's pointed out to you that you were missing something. It should just be cheerful and matter of fact. Like, oh, yep, okay, I'm revising that view. And so, you know, I think I intellectually knew that being wrong doesn't mean you did something wrong. If you'd asked me before this moment, I would have said, yes, I agree with that. But having this very tangible example of someone reacting in that way to learning they were wrong made it so much stickier, and made it possible for me to react that way in the future as well. Yeah. So I have read, and I don't remember where, so I apologize, this could be complete garbage, that if you were to look at this from an evolutionary psychology perspective, the reason for this is social, right? If you are wrong about something, and you on average are only living for 40 to 50 years, maybe sometimes 60, then the social credit that you receive is going to be based on how often this person is right. And if they're wrong, we can't really trust them. They're not going to be able to climb the social ladder. They're not going to be in leadership in our tribe. Because it's dangerous, right? It's dangerous to be wrong when being wrong means that you go without food for a whole season, right? So, yeah. But now we can update that belief kind of cognitively, if not evolutionarily. 
We can say, hey, we can be cheerful about being wrong because it no longer means going without food for a season. Now it means that we can learn something, right? There is actually only upside to this: recognizing that the social signals are no longer, you know, valid. There's not the same reason to outcast somebody for being wrong that there might have been, you know, 10,000 years ago. Yeah, it's a very interesting and kind of compelling evolutionary argument. I've made similar arguments in the past, but I have to admit I'm still kind of confused by how off our intuitive predictions seem to be about what happens when we say we were wrong about something. Yeah. We really do, even in cases where being wrong actually did have stakes. Like, yeah, you were wrong about a decision that you made for your team or your company or something. And we feel like admitting we were wrong will cause everyone to hate us or shame us or something. We catastrophize. And yet, the vast majority of the time, in my experience and in the experience of other people who have talked to me about this, you know, leaders of teams, CEOs, et cetera, they're just pleasantly surprised by how positively people react when they say, yeah, you know what, guys, I was wrong about that. And so, yeah, the people I've talked to who are unusually good at noticing when they were wrong and saying so matter-of-factly, what they've told me is that they didn't start out this way. They started out feeling really averse to ever admitting they were wrong about something. And then they forced themselves to do it a few times, and noticed with pleasant surprise: oh, this actually went way better than I thought. People reacted so much better than I thought they would. And so they did eventually get to the point where they could do it more easily. 
But it took the repeated practice of seeing that the outcome wasn't nearly as bad as they kind of emotionally expected it was going to be. And I do think it's an interesting question: why our brains seem to expect really bad outcomes for admitting we were wrong when, in practice, that doesn't match reality. Yeah. And we protect ourselves sometimes in really obvious ways. And it's very clear when somebody's being defensive about being wrong, which, I feel like, almost has an even more detrimental effect. It seems like it's hard to teach ourselves that that's actually worse, you know, certainly in some circles. I can't help but think... my wife and I have been very intentional with our now almost four-year-old, which blows my mind. But we teach him that being wrong is okay. And it has this funny effect where if we are wrong, it's hysterical to me. He very cheerfully lets us know that we were wrong. He's like, you were wrong! And, you know, it's this moment of reminding me that this is fine. Like, it's okay. And it can be something that we can laugh about together, we can learn about, you know, and usually it's about the smallest things. But the great part is that when he's wrong, he also says it the same way. It's kind of equalized. Oh, that's so great. And every time it happens, I tell my wife, this is a parenting win. We figured something out here that we need to write a book about or something one day, because this is really important. That is a win. And you should write a book about that, or at least popularize that, because I think that's a really important principle of parenting that hasn't occurred to a lot of people. And as you were talking, I was remembering that my parents were also pretty good about this. And I noticed it even when I was seven years old or something, and appreciated it. 
But when we disagreed about something, like, you know, a particular rule that they had for me or something like that, they would sometimes come back later and say, you know, Julia, we thought about it, we talked about it, and we decided you were actually right about this and we were wrong. And so we'll change that rule or something. And I was appreciative that they were actually considering my arguments seriously. But I also admired it. I just thought that was a really cool way to be, and I wanted to be like that as well. So yeah, it's cool to have some independent confirmation of that parenting trick. Welcome to the sub-podcast of Developer Tea. This is my parenting podcast. One more thing about parenting that we've learned recently and that I feel like is applicable is... I just lost it. I had it in mind and it was... I was talking about my parents telling me they were wrong. And, oh, okay, yes, I remember now. So you mentioned this idea that your parents kind of revised their position with you. They came back, they admitted they were wrong, et cetera. So I read recently about the way that my child's brain works, how it's different from mine, and how one of the biggest parenting mistakes you can make is assuming that your child's brain is effectively like an adult's brain, just in a child's body: that he can process the same things that you can, at the same speed that you can, in particular. And what it mentioned was the idea that his registering (in this case, my son Liam, which is why I keep saying his) of the information that I'm giving him, the words that I'm saying to him, is offset by a pretty significant margin. So it takes him about 30 seconds to understand really what I'm saying to him. And so when I get impatient within about 10 seconds, he's confused. He's not being obstinate. He's confused why I'm impatient, because it hasn't even registered to him what exactly it is that I want from him. Right. 
And so we've tried to understand more in terms of how we can try to think in the same way that he's thinking and give him, you know, advance notice. For example, he loves... he's crazy about Mario. Advance notice of, hey, you're going to have to turn off Mario in, like, five minutes from now. The time's coming, you know, it's coming up. Rather than saying, all right, it's time to turn it off, and then him being like, what? No, there's no way I'm turning this off right now. You just said it, and this is news to me. I had plans here. And the thing that really struck me was the idea that I was expecting something from him that I would never accept being expected of me. I was going to say, I would feel the same way, actually, if somebody was like, do this now, right? Exactly. If I'm told, here's the thing I'm going to expect you to do in the future, then I have time to adjust to it and expect it, and it doesn't feel like it's being suddenly sprung on me. And so, yeah, I would want someone to treat me that way too. Yeah, exactly. And it was very impactful for me from an empathy standpoint: this is another human being. And I guess, to get out of the parenting podcast and go back to our regularly scheduled programming, this is true in other relationships. I think we are very prone to not recognizing, you know, what it would feel like to be on the receiving end of whatever it is that we're putting out into the world. That's not true for everybody. Some people are more aware than others. But certainly we have this lens that prefers our own perspective. And I imagine this is very much related to our soldier mindset, in the sense that it's confirming what we believe is right. And we feel justified in our actions in a given moment, but we're easily willing to judge another person and their actions in that same moment. Right. 
Yeah, there's this expression that, you know, when I screw up, it's because I'm having a bad day, but when my coworker screws up, it's because he's incompetent. Yeah, exactly. Yeah, there are a lot of versions of that. And yeah, this is again another thing where I think I'm probably better than average at it, at least at cognitive empathy, where I can try to understand why someone thinks what they think. Emotional empathy is a little bit different, although I also try to be good at that too. But I still catch myself failing at it. Like, the other day I was trying to have a productive disagreement with someone online. And I was about to respond to them. And I don't remember all the context, so I won't try to give it. But I was about to respond to them saying something like, so in your mind, such and such, that's just a coincidence. And I didn't think that it was a coincidence, but it seemed like that's what the person was arguing. And then I stopped, and I tried to listen to my words as if someone was saying them to me. And I realized, oh, the phrase so in your mind, that sounds really kind of condescending, or it sounds like I'm caricaturing their view. And I hadn't been aware that that's what I was doing when I was typing those words. But I was feeling kind of annoyed at them, or kind of, you know, disgusted at their claims. And that came through in my words, even though I was trying not to let it. And so I really do have to consciously go through this check of, how would this sound if someone said it to me? And I often realize that I'm unconsciously betraying my bias in the way that I express my disagreement, even though I thought I was being good about it. And then I have to revise it and make it better. So, in your very ridiculously wrong mind... I can't imagine what someone could object to in that. That's good. 
We'll be right back with the final portion of my interview with Julia Galef, right after we talk about today's sponsor, LaunchDarkly. Today's sponsor is LaunchDarkly. LaunchDarkly is today's leading feature management platform, empowering your teams to safely deliver and control software through feature flags. And I want to go off script here for a second and talk a little bit about the fundamental value that LaunchDarkly provides to you. If you're listening to this right now and you're thinking, oh, feature flags, we already have that, we built that... well, I want to give you just a moment of, hopefully, some advice. All right. If you're building your own feature flags, this is a very dangerous scenario to be in. Not only is it dangerous, but it's also not very extensible. You're not going to be able to integrate that with a bunch of other stuff. You're going to need somebody who knows that feature flag system inside and out. Now, if you only had one or two feature flags, right, then I can imagine you're saying, okay, well, I'm not going to go through the process of integrating an entirely new product just for my one or two feature flags. But the people for whom LaunchDarkly makes the most sense are also the people who think that they need to build out a robust system of feature flags in their own software. There are a few problems with this. The biggest problem is that feature flags are a huge opportunity for bugs to be introduced. And in order to mitigate that, you need to really invest a lot of time and energy, right? That means that you're paying your developers. If you're a manager, or if you are controlling a budget, you're paying your developers while they're spending time developing feature control systems rather than focusing on the software that matters. They're developing this kind of meta-software. And it's not their bread and butter. It's not what they're supposed to be really good at doing, right? 
You're not actually investing in the product. You're just investing in control systems. And by the way, the moment that those fail, or the moment that that engineer leaves, if you don't have excellent documentation in place (which also costs time, costs money, and often goes stale), well, once again, you're in a really tough scenario, because these things are very important. Feature flags are very important to the running of your software, whether it's because you're releasing features at a time gated away from when the code is complete, or maybe you're releasing them partially to some users. LaunchDarkly can do all of this. And they have SDKs. Literally, it says this on their website: they have SDKs for days. They have client SDKs for Android, C++, Electron apps, iOS, Gatsby, JavaScript. Of course, they also have server-side SDKs: JavaScript once again, Go, Golang, they have Erlang, they have C++ on the server side. They have all of these SDKs. So you're certainly not going to be up a creek when it comes to integration. So go and check it out: head over to launchdarkly.com. Once again, we went a little off script here, because I wanted to convince you that you don't need to build your own feature flag system; you'll be great with LaunchDarkly's, and you're going to get a lot of benefit with a much lower lift. One more reason here that I just thought of: if you have multiple clients, then you're going to have to implement those feature flags in all of those clients, on all of those different platforms, separately. That's a huge value-add for those SDKs I was just listing off. So go and check it out: head over to launchdarkly.com. Small businesses and huge enterprises are both relying on LaunchDarkly already, people like IBM, people like Glowforge, people like O'Reilly Media. Go and check it out. That's launchdarkly.com. 
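To make the idea in the sponsor read concrete: the core mechanic of a feature flag is that both code paths ship in the deploy, and a runtime flag (in a real system, fetched from a service like LaunchDarkly rather than hard-coded) decides who sees the new one. Here is a minimal, self-contained sketch of that idea, with a simple percentage rollout; the flag names, rollout numbers, and in-memory flag store are all hypothetical, and this is an illustration of the concept, not LaunchDarkly's actual SDK API.

```python
import hashlib

# Hypothetical in-memory flag store; a feature management platform would
# serve this data remotely so it can change without a redeploy.
FLAGS = {
    "new-checkout": {"enabled": True, "rollout_percent": 25},
    "dark-mode":    {"enabled": False, "rollout_percent": 0},
}

def is_enabled(flag_key: str, user_id: str) -> bool:
    """Return True if this user falls inside the flag's rollout bucket."""
    flag = FLAGS.get(flag_key)
    if not flag or not flag["enabled"]:
        return False
    # Hash (flag, user) into a stable bucket 0-99 so each user
    # consistently sees, or doesn't see, the feature across requests.
    digest = hashlib.sha256(f"{flag_key}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < flag["rollout_percent"]

def checkout(user_id: str) -> str:
    # Deployment and release are decoupled: both paths are deployed,
    # and the flag controls which one actually runs for this user.
    if is_enabled("new-checkout", user_id):
        return "new checkout flow"
    return "old checkout flow"
```

The stable hashing is the design point worth noticing: a naive `random()` check would flip a user between the old and new flow on every request, while bucketing by user ID keeps the experience consistent during a partial rollout.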
Thanks again to LaunchDarkly for sponsoring today's episode of Developer Tea. So the book comes out tomorrow. Yes. We're recording this... it's going to be out when this episode goes live, certainly. You also have been involved with the Center for Applied Rationality. Yes. Talk a little bit about what this is. It's something I encountered a while back, by the way, and I thought it was really interesting. And I believe, if I remember correctly, I saw some videos that were all about actually taking the things that we've been talking about and doing what we were saying earlier, which is trying to figure out what we do about this stuff. It's not just about understanding what these distortions are or whatever. It's: what do we do now? And I'd love for you to talk a little bit about that. But also, maybe as we're doing now, we can talk about some of the ways that I can recognize when I'm in that soldier mindset versus scout mindset, if you have any kind of final tool that you want to provide as an example of what's in the book. Sure. Yeah. Well, I'll just briefly say first that I co-founded the Center for Applied Rationality in early 2012. It's an educational nonprofit in the Bay Area that runs workshops on, basically, reasoning and decision making: how to apply a bunch of these concepts from cognitive science or philosophy to your actual decision making about your life and career and so on. And so I co-founded it in 2012 and helped run it and teach its workshops until, I guess, early 2016. So I'm not at CFAR anymore. And they've pivoted to some extent, focusing more on researchers, researchers focusing on AI. So it's less of a general all-purpose educational nonprofit than it used to be. And so, yeah, I can definitely talk about CFAR, but I don't want people to assume that that will match the current mission of CFAR. 
But yeah, you know, we would take principles like the thing we were talking about with Daniel Kahneman, about how our predictions are systematically over-optimistic: assuming we're going to finish things faster than we will, or that things will take less time than we expect. And trying to notice that and correct for it using techniques like reference class forecasting, which is essentially using the outside view, looking at previous examples, or examples from other people, to see how long those took, and just trying to find ways to apply that to improve your own decision making and planning at work. So things like that. And then, to your question about practical ways of getting better at noticing whether you're in scout or soldier mindset in your own life: yeah, I talk a lot about this in the book. And one kind of category of technique that we've touched on a little bit already in this conversation is the thought experiment. There are different versions of thought experiments. One that I talked about earlier is the one where I ask myself: suppose this study had found the opposite result, so suppose it supported my views instead of opposing my views; how would I judge the methodology of that study in that case? And so that can help you notice when you're applying a different standard of rigor to evidence depending on its conclusions. And so I do things like that also when, I don't know, suppose I see an article online criticizing feminism or something, and the critic of feminism gives examples of, well, here are some people on the internet who were feminists who said awful things. And my reaction is, well, that's not fair. You can't just cherry-pick a few examples of random people on the internet being jerks and use that to criticize a whole ideology. And I think that's true. 
But then you have to do the thought experiment of asking, well, suppose they were criticizing an ideology I dislike. How would I react then? And notice: oh, I wouldn't have objected to this technique if they were criticizing the ideology that I dislike, like, I don't know, conservatives or something. And so that kind of thought experiment can help you be more aware of the can I accept this? versus must I accept this? property of our brains. I call it the selective skeptic test. But then there are other kinds of thought experiments too. Like, there's an outsider test I talk about, where you try to become more objective about a situation in your life that you're dealing with by imagining that someone else was in that situation, and thinking about, well, what would I think that person should do if they were in the situation of trying to decide whether to, you know, quit grad school or not, or whether they need to fire this person or not. And it's really just quite striking to me, still, how different the situation, how different the right course of action, can seem when all I change about the situation is whether or not it's me who's in it. So I think that kind of thought experiment can also be really instructive. That's excellent. There have been some that I've seen that are very similar to this. It's this whole class of, basically, trying to take yourself out of the equation in some way. It seems to be kind of a category of thought experiments. Yeah. Another good example of this is: if you're facing a dilemma, what advice would you give somebody else who's facing the same dilemma? Right. And then, why is it different? It's not necessarily invalid. Yeah, no, that's a great point. There can be disanalogies where, you know, okay, maybe their situation actually is different, or maybe you want to hold yourself to a higher standard than you would hold someone else, or something. 
But you should at least be consciously aware of those differences. Exactly. So you can ask yourself, do I think this is a valid reason to behave differently than I would tell someone else to? Right. And I do this regularly, you know, and I think some of this comes down to even simple things like preference. For example: should I buy this very expensive guitar? Well, do you like guitars? I like guitars. But if the person that I'm giving advice to asked me, I probably would say no. But I like guitars, so, like, maybe I should. You know, I'm not necessarily saying that everything is justifiable, or that you should always use this as a crutch. But becoming aware of your reasoning, I think, is a huge step towards potentially more effective thinking. I hesitate to say anything as platitude-level as that. But no, I think that's an important and underrated point, honestly. If we ever do notice ourselves being biased or, you know, in soldier mindset, we tend to feel sheepish about that, or we feel disappointed in ourselves, or we feel bad. And, counterintuitively (actually, this goes to your question that I dropped earlier about what's a counterintuitive thing in your book), I think in fact you should feel good when you notice yourself being biased or engaging in motivated reasoning, in soldier mindset. Because, you know, soldier mindset is very innate and it's very universal. It's just kind of baked into how the human mind works. And so if you don't notice it regularly, what's more likely: that you are an exception to how all of humanity thinks, or that you're just not very self-aware? And so noticing these things, noticing yourself doing this stuff, is not a sign that you're unusually bad at reasoning. It's a sign that you're unusually good at self-awareness. And that's a crucial step on the path to actually changing the way you think. Yeah. 
I have had this big swing personally, and I'd be interested, you know, everybody goes through this their own way. A big swing away, I feel like, from a period where I kind of treated this rational approach religiously, in the sense that my drive to become more rational was a moral obligation for me, and that choosing things that are irrational, like for example spending money on a guitar simply because I like it, was somehow wrong, or that finding a rational pathway is even possible in that kind of scenario. How can you weigh your subjective appreciation for things? It's very difficult to do, and a lot of our human experience is very much a subjective experience. And so, you know, when we try to take these subjective experiences and find a rational pathway, it's very easy to heap guilt on ourselves or, much worse, you know, begin to pass judgment on other people when we see things that they're doing that are completely irrational. It sounds very much to me, you know, having grown up in the deep south, seeing religious environments all the time, it has the same feel to me as someone glaring at somebody who has a tattoo, you know? It has that same feel of, well, this really doesn't matter very much, right? But this person is taking a route that, from my very objective position, which is not objective at all, but I feel it's objective, is wrong, right? It's wrong in the sense that they're trying to do something that, for some reason, I don't believe they should, and the should is coming from my understanding of a rational path. And I've seen that become a real trap, and I also fell into it myself, thinking, okay, rationality is the goal. I do think truth and rationality have a large overlap, but because the human experience is not purely rational, I don't think they're one and the same. I don't think it's, you know, a perfect circle overlap, certainly. Well, that's very interesting.
I think the way a lot of other people understand what it means to be rational is different from how I understand it, or what I mean by the word. When I talk about rationality, it's not something that excludes, like, buying a guitar because that makes you happy. I don't see that as irrational, but I know that a lot of people might call that irrational, because you can't justify it in terms other than just your own enjoyment. But I think your own enjoyment is a perfectly valid reason to do things. A different thing that I would be more inclined to call irrational is if you have strong reason to expect that you will regret buying the guitar. Like, you know that there are other things that you really need the money for, that actually are more important to you than the guitar, but you do it anyway, because in that moment you just really wanted it and you're kind of ignoring the broader picture. So I might call that irrational. Although even so, there are a lot of cases where it might seem like that's what's happening from the outside, but when you really dig into the details, it actually makes much more sense what the person is doing. But I just wanted to contrast those two situations: buying something because it makes you happy or gives you enjoyment, there's nothing inherently irrational about that at all. I think that's actually pretty rational. But doing something that you, yeah, I was just repeating myself. I was just going to say that, I guess, the part of my brain that breaks down a little bit in my experience on this has been that when I hear rational, I hear specific, or explicit. Yeah, discrete, maybe, is the right word for it. Right. I want the exact, you know, where is the tipping point on this guitar purchase, where it flips from rational to irrational? Is there a way?
And because I can't really pinpoint that, that's what has given me this ground to feel like, okay, I can't pinpoint a specific tipping point on that scale from this is a perfectly rational decision to buy this guitar to this is absolutely insane what you're doing. And, you know, theoretically there would have to be a point there, right? But in some world, all points on that scale could make sense, you know, for a given person. And so that's what has given me this feeling that the drive to find that specific point maybe is the error, right? It's not necessarily the desire to be rational that I want to depart from. It's the drive to say, well, once you've spent that sixth dollar rather than the fifth one, the sixth dollar is really where you go over the edge, and the fifth one was fine, right? Yeah. And making it more binary than it needs to be. Yeah, I definitely don't think real life is the sort of thing where there would be these discrete cut-offs that you could draw, where it's, you know, a great decision before the cut-off, and then you spend one cent more and it's a terrible decision. Most things in life, I think, are kind of spectrums, where, I don't know, I guess it depends on how you're conceiving of a good or bad decision. In theory there could be a tipping point, but, I don't know, it's a little too abstract for me to think about clearly. But as a general rule, I think things are messy, and you have to be satisfied with just, you know, using heuristics and trying to take your best guess and making rough estimates. And that's not irrational, that's just inevitable. Right. Yeah. We don't have perfect information, and we don't have infinite computing power and time. And so this is the best we could possibly do. I don't think we should feel bad about that. Yeah, that's great advice. Probably something I needed to hear. Julia, thank you so much for going over on time. Oh, my pleasure.
Yeah, this was such a fun conversation. And I typically ask these two quick end questions, if you have, yeah, sure, a couple of seconds here. The first question that I like to ask is, what do you wish more people would ask you about? I guess the thing I don't often get to talk about, that would be fun if people asked me about, is what I've learned about having good podcast conversations myself. Maybe that's too meta for you. But it's a thing I think about so much, and no one ever actually asks me about it. So yeah, that's one thing. Or, I guess, how to have good disagreements online. That's something I also think about a lot, but it doesn't tend to come up naturally in conversation in interviews. Yeah, that makes sense. It is difficult, I imagine, to say, well, how do I go and tell people that they're wrong? That's kind of a hard thing to organically arrive at, I suppose. Yeah, you know, one thing that I found, which I suspect you already do to some extent, but maybe this won't be apparent to some of your listeners. You know, the way I do my podcast does inherently involve disagreeing with people a lot. And I do tend to disagree with people a lot just, you know, socially or online. So that's kind of unavoidable, but there are things I think you can do to soften the blow of disagreement and make people more open to it. And that includes just your tone; like, just being friendly and warm, I think, helps a lot. But also, I think it's helpful to give what I would call honest signals of good faith, where an honest signal is something that is hard for someone to fake.
So an honest signal of good-faith disagreement might be something like pointing out things that I'm uncertain about, just voluntarily bringing up, like, you know, here's what I think, but I can't be sure whether such and such. Or voluntarily bringing up points that support their side, even if you don't agree with them, saying, you know, well, that doesn't seem right to me, although I would agree that it holds true in such and such cases. Those kinds of things are, I think, a signal to the other person that you genuinely are trying to share perspectives, or understand their way of thinking, or work together to understand the disagreement, and that you're not trying to, you know, shoot them down. And so you can still disagree with people without getting a ton of pushback or defensiveness from them if you go out of your way to give these other signals of good faith and camaraderie. Yeah, that's a really good point. You know, as you were saying that, part of me felt like one of the biggest things I miss in myself is recognizing when I'm not actually doing it in good faith. Yes, well, that's the thing. You have to actually be doing it and not just trying to show that you're doing it, right? Right. And it's kind of this faux approach, and I see this online quite a lot, as if you're trying to be genuine, but it pretty quickly falls apart. I know. It's like when people say, I'm genuinely curious, and then they ask a question that's totally pointed and leading. You know, like, I'm genuinely curious, how can anyone be so stupid as to think such and such? Oh, the classic one is, well, I just think it's interesting. You know, I'm curious. I want to hear more about that. Right. Yeah. I encounter that quite a bit.
So whenever we follow up, maybe in another episode, we can do a whole discussion on how we can be better at disagreeing, even with ourselves sometimes. Maybe that's healthy. Yeah. Julia, thank you so much. One final question from here. If you had, you know, 30 seconds to give advice to software engineers, which we really haven't touched on explicitly too much in this episode, but that is the audience here, what would you tell them? And I'll give you a little more guidance here: in order to become more aware of this idea of finding a clearer map of the territory. Well, so another piece of advice that I didn't talk about, I talked about thought experiments, but there's another piece of advice that might appeal more to software developers than to your average person, and so I'll share that now. And that is the idea of betting on your beliefs, or at least thinking about how you would bet on your beliefs. Because often, you know, we tell ourselves things that kind of sound plausible. But when we're forced to put skin in the game and think about, you know, would I still stand by this belief if I had something at stake, something to lose, that can often force you to realize, oh, actually, I'm not as confident in that as I thought I was. Or, you know, actually, my view is something different than I thought it was when I didn't have skin in the game. And a bet can be anything. It doesn't have to be, you know, betting money like you would at a poker table or something. It can just be any kind of stake. So, you know, if the thing you're telling yourself is, our servers are highly secure, I'm confident in that, then imagining a bet might look like, okay, suppose that I was going to hire a hacker to try to break into our servers, and, you know, I have to pay a thousand dollars if the hacker can do it in five hours or something. And you imagine that very concrete situation.
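[Editor's note: the imagined bet described here can be made a little more quantitative. A minimal sketch, where the function name and dollar figures are illustrative rather than anything discussed in the conversation, of how willingness to take a bet reveals an implied confidence level:]

```python
def implied_confidence(stake_if_wrong: float, payoff_if_right: float) -> float:
    """Break-even probability for a bet: on average, you come out ahead
    only if your genuine confidence exceeds this threshold."""
    return stake_if_wrong / (stake_if_wrong + payoff_if_right)

# "Our servers are secure": you pay $1,000 if a hired hacker breaks in
# within five hours, and win $100 if they fail.
threshold = implied_confidence(stake_if_wrong=1000, payoff_if_right=100)
print(f"Take this bet only if you're more than {threshold:.1%} sure")
```

[If the bet makes you nervous, your real confidence sits below that threshold, which is exactly the signal being described.]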
And just notice, do I feel excited about taking this bet, or do I feel a little bit nervous? And if you feel a little bit nervous, maybe that's a sign that, you know, maybe I'm not quite as confident that our servers are secure as I thought I was when there weren't stakes. Yeah, that's really good. Another really good bet to make on the servers being secure is your Friday night, which is a very realistic thing. Right, right. It goes down at 5:05 on a Friday. Is that really what you want to risk here? That's right. Yeah, and often there actually are stakes for us being wrong. The stakes are just very abstract to us in the moment. We're not really, yeah, exactly. So you have to really think concretely. Okay, here's the thing that happens if I'm wrong, and think about it concretely, and notice whether you feel like you want to take that risk or not. Yeah, absolutely. Yeah. Julia, thank you so much for all of the advice and the very thoughtful conversation, and for pushing me on my own perspectives. I really appreciate the time that you spent, and everybody needs to go and get this book, because we all are probably more wrong, more often, than we realize, and sometimes we forget when we're right. And there are a lot of other reasons to buy this book. Where can people find it? So, yeah, well, you can pre-order it now, but by the time this episode comes out, it'll just be on sale. You can get it on the Amazon site or the Penguin Random House site, where you can buy it from other booksellers as well. Or if you just go to my website, juliagalef.com, you can read more about the book there too. Excellent. Thank you so much, Julia. I'll talk to you soon. My pleasure. Bye. Thank you so much for listening to today's episode of Developer Tea, the second part of my interview with Julia Galef. Of course, if you missed out on the first part, you might want to go back and listen to it.
It'll make this one make a whole lot more sense to you. Thanks so much for listening to this show, week in and week out. We do three episodes a week, so if you don't want to miss out on future episodes like this one, make sure to subscribe in whatever podcasting app you currently use. We have a bunch of interviews coming up in the next couple of weeks of Developer Tea, and, of course, our refill Fridays will continue on. Thanks again for listening. If you want to join the Developer Tea Discord, head over to developertea.com slash discord. Thanks to today's sponsor, LaunchDarkly. If you want to have boring release days, head over to launchdarkly.com and get started today. Until next time, enjoy your tea.