If you have questions about today's episode, want to start a conversation about today's topic, or just want to let us know you found this episode valuable, I encourage you to join the conversation or start your own on our community platform: Spectrum.chat/specfm/developer-tea
If you're enjoying the show and want to support the content, head over to iTunes and leave a review! It helps other developers discover the show and keeps us focused on what matters to you.
This is a daily challenge designed to help you become more self-aware and a better developer so you can have a positive impact on the people around you. Check it out and give it a try at https://www.teabreakchallenge.com/.
Sentry tells you about errors in your code before your customers have a chance to encounter them.
Not only do we tell you about them, we also give you all the details you’ll need to be able to fix them. You’ll see exactly how many users have been impacted by a bug, the stack trace, the commit that the error was released as part of, the engineer who wrote the line of code that is currently busted, and a lot more.
Transcript (Generated by OpenAI Whisper)
One of the most amazing things about the human brain is its ability to make connections, the ability to make connections between things that may seem unconnected at first. This idea is exactly what we're talking about in today's episode. Today's guest, Gabriel Weinberg, is the CEO of DuckDuckGo. He uses this kind of connection making to help him steer his company. And these aren't just random connections. What we're talking about today are called mental models. This is something that we talked about in the recent past on this show and something that you've probably heard about on other podcasts and across the web, because this is not a new concept. But it is becoming more and more relevant for software developers like you. My name is Jonathan Cutrell, and you're listening to Developer Tea. I create this show to help driven developers find clarity, perspective, and purpose in their careers. Gabriel is on the show today to talk about his new book, Super Thinking. This is a big book of mental models, somewhere around 300 of them, that Gabriel and his wife Lauren connect and explain. I highly encourage you to put this on your reading list. You can pre-order it now, or when it comes out on June 18th you can order it directly. Let's get straight into the interview with Gabriel Weinberg.

Gabriel, welcome to the show.

Hi, thanks for having me.

Of course. People who are coming to this episode most likely know that you are the CEO and founder of DuckDuckGo. I'd love for you to take a moment and share what you hope people will remember you for. What do you want people to know you to be? What kind of legacy would you like to leave, I guess?

Interesting question. I am only 39, which sounds old to some people and young to other people. I hope that I don't know the answer to that question yet. I have my own kind of North Star, which is a mental model actually, about a mission statement. The North Star in reality is Polaris, a star that always points north, so you can figure out what direction you're navigating. As a metaphor, it's like a personal mission statement or a company mission statement that directs your activities. Mine is really to maximize making a unique positive impact on the world. There are a number of ways I've been trying to do that to date. One is through DuckDuckGo and helping people really get privacy on the internet. Our mission statement as a company is to raise the standard of trust online. I've also written two books now that are in completely different areas that I thought would also be unique impacts, and it kind of maximizes because many people can read the books. I hope to continue that mission statement. I don't know. I'm definitely continuing with DuckDuckGo, and I hope there are other things too that kind of follow that list of things before I exit the planet.

Yeah. I mean, answering this question is really difficult, right? Because you're kind of predicting what you would want once you're gone, and it's kind of hard to know what wanting looks like when you don't exist anymore.

Exactly. But that can be helpful. I mean, I would hope that people would look back and say, you know, he made a positive impact on the world, and then on a personal level, you know, that my family enjoyed the time with me.

Yeah. I think so. You mentioned mental models, and you mentioned that you've written two books. One of the big topics of today's conversation is going to be your book, Super Thinking, which you co-wrote with Lauren McCann.
Lauren is your wife, right?

Yeah.

It's kind of interesting, because my wife's middle name is McKay, so when I first saw the book, I had to do a double take to make sure that I wasn't seeing things there. But I've read through the majority of the book. Unfortunately, I've only had it for a short period before we had this conversation. This is going to go up on my shelf as one of the recurring books, alongside Daniel Kahneman's Thinking, Fast and Slow and a few other books that I think are really very good grounding books. I'd love for you to talk for a moment and give a general overview. I know that's such a broad question, but maybe a definition, or how you explain to people what a mental model is in the first place.

Sure. A mental model is really just a fancy word for a concept. And there are billions, millions, some big number of concepts in the world. For developers, there are many mental models, right? There are all sorts of design patterns, and every one of those design patterns, MVC or whatever you like, is a mental model. It's a way to think abstractly about development. Every discipline has some of those. I was a physics major in college, and there's a ton of physics concepts you learned in high school and then, you know, college. Now, some of those concepts are special, and they're special because they're useful beyond the discipline at hand. So for example, in physics there is the concept of critical mass, which is, you know, the mass you need to make a nuclear chain reaction. But that concept is very useful outside of physics, because you can apply it, say, to a product. If you're building a product and you think that product is in a critical mass situation, that means that if you can get a certain number of users behind it or a certain amount of data in it, you can unlock something different. And if you know that applies, you can automatically think more strategically, kind of higher order thinking, about the situation. Now, for developers, there are a bunch of mental models in development that are very useful outside of development, so that might be a good one to explain. For example, technical debt: developers are very familiar with technical debt, but that concept extends to diversity debt inside a company, or management debt, or any other kind of debt where you need to pay it down later. And so the idea with Super Thinking is that there are about 300 of these generally useful mental models for decision making, and if you can get a grasp on all of them and kind of have them in your head at any given time, when you're faced with a random problem you can be an amazing problem solver and just a better decision maker, both professionally and personally.

This is such a critical, I guess, meta concept for developers and non-developers alike to grasp, since it's the concept of concepts. I'd love to know... I know there are people listening to this episode right now who are thinking: okay, a mental model is this idea of taking a concept-oriented approach. How can I even practice that? I'm told exactly what to do at my job. I'm given very clear specifications. I'm not in a decision making position. So I can imagine they're reaching for the pause button, or they're closing the podcast app right now, because they don't think that this applies. But I think you absolutely know that it does, and I'd love for you to speak to that for a moment.
For somebody who is an individual contributor, is there a model that you can kind of hook their interest with for a moment and say: okay, yes, actually, this isn't just for the managers, it's not just for the CEOs? Pretty much anybody can get some useful information out of these models.

Yeah, absolutely. So there are about 300 in the book, and they're grouped into nine narrative chapters on different topics, themes. One is how to spend your time wisely, which is basically how to be productive; that's chapter three. And there are a bunch of mental models in there that are very useful for individual contributors and developers. I wrote down a whole list, but here are a few that we can kick off with. There's really a group of three that are all about the same concept: multitasking, which I'm sure people have heard of, the top idea in your mind, and deep work. The basic idea here is that developers are trying to solve pretty difficult problems that don't have obvious solutions, and to do that you really have to have some creativity. But the creative process, unfortunately, is not very linear, and if you just sit down and try to crank at a hard problem, you may not solve it. Oftentimes the creative breakthrough may come when you're taking a walk or in the shower. Now, the top idea in your mind is a mental model that describes a bit how your mind works: there's generally one idea kicking around the back of your mind that you're thinking creatively about. And this relates to multitasking, because humans unfortunately just can't multitask, honestly. If you try to do two things at once, you're context switching between them, and that's also probably known to developers: if you go try to read Reddit or something and come back, you just start all over again. So you generally don't want to multitask, and you generally also want this one idea to be the idea that you're trying to solve, so that when it's kicking around in the shower or while you're walking around, that's the idea you're solving. And then the third mental model that works in there is this concept called deep work, based on the book by Cal Newport of the same name. What it's saying is that to really attack those creative problems, and work through the solutions you may come up with in the shower, you need dedicated stretches of time. You can't be interrupted by meetings or other things, and you should block those out in your calendar to really have this notion of working deeply: deep work. So if you bring those together, you want to think about what the critical idea is that you're trying to solve and make sure that's top of mind. Then you want to block out whole stretches of time where you can do deep work, and within those stretches you want to cut down on multitasking and shut off all your notifications and things like that so you're not distracted. So that's one example. I'd love to give some more as well, but we can start there.

Yeah, so I want to zoom in on something that you mentioned here, because I think a lot of developers intuitively know that interruption is really detrimental to our work. I know that I can barely think if somebody's talking 10 feet away from me; I can barely read a sentence. So I have a really hard time. And I think that's actually true for a lot of developers, not only reading, but also reasoning about something.
But something that you mentioned there that I think is really critical is that it's not just external interruption that matters; you can be interrupting yourself. I think we have this kind of delusion as developers that the best developers are able to do 20 things in a given day, and they're able to hold all of this information in their head and be an expert at all of those things simultaneously. What you're calling out here, and I think it's really critical for developers and really everyone to grasp, is that your problem solving skill, or whatever your creative output is, when it's divided, doesn't get divided equally. There is a loss factor when you divide it. There's something missing because of those context switches. But I also like this idea that if you explicitly identify the top idea in your mind, then you have this new mechanism for being able to explicitly say no to the other ideas. I think that's really critical. So I love the mental model of the top idea in your mind.

Yeah, just to give one story, which we recount in the book: it came from a now venture capitalist, Keith Rabois, who was at PayPal in the early days with Peter Thiel. What he recognized, or Peter recognized, is that people want to feel productive at work, and that makes a lot of sense. And so if they have a really hard problem, call it the A plus problem, but they also have a bunch of other things they could do, call them B plus problems, they're going to navigate toward those B plus problems all the time, because they're easy to solve and you can check them off and feel good about yourself. But if you do that, you're skipping that A plus problem, which would be more impactful to your work and the company. And so you have a whole group of people just always solving B plus problems, and those A plus problems never get solved. The answer to that is to really make that A plus problem the top idea in your mind. And as you mentioned, you can do that as an individual contributor or as a leader in the company. We also try to do that for everybody, so we actually have a thread every week called a top priority thread, where everyone explicitly lists what their top priority for the week is. That is how we operationalize that idea at our company: you're explicitly writing down that top idea in your mind and what you're trying to get solved this week.

Yeah, I think there are a lot of implications in this book for managers, certainly, and not to leave out the individual contributors, as we already mentioned. But certainly, when you're thinking about, for example, analysis, right? This is such an important thing for managers to consider. Analysis and cognitive biases: there's a whole list of these, not only in the book, but also in an excellent Medium post you wrote that summarizes some of them. But I'd love for you to talk about maybe one really important factor for managers, a model that you think a lot of managers might miss. In particular, if you have one that's relevant to developer managers, kind of engineering managers, I'd love to hear your thoughts on what you think engineering managers unfortunately don't often use, but you think they should.

Yeah, I'll give you two that are kind of related that we use a lot at DuckDuckGo.
So this idea of top priorities that we're discussing really bridges individual contributors and management. The key job of management, the first job at least, is to make sure the right people are working on the right things, and implicit in that is: what are the right things? And so there's this model from economics called opportunity cost, which is that the cost of what you're working on is what you're not working on. To rephrase that another way: a lot of people, including developers, can come up with lots of important projects to do, right? In the company, we've got to refactor this piece of code, we've got to make this new feature, we've got to fix this bug. And you can make a case for why they're all important to do. But that's not really the case you need to be making as a manager. And ultimately, as you turn from an individual contributor into a manager, you also want to be making the case of: I want to do this thing, not because it's important, but because it's more important than all these other things. When you're doing that, you're explicitly looking at what the opportunity cost is, because if I work on this, I can't work on these other things. So constantly thinking about the people that you're working with, and whether they're working on the highest leverage thing, is a great mental model for thinking about priorities. Now, the other related one I wanted to talk about is called the forcing function, which is a scheduled process that forces everybody to think critically. And it doesn't just have to be about what's the thing to work on. It could be: are we still doing the right thing? Is this code well structured? Et cetera. So let me give you some examples of forcing functions at DuckDuckGo. We have a project lifecycle that's pretty structured. Every project has a kickoff call, and in that kickoff call we do what we call a pre-mortem, where we ask: how might this project fail? Then in the middle of the project, if it's a long one, we'll do a mid-mortem, where we're asking: is this project failing, and why or why not? And then after every project, whether it was very successful or not, we have a post-mortem where we ask: what went well here, what didn't go well, what could be better? All those things are forcing functions, because they're pre-scheduled points of critical thinking to really think about what has gone well and how we can improve. Another one that's very developer specific: in that process, if it's an engineering project, there's a technical design template, which a lot of companies have, where we're explicitly writing down in this template what we're trying to get done. And then there's a discussion around that, which is also a forcing function to really think about the technical design.

Today's episode is sponsored by Sentry. The models you use to debug your code are probably broken, or at least limited. Often we use the same kind of effort-based models to fix problems that are in production. And you probably have some of these: there's probably broken code that you don't know about that is hurting your customer base, your users, and ultimately your bottom line. And the problem is, the only way that you're going to find out about it is if your customers tell you about it. This is a really bad recipe. So that's why Sentry decided to approach it from a different angle. Sentry will tell you about errors that are happening in production as they happen.
So you're going to get the stack trace, even the Git commit that links you to the code that's causing the error. Go and check it out. Head over to sentry.io to get started for free today. That's sentry.io. Thank you again to Sentry for sponsoring today's episode.

I love this idea of having a forcing function; I'm coming up with a hundred different ways that I think forcing functions would be useful. There are a couple that you actually mentioned here, and this will probably bring us into another discussion about mental models. You mentioned this pre-mortem, and because you and I both know about hindsight bias, that is something a pre-mortem can protect against. But for people who don't understand why you might force thinking about failure ahead of time: how does that help when you are trying to analyze and prepare for failure in the future?

Yeah, so it turns out we have a lot of good ideas for different things in projects and in development, maybe ways to write this code or infrastructure to use. But if you don't write it down and think critically about it, you might miss something that, if you had thought about it a little harder, you might realize: wow, I could have gone in a completely different direction and saved a month of time, or maybe we shouldn't do this project at all, or maybe we could do this a lot simpler if we ran a little experiment first. So what a pre-mortem does is ask a really simple question: how might this project fail? By asking a question like that, you are putting yourself in the mindset of failure to help you actually succeed. It's a little counterintuitive. When you ask a question like that, it forces you to really think about what could go wrong. You're giving yourself the opportunity and the leeway to think about that, because people don't want to fail, so it gives you a safe space to do so. Then you can think: okay, maybe this thing I'm thinking about is too complex, we won't figure out this algorithm, or this code is not going to work together with that code. You can start to get very specific about these things. And once you list them out, you can decide whether they're really risky or not. In my experience, something always gets turned up that was not thought of before.

Yeah, absolutely. You're hitting on so much here. The human brain is such an interesting thing. We don't really totally understand it yet, but one of the things that we know about it is that there are some kind of quick action things that the brain does. One of those quick actions is that it tries to solve for gaps, tries to fill in when there is a missing piece of information. Sometimes we'll just make it up. So when you ask questions, for example, what could go wrong, our brains immediately, essentially involuntarily, jump into action. So questions on their own are kind of forcing functions, and they can be incredibly powerful. For example, another forcing function question might be: what is the real question that you are asking? This calls back to a Kahneman and Tversky idea. They did some research and found that often we skirt around difficult questions by answering a substitute question. And so if you force somebody to restate what they really mean, then the question that they're really asking can come forth, right? Or the answer that they're really giving: what is the real answer that you want to give me? That can come forth if you ask that question a little bit differently.
Yeah, that reminds me of a couple of things. At MIT, there was this thing called the Help Instance, which was an amazing concept that I tried to replicate but never could. It was effectively like a Slack channel, which would be the most common thing nowadays, but before instant messaging even existed it ran on a protocol called Zephyr that predated IM. It's like the first kind of IM. Basically all these people, like 300, 400 people, would be subscribed to this, and it would go to everybody and literally pop up on their screen, and people would ask technical questions around programming. And invariably, I'd say 50% of the time, the first response was: what are you really trying to do? Someone would ask a question, and they'd be like, what are you really trying to do? And we kind of operationalized that at DuckDuckGo with each of these templates I'm talking about, like technical design. Even our projects have a very specific template. The project template has a background and an objective, and the objective is really listing out very specific success criteria for the project. And in the technical design, the first thing at the top is the problem statement that you're defining. Most of the conversation that happens in the kickoff call, the technical design call, is about clarifying that problem statement and the success criteria. And I feel that's where the most fruitful conversation comes from, and it often gets changed. You know, people are like: is that really the problem that you're solving? That kind of thing.

Right. Yeah. And the same thing can come from it. Maybe it really is the problem that you're trying to solve, but it's an abstract representation of the problem that you're trying to solve. And this goes to another mental model, the five whys. Can you explain what the five whys is and where it comes from?

Yeah. Five whys is a great mental model to use in post-mortems or other places, especially when you're doing bug finding. What it's really trying to do is get to another mental model called root cause versus proximate cause. So when something bad happens, let's say a bug, the proximate cause is the thing that immediately caused it, that you noticed caused it. So in the search engine, which we run, say you type in a query and it breaks the server, which was a real bug at some point in our history a long time ago. The proximate cause would be: you type in this particular query and it causes the site to crash. The root cause is: what is really the line of code at the root of this whole problem? And as people familiar with a stack trace know, that might be several lines up the stack trace. What the five whys does is help you get to that root cause by asking why. So you say: okay, that query is causing the site to break, but why? And you go: okay, well, it's because it ran this function. And you say: well, why does that break it? Well, that function called this function. Okay, but why does that matter? Well, that function ran this other line and processed this data using a regular expression that backtracked essentially infinitely, which is a real example, by the way. And so then you get to the end of it. If you do it correctly, it doesn't have to be exactly five questions, it could be more or fewer, but you get to the real root cause of the problem. And once you know what the root cause is, then you can actually make a good fix.
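(As an aside: the "regular expression that backtracked essentially infinitely" described here is the failure mode usually called catastrophic backtracking. The actual DuckDuckGo pattern and query aren't public, so the nested quantifier below is a hypothetical stand-in; this is a minimal Python sketch of how such a bug can make one query hang a server.)

```python
import re
import time

# Hypothetical stand-in for a catastrophically backtracking pattern;
# the nested quantifiers (a+)+ are the classic trap.
EVIL_PATTERN = re.compile(r"(a+)+$")

def time_match(text: str) -> None:
    start = time.perf_counter()
    matched = bool(EVIL_PATTERN.match(text))
    elapsed = time.perf_counter() - start
    print(f"len={len(text):3d}  matched={matched}  took {elapsed:.3f}s")

# Each extra 'a' roughly doubles the running time: before the engine can
# conclude that the trailing '!' prevents a match, it retries every way
# of splitting the run of 'a's between the inner and outer '+'.
for n in range(16, 25, 2):
    time_match("a" * n + "!")
```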
And that's what you want to do in post-mortems. You want to figure out: okay, I found a root cause through this root cause analysis, and the five whys is one example of a root cause analysis. We now know what the real problem is, and we can decide we need to refactor that code, or whatever the fix is.

Yeah, there's an interesting ceiling that I found with this. And this is more entertaining than it is particularly useful, but maybe it is actually useful. The ceiling that I found is related to jumping from one domain to another. Eventually, when you ask why enough, you're going to get into a very personal reason. Like, why was that code there in the first place? Well, it was written by this particular person. Why was it written by this particular person? Well, because they're employed here. Why are they employed here? Right?

Yes.

You're jumping into a different domain entirely. So it's important, at least partially, to know: okay, where is the actionable why in this particular chain of whys? It seems a little bit exaggerated in that example, but in other examples it may not be.

I think that's a great point. That actually corresponds exactly to what we wrote in the book as an example. We use the example of the Challenger explosion, which I'm old enough to actually remember. Unfortunately, it was a kind of sad day: the space shuttle exploded. The proximate cause of the explosion was that an O-ring basically failed. And if you keep asking the five whys about that, you get to the point where what happened was they launched the shuttle in very cold weather, colder than they had ever launched in before and outside the acceptable range of the O-ring, and that's why all this stuff failed. Then you ask why, and it jumps, like you said, really to a management problem: the engineers actually flagged this problem, even made a presentation about it, and the managers overrode it because they didn't feel the risk was that high. They didn't quite believe the risk assessment. And they were just wrong, which is another bias mental model we talk about, by the way, called optimistic probability bias, where you're being overly optimistic about the probability of something. It turned out it was a management problem: they didn't have enough checks and balances. So the real solution there was to add more checks and balances and to make sure engineering could have a veto on safety. And so sometimes that jump is important. I think it's good to ask all the way down, though, to figure out the real root cause.

Yeah. It actually speaks to another model that you have in the book, which is first principles thinking. Certainly not a unique discussion; this is kind of in the zeitgeist of discussions on mental models. But I imagine that you can connect the five whys to first principles thinking. You kind of get down to a bedrock level of: what are we actually refactoring here? Are we really going to try to put a bandaid on this problem when we actually have a management problem, and skirt around it with engineering? That's probably not a good solution, right?

Yeah. Well, interestingly, you mention that because I literally wrote down a list of the top mental models in the book, I think, that we use for developers.
And the ones that you just listed, the five whys and first principles, I wrote down in order at the top of the list. I think we're on the same wavelength there.

Yeah, I agree. And I think first principles is actually even more broadly useful for developers and everybody, because what it's really asking you to do is throw out your assumptions. Sometimes assumptions are great, but when you're first starting out on a project, it's really useful to throw them out, at least for some time, and think about: am I doing the right thing? An example would be design patterns. Design patterns are generally useful and should be followed, which is kind of another mental model, except when they shouldn't. Or you're always using a certain infrastructure because that's what you have and have always used before. And so your assumption is that it's the best infrastructure for this, or the best programming language for this, or the best tool for this. And it may not be. So it's useful to take a step back before you start something and list out: okay, why am I using that? From first principles, you would say: here's the problem I'm really trying to solve, and I think this is the right tool because of X, Y, or Z, where X, Y, or Z is not just because I've used it before, or that's what I have available, but because it's the best tool for the job. Now, maybe it is the best tool for the job because it's the one that I know and the company has authorized. That might be the ultimate reason. But you're explicitly writing that down and questioning it.

Right. Yeah, that last piece is so critical: to remember that we aren't just throwing everything out every single time and choosing the most optimal solution based only on the requirements. We're also taking into account that we are participants, right? So it's based on all of the variables, not on some vacuum where I don't exist and where there is some sufficiently talented developer who knows this tool to a reasonable proficiency level. That's not a realistic thing. But that actually speaks to another mental model, which is the thought experiment. So I'm going to summarize thought experiments, and then you tell me where I'm missing things and fill in the gaps. You create a situation that's not necessarily realistic, but it allows you to control the variables. For example, you can say: okay, in this vacuum, let's say cost is not a factor. Well, you can actually simulate that. You can think about, if cost were no factor, and if we had this particular person who's proficient in this language, then we would choose framework X. Now we can adjust those variables and see how the outcomes might change.

Yeah, that's exactly right. I mean, people have done thought experiments probably forever, but the real popularity of them came from physics. Oftentimes in physics there weren't experiments you could even run. You couldn't write the code, so to speak, because it was about some theoretical concept that wasn't possible to experiment on yet. So what people could do is run these thought experiments and try to think it through. The famous one is Schrödinger's cat from quantum physics. I won't get into the quantum physics of it, but the basic idea is that you have this cat in a box, and it could be killed by a pellet of radiation.
Whether that happens is random, and you don't know whether it happened or not; before you open the box, is the cat dead or alive? It sounds like an open and shut case until you start thinking about it deeply, and it literally went on for decades with people arguing about it. But that's the power of a thought experiment. And we talked earlier about the pre-mortem, where you're asking how things might fail; that's an example of a thought experiment. But I really like your example of going to extremes, because that's what really can test the boundaries. It's kind of like testing code, you know, but instead of writing the code and giving extreme values to test your functions, you're thinking about it ahead of time. A lot of these things, like the pre-mortem and the forcing function, are all about effectively saving you a lot of time and energy, because you're thinking about these things before you're doing anything, right? Not that you always have to do it before, but you're using your mind as a tool, without having to build everything.

Yeah. And I like to think about all of these models, and there are so many more, and I'm sure you have a couple more that you'd like to go through... I like to think about all of them as kind of guardrails. They're not always necessarily diametrically opposed to each other, but one may guide you in a direction that another one may guide you away from. And so using multiple models to wrap your mind around a given situation, a given decision, for example, is important; don't just rely on one. It's incredibly important to understand that, hey, there are multiple models for how numbers may progress, right? So for example, you have exponential growth. And developers actually have a lot of these, most likely, especially those who are more formally trained. We have mental models for algorithms, right? Big O analysis is exactly that. We have these models of how complexity grows, how the amount of time a function takes grows based on its input, right? So you know that O(log n) is going to grow logarithmically versus something that is exponential. And we also have the concept of constants, right? So there are all of these things that developers may not initially think of as mental models, but they absolutely translate. So I think it's important that we recognize that there's not just one kind of model; we use these as guardrails, you know, one model balances another model out.

Yeah, totally agree. I like to think of them as kind of shortcuts to higher level thinking. And, you know, the shortcut may be the wrong shortcut. So you want to think about multiple models from multiple angles and see which is right for the situation. But in general, you want to be using them, because they'll make you so much more productive.

Thank you so much to Gabriel for joining me for today's episode. Of course, this is only part one of two parts. So if you enjoyed this episode and you want to hear more, make sure you subscribe in whatever podcasting app you're currently using to listen to this. Thank you again to Sentry for sponsoring today's episode. If you are relying on your users to report errors to you in production, that's the wrong way to do it. Go and check out Sentry for a better way: sentry.io. Remember, part two of this episode comes out on Friday.
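(A quick illustration of the growth-rate models mentioned in the conversation above. The numbers are purely illustrative, a minimal sketch rather than anything from the episode itself.)

```python
import math

# Illustrative growth of the models mentioned above: logarithmic,
# linear, and exponential. A constant factor shifts a column up or
# down but never changes which curve eventually dominates.
print(f"{'n':>4} {'log2(n)':>8} {'n':>6} {'2^n':>12}")
for n in (4, 8, 16, 32):
    print(f"{n:>4} {math.log2(n):>8.1f} {n:>6} {2**n:>12}")
```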
Thank you so much for listening and until next time, enjoy your tea.