Author & Poker Champion Annie Duke

winning bets: decision science in venture capital

Annie Duke is a former World Series of Poker champion, bestselling author of Thinking in Bets and Quit, and decision science expert who advises Fortune 500 companies on strategic decision-making.

In this episode of World of DaaS, Annie and Auren discuss:

  • Cognitive biases in venture capital

  • Loss aversion and mental accounting

  • One-way versus two-way doors

  • Reading people in poker and business

Annie Duke on Decision-Making in Venture Capital

Annie Duke, a former professional poker player with over $4 million in tournament earnings and a World Series of Poker bracelet, has transitioned into a leading expert on decision-making. She is the author of bestselling books like Thinking in Bets and Quit: The Power of Knowing When to Walk Away. In her conversation with Auren Hoffman, she breaks down key cognitive traps in venture capital and how investors can make better decisions.

The Myth of Gut Instinct in VC

Many VCs romanticize gut decision-making, believing that certain investors have an innate ability to spot winners. This reliance on intuition often leads to overconfidence and poor decision-making. Without structured record-keeping, investors remember their winners as obvious successes while attributing failures to bad luck, making it difficult to refine their strategy.

Endowment Bias and Follow-On Investments

Investors tend to overvalue companies they already own, leading to follow-on investments driven more by emotional attachment than financial logic. Founding partners who championed an initial deal are especially prone to pushing for continued investment. Meanwhile, some VCs hesitate to double down on clear winners, fearing over-concentration, despite these bets often yielding the best returns.

Loss Aversion and Misinterpretation of Data

Prospect theory shows that people avoid realizing losses, leading VCs to hold onto struggling investments longer than they should. At the same time, they often take premature profits on winners to lock in gains, even when letting them run would be more profitable. Survivorship bias compounds these problems: investors study their past successes without analyzing their failures, limiting what they can learn.

"There's a lot of mystique around gut decision-making in venture, like some investors have a magical ability to spot winners. But that’s just a dangerous way to justify decisions without real analysis."

"As long as a company is still in your portfolio, you haven’t ‘officially’ lost. That’s why VCs hold onto bad investments—because writing them down makes the loss real. But that’s just fooling yourself."

"When a decision feels hard, that means it's actually easy, because both choices are good. If you're stuck between Paris and Rome, just flip a coin. Don't spend two months on TripAdvisor."

The full transcript of the podcast can be found below:

Auren Hoffman (00:00.654) Hello fellow data nerds. My guest today is Annie Duke. Annie is a former professional poker player. She's won over $4 million in tournament earnings during her career and holds a World Series of Poker gold bracelet from 2004. Annie's also the author of a number of bestselling books, including Thinking in Bets, which is one of my favorite books, and Quit: The Power of Knowing When to Walk Away. Annie, welcome to World of DaaS. Very excited. Now, when you look at...

Annie (00:23.866) Thank you for having me. I'm very excited.

Auren Hoffman (00:28.31) like how venture capitalists evaluate deals. What are like the big cognitive traps that you see in their decision process?

Annie (00:38.464) Right, so I would say.

Okay, I'm gonna try to bucket these. The first one is that I think that there's in general in venture, a lot of sort of mystique around gut decision-making, right? I mean, that's a little bit like what the Midas List is about. It's this idea of like there was a magical investor who just has a really good eye for talent.

And there's just a lot of reliance on that almost as your decision-making process, right? That this person who has such a good sort of eye for talent or nose for value should really be driving the train on a decision about whether to invest. So like we can go into that, but that's like the first big bucket that I think is a really big trap. The second has to do with just like a lot of

cross-influence and then this really big problem of I think pretty bad record-keeping, like accumulation of data around the decision point itself.

Auren Hoffman (01:51.736) So like, 'cause when you go back to your winners, you remember these people as being super charismatic. And when you go back to your losers, you remember them as, like, you know, stumbling, and you just invested anyway, you're not sure why, type of thing.

Annie (02:02.256) Yeah, or I think there's also the problem of when you're operating in a world that's governed by power law, right? It's very easy to just say, well, that was just one of the losers in my portfolio, right? Because you're expecting, I mean, you're hoping for maybe three fund returners in a fund, right, out of, you know, 25 to 40 investments. So

So I think that in a world where there's so much luck, it's really easy just to say, well, I lost that. That doesn't have anything to do with my decision process. That's just the way it is. And that one that I won, I totally saw it. And so I think that's a big bucket as well. And there's no way if you're not really keeping a really good record, creating a really good artifact of your decision at the time that you invest,

I don't think there's a really good way to resolve that, right? To really look back and say, was that a good decision or not? And I think that's really hard. So that would be kind of like bucket number two.

Auren Hoffman (03:10.222) Is part of the problem just that these decisions are very long range? Like, you might not know for 10 years or so. Okay.

Annie (03:15.62) That I would put in bucket number three: pretending like things are long cycle when they're not. So that's a great one. I wasn't going to bucket that one, but I'll just put that in one big bucket, right? Like just sort of waving your hands and saying it's very long cycle, but it's actually not. I think number three would be that there's not enough attention paid to follow-on decisions, in a variety of ways. And we can talk about that.

Auren Hoffman (03:24.585) Okay, yeah.

Annie (03:43.6) And then number four is that I think that there's just a lot of misuse of data. Like I think there's a lot of data mining happening. I think there's a lot of survivorship bias that's really driving some of the data work. You know, and I think that there's...

Look, this topic of when you're looking at data that you know is true, right? Because it's like data about your own fund or your own sales process or your own whatever it is. So you know it's going to survive a fact check, right?

Auren Hoffman (04:53.774) Interesting. Let's talk about the follow-on decisions a bit. Where do you see, is it just that they invest and then always do follow-ons, or they don't do super pro rata enough, or they're not as discerning? Like, where do you think their decisions could be better?

Annie (05:14.83) Well, OK, so I think that there's a few problems. Look, people very often follow on at the next round not necessarily as a value bet, but because they're trying to support their founders. And so I'm not talking about that. That's a different bucket of following on, where you're following on for reasons that don't have to do with really trying to discern

which companies in your portfolio are going to have the highest return or not; you're doing it because you're being supportive of your founders. And so that's a wholly different bucket, which I have no critique of. But there are lots of follow-on decisions that are made that are not for that reason, that are actually value decisions, right? Like where you're actually trying to identify the companies that you have put in your portfolio

that you believe are going to have the highest return. And so therefore you would like to put more money in them, because you think the probability that they're going to be a fund returner is very high. And what I've found when I talk to venture firms that aren't my clients is that there's just a lot of, sort of, if the point partner is pounding the table, then we're going to do it.

And I think that people, you know, I think people need to be more aware of some of the cognitive biases that can really get in the way of good decision-making around these types of decisions. Number one is that everybody should familiarize themselves with endowment bias. So the endowment effect is that you like things that you own more than identical things that you don't own. That includes ideas.

So I think we've all heard where, like, you'll give some idea in a meeting, and then somebody else who sort of pooh-poohed your idea will later say the same idea and they'll think it's great. And you're like, wait a minute, what happened there? So it's true for ideas as well. We just like things that we own much more than identical things that we don't own. A simple example of that would be, like, I'm sure everybody at some point in their life has sold a car and

Annie (07:35.318) when you look at the Kelley Blue Book, you know, sort of what you're supposed to be getting for the car, you're like, well, obviously I should get top end for this car. Like, this is a great car. But if you were buying the identical car, you would be arguing for the bottom of Kelley Blue Book. I mean, there's no question that that's true. And the science on this is very strong. It's well replicated that people just like stuff they own better than stuff they don't.

Auren Hoffman (07:59.02) From the VC perspective, it seems like a lot of VCs, when they're evaluating, say, putting more money in at the Series D, and they were in the Series A, and it's already up 50-plus X, and it's already maybe potentially a fund returner for them. You know, I think sometimes I see the bias going the other way, where they're like, well, this is already a fund returner,

I shouldn't necessarily put more eggs in this basket. Maybe I should even take some off the table here. I feel like they're much more likely going the other way at that point.

Annie (08:36.046) Yeah, so it depends on where you're following on. So endowment is a problem, which is this idea of point partner pounding the table. One of the concerns that I always have is that point partner is probably the most endowed. As we think about that, mean, obviously, the whole partnership has that company in their portfolio, but point partner is going to be the most endowed, which I think is just an interesting problem.

The problem that you just talked about, and then there's a third problem that I'll get to. So this is research that, the first finding in this category is from Kahneman and Tversky in 1979. Daniel Kahneman actually ended up eventually winning the Nobel Prize for something called prospect theory.

And this finding from 1979 was really a key point of prospect theory. And it has to do with sure-loss aversion, loss aversion, and this idea of how we deal with gains. So this is kind of the thought experiment. Let me get your... maybe you can play along a little bit. All right, so you owe me $100.

And so you owe me $100, right? So you're going to have to give me $100, or I offer you a coin flip and we can flip a coin, double or nothing. So, like, does your gut sort of tell you, yeah, I'll gamble there, or...

Auren Hoffman (10:14.414) No, my gut says I give you the $100. Because the $200 versus the zero... I don't know, the zero loss doesn't seem like that great. I'd rather just give you $100 than have a $200 loss.

Annie (10:27.694) Okay, so the majority of people in that situation will actually go for the gamble.

Auren Hoffman (10:32.334) They would? Really? they would take the 50 % chance of paying more?

Annie (10:39.034) Well, the expected value is actually identical. Yeah.

Auren Hoffman (10:41.057) No, of course, but the downside is a lot more the downside side is a lot higher.

Annie (10:47.568) Yeah, but you have a chance to get even, which is why people will take that bet. well, yeah, know, no, majority of people in that.

Auren Hoffman (10:50.926) Oh, interesting. OK, I'm surprised. I would have thought most people would be risk averse like me.

Annie (12:31.17) So we've got Kahneman Tversky, 1979. They offer two propositions to participants in the study. Okay, so the first one is, I can give you $100, or we can flip double or nothing, and you can win $200, you can win zero. And the question is, are people willing to take that bet?

Obviously, the expected value is the same. It's $100 either way. But will people take that bet? And the answer is no, they will not. But when you say you owe me $100, or we can flip double or nothing, do you want to take that bet? Then the majority of people will take that bet. So notice that in both cases, the expected value is the same. The only thing that's different is whether you're in the gains, right? So I can give you money, or you're in the losses.

you can give me money, right? So that's what...

Auren Hoffman (13:34.061) question for you on the on first one, which I think is a simpler one for people to understand, what are the odds that people are willing to switch? Because 50 50, I'm taking 100 bucks for sure. But at 6040, I'm probably willing to roll the dice.

Annie (13:49.698) Well, so that that's an interesting thing. So that everybody has a different indifference point. But basically, the finding is like, if I say I can give you $100. So so tell me what I mean, you're obviously you understand expected value. But like, what is your instinct about what most people would say? If I say, you've got $100 in your hand, like I'm going to hand it to you. Or we can flip double or nothing. You'll win 220 or lose zero or win zero.

Auren Hoffman (14:20.622) Yeah. My expectation is they, they take the a hundred on the two 20. Yeah. It would have to be, it would have to be probably over 300 for them to, for the average person to pay. Now a hundred dollars, maybe not. Now, if you're $10,000, like we're really like at that, maybe at a hundred bucks. I like, sure. I'll take the two 20. Cause like maybe neither of them moves the needle for them. Okay. Interesting.

Annie (14:22.946) Right, so they just take the hundred there pretty big exactly Totally

Annie (14:39.906) But they don't take the 220, not even then.

Auren Hoffman (14:43.694) Yeah, I mean, but you know, these are always with poor college students, but let's say with like the average American where like $10,000 like really changes the game for them. They'll probably take the 10,000 rather than roll the dice for, for 22,000 or something. Yeah.

Annie (14:46.012) But if you made it 10,000 and we can flip, right.

Annie (14:53.794) For 11,000, 22,000, rather, yeah. So, I mean, of course, in the 100 versus 220, they're costing themselves $10 in that spot in order to not take the gamble. So basically, they're paying to take the variance to zero, right? So what they're trying to do is take the sure gain. The pain of, like, having been up 100 and going to zero would be too great, right? So now,

we can take the other side and say: you're gonna have to give me $100, or we can flip, zero or 220. People take that bet. So notice that's $10 worse for them. So now they're paying $10 to keep the gamble on.
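The expected-value arithmetic behind the two propositions Annie is walking through can be sketched in a few lines (a minimal illustration; the function names here are mine, not from the Kahneman-Tversky study):

```python
def expected_value(outcomes):
    """Expected value of a list of (probability, payoff) pairs."""
    return sum(p * payoff for p, payoff in outcomes)

# Gains frame: take a sure $100, or flip for $220 / $0.
sure_gain = expected_value([(1.0, 100)])            # 100.0
gain_flip = expected_value([(0.5, 220), (0.5, 0)])  # 110.0
# Most people take the sure $100, forgoing $10 of EV to remove the variance.

# Losses frame: pay a sure $100, or flip to owe $220 / $0.
sure_loss = expected_value([(1.0, -100)])            # -100.0
loss_flip = expected_value([(0.5, -220), (0.5, 0)])  # -110.0
# Most people take the flip, paying $10 of EV to keep the gamble on.

print(gain_flip - sure_gain)  # 10.0: EV forgone in the gains
print(sure_loss - loss_flip)  # 10.0: EV paid in the losses
```

In both frames the choice costs the same $10 of expected value; only the gain/loss framing flips which option people pick.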

Auren Hoffman (15:34.595) Yeah.

That just seems crazy to me, assuming they could afford the hundred bucks. Now, if someone can't afford it, I could see how they would take the bet, because, well, they're going bankrupt anyway, so they might as well take the gamble. So if it's a million-dollar loss and they can't afford the million dollars, then the gamble makes a lot more sense to me. Well, 2.2 million, who cares? I'm already so much in debt, it doesn't matter. But a hundred bucks doesn't make that sense to me on that side.

Annie (15:40.034) Yeah.

Annie (15:52.757) Right, so that's

Annie (16:04.362) Yeah, so what's interesting there is that that's why that paper was so groundbreaking. Because prior to that, economists who could calculate the expected value were like, well, if you offer those two things, people won't behave differently. So let's go back to the original one, right? So what an economist would have said is that you have some sort of risk appetite, right?

Auren Hoffman (16:21.806) Hmm

Annie (16:28.674) And so if you're willing to take the gamble when you're in the gains, you would also be willing to take the gamble when you're in the losses because those are symmetrical problems. So if I owe you $100 and you want to take the gamble, an economist at that time would have predicted that you would also take the gamble.

Auren Hoffman (16:47.148) It's not exactly true, because, like, if your net worth is $10 million and you add a million dollars to it, your net worth only goes up by about 9%. But if you take a million dollars out of your net worth, it goes down by about 11%. So you are actually way worse off to lose that million than to gain one. Yep.

Annie (17:09.92) Yeah, but we can control for that, right? So we can control for that. So what economists would have said prior to this is that people have risk appetites, and people are either risky or not. And if they're willing to take the risk and flip the coin in the gain situation, they will also do that in the loss situation, and vice versa. But it turns out that

People are very different when it comes to the two propositions. They will not take that bet when they are winning, but they will take the bet when they're losing. Which actually, interestingly enough, is the opposite of what you just said, right? Even if we take into account the 10 versus 11 % or whatever, right? So you would predict the opposite. But this is the problem. When we're actually calculating, so what would a rational actor do?

Auren Hoffman (17:54.466) Yeah. Yeah, yeah, totally. Yeah.

Yeah, I would predict the opposite.

Annie (18:06.354) This is not what they actually end up doing. They take the gamble when they're in the losses, and they take the gain when they're in the gains. So they refuse the gamble. Okay. So what's happening there is that we are very averse to taking losses on paper and turning them into realized losses. And we seek

to take gains on paper and turn them into realized gains, even when it is irrational for us to do so.

Auren Hoffman (18:40.482) Yep. Now, just to jump in, the venture capitalist case is very weird. So there's a scenario where, let's say, you're going to write down a company to zero today, or there's a hundred percent probability you're going to write down that same company to zero in five years. You know, you would think you'd want to do it as soon as possible from your

kind of IRR perspective or something. Or let's say even simpler, let's say your company can give you back all their money today that you put in. So it's a 1X today. You put a million dollars into the company, it'll give you back 1X today. Or the company will give you back 1X five years from now, right? You would think every venture capitalist would rather have the 1X today, but almost every venture capitalist would rather have the 1X five years from now because they're fundraising, there's stories on it.

Annie (19:38.774) Well, yeah, it's not just that, though. It's this: if I write a company down today, right, I am taking the loss. As long as that company exists on paper, I have not lost yet. Right? So this is the exact same problem. It's the reason why people will flip the coin even though they could lose 220, right? It's the exact same thing. Why are you keeping the risk on? Because if I write it down today, I'm just taking the loss. I'm done, right?

Auren Hoffman (19:38.968) There's other types of things that they would prefer.

Auren Hoffman (19:52.086) Yeah, there's a chance. Yep, yep.

Auren Hoffman (20:08.066) Yep. Yep.

Annie (20:08.288) So why are you keeping the risk on? Because as long as you do that, you haven't actually realized the loss. That's the way that it feels to us, right? In venture capital, if you take a 1X return, that's a loss, right? So you're just not gonna do that because the frame in venture is 10X or better, right? So that's kind of the frame. And so,

You know, anything that's kind of less than that, you're sort of considering a loss. So if we go to, like, why are people taking money off the table at Series D with an amazing company that's already returned 50X? Well, because they're already in the gains, and we know that there's some probability that even if a company is marked at 50X, it can go to zero. So exactly, if you don't continue to bet on that, and in fact if you trade on the secondary,

Auren Hoffman (20:54.38) goes to zero. Yeah. We've seen it all. We've seen it happen many times. Yeah.

Annie (21:04.49) you're realizing that gain, you're taking all the risk out of it. And we like to do that when we're in the gains, right? So this research became the cornerstone of prospect theory, which ended up winning Daniel Kahneman the Nobel Prize in economics. But this is very basic human behavior. And what you're pointing out about these scenarios that you've offered up in venture is

just a real-life instantiation of prospect theory, of what you see. And if people are interested, there's very interesting work by Richard Thaler, who's also a Nobel laureate in economics, on mental accounting. And what I think is important to note is this. So one of the things that people will say to me is,

But why would you behave that way toward one company? Isn't that the whole point of having a portfolio, right? I mean, that's the base of portfolio theory, right? And what Richard Thaler's work shows is that you don't think about it as a portfolio though when you're approaching a decision about a single company. So if we put a company in our portfolio, we've now opened an account up, a mental account for that company.

Auren Hoffman (22:27.438) Yep.

Annie (22:28.704) And we're tracking that company. We're not actually thinking across the portfolio, because obviously if we were thinking across the portfolio, you might not make the types of trades that you're talking about, right? You might be willing to take 1X today. You might be willing to write a company to zero today. You might be willing to do what you're rationally supposed to do at Series D. But you don't do that,

because you're not actually thinking about it in the context of the portfolio. That's not the way we think about it. We open an account for that thing. So a classic example, and I think this is a pretty easy thought experiment for people to wrap their heads around: you buy a stock at 50 and it's trading at 40. If you were approaching the stock fresh today, you would not buy it. Do you sell it?

And the answer is no, because you want to get your money back. Well, that's silly, because across all stocks there are opportunity costs associated with that. You could put that $40 into another stock that has a higher expected value, right? But we won't do it. Now, how do we know that this isn't a kind of ledger problem, but is actually a cognitive problem, a mental accounting problem? Well, because of this scenario: you buy a stock at 50, it goes up to 75, and it's trading at 75 for a while.

And now it goes down to 60. Do you sell? Notice that on your physical ledger here, you're $10 in the gains. But on your cognitive ledger, you're $15 in the losses because you were at 75. And people won't trade under those circumstances either. And so what's interesting is once you can clear the account,

Auren Hoffman (23:59.371) Yeah.

Auren Hoffman (24:03.81) Yep.

Annie (24:11.584) we will tend to approach risk more rationally, but we have to clear the account in order to do it. In other words, if I do actually get forced to sell that stock, or somehow I end up selling it, the next decision that I make about a new stock is going to be more rational than the decision I would make about that stock. And there's some really fun research, using casino players' tracking cards, which shows that when people are playing slots and they get into a loss situation, they'll play a very long time, right? So they'll continue to go,

but once they leave for the day... right? So they'll not only continue to go, they'll get more aggressive.

Auren Hoffman (24:46.958) Like, once they go to sleep and come back, then they're more rational again.

Annie (24:50.55) Right, because it's a new account. So what's happening is that as they get in the losses, and you track them at the end of a session where they're in the losses, their bets get bigger, they play faster, you know, they're really just taking on more risk, right? But then when they leave, the next time they come back, they start at their baseline again, because they've closed that mental account.
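The two "ledgers" in Annie's stock example (buy at 50, run to 75, fall to 60) can be made concrete with a tiny sketch. The ledger framing and function names are my illustration of the mental-accounting idea, not a formal model from Thaler's work:

```python
def physical_ledger(buy_price, current_price):
    """Gain/loss versus what you actually paid."""
    return current_price - buy_price

def mental_ledger(reference_price, current_price):
    """Gain/loss versus the reference point the mental account anchors on."""
    return current_price - reference_price

# Buy at 50, it runs to 75, then falls back to 60.
buy, peak, now = 50, 75, 60
print(physical_ledger(buy, now))  # +10: on paper you're in the gains
print(mental_ledger(peak, now))   # -15: the mental account says you're losing
```

People refuse to sell in this spot even though the position is objectively up, because closing the account would turn the 15-point mental loss into a realized one.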

Auren Hoffman (25:13.678) And that is because of that mental account, not just because they're more wasted or something or...

Annie (25:16.67) No, no, no. This is like, you can see this across... this is a very large-scale study. Yeah.

Auren Hoffman (25:25.016) Question for you: if you think of a private equity person, their main job is just never to lose money. That's the worst thing that they could do. And there are some private equity firms that have never lost money on a deal. If you think of a venture capitalist, which is different, the biggest mistake they could make is not investing in that 50-bagger that comes across their desk. Because that 50-bagger may only come across their desk every few years,

and they have to make sure that they invest in that particular thing. And if they don't do it, it's just a huge monumental mistake that they make. How do you square those when you're kind of talking to people?

Annie (26:04.578) So this is true of all decision making: you need to figure out what type of error is worse. What you're describing is a type one versus type two error. And you've also heard it talked about as specificity versus sensitivity. That was something that got sort of tossed around.

Auren Hoffman (26:21.539) Yep.

Auren Hoffman (26:32.364) or kind of a false positive versus false negative or something like that.

Annie (26:34.516) Right, exactly. So.

So basically, let's think about it as sensitivity versus specificity to start. So let's imagine that we're testing people for COVID. We're way back in 2020, in the pandemic, and we test people for COVID. A sensitive test is always going to capture people who have COVID. What that means is that you're gonna have a lot of false positives, right? Because it's gonna be flashing a lot, right? It's gonna be like,

COVID, COVID, COVID all the time. Now, you're never gonna miss somebody who has COVID in that scenario, but you're gonna capture a lot of people who don't have COVID in that set. Specificity would be that

Auren Hoffman (27:18.082) You'll have false positives. I mean, you'll have tons of false positives, but you won't have false negatives. Yep.

Annie (27:37.472) you're really trying to only catch the people who have COVID. So in that particular case,

we're going to miss a lot of people who have COVID. So there's always a tension between the two. And it depends on a variety of factors whether you're going to lean towards sensitivity or specificity.

Auren Hoffman (28:11.928) So just to kind of walk with your example: if we're gonna test everyone in the country and only 1% of people have COVID, we wouldn't wanna test with even a 1% false positive rate, because then, like, literally 50% of the people who test positive wouldn't actually have it.
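Auren's 1%-prevalence point is just Bayes' rule at work: when a condition is rare, even a small false-positive rate means a large share of positive results are false. A minimal sketch (the numbers match the ones in the conversation; the function name is mine):

```python
def positive_predictive_value(prevalence, sensitivity, false_positive_rate):
    """P(actually infected | tested positive)."""
    true_pos = prevalence * sensitivity            # share of tests: real hits
    false_pos = (1 - prevalence) * false_positive_rate  # share: false alarms
    return true_pos / (true_pos + false_pos)

# 1% of people infected, perfectly sensitive test, 1% false-positive rate:
ppv = positive_predictive_value(0.01, 1.0, 0.01)
print(round(ppv, 3))  # 0.503: roughly half of all positives are false
```

So with a 1% base rate, a test that is "99% accurate" on healthy people still produces almost as many false alarms as real detections.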

Annie (28:30.956) So that would depend on how dangerous the disease was. In a case where you were 100% to die if you had it, you would be very tolerant of false positives. So this is where it gets really interesting, right? It's not that you would never want...

Auren Hoffman (28:36.504) Correct. Okay, good point. Yeah.

Auren Hoffman (28:42.092) Yep. Okay. Good point.

Auren Hoffman (28:46.83) But you would freak, if that case you could freak the other 50 % out or something and they might do something crazy.

Annie (28:49.042) That is true, but assuming there were other tests... given that it's highly, like, let's imagine something that's highly contagious and kills everybody who gets it. It's totally fine if you have a test that's really high on sensitivity and low on specificity, right? But we don't want that for something that is lower down on the danger scale.

So like one of the ways that I like to think about it is it's really good for mice out in the wild to index on sensitivity as opposed to specificity, right? Because if they don't run away when a predator is coming, they're little tiny mice. They're gonna be dead. So it's better for them to just run away way too often. Assume the danger, run away.

Auren Hoffman (29:35.063) Yeah.

Auren Hoffman (29:41.442) Just assume, assume the danger. I'll do that too. Like if I think there's a lion, you know, I'm not, I'm not going through that, that brush. If I think there's a lion there. Yeah.

Annie (29:47.538) Right, exactly. So, fine. Right? Okay. So, that's totally fine. But if you're like an elephant...

Right? So now maybe you're not just running away, because, like, you can fend off a lot of predators, and it's a whole bunch of energy. There's a big energy cost to you to actually run away like that. Yeah. I mean, I think that's something that actually is kind of stuck in my brain. So let's bring it to type one versus type two error. Mice are making a lot of type one errors, lots of false positives.

Auren Hoffman (30:03.223) Huh.

Auren Hoffman (30:16.13) Yep, okay, I like that. That's a good analogy.

Annie (30:31.788) They're like, bling, bling, predator, predator, predator all the time. And they're running away a lot when they probably don't need to. But that is better for them because they're little tiny mice. But elephants are going to make more type two errors, which is false negatives. But then again, they're very big and they can, there's a big energy cost to running away and that kind of thing. So elephants and mice should actually be different in that way. Okay, so let's bring this back now to all decision making.

With all decision making, what you have to ask yourself as you go into the decision is what's worse? Is the bigger error a false positive or is the bigger error a false negative? Because that's gonna tell you where you're supposed to set your threshold for good enough. So what's the point at which I've built enough conviction in order to be willing to invest in this? So if we take...

Auren Hoffman (31:07.502) What's the biggest error or the bigger error? Yeah.

Annie (31:29.428) a long short fund, for example. They're going to not want to have a lot of false positives, right? They've got a very small portfolio. They tend to hold a very small number of positions for a very long time. They can be very confusing.

Auren Hoffman (31:45.198) And they can be choosy. They can pass on tons of deals; if one works out, it works out, and they can learn from it.

Annie (31:49.194) Right. Yeah, if they miss something that makes a lot of money, it's fine because they can probably get something else that makes a lot of money. It's not really a big deal. So you're not really too worried about false negatives in that world, right? But in venture, you're very worried about false negatives because there's a huge penalty because of power law. There's a huge penalty to missing a winner. That's a ginormous penalty.
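
The rule of thumb here, that the relative cost of a false positive versus a false negative sets your conviction threshold, can be written as a one-line expected-cost calculation. This is a minimal sketch of the idea, not anything from the episode; the cost numbers are made up for illustration:

```python
def conviction_threshold(cost_false_positive: float, cost_false_negative: float) -> float:
    """Probability of success above which investing beats passing.

    Invest when   p * cost_fn       (expected cost of wrongly passing)
    exceeds      (1 - p) * cost_fp  (expected cost of wrongly investing),
    which solves to  p* = cost_fp / (cost_fp + cost_fn).
    """
    return cost_false_positive / (cost_false_positive + cost_false_negative)

# A long/short fund: a dud position is very costly, a missed winner is cheap,
# so the bar for conviction is high (illustrative costs):
print(conviction_threshold(cost_false_positive=10, cost_false_negative=1))

# Early-stage venture under a power law: missing a winner dwarfs funding a dud,
# so the bar is low and duds are tolerated:
print(conviction_threshold(cost_false_positive=1, cost_false_negative=20))
```

The more a missed winner costs relative to a dud, the lower the threshold falls, which is exactly the "tolerate a higher loss ratio" posture described next.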

Auren Hoffman (31:56.813) Yep.

Auren Hoffman (32:12.749) Yeah.

Annie (32:17.014) So generally what we can sort of think about there is that if we're really worried about false negatives, what that means is that we're going in with an understanding that we're going to kind of purposely end up with duds in our portfolio. We don't know which they're going to be, because there's uncertainty. We're going to have a much higher loss ratio, but we're tolerating that, and we're going to construct our portfolio to withstand that.

Auren Hoffman (32:33.486) Mm-hmm.

Auren Hoffman (32:37.602) You're willing to have a higher loss ratio, essentially, too. Yeah.

Annie (32:46.402) So, you know, if you look at some long short funds, you know, or Berkshire Hathaway or whatever, they have a very small number of positions. Whereas venture firms have a very large number of positions.

Auren Hoffman (32:59.374) So even most venture firms, in a given fund, they may only have like 20 positions or something like that. Do you think they should have 50 to 100 positions or more in those funds to deal with that?

Annie (33:09.898) Well, so there's a trade off, right? Which is, look, let's take it to the extreme. If I invest in everything, I will never miss anything. So remember, just because we're saying we're not really tolerant about false negatives, it doesn't mean that there aren't going to be false negatives. So one of the things that I say is if you're investing well, two things are always going to be true. You're going to pass on something that wins.

Auren Hoffman (33:18.242) Yep. Yep. Right.

Auren Hoffman (33:29.485) Yeah, of course.

Annie (33:38.144) and you're going to invest in something that loses. And I can always guarantee that. And the worst mistakes that you can make as an investor, I think there's two of them. One is looking at the bottom performers of your portfolio, drawing conclusions that are way too big given the amount of luck, and starting to change your investment process to try to prune the bottom range of your portfolio somehow going forward.

Auren Hoffman (33:41.334) Yeah. A hundred percent. Yep.

Annie (34:07.558) To prune the bottom range of your portfolio, I just think that's a huge error. Not that you can't learn something from your losers, but it's probably a relatively small sample size and there's a lot of luck. And if you reran it 100 times, those might be the top of your portfolio. There's a whole bunch of stuff that can happen there. So that's one big mistake. And then the other big mistake you can make is trying to make it so you literally never have a false negative.

Auren Hoffman (34:15.65) Yeah, yeah.

Annie (34:33.974) Right? Because then you're just not going to be choosy at all. And then what's the point of you? Right? Like, why are you a steward of somebody else's capital?

Auren Hoffman (34:34.348) Yeah.

Auren Hoffman (34:38.892) Yep, yep, exactly. And also, it'd be impossible to invest in everything anyway, at any meaningful amount; no one has a trillion dollars.

Annie (34:44.126) Right, exactly. Yeah, but also, the whole reason that LPs are investing in you is because they feel like you have good taste.

Auren Hoffman (34:53.422) Yeah. One of the things I struggle with is just bet sizing. Because you could imagine a scenario where that same seed fund just puts a million dollars in everything, and it's just simple, or they own like 5% of everything, whatever they decide, and that's how they size bets. Or there's something where you're being a little bit more discerning: okay, I'm putting more risk in this one, I'm taking my normal allocation in this other thing. How should funds think about that?

Annie (35:30.444) So, look, your mileage may vary. I don't want people to take this as investment advice from me. I just want people to take this as where my head goes, because of my training in cognitive science. In early stage venture, you're making decisions under extreme uncertainty.

that uncertainty comes from two places. One is you know very little. I mean, if you think about an angel investor, it's like, hey, I have a pack of gum and an idea, right? Like, there is no product market fit. There's no...

Auren Hoffman (35:57.995) Extreme. Yep.

Auren Hoffman (36:06.338) Yeah, totally.

Auren Hoffman (36:10.963) You may know something about the founders. Yeah, you've worked with them before. You have some sort of sense that they're quality people or something.

Annie (36:12.598) But that's it. Like you really don't know anything. Yeah.

Exactly. So you really don't know anything. And there's just a whole lot of luck. You know, as I say, if you reran Uber 100 times, I don't know how many times it ends up being Uber, but it's probably low. Right? And that's true for every single company. You're not going to win a hundred times out of a hundred, as much as the people who

Auren Hoffman (36:35.404) Yeah, yeah, that's true. For every company it's probably that.

Auren Hoffman (36:43.565) Yep.

Annie (36:43.734) founded Uber would like to think that they would win 100 times out of 100. I would certainly short that position.

Auren Hoffman (36:51.202) Yeah, though I would say that I would say if you actually look at a founder level and let's say that founder has the ability to found many companies over time, I think that there is a much higher correlation with success. So my guess is like Travis Kalanick would maybe not be at the same wealthy as if you ran the simulation a million times or something, but he probably would be pretty.

pretty successful in most scenarios. And he would, if Uber didn't work, he'd do something else or something where like, he's just that type of guy who would be very, who'd be pretty successful.

Annie (37:24.532) Yeah, that's true. Although there's a lot of path dependence there, right? If you do well in one company, you're more likely to have opportunities in another company, regardless. So we've got hidden information and luck, right? Those are the two. And by the way, here's the thing: if he doesn't do well with Uber, you don't know; he may choose something less risky. So we don't know, right? Because it's path dependent.

Auren Hoffman (37:35.969) Of course, yeah.

Auren Hoffman (37:51.064) That's true, yeah, yeah, that's right, yeah. He might become a product manager at Facebook or something, yeah, yeah.

Annie (37:55.156) Exactly. Right. We don't know what's going to happen. Okay, so this is where I kind of think about, really, what is our acuity? What is our cognitive acuity to be able to distinguish one thing from another? And I actually like to analogize it to choosing something off a menu. So you'll see people all the time

who are looking at a menu and they're like, I can't decide, right? Do I want the chicken or the fish? And they're quizzing everybody at the table and they're asking the wait staff and they're looking up Yelp reviews, I don't know. But they're having a really hard time.

Auren Hoffman (38:32.117) Yeah, yeah, right.

Auren Hoffman (38:42.474) Or even like what movie to watch tonight or something.

Annie (38:43.54) Yeah, exactly. What should I wear? What should I eat? What vacation should I go on? They're trying to optimize, exactly. And then they finally somehow decide on something, like they order the chicken, and then it comes back dry. And they immediately think they made a mistake. But last time I checked, when they were trying to order off the menu, they hadn't eaten those dishes before; I'm assuming it's a new restaurant.

Auren Hoffman (38:47.564) Yep. It's like the optimizer personality.

Auren Hoffman (39:12.653) Yep.

Annie (39:13.406) So they don't really know anything. They're not omniscient and they don't have a time machine. So is that really a mistake? They narrowed it down to two things they really like, and then they made a choice. So one of the things I like to point out to people is, it's not like you're having trouble choosing between a dish that you like and a food that you hate. So it's not like you're having trouble choosing between the chicken pasta and the puttanesca.

Auren Hoffman (39:36.425) I don't like olives.

Annie (39:43.488) That is not a problem for you. Now the reason why that's not a problem for you is because they're so far apart that one of them does not meet your threshold for good enough. So we can think about this threshold for good enough and what does that threshold look like? How do we figure out when two options are good enough? When we say to ourselves, if this were the only thing that I could order, would I be fine ordering it?

Auren Hoffman (39:46.125) Right, right.

Annie (40:11.734) And if the answer is yes for the chicken and yes for the fish, you're kind of done with the process at that point. It's like, okay, they're both good enough. I can really flip a coin between the two and it doesn't matter. Right? Okay. So that's ordering in a restaurant. By the way, even if you could tell the difference between the two, what a waste of freaking time, because they're both already good enough. You're totally fine ordering either of them off the menu. All right. Now, how do we decide the threshold for good enough?

Auren Hoffman (40:11.788) Yeah, yep, yep.


Annie (40:41.003) It relates back to that idea of false positives versus false negatives. So the more tolerant we are of false negatives, the lower our threshold for good enough. The less tolerant we are of false negatives, the higher our threshold is for good enough. So let's just tie that up in a bow. So how does this relate to position sizing in really early stage venture? I don't know. You get to something that's good enough. Can you really tell the difference between the two?

Auren Hoffman (41:06.306) Yep. Probably not. Yeah. So maybe I should just do the same thing and everything.

Annie (41:11.756) So I think that what ends up happening, I think it's really, my personal feeling is that you have a goal of sort of, and you can decide what the goal is one of two ways. You can decide that you want a certain amount of ownership in a company or you only want to invest a certain amount in each company, whatever, I don't care. You be you, right? But naturally what's gonna happen is that you're gonna end up with different size bets regardless because you can't always get your.

what you want, right? You can't always get your allocation. And then what you should actually think about is this: okay, so I've got my allocation. Now I know more about the company than the market does, and you probably can discern later. Do you see what I'm saying? So I just sort of save the whole discernment thing for later.

Auren Hoffman (41:43.224) can't always get your allocation right exactly.

Auren Hoffman (41:57.271) Yeah.

Auren Hoffman (42:02.496) Okay. That makes sense. Yep. Absolutely.

Annie (42:08.066) Personally, that's what I would do, but that's not what everybody does. Other people do think about big bets, medium-sized bets, small bets. That's fine. I just personally am a little bit skeptical about it. People are welcome to go look at the science on this question of how well you can discern the difference between things that are really similar to each other. And I think it's just really easy for us to tell ourselves stories about that.

And I also think it's honestly a little bit of a waste of time. But again, that's just me. No financial advice here. Thank you very much.

Auren Hoffman (42:41.646) You mentioned flipping a coin. What do you think about flipping a coin on a very big decision? I have friends who flip a coin on all the big decisions they make if they're not sure. They even flipped a coin on whether to have a fourth kid. So something that monumental, they flipped a coin on.

Annie (43:00.404) If they feel like it's 50-50, absolutely.

Auren Hoffman (43:04.492) Yeah, why not? And just like, that's the, that's the way to make the decision.

Annie (43:06.56) Yeah, it's like, here's a very big decision: I've got a week of vacation time, should I go to Paris or Rome? Big decision. Last time I checked, nobody has trouble choosing between Paris and Gary, Indiana. Right? Paris and Gary, Indiana, I'll just go to Paris. Paris and Rome, that's actually a really hard decision. But the thing is, I say hard decision because it feels really hard to people, but it's actually really easy. And I think this is a big unlock.

Auren Hoffman (43:14.029) Yep.

Auren Hoffman (43:18.882) Yeah, they're both pretty good. Yeah. Yeah.

Auren Hoffman (43:24.813) Yep.

Annie (43:35.958) When a decision is hard, that means it's easy. When it's hard in the sense that, ooh, I really can't decide between these two things, right? Then just flip a coin. Don't spend two months on TripAdvisor. What are you doing? You don't have a time machine. You can't go and see what it's gonna be like when you're there. So stop with the two months' worth of research on that.

Auren Hoffman (43:42.03) They're both good. Yeah. Or two job offers that are both great. Yeah.

Auren Hoffman (43:52.888) Yeah.

Auren Hoffman (43:57.87) But what about things like, should I marry this person? Should I have another kid? Things that are maybe more one-way doors.

Annie (44:09.41) Flip a coin. Look, here's the deal. First of all, a kid is a one-way door; marriage is not a one-way door. So one of the things, I actually don't like the one-way door, two-way door analogy, and I'll tell you why. Having a kid is a one-way door. Death is a one-way door.

Auren Hoffman (44:11.406) Okay, all right. I love it. Yeah. Yeah, if you're not sure. Yeah, obviously if you have data, yeah

Auren Hoffman (44:23.374) Okay, good point, yeah.

Annie (44:38.944) That's it. Those are the two one-way doors. Everything else is a two-way door, but some of them are more expensive.

Auren Hoffman (44:47.362) Some of them are more expensive to get out of. Yes. Yeah. Okay. Right. Okay. That's fair.

Annie (44:48.566) That's right. So when we think about how expensive it is.

Auren Hoffman (44:54.466) Yeah, some doors are a dollar to go back through and some doors are a billion dollars to go back through.

Annie (44:56.894) Right. So what I would love people to do is, when it's more expensive to open that door and go back through, you should take more time with the decision and you should have a higher threshold for getting to the coin flip. So here's kind of how I think about it. There are two things that we think about. One is how expensive it is to reverse, how expensive it is to quit the thing that I'm doing. And the other one is

Auren Hoffman (45:06.924) Yep.

Auren Hoffman (45:13.357) Yep.

Annie (45:26.162) What's the long term impact if I get a bad outcome? Those are the two things that we want to think about. So let's think about hiring an intern versus hiring a CFO. I hire an intern and they don't work out. It probably has no impact on my bottom line in a year. And gosh, it's super easy to reverse at almost no cost to me. Right? I'm just like, sorry, you didn't work out. Go back to college. That's it. So my

Auren Hoffman (45:37.89) Right, clearly.

Annie (45:54.176) threshold for getting to good enough should be really low in that situation, right? I should have a couple of checklist items that have to do with like, you know, education recommendations, whatever I can decide what's on that checklist. And then I should end up with like 20 interns that would be fine. And then I can put their name in a hat, and I can pick one and just move on with my life, which is not what people do. They spend weeks trying to sort through the pile of intern resumes, figuring out who to hire. But now let's think about CFO.

If I get a bad outcome with the CFO, long term it's probably going to have an impact on my bottom line. And it's actually quite hard to unwind. It's probably pretty expensive. Certainly severance is a bigger issue. There are big cultural impacts to the organization. I have to go through a big long search again that's going to be expensive. There's a whole bunch of stuff that goes on with that. So my threshold for getting to good enough with the CFO is going to be higher, which means I have to spend more time on the decision.

But in the end, even for one-way door decisions like having a child, you decide what the threshold is. Do I need to be leaning 60-40 to do it? Do I need to be at 50-50? You decide whatever that is, and then, if you decide that it's 50-50, maybe you go for it or maybe you flip a coin. I don't really care.

Auren Hoffman (47:15.702) On the hiring thing, let's say you're hiring a CFO. You know, the classic thing is the secretary problem, where you have N people in the pool. You first look at N/e of them, and then you hire the first person after that who is better than everyone in the first set. Do you subscribe to that, that you've got to look at a few first to know what you want?

How do you think about hiring?

Annie (47:46.38) So, okay, so you're talking about, I don't know what the PC name for that is anymore. I think it's like the apartment. I think people call it the apartment problem. It used to be called the Sultan's daughter problem. And then at some point it was the secretary problem, but I think you're supposed to just call it an apartment hunting problem now.

Auren Hoffman (47:54.636) I don't know what it is, but back in the day, in the fifties, sixties, seventies, they called it the secretary problem. Yeah.

Auren Hoffman (48:03.774) the song started. Okay, got it. Okay, so it goes back. Okay. Yeah.

Apartment under problem. Okay, sorry. That's the PC wing now. Yeah. Yeah.

Annie (48:11.722) I don't know, whatever. but I'll give you the original Sultan's daughter problem, because it's important for understanding what the difference is. Because there's a difference between the apartment problem and the Sultan's daughter problem that I think is very important to point out. So in the Sultan's daughter problem, it goes like this. So a Sultan is going to find a suitor for his daughter. And he selected 100 suitors who are suitable for his daughter. And then.

Auren Hoffman (48:17.283) Yeah.

Auren Hoffman (48:42.004) All of them are suitable. Okay.

Annie (48:42.24) All of them are suitable, right? All of them, all of them, you know, like they're a prince or they're whatever, right? None of them are, no peasants, no peasants. Okay, so the daughter is gonna meet each of them. And if she rejects one, they're rejected forever. That's the end of that. And then she's gonna have to choose one at some point.

Auren Hoffman (48:49.41) Yeah, yep. They're all over a certain bar. Yeah, okay, yeah.

Auren Hoffman (49:06.508) Yep. Yep.

Auren Hoffman (49:12.302) She's gonna meet them consecutively,

Annie (49:13.026) consecutively. Okay, so there's some math around this, but it turns out that you should meet 37 % of them.

Auren Hoffman (49:22.414) Which is 100 over e, which would be about 37. Yeah.

Annie (49:23.626) Yep. And then the first person who's better than the ones that you met before, at least as good as the best person you met before, you should lock that in. Yeah. Okay. So here are the issues that don't apply very well to real life. In real life,

Auren Hoffman (49:37.528) Yep. I remember doing this in college. Yeah.
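
The 37% rule they're describing is easy to check by simulation. This is a hedged sketch of the classic optimal-stopping strategy; the trial count and seed are arbitrary choices of mine:

```python
import math
import random

def secretary_success_rate(n=100, trials=20000, seed=1):
    """Simulate the classic stopping rule: skip the first n/e candidates,
    then take the first one better than everything seen so far. Returns
    the fraction of trials where that pick is the single best candidate."""
    rng = random.Random(seed)
    k = round(n / math.e)  # sample size to skip: about 37 of 100
    wins = 0
    for _ in range(trials):
        ranks = list(range(n))  # rank n-1 is the best candidate
        rng.shuffle(ranks)
        best_seen = max(ranks[:k])
        for r in ranks[k:]:
            if r > best_seen:          # first candidate beating the sample
                wins += (r == n - 1)   # success only if it's the true best
                break
        # if nobody beats the sample, no pick is made: counted as a failure
    return wins / trials

print(f"{secretary_success_rate():.3f}")  # roughly 0.37 for n = 100
```

The simulation lands near the theoretical 1/e success rate, which is where both the "meet 37% of them" figure and the success probability come from.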

Annie (49:48.18) You learn as you go before you choose the next person. So generally it's not like here's a hundred Go for it, right? It's like think about dating Exactly. So if i'm dating somebody Right. I go on a date and I learn things about my own preferences And myself and then that informs my choice of the next person I go on a date with so that's the first thing that's different the second thing that's different is that

Auren Hoffman (49:56.588) Yep. You don't even know what the N is. You don't know what N is. Yeah.

Auren Hoffman (50:07.682) Yep. And yourself and yep.

Annie (50:17.65) in real life, I can often go back to options that I've rejected in the past, which I cannot do here. So I do think that we need to take that into account. But understanding this idea that you should do some sampling before you choose, I think that's a really valuable idea. But I don't want you to say, how many people do I think I'm going to meet, and then, you know, mechanically reject that first chunk of them.

Auren Hoffman (50:23.554) Yes. Okay.

Annie (50:47.05) Right. Because, I mean, it's a very interesting problem, but it's not a good isomorph. So if we go back to the apartment hunting problem, you could, I suppose, say, I'm going to look at 10 apartments and then I'm going to choose after four. You could do that. But you have to remember that you don't need to just pick 10 apartments. You can

Go look at a few apartments and say, I thought that I was going to like this, but actually it turns out I don't like this neighborhood very much, or I actually do want a third bedroom, I don't, whatever. And then that's going to inform your choices about the next apartment. Then you look at number one. And even if you didn't do that, it's not like you can't say, you know what, I want to go back to the first apartment.

Auren Hoffman (51:39.084) Yeah, I mean sometimes you can't because they go. Yeah, Yeah, yep.

Annie (51:40.386) Sure, sure, sometimes you can't, because they go, but sometimes you can. And you should always be taking that into account. And that's like with hiring; you really do approach it that way, because you're getting candidates and then you're like, maybe I should change my job description if this is the pool that I'm getting. As you start to talk to people, you realize you should have specified different things, or it's attracting a whole set of people who aren't a good fit.

And so there you're learning as you go. And so you may actually change, you could change your job description, for example, you could change your screening process.

Auren Hoffman (52:17.046) Yeah. Okay. Really interesting. Now in poker, you talk a lot about things like mathematical decision making, but in your poker career, I think you also talked a lot about the importance of being able to read people. I'm personally terrible at reading people. What are one or two things that somebody like me can do to get better?

Annie (52:39.476) Yeah, okay, so

A couple of things. In poker, the bulk of the work that you're doing, assuming that you're not playing just game theory optimal, let's assume that's not what's happening, but you're actually playing an exploitative game. I just want to be very clear: I'm using exploit in the game theory sense, not in the manipulative sense.

Auren Hoffman (52:59.48) Yeah.

Auren Hoffman (53:10.67) Remember, we're amongst data nerds. We know what you're talking about. Yeah.

Annie (53:12.864) I just always feel like I need to say that. But somebody has tendencies which you can exploit. And so you're not playing game theory optimal in that case. You're not following Nash equilibria or anything like that. You're like, I know that Auren calls too much. So I'm not going to bluff him very much, but I'm going to be what's called a nuts peddler. I'm going to start running the nuts into him because I know he's going to call me every time.

So you'll change your play based on what the other person is doing. When you're playing cash game poker, a lot of what you're doing is actually really just trying to create a model that's better than the general base rates for the player that you're playing against. So if you're like, given that I know nothing about you, I'm going to assume you're going to enter 20 % of the pots that you

play in a nine-handed game. And this is gonna be your range for opening hands and things like that. So there's a whole bunch of things where I'm going in with a general model, given the reference class of poker player, right? And I can have some general, easy ways to get a little bit more refined even before I know you. So if you're a retiree at 10 a.m. on a Wednesday morning,

that's a different reference class that I'm gonna be pulling from before I've ever seen you play a hand than if it's two in the morning on a Saturday and you're drunk, right? So I have different reference classes, but that's gonna be my starting point. That's gonna be my prior. And then what I'm trying to do as I watch you play is update my priors to refine my model of you personally.
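
This prior-then-update loop is essentially Bayesian. As a hedged sketch (my framing, not something Annie specifies), the fraction of pots an opponent voluntarily enters (often called VPIP) can be tracked with a Beta-Binomial update; all the numbers here are illustrative:

```python
def update_vpip(prior_mean=0.20, prior_weight=50, pots_entered=12, hands_seen=30):
    """Beta-Binomial update of an opponent's pot-entry rate (VPIP).

    Start from a reference-class prior (e.g. 'an unknown player enters
    ~20% of pots', held with the weight of ~50 imagined hands), then
    shift the estimate toward what you actually observe at the table."""
    alpha = prior_mean * prior_weight        # prior 'pots entered'
    beta = (1 - prior_mean) * prior_weight   # prior 'pots folded'  (unused below, shown for clarity)
    posterior_mean = (alpha + pots_entered) / (prior_weight + hands_seen)
    return posterior_mean

# After watching 30 hands in which they entered 12 pots (40%), the
# estimate moves from the 20% base rate partway toward 40%:
print(f"{update_vpip():.3f}")  # (10 + 12) / (50 + 30) = 0.275
```

The prior weight controls how sticky the reference class is: a heavier prior means more observed hands are needed before the model of this specific player diverges from the base rate.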

Auren Hoffman (54:45.58) Yep. Yep.

Auren Hoffman (55:00.662) That's just with data, not really about reading them. Okay, because then I could do that. With the data, my problem is reading the person. Yeah.

Annie (55:02.124) Just with data so far, yeah. But then there's a reading-people part. Okay, so you're doing that just with data, but that's actually really important to do, right? Because one of the mistakes that a lot of people make is they think that other people play just like them. And we don't wanna do that, because people are different. And I wanna be able to see if there are some holes in your game. And if I can find those holes, then I can exploit them.

And I can win more money that way. So that's a nice thing to do: build a model of your opponent. Okay, but where does the reading come into play? When I'm playing poker, what I'm always trying to do is narrow down the range of hands that you are holding. I can't see your cards. But what's going to be different between a really great poker player and an amateur is that the

possibilities of hands that I think you have are going to be narrower and more accurate than what an amateur might think you have. It's not that I'm going to think you could have every two-card combination, because I'm going to be like, well, he plays a low percentage of his hands, and he's opened in first position, and he doesn't bluff a lot. And so now I've got you narrowed down pretty tight, right? So I'm going to be better at doing that. Now, I'm going to do a lot of that just from the data work.

Have I built a really good model of you? But some of that is going to be from reading my opponents. And when that's going to be the most important, and you're going to hear this as an echo from what we talked about before, is when it's a really close call. So remember when I said, what's the threshold for getting to good enough? If the price I'm getting from the pot is really big, I should not spend a lot of time trying to read you.

Because I don't actually want to convince myself to fold because I'm totally fine. I'm totally fine with investing there knowing that I may end up with a false positive. Why? Because maybe I'm getting 10 to 1 from the pot. So I'm making money if I win 10 % of the time if I'm getting 10 to 1. I'm making money. So I don't want to have false precision there.

Auren Hoffman (57:21.974) Yep. Yep.

Annie (57:26.146) I don't want to be thinking that I know more than I do. And so I'm not going to bother reading you or anything. I'm just going to go, I'm getting 10 to 1 from the pot. Like, can I win 10 % of the time? Sure. OK. And I'm going to call. OK. But when it gets close, so when you're in a situation where, you're getting 2 to 1 from the pot, and you're like, well, I have to win this 33 % of the time, and this is pretty close, now reading your opponent can actually

get you to fall onto one side of the fence or the other. And that's where it becomes really incredibly important. So how do you read your opponent? Well, number one is to recognize that when people are being stared at, they don't like it. And so you can get a lot of what are called false tells when you're just staring somebody down at the table because they're just uncomfortable. Would you want to be stared at really hard by somebody? No.
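
The pot-odds arithmetic in this exchange reduces to one formula: getting pot-odds-to-1 means you need to win 1/(odds + 1) of the time to break even. A minimal sketch (note that 10 to 1 is strictly about 9.1%, which Annie rounds to 10%):

```python
def breakeven_equity(pot_odds: float) -> float:
    """Minimum win probability for a call to break even.

    Getting pot_odds-to-1 means risking 1 unit to win pot_odds units,
    so the call breaks even when you win 1 / (pot_odds + 1) of the time."""
    return 1 / (pot_odds + 1)

print(f"{breakeven_equity(10):.3f}")  # 0.091: the easy 'win ~10% of the time' call
print(f"{breakeven_equity(2):.3f}")   # 0.333: the close 'win 33% of the time' call
```

At 10 to 1 the required equity is so low that extra information barely matters; at 2 to 1 the required equity sits near a typical close call, which is exactly where a read can tip the decision.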

Auren Hoffman (58:15.79) No, definitely not. Yeah, I'm the kind of guy looks at my shoes, so yeah.

Annie (58:18.58) Exactly. So you might start doing some stuff that like I connect to your hand. That's really actually just about general discomfort that's being created by like, it's a long time that I've been sitting there and staring at you. So generally, what you want to understand is that whatever physically you're getting from somebody, you're going to want to get relatively early on, right? And that's generally going to be better. It's not that someone might not give you something later on. It's just I think it's just a lot noisier when that happens. Okay.

So what am I going to be able to see? There's a whole bunch of signs of discomfort and there are signs of happiness, right? So as an example, interestingly enough, at the poker table, when someone's moving their knee up and down, it's actually not a sign of nerves. It's a sign of a good hand. It's like a dog wagging its tail. So.

Auren Hoffman (59:10.574) Okay, interesting. So when they're kind of jittery, you know, they're, they're, they're

Annie (59:13.794) Well, it depends, specifically when the knee is going up and down. When that happens under the table, it's like...

Auren Hoffman (59:18.636) knee. Okay, interesting. Okay, that was like a Sam Bankman freed thing with his knee going up and down all the time. Yeah.

Annie (59:24.874) Right. So in a lot of settings, like a social setting, it might be a sign of nerves or being anxious or something like that. In that particular setting, it's usually a sign of a good hand. Right. Then it's like...

Auren Hoffman (59:30.893) Yeah.

Auren Hoffman (59:34.592) Right, you see it and it's not normally the case, then okay, maybe they've got a good hand. Okay.

Annie (59:39.83) Because think about it, if someone's really bluffing, that's true, now they'll bounce their knee up and down. There's a whole bunch of stuff that has to do with the mouth. There are self-soothing behaviors. So if you ever see someone rubbing their fingers, or moving their tongue on the inside of their cheek, that usually means that they're bluffing, or they're uncomfortable with their hand; it's called a self-soothing behavior. So with a man, if they have a beard, don't do this. But if they're, like,

Auren Hoffman (59:42.594) But now everyone knows that, now everyone just like bounces their knee up and down all time. Yeah.

Auren Hoffman (01:00:01.995) okay.

Annie (01:00:09.058) sort of rubbing the back of their neck or sometimes they'll be sitting at the table and they'll be sort of going like rubbing their fingers or if they're going like on the inside of their cheek or something like that.

Auren Hoffman (01:00:16.451) Mm-hmm.

Auren Hoffman (01:00:19.906) But again, now everyone knows that, so you think they like, I assume if you know that, other people know that, so now they're like faking it or?

Annie (01:00:23.168) Well, that's true, except... Yeah, because involuntary actions look really different than voluntary ones. You can also look at something called hooding. So have you ever seen someone where, and you kind of always instinctively know they're lying, they say the thing and they sort of have this really long blink and then they open their eyes? That's called hooding.

Auren Hoffman (01:00:28.302) except it's hard to fake.

Auren Hoffman (01:00:33.166) Okay.

Annie (01:00:50.315) But there's actually a great guy who talks a lot about this body language. His name is Joe Navarro. Yeah, so he's got great work on this idea of how you actually read people.

Auren Hoffman (01:00:56.716) right, right, of course, I read his book, yeah.

Auren Hoffman (01:01:03.318) Interesting. And like, I've basically gone through life just, I just like assume everything people tell me is a hundred percent true. And it's generally worked out pretty well. Like, is there any other advice you would have for someone like me who just like assumes everyone, everything you tell me is correct?

Annie (01:01:13.154) Yeah.

Annie (01:01:25.41) So I think that's interesting. I think most people are telling you the truth. I agree.

I think that what we need to understand, though, is that whatever their interpretation is, is not necessarily true. It's their belief of it. But, you know, people talk a lot about misinformation online. There's work by Duncan Watts which shows that misinformation versus what we call misleading information is 46 to one. Misleading information is 46 times the problem.

Auren Hoffman (01:01:42.926) Yep, yep. It's their belief of it.

Auren Hoffman (01:02:02.22) Yeah, yeah, that makes sense.

Annie (01:02:03.146) And the misleading isn't people purposely trying to mislead you. It's that they're telling you some data and they think it's true. Yeah, so as an example, you see a lot of people talking about crime statistics online. And when people tell you that New York is more dangerous than Macon, Georgia, they think they're telling you the truth, because they're like, I don't know how many murders

Auren Hoffman (01:02:11.778) from their side of the story.

Annie (01:02:30.978) We're in New York last year, but maybe 350 or something like that. Well, let's look it up. I have the interwebs. Let's see.

Auren Hoffman (01:02:41.868) I like this, doing live reporting here.

Annie (01:02:42.25) Yeah.

Annie (01:02:46.978) Okay, so last year, let's see.

Auren Hoffman (01:02:54.114) Because safety is always this hard thing because it's not just the number of murders, it's all the other crime and it's just how you feel. And it's like you feel safe walking around and do people stare at you? Just someone just yell at you, you know, and that makes you feel unsafe, even though it may not be a crime to yell at somebody. So these things are very, very like psychological, I imagine.

Annie (01:03:02.113) Right.

Annie (01:03:09.644) Right.

Annie (01:03:15.66) Okay, so.

Oh, this figure that I have doesn't tell me for all of 2020, which is sad.

Auren Hoffman (01:03:28.918) The interwebs are a problem right there. But I think part of what you're saying is that these things are emotional. They're not just data driven, right? They're just... Yeah.

Annie (01:03:31.124) Right, right, exactly.

Annie (01:03:38.41) Well, no, let's make up some numbers. This is what I'm actually saying. So generally, when they tell you that New York is less safe than Macon, Georgia, it's that New York had 350 murders and Macon, Georgia had 90. And they're like, see, New York is more dangerous. Those people are not lying to you.

Auren Hoffman (01:03:58.838) Yeah, yeah, yeah, yeah, yeah, yeah, yeah. And New York is like, you know, gazillion times the population or yeah.

Annie (01:04:05.132) They're just... there's denominator neglect going on there. And I think that there's all sorts of places where the denominator hides in the shadows, right? Like, people don't realize that there's a denominator. Now, if I said back to them, yeah, well, Georgia is more dangerous than New York City, they would understand there was a denominator problem right away. Right, because they'd be like, wait, Georgia is a state and New York City is a city. But, so,

Auren Hoffman (01:04:08.353) Yep. Yep.

Annie (01:04:34.21) actually just so people know because I have I have looked it up before because it's making Georgia is actually more dangerous per capita than New York City. And neither of them are you're not likely to get murdered in either of them. But but but but but so people get that reverse. Now look, I really don't think that the people who tell you that are lying. I really don't like what one of my favorite

Auren Hoffman (01:04:42.51) Okay, I've never been to Macon,

Auren Hoffman (01:04:58.754) But by the way, even when people use real data, like they'll say, you know, it's more dangerous to be in the city of Baltimore than it is to be a soldier and a U S soldier in Afghanistan or something. like, well, maybe that's technically true, but like, I could probably avoid the danger more in Baltimore than I could if I was a soldier in Afghanistan or something.

Annie (01:05:11.039) If you're talking about

Annie (01:05:16.308) Right. And they're generally making the mistake of telling you counts of things, as opposed to, well, how many total people were exposed, how many people ended up dying, you know, that kind of stuff. So one of my favorite examples of this actually comes from the Washington Post. So I read an article in 2022, in the fall of 2022, and the title was, COVID is no longer a pandemic of the unvaccinated.

So that was the title of the article. And I thought, whoa, this is interesting, I'm going to go read this, because at that time, I mean, the Washington Post definitely had a pro-vaccine bias. So I was like, oh, there must be some very interesting data behind this very interesting headline. And so I went and looked at it. And it was that in the month of August of 2022, 58% of the people who died of COVID were vaccinated.

Auren Hoffman (01:06:14.99) But I assume a very high percentage of the population was vaccinated by that point or something. Right, right. Yeah. Yeah.

Annie (01:06:17.226) Yeah, well, you wouldn't know from the article because that was the question I asked and it wasn't in the article. Now, I think we can agree that that reporter was not trying to lie. And if you go fact check that number, by the way, in August of twenty twenty two were fifty eight percent of the people who died of covid vaccinated. That will come up. Fact check. Yes. Right. OK. So so I asked the same question you did. I was like, wait a minute, what percentage of the population is vaccinated? And I went and looked it up because I couldn't get it wasn't in the article.

Auren Hoffman (01:06:26.691) Yeah.

Annie (01:06:47.106) And at that time, 80 % of the population was vaccinated. So I was like, well, that seems like vaccines are pretty effective. I don't know. So then I was like, well, also, I guess you need to do some age matching because there's a lot more vaccinated people who are old than young. And old people are much more likely to die than young people. So it'd probably be good to do some age matching. I'm lazy. I wasn't actually going to do that. But I found a blog from someone who had done it. I was very lucky.

And it turned out that in the month of August, it was five times better to be vaccinated than unvaccinated. Now the thing is, I have no quarrel with anybody who doesn't get vaccinated for COVID. That's your own choice. The problem is that you're trying to make a choice that's data driven, and not everybody is Auren, who's gonna go, wait a minute, how many people are vaccinated? Maybe I should do some age matching, whatever. So you're just looking at that:

Auren Hoffman (01:07:39.438) Yep.

Annie (01:07:42.722) here's a newspaper, they're not trying to lie to you, they're drawing a conclusion from this data that is totally misleading, and now you're going to make a decision based on that data whether to get vaccinated or not. My whole thing is like, I just want you to make a decision where you actually understand how you're supposed to make sense of the data and then you can decide for yourself. I'm 20, I don't have a lot of risk or whatever. I don't care, you be you, make your own decision about it. It has nothing to do with me.

Auren Hoffman (01:08:06.296) Yep. Yep.

Annie (01:08:12.106) So, but I think that this is generally the problem, right? Is I think that mostly people are telling you the truth, at least what they believe the truth is, but you aren't taking enough responsibility in terms of your own decision-making to say, well, wait, how do I interpret the information that I've just been told? And I think we just sort of stop at someone telling us a fact wrapped in their interpretation.

And then we don't actually say, well, wait, what more are we supposed to ask here?

Auren Hoffman (01:08:45.196) Really interesting. This has been awesome. Last question we ask all of our guests, what conventional wisdom or advice do you think is generally bad advice? And we probably already went through a few today. Trust your gut. OK.

Annie (01:08:55.842) Trust your gut. Yeah. Just like, that's gotta be the worst. So what I would say is if the decision doesn't matter, it doesn't have any long-term consequences, it's a super inexpensive door to go back through. trust your gut. But like, you're gonna end up turning into, you know, away from a lot of skids that you should be turning into. It's like, sometimes your gut's pretty good, but like if you don't

If you don't make explicit what your gut is doing, how your gut is modeling the problem, you're going to miss a lot of errors.

Auren Hoffman (01:09:33.774) It's interesting. So I, I, I never trust my gut to do something, but I almost always trust my gut to not do something. So if it said to hire somebody, I wouldn't trust my gut, but if it said, don't hire this person, I would definitely trust my gut. Where do I have it wrong there?

Annie (01:09:52.13) So what I would say is this, that if you've gone through a really good decision process and then your gut intuition is like, I think that's a really good reason to go back and examine. So I do think that...

When someone has a lot of experience, sometimes there's something that they may have missed, and their gut is sort of putting up an alarm bell, right? I'm a little nervous about this. And I think that then you should go back and look, because the thing you should always strive to be able to do is actually give the real rationale for why your gut feels that way. Right? So I just think that's really important. Now, if you want to have a rule

Auren Hoffman (01:10:25.485) Yep.

Annie (01:10:39.786) that your gut can't make you do something, but it can stop you from doing something, that might be okay, except what if you're like super risk averse, right? Now you're gonna make a lot of really bad decisions on that role. So that's why I don't like, what I like to do is think before I face a decision, sort of what are the things that I care about? What are my values? What's the threshold for deciding? I try to sort of think about all of those things in advance.

Auren Hoffman (01:10:48.864) Yeah, yeah, good point, which I am. Yep.

Annie (01:11:07.554) then I go through all the work of doing it. And then if there's something going ping, ping, ping, ping, ping, I'm like, I'm going to go back and look at this work again. What more information do I need to know in order to understand? Because I'm going to say, well, why is my gut feeling like that? Right? And then I'm going to go from there.

Auren Hoffman (01:11:29.966) This has been great. Thank you, Annie Duke for joining us on World of DaaS. I follow you at Annie Duke on X. I definitely encourage our listeners to engage you there. This has been a ton of fun.

Annie (01:11:39.136) Well, thank you for having me. This was a super fun conversation.

Auren Hoffman (01:11:44.014) Amazing.
