a16z’s Martin Casado
building with AI
Martin Casado is a General Partner at Andreessen Horowitz (a16z), where he focuses on AI and infrastructure investments. He previously co-founded Nicira, which was acquired by VMware for $1.2 billion in 2012.
In this episode of World of DaaS, Martin and Auren discuss:
Economics of open source AI
Chinese AI innovation with DeepSeek
Model collapse and data moats
Regulatory challenges in AI

1. AI Model Fragmentation and the Illusion of Complexity
Martin Casado challenges the narrative that building advanced AI models is exceedingly difficult. He argues that while the initial breakthroughs (like GPT-3.5) were hard, the marginal steps beyond that are often easier than they're made out to be. He believes the industry mystifies the difficulty to justify high valuations. Fragmentation is a key theme: companies will increasingly specialize their models for specific domains (e.g., code, protein folding), leading to differentiated products rather than one-size-fits-all AGI models.
2. Business Models, Market Evolution, and the Rise of Niche Tools
Casado emphasizes that AI companies are carving out specific verticals where they dominate (e.g., Midjourney for image generation, ElevenLabs for text-to-speech, Cursor for coding). Because of the rapid expansion of AI-related markets, fragmentation is inevitable and even healthy. He compares tools like Cursor to junior developers assisting senior engineers, and contrasts them with no-code tools like Bolt and Lovable, which serve less technical users.
3. The Role of Taste, Specificity, and Human-AI Interaction
Casado discusses how the usefulness of AI depends on user expectations. If you have strong taste or specific needs (like professional developers or writers), you’ll find AI helpful mostly for brainstorming and structure, not final outputs. He uses AI daily—mostly for coding video games and as a learning companion (e.g., chatting with Grok about complex books)—but still prefers manual control when precision matters.
4. Strategic Outlook on AI, Data, and Policy
Casado believes most valuable data has already been consumed by LLMs. Now, the edge lies in labeling quality and regulatory environments. He praises U.S. policy shifts toward better governance and is skeptical of open source as traditionally defined in AI. He stresses that the real differentiator is high-quality, expert-labeled data—particularly in creative fields. On national security, he draws parallels between today's AI risks and past concerns about Chinese telecom infrastructure.
“The paradox of this AI stuff is the more of an opinion and taste you have, the harder time you're going to have using it.”
“We've basically exhausted all human-created data—within a factor of two.”
“Professionals tend to use AI for the things they're not professional at.”

The full transcript of the podcast can be found below:
Auren Hoffman (00:00.854) Hello, fellow data nerds. My guest today is Martin Casado. Martin is a general partner at Andreessen Horowitz, where he focuses on AI and infrastructure investments. Previously, he co-founded Nicira, which was acquired by VMware for $1.2 billion in 2012. Martin, welcome to World of DaaS. Super excited. DeepSeek has been out for a few months now. And now that the dust has kind of settled...
Martin Casado (00:18.594) happy to be here.
Martin Casado (00:25.068) Yeah.
Auren Hoffman (00:28.02) What's the significance of the release?
Martin Casado (00:35.64) So there's kind of two theories on this. One theory is it shows how sophisticated China is when it comes to creating models: something that took us a lot of money to do, they did relatively cheaply, in a relatively small organization, in a short period of time. So that's one theory, basically the, almost like, technical hegemony of China. I tend to actually believe a different view, which is
it may just be the case that building these things isn't that hard. And because we have organizations in the United States that have raised a lot of money, you know, with a lot of fanfare, and they've got high valuations to hit, they talk about how tough it is, and AGI and this and that and the other thing. And it just turns out, especially with these reasoning models, they're not that big of a step from, you know, the traditional pre-trained models.
Auren Hoffman (01:28.202) Especially once you see what other people have already done. Yeah.
Martin Casado (01:30.286) That's exactly right. Yeah. So I've got a view that's more like: it was actually very hard to get to the first breakthrough, which is all the pre-training stuff. To get to GPT-3.5 is tough. And, you know, DeepSeek did that with V3, and we don't know how much they spent to do that, but that was tough. But then to go from that to the o1 or R1 equivalent is actually not that hard. So I kind of feel there's just a lot of mystique around these things that may not be warranted, and DeepSeek kind of shows that this isn't that hard.
Auren Hoffman (02:00.182) So how do you think about the fact that there are at least 10 companies probably spending a billion dollars or more on these models? How does that play out, and what are the second-order effects of that?
Martin Casado (02:05.304) Yeah. Yeah.
Martin Casado (02:10.914) Well, this is the big question, right? And so I think this is basically the consensus: the first wave was getting to the multimodal model via pretty aggressive pre-training, right? And this got us GPT-3.5, GPT-4, Llama, Anthropic's models. This is before the reasoning models. And that was a very, very real breakthrough. And what's amazing about those models is they're incredibly general. You kind of consume all the tokens of the world, and you can apply them to, you know, a whole bunch of problems. But...
Auren Hoffman (02:33.845) Yeah.
Martin Casado (02:40.086) You know, because there's a finite set of tokens in the world, they all kind of converged to the same capabilities. So now the big question is...
Auren Hoffman (02:48.576) But some do seem like slightly different, like maybe Anthropic is a little bit better for software development and,
Martin Casado (02:53.87) Great, great, exactly. No, this is the big question, right? So this is exactly right. And so now we're focusing more on RL during post-training. And what that means is, if you've got a good validator, then you can use synthetic data, or synthetic methods, or specially collected data, to make the model good at certain domains. You can make it good at protein folding, or you can make it good at code. Now what we don't know is
to what extent that actually generalizes. And so if you've got 10 organizations spending a billion dollars, we may end up with 10 really amazing, very different models, or we could have the same thing that we had in the pre-training days, where they all kind of converge on the same type of thing. And I think this is one of the biggest open questions on what happens to the industry going forward: do they fragment because the RL stuff doesn't generalize, or do they not fragment? I think I'm in the camp that they will fragment.
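The post-training recipe Casado sketches here, where a programmatic validator scores candidate outputs and that score is used as a reward, can be illustrated with a toy loop. Everything in this sketch (the "strategies", the multiplicative update, the validated domain of addition) is an invented stand-in for illustration, not any lab's actual method:

```python
import random

random.seed(0)

# Toy "validator" for a checkable domain (here, addition): it can score any
# candidate answer programmatically, with no human labels needed.
def validator(a: int, b: int, answer: int) -> float:
    return 1.0 if answer == a + b else 0.0

# Toy "policy": a distribution over two answer strategies, standing in for a
# model's sampling behavior. (Both strategies are made up for illustration.)
strategies = {
    "careful": lambda a, b: a + b,
    "sloppy":  lambda a, b: a + b + random.choice([0, 1]),  # wrong half the time
}
weights = {"careful": 1.0, "sloppy": 1.0}

def sample_strategy() -> str:
    # Sample a strategy proportionally to its current weight.
    r = random.uniform(0, sum(weights.values()))
    for name, w in weights.items():
        r -= w
        if r <= 0:
            return name
    return name

# RL-ish post-training loop: sample a behavior, score it with the validator,
# and reinforce whatever passed.
for _ in range(2000):
    a, b = random.randint(0, 9), random.randint(0, 9)
    name = sample_strategy()
    reward = validator(a, b, strategies[name](a, b))
    weights[name] *= 1.02 if reward > 0 else 0.98

p_careful = weights["careful"] / sum(weights.values())
```

After the loop, nearly all probability mass sits on the strategy the validator rewards. The open question raised in the conversation is whether a skill reinforced this way transfers outside the validated domain.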
Auren Hoffman (03:49.458) And mainly you think that just because it's a better business model? Because if you have 10 people all doing the exact same thing, it's a commodity, whereas if each one is doing something very, very specific, then they could charge a lot for it.
Martin Casado (04:00.258) That's a great question. I think they technically fragment. If you make a model very good at code, it's probably not going to be as good at something else. And so if you focus around a certain domain, then technically that doesn't generalize. You ask another amazing question, which is that one of the stories of AI has been fragmentation, I mean, if you think about it from a business model standpoint, not a technical standpoint. So for example, OpenAI was the first to image with DALL-E, but it didn't end up leading in image. It was the first to code.
Auren Hoffman (04:07.764) Hmm.
Martin Casado (04:26.498) But like Anthropic seems to be ahead in code. It was actually the first high quality video, but you know, it seems to have not become a leader in high quality video. It was of course the first to chat GBT and it is the leader in chat GBT. So the markets are so big and they're growing so fast that independent of the technology, it does seem that they are fragmenting so that you've got a leader is emerging in all of these other categories that are, that are showing up. And so I think that it's pretty reasonable to bet because the markets are so large and they're growing so fast that we were.
going to continue to see fragmentation like bunch of different companies, different spaces. I also
Auren Hoffman (04:59.902) And is that because of the underlying data, or is it just because they're going to keep focusing on it and getting feedback from the customer and...
Martin Casado (05:07.586) The one I'm referring to is the second one, just strictly business fragmentation, which is like: if you've got a $4 billion business that's growing, you know, three to five X or something crazy like that, why would you even do something else? It just doesn't make sense from an investment standpoint. I mean, even think about just image alone, right? So before AI, you know, image was kind of a backwater business. Like, there were no
Auren Hoffman (05:18.89) Right. Right.
Martin Casado (05:33.25) businesses for, like, clip art and things like that, or they weren't very large businesses. And then we have these text-to-image models, right? Like Midjourney, Ideogram, BFL. And now we have multiple, and they're at scale, and they all are kind of going after different areas, right? Like Midjourney is kind of gritty sci-fi, you know,
Ideogram is for designers, BFL is more of a horizontal play. And so I do think the story is that these markets have become so large that they are fragmenting, particularly from a business standpoint: they're finding their own niche and they're doubling down on that niche.
Auren Hoffman (06:04.246) But it doesn't seem like in any of those cases there's a winner-take-most thing. You would think there would be, because you're getting data, you're reinforcing it. And it seems like everyone is super competitive.
Martin Casado (06:16.974) I do, well, I don't know. I do think that we're seeing this be Pareto. I do think that winners are in the lead, you just have to be at the right zoom level to see that, right? Like, OpenAI definitely is dominant for text. Yeah. Right, right. No, I know. So, you know, listen, Midjourney is clearly the leader in text-to-image.
Auren Hoffman (06:26.336) Okay.
Auren Hoffman (06:29.77) Sure, because of the brand too, right? Not just because of the product, yeah.
Auren Hoffman (06:40.582) Midjourney is kind of hard to use. Like, you have to go into Discord, and then later on, somewhere in the Discord chat, you see your thing. Yeah, it's kind of annoying.
Martin Casado (06:47.933) Totally. Yeah, for sure. And yet they bootstrapped to actually quite a bit of scale. I mean, I would say Anthropic is the leader in code; I would say they've probably got 80% of the market. I would say ElevenLabs is the leader in text-to-speech; there's nobody even close to ElevenLabs for that. You know, and so I do think that we are seeing leaders get about 80% of the market. I mean, Cursor is clearly the leader in code from the app perspective. And so again, I do think
Auren Hoffman (06:52.704) Totally.
Yeah.
Auren Hoffman (07:02.197) Yep.
Martin Casado (07:15.202) basically the marginal cost to do stuff has gone down, markets are expanding very quickly, and as a result you get natural market fragmentation. I mean, just one more very quick example on this. If you look at the video companies, they even fragmented, right? It was like, Pika kind of went consumer, you know, really good at anime; Luma, you know, focused on kind of creatives; Runway on creative professionals. And so you're seeing this kind of fragmentation wherever you look.
Auren Hoffman (07:39.574) And you mentioned Cursor. Do you think that they've been so successful and so dominant because the product is so much better? Or is it just one of those things where everyone starts using it, and then everyone tells everyone else to start using it, and, you know, et cetera?
Martin Casado (07:53.806) Yeah. So I think in the case of Cursor, it's because Microsoft had matured the market for two years, right? So Copilot came out almost exactly three years ago, and so it had been out for at least a year by the time Cursor came. And people were used to using VS Code, you know, and Cursor kind of used that as a starting point. And then from there, I think they've executed just exceptionally well. And so I think it's the combination of the two.
Auren Hoffman (08:06.794) Yeah, people are used to using that in their IDE. Yeah.
Auren Hoffman (08:17.429) Yeah.
I mean, I would say, you know, almost 80% of software developers I know use Cursor. It's just insane. You know, I'd never heard of the company like a year ago.
Martin Casado (08:26.638) Yeah, I know, I use it every night. I mean, we did a survey of a number of portfolio companies: more than 50% of the developers use Cursor. And that's against VS Code, which has been around for a very long time from Microsoft. I mean, it's so dramatic.
Auren Hoffman (08:41.046) Yeah. And also all these other weird things, like engineers love their weird IDEs, but they're switching over for a reason, right? Yeah.
Martin Casado (08:47.326) Yeah, no, it's been very dramatic to see that.
Auren Hoffman (08:53.494) And what about these front-end versions of Cursor? There's, you know, the Bolt.new, the Lovable, the Vercels. Those seem to be more competitive. They're all growing super fast, of course.
Martin Casado (09:07.118) They're all growing super fast. I think there's kind of two markets here. One market is you're interfacing with a professional developer who's relatively senior, and the language that you speak to that developer is code, right? The developer understands the PR, understands the generated code. And that's really squarely what Cursor is going after. So as I said, they understand code.
Auren Hoffman (09:30.644) Yep. Yeah, they're going after a real developer, not just like a hacker.
Martin Casado (09:34.54) That's right. Well, a real developer who's technical, who has an opinion on what needs to be generated and has an opinion on the implications of these things. Yeah, they've probably built a lot of systems, et cetera. And it's interesting, one of these observations I've made talking to a number of users of Cursor is they're starting to say, listen, it's actually easier for me to use Cursor than work with a junior developer, right? And so it's really these kind of senior folks that are using Cursor.
Auren Hoffman (09:40.426) Yep. Yep. Yeah. Probably they have a CS degree or something, you Yeah.
Martin Casado (10:03.702) On the other hand, you've got the Bolts and Lovables. A lot of their users are non-technical, right? And so, you know, the interface to them is more the final product, and it's not the actual code itself.
Auren Hoffman (10:13.108) Yeah. I've found a lot of their users are, like, the product manager or something, which is, you know, fairly technical, but doesn't have a computer science degree.
Martin Casado (10:18.23) Exactly. Right.
Martin Casado (10:23.979) That's exactly right. And so I just feel like that's almost like a different like a different market a different a different use case
Auren Hoffman (10:27.327) Yeah.
Auren Hoffman (10:31.122) And at least when I've looked at the Lovables and Bolts, I think they have higher churn, because you're using it for a specific project and then you might stop for a while. Whereas I imagine with the Cursor IDE, if you're in it every day forever, you're not going to churn unless you just move to another product or something.
Martin Casado (10:46.286) Yeah. Yeah. I think you can almost compare the, you know, prompt-to-full-website things to the Wixes and the Squarespaces, where you kind of get what's provided for you, but you can't have too much of an opinion, right? You know, it's almost like the opinion-free version, right? You get a few things, and they look marvelous and they're dazzling, but you don't have the prerogative as a user to specify
Auren Hoffman (11:03.284) Yeah.
Martin Casado (11:16.278) you know, exactly what you want, right? And so it's good for a certain type of thing, but not when you're the super professional who actually cares about everything. Whereas in the case of Cursor, you know, a lot of the users have a deep opinion of what the system should look like, they have a deep opinion on the code, they've got very clear specifications, and that's why the interface is actually code. Because remember, natural languages are not sufficient for describing formal systems. There's a reason we've created
programming languages. And the reason we created programming languages is that they're non-ambiguous; they allow you to describe things specifically. And so Cursor is like, listen, a lot of people actually have specific needs, we'll give them an IDE. And these other systems are like, you know, their users just want something. And I think that's a different market.
Auren Hoffman (11:45.312) Yep.
Auren Hoffman (11:59.862) When I look at the developers using things like Cursor, the ones I see getting more benefit from it, it seems like there's a difference in their personality, where they're willing to go back and forth in a weird English-text way, saying lots of pleases and thank-yous. It's almost like you're having a conversation with a junior developer to do a bunch of things for you.
Martin Casado (12:29.88) Yeah.
Auren Hoffman (12:29.898) And you're almost mentoring that junior developer along. Have you seen something like that?
Martin Casado (12:34.592) Yeah, I mean, it's so funny. There's actually all sorts of personalities for interacting with these AIs, right? Some of them are like what you're saying, where you're kind of coaxing them along and teaching them and providing context, and it is like a senior developer working with a junior developer. I've seen others which are tyrants, literally saying things like, if you don't do this right, I will kill 10 kitties. So it's this very aggressive, putting-a-lot-of-pressure-on-it approach. And honestly, there are others, and I kind of fall in this camp. I mean, I program with Cursor most evenings.
Martin Casado (13:24.238) Okay, so I use Cursor most evenings, and I find that I would rather write the code myself and get suggestions from Cursor than actually allow Cursor to edit it itself, because I feel like I'm much closer to it.
Auren Hoffman (13:34.432) Yep.
Auren Hoffman (13:38.71) Is that just because you're more old school, like if you were 20 years younger, it might be different?
Martin Casado (13:42.446) It's a very reasonable question. I mean, it could be this is my boomer moment and I should just kind of let go and let it write a bunch of code. I think it's a very reasonable question. But for me, I have these specific ideas of what I want. They're fairly sophisticated, and if I'm not part of the code creation, then I just can't nudge it in the right way, because I don't have the context to do it. And so it may be the case that what's gonna happen is we're almost
Martin Casado (14:08.898) gonna reach the end of opinion. Like, systems builders are gonna stop having opinions, or, you know, not only do you build it for me, but you tell me what I like. Just because having an opinion on what these systems look like means that you've got to be much more involved than the model wants you to be.
Auren Hoffman (14:27.294) I write a lot, and I still find that I'm not really using these tools that way. Like, the software developers are literally in the tools every day. If I write a piece or something, I'm using it as an editor, but it's kind of at the end, and maybe every once in a while to help me rephrase something. But it's not literally in the system as I'm writing, and I would think that would be so good. I just don't...
Martin Casado (14:39.662) Yeah.
Martin Casado (14:48.728) Yeah.
Yeah.
Auren Hoffman (14:56.02) I haven't found the tool to do that. Is there something like that that you've seen?
Martin Casado (15:01.41) I really think it comes down to the user's need for specificity. I think you probably have independent taste and an independent aesthetic and an independent idea of what it should come out with. And if you have that, there's no way a model is gonna be able to do that, because they produce whatever they decide to produce. Whereas other people, I think, just want something.
Auren Hoffman (15:21.385) Yep.
Martin Casado (15:25.838) And if it's good enough, it's fine, you know? They just don't really have an opinion on specifically what it is. And I do actually find a lot of the uses break down to this. I'm the same as you. I use AI all the time, but I very rarely have it produce stuff that I use, yeah, without modification. Almost every time it's like a brainstorming partner: it helps me structure thoughts, it provides critical feedback, but everything actually comes from me. This is everything from code to writing.
Auren Hoffman (15:30.378) Yeah, okay.
Auren Hoffman (15:48.97) Now, when I do an image, it works great. I just say, here's the prompt, and usually the first one I just use and it's fine. Maybe because I don't care as much about the image. Yeah.
Martin Casado (15:56.994) Because you're not an artist, right? And so you don't have the same aesthetic or the same taste. And I do find the same thing between junior and senior programmers. Senior programmers, when they use AI, they're like, okay, write this test for me, help refactor this thing. But they're kind of like... Right, right, but they're...
Auren Hoffman (16:10.166) Mm Yep. Yeah. Tests are you would think would be great for tests and stuff like that. And all those senior guys hate writing tests anyways.
Martin Casado (16:17.422) Yeah, right. So it's kind of what you'd like. But there's no way they're going to give up the system design. So I think the professionals will basically have the taste, and they control it. And if you're not a professional, then you can just use the AI for whatever it produces.
Auren Hoffman (16:22.409) Yeah, yep.
Auren Hoffman (16:30.806) There's been this open-source versus closed-source debate on the models. Where do you kind of come down on that?
Martin Casado (16:35.202) Yeah.
Martin Casado (16:40.172) I don't know. I don't think there are any parallels to traditional open source. I just don't think there are, right? Like, what does it mean to be open source for a model? Does that mean open weights? But then you don't have the data pipeline, so you can't modify it. And then even if it's not open source, you can still distill it by, you know, querying it, by asking a bunch of questions. So I just think it's sufficiently different that we've
Auren Hoffman (16:51.647) Yep.
Auren Hoffman (16:58.976) by asking a bunch of questions. Yeah.
Auren Hoffman (17:06.07) It's a good book because you don't have like the underlying code in any scenarios, right? So it's like, yeah.
Martin Casado (17:09.952) Yeah, so you don't have the underlying code in there, but it also doesn't really matter if you hide it behind an API; it turns out you can distill it anyways. It almost feels like it's just much less of an issue than we have seen in actual open-source code: you can't really protect yourself from being distilled, but you're not really giving away your data pipeline, which is the most important thing. I think that's kind of what it comes down to. So I just think the gap between, quote-unquote, open source and not is much, much closer in the AI wave. And so I think we've made it kind of a much ado about nothing. I think if
Auren Hoffman (17:15.648) Right.
Martin Casado (17:39.81) Companies want open source great if they don't, it's fine. And I just don't think that that's a, you know, there's a major difference.
Auren Hoffman (17:47.946) Until about a year ago, I don't think I would have ever invested in a SaaS company without super technical co-founders. And I'd say recently I've seen these SaaS companies that have built pretty cool things where the founders really weren't that technical. They were good product people, and they had really good taste, and they were able to make these products with the current tools. Is that
Martin Casado (17:55.598) Yeah.
Auren Hoffman (18:15.336) changing the way you guys are thinking about investing in people.
Martin Casado (18:19.202) Yeah, this is kind of a loaded question for me, because I historically tend not to think traditional vertical SaaS founders were technical. I've always found SaaS apps to be all CRUD. You know, you can hire a random undergrad to build it; they don't differentiate on technology, et cetera. It's the infra founders that are technical, and I don't think this changes that. And so for me, it's the same. But I have heard this conversation come up before in other contexts.
Auren Hoffman (18:28.884) Yeah, that's true. Maybe they weren't even that technical to begin with. Yeah, that's probably true. Yeah. Yeah.
Yeah.
Yeah, that's true. Yeah.
Martin Casado (18:47.886) But I really think, both before and now, you can have non-technical teams build pretty amazing products.
Auren Hoffman (18:53.736) Okay, yeah, that's fair. I do think that product person with taste, they were always important, but they're becoming even more important. And it's very hard to assess that person based on their resume and stuff, whereas a technical person we can much better assess based on their LinkedIn profile.
Martin Casado (19:08.11) Yeah.
Martin Casado (19:14.722) Yeah, for sure.
Yeah.
Yeah, for sure. For sure. But I also think that a lot of vertical SaaS comes down to business model too, right? I mean, if it's a product-led thing, then they have to be really good at brand and, you know, growth features in the product, et cetera. I agree, that's very tough to assess. But a lot of vertical SaaS is like, you know, here's this market, we're going to write some software for the market, and here's why we differentiate from a business model standpoint. I think that's kind of traditional investing. Yeah, that's right.
Auren Hoffman (19:43.294) Yeah, we'll get proprietary data or there are some sort of lock in or some other type of thing that that goes that goes in there.
Martin Casado (19:50.786) Yeah. But to your point, I just want to say: this to me is the paradox of this AI stuff. The more of an opinion and taste you have, the harder time you're going to have using it. Because these things are so unwieldy, right? They can't read your mind, and you can't actually describe taste very well in English. And so the more opinion you have on the output, the more disappointed you're going to be. So one of two things happens. One is that the professionals tend to use AI for the things that they're not professional at.
Auren Hoffman (20:08.96) Yeah.
Martin Casado (20:20.19) actually, Balaji just had a great tweet about this is basically it'll help you do the stuff that you're not good at. But like the stuff that you're good at. Yeah, exactly. Yeah. Yeah, exactly. So it's like, it's like, like, whatever you bring to the table is you bring taste, and you bring specificity. And that's your professional area. And then it'll help you with all the other stuff, right. And that's the way it could be. You know, or could be the case is that we just start caring less over time. It's like, it's like the AI just tells us what taste is, we're like, well, listen, I'm not going to have a huge opinion.
Auren Hoffman (20:24.884) Yeah. Like me with images. I was never good at that, and now I'm decent just 'cause I can write some prompts.
Auren Hoffman (20:35.06) Yep.
Auren Hoffman (20:47.529) Yeah.
Martin Casado (20:49.164) And I think that's kind of the big dilemma that we're all facing is how much should we just kind of be like, well, this is good enough.
Auren Hoffman (20:54.942) Yeah. The UI of the prompt is weird, though. It's kind of like, you know, there are not that many people that like using the terminal on their desktop or something, right? It's just an odd prompt to go through. And I imagine only a very certain type of personality is going to like the prompt. And it seems like all the UIs I've seen today from AI companies all start with the prompt.
But maybe only like 5% of people are really going to thrive with the prompt. Are you seeing other types of UIs that are coming out?
Martin Casado (21:33.784) mean, I think that you guys follow traditional modalities, right? If you're doing search, you want a text box, just like you'd have with Google. If you're doing chat, you want a chat box. If you're doing code, you want an IDE. If you're doing image, you want like a canvas, and you have that. If you want video, then you want an actual video workflow tool. If you want to like compose, you know, some complex, you know,
creative thing, you use something like Comfy. And so I actually think if you actually drill down into what UIs are actually used, they are converging on the traditional UIs. Like if you look at Ideogram, it starts to look like Canva. If you look at Runway, it starts to look like these video editing tools. And Comfy UI, they look like these kind of big 747 cockpit complicated workflows that you see in traditional VFX environments or creative environments.
Auren Hoffman (22:11.797) Yeah.
Martin Casado (22:26.614) I think, you know, one thing that we've noticed is if a company launches a model, you'll get a spike in interest, but then that interest will wane, and you get retention if people actually start using the product. And so you have these companies that start with a very simple interface to the model, like text-to-whatever the output is, but the companies that do well actually do build out these kinds of longer products. And so, I mean, I agree
the text-to-whatever is incredibly basic and not long-standing, but we're already seeing how that then becomes what we would consider pretty traditional interfaces.
Auren Hoffman (23:00.34) And are there, like...
Like, I mean, in some ways, the most traditional interface is: I'm emailing or calling or texting a coworker and asking them to do something with me, right? That's kind of the one we're all used to. Or I'm Slacking, you know... Martina and I are working together, and I'm going to Slack Martina at a16z, whatever, and send it over, like...
Martin Casado (23:11.342) Yeah.
Martin Casado (23:19.918) Yeah.
Martin Casado (23:24.982) Yeah. Yeah. No, listen, I think this has kind of been one of these bad-analogy spaces, where people say, we all should use audio, or we should all use text. But we have to remember that before we had computers, it was all audio. We decided to build different UIs. That was the standard, right? And...
Auren Hoffman (23:31.338) Like, can you imagine if that's just the interface or something?
Martin Casado (23:53.102) And before we had complex GUIs, we had just text. And so I think the evolution of the UI is not because we couldn't do other ones. It's because it's actually useful. So let's take this story I very commonly hear: well, this is all going to be audio. And to me, listen, audio is just not good for context, for example. I want to cross-check things and see it. It's not good for digital layout. I want to see, like, you know...
Auren Hoffman (24:16.842) Yep. Yep.
Martin Casado (24:18.656) It's not good for me to provide spatial input to the AI. Like, sometimes I want to provide spatial input to the AI. And so, no, I actually think the UI...
Auren Hoffman (24:27.328) And also, some people are more auditory, some people are more visual... like, everyone is different, right?
Martin Casado (24:30.69) Totally. Yeah. And like, listen, let's talk non-AI. We have audiobooks for that experience. We have textual books for that experience. We have UIs for that experience. And we use all of these things. It's not like one has ever replaced the other. And so I think with AI, it'll be the same. Sometimes it'll be audio. Sometimes it'll be text. And there's a reason we've had multiple modalities of UIs.
Auren Hoffman (24:54.694) You mentioned earlier you're coding in Cursor every night. Like, what are you doing? What are you writing?
Martin Casado (24:58.542) Honestly, like, my relaxing time is writing kind of retro video games with Cursor in JavaScript. Just for fun, that's all.
Auren Hoffman (25:06.486) okay. cool. Just like for fun. Okay. It's not like you're like building tools to make your investing better or something like that.
Martin Casado (25:13.262) No, no, no, it's honestly just for fun. I mean, but also, I will say that I do get a good sense of the toolchains by doing it, right? Like, you know, what's useful, what's not. I'm on the Discords, you know, of the other people that are using these things. Yeah. But also, I'm part of the communities, and the communities are talking, so I get a sense. I think it does help a lot, because I'm an infrastructure investor. Like, I invest in dev tools, I invest in, you know... So.
Auren Hoffman (25:23.882) Yeah, of course.
Auren Hoffman (25:29.6) You're like, I wish this worked better. You know, you're fighting bugs. Yeah.
Auren Hoffman (25:40.554) Yep. Yeah, yeah. You got to be using the tools. Otherwise you'll atrophy, I assume, right?
Martin Casado (25:43.214) So yeah, and to be part of the communities, because there is a zeitgeist, and the zeitgeist... you can't feel it from the outside. Like, you can't look at the website; you have to be part of the community. And so I do think there are some second-order effects. But the reason I do it is really just because... I mean, I've been in computers since the 90s, right? That's 30 years, right? And so it's just kind of a first love of mine.
Auren Hoffman (25:48.534) Mmm.
Auren Hoffman (26:05.896) And like, are there things that you use that you're using AI for as a venture capitalist to be a better venture capitalist?
Martin Casado (26:17.912) Well, let me just split apart that question. So I am not convinced these models are good at anything predictive. Does that make sense?
Auren Hoffman (26:24.458) Yep. Yeah, of course. Yeah, that I mean, because it's predicting the past. It's very good at predicting the past.
Martin Casado (26:29.902) Right. But I think a lot of times people are like, oh, these things will, you know, give you guidance on what to do in the future... predictive. My conclusion is: no. And so the way that I use AI in venture capital is almost totally on the productivity side of things, right? Like, when I'm learning these days... actually, maybe one of the biggest personal changes is I talk to Grok
Auren Hoffman (26:38.293) Hmm.
Martin Casado (26:53.802) a lot rather than reading books. So I'll have a book that I'm reading that's in like a space that I'm interested in learning for venture capital. And then so I'll read a chapter and I literally will spend you know, twice as much time just having a conversation with grok to understand it deeply like cross referencing it and
Auren Hoffman (27:01.003) Yeah.
Auren Hoffman (27:09.14) Why do you use like, since I do that with open AI, and so I'll go for a walk. And sometimes I'm in a conversation like this with like somebody like you who's much smarter than me. And I don't want to admit that I don't really know it. And then I'll go for a walk and I'll just like, like ask, like open AI, like, Hey, like, they were talking about this thing. And then I can kind of get it at my, like exactly at my level, like not below, not, not above, but why do you use grok? Like what's, is it, why is it better for that or.
Martin Casado (27:15.118) Yeah.
Go learn it. Yeah. Yeah.
Martin Casado (27:30.53) Yeah, totally.
These products are evolving and moving so fast that my opinion on a product two weeks later is totally... yeah. A hundred percent. And when I was looking at this... I'm a huge fan of OpenAI, I use ChatGPT all the time, but the audio mode I found was a little bit more clunky than Grok's. I've had a very fluid conversational experience. So what I like to do is read a chapter of something I find interesting, and then, like...
Auren Hoffman (27:41.268) Right. It could change in two weeks. Yeah. Yeah. By the time this podcast comes out... yeah. Yeah.
Auren Hoffman (27:55.562) Yeah.
Martin Casado (28:02.67) I like to basically use Grok for steelman arguments. Like, I don't like it for facts, because the facts could be wrong, but I love it for the almost Socratic steelman arguments. I'll say: so I read about this... what are the arguments against it? What are the arguments for it? What influenced this type of thing? You know, if I were to construct an argument for why this doesn't make sense, how would I do it? I mean, it's very good for steelmanning. I find that kind of just helps my cognitive understanding, my critical understanding, of any given space. And again, like, the product space
Auren Hoffman (28:08.48) Yeah.
Auren Hoffman (28:15.562) Yep.
Martin Casado (28:30.68) for that one week when I made the decision. was totally irrelevant now.
Auren Hoffman (28:32.768) Yeah.
I mean, obviously, I'm sure you have an AI note-taker on the call and stuff like that. But are you doing other types of things to help you assess? Or, you know, I can imagine, like, an AI founder personality check or something like that.
Martin Casado (28:51.592) No, I don't do anything predictive with AI. I mean, I code a lot with these models. I spend hours a day with these models, and, you know, it's very clear that they're giving you some stochastic smoothing of the training set, which is good for consensus-style things, but venture is very much a non-consensus, exception business. So nearly all of my use is productivity use. I mean, I use it
to, yeah, like, to take notes.
Auren Hoffman (29:24.406) I imagine one of the biggest time sinks, if I looked at your calendar, would be meeting with companies, taking the first meeting with companies and stuff like that. Like, could you imagine a Martin agent talking to the founder agent for the first call or something? Or...
Martin Casado (29:30.168) Yeah. Yeah.
Martin Casado (29:39.308) don't think so. I mean, not with the technology that we have right now, I do feel so let me maybe just say what kind of our basic investment process our basic impressive processes is we kind of carve up the market into different spaces, right and
We don't overthink the spaces too much. there's a few good founders in that space, we just assume it's a good space, right? Cause founders are smarter than VCs, certainly smarter than me, right? So we're like, okay, there's three founders in this space. It's probably a good space. Then our entire job is, to understand between those three companies, like which one is the best. And, and, and my experience in 10 years of investing, well over a hundred deals is the most important thing is kind of the first derivative, which is like,
How do the companies fare over knowing them for years or months? how over time do they execute? And it's just, you just need to have multiple interactions to do that. And so like maybe like when Ada is like super formal.
Auren Hoffman (30:40.591) is one way just looking at like the product velocity then and things like that.
Martin Casado (30:44.278) I think it's everything. I think it's market philosophy, it's product philosophy, it's technical understanding, it's understanding the markets. I mean, often when we talk to these founders, we actually understand the market better than they do, because we've done a ton of work on it. And so it's like, yeah, we've got a very senior team that's, you know, a bunch of experts. And so we just understand the market. And listen, they will unlock stuff we would never unlock, because we're not the founders, but I think we have a good sense of the market.
Auren Hoffman (30:55.594) Yeah, because you're all in on... you've met with everybody, yeah.
Yeah.
Martin Casado (31:10.242) just watching their basic understanding growing, their decision velocity, the product velocity. But it's like this of first derivative thing. And so I think without actually having exposure to multiple meetings where you're actually watching it, it's tough to do. No, again, maybe an AOIO will be able to do all of that at some point in time, but I don't think we're there yet.
Auren Hoffman (31:34.486) Are you using some of these tools in your personal life in a way that's interesting? Right, right. You're making games. Okay. And it could be about anything... it could be about what's going on, or tariffs, or whatever, you know? It's like...
Martin Casado (31:40.12) Well, like I told you, I talk to Grok every night.
Martin Casado (31:49.058) Yeah, yeah. Yeah, so I find... sometimes I kind of have this mixed feeling, because, I mean, I've just been such a lifelong lover of books, and I'm spending at least as much time chatting about ideas as I am, like...
Auren Hoffman (32:02.518) So what like, are you, how often are you reading a book like or in the past?
Martin Casado (32:05.966) Oh, I was have a book 100 % of the time I've got some book that I'm going through right like right now I'm going through a book which is basically economic thought from the Austrian school pre Adam Smith and so it literally goes through like the Greeks and it goes through like the Salamanca School of Libertarian 1300s. So I always
Auren Hoffman (32:10.133) Yep.
Auren Hoffman (32:20.496) OK. And what like what like that seems like a pretty academic book, like what made you want to read that particular book?
Martin Casado (32:30.258) Well, so you know, Malay and Argentina. So, I mean, he actually comes from this school, the Austrian School of Thought, like the contemporary hotbed of that is in Spain, right? There's actually these Spanish economists. I think they call like the Madrid Libertarians that really influenced him. Actually, a lot of his talks, he's kind of mentioned them. And they've done this massive body of work and they were influenced by, I think the third oldest university, which was in Salamanca, Spain.
Auren Hoffman (32:33.237) Yeah.
Martin Casado (32:58.328) which had a bunch of Jesuits that were talking about free market stuff. So there's this whole lineage of the European economic thought that kind of ended up in Argentina. And of course, it's influenced like what's going on today and things like Doge. But no, no, no, this is not Von Mises, not the Hayek. This is like, this is like, this is the funny thing is like people like very, yes.
Auren Hoffman (33:09.258) Yeah, it's not just Hayek, but it's like all the way through. Yeah.
Just, like, these Spanish libertarians in Salamanca that I've never heard of. Okay.
Martin Casado (33:20.622) There's very little understanding of it, because a lot of the work was never translated. And it really goes back to the 1500s. And so I've just been very interested in that line of thought because, A, it's so relevant today, not just in Argentina, but here. But then what's interesting is, I'll literally go to Grok and be like, okay, we know about the Austrian school and von Mises and Hayek. How were they influenced by Salamanca? And then it goes through the arguments. And, like, well, today you've got kind of...
Auren Hoffman (33:36.991) Yeah.
Auren Hoffman (33:49.75) Oh, that's cool. So the book almost helps you figure out what questions to ask, because you wouldn't even know to ask that question unless you started it. OK. Yeah. But then instead of reading like this, I imagine this book is like super fat and like, you know, really hard to go through, like, you know, I spend all these times going through these books, too. And at the end, like, I'm like, I'm not sure if it was a good use of my time. I'm sure there's like a summary of that book somewhere.
Martin Casado (33:49.998) This guy what you know, who's in Madrid? Exactly, exactly, Exactly, exactly then I can say like well, how did this influence Adam Smith like, you know, this is going on in Argentina. Can you make it art?
Yeah.
Auren Hoffman (34:16.598) like, or a wikipedia article that maybe you could have started with instead or.
Martin Casado (34:20.263) Well, this is a pretty the book is pretty exciting. It's a kind of a big, thick textbook, but it's actually pretty accessible. But what I do is I literally scan, I underline kind of core things and then I have a conversation with Grock.
Auren Hoffman (34:30.996) And do you take a picture of it or do you just like talk it in or something? Okay.
Martin Casado (34:33.742) I just talk it in. just, you know, like, you know, I'll just sit and I'll just kind of read and then I just talk it in and have a conversation. But, um, you know, there's just, there's just so much depth that these models have, you know, so one thing that we've never been able to, like, I've always been interested. We never been able to do is, is, is like, what is the genealogy of human thought as it's written down? And I know the only way to actually do that is you actually study one thing for your entire life and you can kind of piece it together. But these models have read every single book.
Auren Hoffman (34:59.294) Yep. Yeah, totally.
Martin Casado (34:59.47) Right. And so I'm so interested in like basically the evolution of human thought through these books And so any given book you can just ask that question who influenced this what did this influence, you know, etc and so I think it kind of helps you, know in almost the almost said the Hegelian dialectic, you know or the The Fukuyama end of history sense like you can actually see thought play out with these models. I've personally found super interesting. It's become almost like my
Auren Hoffman (35:07.776) Yeah.
Auren Hoffman (35:21.344) Yep.
Martin Casado (35:28.27) instead of watching a sitcom is what I do.
Auren Hoffman (35:31.065) And what, what are the things I found on these, these voice things, whether it be Grok or open AI or whatever, is that it's very good for like a human and then the AI agent, like having a conversation. If you have two humans and we're in, we're sitting at a cafe and then we want to like have the AI as like the third participant. It doesn't know like when to butt in. it like kind of doesn't really make sense. It's not really good at part of the
Martin Casado (35:51.266) Yeah.
Martin Casado (35:56.962) Right, yeah.
Auren Hoffman (35:58.464) conversation. Have you seen anything where like that starts to work?
Martin Casado (36:02.35) So not on audio, but what I've found remarkable on text, especially for the younger generation. So I have a son, 14 years old, speaks to AIs a lot, my character.ai, cetera. What I find very interesting is my son will actually bring the AIs to group chats with friends. And so it's kind of entering the social fabric.
Auren Hoffman (36:22.87) so he's having a group chat and in the group chat is an AI. that's freaking cool. Okay. Yeah.
Martin Casado (36:27.022) And they're right. So it's like it's super cool, right? But so it's actually entered the social fabric But you actually have tokenization and identity in text and so they're actually very good at that and they understand who's there and Yeah, these are like little these are long-standing friends like, you know, it's very interesting I think for us, you know, we're like, you know
Auren Hoffman (36:37.686) And it can have its own personality or something or okay.
Auren Hoffman (36:45.045) Yeah.
I mean, at least in the group chats I'm in, I don't think anyone's invited an AI. I certainly haven't in the group chats I run. But maybe I should. And it could, like, point things out, or share interesting articles about something, or, you know... it could be almost the glue that keeps it going.
Martin Casado (37:05.205) 100 % so I think this is the internet all over again because I remember in the early days of the internet when we're like talking to strangers on the internet Why would you ever do that? Like, you know random folks on these chat and that that was actually the right model going forward So you just kind of watch what the kids were doing in the 90s and like that's what that's what ended up being the Standard and I do think this, know, bring AI for wherever you want, right? Wherever you are bring your AI I think like that that is a very real movement and the kids are doing it now already
Auren Hoffman (37:12.916) Yeah.
Auren Hoffman (37:28.128) Yeah.
Martin Casado (37:33.102) And I do think I do think it enhances it, right? I mean, like, a, does all the record keeping, you know, but also like it does contribute and it does have memory, et cetera. And I think to myself, I mean, it does feel like the consumption layer to the internet is now evolving. Like if my son is using an AI, when my son is 18, are they going to go to like Wells Fargo.com? Like, no, right. They're going to like, you know, bank through the AI. I mean, I do think that this is kind of the new consumption layer. And so there's probably not an area where we use computers where we're not going to have them.
Auren Hoffman (37:37.172) Yeah.
Martin Casado (38:00.792) I don't see any reason why they wouldn't be personalized, have their own identities, et cetera. So there is a very big shift on how we're using computers as a result.
Auren Hoffman (38:09.238) What are some other things you're like, you know, in the predict the future, but like not too far around? Like what are some of the other things you're like super excited about?
Martin Casado (38:23.614) so I do think, listen, I do think we're getting to the point where, so for me, I kind of like split, I split the AI world into two. There's like the diffusion space where you're creating creative stuff, like images and videos and whatever. And I'm very, very excited about the ability to create like real 3d, immersive experiences from a text prompt.
You know, I feel like we're almost going through the old video game history, where first it was text, and then it was 2D, and then it was two-and-a-half-D, then it was 3D, and then it was, like, great dynamics. And I feel we're going through this again. I'm like, I cannot wait until I can go home and be like: put me in.
Auren Hoffman (39:00.01) Mm-hmm.
Martin Casado (39:06.772) Spirited away is my prompt and I put on the VR glasses and boom, know It's like the music and I can walk around it's the best right and like, you know So I do think we're heading towards like fully, know immersive creative and like every part of that is a company today like this companies in 3d objects and this company doing the stories and whatever so like I I think for me that's the most interesting I do think that this is not a replacement for video games. It's something else, right? I think that it's gonna be a new form of entertainment and then
Auren Hoffman (39:08.042) Yeah, yeah.
Yeah, I do love that movie.
Auren Hoffman (39:31.52) Yeah.
Martin Casado (39:35.008) I think we're just starting to understand how these, language models on the other side are impacting the way that we think and that we can explore like human thought. mean, these things have read everything. they've witnessed everything and I, and I'm just like, like I mentioned about like talking about the genealogy of thought. really think that.
You know, we're in the 1.0 version of being able to use these systems. And as we get kind of more comfortable with their power, we're going to learn a lot about ourselves and a lot about how humans have thought over the ages. And so again, this is just kind of almost historical, but for me it's a deep personal interest.
Auren Hoffman (40:07.358) What about the idea of like agents? Like I've been using like open AI's operator, but I mean, it's just not really working for me. It's not really working well. And I know perplexity is coming out with a new browser, which I'm super excited about, but I haven't had a chance to really use yet. are you, do you think these things are actually going to happen in 2025 in a way, or do you think it's just going to be like longer term?
Martin Casado (40:15.65) Yeah.
Martin Casado (40:26.67) Yeah.
Yeah, so this is where I may just be blinkered and wrong, but I don't see a lot of evidence that we can close the control loop on these things. By close the control loop, I mean you kind of set them off to do their own thing with their own agency, and it spits out something, and comes back and takes it back in as input, and it can kind of self-motivate.
Auren Hoffman (40:46.292) Yep. mean, like, they'll have like, like how we AI like allows you like schedule something with people and they'll put it on your calendar or move some things or add some, you know, so it's like a very, very like, a closed system that it does.
Martin Casado (40:59.938) Yeah, I think you can do a set of tasks, okay, and maybe some like basic control loop stuff, but I mean, even like the RLHF that we do is based on human feedback, which would be at odds with like a convert control loop. so, I mean, listen, I would be perfectly happy if the future was these are our AIs, they require human beings, you know, we train them on actually human feedback. So that's kind of what they're for. And like, they're kind of our...
Auren Hoffman (41:23.968) Yep.
Martin Casado (41:26.35) you we say the word co-pilot, but they really are, you know, like our closest and most intimate tool for solving the mysteries of the universe, but they're kind of right next to us. And I think that that's the direction it seems to go. There's a lot of talk about like, okay, these things can have independent agency and whatever, but there's just so few data points for a space that's moving so fast. That just much be a much more difficult problem. And it's not something I actually spent a lot of time thinking about just because they knew so much.
to gain from them being attached to humans who, you know, the humans do have the opinions and they do have the needs and they do have, you know, the directions. I just feel like it's almost like, you know, it's a very, it's very compelling view. I just don't know if, you know, it's even pragmatic in the sense that, you know, that we want these AIs to fill our lives.
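The "closing the control loop" idea above can be sketched in a few lines. This is a toy illustration, not anything from the conversation: `stub_model` is an invented stand-in for a real LLM call, and the point is just the shape of the loop, where the agent's own output becomes its next input with no human step in between.

```python
# Toy sketch of a "closed control loop" agent: the output of each
# step is fed straight back in as the next input, with no human in
# the loop. The model here is a stub, not a real LLM.

def stub_model(prompt: str) -> str:
    """Stand-in for an LLM call: appends one refinement step."""
    return prompt + " -> refined"

def closed_loop(goal: str, steps: int) -> str:
    """Run the agent on its own outputs for a fixed number of steps.

    A real agent would also have to judge its own progress and decide
    when to stop -- the hard part, since there is no human feedback
    inside the loop to correct drift.
    """
    state = goal
    for _ in range(steps):
        state = stub_model(state)  # output becomes the next input
    return state

print(closed_loop("plan trip", 3))
# plan trip -> refined -> refined -> refined
```

The fixed `steps` count is doing the work that, in Casado's framing, we don't yet know how to automate: deciding when the loop has produced something good enough to stop.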
Auren Hoffman (42:17.226) you know, I was talking to Tyler Cowen recently and he mentioned he's trying to write for AI, and like actually like have the AI like understand it so he could be more influential in some ways to future generations. So he's thinking about things like, well, he doesn't want to write behind a paywall because maybe the AI won't get access to it. He's trying to write it in a way that like they, might understand, like it's easy to, to crawl and stuff like that. Like
I do think that's even like legitimate or how do you think about those types of things?
Martin Casado (42:47.446) I just don't know. I just don't know how that's any different than the internet has always been, which is like, what, if you want Google to like crawl and index you, then it's gotta be public and you should go ahead and do that. you know, I, you know, I do think understanding how these LLMs work is pretty important. Like understanding, you know, like for me, for example, like what would be much more compelling, than like having the stuff out there is providing instructions to the AI.
if it's like crawling the site or somebody's using AI. So for example, in the case of cursor, there's something called a .cursor file. Actually, I saw very interesting thing. I'm actually seeing developers now, they'll actually talk to the AIs in the comments of the code. And so let's just assume an AI is reading it. So in the code, it'd be like, if you're an AI, please don't index this thing, whatever. And so I think that the right
Auren Hoffman (43:32.982) that's cool.
Martin Casado (43:34.798) Yeah. So Charlie Marsh, who's like, you know, this great, great Python developer, like one of the greats and you know, he actually will write in his comments now, assuming that, you know, like an AI is looking at, it's not, it's not like crawling it. It's just, you know, a lot of the stuff is in context, which meaning, you know, the LLM is getting kind of, you know, given this at the real time. And so I think you can talk to the, the airs and we should all be doing that. think if we're creating content, you know, I think it's good to put in stuff to assuming that it's going to be in content for an AI and you can just tell them, say, if you're an AI reading this.
you know, do these things. So we're actually already seeing that happen. I think that's great. But I think it's very different than just kind of trying to be part of the training corpus. I actually think that's probably the wrong way to think about it. think being in context and talking to the AI is much more important than being part of the training corpus.
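What "talking to the AI in the comments" looks like in practice might be something like the following. This is a hypothetical sketch: the function and every comment are invented for illustration, not taken from Charlie Marsh's actual code.

```python
# Hypothetical example of comments addressed to whatever AI assistant
# has this file in context, rather than to the human reader.

# AI: this module deliberately sticks to the standard library.
# If asked to extend it, do not suggest adding dependencies.

def normalize_scores(scores: list[float]) -> list[float]:
    """Scale a list of scores so they sum to 1.0.

    AI: the empty-list early return below is intentional; do not
    "simplify" it away, since sum([]) == 0 would divide by zero.
    """
    if not scores:
        return []
    total = sum(scores)
    return [s / total for s in scores]

print(normalize_scores([1.0, 1.0, 2.0]))  # [0.25, 0.25, 0.5]
```

The comments cost nothing for human readers but, when the file lands in an assistant's context window, they act as standing instructions, which is the "in context, not in the training corpus" distinction Casado is drawing.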
Auren Hoffman (44:19.742) And like, how does she like just data pipelining involving and things like that?
Martin Casado (44:26.542) Well, so I mean here's my belief and listen it could be totally wrong my belief is like we've basically exhausted all human created data, you know and within a factor of two and that's gone right and so what dictates How good a model is is almost regulatory at this point like
Auren Hoffman (44:45.207) And you really think that within a factor of two to five, like that's true or? Because I would say maybe two orders of magnitude, but you're saying literally two to five. OK.
Martin Casado (44:49.688) Yeah, yeah, I really do.
Martin Casado (44:54.924) With the factors I think we've consumed.
Auren Hoffman (44:57.248) Cause there's so much stuff that's just still, I don't know, are there or, you know, we're having this conversation now. This will all be in the AI, but it wasn't beforehand.
Martin Casado (45:03.743) Well, so, I made a couple of comments that I think that people just, just, think it's important to understand that like data sublimates, right? And so if I write a book, even if like AI didn't read that book, somebody else did and they wrote a book. I mean, it's, always sublimated. So this idea of like proprietary data, like you can see the marginal cost of data pretty well in like how these. And how these models have just all asymptoted on, on the pre-training. And so, I mean,
Auren Hoffman (45:15.442) Yeah, yeah, somebody else is writing something about it. Yeah. Yeah. Yeah.
Auren Hoffman (45:25.717) Right, right.
Martin Casado (45:30.99) You know, listen, Ilia Famous gave a talk. It's also just obvious in the results that like, there's just so much signal in all of the data. And like, what is this marginal conversation you and I are having? I'm sure it's like, you can piece it together.
Auren Hoffman (45:41.11) Yeah. Well, maybe, maybe there's something specific, like maybe it's like not that great at coding COBOL until it really does see a bunch of COBOL examples, but it could be like good at coding in Python and not COBOL or something. Yeah. And then once it needs to see enough to go do that or I, yeah.
Martin Casado (45:57.272) For sure. just think right now it's not about new data. I think all the data has kind of been consumed. think it's more, I think that the number one thing is regulatory. mean, the sad reality is the Chinese models, like they train on LibGen, they train on every movie. just like copyright doesn't matter. And even more important than that. So that's one thing. even more important than that is like data labeling still really matters. And if you want to label, you actually want experts labeling, right? Like, so for example,
Auren Hoffman (46:02.74) Okay. Yep.
Auren Hoffman (46:14.644) Yep. Yep.
Martin Casado (46:26.294) If I'm to do texts to like some beautiful video, if I'm a professional, I want to like use film words. Like I don't even know what they are. Like, you know, doing like whatever, like auteur way, like, you know, so I'm like, like suspense and this and that, like whatever. And like, I'm to talk about like other famous filmmakers and other famous scenes. So you can't, you need a human that knows that stuff.
Auren Hoffman (46:29.6) Yeah.
Auren Hoffman (46:35.326) Yeah, yeah, yeah.
Yeah, totally.
Martin Casado (46:49.602) To actually do the labeling right you can't have some random, you know person do that and so in you know in China for example They will have like film students do this labeling is dense labeling and you're really having highly educated people which will make these more useful models and so I think
Auren Hoffman (46:50.805) Yeah.
Auren Hoffman (46:58.134) Yeah.
Auren Hoffman (47:03.508) And you have US companies like Turing and Scale, do that as well, like have like these like PhDs labeling the data now and stuff like that. Yeah.
Martin Casado (47:10.102) Right, so I almost feel like the data differential, yeah, exactly, comes down to exactly this question, which is, A, what are the copyright rules in the political regime you're working with? Right, in the US, this is a big battleground. mean, listen, kudos to OpenAI for standing out and saying like, listen, we need to take this very seriously. I mean, this should all be fair use stuff, which I think is absolutely right. Or we get behind China. I think that will dictate this. And then the second one is this like,
Auren Hoffman (47:35.99) And by the way, you do think we should just allow it all in the models, movies, books, whatever, just get it all in the models?
Martin Casado (47:47.406) I think producing copyrighted stuff is very different than training on copyrighted stuff. And I think it's always been the case in the history of the internet.
Auren Hoffman (47:51.828) Yep, I 100% agree with you, but I would say at least 70-80% of the people I talk to disagree with you and me. Yeah.
Martin Casado (48:01.058) Totally, totally, yeah, but just so I can be very incisive: I don't think the models should produce copyrighted stuff, and I think we can protect against that. That's something we can tackle. But you should be able to train on anything. That's been the history of computer science, by the way. This has just never been a question. It's the history—
Auren Hoffman (48:08.724) Yeah, but you should be able to train on the Taylor Swift songs. Yep. Yep.
Yeah. And by the way, it's the history of all creatives. All creatives listened to the Beatles and listened to Elvis. That's how they were able to create. No one created great music without listening to the Beatles or Michael Jackson or something, right? Yeah.
Martin Casado (48:21.102) —of humans. That's right. Yeah.
Nothing is created in a vacuum, of course, right? This is the whole education system; we basically just learn from the people before us. I totally agree. But I do think the regulatory regime will dictate data, and then your access to cheap, educated people, those are the high-order impacts, and it's less about proprietary data and a lot of the things people talk about. I mean, the marginal value
Auren Hoffman (48:34.453) Yeah, yep.
Martin Casado (48:55.854) of data has just gone so far down because we've exhausted it.
Auren Hoffman (49:00.128) One thing we did was a test on data labeling. We had, let's say, relatively inexpensive people from outside the U.S. doing data labeling on a bunch of our data. When we first ran the test, we found that the people were beating the AI at the particular tasks we needed. But what we realized is that we were only testing for a few minutes.
Martin Casado (49:25.187) Mm-hmm.
Auren Hoffman (49:30.482) Once we ran it for 10 minutes or more, the AI was beating the humans, because some of these tasks are really boring. Maybe we could have gamified it better to keep the humans engaged, but they were degrading quite a bit, and by an hour in, they were terrible. The AI was obviously consistent throughout. So even just knowing when to...
Martin Casado (49:38.808) Yeah.
Martin Casado (49:48.046) Yeah
Martin Casado (49:52.237) Yeah.
Auren Hoffman (49:56.918) ...knowing when. Obviously humans are moodier and have other things going on, so sometimes the highs can be better, but the lows can be worse.
Martin Casado (50:01.314) Okay.
That's a super interesting observation. I do think the game is: how do we get highly educated, high-knowledge data from human beings? I think that's the goal. You want whatever schooling they went through to show up in the model, right? And whatever
Auren Hoffman (50:22.347) Mm-hmm.
Martin Casado (50:23.47) knowledge to show up in the model, whatever synthesis. Because I think of these models almost as reasoning caches: a human being will reason over something and write it down, and that's what's getting cached and reused. I think that's the game. Of course, that means you have to get to educated people, you have to keep them interested, and it's a really tough thing to do. It's very tough to do quality control on these things, right? Because who is going to double-check some
Auren Hoffman (50:45.494) Holy.
Martin Casado (50:48.972) random PhD in Russian studies on some obscure point of history? It's a tough thing to do. But I do think that's the game. I don't think the game is to get an LLM to do it, because that's very Ouroboros to me. I think the top of the knowledge chain comes from humans, and from there you can automate a lot of stuff, but that's the start.
Auren Hoffman (50:53.398) Yeah.
Auren Hoffman (51:11.892) And you mentioned China a few times. There's a competition between the U.S. and China. How should we be thinking about that innovation balance versus national security balance? If you were advising the AI czar, and some of those folks came from a16z, how do you think about that?
Martin Casado (51:20.728) Yeah.
Martin Casado (51:36.046) I mean, I view it as very similar to what we were dealing with with critical infrastructure in the early 2000s, right? It's very well known that Huawei came in and stole Cisco's IP and replicated it. And what did we do? We said, listen, you can't sell it in the United States. We put on import restrictions; you certainly can't—
Auren Hoffman (51:59.52) But we didn't do that because they stole it. We did it because we thought they were listening.
Martin Casado (52:06.51) Yeah, for sure, that's a great point. So, A, maybe there was some competitive stuff going on, but yeah, we said, we don't feel good about this being in our critical infrastructure, so we're going to do import controls. But also it was very clear that China had an interest in getting access to our core technology that it could use against us. If you're able to get the IP from, like,
Auren Hoffman (52:08.448) Right?
Martin Casado (52:32.876) Cisco switches, then you may be able to remotely exploit those things. And so we wanted to take a pretty draconian posture protecting our critical infrastructure. And I don't see this as very different, right? There's a long history of China doing these sorts of things. Why would we not have restrictions on what we use for our critical infrastructure? Should you use DeepSeek deep in the bowels of the intelligence agencies? Probably not, right? Just given the history. So I kind of come down on the import side.
Auren Hoffman (53:05.938) Even if we know DeepSeek is not going to be reporting back? Obviously if you call an API, that's one thing, but if you had it on an AWS server and you knew it wasn't phoning home, what's the problem there?
Martin Casado (53:21.058) These things are just so hard to inspect. And maybe there's not much they could do, because you're checking everything the model produces and it can't listen or whatever, but they're just so hard to inspect. You just don't know whether they've been adulterated. And again, I don't have a specific attack in mind; that's just my high-level position. So the knee-jerk reaction the United States took, which is, let's not give them our compute, let's not give them our—
Auren Hoffman (53:25.246) Right, so you just would never feel confident.
Martin Casado (53:47.95) —our IP, which to me is ridiculous. They clearly have it anyway. That's right. And they're super smart; it's not like they don't know what they're doing. So I go back to 20 years of history with this stuff: we actually did it on the import side, we kept it out of our critical infrastructure, and I think that's the right posture to take. And I think more and more of that conversation is actually starting to—
Auren Hoffman (53:50.646) Yeah, yeah. And then it's like, we'll sell to Singapore, we'll sell to Taiwan. Well, you can just take a suitcase and walk it over. Yeah.
Auren Hoffman (54:09.685) Yep.
Martin Casado (54:15.992) —take that form with the new administration, as opposed to somehow trying to keep our models out. I mean, for me, it's great if US models are used everywhere, right? That's what we want. We don't want to restrict them. We want everybody to use US models, and we want to restrict the ones we don't trust from running locally.
Auren Hoffman (54:34.578) I don't want to be political here, but I've actually been pretty impressed with both the people and the policies around AI in the new administration. First of all, there are people we know are technical; the ones before, I'd never even heard of. And obviously they're not going to get everything right, but I think they're well-meaning and trying to get it right. And it seems like the—
Martin Casado (54:50.296) Yeah. Yeah.
Martin Casado (54:59.854) Yeah. Yeah.
Auren Hoffman (55:04.31) —policies, at least so far, show a very deep knowledge of what's going on in the environment. I don't know if you have a similar opinion.
Martin Casado (55:15.074) No, I agree. Listen, not everything is going the way I would pencil out, but I just feel like the decisions are very well reasoned. Before, there were a few people with outsized impact, like Eric Schmidt, and he would go draft up an executive order, and we're like, this is stupid, and he's like, yeah, well, we knew it was stupid, but we had to do something. Right? I just felt like there was this kind of headless,
Auren Hoffman (55:19.286) Of course, yeah.
Auren Hoffman (55:34.826) Right, right.
Martin Casado (55:42.318) you know, let's create policy for the sake of creating policy, you know, for political reasons. think now to your point, and I think it's exactly right that these are very reasons that I don't always agree with them, but like, understand the logic that led to them. And we're not kind of fighting some, I feel like before it was almost like, I feel like Nick Bostrom basically scared a bunch of people in 2014. And there was this kind of.
Auren Hoffman (56:05.024) Yeah.
Auren Hoffman (56:09.002) Scared me too, by the way.
Martin Casado (56:11.074) I think there was this platonic ideal of superintelligence, which happened before ChatGPT. I mean, all of this stuff was before. Yeah, that's right. And then we had these LLMs, and somehow those two things got conflated. So we were writing policies for Bostrom's platonic superintelligence when we had these very real systems that we really understand; we were conflating the two. I think now we understand these systems a lot better.
Auren Hoffman (56:16.724) Right, right, right. Paperclip maximizer.
Martin Casado (56:39.458) We've launched a number of them with no ill effects. We understand the geopolitical environment better. And thank goodness, we are using that information now to make sensible policies. Yep.
Auren Hoffman (56:51.766) Alright, two more questions we ask all of our guests, personal questions. One is, what is a conspiracy theory that you believe?
Martin Casado (56:58.19) Yeah, you know, I saw this question and I wish I had a good answer for it.
Auren Hoffman (57:00.278) You
Auren Hoffman (57:05.398) Is there any VC conspiracy or anything like the VCs together?
Martin Casado (57:11.523) Yeah, what's a good one? Can you give me some examples of conspiracies? I just feel like maybe I'm not a very conspiratorial person.
Auren Hoffman (57:15.85) I mean, maybe it's a bad question. Maybe I need to ask a better question, because often people go to the more well-known conspiracies and stuff like that. But I really want... yeah. Okay.
Martin Casado (57:27.896) So let me tell you what I do think. I do not think that we're in control of our faculties. I believe that. I'm very much of the Haidt school: there's something that happens inside us, the elephant, yes, I very strongly believe in that. The elephant guides things, and then we create this post-facto narrative. And it's the Napoleon lucky-general thing, right? So is he...
Auren Hoffman (57:37.182) The elephant, the rider and the elephant.
Auren Hoffman (57:50.068) Yeah. Yeah.
Martin Casado (57:53.618) Is he a good general, or is he lucky? I actually think that's the right way to view anything. We have this elephant, and the elephant makes micro-decisions all of the time, and that creates the long-term trajectory of anybody. All of our rhetoric and all of our stories and narratives are so loosely coupled to that, which to me has been very eye-opening. The more I think about it, the more I'm like, listen, we're not in real control.
Auren Hoffman (57:54.198) Yeah.
Auren Hoffman (58:16.256) Yeah.
Martin Casado (58:21.87) And the best thing to do is try to be as open-minded about any new thing as you can, and as self-aware as possible, because without that it's very hard to navigate. So that's not a conspiracy theory, but I do think that's—
Auren Hoffman (58:37.174) Yeah, that's interesting, because in some ways we're a series of these very tiny decisions we're making all the time, and they add up. Did you ever see the movie Sliding Doors? I love that movie.
Martin Casado (58:44.451) Yes.
Yes, yes, exactly. There are these small micro-decisions we make all the time, and we're not aware of them. This is the thing, right? There are some people I know who, for any given decision, have a good reason, whatever, but it's always a disaster. There are other people I know very well who always seem to survive, and we call that luck, but I do feel like deep down there's something in us that just has the right intuition for doing things. And I think that is a very, very...
Auren Hoffman (58:54.581) Yeah.
Auren Hoffman (59:08.875) Yeah.
Martin Casado (59:16.263) ...useful lens, both for reflecting on yourself and your own decisions, and also when you're interacting with other people.
Auren Hoffman (59:23.03) Alright, last question we ask all of our guests. What conventional wisdom or advice do you think is generally bad advice?
Martin Casado (59:28.268) So right now, I think most of the conventional advice in the boardroom is wrong. I think with these supercycles... I think right now, with—
Auren Hoffman (59:34.474) In the boardroom, like what VCs are telling founders, essentially?
Martin Casado (59:40.034) with senior executives, anybody that's been through the last 30 years of software, I think we're in a very different era.
Auren Hoffman (59:46.326) Oh, so what are some of the things people are saying that are now wrong?
Martin Casado (59:50.998) Well, whenever you have these supercycles like the internet or the PC, you tend to end up with these very iconic young founders, right? Like Steve Jobs and Bill Gates and Mark Zuckerberg. And I think the reason is not that young people are smarter or anything. I literally just think they don't have the presuppositions of having been through the old wave. Yeah. So they just do the native version of the thing. And we're seeing that; I mean, the Cursor founders are notoriously pretty young. And again, it's just like—
Auren Hoffman (59:59.819) Yep.
Auren Hoffman (01:00:10.132) Yeah, they just adapt to it faster.
Yeah.
Martin Casado (01:00:20.886) I do feel like there's so much baggage from the old stuff. So I don't think this is necessarily the time to hire the traditional enterprise exec. I don't think it's the time to use traditional enterprise go-to-market. I don't think it's the time to build systems or staff them up the way we did. I think you've got to throw all of that out.
Auren Hoffman (01:00:29.291) Yep.
Auren Hoffman (01:00:37.206) So a lot of the hiring decisions, fundraising decisions.
Martin Casado (01:00:40.91) Organizational growth decisions, fundraising decisions, all of these things, they're so different. Even architectural stuff: I've relied on my systems intuition for 20 years on how to build a system, and all of that's changed now. These LLMs are different; they're kind of end-to-end, and we're totally—
Auren Hoffman (01:00:49.429) Yep.
Auren Hoffman (01:00:56.918) Totally. Yeah. Even how you do your org. I was talking to Jonathan Ross at Groq, the other Groq, and he was saying he has a whole new engineering org that's just prompt engineers. It's just crazy. Yeah.
Martin Casado (01:01:05.516) Yeah.
Martin Casado (01:01:11.084) Yeah, totally. So I sit on like 24 boards, I sit on a bunch of boards, and I see a lot of boards. And let me tell you, that is nothing compared to running a global organization. You know, a $600 million run rate is absolutely nothing. Yeah, for sure.
Auren Hoffman (01:01:19.51) You're on 24 boards? Oh my gosh.
Auren Hoffman (01:01:31.31) Man, I wish I'd known that; I would have asked you like 20 questions, but I'm almost out of time. Next time we talk I'll ask you all these questions about that. You need the AI to be in the boardrooms.
Martin Casado (01:01:40.526) I'm telling you, compared to running a big global organization... but I see all these advisors and all these kind of famous people, and they're all giving you all this advice, and none of it obviously applies anymore. So, man, for anybody listening to this, just—
Auren Hoffman (01:01:45.046) Ugh.
Auren Hoffman (01:01:55.752) Yeah, yes, like, let's get product-market fit. Let's just get product-market fit.
Martin Casado (01:02:01.198) Just defer to the founders. These days, if somebody has figured something out, I just defer to them. They come to me like, all this is working, what should I do? And I'm like, dude, we don't know, just keep doing it, man. You're training your neural network in a way that we haven't. So these days I'm like, man, if it's working, just support the founders the best you can.
Auren Hoffman (01:02:07.263) Yeah.
Auren Hoffman (01:02:11.136) just do more. Yeah.
Auren Hoffman (01:02:21.174) This has been amazing. Thank you, Martin Casado, for joining us on World of DaaS. I follow you, by the way, at @martin_casado on X, and I definitely encourage our listeners to engage with you there. This has been a ton of fun.
Martin Casado (01:02:23.148) I love it.
So.
That's right. Yeah. Great. Thank you so much.