The Twenty Minute VC (20VC): Venture Capital | Startup Funding | The Pitch
20VC: Bret Taylor: The AI Bubble and What Happens Now | How the Cost of Chips and Models Will Change in AI | Will Companies Build Their Own Software | Why Pre-Training is for Morons | Leadership Lessons from Mark Zuckerberg
Wed, 02 Oct 2024
Bret Taylor is CEO and Co-Founder of Sierra, a conversational AI platform for businesses. Previously, he served as Co-CEO of Salesforce. Prior to Salesforce, Bret founded Quip and was CTO of Facebook. He started his career at Google, where he co-created Google Maps. Bret serves on the board of OpenAI.

In Today's Discussion with Bret Taylor:

1. The Biggest Misconceptions About AI Today: Does Bret believe we are in an AI bubble or not? Why does Bret believe it is BS that companies will all use AI to build their own software? What does no one realise about the cost of compute today in a world of AI?

2. Foundation Models: The Fastest Depreciating Asset in History? As a board member of OpenAI, does Bret agree that foundation models are the fastest depreciating asset in history? Will every application be subsumed by foundation models? What will be standalone? How does Bret think about the price dumping we are seeing in the foundation model landscape? Does Bret believe we will continue to see small foundation model companies (Character, Adept, Inflection) be acquired by larger incumbents?

3. The Biggest Opportunity in AI Today: The Death of the Phone + Website: What does Bret believe are the biggest opportunities in the application layer of AI today? Why does Bret put forward the case that we will continue to see the role of the phone reduce in consumer lives? How does AI make that happen? What does Bret mean when he says we are moving from a world of software rules to guardrails? What does AI mean for the future of websites? How does Bret expect consumers to interact with their favourite brands in 10 years?

4. Bret Taylor: Ask Me Anything: Zuck, Leadership, Fundraising: Bret has worked with Zuck, Tobi @ Shopify, Marc Benioff and more; what are his biggest lessons from each of them on great leadership? How did Bret come to choose Peter @ Benchmark to lead his first round? What advice does Bret have for other VCs on how to be a great VC? Bret is on the board of OpenAI; what have been his biggest lessons from OpenAI on what it takes to be a great board member?
I think we are in a bubble. I am inherently skeptical of companies doing pre-training. Unless you are an AGI research lab, doing pre-training on a model, I believe, is just burning capital. Software is like a lawn. It needs to be tended to. It's not like you write software once and it just works forever.
I think almost every company that has chosen to build their own in an area of their business where there is a software-as-a-service solution available has regretted it.
This is 20VC with me, Harry Stebbings, and I'm so excited to welcome one of the true OGs of Silicon Valley to the show today. This is the most impressive CV in Silicon Valley. He co-created Google Maps. He was the CTO of Facebook. He founded Quip and sold it to Salesforce for $800 million, where he was then co-CEO of Salesforce with Marc Benioff.
And now he's the CEO and co-founder of Sierra, a conversational AI platform for businesses backed by Peter Fenton at Benchmark. Enough said, the man is a hero. The best CV in Silicon Valley history right there. But before we dive in, I'd like to introduce you to one of my favorite brands, Attio. Attio is the next generation of CRM.
Setting up Attio takes less than a minute, and within seconds of syncing your email and calendar, you'll see all your relationships in one place, all enriched with valuable data. Attio also lets you build Zapier-style automations, gives you powerful reports, and works perfectly for any go-to-market motion from PLG to sales-led.
Attio is designed for the next era of companies like yours, and companies like yours shouldn't have to deal with inflexible, one-size-fits-all CRMs. Join industry leaders like Eleven Labs, Replicate, Modal, and more to scale your startup beyond the next level. Head to attio.com.
And talking about incredible companies, I want to talk to you about a new venture fund making waves by taking a very different approach. It's a public venture fund anyone can invest in. not just institutions and accredited investors. The Fundrise Innovation Fund is democratizing venture capital, which could have big consequences for the industry.
The fund is already off to a good start with $100 million into some of the largest, most in-demand AI and data infrastructure companies. Companies like OpenAI, Anthropic, and Databricks. Check out the Innovation Fund's impressive list of investments for yourself by visiting fundrise.com slash 20VC.
Carefully consider the investment material before investing, including objectives, risks, charges and expenses. This and other information can be found in the Innovation Fund's prospectus at fundrise.com slash innovation. This is a paid sponsorship. And finally, let me tell you about UiPath. What do Henry Ford and AI have in common? Neither could change the world without automation.
In the future, there will be two types of businesses, those that have automated and those that wish they had. UiPath's new AI agents don't just follow rules. They think, make decisions.
and work alongside the world's most powerful software robots, already trusted by over 10,000 businesses. If agentic automation sounds new, just think of UiPath as your more-growth-not-more-overhead platform, or your happier-customers-happier-employees platform. Whatever you want AI to do for your business, agentic automation with UiPath will make it happen.
Try UiPath's new AI agents for free at UiPath.com. The future of automation is both agentic and robotic. Don't get left behind. You have now arrived at your... Bret, I am so excited for this, my friend. I've been a fan from afar for a long time. You've had such an incredible career. So thank you so much for joining me.
Thanks for having me.
I know this one's a little bit off script because it's not even on the schedule. So you're breaking the rules from round one. But when I go through the different achievements you have, it really is incredible. When you were young, did you know that you were going to be successful? Did you have that innate feeling?
I don't think so. When I was young, first I wanted to be Indiana Jones, which I know is not a job, but to me, he was by far the coolest example of an adult that I'd ever seen. By the time that I was in school and started thinking about a job, I thought I wanted to be an attorney in high school. I'm happy to tell the story.
It's actually kind of an interesting story, but I ended up getting a job at a gas station and then hustling my way into making a website for a mechanic that was nearby. I was getting paid $4.25 an hour at the gas station, which was minimum wage at the time, and ended up getting paid $400 for the website.
So I quit the gas station job the next day and ended up making websites for a lot of local businesses in my hometown. Most of those websites endured for decades, you know, because it turns out if you're a florist, it's not like you're actively SEOing your website. So, you know, my fingerprints on the internet in 1996 and 97 lasted for longer than you'd expect.
And even when I went to Stanford, I think if you'd met me that summer before, I probably would have said I probably want to be a lawyer. But then there was the combination of my accidental entrepreneurial experience, plus going to Stanford during the dot-com bubble. First quarter at Stanford, I took a class called CS106A, which is sort of the intro class, and the rest is history.
I was so obsessed with software at that point. I would do it in my spare time. It had nothing to do with school. I was just totally obsessed with the craft.
I do have to start, though, with some semblance of structure. You know, we've seen some mega rounds go down in the last few months from some of the biggest people in AI. Ilya recently raised a billion dollar seed round. I just want to start on the foundations of where we are.
Is that the first billion dollar seed round, by the way? I think it is the first, right? There's no, there must be, right? Yeah.
I was trying to think, like, if Elon did another company, I mean.
That might be a $10 million seed round.
My question to you is, are we in peak AI? And is this the ultimate sign of a bubble?
I think we are in a bubble, but I think bubbles have different shapes. There's a Mark Twain quote that history doesn't repeat itself, but it rhymes. And I think the AI bubble will rhyme with the dot-com bubble. I believe with the benefit of hindsight, most of the excess of the dot-com bubble might have been justified.
If you look at the top market cap companies in the world, they include Amazon, they include Google. If you look across segments, it's PayPal, eBay. If you look at the enterprise software companies, Salesforce started in 1998, if I'm remembering correctly. All of these companies were started in the dot-com bubble.
And I think people mentally and emotionally associate the dot-com bubble with Webvan and Pets.com. But actually, if you look at the most frothy statements about the dot-com bubble and the transformation of the economy, and you fast forward almost 30 years from that point, maybe it was true.
When you look at how much Amazon disrupted commerce, how much consumer payments have been transformed by digital technology, it took a few waves of technology like smartphones and NFC to really fully realize that vision. A huge percentage of the gains in the stock market over the past 30 years have more or less been these digital companies created in the dot-com bubble.
And so I haven't done the math on how much money was burned in that period. But I think that doesn't mean that the excitement around the impact of the internet on the economy was false. So I think the same thing is likely to happen in AI.
We will look back and laugh at some of the excess, but I am confident we will have brand defining, likely trillion dollar consumer company come out of this, 10 plus enterprise software companies that are enduring, public companies coming out of this that are native to this new technology. So I think it is both a bubble.
I think there are areas of excess, just like there were areas of excess in 1997 and 1998. But I think it would be dangerous to dismiss a bubble as strictly excess. And in fact, there'll probably be outsized returns within it.
Is it not different in the way that the risk was priced in? And what I mean by that is Salesforce's first rounds were not done at billion-dollar valuations. Amazon's were not either. The companies of 1998 to 2002 were priced not insanely, whereas you have xAI raising at $18 billion. I mean, these are potential trillion-dollar companies where, with dilution, you'll get less than 100x.
I think it's a reasonable point. And as a venture capitalist, it makes a ton of sense that you're thinking about it that way. I'm more thinking about the impact on the economy. We're in a world where there's a lot more capital than there was. There's a lot more, I'd say, structure around how people invest in technology companies.
As you talked about the private equity surge over the past 20, 30 years. It doesn't surprise me that given the amount of capital available, valuations are sort of markedly different than they perhaps were, though I think it seemed excessive back then too, right? I don't think people could contemplate a trillion dollar company in 1998, rationally anyway. What you're saying is reasonable.
I also think that from my vantage point, I'm not investing, I am creating. And my perspective is like, where are consumer behaviors going? How will the automation implied by large language models and agents change productivity, change the structure of companies, change the economy? And how do you define a generational company based on those trends?
It's up to you to figure out the nuances of whom to invest in and why. I'm happy to give my perspective, but I think for what it's worth, for companies that are pursuing artificial general intelligence, it's hard to figure out what's the valuation of a company that creates that. The numbers might be insane, so maybe it's completely rational.
I'm not the one writing those checks, so I also don't look at it dismissively. I look at it and say, there's probably a case to be made. I'm not sure I would write all those checks. I wouldn't say it's entirely irrational either, just because I do think this technology in its current form has a ton of value. And particularly as you project forward towards things resembling superintelligence or general intelligence, there is so much value in platforms like that. It's a very unusual investment, but it might not be irrational.
Before we discuss that, I love it also: a venture investor thinks in multiples and an entrepreneur thinks in generation-defining company impact. I feel like a schoolboy who's been told off, Bret. I feel terrible. I feel really guilty for that. But anyway, you mentioned AGI and the value that could come from that.
There is a step before that, though, which is that the models themselves become so good and so advanced that they bundle all the verticalized or unbundled software products and subsume them, so to speak. To what extent do you think that is a threat, that everything will really just be subsumed by very sophisticated models?
I don't believe that will happen personally. Analogies are dangerous, but I think they might be illustrative in this case. I actually think the AI market commercially will play out like the cloud market did over the past 20 years. If you look at the cloud market, I would say there are three big categories of cloud software. The first is infrastructure as a service: Amazon Web Services, Azure, Google Cloud, services like that. Then there's toolmakers: Snowflake, Databricks, Datadog, Confluent. Basically, what is the software that you need when your company is moving to the cloud? And then there's software as a service: Salesforce, ServiceNow, Adobe, and the extremely long tail of solutions there.
And I would say, you know, we were talking about the public companies in the stock market in that kind of $2 billion to $20 billion range. There's a huge number of really interesting and really valuable software-as-a-service solutions. Why did that play out that way? Certainly I heard, isn't Salesforce just a database in the cloud? I'm like, come on, it's a solution.
It's a solution for sales, service, and marketing teams, and it has a ton of value. And the same reductive backhanded comment can be made of any software-as-a-service application. And I think it's borne by companies, CIOs, CTOs, CEOs, knowing that actually they don't want to be the one building software. They just want a solution that works. Software is like a lawn. It needs to be tended to.
It's not like you write software once and it just works forever. And the total cost of ownership of building and maintaining software is so great that I think almost every company that has chosen to build their own in an area of their business where there is a software-as-a-service solution available has regretted it.
And you've seen this secular trend away from build-your-own towards software as a service. I think the same will be true of AI. I think there's a bit of a focus right now on both the data centers and the models because the future is so unclear.
The clearest way to invest in AI right now is to invest at the lowest layer of the stack, because you know that whatever happens on top, those layers will collect taxes on everyone working on AI above them.
But I don't see why companies would want to take this bag of floating point numbers and morph it into a solution themselves, because I believe the same dynamic that played out in the cloud will play out in AI. So at Sierra, which is my company, we make a solution. We're not doing pre-training. We're fine-tuning other people's models to build the solution.
And we're helping companies build customer facing agents primarily for customer service. So for companies like Sonos or Sirius XM or Chubbies, there are other companies like Harvey who are making legal agents, companies making coding agents that are essentially building software.
And I think that if you are a head of a legal department or you're the CTO of a company, why would you want to take a model and try to build all the workflows for your engineering team or take a model and say, OK, let's work with our IT department and see how our partners can use this instead of a paralegal. What you want is a push button solution that solves a problem.
And so I think this idea that somehow the way the world wants to buy software will change because these models are really smart doesn't resonate with me. The area of AI that I am actually most excited about: obviously, everyone's excited about AGI; it's why I chose to work with OpenAI. But I'm really excited about applications. I think it's early there.
And I think there's a bunch of companies saying, we're going to actually build a product that solves the problem. It doesn't just help with productivity. It actually solves a problem. And we're going to cater that solution to a department or a buyer that isn't technical. And it's going to be magical. There's a ton of value there. And I believe that's the way most companies want to buy software.
There's a couple of things I just have to unpack there. You said about companies wanting to buy solutions and the ease that they require when implementing these solutions. I actually said before on Twitter that I think AI services companies over the next three to five years will actually be the biggest winners in AI. And you've seen a lot of these consulting firms post billions in profit.
There was one that actually had more revenue than OpenAI. Do you agree that AI services companies will be a dominant strain of this community and that they will be needed though for the implementation of this next generation of application layer?
In the early days of technology adoption, you tend to have very low level platform building blocks available and quite a bit of professional services spend because there is no option other than building it yourself. So you tend to get a short term spike in professional services spend along with some low level building blocks.
And my guess is at least some of that revenue you're describing is companies not having an out-of-the-box software-as-a-service solution available. They see these amazing models like GPT-4 available, and if they wanted to apply them to their business a year and a half ago, two years ago, their only option was essentially to pay one of these firms to do the last mile themselves.
Over time, I do think that that will diminish as solutions become available that have shorter and simpler implementations. That's what companies like mine are doing: essentially reducing the last mile to actually configure the software. However, this is nuanced, and you may be right.
And actually, I think it can be a lot of value that professional services firms provide is around change management. So if you imagine you have a contact center in the Philippines managed as a BPO with one of these customers and you're migrating a huge percentage of your cases to AI, it's not just a technology change, right? It is actually a huge change in the operations of your company.
And then similarly, if you imagine these technologies becoming even more advanced, whether it's reskilling your workforce or actually transforming the way an entire department operates, because there's an agent that comes out that means you can completely restructure the way a department is run. One thing that software companies have always been
bad at, for good reason, I don't think it's necessarily what we do, is actually helping companies manage the adoption of this technology. Most software companies try to be trusted advisors to their companies, but at the end of the day, they have a vested interest in the product that they're selling. And it often helps to have a third party there to help you actually manage that change.
So I do think that there's probably some short-term professional services spend that reflects the lack of the maturity of the AI applications market right now. When there are solutions like Sierra and others for specific domains available, you shouldn't have to spend as much to deploy those effectively in your business.
However, I think that as AI changes and disrupts the way companies operate, I would hope that the best professional services firms have consulting arms that can help companies with that change management, and they might be compensated in a different way. So if you itemize the receipts, the mix of revenue might change over time.
One thing that's really striking to me is the speed of commoditization among the models. Is this not the fastest technology to commoditize? I mean, every week I see like, you know, Mistral kills it. Next week, Gemini kills it. Next week, OpenAI has crushed it. And I'm sitting here going... Fuck, I'm getting dizzy. Like, which one should I use? Oh my God.
And then Claude comes and it's like five things you can do with Claude that you can't do with anything else. And I'm like, Christ, I've got no idea what's going on. Are they the fastest technology to commoditize?
Let's start with the high level. I really like Reid Hoffman's framing of this market as foundation models and frontier models. Foundation models are any of these large language models that aren't necessarily the best of the best or the highest parameter count, particularly now where you have relatively low parameter count models that meet or exceed the quality of, say, GPT-3.5.
That market of foundation models is quite important and quite commoditized. If you need a model like that, you should download Llama. That's the answer. You don't need much of a cheat sheet on that, or maybe Mistral, but pick one of the open source models that are adequate and fine-tune it.
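As a concrete illustration of that "download an open model and fine-tune it" path, here is a minimal sketch using the Hugging Face transformers and peft libraries with a LoRA adapter, a common parameter-efficient way to fine-tune. The model name, target modules, and hyperparameters are illustrative assumptions, not a recommendation from the episode.

```python
# Hypothetical sketch: fine-tuning an open-weights model with LoRA via
# Hugging Face transformers + peft. Names and hyperparameters here are
# illustrative assumptions, not anything endorsed in the episode.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "meta-llama/Meta-Llama-3.1-8B"  # any adequate open model works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# LoRA trains small low-rank adapter matrices instead of all the weights,
# which is why fine-tuning costs a tiny fraction of pre-training.
lora = LoraConfig(
    r=8,                                  # adapter rank
    lora_alpha=16,                        # adapter scaling factor
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of parameters
# From here, train with a standard loop or the transformers Trainer on
# domain data, then serve the base model plus the small adapter.
```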
The frontier model market is a little different when you talk about this, the experience you've had being dizzy using these tools. My perspective is that we've seen real leaps there. So when ChatGPT came out, that was a meaningful step function change that lasted for a while.
And the insight around instruction tuning and the quality of sort of the GPT models after GPT-3 was pretty remarkably different. Similarly, when GPT-4 came out, I haven't done the math on it, but it certainly had a meaningful lead for quite a while. And now you're seeing a lot of models sort of catch up to that.
My sense is we see a lot of incremental improvement followed by step changes in quality. But going back to the market itself, I am inherently skeptical of companies doing pre-training, unless you are an AGI research lab. Doing pre-training on a model, I believe, is just burning capital.
It's roughly the equivalent of an entrepreneur coming to you and saying, you know, we're building this software solution, and the first thing we're going to do is build our data center by hand.
And I think for 99% of software companies, they should lease their servers from an infrastructure as a service provider, not because it's the most vertically integrated and efficient, but because it's not what their company does. Similarly, as you're exploring and finding product market fit, the last thing you want to do is have a big upfront investment to build a data center.
There were a number of companies that were started by incredibly talented AI researchers, and step one of their product plan was to pre-train a model. And I think, especially with the existence of high quality models like GPT-4o mini that you can fine-tune, or open source models like Llama 3.1, spending capital on pre-training now, unless you're one of the behemoths, is nonsensical.
Can you continue to have step function changes with every model, in terms of GPT-3 to GPT-4? Obviously, there's GPT-5 coming next. Don't worry, that's not a spoiler; it would just be a natural guess, unless they're going for a radical rebrand. I'm not that smart, Bret. I'm a VC, but, you know, GPT-4 left me with only one option.
Do you remember in the early 90s, it went from like Windows NT to 95 to 98 to 2000, you know, or something like that. I might be mixing it up. So, you know, they could pull that move and start changing the numbers up.
I was born in 96. But I remember reading about it. There we go. But we can't continuously have step changes, can we? Are we at a stage where you start to see slightly diminishing returns?
Those are two distinct questions to me. So starting with the step function, I don't think it's a foregone conclusion that we'll have step function changes. I believe the most responsible way to develop AGI is responsible iterative deployment. The reason for that is I believe that as you're thinking about things like the societal impact, access to this technology, and the safety side of AGI as well, the
best way we can learn about how to ensure that these models benefit humanity is to consistently release them, learn from those experiences on the safety side, learn about the harm, learn about really specific vulnerabilities like jailbreaking and improve it at every turn. We could end up with a plateau of progress, or as you said, diminishing returns. The three inputs to progress in AI are
Number one, data. Number two, compute. Number three, algorithms and methodology. So if you look at the short history of this current wave of modern AI, it started, I think, with the Transformer model, which came from a paper from Google called "Attention Is All You Need" that changed the scale with which you could build these models, which led to many of the GPT breakthroughs that came next. You ended up with instruction tuning, which was how you turned one of these models into a chat interface, which was a breakthrough as well. Given even existing data and existing compute, we have all of the best minds in computer science thinking about different techniques. There are folks even looking beyond the Transformer model and things like that.
So that's one area where you could have a big breakthrough. Then you have compute: just pick up a newspaper and read about the investment in GPUs. These clusters are getting bigger and bigger. And even with the same amount of data, more training compute in both pre-training and post-training can have a really big impact on quality.
And then on the data side, there's a lot of writing about running out of some of the textual data. But there are a lot of really interesting companies working on simulation, a lot of interesting explorations in synthetic data generation, and there's multimodality: what's true of text aside, there's lots of video, audio, and image content as well. In any one of those three inputs, you could probably make a very rational intellectual case that we're going to hit a wall, but then you have the two others. And I don't think you can make the case for all three that they're all coming up on a wall. Like any big scientific effort, it will probably be a mix of progress on all of those. As a consequence, I'm optimistic about the progress of these models towards something that resembles artificial general intelligence, and I'm excited about it.
In terms of kind of the pursuit of AGI and then also building useful applications for consumers, a company has to have a priority. I think we both agree on that. How does one hold dual priorities of chasing AGI and building a great consumer or enterprise product at the same time?
What is the purpose of building AGI? It should be to benefit humanity. And so what does it mean to benefit humanity? You know, the OpenAI mission is to ensure that artificial general intelligence benefits all of humanity. That can mean a lot of things.
I think it means a lot of different things to different people, which is why OpenAI has been sort of a honeypot for controversy in a lot of ways, because it's very important in this space. And that mission can be reflected through the lens of your own values to mean a lot of different things. But one, it can mean access.
So when you think about how people access this amazing new technology, one could argue that ChatGPT has perhaps been the biggest breakthrough in providing universal access to AI. The idea of building a conversational agent that everyone can just use by visiting a URL was probably not a thing people conceived of before that.
It's why, at least in my understanding, it has a sort of goofy name: it was a research preview that turned out to be the most important product of the past decade. And one of the things I think about is, wow, what an important mechanism to deliver the value of AI and AGI to the world. I think it's very aligned with that high level mission.
And to your point, yes, there are things you would do building consumer products that are different than pursuing AGI, and that's sort of the complexity that all of these research labs or mission-driven companies are dealing with.
But to imply that sort of building a widely used consumer experience is somehow contrary to delivering value of AGI, I don't buy because how else are you going to deliver it? And there could be different answers, by the way. But you really want to ensure that once these technologies exist, that it's broadly available to everyone in the world, obviously, and responsible in a safe way.
And so I think it's really great that a lot of these research labs have found a form factor that resonates with so many people.
I get you, but one end of the complexity is like: no, we're not doing that, we're building a Google killer, that's what we want to replace. And then OpenAI has an enterprise product with an enterprise division, and then AGI and safety teams. It's kind of cloudy. Do you see what I mean?
I think these issues are complex, to be honest with you, Harry. You could describe it as an enterprise team. You could also say you're trying to take the value of these models and ensure that they benefit humanity. Do you want every product that benefits humanity to be built exclusively by OpenAI?
And so enabling developers to build on top of it is a meaningful part of distributing the value to the world. So I don't want to minimize the complexity of all of these decisions, but as you think about delivering the value of these models in a way that maximizes their benefit, it doesn't seem that far off. It's also what a lot of other research labs are doing, I think for similar reasons with similar missions. So I'm excited about the impact that it's having.
So many of the entrepreneurs I know who are working in AI do it in large part because of how inspired they were by using these models as consumers and using the APIs. And I think it's having a super positive impact on the world right now.
Do you think knowledge is proprietary to companies given the incestuous nature of just some of the movements we've seen between people and teams?
Certainly some knowledge is. I also think that right now a lot of these companies are pursuing a mission that's bigger than any one organization. And a lot of the folks working on AGI are in or come from academia, where the ethos is to publish, which has obviously shifted a bit over the past few years. So it's a very complex question right now.
With the breakthrough ideas, it's sort of like the Wright brothers inventing the plane. I don't know the story in detail, but apparently there was another group that came close as well, and the Wright brothers were the ones who hit it. I think there's also this dynamic where these ideas are sort of in the air between different researchers as well.
We mentioned the commoditization of foundation models as a technology. We've also seen price dumping and a race to the bottom in terms of price in a lot of cases. How do you think about AI business models that are sustainable given incredible training and inference costs?
So when I made the comment earlier about skeptical of companies doing pre-training, it was really based on the premise that most companies should be applying AI to build solutions and most companies should have relatively modest training costs. And most of their costs should be correlated with inference, which should be correlated with revenue and usage of your product.
And that's essentially because if you end up pre-training a very large model, you end up with such upfront capital requirements that you have to have a really valuable business model on the other side to justify that investment. So first, I think companies should really focus on how to find product market fit prior to taking on meaningful training costs: fine-tuning might be fine, but certainly not pre-training models. On the inference side, I actually think the costs of AI are going down really, really rapidly. I've seen a lot of people tracking the cost of the GPT models over time.
And what's remarkable about the cost going down is the quality is also going up. It reminds me: you joked about when you were born, but around the time you were born, every time I got a new computer in my house, it was twice as cheap and twice as good. So on the inference side, I think that margins will probably improve for a lot of these use cases.
There are a lot of interesting technology trends like distillation: taking a large parameter count model and making a smaller parameter count model from it that has similar levels of quality. Essentially, what that means is you're transferring some of the capability: you trained a very large model, but you can run inference on something that's much smaller, cheaper, and faster.
And then there's obviously a bunch of improvements on the hardware side as well. And I'm incredibly optimistic that the cost of running AI could probably track something like Moore's law. I don't think it's a law, I just think it's a trend. And I think that's a really exciting thing for all companies.
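To make the distillation idea above concrete, here is a minimal sketch of the textbook approach: train a small student model to match the softened output distribution of a large, frozen teacher. The temperature value and the training-step outline are illustrative assumptions, not a description of any particular lab's pipeline.

```python
# Minimal sketch of knowledge distillation as described above: a small
# "student" is trained to match the output distribution of a large
# "teacher", so inference can run on something smaller, cheaper, faster.
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Soften both distributions, then push the student toward the teacher.
    t = temperature
    teacher_probs = F.softmax(teacher_logits / t, dim=-1)
    student_log_probs = F.log_softmax(student_logits / t, dim=-1)
    # KL divergence, scaled by t^2 to keep gradient magnitudes comparable.
    return F.kl_div(student_log_probs, teacher_probs,
                    reduction="batchmean") * t * t

# Training step (sketch): the teacher is frozen, the student is optimized.
#   with torch.no_grad():
#       teacher_logits = teacher(input_ids).logits
#   loss = distillation_loss(student(input_ids).logits, teacher_logits)
#   loss.backward(); optimizer.step()
```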
If we think about that reducing cost over time and Moore's Law proving out, we're also just seeing Meta, we're seeing Amazon, we're seeing Google say they are going to invest ungodly amounts in the next three to five years. Does that go against Moore's Law and the reducing cost for them? And how do you think about those two seemingly paradoxical things?
I think the large hyperscalers are in a challenging position, where there's a big difference between owning and operating one of the best frontier models and not. As a consequence, I'd probably make very similar decisions to all of those firms, because there's so much value for consumer products and infrastructure-as-a-service providers in having a differentiated frontier model available to their customers. Betting on that future, and similarly betting on breakthroughs in AGI, I think is really rational. But the reason I was talking about the Moore's law part of it is that, sort of like the infrastructure-as-a-service market, which really consolidated around a very small handful of companies, in AI building data centers is a business where scale helps. The more data centers you operate, the more you can afford the CapEx to expand your data center footprint. Training should be financed and built by the large hyperscalers because of the CapEx requirements to do so. And as the training market consolidates, I think it will probably help, because the revenue will consolidate around those providers as well. So it is a complex situation.
I think these companies have an imperative because of the potential impact of AI to spend. And, you know, the CapEx numbers are mind boggling, but I probably would do the same thing.
When you look at Google and Amazon, their cash cow to fund this is cloud. Zuck and Meta do not have a cloud business being their cash cow to fund this. What does that enable or mean that Zuck can do differently with the cash cow not being cloud? Is there anything he can do differently? Is there any freedoms that he has?
One of the things that I have changed my mind on over the past year is how quickly open source foundation models would be impactful. I had a thesis when we started Sierra in March of last year that eventually we'd end up with a few frontier models, essentially built and financed by some of the hyperscalers in partnership with the research labs, and that we would eventually have a meaningful open source model or two, the equivalent of Postgres and MySQL in the database market. Those would come out and eventually be adopted by one of the larger tech companies that wasn't one of the hyperscalers, in the same way Google adopted Linux or Facebook adopted MySQL and Memcached and contributed a lot of patches upstream to those projects. And I would say Mark Zuckerberg accelerated that by a meaningful amount, not only in the timing of when it happened, but in the quality.
You know, Llama 3.1 is a really high quality model. I think it comes from what you said: without a cloud business to finance it, his incentives are different than the cloud providers'. And he wrote it himself, no need for me to say it. If you just read his post on why he believes that this is the right strategy, I thought it was a really well articulated post.
I think it's probably good for the AI market overall. Just look at the cloud infrastructure market. You have a lot of proprietary solutions like DynamoDB to store data, and you have a lot of open source things like Kubernetes to manage your infrastructure. And then you have commercial companies commercializing those open source projects like Confluent with Kafka.
So I think that a healthy AI market probably needs all of the above. You're going to have the frontier models that are the best of the best that are licensed directly. The cloud providers will probably provide both options. And if you're building these frontier models, you need to maintain a quality lead on the rest.
And I think it's really great for the ecosystem that there's a super high quality open source model available right now.
Is there ever a stop to the cash tap that's been turned on? Someone said the other day, it's kind of like the Manhattan Project for them, which is just like you're in and you can't stop. And the sunk cost is there and you're like another 20 billion. Is there any turning off of that cash tap requirement?
You know, one of the big questions is: what scale of supercomputer, what methodology, and what data are required to create something that might resemble AGI, or create that breakthrough in economic value that would justify the investment? No one really knows. There are a lot of theories about it. But when you look at these companies investing this kind of CapEx in that future, I think it's absolutely great. It's totally understandable that investors would look at the CapEx and say, give me the spreadsheet that justifies the returns. That's completely rational, and I'm sure there's folks doing that. But we have this potential to create something that benefits humanity this much,
to have this kind of impact on the economy, to create something that valuable. I'm very grateful that there are some bold CEOs investing in that future. I think at every stage you end up with that sort of increasing resolution about how it will be monetized, what the great products will be. In the first wave, it was lots of co-pilots. Now you have, as I said, agents.
My sense is there could be a ton of value created here. And I think you're in this position now where you don't want to be penny-wise, pound-foolish when you're investing in this future.
That's why, as I said, I'm skeptical of startups doing pre-training: that's a risk I find irrational, because you don't have the capital structure to take on that risk. It's probably running towards a cliff that you won't have wings to fly off of by the time you get there.
If you're one of the larger companies you're referring to, and you think about how to grow your revenue by a meaningful amount over a 10-year period, tell me a better option than this. And so I think there's a lot of understandable skepticism, but I also think it's a very exciting future.
Everyone on the show has said that we will see consolidation in the market. We've had the founders of Adept. We've had Character AI. We've had Cohere. We've had Reid, obviously, from Inflection. Do you agree that if you are not one of those core players, then we are in a consolidation market?
I think we'll see consolidation of companies pre-training their own models. I think that the cost structure of the tools and applications companies is different and perhaps more sustainable. So like any market, you'll see consolidation when there's winners, but I think it will happen over a more measured time period.
You mentioned agents there. I do want to move into agents and the future of agents. First off, with Sierra, you could literally do anything, Bret, if we're honest. Why did you decide to do Sierra?
So let me just describe what Sierra does, and then I'll tell you why I think it's very exciting. At Sierra, we help primarily consumer brands build branded, customer-facing AI agents. So if you buy a new Sonos speaker or you're having a problem with your speaker, you'll chat with the Sonos AI powered by our platform.
So we're essentially helping companies build their branded AI agent for all parts of their customer experience. The reason why I think this is a really exciting area for our customers and for me personally, is that I think we're in the era of conversational software.
So I remember in 2007, when Steve Jobs announced the iPhone, and I'm guessing you were 11 then based on our previous conversation, so you may not remember it as vividly as I do. I remember it well, Bret. OK, good. That's good.
Thousand songs in your pocket, the iPod. It was mind blowing.
Yeah, it was mind blowing. It really was. What's interesting though is in the corporate world, the dominant smartphone at the time was the Blackberry. And if you talk to anyone who had a Blackberry, they'd be like, there is no way I'm going to ever type on a touchscreen. The Blackberry keyboard is and was beloved. People still talk about how efficient they were with it.
But you fast forward 10 years and 100% of those people had iPhones in their pocket. Why was that? The reason was the multi-touch interface in the iPhone, plus all the benefits afforded by having a big touchscreen from having a full featured web browser to be able to watch media.
We crossed a quality threshold where it was actually effective enough relative to the BlackBerry keyboard that everyone said, this is just better. We're just going to adopt it. And now we have more smartphones in the world than people.
And I think if you measure what percentage of human computer interactions are coming from smartphones, touchscreens today versus mice and keyboard, it's got to be 95% plus. I think with GPT-4, we crossed that quality threshold of effectiveness with conversational AI, meaning you can now have a conversation with a computer and it actually works. It understands nuance. It understands sarcasm.
And as a consequence, if you fast forward four or five years, when you're interacting with any of the consumer brands you work with, your insurance company, your phone company, you will probably be having a conversation with an AI more than you'll be clicking around on a website or clicking around on an app.
And just like mobile apps didn't replace websites, they just took a number of use cases away from them. Think about when you go to your bank: your bank's website versus the app on your phone. I don't think conversational agents will replace apps and websites, but I do think that every company will need one.
We like to say: in 1995, the way you existed digitally as a business was to have a website; in 2025, the way you will exist digitally is to have an agent. So in the context of Sierra and in the context of that word agent, we're trying to enable companies to build their own, the one with their brand on it, that does everything that their customers want to do.
And really in the fullness of that vision, if you think about everything you can do on a company's website, it's amazing. It's pretty expansive. It isn't just about automating something that exists and helping with customer service. So that's a meaningful part of it.
We really think this is a new category of digital experience and companies will and do want to be present in this world of conversational AI, but it's very hard to do. And that's why we're building a solution to facilitate it.
Why is chat the right form factor? And is it multimodal? Like, can I take a picture of the Domino's pizza and put it in my agent and it's like, oh, that's the Mighty Meaty 17-inch? You can tell I'm not a vegan. To all vegans, I'm sorry, I just lost a big swathe of our audience. And image-based is like me on a run being like, ah, you know, I want this. How do we think about multimodality, and why chat has been and may remain the dominant interface?
Chat and voice and multimodality: the reason why I think conversational AI is a meaningful form factor is because it's low friction. If you look at the use of WhatsApp around the world, it means that you can essentially exist as a business in WhatsApp and be a completely full featured customer experience.
Think about CarPlay: if you've tried to use apps while driving your car with CarPlay, it's fairly limited, right? But now imagine that you can have a fully productive experience on your commute into work. Or look at five or ten years ago, when Alexa was exploding and everyone was putting smart speakers on their kitchen counters. We have them in our house as well. Right now, for our family, that's getting the weather and turning on music, that type of thing. Imagine that was a full featured computer and you could order an Uber, you could check your calendar, you could
follow up on an email while you're making your coffee, having these conversational experiences in both text and voice. Well, I'm not arguing that it is the perfect form factor for every experience. But in the same way, I don't know what percentage of your email you type on your phone versus your computer. Probably 90 plus percent. And you wouldn't say it's because typing on your phone is easier than typing on your keyboard; it's a convenience thing. And so my point in going through those different form factors, being in your car, being in your kitchen, being in WhatsApp and not having to install an app, is that consumer experiences are driven by convenience and lowering friction.
And my thesis is just like touchscreens have come to dominate our experience with computers because of convenience. You can have a conversation in so many different places. You don't need an instruction manual. I think it will be the main way we work with computers.
Do you think we see the removal of the phone, though, as the primary interface? You've seen Zuck with the Ray-Ban glasses. Why do you need the phone at all if I can just talk to myself, which would look kind of weird but normal because I do often. I could ask myself, hey, get an Uber. I'm here. Do we see the removal of the phone?
It certainly seems feasible, but I temper that with if you look at the past 15 years of consumer electronics innovation, how many companies, including the ones that make smartphones, have tried to make devices that replaced or augmented the phone unsuccessfully. This device here, this phone... It's so good at so many things and everyone already has one.
It's essentially completely removed the market for almost every other type of consumer device. So in the short term, my intuition is that the combination of a smartphone with Ray-Ban glasses or AirPods or the like, probably meaning you might need to look at your screen less than you do today.
But my intuition is, because of the prevalence of smartphones around the world, it will still end up being the primary computer that mediates those conversations. But to the point that you made, as conversational experiences start working, maybe you need the big screen less. I always get the big phone just because I like the big screen. I think a lot changes.
And I always go back to the early app store days and the early apps being such skeuomorphic apps like flashlights. And then you have the mobile native experiences like WhatsApp, DoorDash, Uber, Instacart. It took one generation for those things to really exist. I have a sense that it will take a little while to see agent-native consumer experiences and agent-native devices.
And the hard part about particularly consumer electronics is you kind of need the consumer experiences to lead a little bit to have the market available. So it might take a while, but it certainly seems in the cards now in ways it wasn't before.
Are WhatsApp not best placed in terms of installing an app store for every big brand in the world to implement their own channel? And then you have existing distribution to a billion, however many users it is, integrated already into functionality and apps that they use already.
I think WhatsApp is very well situated. If you look at the usage of WhatsApp in places like Brazil and India, businesses are approximating this already. But I think large language models and agents like the ones we build at Sierra open the door to much more full featured experiences. I also think the same is true of most mobile platforms.
I think that when you install an app on this, it's probably going to be an app and an agent in the future. When we work with our customers, we want to enable them to take their AI agent to whatever form factor becomes a dominant consumer experience; you should be able to install your agent in that experience.
Bret, what was the hardest thing with Sierra that you did not anticipate being so hard?
I'll describe a technology problem, and then I'll describe the human problem that was harder than I expected around it. So generative AI is very creative, but inherently non-deterministic. It's very hard to create determinism, the same inputs creating the same outputs, in particular because the breadth of human language is just inherently less precise than most software interfaces.
And then similarly, if you afford AI the ability to reason, you know, sort of by definition, you can't enumerate all the possible outcomes from there. So when you're building industrial grade agents, you know, for businesses that have real business rules they need to follow, we like to say software is going from the age of rules to the age of goals and guardrails.
And the hard challenge there is how do you enable businesses to express their goals and guardrails effectively?
What's the difference between rules versus goals and guardrails? Are guardrails not rules?
When I think of rules, just imagine you're a retail website. You probably have a menu at the top left and you click it and it has the ability to sort of filter down all the items that you sell. Men, women, shoes, socks, pants, that type of thing. You probably experience this. You've essentially enumerated the rules by which people engage with your site. Here's the categories.
Here's what you can click. You could probably have someone actually click through all possible pages on your site and verify that they look correct if you wanted to. Now imagine you put an AI agent on your site. It's a free form text box. People can type whatever they want. If you explicitly enumerate all the things the agent can say, it's going to feel like a robot.
And that's essentially what chatbots from like three or four years ago felt like. And actually, in fact, many of them had almost like the multiple choice options available to you because they couldn't figure out how to express that universe, nor did they have the natural language understanding to create a meaningful experience.
So with an agent, you want to enable the AI to have agency and creativity to actually understand and really comprehend what the customer's problem is. But then, let's just say you're a streaming service and you want to use your agent to process cancellations.
So when someone wants to cancel their account, probably the thing you should do is ask why. You might want to offer a discount. And if the person still wants to cancel, you cancel their subscription.
The goal might be to process the cancellation, and you probably want to afford the AI agent some creativity in how to present those discounts: to really do some discovery, like a good salesperson would, on what value you hoped to get from the streaming service, things like that. And then eventually, you cancel.
Within that, there's lots of areas where you want to afford the AI agency and creativity, just like a really good salesperson would have that conversation with you, and in an empathetic, not pushy way, just try to figure out if there's a way to retain you as a customer. And that's nuanced, right? Empathetic, not pushy. That's where you need to grant a lot of agency.
But you don't want the AI to go off script.
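A hypothetical sketch of how that "goals and guardrails" framing could be declared in code, using the cancellation example above. This is not Sierra's actual API; the AgentConfig shape, field names, and actions are illustrative assumptions, meant only to show the goal, the hard guardrails, and the allowed actions expressed separately rather than as enumerated rules.

```python
# Hypothetical sketch, not Sierra's API: a goal plus guardrails plus a
# closed set of allowed actions, instead of enumerating every reply.
from dataclasses import dataclass, field

@dataclass
class AgentConfig:
    goal: str                                  # what the agent should accomplish
    guardrails: list[str] = field(default_factory=list)       # hard constraints
    allowed_actions: list[str] = field(default_factory=list)  # only tools it may invoke

cancellation_agent = AgentConfig(
    goal=(
        "Understand why the customer wants to cancel, offer a retention "
        "discount empathetically, and process the cancellation if they insist."
    ),
    guardrails=[
        "Never invent policies, refunds, or discounts beyond allowed_actions.",
        "Never pressure the customer after they decline an offer once.",
        "Stay within the brand's tone guidelines.",
    ],
    allowed_actions=["offer_discount_10_percent", "cancel_subscription"],
)
```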
You know, there was an article- Wouldn't it be quite funny if you're like, I'd like to cancel. Well, you're a dick.
Yeah. Or even worse, there was an airline that had a chatbot that hallucinated a bereavement policy. Someone had a death in the family and the chatbot's like, the ticket's on us. I won't name the brand on your podcast, but it was a pretty bad thing. So you don't want the AI to have so much agency that, in the extreme case, it hallucinates.
And in the case that you mentioned, you don't want the AI to just basically represent your brand poorly as well. So essentially, when you're making an AI-mediated customer experience, like a conversational agent, you need to really be able to declare both the goals of what the AI is supposed to do and the guardrails, which could be around language and brand.
It could be tone: how pushy you want to be, how forceful. And then similarly, here's the offers that are available, things like that. So that's the technical problem that we solve at Sierra, and I think we solve it in a fairly novel way.
Does that mean, sorry, that you only see agentic implementation for, bluntly, low risk activities? Hey, I want money back on my Domino's. Listen, if you fuck it up, kind of who cares? But if it's, you know, my operating system for finances, whatever that may be, or your Salesforce, I really don't want to fuck up pipeline for a billion dollar business.
I think that as AI improves, you'll see these agents adopted for increasingly more mission-critical systems. So I think the adoption curve rationally starts with relatively low-risk interactions and then progresses from there. But our customers already are using it for revenue generation, sales, subscription churn management for subscription services, things like that.
So I think that as companies develop confidence in their agents, they can go to increasingly higher risk areas. But this is actually getting at the challenge where we started this conversation: it's a very different design problem than traditional consumer design problems.
You know, if you think about designing a website or designing a marketing campaign, you can have quite a bit of control over it. You can sort of enumerate all the different permutations that your customers might see.
With an AI agent, in addition to your consumers being able to say whatever they choose, the more agency you give the agent, the more empathy it will have and the more delightful it will feel, but the less control you'll have over it. So the really interesting discussion we have with our customers is:
If you want your agent to have a ton of personality and a ton of empathy, you probably need to turn the knob up on agency. But with that comes risk. You can turn the knob all the way down to zero, which, by the way, our platform supports for the high-risk cases. There's some cases where you don't want a ton of creativity or non-determinism. But in that case, your agent might sound more robotic.
You might sort of regress back to the chatbots of a few years ago. So we don't come in necessarily with a prescriptive view on what's right for a particular customer workflow or a particular brand, but it's a really interesting discussion.
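As a rough illustration of that knob, assuming the agent ultimately calls an LLM: one crude proxy for agency is sampling temperature, with zero approaching deterministic, scripted output. The function below is invented for illustration; a real platform would also vary prompt latitude, tool permissions, and review thresholds.

```python
# Hypothetical: mapping an "agency" knob onto LLM sampling parameters.
def decoding_params(agency: float) -> dict:
    """agency: 0.0 = scripted and near-deterministic, 1.0 = maximally creative."""
    if not 0.0 <= agency <= 1.0:
        raise ValueError("agency must be between 0.0 and 1.0")
    return {
        "temperature": agency,        # 0.0 keeps replies repeatable but robotic
        "top_p": 0.5 + 0.5 * agency,  # widen the sampling nucleus as agency rises
    }

print(decoding_params(0.0))  # high-risk workflow: chatbot-like, repeatable
print(decoding_params(0.8))  # retention conversation: creative, empathetic
```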
And I think that just like the concept of a user experience designer was a new category of job as the web took off, when designing user interfaces was no longer just the domain of boxed software, we think there's a role of an agent engineer, who builds these agents on our platform. We think there's a role of an AI architect,
a customer experience leader whose job is to do the conversation design and shape the behavior of these agents. And we're essentially building products and tools for these different new types of jobs, which we think are just as meaningful as UI designer or web developer. And I think that's really exciting. But it's also creating this natural tension at our customers.
And I mean tension not in the personal way, but actual intellectual tension, which is: how much agency do we want to afford our AI? Making the guardrails more narrow makes the agent slightly less delightful, but making them more broad reduces control. And that's such an interesting discussion to have with brands.
On the flip side, how much agency do you give a human who's been trained for a week and sits in your Detroit customer service department and could get high and then abuse a customer? Do you know, I always think we forget this when we talk about AI hallucinations, we're like, yeah, and humans hallucinate too.
The interesting thing about modern large language models, and what I think the industry has come to call generative AI, is that it violates most of the rules we have in our head about computers. You know, computers are designed to be reliable. You click this button, the same thing happens every time you click it. They're designed to be databases.
They're not designed to be creative, right? They're designed to give you facts, to follow the rules we give them really, really fast. And just think about software engineering, the craft of software engineering. There are entire methodologies now about how to get
increasingly reliable software, which involves using source control like GitHub and using immutable binaries so that you can roll back and have the same behavior you had yesterday if something goes wrong. We've essentially spent decades trying to make things deterministic, repeatable, reliable.
And now you make this new piece of software that is slow, somewhat expensive, extremely creative, and fairly non-deterministic. It blows people's minds. And I think that as a consequence, people are modeling AI through the lens of: how do we make it as deterministic as software was two years ago? I'm not sure that's the right model.
I actually think the thought exercise you did is: okay, let's assume that our salespeople or our call center agents occasionally go off script. How do we deal with that? There probably are operational mechanisms at your company to deal with those situations. Okay, why don't you just use the same mechanisms to deal with the AI as well?
So stop putting AI software in the bucket of computers and that rule set, where you deal with it by trying to get to five nines of repeatability, and say, okay, this is actually going to be a really creative, really impactful, much lower cost solution. It will do some things that are incorrect some of the time.
How do we deal with that eventuality rather than try to fully prevent it, which right now is almost impossible.
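A minimal sketch of that operational stance: rather than trying to make the agent infallible, check each reply against the declared guardrails and route violations through the same escalation path a human agent going off script would trigger. Everything below is an illustrative stub, not anyone's real API; the toy string check stands in for whatever real evaluation (a rules engine, a second "judge" model) a production system would use.

```python
from collections import namedtuple

# Illustrative stand-ins, not a real API.
Guardrail = namedtuple("Guardrail", ["description", "hard"])

def violates(rail: Guardrail, reply: str) -> bool:
    # Toy check; in practice this might be a rules engine or a judge model.
    return "free ticket" in reply.lower()

def handle_turn(reply: str, guardrails: list) -> str:
    """Treat off-script output as an operational eventuality, not a bug to prevent."""
    for rail in guardrails:
        if violates(rail, reply):
            print(f"incident logged: {rail.description}")  # audit trail, like QA on human calls
            if rail.hard:
                # Hard violations never reach the customer; a person takes over.
                return "Let me connect you with a member of our team."
    return reply

rails = [Guardrail("Never invent refunds or policies.", hard=True)]
print(handle_turn("So sorry for your loss. Here's a free ticket on us!", rails))
print(handle_turn("I'm so sorry. Let me look into our options for you.", rails))
```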
Can I ask a slightly off-tangent one? It makes me think of moderation, when you said about how we determine whether someone went off script and what we do with it. My biggest concern, honestly, is, like it or not, I look at most of the stuff on my Twitter timeline and I'm like, is that real or fake? And I send it to my family and they're like, fake or real?
It's unbelievable, the switch in terms of our questioning the verification of content. And someone said on the show very recently, Arvind Narayanan, who's at Princeton University. I now get to interview professors, very, very intelligent. But my mother's like, really? But he said, you know, the thing is, Harry, it's not that you will believe stuff you see that's not true.
It's that you won't believe anything at all. Do you agree with him? And what are your biggest worries about this next wave?
I really do believe for most of the problems in AI, there are AI solutions to those problems as well. For a lot of content you're looking at, it would be interesting to put it into things like ChatGPT and ask it, is this real? How should I determine if it's real? And you might get some good advice.
As we think about information veracity and authenticity, my hope is that you end up with the sort of white hat and black hat dynamic, and the white hat teams in this, just like in the world of cybersecurity, will give us all the Iron Man suits we need to trust or distrust the information that we see.
So I think, just like with all of these things, that's why I mentioned that Outlook worm.
As these technologies get developed, we collectively end up learning about their ramifications, which is why I believe in responsible, iterative deployment of AI, because I think it's very hard, from an ivory tower, to predict all of the first and second order effects.
But then it is an imperative for us as an industry to develop technologies and mitigations for these different downsides of the technology. But I feel confident we can. I think all the great AI minds are trying to think about how this benefits humanity.
And, you know, for every problem, there's a great entrepreneur or technologist or researcher who I think will come up with ways of meeting that challenge.
Can I ask you a weird one before we move into a quick fire? You're Bret Taylor. When you go out to fundraise, it must be a little bit different now, Bret. How does that work? Do you know what I mean? It's like, so just help me understand. You decide you're going to do Sierra and you're like, ah, I mean, I guess it's kind of a question of like, why fundraise?
But then there's also a question of like, how did you approach that? Now you could raise from anyone.
Well, first, why did I fundraise? I really believe in the importance of boards and having stakeholders, and the accountability of having a board and investors and employees. And I want the employees coming to Sierra to know that Clay and I aren't doing this as a side hustle. We want to build a generational company. And then similarly, I really value advice. I've
been a board member as well as an executive, and I really value the strategic advice I've gotten. So when we started the company, I just called Peter Fenton, who I've worked with twice before. He's the only person I talked to. And he was our first board member. And with our subsequent round, similarly- How long was that conversation?
Yeah, I don't want to disclose private details, but Peter and I have worked together a lot before. It probably could have been even shorter than it was, but I'm not there to transact; I want to talk to Peter about what we're doing and why, and get his advice. So it was the right conversation, because I wasn't there in a transactional capacity.
And I think that the best relationship between investors and entrepreneurs is one where they're your first phone call on a strategic issue. And thankfully, I've known Peter for almost 20 years. So it was pretty clear to me he was the first person I was going to call.
I have a man crush on Peter's brain, so it's totally fine. I remember when I had him on the show first, I was like, wow, he's the most articulate orator I think I've ever had on this show. Listen, I want to do a quick-fire round, Bret. So I say a short statement, you give me your immediate thoughts. Does that sound okay?
Yeah.
So what have you changed your mind on most in the last 12 months?
How quickly the cost of AI will go down, largely thanks to the emergence of distillation and open-source models like Llama.
What is the biggest misconception of the next 10 years of AI?
The focus on hardware and models and not enough focus on the applications of AI. I think many of the defining companies in AI will be delivering consumer and business solutions that happen to be powered by AI, not just the models themselves.
Who's the best board member you sat on a board with and why them?
I don't stack rank these board members, but a board member that I've worked with twice is Fiji Simo. She's the CEO of Instacart. I work with her at Shopify, and she's also on the board of OpenAI. She's one of those folks who's an operator who also knows how to be an effective board member, and a remarkable intellect.
Which VC is the single best picker do you think and why them? It can't be Peter.
I don't know. I don't follow it enough.
This episode is brought to you by Peter Fenton.
I actually honestly don't follow that as much, not because I don't care, but I follow the companies more than the investors.
Can I ask your advice? You've sat on some of the best boards. I sit on boards now. I am a young board member. I want to be the best board member that I can be. Is there any advice that you'd give me having seen many different types of boards and types of entrepreneurs?
The art form as a board member is how to be involved enough without jumping into the operations of the company, and knowing how to give advice in a way that the CEO and the management team actually hear. Finding that balance, creating the cadence with the companies you work with to get the information you need, so you know where you're going to add value and when to call the proverbial bat phone because something's wrong, is the biggest art form. So I would say board members who treat every engagement the same are probably not doing it right, because different executive teams and different CEOs will hear advice in different ways. And the businesses are very different.
So I think it's about treating each one very uniquely and finding an operating cadence where you can get the information you need to actually provide good advice.
What 'yes' that you got was the most important or significant 'yes'?
Probably the most impactful, unexpected point in my career was Mark Zuckerberg making me chief technology officer of Facebook. I'm not sure I was qualified to do that job. He saw something in me that I hadn't obviously seen in myself. I would say that moment kind of changed my own conception of myself, from being sort of an engineer to being able to lead larger teams.
And it was largely because of Mark's faith in me.
What's your favorite story from Facebook?
When the movie The Social Network came out, we rented a movie theater and all watched it together. It's a fine movie, but there was this funny scene where they order appletinis, which is kind of a lame drink, let's be honest. No one orders an appletini and maintains their reputation on the other side of it. So we go out to a bar afterwards, and I walk up to get a beer and the bartender's like,
what the fuck is it with you guys and appletinis? People have been ordering them all night. After the movie, everyone had just ordered appletinis, and he'd run out of whatever that toxic-looking green liqueur is. He was like, what is it with appletinis today? So that's one of my favorites.
I first heard about venture when I was 13, because I was sitting in a cinema in London and I saw the scene with Peter Thiel at Clarium where he invests in the young Zuck. And I was like, oh my God, I want to be a venture investor.
The ironic part of that movie is that, whatever the director was trying to achieve, I've met a lot of entrepreneurs who view it as a source of inspiration, which I'm not sure was the director's goal.
I know. I think the exact same. And also, I think everyone took entrepreneurship away from it, and I was like, VC. Bret, I literally don't think anyone has had the view of leaders that you've had: working alongside Zuck, Benioff, board of Shopify with Tobi, board of OpenAI with Sam. These are the greatest leaders of a generation. What do they have that is non-obvious that makes them great leaders?
One of the things that I've admired the most about the leaders you mentioned, whether it's Larry and Sergey, Marc Benioff, Mark Zuckerberg, or Marissa, whom I worked for at Google, is this sort of relentless drive. Every time you might get comfortable with a situation, they're always looking out towards the horizon. I always found Mark Zuckerberg particularly remarkable at this.
Every time I thought I was thinking long term, whatever Mark was thinking was about 2x farther in the future than I was thinking. And it was so disconcerting and motivating for me. I think when I became chief technology officer of Facebook after they had acquired my social network, I was 29, if I'm remembering correctly. I think it was 2009.
Seeing how his brain worked definitely changed my perspective on what bold leadership meant: taking bets that could have been unpopular or complex in the short term to achieve a long-term goal. And I think you really see that with some of the great entrepreneurs,
this ability to think extremely long term and make decisions accordingly. Especially nowadays, if you're a public company, it's such a challenging cadence to parade yourself out in front of investors every three months. And while investors claim to be long term, very few have the patience they extol on their websites.
It really requires relentless focus on the future. And the other thing that I would say that all of them have, in very different styles, is the ability to communicate that vision to employees and stakeholders, employees probably being the most meaningful when you're running a company and motivating the team.
You really have to tell people what the future will look like, why it will be important and why it's an important thing to pursue and why people need to sort of overcome these short-term challenges. You both have to have the vision and you have to bring the team along with you. And all of them have it in very different styles, but in ways that are incredibly inspiring.
Dude, rock and roll. You're a star. Thank you so much. Thank you. Thank you. I am a student of Silicon Valley, and so having the chance to do that show with Bret was just such a treat for me. If you want to watch the full episode, of course you can, on YouTube by searching for 20VC, that's 2-0-V-C. But before we leave you today, I'd like to introduce you to one of my favorite brands, Attio.
Attio is the next generation of CRM. Setting up Attio takes less than a minute, and within seconds of syncing your email and calendar, you'll see all your relationships in one place, all enriched with valuable data. Attio also lets you build Zapier-style automations, gives you powerful reports, and works perfectly for any go-to-market motion, from PLG to sales-led.
Attio is designed for the next era of companies like yours, and companies like yours shouldn't have to deal with inflexible, one-size-fits-all CRMs. Join industry leaders like ElevenLabs, Replicate, Modal, and more to scale your startup beyond the next level. Head to attio.com.
And talking about incredible companies, I want to talk to you about a new venture fund making waves by taking a very different approach. It's a public venture fund anyone can invest in. Not just institutions and accredited investors. The Fundrise Innovation Fund is democratizing venture capital, which could have big consequences for the industry.
The fund is already off to a good start, with $100 million invested in some of the largest, most in-demand AI and data infrastructure companies. Companies like OpenAI, Anthropic and Databricks. Check out the Innovation Fund's impressive list of investments for yourself by visiting fundrise.com slash 20VC.
Carefully consider the investment material before investing, including objectives, risks, charges and expenses. This and other information can be found in the Innovation Fund's prospectus at fundrise.com slash innovation. This is a paid sponsorship. And finally, let me tell you about UiPath. What do Henry Ford and AI have in common? Neither could change the world without automation.
In the future, there will be two types of businesses: those that have automated and those that wish they had. UiPath's new AI agents don't just follow rules. They think, make decisions,
and work alongside the world's most powerful software robots, already trusted by over 10,000 businesses. If agentic automation sounds new, just think of UiPath as your 'more growth, not more overhead' platform, or your 'happier customers, happier employees' platform. Whatever you want AI to do for your business, agentic automation with UiPath will make it happen.
Try UiPath's new AI agents for free at UiPath.com. The future of automation is both agentic and robotic. Don't get left behind. As always, I so appreciate all your support and stay tuned for an incredible episode coming on Friday.