
How I Invest with David Weisburd

E146: The 92% AI Failure: Unmasking Enterprise's Trillion-Dollar Mistake

14 Mar 2025

Description

In this episode of How I Invest, I sit down with Matt Fitzpatrick, CEO of Invisible Technologies and former head of McKinsey’s QuantumBlack Labs. Matt shares his deep insights on enterprise AI adoption, model fine-tuning, and the challenges businesses face when integrating AI into their workflows. We explore why only 8% of AI models make it to production, how enterprises can overcome friction points, and the future of AI-powered enterprise solutions. If you’re curious about the intersection of AI and business strategy, this episode is a must-listen.

Transcription

Full Episode

0.349 - 8.455 Host

An AI native solution. The framework is not just that AI is replacing what a human is doing, but how would you design the model with AI in mind?

8.955 - 26.247 Host

I think most of the material benefit you're going to see is when you clean-sheet any process: how would I design this process from scratch, knowing all the AI tools I have? And how do I use both technology and humans? By the way, I think the answer is going to involve both for a long, long time. In fact, I think humans are a core part of the solution.

26.267 - 42.311 Host

At Invisible, we believe that's the human-machine interface, where all the value sits. But it's not necessarily just giving all your people on an existing process a tool. It's redesigning the process to use all the tools at your disposal. So let's talk about Invisible. Give me some specifics on how the company is doing today.

42.671 - 56.334 Host

I joined in mid-January. We ended 2024 at $134 million in revenue. Profitable. We were the third fastest-growing AI business in America over the last three years. So how will DeepSeek affect Invisible?

57 - 67.807 Host

The viral story was that it cost $5 million to build the models they did. The latest estimates that have come out since, in the FT and elsewhere, would say it's closer to $1.6 billion. I think the number that's been cited from a compute standpoint is something like 50,000 GPUs.

67.847 - 84.837 Host

So if you had told that exact same story, but with $1.6 billion of compute, I don't even think it would have been a media story. The fact that it cost over a billion dollars to build that model means it is a continuation of the current paradigm. Look, there are some interesting innovations they've had, like mixture of experts, and

85.037 - 100.75 Host

they did some interesting stuff around data storage that does have some benefits in reducing compute costs. But I think those are things we've seen other model builders experiment with already. If I think about types of data, they basically went after things that are base-truth logic, like math, where a fair amount of synthetic data is available.

101.17 - 107.08 Host

That's a fairly small percentage of the overall training tasks that I'd say most model builders are focused on. Tell me more about that.

107.12 - 125.884 Host

Think about training as having three main vectors. First, you have base-truth information, where a lot of synthetic or broad-based internet data exists; math is a really good example of that. Then you have tasks like creative writing, where there is no real AI feedback and no existing synthetic data. There's no way to train those models without human feedback.
