Nathaniel (NLW) Whittemore
We want people building these technologies to have extremely high standards and aspirations.
And this is the main point.
There are two or even three totally different AI conversations happening right now.
Off in its own corner is market AI, which is just its own beast.
But the two that I'm most interested in for our purposes today are the difference between builder AI and applied AI.
Applied AI happens downstream of builder AI and is a radically different, much longer process.
Applied AI is about taking the possibility that was built during builder AI and turning it into value.
It happens at a significant lag to technological progress, especially when it runs up against human and corporate inertia: years and years of calcified process buildup that AI has to slowly undo and change.
And I mean no disrespect at all to Andrej when I say that he does not have any stake in, or any particular insight into, applied AI.
He's not in the trenches seeing how these tools are actually impacting the boring knowledge work that makes up the bulk of what AI will change outside the specific technology field in which he operates.
You can tell, frankly, that he's operating in a different world based on his definitions.
The definition of agents, the agents he says suck, is, he makes clear, full human replacement: something you can hire instead of a person.
That might be a fine definition of a fully realized agent, but it's dismissive not only of discrete automations that take on one workflow, but also of agents that, while not replacing entire jobs, are replacing big sets of tasks.
Maybe those sets of tasks don't add up to human replacement, but human replacement isn't the sole or even necessarily main barometer of AI impact.
The point is that companies aren't investing in AI based on a careful analysis of where it will be in five years.
They're investing based on what it can do now.
Aaron Levie from Box retweeted Andrej's clarification post and said, "This is actually extremely pragmatic and realistic based on what is likely to happen, especially in an enterprise context. We have rapidly improving model capabilities, but the diffusion of these capabilities into real-life workflows will take time and require lots of integration, change management, and new solutions that must be built."
In another post, he wrote, "Having talked to hundreds of IT leaders over the last year alone, it's clear we have a capability overhang where the current AI models are already very good at solving many problems that haven't been adopted yet."