Hi all,
It’s been a minute! Sorry for the hiatus: our team has been seeing so many exciting AI businesses that I haven’t had the time to share my reflections (AKA my ruminations). I’ve carved out some time to get back to my usual fortnightly cadence.
As usual, skip to what’s interesting to you:
Will NVIDIA’s CUDA moat last the LLM wave?
The value of combining a better product with a strong economic model
Content the team has enjoyed
Will NVIDIA’s CUDA moat last the LLM wave?
I’ve been reading a lot about the lower-level elements of the AI race. It’s fascinating and it can teach you a lot about competitive dynamics.
I recently came across his writing and have since torn through a bunch of it. I really like this piece of his on AMD vs NVIDIA. You should read the full piece, but I wrote this partial summary for our team:
CUDA is NVIDIA's software that enables people to program and optimise GPUs for particular problems/workloads.
In 2001 NVIDIA introduced programmable shaders, which enabled people to program GPUs.
Academics were among the first to recognise the potential of the parallelism used in graphics, and they started using GPUs for problems like solving partial differential equations.
Five years later, Stanford PhD Ian Buck developed and released CUDA while at NVIDIA.
After CUDA’s release, NVIDIA began developing programming libraries for a broad set of problems. And it wasn’t only NVIDIA: other developers built libraries and frameworks on top of CUDA too (a toy example of what that looks like in practice follows this summary).
This underpins NVIDIA's software moat today: they have breadth and depth across a long tail of hundreds of different domains and thousands of problem spaces.
Prior to LLMs, this meant NVIDIA's competitors had to compete across that depth and breadth, which would require thousands of programming hours to overcome all of the work the ecosystem had done around CUDA.
I say "prior to LLMs" because some believe the sheer size of the opportunity in LLM compute workloads may provide enough ROI to overcome these network effects.
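To make the "programming GPUs" point concrete, here's a toy sketch of what a CUDA workload looks like when written from Python via Numba, one of the many third-party libraries built on top of CUDA. The example is mine rather than from the piece: a kernel that adds two vectors, with one GPU thread per element.

```python
# A toy CUDA kernel written from Python via Numba (a third-party library built on CUDA).
# Illustrative only: adds two vectors, one GPU thread per element.
import numpy as np
from numba import cuda

@cuda.jit
def add_vectors(a, b, out):
    i = cuda.grid(1)      # this thread's global index across the whole launch
    if i < out.size:      # guard threads that fall past the end of the array
        out[i] = a[i] + b[i]

n = 1_000_000
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)
out = np.zeros_like(a)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
add_vectors[blocks, threads_per_block](a, b, out)  # Numba copies the arrays to and from the GPU for us
```

The point isn't these few lines; it's that kernels like this sit underneath thousands of domain-specific libraries, and that accumulated ecosystem work is the moat.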
There's a great Warren Buffett quote included in the piece:
All moats are subject to attack in a capitalistic system so everybody is going to try. If you got a big castle in there, people are going to try to figure out how to get to it. What we have to decide – and most moats aren’t worth a damn in capitalism, I mean that's the nature of it – but we are trying to figure out what is keeping that castle still standing and what's going to keep it standing or cause it not to be standing in 5, 10, 20 years from now? What are the key factors and how permanent are they? How much do they depend on the genius of the lord in the castle?
If even this summary is too long to read but you want to know more about the CUDA moat, check out this video snippet.
The search engine market versus LLMs
Square Peg co-founder and partner Paul Bassat often shares his own ruminations with the team. Here's one he shared recently (with a toy PageRank sketch from me after his note):
I have been thinking about how the LLM market might evolve and was reflecting on the search engine market.
The first search engines date back to 1990 and there were literally hundreds of search engines before Google arrived from nowhere in 1998.
There were two quite separate reasons why Google became dominant, and I doubt they would have become anywhere near as dominant unless both factors had occurred:
1) It was a much better product.
They invented the PageRank algorithm, which resulted in much more relevant search results. Prior to PageRank, relevance was based on the content of a web page. Search engine results were horrible. Google appeared almost like magic.
PageRank determines relevance based essentially on how many sites link to a particular site and the quality of those linking sites.
2) They built AdWords.
There wasn’t a great economic model for search until AdWords came along.
Interestingly, they copied and improved upon a product developed by GoTo.com (which later became Overture).
The combination of the best product and the best monetisation engine meant that by about 2003 they were dominant in basically every market other than China (Baidu) and Russia (Yandex), and over time their market share exceeded 90%.
What is the relevance of all of that?
It was not pre-determined in any sense that Larry and Sergey would build Google, and the interesting question for me is what would have happened if Google had not come along. No doubt market leadership would have emerged, but it is highly unlikely that anyone would have achieved Google-style dominance. The market would probably have been more fragmented.
Right now there is no Google in LLMs. OpenAI took the initiative but we are in the early days in terms of product superiority. I think it is reasonable to guess that no "Google" might emerge, and I think the base case is to think about the market in that way rather than assuming a Google-type player emerges.
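A quick aside from me on Paul's PageRank point: the core idea fits in a few lines of code. Below is a toy power-iteration sketch I wrote for illustration; it is not Google's actual implementation, which has to deal with dangling pages, spam and a web-scale link graph. Each page's score is built from the scores of the pages linking to it, with each "vote" diluted by how many outbound links the voter has, plus a small damping term.

```python
# Toy PageRank via power iteration, for illustration only.
# links[p] = the pages that p links out to.
links = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}

damping = 0.85
pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}  # start every page with an equal score

for _ in range(50):  # iterate until the scores settle
    new_rank = {}
    for p in pages:
        # sum the votes from every page that links to p,
        # each vote diluted by how many outbound links the voter has
        incoming = sum(rank[q] / len(links[q]) for q in pages if p in links[q])
        new_rank[p] = (1 - damping) / len(pages) + damping * incoming
    rank = new_rank

print(sorted(rank.items(), key=lambda kv: -kv[1]))
```

On this toy graph, "c" comes out on top because the most pages link to it, and "d", which nothing links to, finishes last. Notice that "a" ends up second despite having only one inbound link, because that link comes from the top-ranked page; that's the "quality of those linking sites" part Paul describes.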
What the team’s been reading and listening to
Sequoia Partner David Cahn’s perspective on the next phase of AI
SP Partner Piruze shared this piece by Sequoia titled Steel, Servers and Power: What it Takes to Win the Next Phase of AI with the excerpt:
The race to model parity has been the defining project of the last 12 months in AI.
This phase was characterized by the search for new research techniques, better training data and larger cluster sizes.
The next phase in the AI race is going to look different: it will be defined more by physical construction than by scientific discovery.
Casey note: This emphasis on physical construction in the next phase of the AI race is something I’ve been thinking a lot about. I think Australia could play a special role in this next phase given our rich resources and energy/mining expertise, if our politicians realise it in time. My theory is that a lot of politicians are scared to talk about AI because it’s associated with job loss, but there could be an incredible job-creation story if we focussed on providing the lower levels of the stack (minerals, rare earth elements, energy).
Sequoia Partner David Cahn interviewed on 20VC
In response to Piruze, SP Principal Jethro recommended this 20VC episode, which covers much of the same ground:
Adept Founder David Luan interviewed on 20VC
SP Principal Lucy recommended another 20VC episode with David Luan:
She wrote the following to our team:
This podcast with David Luan (founder of Adept, ex-VP of Engineering at OpenAI) is a must-listen. There are too many takeaways to list, but I'll jot down the main ones and how they impact the way I think about investing in this area:
In the AI agent space, he thinks the only way to make agents reliable enough to work for a use case is to vertically integrate, i.e. you own the end-user interface plus the foundation model that enables the agents to work.
The big way to disrupt incumbents is to disrupt their business model. With AI, he believes AI agents will disrupt robotic process automation ("RPA") businesses. Today when you implement RPA, you have to hire consultants to map out your processes, identify the ones that are high volume and repetitive, then get RPA engineers to code it. This takes 6-9 months. With AI agents, you put one in your environment, get it to observe what the worker is doing and then invoke automation with natural language.
Casey note: unfortunately many people have had this same idea. It’s one of the most common AI-native ideas we get pitched. I’ve written about it here.
This is not an insight specific to the AI platform shift; I genuinely believe it's the case for all platform shifts, and it makes me bullish on startups' ability to win against incumbents at the application layer. AI application startups that find a novel business model which incumbents have strong economic incentives not to copy - that's what I want to look for.
In terms of whether AI spend sits in core budgets or experimentation budgets, he believes we are still in the experimentation phase and will be for a while. It is use-case dependent, and some companies are definitely finding PMF within enterprise, but think about how slowly enterprises move. It's 2024 and there are still enterprises running on-prem servers. "We'll be on this adoption curve for enterprise AI for a very long time."
I don't think that discourages us from investing - we just have to go in with eyes wide open about the nature of the ARR and really understand: 1) who is the buyer? (are they just an innovation team?) and 2) what is the quality of that ARR? (repeatability, durability etc.)
While we are early in the emergence of AI applications, a HUGE amount of value will be created here and we shouldn't necessarily be afraid of services in this phase. Right now, you're an enterprise and you need capability X. Then over there, you have a base model that's pretty smart. In between, there's a massive gulf that will, in the near term, be filled with service providers, e.g. AI consultants, agencies etc. Then you start seeing that a use case is really useful for enterprise. And then people will just go and productise that thing.
Previously James wrote a page about AI-enabled service businesses and how that's a viable business model now. I'd go one more step and say it is actually a necessary bridge right now to being able to productise. We're so early in use-case discovery that unless you hold enterprises' hands through this and get deep into their business, you won't know what to productise.
I will leave it there - this is plenty long!
Thanks,
Casey
"In between, there's a massive gulf that will, in the near term, be filled with service providers e.g. AI consultants, agencies etc. Then you start seeing that a use case is really useful for enterprise. And then people will just go and productise that thing."
I agree, and this is exactly what we have been doing at Time Under Tension. My add to this is that the "people" doing the productisation will include these AI consultants, as they are well placed to see common challenges across clients and have the skills to build products that address them. They will disrupt their own services model.