Additionally, we introduced a new LLM (large language model) developer stack that reflects the compute and tool vendors that companies rely on to build generative AI applications in production.
Re-examining our thesis
Our original paper proposed a thesis for the generative AI market opportunity and a hypothesis for how the market might develop. How did we do?
Things are moving fast. Last year, we estimated it would be almost a decade before we had intern-level code generation, Hollywood-quality videos, or human speech that didn’t sound robotic. But a quick listen to Eleven Labs’ voices on TikTok or a look at Runway’s AI Film Festival makes it clear the future has arrived ahead of schedule. Even 3D models, games, and music are rapidly getting good.
The supply side was the bottleneck. We did not anticipate that end-user demand would outstrip Nvidia’s GPU supply. For many companies, the constraint on growth soon became not customer demand but access to the latest GPUs. Long waits became the norm, and a simple business model emerged: pay a subscription fee to skip the queue and get access to a better model.
Vertical separation has not yet occurred. We still believe there will be a separation between “application layer” companies and the underlying model providers, with model companies specializing in scale and research and application-layer companies specializing in product and UI. In practice, this separation has not yet clearly emerged. In fact, the first successful user-facing applications have been vertically integrated.