Talking to a few investors about raising for Clio AI, I had this sudden realization: currently most (if not all) AI investments are correlated. Investment funds are operating under the following assumptions:

  • Real value lies exclusively in an application layer that is independent of any model layer. These app-layer products get access to the best commercial models and are best positioned to disrupt the existing market.
  • Commercial model providers would compete to sell tokens at the lowest price, and the margins would go to the wrappers.
  • The market for wrappers will be huge because they replace both resources and software costs.
  • Building out any training infra or capacity is a waste of time and resources. This holds true for both pretraining and post-training capacity.
  • Generic models + wrappers would satisfy all existing demand, including demand in sectors like finance and healthcare, where privacy and data security are huge issues.

This is a great thesis, but it rests on a couple of key assumptions that are not very obvious. One, there is no significant technical breakthrough that gives model builders a significant advantage at the app layer. For example, if continuous learning finally works, you don't need wrappers or the work they have done. Two, and the more important one, customizations like fine-tuning or training a custom model offer negligible advantage compared to generic off-the-shelf models. This no longer holds true in 2025. Reasoning models, which use reinforcement learning in post-training, offer a great advantage to model builders. So much so that OpenAI was confident enough to offer a $200 plan. Beyond reasoning, OpenAI's Operator and Claude's computer use both take advantage of post-training work to save on all the effort it takes to make a typical wrapper work, and they would likely exceed the performance of a typical wrapper.

What follows is the realization that most of the investments made under this thesis are highly correlated. Instead of multiple diversified bets, it increasingly looks like one unintended, adventurous bet against breakthroughs and significant technical advances in a fairly nascent industry with a lot of $$ and researchers actively at work.

What’s next then?

Code and search: two use cases where clear product-market fit and heavy use over the last few years made them easy to build for. The next use case coming is drug discovery. Google recently launched a model, OpenAI has made noise about it before, and RAG/wrappers just do not work in this space. Typically, these industries are not as mature when it comes to AI adoption, and they would need a dedicated team of model builders working with the specific companies. The atomic unit? We might see a customized model as a product for each company in the next few years.