We are at a unique moment for AI companies building their own foundation models.
First, there’s a whole generation of industry veterans who made their names at big tech companies and are now going it alone. You also have legendary researchers with vast experience but vague commercial aspirations. There is a clear possibility that at least some of these new labs will become OpenAI-sized behemoths, but there is room for them to do interesting research without worrying too much about commercialization.
The end result? It’s hard to tell who is actually trying to make money.
To simplify things, I propose a sort of sliding scale for any company building a foundation model. It’s a five-level scale where it doesn’t matter if you’re actually making money – only if you’re trying. The idea here is to measure ambition, not success.
Think of it in these terms:
- Level 5: We are already making millions of dollars every day, thank you very much.
- Level 4: We have a detailed multi-phase plan to become the richest man on earth.
- Level 3: We have many promising product ideas, which will be revealed in the fullness of time.
- Level 2: We have a plan to outline an idea.
- Level 1: True wealth is when you love yourself.
The big names are all at Level 5: OpenAI, Anthropic, Gemini, etc. The scale becomes more interesting with the new generation of labs now being launched, with big dreams but ambitions that can be difficult to read.
Importantly, the people involved in these labs can usually choose the level they want. There is so much money in AI right now that no one is going to question their business plan. Even if the lab is just a research project, investors will be happy to be involved. And if you’re not particularly motivated to become a billionaire, you’ll probably live a happier life at Level 2 than at Level 5.
Problems arise because it’s not always clear where an AI lab lands on the scale — and much of the current drama in the AI industry comes from that confusion. Much of the concern over OpenAI’s transition from a nonprofit came because the lab spent a few years at Level 1, then jumped to Level 5 almost overnight. On the other hand, you could argue that Meta’s initial AI research was firmly at Level 2, when the company was really seeking Level 4.
With that in mind, here’s a quick overview of four prominent new AI labs and how they measure up on the scale.
Humans&
Humans& was this week’s big AI news, and part of the motivation for coming up with this whole scale. The founders have a compelling pitch for the next generation of AI models, with scaling laws giving way to an emphasis on communication and coordination tools.
But for all the glowing press, Humans& is tight-lipped about how that will translate into actual monetizable products. The team does seem to want to build products; it just won’t commit to anything specific. At most, they have said they will build some kind of AI workplace tool, one that replaces products like Slack, Jira, and Google Docs while redefining how those tools work at a fundamental level. Software for work, but post-software!
It’s my job to know what this stuff means, and I’m still pretty confused about that last part. But it’s specific enough that I think we can put them at level 3.
Thinking Machines Lab
This one is a tough call! Generally, if a company founded by OpenAI’s former CTO and project lead for ChatGPT raises a $2 billion seed round, you have to assume there’s a pretty specific roadmap. Mira Murati doesn’t strike me as someone who jumps in without a plan, so heading into 2026, I was inclined to put TML at Level 4.
But then came the events of the last two weeks. The departure of CTO and co-founder Barret Zoph has gotten most of the headlines, in part because of the unusual circumstances, but at least five other employees have left alongside Zoph, many expressing concern about the company’s direction. Barely a year in, nearly half of the executives from TML’s founding team no longer work there. One way to read the events is that they thought they had a solid plan to become a world-class AI lab, only to discover the plan wasn’t as solid as they thought. Or, in terms of the scale, they wanted to be a Level 4 lab but realized they were at Level 2 or 3.
There’s still not enough evidence to justify a downgrade, but it’s getting closer.
World Labs
Fei-Fei Li is one of the most respected names in AI research, best known for founding the ImageNet challenge, which kickstarted contemporary deep learning techniques. She currently holds a Sequoia-endowed chair at Stanford, where she co-directs two different AI labs. I won’t bore you with the various honors and academy positions, but suffice it to say that if she wanted to, she could spend the rest of her life just collecting awards and being told how great she is. Her book is very good, too!
So in 2024, when Li announced she had raised $230 million for a spatial AI company called World Labs, you might have assumed the company was operating at Level 2 or below.
But that was over a year ago, which is a long time in the AI world. Since then, World Labs has shipped both a full world-generation model and a commercial product built on it. Over the same period, real signs of demand for world models have emerged from both the video game and special effects industries, and none of the big labs has produced anything that can compete. The result looks an awful lot like a Level 4 company, perhaps one graduating to Level 5 soon.
Safe Superintelligence (SSI)
Founded by former OpenAI chief scientist Ilya Sutskever, Safe Superintelligence (or SSI) seems like a classic example of a Level 1 startup. Sutskever has gone to great lengths to insulate SSI from commercial pressures, including turning down an attempted acquisition by Meta. There is no product roadmap and, with the exception of the still-baking superintelligent foundation model, there doesn’t seem to be any product at all. With that pitch, he raised $3 billion! Sutskever has always been more interested in the science of AI than the business, and every indication is that this is a truly scientific project at heart.
That said, the AI world moves fast, and it would be foolish to count SSI out of the commercial realm entirely. In his recent appearance on the Dwarkesh podcast, Sutskever gave two reasons why SSI might pivot: either “if the timeline gets longer, which it could,” or because “there’s a lot of value in having the best and most powerful AI” to influence the world. In other words, if the research goes either very well or very badly, we may see SSI jump up a few levels in a hurry.