Twenty years ago, David R. Smith, a professor at Duke University, used artificial composite materials called "metamaterials" to create a real-life cloak of invisibility. Although the cloak didn't work quite as well as Harry Potter's, showing only a limited ability to hide objects from light at microwave wavelengths, those advances in materials science eventually fed into broader electromagnetics research.

Today, Neurophos, an Austin-based photonics startup spun out of Duke University and Metacept (an incubator run by Smith), is taking that research further to solve what may be the biggest problem facing AI labs and hyperscalers: how to scale computing power while keeping power consumption in check.

The startup has come up with "metasurface modulators" whose optical properties let them function as a tensor core processor performing matrix-vector multiplication, the math at the heart of much AI work (especially inference) that is currently handled by specialized GPUs and TPUs built from traditional silicon gates and transistors. By putting thousands of these modulators on a chip, Neurophos claims, its "optical processing unit" is significantly faster and far more efficient at inference (running trained models, which can be a fairly expensive task) than the silicon GPUs currently widely used in AI data centers.
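For readers unfamiliar with the operation at stake: matrix-vector multiplication is what a single neural-network layer computes during inference. A minimal pure-Python sketch (illustrative only; real accelerators, and Neurophos's proposed optical chip, perform thousands of these multiply-accumulate operations in parallel):

```python
def matvec(W, x):
    """Multiply a weight matrix W (a list of rows) by an input vector x.

    Each output element is a dot product: a run of multiply-accumulate
    operations. Inference on a trained model is dominated by exactly
    this pattern, repeated layer after layer.
    """
    return [sum(w_ij * x_j for w_ij, x_j in zip(row, x)) for row in W]

# A tiny 2x3 weight matrix applied to a 3-element activation vector:
W = [[1.0, 2.0, 3.0],
     [4.0, 5.0, 6.0]]
x = [1.0, 0.0, -1.0]
print(matvec(W, x))  # [-2.0, -2.0]
```

The speed of an AI accelerator, silicon or optical, largely comes down to how many of these multiply-accumulates it can perform per second per watt.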

To finance the development of its chips, Neurophos raised $110 million in a Series A round led by Gates Frontier (Bill Gates’ venture firm) with participation from Microsoft’s M12, Carbon Direct, Aramco Ventures, Bosch Ventures, Tectonic Ventures, Space Capital and others.

Photonic chips are nothing new. In theory, they offer higher performance than traditional silicon because light generates less heat than electricity, travels faster, and is far less sensitive to changes in temperature and electromagnetic fields.

But optical components tend to be much larger than their silicon counterparts and can be difficult to mass-produce. Photonic chips also require converters to move data between the digital and analog domains, and those converters can be large and power-hungry.

Neurophos, however, thinks the metasurface it has developed can solve all of those problems in one fell swoop, because it is about "10,000 times" smaller than traditional optical transistors. That small size, the startup claims, lets it fit thousands of units on a single chip, yielding greater efficiency than traditional silicon because the chip can perform many more calculations at once.


“When you shrink the optical transistor, you can do more math in the optics domain before you make that transition into the electronics domain,” Dr. Patrick Bowen, CEO and co-founder of Neurophos, told TechCrunch. “If you want to go faster, you have to solve the energy efficiency problem first. Because if you’re going to take a chip and make it 100 times faster, it’s going to burn 100 times more energy. So you get the privilege of going faster after you solve the energy efficiency problem.”

The result, Neurophos claims, is an optical processing unit that can outperform Nvidia’s B200 AI GPU. The startup says its chip can run at 56 GHz, achieving a maximum of 235 peta operations per second (POPS) while consuming 675 watts, compared to the B200, which delivers 9 POPS at 1,000 watts.
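Taking both sets of figures at face value (these are vendor-supplied peak numbers, not independent benchmarks), the implied gap works out as follows:

```python
# Peak throughput (peta-ops/s) and power (watts) as quoted in the article.
neurophos_pops, neurophos_watts = 235, 675
b200_pops, b200_watts = 9, 1000

# Throughput per watt, expressed in tera-ops per second per watt.
neurophos_tops_per_w = neurophos_pops * 1000 / neurophos_watts  # ~348 TOPS/W
b200_tops_per_w = b200_pops * 1000 / b200_watts                 # 9 TOPS/W

print(f"raw speed ratio:  {neurophos_pops / b200_pops:.1f}x")               # ~26.1x
print(f"efficiency ratio: {neurophos_tops_per_w / b200_tops_per_w:.1f}x")   # ~38.7x
```

On those numbers, the chip would be roughly 26 times faster and nearly 39 times more power-efficient than the B200's quoted peak.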

Bowen said Neurophos has already signed up multiple customers, and that companies including Microsoft are “looking very closely” at the startup’s products.

Still, the startup is entering a crowded market dominated by Nvidia, the world’s most valuable public company, whose products have underpinned more or less the entire AI boom. There are also other companies working in photonics, though some, like Lightmatter, have pivoted to focus on interconnects. And Neurophos doesn’t expect to have its first chips on the market until mid-2028, still years away.

But Bowen is confident that the performance and efficiency advantages of its metasurfaces will prove enough of a moat.

“What everyone else is doing, and that includes Nvidia, in terms of the basic physics of silicon, it’s really evolutionary rather than revolutionary, and it’s tied to TSMC’s progress. If you look at TSMC’s improvements in nodes, on average, they’re about 15% improvement in power efficiency,” he said.

“Even if we factor in Nvidia’s improvements in architecture over the years, by 2028 we still have a massive advantage over everyone else in the market, because we’re starting at 50x Blackwell in both power efficiency and raw speed.”
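Bowen's arithmetic can be sketched out. If each new TSMC node delivers roughly 15% better power efficiency, as he says, then even several node transitions compound to a gain far smaller than the 50x head start he claims (the number of node transitions by 2028 is an assumption here, chosen purely for illustration):

```python
# ~15% power-efficiency gain per TSMC node transition, per Bowen's quote.
gain_per_node = 1.15

# Assume, for illustration, three node transitions between now and 2028.
nodes = 3
compounded = gain_per_node ** nodes
print(f"~{compounded:.2f}x efficiency from silicon scaling alone")  # ~1.52x
```

Even doubling the assumed number of transitions would leave silicon scaling alone at roughly 2.3x, well short of the claimed 50x gap.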

And to solve the manufacturing problems optical chips have traditionally faced, Neurophos says its chips can be made with standard silicon foundry materials, tools and processes.

The new funding will be used to develop the company’s first integrated photonic compute system with datacenter-ready OPU modules, a complete software stack and early-access developer hardware. The company is also opening a San Francisco engineering site and expanding its headquarters in Austin, Texas.

“Modern AI demands enormous amounts of inference power and computation,” Dr. Mark Tremblay, corporate vice president and technical fellow for Microsoft’s Core AI infrastructure, said in a statement. “We need a breakthrough in computation to match the leaps we’ve seen in AI models, which is what Neurophos’ technology and talent-dense team is developing.”


