Happy Cerebras IPO eve! Cerebras (CBRS) represents the first major AI infrastructure IPO since CoreWeave (CRWV). I'd like to say it will be a good gauge of investor appetite for AI, but honestly, the explosive broader move in SMH, the semiconductor ETF, is more than enough to confirm that there's plenty of appetite right now.

This is actually even WORSE than it looks now, as Figure, Hinge, Bullish, Figma, and Netskope all went negative in 2026. That leaves only two winners total, CRWV and CRCL, a hit rate of 10%.

2025 was a somewhat abysmal year for IPOs, but that doesn't bother me: most of the underperforming names of yesteryear came from the prior generation of private companies–overhyped SaaS, crypto, and fintech with no real bearing on AI sentiment. Let's hope for better things to come in 2026.

Let’s talk about Cerebras, the company.

They’re an AI chip startup, founded in 2015, that’s now known for building the “Wafer Scale Engine” (WSE) — essentially a dinner-plate-sized AI chip made from an entire silicon wafer instead of many smaller chips stitched together like Nvidia GPUs. Their core bet is that modern AI systems are bottlenecked less by raw compute and more by moving data between GPUs and memory. Cerebras tries to solve this by putting an enormous amount of SRAM and compute cores physically together on one giant chip, massively reducing latency and interconnect overhead. For years Cerebras was viewed as a fascinating but niche “anti-Nvidia” moonshot in AI hardware. But the explosion of generative AI suddenly made their architecture much more relevant, especially for inference.

Cerebras originally filed for an IPO in 2024, but the offering was delayed while U.S. regulators dug into one of their large foreign holders in Abu Dhabi. The delay ended up working in their favor: they're now going to debut into a market with much, much greater heat for their particular product, the WSE.

To give you an idea of the opportunities in private markets, Cerebras was widely available in the $30-50 range just 6-12 months ago on accessible secondary marketplaces such as EquityZen and Hiive. That's roughly a 4-6x at the $185 underwriting price, probably more if it gaps up at the open. You don't have to chase ultra-scarce liquidity in Anthropic at any price. You can just, y'know, research stuff that's actually available, have conviction, and get rewarded.

They probably made about $500 million in revenue last year (estimate). Assuming an even $50 billion valuation at IPO tomorrow, that means it’s trading at 100x 2025 revenue. Big multiple–that’s hot.

To understand the Cerebras bull thesis and why it's such a hot IPO, you have to understand a couple of things:

  1. The role of memory in AI computing chips–specifically SRAM and HBM
  2. The value of inference in the future

BUT FIRST… a little about its former rival

Most of what I've learned about these chip upstarts actually comes from trying to buy into a former rival of Cerebras, another chip company called Groq. I attempted to secure some shares of Groq in the $7-8 billion range in November 2025. My SPV manager, who believed Groq had 10x upside, had secured a co-invest spot with an investor on the cap table, along with an option to add more in a follow-on. The SPV manager and the counterparty agreed on a deal, but approval for the follow-on shares sat with the board for a couple of weeks. Then boom–on Christmas Eve–Nvidia announced that they had acquired Groq. Our deal evaporated into the ether and the SPV manager promptly returned all of our money. Too bad–it could've been a nice quick win for yours truly.

At its core, Groq was the same bet as Cerebras. Groq was founded in 2016 by Jonathan Ross, one of the key engineers behind Google's original TPU project. Ross had concluded that GPUs were too inefficient for the next era of AI, where inference would matter more. Groq's flagship product is the "LPU" (Language Processing Unit), a custom AI inference chip optimized for ultra-low latency and predictable token-generation speed. For years, Groq struggled to get the manufacturing scale and supply-chain access needed to seriously compete with Nvidia. Like many AI chip startups, Groq found that designing the chip was only half the battle. The harder problem was securing advanced foundry capacity, packaging, networking components, and enough capital to manufacture at meaningful scale. So even with a great product, the road to a self-sustaining business would remain extremely difficult–which might explain why they simply decided to join the king itself.

Back to memory

So let me try to explain "the memory wall" as a layman. This is what I learned when I was trying to understand Groq's LPUs and their value-add to the AI boom.

Training is what you do to build the model. You feed it massive amounts of data and the system gradually adjusts itself over time. Speed isn’t as paramount because the model can just train continuously while you sleep.

Inference is the output from a trained model. It accesses enormous amounts of memory to generate a response for the user in real time. You want speed because you’re sitting there typing into ChatGPT and waiting for an answer to something like: “What’s a good Mother’s Day gift when my wife says she doesn’t want anything?”
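To put rough numbers on why per-token speed matters (the tokens-per-second figures here are illustrative assumptions, not anyone's benchmarks), here's a quick back-of-envelope sketch in Python:

```python
# How long does a user wait for a chatbot answer?
# Tokens-per-second figures below are illustrative assumptions,
# not measured numbers from any vendor.

answer_tokens = 500  # a typical few-paragraph reply

for name, tokens_per_sec in [("slow decode", 30), ("fast decode", 1000)]:
    wait_seconds = answer_tokens / tokens_per_sec
    print(f"{name}: {tokens_per_sec} tok/s -> wait ~{wait_seconds:.1f}s")

# slow decode: 30 tok/s -> wait ~16.7s
# fast decode: 1000 tok/s -> wait ~0.5s
```

Seventeen seconds versus half a second is the difference between staring at a spinner and "feels instant."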

NVDA's GPUs are amazing for training but imperfect for inference. That's because they rely on HBM (high-bandwidth memory), which sits off the GPU die itself, so every memory access has to travel on and off the chip. That adds latency. It's like having to go into the basement to get an ingredient while cooking.

Cerebras and Groq rely instead on SRAM, memory built into the chip itself. That's like grabbing the ingredient from the cupboard right in front of you–it's just faster.
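The cupboard-versus-basement analogy cashes out as simple arithmetic. When a model generates text one token at a time for a single user, each new token requires streaming roughly all of the model's weights through the processor, so memory bandwidth, not compute, caps tokens per second. Here's a minimal sketch, using rough public spec figures as assumptions (a high-end GPU's HBM is on the order of 3 TB/s; Cerebras quotes on the order of 21 PB/s of aggregate on-wafer SRAM bandwidth), and ignoring batching, sharding, and whether the weights even fit on one device:

```python
# Memory-wall back-of-envelope for batch-1 LLM decoding.
# Each generated token reads ~all model weights once, so:
#   max tokens/sec ~= memory bandwidth / model size in bytes
# Bandwidth numbers are rough public-spec assumptions, not measurements.

model_params = 70e9            # a 70B-parameter model
bytes_per_param = 2            # fp16/bf16 weights
model_bytes = model_params * bytes_per_param  # 140 GB

systems = {
    "GPU HBM (~3.3 TB/s)": 3.3e12,
    "on-wafer SRAM (~21 PB/s, vendor figure)": 21e15,
}

for name, bandwidth_bytes_per_sec in systems.items():
    ceiling = bandwidth_bytes_per_sec / model_bytes
    print(f"{name}: ~{ceiling:,.0f} tokens/sec ceiling")

# GPU HBM (~3.3 TB/s): ~24 tokens/sec ceiling
# on-wafer SRAM (~21 PB/s, vendor figure): ~150,000 tokens/sec ceiling
```

Real deployments batch requests and spread models across many chips, so nobody sees numbers exactly like these, but the first-order point survives: put the memory on the chip and the speed ceiling jumps by orders of magnitude.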

So you have the big players and the upstarts in this high-stakes AI chip game. And then the biggest player, Nvidia, ate one of the upstarts in Groq–which, not coincidentally, is when Cerebras' private-market valuation started to skyrocket.

[Chart: Cerebras private-market valuation over time. Source: Notice.co]

The acquisition became a validation of the thesis. Cerebras, for now, will be the only pure-play AI chip moonshot bet on the public market. A future where they legitimately compete with NVDA–currently a $5 trillion company–represents TREMENDOUS upside.

Get the Tokens Out NOW!

The inference market is projected to grow to $250 billion by 2030.

[Chart: AI inference market projected to reach $250 billion by 2030. Source: Markets and Markets]

Why do we even need ultra-low latency inference? Can I just wait a couple extra seconds for GPT to spit out: flowers and chocolates?

Well, the future of AI isn't just "chatbot, fill in the blank" anymore. There's going to be voice AI that needs to talk to you without lag. There are going to be robots that need to react in real time. There will be millions and millions of agents that may need speed to do whatever it is they're doing.

So faster inference… it's a big conversation of the future, and Cerebras might be part of the solution.

I know, I know. What good does any of this information do for you, the common day trader? Probably nothing. It is merely an exercise for me to convey some of the knowledge that I’ve gained in the past few months and I’m passing it on to you, for whatever it’s worth.

You want Pete the ex-daytrader’s opinion? Eh, you really shouldn’t ask that washed up bum what he thinks but okay, if you insist…

$200 or lower: buy. (This was posted assuming a $150-160 pricing range; they said $185, so now you have to think low 200s, i.e. $210-220.)

$300: wait

$400? Short it for a daytrade. (Or don't short, ever, because you're not a suicidal maniac like some of us.)

Now go send in orders for the open print and grind that order flow, you degenerate lowlife. (Disclaimer: this is not advice, just throwing out numbers for fun. Don't lose money because of what I wrote.)

Don't be this guy tomorrow. Trade carefully.
