Marvell pops on report it will help Google with custom AI chips. Broadcom shares sink

Marvell stock popped on Monday following reports that the firm is helping Google design two new AI chips.

Top competitor Broadcom fell nearly 2% Monday, although the Google-Broadcom partnership remains strong.

Marvell also saw a $2 billion investment from Nvidia in March, as AI demand continues to surge.

Shares of Marvell Technology gained nearly 6% on Monday amid reports that Google will tap the chip design firm for two new chips to power artificial intelligence workloads.

Until now, Google has relied on Marvell rival Broadcom for the design of its in-house Tensor Processing Units, or TPUs. Broadcom shares fell nearly 2% Monday following the report by The Information.

The potential deal between Google and Marvell could include a memory processing unit along with a TPU, The Information reported on Sunday. Google and Marvell did not immediately reply to requests for comment.

Both Marvell and Broadcom help their customers translate chip designs into silicon, providing back-end support before the processors are sent off to be manufactured at huge fabrication plants by companies like Taiwan Semiconductor Manufacturing Company.

It’s a role that’s fueled the growth of both Marvell and Broadcom as more tech giants design in-house accelerators for AI.

Amid that rush to build enough silicon to power AI, it’s no surprise to see Google diversify its chip deals beyond Broadcom. The Google-Broadcom partnership is alive and well, having just been extended through 2031 in an expanded deal announced earlier this month.

Meta last week also struck a significant deal with Broadcom, committing to deploy 1 gigawatt of its own custom MTIA chips, which it develops with Broadcom.

Marvell stock gained more than 20% in March as the firm posted strong fourth-quarter earnings and guidance amid surging demand for AI. Shares have continued to soar in April, up nearly 50% so far.

Nvidia also announced a $2 billion investment in Marvell in March. The deal makes it easier for Nvidia customers to access the application-specific integrated circuits, or ASICs, being made by hyperscalers like Google.

Google was the first hyperscaler to begin developing its own custom ASIC to accelerate AI workloads, releasing its initial TPU in 2015. Giants like Amazon, Meta, Microsoft and OpenAI all followed suit, as Big Tech scrambles for enough compute and lower-cost alternatives to Nvidia’s AI chips.

Google released its latest seventh-generation “Ironwood” TPU in November, and may release its next chips at its annual AI conference, Google Cloud Next, later this week.

Originally designed for internal workloads, Google’s custom chip has been available to cloud customers since 2018. Meta, Anthropic and Apple all now use TPUs, as Google increasingly encroaches on a market dominated by Nvidia’s graphics processing units.

Memory has been one of several bottlenecks facing AI chipmakers in recent months, with a shortage of supply from memory makers like Micron, SK Hynix and Samsung.

CNBC’s Kristina Partsinevelos contributed to this report.

Watch: Inside Google’s chip lab, where it makes custom silicon to train Gemini and Apple AI models
