Marvell Pursues AI Silicon Opportunities At Hyperscale

By Patrick Moorhead - May 21, 2024
[Photo: Marvell Technology headquarters in Santa Clara, California. Getty]

Virtually every technology company has jumped into the AI race, and estimates of the AI market’s size seem to grow with each newly published forecast. Case in point: With the launch of AMD’s MI300-series accelerators in December 2023, chief executive Lisa Su predicted that the market for AI silicon could reach an astounding $400 billion by 2027. More recently, silicon giant Broadcom announced several silicon innovations aimed at enabling hyperscale datacenters to deliver new AI experiences to consumers. Unquestionably, AI has stimulated the silicon market, perhaps like never before, and with good reason. Training and inference are unique workloads with specific needs, and this has enabled many companies, both old and new, to play the role of disruptor.

This past week, it was Marvell Technology’s turn to unveil its custom silicon, also aimed at the hyperscale market. What exactly did Marvell recently announce, and how will it play out in the market? In this article, I weigh in with my own insights, with contributions from Moor Insights & Strategy principal analysts Matt Kimball and Will Townsend.

AI Market Needs And Revenue Opportunity

While Su’s forecast of $400 billion by 2027 may seem aggressive, many observers are aligned with her thinking. In fact, Marvell believes that cumulative spending on AI technology in data centers over the next five years could total $2 trillion. Within this total, Marvell estimates that its total addressable market for the products it develops for AI—switches, optical interconnects, custom processors and other devices—could hit $75 billion per year by 2028. That’s only a portion of the total spend above, but it’s also more than three times the $21 billion spent on those same markets in 2023.
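As a quick sanity check, the growth rate implied by those two endpoints can be computed directly. The dollar figures ($21 billion in 2023, $75 billion in 2028) are Marvell's; the arithmetic below is just an illustrative back-of-the-envelope calculation:

```python
def implied_cagr(start_value: float, end_value: float, years: int) -> float:
    """Annualized growth rate that turns start_value into end_value over `years`."""
    return (end_value / start_value) ** (1 / years) - 1

# Marvell's stated TAM path: $21B (2023) -> $75B (2028)
rate = implied_cagr(21e9, 75e9, 2028 - 2023)
print(f"Implied CAGR: {rate:.1%}")  # prints "Implied CAGR: 29.0%"
```

In other words, tripling-plus in five years works out to roughly 29% compound annual growth, which is in the same range as the slide-deck projections discussed later in the article.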

The three elements of AI – store, move and compute – are equally important parts of the equation. In combination, they deliver best-of-breed AI platforms, a strategic goal of every IT and datacenter architect. The equation changes, however, when deploying AI at scale, let alone at hyperscale. In these datacenter environments, distinct requirements call for distinct solutions. This is not simply about high-performing GPUs and CPUs connected to storage over high-speed networking. Hyperscale datacenters deploying AI require compute, storage and networking not simply to be connected, but to be deeply integrated at the lowest levels to achieve the speed and latency required as user demand increases. Furthermore, every cloud is unique, and each requires purpose-built silicon to maximize performance.

With these needs in mind, Marvell is in a unique position to deliver what enterprises require in next-generation AI infrastructure. As a result, the company aims to double its market share over the long term. While this may seem to be an aggressive goal, it is certainly achievable. Demand is exploding, as evidenced by the rapid adoption of generative AI, and the hyperscale market will be ill-prepared to support this gold rush without next-generation silicon to power the underlying infrastructure.

Network Support And Acceleration

Marvell is unquestionably a leader in networking silicon, offering a comprehensive range of interconnect technologies for data center applications and workloads, including optical modules, DSPs, optical drivers, TIAs, silicon photonics, switches, adapters and controllers. The company’s investment and innovation in networking continues to enable reliable high-bandwidth, low-power connections within and between data centers, as well as provide a solid foundation for supporting emerging AI fabrics.


Regarding the latter, Nvidia supplies a majority of the interconnect solutions used for supercomputing and next-generation AI workloads with its InfiniBand offering. However, Ethernet is quickly emerging as a viable alternative, providing customers with a strong balance of performance and affordability. (I talked at length about InfiniBand versus Ethernet in this Broadcom analysis.) On the Ethernet front, Marvell has partnered with Nvidia, supplying the emerging market leader in AI platforms with Marvell DSPs. Bottom line: interconnect technologies are a big deal. GPUs must be networked together, and the underlying hardware requires robust connectivity to the clouds that host large language models to deliver high performance for generative AI applications.

Ethernet switching and optical components also play an important role in facilitating next-generation AI workload processing. On the Ethernet switching front, Marvell offers its Teralynx programmable switch family. For cloud data centers that require massive bandwidth for scale-out, Teralynx provides capacities up to an astounding 51.2 Tbps. It is worth highlighting that Marvell’s development of a common switch architecture has the potential to reduce development costs and accelerate time to market for infrastructure providers that leverage the company’s switching silicon. Marvell also offers a wide range of Ethernet adapters and controllers, as well as network interface cards customized to unique customer requirements.

Finally, most connectivity links beyond 5 meters are now optical, but one of the biggest challenges is their high power consumption. Many companies, including Marvell, are providing TRO DSPs that reduce power consumption by eliminating overhead. In this way, advances in silicon can help address concerns about generative AI’s power-hungry nature.

Custom Arm Cores In The XPU

While Marvell is known for many things, its custom silicon solutions for general-purpose compute and AI performance are perhaps less well known, yet they are strategically critical. Building these specialized accelerators is a win for the company because it can leverage its expertise in advanced process nodes, packaging technologies and, of course, core design.

The foundation of Marvell’s custom group came through the acquisitions of Cavium, which developed one of the first Arm server chips, and Avera, which was once IBM’s custom group. All combined, Marvell claims that its custom group has designed more than 2,000 devices.

[Slide: Aligning compute to the needs of the market. Source: Marvell]

As noted above, hyperscale environments require highly tuned silicon that meets the needs of specific customers and functions. Off-the-shelf processors and accelerators simply don’t fit the bill in these environments in which every inefficiency, however small at the micro level, translates into big costs at the macro level. Because of this, large-scale and hyperscale datacenters look to suppliers such as Marvell to deliver finely tuned silicon.

[Slide: The custom accelerated compute market is significant—and growing. Source: Marvell]

Undoubtedly, the custom accelerated compute market is a rich opportunity for Marvell. As the slide above shows, this market is projected to grow at up to a 45% compound annual growth rate (CAGR), reaching nearly $43 billion in 2028. Even at a more conservative projection in the 30% CAGR range, the opportunity looks to be more than $27 billion in potential annual revenue within five years.

Interestingly, Marvell just disclosed design wins with three of the four major hyperscale operators based in the U.S. Its custom silicon deployments range from AI training and AI inference to general-purpose Arm CPU compute. With this footprint and the company’s portfolio of IP, it is certainly well-poised to continue its growth in custom compute. More specifically, Marvell’s first custom AI accelerator and Arm CPU are expected to debut this year, followed by an AI inference chip in 2025 and AI devices for another company in 2026.

Wrapping Up

The opportunity for Marvell to capitalize on the current and future demand for AI silicon is substantial. There are only a handful of companies that have the technical depth and resources to provide purpose-built silicon at scale, and Marvell is one of them. The company has proven itself over the past three decades, delivering the performance, security and power efficiency required for demanding datacenter operations. As generative AI and future workloads present hyperscalers with new challenges, Marvell is well-positioned to deliver silicon at hyperscale.

Patrick Moorhead

Patrick founded the firm based on his real-world technology experience and an understanding of what he wasn’t getting from analysts and consultants. Ten years later, Patrick is ranked #1 among technology industry analysts in terms of “power” (ARInsights) and “press citations” (Apollo Research). Moorhead is a contributor at Forbes and frequently appears on CNBC. He is a broad-based analyst covering a wide variety of topics, including the cloud, enterprise SaaS, collaboration, client computing and semiconductors. He has 30 years of experience, including 15 years of executive experience at high-tech companies (NCR, AT&T, Compaq, now HP, and AMD) leading strategy, product management, product marketing and corporate marketing, including three industry board appointments.