Marvell announced today that it is moving its EDA (electronic design automation) workflow from on-premises to Amazon Web Services (AWS). It also publicly revealed what many insiders like me knew already: that Marvell is a provider to AWS for “electro-optics, networking, security, storage, and custom-designed solutions.” I believe this announcement is positive for both companies.
EDA refers to the software tools engineers use, typically from Cadence and Synopsys, to design, simulate, debug, and verify IP blocks, chips, and SoCs. Over the past decade, these tools have integrated AI to automate specific procedures and improve time to market. Cadence recently introduced its Verisium platform, which it says can improve debug productivity by 10x.
As these EDA tools gain capability, engineers can throw additional resources at them: if you want an answer sooner, add more compute and memory. That makes EDA a spiky workload with correspondingly spiky compute and storage demands, and the public cloud, in this case AWS, is far better suited to those demands than fixed on-premises infrastructure.
Raghib Hussain, president of products and technologies at Marvell, agrees. “Performing EDA workloads in the cloud will transform the way semiconductors are developed. By utilizing AWS’s EDA in the cloud services, Marvell will be able to optimize our chip developments and accelerate our time to market.”
Marvell wasn’t specific about which AWS EC2 instances, storage, memory, tools, or file systems it will use, but you can likely find what Marvell is using here.
Marvell also disclosed publicly that it is a significant semiconductor supplier to AWS. This should not be a surprise, as Marvell is a market leader in cloud storage, electro-optics, DPUs (data processing units), networking, and HSMs (hardware security modules).
The company has leading-edge technology, IP, packaging, and interconnects. It is at the front of the line for TSMC’s bleeding-edge nodes like 3nm, offers many accelerators, employs experts in high-speed mixed-signal design, and leads in MCMs (multi-chip modules), co-packaged optics, and in-package memory designs. Marvell’s end-product approach is flexible: customers can buy standard parts, partner on its custom IP, build custom ASICs, or integrate complex SoCs. These capabilities all come together across hyperscale data center compute, security, and storage racks, connecting those racks via switches and linking data centers via optical interconnects. While AWS didn’t announce it, I am confident that Marvell enabled AWS’s Nitro SSD. And it would make a lot of sense for AWS to use Marvell’s HSMs, as Marvell is the clear cloud leader there.
David Brown, vice president of Amazon EC2 at AWS, commented on Marvell’s silicon capabilities. “Our customers have benefitted from our collaboration with Marvell, as it brings silicon innovation to the broadest and deepest set of cloud services,” he said in a press release. Getting a quote from David Brown is a massive show of support.
So, what does this strategic love-fest between two very successful companies mean?
While AWS has many flavors of home-grown silicon (the Nitro System, Graviton compute, Inferentia inference, and the upcoming Trainium for ML training), it has been evident that it can’t go it alone and needs merchant silicon providers. This is where Marvell fits in. I am very interested to see how, and if, AWS uses Marvell’s custom capabilities, which Marvell calls its “flexible R&D model.” For AWS, this represents a major win for its cloud-based EDA services, in a market where workflows have run on-prem for 40 years. It also represents a win in providing its customers new capabilities into the future.