One would think, or hope, that a startup shipping its second-generation AI chips and software stack would be content to have over $200M in the bank to fund future development and market penetration. For UK AI startup Graphcore, that is a lot of money. When you are a darling in a fast-growing land grab, though, and investors keep throwing money your way, I guess it is hard to say, “No thanks, we’re fine.” In this blog, I will review Graphcore’s prospects for success.
This week, Graphcore announced the numbers from its Series E funding round—$200M at a stunning valuation of $2.77B US, post-money. The Canadian teachers' fund, Ontario Teachers' Pension Plan Board, led the round, joined by Fidelity International and Schroders. Several existing Graphcore investors upped their stakes as well, although not all participated. Including this round, Graphcore says it has $440M in the bank. This additional funding should enable Graphcore to patiently work on customer adoption and deployments, which can take months or even years to earn.
A company update
Graphcore had a great year from a product standpoint. It launched its second-generation IPU-M2000 chip, which comes on a flexible IPU Machine platform. Additionally, the company recently released new tools on the Poplar software stack. Notably, in a nod to the open-source community, Poplar v1.4 added full support for PyTorch. That smart move will help simplify adoption and engage a broader community to improve performance for newer AI models.
We have not yet heard whether Microsoft, Dell, or the major AI service providers will adopt the updated IPU. Both Microsoft and Dell were early investors in Graphcore and adopters of the first-generation platform. That said, I suspect these projects are probably on track, given Graphcore's ability to raise additional funds.
While I find the new IPU impressive, with more on-die memory and access to shared memory for larger models, I find its recent performance claims unfortunate. Specifically, Graphcore compared 4 IPUs to a single NVIDIA A100 chip, stating that the IPU is faster. To be fair, the IPU-Machine does have four chips, so that is a reasonable basis for comparison—it is what people buy. Google also makes similar claims, comparing four TPUs to a single NVIDIA GPU.
I would surmise that Graphcore would look better if one compared its price/performance to competitors' platforms. Specifically, the HBM2 memory used on other AI platforms dramatically increases performance but also substantially increases costs. Graphcore's memory architecture uses super-fast on-die memory, augmented with additional DDR4 memory on the IPU Machine.
This company is on fire, with a great team, solid product designs, an impressive software stack and lots of cash on hand. With additional funding in place, Graphcore certainly isn’t going to implode any time soon.
Also of note, Graphcore recently joined MLCommons, the group that organizes the MLPerf benchmark collaboration. Adopting and publishing these benchmarks eliminates apples-to-oranges comparisons. Instead of arguing about accuracy, latencies, data segment lengths and node counts, MLPerf community members all abide by the same rules. This yields valid, peer-reviewed comparisons for chip adopters and investors alike.
One thing is for sure: the Cambrian Explosion in AI continues unabated. Look for my annual Forbes blog with predictions for 2021 coming in early January.