If you were listening to VMware CEO Pat Gelsinger at VMworld yesterday, the question of whether ARM-based processors might fit well into storage servers boils down to a straight-up processor performance and driver compatibility issue. As reported yesterday, Gelsinger said, “Even if you reduced the power consumption of ARM CPUs to zero, x86 will still win.” First, is that true, and second, what are we talking about? Why do storage servers need processors with big Intel Xeon x86 cores (called “brawny” cores by those who really like them) or with smaller Atom x86 or ARM cores (called “wimpy” cores by those same folks)? I’ll give you the short version here, but if you want the deep dive, we have penned a research note with performance and scalability benchmarks here. Let me provide some background first.
Small organizations start out by storing their data on disk drives that are directly attached to their PCs and servers. The industry calls that, unsurprisingly, Direct Attached Storage (DAS). When an organization grows large enough, file sharing becomes a challenge when data is scattered among a bunch of devices and probably hidden by poorly designed networks, passwords and other access controls. Storage servers evolved to meet this basic business need.
Enterprise storage servers come in two major flavors – Storage Area Networks (SAN) and Network Attached Storage (NAS). The important design feature of both is a single central point of contact on the enterprise network, one that fulfills all of the file requests flowing from the rest of the servers into the SAN or NAS. We call that a “storage appliance.” Over the years, storage appliances have become full-blown servers. Some are designed to handle an enormous volume of file operation requests. Dell, Hewlett-Packard, and IBM now sell their own branded NAS and SAN storage appliances, alongside industry leaders like EMC (which acquired VMware in 2004) and NetApp.
Because SAN and NAS storage appliances became full-blown servers, they are built from shared components that those vendors buy in quantity, including those brawny Xeon cores. But they are not the same products sold as servers; they are dedicated products with their own designs and feature sets.
Cloud changes the equation. Single points of access are single points of failure to cloud architects. So they started designing a new set of distributed storage architectures based on commodity servers, each with local DAS, connected in a highly scalable network mesh. Distributed storage architectures are already massively deployed at web services most of us use daily.
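To make that mesh concrete: a distributed object store has to decide, without a central point of contact, which commodity node holds a given object. One common approach is consistent hashing. Below is a minimal, hypothetical sketch of a hash ring; real systems (Swift’s ring, Ceph’s CRUSH algorithm) are far more sophisticated, adding replicas, node weights, and failure domains.

```python
import hashlib
from bisect import bisect

class HashRing:
    """Toy consistent-hash ring mapping object names to storage nodes.
    Illustrative only -- node names and parameters are made up."""

    def __init__(self, nodes, vnodes=100):
        # Each physical node gets `vnodes` virtual points on the ring,
        # which smooths out the distribution of objects across nodes.
        self.ring = []
        for node in nodes:
            for i in range(vnodes):
                h = int(hashlib.md5(f"{node}:{i}".encode()).hexdigest(), 16)
                self.ring.append((h, node))
        self.ring.sort()

    def locate(self, obj_name):
        # Hash the object name and walk clockwise to the next node point.
        h = int(hashlib.md5(obj_name.encode()).hexdigest(), 16)
        i = bisect(self.ring, (h,)) % len(self.ring)
        return self.ring[i][1]

ring = HashRing(["node-a", "node-b", "node-c"])
print(ring.locate("photos/cat.jpg"))
```

Any client with the node list can compute an object’s location independently, which is why these designs scale without a central appliance: adding or removing a node only remaps the small slice of objects whose ring positions change.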
The web giants like open source, for a variety of reasons, but the goodness here is that they are backing an ecosystem of open source developers who are designing distributed object storage software projects with names like Swift, Ceph, and GlusterFS (now incorporated into Red Hat Storage).
So, open source distributed object storage systems based on commodity server hardware are now readily available. Cloud customers want to know if they can lower their hardware costs even further while keeping scalability and performance, and they want to know if they can lower power consumption for the parts of the storage server that don’t actually store data. This is where Intel’s Atom and ARM-based processors come into play. If you want a deeper dive, we’ve penned a research paper that takes an early look at using Calxeda’s ARM-based processors to help answer the first question.