One sign that a market has matured is that as its customer base stabilizes, stops growing, and eventually flattens out, segmentation becomes increasingly complicated and fragmented. This is a challenge for server giants like Hewlett-Packard, Dell, and IBM.
The current enterprise IT datacenter market is following just that path. As enterprise IT settles into a stable market, the new market serving service-oriented, scale-out datacenters – the “web giants” like Google, Amazon, Facebook, eBay, and even Microsoft – is booming, and those customers are buying servers by the rack or in groups of racks. But the datacenter supply chain is stuck in the existing, highly fragmented segmentation model, which doesn’t really address rack level performance in a modern datacenter.
The big problem is that the existing segmentation assumes either that each server will be independently configured for its own application (a bit archaic, and today mostly confined to small businesses and branch offices) or that a datacenter must be provisioned for a certain level of virtual machine density in a peanut butter spread of over-provisioning.
We looked around for existing options, because honestly we would have been happy to use someone else’s segmentation. But we couldn’t find a viable set of segments. Some of the options for describing the new markets included:
- “Density optimized” is stuck in the old world of describing chassis and not rack level buying.
- “Microserver” is somewhat denigrating and tied to the current limitations of small-core processors.
- “Extreme low energy” assumes that processor sockets are still a leading cause of system power consumption compared to networking, memory, and storage – which was true 15 years ago, but processor power consumption has come a long way since.
Almost universally we hear “virtualized” and “hyperscale” referred to as markets, and the concept of “hyper virtualized” is starting to surface, so we put some effort into defining them. To start with, we asked ourselves “what is rack level density really describing?”
We threw out price first. That’s a competitive issue within segments but doesn’t necessarily affect performance. It is also extremely difficult to weight for buying scale – list price doesn’t have much meaning when a customer is buying enough servers to fill a warehouse.
We also jettisoned power consumption, mostly because it’s too processor centric at the moment, as I mentioned above. Also, no one is collecting or reporting power consumption for fully provisioned racks yet, and even if they were, the numbers wouldn’t be comparable, because no measurement standards have been defined.
What’s left? For our first cut, we chose to scale socket level integer performance to rack scale. It’s easy to assess with standard metrics and easy to scale as a proxy. But it doesn’t capture workloads bound by data movement rather than processor performance, so we also incorporate the stated networking bandwidth for each chassis or tray in a rack as pushed to a top of rack (TOR) switch – in other words, we measure the maximum north-south (N-S) network throughput for a rack. We rely on manufacturers’ networking option specs and make some simple assumptions about scaling up to rack level. For consistency, we assume every rack has a TOR switch.
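To make that concrete, here is a minimal sketch of how the two rack-level numbers could be computed from per-chassis specs. The parameter names, sample values, and scaling assumptions are ours, for illustration only – they are not taken from any vendor’s spec sheet or from our actual model.

```python
# A minimal sketch of deriving the two rack-level metrics.
# All names and sample values are illustrative assumptions.

def rack_metrics(int_perf_per_socket, sockets_per_chassis,
                 chassis_per_rack, ns_gbps_per_chassis):
    """Scale socket-level integer performance and stated chassis
    networking bandwidth up to a full rack behind one TOR switch."""
    # Proxy for rack-level integer performance: a per-socket score
    # (e.g., a SPECint-style result) multiplied out across the rack.
    rack_int_perf = int_perf_per_socket * sockets_per_chassis * chassis_per_rack
    # Maximum north-south (N-S) throughput: every chassis pushing its
    # stated bandwidth to the top-of-rack (TOR) switch at once.
    rack_ns_gbps = ns_gbps_per_chassis * chassis_per_rack
    return rack_int_perf, rack_ns_gbps

# Illustrative example: 2-socket chassis, 40 chassis per rack,
# dual 10 GbE links per chassis (invented numbers).
perf, ns_bw = rack_metrics(int_perf_per_socket=50, sockets_per_chassis=2,
                           chassis_per_rack=40, ns_gbps_per_chassis=20)
print(f"rack integer perf (proxy): {perf}, max N-S bandwidth: {ns_bw} Gbps")
```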
When we graphed integer performance and north-south bandwidth at rack scale, four segments emerged. We call them: Small Scale, Virtual, Hyperscale, and Hyper Virtual. You can read a lot more about them here. Four rack level segments are a lot simpler than a bunch of legacy server chassis form factors, and most of the existing segments map fairly conveniently onto the new ones for backward compatibility.
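For readers who think in code, the quadrant logic looks roughly like this. The threshold values are invented placeholders, and the quadrant assignments – other than Hyperscale’s high-compute, low-N-S corner, which the next paragraph describes – are our simplified reading of the segmentation:

```python
def classify_rack(rack_int_perf, rack_ns_gbps, perf_cut=2000, bw_cut=400):
    """Map the two rack-level axes onto the four segments.
    The cut-off values are invented placeholders, not the
    study's actual segment boundaries."""
    if rack_int_perf < perf_cut:
        return "Small Scale" if rack_ns_gbps < bw_cut else "Virtual"
    # High integer performance with little N-S bandwidth is the
    # hyperscale corner; high on both axes is hyper virtual.
    return "Hyperscale" if rack_ns_gbps < bw_cut else "Hyper Virtual"

print(classify_rack(4000, 800))  # -> "Hyper Virtual" with these placeholders
```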
Where will we take our rack level server segmentation from here? First, we’ll overlay high performance computing, which appears to have a lot in common with hyper virtual at a high level. Then we’ll try to tackle the sticky issue of measuring new architectural designs tuned to provide greater east-west (E-W) traffic within a rack or small cluster of racks, but for now those systems show up conveniently as a lot of integer performance with not a lot of N-S bandwidth – our hyperscale segment.
Cost and power will be cross-segment competitive issues for a while and will leave lots of room for innovation and differentiation within segments. But they are not core attributes for describing density. You can download Moor Insights & Strategy’s new server segmentation here.