As a technology (not financial) analyst, my company talks to and works with a wide range of companies, from consumer handset makers to the leaders in mega datacenter cloud infrastructure. What I am constantly looking for are those game-changing technologies that can alter the competitive equation. You know, the “disruptors.” I found an interesting disruptor while working with a company called NextIO and while talking to vendors who disclosed information at last week’s Open Compute Summit. NextIO does what is called “I/O virtualization,” and this technology, in my assessment, poses a real threat to core products from networking companies like Cisco, Juniper and Brocade.
Today’s data centers consist of rows and rows of what are called racks. Racks are like your old component stereo system, where you could slide components in and out: an AV receiver, amplifier, DVD player, Blu-ray player, power conditioner, etc. In a datacenter, instead of those CE components, the slots hold servers (which include the CPU and memory), storage, uninterruptible power supplies, a top-of-rack Fibre Channel storage switch that lets the servers access different high-speed storage units, and a top-of-rack Ethernet switch that lets the servers talk to each other and lets the rack talk to other racks or outside the datacenter. I am grossly simplifying this, as there are multitudes of variations on the explanation above. As you can see in the picture above, there can be 150 cables connecting these together, sometimes more when you add management cables.
The reason the industry arrived at this ugly mess is that server architecture was created as an additive process, where each generation was built on top of the last, driven by varying server workloads and by innovations in CPU, storage, memory, and networking. The end result is a sub-optimal solution that costs a lot more than it needs to, both upfront (acquisition) and ongoing (op-ex), and limits the performance of the rack. That is where I/O virtualization comes into the picture. I/O virtualization centralizes all that Ethernet and Fibre Channel communication, replacing multiple top-of-rack switches with a single I/O virtualization box.
By replacing both the Ethernet and Fibre Channel switches, this could really help the enterprise as it provides:
- the opportunity to buy smaller-form-factor, much less expensive servers. You don’t need the larger form factor for the cabling or the airflow.
- a lot less cabling and far fewer connections (roughly 5X fewer). This increases airflow, which decreases power draw and improves cooling. It also helps reduce the amount of time needed to get a server up and running: someone actually has to plug in every one of those cables, set it up, and troubleshoot if there’s an issue.
- the ability to add compute and storage without changing the top-of-rack I/O, which obviously provides flexibility.
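To put the cabling claim in perspective, here is a rough back-of-the-envelope sketch. The per-server cable counts are illustrative assumptions chosen to be consistent with the ~150-cable figure above, not NextIO's or any vendor's published numbers.

```python
# Back-of-the-envelope cable count for one rack of 30 servers.
# Per-server cable counts are illustrative assumptions, not vendor figures.

SERVERS_PER_RACK = 30

# Traditional rack: each server wires separately into the top-of-rack
# Ethernet switch, the Fibre Channel storage switch, and a management network.
traditional_per_server = {
    "ethernet": 2,       # redundant Ethernet pair
    "fibre_channel": 2,  # redundant FC pair to the storage switch
    "management": 1,     # out-of-band management
}

# I/O-virtualized rack: a single PCI Express link per server to the
# I/O virtualization box, which houses the shared Ethernet and FC cards.
virtualized_per_server = {"pci_express": 1}

traditional_total = SERVERS_PER_RACK * sum(traditional_per_server.values())
virtualized_total = SERVERS_PER_RACK * sum(virtualized_per_server.values())

print(f"traditional rack: {traditional_total} cables")  # 150
print(f"I/O virtualized:  {virtualized_total} cables")  # 30
print(f"reduction: {traditional_total / virtualized_total:.0f}X")  # 5X
```

Change the assumed per-server counts and the ratio moves, but the structural point holds: consolidating Ethernet and Fibre Channel into one box collapses several cables per server down to one.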
When I first started digging into this, it sounded too good to be true, and I had to ask why more people aren’t doing it. The explanation is simple. I/O virtualization is a relatively new concept and companies like NextIO are way ahead of the curve, so awareness is low in the classic enterprise. The classic enterprise is conservative and waits for “someone else” to do it and talk about it. Also, this is a huge threat to guys like Cisco, Juniper and Brocade, so it’s not like they want to raise awareness; they minimize it by ignoring it. Where I/O virtualization is taking off is in the mega datacenters, whose operators rarely give details or go public. As they are the most sophisticated and typically the first movers on new technologies, they are deploying solutions from companies like NextIO first.
As I talk with the traditional and mega datacenter IT ecosystem, it’s apparent this approach will be a popular way of doing communications for many parts of the ecosystem. The standard interface that NextIO uses, PCI Express, is available on everyone’s traditional server today and will be tomorrow, so it’s not as if some new standard needs to get developed, ratified and deployed. It’s here today. The exceptions are in the super-dense space, where PCI Express and Ethernet are virtually on the chip, or solutions from guys like Calxeda, which minimize power and eliminate the need for any comms box at the top of the rack.
A big question is, if I/O virtualization does catch on, how will Cisco, Juniper and Brocade react? They’re losing multiple boxes at the top of the rack. It’s the classic innovator’s dilemma, where they would need to invest in something that eats into a very good business. My expectation is they will need to develop something like NextIO has to fill that gap, but they will come late because they don’t want to collapse their market. The challenge is that this is a “follow” move, not a leadership move, and that rarely bodes well.
So if the traditional network guys lose out, who gains? In the NextIO case, it’s Intel, Emulex, and obviously NextIO. Intel picks up the 10Gb Ethernet cards and Emulex picks up the Fibre Channel cards that go into the NextIO box.
Who said enterprise IT wasn’t fun? If you want a deeper, more technical dive into I/O virtualization and NextIO, you can check out a paper my firm wrote here.