Traffic is a problem, not just on the streets, but also in the datacenter, where demand for bandwidth is outstripping capacity. On the street, raising performance is difficult: increasing the speed limit and shrinking the space between cars can bring more capacity, but also more challenges; it’s a physics problem more than anything else. Datacenter networks are facing their own physics problems, but on a much grander scale, something Applied Micro Circuits (AppliedMicro) is working to solve.
Content driving data density
Since 2012, network IP traffic has doubled to 168 exabytes per month, driven mostly by cloud storage, big data, IoT, messaging, social media and CDNs (content distribution networks). Aggregate rack bandwidth today is around 20Gb/s, and in a few short years that will explode to 50Gb/s. Cloud companies like Google and Facebook are establishing their own specific paths, while telecom carriers like AT&T, Verizon and Vodafone are not too far behind as they lean on vendors like Cisco Systems and Juniper Networks to deliver more bandwidth.
The typical datacenter backbone is built on 100Gb/s Ethernet, but by 2014 market demand had pushed the IEEE (which sets Ethernet standards) to pursue 400GbE. This would be a boost to carriers that have to deal with large amounts of traffic, bringing enough bandwidth to carry up to 50,000 simultaneous HD video streams, for instance. Some of those on the cutting edge, like web giant Google, are already moving faster than expected, but their needs are very specific; the rest of the market is best advised to focus on formal standards rather than trying to “roll their own”.
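As a quick sanity check on that figure, the arithmetic works out if each HD stream runs at roughly 8Mb/s, a common planning number assumed here rather than anything quoted in the standard:

```python
# Back-of-the-envelope check: how many HD streams fit in a 400Gb/s link?
# Assumes ~8 Mb/s per HD stream (an assumed planning figure, not a spec value).

LINK_CAPACITY_BPS = 400e9   # 400Gb/s Ethernet
HD_STREAM_BPS = 8e6         # one HD video stream (assumed bitrate)

streams = LINK_CAPACITY_BPS / HD_STREAM_BPS
print(f"{streams:,.0f} simultaneous HD streams")  # -> 50,000
```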
Big 7 cloud players will soon choke on the data
Copper is at its physical limit, and optical switching, using light, is now the only real way to boost performance beyond 100Gb/s. Today’s optical links require four wavelengths bonded together to hit 100Gb/s. In 2015 the IEEE approved an optical signaling standard based on PAM4 (four-level pulse amplitude modulation), which carries two bits in every symbol instead of one, squeezing more bandwidth out of each wavelength; it’s showing better promise for reaching 400Gb/s speeds affordably and efficiently. AppliedMicro is creating the new silicon to address the market, leveraging advanced 16nm Fin Field Effect Transistor (FinFET) technology, allowing it to deliver both performance and power efficiency.
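The gain comes from bits per symbol: conventional NRZ signaling has two levels and carries one bit per symbol, while PAM4 has four levels and carries two, so the same symbol rate on one wavelength moves twice the data. A minimal sketch of that relationship (the 25 GBd symbol rate below is illustrative, not a figure from AppliedMicro):

```python
import math

def wavelength_rate_gbps(symbol_rate_gbd: float, levels: int) -> float:
    """Data rate on one wavelength = symbol rate x bits per symbol (log2 of the signal levels)."""
    return symbol_rate_gbd * math.log2(levels)

# Same symbol rate, two signaling schemes (illustrative numbers):
print(wavelength_rate_gbps(25, levels=2))  # NRZ  -> 25.0 Gb/s; four such wavelengths ~ 100Gb/s
print(wavelength_rate_gbps(25, levels=4))  # PAM4 -> 50.0 Gb/s on the very same wavelength
```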
Historically speaking, using four combined links to reach maximum speed has always been the fastest way to market, with a single-link implementation as the final target much further down the road. The transceiver, the physical device between the cable (optical or copper) and the switch, is where these links come together. 100GbE connections today use a QSFP28 transceiver (four 28Gb/s connections delivering roughly 100Gb/s), but to step up to 400GbE those transceivers need to change: more than anything else, the lanes need to move from 4×28 to 4×100 in order to reach 400Gb/s. The new QSFP-DD transceiver doubles the connections while boosting data by 4x, all in the same form factor, so both power and componentry (cost) are reduced. Where four wavelengths were required to get to 100Gb/s in the past, with PAM4 a single wavelength will eventually reach 100Gb/s. Four of these bonded together will then hit 400Gb/s in a single transceiver slot, which is the easiest (and best) method for hitting 400Gb/s.
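The module math above is simply lanes multiplied by per-lane rate. A rough sketch of that progression, using the nominal numbers from the paragraph (real modules carry some encoding overhead on top of these round figures):

```python
def module_bandwidth_gbps(lanes: int, lane_rate_gbps: float) -> float:
    """Aggregate transceiver bandwidth: number of lanes times the per-lane data rate."""
    return lanes * lane_rate_gbps

# QSFP28 today: four lanes signaled at 28Gb/s carry about 25Gb/s of data each -> ~100Gb/s
print(module_bandwidth_gbps(4, 25))    # 100

# QSFP-DD: double the lanes, with PAM4 doubling each lane's rate -> 400Gb/s, same form factor
print(module_bandwidth_gbps(8, 50))    # 400

# End goal: a single 100Gb/s PAM4 wavelength per lane, four lanes -> 400Gb/s
print(module_bandwidth_gbps(4, 100))   # 400
```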
What this all means to business
But what do all of these acronyms and numbers mean to a business? Plenty, because what drives network economics more than speed or technology is volume. When there are multiple options in the market, most customers stand on the sidelines until the standard shakes out. But that is a catch-22: people hold off until a standard emerges, yet standards only emerge because the market coalesces around a single choice. How does this cycle ever get resolved?
Well, as we said earlier, while some of the largest cloud companies and carriers might be chasing down solutions that work for their specific needs, the broader market is looking for something that will drive the right economics, and PAM4 looks like the better option in that realm. In a recent presentation with AppliedMicro, MACOM, an optical manufacturer, indicated that some of its products have been in testing with Cisco Systems. As the de facto market driver, if Cisco Systems productizes 400GbE using single-wavelength PAM4 optics, then we can expect the broader enterprise market to gather momentum behind that standard. Right now 56Gb/s optical with PAM4 has been demonstrated, and proving the feasibility of 106.25Gb/s over a single wavelength (four of which, bonded in one module, would carry 400Gb/s) would signal the ability to get to 400Gb/s in commercial products down the road.
Wrapping up
All of this points to a path from our current limitations in the datacenter to a better solution. But don’t wave the checkered flag just yet. Data growth is not going to stop at current rates, or for that matter even level off. If the millions of IoT sensors out in the market are any indication, network traffic is exploding even faster, and by the time we hit 400Gb/s people will be hard at work on faster speeds, eyeing a path to terabit (1,000Gb/s) speeds. It’s a good bet that the people at Google will be the first ones to plant their flag on that beach, but just like 400Gb/s, it might not be the first ship there that matters, but the mainstream armada that brings the real market. And that happens when the technology hits its volume stride, driven by customer demand and, most importantly, economics.