Hyperscale datacenter operators want to expand the compute capacity of their server racks. Some server vendors are responding with dense chassis that absorb the functionality of mid-rack switches, to the point where they can bypass the top-of-rack (TOR) switch entirely and connect directly to the end-of-row (EOR) switch. But that is not the only way.
AppliedMicro’s X-Weave architecture enables datacenter system architects to build dense, shared-memory servers at rack scale using simple network topologies from the TOR switches. X-Weave could help lower Total Cost of Ownership (TCO) by simplifying network management at scale, eliminating mid-rack switches, significantly reducing cable count, and enabling higher server node densities.
- Executive Summary
- Background: Remote DMA and Coherent Memory
- Memory Coherence Over Networks
- Hyperscale Architecture is Different, and So is HPC
- AppliedMicro X-Weave Architecture and Gearbox 2
- AppliedMicro X-Gene 2 System Design with Gearbox 2
- Practical Applications of X-Weave Architecture and X-Gene 2 SoC
- Call to Action
- Figure 1: AppliedMicro Hyperscale Tray Reference Design: 10 Server Nodes Sharing Four 25Gbps Links
- Figure 2: AppliedMicro Hyperscale Server Node Reference Design
- Figure 3: AppliedMicro Hyperscale Rack Reference Concept: 240 Server Nodes Using 24 x 25Gbps TOR Links
You can download the paper here.