Xilinx, the market share leader in Field Programmable Gate Arrays (FPGAs), presented five sessions at this week’s HotChips ’18 conference in Cupertino, California. These included CEO Victor Peng’s keynote on the company’s vision for the future of adaptable domain-specific computing, new features of its upcoming 7nm Project Everest architecture, and pre-built applications for Deep Learning for Artificial Intelligence on FPGAs. Xilinx announced broad goals for Project Everest earlier this year, but these sessions provided the first architectural details the company has disclosed. FPGAs have been relatively slow to gain traction in the datacenter, outside of Microsoft’s use of Intel’s Altera chips. However, Xilinx was clear that it sees this trend gaining momentum, and that its next-generation products will include specific features to enable growth in AI and 5G wireless networking.
ACAP: Adaptable, domain-specific acceleration
Mr. Peng laid out the company’s strategy to deliver over 10X faster processing in key markets while easing the programming challenges FPGAs have historically posed. The next-generation devices will be based on the 7nm Adaptable Compute Acceleration Platform (ACAP) technology and will include innovations such as the “Software Programmable Engine,” faster I/O, a fast on-die fabric, and other features the company will likely detail at its upcoming Xilinx Developers Forum, October 1-2 in San Jose. Mr. Peng believes that the ACAP approach will open new markets for his firm’s products as first-class accelerators in their own right, beyond their traditional use as a development platform for building custom chips.
Figure 1: Xilinx is combining hard logic with soft re-programmable logic in what the firm believes to be a new category of acceleration devices. XILINX
As I have covered before, Xilinx and its partners began offering pre-built IP (RTL code and libraries) on the Amazon Marketplace and AWS F1 instances to speed time to solution in markets such as AI, genomics, and video encoding. However, these all run on the Xilinx FPGA fabric, which, while very flexible and reprogrammable, is slower than hardened logic on Application-Specific Integrated Circuit (ASIC) chips. With ACAP, the company will take this concept to the next level, offering the ability to call pre-built hard logic on the ACAP chip from C or C++ programs, through a feature currently referred to as Software Programmable Engines.
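To make that programming model concrete, here is a minimal sketch, in Python for brevity, of what invoking a pre-built hardened function from ordinary host code might look like. The names (`load_engine`, `Engine.call`) and the in-process registry are purely illustrative assumptions, not a real Xilinx API; the "engine" is simulated entirely in software.

```python
# Hypothetical sketch of the host-side programming model described above:
# application code calls pre-built hard logic through a software API
# instead of writing RTL. All names here are invented for illustration.

class Engine:
    """Stand-in for a hardened on-die accelerator function."""
    def __init__(self, fn):
        self._fn = fn

    def call(self, *buffers):
        # A real runtime would marshal buffers to the device and back;
        # here we simply run the computation in software.
        return self._fn(*buffers)

def load_engine(name):
    # Pretend registry of pre-built hard logic blocks (illustrative only).
    registry = {
        "dot": Engine(lambda a, b: sum(x * y for x, y in zip(a, b))),
    }
    return registry[name]

dot = load_engine("dot")
print(dot.call([1, 2, 3], [4, 5, 6]))  # 1*4 + 2*5 + 3*6 = 32
```

The point of the sketch is the division of labor: the application author writes ordinary host code and treats the hardened block as a callable, while the heavy lifting happens in fixed silicon rather than in user-authored gates.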
ACAP’s programmable engines
Mr. Peng revealed that ACAP’s new Programmable Engines will initially focus on two areas experiencing dramatic growth: accelerated Deep Learning inference processing and wireless 5G applications. Think of these engines as an embedded ASIC, with arrays of fabric-connected vector engines with local memory, and an instruction set architecture used to program these processing elements. Developers will still be able to use traditional FPGA features to accelerate specific customizations in the Programmable Logic array, which can communicate directly with the SW engines. It is safe to assume Xilinx will ultimately offer a range of product SKUs. This will allow users to choose how much die area they want to have for which type of processor, hard ASIC, or programmable logic.
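As a rough mental model of "arrays of fabric-connected vector engines with local memory" driven by an instruction set, consider the toy simulation below. The two-operation ISA (`vadd`, `vmul`) and the dictionary-as-local-memory layout are invented for illustration and bear no relation to Xilinx’s actual instruction set.

```python
# Conceptual sketch (not Xilinx's real ISA): one tile of a vector-engine
# array interprets a small instruction stream against its local memory.

def run_tile(program, local_mem):
    """Interpret a list of (op, dst, src1, src2) vector instructions."""
    for op, dst, a, b in program:
        if op == "vadd":
            local_mem[dst] = [x + y for x, y in zip(local_mem[a], local_mem[b])]
        elif op == "vmul":
            local_mem[dst] = [x * y for x, y in zip(local_mem[a], local_mem[b])]
        else:
            raise ValueError(f"unknown op: {op}")
    return local_mem

mem = {"v0": [1, 2, 3], "v1": [4, 5, 6], "v2": None}
prog = [
    ("vmul", "v2", "v0", "v1"),   # v2 = v0 * v1 -> [4, 10, 18]
    ("vadd", "v2", "v2", "v1"),   # v2 = v2 + v1 -> [8, 15, 24]
]
run_tile(prog, mem)
print(mem["v2"])  # [8, 15, 24]
```

In a real device, many such tiles would run concurrently and exchange data over the on-die fabric; the adjacent Programmable Logic could feed or consume these streams, which is the direct communication path the paragraph above describes.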
The ACAP architecture represents a set of major innovations that could be a game changer, and Everest chips could reach a larger market than the traditional hand-built custom approach that has historically been required for FPGA deployment. However, one challenge Xilinx must overcome is that it must attract two quite distinct universes of programmers: AI and 5G developers who would access the SW engines typically code in high-level languages such as C and C++, or in “frameworks” such as TensorFlow or Caffe, while users of the FPGA gates typically program in RTL or High-Level Synthesis languages to define the actual hardware logic. I encourage potential adopters of ACAP to attend the upcoming Xilinx Developers Forum I mentioned earlier to learn how the company intends to bridge these two approaches.
Turn-key deep learning for FPGAs
A few months ago, Xilinx and AWS added a Deep Neural Network (DNN) Toolkit to the AWS Marketplace, providing a pre-built Inference Engine, a Network Compiler, and a runtime library for Xilinx-powered F1 instances. Microsoft, SK Telecom, Baidu, and other cloud providers have demonstrated excellent inference performance on FPGAs, and this AI-in-a-can approach is intended to broaden the appeal of FPGAs for this fast-growing market. While GPUs from NVIDIA dominate the market for training AIs in the datacenter, the market for executing those neural networks remains wide open, and FPGAs show a lot of promise with their performance, lower power, and very low latency (response time), even at small batch sizes. While not a user of this new Xilinx DNN platform, SK Telecom recently shared excellent performance results for its own inference engine running on Xilinx FPGAs, which it now offers as a service to Korean developers and customers.
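The compile-then-deploy flow such a toolkit implies can be sketched as follows. Everything here, the layer list, the lowered "program" format, and the function names, is a hypothetical simplification of the Network Compiler plus runtime pattern, with the device simulated in plain Python.

```python
# Illustrative sketch of a compiler + runtime inference flow: a "network
# compiler" lowers a model description to a flat device program, and a
# runtime executes it. Names and formats are invented for illustration.

def compile_network(layers):
    """Lower a list of (kind, params) layers to a flat instruction list."""
    return [("layer", kind, params) for kind, params in layers]

def run_inference(program, x):
    """Execute the lowered program on an input vector (simulated device)."""
    for _, kind, params in program:
        if kind == "scale":
            x = [v * params for v in x]
        elif kind == "relu":
            x = [max(0, v) for v in x]
    return x

prog = compile_network([("scale", 2), ("relu", None)])
print(run_inference(prog, [-1, 0, 3]))  # [0, 0, 6]
```

The appeal of the turn-key approach is that the developer supplies only the trained network description; the compiler and runtime hide the FPGA programming entirely, which is what makes the offering accessible to teams without hardware expertise.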
Deephi AI now part of Xilinx
A few weeks ago, Xilinx announced its acquisition of Deephi, a Chinese FPGA solution provider that had been developing AI applications on Xilinx FPGAs. The company’s co-founder and CEO, Song Yao, presented Deephi’s technology and strategy to the HotChips audience in depth. It certainly seems to hold significant value for the company in both the traditional FPGA and the new SW Programmable Engine space. Deephi also had some impressive demos at the event, showing real-time inference processing in facial- and object-recognition applications. From my perspective, the acquisition of Deephi will not only help Xilinx advance in the market for AI inference acceleration, it will also give the combined company a foothold in the Chinese marketplace.
Xilinx has historically been a fairly conservative and quiet company, selling technology to engineers who possess a rare combination of hardware and software expertise. I suspect most of these techies will remain somewhat skeptical about the marketing surrounding the ACAP approach until they see more detail to convince them that it is indeed fast, useful, and easy to program. This dynamic is why a heightened level of active market engagement may become the new normal under Mr. Peng’s tenure. The company needs to keep up the stream of information to both its base and new potential adopters of ACAP if it is going to break out of the custom chip and tools business and become a broad platform provider. To this end, the upcoming Xilinx Developers Forum on October 1-2 in San Jose will represent an important opportunity for engineers to learn more about the company’s plans and technologies. This will enable them to make their own plans to exploit the potential of this interesting new technology, which combines the best of two worlds: ASICs and FPGAs.