2016 was a strong year for Machine Learning (ML) and Artificial Intelligence (AI), with many high-tech firms claiming they are now an “AI Company”, notably Amazon, Baidu, Facebook, Google, IBM, Intel, Microsoft, NVIDIA, and Tesla. In 2017, the field will broaden to include AMD, Qualcomm, and Xilinx. Moor Insights & Strategy (MI&S) expects Machine Learning-based AIs to expand from cloud-based services and applications (primarily internal to the web companies’ operations) to specialized applications and edge devices, providing intelligent solutions for a wide range of enterprise, consumer, and industry-specific applications. This MI&S analysis segments these emerging AI applications and explores the underlying hardware required to run these cloud, edge, and hybrid applications.
You can download the paper here.
Table of Contents
- Machine Learning Applications
- From the Edge to the Cloud
- Hybrid Applications, Between the Cloud & the Edge
- Hybrid Hardware for the Hybrid Environment
- Hardware Advantages & Disadvantages for Deep Learning
- GPUs in the Cloud
- CPUs in the Cloud
- FPGAs in the Cloud
- ASICs in the Cloud
- Typical Embedded SoCs & Embedded GPUs
- Embedded Reconfigurable SoCs & FPGAs
- Figure 1: A Machine Learning Application Landscape
- Figure 2: Machine Learning Compute Platforms