If you follow high tech like I have for the past 25 years, it probably “feels” like the rate of change is increasing. While I’m not a die-hard “Singularity” devotee, it just makes sense that if today’s technology is used to develop newer technology for tomorrow, the rate of change will increase. It’s like the compound growth of money. The evidence is all around us, and you need look no further than Facebook and smartphones.
Five years ago, Facebook had 100M active subscribers, growing to 1.1B this year. It took only 7 years for 50% of U.S. households to adopt smartphones, but 11 years for the PC-based internet. Now mobile devices like smartphones and tablets are dictating the entire tech agenda for PCs, servers, networking, storage, security, software, and services.
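The compound-growth analogy is easy to check against the Facebook figures above. A quick sketch (the numbers come from the article; the function is a standard compound-annual-growth-rate calculation):

```python
# Compound annual growth rate implied by the subscriber numbers cited
# above: 100M active users growing to 1.1B over five years.

def cagr(start: float, end: float, years: float) -> float:
    """The constant yearly growth rate that turns `start` into `end`
    over `years` years."""
    return (end / start) ** (1 / years) - 1

growth = cagr(start=100e6, end=1.1e9, years=5)
print(f"implied annual growth: {growth:.1%}")  # about 61.5% per year
```

In other words, sustaining that trajectory means compounding at over 60% a year, which is exactly the kind of curve that forces new approaches to computing.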
To stay on this sharp growth trajectory, technology companies need to more quickly embrace the new style of computing, a specialized one, also known as heterogeneous computing. For those who aren’t embracing this specialized approach and are sticking with homogeneous computing, I say “good luck”.
Let me define the different types of computing.
Homogeneous Computing Defined
Homogeneity is all about things being the same. For decades, the tech industry took advantage of the scale and uniformity that homogeneity brought with it. Most of that computing took place on a general-purpose CPU accompanied by an operating system, like Windows, that you would buy or license. This is still the predominant compute style for PCs and servers today.
Homogeneous computing is great if you don’t need peak efficiency or the lowest power, or if time to market is the priority.
Heterogeneous Computing Defined
The opposite of homogeneous computing is heterogeneous, or specialized, computing. This is where different kinds of tasks run on specialized processors. There are many different kinds of tasks, or workloads. Whether it’s office productivity, games, video playback, photo filtering, database lookups, or network packet inspection, each runs better on a different kind of processor.
While there are many different kinds of processors, they typically can be segmented into a few categories:
- CPU (central processing unit)
- GPU (graphics processing unit)
- DSP (digital signal processor)
- FPGA (field programmable gate array)
- Fixed-function processor (e.g. video decode)
To be clear, having multiple CPU cores of different sizes or types is not heterogeneous computing.
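The core idea, routing each kind of workload to the processor type best suited to it, can be sketched in a few lines. This is purely illustrative; the task names and the task-to-processor table are hypothetical, not any vendor’s actual scheduler:

```python
# Illustrative sketch of heterogeneous dispatch: each workload type is
# routed to the processor category best suited to it. The table below
# is hypothetical, for illustration only.

WORKLOAD_TO_PROCESSOR = {
    "office_productivity": "CPU",
    "3d_game_rendering": "GPU",
    "photo_filtering": "GPU",
    "voice_processing": "DSP",
    "network_packet_inspection": "FPGA",
    "video_decode": "fixed-function",
}

def dispatch(workload: str) -> str:
    """Return the processor type a heterogeneous scheduler might target.
    Unknown workloads fall back to the general-purpose CPU."""
    return WORKLOAD_TO_PROCESSOR.get(workload, "CPU")
```

The fallback line is the point: a homogeneous system runs everything on the bottom row, while a heterogeneous one only lands there when nothing better fits.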
The final note I want to make on heterogeneous computing is about complexity. It takes a lot more skill in hardware and software to design a heterogeneous-compute-based platform, whether it be a smartphone, tablet, PC, or server. It requires a very balanced approach to get the right task on the right processor, which takes significant hardware and software architecture and development time. There are industry players making that easier, but I will cover that in an upcoming column.
Now I want to discuss the bookends of mobility and servers in the context of heterogeneous computing.
Smartphone and Tablet Computing
Smartphones and tablets are the farthest ahead with heterogeneous computing because they have had the most constraints to deal with. Smartphones, and for that matter feature phones, have spent 15 years contending with battery life requirements and the smallest batteries of any compute device.
As consumers want to do even more with their phones, whether it be games, augmented reality, better image processing, or always-on activities, the efficiency requirements are increasing even more. To tackle those needs, many chip and phone designers are moving toward specialized computing. The smartphone “chip,” or SOC (system on a chip), literally contains multiple specialized integrated processing cores, one for each task: general computing, 3D, video display, video capture, camera, music, gestures, sensors, and connectivity like WiFi and 4G. Only via this specialized approach are handset makers able to achieve the feature sets and efficiency levels required. This isn’t an easy task.
As I noted previously, specialized computing requires specially tuned hardware and software to extract the efficiencies and performance. Each one of those subsystems needs to work well on its own, but also as part of a team. The phone needs to know which task to throw at each of those subsystems, without delay, or you’ve wasted the benefit of specialization. All things being equal, specialized computing makes the hardware and software harder to architect.
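Why go to all that trouble? Because the energy gap between a general-purpose core and a dedicated block is large. A back-of-the-envelope sketch makes the point; the power figures below are hypothetical round numbers chosen for illustration, not measurements of any real chip:

```python
# Back-of-the-envelope sketch of why offloading a task to a specialized
# block wins on battery life. Energy = power x time; both power figures
# below are hypothetical, for illustration only.

def energy_joules(power_watts: float, seconds: float) -> float:
    """Energy consumed at a constant power draw over a time span."""
    return power_watts * seconds

# One hour of video decode on a general-purpose CPU vs. a
# fixed-function decoder block (hypothetical power draws).
cpu_energy = energy_joules(power_watts=2.0, seconds=3600)
decoder_energy = energy_joules(power_watts=0.2, seconds=3600)

print(f"fixed-function decode uses {cpu_energy / decoder_energy:.0f}x less energy")
```

With a phone-sized battery, a 10x difference per task is the difference between a feature that ships and one that doesn’t.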
Qualcomm appears to be investing the most time, effort, and money into a broad set of specialized processors and software. They have invested billions into “Krait” CPUs, “Adreno” GPUs, “Hexagon” DSPs, camera, video, and sensor cores, “Atheros” WiFi, “Gobi” modems, and other cores. Apple, while not broadcasting it publicly, has invested a lot into OpenCL GPU acceleration for its photo and video experience. Let’s move to servers.
Server Computing
For the last decade, the name of the mainstream server game has been homogeneous computing in enterprises on Windows and Linux, driven by consolidation through virtualization. In a nutshell, this meant moving server workloads from many old servers to fewer, new X86-based servers. Virtualization software was the glue that let the different operating systems and apps work together on one server. Virtualization saved money and made a lot of sense for enterprise IT, but it makes less sense in the future for cloud workloads or “scale-out” datacenters like Google, Facebook, and Microsoft.
You see, scale-out datacenter providers need maximum efficiency per workload per datacenter square foot, and that can only be achieved through specialized computing, not homogeneous computing. There is no way to meet future datacenter needs without changing the approach to server computing. There’s just not enough power and concrete available to build enough datacenters to affordably fulfill the needs. Scale-out datacenters and tech providers are starting to approach the challenge in different ways.
Hewlett-Packard’s approach is very far along: its Moonshot platform integrates CPUs (AMD, AppliedMicro, Calxeda, Intel, Texas Instruments), GPUs (AMD), DSPs (Texas Instruments), and FPGAs (SRC). Nvidia has been doing GPU compute for 8 years with its CUDA platform, focusing on HPC.
While the datacenter is far behind smartphones and tablets, it must move to heterogeneous computing.
Long-Term IoT Future
Like many, I believe the future will be about the billions of devices connected to each other, to people, to gateways, and/or to the cloud. It will have as much to do with millions of A/C units talking to each other to maximize energy efficiency as with “Super” FitBits telling us we are at risk of a stroke in the next week. Of course, the future doesn’t exclude personal compute devices like smartphones, tablets, personal computers, wearables, and TVs, but the biggest changes and challenges will encompass the billions of endpoints and how they’re managed, between each other, through gateways, and in the cloud.
The only way the industry will realize IoT/IoE, which has even higher battery life demands than smartphones, will be to drive even further into heterogeneous computing. I believe those companies leading in heterogeneous computing will have the tactical advantage in IoT.
While smartphones, tablets, servers, and the future IoT seem worlds apart, all require specialized, or heterogeneous, computing to realize their full potential. Smartphones and tablets had to adopt heterogeneous computing before PCs and servers because of their low-power demands, but PCs and servers are on a steep growth and learning curve.
The biggest challenge we have in all these markets is that progress isn’t happening fast enough, particularly in operating systems and application development environments. It’s extremely hard to program apps for heterogeneous computing. High-tech companies and consortiums need to get their collective acts in gear if they hope to continue the level of growth.
While it’s unclear exactly how specialized computing moves forward, one thing is for certain: those who don’t adopt specialized, or heterogeneous, computing will be left behind. My next column will be a drill-down on mobile heterogeneous computing.