Amazon Web Services (AWS) is the undisputed leader in the public cloud space – it is growing quickly every quarter and delivering strong earnings back to its parent, Amazon.com. Microsoft's Azure is narrowing the gap, and IBM, Google and Rackspace offer significant enterprise public cloud services, too. AWS is very profitable, generating $3.7 billion in operating profit (before stock-based compensation) on $12.2 billion in revenue in 2016. AWS subsidizes Amazon's international operations, which lost $1.3 billion on $43 billion in sales, and its domestic operations, which made only $2.4 billion on $80 billion. While AWS is clearly the poster child for the benefits that public cloud can bring – agility, a metered business model, robust toolsets, endless innovation and more – Amazon also suffers whenever there is an outage. To be fair, on the day of the outage it was reported that AWS up-time was ~99.59%, which is under its 99.95% target; but given how heavily the media covered the outage, you would think that number was far lower. Amazon runs a tight ship, and it shows.
All this attention comes at a time when customers are beginning to consider the true costs and true business impacts of public cloud services. This afternoon I was talking with our senior analyst John Fruehe about a conversation he had during lunch at a technology event. He mentioned that a customer had recently stated, "It's very easy to put data into the cloud…" and John completed the thought without missing a beat: "…but very expensive to take it out." That is a common complaint among public cloud customers.
This incident is an indictment, not of AWS or Amazon.com, but of business and IT decision makers. Too often the decision to move IT services to the public cloud is driven either by cost or by the thought that "We need to get to the cloud to be competitive." But not understanding the value that your IT can deliver today shortchanges the business.
If you look at IT back in the 1990s, there were legacy platforms (mainframes, minis, etc.) and x86 servers. To hear server vendors tell it, x86 servers would eventually rule the world. Then, in the early 2000s, virtualization came along, with people predicting that virtualized servers would reign supreme. Except that the cloud happened. People began to think that the public cloud would eat the world, except for those pesky legacy platforms that nobody can shake. Now the future is even more crowded, with private cloud added to the mix. The graphic below captures the change in mindset and the march towards Hybrid IT (from a directional standpoint, not a market share standpoint).
But not every workload belongs in a public cloud. SaaS workloads like Salesforce.com, Workday or Office 365 are easy moves to the cloud, and once customers get there, they stay there. Other cloud workloads, though, will never be public. Sometimes it is data sensitivity, sometimes user experience, sometimes application latency, sometimes privacy or regulation – but whatever the reason, not everything will leave your data center, and that is where private cloud makes sense. In between those pure cloud plays and the legacy or bare metal platforms back in the corporate data center, outages like the one we saw this week remind us that we need a balanced approach, because some workloads may not be best served outside your data center. A hybrid IT environment is what most businesses will end up with.
There clearly are workloads that will live in the public cloud, but when businesses start considering the cost of high availability (i.e., running out of two different cloud data centers or on two different providers), the true price differential and savings are smaller than expected. Too many businesses feeling the pinch today are not feeling it because of Amazon's problems; they are feeling it because they themselves failed to architect their applications with the expectation that there could be a failure.
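To make that point concrete, here is a minimal sketch of the kind of failure-aware design being described: the application tries a primary region and falls back to a secondary one instead of assuming the provider never goes down. The region names and the `fetch` callable are hypothetical stand-ins for real service calls, not any specific provider's API.

```python
# Minimal sketch of region-failover logic. The regions and the fetch
# function are hypothetical illustrations, not a real cloud SDK.

class RegionUnavailable(Exception):
    """Raised when a region cannot serve the request."""


def fetch_with_failover(fetch, regions):
    """Try each region in order; return the first successful result.

    fetch   -- callable taking a region name; raises RegionUnavailable on failure
    regions -- ordered list of region names, primary first
    """
    errors = {}
    for region in regions:
        try:
            return fetch(region)
        except RegionUnavailable as exc:
            errors[region] = exc  # record the failure, fall through to the next region
    raise RuntimeError(f"all regions failed: {list(errors)}")


if __name__ == "__main__":
    def fetch(region):
        # Simulate an outage in the primary region.
        if region == "us-east-1":
            raise RegionUnavailable("primary region down")
        return f"served from {region}"

    print(fetch_with_failover(fetch, ["us-east-1", "us-west-2"]))
```

The point of the sketch is the second bill it implies: surviving an outage means paying for capacity (and data egress) in two places, which is exactly why the real savings are smaller than the headline price comparison suggests.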
The dynamics between public and private cloud tip a little more towards private cloud every time there is an outage, and with each new generation of IT technology that the likes of Cisco Systems, Dell/EMC, Hewlett Packard Enterprise, Huawei Technologies, IBM or Lenovo bring to market. It is unlikely that businesses will halt their march to the cloud and move back to hosting all their applications on the traditional bare metal or virtualized server environments that have been popular for the last 20 years, but these episodes may cause people to think a little harder about data location and availability. That makes private cloud more interesting. Hybrid IT gives businesses a combination of their own data centers, co-location and external cloud for hosting applications that could be traditional, virtualized, public cloud or private cloud.
There is no "one size fits all" in IT; it won't even be bi-modal. As business environments have become more complicated, so too has IT. Every business will probably have all of those elements in its portfolio in the future, as each plays an important role. I even expect that some smaller startups that eschewed IT infrastructure and went totally serverless, relying on the cloud, will eventually concede that some IT, though not a lot, is better served under their own control.
The other night, before the outage, I spoke to a reporter who was convinced that everything was going to the public cloud and that IT organizations would eventually be out of a job. No, that will never happen; even in a cloud-based world there is still a need for creating and connecting services. And considering the disruption today, I wonder if he is rethinking his position a bit – that is, if he can get online to ponder it.