Stop buying enterprise storage and read this.
Modern companies now recognize that their most valuable asset is the data running their business. Managing that data, and deciding what to keep, may be the most challenging discipline in IT: not only is the underlying technology changing remarkably quickly, but data growth is astronomical. Some estimates put worldwide storage at more than 40ZB by 2020, with data coming from just about everything. Interesting shifts are emerging as vendors such as DellEMC, Hewlett Packard Enterprise, Pure Storage, and many others present new solutions. Enterprise IT needs to prepare now.
I recognize that one of today's big challenges is the mass of confusing techno-babble. I'm going to avoid it and instead offer five questions for you to ask.
- What are you going to do with (or to) the data? The ultimate goal of IT is, of course, transforming data into information with actionable insights, which is easier said than done! Before all else, you need to know how you are going to do this: methods, applications, processes, and so on.
- Does management have predisposed business and economic expectations (e.g., using on-premises or public cloud offerings)? Business direction, partnerships, or corporate mandates such as "Use the public cloud!" or "These datacenters must close!" may already exist. You may not agree, and they may be incorrect, but more often than not they will dictate much of what you do.
- Will applications that are essential to the business today remain essential in the future? There are often "anchor" applications (e.g., Oracle Financials) that are key to your business. They may have resource expectations and upgrade paths that set your technology direction.
- Does your company plan to be on the leading edge of IT evolution (e.g., embracing containers or supporting microservices)? A company's ability to adopt new technologies (e.g., software-defined storage, hyper-converged appliances, all-flash arrays), and a realistic view of the available talent, will determine much. You may need to enhance your organization's expertise.
- Is your company willing to deploy multiple architectures to support both conventional enterprise operations and next-generation cloud-based workloads? Adopting multiple architectures may provide benefits, but it will increase the complexity of managing and supporting toolsets, software, and processes.
This is a pivotal time, as the explosion and evolution of data change just about everything: storage, management, process, and more.
Storage Framework (Source: Moor Insights and Strategy)
The remainder of this article describes a framework (illustrated above) to help you consider your path…
The best place to begin is with a simple view of the general storage types. There is overlap in underlying technology and implementation, but these are the principal types. Unless you're an IT expert, you won't need to know all the details, but you should have a rudimentary understanding of the following.
Block storage
- Lowest-level data grouping used in a single transaction
- Almost universally used to store data directly on physical devices: HDD, DVD, flash, etc.
- Can also be delivered over a network
- Example: Fibre Channel SAN (Storage Area Network)

File storage
- Higher-level abstraction used to track groups of physical blocks
- Usually a referenceable unit for data or programs
- Provides a mechanism for clients to share data over a network
- Example: Network Attached Storage (NAS) over a LAN

Object storage
- Even higher-level abstraction
- Supports unstructured data: web content, multimedia, data backups, archival images
- Accessed by a simple, scalable reference; presents data and metadata with equal importance and accessibility
- Designed to reach extreme scale, usually over a LAN
- Does not require software to have any knowledge of the underlying physical storage
- Example: OpenStack Swift
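To make the object storage idea concrete, here is a minimal sketch, in Python, of the access model the bullets above describe: objects live in a flat namespace of simple keys, and metadata is a first-class citizen alongside the data. This is a toy in-memory illustration, not the actual OpenStack Swift API; the class and method names are invented for this example.

```python
import hashlib

class ToyObjectStore:
    """Toy in-memory model of an object store: a flat namespace of keys,
    each holding opaque data plus metadata of equal standing."""

    def __init__(self):
        self._objects = {}  # key -> (data bytes, metadata dict)

    def put(self, key, data, **metadata):
        # Objects are addressed by a simple reference (the key), not by a
        # file path or a physical block location.
        metadata = dict(metadata,
                        etag=hashlib.md5(data).hexdigest(),
                        size=len(data))
        self._objects[key] = (data, metadata)
        return metadata["etag"]

    def get(self, key):
        # Returns the data together with its metadata.
        return self._objects[key]

    def head(self, key):
        # Metadata can be inspected without retrieving the data itself.
        return self._objects[key][1]

store = ToyObjectStore()
store.put("backups/2017/q1.tar.gz", b"archive bytes",
          content_type="application/gzip")
data, meta = store.get("backups/2017/q1.tar.gz")
```

Note that the caller never sees (or needs to know) where the bytes physically live, which is what lets real object stores scale out across many nodes behind one namespace.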
As the storage industry has matured, vendors have developed approaches designed to add scale and deliver value-added services. Don't be afraid to question their continued use. Here are suggested implementation categories to consider as you plan your approach.
Cloud-Based Storage
Many organizations are considering cloud-based storage for its efficiency and economics (Amazon, Azure, Dropbox, etc.). This may be the right approach, but it is not to be taken lightly. You must clearly understand the real costs, service level agreements (SLAs), data availability (resiliency, backup/restore), and most importantly the data lifecycle: how data is imported, exported, used, and destroyed, and what each step really costs.
Scale Out Servers
Many new solutions use standard servers optimized for storage (HDD, flash), with software providing the storage features and functions; this is known as software-defined storage (SDS). The primary objective is to use less expensive, off-the-shelf hardware with shorter development and validation cycles. SDS often follows an appliance-like, scale-out model that is not necessarily located on premises.
Conventional Storage
Conventional storage is a heavyweight legacy solution. Examples rely primarily on specialized hardware, such as RAID for availability and Fibre Channel HBAs plus switches for scale. These systems usually reside on premises but can be replicated to other locations. Vendor updates generally require storage-specific hardware and often have lengthy time-to-market.
We will refer back to our storage framework in future articles as we explore the pros and cons of key technology directions, disruptions and solutions.
Think about these five questions, and don't be afraid to challenge the use of what you have today. We tend to shoehorn "what we need to do" into "what we are already doing." As data changes our industry, this can be a problem, so stay tuned!