April 9th, 2013
Today the race is on to virtualize all aspects of the data center. Dubbed the software-defined data center (SDDC), and closely tied to software-defined networking (SDN), it is a market IDC projects will top $3.7 billion by 2016.
It's a hot market, too: just this week, Cisco, IBM, VMware, Red Hat and others have banded together under a Linux Foundation-hosted consortium called OpenDaylight. But while this is a significant step toward virtualizing the networking layer of the data center, it may simply be a prelude to the next phase of virtualization: storage.
VMware led the way in virtualizing servers in the data center, creating enormous value for its shareholders over the last decade. Originally acquired by EMC for $635 million in 2003, VMware is now a standalone company with a market capitalization of more than $30 billion. Last year it acquired a leading SDN startup, Nicira, for nearly $1.3 billion. That move scared a lot of data center vendors, Cisco chief among them, who don't want to see VMware dominate networking virtualization as completely as it came to own server virtualization.
Too often overlooked amid the billions of dollars sloshing around the server and networking battles in SDDC is the laggard: storage. Traditional storage is a $10 billion annual business, but until recently it hasn't made much headway into virtualization.
That may be about to change.
To better understand the trends shaping the rise of the software-defined storage play, I sat down recently with Dr. Kieran Harty, CEO of Tintri, makers of storage systems for software-defined data centers, and a core virtualization pioneer himself. Harty ran engineering at VMware from 1999 to 2006, and his teams created the software products that virtualized the server side of the SDDC equation.
ReadWrite: Remind us again what VMware was trying to do a dozen years ago when your teams were focused on bringing virtualization to servers.
Harty: The basic problems virtualization solved back then were what we called server consolidation and over-provisioning. Businesses wanted to move compute workloads from large, costly, proprietary, single servers (usually Sun servers) running one application, oftentimes at only 10% of capacity, to clusters of cheap, commodity, Linux servers. VMware pioneered a technology called the hypervisor that made this possible – on the server.
ReadWrite: Today VMware enjoys roughly 90% market share in server virtualization. The spectacular success of server virtualization raises the big question of what comes next. Can the same benefits of virtualization on servers be applied to the rest of the data center?
Harty: This is what gives rise to the concept of the software-defined data center (SDDC): a data center whose infrastructure is fundamentally more flexible, automated and cost-effective; infrastructure that understands application workloads and can automatically and efficiently allocate pooled resources to match application demands. Rather than construct data centers full of over-provisioned and siloed resources, an SDDC would more efficiently utilize and share all aspects of the infrastructure: servers, networking and storage.
While servers, and to a lesser extent networks, have embraced SDDC, storage lags significantly behind and continues to cause a great deal of pain in the data center today. Fortunately, some of the key technologies that brought sweeping changes to servers and networks are now taking shape for storage.
ReadWrite: What kind of changes?
Harty: A quick look at some of the most successful disruptive technologies reveals that many of them “crossed the chasm” with the help of a few common key ingredients: standardization, hardware innovation and abstraction. In the case of server virtualization, the standardization of Intel’s x86 platform and the proliferation of the open source Linux operating system massively disrupted the server market. Armed with a new generation of multi-core processors and VMware’s hypervisor technology, server virtualization conquered the data center.
Networks followed a similar path, starting with TCP/IP standardizing the network protocol. Gigabit Ethernet increased transmission speed by an order of magnitude. OpenFlow, which laid the foundation for open, standards-based software-defined networking, paved the way for the most significant changes networks have seen in decades.
ReadWrite: What kinds of changes in standards, hardware innovation and abstraction are leading to disruption in the storage market?
Harty: For 20 years, little has changed in the world of legacy storage designed for physical environments. As data centers become more virtualized, there is a growing gap due to the complete mismatch between how storage systems were designed and the demands of virtual environments. It's a bit like people who don't speak the same language and have a hard time understanding each other: storage speaks LUNs and volumes; servers speak VMs.
As a result, they don't understand each other very well. Storage allocation, management and performance troubleshooting for the virtualized infrastructure are difficult, if not impossible, with legacy storage. Companies have tried to work around this obstacle by over-provisioning storage, which is expensive and adds complexity.
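To make that mismatch concrete, here is a toy sketch (all names and numbers are hypothetical, not any vendor's API): the hypervisor accounts for IO per VM, but a legacy array only sees the aggregate IO hitting the LUN that backs the shared datastore, so a noisy VM disappears into the total.

```python
# Toy model of the LUN/VM mismatch (hypothetical names and figures).
from collections import defaultdict

# Hypervisor view: per-VM IOPS on a shared datastore.
vm_iops = {"web-01": 800, "db-01": 4200, "build-07": 150}

# Array view: every VM's traffic lands on the same backing LUN.
vm_to_lun = {"web-01": "LUN-5", "db-01": "LUN-5", "build-07": "LUN-5"}

lun_iops = defaultdict(int)
for vm, iops in vm_iops.items():
    lun_iops[vm_to_lun[vm]] += iops

# The array reports one aggregate number; the noisy db-01 is
# indistinguishable from the quiet VMs at this layer.
print(dict(lun_iops))
```

Troubleshooting from the array side means starting from that single aggregate figure and guessing which VM is responsible, which is exactly the visibility gap Harty describes.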
ReadWrite: Is this where flash technology enters and disrupts storage? Can we power through these legacy storage challenges with performance improvements an order of magnitude beyond those of traditional spinning disk?
Harty: Storage has always been about performance and data management. Flash removes the performance challenges and levels the competitive playing field for storage vendors. Flash enables very dense storage systems that can host thousands of VMs in just a few rack units of space. But flash by itself – without the intelligence – only gets us so far.
And while some industry players are attempting to make virtualization products adapt to legacy storage through APIs, or to retrofit legacy storage to become virtualization-aware, neither goes far enough to bridge the yawning gap between these two mismatched technologies: you can put lipstick on a pig, but it's still a pig. What is needed to solve this problem is storage that has been completely redefined to operate in the virtual environment and uses the constructs of virtualization. In short, VM-aware storage.
ReadWrite: What do you mean, VM-aware?
Harty: Virtualized environments require storage designed for virtualization. Enterprises expecting to get the full benefit out of the software-defined data center need storage that’s simple and agile to manage, while delivering the performance required by modern applications. They will need storage that understands the IO patterns of virtual environments and that automatically manages quality of service (QoS) for each VM.
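Per-VM QoS of the kind Harty describes can be sketched as a rate limit kept for each VM rather than for a shared LUN. The following is a minimal, illustrative token-bucket model (hypothetical names; not Tintri's implementation):

```python
# Toy per-VM IOPS limiter (illustrative only, not any vendor's design).
class VmQosPolicy:
    """Tracks an IOPS budget for a single VM within one time window."""

    def __init__(self, max_iops: int):
        self.max_iops = max_iops
        self.tokens = max_iops  # refilled once per second by refill()

    def admit(self, requested: int) -> int:
        """Admit up to `requested` IOs now; return how many may proceed."""
        granted = min(requested, self.tokens)
        self.tokens -= granted
        return granted

    def refill(self) -> None:
        self.tokens = self.max_iops


# Because each VM has its own policy, a noisy neighbor exhausts only
# its own budget and cannot starve the other VMs on the same storage.
policies = {"db-01": VmQosPolicy(1000), "web-01": VmQosPolicy(500)}
granted = policies["db-01"].admit(1500)
print(granted)  # 1000 admitted now; the remaining 500 wait for a refill
```

The design point is simply where the accounting lives: per VM, matching the hypervisor's unit of management, instead of per LUN.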
We eliminate an entire layer of unnecessary complexity if we stop talking about LUNs or volumes.
The broad adoption of virtual machines as the data center lingua franca gives us de facto standardization for software-defined storage. The rapid growth and declining cost of flash technology provides the hardware innovation.
This leaves one last essential missing piece: an abstraction between storage and VMs that understands VMs while pooling the underlying storage resources, delivering storage that is simple, high-performing and cost-effective. We call that VM-aware storage.