We work in an industry that loves buzzwords, buzz phrases, and even buzz acronyms. Often, when a new term is unleashed on the industry, it is so loosely defined that it can be difficult to relate what one vendor is saying to what another is saying. Fortunately, the nebulous nature of these terms usually gives way to industry-standard definitions. Cloud computing is a great example of this phenomenon. When it first launched, it was seen as the magic potion that would conquer all corporate technology challenges. As it has settled into the industry, I think it's safe to say that the cloud definition now includes several well-defined subcategories, including IaaS, SaaS, and PaaS, all of them tangible. Of course, there is always room for interpretation.
As we begin 2014, the new buzz phrase seems to be "software-defined," and like the keywords that have come before it, it is open to some interpretation. Generally speaking, however, the phrase refers to an IT element that once required dedicated hardware but can now be created, modified, and run entirely through an abstraction layer that sits above the hardware. The example of compute makes this clearest. In years past, a server was defined by its physical attributes: the number of processors, amount of memory, disk space, and I/O adapters. With server virtualization, however, the hypervisor decouples those physical attributes from the server instance and defines what the OS and application see in software.
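To make that decoupling concrete, here is a minimal sketch of a server instance defined entirely in software, expressed in libvirt's domain XML (one common hypervisor-management format; the same idea applies to VMware's equivalents). The element names are real libvirt syntax, but the VM name, sizes, and disk path are hypothetical values chosen purely for illustration:

```xml
<!-- A "server" defined purely as data: no physical box has these attributes. -->
<domain type='kvm'>
  <name>example-vm</name>            <!-- hypothetical name -->
  <memory unit='GiB'>4</memory>      <!-- the memory the guest OS will see -->
  <vcpu>2</vcpu>                     <!-- virtual processors, not physical sockets -->
  <os>
    <type arch='x86_64'>hvm</type>
  </os>
  <devices>
    <disk type='file' device='disk'>
      <!-- hypothetical path; "disk space" is just a file on shared storage -->
      <source file='/var/lib/libvirt/images/example-vm.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='default'/>    <!-- the I/O adapter is software, too -->
    </interface>
  </devices>
</domain>
```

Change the numbers in this file and the "server" changes; the hardware underneath is untouched, which is exactly the decoupling described above.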
The initial benefit of server virtualization was simply consolidation and the reclamation of unused physical resources. It quickly became apparent that this decoupling offered a host of other advantages, including higher levels of availability, simplified disaster recovery, and automation, to name a few. As the limits of server virtualization are pushed, other bottlenecks creep in to thwart further advancement; while there are certainly more, storage and networking are currently getting the most attention. As happens with every new catchphrase, vendors are quick to slap the label on existing products, further confounding would-be buyers. As it pertains to "software-defined," the easiest test is this: if you can buy the functionality as software only and run it on commodity hardware from multiple vendors (a la VMware), it's software-defined. If not, it is at best "software-controlled."
So what is the point of all this? Today, workloads still have some tie to the infrastructure they run on: perhaps it's replication done at the storage array, or VLAN tagging done at the network port. Once we reach the point where every workload attribute is independent of the underlying hardware, the freedom and flexibility to define, modify, and move workloads become limitless. That is where cloud starts to become a reality.