Containers are stand-alone, lightweight packages that contain the libraries and executables necessary to run code. They are available for both Linux and Windows applications, and provide a runtime environment that can function independently of the host operating environment. Containers void the excuse that an application ran perfectly on one system but not so well on another. In effect, containers are an abstraction of the operating system and runtime environment, much as hypervisors are an abstraction of physical servers. Hypervisors have been evolving for the last 20 years, but the concept of containerization has been evolving even longer. In fact, shipping containerization (not to be confused with compute containerization) greatly streamlined international trade in the 1950s, because a port in Shanghai could use the same equipment to load a shipping container onto a ship as the destination port in Los Angeles would use to unload it. The standardization that came from uniform shipping containers greatly enhanced international trade. It is no accident, then, that the industry leader in compute containerization, Docker, uses a shipping container as part of its logo.
Compute container adoption is strong in many areas, such as:
- DevOps – Accelerated application deployment, and higher agility with microservices
- Cloud – Scalable, elastic compute that has a small footprint and maximizes resource utilization
- Applications – Portability across disparate platforms, with isolation to standardize infrastructure
The industry leader in hypervisors, VMware, was founded in 1998. In the early days of hypervisors, the workloads they ran were typically test and development. Fast forward to today, and it is not unusual to see 90% of a production environment running on hypervisors like VMware, Hyper-V, or KVM. In 2015, I wrote a blog discussing how IBM had taken a leadership position by integrating its storage with VMware primitives such as the vStorage API for Array Integration (VAAI) and the vSphere API for Storage Awareness (VASA). This integration enabled VMware administrators to manage IBM storage through the VMware control plane without conducting manual storage operations, often without having to engage a storage administrator at all. Now that containers are consuming production workloads, they too have become first-class citizens in the IT landscape.
The system up-time of physical machines is measured in years, and that of virtual machines in months. Container-based systems, on the other hand, measure their up-time in hours or days, because containers are ephemeral by nature. The short life span of containers has created new management challenges for IT. A popular framework for container management and orchestration, Kubernetes, orchestrates container availability while maintaining application state. This, in turn, creates a new management challenge for container storage: application state can only be preserved by ensuring that persistent storage remains available to containers that are themselves ephemeral.
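The persistence pattern described above can be sketched with standard Kubernetes objects: a PersistentVolumeClaim that outlives any single container, mounted by a pod whose replacement will reattach the same volume. This is a minimal illustration; the object names, image, and sizes are placeholders, not from the original post.

```yaml
# A claim for durable storage that exists independently of any pod.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data            # illustrative name
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
---
# A pod that mounts the claim. If the pod dies and is rescheduled,
# its replacement mounts the same volume, so application state survives.
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: nginx          # placeholder image
      volumeMounts:
        - name: data
          mountPath: /var/data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: app-data
```

The key design point is the decoupling: the pod is disposable, while the claim (and the volume bound to it) carries the state.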
The good news is that IBM has once again taken a leadership role in storage management, this time for Docker containers, just as it did with VMware. IBM has released an open-source storage plug-in for Docker containers called Ubiquity, a universal plug-in for IBM storage. When Ubiquity is used in concert with Kubernetes, containerized applications can mount IBM storage directly, without manual intervention. The foundation of any application environment is its storage infrastructure, and an application built on containers needs a solid storage foundation. IBM provides this with its enterprise-class storage hardware and/or Spectrum Scale software, using Ubiquity.
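With Ubiquity deployed as a dynamic provisioner in Kubernetes, the "no manual intervention" workflow looks roughly like the sketch below: a StorageClass points claims at IBM storage, and any claim referencing it gets a volume provisioned automatically. The provisioner string and parameters here are assumptions for illustration only; consult the Ubiquity project documentation for the exact values your release expects.

```yaml
# A storage class that delegates provisioning to the Ubiquity plug-in.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ibm-spectrum-scale      # illustrative name
provisioner: ubiquity/flex      # assumed provisioner name; verify in Ubiquity docs
parameters:
  backend: spectrum-scale       # assumed parameter; backend selection varies by release
---
# A claim against that class; Ubiquity provisions and attaches the
# IBM storage volume, with no storage administrator in the loop.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
spec:
  storageClassName: ibm-spectrum-scale
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 20Gi
```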
To learn more about IBM Spectrum Scale, check out these blogs:
Please contact your Mainline Account Executive directly, or click here to contact us with any questions.