Open Systems
OpenStack is all the rage in some circles. For those who are unfamiliar with this open source project, it began as a joint project between Rackspace Cloud and NASA, integrating code from NASA’s Nebula platform and Rackspace’s Cloud Files platform. At this point, 136 companies are using or supporting OpenStack, including AMD, Citrix Systems, Canonical, Cisco, Dell, HP, IBM, and Zenoss. It is free, open source software released under the terms of the Apache License.
Why am I bringing this up? Well, it reminds me of other “open systems” movements I’ve seen over the years. In each case, the goal was to make it possible for organizations to focus solely on the workloads that make them productive, using hardware, software, and services from just about any vendor, hosted on just about any hardware architecture, all without fear that interoperability would suffer.
In the 1970s, the industry saw minicomputer suppliers such as DEC, DG, Pr1me, and IBM claim that their systems would offer platform independence and interoperability. In the 1980s, that mantle was handed to the suppliers of UNIX-based systems, such as DEC, IBM, HP, and Sun. Microsoft picked up a version of that message and claimed that its Windows operating system offered the same platform independence. Linux appeared in the 1990s and used similar messages. Today we’re seeing history repeat itself. This time, however, OpenStack is the central focus of the movement.
Does reality match the promises?
If we examine what actually happened in each of these waves of “open systems” enthusiasm, we quickly come to the conclusion that the promise of any vendor, any hardware architecture, any application, and total interoperability never really materialized.
In each wave of “open systems,” some vendors tried to hold onto their customers by introducing clever hardware or software “lock-ins” that made it difficult to achieve the promised state of freedom.
The minicomputer suppliers had their own versions of COBOL, FORTRAN, and PL/I that offered vendor-specific features. If a customer was extremely careful to use only the common subset of these development languages, interoperability was possible. Hardware incompatibilities, such as how numbers were stored within machine words (big-endian or little-endian), made data migration challenging. Operating system job-scheduling functions made migrating complete workloads equally difficult.
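To see why byte order alone could derail a data migration, here is a minimal sketch in Python (a modern stand-in, of course; these machines long predate it) showing how the same 32-bit integer is laid out differently on big-endian and little-endian systems:

```python
import struct

value = 0x12345678  # a 32-bit integer

# Pack the same value using each byte order.
big = struct.pack(">I", value)     # big-endian bytes:    12 34 56 78
little = struct.pack("<I", value)  # little-endian bytes: 78 56 34 12

print(big.hex(), little.hex())  # 12345678 78563412

# Reading big-endian data on a little-endian machine without
# converting yields garbage: 0x78563412 instead of 0x12345678.
misread = struct.unpack("<I", big)[0]
print(hex(misread))  # 0x78563412
```

Every binary file and in-memory record moved between machines with different byte orders needed this kind of conversion, field by field, which is why “just move the data” rarely was just that.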
Although UNIX was expected to address these problems, several UNIX camps emerged that could not agree on many issues, making interoperability challenging there as well.
Microsoft clearly learned from both the minicomputer and UNIX eras. It came out with Windows NT and joined something called the “ACE Initiative.” It was going to supply the same operating system, with the same job-scheduling functions and the same programming languages, and it required suppliers to support a hardware abstraction layer (HAL) in the hope of addressing endianness and other vendor-specific hardware issues. It is clear that Microsoft’s broad promises weren’t supportable in fact. Windows NT 3.1 supported x86, MIPS, and Alpha systems (a promised SPARC port never shipped), and PowerPC support arrived with Windows NT 3.51, but the non-x86 platforms were dropped one by one: MIPS and PowerPC support ended during the Windows NT 4 era, and Alpha support was gone before Windows 2000 shipped. Microsoft also had difficulty maintaining software compatibility over time. Companies have faced challenges moving from Windows NT to Windows Server 2003, and again when moving from Windows Server 2003 to Windows Server 2008.
Linux was introduced as a commercial product in 1993 and became an intense focus of the industry. Since there was no single supplier behind the operating system, multiple camps appeared. IDC, the industry research firm, was at one point tracking nearly 400 different Linux distributions! Over time there was significant consolidation, and Linux is now available on nearly every server platform. Linux, in the form of webOS and Android, is also available on many popular handheld and smartphone platforms. As in the past with UNIX and Windows, applications optimized for one platform (called a distribution in the Linux world) may not work well, or at all, on another.
OpenStack
Here we are in the second decade of the 21st century, and the industry still faces the challenges of interoperability, application migration, and the like that were the primary targets of each of the previous open systems movements. This time the platform is OpenStack, a set of software technologies designed to support infrastructure-as-a-service (IaaS) cloud computing.
As with open systems movements in the past, the same old messages of any vendor, any platform, any application and strong interoperability are being bandied about. This time, the focus is on a much higher, much more abstracted layer of software that cares little about the hardware and operating system platform.
OpenStack is based, instead, on Web standards that are already supported by all vendors on all hardware platforms, and it can manage virtual servers running on nearly every virtual machine software platform available today. The interoperability it offers also rests upon well-known, well-supported standards.
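To make the “Web standards” point concrete, here is a minimal sketch of what talking to an OpenStack Compute endpoint looks like: plain HTTP and JSON, with nothing vendor- or hardware-specific in sight. The URL and token below are placeholders, and the exact API version path varies by deployment:

```python
import requests

# Hypothetical endpoint and token; substitute your cloud's actual
# Compute URL and a token issued by its identity service.
COMPUTE_URL = "https://cloud.example.com/compute/v2.1"
TOKEN = "replace-with-a-real-token"

# List servers through the standard HTTP/JSON interface.
resp = requests.get(
    f"{COMPUTE_URL}/servers",
    headers={"X-Auth-Token": TOKEN, "Accept": "application/json"},
)
resp.raise_for_status()

for server in resp.json()["servers"]:
    print(server["id"], server["name"])
```

Because the interface is just HTTP verbs against well-defined resources, any client on any operating system can drive the cloud the same way, which is precisely the interoperability claim being made.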
As I mentioned earlier in this article, well over 130 suppliers are already on board, along with some 1,300 major organizations. Many service providers have either announced OpenStack-based offerings or are working on them.
Will this be the open systems movement that actually sticks? Only time will tell.