The more you know about virtualization, the ability of a computer to support working images of systems that don't physically exist, the less sure you can be about its roots. For IBM's big commercial customers, virtualization arrived in the mid-1970s. Now the leader in virtualization, IBM was a laggard back then, and there is every possibility that virtualization technology from others will yet upstage Big Blue's achievements. In computing, stardom can be as ephemeral as the theatrical ghosts of 1862 whose stunning impression on audiences set the stage for technology that first appeared nearly a century later.
IBM offers virtualization technology in all its computer lines, as it explains through its Systems Software Information Center. The most mature and arguably the most comprehensive offerings are available to users of zSeries mainframes. pSeries and iSeries both provide virtualization that is pretty sophisticated, too. The story isn't quite as impressive in the xSeries arena, where most of the products use X86 chips and, consequently, are still limited by an architecture that stands in the way of virtualization. If you like, you can pin some of the blame on Alan Turing.
In 1936, at Cambridge, Turing proved that there was a theoretical computing machine, which we now call a Turing machine, capable of imitating the functions of any other computing machine of this type. What is not guaranteed is the machine's efficiency. Nevertheless, the work done by Turing showed very clearly that you could have computing wheels within wheels, an idea brilliantly turned into actual computers by IBM with its System/360. Various models in the System/360 product line were based on different computers, and all these computers did only one thing: They emulated the System/360 architecture. In a sense, they were all virtual machines supporting a single image of a System/360 built of firmware and hard logic.
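Turing's universality result can be sketched in a few lines: one program, the simulator, imitates any machine described purely as data, much as each System/360 model's internal engine imitated the common architecture. The sketch below is purely illustrative, not anyone's historical code:

```python
# A minimal Turing machine simulator: one program (the simulator)
# imitating another machine that exists only as a transition table.

def run(transitions, tape, state="start", accept="halt", max_steps=1000):
    """Execute a table of the form {(state, symbol): (new_state,
    written_symbol, move)}, where move is -1, 0, or +1."""
    cells = dict(enumerate(tape))  # sparse tape; blanks read as "_"
    head = 0
    for _ in range(max_steps):
        if state == accept:
            break
        symbol = cells.get(head, "_")
        state, cells[head], move = transitions[(state, symbol)]
        head += move
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Example machine: flip every bit, then halt at the first blank cell.
FLIP = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", 0),
}

print(run(FLIP, "1011"))  # prints 0100
```

The simulator's own efficiency is irrelevant to the point, which is exactly Turing's: imitation is always possible, speed is not promised.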
Still, the System/360 line didn't have any of the virtualization features that are now becoming universal. The only step toward virtualization visible to System/360 users appeared in the very special 360/67, which had virtual memory. With its next generation of mainframes, the System/370, IBM first offered real memory systems, then models that included virtual memory, as well as an upgrade that brought virtual memory to installed real memory machines.
IBM's virtual memory was not a first. Burroughs had virtual memory on its B5000 and B5500 systems a couple of years before the debut of the System/360. And Burroughs learned about virtualization from the Atlas project at the University of Manchester.
Still, it took IBM to make virtual memory a commercial reality, because IBM had the whole package: disk drives, tape drives, microelectronics fabrication technology, memory manufacturing technology, software expertise, an outstanding sales team, and a customer base that trusted IBM to provide strategic technology for bookkeeping and other record keeping.
IBM also had a vision that encouraged the creation of virtual machines, even if its sales force and, for the most part, its commercial customers had absolutely no idea where this concept would lead. Like many ideas at IBM, this vision became the basis of products only after a shock. In the mid-1960s, MIT wanted a machine with hardware that would support multiple levels of security for its Project MAC. IBM lost out to General Electric, and the people at MIT built the impressive Multics timesharing system on GE's hardware. Bell Labs, at the time the most prestigious industrial research facility in the USA, joined the project, left the project, and glommed a number of key ideas that led to the creation of Unix and its descendants.
In reaction, IBM made an incredible effort to go beyond the technology GE provided and created CP/CMS, a system that, in effect, made a single computer look like a bunch of complete System/360s. From there to the VM family of operating systems was a path straight enough to be mapped out in a brief reflective essay on the still lively web site dedicated to the story of Multics and its developers.
The timesharing wars of the 1960s and 1970s are long since over, and their lessons may be forgotten by IBM's managers, but the ghosts still lurk on the battlefield, which is now the Internet. And Pepper's Ghost still lives in cyberspace, too.
Pepper's Ghost, which probably should be called Dircks' and Pepper's Ghost, is an astounding theatrical effect that, for years after its public debut in 1862, captivated theatre audiences and inspired many other theatrical effects, magical illusions, and other developments that culminated in the invention of the motion picture. The connection between ghosts and virtual reality persists in our culture and language. Intelligence operatives whose tradecraft includes the assumption of virtual identities are sometimes called spooks, and the head of the CIA is none other than a fellow named Polter Geist, or something like that.
The concept was that of Henry Dircks, an inventor from Liverpool. John Henry Pepper, a lecturer at the Royal Polytechnic Institute in London, working with Dircks, perfected it. Basically, by projecting the image of an actor onto the surface of a piece of glass placed between an audience and a stage with other actors, the virtual image, the ghost, could interact with the directly visible players. The audience didn't know the glass was there; all they saw was the translucent image of the projected player interacting with the live players who could pass objects, including themselves, through the image.
It might have been a poor virtual reality, although not for an audience that wanted to believe in what it saw, but so is the virtualization on X86 platforms.
The processors used in all IBM's other servers, Sun engines, Itanium CPUs, and pretty much all the other chips created with servers in mind can provide multiple levels of control, so a program at the highest level can stay on top of things done by software, including whole operating environments, running at lower levels. But X86 chips do not yet provide as capable a hardware basis for virtualization. Both Intel and AMD will be enhancing their chips to remedy what is now seen as a lack, at first for circuits aimed at the X86 server market and eventually, we believe, for all their chips. But the rollout will be measured, and the technology will be adjusted as necessary as the chip makers gain field experience.
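The classic requirement, formalized by Popek and Goldberg in 1974, is that every instruction sensitive to privileged state must trap to the virtual machine monitor when a guest runs it. The X86 of this era breaks that rule: a handful of instructions, SMSW and POPF among them, behave differently at user privilege without trapping. The sketch below is a deliberately simplified illustration of trap-and-emulate dispatch, using hypothetical instruction names rather than real X86 opcodes:

```python
# Toy trap-and-emulate sketch (hypothetical instruction set, not
# real X86 semantics). Sensitive instructions must trap so the
# monitor can service them against the guest's virtual state.

SENSITIVE = {"read_control_reg", "disable_interrupts"}

class Monitor:
    """A toy virtual machine monitor holding one guest's virtual state."""
    def __init__(self):
        self.guest_state = {"control_reg": 0x1F, "interrupts_on": True}

    def emulate(self, instr):
        # Service the trap against the guest's VIRTUAL state, so the
        # guest never sees or alters the host's real privileged registers.
        if instr == "read_control_reg":
            return self.guest_state["control_reg"]
        if instr == "disable_interrupts":
            self.guest_state["interrupts_on"] = False
            return None

def execute(instr, monitor, hardware_traps=True):
    """Run one guest instruction at user privilege."""
    if instr not in SENSITIVE:
        return "executed directly on the hardware"  # the safe fast path
    if hardware_traps:
        return monitor.emulate(instr)  # control reaches the monitor
    # The X86 flaw: some sensitive instructions execute silently at
    # user privilege instead of trapping, so the guest can observe
    # (or silently fail to change) real machine state.
    return "host state leaked, monitor never notified"

vmm = Monitor()
print(execute("read_control_reg", vmm))  # prints 31, the guest's virtual value
print(execute("read_control_reg", vmm, hardware_traps=False))
```

The hardware assistance Intel and AMD were preparing amounts to making the `hardware_traps=True` branch hold for every sensitive instruction, which is what spares software schemes the binary rewriting and guest modification they otherwise need.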
Two of the best known X86 virtualization schemes, VMWare and Xen, explicitly acknowledge the limitations of the processors on which their code runs, and the technical community has been well informed about the situation. But living with the state of the art and being content with it are two different things entirely.
The possibility that X86 systems and applications can violate the integrity of virtualization schemes means that VMWare, Xen, and their ilk are, at least in the opinion of skeptics, not a proper basis for production systems. Both schemes work well if the user plays by the rules and no accident or hack penetrates the mirrors and smoke behind the virtual illusion. But that's not good enough for the most cautious users, who daily encounter worms, viruses, and intrusion attempts on corporate systems, personal computers, PDAs and even their cell phones. Any successful penetration of a virtual machine monitor can be as deadly as the injection of a rogue rootkit in an ordinary operating system.
VMWare believes its virtualization software on an X86 platform is as secure as any operating system, and it has a lot of independent support for its confidence. But, as with more familiar operating systems, security experts probing VMWare do, from time to time, discover vulnerabilities. VMWare has been good at addressing security matters, but has yet to gain the degree of user confidence it feels it deserves. Xen uses different technology and claims production quality resiliency only when hosted environments have been modified to work within its virtualization scheme.
The rush to virtualize the X86 world is now underway, and the first solutions with hardware support to bolster the software of virtualization engines will reach the market later this year. IBM is hoping its additions to VMWare will be a success. Microsoft is moving ahead with its virtual versions of Windows Server software, but, like everyone else, Microsoft really needs Intel and AMD to provide hardware support. Linux users are likely to favor Xen, which, like other schemes, will be adapted to take advantage of any extra hardware assistance the chip makers can provide. Solaris users can pretty much count on Sun to exploit any new hardware wrinkles in the X86 space, too.
The result, if not this year then certainly next, will be inexpensive servers with a lot more security and stability than current ones. The benefits will be of great help not only to corporate users, who constantly seek to improve the resilience of their servers, but also to small companies that depend on ISPs with shared servers for the integrity of their web sites, email services, and other Internet-based functions. Between these extremes, mid-sized companies with X86 servers will quickly come to appreciate the benefits of systems and applications that run inside protective supervisory programs. In short, virtualization is going to bring about a boom in server replacements, but only when the technology is shown to work as promised.
For the rest of the server world, which already has access to machines with their favorite architecture that can deliver virtualization, there will be progress, too. All the server makers will have to improve their virtualization schemes so their premium products can remain ahead of the less costly alternatives in the X86 universe. If they don't, their claims of superiority will become as transparent and ephemeral as Pepper's Ghost.
— Hesh Wiener January 2006