
Getting Started with Virtualization

Let’s begin with a simple definition: virtualization creates the illusion of multiple separate computers (known as virtual machines or VMs) running concurrently on a single physical computer. Each of these VMs appears to have its own CPU, memory, disks, network cards, USB ports, and so on. It can be powered up, booted, shut down and powered off independently. New virtual machines can be created as required, and destroyed (or terminated) when they're no longer needed. Each VM can have an operating system installed in it, and the various VMs within a single physical computer don't all have to be running the same OS: one might be running Ubuntu Linux, one Red Hat, one Windows 7 and one Windows XP.

In architectural terms, the layer of software that performs this magic can either run directly on the physical hardware, or on top of a host OS. These two arrangements are sometimes known as Type 1 and Type 2 hypervisors respectively, a classification that comes from Robert Goldberg's 1973 doctoral thesis at Harvard.

A Type 2 hypervisor runs on top of a host OS, providing virtual hardware on which guest OSes may be installed.

In the Linux world, Xen is probably the best-known Type 1 implementation, whereas the popular VMware Workstation and VMware Player products are Type 2. The OSes on the virtual machines are known as guest OSes.

The idea of virtualizing hardware is hard to get your head around until you actually see it in action. The light bulb moment for me was the first time I powered up a VM and found myself in amongst its BIOS setup screens. After installing Linux (from standard installation media) and watching the installer discover what it thought was real hardware, it became apparent just how complete an illusion of PC hardware virtualization actually provides.

The virtualized hardware may be quite different from the underlying physical devices - for example, it can provide multiple CD drives (virtualized as ISO file images within the host OS's file system), and the virtualized network interface is almost certainly not the same brand as the one providing the actual physical connection. The storage space for virtual disk drives is taken from the file system of the host OS. Within the host system, it will appear as one or more large files.
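To make this concrete, here's a minimal sketch of how backing storage for a virtual disk might be allocated by hand (the filename is hypothetical, and real hypervisors typically use richer formats such as qcow2 or VMDK via tools like qemu-img). Many hypervisors create sparse files, so a "large" virtual disk initially occupies almost no real space on the host:

```shell
# Create a 2GB sparse file to act as a raw virtual disk image.
# The guest would see a 2GB drive; the host just sees one large file.
truncate -s 2G guest-disk.raw

# Apparent size versus actual space used: 'ls' reports 2GB, but
# 'du' shows the sparse file consumes almost no real disk space
# until the guest starts writing to it.
ls -lh guest-disk.raw
du -h guest-disk.raw
```

This also explains why copying a VM to another host is often as simple as copying a handful of files.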

It's worth taking a moment to compare the degree of isolation offered by a VM with that offered by a single process within an operating system. All of the processes within a single OS are separate to some extent. They all have a separate memory address space, and they all operate under the illusion of having the CPU all to themselves. But they share the file system, an IP address, and the TCP port space so that, for example, if one of those processes is a web server listening on port 80, no other process will be allowed to bind to that port. And of course they share the operating system.

Contain yourself

Virtual machines offer a far higher level of containment. For example each VM has a separate file system, a separate IP address, and a separate TCP port space (so it's fine to have web servers listening on port 80 within different VMs). Each VM can be started and stopped without affecting the others. It can have (virtual) hardware added or removed, and have its own operating system installed.

For Linux desktop users (including myself) virtualization is a useful way to install, test and play with the latest and greatest Linux distros without impacting your main machine. Also, for users who have migrated to Linux but have a couple of legacy Windows applications, running Windows inside a VM offers a far more convenient solution than dual-booting.

The training company Learning Tree International uses virtualization extensively to support lab work. The machine the students see during hands-on exercises is usually a VM (Red Hat, Windows 7, Solaris) running on a Windows XP host.

This provides a high degree of standardization in the way that course loads are built and imaged onto the classroom machines. The VMs are run in full-screen mode and in most cases attendees reach the end of the week without ever realizing there was a host operating system under the covers, unless by chance they hit the deliberately obscure hot-key combination needed to drop the input focus out of the VM.

Canonical's training course, Deploying Ubuntu Server Edition, uses KVM/Qemu virtualization to provide each student with three servers for lab exercises. These are used, for example, to demonstrate the deployment of proxies and mirrors for the Ubuntu repositories. The students are very aware that virtualization is in use, as they're required to switch between the VMs and even get to build their own.

But it is perhaps in the server room that the value of virtualization is greatest. Traditionally, many servers run way below capacity - figures of 10% to 20% utilization are common, and a company recently estimated that 150 of its 400 servers were using just 3% of their capacity.

Reducing rack space

Using virtualization to consolidate the workload of these machines can reduce the rack space, power and cooling requirements of the server room or data center.

Two attendees on a recent course, who managed the computing infrastructure of a major UK university, reported a 25-to-1 consolidation ratio, replacing 100 old servers with just four (presumably newer and more powerful) machines. This is good ecologically, given that the carbon footprint of the IT industry is roughly the same as that of the airline industry (about 2%). And of course, virtualization technology is at the heart of that seemingly infinite pool of rapidly-provisioned, self-service resources that we call the cloud:

Red Hat's Enterprise Virtualization for Desktops uses server-room virtualization (KVM and Qemu) to host virtual desktops (essentially instances of Red Hat Linux or Windows). Users sit at 'thin client' machines (perhaps repurposed PCs) and access these desktops using SPICE (http://spice-space.org), a modern remote-desktop access technology that (unlike older remote desktop protocols such as RDP) can offer a user experience close to that of actually sitting at the machine where the desktop is running.

Virtualization has given rise to a useful little sub-industry of so-called virtual appliances - pre-built VM images configured for some specific purpose. Suppose you want to try Magento to build an e-commerce shop front - you can either manually install it (along with, presumably, Apache, MySQL, PHP, phpMyAdmin and other dependencies) and configure it, or you can download and run a virtual appliance in which it's all installed, configured, and ready to go. The VMware site (www.vmware.com/appliances) lists more than 1,800 such appliances, including open source software stacks such as SugarCRM, Drupal, Alfresco, Joomla and many others. These can all be downloaded for free, as can the VMware Player, which is all you need to run them.
  • Bitnami is a major contributor to the VMware virtual appliance library. Its website (bitnami.org) carries almost 30 'application stacks' and 11 'infrastructure stacks' to install into your existing Linux system, as VM images, or as Amazon Machine Images (AMIs) that can be launched in the EC2 cloud. 
  • Turnkey Linux (www.turnkeylinux.org) carries about 45 appliances, all based on Ubuntu 10.04 (the current LTS version). Each is provided as a downloadable VM image, or as an ISO image that you could install into an 'empty' VM (or indeed, onto bare metal). They also offer the option to launch the appliances directly into the Amazon EC2 cloud. 
  • Jumpbox (www.jumpbox.com) also offers a portfolio of about 60 appliances, packaged ready for a variety of virtualization products including VMware, Xen, Parallels, Microsoft's Hyper-V and Sun's VirtualBox. Again, many of its appliances are also packaged for immediate deployment within the Amazon EC2 cloud. It's a subscription-based service with a flat monthly fee, but there is a 15-day free trial. 
 A sample of the preconfigured virtual appliances available from Turnkey Linux. There are similar offerings from Bitnami and Jumpbox.
If you would like a more DIY (and more strongly Linux-flavored) virtualization experience, consider installing KVM, which is a full virtualization solution for Linux on x86 hardware. It requires CPU virtualization extensions (Intel VT or AMD-V - see boxout) and consists of a loadable kernel module, kvm.ko, and a processor-specific module, kvm-intel.ko or kvm-amd.ko. KVM is used in conjunction with Qemu (pronounced kee-mew) to emulate other hardware such as network cards, disks and graphics adaptors. You'll also need to install libvirt and some userspace management tools such as virsh (for a command-line experience) or virt-manager (a graphical tool).
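Assuming the KVM and libvirt packages are installed, a first session might look something like the following sketch (the guest name 'ubuntu-test' is purely illustrative, and the script checks for the tools rather than assuming them):

```shell
# Check whether the KVM kernel modules are loaded; /proc/modules
# can be read without root privileges.
if grep -q '^kvm' /proc/modules 2>/dev/null; then
    kvm_status="KVM modules loaded (kvm plus kvm-intel or kvm-amd)"
else
    kvm_status="KVM modules not loaded (or this is not a KVM host)"
fi
echo "$kvm_status"

# If the libvirt client tools are present, list the VMs libvirt
# knows about; --all includes defined-but-stopped guests too.
if command -v virsh >/dev/null 2>&1; then
    virsh list --all
    # Typical lifecycle commands (guest name is hypothetical):
    #   virsh start ubuntu-test
    #   virsh shutdown ubuntu-test
else
    echo "virsh not found - install the libvirt client tools"
fi
```

virt-manager offers the same operations through a point-and-click interface, so the choice between the two is largely a matter of taste.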

A brief history 

The concept of virtualization is not new - the first production computer supporting full virtualization was the IBM System/360-67 mainframe, which came into service in about 1967. UNIX vendors such as Sun and HP were selling virtualized systems in the late 1990s - but only on high-end (i.e. very expensive) servers. PCs have supported virtual memory (giving each process the illusion of having the address space all to itself) since the Intel 80386 processor was introduced in 1985. Indeed, it was precisely this development that inspired Linus Torvalds to start writing Linux in the first place.

VMware, a major player in the PC virtualization market, first released VMware Workstation in 1999. But it is perhaps only since 2006 or so that PC-class machines have become powerful enough to really make virtualization fly. In particular, Intel and AMD both announced extensions to the x86 processor architecture around that time that made full virtualization possible and efficient.

Will it work for me? 

All modern servers should have processors that support virtualization, but some low-end laptops may not. A simple way to find out if your processor supports it is with the command:

$ egrep 'vmx|svm' /proc/cpuinfo

If you see either vmx or svm listed in the CPU flags, then you have Intel or AMD virtualization respectively. Be aware, though, that even if the processor supports it, it must be enabled in the BIOS to be active. (Ubuntu has a little script called kvm-ok that checks for this.)
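Putting that together, a small defensive script (just a sketch; kvm-ok, where available, does a more thorough job, including the BIOS check) can report the flag count instead of dumping raw /proc/cpuinfo lines:

```shell
# Count the logical CPUs advertising vmx (Intel) or svm (AMD).
# 'grep -c' still prints 0 when nothing matches, but exits non-zero,
# so '|| true' keeps the pipeline from failing in that case.
count=$(grep -c -E 'vmx|svm' /proc/cpuinfo || true)

if [ "$count" -gt 0 ]; then
    echo "Virtualization flags present on $count logical CPU(s)"
    echo "Remember: the feature must also be enabled in the BIOS"
else
    echo "No vmx/svm flags - full hardware virtualization unavailable"
fi
```

On a multi-core machine you'd expect the count to match the number of logical CPUs, since the flags line is repeated per core.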

If you're choosing a new laptop, verify the processor type number and (for Intel) look it up in the list at ark.intel.com/vtlist.aspx. We're not aware of an equivalent list for AMD.