
The secret sauce: has virtualization finally come of age for software development?

My first use of virtualization was for software development. Procuring development machines and building out test environments that reasonably mirrored production was expensive. Even when money and resources weren't the issue, the lead time involved in assembling enough hardware to create an integration environment for new software was.

First-generation server imaging and duplication tools like Ghost helped, but it was still a nasty, tedious process to build out a server farm complete with domain controllers, database servers, email servers, load balancers, and so on. Virtualization software changed all that.

VMware was first on the block, and its product provided inexpensive, easy-to-manage tools that let us create server images that could be cloned on demand and repurposed for a specific role. Since it was possible to "snapshot" virtual machines at any point in time, they were very useful for testing things like new code, patches and fixes, and third-party tools. If something didn't work out quite right, it was easy to return the machine to the state it was in before the new component was introduced.
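That snapshot-and-revert loop is easy to script. Here is a minimal sketch that uses Python's standard library to drive VMware's vmrun command-line tool; the .vmx path, snapshot name, and test script are hypothetical placeholders, and the exact vmrun invocation can vary by product and version (you may need vmrun's -T flag to name the product).

# Minimal sketch: snapshot a VMware VM before testing a new component,
# then revert if the test run fails. Assumes VMware's "vmrun" CLI is on
# the PATH; the .vmx path, snapshot name, and test script are placeholders.
import subprocess
import sys

VMX = "/vms/test-web01/test-web01.vmx"   # hypothetical VM
SNAPSHOT = "before-new-component"        # hypothetical snapshot name

def vmrun(*args):
    """Run a vmrun subcommand and raise if it fails."""
    subprocess.run(["vmrun", *args], check=True)

def run_tests():
    """Placeholder for whatever validates the new code, patch, or tool."""
    return subprocess.run(["./integration-tests.sh"]).returncode == 0

if __name__ == "__main__":
    vmrun("snapshot", VMX, SNAPSHOT)          # capture the known-good state
    if run_tests():
        print("Tests passed; keeping the changes.")
    else:
        print("Tests failed; rolling back.")
        vmrun("revertToSnapshot", VMX, SNAPSHOT)
        sys.exit(1)

The same pattern works with any hypervisor that exposes snapshot and revert commands.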

Bypass the Engineers

Virtualization also gave us a way around the hassle of working with the server engineering team. Getting those guys to procure, rack, build, patch, and turn a server over to the application team was an arduous and unpleasant process. Using virtualization to repurpose a development server we already owned allowed us to seriously consider agile methods of software development.

Developer workstations had the same problem in miniature. If you were really cool, you had a couple (or more) of hard drives for your development machine: one would boot to Windows (probably NT) and another would boot to Windows Server. If you were over-the-top cool, you had a third hard drive that booted into Linux.

Virtualization solved that issue too. Using VMware or Virtual PC, it was relatively easy to configure your laptop to launch a virtual instance of whatever server system you were developing on. An added benefit was that you could install the development tools in those virtual servers--something that was, and still is, taboo on a "real" server in a production network.
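To make that concrete, here is a minimal sketch (Python standard library only) that boots a headless development VM on a laptop and waits for it to answer on a port before you deploy or test against it. It again assumes VMware's vmrun CLI; the .vmx path, guest address, and port are hypothetical.

# Minimal sketch: start a headless "dev server" VM and wait for it to
# accept connections before kicking off a build or deploy against it.
# The .vmx path, guest IP, and port below are placeholders.
import socket
import subprocess
import time

VMX = "/vms/dev-server/dev-server.vmx"    # hypothetical guest image
GUEST_ADDR = ("192.168.142.10", 80)       # hypothetical guest IP and port

def wait_for_port(addr, timeout=300):
    """Poll until the guest accepts TCP connections or the timeout expires."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with socket.create_connection(addr, timeout=5):
                return True
        except OSError:
            time.sleep(5)
    return False

subprocess.run(["vmrun", "start", VMX, "nogui"], check=True)
if wait_for_port(GUEST_ADDR):
    print("Dev server is up; deploy and test against it.")
else:
    print("Guest never came up; check the VM console.")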

Looking back on those times with a more mature attitude, I suppose I regret breaking or avoiding proper IT governance by creating all those virtual servers without the knowledge of the engineering team, but I don't regret all the additional work we were able to accomplish.

In those times there was no consideration of using virtualization for production servers. I am talking 10-plus years ago.

The hardware we were using to host the virtual machine manager and the VMs themselves was pushed to the maximum; it simply wasn't beefy enough to take full advantage of virtualization. The two major limiting factors were processors (or cores) and RAM. If you were running your VMs locally, you were also constrained by hard drive space.

VM images and their associated snapshots required lots of disk space--even more so if your applications consumed or produced significant quantities of data. The standard solution was to replace the CD cradle in your laptop with a second hard drive.

Hardware Overkill

We are in a different world now. Hardware technology continues to improve. Even mid-range servers now have multiple sockets with multiple cores, a 64-bit operating system can address astonishing quantities of RAM, and fast network storage is readily available. But here's the deal: server specifications now exceed the requirements of the applications they host.

I would love to have a 32-core server with a terabyte of memory, but none of the applications I support can really make effective use of that much power. Most of the things I do are Web-based and not terribly dependent on processor time. RAM is always a good thing to have, but 8-16 GB of memory is about all I really need on a Web server.

So in order to get maximum effectiveness out of the hardware now available, I need to use virtualization. That way I can dole out those processors and that memory across multiple server instances. And if I want to do that, I no longer have a short list of virtualization products that begins and ends with VMware. Microsoft (Hyper-V), Citrix (XenServer), and Red Hat (Enterprise Virtualization) all offer products that compete directly with the latest from VMware--vSphere.
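As a concrete illustration of doling out a big host's resources, here is a minimal sketch that caps each guest's vCPU count and memory using libvirt's virsh CLI (one common way to manage KVM or Xen hosts) driven from Python. The domain names and sizes are hypothetical, and the flags shown change the persistent configuration rather than the running guest.

# Minimal sketch: carve one beefy host into several right-sized guests by
# capping each VM's vCPU count and memory. Uses libvirt's "virsh" CLI via
# subprocess; domain names and sizes are hypothetical examples.
import subprocess

# Hypothetical service catalog: (domain name, vCPUs, memory in MiB)
GUESTS = [
    ("web01", 2, 8192),
    ("web02", 2, 8192),
    ("db01",  4, 16384),
]

def virsh(*args):
    subprocess.run(["virsh", *args], check=True)

for name, vcpus, mem_mib in GUESTS:
    # --config edits the persistent definition; it takes effect on next boot.
    virsh("setvcpus", name, str(vcpus), "--maximum", "--config")
    virsh("setvcpus", name, str(vcpus), "--config")
    virsh("setmaxmem", name, str(mem_mib) + "M", "--config")
    virsh("setmem", name, str(mem_mib) + "M", "--config")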

OS as VM

It's a little ironic. Isn't an operating system just a virtual machine running on the server metal? The purpose of an operating system is to provide a mechanism to effectively use the processors, memory, I/O devices, connectivity devices, and storage devices that the server provides.

Early operating systems provided only bare-bones control of the physical machine. In the early days of PCs, applications handled printing, screen display, file and memory management, and most everything else. PC (and, by extension, server) operating systems have since abstracted most of those things so that applications use OS APIs for access to everything--including the processor.

Now that we have more powerful host machines, we find the need to create a further layer of abstraction by installing a hypervisor on the bare metal. The hypervisor provides the machine/software boundary. So do we really need another layer of abstraction between the machine and our application? Given that I am discussing the latest virtualization products and techniques, this could be considered a silly question. Yet the answer is critical, because it can guide us to the best way to use virtualization and point us to the proper product for our needs and environment.

The premise starts with this--the servers available to us exceed the requirements of our individual applications. Our response could take one of three forms:

The first is to rebuild our applications to be infinitely multi-threaded and to consume vast quantities of memory efficiently and effectively. That is obviously not a realistic approach. There are valid reasons for running multiple instances of the same application, though some of those reasons evaporate if the instances run in virtual machines hosted on the same physical machine. This is important: if we are going to use virtualization for production servers, the VMs should be distributed across multiple host servers, and those hosts should be distributed across multiple zones in the data center with separate power sources, external connections, storage devices, and so on--at least if we are interested in maximizing our SLAs for uptime and redundancy as part of the virtualization initiative.
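To make that placement rule concrete before moving on, here is a minimal, self-contained sketch of an anti-affinity check: it refuses to put two instances of the same application on the same host and prefers to spread them across zones. The host and zone names are invented for illustration; a real deployment would pull this inventory from the hypervisor manager.

# Minimal sketch of an anti-affinity placement rule: never put two
# instances of the same application on the same host, and spread them
# across data-center zones when possible. Hosts and zones are made up.
HOSTS = {
    "host-a1": "zone-a",
    "host-a2": "zone-a",
    "host-b1": "zone-b",
    "host-c1": "zone-c",
}

def place_instances(app, count, placements):
    """Greedily assign `count` instances of `app` to hosts.
    `placements` maps host -> set of apps already on that host."""
    chosen = []
    for _ in range(count):
        used_zones = {HOSTS[h] for h in chosen}
        # Hard rule: skip hosts that already run this app.
        candidates = [h for h in HOSTS
                      if app not in placements.get(h, set()) and h not in chosen]
        if not candidates:
            raise RuntimeError("not enough hosts to satisfy anti-affinity for " + app)
        # Soft rule: prefer a host in a zone we have not used yet.
        fresh = [h for h in candidates if HOSTS[h] not in used_zones]
        host = (fresh or candidates)[0]
        chosen.append(host)
        placements.setdefault(host, set()).add(app)
    return chosen

placements = {}
print(place_instances("billing-web", 3, placements))
# e.g. ['host-a1', 'host-b1', 'host-c1'] -- one instance per zone

Commercial platforms offer the same idea as built-in policy (vSphere's DRS anti-affinity rules, for example); the point is simply that the rule has to be stated somewhere, not left to chance.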

The second is to lift and move our existing applications to virtual servers using physical-to-virtual (P2V) tools. Legacy applications are typically running on deprecated (or out-of-support) servers, and their requirements are relatively lightweight. In theory we could retire many existing servers by virtualizing them along with their applications. The benefits are manifold: fewer machines mean less energy consumption and less heat generation; newer machines are more efficient machines (think green); and fewer machines also means less rack space, less cabling, and fewer devices that require actual hands-on support. All of that is quantifiable and real, but don't get hung up on the cost and resource savings. They are important, but they don't tell the whole story. Other considerations are not so easily identified. Not all legacy applications respond well to a virtualized environment. An application written in C and running on OS/2 may not work exactly as expected in a VM built on a 64-bit Itanium machine. There are differences in the way a virtualized operating system and a bare-metal install interact with the physical machine. For the most part they are not significant, but if your application expects a particular processor instruction or accesses the CPU directly, it may fail.

Old School

Another consideration with legacy applications has to do with the very reason they are still running on "old" hardware: the source code, installation bits, and/or configuration notes are often missing. Instead of planning a direct P2V migration, it might make more sense to install and configure the application from scratch on a fresh, modern (albeit virtual) operating system. That takes a little more time, but at the end of the day you have an application that is no longer a black box and can be supported far more easily. And by installing the legacy application in a virtual environment, we have reduced our risk: if the application fails in the new environment, we have only lost time--the original physical hardware is still running the application.

The third is to use virtualization technologies for all new applications. This allows us to select the best virtualization software for our particular stack of server needs. If we are running a Hadoop cluster, we may choose Red Hat Enterprise Virtualization. If we are going to build out a SharePoint and Lync farm, we may opt for a Hyper-V model. If we need to support a broad range of guest operating systems, VMware vSphere may be the answer--it even supports OS/2 Warp. Go figure.

More to IT than Saving Money

Lift and move is fine. It will gain you some immediate savings and efficiencies, and going green in the data center makes sense in a multitude of ways and will win you lots of friends and accolades. But that is only part of the story. Don't base your virtualization strategy solely on $$$. Base it on computing efficiency and availability.

If your only goal is saving money you will not improve your service offering. Wouldn't it be nice to add capacity on demand? Wouldn't it be nice to never have to patch a live production server? Wouldn't it be nice to have another layer of DR redundancy? These things all require a solid and redundant physical infrastructure behind the virtual layer.

Cheaping out on the physical underpinnings of a virtual service catalog is a common mistake. If your basic offering is either a 2x4 (two vCPUs, 4 GB of RAM) or a 4x4 Windows or Linux server, then you are probably already walking that line. The cost savings per computing unit you achieve through virtualization should be returned to your customers as better machines, not just as a deduction on the P&L. If you are serious about doing virtualization properly, you need to have an inventory of VM-ready machines locked, loaded, and racked.

Redesigning your data center and the way you deploy applications around virtualization is the real sweet spot. I touched earlier on ways to improve your SLAs using virtualization, and that is just the beginning.

Virtualization is the key to private clouds. Properly design your data center and your virtual service catalog and you can deliver software or hardware as a service for your customers. And it can all be done on commodity hardware.

Look at the way Amazon, IBM, Microsoft, Salesforce, and Google manage their clouds and learn from them. These guys have already done the heavy lifting, so it's not too much of a risk to follow in their footsteps. This stuff is easy, but it requires a different mindset from a traditional (read: old-fashioned) data center model. And keep in mind that virtualization is the secret sauce that makes it work.

Please address comments, complaints, and suggestions to the author at prolich@yahoo.com.
