What is virtualization and why use it
Virtualization is a method of running multiple independent virtual operating systems on a single physical computer. It is a way of getting more out of physical resources and maximizing the investment in hardware. Since Moore's Law has accurately predicted the exponential growth of computing power, while the hardware requirements for the same computing tasks have mostly stayed flat, it is now feasible to turn a very inexpensive 1U dual-socket dual-core commodity server into eight or even 16 virtual servers running 16 virtual operating systems. Virtualization technology is a way of achieving higher server density. However, it does not actually increase total computing power; it decreases it slightly because of overhead. But since a modern $3,000 2-socket 4-core server is more powerful than a $30,000 8-socket 8-core server was four years ago, we can exploit this newfound hardware power by increasing the number of logical operating systems it hosts. This slashes hardware acquisition and maintenance costs, which can result in significant savings for any company or organization.
When to use virtualization
Virtualization is the perfect solution for applications that are meant for small- to medium-scale usage. It should not be used for high-performance applications where one or more servers need to be clustered together to meet the performance requirements of a single application, because the added overhead and complexity would only reduce performance. We're essentially taking a 12 GHz server (four cores times 3 GHz) and chopping it up into sixteen 750 MHz servers. But if eight of those servers are in off-peak or idle mode, the remaining eight servers will have nearly 1.5 GHz available to each of them.
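To make that arithmetic concrete, here's the back-of-the-envelope math in a few lines of Python. This is a simple illustration of capacity division, not a model of any particular hypervisor's scheduler:

```python
# Back-of-the-envelope CPU math for carving one physical server into VMs.
CORES = 4
GHZ_PER_CORE = 3.0
TOTAL_GHZ = CORES * GHZ_PER_CORE              # 12 GHz of aggregate capacity

VMS = 16
fair_share = TOTAL_GHZ / VMS                  # 0.75 GHz if all 16 VMs are busy

idle_vms = 8
active_share = TOTAL_GHZ / (VMS - idle_vms)   # 1.5 GHz when half the VMs are idle

print(f"Aggregate capacity: {TOTAL_GHZ:.1f} GHz")
print(f"Fair share with all {VMS} VMs busy: {fair_share * 1000:.0f} MHz each")
print(f"Share when {idle_vms} VMs are idle: {active_share:.2f} GHz each")
```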
While some in the virtualization industry like to tout high CPU utilization numbers as an indication of optimum hardware usage, this advice should not be taken to the extreme where application response times become excessive. A simple rule of thumb is to never let a server exceed 50% CPU utilization during peak loads and, more importantly, never let application response times exceed a reasonable SLA (Service Level Agreement). Most modern servers used for in-house duties run at 1 to 5% CPU utilization. Running eight operating systems on a single physical server would elevate the peak CPU utilization to around 50%, but the average would be much lower, since the peaks and valleys of the virtual operating systems tend to cancel each other out.
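To see why those peaks and valleys tend to cancel out, here's a toy simulation. The idle and spike figures are rough assumptions chosen to match the numbers above, not measurements from a real host:

```python
import random

# Toy model: eight lightly loaded guests consolidated onto one physical host.
# Each guest idles at 1-5% of the host's CPU and occasionally spikes; because
# the spikes are independent, the host rarely sees them all at once.
random.seed(1)

GUESTS = 8
SAMPLES = 10_000

host_load = []
for _ in range(SAMPLES):
    total = 0.0
    for _ in range(GUESTS):
        total += random.uniform(1, 5)     # typical idle load, in %
        if random.random() < 0.01:        # rare peak on one guest
            total += 25
    host_load.append(min(total, 100))     # demand is capped by host capacity

host_load.sort()
avg = sum(host_load) / SAMPLES
p99 = host_load[int(SAMPLES * 0.99)]
print(f"Average consolidated CPU: {avg:.1f}%")
print(f"99th-percentile consolidated CPU: {p99:.1f}%")
```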
While CPU overhead in most of the virtualization solutions available today is minimal, I/O (Input/Output) overhead for storage and networking throughput is another story. For servers with extremely high storage or network I/O requirements, it would be wise to run them on bare metal even if their CPU requirements can be met inside a virtual environment. Both XenSource and Virtual Iron (which will soon be Xen Hypervisor based) promise to minimize I/O overhead, but both are in beta at this point, so there haven't been any major independent benchmarks to verify this.
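If you suspect a workload is I/O bound, measure before you commit it to a guest. Here's a crude sequential-write check you could run both on bare metal and inside a virtual machine to get a rough feel for the overhead; the file name and size are arbitrary placeholders, and caches, schedulers, and neighboring workloads will all skew a simple test like this:

```python
import os
import time

# Crude sequential-write throughput check. Run the same script on bare metal
# and inside a guest and compare the results.
PATH = "io_test.tmp"                # placeholder test file location
SIZE_MB = 256
CHUNK = b"\0" * (1024 * 1024)       # 1 MB write buffer

start = time.time()
with open(PATH, "wb") as f:
    for _ in range(SIZE_MB):
        f.write(CHUNK)
    f.flush()
    os.fsync(f.fileno())  # force data to disk so we measure the device, not the page cache
elapsed = time.time() - start

os.remove(PATH)
print(f"Wrote {SIZE_MB} MB in {elapsed:.2f} s ({SIZE_MB / elapsed:.1f} MB/s)")
```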
How to avoid the "all your eggs in one basket" syndrome
One of the big concerns with virtualization is the "all your eggs in one basket" syndrome. Is it really wise to put all of your critical servers into a single physical server? The answer is absolutely not! The easiest way to avoid this liability is to make sure that a critical service never resides on only a single physical server. Take, for example, the following server types:
HTTP
FTP
DNS
DHCP
RADIUS
LDAP
File Services using Fibre Channel or iSCSI storage
Active Directory services
We can put each of these types of servers on at least two physical servers and gain complete redundancy. These services are relatively easy to cluster because they're easy to switch over when a single server fails. When one physical server fails or needs servicing, the virtual server on the other physical host automatically picks up the slack. By straddling multiple physical servers, these critical services never need to go down because of a single hardware failure.
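This placement rule is easy to audit with a small script. Here's a minimal sketch that flags any service running on only one physical host; the host names and placements are hypothetical:

```python
# Sanity-check a VM inventory: flag any service that lives on only one
# physical host. Hostnames and services below are made-up examples.
placements = {
    "host-a": ["dns", "dhcp", "http", "ldap"],
    "host-b": ["dns", "dhcp", "ftp", "radius"],
    "host-c": ["http", "ldap", "ftp", "radius"],
}

# Invert the mapping: for each service, which hosts carry it?
hosts_for = {}
for host, services in placements.items():
    for svc in services:
        hosts_for.setdefault(svc, set()).add(host)

for svc, hosts in sorted(hosts_for.items()):
    status = "OK" if len(hosts) >= 2 else "SINGLE POINT OF FAILURE"
    print(f"{svc:8s} on {len(hosts)} host(s): {status}")
```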
For more complex services such as an Exchange Server, Microsoft SQL Server, MySQL, or Oracle, clustering technologies can be used to synchronize two logical servers hosted across two physical servers. This method generally causes some downtime during the transition, which could take up to five minutes; that isn't due to virtualization but to the complexity of clustering, which tends to require time for the transition. An alternate method for handling these complex services is to migrate the virtual server from the primary physical server to the secondary physical server. In order for this to work, something has to constantly synchronize memory from one physical server to the other so that a failover can be done in milliseconds while all services remain functional.
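That memory-synchronization approach is the idea behind live-migration features. Here's a conceptual sketch of iterative "pre-copy": copy every memory page, then keep re-copying whatever the running guest dirtied until the remainder is small enough to move in one brief final pause. The page counts and dirtying rate below are arbitrary assumptions, not figures from any real product:

```python
import random

# Conceptual sketch of iterative "pre-copy" memory synchronization.
random.seed(7)

TOTAL_PAGES = 100_000
STOP_THRESHOLD = 500             # pause the guest when this few pages remain

dirty = set(range(TOTAL_PAGES))  # first pass: everything must be copied
round_no = 0
while len(dirty) > STOP_THRESHOLD:
    round_no += 1
    copied = len(dirty)
    # While we copy, the running guest dirties a fraction of its pages again
    # (modeled here as ~10% of what we just transferred).
    dirty = {random.randrange(TOTAL_PAGES) for _ in range(copied // 10)}
    print(f"round {round_no}: copied {copied} pages, {len(dirty)} dirtied meanwhile")

print(f"final stop-and-copy: {len(dirty)} pages, then resume on the target host")
```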
Physical to virtual server migration
Any respectable virtualization solution will offer some kind of P2V (Physical to Virtual) migration tool. The P2V tool takes an existing physical server and makes a virtual hard drive image of it, with the necessary modifications to the driver stack so that the server will boot up and run as a virtual server. The benefit is that you don't need to rebuild your servers and manually reconfigure them as virtual servers; you simply suck them in with the entire server configuration intact!
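The mechanics vary by vendor, but the basic P2V workflow looks roughly like the outline below. Every function here is a hypothetical stand-in for steps a real P2V tool automates; nothing in this sketch touches actual disks:

```python
# Outline of a typical P2V workflow as runnable pseudocode. Each step is a
# hypothetical stand-in for real tool functionality, not a working utility.

def snapshot_disks(host):
    print(f"1. Take a block-level image of every volume on {host}")
    return f"{host}.vhd"

def swap_driver_stack(image):
    print(f"2. Remove hardware-specific drivers in {image}; inject virtual disk/NIC drivers")

def fix_boot_config(image):
    print(f"3. Repoint the boot loader in {image} at the virtual disk controller")

def register_guest(image, target):
    print(f"4. Define a new guest on {target} backed by {image}")

image = snapshot_disks("legacy-server-01")   # hypothetical source host
swap_driver_stack(image)
fix_boot_config(image)
register_guest(image, "virt-host-01")        # hypothetical target host
```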
So if you have a data center full of aging sub-GHz servers, these are the perfect candidates for P2V migration. You don't even need to worry about license acquisition costs because the licenses are already paid for. You could literally take a room with 128 sub-GHz legacy servers and put them into eight 1U dual-socket quad-core servers with dual Gigabit Ethernet and two independent iSCSI storage arrays, all connected via a Gigabit Ethernet switch. The annual hardware maintenance costs on the old server hardware alone would be enough to pay for all of the new hardware! Just imagine how clean your server room would look after such a migration: it would all fit inside of one rack and give you lots of room to grow.
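The break-even math is easy to sanity-check. Here's the 128-into-8 scenario with placeholder dollar figures; these are assumptions, not quotes, so substitute your own numbers:

```python
# Illustrative consolidation math for the 128-into-8 scenario.
legacy_servers = 128
maintenance_per_legacy = 400    # assumed annual hardware maintenance, $/server

new_hosts = 8
cost_per_new_host = 6000        # assumed price of a 1U dual-socket quad-core box
vms_per_host = legacy_servers // new_hosts

annual_maintenance = legacy_servers * maintenance_per_legacy
new_hardware = new_hosts * cost_per_new_host

print(f"{vms_per_host} VMs per new host")
print(f"Annual maintenance on legacy fleet: ${annual_maintenance:,}")
print(f"One-time cost of new hosts:         ${new_hardware:,}")
```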
As an added bonus of virtualization, you get a disaster recovery plan, because the virtualized images can be used to rapidly recover all of your servers. Ask yourself what would happen now if one of your legacy servers died. Do you even remember how to rebuild and reconfigure all of your servers from scratch? (I'm guessing you're cringing right about now.) With virtualization, you can recover that Active Directory and Exchange Server in less than an hour by rebuilding the virtual server from the P2V image.
Patch management for virtualized servers
Patch management of virtualized servers isn't all that different from patch management of regular servers, because each virtual operating system is its own independent virtual hard drive. You still need a patch management system that patches all of your servers, though there may be interesting developments in the future where you can patch multiple operating systems at the same time if they share common operating system or application binaries. Ideally, you would be able to assign a patch level to an individual server or to a group of similar servers. For now, you will need to patch virtual operating systems as you would any other system, but there will be innovations in the virtualization sector that won't be possible with physical servers.
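That "patch level per group" idea can be approximated today with a little inventory scripting. Here's a minimal sketch that groups servers by role and reports the target patch level for each group; the inventory and patch-level labels are hypothetical:

```python
# Group virtual servers by role so a patch level can be tracked per group
# instead of per machine. Inventory and labels below are made-up examples.
inventory = {
    "web-01": "http", "web-02": "http",
    "dns-01": "dns",  "dns-02": "dns",
    "db-01":  "sql",
}
target_patch_level = {"http": "baseline-12", "dns": "baseline-9", "sql": "baseline-7"}

groups = {}
for server, role in inventory.items():
    groups.setdefault(role, []).append(server)

for role, servers in sorted(groups.items()):
    print(f"apply {target_patch_level[role]} to: {', '.join(sorted(servers))}")
```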
Licensing and support considerations
A big concern with virtualization is software licensing. The last thing anyone wants to do is pay for 16 copies of a license for 16 virtual sessions running on a single computer. Software licensing costs often dwarf hardware costs, so it would be foolish to run a $20,000 software license on a shared piece of hardware. In this situation, it's best to run that license on the fastest physical server possible, without any virtualization layer adding overhead.
For something like Windows Server 2003 Standard Edition, you would need to pay for each virtual session running on a physical box. The exception to this rule is if you have the Enterprise Edition of Windows Server 2003, which allows you to run four virtual copies of Windows Server 2003 on a single machine with only one license. This Microsoft licensing policy applies to any type of virtualization technology that is hosting the Windows Server 2003 guest operating systems.
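Assuming each Enterprise Edition license covers four virtual instances on the same host, and that additional licenses stack the same way (verify this against the actual licensing agreement), the license count works out like this:

```python
import math

# License count under the "four virtual instances per Enterprise Edition
# license" policy described above. The stacking assumption is ours, not
# Microsoft's wording -- confirm before buying.
def enterprise_licenses_needed(guests: int) -> int:
    return math.ceil(guests / 4)

for guests in (4, 8, 16):
    print(f"{guests} Windows guests -> {enterprise_licenses_needed(guests)} license(s)")
```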
If you're running open source software, you don't have to worry about licensing because that's always free; what you do need to be concerned about is support contracts. If you're considering virtualizing open source operating systems or applications, make sure you calculate the support costs. If the support costs are substantial for each virtual instance of the software you're going to run, it's best to squeeze the most out of those costs by putting the software on its own dedicated server. It's important to remember that hardware costs are often dwarfed by software licensing and/or support costs; the trick is to find the right ratio of hardware to licensing/support costs. When calculating hardware costs, be sure to include hardware maintenance, power usage, cooling, and rack space.
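Here's a rough per-instance comparison of a dedicated server versus a slice of a shared virtualization host. Every dollar figure is an illustrative assumption; replace them with real quotes from your own environment:

```python
# Rough per-instance cost comparison: dedicated box versus a slice of a
# shared virtualization host. All figures are illustrative assumptions.
support_per_instance = 2500   # assumed annual support contract, per instance
dedicated_server = 3000       # assumed price of a dedicated box
shared_host = 6000            # assumed price of a beefier host shared by N guests
instances_per_host = 8
power_and_cooling = 600       # assumed annual power/cooling per physical box

dedicated = dedicated_server + power_and_cooling + support_per_instance
shared = (shared_host + power_and_cooling) / instances_per_host + support_per_instance

print(f"Per instance, dedicated: ${dedicated:,.0f}")
print(f"Per instance, shared:    ${shared:,.0f}")
# When support dominates, consolidating hardware saves relatively little --
# which is the point about finding the right hardware-to-support ratio.
```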