Server computers, the machines that host files and applications on computer networks, have to be powerful. Some have central processing units (CPUs) with multiple processors that give these servers the ability to run complex tasks with ease. Computer network administrators usually dedicate each server to a specific application or task. Many of these tasks don't work well alongside others, so each gets its own dedicated machine.
One application per server also makes it easier to track down problems as they arise. It's a simple way to streamline a computer network from a technical point of view. There are a couple of problems with this approach, however. One is that it doesn't take advantage of modern server computers' processing power.
Most servers use only a small fraction of their overall processing capabilities. Another problem is that as a computer network gets larger and more complex, the servers begin to take up a lot of physical space. A data centre might become overcrowded with racks of servers consuming a lot of power and generating heat.
Server virtualization is an attempt to address both of these issues in one solution. Using specially designed software, an administrator can convert one physical server into multiple virtual machines. Each virtual server acts like a unique physical device, capable of running its own operating system (OS).
In theory, you could create enough virtual servers to use all of a machine's processing power, though in practice that's not always the best idea. Until recently, the only way to create a virtual server was to design special software to trick a server's CPU into providing processing power for multiple virtual machines. Today, processor manufacturers like Intel and AMD offer processors with support for virtual servers built in. The hardware doesn't actually create the virtual servers, however; network engineers still need the right software to create them.
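To make the idea concrete, the sketch below uses the libvirt Python bindings to define two virtual servers on a single physical host. It is a minimal sketch rather than a production recipe: it assumes a KVM-capable host with libvirt and the libvirt-python package installed, the domain XML is trimmed to the bare essentials (a real definition would also need disk and network devices), and the names web-vm and db-vm are hypothetical.

import libvirt

# Minimal domain XML template; a real virtual server would also need
# <devices> entries for a disk image and a network interface.
DOMAIN_XML = """
<domain type='kvm'>
  <name>{name}</name>
  <memory unit='MiB'>{memory_mib}</memory>
  <vcpu>{vcpus}</vcpu>
  <os>
    <type arch='x86_64'>hvm</type>
  </os>
</domain>
"""

def define_virtual_server(conn, name, memory_mib, vcpus):
    """Define (but do not start) a virtual server on the connected host."""
    xml = DOMAIN_XML.format(name=name, memory_mib=memory_mib, vcpus=vcpus)
    return conn.defineXML(xml)

if __name__ == "__main__":
    # Connect to the local hypervisor (QEMU/KVM in this example).
    conn = libvirt.open("qemu:///system")
    try:
        # Two independent virtual servers carved out of one physical machine.
        define_virtual_server(conn, "web-vm", memory_mib=2048, vcpus=2)
        define_virtual_server(conn, "db-vm", memory_mib=4096, vcpus=4)
        print([dom.name() for dom in conn.listAllDomains()])
    finally:
        conn.close()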
Virtualization isn’t a new concept. Computer scientists have been creating virtual machines on supercomputers for decades. But it’s only been a few years since virtualization has become feasible for servers. In the world of information technology (IT), server virtualization is a hot topic. It’s still a young technology and several companies offer different approaches. Why are so many companies using server virtualization in their computer networks?
There are many reasons companies and organizations are investing in server virtualization. Some of the reasons are financially motivated, while others address technical concerns.
Server virtualization conserves space through consolidation. It’s common practice to dedicate each server to a single application. If several applications only use a small amount of processing power, the network administrator can consolidate several machines into one server running multiple virtual environments.
For companies that have hundreds or thousands of servers, the need for physical space can decrease significantly.
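A rough back-of-the-envelope calculation shows why consolidation saves space. The sketch below is a simplified estimate, assuming made-up utilization figures and a 70% safety ceiling on each physical host; real capacity planning would also account for memory, storage and peak loads.

# Hypothetical average CPU utilization of ten single-application servers,
# expressed as a fraction of one physical machine's capacity.
workloads = [0.05, 0.08, 0.12, 0.04, 0.10, 0.07, 0.03, 0.09, 0.06, 0.11]

# Keep each consolidated host below a 70% ceiling to leave headroom for spikes.
HOST_CEILING = 0.70

def hosts_needed(loads, ceiling):
    """Greedy first-fit estimate of how many physical hosts the workloads need."""
    hosts = []  # each entry is the total load already placed on that host
    for load in sorted(loads, reverse=True):
        for i, used in enumerate(hosts):
            if used + load <= ceiling:
                hosts[i] += load
                break
        else:
            hosts.append(load)
    return len(hosts)

print(f"{len(workloads)} dedicated servers -> {hosts_needed(workloads, HOST_CEILING)} consolidated host(s)")

With these example figures, ten lightly used dedicated servers collapse onto two consolidated hosts, which is the kind of reduction that frees rack space, power and cooling in a crowded data centre.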
Server virtualization provides a way for companies to practice redundancy without purchasing additional hardware. Redundancy refers to running the same application on multiple servers. It's a safety measure: if a server fails for any reason, another server running the same application can take its place.
This minimizes any interruption in service. It wouldn't make sense to build two virtual servers running the same application on the same physical server. If the physical server were to crash, both virtual servers would also fail. In most cases, network administrators will create redundant virtual servers on different physical machines.
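The placement rule described above can be expressed as a simple anti-affinity check: never put two replicas of the same application on the same physical host. The sketch below is illustrative only; the host and application names are hypothetical, and real virtualization platforms implement far richer versions of this rule.

# Physical hosts available in the (hypothetical) data centre.
hosts = ["host-a", "host-b", "host-c"]

# Current placement: which applications already have a replica on each host.
placement = {"host-a": {"mail"}, "host-b": set(), "host-c": {"web"}}

def place_replica(app, hosts, placement):
    """Place a redundant virtual server on a host that does not already run it."""
    for host in hosts:
        if app not in placement[host]:   # anti-affinity: avoid a shared failure point
            placement[host].add(app)
            return host
    raise RuntimeError(f"no host available for a new replica of {app!r}")

# Two replicas of the same application end up on different physical machines,
# so the crash of one host cannot take out both.
print(place_replica("web", hosts, placement))  # host-a
print(place_replica("web", hosts, placement))  # host-b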
Virtual servers offer programmers isolated, independent systems in which they can test new applications or operating systems. Rather than buying a dedicated physical machine, the network administrator can create a virtual server on an existing machine. Because each virtual server is independent of all the other servers, programmers can run software without worrying about affecting other applications.
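One way this isolation is used in practice is to snapshot a test virtual server before an experiment and roll it back afterwards. The sketch below again assumes the libvirt Python bindings and a domain named test-vm (a hypothetical name); whether snapshots work depends on the underlying storage, so treat it as an illustration rather than a recipe.

import libvirt

SNAPSHOT_XML = "<domainsnapshot><name>before-test</name></domainsnapshot>"

conn = libvirt.open("qemu:///system")
try:
    dom = conn.lookupByName("test-vm")   # hypothetical test virtual server

    # Record the machine's state before running the experimental software.
    snap = dom.snapshotCreateXML(SNAPSHOT_XML, 0)

    # ... install and exercise the new application inside the virtual server ...

    # Whatever the test did, revert the virtual server to its earlier state.
    dom.revertToSnapshot(snap, 0)
finally:
    conn.close()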
Server hardware will eventually become obsolete, and switching from one system to another can be difficult. In order to continue offering the services provided by these outdated systems, sometimes called legacy systems, a network administrator could create a virtual version of the hardware on modern servers.
From an application perspective, nothing has changed. The programs perform as if they were still running on the old hardware. This can give the company time to transition to new processes without worrying about hardware failures, particularly if the company that produced the legacy hardware no longer exists and can’t fix broken equipment.
An emerging trend in server virtualization is called migration. Migration refers to moving a server environment from one place to another. With the right hardware and software, it’s possible to move a virtual server from one physical machine in a network to another.
Originally, this was possible only if both physical machines ran on the same hardware, operating system and processor. It’s possible now to migrate virtual servers from one physical machine to another even if both machines have different processors, but only if the processors come from the same manufacturer.
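As an illustration of what migration looks like from an administrator's point of view, the sketch below uses the libvirt Python bindings to live-migrate a running virtual server between two hosts. The host names and the domain name web-vm are hypothetical, and a real migration also depends on shared or copied storage and on compatible processors, as noted above.

import libvirt

# Connect to the source and destination hypervisors (hypothetical host names).
src = libvirt.open("qemu+ssh://host-a/system")
dst = libvirt.open("qemu+ssh://host-b/system")
try:
    dom = src.lookupByName("web-vm")   # the running virtual server to move

    # VIR_MIGRATE_LIVE keeps the guest running while its memory is copied over;
    # VIR_MIGRATE_PERSIST_DEST makes the definition permanent on the new host.
    flags = libvirt.VIR_MIGRATE_LIVE | libvirt.VIR_MIGRATE_PERSIST_DEST

    # Move the virtual server to the destination host without shutting it down.
    dom.migrate(dst, flags, None, None, 0)
finally:
    src.close()
    dst.close()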
(To Be Cont’d)