Latest-generation virtualization techniques double the capacity of terminal servers
With that statement Ruben and Jeroen have just released Phase II of Project Virtual Reality Check (VRC). To create this whitepaper they have run more than 150 tests with Login VSI, measuring the performance of servers while being stressed by a large number of simulated users. This whitepaper has a few advantages over whitepapers published by the vendors themselves, and over whitepapers from blogs that test only one hypervisor:
- The whitepaper is truly independent
- The whitepaper is approved by the different vendors
- Everybody can repeat the tests with the freely available Login VSI
- The authors aren’t biased
- You can compare the results easily (the servers have been stressed the same way)
One of the most interesting conclusions of Phase II: the measured performance increase is not caused by improvements to the hypervisors, but mainly by Intel's innovations in the Nehalem architecture. VRC states that these can almost solely be credited for the performance improvements seen with TS workloads.
Get your free copy of the whitepaper at www.projectvrc.com
Hyper-V servers: Workgroup or Domain?
A common misconception while implementing server virtualization is that hypervisors should be isolated from directory services. The reason for that is that we have accepted the fact that VMware ESX hosts do not run Windows and therefore cannot be part of, say, Active Directory. Even this is not entirely true: VMware is perfectly capable of authenticating users against Active Directory, see this article.
Back to my specialism, Hyper-V... As a general statement it is safe to say that Hyper-V hosts should be members of Active Directory. Why? Well, why did we have Active Directory in the first place? It was to increase business efficiency and IT operations throughout the enterprise. Since Hyper-V servers benefit from GPOs, secure single sign-on and security policies, it is only logical to make them domain members.
There are however a few side notes:
- At least one domain controller should be a physical host, or at least one domain controller should not be part of your Hyper-V infrastructure
- If Hyper-V is implemented within the DMZ, domain membership is not desired
By implementing only virtual, Hyper-V based domain controllers we would actually create a mutual dependency, or as we say in Holland: 'a chicken-and-egg situation', and thus not a good idea. So at least one domain controller needs to be isolated from your Hyper-V environment.
But... doesn't that result in decreased efficiency within the environment? Well, one must keep in mind that virtualizing your server infrastructure isn't the ultimate goal; reducing costs and gaining flexibility is. The business case for implementing server virtualization should be based on the servers hosting your business applications, not on the infrastructure servers. By virtualizing infrastructure-related servers you reduce the operational costs of the IT department, but it will not enhance your business processes. Virtualization of the mission-critical servers, by contrast, will instantly open a window of opportunity concerning matters such as flexibility, high availability and business continuity.
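For completeness: joining a (Server Core based) Hyper-V host to the domain is done from the command line. A minimal sketch, where the domain name contoso.local and the DomainAdmin account are placeholders for your own environment:

```shell
REM Join this Hyper-V (Server Core) host to the domain.
REM contoso.local and DomainAdmin are placeholders, not real values.
netdom join %COMPUTERNAME% /domain:contoso.local /userd:DomainAdmin /passwordd:*

REM A reboot is required before the domain membership takes effect.
shutdown /r /t 0
```

After the reboot the host picks up GPOs and domain security policies like any other member server.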
What to do with servers within the DMZ?
A couple of years ago the DMZ consisted of very few servers. With the increasing adoption of telecommuting and online services, the DMZ started to grow to levels that match or even exceed the corporate network. This raises the question whether these servers should stay workgroup-based, or actually be withdrawn from the DMZ so that they can be added to Active Directory.
As for DMZ-based servers, I strongly encourage customers to implement Active Directory Lightweight Directory Services (AD LDS, formerly ADAM). AD LDS makes it possible to centrally manage your DMZ whilst maintaining a secure environment: a typical compromise between security and functionality.
Okay, it took a bit longer than desired, but here is the first actual part concerning the installation of Hyper-V.
So what is actually the most important component when designing a Hyper-V environment? Just like with VMware, it is the storage. Of course you could start out with local storage, but quite frankly, that is nowhere near a modern infrastructure anymore. One could argue that local storage is sufficient for a test/home environment, but I don't quite agree with that: most challenges in a server virtualization implementation are actually storage and backup related. Although it pretty much extends the scope of my initial blog series plan, I find it important enough to discuss, so here it goes!
Central vs local storage
Local storage of course limits you in flexibility and expandability. However, on my demo laptop I am forced to use local storage while keeping my VMs synchronized between my laptop and my regular environment. I will get back to how I synchronize this when blogging about replication and site recovery.
Apart from a portable demo environment there is no justification to implement server virtualization with Hyper-V on local storage.
Central storage: iSCSI or fiber?
It will be obvious that a fiber-based storage solution is a bit above my budget for home usage, so that answers the question quite easily for my lab. However, what about production environments?
Fiber is the way to go
When taking variables such as price, performance and proven technology into account, a fiber-based SAN is the way to go. Fiber will give you a maximum throughput of 4 Gb/s, whilst iSCSI is limited to 1 Gb/s. Indeed, 10 Gb Ethernet is available, yet not quite stable in combination with iSCSI yet. The storage market is moving rapidly (like the virtualization market), however, and I do expect that 2009 or 2010 will be the definite breakthrough of stable iSCSI-based storage networks. If I were an IT manager and had to make the decision now, I would simply play it safe and choose fiber.
For test and small production environments, however, iSCSI is an excellent low-cost choice.
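To give an idea of what hooking a Windows Server 2008 host up to an iSCSI SAN looks like, here is a minimal sketch using the built-in iscsicli tool; the portal address and target IQN below are placeholders for whatever your SAN presents:

```shell
REM Make sure the Microsoft iSCSI initiator service is running.
sc start msiscsi

REM Register the SAN's portal (placeholder IP and default port 3260),
REM then list the targets it offers.
iscsicli AddTargetPortal 192.168.1.50 3260
iscsicli ListTargets

REM Log in to the target (placeholder IQN); the LUN then shows up
REM as a local disk in Disk Management, ready to be formatted.
iscsicli QLoginTarget iqn.2009-01.com.example:storage.lun1
```

On Server Core this is the only option; on a full installation the iSCSI Initiator control panel applet does the same thing graphically.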
Hardware or software based storage solution?
Basically this is a no-brainer; production and business environments will use something like an HP EVA, Dell EqualLogic or NetApp based hardware solution. In principle these are all fine solutions that differentiate on matters way beyond the scope of this blog, and a storage expert will be far more valuable to consult on this. What I want to talk about are the software-based solutions available on the market today, since that is the material I'll be working with. There are a couple of options:
How to install & configure Hyper-V
Did you ever try it? Did you ever get stuck in the midst of your configuration? Did you look on the net first? Didn't you discover that there are so many people who wrote a blog post about a miniature part of the installation and configuration process, but that a full tutorial from A to Z is still nowhere to be found?
Well, welcome to this blog then! In the coming weeks I will post the several steps of the configuration process, where everything will be covered, with the exception of the actual installation; I assume you can figure out "next, next, finish" by yourself. If not... well, ehm... go to one of the existing 500 blogs about installing it!
First let's get familiar with the testing environment I will discuss:
- iSCSI based SAN storage
- 2 identical Windows Server 2008 Hyper-V Core machines configured as a cluster
- 1 Hyper-V server machine
- 1 SCVMM 2k8 management server
I believe this is a fairly standard environment that closely simulates real-world implementations.
Now it's time to get my lab environment ready. As soon as my SAN is offering iSCSI disks, I'll be back for chapter 2 of this post.