Tutorial: Is there any advantage to having more than 16 GB RAM on a Windows developer machine?



Question:

Assume a machine (dual quad-core Xeon, 2.26 GHz, with 24 GB RAM) running Windows Server 2008 and Hyper-V. How many VMs can I expect to run at the same time with good performance?

Is this overkill? Can you really have too much RAM?

Assuming 2 GB per VM and eight VMs, that's 16 GB for the VMs, with 8 GB left over for the host OS and Hyper-V.

Does it sound about right?
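The budget I have in mind can be sketched as a quick sanity check (the figures are my planning assumptions, not measured values):

```python
# Rough RAM budget for the planned Hyper-V host (assumed figures).
TOTAL_RAM_GB = 24    # physical RAM in the machine
VM_COUNT = 8         # number of guests I plan to run concurrently
RAM_PER_VM_GB = 2    # assumed allocation per guest

vm_total = VM_COUNT * RAM_PER_VM_GB        # RAM committed to guests
host_remainder = TOTAL_RAM_GB - vm_total   # left for host OS + Hyper-V

print(f"VMs: {vm_total} GB, host: {host_remainder} GB")  # VMs: 16 GB, host: 8 GB
```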


Edit:

I tried to make the question sound less like bragging. That was never my intention. It's a hard question to write.


Solution:1

"I want to run about 8 VMs at one time using Hyper-V."

Why on earth would you want to do such a thing?

Since you are unlikely to be able to load all 8 concurrently during development (except, perhaps, for a brief load test), you can probably run all 8 VMs in 2 GB each and still make good use of all those cores.


Solution:2

I have a couple of commodity machines running VMware we use for testing, continuous integration, etc.

These machines have a single quad-core processor and 16 GB of RAM each - in our current set-up we have observed that:

  • We are I/O bound first, CPU bound second, and memory is a non-issue.
  • Using iSCSI and OpenFiler to host the virtual hard drives on a separate dedicated file server over dual gigabit Ethernet helped lower costs and improved performance vs. local disks.
  • The 16 GB of memory is underutilised on each machine, with most instances only being allocated between 512 MB and 2 GB of RAM, and the host OS having 5 or 6 GB free.
  • For the cost of one dual quad-core Xeon machine with 24 GB of RAM you could purchase 2, or more likely 4+, commodity machines, each with 16 GB of RAM and a single quad-core CPU (avoiding FB-DIMMs can save a lot of cash), especially as redundancy is not such an issue with a test/development machine.
  • When testing complex applications / environments, it often helps to have different virtual machines sitting on different physical environments, just so you get a realistic level of network latency between the various services.

As for using your rig for development, the key concern for me would be I/O: if you are running that many virtual machines, you will see a negative impact on your disk access times while developing, which will slow down compiling, etc. I would be inclined to shift as many of the virtual machines as possible onto a separate box and leave your developer machine unburdened, so compilation and other developer-related tasks are lightning fast.

Blog post on my initial setup from last year.


Solution:3

It depends which version of Windows you're using. Here's some info: http://msdn.microsoft.com/en-us/library/aa366778.aspx

64-bit Windows Vista Business and above, and 64-bit Windows 2008 Server Standard and above should be able to address the RAM.


Solution:4

With 64-bit Windows, addressing that much memory shouldn't be an issue. I would think your biggest concern would be I/O, with that many VMs running at once. I'd suggest investing in SAS drives at the fastest RPMs available to effectively support that many VMs.


Solution:5

I had a similar question and rather than debate it in theoretical terms, I decided to buy with the idea that I'd replace/upgrade it if necessary. I wound up with a Core i7 920 with 12 GB RAM, 2 Intel 80 GB SSDs (RAID 0), two 1 TB SATA HDDs (RAID 1), and a throw away 1 TB SATA.

I tossed on Windows Server 2008 x64 and hosted a couple of VMs on my SSDs. Very, very fast responses. (I have some experience with VMs and know that disk I/O would eat me up hosting a developer environment in a VM, especially when adding SQL Server to the same spindles.)

I really did enjoy this setup, but then a VM playground arrived (a Dell 1950 with 32 GB of RAM and a nifty little SAN). I threw those VMs over to it along with some others and loaded Windows 7 on my SSDs. (I felt I could play around with my system because I was now hosting some VMs independently of my new workstation.)

The biggest thing I noticed was how much nicer it was to develop on a non-VM machine. Not the speed so much but the visual effects, the antialiasing of fonts, etc. The SSDs really made the I/O a non-factor, but they make everything feel instant. (Also, Windows 7 is sweet.)

I know I'll have to rebuild it when the RTM comes out, but I do have VMs that I can work in while it is being rebuilt. I'll need to use VPC instead of Hyper-V for building VMs that I need to ensure that no one else has messed with, but I think this is a reasonable tradeoff.

In short, I'd echo the others who say to host VMs on a separate server, but I'd like to add that Intel's SSDs are very quick, and separate machines give you more flexibility. Your drives sound fast enough for testing, but for development work, instant beats fast.


Solution:6

Since each VM "owns" its own memory in Hyper-V, the number of VMs you can actively host is bounded by the available RAM. So you can't really have too much.

Of course, the biggest speed bottleneck on a dev machine is the hard drive. With the extra RAM you could set yourself up a RAM drive, which could have a huge performance benefit.


Solution:7

You won't have any problem addressing that RAM; you might be able to get away with 20 GB, but at that point you might as well get the extra 4 GB. Why are you doing this on a developer machine? Unless you are a one-man show, there should be a central server that handles stuff like this.


Solution:8

If your scenario is server + clients, a multi-machine setup would be better for simulation, and cheaper. VMs aren't actually the same as native OSes: threading and timing behave differently, so you'll be quite off target and, worse, won't actually be able to profile accurately.

My 2 cents


Solution:9

Robert,

If you intend to be running 8 VMs (for dev, or otherwise) at the same time, I highly recommend exploring the server virtualization options offered by VMware. In general, server virtualization technology is far more optimized and efficient at utilizing physical resources than its workstation counterpart.

I've had the opportunity to work with VMware Infrastructure 3 (the umbrella name for VMware's server virtualization family of products/technologies) and I must say I'm impressed. The server edition is extremely efficient compared with the workstation version, and it offers incredible flexibility.

I don't have experience with Hyper-V, but many people who have used it speak highly of it. However, I consider VMware a superior alternative for the simple fact that it allows you to create VMs that run different operating systems on the same physical host, which is something Hyper-V cannot do (AFAIK).

As far as RAM goes, VMware Infrastructure 3's limits cap out well beyond 24 GB, and it allows you to provision the available memory, and even individual cores, however you like among your guests, as long as the guests support it.

If you're interested in learning about VMware Infrastructure 3, I highly recommend this book, as it contains discussions of the architecture of VMware ESX Server and technical considerations that you will hardly find anywhere else.

I hope you'll find this useful, even though it is not a direct answer to your question, and that you will excuse my comment (24 GB for a dev machine does seem a little out of the ordinary... for a while at least).


Solution:10

We are running Hyper V and hosting instances of both Server 2003 and XP on the same machine.

--- This was supposed to be a reply to the user who said that you can't host different OSes on the same machine in Hyper-V. Or that is how I read it, anyway.

