Memory Overcommitment. Bluff or Real Requirement?


In my humble opinion, it is a real requirement. Now let me explain why.

As a real-world example, take us, WTSLabs. When we decided to move to a virtual world, I personally looked at most of the offerings available: Microsoft Hyper-V 2008 R2, Citrix XenServer and VMware ESXi (considering our size, a free edition would do the trick for us for sure). The deciding factor that took us down the VMware ESXi route was the simple fact that it can overcommit memory.

Looking at how our VMs were performing, most of the time they were sitting idle, consuming few resources (that was the case in our environment; your environment may be completely different, and in that case overcommitment may not be for you).

No matter what anyone else says, if you all remember, years ago one of the main driving factors (or sales pitches, if you will) for virtualization was consolidating your X physical servers onto a handful of physical hosts. I remember seeing, several times, Sales/Pre-sales guys going to offices explaining that most of the time the customers' servers were sitting there doing nothing, and that thanks to that, bringing all these 'idle' servers under one single host was possible.

I am not saying that is the case with every server and/or every environment. For sure there are plenty of SQL Server and Exchange boxes out there that are always being hammered, working hard. But for tons of companies out there, especially in the SMB market, it is almost guaranteed that is not the case.

Back to our own scenario: we now run 6 VMs. The resource hog is our Exchange 2007 SP2 box (what a surprise…), set up with 4GB. Then we have one domain controller, a web server, a TS (running Windows Server 2008 R2) and two XP VMs. Monitoring them on a regular day, they are indeed idle most of the time, not using many resources. I do not remember all the numbers, but I know we are overcommitting memory, though not by a lot (probably one to two gigs; our Dell server has 8GB).
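To put the numbers above into a rough back-of-the-envelope sketch: only the Exchange box's 4GB is stated here; the other per-VM allocations below are my assumptions, picked to land in the one-to-two-gig overcommit range mentioned.

```python
# Back-of-the-envelope overcommitment math for the setup described above.
# Only the Exchange box's 4GB is stated; the other allocations are guesses.
host_ram_gb = 8.0  # the Dell server maxes out at 8GB

vm_alloc_gb = {
    "exchange-2007": 4.0,      # stated above
    "domain-controller": 1.0,  # assumed
    "web-server": 1.0,         # assumed
    "terminal-server": 2.0,    # assumed
    "xp-vm-1": 0.5,            # assumed
    "xp-vm-2": 0.5,            # assumed
}

allocated = sum(vm_alloc_gb.values())  # 9.0 GB granted on paper
overcommit = allocated - host_ram_gb   # 1.0 GB beyond physical RAM
print(f"Allocated {allocated}GB across {len(vm_alloc_gb)} VMs "
      f"on a {host_ram_gb}GB host: {overcommit}GB overcommitted")
```

None of this says the extra gigabyte is ever actually touched; it only exists as allocation on paper, which is the whole point of the next paragraphs.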

Like WTSLabs, there are many other companies out there in the same boat. And for these, not being able to overcommit may mean buying another server. For large enterprises another box may be just a drop in the ocean. Not for us. 🙂

Performance-wise, there is nothing to complain about so far; everything works great and feels responsive. To me, the reality is that there will be cases where overcommitment is indeed not a good idea and could cause performance issues. But on the other hand, there will be many more cases where overcommitment will not be an issue and everything will work great, saving companies money.

The reason why Microsoft and Citrix, as of today, downplay memory overcommitment and all the technologies behind it (you can read more here) is, in my mind, simple: they do not have it.

Will they add it? I am pretty sure they will, and if they do, there will be two possible reasons:

1. They added a feature they consider useless, just because they are right and the world is wrong.
2. They added it because it is really important and useful.

I will go with the second option. And once they do it I may take a look at Hyper-V and XenServer again for our needs.

CR




7 thoughts on “Memory Overcommitment. Bluff or Real Requirement?”

  • Steve Greenberg

    OK, but why overcommit if you know what you need? The only case where it makes sense is for predictable workloads that are known to need extra RAM for known periods of time. Otherwise, why not just allocate what each VM actually needs?

  • Andy Wood

    Indeed – I saw a similar article on R2 not being ready for the enterprise because it doesn't support overcommit (http://bit.ly/4bTdI0) – but I'm still left wondering, "why didn't you just size it for what was needed in the first place?"

    • admin

      Well, as I mentioned to Steve Greenberg today, the thing is that environments change, as do needs. If you size something to fit a very specific need/solution and the requirements change slightly once the solution is up, you are stuck (i.e. some servers will not take more than a certain amount of memory). Overcommitment gives you that flexibility: not having to buy a new server just because some VM needs an extra 2GB. Someone could say, "Well, just get a box with 64GB instead of 32GB of RAM to accommodate future growth." That is true, but it also means the box will be underutilized until the day you actually need the extra resources, and you may not need them all in one shot. If we are consolidating servers to maximize our hardware utilization (taking servers that are always idle and consolidating them onto one big box that is busy all the time), oversizing just brings us back to having servers idling, not busy, which in many cases is what brought us to virtualization in the first place! 🙂

  • nostradamus

    Overcommit is overrated – just buy more RAM!! It is cheap now.
    Is overcommit the only trick left in VMware's bag that XenServer and Hyper-V have yet to match?

    I use HP (490) blades with 96GB of RAM, so I have no need for the unpredictability of memory overcommit. I am already getting enough savings and increased ROI from avoiding the support costs I would have spent on VMware (I use that money to buy more memory/servers).

    XenServer is faster for me and gives more predictable performance with my production Linux and Windows guests that have bursty (I/O) load.

    I left VMware for XenServer 5.5 and have no regrets. I am saving a lot of money (even though I still got Essentials EE) and the performance of our VMs (especially Linux guests) is much better than in the testbed VMware environment.

    Not sure why people have not yet realized that the horizon is changing; with XenServer, Hyper-V and KVM all offering enterprise-class features (and more to come), the technology is no longer a secret.

    Virtualization is a commodity. VMware knows this, and the only way they can prevent the constant erosion of their once-protected market is by moving into new markets such as cloud computing.

    • admin

      Well, what is cheap for some is not that cheap for others. 🙂
      Memory for enterprise-class servers costs a lot of money compared to the Crucial RAM we have in our small Dell PowerEdge here. I am not saying overcommitment is 'the' feature. What I am saying is that in some markets, like SMBs, being able to create 6 VMs that in theory use 10GB of RAM but in reality, 95% of the time, use only 6.5GB (as in our case) is a nice thing to have. It allowed us not to have to buy another server. As this one only goes up to 8GB, buying more memory was not an option. Agreed, if you are constantly using more RAM than the physical RAM in the box, then you have a problem for sure. But again, that is not the case in most SMBs we have been working with.
      And yes, virtualization really is a commodity these days, and VMware for sure needs to move, and move fast. Thanks for the comment!

  • Andy Wood

    Fair point, but flip that round: overcommit could put you in a position of over-utilizing.

    In the UK many banks let you go overdrawn by prior arrangement. However, go over that overdraft and you're hit with penalties.

    I'd have said overcommit lets you slip up once in a while and gives you some headroom, sure. But you've got to be managing that memory use closely, because if you're constantly overcommitting you're more than likely impacting the performance of the VMs… and if you're doing that (constant management, reviews of memory use, adding extra memory), why not just size it differently in the first place?

    And then I read these posts (http://bit.ly/nOOav) and now I'm scratching my head a bit.

    It's probably a moot point, as at least XenServer is going to have it sometime soon, or so a little bird told me.

    Interesting talking point tho’ – cheers 🙂

    • admin

      Oh, let me be clear. When I mention overcommitment I am not saying you are already exceeding the physical memory by actually using it. For example, on our server here I can see around 6.5GB of RAM in use, but as I said, all our VMs together have 9GB or 10GB allocated to them. This means our servers are NOT using that extra 1GB or 2GB, and therefore there is no performance penalty. If we tried to deploy this exact same environment on Hyper-V or XenServer we would not be able to, unless we allocated less memory to the VMs or bought another box. 🙂 As I said, I do not see it as a big deal in enterprise land, but for us small companies out there it could be a big one. Time, I guess, will tell whether this is a must-have feature. Cheers Andy!
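      To put rough numbers on that distinction, here is a minimal sketch; the figures mirror the ones quoted in this thread, and the variable names are mine:

```python
# Allocated vs. actively used memory, using the figures from this thread.
allocated_gb = 10.0  # total RAM granted to the 6 VMs (on paper)
active_gb = 6.5      # what the guests actually touch most of the time
host_gb = 8.0        # physical RAM in the host

overcommitted_on_paper = allocated_gb > host_gb  # allocations exceed the host
under_real_pressure = active_gb > host_gb        # but no swapping, no penalty

print(f"Overcommitted on paper: {overcommitted_on_paper}")
print(f"Real memory pressure:   {under_real_pressure}")
```

      The first condition being true while the second is false is exactly the "free" headroom being described: the penalty only shows up if active use, not allocation, passes physical RAM.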