The main issue with VDI? Windows.

Yes, you read that right.

As the VDI debate continues, now heated up thanks to the iPad (a piece of crap IMHO, but that is a subject for another post), I decided to finally write this post, which has been sitting here waiting for me for at least six weeks. It goes to the heart of VDI: Windows.

As of today, when we talk about a hosted desktop solution, whether we like it or not, Windows is the OS of choice (the desktop versions are what we are discussing here: XP, Vista, 7). And the reason I think VDI has a long, really long way to go, unless Microsoft takes action, is that same OS. Windows.

Let me start by saying this. There are several posts and plenty of information on the web clearly showing that Windows was optimized over the years to run on, guess what, real, physical hardware. Why? For the simple fact that until people started talking about VDI (circa 200X), all Windows deployments were 100% done on physical hardware! That is why the OS was tweaked/optimized to run on real hardware. Kind of makes sense, huh?

Now if you look at a post by Ruben on storage, this is clearly shown and stated. And we are just talking about the disk subsystem here. There are for sure several other things/components that were changed/tweaked to get the best performance out of real hardware.
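To put some rough numbers behind the disk-subsystem point, here is a quick back-of-the-envelope sketch in Python. Every figure in it (desktop count, per-desktop IOPS, write ratio) is an assumption I picked purely for illustration, not a measurement from any real deployment; the point is simply that the I/O a local physical disk used to absorb per machine now piles up on one shared storage back end.

```python
# Back-of-the-envelope math on why the disk subsystem is the first thing
# that hurts in VDI. Every figure below is an illustrative assumption,
# not a measurement from any real deployment.

desktops = 500                  # hosted Windows desktops sharing the back end
steady_iops_per_desktop = 10    # assumed steady-state IOPS per desktop
storm_iops_per_desktop = 60     # assumed IOPS during a boot/login storm
write_ratio = 0.7               # assumed share of writes in a desktop workload

steady_total = desktops * steady_iops_per_desktop
storm_total = desktops * storm_iops_per_desktop

print(f"Steady state: {steady_total:,} IOPS on shared storage "
      f"(~{steady_total * write_ratio:,.0f} of them writes)")
print(f"Boot/login storm: {storm_total:,} IOPS hitting the same array at once")
```

Swap in your own numbers; the shape of the problem stays the same.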

Add to that a very simple thing: Windows was never designed with things like application layering (explained in this post by Gabe), sharing a master image with differential vDisks, and so on in mind. These changes, required to make VDI an affordable, scalable, and stable reality, introduce several issues. The main one for any serious, large deployment will be what? Support. The next one: the simple fact that no single vendor offers everything that is needed for a scalable, stable VDI solution. This means you will probably end up with VMware on your virtual back end, Citrix XenDesktop as your VDI solution, and several other pieces from several other vendors like Atlantis, MokaFive, McDonalds, you name it. Yes, McDonalds is jumping on the VDI bandwagon (which leads us to my post about VDI and Patchworking – worth reading – as it brings several issues to the table).

Back to the topic: even though some vendors may say their mechanisms are not that intrusive (like in my discussion with John Whalen from MokaFive last night on Twitter), the bottom line is that not 100% of your apps may work. More than that, if they apparently work and you find issues down the road and call Microsoft or any other vendor, chances are they will simply tell you to go ____ yourself. You can fill in the blanks.

Some may say that was the case with Terminal Services/Citrix years ago. Yes, in a way that is true. The difference is that TS/Citrix was in several ways way, WAY less ‘destructive’/’intrusive’ in its approach to making things work than VDI is. VDI has to deal with sharing disk images. Dealing with deltas for each user. Dealing with layers. And so on. If you know the internals of any OS you can see right there what sets VDI and traditional SBC apart.
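To make the ‘shared master image plus per-user delta’ idea a bit more concrete, here is a minimal, purely conceptual Python sketch. It is not how any real hypervisor or vDisk format is implemented; it just shows the copy-on-write behavior: reads fall through to the read-only master unless the user’s delta has changed that block, and writes only ever land in the delta.

```python
# Toy model of one master image shared by many desktops, each with its
# own differential disk. Purely conceptual -- real differencing/linked
# clone formats are far more involved -- but the read/write rules are
# the essence of what VDI asks Windows to live on top of.

class DeltaDisk:
    def __init__(self, master):
        self.master = master   # shared, read-only master image: block -> data
        self.delta = {}        # this desktop's private writes only

    def read(self, block):
        # Read from the delta if this desktop changed the block,
        # otherwise fall through to the shared master image.
        return self.delta.get(block, self.master.get(block, b"\x00"))

    def write(self, block, data):
        # Writes never touch the master; they land in the per-user delta.
        self.delta[block] = data


master_image = {0: b"boot sector", 1: b"ntoskrnl", 2: b"pagefile"}
user_a = DeltaDisk(master_image)
user_b = DeltaDisk(master_image)

user_a.write(2, b"user A's pagefile churn")
print(user_a.read(2))  # b"user A's pagefile churn" -- served from A's delta
print(user_b.read(2))  # b"pagefile" -- still the untouched shared master
```

Windows itself, of course, has no idea any of this layering is happening underneath it, and that is exactly where the support and stability questions come from.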

As soon as Microsoft brought TS under its umbrella, making it an OS (NT4 TSE) or a service on Windows Server OSs (ever since Windows 2000 Server), things changed. All of a sudden Microsoft had to support its own solution and the products running on it. Vendors could no longer ignore the fact that people were actually using TS/Citrix to run their apps. And if you look at Windows Server 2008 R2 you can see how Microsoft changed the OS to make RDS (formerly known as TS) a better solution for hosting applications. Not to mention the release of tools like the RDS Application Compatibility Analyzer.

So in the end, Microsoft changed Windows Server to make it the ideal SBC platform (I will not go into discussing whether they succeeded or not – I do think they have done, together with Citrix, an excellent job over the years; there is still room for improvement, like with anything else in life).

VDI is no different. The problem is that we now have much deeper issues related to the OS than before. And the only one that can actually fix these is Microsoft. Period.

Windows may need a big redesign to accommodate VDI needs/requirements. I am sure there are several things that could be changed in Windows to make it the perfect OS for VDI (and I think most of you will agree with me that Windows is NOT perfect for VDI; for God’s sake, even on physical hardware it has its own issues). Once these changes are done (some may be fundamental changes to the OS), I am sure we will be able to scale a VDI solution without all the storage hassles, disk image sharing, disk deduplication, and so on. And it will be supported and (knock on wood) stable.

Of course I assume you want VDI to be scalable, stable and supported. If you do not need all three, for sure you can deploy a VDI solution today. It will be scalable and stable but unsupported. Or scalable and supported but unstable. Pick your two options.

If you think I am nuts, go ahead and leave a comment or email me directly. The bottom line, at least for me, is simple: Windows was never designed with VDI in mind, AND VDI has deeper ties to low-level OS components than TS/traditional SBC ever had. These two little things introduce several issues.

And that is why, as mentioned several times, it is thanks to these issues that companies like MokaFive and Atlantis exist, and why VDI as a solution keeps moving forward (honestly, I think we all owe a lot to these guys, the ones trying hard to make the virtual world less virtual and more real).

This is, simply put, people taking matters into their own hands while we wait for the day Microsoft releases Windows-V, the first release tailored for virtualization.

Windows-V? You heard about it here first.

CR
