Welcome to the world of tomorrow.

Here I am at BriForum 2009, sitting in a conference room, watching Brian talk about client hypervisors and thinking about a discussion I had last night with Harry Labana, Citrix CTO for XenApp.

The topic on my mind: what the future of our desktops, laptops and computing devices will look like, why I think client hypervisors and VDI are the way to go, and how the two will end up merging in the end.

The thing is, a lot can be done with the technology available today, but because VDI and client hypervisors are seen as two completely different beasts, the integration I am talking about is not there. Or is it?

The future I see is simple, actually. Master disk images for my work machine, home machine, porn-loaded machine and so on will exist somewhere, whatever they call that in the future. “The cloud” is what they are calling it today. It may change to “heaven” in the future. Or “hell”, depending on how you look at it and on how these guys implement it.

So if I am at the office, my device (a laptop-like thing) can connect to my work machine running on a cluster, using some remote display protocol. I would do that, for example, from a location where power is an issue: the device runs in a low-power mode where a small app sitting on top of the hardware's built-in hypervisor does nothing but the remote display part. If instead I am somewhere I can use all the power (and have the bandwidth), my device, with a locally cached copy of the image I want to run, simply downloads the differences between what is local and what is in the cloud, and I run that image locally, at full speed and with full access to the hardware (yes, Intel and others will indeed change the PC architecture as we know it today to provide access to GPUs, etc).
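To make the download-only-the-differences part concrete, here is a minimal sketch of the idea in Python, assuming the cloud side can serve per-block hashes and individual blocks by index. The names fetch_remote_hashes and fetch_block are hypothetical, and a real implementation would also handle resizing, compression and transport security:

```python
import hashlib

BLOCK_SIZE = 4 * 1024 * 1024  # 4 MB blocks; an arbitrary choice for this sketch

def local_block_hashes(path):
    """Hash the locally cached image block by block."""
    hashes = []
    with open(path, "rb") as f:
        while True:
            block = f.read(BLOCK_SIZE)
            if not block:
                break
            hashes.append(hashlib.sha256(block).hexdigest())
    return hashes

def sync_image(path, fetch_remote_hashes, fetch_block):
    """Overwrite only the blocks that differ from the master image in the cloud.

    fetch_remote_hashes() returns the master image's block hashes and
    fetch_block(i) returns the raw bytes of block i; both are placeholders
    for whatever the cloud service actually exposes.
    """
    local = local_block_hashes(path)
    remote = fetch_remote_hashes()
    with open(path, "r+b") as f:
        for i, remote_hash in enumerate(remote):
            if i >= len(local) or local[i] != remote_hash:
                f.seek(i * BLOCK_SIZE)
                f.write(fetch_block(i))  # pull just this block over the wire
```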

When I get home, the same thing takes place. I load my home PC image (just the differences) and run it locally. Or, again, I connect to it running somewhere else.

Assuming we get to this point (which I do believe will happen, though not by next year), will there be a market for the traditional TS model as we know it today? I am not sure it will even be needed.

Hardware architecture changes will indeed allow for much higher densities in the future. The same will happen with power. I am sure Citrix will have their own version of the greatest device of all time, the one we all know: Mr. Fusion from Back to the Future. That alone will be able to power a shitload of hosted things (remember, 1.21 gigawatts is a lot), probably with minimal heat.

Of course, all the management layers to handle all these running images, and the applications and patches on them, will need to be there. But I guess we will have that sorted out by the time this reality becomes… reality.

Thanks to Microsoft, we have learned a lot so far about how NOT to design profiles, application deployment tools and so on. If all the companies now working to create this ideal world of the future are watching and learning, the future is bright.

Very bright.

CR


Quest for VDI

I do respect Brian. A lot. He was, for sure, the guy who brought the whole TS community of the time together (Ron Oglesby, Shawn Bass, Tim Mangan, Jeff Pitsch, Benny Tritsch, myself and several others) and kept it that way. BriForum, the child of endless discussions among everyone mentioned above, is now a well-known conference and a great place for anyone looking for SBC/TS/Citrix/VDI/virtualization info.

If you read his website, he is always saying something about VDI (well, at least lately). Even the discussions we have with Microsoft and their RDS team (sorry, most of that is under NDA, otherwise I would post about it here) are now moving towards VDI. For God’s sake, even the lady who comes to clean my house every once in a while is now talking about VDI. So, in a way, it is becoming mainstream. Wait. I do not mean people are actually using it. I mean it is mainstream in terms of discussing it, talking about it. Not implementing it.

The problem is, even in these discussions with all the other Microsoft MVPs (I think everyone mentioned above is part of that group), I am not sure exactly what each one of them thinks about it. So today, while thinking about it, I decided it would be a good idea to talk to every single one of them, one-on-one, record it all and then publish those conversations here on our blog, one every day or week. Once I have all the recordings, I will put together some sort of final analysis based on what all these guys said, summarizing it, and publish that here too.

The reason is that, in a way, I want to know what all these big shots in the industry think and see if they actually agree on anything. 🙂

Secondly, I want to stop talking about VDI for at least six months. Why six months? Well, that will bring us to 2010, the year when, according to Brian, VDI replaces the whole world and we throw everything we know and have out of the window.

And the most important reason: it will give me a six-month break from discussing VDI with the cleaning lady.

CR


My take on VDI.

In the past year, if I had gotten a penny every time I heard the word VDI, I would not be here writing this post anymore. In the Server Based Computing/virtualization industry, Virtual Desktop Infrastructure (VDI) is “the” topic and, as mentioned, has been for a while.

Some people in the industry (mostly the Microsoft MVPs for RDS, the new name for Terminal Services) do know what I think, but as not everyone is part of that group, here is my take on it:

1. I am not sure why people like Brian and others do not compare VDI to real desktops. In a typical VDI scenario, virtual machines running a desktop OS like Windows XP or Windows Vista are accessed by users over some sort of protocol (RDP, ICA, etc.). For example, Citrix XenDesktop uses ICA and Provision Networks/Quest uses RDP. But today, with client hypervisors (a local hypervisor installed on your PC), you can run all these virtual machines directly on your own PC and not on a remote server. So VDI is, in a way, evolving. In the future I see users reaching their VMs over ICA/RDP when at work and, when disconnected, running them locally through a local hypervisor. Get back to the office and all changes are replicated. Cool.
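To put that in code: here is a toy sketch of the connect-or-run-locally decision, with a hypothetical client agent object; none of these method names come from any real product:

```python
def start_desktop_session(client):
    """Pick remote display vs. local execution (all client methods are made up)."""
    if client.office_network_available():
        # Back on the corporate network: replicate whatever changed while
        # offline, then use the hosted copy over a display protocol (ICA/RDP).
        client.push_local_changes()
        return client.connect_remote_display()
    # Disconnected: boot the cached VM on the local client hypervisor instead.
    return client.boot_local_vm()
```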

If we think about how many companies simply skipped the whole Server Based Computing thing, never running any application or desktop off a centralized TS/Citrix farm, and how many companies are only now getting off the ground, I do think it is simply natural for their IT guys to want to know how a VDI solution compares to a full-blown desktop approach (real desktops, fat clients, whatever name you want). Especially now that local hypervisors can be seen in the wild.

Again, these companies simply missed the SBC bandwagon. They are like several companies I know that never deployed Windows 2000 and jumped straight from NT 4.0 domains to Windows Server 2003 Active Directory; for them, whatever Microsoft introduced or did with Windows 2000 was completely irrelevant. The same applies here. These companies never cared about SBC/TS/Citrix. They are, or were, a full-blown PC/desktop shop. Now that virtualization is becoming widespread, they simply want to know how a regular PC environment compares to a virtualized one. Dead simple. And I can totally see and understand their reasons.

2. So far, there is always some performance hit associated with VDI. The problem here is simple. If you are deploying a VDI solution today to run Windows 2000 or XP with a four-to-seven-year-old application, chances are scalability will not be that bad (meaning you will be able to squeeze quite a lot of users into one big server, reducing the cost per user in the end). But if you are always trying to keep up with technology, and your company always goes for the latest and greatest, you may be going down the road with Windows 7 and Office 2009 sometime soon, and your applications will probably be written against the .NET Framework 4.0. Yes, I do know these are not out today. But keep in mind that with cheap hardware come lazy programmers and huge frameworks. Long gone are the days when we had to squeeze as much performance as we could out of a DOS app because an extra 1MB of RAM on each PC would break the company.

I cannot see .NET ZZ getting leaner or faster; the same goes for Office 20XX and Windows YY (replace X, Y and Z with any integer). They may look faster, but that is the result of much faster hardware with much more memory. That is why I came up with ‘Claudio’s Law’, in the spirit of Moore’s Law (named after that old dude from Intel): “The time it takes for Windows XXX to boot and load Office YYY on its current-generation hardware is constant.” You can try that for yourself. Get an old PC (a PII 266MHz with 64MB of RAM) running Windows 98 and try loading Office 97. Now fast-forward to today, take a typical machine running Windows Vista with Office 2007, and do the same. The time it takes to load is virtually the same!
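If you want to put actual numbers on that, a crude stopwatch script is enough. The readiness check below is literally a human pressing Enter, which is about as scientific as the original experiment calls for:

```python
import subprocess
import time

def time_launch(cmd):
    """Wall-clock time from launching an app until you judge it fully loaded."""
    start = time.perf_counter()
    proc = subprocess.Popen(cmd)  # e.g. ["winword.exe"] on Windows
    input("Press Enter the moment the app looks fully loaded... ")
    elapsed = time.perf_counter() - start
    proc.terminate()
    return elapsed

# Run the same measurement on the old PII box and on a current machine;
# Claudio's Law predicts the two numbers come out roughly the same.
```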

Where am I going with all this? If you keep running the latest and greatest, I cannot see VDI being a scalable solution. It is a solution for sure, but if the scalability is not there, it means a much higher cost per user, as you cannot run hundreds of VMs on a single box. Plus, if you want to do it properly, you will not be hosting hundreds of users on cheap hardware. You will go for the good stuff, and good stuff comes at a price. An 8-CPU box with 32 cores, 64GB of RAM, RAID and fast hard disks does not come cheap. And now, in a recession, I am 100% sure costs will decide the fate of several IT initiatives out there. The bottom line in many places will indeed be this: money.
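Some back-of-envelope math makes the point. Every number below is a placeholder I made up, not a quote from any vendor:

```python
# Hypothetical numbers for a beefy VDI host versus what users see at retail.
server_cost = 30_000.0   # the big multi-socket box, storage included
vms_per_box = 60         # density actually achieved with a current OS and apps
per_vm_extras = 150.0    # licensing, management and so on, per user

cost_per_user = server_cost / vms_per_box + per_vm_extras
print(f"~${cost_per_user:,.0f} per user")  # ~$650 here, versus a ~$600 retail PC
```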

Unless Microsoft, Intel or God comes up with a new way of doing things that lets us run 100 VMs on the above hardware, all running the latest and greatest OS and apps, I cannot see this changing.

3. Local hypervisors. OK, this adds quite a bit to the picture, as you can now run the VM directly on your PC without sharing resources with anyone else. Sounds great, doesn’t it? The problem is that several OS enhancements are now dependent on the hardware. For example, Snow Leopard and Windows 7 offload certain tasks to the GPU, and several other OS components rely on that low-level, direct access to the hardware. When a hypervisor layer is present, as of today, several of these enhancements are lost. That means a performance hit. Of course, there are several benefits to this approach (your ‘master images’ become hardware independent, running pretty much anywhere as long as the hypervisor is there), but in an age where users can go to Best Buy and get a decent, fast PC for under $600, are they willing to work on something slower (potentially much slower, depending on how OSs evolve) than what they have at home? If hardware manufacturers start implementing changes that allow things like a virtual GPU, that penalty will probably be minimized or eliminated, and VDI may take off.

But then we may break the whole cycle of Wintel software and hardware upgrades, and the industry behind it. Companies like Dell, HP and Lenovo do rely on users and companies buying and replacing computers every couple of years. So, in the end, what impact will such an approach have on the industry? I do know that we human beings always adapt, and I am sure these companies would have to adapt to survive the new way of doing things.

Well, that is what I think. As you can see, I do not think VDI is bad, ugly, beautiful or great. I do think it has its own merits, that it is capable of solving problems where other approaches may not work well, and that it is still in its infancy. But I simply cannot see how all of its drawbacks, issues and costs will be addressed by 2010. Sorry, Brian.

CR
