What is going on with Thin Clients?

The other day, after a discussion about how cheap hardware has become (and how much power is now available on your $400 desktop/laptop), I started to wonder, especially after winning that Wyse notebook/terminal at the BriForum GeekOut show, what is going on with thin client hardware.

No matter what the vendors may say, these should be cheaper and have way more power than what we see on the market these days. To start, they are pretty much all x86 based. Ok, they may have a fanless power supply and a few other differences. But why have these things not dropped in price while gaining power, the way PCs and laptops did?

I do understand these devices have flash memory and a few other things your regular PC does not have. But in my opinion there is no reason why they are so underpowered and, at the same time, so overpriced when compared to regular PCs.

And that, in a way, slows down the adoption of server-centric solutions. Note I am not using the term 'Server Based'. Server centric means any solution that runs off a centralized server model, whether that is a TS/Citrix farm or a hosted VDI solution.

With overpriced and underpowered clients at the user end, the overall experience suffers and, more than that, IT starts to question the benefit of putting something on someone's desk that costs as much as or more than a full-blown PC. Ok, I do understand things like power consumption and so on, but those are variables most IT people are not even aware of. And regardless of all these arguments, even if it draws 1/10 of the power a PC does, does it need to cost more and be that underpowered?

I am sure there is room for improvement in thin clients. But as I see it, the manufacturers are trying to maximize their margins by selling a design that is clearly outdated and has not changed in years. Why invest money to come up with the killer thin client that can provide a PC-like experience for SBC/VDI environments, use way less power than a PC and have no moving parts, if they can keep selling that 10-year-old design and make way more money?
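To put some rough numbers on the power argument, here is a quick back-of-the-envelope sketch in Python. The wattages, usage hours and electricity price are assumptions picked purely for illustration, not measurements of any particular device.

```python
# Back-of-the-envelope comparison of annual energy cost at the desk.
# Every figure below is an illustrative assumption, not a measured value.

HOURS_PER_YEAR = 8 * 250      # assume an 8-hour workday, 250 working days a year
PRICE_PER_KWH = 0.10          # assumed electricity price, in dollars per kWh

def annual_cost(watts):
    """Annual electricity cost for a device drawing `watts` while in use."""
    kwh = watts * HOURS_PER_YEAR / 1000.0
    return kwh * PRICE_PER_KWH

pc_cost = annual_cost(150)    # assumed draw of a typical desktop PC
thin_cost = annual_cost(15)   # assumed draw of a thin client (roughly 1/10)

print(f"PC:          ${pc_cost:.2f} per year")
print(f"Thin client: ${thin_cost:.2f} per year")
print(f"Savings:     ${pc_cost - thin_cost:.2f} per year")
```

At those assumed numbers the savings are real but modest, which is exactly why a purchase price at or above that of a full PC is so hard to justify.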

Well, what do you think? To me, if this industry wants what it does to become mainstream, it must change, the same way companies like Citrix and Microsoft are changing the way we access our applications today.

CR


Welcome to the world of tomorrow.

Here I am at BriForum 2009, sitting in a conference room watching Brian talk about client hypervisors and thinking about a discussion I had last night with Harry Labana, Citrix CTO for XenApp.

The topic: what the future of our desktops, laptops and computing devices will look like, why I think client hypervisors and VDI are the way to go, and how they will end up merging in the end.

The thing is, there is a lot that can be done with the technology available today, but as VDI and client hypervisors are seen as two completely different beasts, the integration I am talking about is not there. Or is it?

The future I see is actually simple. Master disk images for my work machine, home machine, porn-loaded machine and so on will exist somewhere, whatever they call that in the future. 'The cloud' is what they are calling it today. It may change to 'heaven' in the future. Or 'hell', depending on how you look at it and how these guys implement it.

So if I am at the office, my device (a laptop-like one) can connect to my work machine running on a cluster, using some remote display protocol. I would do that, for example, from a location where power is an issue, so the device runs in a low-power mode where a small app on top of the built-in hardware hypervisor does nothing but the remote display part. Or, if I am somewhere I can actually use all the power (and have the bandwidth), my device, with a locally cached copy of the image I want to run, just downloads the differences between what is local and what is in the cloud, and I run that image locally, at full power and with full access to the hardware (yes, Intel and others will indeed change the PC architecture as we know it today to provide access to GPUs, etc.).

When I get home, the same thing takes place. I load my home PC image (just the differences) and run it locally. Or, again, connect to it running somewhere else.
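To make the 'download only the differences' idea a bit more concrete, here is a minimal conceptual sketch. The chunk size, the cloud_manifest format and the fetch_chunk callback are all made up for illustration; no vendor's actual sync protocol is implied.

```python
# Conceptual sketch of "download only the differences" image sync.
# The chunk size, cloud_manifest format and fetch_chunk callback are invented
# for illustration; this is not any vendor's actual protocol.
import hashlib

CHUNK_SIZE = 4 * 1024 * 1024   # 4 MB blocks, an arbitrary choice


def chunk_hashes(path):
    """Hash the locally cached image, block by block."""
    hashes = []
    with open(path, "rb") as f:
        while True:
            block = f.read(CHUNK_SIZE)
            if not block:
                break
            hashes.append(hashlib.sha256(block).hexdigest())
    return hashes


def sync_image(local_path, cloud_manifest, fetch_chunk):
    """Bring the local cached image up to date with the cloud master.

    cloud_manifest: list of per-block hashes published for the master image.
    fetch_chunk(i): hypothetical callback that downloads block i from the cloud.
    Only blocks whose hashes differ (or are missing locally) are transferred.
    """
    local = chunk_hashes(local_path)
    with open(local_path, "r+b") as f:
        for i, remote_hash in enumerate(cloud_manifest):
            if i >= len(local) or local[i] != remote_hash:
                f.seek(i * CHUNK_SIZE)
                f.write(fetch_chunk(i))    # pull just this changed block
```

A real implementation would add compression, encryption, copy-on-write snapshots and a way to push local changes back up, but the core idea of moving only the changed blocks really is that simple.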

Assuming we get to this point (which I do believe will happen, just not by next year), will there be a market for the traditional TS model as we know it today? I am not sure it will even be needed.

Hardware architecture changes will indeed allow for much higher densities in the future. The same will happen for power. I am sure Citrix will have their own version of the greatest device of all time: Mr. Fusion from Back to the Future. That alone will be able to power a shitload of hosted things (remember, 1.21 gigawatts is a lot), probably with minimal heat.

Of course, all the management layers to handle all these running images, and the applications and patches on them, will need to be there. But I guess we will have that sorted out by the time this reality becomes… reality.

Thanks to Microsoft, we have learned a lot so far about how NOT to design profiles, application deployment tools and so on. If all the companies now working to create this ideal world of the future are watching and learning, the future is bright.

Very bright.

CR


My take on VDI.

If I could get a penny for every time I heard the word VDI in the past year, I would not be here writing this post anymore. In the Server Based Computing/virtualization industry, Virtual Desktop Infrastructure (VDI) is 'the' topic and, as mentioned, it has been that way for a while.

Some people in the industry (mostly the Microsoft MVPs for RDS, the new name for Terminal Services) know what I think, but as not everyone is part of that group, here is my take:

1. I am not sure why people like Brian and others do not compare VDI to real desktops. In a typical VDI scenario, virtual machines running a desktop OS like Windows XP or Windows Vista are accessed by users over some sort of protocol (RDP, ICA, etc.). For example, Citrix XenDesktop uses ICA and Provision Networks/Quest uses RDP. But today, with client hypervisors (a local hypervisor installed on your PC), you can run all these virtual machines directly on your own PC rather than on a remote server. So VDI, in a way, is evolving. In the future I see users accessing their VMs over ICA/RDP when at work and, when disconnected, running them locally through a local hypervisor. Get back to the office and all the changes are replicated. Cool.

If we think about how many companies simply skipped the whole Server Based Computing thing, never running any application or desktop off a centralized TS/Citrix farm, and how many companies are only now getting off the ground, I think it is only natural that their IT guys want to compare a VDI solution to a full-blown desktop approach (real desktops, fat clients, whatever name you want). Especially now that local hypervisors can be seen in the wild.

Again, these companies simply missed the SBC bandwagon. Like several companies I know that never deployed Windows 2000 and jumped straight from NT 4.0 domains to Windows Server 2003 Active Directory. For them, whatever Microsoft introduced or did with Windows 2000 was completely irrelevant. The same applies here. These companies never cared about SBC/TS/Citrix. They are (or were) full-blown PC/desktop shops. Now that virtualization is becoming widespread, they simply want to know how a regular PC environment compares to a virtualized one. Dead simple. And I can totally see and understand their reasons.

2. So far, there is always some performance hit associated with VDI. The problem here is simple. If you are deploying a VDI solution today to run Windows 2000 or XP with a 4-7 year old application, chances are scalability will not be that bad (meaning you will be able to squeeze quite a lot of users into one big server, reducing the cost per user in the end). But if you are always trying to keep up with technology, and your company always goes for the latest and greatest, you may be going down the road of Windows 7 with Office 2009 sometime soon, and your applications will probably be written on top of the .NET Framework 4.0. Yes, I do know these are not out today. But keep in mind that with cheap hardware come lazy programmers and huge frameworks. Long gone are the days when we had to squeeze as much performance as we could out of a DOS app because an extra 1MB of RAM on each PC would break the company.

I cannot see .NET ZZ getting leaner or faster; the same goes for Office 20XX and Windows YY (replace X, Y and Z with any integer). They may look faster, but that is the result of much faster hardware with much more memory. That is why I came up with 'Claudio's Law', along the lines of Moore's Law (that old dude from Intel): "The time it takes Windows XXX to boot and load Office YYY on its current-generation hardware is constant." And you can try that for yourself. Get an old PC (a PII 266MHz with 64MB RAM) running Windows 98 and load Office 97. Now fast-forward to today, get a typical machine running Windows Vista with Office 2007, and do the same. The time it takes is virtually the same!

Where am I going with all this? If you keep running the latest and greatest, I cannot see VDI being a scalable solution. It is a solution, for sure, but if the scalability is not there it means a much higher cost per user, as you cannot run hundreds of VMs on a single box. Plus, if you want to do it properly, you will not be hosting hundreds of users on cheap hardware. You will go for the good stuff, and the good stuff comes at a price. An 8-CPU box with 32 cores, 64GB of RAM, RAID and fast hard disks does not come cheap. And now, in a recession, I am 100% sure costs will decide the fate of several IT initiatives out there. The bottom line in many places will indeed be this: money.

Unless Microsoft/Intel/God comes up with a new way of doing things that will allow us to run 100 VMs on the above hardware, all running the latest and greatest OS and apps, I cannot see this changing.
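Just to illustrate the cost-per-user point with some arithmetic: the server price and the density figures below are assumptions, not benchmarks, but they show how quickly the numbers move.

```python
# Rough cost-per-user arithmetic for a hosted VDI box.
# The server price and the VM densities are assumptions for illustration only.

SERVER_COST = 40_000.0   # assumed price of an 8-socket, 32-core, 64GB box with fast storage

def cost_per_user(vms_per_box):
    """Hardware cost per user, ignoring licensing, storage and management."""
    return SERVER_COST / vms_per_box

# Assumed densities: a lean, older stack vs. the "latest and greatest".
for label, density in [("XP + 5-year-old app", 100), ("Win7 + new Office + .NET", 40)]:
    print(f"{label:>26}: {density:3d} VMs/box -> ${cost_per_user(density):,.0f} per user")
```

Under those assumed numbers, the heavier stack alone pushes the hardware cost per user well past what a decent PC costs at retail, before storage, licensing and management are even counted.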

3. Local hypervisors. Ok, this adds quite a bit to the picture, as now you can run the VM directly on your PC without sharing resources with anyone else. Sounds great, doesn't it? The problem is that several OS enhancements are now dependent on the hardware. For example, Snow Leopard and Windows 7 offload certain tasks to the GPU, and several other OS components rely on that low-level, direct access to the hardware. When a hypervisor layer is present, as of today, several of these enhancements are lost. That means a performance hit. Of course there are benefits to the approach (i.e. your 'master images' become hardware independent, running pretty much anywhere as long as the hypervisor is there), but in an age where users can go to Best Buy and get a decent, fast PC for under $600, are they willing to work on something that is slower (potentially much slower, depending on how OSs evolve) than what they have at home? If hardware manufacturers implement changes that allow things like a virtual GPU, that hit will probably be minimized or eliminated and VDI may take off.

But then we may break the whole Wintel software/hardware upgrade cycle and the industry behind it. Companies like Dell, HP, Lenovo, etc. do rely on users and companies buying and replacing computers every couple of years. So in the end, what impact will such an approach have on the industry? I do know that we human beings always adapt, and I am sure these companies would have to adapt to survive the new way of doing things.

Well, that is what I think. As you can see, I do not think VDI is bad, ugly, beautiful or great. I do think it has its own merits, that it can solve problems other approaches do not handle well, and that it is still in its infancy. But I simply cannot see how all its drawbacks/issues/costs will be addressed by 2010. Sorry, Brian.

CR
