In a Citrix world, does the iPhone matter?

As always, I was reading around today and saw this post at the Citrix Community Blogs regarding the Citrix Receiver for the iPhone. As you can see over there, I made some comments and the guys at Citrix replied.

My main question regarding this post is: does the iPhone really matter in this context? Is it a game-changing device that will help the adoption of server-centric solutions (VDI or TS, it does not matter)?

I ask because, as of today, several Windows Mobile phones not only have video outputs (so you can hook them up to a monitor/projector/TV) but also support Bluetooth keyboards, features that are NOT supported (at least officially, AND using the SDK available to us simple mortals) on the iPhone.

So today, if you want to, you can go out and buy a phone that you can hook up to a monitor and, using a small, foldable Bluetooth keyboard, use as a thin client (an RDP client is indeed available for Windows Mobile; I am sure the same is true for VNC and LogMeIn, and as per the post I mentioned above, a Citrix Receiver for Windows Mobile will be out soon). As far as I know, that did not really cause a huge commotion in the market. Plus, to be honest, I do not know anyone actually doing this. And finally, yes, I think it is somewhat cumbersome…

If we expand on that idea, you could simply go out and buy something like the RedFly from Celio, which is basically a netbook-type device that connects to your Windows Mobile (and other phones, like BlackBerries) and gives you a 7″ or 8″ screen and a reasonably sized keyboard. Much like the failed Palm Foleo, if you remember that one. That would be a killer solution, I think, ONLY if the price were in the $50 to $99 range. At $199 (its starting price), you are in netbook territory. So if I am going to carry an extra device anyway, why would I go for the RedFly? Yes, I know, it is hard to justify…

Back to the iPhone: if all of the above is available today, why is the iPhone seen as the ‘Jesus Phone’ (love that term, coined at Gizmodo!) for accessing Citrix?

Not sure, to be honest. I do think the iPhone is a great device, but to become a really useful thin client, a lot more is needed. The small form factor is indeed great for quickly accessing your servers and doing something… quickly. But for long-term use, the form factor does not help at all. And for quick access I can do the same from Windows Mobile or even from the CrapBerry (yes, I do think it is crap, but I will save that for another post).

The netbook-like form factor, I do think, is the way to go, but carrying another device is not really a solution. If hotels were willing to rent out devices like the RedFly for $5 a day, THEN I would see the potential, big time. They would have these paid off in a couple of weeks and would provide a real option for Windows Mobile/CrapBerry users to access Citrix backends! Of course, support for the iPhone could easily be added, assuming Apple blesses this type of usage for its iPhone (oh yes, you cannot use the iPhone the way you want; you use it the way Apple wants you to – the funny thing is you do not even notice Apple is actually manipulating you all the time).

Is the fate of all this really in the hands of a mobile device like the iPhone in the end? Or in the hands of someone who sees the potential for renting RedFlys at hotels, eliminating the need for us to carry another device?

Time will tell.

Cheers.

CR


User needs and the impact on TS/VDI.

After reading Daniel Feller’s post today on ‘VDI or TS’ I started thinking about one of the main arguments people have to justify VDI: the flexibility to deliver a unique desktop/environment to the user.

The more I think about this, the less sense it makes to me. When we start to think about the tools, procedures, regulations and many more things that surround us every single day and are part of our lives, we can find a common trend there: everything has a set of rules that we all, as a society, accept and follow. Many of them without questioning.

Before we go ahead, let’s take a look at a few examples. When driving, if you see a red light, you stop. You never questioned why the light is red and not bright pink. Plus, the traffic authorities will not really change the red/yellow/green traffic light at the corner of your house just because you prefer Purple/Bibbidy Bobbidy Blue/Crushed Melon. Nope. Once you get your license you accept the fact that these are the colors and that they have a certain meaning that you will follow.

Same for your bank. You know they are usually open from 9:30am to 4:30pm and that you cannot withdraw $100,000 in cash at that ATM close to your place. Again, the bank is not really going to accommodate your need to withdraw $1,000 in one dollar bills just because you think it would be more efficient for you. Or open at 2:00am because your wife prefers that.

Once you go through all the scenarios/things that are around us, it is easy to understand why we have regulations in place like SOX, HIPAA, etc.: to have a common set of rules/procedures that guarantee certain things will always be there, done in a certain way and so on.

Why IT services these days are seen in a different way, I have no clue. What I mean here is simple. IT is always being pushed by users to deliver something extra because every freaking user these days has a different, unique requirement!

Why does someone need his icons shown using Hattenschleisse fonts instead of Arial? Why does he need a picture of his three-year-old, single-testicle, three-legged albino camel as a background instead of the corporate logo? You get the picture.

Why can users not live with a common, standard set of tools? I do understand that Engineering needs different tools than Accounting, and I am fine with that. But why do we need to support twenty-five different Accounting departments in a company that has 25 users in the Accounting department? Is there really a need, in a business environment, to give every single person a unique set of tools so they can work? Can they not work with something called a ‘common toolset’?

TS can deliver that extremely well, assuming a common toolset is there and is enforced. At several places where we deployed SBC, the users had to adapt to the working environment and not the other way around.

I can definitely see the value in VDI and several reasons to use it. But the simple excuse ‘TS cannot address all the user requirements we have at our company’ is giant, MEGA BS to me. Why, all of a sudden, do users not need to follow rules in their working environment the same way they do for everything else in life?

If that were not the case, we would have traffic lights with pink, purple and brown lights just because your grandma likes and wants that.

As proven over the years, IT goes through cycles, always coming back to something that was done years ago. I am sure that will be the case here.

Once this generation of architects/admins/consultants creating these ‘do-whatever-you-want’ business environments is gone, I am certain someone down the road will realize how much of a PITA these are to manage, and we will get back to the old days where you would get the right tool for the job and nothing else.

Before you ask, no, I do not hate users. But I cannot understand why you need a pink keyboard matching a yellow mouse.

In the end, are these valid user needs or simply user ‘whining/bitching’? Ask yourself that the next time you are asked to deploy XenDesktop.

Cheers.

CR


What is going on with Thin Clients?

The other day, after a discussion about how cheap hardware has become (and the power that is now available in your $400 desktop/laptop), I started to wonder what is going on with thin client hardware, especially after winning that Wyse notebook/terminal at the BriForum GeekOut show.

No matter what the vendors may say, these should be cheaper and have way more power than what we see on the market these days. To start, they are pretty much all x86 based. Ok, they may have a different, fanless power supply and a couple of other things. But why have these things not dropped in price while increasing their power, the way PCs/laptops did?

I do understand these devices have flash memory and several other things your regular PC does not have. But in my opinion there is no reason why they are so underpowered and, at the same time, overpriced when compared to regular PCs.

And that, in a way, kind of slows down the adoption of server-centric solutions. Note I am not using the term ‘Server Based’. Server-centric means any solution that runs off a centralized server model, whether that is a TS/Citrix farm or a VDI-hosted solution.

With overpriced and underpowered clients at the user end, the overall experience is reduced and, more than that, IT starts to question the benefit of having on someone’s desk something that costs as much as or more than a full-blown PC. Ok, I do understand things like power consumption and so on. But these are usually variables that most IT people are not even aware of. And regardless of all these arguments, even if it sucks 1/10 of the power a PC does, does it need to cost more and be that underpowered?

I am sure there is room for improvement in thin clients. But as I see it, the manufacturers are kind of trying to maximize their gains by selling something that has an outdated design that for sure has not changed in years! Why invest money to come up with the killer thin client that can provide a PC-like experience to SBC/VDI environments, use way less power than a PC and have no moving parts, if they can still sell that 10-year-old design and make way more money?
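
Just to put some rough numbers on that power argument, here is a quick back-of-envelope sketch. All figures below (wattages, electricity price, the $200 premium) are my own assumptions for illustration, not measured or vendor data:

```python
# Back-of-envelope: how long does the power saving take to pay back
# a thin client's price premium? All numbers here are assumptions.

PC_WATTS = 150             # assumed average draw of a full-blown desktop
THIN_WATTS = 15            # assumed thin client draw (~1/10 of the PC)
HOURS_PER_YEAR = 8 * 250   # 8 hours/day, ~250 working days
KWH_PRICE = 0.10           # assumed electricity price in $/kWh
PREMIUM = 200.0            # assumed price premium over a cheap PC

saving = (PC_WATTS - THIN_WATTS) / 1000 * HOURS_PER_YEAR * KWH_PRICE
print(f"Power saving: ${saving:.2f}/year")
print(f"Years to pay back the premium: {PREMIUM / saving:.1f}")
```

With numbers like these, the saving is around $27 a year and the payback takes the better part of a decade, which is exactly why the power argument alone does not convince IT.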

Well, what do you think? To me, if this industry wants what it does to become mainstream, it must change. The same way companies like Citrix and Microsoft are changing the way we access our applications today.

CR


Welcome to the world of tomorrow.

Here I am at BriForum 2009, sitting in a conference room, watching Brian talk about client hypervisors and thinking about a discussion I had last night with Harry Labana, Citrix CTO for XenApp.

The topic: what the future of our desktops, laptops and computing devices will look like, why I think client hypervisors and VDI are the way to go, and how they will end up merging in the end.

The thing is, today there is a lot that can be done with the current technology available, but as VDI and client hypervisors are seen as two completely different beasts, the integration I am talking about is not there. Or is it?

The future I see is actually simple. Master disk images for my work machine, home machine, porn-loaded machine and so on will exist somewhere, whatever they call that in the future. The cloud is what they are calling it today. It may change to Heaven in the future. Or Hell, depending on how you look at it and how these guys implement it.

So if I am at the office, my device (a laptop-like one) can connect to my work machine running on a cluster, using some remote display protocol. I would do that, for example, from a location where power is an issue, so the device runs in a low-power mode where a small app sitting on top of the hardware’s built-in hypervisor does nothing but the remote display protocol part. Or, if I am at a location where I can actually use all the power my device has (and have bandwidth), the device, with a locally cached copy of the image I want to run, just downloads the differences between what is local and what is in the cloud, and I run that image locally, at full power and with full access to the hardware (yes, Intel and others will indeed change the PC architecture as we know it today to provide access to GPUs, etc).
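
Here is a minimal sketch of the device-side decision I have in mind. The function name, inputs and thresholds are all invented for illustration; this is not any real product’s API:

```python
# Hypothetical client-side logic: connect over a remote display
# protocol, or sync image diffs from the cloud and run locally on
# the client hypervisor. All names and thresholds are made up.

def choose_execution_mode(on_battery: bool, bandwidth_mbps: float,
                          image_cached_locally: bool) -> str:
    if on_battery:
        # Low-power mode: a tiny app on top of the built-in
        # hypervisor does nothing but the remote display protocol.
        return "remote-display"
    if image_cached_locally and bandwidth_mbps >= 10:
        # Enough bandwidth to download only the differences between
        # the cached image and the cloud master, then run locally
        # at full power with direct hardware access.
        return "sync-diffs-and-run-locally"
    return "remote-display"

print(choose_execution_mode(on_battery=False, bandwidth_mbps=50,
                            image_cached_locally=True))
```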

When I get home, the same takes place. I load my home PC image (just the differences) and run it locally. Or, again, connect to it running somewhere else.

Assuming that we get to this point (which I do believe will happen, but not by next year), will there be a market for the traditional TS model as we know it today? I am not sure it will even be needed.

Hardware architecture changes will indeed allow for much higher densities in the future. Same will happen for power. I am sure Citrix will have their own version of the greatest device of all time that we all know: Mr. Fusion from Back to the Future. That alone will be able to power a shitload of hosted things – remember 1.21 Gigawatts is a lot – probably with minimal heat.

Of course all the management layers to handle all these images running, the applications and patches on them and so on will need to be there. But I guess we will have that sorted out by the time this reality becomes… reality.

Thanks to Microsoft we have learned a lot so far about how NOT to design profiles, application deployment tools and so on, and if all these companies now working to create this ideal world of the future are watching and learning, the future is bright.

Very bright.

CR


Quest for VDI

I do respect Brian. A lot. For sure he was the guy who brought the whole TS community at the time together (Ron Oglesby, Shawn Bass, Tim Mangan, Jeff Pitsch, Benny Tritsch, myself and several others) and kept it that way. BriForum, the child of endless discussions among everyone mentioned above, is now a well-known conference and a great place for everyone looking for SBC/TS/Citrix/VDI/Virtualization info.

If you read his website, he is always saying something (well, at least lately) about VDI. Even the discussions we have with Microsoft and their RDS team (sorry, most of that stuff is under NDA, otherwise I would post about it here) are now moving towards VDI. For God’s sake, even the lady who comes to clean my house every once in a while is now talking about VDI. So in a way, it is becoming mainstream. Wait. I do not mean people are actually using it. I mean it is mainstream in terms of discussing it, talking about it. Not implementing it.

The problem is, even in these discussions with all the other Microsoft MVPs (I think all of those mentioned above are part of this group), I am not sure exactly what every single one of them thinks about it. So today, thinking about it, I decided it would be a good idea to talk to each of them one-to-one, record all of that, and then publish these conversations here on our blog every day/week. Once I have all the recordings I will then create some sort of final analysis based on what all these guys said, summarizing all of it, and publish it here.

The reason for that is in a way I want to know what all these big shots in the industry think and see if they actually agree on something. 🙂

Secondly, I want to stop talking about VDI for at least six months. Why six months? Well, that will bring us to 2010, the year when, according to Brian, VDI replaces the whole world and we throw everything we know/have out the window.

And the most important reason: that will give me a six-month break from discussing VDI with the cleaning lady.

CR


My take on VDI.

If I could have gotten a penny every time I heard the word VDI in the past year, I would not be here writing this post anymore. In the Server Based Computing/Virtualization industry, Virtual Desktop Infrastructure (VDI) is “the” topic and, as mentioned, has been for a while.

Some people in the industry (mostly the Microsoft MVPs for RDS – the new name for Terminal Services) do know what I think, but as not everyone is part of that group, here is my take on this:

1. I am not sure why people like Brian and others do not compare VDI to real desktops. In a typical VDI scenario, virtual machines running a desktop OS like Windows XP or Windows Vista are accessed by users using some sort of protocol (RDP, ICA, etc). For example, Citrix XenDesktop uses ICA and Provision Networks/Quest uses RDP. But today, with client hypervisors (a local hypervisor installed on your PC), you can run all these virtual machines directly on your own PC and not on a remote server. So VDI is, in a way, evolving. In the future I do see users using their VMs over ICA/RDP when at work and, when disconnected, using them locally through a local hypervisor. Get back to the office and all changes are replicated. Cool.
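
As a toy illustration of that ‘changes are replicated’ part, think of something rsync-like at the disk-block level. This is only a sketch under my own assumptions (block size, hashing scheme), not how XenDesktop or any shipping product actually does it:

```python
import hashlib

BLOCK = 4 * 1024 * 1024  # 4MB blocks, an arbitrary choice

def block_hashes(image: bytes) -> list[str]:
    """Hash every fixed-size block of a disk image."""
    return [hashlib.sha256(image[i:i + BLOCK]).hexdigest()
            for i in range(0, len(image), BLOCK)]

def changed_blocks(local_image: bytes, remote_hashes: list[str]) -> list[int]:
    """Indices of the blocks that changed locally and must be uploaded."""
    return [i for i, h in enumerate(block_hashes(local_image))
            if i >= len(remote_hashes) or h != remote_hashes[i]]

# Only the differing blocks travel over the wire, in either direction.
```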

If we think about how many companies simply skipped the whole Server Based Computing thing, never running any application or desktop off a centralized TS/Citrix farm, and how many companies are just now getting off the ground, I do think it is simply natural that their IT guys are willing to compare a VDI solution to a full-blown desktop (real desktops/fat clients – whatever name you want) approach. Especially now that local hypervisors can be seen in the wild.

Again, these companies simply missed the SBC bandwagon. Like several companies I know that never deployed Windows 2000 and jumped straight from NT 4.0 domains to Windows Server 2003 Active Directory. For them, whatever Microsoft introduced or did with Windows 2000 was completely irrelevant. The same applies here. These companies never cared about SBC/TS/Citrix. They are/were a full-blown PC/desktop shop. Now that virtualization is becoming widespread, they simply want to know how a regular PC environment compares to a virtualized one. Dead simple. And I can totally see and understand their reasons.

2. So far, there is always some performance hit associated with VDI. The problem here is simple. If you are trying today to deploy a VDI solution running Windows 2000 or XP, with a 4-7 year old application, chances are scalability will not be that bad (meaning you will be able to squeeze quite a lot of users onto one big server, reducing the cost per user in the end). But if you are always trying to keep up with technology, and if your company always goes for the latest and greatest, this means you may be going down the road of Windows 7 with Office 2009 sometime soon. And your applications will probably be written relying on the .NET Framework 4.0. Yes, I do know these are not out today. But keep in mind that with cheap hardware come lazy programmers and huge frameworks. Long gone are the days when we had to squeeze as much performance as we could out of a DOS app because an extra 1MB of RAM on each PC would break the company.

I cannot see .NET ZZ getting leaner or faster; same for Office 20XX and Windows YY (replace X, Y and Z with any integer). They may look faster, but that is the result of much faster hardware with much more memory. That is why I came up with ‘Claudio’s Law’, as in ‘Moore’s Law’ (that old dude from Intel): “The time it takes for Windows XXX to boot and load Office YYY on its current generation hardware is constant”, and you can try that for yourself. Get an old PC (PII 266MHz with 64MB RAM) with Windows 98 and try loading Office 97. Now fast forward to today and get a typical machine running Windows Vista with Office 2007 and do the same. The time it takes to load is virtually the same!
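
You can even turn the joke into a toy model: if each software generation needs roughly as much extra work as each hardware generation adds speed, the perceived time stays flat. The 1.5x growth rates below are invented purely for illustration:

```python
# Toy model of 'Claudio's Law': software bloat grows at the same
# rate hardware speeds up, so perceived boot+load time is constant.

work = 100.0   # arbitrary 'work units' to boot Win98 and load Office 97
speed = 1.0    # arbitrary speed of a PII 266MHz / 64MB RAM box

for gen in ("Win98 + Office 97", "WinXP + Office 2003", "Vista + Office 2007"):
    print(f"{gen}: {work / speed:.0f} time units")
    work *= 1.5    # assumed: each OS/Office generation needs ~1.5x the work
    speed *= 1.5   # assumed: each hardware generation is ~1.5x faster
```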

Where do I want to go with all this? If you keep running the latest and greatest, I cannot see VDI being a scalable solution. It is a solution for sure, but if the scalability is not there, it means a much higher cost per user, as you cannot run hundreds of VMs in a single box. Plus, if you want to do it properly, you will not be hosting hundreds of users on cheap hardware. You will go for the good stuff. And good stuff comes at a price. An 8-CPU box with 32 cores and 64GB RAM, RAID and fast hard disks does not come cheap. And now, in a recession, I am 100% sure costs will decide the fate of several IT initiatives out there. The bottom line in many places will indeed be this: money.
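
Some hypothetical arithmetic on that density point (the server price and the densities are my guesses, not quotes):

```python
# Cost per user as a function of VM density on one big box.
# The $40,000 price tag and the densities below are assumptions.

SERVER_COST = 40_000.0  # assumed: 8-socket/32-core, 64GB RAM, RAID, fast disks

for vms_per_box in (100, 50, 25):  # density drops as OS and apps get heavier
    print(f"{vms_per_box} VMs/box -> ${SERVER_COST / vms_per_box:,.0f} per user")
```

Every time the density halves, the cost per user doubles, and it does not take long before it passes the price of that full-blown PC at Best Buy.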

Unless Microsoft/Intel/God comes up with a new way of doing things that will allow us to run 100 VMs on the above hardware, all running the latest and greatest OS and apps, I cannot see this changing.

3. Local hypervisors. Ok, this adds quite a bit to the picture, as now you can run the VM directly on your PC, without sharing resources with anyone else. Sounds great, doesn’t it? The problem here is there are several OS enhancements that are now dependent on the hardware. For example, Snow Leopard and Windows 7 are now offloading certain tasks to the GPU. Several other components of the OS rely on that low-level, direct access to the hardware. When a hypervisor layer is present, as of today, several of these enhancements are lost. This means a performance hit. Of course there are several benefits to that approach (i.e. your ‘master images’ become hardware independent, running pretty much anywhere as long as the hypervisor is there), but in an age where users can go to Best Buy and get a decent, fast PC for under $600, are they willing to work on something that is slower (potentially much slower, depending on how OSs evolve) than what they have at home? If hardware manufacturers start implementing changes that allow things like a virtual GPU and so on, that hit will probably be minimized/eliminated and VDI may take off.

But then we may break the whole cycle of Wintel software/hardware upgrades and the industry behind it. Companies like Dell, HP, Lenovo, etc do rely on users and companies buying and replacing computers every couple of years. So in the end, what impact will such an approach have on the industry? I do know we, human beings, always adapt, and I am sure these companies would have to adapt to survive the new way of doing things.

Well, that is what I think. As you can see, I do not think VDI is bad, ugly, beautiful or great. I do think it has its own merits, that it is capable of solving problems where other approaches may not work well, and that it is still in its infancy. But I simply cannot see how all its drawbacks/issues/costs will be addressed by 2010. Sorry, Brian.

CR
