Platform Agnostic. Good or bad?

Today we can find several vendors that claim to be ‘platform agnostic’. One typical example in the SBC/VDI space is Quest’s vWorkspace, which can deliver applications from terminal servers or hosted desktops regardless of the virtualization solution being used.

This means your hosted desktops can be running on any hypervisor: VMware, Citrix, Microsoft and, I am sure, a bunch of other ones. On paper, this sounds great.

But when talking to some large enterprise customers I realized that once you rely on multiple vendors to run your solution, support may become a big problem.

For example, if your XenServer environment is not performing as expected, where exactly is the issue? On your SAN from HP? On the trunking between your IBM blade chassis and your Cisco core switches? On XenServer itself? On some specific VM running under XenServer?

To find where the issue is you may have to call 10 different vendors. On top of that, once you find the problem, that does not mean it is solved. One vendor may say the problem exists because another vendor is not implementing the specification for a certain protocol/standard properly, and blame them for the issue. The bottom line is you may have a support nightmare on your hands.

If you can have everything (or most things) under one roof, that means one single place to call and one single place to blame. No more telling your boss ‘it is vendor X’s fault according to vendor Y, but vendor X says it is vendor Y’s fault’.

Reminds me of the early days of Citrix when Microsoft would blame Citrix and Citrix would blame Microsoft for an application not working as expected. Great times indeed.

Back to the topic: is this what brought Cisco into the blade world with its Unified Computing initiative? In the end, do single-vendor solutions bring value to the table?

I guess there is no simple answer to this question. I can see the value of having everything under one roof and not having to deal with multiple vendors. But not being tied to a single vendor also brings flexibility to the table and helps avoid a monopoly.

As I am not flexible…

CR


Memory Overcommitment. Bluff or Real Requirement?

In my humble opinion, it is a real requirement. Now let me explain why.

As a real-world example, you have us, WTSLabs. When we decided to move to a virtual world, I personally looked at most of the offerings available: Microsoft Hyper-V 2008 R2, Citrix XenServer and VMware ESXi (considering our size, the free offerings would do the trick for us for sure). The deciding factor that took us down the VMware ESXi route was the simple fact that it can overcommit memory.

When we looked at how our VMs were performing, most of the time they were sitting idle, consuming few resources (that was the case with our environment – your environment may be completely different, and in that case overcommitment may not be for you).

No matter what anyone else says, if you all remember, years ago one of the main driving factors (or sales pitches, if you will) towards virtualization was consolidating your X physical servers onto a handful of physical hosts. I remember several times seeing Sales/Pre-sales guys going into offices and explaining that most of the time the customers’ servers were sitting there doing nothing, and that thanks to that, bringing all these ‘idle’ servers under one single host was possible.

I am not saying that is the case with every server and/or every environment. For sure there are several SQL and Exchange boxes out there that are always being hammered, working hard. But for tons of companies out there, especially in the SMB market, it is almost guaranteed that is not the case.

Back to our own scenario: we now run 6 VMs. The resource hog is our Exchange 2007 SP2 box (what a surprise…), set up with 4GB. Then we have one domain controller, a web server, a TS (running Windows Server 2008 R2) and two XP VMs. Monitoring them up and running on a regular day shows they are indeed idle most of the time, not using many resources. I do not remember all the numbers, but I know we are overcommitting memory, though not by much (probably one to two gigs – our Dell server has 8GB).
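For the curious, here is a minimal back-of-the-envelope sketch of that math in Python. Only the Exchange allocation (4GB) and the host total (8GB) are real numbers from our setup; the other per-VM allocations are illustrative guesses, not our actual configuration:

```python
# Back-of-the-envelope memory overcommitment math for our ESXi host.
# Only Exchange (4GB) and the host total (8GB) are the real numbers;
# the rest are illustrative guesses.
HOST_RAM_GB = 8.0

vm_allocations_gb = {
    "Exchange 2007 SP2":   4.0,   # the resource hog
    "Domain controller":   1.0,   # guess
    "Web server":          1.0,   # guess
    "TS (Server 2008 R2)": 2.0,   # guess
    "XP VM #1":            0.75,  # guess
    "XP VM #2":            0.75,  # guess
}

allocated = sum(vm_allocations_gb.values())
overcommit = allocated - HOST_RAM_GB

print(f"Allocated to VMs: {allocated:.2f} GB on a {HOST_RAM_GB:.0f} GB host")
print(f"Overcommitted by {overcommit:.2f} GB ({allocated / HOST_RAM_GB:.2f}x)")
# -> Allocated to VMs: 9.50 GB on a 8 GB host
# -> Overcommitted by 1.50 GB (1.19x)
```

Without overcommitment, those same allocations simply would not fit on the host; with it, the hypervisor reclaims idle memory and everything runs.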

Like WTSLabs, there are many other companies out there in the same boat. And for those, if you cannot overcommit, it may mean buying another server. For large enterprises another box may be just a drop in the ocean. Not for us. 🙂

Performance-wise, there is nothing to complain about so far; everything works great and seems responsive. To me, the reality is there will be cases where overcommitment is indeed not a good idea and where there could be performance issues if it is used. But on the other hand, there will be way more cases where overcommitment will not be an issue and everything will work great, saving companies money.

The reason why Microsoft and Citrix as of today downplay memory overcommitment and all the technologies behind it (you can read more here) is, in my mind, simple: they do not have it.

Will they add it? I am pretty sure they will, and if they do there will be two possible reasons for it:

1. They added a feature they consider useless just to please the market, even though they think they are right and the world is wrong.
2. They added it because it is really important and useful.

I will go with the second option. And once they do, I may take a look at Hyper-V and XenServer again for our needs.

CR


Why all this drama now?

If you have been around the SBC space for a couple of years, you are probably aware that if you had a Terminal Services/Citrix solution in place at your company, you were treated in a different way. Not necessarily a good one.

In most cases the ‘Citrix’ solution was left on its own by the ‘Server’ guys. The ‘Citrix’ guys were the ones responsible for setting it up, making sure it was up and running, that performance was good (at least on their end – you cannot do much about Outlook performance when your ‘Server’ guys decide to run a 1,000-mailbox Exchange 2007 server on VMware Player) and so on.

That of course caused some interesting issues. When you had a performance problem, the ‘Server’ guys would almost automatically blame ‘Citrix’. As the tools available evolved, it became much easier to prove to these douche bags that the issue was actually in the way they set up their SQL servers (all on one single disk!), their Exchange boxes, their AD and even their switches/routers. And not in Citrix.

Fast forward to today’s world, where VDI is the next big thing (well, funny pause here: years ago, when everyone started talking about VDI, the CEO of a very large company that is a MAJOR player in the SBC space told me during BriForum that for him ‘VDI was one of the dumbest ideas ever but as everyone is talking about it we are now supporting this…’), and now people are all concerned about how to treat the ‘VDI’ guys at the datacenter. Read Gabe’s post on the subject here.

My point here is simple. Why all this now? ‘Citrix’ people have been used to this for years, and in most cases the guys pushing VDI forward are the EXACT same guys that had to push ‘Citrix’ forward years ago.

These people are used to it and learned how to deal with that separation at the datacenter at the time. In the past, the user’s desktop was hosted on a server at the datacenter (one that ran Windows Whatever with TS enabled and Citrix WinMetaXen or QuestProvisonvWhat), running on server-grade (hopefully) hardware, and users would access it over RDP/ICA. Today’s hotcake, VDI, has the user desktop hosted in a datacenter, running on server-grade hardware, and users access it over RDP/ICA. So where is the difference?

There is no difference. The ‘Citrix’ guy is now the ‘VDI’ dude (as ‘guy’ is really out – ‘dude’ is in). And the same way the ‘Citrix’ guy had to fight his battles with the ‘Server’ guys and had to find his way to manage his beloved puppy, all the ‘VDI’ dudes need to do is basically the same.

With a huge advantage: they have all the history, everything we ‘Citrix’ guys had to go through, discussed/documented/explained all over the web.

If these ‘dudes’ can learn from our past mistakes/battles/history, they will see this is not rocket science and that in several ways they are no different from what we were 5, 10 years ago.

Grow up guys. VDI is not that different from TS.

Before you thank me for this post: you are welcome.

CR


Thank you, VMware.

This time I will be quick. Over the weekend I upgraded my VMware ESXi environment at home and, on that front, everything worked smoothly (yes, I know most of the time upgrading VMware stuff is not really that easy). But as with anything that comes out of 3401 Hillview Ave, Palo Alto, CA, something had to go wrong.

VMware had MONTHS to fix the freaking issue with their vSphere Infrastructure Client, or whatever that is called now (as they now copy everything Citrix does, they started changing names – word on the street is VMware ESXi will be renamed VMware SEXi and vSphere will become vOval), on Windows 7.

Of course I am running Windows 7. Windows XP, according to my daughter, is “so last year”, so I moved everything I have to Windows 7. Once you are in Windows 7 land, the VIC (Virtual Infrastructure Client Crap) does not work anymore, and when you try to log on it throws one of those really useful, easy-to-understand error messages. Why not show a simple window that says “You are screwed. Thanks for using VMware.”?

Thank the Lord there is a fix for what lazy VMware screwed up. You need to grab a DLL from somewhere and change a config file to get that crap working again. All explained here.
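For reference, here is a rough sketch, in Python, of what that workaround amounts to as I recall it: grab a System.dll from a pre-.NET 3.5 SP1 machine, drop it in a Lib folder next to the client, enable developer installation mode in the client config, and point DEVPATH at that folder. Every path below is an assumption on my part, so check the linked article before trusting any of it:

```python
# Rough sketch of the circulating VI Client-on-Windows-7 workaround.
# All paths here are assumptions; verify against the linked write-up.
import os
import shutil

# Assumed default install path of the VI/vSphere Client launcher.
CLIENT_DIR = r"C:\Program Files\VMware\Infrastructure\Virtual Infrastructure Client\Launcher"
LIB_DIR = os.path.join(CLIENT_DIR, "Lib")

# A System.dll taken from an older (pre-.NET 3.5 SP1) machine, e.g. XP.
# You must supply this file yourself; this location is made up.
OLD_SYSTEM_DLL = r"C:\temp\System.dll"

# 1. Create the Lib folder and drop the old DLL into it.
os.makedirs(LIB_DIR, exist_ok=True)
shutil.copy(OLD_SYSTEM_DLL, LIB_DIR)

# 2. VpxClient.exe.config needs a <runtime> section enabling developer
#    installation mode so DEVPATH is honored; merge this in by hand:
print('Add under <configuration> in VpxClient.exe.config:')
print('  <runtime>')
print('    <developmentMode developerInstallation="true"/>')
print('  </runtime>')

# 3. Point DEVPATH at the Lib folder (system-wide, or per-session like
#    this before launching VpxClient.exe).
os.environ["DEVPATH"] = LIB_DIR
```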

Once I did that, everything was up and running again. And on Windows 7.

Please do not tell me you need Windows XP Mode on Windows 7 to fix this crap, VMware.

Virtualization is cool and great. But using it to fix shit you created in the first place is not cool.

Really.

CR


In a Citrix world, does the iPhone matter?

As I am always reading, today I saw this post at the Citrix Community Blogs regarding the Citrix Receiver for the iPhone. As you can see over there I made some comments and the guys at Citrix replied.

My main question regarding this post is, does the iPhone really matter in this context? Is it a game changer device that will help the adoption of Server Centric solutions (VDI or TS, does not matter)?

I ask because, as of today, several Windows Mobile phones not only have video outputs (so you can hook them up to a monitor/projector/TV) but also support Bluetooth keyboards – features that are NOT supported (at least officially, AND using the SDK available to us mere mortals) on the iPhone.

So today, if you want to, you can go out and buy a phone that you can hook up to a monitor and, using a small, foldable Bluetooth keyboard, use as a thin client (an RDP client is indeed available for Windows Mobile; I am sure the same is true for VNC and LogMeIn, and as per the post I mentioned above, a Citrix Receiver for Windows Mobile will be out soon). As far as I know that did not really cause a huge commotion in the market. Plus, to be honest, I do not know anyone actually doing this. And finally, yes, I think it is somewhat cumbersome…

If we expand on that idea, you could simply go out and buy something like the RedFly from Celio, which is basically a netbook-type device that connects to your Windows Mobile phone (and other phones like BlackBerries) and gives you a 7″ or 8″ screen and a reasonably sized keyboard. Same idea as the failed Palm Foleo, if you remember that. That would be a killer solution, I think, ONLY if the price were in the $50 to $99 range. At $199 (its starting price), you are now in netbook territory. So if I will be carrying an extra device anyway, why would I go for the RedFly? Yes, I know it is hard to justify…

Back to the iPhone: if all of the above is available today, why is the iPhone seen as the ‘Jesus Phone’ (love that term, coined at Gizmodo!) for accessing Citrix?

Not sure, to be honest. I do think the iPhone is a great device, but to become a really useful thin client a lot more is needed. The small form factor is indeed great for quickly accessing your servers and doing something… quickly. But for long-term use the form factor does not help at all. And for quick access I can do the same from Windows Mobile or even from the CrapBerry (yes, I do think it is crap. But that I will save for another post).

The netbook-like form factor I do think is the way to go, but carrying another device is not really a solution. If hotels were willing to rent devices like the RedFly out for $5 a day, THEN I would see the potential, big time. At $199 a unit, they would have these paid off after a month or two of steady rentals, and they would provide a real option for Windows Mobile/CrapBerry users to access Citrix backends! Of course support for the iPhone could easily be added, assuming Apple blesses this type of usage for its iPhone (oh yes, you cannot use the iPhone the way you want; you use it the way Apple wants you to – the funny thing is you do not even notice Apple is actually manipulating you all the time).

Is the fate of all this really in the hands of a mobile device like the iPhone, in the end? Or in the hands of someone who sees the potential of renting RedFlys out at hotels, eliminating the need for us to carry another device?

Time will tell.

Cheers.

CR


Keeping yourself up to date.

With so many changes happening in the Virtualization/Server Based Computing space these days, I have noticed that most technical people, especially IT staff, are having a very hard time trying to catch up on everything going on out there. From simple things like VMware Studio 2.0 and Citrix WorkFlow 2.0 all the way to Citrix XenApp 5.0 Feature Pack 2.

And just for the record, in the middle you have all the application virtualization stuff (Thinstall, App-V, Altiris SVS, etc.), OS virtualization (ESX, Hyper-V, XenServer, VirtualBox, etc.) and things like Windows Server 2008 R2 with all its new TS/VDI features, Quest vWorkspace and so on!

How can you make intelligent decisions about all this when the landscape is changing at such a fast pace? After some internal discussions, and given my passion for speaking and training people, I decided to create what I call a ‘Crash Course’ on server-centric technologies.

In this 5-day training we cover all of these. Sounds crazy, eh? Yep. But I am sure it will be a great course for everyone out there who wants to get to the bottom of it all and save a ton of time figuring out what all these things do, how they compare and what they can do for them.

As you can imagine, the idea here is not to give you a deep dive into any of these things so you become a guru overnight. Instead, as mentioned, we will give you the no-BS, real-world view of all of them, based on the work we have been doing with these technologies and products for years.

If you are interested, please check our training section at http://www.wtslabs.com/training.asp.

And as always feel free to email me directly with suggestions, opinions and so on!

CR


User productivity to justify something?

This morning I was checking my Twitter feed and got a post regarding (yes, try to guess) VDI. Again. This time with a different twist: using a (possible) increase in productivity to justify going VDI. You can read the original post here.

If we were all reading/writing this 20 years ago, I would agree with the article. But today, sorry, I cannot. And I will explain why.

As per the original article, when typewriters were replaced by computers, even someone not in IT could clearly see how much more a computer can do compared to a typewriter. So at the time we had what I call a fundamental change in technology: something major that clearly allowed for a tenfold productivity increase or more. That is something any CFO/CEO can clearly see, and the reason why typewriters were gone quickly. I compare this kind of event in technology to the asteroid that got rid of the dinosaurs. Something major, abrupt, with the power to bring massive change.

Fast forward to today’s IT landscape and the reality is WAY different. First of all, as I have written several times, if VDI is indeed going to replace what we know as a desktop today, it will have to introduce one of these fundamental changes. And as you can read on this blog, I do not think that will happen unless major hardware changes are brought to the table.

So back to the topic: in the past couple of years there was nothing huge (another IT asteroid, if you will) that brought a tenfold increase in productivity. Sure, application virtualization, OS virtualization, SBC, VDI, type 1 hypervisors and so on helped in many areas, but it was nothing like what happened with the typewriters.

Plus, as of today, measuring user productivity is extremely hard. If, with the current tools and WITHOUT VDI, you cannot give your users what they need to work, there is something wrong with you and/or your company. With current, proven technologies you can probably give way more than what your users need to work. Just remember the Office paradigm where “80% of all users use 20% of all features”.

Add to that the fact that even if you can indeed give users something that cuts 30 minutes a day from their workload, do you really think they will use those now-free 30 extra minutes a day to work more? I do not think so. I can guarantee you those 30 minutes will be spread across two coffee breaks and a lunch, so they get some extra time to themselves. At the end of the day they will be doing the same workload, just with 30 extra minutes to themselves.

And oh boy, how can you measure that considering users are all different? I bet there are users in your company who completely understand your IT environment and the tools at their disposal. Others probably still do not understand how that mysterious, almost magical device, the mouse, works. This is the reality. Not to mention that with any change comes user training, and not all users, again, are the same. Some may take days to adapt. Others, months. And some will never adapt and will curse you and your changes forever.

The point here is simple. I get that an increase in productivity can indeed be used to justify changes, as it did for the extinction of the typewriters. But when the introduction of a new technology brings only minor increases (and that is the case, in my opinion, for VDI – users will still be doing pretty much the same things, pretty much the same way), it gets very hard to justify, especially these days when C-level executives are trying to trim all the fat they have been carrying for years.

In a way it is like trying to justify to your wife why you should get rid of the working, 20,000:1 CR, 720p-native triple-LCD projector in your home theater to buy the 25,000:1 CR, 1080p-native one. As a videophile I know the difference and I can see it. But for my wife and most of the people who watch movies at my place, the difference will be minimal (assuming they can see it at all). The same goes for IT these days.

Unless someone can bring way more to the table, another IT asteroid, I cannot be convinced, as of today, to deploy VDI in 90% of the cases for my customers. I can see it as a solution for maybe 10% of them. But sorry, it is no game changer, no matter how much a lot of people have been bragging all over the Internet, from Twitter to blogs. Again, unless some major shift or technological advancement happens at the HW level.

VDI people out there, you need way more to convince me.

Show me what you got.

Cheers.

CR


User needs and the impact on TS/VDI.

After reading Daniel Feller’s post today on ‘VDI or TS’, I started thinking about one of the main arguments people use to justify VDI: the flexibility to deliver a unique desktop/environment to the user.

The more I think about it, the less sense it makes to me. When we start to think about the tools, procedures, regulations and a bunch more things that surround us every single day and are part of our lives, we can find a common trend: everything has a set of rules that we all, as a society, accept and follow. Several without questioning.

Before we go ahead, let’s take a look at a few examples. When driving, if you see a red light you stop. You never questioned why the light is red and not bright pink. Plus, the traffic authorities will not change the red/yellow/green traffic light at the corner of your house just because you prefer Purple/Bibbidy Bobbidy Blue/Crushed Melon. Nope. Once you get your license you accept that these are the colors and that they have a certain meaning you will follow.

Same for your bank. You know it is usually open from 9:30am to 4:30pm and that you cannot withdraw $100,000 in cash at that ATM close to your place. Again, the bank is not going to accommodate your need to withdraw $1,000 in one-dollar bills just because you think that would be more efficient for you. Or open at 2:00am because your wife prefers that.

Once you go through all the scenarios/things around us, it is easy to understand why we have regulations like SOX, HIPAA, etc. in place: to have a common set of rules/procedures that guarantee certain things will always be there, done in a certain way and so on.

Why IT services these days are seen in a different way, I have no clue. What I mean here is simple. IT is always being pushed by users to deliver something extra, because every freaking user these days has a different, unique requirement!

Why does someone need his icons shown in the Hattenschleisse font instead of Arial? Why does he need a picture of his three-year-old, single-testicle, three-legged albino camel as a background instead of the corporate logo? You get the picture.

Why can’t users live with a common, standard set of tools? I do understand Engineering needs different tools than Accounting, and I am fine with that. But why do we need to support twenty-five different Accounting departments in a company that has 25 users in the Accounting department? Is there really a need, in a business environment, to give every single person a unique set of tools so they can work? Can they not work with something called a ‘common toolset’?

TS can deliver that extremely well, assuming a common toolset is there and is enforced. At several places where we deployed SBC, the users had to adapt to the working environment and not the other way around.

I can definitely see the value in VDI and several reasons to use it. But the simple reason ‘TS cannot address all the user requirements we have at our company’ is giant, MEGA BS to me. Why, all of a sudden, do users not need to follow rules in their working environment the same way they do for everything else in life?

If that were not the case we would have traffic lights with pink, purple and brown lights just because your grandma likes and wants them.

As proven over the years, IT goes through cycles, always coming back to something that was done years ago. I am sure that will be the case here.

Once this generation of architects/admins/consultants creating these ‘do-whatever-you-want’ business environments is gone, I am certain someone down the road will realize how much of a PITA these are to manage, and we will get back to the old days where you got the right tool for the job and nothing else.

Before you ask, no, I do not hate users. But I cannot understand why you need a pink keyboard matching a yellow mouse.

In the end, are these valid user needs or simply user ‘whining/bitching’? Ask yourself that the next time you are asked to deploy XenDesktop.

Cheers.

CR


Intel buys Citrix.

WRITTEN BACK IN 2009 – Funny eh?

No, I am not someone high up in the industry who knows something no one else knows. And before you tweet about this or send this post to your buddies: this is not true.

Last night, while working and reading, this idea about Intel buying a virtualization vendor came to my mind, and I decided to take that line of thought and exercise it a little bit. So here you have what I think.

First of all, the point here is not to decide whether this would really make a lot of sense from a business (read: $$$) perspective. I am sure there could be several arguments why this would be great and why it would suck. So feel free to start such a discussion using the comments here. 🙂

The main idea is what this acquisition could do to the market, not only the virtualization market but the processor/CPU/chipset one as well, and how all that could push ‘Server Centric’ solutions combined with local hypervisors.

So imagine this acquisition goes through and no anti-trust crap happens (big chance it would in this case).

When you own the hardware platform, it is much easier to get the most out of whatever you run on top of it. Or any platform for that matter. For example, I am sure Microsoft Office will always have a competitive advantage: it runs on top of Windows, and as Microsoft owns both, I am sure there are several tricks up the sleeve that make Office simply better, faster or more efficient than the competition. Same for a graphics card manufacturer. They know their hardware and for sure are the ones who know how to get the best out of it.

Or let’s take a big real-world example here: the Xbox 360. By designing the hardware platform itself, focused on what that platform is supposed to do, you get the most out of it. And owning both layers (the 360 hardware and its software – OS/SDK), no one is better placed than Microsoft to release games that keep pushing the envelope, getting the most out of the hardware at early stages. Halo/Project Gotham (even on the old Xbox) were very good examples of that. Several others exist on the 360 to prove the point.

Now going back to Intel/Citrix. If Intel, the chipset/CPU manufacturer, acquires someone like Citrix, which now has a stronghold in the virtualization market, Intel could do some very interesting things:

– As they would own both layers, I am sure they could make the hypervisor get the most out of the hardware.
– Going the other way around, with a much better understanding of the software layer, the hardware platform could be changed to provide the best performance/transparency to the hypervisor.
– A new chipset with the hypervisor built in could be delivered and loaded onto any new PC from now on. This would put a hypervisor on every computer, probably bringing virtualization to the masses (I am not saying this is good or bad. I am just stating that this would indeed put a hypervisor on lots of machines around the globe).
– Now that someone has both layers under one umbrella, what would prevent Intel from having a hypervisor that is completely optimized for Intel chipsets and would never be as good on AMD hardware, for example? Or, the other way around, another hypervisor would never be as good/efficient on Intel hardware as Intel’s own (let’s name it Xentel Server).

Thinking about that made me realize this could be the downfall of several big names out there, like AMD and VMware for example. Why would I run VMware if Xentel Server is way more scalable and faster? Why would I buy machines with AMD CPUs if the ones with ‘Intel inside’ not only have a built-in hypervisor but also give way better performance when I actually use that hypervisor? What would this do to Hyper-V, for example? Given that SCVMM is modular, Microsoft could easily adopt Xentel Server and simply write all the management goodies needed…

There are several other things such an acquisition could potentially bring to the table, and certainly I did not discuss all of them in this article.

Will this happen? I would tend to say no, it will not happen.

But it would be a VERY interesting thing to see. So for now all we can do is discuss how such a thing would change the market as we know it. 🙂

CR


What is going on with Thin Clients?

One of these days, after a discussion about how cheap hardware has become (and the power now available in your $400 desktop/laptop), I started to think, especially after winning that Wyse notebook/terminal at the BriForum GeekOut show, about what is going on with thin client hardware.

No matter what the vendors may say, these should be cheaper and have way more power than what we see on the market these days. To start, they are pretty much all x86 based. OK, they may have a different, fanless power supply and a few other things. But why have these things not dropped in price while increasing their power, as PCs/laptops did?

I do understand these devices have flash memory and several other things your regular PC does not have. But in my opinion there is no reason why they are so underpowered and, at the same time, overpriced compared to regular PCs.

And that, in a way, slows down the adoption of server-centric solutions. Note I am not using the term ‘Server Based’. Server centric means any solution that runs off a centralized server model, whether that is a TS/Citrix farm or a VDI-hosted solution.

With overpriced and underpowered clients at the user end, the overall experience is reduced and, more than that, IT starts to question the benefit of putting on someone’s desk something that costs as much as or more than a full-blown PC. OK, I do understand things like power consumption and so on. But these are usually variables most IT people are not even aware of. And regardless of all these arguments, even if it sips 1/10 of the power a PC does, does it need to cost more and be that underpowered? I am sure there is room for improvement in thin clients. But as I see it, the manufacturers are trying to maximize their gains by selling an outdated design that has not changed for years! Why invest money to come up with the killer thin client that provides a PC-like experience for SBC/VDI environments, uses way less power than a PC and has no moving parts, if they can still sell that 10-year-old design and make way more money?
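To put some (completely made-up) numbers behind that argument, here is a quick sketch comparing three-year ownership costs; every figure below is an assumption for illustration, not vendor data:

```python
# Hypothetical 3-year cost comparison: thin client vs. full-blown PC.
# Every number here is a made-up assumption for illustration only.
HOURS_PER_YEAR = 8 * 250   # 8h/day, 250 working days
KWH_PRICE = 0.10           # assumed $/kWh

def three_year_cost(price_usd: float, watts: float) -> float:
    """Purchase price plus three years of electricity."""
    energy_kwh = watts / 1000 * HOURS_PER_YEAR * 3
    return price_usd + energy_kwh * KWH_PRICE

pc = three_year_cost(price_usd=400, watts=150)   # assumed cheap PC
thin = three_year_cost(price_usd=450, watts=15)  # assumed pricier thin client

print(f"PC over 3 years:          ${pc:.2f}")    # -> $490.00
print(f"Thin client over 3 years: ${thin:.2f}")  # -> $459.00
```

Even sipping a tenth of the power, the thin client barely breaks even here because of the purchase price premium, which is exactly my point.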

Well, what do you think? To me, if this industry wants to become mainstream in what it does, it must change. The same way companies like Citrix and Microsoft are changing the way we access our applications today.

CR
