I present to you: the iCluster.

On my quest for a great portable lab that I can take with me anywhere, airports included, I did quite a bit of research and ended up with something I have never seen before. So here are the details, in case you are looking for the most portable lab out there that still lets you do all sorts of fancy stuff, with kick-ass performance.

The iCluster, as I named it, is actually a pretty damn amazing little thing I built. Here are the specs:

1 x Synology DS412+ NAS. Has four drive bays and two gigabit interfaces. Does iSCSI, NFS, etc. 
4 x Samsung 840 EVO 1TB. Loaded these four 1TB SSDs into the Synology for roughly 3TB of usable space (they are in Synology Hybrid RAID with single-drive redundancy, which with four equal drives works out like RAID 5: usable space is about (N-1) x drive size).
2 x Apple Mac Mini 2.3GHz, i7 Quad-Core with 16GB RAM each. The reason for the Mac Minis is simple: they are DAMN small and portable. Two of these with 16GB each give me 32GB of usable RAM for virtual machines. Not the best, BUT for a portable lab I can carry around, with each Mini weighing only a couple of pounds, nothing beats that.
1 x HP 1810-8G v2 switch. Small 8-port Gigabit switch with full LACP support, so I can trunk two ports to the Synology for 2 Gbit/s of aggregate throughput each way (4 Gbit/s if you count full duplex). Not bad.
1 x Apposite Linktropy Mini. WAN emulator. Lets me emulate any combination of latency and loss on a link of up to 100 Mbit/s, so I can quickly see how things will behave for users connecting over a 3G connection, from a remote location, over satellite, etc. (see the quick sketch right after this list). Impressive gear.
1 x D-Link DIR-600L Router. Used to provide wireless access to the cluster and also a WAN connection to the outside, in case I want to plug something into it.
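
Since I mentioned the WAN emulator, here is the quick back-of-the-envelope sketch referenced in the Linktropy item above, showing why latency and loss matter so much to remote users. It uses the well-known Mathis et al. approximation for steady-state TCP throughput, MSS / (RTT x sqrt(loss)); the RTT and loss figures below are made-up profiles of the kind you would dial into the emulator, not measurements from my gear.

```python
# Rough upper bound on steady-state TCP throughput for a given RTT and loss
# rate, using the Mathis et al. approximation. Illustrative numbers only.
from math import sqrt

def tcp_throughput_mbps(rtt_ms: float, loss: float, mss_bytes: int = 1460) -> float:
    """Approximate achievable TCP throughput in Mbit/s."""
    rtt_s = rtt_ms / 1000.0
    bytes_per_s = (mss_bytes / rtt_s) / sqrt(loss)
    return bytes_per_s * 8 / 1_000_000

# Hypothetical link profiles: (label, RTT in ms, packet loss rate)
for label, rtt_ms, loss in [("LAN", 1, 0.0001), ("3G", 150, 0.01), ("Satellite", 600, 0.005)]:
    print(f"{label:9s} RTT={rtt_ms:4d} ms, loss={loss:.2%} -> ~{tcp_throughput_mbps(rtt_ms, loss):.1f} Mbit/s")
```

Even before the remoting protocol enters the picture, the raw TCP math already shows why a satellite or 3G user's experience falls off a cliff compared to the LAN.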

All of this is loaded into a custom-built wooden case with front and back doors for easy access to the devices and a handle on top. I can easily carry it with me on trips and even take it on airplanes as carry-on luggage. I also put a small power bar inside, so there is only a single power cord going to the outside.

On the back you can see the antenna for the router and some colour-coded CAT6 Ethernet ports. Those let me patch things however I want: send a connection through the WAN emulator, go straight to the HP switch, route the wireless traffic through the Apposite, etc.

The case is being finalized as I post this; hopefully I will have pics of the whole thing soon. The plan is to wrap it in leather for a finish similar to the Marshall Stanmore speaker.

Total cost, including the Apposite Linktropy, will be in the $7,500 range in case you are wondering.

On the software side of the house I decided to go with Windows Server 2012 R2 Hyper-V for now. If it becomes a PITA on the Mac Minis I will switch to ESXi as I know for sure it works great on the Mini (I have another one with ESXi – rock solid). This gives me all the goodies (live migration on the cluster, etc) in such a small and light form factor.

So next time I am at BriForum, be prepared for some interesting live demos on the iCluster.

 

CR


Intel buys McAfee. Does it matter?

As you all have heard today, Intel bought McAfee. And it will pay well for it.

So the question now, at least from my end, is how this can possibly affect or help the virtualization market, the one you and I live and breathe in.

First of all, we all know that for VDI to take off as a mainstream solution (which it is NOT as of today in 2010 – and will not be for a long time) it must get cheaper. By cheaper I mean being able to cram more instances per server. This can be achieved in several ways, like using the latest, greatest and fastest CPUs you can get with as many cores as possible (and of course using quick-ass disk subsystems like Fusion-io, caching/dedup like iLIO, tons of RAM, etc.).
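
Just to make the "cheaper means denser" point concrete, here is a trivial back-of-the-envelope. Every number in it is made up for illustration, not from any real quote; the point is simply how cost per desktop falls as you cram more instances onto a host.

```python
# Purely illustrative: cost per desktop is driven by how many instances you
# can pack onto one host before CPU, RAM or storage IOPS run out.
host_cost = 9000.0  # hypothetical fully loaded host (hardware + licensing)

for desktops_per_host in (40, 60, 80, 120):
    print(f"{desktops_per_host:3d} desktops/host -> ${host_cost / desktops_per_host:7.2f} per desktop")
```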

The point is, in the CPU space there is nothing preventing other vendors like AMD from getting to the level Intel is at. In many cases in the past AMD actually delivered better silicon than Intel. So Intel needs a way to differentiate itself from its competitors. Bringing stuff like AV closer to the HW is one way of doing this. Good for Intel.

And of course getting this OUT of the VMs will for sure increase scalability. That is the reason McAfee and others have been coming up with appliances and lightweight agents (to run in the VM) to offload all that work outside the virtual environment.

The main question now is really how Intel will pull this off, not being a software company. How will it keep McAfee going? Of course, I think it is just way too soon for any analyst to say anything about this. Historically Intel has not managed acquisitions like this well, but none were on such a scale or with the reach McAfee has (good or bad, they do have customers and a name in the industry, especially after that .DAT file fiasco that screwed up more computers in a day than any virus they were trying to protect against).

In the near future I do not expect to see anything embedded at the HW level. This is for sure something that will come way down the road, as you need to come up with something that can be leveraged by anything running on top of that HW. This means you either change the OS that will be running so it benefits from these new HW extensions (like vendors did when using the virtualization components exposed by Intel and AMD on their CPUs) or, in this particular AV case, you get an agent running on the OS/VM. Not an easy task considering the number of hypervisors and OSs now available.

That leads us to a very important point. To minimize this and make things much easier, would it not make sense for Intel to grab a hypervisor vendor now? Given the three main players in this space today, VMWare, Microsoft and Citrix, I am sure the low-hanging fruit here is Citrix, and I even wrote about this ages ago in the post ‘Intel buys Citrix’.

This would give Intel a huge advantage over any other company in the virtual Wintel ecosystem. Controlling the CPU, the hypervisor that runs on it and extra features like AV would give you the ultimate virtualization platform, where your solution runs better or has more features than anyone else's. Example? All the fancy HW features are exposed only to your own hypervisor (like Microsoft is doing with RemoteFX, only available to Hyper-V hosts), and of course your hypervisor will scale much better than the competitors' as you own, and know everything about, the underlying HW platform. Then the next logical step would be to acquire a graphics company like NVidia (as AMD owns ATI) and leverage all that into the platform, exposing it to the virtualization layer. Then buy a good storage vendor and a management/layering one and they are all set.

Sure, such a scenario could potentially bring Intel a lot of issues from a legal perspective, as it did to Microsoft when it became what it is today. But it would certainly simplify the virtualization market a lot (and yes, I know, it would lock everyone into their platform – which may not be a terrible thing, as Apple has shown the sceptics with its iOS ecosystem).

The bottom line is that this acquisition will for sure help the virtualization space in the long run (do not expect miraculous benefits overnight), but I see it as just the tip of the iceberg of what is potentially coming down the road from Intel.

Feds, you better keep an eye on Intel.

CR


RemoteFX – Supported Video Cards

Ok, we have been trying to get RemoteFX working, and even though we knew not all video cards are supported, we were not able to find, in one single spot, a list of the cards supported with the Windows Server 2008 R2 SP1 beta release.

After some digging around in multiple locations, here you have it:

ATI: FirePro™ V5800, V7800 and V8800 series professional graphics

NVidia: Quadro FX 3800, 4800, 5800 and Quadro Plex S4.

That is it. You also need to disable the onboard video card on the machine that will be running Hyper-V.

Hope this saves you the time I had to spend looking for this information. 🙂

CR


XenDesktop 4. Not perfect.

I know tons of people will email me or comment saying I am a prick, an idiot and things along those lines. But after using Citrix XenDesktop 4 for a while, I have to give some feedback for the tons of people who are probably trying the product the same way I am. That means the small shop willing to go for the free 10 licenses for XenDesktop 4.

First of all, some background information. I have been working with Citrix products for at least 15 years now. Yes, that long. I have seen it all. The good, the bad, the ugly. Citrix is indeed a company capable of great feats and, at the same time, of bottom-of-the-barrel crap. So I can safely say I am pretty well versed in Citrix and its product lineup.

On the virtualization front, even though I am no VMWare vExpert, I have been using their stuff since VMWare Workstation 1.0. I have used GSX, ESX, VMWare Server, VMWare Fusion, VMWare Player and so on, and deployed some decent-sized virtualization environments too (200+ servers). Pretty well versed in it, with decent knowledge of the underlying components.

Summing up: I am not as stupid as I look.

So what have I been trying to achieve? Very simple (and cheap). At home I have a Dell PowerEdge T105 box with 8GB RAM and 2x500GB disks (RAID 1) with dual NICs, connected to a Dell PowerConnect 2824 switch. The T105 runs ESXi 4.X (free version) and has always worked fine. Great product for sure. And yes, I do use memory overcommitment and for my needs it is simply perfect, with no performance issues whatsoever. Before you ask, yes, that is the main reason why a Citrix CTP decided to use VMWare ESXi instead of XenServer. The lack of overcommitment, for ME, is a show stopper. I wrote more about the topic here.

Back to the topic. As we are indeed a small shop, this little guy runs everything for us: a 2008 DC, a 2008 web server, a 2008 box with Exchange 2007, plus two XP VMs. Actually, the post you are reading is hosted on that 2008 web server, running on this ESXi box.

So I decided to test something very simple. Last year at BriForum I got a freebie from Wyse (remember, I am the current champion of the Geek Out show that happens at every BriForum): a Windows XPe terminal in a notebook form factor. I decided to give it a try as a thin client for a XenDesktop 4 solution (by the way, I had SEVERAL issues with the stupid Wyse X90 – I will probably post about those later, so buyer, BEWARE).

I got the free XenDesktop 4 license (good for 10 users) and followed the whole installation guide. Set up another VM on my ESXi (2003 Server with IIS, etc., part of my 2008 AD) and also set up a Windows 7 VM (1GB RAM). The setup could be easier and certain things make no sense whatsoever. But I somehow expected that to be the case coming from Citrix (they are a bunch of smart people who sometimes find some very weird and cumbersome ways of doing things).

Once I had everything up and running I hit the first problem: the Windows 7 VM would simply hang after logging in from the DDC Web Interface. Looking at the ESXi console I could see the VM there, up and running, but I could not log in (when it was shown as available – more on that later). After I ranted a little bit on Twitter, someone facing the same issue mentioned a problem with the video driver and a possible workaround. I tried it and that issue was indeed fixed.

Great. Or so I thought.

Not so fast there, fellas. With that fixed, I now had to deal with a more serious issue. When I log in to the DDC Web Interface, it tries to start my Windows 7 VM but fails and throws an error. I then go to the vSphere console, where I can see the VM is perfectly fine and I can even log on to it. Once I do that – almost like waking it up from some sort of ‘standby’ – the Web Interface/DDC works!

I discussed the issue with several other people running a similar setup (small shop with free ESXi) and they all face some sort of issue with XenDesktop 4. Apparently if you use Windows XP, which I have not tried, it works. But that I refuse to do, as I left XP for good; more than that, as my customers are all considering a similar Windows 7 approach, I must stay on the latest and greatest technology. So I do not care if it works with XP. Citrix does say Windows 7 is supported, and I cannot find anything saying the free ESXi is not supported, as Mr. Joe Shonk mentioned (so I assume it is, for this unmanaged-desktops case).

The bottom line for me is simple. When it works, XenDesktop 4 is a great product. But there are still issues, not only at the core but in other components from what I hear (Provisioning Server issues, XenServer reliability problems and so on), and what amazes me is that some of these, like the first one I hit, are apparently known issues. If that is the case, why not add a readme file that explains them and the workarounds? Or why not fix that crap?

I do see the power of XenDesktop and where it can take VDI once it is integrated with XenClient. But for now, Citrix, please let me know what needs to get done to make this work. I am sure several small businesses would jump on the VDI bandwagon with the free 10 licenses everyone can get for XenDesktop 4 – but it must work.

If I find a solution or if Citrix decides to take a look at my problem I will let you guys know.

If I disappear after this post you know Citrix got me. For good.

CR


Platform Agnostic. Good or bad?

Today we can find several vendors claiming to be ‘platform agnostic’. One typical example in the SBC/VDI space is Quest’s vWorkspace, which can deliver applications from terminal servers or hosted desktops regardless of the virtualization solution being used.

This means your hosted desktops can be running on any hypervisor: VMWare, Citrix, Microsoft and a bunch of other ones, I am sure. On paper, this sounds great.

But when talking to some large enterprise customers, I realized that because you are now relying on multiple vendors to run your solution, support may become a big problem.

For example, if your XenServer environment is not performing as expected, where exactly is the issue? On your SAN from HP? On the trunking between your IBM blade chassis and your Cisco core switches? On XenServer itself? On some specific VM running under XenServer?

To find where the issue is you may have to call 10 different vendors. On top of that, finding the problem does not mean it is solved. One vendor may say the problem exists because another vendor is not implementing the specification for a certain protocol/standard properly, and blame them for the issue. The bottom line is you may have a support nightmare on your hands.

If you can have everything (or most things) under one roof, that means one single place to call and to blame. No more telling your boss ‘it is vendor X’s fault according to vendor Y, but vendor X says it is vendor Y’s fault’.

Reminds me of the early days of Citrix when Microsoft would blame Citrix and Citrix would blame Microsoft for an application not working as expected. Great times indeed.

Back to the topic: is this what brought Cisco to the blade world with its Unified Computing initiative? In the end, do single-vendor solutions bring value to the table?

I guess there is no simple answer to that question. I can see the value of having everything under one roof and not having to deal with multiple vendors. But not being tied to a single vendor also brings flexibility to the table and kind of avoids a monopoly.

As I am not flexible…

CR


Memory Overcommitment. Bluff or Real Requirement?

In my humble opinion, it is a real requirement. Now let me explain why.

As a real-world example, you guys have us, WTSLabs. When we decided to move to a virtual world, I personally looked at most of the offerings available: Microsoft Hyper-V 2008 R2, Citrix XenServer and VMWare ESXi (considering our size, the free versions would do the trick for us for sure). The deciding factor that took us down the VMWare ESXi route was the simple fact that it can overcommit memory.

Looking at how our VMs were performing, most of the time they were sitting idle, consuming few resources (that was the case in our environment – yours may be completely different, and in that case overcommitment may not be for you).

No matter what anyone else says, if you all remember, years ago one of the main driving factors (or sales pitches, if you will) towards virtualization was consolidating your X physical servers onto a handful of physical hosts. I remember seeing, several times, sales/pre-sales guys going into offices and explaining that most of the time the customers' servers were sitting there doing nothing, and that thanks to that, bringing all these ‘idle’ servers onto one single host was possible.

I am not saying that is the case with every server and/or every environment. For sure there are plenty of SQL and Exchange boxes out there that are constantly being hammered, working hard. But for tons of companies out there, especially in the SMB market, it is almost guaranteed that this is not the case.

Back to our own scenario: we now run six VMs. The resource hog is our Exchange 2007 SP2 box (what a surprise…), set up with 4GB. Then we have a domain controller, a web server, a TS (running Windows Server 2008 R2) and two XP VMs. Monitoring them on a regular day, they are indeed idle most of the time, not using many resources. I do not remember all the numbers, but I know we are overcommitting memory, though not by a lot (probably one to two gigs – our Dell server has 8GB).
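
To put a number on what "overcommitting by a gig or two" looks like, here is a trivial sketch. Only the Exchange allocation (4GB) and the 8GB host total are taken from the paragraph above; the other per-VM allocations are hypothetical round numbers for illustration.

```python
# Hypothetical per-VM memory allocations (GB). Only Exchange (4GB) and the
# 8GB host total come from the post; the rest are illustrative guesses.
vm_allocations = {
    "Exchange 2007 SP2": 4.0,
    "Domain controller": 1.0,
    "Web server": 1.0,
    "TS (2008 R2)": 2.0,
    "XP VM #1": 0.75,
    "XP VM #2": 0.75,
}
host_ram_gb = 8.0

allocated = sum(vm_allocations.values())
print(f"Allocated to VMs : {allocated:.2f} GB")
print(f"Physical RAM     : {host_ram_gb:.2f} GB")
print(f"Overcommitted by : {allocated - host_ram_gb:.2f} GB (ratio {allocated / host_ram_gb:.2f}:1)")
```

With guesses like these you land right in the "one to two gigs over" range, which is exactly the kind of modest overcommitment that idle-most-of-the-time VMs absorb without anyone noticing.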

Like WTSLabs, there are many other companies out there in the same boat. And for them, not being able to overcommit may mean buying another server. For large enterprises another box may be just a drop in the ocean. Not for us. 🙂

Performance-wise, there is nothing to complain about so far; everything works great and feels responsive. To me, the reality is that there will be cases where overcommitment is indeed not a good idea and where performance could suffer if it is used. But on the other hand, there will be way more cases where overcommitment will not be an issue and everything will work great, saving companies money.

The reason Microsoft and Citrix, as of today, downplay memory overcommitment and all the technologies behind it (you can read more here) is, in my mind, simple: they do not have it.

Will they add it? I am pretty sure they will, and if they do there are two possible reasons for that:

1. They added a feature they consider useless, just because they are right and the world is wrong.
2. They added it because it really is important and useful.

I will go with the second option. And once they do it, I may take a look at Hyper-V and XenServer again for our needs.

CR


Thank you VMWare.

This time I will be quick. Over the weekend I upgraded my VMWare ESXi environment at home and, on that end, everything worked smoothly (yes, I know, most of the time upgrading VMWare stuff is not really that easy). But as with anything that comes out of 3401 Hillview Ave, Palo Alto, CA, something had to go wrong.

VMWare had MONTHS to fix the freaking issue with their vSphere Infrastructure Client, or whatever it is called now (as they now copy everything Citrix does, they started changing names – word on the street is VMWare ESXi will be renamed VMWare SEXi and vSphere will become vOval), when running on Windows 7.

Of course I am running Windows 7. Windows XP, according to my daughter, is “so last year”, so I moved everything I have to Windows 7. Once you are in Windows 7 land, the VIC (Virtual Infrastructure Client Crap) does not work anymore, and when you try to log on it throws one of those really useful, easy-to-understand error messages. Why not show a simple window that says “You are screwed. Thanks for using VMWare.”?

Thank the Lord there is a fix for what lazy VMWare screwed up. You need to grab a DLL from somewhere and change a config file to get that crap working again. It is all explained here.

Once I did that, everything was up and running again. And on Windows 7.

Please do not tell me you need Windows XP Mode on Windows 7 to fix this crap, VMWare.

Virtualization is cool and great. But using it to fix shit you created in the first place is not cool.

Really.

CR


Keeping yourself up to date.

With so many changes happening in the Virtualization/Server Based Computing space these days, I have noticed that most technical people, especially IT staff, are having a very hard time trying to keep up with everything going on out there: from simple things like VMWare Studio 2.0 and Citrix Workflow 2.0 all the way to Citrix XenApp 5.0 Feature Pack 2.

And just for the record, in the middle you have all the application virtualization stuff (Thinstall, App-V, Altiris SVS, etc.), OS virtualization (ESX, Hyper-V, XenServer, VirtualBox, etc.) and things like Windows Server 2008 R2 with all its new TS/VDI features, Quest vWorkspace and so on!

How can you make intelligent decisions about all this when the landscape is changing at such a fast pace? After some internal discussions, and given my passion for speaking and training people, I decided to create what I call a ‘Crash Course’ on server-centric technologies.

In this 5-day training we cover all of these. Sounds crazy, eh? Yep. But I am sure it will be a great course for everyone out there who wants to get to the bottom of it all and save a ton of time figuring out what all these things do, how they compare and what they can do for them.

As you can imagine, the idea here is not to give you a deep dive into any of these so you become a guru overnight. Instead, as mentioned, we will give you the no-BS, real-world view of them all, based on the work we have been doing with these technologies and products for years.

If you are interested, please check our training section at http://www.wtslabs.com/training.asp.

And as always feel free to email me directly with suggestions, opinions and so on!

CR


Intel buys Citrix.

WRITTEN BACK IN 2009 – Funny eh?

No, I am not someone high up in the industry who knows something no one else knows. And before you tweet about this or send this post to your buddies: it is not true.

Last night, while working and reading, the idea of Intel buying a virtualization vendor came to my mind, and I decided to take that line of thought and exercise it a little bit. So here is what I think.

First of all, the point here is not to debate whether this would really make a lot of sense from a business (read: $$$) perspective. I am sure there are several arguments for why this would be great and why it would suck. So feel free to start that discussion in the comments. 🙂

The main idea is what this acquisition could do to the market, not only in terms of virtualization but on the processor/CPU/chipset side as well, and how all that could push ‘server-centric’ solutions combined with local hypervisors.

So imagine this acquisition goes through and no anti-trust crap happens (good chance it would in this case).

When you own the hardware platform, it is much easier to get the most out of whatever you run on top of it. Or any platform, for that matter. For example, I am sure Microsoft Office will always have a competitive advantage: it runs on top of Windows, and as Microsoft owns Windows, I am sure there are several tricks up its sleeve that make Office simply better, faster or more efficient than the competition. Same for a graphics card manufacturer: they know their hardware and are surely the ones who know how to get the best out of it.

Or let’s take a big real-world example here: the Xbox 360. By designing the hardware platform itself, focused on what that platform is supposed to do, you get the most out of it. And owning both layers (the 360 hardware and its software – OS/SDK), nobody is better placed than Microsoft to release games that keep pushing the envelope, getting the most out of the hardware at an early stage. Halo and Project Gotham (even on the original Xbox) were very good examples of that. Several others exist on the 360 to prove the point.

Now back to Intel/Citrix. If Intel, the chipset/CPU manufacturer, acquires someone like Citrix, which now has a stronghold in the virtualization market, Intel could do some very interesting things:

– As they would own both layers, I am sure they could make the hypervisor get the most out of the hardware.
– Going the other way around, with a much better understanding of the software layer, the hardware platform could be changed to provide the best performance/transparency to the hypervisor.
– A new chipset with the hypervisor built in could be delivered and loaded on every new PC from now on. This would put a hypervisor on every computer, probably bringing virtualization to the masses (I am not saying this is good or bad; I am just stating that it would indeed put a hypervisor on lots of machines around the globe).
– With both layers under one umbrella, what would prevent Intel from having a hypervisor that is completely optimized for Intel chipsets and would never be as good on AMD hardware, for example? Or, the other way around, no other hypervisor would ever be as good/efficient on Intel hardware as Intel’s own (let’s name it Xentel Server).

Thinking about that made me realize this could be the downfall of several big names out there, AMD and VMWare for example. Why would I run VMWare if Xentel Server were way more scalable and faster? Why would I buy machines with AMD CPUs if the ones with ‘Intel Inside’ not only had a built-in hypervisor but also gave way better performance when I actually used that hypervisor? What would this do to Hyper-V, for example? Given that SCVMM is modular, Microsoft could easily adopt Xentel Server and simply write all the management goodies needed…

There are several other things such an acquisition could potentially bring to the table that I certainly did not discuss in this article.

Will this happen? I would tend to say no, it will not happen.

But it would be a VERY interesting thing to see. So for now, all we can do is discuss how such a thing would change the market as we know it. 🙂

CR


Quest for VDI

I do respect Brian. A lot. He was surely the guy who brought together the whole TS community of the time (Ron Oglesby, Shawn Bass, Tim Mangan, Jeff Pitsch, Benny Tritsch, myself and several others) and kept it that way. BriForum, the child of endless discussions among everyone mentioned above, is now a well-known conference and a great place for everyone looking for SBC/TS/Citrix/VDI/virtualization info.

If you read his website, he is always saying something (well, at least lately) about VDI. Even the discussions we have with Microsoft and its RDS team (sorry, most of that is under NDA, otherwise I would post about it here) are now moving towards VDI. For God’s sake, even the lady who comes to clean my house every once in a while is now talking about VDI. So in a way it is becoming mainstream. Wait. I do not mean people are actually using it. I mean it is mainstream in terms of discussing it and talking about it. Not implementing it.

The problem is, even in these discussions with all the other Microsoft MVPs (I think everyone mentioned above is part of that group), I am not sure exactly what every single one of them thinks about it. So today, thinking about it, I decided it would be a good idea to talk to each of them one-to-one, record it all, and then publish these conversations here on our blog every day/week. Once I have all the recordings I will put together some sort of final analysis summarizing what all these guys said and publish it here.

The reason is that, in a way, I want to know what all these industry big shots think and see if they actually agree on anything. 🙂

Secondly, I want to stop talking about VDI for at least six months. Why six months? Well, that will bring us to 2010, the year when, according to Brian, VDI replaces the whole world and we throw everything we know/have out of the window.

And the most important reason: that will give me a six months break discussing VDI with the cleaning lady.

CR
