Intel buys McAfee. Does it matter?

As you all have heard today, Intel bought McAfee. And will pay well for it.
So the question now, at least from my end, is how this can possibly affect or help the virtualization market, the one you and I live and breathe in.
First of all, we all know that for VDI to take off as a mainstream solution (which it is NOT at this point in 2010 – and will not be for a long time) it must get cheaper. By cheaper I mean being able to cram more instances per server. This can be achieved in several ways, like using the latest, greatest and fastest CPUs you can get, with as many cores as possible (and of course using blazing fast disk subsystems like Fusion-io, caching/dedup like iLIO, tons of RAM, etc).

The point is, on the CPU side there is nothing preventing other vendors like AMD from getting to the level Intel is at. In many cases in the past AMD actually delivered better silicon than Intel. So Intel needs a way to differentiate itself from its competitors. Bringing stuff like AV closer to the HW is one way of doing this. Good for Intel.

And of course getting this OUT of the VMs will for sure increase scalability. That was the reason why McAfee and others were coming up with appliances and lightweight agents (to run on the VM) to offload all that work outside the virtual environment.

The main question now is really how Intel will pull this off, given it is not really a software company. How will they keep McAfee going? Of course I think it is just way too soon for any analyst to say anything about this. Historically Intel has not managed acquisitions like this well, but they were never of this scale or with the reach McAfee has (good or bad, they do have customers and a name in the industry, especially after that .DAT file fiasco that screwed up more computers in a day than any virus they were trying to protect against).

In the near future I do not expect to see anything embedded at the HW level. That is for sure something that will come way down the road, as you need to come up with something that can be leveraged by anything running on top of that HW. This means you either change the OS that will be running so it benefits from these new HW extensions (like vendors did when using the virtualization components exposed by Intel and AMD on their CPUs) or, in this particular AV case, you get an agent running on the OS/VM. Not an easy task considering the number of hypervisors and OSs now available.

That leads us to a very important point. To minimize this and make things much easier, would it not make sense for Intel to grab a hypervisor vendor now? Given the three main players in this space today – VMware, Microsoft and Citrix – I am sure the low hanging fruit here is Citrix, and I even wrote about this ages ago in the post ‘Intel buys Citrix’.

This would give Intel a huge advantage over any other company in the Virtual Wintel ecosystem. Controlling the CPU, the hypervisor that runs on it and extra features like AV would give you the ultimate virtualization platform, where your solution runs better or has more features than anyone else’s. Example? All the fancy HW features are only exposed to your own hypervisor (like Microsoft is doing with RemoteFX, only available to Hyper-V hosts) and of course your hypervisor will scale much better than the competitors’, as you own and know everything about the underlying HW platform. Then the next logical step would be to acquire a graphics company like NVidia (as AMD owns ATI) and leverage all that into the platform, exposing it to the virtualization layer. Then buy a good storage vendor and a management/layering one, and they are all set.

Sure, such a scenario could potentially bring a lot of issues to Intel from a legal perspective, as it did to Microsoft when it became what it is today. But it would certainly simplify the virtualization market a lot (and yes, I know, it would lock everyone into their platform – which may not be a terrible thing, as Apple has shown the sceptics with its iOS ecosystem).

The bottom line is that this acquisition will for sure help the virtualization space in the long run (do not expect mystical benefits happening overnight), but I see it as just the tip of the iceberg of what is potentially coming down the road from them.

Feds, you better keep an eye on Intel.

CR


RemoteFX performance over lossy networks

Just a preliminary and quick video showing RemoteFX performance when packet loss is present, with and without our IPQ protection.

We have been testing it under several conditions, with different latencies and loss levels, and will be publishing the full results soon. We also have some data on how much bandwidth RemoteFX uses. Just as a quick example, WMV-HD playback takes close to 30 Mbit/s; running an app like Google Earth, around 9 Mbit/s.

My personal take after testing it: RemoteFX CAN be used over the WAN as long as you know exactly what your apps are (meaning WMV-HD playback is probably a no-go) AND you guarantee loss is minimized, as RemoteFX does suffer from it, to the point of making certain applications unusable.

CR


RemoteFX First Impressions

As I did not have much time to test RemoteFX extensively, here are my first impressions and how we got it to work.

First of all, you MUST get a compatible video card. Not every card will work with Windows Server 2008 R2 SP1 with Hyper-V, which is what you need to get your Windows 7 VMs (with SP1, of course) working with RemoteFX.

I posted about it before. You can read the list of supported video cards here.

What did we get?

– HP desktop with a six-core AMD CPU and 8GB RAM.
– FirePro V5800 video card (we also tried the unsupported Quadro FX 580 which, by the way, does work too).

Initially I simply tested the Windows 7 VM by connecting from the Hyper-V host itself, but later I got another Windows 7 SP1 box and used that one to connect to the VM.

Performance is decent, I must say. I tried playing some Windows Media HD videos; make sure you disable multimedia redirection by adding videoplaybackmode:i:0 to the .RDP file (save the RDP connection to the desktop and open it using Notepad – see the sample .RDP lines after the policy steps below). It is also very important that you set the policy for RemoteFX (as I was not sure where to set it, I set it both on the client and on the VM itself). It is described here:

To set the experience index for connections using RemoteFX

  1. Log on to the client computer as a member of the local Administrators group.
  2. Click Start, and in the Search programs and files box, type gpedit.msc and then press ENTER.
  3. Navigate to Computer Configuration\Policies\Administrative Templates\Windows Components\Remote Desktop Services\Remote Desktop Session Host\Remote Session Environment.
  4. Double-click Set experience index for connections when using RemoteFX.
  5. Select the Enabled option.
  6. In the Screen capture rate (frames per second) box, click Highest (best quality), and then click OK.
  7. Restart the client computer.
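
As a reference, here is roughly what the relevant lines of the saved .RDP file could look like. This is just a sketch – the real file your client saves will contain many more settings, and the address and resolution below are placeholders; videoplaybackmode:i:0 is the line that disables multimedia redirection, and screen mode id:i:2 simply means full screen:

  full address:s:my-win7-vm
  screen mode id:i:2
  desktopwidth:i:1920
  desktopheight:i:1080
  videoplaybackmode:i:0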

The key thing to understand here is why you may need RemoteFX and what it costs in bandwidth. For example, during our tests, playing the WMV-HD clips used up to 30 Mbit/s, so as you can see it is VERY bandwidth intensive. For comparison, running Google Earth in DirectX mode used around 9 Mbit/s. So basically the bandwidth will of course depend on the application being used, and the same goes for how intensive CPU/GPU utilization will be.

I would expect applications like AutoCAD to use way less bandwidth than something like WMV-HD, and what we will be testing next is actually using RemoteFX over a typical home (cable/DSL) connection, simulated in our lab. By typical I mean 10 Mbit/s down / 1 Mbit/s up, with 40-50 ms latency and some packet loss, probably in the 1% range (or a little more due to bursty loss). Given the first results we have seen, I am confident RemoteFX can indeed work over the WAN (at least bandwidth wise) depending on the applications.
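
If you want to reproduce that kind of link yourself, here is a minimal sketch of how such a connection could be emulated on a Linux box sitting between the client and the host, using tc/netem. This is only an illustration of the link profile described above, not necessarily the tooling we use in our lab, and eth0 is assumed to be the interface facing the RemoteFX client:

  # ~45 ms delay with a bit of jitter and ~1% loss towards the client
  tc qdisc add dev eth0 root handle 1: netem delay 45ms 5ms loss 1%
  # cap the downstream at 10 Mbit/s underneath the netem qdisc
  tc qdisc add dev eth0 parent 1:1 handle 10: tbf rate 10mbit burst 32kbit latency 50ms
  # the 1 Mbit/s upstream cap would be applied the same way on the interface facing the host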

Yes, before Brian Madden sends me a tweet or leaves a comment here saying ‘MS says RemoteFX is LAN only’, I still want to make the point that IMHO anything that is LAN only already has its fate determined. DOA. See my post about this here.

And still on the performance side, what we have seen in a nutshell is this: RemoteFX does work great BUT it is NOT the same as local. Simple things like Flip 3D (using Windows key + Tab) are NOT as smooth as running them locally. Even Google Earth (which works just fine, by the way) is NOT as smooth. But they both work, and work fine considering you are over RDP. For a BETA release, we can expect it will be tweaked and improved even more before it hits the market.

As a sidenote, keep in mind there IS a bug in SP1 that throws a message in the RemoteFX event log about CPU encoding being used with ATI cards. It is a known issue and has apparently been fixed in later builds, which of course I have no access to. But for single-VM testing like ours it is not really a problem (I am after experience testing and not scalability – I will leave that to people with more time and resources on their hands, like Ruben and Benny 🙂).

As soon as I have more results and some nice videos showing RemoteFX, I will post them here.

CR


RemoteFX – Supported Video Cards

Ok, we have been trying to get RemoteFX working and, even though we knew not all video cards are supported, we were not able to find in a single spot a list of the ones supported with the Windows Server 2008 R2 SP1 Beta release.

After some digging around at multiple locations here you have it:

ATI:  ATI FirePro™ v5800, v7800 and v8800 Series professional graphics

NVidia: Quadro FX 3800, 4800, 5800 and Quadroplex S4.

That is it. You also need to disable the onboard video card on the machine where you will have Hyper-V running.

Hope this saves you the time I had to spend looking for this information. 🙂

CR
