EUC Podcast

Ladies and Gentlemen,

Just want to let you guys know that I am now part of the EUC Podcast, with some of the best guys in the industry (I am not one of them, but it is good to be part of a team full of them). The first episode, which of course I had to miss due to a project, is now available. Here is the summary:

“The End User Computing Podcast Episode #001 is Now Live. Click here to subscribe and get the latest Podcast episodes downloaded directly to your Podcast application of choice.

The End User Computing Podcast is a community-driven podcast for IT Professionals. The content covered on the EUC Podcast is primarily geared toward community support and enablement for application, desktop, and server virtualization technologies. Comments and community interactions are strongly encouraged to keep the authors honest and non-biased toward the vendors and technologies being covered. While the EUC Podcast is an independent, community-driven podcast, the SMEs’ vendor preferences and strengths may be presumed based on active projects and topic areas covered. As unaffiliated technologists, the EUC Podcast encourages the authors to discuss a wide variety of vendors and products based on current or upcoming engagements.

As always, if you have any questions, comments, or simply want to leave feedback, feel free to use the comments section below!

Thanks and enjoy,

The EUC Podcast Crew.”

Make sure you subscribe and listen to it. Please note it is rated ‘Family Friendly’, but that may not be the case if I am on that particular episode. Just saying, as a couple of years ago I got feedback from one of my BriForum sessions that said something along the lines of ‘I did not pay this much to hear so much profanity’ and ‘This presenter has a 666 mark on his scalp. Never coming back again.’ You were warned.



NetScaler 10.5.X Gateway Wizard

I promise this will be a quick post and hopefully it will save you time troubleshooting your NetScaler 10.5 setup.

I was testing the latest build I could get last week (10.5) and I noticed that the ‘XenApp and XenDesktop’ wizard (which is actually pretty good to get you off the ground) asks you for your StoreFront site details. The problem is, as it asks you for the ‘StoreFront FQDN’, the ‘Store Name’ and the ‘Site Path’, you may be led to think the Site Path is the Store one. Wrong. If you use the Store path, what you get is a blank screen once the user logs in to the gateway. What it actually wants is the ‘Receiver for Web’ path, as seen on your StoreFront console.


The number of people that enter the actual Store path and not the Receiver for Web one is enough to justify a little change to the GUI. So Citrix, instead of labeling it ‘Site Path’, simply use ‘Receiver for Web Path’ and I can bet a lot of people will get it going on the first try, reducing the number of posts on the Citrix forums.

Sure, for people doing NetScaler work day in and day out this may not be a problem, but for the target audience these wizards cater to, usually people starting with NetScalers, this is a needed change IMHO.
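To illustrate, assuming your store is named ‘Store’ (the StoreFront default — your store name, and therefore both paths, may differ), the two paths would typically look like this:

```
Store path (NOT what the wizard wants):       /Citrix/Store
Receiver for Web path (what it really wants): /Citrix/StoreWeb
```

You can confirm the Receiver for Web path on your StoreFront console, under the Receiver for Web node.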




Citrix PVS Image Copy

If you built your Citrix environment properly, you should by now have at least a test environment and a production one. And if PVS is part of your deployment, the same applies to it: a development PVS and a production one.

If you do not see why you would need a test environment, separated from your production one, please stop here. This article is not for you. For sure.

That said, one of the tasks I usually have to deal with is moving images from one PVS environment to another. As mentioned previously, this usually means moving something from a test/development environment to production once it is deemed ‘good-to-go’.

To make my life easier I wrote a simple script that takes a PVS image from a particular environment/store and copies it to another one. It takes care of exporting, copying and importing the vDisk for you. Simple but effective.

Here you have it:


=== BEGIN ===

# Copies a vDisk between PVS environments.
# Cláudio Rodrigues 2014-12-24 V1.0

<#
.SYNOPSIS
CopyvDisk 1.0
IQBridge Inc., 2014. All Rights Reserved.
PowerShell script to move a vDisk from a PVS Farm to another one.
.PARAMETER vDiskName
The name of the vDisk you want to copy.
.PARAMETER SourceEnv
The PVS Environment where your vDisk is currently used.
.PARAMETER SourceStore
The PVS Store where the vDisk you want to copy is located.
.PARAMETER DestEnv
The PVS Environment that will use the vDisk.
.PARAMETER DestStore
The PVS Store where the vDisk you want to copy will be saved.
.EXAMPLE
.\CopyvDisk.ps1 XenApp65V2 DEV Development PROD Production
Copies the vDisk XenApp65V2 from the DEV environment, out of the Development store,
to the Production store in PROD.
.NOTES
Author: Cláudio Rodrigues
Date:   December 24, 2014
#>
Param(
    [Parameter(Mandatory=$True, HelpMessage="The vDisk to be copied")]$vDiskName,
    [Parameter(Mandatory=$True, HelpMessage="Source PVS Environment")]$SourceEnv,
    [Parameter(Mandatory=$True, HelpMessage="Store where the vDisk resides")]$SourceStore,
    [Parameter(Mandatory=$True, HelpMessage="Destination PVS Environment")]$DestEnv,
    [Parameter(Mandatory=$True, HelpMessage="Store the vDisk will be copied to")]$DestStore
)

# Map each environment name to its PVS server (fill in your own server names).
Switch ($SourceEnv) {
    PROD { $SourceServer = "" }
    DEV  { $SourceServer = "" }
}

Switch ($DestEnv) {
    PROD { $DestServer = "" }
    DEV  { $DestServer = "" }
}

# Connect to the source PVS server.
Add-PSSnapin -Name McliPSSnapIn -ErrorAction SilentlyContinue
Mcli-Run SetupConnection -p server=$SourceServer

# Find the source store path on disk and export the vDisk (creates the XML manifest).
$TempPath = Mcli-Get Store -p storeName=$SourceStore -f path
$SourcePath = $TempPath[4].SubString(6)
Mcli-Run ExportDisk -p diskLocatorName=$vDiskName, siteName=YOUR_SITE_NAME, storeName=$SourceStore

# Connect to the destination PVS server and find the destination store path.
Mcli-Run SetupConnection -p server=$DestServer

$TempPath = Mcli-Get Store -p storeName=$DestStore -f path
$DestPath = $TempPath[4].SubString(6)

# Copy the vDisk files over, skipping lock files and the write cache folder.
c:\windows\system32\robocopy $SourcePath $DestPath "$vDiskName.*" /MIR /xo /XF *.lok /XD WriteCache

# Import the copied vDisk into the destination store.
Mcli-RunWithReturn ImportDisk -p diskLocatorName=$vDiskName, siteName=YOUR_SITE_NAME, storeName=$DestStore

Mcli-Run UnloadConnection

=== END ===

This is what you will need to change:
– If you have multiple environments (e.g. Development, Test, Pre-Production, Production, etc.) you will need to add all of them, by their name/code and the PVS server that is part of that environment. This is done where you see the ‘Switch’ statement. In this example I have two environments, named PROD and DEV, and each one has its own PVS server.
– The site name. Replace YOUR_SITE_NAME with the correct name for your PVS Site. This script assumes the Site Name is the same across all environments (I see no reason for it to be different – if you have a reason please let us know in the comments).
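For example, if you had a third environment called UAT, each switch block would just grow by one entry. A minimal sketch (the environment and server names below are made up for illustration; use your own):

```powershell
# Hypothetical example: three environments, each mapped to its own PVS server
Switch ($SourceEnv) {
    PROD { $SourceServer = "PVSPROD01" }
    DEV  { $SourceServer = "PVSDEV01" }
    UAT  { $SourceServer = "PVSUAT01" }
}
```

The same entries need to be added to the ‘Switch ($DestEnv)’ block so UAT can also be used as a target.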

The script takes five (5) parameters:
– vDisk name: the name you have for the vDisk on the PVS console, like XenApp65-v1.
– The source environment: this has to match one of the names/codes you added to the ‘Switch ($SourceEnv)’ statement. In this example I created one called PROD and another one called DEV, each mapped to its own PVS server in the Switch statement. You can name these anything you want. I used PROD and DEV as these make sense to me.
– The source store: under PVS you have your stores where the vDisks reside. Here you pass the store where the vDisk you want to copy is.
– The target environment: the environment (as explained under source environment) the vDisk will be copied to.
– The target store: the store on the target PVS environment you want the vDisk to be copied to.

A couple of comments:

– You must make sure a vDisk with the same name does not exist on the target store; otherwise the import will fail. Yes, I am lazy and I could have added logic to the script to check for that and copy it somewhere else (or delete it) before doing the copy/import. I did not do it. Yes, because I am lazy and today is Christmas Eve.
– There is not much error checking, as the script assumes you know what you are doing; if things are passed properly it works flawlessly. So no, I do not save your ass if you do not know shit. Keep that in mind.
– Of course the images have to be environment agnostic (meaning the database/farm settings are delivered by GPO, allowing you to move PVS images anywhere).
– And the images have to be part of the same domain, right?

Other than that, a very simple script that has helped many of my customers over the years!

Time to celebrate Christmas.




XenApp Load Script

This is another post on XenApp 6.5 scripting. And yes, again, the reason for that is that a ton of people are still on XenApp 6.5 and not everyone has the budget for all the fancy and pretty monitoring tools out there. Not saying they are not good. They are great. But money talks in the end and the economy is not that great for many XenApp customers out there, so cheapo is the way to go sometimes.

Based on the work of others (my apologies, but honestly I cannot find where this came from originally – if the author contacts me, all credit for the initial script will be given here), I tweaked this script to get the load on a particular worker group, in an easy-to-read graph. Take a look at it:

XenApp Farm Load

I agree it is not the fanciest graph out there, but it allows you to see all the servers in the farm and how much load you have on each of them. You can then add it to a web page that refreshes itself every 60 seconds, for example, so you always have the latest and greatest data from the farm. So here you have the script:


=== BEGIN ===

# load the appropriate assemblies
Add-Type -AssemblyName System.Windows.Forms.DataVisualization
Add-PSSnapin Citrix* -ErrorAction SilentlyContinue

# get the server names and load
$XALoad = Get-XAServer -WorkerGroup YourWorkerGroup | Get-XAServerLoad | Select-Object ServerName,Load
$XALoad = $XALoad | Sort-Object ServerName

# create chart object
$Chart = New-Object System.Windows.Forms.DataVisualization.Charting.Chart
$Chart.Width = 1000
$Chart.Height = 600
$Chart.Left = 10
$Chart.Top = 10

# create a chart area to draw on and add it to the chart
$ChartArea = New-Object System.Windows.Forms.DataVisualization.Charting.ChartArea
$ChartArea.AxisX.Interval = 1
$ChartArea.AxisX.Title = "PROD"
$ChartArea.AxisY.Interval = 1000
$ChartArea.AxisY.Title = "Load"
$Chart.ChartAreas.Add($ChartArea)

# create a series to hold the data points
[void]$Chart.Series.Add("Load")

# add a data point for each server
foreach ($server in $XALoad) {
    $dp1 = New-Object System.Windows.Forms.DataVisualization.Charting.DataPoint(0, $server.Load)
    # trim the first 3 characters of the server name to shorten the axis label
    $dp1.AxisLabel = $server.ServerName.Substring(3)
    $Chart.Series["Load"].Points.Add($dp1)
}

# set the title to the date and time
$title = New-Object System.Windows.Forms.DataVisualization.Charting.Title
$Chart.Titles.Add($title)
$Chart.Titles[0].Text = (Get-Date).ToString()

# save the chart to a file (PNG; the folder must exist)
$Chart.SaveImage("C:\YOUR_FOLDER\XenAppFarmLoad.png", "png")

=== END ===

Make sure you change the worker group name and the location where you want the graph saved (it is in PNG format and the folder must exist). In my case I run the script on the XenApp controller.

There is still some stuff I want to add like:

– Different colours based on the load (e.g. red if over 8000, green if under 5000, etc.).
– As each server can have a load index of up to 9999 before being reported unavailable (10000), if you know the number of servers you have, you can indeed show a load percentage for the whole farm. Sure, it is not a perfect metric, but it is a good indicator. For example, if you have 10 servers and the load on all of them adds up to 84350, we can be pretty sure you are at around 84.4% capacity on your farm. Again, this is not an exact number but a pretty good idea of where things stand when looking at the graph.
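The farm capacity idea above is a one-liner once you have the load data. A quick sketch (the $XALoad sample below is hypothetical stand-in data with the same shape Get-XAServerLoad returns; in the real script you would reuse the $XALoad variable already populated):

```powershell
# Hypothetical sample standing in for $XALoad from the script above
$XALoad = @(1..10 | ForEach-Object { [pscustomobject]@{ ServerName = "XAPROD{0:D2}" -f $_; Load = 8435 } })

# Sum all load indexes and divide by the theoretical maximum (10000 per server)
$TotalLoad = ($XALoad | Measure-Object -Property Load -Sum).Sum
$FarmPct = [math]::Round($TotalLoad / ($XALoad.Count * 10000) * 100, 1)
Write-Host "Farm is at approximately $FarmPct% capacity"
```

With the example numbers from the bullet above (10 servers, 84350 total load) this lands at about 84.4%.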

The idea here is to create a bunch of these graphing scripts and add all to a single ‘dashboard’ page that will show you the most relevant quick information you need when monitoring a XenApp environment.

Contributions to this in terms of scripts and new ideas are more than welcome.





XenApp Reboot Script

As many customers are still running XenApp 6.5, probably one of the most stable/successful XenApp releases, when the time comes to reboot the servers on the farm the options available are quite limited.

After looking for some scripts I ended up finding the one fellow CTP Dane Young wrote and posted on his site. The problem, IMHO, is that it is overkill for many customers (which ends up making it complex, especially for people not very familiar with PowerShell).

I took his script as a starting point and created a much simpler version that works very well and does what most administrators need:

– Reboots the farm in groups of servers.
– Prevents new logons to the servers that will be rebooted.
– Sends messages to the users 15, 10 and 5 minutes before the reboot will happen.
– Reboots the servers without waiting for them to be completely drained. To be honest I prefer this approach as, if you need to wait until a server has no more users, you may have to wait days in certain cases.
– Every interaction is logged to the event log (disabling logons, sending messages to users, servers being rebooted, etc).

In this example, the need was to reboot the farm in two passes: one covering half of the servers and another for the remaining servers, 30 minutes later. For this particular case the farm had to be rebooted twice a week. So this is what we did, preparation-wise:

– Created four worker groups named TuesdaysRebootGroup1, TuesdaysRebootGroup2, FridaysRebootGroup1 and FridaysRebootGroup2.
– Added half of the servers to TuesdaysRebootGroup1 and the remaining servers to TuesdaysRebootGroup2. Did the exact same thing for the FridaysRebootGroup1 and FridaysRebootGroup2 worker groups.
– On the ZDCs (two of them), we created two scheduled tasks on each. On the first ZDC the two tasks are scheduled to run on Tuesdays and Fridays at 1:00am, and on the second ZDC the tasks run on Tuesdays and Fridays at 1:30am. The tasks on the first ZDC take care of rebooting the servers in Group1 and the ones on the second ZDC take care of rebooting the servers in Group2.
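As a sketch of that last step, creating one of these tasks from an elevated prompt on a ZDC could look like this (the task name, script path and schedule below are illustrative only; adjust them to your environment):

```powershell
# Hypothetical: run the Tuesdays Group1 reboot script every Tuesday at 1:00am as SYSTEM
schtasks.exe /Create /TN "Citrix Reboot - Tuesdays Group1" `
  /TR "powershell.exe -ExecutionPolicy Bypass -File C:\Scripts\RebootTuesdaysGroup1.ps1" `
  /SC WEEKLY /D TUE /ST 01:00 /RU SYSTEM
```

You can of course create the same tasks through the Task Scheduler GUI if you prefer.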

Here you have the script:


=== BEGIN ===

# Reboot script for XenApp 6.5 Citrix Farms
# Simply disables new logons and issues warning messages every 5 minutes
# for 15 minutes in total.
# This script can be run as a scheduled task from the Zone Data Collector to process
# reboots for all other application servers.
# Created by Cláudio Rodrigues, Citrix CTP, Microsoft MVP, VMware vExpert
# WTSLabs Inc. Copyright 2010, 2011, 2012, 2013
# Loosely based on the work by Dane Young, Citrix CTP
# Check for more information
# Build 2014.11.24 Revision 6

Add-PSSnapin "Citrix.XenApp.Commands" -ErrorAction SilentlyContinue

# Define which worker group should be processed.
# We are using one script per worker group so make copies and change the worker groups as needed.
# Can be easily modified to allow multiple worker groups (see Dane's script).
$Global:WORKERGROUP = "TestCR"
$Global:EventLog = New-Object -Type System.Diagnostics.EventLog -ArgumentList Application
$Global:EventLog.Source = "Citrix Reboot Script"
# Create test event entry to note the start time of the script
$EventLog.WriteEntry("Starting scheduled task Citrix Reboot Script.","Information","111")

$Step1 = {
    param ([string]$server)

    $Global:EventLog = New-Object -Type System.Diagnostics.EventLog -ArgumentList Application
    $Global:EventLog.Source = "Citrix Reboot Script"

    function DisableLogons {
        # Prohibits logons until next restart for the server passed as argument 0
        Set-XAServerLogOnMode -ServerName $args[0] -LogOnMode ProhibitNewLogOnsUntilRestart
        Write-Host "Disabling logons on $server" -ForegroundColor Blue
    }

    DisableLogons $server
    $EventLog.WriteEntry("Disabled logons until next reboot on " + $server + ".","Information","411")

    $sessions = Get-XASession | ? {($_.ServerName -eq $server -and $_.State -eq "Active")}
    foreach ($session in $sessions) {
        $username = $session.AccountName
        Write-Host "Sending message to user $username on $server" -ForegroundColor Blue
        Send-XASessionMessage -ServerName $server -MessageTitle "Server maintenance" -MessageBody "Server will be rebooted in 15 minutes" -SessionId $session.SessionId -MessageBoxIcon "Error"
    }
    $EventLog.WriteEntry("Fifteen (15) minutes warning on server " + $server + ".","Information","311")
}

$Step2 = {
    param ([string]$server)

    $Global:EventLog = New-Object -Type System.Diagnostics.EventLog -ArgumentList Application
    $Global:EventLog.Source = "Citrix Reboot Script"

    $sessions = Get-XASession | ? {($_.ServerName -eq $server -and $_.State -eq "Active")}
    foreach ($session in $sessions) {
        $username = $session.AccountName
        Write-Host "Sending message to user $username on $server" -ForegroundColor Green
        Send-XASessionMessage -ServerName $server -MessageTitle "Server maintenance" -MessageBody "Server will be rebooted in 10 minutes" -SessionId $session.SessionId -MessageBoxIcon "Error"
    }
    $EventLog.WriteEntry("Ten (10) minutes warning on server " + $server + ".","Information","311")
}

$Step3 = {
    param ([string]$server)

    $Global:EventLog = New-Object -Type System.Diagnostics.EventLog -ArgumentList Application
    $Global:EventLog.Source = "Citrix Reboot Script"

    $sessions = Get-XASession | ? {($_.ServerName -eq $server -and $_.State -eq "Active")}
    foreach ($session in $sessions) {
        $username = $session.AccountName
        Write-Host "Sending message to user $username on $server" -ForegroundColor Yellow
        Send-XASessionMessage -ServerName $server -MessageTitle "Server maintenance" -MessageBody "Server will be rebooted in 5 minutes. SAVE your work" -SessionId $session.SessionId -MessageBoxIcon "Error"
    }
    $EventLog.WriteEntry("Five (5) minutes warning on server " + $server + ".","Information","311")
}

$RebootServer = {
    param ([string]$server)

    $Global:EventLog = New-Object -Type System.Diagnostics.EventLog -ArgumentList Application
    $Global:EventLog.Source = "Citrix Reboot Script"

    function StartReboot {
        # Creates a variable named server from the first passed argument
        $server = "$args"
        # Initiates shutdown on the remote server
        Invoke-Expression "Shutdown.exe /m \\$server /r /t 0 /c ""Shutdown scheduled by Citrix Reboot Script."""
        $EventLog.WriteEntry("Initiating reboot process on " + $server + ".","Information","911")
        Start-Sleep -s 120
    }

    StartReboot $server
}

# Main Script
$workergroup = $Global:WORKERGROUP
$workergroupservers = @(Get-XAWorkerGroupServer -WorkerGroupName $workergroup | Sort-Object -Property ServerName)

foreach ($workergroupserver in $workergroupservers) {
    $server = $workergroupserver.ServerName
    Write-Host "Step1 on $server" -ForegroundColor Blue
    $EventLog.WriteEntry("Processing server '" + $server + "' from worker group '" + $workergroup + "'.","Information","211")
    Invoke-Command -ScriptBlock $Step1 -ArgumentList $server
}

Start-Sleep -s 300

foreach ($workergroupserver in $workergroupservers) {
    $server = $workergroupserver.ServerName
    Write-Host "Step2 on $server" -ForegroundColor Green
    Invoke-Command -ScriptBlock $Step2 -ArgumentList $server
}

Start-Sleep -s 300

foreach ($workergroupserver in $workergroupservers) {
    $server = $workergroupserver.ServerName
    Write-Host "Step3 on $server" -ForegroundColor Yellow
    Invoke-Command -ScriptBlock $Step3 -ArgumentList $server
}

Start-Sleep -s 300

foreach ($workergroupserver in $workergroupservers) {
    $server = $workergroupserver.ServerName
    Write-Host "Rebooting $server" -ForegroundColor Red
    Invoke-Command -ScriptBlock $RebootServer -ArgumentList $server
}

=== END ===

Yes, I do know this could be better and smaller, not to mention improved. The bottom line is, it is a simple script that does the job very well and at the same time is easy to follow, even for people not used to PowerShell.

I am sure it will help some of you out there. If you have any comments or suggestions (or even criticism), feel free to reach out. I am all ears.



I present you the iCluster.

On my quest for a great portable lab that I can take with me anywhere, airports included, I did quite a bit of research and ended up with something I have never seen before. So here you have the details, in case you are looking for the most portable lab on which you can still do all sorts of fancy stuff, with kick-ass performance.

The iCluster, as I named it, is actually a pretty damn amazing little thing I built. Here are the specs:

1 x Synology DS412+ NAS. Has four drive bays and two gigabit interfaces. Does iSCSI, NFS, etc. 
4 x Samsung EVO 840 1TB. Loaded these four 1TB SSD drives into the Synology for a total of 3TB of usable space (they are in Synology Hybrid RAID).
2 x Apple Mac Mini 2.3GHz, i7 Quad-Core with 16GB RAM each. The reason for the Mac Minis is simple: they are DAMN small and portable. Two of these with 16GB each give me 32GB of usable RAM for virtual machines. Not the best BUT for a portable lab that I can carry around and at only a couple pounds, nothing beats that.
1 x HP 1810-8G v2 switch. Small 8-port Gigabit switch with full LACP capabilities (so I can trunk two ports to the Synology, giving me 4 Gigabits of full-duplex throughput). Not bad.
1 x Apposite Linktropy Mini. WAN emulator. Allows me to simulate any sort of latency/loss on a 100 Mbit link. I can quickly see how things will behave for users when they connect over a 3G connection, from a remote location, over satellite, etc. Impressive gear.
1 x D-Link DIR-600L Router. Used to provide wireless access to the cluster and also a WAN connection to the outside, in case I want to plug something into it.

All this is loaded on a custom-built wooden case that has a front and back door for easy access to the devices and a handle at the top. I can easily carry this with me on trips and even take it on airplanes as carry on luggage. I also put a small power bar inside so I only have a single power cord to the outside.

On the back you see the antenna for the router and some colour-coded CAT6 Ethernet ports. That allows me to select (by patching) whether a connection passes through the WAN emulator or goes directly to the HP switch, whether the wireless goes through the Apposite, etc.

The case is being finalized as I post this. Hopefully I will have pics of the whole thing soon. The plan is to wrap it in leather to give it a finish similar to the Marshall Stanmore speaker.

Total cost, including the Apposite Linktropy, will be in the $7,500 range in case you are wondering.

On the software side of the house I decided to go with Windows Server 2012 R2 Hyper-V for now. If it becomes a PITA on the Mac Minis I will switch to ESXi, as I know for sure it works great on the Mini (I have another one with ESXi – rock solid). This gives me all the goodies (live migration on the cluster, etc.) in such a small and light form factor.

So next time I am at BriForum, be prepared for some interesting live demos on the iCluster.




VMware Horizon 6. The only article you will ever need to read.

Ladies and Gentlemen,

We all knew this was going to happen and it happened yesterday. If you have no idea what I am talking about, let me quickly summarize it for you and then give you my take on it.

VMware announced yesterday that it is adding support for Microsoft RDS Session Host (a.k.a. Terminal Server, Terminal Services, TS or simply RDS) to its product. So now they can deliver sessions from either Desktop OSs (what VMware View was all about since day one) or Server OSs (with the RDS Session Host role enabled) using PCoIP.

Why am I saying this is the only article you will ever need about the subject? Well, first of all, I am the one writing it. Does not get better than that. Then, I am not on VMware’s or Citrix’s payroll. Finally, I am one of the so-called ‘Dinosaurs’ of the RDS world (remember, I got the first MVP award ever given for RDS specifically, back in 2001). Oh, and I drive a Lamborghini.

So, seriously, let’s take a look at the whole thing and what I think is important about this release.

– RDS as a platform. I am very happy to see VMware doing this. Honestly. This just proves that what I have been saying all these years is true: RDS is a solid platform AND it is not going to the grave in the near future. VMware now officially recognizes this. This also means a lot more work for all of us in this industry, as lots of VMware customers will now start deploying this and will realize it is way more complex than a broker and a protocol. They have to deal with printing, profiles, logon times, session sharing, etc. The list goes on. For us, the industry dinosaurs, this is GREAT news. Be prepared to have hundreds of new customers lined up at your door, asking you to help them with their RDS issues.

– Citrix as a solution. There is no other way to put this. VMware is validating what Citrix has been saying for years WHILE acknowledging that they (VMware) did have a big hole in their application delivery solution and that Citrix was correct all these years in addressing both the desktop and server OS application delivery mechanisms. Yes, a little pat on the back for Citrix.

– Citrix as a company. One thing I have been saying to Citrix for YEARS, even though I am a Citrix CTP as well, is that Citrix was milking the XenApp cow for a VERY long time without really innovating much. Minor improvements here and there, evolution (albeit slow, IMHO) instead of revolution. Then, as the world, according to Brian Madden, would flip everything to VDI and RDS would die, Citrix jumped on the VDI bandwagon and, more than that, started to backstab the product (XenApp) that made Citrix, well, Citrix. Decided to rename XenApp to XenDesktop “Customers are stupid” Edition (ok, App Edition), chop off some features that made XenApp 6.5 a very solid platform, and then released XenApp 7.5 “Phoenix”, still a limping version of XenApp 6.5, not really offering anything better than its previous release. Basically screwing its customers, partners and itself along the way. Cannot get better than this, screwing-up wise. Not sure who they hired for the job of screwing things up but whoever that is, this guy is a GENIUS at the subject. Next time I want to screw up something I will definitely give Mr. G a call.
So the VMware announcement means two things for Citrix. First, RDS is indeed an important platform, which means XenApp is important and has to be fixed if you do not want people testing Horizon 6 to jump ship or not buy your product. Secondly, and most important, Citrix now has someone on their back, and if they want to stay on top they will have to become the good ol’ Citrix we, the dinosaurs in the industry (RickD, DougBrown, SteveG, SBass, Benny, etc.), learned to love. The one that innovates, that pushes the industry as a whole forward. And not the current Citrix that looks more like a bunch of farmers that know nothing more than milking a cow. Supervised by a marketing clown. Yep, it is that bad. Hopefully this will be great for the industry, leading to the same type of war we saw at the protocol level, where years ago Citrix was the king by a huge lead and now, for 99% of the use cases, the protocol is almost irrelevant (this helped the industry so much that even Microsoft released something great, RDP 8.1, which is something borderline mystical as they do have a history of releasing stuff from their asses – you know what that is). So the lesson here: this is great for the industry, great for Citrix – if they see this as a challenge and live up to the expectations – and great for VMware, which is broadening its reach and addressing the problem properly. Great.

– XenApp as a product. Well, thanks to customer feedback (more like customer wrath, really), Citrix had to bring it back from the ashes. Then VMware comes along and tells the world RDS is amazing. I hope this is a wake-up call for Citrix, so they realize how important XenApp is and always has been for their strategy and, more than that, for them as a company. This move by VMware will hopefully guarantee XenApp is a product customers can trust in the long run, which many feel was not the case since Citrix almost renamed itself Cindesktop.

– Horizon 6 itself. If you have been in the industry long you know there is more to RDS than simply having a way for people to connect to an RDS Session Host over a protocol. Problems that are not there with VDI (app compatibility, session sharing, etc.) will definitely be there once you throw RDS into the mix. Right now, no one has played with Horizon 6. No one knows what it can do as a complete solution, as something that goes beyond brokering a session to an RDS Session Host using PCoIP. How does it handle printing? How does it handle the user environment? How does it handle the server build itself? How much automation is there to increase farm capabilities? The list goes on, and for now no one has an answer to that. That is why no decent blogger should say Horizon 6 is great or that it sucks. No one knows that. And I can bet things will change from what some analysts saw today to what will actually be shipping. My take is, if VMware is intelligent, they carefully looked at what is out there, the competition, and addressed most of the needs by the time it is out. If that is not the case, customers may get burnt with a solution that falls short of its promises and may go to a competitor. Or, if you are really loyal to the brand and NOT in a hurry to have it working, you may just say “Oh well, it is a V1 product so half of the things not working properly is to be expected – they will get better”. My personal take is I hope it is good as, again, this will drive the competition and the industry forward. And I will have years of consulting in the RDS space still to go. Great. But until I see it in the wild I cannot say how good or bad it is. Period.

– UX is important. Yes, the user experience is key. And how seamlessly things integrate with all the platforms that can work as an endpoint is very important. As Shawn Bass mentioned, Citrix ignored a lot of platforms with their Receiver, to the point that the Receiver on OSX, for example, sucks. I will say this is an industry trend in general, as Microsoft apps on OSX do suck too. But there is one point we cannot forget: the AX (the Admin eXperience) has to be good as well. No matter how good the UX is, if the AX sucks big time and the whole thing is a PITA to get going and to maintain, IT departments will certainly push back on it and bury it somewhere. The lesson here is it has to be polished on all fronts, especially if you are the last player to the game, the one that had years of research available, studying everything that sucks with your competitors. So yes, we do expect VMware’s offering to be polished on all fronts.

– VDI as a platform. Well, thanks to the first point in this article, Horizon 6 puts the last nail in the VDI coffin. What I mean is, in the coffin that says VDI is everything, VDI is better than sex, I want to do a MILF with a VDI tattoo on her lower back (I bet you pictured it). VDI is simply another option, another tool in your toolbox, and VMware finally acknowledges it. Plus, this goes beyond Citrix and VMware. This is also a wake-up call for all the VDI fanboys out there who were blinded by Brian’s predictions (failed, by the way) that VDI was going to take over the world and Claudio would retire due to lack of work for him as an RDS guru. The lesson here, VDI fanboys: go learn RDS and stop thinking VDI is the bible. Brian is no Jesus. He does not even have a long beard. And he lives in San Francisco.

To conclude this post I just want to say this: 2014 is the fucking year of RDS and this is not a prediction.

Thanks VMware for confirming what I have been saying all along.

And VMware, welcome to the RDS world. I have my arms wide open.

[Hugging sound]
[VMware fanboys crying in background]



Citrix vs. Cloud Platforms. Yawn.

Ok, after reading Gabe’s article and then Brian’s take on it, instead of replying I decided to write a whole post about it. That is why you are reading this.

First of all I want to summarize Brian’s post for you. I think he should start working for Gartner as he is becoming the master of failed predictions (a perfect fit if you want to work for Gartner – not sure if you know this but Gartner has a lot of mediums and Gypsies on staff and is responsible for buying 83% of all crystal balls made in America) and his latest post kind of falls into the same category.

The main idea in both posts is that if VMware or another player releases seamless Windows apps in their cloud offerings, Citrix is fucked.

Here is why, IMHO, that is not the case – and why even Brian seems to contradict himself in his own post.

1. The cloud. Oh, the cloud. It amazes me to see that most CIOs seem to have learned nothing from the whole Snowden/NSA episode. If all corporate systems and intellectual property now live in the cloud, you just made the NSA much happier. The same way Snowden stuck it up their arse, gathering all that information and sharing it with the public, don’t you think it would be possible for a Snowden Jr. to grab confidential corporate data, give the finger to the NSA and go live in China or Russia with all that info ready to be sold overseas? Do we really think a pharmaceutical company with crazy drugs being developed will consider doing anything in the cloud? Or Lockheed Martin, Bombardier, whoever is making that Area 51 flying shit, etc.? The list of corporations in the Fortune 500 that would be MASSIVELY affected by something like this happening is simply huge. So going to the cloud just makes the NSA’s life easier. Bring the cloud onsite and at least you have a little bit more control and a chance of keeping the NSA out of the door.

2. Ok, I mentioned bringing the cloud onsite, and Brian does mention that, meaning a common platform is there for on-premises and off-premises deployments. But in the same article he also states: “Microsoft has started talking about how future versions of Windows Server will be more like “mini on-premises instances of Azure.”” That means this does NOT exist today and only Jesus knows exactly when it will see the light of day (Nadella, or Nutella as I prefer, does not know the answer to that, trust me). So as of today, and for at least 3-5 years, this is not happening mainstream. Also keep in mind that even if Windows Server 2016 does have all this shit built in and working 100% (which is never the case with anything Microsoft releases – for God’s sake, they cannot even get RDS to work 100%), companies will still have to go through the exercise of testing and validating such a platform, which in itself takes years for many Fortune 500 companies. These guys cannot simply change platforms overnight. The FDA would shut down ANY pharmaceutical attempting to do that overnight. Simple as that. So the reality here is this is still YEARS away.

3. Given point #2, a solution, to be called a SOLUTION and not a HAE (Half Ass Effort), has to support BOTH on-premises and off-premises TODAY. So if someone (i.e. VMware) releases something that only works off-premises, on a cloud platform, we have a problem. What do I do with my on-premises stuff? Ignore it? Choose another vendor to deal with the on-premises scenario only? That is a fucking nightmare: now I am dealing with two products and two vendors just to address my on/off-premises needs. Keep in mind this would still be the case even if someone released a platform that can indeed deal with both scenarios flawlessly within the next year. Why? Because you will still need to test and validate such a platform BEFORE going full production with it (point #2). Simple. Common sense here, people.

Summing up: as of today, and for at least the next two to three years, things will still look very similar to what they are today, and if you do want to be a leader down the road you must have a platform that deals with the IT landscape of TODAY and with the IT landscape of TOMORROW. Sorry to say, but VMware is nowhere near it in terms of addressing SBC/VDI on-premises and off-premises.

Now, if you do not need to test or validate anything, do believe ‘cloudifying’ your whole IT infrastructure is a great idea, and think the NSA does not exist, then Brian is indeed onto something with his article.



BriForum Boston 2014

This week Brian and Gabe announced the sessions for both BriForums (London/US). I am happy to announce I will be presenting two sessions in Boston and will almost certainly attend BriForum London as a regular peasant.

If you did not read the list of sessions, here is what I am presenting, why I think these sessions will be useful, and the plan for delivering them.

SBC Round Up 2014. I really like doing these. The plan is to go through the installation and testing of several RDS add-ons (e.g. ProPalms TSE, Dell vWorkspace, 2X, etc.) and see how they compare to each other and, of course, to RDS 2012 R2 by itself. What will change this year is I am actually recording all the installations and will post all the videos as soon as BriForum Boston is over. I am also creating individual PDFs for each product installation, so at the end you will get an end-to-end guide on how to install every single major product out there. Neat.

RDS-O-Matic. This is basically the end result of dealing with RDS installs almost daily for customers around the globe. The idea was to come up with an automated way to create all the PowerShell commands to deploy a full RDS 2012 R2 environment from scratch. For BriForum it will be able to perform the following tasks:

– Hyper-V only. Creates all the required VMs based on a sysprep’ed VHD. Of course this requires the minimum services to be up and running already, like your AD, your Hyper-V hosts, the clustering, etc. But if these are there, you simply select the VHD you want and it will copy it to all the required VMs, mount them, inject the Unattended.xml file and finalize the setup (add to domain, set IP, add to the proper OU). This step is optional (meaning if you do have all the VMs ready to roll you can skip it). Yes, before you bitch: I have no love for VMware ESXi anymore.
– NLB. For every component that needs NLB you will be able to choose whether you want it done for you (e.g. RD Gateway). It will create the VIP, add the ports, etc.
– UPD. If you want to enable the User Profile Disk on the deployment.
– Whole deployment. Of course it does that. Sets up the connection brokers, web access, gateway, session hosts, etc. The whole deal.
– SQL Bullshit. Ideally I will try to automate the turd Microsoft created for setting up SQL for the Connection Broker HA. It is a PITA (create the folder on the SQL server, create the database, add the proper security, etc. – amazing how every other product on the market can do this but NOT Microsoft).

The main plan is to turn all this into a web service that anyone can hit, enter their information and get a text file ready to use for the whole deployment. Later, iOS and Android apps, so you can do that anywhere/anytime/offline.
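To give you an idea of what that text file would contain, here is a minimal sketch (in Python, since the end goal is a web service; all the hostnames and parameter values are made up for illustration) of a generator that spits out the core RemoteDesktop module commands for a from-scratch deployment:

```python
# Minimal sketch of the kind of output RDS-O-Matic aims for: feed it your
# server names and it emits the RemoteDesktop PowerShell commands for an
# RDS 2012 R2 deployment. Hostnames below are placeholders.

def generate_rds_script(broker, webaccess, gateway, gateway_fqdn,
                        session_hosts, collection="Desktops",
                        upd_share=None):
    lines = [
        "Import-Module RemoteDesktop",
        # Core deployment: broker, web access and the first session host
        "New-RDSessionDeployment -ConnectionBroker %s "
        "-WebAccessServer %s -SessionHost %s"
        % (broker, webaccess, session_hosts[0]),
    ]
    # Any extra session hosts join the existing deployment afterwards
    for host in session_hosts[1:]:
        lines.append("Add-RDServer -Server %s -Role RDS-RD-SERVER "
                     "-ConnectionBroker %s" % (host, broker))
    # The gateway role needs the external FQDN your users will hit
    lines.append("Add-RDServer -Server %s -Role RDS-GATEWAY "
                 "-ConnectionBroker %s -GatewayExternalFqdn %s"
                 % (gateway, broker, gateway_fqdn))
    # One session collection spanning all the session hosts
    lines.append('New-RDSessionCollection -CollectionName "%s" '
                 "-SessionHost %s -ConnectionBroker %s"
                 % (collection, ",".join(session_hosts), broker))
    # Optional: enable the User Profile Disk on the collection
    if upd_share:
        lines.append('Set-RDSessionCollectionConfiguration '
                     '-CollectionName "%s" -EnableUserProfileDisk '
                     "-MaxUserProfileDiskSizeGB 20 -DiskPath %s"
                     % (collection, upd_share))
    return "\n".join(lines)


if __name__ == "__main__":
    print(generate_rds_script(
        "broker01.lab.local", "web01.lab.local", "gw01.lab.local",
        "rds.lab.local", ["sh01.lab.local", "sh02.lab.local"],
        upd_share=r"\\fs01\upd"))
```

The cmdlets themselves (New-RDSessionDeployment, Add-RDServer, New-RDSessionCollection, Set-RDSessionCollectionConfiguration) are the real 2012 R2 ones; the Hyper-V VM creation, NLB and SQL pieces are not shown here.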

And for the first time in my 10 years of BriForum, I will be driving to Boston this time, which may actually be faster than flying, assuming the cops do not stop me in Maine. Feel free to stop me and say ‘Hi’ if you see me around at BriForum. I will be driving ‘Ferrucio’ (yes, my kids name all the cars we have at home).

Lamborghini Gallardo




Adobe Reader XI on RDS 2012 R2

Just a quick post regarding a stupid issue I had this week with the latest Adobe Reader XI running on RDS 2012 R2. After installing it, trying to launch the app would give me this error:

[Adobe Reader XI error dialog screenshot]

The usual fix is to set the bProtectedMode registry key to 0. The problem is, even after doing that, the application still refused to launch.

Once I set the Compatibility Mode for the app to ‘Windows XP Service Pack 3’ I was able to launch it successfully. So in case you hit the same issue, make sure the registry key is there AND the app is set to ‘Windows XP Service Pack 3’ compatibility mode.
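For reference, here is what the registry side of the fix looks like as a .reg file. Note this uses the machine-wide FeatureLockDown policy path that Adobe documents for Reader XI (handy on an RDS box since it applies to all users); if your environment uses the per-user key instead, it lives under HKCU\Software\Adobe\Acrobat Reader\11.0\Privileged:

```reg
Windows Registry Editor Version 5.00

; Disable Protected Mode machine-wide for Adobe Reader XI
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Adobe\Acrobat Reader\11.0\FeatureLockDown]
"bProtectedMode"=dword:00000000
```

Remember this alone was not enough in my case; the XP SP3 compatibility mode was also required.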

