Introducing the Citrix Student Mentorship Program

Ladies and Gentlemen,

Yes, I know I have been MIA for a bit but the good news is, I am not dead. Not yet. I have been quietly working on a little community thing and now we are ready to announce it. Before going into the details, let me explain how this came to fruition.

Having worked with Citrix and RDS technologies for quite some time, I noticed over the years that people usually got familiar with these technologies either by accident (see Carl Webster’s website) or through some project no one else wanted to tackle. What I mean is, SBC/VDI became part of someone’s career at a later stage: you started your working life as, say, an MCSE or equivalent, and at some point you got involved with XenApp, for example.

With that in mind, I realized that given how complex even a plain RDS deployment can be (do not get me started on Citrix, now with NetScaler, XenMobile, XenApp, XenDesktop, PVS, MCS, Insight, Director, Studio – the list of components you may need to touch goes on), it would be awesome to find a way to expose people to these technologies as early as possible in their career path.

That is how the Citrix Student Mentorship Program was born. The idea is simple: seasoned, well-known professionals in the industry (Citrix CTPs or CTAs) will take two students (college/university level) under their wings and will teach and help them with everything they need to earn their Citrix Certified Associate titles for both Virtualization and Networking (so CCA-V and CCA-N). That even includes paying the fees for the actual exams (USD 200 each).

As this is also a community effort, I do hope these students will catch the community bug and, down the road, do for the community what the community is doing for them at this stage. This will give us professionals who understand the importance of the community to their careers, not only today but in the future. A win-win situation in my book.

More details will be available soon (well, when the website is ready).

I am very excited about this program and I do hope many CTPs and CTAs will join me in this effort.




An interesting case about PVS retries…

This is a post about a couple of things I found at a customer site regarding PVS retries, and why I came to the conclusion they do not matter as much as most people think. There is more you need to look at.

First some background information about the environment:

  • XenServer based. The pool hosting the XenApp VMs was running 6.1 and was upgraded to 6.5 SP1.
  • Two PVS Servers, virtual, with 16GB RAM each.
  • Virtual File Server hosting the vDisk on a CIFS share. 16GB RAM.

We noticed performance started to degrade once we upgraded the pool to XenServer 6.5 SP1. For some reason users would complain about their sessions freezing for a while. After some investigation, I found the PVS retries to be on the higher side. The first problem was defining what counts as high. Some servers did show 500 or more retries, but over a period of eight to ten hours. That works out to roughly one retry per minute, which could not be the culprit: plenty of network devices retransmit data every single minute, at much worse rates, and no one notices, even with real-time audio and video in the mix.

That is when I decided to take a look at the servers themselves. A quick search on the Event Log under ‘System’ showed the following for the ‘bnistack’:

[MIoWorkerThread] Too many retries Initiate reconnect.

And right after:

[IosReconnectHA]  HA Reconnect in progress.

So what is happening here? After a certain number of retries (I have yet to find the exact threshold), the PVS target device driver on the XenApp server automatically triggers a reconnection to another PVS server, and we know that reconnection is not instant. It takes a couple of seconds, and that is exactly why users would experience ‘freezing’ in their sessions. After asking the users to write down the time it happened, we could clearly see it coincided exactly with the reconnection process being triggered.

If you right-click a vDisk in your PVS store and select ‘Show usage’, it shows the retries, but more importantly, it shows which server each device is connected to. That is when I started monitoring whether the connection would change during the day and bingo: whenever it changed, users would complain.
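If you do not want to babysit the console, the same information can be pulled from the PVS PowerShell interface. A rough sketch, under my assumptions (the McliPSSnapIn is registered on the box running it, and the DeviceInfo records expose deviceName/serverName fields); schedule it every few minutes and diff the log to spot the flips:

```powershell
# Appends a timestamped device -> PVS server mapping to a log file.
# A change in serverName for a device over time reveals an HA reconnect.
Add-PSSnapin -Name McliPSSnapIn -ErrorAction SilentlyContinue
Mcli-Run SetupConnection -p server="YOUR-PVS-SERVER-FQDN"
$stamp = Get-Date -Format 'yyyy-MM-dd HH:mm'
Mcli-Get DeviceInfo -f deviceName,serverName |
    Where-Object { $_ -match '^(deviceName|serverName):' } |
    ForEach-Object { Add-Content -Path C:\Scripts\PvsConnections.log -Value "$stamp $_" }
```

The log path and server FQDN are placeholders; adjust to your environment.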

Now what? We knew what the issue was but why was this happening?

We started thinking about the XenServer 6.5 SP1 upgrade, as that was the only thing that had changed. Our PVS image had only a couple of versions (three, I think), with the base one containing the XenServer 6.1 tools and the latest one the 6.5 SP1 tools. That is when I decided to merge all versions (again, just a few, under the default threshold). Once I did that, retries dropped dramatically, to under 20 per day for 90% of the devices. Even the remaining ones fell to under 50 a day. Much better, and no more HA reconnections.

The lesson learned here: if your base image has one version of the XenServer tools and a different version exists in one of the PVS image versions, you had better merge everything right after the upgrade is done.

The other really odd thing: once I merged the image, I brought it back to the XenServer host as a new VM (so you can easily update the PVS Target Device software to a newer release) and tried to start it, and I got a blue screen. Once more thinking the upgrade could have caused the problem, I decided to get the VM UUID and change its device ID by running xe vm-param-set uuid=<vm-uuid> platform:device_id=0002. That fixed the BSOD.

I am still not sure why having different XenServer tools across image versions would cause such higher retries, but I know for sure the merging fixed all that.

To summarize: PVS retries are something you do need to monitor, but just looking at raw numbers may not tell you anything (unless you are seeing several retries per second). Also keep in mind it is all UDP based… The really important thing is the HA kicking in and flipping the PVS server the target is connected to. That is what causes the famous hangs and freezes on the devices.

And yes, ideally always merge your images after some major change like hypervisor tools. 🙂



PVS Retries – Script

As PVS retries can indeed cause all sorts of degradation to the user experience (i.e. applications freezing or overall slowness) and they are not readily exposed in any of the Citrix monitoring/management consoles (even the PVS console does not show that info, nor does Director), I decided to write this little PowerShell script to get that information and show it in a nice graph. This is what it looks like:


A couple of comments:

  • What is considered high/normal/low for retries? I have no idea if anyone ever came up with a number. Also keep in mind the number returned by PVS is cumulative since the machine booted (or since someone reset the counter). So 1,000 retries over 10 days is not a big deal if you ask me, but 1,000 in 5 minutes means something is indeed wrong. I would love to hear what others have to say.
  • I could (and should) calculate and show retries/min instead of raw retries. It is simply a matter of retrieving the uptime of the server, converting it to minutes and dividing the retries by that.
  • I assume you know how to get the MS Chart .NET libraries/PVS stuff registered/working.
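For what it is worth, the retries-per-minute idea from the bullets above is a simple calculation. A minimal sketch, assuming WMI access to the target devices (the device name and raw retry count would come from the PVS DeviceInfo data):

```powershell
function Get-RetriesPerMinute {
    param([string]$ComputerName, [int]$Retries)
    # Uptime via WMI: convert LastBootUpTime to a DateTime, then to minutes.
    $os = Get-WmiObject -Class Win32_OperatingSystem -ComputerName $ComputerName
    $boot = $os.ConvertToDateTime($os.LastBootUpTime)
    $uptimeMinutes = ((Get-Date) - $boot).TotalMinutes
    [math]::Round($Retries / $uptimeMinutes, 2)
}

# Hypothetical usage with a device name and its raw retry counter:
# Get-RetriesPerMinute -ComputerName XA65-001 -Retries 1000
```

You could call this inside the chart loop and plot the normalized value instead of the raw counter.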

So here is the code:

# This function was created by Remko, another Citrix CTP
# and probably the craziest motherfucker I have ever met.
# As the PVS PowerShell snap-in sucks, not even returning proper objects,
# people like Remko took matters into their own hands.
# You can see his post here.

function ToObject {
     # Pipeline input is the Mcli command string; run it and parse the text output.
     $cmd = "$input"
     $collection = @()
     $item = $null
     switch -regex (Invoke-Expression $cmd) {
          # Patterns reconstructed from Remko's original: a 'Record #n' line
          # starts a new object; 'name: value' lines become its properties.
          '^Record #\d+' {
               if ($item) { $collection += $item }
               $item = New-Object System.Object
          }
          '^(?<Name>\w+):\s*(?<Value>.*)$' {
               if ($item -and $Matches.Name -ne "Executing") {
                    $item | Add-Member -Type NoteProperty -Name $Matches.Name -Value $Matches.Value
               }
          }
     }
     if ($item) { $collection += $item }
     return $collection
}

# Loads the appropriate assemblies
Add-Type -AssemblyName System.Windows.Forms.DataVisualization
Add-Type -AssemblyName System.Drawing
Add-PSSnapin -Name McliPSSnapIn -ErrorAction SilentlyContinue
Mcli-Run SetupConnection -p server="ENTER YOUR PVS SERVER FQDN HERE"
$XAServers = 'Mcli-Get DeviceInfo -p siteName="YOUR PVS SITE NAME",collectionName="DEVICE COLLECTION YOUR VMs ARE IN"' | ToObject

# Creates the chart object
$Chart = New-Object System.Windows.Forms.DataVisualization.Charting.Chart
$Chart.Width = 1000
$Chart.Height = 600
$Chart.Left = 10
$Chart.Top = 10

# Creates a chart area to draw on and adds it to the chart
$ChartArea = New-Object System.Windows.Forms.DataVisualization.Charting.ChartArea
$ChartArea.AxisX.Interval = 1
$ChartArea.AxisX.Title = "Servers"
$ChartArea.AxisY.Interval = 50
$ChartArea.AxisY.Title = "PVS Retries"
$Chart.ChartAreas.Add($ChartArea)

# The series must exist before its drawing style can be set
[void]$Chart.Series.Add("Data")
$Chart.Series["Data"].ChartType = [System.Windows.Forms.DataVisualization.Charting.SeriesChartType]::Column
$Chart.Series["Data"]["DrawingStyle"] = "Cylinder"

# Adds a data point for each server
foreach ($server in $XAServers) {
    $dp1 = New-Object System.Windows.Forms.DataVisualization.Charting.DataPoint(0, $server.status)
    # For my particular needs I assumed the retries as this:
    # Good, under 100. Attention, between 100 and 300. Bad, over 300.
    # Am I right? No clue. Please comment/contribute with your findings.
    if ([int]$server.status -lt 101) {
        $dp1.Color = [System.Drawing.Color]::Green
    } elseif ([int]$server.status -lt 301) {
        $dp1.Color = [System.Drawing.Color]::Yellow
    } else {
        $dp1.Color = [System.Drawing.Color]::Red
    }
    $dp1.AxisLabel = $server.deviceName
    $Chart.Series["Data"].Points.Add($dp1)
}

# Sets the title to the date and time
$title = New-Object System.Windows.Forms.DataVisualization.Charting.Title
$Chart.Titles.Add($title)
$Chart.Titles[0].Text = (Get-Date).ToString()

# Saves the chart to a file on the server where the script runs.
# Could be anywhere, even a UNC path. Adjust the path to your environment.
$Chart.SaveImage("C:\Scripts\PVSRetries.png", "png")

It can be certainly improved and I will work on that. For now, give it a try and let me know what you think.



Citrix – Where did it go wrong?

Coincidence or not, a funny thing happened last night, two days before I leave for one of the Citrix CTP meetings at HQ. I had a dream and when I woke up this morning I knew I had to put it to words in a blog post. So here you have it.

It is all about Citrix. What it was and where it is now. To understand this post we must take a step back and I have to tell you a bit of my story. One that has been tied to Citrix in ways many people are not even aware of, dating all the way back to when all they had was an OS/2-based product.

Also, having an understanding of someone’s background does help quite a bit when they write about something. So here we go.

I did a bit of everything. Technical support on both sides of the fence, meaning manufacturers and resellers. Also did lots of development, a long time ago. Sure, as it was really a long time ago, it was all Pascal, Clipper, dBase, and the like. I tried to stay up to date on the subject (one of the reasons I even took the Big Nerd Ranch iOS class for a full week – highly recommended) but with many other things on my plate, development is more of a hobby these days.

Then, when the time came, I had to grow a business. Had to compete with Citrix (yes, in the early 2000s) and Provision Networks. And here I must say we did extremely well. Library of Congress, John Deere, Time Warner Cable, Jet Propulsion Labs: all our customers.

We were the first company to realize that many companies (Citrix was a great example) were selling products that had a ton of functionality while, at the same time, tons of customers were buying them and using only a handful of the features. So we broke our product down into modules. People could now buy what they needed and not what the manufacturer wanted them to buy. That is why we grew. And grew fast.

Long story short, Terminal-Services.NET is today what you have on Parallels RAS. Yes, that Parallels. The one you probably have on your Mac, running Parallels Desktop.

Remember Citrix Project Iris? Session Recording as you know it today. We beat Citrix at its own game, releasing the first ICA/RDP session recorder BEFORE Citrix had its own.

So I learned all the way from developing and testing, to growing a company, to selling a company and to starting over. Keeping an eye on the market, its trends, what was available and staying relevant (meaning staying in business when you are as small as we were as a company).

That is why I do believe I am qualified to comment on Citrix. More than most, as not many in the industry have coded, created products and started/sold companies. Some are techies only; others come from a CXO background only, with no hands-on time creating products or even using them. Not the case here. Whatever product you know in this industry, I used it. I tested it. And deployed it for real at real customers. You get what I mean, I am certain.

Now, Citrix. What went wrong?

I think several things contributed to that and I will explain some of these.

  • Too much forward thinking. Hey, I get it. Looking ahead is needed. You try to predict where the industry is going. Where customers will be next. All great. I had to do that for my own companies. The problem is, when you focus too much on what is ahead, you forget to look at your rearview mirror. Citrix did a lot of that. It was almost as if all employees were mandated to break their rearview mirrors. The list is quite big, with some acquisitions you never heard of and some you heard of and thought, “WTF?”. To name a few: ByteMobile, Podio, Octoblu, etc. By not looking at the rearview mirror you do not see where your competitors are. You do not see what is going on today. You lose focus. You lose market share. Some may even start to think you are lost. Customers come out of keynotes thinking ‘When the hell will I ever use that?’. People need to see products and solutions they can use today. Or in six months. Not in six decades.
  • Bad apples. Listen, everyone may get a bad apple one day. That is part of life. But when you get a lot of bad people in top management positions, what happens next? They flood the company with their buddies. If someone is dumb and their circle is full of dumbasses, chances are all their buddies are dumbasses too. That means the company is now flooded with dumbasses. It happened at Citrix. And it took its toll. I am not saying there is no way to turn it around. Sure there is. But they will have to shake things up quite a lot and bring new blood to the company, exactly what you are seeing now. Also keep in mind I am not saying everyone there was like that. Far from it. Many GREAT people there. The problem is, if a lot of people at the top are like that, the great ones under them will never be able to make a difference. It is like fighting an uphill battle with an army that is 100x smaller. Sure, sometimes miracles happen. Not the case here. No miracles at Citrix.
  • The hype-surfers. You know the type of guy that is always surfing the hype waves? The ones that all of a sudden are only talking about the current buzzword in the industry? This usually happens at marketing-driven companies. Companies whose core is full of marketing people but lacks the hardcore techies. The guys that understand technology. They also lack the hands-on people who understand the technology and are dealing with real-world customers and problems on a daily basis. Citrix was full of hype-surfers for years. Just look at what happened when VDI became the hotcake. They thought it was a great idea to kill XenApp, their bread and butter. The product that brought the bacon home. I do not need to remind you or the whole industry about what that did to Citrix. Or where VDI stands today, compared to what many people said it would be back in 2010-2012.
  • The channel. When you start to screw around with your own channel partners, do not expect great things to come out of that. Many partners are loyal, but at the end of the day they have bills to pay. They need great products and customers. If you are now stealing customers from your own partners, you do not have a partner anymore. No more loyalty. Many will think, ‘screw you’. It happened at Citrix. How do you think VMware and many others gained market share? Sponsored tweets or Facebook pages? Nope. Thank many of the Citrix partners for that. After being backstabbed they opened the door to the devil. Not saying VMware is the devil. Far from it. But it was (and is) a great competitor, one that was eager to get more traction, more market share. So they did it.

The problems go well beyond what I wrote here. And I am still baffled to see many things still wrong at Citrix, not at the technology level. I mean a lack of vision, of cohesion. Not knowing how products should integrate, or what to deliver next. Even worse, not seeing exactly what customers and the industry as a whole are after.

That said, there is hope. The company still has some pretty good people. Bright engineers. And more than that, they seem to realize the company as a whole screwed up and they are ready to listen. Time will tell if that is the case or not and we will be able to clearly see that, probably in 6-12 months.

Until then, let’s all pray for the best.

And Citrix, if you need help, you know where to find me.




Disappearing Citrix Policies – Scripting a fix

Not sure how many of you have experienced Citrix policies disappearing from the AppCenter console on XenApp 6.5. I know it is an issue, as many have posted similar occurrences in the Citrix Support Forums.

Not sure if it is a problem with XenApp/XenDesktop 7.6 on the FMA architecture, as the majority of my large customers (talking about 10,000+ concurrent users) are still on XenApp 6.5, given it was for sure the most stable/robust release ever (OK, after all the Feature Packs/Hotfixes) and the one with the most features under the Platinum license.

Back to the issue: in certain cases the fix was to restore the database. Yep. Simply updating the Group Policy modules to the latest 1.7.X release does not necessarily fix it.

So here you have a quick and small PowerShell script that I wrote and scheduled on the Controllers, so they back up the policies to a folder named after the date they run (that way you can keep as many backup versions as you want, in case you need them). Dirty and simple.

Here you have it (it assumes the XenApp SDK is already loaded, along with the Citrix.GroupPolicy.Commands PSM1 module):

Add-PSSnapIn Citrix.* -ErrorAction SilentlyContinue
Import-Module C:\Scripts\Citrix.GroupPolicy.Commands.psm1
$date = Get-Date -format yyyy-MM-dd
$Destination = "C:\CitrixPolicies\" + "$date"
Export-CtxGroupPolicy $Destination

The date format is case sensitive (if you use mm you get the minutes when the script ran, not the month; months are MM – do not ask me how I know that).
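You can see the gotcha for yourself in two lines (output obviously depends on when you run it):

```powershell
Get-Date -Format yyyy-MM-dd   # capital MM = month, e.g. 2015-06-18
Get-Date -Format yyyy-mm-dd   # lowercase mm = MINUTES, e.g. 2015-42-18
```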

It does the job.
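Since each run creates a new dated folder, you may also want to prune old backups. A minimal sketch, assuming the same C:\CitrixPolicies root and a hypothetical 30-day retention (adjust to taste):

```powershell
# Remove policy backup folders older than 30 days.
Get-ChildItem -Path "C:\CitrixPolicies" |
    Where-Object { $_.PSIsContainer -and $_.CreationTime -lt (Get-Date).AddDays(-30) } |
    Remove-Item -Recurse -Force
```

Schedule it alongside the backup script and you have a self-maintaining policy archive.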




Synergy 2015 Keynote – Citrix Cloud Workspace

So today we got the Synergy 2015 keynote, delivered by the usual suspect, Mark Templeton. Here I am at 3:26 in the morning, after a couple of beers and some scotch a few hours ago with old-time pals like Shawn Bass, Harry Labana, Ron Oglesby and Michel Roth. Great night and tons of interesting discussions.

After all that I decided to give you my thoughts on what I think is the most important announcement from today’s keynote, the Citrix Cloud Workspace (CCW). Before I go any further let me tell you upfront that my view has nothing to do with the cloud at this stage and that I truly believe the way to go right now is to actually forget about the cloud for a moment. And you will understand why.

For that to happen we must take a look at what Microsoft announced a couple of weeks ago: the Azure Pack. If you do not know what that is, in a nutshell, it is on-premises Azure. You bring it in-house and now you use the exact same tools/procedures to create your infrastructure on-premises as you would off-premises, in the cloud. Why is that important?

First, it eliminates the distinction between on-premises and off-premises as you do things the exact same way, no matter where your stuff is running. That means if one day for whatever reason you decide the cloud is the way to go you already know everything on how to get there. You just point whatever automation/procedures you have to a different container, the cloud. Done.

Second, it gives a taste of the cloud to anyone that will not go for it at this stage. It plants the seed of Azure in everyone’s head, showing on-premises how the whole thing works. Without going off-premises.

Finally, everything around the cloud revolves around heavy automation. Many things happen in the background, with IT not even really aware of what is going on to get you where you need to be. What I mean by that is, as an example, when I deploy a new application on Azure RemoteApp (ARA) with my custom image (my 2012 R2 box with the apps I want to be available), I do not see exactly how Microsoft is actually doing it on Azure. Not that I care. As long as I end up with my apps there and assigned to the users that need them, I am good. The end result is indeed a huge simplification in how we build infrastructure and how quickly we do it. A night-and-day difference, not to mention the great reduction in human error, as the whole procedure is automated and you barely see it.

Take what I just said and look at Citrix Cloud Workspace. As of today I can have all the backend stuff up there and point it all to my XenApp/XenDesktop on-premises. But that is not where the value is, IMHO.

If Citrix brings CCW on-premises, I can now deploy my whole XenApp/XenDesktop environment in a heavily automated way. In a couple of clicks I can have a PoC up and running for 500 people. The simplification between frontend components (RDS Session Hosts, VMs with the VDA, etc.) and the backend (StoreFront, DDCs, databases, domains) is huge thanks to the connector architecture in use. And again, this erases the line between on-premises and off-premises. If one day you decide you should burst to the cloud or move everything up there, everything you have done on-premises is EXACTLY the same thing you would do in the cloud. No more distinction.

This is where Citrix has to go with CCW, IMHO. Make the product completely location agnostic, working the exact same way and with the exact same connector whether on-premises or off-premises. This will greatly help with multi-domain authentication, SQL connectivity and so on.

This is the way to go moving forward with any solution actually, Citrix or not.

I hope VMware is reading this.



Parallels acquires 2X. A deeper analysis.

As you probably know by now, Parallels Inc. has acquired 2X Software Ltd, one of the smaller players in the VDI/SBC space.

Like Brian, I have always had a soft spot for the smaller vendors out there like 2X and Thinspace, for the simple reason that I truly believe there is no perfect product for all the use cases out there. What I do believe in is using the right tool for the task, and in many environments we ended up using Thinspace or 2X, as Citrix was indeed overkill and the customer needed just a little more than plain RDS.

If you were not even aware of these smaller vendors, I highly recommend you watch my BriForum 2014 Boston presentation. The main problem is I have no clue where Brian and Gabe put it, so please head over and ask them where it is.

To make your life a little easier I will just mention the usual small vendors we deal with:

2X itself is probably the one I have the softest spot for. The reason is that back in 2005 2X acquired my own Terminal-Services.NET, and all the Windows intellectual property we had became what is known today as the 2X Remote Application Server and the 2X LoadBalancer. No matter what Alex (yes, that guy that organizes the most disorganized and shittiest IT conference for alcoholics – E2EVC) tells you, the products were good, which is why customers like Hilton Hotels and John Deere used them… So I do know these products well.

Back to the topic, there is more to this acquisition and let me explain why.

First of all, pretty much everyone that has a Mac is aware of Parallels. They were the first company to release a decent type-2 hypervisor for OS X, so you could run Windows VMs on your Mac, something that probably 90% of all Mac users out there do on a daily basis. Sure, VMware later joined the party with VMware Fusion, but Parallels was always perceived as the leader in this space. At least based on my own tests (I have both products), Parallels was always better in the graphics department and faster in general. Things may have changed with the latest and greatest releases, though. The point here is not who is the best but the simple fact that Parallels is a well-known brand with regular people, end users and IT geeks alike.

Then Parallels released Parallels Access, a solution to allow you to remotely access your Mac/PC, like many other products on the market (i.e. GoToMyPC, LogMeIn, etc.). The difference is they pretty much nailed the translation of a desktop GUI to a mobile/tablet GUI, making access to desktop apps on any device a much easier thing. If you have no clue what I am talking about, take a look at their YouTube channel.

Finally there is the Parallels most people are not aware of. The company behind Plesk, Parallels Automation and Virtuozzo. If you are an IT geek or someone working for a hosting provider I can bet you have heard of that Parallels.

To make a long story short, Parallels is used at probably 10,000+ hosting providers out there on a daily basis, reaching millions of customers. What they do is automate the whole management layer required at that level (provisioning the subscribed services – web servers, WordPress, etc. – handling customer creation and permissions, provisioning the required software stack, and so on) and also provide a robust and potentially much more efficient virtualization layer with their container approach (that is what Virtuozzo is). They have it for both Linux and Windows.

So they do have the end-user/consumer reach with their SOHO virtualization offerings AND have the cloud (yes, I will use the pretty word that everyone likes these days) providers on board, with 10,000+ of them as active customers. This is something that both Citrix and VMware lack. Sure, they may have made their way into the cloud space with things like Desktone and CWS. That is different from having 10,000+ of these under your belt and, more than that, ones that have been using your solution for several years. It is proven. It is robust. AND customers like it. That by itself is something not all Citrix and VMware customers say about their solutions after having bought and deployed their products. Not saying they are bad products. Just saying there are a lot of very unhappy Citrix/VMware customers out there, for one reason or another. And please do not tell me you cannot please everyone. You and I know this goes way beyond that.

Now Parallels can introduce a product that will allow you to publish individual Windows apps out of RDSH, or do the brokering to VDI-based desktops potentially running on containers or any other hypervisor (as 2X was indeed hypervisor agnostic), all on the cheap. And they can bring such a robust and proven platform to all their hosting providers very quickly. With some engineering they could even extend this to your OWN PC, allowing you to seamlessly connect to the one you have at home or to a much more powerful one (more CPU, more RAM) in the cloud when you need it. Fully synchronized with your home machine.

That is killer.

I am a huge believer that VDI will only become what Brian and others have been predicting (and failing at, year after year) when it becomes a consumer product. Something end users will want and use. Not the niche thing it is today. Yes, no matter what you say, your 10,000-seat VDI deployment is a niche compared to the 130,000,000 physical desktops shipped last year alone. I wrote about that years ago, here.

If there is one company that can pull this off now, under the radar, while Citrix and VMware fight their battles for niche VDI supremacy, it is Parallels.

Time will tell if I was right or wrong. Of course a lot here will depend on what Parallels and Jack Zubarev do with 2X. But knowing they like a good fight and do love to innovate I do not expect anything less than a great outcome from this acquisition.



NetScaler 10.5.X Gateway Wizard

I promise this will be a quick post and hopefully it will save you time troubleshooting your NetScaler 10.5 setup.

I was testing the latest build I could get last week (10.5) and noticed that in the ‘XenApp and XenDesktop’ wizard (which is actually pretty good at getting you off the ground) it asks for your StoreFront site details. The problem is, as it asks for the ‘StoreFront FQDN’, the ‘Store Name’ and the ‘Site Path’, you may be led to think the Site Path is the Store path. Wrong. If you use the Store path, what you get is a blank screen once the user logs in to the gateway. What it actually wants is the ‘Receiver for Web’ path, as seen in your StoreFront console.


The number of people who enter the actual Store path instead of the Receiver for Web one is enough to justify a small change in the GUI. So, Citrix: instead of labeling it ‘Site Path’, simply use ‘Receiver for Web path’, and I can bet a lot more people will get it going on the first try, reducing the number of posts on the Citrix forums.

Sure, for people doing NetScaler work day in and day out this may not be a problem, but for the target audience the wizards cater to, usually people just starting out with NetScalers, this is a needed change IMHO.




Citrix PVS Image Copy

If you built your Citrix environment properly, you should have by now at least a test environment and a production one. And if PVS is part of your deployment, the same applies to it. A development PVS and a production one.

If you do not see why you would need a test environment, separated from your production one, please stop here. This article is not for you. For sure.

That said, one of the tasks I usually have to deal with is moving images from one PVS environment to another. As mentioned previously, this usually means moving something from a test/development environment to production once it is deemed ‘good to go’.

To make my life easier I wrote a simple script that takes a PVS image from a particular environment/store and copies it to another one. It takes care of exporting, copying and importing the vDisk for you. Simple but effective.

Here you have it:


=== BEGIN ===

# Copies a vDisk between PVS environments.
# Cláudio Rodrigues 2014-12-24 V1.0

<#
.SYNOPSIS
CopyvDisk 1.0
IQBridge Inc., 2014. All Rights Reserved.
PowerShell script to move a vDisk from one PVS Farm to another.
.PARAMETER vDiskName
The name of the vDisk you want to copy.
.PARAMETER SourceEnv
The PVS environment where your vDisk is currently used.
.PARAMETER SourceStore
The PVS store where the vDisk you want to copy is located.
.PARAMETER DestEnv
The PVS environment that will use the vDisk.
.PARAMETER DestStore
The PVS store where the vDisk you want to copy will be saved.
.EXAMPLE
.\CopyvDisk.ps1 XenApp65V2 DEV Development PROD Production
Copies the vDisk XenApp65V2 from the DEV environment, out of the Development store,
to the Production store in PROD.
.NOTES
Author: Cláudio Rodrigues
Date:   December 24, 2014
#>
Param(
    [Parameter(Mandatory=$True, HelpMessage="The vDisk to be copied")]$vDiskName,
    [Parameter(Mandatory=$True, HelpMessage="Source PVS Environment")]$SourceEnv,
    [Parameter(Mandatory=$True, HelpMessage="Store where the vDisk resides")]$SourceStore,
    [Parameter(Mandatory=$True, HelpMessage="Destination PVS Environment")]$DestEnv,
    [Parameter(Mandatory=$True, HelpMessage="Store the vDisk will be copied to")]$DestStore
)

# Map each environment name to its PVS server (fill in your own FQDNs).
Switch ($SourceEnv) {
    PROD { $SourceServer = "" }
    DEV  { $SourceServer = "" }
}

Switch ($DestEnv) {
    PROD { $DestServer = "" }
    DEV  { $DestServer = "" }
}

Add-PSSnapin -Name McliPSSnapIn -ErrorAction SilentlyContinue
Mcli-Run SetupConnection -p server=$SourceServer

# Mcli-Get returns text; the fifth line holds 'path: <value>', hence the SubString.
$TempPath = Mcli-Get Store -p storeName=$SourceStore -f path
$SourcePath = $TempPath[4].SubString(6)
Mcli-Run ExportDisk -p diskLocatorName=$vDiskName, siteName=YOUR_SITE_NAME, storeName=$SourceStore
Mcli-Run SetupConnection -p server=$DestServer

$TempPath = Mcli-Get Store -p storeName=$DestStore -f path
$DestPath = $TempPath[4].SubString(6)

c:\windows\system32\robocopy $SourcePath $DestPath "$vDiskName.*" /MIR /xo /XF *.lok /XD WriteCache

Mcli-RunWithReturn ImportDisk -p diskLocatorName=$vDiskName, siteName=YOUR_SITE_NAME, storeName=$DestStore

Mcli-Run UnloadConnection

=== END ===

This is what you will need to change:
– If you have multiple environments (e.g. Development, Test, Pre-Production, Production, etc.) you will need to add all of them by their name/code together with the PVS server that is part of each environment. This is done where you see the ‘Switch’ statement. In this example I have two environments, named PROD and DEV, and each one has its own PVS server.
– The site name. Replace YOUR_SITE_NAME with the correct name for your PVS Site. This script assumes the Site Name is the same across all environments (I see no reason for it to be different – if you have a reason please let us know in the comments).

The script takes five (5) parameters:
– vDisk name: the name you have for the vDisk on the PVS console, like XenApp65-v1.
– The source environment: this has to match one of the names/codes you added to the ‘Switch ($SourceEnv)’ statement. In this example I created one called PROD and another one called DEV, each pointing to its own PVS server (you fill in the server names in the script). You can name these anything you want. I used PROD and DEV as these make sense to me.
– The source store: under PVS you have your stores where the vDisks reside. Here you pass the store where the vDisk you want to copy is.
– The target environment: the environment (as explained under source environment) the vDisk will be copied to.
– The target store: under which store on the target PVS environment you want the vDisk to be copied to.

A couple of comments:

– You must make sure a vDisk with the same name does not exist on the target store; otherwise the import will fail. Yes, I am lazy and I could have added logic to the script to check for that and copy it somewhere else (or delete it) before doing the copy/import. I did not. Yes, because I am lazy and today is Christmas Eve.
– There is not much error checking in the script, as it assumes you know what you are doing; if the parameters are passed properly it works flawlessly. So yes, I do not save your ass if you do not know shit. Keep that in mind.
– Of course the images have to be environment agnostic (meaning the database/farm settings are pushed by GPO, so you can move PVS images anywhere).
– And the images have to be part of the same domain, right?
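For the first point, a minimal pre-flight check could look like the sketch below. This is an assumption on my part, not part of the script above: it assumes the MCLI snap-in is loaded and you are already connected to the destination server, and that a `Mcli-Get DiskLocator` query (mirroring the Mcli-Get/Mcli-Run calls already used in the script) returns text containing the vDisk name when it exists.

```powershell
# Sketch only: abort if a vDisk with the same name already exists in the target store.
# Assumes an open MCLI connection to the destination PVS server.
$existing = Mcli-Get DiskLocator -p diskLocatorName=$vDiskName, siteName=YOUR_SITE_NAME, storeName=$DestStore
if ($existing -match $vDiskName) {
    Write-Warning "vDisk $vDiskName already exists in store $DestStore. Aborting."
    exit 1
}
```

You would drop this in right before the robocopy/import step; anything smarter (renaming or deleting the old vDisk) is left as an exercise.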

Other than that, it is a very simple script that has helped many of my customers over the years!

Time to celebrate Christmas.




XenApp Load Script

This is another post on XenApp 6.5 scripting. And yes, again, the reason for that is a ton of people are still on XenApp 6.5 and not everyone has a budget for all the fancy and pretty monitoring tools out there. Not saying they are not good. They are great. But money talks at the end of the day, and the economy is not that great for many XenApp customers out there, so cheapo is the way to go sometimes.

Based on the work of others (my apologies, but honestly I cannot find where this came from originally – if the author contacts me, all credits for the initial script will be given here), I tweaked this script to get me the load on a particular worker group, in an easy-to-read graph. Take a look at it:

[Image: XenApp Farm Load graph]

I agree it is not the fanciest graph out there, but it allows you to see all servers in the farm and how much load you have on each of them. You can then add it to a web page that refreshes itself every 60 seconds, for example, so you always have the latest and greatest data from the farm. So here you have the script:


=== BEGIN ===

# load the appropriate assemblies
Add-PSSnapin Citrix* -ErrorAction SilentlyContinue
Add-Type -AssemblyName System.Windows.Forms.DataVisualization

# get the server names and load
$XALoad = Get-XAServer -WorkerGroup YourWorkerGroup | Get-XAServerLoad | Select-Object ServerName,Load
$XALoad = $XALoad | Sort-Object ServerName

# create chart object
$Chart = New-Object System.Windows.Forms.DataVisualization.Charting.Chart
$Chart.Width = 1000
$Chart.Height = 600
$Chart.Left = 10
$Chart.Top = 10

# create a chartarea to draw on and add to chart
$ChartArea = New-Object System.Windows.Forms.DataVisualization.Charting.ChartArea
$ChartArea.AxisX.Interval = 1
$ChartArea.AxisX.Title = "PROD"
$ChartArea.AxisY.Interval = 1000
$ChartArea.AxisY.Title = "Load"
[void]$Chart.ChartAreas.Add($ChartArea)
[void]$Chart.Series.Add("Load")

# add a data point for each server
foreach ($server in $XALoad) {
    $dp1 = New-Object -TypeName System.Windows.Forms.DataVisualization.Charting.DataPoint -ArgumentList 0, $server.Load
    $xlabel = $server.ServerName
    # drop the 3-character prefix from the server name for a shorter axis label
    $dp1.AxisLabel = $xlabel.Substring(3)
    [void]$Chart.Series["Load"].Points.Add($dp1)
}

# set the title to the date and time
$title = New-Object System.Windows.Forms.DataVisualization.Charting.Title
$Chart.Titles.Add($title)
$Chart.Titles[0].Text = (Get-Date).ToString()

# save the chart to a file (change the path; the folder must exist)
$Chart.SaveImage("C:\YourFolder\XenAppFarmLoad.png", "png")

=== END ===

Make sure you change the Worker Group name and the location where you want the graph saved (it is in PNG format and the folder must exist). In my case I run the script on the XenApp controller.

There is still some stuff I want to add like:

– Different colours based on the load (e.g. red if over 8000, green if under 5000, etc.).
– As each server can have a load index of up to 9999 before becoming unavailable (10000), if you know the number of servers you have, you can indeed show a load percentage for the whole farm. Sure, it is not a perfect metric, but it is a good indicator. For example, if you have 10 servers and the load on all of them adds up to 84,350, we can be pretty sure you are at around 84.4% capacity on your farm. Again, this is not an exact number, but it gives a pretty good idea of where things stand when looking at the graph.
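The farm-capacity math above is simple enough to sketch in a few lines of PowerShell, reusing the same Get-XAServer/Get-XAServerLoad calls as the graph script (YourWorkerGroup is a placeholder; the 10,000 per-server ceiling is the full-load value mentioned above):

```powershell
# Sketch: rough whole-farm capacity percentage from the sum of load indexes.
# Assumes the Citrix snap-in is already loaded (Add-PSSnapin Citrix*).
$loads    = Get-XAServer -WorkerGroup YourWorkerGroup | Get-XAServerLoad
$total    = ($loads | Measure-Object -Property Load -Sum).Sum
$capacity = 10000 * $loads.Count            # 10000 = a fully loaded server
$pct      = [math]::Round(100 * $total / $capacity, 1)
Write-Host ("Farm at roughly {0}% capacity" -f $pct)   # e.g. 84350 / 100000 -> ~84.4%
```

This would slot nicely into the chart title, or into the dashboard page idea below.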

The idea here is to create a bunch of these graphing scripts and add all to a single ‘dashboard’ page that will show you the most relevant quick information you need when monitoring a XenApp environment.

Contributions to this in terms of scripts and new ideas are more than welcome.



