Category Archives: Azure

Windows Virtual Desktop goes GA

Windows Virtual Desktop is now Generally Available

Awesome news today: Windows Virtual Desktop is now Generally Available!

Introduction

Last year at Ignite 2018, Microsoft announced Windows Virtual Desktop (shortened to WVD) as the new name for RDmi, which was still in Private Preview at that time.
You can read my announcement blog here: https://www.cloud-architect.be/2018/10/01/rdmi-has-evolved-into-windows-virtual-desktop/

The next milestone was the Public Preview of WVD, announced on March 21 2019 by Julia White & Brad Anderson (here: https://www.microsoft.com/en-us/microsoft-365/blog/2019/03/21/windows-virtual-desktop-public-preview/).
This step made it possible for everybody to start testing WVD and give feedback & suggestions to the Product Team.

During the Public Preview, the WVD service (containing the Broker, Web Access, Gateway & Diagnostics services) was running only in East US, regardless of the region of your workload.
This could result in higher latency (Round Trip Time or RTT) even if your workload was running in your closest region, as all traffic was flowing through the gateway in East US.

But as we got closer to the GA launch, we noticed a drop in RTT during our sessions in our Validation Hostpools.
This could only mean 1 thing: the WVD service was running in West Europe!

Generally Available

As of today, Windows Virtual Desktop is Generally Available (GA) for everybody.
This means you can start running your Production Workload with the full support of Microsoft on the WVD Service.
(Important: by default, Microsoft only supports the WVD Service itself, not your own workload)

And as the drop in RTT already suggested, the WVD services are now running in all commercial Azure Regions.

What’s coming?

At the moment, the MetaData of all WVD activities is still transmitted to the US. This will change in the coming months.

Microsoft will also add more clusters and Gateways in all regions over the coming months.

But that’s not all. Microsoft also announced features that are coming later. Check out the video (link below) from Microsoft Mechanics on YouTube to learn more.

Current feature set

The Feature set of WVD today is very clear:

  • It abstracts the Broker, Web Access, Gateway & Diagnostics services away from your Azure subscription, making it Multi Tenant in comparison to RDS
  • It enables Reverse Connect with only outbound connections from your deployment
  • It enables you to take advantage of Azure AD, with features like Conditional Access, Multi Factor Authentication (MFA), etc
  • More PaaS! Compared to a RDS 2019 deployment, Microsoft goes even further with PaaS enablement. The entire WVD Service is an Azure PaaS service for you to use!

Licensing

Microsoft added some additional licenses, entitling you to use the WVD service for free.

Required license per OS:

  • Windows 10 Enterprise multi-session or Windows 10 Enterprise: Microsoft 365 E3/E5, Microsoft 365 A3/A5/Student use benefits, Microsoft 365 F1, Microsoft 365 Business, Windows 10 Enterprise E3/E5, Windows 10 Education A3/A5 or Windows 10 VDA per user
  • Windows 7 Enterprise: Microsoft 365 E3/E5, Microsoft 365 A3/A5/Student use benefits, Microsoft 365 F1, Microsoft 365 Business, Windows 10 Enterprise E3/E5, Windows 10 Education A3/A5 or Windows 10 VDA per user
  • Windows Server 2012 R2, 2016, 2019: RDS Client Access License (CAL) with Software Assurance

Check the FAQ on this page for the latest updates:
https://azure.microsoft.com/en-gb/pricing/details/virtual-desktop/

FSLogix

Microsoft acquired FSLogix in November last year to enhance the Office 365 virtualization experience, especially when running WVD!
Now you have great technology available for your Profile Management.
With FSLogix enabling faster load times for user profiles in Outlook and OneDrive, Office 365 ProPlus will become even more performant in multi-user virtual environments.

And the best part: it’s also free for all WVD entitled users!

With the launch of WVD, Microsoft also released an update of FSLogix, version 1909. Check the details in the Links section at the end.

These are the Licensing requirements for FSLogix:

  • Microsoft 365 E3/E5
  • Microsoft 365 A3/A5/Student use benefits
  • Microsoft 365 F1
  • Microsoft 365 Business
  • Windows 10 Enterprise E3/E5
  • Windows 10 Education A3/A5
  • Windows 10 VDA per user
  • Remote Desktop Services (RDS) Client Access License (CAL)
  • Remote Desktop Services (RDS) Subscriber Access License (SAL)

The license that stands out is the SAL license. This means you can use FSLogix for almost any RDS & WVD deployment, whether on Azure, On-Prem or in other Clouds.

Is everything included in WVD?

This is an important topic! There are 2 things you need to know:
(ok, there are more, but these are the most important ones)

  • In the licenses above, you only get the usage of the WVD service.
    It does NOT include your Azure usage from your SessionHosts, AD deployment (Azure VM with Windows AD Role and AD Connect or Azure Active Directory Domain Services), File server (or Azure File Share), etc!
  • Simply deploying WVD will not make you ready. You need to do a lot more than just deploy and lean back.
    You will need to:
    • Validate your AD & Fileserver setup
    • Maintain your Azure Tenant & Subscription
    • Setup a backup strategy
    • Think about your updates (Software, Windows, Office, etc)
    • Manage your costs
    • And much more

Microsoft relies on Partners for this.
The company I work at, ASPEX, is one of those partners.
We developed a solution for provisioning WVD in an easy way, using the Best Practices from Microsoft & Azure, and our years of experience as a Hosting Partner and working on WVD from the beginning!

Check it out here:
https://aspex.be/en/breaking-microsoft-officially-launches-windows-virtual-desktop

More Readings

In the coming days, I’ll be posting some more details on the GA, the things to come and much more.
Below, you can find some links to more reading about the General Availability of Windows Virtual Desktop, as well as the TechCommunity link for WVD!
I try to follow this Community and answer/help where possible.

Microsoft announcing blog post:
https://www.microsoft.com/en-us/microsoft-365/blog/2019/09/30/windows-virtual-desktop-generally-available-worldwide/

Microsoft Mechanics video:
https://youtu.be/QLDu6QVohEI

Microsoft TechCommunity:
https://techcommunity.microsoft.com/t5/Windows-Virtual-Desktop/bd-p/WindowsVirtualDesktop

Microsoft Docs on Windows Virtual Desktop:
https://docs.microsoft.com/en-us/azure/virtual-desktop/

Microsoft Docs on FSLogix:
https://docs.microsoft.com/en-us/fslogix/
FSLogix Update:
https://social.msdn.microsoft.com/Forums/en-US/815c5faf-109c-4bc0-8552-7d6137c91b89/release-notes-for-fslogix-release-1909-build-29720527375?forum=FSLogix

ExportImportRdsDeployment module has been updated and it has Backup functionalities now

Introduction

After releasing my Powershell Module ExportImportRdsDeployment here, which helps you migrate your RDS deployment to a newer version, I got some great feedback from users.
So it was time to implement some feature requests and improve some things from the first version.
The ExportImportRdsDeployment module has been updated and it has Backup functionalities now…

Updating the Module

The module is again available in the Powershell Gallery, so you can easily install and use it on any Windows Server.
Make sure you download and use version 2.0.

To check your current version:

Check current version
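A minimal sketch of that check, assuming you installed the module from the Powershell Gallery:

Get-InstalledModule -Name ExportImportRdsDeployment | Select-Object Name, Version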

If you already have version 1.0 or 1.1, I would suggest you uninstall these versions and install the latest.
To do this, you simply execute these commands:

Install newest version
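A sketch of those commands:

# Remove the old version(s), then install 2.0 from the Powershell Gallery
Uninstall-Module -Name ExportImportRdsDeployment -AllVersions
Install-Module -Name ExportImportRdsDeployment -RequiredVersion 2.0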

Now you can import the module again and start using the updated cmdlets:

Import the module
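For example:

Import-Module -Name ExportImportRdsDeployment
Get-Command -Module ExportImportRdsDeployment   # lists the 4 cmdlets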

The changes

Get-Help improvement

I improved the instructions and the help you get when using the Get-Help cmdlet.
You get more information and updated examples for all 4 cmdlets in the module.

Example:

Get Help improvements
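For instance, for the import cmdlet used later in this post:

Get-Help Import-RDDeploymentToConnectionBroker -Detailed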

Optional removal of collections & servers from deployment

In the previous version, when you executed the Export-cmdlets, the collections & the servers were removed from the deployment.

In the new version, you can export the collections or export the servers using the Export-cmdlets, without the removal functionality.
This way,

  • You can test the export and validate the migration before performing the actual migration.
  • You can use the module to create daily backups of your deployment/collections and quickly restore in case of issues or wrong manipulations on your deployment.

This is how you do it:

The Export-cmdlets are in Export-Only mode by default.
So if you run the cmdlet as you did before, no removal will be performed at the end of the export.

You can also declare this explicitly, to make sure no removal is executed, using the RemoveCollections & RemoveServers switch-parameters:
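A sketch of what that looks like. Note that the Export cmdlet names below are assumptions modeled on the Import-RDDeploymentToConnectionBroker cmdlet; run Get-Command -Module ExportImportRdsDeployment for the exact names:

# Export only, explicitly declaring that nothing may be removed
Export-RDSessionCollectionsFromConnectionBroker -ConnectionBroker "rdcb.domain.local" -RemoveCollections:$false
Export-RDDeploymentFromConnectionBroker -ConnectionBroker "rdcb.domain.local" -RemoveServers:$false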

If you want to perform the migration, you must specify the -RemoveCollections or -RemoveServers parameter and confirm the export with removal.
With this safety feature, you cannot mistakenly remove anything.

Confirm export
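Hedged the same way (the cmdlet names are assumptions), the migration variant looks like this:

# Export followed by removal; you are asked to confirm before anything is removed
Export-RDSessionCollectionsFromConnectionBroker -ConnectionBroker "rdcb.domain.local" -RemoveCollections
Export-RDDeploymentFromConnectionBroker -ConnectionBroker "rdcb.domain.local" -RemoveServers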

Reboot Pending check when importing

When running the Import-RDDeploymentToConnectionBroker cmdlet, the module will check if there is a Reboot pending on the target machines (in the export XML file).
If there are reboots pending, you can let the cmdlet try to reboot the VMs, or do it manually.
You cannot continue with the import until the servers are rebooted.

Reboot pending overview

File access test for XML file

Before performing an export, the cmdlet will test if you have permissions and access to the XMLFile location you have specified (or to the default location).
If you do not have access, the cmdlet will stop the export.

Export before removing

When performing an export with the -RemoveCollections or -RemoveServers parameter, the cmdlet will first perform the export to the XMLfile before removing the collections/deployment.
Writing to the XMLfile must succeed before the removal continues.

[Microsoft.RemoteDesktopServices.Management.RDSessionCollectionType] on a Windows 2012 machine

On a Windows 2012 machine, the cmdlet will correctly check for the [Microsoft.RemoteDesktopServices.Management.RDSessionCollectionType] when exporting/importing.

Conclusion

With this update to the module, you should be able to migrate your entire deployment more easily than before, again on any platform (like Azure) and faster than performing an in-place upgrade.

If you have any questions or feature requests, do not hesitate to contact me using the comments, or via Twitter/LinkedIn.

Invoke-AzureRmVMRunCommand

Add Sessionhosts to your existing RDS deployment using ARM templates and Invoke-AzureRmVMRunCommand

Introduction

If you are using ARM-templates & Desired State Configuration (DSC) for your RDS deployments, you will be able to cover almost all your needs.
But you can also use other options or technologies to cover those needs.

A great example is the combination of an ARM template with the CustomScriptExtension (for installing the RDS Sessionhost role on the VMs), the JsonADDomainExtension (to join the VM to a domain) and the new Invoke-AzureRmVMRunCommand cmdlet.

In this blog, I will show you how you can use ARM templates & the new Invoke-AzureRmVMRunCommand cmdlet to add new sessionhosts to your already existing RDS deployment, and more!

The Invoke-AzureRmVMRunCommand cmdlet

The new cmdlet allows you, as an admin of the Azure subscription, to “Invoke a run command on the VM”.
Run Command uses the VM agent to run PowerShell scripts within an Azure Windows VM. This means that the script will be executed as the Local System account (important to remember).
There are some restrictions to the cmdlet, which you can find here.

Most important thing to remember: the Powershell script you provide to the cmdlet is copied to the VM, and then executed under the Local System account.

The goal

The goal of this blog is to do these tasks:

  1. Create a new VM using an ARM Template
  2. Install the RDS Sessionhost role on the new VM using the CustomScriptExtension
  3. Join the new VM to a domain using the JsonADDomainExtension
  4. Add the new VM to an existing RDS deployment, create a new SessionCollection with the new VM, and set some basic SessionCollection settings using the Invoke-AzureRmVMRunCommand.

Because of all the great blogposts and examples on ARM Templates & RDS deployments (a great example: part 1 & part 2 up to part 7 from Freek Berson), I will not go deep dive into this.
I will touch the most important sections and parts of the template to complete tasks 1-3.
Task 4 will be the main topic of this blog.

Prerequisites

Before you can start with any of this, you will need your basic setup.
You will need to have a VNET, subnet(s), an Active Directory setup and a full RDS deployment with RDS Connection Broker, Gateway & WebAccess in place.

Task 1-3: Create VM, install RDS Role & join to the domain.

Create VM using ARM Template

To create a VM on Azure, you have a lot of possibilities: using the Azure Portal, using Powershell, using Azure CLI, etc etc.
Another great option is using an ARM Template.
As I said before, there are so many great blogposts and examples, so I’m not going into detail about this.
Below, you can find a screenshot from my ARM Template which will be used for the next parts as well.
ARM Template Create VM

Install RDS Role using CustomScriptExtension

In the ARM template, you can already start customizing your VM to your needs.
This can be achieved using the CustomScriptExtension.

You first create a Powershell script.
For this blog, I created a script with the following content:
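A minimal sketch of that content (the original is shown as a screenshot):

# Install the RD Session Host role on the new VM
Install-WindowsFeature -Name RDS-RD-Server -IncludeManagementTools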

This will install the RDS-RD-Server role required to add to our RDS Deployment.

Next, you will need to store this script on an Azure Storage Account so it can be used in the ARM Template.
PS script On Storage Account

To link a CustomScriptExtension to a VM, you define it in the Resources section of the VM.
You add a resource from the type “Microsoft.Compute/virtualMachines/extensions”. (line 192)
You give the resource the name from your VM and add a name for the action. In this case I named it Fix-RDS (line 193).
The script information is located in the “properties” section.
First, you set the extension type to “CustomScriptExtension” (line 203).
Next, you specify the script Uri (line 208): this is the Uri from the Azure Storage Account, the blob container and filename (see previous screenshot).
Next, you will enter the command that needs to be executed (line 212).
Last part is the security information to download the Powershell script. In the example, I’m using the StorageAccountName & Key (line 213 & 214).

ARM-Template

Join to the domain using JsonADDomainExtension

Once the VM is created in Azure, you want to be able to access the VM directly. This can be made easier if the VM is directly joined in your existing Active Directory.
This can be done using the JsonADDomainExtension.

To link a JsonADDomainExtension to a VM, you again use the “Microsoft.Compute/virtualMachines/extensions” resource type, but this time it is not nested under your VM section; it is a separate resource, as you can see in the screenshot below.
You give the resource the name from your VM and add a name for the action. In this case I named it “joindomain” (line 229).
First, you set the extension type to “JsonADDomainExtension” (line 233).
Next, you enter the Active Directory name & the user (with FQDN) that will perform the join. This user needs to have permissions to perform the join in your AD.
The last part is the password of that user (line 244), which is entered in the protectedSettings part for security purposes.

ARM Template JsonADDomainExtension

Now you have an ARM Template that completes Tasks 1 to 3.

Task 4: Add the new VM to your existing RDS Deployment.

All previous steps were done using an ARM Template.
This template deployment can be started from the Azure Portal, Visual Studio (Code) or using Azure CLI.
But it can also be started from a Powershell script using New-AzureRmResourceGroupDeployment.
Powershell New-AzureRmResourceGroupDeployment
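A sketch of that call, assuming an AzureRM version that supports -AsJob (the resource group and file names are placeholders):

$deployment = New-AzureRmResourceGroupDeployment -ResourceGroupName "RG-RDS" `
    -TemplateFile ".\sessionhost.json" `
    -TemplateParameterFile ".\sessionhost.parameters.json" -AsJob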

And once you have started your deployment, the next step is easy.
You wait until the deployment is finished using the Job ID.
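For example:

Wait-Job -Id $deployment.Id   # blocks until the ARM deployment has completed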

As soon as the deployment is finished, you can further customize your VM, and finish the RDS setup.

The command

Invoke-AzureRmVMRunCommand has a few required parameters:

  • ResourceGroupName: the resource group where the VM is located in.
  • CommandId: The type of command you want to execute. In my example, I’m going to use “RunPowerShellScript”
  • VM / VMName / ResourceID: the VM where the command needs to be executed on.

When using “RunPowerShellScript” as CommandId, you will need to specify which script needs to be executed.
Therefore, you use the ScriptPath parameter. Here, you specify the path where the script is located.
Important: the script needs to be on the computer/server executing the Invoke-AzureRmVMRunCommand, so you need to specify the local path on the local computer/server.

The last part is the optional parameters that you need to provide to your own script.
You can create a hashtable containing all your parameters for your script and provide the hashtable to the cmdlet.

In my example, I execute the command on the Connection Broker. This is the most ideal VM I think to do this, because you are sure the necessary cmdlets are installed.

There are a few important things you need to remember!

  • If you want to use blanks in your parameters, you must use double quotes and escape them, as you can see in my example below.
    So for example: if you want to pass the parameter “sessionCollection” with the value “Micha Demo Collection”, you need to add an item to the hashtable like this (see the full sketch after this list):
    "sessionCollection" = "`"Micha Demo Collection`""
  • Your script can contain parameters, but the parameters cannot be set to mandatory.
    So your parameter cannot have this property: [Parameter(Mandatory=$true)].
    Otherwise, you will end up with a vague error like the one you can see below.
    I already posted this into the Powershell Advisor group to see if this can be fixed.
    Invoke-AzureRmVMRunCommand : Long running operation failed with status ‘Failed’. Additional Info:’VM has reported a failure when processing extension ‘RunCommandWindows’. Error message: “Finished executing command”.’
    ErrorCode: VMExtensionProvisioningError
    ErrorMessage: VM has reported a failure when processing extension ‘RunCommandWindows’. Error message: “Finished executing command”.
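Putting those rules together, a hedged sketch of the full call (all names and paths are placeholders):

# Hashtable with the parameters for the outer script;
# note the escaped double quotes around values containing blanks
$scriptParams = @{
    "connectionBroker"  = "rdcb.domain.local"
    "newSessionHost"    = "rdsh01.domain.local"
    "sessionCollection" = "`"Micha Demo Collection`""
}
Invoke-AzureRmVMRunCommand -ResourceGroupName "RG-RDS" -VMName "RDCB-VM" `
    -CommandId "RunPowerShellScript" -ScriptPath "C:\Scripts\Add-SessionHost.ps1" `
    -Parameter $scriptParams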

The script

Now comes the hardest part, because as I said in the beginning, the command started by Invoke-AzureRmVMRunCommand is executed on the VM under the Local System account!
As you may know, when you execute the Add-RDServer cmdlet, the cmdlet will check if the role you specify is installed on the new VM. If it’s not installed, the cmdlet will try to install the role.
And the Local System account of your Connection Broker does not have permissions on the new VM.

So how do we fix this?

As stated in the beginning, the script is copied to the target VM.
And the easiest way to perform and control a RunAs is through a Scheduled Task.

So here is the solution:

  1. Create an inner script inside the outer script
  2. Output the inner script to the target VM disk
  3. Create a Scheduled task to run the inner script as a user with enough permissions
  4. Start the task and wait.

The outer script is the script you have on your local computer, containing parameters, the inner script, the scheduled task creation part and the wait job.

It sounds complicated, but it really isn’t. Let’s go over it step by step.

The Parameters

The outer script starts with the parameters you want to provide. In my example, it is just these simple parameters:
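A sketch of such a param block (the names are illustrative, and deliberately not mandatory):

param (
    [string]$connectionBroker,
    [string]$newSessionHost,
    [string]$sessionCollection
)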

The Inner script

We want the outer script to write this inner script to a ps1-file without executing it. The easiest way to do this is by using a Here-String in Powershell.
You create a variable and start with @". Everything that follows is interpreted by Powershell as text in one variable, no matter how many lines or tabs you enter, until you end it with "@.

So for this script, I add the basic commands to add a server to an existing RDS deployment.
I also added a screenshot to show you how it looks in Powershell ISE.

Powershell Inner Script
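A hedged sketch of that Here-String (because it is a double-quoted Here-String, the outer script’s parameter values are expanded into the inner script when the variable is built):

$innerScript = @"
Import-Module RemoteDesktop
Add-RDServer -Server $newSessionHost -Role RDS-RD-SERVER -ConnectionBroker $connectionBroker
New-RDSessionCollection -CollectionName $sessionCollection -SessionHost $newSessionHost -ConnectionBroker $connectionBroker
"@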

Output the script

Easiest part: Just pipe your script variable to the Out-File cmdlet. This will write the inner script to the local disk of the server.
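For example:

$innerScript | Out-File -FilePath "C:\temp\InnerScript.ps1" -Force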

The scheduled task

A bit more difficult, but once you get it, it’s as easy as 1-2-3.
First you create “an action”. This is what the Scheduled Task is going to execute. In my example: the inner script in Powershell that we just created on the VM (line 2).
Next, you can specify some settings. In my example, I set the Compatibility level to the highest; on a Server 2016, this will be Server 2016 level (line 3).
The last part is registering the Scheduled Task, specifying the user credentials, the action and the settings created just before that (line 4).
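A sketch of those three steps (the user name and password are placeholders):

# 1. The action: run the inner script with Powershell
$taskName = "FinishRdsSetup"
$action = New-ScheduledTaskAction -Execute "powershell.exe" -Argument "-ExecutionPolicy Bypass -File C:\temp\InnerScript.ps1"
# 2. The settings: highest available compatibility level
$settings = New-ScheduledTaskSettingsSet -Compatibility Win8
# 3. Register the task under a user with enough permissions
Register-ScheduledTask -TaskName $taskName -Action $action -Settings $settings -User "DOMAIN\rdsadmin" -Password "P@ssw0rd"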

Start the Task and Wait

This is easy, especially when you work with variables for your TaskName.
Now you simply execute the Task using Start-ScheduledTask.
And then you query the status of the Task until it is Ready.
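A sketch:

Start-ScheduledTask -TaskName $taskName
while ((Get-ScheduledTask -TaskName $taskName).State -ne "Ready") {
    Start-Sleep -Seconds 5   # poll until the task has finished
}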

That’s it!

Conclusion

Invoke-AzureRmVMRunCommand is a great way to finish an ARM Template deployment.
Because you can start both from Powershell, you know which servers are created, and you can easily finish installations, RDS deployments, etc.

This is not the only way to do this, but it’s a great way: easily manageable and highly automatable.

Migrate/upgrade RDS Deployment to new Connection Broker, even on Azure, using my new Powershell Module

Introduction

[UPDATE 2019-03-10] I did an update on the module introducing some new features. You should read the update first before continuing here: ExportImportRdsDeployment module has been updated and it has Backup functionalities now

As documented in this article, the first step to upgrade your Windows Server 2012R2 Remote Desktop Services (RDS) deployment to Windows Server 2016 is upgrading your Connection Broker.
This can be done using an in-place upgrade, but that is not always ideal:

  • The in-place upgrade takes quite some time depending on various factors, like hardware, installed software, etc.
  • The upgrade can fail, causing you to roll back and increasing downtime
  • There are numerous articles that discourage an in-place upgrade due to performance issues, legacy application problems, …
  • On Azure, an in-place upgrade is not supported! So you cannot upgrade your Windows there.

ASPEX has a customer that was using a Windows 2012R2 RDS deployment, and we told them about the HTML5 webclient that was in Public Preview at that time, which would be a great asset to the end users.
The big problem: the HTML5 webclient requires a 2016 RDS Deployment.
We suggested upgrading the RDS deployment from 2012R2 to 2016, but the customer requested minimal downtime and a phased migration.
This ruled out an in-place upgrade, so I started coding…

The solution

I created a Powershell module that contains 4 functions. Each function is a step in the process to migrate your RDS deployment from one Connection Broker to another.
The module will allow you to export your existing Session Collections and RD Servers with all configuration settings, and remove them from the old Connection Broker.
Then you can import everything back into the new Deployment, connecting the RD Servers to the new Connection Broker and recreate the Collections.

There are several scenarios where you can use this module:

  1. Migrate from one Connection Broker to another
  2. Upgrade from a 2012R2 to 2016/2019 RDS deployment
  3. etc

The module has been made available in the Powershell Gallery, so you can easily install and use it on any Windows Server.

The module will only migrate these RD roles: RD Gateway, RD Web Access & RD Session Host
The other roles (RD Connection Broker & RD Licensing Server) should already be installed on the new Connection Broker

Note:

Only Session-based Desktop Deployments (Session Collections and Personal Session Desktops) are currently supported!
If you have a Virtual machine-based desktop deployment (using a Virtualization Host), you should not use this Module!

You should always make sure you have a backup of your Connection Broker config before you start.
Testing the module on your Dev environment is always a best-practice.

 

The overview

Deployment on the old Connection Broker

This is what my test deployment looks like:
Domain: MICHA2019-POC.HOSTING

1x server as old Connection Broker & Licensing: MICHA-P-RDB-001
1x server as Gateway & Web Access: 2019-TEST-RDG1
6x servers as Session Hosts: MICHA-P-RDH-001 ==> 006
Old CB Deployment Overview ServerManager

I have 3 Session Collections (1 with Remote Apps and 2 with Remote Desktops) and 1 Personal Session Desktop Collection.
These are all divided over the available Session Hosts, as you can see in the screenshots below.
Old CB Collections Overview ServerManager Old CB RemoteApp Collection Overview ServerManager Old CB Collections Overview Powershell

New Connection Broker

1x server as new Connection Broker & Licensing: 2019-TEST-CB
I only installed the 2 roles on the 2019-TEST-CB machine.
It is recommended to install the Licensing role on the new Connection Broker server.
New CB Installed Roles  New CB Deployment Overview ServerManagerNew CB Deployment Overview Powershell

Objective

For this blog, I’m going to export the Session Collections, Session Hosts, Gateway and Web Access servers from MICHA-P-RDB-001 to 2019-TEST-CB.

The migration

Installation

To install the Module, you simply run these 2 commands:

 Old CB Install & Import Module New CB Install & Import Module
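Those two commands, as a sketch:

Install-Module -Name ExportImportRdsDeployment
Import-Module -Name ExportImportRdsDeployment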

The module is uploaded to the Powershell Gallery, therefore you can easily install it, and it is signed using a Code Signing certificate.
If you get a message regarding the Nuget provider, you should confirm its installation.

Export

Install Module

Install the module on the old Connection Broker:

Collections

First step is to export the Session Collections. The module will export the collections to a XMLfile, so it can be easily migrated to the new Connection Broker.
You export the collections using this command:
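A sketch of that command (the Export cmdlet name is an assumption; run Get-Command -Module ExportImportRdsDeployment for the exact name):

Export-RDSessionCollectionsFromConnectionBroker -ConnectionBroker "MICHA-P-RDB-001.MICHA2019-POC.HOSTING" -Verbose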

You must specify the ConnectionBroker and optionally the XmlFile. And if you add -Verbose, you will get detailed progress output.
Old CB Export Collections Start

When finished, you will get the location of the XMLFile (default: in C:\temp, unless specified otherwise).
Old CB Export Collections Finished

Deployment

Next step is to export the Deployment (including the servers).
You export the deployment using this command:
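A sketch, with the same caveat about the cmdlet name:

Export-RDDeploymentFromConnectionBroker -ConnectionBroker "MICHA-P-RDB-001.MICHA2019-POC.HOSTING" -Verbose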

You must specify the ConnectionBroker and optionally the XmlFile. And if you add -Verbose, you will get detailed progress output.
Note: You have to confirm that you exported the Session Collections before running this step.
Old CB Export Deployment Start

When finished, you will get the location of the XMLFile (default: in C:\temp, unless specified otherwise).
The error message (as displayed below) is default behavior due to the removal of the Gateway Role from the deployment.
Old CB Export Deployment Finished

Export Files

When both exports are completed, you will find the XMLfiles in the export location.
These files should be used when importing to the new Connection Broker.
Old CB Export XmlFiles

Import

Install Module

Install the module on the new Connection Broker

Import Files

Copy the XMLfiles to the new Connection Broker, unless you exported them to a shared network location.
You also need the certificate(s) for all 4 roles in the deployment: RDGateway, RDWebAccess, RDPublishing & RDRedirector
In this test migration, I used a wildcard certificate for all 4 roles.
I placed all files in the C:\temp folder
New CB Xml & Pfx Files

Deployment

First step is to import the Deployment (including the servers).
You import the deployment using these commands:
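A sketch (the XML file name is a placeholder, and I’m leaving out the four certificate location and password parameters, whose exact names you can find via Get-Help Import-RDDeploymentToConnectionBroker):

Import-RDDeploymentToConnectionBroker -ConnectionBroker "2019-TEST-CB.MICHA2019-POC.HOSTING" `
    -XmlFile "C:\temp\RDDeployment.xml" -Verbose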

You must specify the ConnectionBroker and the XmlFile, plus the 4 certificate locations and passwords. And if you add -Verbose, you will get detailed progress output.
New CB Import Deployment Start

When finished, you will get a summary of the imported deployment.
New CB Import Deployment Finished

Collections

Next step is to import the Session Collections.
You import the collections using this command:
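A sketch (the cmdlet name is an assumption, the XML file name a placeholder):

Import-RDSessionCollectionsToConnectionBroker -ConnectionBroker "2019-TEST-CB.MICHA2019-POC.HOSTING" `
    -XmlFile "C:\temp\RDSessionCollections.xml" -Verbose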

You must specify the ConnectionBroker and the XmlFile. And if you add -Verbose, you will get detailed progress output.
New CB Import Collections Start

When finished, you will see all Collections that are imported, including the published RemoteApps.
New CB Import Collections Finished

Overview after the migration

When you review the Server Manager, you will see that the entire Deployment is migrated.
New CB Deployment Overview ServerManager After Import

Also the Collections are completely migrated.
New CB Collections Overview ServerManager

And the Personal Session Desktop Collection is also available again.
New CB Collections Overview Powershell

Conclusion

Using this module, you are able to migrate your entire deployment in a controlled manner, on any platform (like Azure) and faster than performing an in-place upgrade.

You also completed step 1 and 2 from the guide in this article.

You can simply follow the guide further, or you can install your new Session Hosts, Gateway(s) and Web Access servers, and replace these in the new Deployment.

RDmi @ASPEX

RDmi update: RDmi compared with RDS


source: https://blog.aspex.be/en/rdmi-update-rdmi-compared-with-rds

Introduction

As we have posted in our previous blog (Next generation architecture & HTML5 for RDS Hosting), ASPEX and Microsoft are still working closely together to test and improve the Remote Desktop Modern Infrastructure or RDmi.

We also talked about RDmi at two conferences:

  • the first time at our Technical Partner event, on December 12th 2017
  • the second time at ITPROud, the latest IT conference for IT Pros, on March 14th 2018

Microsoft allowed us to give these presentations under NDA to all attendees, including a live demo of RDmi!

In this post, we will show you what you missed at the conferences (without the information covered by the NDA), the key differences between the classic RDS and RDmi, and what an RDmi deployment will look like in the near future.


Check your Azure CSP Customer list for scheduled maintenance

Introduction

With the publicly disclosed vulnerabilities referred to as “speculative execution side-channel attacks“, also known as Meltdown and Spectre, Microsoft has scheduled a quick maintenance window for all VMs running on Azure which are affected by this.
As a CSP, you have a large list of customers, each with one or more subscription(s), each subscription with one or more VMs.
Checking manually which VMs are scheduled for maintenance would take hours. Therefore, ASPEX is using a PowerShell script to check the maintenance status, so we can quickly inform our customers with an exact list of VMs scheduled to be updated and the timeframe of the maintenance.

Powershell script

Prerequisites

To be able to run this script, you need to have 2 PowerShell Modules installed: AzureRM & AzureAD. These can be installed using the following cmdlets:
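A sketch:

Install-Module -Name AzureRM
Install-Module -Name AzureAD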

If you don’t have the Install-Module cmdlet, you need to update your Powershell version, or install the PowerShellGet Module: Get PowerShellGet Module

The Script

The script will ask you to log in twice. You have to log in with your CSP admin account. This is required to read out your CSP customer list, and to read out each Client Tenant and its subscriptions.
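The per-VM check at the heart of the script looks roughly like this (a sketch for a single subscription; the CSP customer and subscription iteration is omitted):

# MaintenanceRedeployStatus is only populated for VMs affected by the planned maintenance
foreach ($vm in Get-AzureRmVM) {
    $status = Get-AzureRmVM -ResourceGroupName $vm.ResourceGroupName -Name $vm.Name -Status
    if ($status.MaintenanceRedeployStatus) {
        [pscustomobject]@{
            VM          = $vm.Name
            WindowStart = $status.MaintenanceRedeployStatus.MaintenanceWindowStartTime
            WindowEnd   = $status.MaintenanceRedeployStatus.MaintenanceWindowEndTime
        }
    }
}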

The result of the script will look like this:
CSP Check Maintenance result

Next generation architecture & HTML5 for RDS hosting

Next generation architecture & HTML5 for RDS hosting

Introduction

I already gave a small quote on LinkedIn about the cooperation between ASPEX and Microsoft on optimizing the hosting of Windows desktops and applications on Azure.
I’m glad to announce that the news was made public at the Inspire conference in Washington DC: Next generation architecture for RDS hosting

Next generation architecture for RDS hosting

We (the ASPEX team and myself) will continue to cooperate closely with Microsoft, giving feedback and ideas on what the next generation of RDS hosting should look like according to us, what our partners/customers expect now & what they want for the future.
We will endeavor to be able to test & implement this next generation architecture from the very beginning, and provide feedback to Microsoft to make this the “…new architecture that enables you to create the next generation of services for your customers, while taking your business to the next level of efficiency and growth…”

Key-elements

Here are a few important elements about this new architecture:

  • “…The RDS modern infrastructure components we are showcasing today extend the current Windows Server 2016 RDS to enable partners to address new markets segments and customers while reducing the cost and complexity of hosted Windows desktop and application deployments…”
  • “…adding a new RD Diagnostics service…”
  • “…
    • Both single and multi-tenant deployments, making smaller deployments (less than 100 users) much more economically viable, while providing the necessary security of tenant isolation
    • Deployments on Microsoft Azure, on-premises equipment, and hybrid configurations
    • Virtual machines or Azure App Services can be used for deployment

    …”

HTML5 web client!

Another important new element: an HTML5 web client will be included in the new architecture.
“… The new infrastructure will also include a web client that allows users to connect from any HTML5 browser. The web client, combined with the other RDS modern infrastructure features, allows many Windows applications to be easily transformed into a Web-based Software-as-a-Service (SaaS) application without having to rewrite a line of code. …”

Updates coming soon

More information and updates will come up on my blog in the next weeks/months.

Official release:
https://blogs.technet.microsoft.com/enterprisemobility/2017/07/12/today-at-microsoft-inspire-next-generation-architecture-for-rds-hosting/

Extending an S2D (Storage Spaces Direct) Pool on Azure, and increasing your IOPS!

Introduction

As more and more companies move their environment(s) to the Azure Cloud, the need for a high-available fileserver grows with it.
At the time of the migration/installation, a 128GB volume might be sufficient, but at some point in time the volume needs to be increased.
This is not the same procedure as in an on-premises datacenter.
In this blog, you will see how to increase the available diskspace for a Clustered Volume while optimizing the available IOPS & throughput of the disks.
Expanding the Clustered Volume is a topic for my next blog, coming up later on.

The “Inefficient” way

You can simply add new disks to both fileserver VMs, add the disks to the pool and increase the volume.
But this has multiple disadvantages:

  1. The number of data disks for each VM is limited.
    Depending on your VM size, this can go from 2 up to 32 disks (Azure VM Sizes – General Purpose).
    But you don’t want to use all your available slots: keep at least 1-2 free for emergency expansion
  2. Depending on the disks you add, you will have a mixture of disk-sizes, but also disk IOPS.
    For example:

      • A premium P6 64GB disk has 240 IOPS/disk and a max throughput of 50MB/s.
      • A premium P10 128GB disk has 500 IOPS/disk & a max throughput of 100MB/s

    (more information about performance targets)

The “Optimal” way

To optimize the new diskspace, you can follow this procedure:

  1. Add new larger disks to the VMs & add the new disks to the pool
  2. Put the old disks in “Retired” mode
  3. Move the data from the old disks to the new disks
  4. Remove & detach the old disks from the VMs
  5. The complete script

Important: This procedure can be done during production activity without downtime.
But if you want to be 100% certain, you can use the Suspend-ClusterResource & Resume-ClusterResource cmdlets

The procedure

Test setup

Overview Datadisks at the beginning

I created a 2-node fileserver setup (MICHA-A-FLS-001 & MICHA-A-FLS-002), both with 2x 64GB managed disks attached (2x 64GB on each node = 128GB available cluster storage).
In the test-setup, I want to extend the current disks to 2x 128GB on each node (resulting in 256GB available cluster storage).

Overview disks before using show-prettypool.ps1

The output from Show-PrettyPool.ps1 (downloadable here):

1. Adding the new disks

1.1 Creating disks

First, you need to add the new disks to both nodes.
When you use Managed Disks in your deployment, you can do this by simply adding disks in the Disks-panel.
If you use Unmanaged Disks, you cannot create the disks in advance. You will need to create and attach the disks in the Virtual Machines-panel in the same step (see 1.2 Attaching disks)

So I created 4 disks of 128GB, 2 for each node.
How to Add Managed Disk - Step 1

Created Managed Disks

1.2 Attaching disks

Next step is to attach the disks to the VMs
You select the VM in the Virtual Machines-panel, select Disks and click the Add data disk button.
Attach data disk to VM

In the drop-down menu, you can select the disks you created in the Disks-panel
Select data disks to attach to VM

Then you save the configuration. Keep the LUN configuration in mind, because we will need it later on.
Save disk setup & check LUN configuration

1.3 Adding the new disks to the Storage Pool

When your fileserver environment has only 1 storage pool, the new disks are added to the pool automatically as you add disks to the VMs.
Overview in VM after adding disks

Otherwise you can use the Add-PhysicalDisk cmdlet to add the disks to the storage pool
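A sketch (the pool name is a placeholder):

# -CanPool filters out disks that are already part of a pool
$newDisks = Get-PhysicalDisk -CanPool $true
Add-PhysicalDisk -StoragePoolFriendlyName "S2D on MICHA-CLUSTER" -PhysicalDisks $newDisks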

2. Put the old disks in “Retired” mode

If you just add the disks to the storage pool, Storage Spaces Direct will, after 30 minutes, automatically begin re-balancing the storage pool, moving “slabs” around to even out drive utilization (more information about S2D in this deep dive). This can take some time (many hours) for larger deployments. You can watch its progress using the following cmdlet.
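Get-StorageJob   # shows the running rebalance job and its progress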

But we don’t want to divide the storage pool over all disks; we want to move the data to the new disks. This can be done by setting the old disks to “Retired” mode.

2.1 Select Virtual Disk

First, you need to select the virtual disk to be able to find the storage pool.

Select the virtual disk
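A sketch (the virtual disk name is a placeholder):

$virtualDisk = Get-VirtualDisk -FriendlyName "CSV-Volume01"
$pool = $virtualDisk | Get-StoragePool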

2.2 Select the first & second node

Next, you select the 2 nodes from the cluster. This is necessary to select the old disks which will be removed.
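A sketch, using the node names from the test setup:

$node1 = Get-StorageNode -Name "MICHA-A-FLS-001*"
$node2 = Get-StorageNode -Name "MICHA-A-FLS-002*"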

2.3 Selecting the disks which will be removed, based on the LUN-ids

Next, you select the disks which need to be removed, based on the LUN-ids as seen in 1.2 when adding the disks to the VM.
Using the code below, you will get a Gridview with all disks from each node.
You need to look at the PhysicalLocation, which describes the LUNs.

Select the old disks using the LUN-id
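A sketch of that selection (look at the PhysicalLocation column in the grid):

$oldDisks1 = $node1 | Get-PhysicalDisk -PhysicallyConnected |
    Out-GridView -Title "Select the OLD disks on node 1" -PassThru
$oldDisks2 = $node2 | Get-PhysicalDisk -PhysicallyConnected |
    Out-GridView -Title "Select the OLD disks on node 2" -PassThru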

2.4 Old disks in “Retired”-mode

Last step in this phase: setting the old disks in “Retired” mode. This is done by using the Set-PhysicalDisk cmdlet.
The code below will also rename the disks for a better overview.
Putting the disks in Retired mode will stop Storage Spaces Direct from re-balancing the storage pool over all available disks.
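A sketch of both actions:

# Retire the old disks and rename them for a better overview
foreach ($disk in @($oldDisks1) + @($oldDisks2)) {
    Set-PhysicalDisk -InputObject $disk -Usage Retired -NewFriendlyName "RETIRED-$($disk.DeviceId)"
}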

old disks status are set to Retired

3. Move data to the new disks

3.1 Start the Repair-VirtualDisk cmdlet

Now that the old disks are in Retired mode, we can ask S2D to start repairing the virtual disk by moving the data from the old disks to the new disks.
This is done by using the Repair-VirtualDisk cmdlet.
This can take some time (many hours) for larger deployments, therefore we start the Repair-VirtualDisk cmdlet as a job.
You can watch its progress using the Get-StorageJob cmdlet.
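A sketch:

$virtualDisk | Repair-VirtualDisk -AsJob
Get-StorageJob   # follow the repair progress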

Repair-VirtualDisk & status

3.2 Follow up the Storage Job

Upon completion, the Get-StorageJob cmdlet will return a Completed overview, or will just return a null value (after a few minutes, the job log is cleared).
Repair job completed

3.3 Repair completed, overview of the Disks

When the Repair job is complete, Show-PrettyPool.ps1 will show that the old disks are empty. The new disks are filled up, and the data is divided evenly across them.
(The first run was before the Repair job, the second run after.)
Overview of the disks after the Repair job

4. Remove & detach old disks

4.1 Remove from the storage pool

First step in this last phase is removing the disks from the storage pool.
Because the old disks are still in the variables, you can simply pipe them into the Remove-PhysicalDisk cmdlet.
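A sketch, reusing the pool name and disk variables from the earlier steps:

Remove-PhysicalDisk -StoragePoolFriendlyName "S2D on MICHA-CLUSTER" -PhysicalDisks (@($oldDisks1) + @($oldDisks2))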

Remove the old disks using the Remove-PhysicalDisk cmdlet

Using Show-PrettyPool.ps1, you will see that the old disks are no longer part of the storage pool.
overview using show-prettypool.ps1

4.2 Detach from VM

Next step is detaching the old disks from the VMs.
You select the VM in the Virtual Machines-panel, select Disks and click the Edit button.
Edit VM disk configuration to detach old disks

On the right side, you click on the Detach button next to each old disk and click the Save button.
Detach old disks
Save new disk configuration with old disks detached

In the VM, when you execute the Get-PhysicalDisk cmdlet, you will see that only the new disks are connected.
disk overview in VM after detaching old disks

4.3 Remove Disks

The very last, but important, step is removing the disks permanently.
If you forget to do this, you will keep paying for the disks, even though they are not connected.

4.3.1 Managed Disks

For Managed Disks, you go to the Disks-panel.
There you will see the old disks, with no Owner assigned to them.
overview of Managed Disks after detaching

You click on an old disk, check the DiskState (should be Unattached) and click Delete.
And this should be done for all old disks.
Delete Managed Disk

4.3.2 Unmanaged Disks

For Unmanaged Disks, you go to the Storage Accounts-panel and select the Storage Account where the VHD is stored.
In the Overview, you select the Container containing the VHD.
Unmanaged Disks: select Storage Account

Next, you select the VHD-file, and on the right panel, you check the Lease Status (should be Unlocked), then you click on Delete.
Unmanaged Disks: delete VHD from storage account

And that’s it: your cluster now has extra diskspace to expand the volume.
Expanding the Clustered Volume is a topic for my next blog, coming up later on.

5. Complete script