Check your Azure CSP Customer list for scheduled maintenance

Introduction

In response to the publicly disclosed vulnerabilities referred to as “speculative execution side-channel attacks”, better known as Meltdown and Spectre, Microsoft has scheduled a short maintenance window for all affected VMs running on Azure.
As a CSP, you have a large list of customers, each with one or more subscriptions, and each subscription with one or more VMs.
Checking manually which VMs are scheduled for maintenance would take hours. Therefore, ASPEX uses a PowerShell script to check the maintenance status, so we can quickly inform our customers with an exact list of the VMs scheduled to be updated, including the timeframe of the maintenance.

PowerShell script

Prerequisites

To be able to run this script, you need to have two PowerShell modules installed: AzureRM & AzureAD. These can be installed using the following cmdlets:
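
For example (assuming PowerShellGet is available and the PowerShell Gallery is reachable):

    # Install the required modules from the PowerShell Gallery
    Install-Module -Name AzureRM
    Install-Module -Name AzureAD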

If you don’t have the Install-Module cmdlet, you need to update your PowerShell version, or install the PowerShellGet module: Get PowerShellGet Module

The Script

The script will ask you to log in twice. You have to log in with your CSP admin account; this is required to read your CSP customer list, and to read each client tenant and its subscriptions.
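
Below is a minimal sketch of how such a check can be put together (not the exact script we use): Get-AzureADContract lists your CSP customers, Get-AzureRmSubscription -TenantId enumerates their subscriptions, and the MaintenanceRedeployStatus property exposed by Get-AzureRmVM -Status holds the maintenance window. Variable names and output columns are just examples.

    # Log in twice with the CSP admin account: once for Azure Resource Manager, once for Azure AD
    Login-AzureRmAccount
    Connect-AzureAD

    # Every CSP customer shows up as a contract in the partner tenant
    $customers = Get-AzureADContract -All $true

    foreach ($customer in $customers) {
        # Subscriptions the CSP admin account can reach in this customer tenant
        $subscriptions = Get-AzureRmSubscription -TenantId $customer.CustomerContextId

        foreach ($subscription in $subscriptions) {
            Select-AzureRmSubscription -SubscriptionId $subscription.Id -TenantId $customer.CustomerContextId | Out-Null

            # -Status exposes MaintenanceRedeployStatus for VMs with scheduled maintenance
            Get-AzureRmVM -Status |
                Where-Object { $_.MaintenanceRedeployStatus } |
                Select-Object @{n='Customer';e={$customer.DisplayName}},
                              @{n='Subscription';e={$subscription.Name}},
                              Name,
                              @{n='WindowStart';e={$_.MaintenanceRedeployStatus.PreMaintenanceWindowStartTime}},
                              @{n='WindowEnd';e={$_.MaintenanceRedeployStatus.PreMaintenanceWindowEndTime}}
        }
    }
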

The result of the script will look like this:
[Screenshot: CSP Check Maintenance result]

Next generation architecture & HTML5 for RDS hosting

Introduction

I already shared a short quote on LinkedIn about the cooperation between ASPEX and Microsoft on optimizing the hosting of Windows desktops and applications on Azure.
I’m glad to announce that the news was made public at the Inspire conference in Washington DC: Next generation architecture for RDS hosting

Next generation architecture for RDS hosting

We (the ASPEX team and myself) will continue to cooperate closely with Microsoft, giving feedback and ideas on what the next generation of RDS hosting should look like in our view, what our partners/customers expect now, and what they want for the future.
We will endeavor to test and implement this next-generation architecture from the very beginning, and to provide feedback to Microsoft to make this the “…new architecture that enables you to create the next generation of services for your customers, while taking your business to the next level of efficiency and growth…”

Key elements

Here are a few important elements of this new architecture:

  • “…The RDS modern infrastructure components we are showcasing today extend the current Windows Server 2016 RDS to enable partners to address new markets segments and customers while reducing the cost and complexity of hosted Windows desktop and application deployments…”
  • “…adding a new RD Diagnostics service…”
  • “…
    • Both single and multi-tenant deployments, making smaller deployments (less than 100 users) much more economically viable, while providing the necessary security of tenant isolation
    • Deployments on Microsoft Azure, on-premises equipment, and hybrid configurations
    • Virtual machines or Azure App Services can be used for deployment

    …”

HTML5 web client!

Another important new element: an HTML5 web client will be included in the new architecture.
“… The new infrastructure will also include a web client that allows users to connect from any HTML5 browser. The web client, combined with the other RDS modern infrastructure features, allows many Windows applications to be easily transformed into a Web-based Software-as-a-Service (SaaS) application without having to rewrite a line of code. …”

Updates coming soon

More information and updates will appear on my blog in the coming weeks and months.

Official release:
https://blogs.technet.microsoft.com/enterprisemobility/2017/07/12/today-at-microsoft-inspire-next-generation-architecture-for-rds-hosting/

Extending an S2D (Storage Spaces Direct) Pool on Azure, and increasing your IOPS!

Introduction

As more and more companies move their environments to the Azure cloud, the need for a highly available file server grows with it.
A 128GB volume might be sufficient at the time of the migration/installation, but at some point the volume will need to be increased.
And this is not the same procedure as you would follow in an on-premises datacenter.
In this blog, you will see how to increase the available disk space for a Clustered Volume while optimizing the available IOPS & throughput of the disks.
Expanding the Clustered Volume is a topic for my next blog, coming up later on.

The “Inefficient” way

You can simply add new disks to both file server VMs, add the disks to the pool and increase the volume.
But this has multiple disadvantages:

  1. The number of data disks for each VM is limited.
    Depending on your VM size, this can range from 2 up to 32 disks (Azure VM Sizes – General Purpose).
    And you don’t want to use all your available slots; keep at least 1-2 free for emergency expansion.
  2. Depending on the disks you add, you will end up with a mixture of disk sizes, but also of disk IOPS.
    For example:

      • A premium P6 64GB disk has 240 IOPS/disk and a max throughput of 50MB/s.
      • A premium P10 128GB disk has 500 IOPS/disk and a max throughput of 100MB/s.

    (more information about performance targets)

The “Optimal” way

To optimize the new diskspace, you can follow this procedure:

  1. Add new larger disks to the VMs & add the new disks to the pool
  2. Put the old disks in “Retired” mode
  3. Move the data from the old disks to the new disks
  4. Remove & detach the old disks from the VMs
  5. The complete script

Important: This procedure can be done during production activity without downtime.
But if you want to be 100% certain, you can use the Suspend-ClusterResource & Resume-ClusterResource cmdlets.
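
For example (a sketch; the resource name is an assumption, use Get-ClusterResource to find the actual name of your clustered virtual disk):

    # Optional: put the clustered virtual disk in maintenance while working on the pool
    Get-ClusterResource                                            # look up the exact resource name
    Suspend-ClusterResource -Name 'Cluster Virtual Disk (FSDisk)'

    # ...extend the pool as described below...

    Resume-ClusterResource -Name 'Cluster Virtual Disk (FSDisk)'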

The procedure

Test setup

[Screenshot: Overview of the data disks at the beginning]

I created a 2-node file server setup (MICHA-A-FLS-001 & MICHA-A-FLS-002), both with 2x 64GB managed disks attached (2x 64GB on each node = 128GB of available cluster storage).
In this test setup, I want to extend the current disks to 2x 128GB on each node (resulting in 256GB of available cluster storage).

[Screenshot: Overview of the disks before, using Show-PrettyPool.ps1]

The output from Show-PrettyPool.ps1 (downloadable here) is shown in the screenshot above.

1. Adding the new disks

1.1 Creating disks

First, you need to add the new disks to both nodes.
When you use Managed Disks in your deployment, you can do this by simply adding disks in the Disks-panel.
If you use Unmanaged Disks, you cannot create the disks in advance; you will need to create and attach them in the Virtual Machines-panel in the same step (see 1.2 Attaching disks).

So I created four disks of 128GB, two for each node.
[Screenshot: How to add a Managed Disk - step 1]

[Screenshot: Created Managed Disks]

1.2 Attaching disks

The next step is to attach the disks to the VMs.
You select the VM in the Virtual Machines-panel, select Disks and click the Add data disk button.
[Screenshot: Attach data disk to VM]

In the drop-down menu, you can select the disks you created in the Disks-panel.
[Screenshot: Select data disks to attach to VM]

Then you save the configuration. Keep the LUN configuration in mind, because we will need it later on.
[Screenshot: Save disk setup & check LUN configuration]

1.3 Adding the new disks to the Storage Pool

When your file server environment has only one storage pool, the new disks are added to the pool automatically as you add them to the VMs.
[Screenshot: Overview in VM after adding disks]

Otherwise, you can use the Add-PhysicalDisk cmdlet to add the disks to the storage pool.
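
A minimal sketch (assuming a single non-primordial pool, as is typical for S2D):

    # Add any disks that are still available for pooling to the S2D storage pool
    $pool     = Get-StoragePool -IsPrimordial $false
    $newDisks = Get-PhysicalDisk -CanPool $true
    Add-PhysicalDisk -StoragePoolFriendlyName $pool.FriendlyName -PhysicalDisks $newDisks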

2. Put the old disks in “Retired” mode

If you just add the disks to the storage pool, after 30 minutes, Storage Spaces Direct will automatically begin re-balancing the storage pool – moving “slabs” around to even out drive utilization (more information about S2D in this deep dive). This can take some time (many hours) for larger deployments. You can watch its progress using the following cmdlet.
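
For example:

    # Lists running storage jobs; the automatic rebalance shows up here as a storage job
    Get-StorageJob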

But we don’t want to spread the storage pool across all disks; we want to move the data to the new disks. This can be done by setting the old disks to “Retired” mode.

2.1 Select Virtual Disk

First, you need to select the virtual disk to be able to find the storage pool.

[Screenshot: Select the virtual disk]
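
A sketch of this step (the friendly name is an assumption, adjust it to your own virtual disk):

    # Select the clustered virtual disk and the storage pool it belongs to
    $virtualDisk = Get-VirtualDisk -FriendlyName 'FSDisk'
    $pool        = $virtualDisk | Get-StoragePool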

2.2 Select the first & second node

Next, you select the 2 nodes from the cluster. This is necessary to select the old disks which will be removed.
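
For example, using the node names from this test setup (Get-StorageNode typically returns fully qualified names, hence the wildcards):

    # Select both storage nodes of the cluster
    $node1 = Get-StorageNode -Name 'MICHA-A-FLS-001*'
    $node2 = Get-StorageNode -Name 'MICHA-A-FLS-002*'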

2.3 Selecting the disks which will be removed, based on the LUN-ids

Next, you select the disks which need to be removed, based on the LUN IDs as seen in 1.2 when adding the disks to the VM.
Using the code below, you will get a Gridview with all disks from each node.
You need to look at the PhysicalLocation property, which describes the LUNs.

[Screenshot: Select the old disks using the LUN ID]
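
A sketch of this selection (the LUN numbers below are the ones from my test setup; use the LUNs you noted in 1.2):

    # Show all physically connected disks per node; the PhysicalLocation column contains the LUN
    Get-PhysicalDisk -StorageNode $node1 -PhysicallyConnected |
        Select-Object FriendlyName, SerialNumber, PhysicalLocation, Size |
        Out-GridView -Title 'Disks on node 1'

    # The old 64GB disks were attached as LUN 0 and LUN 1 on both nodes
    $oldDisksNode1 = Get-PhysicalDisk -StorageNode $node1 -PhysicallyConnected |
        Where-Object { $_.PhysicalLocation -match 'LUN (0|1)$' }
    $oldDisksNode2 = Get-PhysicalDisk -StorageNode $node2 -PhysicallyConnected |
        Where-Object { $_.PhysicalLocation -match 'LUN (0|1)$' }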

2.4 Old disks in “Retired”-mode

The last step in this phase: setting the old disks to “Retired” mode. This is done using the Set-PhysicalDisk cmdlet.
The code below will also rename the disks for a better overview.
Putting the disks in Retired mode stops Storage Spaces Direct from re-balancing the storage pool over all available disks.
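
A sketch, continuing with the disk variables from 2.3 (the new friendly names are just an example):

    # Retire the old disks so Storage Spaces Direct stops allocating new slabs on them
    $oldDisks = @($oldDisksNode1) + @($oldDisksNode2)
    $oldDisks | Set-PhysicalDisk -Usage Retired

    # Rename them so they are easy to spot in later overviews
    $i = 1
    foreach ($disk in $oldDisks) {
        $disk | Set-PhysicalDisk -NewFriendlyName ("OLD-Disk-{0}" -f $i)
        $i++
    }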

[Screenshot: Old disks set to Retired]

3. Move data to the new disks

3.1 Start the Repair-VirtualDisk cmdlet

Now that the old disks are in Retired mode, we can ask S2D to start repairing the virtual disk by moving the data from the old disks to the new ones.
This is done using the Repair-VirtualDisk cmdlet.
This can take some time (many hours) for larger deployments, therefore we start the Repair-VirtualDisk cmdlet as a job.
You can watch its progress using the Get-StorageJob cmdlet.
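
For example:

    # Start the repair as a background job and follow it with Get-StorageJob
    $virtualDisk | Repair-VirtualDisk -AsJob
    Get-StorageJob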

[Screenshot: Repair-VirtualDisk & status]

3.2 Follow up the Storage Job

Upon completion, the Get-StorageJob cmdlet will return a Completed overview, or simply a null value (after a few minutes, the job log is cleared).
[Screenshot: Repair job completed]

3.3 Repair completed, overview of the Disks

When the Repair job is complete, Show-PrettyPool.ps1 will show that the old disks are empty, the new disks are filled up, and the data is divided evenly across them.
(The first run was before the Repair, the second run is after the Repair job.)
[Screenshot: Overview of the disks after the Repair job]

4. Remove & detach old disks

4.1 Remove from the storage pool

The first step in this last phase is removing the disks from the storage pool.
Because the old disks are still stored in the variables, you can simply pass them to the Remove-PhysicalDisk cmdlet.
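
A sketch, reusing the variables from the previous steps (written here with explicit parameters):

    # Remove the retired, now empty, disks from the storage pool
    Remove-PhysicalDisk -PhysicalDisks $oldDisks -StoragePoolFriendlyName $pool.FriendlyName -Confirm:$false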

[Screenshot: Remove the old disks using the Remove-PhysicalDisk cmdlet]

Using Show-PrettyPool.ps1, you will see that the old disks are no longer part of the storage pool.
[Screenshot: Overview using Show-PrettyPool.ps1]

4.2 Detach from VM

The next step is detaching the old disks from the VMs.
You select the VM in the Virtual Machines-panel, select Disks and click the Edit button.
[Screenshot: Edit VM disk configuration to detach old disks]

On the right side, you click on the Detach button next to each old disk and click the Save button.
[Screenshot: Detach old disks]
[Screenshot: Save new disk configuration with old disks detached]

In the VM, when you execute the Get-PhysicalDisk cmdlet, you will see that only the new disks are connected.
[Screenshot: Disk overview in VM after detaching old disks]

4.3 Remove Disks

The very last, but important, step is removing the disks permanently.
If you forget to do this, you will keep paying for the disks, even though they are no longer attached.

4.3.1 Managed Disks

For Managed Disks, you go to the Disks-panel.
There you will see the old disks, with no Owner assigned to them.
[Screenshot: Overview of Managed Disks after detaching]

You click on an old disk, check the Disk State (it should be Unattached) and click Delete.
This should be done for all old disks.
[Screenshot: Delete Managed Disk]

4.3.2 Unmanaged Disks

For Unmanaged Disks, you go to the Storage Accounts-panel and select the Storage Account where the VHD is stored.
In the Overview, you select the Container containing the VHD.
[Screenshot: Unmanaged Disks: select Storage Account]

Next, you select the VHD file, and in the right panel, you check the Lease Status (it should be Unlocked), then you click Delete.
[Screenshot: Unmanaged Disks: delete VHD from storage account]

And that’s it: your cluster now has extra disk space to expand the volume.
Expanding the Clustered Volume is a topic for my next blog, coming up later on.

5. Complete script