In the previous part we took the time to set up our AzS HCI lab. If you’ve been following along (I’d strongly recommend doing so if you haven’t!), you have a 4-node cluster and WAC installed. We have not connected the cluster to Azure just yet, and I think it’s better to look around the disconnected lab and explore the capabilities of HCI itself before we add Azure into the mix. So we will hold off on that for now and explore what we’ve got so far.
Before we get into the workloads running on the cluster, let’s get to know the cluster and its properties first. Let’s scroll all the way down to the bottom of the menu on the left and see what we’ve got under “Settings”.
We seem to have a few settings neatly categorized by the purpose they serve: settings affecting Storage, Cluster, some host-level Hyper-V settings, and some settings affecting the Azure Stack HCI OS itself. Let’s see what we’ve got in each.
Storage Settings
These settings affect the underlying Storage Spaces Direct (S2D) configuration of the cluster. Here you can choose how much system memory to allocate to caching frequently accessed data to boost read performance. You can read more about this here.
Note that the in-memory cache works best with read-intensive workloads; conversely, it may hurt performance on write-intensive workloads, in which case you should turn it off.
You can also change the storage cache behavior by setting the cache mode for your drives to cache only reads, only writes, or both. In most cases, SSDs are fast enough by default not to require read caching, unlike HDDs, which are high-capacity but traditionally slower.
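That rule of thumb can be sketched as a tiny decision helper. This is purely illustrative, not an API call; the actual setting is changed through WAC (or the Set-ClusterStorageSpacesDirect cmdlet):

```python
# Illustrative sketch of the cache-mode rule of thumb described above.
# The mode names mirror the options exposed in the UI; the function and
# its logic are just an illustration, not part of any real API.

def recommended_cache_mode(drive_type: str) -> str:
    """Suggest a cache mode for a capacity drive type."""
    if drive_type == "HDD":
        # HDDs are high capacity but slow, so cache both reads and writes.
        return "ReadWrite"
    if drive_type == "SSD":
        # SSDs are usually fast enough that only writes need caching.
        return "WriteOnly"
    raise ValueError(f"unknown drive type: {drive_type}")

print(recommended_cache_mode("HDD"))  # ReadWrite
print(recommended_cache_mode("SSD"))  # WriteOnly
```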
Cluster Settings
These are the settings that affect the behavior of your cluster as well as the nodes within it.
For example, you can configure the cluster to move your guest VMs to another node should a node shut down, whether intentionally or unexpectedly.
You can configure node-level load balancing, which identifies overworked nodes and moves some of their workloads to a different node.
You have a couple of control options here depending on how frequently and how aggressively you want to check for balance.
Under Balance virtual machines, select Always to load balance upon server join and every 30 minutes, Server joins to load balance only upon server joins, or Never to disable the VM load balancing feature.
Under Aggressiveness, select Low to live migrate VMs when a server is more than 80% loaded, Medium to migrate when a server is more than 70% loaded, or High to average the load across the servers in the cluster and migrate when a server is more than 5% above average.
You can read more about this here.
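The Aggressiveness thresholds above boil down to a simple decision rule. Here is a rough sketch of that logic (illustrative only; the real balancer lives inside the cluster service, and the function and load figures below are made up):

```python
# Illustrative sketch of the Aggressiveness thresholds described above:
# Low: migrate when a server is >80% loaded; Medium: >70%;
# High: more than 5 points above the cluster average.

def should_migrate(node_load: float, cluster_loads: list,
                   aggressiveness: str) -> bool:
    """Decide whether a node's VMs are candidates for live migration."""
    if aggressiveness == "Low":
        return node_load > 80.0
    if aggressiveness == "Medium":
        return node_load > 70.0
    if aggressiveness == "High":
        average = sum(cluster_loads) / len(cluster_loads)
        return node_load > average + 5.0
    raise ValueError(f"unknown aggressiveness: {aggressiveness}")

loads = [60.0, 75.0, 40.0, 45.0]  # hypothetical per-node load percentages
print(should_migrate(75.0, loads, "Low"))     # False (not above 80%)
print(should_migrate(75.0, loads, "Medium"))  # True  (above 70%)
print(should_migrate(75.0, loads, "High"))    # True  (average is 55)
```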
You can also set up Affinity rules to keep certain workloads together, always deployed on the same server or site (like an application server and its database), or, conversely, Anti-affinity rules to always spread certain workloads across servers or sites (like domain controllers).
Note that the load-balancing rules we set above will honor the affinity rules accordingly. We will revisit this once we’ve spun up some guest VMs on our nodes.
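To make the two rule types concrete, here is a hypothetical sketch of how affinity and anti-affinity rules constrain VM placement. The names and data structures are made up for illustration; real rules are created in WAC (or with the New-ClusterAffinityRule cmdlet):

```python
# Hypothetical illustration of affinity vs. anti-affinity placement checks.
# VM and node names are invented; this is not how the cluster stores rules.

def placement_ok(placement: dict,
                 affinity: list,
                 anti_affinity: list) -> bool:
    """placement maps VM name -> node name; rules are (vm_a, vm_b) pairs."""
    # Affinity: the paired VMs must land on the same node.
    for a, b in affinity:
        if placement[a] != placement[b]:
            return False
    # Anti-affinity: the paired VMs must be kept apart.
    for a, b in anti_affinity:
        if placement[a] == placement[b]:
            return False
    return True

rules_ok = placement_ok(
    {"app01": "node1", "sql01": "node1", "dc01": "node2", "dc02": "node3"},
    affinity=[("app01", "sql01")],     # app server with its database
    anti_affinity=[("dc01", "dc02")],  # spread the domain controllers
)
print(rules_ok)  # True
```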
Hyper-V Host Settings
These settings affect Hyper-V, the underlying virtualization platform running on our nodes.
You can perform on-demand live migrations of your VMs, for example when you need to perform maintenance on one of your nodes, all in real time and with zero downtime.
You can also run multiple storage migrations on a node at the same time. More about that here.
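The idea behind that setting is a fixed number of concurrent migration slots per host. A semaphore models the same idea; this sketch is purely illustrative (the limit value and function are hypothetical, and the real cap is configured in Hyper-V settings or WAC):

```python
# Illustrative only: Hyper-V caps how many storage migrations can run at
# once per host. A bounded semaphore models that fixed pool of slots.
import threading

MAX_SIMULTANEOUS_STORAGE_MIGRATIONS = 2  # hypothetical host setting
slots = threading.BoundedSemaphore(MAX_SIMULTANEOUS_STORAGE_MIGRATIONS)
completed = []
lock = threading.Lock()

def migrate_storage(vm: str) -> None:
    with slots:  # blocks while the host is already at its limit
        with lock:
            completed.append(vm)

threads = [threading.Thread(target=migrate_storage, args=(f"vm{i}",))
           for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(completed))  # all 5 "migrations" finish, at most 2 at a time
```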
Azure Stack HCI Settings
These settings affect the Azure Stack HCI OS we installed on our nodes.
From here you can register the cluster to Azure, which we’ve not done yet. We will do it once we’ve explored everything we can do with the cluster in disconnected mode.
Under Monitoring we can select how often to collect sample metric data on cluster performance. Once we register the cluster with Azure, this data will be sent to an Azure Log Analytics workspace. We will explore this in detail a bit later.
Here you also have the option to automatically activate Windows licenses on the server VMs you spin up in your cluster, using Automatic Virtual Machine Activation (AVMA). There are a couple of ways to achieve this, which we will talk about later. For now, you can read more here.
Alright. Those are some interesting setting options we’ve got here. Surely we’ll be fiddling with some of these as we proceed further in our journey. Until then, happy Azure Stacking!