
Resource Pool Capacity Planning with vRealize Operations Manager!

Resource pools are a great way to divide resources among tenants. Large quantities of compute and memory can be carved up and handed to tenants, and limits can be put in place to restrict how much each tenant can consume, as I covered in this post last year. When you think about resource pools in this context, they’re a lot like virtual self-storage units!

Just like the manager of a self-storage business keeps track of how many units are rented out and how many are available, IT organizations need to do the same with their resources. But this can be a bit challenging without the right tools. The size of a resource pool can be defined by the CPU and memory limits imposed on it, but limits aren’t usually reflected in traditional capacity management tools. Instead, virtual administrators end up tracking how many resource pools they’ve sold, and at what size, manually in spreadsheets. Not only is this time consuming, it’s also prone to error if the proper governance isn’t established. The easier approach is to leverage VMware’s vRealize Operations Manager for resource pool capacity planning!

When doing cluster capacity planning with vRealize Operations Manager out of the box, resource pools aren’t part of the equation. And no, that’s not a mistake; it’s by design. Capacity planning traditionally looks at the resources actively used by, or assigned to, virtual machines. A resource pool is just a logical container, and imposing a limit doesn’t by itself consume host or cluster resources, so limits aren’t considered. However, don’t despair, because vRealize Operations Manager has an incredible, no, SUPER ace up its sleeve.

Enter Super Metrics

We’ve looked at Super Metrics before on VMSpot. Super Metrics are the vRealize Operations Manager equivalent of using formulas in Excel. They take a little getting used to but they’re extremely powerful and addictive! If you haven’t played around with Super Metrics before, then I’d recommend you check out Building a vROps super metric to show CMDS/s for a custom group and Enabling Super Metrics in vROps.

In this example, I have one cluster with two resource pools named vR Ops and vSphere Management.


The cluster has 55GHz of CPU and 64GB of RAM available. The vR Ops resource pool has a limit of 2GHz of CPU and 10GB of RAM.

The vSphere Management resource pool has a 10GB RAM limit and no CPU limit.

These values are available in vRealize Operations Manager as the cpu|effective_limit and mem|effective_limit metrics. What we need to do now is calculate the total CPU and memory limits imposed across all resource pools in the cluster, which we can do using the sum function.

sum(${adaptertype=VMWARE, objecttype=ResourcePool, metric=cpu|effective_limit, depth=1})

Because we want to look at the total resource limits imposed on all resource pools in any given cluster, we’ll assign this super metric to Cluster Compute Resources. The super metric calculates the sum of the cpu|effective_limit metric for all resource pool objects. Because a resource pool is a child object of the cluster, we need to tell the super metric to look only at the cluster’s direct descendants by including depth=1.
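If it helps to see what the sum and depth=1 are doing, here’s a minimal plain-Python sketch of the same calculation. The pool names and limit values below are hypothetical, and vRealize Operations Manager does this traversal for you; the point is only that depth=1 restricts the sum to the cluster’s direct descendants.

# Hypothetical model of the super metric: sum cpu|effective_limit over the
# cluster's direct child resource pools only (depth=1). Names and values
# are made up for illustration; limits are in MHz.
cluster = {
    "name": "Cluster01",
    "children": [
        {"type": "ResourcePool", "name": "Tenant-A", "cpu_effective_limit": 4000},
        {"type": "ResourcePool", "name": "Tenant-B", "cpu_effective_limit": 2000},
    ],
}

# depth=1: look only at the cluster's direct descendants
total_cpu_limit_mhz = sum(
    child["cpu_effective_limit"]
    for child in cluster["children"]
    if child["type"] == "ResourcePool"
)

print(f"Sum of CPU limits across resource pools: {total_cpu_limit_mhz} MHz")  # 6000 MHz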

Now, let’s take a look at the memory limits.

sum(${adaptertype=VMWARE, objecttype=ResourcePool, metric=mem|effective_limit, depth=1})

Again, we’re going to assign this super metric to the vCenter Adapter’s Cluster Compute Resources and specify that it applies to the cluster’s direct descendants with depth=1. If it all works right, we should be able to click the visualize super metric button at the top and see the super metric return a value of approximately 20GB (10GB from the vR Ops pool plus 10GB from vSphere Management).


That’s amazing, right? Sure it is! But this is only half of the picture. We really want to know how many resources are left so we can carve out more resource pools! To do that, we simply take the cluster’s total resources and subtract our new super metrics.

${this, metric=cpu|totalCapacity_average}-sum(${adaptertype=VMWARE, objecttype=ResourcePool, metric=cpu|effective_limit, depth=1})

This super metric pulls the cpu|totalCapacity_average metric for the selected cluster; that’s why it starts with ${this. Then we simply subtract the super metric we composed earlier, which sums the cpu|effective_limit for all resource pools in the cluster. Now, let’s do the same for memory.

${this, metric=mem|totalCapacity_average}-sum(${adaptertype=VMWARE, objecttype=ResourcePool, metric=mem|effective_limit, depth=1})

The memory remaining super metric is the same as the CPU remaining super metric, except that we use the cluster’s mem|totalCapacity_average metric and the resource pools’ mem|effective_limit metric.
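As a quick sanity check, here’s the memory side of the example worked out by hand in Python: the cluster has 64GB of RAM and the two resource pools have 10GB limits each. The numbers come straight from the example above; vRealize Operations Manager computes this for you.

# Worked check of the "memory remaining" super metric using the example values:
# the cluster's mem|totalCapacity_average minus the summed mem|effective_limit
# of its resource pools. Values in GB for readability.
cluster_mem_total_gb = 64
pool_mem_limits_gb = {"vR Ops": 10, "vSphere Management": 10}

mem_assigned_gb = sum(pool_mem_limits_gb.values())          # 20 GB
mem_remaining_gb = cluster_mem_total_gb - mem_assigned_gb   # 44 GB left to carve up

print(f"Assigned to resource pools: {mem_assigned_gb} GB")
print(f"Remaining for new resource pools: {mem_remaining_gb} GB")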

Once you enable these super metrics, you can use them to build a report or dashboard. Here I’ve created a report that shows how many hosts are in my cluster, how much CPU and memory the cluster has, how much CPU and memory is “assigned” to my resource pools, and finally how many resources are left to “assign” to more resource pools.

You’ll notice that I’m overprovisioned on CPU by 2GHz. That’s because the vSphere Management resource pool doesn’t have a CPU limit imposed and could therefore potentially claim 100% of the cluster’s CPU resources, or 48,942MHz. The vR Ops resource pool has a 2GHz limit imposed, which puts us at a deficit. To fix this, simply assign a CPU limit to the vSphere Management resource pool.
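To make that deficit concrete, here’s a small sketch of the CPU arithmetic, assuming (as the paragraph above implies) that a pool with no CPU limit reports an effective limit equal to the cluster’s full CPU capacity:

# Why the report shows a 2GHz CPU deficit: the unlimited vSphere Management
# pool reports an effective limit equal to the whole cluster's CPU capacity
# (an assumption based on the behavior described above), so the summed limits
# exceed the cluster total.
cluster_cpu_total_mhz = 48942                      # 100% of the cluster's CPU
pool_cpu_limits_mhz = {
    "vSphere Management": 48942,                   # no limit -> full cluster capacity
    "vR Ops": 2000,                                # 2GHz limit
}

cpu_assigned_mhz = sum(pool_cpu_limits_mhz.values())         # 50,942 MHz
cpu_remaining_mhz = cluster_cpu_total_mhz - cpu_assigned_mhz
print(f"Remaining CPU: {cpu_remaining_mhz} MHz")             # -2000 MHz, i.e. 2GHz over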


Matt Bradford
