In my previous post, I demonstrated the deployment of VMware Cloud Foundation (VCF) 9.0, including the fleet management components as well as the components of the Management Domain. In this post, we’ll continue to build our VCF lab blueprint by deploying a Workload Domain with a supervisor-enabled vSphere cluster. These workload resources will finally be consumed by our tenant VCF Automation all-apps organization to enable self-service provisioning of resources for the end user.
Host commissioning
Before we can create the Workload Domain, we must commission the ESX hosts. This is now done in the global inventory of the Management Domain vCenter Server. But before we can actually start this process, we must create the required network pool for the new ESX hosts, which is also done in the Management Domain vCenter Server.
So, let’s log in to the Management Domain vCenter Server vSphere Client as administrator@vsphere.local and navigate to Global Inventory Lists > Hosts > Network Pools. There, click Create Network Pool.

In our lab, a network pool for the Management Domain already exists. We are going to create one for the Workload Domain, consisting of the network types vMotion and vSAN.

After we’ve entered the required information for the pool, we click on Save. The new network pool will be created and appears in the network pool list.
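For repeatable lab builds, the same network pool could also be created through the SDDC Manager public API. The following is only a sketch: the /v1/network-pools endpoint and payload shape are based on the VCF API reference for earlier releases and may differ in 9.0, and all names, VLAN IDs, and IP ranges below are invented lab examples.

```shell
# Build the network pool payload locally (all values are example lab data)
cat > /tmp/wld-network-pool.json <<'EOF'
{
  "name": "wld01-np01",
  "networks": [
    {
      "type": "VMOTION",
      "vlanId": 1612,
      "mtu": 9000,
      "subnet": "172.16.12.0",
      "mask": "255.255.255.0",
      "gateway": "172.16.12.1",
      "ipPools": [ { "start": "172.16.12.10", "end": "172.16.12.50" } ]
    },
    {
      "type": "VSAN",
      "vlanId": 1613,
      "mtu": 9000,
      "subnet": "172.16.13.0",
      "mask": "255.255.255.0",
      "gateway": "172.16.13.1",
      "ipPools": [ { "start": "172.16.13.10", "end": "172.16.13.50" } ]
    }
  ]
}
EOF

# Validate the payload locally before sending it anywhere
python3 -m json.tool /tmp/wld-network-pool.json > /dev/null && echo "payload OK"

# In the lab, the pool would then be created with (access token via POST /v1/tokens):
#   curl -sk -X POST "https://sddc-manager.vcf.lab/v1/network-pools" \
#     -H "Authorization: Bearer ${TOKEN}" -H "Content-Type: application/json" \
#     -d @/tmp/wld-network-pool.json
```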

Now, we can proceed with the actual host commissioning. We navigate to Global Inventory Lists > Hosts > Unassigned Hosts and click Commission Hosts.

On the Checklist page, we review the prerequisites and confirm by clicking Select All. Then we click Proceed.

Let’s add our three unassigned ESX hosts, starting with the first one.

After adding hosts 2 and 3, we verify that the server fingerprint is correct for each host, then activate the Confirm All Finger Prints toggle and click Validate All:
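If you want to double-check a fingerprint independently rather than trusting the value shown in the wizard, the host’s SHA-256 certificate thumbprint can be fetched directly. A minimal sketch (the host name esx04.vcf.lab is an invented example; if your wizard shows an SSH key fingerprint instead, ssh-keyscan piped into ssh-keygen -lf is the equivalent check):

```shell
# cert_thumbprint — reads a PEM certificate on stdin and prints its SHA-256 fingerprint
cert_thumbprint() {
  openssl x509 -noout -fingerprint -sha256
}

# In the lab, fetch the certificate from the ESX host and pipe it through:
#   echo | openssl s_client -connect esx04.vcf.lab:443 2>/dev/null | cert_thumbprint
```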

The host validation will take a few moments:

In my lab environment, the validation failed on the first try with the error message “Failed to validate vSAN HCL status”. This is caused by the hardware of our nested ESX hosts, which is not on the official vSAN HCL.

Luckily, there is already a known workaround for this, documented in William Lam’s blog post “Enhancement in VCF 9.0.1 to bypass vSAN ESA HCL & Host Commission 10GbE NIC Check“. To resolve the issue, we log into the SDDC Manager appliance as user vcf via SSH and execute the following commands:
echo "vsan.esa.sddc.managed.disk.claim=true" >> /etc/vmware/vcf/domainmanager/application-prod.properties
echo "vsan.esa.sddc.managed.disk.claim=true" >> /etc/vmware/vcf/operationsmanager/application-prod.properties
echo 'y' | /opt/vmware/vcf/operationsmanager/scripts/cli/sddcmanager_restart_services.sh
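Before (or after) restarting the services, it is worth confirming that the property actually landed in both files. A small helper sketch, using the same paths as the commands above:

```shell
# has_bypass FILE — returns 0 if the vSAN ESA disk claim override is set in FILE
has_bypass() {
  grep -q '^vsan\.esa\.sddc\.managed\.disk\.claim=true$' "$1"
}

# On the SDDC Manager appliance:
#   has_bypass /etc/vmware/vcf/domainmanager/application-prod.properties && echo "domainmanager OK"
#   has_bypass /etc/vmware/vcf/operationsmanager/application-prod.properties && echo "operationsmanager OK"
```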
After the changes in the SDDC Manager configuration have been applied, we start the host commissioning workflow again in the Management Domain vCenter Server and enter the same information as before. This time the host validation succeeds:
Create the Workload Domain
In VCF 9.0, workload domain creation is done in VCF Operations. So, let’s log into the VCF Operations web UI as the admin user. There, select Inventory > Detailed View, expand VCF Instances, and browse to our VCF instance vcf01. Then click Add Workload Domain > Create New.

Review the prerequisites, click Select All, and click Proceed.

Enter the General Information details. We leave the Enable vSphere Supervisor setting enabled, so the workload domain is set up for vSphere Supervisor with NSX VPC networking, which creates a Supervisor with a single Supervisor control plane VM and a single vSphere zone. Click Next.

Enter the vCenter details and click Next.

We select the default cluster image and click Next.

Then enter the NSX Manager details and click Next.


Enter the Storage details for vSAN with ESA and click Next.

We specify vSAN HCI as the vSAN cluster type and click Next.

Now, we select our three ESX hosts to use for creating the workload domain and click Next.

On the Distributed Switch page, we click Create custom switch configuration, as we want to override some of the settings of the default profile.

To use the default profile as an initial configuration, we click Copy from preconfigured profile and select Default.

A distributed switch configuration labeled Custom Profile has been created. A yellow notification box informs us that a transport VLAN is mandatory and has to be updated before proceeding. To do so, we click Edit.

On the newly opened page, we scroll down to the bottom and click Edit.

Now, we change the Transport Zone Type from Standard to Enhanced Datapath – Standard for better performance. Some context: Enhanced Data Path (EDP) is a packet forwarding stack designed to provide superior performance in terms of throughput, packet rate, latency, and CPU utilization. EDP Standard is a mode that delivers increased network performance and very high packet processing efficiency out of the box, exceeding what is possible with the standard vSphere stack and without the additional configuration associated with EDP Dedicated. EDP Standard is the recommended mode for maximizing host switch network performance in general compute environments and on NSX Edge clusters.

Next, we scroll down to the NSX Overlay Transport Zone details. Here, we change the IP Allocation from DHCP to Static IP Pool and provide the VLAN ID and other pool details.

Now, we scroll down to the bottom of the page and click on Save Configuration.

This brings us back to the detailed summary page of the distributed switch configuration. We can see that our Overlay Network settings have been applied.

We scroll down to the bottom and click again on Save Configuration.

Finally, we’re back on the main distributed switch configuration page. The yellow warning box stating that the transport VLAN is mandatory and has to be updated now offers the option to acknowledge it. We do so by clicking Acknowledge.

Afterwards, we can click Next.

Now we provide details for the vSphere Supervisor. Then we click Next.

We review the information about the workload domain and click Finish.

After the successful validation, workload domain creation starts.

We can navigate to Fleet Management > Tasks to monitor the progress of the workload domain creation.
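The progress can also be watched outside the UI. Here is a minimal polling sketch; note that the GET /v1/tasks endpoint and the exact task state strings are assumptions based on the VCF public API of earlier releases, not confirmed for 9.0:

```shell
# poll_task_state CMD... — re-runs CMD until it prints a terminal task state.
# CMD is any command that prints the current state; in the lab it would be a
# curl against the SDDC Manager, for example:
#   curl -sk -H "Authorization: Bearer ${TOKEN}" \
#     "https://sddc-manager.vcf.lab/v1/tasks/<task-id>"  # plus JSON parsing of .status
poll_task_state() {
  local state
  while true; do
    state=$("$@")
    case "$state" in
      SUCCESSFUL|FAILED) echo "$state"; return 0 ;;
    esac
    sleep 10   # wait before polling again
  done
}
```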

Post-deployment tasks
After the workload domain has been successfully created, we must deploy an NSX Edge cluster with an active-standby tier-0 gateway to complete Supervisor activation.