VMware Cloud Foundation (VCF) 9.0 was released a few weeks ago, and version 9.0.1 has now followed. Time for me to finally deploy it in my lab environment. This blog post provides a step-by-step guide to preparing my lab, plus a deep-dive walkthrough of deploying VCF 9.0 using the new VCF Installer.

To deploy VCF 9.0 in my lab, I’ve made the following design decisions. To save resources on my single physical ESX host, I’ve decided to deploy the VCF appliances in so-called simple mode. This means that the VCF management components won’t be deployed in a highly available three-node setup, but rather as single-node components.

In this design, I’ll setup a single VCF instance “vcf01” consisting of a Management Domain “m01” and a Workload Domain “w01”.

The Management components are to be deployed with the following sizing specs:

  • VCF Operations: Small
  • VCF Operations Collector: Small
  • VCF Automation: Small
  • VCF Operations for Logs: Exclude
  • VCF Operations for Networks: Exclude
  • VCF Operations for Networks Collector: Exclude
  • Identity Broker: Exclude
  • Management Domain vCenter Server: Small
  • Management Domain NSX Manager: Medium
  • Workload Domain vCenter Server: Small
  • Workload Domain NSX Manager: Medium
  • Workload Domain NSX Edges: Large
  • Workload Domain Supervisor: Single Management Zone with Combined Workload Zones

Note: As we’re going to exclude the Identity Broker for now, the Management Domain vCenter embedded SSO will be used for fleet-wide Single Sign-On.

To achieve this setup, we must deploy the following components:

Component                          vCPU   RAM (GB)   Disk (GB)
SDDC Manager                        4       16         914
Management Domain vCenter Server    4       21         694
Management Domain NSX Manager       6       24         300
VCF Fleet Management                4       12         194
VCF Operations                      4       16         274
VCF Operations Collector            2        8         264
VCF Automation                     24       96         455
Workload Domain vCenter Server      4       21         694
Workload Domain NSX Manager         6       24         300
Workload Domain NSX Edge 1         16       64         400
Workload Domain NSX Edge 2         16       64         400
TOTAL                              90      366        4889
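
A quick way to sanity-check the TOTAL row is to re-add the columns, e.g. with a small awk one-liner (the figures below are simply copied from the table above, with shortened component names):

```shell
# Sum vCPU, RAM (GB) and disk (GB) across all components.
# Fields per line: name, vCPU, RAM, disk - taken from the sizing table.
totals=$(awk '{ c += $2; r += $3; d += $4 } END { print c, r, d }' <<'EOF'
SDDC-Manager             4 16 914
MD-vCenter               4 21 694
MD-NSX-Manager           6 24 300
VCF-Fleet-Management     4 12 194
VCF-Operations           4 16 274
VCF-Ops-Collector        2  8 264
VCF-Automation          24 96 455
WLD-vCenter              4 21 694
WLD-NSX-Manager          6 24 300
WLD-NSX-Edge-1          16 64 400
WLD-NSX-Edge-2          16 64 400
EOF
)
echo "$totals"   # prints: 90 366 4889
```

So the lab host needs to provide roughly 90 vCPUs, 366 GB of RAM and about 5 TB of disk for the full stack.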

The Management Domain will consist of 4 ESX servers with vSAN ESA; the Workload Domain will consist of 3 ESX servers with vSAN ESA.

For the environment, we’ll use the following networking setup:

Network          VLAN   CIDR             Gateway
ESX Management    20    10.230.20.0/24   10.230.20.1
VM Management     20    10.230.20.0/24   10.230.20.1
vMotion           30    10.230.30.0/24   10.230.30.1
vSAN              40    10.230.40.0/24   10.230.40.1
NSX Overlay       50    10.230.50.0/24   10.230.50.1

Note that we’re using the network 10.230.10.0/24 as the general SDDC management network (VLAN 10), where — among other systems — our Windows Server Domain Controller resides (dc1.sddc.lab with IP address 10.230.10.4). Overall, we’re going to use the following DNS hostnames and IP addresses for our lab setup:

System                      FQDN                         IP Address
ESX Host 1 MD               m01-esx01.vcf.sddc.lab       10.230.20.211
ESX Host 2 MD               m01-esx02.vcf.sddc.lab       10.230.20.212
ESX Host 3 MD               m01-esx03.vcf.sddc.lab       10.230.20.213
ESX Host 4 MD               m01-esx04.vcf.sddc.lab       10.230.20.214
ESX Host 1 WLD 1            w01-esx01.vcf.sddc.lab       10.230.20.221
ESX Host 2 WLD 1            w01-esx02.vcf.sddc.lab       10.230.20.222
ESX Host 3 WLD 1            w01-esx03.vcf.sddc.lab       10.230.20.223
VCF Installer               installer.vcf.sddc.lab       10.230.20.9
SDDC Manager                vcf01.vcf.sddc.lab           10.230.20.10
VCF Fleet Management        flt-fm01.vcf.sddc.lab        10.230.20.11
VCF Operations              flt-ops01.vcf.sddc.lab       10.230.20.20
VCF Operations Collector    opsc01.vcf.sddc.lab          10.230.20.21
VCF Automation              flt-auto01.vcf.sddc.lab      10.230.20.30
VCF Automation Node A       flt-auto01a.vcf.sddc.lab     10.230.20.31
vCenter Server MD           m01-vc01.vcf.sddc.lab        10.230.20.41
NSX Manager VIP (MD)        m01-nsx01.vcf.sddc.lab       10.230.20.42
NSX Manager Node A (MD)     m01-nsx01a.vcf.sddc.lab      10.230.20.43
vCenter Server (WLD 1)      w01-vc01.vcf.sddc.lab        10.230.20.51
NSX Manager VIP (WLD 1)     w01-nsx01.vcf.sddc.lab       10.230.20.52
NSX Manager Node A (WLD 1)  w01-nsx01a.vcf.sddc.lab      10.230.20.53
NSX Edge Node A (WLD 1)     w01-r01-en01.vcf.sddc.lab    10.230.20.54
NSX Edge Node B (WLD 1)     w01-r01-en02.vcf.sddc.lab    10.230.20.55
Supervisor Range (WLD 1)    -                            10.230.20.61-10.230.20.65
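
All of these names must resolve via DNS (forward and reverse) before deployment. For hosts that follow a numbering scheme, the records can be generated rather than typed by hand; here’s a small sketch that produces generic BIND-style A records for the four Management Domain hosts (adapt the syntax to whatever DNS server you run):

```shell
# Generate BIND-style A records for the Management Domain ESX hosts.
# Hostname and IP follow the numbering scheme from the table above.
records=$(for i in 1 2 3 4; do
  printf 'm01-esx0%d.vcf.sddc.lab.  IN A  10.230.20.21%d\n' "$i" "$i"
done)
echo "$records"
```

The same loop works for the three Workload Domain hosts by swapping the name prefix and the IP range.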

Preparing the ESX servers

The ESX servers in this lab will be deployed as nested appliances on my physical lab ESX server.

For the four Management Domain ESX servers, I’ll use the following specs:

  • 24 vCPUs. VCF Automation requires this many vCPUs for the initial setup, even for a single-node deployment, although the appliance can be scaled down after deployment.
    • Expose hardware assisted virtualization to the guest OS
  • 128 GB Memory
  • 2x VMXNET3 network adapter
    • Connected to my Trunk portgroup on the physical ESX host (VLAN 4095)
  • 1x VMware Paravirtual SCSI Controller
  • 1x NVME Controller
  • 1x 20 GB Harddisk connected to the SCSI Controller
  • 1x 256 GB Harddisk connected to the NVME Controller
  • VM Boot Options
    • Whether or not to enable UEFI secure boot for this VM: Disabled

For the three Workload Domain ESX servers, I’ll use the following specs:

  • 16 vCPUs
    • Expose hardware assisted virtualization to the guest OS
  • 64 GB Memory
  • 2x VMXNET3 network adapter
    • Connected to my Trunk portgroup on the physical ESX host (VLAN 4095)
  • 1x VMware Paravirtual SCSI Controller
  • 1x NVME Controller
  • 1x 20 GB Harddisk connected to the SCSI Controller
  • 1x 128 GB Harddisk connected to the NVME Controller
  • VM Boot Options
    • Whether or not to enable UEFI secure boot for this VM: Disabled

Now that we have the deployment specification for our nested ESX servers, we must install and configure them.

First, get the ESX 9.0.1 installer ISO from the Broadcom Support Portal.

Now use the following procedure for each of the ESX hosts:

Mount the ESX installer ISO to the ESX server you’re going to install. Follow the installation wizard as usual; just make sure to install the OS to the hard disk on the SCSI controller. Also make sure to provide a sufficiently complex password for the root user.

After we’ve installed ESX, we configure the Management network and enable SSH using the ESX DCUI, e.g. for m01-esx01:

  • Network Adapters: vmnic0
  • VLAN: 20
  • IPv4 Address: 10.230.20.211
  • Subnet Mask: 255.255.255.0
  • Default Gateway: 10.230.20.1
  • Primary DNS Server: 10.230.10.4
  • Hostname: m01-esx01.vcf.sddc.lab
  • Custom DNS Suffixes: vcf.sddc.lab
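
If you prefer the ESX Shell over the DCUI, roughly the same settings can be applied with esxcli. This is only a sketch for m01-esx01; the option names reflect recent ESX releases, so double-check them against your build with `esxcli network ip interface ipv4 set --help`:

```shell
# Tag the Management Network port group with VLAN 20
esxcli network vswitch standard portgroup set -p "Management Network" -v 20
# Static IPv4 configuration for the management vmkernel interface
esxcli network ip interface ipv4 set -i vmk0 -t static \
  -I 10.230.20.211 -N 255.255.255.0 -g 10.230.20.1
# DNS server, search suffix and FQDN
esxcli network ip dns server add -s 10.230.10.4
esxcli network ip dns search add -d vcf.sddc.lab
esxcli system hostname set --fqdn=m01-esx01.vcf.sddc.lab
```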

Next, we must configure the ESX OS for VCF host commissioning. To allow for vSAN ESA in our nested lab, the first step is to install a so-called vSAN ESA HCL hardware mock VIB. You can grab the necessary file nested-vsan-esa-mock-hw.vib from William Lam’s GitHub page: https://github.com/lamw/nested-vsan-esa-mock-hw-vib/releases/tag/1.0.

Copy the VIB file to the ESX server’s root directory “/” using SCP. Then log in to the ESX host as root via SSH and execute the following commands:

esxcli software acceptance set --level CommunitySupported
esxcli software vib install -v /nested-vsan-esa-mock-hw.vib --no-sig-check
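
To confirm the installation worked, the package should show up in the host’s VIB list (the exact VIB name may differ slightly from the file name, hence the loose filter):

```shell
# List installed VIBs and filter for the mock-HW package
esxcli software vib list | grep -i mock
```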

Next, we set up NTP:

esxcli system ntp set --server=10.230.10.4
esxcli system ntp set --enabled=true
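
The resulting configuration can be verified right away:

```shell
# Show the configured NTP servers and whether the service is enabled
esxcli system ntp get
```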

Set up the host SSL certificate:

/sbin/generate-certificates

Reboot the ESX host:

reboot

After the ESX host has been rebooted, connect to the ESX host client using a web browser and check if the new SSL certificate has been applied and shows the correct hostname, e.g.:
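
Alternatively, the certificate can be checked from any workstation shell that can reach the host; a sketch using openssl:

```shell
# Print subject and validity of the certificate presented on port 443
echo | openssl s_client -connect m01-esx01.vcf.sddc.lab:443 2>/dev/null \
  | openssl x509 -noout -subject -enddate
```

The subject should contain the host’s FQDN, e.g. m01-esx01.vcf.sddc.lab.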

VCF Installer Deployment

The VMware Cloud Foundation Installer is a single virtual appliance that deploys and configures all the required VMware Cloud Foundation components. In our lab, we’re going to deploy the VCF Installer directly on the physical ESX host.

The VCF Installer is provided as an OVA file, which you can download from the Broadcom Support Portal. Once you’ve downloaded the OVA file, deploy a new VCF Installer appliance from it, e.g.:

In our lab, we have connected the VCF Installer VA to a network where the appliance is able to access all required external services, such as DNS and NTP, as well as the VCF Management components. Thus, we’re deploying it in our ESX/VM Management network 10.230.20.0/24 (VLAN 20).

Once the VA has been deployed and powered on, the VCF Installer GUI can be accessed from a web browser using https://installer.vcf.sddc.lab.

We enter the admin@local user and the password we’ve provided when we deployed the appliance and then click Log In.

VCF Installer Binary Management

Before we can deploy VMware Cloud Foundation, we must download the required binaries to the VCF Installer appliance.

If the VCF Installer appliance can connect to the internet (either directly or through a proxy server), we can connect it to an online depot and download the binaries using a download token generated from the Broadcom Support Portal.

To get started, we set up an online depot by clicking on Depot Settings and Binary Management:

We connect to an online depot by clicking on Configure and providing the Download Token. Then we click on the Authenticate button.

We are now ready to download the binaries.

In the Binary Management section, select the product and version from the drop-down menus. The UI displays all the required binaries for our product and version, including information about the file size and download status.

We select the required binaries to download and click Download.

The VCF Installer downloads and validates the binaries. Progress is displayed in the Download Status column.

After the binaries are successfully downloaded, we are ready to deploy VCF.

VCF Installer Deployment Wizard

We use the VCF Installer deployment wizard to deploy a new VCF fleet.

When we deploy the first VCF instance in a VCF fleet, we can deploy a new VCF Operations instance or connect to an existing one. When we expand a VCF fleet with an additional VCF instance, we must connect to an existing VCF Operations instance. In our case, we’re going to deploy a new VCF fleet.

We use the VCF Installer deployment wizard to specify deployment information specific for our environment such as networks, hosts, and other information. The VMware Cloud Foundation platform is automatically deployed and configured using the information provided.

Click Deployment Wizard > VMware Cloud Foundation.

Select Deploy a new VCF Fleet.

As we don’t use any existing components, we just click on Next.

Enter general information, then click Next.

Enter the VCF Operations details, then click Next.

Enter the VCF Automation details, then click Next.

Enter the vCenter Server details, then click Next.

Enter the NSX Manager details, then click Next.

Provide the Storage configuration, then click Next.

Enter the ESX host details, then click Next.

Enter details about the networks, then click Next.

In our lab, we use the ESX Management Network information (gateway, VLAN, MTU) to create the VM Management Network distributed port group, so we select the corresponding checkbox.

Then, we specify IP inclusion ranges for the vSAN and vMotion networks of the management domain. IP addresses from the specified range are automatically assigned to ESX hosts.

Enter the vSphere Distributed Switch details. We select the Default Switch Configuration. It provides a unified fabric for all traffic types using a single vSphere Distributed Switch.

Next, we specify the Distributed Switch configuration by expanding the selected Distributed Switch.

We examine the portgroup details and scroll down to the NSX Network Traffic section. Here, we specify the network configuration for the NSX Overlay network pool. Once finished, we click Next.

Enter the SDDC Manager details, then click Next.

We review the deployment information and download the JSON specification file in case we want to redeploy the environment later. Then we click Next.

We review the validation information. The VCF Installer validates the information we’ve provided in the deployment wizard and reports any errors or warnings. Note that I got a warning regarding NTP time synchronization on the VCF Installer appliance. I checked the NTP service by logging in to the appliance via SSH and executing ntpq -p, which returned a proper setup.

ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 dc1.sddc.lab    .LOCL.           1 u   41   64    7    0.806   -0.220   0.229

Nevertheless, I wasn’t able to get rid of the warning, so I just acknowledged it…

Then we click on Deploy.

The deployment starts…

After a few hours, the deployment of the Management Domain finished successfully.