Since I now have a shiny new lab server, I wanted a way to easily deploy and destroy a VMware Cloud Foundation (VCF) environment for learning and presentation purposes.
Deploying a full VCF stack is a lengthy process in which many components must be considered and need to fit together: the VCF systems themselves as well as surrounding systems such as Active Directory or upstream routers. To make the deployment easily repeatable, the whole process must be automated. Luckily, smart people at VMware have done exactly that and created the Holodeck Toolkit for this use case. Holodeck lets us deploy a nested VCF environment on a single ESXi host in an automated fashion.
In this blog post, I’ll describe my experience deploying a single VCF 5.1.1 instance using the Holodeck Toolkit 2.0. Although the official Holodeck documentation is quite extensive, I did run into some issues during my initial deployments, which I’m going to describe here as well.
Holodeck 2.0 can deploy the following components almost fully automatically using the VCF Lab Constructor (VLC) 5.x package:
- VLC configuration for two VCF sites
- Four node VCF management domain
- Optionally, three additional nested hosts in a workload domain
- NSX fully configured
- AVN/NSX Edge Deployed
- Tanzu deployed
- Surrounding systems to provide DHCP, NTP, DNS, BGP peering and L3 routing configured on the Cloud Foundation Cloud Builder VM
- An upstream router called the Holo Router, which is responsible for connecting the nested environment to the external world
- A Windows Server jump host called Holo Console, which also acts as Active Directory server and certificate authority
The diagram below depicts the overall Holodeck design for Site 1:
The Holodeck deployment process consists of the following steps:
- Create a custom Windows Server 2019 ISO image for the Holo Console installation, which also contains all required installation components for VCF
- Prepare the ESXi host for the nested environment
- Deploy Holo Console
- Deploy Holo Router
- Bringup of VCF
Setup of the Holo custom ISO
The Holo Custom ISO will contain all software components used to bootstrap the Holodeck deployment. The ISO is built on a Windows system with at least 200 GB of free disk space on the partition where the build process is carried out. The default path is C:\Users\Administrator\Downloads. On my build system, I've changed this to D:\Store\Holodeck due to storage limitations.
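A quick way to check whether the chosen partition has enough free space for the build, assuming the build drive is D: as in my setup:
# Show free space on the build drive in GB (at least 200 GB is required)
Get-PSDrive -Name D | Select-Object Name, @{Name='FreeGB'; Expression={[math]::Round($_.Free / 1GB, 1)}}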
After downloading the components, including the Holodeck setup package, we uncompress the Holodeck setup package into our build directory D:\Store\Holodeck. Then we change into the Holo-Console directory and adjust the file \holodeck-standard-main5.1.1\create-ISO.ps1 to match the software component file and path names; we must also enter the license keys for ESXi, NSX, vCenter, and vSAN. In my environment the file looks as follows:
Note: If you don’t enter a product license in the create-ISO.ps1
file, the evaluation license will be used for the product.
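As a rough, generic sketch of the kind of values that need to be adjusted (the variable names below are placeholders I chose for illustration, not necessarily the names used inside the actual script, so always edit the variables the downloaded create-ISO.ps1 actually defines):
# Illustrative placeholders only -- the real script defines its own variable names
$buildPath      = "D:\Store\Holodeck"                      # directory holding the downloaded components
$licenseESXi    = "XXXXX-XXXXX-XXXXX-XXXXX-XXXXX"          # ESXi license key
$licenseNSX     = "XXXXX-XXXXX-XXXXX-XXXXX-XXXXX"          # NSX license key
$licenseVCenter = "XXXXX-XXXXX-XXXXX-XXXXX-XXXXX"          # vCenter license key
$licenseVSAN    = "XXXXX-XXXXX-XXXXX-XXXXX-XXXXX"          # vSAN license key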
Two more files in the Holo-Console directory need to be adjusted to include Notepad++ (a sketch of the entries follows the list):
- additionalfiles.txt
- additionalcommands.txt
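As an illustration of what the Notepad++ entries could look like, they can be appended as below; the installer file name and the exact line format are assumptions, so follow the comments inside the two files shipped with the Holodeck kit:
# Assumption: additionalfiles.txt lists extra files to copy onto the ISO,
# additionalcommands.txt lists commands to run during the unattended install.
# The Notepad++ installer file name below is a placeholder.
Add-Content -Path .\additionalfiles.txt    -Value "D:\Store\Holodeck\npp.Installer.x64.exe"
Add-Content -Path .\additionalcommands.txt -Value "C:\Users\Administrator\Downloads\npp.Installer.x64.exe /S"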
Now that everything is in place, we can build the ISO file by executing createISO.ps1:
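A minimal way to run the build from an elevated PowerShell prompt; the -ExecutionPolicy Bypass flag is only needed if script execution is restricted on the build system:
# Run the ISO build from the Holo-Console directory of the extracted Holodeck package
Set-Location D:\Store\Holodeck\holodeck-standard-main5.1.1\holodeck-standard-main\Holo-Console
powershell.exe -ExecutionPolicy Bypass -File .\createISO.ps1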
The successful build produces a 60 GB ISO image named CustomWindows-xxxxxx.iso inside the Holo-Console directory. We rename this file to Holo-Console5.1.1.iso and upload it to the ESXi host datastore of our choice.
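The upload can be done through the Host Client's datastore browser, or with a PowerCLI sketch like the one below; the ESXi host name and datastore name are placeholders from my environment's perspective:
# Upload the renamed ISO to an ESXi datastore with PowerCLI
Connect-VIServer -Server esxi01.lab.local -User root                        # placeholder ESXi host
$ds = Get-Datastore -Name "datastore1"                                      # placeholder datastore
New-PSDrive -Name ds -PSProvider VimDatastore -Root "\" -Location $ds | Out-Null
Copy-DatastoreItem -Item "D:\Store\Holodeck\holodeck-standard-main5.1.1\holodeck-standard-main\Holo-Console\Holo-Console5.1.1.iso" -Destination "ds:\"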
Preparation of the physical ESXi host
On my physical lab ESXi host, I have a single uplink, vmnic0, attached to the default VSS vSwitch0. On this vSwitch I've created a port group named VM_4090, which connects systems to the provider network fabric.
For the two Holo sites, I've created two more VSS, each without an uplink and with one port group per site (see the PowerCLI sketch after the list):
- vSwitch1
  - MTU: 9000
  - Physical adapters: none
  - Port group VLC-A
    - VLAN ID: 4095
    - Security: Promiscuous mode: Accept, MAC address changes: Accept, Forged transmits: Accept
- vSwitch2
  - MTU: 9000
  - Physical adapters: none
  - Port group VLC-A2
    - VLAN ID: 4095
    - Security: Promiscuous mode: Accept, MAC address changes: Accept, Forged transmits: Accept
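Here is a hedged PowerCLI sketch of the Site 1 part of this configuration; repeat it with vSwitch2/VLC-A2 for Site 2, and replace the placeholder host name with your own (an existing Connect-VIServer session is assumed):
# Create an uplink-less standard vSwitch with jumbo frames and the VLC-A trunk port group (VLAN 4095)
$vmhost = Get-VMHost -Name esxi01.lab.local                                 # placeholder ESXi host
$vss    = New-VirtualSwitch -VMHost $vmhost -Name vSwitch1 -Mtu 9000
$pg     = New-VirtualPortGroup -VirtualSwitch $vss -Name "VLC-A" -VLanId 4095
# Relax the security policy so the nested ESXi hosts can pass traffic
$pg | Get-SecurityPolicy | Set-SecurityPolicy -AllowPromiscuous $true -MacChanges $true -ForgedTransmits $true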
Deployment of Holo Console
To deploy the Holo Console, we create a new VM on the lab ESXi server and connect our Holo-Console5.1.1.iso to it. When the VM is powered on, Windows Server 2019 is installed automatically as the guest OS and configured accordingly (e.g. networking is set up and the defined applications are installed).
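The VM can be created interactively in the Host Client, or sketched out with PowerCLI as below; the sizing values and datastore name are placeholders (use the sizing from the Holodeck documentation), and the NIC is attached to the Site 1 port group VLC-A:
# Create the Holo Console VM and boot it from the custom ISO (sizing and datastore are placeholders)
$vmhost = Get-VMHost -Name esxi01.lab.local                                 # placeholder ESXi host
$vm = New-VM -Name Holo-Console -VMHost $vmhost -Datastore "datastore1" -NumCpu 4 -MemoryGB 16 -DiskGB 200 -DiskStorageFormat Thin -NetworkName "VLC-A"
New-CDDrive -VM $vm -IsoPath "[datastore1] Holo-Console5.1.1.iso" -StartConnected | Out-Null
Start-VM -VM $vm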
Once the VM is created, we start it and wait until the unattended installation is finished. The installation process takes approximately 30 minutes; we can monitor it using the web console of the ESXi Host Client. After a successful installation, the Holo Console desktop looks as follows:
Note, that we aren’t yet able to connect to Holo Console via the network, we must first deploy the Holo Router.
Deployment of Holo Router
We deploy the Holo Router using the file HoloRouter-2.0.ova, located in D:\Store\Holodeck\holodeck-standard-main5.1.1\holodeck-standard-main\Holo-Router on our Holo build system.
To deploy the VM from the OVA file, we connect to the ESXi server using the ESXi Host Client and click Create/Register VM > Deploy a virtual machine from an OVF or OVA file.
We must connect the Holo Router to three port groups:
- ExternalNet: VM_4090 (on vSwitch0)
- Site_1_Net: VLC-A (on vSwitch1)
- Site_2_Net: VLC-A2 (on vSwitch2)
We must then provide appropriate values for the following attributes under Deployment options (don't change anything else!):
- External IP
- External Subnet Mask
- External gateway
After we click Finish, the Holo Router VM is deployed. We can again monitor the installation progress using the web console in the ESXi Host Client. After a few minutes, the Holo Router should be deployed and configured successfully.
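The same deployment can be scripted with PowerCLI; this is only a sketch, I'm assuming the OVA's network names match the wizard labels, and the OVF property keys for External IP, External Subnet Mask, and External gateway must be taken from the output of ToHashTable():
# Deploy HoloRouter-2.0.ova with PowerCLI and map its three networks (existing Connect-VIServer session assumed)
$vmhost = Get-VMHost -Name esxi01.lab.local                                 # placeholder ESXi host
$ova    = "D:\Store\Holodeck\holodeck-standard-main5.1.1\holodeck-standard-main\Holo-Router\HoloRouter-2.0.ova"
$ovf    = Get-OvfConfiguration -Ovf $ova
$ovf.ToHashTable()                                                          # inspect the available keys, including the External IP settings
$ovf.NetworkMapping.ExternalNet.Value = "VM_4090"                           # assumption: OVA network names match the wizard labels
$ovf.NetworkMapping.Site_1_Net.Value  = "VLC-A"
$ovf.NetworkMapping.Site_2_Net.Value  = "VLC-A2"
# set the External IP / Subnet Mask / Gateway keys reported by ToHashTable() before importing
Import-VApp -Source $ova -OvfConfiguration $ovf -VMHost $vmhost -Datastore "datastore1" -DiskStorageFormat Thin -Name Holo-Router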
You can now connect to the Holo Console via RDP using the external IP address of the Holo Router; in our case it's 172.31.1.111.
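A quick check from any machine on the external network that the RDP port is reachable through the Holo Router before opening the session:
# Verify that TCP 3389 is reachable on the Holo Router's external IP (forwarded to Holo Console)
Test-NetConnection -ComputerName 172.31.1.111 -Port 3389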
VCF Bringup
On the Holo Console, we start the VCF Lab Constructor (VLC) by navigating in Windows File Explorer to C:\VLC\VLC-Holo-Site-1 and executing VLCGui.ps1 with PowerShell.
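Equivalently, from a PowerShell console on the Holo Console; the -ExecutionPolicy Bypass flag is only an assumption in case local script execution is restricted:
# Launch the VLC GUI for Site 1
Set-Location C:\VLC\VLC-Holo-Site-1
powershell.exe -ExecutionPolicy Bypass -File .\VLCGui.ps1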
In the VLC GUI, we load the configuration for Holo Site 1 by clicking the Automated area in the upper left. Fill in the values as needed; in my lab the configuration looks as follows:
Once everything is properly filled out, we click on Construct!. VLC will now begin to deploy the VCF environment.
This whole bringup process takes approximately three and a half hours on my lab server.
Note: If you encounter issues during the installation, you can connect to the Cloud Builder VM via VMRC and check the bringup logs at /opt/vmware/bringup/logs/vcf-bringup-debug.log.
After the successful VCF bringup, we have the following components deployed:
- SDDC Manager
- A four-node management cluster with vSAN storage and Workload Management enabled
- NSX for the management domain fully configured
- A supervisor cluster to run Kubernetes
- AVN networking for Aria components enabled
- Cloud Builder VM configured as DNS/NTP/DHCP server, also configured for L3 routing and BGP routing with the NSX fabric on the management domain
We can navigate to the SDDC Manager web UI at https://sddc-manager.vcf.sdd.lab and manage our VCF instance from here (e.g. commission hosts for a Workload Domain, deploy Aria components, …):
Note: After the successful deployment, we should reboot the Holo Console to remove its initial, temporary routing configuration (routing is now done through the Cloud Builder VM: east-west to the NSX fabric and north-south to the Holo Router).
Note: I’ve experienced some issues with double NAT in Holodeck 5.1.1, which prevented RDP connections to Holo Console after Cloud Builder has been taken over its routing functionality (while all other network communication was fine — inside the nested environment, and outside the nested environment). To solve this, I had to remove a particular SNAT rule on the Cloud Builder VM:
First, get a list of the configured NAT rules with their line numbers:
iptables -t nat -nL --line-numbers
Then delete the problematic rule #7:
iptables -t nat -D POSTROUTING 7
For me, RDP connections to the Holo Console immediately started working again. To make this NAT change persistent across reboots, the following line must also be removed from the file /etc/systemd/scripts/ip4save:
-A POSTROUTING -s 10.0.0.0/24 -o eth0.10 -j SNAT --to-source 10.0.0.221