In this blog post, I’ll cover the basic setup of the nested VMware lab.

As described in the last post, I’m using a single HPE ProLiant rack mount server with 256 GB of memory, two Intel Xeon E5-2630 v2 CPUs (6 cores/12 threads each), and some decent SSDs as the storage backend. The storage is presented as a single datastore, and a nested file server exports NFS shares to the individual solutions.

The networking setup of the physical ESXi server consists of two standard vSwitches:

  • One vSwitch, named vSwitch0, provides the uplink to my home network and carries the VMkernel management interface as well as a VM network portgroup for dual-homed VMs (such as my virtual lab router VMs or the jump box).
  • The other — vSwitch1 — is an internal vSwitch without any physical uplinks and is used for nested virtualization only.
Physical ESXi server vSwitch0

The second vSwitch, which is responsible for the nested environment, is configured to accept forged transmits, promiscuous mode, and MAC address changes. It has also been configured with jumbo frames (an MTU of 9000 bytes) to play nice with my nested NSX-T setup.
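
For reference, the same settings can be applied from the ESXi shell with esxcli; a minimal sketch, assuming the vSwitch has already been created as vSwitch1:

# Enable jumbo frames on the internal vSwitch
esxcli network vswitch standard set -v vSwitch1 -m 9000
# Allow promiscuous mode, forged transmits and MAC address changes (required for nesting)
esxcli network vswitch standard policy security set -v vSwitch1 --allow-promiscuous true --allow-forged-transmits true --allow-mac-change true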

The following networks have been configured on the internal vSwitch1:

Network Name                VLAN ID  CIDR             Gateway         MTU
ESXi Trunk                  4095     n/a              n/a             9000
Nested Management Network   1611     172.16.11.0/24   172.16.11.253   1500
vMotion Network             1612     172.16.12.0/24   172.16.12.253   9000
vSAN Network                1613     172.16.13.0/24   172.16.13.253   9000
NSX-T Edge Uplink 1         2711     172.27.11.0/24   172.27.11.1     9000
NSX-T Edge Uplink 2         2712     172.27.12.0/24   172.27.12.1     9000
NSX-T Edge Overlay          2713     172.27.13.0/24   172.27.13.1     9000
NSX-T Host Overlay          1614     172.16.14.0/24   172.16.14.1     9000
Home lab networks
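
The corresponding portgroups can be created with esxcli as well; shown here for the trunk and the management network, the remaining portgroups follow the same pattern:

# Create each portgroup and tag it with its VLAN ID
esxcli network vswitch standard portgroup add -v vSwitch1 -p "ESXi Trunk"
esxcli network vswitch standard portgroup set -p "ESXi Trunk" --vlan-id 4095
esxcli network vswitch standard portgroup add -v vSwitch1 -p "Nested Management Network"
esxcli network vswitch standard portgroup set -p "Nested Management Network" --vlan-id 1611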

Supporting systems

To get access into the lab, a Windows Server 2019 system is used as a dual-homed jump server named labjump1 (one NIC in the home network, one NIC in the lab management network).

The following table summarizes the deployed supporting infrastructure VMs:

VM Name      vCPUs  Memory GB  Storage GB  Networks
dc1          2      4          90          Nested Management Network
labjump1     2      4          90          Home Network, Nested Management Network
nfs1         1      1          416         Nested Management Network, vSAN Network
tor-router1  1      1          8
tor-router2  1      1          8
Deployed supporting infrastructure VMs

Domain controller

The required supporting services for the lab run on a Windows Server 2019 domain controller VM called dc1 on the physical ESXi host. These services are DNS, NTP, Active Directory, and mail.

The following server roles and features have been installed on the domain controller:

  • Active Directory Certificate Services
  • Active Directory Domain Services
  • DNS Server

I’ve made sure that the management tools are included for all of the roles (where applicable).

During the role configuration, I selected Certification Authority as the role service so that I can create and manage SSL certificates later on.
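
Instead of clicking through Server Manager, the roles can also be installed in one go with PowerShell (a sketch using the built-in feature names):

# Install AD DS, DNS and the Certification Authority role service incl. management tools
Install-WindowsFeature -Name AD-Domain-Services, DNS, Adcs-Cert-Authority -IncludeManagementTools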

After the installation of the roles and features has completed, the server must be promoted to a domain controller.

I created a new forest and specified lab.local as the root domain name.
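
In PowerShell, the promotion boils down to something like this (the NetBIOS name is my assumption; the cmdlet prompts for the DSRM password and reboots the server):

# Create a new forest with lab.local as the root domain
Install-ADDSForest -DomainName "lab.local" -DomainNetbiosName "LAB" -InstallDns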

The following table summarizes the host names and IP addresses of all lab systems configured in DNS:

Name            Type      Data
mail            CNAME     dc1.lab.local
dc1             Host (A)  172.16.11.4
labjump1        Host (A)  172.16.11.10
cli1            Host (A)  172.16.11.11
vc1             Host (A)  172.16.11.200
vc2             Host (A)  172.16.11.201
nsx1            Host (A)  172.16.11.205
nsx1a           Host (A)  172.16.11.206
esx1            Host (A)  172.16.11.211
esx2            Host (A)  172.16.11.212
esx3            Host (A)  172.16.11.213
vcd1            Host (A)  172.16.11.220
vcd1a           Host (A)  172.16.11.221
nfs1            Host (A)  172.16.11.225
ampq1           Host (A)  172.16.11.226
ampq1a          Host (A)  172.16.11.227
vrslcm1         Host (A)  172.16.11.230
wsa1            Host (A)  172.16.11.231
wsa1a           Host (A)  172.16.11.232
default-tenant  Host (A)  172.16.11.232
tenant1         Host (A)  172.16.11.232
tenant2         Host (A)  172.16.11.232
vra1            Host (A)  172.16.11.233
vrops1          Host (A)  172.16.11.238
vrops1a         Host (A)  172.16.11.239
vrli1           Host (A)  172.16.11.240
vrli1a          Host (A)  172.16.11.241
nsx-en1         Host (A)  172.16.11.69
nsx-en2         Host (A)  172.16.11.70
vspherek8s      Host (A)  192.168.21.1
DNS entries on the domain controller
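
The records can also be created with the DnsServer PowerShell module instead of the DNS manager GUI; two examples matching the table above:

# Forward A record and CNAME in the lab.local zone
Add-DnsServerResourceRecordA -ZoneName "lab.local" -Name "esx1" -IPv4Address "172.16.11.211"
Add-DnsServerResourceRecordCName -ZoneName "lab.local" -Name "mail" -HostNameAlias "dc1.lab.local"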

To send system notifications via email, I’ve also set up a mail server. I chose hMailServer because it’s lightweight, easy to install, and nicely configurable via a GUI.

Storage server

An Ubuntu 20.04 LTS server VM named nfs1 is used as the file server and provides storage to the ESXi servers and later to other solutions (e.g. VCD transfer storage) via the NFS protocol.

The system has several large virtual disks that are aggregated with LVM inside the guest OS to export two ext4 volumes via NFS version 3 (a sketch of the layout follows the list):

  • A volume for the ESXi datastores
  • A volume for the VCD transfer storage
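
The LVM layout could look like the following sketch; the device names, volume group name, and volume sizes are assumptions, adjust them to the actual virtual disks:

# Turn the data disks into physical volumes and group them
sudo pvcreate /dev/sdb /dev/sdc
sudo vgcreate nfsdata /dev/sdb /dev/sdc
# One logical volume per export, formatted with ext4
sudo lvcreate -n esx -L 300G nfsdata
sudo lvcreate -n vcd -L 100G nfsdata
sudo mkfs.ext4 /dev/nfsdata/esx
sudo mkfs.ext4 /dev/nfsdata/vcd
# Mount them where the NFS exports expect them
sudo mkdir -p /srv/nfs/esx /srv/nfs/vcd
sudo mount /dev/nfsdata/esx /srv/nfs/esx
sudo mount /dev/nfsdata/vcd /srv/nfs/vcd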

The NFS exports are configured as follows (for simplicity, I’m using the vSAN network to connect NFS server and ESXi/VCD servers):

/srv/nfs/vcd 172.16.13.0/24(rw,sync,no_root_squash,no_subtree_check)
/srv/nfs/esx 172.16.13.0/24(rw,sync,no_root_squash,no_subtree_check)
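
After editing /etc/exports, the exports are activated with exportfs. A nested ESXi host can then mount the datastore over the vSAN network; the NFS server’s IP on that network and the datastore name are assumptions:

# On nfs1: re-export everything in /etc/exports
sudo exportfs -ra
# On each nested ESXi host: mount the NFSv3 datastore
esxcli storage nfs add -H 172.16.13.225 -s /srv/nfs/esx -v nfs1-esx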

Virtual lab routers

The routing between the home network and the lab network is done by two VyOS virtual routing appliances.

The basic configuration of the first lab router (tor-router1) is done as follows (172.16.11.4 is the IP address of the lab domain controller; 192.168.123.1 is the IP address of my home router):

set interfaces ethernet eth0 address 192.168.123.221/24
set interfaces ethernet eth0 description 'Home Network'
set service ssh
set system host-name tor-router1
set system ntp server 172.16.11.4

set interfaces ethernet eth1 address 172.16.11.253/24
set interfaces ethernet eth2 address 172.16.12.253/24
set interfaces ethernet eth3 address 172.16.13.253/24
set interfaces ethernet eth4 address 172.16.14.1/24
set interfaces ethernet eth5 address 172.27.11.1/24
set interfaces ethernet eth6 address 172.27.13.1/24

set interfaces ethernet eth1 description VLAN-1611_esx_mgmt
set interfaces ethernet eth2 description VLAN-1612_vmotion
set interfaces ethernet eth3 description VLAN-1613_vsan
set interfaces ethernet eth4 description VLAN-1614_nsx_host_overlay
set interfaces ethernet eth5 description VLAN-2711_nsx_edge_uplink1
set interfaces ethernet eth6 description VLAN-2713_nsx_edge_overlay

set interfaces ethernet eth1 mtu 1500
set interfaces ethernet eth2 mtu 9000
set interfaces ethernet eth3 mtu 9000
set interfaces ethernet eth4 mtu 9000
set interfaces ethernet eth5 mtu 9000
set interfaces ethernet eth6 mtu 9000

set service dhcp-server shared-network-name dhcp-1614 subnet 172.16.14.0/24 default-router 172.16.14.1
set service dhcp-server shared-network-name dhcp-1614 subnet 172.16.14.0/24 dns-server 172.16.11.4
set service dhcp-server shared-network-name dhcp-1614 subnet 172.16.14.0/24 range 0 start 172.16.14.101
set service dhcp-server shared-network-name dhcp-1614 subnet 172.16.14.0/24 range 0 stop 172.16.14.130

set protocols static route 0.0.0.0/0 next-hop 192.168.123.1 distance 1

set nat source rule 100 outbound-interface eth0
set nat source rule 100 translation address masquerade

set nat source rule 101 outbound-interface eth0
set nat source rule 101 source address 172.16.11.0/24
set nat source rule 101 translation address masquerade

set nat source rule 102 outbound-interface eth0
set nat source rule 102 source address 172.27.11.0/24
set nat source rule 102 translation address masquerade

set nat source rule 103 outbound-interface eth0
set nat source rule 103 source address 172.27.12.0/24
set nat source rule 103 translation address masquerade

commit
save
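
Once committed, the result can be verified from the VyOS operational mode, for example:

show interfaces
show ip route
show nat source rules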

The basic configuration of the second lab router (tor-router2) is done as follows:

set interfaces ethernet eth0 address 192.168.123.222/24
set interfaces ethernet eth0 description 'Home Network'
set service ssh
set system host-name tor-router2
set system ntp server 172.16.11.4

set interfaces ethernet eth1 address 172.27.12.1/24
set interfaces ethernet eth1 description VLAN-2712_nsx_edge_uplink2
set interfaces ethernet eth1 mtu 9000

set protocols static route 0.0.0.0/0 next-hop 192.168.123.1 distance 1

commit
save

To play with VMware Cloud Foundation workload domains, I’ve also configured two additional routers in a similar way (using only the default VCF WLD networks).