In this lab session, I want to transform my workload cluster into a “native Kubernetes platform” by using vSphere with Tanzu.
VMware Tanzu is a portfolio of products and solutions that allows its customers to build, run, and manage Kubernetes-controlled, container-based applications.
In the Operations (or Run) catalog depicted above, VMware has different implementations for Tanzu Kubernetes Grid, all of which provision and manage the lifecycle of Tanzu Kubernetes clusters on multiple platforms. It consists of the following options:
- vSphere with Tanzu: Also known as Tanzu Kubernetes Grid Service (TKGS). Runs Kubernetes workloads natively in vSphere and enables self-provisioning of Tanzu Kubernetes clusters running on vSphere with Tanzu.
- Tanzu Kubernetes Grid (TKG): TKG is a standalone offering whose origins lie in VMware’s acquisition of Heptio. It is installed as a management cluster, which is itself a Kubernetes cluster, that deploys and operates the Tanzu Kubernetes clusters. These Tanzu Kubernetes clusters are the workload Kubernetes clusters on which the actual workloads are deployed.
- Tanzu Kubernetes Grid Integrated (TKGI): TKGi’s origins come from VMware’s acquisition of and joint development efforts with Pivotal. TKGI (formerly known as VMware Enterprise PKS) is a Kubernetes-based container solution with advanced networking, a private container registry, and life cycle management. TKGI provisions and manages Kubernetes clusters with the TKGI control plane, which consists of BOSH and Ops Manager.
In this session, we’ll cover vSphere with Tanzu.
Basic concepts
A cluster that is enabled with vSphere with Tanzu is called a Supervisor Cluster.
In my lab, the Supervisor Cluster runs on top of an SDDC layer that includes the following elements:
- ESXi for compute
- NSX-T Data Center for networking
- Shared storage solution for vSphere Pods, Tanzu Kubernetes clusters, and VMs that run inside the Supervisor Cluster
After a Supervisor Cluster is created, we can create namespaces in the Supervisor Cluster that are called Supervisor Namespaces.
We can then run workloads consisting of containers that run inside vSphere Pods. We can also create upstream Kubernetes clusters by using the Tanzu Kubernetes Grid Service.
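To give an idea of what using the Tanzu Kubernetes Grid Service looks like in practice, here is a minimal, hedged sketch of a TanzuKubernetesCluster manifest that could be applied once a Supervisor Namespace (such as the tenant-1 namespace we create later in this session) and a storage class exist. The cluster name, VM class, Kubernetes version, and storage class below are assumptions for this lab and should be adjusted to your environment:
kubectl apply -f - <<EOF
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tkc-1                       # hypothetical cluster name
  namespace: tenant-1               # Supervisor Namespace created later in this lab
spec:
  distribution:
    version: v1.18                  # resolved against the subscribed content library
  topology:
    controlPlane:
      count: 1
      class: best-effort-small      # VM class; adjust to what is available in your environment
      storageClass: kubernetes-storage-policy
    workers:
      count: 2
      class: best-effort-small
      storageClass: kubernetes-storage-policy
EOF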
Enabling vSphere with Tanzu Workload Management
To get started, the following steps must be performed:
- Create a Content Library
- Create a storage policy
- Verify that vSphere HA and DRS are enabled on the Compute cluster
- Enable vSphere with Tanzu
Create a Content Library
For the lab I’ll create a Content Library as follows:
In the vSphere Client, go to Menu > Content Libraries and click on +Create.
In the New Content Library wizard provide the following data:
- Name and location
  - Name: Kubernetes
  - vCenter Server: vc2.lab.local
- Configure content library
  - Subscribed content library
  - Subscription URL: https://wp-content.vmware.com/v2/latest/lib.json
  - Download content: when needed
- Add Storage
  - Datastore-1
Create a Storage Policy for Kubernetes
We must create a storage policy that will determine the datastore placement of the Kubernetes control plane VMs, containers, and images.
In the vSphere Client, we navigate to Menu > Tags & Custom Attributes and create a new category named Kubernetes-Category. Then we create a new tag called Kubernetes in the category Kubernetes-Category.
Then we navigate to Menu > Storage and assign the tag Kubernetes to the shared datastore Datastore-1.
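If you prefer the command line over the vSphere Client, the same category, tag, and tag assignment can also be done with govc from the govmomi project. This is only an optional sketch; it assumes govc is installed, GOVC_URL points at vc2.lab.local with valid credentials, and the datacenter name in the inventory path is a placeholder:
govc tags.category.create Kubernetes-Category
govc tags.create -c Kubernetes-Category Kubernetes
govc tags.attach Kubernetes /DC1/datastore/Datastore-1    # adjust the datacenter name to your inventory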
Time to create the Storage Policy: we navigate to Menu > Policies and Profiles and click on CREATE.
In the Create VM Storage Policy wizard, the following data is entered:
- Name and description
  - Name: Kubernetes-Storage-Policy
- Policy structure
  - Datastore specific rules: Enable tag based placement rules
- Datastore specific rules
  - Tag based placement
    - Tag category: Kubernetes-Category
    - Usage option: Use storage tagged with
    - Tags: Kubernetes
Verify that vSphere HA and DRS are enabled on the Compute cluster
vSphere HA and vSphere DRS must be enabled on the Compute cluster to support vSphere with Tanzu. We have enabled this during the core vSphere setup lab.
Another point worth checking is that trust is enabled in NSX-T Manager’s Compute Manager configuration for vc2.lab.local. We also enabled this during the NSX-T setup.
As we are in a lab environment and compute resources are always scarce, we can downscale the Supervisor Control Plane VMs from three to two nodes. To do so, we SSH into the vCSA of vc2.lab.local and add the following two lines to /etc/vmware/wcp/wcpsvc.yaml:
minmasters: 2
maxmasters: 2
Afterwards the WCP service must be restarted: service-control --restart wcp
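As an optional sanity check (plain vCSA shell commands, nothing lab-specific beyond the file path above), we can confirm the settings and the service state:
grep -E 'minmasters|maxmasters' /etc/vmware/wcp/wcpsvc.yaml   # should print the two lines added above
service-control --status wcp                                  # wcp should be reported as running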
Enable vSphere with Tanzu
We use the vSphere Client to enable vSphere with Tanzu on the Compute cluster in vc2.lab.local by navigating to Menu > Workload Management and then clicking on GET STARTED. This opens the Enable Workload Management wizard.
On the vCenter Server and Network section, select NSX as the networking stack option.
We select the cluster SA-Compute-1 from the list of compatible clusters.
For our lab environment it is sufficient to select Tiny as resource allocation for the Control Plane size.
On the Storage section, we assign the previously created storage policy Kubernetes-Storage-Policy to the control plane nodes, the ephemeral disks, and the image cache.
In the Management Network section, we define the following values for management networking:
- Network: Tanzu-1 (created in our NSX-T lab session)
- Starting IP Address: 192.168.20.10
- Subnet Mask: 255.255.255.0
- Gateway: 192.168.20.1
- DNS Server: 172.16.11.4
- DNS Search Domains: lab.local
- NTP Server: 172.16.11.4
In the Workload Network section, we define the following values for workload networking:
- vSphere Distributed Switch: SA-DSwitch-1
- Edge Cluster: nsx-ec1
- API Server Endpoint FQDN: vspherek8s.lab.local
- DNS Servers: 172.16.11.4
- Pod CIDRs: 100.100.0.0/20
- Service CIDRs: 100.200.0.0/22
- Ingress CIDRs: 192.168.21.0/24
- Egress CIDRs: 192.168.22.0/24
In the TKG configuration, we select the Kubernetes content library.
After hitting FINISH on the Review and Confirm section, Workload Management enablement starts. The entire process took close to an hour in my lab environment.
Configuring the Kubernetes CLI
To connect to the Supervisor Cluster, and later to TKG clusters, the kubectl CLI and the kubectl-vsphere plugin are required. In my lab environment I’ve prepared a small Ubuntu VM called cli1, on which we’re going to install the Kubernetes CLI Tools for vSphere.
After logging into cli1 via SSH, we can download and unpack the vsphere-plugin.zip package for Linux into the user’s home directory:
osadmin@cli1:~$ wget --no-check-certificate https://vspherek8s.lab.local/wcp/plugin/linux-amd64/vsphere-plugin.zip
osadmin@cli1:~$ unzip vsphere-plugin.zip
osadmin@cli1:~$ ls bin/
kubectl kubectl-vsphere
osadmin@cli1:~$ kubectl-vsphere version
kubectl-vsphere: version 0.0.6, build 17160549, change 8514897
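To make both binaries available in every shell session, I also put the bin directory on the PATH and enable bash completion for kubectl; these are generic Linux steps rather than anything required by the vSphere plugin package:
osadmin@cli1:~$ echo 'export PATH=$PATH:$HOME/bin' >> ~/.bashrc
osadmin@cli1:~$ echo 'source <(kubectl completion bash)' >> ~/.bashrc
osadmin@cli1:~$ source ~/.bashrc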
Creating and configuring a namespace
Namespaces in vSphere with Tanzu are a vSphere construct used for assigning a pool of resources and for defining permissions:
- vSphere administrators create namespaces
- Users can have permission to a namespace without requiring vSphere permissions
- CPU, memory, and storage resource limits can be defined
- Kubernetes object limits can be defined
To create a vSphere with Tanzu namespace, we perform the following steps:
- In the vSphere Client, go to Menu > Workload Management > Namespaces.
- Click CREATE NAMESPACE.
- Select the cluster: SA-Compute-1
- Enter the name for the namespace: tenant-1
- Click CREATE
Next we configure permissions and storage for this namespace:
- On the Summary tab for tenant-1, click ADD PERMISSIONS.
- Add the following permissions, then click OK:
  - Identity source: vsphere.local
  - User/Group: devops (a group I’ve created in vsphere.local, along with a user called devops01)
  - Role: Can edit
- On the Summary tab for tenant-1, click ADD STORAGE.
- In the Select Storage Policies window, select Kubernetes-Storage-Policy and click OK.
The tenant namespace has now been created and configured with a storage policy and user permissions.
To access the newly created namespace, we use the kubectl CLI on our devops VM cli1:
kubectl vsphere login --server 192.168.21.1 -u devops1@vsphere.local --insecure-skip-tls-verify
kubectl config use-context tenant-1
kubectl describe ns tenant-1
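As a quick smoke test of the new namespace, we can look at the resource quotas vSphere created for it and start a simple workload as a vSphere Pod; the deployment name and image below are arbitrary examples, not part of the lab requirements:
kubectl get resourcequota -n tenant-1
kubectl create deployment nginx --image=nginx -n tenant-1
kubectl get pods -n tenant-1 -o wide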
Enabling the Harbor Image Registry
vSphere with Tanzu ships with an embedded Harbor image registry. Harbor is deployed in a dedicated system namespace on the Supervisor Cluster and consists of several vSphere Pods.
To enable Harbor, we perform these steps:
- In the vSphere Client, click the SA-Compute-1 cluster and select Configure > Namespaces > Image Registry.
- Click ENABLE HARBOR.
- Select the VM Storage Policy that will be used to store the images and click OK. In our lab environment, Kubernetes-Storage-Policy will be used.
The deployment process starts, and a system-managed namespace (vmware-system-registry-xxxxxx) as well as the required Harbor vSphere Pods are created. Enabling Harbor can take several minutes.
Once the deployment is complete, we can verify the Harbor health. The Harbor UI address is shown and can be accessed using a web browser.
To get a more detailed view of the Harbor Registry namespace, we navigate to Menu > Workload Management and select the corresponding namespace, which in our example is vmware-system-registry-720990562.
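Once Harbor reports healthy, images can be pushed to it from a Docker-enabled host. The sketch below is an assumption-laden example: the registry address placeholder must be replaced with the Harbor UI address shown in the vSphere Client, the Harbor certificate must be trusted by the Docker host first, and the tenant-1 project is the one Harbor creates automatically for the namespace:
docker login <harbor-address>                                    # log in with a vSphere user that has access to tenant-1
docker tag nginx:latest <harbor-address>/tenant-1/nginx:latest
docker push <harbor-address>/tenant-1/nginx:latest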