Following up on my last post about the creation and configuration of an all-apps organization, we will now explore how to actually deploy traditional and modern workloads in this organization.

In this post I’ll show how to deploy a Linux virtual machine using the VKS VM Service and a Kubernetes guest cluster. Finally, we will deploy the popular retro game Doom in this cluster and access it via a VNC client through an external IP address.

Before we dive into the hands-on part, let’s quickly recap the benefits of using an all-apps organization with VCF Automation (VCFA). VCFA delivers a self-service private cloud platform that offers tailored cloud services to meet the specific needs of different application teams. Users can interact through a self-service catalog and directly consume IaaS resources through an intuitive and unified interface. The following private cloud services are available in VCFA all-apps organizations:

  • VM Service: declaratively define and deploy virtual machines.
  • Kubernetes Service: spin up Kubernetes clusters on demand.
  • Network Service: access self-service networking in Virtual Private Clouds (VPCs).

Additionally, services like Harbor and Cert Manager can be consumed.

For the following three scenarios, we must be logged into our all-apps organization acme. Using a web browser, we go to the tenant portal at https://flt-auto01.vcf.sddc.lab/tenant/acme/ and log in, e.g. as our local user acme-admin.

Deploy a virtual machine running an application using the VM Service

First, we’re going to deploy a traditional Ubuntu Linux VM running an nginx web server that delivers a static HTML page. The VM will have its network interface inside the default VPC of our organization, and the HTTP service will then be exposed externally through a load balancer.

In the tenant portal, we navigate to Build & Deploy > Virtual Machine. Make sure that the appropriate namespace has been selected in the dropdown menu above the Services section in the left navigation menu – in our case, the namespace is called department-1-dgc82. Click Create VM.

We select Deploy from OVF and click Next.

We select the zone and the VM image.

Then we select a VM class, the storage class and click Next.

In the Advanced Settings section under Load Balancer, we choose to create a Load Balancer by clicking Add > New.

We configure the Load Balancer with the SSH and HTTP ports. The result looks as follows:

Then we click Save and verify that the load balancer has been added to the configuration as vm-lb-6nha.
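Under the hood, the load balancer is represented as a Kubernetes resource in our namespace. A minimal sketch of what a roughly equivalent VirtualMachineService manifest could look like follows — note that the API version, selector label, and port details are assumptions for illustration, and the actual resource generated by VCFA may differ:

```yaml
# Illustrative sketch only -- the actual resource generated by VCFA may differ.
apiVersion: vmoperator.vmware.com/v1alpha2
kind: VirtualMachineService
metadata:
  name: vm-lb-6nha
  namespace: department-1-dgc82
spec:
  type: LoadBalancer
  ports:
    - name: ssh
      port: 22
      protocol: TCP
      targetPort: 22
    - name: http
      port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: ubuntu-web-vm   # hypothetical label assumed to match the VM
```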

We scroll down to the Guest Customization section, create a new user called osadmin, and set Default Sudo to Enable.

Then we set SSH Password Authentication to Enable and add the following commands to the cloudConfig Run Commands:

apt-get update
apt-get install -y nginx
systemctl enable nginx
systemctl start nginx

The result looks as follows. The Kubernetes Resource YAML on the right-hand side shows how the cloud-init configuration is applied. Then we click Next.
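For reference, the guest customization settings above end up as cloud-init data in the VM’s Kubernetes resource. A simplified sketch of such a VirtualMachine manifest could look like the following — the VM name, class, and image name are placeholders, and the exact schema of the generated resource may differ:

```yaml
# Simplified sketch -- field names and values are illustrative, not the exact VCFA output.
apiVersion: vmoperator.vmware.com/v1alpha2
kind: VirtualMachine
metadata:
  name: ubuntu-web-vm            # hypothetical VM name
  namespace: department-1-dgc82
spec:
  className: best-effort-small   # assumed VM class
  imageName: ubuntu-22.04        # assumed image name
  storageClass: nfs-default-sp
  bootstrap:
    cloudInit:
      cloudConfig:
        users:
          - name: osadmin
            sudo: ALL=(ALL) NOPASSWD:ALL
        runcmd:
          - apt-get update
          - apt-get install -y nginx
          - systemctl enable nginx
          - systemctl start nginx
```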

In the Network Configuration section, we configure our lab domain controller as the name server, then we click Next.

Finally, we review the configuration and click on Deploy.

The VM and its required resources are deployed. After a few minutes we can see that it has become ready.

Let’s examine the status of the Load Balancer by navigating to Network > Services.

We note the External IP and test if the SSH service is reachable from our client.

Finally, let’s check if the webpage is accessible at http://192.168.30.1.

Deploy a Kubernetes Guest Cluster

Deploying a Kubernetes cluster in VCF Automation enables us to request a ready-to-use Kubernetes environment that is automatically built and managed on top of the vSphere infrastructure. The procedure is as follows:

Navigate to Services > Kubernetes and click Create.

Select Custom Configuration and click Next.

In the General Settings section leave the default settings and click Next.

In the Control Plane section, we configure 1 replica, the VM class best-effort-xlarge with 4 vCPUs and 32 GB memory and the storage class nfs-default-sp. Then click Next.

In the Node Pools section, we leave the default settings and click Next.

Finally, we click Finish to start the deployment of the cluster.
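Behind the scenes, the wizard creates a Cluster API resource in the namespace. A rough sketch of what an equivalent declarative cluster definition could look like follows — the cluster name, Kubernetes version string, and variable names are assumptions for illustration:

```yaml
# Rough sketch of a ClusterClass-based cluster definition -- names and version are assumed.
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: kubernetes-cluster-rdxi
  namespace: department-1-dgc82
spec:
  topology:
    class: tanzukubernetescluster
    version: v1.29.4                 # hypothetical version string
    controlPlane:
      replicas: 1
    workers:
      machineDeployments:
        - class: node-pool
          name: node-pool-1
          replicas: 2                # assumed default node pool size
    variables:
      - name: vmClass
        value: best-effort-xlarge
      - name: storageClass
        value: nfs-default-sp
```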

The cluster deployment takes a few minutes; then we can see it with the status Ready.

Let’s investigate its settings by clicking on the cluster.

We can now download the Kubeconfig file by clicking Download Kubeconfig File, which lets us easily work with the cluster using the kubectl command.

Note that with VCF Automation there are different ways of interacting with a VKS cluster. As a tenant user, the recommended way is to generate a VCF Automation context and authenticate through an API token, which can be created, e.g., as a user in the tenant portal. Another possibility is to use the Kubeconfig file, which contains certificates to authenticate against the Kubernetes cluster without the need for an API token.

We can use the Kubeconfig file as follows:

kubectl get nodes --kubeconfig .\Downloads\kubernetes-cluster-rdxi-kubeconfig.yaml

To omit the --kubeconfig parameter, we can copy the downloaded kubeconfig file to ~/.kube/config.

Deploy Doom as a pod inside the Kubernetes Guest Cluster

Now let the fun part begin. Inspired by a LinkedIn post by my colleague Daniel Krieger, who mentioned that he had deployed the shareware version of Doom as a Kubernetes pod in his VKS lab, I wanted to try this as well.

As the older among us might know, Doom is one of the oldest and most popular first-person shooter (FPS) games, released back in 1993. It has been containerized and brought to Kubernetes through a series of projects culminating in one called kubedoom. Kubedoom itself is a demo in which each monster represents a Kubernetes pod, and shooting a monster deletes the corresponding pod via the Kubernetes API.

To get kubedoom running inside VMware VKS with VPC networking, I had to make several modifications to the original manifest. You can download the sources of my kubedoom manifest from my GitHub page at https://github.com/d3m1g0d/kubedoom-vcfa-all-apps.

So let’s deploy kubedoom inside the VKS guest cluster we’ve just created in the chapter above. For this exercise I assume that you’ve installed the necessary tools, such as git and a VNC viewer like TigerVNC.

First, we clone my kubedoom Git repository.

git clone https://github.com/d3m1g0d/kubedoom-vcfa-all-apps.git

Then we change into the kubedoom-vcfa-all-apps directory and execute the following command:

kubectl apply -k manifest

This triggers the following actions to be automatically executed:

  • Create the kubedoom namespace
  • Create the kubedoom ServiceAccount
  • Configure RBAC permissions for pod access
  • Deploy the kubedoom container via a deployment
  • Configure environment variables for the target namespace
  • Start the kubedoom pod in the cluster
  • Create a service to expose the kubedoom VNC port
  • Allocate an external IP via the cluster load balancer
  • Allow users to connect to the game using a VNC client
  • Enable kubedoom to list and delete pods during gameplay

First let’s check if the deployment was successful:

kubectl get pods -n kubedoom
NAME                        READY   STATUS    RESTARTS   AGE
kubedoom-5d56b76ccf-sbrpl   1/1     Running   0          20s

Let’s obtain the external IP address by executing the following command:

kubectl get svc -n kubedoom
NAME       TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)          AGE
kubedoom   LoadBalancer   10.98.240.109   192.168.30.3   5901:31908/TCP   40s

The following screenshots summarize the whole process.

Now we can start the VNC viewer and connect to the server. We enter the external IP address and port 5901 and click Connect.

Enter the password idbehold and click OK.

We can now play Doom in a container within a Kubernetes pod on a VKS cluster, accessing it all through a VNC server using a VNC viewer.

It looks like this:

At this point you can run around and play the game using your keyboard’s arrow keys to move, Ctrl to shoot, and the space bar to open doors.