Since I have access to a fresh lab server with a spare NVMe SSD, I’ve installed ESXi 8.0 U3 on it (see my previous blog post) to make use of a new feature called “Memory tiering over NVMe”.
“Memory tiering over NVMe” is a feature currently in Technical Preview that allows users to add memory capacity to a host by using NVMe devices installed locally in that host.
The feature uses the NVMe devices as a tiered memory layer and reduces the performance impact by intelligently choosing which parts of VM memory should live on the slower NVMe device and which should stay in the faster DRAM of the host.
In this post, I’ll briefly explain how to set up this feature.
The base specs of the host are depicted below:
First, we must identify the NVMe device and its location. In the vSphere Client, we select the ESXi host and navigate to Configure > Storage > Storage Devices:
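If you prefer the shell over the vSphere Client, the same information can also be pulled via esxcli. The grep pattern below is just my way of trimming the output down to the interesting fields, adjust it as needed:

esxcli storage core device list | grep -iE "Display Name|Devfs Path|Is Local|Is SSD"

The Devfs Path line gives us the full device path of the NVMe SSD, which we’ll need in a moment.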
Next, we set up the memory tier.
SSH into the ESXi server and put the host into maintenance mode:
esxcli system maintenanceMode set --enable true
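To double-check that the host is really in maintenance mode before continuing:

esxcli system maintenanceMode get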
Create the memory tier partition on the NVMe device we identified in the first step, so it can be used as tiered memory:
esxcli system tierdevice create -d /vmfs/devices/disks/eui.3634463052b088980025384500000003
Verify that the tier partition has been created:
esxcli system tierdevice list
DeviceName                            PartitionId  StartSector   EndSector
------------------------------------  -----------  -----------  ----------
eui.3634463052b088980025384500000003            1         2048  1875384974
Next, we enable memory tiering as an ESXi boot option:
esxcli system settings kernel set -s MemoryTiering -v TRUE
We must now reboot the host. Once it is back online, we can check whether memory tiering is active:
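The same check also works from the shell; the kernel settings list command shows both the configured and the runtime value of the MemoryTiering option:

esxcli system settings kernel list -o MemoryTiering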
Looks good. Let’s check the details for the host by navigating to Configure > Hardware > Overview:
Tier 0 indicates the amount of DRAM in the system, Tier 1 indicates the amount of NVMe storage used as tiered memory, and Total shows the total memory of the host as the sum of the two tiers.
By default, hosts are configured with a DRAM-to-NVMe ratio of 4:1 (256 GB vs. 64 GB in our case). Since I have a 960 GB SSD, I want to dedicate more of it to memory tier 1. The official recommendation is that the amount of NVMe configured as tiered memory should not exceed the total amount of DRAM. Nevertheless, I want to make full use of the available NVMe storage, so I’ll set the ratio to the maximum configurable value of 400 percent. At 400 percent, the host would ask for 4 × 256 GB = 1024 GB of NVMe, which is more than the 960 GB SSD offers, so the tier will effectively be limited to the usable capacity of the NVMe partition.
To achieve this, we log back into the ESXi shell and execute the following command:
esxcli system settings advanced set -o /Mem/TierNvmePct -i 400
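Before rebooting, we can verify that the new value has actually been stored:

esxcli system settings advanced list -o /Mem/TierNvmePct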
After rebooting the ESXi host, we take it out of maintenance mode again and check the updated memory tiering.
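Leaving maintenance mode is simply the counterpart of the command we used at the beginning:

esxcli system maintenanceMode set --enable false

The updated tiers then show up under Configure > Hardware > Overview: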
Note: “Unmappable” for Tier 1 just means that this Tier is non-DRAM.
Very nice, that’s all for now 😉