Ansible provides various modules to manage VMware infrastructure, including datacenters, clusters, host systems, and virtual machines. In this post, we're going to explore some basic tasks in this area.
Ansible basic concepts
Ansible automates the management of remote systems and controls their desired state. A basic Ansible environment has three main components:
- Control node: A system on which Ansible is installed. Ansible commands such as ansible or ansible-inventory are executed on the control node.
- Managed node: A remote system, or host, that Ansible controls (e.g. an ESXi server).
- Inventory: A list of managed nodes that are logically organized. The inventory is created on the control node to describe host deployments to Ansible.
Installing Ansible and required module
Installing Ansible is as easy as executing the following command on the system which is designated as the control node:
pip3 install --upgrade ansible
Ansible VMware modules are written on top of pyVmomi. pyVmomi is the Python SDK for the VMware vSphere API that allows users to manage ESXi servers and vCenter infrastructure. It can be installed as follows:
pip3 install --upgrade pyvmomi
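To confirm that both pieces are in place, a quick check can be run on the control node (note that the pip package is called pyvmomi, while the Python module it installs is pyVmomi):
ansible --version
pip3 show pyvmomi
python3 -c "import pyVmomi"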
For convenience, add the directory containing the pip-installed executables (such as ansible and ansible-playbook) to the PATH, e.g.:
PATH="$HOME/Library/Python/3.8/bin/:$PATH"
Using Ansible to manage ESXi servers
Ansible keeps an inventory of the machines that it manages.
- The inventory is kept in an inventory file. The default file is /etc/ansible/hosts, but it can be overridden on the command line.
- Within the inventory you can organize hosts into groups.
- Playbooks are run against specific groups.
To create an inventory file in the local directory on the control node, create a file called hosts using your favorite text editor:
[esxi]
192.168.123.211
[esxi:vars]
ansible_connection=ssh
ansible_user=root
ansible_ssh_private_key_file=/Users/adrian/.ssh/id_pub
ansible_python_interpreter=/bin/python
In our case, we add a standalone ESXi server, 192.168.123.211, to the inventory. We set up passwordless SSH login by adding the public SSH key of the control node's user to the ESXi server's authorized keys file, which is located at /etc/ssh/keys-root/authorized_keys. The following command, executed on the control node, can be used to do this:
cat id_rsa.pub | ssh root@192.168.123.211 'cat >>/etc/ssh/keys-root/authorized_keys'
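Before involving Ansible, it is worth confirming that the key-based login works, for example by running a harmless esxcli command over SSH:
ssh root@192.168.123.211 esxcli system version get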
Now let’s check if the connection to the ESXi server is working using the following ad-hoc command:
ansible all -i hosts -m ping
This should show the following output:
192.168.123.211 | SUCCESS => {
"changed": false,
"ping": "pong"
}
Now that Ansible can log in to the ESXi server using SSH, we can define a playbook. An Ansible playbook is a YAML file that the control node uses to manage machines. It contains one or more plays and is used to define the desired state of a system. A play consists of an ordered set of tasks to execute against hosts selected from the inventory file. Tasks are the building blocks of a play and call Ansible modules.
A simple playbook that gets a list of running VMs on the ESXi server using the esxcli command and dumps the output to stdout looks as follows:
---
- hosts: esxi
  tasks:
    - name: Get VMs process list
      shell: esxcli vm process list
      register: vm_proc_list

    - name: Print list
      debug:
        msg: "{{ vm_proc_list.stdout }}"
We save it on the control node, e.g. as esxcli.yaml, and execute it using our inventory file hosts as follows:
ansible-playbook -i hosts esxcli.yaml
This produces the following output:
PLAY [esxi] ******************************************************************************************************************************************
TASK [Gathering Facts] *******************************************************************************************************************************
ok: [192.168.123.211]
TASK [Get VMs process list] **************************************************************************************************************************
changed: [192.168.123.211]
TASK [Print list] ************************************************************************************************************************************
ok: [192.168.123.211] => {
"msg": "cli1\n World ID: 2100686\n Process ID: 0\n VMX Cartel ID: 2100685\n UUID: 42 02 58 47 0f 51 eb ed-35 06 3c 59 51 fd 02 17\n Display Name: cli1\n Config File: /vmfs/volumes/6135eb3f-8b320d2e-a9e2-2c768a547ebc/sa-cli1/sa-cli1.vmx"
}
PLAY RECAP *******************************************************************************************************************************************
192.168.123.211 : ok=3 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
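One thing to note: because the task uses the shell module, Ansible reports it as changed on every run even though it only reads information. If that is undesirable, the task can be marked as non-changing with changed_when, a small variation of the task above:
- name: Get VMs process list
  shell: esxcli vm process list
  register: vm_proc_list
  changed_when: false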
Ansible ESXi server automation using the VMware collection
Apart from directly logging into the ESXi server to execute commands, we can also use the vSphere API (via the SOAP-based SDK for Python). The SDK usage is abstracted by the VMware Ansible modules that are part of the community.vmware collection. Ansible collections are a distribution format for Ansible content that can include playbooks, roles, modules, and plugins.
Collections are installed and used through Ansible Galaxy, e.g. to install the latest VMware community collection:
ansible-galaxy collection install community.vmware --upgrade
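The installation can be verified by listing the collection and its version:
ansible-galaxy collection list community.vmware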
An example use case for configuration management of an ESXi server could be to set the NTP servers. This can be done using the vmware_host_ntp module. An example playbook looks as follows:
---
- hosts: localhost
  tasks:
    - name: Set NTP servers for an ESXi Host
      vmware_host_ntp:
        hostname: 192.168.123.211
        username: root
        password: "VMware1!"
        validate_certs: no
        esxi_hostname: 192.168.123.211
        state: present
        ntp_servers:
          - 0.pool.ntp.org
          - 1.pool.ntp.org
      register: host_info

    - name: Print host info
      debug:
        msg: "{{ host_info }}"
Save the playbook as vmware_collection_hostntp.yaml and execute it:
ansible-playbook -i hosts vmware_collection_hostntp.yaml
This produces the following output:
PLAY [localhost] *************************************************************************************************************************************
TASK [Gathering Facts] *******************************************************************************************************************************
ok: [localhost]
TASK [Set NTP servers for an ESXi Host] **************************************************************************************************************
changed: [localhost]
TASK [Print host info] *******************************************************************************************************************************
ok: [localhost] => {
"msg": {
"changed": true,
"failed": false,
"host_ntp_status": {
"esxi1.mumintal.home": {
"changed": true,
"ntp_servers_changed": [
"0.pool.ntp.org",
"1.pool.ntp.org"
],
"ntp_servers_current": [
"at.pool.ntp.org",
"0.pool.ntp.org",
"1.pool.ntp.org"
],
"ntp_servers_previous": [
"192.168.123.166"
],
"state": "present"
}
}
}
}
PLAY RECAP *******************************************************************************************************************************************
localhost : ok=3 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
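A side note on credentials: the playbook above stores the ESXi root password in plain text. A common alternative is to encrypt it with Ansible Vault and reference it as a variable (the variable name esxi_password here is just an example):
ansible-vault encrypt_string 'VMware1!' --name 'esxi_password'
The resulting !vault block can be placed in a vars file or directly in the playbook, referenced as password: "{{ esxi_password }}" in the task, and the playbook run with ansible-playbook --ask-vault-pass.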
Virtual machine provisioning
Another common use case for Ansible is automating the provisioning of infrastructure, such as virtual machines. We'll deploy a CentOS VM from a template, customize it (CPU, memory, and vNIC), and power it on.
For this purpose, we create the playbook vmware_collection_deployvm.yaml with the following content:
---
- hosts: localhost
  tasks:
    - name: Create a virtual machine from a template and customize it
      vmware_guest:
        hostname: vcenter1
        username: "administrator@vsphere.local"
        password: "VMware1!"
        validate_certs: no
        datacenter: DC-1
        cluster: CL-1
        folder: testvms
        name: "{{ vm_name }}"
        template: "{{ vm_template }}"
        networks: "{{ vm_network }}"
        datastore: vsanDatastore
        state: poweredon
        wait_for_ip_address: yes
        hardware:
          num_cpus: 2
          memory_mb: "{{ 4 * 1024 }}"
          memory_reservation_lock: yes
          hotadd_cpu: yes
          hotadd_memory: yes
      register: vm_guest_info

    - name: Print virtual machine guest info
      debug:
        msg: "{{ vm_guest_info }}"
To execute the playbook, the following command is used:
ansible-playbook vmware_collection_deployvm.yaml \
  -e vm_name=foo \
  -e vm_template=centos7_template \
  -e '{
    "vm_network": [
      {
        "name": "NSX-VLAN_1_VM_Network",
        "device_type": "vmxnet3",
        "start_connected": yes,
        "connected": yes,
        "ip": "192.168.123.100",
        "gateway": "192.168.1.1",
        "netmask": "255.255.255.0",
        "dns_servers": [
          "8.8.8.8",
          "8.8.4.4"
        ]
      }
    ]
  }'
Here, we're passing the VM name vm_name and the VM template vm_template as extra variables, and the VM network configuration vm_network as a JSON object (containing a list of networks, although only one is used in this example). The same variables could also be kept in a vars file, as sketched below.
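A vars file with the same values might look like this (the file name vm_vars.yaml is just an example):
# vm_vars.yaml
vm_name: foo
vm_template: centos7_template
vm_network:
  - name: NSX-VLAN_1_VM_Network
    device_type: vmxnet3
    start_connected: yes
    connected: yes
    ip: 192.168.123.100
    gateway: 192.168.1.1
    netmask: 255.255.255.0
    dns_servers:
      - 8.8.8.8
      - 8.8.4.4
It is then passed in with a single option:
ansible-playbook vmware_collection_deployvm.yaml -e @vm_vars.yaml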
Either way, the command produces the following output:
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [localhost] *************************************************************************************************************************************
TASK [Gathering Facts] *******************************************************************************************************************************
ok: [localhost]
TASK [Create a virtual machine from a template and customize it] ******************************************************************************************************
changed: [localhost]
TASK [Print virtual machine guest info] **************************************************************************************************************
ok: [localhost] => {
"msg": {
"changed": true,
"failed": false,
"instance": {
"advanced_settings": {
"ethernet0.pciSlotNumber": "192",
"guestOS.detailed.data": "bitness='64' distroName='CentOS Linux' distroVersion='7' familyName='Linux' kernelVersion='3.10.0-1160.15.2.el7.x86_64' prettyName='CentOS Linux 7 (Core)'",
"hpet0.present": "TRUE",
"migrate.hostLog": "vmtest-15321001.hlog",
"migrate.hostLogState": "none",
"migrate.migrationId": "4623454358205650002",
"monitor.phys_bits_used": "43",
"numa.autosize.cookie": "10001",
"numa.autosize.vcpu.maxPerVirtualNode": "1",
"nvram": "vmtest.nvram",
"pciBridge0.pciSlotNumber": "17",
"pciBridge0.present": "TRUE",
"pciBridge4.functions": "8",
"pciBridge4.pciSlotNumber": "21",
"pciBridge4.present": "TRUE",
"pciBridge4.virtualDev": "pcieRootPort",
"pciBridge5.functions": "8",
"pciBridge5.pciSlotNumber": "22",
"pciBridge5.present": "TRUE",
"pciBridge5.virtualDev": "pcieRootPort",
"pciBridge6.functions": "8",
"pciBridge6.pciSlotNumber": "23",
"pciBridge6.present": "TRUE",
"pciBridge6.virtualDev": "pcieRootPort",
"pciBridge7.functions": "8",
"pciBridge7.pciSlotNumber": "24",
"pciBridge7.present": "TRUE",
"pciBridge7.virtualDev": "pcieRootPort",
"sata0.pciSlotNumber": "33",
"sched.cpu.latencySensitivity": "normal",
"sched.mem.pin": "TRUE",
"sched.swap.derivedName": "/vmfs/volumes/vsan:52aa46cd53475ef8-13955186be1a04b5/fe95e362-6266-da73-9133-141877632787/vmtest-022e1ca3.vswp",
"scsi0.pciSlotNumber": "160",
"scsi0.sasWWID": "50 05 05 60 72 00 a8 d0",
"scsi0:0.redo": "",
"softPowerOff": "FALSE",
"svga.guestBackedPrimaryAware": "TRUE",
"svga.present": "TRUE",
"tools.deployPkg.fileName": "imcf-QKt54f",
"tools.guest.desktop.autolock": "FALSE",
"vmci0.pciSlotNumber": "32",
"vmotion.checkpointFBSize": "4194304",
"vmotion.checkpointSVGAPrimarySize": "8388608",
"vmware.tools.internalversion": "11269",
"vmware.tools.requiredversion": "10341"
},
"annotation": "",
"current_snapshot": null,
"customvalues": {},
"guest_consolidation_needed": false,
"guest_question": null,
"guest_tools_status": "guestToolsNotRunning",
"guest_tools_version": "11269",
"hw_cluster": "CL-1",
"hw_cores_per_socket": 1,
"hw_datastores": [
"ESX144-datastore1",
"vsanDatastore"
],
"hw_esxi_host": "172.24.71.144",
"hw_eth0": {
"addresstype": "assigned",
"ipaddresses": null,
"label": "Network adapter 1",
"macaddress": "00:50:56:a5:bd:dd",
"macaddress_dash": "00-50-56-a5-bd-dd",
"portgroup_key": null,
"portgroup_portkey": null,
"summary": "nsx.LogicalSwitch: 08f6d814-3ffc-437a-960d-1a4dd613b7df"
},
"hw_files": [
"[vsanDatastore] fe95e362-6266-da73-9133-141877632787/vmtest.vmx",
"[vsanDatastore] fe95e362-6266-da73-9133-141877632787/vmtest.nvram",
"[vsanDatastore] fe95e362-6266-da73-9133-141877632787/vmtest.vmsd",
"[vsanDatastore] fe95e362-6266-da73-9133-141877632787/vmtest_3.vmdk"
],
"hw_folder": "/DC-1/vm/testvms",
"hw_guest_full_name": null,
"hw_guest_ha_state": true,
"hw_guest_id": null,
"hw_interfaces": [
"eth0"
],
"hw_is_template": false,
"hw_memtotal_mb": 4096,
"hw_name": "vmtest",
"hw_power_status": "poweredOn",
"hw_processor_count": 2,
"hw_product_uuid": "4225dd60-7200-a8d0-9c32-c66e527ea815",
"hw_version": "vmx-14",
"instance_uuid": "502500b7-2322-a289-6e5c-5e72787e1a20",
"ipv4": null,
"ipv6": null,
"module_hw": true,
"moid": "vm-77308",
"snapshots": [],
"tpm_info": {
"provider_id": null,
"tpm_present": false
},
"vimref": "vim.VirtualMachine:vm-77308",
"vnc": {}
}
}
}
PLAY RECAP *******************************************************************************************************************************************
localhost : ok=3 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
Checking the VM in the vSphere client shows that the customization was successful.
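The same check can also be automated: the community.vmware collection includes a vmware_guest_info module that returns the facts of a named VM (the same hw_* and ipv4 keys shown in the output above). A minimal sketch, reusing the vCenter credentials from the deployment playbook:
---
- hosts: localhost
  tasks:
    - name: Gather info about the deployed VM
      vmware_guest_info:
        hostname: vcenter1
        username: "administrator@vsphere.local"
        password: "VMware1!"
        validate_certs: no
        datacenter: DC-1
        name: "{{ vm_name }}"
      register: guest_info

    - name: Print power state and IP
      debug:
        msg: "{{ guest_info.instance.hw_power_status }} / {{ guest_info.instance.ipv4 }}"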