
Hello, this is me, international best-selling-au-- err, I mean, your friendly technologist. Today, I will be spinning up instances on OpenShift Virtualization using Ansible. While OpenShift already ships with plenty of automation, Ansible brings certain advantages that make it an ideal tool for spinning up OpenShift Virtualization instances.
Prerequisites
I will be using the virtual-machine-compute namespace to launch my instances. I already have my SSH public key stored in a secret in my environment, which was created as follows:
oc create secret generic example-secret-public-key --from-file=key1=my-key.pub -n virtual-machine-compute
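For reference, that oc create command is equivalent to applying roughly the following Secret manifest (my reconstruction; oc actually stores the value base64-encoded under data, and the key material below is a dummy placeholder, not a real key):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: example-secret-public-key
  namespace: virtual-machine-compute
stringData:
  # "key1" matches the --from-file=key1=my-key.pub argument;
  # the value is a placeholder public key for illustration only
  key1: ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAA... user@example
```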
I also need to set up external networking via nmstate, which is already done and which you can read about here.
Finally, I need to make sure that I am authenticated. For this exercise, I just authenticated via my web browser with:
oc login --web
With that, it is time to install Ansible and its Kubernetes support. Note that I will be using uv for the remainder of the walkthrough.
Installation
The first thing is to initialize my Python environment:
➜ kubevirt git:(main) ✗ uv venv
Using CPython 3.11.14
Creating virtual environment at: .venv
Activate with: source .venv/bin/activate
Next, I install Ansible:
➜ ~ uv pip install ansible
Resolved 22 packages in 28ms
Installed 9 packages in 246ms
+ ansible==12.3.0
+ ansible-core==2.19.6
+ cffi==2.0.0
+ cryptography==46.0.5
+ jinja2==3.1.6
+ markupsafe==3.0.3
+ packaging==26.0
+ pycparser==3.0
+ resolvelib==1.2.1
Then I install the Kubernetes dependencies:
➜ ~ uv pip install kubernetes
Resolved 13 packages in 68ms
Prepared 1 package in 51ms
Installed 13 packages in 22ms
+ certifi==2026.1.4
+ charset-normalizer==3.4.4
+ durationpy==0.10
+ idna==3.11
+ kubernetes==35.0.0
+ oauthlib==3.3.1
+ python-dateutil==2.9.0.post0
+ pyyaml==6.0.3
+ requests==2.32.5
+ requests-oauthlib==2.0.0
+ six==1.17.0
+ urllib3==2.6.3
+ websocket-client==1.9.0
Finally, I install the kubevirt.core collection via ansible-galaxy, which will also install the following:
kubernetes.core
community.okd
Behind the scenes:
➜ ~ uv run ansible-galaxy collection install kubevirt.core
Starting galaxy collection install process
Process install dependency map
Starting collection install process
Downloading https://galaxy.ansible.com/api/v3/plugin/ansible/content/published/collections/artifacts/kubevirt-core-2.2.3.tar.gz to /Users/rilindo/.ansible/tmp/ansible-local-68764maasu3v3/tmpz8xo97ww/kubevirt-core-2.2.3-ieyravzl
Installing 'kubevirt.core:2.2.3' to '/Users/rilindo/.ansible/collections/ansible_collections/kubevirt/core'
Downloading https://galaxy.ansible.com/api/v3/plugin/ansible/content/published/collections/artifacts/kubernetes-core-6.3.0.tar.gz to /Users/rilindo/.ansible/tmp/ansible-local-68764maasu3v3/tmpz8xo97ww/kubernetes-core-6.3.0-66plso9b
kubevirt.core:2.2.3 was installed successfully
Installing 'kubernetes.core:6.3.0' to '/Users/rilindo/.ansible/collections/ansible_collections/kubernetes/core'
kubernetes.core:6.3.0 was installed successfully
Next, I create the directory ~/kubevirt to store my Ansible code. I then change over to that directory and generate the ansible.cfg file:
uv run ansible-config init --disabled > ansible.cfg
In ansible.cfg, I updated the inventory setting under [defaults]:
[defaults]
inventory=inventory/
Then I created the inventory directory, and in that directory, I created the file kubevirt.yaml:
plugin: kubevirt.core.kubevirt
namespaces:
- virtual-machine-compute
network_name: nic-white-chimpanzee-56
This inventory file initializes KubeVirt and pulls the list of instances in the virtual-machine-compute namespace. The network name is the external interface attached to each instance, which Ansible will use to connect.
With that setup, I am able to bring up the list of hosts in that namespace:
➜ kubevirt git:(main) ✗ uv run ansible --list-hosts all
hosts (5):
virtual-machine-compute-freebsd-14-8giusc6ap5mnlven
virtual-machine-compute-rocky-vm-ex188
virtual-machine-compute-rocky-vm-lb
virtual-machine-compute-rocky-vm-nmstate
virtual-machine-compute-rocky-vm-sr
Now I can proceed to spin up a single instance.
Spinning Up A Single Instance
Previously, when I spun up an instance, I used the following manifest to create it:
---
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: rocky-dv-nmstate
  namespace: virtual-machine-compute
spec:
  source:
    pvc:
      name: rockylinux-10-iscsi
      namespace: custom-vm-images
  pvc:
    accessModes:
      - ReadWriteMany
    resources:
      requests:
        storage: 20Gi
    storageClassName: synology-iscsi-storage
    volumeMode: Block
---
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: rocky-vm-nmstate
  namespace: virtual-machine-compute
spec:
  runStrategy: RerunOnFailure
  template:
    metadata:
      labels:
        kubevirt.io/domain: rocky-vm-nmstate
        app: webserver-nmstate
    spec:
      evictionStrategy: LiveMigrate
      accessCredentials:
        - sshPublicKey:
            propagationMethod:
              noCloud: {}
            source:
              secret:
                secretName: example-secret-public-key
      domain:
        cpu:
          cores: 1
          sockets: 1
          threads: 1
        memory:
          guest: 4Gi
        devices:
          disks:
            - disk:
                bus: virtio
              name: rootdisk
            - disk:
                bus: virtio
              name: cloudinitdisk
          interfaces:
            - masquerade: {}
              model: virtio
              name: default
            - bridge: {}
              model: virtio
              name: nic-white-chimpanzee-56
              state: up
      networks:
        - name: default
          pod: {}
        - multus:
            networkName: nad-efficient-grouse
          name: nic-white-chimpanzee-56
      volumes:
        - name: rootdisk
          dataVolume:
            name: rocky-dv-nmstate
        - cloudInitNoCloud:
            networkData: |
              version: 2
              ethernets:
                enp1s0:
                  dhcp4: true
                  # Higher metric means lower priority for the pod network
                  dhcp4-overrides:
                    route-metric: 200
                enp2s0:
                  dhcp4: true
                  # Lower metric means this becomes the preferred default route
                  dhcp4-overrides:
                    route-metric: 50
                  # Explicit static route for your workstation network
                  routes:
                    - to: 192.168.1.0/24
                      via: 192.168.3.1
            userData: |-
              #cloud-config
              user: cloud-user
              chpasswd: { expire: False }
              packages:
                - httpd
              runcmd:
                - "sudo systemctl enable httpd --now"
                - "echo '<html><body><h1>Rocky Linux Web Server on KubeVirt here!</h1></body></html>' | sudo tee /var/www/html/index.html"
          name: cloudinitdisk
I took that manifest and adapted it into the following playbook:
- hosts: localhost
  connection: local
  tasks:
    - name: Create VM
      kubevirt.core.kubevirt_vm:
        state: present
        wait: true
        name: rockylinux-built-with-ansible
        namespace: virtual-machine-compute
        labels:
          app: rockylinux-built-with-ansible
        data_volume_templates:
          - metadata:
              name: rockylinux-built-with-ansible
            spec:
              source:
                pvc:
                  name: rockylinux-10-iscsi
                  namespace: custom-vm-images
              pvc:
                accessModes:
                  - ReadWriteMany
                resources:
                  requests:
                    storage: 20Gi
                storageClassName: synology-iscsi-storage
                volumeMode: Block
        spec:
          evictionStrategy: LiveMigrate
          accessCredentials:
            - sshPublicKey:
                propagationMethod:
                  noCloud: {}
                source:
                  secret:
                    secretName: example-secret-public-key
          domain:
            cpu:
              cores: 1
              sockets: 1
              threads: 1
            memory:
              guest: 4Gi
            devices:
              disks:
                - disk:
                    bus: virtio
                  name: rootdisk
                - disk:
                    bus: virtio
                  name: cloudinitdisk
              interfaces:
                - masquerade: {}
                  model: virtio
                  name: default
                - bridge: {}
                  model: virtio
                  name: nic-white-chimpanzee-56
                  state: up
          networks:
            - name: default
              pod: {}
            - multus:
                networkName: nad-efficient-grouse
              name: nic-white-chimpanzee-56
          volumes:
            - name: rootdisk
              dataVolume:
                name: rockylinux-built-with-ansible
            - cloudInitNoCloud:
                networkData: |
                  version: 2
                  ethernets:
                    enp1s0:
                      dhcp4: true
                      # Higher metric means lower priority for the pod network
                      dhcp4-overrides:
                        route-metric: 200
                    enp2s0:
                      dhcp4: true
                      # Lower metric means this becomes the preferred default route
                      dhcp4-overrides:
                        route-metric: 50
                      # Explicit static route for your workstation network
                      routes:
                        - to: 192.168.1.0/24
                          via: 192.168.3.1
                userData: |-
                  #cloud-config
                  user: cloud-user
                  chpasswd: { expire: False }
                  packages:
                    - httpd
                  runcmd:
                    - "sudo systemctl enable httpd --now"
                    - "echo '<html><body><h1>Rocky Linux Web Server on KubeVirt here!</h1></body></html>' | sudo tee /var/www/html/index.html"
              name: cloudinitdisk
As you can see, it isn't much different from a Kubernetes manifest. The key differences are mostly the placement of labels and names, the inclusion of Ansible attributes (such as state: present), and the use of a DataVolume template for persistent storage.
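To make that mapping concrete, here is a stripped-down sketch of the module call showing only the Ansible-level attributes (the name demo-vm is hypothetical, and the empty spec stands in for the usual KubeVirt template spec):

```yaml
- name: Create a minimal VM
  kubevirt.core.kubevirt_vm:
    state: present              # Ansible-managed desired state; absent would delete the VM
    wait: true                  # block until the VM is up
    name: demo-vm               # hypothetical name, for illustration
    namespace: virtual-machine-compute
    labels:
      app: demo-vm              # labels sit here rather than under template.metadata
    spec: {}                    # the familiar VirtualMachine template spec goes here
```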
I run the playbook:
➜ kubevirt git:(main) ✗ uv run ansible-playbook single-vm.yaml
PLAY [localhost] ***************************************************************************************************************************************************************************************
TASK [Gathering Facts] *********************************************************************************************************************************************************************************
ok: [localhost]
TASK [Create VM] ***************************************************************************************************************************************************************************************
changed: [localhost]
PLAY RECAP *********************************************************************************************************************************************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
And I can see that it launched:
➜ kubevirt git:(main) ✗ uv run ansible --list-hosts all
hosts (6):
virtual-machine-compute-freebsd-14-8giusc6ap5mnlven
virtual-machine-compute-rocky-vm-ex188
virtual-machine-compute-rocky-vm-lb
virtual-machine-compute-rocky-vm-nmstate
virtual-machine-compute-rocky-vm-sr
virtual-machine-compute-rockylinux-built-with-ansible
However, I only care about the instances I launch with Ansible, so I made a small adjustment to the inventory file so that only instances with the label app=rockylinux-built-with-ansible are listed:
plugin: kubevirt.core.kubevirt
namespaces:
- virtual-machine-compute
network_name: nic-white-chimpanzee-56
label_selector: app=rockylinux-built-with-ansible
Which should return this:
➜ kubevirt git:(main) ✗ uv run ansible --list-hosts all
hosts (1):
virtual-machine-compute-rockylinux-built-with-ansible
With that, I can verify that the instance is running:
➜ kubevirt git:(main) ✗ uv run ansible -m ping all -u cloud-user
[WARNING]: Host 'virtual-machine-compute-rockylinux-built-with-ansible' is using the discovered Python interpreter at '/usr/bin/python3.12', but future installation of another Python interpreter could cause a different interpreter to be discovered. See https://docs.ansible.com/ansible-core/2.19/reference_appendices/interpreter_discovery.html for more information.
virtual-machine-compute-rockylinux-built-with-ansible | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python3.12"
},
"changed": false,
"ping": "pong"
}
And I can log into the instance:
➜ kubevirt git:(main) ✗ ssh cloud-user@rockylinux-built-with-ansible
The authenticity of host 'rockylinux-built-with-ansible (192.168.3.118)' can't be established.
ED25519 key fingerprint is SHA256:cqFLHf42TbqZpMwpixNapfj0hMyNXDDz4oQUgQoZBtU.
This key is not known by any other names.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added 'rockylinux-built-with-ansible' (ED25519) to the list of known hosts.
[cloud-user@rockylinux-built-with-ansible ~]$ uptime
18:11:23 up 1 min, 2 users, load average: 0.88, 0.29, 0.10
[cloud-user@rockylinux-built-with-ansible ~]$
Looking good. Now I will spin up multiple instances.
Spinning Up Multiple Instances
I created a new playbook that spins up three instances:
---
- name: Create Virtual Machine Instances
  hosts: localhost
  connection: local
  vars:
    vm_list:
      - rocky01-instance
      - rocky02-instance
      - rocky03-instance
  tasks:
    - name: Create/Destroy VM instances
      loop: "{{ vm_list }}"
      kubevirt.core.kubevirt_vm:
        state: "{{ vm_state }}"
        wait: true
        name: "{{ item }}"
        namespace: virtual-machine-compute
        labels:
          app: rockylinux
        data_volume_templates:
          - metadata:
              name: "{{ item }}"
            spec:
              source:
                pvc:
                  name: rockylinux-10-iscsi
                  namespace: custom-vm-images
              pvc:
                accessModes:
                  - ReadWriteMany
                resources:
                  requests:
                    storage: 20Gi
                storageClassName: synology-iscsi-storage
                volumeMode: Block
        spec:
          evictionStrategy: LiveMigrate
          accessCredentials:
            - sshPublicKey:
                propagationMethod:
                  noCloud: {}
                source:
                  secret:
                    secretName: example-secret-public-key
          domain:
            cpu:
              cores: 1
              sockets: 1
              threads: 1
            memory:
              guest: 4Gi
            devices:
              disks:
                - disk:
                    bus: virtio
                  name: rootdisk
                - disk:
                    bus: virtio
                  name: cloudinitdisk
              interfaces:
                - masquerade: {}
                  model: virtio
                  name: default
                - bridge: {}
                  model: virtio
                  name: nic-white-chimpanzee-56
                  state: up
          networks:
            - name: default
              pod: {}
            - multus:
                networkName: nad-efficient-grouse
              name: nic-white-chimpanzee-56
          volumes:
            - name: rootdisk
              dataVolume:
                name: "{{ item }}"
            - cloudInitNoCloud:
                networkData: |
                  version: 2
                  ethernets:
                    enp1s0:
                      dhcp4: true
                      # Higher metric means lower priority for the pod network
                      dhcp4-overrides:
                        route-metric: 200
                    enp2s0:
                      dhcp4: true
                      # Lower metric means this becomes the preferred default route
                      dhcp4-overrides:
                        route-metric: 50
                      # Explicit static route for your workstation network
                      routes:
                        - to: 192.168.1.0/24
                          via: 192.168.3.1
                userData: |-
                  #cloud-config
                  user: cloud-user
                  chpasswd: { expire: False }
              name: cloudinitdisk
This is where the advantages of Ansible become apparent, as I can parameterize specific values in the manifest. In this case, I define the instance names in the list variable vm_list, then loop through that list to create distinct instances and associate storage with each one via {{ item }}. Those instances will only be created (or destroyed) according to the value of vm_state, which must be supplied at run time. This means less duplicated code, and it avoids the somewhat complex tooling otherwise needed to accomplish the same thing.
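The loop pattern in isolation, with debug standing in for the module call (the default('present') fallback is my addition; the article instead passes vm_state with -e at run time):

```yaml
---
- hosts: localhost
  connection: local
  vars:
    vm_list:
      - rocky01-instance
      - rocky02-instance
  tasks:
    # Each iteration exposes one list entry as {{ item }},
    # exactly as the kubevirt_vm task uses it for names and volumes
    - name: Show what would happen to each instance
      ansible.builtin.debug:
        msg: "Would set {{ item }} to state {{ vm_state | default('present') }}"
      loop: "{{ vm_list }}"
```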
At this point, since I updated the label to rockylinux, I just need to update the inventory:
plugin: kubevirt.core.kubevirt
namespaces:
- virtual-machine-compute
network_name: nic-white-chimpanzee-56
label_selector: app=rockylinux
And then I run the playbook:
➜ kubevirt git:(main) ✗ uv run ansible-playbook multiple-vm.yaml -e vm_state=present
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [Create Virtual Machine Instances] ****************************************************************************************************************************************************************
TASK [Gathering Facts] *********************************************************************************************************************************************************************************
ok: [localhost]
TASK [Create/Destroy VM instances] *********************************************************************************************************************************************************************
changed: [localhost] => (item=rocky01-instance)
changed: [localhost] => (item=rocky02-instance)
changed: [localhost] => (item=rocky03-instance)
PLAY RECAP *********************************************************************************************************************************************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
Configuring Instances
Of course, I do need to configure the instances, which is Ansible's core strength. So I whipped up the following playbook, multiple-vm-configure.yaml, to display each instance's uptime, then install and start Nginx.
---
- name: Configure Virtual Machine Instances
  hosts: namespace_virtual_machine_compute
  become: yes
  remote_user: cloud-user
  tasks:
    - name: Display uptime
      ansible.builtin.debug:
        msg: "Uptime for {{ inventory_hostname }}: {{ now().replace(microsecond=0) - now().fromtimestamp(now(fmt='%s') | int - ansible_uptime_seconds) }}"
    - name: Install nginx
      ansible.builtin.dnf:
        name: nginx
        state: latest
    - name: Start up nginx
      ansible.builtin.systemd_service:
        name: nginx
        state: started
        enabled: true
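A quick note on the Display uptime expression: it reconstructs the boot time (current epoch seconds minus ansible_uptime_seconds), subtracts that from the current time, and prints the resulting timedelta, such as 0:02:32. If you don't need the timedelta type, plain integer arithmetic on the fact gets the same display (my variant, not the article's):

```yaml
- name: Display uptime (simpler arithmetic variant)
  ansible.builtin.debug:
    # ansible_uptime_seconds is an integer fact, so // and % work directly
    msg: >-
      Uptime for {{ inventory_hostname }}:
      {{ '%d:%02d:%02d' | format(ansible_uptime_seconds // 3600,
                                 (ansible_uptime_seconds % 3600) // 60,
                                 ansible_uptime_seconds % 60) }}
```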
I then ran it and confirmed that it was successful:
➜ kubevirt git:(main) ✗ uv run ansible-playbook multiple-vm-configure.yaml
PLAY [Configure Virtual Machine Instances] *************************************************************************************************************************************************************
TASK [Gathering Facts] *********************************************************************************************************************************************************************************
[WARNING]: Host 'virtual-machine-compute-rocky02-instance' is using the discovered Python interpreter at '/usr/bin/python3.12', but future installation of another Python interpreter could cause a different interpreter to be discovered. See https://docs.ansible.com/ansible-core/2.19/reference_appendices/interpreter_discovery.html for more information.
ok: [virtual-machine-compute-rocky02-instance]
[WARNING]: Host 'virtual-machine-compute-rocky01-instance' is using the discovered Python interpreter at '/usr/bin/python3.12', but future installation of another Python interpreter could cause a different interpreter to be discovered. See https://docs.ansible.com/ansible-core/2.19/reference_appendices/interpreter_discovery.html for more information.
ok: [virtual-machine-compute-rocky01-instance]
[WARNING]: Host 'virtual-machine-compute-rocky03-instance' is using the discovered Python interpreter at '/usr/bin/python3.12', but future installation of another Python interpreter could cause a different interpreter to be discovered. See https://docs.ansible.com/ansible-core/2.19/reference_appendices/interpreter_discovery.html for more information.
ok: [virtual-machine-compute-rocky03-instance]
TASK [Display uptime] **********************************************************************************************************************************************************************************
ok: [virtual-machine-compute-rocky01-instance] => {
"msg": "Uptime for virtual-machine-compute-rocky01-instance: 0:02:32"
}
ok: [virtual-machine-compute-rocky02-instance] => {
"msg": "Uptime for virtual-machine-compute-rocky02-instance: 0:01:30"
}
ok: [virtual-machine-compute-rocky03-instance] => {
"msg": "Uptime for virtual-machine-compute-rocky03-instance: 0:00:27"
}
TASK [Install nginx] ***********************************************************************************************************************************************************************************
changed: [virtual-machine-compute-rocky01-instance]
changed: [virtual-machine-compute-rocky03-instance]
changed: [virtual-machine-compute-rocky02-instance]
TASK [Start up nginx] **********************************************************************************************************************************************************************************
changed: [virtual-machine-compute-rocky03-instance]
changed: [virtual-machine-compute-rocky02-instance]
changed: [virtual-machine-compute-rocky01-instance]
PLAY RECAP *********************************************************************************************************************************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
virtual-machine-compute-rocky01-instance : ok=4 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
virtual-machine-compute-rocky02-instance : ok=4 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
virtual-machine-compute-rocky03-instance : ok=4 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
➜ kubevirt git:(main) ✗
Then I made some updates: I don't want the generic Nginx page, but a customized page with the hostname and a point-in-time snapshot of the uptime. So I revised the playbook to:
---
- name: Configure Virtual Machine Instances
  hosts: namespace_virtual_machine_compute
  become: yes
  remote_user: cloud-user
  vars:
    nginx_default_page: /usr/share/nginx/html/index.html
  tasks:
    - name: Display uptime
      ansible.builtin.debug:
        msg: "Uptime for {{ inventory_hostname }}: {{ now().replace(microsecond=0) - now().fromtimestamp(now(fmt='%s') | int - ansible_uptime_seconds) }}"
    - name: Install nginx
      ansible.builtin.dnf:
        name: nginx
        state: latest
    - name: Start up nginx
      ansible.builtin.systemd_service:
        name: nginx
        state: started
        enabled: true
    - name: Generate default page
      ansible.builtin.copy:
        content: "Uptime for {{ inventory_hostname }}: {{ now().replace(microsecond=0) - now().fromtimestamp(now(fmt='%s') | int - ansible_uptime_seconds) }}"
        dest: "{{ nginx_default_page }}"
        mode: 0644
        owner: root
        group: root
And re-ran it:
➜ kubevirt git:(main) ✗ uv run ansible-playbook multiple-vm-configure.yaml
PLAY [Configure Virtual Machine Instances] *************************************************************************************************************************************************************
TASK [Gathering Facts] *********************************************************************************************************************************************************************************
[WARNING]: Host 'virtual-machine-compute-rocky02-instance' is using the discovered Python interpreter at '/usr/bin/python3.12', but future installation of another Python interpreter could cause a different interpreter to be discovered. See https://docs.ansible.com/ansible-core/2.19/reference_appendices/interpreter_discovery.html for more information.
ok: [virtual-machine-compute-rocky02-instance]
[WARNING]: Host 'virtual-machine-compute-rocky03-instance' is using the discovered Python interpreter at '/usr/bin/python3.12', but future installation of another Python interpreter could cause a different interpreter to be discovered. See https://docs.ansible.com/ansible-core/2.19/reference_appendices/interpreter_discovery.html for more information.
ok: [virtual-machine-compute-rocky03-instance]
[WARNING]: Host 'virtual-machine-compute-rocky01-instance' is using the discovered Python interpreter at '/usr/bin/python3.12', but future installation of another Python interpreter could cause a different interpreter to be discovered. See https://docs.ansible.com/ansible-core/2.19/reference_appendices/interpreter_discovery.html for more information.
ok: [virtual-machine-compute-rocky01-instance]
TASK [Display uptime] **********************************************************************************************************************************************************************************
ok: [virtual-machine-compute-rocky01-instance] => {
"msg": "Uptime for virtual-machine-compute-rocky01-instance: 0:20:02"
}
ok: [virtual-machine-compute-rocky02-instance] => {
"msg": "Uptime for virtual-machine-compute-rocky02-instance: 0:18:56"
}
ok: [virtual-machine-compute-rocky03-instance] => {
"msg": "Uptime for virtual-machine-compute-rocky03-instance: 0:17:49"
}
TASK [Install nginx] ***********************************************************************************************************************************************************************************
ok: [virtual-machine-compute-rocky02-instance]
ok: [virtual-machine-compute-rocky03-instance]
ok: [virtual-machine-compute-rocky01-instance]
TASK [Start up nginx] **********************************************************************************************************************************************************************************
ok: [virtual-machine-compute-rocky03-instance]
ok: [virtual-machine-compute-rocky01-instance]
ok: [virtual-machine-compute-rocky02-instance]
TASK [Generate default page] ***************************************************************************************************************************************************************************
changed: [virtual-machine-compute-rocky03-instance]
changed: [virtual-machine-compute-rocky02-instance]
changed: [virtual-machine-compute-rocky01-instance]
PLAY RECAP *********************************************************************************************************************************************************************************************
virtual-machine-compute-rocky01-instance : ok=5 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
virtual-machine-compute-rocky02-instance : ok=5 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
virtual-machine-compute-rocky03-instance : ok=5 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
Looking good. I am almost done.
Final Steps - Load Balancing
I updated multiple-vm.yaml to create a service and route:
- name: Create Service for the Rocky Linux deployment
  community.okd.k8s:
    definition:
      apiVersion: v1
      kind: Service
      metadata:
        name: rockylinux-built-with-ansible
        namespace: virtual-machine-compute
      spec:
        ports:
          - port: 80
            targetPort: 80
        selector:
          app: rockylinux
- name: Expose the Rocky Linux service externally
  community.okd.openshift_route:
    service: rockylinux-built-with-ansible
    namespace: virtual-machine-compute
    termination: edge
    annotations:
      haproxy.router.openshift.io/balance: roundrobin
    port: 80
    tls:
      insecure_policy: redirect
    state: present
  register: route
Both the service and the route are fairly similar to what you see in Kubernetes/OpenShift manifests, with some attributes moved around, so you will need to review the Ansible module documentation carefully against the Kubernetes documentation to avoid misplacing certain attributes.
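For comparison, the openshift_route task above should build roughly this raw Route manifest behind the scenes (my reconstruction from the module parameters, so treat the exact field mapping as approximate):

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: rockylinux-built-with-ansible
  namespace: virtual-machine-compute
  annotations:
    haproxy.router.openshift.io/balance: roundrobin
spec:
  to:
    kind: Service
    name: rockylinux-built-with-ansible   # the service parameter becomes spec.to
  port:
    targetPort: 80                        # the port parameter becomes spec.port
  tls:
    termination: edge
    insecureEdgeTerminationPolicy: Redirect   # from tls.insecure_policy: redirect
```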
With that, the entire playbook looks like this:
---
- name: Create Virtual Machine Instances
  hosts: localhost
  connection: local
  vars:
    vm_list:
      - rocky01-instance
      - rocky02-instance
      - rocky03-instance
  tasks:
    - name: Create/Destroy VM instances
      loop: "{{ vm_list }}"
      kubevirt.core.kubevirt_vm:
        state: "{{ vm_state }}"
        wait: true
        name: "{{ item }}"
        namespace: virtual-machine-compute
        labels:
          app: rockylinux
        data_volume_templates:
          - metadata:
              name: "{{ item }}"
            spec:
              source:
                pvc:
                  name: rockylinux-10-iscsi
                  namespace: custom-vm-images
              pvc:
                accessModes:
                  - ReadWriteMany
                resources:
                  requests:
                    storage: 20Gi
                storageClassName: synology-iscsi-storage
                volumeMode: Block
        spec:
          evictionStrategy: LiveMigrate
          accessCredentials:
            - sshPublicKey:
                propagationMethod:
                  noCloud: {}
                source:
                  secret:
                    secretName: example-secret-public-key
          domain:
            cpu:
              cores: 1
              sockets: 1
              threads: 1
            memory:
              guest: 4Gi
            devices:
              disks:
                - disk:
                    bus: virtio
                  name: rootdisk
                - disk:
                    bus: virtio
                  name: cloudinitdisk
              interfaces:
                - masquerade: {}
                  model: virtio
                  name: default
                - bridge: {}
                  model: virtio
                  name: nic-white-chimpanzee-56
                  state: up
          networks:
            - name: default
              pod: {}
            - multus:
                networkName: nad-efficient-grouse
              name: nic-white-chimpanzee-56
          volumes:
            - name: rootdisk
              dataVolume:
                name: "{{ item }}"
            - cloudInitNoCloud:
                networkData: |
                  version: 2
                  ethernets:
                    enp1s0:
                      dhcp4: true
                      # Higher metric means lower priority for the pod network
                      dhcp4-overrides:
                        route-metric: 200
                    enp2s0:
                      dhcp4: true
                      # Lower metric means this becomes the preferred default route
                      dhcp4-overrides:
                        route-metric: 50
                      # Explicit static route for your workstation network
                      routes:
                        - to: 192.168.1.0/24
                          via: 192.168.3.1
                userData: |-
                  #cloud-config
                  user: cloud-user
                  chpasswd: { expire: False }
              name: cloudinitdisk
    - name: Create Service for the Rocky Linux deployment
      community.okd.k8s:
        definition:
          apiVersion: v1
          kind: Service
          metadata:
            name: rockylinux-built-with-ansible
            namespace: virtual-machine-compute
          spec:
            ports:
              - port: 80
                targetPort: 80
            selector:
              app: rockylinux
    - name: Expose the Rocky Linux service externally
      community.okd.openshift_route:
        service: rockylinux-built-with-ansible
        namespace: virtual-machine-compute
        termination: edge
        annotations:
          haproxy.router.openshift.io/balance: roundrobin
        port: 80
        tls:
          insecure_policy: redirect
        state: present
      register: route
I re-ran the playbook, which adds those components:
➜ kubevirt git:(main) ✗ uv run ansible-playbook multiple-vm.yaml -e vm_state=present
PLAY [Create Virtual Machine Instances] ****************************************************************************************************************************************************************
TASK [Gathering Facts] *********************************************************************************************************************************************************************************
ok: [localhost]
TASK [Create/Destroy VM instances] *********************************************************************************************************************************************************************
[WARNING]: No meaningful diff was generated, but the API may not be idempotent (only metadata.generation or metadata.resourceVersion were changed)
[WARNING]: Deprecation warnings can be disabled by setting `deprecation_warnings=False` in ansible.cfg.
[DEPRECATION WARNING]: Passing `warnings` to `exit_json` or `fail_json` is deprecated. This feature will be removed from ansible-core version 2.23. Use `AnsibleModule.warn` instead.
ok: [localhost] => (item=rocky01-instance)
ok: [localhost] => (item=rocky02-instance)
ok: [localhost] => (item=rocky03-instance)
TASK [Create Service for the rocky linux deployment] ***************************************************************************************************************************************************
changed: [localhost]
TASK [Expose the Rocky Linux service externally] *******************************************************************************************************************************************************
changed: [localhost]
PLAY RECAP *********************************************************************************************************************************************************************************************
localhost : ok=4 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
I then ran curl against the route in a loop to verify connectivity, confirm all the instances are up and running, and check that traffic is distributed to each one:
➜ kubevirt git:(main) ✗ for count in `seq 1 33`
for> do
for> curl https://rockylinux-built-with-ansible-virtual-machine-compute.apps.okd.example.com
for> echo
for> done
Uptime for virtual-machine-compute-rocky01-instance: 0:20:02
Uptime for virtual-machine-compute-rocky03-instance: 0:17:49
Uptime for virtual-machine-compute-rocky02-instance: 0:18:56
Uptime for virtual-machine-compute-rocky02-instance: 0:18:56
Uptime for virtual-machine-compute-rocky03-instance: 0:17:49
Uptime for virtual-machine-compute-rocky01-instance: 0:20:02
Uptime for virtual-machine-compute-rocky01-instance: 0:20:02
Uptime for virtual-machine-compute-rocky03-instance: 0:17:49
Uptime for virtual-machine-compute-rocky02-instance: 0:18:56
Uptime for virtual-machine-compute-rocky02-instance: 0:18:56
Uptime for virtual-machine-compute-rocky03-instance: 0:17:49
Uptime for virtual-machine-compute-rocky01-instance: 0:20:02
Uptime for virtual-machine-compute-rocky01-instance: 0:20:02
Uptime for virtual-machine-compute-rocky03-instance: 0:17:49
Uptime for virtual-machine-compute-rocky02-instance: 0:18:56
Uptime for virtual-machine-compute-rocky02-instance: 0:18:56
Uptime for virtual-machine-compute-rocky03-instance: 0:17:49
Uptime for virtual-machine-compute-rocky01-instance: 0:20:02
Uptime for virtual-machine-compute-rocky01-instance: 0:20:02
Uptime for virtual-machine-compute-rocky03-instance: 0:17:49
Uptime for virtual-machine-compute-rocky02-instance: 0:18:56
Uptime for virtual-machine-compute-rocky02-instance: 0:18:56
Uptime for virtual-machine-compute-rocky03-instance: 0:17:49
Uptime for virtual-machine-compute-rocky01-instance: 0:20:02
Uptime for virtual-machine-compute-rocky01-instance: 0:20:02
Uptime for virtual-machine-compute-rocky03-instance: 0:17:49
Uptime for virtual-machine-compute-rocky02-instance: 0:18:56
Uptime for virtual-machine-compute-rocky02-instance: 0:18:56
Uptime for virtual-machine-compute-rocky03-instance: 0:17:49
Uptime for virtual-machine-compute-rocky01-instance: 0:20:02
Uptime for virtual-machine-compute-rocky01-instance: 0:20:02
Uptime for virtual-machine-compute-rocky03-instance: 0:17:49
Uptime for virtual-machine-compute-rocky02-instance: 0:18:56
➜ kubevirt git:(main) ✗