Image by Red Hat, from Getting started with OpenShift Virtualization
Ever since I blew away my oVirt installation to build an OKD/OpenShift cluster, I had not been able to stand up any virtual machines. After getting through some other items on my checklist (backups, pipelines), I finally got around to setting up a proper virtual machine on OpenShift using KubeVirt, aka OpenShift Virtualization. This gave me a chance to refresh what I learned from the DO316 training course, particularly when it comes to opening access to virtual machines. There are plenty of similarities, as well as some wrinkles.
So what I would like to do is walk through launching virtual machines and then demo how access is set up for them. This is not a deep dive (there are better resources for that), but it should give you a good idea of how it works operationally.
Requirements
Besides the usual disclaimers, we are going to assume that you have these operators already running:
- OpenShift Virtualization (KubeVirt)
- MetalLB Operator
- Kubernetes NMState Operator
Fortunately, those operators are easy to install with no changes to the defaults.
You will also need some virtualization and Linux background.
The Basics
To start us off, let's create a Linux virtual machine running Rocky Linux. First, we create a data volume.
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: rocky-dv-sr
  namespace: virtual-machine-compute
spec:
  source:
    http:
      url: "https://dl.rockylinux.org/pub/rocky/10/images/x86_64/Rocky-10-GenericCloud-Base.latest.x86_64.qcow2"
  pvc:
    accessModes:
      - ReadWriteMany
    resources:
      requests:
        storage: 20Gi
    storageClassName: nfs-csi
A data volume is a special PVC that lets you import virtual machine images for use in creating virtual machines. Technically, you don't need one, but if you are running virtual machines, chances are you are doing so with data that must be stateful, and for that you need storage.
With this manifest, we import a Rocky Linux 10 QEMU image into the volume. As part of the import, we create a PVC and request 20 GiB from the nfs-csi storage class. Note that we also specified the access mode ReadWriteMany - we do this because we need to be able to live migrate our VM to another host (think vMotion, for you VMware engineers).
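Before moving on, you can watch the import itself. The CDI importer reports a phase and progress on the data volume, so something like this (dv is the short name CDI registers for datavolumes) lets you follow along:
oc get dv rocky-dv-sr -n virtual-machine-compute
The PHASE column moves through ImportInProgress and lands on Succeeded once the image is downloaded.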
With the storage defined, we now spin up our virtual machine with this manifest:
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: rocky-vm-sr
  namespace: virtual-machine-compute
spec:
  runStrategy: RerunOnFailure
  template:
    metadata:
      labels:
        kubevirt.io/domain: rocky-vm-sr
        app: webserver
    spec:
      evictionStrategy: LiveMigrate
      accessCredentials:
        - sshPublicKey:
            propagationMethod:
              noCloud: {}
            source:
              secret:
                secretName: sammy-chocolate-bird-34
      domain:
        cpu:
          cores: 1
          sockets: 1
          threads: 1
        memory:
          guest: 4Gi
        devices:
          disks:
            - disk:
                bus: virtio
              name: rootdisk
            - disk:
                bus: virtio
              name: cloudinitdisk
          interfaces:
            - masquerade: {}
              model: virtio
              name: default
      networks:
        - name: default
          pod: {}
      volumes:
        - name: rootdisk
          dataVolume:
            name: rocky-dv-sr
        - cloudInitNoCloud:
            userData: |-
              #cloud-config
              user: cloud-user
              chpasswd: { expire: False }
              packages:
                - httpd
              runcmd:
                - "sudo systemctl enable httpd --now"
                - "echo '<html><body><h1>Rocky Linux Web Server on KubeVirt!</h1></body></html>' | sudo tee /var/www/html/index.html"
          name: cloudinitdisk
Let's focus on the most important parts. First, we set our labels:
metadata:
  labels:
    kubevirt.io/domain: rocky-vm-sr
    app: webserver
This is what we will need to route our traffic, which shouldn't be surprising if you are familiar with Kubernetes. The next part may be less familiar if you have never worked with virtual machines, as we start into .template.spec:
spec:
  evictionStrategy: LiveMigrate
  accessCredentials:
    - sshPublicKey:
        propagationMethod:
          noCloud: {}
        source:
          secret:
            secretName: sammy-chocolate-bird-34
The evictionStrategy tells KubeVirt to live migrate the virtual machine to another host if it needs to be evicted from its node. This is distinct from the usual strategy of simply recreating the pod on another host. Alternatively, you can have the virtual machine shut down, but if you intend to keep the app on the virtual machine running, you want LiveMigrate.
(As an aside, when you spin up a virtual machine, there is a corresponding virt-launcher pod that manages it, so that component still gets recreated when the node starts evicting its resources.)
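Incidentally, you can also trigger a live migration by hand with virtctl, which is a quick way to confirm the strategy actually works (this assumes you have virtctl installed, which we cover further down):
virtctl migrate rocky-vm-sr
oc get virtualmachineinstancemigrations -n virtual-machine-compute
The second command shows the migration object KubeVirt creates and its phase.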
The accessCredentials section injects a public key from a secret into the virtual machine. This lets you connect to your virtual machine via ssh.
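In case you are wondering where that secret came from, I created it ahead of time from my public key. Something along these lines works (the key name and file path here are from my setup, so substitute your own):
oc create secret generic sammy-chocolate-bird-34 --from-file=key1=$HOME/.ssh/id_ecdsa_okd.pub -n virtual-machine-compute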
The next section defines the hardware.
domain:
  cpu:
    cores: 1
    sockets: 1
    threads: 1
  memory:
    guest: 4Gi
  devices:
    disks:
      - disk:
          bus: virtio
        name: rootdisk
      - disk:
          bus: virtio
        name: cloudinitdisk
    interfaces:
      - masquerade: {}
        model: virtio
        name: default
Here, we are allocating a single CPU and two disks - one for the boot (root) disk, the other for storing the runtime configuration for cloud-init (this is where your initial configuration runs) - plus a network interface connected to the default network.
The default network in question is the pod network, which we will attach with:
networks:
  - name: default
    pod: {}
Finally, we attach the volumes:
volumes:
  - name: rootdisk
    dataVolume:
      name: rocky-dv-sr
  - cloudInitNoCloud:
      userData: |-
        #cloud-config
        user: cloud-user
        chpasswd: { expire: False }
        packages:
          - httpd
        runcmd:
          - "sudo systemctl enable httpd --now"
          - "echo '<html><body><h1>Rocky Linux Web Server on KubeVirt!</h1></body></html>' | sudo tee /var/www/html/index.html"
    name: cloudinitdisk
The first volume is the data volume containing the boot disk we created. The second volume holds our initial setup, including user creation, packages to install (in this case, Apache httpd), and commands to start httpd and create an index page.
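Once the VM is up, you can verify that cloud-init actually completed from inside the guest. This is standard cloud-init tooling, nothing KubeVirt-specific:
sudo cloud-init status --wait
It reports status: done once all of the modules (including our package install and runcmd) have finished.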
With these manifests, we have the minimum needed to stand up our virtual machine. At this point, we just need to define our service and route:
---
apiVersion: v1
kind: Service
metadata:
  name: rocky-linux-webserver
  namespace: virtual-machine-compute
  labels:
    app: webserver
spec:
  selector:
    app: webserver
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: ClusterIP
---
kind: Route
apiVersion: route.openshift.io/v1
metadata:
  name: rocky-linux-webserver-route
  namespace: virtual-machine-compute
spec:
  host: route-rocky-linux-webserver-virtual-machine-compute.apps.okd.example.com
  to:
    kind: Service
    name: rocky-linux-webserver
    weight: 100
  port:
    targetPort: 80
  tls:
    termination: edge
    insecureEdgeTerminationPolicy: Redirect
  wildcardPolicy: None
Putting it all together, our code looks like this:
---
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: rocky-dv-sr
  namespace: virtual-machine-compute
spec:
  source:
    http:
      url: "https://dl.rockylinux.org/pub/rocky/10/images/x86_64/Rocky-10-GenericCloud-Base.latest.x86_64.qcow2"
  pvc:
    accessModes:
      - ReadWriteMany
    resources:
      requests:
        storage: 20Gi
    storageClassName: nfs-csi
---
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: rocky-vm-sr
  namespace: virtual-machine-compute
spec:
  runStrategy: RerunOnFailure
  template:
    metadata:
      labels:
        kubevirt.io/domain: rocky-vm-sr
        app: webserver
    spec:
      evictionStrategy: LiveMigrate
      accessCredentials:
        - sshPublicKey:
            propagationMethod:
              noCloud: {}
            source:
              secret:
                secretName: sammy-chocolate-bird-34
      domain:
        cpu:
          cores: 1
          sockets: 1
          threads: 1
        memory:
          guest: 4Gi
        devices:
          disks:
            - disk:
                bus: virtio
              name: rootdisk
            - disk:
                bus: virtio
              name: cloudinitdisk
          interfaces:
            - masquerade: {}
              model: virtio
              name: default
      networks:
        - name: default
          pod: {}
      volumes:
        - name: rootdisk
          dataVolume:
            name: rocky-dv-sr
        - cloudInitNoCloud:
            userData: |-
              #cloud-config
              user: cloud-user
              chpasswd: { expire: False }
              packages:
                - httpd
              runcmd:
                - "sudo systemctl enable httpd --now"
                - "echo '<html><body><h1>Rocky Linux Web Server on KubeVirt!</h1></body></html>' | sudo tee /var/www/html/index.html"
          name: cloudinitdisk
---
apiVersion: v1
kind: Service
metadata:
  name: rocky-linux-webserver
  namespace: virtual-machine-compute
  labels:
    app: webserver
spec:
  selector:
    app: webserver
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: ClusterIP
---
kind: Route
apiVersion: route.openshift.io/v1
metadata:
  name: rocky-linux-webserver-route
  namespace: virtual-machine-compute
spec:
  host: route-rocky-linux-webserver-virtual-machine-compute.apps.okd.example.com
  to:
    kind: Service
    name: rocky-linux-webserver
    weight: 100
  port:
    targetPort: 80
  tls:
    termination: edge
    insecureEdgeTerminationPolicy: Redirect
  wildcardPolicy: None
We apply it:
➜ vms git:(main) oc create -f rockylinux-sr.yaml
datavolume.cdi.kubevirt.io/rocky-dv-sr created
virtualmachine.kubevirt.io/rocky-vm-sr created
service/rocky-linux-webserver created
route.route.openshift.io/rocky-linux-webserver-route created
And it is running:
➜ vms git:(main) oc get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
headless ClusterIP None <none> 5434/TCP 6d8h
rocky-linux-webserver ClusterIP 172.30.134.243 <none> 80/TCP 2m
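We can also confirm the virtual machine itself is happy:
oc get vm,vmi -n virtual-machine-compute
The vm entry should report Running, and the vmi entry shows which node it landed on along with its pod network IP.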
And we can view the web site:

For remote access, if you defined a password for your remote user, you can reach the console via the OpenShift Virtualization UI using the VNC or serial console. Here is an example:

If you prefer, you can connect from the terminal as well via VNC, serial, or ssh using virtctl (which you can download from the OpenShift Virtualization Overview page, or from the upstream KubeVirt releases):

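The equivalent commands are straightforward (rocky-vm-sr being our VM from earlier):
virtctl console rocky-vm-sr
virtctl vnc rocky-vm-sr
The first attaches to the serial console (exit with Ctrl+]), while the second launches a local VNC viewer against the VM's graphical console.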
This is what it looks like if you connect via ssh through virtctl:
➜ ~ virtctl ssh -l cloud-user vm/rocky-vm-sr --identity-file=$HOME/.ssh/id_ecdsa_okd
The authenticity of host 'vm.rocky-vm-sr.virtual-machine-compute (<no hostip for proxy command>)' can't be established.
ED25519 key fingerprint is SHA256:JExx+ltZDBTQsA+zjaLrvp+vsN1A9OXYskARrdZZJBc.
This key is not known by any other names.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added 'vm.rocky-vm-sr.virtual-machine-compute' (ED25519) to the list of known hosts.
[cloud-user@rocky-vm-sr ~]$ cat /etc/redhat-release
Rocky Linux release 10.1 (Red Quartz)
[cloud-user@rocky-vm-sr ~]$
What if, though, you wanted to connect directly via ssh? For that matter, what if you can't use a route for the site on your virtual machine, or you need to expose a non-HTTP port?
Using MetalLB With KubeVirt / OpenShift Virtualization
For some environments, particularly on-premises environments, MetalLB is what you would use in place of a route. It allows you to attach an external IP to your service and use that as your load balancer / outside traffic endpoint. We will use it to define an IP we can connect to directly via ssh.
Since we already have the MetalLB operator installed, we first define our MetalLB instance in the metallb-system namespace:
apiVersion: metallb.io/v1beta1
kind: MetalLB
metadata:
  name: metallb
  namespace: metallb-system
➜ ~ oc get metallbs.metallb.io -n metallb-system
NAME AGE
metallb 9d
Then we define our IP pool (which should be a range of IPs you have reserved on your network) in the metallb-system namespace:
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: ip-addresspool-sample1
  namespace: metallb-system
spec:
  addresses:
    - 172.16.3.40-172.16.3.80
  autoAssign: true
  avoidBuggyIPs: false
➜ ~ oc get ipaddresspools.metallb.io -n metallb-system
NAME AUTO ASSIGN AVOID BUGGY IPS ADDRESSES
ip-addresspool-sample1 true false ["172.16.3.40-172.16.3.80"]
And finally, we define our layer 2 advertisement in the metallb-system namespace (referencing our previously created address pool):
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: l2-adv-sample1
  namespace: metallb-system
spec:
  ipAddressPools:
    - ip-addresspool-sample1
➜ ~ oc get l2advertisements.metallb.io -n metallb-system
NAME IPADDRESSPOOLS IPADDRESSPOOL SELECTORS INTERFACES
l2-adv-sample1 ["ip-addresspool-sample1"]
With that done, we clone the code and use it to launch a new Rocky Linux virtual machine. It is the same code, except the names and labels carry an -lb suffix. Most importantly, we define a service that will use an allocated IP from MetalLB:
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: webserver-lb
    kubevirt.io/domain: rocky-vm-lb
    vm.kubevirt.io/name: rocky-vm-lb
  name: rocky-vm-lb
spec:
  ports:
    - port: 22
      protocol: TCP
      targetPort: 22
  selector:
    kubevirt.io/domain: rocky-vm-lb
  type: LoadBalancer
The service here exposes ssh port 22, and we define it as type LoadBalancer. The LoadBalancer type is what triggers MetalLB to allocate the IP.
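As an aside, if you would rather pin a specific address than take whatever the pool hands out, recent MetalLB versions (0.13 and later, if I recall correctly) honor an annotation on the service for this - the address still has to fall inside one of your configured pools, and the one below is just an example from my range:
metadata:
  annotations:
    metallb.universe.tf/loadBalancerIPs: 172.16.3.41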
At this point, our code looks like this:
---
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: rocky-dv-lb
  namespace: virtual-machine-compute
spec:
  source:
    http:
      url: "https://dl.rockylinux.org/pub/rocky/10/images/x86_64/Rocky-10-GenericCloud-Base.latest.x86_64.qcow2"
  pvc:
    accessModes:
      - ReadWriteMany
    resources:
      requests:
        storage: 20Gi
    storageClassName: nfs-csi
---
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: rocky-vm-lb
  namespace: virtual-machine-compute
spec:
  runStrategy: RerunOnFailure
  template:
    metadata:
      labels:
        kubevirt.io/domain: rocky-vm-lb
        app: webserver-lb
    spec:
      evictionStrategy: LiveMigrate
      accessCredentials:
        - sshPublicKey:
            propagationMethod:
              noCloud: {}
            source:
              secret:
                secretName: sammy-chocolate-bird-34
      domain:
        cpu:
          cores: 1
          sockets: 1
          threads: 1
        memory:
          guest: 4Gi
        devices:
          disks:
            - disk:
                bus: virtio
              name: rootdisk
            - disk:
                bus: virtio
              name: cloudinitdisk
          interfaces:
            - masquerade: {}
              model: virtio
              name: default
      networks:
        - name: default
          pod: {}
      volumes:
        - name: rootdisk
          dataVolume:
            name: rocky-dv-lb
        - cloudInitNoCloud:
            userData: |-
              #cloud-config
              user: cloud-user
              chpasswd: { expire: False }
              packages:
                - httpd
              runcmd:
                - "sudo systemctl enable httpd --now"
                - "echo '<html><body><h1>Rocky Linux Web Server on KubeVirt Also!</h1></body></html>' | sudo tee /var/www/html/index.html"
          name: cloudinitdisk
---
apiVersion: v1
kind: Service
metadata:
  name: rocky-linux-webserver-lb
  namespace: virtual-machine-compute
  labels:
    app: webserver-lb
spec:
  selector:
    app: webserver-lb
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: ClusterIP
---
kind: Route
apiVersion: route.openshift.io/v1
metadata:
  name: rocky-linux-webserver-route-lb
  namespace: virtual-machine-compute
spec:
  host: route-rocky-linux-webserver-lb-virtual-machine-compute.apps.okd.example.com
  to:
    kind: Service
    name: rocky-linux-webserver-lb
    weight: 100
  port:
    targetPort: 80
  tls:
    termination: edge
    insecureEdgeTerminationPolicy: Redirect
  wildcardPolicy: None
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: webserver-lb
    kubevirt.io/domain: rocky-vm-lb
    vm.kubevirt.io/name: rocky-vm-lb
  name: rocky-vm-lb
spec:
  ports:
    - port: 22
      protocol: TCP
      targetPort: 22
  selector:
    kubevirt.io/domain: rocky-vm-lb
  type: LoadBalancer
We apply it:
➜ vms git:(main) oc create -f rockylinux-mlb.yaml
datavolume.cdi.kubevirt.io/rocky-dv-lb created
virtualmachine.kubevirt.io/rocky-vm-lb created
service/rocky-linux-webserver-lb created
route.route.openshift.io/rocky-linux-webserver-route-lb created
service/rocky-vm-lb created
When we query the services and routes, we notice the service with type LoadBalancer now has an external IP attached:
➜ vms git:(main) oc get service,route
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/headless ClusterIP None <none> 5434/TCP 6d8h
service/rocky-linux-webserver ClusterIP 172.30.134.243 <none> 80/TCP 5m23s
service/rocky-linux-webserver-lb ClusterIP 172.30.246.125 <none> 80/TCP 2m27s
service/rocky-vm-lb LoadBalancer 172.30.23.82 172.16.3.40 22:30505/TCP 2m27s
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
route.route.openshift.io/rocky-linux-webserver-route route-rocky-linux-webserver-virtual-machine-compute.apps.okd.example.com rocky-linux-webserver 80 edge/Redirect None
route.route.openshift.io/rocky-linux-webserver-route-lb route-rocky-linux-webserver-lb-virtual-machine-compute.apps.okd.example.com rocky-linux-webserver-lb 80 edge/Redirect None
And we can now connect to the virtual machine via that external IP:
➜ content git:(homelab-followup-notes-january) ✗ ssh cloud-user@172.16.3.40 -identity-file=$HOME/.ssh/id_ecdsa_okd
Warning: Identity file dentity-file=/Users/rilindo/.ssh/id_ecdsa_okd not accessible: No such file or directory.
The authenticity of host '172.16.3.40 (172.16.3.40)' can't be established.
ED25519 key fingerprint is SHA256:Fztnva3XlZBcmGwo1QbVHcKQVCwzwPhzn5kigO7cklU.
This key is not known by any other names.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '172.16.3.40' (ED25519) to the list of known hosts.
[cloud-user@rocky-vm-lb ~]$ cat /etc/redhat-release
Rocky Linux release 10.1 (Red Quartz)
[cloud-user@rocky-vm-lb ~]$
And again, we can access the web site:

What if you prefer not to use a service, though? What if you want to connect to the virtual machine directly?
Using NMState With KubeVirt / OpenShift Virtualization
NMState provides network orchestration for your Kubernetes / OpenShift cluster. It lets you declare VLANs, bonds, bridges, and the other constructs you need to logically organize your network, all as Kubernetes resources. We will now use it to expose our VM directly to our external network.
NOTE: If you are using OKD, parts of the NMState UI do not work, so use the CLI for those steps.
The first step is to label the nodes that have access to the external network. Now, I have been replacing my Intel NUCs with BeeLinks, and the good news for me is that they have a secondary interface. So I labeled those nodes:
➜ ~ oc label node worker00.node.example.com external-network=true
➜ ~ oc label node worker01.node.example.com external-network=true
➜ ~ oc label node worker04.node.example.com external-network=true
➜ ~ oc get nodes
NAME STATUS ROLES AGE VERSION
control00.node.example.com Ready control-plane,master 110d v1.33.5
control01.node.example.com Ready control-plane,master 110d v1.33.5
control02.node.example.com Ready control-plane,master 110d v1.33.5
worker00.node.example.com Ready worker 24h v1.33.5
worker01.node.example.com Ready worker 7d23h v1.33.5
worker02.node.example.com Ready worker 109d v1.33.5
worker03.node.example.com Ready worker 71d v1.33.5
worker04.node.example.com Ready worker 63d v1.33.5
➜ ~ oc get nodes -l external-network=true
NAME STATUS ROLES AGE VERSION
worker00.node.example.com Ready worker 24h v1.33.5
worker01.node.example.com Ready worker 7d23h v1.33.5
worker04.node.example.com Ready worker 63d v1.33.5
With that done, I create a NodeNetworkConfigurationPolicy called external-bridge (this is the part that does not work in the OKD UI, so you will need to apply it via the CLI):
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: external-bridge
spec:
  nodeSelector:
    external-network: "true"
  desiredState:
    interfaces:
      - name: br-external-net
        type: linux-bridge
        state: up
        mtu: 1500
        ipv4:
          enabled: true
          dhcp: true
        bridge:
          port:
            - name: enp2s0
          options:
            stp:
              enabled: false
This creates a Linux bridge on interface enp2s0. Very importantly, I am targeting the policy at only the nodes with the external-network label, since those are the ones where that interface exists.
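If you want to double-check the bridge from a node itself, you can run nmstatectl through a debug pod (the node name here is from my cluster, so substitute one of yours):
oc debug node/worker00.node.example.com -- chroot /host nmstatectl show br-external-net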
Once it is applied, I should be able to see the policy configured successfully:
➜ ~ oc get nncp
NAME STATUS REASON
external-bridge Available SuccessfullyConfigured
nmstate-sanity Available SuccessfullyConfigured
Now I will create a network attachment definition (NAD). This is what I will use to attach my virtual machines to the bridge:
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  annotations:
    k8s.v1.cni.cncf.io/resourceName: bridge.network.kubevirt.io/br-external-net
  name: nad-curved-seahorse
  namespace: virtual-machine-compute
spec:
  config: |-
    {
      "cniVersion": "0.3.1",
      "name": "nad-curved-seahorse",
      "type": "bridge",
      "bridge": "br-external-net",
      "ipam": {},
      "macspoofchk": true,
      "preserveDefaultVlan": false
    }
(I originally created this via the UI, which generates random words for the names. I am keeping them, as I find them whimsical.)
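Once the NAD is applied, you can confirm it exists in the virtual machine namespace:
oc get network-attachment-definitions -n virtual-machine-compute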
We can also check the per-node enactments to confirm the bridge policy rolled out to the nodes we targeted:
➜ ~ oc get nnce
NAME STATUS STATUS AGE REASON
control00.node.example.com.nmstate-sanity Available 42h SuccessfullyConfigured
control01.node.example.com.nmstate-sanity Available 42h SuccessfullyConfigured
control02.node.example.com.nmstate-sanity Available 42h SuccessfullyConfigured
worker00.node.example.com.external-bridge Available 24h SuccessfullyConfigured
worker00.node.example.com.nmstate-sanity Available 24h SuccessfullyConfigured
worker01.node.example.com.external-bridge Available 40h SuccessfullyConfigured
worker01.node.example.com.nmstate-sanity Available 42h SuccessfullyConfigured
worker02.node.example.com.nmstate-sanity Available 42h SuccessfullyConfigured
worker03.node.example.com.nmstate-sanity Available 42h SuccessfullyConfigured
worker04.node.example.com.external-bridge Available 40h SuccessfullyConfigured
worker04.node.example.com.nmstate-sanity Available 42h SuccessfullyConfigured
➜ ~ oc get nnce -l nmstate.io/policy=external-bridge
NAME STATUS STATUS AGE REASON
worker00.node.example.com.external-bridge Available 24h SuccessfullyConfigured
worker01.node.example.com.external-bridge Available 40h SuccessfullyConfigured
worker04.node.example.com.external-bridge Available 40h SuccessfullyConfigured
➜ ~
Now we are ready. We clone the code once more, suffixing the names and labels with -nmstate, and make a few more changes to our VirtualMachine manifest:
---
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: rocky-dv-nmstate
  namespace: virtual-machine-compute
spec:
  source:
    http:
      url: "https://dl.rockylinux.org/pub/rocky/10/images/x86_64/Rocky-10-GenericCloud-Base.latest.x86_64.qcow2"
  pvc:
    accessModes:
      - ReadWriteMany
    resources:
      requests:
        storage: 20Gi
    storageClassName: nfs-csi
---
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: rocky-vm-nmstate
  namespace: virtual-machine-compute
spec:
  runStrategy: RerunOnFailure
  template:
    metadata:
      labels:
        kubevirt.io/domain: rocky-vm-nmstate
        app: webserver-nmstate
    spec:
      evictionStrategy: LiveMigrate
      accessCredentials:
        - sshPublicKey:
            propagationMethod:
              noCloud: {}
            source:
              secret:
                secretName: sammy-chocolate-bird-34
      domain:
        cpu:
          cores: 1
          sockets: 1
          threads: 1
        memory:
          guest: 4Gi
        devices:
          disks:
            - disk:
                bus: virtio
              name: rootdisk
            - disk:
                bus: virtio
              name: cloudinitdisk
          interfaces:
            - masquerade: {}
              model: virtio
              name: default
            - bridge: {}
              model: virtio
              name: nic-white-chimpanzee-56
              state: up
      networks:
        - name: default
          pod: {}
        - multus:
            networkName: nad-curved-seahorse
          name: nic-white-chimpanzee-56
      volumes:
        - name: rootdisk
          dataVolume:
            name: rocky-dv-nmstate
        - cloudInitNoCloud:
            networkData: |
              version: 2
              ethernets:
                enp1s0:
                  dhcp4: true
                  # Higher metric means lower priority for the pod network
                  dhcp4-overrides:
                    route-metric: 200
                enp2s0:
                  dhcp4: true
                  # Lower metric means this becomes the preferred default route
                  dhcp4-overrides:
                    route-metric: 50
                  # Explicit static route for your workstation network
                  routes:
                    - to: 172.16.1.0/24
                      via: 172.16.3.1
            userData: |-
              #cloud-config
              user: cloud-user
              chpasswd: { expire: False }
              packages:
                - httpd
              runcmd:
                - "sudo systemctl enable httpd --now"
                - "echo '<html><body><h1>Rocky Linux Web Server on KubeVirt here!</h1></body></html>' | sudo tee /var/www/html/index.html"
          name: cloudinitdisk
---
apiVersion: v1
kind: Service
metadata:
  name: rocky-linux-webserver-nmstate
  namespace: virtual-machine-compute
  labels:
    app: webserver-nmstate
spec:
  selector:
    app: webserver-nmstate
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: ClusterIP
---
kind: Route
apiVersion: route.openshift.io/v1
metadata:
  name: rocky-linux-webserver-route-nmstate
  namespace: virtual-machine-compute
spec:
  host: route-rocky-linux-webserver-nmstate-virtual-machine-compute.apps.okd.example.com
  to:
    kind: Service
    name: rocky-linux-webserver-nmstate
    weight: 100
  port:
    targetPort: 80
  tls:
    termination: edge
    insecureEdgeTerminationPolicy: Redirect
  wildcardPolicy: None
The first major change (aside from the names) is that we add a second, bridged interface:
interfaces:
  - masquerade: {}
    model: virtio
    name: default
  - bridge: {}
    model: virtio
    name: nic-white-chimpanzee-56
    state: up
We then attach that interface to our NAD:
networks:
  - name: default
    pod: {}
  - multus:
      networkName: nad-curved-seahorse
    name: nic-white-chimpanzee-56
The other major change is to our networking configuration. Because we have multiple /24 networks, I made some routing changes inside the VM so that return traffic takes the correct path:
networkData: |
  version: 2
  ethernets:
    enp1s0:
      dhcp4: true
      # Higher metric means lower priority for the pod network
      dhcp4-overrides:
        route-metric: 200
    enp2s0:
      dhcp4: true
      # Lower metric means this becomes the preferred default route
      dhcp4-overrides:
        route-metric: 50
      # Explicit static route for your workstation network
      routes:
        - to: 172.16.1.0/24
          via: 172.16.3.1
Note that this may not be necessary in your environment, but if you are unable to connect, be sure to run a packet sniffer on your VM and engage a network engineer if necessary to trace the traffic path.
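A quick way to sanity-check the metrics from inside the guest is plain iproute2 (the address in the second command is just an example host on my workstation network):
ip route show
ip route get 172.16.1.10
The default route should list enp2s0 with the lower metric first, and ip route get tells you which interface and gateway a given destination will actually use.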
At this point, we are done, so we create the virtual machine:
➜ vms git:(main) oc apply -f rockylinux-nmstate.yaml
datavolume.cdi.kubevirt.io/rocky-dv-nmstate created
virtualmachine.kubevirt.io/rocky-vm-nmstate created
service/rocky-linux-webserver-nmstate created
route.route.openshift.io/rocky-linux-webserver-route-nmstate created
From the CLI, it may not be obvious which IP is attached to the virtual machine:
➜ vms git:(main) oc get service,route
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/headless ClusterIP None <none> 5434/TCP 6d8h
service/rocky-linux-webserver ClusterIP 172.30.134.243 <none> 80/TCP 7m51s
service/rocky-linux-webserver-lb ClusterIP 172.30.246.125 <none> 80/TCP 4m55s
service/rocky-linux-webserver-nmstate ClusterIP 172.30.196.222 <none> 80/TCP 40s
service/rocky-vm-lb LoadBalancer 172.30.23.82 172.16.3.40 22:30505/TCP 4m55s
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
route.route.openshift.io/rocky-linux-webserver-route route-rocky-linux-webserver-virtual-machine-compute.apps.okd.example.com rocky-linux-webserver 80 edge/Redirect None
route.route.openshift.io/rocky-linux-webserver-route-lb route-rocky-linux-webserver-lb-virtual-machine-compute.apps.okd.example.com rocky-linux-webserver-lb 80 edge/Redirect None
route.route.openshift.io/rocky-linux-webserver-route-nmstate route-rocky-linux-webserver-nmstate-virtual-machine-compute.apps.okd.example.com rocky-linux-webserver-nmstate 80 edge/Redirect None
So you will need to describe the virtual machine instance to find the IP, under its status:
➜ vms git:(main) oc describe vmi rocky-vm-nmstate
Name:         rocky-vm-nmstate
Namespace:    virtual-machine-compute
Labels:       app=webserver-nmstate
              kubevirt.io/domain=rocky-vm-nmstate
              kubevirt.io/nodeName=worker00.node.example.com
Annotations:  kubevirt.io/latest-observed-api-version: v1
              kubevirt.io/storage-observed-api-version: v1
              kubevirt.io/vm-generation: 1
API Version:  kubevirt.io/v1
Kind:         VirtualMachineInstance
Metadata:
  Creation Timestamp:  2026-01-04T02:34:11Z
  Finalizers:
    kubevirt.io/virtualMachineControllerFinalize
    foregroundDeleteVirtualMachine
  Generation:  12
  Owner References:
    API Version:           kubevirt.io/v1
    Block Owner Deletion:  true
    Controller:            true
    Kind:                  VirtualMachine
    Name:                  rocky-vm-nmstate
    UID:                   0c489860-b759-4fa6-aea4-ec6e23e92b15
  Resource Version:  115523003
  UID:               8699d813-6db2-446f-8409-dfdb85df18ce
Spec:
  Access Credentials:
    Ssh Public Key:
      Propagation Method:
        No Cloud:
      Source:
        Secret:
          Secret Name:  sammy-chocolate-bird-34
  Architecture:  amd64
  Domain:
    Cpu:
      Cores:        1
      Max Sockets:  4
      Model:        host-model
      Sockets:      1
      Threads:      1
    Devices:
      Disks:
        Disk:
          Bus:  virtio
        Name:   rootdisk
        Disk:
          Bus:  virtio
        Name:   cloudinitdisk
      Interfaces:
        Mac Address:  02:c0:81:4c:2f:03
        Masquerade:
        Model:        virtio
        Name:         default
        Bridge:
        Mac Address:  02:c0:81:4c:2f:04
        Model:        virtio
        Name:         nic-white-chimpanzee-56
        State:        up
    Features:
      Acpi:
        Enabled:  true
    Firmware:
      Serial:  e9cd533d-e4cb-4b56-ac7a-a7ffd71cd77c
      Uuid:    b21bf822-ddb2-40f1-91b1-4c34d1ccd001
    Machine:
      Type:  q35
    Memory:
      Guest:      4Gi
      Max Guest:  16Gi
    Resources:
      Requests:
        Memory:  4Gi
  Eviction Strategy:  LiveMigrate
  Networks:
    Name:  default
    Pod:
    Multus:
      Network Name:  nad-curved-seahorse
    Name:            nic-white-chimpanzee-56
  Volumes:
    Data Volume:
      Name:  rocky-dv-nmstate
    Name:    rootdisk
    Cloud Init No Cloud:
      Network Data:  version: 2
        ethernets:
          enp1s0:
            dhcp4: true
            # Higher metric means lower priority for the pod network
            dhcp4-overrides:
              route-metric: 200
          enp2s0:
            dhcp4: true
            # Lower metric means this becomes the preferred default route
            dhcp4-overrides:
              route-metric: 50
            # Explicit static route for your workstation network
            routes:
              - to: 172.16.1.0/24
                via: 172.16.3.1
      User Data:  #cloud-config
        user: cloud-user
        chpasswd: { expire: False }
        packages:
          - httpd
        runcmd:
          - "sudo systemctl enable httpd --now"
          - "echo '<html><body><h1>Rocky Linux Web Server on KubeVirt here!</h1></body></html>' | sudo tee /var/www/html/index.html"
    Name:  cloudinitdisk
Status:
  Active Pods:
    9da60f1c-e82b-47fe-b14b-1902f8242a36:  worker00.node.example.com
  Conditions:
    Last Probe Time:       <nil>
    Last Transition Time:  2026-01-04T02:35:42Z
    Status:                True
    Type:                  Ready
    Last Probe Time:       <nil>
    Last Transition Time:  <nil>
    Message:               All of the VMI's DVs are bound and ready
    Reason:                AllDVsReady
    Status:                True
    Type:                  DataVolumesReady
    Last Probe Time:       <nil>
    Last Transition Time:  <nil>
    Status:                True
    Type:                  LiveMigratable
    Last Probe Time:       <nil>
    Last Transition Time:  <nil>
    Status:                True
    Type:                  StorageLiveMigratable
    Last Probe Time:       2026-01-04T02:35:58Z
    Last Transition Time:  <nil>
    Status:                True
    Type:                  AgentConnected
  Current CPU Topology:
    Cores:    1
    Sockets:  1
    Threads:  1
  Guest OS Info:
    Id:              rocky
    Kernel Release:  6.12.0-124.8.1.el10_1.x86_64
    Kernel Version:  #1 SMP PREEMPT_DYNAMIC Tue Nov 11 22:54:28 UTC 2025
    Machine:         x86_64
    Name:            Rocky Linux
    Pretty Name:     Rocky Linux 10.1 (Red Quartz)
    Version:         10.1 (Red Quartz)
    Version Id:      10.1
  Interfaces:
    Info Source:     domain, guest-agent
    Interface Name:  enp1s0
    Ip Address:      10.130.0.181
    Ip Addresses:
      10.130.0.181
      fe80::c0:81ff:fe4c:2f03
    Link State:          up
    Mac:                 02:c0:81:4c:2f:03
    Name:                default
    Pod Interface Name:  eth0
    Queue Count:         1
    Info Source:         domain, guest-agent, multus-status
    Interface Name:      enp2s0
    Ip Address:          172.16.3.129
    Ip Addresses:
      172.16.3.129
      fe80::c0:81ff:fe4c:2f04
    Link State:          up
    Mac:                 02:c0:81:4c:2f:04
    Name:                nic-white-chimpanzee-56
    Pod Interface Name:  podd600d5cae69
    Queue Count:         1
  Launcher Container Image Version:  quay.io/kubevirt/virt-launcher@sha256:66f02bfd67bfdbe59ea9c0139270d9efba4e1491e1ef81bccde5bd6c7794333d
  Machine:
    Type:  pc-q35-rhel9.6.0
  Memory:
    Guest At Boot:    4Gi
    Guest Current:    4Gi
    Guest Requested:  4Gi
  Migration Method:     BlockMigration
  Migration Transport:  Unix
  Node Name:            worker00.node.example.com
  Phase:                Running
  Phase Transition Timestamps:
    Phase:                       Pending
    Phase Transition Timestamp:  2026-01-04T02:34:11Z
    Phase:                       Scheduling
    Phase Transition Timestamp:  2026-01-04T02:35:39Z
    Phase:                       Scheduled
    Phase Transition Timestamp:  2026-01-04T02:35:42Z
    Phase:                       Running
    Phase Transition Timestamp:  2026-01-04T02:35:43Z
  Qos Class:                      Burstable
  Runtime User:                   107
  Selinux Context:                system_u:object_r:container_file_t:s0:c5,c474
  Virtual Machine Revision Name:  revision-start-vm-0c489860-b759-4fa6-aea4-ec6e23e92b15-1
  Volume Status:
    Name:    cloudinitdisk
    Size:    1048576
    Target:  vdb
    Name:    rootdisk
    Persistent Volume Claim Info:
      Access Modes:
        ReadWriteMany
      Capacity:
        Storage:            20Gi
      Claim Name:           rocky-dv-nmstate
      Filesystem Overhead:  0.06
      Requests:
        Storage:     20Gi
      Volume Mode:   Filesystem
    Target:  vda
Events:
  Type    Reason            Age   From                       Message
  ----    ------            ----  ----                       -------
  Normal  SuccessfulCreate  2m5s  virtualmachine-controller  Created virtual machine pod virt-launcher-rocky-vm-nmstate-tjbcn
  Normal  Created           2m1s  virt-handler               VirtualMachineInstance defined.
  Normal  Started           2m1s  virt-handler               VirtualMachineInstance started.
You should be able to see the external IP under .Status.Interfaces:
Interfaces:
  Info Source:     domain, guest-agent
  Interface Name:  enp1s0
  Ip Address:      10.130.0.181
  Ip Addresses:
    10.130.0.181
    fe80::c0:81ff:fe4c:2f03
  Link State:          up
  Mac:                 02:c0:81:4c:2f:03
  Name:                default
  Pod Interface Name:  eth0
  Queue Count:         1
  Info Source:         domain, guest-agent, multus-status
  Interface Name:      enp2s0
  Ip Address:          172.16.3.129
  Ip Addresses:
    172.16.3.129
    fe80::c0:81ff:fe4c:2f04
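If you just want the addresses without the wall of text, a jsonpath query against the VMI works too:
oc get vmi rocky-vm-nmstate -n virtual-machine-compute -o jsonpath='{.status.interfaces[*].ipAddress}'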
Of course, you can find the IP in the OpenShift Virtualization Console:

Either way, you can now directly access the instance:
➜ ~ ssh cloud-user@172.16.3.129 -identity-file=$HOME/.ssh/id_ecdsa_okd
Warning: Identity file dentity-file=/Users/rilindo/.ssh/id_ecdsa_okd not accessible: No such file or directory.
The authenticity of host '172.16.3.129 (172.16.3.129)' can't be established.
ED25519 key fingerprint is SHA256:jl2m7bh/C5vHWfL9q47/DDN5GPkBdV7G9jKhHLc9fZw.
This key is not known by any other names.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '172.16.3.129' (ED25519) to the list of known hosts.
[cloud-user@rocky-vm-nmstate ~]$ cat /etc/redhat-release
Rocky Linux release 10.1 (Red Quartz)
[cloud-user@rocky-vm-nmstate ~]$
And because the virtual machine is directly reachable, you can also get to the webserver via the same IP:

Along with its corresponding route:

Conclusion
If you are a virtualization professional, you may already be familiar with the concepts here and may find this easy to pick up. If you are a Kubernetes professional who never needed to manage anything beyond containers, this is a good primer on how to get started. Feel free to reach out to me here if you have any thoughts or suggestions.