Original Image by vecteezy.com
Compute autoscaling is one of the most important services an engineer can use in a system architecture. When used correctly, it enables significantly higher availability and scalability, as instances are dynamically added or removed based on load and traffic.
Autoscaling is a common feature in public clouds, such as Auto Scaling groups in AWS and managed instance groups in Google Compute Engine. It is also a core feature of Kubernetes through the Horizontal Pod Autoscaler, and it is even available on VMware. As an experienced cloud engineer, I was curious to see whether I could leverage the same process with virtual machine instances on OpenShift Virtualization.
Starter: VM Pool Creation
I originally wanted to try VirtualMachineInstanceReplicaSet, but I wasn't sure how to integrate my existing images with that API, as it appears to be limited to containerDisk volumes. By chance, though, I stumbled upon Virtual Machine Pools while searching through the Kubernetes community Slack. Reviewing the examples and the API, I found that it supports dataVolumeTemplates, which fits my existing images while having the added bonus of retaining state on each instance (for now).
With that direction set, I began by creating the following manifest:
apiVersion: pool.kubevirt.io/v1alpha1
kind: VirtualMachinePool
metadata:
  name: vm-pool-rocky
  namespace: virtual-machine-compute
spec:
  replicas: 2
  selector:
    matchLabels:
      kubevirt.io/vmpool: vm-pool-rocky
  virtualMachineTemplate:
    metadata:
      creationTimestamp: null
      labels:
        kubevirt.io/vmpool: vm-pool-rocky
    spec:
      runStrategy: Always
      dataVolumeTemplates:
        - metadata:
            name: rockylinux-dv
          spec:
            source:
              pvc:
                name: rockylinux-10-nfs
                namespace: custom-vm-images
            pvc:
              accessModes:
                - ReadWriteMany
              resources:
                requests:
                  storage: 20Gi
              storageClassName: synology-nfs-storage
      template:
        metadata:
          creationTimestamp: null
          labels:
            kubevirt.io/vmpool: vm-pool-rocky
        spec:
          evictionStrategy: LiveMigrate
          accessCredentials:
            - sshPublicKey:
                propagationMethod:
                  noCloud: {}
                source:
                  secret:
                    secretName: my-ssh-key-secret
          domain:
            devices:
              disks:
                - disk:
                    bus: virtio
                  name: rootdisk
                - disk:
                    bus: virtio
                  name: cloudinitdisk
              interfaces:
                - masquerade: {}
                  model: virtio
                  name: default
            resources:
              requests:
                cpu: 512m
                memory: 4Gi
          networks:
            - name: default
              pod: {}
          volumes:
            - name: rootdisk
              dataVolume:
                name: rockylinux-dv
            - cloudInitNoCloud:
                userData: |-
                  #cloud-config
                  user: cloud-user
                  chpasswd: { expire: False }
                  packages:
                    - httpd
                  runcmd:
                    - "sudo systemctl enable httpd --now"
                    - "echo '<html><body><h1>Rocky Linux Web Server on KubeVirt!</h1></body></html>' | sudo tee /var/www/html/index.html"
              name: cloudinitdisk
This creates a VM Pool that launches two Rocky Linux 10 instances using the NFS storage class for storage. Upon startup, the cloud-init service installs and starts the httpd web server and generates a custom default page.
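The pool controller derives each instance name from the pool name plus an ordinal index, which is how the VMIs later show up as vm-pool-rocky-0 and vm-pool-rocky-1. A minimal shell sketch of the expected naming, assuming the sequential naming scheme observed in this post:

```shell
# Sketch: predict the VMI names a VirtualMachinePool produces, assuming
# the <pool-name>-<ordinal> naming scheme seen in the listings below.
pool="vm-pool-rocky"
replicas=2
names=""
i=0
while [ "$i" -lt "$replicas" ]; do
  names="$names ${pool}-${i}"
  i=$((i + 1))
done
echo "expected VMIs:$names"
```

Scaling the pool up later should simply extend this sequence (vm-pool-rocky-2, and so on).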
To distribute traffic, I also created a Service and a Route in front of the pool:
---
apiVersion: v1
kind: Service
metadata:
  name: vm-pool-rocky-service
  namespace: virtual-machine-compute
  labels:
    kubevirt.io/vmpool: vm-pool-rocky
spec:
  selector:
    kubevirt.io/vmpool: vm-pool-rocky
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: ClusterIP
---
kind: Route
apiVersion: route.openshift.io/v1
metadata:
  name: vm-pool-rocky-route
  namespace: virtual-machine-compute
  annotations:
    haproxy.router.openshift.io/balance: roundrobin
spec:
  to:
    kind: Service
    name: vm-pool-rocky-service
    weight: 100
  port:
    targetPort: 80
  tls:
    termination: edge
    insecureEdgeTerminationPolicy: Redirect
  wildcardPolicy: None
With that done, I created the VM Pool, the service and the route:
➜ vmpools git:(main) ✗ oc create -f rocky-pool-web.yaml
virtualmachinepool.pool.kubevirt.io/vm-pool-rocky created
➜ vmpools git:(main) ✗ oc create -f rocky-pool-svc.yaml
service/vm-pool-rocky-service created
route.route.openshift.io/vm-pool-rocky-route created
➜ vmpools git:(main) ✗
After a couple of minutes, the instances were up and running. With a curl loop, I confirmed that they were accessible through the route:
➜ vmpools git:(main) ✗ while true
while> do
while> curl https://vm-pool-rocky-route-virtual-machine-compute.apps.okd.monzell.com/
while> sleep 5
while> done
<html><body><h1>Rocky Linux Web Server on KubeVirt!</h1></body></html>
<html><body><h1>Rocky Linux Web Server on KubeVirt!</h1></body></html>
<html><body><h1>Rocky Linux Web Server on KubeVirt!</h1></body></html>
<html><body><h1>Rocky Linux Web Server on KubeVirt!</h1></body></html>
<html><body><h1>Rocky Linux Web Server on KubeVirt!</h1></body></html>
<html><body><h1>Rocky Linux Web Server on KubeVirt!</h1></body></html>
<html><body><h1>Rocky Linux Web Server on KubeVirt!</h1></body></html>
<html><body><h1>Rocky Linux Web Server on KubeVirt!</h1></body></html>
<html><body><h1>Rocky Linux Web Server on KubeVirt!</h1></body></html>
<html><body><h1>Rocky Linux Web Server on KubeVirt!</h1></body></html>
<html><body><h1>Rocky Linux Web Server on KubeVirt!</h1></body></html>
<html><body><h1>Rocky Linux Web Server on KubeVirt!</h1></body></html>
<html><body><h1>Rocky Linux Web Server on KubeVirt!</h1></body></html>
<html><body><h1>Rocky Linux Web Server on KubeVirt!</h1></body></html>
<html><body><h1>Rocky Linux Web Server on KubeVirt!</h1></body></html>
<html><body><h1>Rocky Linux Web Server on KubeVirt!</h1></body></html>
<html><body><h1>Rocky Linux Web Server on KubeVirt!</h1></body></html>
<html><body><h1>Rocky Linux Web Server on KubeVirt!</h1></body></html>
<html><body><h1>Rocky Linux Web Server on KubeVirt!</h1></body></html>
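Since both instances serve the exact same page, this output alone cannot show which backend answered each request. One way to at least confirm the responses are uniform is to deduplicate the captured bodies; a small sketch with the responses inlined for illustration:

```shell
# Sketch: count distinct response bodies from the curl loop output.
# The responses are inlined here for illustration; in practice you would
# capture them from the loop (e.g. by redirecting curl's output to a file).
responses='<html><body><h1>Rocky Linux Web Server on KubeVirt!</h1></body></html>
<html><body><h1>Rocky Linux Web Server on KubeVirt!</h1></body></html>
<html><body><h1>Rocky Linux Web Server on KubeVirt!</h1></body></html>'
unique=$(printf '%s\n' "$responses" | sort -u | grep -c '^')
echo "distinct response bodies: $unique"
```

A count of 1 confirms uniformity but hides the round-robin behavior; a page that embeds the server hostname, as the Ansible playbook in the next section does, makes the distribution visible.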
Unlike last time, I did not set up an external network for the instances, so I will instead log in with the virtctl utility, which allows me to tunnel into an instance. I queried the instances in the VM Pool by their label and then logged into one of the machines:
➜ vmpools git:(main) ✗ oc project virtual-machine-compute
Now using project "virtual-machine-compute" on server "https://api.okd.monzell.com:6443".
➜ vmpools git:(main) ✗ oc get vmi
NAME AGE PHASE IP NODENAME READY
freebsd-14-8giusc6ap5mnlven 15d Running 10.131.0.34 worker01.node.monzell.com True
rocky-vm-lb 4d22h Running 10.130.2.27 worker03.node.monzell.com True
rocky-vm-nmstate 4d Running 10.129.2.28 worker02.node.monzell.com True
rocky-vm-sr 14d Running 10.131.2.8 worker04.node.monzell.com True
rocky01-instance 2d6h Running 10.130.0.96 worker00.node.monzell.com True
rocky02-instance 2d6h Running 10.130.2.48 worker03.node.monzell.com True
rocky03-instance 2d6h Running 10.131.1.175 worker01.node.monzell.com True
vm-pool-rocky-0 6m22s Running 10.130.0.204 worker00.node.monzell.com True
vm-pool-rocky-1 7m7s Running 10.129.3.41 worker02.node.monzell.com True
➜ vmpools git:(main) ✗ oc get vmi -l kubevirt.io/vmpool=vm-pool-rocky
NAME AGE PHASE IP NODENAME READY
vm-pool-rocky-0 6m52s Running 10.130.0.204 worker00.node.monzell.com True
vm-pool-rocky-1 7m37s Running 10.129.3.41 worker02.node.monzell.com True
➜ vmpools git:(main) ✗ virtctl ssh -l cloud-user vm/vm-pool-rocky-1
[cloud-user@vm-pool-rocky-1 ~]$ hostname
vm-pool-rocky-1
[cloud-user@vm-pool-rocky-1 ~]$
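The same readiness check can also be scripted against the plain oc get vmi table, which is handy in a watch loop. A sketch that parses a captured copy of the table above (inlined here so it runs standalone):

```shell
# Sketch: count pool members reporting READY=True by parsing the
# `oc get vmi` table; the table is inlined for illustration only.
vmi_table='NAME              AGE     PHASE     IP             NODENAME                    READY
vm-pool-rocky-0   6m52s   Running   10.130.0.204   worker00.node.monzell.com   True
vm-pool-rocky-1   7m37s   Running   10.129.3.41    worker02.node.monzell.com   True'
ready=$(printf '%s\n' "$vmi_table" \
  | awk '$1 ~ /^vm-pool-rocky-/ && $NF == "True" { n++ } END { print n + 0 }')
echo "ready pool members: $ready"
```

Against a live cluster, the table would come from `oc get vmi -l kubevirt.io/vmpool=vm-pool-rocky` rather than a static string.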
Defining the Deployment
Now, my preference is to have Ansible do the heavy lifting of configuring the instances. However, because the instances in the VM Pool do not have external connectivity for Ansible to reach, I will need cloud-init to install Ansible on the server itself and then run the playbook locally.
Therefore, I wrote the following playbook, which installs a web server (defaulting to nginx), starts it up, and generates a default page containing the server hostname and deployment date, all over a local connection:
---
- name: Deploy Web Server
  hosts: localhost
  connection: local
  become: yes
  remote_user: cloud-user
  vars:
    webserver: nginx
    default_page:
      nginx: /usr/share/testpage/index.html
      httpd: /var/www/html/index.html
  tasks:
    - name: Install webserver
      ansible.builtin.dnf:
        name: "{{ webserver }}"
        state: latest
    - name: Start up webserver
      ansible.builtin.systemd_service:
        name: "{{ webserver }}"
        state: started
        enabled: true
    - name: Generate default page
      ansible.builtin.copy:
        content: "The {{ webserver }} Deployment for {{ ansible_hostname }} completed at: {{ now(fmt='%Y-%m-%d %H:%M:%S') }}"
        dest: "{{ default_page[webserver] }}"
        mode: "0644"
        owner: root
        group: root
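Unlike the static httpd page from the earlier cloud-init config, this template interpolates the hostname, so each pool member will serve a distinguishable page. A sketch of what the copy task renders, with hypothetical values standing in for the Ansible variables:

```shell
# Sketch: render the default-page template from the playbook in shell,
# with hypothetical stand-ins for the Ansible variables.
webserver="nginx"
vm_hostname="vm-pool-rocky-0"          # stand-in for ansible_hostname
timestamp=$(date '+%Y-%m-%d %H:%M:%S') # mirrors now(fmt=...) in the play
content="The ${webserver} Deployment for ${vm_hostname} completed at: ${timestamp}"
echo "$content"
```

Hitting the route repeatedly after this playbook runs should now alternate hostnames in the page body, exposing the round-robin balancing.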
I uploaded the playbook as well as the uv install script into a ConfigMap, which I will later mount as a read-only volume so that I can install Ansible and execute the playbook. The command to load the code into the ConfigMap looks like this:
➜ vmpools git:(main) ✗ oc create cm cloud-init-code --from-file=uv-install.sh=/tmp/uv-install.sh --from-file=local-playbook.yaml=local-playbook.yaml
configmap/cloud-init-code created
Of course, I could define it as code, but it looks gnarly in YAML:
apiVersion: v1
data:
local-playbook.yaml: "---\n- name: Deploy Web Server\n hosts: localhost\n connection:
local\n become: yes\n remote_user: cloud-user\n vars: \n webserver: nginx\n
\ default_page: \n nginx: /usr/share/testpage/index.html\n httpd:
/var/www/html/index.html\n tasks:\n - name: Install webserver\n ansible.builtin.dnf:\n
\ name: \"{{ webserver }}\"\n state: latest\n - name: Start up webserver\n
\ ansible.builtin.systemd_service:\n name: \"{{ webserver }}\"\n state:
started\n enabled: true\n - name: Generate default page\n ansible.builtin.copy:\n
\ content: \"The {{ webserver }} Deployment for {{ ansible_hostname }} completed
at: {{ now(fmt='%Y-%m-%d %H:%M:%S') }}\"\n dest: \"{{ default_page[webserver]
}}\"\n mode: 0644\n owner: root\n group: root"
uv-install.sh: "#!/bin/sh\n# shellcheck shell=dash\n# shellcheck disable=SC2039
\ # local is non-POSIX\n#\n# Licensed under the MIT license\n# <LICENSE-MIT or
https://opensource.org/licenses/MIT>, at your\n# option. This file may not be
copied, modified, or distributed\n# except according to those terms.\n\n# This
runs on Unix shells like bash/dash/ksh/zsh. It uses the common `local`\n# extension.
Note: Most shells limit `local` to 1 var per line, contra bash.\n\n# Some versions
of ksh have no `local` keyword. Alias it to `typeset`, but\n# beware this makes
variables global with f()-style function syntax in ksh93.\n# mksh has this alias
by default.\nhas_local() {\n # shellcheck disable=SC2034 # deliberately unused\n
\ local _has_local\n}\n\nhas_local 2>/dev/null || alias local=typeset\n\nset
-u\n\nAPP_NAME=\"uv\"\nAPP_VERSION=\"0.10.3\"\n# Look for GitHub Enterprise-style
base URL first\nif [ -n \"${UV_INSTALLER_GHE_BASE_URL:-}\" ]; then\n INSTALLER_BASE_URL=\"$UV_INSTALLER_GHE_BASE_URL\"\nelse\n
\ INSTALLER_BASE_URL=\"${UV_INSTALLER_GITHUB_BASE_URL:-https://github.com}\"\nfi\nif
[ -n \"${UV_DOWNLOAD_URL:-}\" ]; then\n ARTIFACT_DOWNLOAD_URL=\"$UV_DOWNLOAD_URL\"\nelif
[ -n \"${INSTALLER_DOWNLOAD_URL:-}\" ]; then\n ARTIFACT_DOWNLOAD_URL=\"$INSTALLER_DOWNLOAD_URL\"\nelse\n
\ ARTIFACT_DOWNLOAD_URL=\"${INSTALLER_BASE_URL}/astral-sh/uv/releases/download/0.10.3\"\nfi\nif
[ -n \"${UV_PRINT_VERBOSE:-}\" ]; then\n PRINT_VERBOSE=\"$UV_PRINT_VERBOSE\"\nelse\n
\ PRINT_VERBOSE=${INSTALLER_PRINT_VERBOSE:-0}\nfi\nif [ -n \"${UV_PRINT_QUIET:-}\"
]; then\n PRINT_QUIET=\"$UV_PRINT_QUIET\"\nelse\n PRINT_QUIET=${INSTALLER_PRINT_QUIET:-0}\nfi\nif
[ -n \"${UV_NO_MODIFY_PATH:-}\" ]; then\n NO_MODIFY_PATH=\"$UV_NO_MODIFY_PATH\"\nelse\n
\ NO_MODIFY_PATH=${INSTALLER_NO_MODIFY_PATH:-0}\nfi\nif [ \"${UV_DISABLE_UPDATE:-0}\"
= \"1\" ]; then\n INSTALL_UPDATER=0\nelse\n INSTALL_UPDATER=1\nfi\nUNMANAGED_INSTALL=\"${UV_UNMANAGED_INSTALL:-}\"\nif
[ -n \"${UNMANAGED_INSTALL}\" ]; then\n NO_MODIFY_PATH=1\n INSTALL_UPDATER=0\nfi\nAUTH_TOKEN=\"${UV_GITHUB_TOKEN:-}\"\n\nread
-r RECEIPT <<EORECEIPT\n{\"binaries\":[\"CARGO_DIST_BINS\"],\"binary_aliases\":{},\"cdylibs\":[\"CARGO_DIST_DYLIBS\"],\"cstaticlibs\":[\"CARGO_DIST_STATICLIBS\"],\"install_layout\":\"unspecified\",\"install_prefix\":\"AXO_INSTALL_PREFIX\",\"modify_path\":true,\"provider\":{\"source\":\"cargo-dist\",\"version\":\"0.30.2\"},\"source\":{\"app_name\":\"uv\",\"name\":\"uv\",\"owner\":\"astral-sh\",\"release_type\":\"github\"},\"version\":\"0.10.3\"}\nEORECEIPT\n\n#
Some Linux distributions don't set HOME\n# https://github.com/astral-sh/uv/issues/6965#issuecomment-2915796022\nget_home()
{\n if [ -n \"${HOME:-}\" ]; then\n echo \"$HOME\"\n elif [ -n \"${USER:-}\"
]; then\n getent passwd \"$USER\" | cut -d: -f6\n else\n getent
passwd \"$(id -un)\" | cut -d: -f6\n fi\n}\n# The HOME reference to show in
user output. If `$HOME` isn't set, we show the absolute path instead.\nget_home_expression()
{\n if [ -n \"${HOME:-}\" ]; then\n # shellcheck disable=SC2016\n echo
'$HOME'\n elif [ -n \"${USER:-}\" ]; then\n getent passwd \"$USER\"
| cut -d: -f6\n else\n getent passwd \"$(id -un)\" | cut -d: -f6\n fi\n}\nINFERRED_HOME=$(get_home)\n#
shellcheck disable=SC2034\nINFERRED_HOME_EXPRESSION=$(get_home_expression)\nRECEIPT_HOME=\"${XDG_CONFIG_HOME:-$INFERRED_HOME/.config}/uv\"\n\nusage()
{\n # print help (this cat/EOF stuff is a \"heredoc\" string)\n cat <<EOF\nuv-installer.sh\n\nThe
installer for uv 0.10.3\n\nThis script detects what platform you're on and fetches
an appropriate archive from\nhttps://github.com/astral-sh/uv/releases/download/0.10.3\nthen
unpacks the binaries and installs them to the first of the following locations\n\n
\ \\$XDG_BIN_HOME\n \\$XDG_DATA_HOME/../bin\n \\$HOME/.local/bin\n\nIt
will then add that dir to PATH by adding the appropriate line to your shell profiles.\n\nUSAGE:\n
\ uv-installer.sh [OPTIONS]\n\nOPTIONS:\n -v, --verbose\n Enable
verbose output\n\n -q, --quiet\n Disable progress output\n\n --no-modify-path\n
\ Don't configure the PATH environment variable\n\n -h, --help\n
\ Print help information\nEOF\n}\n\ndownload_binary_and_run_installer()
{\n downloader --check\n need_cmd uname\n need_cmd mktemp\n need_cmd
chmod\n need_cmd mkdir\n need_cmd rm\n need_cmd tar\n need_cmd grep\n
\ need_cmd cat\n\n for arg in \"$@\"; do\n case \"$arg\" in\n --help)\n
\ usage\n exit 0\n ;;\n --quiet)\n
\ PRINT_QUIET=1\n ;;\n --verbose)\n PRINT_VERBOSE=1\n
\ ;;\n --no-modify-path)\n say \"--no-modify-path
has been deprecated; please set UV_NO_MODIFY_PATH=1 in the environment\"\n NO_MODIFY_PATH=1\n
\ ;;\n *)\n OPTIND=1\n if
[ \"${arg%%--*}\" = \"\" ]; then\n err \"unknown option $arg\"\n
\ fi\n while getopts :hvq sub_arg \"$arg\"; do\n
\ case \"$sub_arg\" in\n h)\n usage\n
\ exit 0\n ;;\n v)\n
\ # user wants to skip the prompt --\n #
we don't need /dev/tty\n PRINT_VERBOSE=1\n ;;\n
\ q)\n # user wants to skip the
prompt --\n # we don't need /dev/tty\n PRINT_QUIET=1\n
\ ;;\n *)\n err
\"unknown option -$OPTARG\"\n ;;\n esac\n
\ done\n ;;\n esac\n done\n\n get_architecture
|| return 1\n local _true_arch=\"$RETVAL\"\n assert_nz \"$_true_arch\" \"arch\"\n
\ local _cur_arch=\"$_true_arch\"\n\n\n # look up what archives support this
platform\n local _artifact_name\n _artifact_name=\"$(select_archive_for_arch
\"$_true_arch\")\" || return 1\n local _bins\n local _zip_ext\n local
_arch\n local _checksum_style\n local _checksum_value\n\n # destructure
selected archive info into locals\n case \"$_artifact_name\" in \n \"uv-aarch64-apple-darwin.tar.gz\")\n
\ _arch=\"aarch64-apple-darwin\"\n _zip_ext=\".tar.gz\"\n
\ _bins=\"uv uvx\"\n _bins_js_array='\"uv\",\"uvx\"'\n _libs=\"\"\n
\ _libs_js_array=\"\"\n _staticlibs=\"\"\n _staticlibs_js_array=\"\"\n
\ _updater_name=\"\"\n _updater_bin=\"\"\n ;;\n
\ \"uv-aarch64-pc-windows-msvc.zip\")\n _arch=\"aarch64-pc-windows-msvc\"\n
\ _zip_ext=\".zip\"\n _bins=\"uv.exe uvx.exe uvw.exe\"\n
\ _bins_js_array='\"uv.exe\",\"uvx.exe\",\"uvw.exe\"'\n _libs=\"\"\n
\ _libs_js_array=\"\"\n _staticlibs=\"\"\n _staticlibs_js_array=\"\"\n
\ _updater_name=\"\"\n _updater_bin=\"\"\n ;;\n
\ \"uv-aarch64-unknown-linux-gnu.tar.gz\")\n _arch=\"aarch64-unknown-linux-gnu\"\n
\ _zip_ext=\".tar.gz\"\n _bins=\"uv uvx\"\n _bins_js_array='\"uv\",\"uvx\"'\n
\ _libs=\"\"\n _libs_js_array=\"\"\n _staticlibs=\"\"\n
\ _staticlibs_js_array=\"\"\n _updater_name=\"\"\n _updater_bin=\"\"\n
\ ;;\n \"uv-aarch64-unknown-linux-musl.tar.gz\")\n _arch=\"aarch64-unknown-linux-musl-static\"\n
\ _zip_ext=\".tar.gz\"\n _bins=\"uv uvx\"\n _bins_js_array='\"uv\",\"uvx\"'\n
\ _libs=\"\"\n _libs_js_array=\"\"\n _staticlibs=\"\"\n
\ _staticlibs_js_array=\"\"\n _updater_name=\"\"\n _updater_bin=\"\"\n
\ ;;\n \"uv-arm-unknown-linux-musleabihf.tar.gz\")\n _arch=\"arm-unknown-linux-musl-staticeabihf\"\n
\ _zip_ext=\".tar.gz\"\n _bins=\"uv uvx\"\n _bins_js_array='\"uv\",\"uvx\"'\n
\ _libs=\"\"\n _libs_js_array=\"\"\n _staticlibs=\"\"\n
\ _staticlibs_js_array=\"\"\n _updater_name=\"\"\n _updater_bin=\"\"\n
\ ;;\n \"uv-armv7-unknown-linux-gnueabihf.tar.gz\")\n _arch=\"armv7-unknown-linux-gnueabihf\"\n
\ _zip_ext=\".tar.gz\"\n _bins=\"uv uvx\"\n _bins_js_array='\"uv\",\"uvx\"'\n
\ _libs=\"\"\n _libs_js_array=\"\"\n _staticlibs=\"\"\n
\ _staticlibs_js_array=\"\"\n _updater_name=\"\"\n _updater_bin=\"\"\n
\ ;;\n \"uv-armv7-unknown-linux-musleabihf.tar.gz\")\n _arch=\"armv7-unknown-linux-musl-staticeabihf\"\n
\ _zip_ext=\".tar.gz\"\n _bins=\"uv uvx\"\n _bins_js_array='\"uv\",\"uvx\"'\n
\ _libs=\"\"\n _libs_js_array=\"\"\n _staticlibs=\"\"\n
\ _staticlibs_js_array=\"\"\n _updater_name=\"\"\n _updater_bin=\"\"\n
\ ;;\n \"uv-i686-pc-windows-msvc.zip\")\n _arch=\"i686-pc-windows-msvc\"\n
\ _zip_ext=\".zip\"\n _bins=\"uv.exe uvx.exe uvw.exe\"\n
\ _bins_js_array='\"uv.exe\",\"uvx.exe\",\"uvw.exe\"'\n _libs=\"\"\n
\ _libs_js_array=\"\"\n _staticlibs=\"\"\n _staticlibs_js_array=\"\"\n
\ _updater_name=\"\"\n _updater_bin=\"\"\n ;;\n
\ \"uv-i686-unknown-linux-gnu.tar.gz\")\n _arch=\"i686-unknown-linux-gnu\"\n
\ _zip_ext=\".tar.gz\"\n _bins=\"uv uvx\"\n _bins_js_array='\"uv\",\"uvx\"'\n
\ _libs=\"\"\n _libs_js_array=\"\"\n _staticlibs=\"\"\n
\ _staticlibs_js_array=\"\"\n _updater_name=\"\"\n _updater_bin=\"\"\n
\ ;;\n \"uv-i686-unknown-linux-musl.tar.gz\")\n _arch=\"i686-unknown-linux-musl-static\"\n
\ _zip_ext=\".tar.gz\"\n _bins=\"uv uvx\"\n _bins_js_array='\"uv\",\"uvx\"'\n
\ _libs=\"\"\n _libs_js_array=\"\"\n _staticlibs=\"\"\n
\ _staticlibs_js_array=\"\"\n _updater_name=\"\"\n _updater_bin=\"\"\n
\ ;;\n \"uv-powerpc64-unknown-linux-gnu.tar.gz\")\n _arch=\"powerpc64-unknown-linux-gnu\"\n
\ _zip_ext=\".tar.gz\"\n _bins=\"uv uvx\"\n _bins_js_array='\"uv\",\"uvx\"'\n
\ _libs=\"\"\n _libs_js_array=\"\"\n _staticlibs=\"\"\n
\ _staticlibs_js_array=\"\"\n _updater_name=\"\"\n _updater_bin=\"\"\n
\ ;;\n \"uv-powerpc64le-unknown-linux-gnu.tar.gz\")\n _arch=\"powerpc64le-unknown-linux-gnu\"\n
\ _zip_ext=\".tar.gz\"\n _bins=\"uv uvx\"\n _bins_js_array='\"uv\",\"uvx\"'\n
\ _libs=\"\"\n _libs_js_array=\"\"\n _staticlibs=\"\"\n
\ _staticlibs_js_array=\"\"\n _updater_name=\"\"\n _updater_bin=\"\"\n
\ ;;\n \"uv-riscv64gc-unknown-linux-gnu.tar.gz\")\n _arch=\"riscv64gc-unknown-linux-gnu\"\n
\ _zip_ext=\".tar.gz\"\n _bins=\"uv uvx\"\n _bins_js_array='\"uv\",\"uvx\"'\n
\ _libs=\"\"\n _libs_js_array=\"\"\n _staticlibs=\"\"\n
\ _staticlibs_js_array=\"\"\n _updater_name=\"\"\n _updater_bin=\"\"\n
\ ;;\n \"uv-s390x-unknown-linux-gnu.tar.gz\")\n _arch=\"s390x-unknown-linux-gnu\"\n
\ _zip_ext=\".tar.gz\"\n _bins=\"uv uvx\"\n _bins_js_array='\"uv\",\"uvx\"'\n
\ _libs=\"\"\n _libs_js_array=\"\"\n _staticlibs=\"\"\n
\ _staticlibs_js_array=\"\"\n _updater_name=\"\"\n _updater_bin=\"\"\n
\ ;;\n \"uv-x86_64-apple-darwin.tar.gz\")\n _arch=\"x86_64-apple-darwin\"\n
\ _zip_ext=\".tar.gz\"\n _bins=\"uv uvx\"\n _bins_js_array='\"uv\",\"uvx\"'\n
\ _libs=\"\"\n _libs_js_array=\"\"\n _staticlibs=\"\"\n
\ _staticlibs_js_array=\"\"\n _updater_name=\"\"\n _updater_bin=\"\"\n
\ ;;\n \"uv-x86_64-pc-windows-msvc.zip\")\n _arch=\"x86_64-pc-windows-msvc\"\n
\ _zip_ext=\".zip\"\n _bins=\"uv.exe uvx.exe uvw.exe\"\n
\ _bins_js_array='\"uv.exe\",\"uvx.exe\",\"uvw.exe\"'\n _libs=\"\"\n
\ _libs_js_array=\"\"\n _staticlibs=\"\"\n _staticlibs_js_array=\"\"\n
\ _updater_name=\"\"\n _updater_bin=\"\"\n ;;\n
\ \"uv-x86_64-unknown-linux-gnu.tar.gz\")\n _arch=\"x86_64-unknown-linux-gnu\"\n
\ _zip_ext=\".tar.gz\"\n _bins=\"uv uvx\"\n _bins_js_array='\"uv\",\"uvx\"'\n
\ _libs=\"\"\n _libs_js_array=\"\"\n _staticlibs=\"\"\n
\ _staticlibs_js_array=\"\"\n _updater_name=\"\"\n _updater_bin=\"\"\n
\ ;;\n \"uv-x86_64-unknown-linux-musl.tar.gz\")\n _arch=\"x86_64-unknown-linux-musl-static\"\n
\ _zip_ext=\".tar.gz\"\n _bins=\"uv uvx\"\n _bins_js_array='\"uv\",\"uvx\"'\n
\ _libs=\"\"\n _libs_js_array=\"\"\n _staticlibs=\"\"\n
\ _staticlibs_js_array=\"\"\n _updater_name=\"\"\n _updater_bin=\"\"\n
\ ;;\n *)\n err \"internal installer error: selected
download $_artifact_name doesn't exist!?\"\n ;;\n esac\n\n\n #
Replace the placeholder binaries with the calculated array from above\n RECEIPT=\"$(echo
\"$RECEIPT\" | sed s/'\"CARGO_DIST_BINS\"'/\"$_bins_js_array\"/)\"\n RECEIPT=\"$(echo
\"$RECEIPT\" | sed s/'\"CARGO_DIST_DYLIBS\"'/\"$_libs_js_array\"/)\"\n RECEIPT=\"$(echo
\"$RECEIPT\" | sed s/'\"CARGO_DIST_STATICLIBS\"'/\"$_staticlibs_js_array\"/)\"\n\n
\ # download the archive\n local _url=\"$ARTIFACT_DOWNLOAD_URL/$_artifact_name\"\n
\ local _dir\n _dir=\"$(ensure mktemp -d)\" || return 1\n local _file=\"$_dir/input$_zip_ext\"\n\n
\ say \"downloading $APP_NAME $APP_VERSION ${_arch}\" 1>&2\n say_verbose
\" from $_url\" 1>&2\n say_verbose \" to $_file\" 1>&2\n\n ensure mkdir
-p \"$_dir\"\n\n if ! downloader \"$_url\" \"$_file\"; then\n say \"failed
to download $_url\"\n say \"this may be a standard network error, but it
may also indicate\"\n say \"that $APP_NAME's release process is not working.
When in doubt\"\n say \"please feel free to open an issue!\"\n exit
1\n fi\n\n if [ -n \"${_checksum_style:-}\" ]; then\n verify_checksum
\"$_file\" \"$_checksum_style\" \"$_checksum_value\"\n else\n say \"no
checksums to verify\"\n fi\n\n # ...and then the updater, if it exists\n
\ if [ -n \"$_updater_name\" ] && [ \"$INSTALL_UPDATER\" = \"1\" ]; then\n local
_updater_url=\"$ARTIFACT_DOWNLOAD_URL/$_updater_name\"\n # This renames
the artifact while doing the download, removing the\n # target triple and
leaving just the appname-update format\n local _updater_file=\"$_dir/$APP_NAME-update\"\n\n
\ if ! downloader \"$_updater_url\" \"$_updater_file\"; then\n say
\"failed to download $_updater_url\"\n say \"this may be a standard network
error, but it may also indicate\"\n say \"that $APP_NAME's release process
is not working. When in doubt\"\n say \"please feel free to open an issue!\"\n
\ exit 1\n fi\n\n # Add the updater to the list of binaries
to install\n _bins=\"$_bins $APP_NAME-update\"\n fi\n\n # unpack
the archive\n case \"$_zip_ext\" in\n \".zip\")\n ensure
unzip -q \"$_file\" -d \"$_dir\"\n ;;\n\n \".tar.\"*)\n ensure
tar xf \"$_file\" --strip-components 1 -C \"$_dir\"\n ;;\n *)\n
\ err \"unknown archive format: $_zip_ext\"\n ;;\n esac\n\n
\ install \"$_dir\" \"$_bins\" \"$_libs\" \"$_staticlibs\" \"$_arch\" \"$@\"\n
\ local _retval=$?\n if [ \"$_retval\" != 0 ]; then\n return \"$_retval\"\n
\ fi\n\n ignore rm -rf \"$_dir\"\n\n # Install the install receipt\n if
[ \"$INSTALL_UPDATER\" = \"1\" ]; then\n if ! mkdir -p \"$RECEIPT_HOME\";
then\n err \"unable to create receipt directory at $RECEIPT_HOME\"\n
\ else\n echo \"$RECEIPT\" > \"$RECEIPT_HOME/$APP_NAME-receipt.json\"\n
\ # shellcheck disable=SC2320\n local _retval=$?\n fi\n
\ else\n local _retval=0\n fi\n\n return \"$_retval\"\n}\n\n# Replaces
$HOME with the variable name for display to the user,\n# only if $HOME is defined.\nreplace_home()
{\n local _str=\"$1\"\n\n if [ -n \"${HOME:-}\" ]; then\n echo \"$_str\"
| sed \"s,$HOME,\\$HOME,\"\n else\n echo \"$_str\"\n fi\n}\n\njson_binary_aliases()
{\n local _arch=\"$1\"\n\n case \"$_arch\" in \n \"aarch64-apple-darwin\")\n
\ echo '{}'\n ;;\n \"aarch64-pc-windows-gnu\")\n echo '{}'\n
\ ;;\n \"aarch64-unknown-linux-gnu\")\n echo '{}'\n ;;\n
\ \"aarch64-unknown-linux-musl-dynamic\")\n echo '{}'\n ;;\n \"aarch64-unknown-linux-musl-static\")\n
\ echo '{}'\n ;;\n \"arm-unknown-linux-gnueabihf\")\n echo
'{}'\n ;;\n \"arm-unknown-linux-musl-dynamiceabihf\")\n echo
'{}'\n ;;\n \"arm-unknown-linux-musl-staticeabihf\")\n echo '{}'\n
\ ;;\n \"armv7-unknown-linux-gnueabihf\")\n echo '{}'\n ;;\n
\ \"armv7-unknown-linux-musl-dynamiceabihf\")\n echo '{}'\n ;;\n
\ \"armv7-unknown-linux-musl-staticeabihf\")\n echo '{}'\n ;;\n
\ \"i686-pc-windows-gnu\")\n echo '{}'\n ;;\n \"i686-unknown-linux-gnu\")\n
\ echo '{}'\n ;;\n \"i686-unknown-linux-musl-dynamic\")\n echo
'{}'\n ;;\n \"i686-unknown-linux-musl-static\")\n echo '{}'\n
\ ;;\n \"powerpc64-unknown-linux-gnu\")\n echo '{}'\n ;;\n
\ \"powerpc64le-unknown-linux-gnu\")\n echo '{}'\n ;;\n \"riscv64gc-unknown-linux-gnu\")\n
\ echo '{}'\n ;;\n \"s390x-unknown-linux-gnu\")\n echo
'{}'\n ;;\n \"x86_64-apple-darwin\")\n echo '{}'\n ;;\n
\ \"x86_64-pc-windows-gnu\")\n echo '{}'\n ;;\n \"x86_64-unknown-linux-gnu\")\n
\ echo '{}'\n ;;\n \"x86_64-unknown-linux-musl-dynamic\")\n echo
'{}'\n ;;\n \"x86_64-unknown-linux-musl-static\")\n echo '{}'\n
\ ;;\n *)\n echo '{}'\n ;;\n esac\n}\n\naliases_for_binary()
{\n local _bin=\"$1\"\n local _arch=\"$2\"\n\n case \"$_arch\" in \n
\ \"aarch64-apple-darwin\")\n case \"$_bin\" in\n *)\n echo
\"\"\n ;;\n esac\n ;;\n \"aarch64-pc-windows-gnu\")\n
\ case \"$_bin\" in\n *)\n echo \"\"\n ;;\n
\ esac\n ;;\n \"aarch64-unknown-linux-gnu\")\n case \"$_bin\"
in\n *)\n echo \"\"\n ;;\n esac\n ;;\n
\ \"aarch64-unknown-linux-musl-dynamic\")\n case \"$_bin\" in\n *)\n
\ echo \"\"\n ;;\n esac\n ;;\n \"aarch64-unknown-linux-musl-static\")\n
\ case \"$_bin\" in\n *)\n echo \"\"\n ;;\n
\ esac\n ;;\n \"arm-unknown-linux-gnueabihf\")\n case \"$_bin\"
in\n *)\n echo \"\"\n ;;\n esac\n ;;\n
\ \"arm-unknown-linux-musl-dynamiceabihf\")\n case \"$_bin\" in\n *)\n
\ echo \"\"\n ;;\n esac\n ;;\n \"arm-unknown-linux-musl-staticeabihf\")\n
\ case \"$_bin\" in\n *)\n echo \"\"\n ;;\n
\ esac\n ;;\n \"armv7-unknown-linux-gnueabihf\")\n case
\"$_bin\" in\n *)\n echo \"\"\n ;;\n esac\n
\ ;;\n \"armv7-unknown-linux-musl-dynamiceabihf\")\n case \"$_bin\"
in\n *)\n echo \"\"\n ;;\n esac\n ;;\n
\ \"armv7-unknown-linux-musl-staticeabihf\")\n case \"$_bin\" in\n *)\n
\ echo \"\"\n ;;\n esac\n ;;\n \"i686-pc-windows-gnu\")\n
\ case \"$_bin\" in\n *)\n echo \"\"\n ;;\n
\ esac\n ;;\n \"i686-unknown-linux-gnu\")\n case \"$_bin\"
in\n *)\n echo \"\"\n ;;\n esac\n ;;\n
\ \"i686-unknown-linux-musl-dynamic\")\n case \"$_bin\" in\n *)\n
\ echo \"\"\n ;;\n esac\n ;;\n \"i686-unknown-linux-musl-static\")\n
\ case \"$_bin\" in\n *)\n echo \"\"\n ;;\n
\ esac\n ;;\n \"powerpc64-unknown-linux-gnu\")\n case \"$_bin\"
in\n *)\n echo \"\"\n ;;\n esac\n ;;\n
\ \"powerpc64le-unknown-linux-gnu\")\n case \"$_bin\" in\n *)\n
\ echo \"\"\n ;;\n esac\n ;;\n \"riscv64gc-unknown-linux-gnu\")\n
\ case \"$_bin\" in\n *)\n echo \"\"\n ;;\n
\ esac\n ;;\n \"s390x-unknown-linux-gnu\")\n case \"$_bin\"
in\n *)\n echo \"\"\n ;;\n esac\n ;;\n
\ \"x86_64-apple-darwin\")\n case \"$_bin\" in\n *)\n echo
\"\"\n ;;\n esac\n ;;\n \"x86_64-pc-windows-gnu\")\n
\ case \"$_bin\" in\n *)\n echo \"\"\n ;;\n
\ esac\n ;;\n \"x86_64-unknown-linux-gnu\")\n case \"$_bin\"
in\n *)\n echo \"\"\n ;;\n esac\n ;;\n
\ \"x86_64-unknown-linux-musl-dynamic\")\n case \"$_bin\" in\n *)\n
\ echo \"\"\n ;;\n esac\n ;;\n \"x86_64-unknown-linux-musl-static\")\n
\ case \"$_bin\" in\n *)\n echo \"\"\n ;;\n
\ esac\n ;;\n *)\n echo \"\"\n ;;\n esac\n}\n\nselect_archive_for_arch()
{\n local _true_arch=\"$1\"\n local _archive\n\n # try each archive,
checking runtime conditions like libc versions\n # accepting the first one
that matches, as it's the best match\n case \"$_true_arch\" in \n \"aarch64-apple-darwin\")\n
\ _archive=\"uv-aarch64-apple-darwin.tar.gz\"\n if [ -n \"$_archive\"
]; then\n echo \"$_archive\"\n return 0\n fi\n
\ _archive=\"uv-x86_64-apple-darwin.tar.gz\"\n if [ -n \"$_archive\"
]; then\n echo \"$_archive\"\n return 0\n fi\n
\ ;;\n \"aarch64-pc-windows-gnu\")\n _archive=\"uv-aarch64-pc-windows-msvc.zip\"\n
\ if [ -n \"$_archive\" ]; then\n echo \"$_archive\"\n
\ return 0\n fi\n ;;\n \"aarch64-pc-windows-msvc\")\n
\ _archive=\"uv-aarch64-pc-windows-msvc.zip\"\n if [ -n \"$_archive\"
]; then\n echo \"$_archive\"\n return 0\n fi\n
\ _archive=\"uv-x86_64-pc-windows-msvc.zip\"\n if [ -n \"$_archive\"
]; then\n echo \"$_archive\"\n return 0\n fi\n
\ _archive=\"uv-i686-pc-windows-msvc.zip\"\n if [ -n \"$_archive\"
]; then\n echo \"$_archive\"\n return 0\n fi\n
\ ;;\n \"aarch64-unknown-linux-gnu\")\n _archive=\"uv-aarch64-unknown-linux-gnu.tar.gz\"\n
\ if ! check_glibc \"2\" \"28\"; then\n _archive=\"\"\n
\ fi\n if [ -n \"$_archive\" ]; then\n echo
\"$_archive\"\n return 0\n fi\n _archive=\"uv-aarch64-unknown-linux-musl.tar.gz\"\n
\ if [ -n \"$_archive\" ]; then\n echo \"$_archive\"\n
\ return 0\n fi\n ;;\n \"aarch64-unknown-linux-musl-dynamic\")\n
\ _archive=\"uv-aarch64-unknown-linux-musl.tar.gz\"\n if
[ -n \"$_archive\" ]; then\n echo \"$_archive\"\n return
0\n fi\n ;;\n \"aarch64-unknown-linux-musl-static\")\n
\ _archive=\"uv-aarch64-unknown-linux-musl.tar.gz\"\n if
[ -n \"$_archive\" ]; then\n echo \"$_archive\"\n return
0\n fi\n ;;\n \"arm-unknown-linux-gnueabihf\")\n
\ _archive=\"uv-arm-unknown-linux-musleabihf.tar.gz\"\n if
[ -n \"$_archive\" ]; then\n echo \"$_archive\"\n return
0\n fi\n ;;\n \"arm-unknown-linux-musl-dynamiceabihf\")\n
\ _archive=\"uv-arm-unknown-linux-musleabihf.tar.gz\"\n if
[ -n \"$_archive\" ]; then\n echo \"$_archive\"\n return
0\n fi\n ;;\n \"arm-unknown-linux-musl-staticeabihf\")\n
\ _archive=\"uv-arm-unknown-linux-musleabihf.tar.gz\"\n if
[ -n \"$_archive\" ]; then\n echo \"$_archive\"\n return
0\n fi\n ;;\n \"armv7-unknown-linux-gnueabihf\")\n
\ _archive=\"uv-armv7-unknown-linux-gnueabihf.tar.gz\"\n if
! check_glibc \"2\" \"17\"; then\n _archive=\"\"\n fi\n
\ if [ -n \"$_archive\" ]; then\n echo \"$_archive\"\n
\ return 0\n fi\n _archive=\"uv-armv7-unknown-linux-musleabihf.tar.gz\"\n
\ if [ -n \"$_archive\" ]; then\n echo \"$_archive\"\n
\ return 0\n fi\n ;;\n \"armv7-unknown-linux-musl-dynamiceabihf\")\n
\ _archive=\"uv-armv7-unknown-linux-musleabihf.tar.gz\"\n if
[ -n \"$_archive\" ]; then\n echo \"$_archive\"\n return
0\n fi\n ;;\n \"armv7-unknown-linux-musl-staticeabihf\")\n
\ _archive=\"uv-armv7-unknown-linux-musleabihf.tar.gz\"\n if
[ -n \"$_archive\" ]; then\n echo \"$_archive\"\n return
0\n fi\n ;;\n \"i686-pc-windows-gnu\")\n _archive=\"uv-i686-pc-windows-msvc.zip\"\n
\ if [ -n \"$_archive\" ]; then\n echo \"$_archive\"\n
\ return 0\n fi\n ;;\n \"i686-pc-windows-msvc\")\n
\ _archive=\"uv-i686-pc-windows-msvc.zip\"\n if [ -n \"$_archive\"
]; then\n echo \"$_archive\"\n return 0\n fi\n
\ ;;\n \"i686-unknown-linux-gnu\")\n _archive=\"uv-i686-unknown-linux-gnu.tar.gz\"\n
\ if ! check_glibc \"2\" \"17\"; then\n _archive=\"\"\n
\ fi\n if [ -n \"$_archive\" ]; then\n echo
\"$_archive\"\n return 0\n fi\n _archive=\"uv-i686-unknown-linux-musl.tar.gz\"\n
\ if [ -n \"$_archive\" ]; then\n echo \"$_archive\"\n
\ return 0\n fi\n ;;\n \"i686-unknown-linux-musl-dynamic\")\n
\ _archive=\"uv-i686-unknown-linux-musl.tar.gz\"\n if [ -n
\"$_archive\" ]; then\n echo \"$_archive\"\n return
0\n fi\n ;;\n \"i686-unknown-linux-musl-static\")\n
\ _archive=\"uv-i686-unknown-linux-musl.tar.gz\"\n if [ -n
\"$_archive\" ]; then\n echo \"$_archive\"\n return
0\n fi\n ;;\n \"powerpc64-unknown-linux-gnu\")\n
\ _archive=\"uv-powerpc64-unknown-linux-gnu.tar.gz\"\n if
! check_glibc \"2\" \"17\"; then\n _archive=\"\"\n fi\n
\ if [ -n \"$_archive\" ]; then\n echo \"$_archive\"\n
\ return 0\n fi\n ;;\n \"powerpc64le-unknown-linux-gnu\")\n
\ _archive=\"uv-powerpc64le-unknown-linux-gnu.tar.gz\"\n if
! check_glibc \"2\" \"17\"; then\n _archive=\"\"\n fi\n
\ if [ -n \"$_archive\" ]; then\n echo \"$_archive\"\n
\ return 0\n fi\n ;;\n \"riscv64gc-unknown-linux-gnu\")\n
\ _archive=\"uv-riscv64gc-unknown-linux-gnu.tar.gz\"\n if
! check_glibc \"2\" \"31\"; then\n _archive=\"\"\n fi\n
\ if [ -n \"$_archive\" ]; then\n echo \"$_archive\"\n
\ return 0\n fi\n ;;\n \"s390x-unknown-linux-gnu\")\n
\ _archive=\"uv-s390x-unknown-linux-gnu.tar.gz\"\n if ! check_glibc
\"2\" \"17\"; then\n _archive=\"\"\n fi\n if
[ -n \"$_archive\" ]; then\n echo \"$_archive\"\n return
0\n fi\n ;;\n \"x86_64-apple-darwin\")\n _archive=\"uv-x86_64-apple-darwin.tar.gz\"\n
\ if [ -n \"$_archive\" ]; then\n echo \"$_archive\"\n
\ return 0\n fi\n ;;\n \"x86_64-pc-windows-gnu\")\n
\ _archive=\"uv-x86_64-pc-windows-msvc.zip\"\n if [ -n \"$_archive\"
]; then\n echo \"$_archive\"\n return 0\n fi\n
\ ;;\n \"x86_64-pc-windows-msvc\")\n _archive=\"uv-x86_64-pc-windows-msvc.zip\"\n
\ if [ -n \"$_archive\" ]; then\n echo \"$_archive\"\n
\ return 0\n fi\n _archive=\"uv-i686-pc-windows-msvc.zip\"\n
\ if [ -n \"$_archive\" ]; then\n echo \"$_archive\"\n
\ return 0\n fi\n ;;\n \"x86_64-unknown-linux-gnu\")\n
\ _archive=\"uv-x86_64-unknown-linux-gnu.tar.gz\"\n if !
check_glibc \"2\" \"17\"; then\n _archive=\"\"\n fi\n
\ if [ -n \"$_archive\" ]; then\n echo \"$_archive\"\n
\ return 0\n fi\n _archive=\"uv-x86_64-unknown-linux-musl.tar.gz\"\n
\ if [ -n \"$_archive\" ]; then\n echo \"$_archive\"\n
\ return 0\n fi\n ;;\n \"x86_64-unknown-linux-musl-dynamic\")\n
\ _archive=\"uv-x86_64-unknown-linux-musl.tar.gz\"\n if [
-n \"$_archive\" ]; then\n echo \"$_archive\"\n return
0\n fi\n ;;\n \"x86_64-unknown-linux-musl-static\")\n
\ _archive=\"uv-x86_64-unknown-linux-musl.tar.gz\"\n if [
-n \"$_archive\" ]; then\n echo \"$_archive\"\n return
0\n fi\n ;;\n *)\n err \"there isn't a
download for your platform $_true_arch\"\n ;;\n esac\n err \"no
compatible downloads were found for your platform $_true_arch\"\n}\n\ncheck_glibc()
{\n local _min_glibc_major=\"$1\"\n local _min_glibc_series=\"$2\"\n\n #
Parsing version out from line 1 like:\n # ldd (Ubuntu GLIBC 2.35-0ubuntu3.1)
2.35\n _local_glibc=\"$(ldd --version | awk -F' ' '{ if (FNR<=1) print $NF
}')\"\n\n if [ \"$(echo \"${_local_glibc}\" | awk -F. '{ print $1 }')\" = \"$_min_glibc_major\"
] && [ \"$(echo \"${_local_glibc}\" | awk -F. '{ print $2 }')\" -ge \"$_min_glibc_series\"
]; then\n return 0\n else\n say \"System glibc version (\\`${_local_glibc}')
is too old; checking alternatives\" >&2\n return 1\n fi\n}\n\n# See
discussion of late-bound vs early-bound for why we use single-quotes with env
vars\n# shellcheck disable=SC2016\ninstall() {\n # This code needs to both
compute certain paths for itself to write to, and\n # also write them to shell/rc
files so that they can look them up to e.g.\n # add them to PATH. This requires
an active distinction between paths\n # and expressions that can compute them.\n
\ #\n # The distinction lies in when we want env-vars to be evaluated. For
instance\n # if we determine that we want to install to $HOME/.myapp, which
do we add\n # to e.g. $HOME/.profile:\n #\n # * early-bound: export PATH=\"/home/myuser/.myapp:$PATH\"\n
\ # * late-bound: export PATH=\"$HOME/.myapp:$PATH\"\n #\n # In this
case most people would prefer the late-bound version, but in other\n # cases
the early-bound version might be a better idea. In particular when using\n #
other env-vars than $HOME, they are more likely to be only set temporarily\n #
for the duration of this install script, so it's more advisable to erase their\n
\ # existence with early-bounding.\n #\n # This distinction is handled
by \"double-quotes\" (early) vs 'single-quotes' (late).\n #\n # However
if we detect that \"$SOME_VAR/...\" is a subdir of $HOME, we try to rewrite\n
\ # it to be '$HOME/...' to get the best of both worlds.\n #\n # This
script has a few different variants, the most complex one being the\n # CARGO_HOME
version which attempts to install things to Cargo's bin dir,\n # potentially
setting up a minimal version if the user hasn't ever installed Cargo.\n #\n
\ # In this case we need to:\n #\n # * Install to $HOME/.cargo/bin/\n
\ # * Create a shell script at $HOME/.cargo/env that:\n # * Checks if $HOME/.cargo/bin/
is on PATH\n # * and if not prepends it to PATH\n # * Edits $INFERRED_HOME/.profile
to run $HOME/.cargo/env (if the line doesn't exist)\n #\n # To do this we
need these 4 values:\n\n # The actual path we're going to install to\n local
_install_dir\n # The directory C dynamic/static libraries install to\n local
_lib_install_dir\n # The install prefix we write to the receipt.\n # For
organized install methods like CargoHome, which have\n # subdirectories, this
is the root without `/bin`. For other\n # methods, this is the same as `_install_dir`.\n
\ local _receipt_install_dir\n # Path to the an shell script that adds install_dir
to PATH\n local _env_script_path\n # Potentially-late-bound version of install_dir
to write env_script\n local _install_dir_expr\n # Potentially-late-bound
version of env_script_path to write to rcfiles like $HOME/.profile\n local
_env_script_path_expr\n # Forces the install to occur at this path, not the
default\n local _force_install_dir\n # Which install layout to use - \"flat\"
or \"hierarchical\"\n local _install_layout=\"unspecified\"\n # A list of
binaries which are shadowed in the PATH\n local _shadowed_bins=\"\"\n\n #
Check the newer app-specific variable before falling back\n # to the older
generic one\n if [ -n \"${UV_INSTALL_DIR:-}\" ]; then\n _force_install_dir=\"$UV_INSTALL_DIR\"\n
\ _install_layout=\"flat\"\n elif [ -n \"${CARGO_DIST_FORCE_INSTALL_DIR:-}\"
]; then\n _force_install_dir=\"$CARGO_DIST_FORCE_INSTALL_DIR\"\n _install_layout=\"flat\"\n
\ elif [ -n \"$UNMANAGED_INSTALL\" ]; then\n _force_install_dir=\"$UNMANAGED_INSTALL\"\n
\ _install_layout=\"flat\"\n fi\n\n # Check if the install layout
should be changed from `flat` to `cargo-home`\n # for backwards compatible
updates of applications that switched layouts.\n if [ -n \"${_force_install_dir:-}\"
]; then\n if [ \"$_install_layout\" = \"flat\" ]; then\n # If
the install directory is targeting the Cargo home directory, then\n #
we assume this application was previously installed that layout\n if
[ \"$_force_install_dir\" = \"${CARGO_HOME:-${INFERRED_HOME:-}/.cargo}\" ]; then\n
\ _install_layout=\"cargo-home\"\n fi\n fi\n fi\n\n
\ # Before actually consulting the configured install strategy, see\n # if
we're overriding it.\n if [ -n \"${_force_install_dir:-}\" ]; then\n case
\"$_install_layout\" in\n \"hierarchical\")\n _install_dir=\"$_force_install_dir/bin\"\n
\ _lib_install_dir=\"$_force_install_dir/lib\"\n _receipt_install_dir=\"$_force_install_dir\"\n
\ _env_script_path=\"$_force_install_dir/env\"\n _install_dir_expr=\"$(replace_home
\"$_force_install_dir/bin\")\"\n _env_script_path_expr=\"$(replace_home
\"$_force_install_dir/env\")\"\n ;;\n \"cargo-home\")\n
\ _install_dir=\"$_force_install_dir/bin\"\n _lib_install_dir=\"$_force_install_dir/bin\"\n
\ _receipt_install_dir=\"$_force_install_dir\"\n _env_script_path=\"$_force_install_dir/env\"\n
\ _install_dir_expr=\"$(replace_home \"$_force_install_dir/bin\")\"\n
\ _env_script_path_expr=\"$(replace_home \"$_force_install_dir/env\")\"\n
\ ;;\n \"flat\")\n _install_dir=\"$_force_install_dir\"\n
\ _lib_install_dir=\"$_force_install_dir\"\n _receipt_install_dir=\"$_install_dir\"\n
\ _env_script_path=\"$_force_install_dir/env\"\n _install_dir_expr=\"$(replace_home
\"$_force_install_dir\")\"\n _env_script_path_expr=\"$(replace_home
\"$_force_install_dir/env\")\"\n ;;\n *)\n err
\"Unrecognized install layout: $_install_layout\"\n ;;\n esac\n
\ fi\n if [ -z \"${_install_dir:-}\" ]; then\n _install_layout=\"flat\"\n
\ # Install to $XDG_BIN_HOME\n if [ -n \"${XDG_BIN_HOME:-}\" ]; then\n
\ _install_dir=\"$XDG_BIN_HOME\"\n _lib_install_dir=\"$_install_dir\"\n
\ _receipt_install_dir=\"$_install_dir\"\n _env_script_path=\"$XDG_BIN_HOME/env\"\n
\ _install_dir_expr=\"$(replace_home \"$_install_dir\")\"\n _env_script_path_expr=\"$(replace_home
\"$_env_script_path\")\"\n fi\n fi\n if [ -z \"${_install_dir:-}\"
]; then\n _install_layout=\"flat\"\n # Install to $XDG_DATA_HOME/../bin\n
\ if [ -n \"${XDG_DATA_HOME:-}\" ]; then\n _install_dir=\"$XDG_DATA_HOME/../bin\"\n
\ _lib_install_dir=\"$_install_dir\"\n _receipt_install_dir=\"$_install_dir\"\n
\ _env_script_path=\"$XDG_DATA_HOME/../bin/env\"\n _install_dir_expr=\"$(replace_home
\"$_install_dir\")\"\n _env_script_path_expr=\"$(replace_home \"$_env_script_path\")\"\n
\ fi\n fi\n if [ -z \"${_install_dir:-}\" ]; then\n _install_layout=\"flat\"\n
\ # Install to $HOME/.local/bin\n if [ -n \"${INFERRED_HOME:-}\"
]; then\n _install_dir=\"$INFERRED_HOME/.local/bin\"\n _lib_install_dir=\"$INFERRED_HOME/.local/bin\"\n
\ _receipt_install_dir=\"$_install_dir\"\n _env_script_path=\"$INFERRED_HOME/.local/bin/env\"\n
\ _install_dir_expr=\"$INFERRED_HOME_EXPRESSION/.local/bin\"\n _env_script_path_expr=\"$INFERRED_HOME_EXPRESSION/.local/bin/env\"\n
\ fi\n fi\n\n if [ -z \"$_install_dir_expr\" ]; then\n err
\"could not find a valid path to install to!\"\n fi\n\n # Identical to the
sh version, just with a .fish file extension\n # We place it down here to wait
until it's been assigned in every\n # path.\n _fish_env_script_path=\"${_env_script_path}.fish\"\n
\ _fish_env_script_path_expr=\"${_env_script_path_expr}.fish\"\n\n # Replace
the temporary cargo home with the calculated one\n RECEIPT=$(echo \"$RECEIPT\"
| sed \"s,AXO_INSTALL_PREFIX,$_receipt_install_dir,\")\n # Also replace the
aliases with the arch-specific one\n RECEIPT=$(echo \"$RECEIPT\" | sed \"s'\\\"binary_aliases\\\":{}'\\\"binary_aliases\\\":$(json_binary_aliases
\"$_arch\")'\")\n # And replace the install layout\n RECEIPT=$(echo \"$RECEIPT\"
| sed \"s'\\\"install_layout\\\":\\\"unspecified\\\"'\\\"install_layout\\\":\\\"$_install_layout\\\"'\")\n
\ if [ \"$NO_MODIFY_PATH\" = \"1\" ]; then\n RECEIPT=$(echo \"$RECEIPT\"
| sed \"s'\\\"modify_path\\\":true'\\\"modify_path\\\":false'\")\n fi\n\n say
\"installing to $_install_dir\"\n ensure mkdir -p \"$_install_dir\"\n ensure
mkdir -p \"$_lib_install_dir\"\n\n # copy all the binaries to the install dir\n
\ local _src_dir=\"$1\"\n local _bins=\"$2\"\n local _libs=\"$3\"\n local
_staticlibs=\"$4\"\n local _arch=\"$5\"\n for _bin_name in $_bins; do\n
\ local _bin=\"$_src_dir/$_bin_name\"\n ensure mv \"$_bin\" \"$_install_dir\"\n
\ # unzip seems to need this chmod\n ensure chmod +x \"$_install_dir/$_bin_name\"\n
\ for _dest in $(aliases_for_binary \"$_bin_name\" \"$_arch\"); do\n ln
-sf \"$_install_dir/$_bin_name\" \"$_install_dir/$_dest\"\n done\n say
\" $_bin_name\"\n done\n # Like the above, but no aliases\n for _lib_name
in $_libs; do\n local _lib=\"$_src_dir/$_lib_name\"\n ensure mv
\"$_lib\" \"$_lib_install_dir\"\n # unzip seems to need this chmod\n ensure
chmod +x \"$_lib_install_dir/$_lib_name\"\n say \" $_lib_name\"\n done\n
\ for _lib_name in $_staticlibs; do\n local _lib=\"$_src_dir/$_lib_name\"\n
\ ensure mv \"$_lib\" \"$_lib_install_dir\"\n # unzip seems to need
this chmod\n ensure chmod +x \"$_lib_install_dir/$_lib_name\"\n say
\" $_lib_name\"\n done\n\n say \"everything's installed!\"\n\n # Avoid
modifying the users PATH if they are managing their PATH manually\n case :$PATH:\n
\ in *:$_install_dir:*) NO_MODIFY_PATH=1 ;;\n *) ;;\n esac\n\n
\ if [ \"0\" = \"$NO_MODIFY_PATH\" ]; then\n add_install_dir_to_ci_path
\"$_install_dir\"\n add_install_dir_to_path \"$_install_dir_expr\" \"$_env_script_path\"
\"$_env_script_path_expr\" \".profile\" \"sh\"\n exit1=$?\n shotgun_install_dir_to_path
\"$_install_dir_expr\" \"$_env_script_path\" \"$_env_script_path_expr\" \".profile
.bashrc .bash_profile .bash_login\" \"sh\"\n exit2=$?\n add_install_dir_to_path
\"$_install_dir_expr\" \"$_env_script_path\" \"$_env_script_path_expr\" \".zshrc
.zshenv\" \"sh\"\n exit3=$?\n # This path may not exist by default\n
\ ensure mkdir -p \"$INFERRED_HOME/.config/fish/conf.d\"\n exit4=$?\n
\ add_install_dir_to_path \"$_install_dir_expr\" \"$_fish_env_script_path\"
\"$_fish_env_script_path_expr\" \".config/fish/conf.d/$APP_NAME.env.fish\" \"fish\"\n
\ exit5=$?\n\n if [ \"${exit1:-0}\" = 1 ] || [ \"${exit2:-0}\" =
1 ] || [ \"${exit3:-0}\" = 1 ] || [ \"${exit4:-0}\" = 1 ] || [ \"${exit5:-0}\"
= 1 ]; then\n say \"\"\n say \"To add $_install_dir_expr
to your PATH, either restart your shell or run:\"\n say \"\"\n say
\" source $_env_script_path_expr (sh, bash, zsh)\"\n say \" source
$_fish_env_script_path_expr (fish)\"\n fi\n fi\n\n _shadowed_bins=\"$(check_for_shadowed_bins
\"$_install_dir\" \"$_bins\")\"\n if [ -n \"$_shadowed_bins\" ]; then\n warn
\"The following commands are shadowed by other commands in your PATH:$_shadowed_bins\"\n
\ fi\n}\n\ncheck_for_shadowed_bins() {\n local _install_dir=\"$1\"\n local
_bins=\"$2\"\n local _shadow\n\n for _bin_name in $_bins; do\n _shadow=\"$(command
-v \"$_bin_name\")\"\n if [ -n \"$_shadow\" ] && [ \"$_shadow\" != \"$_install_dir/$_bin_name\"
]; then\n _shadowed_bins=\"$_shadowed_bins $_bin_name\"\n fi\n
\ done\n\n echo \"$_shadowed_bins\"\n}\n\nprint_home_for_script() {\n local
script=\"$1\"\n\n local _home\n case \"$script\" in\n # zsh has a
special ZDOTDIR directory, which if set\n # should be considered instead
of $HOME\n .zsh*)\n if [ -n \"${ZDOTDIR:-}\" ]; then\n _home=\"$ZDOTDIR\"\n
\ else\n _home=\"$INFERRED_HOME\"\n fi\n ;;\n
\ *)\n _home=\"$INFERRED_HOME\"\n ;;\n esac\n\n
\ echo \"$_home\"\n}\n\nadd_install_dir_to_ci_path() {\n # Attempt to do
CI-specific rituals to get the install-dir on PATH faster\n local _install_dir=\"$1\"\n\n
\ # If GITHUB_PATH is present, then write install_dir to the file it refs.\n
\ # After each GitHub Action, the contents will be added to PATH.\n # So
if you put a curl | sh for this script in its own \"run\" step,\n # the next
step will have this dir on PATH.\n #\n # Note that GITHUB_PATH will not
resolve any variables, so we in fact\n # want to write install_dir and not
install_dir_expr\n if [ -n \"${GITHUB_PATH:-}\" ]; then\n ensure echo
\"$_install_dir\" >> \"$GITHUB_PATH\"\n fi\n}\n\nadd_install_dir_to_path()
{\n # Edit rcfiles ($HOME/.profile) to add install_dir to $PATH\n #\n #
We do this slightly indirectly by creating an \"env\" shell script which checks
if install_dir\n # is on $PATH already, and prepends it if not. The actual
line we then add to rcfiles\n # is to just source that script. This allows
us to blast it into lots of different rcfiles and\n # have it run multiple
times without causing problems. It's also specifically compatible\n # with
the system rustup uses, so that we don't conflict with it.\n local _install_dir_expr=\"$1\"\n
\ local _env_script_path=\"$2\"\n local _env_script_path_expr=\"$3\"\n local
_rcfiles=\"$4\"\n local _shell=\"$5\"\n\n if [ -n \"${INFERRED_HOME:-}\"
]; then\n local _target\n local _home\n\n # Find the first
file in the array that exists and choose\n # that as our target to write
to\n for _rcfile_relative in $_rcfiles; do\n _home=\"$(print_home_for_script
\"$_rcfile_relative\")\"\n local _rcfile=\"$_home/$_rcfile_relative\"\n\n
\ if [ -f \"$_rcfile\" ]; then\n _target=\"$_rcfile\"\n
\ break\n fi\n done\n\n # If we didn't
find anything, pick the first entry in the\n # list as the default to create
and write to\n if [ -z \"${_target:-}\" ]; then\n local _rcfile_relative\n
\ _rcfile_relative=\"$(echo \"$_rcfiles\" | awk '{ print $1 }')\"\n
\ _home=\"$(print_home_for_script \"$_rcfile_relative\")\"\n _target=\"$_home/$_rcfile_relative\"\n
\ fi\n\n # `source x` is an alias for `. x`, and the latter is more
portable/actually-posix.\n # This apparently comes up a lot on freebsd.
It's easy enough to always add\n # the more robust line to rcfiles, but
when telling the user to apply the change\n # to their current shell \".
x\" is pretty easy to misread/miscopy, so we use the\n # prettier \"source
x\" line there. Hopefully people with Weird Shells are aware\n # this is
a thing and know to tweak it (or just restart their shell).\n local _robust_line=\".
\\\"$_env_script_path_expr\\\"\"\n local _pretty_line=\"source \\\"$_env_script_path_expr\\\"\"\n\n
\ # Add the env script if it doesn't already exist\n if [ ! -f \"$_env_script_path\"
]; then\n say_verbose \"creating $_env_script_path\"\n if
[ \"$_shell\" = \"sh\" ]; then\n write_env_script_sh \"$_install_dir_expr\"
\"$_env_script_path\"\n else\n write_env_script_fish
\"$_install_dir_expr\" \"$_env_script_path\"\n fi\n else\n say_verbose
\"$_env_script_path already exists\"\n fi\n\n # Check if the line
is already in the rcfile\n # grep: 0 if matched, 1 if no match, and 2 if
an error occurred\n #\n # Ideally we could use quiet grep (-q),
but that makes \"match\" and \"error\"\n # have the same behaviour, when
we want \"no match\" and \"error\" to be the same\n # (on error we want
to create the file, which >> conveniently does)\n #\n # We search
for both kinds of line here just to do the right thing in more cases.\n if
! grep -F \"$_robust_line\" \"$_target\" > /dev/null 2>/dev/null && \\\n !
grep -F \"$_pretty_line\" \"$_target\" > /dev/null 2>/dev/null\n then\n
\ # If the script now exists, add the line to source it to the rcfile\n
\ # (This will also create the rcfile if it doesn't exist)\n if
[ -f \"$_env_script_path\" ]; then\n local _line\n #
Fish has deprecated `.` as an alias for `source` and\n # it will
be removed in a later version.\n # https://fishshell.com/docs/current/cmds/source.html\n
\ # By contrast, `.` is the traditional syntax in sh and\n #
`source` isn't always supported in all circumstances.\n if [ \"$_shell\"
= \"fish\" ]; then\n _line=\"$_pretty_line\"\n else\n
\ _line=\"$_robust_line\"\n fi\n say_verbose
\"adding $_line to $_target\"\n # prepend an extra newline in case
the user's file is missing a trailing one\n ensure echo \"\" >>
\"$_target\"\n ensure echo \"$_line\" >> \"$_target\"\n return
1\n fi\n else\n say_verbose \"$_install_dir already
on PATH\"\n fi\n fi\n}\n\nshotgun_install_dir_to_path() {\n # Edit
rcfiles ($HOME/.profile) to add install_dir to $PATH\n # (Shotgun edition -
write to all provided files that exist rather than just the first)\n local
_install_dir_expr=\"$1\"\n local _env_script_path=\"$2\"\n local _env_script_path_expr=\"$3\"\n
\ local _rcfiles=\"$4\"\n local _shell=\"$5\"\n\n if [ -n \"${INFERRED_HOME:-}\"
]; then\n local _found=false\n local _home\n\n for _rcfile_relative
in $_rcfiles; do\n _home=\"$(print_home_for_script \"$_rcfile_relative\")\"\n
\ local _rcfile_abs=\"$_home/$_rcfile_relative\"\n\n if [
-f \"$_rcfile_abs\" ]; then\n _found=true\n add_install_dir_to_path
\"$_install_dir_expr\" \"$_env_script_path\" \"$_env_script_path_expr\" \"$_rcfile_relative\"
\"$_shell\"\n fi\n done\n\n # Fall through to previous
\"create + write to first file in list\" behavior\n\t if [ \"$_found\" = false
]; then\n add_install_dir_to_path \"$_install_dir_expr\" \"$_env_script_path\"
\"$_env_script_path_expr\" \"$_rcfiles\" \"$_shell\"\n fi\n fi\n}\n\nwrite_env_script_sh()
{\n # write this env script to the given path (this cat/EOF stuff is a \"heredoc\"
string)\n local _install_dir_expr=\"$1\"\n local _env_script_path=\"$2\"\n
\ ensure cat <<EOF > \"$_env_script_path\"\n#!/bin/sh\n# add binaries to PATH
if they aren't added yet\n# affix colons on either side of \\$PATH to simplify
matching\ncase \":\\${PATH}:\" in\n *:\"$_install_dir_expr\":*)\n ;;\n
\ *)\n # Prepending path in case a system-installed binary needs to be
overridden\n export PATH=\"$_install_dir_expr:\\$PATH\"\n ;;\nesac\nEOF\n}\n\nwrite_env_script_fish()
{\n # write this env script to the given path (this cat/EOF stuff is a \"heredoc\"
string)\n local _install_dir_expr=\"$1\"\n local _env_script_path=\"$2\"\n
\ ensure cat <<EOF > \"$_env_script_path\"\nif not contains \"$_install_dir_expr\"
\\$PATH\n # Prepending path in case a system-installed binary needs to be overridden\n
\ set -x PATH \"$_install_dir_expr\" \\$PATH\nend\nEOF\n}\n\nget_current_exe()
{\n # Returns the executable used for system architecture detection\n #
This is only run on Linux\n local _current_exe\n if test -L /proc/self/exe
; then\n _current_exe=/proc/self/exe\n else\n warn \"Unable to
find /proc/self/exe. System architecture detection might be inaccurate.\"\n if
test -n \"$SHELL\" ; then\n _current_exe=$SHELL\n else\n need_cmd
/bin/sh\n _current_exe=/bin/sh\n fi\n warn \"Falling
back to $_current_exe.\"\n fi\n echo \"$_current_exe\"\n}\n\nget_bitness()
{\n need_cmd head\n # Architecture detection without dependencies beyond
coreutils.\n # ELF files start out \"\\x7fELF\", and the following byte is\n
\ # 0x01 for 32-bit and\n # 0x02 for 64-bit.\n # The printf builtin
on some shells like dash only supports octal\n # escape sequences, so we use
those.\n local _current_exe=$1\n local _current_exe_head\n _current_exe_head=$(head
-c 5 \"$_current_exe\")\n if [ \"$_current_exe_head\" = \"$(printf '\\177ELF\\001')\"
]; then\n echo 32\n elif [ \"$_current_exe_head\" = \"$(printf '\\177ELF\\002')\"
]; then\n echo 64\n else\n err \"unknown platform bitness\"\n
\ fi\n}\n\nis_host_amd64_elf() {\n local _current_exe=$1\n\n need_cmd
head\n need_cmd tail\n # ELF e_machine detection without dependencies beyond
coreutils.\n # Two-byte field at offset 0x12 indicates the CPU,\n # but
we're interested in it being 0x3E to indicate amd64, or not that.\n local _current_exe_machine\n
\ _current_exe_machine=$(head -c 19 \"$_current_exe\" | tail -c 1)\n [ \"$_current_exe_machine\"
= \"$(printf '\\076')\" ]\n}\n\nget_endianness() {\n local _current_exe=$1\n
\ local cputype=$2\n local suffix_eb=$3\n local suffix_el=$4\n\n #
detect endianness without od/hexdump, like get_bitness() does.\n need_cmd head\n
\ need_cmd tail\n\n local _current_exe_endianness\n _current_exe_endianness=\"$(head
-c 6 \"$_current_exe\" | tail -c 1)\"\n if [ \"$_current_exe_endianness\" =
\"$(printf '\\001')\" ]; then\n echo \"${cputype}${suffix_el}\"\n elif
[ \"$_current_exe_endianness\" = \"$(printf '\\002')\" ]; then\n echo \"${cputype}${suffix_eb}\"\n
\ else\n err \"unknown platform endianness\"\n fi\n}\n\n# Detect the
Linux/LoongArch UAPI flavor, with all errors being non-fatal.\n# Returns 0 or
234 in case of successful detection, 1 otherwise (/tmp being\n# noexec, or other
causes).\ncheck_loongarch_uapi() {\n need_cmd base64\n\n local _tmp\n if
! _tmp=\"$(ensure mktemp)\"; then\n return 1\n fi\n\n # Minimal Linux/LoongArch
UAPI detection, exiting with 0 in case of\n # upstream (\"new world\") UAPI,
and 234 (-EINVAL truncated) in case of\n # old-world (as deployed on several
early commercial Linux distributions\n # for LoongArch).\n #\n # See
https://gist.github.com/xen0n/5ee04aaa6cecc5c7794b9a0c3b65fc7f for\n # source
to this helper binary.\n ignore base64 -d > \"$_tmp\" <<EOF\nf0VMRgIBAQAAAAAAAAAAAAIAAgEBAAAAeAAgAAAAAABAAAAAAAAAAAAAAAAAAAAAQQAAAEAAOAAB\nAAAAAAAAAAEAAAAFAAAAAAAAAAAAAAAAACAAAAAAAAAAIAAAAAAAJAAAAAAAAAAkAAAAAAAAAAAA\nAQAAAAAABCiAAwUAFQAGABUAByCAAwsYggMAACsAC3iBAwAAKwAxen0n\nEOF\n\n
\ ignore chmod u+x \"$_tmp\"\n if [ ! -x \"$_tmp\" ]; then\n ignore
rm \"$_tmp\"\n return 1\n fi\n\n \"$_tmp\"\n local _retval=$?\n\n
\ ignore rm \"$_tmp\"\n return \"$_retval\"\n}\n\nensure_loongarch_uapi()
{\n check_loongarch_uapi\n case $? in\n 0)\n return 0\n
\ ;;\n 234)\n err 'Your Linux kernel does not provide
the ABI required by this distribution.'\n ;;\n *)\n warn
\"Cannot determine current system's ABI flavor, continuing anyway.\"\n warn
'Note that the official distribution only works with the upstream kernel ABI.'\n
\ warn 'Installation will fail if your running kernel happens to be
incompatible.'\n ;;\n esac\n}\n\nget_architecture() {\n local
_ostype\n local _cputype\n _ostype=\"$(uname -s)\"\n _cputype=\"$(uname
-m)\"\n local _clibtype=\"gnu\"\n local _local_glibc\n\n if [ \"$_ostype\"
= Linux ]; then\n if [ \"$(uname -o)\" = Android ]; then\n _ostype=Android\n
\ fi\n if ldd --version 2>&1 | grep -q 'musl'; then\n _clibtype=\"musl-dynamic\"\n
\ else\n # Assume all other linuxes are glibc (even if wrong,
static libc fallback will apply)\n _clibtype=\"gnu\"\n fi\n
\ fi\n\n if [ \"$_ostype\" = Darwin ]; then\n # Darwin `uname -m`
can lie due to Rosetta shenanigans. If you manage to\n # invoke a native
shell binary and then a native uname binary, you can\n # get the real answer,
but that's hard to ensure, so instead we use\n # `sysctl` (which doesn't
lie) to check for the actual architecture.\n if [ \"$_cputype\" = i386
]; then\n # Handling i386 compatibility mode in older macOS versions
(<10.15)\n # running on x86_64-based Macs.\n # Starting
from 10.15, macOS explicitly bans all i386 binaries from running.\n #
See: <https://support.apple.com/en-us/HT208436>\n\n # Avoid `sysctl:
unknown oid` stderr output and/or non-zero exit code.\n if sysctl hw.optional.x86_64
2> /dev/null || true | grep -q ': 1'; then\n _cputype=x86_64\n
\ fi\n elif [ \"$_cputype\" = x86_64 ]; then\n # Handling
x86-64 compatibility mode (a.k.a. Rosetta 2)\n # in newer macOS versions
(>=11) running on arm64-based Macs.\n # Rosetta 2 is built exclusively
for x86-64 and cannot run i386 binaries.\n\n # Avoid `sysctl: unknown
oid` stderr output and/or non-zero exit code.\n if sysctl hw.optional.arm64
2> /dev/null || true | grep -q ': 1'; then\n _cputype=arm64\n fi\n
\ fi\n fi\n\n if [ \"$_ostype\" = SunOS ]; then\n # Both Solaris
and illumos presently announce as \"SunOS\" in \"uname -s\"\n # so use
\"uname -o\" to disambiguate. We use the full path to the\n # system uname
in case the user has coreutils uname first in PATH,\n # which has historically
sometimes printed the wrong value here.\n if [ \"$(/usr/bin/uname -o)\"
= illumos ]; then\n _ostype=illumos\n fi\n\n # illumos
systems have multi-arch userlands, and \"uname -m\" reports the\n # machine
hardware name; e.g., \"i86pc\" on both 32- and 64-bit x86\n # systems.
\ Check for the native (widest) instruction set on the\n # running kernel:\n
\ if [ \"$_cputype\" = i86pc ]; then\n _cputype=\"$(isainfo -n)\"\n
\ fi\n fi\n\n local _current_exe\n case \"$_ostype\" in\n\n Android)\n
\ _ostype=linux-android\n ;;\n\n Linux)\n _current_exe=$(get_current_exe)\n
\ _ostype=unknown-linux-$_clibtype\n _bitness=$(get_bitness
\"$_current_exe\")\n ;;\n\n FreeBSD)\n _ostype=unknown-freebsd\n
\ ;;\n\n NetBSD)\n _ostype=unknown-netbsd\n ;;\n\n
\ DragonFly)\n _ostype=unknown-dragonfly\n ;;\n\n
\ Darwin)\n _ostype=apple-darwin\n ;;\n\n illumos)\n
\ _ostype=unknown-illumos\n ;;\n\n MINGW* | MSYS*
| CYGWIN* | Windows_NT)\n _ostype=pc-windows-gnu\n ;;\n\n
\ *)\n err \"unrecognized OS type: $_ostype\"\n ;;\n\n
\ esac\n\n case \"$_cputype\" in\n\n i386 | i486 | i686 | i786 | x86)\n
\ _cputype=i686\n ;;\n\n xscale | arm)\n _cputype=arm\n
\ if [ \"$_ostype\" = \"linux-android\" ]; then\n _ostype=linux-androideabi\n
\ fi\n ;;\n\n armv6l)\n _cputype=arm\n
\ if [ \"$_ostype\" = \"linux-android\" ]; then\n _ostype=linux-androideabi\n
\ else\n _ostype=\"${_ostype}eabihf\"\n fi\n
\ ;;\n\n armv7l | armv8l)\n _cputype=armv7\n if
[ \"$_ostype\" = \"linux-android\" ]; then\n _ostype=linux-androideabi\n
\ else\n _ostype=\"${_ostype}eabihf\"\n fi\n
\ ;;\n\n aarch64 | arm64)\n _cputype=aarch64\n ;;\n\n
\ x86_64 | x86-64 | x64 | amd64)\n _cputype=x86_64\n ;;\n\n
\ mips)\n _cputype=$(get_endianness \"$_current_exe\" mips ''
el)\n ;;\n\n mips64)\n if [ \"$_bitness\" -eq 64
]; then\n # only n64 ABI is supported for now\n _ostype=\"${_ostype}abi64\"\n
\ _cputype=$(get_endianness \"$_current_exe\" mips64 '' el)\n fi\n
\ ;;\n\n ppc)\n _cputype=powerpc\n ;;\n\n
\ ppc64)\n _cputype=powerpc64\n ;;\n\n ppc64le)\n
\ _cputype=powerpc64le\n ;;\n\n s390x)\n _cputype=s390x\n
\ ;;\n riscv64)\n _cputype=riscv64gc\n ;;\n
\ loongarch64)\n _cputype=loongarch64\n ensure_loongarch_uapi\n
\ ;;\n *)\n err \"unknown CPU type: $_cputype\"\n\n
\ esac\n\n # Detect 64-bit linux with 32-bit userland\n if [ \"${_ostype}\"
= unknown-linux-gnu ] && [ \"${_bitness}\" -eq 32 ]; then\n case $_cputype
in\n x86_64)\n # 32-bit executable for amd64 = x32\n
\ if is_host_amd64_elf \"$_current_exe\"; then {\n err
\"x32 linux unsupported\"\n }; else\n _cputype=i686\n
\ fi\n ;;\n mips64)\n _cputype=$(get_endianness
\"$_current_exe\" mips '' el)\n ;;\n powerpc64)\n _cputype=powerpc\n
\ ;;\n aarch64)\n _cputype=armv7\n if
[ \"$_ostype\" = \"linux-android\" ]; then\n _ostype=linux-androideabi\n
\ else\n _ostype=\"${_ostype}eabihf\"\n fi\n
\ ;;\n riscv64gc)\n err \"riscv64 with
32-bit userland unsupported\"\n ;;\n esac\n fi\n\n #
Detect armv7 but without the CPU features Rust needs in that build,\n # and
fall back to arm.\n if [ \"$_ostype\" = \"unknown-linux-gnueabihf\" ] && [
\"$_cputype\" = armv7 ]; then\n if ! (ensure grep '^Features' /proc/cpuinfo
| grep -E -q 'neon|simd') ; then\n # Either `/proc/cpuinfo` is malformed
or unavailable, or\n # at least one processor does not have NEON (which
is asimd on armv8+).\n _cputype=arm\n fi\n fi\n\n _arch=\"${_cputype}-${_ostype}\"\n\n
\ RETVAL=\"$_arch\"\n}\n\nsay() {\n if [ \"0\" = \"$PRINT_QUIET\" ]; then\n
\ echo \"$1\"\n fi\n}\n\nsay_verbose() {\n if [ \"1\" = \"$PRINT_VERBOSE\"
]; then\n echo \"$1\"\n fi\n}\n\nwarn() {\n if [ \"0\" = \"$PRINT_QUIET\"
]; then\n local red\n local reset\n red=$(tput setaf 1 2>/dev/null
|| echo '')\n reset=$(tput sgr0 2>/dev/null || echo '')\n say \"${red}WARN${reset}:
$1\" >&2\n fi\n}\n\nerr() {\n if [ \"0\" = \"$PRINT_QUIET\" ]; then\n local
red\n local reset\n red=$(tput setaf 1 2>/dev/null || echo '')\n
\ reset=$(tput sgr0 2>/dev/null || echo '')\n say \"${red}ERROR${reset}:
$1\" >&2\n fi\n exit 1\n}\n\nneed_cmd() {\n if ! check_cmd \"$1\"\n then
err \"need '$1' (command not found)\"\n fi\n}\n\ncheck_cmd() {\n command
-v \"$1\" > /dev/null 2>&1\n return $?\n}\n\nassert_nz() {\n if [ -z \"$1\"
]; then err \"assert_nz $2\"; fi\n}\n\n# Run a command that should never fail.
If the command fails execution\n# will immediately terminate with an error showing
the failing\n# command.\nensure() {\n if ! \"$@\"; then err \"command failed:
$*\"; fi\n}\n\n# This is just for indicating that commands' results are being\n#
intentionally ignored. Usually, because it's being executed\n# as part of error
handling.\nignore() {\n \"$@\"\n}\n\n# This wraps curl or wget. Try curl first,
if not installed,\n# use wget instead.\ndownloader() {\n # Check if we have
a broken snap curl\n # https://github.com/boukendesho/curl-snap/issues/1\n
\ _snap_curl=0\n if command -v curl > /dev/null 2>&1; then\n _curl_path=$(command
-v curl)\n if echo \"$_curl_path\" | grep \"/snap/\" > /dev/null 2>&1; then\n
\ _snap_curl=1\n fi\n fi\n\n # Check if we have a working (non-snap)
curl\n if check_cmd curl && [ \"$_snap_curl\" = \"0\" ]\n then _dld=curl\n
\ # Try wget for both no curl and the broken snap curl\n elif check_cmd wget\n
\ then _dld=wget\n # If we can't fall back from broken snap curl to wget,
report the broken snap curl\n elif [ \"$_snap_curl\" = \"1\" ]\n then\n
\ say \"curl installed with snap cannot be used to install $APP_NAME\"\n say
\"due to missing permissions. Please uninstall it and\"\n say \"reinstall
curl with a different package manager (e.g., apt).\"\n say \"See https://github.com/boukendesho/curl-snap/issues/1\"\n
\ exit 1\n else _dld='curl or wget' # to be used in error message of need_cmd\n
\ fi\n\n if [ \"$1\" = --check ]\n then need_cmd \"$_dld\"\n elif [
\"$_dld\" = curl ]; then\n if [ -n \"${AUTH_TOKEN:-}\" ]; then\n curl
-sSfL --header \"Authorization: Bearer ${AUTH_TOKEN}\" \"$1\" -o \"$2\"\n else\n
\ curl -sSfL \"$1\" -o \"$2\"\n fi\n elif [ \"$_dld\" = wget
]; then\n if [ -n \"${AUTH_TOKEN:-}\" ]; then\n wget --header
\"Authorization: Bearer ${AUTH_TOKEN}\" \"$1\" -O \"$2\"\n else\n wget
\"$1\" -O \"$2\"\n fi\n else err \"Unknown downloader\" # should not
reach here\n fi\n}\n\nverify_checksum() {\n local _file=\"$1\"\n local
_checksum_style=\"$2\"\n local _checksum_value=\"$3\"\n local _calculated_checksum\n\n
\ if [ -z \"$_checksum_value\" ]; then\n return 0\n fi\n case \"$_checksum_style\"
in\n sha256)\n if ! check_cmd sha256sum; then\n say
\"skipping sha256 checksum verification (it requires the 'sha256sum' command)\"\n
\ return 0\n fi\n _calculated_checksum=\"$(sha256sum
-b \"$_file\" | awk '{printf $1}')\"\n ;;\n sha512)\n if
! check_cmd sha512sum; then\n say \"skipping sha512 checksum verification
(it requires the 'sha512sum' command)\"\n return 0\n fi\n
\ _calculated_checksum=\"$(sha512sum -b \"$_file\" | awk '{printf $1}')\"\n
\ ;;\n sha3-256)\n if ! check_cmd openssl; then\n
\ say \"skipping sha3-256 checksum verification (it requires the
'openssl' command)\"\n return 0\n fi\n _calculated_checksum=\"$(openssl
dgst -sha3-256 \"$_file\" | awk '{printf $NF}')\"\n ;;\n sha3-512)\n
\ if ! check_cmd openssl; then\n say \"skipping sha3-512
checksum verification (it requires the 'openssl' command)\"\n return
0\n fi\n _calculated_checksum=\"$(openssl dgst -sha3-512
\"$_file\" | awk '{printf $NF}')\"\n ;;\n blake2s)\n if
! check_cmd b2sum; then\n say \"skipping blake2s checksum verification
(it requires the 'b2sum' command)\"\n return 0\n fi\n
\ # Test if we have official b2sum with blake2s support\n local
_well_known_blake2s_checksum=\"93314a61f470985a40f8da62df10ba0546dc5216e1d45847bf1dbaa42a0e97af\"\n
\ local _test_blake2s\n _test_blake2s=\"$(printf \"can do
blake2s\" | b2sum -a blake2s | awk '{printf $1}')\" || _test_blake2s=\"\"\n\n
\ if [ \"X$_test_blake2s\" = \"X$_well_known_blake2s_checksum\" ]; then\n
\ _calculated_checksum=\"$(b2sum -a blake2s \"$_file\" | awk '{printf
$1}')\" || _calculated_checksum=\"\"\n else\n say \"skipping
blake2s checksum verification (installed b2sum doesn't support blake2s)\"\n return
0\n fi\n ;;\n blake2b)\n if ! check_cmd
b2sum; then\n say \"skipping blake2b checksum verification (it
requires the 'b2sum' command)\"\n return 0\n fi\n _calculated_checksum=\"$(b2sum
\"$_file\" | awk '{printf $1}')\"\n ;;\n false)\n ;;\n
\ *)\n say \"skipping unknown checksum style: $_checksum_style\"\n
\ return 0\n ;;\n esac\n\n if [ \"$_calculated_checksum\"
!= \"$_checksum_value\" ]; then\n err \"checksum mismatch\n want:
$_checksum_value\n got: $_calculated_checksum\"\n fi\n}\n\ndownload_binary_and_run_installer
\"$@\" || exit 1\n"
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: cloud-init-code
Ugh.
Anyway, I updated the VM Pool manifest to mount the ConfigMap as a volume, then run the installer, and finally the playbook. Now it looks like this:
apiVersion: pool.kubevirt.io/v1alpha1
kind: VirtualMachinePool
metadata:
  name: vm-pool-rocky
  namespace: virtual-machine-compute
spec:
  replicas: 2
  selector:
    matchLabels:
      kubevirt.io/vmpool: vm-pool-rocky
  virtualMachineTemplate:
    metadata:
      creationTimestamp: null
      labels:
        kubevirt.io/vmpool: vm-pool-rocky
    spec:
      runStrategy: Always
      dataVolumeTemplates:
        - metadata:
            name: rockylinux-dv
          spec:
            source:
              pvc:
                name: rockylinux-10-nfs
                namespace: custom-vm-images
            pvc:
              accessModes:
                - ReadWriteMany
              resources:
                requests:
                  storage: 20Gi
              storageClassName: synology-nfs-storage
      template:
        metadata:
          creationTimestamp: null
          labels:
            kubevirt.io/vmpool: vm-pool-rocky
        spec:
          evictionStrategy: LiveMigrate
          accessCredentials:
            - sshPublicKey:
                propagationMethod:
                  noCloud: {}
                source:
                  secret:
                    secretName: sammy-chocolate-bird-34
          domain:
            devices:
              disks:
                - disk:
                    bus: virtio
                  name: rootdisk
                - disk:
                    bus: virtio
                  name: cloud-init-code
                - disk:
                    bus: virtio
                  name: cloudinitdisk
              interfaces:
                - masquerade: {}
                  model: virtio
                  name: default
            resources:
              requests:
                cpu: 512m
                memory: 4Gi
          networks:
            - name: default
              pod: {}
          volumes:
            - name: rootdisk
              dataVolume:
                name: rockylinux-dv
            - name: cloud-init-code
              configMap:
                name: cloud-init-code
            - cloudInitNoCloud:
                userData: |-
                  #cloud-config
                  user: cloud-user
                  chpasswd: { expire: False }
                  runcmd:
                    - 'sudo mount /dev/vdb /mnt'
                    - 'env UV_INSTALL_DIR=/usr/local/bin sh /mnt/uv-install.sh'
                    - "uv venv"
                    - "uv pip install ansible"
                    - 'uv run ansible-playbook -i localhost /mnt/local-playbook.yaml'
              name: cloudinitdisk
I then applied the manifest to the existing VM Pool group:
➜ vmpools git:(main) ✗ oc apply -f rocky-pool-web-v2.yaml
Warning: resource virtualmachinepools/vm-pool-rocky is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by oc apply. oc apply should only be used on resources created declaratively by either oc create --save-config or oc apply. The missing annotation will be patched automatically.
virtualmachinepool.pool.kubevirt.io/vm-pool-rocky configured
WARNING: Updating the VM Pool may cause the instances to terminate and/or reboot, depending on what you update in the manifest. I am not sure why, so I will have to follow up on this - perhaps in a later blog.
At this point, I am almost done.
The Moment: Setting Up and Testing Autoscaling, AKA HPA
I then created the following HPA manifest:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  creationTimestamp: null
  name: vm-pool-rocky
  namespace: virtual-machine-compute
  labels:
    kubevirt.io/vmpool: vm-pool-rocky
spec:
  maxReplicas: 7
  metrics:
    - resource:
        name: cpu
        target:
          averageUtilization: 50
          type: Utilization
      type: Resource
  minReplicas: 2
  scaleTargetRef:
    apiVersion: pool.kubevirt.io/v1alpha1
    kind: VirtualMachinePool
    name: vm-pool-rocky
  behavior:
    scaleUp:
      stabilizationWindowSeconds: 0
      policies:
        - type: Pods
          value: 1
          periodSeconds: 300
    scaleDown:
      stabilizationWindowSeconds: 300
      policies:
        - type: Pods
          value: 1
          periodSeconds: 60
This sets the minimum number of replicas to 2, scales up by 1 every 5 minutes while the VM pool exceeds the 50 percent CPU utilization target, and removes one instance per minute once utilization drops back below the target. Note that the CPU percentage is measured against the CPU request, which in this case is 512 millicores. So if the average instance exceeds 256 millicores (50 percent), the HPA will expand the VM pool (and eventually shrink it back to the minimum). The policies also have the added benefit of acting as a speed brake, since my hardware is quite limited. Without them, my hardware would get overwhelmed, which actually happened: the instances spun up so quickly that my relatively new Synology NAS couldn't keep up, and I had to reboot it once the PVCs could no longer be provisioned.
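To make the arithmetic concrete, here is a small Python sketch of the HPA's core formula (desiredReplicas = ceil(currentReplicas × currentUtilization / targetUtilization)) with this pool's numbers plugged in. The helper names are my own for illustration, not any real Kubernetes API.

```python
import math

# Sketch of the HPA scaling formula with this pool's request and target.
# Helper names are illustrative only, not a real Kubernetes API.
def desired_replicas(current_replicas, avg_millicores,
                     request_millicores=512, target_utilization=50):
    utilization = 100 * avg_millicores / request_millicores
    return math.ceil(current_replicas * utilization / target_utilization)

# The scaleUp policy (type: Pods, value: 1, periodSeconds: 300) acts as a
# speed brake: at most one new VM per five-minute window.
def capped_scale_up(current_replicas, desired, max_new_pods=1):
    return min(desired, current_replicas + max_new_pods)

# Two replicas averaging 400m against a 512m request (~78% utilization):
raw = desired_replicas(2, 400)   # the formula wants 4 replicas
print(capped_scale_up(2, raw))   # but only 3 are allowed this period
```

This is exactly why the pool grows one VM at a time in the event stream below, even when utilization is far above target.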
Anyway, with the HPA manifest ready, I instantiated it:
➜ vmpools git:(main) ✗ oc create -f rocky-pool-as.yaml
horizontalpodautoscaler.autoscaling/vm-pool-rocky created
➜ vmpools git:(main) ✗ oc get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
vm-pool-rocky VirtualMachinePool/vm-pool-rocky cpu: <unknown>/50% 2 7 0 4s
Then I began testing.
In one of the instances, I installed stress-ng and let it run for 10 minutes:
[cloud-user@vm-pool-rocky-1 ~]$ sudo dnf -y install stress-ng
Last metadata expiration check: 0:29:36 ago on Tue 17 Feb 2026 03:02:11 AM UTC.
Dependencies resolved.
====================================================================================================================================================================================================================================================================================================================================================================================================================
Package Architecture Version Repository Size
====================================================================================================================================================================================================================================================================================================================================================================================================================
Installing:
stress-ng x86_64 0.18.06-10.el10 appstream 2.8 M
Installing dependencies:
Judy x86_64 1.0.5-38.el10 appstream 137 k
lksctp-tools x86_64 1.0.21-1.el10 baseos 90 k
Transaction Summary
====================================================================================================================================================================================================================================================================================================================================================================================================================
Install 3 Packages
Total download size: 3.0 M
Installed size: 12 M
Downloading Packages:
(1/3): Judy-1.0.5-38.el10.x86_64.rpm 1.1 MB/s | 137 kB 00:00
(2/3): stress-ng-0.18.06-10.el10.x86_64.rpm 14 MB/s | 2.8 MB 00:00
(3/3): lksctp-tools-1.0.21-1.el10.x86_64.rpm 273 kB/s | 90 kB 00:00
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total 6.5 MB/s | 3.0 MB 00:00
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
Preparing : 1/1
Installing : Judy-1.0.5-38.el10.x86_64 1/3
Installing : lksctp-tools-1.0.21-1.el10.x86_64 2/3
Installing : stress-ng-0.18.06-10.el10.x86_64 3/3
Running scriptlet: stress-ng-0.18.06-10.el10.x86_64 3/3
Installed:
Judy-1.0.5-38.el10.x86_64 lksctp-tools-1.0.21-1.el10.x86_64 stress-ng-0.18.06-10.el10.x86_64
Complete!
[cloud-user@vm-pool-rocky-1 ~]$ stress-ng --cpu 0 --vm 2 --vm-bytes 80% --timeout 10m --metrics-brief
stress-ng: info: [2301] setting to a 10 mins run per stressor
stress-ng: info: [2301] dispatching hogs: 4 cpu, 2 vm
Within a minute, I could see that the HPA added an instance:
➜ vmpools git:(main) ✗ oc get events --field-selector involvedObject.name=vm-pool-rocky --sort-by=.lastTimestamp --watch
warning: --watch requested, --sort-by will be ignored for watch events received
LAST SEEN TYPE REASON OBJECT MESSAGE
33m Normal SuccessfulCreate virtualmachinepool/vm-pool-rocky Created VM virtual-machine-compute/vm-pool-rocky-1
33m Normal SuccessfulCreate virtualmachinepool/vm-pool-rocky Created VM virtual-machine-compute/vm-pool-rocky-0
14m Normal SuccessfulUpdate virtualmachinepool/vm-pool-rocky Updated VM virtual-machine-compute/vm-pool-rocky-0
14m Normal SuccessfulUpdate virtualmachinepool/vm-pool-rocky Updated VM virtual-machine-compute/vm-pool-rocky-1
14m Normal SuccessfulDelete virtualmachinepool/vm-pool-rocky Successfully updated resource virtual-machine-compute/vm-pool-rocky-0
14m Normal SuccessfulDelete virtualmachinepool/vm-pool-rocky Successfully updated resource virtual-machine-compute/vm-pool-rocky-1
84s Normal SuccessfulRescale horizontalpodautoscaler/vm-pool-rocky New size: 3; reason: cpu resource utilization (percentage of request) above target
83s Normal SuccessfulCreate virtualmachinepool/vm-pool-rocky Created VM virtual-machine-compute/vm-pool-rocky-2
66s Normal SuccessfulDelete virtualmachinepool/vm-pool-rocky Successfully updated resource virtual-machine-compute/vm-pool-rocky-2
The curl loop I had running confirmed that the new instance was provisioned with Ansible, but the old instances were still serving the original page:
➜ vmpools git:(main) ✗ while true
while> do
while> curl https://vm-pool-rocky-route-virtual-machine-compute.apps.okd.monzell.com ; echo
while> sleep 30
while> done
<html><body><h1>Rocky Linux Web Server on KubeVirt!</h1></body></html>
<html><body><h1>Rocky Linux Web Server on KubeVirt!</h1></body></html>
<html><body><h1>Rocky Linux Web Server on KubeVirt!</h1></body></html>
The nginx Deployment for vm-pool-rocky-2 completed at: 2026-02-17 03:34:05
<html><body><h1>Rocky Linux Web Server on KubeVirt!</h1></body></html>
<html><body><h1>Rocky Linux Web Server on KubeVirt!</h1></body></html>
The nginx Deployment for vm-pool-rocky-2 completed at: 2026-02-17 03:34:05
So I terminated the other two instances:
➜ vmpools git:(main) ✗ oc get vmi
NAME AGE PHASE IP NODENAME READY
freebsd-14-8giusc6ap5mnlven 15d Running 10.131.0.34 worker01.node.monzell.com True
rocky-vm-lb 4d23h Running 10.130.2.27 worker03.node.monzell.com True
rocky-vm-nmstate 4d Running 10.129.2.28 worker02.node.monzell.com True
rocky-vm-sr 15d Running 10.131.2.8 worker04.node.monzell.com True
rocky01-instance 2d7h Running 10.130.0.96 worker00.node.monzell.com True
rocky02-instance 2d7h Running 10.130.2.48 worker03.node.monzell.com True
rocky03-instance 2d7h Running 10.131.1.175 worker01.node.monzell.com True
vm-pool-rocky-0 3m23s Running 10.129.3.64 worker02.node.monzell.com True
vm-pool-rocky-1 5m22s Running 10.130.0.206 worker00.node.monzell.com True
vm-pool-rocky-2 9m29s Running 10.131.3.235 worker04.node.monzell.com True
vm-pool-rocky-3 65s Running 10.130.2.63 worker03.node.monzell.com True
➜ vmpools git:(main) ✗ oc delete vm vm-pool-rocky-0
virtualmachine.kubevirt.io "vm-pool-rocky-0" deleted
➜ vmpools git:(main) ✗ oc delete vm vm-pool-rocky-1
virtualmachine.kubevirt.io "vm-pool-rocky-1" deleted
(I originally used oc delete vmi, but that did not work as I expected.)
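My working theory (my own mental model, not KubeVirt source) is that a VM with runStrategy: Always simply restarts its deleted VMI, while only deleting the VM object itself leaves the pool short a replica and forces it to create a fresh one from the template. A toy reconciliation loop illustrates the difference:

```python
# Toy model of the two controllers (illustrative only, not KubeVirt code).
def reconcile(replicas, vms, vmis, prefix="vm-pool-rocky"):
    # Pool controller: top up to `replicas` VM objects, reusing the lowest
    # free ordinals (which matches the recreated -0/-1 names seen above).
    taken = {int(name.rsplit("-", 1)[1]) for name in vms}
    ordinal = 0
    while len(vms) < replicas:
        if ordinal not in taken:
            vms.add(f"{prefix}-{ordinal}")
            taken.add(ordinal)
        ordinal += 1
    # VM controller (runStrategy: Always): recreate any missing VMI.
    vmis = set(vms)
    return vms, vmis

vms = {"vm-pool-rocky-0", "vm-pool-rocky-1", "vm-pool-rocky-2"}

# `oc delete vmi vm-pool-rocky-1`: the VM object survives, so the
# instance is merely restarted on the next reconcile.
_, vmis = reconcile(3, set(vms), vms - {"vm-pool-rocky-1"})
print("vm-pool-rocky-1" in vmis)      # True

# `oc delete vm vm-pool-rocky-1`: the pool is now short one VM and
# creates a brand-new vm-pool-rocky-1 from the template.
new_vms, _ = reconcile(3, vms - {"vm-pool-rocky-1"}, set())
print("vm-pool-rocky-1" in new_vms)   # True
```

The toy model skips the detail that matters in practice: the replacement VM gets a fresh DataVolume from the template, while a restarted VMI keeps its existing disk.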
After re-running the stress test, I could see that the old instances were gone and new ones had taken their place:
➜ vmpools git:(main) ✗ oc get events --field-selector involvedObject.name=vm-pool-rocky --sort-by=.lastTimestamp --watch
warning: --watch requested, --sort-by will be ignored for watch events received
LAST SEEN TYPE REASON OBJECT MESSAGE
24m Normal SuccessfulUpdate virtualmachinepool/vm-pool-rocky Updated VM virtual-machine-compute/vm-pool-rocky-0
24m Normal SuccessfulUpdate virtualmachinepool/vm-pool-rocky Updated VM virtual-machine-compute/vm-pool-rocky-1
10m Normal SuccessfulRescale horizontalpodautoscaler/vm-pool-rocky New size: 3; reason: cpu resource utilization (percentage of request) above target
10m Normal SuccessfulCreate virtualmachinepool/vm-pool-rocky Created VM virtual-machine-compute/vm-pool-rocky-2
10m Normal SuccessfulDelete virtualmachinepool/vm-pool-rocky Successfully updated resource virtual-machine-compute/vm-pool-rocky-2
6m29s Normal SuccessfulDelete virtualmachinepool/vm-pool-rocky Successfully updated resource virtual-machine-compute/vm-pool-rocky-1
4m30s Normal SuccessfulDelete virtualmachinepool/vm-pool-rocky Successfully updated resource virtual-machine-compute/vm-pool-rocky-0
2m39s Normal SuccessfulRescale horizontalpodautoscaler/vm-pool-rocky New size: 4; reason: cpu resource utilization (percentage of request) above target
2m37s Normal SuccessfulCreate virtualmachinepool/vm-pool-rocky Created VM virtual-machine-compute/vm-pool-rocky-3
2m12s Normal SuccessfulDelete virtualmachinepool/vm-pool-rocky Successfully updated resource virtual-machine-compute/vm-pool-rocky-3
44s Normal SuccessfulCreate virtualmachinepool/vm-pool-rocky Created VM virtual-machine-compute/vm-pool-rocky-0
37s Normal SuccessfulCreate virtualmachinepool/vm-pool-rocky Created VM virtual-machine-compute/vm-pool-rocky-1
0s Normal SuccessfulDelete virtualmachinepool/vm-pool-rocky Successfully updated resource virtual-machine-compute/vm-pool-rocky-0
0s Normal SuccessfulDelete virtualmachinepool/vm-pool-rocky Successfully updated resource virtual-machine-compute/vm-pool-rocky-1
Looking at my curl loop, most of the responses now came from instances provisioned with Ansible:
➜ vmpools git:(main) ✗ while true
while> do
while> curl https://vm-pool-rocky-route-virtual-machine-compute.apps.okd.monzell.com ; echo
while> sleep 30
while> done
<html><body><h1>Rocky Linux Web Server on KubeVirt!</h1></body></html>
<html><body><h1>Rocky Linux Web Server on KubeVirt!</h1></body></html>
<html><body><h1>Rocky Linux Web Server on KubeVirt!</h1></body></html>
The nginx Deployment for vm-pool-rocky-2 completed at: 2026-02-17 03:34:05
<html><body><h1>Rocky Linux Web Server on KubeVirt!</h1></body></html>
<html><body><h1>Rocky Linux Web Server on KubeVirt!</h1></body></html>
The nginx Deployment for vm-pool-rocky-2 completed at: 2026-02-17 03:34:05
<html><body><h1>Rocky Linux Web Server on KubeVirt!</h1></body></html>
<html><body><h1>Rocky Linux Web Server on KubeVirt!</h1></body></html>
<html><body><h1>Rocky Linux Web Server on KubeVirt!</h1></body></html>
The nginx Deployment for vm-pool-rocky-2 completed at: 2026-02-17 03:34:05
<html><body><h1>Rocky Linux Web Server on KubeVirt!</h1></body></html>
The nginx Deployment for vm-pool-rocky-2 completed at: 2026-02-17 03:34:05
<html><body><h1>Rocky Linux Web Server on KubeVirt!</h1></body></html>
<html><body><h1>Rocky Linux Web Server on KubeVirt!</h1></body></html>
The nginx Deployment for vm-pool-rocky-2 completed at: 2026-02-17 03:34:05
The nginx Deployment for vm-pool-rocky-3 completed at: 2026-02-17 03:42:27
The nginx Deployment for vm-pool-rocky-2 completed at: 2026-02-17 03:34:05
The nginx Deployment for vm-pool-rocky-3 completed at: 2026-02-17 03:42:27
The nginx Deployment for vm-pool-rocky-3 completed at: 2026-02-17 03:42:27
The nginx Deployment for vm-pool-rocky-2 completed at: 2026-02-17 03:34:05
The nginx Deployment for vm-pool-rocky-3 completed at: 2026-02-17 03:42:27
The nginx Deployment for vm-pool-rocky-0 completed at: 2026-02-17 03:44:52
The nginx Deployment for vm-pool-rocky-1 completed at: 2026-02-17 03:45:06
The nginx Deployment for vm-pool-rocky-2 completed at: 2026-02-17 03:34:05
Eventually, the stress test finished. Watching the event stream, I could see the VM pool scale down:
➜ vmpools git:(main) ✗ oc get events --field-selector involvedObject.name=vm-pool-rocky --sort-by=.lastTimestamp --watch
warning: --watch requested, --sort-by will be ignored for watch events received
LAST SEEN TYPE REASON OBJECT MESSAGE
46m Normal SuccessfulUpdate virtualmachinepool/vm-pool-rocky Updated VM virtual-machine-compute/vm-pool-rocky-0
46m Normal SuccessfulUpdate virtualmachinepool/vm-pool-rocky Updated VM virtual-machine-compute/vm-pool-rocky-1
33m Normal SuccessfulRescale horizontalpodautoscaler/vm-pool-rocky New size: 3; reason: cpu resource utilization (percentage of request) above target
33m Normal SuccessfulCreate virtualmachinepool/vm-pool-rocky Created VM virtual-machine-compute/vm-pool-rocky-2
32m Normal SuccessfulDelete virtualmachinepool/vm-pool-rocky Successfully updated resource virtual-machine-compute/vm-pool-rocky-2
24m Normal SuccessfulRescale horizontalpodautoscaler/vm-pool-rocky New size: 4; reason: cpu resource utilization (percentage of request) above target
24m Normal SuccessfulCreate virtualmachinepool/vm-pool-rocky Created VM virtual-machine-compute/vm-pool-rocky-3
24m Normal SuccessfulDelete virtualmachinepool/vm-pool-rocky Successfully updated resource virtual-machine-compute/vm-pool-rocky-3
22m Normal SuccessfulCreate virtualmachinepool/vm-pool-rocky Created VM virtual-machine-compute/vm-pool-rocky-0
22m Normal SuccessfulCreate virtualmachinepool/vm-pool-rocky Created VM virtual-machine-compute/vm-pool-rocky-1
22m Normal SuccessfulDelete virtualmachinepool/vm-pool-rocky Successfully updated resource virtual-machine-compute/vm-pool-rocky-0
22m Normal SuccessfulDelete virtualmachinepool/vm-pool-rocky Successfully updated resource virtual-machine-compute/vm-pool-rocky-1
10m Normal SuccessfulRescale horizontalpodautoscaler/vm-pool-rocky New size: 3; reason: All metrics below target
10m Normal SuccessfulDelete virtualmachinepool/vm-pool-rocky Deleted VM virtual-machine-compute/vm-pool-rocky-3 with uid 90682984-3bfe-4acf-b93a-a646ea10f340 from pool
9m20s Normal SuccessfulRescale horizontalpodautoscaler/vm-pool-rocky New size: 2; reason: All metrics below target
9m19s Normal SuccessfulDelete virtualmachinepool/vm-pool-rocky Deleted VM virtual-machine-compute/vm-pool-rocky-1 with uid b72a2e6c-5c4a-43ca-99b8-3b3306f56927 from pool
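As a back-of-envelope check (my own arithmetic, assuming the HPA waits out the stabilization window and then removes at most one VM per periodSeconds), the scaleDown behavior bounds how quickly the pool can drain back to its minimum:

```python
# Rough lower bound on drain time under the scaleDown behavior above:
# wait out the 300s stabilization window, remove one VM, then at most
# one more VM per 60s period. Simplified model, not the exact HPA loop.
def min_drain_seconds(max_replicas=7, min_replicas=2,
                      stabilization=300, period=60):
    removals = max_replicas - min_replicas
    return stabilization + (removals - 1) * period

print(min_drain_seconds(4, 2))   # 360 seconds from 4 VMs down to 2
print(min_drain_seconds())       # 540 seconds from the full 7 down to 2
```

That lines up roughly with the event stream: the two scale-down events arrived about a minute apart, several minutes after the load dropped.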
Conclusion
I was pretty happy with the setup. Given its current capabilities, if I were to deploy it into a production OpenShift Virtualization environment, I would ask myself the following questions before implementing autoscaling for virtual machine instances:
- Does the app even need to be in a virtual machine? If not, I would probably run it in a container as part of a Deployment (or even Knative if it's small enough).
- Does the app ever need to scale? Spinning up a couple of static, long-running instances may be sufficient.
- Does the app need high availability but not scalability? Live Migration is probably a better option.
- Does the app need to be up as quickly as possible? For larger apps, I may have to start bundling the image with more dependencies so the app can be installed relatively quickly, which means more upkeep to ensure that the images stay reasonably up to date.
- How would I account for the reboot and replacement behavior if I need to update the pool? I may have to devise a blue-green strategy, which will definitely drive up hardware utilization.
- Would my environment even support it? Unlike the public cloud, I don't have unlimited hardware, so deliberate decisions have to be made, as a misconfigured scaling policy can drive up costs or degrade the environment (as I found out).
I would be interested in other people's thoughts. Let me know what you think.
Resources