Short walkthrough today.
Tekton released a new update to the pruner. Since my pipeline runs had really started to clutter up my cluster (pipeline runs spawn temporary pods that run to completion and then linger), rather than spending my time cleaning out the pipeline runs manually, I went and deployed this.
Usual disclaimers apply.
First, I applied version 0.3.3 from the readme (not intentionally an older release, but it gave me a chance to try the upgrade).
➜ tekton git:(master) export VERSION=0.3.3
➜ tekton git:(master) curl -L "https://infra.tekton.dev/tekton-releases/pruner/previous/v$VERSION/release.yaml" | yq 'del(.spec.template.spec.containers[].securityContext.runAsUser, .spec.template.spec.containers[].securityContext.runAsGroup)' | oc apply -f -
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 138 100 138 0 0 1129 0 --:--:-- --:--:-- --:--:-- 1131
100 26922 100 26922 0 0 83386 0 --:--:-- --:--:-- --:--:-- 83386
clusterrole.rbac.authorization.k8s.io/tekton-pruner-controller-cluster-access created
role.rbac.authorization.k8s.io/tekton-pruner-controller created
serviceaccount/tekton-pruner-controller created
clusterrolebinding.rbac.authorization.k8s.io/tekton-pruner-controller-cluster-access created
rolebinding.rbac.authorization.k8s.io/tekton-pruner-controller created
configmap/tekton-pruner-default-spec created
secret/tekton-pruner-webhook-certs created
validatingwebhookconfiguration.admissionregistration.k8s.io/validation.webhook.pruner.tekton.dev created
configmap/pruner-info created
configmap/config-logging-tekton-pruner created
configmap/config-observability-tekton-pruner created
deployment.apps/tekton-pruner-controller created
service/tekton-pruner-controller created
deployment.apps/tekton-pruner-webhook created
service/tekton-pruner-webhook created
Once again, because this is not a Tekton release specific to OpenShift, I had to strip off the security context - that is what the yq expression in the command above does.
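For reference, the container-level fields that yq expression deletes look roughly like this in the release Deployments (the UID/GID values here are illustrative, not copied from the manifest):

# Per container in the release manifest (values illustrative):
securityContext:
  runAsUser: 65532   # deleted by the yq filter
  runAsGroup: 65532  # deleted by the yq filter

OpenShift's restricted SCC assigns user IDs from a per-project range, so pods with hard-coded values outside that range get rejected at admission.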
I verified that the pruner is running.
➜ tekton git:(master) ✗ kubectl get pods -n tekton-pipelines -l app=tekton-pruner-controller
NAME READY STATUS RESTARTS AGE
tekton-pruner-controller-7d7869f6d6-t5czn 1/1 Running 0 49s
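The webhook deployment should be up as well; I didn't capture it above, but an unfiltered listing of the namespace covers both pods:

kubectl get pods -n tekton-pipelines | grep pruner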
I saved the example ConfigMap from the readme as pruner.yaml - note the ttlSecondsAfterFinished of 300, meaning anything finished for more than five minutes becomes prunable:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tekton-pruner-default-spec
  namespace: tekton-pipelines
  labels:
    app.kubernetes.io/part-of: tekton-pruner
    pruner.tekton.dev/config-type: global
data:
  global-config: |
    enforcedConfigLevel: global
    ttlSecondsAfterFinished: 300
    successfulHistoryLimit: 10
    failedHistoryLimit: 10
And applied it.
➜ tekton git:(master) ✗ oc apply -f pruner.yaml
configmap/tekton-pruner-default-spec configured
And it cleaned up the pipeline runs - rather more than I expected:
➜ okd git:(master) opc pipelinerun list -n blog | wc -l
192
➜ okd git:(master) opc pipelinerun list -n blog | wc -l
1
➜ okd git:(master) ✗
Whoops.
That was not intentional - the single remaining line is just the table header, so zero pipeline runs survived. At least this is not production.
Anyway, I adjusted the configuration more to my preference - basically keeping pipeline runs for up to a week, unless more than 13 successful or 17 failed runs pile up first:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tekton-pruner-default-spec
  namespace: tekton-pipelines
  labels:
    app.kubernetes.io/part-of: tekton-pruner
    pruner.tekton.dev/config-type: global
data:
  global-config: |
    enforcedConfigLevel: global
    ttlSecondsAfterFinished: 604800
    successfulHistoryLimit: 13
    failedHistoryLimit: 17
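After applying, the effective defaults can be read back with a standard jsonpath query - plain oc, nothing pruner-specific:

oc get configmap tekton-pruner-default-spec -n tekton-pipelines \
  -o jsonpath='{.data.global-config}'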
And then I bumped the version to the release from yesterday.
➜ tekton git:(master) ✗ export VERSION=0.3.4
➜ tekton git:(master) ✗ curl -L "https://infra.tekton.dev/tekton-releases/pruner/previous/v$VERSION/release.yaml" | yq 'del(.spec.template.spec.containers[].securityContext.runAsUser, .spec.template.spec.containers[].securityContext.runAsGroup)' | oc apply -f -
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 138 100 138 0 0 1086 0 --:--:-- --:--:-- --:--:-- 1086
100 26753 100 26753 0 0 37234 0 --:--:-- --:--:-- --:--:-- 37234
clusterrole.rbac.authorization.k8s.io/tekton-pruner-controller-cluster-access configured
role.rbac.authorization.k8s.io/tekton-pruner-controller configured
serviceaccount/tekton-pruner-controller configured
clusterrolebinding.rbac.authorization.k8s.io/tekton-pruner-controller-cluster-access configured
rolebinding.rbac.authorization.k8s.io/tekton-pruner-controller configured
configmap/tekton-pruner-default-spec configured
secret/tekton-pruner-webhook-certs configured
validatingwebhookconfiguration.admissionregistration.k8s.io/validation.webhook.pruner.tekton.dev configured
configmap/pruner-info configured
configmap/config-logging-tekton-pruner configured
configmap/config-observability-tekton-pruner configured
deployment.apps/tekton-pruner-controller configured
service/tekton-pruner-controller configured
deployment.apps/tekton-pruner-webhook configured
service/tekton-pruner-webhook unchanged
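To double-check that the bump actually landed, the controller image tag can be read straight off the Deployment (standard jsonpath; the registry path is whatever the release manifest pins):

oc get deployment tekton-pruner-controller -n tekton-pipelines \
  -o jsonpath='{.spec.template.spec.containers[0].image}'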
I think I should be good now. I'll post a follow-up if I see issues.