Deployments with Tekton aka OpenShift Pipelines

Published: Mon 08 December 2025
By Rilindo Foster

In Blog.

tags: cicd tekton openshift-pipelines

I am really tired of LinkedIn posts.

LinkedIn is useful for basic messaging and blogging. However, for more technical articles like the ones I have been posting lately, it is a pain to put together and, more importantly, it makes the posts hard to read for anyone trying to follow my walkthroughs.

So I reactivated my static site at https://rilindo.com, all deployable with Tekton, aka OpenShift Pipelines.

Why that and not something a bit more sane like GitHub Actions?

Because I want to challenge myself.

That, and I spent quite a bit on Red Hat Online Learning, so I want to get a return on that investment.

Mind you, after I was all done, I was like:

If you are interested in how I did it, follow along. Before that, a disclaimer:

THE FOLLOWING IS A MINIMAL WORKING EXAMPLE AND SHOULD NOT BE USED IN PRODUCTION WITHOUT REVIEW AND REVISION.

Now that that is out of the way...

Introduction

Tekton is a framework for building CI/CD environments using Kubernetes-native components. It is specifically designed to integrate with Kubernetes, and it lets you deploy to it as well as to other platforms. Its downstream child, OpenShift Pipelines, inherits all of its core features and adds its own native UI to wrap around it.

In this post, I will walk you through the setup of the pipeline that lets me deploy my static site to my cluster as well as concurrently deploy it to my Cloudflare-hosted site.

Setup and Configuration

The recommended way to deploy Tekton is to install the operator and then use the operator to provision the pipelines. This is easy to do (presumably) on your typical Kubernetes clusters. Its downstream implementation (OpenShift Pipelines) is also installed via the operator - in fact, it is even easier with the UI, as you just need to search for Pipelines and follow a few prompts to install.

However, installing upstream Tekton on OpenShift (or OKD, in my case) has some pitfalls. The reason is that OpenShift is far more strict with its security when it comes to running applications on it. For OpenShift Pipelines, this is not an issue, as it is already adapted to run on OpenShift. Not so much with Tekton - we will need to do a few patches.

First, we create the project manually and then add the anyuid SCC to the service accounts that are about to be created:

oc new-project tekton-pipelines
oc adm policy add-scc-to-user anyuid -z tekton-pipelines-controller
oc adm policy add-scc-to-user anyuid -z tekton-pipelines-webhook

Now, you may have seen Tekton on OperatorHub.io and been tempted to install it as is. If you do that, you are going to have a bad time: your pipelines may not run or, worse, will throw seccomp errors, as the securityContext clashes with OpenShift's security model. So instead, we will install from the release page and remove the security context with the following:

curl https://infra.tekton.dev/tekton-releases/pipeline/latest/release.yaml | yq 'del(.spec.template.spec.containers[].securityContext.runAsUser, .spec.template.spec.containers[].securityContext.runAsGroup)' | oc apply -f -
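
If everything applied cleanly, the controller and webhook pods should come up in the tekton-pipelines namespace. A quick sanity check:

oc get pods -n tekton-pipelines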

We are going to set up the triggers next (which we will need to automatically kick off the pipeline). Again, we will need to strip off the security context:

curl https://infra.tekton.dev/tekton-releases/triggers/latest/release.yaml | yq 'del(.spec.template.spec.containers[].securityContext.runAsUser, .spec.template.spec.containers[].securityContext.runAsGroup)' | oc apply -f -

curl https://infra.tekton.dev/tekton-releases/triggers/latest/interceptors.yaml  | yq 'del(.spec.template.spec.containers[].securityContext.runAsUser, .spec.template.spec.containers[].securityContext.runAsGroup)' | oc apply -f -

And add the anyuid SCC to the service accounts that were created:

oc adm policy add-scc-to-user anyuid -z tekton-triggers-webhook -n tekton-pipelines
oc adm policy add-scc-to-user anyuid -z tekton-triggers-core-interceptors -n tekton-pipelines
oc adm policy add-scc-to-user anyuid -z tekton-triggers-controller -n tekton-pipelines

And then add the privileged SCC to the same service accounts as well:

oc adm policy add-scc-to-user privileged -z tekton-triggers-controller -n tekton-pipelines
oc adm policy add-scc-to-user privileged -z tekton-triggers-core-interceptors -n tekton-pipelines
oc adm policy add-scc-to-user privileged -z tekton-triggers-webhook -n tekton-pipelines

At this point, we may feel like:

This is why we are not doing this for production.

Now we are ready to build out the pipeline.

Creating the Tasks and Pipeline

The first thing is to create a namespace or a project:

oc new-project blog

Then we will create a secret to store our SSH key:

ssh-keyscan github.com > known_hosts
oc create secret generic github-credential --type='kubernetes.io/ssh-auth' --from-file=ssh-privatekey=$HOME/.ssh/id_rsa --from-file=config=$HOME/.ssh/config-tekton --from-file=known_hosts=./known_hosts

This will be used by the git-clone task to pull our code down for processing.
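
As for config-tekton: that is a pipeline-specific SSH config of my own. Its exact contents will vary by setup, but a minimal assumed version that points SSH at the key name used inside the secret could look like this:

Host github.com
  User git
  # the file name matches the --from-file key in the secret above
  IdentityFile ~/.ssh/ssh-privatekey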

The next thing we will do is to set up the storage for the pipeline:

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: tekton-workspace
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: blog-vol
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi

The first PVC, tekton-workspace, sets up a work area where we will install our dependencies as well as build out our site. The second PVC is used as part of the home lab deployment, where we create a deployment for us to review before we push it out to Cloudflare.

IMPORTANT: Tekton doesn't let you run a pipeline with multiple static PVCs out of the box, so you need to toggle its coscheduling feature off by running:

oc edit tektonconfig config

And then changing spec.pipeline.coschedule to disabled:

apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  pipeline:
    coschedule: disabled
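
Equivalently, if you prefer a one-liner over an interactive edit, the same change can be applied with a merge patch:

oc patch tektonconfig config --type merge -p '{"spec":{"pipeline":{"coschedule":"disabled"}}}'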

Speaking of Cloudflare, we need to store the API token, so we create another secret:

oc create secret generic cfat --from-literal=cloudflare_api_token=TOKEN

Now, with both secrets created, we need to update the default service account in our blog project so that it has permission to access the secrets. Since the secrets already exist in the project, we just edit the service account:

oc edit sa default -n blog

And add the secrets:

kind: ServiceAccount
apiVersion: v1
metadata:
  name: default
  namespace: blog
secrets:
  - name: github-credential
  - name: cfat

With the preliminary configuration done, we can create our tasks:

Building the Tasks

The tasks are the stages that the pipeline invokes to build, test, or deploy. The first one we need is the stage that clones the code into our environment. Fortunately for us, that task is already available on Tekton Hub: git-clone. We just create it like so:

oc apply -f https://api.hub.tekton.dev/v1/resource/tekton/task/git-clone/0.10/raw

The rest we will have to write ourselves.

The next task is a Python task:

---
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: python-execute
  namespace: blog
spec:
  workspaces:
    - name: source
  results:
    - name: output
  params:
    - name: CONTEXT
      type: string
      description: The subdirectory within the workspace to execute the command in.
      default: ""
    - name: COMMAND
      type: string
      description: The arguments to pass to pip3 install
  steps:
    - name: execute-python
      image: quay.io/fedora/python-313
      script: |
        $(params.COMMAND)
      workingDir: $(workspaces.source.path)/$(params.CONTEXT)
      securityContext:
        runAsNonRoot: true
        runAsUser: 65532

We will use the Python 3.13 container to install Pelican as well as its associated dependencies.

The next one is going to be for running Node:

---
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: node-execute
  namespace: blog
spec:
  workspaces:
    - name: source
  results:
    - name: output
  params:
    - name: CONTEXT
      type: string
      description: The subdirectory within the workspace to execute the command in
      default: ""
    - name: COMMAND
      type: string
      description: The arguments to pass to node command
  steps:
    - name: execute-node
      image: registry.access.redhat.com/ubi10/nodejs-22-minimal:latest
      script: |
        $(params.COMMAND)
      workingDir: $(workspaces.source.path)/$(params.CONTEXT)
      env:
        - name: CLOUDFLARE_API_TOKEN
          valueFrom:
            secretKeyRef:
              name: cfat
              key: cloudflare_api_token
      securityContext:
        runAsNonRoot: true
        runAsUser: 65532

Similar to the Python task, it will run Node-specific commands; in this case, we will use it to install wrangler, a CLI tool that allows us to interact with Cloudflare.

This next task is going to be used to invoke OpenShift CLI commands:

---
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: oc-cli
  namespace: blog
spec:
  workspaces:
    - name: source
    - name: target
  results:
    - name: output
  params:
    - name: CONTEXT
      type: string
      description: The subdirectory within the workspace to execute the command in
      default: ""
    - name: COMMAND
      type: string
      description: The arguments to pass to the oc command
  steps:
    - name: update-hash
      image: quay.io/openshift/origin-cli:4.20
      script: |
        $(params.COMMAND)
      workingDir: $(workspaces.source.path)/$(params.CONTEXT)
      securityContext:
        runAsNonRoot: true
        runAsUser: 65532

This is going to be used for 1) running the deployment for our internal site and 2) updating the following configmap, which stores the commit hash:

oc create configmap git.revision --from-literal=hash=NONE

This is necessary because we are not able to use webhooks in our environment, so we will be resorting to polling. To keep the pipeline from re-running the same code, we will store the commit hash in the configmap when the deployment to our internal site is done. This commit hash will be queried by our poller, and if the poller finds that the branch it has on hand matches the commit hash in the configmap, it will simply quit rather than triggering the pipeline.

At this point, we need to get a notification when the pipeline deployment is complete. Since we have a Telegram account, we are going to invoke curl against the Telegram API so that our bot receives a deployment alert and forwards the message to us. The resulting task is this:

---
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: curl-execute
  namespace: blog
spec:
  workspaces:
    - name: source
  results:
    - name: output
  params:
    - name: CONTEXT
      type: string
      description: The subdirectory within the workspace to execute the command in
      default: ""
    - name: COMMAND
      type: string
      description: The arguments to pass to curl command
  steps:
    - name: execute-curl
      image: curlimages/curl:latest
      script: |
        $(params.COMMAND)
      workingDir: $(workspaces.source.path)/$(params.CONTEXT)
      env:
        - name: TELEGRAM_BOT_TOKEN
          valueFrom:
            secretKeyRef:
              name: telegram
              key: bot_token
        - name: TELEGRAM_CHAT_ID
          valueFrom:
            secretKeyRef:
              name: telegram
              key: chat_id
      securityContext:
        runAsNonRoot: true
        runAsUser: 65532
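
This task reads the bot token and chat ID from a secret named telegram, which we have not created yet. Assuming you already have a bot token and chat ID on hand (the values below are placeholders), it can be created the same way as the Cloudflare one:

oc create secret generic telegram --from-literal=bot_token=BOT_TOKEN --from-literal=chat_id=CHAT_ID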

Finally, we need to extend our pipeline with an approval gate - deployment to Cloudflare can't happen until we confirm that our internal site looks good, so we will block the deployment behind a manual approval. First, we fetch the manual approval task from OpenShift:

curl -LO https://github.com/openshift-pipelines/manual-approval-gate/releases/download/v0.7.0/release-openshift.yaml

Because the file expects the pipelines to be in openshift-pipelines, not tekton-pipelines (where we put our pipeline components), instead of applying the file directly, we substitute the namespace before we apply it:

sed  's/namespace: openshift-pipelines/namespace: tekton-pipelines/g' release-openshift.yaml | oc apply -f -

And because we need to manage it from the CLI (the upstream version does not have OpenShift console support), we install opc - in this case, using Homebrew, since we use a Mac:

brew tap openshift-pipelines/opc https://github.com/openshift-pipelines/opc
brew install opc
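
A quick version check confirms the binary landed on the PATH:

opc version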

With all the tasks done, we now create the pipeline.

Building the Pipeline

Here is the entire pipeline:

---
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: blog-deployment-pipeline
  namespace: blog
spec:
  params:
    - name: GIT_REPO
      type: string
      default: "git@github.com:rilindo/blog.monzell.com.git"
      description: URL of the Git repository containing the blog source code
  workspaces:
    - name: build-workplace
    - name: github-credential
    - name: blog-vol
  tasks:
    - name: clone-repository
      taskRef:
        name: git-clone
        kind: Task
      params:
        - name: url
          value: "$(params.GIT_REPO)"
        - name: revision
          value: "master"
      workspaces:
        - name: output
          workspace: build-workplace
        - name: ssh-directory
          workspace: github-credential
    - name: setup-env
      taskRef:
        name: python-execute
        kind: Task
      params:
        - name: CONTEXT
          value: ""
        - name: COMMAND
          value: "python3 -m venv blog"
      workspaces:
        - name: source
          workspace: build-workplace
      runAfter:
        - clone-repository
    - name: pelican-install
      taskRef:
        name: python-execute
        kind: Task
      params:
        - name: CONTEXT
          value: ""
        - name: COMMAND
          value: "source blog/bin/activate && pip3 install -r requirements.txt"
      workspaces:
        - name: source
          workspace: build-workplace
      runAfter:
        - setup-env
    - name: wrangler-install
      taskRef:
        name: node-execute
        kind: Task
      params:
        - name: CONTEXT
          value: ""
        - name: COMMAND
          value: "npm i -D wrangler@latest"
      workspaces:
        - name: source
          workspace: build-workplace
      runAfter:
        - setup-env
    - name: pelican-generate
      taskRef:
        name: python-execute
        kind: Task
      params:
        - name: CONTEXT
          value: ""
        - name: COMMAND
          value: "source blog/bin/activate && pelican content -o output -s pelicanconf.py"
      workspaces:
        - name: source
          workspace: build-workplace
      runAfter:
        - wrangler-install
        - pelican-install
    - name: deploy-site-to-okd
      taskRef:
        name: oc-cli
        kind: Task
      params:
        - name: CONTEXT
          value: ""
        - name: COMMAND
          value: cp -rv output ../target/ && oc apply -f deploy
      workspaces:
        - name: source
          workspace: build-workplace
        - name: target
          workspace: blog-vol
      runAfter:
        - pelican-generate
    - name: update-commit
      taskRef:
        name: oc-cli
        kind: Task
      params:
        - name: CONTEXT
          value: ""
        - name: COMMAND
          value: oc patch configmap git.revision -n blog -p '{"data":{"hash":"$(tasks.clone-repository.results.commit)"}}'
      workspaces:
        - name: source
          workspace: build-workplace
        - name: target
          workspace: blog-vol
      runAfter:
        - deploy-site-to-okd
    - name: new-private-deployment-complete-alert
      taskRef:
        name: curl-execute
        kind: Task
      params:
        - name: CONTEXT
          value: ""
        - name: COMMAND
          value: |
            curl -X POST \
              -H "Content-Type:multipart/form-data" \
              -F chat_id=$TELEGRAM_CHAT_ID \
              -F text="New deployment of rilindo-blog.apps.okd.monzell.com completed successfully." \
              https://api.telegram.org/bot$TELEGRAM_BOT_TOKEN/sendMessage
      workspaces:
        - name: source
          workspace: build-workplace
      runAfter:
        - update-commit
    - name: wait-for-approval
      taskRef:
        apiVersion: openshift-pipelines.org/v1alpha1
        kind: ApprovalTask
      params:
      - name: approvers
        value:
          - rilindofoster
          - group:okdadmins 
      - name: numberOfApprovalsRequired
        value: 1
      - name: description
        value: Approving deployment to Cloudflare Pages
      runAfter:
        - pelican-generate
    - name: deploy-site-to-cloudflare
      taskRef:
        name: node-execute
        kind: Task
      params:
        - name: CONTEXT
          value: ""
        - name: COMMAND
          value: "npx wrangler pages deploy output --project-name rilindofoster"
      workspaces:
        - name: source
          workspace: build-workplace
      runAfter:
        - wait-for-approval
    - name: new-public-deployment-complete-alert
      taskRef:
        name: curl-execute
        kind: Task
      params:
        - name: CONTEXT
          value: ""
        - name: COMMAND
          value: |
            curl -X POST \
              -H "Content-Type:multipart/form-data" \
              -F chat_id=$TELEGRAM_CHAT_ID \
              -F text="New deployment of rilindo.com completed successfully." \
              https://api.telegram.org/bot$TELEGRAM_BOT_TOKEN/sendMessage
      workspaces:
        - name: source
          workspace: build-workplace
      runAfter:
        - deploy-site-to-cloudflare
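
Before the triggers exist (we will get to those in a moment), the pipeline can be kicked off by hand with a PipelineRun that supplies the same workspaces the trigger template will later provide. A sketch - created with oc create -f rather than oc apply, since it uses generateName:

---
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  generateName: repo-build-manual-
  namespace: blog
spec:
  pipelineRef:
    name: blog-deployment-pipeline
  taskRunTemplate:
    serviceAccountName: default
  workspaces:
    - name: github-credential
      secret:
        secretName: github-credential
    - name: blog-vol
      persistentVolumeClaim:
        claimName: blog-vol
    - name: build-workplace
      persistentVolumeClaim:
        claimName: tekton-workspace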

This is what it looks like when it is created:

The pipeline deployment works like this:

1) The repository gets cloned from our repo, which contains all the Markdown used for the blog.
2) The Python environment is set up.
3) The pipeline splits: Pelican and the other Python modules are installed in one stage, while wrangler is installed in another.
4) Once both Pelican and wrangler are installed, the paths converge and we build the web site from our Markdown files.
5) The pipeline then splits again into a private and a public deployment.
6) The private deployment:

a) The static pages are copied over to the PVC, which in turn gets attached to the deployment:

---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: blog
  namespace: blog
  labels:
    app: blog
spec:
  replicas: 3
  selector:
    matchLabels:
      app: blog
  template:
    metadata:
      labels:
        app: blog
    spec:
      volumes:
        - name: blog-vol
          persistentVolumeClaim:
            claimName: blog-vol
      containers:
        - name: container
          image: quay.io/sclorg/httpd-24-micro-c9s
          ports:
            - containerPort: 8080
              protocol: TCP
          resources: {}
          volumeMounts:
            - name: blog-vol
              mountPath: /var/www/html
              subPath: output/
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
          livenessProbe:
            httpGet:
              path: /
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 30
            timeoutSeconds: 1
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 30
            timeoutSeconds: 1
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 3
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      securityContext: {}
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 50%

Then a service is created:

---
kind: Service
apiVersion: v1
metadata:
  name: blog
  namespace: blog
spec:
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
  internalTrafficPolicy: Cluster
  type: ClusterIP
  ipFamilyPolicy: SingleStack
  sessionAffinity: None
  selector:
    app: blog

And then a route to expose the site:

---
kind: Route
apiVersion: route.openshift.io/v1
metadata:
  name: rilindo
  namespace: blog
spec:
  host: rilindo-blog.apps.okd.monzell.com
  to:
    kind: Service
    name: blog
    weight: 100
  port:
    targetPort: 8080
  tls:
    termination: edge
    insecureEdgeTerminationPolicy: Redirect
  wildcardPolicy: None

b) Once the site is deployed, the commit hash is updated.
c) Finally, the pipeline sends the notification that the private deployment is complete.

7) Meanwhile, the other path is on hold at the approval stage. Should I be satisfied with the site, I run opc to look for the pending approval:

➜  opc approvaltask list -n blog                                     
NAME                                NumberOfApprovalsRequired   PendingApprovals   Rejected   STATUS
repo-build45ssp-wait-for-approval   1                           0                  1          Rejected
repo-build5xk7h-wait-for-approval   1                           0                  0          Approved
repo-build8j7jc-wait-for-approval   1                           0                  1          Rejected
repo-build8wqhh-wait-for-approval   1                           1                  0          Pending
repo-buildf29h6-wait-for-approval   1                           0                  0          Approved
repo-buildgx5n6-wait-for-approval   1                           0                  0          Approved
repo-buildhlqz4-wait-for-approval   1                           0                  0          Approved
repo-buildpjt2q-wait-for-approval   1                           0                  0          Approved
repo-buildtsl6s-wait-for-approval   1                           0                  1          Rejected
repo-buildz9kff-wait-for-approval   1                           0                  0          Approved

And then approve the deployment:

➜  opc approvaltask approve -n blog repo-build8wqhh-wait-for-approval 
ApprovalTask repo-build8wqhh-wait-for-approval is approved in blog namespace
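
(For runs that do not pass muster, there is a matching reject subcommand - the Rejected entries in the listing above were declined that way:)

opc approvaltask reject -n blog repo-build45ssp-wait-for-approval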

Now the other path continues and deploys the site to Cloudflare. Once that is done, another alert is sent over to Telegram and the pipeline is complete.

That was a bit involved. And this is one of the simpler pipelines, given the requirements.

RANT: I hate this trend of horizontal pipelines. I understand that they are more intuitive than vertical pipelines, but it means you scroll across the screen instead of up and down, which is annoying.

Setting up the Triggers

Finishing things up, we will set up triggers so that our pipeline gets invoked every time there is a change in the repo. First, we create a trigger template:

---
apiVersion: triggers.tekton.dev/v1beta1
kind: TriggerTemplate
metadata:
  name: repo-template
  namespace: blog
spec:
  params:
    - name: git-url
    - name: git-revision
  resourcetemplates:
    - apiVersion: tekton.dev/v1
      kind: PipelineRun
      metadata:
        generateName: repo-build
      spec:
        params:
          # The pipeline only declares GIT_REPO, so we map the incoming
          # git-url onto it. Template params use the tt. prefix.
          - name: GIT_REPO
            value: $(tt.params.git-url)
        pipelineRef:
          name: blog-deployment-pipeline
        taskRunTemplate:
          serviceAccountName: default
        workspaces:
          - name: github-credential
            secret:
              secretName: github-credential
          - name: blog-vol
            persistentVolumeClaim:
              claimName: blog-vol
          - name: build-workplace
            persistentVolumeClaim:
              claimName: tekton-workspace

Then we create a trigger binding:

---
apiVersion: triggers.tekton.dev/v1beta1
kind: TriggerBinding
metadata:
  name: repo-binding
  namespace: blog
spec:
  params:
    - name: git-url
      value: $(body.git-url)
    - name: git-revision
      value: $(body.git-revision)

We then tie the template and the binding together in a trigger:

apiVersion: triggers.tekton.dev/v1beta1
kind: Trigger
metadata:
  name: blog
spec:
  serviceAccountName: default
  interceptors:
    - ref:
        name: "github"
      params:
        - name: "eventTypes"
          value: ["push"]
  bindings:
    - ref: repo-binding
  template:
    ref: repo-template

And then we create an event listener:

---
apiVersion: triggers.tekton.dev/v1beta1
kind: EventListener
metadata:
  name: repo-listener
  namespace: blog
spec:
  namespaceSelector: {}
  resources: {}
  serviceAccountName: default
  triggers:
    - bindings:
        - kind: TriggerBinding
          ref: repo-binding
      name: repo-trigger
      template:
        ref: repo-template

The event listener will create a service that accepts requests to trigger the pipeline. (Note that the listener references the binding and template directly rather than going through the Trigger above; our poller sends a plain JSON POST without GitHub webhook headers, which the github interceptor would otherwise reject.) The service will require ingress, so we create a route:

kind: Route
apiVersion: route.openshift.io/v1
metadata:
  name: el-repo-listener
  namespace: blog
spec:
  host: el-repo-listener-blog.apps.okd.monzell.com
  to:
    kind: Service
    name: el-repo-listener
    weight: 100
  port:
    targetPort: http-listener
  tls:
    termination: edge
  wildcardPolicy: None

All of this requires updating the service account with new role permissions, as the service account will now need to perform all the work that I usually do, including deployments. So, without further ado:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: tekton-eventlistener-role
  namespace: blog
rules:
  - apiGroups: ["triggers.tekton.dev"]
    resources: ["triggerbindings", "triggertemplates", "eventlisteners", "triggers","interceptors"] 
    verbs: ["get", "list", "watch", "create"]

  - apiGroups: ["tekton.dev"]
    resources: ["pipelineruns", "taskruns"]
    verbs: ["create"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tekton-eventlistener-binding
  namespace: blog
subjects:
  - kind: ServiceAccount
    name: default
    namespace: blog
roleRef:
  kind: Role
  name: tekton-eventlistener-role
  apiGroup: rbac.authorization.k8s.io

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: tekton-triggers-interceptor-reader
rules:
  - apiGroups: ["triggers.tekton.dev"]
    resources: ["clusterinterceptors","clustertriggerbindings"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tekton-triggers-interceptor-binding
  namespace: blog
subjects:
  - kind: ServiceAccount
    name: default
    namespace: blog
roleRef:
  kind: ClusterRole
  name: tekton-triggers-interceptor-reader
  apiGroup: rbac.authorization.k8s.io
---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tekton-triggers-interceptor-clusterbinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: tekton-triggers-interceptor-reader
subjects:
  - kind: ServiceAccount
    name: default
    namespace: blog

---

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: tekton-update-configmap-role
  namespace: blog
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "list", "watch", "update", "patch"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tekton-update-configmap-rolebinding
  namespace: blog
subjects:
  - kind: ServiceAccount
    name: default
    namespace: blog
roleRef:
  kind: Role
  name: tekton-update-configmap-role
  apiGroup: rbac.authorization.k8s.io

---

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: tekton-deployrole
  namespace: blog
rules:
  - apiGroups: ["route.openshift.io",""]
    resources: ["routes","services","routes/custom-host"]
    verbs: ["create", "get", "list", "watch", "update", "patch"]
  - apiGroups: ["apps"]
    resources: ["deployments","pods"]
    verbs: ["create", "get", "list", "watch", "update", "patch"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tekton-deploy-rolebinding
  namespace: blog
subjects:
  - kind: ServiceAccount
    name: default
    namespace: blog
roleRef:
  kind: Role
  name: tekton-deployrole
  apiGroup: rbac.authorization.k8s.io
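
With the permissions in place, the listener can be smoke-tested by hand using the same payload the poller will send (substitute your own listener route):

curl -X POST -H "Content-Type: application/json" \
  -d '{"git-url":"git@github.com:rilindo/blog.monzell.com.git","git-revision":"master"}' \
  https://el-repo-listener-blog.apps.okd.monzell.com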

With all that, we finally add a cron job to run the polling:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: trigger-pipeline-cron
  namespace: blog
spec:
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: test
              image: "prontotools/alpine-git-curl:latest"
              imagePullPolicy: IfNotPresent
              volumeMounts:
                - name: ssh-keys
                  mountPath: /ssh_tmp
              command:
                - "/bin/sh"         
              args:
                - -c
                - |
                  set -eu
                  mkdir -p /.ssh
                  cp -R /ssh_tmp $HOME/.ssh
                  chmod 0700 $HOME/.ssh
                  chmod -R 0400 $HOME/.ssh/*
                  REMOTE_REVISION=$(git ls-remote --heads ${REPO_URL} | grep -E '(master|main)' | awk '{print $1}')
                  if [ "$LOCAL_REVISION" != "$REMOTE_REVISION" ]; then
                    echo "Revisions differ, triggering pipeline"
                    curl -X POST -H "Content-Type:application/json" -d "{\"git-url\":\"${REPO_URL}\",\"git-revision\":\"${REPO_BRANCH}\"}" "https://el-repo-listener-blog.apps.okd.monzell.com"
                  else
                    echo "Revisions are the same, no action taken"
                  fi
              env:
                - name: LOCAL_REVISION
                  valueFrom:
                    configMapKeyRef:
                      name: git.revision
                      key: hash
                - name: REPO_URL
                  value: "git@github.com:rilindo/blog.monzell.com.git"
                - name: REPO_BRANCH
                  value: master
          volumes:
            - name: ssh-keys
              secret:
                secretName: github-credential
          restartPolicy: Never
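
To exercise the poller without waiting for the next five-minute tick, a one-off job (here arbitrarily named poll-now) can be spawned from the CronJob:

oc create job --from=cronjob/trigger-pipeline-cron poll-now -n blog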

Ending Thoughts and Improvements

I am pretty satisfied with the pipeline, in that it met most of the things I wanted to do. There are lots of improvements that can be made, though:

1) I want to have it launch multiple deployments based on each branch. I think this can be done by combining environment variables and Kustomize.
2) I want to get away from polling, as it is a bit of a hack, but I am not sure how, since my internal cluster is not accessible from the internet. One option I am considering is mirroring the repo and triggering from an internal Git server instead; I may look into launching a private instance of GitLab (https://gitlab.com) or Forgejo.
3) I really want console access to my approvals. I don't mind the CLI, but pipelines are made for ClickOps.

Eventually, I may add comments back to my blog, so until then, feel free to reach out to me here if you have any thoughts or suggestions.
