GitOps event streaming with ArgoCD, Redpanda and TriggerMesh

Jonathan Michaux

May 4, 2023

We recently published a post on the Redpanda blog called Kubernetes-native connectivity for Redpanda with TriggerMesh. It demonstrates how easily TriggerMesh can get data into and out of Redpanda when running on Kubernetes. In some ways, you can think of it as a Kubernetes-native Kafka Connect alternative.

I wanted to keep exploring the possibilities of Redpanda and TriggerMesh on Kubernetes, and thought it would be valuable to demonstrate a GitOps-style workflow for deploying event-driven applications to Kubernetes. We’ve heard and seen a lot from both users and the open-source community about ArgoCD recently, so let’s take it for a spin here. We’ll use GitOps to deploy a TriggerMesh event flow that routes events from one Redpanda topic to another. Then we’ll show that changes to the TriggerMesh manifest on GitHub are automatically deployed to Kubernetes by Argo. 

Below is a sneak peek of the great dashboards that ArgoCD gives you, with no extra effort, once your app is deployed to K8s 😍:

If you haven't already done so, please join our community Slack so that you can reach out with any Argo-related questions or feedback as you work through this post 🙃.

Start Redpanda on Minikube

I’m following the guide provided by Redpanda to install Redpanda on Minikube, and the first place I customize things is when the time comes to create a namespace for the cluster:

kubectl create ns panda-pipes

I’m then creating a single-node cluster as instructed:

kubectl apply -n panda-pipes \
              -f https://raw.githubusercontent.com/redpanda-data/redpanda/dev/src/go/k8s/config/samples/one_node_cluster.yaml

And making sure that the cluster was correctly created:

kubectl exec -it -n panda-pipes one-node-cluster-0 -- rpk cluster metadata --brokers='localhost:9092'

Our goal is to pipe events from one topic to another using TriggerMesh, so I’ll go ahead and create the source and target topics using Redpanda’s rpk CLI: 

kubectl exec -it -n panda-pipes one-node-cluster-0 -- rpk topic create source-topic --brokers='localhost:9092'

kubectl exec -it -n panda-pipes one-node-cluster-0 -- rpk topic create target-topic --brokers='localhost:9092'
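To double-check, you can list the topics that now exist on the cluster:

kubectl exec -it -n panda-pipes one-node-cluster-0 -- rpk topic list --brokers='localhost:9092'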

Install TriggerMesh

Head to the documentation to install TriggerMesh on Minikube using either Helm or YAML (a sketch of the Helm route follows the pod listing below). When properly installed, you should see three pods in the triggermesh namespace:

% kubectl get po -n triggermesh 
NAME                                            READY   STATUS    RESTARTS   AGE
triggermesh-controller-854fd7b677-td728         1/1     Running   0          3h45m
triggermesh-triggermesh-core-75779dd5d4-k8mct   1/1     Running   0          3h45m
triggermesh-webhook-5fc6874b97-z8mm6            1/1     Running   0          3h45m
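For reference, the Helm route looks roughly like the following. Treat the chart repository URL and chart name as assumptions based on my recollection of the TriggerMesh docs, and verify them there before running:

# Chart repo URL and chart name are assumptions; check the TriggerMesh docs
helm repo add triggermesh https://storage.googleapis.com/triggermesh-charts
helm repo update
helm install triggermesh triggermesh/triggermesh --namespace triggermesh --create-namespace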

Create the ArgoCD app

I’m installing ArgoCD as per the official documentation.
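For reference, the install from the official docs boils down to the following, plus a port-forward to reach the GUI locally and a command to retrieve the initial admin password:

kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

# Expose the Argo CD API server/GUI on https://localhost:8080
kubectl port-forward svc/argocd-server -n argocd 8080:443

# Retrieve the initial password for the admin user
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d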

Now we need to create a Kubernetes manifest for our TriggerMesh objects. We’ll need:

  • a Kafka source that reads from the source Redpanda topic
  • a TriggerMesh broker to receive events from the source
  • a Kafka target that delivers events to the target Redpanda topic
  • a Trigger that will subscribe the Kafka target to events from the Broker

The following TriggerMesh manifest does just that; each object is its own custom resource:

apiVersion: eventing.triggermesh.io/v1alpha1
kind: RedisBroker
metadata:
  labels:
    triggermesh.io/context: triggermesh
  name: triggermesh

---

apiVersion: sources.triggermesh.io/v1alpha1
kind: KafkaSource
metadata:
  name: kafka-source
spec:
  groupID: mygroup
  bootstrapServers:
    - one-node-cluster-0.one-node-cluster.panda-pipes.svc.cluster.local.:9092
  topic: source-topic
  sink:
    ref:
      apiVersion: eventing.triggermesh.io/v1alpha1
      kind: RedisBroker
      name: triggermesh

---

apiVersion: targets.triggermesh.io/v1alpha1
kind: KafkaTarget
metadata:
  name: kafka-target
spec:
  bootstrapServers:
  - one-node-cluster-0.one-node-cluster.panda-pipes.svc.cluster.local.:9092
  topic: target-topic

---

apiVersion: eventing.triggermesh.io/v1alpha1
kind: Trigger
metadata:
  labels:
    triggermesh.io/context: triggermesh
  name: broker-to-kafka
spec:
  broker:
    group: eventing.triggermesh.io
    kind: RedisBroker
    name: triggermesh
  target:
    ref:
      apiVersion: targets.triggermesh.io/v1alpha1
      kind: KafkaTarget
      name: kafka-target

Let’s push this manifest to a Git repository that we’ll use as the source of our GitOps workflow with ArgoCD. I’ll put the manifests in a subfolder of the repo called triggermesh.
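Assuming the manifest is saved under the triggermesh folder (the file and branch names below are arbitrary), this is just a regular commit and push:

git add triggermesh/flow.yaml
git commit -m "Add TriggerMesh flow: source-topic -> broker -> target-topic"
git push origin main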

Now we can open the Argo GUI (as explained in the installation guide linked earlier) and create a new Argo application:

We’ll point it to the Git repo and specify triggermesh as the path (i.e. the subfolder).
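If you’d rather skip the GUI, the same application can be declared as an Argo CD Application resource. This is a minimal sketch: the application name is arbitrary, the repoURL is a placeholder for your own repository, and the automated sync policy tells Argo to apply changes from Git without manual syncs:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: panda-pipes-flow        # arbitrary name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/your-org/your-repo.git   # placeholder
    targetRevision: HEAD
    path: triggermesh           # the subfolder containing the manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated:
      prune: true               # delete cluster objects removed from Git
      selfHeal: true            # revert manual drift back to the Git state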

After creating the app, you’ll see that Argo will sync the cluster to the repo until all the K8s objects have been created, at which point the app’s status should be Healthy and Synced, which means your TriggerMesh event flow is now live! 🚀
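You can also watch the sync status from the command line; the application name is whatever you chose when creating the app:

kubectl get applications -n argocd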

Test the event flow

Let’s check to see that TriggerMesh reads events from source-topic and routes them to target-topic. 

In a new terminal, start a Kafka consumer that listens to the target-topic:

kubectl exec -it -n panda-pipes one-node-cluster-0 -- rpk topic consume target-topic --brokers='localhost:9092'

In another, start a producer and write a message into the source-topic:

kubectl exec -it -n panda-pipes one-node-cluster-0 -- rpk topic produce source-topic --brokers='localhost:9092'

Enter some valid JSON like {"hello":"world"}.

This should get routed by TriggerMesh to the target-topic, which you should see in the output of the consumer terminal, looking something like this:

{
  "topic": "target-topic",
  "key": "2a3cf405-50e7-464f-aba5-7e9b09d1c1fe",
  "value": "{\"specversion\":\"1.0\",
             \"id\":\"2a3cf405-50e7-464f-aba5-7e9b09d1c1fe\",
             \"source\":\"source-topic\",
             \"type\":\"io.triggermesh.kafka.event\",
             \"subject\":\"kafka/event\",
             \"datacontenttype\":\"application/json\",
             \"time\":\"2023-05-03T09:58:40.420450425Z\",
             \"data\":{
                 \"hello\":\"world\"
             },
             \"triggermeshbackendid\":\"1683107920422-0\"
  }",
  "timestamp": 1683107922101,
  "partition": 0,
  "offset": 0
}

What you’re seeing here is metadata about the Kafka message, shown by the Redpanda rpk CLI, along with the value of the message, which contains the entire CloudEvent (including metadata attributes like id, source, type, etc.). Note that you can ask the Kafka target to omit the CloudEvents metadata in the target-topic by using the discardCloudEventContext parameter.
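As a sketch, assuming the same KafkaTarget as above, that would look like this (the parameter name comes from the TriggerMesh docs; its exact placement in the spec is an assumption worth checking there):

apiVersion: targets.triggermesh.io/v1alpha1
kind: KafkaTarget
metadata:
  name: kafka-target
spec:
  bootstrapServers:
  - one-node-cluster-0.one-node-cluster.panda-pipes.svc.cluster.local.:9092
  topic: target-topic
  discardCloudEventContext: true   # write only the event payload to the topic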

Make some changes to the TriggerMesh manifests in Git

Let’s add a simple JSON transformation to our TriggerMesh manifest that adds a new JSON attribute to the payload. We’ll also explicitly set the type of the event produced by the transformation to transformed.event, so that we can route only the transformed events to the Kafka target by adding a filter to the existing Trigger. Finally, we need a new Trigger that routes the original Kafka events to the transformation. The resulting event flow is shown in the diagram below, followed by its full manifest.


apiVersion: eventing.triggermesh.io/v1alpha1
kind: RedisBroker
metadata:
  labels:
    triggermesh.io/context: triggermesh
  name: triggermesh

---

apiVersion: sources.triggermesh.io/v1alpha1
kind: KafkaSource
metadata:
  name: kafka-source
spec:
  groupID: mygroup
  bootstrapServers:
    - one-node-cluster-0.one-node-cluster.panda-pipes.svc.cluster.local.:9092
  topic: source-topic
  sink:
    ref:
      apiVersion: eventing.triggermesh.io/v1alpha1
      kind: RedisBroker
      name: triggermesh

---

apiVersion: targets.triggermesh.io/v1alpha1
kind: KafkaTarget
metadata:
  name: kafka-target
spec:
  bootstrapServers:
  - one-node-cluster-0.one-node-cluster.panda-pipes.svc.cluster.local.:9092
  topic: target-topic

---

apiVersion: eventing.triggermesh.io/v1alpha1
kind: Trigger
metadata:
  labels:
    triggermesh.io/context: triggermesh
  name: broker-to-kafka
spec:
  filters:
  - exact:
      type: transformed.event
  broker:
    group: eventing.triggermesh.io
    kind: RedisBroker
    name: triggermesh
  target:
    ref:
      apiVersion: targets.triggermesh.io/v1alpha1
      kind: KafkaTarget
      name: kafka-target

---

apiVersion: flow.triggermesh.io/v1alpha1
kind: Transformation
metadata:
  name: triggermesh-transformation
spec:
  context:
  - operation: add
    paths:
    - key: type
      value: transformed.event
  data:
  - operation: add
    paths:
    - key: new
      value: message

---

apiVersion: eventing.triggermesh.io/v1alpha1
kind: Trigger
metadata:
  labels:
    triggermesh.io/context: triggermesh
  name: broker-to-transformation
spec:
  filters:
  - exact:
      type: io.triggermesh.kafka.event
  broker:
    group: eventing.triggermesh.io
    kind: RedisBroker
    name: triggermesh
  target:
    ref:
      apiVersion: flow.triggermesh.io/v1alpha1
      kind: Transformation
      name: triggermesh-transformation

Commit this updated manifest to Git, and you’ll see ArgoCD automatically deploy the new version to the cluster. You can check on the flow’s pods in the default namespace like so:

% kubectl get po
NAME                                        READY   STATUS    RESTARTS   AGE
kafkasource-kafka-source-6fcbd54695-frbln   1/1     Running   0          15m
knative-operator-66f5b45fcd-q695c           1/1     Running   0          4h4m
operator-webhook-9f5487b8f-t26bd            1/1     Running   0          4h4m
triggermesh-rb-broker-56766d4648-9jl7q      1/1     Running   0          15m
triggermesh-rb-redis-6d5985d478-4t7t8       1/1     Running   0          15m

Don’t worry if you sometimes don’t see the Kafka target pod: it scales to zero when unused, thanks to Knative Serving.
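Because the target runs as a Knative Service, you can confirm it is still deployed (just scaled to zero) even when no pod is running; the exact service name in the output may differ:

kubectl get ksvc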

Now if you send another message to the source-topic, you’ll see a message on the target-topic with the added key:

{
  "topic": "target-topic",
  "key": "d5398332-7754-4bea-a48b-004ccfd27b8a",
  "value": "{\"specversion\":\"1.0\",
             \"id\":\"d5398332-7754-4bea-a48b-004ccfd27b8a\",
             \"source\":\"source-topic\",
             \"type\":\"transformed.event\",
             \"subject\":\"kafka/event\",
             \"datacontenttype\":\"application/json\",
             \"time\":\"2023-05-03T13:04:18.324135802Z\",
             \"data\":{
                 \"hello\":\"world\",
                 \"new\":\"message\"
             },
             \"triggermeshbackendid\":\"1683119058343-0\"
  }",
  "timestamp": 1683119058351,
  "partition": 0,
  "offset": 1
}

And don’t forget to click on the application in the Argo GUI to browse through the built-in dashboard (screenshot at the start of the post). 

Conclusion

GitOps is an approach for managing the configuration of your infrastructure and applications from source control. It is particularly well-suited to Kubernetes, and ArgoCD is very easy to get up and running. In a few steps, we’ve got TriggerMesh deployed via GitOps and routing events across Redpanda topics. Now to make updates to the event flow, we simply need to push to Git. And of course this comes with all the benefits of version control: I can easily revert, branch, fork, and so on!
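Rolling back a bad change, for instance, is just another commit for Argo to sync (assuming main is the branch Argo tracks):

git revert HEAD
git push origin main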

If you've gotten this far, then you'll love joining the discussion on our community Slack. Pop in and say hi to the team; we all love hearing directly from users.
