Simplifying Kafka Connectors with an Integration Language

Blog 4 in our four-part series: Using TriggerMesh to Connect Apache Kafka and Kubernetes

In blog 3, on configuring Kafka sources and sinks with Knative and Kubernetes API objects, you might have told yourself “face full of YAML”. Indeed, the “face full of YAML” problem has been one of the most painful aspects of Kubernetes since its inception: once you start declaring the desired state of your application with Kubernetes objects, you end up with a lot of YAML.

There is a terrific document about declarative application management in Kubernetes on GitHub, originally written by Brian Grant from Google. In that document you can read about the many different ways to manage an application in Kubernetes, and the many different ways to solve, or at least tackle, the “face full of YAML” challenge. You will also find a link to an amazing spreadsheet which, last I checked, listed 127 tools for managing Kubernetes applications.

In this post I want to show you what we are working on at TriggerMesh. It is not a tool to manage Kubernetes applications, but an authoring tool for integrations between applications, with event-driven mechanisms at its core: think connecting Kafka sources to streams, defining transformations, and connecting to sinks. It does a couple of things that I think are great:

  • simplifies designing event-driven systems
  • generates complex YAML from a friendlier HCL-based syntax

Before we dive into our representation of a Kafka flow, let’s take a small detour through the “Enterprise Integration Patterns”.

Enterprise Integration Patterns

Enterprise integration is not new, but seen through the lens of a Cloud-Native world it is ripe for modernization, if not disruption. Enterprise integration patterns have been described at length, and the figure below shows the different patterns for each major component: source, channel, router, transformer, target.

What we set out to do at TriggerMesh is to create a description language to declaratively represent the design of an integration, abstracting the API objects being used to implement the patterns.
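To make that idea concrete, each integration pattern can be written as a top-level block in the language, with `to` references wiring the flow together. The sketch below is purely illustrative: the `transformer` block type and its attributes are hypothetical here, not final syntax.

```hcl
# Illustrative only: one block per integration pattern,
# wired together with "to" references.
source "kafka" "orders" {
  topics = ["orders"]
  to     = transformer.enrich   # hand events to the transformer
}

transformer "function" "enrich" {   # hypothetical transformer block
  to = target.store                 # forward the reshaped events
}

target "kafka" "store" {
  topic = "processed-orders"
}
```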


Looking at a Kafka flow with a source and a sink/target as an integration, we can then think about representing that flow with the TriggerMesh Integration Language. Here is a sneak peek below; we are planning to release it at the end of the month.

TriggerMesh Integration Language

At TriggerMesh we use Terraform, like a lot of people, and we do enjoy the HashiCorp Configuration Language (HCL). So we decided to base our TriggerMesh Integration Language (TIL) on HCL, which simplifies authoring all the YAML that would otherwise be needed to write an integration. For example, here is the YAML for a KafkaSink:

apiVersion: eventing.knative.dev/v1alpha1
kind: KafkaSink
metadata:
  name: my-kafka-topic
spec:
  auth:
    secret:
      ref:
        name: kafkahackathon
  bootstrapServers:
    - pkc-456q9.us-east4.gcp.confluent.cloud:9092
  topic: hackathon

With TIL this becomes:

target "kafka" "my_kafka_topic" {
  topic             = "hackathon"
  bootstrap_servers = ["pkc-419q3.us-east4.gcp.confluent.cloud:9092"]
  auth              = secret_name("kafkahackathon")
}

Note that we do not need to know what the apiVersion or the kind is. We just need to know that this is the target endpoint of our integration and that it is of type kafka.

Similarly, for a KafkaSource as described in this post, the full YAML to consume messages from Kafka would look something like:

apiVersion: sources.knative.dev/v1beta1
kind: KafkaSource
metadata:
  name: my-kafka
spec:
  bootstrapServers:
    - pkc-q.us-east4.gcp.confluent.cloud:9092
  net:
    sasl:
      enable: true
      password:
        secretKeyRef:
          key: password
          name: kafkahackathon
      type:
        secretKeyRef:
          key: sasl.mechanism
          name: kafkahackathon
      user:
        secretKeyRef:
          key: user
          name: kafkahackathon
  # …
  sink:
    ref:
      apiVersion: v1
      kind: Service
      name: display
  topics:
    - hackathon

And in TIL this becomes:

source "kafka" "my_kafka" {
  bootstrap_servers = ["pkc-q3.us-east4.gcp.confluent.cloud:9092"]
  topics            = ["hackathon"]
  sasl_auth         = secret_name("kafkahackathon")
  tls               = secret_name("kafkahackathon")

  to = target.sockeye
}
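The `to = target.sockeye` attribute points at a target block declared elsewhere in the same file. As a sketch, reusing the sockeye event-display image that appears later in this post, that block could look like:

```hcl
# Hypothetical companion block for the "to = target.sockeye" reference,
# exposing the sockeye event viewer as a public container target.
target "container" "sockeye" {
  image  = "docker.io/n3wscott/sockeye:v0.7.0"
  public = true
}
```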


GitHub Kafka Source and Microservice as a Kafka Sink

Equipped with TIL, you will be able to describe a message flow declaratively and manage it in Kubernetes (assuming the required Kafka and TriggerMesh controllers are in place).

For example, a GitHub event source sending all of its events to a Kafka stream, and a consumer of that stream targeting a serverless workload on Kubernetes, would be declared something like this:

source "github" "git_source" {
  owner_and_repository = "sebgoa/transform"
  event_types          = ["push", "issues"]
  tokens               = secret_name("github-secret")

  to = target.my_kafka_topic
}

target "kafka" "my_kafka_topic" {
  topic             = "hackathon"
  bootstrap_servers = ["pkc-4q3.us-east4.gcp.confluent.cloud:9092"]
  auth              = secret_name("kafkahackathon")
}

source "kafka" "my_kafka" {
  bootstrap_servers = ["pkc-4q3.us-east4.gcp.confluent.cloud:9092"]
  topics            = ["hackathon"]
  sasl_auth         = secret_name("kafkahackathon")
  tls               = secret_name("kafkahackathon")

  to = target.mymicroservice
}

target "container" "mymicroservice" {
  image  = "docker.io/n3wscott/sockeye:v0.7.0"
  public = true
}


With this definition stored in a file, for example kafka.hcl, you can then generate the YAML with our CLI and apply it to your Kubernetes cluster:


til generate kafka.hcl | kubectl apply -f-
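If you would rather keep the generated manifests under version control, for instance in a GitOps pipeline, the same command can be split into a generate step and an apply step. A sketch, assuming the til CLI shown above is installed:

```shell
# Generate the Kubernetes manifests from the TIL description
til generate kafka.hcl > bridge.yaml

# Review or commit bridge.yaml, then apply it to the cluster
kubectl apply -f bridge.yaml
```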

Conclusions

First, we simplify writing enterprise integrations by adopting a cloud-native declaration. Second, we solve the “face full of YAML” problem in describing enterprise integrations by providing folks with a description language written in HCL which:

  • defines known integration patterns as top-level components (e.g. source, target, transformation, and more)
  • abstracts Kubernetes objects (native and custom)
  • can be used in your GitOps pipeline
  • simplifies Kafka connector configuration

We are almost done; stay tuned for the release, and if you want to take it for a spin, just contact me.


