In blog 3, about configuring Kafka sources and sinks using Knative and Kubernetes API objects, you might have muttered to yourself: "face full of YAML". Indeed, one of the most painful points with Kubernetes since its inception has been this "face full of YAML" problem. Once you start declaring the desired state of your application with Kubernetes objects, you end up with a lot, and I mean a lot, of YAML.
There is a terrific document about declarative application management in Kubernetes on GitHub, originally written by Brian Grant from Google. In it you can read about the many different ways to manage an app in Kubernetes, and the many different ways to solve, or at least tackle, the "face full of YAML" challenge. You will also find a link to an amazing spreadsheet which, last I checked, listed 127 tools for managing Kubernetes applications.
In this post I want to show you what we are working on at TriggerMesh. It is not a tool to manage Kubernetes applications, but an authoring tool for integrations between various applications, with event-driven mechanisms at its core. Think connecting Kafka sources to streams, defining transformations, and connecting to sinks. It does a couple of things which I think are great.
Before we dive into our representation of a Kafka flow, let's take a small detour through the "Enterprise Integration Patterns".
Enterprise integration is not new, but viewed through the lens of a Cloud-Native world it is ripe for some modernization, if not disruption. Enterprise integration patterns have been described at length, and the figure below shows the different patterns in each major component: source, channel, router, transformer, target.
Looking at a Kafka flow with a source and a sink/target as an integration, we can then think about representing this flow with the TriggerMesh Integration Language. Here is a sneak peek below; we are planning to release it at the end of the month.
At TriggerMesh we use Terraform, like a lot of people, and we do enjoy the HashiCorp Configuration Language (HCL). So we decided to write our TriggerMesh Integration Language (TIL) in HCL. Doing so simplifies the authoring of all the YAML that would otherwise be needed to write an integration.
With TIL this becomes:
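Something along these lines; this is a sketch ahead of the release, and the exact attribute names are my assumptions and may differ in the final language:

```hcl
# A Kafka target: events routed here are produced to the given topic.
# The block labels are the component type ("kafka") and a user-chosen name.
target kafka "my_topic" {
  topic             = "my-topic"
  bootstrap_servers = ["my-cluster-kafka-bootstrap.kafka:9092"]
}
```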
Note that we do not need to know what the apiVersion or the kind is. We just need to know that this is the target endpoint of our integration and that it is of type kafka.
Similarly, for a KafkaSource as described in this post, the full YAML to consume messages from Kafka would look something like:
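A representative Knative KafkaSource manifest (names, namespaces, and the sink are illustrative):

```yaml
apiVersion: sources.knative.dev/v1beta1
kind: KafkaSource
metadata:
  name: my-kafka-source
spec:
  consumerGroup: my-group
  bootstrapServers:
    - my-cluster-kafka-bootstrap.kafka:9092
  topics:
    - my-topic
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display
```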
And in TIL this becomes:
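Again as a sketch, with attribute names assumed from the corresponding Kubernetes object and subject to change before release:

```hcl
# A Kafka source: consumes from the listed topics and forwards each
# message as an event to the referenced target.
source kafka "my_topic" {
  bootstrap_servers = ["my-cluster-kafka-bootstrap.kafka:9092"]
  topics            = ["my-topic"]
  consumer_group    = "my-group"

  to = target.event_display
}
```

Notice that the `to` attribute references another block in the same file, the way Terraform resources reference each other, instead of the nested `sink.ref` object from the YAML above.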
Equipped with TIL, you will be able to describe a message flow declaratively and manage it in Kubernetes (assuming the required Kafka and TriggerMesh controllers are in place).
For example, a GitHub event source sending all its events to a Kafka stream, and a consumer of that Kafka stream targeting a serverless workload on Kubernetes, would be declared something like:
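The sketch below shows the shape of such a flow; the block and attribute names (in particular for the GitHub source and the container target) are my assumptions ahead of the release:

```hcl
# GitHub events flow into a Kafka topic...
source github "my_repo" {
  owner_and_repository = "myorg/myrepo"

  to = target.commits
}

target kafka "commits" {
  topic             = "commits"
  bootstrap_servers = ["my-cluster-kafka-bootstrap.kafka:9092"]
}

# ...and are consumed back out of that topic into a serverless workload.
source kafka "commits_stream" {
  bootstrap_servers = ["my-cluster-kafka-bootstrap.kafka:9092"]
  topics            = ["commits"]

  to = target.display
}

# A serverless (Knative) service that receives and displays the events.
target container "display" {
  image  = "gcr.io/knative-releases/knative.dev/eventing/cmd/event_display"
  public = true
}
```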
With this definition stored in a file, kafka.hcl for example, you can then generate the YAML with our CLI and apply it to your Kubernetes cluster.
First, we simplify writing enterprise integrations by adopting a cloud-native declaration. Second, we solve the "face full of YAML" problem in describing enterprise integrations by providing folks with a description language written in HCL.
We are almost done. Stay tuned for the release, and if you want to take it for a spin, just contact me.