Announcing TriggerMesh 1.24

Jonathan Michaux

Mar 1, 2023

We’re happy to announce that TriggerMesh 1.24 is out. This release includes new connectors, requested features, bug fixes, and some powerful new tmctl commands. Be sure to check out the detailed release notes on the respective repos, namely triggermesh and tmctl. Let’s dive in!

New MongoDB target

MongoDB is now available as an event target for TriggerMesh. MongoDB is a flexible and scalable NoSQL database that uses JSON-like documents to store and manage data. With the new TriggerMesh target, you can append events to a MongoDB collection, update a specific key, or query existing documents. Check out the documentation or take it for a spin right now with tmctl:
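For instance, a minimal local session might look like the following (the broker name and flag names here are illustrative; the flags are assumed to mirror the MongoDBTarget spec attributes such as connectionString, database, and collection, so check the docs for the exact set):

```sh
# Create a local broker, then attach a MongoDB target to it
tmctl create broker demo
tmctl create target mongodb \
  --connectionString "mongodb+srv://user:password@cluster0.example.mongodb.net" \
  --database demo \
  --collection events
```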

You can use a managed service like MongoDB Atlas or the self-hosted community edition.

New Solace source and target connectors

TriggerMesh now supports Solace as both an event source and target. You can now start ingesting events from a Solace queue with a single tmctl command:
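A sketch of what that looks like (the flag names are assumed to follow the SolaceSource spec attributes, namely the broker URL and queue name; verify against the docs):

```sh
# Ingest events from a Solace queue into the local broker
tmctl create source solace \
  --url tcp://solace.example.com:55555 \
  --queueName demo-queue
```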

And similarly, deliver events to a Solace queue with the matching target component: 
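For example (flag names here mirror the SolaceTarget spec attributes and are assumptions; see the docs for the exact flags):

```sh
# Deliver events from the local broker to a Solace queue
tmctl create target solace \
  --url tcp://solace.example.com:55555 \
  --queueName demo-queue
```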

Take a look at the source and target docs for more details as well as example Kubernetes manifests. 

Set a unique service account per GCP event source

With this update, each GCP source that uses the new service account key attribute gets its own Kubernetes Service Account (KSA), annotated with the provided Google Service Account (GSA) name. Previously, a single Service Account was shared between all objects of the same source kind, which was limiting for certain setups.

We took the opportunity to move all authentication-related attributes for GCP sources into the spec's auth object. For backwards compatibility, we're keeping both the old spec.serviceAccountKey and the new spec.auth.serviceAccountKey available in the object spec, but please be aware that spec.serviceAccountKey will be removed in a future release.

Below is a snapshot of what the specs look like before and after the update. 

Before
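A sketch of the old shape, using a Pub/Sub source as an example (object names and secret references are placeholders):

```yaml
apiVersion: sources.triggermesh.io/v1alpha1
kind: GoogleCloudPubSubSource
metadata:
  name: my-source
spec:
  serviceAccountKey:
    valueFromSecret:
      name: gcloud-key
      key: key.json
```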

After
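A sketch of the new shape, with the key nested under the auth object (object names and secret references are placeholders):

```yaml
apiVersion: sources.triggermesh.io/v1alpha1
kind: GoogleCloudPubSubSource
metadata:
  name: my-source
spec:
  auth:
    serviceAccountKey:
      valueFromSecret:
        name: gcloud-key
        key: key.json
```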

Run on DigitalOcean App Platform

TriggerMesh is introducing support for DigitalOcean App Platform as a deployment environment. This lets developers build their event pipes locally, relying solely on a local Docker environment, and then deploy them to DigitalOcean App Platform. App Platform is a fully managed solution to rapidly build, deploy, manage, and scale containerized applications.

Practically speaking, you can now create your event flow locally with tmctl and then export a DigitalOcean App Spec that can be directly deployed on DigitalOcean App Platform. The App Spec contains definitions for DigitalOcean services (snippet shown below) and workers. 
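To give a feel for the shape of the result, here is a hypothetical excerpt of a generated App Spec (the name, image coordinates, and port are placeholders, not the actual generated values):

```yaml
name: triggermesh-demo
services:
  - name: demo-broker
    image:
      registry_type: DOCKER_HUB
      registry: example
      repository: triggermesh-broker
      tag: latest
    http_port: 8080
```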

Run anywhere with Docker Compose

Similar to what we did with DigitalOcean, we’re also providing an export to Docker Compose from tmctl. For users who want to run simple integrations and don’t need an advanced runtime platform, this is an easy way to get going quickly. The generated Docker Compose file includes a service for each TriggerMesh component, each running as an individual container. The example below shows a subset of what the new option produces, including a webhook source service.
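A hypothetical excerpt to illustrate the shape (the image name and environment variables are placeholders, not the actual generated values):

```yaml
services:
  demo-webhooksource:
    # illustrative placeholders; tmctl generates the real image and env
    image: gcr.io/triggermesh/webhooksource-adapter
    environment:
      K_SINK: http://demo-broker:8080
    ports:
      - "8080:8080"
```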

What’s new in tmctl

A ton of new capabilities made it into tmctl with 1.24 (tmctl 1.2.0). 

Save and share tmctl projects with the new import command

You can now import manifests that you previously exported with dump back into tmctl, which means you can save, version, and even share your tmctl projects. This Gist contains a TriggerMesh YAML manifest that was created by building a local event flow with tmctl and saving it as YAML with tmctl dump.

Try importing it yourself:
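Assuming you saved the Gist's manifest locally as manifest.yaml, the invocation is a single command (the argument form is an assumption; check tmctl import --help):

```sh
# Load a previously dumped manifest into the local tmctl project
tmctl import manifest.yaml
```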

To see exactly what you imported, don’t forget to run tmctl describe.

You can create your own yaml for versioning or sharing like this: 
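For example, by redirecting the output of dump to a file:

```sh
# Write the current local event flow out as a YAML manifest
tmctl dump > manifest.yaml
```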

Simpler access to logs with the new tmctl logs command

Easy access to logs while developing event-driven applications is essential to debug and understand what is happening at different stages. The new tmctl logs command gathers logs from all the components and the broker running locally. You can use the --follow or -f option to keep the command running and watch for new logs in real time, or run it without the option to get a look at the full history of available logs.
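Both modes described above look like this in practice:

```sh
# Stream logs from the broker and all components in real time
tmctl logs --follow

# Or print the full history of available logs and exit
tmctl logs
```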

Prefix and suffix wildcard trigger filters in tmctl 

Events sometimes carry hierarchical type information that includes things such as categories or geographies. For example, in a recent article we used event types like eu-fashion-v1, eu-books-v1, and so forth. Another example is AWS S3 source event types, which contain hierarchical information about the nature of the event: com.amazon.s3.objectcreated, com.amazon.s3.objectremoved, com.amazon.s3.objectrestore, and so forth.

It is now possible to route events based on substrings from the event type directly from tmctl thanks to prefix and suffix wildcard triggers.
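A sketch of what this enables, reusing the event types above (the exact wildcard syntax for --eventTypes is an assumption to verify in the docs):

```sh
# Route all S3 events, whatever the operation, to one target
tmctl create trigger --eventTypes 'com.amazon.s3.*' --target s3-handler

# Route all "eu-" events to a region-specific target
tmctl create trigger --eventTypes 'eu-*' --target eu-handler
```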

If you tmctl dump these configurations, you’ll see that we’re using the filter specification in the Triggers under the hood:
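An excerpt of what such a generated Trigger might contain (only the filter portion is shown; check the exact field layout against the Trigger CRD):

```yaml
spec:
  filters:
    - prefix:
        type: com.amazon.s3
```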

We’ve also added support for specifying advanced JSON trigger filters in tmctl which directly match the syntax that you can find on the Trigger CRD’s filter attribute. As an example, consider the following two trigger definitions which are semantically equivalent:
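As a rough illustration of the idea, a shorthand event-type trigger and its raw-filter counterpart might look like this (the raw-filter flag name and JSON shape are assumptions; refer to the tmctl docs and the Trigger CRD for the exact syntax):

```sh
# Shorthand: match an exact event type
tmctl create trigger --eventTypes demo.event --target my-target

# A hypothetical equivalent using a raw JSON filter expression
tmctl create trigger --filters '{"exact":{"type":"demo.event"}}' --target my-target
```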
