Access local event sources with tmctl and Docker Desktop
You may have experienced first-hand the difficulty of getting containerised code to communicate with services running on your localhost. It often happens during development, when you're experimenting with new technologies that need to talk to each other but some are running in containers and others are not. The TriggerMesh command line interface tmctl runs on Docker, so accessing event sources like HTTP services or Kafka running on your localhost requires some specific configuration. Don't worry, we'll break down how this can be done when using Docker Desktop. Let's get started!
When localhost plays hard to HTTP GET
TriggerMesh is an open-source Amazon EventBridge alternative that makes it easy to capture events from many sources (from any cloud or on-prem), centralise them into a broker, transform and filter them, and reliably deliver them to numerous destinations using pub/sub-style communication. tmctl is the new command line interface that provides a local development experience, letting you get started with TriggerMesh on any computer that has Docker. But what if you're running non-containerised services on your machine that produce events you'd like to capture with TriggerMesh? By default, asking TriggerMesh to fetch events from localhost won't work, because TriggerMesh components deployed with tmctl are containerised and will try to access the container's localhost.
Below is an example Docker Desktop dashboard that shows the TriggerMesh components running after having completed all the steps in this tutorial: a broker named foo, an HTTP poller source, a Kafka source, and a wiretap which corresponds to the tmctl watch command:
To illustrate the problem at hand, if you configure the foo-httppollersource component to GET from localhost:8000, the requests will fail.
Docker Desktop provides some documentation on how to address this, namely by using the special host.docker.internal host name. But the devil is in the details, particularly when trying to achieve this with Kafka.
Let's take a look at the solution for both HTTP Poller source and Kafka source.
Capturing events from a local HTTP service
The TriggerMesh HTTP Poller source lets you turn any HTTP service into an event source. The component polls the service at regular intervals and turns the response data into an event that is pushed into TriggerMesh for further transformation, filtering and routing.
You'll first need to have a local HTTP service running. I'm using this simple command to create one with Python:
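Python's built-in HTTP server is enough for this; it serves the current directory over HTTP on the port you give it (8000 here, to match the endpoint we'll poll below):

```shell
# Serve the current directory on http://localhost:8000
python3 -m http.server 8000
```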
After installing tmctl and Docker Desktop, we can now create a TriggerMesh broker and create an HTTP Poller source that will invoke the local HTTP service. We'll use host.docker.internal as a hostname for the endpoint parameter, instead of localhost or 127.0.0.1.
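The commands look roughly like this; the broker name foo matches the dashboard shown earlier, and the flag names mirror the HTTP Poller source's parameters (check `tmctl create source httppoller --help` for the exact spelling in your version):

```shell
# Create a local broker named "foo"
tmctl create broker foo

# Create an HTTP Poller source that polls the local service every 5 seconds.
# Note host.docker.internal instead of localhost, so the containerised
# source can reach the service running on the host machine.
tmctl create source httppoller \
  --endpoint http://host.docker.internal:8000 \
  --method GET \
  --interval 5s \
  --eventType myeventtype
```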
Notice that we've set the poller to five-second intervals, and we're telling it to generate events of type myeventtype.
If you open a watcher in another terminal with tmctl watch, you should now see events showing up in the broker every five seconds:
You can apply this same principle to many other situations in which you need to access a local HTTP service from a Docker container. Now let's look at the slightly more involved Kafka example.
Capturing events from a local Kafka broker
Apache Kafka (and its many alternative or competing distributions and variants) is often used in combination with TriggerMesh, either as an initial event source, final destination, or as part of a multi-legged event flow that spans many topics. If you want to go down the path of running Kafka locally without containerising it, you might hit some issues trying to connect to it using the TriggerMesh Kafka source or Kafka target.
To run a local Kafka broker, we followed the Apache Kafka quickstart and its KRaft variant which removes the need for Zookeeper.
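Condensed, the KRaft variant of the quickstart boils down to the following, run from the extracted Kafka distribution directory (paths may differ slightly between Kafka versions):

```shell
# Generate a cluster ID and format the storage directory (KRaft mode, no Zookeeper)
KAFKA_CLUSTER_ID="$(bin/kafka-storage.sh random-uuid)"
bin/kafka-storage.sh format -t "$KAFKA_CLUSTER_ID" -c config/kraft/server.properties

# Start the Kafka broker
bin/kafka-server-start.sh config/kraft/server.properties
```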
By default, the Kafka broker is accessible at localhost:9092. But when creating your Kafka source or target, you'll need to include both the localhost address and the special host.docker.internal address in the Kafka bootstrap servers parameter.
Below is an example command to create a Kafka source that can read from a local Kafka broker:
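A sketch of that command follows. The flag names mirror the Kafka source's parameters and the topic, group ID and credentials are placeholders for this example; run `tmctl create source kafka --help` to confirm the exact flags in your version:

```shell
# Create a Kafka source; note both bootstrap server addresses,
# and SASL PLAIN credentials matching the local broker's configuration
tmctl create source kafka \
  --topic mytopic \
  --bootstrapServers host.docker.internal:9092,localhost:9092 \
  --groupID mygroup \
  --auth.saslEnable true \
  --auth.securityMechanism PLAIN \
  --auth.username admin \
  --auth.password admin-secret
```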
As you can see, I've configured SASL PLAIN authentication on my Kafka broker and am passing the corresponding credentials to the Kafka source. The Kafka source currently requires authentication, though it will soon become optional.