Using Kong API Gateway With An Event Driven System to Modernize Legacy Integrations

Let’s talk a bit about API gateways and event-based integration. Amazon API Gateway has been a pillar of serverless applications on AWS: it lets developers manage API endpoints backed by Lambda functions or other services. Building REST APIs with serverless functions has truly empowered developers to deliver products faster in the Cloud. For enterprises with significant on-premises systems there is no AWS API Gateway, but Kong Gateway lets you do similar things. In this post, we go one step further and show how you can use Kong Gateway to expose a REST API in front of an event-driven integration. In other words, front your asynchronous event flow with an API.

Why is this important? Because enterprises everywhere are racing to modernize their applications, move away from a monolithic mindset, and embrace Cloud services and open source. That means adopting open-source technologies to leverage the investments of multiple vendors and a large community of developers, using Cloud services and as many managed offerings as possible, and speeding up the software lifecycle so that new products reach customers faster.

This is particularly relevant for architects and developers who need to bridge legacy systems into a cloud-native architecture and a DevOps mindset. Here is an example we are going to dive into:

Think about old systems of record running on mainframes and interacting via IBM MQ. How do you expose a REST API in front of them so that you can extend the lifetime of those systems and keep them relevant in the Cloud era?

This is also very useful for enterprises moving away from systems like MuleSoft in favor of an open-source system that can leverage event-driven infrastructure, whether it is Kafka-based or running on Kubernetes.

So how would you modernize an old integration with IBM MQ, for example? How would you create a REST API for it and how would you run it in Kubernetes?

The figure below shows you how. We are going to:

  • Use Kong to expose a REST endpoint that will generate a CloudEvent
  • Run TriggerMesh in a Kubernetes cluster with Knative
  • Use the TriggerMesh declarative API for transformation and connection with MQ

First, a REST Endpoint

To expose a REST API in front of such a legacy system you can use Kong instead of a costly MuleSoft Anypoint Platform.

Kong gives you the “REST” mechanics, but to plug it into TriggerMesh and generate a CloudEvent we developed a Kong plugin. This plugin transforms a REST request into a CloudEvent. The CloudEvents specification is the lingua franca of Knative (now in the CNCF) and TriggerMesh, and lets you benefit from features such as auto-scaling and event routing. The Kong configuration looks like this:

services:
- name: dispatcher
  url: http://synchronizer-dispatcher.default.svc.cluster.local
  routes:
  - name: bar-route
    paths:
    - /bar
    plugins:
    - name: ce-plugin

This defines a Kong service called `dispatcher`. For requests made to the `/bar` route, the `ce-plugin` transforms the body of the request, creates a CloudEvent of the configured type, and forwards it to the `url` of the dispatcher service.

The net result of this plugin is that a simple curl request posting some JSON data will emit a CloudEvent with attributes such as id, source, type, and timestamp:

curl -v $KONG_ADDRESS -d '{"hello":"CloudEvents"}' -H "Content-Type: application/json"

With this Kong transformation we bridge the REST endpoint exposed via Kong to an event-driven system managed by Kubernetes, which lets us interact with IBM MQ.
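To make the transformation concrete, here is a minimal Python sketch of what the plugin does conceptually. A real Kong plugin is typically written in Lua, and the event type and source values below are hypothetical placeholders:

```python
import json
import uuid
from datetime import datetime, timezone

def to_cloudevent(body: bytes, event_type: str, source: str) -> dict:
    """Wrap a raw JSON request body in a CloudEvents 1.0 envelope
    carrying the attributes mentioned above: id, source, type, and
    "time" (the timestamp attribute)."""
    return {
        "specversion": "1.0",
        "id": str(uuid.uuid4()),                         # unique event id
        "source": source,                                # who emitted it
        "type": event_type,                              # used for event routing
        "time": datetime.now(timezone.utc).isoformat(),  # timestamp
        "datacontenttype": "application/json",
        "data": json.loads(body),
    }

# the same payload as the curl example above
event = to_cloudevent(b'{"hello":"CloudEvents"}',
                      event_type="com.example.bar",      # placeholder type
                      source="/kong/routes/bar")         # placeholder source
print(event["type"], event["data"])
```

Downstream, Knative and TriggerMesh components route on the `type` attribute, which is why giving each route its own event type is useful later in this post.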

Second, an IBM MQ Connector in Kubernetes

In the latest release of TriggerMesh we shipped an IBM MQ Source and Target. Everything runs in Kubernetes and is available as a declarative API, just like any Kubernetes workload. You can find the complete demo in a GitHub repository; for our purposes here I will just highlight the main steps.

A sample IBM MQ source manifest looks like this:

apiVersion: sources.triggermesh.io/v1alpha1
kind: IBMMQSource
metadata:
  name: mq-output-channel
spec:
  channelName: DEV.APP.SVRCONN
  connectionName: ibm-mq.default.svc.cluster.local(1414)
  credentials:
    password:
      valueFromSecret:
        key: password
        name: ibm-mq-secret
    username:
      valueFromSecret:
        key: username
        name: ibm-mq-secret
  queueManager: QM1
  queueName: DEV.QUEUE.2
  sink:
    ref:
      kind: Synchronizer
      name: ibm-mq

The `connectionName` points to your IBM MQ server, the `credentials` section references a Kubernetes Secret which holds the username and password, the queue is specified, and you set a `sink` as the destination for the events that you consume from MQ.
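The `ibm-mq-secret` referenced above is a standard Kubernetes Secret; for example (the credential values here are placeholders):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: ibm-mq-secret
type: Opaque
stringData:
  username: app        # placeholder, use your MQ application user
  password: passw0rd   # placeholder, use your MQ application password
```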

Just like that, with a declarative API, you consume from a legacy system and emit events in the CNCF CloudEvents format. In addition, this consumer is containerized and managed by Kubernetes.

The same thing can be done to produce events into IBM MQ using what TriggerMesh calls a Target. You can find the API specification in our documentation.
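For illustration, a Target manifest might look like the following sketch, which simply mirrors the Source fields above; the `apiVersion`, queue name, and exact field names are assumptions here and should be checked against the TriggerMesh documentation:

```yaml
apiVersion: targets.triggermesh.io/v1alpha1
kind: IBMMQTarget
metadata:
  name: mq-input-channel
spec:
  channelName: DEV.APP.SVRCONN
  connectionName: ibm-mq.default.svc.cluster.local(1414)
  queueManager: QM1
  queueName: DEV.QUEUE.1        # hypothetical queue to produce into
  credentials:
    password:
      valueFromSecret:
        key: password
        name: ibm-mq-secret
    username:
      valueFromSecret:
        key: username
        name: ibm-mq-secret
```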

Finally, Add a Synchronizer and Transformations

To bring it all together we have to deal with synchronization: REST exposes a synchronous API, while event-driven systems are asynchronous by nature. MuleSoft, for example, has long offered an IBM MQ connector which leverages the correlation and reply-to metadata of MQ messages to build a synchronous connector. With TriggerMesh you can now do the same declaratively, in open source, and in Kubernetes. What’s not to like :)
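To illustrate the correlation-ID pattern that makes a synchronous facade possible, here is a toy Python sketch (not TriggerMesh code): a synchronous caller blocks until a reply carrying the same correlation ID comes back from the asynchronous side:

```python
import queue
import threading
import uuid

class Synchronizer:
    """Toy request/reply correlator: blocks a synchronous caller until a
    reply with a matching correlation ID arrives from the async side."""

    def __init__(self):
        self._pending = {}       # correlation id -> [Event, reply slot]
        self._lock = threading.Lock()

    def send(self, request_queue, payload, timeout=5.0):
        corr_id = str(uuid.uuid4())
        done = threading.Event()
        with self._lock:
            self._pending[corr_id] = [done, None]
        # publish with correlation metadata, like an MQ message descriptor
        request_queue.put({"correlation_id": corr_id, "body": payload})
        if not done.wait(timeout):
            raise TimeoutError("no reply for " + corr_id)
        with self._lock:
            return self._pending.pop(corr_id)[1]

    def on_reply(self, message):
        """Called when a reply message is consumed from the reply queue."""
        with self._lock:
            entry = self._pending.get(message["correlation_id"])
        if entry:
            entry[1] = message["body"]
            entry[0].set()

# simulate the MQ side echoing a reply with the same correlation id
sync = Synchronizer()
requests = queue.Queue()

def fake_mq_worker():
    msg = requests.get()
    sync.on_reply({"correlation_id": msg["correlation_id"],
                   "body": "reply to " + msg["body"]})

threading.Thread(target=fake_mq_worker, daemon=True).start()
print(sync.send(requests, "hello"))   # prints "reply to hello"
```

In the real system the reply-to queue and the correlation ID live in the MQ message descriptor, and TriggerMesh’s Synchronizer plays the role of this correlator declaratively.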

The diagram below represents the full flow of the system. You can try it for yourself with the manifests available in the demo repository. To simplify testing, you may want to use the TriggerMesh DSL. The flow also showcases the ability to do different types of transformations depending on which REST routes are called. Since each route can be configured to generate a different event type, the event broker can route events to different transformation functions before they head to IBM MQ.

Think about fixed-width-to-JSON transformations, COBOL copybook transformations, DataWeave transformations, and even good old XSLT transformations.
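As an example of the first kind, a fixed-width-to-JSON transformation can be sketched in a few lines of Python (the record layout here is made up for illustration):

```python
import json

# Hypothetical fixed-width layout: name (10 chars), amount (8), currency (3)
LAYOUT = [("name", 0, 10), ("amount", 10, 18), ("currency", 18, 21)]

def fixed_width_to_json(record: str) -> str:
    """Slice a fixed-width mainframe-style record into named JSON fields."""
    fields = {name: record[start:end].strip() for name, start, end in LAYOUT}
    fields["amount"] = float(fields["amount"])   # numeric field
    return json.dumps(fields)

print(fixed_width_to_json("ACME CORP   123.45USD"))
# {"name": "ACME CORP", "amount": 123.45, "currency": "USD"}
```

Packaged as a function behind the broker, a transformation like this runs only for the event types routed to it.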


Moving to the Cloud does not mean throwing away decades of enterprise effort, performance optimization, workflows, and systems of record. You do not need to lift and shift everything at once. What you can definitely do is modernize your approach to software development and start bringing new technologies into your entire software and infrastructure stack.

By doing so you can extend the lifetime of legacy systems like IBM MQ or Oracle DB and keep them relevant in the Cloud era.

In this post we showed you a practical way to build a REST interface in front of a system of record that you interact with via IBM MQ. All components are configured via declarative APIs implemented as extensions of the Kubernetes API. Such a modernization gives you a slew of features derived from Kubernetes: RBAC, logging, monitoring, auto-scaling, and the ability to manage your integration with a GitOps workflow. In addition, it saves you a considerable amount of money by moving away from costly proprietary integration solutions and embracing open-source systems like Kong, TriggerMesh, and Kubernetes.
