Enterprise Service Bus (ESB) technology provided a pluggable architecture that served legacy applications well. The paradigm offers heterogeneous connectivity that can throttle and scale, and it has been used to deliver integrated enterprise solutions built from siloed applications.
Digital Transformation programs run by enterprises aim to replace these legacy, siloed applications with container-based applications. The strategy results in distributed systems with exceptional flexibility in responding to changing business demands. A 2020 Gartner study reported that, by 2022, more than 75% of global organizations will be running containerized applications in production, up from 30% at the time of the report.
Container orchestration engines like Kubernetes provide the platform to manage containers, but they do not natively address the need to integrate applications. Moreover, the ephemeral nature of containers requires service discovery and load-balancing solutions. An ESB can be used to integrate application containers, but such a solution has significant drawbacks.
Alternatively, solution architects have adopted the principle of “smart endpoints and dumb pipes” to address these challenges around enterprise integration. The dumb pipes are created using serverless platforms, with AWS Lambda being the most popular. These serverless platforms offer custom event handling that requires programming support. In distributed architectures, a growing number of deployed services increases the application integration points and the volume of data transferred. The complexity of all these exchanges often results in the Lambda Pinball anti-pattern:
“We lose sight of important domain logic in the tangled web of lambdas, buckets, and queues as requests bounce around increasingly complex graphs of cloud services.”
Modern enterprise integration solutions must support declarative configuration of out-of-the-box connectors built using cloud-native principles. Building the integration platform on these foundations yields significant benefits.
TriggerMesh is a distributed solution for enterprise integration needs. It offers the agility and flexibility of modern cloud-oriented solutions while adapting successful practices from legacy integration platforms. TriggerMesh’s versatility is based on its cloud-native architecture, which has the following characteristics.
TriggerMesh is a distributed application that is packaged and deployed using containers. The solution is built using microservices for the TriggerMesh service proxy and TriggerMesh process engine. The service proxy offers such capabilities as ingress, the TriggerMesh management console, authentication, TLS termination, etc. The process engine provides APIs for all TriggerMesh features and executes reconciliation loops to deliver intended behaviors. The two services have different resource usage patterns and can be scaled independently.
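Because the service proxy and process engine have different resource usage patterns, each can carry its own autoscaling policy. As a rough sketch, assuming Deployment names `triggermesh-service-proxy` and `triggermesh-process-engine` (both names are illustrative, not documented values), independent scaling might be expressed as:

```yaml
# Illustrative HorizontalPodAutoscalers scaling the two TriggerMesh
# services independently. Deployment names and thresholds are
# assumptions for the sake of the example.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: service-proxy-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: triggermesh-service-proxy   # ingress/TLS workload: scale on CPU
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: process-engine-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: triggermesh-process-engine  # reconciliation workload: scale on memory
  minReplicas: 1
  maxReplicas: 4
  metrics:
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
```

The point of the sketch is that two policies, not one, track each service's distinct load profile.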
Kubernetes facilitates application portability across infrastructure platforms, including on-premises VMs, bare metal, and public cloud providers. Preserving this flexibility is a challenge, as engineers must avoid a cloud provider's native services such as AWS Lambda. Adopting a cloud-native architecture instead ensures that the application is built on Kubernetes services and open source, environment-agnostic frameworks. TriggerMesh uses such frameworks to support deployment across all Kubernetes providers.
Kubernetes application deployment is often cumbersome: you must deploy the containers along with their Custom Resource Definitions (CRDs), and then define custom roles and access privileges for the newly added CRDs. Executing this process manually is error-prone, as CRDs and their associated permissions can be missed. Instead, TriggerMesh is packaged with Helm charts and Kubernetes Operators so that the solution can be installed and upgraded with a single command.
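To make the manual path concrete, each newly installed CRD needs RBAC rules like the following before anyone can use it; forgetting one such per-CRD rule is exactly the error-prone step the Helm chart and operator automate. The role name and API group below are assumptions for illustration:

```yaml
# Illustrative ClusterRole granting access to the resources of a newly
# installed CRD group. Name and API group are assumed, not verified
# against the actual TriggerMesh packaging.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: triggermesh-sources-admin      # assumed name
rules:
  - apiGroups: ["sources.triggermesh.io"]   # assumed API group
    resources: ["*"]
    verbs: ["get", "list", "watch", "create", "update", "delete"]
```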
API-first architecture means developing APIs as the first-class interface rather than an afterthought. TriggerMesh is developed with REST-based APIs and JSON data exchange. The TriggerMesh Management Console relies on these APIs to deliver the intended outcomes, and all integration components, such as sources, targets, and transformations, are configured through the same APIs.
The APIs decouple the application's features from specific use cases. Developers can work with the API, using an OpenAPI editor, to build custom logic for their particular use cases. Kubernetes extends this flexibility with a declarative expression of such custom logic: the Kubernetes API supports Custom Resource Definitions (CRDs), and applications can provide their own CRDs, whose resources are then declared in YAML.
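As a sketch of how an application registers its own declarative types, a minimal CRD might look like the following. The `IntegrationRoute` kind and `example.com` group are invented for illustration, not TriggerMesh's actual schema:

```yaml
# Minimal illustrative CustomResourceDefinition. Once applied, users can
# declare IntegrationRoute objects in plain YAML, and the application's
# controller reconciles them toward the declared state.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: integrationroutes.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    kind: IntegrationRoute
    plural: integrationroutes
    singular: integrationroute
  versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                source: { type: string }
                target: { type: string }
```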
The declarative integration approach has worked successfully in solutions such as Apache Camel and MuleSoft. It allows engineers to focus on the end-to-end integration rather than on individual data handshakes. The TriggerMesh API likewise represents integrations declaratively.
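For example, an end-to-end integration can be declared as a source wired to a target, with no handshake code in between. The kinds and API groups below follow TriggerMesh's published naming style but should be treated as illustrative rather than a verified schema:

```yaml
# Illustrative declarative integration: ingest messages from an AWS SQS
# queue and forward them to an HTTP target. Field names and versions are
# assumptions modeled on TriggerMesh's documented style.
apiVersion: sources.triggermesh.io/v1alpha1
kind: AWSSQSSource
metadata:
  name: orders-queue
spec:
  arn: arn:aws:sqs:us-east-1:123456789012:orders   # example ARN
  auth:
    credentials:
      accessKeyID:
        valueFromSecret:
          name: aws-credentials
          key: access_key_id
      secretAccessKey:
        valueFromSecret:
          name: aws-credentials
          key: secret_access_key
  sink:
    ref:
      apiVersion: targets.triggermesh.io/v1alpha1
      kind: HTTPTarget
      name: orders-api
```

The reconciliation loops described earlier take such a declaration and keep the running integration consistent with it.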
Debugging anomalies, bottlenecks, and performance issues is a challenge in distributed architectures. The traditional practice of relying on logs and monitoring alone is no longer good enough. A holistic solution comprises the following:

- Centralized log aggregation across all services
- Metrics collection and monitoring
- Distributed tracing of requests across service boundaries
The combination of these data sources is often referred to as “observability.” Kubernetes does not include native support for any of the above practices; they are provided by individual tools such as ELK, Prometheus, Grafana, and Zipkin, or by holistic solutions like a service mesh.
TriggerMesh can be integrated with all of these tools and solutions to provide the required observability. You can export logs, monitor metrics, and extend TriggerMesh sources and sinks using your established enterprise practices.
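If the Prometheus Operator is already part of those enterprise practices, wiring TriggerMesh metrics into it can be a matter of one manifest. The label selector and port name below are assumptions about how the TriggerMesh services are labeled, not documented values:

```yaml
# Illustrative ServiceMonitor pointing an existing Prometheus Operator
# installation at TriggerMesh's metrics endpoints. Selector label and
# port name are assumptions for the example.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: triggermesh-metrics
spec:
  selector:
    matchLabels:
      app.kubernetes.io/part-of: triggermesh   # assumed label
  endpoints:
    - port: metrics                            # assumed port name
      interval: 30s
```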
TriggerMesh offers a portable integration solution that can be deployed to your preferred infrastructure. It supports running an enterprise integration platform with the same operational practices as the Kubernetes platform.