We are embarking on a new journey. Personally, it will certainly prove to be a new chapter in our lives, and professionally we are excited to bring our know-how together and help move the Serverless era forward. Mark brings his business expertise and community chops, and I bring the engineering and product vision; we have worked together before, and we balance each other well. It is often quite hard to do something on your own, especially a new technology venture, and I am glad that the time is right for both of us.
We have seen a few waves in computing now, and have been participating in the Cloud and, more recently, the container revolution. It has been great to see. As a technology enthusiast, it certainly feels that the pace of innovation has accelerated. This is mostly driven by the fact that software is now written in the open and that everyone shares part of the development and maintenance costs. The Cloud providers offer managed services for all the core infrastructure pieces of the enterprise, and going to the cloud now seems a de facto decision compared to even five years ago.
Looking back to 2012, when Mark and I worked on Apache CloudStack, Cloud was hard. Understanding the “full stack” was complicated for most organizations. Not only did people need to adopt a new operational model, they also needed to adopt a new culture to take full advantage of the Cloud. There is still work to do on the organizational culture, but the infrastructure piece feels solved. The challenge now is moving up the stack, putting the focus back on the application and application management.
In 2014, when I started looking deeply at containers, I did so because I wanted to understand the excitement around Docker. I did not grasp why, when the core technologies for containers (namespaces and cgroups) had been around for several years, folks were only now getting excited by them. It did not take me long to understand. Docker, with a new UX to manage containers and the availability of a container image registry, made life easy for developers. I jumped on the bandwagon with no fear.
In what has turned out to be a bit ironic, writing the Docker cookbook allowed me to spend time on Kubernetes starting in the fall of 2014, and that is when I knew that Kubernetes was going to become the operating system of the data center. Having used CloudStack, OpenStack, Eucalyptus, and OpenNebula (the big four at the time), I saw Kubernetes as the open sourcing of the backbone of Google Compute Engine (GCE). As I often say, it was as if AWS had decided to open source EC2, except that it was Google who did it. No one really took on this aspect of Kubernetes. Even though it could have been used to manage virtual machines, since workloads started being containerized we all focused on containers and quite quickly abandoned VM workloads. That is not to say that VMs have disappeared from the data center, not at all, but new applications are now containerized and run on Kubernetes.
Every week now I see new Kubernetes users in production or about to go to production. They move quickly, and they bring a new stack into the data center. They have become more agile, faster to production, and are quickly adopting the software projects that are part of the CNCF. This is the signal to me that the time is right to get back to the application landscape and rely on Kubernetes as the core infrastructure.
Here comes Knative
This shift to applications is not new either; PaaS offerings have been around for some time. OpenShift existed prior to Kubernetes. Cloud Foundry is eight years old. But with AWS Lambda launched in 2015 and more services being offered by Cloud providers, the application environment is changing. With pun totally intended, applications are now Cloud native. They are made of microservices and use other Cloud services, and with managed services like Google Kubernetes Engine (GKE), the data center is getting “serverless”.
AWS Lambda and functions, in general, are a way to compose Cloud services and build an application that has many self-* properties (tip of the hat to IBM): auto-scaling, self-healing, self-tuning, etc. At the core of those function-based applications are events: events emitted by cloud services and even by on-premises software.
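To make the event-driven model concrete, here is a minimal sketch of what such an event could look like, expressed in the CNCF CloudEvents format (the attribute names follow the CloudEvents specification; the event type, source, and payload shown here are hypothetical):

```json
{
  "specversion": "1.0",
  "type": "com.example.storage.object.created",
  "id": "a1b2c3d4-0001",
  "source": "/buckets/invoices",
  "time": "2018-09-01T12:00:00Z",
  "datacontenttype": "application/json",
  "data": {
    "name": "invoice-42.pdf",
    "size": 12345
  }
}
```

A function subscribed to the `com.example.storage.object.created` event type would be invoked with this payload, regardless of whether the emitter is a cloud service or an on-premises system.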
Therefore we now need to focus on helping people deploy functions and link them via events to build the new Cloud native applications. This is what TriggerMesh sets out to do: give users a straightforward path to running their functions and managing the event triggers that mesh those functions together.
To do so we are going to use Knative. Knative offers core primitives (e.g., Custom Resource Definitions) on which to build a FaaS solution. Knative also builds on Istio, the service mesh, which is also gaining momentum in the enterprise. Having developed Kubeless back in December 2016 in my basement with my friend Nguyen Anh Tu, I know exactly what is needed. I am actually quite proud to have developed Kubeless that early, and to now see many of the ideas that we put forward reflected in Knative.
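As a quick illustration of those primitives, the sketch below shows a minimal Knative Serving Service manifest, modeled on the examples in the Knative documentation (the exact API version depends on the Knative release, and the service name and sample image are illustrative):

```yaml
# A single Knative Service custom resource; from it, Knative derives
# the underlying Deployment, Revision, Route, and autoscaling setup.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go  # sample image from the Knative docs
          env:
            - name: TARGET
              value: "World"
```

Applying a manifest like this with `kubectl apply -f` yields an HTTP-addressable service that scales with traffic, down to zero when idle, which is exactly the kind of primitive a FaaS layer can build on.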
Knative was announced in July at Google Next and has the strong support of Google, Pivotal and Red Hat. While Pivotal has clearly started to refactor their Riff FaaS project on top of Knative, the Red Hat presence also indicates that the future of OpenWhisk might be based on Knative, though the recent acquisition tells me to be cautious 🙂 I think other FaaS projects will also follow Knative, but time will tell.
A Cloud and Open Source Software
Building a distribution of Knative or offering support services around it would certainly be a wise business move. But Mark and I like a challenge; we have been breathing clouds for many years now, and it is time for us to try to build one.
So TriggerMesh will provide a cloud service; under the hood will be Knative, Istio and, of course, Kubernetes. You will be able to deploy your functions hosted on GitHub, Bitbucket and GitLab, and thanks to Knative Eventing we will let you configure event sources from the leading cloud providers (e.g., AWS SQS, Azure Event Grid, Google Cloud Storage) and mesh them together with your functions into your Cloud native applications.
Of course, we will contribute to Open Source, and you will find us in the Kubernetes and Knative Slack channels. You can already visit our GitHub organization, where you will find:
- A Golang-based client for Knative called `tm`
- A Terraform provider for Knative
- A set of Knative Build templates to deploy your OpenFaaS functions on TriggerMesh
- An Azure Functions build template to deploy your functions using the Azure runtime
With all of that said, if you want to get started today with TriggerMesh, please join our Early Access Program, and if you want to help out, give us a shout.
Go Cloud, go Knative, let’s build some Serverless applications with TriggerMesh!