What Every CIO Needs to Know about Serverless, Part 2
This series of blogs “What Every CIO Needs to Know about Serverless” is designed to help bring clarity and sanity to the sometimes confusing world of serverless. In Part 1 we covered why serverless matters to your digital transformation strategy and we introduced a couple of basic concepts.
Here in Part 2, we are breaking down the differences between serverless, Function as a Service (FaaS), and Knative. The similarities and differences between these terms and approaches have been a source of much discussion and confusion. We doubt we’ll settle it all in a single blog, but we hope at least to make your life easier.
Let’s Look Closer at Serverless
Serverless computing refers to the concept of building and running applications that do not require server management. It describes a finer-grained deployment model where applications, bundled as one or more functions, are uploaded to a platform and then executed, scaled, and billed in response to the exact demand needed at the moment.
Serverless computing does not mean that we no longer use servers to host and run code; nor does it mean that operations engineers are no longer required. Rather, it refers to the idea that developers no longer need to spend time and resources on server provisioning, maintenance, updates, scaling, and capacity planning. Instead, all of these tasks and capabilities are handled by a serverless platform and are completely abstracted away from the developers and IT/operations teams.
Notable characteristics of serverless computing include:
- Autoscaling, including scaling to zero: Traditional cloud or on-premises applications run – and consume compute, storage, and networking resources – even when they are not in use. With serverless, when the function is not called, all compute and other resources go idle.
- Usage-based pricing: Hand in hand with scaling to zero, when a function is not being used, you pay nothing. Serverless providers charge per function call.
- Event-driven: Serverless enables developers to focus on applications that consist of event-driven functions that respond to a variety of triggers.
- Use Cases: Common serverless use cases include eCommerce, clickstream analytics, contact center, legacy app modernization, and DevOps functions.
OK, So What about FaaS?
Serverless is often equated with Function as a Service (FaaS) offerings like AWS Lambda. We believe serverless is much more than FaaS. A FaaS platform lets users write small pieces of code that get executed when an event happens. The platform transparently takes care of provisioning the runtime, auto-scaling and security.
FaaS can be thought of as the glue that connects cloud services together, executing code when certain events happen. While developers focus on the event-driven functions themselves, FaaS platforms take care of the rest – trigger-to-function logic, passing information from one function to another, auto-provisioning of containers and runtimes (when, where, and what), auto-scaling, identity management, and so on.
How Does Knative Fit In?
Knative is an open source serverless platform that provides a set of middleware components to build modern, source-centric, and container-based applications that can run anywhere: on-premises, in the cloud, or even in a third-party data center.
In his excellent talk at NDC London 2020, Google Developer Advocate Mete Atamel describes Knative as helping to resolve a previous tradeoff developers had to make between serverless OR containers. With Knative, you get both – the flexibility of containers with the zero-touch provisioning and fast iteration of serverless.
Knative does two things: Serving and Eventing.
- Serving builds on Kubernetes to support the deployment and serving of serverless applications and functions. Serving provides the autoscaling – including scale to zero – feature of FaaS as well as fine-grained traffic control using modern network gateways.
- Eventing provides building blocks for consuming and producing events that adhere to the CloudEvents specification (a specification developed by the CNCF Serverless Working Group). It includes abstractions for event sources and decoupled delivery through messaging channels backed by pluggable pub/sub broker services.
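As a brief illustration of an event source, the sketch below shows a hypothetical PingSource (part of Knative Eventing’s `sources.knative.dev` API) that emits a CloudEvent on a schedule and sinks it to a Broker; the name, schedule, and payload are all assumptions for the example:

```yaml
# Illustrative only: emits a JSON CloudEvent every minute to the "default" Broker.
apiVersion: sources.knative.dev/v1
kind: PingSource
metadata:
  name: ping-every-minute   # placeholder name
spec:
  schedule: "*/1 * * * *"   # standard cron syntax
  contentType: "application/json"
  data: '{"message": "Hello"}'
  sink:
    ref:
      apiVersion: eventing.knative.dev/v1
      kind: Broker
      name: default
```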
Knative Serving defines a set of objects as Kubernetes Custom Resource Definitions (CRDs). These objects are used to define and control how your serverless workload behaves on the cluster.
- Service: The service.serving.knative.dev resource automatically manages the whole lifecycle of your workload. It controls the creation of the other objects to ensure that your app has a route, a configuration, and a new revision for each update of the service.
- Route: The route.serving.knative.dev resource maps a network endpoint to one or more revisions.
- Configuration: The configuration.serving.knative.dev resource maintains the desired state for your deployment. It provides a clean separation between code and configuration and follows the Twelve-Factor App methodology.
- Revision: The revision.serving.knative.dev resource is a point-in-time snapshot of the code and configuration for each modification made to the workload.
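To make this concrete, here is a minimal sketch of a Knative Service manifest (the name and container image are illustrative, taken from the Knative samples); applying it causes Knative to create the Route, Configuration, and an initial Revision on your behalf:

```yaml
# Illustrative minimal Knative Service; Route, Configuration,
# and Revision objects are created automatically from it.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello               # placeholder name
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
          env:
            - name: TARGET
              value: "World"
```

Each change to the `template` block produces a new Revision, and the Route can then shift traffic between revisions.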
Knative Eventing decouples event producers from event consumers:
- Any producer (or source) can generate events before there are active event consumers listening.
- Any event consumer can express interest in an event or class of events before there are producers creating those events.
As of v0.5, Knative Eventing defines Broker and Trigger objects to make it easier to filter events.
- A Broker provides a bucket of events which can be selected by attribute. It receives events and forwards them to subscribers defined by one or more matching Triggers.
- A Trigger describes a filter on event attributes which should be delivered to an Addressable. You can create as many Triggers as necessary.
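For example, a hypothetical Trigger that delivers one class of events from the default Broker to a Knative Service might be sketched like this (the event type and service name are assumptions):

```yaml
# Illustrative Trigger: forwards only events of one type
# from the "default" Broker to a subscriber Service.
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: ping-trigger        # placeholder name
spec:
  broker: default
  filter:
    attributes:
      type: dev.knative.sources.ping   # match on the CloudEvent "type" attribute
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-handler   # placeholder subscriber
```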
Knative Eventing also defines an event forwarding and persistence layer, called a Channel. Each channel is a separate Kubernetes Custom Resource. Events are delivered to Services or forwarded to other channels (possibly of a different type) using Subscriptions. This allows message delivery in a cluster to vary based on requirements, so that some events might be handled by an in-memory implementation while others would be persisted using Apache Kafka or NATS Streaming.
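A minimal sketch of this wiring, assuming the default channel implementation and an illustrative Service named `my-service`, might look like:

```yaml
# Illustrative Channel plus Subscription; the channel backing
# (in-memory, Kafka, etc.) is chosen by the cluster's configuration.
apiVersion: messaging.knative.dev/v1
kind: Channel
metadata:
  name: my-channel          # placeholder name
---
apiVersion: messaging.knative.dev/v1
kind: Subscription
metadata:
  name: my-subscription     # placeholder name
spec:
  channel:
    apiVersion: messaging.knative.dev/v1
    kind: Channel
    name: my-channel
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: my-service      # placeholder subscriber
```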
Tekton: Kubernetes-Native CI/CD
Tekton is a Kubernetes-native open-source framework for creating continuous integration and delivery (CI/CD) systems. It lets you build, test, and deploy across multiple cloud providers or on-premises systems by abstracting away the underlying implementation details.
As a Kubernetes-native framework, Tekton makes it easier to deploy across multiple cloud providers or hybrid environments. By leveraging Custom Resource Definitions (CRDs), Tekton uses the Kubernetes control plane to run pipeline tasks. And because it builds on standard industry specifications, Tekton works well with existing CI/CD tools such as Jenkins, Jenkins X, Skaffold, and Knative.
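As a small illustration of Tekton’s CRD-based model, a minimal Task with a single step might be sketched as follows (the name, image, and script are placeholders):

```yaml
# Illustrative Tekton Task: one step that runs a shell script in a container.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: hello-task          # placeholder name
spec:
  steps:
    - name: echo
      image: alpine         # any container image can serve as a step
      script: |
        echo "Hello from Tekton"
```

Tasks like this are composed into Pipelines, and the Kubernetes control plane schedules each step as a container in a pod.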
Tekton is one of the initial projects in the Continuous Delivery Foundation (CDF), a governing body with 25+ members that encourages and sustains vendor-neutral collaboration and participation across fast-growing CI/CD projects.
A Little About Us
TriggerMesh believes enterprise developers will increasingly build applications as a mesh of cloud native functions and services from multiple cloud providers. We believe this architecture is the best way for agile businesses to deliver the effortless digital experiences customers expect and at the same time minimize infrastructure complexity.
To bring today’s enterprise applications into this future, the TriggerMesh cloud native integration platform ties together cloud computing, SaaS, and on-premises applications. We do this through an event-driven cloud service bus that connects application workflows across varied infrastructures.