Runtimes deploy built containers in different ways. By offering users a choice of runtimes, we plan to make the platform more extensible and to support a variety of workloads, including long-running applications, stream processors, and finite jobs. This release includes two runtimes: Core and Knative. A streaming runtime is under development.
The Core runtime is a thin layer that creates a Kubernetes Deployment and a Service targeting it. When a build produces a new image, the Deployment is updated automatically, rolling out a new ReplicaSet according to the Deployment's rollout strategy.
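The pair of resources the Core runtime manages might look roughly like the following sketch. All names, labels, and ports here are illustrative, not the runtime's actual conventions:

```yaml
# Hypothetical resources approximating what the Core runtime creates
# for a deployer; names, labels, and ports are assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: square-deployer
spec:
  replicas: 1                  # a single replica by default
  selector:
    matchLabels:
      app: square
  template:
    metadata:
      labels:
        app: square
    spec:
      containers:
      - name: workload
        # updated automatically as builds produce new images
        image: registry.example.com/square:latest
---
apiVersion: v1
kind: Service
metadata:
  name: square-deployer
spec:
  selector:
    app: square                # targets the Deployment's pods
  ports:
  - port: 80
    targetPort: 8080
```

Because this is a plain Deployment and Service, any standard Kubernetes tooling can operate on them.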
The workload is accessible from within the cluster by default, but must be explicitly exposed for external traffic. A single replica runs by default; the Deployment can be targeted by a HorizontalPodAutoscaler or any other scaler that supports the /scale subresource on Deployments. No scaler, observability tooling, or ingress is provided out of the box, so these can be supplied per deployer as needed.
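Because the Deployment exposes the standard /scale subresource, a stock HorizontalPodAutoscaler can target it directly. A minimal sketch, assuming the illustrative deployment name used above:

```yaml
# Hypothetical HPA targeting the Deployment created by the Core runtime;
# the target name and thresholds are assumptions, not riff defaults.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: square-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: square-deployer
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 80
```

Any custom scaler that manipulates the same /scale subresource would work equally well.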
The Knative runtime is most analogous to riff 0.3. It requires that Knative Serving and Istio are installed into the cluster in addition to riff. There are two models for consuming Knative: Deployers and Adapters.
Deployers create a Knative Configuration and Route for a referenced build. The Configuration is updated as the build produces new images.
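The Configuration and Route pair might look roughly like this sketch. The API version, names, and image are assumptions; the exact shape depends on the installed Knative Serving release:

```yaml
# Hypothetical Configuration/Route pair approximating what a Deployer
# creates; names and image are illustrative.
apiVersion: serving.knative.dev/v1
kind: Configuration
metadata:
  name: square
spec:
  template:
    spec:
      containers:
      # updated as the referenced build produces new images,
      # stamping out a new Revision each time
      - image: registry.example.com/square:latest
---
apiVersion: serving.knative.dev/v1
kind: Route
metadata:
  name: square
spec:
  traffic:
  - configurationName: square
    latestRevision: true       # always route to the newest Revision
    percent: 100
```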
Adapters reference an existing Knative Service or Configuration, updating its image property as the build produces new images. Route rules are preserved when new images trigger the creation of Knative Revisions.
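For example, an Adapter might reference a pre-existing Knative Service like the hypothetical one below, rewriting only the image field while leaving the traffic rules untouched:

```yaml
# Hypothetical user-managed Knative Service an Adapter could reference;
# names, image tags, and traffic split are illustrative.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: square
spec:
  template:
    spec:
      containers:
      # the only field the Adapter rewrites as builds complete
      - image: registry.example.com/square:v1
  traffic:
  - revisionName: square-v1    # pinned route rules are preserved
    percent: 90
  - latestRevision: true       # canary traffic follows new Revisions
    percent: 10
```

This lets users keep full ownership of routing and rollout policy while riff keeps the running image current.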
The new streaming runtime is under active development and is not included in this release. Its goal is to enable workloads to consume, process, and produce message streams, in conjunction with streaming platforms like Kafka.