High Availability for Pivotal Ingress Router
Architecture of Pivotal Ingress Router
Pivotal Ingress Router consists of the following components:
Cluster Registrar: Polls the PKS API for a list of clusters. This allows the Ingress Router to detect when clusters have been created, updated, or deleted.
Master Routing Controller: When new clusters are created, it configures routing to the Kube API for those clusters. It also handles state changes of clusters such as scaling up/down and when clusters are deleted.
Workload Routing Controller Manager (Experimental): When new clusters are created, it creates Workload Routing Controllers for those clusters.
Workload Routing Controller (Experimental): Watches for custom resources created with the intent of routing to a particular workload. It then configures routing to those workloads.
IngressGateway (Istio): Runs an Envoy proxy that accepts traffic from external clients and routes to destinations configured by the Master Routing Controller and the Workload Routing Controllers.
Pilot (Istio): Converts configuration set by the Master Routing Controller and Workload Routing Controllers into Envoy routing configuration.
To summarize, Pivotal Ingress Router provides service discovery for clusters and workloads. Pilot provides an interface that abstracts this mechanism into standard Envoy configuration. The IngressGateway ultimately routes traffic from external clients.
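For reference, the routing that Pilot translates into Envoy configuration is expressed as standard Istio resources. The sketch below shows a minimal Istio Gateway of the kind the routing controllers configure; the names and hosts are illustrative, not the actual resources that Ingress Router generates.

```yaml
# Illustrative only: a minimal Istio Gateway resource. The selector binds
# it to the IngressGateway Envoy proxies; the name and hosts are hypothetical.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: example-gateway        # hypothetical name
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway      # matches the IngressGateway pods
  servers:
  - port:
      number: 443
      name: tls
      protocol: TLS
    tls:
      mode: PASSTHROUGH        # e.g. pass TLS through to a cluster's Kube API
    hosts:
    - "*.example.com"          # hypothetical host pattern
```

Pilot converts resources like this into the Envoy listener and route configuration served to each IngressGateway instance.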
High Availability for Pivotal Ingress Router
To ensure that routing is highly available, we recommend running multiple worker nodes on the system cluster where Pivotal Ingress Router is installed, with multiple replicas of the IngressGateway spread across those worker nodes. If any of these worker nodes goes down, the pods on that node, including Ingress Router components, are rescheduled onto another available worker node.
By default, Pivotal Ingress Router currently installs with three IngressGateway instances. If the worker node on which an IngressGateway instance is running goes down for any reason (such as during a stemcell update or PKS upgrade), workloads and clusters remain routable.
We also recommend having multiple master nodes on the workload clusters so that the Kube APIs on these clusters are still routable.
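Spreading IngressGateway replicas across worker nodes is typically achieved with Kubernetes pod anti-affinity. The fragment below is an illustrative sketch of that kind of scheduling rule; the actual IngressGateway Deployment spec is managed by the Pivotal Ingress Router installation, so this is not its literal configuration.

```yaml
# Illustrative only: a pod anti-affinity rule of the kind that spreads
# IngressGateway replicas across worker nodes, so losing one node leaves
# the remaining replicas serving traffic.
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      podAffinityTerm:
        labelSelector:
          matchLabels:
            istio: ingressgateway         # assumed pod label
        topologyKey: kubernetes.io/hostname  # at most one replica per node, when possible
```

With a rule like this, the scheduler prefers placing each replica on a node that does not already host one, which is what makes the three-instance default resilient to a single node failure.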
You can update the number of master and worker node instances on existing clusters by reconfiguring your PKS tile.
To update the number of IngressGateway instances, update the istio.gateways.istio-ingressgateway.autoscaleMin and istio.gateways.istio-ingressgateway.autoscaleMax fields in user-provided-values.yaml. For more information about how to apply these changes, see the installation instructions.
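As a worked example, the fragment below shows how those fields nest in user-provided-values.yaml. The field paths come from this page; the numeric values are examples only.

```yaml
# Example user-provided-values.yaml fragment. Field paths are as documented
# above; the values shown are illustrative.
istio:
  gateways:
    istio-ingressgateway:
      autoscaleMin: 3   # matches the default three-instance install
      autoscaleMax: 5   # example upper bound for scaling out
```

After editing the file, apply the change by following the installation instructions referenced above.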