Routing

NSX-T Data Center Edge Type and Load Balancer capacity planning

VMware recommends a minimum of four large Edge VMs in a configuration of two active/standby pairs (at least two instances are necessary for HA) to have a sufficient number of available load balancer instances. This offers a budget of two large load balancer instances. A production-grade deployment should use only large load balancers.

The type of Edge deployment is up to you, based on your needs and capacity. Edges deployed as VMs offer maximum flexibility and use of the physical resources, but the limits on bare-metal Edges are far higher. Since TA4A is a “hyperconverged” or “collapsed” model, using VMs for Edges is a natural fit. VMware recommends starting with Edges as VMs until the system capacity is exhausted, then adding bare-metal Edges or more virtualized capacity for Edge VMs as necessary.

NSX-T Data Center 2.5 introduced fault domains, which let you assign Edge nodes to defined failure zones. When TKGI provisions load balancer T1 routers, the active and standby Edges are automatically placed in separate fault domains, which increases resiliency.

The size of the load balancer determines the number of Virtual Servers, Pools, and Pool Members per load balancer instance. Learn more about configuration maximums at VMware’s Configuration Maximums site.

Important Note: Because TAS for VMs and TKGI use an active/standby Edge node Load Balancer Service, always calculate capacity per Edge pair to derive the available number of Load Balancer Services. If there are two Edge nodes, only the Load Balancer Service capacity of one instance counts; the other is standby. For example, four large Edge VMs arranged as two active/standby pairs provide the Load Balancer Service capacity of only two nodes.

The number of available Load Balancer instances maps 1:1 to the maximum number of Kubernetes clusters that can be supported: each Kubernetes cluster consumes a minimum of one load balancer on an active Edge instance. Because the number of load balancer instances is fixed by the load balancer type and the Edge type, the number of Kubernetes clusters that can be created on an Edge is likewise constrained by the number of free Load Balancer Services on the active Edge.

Note: The maximum number of Edge nodes per Edge Cluster is 10 (as of NSX-T Data Center 2.5). If you need more than 10 Edge nodes, create additional Edge Clusters. If the Load Balancer Service capacity of a given Edge pair is fully utilized, install additional Edge VM pairs in the same Edge Cluster to handle the requirement for additional load balancers (for existing or new TKGI clusters).

TKGI NSX-T Container Plugin for ingress traffic

Apps deployed in TKGI are not natively reachable from outside the cluster; you must select and deploy a means of exposing them, and that choice influences your load balancing options. Kubernetes has three ways for external users to access services running in a Kubernetes cluster: a NodePort service, a LoadBalancer service, and an Ingress.

NSX-T Load Balancer supports both layer 4 and layer 7 ingress traffic, as well as SSL encryption.

The TKGI NSX-T Container Plugin (NCP) integrates with the NSX-T Load Balancer so that Kubernetes operators can work with native Kubernetes resources (Ingress and LoadBalancer services) to provision and configure the NSX-T Load Balancer.

The benefits of TKGI NCP + NSX-T integration:

  • Kubernetes users only need to interact with Kubernetes resources rather than lower-level network infrastructure
  • NSX-T Load Balancer provisioning and configuration happen automatically:
    • Underlay IP assignment (from a pre-assigned IP Pool)
    • Backend pool configuration to load-balance traffic to Kubernetes pods
    • SSL certificate configuration (see the example at the end of the layer 7 section below)

Your choice is between a layer 4 and a layer 7 approach; each is described here. A layer 4 approach is a good choice for TCP-bound traffic, while layer 7 is best for HTTP-bound traffic to and from the app.

Ingress Controller and Load Balancing for TKGI

Layer 7 Approach Using Ingress Controller

Diagram: Ingress Controller

Domain info for ingress is defined in the manifest of the Kubernetes deployment. Here is an example.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: music-ingress
  namespace: music1
spec:
  rules:
  - host: music1.tkgi.domain.com
    http:
      paths:
      - path: /.*
        backend:
          serviceName: music-service
          servicePort: 8080
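
To add SSL to this ingress, include a standard Kubernetes tls section that references a TLS secret containing the certificate and private key, which NCP then configures on the NSX-T Load Balancer. The following is a minimal sketch: the secret name music-tls is a hypothetical example, and the secret must already exist in the music1 namespace.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: music-ingress
  namespace: music1
spec:
  tls:
  - hosts:
    - music1.tkgi.domain.com
    secretName: music-tls   # hypothetical secret holding the certificate and key
  rules:
  - host: music1.tkgi.domain.com
    http:
      paths:
      - path: /.*
        backend:
          serviceName: music-service
          servicePort: 8080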

Layer 4 Approach Using LoadBalancer Service

Diagram: Load Balancing

When you push a Kubernetes deployment with type set to LoadBalancer, NSX-T Data Center automatically creates a new VIP for the deployment on the existing load balancer for that namespace. In the service, specify a listening port and a translation port, a name for tagging, and the protocol to use. Here is an example.

apiVersion: v1
kind: Service
metadata:
  ...
spec:
  type: LoadBalancer
  ports:
  - port: 80          # listening port on the virtual server
    targetPort: 8080  # translation port: the port the pods listen on
    protocol: TCP
    name: web
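
For completeness, here is a self-contained version of the same service. The name, namespace, and selector label are hypothetical examples; the selector must match the labels on the pods of your deployment.

apiVersion: v1
kind: Service
metadata:
  name: music-service   # hypothetical name, matching the ingress backend above
  namespace: music1
spec:
  type: LoadBalancer
  selector:
    app: music          # must match the pod labels of your deployment
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: web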