Pivotal Container Service v1.3

Defining Network Profiles


This topic describes how to define network profiles for Kubernetes clusters provisioned with Pivotal Container Service (PKS) on vSphere with NSX-T.

About Network Profiles

Network profiles let you customize NSX-T configuration parameters at the time of cluster creation. Use cases for network profiles include the following:

  • Load Balancer Sizing: Customize the size of the NSX-T load balancer provisioned when a Kubernetes cluster is created using PKS.
  • Custom Pod Networks: Assign IP addresses from a dedicated IP block to pods in your Kubernetes cluster.
  • Routable Pod Networks: Assign routable IP addresses from a dedicated IP block to pods in your Kubernetes cluster.
  • Bootstrap Security Group for Kubernetes Master Nodes: Specify an NSX-T Namespace Group (NSGroup) to which Kubernetes master nodes are added during cluster creation.
  • Pod Subnet Prefix: Specify the size of the pod subnet.
  • Custom Floating IP Pool: Specify a custom floating IP pool.
  • Edge Router Selection: Specify the NSX-T Tier-0 router to which Kubernetes node and pod networks are connected.
  • DNS Configuration for Kubernetes Clusters: Specify one or more DNS servers for Kubernetes clusters.

Network Profile Format

Network profiles are defined using JSON. Here are example network profiles for two different customers:

np_customer_A.json
{
    "name": "np-cust-a",
    "description": "Network Profile for Customer A",
    "parameters": {
        "lb_size": "small",
        "t0_router_id": "5a7a82b2-37e2-4d73-9cb1-97a8329e1a90",
        "fip_pool_ids": [
            "e50e8f6e-1a7a-45dc-ad49-3a607baa7fa0"
        ],
        "pod_ip_block_ids": [
            "7056d707-acec-470e-88cf-66bb86fbf439"
        ],
        "master_vms_nsgroup_id": "9b8d535a-d3b6-4735-9fd0-56305c4a5293",
        "pod_subnet_prefix" : 27
    }
}
np_customer_B.json
{
    "name": "np-cust-b",
    "description": "Network Profile for Customer B",
    "parameters": {
        "lb_size": "medium",
        "t0_router_id": "5a7a82b2-37e2-4d73-9cb1-97a8329e1a92",
        "fip_pool_ids": [
            "e50e8f6e-1a7a-45dc-ad49-3a607baa7fa2"
        ],
        "pod_routable": true,
        "pod_ip_block_ids": [
            "ebe78a74-a5d5-4dde-ba76-9cf4067eee55",
            "ebe78a74-a5d5-4dde-ba76-9cf4067eee56"
        ],
        "master_vms_nsgroup_id": "9b8d535a-d3b6-4735-9fd0-56305c4a5292",
        "pod_subnet_prefix" : 26
    }
}

Network Profile Parameters

Define a network profile configuration in a JSON file using the following parameters:

  • name: String that is the user-defined name of the network profile.
  • description: String that is the user-defined description for the network profile.
  • parameters: One or more name-value pairs.
  • lb_size: Size of the NSX-T load balancer deployed with the Kubernetes cluster: small, medium, or large.
  • pod_ip_block_ids: Array of Pod IP Block UUIDs as defined in NSX-T; comma-separated.
  • pod_routable: Boolean true or false. Set the parameter to true to assign routable IP addresses to pods.
  • master_vms_nsgroup_id: UUID of the NSGroup as defined in NSX-T.
  • fip_pool_ids: Array of Floating IP Pool UUIDs as defined in NSX-T; comma-separated.
  • pod_subnet_prefix: Integer that is the prefix size of the custom Pods IP Block subnet.
  • t0_router_id: UUID of the tenant Tier-0 router as defined in NSX-T.
  • nodes_dns: Array of IP addresses (up to three) for DNS server lookup by Kubernetes nodes and pods.

Network Profile Creation

After the network profile is defined in a JSON file, a PKS administrator can create the network profile using the PKS CLI. The Kubernetes administrator can use the network profile when creating a cluster.

For more information, see the Create and Use Network Profiles section of Using Network Profiles (NSX-T Only).
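For example, assuming the Customer A profile above is saved as np_customer_A.json and you are logged in to the PKS CLI, the flow looks like the following sketch. The hostname cluster-a.example.com and the plan name small are placeholders; check pks --help for the exact flags in your PKS CLI version.

$ pks create-network-profile np_customer_A.json   # create the profile from the JSON file
$ pks network-profiles                            # list available network profiles
$ pks create-cluster cluster-a \
    --external-hostname cluster-a.example.com \
    --plan small \
    --network-profile np-cust-a                   # attach the profile at creation time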

Load Balancer Sizing

When you deploy a Kubernetes cluster using PKS on NSX-T, an NSX-T load balancer is automatically provisioned. By default, this load balancer is small. Using a network profile, you can customize the size of the load balancer. For more information, see Load Balancers in PKS Deployments on vSphere with NSX-T.

NSX-T load balancers run on edge nodes, which are available in several form factors. PKS supports the large VM edge node and the bare metal edge node; the large VM edge node must run on Intel processors, and the large load balancer requires a bare metal edge node. For more information about edge nodes, see Scaling Load Balancer Resources in the NSX-T documentation.

The NSX-T load balancer is a logical load balancer that handles a number of functions using virtual servers and pools. For more information, see Supported Load Balancer Features in the NSX-T documentation.

The following virtual servers are required for PKS:

  • 1 TCP layer 4 virtual server for each Kubernetes service of type: LoadBalancer
  • 2 HTTP and HTTPS layer 7 global virtual servers for Kubernetes ingress resources
  • 1 global virtual server for the PKS API
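For example, a cluster that serves ingress traffic and exposes ten services of type: LoadBalancer consumes 2 + 10 = 12 virtual servers, plus the global virtual server for the PKS API. If that count exceeds the virtual server capacity of the small load balancer in your NSX-T version, the cluster needs a medium or large load balancer; consult the NSX-T documentation for the per-size limits.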

The following network profile, np-lb-med, defines a medium load balancer:

{
    "name": "np-lb-med",
    "description": "Network profile for medium NSX-T load balancer",
    "parameters": {
        "lb_size": "medium"
    }
}

The following network profile, np-lb-large, defines a large load balancer:

{
    "name": "np-lb-large",
    "description": "Network profile for large NSX-T load balancer",
    "parameters": {
        "lb_size": "large"
     }
}

Note: The large load balancer requires a bare metal NSX Edge Node.

Custom Pod Networks

When you configure your NSX-T infrastructure for PKS, you must create a Pods IP Block. For more information, see the Plan IP Blocks section of Planning, Preparing, and Configuring NSX-T for PKS.

By default, this subnet is non-routable. When a Kubernetes cluster is deployed, each pod receives an IP address from the Pods IP Block you created. Because the pod IP addresses are non-routable, NSX-T creates a SNAT rule on the Tier-0 router to allow network egress from the pods. This configuration is shown in the diagram below:

[Figure: Non-routable pod network with SNAT]

You can use a network profile to override the global Pods IP Block that you specify in the PKS tile with a custom IP block. To use a custom pods network, do the following after you deploy PKS:

  1. Define a custom IP block in NSX-T. For more information, see Creating NSX-T Objects for PKS. A sketch for looking up the block's UUID follows these steps.

  2. Define a network profile that references the custom pods IP block. For example, the following network profile defines non-routable pod addresses from two IP blocks:

{
    "description": "Network profile with two non-routable pod networks",
    "name": "non-routable-pod",
    "parameters": {
      "pod_ip_block_ids": [
        "ebe78a74-a5d5-4dde-ba76-9cf4067eee55",
        "ebe78a74-a5d5-4dde-ba76-9cf4067eee56"
      ]
    }
}
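The pod_ip_block_ids values must match IP Block UUIDs that already exist in NSX-T. The following sketch lists them through the NSX-T Manager REST API; NSX-MANAGER-IP and the admin credentials are placeholders, and jq is used only to trim the output.

$ curl -ks -u 'admin:PASSWORD' \
    https://NSX-MANAGER-IP/api/v1/pools/ip-blocks \
    | jq -r '.results[] | "\(.id)  \(.display_name)  \(.cidr)"'   # UUID, name, and CIDR of each IP block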

Routable Pod Networks

Using a network profile, you can assign routable IP addresses from a dedicated routable IP block to pods in your Kubernetes cluster. When you deploy a Kubernetes cluster using that network profile, the routable IP block overrides the default non-routable IP block created for deploying PKS, and each pod receives a routable IP address. Because the pod addresses are routable, no SNAT rule is created. This configuration is shown in the diagram below.

[Figure: Routable pod network using network profiles]

To use routable pods, do the following after you deploy PKS:

  1. Define a routable IP block in NSX-T. For more information, see Creating NSX-T Objects for PKS.

  2. Define a network profile that references the routable IP block. For example, the following network profile defines routable pod addresses from two IP blocks:

{
    "description": "Network profile with small load balancer and two routable pod networks",
    "name": "small-routable-pod",
    "parameters": {
      "pod_routable": true,
      "pod_ip_block_ids": [
        "ebe78a74-a5d5-4dde-ba76-9cf4067eee55",
        "ebe78a74-a5d5-4dde-ba76-9cf4067eee56"
      ]
    }
}
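After you create a cluster with this profile, one way to spot-check the result, assuming kubectl access to the cluster, is to confirm that pod IP addresses come from the routable block:

$ kubectl get pods --all-namespaces -o wide   # the IP column should show addresses from the routable IP block

You can also verify in NSX Manager that no SNAT rule was created on the Tier-0 router for this cluster's pod networks.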

Bootstrap Security Group

Most of the NSX-T virtual interface tags used by PKS are added to the Kubernetes master node or nodes during the node initialization phase of cluster provisioning. To add tags to virtual interfaces, the Kubernetes master node must connect to the NSX-T Manager API. Network security rules provisioned before cluster creation do not allow nodes to connect to NSX-T if those rules are based on a Namespace Group (NSGroup) managed by PKS.

To address this bootstrap issue, PKS exposes an optional network profile parameter that adds Kubernetes master nodes to a pre-provisioned NSGroup during cluster creation. The BOSH vSphere cloud provider interface (CPI) can use the NSGroup to automatically manage member VMs following the BOSH VM lifecycle for Kubernetes master nodes.

To configure a Bootstrap Security Group, complete the following steps:

  1. Create the NSGroup in NSX Manager prior to provisioning a Kubernetes cluster using PKS. For more information, see Create an NSGroup in the NSX-T documentation. A sketch for looking up the NSGroup UUID follows these steps.
  2. Define a network profile that references the NSGroup UUID that the BOSH CPI can use to bootstrap the master node or nodes. For example, the following network profile specifies an NSGroup for the BOSH CPI to use to dynamically update Kubernetes master node memberships:
{
    "name": "np-boot-nsgroups",
    "description": "Network Profile for Customer B",
    "parameters": {
        "master_vms_nsgroup_id": "9b8d535a-d3b6-4735-9fd0-56305c4a5293"
    }
}
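The master_vms_nsgroup_id value must match the UUID of the pre-provisioned NSGroup. The following sketch lists NSGroup UUIDs through the NSX-T Manager REST API; NSX-MANAGER-IP and the credentials are placeholders.

$ curl -ks -u 'admin:PASSWORD' \
    https://NSX-MANAGER-IP/api/v1/ns-groups \
    | jq -r '.results[] | "\(.id)  \(.display_name)"'   # UUID and name of each NSGroup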

Pod Subnet Prefix

Each time a Kubernetes namespace is created, a subnet is allocated from the pods IP block. By default, the subnet carved from this block for each namespace is a /24. For more information, see the Pods IP Block section of Planning, Preparing, and Configuring NSX-T for PKS.

You can define a Network Profile using the pod_subnet_prefix parameter to customize the size of the pod subnet reserved for namespaces. For example, the following network profile specifies /27 for the size of the pods IP block subnet:

{
    "name": "np-pod-prefix",
    "description": "Network Profile for Customizing Pod Subnet Size",
    "parameters": {
        "pod_subnet_prefix" : 27
    }
}
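For example, assuming a /16 Pods IP Block, the default /24 prefix yields 2^(24-16) = 256 namespace subnets with 256 addresses each, while a /27 prefix yields 2^(27-16) = 2,048 subnets with 32 addresses each. A larger prefix therefore supports more namespaces per cluster at the cost of fewer pods per namespace.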

Custom Floating IP Pool

To deploy PKS to vSphere with NSX-T, you must define a floating IP pool in NSX Manager. The IP addresses in this floating IP pool are assigned to load balancers automatically provisioned by NSX-T when you deploy a Kubernetes cluster using PKS. For more information, see the Plan Network CIDRs section of Planning, Preparing, and Configuring NSX-T for PKS.

You can define a network profile that specifies a custom floating IP pool to use instead of the default pool specified in the PKS tile.

To define a custom floating IP pool, follow the steps below:

  1. Create a floating IP pool using NSX Manager prior to provisioning a Kubernetes cluster using PKS. For more information, see Create IP Pool in the NSX-T documentation.
  2. Define a network profile that references the floating IP pool UUID that you defined. The following example defines a custom floating IP pool:
{
    "name": "np-custom-fip",
    "description": "Network Profile for Custom Floating IP Pool",
    "parameters": {
        "fip_pool_ids": [
            "e50e8f6e-1a7a-45dc-ad49-3a607baa7fa0",
            "ebe78a74-a5d5-4dde-ba76-9cf4067eee55"
        ]
    }
}

The example above uses two floating IP pools. With this configuration, if the first pool of IP addresses, e50e8f6e-1a7a-45dc-ad49-3a607baa7fa0, is exhausted, the system will use the IP addresses in the next IP pool that is listed, ebe78a74-a5d5-4dde-ba76-9cf4067eee55.
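The fip_pool_ids values must match Floating IP Pool UUIDs defined in NSX-T. The following sketch lists them through the NSX-T Manager REST API; NSX-MANAGER-IP and the credentials are placeholders.

$ curl -ks -u 'admin:PASSWORD' \
    https://NSX-MANAGER-IP/api/v1/pools/ip-pools \
    | jq -r '.results[] | "\(.id)  \(.display_name)"'   # UUID and name of each IP pool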

Edge Router Selection

Using PKS on vSphere with NSX-T, you can deploy Kubernetes clusters on dedicated Tier-0 routers, creating a multi-tenant environment for each Kubernetes cluster. As shown in the diagram below, with this configuration a shared Tier-0 router hosts the PKS control plane and connects to each customer Tier-0 router using BGP. To support multi-tenancy, configure firewall rules and security settings in NSX Manager.

[Figure: Cluster Isolation Using Multiple T0 Routers]

To deploy Kubernetes clusters on dedicated tenant Tier-0 routers, follow the steps below:

  1. For each Kubernetes tenant, create a dedicated Tier-0 router, and configure static routes, BGP, NAT and Edge Firewall security rules as required by each tenant. For instructions, see Configuring Multiple Tier-0 Routers for Tenant Isolation.
  2. Define a network profile per tenant that references the UUID of the Tier-0 router provisioned for that tenant. A sketch for looking up Tier-0 router UUIDs follows the examples below. For example, the following network profiles define two tenant Tier-0 routers with a NATed topology:

    np_customer_A-NAT.json
    {
      "description": "network profile for Customer A",
      "name": "network-profile-Customer-A",
      "parameters": {
        "lb_size": "medium",
        "t0_router_id": "82e766f7-67f1-45b2-8023-30e2725600ba",
        "fip_pool_ids": ["8ec655f-009a-79b7-ac22-40d37598c0ff"],
        "pod_ip_block_ids": ["fce766f7-aaf1-49b2-d023-90e272e600ba"]
      }
    }
    
    np_customer_B-NAT.json
    {
      "description": "network profile for Customer B",
      "name": "network-profile-Customer-B",
      "parameters": {
        "lb_size": "small",
        "t0_router_id": "a4e766cc-87ff-15bd-9052-a0e2425612b7",
        "fip_pool_ids": ["4ec625f-b09b-29b4-dc24-10d37598c0d1"],
        "pod_ip_block_ids": ["91e7a3a1-c5f1-4912-d023-90e272260090"]
      }
    }
    

    The following network profiles define two customer Tier-0 routers for a no-NAT topology:

    np_customer_A.json
    {
      "description": "network profile for Customer A",
      "name": "network-profile-Customer-A",
      "parameters": {
        "lb_size": "medium",
        "t0_router_id": "82e766f7-67f1-45b2-8023-30e2725600ba",
        "fip_pool_ids": [
            "8ec655f-009a-79b7-ac22-40d37598c0ff",
            "7ec625f-b09b-29b4-dc24-10d37598c0e0"
        ],
        "pod_routable": true,
        "pod_ip_block_ids": [
            "fce766f7-aaf1-49b2-d023-90e272e600ba",
            "6faf46fd-ccce-4332-92d2-d918adcccce0"
        ]
      }
    }
    
    np_customer_B.json
    {
      "description": "network profile for Customer B",
      "name": "network-profile-Customer-B",
      "parameters": {
        "lb_size": "small",
        "t0_router_id": "a4e766cc-87ff-15bd-9052-a0e2425612b7",
        "fip_pool_ids": [
            "4ec625f-b09b-29b4-dc24-10d37598c0d1",
            "6ec625f-b09b-29b4-dc24-10d37598dDd1"
        ],
        "pod_routable": true,
        "pod_ip_block_ids": [
            "91e7a3a1-c5f1-4912-d023-90e272260090",
            "6faf46fd-ccce-4332-92d2-d918adcccce0"
        ]
      }
    }
    

    Note: The pod_routable parameter controls the routing behavior of a tenant Tier-0 router. If the parameter is set to true, the custom Pods IP Block subnet is routable and NAT is not used. If pod_routable is not present or is set to false, the custom Pods IP Block is not routable and the tenant Tier-0 is deployed in NAT mode.
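To find the t0_router_id for each tenant, the following sketch lists Tier-0 routers through the NSX-T Manager REST API; NSX-MANAGER-IP and the credentials are placeholders.

$ curl -ks -u 'admin:PASSWORD' \
    'https://NSX-MANAGER-IP/api/v1/logical-routers?router_type=TIER0' \
    | jq -r '.results[] | "\(.id)  \(.display_name)"'   # UUID and name of each Tier-0 router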

DNS Configuration for Kubernetes Clusters

Using a network profile, you can define one or more DNS servers for use with Kubernetes clusters. Elements in the nodes_dns field of a network profile override the DNS server that is configured in the Networking section of the PKS with NSX-T tile. For more information, see Networking.

The nodes_dns field accepts an array of up to three elements. Each element must be a valid IP address of a DNS server. If you are deploying PKS in a multi-tenant environment with multiple Tier-0 routers, the first DNS server entered should be a shared DNS server; subsequent entries are typically specific to the tenant.

The following example network profile, nodes-dns.json, configures the nodes_dns parameter with three DNS servers. Each entry is the IP address of a DNS server, with the first entry being a public DNS server.

nodes-dns.json
{
    "description": "Overwrite Nodes DNS Entry",
    "name": "nodes_dns_multiple",
    "parameters": {
        "nodes_dns": [
            "8.8.8.8", "192.168.115.1", "192.168.116.1"
        ]
    }
}
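After you create a cluster with this profile, one way to confirm the override, assuming BOSH access to the PKS environment, is to inspect resolv.conf on a master node. The deployment name service-instance_UUID is a placeholder; find the actual name with bosh deployments, and note that ssh flags may vary by BOSH CLI version.

$ bosh -d service-instance_UUID ssh master/0 -c 'cat /etc/resolv.conf'   # nameserver lines should list the nodes_dns entries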
