Defining Network Profiles

This topic describes how to define network profiles for Kubernetes clusters provisioned with Enterprise Pivotal Container Service (Enterprise PKS) on vSphere with NSX-T.

About Network Profiles

Network profiles let you customize NSX-T configuration parameters at the time of cluster creation. Use cases for network profiles include the following:

  • Load Balancer Sizing: Customize the size of the NSX-T load balancer provisioned when a Kubernetes cluster is created.
  • Custom Pod Networks: Assign IP addresses from a dedicated IP block to pods in your Kubernetes cluster.
  • Routable Pod Networks: Assign routable IP addresses from a dedicated IP block to pods in your Kubernetes cluster.
  • Bootstrap Security Group for Kubernetes Master Nodes: Specify an NSX-T Namespace Group (NSGroup) to which Kubernetes master nodes are added during cluster creation.
  • Pod Subnet Prefix: Specify the size of the pod subnet.
  • Custom Floating IP: Specify a custom floating IP pool.
  • Edge Router Selection: Specify the NSX-T Tier-0 router to which Kubernetes node and pod networks are connected.
  • DNS Configuration for Kubernetes Clusters: Specify one or more DNS servers for Kubernetes clusters.
  • Configurable Nodes IP Block: Customize the IP addresses, subnet size, and routability of Kubernetes node networks.

Network Profile Format

Network profiles are defined using JSON. Here are example network profiles for two different customers:

np_customer_A.json
{
    "name": "np-cust-a",
    "description": "Network Profile for Customer A",
    "parameters": {
        "lb_size": "small",
        "t0_router_id": "5a7a82b2-37e2-4d73-9cb1-97a8329e1a90",
        "fip_pool_ids": [
            "e50e8f6e-1a7a-45dc-ad49-3a607baa7fa0"
        ],
        "pod_ip_block_ids": [
            "7056d707-acec-470e-88cf-66bb86fbf439"
        ],
        "master_vms_nsgroup_id": "9b8d535a-d3b6-4735-9fd0-56305c4a5293",
        "pod_subnet_prefix" : 27
    }
}
np_customer_B.json
{
    "name": "np-cust-b",
    "description": "Network Profile for Customer B",
    "parameters": {
        "lb_size": "medium",
        "t0_router_id": "5a7a82b2-37e2-4d73-9cb1-97a8329e1a92",
        "fip_pool_ids": [
            "e50e8f6e-1a7a-45dc-ad49-3a607baa7fa2"
        ],
        "pod_routable": true,
        "pod_ip_block_ids": [
            "ebe78a74-a5d5-4dde-ba76-9cf4067eee55",
            "ebe78a74-a5d5-4dde-ba76-9cf4067eee56"
        ],
        "master_vms_nsgroup_id": "9b8d535a-d3b6-4735-9fd0-56305c4a5292",
        "pod_subnet_prefix" : 26
    }
}

Network Profile Parameters

Define a network profile configuration in a JSON file using the following parameters:

  • name (String, Global): User-defined name of the network profile.
  • description (String, Global): User-defined description for the network profile.
  • parameters (Key-Value Pairs, Global): One or more name-value pairs.
  • lb_size (Keyword, Load Balancer): Size of the NSX-T load balancer deployed with the Kubernetes cluster. Accepts: small, medium, or large.
  • pod_ip_block_ids (Array, Pods Networks): Pod IP Block UUIDs as defined in NSX-T; comma-separated.
  • pod_routable (Boolean, Pods Networks): Set to true to assign routable IP addresses to pods. Accepts: true or false.
  • master_vms_nsgroup_id (UUID, Security Groups): NSGroup UUID as defined in NSX-T.
  • fip_pool_ids (Array, Floating IPs): Floating IP Pool UUIDs as defined in NSX-T; comma-separated.
  • pod_subnet_prefix (Integer, Pods Networks): Prefix size of the custom Pods IP Block subnet.
  • t0_router_id (UUID, Multi-T0): Tenant Tier-0 router UUID as defined in NSX-T.
  • nodes_dns (Array, Nodes DNS): IP addresses (up to three) of DNS servers for lookup by Kubernetes nodes and pods.
  • node_ip_block_ids (Array, Nodes Networks): Nodes IP Block UUIDs as defined in NSX-T; comma-separated.
  • node_subnet_prefix (Integer, Nodes Networks): Prefix size of the custom Nodes IP Block subnet.
  • node_routable (Boolean, Nodes Networks): Set to true to assign routable IP addresses to nodes. Accepts: true or false.
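A quick way to catch mistakes such as a misspelled key or an invalid lb_size before handing a profile to the PKS CLI is to validate the JSON locally. The sketch below checks a few of the constraints described in this topic; the validator itself is illustrative and is not part of Enterprise PKS.

```python
import ipaddress
import json

# Constraints taken from this document (illustrative, not an official schema).
ALLOWED_LB_SIZES = {"small", "medium", "large"}

def validate_profile(profile: dict) -> list:
    """Return a list of human-readable problems found in a network profile."""
    problems = []
    for key in ("name", "parameters"):
        if key not in profile:
            problems.append("missing required key: %s" % key)
    params = profile.get("parameters", {})
    lb_size = params.get("lb_size")
    if lb_size is not None and lb_size not in ALLOWED_LB_SIZES:
        problems.append("lb_size must be one of %s" % sorted(ALLOWED_LB_SIZES))
    for flag in ("pod_routable", "node_routable"):
        if flag in params and not isinstance(params[flag], bool):
            problems.append("%s must be a JSON boolean (true/false)" % flag)
    for prefix in ("pod_subnet_prefix", "node_subnet_prefix"):
        if prefix in params and not (isinstance(params[prefix], int)
                                     and 0 < params[prefix] < 32):
            problems.append("%s must be an integer between 1 and 31" % prefix)
    if "nodes_dns" in params:
        if len(params["nodes_dns"]) > 3:
            problems.append("nodes_dns accepts at most 3 entries")
        for entry in params["nodes_dns"]:
            try:
                ipaddress.ip_address(entry)
            except ValueError:
                problems.append("nodes_dns entry is not a valid IP: %r" % entry)
    return problems

profile = json.loads("""
{
    "name": "np-cust-a",
    "description": "Network Profile for Customer A",
    "parameters": {"lb_size": "small", "pod_subnet_prefix": 27}
}
""")
print(validate_profile(profile))  # an empty list means no problems found
```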

Network Profile Creation

After the network profile is defined in a JSON file, an Enterprise PKS administrator can create the network profile using the PKS CLI. The Kubernetes administrator can use the network profile when creating a cluster.

For more information, see the Create and Use Network Profiles section of Using Network Profiles (NSX-T Only).

Network Profile Use Cases

This section lists and describes network profile definitions for the supported use cases.

Load Balancer Sizing

When you deploy a Kubernetes cluster using Enterprise PKS on NSX-T, an NSX-T load balancer is automatically provisioned. By default, the size of this load balancer is small. Using a network profile, you can customize the size of the load balancer. For more information, see Load Balancers in Enterprise PKS Deployments on vSphere with NSX-T.

NSX-T load balancers run on edge nodes. There are various form factors for edge nodes. Enterprise PKS supports the large edge VM and the bare metal edge. The large VM edge node must run on Intel processors. The large load balancer requires a bare metal edge node. For more information about edge nodes, see Scaling Load Balancer Resources in the NSX-T documentation.

The NSX-T load balancer is a logical load balancer that handles a number of functions using virtual servers and pools. For more information, see Supported Load Balancer Features in the NSX-T documentation.

The following virtual servers are required for Enterprise PKS:

  • 1 TCP layer 4 virtual server for each Kubernetes service of type LoadBalancer
  • 2 HTTP and HTTPS layer 7 global virtual servers for Kubernetes ingress resources
  • 1 global virtual server for the PKS API
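As a rough illustration of how these requirements add up, the virtual servers a cluster consumes can be estimated from its number of LoadBalancer-type services. The helper below is only a sketch of the arithmetic implied by the list above; the single global PKS API virtual server is shared across clusters and excluded.

```python
def estimated_virtual_servers(num_lb_services: int) -> int:
    """Estimate NSX-T virtual servers consumed by one Kubernetes cluster.

    1 layer-4 virtual server per service of type LoadBalancer, plus
    2 layer-7 virtual servers (HTTP and HTTPS) for ingress resources.
    """
    return num_lb_services + 2

print(estimated_virtual_servers(5))  # 7
```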

The following network profile, np-lb-med, defines a medium load balancer:

{
    "name": "np-lb-med",
    "description": "Network profile for medium NSX-T load balancer",
    "parameters": {
        "lb_size": "medium"
    }
}

The following network profile, np-lb-large, defines a large load balancer:

{
    "name": "np-lb-large",
    "description": "Network profile for large NSX-T load balancer",
    "parameters": {
        "lb_size": "large"
     }
}

Note: The large load balancer requires a bare metal NSX Edge Node.

Custom Pod Networks

When you configure your NSX-T infrastructure for Enterprise PKS, you must create a Pods IP Block. For more information, see the Plan IP Blocks section of Planning, Preparing, and Configuring NSX-T for Enterprise PKS.

By default, this subnet is non-routable. When a Kubernetes cluster is deployed, each pod receives an IP address from the Pods IP Block you created. Because the pod IP addresses are non-routable, NSX-T creates a SNAT rule on the Tier-0 router to allow network egress from the pods. This configuration is shown in the diagram below:

Non-routable pod network with SNAT

You can use a network profile to override the global Pods IP Block that you specify in the Enterprise PKS tile with a custom IP block. To use a custom pods network, do the following after you deploy Enterprise PKS:

  1. Define a custom IP block in NSX-T. For more information, see Creating NSX-T Objects for Enterprise PKS.

  2. Define a network profile that references the custom pods IP block. For example, the following network profile defines non-routable pod addresses from two IP blocks:

{
    "description": "Network profile with two non-routable pod networks",
    "name": "non-routable-pod",
    "parameters": {
      "pod_ip_block_ids": [
        "ebe78a74-a5d5-4dde-ba76-9cf4067eee55",
        "ebe78a74-a5d5-4dde-ba76-9cf4067eee56"
      ]
    }
}

Pod Subnet Prefix

Each time a Kubernetes namespace is created, a subnet from the pods IP block is allocated. The default size of the subnet carved from this block for such purposes is /24. For more information, see the Pods IP Block section of Planning, Preparing, and Configuring NSX-T for Enterprise PKS.

You can define a Network Profile using the pod_subnet_prefix parameter to customize the size of the pod subnet reserved for namespaces. For example, the following network profile specifies /27 for the size of the pods IP block subnet:

{
    "name": "np-pod-prefix",
    "description": "Network Profile for Customizing Pod Subnet Size",
    "parameters": {
        "pod_subnet_prefix" : 27
    }
}

Note: The subnet size for a Pods IP Block must be consistent across all Network Profiles. Enterprise PKS does not support variable subnet sizes for a given IP Block.
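The trade-off behind pod_subnet_prefix can be worked out directly: each namespace consumes one subnet of the configured size from the Pods IP Block, so a larger prefix (smaller subnet) supports more namespaces per block but fewer pods per namespace. A sketch using Python's ipaddress module, assuming a hypothetical /16 Pods IP Block:

```python
import ipaddress

# Hypothetical /16 Pods IP Block; the CIDR is an assumption for illustration.
pods_ip_block = ipaddress.ip_network("172.16.0.0/16")

for prefix in (24, 27):
    subnets = list(pods_ip_block.subnets(new_prefix=prefix))
    print("/%d -> %d namespace subnets, %d addresses each"
          % (prefix, len(subnets), subnets[0].num_addresses))
# /24 -> 256 namespace subnets, 256 addresses each
# /27 -> 2048 namespace subnets, 32 addresses each
```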

Routable Pod Networks

Using a network profile, you can assign routable IP addresses from a dedicated routable IP block to pods in your Kubernetes cluster. When a cluster is deployed using that network profile, the routable IP block overrides the default non-routable IP block created for deploying Enterprise PKS, and each pod receives a routable IP address. Because the pod addresses are routable, the SNAT rule is not created. This configuration is shown in the diagram below.

Routable pod network using network profiles

To use routable pods, do the following after you deploy Enterprise PKS:

  1. Define a routable IP block in NSX-T. For more information, see Creating NSX-T Objects for Enterprise PKS.

  2. Define a network profile that references the routable IP block. For example, the following network profile defines routable pod addresses from two IP blocks:

{
    "description": "Network profile with small load balancer and two routable pod networks",
    "name": "small-routable-pod",
    "parameters": {
      "pod_routable": true,
      "pod_ip_block_ids": [
        "ebe78a74-a5d5-4dde-ba76-9cf4067eee55",
        "ebe78a74-a5d5-4dde-ba76-9cf4067eee56"
      ]
    }
}

Define Security Group Namespace

Most of the NSX-T virtual interface tags used by Enterprise PKS are added to the Kubernetes master node or nodes during the node initialization phase of cluster provisioning. To add tags to virtual interfaces, the Kubernetes master node needs to connect to the NSX-T Manager API. Network security rules provisioned prior to cluster creation time do not allow nodes to connect to NSX-T if the rules are based on a Namespace Group (NSGroup) managed by Enterprise PKS.

To address this bootstrap issue, Enterprise PKS exposes an optional configuration parameter in network profiles to systematically add Kubernetes master nodes to a pre-provisioned NSGroup. The BOSH vSphere cloud provider interface (CPI) can use the NSGroup to automatically manage members following the BOSH VM lifecycle for Kubernetes master nodes.

To define a Security Group namespace, complete the following steps:

  1. Create the NSGroup in NSX Manager prior to provisioning a Kubernetes cluster using Enterprise PKS. For more information, see Create an NSGroup in the NSX-T documentation.
  2. Define a network profile that references the NSGroup UUID that the BOSH CPI can use to bootstrap the master node or nodes. For example, the following network profile specifies an NSGroup for the BOSH CPI to use to dynamically update Kubernetes master node memberships:
{
    "name": "np-boot-nsgroups",
    "description": "Network Profile for Customer B",
    "parameters": {
        "master_vms_nsgroup_id": "9b8d535a-d3b6-4735-9fd0-56305c4a5293"
    }
}

Custom Floating IP Pool

To deploy Enterprise PKS to vSphere with NSX-T, you must define a Floating IP Pool in NSX Manager. IP addresses from the Floating IP Pool are used for SNAT IP addresses whenever a Namespace is created (NAT mode). In addition, IP addresses from the Floating IP Pool are assigned to load balancers automatically provisioned by NSX-T, including the load balancer fronting the PKS API server and load balancers for pod ingress. For more information, see the Plan Network CIDRs section of Planning, Preparing, and Configuring NSX-T for Enterprise PKS.

You can define a network profile that specifies a custom floating IP pool to use instead of the default pool specified in the Enterprise PKS tile.

To define a custom floating IP pool, follow the steps below:

  1. Create a floating IP pool using NSX Manager prior to provisioning a Kubernetes cluster using Enterprise PKS. For more information, see Create IP Pool in the NSX-T documentation.
  2. Define a network profile that references the floating IP pool UUID that you defined. The following example defines a custom floating IP pool:
{
    "name": "np-custom-fip",
    "description": "Network Profile for Custom Floating IP Pool",
    "parameters": {
        "fip_pool_ids": [
            "e50e8f6e-1a7a-45dc-ad49-3a607baa7fa0",
            "ebe78a74-a5d5-4dde-ba76-9cf4067eee55"
        ]
    }
}

The example above uses two floating IP pools. With this configuration, if the first pool of IP addresses, e50e8f6e-1a7a-45dc-ad49-3a607baa7fa0, is exhausted, the system will use the IP addresses in the next IP pool that is listed, ebe78a74-a5d5-4dde-ba76-9cf4067eee55.

Note: If you are using multiple Floating IP Pools within the same Tier-0 router, the Floating IP Pools cannot overlap. Overlapping Floating IP Pools are allowed across Tier-0 routers, but not within the same Tier-0 router.
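The non-overlap constraint is easy to check before creating the pools in NSX Manager. The sketch below uses Python's ipaddress module; the pool names and CIDR ranges are hypothetical, standing in for the ranges behind the pool UUIDs.

```python
import ipaddress
from itertools import combinations

# Hypothetical floating IP pool ranges attached to the same Tier-0 router.
pools = {
    "pool-a": ipaddress.ip_network("10.40.14.0/24"),
    "pool-b": ipaddress.ip_network("10.40.15.0/24"),
}

def overlapping_pairs(pools: dict) -> list:
    """Return pairs of pool names whose CIDR ranges overlap."""
    return [(a, b) for (a, b) in combinations(pools, 2)
            if pools[a].overlaps(pools[b])]

print(overlapping_pairs(pools))  # [] -> safe to use on one Tier-0 router
```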

Edge Router Selection

Using Enterprise PKS on vSphere with NSX-T, you can deploy Kubernetes clusters on dedicated Tier-0 routers, creating a multi-tenant environment for each Kubernetes cluster. As shown in the diagram below, with this configuration a shared Tier-0 router hosts the PKS control plane and connects to each customer Tier-0 router using BGP. To support multi-tenancy, configure firewall rules and security settings in NSX Manager.

Cluster Isolation Using Multiple T0 Routers

To deploy Kubernetes clusters on tenancy-based Tier-0 router(s), follow the steps below:

  1. For each Kubernetes tenant, create a dedicated Tier-0 router, and configure static routes, BGP, NAT and Edge Firewall security rules as required by each tenant. For instructions, see Configuring Multiple Tier-0 Routers for Tenant Isolation.
  2. Define a network profile per tenant that references the Tier-0 router UUID provisioned for that tenant. For example, the following network profiles define two tenant Tier-0 routers with a NATed topology.

    np_customer_A-NAT.json
    {
      "description": "network profile for Customer A",
      "name": "network-profile-Customer-A",
      "parameters": {
        "lb_size": "medium",
        "t0_router_id": "82e766f7-67f1-45b2-8023-30e2725600ba",
        "fip_pool_ids": ["8ec655f-009a-79b7-ac22-40d37598c0ff"],
        "pod_ip_block_ids": ["fce766f7-aaf1-49b2-d023-90e272e600ba"]
      }
    }
    
    np_customer_B-NAT.json
    {
      "description": "network profile for Customer B",
      "name": "network-profile-Customer-B",
      "parameters": {
        "lb_size": "small",
        "t0_router_id": "a4e766cc-87ff-15bd-9052-a0e2425612b7",
        "fip_pool_ids": ["4ec625f-b09b-29b4-dc24-10d37598c0d1"],
        "pod_ip_block_ids": ["91e7a3a1-c5f1-4912-d023-90e272260090"]
      }
    }
    

    The following network profiles define two customer Tier-0 routers for a no-NAT topology:

    np_customer_A.json
    {
      "description": "network profile for Customer A",
      "name": "network-profile-Customer-A",
      "parameters": {
        "lb_size": "medium",
        "t0_router_id": "82e766f7-67f1-45b2-8023-30e2725600ba",
        "fip_pool_ids": [
            "8ec655f-009a-79b7-ac22-40d37598c0ff",
            "7ec625f-b09b-29b4-dc24-10d37598c0e0"
        ],
        "pod_routable": true,
        "pod_ip_block_ids": [
            "fce766f7-aaf1-49b2-d023-90e272e600ba",
            "6faf46fd-ccce-4332-92d2-d918adcccce0"
        ]
      }
    }
    
    np_customer_B.json
    {
      "description": "network profile for Customer B",
      "name": "network-profile-Customer-B",
      "parameters": {
        "lb_size": "small",
        "t0_router_id": "a4e766cc-87ff-15bd-9052-a0e2425612b7",
        "fip_pool_ids": [
            "4ec625f-b09b-29b4-dc24-10d37598c0d1",
            "6ec625f-b09b-29b4-dc24-10d37598dDd1"
        ],
        "pod_routable": true,
        "pod_ip_block_ids": [
            "91e7a3a1-c5f1-4912-d023-90e272260090",
            "6faf46fd-ccce-4332-92d2-d918adcccce0"
        ]
      }
    }
    

    Note: The pod_routable parameter controls the routing behavior of a tenant Tier-0 router. If the parameter is set to true, the custom Pods IP Block subnet is routable and NAT is not used. If pod_routable is not present or is set to false, the custom Pods IP Block is not routable and the tenant Tier-0 is deployed in NAT mode.

DNS Configuration for Kubernetes Clusters

You can specify multiple DNS entries in a network profile to override the Nodes DNS parameter configured in the Enterprise PKS tile. In a multi-tenant environment, for example, each tenant can use a different set of DNS servers for DNS lookups.

Using a network profile, you can define one or more DNS servers for use with Kubernetes clusters. Elements in the nodes_dns field of a network profile override the DNS server that is configured in the Networking section of the Enterprise PKS tile. For more information, see Networking.

The nodes_dns field accepts an array of up to three elements. Each element must be a valid IP address of a DNS server. If you are deploying Enterprise PKS in a multi-tenant environment with multiple Tier-0 routers and a single PKS foundation (installation) shared across all tenants, or if you have shared services accessible to all Kubernetes clusters deployed across multiple Tier-0 routers, the first DNS server entry should be a shared DNS server. Subsequent DNS entries in the network profile can be specific to the tenant.

The following example network profile, nodes-dns.json, demonstrates the nodes_dns parameter configured with three DNS servers. Each entry is the IP address of a DNS server, with the first entry being a public DNS server.

nodes-dns.json
{
    "description": "Overwrite Nodes DNS Entry",
    "name": "nodes_dns_multiple",
    "parameters": {
        "nodes_dns": [
            "8.8.8.8", "192.168.115.1", "192.168.116.1"
        ]
    }
}
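Following the guidance above (a shared DNS server first, then tenant-specific servers, at most three entries total), the nodes_dns array could be assembled programmatically. The addresses below come from the example; the helper function itself is illustrative.

```python
import ipaddress
import json

def build_nodes_dns(shared: str, tenant_servers: list) -> list:
    """Order DNS servers shared-first and enforce the 3-entry limit."""
    entries = [shared] + list(tenant_servers)
    if len(entries) > 3:
        raise ValueError("nodes_dns accepts at most 3 entries")
    for entry in entries:
        ipaddress.ip_address(entry)  # raises ValueError on a bad address
    return entries

profile = {
    "description": "Overwrite Nodes DNS Entry",
    "name": "nodes_dns_multiple",
    "parameters": {
        "nodes_dns": build_nodes_dns("8.8.8.8",
                                     ["192.168.115.1", "192.168.116.1"])
    },
}
print(json.dumps(profile["parameters"]["nodes_dns"]))
# ["8.8.8.8", "192.168.115.1", "192.168.116.1"]
```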

Configurable Node Network IP Blocks

The Nodes IP Block is used by Enterprise PKS to assign address space to Kubernetes nodes when new clusters are deployed or a cluster increases its scale. By default each Kubernetes cluster deployed by Enterprise PKS is allocated a /24 subnet, which allows up to 256 IP addresses to be assigned.

Using a network profile you can define one or more custom Node IP Block networks, specify the size of the nodes subnet, and specify if the network is routable.

Using the node_ip_block_ids parameter in a network profile, you can specify one or more Nodes IP Blocks for the Kubernetes node networks, so that if one IP block is exhausted, an alternative IP block can be used by Kubernetes clusters to create the Nodes subnet.

Note: Specifying a new node subnet for an existing cluster is not supported. In other words, you cannot autoscale the node network for an existing cluster. For any new clusters created using a network profile with node_ip_block_ids configured, Enterprise PKS automatically creates a node subnet from one of the IP blocks that is available.

The node_routable boolean lets you specify if the Node network is routable or non-routable. This is the equivalent of enabling or disabling NAT mode in the PKS tile. If "node_routable":false, the Node network uses NAT mode. In this case you must make sure that Kubernetes nodes have access to BOSH and other PKS control plane components. See Creating the Enterprise PKS Management Plane for more information. If "node_routable":true, the IP address space must be an externally routable address block.

Note: The default routable setting for the Node network is determined based on the selection made in the PKS tile. If NAT mode is selected, the Node network is non-routable. To override the default selection, provide the node_routable parameter in the network profile.

Depending on the size of the cluster (number of Kubernetes nodes), you can specify a subnet size using the node_subnet_prefix parameter that optimizes the use of network address space. This configuration is especially useful when the cluster nodes use globally routable address space with the node_routable option set to true.

For example, if the Enterprise PKS administrator has configured the default Nodes IP Block in the PKS tile to be routable, the Kubernetes cluster administrator can deploy a Kubernetes cluster in NAT mode (non-routable) by specifying a network profile with an IP block that supports the NAT'ed address range.

Note: The default size of the Node network is /24. If you want to use a different size, you must specify the node_subnet_prefix size.
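The trade-off between node_subnet_prefix and cluster scale can be worked out directly from the prefix length: a /24 gives each cluster's node network 256 addresses, while a /20 gives 4,096. A minimal sketch of that arithmetic:

```python
def node_addresses(node_subnet_prefix: int) -> int:
    """Number of IPv4 addresses in a Nodes subnet of the given prefix size."""
    return 2 ** (32 - node_subnet_prefix)

for prefix in (24, 20):
    print("/%d -> %d addresses per cluster node network"
          % (prefix, node_addresses(prefix)))
# /24 -> 256 addresses per cluster node network
# /20 -> 4096 addresses per cluster node network
```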

nodes-network.json
{
    "description": "Configurable Nodes Network IP Block",
    "name": "network-profile_nodes-ip-block",
    "parameters": {
        "node_ip_block_ids": [
            "2250dc43-63c8-4bb8-b8cf-c6e12ccfb7de", "3d577e5c-dcaf-4921-9458-d12b0e1318e6"
        ],
        "node_routable": true,
        "node_subnet_prefix": 20
    }
}

Please send any feedback you have to pks-feedback@pivotal.io.