Pivotal Container Service v1.1

Installing and Configuring PKS with NSX-T Integration


This topic describes how to install and configure Pivotal Container Service (PKS) on vSphere with NSX-T integration.

Deployment Topologies

There are three supported topologies in which to deploy NSX-T with PKS. Except where noted, the instructions in this topic describe how to set up all three options.

NAT Topology

The following figure shows a Network Address Translation (NAT) deployment:

NAT Topology


This topology has the following characteristics:

  • PKS control plane (Ops Manager, BOSH Director, and PKS VM) components are all located on a logical switch that is NATed behind a T0 router.
  • Kubernetes cluster master and worker nodes are located on a logical switch that is NATed behind a T0 router. This topology requires DNAT rules to allow access to the Kubernetes APIs.

NO-NAT with Virtual Switch (VSS/VDS) Topology

The following figure shows a NO-NAT with Virtual Switch (VSS/VDS) deployment:

NO-NAT Topology with Virtual Switch


This topology has the following characteristics:

  • PKS control plane (Ops Manager, BOSH Director, and PKS VM) components are using corporate routable IP addresses.
  • Kubernetes cluster master and worker nodes are using corporate routable IP addresses.
  • The PKS control plane is deployed outside of the NSX-T network, and the Kubernetes clusters are deployed and managed within the NSX-T network. Because BOSH must have routable access to the Kubernetes nodes in order to monitor and manage them, the nodes use corporate routable IP addresses.

NO-NAT with Logical Switch (NSX-T) Topology

The following figure shows a NO-NAT with Logical Switch (NSX-T) deployment:

NO-NAT Topology with Logical Switch


This topology has the following characteristics:

  • PKS control plane (Ops Manager, BOSH Director, and PKS VM) components are using corporate routable IP addresses.
  • Kubernetes cluster master and worker nodes are using corporate routable IP addresses.
  • The PKS control plane is deployed inside of the NSX-T network. Both the PKS control plane components (VMs) and the Kubernetes Nodes use corporate routable IP addresses.

Before You Install

Follow these steps before performing the procedures in this topic:

Note: When using NSX-T 2.1, creating namespaces with names longer than 40 characters may result in a truncated or hashed name in the NSX-T Manager UI.

Step 1: Plan for Network Subnets and IP Blocks

Before you install PKS on NSX-T, you should plan for the CIDRs and IP blocks that you are using in your deployment.

Plan Network CIDRs

Plan for the following network CIDRs in the IPv4 address space according to the instructions in the VMware NSX-T documentation.

  • VTEP CIDR(s): One or more of these networks host your GENEVE Tunnel Endpoints on your NSX Transport Nodes. Size the networks to support all of your expected Host and Edge Transport Nodes. For example, a CIDR of 192.168.1.0/24 provides 254 usable IPs. This is used when creating the ip-pool-vteps in Step 3.
  • PKS MANAGEMENT CIDR: This small network is used to access PKS management components such as Ops Manager and the PKS Service VM. For example, a CIDR of 10.172.1.0/28 provides 14 usable IPs. For the NO-NAT deployment topologies, this is a corporate routable subnet /28. For the NAT deployment topology, this is a non-routable subnet /28, and DNAT needs to be configured in NSX-T to access the PKS management components.
  • PKS LB CIDR: This network provides your load balancing address space for each Kubernetes cluster created by PKS. The network also provides IP addresses for Kubernetes API access and Kubernetes exposed services. For example, a CIDR of 10.172.2.0/24 provides 254 usable IPs. This network is used when creating the ip-pool-vips described in Create NSX Network Objects, or when the services are deployed. You enter this network in the Floating IP Pool ID field in the Networking pane of the PKS tile.
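
For reference, the example ranges above can be captured as shell variables. These values are illustrative only; substitute the ranges you plan for your environment:

VTEP_CIDR="192.168.1.0/24"            # GENEVE Tunnel Endpoints (ip-pool-vteps)
PKS_MANAGEMENT_CIDR="10.172.1.0/28"   # Ops Manager, BOSH Director, and PKS VM
PKS_LB_CIDR="10.172.2.0/24"           # Load balancer VIPs (ip-pool-vips / Floating IP Pool)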

Refer to the instructions in the VMware NSX-T documentation to ensure that your network topology enables the following communications:

  • vCenter, NSX-T components, and ESXi hosts must be able to communicate with each other.
  • The Ops Manager Director VM must be able to communicate with vCenter and the NSX Manager.
  • The Ops Manager Director VM must be able to communicate with all nodes in all Kubernetes clusters.
  • Each Kubernetes cluster deployed by PKS deploys a NCP pod that must be able to communicate with the NSX Manager.

Plan IP Blocks

In addition, plan IP blocks for the pods and nodes that are created when PKS deploys a Kubernetes cluster. IP block sizes must be a multiple of 256 (/24). PKS carves /24 (256-address) subnets out of these blocks as clusters and namespaces are created, so make sure that the IP blocks you provide do not already have subnets allocated from them. You configure the Pods IP Block ID and Nodes IP Block ID in the Networking pane of the PKS tile.

Harbor uses the following IP blocks for its internal bridges:

  • 172.17.0.1/16
  • 172.18.0.1/16
  • 172.19.0.1/16
  • 172.20.0.1/16
  • 172.21.0.1/16
  • 172.22.0.1/16

Each cluster uses the following IP block for Kubernetes services:

  • 10.100.200.0/24

Note: Do not use any of the IP blocks listed above for pods or nodes. If you create Kubernetes clusters with any of the blocks listed above, the Kubernetes worker nodes cannot reach Harbor for the image pull.

Step 2: Deploy NSX-T

Deploy NSX-T according to the instructions in the VMware NSX-T documentation.

Note: In general, accept default settings unless instructed otherwise.

  1. Deploy the NSX Manager. For more information, see NSX Manager Installation in the VMware NSX-T documentation.
  2. Deploy NSX Controllers. For more information, see NSX Controller Installation and Clustering in the VMware NSX-T documentation.
  3. Join the NSX Controllers to the NSX Manager. For more information, see Join NSX Controllers with the NSX Manager in the VMware NSX-T documentation.
  4. Initialize the Control Cluster. For more information, see Initialize the Control Cluster to Create a Control Cluster Master in the VMware NSX-T documentation.
  5. Add your ESXi hosts to the NSX-T Fabric. For more information, see Add a Hypervisor Host to the NSX-T Fabric in the VMware NSX-T documentation. Each host must have at least one free nic/vmnic not already used by other vSwitches on the ESXi host for use with NSX Host Transport Nodes.
  6. Deploy NSX Edge VMs. We recommend at least two VMs. For more information, see NSX Edge Installation in the VMware NSX-T documentation. Each deployed NSX Edge VM requires free resources in your vSphere environment to provide 8 vCPU, 16 GB of RAM, and 120 GB of storage. When deploying, you must connect the vNICs of the NSX Edge VMs to an appropriate PortGroup for your environment by completing the following steps:
    1. Connect the first Edge interface to your environment’s PortGroup/VLAN where your Edge Management IP can route and communicate with the NSX Manager.
    2. Connect the second Edge interface to your environment’s PortGroup/VLAN where your GENEVE VTEPs can route and communicate with each other. Your VTEP CIDR should be routable to this PortGroup.
    3. Connect the third Edge interface to your environment’s PortGroup/VLAN where your T0 uplink interface is located.
    4. Join the NSX Edge VMs to the NSX-T Fabric. For more information, see Join NSX Edge with the Management Plane in the VMware NSX-T documentation.

Step 3: Create the NSX-T Objects Required for PKS

Create the NSX-T objects (network objects, logical switches, NSX Edge, and logical routers) needed for PKS deployment according to the instructions in the VMware NSX-T documentation.

3.1: Create NSX Network Objects

  1. Create two NSX IP pools. For more information, see Create an IP Pool for Tunnel Endpoint IP Addresses in the VMware NSX-T documentation. Configuration details for the NSX IP pools:
    • One NSX IP pool for GENEVE Tunnel Endpoints ip-pool-vteps, within the usable range of the VTEP CIDR created in Step 1, to be used with NSX Transport Nodes that you create later in this section
    • One NSX IP pool for NSX Load Balancing VIPs ip-pool-vips, within the usable range of the PKS LB CIDR created in Step 1, to be used with the T0 Logical Router that you create later in this section
  2. Create two NSX Transport Zones (TZs). For more information, see Create Transport Zones in the VMware NSX-T documentation. Configuration details for the NSX TZs:
    • One NSX TZ for PKS control plane Services and Kubernetes Cluster deployment overlay networks named tz-overlay and the associated N-VDS hs-overlay. Select Standard.
    • One NSX TZ for NSX Edge uplinks (ingress/egress) for PKS Kubernetes clusters named tz-vlan and the associated N-VDS hs-vlan. Select Standard.
  3. If the default uplink profile is not applicable in your deployment, create your own NSX uplink host profile. For more information, see Create an Uplink Profile in the VMware NSX-T documentation.
  4. Create NSX Host Transport Nodes. For more information, see Create a Host Transport Node in the VMware NSX-T documentation. Configuration details:
    • For each host in the NSX-T Fabric, create a node named tnode-host-NUMBER. For example, if you have three hosts in the NSX-T Fabric, create three nodes named tnode-host-1, tnode-host-2, and tnode-host-3.
    • Add the tz-overlay NSX Transport Zone to each NSX Host Transport Node.

      Note: The Transport Nodes must be placed on free host NICs not already used by other vSwitches on the ESXi host. Use the ip-pool-vteps IP pool that allows them to route and communicate with each other, as well as other Edge Transport Nodes, to build GENEVE tunnels.

  5. Create NSX IP Blocks. We recommend that you use separate NSX IP Blocks for the node networks and the pod networks. The subnets (both nodes and pods) should have a size of 256 (/24). For more information, see Manage IP Blocks in the VMware NSX-T documentation. Configuration details:
    • One NSX IP Block named node-network-ip-block. PKS uses this block to assign address space to Kubernetes master and worker nodes when new clusters are deployed or a cluster increases its scale.
    • One NSX IP Block named pod-network-ip-block. The NSX-T Container Plug-in (NCP) uses this block to assign address space to Kubernetes pods through the Container Networking Interface (CNI).
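
If you prefer the REST API to the NSX Manager UI, the following sketch creates the two IP blocks. The endpoint and fields follow the NSX-T 2.x ip-blocks API, and the CIDR values are examples only; verify both against your NSX-T version before using them:

curl -k -u admin:'admin_pw' -X POST \
  https://NSX-Manager-IP-Address/api/v1/pools/ip-blocks \
  -H 'content-type: application/json' \
  -d '{ "display_name": "node-network-ip-block", "cidr": "172.15.0.0/16" }'

curl -k -u admin:'admin_pw' -X POST \
  https://NSX-Manager-IP-Address/api/v1/pools/ip-blocks \
  -H 'content-type: application/json' \
  -d '{ "display_name": "pod-network-ip-block", "cidr": "172.16.0.0/16" }'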

3.2: Create Logical Switches

  1. Create the following NSX Logical Switches. For more information, see Create a Logical Switch in the VMware NSX-T documentation. Configuration details for the Logical Switches:
    • ls-pks-uplink, used as the uplink for the T0 logical router
    • ls-pks-mgmt, used for the PKS Management Network
    • ls-pks-service, used for the PKS Service Network
  2. Attach the first NSX Logical Switch, ls-pks-uplink, to the tz-vlan NSX Transport Zone.
  3. Attach the second and third NSX Logical Switches, ls-pks-mgmt and ls-pks-service, to the tz-overlay NSX Transport Zone.

3.3: Create NSX Edge Objects

  1. Create NSX Edge Transport Nodes. For more information, see Create an NSX Edge Transport Node in the VMware NSX-T documentation.
  2. Add both tz-vlan and tz-overlay NSX Transport Zones to the NSX Edge Transport Nodes. Controller Connectivity and Manager Connectivity should be UP.
  3. Refer to the MAC addresses of the Edge VM interfaces you deployed to map the N-VDS switches to the correct vNICs:
    1. Connect the hs-overlay N-VDS to the vNIC (fp-eth#) that matches the MAC address of the second NIC from your deployed Edge VM.
    2. Connect the hs-vlan N-VDS to the vNIC (fp-eth#) that matches the MAC address of the third NIC from your deployed Edge VM.
  4. Create an NSX Edge cluster named edge-cluster-pks. For more information, see Create an NSX Edge Cluster in the VMware NSX-T documentation.
  5. Add the NSX Edge Transport Nodes to the cluster.

3.4: Create Logical Routers

Create T0 Logical Router for PKS

T0 routers are edge routers that route data between non-NSX-T networks (such as a physical network) and the NSX-T network. PKS currently supports only a single T0 router per instance.

  1. Create a Tier-0 (T0) logical router named t0-pks. For more information, see Create a Tier-0 Logical Router in the VMware NSX-T documentation. Configuration details:

    • Select edge-cluster-pks for the cluster.
    • Set High Availability Mode to Active-Standby. NAT rules are applied on the T0 by NCP. If the router is not set to Active-Standby, it does not support NAT rule configuration.
  2. Attach the T0 logical router to the ls-pks-uplink logical switch you created previously. For more information, see Connect a Tier-0 Logical Router to a VLAN Logical Switch in the VMware NSX-T documentation. Create a logical router port for ls-pks-uplink and assign an IP address and CIDR that your environment uses to route to all PKS assigned IP pools and IP blocks.

  3. Configure T0 routing to the rest of your environment using the appropriate routing protocol for your environment or by using static routes. For more information, see Tier-0 Logical Router in the VMware NSX-T documentation. The CIDR used in ip-pool-vips must route to the IP you just assigned to your t0 uplink interface.
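
For example, if you use static routes, the return route on a Linux-based router might look like the following. This is illustrative only; use the equivalent configuration on your routing platform, and in NO-NAT topologies add similar routes for the node and pod IP blocks:

# 10.172.2.0/24 is the example PKS LB CIDR behind ip-pool-vips;
# T0-UPLINK-IP is the uplink address assigned in the previous step.
ip route add 10.172.2.0/24 via T0-UPLINK-IP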

(Optional) Configure NSX Edge for High Availability (HA)

You can configure NSX Edge for high availability (HA) using Active/Standby mode to support failover, as shown in the following figure.

NSX Edge High Availability

To configure NSX Edge for HA, complete the following steps:

Note: All IP addresses must belong to the same subnet.

Step 1: On the T0 router, create a second uplink attached to the second Edge transport node:

Setting           First Uplink       Second Uplink
IP Address/Mask   uplink_1_ip        uplink_2_ip
URPF Mode         None (optional)    None (optional)
Transport Node    edge-TN1           edge-TN2
Logical Switch    uplink-LS1         uplink-LS1

Step 2: On the T0 router, create the HA VIP:

Setting        HA VIP
VIP address    [ha_vip_ip]
Uplink ports   uplink-1 and uplink-2

The HA VIP becomes the official IP for the T0 router uplink. External router devices peering with the T0 router must use this IP address.

Step 3: On the physical router, configure the next hop to point to the HA VIP address.

Step 4: You can verify your setup by running the following commands:

nsx-edge-n> get high-availability channels
nsx-edge-n> get high-availability channels stats
nsx-edge-n> get logical-router
nsx-edge-n> get logical-router ROUTER-UUID high-availability status

Create T1 Logical Router for PKS Management VMs

  1. Create a Tier-1 (T1) logical router for PKS management VMs named t1-pks-mgmt. For more information, see Create a Tier-1 Logical Router in the VMware NSX-T documentation. Configuration details:
    • Link to the t0-pks logical router you created in a previous step.
    • Select edge-cluster-pks for the cluster.

      Note: This logical router is required for the NAT and NO-NAT with Logical Switch deployment topologies. Skip this step if you are deploying the NO-NAT with Virtual Switch topology.

  2. Create a logical router port for ls-pks-mgmt and assign the following CIDR block: 10.172.1.0/28. For more information, see Connect a Tier-1 Logical Router to a Logical Switch in the VMware NSX-T documentation.
  3. Configure route advertisement on the T1 as follows. For more information, see Configure Route Advertisement on a Tier-1 Logical Router in the VMware NSX-T documentation. Configuration details:
    • Enable Status.
    • Enable Advertise All NSX Connected Routes.
    • Enable Advertise All NAT Routes.
    • Enable Advertise All LB VIP Routes.

Configure NAT Rules for PKS Management VMs

Note: This step applies to the NAT deployment topology only. Skip this step for NO-NAT deployment topologies.

Create the following NAT rules for the Mgmt T0. For more information, see Tier-0 NAT in the VMware NSX-T documentation. Configuration details:

Type    For
DNAT    External -> Ops Manager
DNAT    External -> Pivotal Container Service
SNAT    Ops Manager & BOSH Director -> DNS
SNAT    Ops Manager & BOSH Director -> NTP
SNAT    Ops Manager & BOSH Director -> vCenter
SNAT    Ops Manager & BOSH Director -> ESXi
SNAT    Ops Manager & BOSH Director -> NSX-T Manager

The Destination NAT (DNAT) rule on the T0 maps an external IP address from the PKS MANAGEMENT CIDR to the IP where you deploy Ops Manager on the ls-pks-mgmt logical switch. For example, create a DNAT rule that maps 10.172.1.2 to 172.31.0.2, where 172.31.0.2 is the IP address you assign to Ops Manager when it is connected to ls-pks-mgmt. Later, you create another DNAT rule to map an external IP address from the PKS MANAGEMENT CIDR to the PKS endpoint.

The Source NAT (SNAT) rule on the T0 allows the PKS Management VMs to communicate with your vCenter and NSX Manager environments. For example, create an SNAT rule that maps 172.31.0.0/24 to 10.172.1.1, where 10.172.1.1 is a routable IP address from your PKS MANAGEMENT CIDR. For more information, see Configure Source NAT on a Tier-1 Router in the VMware NSX-T documentation.

Note: Ops Manager and BOSH must use the NFCP protocol to communicate with the ESXi hosts to which they upload stemcells. Specifically, Ops Manager & BOSH Director -> ESXi.

Note: Limit the Destination CIDR for the SNAT rules to the subnets that contain your vCenter and NSX Manager IP addresses.
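
If you script these rules instead of using the NSX Manager UI, a sketch of the two example rules above might look like the following. The endpoint and field names follow the NSX-T 2.x NAT rule API, and T0-ROUTER-ID is the UUID of t0-pks; verify the request format against your NSX-T version:

# DNAT: external 10.172.1.2 -> Ops Manager at 172.31.0.2 on ls-pks-mgmt
curl -k -u admin:'admin_pw' -X POST \
  https://NSX-Manager-IP-Address/api/v1/logical-routers/T0-ROUTER-ID/nat/rules \
  -H 'content-type: application/json' \
  -d '{ "action": "DNAT", "match_destination_network": "10.172.1.2", "translated_network": "172.31.0.2", "enabled": true }'

# SNAT: management network 172.31.0.0/24 -> routable 10.172.1.1
curl -k -u admin:'admin_pw' -X POST \
  https://NSX-Manager-IP-Address/api/v1/logical-routers/T0-ROUTER-ID/nat/rules \
  -H 'content-type: application/json' \
  -d '{ "action": "SNAT", "match_source_network": "172.31.0.0/24", "translated_network": "10.172.1.1", "enabled": true }'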

Step 4: Deploy Ops Manager

Complete the procedures in Deploying Ops Manager to vSphere.

Step 5: Configure Ops Manager

Perform the following steps to configure Ops Manager for the NSX logical switches:

  1. Complete the procedures in Configuring Ops Manager on vSphere.

    Note: If you have Pivotal Application Service (PAS) installed, we recommend installing PKS on a separate instance of Ops Manager v2.1.

    • On the vCenter Config pane, select NSX Networking - NSX-T. This configuration is used for PAS and PKS. For more information, see the Enable NSX-T Mode in the BOSH Director section of Deploying PAS with NSX-T Networking in the PCF documentation.

      Note: If you are using the NAT deployment topology, you must have already deployed Ops Manager to the ls-pks-mgmt NSX logical switch by following the instructions above in Create T1 Logical Router for PKS Management VMs. You will use the DNAT IP address to access Ops Manager.

    • On the Create Networks pane, create the following network:
      Field Configuration
      Name pks-infrastructure
      vSphere Network Name MY-PKS-virt-net/MY-PKS-subnet-infrastructure
      Description A network for deploying the PKS control plane VMs that maps to the NSX logical switch named ls-pks-mgmt created for the PKS Management Network in Step 3: Create the NSX-T Objects Required for PKS.
  2. Return to the Ops Manager Installation Dashboard and click Apply Changes.

Step 6: Generate and Register Certificates

Before you install PKS on NSX-T, you must create two certificates that you will provide in the Networking pane in the PKS tile. For more information, see Networking.

6.1: Generate the NSX Manager Super User Principal Identity Certificate

This certificate represents a principal identity with superuser permissions that the PKS VM will use to communicate with NSX-T to manage (create, delete, and modify) node networking resources. During PKS installation on NSX-T, you will need to provide this in the NSX Manager Super User Principal Identity Certificate field on the Networking pane in the PKS tile. You can complete the following steps from the Ops Manager VM or from any other Linux VM. This procedure does not work on Mac OS.

Before You Begin

Export the following environment variables to complete the steps below:

NSX_MANAGER="<NSX_MANAGER_IP>"
NSX_USER="<NSX_MANAGER_USERNAME>"
NSX_PASSWORD='<NSX_MANAGER_PASSWORD>'
PI_NAME="pks-nsx-t-superuser" 
NSX_SUPERUSER_CERT_FILE="pks-nsx-t-superuser.crt"
NSX_SUPERUSER_KEY_FILE="pks-nsx-t-superuser.key"
NODE_ID=$(cat /proc/sys/kernel/random/uuid)

Step 6.1.1: Create the Super User Principal Identity Certificate

Create the Super User Principal Identity Certificate using a script or by clicking Generate RSA Certificate on the Networking tab in the PKS tile. For more information, see Networking.

Create Certificate Using a Script

To create the certificate using a script, run the following command:

$ openssl req \
-newkey rsa:2048 \
-x509 \
-nodes \
-keyout "$NSX_SUPERUSER_KEY_FILE" \
-new \
-out "$NSX_SUPERUSER_CERT_FILE" \
-subj /CN=pks-nsx-t-superuser \
-extensions client_server_ssl \
-config <(
cat /etc/ssl/openssl.cnf \
<(printf '[client_server_ssl]\nextendedKeyUsage = clientAuth\n')
) \
-sha256 \
-days 730
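
Optionally, inspect the generated certificate before registering it to confirm that the subject is pks-nsx-t-superuser and that the extendedKeyUsage includes clientAuth (a quick sanity check only):

openssl x509 -in "$NSX_SUPERUSER_CERT_FILE" -noout -text
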
Create Certificate from the Networking Tab

To create the certificate from the Networking tab in the PKS tile, follow the steps below.

  1. Navigate to the Networking tab in the PKS tile. For more information, see Networking.
  2. Click Generate RSA Certificate and provide a wildcard domain, for example, *.nsx.pks.vmware.local.
  3. In the Ops Manager / Linux VM where the subsequent scripts will run, create a file named pks-nsx-t-superuser.crt and copy the generated certificate into it.
  4. In the Ops Manager / Linux VM where the subsequent scripts will run, create a file named pks-nsx-t-superuser.key and copy the private key into it.

Step 6.1.2: Register the Certificate

To register the certificate with NSX Manager, run the following commands:

cert_request=$(cat <<END
  {
    "display_name": "$PI_NAME",
    "pem_encoded": "$(awk '{printf "%s\\n", $0}' $NSX_SUPERUSER_CERT_FILE)"
  }
END
)
curl -k -X POST \
"https://${NSX_MANAGER}/api/v1/trust-management/certificates?action=import" \
-u "$NSX_USER:$NSX_PASSWORD" \
-H 'content-type: application/json' \
-d "$cert_request"

The response includes a certificate ID value. You need this value, referenced below as CERTIFICATE_ID, to register the principal identity in the next step.
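
For example, copy the id field from the response and export it. Alternatively, if jq is installed, you can pipe the import command above through jq -r '.results[0].id'; the .results[0].id path is an assumption about the import response format, so adjust it if your response differs:

export CERTIFICATE_ID="CERTIFICATE-ID-FROM-RESPONSE"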

Step 6.1.3: Register the Principal Identity

To register the principal identity with NSX Manager, run the following commands:

pi_request=$(cat <<END
  {
    "display_name": "$PI_NAME",
    "name": "$PI_NAME",
    "permission_group": "superusers",
    "certificate_id": "$CERTIFICATE_ID",
    "node_id": "$NODE_ID"
  }
END
)
curl -k -X POST \
  "https://${NSX_MANAGER}/api/v1/trust-management/principal-identities" \
  -u "$NSX_USER:$NSX_PASSWORD" \
  -H 'content-type: application/json' \
  -d "$pi_request"

Step 6.1.4: Verify the Certificate and Key

To verify that the certificate and key can be used with NSX-T, run the following command:

curl -k -X GET \
"https://${NSX_MANAGER}/api/v1/trust-management/principal-identities" \
--cert $(pwd)/"$NSX_SUPERUSER_CERT_FILE" \
--key $(pwd)/"$NSX_SUPERUSER_KEY_FILE"

Later, when you install PKS on NSX-T, you will copy and paste the contents of the pks-nsx-t-superuser.crt and pks-nsx-t-superuser.key into the NSX Manager Super User Principal Identity Certificate field on the Networking pane in the PKS tile.

6.2: Generate the NSX Manager CA Certificate

This certificate is used to authenticate with the NSX Manager. You create an IP-based, self-signed certificate and register it with NSX Manager. During PKS installation on NSX-T, you will need to provide this certificate in the NSX Manager CA Cert field on the Networking Tab in the PKS tile.

Step 6.2.1: Generate a Self-signed Certificate

Note: If you already have a CA-signed certificate, skip this section and go to 6.2.2.

  1. Create a file for the certificate request parameters named nsx-cert.cnf.

  2. Copy the following parameters and paste them into the file, replacing NSX-MANAGER-IP-ADDRESS with the IP address of your NSX Manager, and NSX-MANAGER-COMMONNAME with the FQDN of the NSX Manager host:

    [ req ]
    default_bits = 2048
    distinguished_name = req_distinguished_name
    req_extensions = req_ext
    prompt = no
    [ req_distinguished_name ]
    countryName = US
    stateOrProvinceName = California
    localityName = CA
    organizationName = NSX
    commonName = NSX-MANAGER-IP-ADDRESS
    [ req_ext ]
    subjectAltName = @alt_names
    [alt_names]
    DNS.1 = NSX-MANAGER-COMMONNAME
    IP.1 = NSX-MANAGER-IP-ADDRESS
    

    For example:

    [ req ]
    default_bits = 2048
    distinguished_name = req_distinguished_name
    req_extensions = req_ext
    prompt = no
    [ req_distinguished_name ]
    countryName = US
    stateOrProvinceName = California
    localityName = Palo-Alto
    organizationName = NSX
    commonName = nsxmgr-01a.example.com
    [ req_ext ]
    subjectAltName=DNS:nsxmgr-01a.example.com,IP:192.0.2.40
    
  3. Export the NSX_MANAGER_IP_ADDRESS and NSX_MANAGER_COMMONNAME environment variables using the IP address of your NSX Manager and the FQDN of the NSX Manager host.

    For example:

    $ export NSX_MANAGER_IP_ADDRESS=192.0.2.40
    $ export NSX_MANAGER_COMMONNAME=nsxmgr-01a.example.com
    

  4. Generate the certificate using openssl. Run the following command:

    $ openssl req -newkey rsa:2048 -x509 -nodes \
    -keyout nsx.key -new -out nsx.crt -subj /CN=$NSX_MANAGER_COMMONNAME \
    -reqexts SAN -extensions SAN -config <(cat ./nsx-cert.cnf \
     <(printf "[SAN]\nsubjectAltName=DNS:$NSX_MANAGER_COMMONNAME,IP:$NSX_MANAGER_IP_ADDRESS")) -sha256 -days 365
    

  5. Verify that the certificate looks correct and that the NSX manager IP is in the Subject Alternative Name (SAN) by running the following command:

    $ openssl x509 -in nsx.crt -text -noout
    

Step 6.2.2: Register the Certificate with NSX Manager

  1. Log into the NSX Manager UI.
  2. Import the certificate by copying nsx.crt and nsx.key. For instructions, see Import a CA Certificate in the NSX-T documentation.
  3. Get the ID of the certificate. Run the following command, replacing CERTIFICATE-NAME with the certificate name:

    curl --insecure -u admin:'admin_pw' -X GET \
    https://NSX-Manager-IP-Address/api/v1/trust-management/certificates \
    | jq -r '.results[] | select(.display_name=="CERTIFICATE-NAME") | .id'
    
  4. Register the certificate with NSX Manager, replacing CERTIFICATE-ID with the certificate ID:

    curl --insecure -u admin:'admin_pw' -X POST \
    'https://NSX-Manager-IP-Address/api/v1/node/services/http?action=apply_certificate&certificate_id=CERTIFICATE-ID'
    

Later, when you install PKS on NSX-T, you will copy and paste the contents of the nsx.crt certificate into the NSX Manager CA Cert field on the Networking pane in the PKS tile.
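
To confirm that NSX Manager is serving the newly applied certificate, you can inspect the certificate presented on port 443, for example:

echo | openssl s_client -connect NSX-Manager-IP-Address:443 2>/dev/null | openssl x509 -noout -subject -dates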

Step 7: Install and Configure PKS

Perform the following steps to install and configure PKS:

  1. Install the PKS tile. For more information, see Installing and Configuring PKS.
  2. Click the orange Pivotal Container Service tile to start the configuration process.

    Note: Configuration of NSX-T or Flannel cannot be changed after initial installation and configuration of PKS.

    Pivotal Container Service tile on the Ops Manager installation dashboard

Assign AZs and Networks

Perform the following steps:

  1. Click Assign AZs and Networks.
  2. Select the availability zone (AZ) where you want to deploy the PKS API VM as a singleton job.

    Note: You must select an additional AZ for balancing other jobs before clicking Save, but this selection has no effect in the current version of PKS.

    Assign AZs and Networks pane in Ops Manager
  3. Under Network, select the PKS Management Network linked to the ls-pks-mgmt NSX logical switch you created in Step 5: Configure Ops Manager. This will provide network placement for the PKS API VM.
  4. Under Service Network, your selection depends on whether you are upgrading from a previous PKS version or installing an original PKS deployment.
    • If you are upgrading from a previous PKS version, select the PKS Service Network linked to the ls-pks-service NSX logical switch you created in Step 5: Configure Ops Manager. This will provide network placement for the on-demand Kubernetes cluster service instances created by the PKS broker.
    • If you are deploying PKS on vSphere with NSX-T for the first time, the Service Network field does not apply to PKS deployments. However, the tile requires you to make a selection. Therefore, select any network that has been configured in the Ops Manager Network configuration.
  5. Click Save.

PKS API

Perform the procedure in the PKS API section of Installing and Configuring PKS.

Plans

Perform the procedures in the Plans section of Installing and Configuring PKS.

Kubernetes Cloud Provider

Perform the procedures in the Kubernetes Cloud Provider section of Installing and Configuring PKS.

(Optional) Logging

Perform the procedures in the Logging section of Installing and Configuring PKS.

Networking

Perform the following steps:

  1. Click Networking.
  2. Under Container Networking Interface, select NSX-T.

    NSX-T Networking configuration pane in Ops Manager
  3. For NSX Manager hostname, enter the hostname or IP address of your NSX Manager.
  4. For NSX Manager Super User Principal Identity Certificate, copy and paste the contents and private key of the Principal Identity certificate you created in Step 6.1: Generate the NSX Manager Super User Principal Identity Certificate. You can also create the certificate in this tab by clicking Generate RSA Certificate, providing a wildcard domain (for example, *.nsx.pks.vmware.local), and copying the generated certificate and key into the pks-nsx-t-superuser.crt and pks-nsx-t-superuser.key files. For more information, including instructions for completing the additional required registration and verification steps, see Step 6.1: Generate the NSX Manager Super User Principal Identity Certificate.
  5. (Optional) For NSX Manager CA Cert, copy and paste the contents of the NSX Manager CA certificate you created in Step 6: Generate and Register Certificates. This will be used to connect to the NSX Manager.
  6. The Disable SSL certificate verification checkbox is not selected by default. In order to disable TLS verification, select the checkbox. You may want to disable TLS verification if you did not enter a CA certificate, or if your CA certificate is self-signed.
  7. If you are using a NAT deployment topology, leave the NAT mode checkbox selected. If you are using a NO-NAT topology, clear this checkbox. For more information, see the Deployment Topologies section above.
  8. Enter the following IP Block settings:

    NSX-T Networking configuration pane in Ops Manager

    • Pods IP Block ID: Enter the UUID of the IP block to be used for Kubernetes pods. PKS allocates IP addresses for the pods when they are created in Kubernetes. Each time a namespace is created in Kubernetes, a subnet from this IP block is allocated. The current subnet size that is created is /24, which means a maximum of 256 pods can be created per namespace.
    • Nodes IP Block ID: Enter the UUID of the IP block to be used for Kubernetes nodes. PKS allocates IP addresses for the nodes when they are created in Kubernetes. The node networks are created on a separate IP address space from the pod networks. The current subnet size that is created is /24, which means a maximum of 256 nodes can be created per cluster. For more information, including sizes and the IP blocks to avoid using, see Plan IP Blocks.
  9. For T0 Router ID, enter the t0-pks T0 router UUID. Locate this value in the NSX-T UI router overview, or look it up through the NSX API as shown in the sketch after this list.

  10. For Floating IP Pool ID, enter the ip-pool-vips ID that you created for load balancer VIPs. For more information, see Plan IP Blocks. PKS uses the floating IP pool to allocate IP addresses to the load balancers created for each of the clusters. The load balancer routes the API requests to the master nodes and the data plane.

  11. For Nodes DNS, enter one or more Domain Name Servers used by the Kubernetes nodes.

  12. For vSphere Cluster Names, enter the name of the vSphere cluster that corresponds to the AZ where you deployed the PKS control plane VM.

  13. (Optional) Configure a global proxy for all outgoing HTTP and HTTPS traffic from your Kubernetes clusters.

    Production environments can deny direct access to public Internet services and between internal services by placing an HTTP or HTTPS proxy in the network path between Kubernetes nodes and those services.

    If your environment includes HTTP or HTTPS proxies, configuring PKS to use these proxies allows PKS-deployed Kubernetes nodes to access public Internet services and other internal services. Follow the steps below to configure a global proxy for all outgoing HTTP/HTTPS traffic from your Kubernetes clusters:

    1. Under HTTP/HTTPS proxy, select Enabled.

      Networking pane configuration

    2. Under HTTP Proxy URL, enter the URL of your HTTP/HTTPS proxy endpoint. For example, http://myproxy.com:1234.

    3. (Optional) If your proxy uses basic authentication, enter the username and password in either HTTP Proxy Credentials or HTTPS Proxy Credentials.

    4. Under No Proxy, enter the service network CIDR where your PKS cluster is deployed. List any additional IP addresses that should bypass the proxy.

      Note: By default, the .internal, 10.100.0.0/8, and 10.200.0.0/8 IP address ranges are not proxied. This allows internal PKS communication.

  14. Click Save.
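
The IDs referenced above (the Pods and Nodes IP Block IDs, the t0-pks router UUID, and the ip-pool-vips pool ID) can be copied from the NSX Manager UI, or looked up through the NSX REST API. The following sketch assumes the NSX_MANAGER, NSX_USER, and NSX_PASSWORD variables from Step 6 are still set and that jq is installed; the endpoints follow the NSX-T 2.x API, so verify them against your version:

# Pods and Nodes IP Block IDs
curl -k -u "$NSX_USER:$NSX_PASSWORD" "https://${NSX_MANAGER}/api/v1/pools/ip-blocks" \
  | jq -r '.results[] | "\(.display_name)  \(.id)"'

# T0 Router ID (t0-pks)
curl -k -u "$NSX_USER:$NSX_PASSWORD" "https://${NSX_MANAGER}/api/v1/logical-routers" \
  | jq -r '.results[] | select(.display_name=="t0-pks") | .id'

# Floating IP Pool ID (ip-pool-vips)
curl -k -u "$NSX_USER:$NSX_PASSWORD" "https://${NSX_MANAGER}/api/v1/pools/ip-pools" \
  | jq -r '.results[] | select(.display_name=="ip-pool-vips") | .id'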

UAA

Perform the procedures in the UAA section of Installing and Configuring PKS.

(Optional) Monitoring

Perform the procedures in the Monitoring section of Installing and Configuring PKS.

Errands

Errands are scripts that run at designated points during an installation.

WARNING: You must enable the NSX-T Validation errand in order to verify and tag required NSX-T objects.

Perform the following steps:

  1. Click Errands.

    Errand configuration pane
  2. For Post Deploy Errands, select ON for the NSX-T Validation errand. This errand validates your NSX-T configuration and tags the proper resources.
  3. Click Save.

(Optional) Resource Config and Stemcell

To modify the resource usage or stemcell configuration of PKS, see the Resource Config and Stemcell sections in Installing and Configuring PKS.

Step 8: Apply Changes to Deploy the PKS Tile

After configuring the tile, return to the Ops Manager Installation Dashboard and click Apply Changes to deploy the PKS tile.

Step 9: Retrieve the PKS Endpoint

  1. When the installation is completed, retrieve the PKS endpoint by performing the following steps:
    1. From the Ops Manager Installation Dashboard, click the Pivotal Container Service tile.
    2. Click the Status tab and record the IP address assigned to the Pivotal Container Service job.
  2. Create a DNAT rule on the t1-pks-mgmt T1 to map an external IP from the PKS MANAGEMENT CIDR to the PKS endpoint. For example, create a DNAT rule that maps 10.172.1.4 to 172.31.0.4, where 172.31.0.4 is the PKS endpoint IP address on the ls-pks-mgmt NSX Logical Switch.

    Note: Ensure that you have no overlapping NAT rules. If your NAT rules overlap, you cannot reach Ops Manager from VMs in the vCenter network.

Developers should use the DNAT IP address when logging in with the PKS CLI. For more information, see Using PKS.
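
For example, using the DNAT IP address from the rule above (the exact flags depend on your PKS CLI version; pass --ca-cert with the PKS API certificate, or -k to skip certificate verification in test environments):

pks login -a 10.172.1.4 -u USERNAME -p PASSWORD --ca-cert /PATH/TO/PKS-API-CERT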


Please send any feedback you have to pks-feedback@pivotal.io.
