Configuring Windows Worker-Based Kubernetes Clusters (Beta)

This topic describes configuring Windows worker-based Kubernetes clusters in Enterprise Pivotal Container Service (Enterprise PKS).

Overview

In Enterprise PKS you can provision a Windows worker-based Kubernetes cluster on vSphere with Flannel.

To provision a Windows worker-based Kubernetes cluster:

  1. Verify your environment meets the Windows worker-based Kubernetes cluster Prerequisites.
  2. Configure a Windows Worker-Based Kubernetes Cluster.
  3. Upload the Windows Server Stemcell.
  4. Create a Windows Worker-Based Cluster.

IMPORTANT: Support for Windows worker-based Kubernetes clusters is in beta and is available only on vSphere with Flannel.

Do not enable this feature if you are using Enterprise PKS 1.5 with vSphere with NSX-T, Google Cloud Platform (GCP), Azure, or Amazon Web Services (AWS).

We are actively looking for feedback on this beta feature. To submit feedback, send an email to pcf-windows@pivotal.io.

Prerequisites

The following are required for creating a Windows worker-based Kubernetes cluster in Enterprise PKS 1.5:

  • Your vSphere environment meets the vSphere Prerequisites and Resource Requirements.
  • Enterprise PKS must be installed in a vSphere with Flannel environment.

    Note: NSX-T does not support networking Windows containers. If this is a key requirement for you, submit feedback by sending an email to pcf-windows@pivotal.io.

  • Enterprise PKS has been configured as described in Installing Enterprise PKS on vSphere.
  • You must have a vSphere stemcell 2019.7 for Windows Server version 2019.

    Note: vSphere Windows Server stemcells are not available on the Pivotal Network. vSphere Windows Server stemcells must be created by using Stembuild and your own Windows Server ISO. For information about creating vSphere Windows Server stemcells, see Windows Stemcells in Downloading or Creating Windows Stemcells.
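
    For reference, a typical Stembuild workflow looks like the following sketch. The vCenter URL, credentials, VM address, and inventory path are placeholders for your environment, and flag names can vary between Stembuild releases, so verify them against the documentation for the Stembuild version that matches the 2019.7 stemcell line.

      # Prepare a cloned Windows Server 2019 VM in vCenter (installs BOSH dependencies).
      stembuild construct -vm-ip 10.0.0.5 -vm-username Administrator -vm-password 'VM-PASSWORD' \
        -vcenter-url vcenter.example.com -vcenter-username admin@vsphere.local -vcenter-password 'VCENTER-PASSWORD'

      # Package the prepared VM into a stemcell tarball that you can upload to Ops Manager.
      stembuild package -vcenter-url vcenter.example.com -vcenter-username admin@vsphere.local \
        -vcenter-password 'VCENTER-PASSWORD' -vm-inventory-path '/YOUR-DATACENTER/vm/YOUR-FOLDER/STEMCELL-VM'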

Configure a Windows Worker-Based Kubernetes Cluster

  1. Configure a Windows worker plan as described in Plans, below.
  2. Configure Windows worker networking as described in Networking, below.
  3. Upload the Windows Server stemcell as described in Upload the Windows Server Stemcell, below.
  4. Click Apply Changes to complete the configuration changes.

Plans

A plan defines a set of resource types used for deploying a cluster.

Note: Before configuring your Windows worker plan, you must first activate and configure Plan 1. See Plans in Installing Enterprise PKS on vSphere for more information.

To activate and configure a plan, perform the following steps:

  1. Click the plan that you want to activate. You must activate and configure either Plan 11, Plan 12, or Plan 13 to deploy a Windows worker-based cluster.

  2. Select Active to activate the plan and make it available to developers deploying clusters.
    Plan pane configuration

  3. Under Name, provide a unique name for the plan.

  4. Under Description, edit the description as needed. The plan description appears in the Services Marketplace, which developers can access by using the PKS CLI.

  5. Under Worker OS, verify that Windows is selected.

  6. Under Master/ETCD Node Instances, select the default number of Kubernetes master/etcd nodes to provision for each cluster. You can enter 1, 3, or 5.

    Note: If you deploy a cluster with multiple master/etcd node VMs, confirm that you have sufficient hardware to handle the increased load on disk write and network traffic. For more information, see Hardware recommendations in the etcd documentation.

    In addition to meeting the hardware requirements for a multi-master cluster, we recommend configuring monitoring for etcd to monitor disk latency, network latency, and other indicators for the health of the cluster. For more information, see Monitoring Master/etcd Node VMs.

    WARNING: To change the number of master/etcd nodes for a plan, you must ensure that no existing clusters use the plan. Enterprise PKS does not support changing the number of master/etcd nodes for plans with existing clusters.

  7. Under Master/ETCD VM Type, select the type of VM to use for Kubernetes master/etcd nodes. For more information, including master node VM customization options, see the Master Node VM Size section of VM Sizing for Enterprise PKS Clusters.

  8. Under Master Persistent Disk Type, select the size of the persistent disk for the Kubernetes master node VM.

  9. Under Master/ETCD Availability Zones, select one or more AZs for the Kubernetes clusters deployed by Enterprise PKS. If you select more than one AZ, Enterprise PKS deploys the master VM in the first AZ and the worker VMs across the remaining AZs. If you are using multiple masters, Enterprise PKS deploys the master and worker VMs across the AZs in round-robin fashion.

  10. Under Maximum number of workers on a cluster, set the maximum number of Kubernetes worker node VMs that Enterprise PKS can deploy for each cluster. Enter any whole number in this field.
    Plan pane configuration, part two

  11. Under Worker Node Instances, select the default number of Kubernetes worker nodes to provision for each cluster.

    If the user creating a cluster with the PKS CLI does not specify a number of worker nodes, the cluster is deployed with the default number set in this field. This value cannot be greater than the maximum worker node value you set in the previous field. For more information about creating clusters, see Creating Clusters.

    For high availability, create clusters with a minimum of three worker nodes, or two per AZ if you intend to use PersistentVolumes (PVs). For example, if you deploy across three AZs, you should have six worker nodes. For more information about PVs, see PersistentVolumes in Maintaining Workload Uptime. Provisioning a minimum of three worker nodes, or two nodes per AZ, is also recommended for stateless workloads.

    If you later reconfigure the plan to adjust the default number of worker nodes, the existing clusters that have been created from that plan are not automatically upgraded with the new default number of worker nodes.

  12. Under Worker VM Type, select the type of VM to use for Kubernetes worker node VMs. For more information, including worker node VM customization options, see the Worker Node VM Number and Size section of VM Sizing for Enterprise PKS Clusters.

    Note: BOSH does not support persistent disks for Windows VMs. If specifying Worker Persistent Disk Type on a Windows worker is a requirement for you, submit feedback by sending an email to pcf-windows@pivotal.io.

  13. Under Worker Availability Zones, select one or more AZs for the Kubernetes worker nodes. Enterprise PKS deploys worker nodes equally across the AZs you select.

  14. Under Kubelet customization - system-reserved, enter resource values that the Kubelet reserves for system daemons. For example, memory=250Mi, cpu=150m. For more information about system-reserved values, see the Kubernetes documentation. (A sketch of how the values in this step and the next render as Kubelet settings appears after this list.)

  15. Under Kubelet customization - eviction-hard, enter hard eviction thresholds that the Kubelet uses to evict pods when available node resources fall below these limits. Enter limits in the format EVICTION-SIGNAL=QUANTITY. For example, memory.available=100Mi, nodefs.available=10%, nodefs.inodesFree=5%. For more information about eviction thresholds, see the Kubernetes documentation.

    WARNING: Use the Kubelet customization fields with caution. If you enter values that are invalid or that exceed the limits the system supports, Kubelet might fail to start. If Kubelet fails to start, you cannot create clusters.

  16. Under Errand VM Type, select the size of the VM that contains the errand. The smallest instance possible is sufficient, as the only errand running on this VM is the one that applies the Default Cluster App YAML configuration.

  17. (Optional) Under (Optional) Add-ons - Use with caution, enter additional YAML configuration to add custom workloads to each cluster in this plan. You can specify multiple files using --- as a separator. For more information, see Adding Custom Workloads. (A minimal YAML sketch appears after this list.)

    Note: Windows in Kubernetes does not support privileged containers. See Feature Restrictions in the Kubernetes documentation for additional information.

  18. (Optional) Enable or disable one or more admission controller plugins: PodSecurityPolicy and SecurityContextDeny. See Admission Plugins for more information. Windows in Kubernetes does not support the DenyEscalatingExec admission plugin. See API in the Kubernetes documentation for additional information.

  19. Click Save.
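
The Kubelet customization fields in steps 14 and 15 are applied to the Kubelet on each worker node. As a rough, hypothetical sketch, the example values above correspond to the following standard Kubelet settings; the exact way the deployment renders them may differ:

    # Illustrative only: how the plan values map to Kubelet flags.
    kubelet \
      --system-reserved=memory=250Mi,cpu=150m \
      --eviction-hard=memory.available=100Mi,nodefs.available=10%,nodefs.inodesFree=5%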

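The (Optional) Add-ons field in step 17 accepts one or more Kubernetes YAML documents separated by ---. The following is a minimal sketch, assuming you want every cluster created from this plan to include a namespace and a ConfigMap; the names and values are illustrative. Remember that workloads added here must not require privileged containers, which Windows nodes do not support.

    ---
    apiVersion: v1
    kind: Namespace
    metadata:
      name: monitoring
    ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-info
      namespace: monitoring
    data:
      environment: windows-beta
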
Networking

To configure networking, do the following:

  1. Click Networking.
  2. Under Container Networking Interface, select Flannel.
    Networking pane configuration
  3. (Optional) Enter values for Kubernetes Pod Network CIDR Range and Kubernetes Service Network CIDR Range.
    • For Windows worker-based clusters, the Kubernetes Service Network CIDR Range setting must remain 10.220.0.0/16. Customizing the Service Network CIDR range is not supported for Windows worker-based clusters on vSphere with Flannel. If customizing this range is a key requirement for you, submit feedback by sending an email to pcf-windows@pivotal.io.
  4. (Optional) Configure a global proxy for all outgoing HTTP and HTTPS traffic from your Kubernetes clusters. This setting will not set the proxy for running Kubernetes workloads or pods.

    Production environments can deny direct access to public Internet services and between internal services by placing an HTTP or HTTPS proxy in the network path between Kubernetes nodes and those services.

    If your environment includes HTTP or HTTPS proxies, configuring Enterprise PKS to use these proxies allows Enterprise PKS-deployed Kubernetes nodes to access public Internet services and other internal services. Follow the steps below to configure a global proxy for all outgoing HTTP/HTTPS traffic from your Kubernetes clusters (a combined example of these settings appears after the Networking steps):
    1. Under HTTP/HTTPS Proxy, select Enabled.
      Networking pane configuration
    2. Under HTTP Proxy URL, enter the URL of your HTTP proxy endpoint. For example, http://myproxy.com:1234.
    3. (Optional) If your HTTP proxy uses basic authentication, enter the username and password under HTTP Proxy Credentials.
    4. Under HTTPS Proxy URL, enter the URL of your HTTPS proxy endpoint. For example, https://myproxy.com:1234.
    5. (Optional) If your HTTPS proxy uses basic authentication, enter the username and password under HTTPS Proxy Credentials.
    6. Under No Proxy, enter the service network CIDR where your Enterprise PKS cluster is deployed. List any additional IP addresses or domain names that should bypass the proxy. The No Proxy property for vSphere accepts wildcard domains denoted by a prefixed *. or ., for example *.example.com and .example.com.

      Note: By default, the .internal domain and the 10.100.0.0/8 and 10.200.0.0/8 IP address ranges are not proxied. This allows internal Enterprise PKS communication.

      Do not use the _ character in the No Proxy field. Entering an underscore character in this field can cause upgrades to fail.

      Because some jobs in the VMs accept *. as a wildcard, while others only accept ., we recommend that you define a wildcard domain using both of them. For example, to denote example.com as a wildcard domain, add both *.example.com and example.com to the No Proxy property.

  5. Under Allow outbound internet access from Kubernetes cluster vms (IaaS-dependent), ignore the Enable outbound internet access checkbox.
  6. Click Save.
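
A combined sketch of the proxy fields above, assuming a proxy reachable at myproxy.com and the default Windows worker service network; all hostnames, ports, and domains are placeholders:

    HTTP Proxy URL:   http://myproxy.com:1234
    HTTPS Proxy URL:  https://myproxy.com:1234
    No Proxy:         10.220.0.0/16,10.100.0.0/8,10.200.0.0/8,.internal,*.example.com,example.com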

Upload the Windows Server Stemcell

  1. When prompted by Ops Manager to upload a stemcell, follow the instructions and provide your previously created vSphere stemcell 2019.7 for Windows Server version 2019.
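
If you prefer to script this step instead of using the Ops Manager UI prompt, the om CLI can upload a stemcell to Ops Manager. This is an optional alternative; the target URL, credentials, and file name below are placeholders for your environment:

    # Upload the Windows Server stemcell tarball produced by Stembuild.
    om --target https://opsman.example.com --username admin --password 'OPSMAN-PASSWORD' \
      upload-stemcell --stemcell ./bosh-stemcell-2019.7-vsphere-esxi-windows2019-go_agent.tgz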

Create a Windows Worker-Based Cluster

  1. To create a Windows worker-based cluster, follow the steps in Creating Clusters. A minimal PKS CLI sketch appears below.
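
The following is a minimal sketch of the PKS CLI workflow, assuming your Windows worker plan is named small-windows; the cluster name and hostname are placeholders:

    # Confirm the name of the Windows worker plan available to developers.
    pks plans

    # Create a cluster from the Windows worker plan with three worker nodes.
    pks create-cluster windows-cluster --external-hostname windows-cluster.example.com --plan small-windows --num-nodes 3

    # Monitor provisioning, then fetch kubeconfig credentials once the cluster is ready.
    pks cluster windows-cluster
    pks get-credentials windows-cluster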

Please send any feedback you have to pks-feedback@pivotal.io.