Deploy Enterprise PKS by Using the Configuration Wizard

You can deploy a new Enterprise Pivotal Container Service (Enterprise PKS) instance either by using the VMware Enterprise PKS Management Console configuration wizard to guide you through the configuration process, or by importing an existing YAML configuration file into the YAML editor.

This topic describes how to use the configuration wizard to deploy Enterprise PKS.

Prerequisites

Step 0: Launch the Configuration Wizard

On the VMware Enterprise PKS landing page, click Install then Start Configuration.

To get help in the wizard at any time, click the ? icon at the top of the page, or click the More Info… links in each section to see help topics relevant to that section. Click the i icons for tips about how to fill in specific fields.

Step 1: Connect to vCenter Server

  1. Enter the IP address or FQDN for the vCenter Server instance on which to deploy Enterprise PKS.
  2. Enter the vCenter Single Sign On username and password for a user account that has vSphere administrator permissions.
  3. Click Connect.
  4. Select the datacenter in which to deploy Enterprise PKS from the drop-down menu.
  5. Click Next to configure networking.

Step 2: Configure Networking

Provide connection information for the container networking interface to use with Enterprise PKS. Enterprise PKS Management Console provides three network configuration options for your Enterprise PKS deployments. Each network configuration option has specific prerequisites. You cannot change the type of networking after the deployment.

Configure an Automated NAT Deployment to NSX-T Data Center

Provide information about an NSX-T Data Center network that you have not already configured for use with Enterprise PKS. You provide information about the NSX-T Data Center setup, and Enterprise PKS Management Console configures it for you. Make sure that your NSX-T Data Center setup satisfies the Prerequisites for an Automated NAT Deployment to NSX-T Data Center before you begin.

  1. Select the NSX-T Data Center (Automated NAT Deployment) radio button.
  2. Configure the connection to NSX Manager.
    • Enter the IP address or FQDN of NSX Manager.
    • Enter the user name and password for an NSX administrator account.
  3. Click Connect.
  4. Enter information about the uplink network.
    • Uplink CIDR: Enter a CIDR range within the uplink subnet for the Tier 0 uplinks, for example 10.40.206.0/24.
    • Gateway IP: Enter the IP address for the gateway, for example 10.40.206.125.
    • VLAN ID: Enter the VLAN ID within the range 0 to 4095, for example 1206.
    • Edge Node 1: Select an Edge Node from the drop-down menu, for example nsx-edge-1.
    • T0 Uplink 1 IP: Enter the IP address of the Tier 0 uplink 1, for example 10.40.206.9.
    • Edge Node 2: Select an Edge Node from the drop-down menu, for example nsx-edge-2. The second edge node is optional for proof-of-concept deployments, but it is strongly recommended for production deployments. To use only one edge node, set Edge Node 2 to None.
    • T0 Uplink 2 IP: Enter the IP address of the Tier 0 uplink 2, for example 10.40.206.11.
    • T0 HA Virtual IP: Enter the IP address for the HA Virtual IP, for example 10.40.206.24.
  5. Enter information about the network resources for the Enterprise PKS deployment to use. A subnet-sizing sketch follows these steps.

    • Deployment CIDR: Enter a CIDR range to use for Enterprise PKS components, for example 10.192.182.1/22.
    • Deployment DNS: Enter the IP address of the DNS server to use for deploying Enterprise PKS components, for example 192.168.111.155.
    • NTP Server: Enter the IP address of an NTP server.
    • Pod IP Block CIDR: Enter a CIDR range to use for pods, with a maximum suffix of 24, for example 11.192.182.1/21.
    • Node IP Block CIDR: Enter a CIDR range to use for nodes, with a maximum suffix of 22, for example 10.192.182.1/21.
    • Nodes DNS: Enter the IP address of the DNS server used by the Kubernetes nodes.
    • Deployment Network Reserved IP Range: Optionally enter a range of IP addresses in the From and To text boxes. No VMs are deployed in this range.

    Note: You cannot modify reserved IP ranges after the initial deployment.

    • Usable range of floating IPs: Enter the floating IP range, for example From 10.0.0.1 To 10.0.0.200. Click Add Range to add more IP ranges.
  6. Optionally enable Manage certificates manually for NSX if NSX Manager uses a custom CA certificate.

    Note: If NSX-T Data Center uses custom certificates and you do not provide the CA certificate for NSX Manager, Enterprise PKS Management Console automatically generates one and registers it with NSX Manager. This can cause other services that are integrated with NSX Manager not to function correctly.

    Enter the contents of the CA certificate in the NSX Manager CA Cert text box:

    -----BEGIN CERTIFICATE-----
    nsx_manager_CA_certificate_contents
    -----END CERTIFICATE-----
    

    If you do not select Manage certificates manually for NSX, the management console generates a certificate for you.

  7. Optionally enable Disable SSL certificates verification to allow unsecured connections to NSX Manager.

  8. Click Next to configure identity management.
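
If you want to sanity-check the CIDR ranges above before you enter them, the following minimal sketch uses Python's standard ipaddress module with the example values from this page. It is not part of the wizard:

    import ipaddress

    # Example values from this page; substitute your own ranges.
    deployment_cidr = ipaddress.ip_network("10.192.182.1/22", strict=False)
    pod_block_cidr = ipaddress.ip_network("11.192.182.1/21", strict=False)

    # strict=False lets ipaddress accept a host address such as 10.192.182.1/22
    # and normalize it to the enclosing network.
    print(deployment_cidr)                 # 10.192.180.0/22
    print(deployment_cidr.num_addresses)   # 1024 addresses in a /22

    # Number of /24 subnets that fit inside the /21 pod IP block.
    print(len(list(pod_block_cidr.subnets(new_prefix=24))))   # 8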

For the next steps, see Configure Identity Management.

Configure a Bring Your Own Topology Deployment to NSX-T Data Center

Provide information about an NSX-T Data Center network that you have already fully configured for use with Enterprise PKS. Make sure that your NSX-T Data Center setup satisfies the Prerequisites for a Bring Your Own Topology Deployment to NSX-T Data Center before you begin.

  1. Select the NSX-T Data Center (Bring Your Own Topology) radio button.
  2. Configure the connection to NSX Manager.
    • Enter the IP address of the NSX Manager.
    • Enter the user name and password for an NSX administrator account.
  3. Click Connect.
  4. Use the drop-down menus to select existing network resources for each of the following items.
    • Network for PKS Management Plane: Select the name of an opaque network on an NSX-T Virtual Distributed Switch (N-VDS).
    • Pod IP Block ID: Select the UUID for the IP block to use for Kubernetes pods.
    • Node IP block ID: Select the UUID for the IP block to use for Kubernetes nodes.
    • T0 Router ID: Select the UUID for the Tier-0 Logical Router configured in NSX-T Data Center.
    • Floating IP Pool ID: Select the UUID for the Floating IP Pool.
  5. Enter IP addresses for the following resources.

    • Nodes DNS: Enter the IP address for the DNS server to use for Kubernetes nodes and pods.
    • Deployment DNS: Enter the IP address of the DNS server for the deployment network, for example 192.168.111.155.
    • NTP Server: Enter the IP address of an NTP server.
    • Deployment Network Reserved IP Range: Optionally enter a range of IP addresses in the From and To text boxes. No VMs are deployed in this range.

    Note: You cannot modify reserved IP ranges after the initial deployment.

  6. Optionally enable Manage certificates manually for NSX if NSX Manager uses a custom CA certificate.

    Enter the contents of the CA certificate in the NSX Manager CA Cert text box:

    -----BEGIN CERTIFICATE-----
    nsx_manager_CA_certificate_contents
    -----END CERTIFICATE-----
    

    If you do not select Manage certificates manually for NSX, the management console generates a certificate for you.

  7. Optionally enable Disable SSL certificates verification to allow unsecured connections to NSX Manager.

  8. Leave the NAT Mode toggle enabled.

    Only NAT mode is supported in this release. You cannot use the configuration wizard to deploy Enterprise PKS with no-NAT mode.

  9. Click Next to configure identity management.

For the next steps, see Configure Identity Management.

Configure a Flannel Network

Provide information so that Enterprise PKS Management Console can provision a Flannel network during deployment. Make sure that you have the information listed in Prerequisites for a Flannel Network before you begin. A sketch for checking that your planned network ranges do not overlap follows the steps below.

  1. Select the Flannel radio button.
  2. Configure the Deployment Network Resource options.

    • Deployment Network: Select a vSphere network on which to deploy Enterprise PKS.
    • Deploy network CIDR: Enter a CIDR range to use for Enterprise PKS components, for example 10.192.182.1/22.
    • Deployment network gateway IP: Enter the IP address for the gateway for the deployment network, for example 10.0.0.1.
    • Deployment DNS: Enter the IP address for the deployment network DNS server, for example 192.168.111.155.
    • Deployment Network Reserved IP Range: Optionally enter a range of IP addresses in the From and To text boxes. No VMs are deployed in this range.

    Note: You cannot modify reserved IP ranges after the initial deployment.

  3. Configure the Service Network Resource options.

    • Service Network: Select a vSphere network to use as the service network.
    • Service network CIDR: Enter a CIDR range to use for the service network, for example 10.192.182.1/23.
    • Service network gateway IP: Enter the IP address for the gateway for the service network.
    • Service DNS: Enter the IP address for the service network DNS server, for example 192.168.111.155.
    • Service Network Reserved IP Range: Optionally enter a range of IP addresses in the From and To text boxes. No VMs are deployed in this range.

    Note: You cannot modify the reserved IP range after the initial deployment.

    • NTP Server: Enter the IP address of an NTP server.
  4. Configure the Kubernetes network options.

    • Pod network CIDR: Enter a CIDR range to use for pods, for example 11.192.182.1/31.
    • Service network CIDR: Enter a CIDR range to use for the Kubernetes services, for example 10.192.182.1/23.
  5. Click Next to configure identity management.
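
Kubernetes generally expects the pod and service ranges not to overlap with each other or with the deployment network. If you want to check your planned ranges before entering them, the following minimal sketch uses Python's standard ipaddress module with hypothetical placeholder values:

    import ipaddress
    from itertools import combinations

    # Hypothetical, non-overlapping placeholder ranges; replace with your planned values.
    ranges = {
        "deployment network": "10.0.0.0/22",
        "service network": "10.0.4.0/23",
        "pod network": "172.16.0.0/16",
    }

    # strict=False normalizes a host address such as 10.0.0.1/22 to its enclosing network.
    networks = {name: ipaddress.ip_network(cidr, strict=False) for name, cidr in ranges.items()}

    overlap_found = False
    for (name_a, net_a), (name_b, net_b) in combinations(networks.items(), 2):
        if net_a.overlaps(net_b):
            overlap_found = True
            print(f"WARNING: {name_a} {net_a} overlaps {name_b} {net_b}")

    if not overlap_found:
        print("No overlaps between the planned ranges.")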

Step 3: Configure Identity Management

Enterprise PKS Management Console provides three identity management options for your Enterprise PKS deployments.

Use a Local Database

You can manage users by using a local database that is created during Enterprise PKS deployment. After deployment, you can add users and groups to the database and assign roles to them in the Identity Management view of the Enterprise PKS Management Console.

  1. Select the Local user database radio button.
  2. In the PKS API FQDN text box, enter an address for the PKS API Server VM, for example api.pks.example.com.

For the next steps, see Optionally Configure UAA and Custom Certificates.

Use an External LDAP Server

Provide information about an existing external Active Directory or LDAP server.

  1. Select the AD/LDAP radio button.
  2. For AD/LDAP endpoint, select ldap or ldaps from the drop-down menu and enter the IP address and port of the AD or LDAP server.
  3. Enter the username and password to use to connect to the server.
  4. Enter the remaining details for your server (a sketch for checking these values outside of the wizard follows these steps):
    • User search base: Enter the location in the AD/LDAP directory tree where user search begins. For example, a domain named cloud.example.com might use ou=Users,dc=example,dc=com.
    • User search filter: Enter a string to use for user search criteria. For example, the standard search filter cn=Smith returns all objects with a common name equal to Smith. Use cn={0} to return all LDAP objects with the same common name as the username.
    • LDAP referrals: Select how to handle references to alternate locations in which AD/LDAP requests can be processed:
      • Automatically follow referrals
      • Ignore referrals
      • Abort authentication
    • Group search base: Optionally enter the location in the AD/LDAP directory tree where group search begins. For example, a domain named cloud.example.com might use ou=Groups,dc=example,dc=com.
    • Group search filter: Enter a string that defines AD/LDAP group search criteria, such as member={0}.
    • External groups whitelist: Optionally enter a comma-separated list of group patterns to be populated in the user’s id_token.
    • Email Attribute: Enter the attribute name in the AD/LDAP directory that contains user email addresses. For example, mail.
    • Email domains: Optionally enter a comma-separated list of the email domains for external users who can receive invitations to Enterprise PKS.
    • First name attribute: Optionally enter the attribute name in the AD/LDAP directory that contains user first names, for example cn.
    • Last name attribute: Optionally enter the attribute name in the AD/LDAP directory that contains user last names, for example sn.
  5. In the PKS API FQDN text box, enter an address for the PKS API Server VM, for example api.pks.example.com.
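
Before you enter the AD/LDAP details, you can confirm outside of the wizard that the endpoint, bind credentials, search bases, and search filters return the users and groups you expect. The following minimal sketch uses the third-party ldap3 Python package; all values are hypothetical placeholders:

    # pip install ldap3  (third-party package; all values below are hypothetical)
    from ldap3 import Server, Connection, ALL

    server = Server("ldaps://ldap.example.com:636", get_info=ALL)
    conn = Connection(
        server,
        user="cn=binduser,ou=Users,dc=example,dc=com",   # bind username
        password="REPLACE_ME",
        auto_bind=True,
    )

    # Reproduce the wizard's user search: the user search base plus the user
    # search filter, with {0} replaced by a sample username.
    user_filter = "(cn={0})".format("alana")
    conn.search("ou=Users,dc=example,dc=com", user_filter,
                attributes=["cn", "sn", "mail"])
    print(conn.entries)

    # Reproduce the group search with member={0}, where {0} is the user's DN.
    group_filter = "(member={0})".format("cn=alana,ou=Users,dc=example,dc=com")
    conn.search("ou=Groups,dc=example,dc=com", group_filter, attributes=["cn"])
    print(conn.entries)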

For the next steps, see Optionally Configure UAA and Custom Certificates.

Use a SAML Identity Provider

You can configure Enterprise PKS so that Kubernetes authenticates users against a SAML identity provider. Before you configure a SAML identity provider, you must configure your identity provider to designate Enterprise PKS as a service provider. For information about how to configure Okta and Azure Active Directory as SAML identity providers for Enterprise PKS, see the Enterprise PKS documentation.

After you have configured your identity provider, enter information about the provider in Enterprise PKS Management Console.

  1. Select the SAML Identity Provider radio button.
  2. For Provider name, enter a unique name you create for the Identity Provider.
    This name can include only alphanumeric characters, +, _, and -. You must not change this name after deployment because all external users use it to link to the provider.

  3. For Display name, enter a display name for your provider.
    The display name appears as a link on your login page.

  4. Enter the metadata from your identity provider either as XML or as a URL. A sketch for checking a metadata URL follows these steps.

    • Download your identity provider metadata and paste the XML into Provider Metadata.
    • If your identity provider exposes a metadata URL, enter it in Metadata URL.
  5. For Name ID format, select the name identifier format for your SAML identity provider.
    This translates to username on Enterprise PKS. The default is Email Address.

  6. For First name attribute and Last name attribute, enter the attribute names in your SAML database that correspond to the first and last names in each user record.
    These fields are case sensitive.

  7. For Email Attribute, enter the attribute name in your SAML assertion that corresponds to the email address in each user record, for example, EmailID.
    This field is case sensitive.

  8. For External groups attribute, enter the attribute name in your SAML database for your user groups.
    This field is case sensitive. To map the groups from the SAML assertion to admin roles in Enterprise PKS, see Grant Enterprise PKS Access to an External LDAP Group.

  9. By default, all SAML authentication requests from Enterprise PKS are signed, but you can optionally disable Sign authentication requests.
    If you disable this option, you must configure your identity provider to verify SAML authentication requests.

  10. To validate the signature for the incoming SAML assertions, enable Required signed assertions.
    If you enable this option, you must configure your Identity Provider to send signed SAML assertions.

  11. For Signature Algorithm, choose an algorithm from the drop-down menu to use for signed requests and assertions.
    The default value is SHA256.

  12. In the PKS API FQDN text box, enter an address for the PKS API Server VM, for example api.pks.example.com.
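
If your identity provider exposes a metadata URL, you can confirm that the URL serves SAML metadata before you enter it in the wizard. The following minimal sketch uses only the Python standard library; the URL is a hypothetical placeholder:

    import urllib.request
    import xml.etree.ElementTree as ET

    # Hypothetical metadata URL; replace with the URL exposed by your identity provider.
    metadata_url = "https://idp.example.com/app/metadata.xml"

    with urllib.request.urlopen(metadata_url) as response:
        metadata = response.read()

    root = ET.fromstring(metadata)
    # SAML 2.0 metadata for a single provider uses the EntityDescriptor root element.
    print("root element:", root.tag)   # expect {urn:oasis:names:tc:SAML:2.0:metadata}EntityDescriptor
    print("entityID:", root.get("entityID"))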

For the next steps, see Optionally Configure UAA and Custom Certificates.

Optionally Configure UAA and Custom Certificates

However you manage identities, you can use OpenID Connect (OIDC) to instruct Kubernetes to verify end-user identities based on authentication performed by a User Account and Authentication (UAA) server. Using OIDC lets you set up an external IDP, such as Okta, to authenticate users who access Kubernetes clusters with kubectl. If you enable OIDC, administrators can grant namespace-level or cluster-wide access to Kubernetes end users. If you do not enable OIDC, you must use service accounts to authenticate kubectl users.

Note: You cannot enable OIDC if you intend to integrate Enterprise PKS with VMware vRealize Operations Management Pack for Container Monitoring.
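
If you enable OIDC, the claim and prefix settings described in step 1 below determine the user and group names that Kubernetes sees. The following minimal sketch, using hypothetical claim values, illustrates how the default claims and an oidc: prefix combine:

    # Hypothetical JWT claims returned by UAA after a user authenticates.
    claims = {"user_name": "admin", "roles": ["developers", "auditors"]}

    # Values corresponding to the wizard settings described below.
    username_claim = "user_name"   # UAA OIDC Username Claim (default)
    groups_claim = "roles"         # UAA OIDC Groups Claim (default)
    username_prefix = "oidc:"      # UAA OIDC Username Prefix
    groups_prefix = "oidc:"        # UAA OIDC Groups Prefix

    # The names that Kubernetes role bindings must reference.
    kubernetes_username = username_prefix + claims[username_claim]
    kubernetes_groups = [groups_prefix + g for g in claims[groups_claim]]

    print(kubernetes_username)   # oidc:admin
    print(kubernetes_groups)     # ['oidc:developers', 'oidc:auditors']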

  1. Optionally select Configure created clusters to use UAA as the OIDC provider and provide the following information.
    • UAA OIDC Groups Claim: Sets the --oidc-groups-claim flag on the Kubernetes API server. Enter the name of your groups claim. This is used to set a user’s group in the JSON Web Token (JWT) claim. The default value is roles.
    • UAA OIDC Groups Prefix: Sets the --oidc-groups-prefix flag. Enter a prefix for your groups claim. This prevents conflicts with existing names. For example, if you enter the prefix oidc:, UAA creates a group name like oidc:developers.
    • UAA OIDC Username Claim: Sets the --oidc-username-claim flag. Enter the name of your username claim. This is used to set a user’s username in the JWT claim. The default value is user_name. Depending on your provider, admins can enter claims besides user_name, such as email or name.
    • UAA OIDC Username Prefix: Sets the --oidc-username-prefix flag. Enter a prefix for your username claim. This prevents conflicts with existing names. For example, if you enter the prefix oidc:, UAA creates a username like oidc:admin.
  2. Optionally select Manage certificates manually for PKS API to generate and upload your own certificates for the PKS API Server.

    If you do not select this option, the management console generates self-signed certificates for you. A sketch for generating a self-signed certificate and key follows these steps.

    Enter the contents of the certificate in the PKS API Certificate text box:

    -----BEGIN CERTIFICATE-----
    pks_api_certificate_contents
    -----END CERTIFICATE-----
    

    Enter the contents of the certificate key in the Private Key PEM text box:

    -----BEGIN PRIVATE KEY-----
    pks_api_private_key_contents
    -----END PRIVATE KEY-----
    
  3. Click Next to configure availability zones.
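
If you select Manage certificates manually for PKS API but do not yet have a certificate and key to paste in, the following sketch produces a self-signed pair in the PEM formats shown above. It uses the third-party Python cryptography package and a hypothetical FQDN; this is a placeholder approach only, and for production you would typically use certificates issued by your own certificate authority:

    # pip install cryptography  (third-party package; the FQDN below is hypothetical)
    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    fqdn = "api.pks.example.com"

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, fqdn)])

    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)                       # self-signed: issuer equals subject
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(datetime.datetime.utcnow())
        .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365))
        .add_extension(x509.SubjectAlternativeName([x509.DNSName(fqdn)]), critical=False)
        .sign(key, hashes.SHA256())
    )

    # PEM blocks to paste into PKS API Certificate and Private Key PEM.
    print(cert.public_bytes(serialization.Encoding.PEM).decode())
    print(key.private_bytes(serialization.Encoding.PEM,
                            serialization.PrivateFormat.PKCS8,
                            serialization.NoEncryption()).decode())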

Step 4: Configure Availability Zones

Availability zones specify the compute resources for Kubernetes cluster deployment. Availability zones are a BOSH construct that, in Enterprise PKS deployments to vSphere, corresponds to vCenter Server clusters, host groups, and resource pools. Availability zones allow you to provide high availability and load balancing to applications. When you run more than one instance of an application, those instances are balanced across all of the availability zones that are assigned to the application. You must configure at least one availability zone. You can configure multiple additional availability zones.

  1. In the Name field, enter a name for the availability zone.
  2. Optionally select This is the management availability zone.
    The management availability zone is the availability zone in which to deploy the PKS Management Plane. The management plane consists of the PKS API VM, Ops Manager, BOSH Director, and Harbor Registry. You can only designate one availability zone as the management zone. If you do not designate an availability zone as the management zone, Enterprise PKS Management Console selects the first one.
  3. In the Compute resource tree, select clusters, host groups, or resource pools for this availability zone to use.
  4. Click Save Availability Zone.
  5. Optionally click Add Availability Zone to add another zone.
    You can only select resources that are not already included in another zone. You can create multiple availability zones.
  6. Click Save Availability Zone for every additional availability zone that you create.
  7. Click Next to configure storage.

Step 5: Configure Storage

Designate the datastores to use for the different types of storage required by your Enterprise PKS deployment.

  • Ephemeral storage is used to contain the files for ephemeral VMs that Enterprise PKS creates during installation, upgrade, and operation. Ephemeral VMs are automatically created and deleted as needed.
  • Permanent storage is used for permanent Enterprise PKS data.
  • Kubernetes persistent volume storage is used to store Kubernetes persistent volumes, for use in stateful applications.

You can use different datastores for the storage of permanent and ephemeral data. If you disable the permanent storage option, Enterprise PKS uses the ephemeral storage for permanent data. For information about when it is appropriate to share the ephemeral, permanent, and persistent volume datastores or use separate ones, see PersistentVolume Storage Options on vSphere.

Datastores can only be selected if their capacity is greater than 250 GB. You can use VMware vSAN, Network File System (NFS), or VMFS storage.

  1. Select one or more datastores for use as ephemeral storage, or use the search field on the right to find datastores by name.
  2. Optionally enable Specify permanent storage to designate different datastores for ephemeral and permanent data.
  3. If you enabled permanent storage, select one or more datastores for permanent storage, or use the search field to find datastores by name.
  4. If you enabled permanent storage, select the size of the disk for persistent Enterprise PKS data from the PKS persistent disk size drop-down menu.
  5. Select one datastore for Kubernetes persistent volume storage, or use the search field to find datastores by name.
  6. Click Next to configure plans.

Step 6: Configure Plans

A plan is a cluster configuration template that defines the set of resources for Enterprise PKS to use when deploying Kubernetes clusters. A plan allows you to configure the number of master and worker nodes, the configuration and disk sizes of the master and worker VMs, the availability zones for master and worker VMs, and advanced settings.

Enterprise PKS Management Console provides preconfigured default plans for different sizes of Kubernetes clusters. You can change the default configurations, or you can enable the plans as they are. You must enable at least one plan because when you use the PKS CLI to create a Kubernetes cluster, you must specify the plan on which to base that cluster. If no plans are enabled, you cannot create Kubernetes clusters.

Enterprise PKS plans support privileged containers and three admission control plugins. For information about privileged containers, see Privileged mode for pods in the Kubernetes documentation. For information about the supported admission plugins, see Enabling, Disabling, and Using Admission Control Plugins for Enterprise PKS Clusters.

  1. To use the preconfigured plans as they are, click Save Plan for each of the small, medium, and large plans.
  2. Optionally use the drop-down menus and buttons to change the default configurations of the preconfigured plans.
    • Enter a name and a description for the plan in the Name and Description text boxes.
    • Master/etcd node instances: Select 1 (small), 3 (medium), or 5 (large). A sketch showing the fault tolerance of each option follows these steps.
    • Master persistent disk size: Select the size of the master persistent disk.
    • Master/etcd availability zones: Enable one or more availability zones for the master nodes.
    • Master/etcd VM type: Select the size of the Master VM.
    • Worker node instances: Specify the number of worker nodes. For a small deployment, 3 is suggested.
    • Worker persistent disk size: Select the size of the worker node persistent disk.
    • Worker availability zones: Enable one or more availability zones for the worker nodes.
    • Worker VM type: Select a configuration for worker nodes.
    • Max worker node instances: Select the maximum number of worker nodes.
    • Errand VM type: Select the size of the VM to run BOSH errand tasks.
    • Enable privileged Containers: Optionally enable privileged container mode. Use with caution.
    • Admission Plugins: Optionally enable admission plugins. Admission plugins provide a higher level of access control to the Kubernetes API server and should be used with caution.
      • PodSecurityPolicy
      • DenyEscalatingExec
      • SecurityContextDeny
    • Node Drain Timeout: Enter the timeout in minutes for the node to drain pods. If you set this value to 0, the node drain does not terminate.
    • To configure when the node drains, optionally enable the following:
      • Force node to drain even if it has running pods not managed by a ReplicationController, ReplicaSet, Job, DaemonSet, or StatefulSet
      • Force node to drain even if it has running DaemonSet managed pods
      • Force node to drain even if it has running pods using emptyDir
      • Force node to drain even if pods are still running after timeout
    • Pod Shutdown Grace Period: Enter a timeout in seconds for the node to wait before it forces the pod to terminate. If you set this value to -1, the default timeout is set to the one specified by the pod.
  3. Click Save Plan for each plan that you edit.
  4. Optionally click Add Plan to create a new plan, configure it as described above, and click Save Plan.
  5. Optionally delete any plans that you do not need.
  6. Click Next to configure integrations.
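
The master/etcd node counts offered by the plans (1, 3, and 5) are odd numbers because etcd needs a majority (quorum) of members to remain available. This is general etcd behavior rather than a wizard setting; the following minimal sketch shows the fault tolerance that each option provides:

    # Quorum and fault tolerance for the master/etcd node counts offered by the plans.
    for members in (1, 3, 5):
        quorum = members // 2 + 1
        tolerated_failures = members - quorum
        print(f"{members} master/etcd node(s): quorum={quorum}, "
              f"tolerates {tolerated_failures} node failure(s)")
    # 1 node:  quorum=1, tolerates 0 failures
    # 3 nodes: quorum=2, tolerates 1 failure
    # 5 nodes: quorum=3, tolerates 2 failures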

Step 7: Configure Integrations

If your infrastructure includes existing deployments of Wavefront by VMware, VMware vRealize Operations Management Pack for Container Monitoring, or VMware vRealize Log Insight, you can configure Enterprise PKS to connect to those services. You can also configure Enterprise PKS to forward logs to a Syslog server.

Configure Connection to Wavefront

By connecting your Enterprise PKS deployment to an existing deployment of Wavefront by VMware, you can obtain detailed metrics about Kubernetes clusters and pods. Before you configure Wavefront integration, you must have an active Wavefront account and access to a Wavefront instance. For more information, including how to generate a Wavefront access token, see VMware PKS Integration and VMware Enterprise PKS Integration Details in the Wavefront by VMware documentation.

  1. Select the Enable toggle to enable a connection to Wavefront.
  2. Enter the address of your Wavefront instance in the Wavefront URL text box.
  3. Enter the Wavefront API token in the Wavefront access token text box.
  4. Enter an email address to which Wavefront sends alerts in the Wavefront alert recipient text box.
  5. Optionally disable Create pre-defined Wavefront alerts when provisioning PKS.
  6. Optionally disable Delete pre-defined alerts when deleting PKS.
  7. Click Save.
  8. Configure integrations with other applications, or click Next to install Harbor.

Configure Connection to VMware vRealize Operations Management Pack for Container Monitoring

You can connect your Enterprise PKS deployment to an existing instance of VMware vRealize Operations Management Pack for Container Monitoring. vRealize Operations Management Pack for Container Monitoring provides detailed monitoring of your Kubernetes clusters. vRealize Operations Management Pack for Container Monitoring must be installed, licensed, running, and available in your environment before you enable the option. For more information, see the vRealize Operations Management Pack for Container Monitoring documentation.

If you enable the option to integrate Enterprise PKS with VMware vRealize Operations Management Pack for Container Monitoring, the management console creates a cAdvisor container in your Enterprise PKS deployment.

  1. Select the Enable toggle to enable a connection to vRealize Operations Management Pack for Container Monitoring.
  2. Click Save.
  3. Configure integrations with other applications, or click Next to install Harbor.

Configure Connection to VMware vRealize Log Insight

You can configure your Enterprise PKS deployment so that an existing deployment of VMware vRealize Log Insight pulls logs from all BOSH jobs and containers running in the cluster, including node logs from core Kubernetes and BOSH processes, Kubernetes event logs, and pod stdout and stderr.

vRealize Log Insight must be installed, licensed, running, and available in your environment before you enable the option. For instructions and additional information, see the vRealize Log Insight documentation.

  1. Select the Enable toggle to enable a connection to vRealize Log Insight.
  2. Enter the address of your vRealize Log Insight instance in the Host text box.
  3. Optionally disable Enable SSL.
  4. Optionally disable Disable SSL certificate validation.
  5. Click Save.
  6. Configure integrations with other applications, or click Next to install Harbor.

Note: If you enable integration with vRealize Log Insight, Enterprise PKS Management Console generates a unique vRealize Log Insight agent ID for the appliance. You must provide this agent ID to vRealize Log Insight so that it can pull the appropriate logs from the appliance. For information about how to obtain the agent ID, see Obtain the VMware vRealize Log Insight Agent ID for Enterprise PKS Management Console in Troubleshooting Enterprise PKS Management Console.

Configure Connection to Syslog

You can configure your Enterprise PKS deployment so that it sends logs for BOSH-deployed VMs, Kubernetes clusters, and namespaces to an existing Syslog server.
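
Before you configure the connection, you can optionally confirm that the Syslog server accepts messages from the appliance's network. The following minimal sketch is not part of the wizard; it uses the Python standard library, covers only TCP and UDP (not RELP or TLS), and the server address is a hypothetical placeholder:

    import logging
    import logging.handlers
    import socket

    # Hypothetical Syslog endpoint; replace with your server's address and port.
    handler = logging.handlers.SysLogHandler(
        address=("syslog.example.com", 514),
        socktype=socket.SOCK_STREAM,   # TCP; use socket.SOCK_DGRAM for UDP
    )

    logger = logging.getLogger("pks-syslog-check")
    logger.addHandler(handler)
    logger.warning("Test message from the Enterprise PKS Management Console host")
    handler.close()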

  1. Select the Enable toggle to enable a connection to Syslog.
  2. Enter the address of your Syslog server in the Address and port text boxes.
  3. Select TCP, UDP, or RELP from the Transport protocol drop-down menu.
  4. Optionally select Enable TLS.
  5. Enter a permitted peer ID.
  6. Click Save.
  7. Click Next to install Harbor.

Step 8: Configure Harbor

Harbor is an enterprise-class registry server that you can use to store and distribute container images. Harbor allows you to organize image repositories in projects, and to set up role-based access control to those projects to define which users can access which repositories. Harbor also provides rule-based replication of images between registries, optionally implements Content Trust with Notary and vulnerability scanning of stored images with Clair, and provides detailed logging for project and user auditing.

Harbor provides a Notary server that allows you to implement Content Trust by signing and verifying the images in the registry. When Notary content trust is enabled, users can only push and pull images that have been signed and verified to or from the registry.

Harbor uses Clair to perform vulnerability and security scanning of images in the registry. You can set thresholds that prevent users from running images that exceed those vulnerability thresholds. Once an image is uploaded into the registry, Harbor uses Clair to check the various layers of the image against known vulnerability databases and reports any issues found.

  1. Optionally select the Enable toggle to deploy Harbor when you deploy Enterprise PKS.
  2. In the Harbor FQDN text box, enter a name for the Harbor VM, for example harbor.pks.example.com.
    This is the address at which you access the Harbor administration UI and registry service.
  3. Enter and confirm a password for the Harbor VM.
  4. Select the method to use for authenticating connections to Harbor.
    • Harbor internal user management: Create a local database of users in the Harbor VM.
    • Log in Harbor with LDAP users: Use AD or LDAP to manage users. You configure the connection to the LDAP server in Harbor after deployment.
    • UAA in Pivotal Container Service: Use the same UAA as you use for Enterprise PKS.
  5. Optionally select Manage certificates manually for Harbor to use custom certificates with Harbor.

    To use a custom certificate, paste the contents of the server certificate PEM file in the SSL certificate PEM text box.

    -----BEGIN CERTIFICATE-----
    ssl_certificate_contents
    -----END CERTIFICATE-----
    

    Paste the contents of the certificate key in the SSL key PEM text box.

    -----BEGIN PRIVATE KEY-----
    ssl_private_key_contents
    -----END PRIVATE KEY-----
    

    Paste the contents of the Certificate Authority (CA) file in the Certificate Authority text box.

    -----BEGIN CERTIFICATE-----
    CA_certificate_contents
    -----END CERTIFICATE-----
    
  6. Select the location in which to store image repositories.

    • Local file system: Stores images in the Harbor VM. No configuration required.
    • Remote NFS server: Provide the IP address and path to an NFS share point.
    • AWS S3: Provide the connection details for your Amazon S3 account. A credential-check sketch follows these steps.
      • Access Key: Enter your access key ID.
      • Secret Key: Enter the secret access key for your access key ID.
      • Region: The region in which your bucket is located.
      • Bucket Name: Enter the name of the S3 bucket.
      • Root Directory in the Bucket: Enter the root directory of the bucket.
      • Chunk Size: The default is 5242880 bytes (5 MB), but you can change it if necessary.
      • Endpoint URL of your S3-compatible file store: If you use an S3-compatible object store rather than Amazon S3, enter its endpoint URL.
      • Enable v4auth: Access to the S3 bucket is authenticated by default. Deselect this checkbox for anonymous access.
      • Secure mode: Access to your S3 bucket is secure by default. Deselect this checkbox to disable secure mode.
    • Google Cloud Storage: Provide the connection details for your Google Cloud Storage account.
      • Bucket Name: Enter the name of the GCS bucket.
      • Root Directory in the Bucket: Enter the root directory of the bucket.
      • Chunk Size: The default is 5242880 bytes (5 MB), but you can change it if necessary.
      • Key File: Enter the service account key for your bucket.
  7. Select the configuration for the Harbor VM from the VM type for harbor-app drop-down menu.

  8. Select the size of the disk for the Harbor VM from the Disk size for harbor-app drop-down menu.

  9. Optionally enable Clair.

    • In the Updater interval field, specify the interval in hours at which Clair updates its CVE databases for the registered sources. The default is 0, which means that Clair never updates its CVE databases. For example, if you set the updater interval to 24, Clair updates its CVE databases every 24 hours.
    • In the HTTP Proxy field, enter the URL to proxy HTTP traffic to the Clair service. For example: http://my.proxy.com:3128.
    • In the HTTPS Proxy field, enter the URL to proxy HTTPS traffic to the Clair service. For example: http://my.proxy.com:3128.
  10. Optionally enable Notary to install the Notary server and enable content trust for container images.

  11. Click Next to complete the configuration wizard.
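
If you select AWS S3 as the image store in the previous steps, you can confirm the access key, secret key, region, and bucket name before entering them. The following minimal sketch is not part of the wizard; it uses the third-party boto3 package and hypothetical values:

    # pip install boto3  (third-party package; all values below are hypothetical)
    import boto3

    s3 = boto3.client(
        "s3",
        aws_access_key_id="AKIA...",
        aws_secret_access_key="REPLACE_ME",
        region_name="us-west-2",
        # endpoint_url="https://s3.example.com",  # only for an S3-compatible store
    )

    # head_bucket succeeds only if the bucket exists and the credentials can access it.
    s3.head_bucket(Bucket="my-harbor-registry-bucket")
    print("Bucket is reachable with these credentials.")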

Step 9: Configure CEIP and Telemetry

VMware’s Customer Experience Improvement Program (CEIP) and the Pivotal Telemetry Program provide VMware and Pivotal with information to improve their products and services, fix problems, and advise you on how best to deploy and use their products. As part of the CEIP and Telemetry programs, VMware and Pivotal collect technical information about your organization’s use of Enterprise PKS Management Console.

  1. Select your level of participation in the CEIP and Telemetry programs:
    • No participation
    • Standard, anonymous participation
    • Enhanced participation, which allows VMware and Pivotal to identify you and, if necessary, provide support
  2. Optionally tell us how you intend to use Enterprise PKS Management Console.

Step 10: Generate Configuration File and Deploy Enterprise PKS

When all of the sections of the wizard are green, you can generate a YAML configuration file and deploy Enterprise PKS.

  1. Click Generate Configuration to see the generated YAML file.
  2. Optionally click Export to save a copy of the YAML file for future use.
    This is recommended. The manifest is exported as the file PksConfiguration.yaml.
  3. Optionally edit the YAML directly in the YAML editor.
  4. Click Apply Configuration then Continue to deploy Enterprise PKS.
  5. On the Installing PKS Instance page, follow the progress of the deployment.
  6. When the deployment has completed successfully, click Continue to monitor and manage your deployment.

Next Steps

You can now access the Enterprise PKS control plane and begin deploying Kubernetes clusters.

For information about how you can use Enterprise PKS Management Console to monitor and manage your deployment, see Monitor and Manage Enterprise PKS in the Management Console.

If Enterprise PKS fails to deploy, see Troubleshooting.


Please send any feedback you have to pks-feedback@pivotal.io.