Pivotal Container Service v1.2

Configuring Ops Manager on vSphere

This topic describes how to configure Ops Manager for vSphere without NSX-T integration.

Before you begin this procedure, ensure that you have successfully completed all of the steps in Deploying Ops Manager on vSphere.

Note: You can also perform the procedures in this topic using the Ops Manager API. For more information, see Using the Ops Manager API.
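
For reference, the following is a minimal Python sketch of that API workflow: it obtains a UAA token with the built-in opsman client (which uses an empty client secret) and reads the staged BOSH Director properties that the pages below configure. The FQDN and admin credentials are placeholders, and verify=False accounts for the self-signed certificate Ops Manager typically presents; see Using the Ops Manager API for the authoritative endpoint reference.

    import requests

    OPS_MAN_FQDN = "OPS-MAN-FQDN"          # placeholder: your Ops Manager FQDN

    # Request a UAA access token using the built-in "opsman" client, which has an empty secret.
    token = requests.post(
        f"https://{OPS_MAN_FQDN}/uaa/oauth/token",
        auth=("opsman", ""),
        data={"grant_type": "password", "username": "admin", "password": "ADMIN-PASSWORD"},
        verify=False,                      # Ops Manager typically presents a self-signed certificate
    ).json()["access_token"]

    # Read the staged BOSH Director configuration that the pages below modify.
    properties = requests.get(
        f"https://{OPS_MAN_FQDN}/api/v0/staged/director/properties",
        headers={"Authorization": f"Bearer {token}"},
        verify=False,
    ).json()
    print(properties)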

Step 1: Set up Ops Manager

Note: If you have Pivotal Application Service (PAS) installed, we strongly recommend installing PKS on a separate instance of Ops Manager for security reasons. For more information, see PAS and PKS Deployments with Ops Manager.

  1. Navigate to the fully qualified domain name (FQDN) of your Ops Manager in a web browser.

  2. The first time you start Ops Manager, you must choose one of the following:

Use an Identity Provider

  1. Log in to your IdP console and download the IdP metadata XML. Optionally, if your IdP supports metadata URL, you can copy the metadata URL instead of the XML.

  2. Copy the IdP metadata XML or URL to the Ops Manager Use an Identity Provider log in page.

    Note: The same IdP metadata URL or XML is applied for the BOSH Director. If you use a separate IdP for BOSH, copy the metadata XML or URL from that IdP and enter it into the BOSH IdP Metadata text box in the Ops Manager log in page.

  3. Enter your Decryption passphrase. Read the End User License Agreement, and select the checkbox to accept the terms.

  4. Your Ops Manager log in page appears. Enter your username and password. Click Login.

  5. Download your SAML Service Provider metadata (SAML Relying Party metadata) by navigating to the following URLs:

    • 5a. Ops Manager SAML service provider metadata: https://OPS-MAN-FQDN:443/uaa/saml/metadata
    • 5b. BOSH Director SAML service provider metadata: https://BOSH-IP-ADDRESS:8443/saml/metadata

    Note: To retrieve your BOSH-IP-ADDRESS, navigate to the Ops Manager Director tile > Status tab. Record the Ops Manager Director IP address.

  6. Configure your IdP with your SAML Service Provider metadata. Import the Ops Manager SAML provider metadata from Step 5a above to your IdP. If your IdP does not support importing, provide the values below.

    • Single sign on URL: https://OPS-MAN-FQDN:443/uaa/saml/SSO/alias/OPS-MAN-FQDN
    • Audience URI (SP Entity ID): https://OPS-MAN-FQDN:443/uaa
    • Name ID: Email Address
    • SAML authentication requests are always signed
  7. Import the BOSH Director SAML provider metadata from Step 5b to your IdP. If the IdP does not support an import, provide the values below.

    • Single sign on URL: https://BOSH-IP-ADDRESS:8443/saml/SSO/alias/BOSH-IP-ADDRESS
    • Audience URI (SP Entity ID): https://BOSH-IP-ADDRESS:8443
    • Name ID: Email Address
    • SAML authentication requests are always signed
  8. Return to the Ops Manager Director tile, and continue with the configuration steps below.
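
If you prefer to script the download in step 5 above, the following Python sketch fetches both service provider metadata documents. The hostnames are the same placeholders used in steps 5a and 5b, and verify=False is needed only if the endpoints present self-signed certificates.

    import requests

    # Placeholder hostnames from steps 5a and 5b above.
    metadata_urls = {
        "ops_manager_sp_metadata.xml": "https://OPS-MAN-FQDN:443/uaa/saml/metadata",
        "bosh_director_sp_metadata.xml": "https://BOSH-IP-ADDRESS:8443/saml/metadata",
    }

    for filename, url in metadata_urls.items():
        response = requests.get(url, verify=False)   # self-signed certificates are common here
        response.raise_for_status()
        with open(filename, "wb") as f:
            f.write(response.content)                # save the XML for import into your IdP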

Internal Authentication

  1. When redirected to the Internal Authentication page, you must complete the following steps:
    • Enter a Username, Password, and Password confirmation to create an Admin user.
    • Enter a Decryption passphrase and the Decryption passphrase confirmation. This passphrase encrypts the Ops Manager datastore, and is not recoverable.
    • If you are using an HTTP proxy or HTTPS proxy, follow the instructions in Configuring Proxy Settings for the BOSH CPI.
    • Read the End User License Agreement, and select the checkbox to accept the terms.

Step 2: vCenter Config Page

  1. Log in to Ops Manager with the Admin username and password you created in the previous step.

  2. Click the Ops Manager Director tile.

  3. Select vCenter Config.

  4. Enter the following information:

    • vCenter Host: The hostname of the vCenter that manages ESXi/vSphere.
    • vCenter Username: A vCenter username with create and delete privileges for virtual machines (VMs) and folders.
    • vCenter Password: The password for the vCenter user specified above.
    • Datacenter Name: The name of the datacenter as it appears in vCenter.
    • Virtual Disk Type: The Virtual Disk Type to provision for all VMs. For guidance on selecting a virtual disk type, see Provisioning a Virtual Disk in vSphere.
    • Ephemeral Datastore Names (comma delimited): The names of the datastores that store ephemeral VM disks deployed by Ops Manager.
    • Persistent Datastore Names (comma delimited): The names of the datastores that store persistent VM disks deployed by Ops Manager.
    • VM Folder: The vSphere datacenter folder (default: pcf_vms) where Ops Manager places VMs.
    • Template Folder: The vSphere datacenter folder (default: pcf_templates) where Ops Manager places VM templates.
    • Disk path Folder: The vSphere datastore folder (default: pcf_disk) where Ops Manager creates attached disk images. You must not nest this folder.
  5. Select Standard vCenter Networking.

  6. Click Save.

Note: After your initial deployment, you cannot edit the VM Folder, Template Folder, and Disk path Folder names.
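
If you configure this page through the Ops Manager API instead, the vCenter settings above map onto the iaas_configuration block of the staged Director properties. The sketch below is illustrative only: the property names (for example, ephemeral_datastores_string) and values are assumptions modeled on the vSphere configuration schema, so verify them against the API documentation for your Ops Manager version.

    import requests

    OPS_MAN_FQDN = "OPS-MAN-FQDN"
    TOKEN = "UAA-ACCESS-TOKEN"             # obtained as in the example under the API note above

    # Assumed property names for the vSphere IaaS configuration; verify them for your version.
    iaas_configuration = {
        "vcenter_host": "vcenter.example.com",
        "vcenter_username": "pks-vcenter-user",
        "vcenter_password": "VCENTER-PASSWORD",
        "datacenter": "pks-datacenter",
        "disk_type": "thin",
        "ephemeral_datastores_string": "datastore1,datastore2",
        "persistent_datastores_string": "datastore1,datastore2",
        "bosh_vm_folder": "pcf_vms",
        "bosh_template_folder": "pcf_templates",
        "bosh_disk_path": "pcf_disk",
    }

    response = requests.put(
        f"https://{OPS_MAN_FQDN}/api/v0/staged/director/properties",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"iaas_configuration": iaas_configuration},
        verify=False,
    )
    response.raise_for_status()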

Step 3: Director Config Page

  1. Select Director Config.

  2. In the NTP Servers (comma delimited) field, enter your NTP server addresses.

    Note: The NTP server configuration only updates after VM recreation. Ensure that you select the Recreate all VMs checkbox if you modify the value of this field.

  3. Leave the JMX Provider IP Address field blank.

    Note: Starting from PCF v2.0, BOSH-reported system metrics are available in the Loggregator Firehose by default. If you continue to use PCF JMX Bridge for consuming them outside of the Firehose, you may receive duplicate data. To prevent this duplicate data, leave the JMX Provider IP Address field blank.

  4. Leave the Bosh HM Forwarder IP Address field blank.

    Note: Starting from PCF v2.0, BOSH-reported system metrics are available in the Loggregator Firehose by default. If you continue to use the BOSH HM Forwarder for consuming them, you may receive duplicate data. To prevent duplicate data, leave the Bosh HM Forwarder IP Address field blank.

  5. Select the Enable VM Resurrector Plugin checkbox to enable Ops Manager Resurrector functionality.

  6. Select Enable Post Deploy Scripts to run a post-deploy script after deployment. This script allows the job to execute additional commands against a deployment.

    Note: You must enable post-deploy scripts to install PKS.

  7. Select Recreate all VMs to force BOSH to recreate all VMs on the next deploy. This process does not destroy any persistent disk data.

  8. Select Enable bosh deploy retries if you want Ops Manager to retry failed BOSH operations up to five times.

    Note: If you use Ops Manager v2.2, Pivotal recommends disabling Allow Legacy Agents. Disabling the field allows Ops Manager to implement TLS secure communications.

  9. Select Keep Unreachable Director VMs if you want to preserve Ops Manager Director VMs after a failed deployment for troubleshooting purposes.

  10. Select HM Pager Duty Plugin to enable Health Monitor integration with PagerDuty.

    • Service Key: Enter your API service key from PagerDuty.
    • HTTP Proxy: Enter an HTTP proxy for use with PagerDuty.
  11. Select HM Email Plugin to enable Health Monitor integration with email.

    • Host: Enter your email hostname.
    • Port: Enter your email port number.
    • Domain: Enter your domain.
    • From: Enter the address for the sender.
    • Recipients: Enter comma-separated addresses of intended recipients.
    • Username: Enter the username for your email server.
    • Password: Enter the password for your email server.
    • Enable TLS: Select this checkbox to enable Transport Layer Security.
  12. Select a Blobstore Location to configure the blobstore as either an internal server or an external endpoint. Because the internal server is unscalable and less secure, Pivotal recommends that you configure an external blobstore.

    Note: After you deploy Ops Manager, you cannot change the blobstore location.

    • Internal: Select this option to use an internal blobstore. Ops Manager creates a new VM for blob storage. No additional configuration is required.
    • S3 Compatible Blobstore: Select this option to use an external S3-compatible endpoint. Follow the procedures in Sign up for Amazon S3 and Create a Bucket in the AWS documentation. When you have created an S3 bucket, complete the following steps:
      1. S3 Endpoint: Navigate to the Regions and Endpoints topic in the AWS documentation. Locate the endpoint for your region in the Amazon Simple Storage Service (S3) table and construct a URL using your region’s endpoint. For example, if you are using the us-west-2 region, the URL you create would be https://s3-us-west-2.amazonaws.com. Enter this URL into the S3 Endpoint field in Ops Manager.
      2. Bucket Name: Enter the name of the S3 bucket.
      3. Access Key and Secret Key: Enter the keys you generated when creating your S3 bucket.
      4. Select V2 Signature or V4 Signature. If you select V4 Signature, enter your Region.

        Note: AWS recommends using Signature Version 4. For more information about AWS S3 Signatures, see Authenticating Requests in the AWS documentation.

    • GCS Blobstore: Select this option to use an external Google Cloud Storage (GCS) endpoint. To create a GCS bucket, you will need a GCS account. Follow the procedures in Creating Storage Buckets in the GCP documentation. Once you have created a GCS bucket, complete the following steps:

      1. Bucket Name: Enter the name of your GCS bucket.
      2. Storage Class: Select the storage class for your GCS bucket. For more information, see Storage Classes in the GCP documentation.
      3. Service Account Key: Follow the steps in the Set up an IAM Service Account section of Preparing GCP to download a JSON file with a private key, and then enter the contents of the JSON file into the field.

  13. By default, PCF deploys and manages an Internal database for you. If you choose to use an External MySQL Database, complete the associated fields with information obtained from your external MySQL Database provider: Host, Port, Username, Password, and Database.

  14. (Optional) Director Workers sets the number of workers available to execute Director tasks. This field defaults to 5.

  15. (Optional) Max Threads sets the maximum number of threads that the Ops Manager Director can run simultaneously. For vSphere, the default value is 32. Leave the field blank to use this default value. Pivotal recommends that you use the default value unless doing so results in rate limiting or errors on your IaaS.

  16. (Optional) Enter a value for the Director Hostname, or leave this field blank (default). See Director Config Page in Configuring BOSH Director on vSphere in the PCF documentation for additional details on populating this field.

  17. If you use Ops Manager v2.2, ensure the Disable BOSH DNS server for troubleshooting purposes checkbox is not selected.

    WARNING: Do not disable BOSH DNS if you are deploying PKS.

  18. (Optional) To set a custom banner that users see when logging in to the Director using SSH, enter text in the Custom SSH Banner field.

  19. Click Save.

Note: After your initial deployment, you cannot edit the Blobstore and Database locations.
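
The checkbox settings above can also be applied through the Ops Manager API as part of the director_configuration block. The following sketch shows a plausible payload; the property names are assumptions modeled on the Director configuration schema and should be confirmed against the API documentation for your version.

    import requests

    OPS_MAN_FQDN = "OPS-MAN-FQDN"
    TOKEN = "UAA-ACCESS-TOKEN"

    # Assumed property names that mirror the Director Config fields above.
    director_configuration = {
        "ntp_servers_string": "ntp1.example.com,ntp2.example.com",
        "resurrector_enabled": True,       # Enable VM Resurrector Plugin
        "post_deploy_enabled": True,       # Enable Post Deploy Scripts (required for PKS)
        "retry_bosh_deploys": True,        # Enable bosh deploy retries
        "keep_unreachable_vms": False,     # Keep Unreachable Director VMs
    }

    response = requests.put(
        f"https://{OPS_MAN_FQDN}/api/v0/staged/director/properties",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"director_configuration": director_configuration},
        verify=False,
    )
    response.raise_for_status()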

Step 4: Create Availability Zone Page

Ops Manager Availability Zones correspond to your vCenter clusters and resource pools. Multiple Availability Zones allow you to provide high availability and load balancing to your applications. When you run more than one instance of an application, Ops Manager balances those instances across all of the Availability Zones assigned to the application. Pivotal recommends at least three Availability Zones for a highly available installation of your chosen runtime.

  1. Select Create Availability Zones.

  2. Use the following steps to create one or more Availability Zones for your applications to use:

    • Click Add.
    • Enter a unique Name for the Availability Zone.
    • Enter the name of an existing vCenter Cluster to use as an Availability Zone.
    • (Optional) Enter the name of a Resource Pool in the vCenter cluster that you specified above. The jobs running in this Availability Zone share the CPU and memory resources defined by the pool.
    • (Optional) Click Add Cluster to create another set of Cluster and Resource Pool fields. You can add multiple clusters. Click the trash icon to delete a cluster. The first cluster cannot be deleted.

      Note: For more information about using availability zones in vSphere, see Understanding Availability Zones in VMware Installations in the PCF documentation.

  3. Click Save.
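
To script this page, the Ops Manager API exposes the staged Availability Zones. The sketch below assumes a vSphere payload in which each AZ names a cluster and an optional resource pool; the AZ, cluster, and resource pool names are placeholders, and the payload shape should be confirmed against the API documentation for your version.

    import requests

    OPS_MAN_FQDN = "OPS-MAN-FQDN"
    TOKEN = "UAA-ACCESS-TOKEN"

    # Assumed payload shape for vSphere AZs; cluster and resource pool names are placeholders.
    availability_zones = [
        {"name": "az1", "clusters": [{"cluster": "Cluster1", "resource_pool": "pks-pool-1"}]},
        {"name": "az2", "clusters": [{"cluster": "Cluster2", "resource_pool": "pks-pool-2"}]},
        {"name": "az3", "clusters": [{"cluster": "Cluster3", "resource_pool": "pks-pool-3"}]},
    ]

    response = requests.put(
        f"https://{OPS_MAN_FQDN}/api/v0/staged/director/availability_zones",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"availability_zones": availability_zones},
        verify=False,
    )
    response.raise_for_status()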

Step 5: Create Networks Page

  1. Select Create Networks.

  2. Select Enable ICMP checks to enable ICMP on your networks. Ops Manager uses ICMP checks to confirm that components within your network are reachable.

  3. Click Add Network and create the following networks:

    • pks-infrastructure: This network is for Ops Manager, the BOSH Director, the PKS broker, and the PKS API.
    • pks-main: If you have a large deployment with multiple tiles, you can choose to deploy the PKS broker and PKS API to a separate network named pks-main. See the table below for more information.
    • pks-services: PKS creates the master and worker VMs for Kubernetes clusters on this network. The CIDR must not conflict with the pod overlay network 10.200.0.0/16 or the reserved Kubernetes services CIDR of 10.100.200.0/24.
      Use the values from the following tables as a guide when you create each network, replacing the IP addresses with ranges that are available in your vSphere environment:
      Infrastructure Network

      Field                  Configuration
      Name                   pks-infrastructure
      vSphere Network Name   MY-PKS-virt-net/MY-PKS-subnet-infrastructure
      CIDR                   192.168.101.0/26
      Reserved IP Ranges     192.168.101.1-192.168.101.9
      DNS                    192.168.101.2
      Gateway                192.168.101.1

      Main Network (Optional)

      Field                  Configuration
      Name                   pks-main
      vSphere Network Name   MY-PKS-virt-net/MY-PKS-subnet-pks
      CIDR                   192.168.16.0/26
      Reserved IP Ranges     192.168.16.1-192.168.16.9
      DNS                    192.168.16.2
      Gateway                192.168.16.1

      Service Network

      Field                  Configuration
      Name                   pks-services
      vSphere Network Name   MY-PKS-virt-net/MY-PKS-subnet-services
      CIDR                   192.168.20.0/22
      Reserved IP Ranges     192.168.20.1-192.168.20.9
      DNS                    192.168.20.2
      Gateway                192.168.20.1
  4. Select which Availability Zones to use with the network.

  5. Click Save.

    Note: Multiple networks allow you to place vCenter on a private network and the rest of your deployment on a public network. Isolating vCenter in this manner denies access to it from outside sources and reduces possible security vulnerabilities.

    Note: If you use the Cisco Nexus 1000v Switch, see more information in Using the Cisco Nexus 1000v Switch with Ops Manager in the PCF documentation.
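
To script this page, the Ops Manager API exposes the staged networks. The sketch below creates the pks-infrastructure and pks-services networks from the tables above (the optional pks-main network can be added the same way); the subnet key names and the icmp_checks_enabled flag are assumptions to confirm against the API documentation for your version.

    import requests

    OPS_MAN_FQDN = "OPS-MAN-FQDN"
    TOKEN = "UAA-ACCESS-TOKEN"

    def subnet(iaas_identifier, cidr, reserved, dns, gateway, azs):
        # Assumed subnet keys; values come from the tables above.
        return {
            "iaas_identifier": iaas_identifier,
            "cidr": cidr,
            "reserved_ip_ranges": reserved,
            "dns": dns,
            "gateway": gateway,
            "availability_zone_names": azs,
        }

    networks = [
        {"name": "pks-infrastructure",
         "subnets": [subnet("MY-PKS-virt-net/MY-PKS-subnet-infrastructure", "192.168.101.0/26",
                            "192.168.101.1-192.168.101.9", "192.168.101.2", "192.168.101.1",
                            ["az1", "az2", "az3"])]},
        {"name": "pks-services",
         "subnets": [subnet("MY-PKS-virt-net/MY-PKS-subnet-services", "192.168.20.0/22",
                            "192.168.20.1-192.168.20.9", "192.168.20.2", "192.168.20.1",
                            ["az1", "az2", "az3"])]},
    ]

    response = requests.put(
        f"https://{OPS_MAN_FQDN}/api/v0/staged/director/networks",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"icmp_checks_enabled": True, "networks": networks},
        verify=False,
    )
    response.raise_for_status()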

Step 6: Assign AZs and Networks Page

  1. Select Assign AZs and Networks.

  2. Use the drop-down menu to select a Singleton Availability Zone. The Ops Manager Director installs in this Availability Zone.

  3. Use the drop-down menu to select a Network for your Ops Manager Director.

  4. Click Save.
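
This assignment can also be made through the Ops Manager API. The sketch below assumes the network_and_az endpoint and reuses the example network and AZ names from the previous steps; confirm the endpoint and payload shape against the API documentation for your version.

    import requests

    OPS_MAN_FQDN = "OPS-MAN-FQDN"
    TOKEN = "UAA-ACCESS-TOKEN"

    # Assumed endpoint and payload shape; names match the AZ and network examples above.
    response = requests.put(
        f"https://{OPS_MAN_FQDN}/api/v0/staged/director/network_and_az",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"network_and_az": {
            "network": {"name": "pks-infrastructure"},
            "singleton_availability_zone": {"name": "az1"},
        }},
        verify=False,
    )
    response.raise_for_status()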

Step 7: Security Page

  1. Select Security.

  2. In Trusted Certificates, enter a custom certificate authority (CA) certificate to insert into your organization’s certificate trust chain. This feature enables all BOSH-deployed components in your deployment to trust a custom root certificate. If you want to use Docker Registries for running app instances in Docker containers, use this field to enter your certificate for your private Docker Registry. For more information, see Using Docker Registries in the PCF documentation.

  3. Choose Generate passwords or Use default BOSH password. Pivotal recommends that you use the Generate passwords option for increased security.

  4. Click Save. To view your saved Director password, click the Credentials tab.
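
To script this page, the settings above map onto a security_configuration block in the staged Director properties. The property names in the sketch below are assumptions to confirm against the API documentation for your version; the certificate value is a placeholder for your custom CA certificate in PEM format.

    import requests

    OPS_MAN_FQDN = "OPS-MAN-FQDN"
    TOKEN = "UAA-ACCESS-TOKEN"

    # Assumed property names; trusted_certificates takes the PEM text of your custom CA certificate.
    security_configuration = {
        "trusted_certificates": "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----",
        "generate_vm_passwords": True,     # corresponds to the Generate passwords option
    }

    response = requests.put(
        f"https://{OPS_MAN_FQDN}/api/v0/staged/director/properties",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"security_configuration": security_configuration},
        verify=False,
    )
    response.raise_for_status()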

Step 8: Syslog Page

  1. Select Syslog.

  2. (Optional) To send BOSH Director system logs to a remote server, select Yes.

  3. In the Address field, enter the IP address or DNS name for the remote server.

  4. In the Port field, enter the port number that the remote server listens on.

  5. In the Transport Protocol dropdown menu, select TCP, UDP, or RELP. This selection determines which transport protocol is used to send the logs to the remote server.

  6. (Optional) Mark the Enable TLS checkbox to use TLS encryption when sending logs to the remote server.

    • In the Permitted Peer field, enter either the name or SHA1 fingerprint of the remote peer.
    • In the SSL Certificate field, enter the SSL certificate for the remote server.
  7. Click Save.
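
To script this page, the settings above map onto a syslog_configuration block in the staged Director properties. The property names and example values in the sketch below are assumptions to confirm against the API documentation for your version.

    import requests

    OPS_MAN_FQDN = "OPS-MAN-FQDN"
    TOKEN = "UAA-ACCESS-TOKEN"

    # Assumed property names that mirror the Syslog page fields above.
    syslog_configuration = {
        "enabled": True,
        "address": "syslog.example.com",
        "port": 514,
        "transport_protocol": "tcp",       # tcp, udp, or relp
        "tls_enabled": True,
        "permitted_peer": "*.example.com",
        "ssl_ca_certificate": "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----",
    }

    response = requests.put(
        f"https://{OPS_MAN_FQDN}/api/v0/staged/director/properties",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"syslog_configuration": syslog_configuration},
        verify=False,
    )
    response.raise_for_status()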

Step 9: Resource Config Page

  1. Select Resource Config.

  2. Adjust any values as necessary for your deployment. Under the Instances, Persistent Disk Type, and VM Type fields, choose Automatic from the drop-down menu to allocate the recommended resources for the job. If the Persistent Disk Type field reads None, the job does not require persistent disk space.

    Note: Ops Manager requires a Director VM with at least 8 GB memory.

    Note: If you set a field to Automatic and the recommended resource allocation changes in a future version, Ops Manager automatically uses the updated recommended allocation.

  3. Click Save.

Step 10: Complete the Ops Manager Installation

Follow the steps below to complete the Ops Manager installation:

  1. Return to the Ops Manager Installation Dashboard.
  2. Click Review Pending Changes. Select the product that you intend to deploy and review the changes. For more information, see Reviewing Pending Product Changes.

    Note: In Ops Manager v2.2, the Review Pending Changes page is a Beta feature. If you deploy PKS to Ops Manager v2.2, you can skip this step.

  3. Click Apply Changes.
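
Applying changes can also be triggered through the Ops Manager API by creating an installation. The sketch below is a rough illustration; the exact payload accepted by your version (for example, whether deploy_products takes "all" or a list of product GUIDs) is an assumption to verify against the API documentation.

    import requests

    OPS_MAN_FQDN = "OPS-MAN-FQDN"
    TOKEN = "UAA-ACCESS-TOKEN"

    # Assumed payload: "deploy_products" may also accept a list of product GUIDs instead of "all".
    response = requests.post(
        f"https://{OPS_MAN_FQDN}/api/v0/installations",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"deploy_products": "all", "ignore_warnings": False},
        verify=False,
    )
    response.raise_for_status()
    print("Installation response:", response.json())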

Next Steps

To install PKS on vSphere without NSX-T integration, perform the procedures in Installing PKS on vSphere.

To use Harbor to store and manage container images, see Installing and Integrating VMware Harbor Registry.


Please send any feedback you have to pks-feedback@pivotal.io.
