Deploying Elastic Runtime on GCP

This topic describes how to install and configure Elastic Runtime for Pivotal Cloud Foundry (PCF) on Google Cloud Platform (GCP).

Before beginning this procedure, ensure that you have successfully completed the Configuring Ops Manager Director on GCP topic.

Note: If you plan to install the PCF IPsec add-on, you must do so before installing any other tiles. Pivotal recommends installing IPsec immediately after Ops Manager, and before installing the Elastic Runtime tile.

Step 1: Download the Elastic Runtime Tile

  1. If you have not already downloaded Elastic Runtime, log in to Pivotal Network, and click on Pivotal Cloud Foundry Elastic Runtime.

  2. From the Releases drop-down, select the release to install and choose one of the following:

    1. Click PCF Elastic Runtime to download the Elastic Runtime .pivotal file.
    2. Click PCF Small Footprint Runtime to download the Small Footprint Runtime .pivotal file. For more information, see Getting Started with Small Footprint Runtime.

Step 2: Add Elastic Runtime to Ops Manager

  1. Navigate to the Pivotal Cloud Foundry Operations Manager Installation Dashboard.

  2. Click Import a Product to add the Elastic Runtime tile to Ops Manager. This may take a while depending on your connection speed.

    Tip: After you import a tile to Ops Manager, you can view the latest available version of that tile in the Installation Dashboard by enabling the Pivotal Network API. For more information, refer to the Adding and Deleting Products topic.

  3. On the left, click the plus icon next to the imported Elastic Runtime product to add it to the Installation Dashboard.

  4. Click the newly added Elastic Runtime tile in the Installation Dashboard.

Step 3: Assign Availability Zones and Networks

  1. Select Assign AZ and Networks. These are the Availability Zones that you created when configuring Ops Manager Director.

  2. Select an Availability Zone under Place singleton jobs. Ops Manager runs any job with a single instance in this Availability Zone.

  3. Select one or more Availability Zones under Balance other jobs. Ops Manager balances instances of jobs with more than one instance across the Availability Zones that you specify.

    Note: For production deployments, Pivotal recommends at least three Availability Zones for a highly available installation of Elastic Runtime.

  4. From the Network drop-down box, choose the network on which you want to run Elastic Runtime.

  5. Click Save.

Step 4: Add DNS Records for Your Load Balancers

In this step you redirect queries for your domain to the IP addresses of your load balancers.

  1. Locate the static IP addresses of the load balancers you created in Preparing to Deploy PCF on GCP:

    • An HTTP(S) load balancer named pcf-router
    • A TCP load balancer for WebSockets named pcf-websockets
    • A TCP load balancer named pcf-ssh
    • A TCP load balancer for the TCP router if you plan on enabling the TCP routing feature

      Note: You can locate the static IP address of each load balancer by clicking its name under Networks > Load balancing in the GCP Console, or by listing the forwarding rules with the gcloud command shown at the end of this step.

  2. Log in to the DNS registrar that hosts your domain. Examples of DNS registrars include Network Solutions, GoDaddy, and Register.com.

  3. Create A records with your DNS registrar that map domain names to the public static IP addresses of the load balancers located above:

    • *.YOURSYSTEMDOMAIN: pcf-router. Required. Example: *.system.example.com
    • *.YOURAPPSDOMAIN: pcf-router. Required. Example: *.apps.example.com
    • doppler.YOURSYSTEMDOMAIN: pcf-websockets. Required. Example: doppler.system.example.com
    • loggregator.YOURSYSTEMDOMAIN: pcf-websockets. Required. Example: loggregator.system.example.com
    • ssh.YOURSYSTEMDOMAIN: pcf-ssh. Required to allow SSH access to apps. Example: ssh.system.example.com
    • tcp.YOURDOMAIN: the IP address of the TCP load balancer for TCP routing. Required only if you have enabled the TCP routing feature. Example: tcp.example.com

  4. Save changes within the web interface of your DNS registrar.

  5. In a terminal window, run the following dig command to confirm that you created your A record successfully:

    dig xyz.EXAMPLE.COM

    You should see the A record that you just created:

    ;; ANSWER SECTION:
    xyz.EXAMPLE.COM.      1767    IN  A 203.0.113.1

Note: You must complete this step before proceeding to Cloud Controller configuration. A difficult-to-resolve problem can occur if the wildcard domain is improperly cached before the A record is registered.
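
If you prefer the command line to the GCP Console, you can also list the load balancers' static IP addresses with the gcloud CLI. This is a hedged sketch: the pcf- prefix matches the example load balancer names used in this topic and may differ in your deployment.

    # List forwarding rules (one per load balancer) with their external IP addresses
    $ gcloud compute forwarding-rules list --filter="name~'pcf-'"

    # List any reserved static addresses in the project
    $ gcloud compute addresses list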

Step 5: Configure Domains

  1. Select Domains.

  2. Enter the system and application domains.

    • The System Domain defines your target when you push apps to Elastic Runtime.
    • The Apps Domain defines where Elastic Runtime serves your apps.

    Note: Pivotal recommends that you use the same domain name but different subdomain names for your system and app domains. For example, use system.example.com for your system domain, and apps.example.com for your apps domain.

  3. Click Save.

Step 6: Configure Networking

  1. Select Networking.

  2. Leave the Router IPs, SSH Proxy IPs, HAProxy IPs, and TCP Router IPs fields blank. You do not need to complete these fields when deploying PCF to GCP.

    Note: You specify load balancers in the Resource Config section of Elastic Runtime later on in the installation process. See the Configure Load Balancers section of this topic for more information.

  3. Under Certificate and Private Key for HAProxy and Router, provide an SSL certificate and private key. Starting in PCF v1.12, HAProxy and the Gorouter accept TLS communication by default.
    You can either provide a certificate signed by a Certificate Authority (CA) or click the Generate RSA Certificate link to generate a self-signed certificate in Ops Manager.

    For details about generating certificates in Ops Manager for your wildcard system domains, see the Providing a Certificate for Your SSL/TLS Termination Point topic.

  4. In the Minimum version of TLS supported by HAProxy and Router field, select the minimum version of TLS to use in HAProxy and Router communications. HAProxy and Router use TLS v1.2 by default. If you need to accommodate clients that use an older version of TLS, select a lower minimum version. For a list of TLS ciphers supported by the Gorouter, see Securing Traffic into Cloud Foundry.

  5. In the TLS Cipher Suites for Router field, specify the TLS cipher suites to use for TLS handshakes between the Gorouter and downstream clients such as load balancers or HAProxy. Use an ordered, colon-delimited list of Golang-supported TLS cipher suites in the OpenSSL format. The recommended setting is ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384. Verify that any client that initiates TLS handshakes with the Gorouter supports at least one of the configured cipher suites. For a list of TLS ciphers supported by the Gorouter, see Securing Traffic into Cloud Foundry. For a quick way to inspect these cipher strings, see the openssl example at the end of this step.

    Note: Specify cipher suites that are supported by the versions configured in the Minimum version of TLS supported by HAProxy and Router field.

  6. In the TLS Cipher Suites for HAProxy field, specify the TLS cipher suites to use for TLS handshakes between HAProxy and its clients, such as load balancers and the Gorouter. Use an ordered, colon-delimited list of TLS cipher suites in the OpenSSL format. The recommended setting is DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384.
    Verify that any client that initiates TLS handshakes with HAProxy supports at least one of the configured cipher suites.

    Note: Specify cipher suites that are supported by the versions configured in the Minimum version of TLS supported by HAProxy and Router field.

  7. Under HAProxy forwards requests to Router over TLS, select Enable or Disable based on your deployment layout.

    • Enable HAProxy forwarding of requests to Router over TLS
      If you want to encrypt communication between HAProxy and the Gorouter, configure the following:
      1. Leave Enable selected.
      2. In the Certificate Authority for HAProxy Backend field, specify the Certificate Authority (CA) that signed the certificate you configured in the Certificate and Private Key for HAProxy and Router field.

        Note: If you used the Generate RSA Certificate link to generate a self-signed certificate, then the CA to specify is the Ops Manager CA, which you can locate at the CA endpoint in the Ops Manager API.

      3. Make sure that the Gorouter and HAProxy have TLS cipher suites in common in the TLS Cipher Suites for Router and TLS Cipher Suites for HAProxy fields.
    • Disable HAProxy forwarding of requests to Router over TLS
      If you want to use non-encrypted communication between HAProxy and the Gorouter, or you are not using HAProxy, configure the following:
      1. Select Disable.
      2. If you are not using HAProxy, set the number of HAProxy job instances to 0 on the Resource Config page. See Configuring Resources.

  8. If you are not using SSL encryption or if you are using self-signed certificates, select Disable SSL certificate verification for this environment. Selecting this checkbox also disables SSL verification for route services.

    Note: For production deployments, Pivotal does not recommend disabling SSL certificate verification.

  9. (Optional) If you want HAProxy or the Gorouter to reject any HTTP (non-encrypted) traffic, select the Disable HTTP on HAProxy and Gorouter checkbox. When selected, HAProxy and Gorouter will not listen on port 80.

  10. Select the Disable insecure cookies on the Router checkbox to set the secure flag for cookies generated by the router.

  11. To disable the addition of Zipkin tracing headers on the Gorouter, deselect the Enable Zipkin tracing headers on the router checkbox. Zipkin tracing headers are enabled by default. For more information about using Zipkin trace logging headers, see Zipkin Tracing in HTTP Headers.

  12. By default, the Elastic Runtime routers handle traffic for applications deployed to an isolation segment created by the PCF Isolation Segment tile. To configure the Elastic Runtime routers to reject requests for applications within isolation segments, select the Routers reject requests for Isolation Segments checkbox. Do not enable this option without deploying routers for each isolation segment. For more information, see the PCF Isolation Segment documentation.

  13. Under Configure the CF Router support for the X-Forwarded-Client-Cert header, configure how the Gorouter handles x-forwarded-client-cert (XFCC) HTTP headers. The following table indicates which option to choose based on your deployment layout.

    If your deployment is configured as follows:
    • Load balancer is terminating TLS, and
    • Load balancer is configured to put the client certificate from a mutual authentication TLS handshake into the X-Forwarded-Client-Cert HTTP header, and
    • Requests to Gorouter are unencrypted (whether or not HAProxy is present),
    then select Always forward the XFCC header in the request, regardless of whether the client connection is mTLS (default).

    If your deployment is configured as follows:
    • Load balancer is terminating TLS, and
    • Load balancer is configured to put the client certificate from a mutual authentication TLS handshake into the X-Forwarded-Client-Cert HTTP header, and
    • Requests to Gorouter are encrypted (whether or not HAProxy is present),
    then select Forward the XFCC header received from the client only when the client connection is mTLS.

    If your deployment is configured as follows:
    • Load balancer is not terminating TLS (configured as pass through), and
    • Gorouter is terminating TLS,
    then select Strip the XFCC header when present and set it to the client certificate from the mTLS handshake.

    For a description of the behavior of each configuration option, see Forward Client Certificate to Applications.

  14. In the Choose whether or not to enable route services section, choose either Enable route services or Disable route services. Route services are a class of marketplace services that perform filtering or content transformation on application requests and responses. See the Route Services topic for details.

  15. (Optional) If you want to limit the number of app connections to the backend, enter a value in the Max Connections Per Backend field. You can use this field to prevent a poorly behaving app from using all the connections and impacting other apps.

    To choose a value for this field, review the peak concurrent connections received by instances of the most popular apps in your deployment. You can determine the number of concurrent connections for an app from the httpStartStop event metrics emitted for each app request.

    If your deployment uses PCF Metrics, you can also obtain this peak concurrent connection information from Network Metrics. The default value of 0 means that there is no limit.

  16. Enter a value for Router Max Idle Keepalive Connections. See Considerations for Configuring max_idle_connections.

  17. (Optional) To accommodate larger uploads over connections with high latency, increase the number of seconds in the Router Timeout to Backends field.

  18. (Optional) Use the Frontend Idle Timeout for Gorouter and HAProxy field to help prevent connections from your load balancer to Gorouter or HAProxy from being closed prematurely. The value you enter sets the duration, in seconds, that Gorouter or HAProxy maintains an idle open connection from a load balancer that supports keep-alive.

    In general, set the value higher than your load balancer’s backend idle timeout to avoid the race condition where the load balancer sends a request before it discovers that Gorouter or HAProxy has closed the connection.

    See the following table for specific guidance and exceptions to this rule:

    • AWS: AWS ELB has a default timeout of 60 seconds, so Pivotal recommends a value greater than 60.
    • Azure: By default, the Azure load balancer times out at 240 seconds without sending a TCP RST to clients, so as an exception, Pivotal recommends a value lower than 240 to force the load balancer to send the TCP RST.
    • GCP: GCP has a default timeout of 600 seconds, so Pivotal recommends a value greater than 600.
    • Other: Set the timeout value to be greater than that of the load balancer’s backend idle timeout.

  19. (Optional) Increase the value of Load Balancer Unhealthy Threshold to specify the amount of time, in seconds, that the router continues to accept connections before shutting down. During this period, healthchecks may report the router as unhealthy, which causes load balancers to failover to other routers. Set this value to an amount greater than or equal to the maximum time it takes your load balancer to consider a router instance unhealthy, given contiguous failed healthchecks.

  20. (Optional) Modify the value of Load Balancer Healthy Threshold. This field specifies the amount of time, in seconds, to wait until declaring the Router instance started. This allows an external load balancer time to register the Router instance as healthy.

  21. (Optional) If app developers in your organization want certain HTTP headers to appear in their app logs with information from the Gorouter, specify them in the HTTP Headers to Log field. For example, to support app developers that deploy Spring apps to PCF, you can enter Spring-specific HTTP headers.

  22. If you expect requests larger than the default maximum of 16 Kbytes, enter a new value (in bytes) for HAProxy Request Max Buffer Size. You may need to do this, for example, to support apps that embed a large cookie or query string values in headers.

  23. If your PCF deployment uses HAProxy and you want it to receive traffic only from specific sources, use the following fields:

    • Protected Domains: Enter a comma-separated list of domains from which PCF can receive traffic.
    • Trusted CIDRs: Optionally, enter a space-separated list of CIDRs to limit which IP addresses from the Protected Domains can send traffic to PCF.

  24. (Optional) You can change the value in the Applications Network Maximum Transmission Unit (MTU) field. Pivotal recommends setting the MTU value for your application network to 1454. Some configurations, such as networks that use GRE tunnels, may require a smaller MTU value.

  25. The Loggregator Port defaults to 443 if left blank. Enter a new value to override the default.

  26. (Optional) Enter an IP range for the overlay network in the Overlay Subnet box. If you do not set a custom range, Ops Manager uses 10.255.0.0/16.

    WARNING: The overlay network IP range must not conflict with any other IP addresses in your network.

  27. Enter a UDP port number in the VXLAN Tunnel Endpoint Port box. If you do not set a custom port, Ops Manager uses 4789.

  28. To enable logging for app traffic, select Enable (will increase log volume) under Log traffic for all accepted/denied application packets. App traffic logging generates log messages as follows:

    • TCP traffic: Logs the first packet of every new TCP connection.
    • UDP traffic: Logs UDP packets sent and received, up to a maximum per-second rate for each container. Set this rate limit in the UDP logging interval field (default: 100).
    • Packets denied: Logs packets blocked by either a container-specific networking policy or by Application Security Group rules applied across the space, org, or deployment. Logs packet denials up to a maximum per-second rate for each container, set in the Denied logging interval field (default: 1).
      See Manage Logging for Container-to-Container Networking for more information.

  29. TCP Routing is disabled by default. To enable this feature, perform the following steps:

    1. Select Enable TCP Routing.
    2. In TCP Routing Ports, enter a range of ports to be allocated for TCP Routes.

      For each TCP route you want to support, you must reserve a range of ports. This is the same range of ports you configured your load balancer with in the Pre-Deployment Steps, unless you configured DNS to resolve the TCP domain name to the TCP router directly.

      The TCP Routing Ports field accepts a comma-delimited list of individual ports and ranges, for example 1024-1099,30000,60000-60099. Configuration of this field is only applied on the first deploy; subsequent updates to the port range are made using the cf CLI. For details about modifying the port range, see the Router Groups topic.

    3. For GCP, you also need to specify the name of a GCP TCP load balancer in the LOAD BALANCERS column of the TCP Router job on the Resource Config screen. You configure this later in the installation process. See the Configure Load Balancers section of this topic.

  30. To disable TCP routing, click Select this option if you prefer to enable TCP Routing at a later time. For more information, see the Configuring TCP Routing in Elastic Runtime topic.

  31. Click Save.
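
As a quick sanity check on the cipher suite strings above, you can expand an OpenSSL-format list locally and, after deployment, confirm that a handshake succeeds with one of the configured suites. This is an illustrative sketch only; the hostname is an example, and these commands are not part of the Ops Manager configuration.

    # Expand the recommended Gorouter cipher string to see the individual suites it selects
    $ openssl ciphers -v 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384'

    # After deployment, confirm that a TLS 1.2 handshake succeeds using one of those suites
    $ openssl s_client -connect api.system.example.com:443 -tls1_2 -cipher ECDHE-RSA-AES128-GCM-SHA256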

Step 7: Configure Application Containers

  1. Select Application Containers.

  2. The Enable Custom Buildpacks checkbox governs the ability to pass a custom buildpack URL to the -b option of the cf push command. By default, this ability is enabled, letting developers use custom buildpacks when deploying apps. To disable this option, deselect the checkbox. For more information about custom buildpacks, refer to the buildpacks section of the PCF documentation.

  3. The Allow SSH access to app containers checkbox controls SSH access to application instances. Enable the checkbox to permit SSH access across your deployment, and disable it to prevent all SSH access. See the Application SSH Overview topic for information about SSH access permissions at the space and app scope.

  4. If you want to enable SSH access for new apps by default in spaces that allow SSH, select Enable SSH when an app is created. If you deselect the checkbox, developers can still enable SSH after pushing their apps by running cf enable-ssh APP-NAME.

  5. You can configure Elastic Runtime to run app instances in Docker containers by supplying their IP address range(s) in the Private Docker Insecure Registry Whitelist textbox. See the Using Docker Registries topic for more information.

  6. By default, containers use the same DNS servers as the host. If you want to override the DNS servers to be used in containers, enter a comma-separated list of servers in DNS Servers.

  7. Select your preference for Docker Images Disk-Cleanup Scheduling on Cell VMs. If you choose Clean up disk-space once threshold is reached, enter a Threshold of Disk-Used in megabytes. For more information about the configuration options and how to configure a threshold, see Configuring Docker Images Disk-Cleanup Scheduling.

  8. Enter a number in the Max Inflight Container Starts textbox. This number configures the maximum number of started instances across your deployment’s Diego Cells. For more information about this feature, see Setting a Maximum Number of Started Containers.

  9. Under Enabling NFSv3 volume services, select Enable or Disable. NFS volume services allow application developers to bind existing NFS volumes to their applications for shared file access. For more information, see the Enabling NFS Volume Services topic.

    Note: In a clean install, NFSv3 volume services is enabled by default. In an upgrade, NFSv3 volume services is set to the same setting as it was in the previous deployment.

  10. (Optional) To configure LDAP for NFSv3 volume services, perform the following steps (for a command-line check of the service account, see the example after this list):

    • For LDAP Service Account User, enter the username of the service account in LDAP that will manage volume services.
    • For LDAP Service Account Password, enter the password for the service account.
    • For LDAP Server Host, enter the hostname or IP address of the LDAP server.
    • For LDAP Server Port, enter the LDAP server port number. If you do not specify a port number, Ops Manager uses 389.
    • For LDAP Server Protocol, enter the server protocol. If you do not specify a protocol, Ops Manager uses TCP.
    • For LDAP User Fully-Qualified Domain Name, enter the fully qualified path to the LDAP service account. For example, if you have a service account named volume-services that belongs to organizational units (OU) named service-accounts and my-company, and your domain is named domain, the fully qualified path looks like the following:
      CN=volume-services,OU=service-accounts,OU=my-company,DC=domain,DC=com
  11. By default, Elastic Runtime manages container images using the GrootFS plugin for Garden-runC. If you experience issues with GrootFS, you can disable the plugin and use the image plugin built into Garden-runC.

  12. Click Save.
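
If you configured LDAP for NFSv3 volume services above, you can confirm the service account details from any machine with the OpenLDAP client tools before deploying. This is a sketch that reuses the example names from the step above; the LDAP hostname is an assumption.

    # Bind as the volume services account and look it up; -W prompts for the account password
    $ ldapsearch -H ldap://ldap.example.com:389 \
        -D "CN=volume-services,OU=service-accounts,OU=my-company,DC=domain,DC=com" \
        -W -b "DC=domain,DC=com" "(cn=volume-services)"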

Step 8: Configure Application Developer Controls

  1. Select Application Developer Controls.

  2. Enter the Maximum File Upload Size (MB). This is the maximum size of an application upload.

  3. Enter the Default App Memory (MB). This is the amount of RAM allocated by default to a newly pushed application if no value is specified with the cf CLI.

  4. Enter the Default App Memory Quota per Org. This is the default memory limit for all applications in an org. The specified limit only applies to the first installation of Elastic Runtime. After the initial installation, operators can change the default value using the cf CLI, as shown in the example after this list.

  5. Enter the Maximum Disk Quota per App (MB). This is the maximum amount of disk allowed per application.

    Note: If you allow developers to push large applications, Elastic Runtime may have trouble placing them on Cells. Additionally, in the event of a system upgrade or an outage that causes a rolling deploy, larger applications may not successfully re-deploy if there is insufficient disk capacity. You should scale your deployment to ensure your Cells have sufficient disk to run your applications.

  6. Enter the Default Disk Quota per App (MB). This is the amount of disk allocated by default to a newly pushed application if no value is specified with the cf CLI.

  7. Enter the Default Service Instances Quota per Org. The specified limit only applies to the first installation of Elastic Runtime. After the initial installation, operators can change the default value using the cf CLI.

  8. Enter the Staging Timeout (Seconds). When you stage an application droplet with the Cloud Controller, the server times out after the number of seconds you specify in this field.

  9. Click Save.
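
Several of the defaults above, such as the Default App Memory Quota per Org and the Default Service Instances Quota per Org, only seed the first installation; after that, operators adjust them with the cf CLI. A minimal sketch, assuming the default quota is named default and you are logged in as an administrator:

    # Inspect the current default quota
    $ cf quota default

    # Raise the total memory and service instance limits on the default quota
    $ cf update-quota default -m 100G -s 200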

Step 9: Review Application Security Groups

Setting appropriate Application Security Groups is critical for a secure deployment. Type X in the box to acknowledge that once the Elastic Runtime deployment completes, you will review and set the appropriate application security groups. See Restricting App Access to Internal PCF Components for instructions.

Step 10: Configure UAA

  1. Select UAA.

  2. (Optional) Under JWT Issuer URI, enter the URI that UAA uses as the issuer when generating tokens.

  3. Under SAML Service Provider Credentials, enter a certificate and private key to be used by UAA as a SAML Service Provider for signing outgoing SAML authentication requests. You can provide an existing certificate and private key from your trusted Certificate Authority or generate a self-signed certificate (for an example openssl command, see the sketch following this list). The following domains must be associated with the certificate: login.YOUR-SYSTEM-DOMAIN and *.login.YOUR-SYSTEM-DOMAIN.

    Note: The Pivotal Single Sign-On Service and Pivotal Spring Cloud Services tiles require the *.login.YOUR-SYSTEM-DOMAIN domain.

  4. If the private key specified under Service Provider Credentials is password-protected, enter the password under SAML Service Provider Key Password.

  5. (Optional) In the Apps Manager Access Token Lifetime, Apps Manager Refresh Token Lifetime, Cloud Foundry CLI Access Token Lifetime, and Cloud Foundry CLI Refresh Token Lifetime fields, change the lifetimes of tokens granted for Apps Manager and Cloud Foundry Command Line Interface (cf CLI) login access and refresh. Most deployments use the defaults.

  6. (Optional) Customize the text prompts used for username and password from the cf CLI and Apps Manager login popup by entering values for Customize Username Label (on login page) and Customize Password Label (on login page).

  7. (Optional) The Proxy IPs Regular Expression field contains a pipe-delimited set of regular expressions that UAA considers to be reverse proxy IP addresses. UAA respects the x-forwarded-for and x-forwarded-proto headers coming from IP addresses that match these regular expressions. To configure UAA to respond properly to Router or HAProxy requests coming from a public IP address, append a regular expression or regular expressions to match the public IP address.

  8. You can configure UAA to use the internal MySQL database provided with PCF, or you can configure an external database provider. Follow the procedures in either the Internal Database Configuration or the External Database Configuration section below.

Note: If you are performing an upgrade, do not modify your existing internal database configuration or you may lose data. You must migrate your existing data before changing the configuration. See Upgrading Pivotal Cloud Foundry for additional upgrade information, and contact Pivotal Support for help.
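
If you prefer to generate the SAML service provider certificate outside Ops Manager, one option is a self-signed certificate covering both required names. This is a sketch, not a requirement: it assumes OpenSSL 1.1.1 or later (for -addext) and uses system.example.com as the system domain.

    # Generate a self-signed certificate and key valid for login.SYSTEM-DOMAIN and *.login.SYSTEM-DOMAIN
    $ openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
        -keyout saml-sp.key -out saml-sp.crt \
        -subj "/CN=login.system.example.com" \
        -addext "subjectAltName=DNS:login.system.example.com,DNS:*.login.system.example.com"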

Internal Database Configuration

  1. Select Internal MySQL.

  2. Click Save.

  3. Ensure that you complete the “Configure Internal MySQL” step later in this topic to configure high availability and automatic backups for your internal MySQL databases.

External Database Configuration

Note: The exact procedure to create databases depends upon the database provider you select for your deployment. The following procedure uses AWS RDS as an example, but UAA also supports Azure SQL Server.

Warning: Protect whichever database you use in your deployment with a password.

To create your UAA database, perform the following steps:

  1. Add the ubuntu account key pair from your IaaS deployment to your local SSH profile so you can access the Ops Manager VM. For example, in AWS, you add a key pair created in AWS:

    $ ssh-add aws-keypair.pem
  2. SSH in to your Ops Manager using the Ops Manager FQDN and the username ubuntu:

    $ ssh ubuntu@OPS-MANAGER-FQDN
  3. Log in to your MySQL database instance using the appropriate hostname and user login values configured in your IaaS account. For example, to log in to your AWS RDS instance, run the following MySQL command:

    $ mysql --host=RDSHOSTNAME --user=RDSUSERNAME --password=RDSPASSWORD

  4. Run the following MySQL commands to create a database for UAA:

    CREATE database uaa;

  5. Type exit to quit the MySQL client, and exit again to close your connection to the Ops Manager VM.

  6. From the UAA section in Elastic Runtime, select External.

  7. For Hostname, enter the hostname of the database server.

  8. For TCP Port, enter the port of the database server.

  9. For User Account and Authentication database username, specify a unique username that can access this specific database on the database server. If you need to create such a user, see the example statements after this procedure.

  10. For User Account and Authentication database password, specify a password for the provided username.

  11. Click Save.
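
The External settings above expect a user that can access the uaa database. If you still need to create one, the following statements, run from the MySQL session opened in the earlier steps, are a sketch; the username, password, and host wildcard are examples and should match your provider's security requirements.

    -- Create a dedicated user for UAA and grant it access to the uaa database only
    CREATE USER 'uaa_user'@'%' IDENTIFIED BY 'SECURE-PASSWORD';
    GRANT ALL PRIVILEGES ON uaa.* TO 'uaa_user'@'%';
    FLUSH PRIVILEGES;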

Step 11: Configure Authentication and Enterprise SSO

  1. Select Authentication and Enterprise SSO.

  2. To authenticate user sign-ons, your deployment can use one of three types of user database: the UAA server’s internal user store, an external SAML identity provider, or an external LDAP server.

    • To use the internal UAA, select the Internal option and follow the instructions in the Configuring UAA Password Policy topic to configure your password policy.
    • To connect to an external identity provider through SAML, scroll down to select the SAML Identity Provider option and follow the instructions in the Configuring PCF for SAML section of the Configuring Authentication and Enterprise SSO for Elastic Runtime topic.
    • To connect to an external LDAP server, scroll down to select the LDAP Server option and follow the instructions in the Configuring LDAP section of the Configuring Authentication and Enterprise SSO for Elastic Runtime topic.
  3. Click Save.

Step 12: Configure System Databases

You can configure Elastic Runtime to use the internal MySQL database provided with PCF, or you can configure an external database provider for the databases required by Elastic Runtime.

Note: If you are performing an upgrade, do not modify your existing internal database configuration or you may lose data. You must migrate your existing data first before changing the configuration. Contact Pivotal Support for help. See Upgrading Pivotal Cloud Foundry for additional upgrade information.

Internal Database Configuration

If you want to use internal databases for your deployment, perform the following steps:

  1. Select Databases.

  2. Select Internal Databases - MySQL.

  3. Click Save.

Then proceed to Step 13: (Optional) Configure Internal MySQL to configure high availability and automatic backups for your internal MySQL databases.

External Database Configuration

Note: To configure an external database for UAA, see the External Database Configuration section of Configure UAA.

Note: The exact procedure to create databases depends upon the database provider you select for your deployment. The following procedure uses AWS RDS as an example. You can configure a different database provider that provides MySQL support, such as Google Cloud SQL.

Warning: Protect whichever database you use in your deployment with a password.

To create your Elastic Runtime databases, perform the following steps:

  1. Add the ubuntu account key pair from your IaaS deployment to your local SSH profile so you can access the Ops Manager VM. For example, in AWS, you add a key pair created in AWS:

    $ ssh-add aws-keypair.pem
  2. SSH in to your Ops Manager using the Ops Manager FQDN and the username ubuntu:

    $ ssh ubuntu@OPS-MANAGER-FQDN
  3. Log in to your MySQL database instance using the appropriate hostname and user login values configured in your IaaS account. For example, to log in to your AWS RDS instance, run the following MySQL command:

    $ mysql --host=RDSHOSTNAME --user=RDSUSERNAME --password=RDSPASSWORD

  4. Run the following MySQL commands to create databases for the eleven Elastic Runtime components that require a relational database:

    CREATE database ccdb;
    CREATE database notifications;
    CREATE database autoscale;
    CREATE database app_usage_service;
    CREATE database routing;
    CREATE database diego;
    CREATE database account;
    CREATE database nfsvolume;
    CREATE database networkpolicyserver;
    CREATE database silk;
    CREATE database locket;
    

  5. Type exit to quit the MySQL client, and exit again to close your connection to the Ops Manager VM.

  6. In Elastic Runtime, select Databases.

  7. Select the External Databases option.

  8. For Hostname, enter the hostname of the database server.

  9. For TCP Port, enter the port of the database server.

  10. Each component that requires a relational database has two corresponding fields: one for the database username and one for the database password. For each set of fields, specify a unique username that can access this specific database on the database server and a password for the provided username. If you need to create these users, see the example loop after this procedure.

  11. Click Save.
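
Each of the eleven databases above needs its own username and password. If you have not created them yet, a hedged sketch follows; it reuses the RDSHOSTNAME, RDSUSERNAME, and RDSPASSWORD placeholders from the earlier steps, and in practice each user should get a unique password rather than the single placeholder shown here.

    # Create one user per Elastic Runtime database and grant it access to that database only
    $ for db in ccdb notifications autoscale app_usage_service routing diego account nfsvolume networkpolicyserver silk locket; do
        mysql --host=RDSHOSTNAME --user=RDSUSERNAME --password=RDSPASSWORD \
          -e "CREATE USER '${db}_user'@'%' IDENTIFIED BY 'SECURE-PASSWORD'; GRANT ALL PRIVILEGES ON ${db}.* TO '${db}_user'@'%';"
      done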

Step 13: (Optional) Configure Internal MySQL

Note: You only need to configure this section if you have selected Internal Databases - MySQL in the Databases section.

  1. Select Internal MySQL.

  2. In the MySQL Proxy IPs field, enter one or more comma-delimited IP addresses that are not in the reserved CIDR range of your network. If a MySQL node fails, these proxies re-route connections to a healthy node. See the Proxy section of the MySQL for PCF topic for more information.

  3. (Optional) Configure round-robin DNS to spread requests across your MySQL proxies. Only perform this step if you want to approximate load balancing on your internal MySQL proxies.

    1. Create a DNS A record that round-robins across your MySQL proxy IP addresses.
    2. In the MySQL Service Hostname field, enter the hostname you created for round-robin DNS. If you leave this field blank, components are configured with the IP address of the first proxy instance entered in the MySQL Proxy IPs field.

      Caution: Round-robin DNS does not handle component availability as well as a load balancer. If one or more of the database proxies fail, components that rely on the MySQL database can become unavailable. At time of publication, GCP load balancers only support access to public IP addresses.

  4. In the Replication canary time period field, leave the default of 30 seconds or modify the value based on your deployment requirements. Lower numbers cause the canary to run more frequently, which adds load to the database.

  5. In the Replication canary read delay field, leave the default of 20 seconds or increase the value. This field configures how long the canary waits, in seconds, before verifying that data is replicating across each MySQL node. Clusters under heavy load can experience a small replication lag as write-sets are committed across the nodes.

  6. (Required) In the E-mail address field, enter the email address where the MySQL service should send alerts when the cluster experiences a replication issue or when a node is not allowed to auto-rejoin the cluster.

  7. To prohibit the creation of command line history files on the MySQL nodes, deselect the Allow Command History checkbox.

  8. For Cluster Probe Timeout, enter the maximum amount of time, in seconds, that a new node will search for existing cluster nodes. If left blank, the default value is 10 seconds.

  9. Under Automated Backups Configuration, choose one of five options for MySQL backups:

    • Disable automatic backups of MySQL disables automatic backups, but you can still deploy the Backup Prepare Node if you use BOSH Backup and Restore to back up your MySQL database. For more information, see the Backing Up Pivotal Cloud Foundry with BBR topic.
    • Enable automated backups from MySQL to an S3 bucket or other S3-compatible file store saves your backups to an existing Amazon Web Services (AWS) or Ceph S3-compatible blobstore. This option requires the following fields:
      • For S3 Bucket Name, enter the name of your S3 bucket. Do not include an s3:// prefix, a trailing /, or underscores. If the bucket does not already exist, it will be created automatically.
      • For Bucket Path, specify a folder within the bucket to hold your MySQL backups. Do not include a trailing /.
      • For S3 Bucket Region, enter the AWS region where the bucket is located, such as us-east-1.
      • For AWS Access Key ID and AWS Secret Access Key, enter your AWS or Ceph credentials.
      • For Cron Schedule, enter a valid cron expression to schedule your automated backups (sample expressions appear after this list). Cron uses your computer’s local time zone.
      • Enable Backup All Nodes to make unique backups from each instance of the MySQL server rather than just the first MySQL server instance.
    • Enable automated backups from MySQL to Google Cloud Storage saves your backups to Google Cloud Storage. This option requires the following fields:
      • For GCP Service Account Key JSON, enter the name of a Google Cloud Platform (GCP) Service Account Key with access to the project and bucket specified below. This key must be in JSON format.
      • For GCP Project ID, enter the project ID of your GCP project. You can find the project ID on the Dashboard of the GCP Console.
      • For GCP Storage Bucket Name, enter the name of a bucket in Google Cloud Storage where your backups will be uploaded. If the bucket does not already exist, it will be created automatically.
      • For Cron Schedule, enter a valid cron expression to schedule your automated backups. Cron uses your computer’s local time zone.
      • Enable Backup All Nodes to make unique backups from each instance of the MySQL server rather than just the first MySQL server instance.
    • Enable automated backups from MySQL to Azure saves your backups to Azure. This option requires the following fields:
      • For Azure Storage Account, enter the name of an existing Azure storage account where backups will be uploaded. For more information about creating and managing an Azure storage account, see the Azure documentation.
      • For Azure Storage Access Key, enter an Azure storage access key for the storage account.
      • For Azure Storage Container, enter the name of an existing Azure storage container that will store the backups.
      • For Backup Path, enter the path within the Azure storage container where backups will be uploaded.
      • For Cron Schedule, enter a valid cron expression to schedule your automated backups. Cron uses your computer’s local time zone.
      • Enable Backup All Nodes to make unique backups from each instance of the MySQL server rather than just the first MySQL server instance.
    • Enable automated backups from MySQL to a remote host via SCP saves your backups to a remote host using secure copy protocol (SCP). This option requires the following fields:
      • For Hostname, enter the name of your SCP host.
      • For Port, enter your SCP port. This should be the TCP port that your SCP host uses for SSH. The default port is 22.
      • For Username, enter your SSH username for the SCP host.
      • For Private key, paste in your SSH private key.
      • For Destination directory, enter the directory on the SCP host where you want to save backup files.
      • For Cron Schedule, enter a valid cron expression to schedule your automated backups. Cron uses your computer’s local time zone.
      • Enable Backup All Nodes to make unique backups from each instance of the MySQL server rather than just the first MySQL server instance.

        Note: If you choose to enable automated MySQL backups, set the number of instances for the Backup Prepare Node under the Resource Config section of the Elastic Runtime tile to 1.

  10. If you want to log audit events for internal MySQL, select Enable server activity logging under Server Activity Logging.

    1. For the Event types field, you can enter the events you want the MySQL service to log. By default, this field includes connect and query, which tracks who connects to the system and what queries are processed. For more information, see the Logging Events section of the MariaDB documentation.

  11. Enter values for the following fields:

    • Load Balancer Healthy Threshold: Specifies the amount of time, in seconds, to wait until declaring the MySQL proxy instance started. This allows an external load balancer time to register the instance as healthy.
    • Load Balancer Unhealthy Threshold: Specifies the amount of time, in seconds, that the MySQL proxy continues to accept connections before shutting down. During this period, the healthcheck reports as unhealthy to cause load balancers to fail over to other proxies. You must enter a value greater than or equal to the maximum time it takes your load balancer to consider a proxy instance unhealthy, given repeated failed healthchecks.
  12. If you want to enable the MySQL interruptor feature, select the checkbox to Prevent node auto re-join. This feature stops all writes to the MySQL database if it notices an inconsistency in the dataset between the nodes. For more information, see the Interruptor section in the MySQL for PCF documentation.

  13. Click Save.
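
Each automated backup option above asks for a Cron Schedule. A cron expression has five fields: minute, hour, day of month, month, and day of week. Two hypothetical examples:

    # Run a backup every day at 2:00 AM
    0 2 * * *

    # Run a backup every Sunday at 3:30 AM
    30 3 * * 0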

Step 14: Configure File Storage

To minimize system downtime, Pivotal recommends using highly resilient and redundant external filestores for your Elastic Runtime file storage.

When configuring file storage for the Cloud Controller in Elastic Runtime, you can select one of the following:

  • Internal WebDAV filestore
  • External S3-compatible or Ceph-compatible filestore
  • External Google Cloud Storage
  • External Azure Cloud Storage

For production-level PCF deployments on GCP, Pivotal recommends selecting External Google Cloud Storage. For more information about production-level PCF deployments on GCP, see the Reference Architecture for Pivotal Cloud Foundry on GCP.

For additional factors to consider when selecting file storage, see the Considerations for Selecting File Storage in Pivotal Cloud Foundry topic.

Internal Filestore

Internal file storage is only appropriate for small, non-production deployments.

To use the PCF internal filestore, perform the following steps:

  1. In the Elastic Runtime tile, select File Storage.

  2. Select Internal WebDAV, and click Save.

External Google Cloud Storage

To use external Google file storage for your Elastic Runtime filestore, perform the following steps:

  1. Select the External Google Cloud Storage option.
  2. Enter values for Access Key and Secret Key. To obtain the values for these fields:
    • In the GCP Console, navigate to the Storage tab, then click Settings.
    • Click Interoperability.
    • If necessary, click Enable interoperability access. If interoperability access is already enabled, confirm that the default project matches the project where you are installing PCF.
    • Click Create a new key.
    • Copy and paste the generated values into the corresponding Elastic Runtime fields. PCF uses these values for authentication when connecting to Google Cloud Storage.
  3. To create buckets in GCP, perform the following steps:
    • In the GCP Console, navigate to the Storage tab, then click Create Bucket.
    • Enter a unique bucket name.
    • For the Default storage class, select Regional.
    • From the Regional location dropdown, select the region associated with your PCF deployment.
    • Click Create. When the bucket is created, return to Elastic Runtime to configure the bucket names.
  4. For the Buildpacks Bucket Name, enter the name of the bucket for storing your app buildpacks.
  5. For Droplets Bucket Name, enter the name of the bucket for your app droplet storage. Pivotal recommends that you use a unique bucket, but you can use the same bucket as the previous step.
  6. For Resources Bucket Name, enter the name of the bucket for resources. Pivotal recommends that you use a unique bucket, but you can use the same bucket as the previous step.
  7. For Packages Bucket Name, enter the name of the bucket for packages. Pivotal recommends that you use a unique bucket, but you can use the same bucket as the previous step.
  8. Click Save.

Other IaaS Storage Options

Azure Storage and External S3-Compatible File Storage are also available as file storage options, but Pivotal does not recommend these for a typical PCF on GCP installation.

Step 15: (Optional) Configure System Logging

If you forward logging messages to an external Reliable Event Logging Protocol (RELP) server, complete the following steps:

  1. Select the System Logging section, located within your Pivotal Elastic Runtime Settings tab.
  2. Enter the IP address of your syslog server in Address.
  3. Enter the port of your syslog server in Port. The default port for a syslog server is 514.

    Note: The host must be reachable from the Elastic Runtime network, accept TCP connections, and use the RELP protocol. Ensure your syslog server listens on external interfaces. For a quick reachability check, see the example after this list.

  4. Select a Transport Protocol to use when forwarding logs.
  5. If you plan to use TLS encryption when sending logs to the remote server, select Yes when answering the Encrypt syslog using TLS? question.
    1. In the Permitted Peer field, enter either the name or SHA1 fingerprint of the remote peer.
    2. In the TLS CA Certificate field, enter the TLS CA Certificate for the remote server.
  6. For the Syslog Drain Buffer Size, enter the number of messages the Doppler server can hold from Metron agents before the server starts to drop them. See the Loggregator Guide for Cloud Foundry Operators topic for more details.
  7. If you want to include security events in your log stream, select the Enable Cloud Controller security event logging checkbox. This logs all API requests, including the endpoint, user, source IP address, and request result, in the Common Event Format (CEF).
  8. Click Save.
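
Before saving, you can confirm that the syslog endpoint accepts TCP connections from a host that can reach the Elastic Runtime network. A minimal sketch using netcat; the hostname and port are examples:

    # Check that the remote syslog server accepts TCP connections on the configured port
    $ nc -vz syslog.example.com 514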

Step 16: (Optional) Customize Apps Manager

The Custom Branding and Apps Manager sections customize the appearance and functionality of Apps Manager. Refer to Custom Branding Apps Manager for descriptions of the fields on these pages and for more information about customizing Apps Manager.

  1. Select Custom Branding. Use this section to configure the text, colors, and images of the interface that developers see when they log in, create an account, reset their password, or use Apps Manager.

  2. Click Save to save your settings in this section.

  3. Select Apps Manager.

  4. Select Enable Invitations to enable invitations in Apps Manager. Space Managers can invite new users for a given space, Org Managers can invite new users for a given org, and Admins can invite new users across all orgs and spaces. See the Inviting New Users section of the Managing User Roles with Apps Manager topic for more information.

  5. Select Display Marketplace Service Plan Prices to display the prices for your services plans in the Marketplace.

  6. Enter the Supported currencies as json to appear in the Marketplace. Use the format {"CURRENCY-CODE":"SYMBOL"}. This defaults to {"usd": "$", "eur": "€"}.

  7. Use Product Name, Marketplace Name, and Customize Sidebar Links to configure page names and sidebar links in the Apps Manager and Marketplace pages.

  8. Click Save to save your settings in this section.

Step 17: (Optional) Configure Email Notifications

Elastic Runtime uses SMTP to send invitations and confirmations to Apps Manager users. You must complete the Email Notifications page if you want to enable end-user self-registration.

  1. Select Email Notifications.

  2. Enter your reply-to and SMTP email information. You must use port 2525. Ports 25 and 587 are not allowed on GCP Compute Engine.

  3. For SMTP Authentication Mechanism, select none.

  4. Click Save.

Note: If you do not configure the SMTP settings using this form, the administrator must create orgs and users using the cf CLI tool. See Creating and Managing Users with the cf CLI for more information.

Step 18: Configure Cloud Controller

  1. Click Cloud Controller.

  2. Enter your Cloud Controller DB Encryption Key if all of the following are true:

    • You deployed Elastic Runtime previously.
    • You then stopped Elastic Runtime or it crashed.
    • You are re-deploying Elastic Runtime with a backup of your Cloud Controller database.

      See Backing Up Pivotal Cloud Foundry for more information.
  3. CF API Rate Limiting prevents API consumers from overwhelming the platform API servers. Limits are imposed on a per-user or per-client basis and reset on an hourly interval.

    To disable CF API Rate Limiting, select Disable under Enable CF API Rate Limiting. To enable CF API Rate Limiting, perform the following steps:

    1. Under Enable CF API Rate Limiting, select Enable.
    2. For General Limit, enter the number of requests a user or client is allowed to make over an hour interval for all endpoints that do not have a custom limit. The default value is 2000.
    3. For Unauthenticated Limit, enter the number of requests an unauthenticated client is allowed to make over an hour interval. The default value is 100.
  4. Under Enable secure communication between Diego and Cloud Controller?, ensure that Enable is selected.

    Note: Secure communication between Diego and Cloud Controller is enabled by default in Small Footprint Runtime. There is no selector to enable or disable.

    In previous versions of PCF, the Cloud Controller and Diego communicated insecurely and indirectly through the Cloud Controller Bridge. As of PCF v1.12, the Cloud Controller and Diego can communicate directly over secure TLS, without a bridge component. The Enable button selects this new option, enabling direct, secure communications and deactivating the Cloud Controller Bridge.

    In a fresh install of PCF v1.12, the Enable option is selected by default. In upgrades, operators must manually select Enable to deactivate the Cloud Controller Bridge and make the internal communications secure.

  5. Click Save.

Step 19: Configure Smoke Tests

The Smoke Tests errand runs basic functionality tests against your Elastic Runtime deployment after an installation or update. In this section, choose where to run smoke tests. In the Errands section, you can choose whether or not to run the Smoke Tests errand.

  1. Select Smoke Tests.

  2. If you have a shared apps domain, select Temporary space within the system organization, which creates a temporary space within the system organization for running smoke tests and deletes the space afterwards. Otherwise, select Specified org and space and complete the fields to specify where you want to run smoke tests.

  3. Click Save.

Step 20: (Optional) Enable Advanced Features

The Advanced Features section of Elastic Runtime includes new functionality that may have certain constraints. Although these features are fully supported, Pivotal recommends caution when using them in production environments.

Diego Cell Memory and Disk Overcommit

If your apps do not use the full allocation of disk space and memory set in the Resource Config tab, you might want to use this feature. These fields control how much to overcommit disk and memory resources on each Diego Cell VM.

For example, you might want to use the overcommit if your apps use a small amount of disk and memory capacity compared to the amounts set in the Resource Config settings for Diego Cell.

Note: Due to the risk of app failure and the deployment-specific nature of disk and memory use, Pivotal has no recommendation about how much, if any, memory or disk space to overcommit.

To enable overcommit, follow these steps:

  1. Select Advanced Features.

  2. Enter the total desired amount of Diego Cell memory, in MB, in the Cell Memory Capacity (MB) field. Refer to the Diego Cell row in the Resource Config tab for the current Cell memory capacity settings that this field overrides.

  3. Enter the total desired amount of Diego Cell disk capacity, in MB, in the Cell Disk Capacity (MB) field. Refer to the Diego Cell row in the Resource Config tab for the current Cell disk capacity settings that this field overrides.

  4. Click Save.

Note: Entries made to each of these two fields set the total amount of resources allocated, not the overage.

Whitelist for Non-RFC-1918 Private Networks

Some private networks require extra configuration so that internal file storage (WebDAV) can communicate with other PCF processes.

The Whitelist for non-RFC-1918 Private Networks field is provided for deployments that use a non-RFC 1918 private network. This is typically a private network other than 10.0.0.0/8, 172.16.0.0/12, or 192.168.0.0/16.

Most PCF deployments do not require any modifications to this field.

To add your private network to the whitelist, perform the following steps:

  1. Select Advanced Features.

  2. Append a new allow rule to the existing contents of the Whitelist for non-RFC-1918 Private Networks field. Include the word allow, the network CIDR range to allow, and a semicolon (;) at the end. For example: allow 172.99.0.0/24;

  3. Click Save.

CF CLI Connection Timeout

The CF CLI Connection Timeout field allows you to override the default five second timeout of the Cloud Foundry Command Line Interface (cf CLI) used within your PCF deployment. This timeout affects the cf CLI command used to push Elastic Runtime errand apps such as Notifications, Autoscaler, and Apps Manager.

Set the value of this field to a higher value, in seconds, if you are experiencing domain name resolution timeouts when pushing errands in Elastic Runtime.

To modify the value of the CF CLI Connection Timeout, perform the following steps:

  1. Select Advanced Features.

  2. Add a value, in seconds, to the CF CLI Connection Timeout field.

  3. Click Save.

Step 21: Configure Errands

Errands are scripts that Ops Manager runs automatically when it installs or uninstalls a product, such as a new version of Elastic Runtime. There are two types of errands: post-deploy errands run after the product is installed, and pre-delete errands run before the product is uninstalled.

By default, Ops Manager always runs pre-delete errands, and only runs post-deploy errands when the product has changed since the last time Ops Manager installed something. In Elastic Runtime, the Smoke Test Errand defaults to always run.

The Elastic Runtime tile Errands pane lets you change these run rules. For each errand, you can select On to run it always, Off to never run it, or When Changed to run it only when the product has changed since the last install.

For more information about how Ops Manager manages errands, see the Managing Errands in Ops Manager topic.

Note: Several errands deploy apps that provide services for your deployment, such as Autoscaling and Notifications. Once one of these apps is running, selecting Off for the corresponding errand on a subsequent installation does not stop the app.

Errands

  • Smoke Test Errand verifies that your deployment can do the following:

    • Push, scale, and delete apps
    • Create and delete orgs and spaces
  • Usage Service Errand deploys the Pivotal Usage Service application, which Apps Manager depends on.

  • Apps Manager Errand deploys Apps Manager, a dashboard for managing apps, services, orgs, users, and spaces. Until you deploy Apps Manager, you must perform these functions through the cf CLI. After Apps Manager has been deployed, Pivotal recommends setting this errand to Off for subsequent Elastic Runtime deployments. For more information about Apps Manager, see the Getting Started with the Apps Manager topic.

  • Notifications Errand deploys an API for sending email notifications to your PCF platform users.

    Note: The Notifications app requires that you configure SMTP with a username and password, even if you set the value of SMTP Authentication Mechanism to none.

  • Notifications UI Errand deploys a dashboard for users to manage notification subscriptions.

  • Pivotal Account Errand deploys Pivotal Account, a dashboard that allows users to create and manage their accounts. In the Pivotal Account dashboard, users can launch applications, manage their profiles, manage account security, manage notifications, and manage approvals. See the Enabling Pivotal Account topic for more information.

  • Autoscaling Errand enables you to configure your apps to automatically scale in response to changes in their usage load. See the Scaling an Application Using Autoscaler topic for more information.

  • Autoscaling Registration Errand makes the Autoscaling service available to your applications. Without this errand, you cannot bind the Autoscaling service to your apps.

  • NFS Broker Errand enables you to use NFS Volume Services by installing the NFS Broker app in Elastic Runtime. See the Enabling NFS Volume Services topic for more information.

Step 22: Configure Load Balancers

  1. Navigate to the GCP Console and click Load balancing.

    Config lb

    You should see the SSH load balancer, the HTTP(S) load balancer, the TCP WebSockets load balancer, and optionally, the TCP router that you created in the Create Load Balancers in GCP section of the Preparing to Deploy PCF on GCP topic.

  2. Record the name of your SSH load balancer and your TCP WebSockets load balancer. For example, pcf-ssh and pcf-websockets.

  3. Click your HTTP(S) load balancer. For example, pcf-router.

    Pcf router

  4. Under Backend services, record the name of the backend service of the HTTP(S) load balancer. For example, pcf-backend. (If you prefer the command line, a gcloud sketch appears after these steps.)

  5. In the Elastic Runtime tile, click Resource Config.

    Resource config

  6. Under the LOAD BALANCERS column of the Router row, enter a comma-delimited list consisting of the name of your TCP WebSockets load balancer and the name of your HTTP(S) load balancer backend with the protocol prepended. For example, tcp:pcf-websockets,http:pcf-backend.

    Note: Do not add a space after the comma between entries in the LOAD BALANCERS field, or the deployment fails.

    Note: If you are using HAProxy in your deployment, enter the above load balancer values in the LOAD BALANCERS field of the HAProxy row instead of the Router row. For a high availability configuration, scale up the HAProxy job to more than one instance.

  7. If you have enabled TCP routing in the Advanced Features pane and set up the TCP Load Balancer in GCP, add the name of your TCP load balancer, prepended with tcp:, to the LOAD BALANCERS column of the TCP Router row. For example, tcp:pcf-tcp-router.

  8. Under the LOAD BALANCERS column of the Diego Brain row, enter the name of your SSH load balancer prepended with tcp:. For example, tcp:pcf-ssh.

  9. Verify that the Internet Connected checkbox for every job is checked to allow the jobs to reach the Internet. This gives all VMs a public IP address that enables outbound Internet access.

    Note: If you want to provision a Network Address Translation (NAT) box to provide Internet connectivity to your VMs instead of providing them with public IP addresses, deselect the Internet Connected checkboxes. For more information about using NAT in GCP, see the GCP documentation.

  10. Click Save.
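
If you prefer the gcloud CLI to the GCP Console for locating the names used in steps 1 through 4 above, the following commands list the relevant resources in your current project. The pcf- names in the comments are only the examples used earlier in this topic:

```
# List forwarding rules to find the load balancer names (for example, pcf-ssh, pcf-websockets).
gcloud compute forwarding-rules list
# List backend services to find the HTTP(S) load balancer backend (for example, pcf-backend).
gcloud compute backend-services list
```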

Step 23: (Optional) Scale Down and Disable Resources

Note: The Resource Config pane has fewer VMs if you are installing the Small Footprint Runtime.

Note: The Small Footprint Runtime does not default to a highly available configuration; it defaults to the minimum configuration. To make the Small Footprint Runtime highly available, scale the Compute, Router, and Database VMs to 3 instances and scale the Control VM to 2 instances.

Elastic Runtime defaults to a highly available resource configuration. However, you may still need to perform additional procedures to make your deployment highly available. See the Zero Downtime Deployment and Scaling in CF and the Scaling Instances in Elastic Runtime topics for more information.

If you do not want a highly available resource configuration, you must scale down your instances manually by navigating to the Resource Config section and using the drop-down menus under Instances for each job.

By default, Elastic Runtime also uses an internal filestore and internal databases. If you configure Elastic Runtime to use external resources, you can disable the corresponding system-provided resources in Ops Manager to reduce costs and administrative overhead.

Complete the following procedures to disable specific VMs in Ops Manager:

  1. Click Resource Config.

  2. If you configure Elastic Runtime to use an external S3-compatible filestore, edit the following field:

    • File Storage: Enter 0 in Instances.
  3. If you selected External when configuring the UAA and System databases, edit the following fields:

    • MySQL Proxy: Enter 0 in Instances.
    • MySQL Server: Enter 0 in Instances.
    • MySQL Monitor: Enter 0 in Instances.
    • Cloud Controller Database: Enter 0 in Instances.
    • UAA Database: Enter 0 in Instances.
  4. If you are not using HAProxy, enter 0 in the Instances field for HAProxy.

  5. Click Save.

Step 24: Verify and Download Stemcell Version

Verify whether Ops Manager is providing the stemcell version required by Elastic Runtime. If the correct version is already present, you do not need to download a new stemcell.

  1. In the Elastic Runtime tile, select Stemcell.

  2. Verify that the version indicated in the filename matches the version of stemcell required by Elastic Runtime.

    • If Elastic Runtime detects that a stemcell .tgz file is present on the Ops Manager Director VM at /var/tempest/stemcells/, the Stemcell screen displays filename information. (An optional SSH check appears after these steps.)
    • If Elastic Runtime cannot detect a stemcell .tgz file, the following message displays: Stemcell not found
  3. If the version of the stemcell file that is loaded does not match the required version listed in the Pivotal Network download page for Elastic Runtime, or cannot be found by Ops Manager, perform the following steps to download and import a new stemcell file:

    1. Log in to the Pivotal Network and click Stemcells.
    2. Download the appropriate stemcell version targeted for your IaaS.
    3. In the Stemcell section of the Elastic Runtime tile, click Import Stemcell to import the downloaded stemcell .tgz file.
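
As an optional check, you can also list the stemcell files already present on the Ops Manager Director VM by looking in the directory referenced above. This assumes you have SSH access to the Ops Manager VM:

```
# Run on the Ops Manager VM (assumes you have SSH access to it).
ls -l /var/tempest/stemcells/
# Compare the version in the filename with the stemcell version listed for
# Elastic Runtime on Pivotal Network.
```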

Step 25: Complete the Elastic Runtime Installation

  1. Click the Installation Dashboard link to return to the Installation Dashboard.

  2. Click Apply Changes.

    The install process generally requires a minimum of 90 minutes to complete. The image shows the Changes Applied window that displays when the installation process successfully completes.

    Ops manager complete
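
As an optional sanity check after the installation completes, you can target the platform API with the cf CLI. YOUR-SYSTEM-DOMAIN is a placeholder for your own system domain, and the --skip-ssl-validation flag is only needed if you use self-signed certificates:

```
# Optional post-install check (YOUR-SYSTEM-DOMAIN is a placeholder).
cf api https://api.YOUR-SYSTEM-DOMAIN --skip-ssl-validation
cf login
```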
