Getting Started with Small Footprint Runtime

This topic describes the Small Footprint Runtime tile for Pivotal Cloud Foundry (PCF).

The Small Footprint Runtime is a repackaging of the Elastic Runtime components into a smaller deployment with fewer virtual machines (VMs). For the tradeoffs that come with this smaller deployment, see the Limitations section below.

Differences Between Small Footprint Runtime and Elastic Runtime

A standard Elastic Runtime deployment must have at least 25 virtual machines (VMs), but the Small Footprint Runtime requires only 15.

The following image displays a comparison of the number of VMs deployed by Elastic Runtime and Small Footprint Runtime.

ERT VMs

Use Cases

Use the Small Footprint Runtime tile for smaller PCF deployments on which you intend to host 2500 or fewer apps, as described in the Limitations section. If you want to use Small Footprint Runtime in a production environment, ensure the Limitations described below are not an issue in your use case.

Note: The Small Footprint Runtime tile is compatible with PCF service tiles.

The Small Footprint Runtime tile is also ideal for the following use cases:

  • Proof-of-concept installations:
    • Deploy PCF quickly and with a small footprint for evaluation or testing purposes.
  • Sandbox installations:
    • Use Small Footprint Runtime as a PCF operator sandbox for tasks such as testing compatibility.
  • Service tile R&D:
    • Test a service tile against Small Footprint Runtime instead of a standard Elastic Runtime deployment to increase efficiency and reduce cost.

Limitations

The Small Footprint Runtime tile has the following limitations:

  • Number of app instances:
    • The tile is not designed to support large numbers of app instances. You cannot scale the number of Compute VMs beyond 10 instances in the Resource Config pane. The Small Footprint Runtime is designed to support 2500 or fewer apps.
  • Increasing platform capacity:
    • You cannot upgrade the Small Footprint Runtime tile to the standard Elastic Runtime tile. If you expect platform usage to increase beyond the capacity of the Small Footprint Runtime tile, Pivotal recommends using the standard Elastic Runtime tile.
  • Management plane availability during tile upgrades:
    • During tile upgrades, you may be briefly unable to perform management plane operations, such as deploying new apps or accessing APIs, because the management plane is colocated on the Control VM.
  • App availability during tile upgrades:
    • If you require app availability during your upgrades, you must scale your Compute VMs to a highly available configuration and ensure that sufficient capacity exists to move app instances between Compute VM instances during the upgrade, as shown in the example below.
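
In addition to Compute VM capacity, an app stays available during an upgrade only if it runs more than one instance, so that Diego can reschedule instances between Compute VMs without downtime. A minimal sketch with the cf CLI, where MY-APP is a placeholder app name:

    # Run three instances of the app so at least one keeps serving
    # traffic while instances are moved between Compute VMs.
    cf scale MY-APP -i 3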

Architecture

You can deploy the Small Footprint Runtime tile with a minimum of 14 VMs when you use an external blobstore, as shown in the image below.

Note: The following image assumes that you are using an external blobstore.

SRT VMs

To reduce the number of VMs required for Small Footprint Runtime, the Control and Database VMs colocate jobs that run on their own VMs in a standard Elastic Runtime deployment. See the following sections for details.

For more information about the components mentioned on this page, see the Architecture section of the Elastic Runtime Concepts guide.
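
To confirm which jobs are colocated on each VM in a running deployment, you can also list the processes on every instance with the BOSH CLI. This assumes you have logged in to your BOSH Director, as described in Troubleshooting Colocated Jobs Using Logs below:

    # List each instance in the deployment along with its running processes.
    bosh2 -e MY-ENV -d MY-DEPLOYMENT instances --ps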

Control VM

The Control VM includes the Elastic Runtime jobs that handle management plane operations, app lifecycles, logging, and user authorization and authentication.

The following image shows all the jobs from Elastic Runtime that are colocated on the Control VM in Small Footprint Runtime.

Control VMs

Database VM

The Database VM includes the Elastic Runtime jobs that handle internal storage and messaging.

The following image shows all the jobs from Elastic Runtime that are colocated on the Database VM in Small Footprint Runtime.

DB VMs

Compute VM

The Compute VM is the same as the Diego Cell VM in Elastic Runtime.

Compute VMs

Other VMs (Unchanged)

The following image shows the VMs that perform the same functions in both versions of the Elastic Runtime tile.

Unchanged VMs

Requirements

For the minimum resources needed to run Small Footprint Runtime and Ops Manager on each public IaaS that PCF supports, see the requirements topic for your IaaS.

Installing Small Footprint Runtime

To install the Small Footprint Runtime tile, follow the instructions for Installing Pivotal Cloud Foundry on your IaaS.

Follow the same installation and configuration steps as for Elastic Runtime, with the following differences:

  • Selecting a product in Pivotal Network:

    • When you navigate to the Elastic Runtime tile on Pivotal Network, select the Small Footprint release.
  • Securing communication between Diego and the Cloud Controller:

    • In the Small Footprint Runtime tile, Diego and Cloud Controller communicate securely by default.
  • Configuring resources:

    • The Resource Config pane in the Small Footprint Runtime tile reflects the differences in VMs discussed in the Architecture section of this topic.
    • Small Footprint Runtime does not default to a highly available configuration like Elastic Runtime does. It defaults to a minimum configuration. To make Small Footprint Runtime highly available, scale the VMs to the following instance counts (see the configuration sketch after this list):
      • Compute: 3
      • Control: 2
      • Database: 3
      • Router: 3
  • Configuring load balancers:

    • If you are using an SSH load balancer, you must enter its name in the Control VM row of the Resource Config pane. Small Footprint Runtime has no Diego Brain row because the Diego Brain is colocated on the Control VM. You can still enter the appropriate load balancers in the Router and TCP Router rows as usual.
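
If you automate tile configuration with the om CLI rather than the Ops Manager UI, you can apply the highly available instance counts above from a configuration file. The following is an untested sketch: the product name (cf), the resource names (compute, control, database, router), and the authentication flags are assumptions that may differ in your environment.

    # srt-resources.yml (sketch): scale Small Footprint Runtime to a
    # highly available configuration.
    product-name: cf
    resource-config:
      compute:
        instances: 3
      control:
        instances: 2
      database:
        instances: 3
      router:
        instances: 3

    # Apply the configuration to the deployed tile:
    om --target https://OPS-MANAGER-FQDN --username USERNAME --password PASSWORD \
      configure-product --config srt-resources.yml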

Troubleshooting Colocated Jobs Using Logs

If you need to troubleshoot a job that runs on the Control or Database VMs, follow these steps:

  1. Follow these procedures in Advanced Troubleshooting with the BOSH CLI to log in to the BOSH Director for your deployment:

    1. Gather Credential and IP Address Information
    2. SSH into Ops Manager
    3. Log in to the BOSH Director
  2. Use BOSH to list the VMs in your Small Footprint Runtime deployment:

    bosh2 -e MY-ENV -d MY-DEPLOYMENT vms

    Note: If you do not know the name of your deployment, you can run bosh2 -e MY-ENV deployments to list the deployments for your BOSH Director.

    See the following example output:
    $ bosh2 -e example-env -d example-deployment vms
    Using environment 'example-env' as client 'ops_manager'

    Task 182. Done

    Deployment 'example-deployment'

    Instance                                             Process State  AZ             IPs        VM CID                                    VM Type
    backup-prepare/8fd07242-cf7c-4a4d-ba69-85fe078114f9  running        us-central1-a  10.0.4.10  vm-6ec72a47-55b0-4767-78af-759f1f295183  micro
    compute/01c6947d-477e-4605-9e6b-5d130a58c70c         running        us-central1-b  10.0.4.8   vm-ce14173c-d93e-414c-6830-afbe0c713fc5  xlarge.disk
    compute/28045395-5048-4c8d-8363-e22fc7b66847         running        us-central1-c  10.0.4.9   vm-e3d8f696-5802-4552-4006-a1260563ed49  xlarge.disk
    compute/2e3ed7dc-baa4-42ef-814d-980c6ab1c36b         running        us-central1-a  10.0.4.7   vm-6dc34e53-71f3-4741-674a-c42c4df9e559  xlarge.disk
    control/12b1b027-7ffd-43ca-9dc9-7f4ff204d86a         running        us-central1-a  10.0.4.6   vm-9760b74e-e13e-4483-79b6-78ab3818b628  xlarge
    ha_proxy/b0587c68-45a8-40e2-94d3-5d2ffcdaf858        running        us-central1-a  10.0.4.11  vm-27d62bfc-af6d-4c8b-6e2a-cbba09eddd1e  micro
    mysql_monitor/5185d04e-e038-4664-a26a-d16d0d295a7f   running        us-central1-a  10.0.4.15  vm-6d215888-913b-44a3-4db3-52329c5ada53  micro
    router/2043b22d-0c3b-4a02-873f-80a724c3ed08          running        us-central1-a  10.0.4.12  vm-2b7cf5f4-5926-4f70-6e47-d994a6eff93b  micro
    router/72b54793-e0d0-4301-8932-76da5375e654          running        us-central1-c  10.0.4.14  vm-e77bcdf1-0c26-46cd-7783-6d766f4c5098  micro
    router/e3d2ab7b-6191-46bb-ab62-c1db7268a942          running        us-central1-b  10.0.4.13  vm-3e84523b-1988-475e-49e8-de80fd76c656  micro
    database/681bcad5-fa8b-4cf1-912f-45140d96123f        running        us-central1-a  10.0.4.5   vm-e3cded4f-cf47-499f-4c96-992b3c6ebf9c  large.disk
    tcp_router/61a06e83-a62b-4afb-b452-441dc2dc1e4c      running        us-central1-a  10.0.4.17  vm-cc1f0a62-409f-47f9-58b0-0b8f46cf9ac0  micro

    13 vms

    Succeeded

  3. Use BOSH to SSH into one of the Small Footprint Runtime VMs.

    bosh2 -e MY-ENV -d MY-DEPLOYMENT ssh VM-NAME/GUID
    For example, to SSH into the Control VM, run the following:
    $ bosh2 -e example-env -d example-deployment ssh control/12b1b027-7ffd-43ca-9dc9-7f4ff204d86a
    

  4. Run sudo su to act as super user.

  5. Use monit to list the processes running on the VM.

    monit summary
    See the following example output that lists the processes running on the Control VM. The processes listed reflect the colocation of jobs as outlined in the Architecture section of this topic.
    control/12b1b027-7ffd-43ca-9dc9-7f4ff204d86a:/var/vcap/bosh_ssh/bosh_f9d2446b18b445e# monit summary
    The Monit daemon 5.2.5 uptime: 5d 21h 10m

    Process 'consul_agent'                    running
    Process 'bbs'                             running
    Process 'metron_agent'                    running
    Process 'locket'                          running
    Process 'route_registrar'                 running
    Process 'policy-server'                   running
    Process 'silk-controller'                 running
    Process 'uaa'                             running
    Process 'statsd_injector'                 running
    Process 'cloud_controller_ng'             running
    Process 'cloud_controller_worker_local_1' running
    Process 'cloud_controller_worker_local_2' running
    Process 'nginx_cc'                        running
    Process 'routing-api'                     running
    Process 'cloud_controller_clock'          running
    Process 'cloud_controller_worker_1'       running
    Process 'auctioneer'                      running
    Process 'cc_uploader'                     running
    Process 'file_server'                     running
    Process 'nsync_listener'                  running
    Process 'ssh_proxy'                       running
    Process 'tps_watcher'                     running
    Process 'stager'                          running
    Process 'loggregator_trafficcontroller'   running
    Process 'reverse_log_proxy'               running
    Process 'adapter'                         running
    Process 'doppler'                         running
    Process 'syslog_drain_binder'             running
    System 'system_localhost'                 running
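
    If a process shows a state other than running, you can try restarting it in place with monit. This is generic monit usage, not specific to Small Footprint Runtime; cloud_controller_ng is only an example process name from the list above:

    monit restart cloud_controller_ng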

  6. To access logs, navigate to /var/vcap/sys/log:

    cd /var/vcap/sys/log

  7. Run ls to list the log directories for each process. See the following example output from the Control VM:

    control/12b1b027-7ffd-43ca-9dc9-7f4ff204d86a:/var/vcap/sys/log# ls
    adapter      cloud_controller_clock   file_server                    nginx_cc               route_registrar  statsd_injector      uaa_ctl.err.log
    auctioneer   cloud_controller_ng      locket                         nginx_newrelic_plugin  routing-api      syslog_drain_binder  uaa_ctl.log
    bbs          cloud_controller_worker  loggregator_trafficcontroller  nsync                  silk-controller  syslog_forwarder
    cc_uploader  consul_agent             metron_agent                   policy-server          ssh_proxy        tps
    cfdot        doppler                  monit                          reverse_log_proxy      stager           uaa
    

  8. Navigate to the directory of the process that you want to view logs for. For example, for the Cloud Controller process, run cd cloud_controller_ng/. From the directory of the process, you can list and view its logs. See the following example output:

    control/12b1b027-7ffd-43ca-9dc9-7f4ff204d86a:/var/vcap/sys/log/cloud_controller_ng# ls
    cloud_controller_ng_ctl.err.log  cloud_controller_ng.log.2.gz  cloud_controller_ng.log.6.gz         drain                  pre-start.stdout.log
    cloud_controller_ng_ctl.log      cloud_controller_ng.log.3.gz  cloud_controller_ng.log.7.gz         post-start.stderr.log
    cloud_controller_ng.log          cloud_controller_ng.log.4.gz  cloud_controller_worker_ctl.err.log  post-start.stdout.log
    cloud_controller_ng.log.1.gz     cloud_controller_ng.log.5.gz  cloud_controller_worker_ctl.log      pre-start.stderr.log
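
    To read a log, use the standard Linux tools available on BOSH-deployed VMs. For example, to follow the current Cloud Controller log or page through a rotated one:

    tail -f cloud_controller_ng.log
    zcat cloud_controller_ng.log.1.gz | less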
    

Release Notes

The Small Footprint Runtime tile releases alongside the Elastic Runtime tile. See the Elastic Runtime Release Notes.
