Getting Started with Small Footprint TAS for VMs

This topic describes the Small Footprint VMware Tanzu Application Service for VMs (TAS for VMs) tile for Ops Manager.

The Small Footprint TAS for VMs is a repackaging of the TAS for VMs components into a smaller deployment with fewer virtual machines (VMs). For a description of the limitations that come with a smaller deployment, see Limitations.

Differentiate Small Footprint TAS for VMs and TAS for VMs

A standard TAS for VMs deployment must have at least 13 VMs, but Small Footprint TAS for VMs requires only four.

The following image displays a comparison of the number of VMs deployed by TAS for VMs and Small Footprint TAS for VMs.

VM comparison between TAS for VMs and Small Footprint TAS for VMs:

  • TAS for VMs has 13 required VMs: Cloud Controller, Cloud Controller Worker, Clock Global, Diego BBS, Diego Brain, Diego Cell, Doppler Server, Loggregator Traffic Controller, NATS, Router, Syslog Adapter, Syslog Scheduler, and UAA. Its 11 optional VMs are File Storage, HAProxy, Backup Restore Node, MySQL Monitor, MySQL Proxy, MySQL Server, TCP Router, CredHub, Istio Router, Istio Control, and Route Syncer.

  • Small Footprint TAS for VMs has 4 required VMs: Compute, Control, Database, and Router. Its 8 optional VMs are File Storage, HAProxy, Backup Restore Node, MySQL Monitor, TCP Router, Istio Router, Istio Control, and Route Syncer.

Use Cases

Use Small Footprint TAS for VMs for smaller Ops Manager deployments that host 2500 or fewer apps, as described in Limitations. If you want to use Small Footprint TAS for VMs in a production environment, ensure that the limitations described below do not affect your use case.

Note: Small Footprint TAS for VMs is compatible with Ops Manager service tiles.

Small Footprint TAS for VMs is also ideal for the following use cases:

  • Proof-of-concept installations: Deploy Ops Manager quickly and with a small footprint for evaluation or testing purposes.

  • Sandbox installations: Use Small Footprint TAS for VMs as an Ops Manager operator sandbox for tasks such as testing compatibility.

  • Service tile R&D: Test a service tile against Small Footprint TAS for VMs instead of a standard TAS for VMs deployment to increase efficiency and reduce cost.

Limitations

Small Footprint TAS for VMs has the following limitations:

  • Number of app instances: The tile is not designed to support large numbers of app instances. You cannot scale the number of Compute VMs beyond 10 instances in the Resource Config pane. Small Footprint TAS for VMs is designed to support 2500 or fewer apps.

  • Increasing platform capacity: You cannot upgrade the Small Footprint TAS for VMs tile to the standard TAS for VMs tile. If you expect platform usage to increase beyond the capacity of Small Footprint TAS for VMs, VMware recommends using the standard TAS for VMs tile.

  • Management plane availability during tile upgrades: Management plane operations, such as deploying new apps and accessing APIs, can be briefly unavailable during tile upgrades. The management plane is located on the Control VM.

  • App availability during tile upgrades: If you require availability during your upgrades, you must scale your Compute VMs to a highly available configuration. Ensure sufficient capacity exists to move app instances between Compute VM instances during the upgrade.

Architecture

You can deploy Small Footprint TAS for VMs with a minimum of four VMs, as shown in the image below.

Note: The following image assumes that you are using an external blobstore.

Small Footprint TAS for VMs has 4 required VMs and 8 optional VMs. The required VMs are Compute, Control, Database, and Router. The optional VMs are File Storage, HAProxy, Backup Restore Node, MySQL Monitor, TCP Router, Istio Router, Istio Control, and Route Syncer.

To reduce the number of VMs required for Small Footprint TAS for VMs, the Control and Database VMs include colocated jobs that run on separate VMs in TAS for VMs. See the following sections for details.

For more information about the components mentioned on this page, see TAS for VMs Components.

Control VM

The Control VM includes the TAS for VMs jobs that handle management plane operations, app lifecycles, logging, and user authorization and authentication. Additionally, all errands run on the Control VM, eliminating the need for a VM for each errand and significantly reducing the time it takes to run errands.

The following image shows all the jobs from TAS for VMs that are colocated on the Control VM in Small Footprint TAS for VMs.

Small Footprint TAS for VMs condenses 11 TAS for VMs VMs onto one VM called Control. The 11 TAS for VMs VMs are Cloud Controller, Cloud Controller Worker, Clock Global, Diego BBS, Diego Brain, Doppler Server, Loggregator Traffic Controller, Syslog Adapter, Syslog Scheduler, UAA, and CredHub.

Database VM

The Database VM includes the TAS for VMs jobs that handle internal storage and messaging.

The following image shows all the jobs from TAS for VMs that are colocated on the Database VM in Small Footprint TAS for VMs.

Small Footprint TAS for VMs condenses three TAS for VMs VMs onto one VM called Database. The three TAS for VMs VMs are NATS, MySQL Proxy, and MySQL Server.

Compute VM

The Compute VM is the same as the Diego Cell VM in TAS for VMs.

Small Footprint TAS for VMs renames the Diego Cell VM to the Compute VM.

Other VMs (Unchanged)

The following image shows the VMs that perform the same functions in both versions of the TAS for VMs tile.

The Router, File Storage, HAProxy, Backup Restore Node, MySQL Monitor, TCP Router, Istio Router, Istio Control, and Route Syncer VMs are the same in both TAS for VMs and Small Footprint TAS for VMs.

Requirements

The following topics list the minimum resources needed to run Small Footprint TAS for VMs on the public IaaSes that Ops Manager supports:

Installing Small Footprint TAS for VMs

To install Small Footprint TAS for VMs, see Architecture and Installation Overview and the installation and configuration topics for your IaaS.

Follow the same installation and configuration steps as for TAS for VMs, with these differences:

  • Selecting a product in VMware Tanzu Network: When you navigate to the VMware Tanzu Application Service for VMs page on VMware Tanzu Network, select the Small Footprint release.

  • Configuring resources:

    • The Resource Config pane in the Small Footprint TAS for VMs tile reflects the differences in VMs discussed in Architecture.
    • Small Footprint TAS for VMs does not default to a highly available configuration like TAS for VMs does. It defaults to a minimum configuration. To make Small Footprint TAS for VMs highly available, scale the VMs to the following instance counts:
      • Compute: 3
      • Control: 2
      • Database: 3
      • Router: 3
  • Configuring load balancers: If you are using an SSH load balancer, you must enter its name in the Control VM row of the Resource Config pane. There is no Diego Brain row in Small Footprint TAS for VMs because the Diego Brain is colocated on the Control VM. You can still enter the appropriate load balancers in the Router and TCP Router rows as normal.
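The highly available instance counts above can also be captured in an automation config. The following is a minimal sketch, assuming you manage the tile with the `om` CLI and that the resource jobs are named `compute`, `control`, `database`, and `router`; confirm the exact job names against the Resource Config pane of your tile before using it.

```shell
# Sketch only: the HA instance counts for Small Footprint TAS for VMs,
# expressed as an om-style resource-config fragment. Job names are
# assumptions; verify them in your tile's Resource Config pane.
cat > resource-overrides.yml <<'EOF'
resource-config:
  compute:
    instances: 3
  control:
    instances: 2
  database:
    instances: 3
  router:
    instances: 3
EOF

# You would then apply the fragment with something like (illustrative):
#   om --env env.yml configure-product --config resource-overrides.yml

# Sanity check that all four jobs appear in the fragment:
grep -c 'instances:' resource-overrides.yml
# prints 4
```

This keeps the scaling decision in version control rather than in manual Ops Manager clicks, which is useful when you rebuild sandbox environments often.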

Troubleshooting Colocated Jobs Using Logs

To troubleshoot a job that runs on the Control or Database VMs:

  1. Follow the procedures in Advanced Troubleshooting with the BOSH CLI to log in to the BOSH Director for your deployment:

    1. Gather Credential and IP Address Information
    2. SSH into Ops Manager
    3. Log in to the BOSH Director
  2. Use BOSH to list the VMs in your Small Footprint TAS for VMs deployment. Run:

    bosh -e BOSH-ENV -d TAS-DEPLOYMENT vms
    

    Where:

    • BOSH-ENV is the name of your BOSH environment.
    • TAS-DEPLOYMENT is the name of your Small Footprint TAS for VMs deployment.

      Note: If you do not know the name of your deployment, you can run bosh -e BOSH-ENV deployments to list the deployments for your BOSH Director.

  3. Use BOSH to SSH into one of the Small Footprint TAS for VMs VMs. Run:

    bosh -e BOSH-ENV -d TAS-DEPLOYMENT ssh VM-NAME/VM-GUID
    

    Where:

    • BOSH-ENV is the name of your BOSH environment.
    • TAS-DEPLOYMENT is the name of your Small Footprint TAS for VMs deployment.
    • VM-NAME is the name of your VM.
    • VM-GUID is the GUID of your VM.

    For example, to SSH into the Control VM, run:

    bosh -e example-env -d example-deployment ssh control/12b1b027-7ffd-43ca-9dc9-7f4ff204d86a
    
  4. To act as a super user, run:

    sudo su
    
  5. To list the processes running on the VM, run:

    monit summary
    

    The example output below lists the processes running on the Control VM. The processes listed reflect the colocation of jobs as outlined in Architecture.

    control/12b1b027-7ffd-43ca-9dc9-7f4ff204d86a:/var/vcap/bosh_ssh/bosh_f9d2446b18b445e# monit summary
    The Monit daemon 5.2.5 uptime: 5d 21h 10m

    Process 'bbs'                               running
    Process 'metron_agent'                      running
    Process 'locket'                            running
    Process 'route_registrar'                   running
    Process 'policy-server'                     running
    Process 'silk-controller'                   running
    Process 'uaa'                               running
    Process 'statsd_injector'                   running
    Process 'cloud_controller_ng'               running
    Process 'cloud_controller_worker_local_1'   running
    Process 'cloud_controller_worker_local_2'   running
    Process 'nginx_cc'                          running
    Process 'routing-api'                       running
    Process 'cloud_controller_clock'            running
    Process 'cloud_controller_worker_1'         running
    Process 'auctioneer'                        running
    Process 'cc_uploader'                       running
    Process 'file_server'                       running
    Process 'nsync_listener'                    running
    Process 'ssh_proxy'                         running
    Process 'tps_watcher'                       running
    Process 'stager'                            running
    Process 'loggregator_trafficcontroller'     running
    Process 'reverse_log_proxy'                 running
    Process 'adapter'                           running
    Process 'doppler'                           running
    Process 'syslog_drain_binder'               running
    System 'system_localhost'                   running

  6. To access logs, navigate to /var/vcap/sys/log by running:

    cd /var/vcap/sys/log
    
  7. To list the log directories for each process, run:

    ls
    
  8. Navigate to the directory of the process that you want to view logs for. For example, for the Cloud Controller process, run:

    cd cloud_controller_ng/
    

    From the directory of the process, you can list and view its logs. See the following example output:

    control/12b1b027-7ffd-43ca-9dc9-7f4ff204d86a:/var/vcap/sys/log/cloud_controller_ng# ls
    cloud_controller_ng_ctl.err.log  cloud_controller_ng.log.2.gz  cloud_controller_ng.log.6.gz         drain                  pre-start.stdout.log
    cloud_controller_ng_ctl.log      cloud_controller_ng.log.3.gz  cloud_controller_ng.log.7.gz         post-start.stderr.log
    cloud_controller_ng.log          cloud_controller_ng.log.4.gz  cloud_controller_worker_ctl.err.log  post-start.stdout.log
    cloud_controller_ng.log.1.gz     cloud_controller_ng.log.5.gz  cloud_controller_worker_ctl.log      pre-start.stderr.log
    
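Note that the rotated logs in these directories (the `.gz` files in the listing above) are gzip-compressed. As a quick sketch, the commands below fabricate a tiny sample log so they are self-contained; on the Control VM you would point them at the real files under /var/vcap/sys/log/cloud_controller_ng instead.

```shell
# Fabricate a small rotated log so this example runs anywhere; on the VM,
# skip this step and use the real cloud_controller_ng.log.N.gz files.
printf 'request 1\nrequest 2\nrequest 3\n' | gzip > cloud_controller_ng.log.1.gz

# Read a compressed, rotated log without unpacking it on disk:
zcat cloud_controller_ng.log.1.gz | tail -n 2
# prints:
# request 2
# request 3

# To watch the live log while reproducing an issue, you would run:
#   tail -f cloud_controller_ng.log
```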

Release Notes

The Small Footprint TAS for VMs tile releases alongside the TAS for VMs tile. For more information, see VMware Tanzu Application Service for VMs v2.9 Release Notes.