Installing Ops Manager on OpenStack
This guide describes how to install Ops Manager on OpenStack with VMware Tanzu Application Service for VMs (TAS for VMs).
OpenStack is a cloud operating system that controls large pools of compute, storage, and networking resources throughout a datacenter. For guidance on OpenStack service credential management, see OpenStack Security Documents below, which provide a general reference for managing OpenStack service credentials.
This section describes the requirements for installing Ops Manager on OpenStack, including the general requirements for deploying Ops Manager with TAS for VMs as well as OpenStack-specific requirements.
Note: You can install Ops Manager on OpenStack with the TAS for VMs runtime. The VMware Tanzu Kubernetes Grid Integrated Edition (TKGI) runtime is not supported for OpenStack. For more information about TAS for VMs, see TAS for VMs Concepts. For more information about TKGI, see VMware Tanzu Kubernetes Grid Integrated Edition (TKGI).
The general requirements for deploying and managing an Ops Manager deployment with VMware Tanzu Application Service for VMs (TAS for VMs) are:
A wildcard DNS record that points to your Gorouter or load balancer. Alternatively, you can use a service such as xip.io.
- TAS for VMs gives each app its own hostname in your app domain.
- With a wildcard DNS record, every hostname in your domain resolves to the IP address of your Gorouter or load balancer, and you do not need to configure an A record for each app hostname. For example, if you create a DNS record *.example.com pointing to your load balancer or Gorouter, every app deployed to the example.com domain resolves to the IP address of your Gorouter.
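As a sketch, a wildcard record in a BIND-style zone file might look like the following. The domain and IP address are hypothetical placeholders; substitute the IP address of your own load balancer or Gorouter.

```
; Hypothetical wildcard A record for an app domain.
; 203.0.113.10 stands in for your load balancer or Gorouter IP.
*.example.com.   300   IN   A   203.0.113.10
```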
At least one wildcard TLS certificate that matches the DNS record you set up above.
Sufficient IP allocation:
- One static IP address for either HAProxy or one of your Gorouters.
- One static IP address for each job in the Ops Manager tile. For a full list, see the Resource Config pane for each tile.
- One static IP address for each job listed below:
- File Storage
- MySQL Proxy
- MySQL Server
- Backup Restore Node
- MySQL Monitor
- Diego Brain
- TCP Router
- One IP for each VM instance created by the service.
- An additional IP address for each compilation worker. The formula for total IPs needed is
IPs needed = static IPs + VM instances + compilation workers.
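As an illustrative sketch of the formula above, the arithmetic can be checked directly in a shell. The counts below are hypothetical placeholders, not sizing recommendations:

```shell
# Total IPs = static IPs + VM instances + compilation workers.
# The counts below are hypothetical placeholders for illustration.
static_ips=7
vm_instances=22
compilation_workers=4
echo $(( static_ips + vm_instances + compilation_workers ))   # prints 33
```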
Note: VMware recommends that you allocate at least 36 dynamic IP addresses when deploying Ops Manager and TAS for VMs. BOSH requires additional dynamic IP addresses during installation to compile and deploy VMs, install TAS for VMs, and connect to services.
One or more NTP servers if not already provided by your IaaS.
(Recommended) A network without DHCP available for deploying the TAS for VMs VMs.
Note: If you have DHCP, see Troubleshooting Guide for guidance on avoiding issues with your installation.
(Optional) External storage. When you deploy TAS for VMs, you can select internal file storage or external file storage, either network-accessible or IaaS-provided, as an option in the TAS for VMs tile. VMware recommends using external storage whenever possible. For more information about how file storage location affects platform performance and stability during upgrades, see Configure File Storage in Configuring TAS for VMs for Upgrades.
(Optional) External databases. When you deploy TAS for VMs, you can select internal or external databases for the BOSH Director and for TAS for VMs. VMware recommends using external databases in production deployments. An external database must be configured to use the UTC timezone.
(Optional) External user stores. When you deploy TAS for VMs, you can select a SAML user store for Ops Manager or a SAML or LDAP user store for TAS for VMs, to integrate existing user accounts.
The most recent version of the Cloud Foundry Command Line Interface (cf CLI).
The following are OpenStack requirements for deploying Ops Manager:
Ops Manager is supported on the OpenStack Ocata, Pike, Queens, Rocky, and Stein releases. OpenStack is a collection of interoperable components, and troubleshooting issues that may occur when installing Ops Manager on particular releases and distributions requires general OpenStack expertise. To verify that your OpenStack platform is compatible with Ops Manager, use the OpenStack Validator tool. To access the OpenStack Validator tool, see CF OpenStack Validator on GitHub.
VMware recommends granting complete access to the OpenStack logs to the operator managing the Ops Manager installation process.
For OpenStack accounts for Ops Manager, VMware recommends following the principle of least privilege by scoping privileges to the most restrictive permissions possible for a given role.
You must have a dedicated OpenStack project, formerly known as an OpenStack tenant.
You must have Keystone access to the dedicated OpenStack project, including the following:
- Auth URL
- Username and password. The Primary Project for the user must be the project you want to use to deploy Ops Manager. For more information, see Manage projects and users in the OpenStack documentation.
- Project name
- Region (with multiple availability zones if you require high availability)
- SSL certificate for your wildcard domain (see below)
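The Keystone credentials above are commonly supplied to OpenStack tooling through OS_* environment variables, as in an OpenStack RC file. A minimal sketch with placeholder values (every value here is hypothetical; use the credentials for your dedicated project):

```shell
# Hypothetical Keystone credentials for the dedicated OpenStack project.
# Replace every value with the ones from your OpenStack operator.
export OS_AUTH_URL=https://keystone.example.com:5000/v3
export OS_USERNAME=ops-manager
export OS_PASSWORD=example-password
export OS_PROJECT_NAME=ops-manager-project
export OS_REGION_NAME=RegionOne
echo "Auth URL: ${OS_AUTH_URL}"
```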
You must have the ability to do the following in OpenStack:
- Create and modify VM flavors
- Enable DHCP if required
- Create a network and then connect that network with a router to an external network
- Create an external network with a pool of floating IP addresses
- Boot VMs directly from image
- Create two wildcard domains for separate system and app domains
The following are resource requirements for the dedicated OpenStack project:
- 118 GB of RAM
- 22 available instances
- 14 small VMs (1 vCPU, 1024 MB of RAM, 10 GB of root disk)
- 2 high-CPU VMs (2 vCPU, 1024 MB of RAM, 10 GB of root disk)
- 3 large VMs (4 vCPU, 16384 MB of RAM, 10 GB of root disk)
- 3 extra-large VMs (8 vCPU, 16 GB of RAM, 160 GB of ephemeral disk)
- 58 vCPUs
- 1 TB of storage
- Nova or Neutron networking with floating IP support
Note: By default, TAS for VMs deploys the number of VM instances required to run a highly available configuration of Ops Manager. If you are deploying a test or sandbox Ops Manager that does not require HA, then you can scale down the number of instances in your deployment. For information about the number of instances required to run a minimal, non-HA Ops Manager deployment, see Scaling TAS for VMs.
The following are requirements for the OpenStack Cinder back end:
- Ops Manager requires RAW root disk images. The Cinder back end for your OpenStack project must support RAW.
- VMware recommends that you use a Cinder back end that supports snapshots. Snapshot support is required for some BOSH functionality.
- VMware recommends enabling your Cinder back end to delete block storage asynchronously. If this is not possible, it must be able to delete multiple 20 GB volumes within 300 seconds.
The following are requirements for using an overlay network with the VXLAN or GRE protocols:
- If an overlay network is being used with VXLAN or GRE protocols, the MTU of the created VMs must be adjusted to the best practices recommended by the plugin vendor (if any).
- DHCP must be enabled in the internal network for the MTU to be assigned to the VMs automatically.
- To adjust your MTU values, see Configure Networking in Configuring TAS for VMs.
- Failure to configure your overlay network correctly can cause Apps Manager to fail, because applications cannot connect to UAA.
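As a sketch, you can read an interface's current MTU from sysfs on a Linux VM to confirm the value after adjustment. On a deployed VM you would check your actual interface, such as eth0 (an assumption; the name depends on your image); the loopback interface is used here so the example is self-contained:

```shell
# Read the current MTU of a network interface from sysfs (Linux).
# On a deployed VM, substitute your actual interface name (e.g. eth0);
# the loopback interface is used here so the example runs anywhere.
iface=lo
cat /sys/class/net/${iface}/mtu
```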
Note: If you are using IPsec, packet overhead increases by approximately 36 bytes. For details, including information about setting correct MTU values, see Installing IPsec in the IPsec documentation.
To install Ops Manager on OpenStack with the TAS for VMs runtime, do the following:
Deploy Ops Manager. See Deploying Ops Manager on OpenStack.
Configure BOSH Director on OpenStack. See Configuring BOSH Director on OpenStack.
After completing the procedures above, configure a runtime for Ops Manager.
For information about installing and configuring a runtime, see the following: