Platform Architecture and Planning Overview
This topic describes reference architectures and other plans for installing Ops Manager on any infrastructure to support the Pivotal Application Service (PAS) and Enterprise Pivotal Container Service (Enterprise PKS) runtime environments.
Overview
The reference architectures in this section describe a proven approach for deploying Ops Manager and runtimes, such as PAS or Enterprise PKS, on a specific IaaS, such as AWS, Azure, GCP, OpenStack, or vSphere. These reference architectures meet the following requirements:
- Are secure
- Are publicly accessible
- Include common Ops Manager-managed services such as VMware Tanzu SQL, VMware Tanzu RabbitMQ, and Spring Cloud Services for VMware Tanzu
- Can host at least 100 app instances
- Are deployed and validated by VMware to support Ops Manager, PAS, and Enterprise PKS on all infrastructures, except OpenStack for Enterprise PKS
You can use Ops Manager reference architectures to help plan the best configuration for your Ops Manager deployment on your IaaS.
Note: OpenStack does not support Enterprise PKS.
Reference Architecture and Planning Topics
All Ops Manager reference architectures start with the base PAS architecture and base Enterprise PKS architecture.
These IaaS-specific topics build on these two common base architectures:
- AWS Reference Architecture
- Azure Reference Architecture
- GCP Reference Architecture
- OpenStack Reference Architecture
- vSphere Reference Architecture
These topics address aspects of platform architecture and planning that the Ops Manager reference architectures do not cover:
- Implementing a Multi-Foundation Enterprise PKS Deployment
- Using Global DNS Load Balancers for Multi-Foundation
PAS Architecture
The diagram below illustrates a base architecture for PAS and how its network topology places and replicates Pivotal Platform and PAS components across subnets and Availability Zones (AZs).
Internal Components
The table below describes the internal component placements shown in the diagram above:
Component | Placement and Access Notes |
---|---|
Ops Manager | Deployed on one of the three public subnets. Accessible by fully-qualified domain name (FQDN) or through an optional jumpbox. |
BOSH Director | Deployed on the infrastructure subnet. |
Jumpbox | Optional. Deployed on the infrastructure subnet for accessing PAS management components such as Ops Manager and the Cloud Foundry Command Line Interface (cf CLI). |
Gorouters (HTTP routers in Ops Manager) | Deployed on all three PAS subnets, one per AZ. Accessed through the HTTP, HTTPS, and SSL load balancers. |
Diego Brain | Deployed on all three PAS subnets, one per AZ. The Diego Brain component is required, but SSH container access support through the Diego Brain is optional, and enabled through the SSH load balancers. |
TCP routers | Optional. Deployed on all three PAS subnets, one per AZ, to support TCP routing. |
Service tiles | Service brokers and shared services instances are deployed to the Services subnet. Dedicated on-demand service instances are deployed to an on-demand services subnet. |
Isolation segments | Deployed on an isolation segment subnet. Includes Diego Cells and Gorouters for running and accessing apps hosted within isolation segments. |
Networks
These sections describe VMware’s recommendations for defining your networks and load-balancing their incoming requests:
Required Subnets
PAS requires these statically-defined networks to host its main component systems:
- **Infrastructure subnet** - `/24` segment. This subnet contains VMs that require access only for Platform Administrators, such as Ops Manager, the BOSH Director, and an optional jumpbox.
- **PAS subnet** - `/24` segment. This subnet contains PAS runtime VMs, such as Gorouters, Diego Cells, and Cloud Controllers.
- **Services subnet** - `/24` segment. The services and on-demand services networks support Ops Manager tiles that you might add in addition to PAS. In other words, they are the networks for everything that is not PAS. Some services tiles can call for additional network capacity to grow into on demand. If you use services with this capability, VMware recommends that you add an on-demand services network for each on-demand service.
- **On-demand services subnets** - `/24` segments. These are for services that can allocate network capacity on demand from BOSH for their worker VMs. VMware recommends allocating a dedicated subnet to each on-demand service. For example, you can configure the Redis tile as follows:
  - **Network**: Enter the existing `Services` network, to host the service broker.
  - **Services network**: Deploy a new network, `OD-Services1`, to host the Redis worker VMs.

  Another on-demand service tile can then also use `Services` for its broker and a new `OD-Services2` network for its workers, and so on.
- **Isolation segment subnets** - `/24` segments. You can add one or more isolation segment tiles to a PAS installation to compartmentalize hosting and routing resources. For each isolation segment you deploy, you should designate a `/24` network for its range of address space.
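As an illustration, this subnet layout can be sketched with Python's `ipaddress` module. The CIDR blocks below are hypothetical examples, not required values; substitute the ranges used in your own deployment:

```python
import ipaddress

# Hypothetical /24 assignments for a single PAS foundation.
subnets = {
    "infrastructure": ipaddress.ip_network("10.0.0.0/24"),
    "pas":            ipaddress.ip_network("10.0.1.0/24"),
    "services":       ipaddress.ip_network("10.0.2.0/24"),
    "od-services-1":  ipaddress.ip_network("10.0.3.0/24"),
    "isolation-1":    ipaddress.ip_network("10.0.4.0/24"),
}

# Verify each segment is a /24 and that no two segments overlap.
nets = list(subnets.values())
assert all(n.prefixlen == 24 for n in nets)
assert all(not a.overlaps(b)
           for i, a in enumerate(nets) for b in nets[i + 1:])

for name, net in subnets.items():
    print(f"{name}: {net} ({net.num_addresses} addresses)")
```

A check like this is useful before committing ranges to Ops Manager, because overlapping networks are difficult to change after BOSH has deployed VMs into them.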
Load Balancing
Any PAS installation needs a suitable load balancer to send incoming HTTP, HTTPS, SSH, and SSL traffic to its Gorouters and app containers. All installations approaching production-level use rely on external load balancing from hardware appliance vendors or other network-layer solutions.
The load balancer can also perform Layer 4 or Layer 7 load balancing functions. SSL can be terminated at the load balancer or used as a pass-through to the Gorouter.
Common deployments of load balancing in PAS are:
- HTTP/HTTPS traffic to and from Gorouters
- TCP traffic to and from TCP routers
- Traffic from the Diego Brain, when developers access app containers through SSH
To load-balance across multiple PAS foundations, use an IaaS- or vendor-specific Global Traffic Manager or Global DNS load balancer.
For more information, see Global DNS Load Balancers for Multi-Foundation Environments.
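As a toy illustration of the round-robin distribution an external load balancer typically applies across the Gorouter tier, with hypothetical backend addresses (a real load balancer would also health-check each backend before routing to it):

```python
from itertools import cycle

# Hypothetical Gorouter addresses, one per AZ.
gorouters = ["10.0.1.10", "10.0.1.74", "10.0.1.138"]

backend = cycle(gorouters)

# Each incoming request is handed to the next Gorouter in turn;
# after the last address, the rotation wraps back to the first.
first_six = [next(backend) for _ in range(6)]
print(first_six)
```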
High Availability
PAS is not considered high availability (HA) until it runs across at least two AZs. VMware recommends defining three AZs.
On IaaSes with their own HA capabilities, using the IaaS HA in conjunction with a Pivotal Platform HA topology provides the best of both worlds. Multiple AZs give Pivotal Platform redundancy, so that losing an AZ is not catastrophic. The BOSH Resurrector can then replace lost VMs as needed to repair a foundation.
To back up and restore a foundation externally, use BOSH Backup and Restore (BBR). For more information, see BOSH Backup and Restore documentation.
Storage
PAS requires disk storage for each component, for both persistent data and to allocate to ephemeral data. You size these disks in the Resource Config pane of the PAS tile. For more information about storage configuration and capacity planning, see the corresponding section in the reference architecture for your IaaS.
The platform also requires you to configure file storage for large shared objects. These blobstores can be external or internal. For details, see Configuring File Storage for PAS.
Security
For information about how PAS implements security, see:
Network Communication Paths in Ops Manager in Network Security
Domain Names
PAS requires these domain names to be registered:
- System domain, for PAS and other tiles: `sys.domain.name`
- App domain, for your apps: `app.domain.name`
You must also define these wildcard domain names and include them when creating certificates that access PAS and its hosted apps:
- *.SYSTEM-DOMAIN
- *.APPS-DOMAIN
- *.login.SYSTEM-DOMAIN
- *.uaa.SYSTEM-DOMAIN
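A wildcard in a certificate matches exactly one DNS label, which is why the list above includes separate entries such as `*.login.SYSTEM-DOMAIN` rather than relying on `*.SYSTEM-DOMAIN` alone. A minimal sketch of that matching rule, using a hypothetical `example.com` system domain:

```python
def wildcard_covers(pattern: str, hostname: str) -> bool:
    """Return True if a wildcard pattern such as '*.sys.example.com'
    matches hostname. A '*' matches exactly one DNS label; it does
    not span label boundaries."""
    p_labels = pattern.lower().split(".")
    h_labels = hostname.lower().split(".")
    if len(p_labels) != len(h_labels):
        return False
    return all(p == "*" or p == h for p, h in zip(p_labels, h_labels))

# '*.sys.example.com' covers single-label subdomains of the system domain.
assert wildcard_covers("*.sys.example.com", "api.sys.example.com")

# It does NOT cover deeper names, which is why '*.uaa.SYSTEM-DOMAIN'
# and '*.login.SYSTEM-DOMAIN' need their own wildcard entries.
assert not wildcard_covers("*.sys.example.com", "key.uaa.sys.example.com")
assert wildcard_covers("*.uaa.sys.example.com", "key.uaa.sys.example.com")
```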
Component Scaling
For recommendations on scaling PAS for different deployment scenarios, see Scaling PAS.
Enterprise PKS Architecture
The diagram below illustrates a base architecture for Enterprise PKS and how its network topology places and replicates Pivotal Platform and Enterprise PKS components across subnets and AZs:
Internal Components
The table below describes the internal component placements shown in the diagram above:
Component | Placement and Access Notes |
---|---|
Ops Manager | Deployed on one of the subnets. Accessible by fully-qualified domain name (FQDN) or through an optional jumpbox. |
BOSH Director | Deployed on the infrastructure subnet. |
Jumpbox | Optional. Deployed on the infrastructure subnet for accessing Enterprise PKS management components such as Ops Manager and the kubectl command line interface. |
Enterprise PKS API | Deployed as a service broker VM on the Enterprise PKS services subnet. Handles Enterprise PKS API and service adapter requests, and manages Enterprise PKS clusters. For more information, see Enterprise PKS Components in Enterprise Pivotal Container Service (Enterprise PKS). |
Harbor tile | Optional container images registry, typically deployed to the services subnet. |
Enterprise PKS clusters | Deployed to a dynamically-created, dedicated Enterprise PKS cluster subnet. Each cluster consists of worker nodes that run the workloads, or apps, and one or more master nodes. |
Networks
These sections describe VMware’s recommendations for defining your networks and load-balancing their incoming requests.
Subnet Requirements
Enterprise PKS requires these defined networks to host its main components:

- **Infrastructure subnet** - `/24` segment. This subnet contains VMs that require access only for Platform Administrators, such as Ops Manager, the BOSH Director, and an optional jumpbox.
- **Enterprise PKS services subnet** - `/24` segment. This subnet hosts the Enterprise PKS API VM and other optional service tiles such as Harbor.
- **Enterprise PKS cluster subnets** - `/24` segments drawn from a pool of pre-allocated IP addresses. These subnets host Enterprise PKS clusters.
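Because each new cluster receives its own `/24` from the pre-allocated pool, capacity planning amounts to counting how many `/24` blocks the pool can yield. A sketch of that allocation, with a hypothetical `10.1.0.0/16` pool:

```python
import ipaddress

# Hypothetical pre-allocated pool; each new cluster receives one /24.
pool = list(ipaddress.ip_network("10.1.0.0/16").subnets(new_prefix=24))

allocated = {}

def allocate_cluster_subnet(cluster_name: str) -> ipaddress.IPv4Network:
    """Hand the next free /24 from the pool to a new cluster."""
    subnet = pool.pop(0)
    allocated[cluster_name] = subnet
    return subnet

print(allocate_cluster_subnet("cluster-1"))  # 10.1.0.0/24
print(allocate_cluster_subnet("cluster-2"))  # 10.1.1.0/24
```

A `/16` pool yields 256 such blocks, so it caps the foundation at 256 clusters; size the pool for the number of clusters you expect to run.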
Load Balancing
You can use load balancers to manage traffic across the master nodes of an Enterprise PKS cluster or for deployed workloads. For more information about how to configure load balancers for Enterprise PKS, see the corresponding section in the reference architecture for your IaaS.
High Availability
Enterprise PKS has no inherent HA capabilities. To support Enterprise PKS, design for HA at the IaaS, storage, power, and access layers.
Storage
Enterprise PKS requires shared storage across all AZs so that deployed workloads can allocate the storage they require.
Security
For information about how Enterprise PKS implements security, see Enterprise PKS Security and Firewall Ports.
Domain Names
Enterprise PKS requires the `*.pks.domain.name` domain name to be registered when creating a wildcard certificate and Enterprise PKS tile configurations.
The wildcard certificate covers both the Enterprise PKS API domain, such as `api.pks.domain.name`, and the Enterprise PKS cluster domains, such as `cluster.pks.domain.name`.
Cluster Management
For information about managing Enterprise PKS clusters, see Managing Clusters.