Platform Architecture and Planning Overview

This documentation describes reference architectures and other plans for installing Pivotal Platform on any supported infrastructure to run the Pivotal Application Service (PAS) and Pivotal Container Service (PKS) runtime environments.

Overview

A Pivotal Platform reference architecture describes a proven approach for deploying Pivotal Platform on a specific IaaS, such as AWS, Azure, GCP, OpenStack, or vSphere. Pivotal Platform reference architectures meet the following requirements:

  • Are secure
  • Are publicly accessible
  • Include common Pivotal Platform-managed services such as MySQL, RabbitMQ, and Spring Cloud Services
  • Can host at least 100 app instances, and often far more
  • Have been deployed and validated by Pivotal to support Pivotal Operations Manager, PAS, and PKS

You can use Pivotal Platform reference architectures to help plan the best configuration for your Pivotal Platform deployment on your IaaS.

Reference Architecture and Planning Topics

All Pivotal Platform reference architectures start with the Base PAS Architecture and Base PKS Architecture, described below.

Infrastructure-specific topics for each supported IaaS build on these two common base architectures.

A broader architecture topic describes how to automate the installation and updating of multiple Pivotal Platform foundations, each an instance of one of the architectures above.

Additional topics address aspects of platform architecture and planning that the Pivotal Platform reference architectures do not cover.

PAS Architecture

The following diagram shows a base architecture for PAS, and how its network topology places and replicates Pivotal Platform and PAS components across subnets and Availability Zones.

PAS Base Deployment Topology


Internal Components

The following list describes the internal component placements shown in the diagram above:

  • Ops Manager: Deployed on one of the three public subnets. Accessible by fully-qualified domain name (FQDN) or through an optional jumpbox.
  • BOSH Director: Deployed on the infrastructure subnet.
  • Jumpbox: Optional. Deployed on the infrastructure subnet for accessing PAS management components such as Ops Manager and the Cloud Foundry command-line interface (cf CLI).
  • Gorouters (HTTP routers in CF): Deployed on all three PAS subnets, one per Availability Zone (AZ). Accessed through the HTTP, HTTPS, and SSL load balancers.
  • Diego Brain: Deployed on all three PAS subnets, one per AZ. The Diego Brain component is required, but SSH container access through the Diego Brain is optional, and is enabled through the SSH load balancers.
  • TCP Routers: Optional. Deployed on all three PAS subnets, one per AZ, to support TCP routing.
  • Service Tiles: Service brokers and shared service instances are deployed to the Services subnet. Dedicated on-demand service instances are deployed to an On-Demand Services subnet.
  • Isolation Segments: Deployed on an Isolation Segment subnet. Each includes Diego Cells and Gorouters for running and accessing apps hosted within isolation segments.

Networks

Pivotal recommends defining your networks and load-balancing their incoming requests as follows:

Required Subnets

PAS requires the following statically defined networks to host its main component systems; a sketch of one possible subnet plan follows this list.

  • Infrastructure subnet - /24 segment. This subnet contains VMs that only platform administrators need to access, such as Ops Manager, the BOSH Director, and an optional jumpbox.
  • PAS subnet - /24 segment. This subnet contains PAS runtime VMs, such as Gorouters, Diego Cells, and Cloud Controllers.
  • Services subnet - /24 segment. The Services and On-Demand Services networks support Ops Manager tiles that you might add in addition to PAS. You can think of them as the “everything not PAS” networks. Some service tiles can request additional network capacity to grow into on demand. If you use services with this capability, Pivotal recommends that you add an On-Demand Services network for each on-demand service, as described below.
  • On-Demand Services subnet(s) - /24 segments. These subnets are for services that allocate network capacity on demand from BOSH for their worker VMs; see the Services subnet above. Pivotal recommends allocating a dedicated subnet to each on-demand service. For example, you can configure the Redis tile as follows:

    • Network: enter the existing Services network, to host the service broker.
    • Services Network: deploy a new network, OD-Services1, to host the Redis worker VMs.

    Another on-demand service tile can then also use Services for its broker and a new OD-Services2 network for its workers, and so on.

  • Isolation Segment subnet(s) - /24 segments. You can add one or more Isolation Segment tiles to a PAS installation to compartmentalize hosting and routing resources. For each isolation segment you deploy, designate a /24 network for its address space.
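
To make this plan concrete, here is a minimal sketch, using Python's standard ipaddress module, of carving the /24 segments above out of a single parent block. The 10.0.0.0/16 parent range, the number of on-demand and isolation-segment subnets, and the subnet names are assumptions for illustration only; substitute the ranges your network team allocates.

    import ipaddress

    # Example parent block; an assumption for this sketch only.
    parent = ipaddress.ip_network("10.0.0.0/16")

    # Generator yielding consecutive /24 segments within the parent block.
    slices = parent.subnets(new_prefix=24)

    plan = {
        "infrastructure": next(slices),       # Ops Manager, BOSH Director, jumpbox
        "pas": next(slices),                  # Gorouters, Diego Cells, Cloud Controllers
        "services": next(slices),             # service brokers and shared services
        "od-services-1": next(slices),        # one subnet per on-demand service
        "od-services-2": next(slices),
        "isolation-segment-1": next(slices),  # one subnet per isolation segment
    }

    for name, net in plan.items():
        # num_addresses includes the network and broadcast addresses.
        print(f"{name:20} {net}  ({net.num_addresses - 2} usable hosts)")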

Load Balancing

Any PAS installation needs a suitable load balancer to send incoming HTTP, HTTPS, SSH, and SSL traffic to its Gorouters and application containers. Installations approaching production-level use rely on external load balancing from a hardware appliance vendor or another network-layer solution.

The load balancer can also perform Layer 4 or Layer 7 load balancing. SSL can be terminated at the load balancer or passed through to the Gorouter.

Common deployments of load balancing in PAS, modeled in the sketch after this list, are:

  • HTTP/HTTPS traffic to and from Gorouters
  • TCP traffic to and from TCP routers
  • Traffic from the Diego Brain, when developers access app containers via SSH
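
As a concrete model of these mappings, the sketch below expresses each listener-to-pool rule as plain data. The ports reflect common PAS defaults (80 and 443 for the Gorouters, 2222 for the Diego Brain SSH proxy, and a 1024-1123 range for the TCP routers), but they are assumptions here; verify them against your own tile configuration. The pool names are illustrative.

    # Listener-to-backend-pool rules for a PAS load balancer.
    # Ports are typical defaults, not guaranteed values; check your
    # tile configuration. Pool names are illustrative.
    LISTENERS = {
        "http":  {"frontend_port": "80",        "pool": "gorouters"},
        "https": {"frontend_port": "443",       "pool": "gorouters"},
        "ssh":   {"frontend_port": "2222",      "pool": "diego-brain"},
        "tcp":   {"frontend_port": "1024-1123", "pool": "tcp-routers"},
    }

    for name, rule in LISTENERS.items():
        print(f"{name:5} :{rule['frontend_port']:9} -> {rule['pool']}")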

To balance load across multiple PAS foundations, use an IaaS- or vendor-specific Global Traffic Manager or Global DNS load balancer.

For additional information, see Global DNS Load Balancers for Multi-Foundation Environments.

High Availability

PAS is not considered High Availability (HA) until it runs across at least two Availability Zones (AZs). Pivotal recommends defining three AZs.

On IaaSes with their own HA capabilities, using the IaaS HA in conjunction with a Pivotal Platform HA topology provides the best of both worlds. Multiple AZs give Pivotal Platform redundancy, so that losing an AZ is not catastrophic. The BOSH Resurrector can then replace lost VMs as needed to repair a foundation.
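
A quick capacity calculation shows why three AZs are preferable to two. Assuming app instances are spread evenly across AZs (an assumption for this illustration), losing one of two AZs halves capacity, while losing one of three leaves about two thirds:

    import math

    def surviving_capacity(instances: int, azs: int) -> int:
        """App instances still running after losing one AZ,
        assuming instances are spread evenly across AZs."""
        per_az = math.ceil(instances / azs)  # worst case: the fullest AZ fails
        return instances - per_az

    for azs in (2, 3):
        left = surviving_capacity(120, azs)
        print(f"{azs} AZs: {left}/120 app instances survive an AZ outage")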

To back up and restore a foundation externally, use BOSH Backup and Restore (BBR). For more information, see the BOSH Backup and Restore documentation.

Storage

PAS requires disk storage for each component, for both persistent and ephemeral data. You size these disks in the Resource Config pane of the PAS tile. For details on storage configuration and capacity planning, see the corresponding topic for your specific IaaS.

The platform also requires you to configure file storage for large shared objects. These blobstores can be external or internal. For details, see Configuring File Storage for PAS.

Security

For information about how PAS implements security, see the PAS security topics in this documentation.

Domain Names

PAS requires the following domain names to be registered:

  • System domain, for PAS and other tiles: sys.domain.name
  • App domain, for your applications: app.domain.name

You must also define the following wildcard domain names and include them in the certificates used to access PAS and its hosted apps; a certificate-request sketch follows this list:

  • *.SYSTEM-DOMAIN
  • *.APPS-DOMAIN
  • *.login.SYSTEM-DOMAIN
  • *.uaa.SYSTEM-DOMAIN
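
As one illustration of how these wildcard names feed into certificate creation, the sketch below uses the Python cryptography library to build a certificate signing request whose subject alternative names cover all four wildcards. The sys.example.com and apps.example.com values are placeholder domains, not values from this documentation; substitute your registered system and app domains.

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.x509.oid import NameOID

    # Placeholder domains; substitute your registered domains.
    SYSTEM_DOMAIN = "sys.example.com"
    APPS_DOMAIN = "apps.example.com"

    sans = [
        f"*.{SYSTEM_DOMAIN}",
        f"*.{APPS_DOMAIN}",
        f"*.login.{SYSTEM_DOMAIN}",
        f"*.uaa.{SYSTEM_DOMAIN}",
    ]

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    csr = (
        x509.CertificateSigningRequestBuilder()
        .subject_name(x509.Name([
            x509.NameAttribute(NameOID.COMMON_NAME, f"*.{SYSTEM_DOMAIN}"),
        ]))
        .add_extension(
            x509.SubjectAlternativeName([x509.DNSName(n) for n in sans]),
            critical=False,
        )
        .sign(key, hashes.SHA256())
    )

    # PEM-encoded CSR, ready to submit to your certificate authority.
    print(csr.public_bytes(serialization.Encoding.PEM).decode())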

Component Scaling

See Scaling PAS for recommendations on how to scale PAS for different deployment scenarios.

PKS Architecture

The following diagram shows a base architecture for PKS, and how its network topology places and replicates Pivotal Platform and PKS components across subnets and Availability Zones.

PKS Base Deployment Topology


Internal Components

The following list describes the internal component placements shown in the diagram above:

  • Ops Manager: Deployed on one of the subnets. Accessible by FQDN or through an optional jumpbox.
  • BOSH Director: Deployed on the infrastructure subnet.
  • Jumpbox: Optional. Deployed on the infrastructure subnet for accessing PKS management components such as Ops Manager and the PKS and kubectl command-line interfaces.
  • PKS API: Deployed as a service broker VM on the PKS Services subnet. Handles PKS API and service adapter requests, and manages PKS clusters. For more information, see Pivotal Container Service (PKS).
  • Harbor Tile: Optional container image registry, typically deployed to the Services subnet.
  • PKS Cluster: Deployed to a dynamically-created, dedicated PKS cluster subnet. Each cluster consists of one or more master nodes and the worker nodes that run the workloads (applications).

Networks

Pivotal recommends defining your networks and load-balancing their incoming requests as follows:

Subnet Requirements

PKS requires the following networks to host its main components; a sketch of cluster subnet allocation follows this list.

  • Infrastructure subnet - /24 segment. Contains VMs that only platform administrators need to access, such as Ops Manager, the BOSH Director, and an optional jumpbox.
  • PKS Services subnet - /24 segment. Hosts the PKS API VM and other optional service tiles such as Harbor.
  • PKS Cluster subnets - /24 segments, each drawn from a pool of pre-allocated IP addresses. Host PKS clusters.
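
Because PKS creates cluster subnets dynamically, it can help to see how a pool of /24 segments might be pre-allocated and handed out. In this sketch, the 10.1.16.0/20 pool (an assumption, yielding sixteen /24 cluster subnets) and the cluster names are illustrative:

    import ipaddress

    # Example pre-allocated pool; substitute your own reserved range.
    # A /20 yields sixteen /24 cluster subnets.
    POOL = ipaddress.ip_network("10.1.16.0/20")

    available = list(POOL.subnets(new_prefix=24))
    assigned = {}

    def allocate_cluster_subnet(cluster_name: str) -> ipaddress.IPv4Network:
        """Hand the next free /24 from the pool to a new cluster."""
        if not available:
            raise RuntimeError("cluster subnet pool exhausted")
        subnet = available.pop(0)
        assigned[cluster_name] = subnet
        return subnet

    print(allocate_cluster_subnet("mycluster"))   # 10.1.16.0/24
    print(allocate_cluster_subnet("team-a-dev"))  # 10.1.17.0/24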

Load Balancing

Load balancers can be used to distribute traffic across the master nodes of a PKS cluster, or to deployed workloads. For more information about configuring load balancers for PKS, see the corresponding topic for your specific IaaS.

High Availability

PKS has no inherent HA capabilities to design for. Instead, design for HA at the IaaS, storage, power, and network access layers to support PKS.

Storage

PKS requires storage that is shared across all Availability Zones so that deployed workloads can allocate the storage they require.

Security

For information about how PKS implements security, see the PKS security topics in this documentation.

Domain Names

PKS requires the following wildcard domain name to be registered for use in the wildcard certificate and the PKS tile configuration: *.pks.domain.name

The wildcard certificate covers both the PKS API domain, for example api.pks.domain.name, and the PKS cluster domains, for example mycluster.pks.domain.name.
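
A minimal check, using a placeholder domain, illustrates why one wildcard covers both names: each sits exactly one label below pks.domain.name, which is the usual TLS rule for a '*' wildcard.

    def matches_wildcard(hostname: str, pattern: str) -> bool:
        """True if hostname matches a single-label wildcard pattern,
        per the usual TLS rule that '*' covers exactly one label."""
        if not pattern.startswith("*."):
            return hostname == pattern
        suffix = pattern[2:]
        head, sep, tail = hostname.partition(".")
        return bool(sep) and tail == suffix and head != ""

    PATTERN = "*.pks.example.com"  # placeholder domain
    for name in ("api.pks.example.com", "mycluster.pks.example.com"):
        print(name, matches_wildcard(name, PATTERN))  # both True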

Cluster Management

See Managing Clusters for recommendations on managing Enterprise PKS clusters.