vSphere Reference Architecture


This topic provides reference architectures for Pivotal Platform on vSphere. It builds on the common base architectures described in Platform Architecture and Planning.

See Installing Pivotal Platform on vSphere for additional requirements and installation instructions for Pivotal Platform on vSphere.

Before You Begin

Our products contain feature sets that, when used, impact the designs in this reference architecture.

Isolation Segments

Isolation segments are delivered as a tile in Pivotal Operations Manager. They are used to create smaller individual deployments of PAS components, Gorouters and Diego cells, to help isolate workloads. Isolation criteria can include capacity management or audit compliance, among others. Isolation segments can also be patched independently of the PAS tile or other isolation segments. When using isolation segments, Pivotal recommends that you deploy all containers to isolation segments and leave only system-level components on the PAS tile. Moving all application workloads to isolation segments gives you finer-grained control over network usage, capacity planning, firewalling, and business continuity.

Host Groups

Host groups allow you to create virtual PaaS failure domains independently of physical IaaS capacity. In PAS or PKS on vSphere, this amounts to creating multiple availability zones that do not have to align 1:1 with vSphere constructs such as clusters or resource pools. You can use host groups to gain PaaS high availability with IaaS capacity that is undersized. Alternatively, you can use host groups to create availability zones that grow elastically as the IaaS grows.

Multiple vCenter Support

Multiple vCenter support allows you to use PAS in a mature vSphere environment. Coupled with host groups, it enables a brownfield deployment strategy: hosts in existing vSphere clusters can be set aside for PAS or PKS. PAS and PKS can also be stretched geographically without needing to stretch IaaS elements.

Overview

High Availability

For information about high availability requirements and recommendations for PAS on vSphere, see the High Availability section of Platform Architecture and Planning Overview.

Shared Storage

Shared storage is a requirement for Pivotal Platform. You can allocate networked storage to the host clusters following one of two common approaches: horizontal or vertical. The approach you follow reflects how your data center arranges its storage and host blocks in its physical layout.

Horizontal Shared Storage

With the horizontal shared storage approach, you grant all hosts access to all datastores and assign a subset to each Pivotal Platform installation.

For example, with six datastores ds01 through ds06, you grant all nine hosts access to all six datastores. You then provision your first Pivotal Platform installation to use stores ds01 through ds03 and your second Pivotal Platform installation to use ds04 through ds06.

Vertical Shared Storage

With the vertical shared storage approach, you grant each cluster its own datastores, creating a cluster-aligned storage strategy. vSphere vSAN is an example of this architecture.

For example, with six datastores ds01 through ds06, you assign datastores ds01 and ds02 to a cluster, ds03 and ds04 to a second cluster, and ds05 and ds06 to a third cluster. You then provision your first Pivotal Platform installation to use ds01, ds03, and ds05, and your second Pivotal Platform installation to use ds02, ds04, and ds06.

With this arrangement, all VMs in the same installation and cluster share a dedicated datastore.
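As an illustration, the vertical assignment above can be sketched in a few lines of Python. This is only a hypothetical model of the example mapping, not an official tool; the cluster and datastore names come from the example text.

```python
# Sketch of the vertical shared-storage example: each cluster owns two
# datastores, and each installation takes one datastore per cluster.
clusters = {
    "cluster1": ["ds01", "ds02"],
    "cluster2": ["ds03", "ds04"],
    "cluster3": ["ds05", "ds06"],
}

# Installation 1 uses the first datastore of each cluster;
# installation 2 uses the second.
installations = {
    "install1": [stores[0] for stores in clusters.values()],
    "install2": [stores[1] for stores in clusters.values()],
}

# Every installation spans all three clusters, and no datastore is
# shared between installations.
assert installations["install1"] == ["ds01", "ds03", "ds05"]
assert installations["install2"] == ["ds02", "ds04", "ds06"]
assert not set(installations["install1"]) & set(installations["install2"])
```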

Storage Capacity

Pivotal recommends the following storage capacity allocation for production and non-production PAS environments:

  • Production environments: Configure at least 8 TB of data storage, either as one 8 TB store or as a number of smaller volumes that sum to 8 TB. Frequently used deployments may require significantly more storage to accommodate new code and buildpacks.
  • Non-production environments: Configure 4 to 6 TB of data storage.

Note: Pivotal Platform does not support using vSphere Storage Clusters with the latest versions of Pivotal Platform validated for the reference architecture. Datastores should be listed in the vSphere tile by their native name, not the cluster name created by vCenter for the storage cluster.

Note: If a datastore is part of a vSphere Storage Cluster using Storage DRS (sDRS), you must disable the s-vMotion feature on any datastores used by Pivotal Platform. Otherwise, s-vMotion activity can rename independent disks and cause BOSH to malfunction. For more information, see How to Migrate Pivotal Platform to a New Datastore in vSphere.

For more information about general storage requirements and recommendations for PAS, see the Storage section of Platform Architecture and Planning Overview.

SQL Server

An internal MySQL database is sufficient for use in production environments.

However, an external database provides more control over database management for large environments that require multiple data centers.

For information about configuring system databases on PAS, see the Configure System Databases section of Configuring PAS.

Security

For information about security requirements and recommendations for PAS deployments, see the Security section of Platform Architecture and Planning Overview.

Blobstore Storage

PAS ships with an internal blobstore, which is recommended only for proof-of-concept (POC) deployments. Pivotal recommends the following blobstore storage for production and non-production PAS environments:

  • Production/Test environments: Use an external S3-compatible storage appliance as the blobstore, such as Dell ECS, Minio, or any other S3-compatible datastore in your environment.
  • POC environments: Use the internal blobstore.

Note: For POC environments, the internal blobstore can be the primary consumer of storage, and it must be actively maintained. Expect deployment downtime during events such as storage upgrades or migrations to new disks.

For more information about blobstore storage requirements and recommendations, see the Configure File Storage section of Configuring PAS for Upgrades.

DNS

PAS requires a system domain, app domain, and several wildcard domains.

For more information about DNS requirements for PAS, see the Domain Names section in Platform Planning and Architecture.

Networking

The vSphere reference architecture for the Pivotal Application Service (PAS) and Pivotal Container Service (PKS) runtimes is based on software-defined networking (SDN) infrastructure. vSphere offers NSX-T and NSX-V to support SDN infrastructure.

Pivotal recommends using an SDN to take advantage of features including:

  • Virtualized, encapsulated networks and broadcast domains
  • VLAN exhaustion avoidance with the use of virtualized, encapsulated logical networks
  • DNAT/SNAT services to create separate, non-routable network ranges for the PAS installation
  • Load balancing services to pass traffic through to pools of platform routers
  • SSL termination at the load balancer at layer 7 with the option to forward on at layer 4 or 7 with unique certificates
  • Virtual, distributed routing and firewalling services for east-west traffic native to the hypervisor

Pivotal Platform supports the following configurations for Pivotal Platform on vSphere deployments:

PAS on vSphere with NSX-T

The following sections describe the reference architecture for PAS on vSphere with NSX-T deployments, as well as requirements and recommendations for deploying PAS on vSphere with NSX-T, including network, load balancing, and storage capacity requirements and recommendations.

Architecture

The following reference architecture diagram describes the architecture for PAS on vSphere with NSX-T deployments.

The diagram shows the architecture for a PAS on vSphere with NSX-T deployment. For more information about the components and networking demonstrated by the diagram, read the description below this diagram.

View a larger version of this diagram.

As shown in the diagram above, PAS deployments with NSX-T are ideally deployed with three clusters and three availability zones (AZs).

An NSX-T Tier-0 router is on the entry boundary of the PAS deployment. This router is the central logical router into and out of the PAS platform. Routing configuration on the IP backbone can be static or dynamic using BGP on the Tier-0 router. Several Tier-1 routers, such as the routers for the PAS and infrastructure subnets, connect to the Tier-0 router as child routing points to and from other child Tier-1 routers and the routed IP backbone.

NSX-T Container Plugin Requirement

For PAS deployments, the VMware NSX-T Container Plugin for Pivotal Platform is required to enable the SDN features available through NSX-T.

The NSX-T Container Plugin enables a container networking stack and integrates with NSX-T.

Note: To use NSX-T with PAS, the NSX-T Container Plugin must be installed, configured, and deployed at the same time as the PAS tile. To download the NSX-T Container Plugin, see VMware NSX-T Container Plug-in for Pivotal Platform on Pivotal Network.

Networking

The following sections describe networking requirements and recommendations for PAS on vSphere with NSX-T deployments.

Routable IPs

The Tier-0 router must have routable external IP address space assigned, either to peer with other routers on that backbone (BGP routing) or simply to be an addressable routing point (static routing). Select a network range for the Tier-0 router with enough addresses, as the network is separated into the following two jobs:

  • Routing incoming and outgoing traffic on the IP backbone.
  • DNAT(s) and SNAT(s), load balancer VIPs, and other Pivotal Platform components.

Note: Compared to NSX-V, NSX-T consumes much more address space for SNATs.

Load Balancing

The following are load balancing requirements and recommendations for PAS on vSphere with NSX-T deployments:

  • You must configure load balancers for the Gorouters, which NSX-T can provide.
    • The domains for the PAS system and apps must resolve to the load balancer VIP.
    • You must assign either a private or a public IP address to the domains for the PAS system and apps.
  • Pivotal recommends that you configure load balancers at layer 4 for the Gorouters. With layer 4 load balancers, HTTP/HTTPS traffic passes through the load balancers and SSL is terminated at the Gorouters. This approach reduces overhead at the NSX load balancers due to processing encryption/decryption. This burden is better shared amongst a number of Gorouters as compared to a single, logical NSX load balancer.

    Note: NSX load balancers can operate at layer 7 and terminate/initiate SSL. However, this adds burden on the NSX Edge that hosts the load balancer, which may have other jobs and processing needs. It also requires well-resourced Edges that may not be available or may not grow with increasing demands, so this approach is not recommended.

  • Any TCP Gorouters and SSH Proxies within the platform also require load balancers, which NSX-T can provide.
  • Additional Layer 4 and Layer 7 NSX-T load balancers are created automatically during app deployment.

Networking, Subnets, and IP Spacing

The following are requirements and recommendations related to networks, subnets, and IP spacing for PAS on vSphere:

  • PAS requires statically-defined networks to host PAS component VMs.
  • The tenant side (on the right side of the diagram) uses a series of non-routable address ranges when using NAT.
  • NSX-T dynamically assigns PAS org networks and adds a Tier-1 router. These org networks are automatically instantiated based on a non-overlapping block of address space. You can configure the block of address space in the NCP Configuration section of the NSX-T tile in Pivotal Operations Manager. The default is /24, thus every dynamically generated org in PAS is assigned a new /24 network.

For more information about PAS subnets, see the Required Subnets section in Platform Architecture and Planning Overview.

Example subnet layout using NAT:

  • Infrastructure - 192.168.1.0/24
  • Deployment - 192.168.2.0/24
  • Services - 192.168.3.0/24
  • On-Demand Service - 192.168.4.0 - 192.168.9.255 (in /24 segments)
  • Isolation segments - 192.168.10.0 - 192.168.127.255 (in /24 segments)
  • On-Demand Orgs - 192.168.128.0/17 (in /24 segments, auto-allocated by NSX-T only)
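The example layout above can be sanity-checked with Python's standard-library `ipaddress` module. This is an illustrative sketch, not part of any Pivotal tooling, and uses only the ranges from the example:

```python
import ipaddress

# Model the statically-defined ranges from the example NAT layout and
# confirm that they do not overlap one another.
layout = {
    "infrastructure": ipaddress.ip_network("192.168.1.0/24"),
    "deployment": ipaddress.ip_network("192.168.2.0/24"),
    "services": ipaddress.ip_network("192.168.3.0/24"),
    "on-demand-orgs": ipaddress.ip_network("192.168.128.0/17"),
}

nets = list(layout.values())
for i, a in enumerate(nets):
    for b in nets[i + 1:]:
        assert not a.overlaps(b), f"{a} overlaps {b}"

# NCP carves /24 org networks out of the /17 block, which yields
# 2^(24-17) = 128 possible orgs.
org_subnets = list(layout["on-demand-orgs"].subnets(new_prefix=24))
print(len(org_subnets))   # 128
print(org_subnets[0])     # 192.168.128.0/24
```

A similar check against your own planned ranges can catch overlaps before they surface as routing problems in a deployment.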

PAS on vSphere with NSX-V

The following sections describe the reference architecture for PAS on vSphere with NSX-V deployments. They also provide requirements and recommendations for deploying PAS on vSphere with NSX-V, such as network, load balancing, and storage capacity requirements and recommendations.

PAS on vSphere with NSX-V enables services provided by NSX on the PAS platform, such as Edge services gateway (ESG), load balancers, firewall services, and NAT/SNAT services.

Architecture

The following reference architecture diagram describes the architecture for PAS on vSphere with NSX-V deployments.

The diagram shows the architecture for a PAS on vSphere with NSX-V deployment. For more information about the components and networking demonstrated by the diagram, read the description below this diagram.

View a larger version of this diagram.

As shown in the diagram above, PAS deployments with NSX-V are deployed with three clusters and three availability zones (AZs).

PAS deployments with NSX-V also include an Edge Services Gateway (ESG) as a router on the front end. In NSX-V, traffic entering and leaving a PAS installation as a tenant behind an ESG is considered North/South and is handled by the ESG. Traffic between the networks deployed on the tenant side of an ESG is considered East/West and can be routed by the ESG or by a distributed logical router (DLR). Compared to the NSX-T architecture, the NSX-V architecture does not use the Tier-0/Tier-1 router construct to connect the central router to the various subnets of the PAS deployment.

For more information about using ESG on vSphere, see Using Edge Services Gateway on VMware NSX.

Networking

The following sections describe networking requirements and recommendations for PAS on vSphere with NSX-V deployments.

Routable IPs

You must assign routable external IPs on the server side, such as routable IPs for NATs and load balancers, to the Edge router.

Load Balancing

Load balancing services are available in the Edge Services Gateway (ESG). These services are processes that run on the ESG alongside other services such as routing and firewalling. The following are load balancing requirements and recommendations for PAS on vSphere with NSX-V deployments:

  • ESG load balancers can be configured as either layer 4 (encryption pass-through) or layer 7 (encryption termination/initiation). Pivotal recommends layer 4 services at the ESG to reduce processing overhead in the ESG.

    Note: It is possible to use layer 7 load balancers and terminate SSL at the load balancers. However, this approach adds processing overhead and is not recommended.

  • The domains for the PAS system and apps must resolve to the load balancer. If the ESG is also the load balancer, these domains will resolve to a VIP on that ESG.
  • If you are also using features such as TCP routing or a proxy for SSH, you can deploy a load balancer service for them in the same, or a different ESG.
  • If you do not want to use the load balancing services in NSX-V, or you use load balancing services external to NSX-V in addition to those included, deploy them external to (on the provider side of) the NSX-V installation.

A high-performance alternative to load balancing in the ESG that also serves as the boundary router is to deploy ESGs as dedicated load balancer service nodes on a separate network on the tenant side (behind the NAT) of the routing ESG. These ESG load balancers act as "one-arm load balancers": they connect to the separate network with a single interface and serve as both the load balancer VIP and the owner of the pool being balanced, all through that connection. You can deploy multiples of these to ensure compute resources are dedicated to load balancing with no overhead from any other job. The key difference from the typical approach is that the VIP and pool are both aligned to a single network connection, whereas typically the VIP is on the provider interface and the pool is on the tenant interface.

Networks, Subnets, and IP Spacing

For information about network, subnet, and IP space planning requirements and recommendations, see the Required Subnets section in Platform Architecture and Planning Overview.

High Availability

For information about high availability requirements and recommendations for PAS on vSphere, see the High Availability section of Platform Architecture and Planning Overview.

Shared Storage

Shared storage is a requirement for Pivotal Platform. You can allocate networked storage to the host clusters following one of two common approaches: horizontal or vertical. The approach you follow reflects how your data center arranges its storage and host blocks in its physical layout.

For information about horizontal and vertical shared storage, see Shared Storage.

Storage Capacity

Pivotal recommends the following storage capacity allocation for production and non-production PAS environments:

  • Production environments: Configure at least 8 TB of data storage, either as one 8 TB store or as a number of smaller volumes that sum to 8 TB. Frequently used deployments may require significantly more storage to accommodate new code and buildpacks.
  • Non-production environments: Configure 4 to 6 TB of data storage.

Note: Pivotal Platform does not support using vSphere Storage Clusters with the latest versions of Pivotal Platform validated for the reference architecture. Datastores should be listed in the vSphere tile by their native name, not the cluster name created by vCenter for the storage cluster.

Note: If a datastore is part of a vSphere Storage Cluster using Storage DRS (sDRS), you must disable the s-vMotion feature on any datastores used by Pivotal Platform. Otherwise, s-vMotion activity can rename independent disks and cause BOSH to malfunction. For more information, see How to Migrate Pivotal Platform to a New Datastore in vSphere.

For more information about general storage requirements and recommendations for PAS, see the Storage section of Platform Architecture and Planning Overview.

SQL Server

An internal MySQL database is sufficient for use in production environments.

However, an external database provides more control over database management for large environments that require multiple data centers.

For information about configuring system databases on PAS, see the Configure System Databases section of Configuring PAS.

Security

For information about security requirements and recommendations for PAS on vSphere deployments, see the Security section in Platform Architecture and Planning Overview.

Blobstore Storage

PAS ships with an internal blobstore, which is recommended only for proof-of-concept (POC) deployments. Pivotal recommends the following blobstore storage for production and non-production PAS environments:

  • Production/Test environments: Use an external S3-compatible storage appliance as the blobstore, such as Dell ECS, Minio, or any other S3-compatible datastore in your environment.
  • POC environments: Use the internal blobstore.

Note: For POC environments, the internal blobstore can be the primary consumer of storage, and it must be actively maintained. Expect deployment downtime during events such as storage upgrades or migrations to new disks.

For more information about blobstore storage requirements and recommendations, see the Configure File Storage section of Configuring PAS for Upgrades.

PAS on vSphere without SDN

The following sections describe the reference architecture for PAS on vSphere without software-defined networking (SDN) deployments.

Networking

Without an SDN, all IP allocations come from routed network space. Discuss and plan within your organization to acquire the needed amount of IP space for a PAS deployment, with future growth in mind, because in our experience:

  • Routed IP address space is a premium resource.
  • Adding more later is difficult, costly, and time-consuming.

The following is a best-guess layout for IP space utilization in a single PAS deployment:

  • Infrastructure - /28
  • PAS Deployment - /23 (This size depends almost entirely on the estimated container capacity. It can be smaller, but we do not recommend going larger in a single deployment.)
  • Services - /23 (This size depends almost entirely on the estimated service capacity. Resize as necessary.)

Isolation Segments

Isolation segments can help with IP address space needs in a routed network design:

  • Build smaller groups of Gorouters and Diego cells aligned to a particular service.
  • Smaller groups use less IP address space.

PKS on vSphere with NSX-T

The following sections describe the reference architecture for PKS on vSphere with NSX-T deployments. They also provide requirements and recommendations for deploying PKS on vSphere with NSX-T, such as network, load balancing, and storage capacity requirements and recommendations.

Architecture

The following reference architecture diagram describes the architecture for PKS on vSphere with NSX-T deployments.

The diagram shows the architecture for a PKS on vSphere with NSX-T deployment. For more information about the components and networking demonstrated by the diagram, read the description below this diagram.

View a larger version of this diagram.

As shown in the diagram above, PKS deployments with NSX-T are deployed with three clusters and three availability zones (AZs).

An NSX-T Tier-0 router is on the front end of the PKS deployment. This router is the central logical router into the PKS platform. You can configure static or dynamic routing using BGP from the routed IP backbone through the Tier-0 router. Several Tier-1 routers, such as the router for the infrastructure subnet, connect to the Tier-0 router. New Tier-1 routers are created on demand as new clusters and namespaces are added to PKS.

Note: The PKS on vSphere with NSX-T architecture supports multiple master nodes for PKS v1.2 and later.

Networking

The following sections describe networking requirements and recommendations for PKS on vSphere with NSX-T deployments.

Load Balancing

The following are load balancing requirements and recommendations for PKS on vSphere with NSX-T deployments:

  • Use standard NSX-T load balancers. NSX-T Load Balancers of type Layer 4 and Layer 7 are created automatically during app deployment.
  • Use both Layer 4 and Layer 7 load balancers:
    • Use Layer 7 load balancers for ingress routing.
    • Use Layer 4 load balancers for services of type LoadBalancer. This allows you to terminate SSL at the load balancers, which reduces overhead processing.
  • NSX-T provides ingress routing natively. You can also use a third party service for ingress routing, such as Istio or Nginx. You run the third party ingress routing service as a container in the cluster.
  • If you use a third party ingress routing service, the following requirements apply:

    • Create wildcard DNS entries to point to the service.
    • Define domain information for the ingress routing service in the manifest of the PKS on vSphere deployment. For example:

      apiVersion: extensions/v1beta1
      kind: Ingress
      metadata:
        name: music-ingress
        namespace: music1
      spec:
        rules:
        - host: music1.pks.domain.com
          http:
            paths:
            - path: /.*
              backend:
                serviceName: music-service
                servicePort: 8080
      
  • When you push a PKS on vSphere deployment with a service type set to LoadBalancer, NSX-T automatically creates a new VIP for the deployment on the existing load balancer for that namespace. You must specify a listening and translation port in the service, a name for tagging, and a protocol. For example:

    apiVersion: v1
    kind: Service
    metadata:
      ...
    spec:
      type: LoadBalancer
      ports:
      - port: 80
        targetPort: 8080
        protocol: TCP
        name: web
    

Routable IPs

The following are routable IP requirements and recommendations for PKS with NSX-T deployments:

  • Deployments with PKS NSX-T ingress: Pivotal recommends a /25 network for deployments with PKS NSX-T ingress. The Tier-0 router must have routable external IP address space to advertise on the BGP network with its peers.

    Select a network range for the Tier-0 router with enough space so that the network can be separated into the following two jobs:

    • Routing incoming and outgoing traffic.
    • DNATs and SNATs, load balancer VIPs, and other Pivotal Platform components.

    Note: Compared to vSphere deployments with NSX-V, PKS on vSphere with NSX-T consumes much more address space for SNATs.

  • Deployments with several load balancers: Pivotal recommends a /23 network for deployments that use several load balancers. Deployments with several load balancers have much higher address space consumption for load balancer VIPs, because Kubernetes service types allocate IP addresses very frequently. To accommodate this, allow for four times the address space.
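The "four times the address space" guidance above follows directly from the prefix lengths: a /23 holds four times as many addresses as a /25. A quick sketch with the stdlib `ipaddress` module (the `10.0.x.x` ranges are arbitrary examples, not prescriptive):

```python
import ipaddress

# Example ranges only; substitute your own routable blocks.
ingress_net = ipaddress.ip_network("10.0.0.0/25")   # recommended for NSX-T ingress
lb_heavy_net = ipaddress.ip_network("10.0.2.0/23")  # recommended with several LBs

print(ingress_net.num_addresses)   # 128
print(lb_heavy_net.num_addresses)  # 512

# A /23 provides exactly four times the address space of a /25.
assert lb_heavy_net.num_addresses == 4 * ingress_net.num_addresses
```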

Networks, Subnets, and IP Spacing

The following considerations and recommendations apply to networks, subnets, and IP spacing for PKS on vSphere with NSX-T deployments:

  • Allocate a large network block for PKS clusters and pods:

    • PKS Clusters: Configure a 172.24.0.0/14 network block.
    • PKS Pods: Configure a separate /14 network block, for example 172.28.0.0/14, that does not overlap the cluster block.

    NSX-T creates IP address blocks of /24 from these /14 networks by default each time a new cluster or pod is created. You can configure this CIDR range for PKS in Ops Manager.

  • When deploying PKS with Ops Manager, you must allow for a block of address space for dynamic networks that PKS deploys for each namespace. The recommended address space allows you to view a queue of which jobs relate to each service.

  • When a new PKS cluster is created, PKS creates a new /24 network from PKS cluster address space.

  • When a new app is deployed, new NSX-T Tier-1 routers are generated and PKS creates a /24 network from the PKS pods network.

  • Allocate a large IP block in NSX-T for Kubernetes pods, for example, a /14 network. NSX-T creates address blocks of /24 by default. This CIDR range is configurable in Ops Manager.
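The /14-to-/24 carving described above determines how many clusters or pod networks a block can back. A short sketch with the stdlib `ipaddress` module (using the example 172.24.0.0/14 cluster block from above):

```python
import ipaddress

# NSX-T carves /24 networks out of the /14 block by default, so a /14
# can back 2^(24-14) = 1024 clusters or pod networks.
cluster_block = ipaddress.ip_network("172.24.0.0/14")
per_cluster = cluster_block.subnets(new_prefix=24)

first = next(per_cluster)
print(first)                                  # 172.24.0.0/24, the first carved network
print(sum(1 for _ in per_cluster) + 1)        # 1024 /24 networks in total
```

Knowing this ceiling up front helps decide whether the default /14 block is generous enough for the expected number of clusters and namespaces.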

For more information, see Networks in Platform Architecture and Planning Overview.

Multi-Tenancy

For PKS on vSphere with NSX-T deployments, networks are created dynamically for both PKS clusters and pods.

To accommodate these dynamically-created networks, Pivotal recommends that you use multiple clusters, rather than a single cluster with multiple namespaces.

Multiple clusters provide additional features such as security, customization on a per cluster basis, privileged containers, failure domains, and version choice. Namespaces should be used as a naming construct and not as a tenancy construct.

Master Nodes

The PKS on vSphere with NSX-T architecture supports multiple master nodes for PKS v1.2 and later.

You can define the number of master nodes per plan in the PKS tile in Ops Manager. The number of master nodes should be an odd number to allow etcd to form a quorum.

Pivotal recommends that you have at least one master node per AZ for high availability and disaster recovery.
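The odd-number guidance above comes from how etcd forms a quorum: a majority of floor(n/2) + 1 members must agree, so an even member count tolerates no more failures than the odd count below it. A minimal sketch of that arithmetic:

```python
# Quorum math behind the "odd number of master nodes" recommendation.
def quorum(n: int) -> int:
    """Minimum members etcd needs to agree (a strict majority)."""
    return n // 2 + 1

def tolerated_failures(n: int) -> int:
    """Members that can fail while a quorum survives."""
    return n - quorum(n)

for n in (1, 2, 3, 4, 5):
    print(f"{n} masters: quorum={quorum(n)}, tolerated failures={tolerated_failures(n)}")

# 3 masters tolerate 1 failure; a 4th master adds cost but no resilience.
assert tolerated_failures(3) == tolerated_failures(4) == 1
assert tolerated_failures(5) == 2
```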

High Availability

For information about high availability requirements and recommendations, see the High Availability section of Platform Architecture and Planning Overview.

Storage Capacity

PKS on vSphere supports static persistent volume provisioning and dynamic persistent volume provisioning.

For more information about storage requirements and recommendations, see PersistentVolume Storage Options on vSphere.

Security

For information about security requirements and recommendations, see the Security section of Platform Architecture and Planning Overview.