Control Plane Reference Architectures

This topic describes topologies and best practices for deploying Concourse on BOSH, and using Concourse to manage Pivotal Platform foundations. For production environments, Pivotal recommends deploying Concourse with BOSH.

Overview

Concourse is the main continuous integration and continuous delivery (CI/CD) tool that the Pivotal and open-source Cloud Foundry communities use to develop, test, deploy, and manage Pivotal Platform foundations.

The three topologies described in this document govern the network placement and relationship between two systems:

  • The control plane runs Concourse to gather sources for, integrate, update, and otherwise manage Pivotal Platform foundations. This layer may also host internal Docker registries, S3 buckets, git repositories, and other tools.

  • Each Pivotal Platform foundation runs Pivotal Platform on an instance of BOSH.

Each topology described below has been developed and validated in multiple Pivotal Platform customer and Pivotal Labs environments.

Deployment Topologies

Security policies are usually the main factor that determines which CI/CD deployment topology best suits a site’s needs. Specifically, the decision depends on what network connections are allowed between the control plane and the Pivotal Platform foundations that it manages.

The following three topologies answer a range of security needs, ordered by increasing level of security around the Pivotal Platform foundations:

  • Topology 1: Concourse server and worker VMs all colocated on control plane

  • Topology 2: Concourse server on control plane, and remote workers colocated with Pivotal Platform foundations

  • Topology 3: Multiple Concourse servers and workers colocated with Pivotal Platform foundations

Across these three topologies, the increasing level of Pivotal Platform foundation security correlates with:

  • Increasing network complexity
  • Increasing effort required for initial deployment and ongoing maintenance

Security Decision Factors

The graph below captures the decision factors dictated by network security policy, and recommends Concourse deployment topologies that adhere to those policies.

Concourse decision

Additional Decision Factors

In addition to security policy, deciding on a CI/CD deployment topology may also depend on factors such as:

  • Network latency across network zones
  • Air-gapped vs. Internet-connected environments
  • Resource limitations

The Credentials Management and Storage Services sections below include notes and recommendations on credentials management, Docker registries, S3 buckets, and git repositories, but do not cover these tools extensively.

Topology Objects

Complete deployment topologies dictate the internal placement or remote use of the following:

  • Concourse server and worker VMs, as discussed above
  • Credentials managers such as CredHub or Vault
  • Docker registries, public or private
  • S3 or other storage buckets, public or private
  • git or other code repositories, public or private

Topology 1: Concourse Server and Worker VMs All Colocated on Control Plane

This simple topology follows network security policies that allow a single control plane to connect to all Pivotal Platform foundations deployed across multiple network zones or data centers.

Concourse single plane

In this topology, the Concourse server and worker VMs, along with other tools (e.g. Docker registry, S3 buckets, Vault server), are all deployed to the same subnet.
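As an illustrative sketch of this layout, a Topology 1 BOSH deployment might place every Concourse instance group on the same control-plane network. All names below (network, release, and job names) are examples; job names vary across Concourse BOSH release versions, so verify them against your release.

```yaml
# Hypothetical excerpt from a Concourse BOSH deployment manifest for Topology 1.
# Every VM shares the control-plane subnet; names and sizes are placeholders.
instance_groups:
- name: web
  instances: 1
  networks: [{name: control-plane}]   # e.g. a dedicated 10.0.10.0/24 subnet
  jobs:
  - name: web
    release: concourse
- name: worker
  instances: 3
  networks: [{name: control-plane}]   # workers colocated with the web node
  jobs:
  - name: worker
    release: concourse
```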

Connectivity Requirements

Concourse worker VMs must be allowed to connect to:

  • The Ops Manager VM or a jumpbox VM in all of the Pivotal Platform foundations networks
  • (on vSphere) The vCenter API for each Pivotal Platform foundation

Performance Notes

Network data transfers between the control plane and each Pivotal Platform foundation network zone may carry large files such as Pivotal Platform tiles, release files, and Pivotal Platform foundation backup pipelines output.

Pivotal recommends that you test network throughput and latency between those network zones to make sure that data transfer rates do not make pipeline execution times unacceptably long.

Firewall Requirements

All Concourse worker VMs must be able to connect to certain VMs on Pivotal Platform foundation subnets across network zones or data centers.

See the Pivotal Platform CI/CD Pipelines section for the VMs, ports, and external websites required for Pivotal Platform CI/CD pipelines.

Pros

  • Simplified deployment and maintenance of the centralized control plane, which requires only one BOSH deployment and runs one BOSH Director.
  • Simplified setup and maintenance of Pivotal Platform CI/CD pipelines. All pipelines and Concourse teams use a single, centralized server.

Cons

  • You may have to configure firewall rules in each Pivotal Platform network to allow connectivity from workers in the CI/CD zone, as mentioned above.

Topology 2: Concourse Server on Control Plane, and Remote Workers Colocated with Pivotal Platform Foundations

This topology supports environments where Pivotal Platform foundation VMs cannot receive incoming traffic from IP addresses outside their network zone or data center, but they can initiate outbound connections to outside zones.

As in Topology 1, the Concourse server, and potentially other VMs for tools such as Docker registries or S3 buckets, are deployed to a dedicated subnet. The difference here is that Concourse worker VM pools are deployed inside each Pivotal Platform foundation subnet, network zone, or data center.

Concourse multi zone

Connectivity Requirements

Concourse worker VMs in each Pivotal Platform foundation network zone or data center must:

  • Connect to the Ops Manager VM or a jumpbox VM in each Pivotal Platform foundation's network
  • (on vSphere) Connect to the vCenter API for each Pivotal Platform foundation
  • Have outbound connection access to the Concourse web/ATC server on port 2222. This lets them handshake with the Concourse server to open a reverse SSH tunnel for ongoing communication between the workers and the Concourse ATC.

For more details, see Concourse Architecture.
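The outbound-only registration described above can be sketched as a remote-worker deployment excerpt. Hostnames, network names, and instance counts are hypothetical; the `worker_gateway.hosts` property appears in recent Concourse BOSH releases, but confirm the exact property name for your release version.

```yaml
# Hypothetical remote-worker excerpt for Topology 2.
# The worker dials OUT to the web/TSA node on port 2222, so no inbound
# firewall rules into the foundation network are needed for Concourse itself.
instance_groups:
- name: remote-worker
  instances: 2
  networks: [{name: foundation-a-net}]   # inside the foundation's zone
  jobs:
  - name: worker
    release: concourse
    properties:
      worker_gateway:
        hosts: ["concourse.control-plane.example.com:2222"]  # outbound SSH tunnel
```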

Performance Notes

Remote workers have to download large installation files from either the Internet or from a configured S3 artifacts repository. For Pivotal Platform backup pipelines, workers may also have to upload large backup files to the S3 repository.

Pivotal recommends that you test network throughput and latency between those network zones to make sure that data transfer rates do not make pipeline execution times unacceptably long.

Firewall Requirements

Remote worker VMs require outbound access to the Concourse web/ATC server on port 2222.

Remote worker VMs inside each Pivotal Platform foundation network zone or data center are required to connect to VMs in their colocated foundation. They may also require access to external websites or S3 repositories for downloading installation files.

See the Pivotal Platform CI/CD Pipelines section for the VMs and ports required for Pivotal Platform CI/CD pipelines.

Pros

  • Relatively simple maintenance of the centralized control plane, which contains a single Concourse server and other tools.
  • Simplified setup and maintenance of Pivotal Platform CI/CD pipelines. All pipelines and Concourse teams use a single, centralized server.

Cons

  • You have to reconfigure firewalls to grant outbound access to remote worker VMs.
  • In addition to deploying the control plane, you need an additional BOSH deployment and running BOSH Director for each Pivotal Platform foundation network zone or data center.
  • You need to manage multiple Concourse worker pools in multiple locations.

Topology 3: Multiple Concourse Servers and Workers Colocated with Pivotal Platform Foundations

This topology supports environments where Pivotal Platform foundation VMs can only be accessed from within the same network zone or data center. This scenario requires deploying complete and dedicated control planes within each deployment zone.

Concourse multi foundation

Performance Notes

Since workers run in the same network zone or data center as the Pivotal Platform foundation, data transfer throughput should not limit pipeline performance.

Firewall Requirements

  • Air-gapped environments require some way to bootstrap the S3 repository with the Docker images and Pivotal Platform release files that pipelines need from external sites.
  • Non-air-gapped environments, in which workers can download required files from external websites, need those websites to be whitelisted in the proxy or firewall setup.

Pros

  • Requires little or no firewall rule configuration for control plane VMs. In non-air-gapped environments, worker VMs download Pivotal Platform releases and Docker images for pipelines from external websites.

Cons

  • Requires deploying and maintaining multiple Concourse servers and other tools.
  • Requires deploying and maintaining a separate set of Pivotal Platform pipelines for each Concourse server.
  • For air-gapped environments, requires setting up an S3 repository for each control plane.

Deploying CI/CD to the Control Plane

There are several alternatives for deploying BOSH Directors, Concourse servers, and other tool releases to the control plane. Details on those alternatives are outside the scope of this document, but the links below cover the most common options:

  • Manual deployments
  • Automated deployments

Control Plane Deployment Best Practices

Here are some best practices for control plane components that apply across all deployment topologies:

Dedicated BOSH Director

Pivotal recommends deploying Concourse and other control plane tools on their own, dedicated BOSH layer, with their own BOSH Director that runs separately from the Pivotal Platform foundations that the control plane manages.

Pivotal does not recommend using an existing Pivotal Platform BOSH Director instance to deploy Concourse and other software (e.g. Minio S3, private Docker registry, CredHub). Sharing the same BOSH Director with Pivotal Platform deployments increases the risk of accidental or undesired updates or deletion of those deployments.

Dedicating a BOSH Director to Concourse and other control plane tools also provides greater flexibility for Pivotal Platform foundation upgrades and updates, such as stemcell patches.

Concourse dedicated bosh

Credentials Management

All credentials and other sensitive information that feeds into a Concourse pipeline should be encrypted and stored using credentials management software such as CredHub or Vault. Never store credentials as plain text in parameter files in file repositories.

Credentials Management with CredHub

Concourse integrates with CredHub to manage credentials in its pipelines. The pipelines reference encrypted secrets stored in a CredHub server and retrieve them automatically during execution of tasks.
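For example, a pipeline can reference secrets by name with Concourse's ((double-paren)) syntax; at runtime, Concourse resolves each name against CredHub, so the secret never appears in the pipeline file or the git repository. The resource, task, and secret names below are hypothetical.

```yaml
# Hypothetical pipeline excerpt: ((git_deploy_key)) and ((opsman_password))
# are fetched from CredHub at runtime, not stored in this file.
resources:
- name: foundation-config
  type: git
  source:
    uri: git@github.example.com:platform/foundation-config.git
    private_key: ((git_deploy_key))        # resolved from CredHub

jobs:
- name: apply-changes
  plan:
  - get: foundation-config
  - task: apply
    file: foundation-config/tasks/apply.yml
    params:
      OPSMAN_PASSWORD: ((opsman_password))  # resolved from CredHub
```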

To integrate Concourse with a CredHub server, you configure the Concourse ATC job's deployment properties with information about the CredHub server and the corresponding UAA authentication credentials.
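A sketch of those deployment properties on the ATC/web job might look like the following. The URL, client name, and variable names are placeholders; confirm the exact `credhub.*` property names against your Concourse BOSH release.

```yaml
# Hypothetical ATC/web job properties for CredHub integration.
properties:
  credhub:
    url: https://credhub.control-plane.example.com:8844
    tls:
      ca_cert: ((credhub_ca_cert))          # CA that signed the CredHub endpoint cert
    client_id: concourse_to_credhub         # UAA client Concourse authenticates as
    client_secret: ((credhub_client_secret))
```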

You can deploy CredHub in multiple ways: as a dedicated VM, or integrated with other VMs, such as colocating the CredHub server with the BOSH Director VM or Concourse’s ATC/web VM.

Colocating a CredHub server with Concourse’s ATC VM dedicates it to the Concourse pipelines and lets Concourse administrators manage the credentials server. This configuration also means that during Concourse upgrades, the CredHub server only goes down when the Concourse ATC job is also down, which minimizes potential credential server outages for running pipelines.

The diagram below illustrates the jobs of Concourse VMs, along with the ones for the BOSH Director VM, when a dedicated CredHub server is deployed with Concourse.

Concourse bosh jobs

The Concourse Pipelines Integration with CredHub documentation in the Pivotal Platform Pipelines repository describes how to deploy a CredHub server integrated with Concourse.

Credentials Management with Vault

For how to configure Vault to manage credentials for Concourse pipelines, see Secure credential automation with Vault and Concourse in the Pivotal Platform Pipelines repository.

Storage Services

Git Server

BOSH and Concourse implement the concepts of infrastructure-as-code and pipelines-as-code. As such, it is important to store all source code for deployments and pipelines in a version-controlled repository.

Pivotal Platform CI/CD pipelines assume that their source code is kept in git-compatible repositories. GitHub is the most popular git-compatible repository for Internet-connected environments.

GitLab, BitBucket and GOGS are examples of git servers that can be used for both connected and air-gapped environments.

A git server that contains configuration and pipeline code for a Pivotal Platform foundation needs to be accessible by the corresponding worker VMs that run CI/CD pipelines for that foundation.

S3 Repository

In all environments, Concourse requires an S3 repository to store Pivotal Platform backups and possibly other files. If your deployment requires a private, internal S3 repository but your IaaS lacks built-in options, you can use BOSH to deploy your own S3-compatible releases, such as Minio S3 or EMC Cloud Storage, to your control plane.

For air-gapped environments, an S3 repository is also the preferred place to store release files for Pivotal Platform tiles, stemcells, and buildpacks. Docker images can also be stored in an S3 repository as an alternative to a private Docker registry. See Offline Pipelines for Airgapped Environments for details.
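For instance, a pipeline's s3 resource can point at an internal S3-compatible server such as Minio instead of AWS by overriding the endpoint. The endpoint, bucket name, and file pattern below are illustrative.

```yaml
# Hypothetical s3 resource backed by an internal Minio server on the
# control plane; 'endpoint' overrides the default AWS S3 URL.
resources:
- name: pas-tile
  type: s3
  source:
    endpoint: https://minio.control-plane.example.com:9000
    bucket: pivotal-releases
    regexp: cf-(.*)\.pivotal          # versioned tile files in the bucket
    access_key_id: ((s3_access_key))
    secret_access_key: ((s3_secret_key))
```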

Private Docker Registry

For air-gapped environments, Docker images for Concourse pipelines need to be stored either in a private Docker registry or in an S3 repository. For BOSH-deployed private registry alternatives, see Docker Registry or VMware Harbor.

High Availability

For details on how to set up a load balancer to handle traffic across multiple instances of the ATC/web VM, and how to deploy multiple worker instances, see the Concourse Architecture topic.

Pivotal Platform CI/CD Pipelines

Pivotal Platform Pipelines

Pivotal Platform Automation with Concourse (Pivotal Platform Pipelines) is a collection of Concourse pipelines for installing and upgrading Pivotal Platform. See the source on GitHub or download it from Pivotal Network (sign-in required).

Warning: At time of publication, the Pivotal Platform Pipelines repository is undergoing planned deprecation.

To run Pivotal Platform Pipelines, you need:

  • Ops Manager web UI, API, and VM installed
  • (vSphere environments) vCenter API installed
  • For proxy- or Internet-connected environments, whitelist the following sites:

BOSH Backup and Restore (BBR) Pivotal Platform Pipelines

BBR Pivotal Platform Pipelines automate Pivotal Platform foundation backups. See and download the source on GitHub.

To run BBR Pivotal Platform Pipelines, you need an S3 repository for storing backup artifacts.

Note: BBR Pivotal Platform Pipelines is a Pivotal Platform community project not officially supported by Pivotal.

Pipelines Orchestration Frameworks

Pivotal Platform Pipelines Maestro provides a pipeline orchestration framework to:

  • Automate pipeline creation and management for multiple Pivotal Platform foundations
  • Promote and audit configuration changes and version upgrades across all foundations

Note: Pivotal Platform Pipelines Maestro is a Pivotal Platform community project not officially supported by Pivotal.

Concourse Team Management Best Practices

When a single Concourse server hosts CI/CD pipelines for more than one Pivotal Platform foundation, Pivotal recommends creating one Concourse team (other than main) for each foundation, and associating that team with all pipelines for that foundation, e.g. install, upgrade, backup, and metering.

Concourse pipelines

Dedicating a Concourse team to each Pivotal Platform foundation has the following benefits:

  • It avoids the clutter of pipelines in a single team. The list of pipelines for each foundation may be long, depending on how many tiles are deployed to it.

  • It avoids the risk of operators running a pipeline for the wrong foundation. When a single team hosts maintenance pipelines for multiple foundations, the clutter of dozens of pipelines may lead operators to accidentally run a pipeline (e.g. upgrade or delete tile) targeted at the wrong Pivotal Platform foundation.

  • It allows for more granular access control settings per team. Pivotal Platform pipelines for higher environments (e.g. Production) may require more restricted access control than pipelines for lower environments (e.g. Sandbox). Authentication settings for Concourse teams enable that level of control.

  • It allows workers to be assigned to pipelines of a specific foundation. Concourse deployment configuration allows for the assignment of workers to a single team. If that team contains pipelines of only one foundation, then the corresponding group of workers runs pipelines only for that foundation. This is useful when security policy requires tooling and automation for a foundation (e.g. Production) to run on specific VMs.
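The worker-to-team assignment described above can be sketched as a worker deployment property. Team, network, and instance-group names here are examples; the `team` property exists in the Concourse BOSH worker job, but verify the property name against your release version.

```yaml
# Hypothetical excerpt: these workers register only with the
# 'foundation-prod' team, so they run that foundation's pipelines exclusively.
instance_groups:
- name: prod-worker
  instances: 2
  networks: [{name: prod-foundation-net}]
  jobs:
  - name: worker
    release: concourse
    properties:
      team: foundation-prod   # restrict these workers to one Concourse team
```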

Concourse Teams for App Development Orgs and Spaces

When a Concourse server hosts pipelines for an app development team, Pivotal recommends creating a Concourse team associated with that development team, and associating Concourse team membership with org and space membership defined in the Pivotal Application Service (PAS) User Account and Authentication (UAA) server.

Associating Concourse teams with PAS org and space member lists synchronizes access between PAS and Concourse, letting developers see and operate the build pipelines for the apps they develop.