Installing PCF in Airgapped Environments


This topic provides an overview of the components and resources needed to install PCF in airgapped environments, including the automation resources that typically accompany them. It also describes an example solution that has been used in the past.

Offline Components

To run PCF offline, you must obtain resources from the internet and move them into offline components that store and use them. The method you use to move a resource from Pivotal Network or GitHub into an offline environment can vary from setting up a designated proxy to burning a DVD.
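For example, on an Internet-connected host you might download a product from Pivotal Network with the pivnet CLI before moving it offline. The following is a minimal sketch; the API token, product slug, release version, and glob are placeholders, not values from this document:

    # Sketch: download a product file from Pivotal Network on a connected host.
    # The token, product slug, release version, and glob are placeholders.
    pivnet login --api-token="$PIVNET_TOKEN"
    pivnet download-product-files \
      --product-slug=elastic-runtime \
      --release-version=2.4.0 \
      --glob='*.pivotal'
    # Then move the downloaded file offline by your approved method,
    # such as a designated proxy or removable media.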

The following image displays types of resources and the components you must move them to in your offline environment. It also includes a jumpbox. The jumpbox is a Linux host for running commands such as bosh, uaac, and fly. You could use the Ops Manager VM for this purpose.

[Diagram: Offline components]

The following list provides more detail about each resource and the component you must move it to:

  • Pivotal Network products, such as tiles, stemcells, BOSH releases, and Ops Manager: S3 and artifact store
  • Pipelines and scripts: Git
  • Configuration: Git
  • Container images: Docker registry, S3, or artifact store
  • Third-party resources, such as NSX-T and OSS BOSH releases: Artifact store
  • Backup artifacts: S3

Architectural Patterns

The following sections describe three architectural patterns for running PCF in airgapped environments. The pattern you use depends on how your environment is set up.

Pattern One: Artifact Store with Internet Access

It is common for organizations to deploy an artifact store such as Artifactory or Nexus as a gateway to the Internet. In this configuration, the artifact store is typically configured as a Docker mirror.

This pattern includes an S3 component that is used only for backups of the PCF installation.

[Diagram: Artifact store with Internet access]

The following list provides an example of how you might handle resources in this scenario:

  • Pivotal Network products, such as tiles, stemcells, BOSH releases, and Ops Manager: This pattern presents the challenge of getting Pivotal Network resources from the Internet into the artifact store. For example, Artifactory and Nexus cannot interact directly with Pivotal Network. You can place Pivotal Network resources in the artifact store through a remote Concourse worker with Internet access, by downloading them from the Internet and placing them in the artifact store manually, or by some other method.
  • Pipelines and scripts: Store in Git. Clone and push manually. Modify pipelines to use registry_mirror, as in the sketch after this list.
  • Configuration: Store in Git. Clone and push manually.
  • Container images: Store in the artifact store that you configured as a mirror.
  • Third-party resources, such as NSX-T and OSS BOSH releases: Store in a generic artifact store repository.
  • Backup artifacts: Store in an S3 blobstore.
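For illustration, modifying a pipeline to pull images through the mirror might look like the following sketch. The Concourse docker-image resource accepts a registry_mirror source property; the mirror URL, resource name, and repository here are placeholders:

    # Sketch: point a docker-image resource at the internal mirror.
    # The mirror URL and repository name are placeholders.
    cat > docker-image-resource.yml <<'EOF'
    resources:
    - name: czero-cflinuxfs2
      type: docker-image
      source:
        repository: czero/cflinuxfs2
        registry_mirror: https://artifactory.example.com
    EOF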

Pattern Two: Completely Offline

In completely offline environments, you must bring in all resources manually and store them in the available local components.

[Diagram: Completely offline]

In this architectural pattern, you must configure pipelines to watch components for changes and apply updates when available. The following list provides an example of how you might handle resources in this scenario:

  • Pivotal Network products, such as tiles, stemcells, BOSH releases, and Ops Manager: Manually upload to Nexus.
  • Pipelines and scripts: Modify pipelines to use a local Harbor Docker registry. Manually clone from an online environment, bring the clone to the offline environment on DVD, and push it to the offline Git server.
  • Configuration: Store in your offline Git server.
  • Container images: Get from the Internet, transfer by USB drive, and push to the offline Harbor registry, as in the sketch after this list.
  • Third-party resources, such as NSX-T and OSS BOSH releases: Store in a generic artifact store repository.
  • Backup artifacts: Store in an S3 blobstore.
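For example, moving one container image offline with docker save and docker load might look like the following sketch; the image name, mount path, and Harbor address are placeholders:

    # On the connected host: pull the image and write it to removable media.
    docker pull czero/cflinuxfs2:latest
    docker save -o /mnt/usb/cflinuxfs2.tar czero/cflinuxfs2:latest

    # On the offline host: load the image and push it to the local Harbor registry.
    docker load -i /mnt/usb/cflinuxfs2.tar
    docker tag czero/cflinuxfs2:latest harbor.example.com/library/cflinuxfs2:latest
    docker push harbor.example.com/library/cflinuxfs2:latest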

Pattern Three: TLS Intercepting Proxy

This pattern allows Internet access, but only through a proxy that intercepts TLS connections, decrypting traffic and re-signing it with its own certificate. In general, this resembles an online installation; however, it requires that you add your corporate certificate to all BOSH-deployed VMs.

[Diagram: TLS intercepting proxy]

Some of the challenges in this pattern include the following:

  • Any automation that calls out to the Internet fails if it does not have the corporate certificate in its trust store.
  • Concourse tasks that do not use resources do not receive updated BOSH root certificates. This means you must configure tasks to ignore TLS errors or update the root certificates as part of the tasks, as in the sketch after this list.
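For BOSH-deployed VMs, you typically add the corporate certificate through Ops Manager's trusted certificates setting. For an Ubuntu-based task container or jumpbox, a manual equivalent might resemble the following sketch; the certificate file name is a placeholder:

    # Sketch: trust a corporate CA on an Ubuntu-based host or task container.
    # corp-ca.crt is a placeholder for your corporate certificate.
    sudo cp corp-ca.crt /usr/local/share/ca-certificates/corp-ca.crt
    sudo update-ca-certificates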

Example Installation in Airgapped Environment

In this example solution, an operator uses Concourse as the control plane, along with specialized PCF Pipelines, to deploy PCF to computer networks that are physically isolated from the Internet.

Warning: At the time of publication, the PCF Pipelines repository is undergoing planned deprecation. Use this document only as a model and reference when implementing your own solution.

The following diagram presents a high-level overview of this solution:

[Diagram: Offline environment]

This implementation runs in two environments, one airgapped and one Internet-connected. Each environment has its own Concourse server and pipeline. The airgapped environment bootstraps from an internal S3 repository populated by contents moved over as tar files from the Internet-connected environment.

Requirements

This solution requires the following:

  • An Internet-connected environment with access to DockerHub and PivNet.
  • Concourse v3.3.3 or later, installed in both the Internet-connected and airgapped environments.
  • A path from the Internet-connected environment to the airgapped environment. All artifacts, such as release files, Docker images, and scripts, must be downloaded and packaged within an Internet-connected network, then transferred to the airgapped environment.
  • An offline S3-compatible blobstore. The airgapped Concourse pipeline retrieves artifacts from a Concourse S3 Resource. There are many S3-compatible blobstores that you can use within airgapped environments, such as Minio and Dell EMC Elastic Cloud Storage. See the sketch after this list.
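For example, a standalone Minio server can provide the S3-compatible endpoint inside the airgapped network. The following is a minimal sketch, assuming the minio and mc binaries have already been transferred offline; the credentials and bucket names are placeholders, and the exact mc and environment variable syntax varies by Minio version:

    # Start a standalone Minio server backed by a local directory.
    # Credentials are placeholders; set real secrets in practice.
    export MINIO_ACCESS_KEY=admin MINIO_SECRET_KEY=changeme
    minio server /data &

    # Register the server with the mc client and create the buckets
    # the pipelines expect. Bucket names are placeholders.
    mc config host add offline http://127.0.0.1:9000 admin changeme
    mc mb offline/pcf-pipelines-combined
    mc mb offline/czero-cflinuxfs2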

Bootstrap the Pipelines

As illustrated in the diagram below, two pipelines help bootstrap the offline environment: create-offline-pinned-pipelines and unpack-pcf-pipelines-combined. These pipelines facilitate the physical transfer of artifacts to the airgapped environment.

[Diagram: Full offline pipeline flow]

Pipeline Execution Flow

  1. Download artifacts from external sources, then sign, package, and upload them to the S3 repository.
  2. Move the packaged artifacts to the disconnected environment's S3 repository, either manually or through automation; see the sketch after this list.
  3. Unpack the artifacts, verify their signatures, and set up the offline pcf-pipelines with the new artifacts.
  4. pcf-pipelines triggers when new artifacts appear in the S3 repository.
  5. pcf-pipelines deploys PCF in the disconnected environment.
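On the offline side, step 2 might look like the following sketch once the tarball has arrived on removable media; the mc alias, mount path, and file name are placeholders:

    # Sketch: upload the transferred tarball to the offline S3-compatible store.
    # The alias, path, and file name are placeholders.
    mc cp /mnt/usb/pcf-pipelines-combined.tgz offline/pcf-pipelines-combined/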

Create Offline Pinned Pipelines

The create-offline-pinned-pipelines pipeline does the following:

  • Pulls all required resources (images, products, and pipelines) from their locations on the Internet and packages them.
  • Transforms pcf-pipelines to consume the resources from the S3-compatible blobstore.
  • Creates an encrypted tarball with all resources, and a shasum manifest for each resource. See the sketch after this list.
  • Puts the tarball in a location within S3 storage to be transferred to the airgapped environment.
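For illustration, the manifest and encryption steps might resemble the following sketch. The file names are placeholders, and symmetric GPG encryption is an assumption; the actual pipeline may use key pairs:

    # Sketch: create a shasum manifest for the artifact files.
    (cd artifacts && sha256sum * > manifest.sha256)
    # Bundle everything into one tarball.
    tar -czf pcf-pipelines-combined.tgz -C artifacts .
    # Encrypt the tarball with GPG. Symmetric encryption is an
    # illustrative assumption, not the pipeline's documented scheme.
    gpg --symmetric --cipher-algo AES256 \
      --output pcf-pipelines-combined.tgz.gpg pcf-pipelines-combined.tgz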

To download this pipeline, see create-offline-pinned-pipelines.

The following screenshot shows how this pipeline appears in the Concourse dashboard.

[Screenshot: create-offline-pinned-pipelines in the Concourse dashboard]

Unpack PCF Pipelines

The unpack-pcf-pipelines-combined pipeline does the following:

  • Downloads, decrypts, and extracts the GPG-encrypted tarball into its components after it has been transferred to the pcf-pipelines-combined/ path in the S3-compatible store.
  • Verifies the shasum manifest of the tarball contents. See the sketch after this list.
  • Puts the tarball parts into their appropriate locations within the airgapped S3 storage for use by the pipelines.
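The decrypt-and-verify steps might resemble this sketch, under the same illustrative assumptions as the packaging sketch above:

    # Sketch: decrypt the tarball, extract it, and verify the shasum manifest.
    gpg --output pcf-pipelines-combined.tgz --decrypt pcf-pipelines-combined.tgz.gpg
    mkdir -p artifacts
    tar -xzf pcf-pipelines-combined.tgz -C artifacts
    (cd artifacts && sha256sum -c manifest.sha256)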

To download this pipeline, see unpack-pcf-pipelines-combined.

The following screenshot shows how this pipeline appears in the Concourse dashboard.

[Screenshot: unpack-pcf-pipelines-combined in the Concourse dashboard]

From this point, the pcf-pipelines folder in the configured S3 bucket in the airgapped environment contains the pcf-pipelines tarball. You can then use it to set a pipeline on the airgapped Concourse, in the same fashion as a standard pcf-pipelines setup.
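Setting a pipeline then follows the standard fly workflow. A minimal sketch; the target name, Concourse URL, pipeline name, and file paths are placeholders:

    # Sketch: log in to the airgapped Concourse and set a pipeline
    # from the extracted pcf-pipelines tarball. Paths are placeholders.
    fly --target offline login --concourse-url https://concourse.example.internal
    fly --target offline set-pipeline \
      --pipeline upgrade-tile \
      --config pcf-pipelines/upgrade-tile/pipeline.yml \
      --load-vars-from params.yml
    fly --target offline unpause-pipeline --pipeline upgrade-tile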

Bootstrapping Requirements

For the unpack-pcf-pipelines-combined pipeline to work, there must be a single manual transfer of the czero-cflinuxfs2 tarball to the czero-cflinuxfs2 folder within the airgapped environment's S3 storage. Only after that is done can you set and unpause the unpack-pcf-pipelines-combined pipeline.
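This one-time transfer might look like the following sketch, reusing the mc alias from the earlier blobstore example; the mount path and file name are placeholders:

    # Sketch: one-time manual seed of the czero-cflinuxfs2 tarball.
    mc cp /mnt/usb/czero-cflinuxfs2.tar offline/czero-cflinuxfs2/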

Install in the China Region

Question: I am a customer headquartered in the US with another location in China, but I am experiencing problems downloading Pivotal software directly to my China location. What should I do?

Disclaimer: This question has legal implications, and Pivotal cannot provide legal advice to anyone about their specific obligations under any applicable laws and regulations in China or elsewhere (including, but not limited to, the laws and regulations on import/export and cyber security). You should seek legal guidance regarding your own import/export compliance under all the applicable laws or other obligations regarding this issue. The following information is provided to you for your reference only. Pivotal expressly disclaims all liabilities in respect to actions taken or not taken based on the following information.

Certain customers have used the solution described above for the following use case, which may or may not address your issue. A customer headquartered in the United States (U.S.) has another location in China, each with its own environments and systems. The customer downloads our software from Pivotal Network (PivNet) servers in the U.S. A customer employee then copies the software from the customer's U.S. environment to the customer's China environment. The China location then uses that locally available version of the software, instead of downloading it directly from PivNet to the China location.
