Configuring File Storage for PAS

This topic provides instructions for configuring file storage for PAS based on your IaaS and installation method. See the section that applies to your use case.

To minimize system downtime, Pivotal recommends using highly resilient and redundant external filestores for your Pivotal Application Service (PAS) file storage. For more factors to consider when selecting file storage, see the Considerations for Selecting File Storage in Pivotal Cloud Foundry topic.

Internal File Storage

Internal file storage is only appropriate for small, non-production deployments.

To use the PCF internal filestore, select Internal WebDAV, and click Save.

AWS

This section describes how to configure file storage for AWS.

Note: If you followed the instructions in Installing PCF on AWS Manually, you created the necessary resources for external S3-compatible file storage.

For production-level PCF deployments on AWS, Pivotal recommends selecting the External S3-Compatible File Store. For more information about production-level PCF deployments on AWS, see the Reference Architecture for Pivotal Cloud Foundry on AWS.

To use an external S3-compatible filestore for PAS file storage, perform the following steps:

  1. Select the External S3-Compatible Filestore option and complete the following fields:

    • Enter the https:// URL Endpoint for your region.
      For example, in the us-west-2 region, enter https://s3-us-west-2.amazonaws.com/.
    • If you use an AWS instance profile to manage role information for your filestore, select the S3 AWS with Instance Profile checkbox. For more information, see AWS Identity and Access Management in the AWS documentation.
    • Alternatively, enter the Access Key and Secret Key of the pcf-user you created when configuring AWS for PCF. If you select the S3 AWS with Instance Profile checkbox and also enter an Access Key and Secret Key, the instance profile takes precedence over the Access Key and Secret Key.
    • From the S3 Signature Version dropdown, select V4 Signature. For more information about V4 signatures, see Signing AWS API Requests in the AWS documentation.
    • For Region, enter the region in which your S3 buckets are located. us-west-2 is an example of an acceptable value for this field.
    • Select Server-side Encryption to encrypt the contents of your S3 filestore. This option is only available for AWS S3.
    • (Optional) If you selected Server-side Encryption, you can also specify a KMS Key ID. PAS uses the KMS key to encrypt files uploaded to the blobstore. If you do not provide a KMS Key ID, PAS uses the default AWS key. For more information, see Protecting Data Using Server-Side Encryption with AWS KMS–Managed Keys (SSE-KMS).
    • Enter names for your S3 buckets:

      • Buildpacks Bucket Name: pcf-buildpacks-bucket. This S3 bucket stores app buildpacks.
      • Droplets Bucket Name: pcf-droplets-bucket. This S3 bucket stores app droplets. Pivotal recommends that you use a unique bucket name for droplets, but you can also use the same name as above.
      • Packages Bucket Name: pcf-packages-bucket. This S3 bucket stores app packages. Pivotal recommends that you use a unique bucket name for packages, but you can also use the same name as above.
      • Resources Bucket Name: pcf-resources-bucket. This S3 bucket stores app resources. Pivotal recommends that you use a unique bucket name for app resources, but you can also use the same name as above.
    • Configure the following depending on whether your S3 buckets have versioning enabled:

      • Versioned S3 buckets: Enable the Use versioning for backup and restore checkbox to archive each bucket to a version.
      • Unversioned S3 buckets: Disable the Use versioning for backup and restore checkbox, and enter a backup bucket name for each active bucket. The backup bucket name must be different from the name of the active bucket it backs up. For more information about setting up external S3 blobstores, see the Backup and Restore with External Blobstores topic in the Cloud Foundry documentation.
  2. Click Save.
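The V4 Signature selection above matters because Signature Version 4 does not sign requests with your Secret Key directly; it first derives a scoped signing key from the key, the date, the region, and the service. The following sketch shows that derivation chain using only the Python standard library. The function name and the credential values are illustrative, not part of any PCF tooling.

```python
import hashlib
import hmac


def sigv4_signing_key(secret_key: str, date_stamp: str, region: str,
                      service: str = "s3") -> bytes:
    """Derive an AWS Signature Version 4 signing key.

    Chain: kDate -> kRegion -> kService -> kSigning, as described in
    "Signing AWS API Requests" in the AWS documentation.
    """
    def _hmac_sha256(key: bytes, msg: str) -> bytes:
        return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

    k_date = _hmac_sha256(("AWS4" + secret_key).encode("utf-8"), date_stamp)
    k_region = _hmac_sha256(k_date, region)
    k_service = _hmac_sha256(k_region, service)
    return _hmac_sha256(k_service, "aws4_request")


# Illustrative placeholder values; never hard-code real credentials.
signing_key = sigv4_signing_key("EXAMPLE-SECRET-KEY", "20240101", "us-west-2")
```

Because the derived key is scoped to a date and region, a V4-signed request against the wrong Region value fails authentication, which is one reason the Region field above must match the location of your buckets.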

Note: For more information regarding AWS S3 Signatures, see the Authenticating Requests topic in the AWS documentation.

Azure

This section describes how to configure file storage for Azure.

For production-level PCF deployments on Azure, the recommended selection is Azure Storage. For more information about production-level PCF deployments on Azure, see the Reference Architecture for Pivotal Cloud Foundry on Azure.

For more factors to consider when selecting file storage, see Considerations for Selecting File Storage in Pivotal Cloud Foundry.

To use external Azure file storage for your Pivotal Application Service (PAS) filestore, perform the following steps:

  1. Select the External Azure Storage option.
  2. To create a new storage account and storage containers for the PAS filestore, perform the following steps.
    • In the Azure Portal, navigate to the Storage accounts tab.
    • Click on the plus icon to add a new storage account.
    • In the Name field, enter a unique name (all lowercase, 3 to 24 alphanumeric characters) for the storage account.
    • For the Deployment model, select Resource manager.
    • For Account kind, select General purpose.
    • For Performance, select Standard.
    • From the Replication dropdown, select Locally-redundant storage (LRS).
    • For Storage service encryption, select Disabled.
    • From the Subscription dropdown, select the subscription where you want to deploy PCF resources.
    • For Resource group, select Use existing and enter the name of the resource group where you deployed PAS.
    • From the Location dropdown, select the Location where you are deploying PCF.
    • Click Create.
    • After the storage account is created, select the new storage account from the dashboard.
    • Navigate to the Blob Service section of the storage account, and then click on Containers to create one or more containers in this storage account for buildpacks, droplets, resources, and packages.
    • In Blob Service, select Soft Delete. In Soft Delete, click Enabled to enable soft delete in your Azure storage account.

      Note: BBR requires that you enable soft delete in your Azure storage account before you enable backup and restore for your Azure blobstores in Ops Manager. You should set a reasonable retention policy to minimize storage costs. For more information on enabling soft delete in your Azure storage account, see the Azure documentation.

    • For each container that you create, set the Access type to Private.
  3. In PAS, enter the name of the storage account you created for Account Name.
  4. In the Access Key field, enter one of the access keys provided for the storage account. To obtain a value for this field, visit the Azure Portal, navigate to the Storage accounts tab, and click Access keys.
  5. For Environment, enter the name of the Azure Cloud environment that contains your storage. This value defaults to AzureCloud.
  6. For the Buildpacks Container Name, enter the container name for storing your app buildpacks.
  7. For Droplets Container Name, enter the container name for your app droplet storage. Pivotal recommends that you use a unique container name, but you can use the same container name as the previous step.
  8. For Packages Container Name, enter the container name for packages. Pivotal recommends that you use a unique container name, but you can use the same container name as the previous step.
  9. For Resources Container Name, enter the container name for resources. Pivotal recommends that you use a unique container name, but you can use the same container name as the previous step.
  10. (Optional) To enable backup and restore for your Azure blobstores in PAS, select the Enable backup and restore (Soft deletes must be enabled for all storage containers listed above) checkbox.
  11. (Optional) To enable PAS to restore your containers to a different Azure storage account than the account where you take backups, do the following:
    1. Under Restore from Storage Account, enter the name of the Azure storage account you want to restore your containers from. Leave this field blank if restoring to the same storage account where you take backups.
    2. Under Restore using Access Key, enter the access key for the Azure storage account you specified in Restore from Storage Account. Leave this field blank if restoring to the same storage account where you take backups.
  12. Click Save.
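The storage account name rules in step 2 above (all lowercase, 3 to 24 alphanumeric characters) are easy to get wrong, and the Azure Portal rejects invalid names only after you submit. A small check like the following sketch can validate candidate names first; the function name is illustrative.

```python
import re

# Azure storage account names must be 3-24 characters long and
# contain only lowercase letters and digits (see step 2 above).
_ACCOUNT_NAME = re.compile(r"^[a-z0-9]{3,24}$")


def is_valid_storage_account_name(name: str) -> bool:
    """Return True if the name satisfies Azure's storage account rules."""
    return _ACCOUNT_NAME.match(name) is not None
```

For example, `is_valid_storage_account_name("pasfilestore")` passes, while a name with hyphens, uppercase letters, or fewer than three characters fails.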

Note: To enable backup and restore of a PAS tile that uses an S3-compatible blobstore, see Enable External Blobstore Backups.

GCP

This section describes how to configure file storage for GCP. Follow the instructions that correspond to your installation method: Manual or Terraform.

For production-level PCF deployments on GCP, Pivotal recommends selecting External Google Cloud Storage. For more information about production-level PCF deployments on GCP, see the Reference Architecture for Pivotal Cloud Foundry on GCP.

Manual

This section describes how to configure file storage for GCP if you installed PCF manually.

PAS can use Google Cloud Storage (GCS) as its external filestore by using either a GCP interoperable storage access key or your GCS Service Account. Choose the procedure that corresponds to your use case.

Note: The Pivotal Application Service for Windows (PASW) tile is incompatible with GCP configured with a GCS file store. If you are deploying PASW in your GCP environment, Pivotal recommends that you select the S3-compatible file store for your environment.

External Google Cloud Storage with Access Key and Secret Key

  1. Select the External Google Cloud Storage with Access Key and Secret Key option.
  2. Enter values for Access Key and Secret Key. To obtain the values for these fields:
    1. In the GCP Console, navigate to the Storage tab, then click Settings.
    2. Click Interoperability.
    3. If necessary, click Enable interoperability access. If interoperability access is already enabled, confirm that the default project matches the project where you are installing PCF.
    4. Click Create a new key.
    5. Copy and paste the generated values into the corresponding PAS fields. PCF uses these values for authentication when connecting to Google Cloud Storage.
    6. Enter the names of the storage buckets you created in Preparing to Deploy Ops Manager on GCP Manually:
      • Buildpacks Bucket Name: MY-PCF-buildpacks
      • Droplets Bucket Name: MY-PCF-droplets
      • Resources Bucket Name: MY-PCF-resources
      • Packages Bucket Name: MY-PCF-packages
    7. Click Save.

External Google Cloud Storage with Service Account

  1. Select the External Google Cloud Storage with Service Account option.
    1. For GCP Project ID, enter the Project ID on your GCP Console that you want to use for your PAS file storage.
    2. For GCP Service Account Email, enter the email address associated with your GCP account.
    3. For GCP Service Account JSON, enter the account key that you use to access the specified GCP project, in JSON format.
    4. Enter the names of the storage buckets you created in Preparing to Deploy Ops Manager on GCP Manually:
      • Buildpacks Bucket Name: MY-PCF-buildpacks
      • Droplets Bucket Name: MY-PCF-droplets
      • Resources Bucket Name: MY-PCF-resources
      • Packages Bucket Name: MY-PCF-packages
    5. Click Save.
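Before pasting a key into the GCP Service Account JSON field, it can help to confirm that the text is valid JSON and carries the fields a standard GCP service account key file contains. The helper below is a sketch for that sanity check; its name is an assumption for illustration, and the field list reflects the standard service account key format.

```python
import json

# Fields present in a standard GCP service account key file.
REQUIRED_FIELDS = {"type", "project_id", "private_key", "client_email"}


def missing_service_account_fields(raw_json: str) -> set:
    """Parse a service account key and return any missing required fields.

    An empty set means the key looks structurally complete.
    Raises ValueError if the key is not a service account key at all.
    """
    data = json.loads(raw_json)
    if data.get("type") != "service_account":
        raise ValueError("not a service account key (type != 'service_account')")
    return REQUIRED_FIELDS - data.keys()
```

A structurally incomplete key (for example, one missing `private_key`) is reported before you save the tile configuration and trigger a failed deploy.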

Terraform

This section describes how to configure file storage for GCP if you installed PCF with Terraform.

PAS can use Google Cloud Storage (GCS) as its external filestore by using either a GCP interoperable storage access key or your GCS Service Account. Choose the procedure that corresponds to your use case.

Note: The Pivotal Application Service for Windows (PASW) tile is incompatible with GCP configured with a GCS file store. If you are deploying PASW in your GCP environment, Pivotal recommends that you select the S3-compatible file store for your environment.

External Google Cloud Storage with Access Key and Secret Key

  1. Select the External Google Cloud Storage with Access Key and Secret Key option.
  2. Enter values for Access Key and Secret Key. To obtain the values for these fields:
    1. In the GCP Console, navigate to the Storage tab, then click Settings.
    2. Click Interoperability.
    3. If necessary, click Enable interoperability access. If interoperability access is already enabled, confirm that the default project matches the project where you are installing PCF.
    4. Click Create a new key.
    5. Copy and paste the generated values into the corresponding PAS fields. PCF uses these values for authentication when connecting to Google Cloud Storage.
    6. Enter the names of the storage buckets you created in Deploying Ops Manager on GCP Using Terraform:
      • Buildpacks Bucket Name: Enter the value of buildpacks_bucket from your Terraform output.
      • Droplets Bucket Name: Enter the value of droplets_bucket from your Terraform output.
      • Resources Bucket Name: Enter the value of resources_bucket from your Terraform output.
      • Packages Bucket Name: Enter the value of packages_bucket from your Terraform output.
    7. Click Save.
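Rather than copying the four bucket names out of your Terraform output by hand, you can read them from the JSON that `terraform output -json` prints (each output appears as an object with a `value` key). The parsing sketch below assumes the output names shown in the steps above; feed it the captured command output.

```python
import json

# Terraform output names assumed from the steps above.
BUCKET_OUTPUTS = (
    "buildpacks_bucket",
    "droplets_bucket",
    "resources_bucket",
    "packages_bucket",
)


def pas_bucket_names(terraform_output_json: str) -> dict:
    """Extract PAS bucket names from `terraform output -json` text."""
    outputs = json.loads(terraform_output_json)
    return {name: outputs[name]["value"] for name in BUCKET_OUTPUTS}
```

For example, run `terraform output -json > outputs.json` and pass the file's contents to `pas_bucket_names` to get a field-to-bucket mapping you can transcribe into Ops Manager.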

External Google Cloud Storage with Service Account

  1. Select the External Google Cloud Storage with Service Account option.
    1. For GCP Project ID, enter the Project ID on your GCP Console that you want to use for your PAS file storage.
    2. For GCP Service Account Email, enter the email address associated with your GCP account.
    3. For GCP Service Account JSON, enter the account key that you use to access the specified GCP project, in JSON format.
    4. Enter the names of the storage buckets you created in Deploying Ops Manager on GCP Using Terraform:
      • Buildpacks Bucket Name: Enter the value of buildpacks_bucket from your Terraform output.
      • Droplets Bucket Name: Enter the value of droplets_bucket from your Terraform output.
      • Resources Bucket Name: Enter the value of resources_bucket from your Terraform output.
      • Packages Bucket Name: Enter the value of packages_bucket from your Terraform output.
    5. Click Save.

OpenStack

For production-level PCF deployments on OpenStack, the recommended selection is External S3-Compatible.

For more factors to consider when selecting file storage, see Considerations for Selecting File Storage in Pivotal Cloud Foundry.

To use an external S3-compatible filestore for PAS file storage, perform the following steps:

  1. Select the External S3-Compatible Filestore option and complete the following fields:

    • Enter the https:// URL Endpoint for your region.
      For example, in the us-west-2 region, enter https://s3-us-west-2.amazonaws.com/.
    • If you use an AWS instance profile to manage role information for your filestore, select the S3 AWS with Instance Profile checkbox. For more information, see AWS Identity and Access Management in the AWS documentation.
    • Alternatively, enter the Access Key and Secret Key of the pcf-user you created when configuring AWS for PCF. If you select the S3 AWS with Instance Profile checkbox and also enter an Access Key and Secret Key, the instance profile takes precedence over the Access Key and Secret Key.
    • From the S3 Signature Version dropdown, select V4 Signature. For more information about V4 signatures, see Signing AWS API Requests in the AWS documentation.
    • For Region, enter the region in which your S3 buckets are located. us-west-2 is an example of an acceptable value for this field.
    • Select Server-side Encryption to encrypt the contents of your S3 filestore. This option is only available for AWS S3.
    • (Optional) If you selected Server-side Encryption, you can also specify a KMS Key ID. PAS uses the KMS key to encrypt files uploaded to the blobstore. If you do not provide a KMS Key ID, PAS uses the default AWS key. For more information, see Protecting Data Using Server-Side Encryption with AWS KMS–Managed Keys (SSE-KMS).
    • Enter names for your S3 buckets:

      • Buildpacks Bucket Name: pcf-buildpacks-bucket. This S3 bucket stores app buildpacks.
      • Droplets Bucket Name: pcf-droplets-bucket. This S3 bucket stores app droplets. Pivotal recommends that you use a unique bucket name for droplets, but you can also use the same name as above.
      • Packages Bucket Name: pcf-packages-bucket. This S3 bucket stores app packages. Pivotal recommends that you use a unique bucket name for packages, but you can also use the same name as above.
      • Resources Bucket Name: pcf-resources-bucket. This S3 bucket stores app resources. Pivotal recommends that you use a unique bucket name for app resources, but you can also use the same name as above.
    • Configure the following depending on whether your S3 buckets have versioning enabled:

      • Versioned S3 buckets: Enable the Use versioning for backup and restore checkbox to archive each bucket to a version.
      • Unversioned S3 buckets: Disable the Use versioning for backup and restore checkbox, and enter a backup bucket name for each active bucket. The backup bucket name must be different from the name of the active bucket it backs up. For more information about setting up external S3 blobstores, see the Backup and Restore with External Blobstores topic in the Cloud Foundry documentation.
  2. Click Save.
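As the first step above notes, the URL Endpoint must start with https://. A quick check like this sketch can catch a missing scheme or stray whitespace before you save the tile configuration; the function name is illustrative.

```python
from urllib.parse import urlsplit


def check_endpoint_url(url: str) -> str:
    """Validate an S3-compatible endpoint URL and return it normalized.

    Raises ValueError for a non-https scheme or a missing host.
    """
    url = url.strip()
    parts = urlsplit(url)
    if parts.scheme != "https":
        raise ValueError(f"endpoint must use https://, got scheme {parts.scheme!r}")
    if not parts.netloc:
        raise ValueError("endpoint URL has no host")
    return url
```

For example, `check_endpoint_url("https://s3-us-west-2.amazonaws.com/")` returns the URL unchanged, while an `http://` endpoint or a bare hostname raises an error.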

Note: For more information regarding AWS S3 Signatures, see the Authenticating Requests topic in the AWS documentation.

vSphere

For production-level PCF deployments on vSphere, the recommended selection is External S3-Compatible and the use of an external filestore. For more information about production-level PCF deployments on vSphere, see the Reference Architecture for Pivotal Cloud Foundry on vSphere.

For more factors to consider when selecting file storage, see Considerations for Selecting File Storage in Pivotal Cloud Foundry.

To use an external S3-compatible filestore for PAS file storage, perform the following steps:

  1. Select the External S3-Compatible Filestore option and complete the following fields:

    • Enter the https:// URL Endpoint for your region.
      For example, in the us-west-2 region, enter https://s3-us-west-2.amazonaws.com/.
    • If you use an AWS instance profile to manage role information for your filestore, select the S3 AWS with Instance Profile checkbox. For more information, see AWS Identity and Access Management in the AWS documentation.
    • Alternatively, enter the Access Key and Secret Key of the pcf-user you created when configuring AWS for PCF. If you select the S3 AWS with Instance Profile checkbox and also enter an Access Key and Secret Key, the instance profile takes precedence over the Access Key and Secret Key.
    • From the S3 Signature Version dropdown, select V4 Signature. For more information about V4 signatures, see Signing AWS API Requests in the AWS documentation.
    • For Region, enter the region in which your S3 buckets are located. us-west-2 is an example of an acceptable value for this field.
    • Select Server-side Encryption to encrypt the contents of your S3 filestore. This option is only available for AWS S3.
    • (Optional) If you selected Server-side Encryption, you can also specify a KMS Key ID. PAS uses the KMS key to encrypt files uploaded to the blobstore. If you do not provide a KMS Key ID, PAS uses the default AWS key. For more information, see Protecting Data Using Server-Side Encryption with AWS KMS–Managed Keys (SSE-KMS).
    • Enter names for your S3 buckets:

      • Buildpacks Bucket Name: pcf-buildpacks-bucket. This S3 bucket stores app buildpacks.
      • Droplets Bucket Name: pcf-droplets-bucket. This S3 bucket stores app droplets. Pivotal recommends that you use a unique bucket name for droplets, but you can also use the same name as above.
      • Packages Bucket Name: pcf-packages-bucket. This S3 bucket stores app packages. Pivotal recommends that you use a unique bucket name for packages, but you can also use the same name as above.
      • Resources Bucket Name: pcf-resources-bucket. This S3 bucket stores app resources. Pivotal recommends that you use a unique bucket name for app resources, but you can also use the same name as above.
    • Configure the following depending on whether your S3 buckets have versioning enabled:

      • Versioned S3 buckets: Enable the Use versioning for backup and restore checkbox to archive each bucket to a version.
      • Unversioned S3 buckets: Disable the Use versioning for backup and restore checkbox, and enter a backup bucket name for each active bucket. The backup bucket name must be different from the name of the active bucket it backs up. For more information about setting up external S3 blobstores, see the Backup and Restore with External Blobstores topic in the Cloud Foundry documentation.
  2. Click Save.

Note: For more information regarding AWS S3 Signatures, see the Authenticating Requests topic in the AWS documentation.