Configuring File Storage for PAS


Warning: Pivotal Application Service (PAS) v2.8 is no longer supported because it has reached the End of General Support (EOGS) phase as defined by the Support Lifecycle Policy. To stay up to date with the latest software and security updates, upgrade to a supported version.

This topic provides instructions for configuring file storage for Pivotal Application Service (PAS) based on your IaaS and installation method. See the section that applies to your use case.

To minimize system downtime, Pivotal recommends using highly resilient and redundant external filestores for your PAS file storage. For more factors to consider when selecting file storage, see Configure File Storage in Configuring PAS for Upgrades.

Note: After initial installation, do not change file storage configuration without first migrating existing files to the new provider.

Internal File Storage

Internal file storage is only appropriate for small, non-production deployments.

To use the PAS internal filestore:

  1. Select Internal WebDAV.

  2. Click Save.

AWS

This section describes how to configure file storage for AWS.

Note: If you followed the procedure in Preparing to Deploy Ops Manager on AWS, you created the necessary resources for external S3-compatible file storage.

For production-level Pivotal Platform deployments on AWS, Pivotal recommends selecting External S3-compatible filestore. For instructions, see External S3-Compatible Filestore below.

You can also configure Fog blobstores to use AWS IAM instance profiles. For instructions, see Fog with AWS IAM Instance Profiles below.

For more information about production-level Pivotal Platform deployments on AWS, see AWS Reference Architecture.

External S3-Compatible Filestore

To use an external S3-compatible filestore for PAS file storage:

  1. Select External S3-compatible filestore.

  2. For URL endpoint, enter the https:// URL endpoint for your region. For example, https://s3.us-west-2.amazonaws.com/.

  3. Enter the Access key and Secret key of the pcf-user you created when configuring AWS for Pivotal Platform.

  4. (Optional) If your PAS deployment is on AWS, you can alternatively enable the S3 AWS with instance profile checkbox to authenticate with an AWS IAM instance profile instead of an access key and secret key. If you enable the S3 AWS with instance profile checkbox and also enter an Access key and Secret key, the instance profile overrules the access key and secret key.

  5. From the S3 signature version dropdown, select V4 signature. For more information about S3 signatures, see Signing AWS API Requests in the AWS documentation.

  6. For Region, enter the region in which your S3 buckets are located. For example, us-west-2.

  7. To encrypt the contents of your S3 filestore, select Server-side encryption. This option is only available for AWS S3.

  8. (Optional) If you selected Server-side encryption, you can also specify a KMS key ID. PAS uses the KMS key to encrypt files uploaded to the blobstore. If you do not provide a KMS Key ID, PAS uses the default AWS key. For more information, see Protecting Data Using Server-Side Encryption with AWS KMS–Managed Keys (SSE-KMS) in the AWS documentation.

  9. Enter names for your S3 buckets:

    Ops Manager Field | Value | Description
    Buildpacks bucket name | pcf-buildpacks-bucket | This S3 bucket stores app buildpacks.
    Droplets bucket name | pcf-droplets-bucket | This S3 bucket stores app droplets. Pivotal recommends that you use a unique bucket name for droplets, but you can also use the same name as above.
    Packages bucket name | pcf-packages-bucket | This S3 bucket stores app packages. Pivotal recommends that you use a unique bucket name for packages, but you can also use the same name as above.
    Resources bucket name | pcf-resources-bucket | This S3 bucket stores app resources. Pivotal recommends that you use a unique bucket name for app resources, but you can also use the same name as above.

  10. Configure these checkboxes depending on whether your S3 buckets have versioning enabled:

    • For versioned S3 buckets, enable the Use versioning for backup and restore checkbox to archive each bucket to a version.
      If you are using Dell ECS, Pivotal recommends against versioned buckets. For more information, see Step 3: Configure PAS File Storage in Dell EMC ECS with Pivotal Cloud Foundry. You can use mirroring as an alternative to versioning.
    • For unversioned S3 buckets, disable the Use versioning for backup and restore checkbox, and enter a backup bucket name for each active bucket. The backup bucket name must be different from the name of the active bucket it backs up. For more information about setting up external S3 blobstores, see Enable Versioning on Your S3-Compatible Blobstore in Backup and Restore for External Blobstores in the Cloud Foundry documentation.
  11. Enter the name of the region in which your backup S3 buckets are located. For example, us-west-2. These are the buckets used to back up and restore the contents of your S3 filestore.

  12. (Optional) Enter names for your backup S3 buckets:

    Ops Manager Field | Value | Description
    Backup buildpacks bucket name | buildpacks-backup-bucket | This S3 bucket is used to back up and restore your buildpacks bucket. This bucket name must be different from the buckets you named above.
    Backup droplets bucket name | droplets-backup-bucket | This S3 bucket is used to back up and restore your droplets bucket. Pivotal recommends that you use a unique bucket name for droplet backups, but you can also use the same name as above.
    Backup packages bucket name | packages-backup-bucket | This S3 bucket is used to back up and restore your packages bucket. Pivotal recommends that you use a unique bucket name for package backups, but you can also use the same name as above.

  13. Click Save.

Note: For more information about AWS S3 signatures, see Authenticating Requests in the AWS documentation.
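If you selected Server-side encryption in step 7, the blobstore client attaches SSE parameters to each upload. A hedged sketch of those parameters; the dict keys are the AWS S3 API names used by clients such as boto3's put_object, and the function itself is illustrative, not PAS code:

```python
def sse_kms_params(kms_key_id=None):
    """Build SSE-KMS upload parameters; omit the key ID to use the default AWS key."""
    params = {"ServerSideEncryption": "aws:kms"}
    if kms_key_id:
        # A specific KMS key from step 8; without it, AWS falls back
        # to the account's default aws/s3 key.
        params["SSEKMSKeyId"] = kms_key_id
    return params
```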

Fog with AWS IAM Instance Profiles

To configure Fog blobstores to use AWS IAM instance profiles:

  1. Configure an additional cloud-controller IAM role with the following policy to give access to the S3 buckets you plan to use:

    {
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Action": [ "s3:*" ],
        "Resource": [
          "arn:aws:s3:::YOUR-AWS-BUILDPACK-BUCKET",
          "arn:aws:s3:::YOUR-AWS-BUILDPACK-BUCKET/*",
          "arn:aws:s3:::YOUR-AWS-DROPLET-BUCKET",
          "arn:aws:s3:::YOUR-AWS-DROPLET-BUCKET/*",
          "arn:aws:s3:::YOUR-AWS-PACKAGE-BUCKET",
          "arn:aws:s3:::YOUR-AWS-PACKAGE-BUCKET/*",
          "arn:aws:s3:::YOUR-AWS-RESOURCE-BUCKET",
          "arn:aws:s3:::YOUR-AWS-RESOURCE-BUCKET/*",
        ]
      }]
    }
    

    Replace YOUR-AWS-BUILDPACK-BUCKET, YOUR-AWS-DROPLET-BUCKET, YOUR-AWS-PACKAGE-BUCKET, and YOUR-AWS-RESOURCE-BUCKET with the names of your AWS buckets. Do not use periods (.) in your AWS bucket names.

    If you use the AWS console, an IAM role is automatically assigned to an IAM instance profile with the same name, cloud-controller. If you do not use the AWS console, you must create an IAM instance profile with a single assigned IAM role. For more information, see Step 4: Create an IAM Instance Profile for Your Amazon EC2 Instances in the AWS documentation.

  2. In your BOSH cloud config, create a VM extension that applies the IAM instance profile you created to any VM that uses the extension.

    vm_extensions:
    - cloud_properties:
        iam_instance_profile: cloud-controller
      name: cloud-controller-iam
    

    Note: You can also create a VM extension using the Ops Manager API. For more information, see Create or Update a VM Extension in Managing Custom VM Extensions.

  3. In your PAS deployment manifest, use the cloud-controller-iam VM extension you created for the instance groups containing cloud_controller, cloud_controller_worker, and cloud_controller_clock, as in the example below:

    instance_groups:
    ...
    - name: api
      ...
      vm_extensions:
      - cloud-controller-iam
    ...
    - name: cc-worker
      ...
      vm_extensions:
      - cloud-controller-iam
    ...
    - name: scheduler
      ...
      vm_extensions:
      - cloud-controller-iam
    
  4. Insert the following configuration into your deployment manifest under properties.cc:

    cc:
      buildpacks:
        blobstore_type: fog
        buildpack_directory_key: YOUR-AWS-BUILDPACK-BUCKET
        fog_connection: &fog_connection
          provider: AWS
          region: us-east-1
          use_iam_profile: true
      droplets:
        blobstore_type: fog
        droplet_directory_key: YOUR-AWS-DROPLET-BUCKET
        fog_connection: *fog_connection
      packages:
        blobstore_type: fog
        app_package_directory_key: YOUR-AWS-PACKAGE-BUCKET
        fog_connection: *fog_connection
      resource_pool:
        blobstore_type: fog
        resource_directory_key: YOUR-AWS-RESOURCE-BUCKET
        fog_connection: *fog_connection
    

    Replace YOUR-AWS-BUILDPACK-BUCKET, YOUR-AWS-DROPLET-BUCKET, YOUR-AWS-PACKAGE-BUCKET, and YOUR-AWS-RESOURCE-BUCKET with the names of your AWS buckets. Do not use periods (.) in your AWS bucket names.

  5. (Optional) Provide other configuration with the fog_connection hash, which is passed through to the Fog gem.
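As a hedged sketch of how the IAM policy in step 1 fits together, the JSON can be generated from a list of bucket names; the bucket names below are placeholders, not values from this document:

```python
import json

def cloud_controller_policy(buckets):
    """Build the S3 access policy from step 1 for the given bucket names."""
    resources = []
    for bucket in buckets:
        # Each bucket needs two ARN entries: the bucket itself and its objects.
        resources.append(f"arn:aws:s3:::{bucket}")
        resources.append(f"arn:aws:s3:::{bucket}/*")
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:*"],
            "Resource": resources,
        }],
    }, indent=2)
```

This keeps the bucket and object ARNs paired automatically, which avoids the easy mistake of granting `s3:*` on objects but not on the bucket itself.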
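In the manifest fragment in step 4, the YAML anchor `&fog_connection` and its `*fog_connection` aliases make all four blobstores reuse a single connection mapping. A minimal Python sketch of the equivalent structure, assuming the same keys as above:

```python
# One shared connection mapping, as the YAML anchor/alias pair produces.
fog_connection = {
    "provider": "AWS",
    "region": "us-east-1",
    "use_iam_profile": True,
}

# Every blobstore points at the same mapping, so a change to the
# connection settings applies to all four at once.
cc = {
    "buildpacks":    {"blobstore_type": "fog", "fog_connection": fog_connection},
    "droplets":      {"blobstore_type": "fog", "fog_connection": fog_connection},
    "packages":      {"blobstore_type": "fog", "fog_connection": fog_connection},
    "resource_pool": {"blobstore_type": "fog", "fog_connection": fog_connection},
}
```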

GCP

This section describes how to configure file storage for GCP. Follow the procedure that corresponds to your installation method:

For production-level Pivotal Platform deployments on GCP, Pivotal recommends selecting External Google Cloud Storage. For more information about production-level Pivotal Platform deployments on GCP, see GCP Reference Architecture.

Manual

This section describes how to configure file storage for GCP if you installed Pivotal Platform manually.

PAS can use Google Cloud Storage (GCS) as its external filestore by using either a GCP interoperable storage access key or your GCS Service Account.

To configure file storage for GCP, follow one of these procedures:

External Google Cloud Storage with Access Key and Secret Key

To configure file storage for GCP using an access key and secret key:

  1. Select External Google Cloud Storage with access key and secret key.

  2. Enter values for Access key and Secret key. To obtain the values for these fields:

    1. In the GCP Console, navigate to the Storage tab.
    2. Click Settings.
    3. Click Interoperability.
    4. If necessary, click Enable interoperability access. If interoperability access is already enabled, confirm that the default project matches the project where you are installing PAS.
    5. Click Create a new key.
    6. Copy and paste the generated values into the corresponding PAS fields. PAS uses these values for authentication when connecting to Google Cloud Storage.
  3. Enter the names of the storage buckets you created in Step 7: Create Storage Buckets in Preparing to Deploy Ops Manager on GCP Manually:

    • Buildpacks bucket name: PREFIX-PCF-buildpacks
    • Droplets bucket name: PREFIX-PCF-droplets
    • Packages bucket name: PREFIX-PCF-packages
    • Resources bucket name: PREFIX-PCF-resources

      Where PREFIX is a prefix of your choice, required to make the bucket name unique.
  4. Click Save.

External Google Cloud Storage with Service Account

To configure file storage for GCP using a service account:

Note: You can either use the same service account that you created for Ops Manager, or create a separate service account for PAS file storage. To create a separate service account for PAS file storage, follow the procedure in Step 1: Set Up IAM Service Accounts in Preparing to Deploy Ops Manager on GCP Manually, but only select the Storage > Storage Admin role.

  1. Select External Google Cloud Storage with service account.

  2. For GCP project ID, enter the Project ID on your GCP Console that you want to use for your PAS file storage.

  3. For GCP service account email, enter the email address associated with your GCP account.

  4. For GCP service account key JSON, enter the account key that you use to access the specified GCP project, in JSON format.

  5. Enter the names of the storage buckets you created in Step 7: Create Storage Buckets in Preparing to Deploy Ops Manager on GCP Manually:

    • Buildpacks bucket name: PREFIX-PCF-buildpacks
    • Droplets bucket name: PREFIX-PCF-droplets
    • Packages bucket name: PREFIX-PCF-packages
    • Resources bucket name: PREFIX-PCF-resources
    • Backup bucket name: PREFIX-PCF-backup

      Where PREFIX is a prefix of your choice, required to make the bucket name unique.
  6. Click Save.
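For the GCP service account key JSON field in step 4, the key file downloaded from GCP is a JSON document along these lines; the values here are placeholders, not working credentials:

```json
{
  "type": "service_account",
  "project_id": "YOUR-PROJECT-ID",
  "private_key_id": "KEY-ID",
  "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n",
  "client_email": "SERVICE-ACCOUNT-NAME@YOUR-PROJECT-ID.iam.gserviceaccount.com",
  "client_id": "CLIENT-ID",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://oauth2.googleapis.com/token"
}
```

Paste the entire JSON document into the field, not just the private key.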

Terraform

This section describes how to configure file storage for GCP if you installed Pivotal Platform with Terraform.

PAS can use Google Cloud Storage (GCS) as its external filestore by using either a GCP interoperable storage access key or your GCS Service Account.

To configure file storage for GCP, follow one of these procedures:

External Google Cloud Storage with Access Key and Secret Key

To configure file storage for GCP using an access key and secret key:

  1. Select External Google Cloud Storage with access key and secret key.

  2. Enter values for Access key and Secret key. To obtain the values for these fields:

    1. In the GCP Console, navigate to the Storage tab.
    2. Click Settings.
    3. Click Interoperability.
    4. If necessary, click Enable interoperability access. If interoperability access is already enabled, confirm that the default project matches the project where you are installing PAS.
    5. Click Create a new key.
    6. Copy and paste the generated values into the corresponding PAS fields. PAS uses these values for authentication when connecting to Google Cloud Storage.
  3. Enter the names of the storage buckets you created in GCP Service Account Key for Blobstore in Deploying Ops Manager on GCP Using Terraform:

    • Buildpacks bucket name: Enter the value of buildpacks_bucket from your Terraform output.
    • Droplets bucket name: Enter the value of droplets_bucket from your Terraform output.
    • Packages bucket name: Enter the value of packages_bucket from your Terraform output.
    • Resources bucket name: Enter the value of resources_bucket from your Terraform output.
  4. Click Save.

External Google Cloud Storage with Service Account

To configure file storage for GCP using a service account:

Note: You can either use the same service account that you created for Ops Manager, or create a separate service account for PAS file storage. To create a separate service account for PAS file storage, follow the procedure in Step 1: Set Up IAM Service Accounts in Preparing to Deploy Ops Manager on GCP Manually, but only select the Storage > Storage Admin role.

  1. Select External Google Cloud Storage with service account.

  2. For GCP project ID, enter the Project ID on your GCP Console that you want to use for your PAS file storage.

  3. For GCP service account email, enter the email address associated with your GCP account.

  4. For GCP service account key JSON, enter the account key that you use to access the specified GCP project, in JSON format.

  5. Enter the names of the storage buckets you created in GCP Service Account Key for Blobstore in Deploying Ops Manager on GCP Using Terraform:

    • Buildpacks bucket name: Enter the value of buildpacks_bucket from your Terraform output.
    • Droplets bucket name: Enter the value of droplets_bucket from your Terraform output.
    • Packages bucket name: Enter the value of packages_bucket from your Terraform output.
    • Resources bucket name: Enter the value of resources_bucket from your Terraform output.
    • Backup bucket name: Enter the value of backup_bucket from your Terraform output.
  6. Click Save.

Azure

This section describes how to configure file storage for Azure.

For production-level Pivotal Platform deployments on Azure, Pivotal recommends selecting External Azure Storage. For more information about production-level Pivotal Platform deployments on Azure, see Azure Reference Architecture.

For more factors to consider when selecting file storage, see Configure File Storage in Configuring PAS for Upgrades.

To use external Azure file storage for your PAS filestore:

  1. Select External Azure storage.

  2. To create a new storage account for the PAS filestore:

    1. Navigate to the Azure Portal.
    2. Select the Storage accounts tab.
    3. Click the + icon to add a new storage account.
    4. In the Name field, enter a unique name for the storage account. This name must be all lowercase and contain 3 to 24 alphanumeric characters.
    5. For Deployment model, select Resource manager.
    6. From the Account kind dropdown, select General purpose.
    7. For Performance, select Standard.
    8. From the Replication dropdown, select Locally-redundant storage (LRS).
    9. For Storage service encryption, select Disabled.
    10. From the Subscription dropdown, select the subscription where you want to deploy PAS resources.
    11. For Resource group, select Use existing and enter the name of the resource group where you deployed PAS.
    12. From the Location dropdown, select the location where you are deploying PAS.
    13. Click Create.
  3. To create new storage containers in the storage account you created in the previous step:

    1. In the Azure Portal, select the new storage account from the dashboard.
    2. Under Blob Service, select Containers to create one or more containers in this storage account for buildpacks, droplets, resources, and packages.
    3. Select Soft Delete.
    4. Select Enabled to enable soft delete in your Azure storage account.

      Note: BBR requires that you enable soft delete in your Azure storage account before you enable backup and restore for your Azure blobstores in Ops Manager. You should set a reasonable retention policy to minimize storage costs. For more information on enabling soft delete in your Azure storage account, see Soft delete for blobs in the Azure documentation.

    5. For each container that you create, set the Access type to Private.
  4. In PAS, enter the name of the storage account you created for Account name.

  5. In the Access key field, enter one of the access keys provided for the storage account. To obtain a value for this field:

    1. Navigate to the Azure Portal.
    2. Select the Storage accounts tab.
    3. Click Access keys.
  6. For Environment, enter the name of the Azure Cloud environment that contains your storage. This value defaults to AzureCloud, but other options include AzureChinaCloud, AzureUSGovernment, and AzureGermanCloud.

  7. For Buildpacks container name, enter the container name for storing your app buildpacks.

  8. For Droplets container name, enter the container name for your app droplet storage. Pivotal recommends that you use a unique container name, but you can use the same container name as the previous step.

  9. For Packages container name, enter the container name for packages. Pivotal recommends that you use a unique container name, but you can use the same container name as the previous step.

  10. For Resources container name, enter the container name for resources. Pivotal recommends that you use a unique container name, but you can use the same container name as the previous step.

  11. (Optional) To enable backup and restore for your Azure blobstores in PAS, select the Enable backup and restore checkbox.

    Note: Soft deletes must be enabled for all storage containers listed.

  12. (Optional) To enable PAS to restore your containers to a different Azure storage account than the account where you take backups:

    1. Under Restore from storage account, enter the name of the Azure storage account you want to restore your containers from. Leave this field blank if you want to restore to the same storage account where you take backups.
    2. Under Restore using access key, enter the access key for the Azure storage account you specified in Restore from storage account. Leave this field blank if you want to restore to the same storage account where you take backups.
  13. Click Save.

Note: To enable backup and restore of your PAS tile that uses an S3-compatible blobstore, see Enable External Blobstore Backups.
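The storage account naming rule from step 2 (all lowercase, 3 to 24 alphanumeric characters) can be checked before you submit the form. A minimal sketch; the function name is illustrative, not part of any Azure SDK:

```python
import re

def valid_storage_account_name(name):
    """Check Azure's rule from step 2: 3-24 lowercase letters and digits."""
    return re.fullmatch(r"[a-z0-9]{3,24}", name) is not None
```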

OpenStack

For production-level Pivotal Platform deployments on OpenStack, Pivotal recommends selecting External S3-compatible filestore. For more information about production-level Pivotal Platform deployments on OpenStack, see OpenStack Reference Architecture.

For more factors to consider when selecting file storage, see Configure File Storage in Configuring PAS for Upgrades.

To use an external S3-compatible filestore for PAS file storage:

  1. Select External S3-compatible filestore.

  2. For URL endpoint, enter the https:// URL endpoint for your region. For example, https://s3.us-west-2.amazonaws.com/.

  3. Enter the Access key and Secret key of the pcf-user you created when configuring AWS for Pivotal Platform.

  4. From the S3 signature version dropdown, select V4 signature. For more information about S3 signatures, see Signing AWS API Requests in the AWS documentation.

  5. For Region, enter the region in which your S3 buckets are located. For example, us-west-2.

  6. To encrypt the contents of your S3 filestore, select Server-side encryption. This option is only available for AWS S3.

  7. (Optional) If you selected Server-side encryption, you can also specify a KMS key ID. PAS uses the KMS key to encrypt files uploaded to the blobstore. If you do not provide a KMS Key ID, PAS uses the default AWS key. For more information, see Protecting Data Using Server-Side Encryption with AWS KMS–Managed Keys (SSE-KMS) in the AWS documentation.

  8. Enter names for your S3 buckets:

    Ops Manager Field | Value | Description
    Buildpacks bucket name | pcf-buildpacks-bucket | This S3 bucket stores app buildpacks.
    Droplets bucket name | pcf-droplets-bucket | This S3 bucket stores app droplets. Pivotal recommends that you use a unique bucket name for droplets, but you can also use the same name as above.
    Packages bucket name | pcf-packages-bucket | This S3 bucket stores app packages. Pivotal recommends that you use a unique bucket name for packages, but you can also use the same name as above.
    Resources bucket name | pcf-resources-bucket | This S3 bucket stores app resources. Pivotal recommends that you use a unique bucket name for app resources, but you can also use the same name as above.

  9. Configure these checkboxes depending on whether your S3 buckets have versioning enabled:

    • For versioned S3 buckets, enable the Use versioning for backup and restore checkbox to archive each bucket to a version.
      If you are using Dell ECS, Pivotal recommends against versioned buckets. For more information, see Step 3: Configure PAS File Storage in Dell EMC ECS with Pivotal Cloud Foundry. You can use mirroring as an alternative to versioning.
    • For unversioned S3 buckets, disable the Use versioning for backup and restore checkbox, and enter a backup bucket name for each active bucket. The backup bucket name must be different from the name of the active bucket it backs up. For more information about setting up external S3 blobstores, see Enable Versioning on Your S3-Compatible Blobstore in Backup and Restore for External Blobstores in the Cloud Foundry documentation.
  10. Enter the name of the region in which your backup S3 buckets are located. For example, us-west-2. These are the buckets used to back up and restore the contents of your S3 filestore.

  11. (Optional) Enter names for your backup S3 buckets:

    Ops Manager Field | Value | Description
    Backup buildpacks bucket name | buildpacks-backup-bucket | This S3 bucket is used to back up and restore your buildpacks bucket. This bucket name must be different from the buckets you named above.
    Backup droplets bucket name | droplets-backup-bucket | This S3 bucket is used to back up and restore your droplets bucket. Pivotal recommends that you use a unique bucket name for droplet backups, but you can also use the same name as above.
    Backup packages bucket name | packages-backup-bucket | This S3 bucket is used to back up and restore your packages bucket. Pivotal recommends that you use a unique bucket name for package backups, but you can also use the same name as above.

  12. Click Save.

Note: For more information about AWS S3 signatures, see Authenticating Requests in the AWS documentation.
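The URL endpoint format from step 2 follows a predictable regional pattern for AWS-hosted buckets. A hedged sketch; other S3-compatible stores use their own hostnames, so this applies only to the AWS case:

```python
def s3_regional_endpoint(region):
    """Build the AWS-style regional endpoint URL from step 2."""
    return f"https://s3.{region}.amazonaws.com/"
```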

vSphere

For production-level Pivotal Platform deployments on vSphere, Pivotal recommends selecting External S3-compatible filestore. For more information about production-level Pivotal Platform deployments on vSphere, see vSphere Reference Architecture.

For more factors to consider when selecting file storage, see Configure File Storage in Configuring PAS for Upgrades.

To use an external S3-compatible filestore for PAS file storage:

  1. Select External S3-compatible filestore.

  2. For URL endpoint, enter the https:// URL endpoint for your region. For example, https://s3.us-west-2.amazonaws.com/.

  3. Enter the Access key and Secret key of the pcf-user you created when configuring AWS for Pivotal Platform.

  4. From the S3 signature version dropdown, select V4 signature. For more information about S3 signatures, see Signing AWS API Requests in the AWS documentation.

  5. For Region, enter the region in which your S3 buckets are located. For example, us-west-2.

  6. To encrypt the contents of your S3 filestore, select Server-side encryption. This option is only available for AWS S3.

  7. (Optional) If you selected Server-side encryption, you can also specify a KMS key ID. PAS uses the KMS key to encrypt files uploaded to the blobstore. If you do not provide a KMS Key ID, PAS uses the default AWS key. For more information, see Protecting Data Using Server-Side Encryption with AWS KMS–Managed Keys (SSE-KMS) in the AWS documentation.

  8. Enter names for your S3 buckets:

    Ops Manager Field | Value | Description
    Buildpacks bucket name | pcf-buildpacks-bucket | This S3 bucket stores app buildpacks.
    Droplets bucket name | pcf-droplets-bucket | This S3 bucket stores app droplets. Pivotal recommends that you use a unique bucket name for droplets, but you can also use the same name as above.
    Packages bucket name | pcf-packages-bucket | This S3 bucket stores app packages. Pivotal recommends that you use a unique bucket name for packages, but you can also use the same name as above.
    Resources bucket name | pcf-resources-bucket | This S3 bucket stores app resources. Pivotal recommends that you use a unique bucket name for app resources, but you can also use the same name as above.

  9. Configure these checkboxes depending on whether your S3 buckets have versioning enabled:

    • For versioned S3 buckets, enable the Use versioning for backup and restore checkbox to archive each bucket to a version.
      If you are using Dell ECS, Pivotal recommends against versioned buckets. For more information, see Step 3: Configure PAS File Storage in Dell EMC ECS with Pivotal Cloud Foundry. You can use mirroring as an alternative to versioning.
    • For unversioned S3 buckets, disable the Use versioning for backup and restore checkbox, and enter a backup bucket name for each active bucket. The backup bucket name must be different from the name of the active bucket it backs up. For more information about setting up external S3 blobstores, see Enable Versioning on Your S3-Compatible Blobstore in Backup and Restore for External Blobstores in the Cloud Foundry documentation.
  10. Enter the name of the region in which your backup S3 buckets are located. For example, us-west-2. These are the buckets used to back up and restore the contents of your S3 filestore.

  11. (Optional) Enter names for your backup S3 buckets:

    Ops Manager Field | Value | Description
    Backup buildpacks bucket name | buildpacks-backup-bucket | This S3 bucket is used to back up and restore your buildpacks bucket. This bucket name must be different from the buckets you named above.
    Backup droplets bucket name | droplets-backup-bucket | This S3 bucket is used to back up and restore your droplets bucket. Pivotal recommends that you use a unique bucket name for droplet backups, but you can also use the same name as above.
    Backup packages bucket name | packages-backup-bucket | This S3 bucket is used to back up and restore your packages bucket. Pivotal recommends that you use a unique bucket name for package backups, but you can also use the same name as above.

  12. Click Save.

Note: For more information about AWS S3 signatures, see Authenticating Requests in the AWS documentation.