
Pivotal Cloud Cache Developer Guide

This document describes how a Pivotal Cloud Foundry (PCF) app developer can choose a service plan, create and delete Pivotal Cloud Cache (PCC) service instances, and bind an app.

You must install the Cloud Foundry Command Line Interface (cf CLI) to run the commands in this topic.

Viewing All Plans Available for Pivotal Cloud Cache

Run cf marketplace -s p-cloudcache to view all plans available for PCC. The plan names displayed are configured by the operator on tile installation.

$ cf marketplace -s p-cloudcache

Getting service plan information for service p-cloudcache as admin...
OK

service plan   description      free or paid
extra-small    Caching Plan 1   free
small          Caching Plan 2   free
medium         Caching Plan 3   free
large          Caching Plan 4   free
extra-large    Caching Plan 5   free

Creating a Pivotal Cloud Cache Service Instance

Run cf create-service p-cloudcache PLAN-NAME SERVICE-INSTANCE-NAME to create a service instance. Replace PLAN-NAME with the name from the list of available plans. Replace SERVICE-INSTANCE-NAME with a name of your choice. Use this name to refer to your service instance with other commands. Service instance names can include alpha-numeric characters, hyphens, and underscores.

$ cf create-service p-cloudcache extra-large my-cloudcache

Service instances are created asynchronously. Run the cf services command to view the current status of the service creation, and of other service instances in the current org and space:

$ cf services
Getting services in org my-org / space my-space as user...
OK

name            service        plan    bound apps   last operation
my-cloudcache   p-cloudcache   extra-large          create in progress

When completed, the status changes from create in progress to create succeeded.

Provide Optional Parameters

You can create a customized service instance by passing optional parameters to cf create-service using the -c flag. The -c flag accepts a valid JSON object containing service-specific configuration parameters, provided either in-line or in a file.

The PCC service broker supports the following parameters:

  • num_servers: An integer that specifies the number of server instances in the cluster. The minimum value is 4. The maximum and default values are configured by the operator.
  • new_size_percentage: An integer that specifies the percentage of the heap to allocate to young generation. This value must be between 5 and 83. By default, the new size is 2 GB or 10% of heap, whichever is smaller.

The following example creates the service with five service instances in the cluster:

$ cf create-service p-cloudcache small my-cloudcache -c '{"num_servers": 5}'
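The parameters can also be read from a file. For example, assuming a file named /tmp/pcc-params.json (the file name is illustrative) that sets both supported parameters:

$ cat /tmp/pcc-params.json
{"num_servers": 5, "new_size_percentage": 20}

$ cf create-service p-cloudcache small my-cloudcache -c /tmp/pcc-params.json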

Enable Session State Caching with the Java Buildpack

When the session-replication tag is specified, the Java buildpack downloads all the required resources for session state caching. This feature is available in Java buildpack versions 3.19 and later within the 3.x line, and in versions 4.3 and later.

To enable session state caching, do one of the following:

  • Option 1: When creating your service instance, specify the session-replication tag. For example:

     $ cf create-service p-cloudcache small-plan my-service-instance -t session-replication
  • Option 2: Update your service instance, specifying the session-replication tag:

    $ cf update-service new-service-instance -t session-replication
  • Option 3: When creating the service instance, append the string -session-replication to its name, for example my-service-instance-session-replication.
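    For example, using the plan and instance names from Option 1:

     $ cf create-service p-cloudcache small-plan my-service-instance-session-replication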

Enable Session State Caching Using Spring Session

To use Spring Session for session state caching for apps with PCC, follow the steps below:

  1. Make the following changes to the app:

    • Replace the existing Spring Session @EnableXXXHttpSession annotation with @EnableGemFireHttpSession(maxInactiveIntervalInSeconds = N), where N is the session timeout in seconds. A configuration sketch follows these steps.
    • Add the spring-session-data-geode and spring-data-geode dependencies to the build.
    • Add beans to the Spring app config.

    For more information, see the spring-session-data-gemfire-example repository.

  2. Create a region named ClusteredSpringSessions in gfsh using the cluster_operator_XXX credentials: create region --name=ClusteredSpringSessions --type=PARTITION_HEAP_LRU
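Here is a minimal sketch of the app-side configuration described in step 1, assuming a Spring Boot app. The class name and the 1800-second timeout are illustrative, and the annotation's package location can vary with the version of the Spring Session library you use:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.session.data.gemfire.config.annotation.web.http.EnableGemFireHttpSession;

// Replaces any existing @EnableXXXHttpSession annotation;
// sessions expire after 1800 seconds of inactivity.
@SpringBootApplication
@EnableGemFireHttpSession(maxInactiveIntervalInSeconds = 1800)
public class SessionCachingApplication {
    public static void main(String[] args) {
        SpringApplication.run(SessionCachingApplication.class, args);
    }
}

The sessions are stored in the ClusteredSpringSessions region created in step 2.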

Dev Plans

The Dev Plan is a type of service plan that is useful for development and testing. This example creates a Dev Plan service instance:

$ cf create-service p-cloudcache dev-plan my-dev-cloudcache

The plan provides a single locator and a single server colocated within a single VM. Because the VM is recycled when the service instance is updated or upgraded, all data within the region is lost upon update or upgrade.

When post-deploy scripts are enabled for Ops Manager, the service instance is created with a single sample region called example_partition_region. The region is of type PARTITION_REDUNDANT_HEAP_LRU, as described in Partitioned Region Types for Creating Regions on the Server.

If example_partition_region has not been created, it is probably because post-deploy scripts are not enabled for Ops Manager, as described in Configure a Dev Plan.
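To check that the sample region is present, you can connect with gfsh (see Connect with gfsh over HTTPS) and list the regions. For example (output abbreviated and illustrative):

gfsh>list regions
List of regions
---------------------------
example_partition_region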

Set Up WAN-Separated Service Instances

Two service instances may form a single distributed system across a WAN. The interaction of the two service instances may follow one of the patterns described within the section on Design Patterns.

Call the two service instances A and B. The GemFire cluster within each service instance uses an identifier called a distributed_system_id. This example assigns distributed_system_id = 1 to Cluster A and distributed_system_id = 2 to Cluster B. GemFire gateway senders provide the communication path and construct that propagates region operations from one cluster to another. On the receiving end are GemFire gateway receivers. Creating a service instance also creates gateway receivers.

Set Up a Bidirectional System

This sequence of steps sets up a bidirectional transfer, as will be needed for an active-active pattern, as described in Bidirectional Replication Across a WAN.

  1. Create the cluster A service instance using the cluster A Cloud Foundry credentials. This example explicitly sets the distributed_system_id of cluster A using a -c option with a command of the form:

    cf create-service p-cloudcache PLAN-NAME SERVICE-INSTANCE-NAME -c '{
    "distributed_system_id" : ID-VALUE }'
    

    Here is a cluster A example of the create-service command:

    $ cf create-service p-cloudcache wan-cluster wan1 -c '{
    "distributed_system_id" : 1 }'
    

    Verify the completion of service creation prior to continuing to the next step. Output from the cf services command will show the last operation as create succeeded when service creation is completed.

  2. Create a service key for cluster A. The service key will contain generated credentials that this example will use in the creation of the cluster B service instance:

    $ cf create-service-key wan1 k1
    

    Within the service key, each username has a unique string appended so that the user names for the different roles are distinct. Passwords generated for the service key are output in clear text. The user names and passwords shown in this example have been simplified for readability; they are not representative of the values generated upon service key creation. Here is sample output from cf service-key wan1 k1:

    Getting key k1 for service instance wan1 as admin...
    
    {
     "distributed_system_id": "1",
     "locators": [
      "10.0.16.21[55221]"
      "10.0.16.22[55221]"
      "10.0.16.23[55221]"
     ],
     "urls": {
      "gfsh": "https://cloudcache-1.example.com/gemfire/v1",
      "pulse": "https://cloudcache-1.example.com/pulse"
     },
     "users": [
      {
       "password": "cl-op-ABCDE-password",
       "roles": [
        "cluster_operator"
       ],
       "username": "cluster_operator_ABCDE"
      },
      {
       "password": "dev-FGHIJ-password",
       "roles": [
        "developer"
       ],
       "username": "developer_FGHIJ"
      }
     ],
     "wan": {
      "sender_credentials": {
       "active": {
        "password": "gws-KLMNO-password",
        "username": "gateway_sender_KLMNO"
       }
      }
     }
    }
    
  3. Communicate the cluster A locators' IP addresses and ports, along with the sender_credentials, to the cluster B Cloud Foundry administrator.

  4. Create the cluster B service instance using cluster B Cloud Foundry credentials. This example explicitly sets the distributed_system_id. Use a -c option with the command to specify the distributed_system_id, the cluster A service instance’s locators, and the cluster A sender_credentials:

    $ cf create-service p-cloudcache wan-cluster wan2 -c '
    {
      "distributed_system_id":2,
      "remote_clusters":[
      {
        "remote_locators":[
          "10.0.16.21[55221]",
          "10.0.16.22[55221]",
          "10.0.16.23[55221]"],
        "trusted_sender_credentials":[
        {
          "username": "gateway_sender_KLMNO",
          "password":"gws-KLMNO-password"
        }]
      }]
    }'
    

    Verify the completion of service creation prior to continuing to the next step. Output from the cf services command will show the last operation as create succeeded when service creation is completed.

  5. Create the service key of cluster B:

    $ cf create-service-key wan2 k2
    

    Here is sample output from cf service-key wan2 k2, which outputs details of the cluster B service key:

    Getting key k2 for service instance wan2 as admin...
    
    {
     "distributed_system_id": "2",
     "locators": [
      "10.0.24.21[55221]"
      "10.0.24.22[55221]"
      "10.0.24.23[55221]"
     ],
     "urls": {
      "gfsh": "https://cloudcache-2.example.com/gemfire/v1",
      "pulse": "https://cloudcache-2.example.com/pulse"
     },
     "users": [
      {
       "password": "cl-op-PQRST-password",
       "roles": [
        "cluster_operator"
       ],
       "username": "cluster_operator_PQRST"
      },
      {
       "password": "dev-UVWXY-password",
       "roles": [
        "developer"
       ],
       "username": "developer_UVWXY"
      }
     ],
     "wan": {
      "remote_clusters": [
      {
        "remote_locators": [
          "10.0.16.21[55221]",
          "10.0.16.21[55221]",
          "10.0.16.21[55221]"
        ],
        "trusted_sender_credentials": [
         "gateway_sender_KLMNO"
        ]
       }
      ],
      "sender_credentials": {
       "active": {
        "password": "gws-ZABCD-password",
        "username": "gateway_sender_ZABCD"
       }
      }
     }
    }
    
  6. Communicate the cluster B locators' IP addresses and ports, along with the sender_credentials, to the cluster A Cloud Foundry administrator.

  7. Update the cluster A service instance using the cluster A Cloud Foundry credentials to include the cluster B locators and the cluster B sender_credentials:

    $ cf update-service wan1 -c '
    {
      "remote_clusters":[
      {
        "remote_locators":[
          "10.0.24.21[55221]",
          "10.0.24.22[55221]",
          "10.0.24.23[55221]"],
        "trusted_sender_credentials":[
        {
          "username":"gateway_sender_ZABCD",
          "password":"gws-ZABCD-password"
        }]
      }]
    }'
    Updating service instance wan1 as admin
    
  8. To observe and verify that the cluster A service instance has been correctly updated, it is necessary to delete and recreate the cluster A service key. As designed, the recreated service key will have the same user identifiers and passwords; new unique strings and passwords are not generated. Use the cluster A Cloud Foundry credentials in these commands:

    $ cf delete-service-key wan1 k1
    
    $ cf create-service-key wan1 k1
    

    The cluster A service key will now appear as:

    Getting key k1 for service instance wan1 as admin...
    
    {
     "distributed_system_id": "1",
     "locators": [
      "10.0.16.21[55221]",
      "10.0.16.22[55221]",
      "10.0.16.23[55221]"
     ],
     "urls": {
      "gfsh": "https://cloudcache-1.example.com/gemfire/v1",
      "pulse": "https://cloudcache-1.example.com/pulse"
     },
     "users": [
      {
       "password": "cl-op-ABCDE-password",
       "roles": [
        "cluster_operator"
       ],
       "username": "cluster_operator_ABCDE"
      },
      {
       "password": "dev-FGHIJ-password",
       "roles": [
        "developer"
       ],
       "username": "developer_FGHIJ"
      }
     ],
     "wan": {
      "remote_clusters": [
       {
        "remote_locators": [
         "10.0.24.21[55221]",
         "10.0.24.22[55221]",
         "10.0.24.23[55221]"
        ],
        "trusted_sender_credentials": [
         "gateway_sender_ZABCD"
        ]
       }
      ],
      "sender_credentials": {
       "active": {
        "password": "gws-KLMNO-password",
        "username": "gateway_sender_KLMNO"
       }
      }
     }
    }
    
  9. Use gfsh to create the cluster A gateway sender and the region. Any region operations that occur after the region is created on cluster A, but before the region is created on cluster B will be lost.

    • Connect using gfsh with the cluster A cluster_operator credentials, which are required to authorize the gateway sender creation operation:
      gfsh>connect --url=https://cloudcache-1.example.com/gemfire/v1 --use-http --user=cluster_operator_ABCDE --password=cl-op-ABCDE-password
      
    • Create the cluster A gateway sender. The required remote-distributed-system-id option identifies the distributed-system-id of the destination cluster. It is 2 for this example:

      gfsh>create gateway-sender --id=send_to_2 --remote-distributed-system-id=2 --enable-persistence=true
      
    • Create the cluster A region. The gateway-sender-id associates region operations with a specific gateway sender. The region must have an associated gateway sender in order to propagate region events across the WAN.

      gfsh>create region --name=regionX --gateway-sender-id=send_to_2 --type=PARTITION_REDUNDANT
      
  10. Use gfsh to create the cluster B gateway sender and region.

    • Connect using gfsh with the cluster B cluster_operator credentials, which are required to authorize the gateway sender creation operation:
      gfsh>connect --url=https://cloudcache-2.example.com/gemfire/v1 --use-http --user=cluster_operator_PQRST --password=cl-op-PQRST-password
      
    • Create the cluster B gateway sender:

      gfsh>create gateway-sender --id=send_to_1 --remote-distributed-system-id=1 --enable-persistence=true
      
    • Create the cluster B region:

      gfsh>create region --name=regionX --gateway-sender-id=send_to_1 --type=PARTITION_REDUNDANT
      

Set Up a Unidirectional System

This sequence of steps sets up a unidirectional transfer, such that all operations in cluster A are replicated in cluster B. Two design patterns that use unidirectional replication are described in Blue-Green Disaster Recovery and CQRS Pattern Across a WAN.

  1. Create the cluster A service instance using the cluster A Cloud Foundry credentials. This example explicitly sets the distributed_system_id of cluster A using a -c option with a command of the form:

    cf create-service p-cloudcache PLAN-NAME SERVICE-INSTANCE-NAME -c '{
    "distributed_system_id" : ID-VALUE }'
    

    Here is a cluster A example of the create-service command:

    $ cf create-service p-cloudcache wan-cluster wan1 -c '{
    "distributed_system_id" : 1 }'
    

    Verify the completion of service creation prior to continuing to the next step. Output from the cf services command will show the last operation as create succeeded when service creation is completed.

  2. Create a service key for cluster A. The service key will contain generated credentials that this example will use in the creation of the cluster B service instance:

    $ cf create-service-key wan1 k1
    

    Within the service key, each username has a unique string appended so that the user names for the different roles are distinct. Passwords generated for the service key are output in clear text. The user names and passwords shown in this example have been simplified for readability; they are not representative of the values generated upon service key creation. Here is sample output from cf service-key wan1 k1:

    Getting key k1 for service instance wan1 as admin...
    
    {
     "distributed_system_id": "1",
     "locators": [
      "10.0.16.21[55221]"
      "10.0.16.22[55221]"
      "10.0.16.23[55221]"
     ],
     "urls": {
      "gfsh": "https://cloudcache-1.example.com/gemfire/v1",
      "pulse": "https://cloudcache-1.example.com/pulse"
     },
     "users": [
      {
       "password": "cl-op-ABCDE-password",
       "roles": [
        "cluster_operator"
       ],
       "username": "cluster_operator_ABCDE"
      },
      {
       "password": "dev-FGHIJ-password",
       "roles": [
        "developer"
       ],
       "username": "developer_FGHIJ"
      }
     ],
     "wan": {
      "sender_credentials": {
       "active": {
        "password": "gws-KLMNO-password",
        "username": "gateway_sender_KLMNO"
       }
      }
     }
    }
    
  3. Communicate the cluster A locators' IP addresses and ports, along with the sender_credentials, to the cluster B Cloud Foundry administrator.

  4. Create the cluster B service instance using cluster B Cloud Foundry credentials. This example explicitly sets the distributed_system_id. Use a -c option with the command to specify the distributed_system_id, the cluster A service instance’s locators, and the cluster A sender_credentials:

    $ cf create-service p-cloudcache wan-cluster wan2 -c '
    {
      "distributed_system_id":2,
      "remote_clusters":[
      {
        "remote_locators":[
          "10.0.16.21[55221]",
          "10.0.16.22[55221]",
          "10.0.16.23[55221]"],
        "trusted_sender_credentials":[
        {
          "username": "gateway_sender_KLMNO",
          "password":"gws-KLMNO-password"
        }]
      }]
    }'
    

    Verify the completion of service creation prior to continuing to the next step. Output from the cf services command will show the last operation as create succeeded when service creation is completed.

  5. Create the service key of cluster B:

    $ cf create-service-key wan2 k2
    

    Note that the cluster B service key contains automatically created sender_credentials, which are not needed for this unidirectional setup. Here is sample output from cf service-key wan2 k2, which outputs details of the cluster B service key:

    Getting key k2 for service instance wan2 as admin...
    
    {
     "distributed_system_id": "2",
     "locators": [
      "10.0.24.21[55221]"
      "10.0.24.22[55221]"
      "10.0.24.23[55221]"
     ],
     "urls": {
      "gfsh": "https://cloudcache-2.example.com/gemfire/v1",
      "pulse": "https://cloudcache-2.example.com/pulse"
     },
     "users": [
      {
       "password": "cl-op-PQRST-password",
       "roles": [
        "cluster_operator"
       ],
       "username": "cluster_operator_PQRST"
      },
      {
       "password": "dev-UVWXY-password",
       "roles": [
        "developer"
       ],
       "username": "developer_UVWXY"
      }
     ],
     "wan": {
      "remote_clusters": [
      {
        "remote_locators": [
          "10.0.16.21[55221]",
          "10.0.16.21[55221]",
          "10.0.16.21[55221]"
        ],
        "trusted_sender_credentials": [
         "gateway_sender_KLMNO"
        ]
       }
      ],
      "sender_credentials": {
       "active": {
        "password": "gws-ZABCD-password",
        "username": "gateway_sender_ZABCD"
       }
      }
     }
    }
    
  6. Communicate the cluster B locators' IP addresses and ports to the cluster A Cloud Foundry administrator.

  7. Update the cluster A service instance using the cluster A Cloud Foundry credentials to include the cluster B locators:

    $ cf update-service wan1 -c '
    {
      "remote_clusters":[
      {
        "remote_locators":[
          "10.0.24.21[55221]",
          "10.0.24.22[55221]",
          "10.0.24.23[55221]"]
      }]
    }'
    Updating service instance wan1 as admin
    
  8. To observe and verify that the cluster A service instance has been correctly updated, it is necessary to delete and recreate the cluster A service key. As designed, the recreated service key will have the same user identifiers and passwords; new unique strings and passwords are not generated. Use the cluster A Cloud Foundry credentials in these commands:

    $ cf delete-service-key wan1 k1
    
    $ cf create-service-key wan1 k1
    

    The cluster A service key will now appear as:

    Getting key k1 for service instance wan1 as admin...
    
    {
     "distributed_system_id": "1",
     "locators": [
      "10.0.16.21[55221]",
      "10.0.16.22[55221]",
      "10.0.16.23[55221]"
     ],
     "urls": {
      "gfsh": "https://cloudcache-1.example.com/gemfire/v1",
      "pulse": "https://cloudcache-1.example.com/pulse"
     },
     "users": [
      {
       "password": "cl-op-ABCDE-password",
       "roles": [
        "cluster_operator"
       ],
       "username": "cluster_operator_ABCDE"
      },
      {
       "password": "dev-FGHIJ-password",
       "roles": [
        "developer"
       ],
       "username": "developer_FGHIJ"
      }
     ],
     "wan": {
      "remote_clusters": [
       {
        "remote_locators": [
         "10.0.24.21[55221]",
         "10.0.24.22[55221]",
         "10.0.24.23[55221]"
         ]
       }
      ],
      "sender_credentials": {
       "active": {
        "password": "gws-KLMNO-password",
        "username": "gateway_sender_KLMNO"
       }
      }
     }
    }
    
  9. Use gfsh to create the cluster A gateway sender and the region. Any region operations that occur after the region is created on cluster A, but before the region is created on cluster B will be lost.

    • Connect using gfsh with the cluster A cluster_operator credentials, which are required to authorize the gateway sender creation operation:
      gfsh>connect --url=https://cloudcache-1.example.com/gemfire/v1 --use-http --user=cluster_operator_ABCDE --password=cl-op-ABCDE-password
      
    • Create the cluster A gateway sender. The required remote-distributed-system-id option identifies the distributed-system-id of the destination cluster. It is 2 for this example:

      gfsh>create gateway-sender --id=send_to_2 --remote-distributed-system-id=2 --enable-persistence=true
      
    • Create the cluster A region. The gateway-sender-id associates region operations with a specific gateway sender. The region must have an associated gateway sender in order to propagate region events across the WAN.

      gfsh>create region --name=regionX --gateway-sender-id=send_to_2 --type=PARTITION_REDUNDANT
      
  10. Use gfsh to create the cluster B region.

    • Connect using gfsh with the cluster B cluster_operator credentials, which are required to authorize the create operation:
      gfsh>connect --url=https://cloudcache-2.example.com/gemfire/v1 --use-http --user=cluster_operator_PQRST --password=cl-op-PQRST-password
      
    • Create the cluster B region:

      gfsh>create region --name=regionX --type=PARTITION_REDUNDANT
      

Deleting a Service Instance

You can delete service instances using the cf CLI. Before doing so, you must remove any existing service keys and app bindings.

  1. Run cf delete-service-key SERVICE-INSTANCE-NAME KEY-NAME to delete the service key.
  2. Run cf unbind-service APP-NAME SERVICE-INSTANCE-NAME to unbind your app from the service instance.
  3. Run cf delete-service SERVICE-INSTANCE-NAME to delete the service instance.
$ cf delete-service-key my-cloudcache my-service-key
$ cf unbind-service my-app my-cloudcache
$ cf delete-service my-cloudcache

Deletions are asynchronous. Run cf services to view the current status of the service instance deletion.

Updating a Pivotal Cloud Cache Service Instance

You can apply all optional parameters to an existing service instance using the cf update-service command. You can, for example, scale up a cluster by increasing the number of servers.
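For example, to scale an existing service instance up to six servers (assuming six does not exceed the maximum configured for the plan):

$ cf update-service my-cloudcache -c '{"num_servers": 6}'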

Previously specified optional parameters are persisted through subsequent updates. To return the service instance to default values, you must explicitly specify the defaults as optional parameters.

For example, if you create a service instance with five servers using a plan that has a default value of four servers:

$ cf create-service p-cloudcache small my-cloudcache -c '{"num_servers": 5}'

And you set the new_size_percentage to 50%:

$ cf update-service my-cloudcache -c '{"new_size_percentage": 50}'

Then the resulting service instance has 5 servers and new_size_percentage of 50% of heap.

Cluster Rebalancing

When updating a cluster to increase the number of servers, the available heap size is increased. When this happens, PCC automatically rebalances data in the cache to distribute data across the cluster.

This automatic rebalancing does not occur when a server leaves the cluster and later rejoins, for example when a VM is re-created or network connectivity is lost and restored. In this case, you must manually rebalance the cluster using the gfsh rebalance command while authenticated as a cluster operator.

Note: You must first connect with gfsh before you can use the rebalance command.
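For example, after connecting as described in Connect with gfsh over HTTPS using the cluster_operator credentials:

gfsh>rebalance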

About Changes to the Service Plan

Your PCF operator can change details of the service plan available on the Marketplace. If your operator changes the default value of one of the optional parameters, this does not affect existing service instances.

However, if your operator changes the allowed values of one of the optional parameters, existing instances that exceed the new limits are not affected, but any subsequent service updates that change the optional parameter must adhere to the new limits.

For example, if the PCF operator changes the plan by decreasing the maximum value for num_servers, any future service updates must adhere to the new num_servers value limit.

You might see the following error message when attempting to update a service instance:

$ cf update-service  my-cloudcache -c '{"num_servers": 5}'
Updating service instance my-cloudcache as admin...
FAILED
Server error, status code: 502, error code: 10001, message: Service broker error: Service cannot be updated at this time, please try again later or contact your operator for more information

This error message indicates that the operator has made an update to the plan used by this service instance. You must wait for the operator to apply plan changes to all service instances before you can make further service instance updates.

Accessing a Service Instance

After you have created a service instance, you can start accessing it. Usually, you set up cache regions before using your service instance from a deployed CF app. You can do this with the gfsh command line tool. To connect, you must set up a service key.

Create Service Keys

Service keys provide a way to access your service instance outside the scope of a deployed CF app. Run cf create-service-key SERVICE-INSTANCE-NAME KEY-NAME to create a service key. Replace SERVICE-INSTANCE-NAME with the name you chose for your service instance. Replace KEY-NAME with a name of your choice. You can use this name to refer to your service key with other commands.

$ cf create-service-key my-cloudcache my-service-key

Run cf service-key SERVICE-INSTANCE-NAME KEY-NAME to view the newly created service key.

$ cf service-key my-cloudcache my-service-key

The cf service-key command returns output in the following format:

{
  "distributed_system_id": "0",
  "locators": [
    "10.244.0.66[55221]",
    "10.244.0.4[55221]",
    "10.244.0.3[55221]"
  ],
  "urls": {
    "gfsh": "gfsh-url",
    "pulse": "pulse-url"
  },
  "users": [
    {
      "password": "developer-password",
      "username": "developer_XXX",
      "roles": [
       "developer"
      ]
    },
    {
      "password": "cluster_operator-password",
      "username": "cluster_operator_XXX",
      "roles": [
       "cluster_operator"
      ]
    }
  ]
}

The service key specifies the user roles and URLs that are predefined for interacting with and within the cluster:

  • The cluster operator administers the pool, performing operations such as creating and destroying regions, and creating gateway senders. The identifier assigned for this role is of the form cluster_operator_XXX, where XXX is a unique string generated upon service instance creation and incorporated in this user role’s name.
  • The developer does limited cluster administration such as region creation, and the developer role is expected to be used by applications that are interacting with region entries. The developer does CRUD operations on regions. The identifier assigned for this role is of the form developer_XXX, where XXX is a unique string generated upon service instance creation and incorporated in this user role’s name.
  • The gateway sender writes data that is sent to another cluster. The identifier assigned for this role is of the form gateway_sender_XXX, where XXX is a unique string generated upon service instance creation and incorporated in this user role’s name.
  • A URL used to connect the gfsh client to the service instance
  • A URL used to view the Pulse dashboard in a web browser, which allows monitoring of the service instance status. Use the developer credentials to authenticate.

Connect with gfsh over HTTPS

When connecting over HTTPS, you must use the same certificate you use to secure traffic into Pivotal Application Service (PAS) or Elastic Runtime; that is, the certificate you use where your TLS termination occurs. Before you can connect, you must create a truststore.

Create a Truststore

To create a truststore, use the same certificate you used to configure TLS termination. We suggest using the keytool command line utility to create a truststore file.

  1. Locate the certificate you use to configure TLS termination.
  2. Using your certificate, run the keytool command.

    For example:

    $ keytool -import -alias ENV -file CERTIFICATE.CER -keystore TRUSTSTORE-FILE-PATH

    Where:

    • ENV is your system environment.
    • CERTIFICATE.CER is your certificate file.
    • TRUSTSTORE-FILE-PATH is the path to the location where you want to create the truststore file, including the name you want to give the file.
  3. When you run this command, you are prompted to enter a keystore password. Create a password and remember it!

  4. When prompted for the certificate details, enter yes to trust the certificate.

The following example shows how to run keytool and what the output looks like:

$ keytool -import -alias prod-ssl -file /tmp/loadbalancer.cer -keystore /tmp/truststore/prod.myTrustStore 
Enter keystore password:
Re-enter new password:
Owner: CN=*.url.example.com, OU=Cloud Foundry, O=Pivotal, L=New York, ST=New York, C=US
Issuer: CN=*.url.example.com, OU=Cloud Foundry, O=Pivotal, L=New York, ST=New York, C=US
Serial number: bd84912917b5b665
Valid from: Sat Jul 29 09:18:43 EDT 2017 until: Mon Apr 07 09:18:43 EDT 2031
Certificate fingerprints:
   MD5:  A9:17:B1:C9:6C:0A:F7:A3:56:51:6D:67:F8:3E:94:35
   SHA1: BA:DA:23:09:17:C0:DF:37:D9:6F:47:05:05:00:44:6B:24:A1:3D:77
   SHA256: A6:F3:4E:B8:FF:8F:72:92:0A:6D:55:6E:59:54:83:30:76:49:80:92:52:3D:91:4D:61:1C:A1:29:D3:BD:56:57
   Signature algorithm name: SHA256withRSA
   Version: 3

Extensions:

#1: ObjectId: 2.5.29.19 Criticality=true
BasicConstraints:[
  CA:true
  PathLen:0
]

#2: ObjectId: 2.5.29.17 Criticality=false
SubjectAlternativeName [
  DNSName: *.sys.url.example.com
  DNSName: *.apps.url.example.com
  DNSName: *.uaa.sys.url.example.com
  DNSName: *.login.sys.url.example.com
  DNSName: *.url.example.com
  DNSName: *.ws.url.example.com
]

Trust this certificate? [no]:  yes
Certificate was added to keystore

Establish the Connection with HTTPS

After you have created the truststore, you can use the Pivotal GemFire command line interface, gfsh, to connect to the cluster over HTTPS.

  1. Acquire gfsh by downloading the correct Pivotal GemFire ZIP archive from Pivotal Network. The correct version of Pivotal GemFire to download is any patch version of the Pivotal GemFire version listed in the PCC release notes. A link to the PCC release notes is on Pivotal Network in the Release Details for your PCC version. Note that a JDK or JRE will also be required, as specified in the release notes.
  2. Unzip the Pivotal GemFire ZIP archive. gfsh is within the bin directory in the expanded Pivotal GemFire. Use gfsh with Unix or gfsh.bat with Windows.
  3. Set the JAVA_ARGS environment variable with the following command:

    export JAVA_ARGS="-Djavax.net.ssl.trustStore=TRUSTSTORE-FILE-PATH"

    Where: TRUSTSTORE-FILE-PATH is the path to the TrustStore file you created in Create a Truststore.


    For example:

    $ export JAVA_ARGS="-Djavax.net.ssl.trustStore=/tmp/truststore/prod.myTrustStore"
    

  4. Run gfsh, and then issue a connect command that specifies an HTTPS URL of the form:

    connect --use-http=true --url=<HTTPS-gfsh-URL> 
     --user=<cluster_operator_XXX> 
     --password=<cluster_operator-password>
    

    The cluster operator user name and password are in the service key. See Create Service Keys for instructions on how to view the service key.
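    For example, using the illustrative cluster A values shown earlier in this topic:

    gfsh>connect --use-http=true --url=https://cloudcache-1.example.com/gemfire/v1 --user=cluster_operator_ABCDE --password=cl-op-ABCDE-password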

Establish the Connection with HTTPS in a Development Environment

When working in a non-production, development environment, a developer may choose to work in a less secure manner by eliminating the truststore and SSL mutual authentication.

The steps to establish the gfsh connection become:

  1. Acquire gfsh by downloading the correct Pivotal GemFire ZIP archive from Pivotal Network. The correct version of Pivotal GemFire to download is any patch version of the Pivotal GemFire version listed in the PCC release notes. A link to the PCC release notes is on Pivotal Network in the Release Details for your PCC version. Note that a JDK or JRE will also be required, as specified in the release notes.
  2. Unzip the Pivotal GemFire ZIP archive. gfsh is within the bin directory in the expanded Pivotal GemFire. Use gfsh with Unix or gfsh.bat with Windows.
  3. Run gfsh, and then issue a connect command that specifies an HTTPS URL of the form:

    connect --use-http=true --use-ssl --skip-ssl-validation=true
     --url=<HTTPS-gfsh-URL> --user=<cluster_operator_XXX>
     --password=<cluster_operator-password>
    

    The cluster operator user name and password are in the service key. See Create Service Keys for instructions on how to view the service key.

Using Pivotal Cloud Cache

Create Regions with gfsh

After connecting with gfsh as a cluster_operator_XXX, you can define a new cache region.

The following command creates a partitioned region with a single redundant copy:

gfsh>create region --name=my-cache-region --type=PARTITION_REDUNDANT_HEAP_LRU
     Member      | Status
---------------- | -------------------------------------------------------
cacheserver-z2-1 | Region "/my-cache-region" created on "cacheserver-z2-1"
cacheserver-z3-2 | Region "/my-cache-region" created on "cacheserver-z3-2"
cacheserver-z1-0 | Region "/my-cache-region" created on "cacheserver-z1-0"
cacheserver-z1-3 | Region "/my-cache-region" created on "cacheserver-z1-3"

See Region Design for guidelines on choosing a region type.

You can test the newly created region by writing and reading values with gfsh:

gfsh>put --region=/my-cache-region --key=test --value=thevalue
Result      : true
Key Class   : java.lang.String
Key         : test
Value Class : java.lang.String
Old Value   : NULL


gfsh>get --region=/my-cache-region --key=test
Result      : true
Key Class   : java.lang.String
Key         : test
Value Class : java.lang.String
Value       : thevalue

In practice, you should perform these get/put operations from a deployed PCF app. To do this, you must bind the service instance to these apps.

Java Build Pack Requirements

To ensure that your app can use all the features from PCC, use the latest buildpack. The buildpack is available on GitHub at cloudfoundry/java-buildpack.

Bind an App to a Service Instance

Binding your apps to a service instance enables the apps to connect to the service instance and read or write data to the region. Run cf bind-service APP-NAME SERVICE-INSTANCE-NAME to bind an app to your service instance. Replace APP-NAME with the name of the app. Replace SERVICE-INSTANCE-NAME with the name you chose for your service instance.

$ cf bind-service my-app my-cloudcache

Binding an app to the service instance provides connection information through the VCAP_SERVICES environment variable. Your app can use this information to configure components, such as the GemFire client cache, to use the service instance.

The following is a sample VCAP_SERVICES environment variable:

{
  "p-cloudcache": [
    {
      "credentials": {
    "locators": [
      "10.244.0.4[55221]",
      "10.244.1.2[55221]",
      "10.244.0.130[55221]"
    ],
    "urls": {
      "gfsh": "https://cloudcache-1.example.com/gemfire/v1",
      "pulse": "https://cloudcache-1.example.com/pulse"
    },
    "users": [
      {
        "password": "some_developer_password",
        "username": "developer_XXX"
      },
      {
        "password": "some_password",
        "username": "cluster_operator_XXX"
      }
    ]
      },
      "label": "p-cloudcache",
      "name": "test-service",
      "plan": "caching-small",
      "provider": null,
      "syslog_drain_url": null,
      "tags": [],
      "volume_mounts": []
    }
  ]
}
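As an illustration of consuming this information in an app, here is a minimal sketch that extracts the locator addresses from VCAP_SERVICES. It assumes the Jackson library is on the classpath and that the binding uses the field names shown in the sample above; how you then configure the GemFire client cache depends on the client library and version you use.

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.util.ArrayList;
import java.util.List;

public class CloudCacheBindingInfo {

    // Returns locator entries such as "10.244.0.4[55221]" from the first p-cloudcache binding.
    public static List<String> locatorsFromEnv() throws Exception {
        JsonNode credentials = new ObjectMapper()
                .readTree(System.getenv("VCAP_SERVICES"))
                .path("p-cloudcache").path(0).path("credentials");

        List<String> locators = new ArrayList<>();
        for (JsonNode locator : credentials.path("locators")) {
            locators.add(locator.asText());
        }
        return locators;
    }
}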

Use the Pulse Dashboard

You can access the Pulse dashboard for a service instance by accessing the pulse-url you obtained from a service key in a web browser.

Use either the cluster_operator_XXX or developer_XXX credentials to authenticate.

Access Service Instance Metrics

To access service metrics, you must have Enable Plan selected under Service Plan Access on the page where you configure your tile properties. (For details, see the Configure Service Plans page.)

PCC service instances output metrics to the Loggregator Firehose. You can use the Firehose plugin to view metrics output on the CF CLI directly or connect the output to any other Firehose nozzle; for example, the nozzle for Datadog.
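For example, assuming the Firehose plugin is installed in the cf CLI (the plugin's exact name and flags can vary with its version), a command along these lines streams value metrics that you can filter for the PCC metric names listed below:

$ cf nozzle --filter ValueMetric | grep serviceinstance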

PCC v1.3.x supports metrics for the whole cluster and metrics for each member. Each server and locator in the cluster outputs metrics.

Service Instance (Cluster-wide) Metrics

  • serviceinstance.MemberCount: the number of VMs in the cluster
  • serviceinstance.TotalHeapSize: the total MBs of heap available in the cluster
  • serviceinstance.UsedHeapSize: the total MBs of heap in use in the cluster

Member (per-VM) Metrics

  • member.GarbageCollectionCount: the number of JVM garbage collections that have occurred on this member since startup
  • member.CpuUsage: the percentage of CPU time used by the GemFire process
  • member.GetsAvgLatency: the average latency of GET requests to this GemFire member
  • member.PutsAvgLatency: the average latency of PUT requests to this GemFire member
  • member.JVMPauses: the number of JVM pauses that have occurred on this member since startup
  • member.FileDescriptorLimit: the number of files this member allows to be open at once
  • member.TotalFileDescriptorOpen: the number of files this member has open now
  • member.FileDescriptorRemaining: the number of files that this member could open before hitting its limit
  • member.TotalHeapSize: the number of megabytes allocated for the heap
  • member.UsedHeapSize: the number of megabytes currently in use for the heap
  • member.UnusedHeapSizePercentage: the percentage of the total heap size that is not currently being used

Access Service Broker Metrics

Service broker metrics are on by default and can be accessed through the Firehose nozzle plugin. For more information on broker metrics, see On Demand Broker Metrics.

Export gfsh Logs

You can get logs and .gfs stats files from your PCC service instances using the export logs command in gfsh.

  1. Use the Connect with gfsh over HTTPS procedure to connect to the service instance for which you want to see logs.
  2. Run export logs.
  3. Find the ZIP file in the directory where you started gfsh. This file contains a folder for each member of the cluster. The member folder contains the associated log files and stats files for that member.

For more information about the gfsh export command, see the gfsh export documentation.
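For example, to write the exported ZIP file to a specific local directory instead of the gfsh working directory (the path is illustrative):

gfsh>export logs --dir=/tmp/pcc-logs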

Deploy an App JAR File to the Servers

You can deploy or redeploy an app JAR file to the servers in the cluster.

To deploy an app JAR file after connecting within gfsh using the cluster operator credentials, do the following:

  1. Run this gfsh command to deploy the JAR file:

    deploy --jar=PATH-TO-JAR/FILENAME.jar

    For example,

     gfsh>deploy --jar=working-directory/myJar.jar 
  2. Run this command to restart the cluster and load the updated JAR file:

    cf update-service SERVICE-INSTANCE-NAME -c '{"restart": true}'

    For example,

    $ cf update-service my-service-instance -c '{"restart": true}'

To redeploy an app JAR file after connecting within gfsh using the cluster operator role, do the following:

  1. Run this gfsh command to remove the existing JAR file:

    undeploy --jar=PATH-TO-JAR/FILENAME.jar

    For example,

     gfsh>undeploy --jar=current-jars/myJar.jar 
  2. Run this gfsh command to deploy the updated JAR file:

    gfsh>deploy --jar=PATH-TO-UPDATED-JAR/FILENAME.jar

    For example,

    gfsh>deploy --jar=newer-jars/myJar.jar
  3. Run this command to restart the cluster and load the updated JAR file:

    cf update-service SERVICE-INSTANCE-NAME -c '{"restart": true}'

    For example,

    $ cf update-service my-service-instance -c '{"restart": true}'

Connecting a Spring Boot App to Pivotal Cloud Cache with Session State Caching

This section describes the two ways in which you can connect a Spring Boot app to PCC:

  • Using a Tomcat app with a WAR file. This is the default method for Tomcat apps.
  • Using the spring-session-data-gemfire library. This method requires that you use the correct version of these libraries.

Use the Tomcat App

In PCC v1.1 and later, to get a Spring Boot app running with session state caching (SSC) on PCC, you must create a WAR file using the spring-boot-starter-tomcat plugin instead of the spring-boot-maven plugin to create a JAR file.

For example, if you want your app to use SSC, you cannot use spring-boot-maven to build a JAR file and push your app to PCF, because the Java buildpack does not pull in the necessary JAR files for SSC when it detects a Spring JAR file.

To build your WAR file, add this dependency to your pom.xml:

<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-tomcat</artifactId>
  <scope>provided</scope>
</dependency>

For a full example of running a Spring Boot app that connects with SSC, run this app and use the following for your pom.xml:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>

  <groupId>io.pivotal.gemfire.demo</groupId>
  <artifactId>HttpSessionCaching-Webapp</artifactId>
  <version>0.0.1-SNAPSHOT</version>
  <packaging>war</packaging>

  <name>HttpSessionCaching-Webapp</name>
  <description>Demo project for GemFire Http Session State caching</description>

  <parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>1.5.3.RELEASE</version>
    <relativePath/> <!-- lookup parent from repository -->
  </parent>

  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
    <java.version>1.8</java.version>
  </properties>

  <dependencies>
    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-thymeleaf</artifactId>
    </dependency>

    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-tomcat</artifactId>
      <scope>provided</scope>
    </dependency>

    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-test</artifactId>
      <scope>test</scope>
    </dependency>
  </dependencies>

</project>

Use a Spring Session Data GemFire App

You can connect your Spring app to PCC to do session state caching. Use the correct version of the spring-session-data-gemfire library; apps built for PCC v1.3.0 and later versions are compatible with Spring Session Data GemFire v2.0.0.M2 and later versions.

Upgrade PCC and Spring Session Data GemFire

  1. Before your operator upgrades PCC, stop your app. This prevents the app from breaking during the upgrade process.

  2. Upgrade PCC. See Upgrading Pivotal Cloud Cache for details.

  3. Rebuild your app using a build.gradle file that depends on the correct version of Spring Session Data GemFire. Here is an example build.gradle file:

    version = '0.0.1-SNAPSHOT'
    
    buildscript {
      ext {
        springBootVersion = '2.0.0.M3'
      }
      repositories {
        mavenCentral()
        maven { url "https://repo.spring.io/snapshot" }
        maven { url "https://repo.spring.io/milestone" }
      }
      dependencies {
        classpath("org.springframework.boot:spring-boot-gradle-plugin:${springBootVersion}")
      }
    }
    
    apply plugin: 'java'
    apply plugin: 'org.springframework.boot'
    apply plugin: 'idea'
    
    idea{
      module{
        downloadSources = true
        downloadJavadoc = true
      }
    }
    
    sourceCompatibility = 1.8
    targetCompatibility = 1.8
    
    repositories {
      mavenCentral()
      maven { url "https://repo.spring.io/libs-milestone" }
      maven { url "https://repo.spring.io/milestone" }
      maven { url "http://repo.springsource.org/simple/ext-release-local" }
      maven { url "http://repo.spring.io/libs-release/" }
      maven { url "https://repository.apache.org/content/repositories/snapshots" }
    }
    
    dependencies {
      compile("org.springframework.boot:spring-boot-starter-web:2.0.0.M3")
      compile("org.springframework.session:spring-session-data-gemfire:2.0.0.M2")
      compile("io.pivotal.spring.cloud:spring-cloud-gemfire-spring-connector:1.0.0.RELEASE")
      compile("io.pivotal.spring.cloud:spring-cloud-gemfire-cloudfoundry-connector:1.0.0.RELEASE")
    }
    
  4. Clear the session state region.

  5. Start the rebuilt app.

Creating Continuous Queries Using Spring Data GemFire

To create continuous queries with the Spring Data GemFire library, you must have the following:

  • Spring Data GemFire v2.0.1 release
  • Spring Boot v2.0.0+

To create continuous queries, do the following:

  • Specify the subscriptionEnabled and readyForEvents attributes on the @ClientCacheApplication annotation. Apply this annotation to the Spring Boot client application class:

    @ClientCacheApplication(name = "GemFireSpringApplication", readyForEvents = true,
      subscriptionEnabled = true)
    

    The annotation for a durable event queue for continuous queries also sets the durableClientId and keepAlive attributes. For example:

    @ClientCacheApplication(name = "GemFireSpringApplication",
      durableClientId = "durable-client-id", keepAlive = true,
      readyForEvents = true, subscriptionEnabled = true)
    
  • Annotate the method that handles the events to specify the query. To make the event queue durable across server failures and restarts, include the durable = true attribute in the annotation, as is done in the example:

    @Component
    public class ContinuousQuery {
    
        @ContinuousQuery(name = "yourQuery",
           query = "SELECT * FROM /yourRegion WHERE someAttribute == true",
           durable = true)
        public void handleChanges(CqEvent event) {
          //PERFORM SOME ACTION
        }
    }
    

    The class that contains the method with the @ContinuousQuery annotation must have the @Component annotation, such that the continuous query is wired up correctly for the server.

For more information, see the Spring Data GemFire documentation.
