Pivotal Cloud Cache for PCF v1.6

Using Pivotal Cloud Cache

Create Regions with gfsh

After connecting with gfsh as a cluster_operator_XXX, you can define a new cache region.
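
If you have not yet connected, the connection command takes the following form; the URL comes from the gfsh entry in your service key, and the user and password are the cluster operator credentials (see Connect with gfsh over HTTPS):

gfsh>connect --use-http=true --url=https://cloudcache-1.example.com/gemfire/v1 --user=cluster_operator_XXX --password=<password>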

The following command creates a partitioned region with a single redundant copy:

gfsh>create region --name=my-cache-region --type=PARTITION_REDUNDANT_HEAP_LRU
     Member      | Status
---------------- | -------------------------------------------------------
cacheserver-z2-1 | Region "/my-cache-region" created on "cacheserver-z2-1"
cacheserver-z3-2 | Region "/my-cache-region" created on "cacheserver-z3-2"
cacheserver-z1-0 | Region "/my-cache-region" created on "cacheserver-z1-0"
cacheserver-z1-3 | Region "/my-cache-region" created on "cacheserver-z1-3"

See Region Design for guidelines on choosing a region type.
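
You can confirm that the region now exists on the cluster with the gfsh list regions command; the output should include my-cache-region:

gfsh>list regions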

You can test the newly created region by writing and reading values with gfsh:

gfsh>put --region=/my-cache-region --key=test --value=thevalue
Result      : true
Key Class   : java.lang.String
Key         : test
Value Class : java.lang.String
Old Value   : NULL


gfsh>get --region=/my-cache-region --key=test
Result      : true
Key Class   : java.lang.String
Key         : test
Value Class : java.lang.String
Value       : thevalue

In practice, you should perform these get and put operations from a deployed PCF app. To do this, you must first bind the service instance to the app, as described in Bind an App to a Service Instance below.

Working with Disk Stores

Persistent regions and regions that overflow upon eviction use disk stores. Use gfsh to create or destroy a disk store.

Create a Disk Store

To create a disk store for use with a persistent or overflow type of region:

  1. Use the directions in Connect with gfsh over HTTPS to connect to the PCC service instance using the cluster operator credentials.

  2. Create the disk store with a gfsh command of the form:

    create disk-store --name=<name-of-disk-store> --dir=<relative/path/to/diskstore/directory>
    

    Specify a relative path for the disk store location. That relative path is created within /var/vcap/store/gemfire-server/. For details on the other options, see the Pivotal GemFire create disk-store Command Reference Page.
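
    For example, the following commands create a disk store and then a region that uses it; the disk store and region names here are hypothetical, and PARTITION_PERSISTENT is one of the persistent region types:

    gfsh>create disk-store --name=myDiskStore --dir=mydisks
    gfsh>create region --name=my-persistent-region --type=PARTITION_PERSISTENT --disk-store=myDiskStore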

Destroy a Disk Store

To destroy a disk store:

  1. Use the directions in Connect with gfsh over HTTPS to connect to the PCC service instance using the cluster operator credentials.

  2. Destroy the disk store with a gfsh command of the form:

    destroy disk-store --name=<name-of-disk-store>
    

    For details on the other options, see the Pivotal GemFire destroy disk-store Command Reference Page.
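
    For example, to destroy the hypothetical disk store created above (if a region still references the disk store, destroy or alter that region first):

    gfsh>destroy disk-store --name=myDiskStore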

Java Buildpack Requirements

To ensure that your app can use all the features of PCC, use the latest Java buildpack. The buildpack is available on GitHub at cloudfoundry/java-buildpack.
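
For example, you can point an app at the latest buildpack explicitly when pushing it; the app name here is illustrative:

$ cf push my-app -b https://github.com/cloudfoundry/java-buildpack.git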

Bind an App to a Service Instance

Binding your apps to a service instance enables the apps to connect to the service instance and read or write data to the region. Run cf bind-service APP-NAME SERVICE-INSTANCE-NAME to bind an app to your service instance. Replace APP-NAME with the name of the app. Replace SERVICE-INSTANCE-NAME with the name you chose for your service instance.

$ cf bind-service my-app my-cloudcache

Binding an app to the service instance provides connection information through the VCAP_SERVICES environment variable. Your app can use this information to configure components, such as the GemFire client cache, to use the service instance.
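
You can view the VCAP_SERVICES environment variable for a bound app with the cf CLI:

$ cf env my-app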

The following is a sample VCAP_SERVICES environment variable:

{
  "p-cloudcache": [
    {
      "credentials": {
        "locators": [
          "10.244.0.4[55221]",
          "10.244.1.2[55221]",
          "10.244.0.130[55221]"
        ],
        "urls": {
          "gfsh": "https://cloudcache-1.example.com/gemfire/v1",
          "pulse": "https://cloudcache-1.example.com/pulse"
        },
        "users": [
          {
            "password": "some_developer_password",
            "username": "developer_XXX"
          },
          {
            "password": "some_password",
            "username": "cluster_operator_XXX"
          }
        ]
      },
      "label": "p-cloudcache",
      "name": "test-service",
      "plan": "caching-small",
      "provider": null,
      "syslog_drain_url": null,
      "tags": [],
      "volume_mounts": []
    }
  ]
}

Use the Pulse Dashboard

You can access the Pulse dashboard for a service instance by opening, in a web browser, the pulse URL obtained from a service key.

Use either the cluster_operator_XXX or developer_XXX credentials to authenticate.

Access Service Instance Metrics

To access service metrics, you must have Enable Plan selected under Service Plan Access on the page where you configure your tile properties. (For details, see the Configure Service Plans page.)

PCC service instances output metrics to the Loggregator Firehose. You can use the Firehose plugin to view metrics output on the cf CLI directly or connect the output to any other Firehose nozzle; for example, the nozzle for Datadog.

PCC supports metrics for the whole cluster and metrics for each member. Each server and locator in the cluster outputs metrics.
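
For example, assuming the Firehose plugin is installed, a command similar to the following streams the ValueMetric events that carry these metrics (the exact flags depend on your plugin version):

$ cf nozzle --filter ValueMetric | grep serviceinstance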

Service Instance (Cluster-wide) Metrics

  • serviceinstance.MemberCount: the number of VMs in the cluster
  • serviceinstance.TotalHeapSize: the total megabytes of heap available in the cluster
  • serviceinstance.UsedHeapSize: the total megabytes of heap in use in the cluster

Member (per-VM) Metrics

  • member.GarbageCollectionCount: the number of JVM garbage collections that have occurred on this member since startup
  • member.CpuUsage: the percentage of CPU time used by the GemFire process
  • member.GetsAvgLatency: the average latency of get requests to this GemFire member
  • member.PutsAvgLatency: the average latency of put requests to this GemFire member
  • member.JVMPauses: the number of JVM pauses that have occurred on this member since startup
  • member.FileDescriptorLimit: the number of files this member allows to be open at once
  • member.TotalFileDescriptorOpen: the number of files this member has open now
  • member.FileDescriptorRemaining: the number of files that this member could open before hitting its limit
  • member.TotalHeapSize: the number of megabytes allocated for the heap
  • member.UsedHeapSize: the number of megabytes currently in use for the heap
  • member.UnusedHeapSizePercentage: the percentage of the total heap size that is not currently being used

Access Service Broker Metrics

Service broker metrics are on by default and can be accessed through the Firehose nozzle plugin. For more information on broker metrics, see On Demand Broker Metrics.

Export gfsh Logs

You can get logs and .gfs stats files from your PCC service instances using the export logs command in gfsh.

  1. Use the Connect with gfsh over HTTPS procedure to connect to the service instance for which you want to see logs.
  2. Run export logs, as in the example following this list.
  3. Find the ZIP file in the directory where you started gfsh. This file contains a folder for each member of the cluster. The member folder contains the associated log files and stats files for that member.
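
For example, options described in the gfsh export documentation, such as --dir and --log-level, let you narrow the export; the directory here is hypothetical:

gfsh>export logs --dir=/tmp/pcc-logs --log-level=warning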

For more information about the gfsh export command, see the gfsh export documentation.

Deploy an App JAR File to the Servers

You can deploy or redeploy an app JAR file to the servers in the cluster.

To do an initial deploy of an app JAR file after connecting within gfsh using the cluster operator credentials, run this gfsh command:

deploy --jar=PATH-TO-JAR/FILENAME.jar

For example,

gfsh>deploy --jar=working-directory/myJar.jar

To redeploy an app JAR file after connecting within gfsh using the cluster operator role, do the following:

  1. Run this gfsh command to deploy the updated JAR file:

    deploy --jar=PATH-TO-UPDATED-JAR/FILENAME.jar

    For example,

    gfsh>deploy --jar=newer-jars/myJar.jar
  2. Run this command to restart the cluster and load the updated JAR file:

    cf update-service SERVICE-INSTANCE-NAME -c '{"restart": true}'

    For example,

    $ cf update-service my-service-instance -c '{"restart": true}'
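
After reconnecting with gfsh, you can verify the deployment with the gfsh list deployed command, which lists the JAR files deployed on each member:

gfsh>list deployed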

Use the GemFire-Greenplum Connector

The GemFire-Greenplum connector permits the transfer of a PCC region out to a Greenplum database table or the transfer of a Greenplum database table into a PCC region. gfsh commands set up the configuration and initiate transfers. See the GemFire-Greenplum Connector documentation for details.

Connect within gfsh using the cluster operator credentials to have the permissions necessary to use the connector.
