Using gfsh

gfsh Command Restrictions

Developers may invoke all gfsh commands, and given credentials with sufficient permissions, those gfsh commands will be executed. However, not all gfsh commands are supported. Invoking an unsupported command can produce incorrect results, ranging from ineffective operations to inconsistent region entries. Do not use the gfsh commands listed here; each is accompanied by an explanation of why it must not be used.

These gfsh start commands bring up members that are not part of the configured plan. Their configuration will be wrong, and their existence is likely to contribute to data loss. Because they are not part of the configured plan, any upgrade will not include them, and if they stop or crash, the BOSH Director will not restart them.

  • gfsh start locator
  • gfsh start server

These cluster stop commands will temporarily stop the member or cluster. However, the BOSH Director will notice that members are not running and restart them. So, these commands will be ineffective:

  • gfsh stop locator
  • gfsh stop server
  • gfsh shutdown

These Lucene-related commands are not supported:

  • gfsh create lucene index
  • gfsh describe lucene index
  • gfsh destroy lucene index
  • gfsh list lucene indexes
  • gfsh search lucene

These JNDI binding-related commands are not supported:

  • gfsh create jndi-binding
  • gfsh describe jndi-binding
  • gfsh destroy jndi-binding
  • gfsh list jndi-binding

This configure command introduces configuration that conflicts with the already-configured plan. Since it is not part of the configured plan, any upgrade will not include it. Therefore, do not use:

  • gfsh configure pdx

Creating a gateway receiver is never appropriate. The PCC cluster will already have gateway receivers, and there is no situation in which the cluster can benefit from creating more. Therefore, do not use:

  • gfsh create gateway-receiver

Do Not Export from a GemFire Cluster to a PCC Cluster

While the expectation is that configuration and data can be exported from a GemFire cluster and then imported into a PCC cluster, this does not work. The export and import commands will not accomplish a migration from one cluster to another. Importing a cluster configuration requires a state that a PCC cluster cannot provide: the PCC cluster will already have its configuration, and upon restart or upgrade, that same configuration will be used again. Given that the configuration cannot be imported, data import is also problematic. Therefore, do not use:

  • gfsh import cluster-configuration
  • gfsh import data

Note: The restriction here on the use of the gfsh import data command does not apply to the procedure for migrating from an existing PCC cluster that does not use TLS encryption to a PCC cluster that does. See Migrating to a TLS-Enabled Cluster for that procedure.

Create Regions

After connecting via gfsh with a role that is able to manage a cluster’s data, you can define a new cache region.

The following command creates a partitioned region with a single redundant copy:

gfsh>create region --name=my-cache-region --type=PARTITION_REDUNDANT_HEAP_LRU
     Member      | Status
---------------- | -------------------------------------------------------
cacheserver-z2-1 | Region "/my-cache-region" created on "cacheserver-z2-1"
cacheserver-z3-2 | Region "/my-cache-region" created on "cacheserver-z3-2"
cacheserver-z1-0 | Region "/my-cache-region" created on "cacheserver-z1-0"
cacheserver-z1-3 | Region "/my-cache-region" created on "cacheserver-z1-3"

See Region Design for guidelines on choosing a region type.

You can test the newly created region by writing and reading values with gfsh:

gfsh>put --region=/my-cache-region --key=test --value=thevalue
Result      : true
Key Class   : java.lang.String
Key         : test
Value Class : java.lang.String
Old Value   : NULL


gfsh>get --region=/my-cache-region --key=test
Result      : true
Key Class   : java.lang.String
Key         : test
Value Class : java.lang.String
Value       : thevalue

In practice, you should perform these get/put operations from a deployed PCF app. To do this, you must bind the service instance to the app.
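For example, a minimal sketch of binding a service instance to an app with the cf CLI, where my-app and my-service-instance are hypothetical names:

$ cf bind-service my-app my-service-instance
$ cf restage my-app

After restaging, the app can read the service instance’s connection information from its VCAP_SERVICES environment variable.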

Working with Disk Stores

Persistent regions and regions that overflow upon eviction use disk stores. Use gfsh to create or destroy a disk store.

Create a Disk Store

To create a disk store for use with a persistent or overflow type of region:

  1. Use the directions in Connect with gfsh over HTTPS to connect to the PCC service instance with a role that is able to manage a cluster’s data.

  2. Create the disk store with a gfsh command of the form:

    create disk-store --name=<name-of-disk-store> --dir=<relative/path/to/diskstore/directory>

    Specify a relative path for the disk store location. That relative path will be created within /var/vcap/store/gemfire-server/. For details on other options, see the Pivotal GemFire create disk-store Command Reference Page.
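    For illustration, this hypothetical invocation creates a disk store named my-disk-store whose files reside under /var/vcap/store/gemfire-server/my-disk-dir:

    create disk-store --name=my-disk-store --dir=my-disk-dir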

Destroy a Disk Store

To destroy a disk store:

  1. Use the directions in Connect with gfsh over HTTPS to connect to the PCC service instance with a role that is able to manage a cluster’s data.

  2. Destroy the disk store with a gfsh command of the form:

    destroy disk-store --name=<name-of-disk-store>

    For details on other options, see the Pivotal GemFire destroy disk-store Command Reference Page.
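    For illustration, this hypothetical invocation destroys the disk store created in the earlier example:

    destroy disk-store --name=my-disk-store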

Use the Pulse Dashboard

You can access the Pulse dashboard for a service instance in a web browser using the Pulse URL specified in the service key you created following the directions in Create Service Keys.
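For example, assuming a service instance named my-service-instance and a service key named my-service-key (both hypothetical names), this cf CLI command displays the key, which includes the Pulse URL:

$ cf service-key my-service-instance my-service-key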

Access Service Instance Metrics

To access service metrics, you must have Enable Plan selected under Service Plan Access on the page where you configure your tile properties. (For details, see the Configure Service Plans page.)

PCC service instances output metrics to the Loggregator Firehose. You can use the Firehose plugin to view metrics output on the cf CLI directly. For more information about using the Firehose plugin, see Installing the Loggregator Firehose Plugin for CLI.
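For example, a sketch of streaming metric events with the plugin, assuming your plugin version supports the --filter option and your user has access to the Firehose:

$ cf nozzle --filter ValueMetric

You can pipe this output through grep to isolate the metrics listed below.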

You can also connect the output to a Firehose nozzle. Nozzles are programs that consume data from the Loggregator Firehose. For example, to connect the output to the nozzle for Datadog, see Datadog Firehose Nozzle on GitHub.

PCC supports metrics for the whole cluster and metrics for each member. Each server and locator in the cluster outputs metrics.

Service Instance (Cluster-wide) Metrics

  • serviceinstance.MemberCount: the number of VMs in the cluster
  • serviceinstance.TotalHeapSize: the total megabytes of heap available in the cluster
  • serviceinstance.UsedHeapSize: the total megabytes of heap in use in the cluster

Member (per-VM) Metrics

  • member.GarbageCollectionCount: the number of JVM garbage collections that have occurred on this member since startup
  • member.CpuUsage: the percentage of CPU time used by the GemFire process
  • member.GetsAvgLatency: the average latency of GET requests to this GemFire member
  • member.PutsAvgLatency: the average latency of PUT requests to this GemFire member
  • member.JVMPauses: the number of JVM pauses that have occurred on this member since startup
  • member.FileDescriptorLimit: the number of files this member allows to be open at once
  • member.TotalFileDescriptorOpen: the number of files this member has open now
  • member.FileDescriptorRemaining: the number of files that this member could open before hitting its limit
  • member.TotalHeapSize: the number of megabytes allocated for the heap
  • member.UsedHeapSize: the number of megabytes currently in use for the heap
  • member.UnusedHeapSizePercentage: the percentage of the total heap size that is not currently being used
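Building on the cf nozzle sketch above (same assumptions about plugin version and Firehose access), you could isolate a single member metric with grep:

$ cf nozzle --filter ValueMetric | grep member.UsedHeapSize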

Access Service Broker Metrics

Service broker metrics are on by default and can be accessed through the Loggregator Firehose CLI plugin. For more information, see Installing the Loggregator Firehose Plugin for CLI. For more information about broker metrics, see On Demand Broker Metrics.

Export gfsh Logs

You can get logs and .gfs stats files from your PCC service instances using the export logs command in gfsh.

  1. Use the Connect with gfsh over HTTPS procedure to connect to the service instance for which you want to see logs. Use a role that can read cluster information.
  2. Run export logs. If you see a message of the form

    Estimated exported logs expanded file size = 289927115, file-size-limit = 104857600. To disable exported logs file size check use option "--file-size-limit=0".
    then your logs exceed an expanded size of 100 megabytes and were not exported. To export them in this case, use the gfsh command:

    export logs --file-size-limit=0

  3. Find the ZIP file in the directory where you started gfsh. This file contains a folder for each member of the cluster. The member folder contains the associated log files and stats files for that member.

For more information about the gfsh export command, see the gfsh export documentation.

Deploy an App JAR File to the Servers

You can deploy or redeploy an app JAR file to the servers in the cluster.

To do an initial deploy of an app JAR file after connecting within gfsh using credentials that can manage the cluster, run this gfsh command:

deploy --jar=PATH-TO-JAR/FILENAME.jar

For example,

gfsh>deploy --jar=working-directory/myJar.jar

To redeploy an app JAR file after connecting within gfsh using credentials that can manage the cluster, do the following:

  1. Run this gfsh command to deploy the updated JAR file:

    deploy --jar=PATH-TO-UPDATED-JAR/FILENAME.jar

    For example,

    gfsh>deploy --jar=newer-jars/myJar.jar
  2. Run this command to restart the cluster and load the updated JAR file:

    cf update-service SERVICE-INSTANCE-NAME -c '{"restart": true}'

    For example,

    $ cf update-service my-service-instance -c '{"restart": true}'

Use the GemFire-Greenplum Connector

The GemFire-Greenplum connector permits transferring a PCC region to a Greenplum database table, or transferring a Greenplum database table into a PCC region. gfsh commands set up the configuration and initiate the transfers. See the GemFire-Greenplum Connector documentation for details.

Connect in gfsh with a role that can both manage the cluster and read and write cluster data.

Scripting Common Operations

Gfsh commands may be placed into files, providing a way to script common maintenance operations. Scripting operations reduces errors that might occur when manually entering commands, and can be especially useful for running test cases and in deployment automation.

Running a gfsh Script

The gfsh run command invokes the gfsh commands that are in a file. Start up the gfsh command line interface first, and then invoke the command with:

gfsh>run --file=thecommands.gfsh

To see the gfsh run options, invoke:

gfsh>help run

The file specification can be a relative or absolute path and file name. Scripted commands are not interactive; any command that would have prompted for input will instead use default values.

To avoid placing cleartext passwords within a script, run gfsh and connect to the cluster before invoking the script with the gfsh run command. The password will appear on the gfsh command line, but it will not appear in the file that logs and captures gfsh commands.
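For example, a hypothetical session that connects interactively and then runs a script; the URL, credentials, and file name are placeholders:

gfsh>connect --use-http=true --url=https://cloudcache-example.com/gemfire/v1 --user=cluster_operator --password=mypassword
gfsh>run --file=create-regions.gfsh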

Example Scripts

Tightly focused scripts ease cluster maintenance. Most of these scripts perform cluster management operations, so connect to the cluster with a role that is able to manage both the cluster and the data.

A common operation is creating the regions hosted on the servers. This example gfsh script creates two regions:

create region --name=sessions --type=PARTITION_REDUNDANT
create region --name=customers --type=REPLICATE

This example gfsh script deploys an app JAR file to the cluster servers:

deploy --jar=/path/to/app-classes.jar

This example gfsh script creates the disk store needed for persistent regions:

create disk-store --name=all-regions-disk --dir=regions
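
A persistent region can then reference this disk store by name. As a sketch, with a hypothetical region name:

create region --name=orders --type=PARTITION_REDUNDANT_PERSISTENT --disk-store=all-regions-disk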