Pivotal Cloud Cache
Pivotal Cloud Cache (PCC) is a high-performance, high-availability caching layer for Pivotal Cloud Foundry (PCF). PCC offers an in-memory key-value store that delivers low-latency responses to a large number of concurrent data-access requests.
PCC provides a service broker to create in-memory data clusters on demand. These clusters are dedicated to the PCF space and tuned for specific use cases defined by your service plan. Service operators can create multiple plans to support different use cases.
PCC uses Pivotal GemFire. The Pivotal GemFire API Documentation details the API for client access to data objects within Pivotal GemFire.
This documentation does the following:
- Describes the features and architecture of PCC
- Provides PCF operators with instructions for installing, configuring, and maintaining PCC
- Provides app developers with instructions for choosing a service plan and for creating and deleting PCC service instances
- Provides app developers with instructions for binding apps
The following table provides version and version-support information about PCC:
| Element | Details |
| --- | --- |
| Release date | February 11, 2019 |
| Software component version | Pivotal GemFire v9.3.3 (limited-availability v1.3.2 runs Pivotal GemFire v9.2.0) |
| Compatible Ops Manager version(s) | v1.12.x and v2.0.x (for limited-availability v1.3.2: v1.12.x, v2.0.x, v2.1.x, and v2.2.x) |
| Compatible Elastic Runtime version(s) | v1.12.x (for limited-availability v1.3.2: v1.12.x) |
| Compatible Pivotal Application Service (PAS)* version(s) | v2.0.x (for limited-availability v1.3.2: v2.0.x, v2.1.x, and v2.2.x) |
| IaaS support | AWS, Azure, GCP, OpenStack, and vSphere |
| Required BOSH stemcell version | 3586 (limited-availability v1.3.2 requires 3468) |
| Minimum Java buildpack version required for apps | v3.13 |
* As of PCF v2.0, Elastic Runtime is renamed Pivotal Application Service (PAS).
Some PCF services offer on-demand service plans. These plans let developers provision service instances when they want.
These contrast with the more common pre-provisioned service plans, which require operators to provision the service instances during installation and configuration through the service tile UI.
The following PCF services offer on-demand service plans:
- MySQL for PCF v2.0 and later
- RabbitMQ for PCF
- Redis for PCF
- Pivotal Cloud Cache (PCC)
These services package and deliver their on-demand service offerings differently. For example, some services, like Redis for PCF, have one tile, and you configure the tile differently depending on whether you want on-demand service plans or pre-provisioned service plans.
For other services, like PCC and MySQL for PCF, only on-demand service plans are available.
The following table lists and contrasts the different ways that PCF services package on-demand and pre-provisioned service offerings.
| PCF service tile | Standalone product related to the service | Versions supporting on demand | Versions supporting pre-provisioned |
| --- | --- | --- | --- |
| RabbitMQ for PCF | Pivotal RabbitMQ | v1.8 and later | All versions |
| Redis for PCF | Redis | v1.8 and later | All versions |
| MySQL for PCF | MySQL | v2.x | |
| PCC | Pivotal GemFire | All versions | NA |
Pivotal GemFire is the data store within Pivotal Cloud Cache (PCC). A small amount of administrative GemFire setup is required for a PCC service instance, and any app will use a limited portion of the GemFire API.
The PCC architectural model is a client-server model. The clients are apps or microservices, and the servers are a set of GemFire servers maintained by a PCC service instance. The GemFire servers provide a low-latency, consistent, fault-tolerant data store within PCC.
GemFire holds data in key/value pairs. Each pair is called an entry. Entries are logically grouped into sets called regions. A region is a map (or dictionary) data structure.
The app (client) uses PCC as a cache. A cache lookup (read) is a get operation on a GemFire region, and a cache write is a put operation on a GemFire region.
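As a conceptual sketch only (this is not the GemFire client API; the class and names below are illustrative), a region behaves like a map, and the app reads and writes the cache through get and put operations:

```python
# Conceptual sketch of a PCC region as a named key/value map.
# This models the semantics only; a real client app uses the GemFire API.

class Region:
    """A region: a logically grouped set of key/value entries."""

    def __init__(self, name):
        self.name = name
        self._entries = {}

    def get(self, key):
        # Cache lookup (read): returns None on a cache miss.
        return self._entries.get(key)

    def put(self, key, value):
        # Cache write: creates or updates the entry for this key.
        self._entries[key] = value

customers = Region("customers")
customers.put("c42", {"name": "Ada"})
assert customers.get("c42") == {"name": "Ada"}   # cache hit
assert customers.get("c99") is None              # cache miss
```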
The GemFire command-line interface, called `gfsh`, facilitates region administration. Use `gfsh` to create and destroy regions within the PCC service instance.
PCC deploys cache clusters that use Pivotal GemFire to provide high availability, replication guarantees, and eventual consistency.
When you first spin up a cluster, you have three locators and at least four servers.
When you scale the cluster up, you have more servers, increasing the capacity of the cache. There are always three locators.
When a client connects to the cluster, it first connects to a locator. The locator replies with the IP address of a server for it to talk to. The client then connects to that server.
When the client wants to read or write data, it sends a request directly to the server.
If the server doesn’t have the data locally, it fetches it from another server.
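The connection and read path above can be sketched as a small simulation (the class and variable names are hypothetical; the real protocol is internal to GemFire clients and servers):

```python
# Minimal simulation of the PCC client connection and read path:
# 1. the client asks a locator for a server address,
# 2. the client connects to that server,
# 3. the server answers from local data or fetches from another server.

class Server:
    def __init__(self, address, data=None):
        self.address = address
        self.data = data or {}
        self.peers = []

    def read(self, key):
        if key in self.data:          # data held locally
            return self.data[key]
        for peer in self.peers:       # otherwise fetch from another server
            if key in peer.data:
                return peer.data[key]
        return None

class Locator:
    def __init__(self, servers):
        self.servers = servers

    def pick_server(self):
        # Reply with the address of a server for the client to talk to.
        return self.servers[0]

s1 = Server("10.0.8.4")
s2 = Server("10.0.8.5", data={"k": "v"})
s1.peers = [s2]

locator = Locator([s1, s2])
server = locator.pick_server()   # the client first contacts a locator
assert server.read("k") == "v"   # s1 lacks "k" locally and fetches it from s2
```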
(Diagram: the workflow for the PCF admin setting up a PCC service plan.)
This section describes networking considerations for Pivotal Cloud Cache.
When you deploy PCF, you must create a statically defined network to host the component virtual machines that constitute the PCF infrastructure.
PCF components, like the Cloud Controller and UAA, run on this infrastructure network. In PCF v2.0 and earlier, on-demand PCF services require that you host them on a network that runs separately from this network.
Cloud operators pre-provision service instances from Ops Manager. Then, for each service, Ops Manager allocates and recovers static IP addresses from a pre-defined block of addresses.
To enable on-demand services in PCF v2.0 and earlier, operators must create a service network in BOSH Director and select the Service Network checkbox. Operators can then select the service network to host on-demand service instances when they configure the tile for that service.
On-demand PCF services rely on the BOSH 2.0 ability to dynamically deploy VMs in a dedicated network. The on-demand service broker uses this capability to create single-tenant service instances in a dedicated service network.
On-demand services use the dynamically-provisioned service network to host the single-tenant worker VMs that run as service instances within development spaces. This architecture lets developers provision IaaS resources for their service instances at creation time, rather than the operator pre-provisioning a fixed quantity of IaaS resources when they deploy the service broker.
By making services single-tenant, where each instance runs on a dedicated VM rather than sharing VMs with unrelated processes, on-demand services eliminate the “noisy neighbor” problem when one app hogs resources on a shared cluster. Single-tenant services can also support regulatory compliance where sensitive data must be compartmentalized across separate machines.
An on-demand service splits its operations between the default network and the service network. Shared components of the service, such as executive controllers and databases, run centrally on the default network along with the Cloud Controller, UAA, and other PCF components. The worker pool deployed to specific spaces runs on the service network.
The diagram below shows worker VMs in an on-demand service instance running on a separate services network, while other components run on the default network.
Before deploying a service tile that uses the on-demand service broker (ODB), request the needed network connections to allow components of Pivotal Cloud Foundry (PCF) to communicate with ODB.
The specifics of how to open those connections vary for each IaaS.
See the following table for key components and their responsibilities in an on-demand architecture.
| Key Components | Their Responsibilities |
| --- | --- |
| BOSH Director | Creates and updates service instances as instructed by ODB. |
| BOSH Agent | BOSH includes an agent on every VM that it deploys. The agent listens for instructions from the BOSH Director and carries out those instructions. The agent receives job specifications from the BOSH Director and uses them to assign a role, or job, to the VM. |
| BOSH UAA | Issues OAuth2 tokens for clients to use when they act on behalf of BOSH users. |
| PAS | Contains the apps that are consuming services. |
| ODB | Instructs BOSH to create and update services, and connects to services to create bindings. |
| Deployed service instance | Runs the given data service. For example, the deployed Redis for PCF service instance runs the Redis for PCF data service. |
Regardless of the specific network layout, the operator must ensure network rules are set up so that connections are open as described in the table below.
| This component… | Must communicate with… | Default TCP Port | Communication direction(s) | Notes |
| --- | --- | --- | --- | --- |
|  |  |  | One-way | The default ports are not configurable. |
| ODB | Deployed service instances | Specific to the service (such as RabbitMQ for PCF). May be one or more ports. | One-way | This connection is for administrative tasks. Avoid opening general-use, app-specific ports for this connection. |
| ODB | PAS (or Elastic Runtime) | 8443 | One-way | The default port is not configurable. |
|  |  |  | One-way | The default port is not configurable. |
| BOSH Agent | BOSH Director | 4222 | Two-way | The BOSH Agent runs on every VM in the system, including the BOSH Director VM. The BOSH Agent initiates the connection with the BOSH Director. The default port is not configurable. |
| Deployed apps on PAS (or Elastic Runtime) | Deployed service instances | Specific to the service. May be one or more ports. | One-way | This connection is for general-use, app-specific tasks. Avoid opening administrative ports for this connection. |
| PAS (or Elastic Runtime) | ODB | 8080 | One-way | This port may be different for individual services. This port may also be configurable by the operator if allowed by the tile developer. |
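The fixed-port rules in the table can be captured as data and checked mechanically. The sketch below uses assumed names; the actual rule setup is IaaS-specific firewall configuration:

```python
# The fixed-port rules from the table, expressed as (source, destination, port).
# Service-specific rows are omitted because their ports vary per service.
REQUIRED_RULES = {
    ("ODB", "PAS", 8443),
    ("BOSH Agent", "BOSH Director", 4222),
    ("PAS", "ODB", 8080),
}

def is_allowed(source, destination, port):
    """Return True if the table requires this connection to be open."""
    if (source, destination, port) in REQUIRED_RULES:
        return True
    # Only the BOSH Agent <-> BOSH Director connection (4222) is two-way.
    return (destination, source, port) in REQUIRED_RULES and port == 4222

assert is_allowed("PAS", "ODB", 8080)
assert is_allowed("BOSH Director", "BOSH Agent", 4222)   # two-way
assert not is_allowed("PAS", "ODB", 9000)
```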
PCC service instances running within distinct PCF foundations may communicate with each other across a WAN. In such a topology, the members within one service instance use their own private address space, as defined in RFC 1918.
A VPN may be used to connect the private network spaces that lie across the WAN. The steps required to enable connectivity through the VPN depend on the IaaS provider(s).
The private address space for each service instance's network must be configured with non-overlapping CIDR blocks. Configure the network prior to creating service instances. Directions for creating a network on each IaaS provider are in the section titled Architecture and Installation Overview.
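The non-overlap requirement can be verified before creating the networks. A minimal sketch using Python's standard `ipaddress` module, with hypothetical CIDR blocks:

```python
import ipaddress

# Example (hypothetical) CIDR blocks for two service instances' networks.
foundation_a = ipaddress.ip_network("10.0.8.0/24")
foundation_b = ipaddress.ip_network("10.1.8.0/24")
bad_choice = ipaddress.ip_network("10.0.8.128/25")  # falls inside foundation_a

# RFC 1918 private space, non-overlapping: safe to connect across the WAN.
assert not foundation_a.overlaps(foundation_b)

# Overlapping blocks would collide once the VPN joins the networks.
assert foundation_a.overlaps(bad_choice)
```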
- See Design Patterns for descriptions of the variety of design patterns that PCC supports.
- PCC stores objects in key/value format, where value can be any object.
- Any gfsh command not explained in the PCC documentation is not supported.
- PCC supports basic OQL queries, with no support for joins.
- Scale down of the cluster is not supported.
- Plan migrations, for example, the `-p` flag with the `cf update-service` command, are not supported.
Pivotal recommends that you do the following:
- Run PCC in its own network
- Use a load balancer to block direct, outside access to the Gorouter
To allow PCC network access from apps, you must create application security groups that allow access on the following ports:
For more information, see the PCF Application Security Groups topic.
PCC works with the IPsec Add-on for PCF. For information about the IPsec Add-on for PCF, see Securing Data in Transit with the IPsec Add-on.
PCC service instances are created with three default GemFire user roles for interacting with clusters:
- A cluster operator manages the GemFire cluster and can access region data.
- A developer can access region data.
- A gateway sender propagates region data to another PCC service instance.
All client apps, gfsh, and JMX clients must authenticate as one of these user roles to access the cluster.
The identifiers assigned for these roles are detailed in Create Service Keys.
Each user role is given predefined permissions for cluster operations. To accomplish a cluster operation, the user authenticates using one of the roles. Before the requested operation begins, the cluster verifies that the authenticated user role has the permissions required for that operation. Each user role has the following permissions:
- The cluster operator role has permissions to manage the cluster and to read and write region data.
- The developer role has permissions to read and write region data.
- The gateway sender role has permission to write region data.
More details about these permissions are in the Pivotal GemFire manual under Implementing Authorization.
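The role model described above can be sketched as a simple permission check. This is illustrative only; the permission names below are not GemFire's identifiers, which are described under Implementing Authorization:

```python
# Conceptual sketch of PCC's three default user roles.
# Permission names here are illustrative, not GemFire's identifiers.
ROLE_PERMISSIONS = {
    "cluster_operator": {"manage_cluster", "read_data", "write_data"},
    "developer": {"read_data", "write_data"},
    "gateway_sender": {"write_data"},  # propagates region data to another instance
}

def authorize(role, operation):
    """Verify the authenticated role holds the permission for the operation."""
    return operation in ROLE_PERMISSIONS.get(role, set())

assert authorize("cluster_operator", "manage_cluster")
assert authorize("developer", "read_data")
assert not authorize("developer", "manage_cluster")
assert not authorize("gateway_sender", "read_data")
```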
The topology diagram might not be accurate and might show more members than are actually in the cluster. However, the numerical value displayed on the top bar is accurate.
Please send any bugs, feature requests, or questions to the Pivotal Cloud Foundry Feedback list.