Pivotal Cloud Cache for PCF v1.3

Overview

Pivotal Cloud Cache (PCC) is a high-performance, high-availability caching layer for Pivotal Cloud Foundry (PCF). PCC offers an in-memory key-value store and delivers low-latency responses to a large number of concurrent data-access requests.

PCC provides a service broker to create in-memory data clusters on demand. These clusters are dedicated to the PCF space and tuned for specific use cases defined by your service plan. Service operators can create multiple plans to support different use cases.

PCC uses Pivotal GemFire. The Pivotal GemFire API Documentation details the API for client access to data objects within Pivotal GemFire.

This documentation performs the following functions:

  • Describes the features and architecture of PCC
  • Provides the PCF operator with instructions for installing, configuring, and maintaining PCC
  • Provides app developers with instructions for choosing a service plan, and for creating and deleting PCC service instances
  • Provides app developers with instructions for binding apps

Product Snapshot

The following table provides version and version-support information about PCC:

  • Version: v1.3.3
  • Release date: July 11, 2018
  • Software component version: Pivotal GemFire v9.3.0 (limited-availability v1.3.2 runs Pivotal GemFire v9.2.0)
  • Compatible Ops Manager versions: v1.12.x and v2.0.x (for limited-availability v1.3.2: v1.12.x, v2.0.x, v2.1.x, and v2.2.x)
  • Compatible Elastic Runtime versions: v1.12.x (for limited-availability v1.3.2: v1.12.x)
  • Compatible Pivotal Application Service (PAS)* versions: v2.0.x (for limited-availability v1.3.2: v2.0.x, v2.1.x, and v2.2.x)
  • IaaS support: AWS, Azure, GCP, OpenStack, and vSphere
  • IPsec support: Yes
  • Required BOSH stemcell version: 3541 (limited-availability v1.3.2 requires 3468)
  • Minimum Java buildpack version required for apps: v3.13

* As of PCF v2.0, Elastic Runtime is renamed Pivotal Application Service (PAS).

PCC and Other PCF Services

Some PCF services offer on-demand service plans. These plans let developers provision service instances when they want.

These contrast with the more common pre-provisioned service plans, which require operators to provision the service instances during installation and configuration through the service tile UI.

The following PCF services offer on-demand service plans:

  • MySQL for PCF v2.0 and later

  • RabbitMQ for PCF

  • Redis for PCF

  • Pivotal Cloud Cache (PCC)

These services package and deliver their on-demand service offerings differently. For example, some services, like Redis for PCF, have one tile, and you configure the tile differently depending on whether you want on-demand service plans or pre-provisioned service plans.

For other services, like PCC, you install one tile for on-demand service plans and a different tile for pre-provisioned service plans.

The following table lists and contrasts the different ways that PCF services package on-demand and pre-provisioned service offerings.

PCF service tile | Standalone product related to the service | Versions supporting on-demand plans | Versions supporting pre-provisioned plans
RabbitMQ for PCF | Pivotal RabbitMQ | v1.8 and later | All versions
Redis for PCF | Redis | v1.8 and later | All versions
MySQL for PCF | MySQL | v2.x (based on Percona Server) | v1.x (based on MariaDB and Galera)
PCC | Pivotal GemFire | All versions | NA
GemFire for PCF | Pivotal GemFire | NA | All versions

PCC Architecture

GemFire Basics

Pivotal GemFire is the data store within Pivotal Cloud Cache (PCC). A small amount of administrative GemFire setup is required for a PCC service instance, and any app will use a limited portion of the GemFire API.

The PCC architectural model is a client-server model. The clients are apps or microservices, and the servers are a set of GemFire servers maintained by a PCC service instance. The GemFire servers provide a low-latency, consistent, fault-tolerant data store within PCC.

Client-Server Model

GemFire holds data in key/value pairs. Each pair is called an entry. Entries are logically grouped into sets called regions. A region is a map (or dictionary) data structure.

The app (client) uses PCC as a cache. A cache lookup (read) is a get operation on a GemFire region, and a cache write is a put operation on a GemFire region. The GemFire command-line interface, called gfsh, facilitates region administration. Use gfsh to create and destroy regions within the PCC service instance.
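
To make this concrete, the following minimal Java sketch connects a client to a cluster and performs both operations. The locator address and port, and the region name users, are illustrative placeholders; in PCC, a bound app gets its locator addresses from the service binding, and the region must already have been created on the servers with gfsh.

    import org.apache.geode.cache.Region;
    import org.apache.geode.cache.client.ClientCache;
    import org.apache.geode.cache.client.ClientCacheFactory;
    import org.apache.geode.cache.client.ClientRegionShortcut;

    public class CacheClientExample {
        public static void main(String[] args) {
            // Connect to the cluster through a locator (placeholder address and port).
            ClientCache cache = new ClientCacheFactory()
                .addPoolLocator("10.0.8.4", 55221)
                .create();

            // A PROXY region keeps no local state; every operation goes to the servers.
            Region<String, String> users = cache
                .<String, String>createClientRegionFactory(ClientRegionShortcut.PROXY)
                .create("users"); // must already exist on the servers (created via gfsh)

            users.put("alice", "alice@example.com"); // cache write: put on the region
            String email = users.get("alice");       // cache read: get on the region
            System.out.println("alice -> " + email);

            cache.close();
        }
    }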

The PCC Cluster

PCC deploys cache clusters that use Pivotal GemFire to provide high availability, replication guarantees, and eventual consistency.

When you first spin up a cluster, you have three locators and at least four servers.

    graph TD;
      Client
      subgraph P-CloudCache Cluster
        subgraph locators
          Locator1
          Locator2
          Locator3
        end
        subgraph servers
          Server1
          Server2
          Server3
          Server4
        end
      end
      Client==>Locator1
      Client-->Server1
      Client-->Server2
      Client-->Server3
      Client-->Server4

When you scale the cluster up, you have more servers, increasing the capacity of the cache. There are always three locators.

    graph TD;
      Client
      subgraph P-CloudCache Cluster
        subgraph locators
          Locator1
          Locator2
          Locator3
        end
        subgraph servers
          Server1
          Server2
          Server3
          Server4
          Server5
          Server6
          Server7
        end
      end
      Client==>Locator1
      Client-->Server1
      Client-->Server2
      Client-->Server3
      Client-->Server4
      Client-->Server5
      Client-->Server6
      Client-->Server7

Member Communication

When a client connects to the cluster, it first connects to a locator. The locator replies with the IP address of a server for it to talk to. The client then connects to that server.

    sequenceDiagram
      participant Client
      participant Locator
      participant Server1
      Client->>+Locator: What servers can I talk to?
      Locator->>-Client: Server1
      Client->>Server1: Hello!
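
A client configures this discovery step by listing locators when it builds its connection pool. The Java sketch below assumes hypothetical locator hostnames; a PCC service instance exposes three locators, and the client needs only one of them to respond.

    import org.apache.geode.cache.client.ClientCache;
    import org.apache.geode.cache.client.ClientCacheFactory;

    public class LocatorDiscoveryExample {
        public static ClientCache connect() {
            return new ClientCacheFactory()
                .addPoolLocator("locator1.example.com", 55221)
                .addPoolLocator("locator2.example.com", 55221)
                .addPoolLocator("locator3.example.com", 55221)
                // Optional: route single-key operations directly to the server
                // that hosts the key, skipping an extra server-side hop.
                .setPoolPRSingleHopEnabled(true)
                .create();
        }
    }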

When the client wants to read or write data, it sends a request directly to the server.

    sequenceDiagram
      participant Client
      participant Server1
      Client->>+Server1: What’s the value for KEY?
      Server1->>-Client: VALUE

If the server doesn’t have the data locally, it fetches it from another server.

    sequenceDiagram
      participant Client
      participant Server1
      participant Server2
      Client->>+Server1: What’s the value for KEY?
      Server1->>+Server2: What’s the value for KEY?
      Server2->>-Server1: VALUE
      Server1->>-Client: VALUE

Workflow to Set Up a PCC Service

The following diagram shows the workflow for a PCF admin setting up a PCC service plan:

    graph TD;
      subgraph PCF Admin Actions
        s1
        s2
      end
      subgraph Developer Actions
        s4
      end
      s1[1. Upload P-CloudCache.pivotal to Ops Manager]
      s2[2. Configure CloudCache Service Plans, i.e. caching-small]
      s1-->s2
      s3[3. Ops Manager deploys CloudCache Service Broker]
      s2-->s3
      s4[4. Developer calls `cf create-service p-cloudcache caching-small test`]
      s3-->s4
      s5[5. Ops Manager creates a CloudCache cluster following the caching-small specifications]
      s4-->s5

Networking for On-Demand Services

This section describes networking considerations for Pivotal Cloud Cache.

Service Network Requirement

When you deploy PCF, you must create a statically defined network to host the component virtual machines that constitute the PCF infrastructure.

PCF components, like the Cloud Controller and UAA, run on this infrastructure network. In PCF v2.0 and earlier, on-demand PCF services require that you host them on a network that runs separately from this network.

Cloud operators pre-provision service instances from Ops Manager. Then, for each service, Ops Manager allocates and recovers static IP addresses from a pre-defined block of addresses.

To enable on-demand services in PCF v2.0 and earlier, operators must create a service network in Ops Manager Director and select the Service Network checkbox. Operators can then select the service network to host on-demand service instances when they configure the tile for that service.

Default Network and Service Network

On-demand PCF services rely on the BOSH 2.0 ability to dynamically deploy VMs in a dedicated network. The on-demand service broker uses this capability to create single-tenant service instances in a dedicated service network.

On-demand services use the dynamically provisioned service network to host the single-tenant worker VMs that run as service instances within development spaces. This architecture lets developers provision IaaS resources for their service instances at creation time, rather than the operator pre-provisioning a fixed quantity of IaaS resources when they deploy the service broker.

By making services single-tenant, where each instance runs on a dedicated VM rather than sharing VMs with unrelated processes, on-demand services eliminate the “noisy neighbor” problem, in which one application hogs resources on a shared cluster. Single-tenant services can also support regulatory compliance where sensitive data must be compartmentalized across separate machines.

An on-demand service splits its operations between the default network and the service network. Shared components of the service, such as executive controllers and databases, run centrally on the default network along with the Cloud Controller, UAA, and other PCF components. The worker pool deployed to specific spaces runs on the service network.

The diagram below shows worker VMs in an on-demand service instance running on a separate services network, while other components run on the default network.

[Figure: ODB architecture]

Required Networking Rules for On-Demand Services

Prior to deploying any service tile that uses the on-demand broker (ODB), the operator must request the network connections needed to allow various components of Pivotal Cloud Foundry (PCF) to communicate with ODB. The specifics of how to open those connections vary for each IaaS.

The following list shows the responsibilities of the key components in an on-demand architecture:

  • BOSH Director: Creates and updates service instances as instructed by ODB.
  • BOSH Agent: BOSH includes an Agent on every VM that it deploys. The Agent listens for instructions from the Director and carries out those instructions. The Agent receives job specifications from the Director and uses them to assign a role, or Job, to the VM.
  • BOSH UAA: As an OAuth2 provider, BOSH UAA issues tokens for clients to use when they act on behalf of BOSH users.
  • ERT (Elastic Runtime): Contains the apps that are consuming services.
  • ODB: Instructs BOSH to create and update services, and connects to services to create bindings.
  • Deployed service instance: Runs the given data service (for example, the deployed Redis for PCF service instance runs the Redis for PCF data service).


Regardless of the specific network layout, the operator must ensure network rules are set up so that connections are open as described in the list below.

  • ODB to the BOSH Director (default TCP port 25555) and BOSH UAA (default TCP port 8443): one-way. The default ports are not configurable.
  • ODB to deployed service instances: one-way, on ports specific to the service (such as RabbitMQ for PCF); may be one or more ports. This connection is for administrative tasks. Avoid opening general-use, app-specific ports for this connection.
  • ODB to PAS (or Elastic Runtime) (default TCP port 8443): one-way. The default port is not configurable.
  • Errand VMs to PAS (or Elastic Runtime) (default TCP port 8443), ODB (default TCP port 8080), and deployed service instances (ports specific to the service; may be one or more ports): one-way. The default ports are not configurable.
  • BOSH Agent to the BOSH Director (default TCP port 4222): two-way. The BOSH Agent runs on every VM in the system, including the BOSH Director VM, and initiates the connection with the BOSH Director. The default port is not configurable.
  • Deployed apps on PAS (or Elastic Runtime) to deployed service instances: one-way, on ports specific to the service; may be one or more ports. This connection is for general-use, app-specific tasks. Avoid opening administrative ports for this connection.
  • PAS (or Elastic Runtime) to ODB (default TCP port 8080): one-way. This port may be different for individual services, and may also be configurable by the operator if allowed by the tile developer.

PCC Instances Across WAN

PCC service instances running within distinct PCF foundations may communicate with each other across a WAN. In a topology such as this, the members within one service instance use their own private address space, as defined in RFC1918.

A VPN may be used to connect the private network spaces that lie across the WAN. The steps required to enable VPN connectivity depend on the IaaS provider(s).

The private address space for each service instance’s network must be configured with non-overlapping CIDR blocks. Configure the network prior to creating service instances. Directions for creating a network on the appropriate IaaS provider are in the section titled Architecture and Installation Overview.

Recommended Usage

  • See Design Patterns for descriptions of the variety of design patterns that PCC supports.
  • PCC stores objects in key/value format, where the value can be any object.
  • Any gfsh command not explained in the PCC documentation is not supported.
  • PCC supports basic OQL queries, with no support for joins; a sketch follows this list.
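
As a sketch of what such a query looks like from a Java client, assuming a hypothetical /customers region whose values expose name and age fields:

    import org.apache.geode.cache.client.ClientCache;
    import org.apache.geode.cache.query.Query;
    import org.apache.geode.cache.query.QueryService;
    import org.apache.geode.cache.query.SelectResults;

    public class QueryExample {
        // Declares throws Exception for brevity; execute() throws several
        // checked query exceptions.
        public static void runQuery(ClientCache cache) throws Exception {
            QueryService queryService = cache.getQueryService();
            // A single-region query; joins across regions are not supported.
            Query query = queryService.newQuery(
                "SELECT c.name FROM /customers c WHERE c.age > 21");
            SelectResults<?> results = (SelectResults<?>) query.execute();
            for (Object name : results) {
                System.out.println(name);
            }
        }
    }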

Limitations

  • Scaling down the cluster is not supported.
  • Plan migrations (for example, using the -p flag with the cf update-service command) are not supported.

Security

Pivotal recommends that you do the following:

  • Run PCC in its own network
  • Use a load balancer to block direct, outside access to the Gorouter

To allow PCC network access from apps, you must create application security groups that allow access on the following ports:

  • 1099
  • 8080
  • 40404
  • 55221

For more information, see the PCF Application Security Groups topic.

PCC works with the IPsec Add-on for PCF. For information about the IPsec Add-on for PCF, see Securing Data in Transit with the IPsec Add-on.

Authentication

PCC service instances are created with three default GemFire user roles for interacting with clusters:

  • A cluster operator manages the GemFire cluster and can access region data.
  • A developer can access region data.
  • A gateway sender propagates region data to another PCC service instance.

All client apps, gfsh, and JMX clients must authenticate as one of these user roles to access the cluster.

The identifiers assigned for these roles are detailed in Create Service Keys.
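
The following Java sketch shows one way a client might present these credentials, using GemFire’s security-* client properties together with an AuthInitialize callback. The class name, user name, and password are illustrative; real values come from the service key.

    import java.util.Properties;
    import org.apache.geode.LogWriter;
    import org.apache.geode.cache.client.ClientCache;
    import org.apache.geode.cache.client.ClientCacheFactory;
    import org.apache.geode.distributed.DistributedMember;
    import org.apache.geode.security.AuthInitialize;

    // Hands the configured user name and password to the servers at connect time.
    public class UserPasswordAuthInit implements AuthInitialize {
        @Override
        public void init(LogWriter systemLogger, LogWriter securityLogger) {}

        @Override
        public Properties getCredentials(Properties securityProps,
                DistributedMember server, boolean isPeer) {
            Properties credentials = new Properties();
            credentials.setProperty("security-username",
                securityProps.getProperty("security-username"));
            credentials.setProperty("security-password",
                securityProps.getProperty("security-password"));
            return credentials;
        }

        @Override
        public void close() {}

        // Example connection using this callback; the credentials below are
        // placeholders for the values in a service key.
        public static ClientCache connectExample() {
            return new ClientCacheFactory()
                .set("security-client-auth-init", UserPasswordAuthInit.class.getName())
                .set("security-username", "developer_abc123")
                .set("security-password", "example-password")
                .addPoolLocator("10.0.8.4", 55221)
                .create();
        }
    }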

Authorization

Each user role is given predefined permissions for cluster operations. To accomplish a cluster operation, the user authenticates using one of the roles. Before the requested operation starts, PCC verifies that the authenticated user role has the permissions required for that operation. Each user role has the following permissions:

  • The cluster operator role has CLUSTER:MANAGE, CLUSTER:WRITE, CLUSTER:READ, CLUSTER:MANAGE:DEPLOY, CLUSTER:MANAGE:GATEWAY, DATA:MANAGE, DATA:WRITE, and DATA:READ permissions.
  • The developer role has CLUSTER:READ, DATA:WRITE, and DATA:READ permissions.
  • The gateway sender role has DATA:WRITE permission.

More details about these permissions are in the Pivotal GemFire manual under Implementing Authorization.

Known Issues

Pulse Issue

The topology diagram might not be accurate and might show more members than are actually in the cluster. However, the numerical value displayed on the top bar is accurate.

Feedback

Please report bugs, submit feature requests, or send questions to the Pivotal Cloud Foundry Feedback list.
