Pivotal Cloud Cache for PCF v1.2

Overview

Pivotal Cloud Cache (PCC) is a high-performance, high-availability caching layer for Pivotal Cloud Foundry (PCF). PCC offers an in-memory key-value store. It delivers low-latency responses to a large number of concurrent data access requests.

PCC provides a service broker to create in-memory data clusters on demand. These clusters are dedicated to the PCF space and tuned for specific use cases defined by your service plan. Service operators can create multiple plans to support different use cases.

PCC uses Pivotal GemFire. You can use PCC to store any kind of data object, using the Pivotal GemFire Java client library.
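
For illustration, here is a minimal Java client sketch. It assumes a GemFire 9.x client library on the classpath, a hypothetical locator address, and a region named example-region that already exists in the cluster; authentication, covered under Security below, is omitted:

```java
import org.apache.geode.cache.Region;
import org.apache.geode.cache.client.ClientCache;
import org.apache.geode.cache.client.ClientCacheFactory;
import org.apache.geode.cache.client.ClientRegionShortcut;

public class PccQuickstart {
  public static void main(String[] args) {
    // Connect through a locator; the client pool discovers the servers.
    ClientCache cache = new ClientCacheFactory()
        .addPoolLocator("10.0.8.4", 55221)   // hypothetical locator address
        .create();

    // PROXY keeps no local state; every get/put goes to the servers.
    Region<String, String> region = cache
        .<String, String>createClientRegionFactory(ClientRegionShortcut.PROXY)
        .create("example-region");

    region.put("hello", "world");            // cache write
    System.out.println(region.get("hello")); // cache read
    cache.close();
  }
}
```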

This documentation does the following:

  • Describes the features and architecture of PCC
  • Provides the PCF operator with instructions for installing, configuring, and maintaining PCC
  • Provides app developers with instructions for choosing a service plan and for creating and deleting PCC service instances
  • Provides app developers with instructions for binding apps to PCC service instances

Product Snapshot

The following table provides version and version-support information about PCC:

Element | Details
Version | v1.2.1
Release date | January 11, 2018
Software component version | Pivotal GemFire v9.2.0
Compatible Ops Manager version(s) | v1.11.x and v1.12.x
Compatible Elastic Runtime version(s) | v1.11.x and v1.12.x
IaaS support | AWS, Azure, GCP, OpenStack, and vSphere
IPsec support | Yes

PCC and Other PCF Services

Some PCF services offer on-demand service plans. These plans let developers provision service instances when they want.

These contrast with the more common pre-provisioned service plans, which require operators to provision the service instances during installation and configuration through the service tile UI.

The following PCF services offer on-demand service plans:

  • MySQL for PCF v2.0 and later
  • RabbitMQ for PCF
  • Redis for PCF
  • Pivotal Cloud Cache (PCC)

These services package and deliver their on-demand service offerings differently. For example, some services, like Redis for PCF, have one tile, and you configure the tile differently depending on whether you want on-demand service plans or pre-provisioned service plans.

For other services, like PCC, you install one tile for on-demand service plans and a different tile for pre-provisioned service plans.

The following table lists and contrasts the different ways that PCF services package on-demand and pre-provisioned service offerings.

PCF service tile | Standalone product related to the service | Versions supporting on-demand | Versions supporting pre-provisioned
RabbitMQ for PCF | Pivotal RabbitMQ | v1.8 and later | All versions
Redis for PCF | Redis | v1.8 and later | All versions
MySQL for PCF | MySQL | v2.x (based on Percona Server) | v1.x (based on MariaDB and Galera)
PCC | Pivotal GemFire | All versions | NA
GemFire for PCF | Pivotal GemFire | NA | All versions

PCC Architecture

GemFire Basics

Pivotal GemFire is the data store within Pivotal Cloud Cache (PCC). A small amount of administrative GemFire setup is required for a PCC service instance, and apps use only a limited portion of the GemFire API.

The PCC architectural model is a client-server model. The clients are apps or microservices, and the servers are a set of GemFire servers maintained by a PCC service instance. The GemFire servers provide a low-latency, consistent, fault-tolerant data store within PCC.

Client Server Model

GemFire holds data in key/value pairs. Each pair is called an entry. Entries are logically grouped into sets called regions. A region is a map (or dictionary) data structure.

The app (client) uses PCC as a cache. A cache lookup (read) is a get operation on a GemFire region, and a cache write is a put operation on a GemFire region. The GemFire command-line interface, called gfsh, facilitates region administration. Use gfsh to create and destroy regions within the PCC service instance.
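
For example, an operator might create a region with the gfsh command create region --name=example-region --type=PARTITION. App code then uses that region with the look-aside pattern: read from the region first, and on a miss, load from the system of record and cache the result. A minimal sketch, assuming a hypothetical loadFromDatabase helper:

```java
import org.apache.geode.cache.Region;

public class LookAsideExample {
  // Hypothetical loader standing in for the system of record.
  static String loadFromDatabase(String key) {
    return "value-for-" + key;
  }

  // Look-aside read: try the cache first, fall back to the database on a miss.
  static String lookup(Region<String, String> region, String key) {
    String cached = region.get(key);        // cache read is a get on the region
    if (cached != null) {
      return cached;                        // cache hit
    }
    String loaded = loadFromDatabase(key);  // cache miss: consult the database
    region.put(key, loaded);                // cache write is a put on the region
    return loaded;
  }
}
```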

The PCC Cluster

PCC deploys cache clusters that use Pivotal GemFire to provide high availability, replication guarantees, and eventual consistency.

When you first spin up a cluster, you have three locators and at least four servers.

```mermaid
graph TD;
  Client
  subgraph P-CloudCache Cluster
    subgraph locators
      Locator1
      Locator2
      Locator3
    end
    subgraph servers
      Server1
      Server2
      Server3
      Server4
    end
  end
  Client==>Locator1
  Client-->Server1
  Client-->Server2
  Client-->Server3
  Client-->Server4
```

When you scale the cluster up, you have more servers, increasing the capacity of the cache. There are always three locators.

```mermaid
graph TD;
  Client
  subgraph P-CloudCache Cluster
    subgraph locators
      Locator1
      Locator2
      Locator3
    end
    subgraph servers
      Server1
      Server2
      Server3
      Server4
      Server5
      Server6
      Server7
    end
  end
  Client==>Locator1
  Client-->Server1
  Client-->Server2
  Client-->Server3
  Client-->Server4
  Client-->Server5
  Client-->Server6
  Client-->Server7
```

Member Communication

When a client connects to the cluster, it first connects to a locator. The locator replies with the IP address of a server for it to talk to. The client then connects to that server.

```mermaid
sequenceDiagram
  participant Client
  participant Locator
  participant Server1
  Client->>+Locator: What servers can I talk to?
  Locator->>-Client: Server1
  Client->>Server1: Hello!
```

When the client wants to read or write data, it sends a request directly to the server.

```mermaid
sequenceDiagram
  participant Client
  participant Server1
  Client->>+Server1: What’s the value for KEY?
  Server1->>-Client: VALUE
```

If the server doesn’t have the data locally, it fetches it from another server.

```mermaid
sequenceDiagram
  participant Client
  participant Server1
  participant Server2
  Client->>+Server1: What’s the value for KEY?
  Server1->>+Server2: What’s the value for KEY?
  Server2->>-Server1: VALUE
  Server1->>-Client: VALUE
```
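
In client code, this routing is the pool's job: the app lists only locators, and the pool discovers and connects to servers. GemFire clients can also enable single-hop access so that, for partitioned data, each request goes directly to the server that owns the key, avoiding the extra server-to-server fetch shown above. A minimal sketch, with hypothetical locator addresses:

```java
import org.apache.geode.cache.client.ClientCache;
import org.apache.geode.cache.client.ClientCacheFactory;

public class PoolConfigExample {
  public static void main(String[] args) {
    ClientCache cache = new ClientCacheFactory()
        // List the locators; the pool asks them which servers to use.
        .addPoolLocator("10.0.8.4", 55221)   // hypothetical addresses
        .addPoolLocator("10.0.8.5", 55221)
        .addPoolLocator("10.0.8.6", 55221)
        // Route partitioned-data requests straight to the owning server
        // (on by default in recent clients; shown here for clarity).
        .setPoolPRSingleHopEnabled(true)
        .create();
    cache.close();
  }
}
```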

Workflow to Set Up a PCC Service

The following diagram shows the workflow for a PCF admin setting up a PCC service plan:

```mermaid
graph TD;
  subgraph PCF Admin Actions
    s1
    s2
  end
  subgraph Developer Actions
    s4
  end
  s1[1. Upload P-CloudCache.pivotal to Ops Manager]
  s2[2. Configure CloudCache Service Plans, i.e. caching-small]
  s1-->s2
  s3[3. Ops Manager deploys CloudCache Service Broker]
  s2-->s3
  s4[4. Developer calls `cf create-service p-cloudcache caching-small test`]
  s3-->s4
  s5[5. Ops Manager creates a CloudCache cluster following the caching-small specifications]
  s4-->s5
```
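
After step 5, the developer can bind apps to the new service instance; the binding's connection details appear in the app's VCAP_SERVICES environment variable. A hedged sketch of reading locator addresses from it, assuming the binding appears under a p-cloudcache key with a credentials block containing a locators array (the exact JSON layout may differ by PCC version):

```java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class VcapLocatorsExample {
  public static void main(String[] args) throws Exception {
    // Assumption: the PCC binding is listed under "p-cloudcache" and its
    // credentials carry a "locators" array of "host[port]" strings.
    JsonNode vcap = new ObjectMapper().readTree(System.getenv("VCAP_SERVICES"));
    JsonNode locators = vcap.path("p-cloudcache").path(0)
        .path("credentials").path("locators");
    for (JsonNode locator : locators) {
      System.out.println(locator.asText()); // e.g. "10.0.8.4[55221]"
    }
  }
}
```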

Networking for On-Demand Services

This section describes networking considerations for Pivotal Cloud Cache.

BOSH 2.0 and the Service Network

Before BOSH 2.0, cloud operators pre-provisioned service instances from Ops Manager. In the Ops Manager Director Networking pane, they allocated a block of IP addresses for the service instance pool, and under Resource Config they provisioned pool VM resources, specifying the CPU, hard disk, and RAM they would use. All instances had to be provisioned at the same level. With each create-service request from a developer, Ops Manager handed out a static IP address from this block, and with each delete-service it cleaned up the VM and returned it to the available pool.

With BOSH 2.0 dynamic networking and Cloud Foundry asynchronous service provisioning, operators can now define a dynamically-provisioned service network that hosts instances more flexibly. The service network runs separate from the PCF default network. While the default network hosts VMs launched by Ops Manager, the VMs running in the service network are created and provisioned on-demand by BOSH, and BOSH lets the IaaS assign IP addresses to the service instance VMs. Each dynamic network attached to a job instance is typically represented as its own Network Interface Controller in the IaaS layer.

Operators enable on-demand services when they deploy PCF, by creating one or more service networks in the Ops Manager Director Create Networks pane and selecting the Service Network checkbox. Designating a network as a service network prevents Ops Manager from creating VMs in the network, leaving instance creation to the underlying BOSH.

Service Network checkbox

When they deploy an on-demand service, operators select the service network when configuring the tile for that on-demand service.

Default Network and Service Network

Like other on-demand PCF services, PCC relies on the BOSH 2.0 ability to dynamically deploy VMs in a dedicated network. The on-demand service broker uses this capability to create single-tenant service instances in a dedicated service network.

On-demand services use the dynamically-provisioned service network to host the single-tenant worker VMs that run as service instances within development spaces. This architecture lets developers provision IaaS resources for their service instances at creation time, rather than the operator pre-provisioning a fixed quantity of IaaS resources when they deploy the service broker.

By making services single-tenant, where each instance runs on a dedicated VM rather than sharing VMs with unrelated processes, on-demand services eliminate the “noisy neighbor” problem when one application hogs resources on a shared cluster. Single-tenant services can also support regulatory compliance where sensitive data must be compartmentalized across separate machines.

An on-demand service splits its operations between the default network and the service network. Shared components of the service, such as executive controllers and databases, run centrally on the default network along with the Cloud Controller, UAA, and other PCF components. The worker pool deployed to specific spaces runs on the service network.

The diagram below shows worker VMs in an on-demand service instance, such as RabbitMQ for PCF, running on a separate services network, while other components run on the default network.

Architecture Diagram

Required Networking Rules for On-Demand Services

Prior to deploying any service tile that uses the on-demand broker (ODB), the operator must request the network connections needed to allow various components of Pivotal Cloud Foundry (PCF) to communicate with ODB. The specifics of how to open those connections vary for each IaaS.

The following table shows the responsibilities of the key components in an on-demand architecture.

Key Component | Responsibility
BOSH Director | Creates and updates service instances as instructed by ODB.
BOSH Agent | BOSH includes an Agent on every VM that it deploys. The Agent listens for instructions from the Director and carries out those instructions. The Agent receives job specifications from the Director and uses them to assign a role, or Job, to the VM.
BOSH UAA | As an OAuth2 provider, BOSH UAA issues tokens for clients to use when they act on behalf of BOSH users.
ERT | Contains the apps that consume services.
ODB | Instructs BOSH to create and update service instances, and connects to service instances to create bindings.
Deployed service instance | Runs the given data service (for example, the deployed Redis for PCF service instance runs the Redis for PCF data service).

Regardless of the specific network layout, the operator must ensure network rules are set up so that connections are open as described in the table below.

This component… | Must communicate with… | Default TCP Port | Communication direction(s) | Notes
ODB | BOSH Director and BOSH UAA | 25555 (Director) and 8443 (UAA) | One-way | The default ports are not configurable.
ODB | Deployed service instances | Specific to the service (such as RabbitMQ for PCF); may be one or more ports | One-way | This connection is for administrative tasks. Avoid opening general-use, app-specific ports for this connection.
ODB | ERT | 8443 | One-way | The default port is not configurable.
Errand VMs | ERT, ODB, and deployed service instances | 8443 (ERT), 8080 (ODB), and service-specific ports | One-way | The default port is not configurable.
BOSH Agent | BOSH Director | 4222 | Two-way | The BOSH Agent runs on every VM in the system, including the BOSH Director VM, and initiates the connection with the BOSH Director. The default port is not configurable.
Deployed apps on ERT | Deployed service instances | Specific to the service; may be one or more ports | One-way | This connection is for general-use, app-specific tasks. Avoid opening administrative ports for this connection.
ERT | ODB | 8080 | One-way | This port may be different for individual services. This port may also be configurable by the operator if allowed by the tile developer.
Recommended Usage

  • PCC can be used as a cache. It supports the look-aside cache pattern (sketched under Client Server Model, above).
  • PCC can be used to store objects in key/value format, where the value can be any object.
  • PCC works with gfsh. Use only the gfsh version that matches this release’s version of GemFire; for the supported version, see Product Snapshot, above.
  • Any gfsh command not explained in the PCC documentation is not supported.
  • PCC supports basic OQL queries, with no support for joins; a query sketch follows this list.
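
As a sketch of a basic OQL query from the Java client (the region name and the status field are hypothetical, and cache is an already-connected, authenticated ClientCache):

```java
import org.apache.geode.cache.client.ClientCache;
import org.apache.geode.cache.query.Query;
import org.apache.geode.cache.query.SelectResults;

public class OqlExample {
  // Select entries whose (hypothetical) status field matches a bound parameter.
  static SelectResults<?> activeEntries(ClientCache cache) throws Exception {
    Query query = cache.getQueryService()
        .newQuery("SELECT * FROM /example-region e WHERE e.status = $1");
    return (SelectResults<?>) query.execute(new Object[] { "active" });
  }
}
```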

Limitations

  • Scale down of the cluster is not supported.
  • Plan migrations (for example, using the -p flag with the cf update-service command) are not supported.
  • WAN (Cross Data Center) replication is not supported.
  • Persistent regions are not supported.

Security

Pivotal recommends that you do the following:

  • Run PCC in its own network
  • Use a load balancer to block direct, outside access to the Gorouter

To allow PCC network access from apps, you must create application security groups that allow access on the following ports:

  • 1099
  • 8080
  • 40404
  • 55221

For more information, see the PCF Application Security Groups topic.

PCC works with the IPsec Add-on for PCF. For information about the IPsec Add-on for PCF, see Securing Data in Transit with the IPsec Add-on.

Authentication

Clusters are created with two default users: cluster_operator and developer. A cluster can be accessed only through these two user accounts; all client apps, gfsh, and JMX clients must authenticate as one of them to access the cluster.
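
A hedged sketch of supplying these credentials from a Java client. The AuthInitialize implementation name (example.ClientAuthInit, a class that forwards the security-username and security-password properties) is hypothetical, and the credentials come from a service binding or service key:

```java
import java.util.Properties;
import org.apache.geode.cache.client.ClientCache;
import org.apache.geode.cache.client.ClientCacheFactory;

public class SecureClientExample {
  public static void main(String[] args) {
    Properties props = new Properties();
    // Credentials from the service binding or a service key created with
    // `cf create-service-key`; the auth-init class is hypothetical.
    props.setProperty("security-username", "developer"); // or cluster_operator
    props.setProperty("security-password", System.getenv("PCC_PASSWORD"));
    props.setProperty("security-client-auth-init", "example.ClientAuthInit");

    ClientCache cache = new ClientCacheFactory(props)
        .addPoolLocator("10.0.8.4", 55221)  // hypothetical locator address
        .create();
    // ... use the cache ...
    cache.close();
  }
}
```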

Authorization

The default user roles cluster_operator and developer have different permissions:

  • The cluster_operator role has CLUSTER:WRITE, CLUSTER:READ, DATA:MANAGE, DATA:WRITE, and DATA:READ permissions.
  • The developer role has CLUSTER:READ, DATA:WRITE, and DATA:READ permissions.

You can find more details about these permissions in the Pivotal GemFire Implementing Authorization topic.

Known Issues

Pulse Issue

The topology diagram might not be accurate and might show more members than are actually in the cluster. However, the numerical value displayed on the top bar is accurate.

Feedback

Please send bug reports, feature requests, or questions to the Pivotal Cloud Foundry Feedback list.
