Redis for PCF v1.9

Redis for PCF Architecture and Lifecycle



Redis for PCF offers On-Demand, Dedicated-VM, and Shared-VM plans. This section describes the architecture, lifecycle, and configurations of these plans, as well as networking information for the on-demand service.

Redis for PCF Architecture for On-Demand Service Plan

Architecture Diagram for On-Demand Plan

This diagram shows the architecture of the service broker and On-Demand plans, and how the user’s app binds to a Redis instance.

On-Demand Architecture Diagram
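
The binding step in this diagram is what makes Redis credentials available to the app through the VCAP_SERVICES environment variable. The following is a minimal sketch of how a bound app might read those credentials and connect; the service label p.redis and the credential field names (host, port, password) are assumptions about a typical Redis for PCF binding, so verify the exact structure with cf env.

```python
# Minimal sketch, assuming the binding appears in VCAP_SERVICES under the
# label "p.redis" and exposes "host", "port", and "password" credentials.
# Verify the actual structure with `cf env <app-name>`.
import json
import os

import redis  # redis-py client


def redis_from_vcap(service_label="p.redis"):
    services = json.loads(os.environ["VCAP_SERVICES"])
    creds = services[service_label][0]["credentials"]
    return redis.StrictRedis(
        host=creds["host"],
        port=int(creds["port"]),
        password=creds["password"],
    )


if __name__ == "__main__":
    client = redis_from_vcap()
    client.set("greeting", "hello from a bound app")
    print(client.get("greeting"))
```

Most buildpack languages also offer helper libraries for parsing VCAP_SERVICES, but the raw environment variable is always available as shown here.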

On-Demand Service Plans

  • These plans are operator-configured and enabled. Once enabled, App Developers can provision a Redis instance from that plan.
  • You can disable any of the three on-demand plans in the plan’s page in Ops Manager. See descriptions of the three on-demand plans in the Overview of Redis for PCF topic.
  • The maximum number of instances is managed by a per-plan quota and a global quota. The maximum number of instances cannot exceed 50.
  • Operators can update the plan settings, including the VM size, disk size, and Redis configuration settings, after the plans have been created. Operators should not downsize the VMs or disk size as this can cause data loss in pre-existing instances.
  • App Developers can update certain Redis configurations.

Configuration for On-Demand Service Plans

For On-Demand Plans, certain Redis configurations can be set by the operator during plan configuration, and by the App Developer during instance provisioning. Other Redis configurations cannot be changed from the default.

  • Operator-configurable Redis settings include the following: timeout, tcp-keepalive, maxclients, and Lua scripting. See the Operator Guide section of this documentation for more detail.
  • App Developer-configurable Redis settings include the following: maxmemory-policy, notify-keyspace-events, slowlog-log-slower-than, and slowlog-max-len. See the App Developer Guide section of this documentation for more detail, and the sketch after this list for one way to inspect these settings.
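
To confirm which values are in effect after changing them, an app can read the settings back, provided the Redis CONFIG command is exposed to its binding (some plans restrict or rename administrative commands). The helper below is a sketch only, and the client argument is assumed to be an existing redis-py connection such as the one created when the app binds.

```python
# Sketch: read back the App Developer-configurable settings with CONFIG GET.
# Assumes `client` is an existing redis-py connection and that CONFIG is
# available to this binding.
APP_DEV_SETTINGS = (
    "maxmemory-policy",
    "notify-keyspace-events",
    "slowlog-log-slower-than",
    "slowlog-max-len",
)


def show_effective_settings(client):
    for name in APP_DEV_SETTINGS:
        value = client.config_get(name)  # returns a {setting: value} dict
        print(f"{name} = {value.get(name)}")
```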

Lifecycle for On-Demand Service Plan

The image below shows the lifecycle of Redis for PCF, from an operator installing the tile, through an app developer using the service, to an operator deleting the tile.

On-Demand Lifecycle Diagram

On-Demand Services and Networking

BOSH 2.0 and the Service Network

Before BOSH 2.0, cloud operators pre-provisioned service instances from Ops Manager. In the Ops Manager Director Networking pane, they allocated a block of IP addresses for the service instance pool, and under Resource Config they provisioned pool VM resources, specifying the CPU, hard disk, and RAM they would use. All instances had to be provisioned at the same level. With each create-service request from a developer, Ops Manager handed out a static IP address from this block, and with each delete-service it cleaned up the VM and returned it to the available pool.

With BOSH 2.0 dynamic networking and Cloud Foundry asynchronous service provisioning, operators can now define a dynamically-provisioned service network that hosts instances more flexibly. The service network runs separately from the PCF default network. While the default network hosts VMs launched by Ops Manager, the VMs running in the service network are created and provisioned on demand by BOSH, and BOSH lets the IaaS assign IP addresses to the service instance VMs. Each dynamic network attached to a job instance is typically represented as its own Network Interface Controller in the IaaS layer.

Operators enable on-demand services when they deploy PCF, by creating one or more service networks in the Ops Manager Director Create Networks pane and selecting the Service Network checkbox. Designating a network as a service network prevents Ops Manager from creating VMs in the network, leaving instance creation to the underlying BOSH.

Service Network checkbox

When they deploy an on-demand service, operators select the service network when configuring the tile for that on-demand service.

Default Network and Service Network

Like other on-demand PCF services, On-Demand Redis for PCF relies on the BOSH 2.0 ability to dynamically deploy VMs in a dedicated network. The on-demand service broker uses this capability to create single-tenant service instances in a dedicated service network.

On-demand services use the dynamically-provisioned service network to host the single-tenant worker VMs that run as service instances within development spaces. This architecture lets developers provision IaaS resources for their service instances at creation time, rather than the operator pre-provisioning a fixed quantity of IaaS resources when they deploy the service broker.

By making services single-tenant, where each instance runs on a dedicated VM rather than sharing VMs with unrelated processes, on-demand services eliminate the “noisy neighbor” problem when one application hogs resources on a shared cluster. Single-tenant services can also support regulatory compliance where sensitive data must be compartmentalized across separate machines.

An on-demand service splits its operations between the default network and the service network. Shared components of the service, such as executive controllers and databases, run centrally on the default network along with the Cloud Controller, UAA, and other PCF components. The worker pool deployed to specific spaces runs on the service network.

The diagram below shows worker VMs in an on-demand service instance, such as RabbitMQ for PCF, running on a separate services network, while other components run on the default network.

Architecture Diagram

Required Networking Rules for On-Demand Services

Prior to deploying any service tile that uses the on-demand broker (ODB), the operator must request the network connections needed to allow various components of Pivotal Cloud Foundry (PCF) to communicate with the ODB. The specifics of how to open those connections vary for each IaaS.

The following table shows the responsibilities of the key components in an on-demand architecture.

Key Component | Responsibility
BOSH Director | Creates and updates service instances as instructed by the ODB.
BOSH Agent | BOSH includes an Agent on every VM that it deploys. The Agent listens for instructions from the Director and carries out those instructions. The Agent receives job specifications from the Director and uses them to assign a role, or Job, to the VM.
BOSH UAA | As an OAuth2 provider, BOSH UAA issues tokens for clients to use when they act on behalf of BOSH users.
ERT | Contains the apps that are consuming services.
ODB | Instructs BOSH to create and update services, and connects to services to create bindings.
Deployed service instance | Runs the given data service (for example, the deployed Redis for PCF service instance runs the Redis for PCF data service).

Regardless of the specific network layout, the operator must ensure network rules are set up so that connections are open as described in the table below. A simple reachability-check sketch follows the table.

This component… | Must communicate with… | Default Port | Notes
ODB | BOSH Director, BOSH UAA | 443 | One-way communication. The default port is not configurable.
ODB | Deployed service instances | Specific to the service (such as RabbitMQ for PCF). May be one or more ports. | One-way communication. This connection is for administrative tasks. Avoid opening general use, app-specific ports for this connection.
ODB | ERT | 443 | One-way communication. The default port is not configurable.
Errand VMs | ERT, ODB, deployed service instances | 443, 8080, and ports specific to the service (may be one or more) | One-way communication. The default port is not configurable.
BOSH Agent | BOSH Director | 6868 | Two-way communication. The BOSH Agent runs on every VM in the system and on the BOSH Director. The default port is not configurable.
Deployed apps on ERT | Deployed service instances | Specific to the service. May be one or more ports. | One-way communication. This connection is for general use, app-specific tasks. Avoid opening administrative ports for this connection.
ERT | ODB | 8080 | One-way communication. This port may be different for individual services. This port may also be configurable by the operator if allowed by the tile developer.
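
Below is a rough pre-flight sketch for checking that the ports in the table above are reachable before deploying a tile. The hostnames are placeholders for illustration only; substitute the addresses from your own deployment and add any service-specific ports.

```python
# Sketch: verify that the required on-demand service connections are open.
# The hostnames below are placeholders, not real addresses.
import socket

REQUIRED_CONNECTIONS = [
    ("bosh-director.example.internal", 443),   # ODB -> BOSH Director / UAA
    ("bosh-director.example.internal", 6868),  # BOSH Agent <-> BOSH Director
    ("odb.example.internal", 8080),            # ERT and errand VMs -> ODB
]


def port_reachable(host, port, timeout=3.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


for host, port in REQUIRED_CONNECTIONS:
    status = "open" if port_reachable(host, port) else "BLOCKED"
    print(f"{host}:{port} {status}")
```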

Redis for PCF Architecture for Dedicated-VM and Shared-VM Service Plans

Architecture Diagram for Shared and Dedicated Plans

This diagram shows the architecture of the service broker and the Shared-VM and Dedicated-VM plans, and how the user’s app binds to a Redis instance.

Architecture Diagram

Shared-VM Service Plan

  • This plan deploys a Redis instance inside the service broker VM.
  • To disable this plan, set the Max instances limit on the Shared-VM plan tab in Ops Manager to 0.
  • You can increase the maximum number of instances from the default of 5 to a value of your choosing. If you increase the number of instances that can run on this single VM, you should increase the resources allocated to the VM, in particular RAM and CPU. You can overcommit to a certain extent, but you may start to see performance degradation (see the sizing sketch after this list).
  • You can also increase the maximum amount of RAM allocated to each Redis process (service instance) that runs on this VM.
  • If you decrease the service instance limit, any instances running beyond the new limit are not terminated. You cannot create any new instances until the total count drops below the new limit. For example, if you had a limit of 10 with all 10 in use and reduced the limit to 8, two instances would continue to run until you terminate them yourself.
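
As a rough illustration of the sizing guidance in this list, the sketch below estimates worst-case memory use for a given instance limit and compares it against the VM's RAM. The per-process overhead figure is an assumption for illustration, not a measured value.

```python
# Back-of-the-envelope sizing sketch for the Shared-VM plan. The overhead
# constant is an illustrative assumption, not a measured figure.
def shared_vm_memory_headroom(vm_ram_mb, instance_limit,
                              maxmemory_per_instance_mb,
                              per_process_overhead_mb=50):
    """Return remaining RAM in MB if every instance hits its memory limit.

    A negative result means the VM is overcommitted in the worst case.
    """
    worst_case = instance_limit * (maxmemory_per_instance_mb +
                                   per_process_overhead_mb)
    return vm_ram_mb - worst_case


# Example: 5 instances at 512 MB each on a 4 GB VM leaves headroom, while
# 20 instances on the same VM would be heavily overcommitted.
print(shared_vm_memory_headroom(4096, 5, 512))   # positive -> fits
print(shared_vm_memory_headroom(4096, 20, 512))  # negative -> overcommitted
```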

Dedicated-VM Service Plan

  • This plan deploys the operator-configured number of dedicated Redis VMs alongside the service broker VM.
  • These instances are pre-provisioned during the deployment of the tile from Ops Manager into a pool. The VMs are provisioned and configured with a Redis process ready to be used when an instance of the Dedicated-VM plan is requested.
  • A default deployment will provision 5 instances of the Dedicated-VM plan into the pool. This number can be increased on the Resource Config tab in Ops Manager, either in the initial deployment or subsequently. The number of VMs cannot be decreased once deployed.
  • When a user provisions an instance, it is marked as in use and taken out of the pool.
  • When a user deprovisions an instance, the instance is cleansed of any data and configuration, restored to a fresh state, and placed back into the pool, ready to be used again (see the sketch after this list).
  • You can disable this plan by setting the number of instances of the Dedicated node job in Ops Manager to 0.
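
The sketch below is purely illustrative and is not the broker's implementation; it simply mirrors the pool semantics described in this list: a fixed set of pre-provisioned VMs that are handed out when an instance is provisioned, then cleansed and returned when it is deprovisioned.

```python
# Illustrative sketch of the Dedicated-VM pool semantics (not the actual
# service broker code).
class DedicatedInstancePool:
    def __init__(self, vm_names):
        self.free = list(vm_names)   # pre-provisioned at tile deploy time
        self.in_use = {}             # service instance GUID -> VM name

    def provision(self, instance_guid):
        if not self.free:
            raise RuntimeError("pool exhausted: no dedicated VMs available")
        vm = self.free.pop()
        self.in_use[instance_guid] = vm
        return vm

    def deprovision(self, instance_guid):
        vm = self.in_use.pop(instance_guid)
        # The real service flushes data and resets configuration before reuse.
        self.free.append(vm)


pool = DedicatedInstancePool([f"dedicated-node-{i}" for i in range(5)])
vm = pool.provision("some-service-instance-guid")
pool.deprovision("some-service-instance-guid")
```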

Configuration for Dedicated-VM and Shared-VM Service Plans

For Dedicated-VM and Shared-VM plans, the default Redis configurations cannot be changed. A sample redis.conf from a Dedicated-VM plan instance is provided here.

  • Redis is configured with a maxmemory-policy of no-eviction. This policy means that when memory is full, the service does not evict any keys and does not accept write operations until memory becomes available (see the sketch after this list for how an app might handle the resulting errors).

  • Persistence is configured for both RDB and AOF.

  • The default maximum number of connections, maxclients, is set to 10000, but Redis adjusts this number according to the number of file handles available.

  • Replication and event notification are not configured.
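
Because the no-eviction policy rejects writes once maxmemory is reached, an app can catch the resulting error and degrade gracefully rather than fail outright. The sketch below assumes client is an existing redis-py connection; the exact error text can vary by Redis version.

```python
# Sketch: handle the write rejection that no-eviction produces when the
# instance is full. Assumes `client` is an existing redis-py connection.
import redis


def safe_set(client, key, value):
    try:
        client.set(key, value)
        return True
    except redis.exceptions.ResponseError as err:
        # Redis typically reports "OOM command not allowed when used memory
        # > 'maxmemory'." under the no-eviction policy.
        print(f"write rejected: {err}")
        return False
```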

Lifecycle for Dedicated-VM and Shared-VM Service Plans

Here is the lifecycle of Redis for PCF, from an operator installing the tile, through an app developer using the service, to an operator deleting the tile.

Lifecycle Diagram
