Redis for PCF v1.9

On-Demand Service Offering


Redis for PCF offers On-Demand, Dedicated-VM, and Shared-VM service plans. This section describes the architecture, lifecycle, and configurations of the on-demand plan, as well as networking information for the on-demand service. For similar information for the Dedicated-VM and Shared-VM plans, see Dedicated-VM and Shared-VM Service Offerings.

Architecture Diagram for On-Demand Plan

This diagram shows the architecture of the service broker and on-demand plans, and how the user’s app binds to a Redis instance.

On-Demand Architecture Diagram

On-Demand Service Plans

Three On-Demand Cache Plans

On-demand plans are best suited to cache use cases and are configured as such by default.

Redis for PCF offers three on-demand plans as the p.redis service within the PCF Redis tile. Below is a description of each plan as it appears in the cf marketplace and its intended use case.

  • Small Cache Plan: A Redis instance deployed to a dedicated VM, suggested to be configured with ~1 GB of memory and >3.5 GB of persistent disk.
  • Medium Cache Plan: A Redis instance deployed to a dedicated VM, suggested to be configured with ~2 GB of memory and >10 GB of persistent disk.
  • Large Cache Plan: A Redis instance deployed to a dedicated VM, suggested to be configured with ~4 GB of memory and >14 GB of persistent disk.

For each service plan, the operator can configure the Plan name, Plan description, Server VM type and Server Disk type, or choose to disable the plan completely.
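Once a plan is enabled, an app developer can view it in the Marketplace and provision an instance with the cf CLI. This is a sketch of typical usage; the plan name `small-cache` and instance name `my-redis` are illustrative, since plan names are operator-configurable:

```shell
# List the on-demand Redis plans available in the Marketplace
cf marketplace -s p.redis

# Provision an instance of the small cache plan
# ("my-redis" is an illustrative instance name)
cf create-service p.redis small-cache my-redis
```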

Features of On-Demand Service Plans

  • Each on-demand service instance is deployed to its own VM and is suitable for production workloads.
  • The service plans are operator-configured and enabled. Once enabled, app developers can view the available plans in the Marketplace and provision a Redis instance from that plan.
  • Operators can update the cache plan settings, including the VM size and disk size, after the plans have been created.
  • Operators and app developers can change certain Redis configurations from the default. See Configuration for On-Demand Service Plans for more information.
  • The default maxmemory-policy is allkeys-lru and can be changed to other eviction policies.
  • The maximum number of instances is managed by per-plan and global quotas. The total number of instances across all plans cannot exceed 50.

Configuration of On-Demand Service Plans

For on-demand plans, certain Redis configurations can be set by the operator during plan configuration, and by the app developer during instance provisioning. Other Redis configurations cannot be changed from the default.

Operator Configurable Redis Settings

The Redis settings that an operator can configure in the tile UI include:

  • Redis Client Timeout
  • Redis TCP Keepalive
  • Max Clients
  • Lua Scripting
  • Plan Quota

For more information, see Additional Redis Configurations.

App Developer Configurable Redis Settings

The Redis settings that an app developer can configure include:

  • maxmemory-policy
  • notify-keyspace-events
  • slowlog-log-slower-than
  • slowlog-max-len

For more information, see Customize an On-Demand Service Instance.
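These settings are passed as arbitrary parameters when creating or updating a service instance. As a hedged example, changing the eviction policy on an existing instance might look like the following (the instance name `my-redis` is illustrative):

```shell
# Change maxmemory-policy on an existing on-demand instance
# via the cf CLI's arbitrary-parameters flag (-c)
cf update-service my-redis -c '{"maxmemory-policy": "noeviction"}'
```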

Operator Notes for On-Demand Service Plans

  • Instances of the on-demand plan can be deployed until their number reaches either an operator-set per-plan quota or a global quota.
  • Instances are provisioned based on the On-Demand Services SDK and service broker adapter associated with this plan.
  • maxmemory in redis.conf is set to 45% of system memory.
  • Any on-demand plan can be disabled from the plan page in Ops Manager.
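As a sketch of the maxmemory rule above, assuming a VM with 2048 MB of system memory (an assumed value, not a documented plan size):

```shell
# maxmemory is set to 45% of the VM's system memory
system_memory_mb=2048                            # assumed VM memory
maxmemory_mb=$(( system_memory_mb * 45 / 100 ))  # integer arithmetic
echo "maxmemory: ${maxmemory_mb} MB"             # prints "maxmemory: 921 MB"
```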

Known Limitations for On-Demand Service Plans

Limitations for the on-demand service include:

  • Operators must not downsize the VM or disk size, as doing so can cause data loss in pre-existing instances.

  • Operators can update certain plan settings after the plans have been created. To ensure upgrades happen across all instances, set the upgrade instances errand to On.

  • If the operator updates the VM size, disk size, or Redis configuration settings (enabling Lua scripting, max-clients, timeout, and TCP keepalive), these changes are applied to all existing instances.

  • Backups are not available for on-demand plans.

Resource Usage Planning for On-Demand Plans

Redis on-demand plans use dedicated VMs and disks, which consume IaaS resources. Operators can limit resource usage with plan quotas and a global quota, but actual usage varies with the number of on-demand instances provisioned.

If the number of on-demand instances is greater than or equal to the Global Quota set on the On-Demand Service Settings page, no new instances can be provisioned.

To calculate the maximum cost/usage for each plan:

max_plan_resources = plan_quota x plan_resources

To calculate the maximum cost across plans, add together the cost/usage for each plan. The sum of the plan quotas must stay within the global quota:

While (plan_1_quota + plan_2_quota) ≤ global_quota:
max_resources = (plan_1_quota x plan_1_resources) + (plan_2_quota x plan_2_resources)

To calculate the current IaaS cost/usage across on-demand plans:

  1. Find the number of instances currently provisioned for each plan by referencing the total_instances metric.
  2. Multiply the total_instances value for each plan by that plan’s resources, then sum across all active plans to get your total current usage:

current_usage = (plan_1_total_instances x plan_1_resources) + (plan_2_total_instances x plan_2_resources)
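The quota and usage formulas above can be checked with simple shell arithmetic. The quotas, instance counts, and per-instance disk sizes below are assumed example values, not documented defaults:

```shell
# Two plans, per-instance persistent disk in GB (assumed values)
plan_1_quota=10; plan_1_disk=4     # small cache
plan_2_quota=5;  plan_2_disk=10    # medium cache
global_quota=20

# Maximum possible disk usage, valid while the plan quotas
# together stay within the global quota
if [ $(( plan_1_quota + plan_2_quota )) -le "$global_quota" ]; then
  max_disk=$(( plan_1_quota * plan_1_disk + plan_2_quota * plan_2_disk ))
fi
echo "max disk: ${max_disk} GB"            # prints "max disk: 90 GB"

# Current usage from each plan's total_instances metric (assumed counts)
plan_1_total_instances=3
plan_2_total_instances=2
current_disk=$(( plan_1_total_instances * plan_1_disk + plan_2_total_instances * plan_2_disk ))
echo "current disk: ${current_disk} GB"    # prints "current disk: 32 GB"
```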

Lifecycle for On-Demand Service Plan

The image below shows the lifecycle of Redis for PCF, from an operator installing the tile, through an app developer using the service, to an operator deleting the tile.

On-Demand Lifecycle Diagram
