The design patterns in this section are intended as overviews of common and useful configurations. Effective implementations will require planning—consult with a system architect to fill in the design details for your app.
An inline cache places the caching layer between the app and the backend data store.
The app performs CRUD (create, read, update, delete) operations on its data. The app’s implementation of the CRUD operations results in cache operations that break down into cache lookups (reads) and/or cache writes.
The algorithm for a cache lookup quickly returns the cache entry when the entry is in the cache. This is a cache hit. If the entry is not in the cache, it is a cache miss, and code on the cache server retrieves the entry from the backend data store. In the typical implementation, the entry returned from the backend data store on a cache miss is written to the cache, such that subsequent cache lookups of that same entry result in cache hits.
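The lookup path above can be sketched in plain Java. This is an illustrative model, not GemFire API: the map stands in for the cache, and the function stands in for the cache-server code that reaches the backend data store on a miss.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Hypothetical sketch of the inline-cache lookup path: a hit returns the
// cached value directly; a miss loads the entry from the backend store and
// writes it into the cache so that later lookups of the same key hit.
public class InlineCacheLookup {
    private final Map<String, String> cache = new HashMap<>();
    private final Function<String, String> backendStore; // stands in for the backend data store

    public InlineCacheLookup(Function<String, String> backendStore) {
        this.backendStore = backendStore;
    }

    public String lookup(String key) {
        String value = cache.get(key);
        if (value != null) {
            return value;                // cache hit
        }
        value = backendStore.apply(key); // cache miss: read the backend store
        if (value != null) {
            cache.put(key, value);       // populate so subsequent lookups hit
        }
        return value;
    }
}
```

The second lookup of the same key is served entirely from the cache; the backend is consulted only on the first miss.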
The implementation for a cache write typically creates or updates the entry within the cache. It also creates or updates the data store in one of the following ways:
- Synchronously, in a write-through manner. Each write operation from the app is sent on to be written to the backend data store. After the backend data store write finishes, the value is also written to the cache. The app blocks until the writes to both the backend data store and the cache complete.
- Asynchronously, in a write-behind manner. The cache gets updated, and the value to be written to the backend data store gets queued. Control then returns to the app, which continues independent of the write to the backend data store.
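The two write policies can be contrasted in a minimal sketch. Class and method names here are illustrative, not GemFire API; the queue models the write-behind buffer, and `flush` stands in for the background process that drains it.

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;

// Hypothetical sketch contrasting the write-through and write-behind
// policies described above.
public class CacheWrites {
    final Map<String, String> cache = new HashMap<>();
    final Map<String, String> backend = new HashMap<>();
    final Queue<Map.Entry<String, String>> writeQueue = new ArrayDeque<>();

    // Write-through: the backend store is written first, then the cache;
    // the caller blocks until both writes complete.
    void writeThrough(String key, String value) {
        backend.put(key, value);
        cache.put(key, value);
    }

    // Write-behind: the cache is updated and the backend write is queued;
    // control returns before the backend sees the value.
    void writeBehind(String key, String value) {
        cache.put(key, value);
        writeQueue.add(Map.entry(key, value));
    }

    // Drains the queue, e.g. from a background thread in a real system.
    void flush() {
        Map.Entry<String, String> e;
        while ((e = writeQueue.poll()) != null) {
            backend.put(e.getKey(), e.getValue());
        }
    }
}
```

After `writeBehind`, the cache and backend disagree until the queue is drained; that window is the consistency trade-off the write-behind policy accepts in exchange for lower write latency.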
Developers design the server code to implement this inline-caching pattern. See Setting Up Servers for an Inline Cache for details about the custom server code and how to configure an inline cache.
The look-aside pattern of caching places the app in charge of communication with both the cache and the backend data store.
The app performs CRUD (create, read, update, delete) operations on its data. That data may be
- in both the data store and the cache
- in the data store, but not in the cache
- not in either the data store or the cache
The app’s implementation of the CRUD operations results in cache operations that break down into cache lookups (reads) and/or cache writes.
The algorithm for a cache lookup returns the cache entry when the entry is in the cache. This is a cache hit. If the entry is not in the cache, it is a cache miss, and the app attempts to retrieve the entry from the data store. In the typical implementation, the entry returned from the backend data store is written to the cache, such that subsequent cache lookups of that same entry result in cache hits.
The look-aside pattern of caching leaves the app free to implement whatever behavior it chooses when the data store does not have the entry.
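An app-side look-aside read can be sketched as follows. Unlike the inline pattern, the app itself talks to both the cache and the data store; the names and the `Optional` return for a total miss are illustrative choices, not GemFire API.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Hypothetical sketch of a look-aside read: the app checks the cache first,
// falls back to the data store on a miss, and populates the cache with any
// value it finds. An empty Optional models the "not in either" case, which
// the app is free to handle however it chooses.
public class LookAsideRead {
    final Map<String, String> cache = new HashMap<>();
    final Map<String, String> dataStore = new HashMap<>();

    Optional<String> read(String key) {
        String value = cache.get(key);
        if (value != null) {
            return Optional.of(value);   // cache hit
        }
        value = dataStore.get(key);      // cache miss: app reads the data store
        if (value != null) {
            cache.put(key, value);       // populate for later hits
            return Optional.of(value);
        }
        return Optional.empty();         // miss in both cache and data store
    }
}
```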
The algorithm for a cache write implements one of these:
- The entry is either updated or created within the data store, and the entry is updated within or written to the cache.
- The entry is either updated or created within the backend data store, and the copy currently within the cache is invalidated.
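The two write strategies listed above can be sketched side by side. These are illustrative methods on plain Java maps, not GemFire API: one refreshes the cached copy along with the data store, the other invalidates it so the next read repopulates it.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the two look-aside write strategies.
public class LookAsideWrite {
    final Map<String, String> cache = new HashMap<>();
    final Map<String, String> dataStore = new HashMap<>();

    // Strategy 1: write the data store and refresh the cached copy.
    void writeAndUpdate(String key, String value) {
        dataStore.put(key, value);
        cache.put(key, value);
    }

    // Strategy 2: write the data store and invalidate the cached copy;
    // a later cache miss reloads the fresh value from the data store.
    void writeAndInvalidate(String key, String value) {
        dataStore.put(key, value);
        cache.remove(key);
    }
}
```

Invalidation trades an extra future cache miss for a cheaper write, and avoids caching values that may never be read again.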
Note: SDG (Spring Data GemFire) supports the look-aside pattern, as detailed at Configuring Spring’s Cache Abstraction.
Two VMware Tanzu GemFire for VMs service instances may be connected across a WAN to form a single distributed system with asynchronous communication. The cluster within each of the Tanzu GemFire service instances hosts the same region. Updates to either Tanzu GemFire service instance are propagated across the WAN to the other Tanzu GemFire service instance. The distributed system implements eventual consistency for the region and also handles write conflicts, which occur when a single region entry is modified in both Tanzu GemFire service instances at the same time.
In this active-active system, an external entity implements load-balancing by directing app connections to one of the two service instances. If one of the Tanzu GemFire service instances fails, apps may be redirected to the remaining service instance.
This diagram shows multiple instances of an app interacting with one of the two Tanzu GemFire service instances, cluster A and cluster B. Any change made in cluster A is sent to cluster B, and any change made in cluster B is sent to cluster A.
Two Tanzu GemFire service instances may be connected across a WAN to form a single distributed system with asynchronous communication. An expected use case propagates all changes to a region’s data from the cluster within one service instance (the primary) to the other, where both service instances reside in the same foundation. The replicate increases the fault tolerance of the system by acting as a “hot” spare. If an entire data center or an availability zone fails, you can rebind apps to the replicate and restage them. The replicate then takes over as the primary.
In this diagram, cluster A is primary, and it replicates all data across a WAN to cluster B.
If cluster A fails, you can manually rebind and restage the apps so that cluster B takes over.
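The manual failover can be driven with the Cloud Foundry CLI. The app and service instance names below are hypothetical placeholders; substitute the names in your foundation. This is an illustrative ops fragment, not a complete runbook.

```shell
# Rebind the app from the failed primary to the replicate, then restage.
# "my-app", "gemfire-primary", and "gemfire-replicate" are placeholder names.
cf unbind-service my-app gemfire-primary
cf bind-service my-app gemfire-replicate
cf restage my-app
```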
Two Tanzu GemFire service instances may be connected across a WAN to form a single distributed system that implements a CQRS (Command Query Responsibility Segregation) pattern. Within this pattern, commands are operations that change state, where state is represented by region contents. All region operations that change state are directed to the cluster within one Tanzu GemFire service instance. The changes are propagated asynchronously to the cluster within the other Tanzu GemFire service instance via WAN replication, and that other cluster provides only query access to the region data.
This diagram shows an app that may update the region within the Tanzu GemFire service instance of cluster A. Changes are propagated across the WAN to cluster B. The app bound to cluster B may only query the region data; it will not create entries or update the region.
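The command/query split can be modeled with two maps standing in for the two clusters’ regions. This is a hypothetical sketch of the pattern, not GemFire API; `replicate` stands in for the asynchronous WAN replication from cluster A to cluster B.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical CQRS sketch: commands (state-changing operations) go only to
// the command side (cluster A); the query side (cluster B) receives changes
// asynchronously and is never written to directly by apps.
public class CqrsRegions {
    final Map<String, String> commandRegion = new HashMap<>(); // cluster A
    final Map<String, String> queryRegion = new HashMap<>();   // cluster B

    void handleCommand(String key, String value) {
        commandRegion.put(key, value); // only the command side mutates state
    }

    // Stands in for asynchronous WAN replication from A to B.
    void replicate() {
        queryRegion.putAll(commandRegion);
    }

    String handleQuery(String key) {
        return queryRegion.get(key);   // the query side only reads
    }
}
```

The window between `handleCommand` and `replicate` models the asynchrony: queries against cluster B may briefly lag behind commands applied to cluster A.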
Multiple Tanzu GemFire service instances connected across a WAN form a single hub and a set of spokes. This diagram shows Tanzu GemFire service instance A is the hub, and Tanzu GemFire service instances B, C, and D are spokes.
A common implementation that uses this topology directs all app operations that write or update region contents to the hub. Writes and updates are then propagated asynchronously across the WAN from the hub to the spokes.
Performance improves when operation requests originate close to the service instance that handles them. Yet many data sets are relevant and used all over the world. If the most active location for write and update operations moves over the course of a day, then a performant design pattern is a variation on the hub-and-spoke implementation that moves the hub role to the Tanzu GemFire service instance nearest the most active location.
Form a ring that contains each Tanzu GemFire service instance that will act as the hub. Define a token to identify the hub. Over time, pass the token from one Tanzu GemFire service instance to the next, around the ring.
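The token-passing ring can be sketched as follows. This is an illustrative model of the rotation, not GemFire API: the instance at the token index is the hub, and passing the token advances the hub role around the ring.

```java
import java.util.List;

// Hypothetical sketch of the rotating-hub ring: the service instance holding
// the token is the hub; passing the token moves the hub role to the next
// instance, wrapping around at the end of the ring.
public class HubRing {
    private final List<String> ring; // e.g. instances ["A", "B", "C", "D"]
    private int tokenIndex = 0;      // index of the instance holding the token

    public HubRing(List<String> ring) {
        this.ring = ring;
    }

    public String currentHub() {
        return ring.get(tokenIndex);
    }

    // Pass the token to the next instance in the ring.
    public String passToken() {
        tokenIndex = (tokenIndex + 1) % ring.size();
        return currentHub();
    }
}
```

Directing writes and updates to `currentHub()` at any moment reproduces the behavior shown in the diagrams: A is the hub while it holds the token, and B becomes the hub once the token passes.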
This diagram shows Tanzu GemFire service instance A is the hub, as it has the token, represented in this diagram as a star. Tanzu GemFire service instances B, C, and D are spokes. Write and update operations are directed to the hub.
This diagram shows that the token has passed from A to B, and B has become the hub.