Minimal Viable Platform
Starting small is a great way to establish a container-ready infrastructure. Each of the key components is covered in detail in subsequent chapters; this chapter introduces the smallest single configuration recommended for getting started. This configuration is validated as stable and functional for timed trials, proofs-of-concept, small non-production application deployments requiring only host-level fault tolerance, and remote/branch office operation.
Starter Kit Applied to TAS for VMs
Starter Kit Applied to TKGI
This configuration includes the following key components:
- VxRail Appliances in a four node configuration
- VxRail HCI System Software deployed for server firmware and vSphere provisioning
- Leaf-Spine switching in a redundant configuration (not shown)
- vSAN storage configured as the sole storage target
- App container platform of choice, either Tanzu Application Service (TAS) for VMs or Tanzu Kubernetes Grid Integrated Edition (TKGI)
- NSX-T Data Center network virtualization and security platform for TAS, TKGI, or both
- Host groups as needed to organize components for high availability
The Starter Kit platform makes a few trade-offs worth understanding:
- Overall capacity for apps is shared with the management infrastructure
- Management and operational components are kept to a minimum to reduce waste and speed recovery after a failure event
- Storage is purposely kept simple to remain stable during patching, maintenance, and unplanned outages
- During normal operations, only a single host can be out of service at any one time
Also consider that vSphere DRS (Distributed Resource Scheduler) can be applied to steer VMs either away from each other (anti-affinity) or toward each other (affinity). Anti-affinity rules are useful for separating VMs that must survive a host outage, such as Gorouters.
When using DRS, we recommend the “should” rule rather than the “must” rule. A “should” rule allows a VM that violates it to power on anyway, instead of being denied power-on. For example, suppose you have three AZs and eleven Gorouters on a four-host cluster. With a DRS anti-affinity “should” rule, DRS first places one Gorouter per host, then doubles up as needed, and all eleven power on. With a “must” rule, only four power on; the remaining Gorouters are denied power-on because they cannot be placed on a host away from the others.
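The power-on arithmetic above can be sketched with a small simulation. This is an illustration only, not vSphere code: the function name `place_vms` and the round-robin placement strategy are assumptions chosen to mirror the example of eleven Gorouters on a four-host cluster.

```python
def place_vms(num_vms, num_hosts, rule="should"):
    """Illustrative sketch of DRS anti-affinity behavior (not vSphere logic).

    Returns a list with one entry per VM: the host index it powers on,
    or None if the VM is denied power-on.
    """
    placements = []
    for vm in range(num_vms):
        host = vm % num_hosts  # round-robin: prefer an empty host first
        if rule == "must" and vm >= num_hosts:
            # "must" anti-affinity: every host already has a rule sibling,
            # so this VM is denied power-on
            placements.append(None)
        else:
            # "should": the rule may be violated, but the VM still powers on
            placements.append(host)
    return placements

should = place_vms(11, 4, rule="should")
must = place_vms(11, 4, rule="must")
print(sum(p is not None for p in should))  # 11 — all Gorouters power on
print(sum(p is not None for p in must))    # 4 — one per host, the rest denied
```

The sketch shows why “should” is preferred for Gorouters: availability (all instances running, some co-located) usually matters more than strict separation.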
The next steps of system growth will be discussed in subsequent chapters.