Managing Application Configuration Service for VMware Tanzu Resources

This topic describes how to create, configure, and update ClusterConfigProvider and ConfigurationSlice resources for Application Configuration Service for VMware Tanzu.

Create a Configuration Provider

To create a configuration provider, create a file named my-config-provider.yaml with the following YAML definition:

apiVersion: "acs.tanzu.vmware.com/v1beta1"
kind: ClusterConfigProvider
metadata:
  name: cook-config-provider
spec:
  backends:
    - type: git
      uri: https://github.com/spring-cloud-services-samples/cook-config

For the metadata.name value, substitute your desired configuration provider name.

Next, apply this resource to your Kubernetes cluster:

$ kubectl apply -f my-config-provider.yaml

This creates a cluster-scoped ClusterConfigProvider resource. The provider takes no further action until you create a configuration slice that references it.
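You can confirm that the provider was created by listing the cluster-scoped resources (this assumes the CRD registers the plural name clusterconfigproviders):

```
$ kubectl get clusterconfigproviders
```

The output should include the provider name you chose, for example cook-config-provider.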

Create a Configuration Slice

To create a configuration slice, create a file named my-config-slice.yaml with the following YAML definition:

apiVersion: "acs.tanzu.vmware.com/v1beta1"
kind: ConfigurationSlice
metadata:
  name: cook-config-slice
spec:
  provider: cook-config-provider
  content:
  - cook/production

For the metadata.name value, substitute your desired configuration slice name. Likewise, for the spec.provider value, substitute the name of the provider resource created in the previous section.

The list under spec.content specifies one or more slices of configuration to be pulled from the referenced provider. Each content entry should match the following format:

{APP NAME}/{PROFILE NAME}/{LABELS}
  • {APP NAME} — The name of an application for which the configuration is being retrieved. If “application”, then this is considered the default application and includes configuration shared across multiple applications. Any other value specifies a specific application (in the example above, “cook” is the application name) and includes properties for the specified application as well as shared properties from the default application.
  • {PROFILE NAME} — The name of a profile for which properties may be retrieved. If “default” or “*”, then this includes properties that are shared across all profiles. Any other value includes properties for the specified profile as well as properties for the default profile.
  • {LABELS} — A comma-separated list of labels from which to retrieve properties. If not specified, then the default is to pull properties from a branch named “master”. If specified, then properties will be retrieved from all listed labels, but not from the “master” branch (unless it is included in the list).
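Putting these rules together, here are a few hypothetical content entries (branch names other than “master” are illustrative, not part of the sample repository):

```
content:
- application/default       # shared, default-profile properties from the "master" branch
- cook/production           # "cook" production properties plus defaults, from "master"
- cook/production/develop   # the same, but pulled from a "develop" branch instead
- cook/default/v1.0,master  # default-profile properties from both "v1.0" and "master"
```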

Next, apply this resource to your Kubernetes cluster:

$ kubectl apply -f my-config-slice.yaml

This creates a namespace-scoped ConfigurationSlice resource in the currently active namespace. After a few seconds, a new ConfigMap with the same name as the ConfigurationSlice appears in the same namespace, containing the properties retrieved according to the ConfigurationSlice and its referenced ClusterConfigProvider.

You can verify the creation of the ConfigMap like this:

$ kubectl get configmaps
NAME                DATA   AGE
cook-config-slice   1      19s

Use the kubectl describe command to view the contents of the ConfigMap:

$ kubectl describe configmap cook-config-slice
Name:         cook-config-slice
Namespace:    my-app-ns
Labels:       configslice.name=cook-config-slice
              configslice.namespace=my-app-ns
Annotations:  <none>

Data
====
cook.special:
----
Cake a la mode
Events:  <none>

This ConfigMap resource can then be consumed by any means fitting for your application(s), including mounting it as a volume in the application pod.
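As one alternative to a volume mount, the standard Kubernetes envFrom field can expose the ConfigMap’s entries as environment variables. This is only a sketch: keys containing characters that are invalid in environment variable names (such as the dot in cook.special) are skipped by Kubernetes, so this approach suits slices whose keys follow environment-variable naming conventions:

```
containers:
- name: k8s-kafe-container
  image: demo/k8s-kafe:0.0.1-SNAPSHOT
  envFrom:
  - configMapRef:
      name: cook-config-slice
```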

Mounting a ConfigMap

In your application’s pod manifest, add a volumeMounts entry to the pod’s container. For example:

volumeMounts:
- name: cook-config
  mountPath: /etc/config

You’ll also need to define the volume specification on the pod itself. For example:

volumes:
- name: cook-config
  configMap:
    name: cook-config-slice

Together, these two settings mount the specified ConfigMap into the pod at the /etc/config folder. From there, your application can consume that configuration as appropriate for the language or framework it is built on.

In the case of a Spring Boot application, you can specify the SPRING_CONFIG_IMPORT environment variable to have the mounted configuration included among the property sources in the Spring environment. On the container specification, set the environment variable like this:

env:
- name: SPRING_CONFIG_IMPORT
  value: "configtree:/etc/config/"

The complete deployment manifest for such an application is as follows:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-kafe-deploy
  labels:
    app: k8s-kafe
spec:
  replicas: 1
  selector:
    matchLabels:
      app: k8s-kafe
  template:
    metadata:
      labels:
        app: k8s-kafe
    spec:
      containers:
      - name: k8s-kafe-container
        image: demo/k8s-kafe:0.0.1-SNAPSHOT
        env:
        - name: SPRING_CONFIG_IMPORT
          value: "configtree:/etc/config/"
        volumeMounts:
        - name: cook-config
          mountPath: /etc/config
      volumes:
      - name: cook-config
        configMap:
          name: cook-config-slice
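Finally, apply the deployment just as you did the other resources (assuming the manifest above is saved as my-app-deployment.yaml, a hypothetical file name):

```
$ kubectl apply -f my-app-deployment.yaml
```

Once the pod is running, the properties from the cook-config-slice ConfigMap are available to the application under /etc/config.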