Installing Velero and Restic

This topic describes how to install Velero and Restic for backing up and restoring Kubernetes workloads. It also describes how to install Minio as the storage destination for Velero with Restic backups.

Prerequisites

Read the workload backup and restore requirements before proceeding.

Prepare a Linux VM with sufficient storage to store several workload backups. You will install Minio on this VM.

It is assumed that you have a Linux TKGI client VM where CLI tools such as the TKGI CLI and kubectl are installed. You install the Velero CLI on this client VM. If you do not have such a VM, you can install the Velero CLI on your local machine, but you must adjust the installation steps accordingly.

It is assumed that the Kubernetes environment has internet access and can be reached by the client VM. If the environment does not have internet access, see Install Velero and Restic in an Air-Gapped Environment below.

Deploy an Object Store

Restic requires an object store as the backup destination for workload backups. Complete the following tasks to install and configure Minio as the backend object store. The instructions describe how to deploy the Minio server on an Ubuntu Linux VM. For more information, see the Minio quick start guide.

Install Minio

Download the Minio binary and make it executable:

wget https://dl.min.io/server/minio/release/linux-amd64/minio
chmod +x minio

Create a directory where the Minio data will be stored:

mkdir /DATA-MINIO

Start Minio

Start the Minio server:

./minio server /DATA-MINIO

Once the Minio server has started, it displays the endpoint URL, AccessKey, and SecretKey for the data store instance. Record this information.

Browse to the Minio data store using the endpoint URL, for example http://10.199.17.63:9000/minio/login/.


Provide the AccessKey and SecretKey and log in.


Enable Minio as a Service

Configure Minio for automatic startup.

Download the minio.service script:

curl -O https://raw.githubusercontent.com/minio/minio-service/master/linux-systemd/minio.service

Edit the minio.service script and set the ExecStart value to point to the Minio data directory:

ExecStart=/usr/local/bin/minio server /DATA-MINIO
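
For reference, a minimal unit file might look like the following sketch. This is an assumption rather than the exact contents of the downloaded minio.service script; keep any User, Group, or EnvironmentFile settings that the downloaded file already defines.

[Unit]
Description=Minio object storage server
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/local/bin/minio server /DATA-MINIO
Restart=always
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target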

Run the following series of commands to configure the Minio service:

cp minio.service /etc/systemd/system
cp minio /usr/local/bin/
systemctl daemon-reload
systemctl start minio
systemctl status minio
systemctl enable minio

Create Minio Bucket

Create a Minio bucket for TKGI workload backup and restore.

  1. Launch the Minio browser and log in to your object store.
  2. Click the Create bucket icon.
  3. Enter the bucket name, for example: tkgi-velero
  4. Verify that the bucket is created.


Configure Minio Bucket

By default, the policy for the new tkgi-velero bucket is read-only. Change it to read-write (for a command-line alternative using the mc client, see the sketch after these steps):

  1. Select the bucket and click on the dots link.
  2. Select Edit Policy.
  3. Change the policy to Read and Write.
  4. Click Add.
  5. Click the X button to close the dialog box.

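As a command-line alternative to the Minio browser, you can create the bucket and set the anonymous read-write policy with the MinIO client (mc), if it is installed. The following is a sketch only: tkgi-minio is an arbitrary alias name, and the subcommand names vary by mc release (older releases use mc config host add instead of mc alias set, and mc policy set instead of mc anonymous set).

mc alias set tkgi-minio http://10.199.17.63:9000 0XXNO8JCCGV41QZBV0RQ clZ1bf8Ljkvkmq7fHucrKCkxV39BRbcycGeXQDfx
mc mb tkgi-minio/tkgi-velero
mc policy set public tkgi-minio/tkgi-velero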

Install the Velero CLI on Your Workstation

Install the Velero CLI on your workstation.

Download the Velero CLI Binary

Download the supported version of the signed Velero binary from the TKGI product downloads page at myVMware.

Note: You must use the Velero binary signed by VMware to be eligible for support from VMware.

Install the Velero CLI

Install the Velero CLI on the TKGI client VM or on your local machine.

Change to the directory containing the Velero CLI download and extract the file:

gunzip velero-linux-v1.4.2_vmware.1.gz

Check for the Velero binary:

ls -l

-rw-r--r-- 1 root root 57819586 Sep 14 16:19 velero-linux-v1.4.2_vmware.1

Make the Velero binary executable:

chmod +x velero-linux-v1.4.2_vmware.1

Copy the CLI into the system path so that it is globally available:

cp velero-linux-v1.4.2_vmware.1 /usr/local/bin/velero

Verify the installation:

velero version

Client:
    Version: v1.4.2

Install Velero and Restic on the Target Kubernetes Cluster

Install the Velero and Restic pods on each Kubernetes cluster whose workloads you want to back up. The following instructions assume the use of Minio and that the Kubernetes cluster has internet access.

The Velero CLI context automatically follows the kubectl context. Before running the Velero CLI command to install Velero and Restic on the target cluster, set the kubectl context. For example, running kubectl config use-context <cluster-name> or pks get-credentials <cluster-name> tells the Velero CLI which cluster to work on.
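
For example, assuming the target cluster is named cluster-1 as in the rest of this topic, you can check and switch the kubectl context before running the Velero CLI:

kubectl config current-context
kubectl config use-context cluster-1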

Prerequisites

Before installing Velero and Restic on the Kubernetes cluster:

  • Get the name of the Minio bucket; it is tkgi-velero in this example.
  • Get the keys for the Minio bucket, for example:
    • AccessKey: 0XXNO8JCCGV41QZBV0RQ
    • SecretKey: clZ1bf8Ljkvkmq7fHucrKCkxV39BRbcycGeXQDfx
  • Make sure kubectl works against the cluster (use tkgi get-credentials if needed).

Procedure

Get the context for the target Kubernetes cluster so that the Velero CLI knows which cluster to work on. Replace cluster-1 with the name of the target cluster.

pks get-credentials cluster-1

Fetching credentials for cluster cluster-1.
Password: ********
Context set for cluster cluster-1.

You can now switch between clusters by using:
$kubectl config use-context <cluster-name>

Create a secrets file named credentials-minio with the credentials to access the Minio server. The file uses the AWS shared-credentials format, so the keys go under a [default] profile:

[default]
aws_access_key_id = 0XXNO8JCCGV41QZBV0RQ
aws_secret_access_key = clZ1bf8Ljkvkmq7fHucrKCkxV39BRbcycGeXQDfx

Verify:

ls

credentials-minio

Run the following command to install Velero and Restic on the target Kubernetes cluster:

velero install \
--provider aws \
--plugins velero/velero-plugin-for-aws:v1.0.0 \
--bucket tkgi-velero \
--secret-file ./credentials-minio \
--use-volume-snapshots=false \
--use-restic \
--backup-location-config \
region=minio,s3ForcePathStyle="true",s3Url=http://10.199.17.63:9000,publicUrl=http://10.199.17.63:9000

Sample expected result:

CustomResourceDefinition/backups.velero.io: created
...
Waiting for resources to be ready in cluster...
...
DaemonSet/restic: created
Velero is installed! Use 'kubectl logs deployment/velero -n velero' to view the status.

Verify

Verify the installation of Velero and Restic.

kubectl logs deployment/velero -n velero

Verify that the velero namespace was created:

kubectl get ns

NAME              STATUS   AGE
default           Active   13d
kube-node-lease   Active   13d
kube-public       Active   13d
kube-system       Active   13d
pks-system        Active   13d
velero            Active   2m38s

Check the velero and restic pods. The restic pods show CrashLoopBackOff until you complete the configuration in Enable Privileged Mode and Modify the Host Path below:

kubectl get all -n velero

NAME                         READY   STATUS             RESTARTS   AGE
pod/restic-25chx             0/1     CrashLoopBackOff   1          30s
pod/restic-rpxcp             0/1     CrashLoopBackOff   1          30s
pod/restic-wfxmg             0/1     CrashLoopBackOff   1          30s
pod/velero-8dc7498d9-9v7x4   1/1     Running            0          30s

Enable Privileged Mode and Modify the Host Path

To run the 3-pod Restic DaemonSet on a Kubernetes cluster in TKGI, you must modify the Restic DaemonSet spec. For more information about why this is needed, see the Restic documentation. In addition, the hostPath must be changed from /var/lib/kubelet/pods to /var/vcap/data/kubelet/pods. Instructions for both configurations are provided below.

You must also enable the Allow Privileged option in your plan configuration so that Restic can mount the hostPath. See Plan Configuration in the TKGI documentation for details. If you change the plan, you must redeploy the cluster. Because you are allowing privileged containers in the cluster, you should apply a pod security policy (PSP) to control pod access. See Enabling Pod Security Policy.

Verify the 3-pod Restic DaemonSet:

kubectl get pod -n velero

NAME                          READY   STATUS             RESTARTS   AGE
pod/restic-p5bdz              0/1     CrashLoopBackOff   4          3m8s
pod/restic-rbmnd              0/1     CrashLoopBackOff   4          3m8s
pod/restic-vcpjm              0/1     CrashLoopBackOff   4          3m8s
pod/velero-68f47744f5-lb5df   1/1     Running            0          3m8s

Run the following command:

kubectl edit daemonset restic -n velero

Change hostPath from /var/lib/kubelet/pods to /var/vcap/data/kubelet/pods:

- hostPath:
    path: /var/vcap/data/kubelet/pods

Save the file.
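
If you prefer a non-interactive change, a kubectl patch can apply the same edit. The following is a sketch that assumes the host-pods hostPath volume is the first entry under spec.template.spec.volumes in the generated DaemonSet; confirm the volume order with kubectl get daemonset restic -n velero -o yaml before running it.

kubectl patch daemonset restic -n velero --type json \
-p '[{"op": "replace", "path": "/spec/template/spec/volumes/0/hostPath/path", "value": "/var/vcap/data/kubelet/pods"}]'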

Verify the 3-pod Restic DaemonSet:

kubectl get pod -n velero

NAME                      READY   STATUS    RESTARTS   AGE
restic-5jln8              1/1     Running   0          73s
restic-bpvtq              1/1     Running   0          73s
restic-vg8j7              1/1     Running   0          73s
velero-68f47744f5-lb5df   1/1     Running   0          10m

Adjust Velero Memory Limits (if necessary)

By default, Velero sets the memory limit to 256Mi and the memory request to 128Mi. If a Velero backup remains in the InProgress status for many hours, increase these limits.

kubectl edit deployment/velero -n velero 

Change the memory limit to 512Mi and the memory request to 256Mi.
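
As an alternative to editing the deployment interactively, kubectl set resources applies the same change; the values below match the limits described above.

kubectl set resources deployment/velero -n velero --limits=memory=512Mi --requests=memory=256Mi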


Install Velero and Restic in an Air-Gapped Environment

If you are working in an air-gapped environment, you can install Velero and Restic using an internal registry.

Prerequisites

  • A private container registry is installed and configured. The instructions use Harbor.
  • Docker is installed on the workstation or TKGI jump host.
  • Get the name of the Minio bucket; it is tkgi-velero in this example.
  • Get the keys for the Minio bucket, for example:
    • AccessKey: 0XXNO8JCCGV41QZBV0RQ
    • SecretKey: clZ1bf8Ljkvkmq7fHucrKCkxV39BRbcycGeXQDfx
  • Make sure kubectl works against the cluster (use tkgi get-credentials if needed)

Procedure

  1. Download the Velero CLI and the Velero and Restic Docker images from VMware at https://www.vmware.com/:

    • velero-plugin-for-aws-v1.1.0-beta.1.tar.gz
    • velero-v1.4.2_vmware.1.tar.gz
    • velero-restic-restore-helper-v1.4.2_vmware.1.tar.gz

      Note: You must use the container images signed by VMware to be eligible for support from VMware.

  2. Push the Docker images to the internal registry. Adjust the registry name and project as needed for your registry instance; the examples use harbor.example.com.

    docker login harbor.example.com
    docker load -i velero-plugin-for-aws-v1.1.0-beta.1.tar.gz
    docker tag vmware-tanzu/velero-plugin-for-aws:v1.1.0-beta.1 harbor.example.com/vmware-tanzu/velero-plugin-for-aws:v1.1.0-beta.1
    docker load -i velero-restic-restore-helper-v1.4.2_vmware.1.tar.gz
    docker tag vmware-tanzu/velero-restic-restore-helper:v1.4.2_vmware.1 harbor.example.com/vmware-tanzu/velero-restic-restore-helper:v1.4.2_vmware.1
    docker load -i velero-v1.4.2_vmware.1.tar.gz
    docker tag vmware-tanzu/velero:v1.4.2_vmware.1 harbor.example.com/vmware-tanzu/velero:v1.4.2_vmware.1
    docker push harbor.example.com/vmware-tanzu/velero-plugin-for-aws:v1.1.0-beta.1
    docker push harbor.example.com/vmware-tanzu/velero-restic-restore-helper:v1.4.2_vmware.1
    docker push harbor.example.com/vmware-tanzu/velero:v1.4.2_vmware.1
    
  3. Create a secrets file named credentials-minio with the credentials to access the Minio server.

    [default]
    aws_access_key_id = 0XXNO8JCCGV41QZBV0RQ
    aws_secret_access_key = clZ1bf8Ljkvkmq7fHucrKCkxV39BRbcycGeXQDfx
    
  4. Install Velero with Restic. Refer to the On-Premises Environments instructions in the Velero documentation, and the Restic installation instructions.

    • You can install Velero using the following command. Adjust the variables for your environment.
      velero install --image harbor.example.com/vmware-tanzu/velero:v1.4.2_vmware.1 --plugins harbor.example.com/vmware-tanzu/velero-plugin-for-aws:v1.1.0-beta.1 --provider aws --bucket velero --secret-file ./credentials-minio --use-volume-snapshots=false --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://20.20.224.27:9000,publicUrl=http://20.20.224.27:9000 --use-restic
      
    • Expected result:
      Velero is installed! Use 'kubectl logs deployment/velero -n velero' to view the status.
      
  5. Perform post-installation configurations.

After the installation, customize the Restic helper container so that it is used as the init container for pods during the restore process. You can do this by creating a ConfigMap and applying it in the velero namespace, for example kubectl apply -f restic-cm.yaml -n velero. Download the example ConfigMap restic-cm.yaml provided for this purpose; a sketch of its format follows.
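
The following is a sketch of such a ConfigMap, based on the restore helper customization format in the Velero documentation. The ConfigMap name and the image path pointing at the internal registry are example values; the downloaded restic-cm.yaml is the supported reference.

apiVersion: v1
kind: ConfigMap
metadata:
  name: restic-restore-action-config
  namespace: velero
  labels:
    velero.io/plugin-config: ""
    velero.io/restic: RestoreItemAction
data:
  image: harbor.example.com/vmware-tanzu/velero-restic-restore-helper:v1.4.2_vmware.1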

After the installation, modify the host path by editing the Restic DaemonSet manifest. Replace /var/lib/kubelet/pods with /var/vcap/data/kubelet/pods, and verify that the Restic pods are running. See Enable Privileged Mode and Modify the Host Path.

Optionally, for large backups (such as 1 TB), you can edit the Velero deployment manifest and add '- --restic-timeout=900m' to the args of the Velero container under spec.template.spec.containers, as sketched below.
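
For example, after the edit the Velero container spec might look like the following abbreviated sketch; surrounding fields are omitted.

spec:
  template:
    spec:
      containers:
      - name: velero
        args:
        - server
        - --restic-timeout=900m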

Optionally, depending on your requirements, you can adjust the CPU and memory reserves and limits for the Velero and Restic pods. See Adjust Velero Memory Limits (if necessary).

Restic pod:

resources:
  limits:
    cpu: "1"
    memory: 1Gi
  requests:
    cpu: 500m
    memory: 512Mi

Velero pod:

resources:
  limits:
    cpu: "1"
    memory: 256Mi
  requests:
    cpu: 500m
    memory: 128Mi

Please send any feedback you have to pks-feedback@pivotal.io.