Installing and Configuring New Relic Cluster Monitoring for VMware Tanzu

This topic describes how to install and configure New Relic Cluster Monitoring for VMware Tanzu.

Installation Prerequisites

You need basic knowledge of the following tools and technologies:

  • Kubernetes and the kubectl CLI
  • Helm
  • Docker
  • New Relic

Installation and Configuration of New Relic Cluster Monitoring for VMware Tanzu

Perform the instructions in this section to install New Relic Cluster Monitoring for VMware Tanzu.

  • Obtain your New Relic account license key
    • Log in to the New Relic account intended for use with NRI Kubernetes cluster monitoring
    • At the top right, click the pull-down menu and select “Account Settings”
    • The license key appears on the right side of the page, toward the middle

License Key in New Relic UI

  • Obtain the current Kubernetes cluster context using the kubectl CLI

    kubectl config current-context
    • If the current cluster is not the correct target cluster, perform the following:

      • Get a list of available clusters

        kubectl config get-clusters
      • Set the current context to the desired cluster from the list returned by the previous command

      kubectl config use-context <DESIRED_CLUSTER>
  • Download New Relic Cluster Monitoring for VMware Tanzu (new-relic-cluster-monitoring-x.x.x.tgz) from Pivotal Network. Ensure that “x.x.x”, the version of the package downloaded from Pivotal Network, matches the version of the New Relic Infrastructure Helm chart.

  • Extract the downloaded package. This creates a subdirectory named “new-relic-cluster-monitoring-x.x.x”

    tar xzf new-relic-cluster-monitoring-x.x.x.tgz
  • Change directory to “new-relic-cluster-monitoring-x.x.x”

    cd new-relic-cluster-monitoring-x.x.x
  • The installation folder includes a yaml file called nr-install-params.yaml. Open this file in a text editor of your choice and assign proper values to the following properties:

    licenseKey                             # New Relic account's license key
    cluster                                # kubernetes cluster name
    config.custom_attributes.platform      # platform name (e.g., pks)
    config.custom_attributes.cluster       # kubernetes cluster name
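    For reference, a filled-in nr-install-params.yaml might look like the following sketch. All values here are hypothetical placeholders, and it assumes the dotted property names above map to nested YAML keys; substitute your own license key and cluster name:

    ```yaml
    # Hypothetical values for illustration only; substitute your own.
    licenseKey: 0123456789abcdef0123456789abcdef01234567
    cluster: my-pks-cluster
    config:
      custom_attributes:
        platform: pks
        cluster: my-pks-cluster
    ```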
  • Perform the instructions in this section only if your cluster does not have access to a public Docker registry (e.g., Docker Hub) and uses private registries

    Note: If your Kubernetes cluster has access to Docker Hub, you can skip this step and continue with the Helm installation steps below

    • If your network does not have access to a public Docker registry and you use an internal/private registry for Docker images, follow the instructions in this step to tag and push the New Relic Infrastructure image for Kubernetes to your private Docker registry. For your convenience, a Docker image of the New Relic Infrastructure agent is included in New Relic Cluster Monitoring for VMware Tanzu.
    • Run the following docker command to load the image into your local Docker image store

      docker load -i newrelic-infrastructure-k8s-image-x.x.x.tgz

      Where “x.x.x” is the version of the New Relic Infrastructure image for Kubernetes provided in the PivNet package

    • Tag the image with your private docker registry address

      docker tag newrelic/infrastructure-k8s:x.x.x <__PRIVATE_DOCKER_REGISTRY_ADDRESS__>/<__FULL_PATH_TO__>/newrelic/infrastructure-k8s:x.x.x

    • Run the following command to push New Relic Infrastructure docker image to your private registry

      docker push <__PRIVATE_DOCKER_REGISTRY_ADDRESS__>/<__FULL_PATH_TO__>/newrelic/infrastructure-k8s:x.x.x
    • Edit nr-install-params.yaml, then uncomment and update image.repository with the full name of the New Relic Infrastructure image in your private registry.

    • If your private registry requires credentials, uncomment the imageCredentials section in nr-install-params.yaml and provide values for secret name, registry, username, and password.
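    Taken together, the load/tag/push sequence above can be sketched as follows. The registry address, path, and version here are hypothetical, and the docker commands are shown commented out because they require Docker and the downloaded image archive:

    ```shell
    # Hypothetical registry address and image version; substitute your own.
    REGISTRY=registry.example.com
    VERSION=1.0.0
    TARGET_IMAGE="${REGISTRY}/newrelic/infrastructure-k8s:${VERSION}"

    # The actual commands (require Docker and the PivNet image archive):
    # docker load -i "newrelic-infrastructure-k8s-image-${VERSION}.tgz"
    # docker tag "newrelic/infrastructure-k8s:${VERSION}" "$TARGET_IMAGE"
    # docker push "$TARGET_IMAGE"

    echo "$TARGET_IMAGE"
    ```

    The value printed by the last line is what image.repository (minus the tag) should be set to in nr-install-params.yaml.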

  • Make sure Helm is installed on your machine. You can download the latest release from the Helm GitHub releases page.

  • A service account is used for Tiller (Helm's service that runs inside the Kubernetes cluster) for role-based access control (RBAC). If you have an existing service account you want to use, uncomment the corresponding property in nr-install-params.yaml and replace "__MY_SERVICEACCOUNT_NAME__" with your service account name. Otherwise, a service account name is generated using the fullname template.


  • Save the installation parameters yaml file; it is used in the following steps to install the Helm chart.

  • If this is the first time you are using Helm on this cluster, initialize it so that tiller is installed on the cluster

    helm init
  • Run the following helm install command with the "--dry-run" and "--debug" switches to make sure everything in the chart is configured properly. The "--dry-run" switch tests the command and its arguments without installing anything:

    helm install --dry-run --debug newrelic-infrastructure-x.x.x.tgz -f nr-install-params.yaml
  • Once you have verified that everything is configured properly, run the following command to install New Relic Cluster Monitoring for VMware Tanzu. Note that the "--dry-run" and "--debug" switches are removed in this command.

    helm install newrelic-infrastructure-x.x.x.tgz -f nr-install-params.yaml
  • Wait several seconds, then run the following command to verify that the NRI daemonset and its pods have been created

    kubectl get daemonsets,pods

    You should see a daemonset, and one pod per worker node

Upgrade New Relic Infrastructure Integration for VMware PKS

To upgrade the NRI Kubernetes integration, follow the instructions below:

  • Obtain the new release of the Helm chart for the NRI Kubernetes integration from Pivotal Network
  • Edit “nr-install-params.yaml” and verify that all values are correct
  • Run the helm upgrade command to upgrade to the new chart

    helm upgrade $(helm list | grep -E "newrelic-infrastructure-[0-9]+\.[0-9]+\.[0-9]+" | awk '{print $1}') newrelic-infrastructure-x.x.x.tgz -f nr-install-params.yaml
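    The grep in the command above must use `[0-9]` character classes with `-E`, since POSIX grep does not support the `\d` shorthand without `-P`. A quick, self-contained sanity check of the release-name extraction against a simulated `helm list` output line (the release name and fields are made up):

    ```shell
    # Simulated `helm list` output line; the release name "quaint-walrus" is hypothetical.
    sample='quaint-walrus  1  Tue Jan  1 00:00:00 2019  DEPLOYED  newrelic-infrastructure-1.0.0  default'

    # Extract the release name (first column) from lines whose CHART column matches the NRI chart.
    release=$(printf '%s\n' "$sample" | grep -E 'newrelic-infrastructure-[0-9]+\.[0-9]+\.[0-9]+' | awk '{print $1}')
    echo "$release"   # quaint-walrus
    ```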

Cluster Update and Modification

When the configuration of the cluster is modified (e.g., increasing the number of worker nodes), the daemonset is adjusted automatically and NRI pods are created on the newly added nodes. The operator does not need to make any manual modifications to the NRI daemonset.