Pivotal Ingress Router Release Notes
- Releases
- Limitations
- Known Issues
- Upgrade fails to apply istio-init jobs
- Install fails with 'no matches for kind "PortAllocation"'
- Workload routing controllers stuck in `CrashLoopBackOff` with panic: No Auth Provider found for name "oidc"
- Workload routing controller stuck in `CrashLoopBackOff` with error: unable to add a new VirtualService controller to the manager
- Pilot fails to configure Envoy Proxy on istio-ingressgateway pod
Releases
v0.5.0
- Tested with PKS 1.5
v0.4.0
- [Feature] (Experimental): TCP Routes can be created to workloads on any cluster when the routing to workloads feature is enabled.
- Routes are assigned a dedicated port from a set of configurable port ranges.
- Users on workload clusters define routes with an Istio VirtualService (see the sketch after this list).
- The Istio VirtualService CRD is installed automatically on each cluster when this feature is enabled.
- [Feature] For high availability, the ingress gateway replica count now defaults to 3 and is configurable.
- [Feature] Clusters can now be created using hostnames that are from different domains.
- To support multiple domains, wildcard DNS for each domain can be configured to point to the IP address of the load balancer in front of the ingress gateway.
- The clusterTLD parameter previously used to define the domain for clusters has been removed. The cluster registrar now uses the full name configured as the hostname when a PKS cluster is created.
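The following is a minimal sketch of what a user on a workload cluster might apply to define a TCP route, expressed as a generic Istio networking.istio.io/v1alpha3 VirtualService. The route name, hostname, destination service, and port are illustrative assumptions, and the exact fields Ingress Router expects may differ:
$ kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-tcp-route                              # hypothetical route name
spec:
  hosts:
  - my-app.example.com                            # hypothetical route hostname
  tcp:
  - route:
    - destination:
        host: my-app.default.svc.cluster.local    # hypothetical workload Service
        port:
          number: 8080                            # hypothetical workload port
EOF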
v0.3.0
- [Feature Improvement] Installation no longer requires a certificate to be created and configured for the ingress gateway.
v0.2.0
- [Bug Fix] Fix for Kubectl returns error: the server doesn't have a resource type "services" due to the cluster registrar failing to reach the PKS API after the token expires
- [Bug Fix] Fix for Kubectl returns connection refused for new clusters due to master-routing-controller failing to configure Istio pilot
- [Bug Fix] Fix for Calling pks update-cluster or pks resize deregisters the cluster
v0.1.0
This is the initial beta release of Pivotal Ingress Router.
- [Feature] Allows users of PKS immediate access to the Kubernetes API for newly created clusters.
Limitations
Limited IaaS support
- Pivotal Ingress Router has been tested on vSphere (without NSX-T) and on GCP.
- Other infrastructure may work, but support is limited at this time.
Known Issues
Upgrade fails to apply istio-init jobs
Symptom
The kubectl apply command fails when upgrading Pivotal Ingress Router.
Error from server (Invalid): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"batch/v1\",\"kind\":\"Job\",\"metadata\":{\"annotations\":{},\"name\":\"istio-init-crd-10\",\"namespace\":\"ingress-router-system\"},\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"sidecar.istio.io/inject\":\"false\"}},\"spec\":{\"containers\":[{\"command\":[\"kubectl\",\"apply\",\"-f\",\"/etc/istio/crd-10/crd-10.yaml\"],\"image\":\"gcr.io/cf-routing-desserts/kubectl:1.2.3\",\"imagePullPolicy\":\"Always\",\"name\":\"istio-init-crd-10\",\"volumeMounts\":[{\"mountPath\":\"/etc/istio/crd-10\",\"name\":\"crd-10\",\"readOnly\":true}]}],\"restartPolicy\":\"OnFailure\",\"serviceAccountName\":\"istio-init-service-account\",\"volumes\":[{\"configMap\":{\"name\":\"istio-crd-10\"},\"name\":\"crd-10\"}]}}}}\n"}},"spec":{"template":{"spec":{"$setElementOrder/containers":[{"name":"istio-init-crd-10"}],"containers":[{"image":"gcr.io/cf-routing-desserts/kubectl:1.2.3","name":"istio-init-crd-10"}]}}}}
to:
Resource: "batch/v1, Resource=jobs", GroupVersionKind: "batch/v1, Kind=Job"
Name: "istio-init-crd-10", Namespace: "ingress-router-system"
Object: &{map["apiVersion":"batch/v1" "kind":"Job" "metadata":map["annotations":map["kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"batch/v1\",\"kind\":\"Job\",\"metadata\":{\"annotations\":{},\"name\":\"istio-init-crd-10\",\"namespace\":\"ingress-router-system\"},\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"sidecar.istio.io/inject\":\"false\"}},\"spec\":{\"containers\":[{\"command\":[\"kubectl\",\"apply\",\"-f\",\"/etc/istio/crd-10/crd-10.yaml\"],\"image\":\"gcr.io/cf-routing-desserts/kubectl:1.1.7\",\"imagePullPolicy\":\"Always\",\"name\":\"istio-init-crd-10\",\"volumeMounts\":[{\"mountPath\":\"/etc/istio/crd-10\",\"name\":\"crd-10\",\"readOnly\":true}]}],\"restartPolicy\":\"OnFailure\",\"serviceAccountName\":\"istio-init-service-account\",\"volumes\":[{\"configMap\":{\"name\":\"istio-crd-10\"},\"name\":\"crd-10\"}]}}}}\n"] "creationTimestamp":"2019-09-09T19:07:54Z" "labels":map["controller-uid":"206e9451-d335-11e9-9de8-42010a000b0a" "job-name":"istio-init-crd-10"] "name":"istio-init-crd-10" "namespace":"ingress-router-system" "resourceVersion":"10231" "selfLink":"/apis/batch/v1/namespaces/ingress-router-system/jobs/istio-init-crd-10" "uid":"206e9451-d335-11e9-9de8-42010a000b0a"] "spec":map["backoffLimit":'\x06' "completions":'\x01' "parallelism":'\x01' "selector":map["matchLabels":map["controller-uid":"206e9451-d335-11e9-9de8-42010a000b0a"]] "template":map["metadata":map["annotations":map["sidecar.istio.io/inject":"false"] "creationTimestamp":<nil> "labels":map["controller-uid":"206e9451-d335-11e9-9de8-42010a000b0a" "job-name":"istio-init-crd-10"]] "spec":map["containers":[map["command":["kubectl" "apply" "-f" "/etc/istio/crd-10/crd-10.yaml"] "image":"gcr.io/cf-routing-desserts/kubectl:1.1.7" "imagePullPolicy":"Always" "name":"istio-init-crd-10" "resources":map[] "terminationMessagePath":"/dev/termination-log" "terminationMessagePolicy":"File" "volumeMounts":[map["mountPath":"/etc/istio/crd-10" "name":"crd-10" "readOnly":%!q(bool=true)]]]] "dnsPolicy":"ClusterFirst" "restartPolicy":"OnFailure" "schedulerName":"default-scheduler" "securityContext":map[] "serviceAccount":"istio-init-service-account" "serviceAccountName":"istio-init-service-account" "terminationGracePeriodSeconds":'\x1e' "volumes":[map["configMap":map["defaultMode":'\u01a4' "name":"istio-crd-10"] "name":"crd-10"]]]]] "status":map["completionTime":"2019-09-09T19:08:21Z" "conditions":[map["lastProbeTime":"2019-09-09T19:08:21Z" "lastTransitionTime":"2019-09-09T19:08:21Z" "status":"True" "type":"Complete"]] "startTime":"2019-09-09T19:07:54Z" "succeeded":'\x01']]}
for: "/tmp/psm-installation.yaml": Job.batch "istio-init-crd-10" is invalid: spec.template: Invalid value: core.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"controller-uid":"206e9451-d335-11e9-9de8-42010a000b0a", "job-name":"istio-init-crd-10"}, Annotations:map[string]string{"sidecar.istio.io/inject":"false"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:core.PodSpec{Volumes:[]core.Volume{core.Volume{Name:"crd-10", VolumeSource:core.VolumeSource{HostPath:(*core.HostPathVolumeSource)(nil), EmptyDir:(*core.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*core.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*core.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*core.GitRepoVolumeSource)(nil), Secret:(*core.SecretVolumeSource)(nil), NFS:(*core.NFSVolumeSource)(nil), ISCSI:(*core.ISCSIVolumeSource)(nil), Glusterfs:(*core.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*core.PersistentVolumeClaimVolumeSource)(nil), RBD:(*core.RBDVolumeSource)(nil), Quobyte:(*core.QuobyteVolumeSource)(nil), FlexVolume:(*core.FlexVolumeSource)(nil), Cinder:(*core.CinderVolumeSource)(nil), CephFS:(*core.CephFSVolumeSource)(nil), Flocker:(*core.FlockerVolumeSource)(nil), DownwardAPI:(*core.DownwardAPIVolumeSource)(nil), FC:(*core.FCVolumeSource)(nil), AzureFile:(*core.AzureFileVolumeSource)(nil), ConfigMap:(*core.ConfigMapVolumeSource)(0xc011e85d40), VsphereVolume:(*core.VsphereVirtualDiskVolumeSource)(nil), AzureDisk:(*core.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*core.PhotonPersistentDiskVolumeSource)(nil), Projected:(*core.ProjectedVolumeSource)(nil), PortworxVolume:(*core.PortworxVolumeSource)(nil), ScaleIO:(*core.ScaleIOVolumeSource)(nil), StorageOS:(*core.StorageOSVolumeSource)(nil)}}}, InitContainers:[]core.Container(nil), Containers:[]core.Container{core.Container{Name:"istio-init-crd-10", Image:"gcr.io/cf-routing-desserts/kubectl:1.2.3", Command:[]string{"kubectl", "apply", "-f", "/etc/istio/crd-10/crd-10.yaml"}, Args:[]string(nil), WorkingDir:"", Ports:[]core.ContainerPort(nil), EnvFrom:[]core.EnvFromSource(nil), Env:[]core.EnvVar(nil), Resources:core.ResourceRequirements{Limits:core.ResourceList(nil), Requests:core.ResourceList(nil)}, VolumeMounts:[]core.VolumeMount{core.VolumeMount{Name:"crd-10", ReadOnly:true, MountPath:"/etc/istio/crd-10", SubPath:"", MountPropagation:(*core.MountPropagationMode)(nil)}}, VolumeDevices:[]core.VolumeDevice(nil), LivenessProbe:(*core.Probe)(nil), ReadinessProbe:(*core.Probe)(nil), Lifecycle:(*core.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*core.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"OnFailure", TerminationGracePeriodSeconds:(*int64)(0xc00e039ec8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"istio-init-service-account", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", SecurityContext:(*core.PodSecurityContext)(0xc0036dd260), ImagePullSecrets:[]core.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*core.Affinity)(nil), SchedulerName:"default-scheduler", 
Tolerations:[]core.Toleration(nil), HostAliases:[]core.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*core.PodDNSConfig)(nil), ReadinessGates:[]core.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil)}}: field is immutable
Error from server (Invalid): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"batch/v1\",\"kind\":\"Job\",\"metadata\":{\"annotations\":{},\"name\":\"istio-init-crd-11\",\"namespace\":\"ingress-router-system\"},\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"sidecar.istio.io/inject\":\"false\"}},\"spec\":{\"containers\":[{\"command\":[\"kubectl\",\"apply\",\"-f\",\"/etc/istio/crd-11/crd-11.yaml\"],\"image\":\"gcr.io/cf-routing-desserts/kubectl:1.2.3\",\"imagePullPolicy\":\"Always\",\"name\":\"istio-init-crd-11\",\"volumeMounts\":[{\"mountPath\":\"/etc/istio/crd-11\",\"name\":\"crd-11\",\"readOnly\":true}]}],\"restartPolicy\":\"OnFailure\",\"serviceAccountName\":\"istio-init-service-account\",\"volumes\":[{\"configMap\":{\"name\":\"istio-crd-11\"},\"name\":\"crd-11\"}]}}}}\n"}},"spec":{"template":{"spec":{"$setElementOrder/containers":[{"name":"istio-init-crd-11"}],"containers":[{"image":"gcr.io/cf-routing-desserts/kubectl:1.2.3","name":"istio-init-crd-11"}]}}}}
to:
Resource: "batch/v1, Resource=jobs", GroupVersionKind: "batch/v1, Kind=Job"
Name: "istio-init-crd-11", Namespace: "ingress-router-system"
Object: &{map["apiVersion":"batch/v1" "kind":"Job" "metadata":map["annotations":map["kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"batch/v1\",\"kind\":\"Job\",\"metadata\":{\"annotations\":{},\"name\":\"istio-init-crd-11\",\"namespace\":\"ingress-router-system\"},\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"sidecar.istio.io/inject\":\"false\"}},\"spec\":{\"containers\":[{\"command\":[\"kubectl\",\"apply\",\"-f\",\"/etc/istio/crd-11/crd-11.yaml\"],\"image\":\"gcr.io/cf-routing-desserts/kubectl:1.1.7\",\"imagePullPolicy\":\"Always\",\"name\":\"istio-init-crd-11\",\"volumeMounts\":[{\"mountPath\":\"/etc/istio/crd-11\",\"name\":\"crd-11\",\"readOnly\":true}]}],\"restartPolicy\":\"OnFailure\",\"serviceAccountName\":\"istio-init-service-account\",\"volumes\":[{\"configMap\":{\"name\":\"istio-crd-11\"},\"name\":\"crd-11\"}]}}}}\n"] "creationTimestamp":"2019-09-09T19:07:54Z" "labels":map["controller-uid":"207f3dea-d335-11e9-9de8-42010a000b0a" "job-name":"istio-init-crd-11"] "name":"istio-init-crd-11" "namespace":"ingress-router-system" "resourceVersion":"10234" "selfLink":"/apis/batch/v1/namespaces/ingress-router-system/jobs/istio-init-crd-11" "uid":"207f3dea-d335-11e9-9de8-42010a000b0a"] "spec":map["backoffLimit":'\x06' "completions":'\x01' "parallelism":'\x01' "selector":map["matchLabels":map["controller-uid":"207f3dea-d335-11e9-9de8-42010a000b0a"]] "template":map["metadata":map["annotations":map["sidecar.istio.io/inject":"false"] "creationTimestamp":<nil> "labels":map["controller-uid":"207f3dea-d335-11e9-9de8-42010a000b0a" "job-name":"istio-init-crd-11"]] "spec":map["containers":[map["command":["kubectl" "apply" "-f" "/etc/istio/crd-11/crd-11.yaml"] "image":"gcr.io/cf-routing-desserts/kubectl:1.1.7" "imagePullPolicy":"Always" "name":"istio-init-crd-11" "resources":map[] "terminationMessagePath":"/dev/termination-log" "terminationMessagePolicy":"File" "volumeMounts":[map["mountPath":"/etc/istio/crd-11" "name":"crd-11" "readOnly":%!q(bool=true)]]]] "dnsPolicy":"ClusterFirst" "restartPolicy":"OnFailure" "schedulerName":"default-scheduler" "securityContext":map[] "serviceAccount":"istio-init-service-account" "serviceAccountName":"istio-init-service-account" "terminationGracePeriodSeconds":'\x1e' "volumes":[map["configMap":map["defaultMode":'\u01a4' "name":"istio-crd-11"] "name":"crd-11"]]]]] "status":map["completionTime":"2019-09-09T19:08:22Z" "conditions":[map["lastProbeTime":"2019-09-09T19:08:22Z" "lastTransitionTime":"2019-09-09T19:08:22Z" "status":"True" "type":"Complete"]] "startTime":"2019-09-09T19:07:54Z" "succeeded":'\x01']]}
for: "/tmp/psm-installation.yaml": Job.batch "istio-init-crd-11" is invalid: spec.template: Invalid value: core.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"controller-uid":"207f3dea-d335-11e9-9de8-42010a000b0a", "job-name":"istio-init-crd-11"}, Annotations:map[string]string{"sidecar.istio.io/inject":"false"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:core.PodSpec{Volumes:[]core.Volume{core.Volume{Name:"crd-11", VolumeSource:core.VolumeSource{HostPath:(*core.HostPathVolumeSource)(nil), EmptyDir:(*core.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*core.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*core.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*core.GitRepoVolumeSource)(nil), Secret:(*core.SecretVolumeSource)(nil), NFS:(*core.NFSVolumeSource)(nil), ISCSI:(*core.ISCSIVolumeSource)(nil), Glusterfs:(*core.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*core.PersistentVolumeClaimVolumeSource)(nil), RBD:(*core.RBDVolumeSource)(nil), Quobyte:(*core.QuobyteVolumeSource)(nil), FlexVolume:(*core.FlexVolumeSource)(nil), Cinder:(*core.CinderVolumeSource)(nil), CephFS:(*core.CephFSVolumeSource)(nil), Flocker:(*core.FlockerVolumeSource)(nil), DownwardAPI:(*core.DownwardAPIVolumeSource)(nil), FC:(*core.FCVolumeSource)(nil), AzureFile:(*core.AzureFileVolumeSource)(nil), ConfigMap:(*core.ConfigMapVolumeSource)(0xc0107c7840), VsphereVolume:(*core.VsphereVirtualDiskVolumeSource)(nil), AzureDisk:(*core.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*core.PhotonPersistentDiskVolumeSource)(nil), Projected:(*core.ProjectedVolumeSource)(nil), PortworxVolume:(*core.PortworxVolumeSource)(nil), ScaleIO:(*core.ScaleIOVolumeSource)(nil), StorageOS:(*core.StorageOSVolumeSource)(nil)}}}, InitContainers:[]core.Container(nil), Containers:[]core.Container{core.Container{Name:"istio-init-crd-11", Image:"gcr.io/cf-routing-desserts/kubectl:1.2.3", Command:[]string{"kubectl", "apply", "-f", "/etc/istio/crd-11/crd-11.yaml"}, Args:[]string(nil), WorkingDir:"", Ports:[]core.ContainerPort(nil), EnvFrom:[]core.EnvFromSource(nil), Env:[]core.EnvVar(nil), Resources:core.ResourceRequirements{Limits:core.ResourceList(nil), Requests:core.ResourceList(nil)}, VolumeMounts:[]core.VolumeMount{core.VolumeMount{Name:"crd-11", ReadOnly:true, MountPath:"/etc/istio/crd-11", SubPath:"", MountPropagation:(*core.MountPropagationMode)(nil)}}, VolumeDevices:[]core.VolumeDevice(nil), LivenessProbe:(*core.Probe)(nil), ReadinessProbe:(*core.Probe)(nil), Lifecycle:(*core.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*core.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"OnFailure", TerminationGracePeriodSeconds:(*int64)(0xc00f0c57c0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"istio-init-service-account", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", SecurityContext:(*core.PodSecurityContext)(0xc0025fe7e0), ImagePullSecrets:[]core.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*core.Affinity)(nil), SchedulerName:"default-scheduler", 
Tolerations:[]core.Toleration(nil), HostAliases:[]core.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*core.PodDNSConfig)(nil), ReadinessGates:[]core.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil)}}: field is immutable
Problem
The istio-init-crd-* jobs are immutable, and kubectl apply attempts to change them.
Solution
The istio-init jobs can be safely deleted after installation or before running the upgrade.
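For example, to delete the two jobs shown in the error output above (the job names and the ingress-router-system namespace are taken from that output; adjust them if your installation differs):
$ kubectl -n ingress-router-system get jobs
$ kubectl -n ingress-router-system delete job istio-init-crd-10 istio-init-crd-11
The next kubectl apply can then create the jobs fresh with the new image instead of patching the immutable existing ones.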
Install fails with ‘no matches for kind “PortAllocation”’
Symptom
The kubectl apply command fails when installing Pivotal Ingress Router:
error: unable to recognize "./ingress-router-installation.yaml": no matches for kind "PortAllocation" in version "ingressrouter.pivotal.io/v1"
Problem
The rendered Helm chart does not guarantee the order in which configuration is applied, so an object that depends on the PortAllocation kind can be applied before that kind is defined.
Solution
Re-run kubectl apply.
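For example, using the installation file named in the error message above:
$ kubectl apply -f ./ingress-router-installation.yaml
The second apply typically succeeds because the definition of the PortAllocation kind registered by the first run is now available.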
Workload routing controllers stuck in CrashLoopBackOff with panic: No Auth Provider found for name "oidc"
Symptom
After installing Ingress Router on a system with clusters that have OpenID Connect (OIDC) enabled, the workload routing controller pods for those clusters are stuck in a CrashLoopBackOff state:
NAME READY STATUS RESTARTS AGE
8f74336b-6fd0-4004-b5b3-d43f8d6f8aab-7c8bbdb47d-5c72b 0/1 CrashLoopBackOff 9 25m
d778530f-572f-4da5-896a-0a1321e682a6-7f588b587d-57h59 0/1 CrashLoopBackOff 9 25m
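To retrieve the logs for one of the crashing pods, run kubectl logs with a pod name from the listing above (the namespace placeholder follows the convention used elsewhere on this page; add --previous if the container has already restarted):
$ kubectl -n $namespace logs 8f74336b-6fd0-4004-b5b3-d43f8d6f8aab-7c8bbdb47d-5c72b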
The logs for those pods show an error:
{"level":"info","ts":1572296157.3144333,"logger":"entrypoint","msg":"Setting up manager to watch the user cluster."}
panic: No Auth Provider found for name "oidc"
goroutine 1 [running]:
k8s.io/client-go/discovery.NewDiscoveryClientForConfigOrDie(...)
/root/go/pkg/mod/k8s.io/client-go@v10.0.0+incompatible/discovery/discovery_client.go:454
sigs.k8s.io/controller-runtime/pkg/client/apiutil.NewDiscoveryRESTMapper(0xc0003e48c0, 0x0, 0xa, 0x1447a64, 0x2d)
/root/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.1.12/pkg/client/apiutil/apimachinery.go:35 +0xd9
sigs.k8s.io/controller-runtime/pkg/manager.New(0xc0003e48c0, 0xc0002235e0, 0x14af680, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/root/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.1.12/pkg/manager/manager.go:171 +0x112
main.main()
/root/ingress-router/workload-routing-controller/cmd/manager/main.go:75 +0x4b6
Problem
When routing to workloads is enabled, a workload-routing-controller pod is created for each user cluster. These controllers install the Istio VirtualService CRD and watch for VirtualService resources to set up routing to workloads. The controllers must be able to authenticate with the user clusters, but they are currently unable to do so when OIDC is enabled.
Solution
The experimental routing to workloads feature is currently not supported for clusters with OIDC enabled.
Try out the feature on clusters without OIDC enabled or stay tuned for a future release that resolves this issue.
Workload routing controller stuck in CrashLoopBackOff with error: unable to add a new VirtualService controller to the manager
Symptom
After installing Ingress Router on a system with clusters, the workload routing controller pods get stuck in a CrashLoopBackOff state:
$ kubectl -n $namespace get pods
NAME READY STATUS RESTARTS AGE
a9e81b2a-f642-48a9-9a2c-45f309bf0ecd-64f6b4bc8f-tmnnz 0/1 CrashLoopBackOff 90 7h19m
cluster-registrar-7bc5cbccb6-bv8vw 1/1 Running 0 7h19m
...
The logs for the workload routing controller show that it cannot add a new VirtualService controller:
{"level":"error","ts":1572382133.1576822,"logger":"kubebuilder.source","msg":"if kind is a CRD, it should be installed before calling Start","kind":"VirtualService.networking.istio.io","error":"no matches for kind \"VirtualService\" in version \"networking.istio.io/v1alpha3\"","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/root/go/pkg/mod/github.com/go-logr/zapr@v0.1.1/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/source.(*Kind).Start\n\t/root/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.1.12/pkg/source/source.go:89\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Watch\n\t/root/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.1.12/pkg/internal/controller/controller.go:122\ngithub.com/pivotal/ingress-router/workload-routing-controller/pkg/controller/virtualservice.add\n\t/root/ingress-router/workload-routing-controller/pkg/controller/virtualservice/virtualservice_controller.go:64\ngithub.com/pivotal/ingress-router/workload-routing-controller/pkg/controller/virtualservice.Add\n\t/root/ingress-router/workload-routing-controller/pkg/controller/virtualservice/virtualservice_controller.go:43\nmain.main\n\t/root/ingress-router/workload-routing-controller/cmd/manager/main.go:92\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:203"}
{"level":"error","ts":1572382133.1579545,"logger":"entrypoint","msg":"unable to add a new VirtualService controller to the manager","error":"no matches for kind \"VirtualService\" in version \"networking.istio.io/v1alpha3\"","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/root/go/pkg/mod/github.com/go-logr/zapr@v0.1.1/zapr.go:128\nmain.main\n\t/root/ingress-router/workload-routing-controller/cmd/manager/main.go:93\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:203"}
You may also see warnings that the secret cannot be mounted onto the workload routing controller:
Warning FailedMount 15m (x657 over 22h) kubelet, b97703f9-3b05-4808-a4e6-0cf915d8784e MountVolume.SetUp failed for volume "kubeconfig-volume" : secret "kubeconfig-62ebe62f-1619-4f01-a48e-57be1d671c98" not found
Warning FailedMount 5m21s (x587 over 22h) kubelet, b97703f9-3b05-4808-a4e6-0cf915d8784e Unable to mount volumes for pod "a9e81b2a-f642-48a9-9a2c-45f309bf0ecd-64f6b4bc8f-tmnnz_ingress-router-system(d0ae472a-9a59-47c1-9727-faef58ca31a0)": timeout expired waiting for volumes to attach or mount for pod "ingress-router-system"/"a9e81b2a-f642-48a9-9a2c-45f309bf0ecd-64f6b4bc8f-tmnnz". list of unmounted volumes=[kubeconfig-volume]. list of unattached volumes=[kubeconfig-volume workload-routing-controller-token-8rxjb]
The status of the batch job that installs the VirtualService CRD may show BackoffLimitExceeded:
kubectl -n ingress-router-system get job.batch/install-virtualservice-crd-a9e81b2a-f642-48a9-9a2c-45f309bf0ecd -oyaml
...
status:
  conditions:
  - lastProbeTime: "2019-10-29T13:38:22Z"
    lastTransitionTime: "2019-10-29T13:38:22Z"
    message: Job has reached the specified backoff limit
    reason: BackoffLimitExceeded
    status: "True"
    type: Failed
  failed: 1
  startTime: "2019-10-29T13:32:17Z"
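To confirm that the VirtualService CRD is indeed missing on the affected user cluster, you can check for it directly using a kubeconfig that targets that cluster (a diagnostic sketch):
$ kubectl get crd virtualservices.networking.istio.io
If the CRD is absent, the command returns a NotFound error.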
Problem
The workload routing controller does not have permission to create the VirtualService CRD on the user cluster. The cluster registrar did not retry creating the service account that grants this permission.
Solution
Recreate the cluster registrar by deleting the cluster registrar pod, or delete the Kubernetes cluster object related to the failing workload routing controller. Either action reregisters the cluster and attempts to create the service account.
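For example, to recreate the cluster registrar, delete its pod so that it is recreated (the pod name below is taken from the listing in the Symptom section above; yours will differ):
$ kubectl -n $namespace delete pod cluster-registrar-7bc5cbccb6-bv8vw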
Pilot fails to configure Envoy Proxy on istio-ingressgateway pod
Symptom
Attempting to communicate with the cluster's Kubernetes API through the Ingress Gateway fails with either connection refused or EOF.
Problem
Istio's Pilot did not correctly configure the Envoy Proxy on the Ingress Gateway pod. As a result, the Envoy Proxy does not know how to handle requests for certain clusters.
Solution
Restart the Istio Pilot pod. This can be done by running the following on the cluster hosting the Pivotal Ingress Router:
kubectl -n ingress-router-system delete pod istio-pilot-xxxxxxxx-xxxxx
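To find the exact Pilot pod name to substitute for the placeholder above, list the pods in the namespace first:
$ kubectl -n ingress-router-system get pods | grep istio-pilot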