Pivotal Container Service v1.2

Managing Users in PKS with UAA

This topic describes how to manage users in Pivotal Container Service (PKS) with User Account and Authentication (UAA). Create and manage users in UAA with the UAA Command Line Interface (UAAC).

How to Use UAAC

Use the UAA Command Line Interface (UAAC) to interact with the UAA server. You can either run UAAC commands from the Ops Manager VM or install UAAC on your local workstation.

To run UAAC commands from the Ops Manager VM, see the following SSH procedures for vSphere, Google Cloud Platform (GCP), or Amazon Web Services (AWS).

To install UAAC locally, see Component: User Account and Authentication (UAA) Server.
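
UAAC is distributed as a Ruby gem, so a minimal local install, assuming Ruby and RubyGems are already on your workstation, looks like this:

    $ gem install cf-uaac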

SSH into the Ops Manager VM on vSphere

To SSH into the Ops Manager VM on vSphere, you need the credentials used to import the PCF .ova or .ovf file into your virtualization system. You set these credentials when you installed Ops Manager.

Note: If you lose your credentials, you must shut down the Ops Manager VM in the vSphere UI and reset the password. See vCenter Password Requirements and Lockout Behavior in the vSphere documentation for more information.

  1. From a command line, run ssh ubuntu@OPS-MANAGER-FQDN to SSH into the Ops Manager VM. Replace OPS-MANAGER-FQDN with the fully qualified domain name of Ops Manager.

  2. When prompted, enter the password that you set when you deployed the .ova into vCenter. For example:

    $ ssh ubuntu@my-opsmanager-fqdn.example.com
    Password: ***********
    

  3. Proceed to the Log in as a UAA Admin section to manage users with UAAC.

SSH into the Ops Manager VM on GCP

To SSH into the Ops Manager VM in GCP, do the following:

  1. Confirm that you have installed the gcloud CLI. See the Google Cloud Platform documentation for more information.

  2. From the GCP console, click Compute Engine.

  3. Locate the Ops Manager VM in the VM Instances list.

  4. Click the SSH menu button.

  5. Copy the SSH command that appears in the popup window.

  6. Paste the command into your terminal window to SSH to the Ops Manager VM. For example:

    $ gcloud compute ssh om-pcf-1a --zone us-central1-b
    

  7. Run sudo su - ubuntu to switch to the ubuntu user.

  8. Proceed to the Log in as a UAA Admin section to manage users with UAAC.

SSH into the Ops Manager VM on AWS

To SSH into the Ops Manager VM on AWS, you need the key pair you used when you created the Ops Manager VM. To see the name of the key pair, click on the Ops Manager VM in the AWS console and locate the key pair name in the properties.

To SSH into the Ops Manager VM on AWS, do the following:

  1. From the AWS console, locate the Ops Manager FQDN on the AWS EC2 instances page.

  2. Run chmod 600 ops_mgr.pem to change the permissions on the .pem file to be more restrictive. For example:

    $ chmod 600 ops_mgr.pem
    
  3. Run ssh -i ops_mgr.pem ubuntu@OPS-MANAGER-FQDN to SSH into the Ops Manager VM. Replace OPS-MANAGER-FQDN with the fully qualified domain name of Ops Manager. For example:

    $ ssh -i ops_mgr.pem ubuntu@my-opsmanager-fqdn.example.com
    
  4. Proceed to the Log in as a UAA Admin section to manage users with UAAC.

Log in as a UAA Admin

To retrieve the PKS UAA management admin client secret, do the following:

  1. In a web browser, navigate to the fully qualified domain name (FQDN) of Ops Manager and click the Pivotal Container Service tile.
  2. Click Credentials.
  3. To view the secret, click Link to Credential next to Pks Uaa Management Admin Client. The client username is admin.
  4. On the command line, run the following command to target your UAA server:

    uaac target https://PKS-API:8443 --ca-cert ROOT-CA-FILENAME

    Replace PKS-API with the URL of your PKS API server. You configured this URL in the PKS API section of Installing PKS for your IaaS. For example, see Installing PKS on vSphere. Replace ROOT-CA-FILENAME with the certificate file you downloaded in Configuring PKS API Access. For example:

    $ uaac target https://api.pks.example.com:8443 --ca-cert my-cert.cert
    

    Note: If you receive an Unknown key: Max-Age = 86400 warning message, you can safely ignore it because it has no impact.

  5. Authenticate with UAA using the secret you retrieved in a previous step. Run the following command, replacing ADMIN-CLIENT-SECRET with your PKS UAA management admin client secret:

    uaac token client get admin -s ADMIN-CLIENT-SECRET
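
    To confirm that the authentication succeeded, you can decode the token that UAAC cached. This is a quick check, not part of the official procedure; the decoded token lists the scopes held by the admin client:

    $ uaac token decode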

Grant PKS Access

PKS access gives users the ability to deploy and manage Kubernetes clusters. As an Admin user, you can assign the following UAA scopes to users, external LDAP groups, and clients:

  • pks.clusters.manage: accounts with this scope can create and access their own clusters.
  • pks.clusters.admin: accounts with this scope can create and access all clusters.

Grant PKS Access to a User

You can create a new UAA user with PKS access by performing the following steps:

  1. Log in as the UAA admin using the procedure above.

  2. To create a new user, run the following command:

    uaac user add USERNAME --emails USER-EMAIL -p USER-PASSWORD

    For example:

    $ uaac user add alana --emails alana@example.com -p password

  3. Assign a scope to the user to allow them to access Kubernetes clusters. Run uaac member add UAA-SCOPE USERNAME, replacing UAA-SCOPE with one of the UAA scopes defined above. For example:

    $ uaac member add pks.clusters.admin alana
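
    To verify the assignment, you can look up the user's record, which lists group memberships. For example, with the user created above:

    $ uaac user get alana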

Grant PKS Access to an External LDAP Group

Connecting PKS to an external LDAP user store allows the User Account and Authentication (UAA) server to delegate authentication to existing enterprise user stores.

Note: When integrating with an external identity provider such as LDAP, authentication within UAA becomes chained: UAA first attempts to authenticate a user’s credentials against its internal user store before trying the external provider, LDAP. For more information, see Chained Authentication in the User Account and Authentication LDAP Integration GitHub documentation.

For more information about the process used by the UAA Server when it attempts to authenticate a user through LDAP, see the Configuring LDAP Integration with Pivotal Cloud Foundry Knowledge Base article.

To grant PKS access to an external LDAP group, perform the following steps:

  1. Log in as the UAA admin using the procedure above.

  2. To assign the pks.clusters.manage scope to all users in an LDAP group, run the following command:

    uaac group map --name pks.clusters.manage GROUP-DISTINGUISHED-NAME

    Replace GROUP-DISTINGUISHED-NAME with the LDAP Distinguished Name (DN) for the group. For example:

    $ uaac group map --name pks.clusters.manage cn=operators,ou=groups,dc=example,dc=com
    
    For more information about LDAP DNs, see the LDAP documentation.

  3. (Optional) To assign the pks.clusters.admin scope to all users in an LDAP group, run the following command:

    uaac group map --name pks.clusters.admin GROUP-DISTINGUISHED-NAME

    Replace GROUP-DISTINGUISHED-NAME with the LDAP DN for the group. For example:

    $ uaac group map --name pks.clusters.admin cn=operators,ou=groups,dc=example,dc=com
    

Grant PKS Access to a Client

To grant PKS access to an automated client for a script or service, perform the following steps:

  1. Log in as the UAA admin using the procedure above.

  2. Create a client with the desired scopes by running the following command:

    uaac client add CLIENT-NAME -s CLIENT-SECRET \
    --authorized_grant_types client_credentials \
    --authorities UAA-SCOPES

    Replace CLIENT-NAME and CLIENT-SECRET with the client credentials. Replace UAA-SCOPES with one or more of the UAA scopes defined above, separated by commas. For example:

    $ uaac client add automated-client \
      -s randomly-generated-secret \
      --authorized_grant_types client_credentials \
      --authorities pks.clusters.admin,pks.clusters.manage
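
    A script or service can then authenticate as this client, rather than as a named user, using the same token command shown earlier. For example, with the client created above:

    $ uaac token client get automated-client -s randomly-generated-secret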

Grant Cluster Access

The following diagram outlines the workflow you use to grant cluster access to a user who belongs to an LDAP group:

[Diagram: the workflow that operators and cluster admins use to grant cluster access to users.]

You can grant a user access to an entire cluster with a ClusterRole or to a namespace within a given cluster with a Role.

The admin of the cluster must then create a ClusterRoleBinding or a RoleBinding for that Kubernetes end user.

For more information, see RoleBinding and ClusterRoleBinding in the Kubernetes documentation.

Grant Cluster Access to a User

After being granted cluster access, the Kubernetes end user is able to use the Kubernetes Command Line Interface (kubectl) to connect to the cluster and perform actions as configured by their cluster admin. However, even with this access, Kubernetes end users cannot create, resize, or delete clusters.

Note: Before cluster admins can grant cluster access to Kubernetes end users, they must select Enable UAA as OIDC provider in the UAA section of the PKS tile. After enabling OIDC, run pks get-credentials again to update your existing kubeconfig.

To grant cluster access to other users, the cluster admin must perform the following actions:

  1. Run the following command to log in to the PKS CLI using LDAP credentials:

    pks login -u LDAP-USER-NAME -p LDAP-PASSWORD -a PKS-API --ca-cert ROOT-CA-FILENAME

    Where:
    • LDAP-USER-NAME is the cluster admin’s LDAP username.
    • LDAP-PASSWORD is the cluster admin’s LDAP password.
    • PKS-API is the fully qualified domain name (FQDN) you use to access the PKS API.
    • ROOT-CA-FILENAME is the certificate file you downloaded in Configuring PKS API Access.
  2. Run the following command to confirm that you can successfully connect to a cluster and use kubectl as a cluster admin:

    pks get-credentials CLUSTER-NAME

    This step creates a ClusterRoleBinding for the LDAP cluster admin.

  3. When prompted, re-enter your LDAP password.

  4. Create a spec YML file with either the Role or ClusterRole for your Kubernetes end user.

  kind: ROLE-TYPE
  apiVersion: rbac.authorization.k8s.io/v1
  metadata:
    namespace: NAMESPACE
    name: ROLE-OR-CLUSTER-ROLE-NAME
  rules:
  - apiGroups: [""]
    resources: RESOURCE
    verbs: API-REQUEST-VERB


Where:

  • ROLE-TYPE is the type of role you are creating. This must be either Role or ClusterRole.
  • NAMESPACE is the namespace within the cluster. This is omitted when creating a ClusterRole.
  • ROLE-OR-CLUSTER-ROLE-NAME is the name of the Role or ClusterRole you are creating. This name is created by the cluster admin.
  • RESOURCE is the resource type you are granting access to. It must be specified as a comma-separated array. For example, ["pods"].
  • API-REQUEST-VERB specifies the API request verbs the role allows, such as ["get", "list", "watch"]. For more information, see Determine the Request Verb in the Kubernetes documentation.
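
For illustration, here is a filled-in spec for a namespaced Role; the pod-reader name and dev namespace are hypothetical:

  kind: Role
  apiVersion: rbac.authorization.k8s.io/v1
  metadata:
    namespace: dev
    name: pod-reader
  rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]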

  5. Run the following command to create the Role or ClusterRole resource based on your spec file:

    $ kubectl create -f ROLE-SPEC.yml

  6. Create a spec YML file containing either a ClusterRoleBinding or RoleBinding for the Kubernetes end user.

  kind: ROLE-BINDING-TYPE
  apiVersion: rbac.authorization.k8s.io/v1
  metadata:
    name: ROLE-OR-CLUSTER-ROLE-BINDING-NAME
    namespace: NAMESPACE
  subjects:
  - kind: User
    name: USERNAME
    apiGroup: rbac.authorization.k8s.io
  roleRef:
    kind: ROLE-TYPE
    name: ROLE-OR-CLUSTER-ROLE-NAME
    apiGroup: rbac.authorization.k8s.io


Where:

  • ROLE-BINDING-TYPE is the type of role binding you are creating. This must be either RoleBinding or ClusterRoleBinding.
  • ROLE-OR-CLUSTER-ROLE-BINDING-NAME is the name of the role binding. This name is created by the cluster admin.
  • NAMESPACE is the namespace within the cluster. This is omitted when creating a ClusterRoleBinding.
  • USERNAME is the Kubernetes end user’s username. If your organization uses LDAP, for example, this is your LDAP username.
  • ROLE-TYPE is the type of role you created in the previous step. This must be either Role or ClusterRole.
  • ROLE-OR-CLUSTER-ROLE-NAME is the name of the Role or ClusterRole you created in the previous step.
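
Continuing the hypothetical pod-reader example, a RoleBinding that grants the user alana that Role in the dev namespace could look like this:

  kind: RoleBinding
  apiVersion: rbac.authorization.k8s.io/v1
  metadata:
    name: read-pods
    namespace: dev
  subjects:
  - kind: User
    name: alana
    apiGroup: rbac.authorization.k8s.io
  roleRef:
    kind: Role
    name: pod-reader
    apiGroup: rbac.authorization.k8s.io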

  7. Run the following command to create the RoleBinding or ClusterRoleBinding resource defined above in the cluster:

    $ kubectl apply -f ROLE-BINDING-SPEC.yml

  8. The cluster admin partially completes the kubeconfig by providing values for the following fields:

    • clusters.cluster.certificate-authority-data
    • clusters.cluster.server
    • clusters.name
    • contexts.context.cluster
    • contexts.context.name
    • current-context
    • users.user.auth-provider.config.idp-issuer-url
  9. The cluster admin sends the partially completed kubeconfig to their Kubernetes end user.

Review the example kubeconfig file below. For more information about organizing information using kubeconfig, see Organizing Cluster Access Using kubeconfig Files in the Kubernetes documentation.


apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: PROVIDED-BY-ADMIN
    server: PROVIDED-BY-ADMIN
  name: PROVIDED-BY-ADMIN
contexts:
- context:
    cluster: PROVIDED-BY-ADMIN
    user: PROVIDED-BY-USER
  name: PROVIDED-BY-ADMIN
current-context: PROVIDED-BY-ADMIN
kind: Config
preferences: {}
users:
- name: PROVIDED-BY-USER
  user:
    auth-provider:
      config:
        client-id: pks_cluster_client
        client-secret: ""
        id-token: PROVIDED-BY-USER
        idp-issuer-url: https://PROVIDED-BY-ADMIN:8443/oauth/token
        refresh-token: PROVIDED-BY-USER
      name: oidc

To finish this process, the Kubernetes end user receives the partially completed kubeconfig from the cluster admin and performs the following actions:

  1. Run the following command to obtain the users.user.auth-provider.config.id-token and users.user.auth-provider.config.refresh-token:

    curl 'https://PKS-API:8443/oauth/token' -k -XPOST \
    -H 'Accept: application/json' \
    -d "client_id=pks_cluster_client&client_secret=&grant_type=password&username=UAA-USERNAME&password=UAA-PASSWORD&response_type=id_token"
    

    Where:

    • PKS-API is the FQDN you use to access the PKS API.
    • UAA-USERNAME is the Kubernetes end user’s UAA username.
    • UAA-PASSWORD is the Kubernetes end user’s UAA password.
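
    The response is JSON containing id_token and refresh_token fields. One way to extract both values, assuming the jq utility is installed, is:

    $ curl -s 'https://PKS-API:8443/oauth/token' -k -XPOST \
      -H 'Accept: application/json' \
      -d "client_id=pks_cluster_client&client_secret=&grant_type=password&username=UAA-USERNAME&password=UAA-PASSWORD&response_type=id_token" \
      | jq -r '.id_token, .refresh_token'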

  2. Edit the kubeconfig by providing the following:

    • contexts.context.user
    • users.name
    • users.user.auth-provider.config.id-token
    • users.user.auth-provider.config.refresh-token
  3. Save the kubeconfig as $HOME/.kube/config. After doing so, the Kubernetes end user can connect to the cluster using kubectl.
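
    As a quick check, the end user can then run a kubectl command that their Role or ClusterRole allows. For example, with the hypothetical pod-reader Role above:

    $ kubectl get pods --namespace dev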

Grant Cluster Access to a Group

Cluster admins can also grant cluster-wide access to an LDAP Group by creating a ClusterRoleBinding for that LDAP group. This feature is only available if LDAP is used as your identity provider for UAA.

Note: You must confirm that the group you are referencing in your ClusterRoleBinding has been whitelisted in the PKS tile. To do so, review the External Groups Whitelist field in the UAA section of the PKS tile.

The process for granting cluster access to an LDAP group is similar to the process described in Grant Cluster Access to a User.

The only difference is that when the cluster admin is creating the spec file containing the RoleBinding or ClusterRoleBinding for a group, the spec file must reflect the following:

kind: ROLE-BINDING-TYPE
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ROLE-OR-CLUSTER-ROLE-BINDING-NAME
  namespace: NAMESPACE
subjects:
- kind: Group
  name: NAME-OF-GROUP
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ROLE-TYPE
  name: ROLE-OR-CLUSTER-ROLE-NAME
  apiGroup: rbac.authorization.k8s.io

Where:

  • ROLE-BINDING-TYPE is the type of role binding you are creating. This must be either RoleBinding or ClusterRoleBinding.
  • ROLE-OR-CLUSTER-ROLE-BINDING-NAME is the name of your RoleBinding or ClusterRoleBinding. This is created by the cluster admin.
  • NAME-OF-GROUP is the LDAP group name. This name is case sensitive.
  • ROLE-TYPE is the type of role you are creating. This must be either Role or ClusterRole.
  • ROLE-OR-CLUSTER-ROLE-NAME is the name of your Role or ClusterRole. This is created by the cluster admin.
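
For illustration, here is a filled-in ClusterRoleBinding for a hypothetical LDAP group named operators, bound to a hypothetical ClusterRole named cluster-pod-reader. Note that metadata.namespace is omitted because the binding is cluster-wide:

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: operators-read-pods
subjects:
- kind: Group
  name: operators
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-pod-reader
  apiGroup: rbac.authorization.k8s.io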
