Accessing Apps with SSH

This page assumes you are using cf CLI v6.13.0 or later.

The Cloud Foundry Command Line Interface (cf CLI) lets you securely log into remote host virtual machines (VMs) running Elastic Runtime application instances. This topic describes the commands that enable SSH access to applications, and enable, disable, and check permissions for such access.

The cf CLI looks up the app_ssh_oauth_client identifier in the Cloud Controller /v2/info endpoint, and uses this identifier to query the UAA server for an SSH authorization code. On the target VM side, the SSH proxy contacts the Cloud Controller through the app_ssh_endpoint listed in /v2/info to confirm permission for SSH access.
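
For example, you can inspect both values directly with cf curl. The field values shown below are illustrative and vary by deployment:

$ cf curl /v2/info
{
...
"app_ssh_endpoint": "ssh.MY-DOMAIN.com:2222",
"app_ssh_oauth_client": "ssh-proxy",
...
}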

Application SSH Commands

cf CLI Command                              Purpose
cf enable-ssh, cf disable-ssh,              Enable and Disable SSH Access
cf allow-space-ssh, cf disallow-space-ssh
cf ssh-enabled, cf space-ssh-allowed        Check SSH Access Permissions
cf ssh                                      Securely log into an application container
cf ssh-code                                 Enable secure log in to an application container
                                            using non-CF SSH tools like ssh, scp, and sftp

Enabling and Disabling SSH Access

A cloud operator can deploy Elastic Runtime to either allow or prohibit Application SSH across the entire deployment. For more information, see Configuring SSH Access for PCF.

Within a deployment that permits SSH access to applications, Space Developers can enable or disable SSH access to individual applications, and Space Managers can enable or disable SSH access to all apps running within a space.

Configuring SSH Access at the Application Level

cf enable-ssh enables SSH access to all instances of an app:

$ cf enable-ssh MY-AWESOME-APP

cf disable-ssh disables SSH access to all instances of an app:

$ cf disable-ssh MY-AWESOME-APP

Configuring SSH Access at the Space Level

cf allow-space-ssh allows SSH access into all apps in a space:

$ cf allow-space-ssh SPACE-NAME

cf disallow-space-ssh disallows SSH access into all apps in a space:

$ cf disallow-space-ssh SPACE-NAME

Checking SSH Permissions

cf ssh-enabled checks whether an app is accessible with SSH:

$ cf ssh-enabled MY-AWESOME-APP
ssh support is disabled for 'MY-AWESOME-APP'

cf space-ssh-allowed checks whether all apps running within a space are accessible with SSH:

$ cf space-ssh-allowed SPACE-NAME
ssh support is enabled in space 'SPACE-NAME'

Logging Into an Application Container with cf SSH

If SSH access is allowed at the deployment, space, and application level, you can run the cf ssh APP-NAME command to start an interactive SSH session with a VM hosting an application. By default, the command accesses the container running the first instance of the application, the instance with index 0.

$ cf ssh MY-AWESOME-APP

When logged into a VM hosting an app, you can use tools like the Cloud Foundry Diego Operator Toolkit (cfdot) to run app status diagnostics. For more information, see How to use CF Diego Operator Toolkit.

Common cf SSH Flags

You can tailor cf ssh commands with the following flags, most of which mimic flags for the Unix or Linux ssh command. Run the cf ssh --help command for more details.

  • The -i flag targets a specific instance of an application. To log into the VM container hosting the third instance, index=2, of MY-AWESOME-APP, run:

    $ cf ssh MY-AWESOME-APP -i 2
    

  • The -L flag enables local port forwarding, binding an output port on your machine to an input port on the application VM. Pass in your local port, the remote host name, and the remote port, all colon delimited. You can prepend your local network interface, or omit it to use the default, localhost. See the combined example after this list.

    $ cf ssh MY-AWESOME-APP -L [LOCAL-NETWORK-INTERFACE:]LOCAL-PORT:REMOTE-HOST-NAME:REMOTE-HOST-PORT
    

  • The -N flag skips returning a command prompt on the remote machine. Use it together with -L when you only need port forwarding and do not need to execute commands on the host VM.

  • The --request-pseudo-tty and --force-pseudo-tty flags let you run the SSH session in pseudo-tty mode rather than receive raw terminal line output.
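
For example, the following command combines the -N and -L flags to forward local port 8080 to an app instance without opening a remote shell. This is a sketch that assumes the app listens on port 8080 inside its container:

$ cf ssh MY-AWESOME-APP -N -L 8080:localhost:8080

While the command runs, requests to localhost:8080 on your machine are tunneled to port 8080 of the application instance.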

SSH Session Environment

If you want the environment of your interactive SSH session to match the environment of your buildpack-based app, with the same environment variables and working directory, run the following commands after starting the session:

export HOME=/home/vcap/app
export TMPDIR=/home/vcap/tmp
cd /home/vcap/app

Before running the commands below, verify that the /home/vcap/app/.profile file and the scripts in the /home/vcap/app/.profile.d directory will not perform any actions that are undesirable for your running app. The .profile.d directory contains buildpack-specific initialization tasks, and the .profile file contains application-specific initialization tasks.

If the .profile and .profile.d scripts would alter your instance in undesirable ways, run only the commands in them that you need for environment setup.

[ -d /home/vcap/app/.profile.d ] && for f in /home/vcap/app/.profile.d/*.sh; do source "$f"; done
source /home/vcap/app/.profile

After running the above commands, the value of the VCAP_APPLICATION environment variable differs slightly from its value in the environment of the app process, as it will not have the host, instance_id, instance_index, or port fields set. These fields are available in other environment variables, as described in the VCAP_APPLICATION documentation.
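
For example, you can read those values from the corresponding CF_INSTANCE_* variables inside the session. The mapping below is a sketch based on the standard Elastic Runtime container environment:

# assumed correspondence with the unset VCAP_APPLICATION fields
echo "$CF_INSTANCE_INDEX"   # instance index
echo "$CF_INSTANCE_GUID"    # instance identifier
echo "$CF_INSTANCE_IP"      # IP of the host cell
echo "$CF_INSTANCE_PORT"    # externally mapped port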

Application SSH Access without cf CLI

In addition to cf ssh, you can use other SSH clients such as ssh, scp, or sftp to access your application, if you have SSH permissions.

Follow the steps below to securely connect to an application instance by logging in with a specially-formed username that passes information to the SSH proxy running on the host VM. For the password, use a one-time SSH authorization code generated by cf ssh-code.

  1. Run cf app MY-AWESOME-APP --guid and record the GUID of your target app.

    $ cf app MY-AWESOME-APP --guid
    abcdefab-1234-5678-abcd-1234abcd1234
    

  2. Query the /v2/info endpoint of the Cloud Controller in your deployment. Record the domain name and port of the app_ssh_endpoint field, and the app_ssh_host_key_fingerprint field. You will compare the app_ssh_host_key_fingerprint with the fingerprint returned by the SSH proxy on your target VM.

    $ cf curl /v2/info
    {
    ...
    "app_ssh_endpoint": "ssh.MY-DOMAIN.com:2222",
    "app_ssh_host_key_fingerprint": "a6:14:c0:ea:42:07:b2:f7:53:2c:0b:60:e0:00:21:6c",
    ...
    }
    

  3. Run cf ssh-code to obtain a one-time authorization code that substitutes for an SSH password. You can run cf ssh-code | pbcopy to automatically copy the code to the clipboard.

    $ cf ssh-code
    E1x89n
    

  4. Run your ssh or other command to connect to the application instance. For the username, use a string of the form cf:APP-GUID/APP-INSTANCE-INDEX@SSH-ENDPOINT, where APP-GUID comes from step 1, SSH-ENDPOINT is the domain portion of the app_ssh_endpoint recorded in step 2, and APP-INSTANCE-INDEX is the index of the instance you want to access. For the port number, use the port portion of the app_ssh_endpoint, 2222 in the example above.

    With the above example, you ssh into the container hosting the first instance of your app by running the following command:

    $ ssh -p 2222 cf:abcdefab-1234-5678-abcd-1234abcd1234/0@ssh.MY-DOMAIN.com
    

    Or you can use scp to transfer files by running the following command:

    $ scp -P 2222 -o User=cf:abcdefab-1234-5678-abcd-1234abcd1234/0 ssh.MY-DOMAIN.com:REMOTE-FILE-TO-RETRIEVE LOCAL-FILE-DESTINATION
    

  5. When the SSH proxy reports its RSA fingerprint, confirm that it matches the app_ssh_host_key_fingerprint recorded above. When prompted for a password, paste in the authorization code returned by cf ssh-code.

    $ ssh -p 2222 cf:abcdefab-1234-5678-abcd-1234abcd1234/0@ssh.MY-DOMAIN.com
    The authenticity of host '[ssh.MY-DOMAIN.com]:2222 ([203.0.113.5]:2222)' can't be established.
    RSA key fingerprint is a6:14:c0:ea:42:07:b2:f7:53:2c:0b:60:e0:00:21:6c.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added '[ssh.MY-DOMAIN.com]:2222 [203.0.113.5]:2222' (RSA) to the list of known hosts.
    cf:abcdefab-1234-5678-abcd-1234abcd1234/0@ssh.MY-DOMAIN.com's password:
    vcap@ce4l5164kws:~$
    

    You have now securely connected to the application instance.
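
If you connect this way often, you can combine the steps above. The following sketch assumes the ssh.MY-DOMAIN.com:2222 endpoint from the example output and instance index 0:

$ APP_GUID=$(cf app MY-AWESOME-APP --guid)
$ cf ssh-code
E1x89n
$ ssh -p 2222 cf:$APP_GUID/0@ssh.MY-DOMAIN.com

Paste the code printed by cf ssh-code when the SSH client prompts for a password.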

Proxy to Container Authentication

A second layer of SSH security runs within each container. When the SSH proxy attempts to handshake with the SSH daemon inside the target container, it uses the following fields associated with the diego-ssh key in its route to the application instance. This inner layer works invisibly and requires no user action, but is described here to complete the SSH security picture.

container_port (required)

container_port indicates the port inside the container on which the SSH daemon listens. The proxy attempts to connect to the host-side mapping of this port after authenticating the client.

host_fingerprint (optional)

When present, host_fingerprint declares the expected fingerprint of the SSH daemon’s host public key. When the fingerprint of the actual target’s host key does not match the expected fingerprint, the connection is terminated. The fingerprint should only contain the hex string generated by ssh-keygen -l.
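
For example, one way to produce that hex string from a host public key is shown below. The file path is illustrative, and newer OpenSSH releases require -E md5 to emit the colon-separated hex form (strip any leading MD5: label before using it):

$ ssh-keygen -E md5 -lf /path/to/ssh_host_rsa_key.pub | awk '{ print $2 }'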

user (optional)

user declares the user ID to use during authentication with the container’s SSH daemon. While this is not a required part of the routing data, it is required for password authentication and may be required for public key authentication.

password (optional)

password declares the password to use during password authentication with the container’s SSH daemon.

private_key (optional)

private_key declares the private key to use when authenticating with the container’s SSH daemon. If present, the key must be a PEM-encoded RSA or DSA private key.
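
For example, one way to generate a key in a compatible PEM (PKCS#1) format is with openssl. The key size and file name are illustrative:

$ openssl genrsa -out diego-ssh-private-key.pem 2048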

Example Application Process

{
  "process_guid": "ssh-process-guid",
  "domain": "ssh-experiments",
  "rootfs": "preloaded:cflinuxfs2",
  "instances": 1,
  "start_timeout": 30,
  "setup": {
    "download": {
      "artifact": "diego-sshd",
      "from": "http://file-server.service.cf.internal.example.com:8080/v1/static/diego-sshd/diego-sshd.tgz",
      "to": "/tmp",
      "cache_key": "diego-sshd"
    }
  },
  "action": {
    "run": {
      "path": "/tmp/diego-sshd",
      "args": [
          "-address=0.0.0.0:2222",
          "-authorizedKey=ssh-rsa ..."
      ],
      "env": [],
      "resource_limits": {}
    }
  },
  "ports": [ 2222 ],
  "routes": {
    "diego-ssh": {
      "container_port": 2222,
      "private_key": "PEM encoded PKCS#1 private key"
    }
  }
}

Daemon discovery

To be accessible through the SSH proxy, containers must host an SSH daemon, expose it through a mapped port, and advertise the port in a diego-ssh route, as in the snippet below. If the SSH proxy cannot find the target process or its diego-ssh route, user authentication fails.

  "routes": {
    "diego-ssh": { "container_port": 2222 }
  }

The Diego system generates the appropriate process definitions for Elastic Runtime applications, reflecting the SSH policies in effect.
