
Pivotal Cloud Foundry v1.11 Partners Release Notice

This topic describes the changes in PCF v1.11 that may be relevant to partner service tiles.

IPv6 Disabled in Stemcell

To reduce the attack surface of PCF and improve security, Pivotal disabled IPv6 for all stemcells v3363.20 and later, including stemcells bundled with PCF v1.11. This means that apps deployed by PCF service tiles using current stemcells cannot communicate via IPv6 unless your tile explicitly configures VM network settings to enable IPv6.

Within PCF, all platform and service components, brokers, and apps communicate via IPv4 and do not use IPv6.

This stemcell change does not affect most PCF tiles. Possible exceptions are:

  • If your service needs IPv6, contact Pivotal Support to discuss. Pivotal wants to know which PCF services require IPv6, if any.

  • If your tile does not work on PCF v1.11 and triggers a “no interface available” or similar error when trying to open a connection, your service might be expecting IPv6, whether you knew it or not, and the updated stemcell surfaces an error that did not appear previously. As above, contact Pivotal Support.

  • If a security scan shows that IPv6 is active for your service, but your service does not use it, remove any tile code that opens unused IPv6 ports. This closes a security hole that earlier PCF versions tolerated.

Tiles may need to enable IPv6 because:

  • Helper apps deployed in a post-install errand communicate externally via IPv6 during runtime.
  • Brokered or managed service instances communicate externally via IPv6 during instance creation.
  • Brokered service instances communicate externally via IPv6 during runtime.

You can enable IPv6 with sysctl in a VM startup script or elsewhere, but a more secure approach would be to make IPv6 an option configurable in Ops Manager.
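
For example, a minimal sketch of a startup script that enables IPv6 with sysctl. The sysctl keys are standard Linux settings, but whether they suffice depends on how the stemcell disables IPv6, so treat this as illustrative:

#!/bin/bash
# Illustrative pre-start script for a tile job that needs IPv6.
# These settings do not persist across reboots unless also written
# to /etc/sysctl.conf or an equivalent configuration file.
set -e
sysctl -w net.ipv6.conf.all.disable_ipv6=0
sysctl -w net.ipv6.conf.default.disable_ipv6=0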

For more information, see the Pivotal Knowledge Base article IPv6 disabled at kernel level on Stemcell 3363.20.

SHA-2 Checksums

When PCF transfers a large data file internally, it ensures data integrity by comparing hash values at each end of the transfer. PCF runs these checksums for downloads from Pivotal Network, uploads to Ops Manager, and file transfers to and from the BOSH and Cloud Controller blobstores.

Previous versions of PCF used the MD5 hash algorithm to generate checksums. PCF v1.11 uses the more secure SHA-256 function, the 256-bit version of the SHA-2 algorithm. SHA-256 generates hashes that render as 64-character hexadecimal strings containing 32 hex pairs.

To make your product SHA-2 compatible, you need to do two things:

  1. Determine if your BOSH release uses SHA-2. If it does not, then make the BOSH releases within your product compatible with SHA-2, so their components use SHA‑2 with the BOSH and Cloud Controller blobstores. See Check If a Release Uses SHA-2 and Create Releases That Use SHA-2 below.

  2. Post your zipped product file to Pivotal Network with its SHA-2 checksum to maintain integrity during downloads from Pivotal Network and uploads to Ops Manager. See Publish a Tile Using SHA-2 below.

Check If a Release Uses SHA-2

To check if an existing release uses SHA-256:

  1. Upload the release to the BOSH Director.

  2. Run bosh inspect-release PRODUCT-NAME/VERSION.

  3. In the bosh inspect-release output, confirm that the Digest column contains 256-bit checksums prefixed with sha256:.

    $ bosh inspect-release uaa/24
    Using environment 'director.example.com' as 'admin'
    Job                                                    Blobstore ID                          Digest                                                                   Links Consumed  Links Provided
    uaa/a323f7e9973692f5ab4e6b294fcab4cc3d0ba9e3           71771c8d-23ac-4342-81c4-716e97e18601  sha256:e45f4c4b46fcb19644121f079d02712eb46b0ff0da411fdbd31ba5b64c45cbf1  -               -
    uaa_postgres/4267e92b14e45051772bdc5f89066d55d8bd9297  36d7e73b-2263-432d-8a1e-d41632fa24d1  sha256:72f17f5efadebb2ba0c9ef9ef78de3e2d87097fb61a04edba1a8eb930dd9c92f  -               -
    

  4. If you see the sha256: checksums, the release uses SHA-2.

Create Releases That Use SHA-2

When you run bosh create-release to create releases for your product, make them compatible with SHA-2 by passing in the --sha2 flag:

$ bosh create-release my-manifest --name my-product-tile --sha2

Note: The BOSH CLI v2 includes commands create-release, upload-release, inspect-release and others with hyphens. Analogous BOSH CLI v1 commands lack hyphens.

Publish a Tile Using SHA-2

After you’ve packaged your product’s BOSH releases, stemcell, metadata, and other tile components into a single zipped download file, post it to Pivotal Network with its SHA-2 checksum as follows.

  1. Run shasum -a 256 PRODUCT-FILE to generate the 64-character SHA-256 hash for the release (see the example after these steps).

  2. Include the SHA-256 hash when uploading the release to Pivotal Network. Do this in one of two ways:

    • API: Pass in the hash as the sha256 field for the Pivotal Network API command POST /api/v2/products/:product_slug/product_files.
    • UI: Enter the hash in the SHA256 field in the Pivotal Network product upload form.

Pivnet Upload GUI
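
For illustration, a hypothetical run of step 1 above (the file name and hash value are placeholders):

$ shasum -a 256 my-product-tile-1.0.0.pivotal
e45f4c4b46fcb19644121f079d02712eb46b0ff0da411fdbd31ba5b64c45cbf1  my-product-tile-1.0.0.pivotal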

BOSH Links

PCF v1.11 enables the BOSH Links feature available with bosh-release v255.5+. For managed services, BOSH Links streamlines BOSH releases by letting multiple jobs share configuration data, such as IP addresses, rather than requiring redundant configurations in the release and in the manifest.

Pivotal encourages tile developers to “linkify” new versions of their tiles to make them easier to use and maintain and to prepare them for future BOSH developments.

For more information, see BOSH Links: Why and How.

Background

BOSH releases are portable, but when BOSH consumes a release to deploy job instances, it needs local and deployment-specific configuration values that the generalized release cannot contain. Such values can include IP addresses, networks, ports, and credentials. BOSH either reads these values from the deployment manifest or sets them from its current state or local environment.

BOSH releases handle these deploy-time values in two places, as described in more detail below:

  • The properties: section of the job spec file lists deployment-specific properties for the job, along with their descriptions and default values. The job spec lists all of the deploy-time properties that the job uses (see the sketch after this list).
  • The job’s templates are incomplete versions of the configuration files and scripts that a job needs to start, stop, and run. These templates contain embedded Ruby calls in place of any deployment-specific values. Different templates can use different sets of properties listed in the job spec, as needed.
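
For example, a minimal sketch of a properties: section in a job spec file (the property names, descriptions, and defaults are illustrative, chosen to match the template example later in this topic):

properties:
  greeting:
    description: Greeting string that the job returns to clients
    default: hello
  port:
    description: Port that the job listens on
    default: 8080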

To fill in deployment-specific values at deploy time, BOSH does the following:

  1. Reads the property value definitions in the manifest.

  2. Runs the job templates through Ruby to plug in any deploy-time values.

  3. Saves out the resulting config files and scripts for use by the job.

Without BOSH Links, the spec file properties list for each job must include all configuration-time values that the job uses, even if they are also used by other jobs. Deployment manifests often have to define identical property-value pairs repeatedly, once for each job.

BOSH Links lets jobs share deploy-time configuration properties as follows:

  • Job spec files include provides: and consumes: sections for sharing properties across jobs. The job spec properties: section still lists the deploy-time properties defined locally in the job, which was the only way to include the properties before BOSH Links. The consumes: section now adds deploy-time properties that are defined in and provided by other jobs.

  • Templates use a link function to pull in shared properties provided by other jobs, plus basic information about the instances running those jobs.

With little overhead, BOSH links make BOSH manifests shorter, easier to read, and simpler to build. By removing environment-specific details, they also make manifests more secure and more readily shareable and extensible.

In a BOSH release, each job has a job spec file, which contains job metadata and is located in the job’s spec subdirectory. This spec file has a properties: section that lists configuration properties that BOSH needs to fill in at deploy time. To enable BOSH Links, an additional provides: section publishes some or all of a job’s properties to other jobs, and in the spec files of those other jobs, a consumes: section brings in the provided properties.

Job Provides Properties

To share properties with other jobs, declare them in a provides: section in the spec file, as a single data structure that contains all of the properties you want to export. For example, a database job could share its username, password, and port values in a database_access_info structure like this:

provides:
- {name: myjob_database_access, type: database_access_info, properties: [port, username, password]}

In this declaration, the properties value lists the properties to export, the type value identifies to other jobs the database_access_info data structure containing those properties, and the name value disambiguates when multiple jobs provide the same data structure.

When a job shares properties in this way, it also automatically shares basic information about instances of the job, including their address values. For the basic instance properties that jobs provide through BOSH Links without explicit declaration, see Template Accessors in the BOSH docs.

Job Consumes Properties

To use properties provided by other jobs, import them with a consumes: section in the spec file, referring to the exported data structure’s type: (not name:) attribute. For example, a webserver job could obtain credentials for the database above by including this in its spec file:

consumes: 
- {name: my_database_access, type: database_access_info}

The type value in this declaration identifies the data structure provided by the database job. The name value gives a unique name to the webserver job’s own local version of this structure, which the webserver job’s template files use to access the local structure’s property values.

As explained above, the webserver job can access address values for database job instances even though the field is not in the database_access_info structure. With address, port, username, and password, the webserver has all of the property values it needs to access the database.

Job template files, in the job’s templates subdirectory, call three methods to fill in properties at deploy time:

  • The p method retrieves a property from any object, which can be the job’s locally-defined property list or a data structure consumed from another job.

  • The link method retrieves a property structure consumed from another job.

  • The instances method retrieves information about deployed instances of any job providing linked properties.

Following the scenario above, an example webserver job template config.json.erb looks like:

{
  "greeting": "<%= p("greeting") %>",
  "port": <%= p("port") %>,
  "database": {
    "port": <%= link("my_database_access").p("port") %>,
    "username": "<%= link("my_database_access").p("username") %>",
    "password": "<%= link("my_database_access").p("password") %>",
    "address": "<%= link("my_database_access").instances[0].address %>"
  }
}

In this code:

  • The p("greeting") and p("port") calls retrieve properties listed in the webserver’s own spec file, without BOSH links. port here is the webserver port, not the database port.

  • The four link("my_database_access") expressions retrieve the database_access_info structure provided under the name myjob_database_access to the webserver. With only one database_access_info structure available, the name does not matter.

  • .p("port"), .p("username"), and .p("password") retrieve properties from the database_access_info structure in front of the dot.

  • .instances[0].address retrieves the address value of the first database job instance. Values obtainable from instances are .address, .az, .id, .index, and .name, as described here.

At deploy time, BOSH would replace the expressions between the Ruby code delimiters (<%= and %>) with the property values they generate and save the resulting JSON out as a webserver job configuration file.

Logic for Backward Compatibility

System components evolve separately, and people combine them in different ways, so someone writing templates for one job might not know whether to use duplicated local properties or BOSH-linked properties from another job, or when that choice might change.

To make templates backward- and forward-compatible, templates embed Ruby with if_p and if_link helper functions. These functions let templates include local or BOSH-linked properties and favor one over the other, as described here.
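
For example, a minimal sketch of a template fragment that favors a BOSH-linked property and falls back to a local one. The link and property names are illustrative, and it assumes a BOSH version in which if_link supports an else block, as if_p does:

<%# Use the linked database port when the link is available. %>
<% if_link("my_database_access") do |db| %>
"database_port": <%= db.p("port") %>
<%# Otherwise fall back to a locally defined property. %>
<% end.else do %>
"database_port": <%= p("database.port") %>
<% end %>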

With or without BOSH Links, a manifest includes jobs in its instance_groups: definitions, and property values explicitly set in the manifest override default values set in the job spec and values set in templates.

BOSH resolves a link by its data structure type, which multiple jobs might provide. You can disambiguate this in the manifest by defining provides: and consumes: relationships between jobs and tagging them with as: and from: differently for different jobs that provide and consume them, as shown below.
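
For example, a hypothetical manifest excerpt that tags a provided structure with as: and selects it in the consumer with from: (the release, job, and link names are illustrative, following the scenario above):

instance_groups:
- name: database
  jobs:
  - name: database
    release: my-product
    provides:
      myjob_database_access: {as: primary_db}
- name: webserver
  jobs:
  - name: webserver
    release: my-product
    consumes:
      my_database_access: {from: primary_db}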

In the manifest, you can also define a job to consume configuration properties from jobs in other BOSH deployments and from external, non-BOSH services. See BOSH Links: Why and How for more information.

CredHub

Version 1.11.0 of PCF introduces CredHub, a credential management component that runs on the BOSH VM to provide a more secure way for PCF to configure, store, and manage credentials.

In PCF v1.11.1, CredHub supports automatic tile migrations through Ops Manager for secret credential types only. Tile developers can write JavaScript migrations to migrate these credentials to CredHub. See Migrating Existing Credentials to CredHub for more information.

Tile authors may choose to wait for a PCF release that includes support for more credential types before incorporating CredHub into their service.

Learn more about how tiles use CredHub in CredHub integrations. See the CredHub documentation for more information about CredHub.

Syslog Standardization

Historically, platform and service components running on PCF have not all used the same format for the log messages they produce. Cloud operators set up their tools to parse these logs in different ways, depending on their format, in order to monitor their PCF deployments.

To make analyzing logs easier, Pivotal is standardizing on the RFC-5424 syslog format. The RFC-5424 standard includes a structured data element, and Pivotal has also standardized its format for that element to carry information about the VM instance that generates the log.

Pivotal has migrated its own platform and product components to use this Pivotal-standard version of RFC-5424 and recommends that its partners follow the standard if they do not already.

Pivotal syslog Log Example

Following the RFC-5424 syslog standard, a Pivotal log line is formatted as follows:

<${PRI}>${VERSION} ${TIMESTAMP} ${HOST_IP} ${APP_NAME} ${PROD_ID} ${MSG_ID} [instance@${ENTERPRISE_ID} director="${DIRECTOR}" deployment="${DEPLOYMENT}" group="${INSTANCE_GROUP}" az="${AVAILABILITY_ZONE}" id="${ID}"] ${MESSAGE}

For example:

<13>1 2016-11-29T12:35:16.931595Z 10.10.147.88 vcap.rep - - [instance@47450 director="us-pws" deployment="cf" group="cell" az="us-east-1b" id="006463a1-45719-9f6d"] {"timestamp":"1480422916.931512594", "source":"rep","message":"rep.metrics-reporter.containerstore-list.starting","log_level":1,"data":{"session":"7.19442"}}

See Log Format for PCF Components for details.

syslog-release and syslog-migration-release

Tile components can forward standard-format logs by including either syslog-release or syslog-migration-release in their tile configurations. These BOSH releases create processes that:

  • Read log messages generated and saved to a local source directory, which defaults to /var/vcap/sys/log/*.
  • Forward the log messages in Pivotal syslog format to a syslog endpoint or other configurable network destination.
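
For example, a minimal sketch of adding log forwarding to an instance group in a tile’s BOSH manifest. The endpoint values are placeholders, and the job and property names assume the syslog_forwarder job that syslog-release provides:

instance_groups:
- name: my-service
  jobs:
  - name: syslog_forwarder
    release: syslog
    properties:
      syslog:
        address: logs.example.com  # placeholder endpoint
        port: 514
        transport: tcp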

syslog Forwarder Differences

syslog-release only supports logging that follows the RFC-5424 protocol.

Pivotal developed syslog-migration-release as a variant of syslog-release to let tiles offer a configuration choice between generating logs in a legacy format and generating logs in Pivotal-standard RFC-5424 format.

Tile developers can use syslog-migration-release to give PCF operators a choice of log formats, affording the operator some time to convert from processing logs generated in a tile’s bespoke legacy format to logs generated in Pivotal standard format.

These are the main differences between syslog-release and syslog-migration-release:

  • Endpoint configuration: syslog-release expects a defined syslog forwarding endpoint and breaks without one. syslog-migration-release offers the boolean syslog.migration.disabled to disable forwarding without breaking when no endpoint is supplied.
  • Log format: syslog-release only forwards logs in RFC-5424 format. syslog-migration-release offers syslog.migration.message_format to permit RFC-5424 or other formats.
  • Intended longevity: syslog-release is the standard for PCF log forwarding and is here to stay. syslog-migration-release is intended as a temporary “bridge” for migrating tiles to syslog-release.
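
For illustration, a hypothetical manifest properties block for syslog-migration-release using the property names above (the message_format value is a placeholder; consult the release documentation for accepted values):

properties:
  syslog:
    migration:
      disabled: false
      message_format: rfc5424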

Note: Tiles should only use one log format at a time, for all components. Do not use the Unix logger command alongside syslog-release or syslog-migration-release, or else the component may forward duplicate log lines to the configured syslog endpoint.

syslog Migration Path

Pivotal recommends the following migration path for tiles that currently generate log messages in formats other than Pivotal-standard syslog:

  • Current release, n: Deploys service that generates non-syslog format logs.
  • Release n + 1: “Bridging” release, uses syslog-migration-release and lets operators configure service to generate either legacy, non-syslog format logs or Pivotal-standard RFC-5424 logs.
  • Release n + 2: Service includes syslog-release and generates Pivotal-standard RFC-5424 logs exclusively.

Configuration Interface

To give operators a uniform experience configuring PCF tiles for syslog, Pivotal recommends that tile developers define their forms to render a System Logging configuration pane that looks like this:

Syslog Config Pane

For tiles that use syslog-migration-release and give operators a choice of log formats, Pivotal recommends adding a dropdown like this:

Syslog Migration Format dropdown

Longer term, Ops Manager may evolve to provide a built-in configuration pane and backend for Pivotal-standard syslog logging, as it does currently with the shared Assign AZs and Networks, Resource Config, and Stemcell panes.
