
Install Concourse for Platform Automation

This guide describes a process for installing Concourse for use with Platform Automation Toolkit. This approach to deploying Concourse uses the BOSH Director deployed by Ops Manager to deploy and maintain Concourse, Credhub, and UAA.

This approach is appropriate for those who need a Concourse in order to run Platform Automation Toolkit. Platform Automation Toolkit requires a secret store. Credhub satisfies this requirement, but requires UAA in turn. So these directions include both. Platform Automation Toolkit supports all credential managers supported by Concourse.

Beta Feature

Support for Platform Automation Toolkit is a Beta feature of Concourse. While the software is all production-ready, this documentation and deployment strategy are still in beta. We're very interested in feedback; feel free to email us at platform-automation@pivotal.io or contact VMware Support with any questions or comments. For access to the Beta, work with your VMware account team.

Prerequisites

Before you install Concourse with BOSH, you must have the following:

  • A supported IaaS provider: AWS, Azure, GCP, or vSphere
  • Terraform v0.12+ (or manual creation of IaaS components): Download
  • Docker: Download
  • The om v4.5+ CLI: For more information, see Installation in the README.
  • BOSH CLI v5.x: For more information, see Installing the CLI in the BOSH documentation.
  • Platform Automation Toolkit Docker Image: Download this from Tanzu Network. The Platform Automation Toolkit docs have instructions to use the docker image on your local workstation.
  • Concourse for Platform Automation: Download all components of the 5.5.11 release from Tanzu Network. NOTE: This is a Beta release! If you do not see the release when you follow the link, make sure you are signed in to Tanzu Network. If you still don't see it, work with your VMware account team to get access to the beta.
  • Ops Manager Image for your IaaS: You can download the image reference (a YAML file) or VM image file from Tanzu Network.
  • The stemcell for your IaaS: You'll need these when you create your Concourse deployment manifest.

    Concourse v5.5.11 was tested on Stemcell 621.29 (Xenial) upon release and supports the 621.* Stemcell family.

    You can download an Ubuntu Xenial stemcell from Tanzu Network.

    This stemcell will be referenced as stemcell.tgz in this guide.
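
    If you prefer to script the download instead of using the Tanzu Network UI, the om CLI's download-product command can fetch the stemcell. This is a minimal sketch, not the required method: PIVNET_TOKEN is a placeholder for your Tanzu Network API token, the file glob shown is for AWS (adjust it for your IaaS), and flag spellings vary slightly between om versions, so confirm them with om download-product --help.

    export PIVNET_TOKEN="your-tanzu-network-api-token"   # placeholder value
    om download-product \
      --pivnet-api-token "${PIVNET_TOKEN}" \
      --pivnet-product-slug stemcells-ubuntu-xenial \
      --product-version "621.29" \
      --pivnet-file-glob "*aws*" \
      --output-directory .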

Get Your Working Directory and Shell Setup

Create a single directory to work in:

mkdir concourse-working-directory
cd concourse-working-directory

Choose which IaaS you'll be working with and set that as a variable for use in future commands. Run only the line that matches your IaaS:

export IAAS="aws"
export IAAS="azure"
export IAAS="gcp"
export IAAS="nsxt"

Create the Required IaaS Infrastructure

The paving repository contains Terraform templates for each supported IaaS: AWS, Azure, GCP, and vSphere. This includes infrastructure for the Ops Manager, BOSH Director, and Concourse.

  1. Clone the repo on the command line from the concourse-working-directory folder:

    git clone https://github.com/pivotal/paving.git
    
  2. The checked-out repository contains a directory for each IaaS. Copy the Terraform templates for the infrastructure of your choice to a new directory outside of the paving repo so you can modify them:

    mkdir paving-${IAAS}
    cp -a paving/${IAAS}/. paving-${IAAS}
    cd paving-${IAAS}
    
    

    IAAS must be set to match one of the infrastructure directories at the top level of the paving repo - for example, aws, azure, gcp, or nsxt. This was done in Get Your Working Directory and Shell Setup, but if you're in a new shell, you may need to do it again.

  3. Within the new directory, the terraform.tfvars.example file shows what values are required for that IaaS. Remove the .example from the name, and replace the examples with real values.
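
    For example (the filename comes from the paving repo; the values you then fill in are your own):

    mv terraform.tfvars.example terraform.tfvars
    # edit terraform.tfvars and replace each example value with a real one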

  4. You'll be extending the Terraform files from the paving repo with an additional file that defines resources for Concourse. Create a new concourse.tf file in the new directory and copy the block below for your IaaS into it:

    AWS:
    resource "aws_route53_record" "concourse" {
      name = "ci.${var.environment_name}.${data.aws_route53_zone.hosted.name}"
    
      zone_id = data.aws_route53_zone.hosted.zone_id
      type    = "A"
    
      alias {
        name                   = aws_lb.concourse.dns_name
        zone_id                = aws_lb.concourse.zone_id
        evaluate_target_health = true
      }
    }
    
    //create a load balancer for concourse
    resource "aws_lb" "concourse" {
      name                             = "${var.environment_name}-concourse-lb"
      load_balancer_type               = "network"
      enable_cross_zone_load_balancing = true
      subnets                          = aws_subnet.public-subnet[*].id
    }
    
    resource "aws_lb_listener" "concourse-tcp" {
      load_balancer_arn = aws_lb.concourse.arn
      port              = 443
      protocol          = "TCP"
    
      default_action {
        type             = "forward"
        target_group_arn = aws_lb_target_group.concourse-tcp.arn
      }
    }
    
    resource "aws_lb_listener" "concourse-ssh" {
      load_balancer_arn = aws_lb.concourse.arn
      port              = 2222
      protocol          = "TCP"
    
      default_action {
        type             = "forward"
        target_group_arn = aws_lb_target_group.concourse-ssh.arn
      }
    }
    
    resource "aws_lb_listener" "concourse-credhub" {
      load_balancer_arn = aws_lb.concourse.arn
      port              = 8844
      protocol          = "TCP"
    
      default_action {
        type             = "forward"
        target_group_arn = aws_lb_target_group.concourse-credhub.arn
      }
    }
    
    resource "aws_lb_listener" "concourse-uaa" {
      load_balancer_arn = aws_lb.concourse.arn
      port              = 8443
      protocol          = "TCP"
    
      default_action {
        type             = "forward"
        target_group_arn = aws_lb_target_group.concourse-uaa.arn
      }
    }
    
    resource "aws_lb_target_group" "concourse-tcp" {
      name     = "${var.environment_name}-concourse-tg-tcp"
      port     = 443
      protocol = "TCP"
      vpc_id   = aws_vpc.vpc.id
    
      health_check {
        protocol = "TCP"
      }
    }
    
    resource "aws_lb_target_group" "concourse-ssh" {
      name     = "${var.environment_name}-concourse-tg-ssh"
      port     = 2222
      protocol = "TCP"
      vpc_id   = aws_vpc.vpc.id
    
      health_check {
        protocol = "TCP"
      }
    }
    
    resource "aws_lb_target_group" "concourse-credhub" {
      name     = "${var.environment_name}-concourse-tg-credhub"
      port     = 8844
      protocol = "TCP"
      vpc_id   = aws_vpc.vpc.id
    
      health_check {
        protocol = "TCP"
      }
    }
    
    resource "aws_lb_target_group" "concourse-uaa" {
      name     = "${var.environment_name}-concourse-tg-uaa"
      port     = 8443
      protocol = "TCP"
      vpc_id   = aws_vpc.vpc.id
    
      health_check {
        protocol = "TCP"
      }
    }
    
    //create a security group for concourse
    resource "aws_security_group" "concourse" {
      name   = "${var.environment_name}-concourse-sg"
      vpc_id = aws_vpc.vpc.id
    
      ingress {
        cidr_blocks = var.ops_manager_allowed_ips
        protocol    = "tcp"
        from_port   = 443
        to_port     = 443
      }
    
      ingress {
        cidr_blocks = var.ops_manager_allowed_ips
        protocol    = "tcp"
        from_port   = 2222
        to_port     = 2222
      }
    
      ingress {
        cidr_blocks = var.ops_manager_allowed_ips
        protocol    = "tcp"
        from_port   = 8844
        to_port     = 8844
      }
    
      ingress {
        cidr_blocks = var.ops_manager_allowed_ips
        protocol    = "tcp"
        from_port   = 8443
        to_port     = 8443
      }
    
      egress {
        cidr_blocks = ["0.0.0.0/0"]
        protocol    = "-1"
        from_port   = 0
        to_port     = 0
      }
    
      tags = merge(
        var.tags,
        { "Name" = "${var.environment_name}-concourse-sg" },
      )
    }
    
    output "concourse_url" {
      value = aws_route53_record.concourse.name
    }
    
    Azure:
    resource "azurerm_public_ip" "concourse" {
      name                         = "${var.environment_name}-concourse-lb"
      location                     = var.location
      resource_group_name          = azurerm_resource_group.platform.name
      allocation_method            = "Static"
      sku                          = "Basic"
    
      tags = {
        environment = var.environment_name
      }
    }
    
    resource "azurerm_lb" "concourse" {
      name                = "${var.environment_name}-concourse-lb"
      resource_group_name = azurerm_resource_group.platform.name
      location            = var.location
      sku                 = "Basic"
    
      frontend_ip_configuration {
        name                 = "${var.environment_name}-concourse-frontend-ip-configuration"
        public_ip_address_id = azurerm_public_ip.concourse.id
      }
    }
    
    resource "azurerm_lb_rule" "concourse-https" {
      name                = "${var.environment_name}-concourse-https"
      resource_group_name = azurerm_resource_group.platform.name
      loadbalancer_id     = azurerm_lb.concourse.id
    
      frontend_ip_configuration_name = "${var.environment_name}-concourse-frontend-ip-configuration"
      protocol                       = "TCP"
      frontend_port                  = 443
      backend_port                   = 443
    
      backend_address_pool_id = azurerm_lb_backend_address_pool.concourse.id
      probe_id                = azurerm_lb_probe.concourse-https.id
    }
    
    resource "azurerm_lb_probe" "concourse-https" {
      name                = "${var.environment_name}-concourse-https"
      resource_group_name = azurerm_resource_group.platform.name
      loadbalancer_id     = azurerm_lb.concourse.id
      protocol            = "TCP"
      port                = 443
    }
    
    resource "azurerm_lb_rule" "concourse-http" {
      name                = "${var.environment_name}-concourse-http"
      resource_group_name = azurerm_resource_group.platform.name
      loadbalancer_id     = azurerm_lb.concourse.id
    
      frontend_ip_configuration_name = "${var.environment_name}-concourse-frontend-ip-configuration"
      protocol                       = "TCP"
      frontend_port                  = 80
      backend_port                   = 80
    
      backend_address_pool_id = azurerm_lb_backend_address_pool.concourse.id
      probe_id                = azurerm_lb_probe.concourse-http.id
    }
    
    resource "azurerm_lb_probe" "concourse-http" {
      name                = "${var.environment_name}-concourse-http"
      resource_group_name = azurerm_resource_group.platform.name
      loadbalancer_id     = azurerm_lb.concourse.id
      protocol            = "TCP"
      port                = 80
    }
    
    resource "azurerm_lb_rule" "concourse-uaa" {
      name                = "${var.environment_name}-concourse-uaa"
      resource_group_name = azurerm_resource_group.platform.name
      loadbalancer_id     = azurerm_lb.concourse.id
    
      frontend_ip_configuration_name = "${var.environment_name}-concourse-frontend-ip-configuration"
      protocol                       = "TCP"
      frontend_port                  = 8443
      backend_port                   = 8443
    
      backend_address_pool_id = azurerm_lb_backend_address_pool.concourse.id
      probe_id                = azurerm_lb_probe.concourse-uaa.id
    }
    
    resource "azurerm_lb_probe" "concourse-uaa" {
      name                = "${var.environment_name}-concourse-uaa"
      resource_group_name = azurerm_resource_group.platform.name
      loadbalancer_id     = azurerm_lb.concourse.id
      protocol            = "TCP"
      port                = 8443
    }
    
    resource "azurerm_lb_rule" "concourse-credhub" {
      name                = "${var.environment_name}-concourse-credhub"
      resource_group_name = azurerm_resource_group.platform.name
      loadbalancer_id     = azurerm_lb.concourse.id
    
      frontend_ip_configuration_name = "${var.environment_name}-concourse-frontend-ip-configuration"
      protocol                       = "TCP"
      frontend_port                  = 8844
      backend_port                   = 8844
    
      backend_address_pool_id = azurerm_lb_backend_address_pool.concourse.id
      probe_id                = azurerm_lb_probe.concourse-credhub.id
    }
    
    resource "azurerm_lb_probe" "concourse-credhub" {
      name                = "${var.environment_name}-concourse-credhub"
      resource_group_name = azurerm_resource_group.platform.name
      loadbalancer_id     = azurerm_lb.concourse.id
      protocol            = "TCP"
      port                = 8844
    }
    
    resource "azurerm_network_security_rule" "concourse-credhub-platform-vms" {
      name                        = "${var.environment_name}-credhub"
      priority                    = 300
      direction                   = "Inbound"
      access                      = "Allow"
      protocol                    = "Tcp"
      source_port_range           = "*"
      destination_port_range      = "8844"
      source_address_prefix       = "*"
      destination_address_prefix  = "*"
      resource_group_name         = azurerm_resource_group.platform.name
      network_security_group_name = azurerm_network_security_group.platform-vms.name
    }
    
    resource "azurerm_network_security_rule" "concourse-uaa-platform-vms" {
      name                        = "${var.environment_name}-uaa"
      priority                    = 3001
      direction                   = "Inbound"
      access                      = "Allow"
      protocol                    = "Tcp"
      source_port_range           = "*"
      destination_port_range      = "8443"
      source_address_prefix       = "*"
      destination_address_prefix  = "*"
      resource_group_name         = azurerm_resource_group.platform.name
      network_security_group_name = azurerm_network_security_group.platform-vms.name
    }
    
    resource "azurerm_network_security_rule" "concourse-credhub-ops-manager" {
      name                        = "${var.environment_name}-credhub"
      priority                    = 300
      direction                   = "Inbound"
      access                      = "Allow"
      protocol                    = "Tcp"
      source_port_range           = "*"
      destination_port_range      = "8844"
      source_address_prefix       = "*"
      destination_address_prefix  = "*"
      resource_group_name         = azurerm_resource_group.platform.name
      network_security_group_name = azurerm_network_security_group.ops-manager.name
    }
    
    resource "azurerm_network_security_rule" "concourse-uaa-ops-manager" {
      name                        = "${var.environment_name}-uaa"
      priority                    = 3001
      direction                   = "Inbound"
      access                      = "Allow"
      protocol                    = "Tcp"
      source_port_range           = "*"
      destination_port_range      = "8443"
      source_address_prefix       = "*"
      destination_address_prefix  = "*"
      resource_group_name         = azurerm_resource_group.platform.name
      network_security_group_name = azurerm_network_security_group.ops-manager.name
    }
    
    resource "azurerm_lb_backend_address_pool" "concourse" {
      name                = "${var.environment_name}-concourse-backend-pool"
      resource_group_name = azurerm_resource_group.platform.name
      loadbalancer_id     = azurerm_lb.concourse.id
    }
    
    resource "azurerm_dns_a_record" "concourse" {
      name                = "ci.${var.environment_name}"
      zone_name           = data.azurerm_dns_zone.hosted.name
      resource_group_name = data.azurerm_dns_zone.hosted.resource_group_name
      ttl                 = "60"
      records             = [azurerm_public_ip.concourse.ip_address]
    
      tags = merge(
        var.tags,
        { name = "ci.${var.environment_name}" },
      )
    }
    
    output "concourse_url" {
      value  = "${azurerm_dns_a_record.concourse.name}.${azurerm_dns_a_record.concourse.zone_name}"
    }
    
    GCP:
    resource "google_dns_record_set" "concourse" {
      name = "ci.${var.environment_name}.${data.google_dns_managed_zone.hosted-zone.dns_name}"
      type = "A"
      ttl  = 60
    
      managed_zone = var.hosted_zone
    
      rrdatas = [google_compute_address.concourse.address]
    }
    
    //create a load balancer for concourse
    resource "google_compute_address" "concourse" {
      name = "${var.environment_name}-concourse"
    }
    
    resource "google_compute_firewall" "concourse" {
      allow {
        ports    = ["443", "2222", "8844", "8443"]
        protocol = "tcp"
      }
    
      direction     = "INGRESS"
      name          = "${var.environment_name}-concourse-open"
      network       = google_compute_network.network.self_link
      source_ranges = ["0.0.0.0/0"]
      target_tags   = ["concourse"]
    }
    
    resource "google_compute_forwarding_rule" "concourse_credhub" {
      ip_address  = google_compute_address.concourse.address
      ip_protocol = "TCP"
      name        = "${var.environment_name}-concourse-credhub"
      port_range  = "8844-8844"
      target      = google_compute_target_pool.concourse_target_pool.self_link
    }
    
    resource "google_compute_forwarding_rule" "concourse_ssh" {
      ip_address  = google_compute_address.concourse.address
      ip_protocol = "TCP"
      name        = "${var.environment_name}-concourse-ssh"
      port_range  = "2222-2222"
      target      = google_compute_target_pool.concourse_target_pool.self_link
    }
    
    resource "google_compute_forwarding_rule" "concourse_tcp" {
      ip_address  = google_compute_address.concourse.address
      ip_protocol = "TCP"
      name        = "${var.environment_name}-concourse-tcp"
      port_range  = "443-443"
      target      = google_compute_target_pool.concourse_target_pool.self_link
    }
    
    resource "google_compute_forwarding_rule" "concourse_uaa" {
      ip_address  = google_compute_address.concourse.address
      ip_protocol = "TCP"
      name        = "${var.environment_name}-concourse-uaa"
      port_range  = "8443-8443"
      target      = google_compute_target_pool.concourse_target_pool.self_link
    }
    
    resource "google_compute_target_pool" "concourse_target_pool" {
      name = "${var.environment_name}-concourse"
    }
    
    output "concourse_url" {
      value = replace(replace("${google_dns_record_set.concourse.name}", "/\\.$/", ""), "*.", "")
    }
    
    vSphere (NSX-T):
    resource "nsxt_lb_service" "concourse_lb_service" {
      description  = "concourse lb_service"
      display_name = "${var.environment_name}_concourse_lb_service"
    
      enabled           = true
      logical_router_id = nsxt_logical_tier1_router.t1_infrastructure.id
      virtual_server_ids = ["${nsxt_lb_tcp_virtual_server.concourse_lb_virtual_server.id}"]
      error_log_level   = "INFO"
      size              = "SMALL"
    
      depends_on        = ["nsxt_logical_router_link_port_on_tier1.t1_infrastructure_to_t0"]
    
      tag {
        scope = "terraform"
        tag   = var.environment_name
      }
    }
    
    resource "nsxt_ns_group" "concourse_ns_group" {
      display_name = "${var.environment_name}_concourse_ns_group"
    
      tag {
        scope = "terraform"
        tag   = var.environment_name
      }
    }
    
    resource "nsxt_lb_tcp_monitor" "concourse_lb_tcp_monitor" {
      display_name = "${var.environment_name}_concourse_lb_tcp_monitor"
      interval     = 5
      monitor_port  = 443
      rise_count    = 3
      fall_count    = 3
      timeout      = 15
    
      tag {
        scope = "terraform"
        tag   = var.environment_name
      }
    }
    
    resource "nsxt_lb_pool" "concourse_lb_pool" {
      description              = "concourse_lb_pool provisioned by Terraform"
      display_name             = "${var.environment_name}_concourse_lb_pool"
      algorithm                = "WEIGHTED_ROUND_ROBIN"
      min_active_members       = 1
      tcp_multiplexing_enabled = false
      tcp_multiplexing_number  = 3
      active_monitor_id        = "${nsxt_lb_tcp_monitor.concourse_lb_tcp_monitor.id}"
      snat_translation {
        type          = "SNAT_AUTO_MAP"
      }
      member_group {
        grouping_object {
          target_type = "NSGroup"
          target_id   = "${nsxt_ns_group.concourse_ns_group.id}"
        }
      }
    
      tag {
        scope = "terraform"
        tag   = var.environment_name
      }
    }
    
    resource "nsxt_lb_fast_tcp_application_profile" "tcp_profile" {
      display_name = "${var.environment_name}_concourse_fast_tcp_profile"
    
      tag {
        scope = "terraform"
        tag   = var.environment_name
      }
    }
    
    resource "nsxt_lb_tcp_virtual_server" "concourse_lb_virtual_server" {
      description                = "concourse lb_virtual_server provisioned by terraform"
      display_name               = "${var.environment_name}_concourse virtual server"
      application_profile_id     = "${nsxt_lb_fast_tcp_application_profile.tcp_profile.id}"
      ip_address                 = "${var.nsxt_lb_concourse_virtual_server_ip_address}"
      ports                       = ["443","8443","8844"]
      pool_id                    = "${nsxt_lb_pool.concourse_lb_pool.id}"
    
      tag {
        scope = "terraform"
        tag   = var.environment_name
      }
    }
    
    variable "nsxt_lb_concourse_virtual_server_ip_address" {
      default     = ""
      description = "IP Address for concourse loadbalancer"
      type        = "string"
    }
    
    output "concourse_url" {
      value = var.nsxt_lb_concourse_virtual_server_ip_address
    }
    
  5. Now that your variables and modifications are in place, initialize Terraform, which downloads the required IaaS providers.

    terraform init
    
  6. Run terraform refresh to update the state with what currently exists on the IaaS.

    terraform refresh \
      -var-file=terraform.tfvars
    
  7. Next, you can run terraform plan to see what changes will be made to the infrastructure on the IaaS.

    terraform plan \
      -out=terraform.tfplan \
      -var-file=terraform.tfvars
    
  8. Finally, you can run terraform apply to create the required infrastructure on the IaaS.

    terraform apply \
      -parallelism=5 \
      terraform.tfplan
    
  9. Save the output of terraform output stable_config as terraform-outputs.yml one directory up, in your working directory:

    terraform output stable_config > ../terraform-outputs.yml
    
  10. Export CONCOURSE_URL from the terraform output concourse_url:

    export CONCOURSE_URL="$(terraform output concourse_url)"
    

  11. Return to your working directory for the next, post-terraform steps:

    cd ..
    

Deploy the Director

Platform Automation Toolkit provides tools to create an Ops Manager with a BOSH Director.

  1. Ops Manager needs to be deployed with IaaS-specific configuration. Platform Automation Toolkit provides a configuration file format that looks like this:

    Copy and paste the YAML below for your IaaS and save as opsman-config.yml in your working directory.

    AWS:
    ---
    opsman-configuration:
      aws:
        access_key_id: ((access_key))
        boot_disk_size: 100
        iam_instance_profile_name: ((ops_manager_iam_instance_profile_name))
        instance_type: m5.large
        key_pair_name: ((ops_manager_key_pair_name))
        public_ip: ((ops_manager_public_ip))
        region: ((region))
        secret_access_key: ((secret_key))
        security_group_ids: [((ops_manager_security_group_id))]
        vm_name: ((environment_name))-ops-manager-vm
        vpc_subnet_id: ((ops_manager_subnet_id))
    
    Azure:
    ---
    opsman-configuration:
      azure:
        boot_disk_size: "100"
        client_id: ((client_id))
        client_secret: ((client_secret))
        cloud_name: ((iaas_configuration_environment_azurecloud))
        container: ((ops_manager_container_name))
        location: ((location))
        network_security_group: ((ops_manager_security_group_name))
        private_ip: ((ops_manager_private_ip))
        public_ip: ((ops_manager_public_ip))
        resource_group: ((resource_group_name))
        ssh_public_key: ((ops_manager_ssh_public_key))
        storage_account: ((ops_manager_storage_account_name))
        storage_sku: "Premium_LRS"
        subnet_id: ((management_subnet_id))
        subscription_id: ((subscription_id))
        tenant_id: ((tenant_id))
        use_managed_disk: "true"
        vm_name: "((resource_group_name))-ops-manager"
        vm_size: "Standard_DS2_v2"
    
    GCP:
    ---
    opsman-configuration:
      gcp:
        boot_disk_size: 100
        custom_cpu: 4
        custom_memory: 16
        gcp_service_account: ((service_account_key))
        project: ((project))
        public_ip: ((ops_manager_public_ip))
        region: ((region))
        ssh_public_key: ((ops_manager_ssh_public_key))
        tags: ((ops_manager_tags))
        vm_name: ((environment_name))-ops-manager-vm
        vpc_subnet: ((management_subnet_name))
        zone: ((availability_zones.0))
    
    vSphere:
    ---
    opsman-configuration:
      vsphere:
        vcenter:
          datacenter: ((vcenter_datacenter))
          datastore: ((vcenter_datastore))
          folder: ((ops_manager_folder))
          url: ((vcenter_host))
          username: ((vcenter_username))
          password: ((vcenter_password))
          resource_pool: /((vcenter_datastore))/host/((vcenter_cluster))/Resources/((vcenter_resource_pool))
          insecure: ((allow_unverified_ssl))
        disk_type: thin
        dns: ((ops_manager_dns_servers))
        gateway: ((management_subnet_gateway))
        hostname: ((ops_manager_dns))
        netmask: ((ops_manager_netmask))
        network: ((management_subnet_name))
        ntp: ((ops_manager_ntp))
        private_ip: ((ops_manager_private_ip))
        ssh_public_key: ((ops_manager_ssh_public_key))
    

    Where:

    • The ((parameters)) map to outputs in terraform-outputs.yml, which is provided as a vars file for YAML interpolation in a subsequent step.

    opsman.yml for an unlisted IaaS

    For a supported IaaS not listed above, reference the Platform Automation Toolkit docs.
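
    As an optional sanity check before creating the VM, you can confirm that the ((parameters)) in opsman-config.yml resolve against the Terraform outputs; any parameter missing from the vars file surfaces as an interpolation error, which tells you what still needs a value:

    om interpolate \
      --config opsman-config.yml \
      --vars-file terraform-outputs.yml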

  2. First import the Platform Automation Toolkit Docker Image:

    docker import ${PLATFORM_AUTOMATION_TOOLKIT_IMAGE_TGZ} platform-automation-toolkit-image
    

    Where ${PLATFORM_AUTOMATION_TOOLKIT_IMAGE_TGZ} is set to the filepath of the image downloaded from Tanzu Network.
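
    For example (the filename below is a placeholder; use the path of the file you actually downloaded):

    export PLATFORM_AUTOMATION_TOOLKIT_IMAGE_TGZ="$PWD/platform-automation-image-VERSION.tgz"   # placeholder filename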

  3. Create the Ops Manager using the p-automator CLI. This requires the Ops Manager Image for your IaaS and the previously created opsman-config.yml to be present in your working directory.

    The following command runs the Platform Automation Toolkit Docker image, mounts the current directory from your local filesystem as /workspace inside the container, and invokes p-automator from that directory to create the Ops Manager VM.

    docker run -it --rm -v $PWD:/workspace -w /workspace platform-automation-toolkit-image \
      p-automator create-vm \
        --config opsman-config.yml \
        --image-file ops-manager*.{yml,ova,raw} \
        --vars-file terraform-outputs.yml
    

    The p-automator create-vm command writes a state.yml file uniquely identifying the created Ops Manager VM. This state.yml file is used for long term management of the Ops Manager VM. We recommend storing it for future use.

  4. Create an env.yml file in your working directory to provide parameters to allow om to target the Ops Manager.

    connect-timeout: 30            # default 5
    request-timeout: 1800          # default 1800
    skip-ssl-validation: true      # default false
    
  5. Export the Ops Manager DNS entry created by terraform as the target Ops Manager for om.

    export OM_TARGET="$(om interpolate -c terraform-outputs.yml --path /ops_manager_dns)"
    

    Alternatively, this can be included in the env.yml created above as the target attribute.
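
    For example, a minimal way to do that from the shell, assuming env.yml does not already contain a target line:

    echo "target: ${OM_TARGET}" >> env.yml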

  6. Set up authentication on the Ops Manager.
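
    The om command below reads these credentials from environment variables in your shell, and later om commands in this guide rely on the same variables. Export them first; the values here are placeholders, so choose your own:

    export OM_USERNAME="desired-username"
    export OM_PASSWORD="desired-password"
    export OM_DECRYPTION_PASSPHRASE="desired-passphrase"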

    om --env env.yml configure-authentication \
       --username ${OM_USERNAME} \
       --password ${OM_PASSWORD} \
       --decryption-passphrase ${OM_DECRYPTION_PASSPHRASE}
    

    Where:

    • ${OM_USERNAME} is the desired username for accessing the Ops Manager.
    • ${OM_PASSWORD} is the desired password for accessing the Ops Manager.
    • ${OM_DECRYPTION_PASSPHRASE} is the desired decryption passphrase used for recovering the Ops Manager if the VM is restarted.

    This configures Ops Manager with the credentials you set, which are required for every subsequent om command.

  7. The Ops Manager can now be used to create a BOSH Director.

    Copy and paste the YAML below for your IaaS and save as director-config.yml.

    AWS:
    ---
    az-configuration:
    - name: ((availability_zones.0))
    - name: ((availability_zones.1))
    - name: ((availability_zones.2))
    network-assignment:
      network:
        name: management
      singleton_availability_zone:
        name: ((availability_zones.0))
    networks-configuration:
      icmp_checks_enabled: false
      networks:
      - name: management
        subnets:
        - availability_zone_names:
          - ((availability_zones.0))
          cidr: ((management_subnet_cidrs.0))
          dns: 169.254.169.253
          gateway: ((management_subnet_gateways.0))
          iaas_identifier: ((management_subnet_ids.0))
          reserved_ip_ranges: ((management_subnet_reserved_ip_ranges.0))
        - availability_zone_names:
          - ((availability_zones.1))
          cidr: ((management_subnet_cidrs.1))
          dns: 169.254.169.253
          gateway: ((management_subnet_gateways.1))
          iaas_identifier: ((management_subnet_ids.1))
          reserved_ip_ranges: ((management_subnet_reserved_ip_ranges.1))
        - availability_zone_names:
          - ((availability_zones.2))
          cidr: ((management_subnet_cidrs.2))
          dns: 169.254.169.253
          gateway: ((management_subnet_gateways.2))
          iaas_identifier: ((management_subnet_ids.2))
          reserved_ip_ranges: ((management_subnet_reserved_ip_ranges.2))
      - name: pas
        subnets:
        - availability_zone_names:
          - ((availability_zones.0))
          cidr: ((pas_subnet_cidrs.0))
          dns: 169.254.169.253
          gateway: ((pas_subnet_gateways.0))
          iaas_identifier: ((pas_subnet_ids.0))
          reserved_ip_ranges: ((pas_subnet_reserved_ip_ranges.0))
        - availability_zone_names:
          - ((availability_zones.1))
          cidr: ((pas_subnet_cidrs.1))
          dns: 169.254.169.253
          gateway: ((pas_subnet_gateways.1))
          iaas_identifier: ((pas_subnet_ids.1))
          reserved_ip_ranges: ((pas_subnet_reserved_ip_ranges.1))
        - availability_zone_names:
          - ((availability_zones.2))
          cidr: ((pas_subnet_cidrs.2))
          dns: 169.254.169.253
          gateway: ((pas_subnet_gateways.2))
          iaas_identifier: ((pas_subnet_ids.2))
          reserved_ip_ranges: ((pas_subnet_reserved_ip_ranges.2))
      - name: pks
        subnets:
        - availability_zone_names:
          - ((availability_zones.0))
          cidr: ((pks_subnet_cidrs.0))
          dns: 169.254.169.253
          gateway: ((pks_subnet_gateways.0))
          iaas_identifier: ((pks_subnet_ids.0))
          reserved_ip_ranges: ((pks_subnet_reserved_ip_ranges.0))
        - availability_zone_names:
          - ((availability_zones.1))
          cidr: ((pks_subnet_cidrs.1))
          dns: 169.254.169.253
          gateway: ((pks_subnet_gateways.1))
          iaas_identifier: ((pks_subnet_ids.1))
          reserved_ip_ranges: ((pks_subnet_reserved_ip_ranges.1))
        - availability_zone_names:
          - ((availability_zones.2))
          cidr: ((pks_subnet_cidrs.2))
          dns: 169.254.169.253
          gateway: ((pks_subnet_gateways.2))
          iaas_identifier: ((pks_subnet_ids.2))
          reserved_ip_ranges: ((pks_subnet_reserved_ip_ranges.2))
      - name: services
        subnets:
        - availability_zone_names:
          - ((availability_zones.0))
          cidr: ((services_subnet_cidrs.0))
          dns: 169.254.169.253
          gateway: ((services_subnet_gateways.0))
          iaas_identifier: ((services_subnet_ids.0))
          reserved_ip_ranges: ((services_subnet_reserved_ip_ranges.0))
        - availability_zone_names:
          - ((availability_zones.1))
          cidr: ((services_subnet_cidrs.1))
          dns: 169.254.169.253
          gateway: ((services_subnet_gateways.1))
          iaas_identifier: ((services_subnet_ids.1))
          reserved_ip_ranges: ((services_subnet_reserved_ip_ranges.1))
        - availability_zone_names:
          - ((availability_zones.2))
          cidr: ((services_subnet_cidrs.2))
          dns: 169.254.169.253
          gateway: ((services_subnet_gateways.2))
          iaas_identifier: ((services_subnet_ids.2))
          reserved_ip_ranges: ((services_subnet_reserved_ip_ranges.2))
    properties-configuration:
      director_configuration:
        ntp_servers_string: 169.254.169.123
      iaas_configuration:
        access_key_id: ((ops_manager_iam_user_access_key))
        secret_access_key: ((ops_manager_iam_user_secret_key))
        iam_instance_profile: ((ops_manager_iam_instance_profile_name))
        vpc_id: ((vpc_id))
        security_group: ((platform_vms_security_group_id))
        key_pair_name: ((ops_manager_key_pair_name))
        ssh_private_key: ((ops_manager_ssh_private_key))
        region: ((region))
    resource-configuration:
      compilation:
        instance_type:
          id: automatic
    vmextensions-configuration:
    - name: web-lb-security-groups
      cloud_properties:
        security_groups:
        - ((web_lb_security_group_id))
        - ((platform_vms_security_group_id))
    - name: ssh-lb-security-groups
      cloud_properties:
        security_groups:
        - ((ssh_lb_security_group_id))
        - ((platform_vms_security_group_id))
    - name: tcp-lb-security-groups
      cloud_properties:
        security_groups:
        - ((tcp_lb_security_group_id))
        - ((platform_vms_security_group_id))
    - name: pks-api-lb-security-groups
      cloud_properties:
        security_groups:
        - ((pks_api_lb_security_group_id))
        - ((platform_vms_security_group_id))
    - name: concourse-lb
      cloud_properties:
        lb_target_groups:
          - ((environment_name))-concourse-tg-tcp
          - ((environment_name))-concourse-tg-ssh
          - ((environment_name))-concourse-tg-credhub
          - ((environment_name))-concourse-tg-uaa
        security_groups:
          - ((environment_name))-concourse-sg
          - ((platform_vms_security_group_id))
    - name: increased-disk
      cloud_properties:
        type: gp2
        size: 512000
    
    Azure:
    ---
    network-assignment:
      network:
        name: management
      singleton_availability_zone:
        name: 'zone-1'
      other_availability_zones:
        name: 'zone-2'
    networks-configuration:
      icmp_checks_enabled: false
      networks:
      - name: management
        service_network: false
        subnets:
        - iaas_identifier: ((network_name))/((management_subnet_name))
          cidr: ((management_subnet_cidr))
          reserved_ip_ranges: ((management_subnet_gateway))-((management_subnet_range))
          dns: 168.63.129.16
          gateway: ((management_subnet_gateway))
      - name: pas
        service_network: false
        subnets:
        - iaas_identifier: ((network_name))/((pas_subnet_name))
          cidr: ((pas_subnet_cidr))
          reserved_ip_ranges: ((pas_subnet_gateway))-((pas_subnet_range))
          dns: 168.63.129.16
          gateway: ((pas_subnet_gateway))
      - name: pks
        service_network: false
        subnets:
        - iaas_identifier: ((network_name))/((pks_subnet_name))
          cidr: ((pks_subnet_cidr))
          reserved_ip_ranges: ((pks_subnet_gateway))-((pks_subnet_range))
          dns: 168.63.129.16
          gateway: ((pks_subnet_gateway))
      - name: services-1
        service_network: false
        subnets:
        - iaas_identifier: ((network_name))/((services_subnet_name))
          cidr: ((services_subnet_cidr))
          reserved_ip_ranges: ((services_subnet_gateway))-((services_subnet_range))
          dns: 168.63.129.16
          gateway: ((services_subnet_gateway))
    properties-configuration:
      iaas_configuration:
        subscription_id: ((subscription_id))
        tenant_id: ((tenant_id))
        client_id: ((client_id))
        client_secret: ((client_secret))
        resource_group_name: ((resource_group_name))
        bosh_storage_account_name: ((bosh_storage_account_name))
        default_security_group: ((platform_vms_security_group_name))
        ssh_public_key: ((ops_manager_ssh_public_key))
        ssh_private_key: ((ops_manager_ssh_private_key))
        cloud_storage_type: managed_disks
        storage_account_type: Standard_LRS
        environment: ((iaas_configuration_environment_azurecloud))
        availability_mode: availability_sets
      director_configuration:
        ntp_servers_string: 0.pool.ntp.org
        metrics_ip: ''
        resurrector_enabled: true
        post_deploy_enabled: false
        bosh_recreate_on_next_deploy: false
        retry_bosh_deploys: true
        hm_pager_duty_options:
          enabled: false
        hm_emailer_options:
          enabled: false
        blobstore_type: local
        database_type: internal
      security_configuration:
        trusted_certificates: ''
        generate_vm_passwords: true
    vmextensions-configuration:
    - name: pks-api-lb-security-groups
      cloud_properties:
        security_group: ((pks_api_network_security_group_name))
        application_security_groups: ["((pks_api_application_security_group_name))"]
    - name: concourse-lb
      cloud_properties:
        load_balancer: ((environment_name))-concourse-lb
    - name: increased-disk
      cloud_properties:
        ephemeral_disk:
          size: 512000
    
    GCP:
    ---
    az-configuration:
    - name: ((availability_zones.0))
    - name: ((availability_zones.1))
    - name: ((availability_zones.2))
    network-assignment:
      network:
        name: management
      singleton_availability_zone:
        name: ((availability_zones.0))
    networks-configuration:
      icmp_checks_enabled: false
      networks:
      - name: management
        subnets:
        - availability_zone_names:
          - ((availability_zones.0))
          - ((availability_zones.1))
          - ((availability_zones.2))
          cidr: ((management_subnet_cidr))
          dns: 169.254.169.254
          gateway: ((management_subnet_gateway))
          iaas_identifier: ((network_name))/((management_subnet_name))/((region))
          reserved_ip_ranges: ((management_subnet_reserved_ip_ranges))
      - name: pas
        subnets:
        - availability_zone_names:
          - ((availability_zones.0))
          - ((availability_zones.1))
          - ((availability_zones.2))
          cidr: ((pas_subnet_cidr))
          dns: 169.254.169.254
          gateway: ((pas_subnet_gateway))
          iaas_identifier: ((network_name))/((pas_subnet_name))/((region))
          reserved_ip_ranges: ((pas_subnet_reserved_ip_ranges))
      - name: services
        subnets:
        - availability_zone_names:
          - ((availability_zones.0))
          - ((availability_zones.1))
          - ((availability_zones.2))
          cidr: ((services_subnet_cidr))
          dns: 169.254.169.254
          gateway: ((services_subnet_gateway))
          iaas_identifier: ((network_name))/((services_subnet_name))/((region))
          reserved_ip_ranges: ((services_subnet_reserved_ip_ranges))
      - name: pks
        subnets:
        - availability_zone_names:
          - ((availability_zones.0))
          - ((availability_zones.1))
          - ((availability_zones.2))
          cidr: ((pks_subnet_cidr))
          dns: 169.254.169.254
          gateway: ((pks_subnet_gateway))
          iaas_identifier: ((network_name))/((pks_subnet_name))/((region))
          reserved_ip_ranges: ((pks_subnet_reserved_ip_ranges))
    properties-configuration:
      iaas_configuration:
        project: ((project))
        auth_json: ((ops_manager_service_account_key))
        default_deployment_tag: ((platform_vms_tag))
      director_configuration:
        ntp_servers_string: 169.254.169.254
      security_configuration:
        trusted_certificates: ''
        generate_vm_passwords: true
    resource-configuration:
      compilation:
        instance_type:
          id: xlarge.disk
    vmextensions-configuration:
    - name: concourse-lb
      cloud_properties:
        target_pool: ((environment_name))-concourse
    - name: increased-disk
      cloud_properties:
        root_disk_size_gb: 500
        root_disk_type: pd-ssd
    
    vSphere (NSX-T):
    ---
    az-configuration:
      - name: az1
        clusters:
          - cluster: ((vcenter_cluster))
            resource_pool: ((vcenter_resource_pool))
    properties-configuration:
      director_configuration:
        ntp_servers_string: ((ops_manager_ntp))
        retry_bosh_deploys: true
      iaas_configuration:
        vcenter_host: ((vcenter_host))
        vcenter_username: ((vcenter_username))
        vcenter_password: ((vcenter_password))
        datacenter: ((vcenter_datacenter))
        disk_type: thin
        ephemeral_datastores_string: ((vcenter_datastore))
        persistent_datastores_string: ((vcenter_datastore))
        nsx_networking_enabled: true
        nsx_mode: nsx-t
        nsx_address: ((nsxt_host))
        nsx_username: ((nsxt_username))
        nsx_password: ((nsxt_password))
        nsx_ca_certificate: ((nsxt_ca_cert))
        ssl_verification_enabled: ((disable_ssl_verification))
    network-assignment:
      network:
        name: management
      singleton_availability_zone:
        name: az1
    networks-configuration:
      icmp_checks_enabled: false
      networks:
        - name: management
          subnets:
            - availability_zone_names:
                - az1
              cidr: ((management_subnet_cidr))
              dns: ((ops_manager_dns_servers))
              gateway: ((management_subnet_gateway))
              reserved_ip_ranges: ((management_subnet_reserved_ip_ranges))
              iaas_identifier: ((management_subnet_name))
    vmextensions-configuration:
      - name: concourse-lb
        cloud_properties:
          nsxt:
            ns_groups:
            - ((environment_name))_concourse_ns_group
      - name: increased-disk
        cloud_properties:
          disk: 512000
    

    Where:

    • The ((parameters)) map to outputs in terraform-outputs.yml, which is provided as a vars file for YAML interpolation in a subsequent step.
  8. Create the BOSH director using the om CLI.

    The previously saved director-config.yml and terraform-outputs.yml files can be used directly with om to configure the director.

    Info

    The following om commands implicitly use the OM_USERNAME, OM_PASSWORD, and OM_DECRYPTION_PASSPHRASE environment variables. These were set in a previous step, so you may need to re-set them if you are in a fresh shell.

    om --env env.yml configure-director \
       --config director-config.yml \
       --vars-file terraform-outputs.yml
    
    om --env env.yml apply-changes \
       --skip-deploy-products
    

    The end result will be a working BOSH director, which can be targeted for the Concourse deployment.
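
    As an optional check, om can list what Ops Manager has deployed; after apply-changes completes, the BOSH Director should appear in the output:

    om --env env.yml deployed-products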

Upload Releases and the Stemcell to the BOSH Director

  1. Write the private key for connecting to the BOSH director.

    om interpolate \
      -c terraform-outputs.yml \
      --path /ops_manager_ssh_private_key > /tmp/private_key
    
  2. Export the environment variables required to target the BOSH director/BOSH Credhub and verify you are properly targeted.

    eval "$(om --env env.yml bosh-env --ssh-private-key=/tmp/private_key)"
    
    # Will return a non-error if properly targeted
    bosh curl /info
    
  3. Upload all of the BOSH releases previously downloaded. Note that you'll either need to copy them to your working directory before running these commands, or change directories to wherever you originally downloaded them.

    # upload releases
    bosh upload-release concourse-release*.tgz
    bosh upload-release bpm-release*.tgz
    bosh upload-release postgres-release*.tgz
    bosh upload-release uaa-release*.tgz
    bosh upload-release credhub-release*.tgz
    bosh upload-release backup-and-restore-sdk-release*.tgz
    
  4. Upload the previously downloaded stemcell. (If you changed to your downloads directory, remember to change back after uploading this file.)

    bosh upload-stemcell *stemcell*.tgz
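
    To confirm the uploads before deploying, list what the director now has:

    bosh releases
    bosh stemcells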
    

Set up concourse-bosh-deployment Directory on Your Local Machine

concourse-bosh-deployment contains a sample BOSH manifest, a versions.yml file, and a selection of deployment-modifying operations files. Using these sample files makes it much faster and easier to get started.

  1. Create a directory called concourse-bosh-deployment in your working directory:

    mkdir concourse-bosh-deployment
    
  2. Untar the concourse-bosh-deployment.tgz file downloaded from Tanzu Network:

    tar -C concourse-bosh-deployment -xzf concourse-bosh-deployment.tgz
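
    You can list the extracted files to confirm the sample manifest, versions.yml, and operations files described above are present (the exact layout depends on the release you downloaded):

    ls concourse-bosh-deployment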
    

Deploy with BOSH

The deployment instructions below deploy the following:

  • A Concourse worker VM
  • A Concourse web VM with co-located Credhub and UAA
  • A Postgres Database VM
  • A single user for logging in to Concourse with basic auth

All files should be created in your working directory.

  1. Create a vars file called vars.yml with the block below for your IaaS, replacing values as necessary:

    AWS:
    # BOSH uses this to identify the deployment
    deployment_name: concourse
    # This can be any VM type from the cloud config: bosh cloud-config
    web_vm_type: c5.large
    # This is the external concourse URL exported from the terraform output
    external_host: $CONCOURSE_URL
    # This is the external concourse URL exported from the terraform output
    external_url: https://$CONCOURSE_URL
    # This can be any VM type from the cloud config: bosh cloud-config
    db_vm_type: c5.large
    # This can be any disk type from the cloud config: bosh cloud-config
    db_persistent_disk_type: 102400
    # This can be any VM type from the cloud config: bosh cloud-config
    worker_vm_type: c5.large
    # This assigns created VMs (web, worker, and db) to AZs in the IaaS
    azs: ((availability_zones))
    # The network name to assign the VMs to.
    network_name: management
    
    Azure:
    # BOSH uses this to identify the deployment
    deployment_name: concourse
    # This can be any VM type from the cloud config: bosh cloud-config
    web_vm_type: Standard_DS2_v2
    # This is the external concourse URL exported from the terraform output
    external_host: $CONCOURSE_URL
    # This is the external concourse URL exported from the terraform output
    external_url: https://$CONCOURSE_URL
    # This can be any VM type from the cloud config: bosh cloud-config
    db_vm_type: Standard_DS2_v2
    # This can be any disk type from the cloud config: bosh cloud-config
    db_persistent_disk_type: 102400
    # This can be any VM type from the cloud config: bosh cloud-config
    worker_vm_type: Standard_DS2_v2
    # This assigns created VMs (web, worker, and db) to AZs in the IaaS
    azs: ["Availability Sets"]
    # The network name to assign the VMs to.
    network_name: management
    
    GCP:
    # BOSH uses this to identify the deployment
    deployment_name: concourse
    # This can be any VM type from the cloud config: bosh cloud-config
    web_vm_type: large
    # This is the external concourse URL exported from the terraform output
    external_host: $CONCOURSE_URL
    # This is the external concourse URL exported from the terraform output
    external_url: https://$CONCOURSE_URL
    # This can be any VM type from the cloud config: bosh cloud-config
    db_vm_type: large
    # This can be any disk type from the cloud config: bosh cloud-config
    db_persistent_disk_type: 102400
    # This can be any VM type from the cloud config: bosh cloud-config
    worker_vm_type: large
    # This assigns created VMs (web, worker, and db) to AZs in the IaaS
    azs: ((availability_zones))
    # The network name to assign the VMs to.
    network_name: management
    
    vSphere (NSX-T):
    # BOSH uses this to identify the deployment
    deployment_name: concourse
    # This can be any VM type from the cloud config: bosh cloud-config
    web_vm_type: large
    # This is the external concourse URL exported from the terraform output
    external_host: $CONCOURSE_URL
    # This is the external concourse URL exported from the terraform output
    external_url: https://$CONCOURSE_URL
    # This can be any VM type from the cloud config: bosh cloud-config
    db_vm_type: large
    # This can be any disk type from the cloud config: bosh cloud-config
    db_persistent_disk_type: 102400
    # This can be any VM type from the cloud config: bosh cloud-config
    worker_vm_type: large
    # This assigns created VMs (web, worker, and db) to AZs in the IaaS
    azs: [ az1 ]
    # The network name to assign the VMs to.
    network_name: management
    

    Where:

    • $CONCOURSE_URL is the URL to the Concourse load balancer created with the terraform templates. The terraform output key is concourse_url.
    • ((availability_zones)) are the AZs in which the Concourse infrastructure was created; they are provided automatically from the terraform-outputs.yml file.
  2. Create an ops file called operations.yml. It assigns VM extensions for the load balancer, increases the worker's disk size, and gives the worker the access it needs to reach Tanzu Network.

    AWS:
    - type: replace
      path: /instance_groups/name=web/vm_extensions?/-
      value: concourse-lb
    - type: replace
      path: /instance_groups/name=web/vm_extensions?/-
      value: public_ip
    - type: replace
      path: /instance_groups/name=worker/vm_extensions?/-
      value: public_ip
    - type: replace
      path: /instance_groups/name=worker/vm_extensions?/-
      value: increased-disk
    
    Azure:
    - type: replace
      path: /instance_groups/name=web/vm_extensions?/-
      value: concourse-lb
    - type: replace
      path: /instance_groups/name=web/vm_extensions?/-
      value: public_ip
    - type: replace
      path: /instance_groups/name=worker/vm_extensions?/-
      value: public_ip
    - type: replace
      path: /instance_groups/name=worker/vm_extensions?/-
      value: increased-disk
    
    GCP:
    - type: replace
      path: /instance_groups/name=web/vm_extensions?/-
      value: concourse-lb
    - type: replace
      path: /instance_groups/name=web/vm_extensions?/-
      value: public_ip
    - type: replace
      path: /instance_groups/name=worker/vm_extensions?/-
      value: public_ip
    - type: replace
      path: /instance_groups/name=worker/vm_extensions?/-
      value: increased-disk
    
    vSphere (NSX-T):
    - type: replace
      path: /instance_groups/name=web/vm_extensions?/-
      value: concourse-lb
    - type: replace
      path: /instance_groups/name=worker/vm_extensions?/-
      value: increased-disk
    
  3. Create a user in the BOSH Credhub for Concourse basic auth

    # Replace these example values with a username and password of your choosing
    export ADMIN_USERNAME=admin
    export ADMIN_PASSWORD=password
    
    credhub set \
       -n /p-bosh/concourse/local_user \
       -t user \
       -z "${ADMIN_USERNAME}" \
       -w "${ADMIN_PASSWORD}"
    
  4. From your working directory, run bosh deploy. (A sketch for previewing the rendered manifest locally appears after this list.)

    bosh -n -d concourse deploy concourse-bosh-deployment/cluster/concourse.yml \
      -o concourse-bosh-deployment/cluster/operations/privileged-http.yml \
      -o concourse-bosh-deployment/cluster/operations/privileged-https.yml \
      -o concourse-bosh-deployment/cluster/operations/basic-auth.yml \
      -o concourse-bosh-deployment/cluster/operations/tls-vars.yml \
      -o concourse-bosh-deployment/cluster/operations/tls.yml \
      -o concourse-bosh-deployment/cluster/operations/uaa.yml \
      -o concourse-bosh-deployment/cluster/operations/credhub-colocated.yml \
      -o concourse-bosh-deployment/cluster/operations/offline-releases.yml \
      -o concourse-bosh-deployment/cluster/operations/backup-atc-colocated-web.yml \
      -o concourse-bosh-deployment/cluster/operations/secure-internal-postgres.yml \
      -o concourse-bosh-deployment/cluster/operations/secure-internal-postgres-bbr.yml \
      -o concourse-bosh-deployment/cluster/operations/secure-internal-postgres-uaa.yml \
      -o concourse-bosh-deployment/cluster/operations/secure-internal-postgres-credhub.yml \
      -o operations.yml \
      -l <(om interpolate --config vars.yml --vars-file terraform-outputs.yml) \
      -l concourse-bosh-deployment/versions.yml
    

    Don't I have a Credhub and UAA on the BOSH Director already?

    The Credhub and UAA releases that Ops Manager deploys alongside the BOSH Director cannot be scaled out, which is why this deployment co-locates its own Credhub and UAA on the Concourse web VM.
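Before deploying, you can optionally render the manifest locally to catch mistakes in vars.yml or operations.yml. This is a minimal sketch, not part of the official instructions: it applies only your custom ops file, so in practice you would pass the same -o flags as the deploy command above, and any ((placeholders)) that BOSH fills in at deploy time are left unresolved in the output.

    # Render the manifest locally without contacting the BOSH Director.
    bosh interpolate concourse-bosh-deployment/cluster/concourse.yml \
      -o operations.yml \
      -l <(om interpolate --config vars.yml --vars-file terraform-outputs.yml) \
      -l concourse-bosh-deployment/versions.yml \
      > rendered-manifest.yml   # hypothetical output file; review it, then delete it

After bosh deploy finishes, bosh -d concourse instances should list the web, worker, and db instances described at the top of this section in a running state.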

Connect to and Test Concourse, Credhub, and UAA

This section describes how to connect to Concourse, Credhub, and UAA, and provides an example that verifies they are all working as intended.

  1. In order to connect to the Concourse Credhub, you must first retrieve the Concourse Credhub admin secret and CA certificate from the BOSH Credhub.

    If you are still connected to the BOSH Credhub from the upload releases step, you can export the Concourse Credhub secret and CA certificate needed to access the Concourse Credhub:

    export CONCOURSE_CREDHUB_SECRET="$(credhub get -n /p-bosh/concourse/credhub_admin_secret -q)"
    export CONCOURSE_CA_CERT="$(credhub get -n /p-bosh/concourse/atc_tls -k ca)"
    
  2. Unset the environment variables previously set by om bosh-env to prepare to target the Concourse Credhub.

    unset CREDHUB_SECRET CREDHUB_CLIENT CREDHUB_SERVER CREDHUB_PROXY CREDHUB_CA_CERT
    
  3. Log into the Concourse Credhub.

    credhub login \
      --server "https://${CONCOURSE_URL}:8844" \
      --client-name=credhub_admin \
      --client-secret="${CONCOURSE_CREDHUB_SECRET}" \
      --ca-cert "${CONCOURSE_CA_CERT}"
    

    Where:

    • ${CONCOURSE_URL} is the URL to the Concourse load balancer created with the terraform templates. The terraform output key is concourse_url.
    • ${CONCOURSE_CREDHUB_SECRET} is the client secret used to access the Concourse Credhub.
    • ${CONCOURSE_CA_CERT} is the CA certificate used to access the Concourse Credhub.

    All the shell variables in this command were set in previous steps.

  4. Create a new pipeline file called pipeline.yml.

    jobs:
    - name: test-job
      plan:
      - task: display-cred
        config:
          platform: linux
          image_resource:
            type: registry-image
            source:
              repository: ubuntu
          run:
            path: bash
            args: [-c, "echo Hello, ((provided-by-credhub))"]
    
  5. Add the provided-by-credhub value to the Concourse Credhub for testing.

    credhub set \
      -n /concourse/main/test-pipeline/provided-by-credhub \
      -t value \
      -v "World"
    
  6. Download the fly CLI and make it executable.

    curl "https://${CONCOURSE_URL}/api/v1/cli?arch=amd64&platform=${PLATFORM}" \
      --output fly \
      --cacert <(echo "${CONCOURSE_CA_CERT}")
    chmod +x fly
    

    Where:

    • ${CONCOURSE_URL} is the URL to the Concourse load balancer created with the terraform templates. The terraform output key is concourse_url.
    • ${PLATFORM} must be set to the operating system you are running: linux, windows, or darwin (Mac).
  7. Log into Concourse.

    ./fly -t ci login \
      -c "https://${CONCOURSE_URL}" \
      -u "${ADMIN_USERNAME}" \
      -p "${ADMIN_PASSWORD}" \
      --ca-cert <(echo "${CONCOURSE_CA_CERT}")
    

    Where:

    • ${CONCOURSE_URL} is the URL to the Concourse load balancer created with the terraform templates. The terraform output key is concourse_url.
    • ${ADMIN_PASSWORD} and ${ADMIN_USERNAME} are the values for the local user (/p-bosh/concourse/local_user) created in a previous step.
  8. Set the test pipeline.

    ./fly -t ci set-pipeline \
      -n \
      -p test-pipeline \
      -c pipeline.yml \
      --check-creds
    
  9. Unpause and run the test pipeline.

    ./fly -t ci unpause-pipeline -p test-pipeline
    
    ./fly -t ci trigger-job -j test-pipeline/test-job --watch
    
  10. The Concourse output from the job should include:

    Hello, World
    
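As an optional follow-up, you can confirm that the World value really came from Credhub rather than the pipeline file, and then remove the test pipeline. This sketch reuses the credhub and fly sessions established above:

    # The pipeline file never mentions "World"; only Credhub stores it.
    # This should print: World
    credhub get -n /concourse/main/test-pipeline/provided-by-credhub -q

    # Clean up the test pipeline once you are satisfied everything works.
    ./fly -t ci destroy-pipeline -p test-pipeline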

Next Steps

We recommend committing your modified Terraform templates, and all of the config files you created, to source control. Be aware that terraform-outputs.yml contains private keys for Ops Manager; you may wish to remove these and store them in Credhub instead, as sketched below.
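For example, after removing a private key from terraform-outputs.yml, you could store it in the Concourse Credhub as a value-type credential so that pipelines in the main team can reference it. The credential name and file name below are placeholders, not part of the official instructions:

    # Requires the credhub login to the Concourse Credhub from the previous section.
    # Pipelines in the main team can then reference ((opsman-private-key)).
    credhub set \
      -n /concourse/main/opsman-private-key \
      -t value \
      -v "$(cat opsman-private-key.pem)"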

For information about using Platform Automation Toolkit, see the docs.