Install Concourse for Platform Automation
This guide describes a process for installing Concourse for use with Platform Automation Toolkit. This approach to deploying Concourse uses the BOSH Director deployed by Ops Manager to deploy and maintain Concourse, Credhub, and UAA.
This approach is appropriate for those who need a Concourse in order to run Platform Automation Toolkit. Platform Automation Toolkit requires a secret store. Credhub satisfies this requirement, but requires UAA in turn. So these directions include both. Platform Automation Toolkit supports all credential managers supported by Concourse.
Prerequisites
Before you install Concourse with BOSH, you must have the following:
- A supported IaaS provider: AWS, Azure, GCP, or vSphere
- Terraform v0.13+ (or manual creation of IaaS components): Download
- Docker: Download
- The om v4.5+ CLI: For more information, see Installation in the README.
- BOSH CLI v5.x: For more information, see Installing the CLI in the BOSH documentation.
- Platform Automation Toolkit Docker Image: Download this from Tanzu Network. The Platform Automation Toolkit docs have instructions to use the docker image on your local workstation.
- Concourse for Platform Automation: Download all components for the Platform Automation release from Tanzu Network.
- Ops Manager Image for your IaaS: You can download the image reference (a YAML file) or VM image file from Tanzu Network.
- The stemcell for your IaaS: You'll need this when you create your Concourse deployment manifest. Concourse was tested on stemcell line 621 (Xenial) upon release and supports the 621.* stemcell family. You can download an Ubuntu Xenial stemcell from Tanzu Network. This stemcell is referenced as stemcell.tgz in this guide.
Get Your Working Directory and Shell Setup
Create a single directory to work in:
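The original command listing for this step isn't preserved above; a minimal sketch, assuming the concourse-working-directory name used later in this guide:

```bash
# Create a working directory and move into it
mkdir concourse-working-directory
cd concourse-working-directory
```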
Choose which IaaS you'll be working with and set that as a variable for use in future commands:
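The per-IaaS command listings for this step aren't preserved above; a minimal sketch, assuming the same IAAS values the paving repo uses later in this guide (aws, azure, gcp, or nsxt for vSphere):

```bash
# Pick one of: aws, azure, gcp, nsxt
export IAAS=aws
```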
Create the Required IaaS Infrastructure
If you can use Terraform, it is the most straightforward way to create the resources required for the Ops Manager and Concourse deployment. If you are unable or do not wish to use Terraform, see the "Manual Resource Creation" section for the list of resources required to deploy an Ops Manager and a Concourse, so that you can create them manually.
The paving repository contains Terraform templates for each supported IaaS: AWS, Azure, GCP, and vSphere. This includes infrastructure for the Ops Manager, BOSH Director, and Concourse.
- Clone the repo on the command line from the concourse-working-directory folder:

```bash
git clone https://github.com/pivotal/paving.git
```
- In the checked out repository there are directories for each IaaS. Copy the Terraform templates for the infrastructure of your choice to a new directory outside of the paving repo, so you can modify them:

```bash
# cp -Ra paving/${IAAS} paving-${IAAS}
mkdir paving-${IAAS}
cp -a paving/$IAAS/. paving-$IAAS
cd paving-${IAAS}
rm -f pas-*.tf
rm -f pks-*.tf
```

IAAS must be set to match one of the infrastructure directories at the top level of the paving repo - for example, aws, azure, gcp, or nsxt. This was done in Get Your Working Directory and Shell Setup, but if you're in a new shell, you may need to do it again.
- Within the new directory, the terraform.tfvars.example file shows what values are required for that IaaS. Remove the .example from the name, and replace the examples with real values.
- You'll be extending the Terraform files from the paving repo with an additional file that defines resources for Concourse. Create a new concourse.tf file in the new directory and copy the following into it:

AWS:
resource "aws_route53_record" "concourse" { name = "ci.${var.environment_name}.${data.aws_route53_zone.hosted.name}" zone_id = data.aws_route53_zone.hosted.zone_id type = "A" alias { name = aws_lb.concourse.dns_name zone_id = aws_lb.concourse.zone_id evaluate_target_health = true } } //create a load balancer for concourse resource "aws_lb" "concourse" { name = "${var.environment_name}-concourse-lb" load_balancer_type = "network" enable_cross_zone_load_balancing = true subnets = aws_subnet.public-subnet[*].id } resource "aws_lb_listener" "concourse-tcp" { load_balancer_arn = aws_lb.concourse.arn port = 443 protocol = "TCP" default_action { type = "forward" target_group_arn = aws_lb_target_group.concourse-tcp.arn } } resource "aws_lb_listener" "concourse-ssh" { load_balancer_arn = aws_lb.concourse.arn port = 2222 protocol = "TCP" default_action { type = "forward" target_group_arn = aws_lb_target_group.concourse-ssh.arn } } resource "aws_lb_listener" "concourse-credhub" { load_balancer_arn = aws_lb.concourse.arn port = 8844 protocol = "TCP" default_action { type = "forward" target_group_arn = aws_lb_target_group.concourse-credhub.arn } } resource "aws_lb_listener" "concourse-uaa" { load_balancer_arn = aws_lb.concourse.arn port = 8443 protocol = "TCP" default_action { type = "forward" target_group_arn = aws_lb_target_group.concourse-uaa.arn } } resource "aws_lb_target_group" "concourse-tcp" { name = "${var.environment_name}-concourse-tg-tcp" port = 443 protocol = "TCP" vpc_id = aws_vpc.vpc.id health_check { protocol = "TCP" } } resource "aws_lb_target_group" "concourse-ssh" { name = "${var.environment_name}-concourse-tg-ssh" port = 2222 protocol = "TCP" vpc_id = aws_vpc.vpc.id health_check { protocol = "TCP" } } resource "aws_lb_target_group" "concourse-credhub" { name = "${var.environment_name}-concourse-tg-credhub" port = 8844 protocol = "TCP" vpc_id = aws_vpc.vpc.id health_check { protocol = "TCP" } } resource "aws_lb_target_group" "concourse-uaa" { name = "${var.environment_name}-concourse-tg-uaa" port = 8443 protocol = "TCP" vpc_id = aws_vpc.vpc.id health_check { protocol = "TCP" } } //create a security group for concourse resource "aws_security_group" "concourse" { name = "${var.environment_name}-concourse-sg" vpc_id = aws_vpc.vpc.id ingress { cidr_blocks = var.ops_manager_allowed_ips protocol = "tcp" from_port = 443 to_port = 443 } ingress { cidr_blocks = var.ops_manager_allowed_ips protocol = "tcp" from_port = 2222 to_port = 2222 } ingress { cidr_blocks = var.ops_manager_allowed_ips protocol = "tcp" from_port = 8844 to_port = 8844 } ingress { cidr_blocks = var.ops_manager_allowed_ips protocol = "tcp" from_port = 8443 to_port = 8443 } egress { cidr_blocks = ["0.0.0.0/0"] protocol = "-1" from_port = 0 to_port = 0 } tags = merge( var.tags, { "Name" = "${var.environment_name}-concourse-sg" }, ) } output "concourse_url" { value = aws_route53_record.concourse.name }
Azure:
resource "azurerm_public_ip" "concourse" { name = "${var.environment_name}-concourse-lb" location = var.location resource_group_name = azurerm_resource_group.platform.name allocation_method = "Static" sku = "Basic" tags = { environment = var.environment_name } } resource "azurerm_lb" "concourse" { name = "${var.environment_name}-concourse-lb" resource_group_name = azurerm_resource_group.platform.name location = var.location sku = "Basic" frontend_ip_configuration { name = "${var.environment_name}-concourse-frontend-ip-configuration" public_ip_address_id = azurerm_public_ip.concourse.id } } resource "azurerm_lb_rule" "concourse-https" { name = "${var.environment_name}-concourse-https" resource_group_name = azurerm_resource_group.platform.name loadbalancer_id = azurerm_lb.concourse.id frontend_ip_configuration_name = "${var.environment_name}-concourse-frontend-ip-configuration" protocol = "TCP" frontend_port = 443 backend_port = 443 backend_address_pool_id = azurerm_lb_backend_address_pool.concourse.id probe_id = azurerm_lb_probe.concourse-https.id } resource "azurerm_lb_probe" "concourse-https" { name = "${var.environment_name}-concourse-https" resource_group_name = azurerm_resource_group.platform.name loadbalancer_id = azurerm_lb.concourse.id protocol = "TCP" port = 443 } resource "azurerm_lb_rule" "concourse-http" { name = "${var.environment_name}-concourse-http" resource_group_name = azurerm_resource_group.platform.name loadbalancer_id = azurerm_lb.concourse.id frontend_ip_configuration_name = "${var.environment_name}-concourse-frontend-ip-configuration" protocol = "TCP" frontend_port = 80 backend_port = 80 backend_address_pool_id = azurerm_lb_backend_address_pool.concourse.id probe_id = azurerm_lb_probe.concourse-http.id } resource "azurerm_lb_probe" "concourse-http" { name = "${var.environment_name}-concourse-http" resource_group_name = azurerm_resource_group.platform.name loadbalancer_id = azurerm_lb.concourse.id protocol = "TCP" port = 80 } resource "azurerm_lb_rule" "concourse-uaa" { name = "${var.environment_name}-concourse-uaa" resource_group_name = azurerm_resource_group.platform.name loadbalancer_id = azurerm_lb.concourse.id frontend_ip_configuration_name = "${var.environment_name}-concourse-frontend-ip-configuration" protocol = "TCP" frontend_port = 8443 backend_port = 8443 backend_address_pool_id = azurerm_lb_backend_address_pool.concourse.id probe_id = azurerm_lb_probe.concourse-uaa.id } resource "azurerm_lb_probe" "concourse-uaa" { name = "${var.environment_name}-concourse-uaa" resource_group_name = azurerm_resource_group.platform.name loadbalancer_id = azurerm_lb.concourse.id protocol = "TCP" port = 8443 } resource "azurerm_lb_rule" "concourse-credhub" { name = "${var.environment_name}-concourse-credhub" resource_group_name = azurerm_resource_group.platform.name loadbalancer_id = azurerm_lb.concourse.id frontend_ip_configuration_name = "${var.environment_name}-concourse-frontend-ip-configuration" protocol = "TCP" frontend_port = 8844 backend_port = 8844 backend_address_pool_id = azurerm_lb_backend_address_pool.concourse.id probe_id = azurerm_lb_probe.concourse-credhub.id } resource "azurerm_lb_probe" "concourse-credhub" { name = "${var.environment_name}-concourse-credhub" resource_group_name = azurerm_resource_group.platform.name loadbalancer_id = azurerm_lb.concourse.id protocol = "TCP" port = 8844 } resource "azurerm_network_security_rule" "concourse-credhub-platform-vms" { name = "${var.environment_name}-credhub" priority = 300 direction = "Inbound" access = "Allow" 
protocol = "Tcp" source_port_range = "*" destination_port_range = "8844" source_address_prefix = "*" destination_address_prefix = "*" resource_group_name = azurerm_resource_group.platform.name network_security_group_name = azurerm_network_security_group.platform-vms.name } resource "azurerm_network_security_rule" "concourse-uaa-platform-vms" { name = "${var.environment_name}-uaa" priority = 3001 direction = "Inbound" access = "Allow" protocol = "Tcp" source_port_range = "*" destination_port_range = "8443" source_address_prefix = "*" destination_address_prefix = "*" resource_group_name = azurerm_resource_group.platform.name network_security_group_name = azurerm_network_security_group.platform-vms.name } resource "azurerm_network_security_rule" "concourse-credhub-ops-manager" { name = "${var.environment_name}-credhub" priority = 300 direction = "Inbound" access = "Allow" protocol = "Tcp" source_port_range = "*" destination_port_range = "8844" source_address_prefix = "*" destination_address_prefix = "*" resource_group_name = azurerm_resource_group.platform.name network_security_group_name = azurerm_network_security_group.ops-manager.name } resource "azurerm_network_security_rule" "concourse-uaa-ops-manager" { name = "${var.environment_name}-uaa" priority = 3001 direction = "Inbound" access = "Allow" protocol = "Tcp" source_port_range = "*" destination_port_range = "8443" source_address_prefix = "*" destination_address_prefix = "*" resource_group_name = azurerm_resource_group.platform.name network_security_group_name = azurerm_network_security_group.ops-manager.name } resource "azurerm_lb_backend_address_pool" "concourse" { name = "${var.environment_name}-concourse-backend-pool" resource_group_name = azurerm_resource_group.platform.name loadbalancer_id = azurerm_lb.concourse.id } resource "azurerm_dns_a_record" "concourse" { name = "ci.${var.environment_name}" zone_name = data.azurerm_dns_zone.hosted.name resource_group_name = data.azurerm_dns_zone.hosted.resource_group_name ttl = "60" records = [azurerm_public_ip.concourse.ip_address] tags = merge( var.tags, { name = "ci.${var.environment_name}" }, ) } output "concourse_url" { value = "${azurerm_dns_a_record.concourse.name}.${azurerm_dns_a_record.concourse.zone_name}" }
GCP:
resource "google_dns_record_set" "concourse" { name = "ci.${var.environment_name}.${data.google_dns_managed_zone.hosted-zone.dns_name}" type = "A" ttl = 60 managed_zone = var.hosted_zone rrdatas = [google_compute_address.concourse.address] } //create a load balancer for concourse resource "google_compute_address" "concourse" { name = "${var.environment_name}-concourse" } resource "google_compute_firewall" "concourse" { allow { ports = ["443", "2222", "8844", "8443"] protocol = "tcp" } direction = "INGRESS" name = "${var.environment_name}-concourse-open" network = google_compute_network.network.self_link source_ranges = ["0.0.0.0/0"] target_tags = ["concourse"] } resource "google_compute_forwarding_rule" "concourse_credhub" { ip_address = google_compute_address.concourse.address ip_protocol = "TCP" name = "${var.environment_name}-concourse-credhub" port_range = "8844-8844" target = google_compute_target_pool.concourse_target_pool.self_link } resource "google_compute_forwarding_rule" "concourse_ssh" { ip_address = google_compute_address.concourse.address ip_protocol = "TCP" name = "${var.environment_name}-concourse-ssh" port_range = "2222-2222" target = google_compute_target_pool.concourse_target_pool.self_link } resource "google_compute_forwarding_rule" "concourse_tcp" { ip_address = google_compute_address.concourse.address ip_protocol = "TCP" name = "${var.environment_name}-concourse-tcp" port_range = "443-443" target = google_compute_target_pool.concourse_target_pool.self_link } resource "google_compute_forwarding_rule" "concourse_uaa" { ip_address = google_compute_address.concourse.address ip_protocol = "TCP" name = "${var.environment_name}-concourse-uaa" port_range = "8443-8443" target = google_compute_target_pool.concourse_target_pool.self_link } resource "google_compute_target_pool" "concourse_target_pool" { name = "${var.environment_name}-concourse" } output "concourse_url" { value = replace(replace("${google_dns_record_set.concourse.name}", "/\\.$/", ""), "*.", "") }
vSphere (NSX-T):
resource "nsxt_lb_service" "concourse_lb_service" { description = "concourse lb_service" display_name = "${var.environment_name}_concourse_lb_service" enabled = true logical_router_id = nsxt_logical_tier1_router.t1_infrastructure.id virtual_server_ids = ["${nsxt_lb_tcp_virtual_server.concourse_lb_virtual_server.id}"] error_log_level = "INFO" size = "SMALL" depends_on = ["nsxt_logical_router_link_port_on_tier1.t1_infrastructure_to_t0"] tag { scope = "terraform" tag = var.environment_name } } resource "nsxt_ns_group" "concourse_ns_group" { display_name = "${var.environment_name}_concourse_ns_group" tag { scope = "terraform" tag = var.environment_name } } resource "nsxt_lb_tcp_monitor" "concourse_lb_tcp_monitor" { display_name = "${var.environment_name}_concourse_lb_tcp_monitor" interval = 5 monitor_port = 443 rise_count = 3 fall_count = 3 timeout = 15 tag { scope = "terraform" tag = var.environment_name } } resource "nsxt_lb_pool" "concourse_lb_pool" { description = "concourse_lb_pool provisioned by Terraform" display_name = "${var.environment_name}_concourse_lb_pool" algorithm = "WEIGHTED_ROUND_ROBIN" min_active_members = 1 tcp_multiplexing_enabled = false tcp_multiplexing_number = 3 active_monitor_id = "${nsxt_lb_tcp_monitor.concourse_lb_tcp_monitor.id}" snat_translation { type = "SNAT_AUTO_MAP" } member_group { grouping_object { target_type = "NSGroup" target_id = "${nsxt_ns_group.concourse_ns_group.id}" } } tag { scope = "terraform" tag = var.environment_name } } resource "nsxt_lb_fast_tcp_application_profile" "tcp_profile" { display_name = "${var.environment_name}_concourse_fast_tcp_profile" tag { scope = "terraform" tag = var.environment_name } } resource "nsxt_lb_tcp_virtual_server" "concourse_lb_virtual_server" { description = "concourse lb_virtual_server provisioned by terraform" display_name = "${var.environment_name}_concourse virtual server" application_profile_id = "${nsxt_lb_fast_tcp_application_profile.tcp_profile.id}" ip_address = "${var.nsxt_lb_concourse_virtual_server_ip_address}" ports = ["443","8443","8844"] pool_id = "${nsxt_lb_pool.concourse_lb_pool.id}" tag { scope = "terraform" tag = var.environment_name } } variable "nsxt_lb_concourse_virtual_server_ip_address" { default = "" description = "IP Address for concourse loadbalancer" type = "string" } output "concourse_url" { value = var.nsxt_lb_concourse_virtual_server_ip_address }
- Now that you've got your variables and modifications in place, you can initialize Terraform, which will download the required IaaS providers:

```bash
terraform init
```
- Run terraform refresh to update the state with what currently exists on the IaaS:

```bash
terraform refresh \
  -var-file=terraform.tfvars
```
- Next, you can run terraform plan to see what changes will be made to the infrastructure on the IaaS:

```bash
terraform plan \
  -out=terraform.tfplan \
  -var-file=terraform.tfvars
```
- Finally, you can run terraform apply to create the required infrastructure on the IaaS:

```bash
terraform apply \
  -parallelism=5 \
  terraform.tfplan
```
- Save the output from terraform output stable_config_opsmanager as terraform-outputs.yml one level up, in your working directory:

```bash
terraform output stable_config_opsmanager > ../terraform-outputs.yml
```
Terraform v0.14.3+ introduced a backwards-incompatible change. When using the terraform output command in v0.14.3+, the additional --raw flag is required. This ensures the output value is not JSON-encoded:

```bash
terraform output --raw <output value>
```
- Export the CONCOURSE_URL from terraform output concourse_url:

```bash
export CONCOURSE_URL="$(terraform output concourse_url)"
```
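If you are on Terraform v0.14.3 or later, the same export needs the --raw flag described in the note above; a sketch of the adjusted command:

```bash
export CONCOURSE_URL="$(terraform output --raw concourse_url)"
```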
- Return to your working directory for the next, post-terraform steps:

```bash
cd ..
```
Required resources fall into one of the following categories:
- "Collect": we expect that these resources already exist in your IaaS.
- "Collect or Create": you may or may not already have these resources in your IaaS. If you do not, they need to be created in order to continue.
- "Determine": these resources may be defined based on other resources, local policy, or other factors.
- "Remove": these resource names are created by the Terraform scripts.
In order to proceed without error when you Deploy the Director, you will need to remove these values from the director-config.yml, as the director configuration included in this document assumes Terraform's outputs.
Resources that are named in a code block are used directly as variables in this documentation. Resources that are not in a code block are required for a successful Ops Manager, BOSH, or Concourse deployment, but are not used directly in a deploy script or config file.
AWS

Collect

- region: Region to deploy the Ops Manager VM and BOSH VMs.

Collect or Create

- access_key: Access key for creating the Ops Manager VM and BOSH VMs.
- secret_key: Matching secret key to access_key for creating the Ops Manager VM and BOSH VMs.
- ops_manager_key_pair_name: Keypair name with which to deploy the Ops Manager VM.
- management_subnet_cidrs: List of CIDRs for the subnet to deploy the BOSH director and VMs (total: 3).
- management_subnet_gateways: List of gateways for the subnet to deploy the BOSH director and VMs (total: 3).
- management_subnet_ids: List of subnet IDs for the deploy of the BOSH director and VMs (total: 3).
- management_subnet_reserved_ip_ranges: List of reserved IP ranges for the subnet to deploy the BOSH director and VMs (total: 3).
- ops_manager_public_ip: Public IP to assign the Ops Manager VM.
- ops_manager_security_group_id: ID of the security group to deploy the Ops Manager VM to.
- ops_manager_ssh_private_key: Private SSH key with which to connect to the BOSH director.
- ops_manager_ssh_public_key: Public key for ops_manager_ssh_private_key.
- ops_manager_subnet_id: ID of the subnet to deploy the Ops Manager VM to.
- ops_manager_dns: DNS entry for the Ops Manager VM. This will be used to connect to the Ops Manager from the command line.
- ops_manager_iam_instance_profile_name: Instance profile name for the BOSH director. BOSH will use this to deploy VMs.
- ops_manager_iam_user_access_key: IAM user access key for the BOSH director. BOSH will use this to deploy VMs.
- ops_manager_iam_user_secret_key: IAM user secret key for the BOSH director. BOSH will use this to deploy VMs.
- platform_vms_security_group_id: Security group that will be assigned to the BOSH director and deployed VMs.
- vpc_id: ID of the VPC that will be assigned to the BOSH director and deployed VMs.
- A DNS record for Concourse. This will be your $CONCOURSE_URL later in this guide.

Determine

- environment_name: Arbitrary name with which to prefix the name of the Ops Manager. NOTE: when creating load balancers for Concourse, this name should be used as a prefix.
Remove
pas_subnet_cidrs
pas_subnet_gateways
pas_subnet_ids
pas_subnet_reserved_ip_ranges
pks_api_lb_security_group_id
pks_subnet_cidrs
pks_subnet_gateways
pks_subnet_ids
pks_subnet_reserved_ip_ranges
services_subnet_cidrs
services_subnet_gateways
services_subnet_ids
services_subnet_reserved_ip_ranges
- From the vmextensions-configuration section, remove the ssh-lb-security-groups, tcp-lb-security-groups, web_lb_security_group_id, and pks-api-lb-security-groups sections.
Azure

Collect

- subscription_id: Subscription ID for the Azure Cloud.
- tenant_id: Tenant ID for the Azure Cloud.
- client_id: Client ID for the Ops Manager, BOSH, and VMs to use.
- client_secret: Client secret for the Ops Manager, BOSH, and VMs to use.

Collect or Create

- management_subnet_cidr: CIDR for the subnet to deploy the BOSH director and VMs.
- management_subnet_gateway: Gateway for the subnet to deploy the BOSH director and VMs.
- management_subnet_name: Name of the subnet to deploy the BOSH director and VMs.
- management_subnet_range: Reserved IP ranges for the subnet to deploy the BOSH director and VMs (excludes gateway).
- management_subnet_id: ID of the subnet to deploy the BOSH director and VMs.
- bosh_storage_account_name: Storage account for BOSH to store VMs.
- network_name: Network name to deploy BOSH and VMs.
- ops_manager_dns: DNS entry for the Ops Manager VM. This will be used to connect to the Ops Manager from the command line.
- ops_manager_public_ip: Public IP to assign the Ops Manager VM.
- ops_manager_ssh_private_key: Private SSH key with which to connect to the BOSH director.
- ops_manager_ssh_public_key: Public key for ops_manager_ssh_private_key.
- platform_vms_security_group_name: Security group to assign to BOSH and VMs.
- resource_group_name: Resource group to deploy Ops Manager, BOSH, and VMs.
- iaas_configuration_environment_azurecloud: Which Azure cloud to deploy to (default: AzureCloud).
- ops_manager_container_name: Container to deploy the Ops Manager VM.
- ops_manager_security_group_name: Security group to attach to the Ops Manager VM.
- ops_manager_storage_account_name: Storage account name to associate with the Ops Manager VM.
- A DNS record for Concourse. This will be your $CONCOURSE_URL later in this guide.

Determine

- environment_name: Arbitrary name with which to prefix the name of the Ops Manager. NOTE: when creating load balancers for Concourse, this name should be used as a prefix.
- location: Location to deploy the Ops Manager VM.
- ops_manager_private_ip: Private IP to assign to the Ops Manager VM.
Remove
pas_subnet_name
pas_subnet_gateway
pas_subnet_cidr
pas_subnet_range
pks_api_application_security_group_name
pks_api_network_security_group_name
pks_subnet_cidr
pks_subnet_gateway
pks_subnet_name
pks_subnet_range
services_subnet_cidr
services_subnet_gateway
services_subnet_range
services_subnet_name
- From the vmextensions-configuration section, remove the pks-api-lb-security-groups section.
GCP

Collect

- service_account_key: Service account key to deploy the Ops Manager VM.
- ops_manager_service_account_key: Service account key to deploy BOSH and VMs.
- project: Project name to deploy Ops Manager, BOSH, and VMs.
- region: Region to deploy Ops Manager, BOSH, and VMs.

Collect or Create

- management_subnet_cidr: CIDR for the subnet to deploy the BOSH director and VMs.
- management_subnet_gateway: Gateway for the subnet to deploy the BOSH director and VMs.
- management_subnet_name: Name of the subnet to deploy the BOSH director and VMs.
- management_subnet_reserved_ip_ranges: Reserved IP ranges for the subnet to deploy the BOSH director and VMs (excludes gateway).
- availability_zones: List of availability zones to deploy Ops Manager, BOSH, and VMs (total: 3).
- network_name: Network name to deploy BOSH and VMs.
- ops_manager_dns: DNS entry for the Ops Manager VM. This will be used to connect to the Ops Manager from the command line.
- ops_manager_public_ip: Public IP to assign the Ops Manager VM.
- ops_manager_ssh_private_key: Private SSH key with which to connect to the BOSH director.
- ops_manager_ssh_public_key: Public key for ops_manager_ssh_private_key.
- A DNS record for Concourse. This will be your $CONCOURSE_URL later in this guide.

Determine

- environment_name: Arbitrary name with which to prefix the name of the Ops Manager. NOTE: when creating load balancers for Concourse, this name should be used as a prefix.
- platform_vms_tag: Tag to assign to VMs created by BOSH.
Remove
pas_subnet_cidr
pas_subnet_gateway
pas_subnet_name
pas_subnet_reserved_ip_ranges
pks_subnet_cidr
pks_subnet_gateway
pks_subnet_name
pks_subnet_reserved_ip_ranges
services_subnet_cidr
services_subnet_gateway
services_subnet_name
services_subnet_reserved_ip_ranges
vSphere

Collect

- vcenter_host: Hostname for the vCenter.
- vcenter_username: Username for logging in to the vCenter.
- vcenter_password: Password for logging in to the vCenter.
- vcenter_datacenter: Datacenter to deploy the Ops Manager, Concourse, and associated VMs.
- vcenter_cluster: Cluster to deploy the Ops Manager, Concourse, and associated VMs.
- vcenter_datastore: Datastore to deploy the Ops Manager, Concourse, and associated VMs. This guide assumes the same persistent and ephemeral datastores. It also assumes your resource pool is in that datastore.
- ops_manager_dns_servers: The address of your DNS server(s). These are comma separated.
- ops_manager_ntp: NTP server to set server time on Ops Manager and the BOSH director.

Collect or Create

- ops_manager_dns: DNS record for the Ops Manager VM.
- concourse_url: DNS record for the Concourse web instance.
- ops_manager_folder: Folder to store Ops Manager, BOSH, and its deployed VMs.
- ops_manager_public_ip: (OPTIONAL) This guide does not make use of the public IP. You will need this set if you want to interact with the Ops Manager outside of the defined private network.
- vcenter_resource_pool: Resource pool to deploy the Ops Manager, Concourse, and associated VMs.
- management_subnet_name: Name of the subnet to deploy the Ops Manager, Concourse, and associated VMs.
- management_subnet_gateway: Gateway of the management_subnet. This is typically the first IP of the subnet.
- management_subnet_cidr: Private CIDR of the management_subnet. We recommend a /24 subnet CIDR.
- ops_manager_ssh_private_key: A private key (such as might be generated by ssh-keygen) to ssh to the BOSH director.
- ops_manager_ssh_public_key: The public key pair to ops_manager_ssh_private_key. This key is used when creating the Ops Manager VM.
- A load balancer with the following ports open: 443, 8443, 8844. This load balancer should have an IP assigned to it. This IP address will be used as the $CONCOURSE_URL later in this guide.

Determine

- management_subnet_reserved_ip_ranges: IP addresses that will not be managed by BOSH. This range is typically x.x.x.1-x.x.x.10.
- ops_manager_netmask: Netmask for the management_subnet.
- ops_manager_private_ip: Private IP for the Ops Manager VM. This is typically x.x.x.10.
- allow_unverified_ssl: Based on your local policies, this may be set to true or false. This is used by the om vm-lifecycle CLI to communicate with vCenter when creating the Ops Manager VM.
- disable_ssl_verification: Based on your local policies, this may be set to true or false. This is used by the BOSH director to create VMs on the vCenter.
These resources are based on the Terraform templates in the paving repo. For additional context, see the appropriate templates for your IaaS. To follow the rest of this guide, you will also need to create a vars file that contains all of the outputs that would have been created by Terraform. For simplicity, we recommend naming this file terraform-outputs.yml.
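The original per-IaaS examples of this file are not preserved above. As an illustrative sketch only (vSphere, partial, every value a placeholder), the file maps the names listed above to your own values:

```yaml
# terraform-outputs.yml (partial vSphere sketch; all values are placeholders)
vcenter_host: vcenter.example.com
vcenter_datacenter: dc1
vcenter_cluster: cluster1
vcenter_datastore: datastore1
vcenter_resource_pool: concourse-rp
ops_manager_dns: opsman.example.com
ops_manager_private_ip: 192.168.10.10
ops_manager_netmask: 255.255.255.0
ops_manager_dns_servers: 8.8.8.8
ops_manager_ntp: time.example.com
management_subnet_name: management
management_subnet_cidr: 192.168.10.0/24
management_subnet_gateway: 192.168.10.1
management_subnet_reserved_ip_ranges: 192.168.10.1-192.168.10.10
concourse_url: ci.example.com
```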
Deploy the Director
Platform Automation Toolkit provides tools to create an Ops Manager with a BOSH Director.
- Ops Manager needs to be deployed with IaaS-specific configuration. Platform Automation Toolkit provides a configuration file format that looks like this. Copy and paste the YAML below for your IaaS and save it as opsman-config.yml in your working directory.

AWS:
```yaml
---
opsman-configuration:
  aws:
    access_key_id: ((access_key))
    boot_disk_size: 100
    iam_instance_profile_name: ((ops_manager_iam_instance_profile_name))
    instance_type: m5.large
    key_pair_name: ((ops_manager_key_pair_name))
    public_ip: ((ops_manager_public_ip))
    region: ((region))
    secret_access_key: ((secret_key))
    security_group_ids: [((ops_manager_security_group_id))]
    vm_name: ((environment_name))-ops-manager-vm
    vpc_subnet_id: ((ops_manager_subnet_id))
```
Azure:
```yaml
---
opsman-configuration:
  azure:
    boot_disk_size: "100"
    client_id: ((client_id))
    client_secret: ((client_secret))
    cloud_name: ((iaas_configuration_environment_azurecloud))
    container: ((ops_manager_container_name))
    location: ((location))
    network_security_group: ((ops_manager_security_group_name))
    private_ip: ((ops_manager_private_ip))
    public_ip: ((ops_manager_public_ip))
    resource_group: ((resource_group_name))
    ssh_public_key: ((ops_manager_ssh_public_key))
    storage_account: ((ops_manager_storage_account_name))
    storage_sku: "Premium_LRS"
    subnet_id: ((management_subnet_id))
    subscription_id: ((subscription_id))
    tenant_id: ((tenant_id))
    use_managed_disk: "true"
    vm_name: "((resource_group_name))-ops-manager"
    vm_size: "Standard_DS2_v2"
```
GCP:
```yaml
---
opsman-configuration:
  gcp:
    boot_disk_size: 100
    custom_cpu: 4
    custom_memory: 16
    gcp_service_account: ((service_account_key))
    project: ((project))
    public_ip: ((ops_manager_public_ip))
    region: ((region))
    ssh_public_key: ((ops_manager_ssh_public_key))
    tags: ((ops_manager_tags))
    vm_name: ((environment_name))-ops-manager-vm
    vpc_subnet: ((management_subnet_name))
    zone: ((availability_zones.0))
```
vSphere:
```yaml
---
opsman-configuration:
  vsphere:
    vcenter:
      datacenter: ((vcenter_datacenter))
      datastore: ((vcenter_datastore))
      folder: ((ops_manager_folder))
      url: ((vcenter_host))
      username: ((vcenter_username))
      password: ((vcenter_password))
      resource_pool: /((vcenter_datacenter))/host/((vcenter_cluster))/Resources/((vcenter_resource_pool))
      insecure: ((allow_unverified_ssl))
    disk_type: thin
    dns: ((ops_manager_dns_servers))
    gateway: ((management_subnet_gateway))
    hostname: ((ops_manager_dns))
    netmask: ((ops_manager_netmask))
    network: ((management_subnet_name))
    ntp: ((ops_manager_ntp))
    private_ip: ((ops_manager_private_ip))
    ssh_public_key: ((ops_manager_ssh_public_key))
```
Where:

- The ((parameters)) map to outputs from terraform-outputs.yml, which can be provided via a vars file for YAML interpolation in a subsequent step.

opsman.yml for an unlisted IaaS: For a supported IaaS not listed above, reference the Platform Automation Toolkit docs.
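As an optional check (not part of the original steps), you can confirm that every ((parameter)) resolves against your vars file by interpolating locally with the om CLI from the prerequisites:

```bash
# Prints the fully resolved opsman-config.yml, or errors on any missing variable
om interpolate --config opsman-config.yml --vars-file terraform-outputs.yml
```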
- First import the Platform Automation Toolkit Docker Image:
```bash
docker import ${PLATFORM_AUTOMATION_TOOLKIT_IMAGE_TGZ} platform-automation-toolkit-image
```
Where ${PLATFORM_AUTOMATION_TOOLKIT_IMAGE_TGZ} is set to the filepath of the image downloaded from Pivnet.

- Create the Ops Manager using the om vm-lifecycle CLI. This requires the Ops Manager Image for your IaaS and the previously created opsman-config.yml to be present in your working directory.

The following command runs a docker image to invoke the om vm-lifecycle command to create the Ops Manager VM. It mounts the current directory from your local filesystem as a new directory called /workspace within the image and does its work from within that directory.
```bash
docker run -it --rm -v $PWD:/workspace -w /workspace platform-automation-toolkit-image \
  om vm-lifecycle create-vm \
  --config opsman-config.yml \
  --image-file ops-manager*.{yml,ova,raw} \
  --vars-file terraform-outputs.yml
```
The om vm-lifecycle create-vm command writes a state.yml file uniquely identifying the created Ops Manager VM. This state.yml file is used for long-term management of the Ops Manager VM. We recommend storing it for future use.
- Create an env.yml file in your working directory to provide parameters that allow om to target the Ops Manager:
```yaml
connect-timeout: 30          # default 5
request-timeout: 1800        # default 1800
skip-ssl-validation: true    # default false
```
- Export the Ops Manager DNS entry created by Terraform as the target Ops Manager for om:
```bash
export OM_TARGET="$(om interpolate -c terraform-outputs.yml --path /ops_manager_dns)"
```
Alternatively, this can be included in the env.yml created above as the target attribute.
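For example, a hypothetical env.yml that includes the target attribute (the hostname is a placeholder; use your ops_manager_dns value):

```yaml
target: opsman.example.com     # placeholder
connect-timeout: 30            # default 5
request-timeout: 1800          # default 1800
skip-ssl-validation: true      # default false
```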
- Set up authentication on the Ops Manager:
```bash
om --env env.yml configure-authentication \
  --username ${OM_USERNAME} \
  --password ${OM_PASSWORD} \
  --decryption-passphrase ${OM_DECRYPTION_PASSPHRASE}
```
Where:

- ${OM_USERNAME} is the desired username for accessing the Ops Manager.
- ${OM_PASSWORD} is the desired password for accessing the Ops Manager.
- ${OM_DECRYPTION_PASSPHRASE} is the desired decryption passphrase used for recovering the Ops Manager if the VM is restarted.

This configures the Ops Manager with whichever credentials you set; they will be required with every subsequent om command.
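A later step notes that om reads these credentials from environment variables, so it is convenient to export them now (the values shown are placeholders):

```bash
export OM_USERNAME=admin                              # placeholder
export OM_PASSWORD=example-password                   # placeholder
export OM_DECRYPTION_PASSPHRASE=example-passphrase    # placeholder
```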
- The Ops Manager can now be used to create a BOSH Director. Copy and paste the YAML below for your IaaS and save it as director-config.yml.

AWS:
--- az-configuration: - name: ((availability_zones.0)) - name: ((availability_zones.1)) - name: ((availability_zones.2)) network-assignment: network: name: management singleton_availability_zone: name: ((availability_zones.0)) networks-configuration: icmp_checks_enabled: false networks: - name: management subnets: - availability_zone_names: - ((availability_zones.0)) cidr: ((management_subnet_cidrs.0)) dns: 169.254.169.253 gateway: ((management_subnet_gateways.0)) iaas_identifier: ((management_subnet_ids.0)) reserved_ip_ranges: ((management_subnet_reserved_ip_ranges.0)) - availability_zone_names: - ((availability_zones.1)) cidr: ((management_subnet_cidrs.1)) dns: 169.254.169.253 gateway: ((management_subnet_gateways.1)) iaas_identifier: ((management_subnet_ids.1)) reserved_ip_ranges: ((management_subnet_reserved_ip_ranges.1)) - availability_zone_names: - ((availability_zones.2)) cidr: ((management_subnet_cidrs.2)) dns: 169.254.169.253 gateway: ((management_subnet_gateways.2)) iaas_identifier: ((management_subnet_ids.2)) reserved_ip_ranges: ((management_subnet_reserved_ip_ranges.2)) - name: services subnets: - availability_zone_names: - ((availability_zones.0)) cidr: ((services_subnet_cidrs.0)) dns: 169.254.169.253 gateway: ((services_subnet_gateways.0)) iaas_identifier: ((services_subnet_ids.0)) reserved_ip_ranges: ((services_subnet_reserved_ip_ranges.0)) - availability_zone_names: - ((availability_zones.1)) cidr: ((services_subnet_cidrs.1)) dns: 169.254.169.253 gateway: ((services_subnet_gateways.1)) iaas_identifier: ((services_subnet_ids.1)) reserved_ip_ranges: ((services_subnet_reserved_ip_ranges.1)) - availability_zone_names: - ((availability_zones.2)) cidr: ((services_subnet_cidrs.2)) dns: 169.254.169.253 gateway: ((services_subnet_gateways.2)) iaas_identifier: ((services_subnet_ids.2)) reserved_ip_ranges: ((services_subnet_reserved_ip_ranges.2)) properties-configuration: director_configuration: ntp_servers_string: 169.254.169.123 iaas_configuration: access_key_id: ((ops_manager_iam_user_access_key)) secret_access_key: ((ops_manager_iam_user_secret_key)) iam_instance_profile: ((ops_manager_iam_instance_profile_name)) vpc_id: ((vpc_id)) security_group: ((platform_vms_security_group_id)) key_pair_name: ((ops_manager_key_pair_name)) ssh_private_key: ((ops_manager_ssh_private_key)) region: ((region)) resource-configuration: compilation: instance_type: id: automatic vmextensions-configuration: - name: concourse-lb cloud_properties: lb_target_groups: - ((environment_name))-concourse-tg-tcp - ((environment_name))-concourse-tg-ssh - ((environment_name))-concourse-tg-credhub - ((environment_name))-concourse-tg-uaa security_groups: - ((environment_name))-concourse-sg - ((platform_vms_security_group_id)) - name: increased-disk cloud_properties: type: gp2 size: 512000
Azure:
--- network-assignment: network: name: management singleton_availability_zone: name: 'zone-1' other_availability_zones: name: 'zone-2' networks-configuration: icmp_checks_enabled: false networks: - name: management service_network: false subnets: - iaas_identifier: ((network_name))/((management_subnet_name)) cidr: ((management_subnet_cidr)) reserved_ip_ranges: ((management_subnet_gateway))-((management_subnet_range)) dns: 168.63.129.16 gateway: ((management_subnet_gateway)) - name: services-1 service_network: false subnets: - iaas_identifier: ((network_name))/((services_subnet_name)) cidr: ((services_subnet_cidr)) reserved_ip_ranges: ((services_subnet_gateway))-((services_subnet_range)) dns: 168.63.129.16 gateway: ((services_subnet_gateway)) properties-configuration: iaas_configuration: subscription_id: ((subscription_id)) tenant_id: ((tenant_id)) client_id: ((client_id)) client_secret: ((client_secret)) resource_group_name: ((resource_group_name)) bosh_storage_account_name: ((bosh_storage_account_name)) default_security_group: ((platform_vms_security_group_name)) ssh_public_key: ((ops_manager_ssh_public_key)) ssh_private_key: ((ops_manager_ssh_private_key)) cloud_storage_type: managed_disks storage_account_type: Standard_LRS environment: ((iaas_configuration_environment_azurecloud)) availability_mode: availability_sets director_configuration: ntp_servers_string: 0.pool.ntp.org metrics_ip: '' resurrector_enabled: true post_deploy_enabled: false bosh_recreate_on_next_deploy: false retry_bosh_deploys: true hm_pager_duty_options: enabled: false hm_emailer_options: enabled: false blobstore_type: local database_type: internal security_configuration: trusted_certificates: '' generate_vm_passwords: true vmextensions-configuration: - name: concourse-lb cloud_properties: load_balancer: ((environment_name))-concourse-lb - name: increased-disk cloud_properties: ephemeral_disk: size: 512000
GCP:
--- az-configuration: - name: ((availability_zones.0)) - name: ((availability_zones.1)) - name: ((availability_zones.2)) network-assignment: network: name: management singleton_availability_zone: name: ((availability_zones.0)) networks-configuration: icmp_checks_enabled: false networks: - name: management subnets: - availability_zone_names: - ((availability_zones.0)) - ((availability_zones.1)) - ((availability_zones.2)) cidr: ((management_subnet_cidr)) dns: 169.254.169.254 gateway: ((management_subnet_gateway)) iaas_identifier: ((network_name))/((management_subnet_name))/((region)) reserved_ip_ranges: ((management_subnet_reserved_ip_ranges)) - name: services subnets: - availability_zone_names: - ((availability_zones.0)) - ((availability_zones.1)) - ((availability_zones.2)) cidr: ((services_subnet_cidr)) dns: 169.254.169.254 gateway: ((services_subnet_gateway)) iaas_identifier: ((network_name))/((services_subnet_name))/((region)) reserved_ip_ranges: ((services_subnet_reserved_ip_ranges)) properties-configuration: iaas_configuration: project: ((project)) auth_json: ((ops_manager_service_account_key)) default_deployment_tag: ((platform_vms_tag)) director_configuration: ntp_servers_string: 169.254.169.254 security_configuration: trusted_certificates: '' generate_vm_passwords: true resource-configuration: compilation: instance_type: id: xlarge.disk vmextensions-configuration: - name: concourse-lb cloud_properties: target_pool: ((environment_name))-concourse - name: increased-disk cloud_properties: root_disk_size_gb: 500 root_disk_type: pd-ssd
vSphere (NSX-T):
--- az-configuration: - name: az1 clusters: - cluster: ((vcenter_cluster)) resource_pool: ((vcenter_resource_pool)) properties-configuration: director_configuration: ntp_servers_string: ((ops_manager_ntp)) retry_bosh_deploys: true iaas_configuration: vcenter_host: ((vcenter_host)) vcenter_username: ((vcenter_username)) vcenter_password: ((vcenter_password)) datacenter: ((vcenter_datacenter)) disk_type: thin ephemeral_datastores_string: ((vcenter_datastore)) persistent_datastores_string: ((vcenter_datastore)) nsx_networking_enabled: true nsx_mode: nsx-t nsx_address: ((nsxt_host)) nsx_username: ((nsxt_username)) nsx_password: ((nsxt_password)) nsx_ca_certificate: ((nsxt_ca_cert)) ssl_verification_enabled: ((disable_ssl_verification)) network-assignment: network: name: management singleton_availability_zone: name: az1 networks-configuration: icmp_checks_enabled: false networks: - name: management subnets: - availability_zone_names: - az1 cidr: ((management_subnet_cidr)) dns: ((ops_manager_dns_servers)) gateway: ((management_subnet_gateway)) reserved_ip_ranges: ((management_subnet_reserved_ip_ranges)) iaas_identifier: ((management_subnet_name)) vmextensions-configuration: - name: concourse-lb cloud_properties: nsxt: ns_groups: - ((environment_name))_concourse_ns_group - name: increased-disk cloud_properties: disk: 512000
vSphere (without NSX-T):
--- az-configuration: - name: default iaas_configuration_name: az1 clusters: - cluster: ((vcenter_cluster)) resource_pool: ((vcenter_resource_pool)) iaas-configurations: - datacenter: ((vcenter_datacenter)) disk_type: thin ephemeral_datastores_string: ((vcenter_datastore)) nsx_networking_enabled: false persistent_datastores_string: ((vcenter_datastore)) ssl_verification_enabled: ((disable_ssl_verification)) vcenter_host: ((vcenter_host)) vcenter_password: ((vcenter_password)) vcenter_username: ((vcenter_username)) network-assignment: network: name: az1 singleton_availability_zone: name: az1 networks-configuration: icmp_checks_enabled: false networks: - name: az1 subnets: - iaas_identifier: ((management_subnet_name)) cidr: ((management_subnet_cidr)) dns: ((ops_manager_dns_servers)) gateway: ((management_subnet_gateway)) reserved_ip_ranges: ((management_subnet_reserved_ip_ranges)) availability_zone_names: - az1 properties-configuration: director_configuration: ntp_servers_string: ((ops_manager_ntp)) retry_bosh_deploys: true security_configuration: generate_vm_passwords: true opsmanager_root_ca_trusted_certs: false syslog_configuration: enabled: false vmextensions-configuration: [ # depending on how your routing is set up # you may need to create a vm-extension here # to route traffic for your Concourse ]
Where:

- The ((parameters)) map to outputs from terraform-outputs.yml, which can be provided via a vars file for YAML interpolation in a subsequent step.
- Create the BOSH director using the om CLI. The previously saved director-config.yml and terraform-outputs.yml files can be used directly with om to configure the director.

Info: The following om commands implicitly use the OM_USERNAME, OM_PASSWORD, and OM_DECRYPTION_PASSPHRASE environment variables. These were set in a previous step, so you may need to re-set them if you are in a fresh shell.
```bash
om --env env.yml configure-director \
  --config director-config.yml \
  --vars-file terraform-outputs.yml

om --env env.yml apply-changes \
  --skip-deploy-products
```
The end result will be a working BOSH director, which can be targeted for the Concourse deployment.
Upload Releases and the Stemcell to the BOSH Director
- Write the private key for connecting to the BOSH director:
```bash
om interpolate \
  -c terraform-outputs.yml \
  --path /ops_manager_ssh_private_key > /tmp/private_key
```
- Export the environment variables required to target the BOSH director/BOSH Credhub and verify you are properly targeted:
```bash
eval "$(om --env env.yml bosh-env --ssh-private-key=/tmp/private_key)"

# Will return a non-error if properly targeted
bosh curl /info
```
- Upload all of the BOSH releases previously downloaded. Note that you'll either need to copy them to your working directory before running these commands, or change directories to wherever you originally downloaded them:
```bash
# upload releases
bosh upload-release concourse-release*.tgz
bosh upload-release bpm-release*.tgz
bosh upload-release postgres-release*.tgz
bosh upload-release uaa-release*.tgz
bosh upload-release credhub-release*.tgz
bosh upload-release backup-and-restore-sdk-release*.tgz
```
- Upload the previously downloaded stemcell. (If you changed to your downloads directory, remember to change back after uploading this file.)
```bash
bosh upload-stemcell *stemcell*.tgz
```
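As an optional check (not part of the original steps), list what the director now has to confirm the uploads succeeded:

```bash
bosh releases    # expect concourse, bpm, postgres, uaa, credhub, and backup-and-restore-sdk
bosh stemcells   # expect the Xenial stemcell you uploaded
```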
Set up the concourse-bosh-deployment Directory on Your Local Machine

concourse-bosh-deployment has a sample BOSH manifest, a versions.yml file, and a selection of deployment-modifying operations files. Using these sample files makes it much faster and easier to get started.
- Create a directory called concourse-bosh-deployment in your working directory:
```bash
mkdir concourse-bosh-deployment
```
- Untar the concourse-bosh-deployment.tgz file downloaded from Tanzu Network:
```bash
tar -C concourse-bosh-deployment -xzf concourse-bosh-deployment.tgz
```
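If you want to confirm the extraction (optional), the files referenced later in this guide should now be present:

```bash
ls concourse-bosh-deployment/cluster        # expect concourse.yml and an operations/ directory
ls concourse-bosh-deployment/versions.yml
```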
Deploy with BOSH
The deployment instructions below deploy the following:
- A Concourse worker VM
- A Concourse web VM with co-located Credhub and UAA
- A Postgres database VM
- A single user for logging in to Concourse with basic auth

All files should be created in your working directory.
- Create a vars file called vars.yml with the following, and replace the values as necessary:

AWS:
```yaml
# BOSH uses this to identify the deployment
deployment_name: concourse
# This can be any VM type from the cloud config: bosh cloud-config
web_vm_type: c5.large
# This is the external concourse URL exported from the terraform output
external_host: $CONCOURSE_URL
# This is the external concourse URL exported from the terraform output
external_url: https://$CONCOURSE_URL
# This can be any VM type from the cloud config: bosh cloud-config
db_vm_type: c5.large
# This can be any disk type from the cloud config: bosh cloud-config
db_persistent_disk_type: 102400
# This can be any VM type from the cloud config: bosh cloud-config
worker_vm_type: c5.large
# This assigns created VMs (web, worker, and db) to AZs in the IaaS
azs: ((availability_zones))
# The network name to assign the VMs to.
network_name: management
```
Azure:
```yaml
# BOSH uses this to identify the deployment
deployment_name: concourse
# This can be any VM type from the cloud config: bosh cloud-config
web_vm_type: Standard_DS2_v2
# This is the external concourse URL exported from the terraform output
external_host: $CONCOURSE_URL
# This is the external concourse URL exported from the terraform output
external_url: https://$CONCOURSE_URL
# This can be any VM type from the cloud config: bosh cloud-config
db_vm_type: Standard_DS2_v2
# This can be any disk type from the cloud config: bosh cloud-config
db_persistent_disk_type: 102400
# This can be any VM type from the cloud config: bosh cloud-config
worker_vm_type: Standard_DS2_v2
# This assigns created VMs (web, worker, and db) to AZs in the IaaS
azs: ["Availability Sets"]
# The network name to assign the VMs to.
network_name: management
```
GCP:
```yaml
# BOSH uses this to identify the deployment
deployment_name: concourse
# This can be any VM type from the cloud config: bosh cloud-config
web_vm_type: large
# This is the external concourse URL exported from the terraform output
external_host: $CONCOURSE_URL
# This is the external concourse URL exported from the terraform output
external_url: https://$CONCOURSE_URL
# This can be any VM type from the cloud config: bosh cloud-config
db_vm_type: large
# This can be any disk type from the cloud config: bosh cloud-config
db_persistent_disk_type: 102400
# This can be any VM type from the cloud config: bosh cloud-config
worker_vm_type: large
# This assigns created VMs (web, worker, and db) to AZs in the IaaS
azs: ((availability_zones))
# The network name to assign the VMs to.
network_name: management
```
vSphere:
```yaml
# BOSH uses this to identify the deployment
deployment_name: concourse
# This can be any VM type from the cloud config: bosh cloud-config
web_vm_type: large
# This is the external concourse URL exported from the terraform output
external_host: $CONCOURSE_URL
# This is the external concourse URL exported from the terraform output
external_url: https://$CONCOURSE_URL
# This can be any VM type from the cloud config: bosh cloud-config
db_vm_type: large
# This can be any disk type from the cloud config: bosh cloud-config
db_persistent_disk_type: 102400
# This can be any VM type from the cloud config: bosh cloud-config
worker_vm_type: large
# This assigns created VMs (web, worker, and db) to AZs in the IaaS
azs: [ az1 ]
# The network name to assign the VMs to.
network_name: management
```
Where:

- $CONCOURSE_URL is the URL to the Concourse load balancer created with the terraform templates. The terraform output key is concourse_url.
- ((availability_zones)) are the AZs where the Concourse infrastructure was created, which will be provided automatically from the terraform-outputs.yml file.
- Create an ops file called operations.yml. It contains information for assigning VM extensions for the load balancer, the disk size of the worker, and access for the worker to talk to Tanzu Network:

AWS:
```yaml
- type: replace
  path: /instance_groups/name=web/vm_extensions?/-
  value: concourse-lb
- type: replace
  path: /instance_groups/name=web/vm_extensions?/-
  value: public_ip
- type: replace
  path: /instance_groups/name=worker/vm_extensions?/-
  value: public_ip
- type: replace
  path: /instance_groups/name=worker/vm_extensions?/-
  value: increased-disk
```
Azure:
```yaml
- type: replace
  path: /instance_groups/name=web/vm_extensions?/-
  value: concourse-lb
- type: replace
  path: /instance_groups/name=web/vm_extensions?/-
  value: public_ip
- type: replace
  path: /instance_groups/name=worker/vm_extensions?/-
  value: public_ip
- type: replace
  path: /instance_groups/name=worker/vm_extensions?/-
  value: increased-disk
```
GCP:
```yaml
- type: replace
  path: /instance_groups/name=web/vm_extensions?/-
  value: concourse-lb
- type: replace
  path: /instance_groups/name=web/vm_extensions?/-
  value: public_ip
- type: replace
  path: /instance_groups/name=worker/vm_extensions?/-
  value: public_ip
- type: replace
  path: /instance_groups/name=worker/vm_extensions?/-
  value: increased-disk
```
vSphere:
```yaml
- type: replace
  path: /instance_groups/name=web/vm_extensions?/-
  value: concourse-lb
- type: replace
  path: /instance_groups/name=worker/vm_extensions?/-
  value: increased-disk
```
If you needed to create VM extensions in director-config.yml during the Deploy the Director step, you may need to create an ops file similar to the ones above in order to use those extensions in your Concourse deployment.
- Create a user in the BOSH Credhub for Concourse basic auth:
```bash
export ADMIN_USERNAME=admin
export ADMIN_PASSWORD=password

credhub set \
  -n /p-bosh/concourse/local_user \
  -t user \
  -z "${ADMIN_USERNAME}" \
  -w "${ADMIN_PASSWORD}"
```
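To double-check that the credential landed (optional), fetch it back from the BOSH Credhub:

```bash
credhub get -n /p-bosh/concourse/local_user
```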
- From your working directory, run the BOSH deploy:
```bash
bosh -n -d concourse deploy concourse-bosh-deployment/cluster/concourse.yml \
  -o concourse-bosh-deployment/cluster/operations/privileged-http.yml \
  -o concourse-bosh-deployment/cluster/operations/privileged-https.yml \
  -o concourse-bosh-deployment/cluster/operations/basic-auth.yml \
  -o concourse-bosh-deployment/cluster/operations/tls-vars.yml \
  -o concourse-bosh-deployment/cluster/operations/tls.yml \
  -o concourse-bosh-deployment/cluster/operations/uaa.yml \
  -o concourse-bosh-deployment/cluster/operations/credhub-colocated.yml \
  -o concourse-bosh-deployment/cluster/operations/offline-releases.yml \
  -o concourse-bosh-deployment/cluster/operations/backup-atc-colocated-web.yml \
  -o concourse-bosh-deployment/cluster/operations/secure-internal-postgres.yml \
  -o concourse-bosh-deployment/cluster/operations/secure-internal-postgres-bbr.yml \
  -o concourse-bosh-deployment/cluster/operations/secure-internal-postgres-uaa.yml \
  -o concourse-bosh-deployment/cluster/operations/secure-internal-postgres-credhub.yml \
  -o operations.yml \
  -l <(om interpolate --config vars.yml --vars-file terraform-outputs.yml) \
  -l concourse-bosh-deployment/versions.yml
```
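Once the deploy completes, an optional way to confirm the web, worker, and db VMs are running:

```bash
bosh -d concourse vms
```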
Don't I have a Credhub and UAA on the BOSH Director already?
The Credhub and UAA releases that Ops Manager deploys alongside the BOSH Director cannot be scaled out, which is why this deployment co-locates its own Credhub and UAA on the Concourse web VM.
Connect to and Test Concourse, Credhub, and UAA
This section describes how to connect to Concourse, Credhub, and UAA, and provides an example of how to test that they are all working as intended.
- In order to connect to the Concourse Credhub, you must get the Concourse Credhub admin secret and CA certificate from the BOSH Credhub. If you are still connected to the BOSH Credhub from the upload releases step, you can export the Concourse Credhub secret and CA certificate for accessing the Concourse Credhub:
```bash
export CONCOURSE_CREDHUB_SECRET="$(credhub get -n /p-bosh/concourse/credhub_admin_secret -q)"
export CONCOURSE_CA_CERT="$(credhub get -n /p-bosh/concourse/atc_tls -k ca)"
```
- Unset the environment variables previously set by om bosh-env to prepare to target the Concourse Credhub:
```bash
unset CREDHUB_SECRET CREDHUB_CLIENT CREDHUB_SERVER CREDHUB_PROXY CREDHUB_CA_CERT
```
- Log into the Concourse Credhub:
```bash
credhub login \
  --server "https://${CONCOURSE_URL}:8844" \
  --client-name=credhub_admin \
  --client-secret="${CONCOURSE_CREDHUB_SECRET}" \
  --ca-cert "${CONCOURSE_CA_CERT}"
```
Where:

- ${CONCOURSE_URL} is the URL to the Concourse load balancer created with the terraform templates. The terraform output key is concourse_url.
- ${CONCOURSE_CREDHUB_SECRET} is the client secret used to access the Concourse Credhub.
- ${CONCOURSE_CA_CERT} is the CA certificate used to access the Concourse Credhub.

All the shell variables in this command were set in previous steps.
- Create a new pipeline file called pipeline.yml:
```yaml
jobs:
- name: test-job
  plan:
  - task: display-cred
    config:
      platform: linux
      image_resource:
        type: registry-image
        source:
          repository: ubuntu
      run:
        path: bash
        args: [-c, "echo Hello, ((provided-by-credhub))"]
```
- Add the provided-by-credhub value to the Concourse Credhub for testing:
```bash
credhub set \
  -n /concourse/main/test-pipeline/provided-by-credhub \
  -t value \
  -v "World"
```
- Download the fly CLI and make it executable:
```bash
curl "https://${CONCOURSE_URL}/api/v1/cli?arch=amd64&platform=${PLATFORM}" \
  --output fly \
  --cacert <(echo "${CONCOURSE_CA_CERT}")
chmod +x fly
```
Where:

- ${CONCOURSE_URL} is the URL to the Concourse load balancer created with the terraform templates. The terraform output key is concourse_url.
- ${PLATFORM} must be set to the operating system you are running: linux, windows, or darwin (Mac).
- Log into Concourse:
```bash
./fly -t ci login \
  -c "https://${CONCOURSE_URL}" \
  -u "${ADMIN_USERNAME}" \
  -p "${ADMIN_PASSWORD}" \
  --ca-cert <(echo "${CONCOURSE_CA_CERT}")
```
Where:

- ${CONCOURSE_URL} is the URL to the Concourse load balancer created with the terraform templates. The terraform output key is concourse_url.
- ${ADMIN_USERNAME} and ${ADMIN_PASSWORD} are the values for the local_user set in previous steps.
- Set the test pipeline:
```bash
./fly -t ci set-pipeline \
  -n \
  -p test-pipeline \
  -c pipeline.yml \
  --check-creds
```
- Unpause and run the test pipeline:
```bash
./fly -t ci unpause-pipeline -p test-pipeline

./fly -t ci trigger-job -j test-pipeline/test-job --watch
```
- The Concourse output from the job should include:
```text
Hello, World
```
Next Steps
We recommend you commit the results of your Terraform modifications, and all the created config files, to source control. Be aware that terraform-outputs.yml will contain private keys for Ops Manager; you may wish to remove these and store them in Credhub instead.
For information about using Platform Automation Toolkit, see the docs.