Common configuration ¶
To provision the infrastructure and install an Atlassian Data Center product, you need to create a valid Terraform configuration. All configuration data goes into a single Terraform configuration file.
The content of the configuration file is divided into two groups:
- Common configuration: environment properties common to all deployments.
- Product specific configuration: properties specific to the Atlassian Data Center products being deployed.
Configuration file format ¶
The configuration file is an ASCII text file with the .tfvars extension. The config file must contain all mandatory configuration items with valid values. If any optional items are missing, their default values will be applied.
Mandatory configuration items are those you must define before the first installation. Mandatory values cannot be changed for the entire lifecycle of the environment.
Optional configuration items are not required for installation. Optional values may change at any point in the environment lifecycle; Terraform retains the latest state of the environment and keeps track of any configuration changes made later.
The following is an example of a valid configuration file:
# Mandatory items
environment_name = "my-bamboo-env"
region = "us-east-2"
# Optional items
resource_tags = {
Terraform = "true",
Organization = "atlassian",
product = "bamboo" ,
}
instance_types = ["m5.xlarge"]
desired_capacity = 2
domain = "mydomain.com"
Common configuration ¶
Environmental properties common to all deployments.
Environment Name ¶
environment_name provides your environment with a unique name within a single cloud provider account. This value cannot be altered after the configuration has been applied. The value is used to form the names of some resources, including the VPC and the Kubernetes cluster.
environment_name = "<your-environment-name>" # e.g. "my-terraform-env"
Format
Environment names should start with a letter and can contain letters, numbers, and dashes (-). The maximum value length is 24 characters.
EKS K8S API version ¶
eks_version is the supported EKS K8S API version. It must be a valid EKS version.
Latest EKS version
It is recommended to use the default value; however, it is possible to override it with a different (e.g. the latest) EKS version for experimental purposes.
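If you do override it, the value goes in config.tfvars like any other item. A minimal sketch (the version shown is illustrative; check the Terraform variable definition in the repository for the exact type and the currently supported versions):
eks_version = 1.29 # illustrative value; confirm format and supported versions in variables.tf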
Region ¶
region defines the cloud provider region that the environment will be deployed to.
region = "<REGION>" # e.g. "ap-northeast-2"
Format
The value must be a valid AWS region.
Products ¶
The products list can be configured with one or multiple products. This will result in these products being deployed to the same K8s cluster. For example, if a Jira and Confluence deployment is required, this property can be configured as follows:
products = ["jira", "confluence"]
Available values
jira, confluence, bitbucket, bamboo
Whitelist IP blocks ¶
whitelist_cidr defines a set of CIDR blocks that are allowed to access the applications.
By default, the deployed applications are publicly accessible. You can restrict this access by changing the default value to the CIDR blocks that should be allowed to access the applications.
whitelist_cidr = ["199.0.0.0/8", "119.81.0.0/16"]
Domain ¶
We recommend using a domain name to access the application via HTTPS. You will need to secure a domain name and supply it in the config file.
When the domain is provided, Terraform will create a Route53 hosted zone based on the environment name.
domain="<DOMAIN_NAME>" # e.g. "mydomain.com"
A fully qualified domain name uses the following format: <product>.<environment-name>.<domain-name>. For example, bamboo.staging.mydomain.com.
Ingress controller
If a domain name is defined, Terraform will create an nginx-ingress controller in the EKS cluster that provides access to the application via the domain name.
Terraform will also create an ACM certificate to provide secure connections over HTTPS.
Provision the infrastructure without a domain
If the domain is commented out, the product will be exposed via an unsecured (HTTP only) DNS endpoint that is automatically provisioned as part of the AWS ELB load balancer, for example: http://<load-balancer-id>.<region>.elb.amazonaws.com. This DNS name will be printed out as part of the outputs after the infrastructure has been provisioned.
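In config.tfvars this simply means leaving the domain line commented out, for example:
# domain = "mydomain.com" # no domain: the product is exposed via an HTTP-only ELB endpoint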
Resource tags ¶
resource_tags are custom metadata for all resources in the environment. You can provide multiple tags as a map.
Tag propagation
Tag names must be unique, and tags will be propagated to all provisioned resources.
resource_tags = {
  <tag-name-0> = "<tag-value>",
  <tag-name-1> = "<tag-value>",
  ...
  <tag-name-n> = "<tag-value>",
}
Using the Terraform CLI to apply tags is not recommended, as it may lead to missing tags on some resources. To apply tags to all resources, follow the installation guide.
EKS instance type and storage size ¶
instance_types defines the instance type for the EKS cluster node group.
instance_types = ["m5.2xlarge"]
The instance type must be a valid AWS instance type.
instance_disk_size defines the size of the default storage attached to an instance.
instance_disk_size = 50
Instance type and disk size selection
Neither property can be changed once the infrastructure has been provisioned.
EKS Node Launch Template ¶
When Terraform creates a node group for the EKS cluster, a default launch template is created behind the scenes. However, if you need to install additional tooling/software on the worker node EC2 instances, you may provide your own template in data-center-terraform/modules/AWS/eks/nodegroup_launch_template/templates. This needs to be a file with the .tpl extension. See: Amazon EC2 user data. Here's an example:
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="==MYBOUNDARY=="
--==MYBOUNDARY==
Content-Type: text/x-shellscript; charset="us-ascii"
#!/bin/bash
echo "Running custom user data script"
--==MYBOUNDARY==--
Your custom template in data-center-terraform/modules/AWS/eks/nodegroup_launch_template/templates will be merged with the default launch template. If you need to use environment variables in your custom scripts, make sure you escape them with an additional dollar sign; otherwise the templatefile function will complain about a missing env var:
foo="bar"
echo $${foo}
The variables available to the templatefile function are not configurable (they are defined in data-center-terraform/modules/AWS/eks/nodegroup_launch_template/locals.tf), so it makes sense to generate the template outside Terraform and pull it in before installing/upgrading.
Cluster size ¶
The EKS cluster creates an Auto Scaling group (ASG) with a defined minimum and maximum capacity. You can set these values in the config file. The minimum value is 1 and the maximum is 20.
min_cluster_capacity = 1 # between 1 and 20
max_cluster_capacity = 5 # between 1 and 20
Cluster size and cost
In the installation process, cluster-autoscaler is installed in the Kubernetes cluster. The number of nodes will be automatically adjusted depending on the workload resource requirements.
Additional IAM roles ¶
When the EKS cluster is created, only the entity that created the cluster can access and list resources inside the cluster. To enable access for additional roles, you can add them to the config file:
eks_additional_roles = {
  user = {
    kubernetes_group = []
    principal_arn = "arn:aws:iam::121212121212:role/test-policy-role"
    policy_associations = {
      admin = {
        policy_arn = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy"
        access_scope = {
          namespaces = []
          type = "cluster"
        }
      }
    }
  }
}
Access Entries in AWS EKS
For additional information about adding access entries to an EKS cluster, follow the official AWS documentation.
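As a quick sanity check after applying the change, you can list the access entries on the cluster (assuming AWS CLI v2 with EKS access-entry support; the cluster name is a placeholder):
aws eks list-access-entries --cluster-name <cluster-name>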
Logging S3 bucket name ¶
If you wish to log the activities of the Terraform backend, create an S3 bucket and provide its name as follows. This allows the Terraform script to link your Terraform backend to the logging bucket.
logging_bucket = "<LOGGING_S3_BUCKET_NAME>" # default is null
S3 Logging bucket Creation
Providing logging_bucket will not guarantee the creation of the S3 bucket. You will need to create one as part of the prerequisites.
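A minimal sketch of creating such a bucket with the AWS CLI (the bucket name and region are placeholders; for us-east-1, omit the LocationConstraint):
aws s3api create-bucket --bucket <LOGGING_S3_BUCKET_NAME> --region us-east-2 --create-bucket-configuration LocationConstraint=us-east-2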
Monitoring ¶
If you want to deploy a monitoring stack to the cluster, use the following variable in the config.tfvars file:
monitoring_enabled = true
When enabled, Terraform will deploy the kube-prometheus-stack Helm chart with Prometheus, AlertManager, Node Exporter and Grafana.
By default, the Grafana service isn't exposed, and you can log in to Grafana at http://localhost:3000 after running:
kubectl port-forward <grafana-pod> 3000:3000 -n kube-monitoring
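To find the Grafana pod name, you can filter by the chart's standard labels (assuming the kube-prometheus-stack defaults):
kubectl get pods -n kube-monitoring -l app.kubernetes.io/name=grafana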
If you want to expose the Grafana service as a LoadBalancer, set monitoring_grafana_expose_lb to true:
monitoring_grafana_expose_lb = true
Run the following command to get Grafana service hostname:
kubectl get svc -n kube-monitoring
Out of the box, Grafana is shipped with default Kubernetes dashboards which you can use to monitor pod health. You can also create your own custom ConfigMaps labeled grafana_dashboard=dc_monitoring, and the Grafana sidecar will automatically import them.
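For example, a dashboard exported to a JSON file could be packaged and labeled like this (the file and ConfigMap names are illustrative):
kubectl create configmap my-dashboard --from-file=my-dashboard.json -n kube-monitoring
kubectl label configmap my-dashboard grafana_dashboard=dc_monitoring -n kube-monitoring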
By default, both Prometheus and Grafana claim 10Gi of persistent storage. You can override the default values by setting:
prometheus_pvc_disk_size = "50Gi"
grafana_pvc_disk_size = "20Gi"
Volume Expansion
Out of the box, the EKS cluster is created with the gp2 storage class, which does not allow volume expansion. If you expect a high volume of metrics or metrics with high cardinality, it is recommended to override the default Prometheus 10Gi PVC storage request when enabling monitoring for the first time. See the AWS documentation.
Snapshot Configuration ¶
It is possible to restore DC products from snapshots. Each DC product requires a valid public RDS and EBS (shared-home) snapshot defined by the following variables:
<product>_db_snapshot_id
<product>_shared_home_snapshot_id
Note that Confluence and Crowd also require a build number:
confluence_db_snapshot_build_number = "8017"
crowd_db_snapshot_build_number = "5023"
Snapshots must be public and exist in the target region.
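For example, the full set of snapshot inputs for Confluence might look like this (the snapshot IDs below are placeholders; the build number is taken from the example above):
confluence_db_snapshot_id = "snap-0123456789abcdef0"
confluence_shared_home_snapshot_id = "snap-0abcdef1234567890"
confluence_db_snapshot_build_number = "8017"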
Snapshots JSON File
It is also possible to use a special snapshots JSON file with pre-defined snapshot IDs and build numbers for all products, for both small and large dataset sizes.
You can find an example JSON file in test/dcapt-snapshots.json. To use the snapshots JSON file rather than the dedicated snapshot variables, set in config.tfvars:
snapshots_json_file_path = "test/dcapt-snapshots.json"
If snapshots_json_file_path is set, the individual snapshot variables defined in config.tfvars are ignored.
Only use a snapshots JSON file suggested by the DCAPT team.
Product specific configuration ¶
Sensitive Data ¶
Sensitive input data will eventually be stored as secrets within the Kubernetes cluster.
We use the config.tfvars file to pass configuration values to the Terraform stack. The file itself is plain text on the local machine, and it will not be stored in the remote backend, where all the Terraform state files are stored encrypted. More info regarding sensitive data in Terraform state can be found here.
To avoid storing sensitive data in a plain-text file like config.tfvars, we recommend storing it in environment variables prefixed with TF_VAR_.
Take bamboo_admin_password for example. On Linux-like systems, run the following command to write the Bamboo admin password to an environment variable:
export TF_VAR_bamboo_admin_password=<password>
If storing this data as plain text is not a particular concern for the environment to be deployed, you can also choose to supply the values in the config.tfvars file. Uncomment the corresponding line and configure the value there.
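For instance, once uncommented, the corresponding line for the Bamboo admin password in config.tfvars would look something like this (placeholder value shown):
bamboo_admin_password = "<password>"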