Merge pull request #7 from terraform-google-modules/master
update
bharathkkb committed Dec 9, 2019
2 parents 1cc42d0 + 3212e3b commit e8317f2
Showing 63 changed files with 820 additions and 292 deletions.
13 changes: 13 additions & 0 deletions .kitchen.yml
@@ -162,6 +162,19 @@ suites:
systems:
- name: workload_metadata_config
backend: local
- name: "beta_cluster"
driver:
root_module_directory: test/fixtures/beta_cluster
verifier:
systems:
- name: gcloud
backend: local
controls:
- gcloud
- name: gcp
backend: gcp
controls:
- gcp
- name: "deploy_service"
driver:
root_module_directory: test/fixtures/deploy_service
39 changes: 37 additions & 2 deletions CHANGELOG.md
@@ -8,6 +8,30 @@ Extending the adopted spec, each change should have a link to its corresponding

## [Unreleased]

## [v6.1.1] - 2019-12-04

### Fixed

- Fix endpoint output for private clusters where `private_nodes=false`. [#365](https://github.com/terraform-google-modules/terraform-google-kubernetes-engine/pull/365)

## [v6.1.0] - 2019-12-03

### Added
- Support for using a pre-existing Service Account with the ACM submodule. [#346](https://github.com/terraform-google-modules/terraform-google-kubernetes-engine/pull/346)

### Fixed
- Compute region output for zonal clusters. [#362](https://github.com/terraform-google-modules/terraform-google-kubernetes-engine/pull/362)

## [v6.0.1] - 2019-12-02

### Fixed

- The required Google provider constraint has been relaxed to `~> 2.18` (>= 2.18, <3.0). [#359]

## [v6.0.0] - 2019-11-28

v6.0.0 is a backwards-incompatible release. Please see the [upgrading guide](./docs/upgrading_to_v6.0.md).

### Added

* Support for Shielded Nodes beta feature via `enabled_shielded_nodes` variable. [#300]
@@ -23,17 +47,20 @@ Extending the adopted spec, each change should have a link to its corresponding
* `private_zonal_with_networking` example. [#308]
* `regional_private_node_pool_oauth_scopes` example. [#321]
* The `cluster_autoscaling` variable for beta submodules. [#93]
* The `master_authorized_networks` variable. [#354]

### Changed

* The `node_pool_labels`, `node_pool_tags`, and `node_pool_taints` variables have defaults and can be overridden within the
`node_pools` object. [#3]
* `upstream_nameservers` variable is typed as a list of strings. [#350]
* The `network_policy` variable defaults to `true`. [#138]

### Removed

* **Breaking**: Removed support for enabling the Kubernetes dashboard, as this is deprecated on GKE. [#337]
* **Beaking**: Removed support for versions of the Google provider and the Google Beta provider older than 2.18. [#261]
* **Breaking**: Removed support for versions of the Google provider and the Google Beta provider older than 2.18. [#261]
* **Breaking**: Removed the `master_authorized_networks_config` variable. [#354]

### Fixed

@@ -236,7 +263,11 @@ In either case, upgrading to module version `v1.0.0` will trigger a recreation o

* Initial release of module.

[Unreleased]: https://github.com/terraform-google-modules/terraform-google-kubernetes-engine/compare/v5.2.0...HEAD
[Unreleased]: https://github.com/terraform-google-modules/terraform-google-kubernetes-engine/compare/v6.1.1...HEAD
[v6.1.1]: https://github.com/terraform-google-modules/terraform-google-kubernetes-engine/compare/v6.1.0...v6.1.1
[v6.1.0]: https://github.com/terraform-google-modules/terraform-google-kubernetes-engine/compare/v6.0.1...v6.1.0
[v6.0.1]: https://github.com/terraform-google-modules/terraform-google-kubernetes-engine/compare/v6.0.0...v6.0.1
[v6.0.0]: https://github.com/terraform-google-modules/terraform-google-kubernetes-engine/compare/v5.1.0...v6.0.0
[v5.2.0]: https://github.com/terraform-google-modules/terraform-google-kubernetes-engine/compare/v5.1.1...v5.2.0
[v5.1.1]: https://github.com/terraform-google-modules/terraform-google-kubernetes-engine/compare/v5.1.0...v5.1.1
[v5.1.0]: https://github.com/terraform-google-modules/terraform-google-kubernetes-engine/compare/v5.0.0...v5.1.0
@@ -254,6 +285,8 @@ In either case, upgrading to module version `v1.0.0` will trigger a recreation o
[v0.3.0]: https://github.com/terraform-google-modules/terraform-google-kubernetes-engine/compare/v0.2.0...v0.3.0
[v0.2.0]: https://github.com/terraform-google-modules/terraform-google-kubernetes-engine/compare/v0.1.0...v0.2.0

[#359]: https://github.com/terraform-google-modules/terraform-google-kubernetes-engine/issues/359
[#354]: https://github.com/terraform-google-modules/terraform-google-kubernetes-engine/pull/354
[#350]: https://github.com/terraform-google-modules/terraform-google-kubernetes-engine/pull/350
[#340]: https://github.com/terraform-google-modules/terraform-google-kubernetes-engine/pull/340
[#339]: https://github.com/terraform-google-modules/terraform-google-kubernetes-engine/pull/339
@@ -307,6 +340,8 @@ In either case, upgrading to module version `v1.0.0` will trigger a recreation o
[#151]: https://github.com/terraform-google-modules/terraform-google-kubernetes-engine/pull/151
[#149]: https://github.com/terraform-google-modules/terraform-google-kubernetes-engine/pull/149
[#148]: https://github.com/terraform-google-modules/terraform-google-kubernetes-engine/pull/148
[#138]: https://github.com/terraform-google-modules/terraform-google-kubernetes-engine/pull/138
[#136]: https://github.com/terraform-google-modules/terraform-google-kubernetes-engine/pull/136
[#132]: https://github.com/terraform-google-modules/terraform-google-kubernetes-engine/pull/132
[#124]: https://github.com/terraform-google-modules/terraform-google-kubernetes-engine/pull/124
21 changes: 1 addition & 20 deletions README.md
@@ -108,22 +108,6 @@ Then perform the following commands on the root folder:
- `terraform apply` to apply the infrastructure build
- `terraform destroy` to destroy the built infrastructure

## Upgrade to v3.0.0

v3.0.0 is a breaking release. Refer to the
[Upgrading to v3.0 guide][upgrading-to-v3.0] for details.

## Upgrade to v2.0.0

v2.0.0 is a breaking release. Refer to the
[Upgrading to v2.0 guide][upgrading-to-v2.0] for details.

## Upgrade to v1.0.0

Version 1.0.0 of this module introduces a breaking change: adding the `disable-legacy-endpoints` metadata field to all node pools. This metadata is required by GKE and [determines whether the `/0.1/` and `/v1beta1/` paths are available in the nodes' metadata server](https://cloud.google.com/kubernetes-engine/docs/how-to/protecting-cluster-metadata#disable-legacy-apis). If your applications do not require access to the node's metadata server, you can leave the default value of `true` provided by the module. If your applications require access to the metadata server, be sure to read the linked documentation to see if you need to set the value for this field to `false` to allow your applications access to the above metadata server paths.

In either case, upgrading to module version `v1.0.0` will trigger a recreation of all node pools in the cluster.

<!-- BEGINNING OF PRE-COMMIT-TERRAFORM DOCS HOOK -->
## Inputs

@@ -153,7 +137,7 @@ In either case, upgrading to module version `v1.0.0` will trigger a recreation o
| monitoring\_service | The monitoring service that the cluster should write metrics to. Automatically send metrics from pods in the cluster to the Google Cloud Monitoring API. VM metrics will be collected by Google Compute Engine regardless of this setting. Available options include monitoring.googleapis.com, monitoring.googleapis.com/kubernetes (beta) and none | string | `"monitoring.googleapis.com"` | no |
| name | The name of the cluster (required) | string | n/a | yes |
| network | The VPC network to host the cluster in (required) | string | n/a | yes |
| network\_policy | Enable network policy addon | bool | `"false"` | no |
| network\_policy | Enable network policy addon | bool | `"true"` | no |
| network\_policy\_provider | The network policy provider. | string | `"CALICO"` | no |
| network\_project\_id | The project ID of the shared VPC's host (for shared vpc support) | string | `""` | no |
| node\_pools | List of maps containing node pools | list(map(string)) | `<list>` | no |
@@ -251,9 +235,6 @@ The project has the following folders and files:
- /README.MD: This file.
- /modules: Private and beta sub modules.


[upgrading-to-v2.0]: docs/upgrading_to_v2.0.md
[upgrading-to-v3.0]: docs/upgrading_to_v3.0.md
[terraform-provider-google]: https://github.com/terraform-providers/terraform-provider-google
[3.0.0]: https://registry.terraform.io/modules/terraform-google-modules/kubernetes-engine/google/3.0.0
[terraform-0.12-upgrade]: https://www.terraform.io/upgrade-guides/0-12.html
42 changes: 13 additions & 29 deletions autogen/README.md
@@ -12,9 +12,20 @@ The resources/services/activations/deletions that this module will create/trigge
Sub modules are provided for creating private clusters, beta private clusters, and beta public clusters as well. Beta sub modules allow for the use of various GKE beta features. See the modules directory for the various sub modules.

{% if private_cluster %}
**Note**: You must run Terraform from a VM on the same VPC as your cluster, otherwise there will be issues connecting to the GKE master.
## Private Cluster Endpoints
When creating a [private cluster](https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters), nodes are provisioned with private IPs.
The Kubernetes master endpoint is also [locked down](https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#access_to_the_cluster_endpoints), which affects these module features:
- `configure_ip_masq`
- `stub_domains`

If you are *not* using these features, then the module will function normally for private clusters and no special configuration is needed.
If you are using these features with a private cluster, you will need to either:
1. Run Terraform from a VM on the same VPC as your cluster (allowing it to connect to the private endpoint) and set `deploy_using_private_endpoint` to `true`.
2. Enable (beta) [route export functionality](https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#master-on-prem-routing) to connect from an on-premise network over a VPN or Interconnect.
3. Include the external IP of your Terraform deployer in the `master_authorized_networks` configuration. Note that only IP addresses reserved in Google Cloud (such as in other VPCs) can be whitelisted.
4. Deploy a [bastion host](https://github.com/terraform-google-modules/terraform-google-bastion-host) or [proxy](https://cloud.google.com/solutions/creating-kubernetes-engine-private-clusters-with-net-proxies) in the same VPC as your GKE cluster.
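As a concrete illustration of option 3 above, a caller might allow its Terraform deployer's reserved IP to reach the public master endpoint. This is a minimal sketch, not module documentation: all values are placeholders, and the exact object shape of `master_authorized_networks` may differ between module versions.

```hcl
module "gke" {
  source = "terraform-google-modules/kubernetes-engine/google//modules/private-cluster"

  project_id        = "my-project"   # placeholder
  name              = "private-gke"
  region            = "us-central1"
  network           = "my-vpc"
  subnetwork        = "my-subnet"
  ip_range_pods     = "pods"
  ip_range_services = "services"

  enable_private_nodes    = true
  enable_private_endpoint = false
  master_ipv4_cidr_block  = "172.16.0.0/28"

  # Option 1 would instead set deploy_using_private_endpoint = true and run
  # Terraform from a VM inside this VPC.

  # Option 3: allow the Terraform deployer's reserved external IP to reach the
  # public master endpoint.
  master_authorized_networks = [
    {
      cidr_block   = "203.0.113.10/32"   # placeholder reserved IP
      display_name = "terraform-deployer"
    },
  ]
}
```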

{% endif %}
{% endif %}

## Compatibility

@@ -125,22 +136,6 @@ Then perform the following commands on the root folder:
- `terraform apply` to apply the infrastructure build
- `terraform destroy` to destroy the built infrastructure

## Upgrade to v3.0.0

v3.0.0 is a breaking release. Refer to the
[Upgrading to v3.0 guide][upgrading-to-v3.0] for details.

## Upgrade to v2.0.0

v2.0.0 is a breaking release. Refer to the
[Upgrading to v2.0 guide][upgrading-to-v2.0] for details.

## Upgrade to v1.0.0

Version 1.0.0 of this module introduces a breaking change: adding the `disable-legacy-endpoints` metadata field to all node pools. This metadata is required by GKE and [determines whether the `/0.1/` and `/v1beta1/` paths are available in the nodes' metadata server](https://cloud.google.com/kubernetes-engine/docs/how-to/protecting-cluster-metadata#disable-legacy-apis). If your applications do not require access to the node's metadata server, you can leave the default value of `true` provided by the module. If your applications require access to the metadata server, be sure to read the linked documentation to see if you need to set the value for this field to `false` to allow your applications access to the above metadata server paths.

In either case, upgrading to module version `v1.0.0` will trigger a recreation of all node pools in the cluster.

<!-- BEGINNING OF PRE-COMMIT-TERRAFORM DOCS HOOK -->
<!-- END OF PRE-COMMIT-TERRAFORM DOCS HOOK -->

@@ -199,17 +194,6 @@ The project has the following folders and files:
- /README.MD: This file.
- /modules: Private and beta sub modules.


{% if private_cluster %}
[upgrading-to-v2.0]: ../../docs/upgrading_to_v2.0.md
{% else %}
[upgrading-to-v2.0]: docs/upgrading_to_v2.0.md
{% endif %}
{% if private_cluster or beta_cluster %}
[upgrading-to-v3.0]: ../../docs/upgrading_to_v3.0.md
{% else %}
[upgrading-to-v3.0]: docs/upgrading_to_v3.0.md
{% endif %}
{% if beta_cluster %}
[terraform-provider-google-beta]: https://github.com/terraform-providers/terraform-provider-google-beta
{% else %}
16 changes: 12 additions & 4 deletions autogen/cluster.tf.tmpl
@@ -191,10 +191,18 @@ resource "google_container_cluster" "primary" {
}

{% if private_cluster %}
private_cluster_config {
enable_private_endpoint = var.enable_private_endpoint
enable_private_nodes = var.enable_private_nodes
master_ipv4_cidr_block = var.master_ipv4_cidr_block
dynamic "private_cluster_config" {
for_each = var.enable_private_nodes ? [{
enable_private_nodes = var.enable_private_nodes,
enable_private_endpoint = var.enable_private_endpoint
master_ipv4_cidr_block = var.master_ipv4_cidr_block
}] : []

content {
enable_private_endpoint = private_cluster_config.value.enable_private_endpoint
enable_private_nodes = private_cluster_config.value.enable_private_nodes
master_ipv4_cidr_block = private_cluster_config.value.master_ipv4_cidr_block
}
}
{% endif %}

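For readers less familiar with the `dynamic`/`for_each` idiom adopted above: it is Terraform 0.12's way of emitting a nested block only when a condition holds. A minimal, standalone sketch (names and values are illustrative; a real private cluster needs more configuration, such as VPC-native IP allocation):

```hcl
variable "enable_private_nodes" {
  type    = bool
  default = true
}

resource "google_container_cluster" "example" {
  name               = "example-cluster"
  location           = "us-central1"
  initial_node_count = 1

  # Rendered exactly once when the condition is true and omitted entirely otherwise,
  # so no empty private_cluster_config block is sent to the API.
  dynamic "private_cluster_config" {
    for_each = var.enable_private_nodes ? [1] : []

    content {
      enable_private_nodes    = true
      enable_private_endpoint = false
      master_ipv4_cidr_block  = "172.16.0.0/28"
    }
  }
}
```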
16 changes: 7 additions & 9 deletions autogen/main.tf.tmpl
@@ -96,16 +96,14 @@ locals {
{% endif %}

cluster_output_name = google_container_cluster.primary.name
cluster_output_location = google_container_cluster.primary.location
cluster_output_region = google_container_cluster.primary.region
cluster_output_regional_zones = google_container_cluster.primary.node_locations
cluster_output_zonal_zones = local.zone_count > 1 ? slice(var.zones, 1, local.zone_count) : []
cluster_output_zones = local.cluster_output_regional_zones

{% if private_cluster %}
cluster_output_endpoint = var.deploy_using_private_endpoint ? google_container_cluster.primary.private_cluster_config.0.private_endpoint : google_container_cluster.primary.private_cluster_config.0.public_endpoint
cluster_endpoint = var.enable_private_nodes ? (var.deploy_using_private_endpoint ? google_container_cluster.primary.private_cluster_config.0.private_endpoint : google_container_cluster.primary.private_cluster_config.0.public_endpoint) : google_container_cluster.primary.endpoint
{% else %}
cluster_output_endpoint = google_container_cluster.primary.endpoint
cluster_endpoint = google_container_cluster.primary.endpoint
{% endif %}

cluster_output_master_auth = concat(google_container_cluster.primary.*.master_auth, [])
@@ -137,12 +135,12 @@ locals {
cluster_master_auth_list_layer1 = local.cluster_output_master_auth
cluster_master_auth_list_layer2 = local.cluster_master_auth_list_layer1[0]
cluster_master_auth_map = local.cluster_master_auth_list_layer2[0]
# cluster locals

cluster_location = google_container_cluster.primary.location
cluster_region = var.regional ? google_container_cluster.primary.region : join("-", slice(split("-", local.cluster_location), 0, 2))
cluster_zones = sort(local.cluster_output_zones)

cluster_name = local.cluster_output_name
cluster_location = local.cluster_output_location
cluster_region = local.cluster_output_region
cluster_zones = sort(local.cluster_output_zones)
cluster_endpoint = local.cluster_output_endpoint
cluster_ca_certificate = local.cluster_master_auth_map["cluster_ca_certificate"]
cluster_master_version = local.cluster_output_master_version
cluster_min_master_version = local.cluster_output_min_master_version
2 changes: 1 addition & 1 deletion autogen/variables.tf.tmpl
@@ -99,7 +99,7 @@ variable "http_load_balancing" {
variable "network_policy" {
type = bool
description = "Enable network policy addon"
default = false
default = true
}

variable "network_policy_provider" {
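Because the `network_policy` default flips to `true` here, callers that want the previous behavior must now opt out explicitly. A minimal sketch (other required inputs such as `project_id`, `name`, `region`, `network`, `subnetwork`, `ip_range_pods`, and `ip_range_services` are omitted for brevity):

```hcl
module "gke" {
  source  = "terraform-google-modules/kubernetes-engine/google"
  version = "~> 6.0"

  # ...required inputs omitted...

  # Restore the pre-v6.0.0 behavior of leaving the network policy addon disabled.
  network_policy = false
}
```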
4 changes: 2 additions & 2 deletions autogen/versions.tf.tmpl
@@ -19,9 +19,9 @@ terraform {

required_providers {
{% if beta_cluster %}
google-beta = "~> 2.18.0"
google-beta = "~> 2.18"
{% else %}
google = "~> 2.18.0"
google = "~> 2.18"
{% endif %}
}
}
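For context on the relaxed constraint: `~> 2.18` is a pessimistic version constraint that allows any Google provider release from 2.18 up to, but not including, 3.0, whereas the previous `~> 2.18.0` allowed only 2.18.x patch releases. Written out explicitly (illustrative only):

```hcl
terraform {
  required_providers {
    # "~> 2.18" is equivalent to this range; "~> 2.18.0" would have meant ">= 2.18.0, < 2.19.0".
    google = ">= 2.18, < 3.0"
  }
}
```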
20 changes: 20 additions & 0 deletions build/int.cloudbuild.yaml
@@ -264,6 +264,26 @@ steps:
- verify workload-metadata-config-local
name: 'gcr.io/cloud-foundation-cicd/$_DOCKER_IMAGE_DEVELOPER_TOOLS:$_DOCKER_TAG_VERSION_DEVELOPER_TOOLS'
args: ['/bin/bash', '-c', 'source /usr/local/bin/task_helper_functions.sh && kitchen_do destroy workload-metadata-config-local']
- id: create beta-cluster-local
waitFor:
- prepare
name: 'gcr.io/cloud-foundation-cicd/$_DOCKER_IMAGE_DEVELOPER_TOOLS:$_DOCKER_TAG_VERSION_DEVELOPER_TOOLS'
args: ['/bin/bash', '-c', 'source /usr/local/bin/task_helper_functions.sh && kitchen_do create beta-cluster-local']
- id: converge beta-cluster-local
waitFor:
- create beta-cluster-local
name: 'gcr.io/cloud-foundation-cicd/$_DOCKER_IMAGE_DEVELOPER_TOOLS:$_DOCKER_TAG_VERSION_DEVELOPER_TOOLS'
args: ['/bin/bash', '-c', 'source /usr/local/bin/task_helper_functions.sh && kitchen_do converge beta-cluster-local']
- id: verify beta-cluster-local
waitFor:
- converge beta-cluster-local
name: 'gcr.io/cloud-foundation-cicd/$_DOCKER_IMAGE_DEVELOPER_TOOLS:$_DOCKER_TAG_VERSION_DEVELOPER_TOOLS'
args: ['/bin/bash', '-c', 'source /usr/local/bin/task_helper_functions.sh && kitchen_do verify beta-cluster-local']
#- id: destroy beta-cluster-local
# waitFor:
# - verify beta-cluster-local
# name: 'gcr.io/cloud-foundation-cicd/$_DOCKER_IMAGE_DEVELOPER_TOOLS:$_DOCKER_TAG_VERSION_DEVELOPER_TOOLS'
# args: ['/bin/bash', '-c', 'source /usr/local/bin/task_helper_functions.sh && kitchen_do destroy beta-cluster-local']
- id: create deploy-service-local
waitFor:
- prepare
5 changes: 5 additions & 0 deletions examples/simple_regional_beta/README.md
@@ -10,17 +10,22 @@ This example illustrates how to create a simple cluster with beta features.
| cloudrun | Boolean to enable / disable CloudRun | string | `"true"` | no |
| cluster\_name\_suffix | A suffix to append to the default cluster name | string | `""` | no |
| compute\_engine\_service\_account | Service account to associate to the nodes in the cluster | string | n/a | yes |
| database\_encryption | Application-layer Secrets Encryption settings. The object format is {state = string, key_name = string}. Valid values of state are: "ENCRYPTED"; "DECRYPTED". key_name is the name of a CloudKMS key. | object | `<list>` | no |
| enable\_binary\_authorization | Enable BinAuthZ Admission controller | string | `"false"` | no |
| ip\_range\_pods | The secondary ip range to use for pods | string | n/a | yes |
| ip\_range\_services | The secondary ip range to use for services | string | n/a | yes |
| istio | Boolean to enable / disable Istio | string | `"true"` | no |
| network | The VPC network to host the cluster in | string | n/a | yes |
| node\_metadata | Specifies how node metadata is exposed to the workload running on the node | string | `"SECURE"` | no |
| node\_pools | List of maps containing node pools | list(map(string)) | `<list>` | no |
| pod\_security\_policy\_config | enabled - Enable the PodSecurityPolicy controller for this cluster. If enabled, pods must be valid under a PodSecurityPolicy to be created. | list | `<list>` | no |
| project\_id | The project ID to host the cluster in | string | n/a | yes |
| region | The region to host the cluster in | string | n/a | yes |
| regional | Whether this is a regional cluster (zonal cluster if set false. WARNING: changing this after cluster creation is destructive!) | bool | `"true"` | no |
| remove\_default\_node\_pool | Remove default node pool while setting up the cluster | bool | `"false"` | no |
| sandbox\_enabled | (Beta) Enable GKE Sandbox (Do not forget to set `image_type` = `COS_CONTAINERD` and `node_version` = `1.12.7-gke.17` or later to use it). | bool | `"false"` | no |
| subnetwork | The subnetwork to host the cluster in | string | n/a | yes |
| zones | The zones to host the cluster in (optional if regional cluster / required if zonal) | list(string) | `<list>` | no |
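
As an illustration of the `database_encryption` format described above, a caller might pass something like the following (placeholder values; based on the table, the input appears to accept a list of `{state, key_name}` objects):

```hcl
database_encryption = [{
  state    = "ENCRYPTED"
  key_name = "projects/my-project/locations/us-central1/keyRings/gke-ring/cryptoKeys/gke-key" # placeholder CloudKMS key
}]
```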

## Outputs
