
prepare release 2.11 (#1169)
Signed-off-by: Zbynek Roubalik <zroubalik@gmail.com>
zroubalik committed Jun 22, 2023
1 parent 3562c4e commit 90a3992
Showing 93 changed files with 12,372 additions and 1 deletion.
2 changes: 1 addition & 1 deletion config.toml
@@ -29,7 +29,7 @@ alpine_js_version = "2.2.1"
favicon = "favicon.png"

[params.versions]
docs = ["2.10", "2.9", "2.8", "2.7", "2.6", "2.5", "2.4", "2.3", "2.2", "2.1", "2.0", "1.5", "1.4"]
docs = ["2.11", "2.10", "2.9", "2.8", "2.7", "2.6", "2.5", "2.4", "2.3", "2.2", "2.1", "2.0", "1.5", "1.4"]

# Site fonts. For more options see https://fonts.google.com.
[[params.fonts]]
8 changes: 8 additions & 0 deletions content/docs/2.12/_index.md
@@ -0,0 +1,8 @@
+++
title = "The KEDA Documentation"
weight = 1
+++

Welcome to the documentation for **KEDA**, the Kubernetes Event-driven Autoscaler. Use the navigation to the left to learn more about how to use KEDA and its components.

Additions and contributions to these docs are managed on [the keda-docs GitHub repo](https://github.com/kedacore/keda-docs).
8 changes: 8 additions & 0 deletions content/docs/2.12/authentication-providers/_index.md
@@ -0,0 +1,8 @@
+++
title = "Authentication Providers"
weight = 5
+++

Available authentication providers for KEDA:

{{< authentication-providers >}}
12 changes: 12 additions & 0 deletions content/docs/2.12/authentication-providers/aws-eks.md
@@ -0,0 +1,12 @@
+++
title = "EKS Pod Identity Webhook for AWS"
+++

[**EKS Pod Identity Webhook**](https://github.com/aws/amazon-eks-pod-identity-webhook), which is described in more depth [here](https://aws.amazon.com/blogs/opensource/introducing-fine-grained-iam-roles-service-accounts/), allows you to provide the role name using an annotation on a service account associated with your pod.

You can tell KEDA to use EKS Pod Identity Webhook via `podIdentity.provider`.

```yaml
podIdentity:
  provider: aws-eks # Optional. Default: none
```
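
For illustration, a minimal `TriggerAuthentication` sketch using this provider, which a trigger can then reference via `authenticationRef.name` (the resource name and namespace below are placeholders, not prescribed values):

```yaml
apiVersion: keda.sh/v1alpha1
kind: TriggerAuthentication
metadata:
  name: keda-trigger-auth-aws   # hypothetical name
  namespace: my-namespace       # hypothetical namespace; must match the ScaledObject's namespace
spec:
  podIdentity:
    provider: aws-eks
```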
12 changes: 12 additions & 0 deletions content/docs/2.12/authentication-providers/aws-kiam.md
@@ -0,0 +1,12 @@
+++
title = "Kiam Pod Identity for AWS"
+++

[**Kiam**](https://github.com/uswitch/kiam/) lets you bind an AWS IAM Role to a pod using an annotation on the pod.

You can tell KEDA to use Kiam via `podIdentity.provider`.

```yaml
podIdentity:
  provider: aws-kiam # Optional. Default: none
```
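
As a hedged sketch, Kiam typically reads the role to assume from an `iam.amazonaws.com/role` annotation on the pod, so the KEDA operator's pod template would carry something like the following (the role name is a placeholder):

```yaml
# Hypothetical excerpt of the KEDA operator Deployment; Kiam resolves the
# IAM role to assume from this pod annotation.
spec:
  template:
    metadata:
      annotations:
        iam.amazonaws.com/role: keda-operator-role   # placeholder role name/ARN
```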
@@ -0,0 +1,17 @@
+++
title = "Azure AD Pod Identity"
+++

Azure Pod Identity is an implementation of [**Azure AD Pod Identity**](https://github.com/Azure/aad-pod-identity) which lets you bind an [**Azure Managed Identity**](https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/) to a Pod in a Kubernetes cluster as delegated access - *Don't manage secrets, let Azure AD do the hard work*.

You can tell KEDA to use Azure AD Pod Identity via `podIdentity.provider`.

```yaml
podIdentity:
  provider: azure             # Optional. Default: none
  identityId: <identity-id>   # Optional. Default: Identity linked with the label set when installing KEDA.
```
Azure AD Pod Identity grants access to containers that carry the `aadpodidbinding` label. You can set this label on the KEDA operator deployment; this can be done during deployment with Helm using `--set podIdentity.activeDirectory.identity={your-label-name}`.

You can override the identity that was assigned to KEDA during installation by specifying an `identityId` parameter under the `podIdentity` field. This allows end users to use different identities to access various resources, which is more secure than using a single identity that has access to multiple resources.
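
For illustration, a minimal `TriggerAuthentication` sketch that uses this provider and overrides the default identity (the resource name and the identity GUID are placeholders):

```yaml
apiVersion: keda.sh/v1alpha1
kind: TriggerAuthentication
metadata:
  name: azure-pod-identity-auth   # hypothetical name
spec:
  podIdentity:
    provider: azure
    # Placeholder GUID; overrides the identity KEDA was installed with.
    identityId: 00000000-0000-0000-0000-000000000000
```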
@@ -0,0 +1,25 @@
+++
title = "Azure AD Workload Identity"
+++

[**Azure AD Workload Identity**](https://github.com/Azure/azure-workload-identity) is the newer version of [**Azure AD Pod Identity**](https://github.com/Azure/aad-pod-identity). It lets your Kubernetes workloads access Azure resources using an
[**Azure AD Application**](https://docs.microsoft.com/en-us/azure/active-directory/develop/app-objects-and-service-principals)
without having to specify secrets, using [federated identity credentials](https://azure.github.io/azure-workload-identity/docs/topics/federated-identity-credential.html) - *Don't manage secrets, let Azure AD do the hard work*.

You can tell KEDA to use Azure AD Workload Identity via `podIdentity.provider`.

```yaml
podIdentity:
  provider: azure-workload    # Optional. Default: none
  identityId: <identity-id>   # Optional. Default: ClientId From annotation on service-account.
```
Azure AD Workload Identity grants access to pods whose service accounts carry the appropriate labels and annotations. Refer
to these [docs](https://azure.github.io/azure-workload-identity/docs/topics/service-account-labels-and-annotations.html) for more information. You can set these labels and annotations on the KEDA Operator service account. This can be done during deployment with Helm using the following flags:
1. `--set podIdentity.azureWorkload.enabled=true`
2. `--set podIdentity.azureWorkload.clientId={azure-ad-client-id}`
3. `--set podIdentity.azureWorkload.tenantId={azure-ad-tenant-id}`

You can override the identity that was assigned to KEDA during installation by specifying an `identityId` parameter under the `podIdentity` field. This allows end users to use different identities to access various resources, which is more secure than using a single identity that has access to multiple resources.
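
As a hedged sketch, a `TriggerAuthentication` using this provider, and a trigger referencing it, might look like the following (the resource name and the scaler shown in the comment are placeholders):

```yaml
apiVersion: keda.sh/v1alpha1
kind: TriggerAuthentication
metadata:
  name: azure-workload-auth   # hypothetical name
spec:
  podIdentity:
    provider: azure-workload
# A trigger in a ScaledObject can then reference it, for example:
#   triggers:
#     - type: azure-queue            # hypothetical scaler
#       metadata: {...}              # scaler-specific metadata omitted
#       authenticationRef:
#         name: azure-workload-auth
```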
41 changes: 41 additions & 0 deletions content/docs/2.12/authentication-providers/azure-key-vault.md
@@ -0,0 +1,41 @@
+++
title = "Azure Key Vault secret"
+++


You can pull secrets from Azure Key Vault into the trigger by using the `azureKeyVault` key.

The `secrets` list defines the mapping between the key vault secret and the authentication parameter.

Currently, `azure` and `azure-workload` pod identity providers are supported for Azure Key Vault using `podIdentity` inside `azureKeyVault`.

Service principal authentication is also supported; it requires registering an [application](https://docs.microsoft.com/en-us/azure/active-directory/develop/app-objects-and-service-principals) with Azure Active Directory and specifying its credentials. The `clientId` and `tenantId` for the application are provided as part of the spec. The `clientSecret` for the application is expected to be in a Kubernetes secret in the same namespace as the authentication resource.

Ensure that "read secret" permissions have been granted to the Azure AD application on the Azure Key Vault. Learn more in the Azure Key Vault [documentation](https://docs.microsoft.com/en-us/azure/key-vault/general/assign-access-policy?tabs=azure-portal).

The `cloud` parameter can be used to specify a cloud environment other than `Azure Public Cloud`, such as other known Azure clouds (for example `Azure China Cloud`), Azure Stack Hub, or air-gapped clouds.

```yaml
azureKeyVault:                                              # Optional.
  vaultUri: {key-vault-address}                             # Required.
  podIdentity:                                              # Optional.
    provider: azure | azure-workload                        # Required.
    identityId: <identity-id>                               # Optional
  credentials:                                              # Optional.
    clientId: {azure-ad-client-id}                          # Required.
    clientSecret:                                           # Required.
      valueFrom:                                            # Required.
        secretKeyRef:                                       # Required.
          name: {k8s-secret-with-azure-ad-secret}           # Required.
          key: {key-within-the-secret}                      # Required.
    tenantId: {azure-ad-tenant-id}                          # Required.
  cloud:                                                    # Optional.
    type: AzurePublicCloud | AzureUSGovernmentCloud | AzureChinaCloud | AzureGermanCloud | Private # Required.
    keyVaultResourceURL: {key-vault-resource-url-for-cloud}          # Required when type = Private.
    activeDirectoryEndpoint: {active-directory-endpoint-for-cloud}   # Required when type = Private.
  secrets:                                                  # Required.
    - parameter: {param-name-used-for-auth}                 # Required.
      name: {key-vault-secret-name}                         # Required.
      version: {key-vault-secret-version}                   # Optional.
```
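
For illustration, a hedged sketch that pulls a single secret from Key Vault via the `azure-workload` pod identity provider (the vault URI, secret name, and parameter are placeholders):

```yaml
apiVersion: keda.sh/v1alpha1
kind: TriggerAuthentication
metadata:
  name: keyvault-auth   # hypothetical name
spec:
  azureKeyVault:
    vaultUri: https://my-vault.vault.azure.net   # placeholder vault address
    podIdentity:
      provider: azure-workload
    secrets:
      - parameter: connection                    # parameter expected by the scaler
        name: storage-connection-string          # placeholder Key Vault secret name
```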
14 changes: 14 additions & 0 deletions content/docs/2.12/authentication-providers/environment-variable.md
@@ -0,0 +1,14 @@
+++
title = "Environment variable"
+++

You can pull information via one or more environment variables by providing the `name` of the variable for a given `containerName`.

```yaml
env:                              # Optional.
  - parameter: region             # Required - Defined by the scale trigger
    name: my-env-var              # Required.
    containerName: my-container   # Optional. Default: scaleTargetRef.envSourceContainerName of ScaledObject
```
**Assumptions:** `containerName` refers to a container in the resource referenced by `scaleTargetRef.name` in the ScaledObject, unless specified otherwise.
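
For illustration, a minimal `TriggerAuthentication` sketch using this provider (the resource name, variable name, and container name are placeholders):

```yaml
apiVersion: keda.sh/v1alpha1
kind: TriggerAuthentication
metadata:
  name: env-var-auth                # hypothetical name
spec:
  env:
    - parameter: connectionString   # parameter expected by the scaler
      name: CONNECTION_STRING       # environment variable on the target container
      containerName: my-container   # optional; placeholder container name
```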
@@ -0,0 +1,69 @@
+++
title = "GCP Workload Identity"
+++

[**GCP Workload Identity**](https://cloud.google.com/kubernetes-engine/docs/concepts/workload-identity) allows workloads in your GKE clusters to impersonate Identity and Access Management (IAM) service accounts to access Google Cloud services.

You can tell KEDA to use GCP Workload Identity via `podIdentity.provider`.

```yaml
podIdentity:
  provider: gcp # Optional. Default: none
```
### Steps to set up Workload Identity
If you are using the `gcp` pod identity provider, your GKE cluster must have Workload Identity enabled and you need to set up Workload Identity as described below.

* Create a GCP IAM service account with the permissions needed to retrieve metrics for the relevant scalers.

```shell
gcloud iam service-accounts create GSA_NAME \
--project=GSA_PROJECT
```

Replace the following: \
GSA_NAME: the name of the new IAM service account.\
GSA_PROJECT: the project ID of the Google Cloud project for your IAM service account.


* Ensure that your IAM service account has the [roles](https://cloud.google.com/iam/docs/understanding-roles) you need. You can grant additional roles using the following command:

```shell
gcloud projects add-iam-policy-binding PROJECT_ID \
--member "serviceAccount:GSA_NAME@GSA_PROJECT.iam.gserviceaccount.com" \
--role "ROLE_NAME"
```

Replace the following:

PROJECT_ID: your Google Cloud project ID. \
GSA_NAME: the name of your IAM service account. \
GSA_PROJECT: the project ID of the Google Cloud project of your IAM service account. \
ROLE_NAME: the IAM role to assign to your service account, like roles/monitoring.viewer.

* Allow the Kubernetes service account to impersonate the IAM service account by adding an IAM policy binding between the two service accounts. This binding allows the Kubernetes service account to act as the IAM service account.
```shell
gcloud iam service-accounts add-iam-policy-binding GSA_NAME@GSA_PROJECT.iam.gserviceaccount.com \
--role roles/iam.workloadIdentityUser \
--member "serviceAccount:PROJECT_ID.svc.id.goog[NAMESPACE/KSA_NAME]"
```
Replace the following:

PROJECT_ID: your Google Cloud project ID. \
GSA_NAME: the name of your IAM service account. \
GSA_PROJECT: the project ID of the Google Cloud project of your IAM service account. \
NAMESPACE: the namespace where the KEDA operator is installed; defaults to `keda`. \
KSA_NAME: the Kubernetes service account name of the KEDA operator; defaults to `keda-operator`.
* Then you need to annotate the Kubernetes service account with the email address of the IAM service account.

```shell
kubectl annotate serviceaccount keda-operator \
--namespace keda \
iam.gke.io/gcp-service-account=GSA_NAME@GSA_PROJECT.iam.gserviceaccount.com
```
Replace the following:

GSA_NAME: the name of your IAM service account. \
GSA_PROJECT: the project ID of the Google Cloud project of your IAM service account.


Refer to the official GCP [documentation](https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity#authenticating_to) for more information.
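
Once the bindings and annotation above are in place, a `TriggerAuthentication` that simply selects the `gcp` provider is typically all a trigger needs to reference; a minimal sketch (the resource name is a placeholder):

```yaml
apiVersion: keda.sh/v1alpha1
kind: TriggerAuthentication
metadata:
  name: gcp-workload-identity-auth   # hypothetical name
spec:
  podIdentity:
    provider: gcp
```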
28 changes: 28 additions & 0 deletions content/docs/2.12/authentication-providers/hashicorp-vault.md
@@ -0,0 +1,28 @@
+++
title = "Hashicorp Vault secret"
+++


You can pull one or more HashiCorp Vault secrets into the trigger by defining the authentication metadata, such as the Vault `address` and the `authentication` method (token | kubernetes). If you choose the kubernetes auth method, you should also provide `role` and `mount`.
`credential` defines the HashiCorp Vault credentials depending on the authentication method: for kubernetes, provide the path to the service account token (/var/run/secrets/kubernetes.io/serviceaccount/token); for the token auth method, provide the token.
The `secrets` list defines the mapping between the path and key of the secret in Vault and the authentication parameter.
`namespace` may be used to target a given Vault Enterprise namespace.

> Since version `1.5.0`, Vault secrets backend **version 2** is supported.
> Support for Vault secrets backend **version 1** was added in version `2.10`.

```yaml
hashiCorpVault:                                     # Optional.
  address: {hashicorp-vault-address}                # Required.
  namespace: {hashicorp-vault-namespace}            # Optional. Default is root namespace. Useful for Vault Enterprise
  authentication: token | kubernetes                # Required.
  role: {hashicorp-vault-role}                      # Optional.
  mount: {hashicorp-vault-mount}                    # Optional.
  credential:                                       # Optional.
    token: {hashicorp-vault-token}                  # Optional.
    serviceAccount: {path-to-service-account-file}  # Optional.
  secrets:                                          # Required.
    - parameter: {scaledObject-parameter-name}      # Required.
      key: {hashicorp-vault-secret-key-name}        # Required.
      path: {hashicorp-vault-secret-path}           # Required.
```
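
For illustration, a hedged sketch using the kubernetes auth method (the address, role, mount, secret path, and key are placeholders):

```yaml
hashiCorpVault:
  address: https://vault.example.com:8200   # placeholder Vault address
  authentication: kubernetes
  role: keda-role                           # placeholder Vault role
  mount: kubernetes                         # placeholder auth mount
  credential:
    serviceAccount: /var/run/secrets/kubernetes.io/serviceaccount/token
  secrets:
    - parameter: connectionString           # parameter expected by the scaler
      key: connection                       # placeholder key within the Vault secret
      path: secret/data/my-app/config       # placeholder KV v2 path
```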
14 changes: 14 additions & 0 deletions content/docs/2.12/authentication-providers/secret.md
@@ -0,0 +1,14 @@
+++
title = "Secret"
+++

You can pull one or more secrets into the trigger by defining the `name` of the Kubernetes Secret and the `key` to use.

```yaml
secretTargetRef:                          # Optional.
  - parameter: connectionString           # Required - Defined by the scale trigger
    name: my-keda-secret-entity           # Required.
    key: azure-storage-connectionstring   # Required.
```
**Assumptions:** The Secret is in the same namespace as the resource referenced by `scaleTargetRef.name` in the ScaledObject, unless specified otherwise.
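
Putting it together, a hedged sketch of the referenced Kubernetes Secret and a matching `TriggerAuthentication` (names and the secret value are placeholders):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-keda-secret-entity   # must match secretTargetRef.name
type: Opaque
stringData:
  azure-storage-connectionstring: "<connection-string>"   # placeholder value
---
apiVersion: keda.sh/v1alpha1
kind: TriggerAuthentication
metadata:
  name: keda-trigger-auth   # hypothetical name
spec:
  secretTargetRef:
    - parameter: connectionString
      name: my-keda-secret-entity
      key: azure-storage-connectionstring
```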
49 changes: 49 additions & 0 deletions content/docs/2.12/concepts/_index.md
@@ -0,0 +1,49 @@
+++
title = "KEDA Concepts"
description = "What KEDA is and how it works"
weight = 1
+++

## What is KEDA?

**KEDA** is a [Kubernetes](https://kubernetes.io)-based Event Driven Autoscaler. With KEDA, you can drive the scaling of any container in Kubernetes based on the number of events needing to be processed.

**KEDA** is a single-purpose and lightweight component that can be added into any Kubernetes cluster. KEDA works alongside standard Kubernetes components like the [Horizontal Pod Autoscaler](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/) and can extend functionality without overwriting or duplication. With KEDA you can explicitly map the apps you want to scale in an event-driven way, while other apps continue to function as before. This makes KEDA a flexible and safe option to run alongside any number of other Kubernetes applications or frameworks.

## How KEDA works

KEDA performs three key roles within Kubernetes:

1. **Agent** — KEDA activates and deactivates Kubernetes [Deployments](https://kubernetes.io/docs/concepts/workloads/controllers/deployment) to scale to and from zero depending on whether there are events to process. This is one of the primary roles of the `keda-operator` container that runs when you install KEDA.
2. **Metrics** — KEDA acts as a [Kubernetes metrics server](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-custom-metrics) that exposes rich event data like queue length or stream lag to the Horizontal Pod Autoscaler to drive scale out. It is up to the Deployment to consume the events directly from the source. This preserves rich event integration and enables gestures like completing or abandoning queue messages to work out of the box. The metric serving is the primary role of the `keda-operator-metrics-apiserver` container that runs when you install KEDA.
3. **Admission Webhooks** - Automatically validate resource changes to prevent misconfiguration and enforce best practices by using an [admission controller](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/). As an example, it will prevent multiple ScaledObjects from targeting the same scale target.

## Architecture

The diagram below shows how KEDA works in conjunction with the Kubernetes Horizontal Pod Autoscaler, external event sources, and Kubernetes' [etcd](https://etcd.io) data store:

![KEDA architecture](/img/keda-arch.png)

### Event sources and scalers

KEDA has a wide range of [**scalers**](/scalers) that can both detect if a deployment should be activated or deactivated, and feed custom metrics for a specific event source. The following scalers are available:

{{< scalers-compact >}}

### Custom Resources (CRD)

When you install KEDA, it creates four custom resources:

1. `scaledobjects.keda.sh`
1. `scaledjobs.keda.sh`
1. `triggerauthentications.keda.sh`
1. `clustertriggerauthentications.keda.sh`

These custom resources enable you to map an event source (and the authentication to that event source) to a Deployment, StatefulSet, Custom Resource or Job for scaling; a minimal example is sketched after the list below.
- `ScaledObjects` represent the desired mapping between an event source (e.g. RabbitMQ) and the Kubernetes Deployment, StatefulSet or any Custom Resource that defines the `/scale` subresource.
- `ScaledJobs` represent the mapping between an event source and a Kubernetes Job.
- A `ScaledObject`/`ScaledJob` may also reference a `TriggerAuthentication` or `ClusterTriggerAuthentication`, which contains the authentication configuration or secrets needed to monitor the event source.
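
As a hedged illustration of how these pieces fit together, a minimal `ScaledObject` might look like the following sketch (the names, scaler type, and scaler metadata are placeholders, not prescribed values):

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: my-scaledobject          # hypothetical name
spec:
  scaleTargetRef:
    name: my-deployment          # Deployment (or other /scale resource) to scale
  minReplicaCount: 0
  maxReplicaCount: 10
  triggers:
    - type: rabbitmq             # hypothetical scaler type
      metadata:
        queueName: my-queue      # placeholder scaler metadata
        mode: QueueLength
        value: "5"
      authenticationRef:
        name: my-trigger-auth    # a TriggerAuthentication in the same namespace
```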

## Deploy KEDA

See the [Deployment](../deploy) documentation for instructions on how to deploy KEDA into any cluster using tools like [Helm](../deploy/#helm).
17 changes: 17 additions & 0 deletions content/docs/2.12/concepts/admission-webhooks.md
@@ -0,0 +1,17 @@
+++
title = "Admission Webhooks"
description = "Automatically validate resource changes to prevent misconfiguration and enforce best practices"
weight = 600
+++

There are several misconfiguration scenarios that can produce scaling problems in production workloads. For example, in Kubernetes a single workload should never be scaled by two or more HPAs, because that produces conflicts and unintended behavior.

Some data-format errors can be detected during model validation, but misconfigurations like these can't be caught in that step because the model itself is valid. To detect them early, before they reach the data plane, admission webhooks validate all incoming (new or updated) KEDA resources and reject any resource that doesn't match the rules below.

### Prevention Rules

KEDA will block all incoming changes to `ScaledObject` that don't match these rules (a sketch of a spec that would be rejected follows the list):

- The scaled workload (`scaledobject.spec.scaleTargetRef`) is already autoscaled by another source (another ScaledObject or an HPA).
- CPU and/or memory triggers are used and the scaled workload doesn't have resource requests defined. **This rule doesn't apply to all workload types, only to `Deployment` and `StatefulSet`.**
- CPU and/or memory triggers are **the only triggers used** and the ScaledObject defines `minReplicaCount: 0`. **This rule doesn't apply to all workload types, only to `Deployment` and `StatefulSet`.**
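
As a hedged illustration of the last rule, a `ScaledObject` like the following sketch would be rejected, because CPU is its only trigger while `minReplicaCount` is set to 0 (names are placeholders):

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: cpu-only-scaledobject   # hypothetical name
spec:
  scaleTargetRef:
    name: my-deployment         # a Deployment without CPU requests would also be rejected
  minReplicaCount: 0            # not allowed when CPU/memory are the only triggers
  triggers:
    - type: cpu
      metricType: Utilization
      metadata:
        value: "60"
```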
