
Generating Service Accounts for Kubernetes on any platform


Generic service accounts

A simple method to create service accounts on any Kubernetes cluster (i.e. a cluster not bound to the IAM of a particular cloud provider) is the following. This solution is based on the documentation at https://docs.armory.io/docs/armory-admin/manual-service-account/

Using your administrator account:

1. Create the harvester service account, together with its role, role binding and token secret.

Create the file harvester-service-account.yaml with the following content (the explicit Secret of type kubernetes.io/service-account-token is needed because Kubernetes 1.24 and later no longer create a token secret automatically for each service account):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: harvester
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: harvester-role
  namespace: default
rules:
- apiGroups: ["*"]
  resources: ["jobs","pods","secrets","configmaps","pods/log"]
  verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: harvester-rb
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: harvester-role
subjects:
- namespace: default
  kind: ServiceAccount
  name: harvester
---
apiVersion: v1
kind: Secret
metadata:
  name: harvester-service-account-secret
  annotations:
    kubernetes.io/service-account.name: harvester
type: kubernetes.io/service-account-token

Create the objects in Kubernetes:

>>> kubectl create -f harvester-service-account.yaml
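
To check that everything was applied, verification along the following lines can be done (a sketch; the --as impersonation string matches the names and namespace used above):

>>> kubectl get serviceaccount,role,rolebinding,secret -n default | grep harvester
>>> kubectl auth can-i get pods -n default --as=system:serviceaccount:default:harvester
>>> kubectl auth can-i create jobs -n default --as=system:serviceaccount:default:harvester

The two auth can-i commands should answer "yes" if the role binding is in place.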

2. Generate the kubeconfig file that needs to be installed on the Harvester instance.

Create the script kubeconfig_generator.sh with the following content:

# Names must match the objects created in step 1
SERVICE_ACCOUNT_NAME=harvester
CONTEXT=$(kubectl config current-context)
NAMESPACE=default

NEW_CONTEXT=harvester-context
KUBECONFIG_FILE="kubeconfig-harvester"

SECRET_NAME="harvester-service-account-secret"

# Extract the base64-encoded service account token from the secret
TOKEN_DATA=$(kubectl get secret ${SECRET_NAME} \
  --context ${CONTEXT} \
  --namespace ${NAMESPACE} \
  -o jsonpath='{.data.token}')

# Decode the token
TOKEN=$(echo ${TOKEN_DATA} | base64 -d)

# Create dedicated kubeconfig
# Create a full copy
kubectl config view --raw > ${KUBECONFIG_FILE}.full.tmp
# Switch working context to correct context
kubectl --kubeconfig ${KUBECONFIG_FILE}.full.tmp config use-context ${CONTEXT}
# Minify
kubectl --kubeconfig ${KUBECONFIG_FILE}.full.tmp \
  config view --flatten --minify > ${KUBECONFIG_FILE}.tmp
# Rename context
kubectl config --kubeconfig ${KUBECONFIG_FILE}.tmp \
  rename-context ${CONTEXT} ${NEW_CONTEXT}
# Create token user
kubectl config --kubeconfig ${KUBECONFIG_FILE}.tmp \
  set-credentials ${CONTEXT}-${NAMESPACE}-token-user \
  --token ${TOKEN}
# Set context to use token user
kubectl config --kubeconfig ${KUBECONFIG_FILE}.tmp \
  set-context ${NEW_CONTEXT} --user ${CONTEXT}-${NAMESPACE}-token-user
# Set context to correct namespace
kubectl config --kubeconfig ${KUBECONFIG_FILE}.tmp \
  set-context ${NEW_CONTEXT} --namespace ${NAMESPACE}
# Flatten/minify kubeconfig
kubectl config --kubeconfig ${KUBECONFIG_FILE}.tmp \
  view --flatten --minify > ${KUBECONFIG_FILE}
# Remove tmp
rm ${KUBECONFIG_FILE}.full.tmp
rm ${KUBECONFIG_FILE}.tmp

Run the script. It should generate a file called kubeconfig-harvester, which is the one to put on the Harvester instance:

>>> source kubeconfig_generator.sh
>>> ls -lrt
...
-rw-r--r--  1 fbarreir  staff  3055 Nov  6 11:53 kubeconfig-harvester
...
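
It is a good idea to verify the generated kubeconfig before copying it to the Harvester instance. A quick sanity check, assuming the default namespace used in the script:

>>> kubectl --kubeconfig kubeconfig-harvester get pods
>>> kubectl --kubeconfig kubeconfig-harvester get secrets

Both commands should succeed (an empty list is fine), while anything outside the role, e.g. kubectl --kubeconfig kubeconfig-harvester get nodes, should be denied.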

Considerations

  • The setup above uses the "default" namespace. Harvester supports running in other namespaces: if you (the cluster owner) prefer to operate in a specific namespace, edit the namespace in the files above (a sketch is shown after this list). Be aware that if you are using CVMFS drivers, they also need to be configured for the same namespace, otherwise the pods will not be able to mount the volumes.
  • The set of permissions defined in the role is rather strict; no troubleshooting (e.g. inspecting the CVMFS volumes) is possible with these permissions alone.
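
If you decide to run in a dedicated namespace, a minimal sketch of the changes could look as follows. The namespace name harvester-ns is only an example and the sed calls assume GNU sed; the Secret in the manifest has no explicit namespace, so the -n flag on kubectl create makes sure it also ends up in the right namespace:

>>> kubectl create namespace harvester-ns
>>> sed -i 's/namespace: default/namespace: harvester-ns/' harvester-service-account.yaml
>>> kubectl create -f harvester-service-account.yaml -n harvester-ns
>>> sed -i 's/^NAMESPACE=default$/NAMESPACE=harvester-ns/' kubeconfig_generator.sh
>>> source kubeconfig_generator.sh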