Ingress for LAVA callback handler conflicts with API #340

Open
gctucker opened this issue Sep 3, 2023 · 0 comments
gctucker commented Sep 3, 2023

An ingress using ingress-nginx has already been set up correctly in the AKS cluster in the kernelci-api namespace. When I then try to install another one in kernelci-pipeline to handle LAVA callbacks, I get this error:

CONTROLLER_IMAGE=ingress-nginx/controller
CONTROLLER_TAG=v1.2.1
DEFAULTBACKEND_IMAGE=defaultbackend-amd64
DEFAULTBACKEND_TAG=1.5
PATCH_IMAGE=ingress-nginx/kube-webhook-certgen
PATCH_TAG=v1.1.1
$ helm install ingress-nginx ingress-nginx/ingress-nginx \
    --version=4.1.3 \
    --namespace=kernelci-pipeline \
    --create-namespace \
    --set controller.replicaCount=2 \
    --set controller.nodeSelector."kubernetes\.io/os"=linux \
    --set controller.image.image=$CONTROLLER_IMAGE \
    --set controller.image.tag=$CONTROLLER_TAG \
    --set controller.image.digest="" \
    --set controller.admissionWebhooks.patch.nodeSelector."kubernetes\.io/os"=linux \
    --set controller.service.loadBalancerIP=10.224.0.42 \
    --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-internal"=true \
    --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz \
    --set controller.admissionWebhooks.patch.image.image=$PATCH_IMAGE \
    --set controller.admissionWebhooks.patch.image.tag=$PATCH_TAG \
    --set controller.admissionWebhooks.patch.image.digest="" \
    --set defaultBackend.nodeSelector."kubernetes\.io/os"=linux \
    --set defaultBackend.image.image=$DEFAULTBACKEND_IMAGE \
    --set defaultBackend.image.tag=$DEFAULTBACKEND_TAG \
    --set defaultBackend.image.digest=""
Error: INSTALLATION FAILED: rendered manifests contain a resource that already exists. Unable to continue with install: ClusterRole "ingress-nginx" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-namespace" must equal "kernelci-pipeline": current value is "kernelci-api"
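The failure seems to come from Helm's release ownership annotations: the chart derives the names of its cluster-scoped resources (ClusterRole, ClusterRoleBinding) from the release name, and the existing `ingress-nginx` release in kernelci-api already owns them. If a second controller really is wanted, a different release name plus a distinct ingress class might avoid the collision. A hedged sketch, not tested against this cluster (the `nginx-pipeline` class name and `ingress-nginx-pipeline` release name are placeholders I made up):

```shell
# Hypothetical second install: a distinct release name so the cluster-scoped
# resources get unique names, and a distinct IngressClass so the two
# controllers don't both try to reconcile the same Ingress objects.
helm install ingress-nginx-pipeline ingress-nginx/ingress-nginx \
    --version=4.1.3 \
    --namespace=kernelci-pipeline \
    --create-namespace \
    --set controller.ingressClassResource.name=nginx-pipeline \
    --set controller.ingressClassResource.controllerValue="k8s.io/ingress-nginx-pipeline" \
    --set controller.electionID=ingress-nginx-pipeline-leader
    # plus the same image, nodeSelector and service annotation flags
    # as in the original install above
```

Ingress resources in kernelci-pipeline would then need `ingressClassName: nginx-pipeline` to be picked up by the new controller rather than the existing one.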

As a result, I don't think we can set a different DNS label for the callback ingress:

$ helm upgrade ingress-nginx ingress-nginx/ingress-nginx \
    --namespace kernelci-pipeline \
    --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-dns-label-name"=kernelci-lava-callback \
    --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz

A temporary solution would be to run the callback handler on the staging VM for the early access phase, but this will need to be addressed properly for a full production deployment. One workaround would be to use a separate cluster for the pipeline, but it seems like there should be a way to set this up properly with multiple ingresses. Also, we ultimately won't need DNS labels if we just rely on the LF DNS for kernelci.org subdomains, so that might simplify things a bit. We'll still need multiple ingress definitions to route the traffic to either the API or a client-side service (LAVA callback, KCIDB bridge etc.).
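For what it's worth, a single ingress-nginx controller can serve Ingress resources from several namespaces, so the routing part may only need multiple Ingress definitions sharing the one controller already installed in kernelci-api. A sketch of what that could look like — the hostnames, service names and ports here are hypothetical placeholders, not the actual deployment values:

```yaml
# Sketch: two Ingress resources in different namespaces, both handled by
# the single existing ingress-nginx controller (class "nginx").
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kernelci-api
  namespace: kernelci-api
spec:
  ingressClassName: nginx
  rules:
  - host: api.kernelci.org        # placeholder hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kernelci-api    # placeholder service name
            port:
              number: 8000        # placeholder port
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: lava-callback
  namespace: kernelci-pipeline
spec:
  ingressClassName: nginx
  rules:
  - host: lava-callback.kernelci.org   # placeholder hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: lava-callback        # placeholder service name
            port:
              number: 8100             # placeholder port
```

With host-based routing like this, the DNS entries (whether Azure DNS labels or LF-managed kernelci.org subdomains) would all point at the one controller's load balancer IP.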

@gctucker gctucker added this to the Production deployment milestone Sep 3, 2023