
What's the right way to expose a redisfailover cluster outside of K8s? #275

Closed
seafoodbuffet opened this issue Jun 15, 2020 · 9 comments

@seafoodbuffet

Not a bug report, but a question. Having played around with the operator, I'm curious what the proper way is to expose the failover cluster to clients outside of k8s.

It appears that right now, while I can expose Sentinel outside of the cluster by means of a TCP Ingress (or whatever other method), Sentinel is handing out cluster-internal IPs for masters/slaves, which means clients won't actually be able to connect to those addresses for redis nodes. One way might be to use a custom configuration that forces some sort of external IP address to be reported by Sentinel as the connect IP. (Something like this: https://dev.to/kermodebear/external-access-to-a-kuberenetes-redis-cluster-46n6)
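For concreteness, that approach boils down to Redis's announce directives. A minimal sketch with placeholder addresses - note that every pod needs its own values, so a single config shared across all pods won't work as-is:

```conf
# redis.conf on each Redis pod (placeholder, externally reachable values)
replica-announce-ip 203.0.113.10    # address clients outside k8s can reach
replica-announce-port 30001         # e.g. the NodePort mapped to this pod

# sentinel.conf on each Sentinel pod
sentinel announce-ip 203.0.113.20
sentinel announce-port 26379
```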

Another option would be to use something like an HAProxy Pod with service discovery for the active redis nodes, using the healthcheck to send users to the master: basically, issue an info replication command to each redis node and mark the one that reports 'master' as healthy. This approach completely ignores that Sentinel is a thing and always routes traffic to the master. Something like this: https://www.willandskill.se/en/setup-a-highly-available-redis-cluster-with-sentinel-and-haproxy/
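An untested sketch of that HAProxy config (backend addresses are placeholders): the tcp-check marks only the node that reports role:master as healthy, so traffic always lands on the current master:

```conf
# haproxy.cfg - route TCP traffic to whichever Redis node is currently master
frontend redis_front
    bind *:6379
    mode tcp
    default_backend redis_back

backend redis_back
    mode tcp
    option tcp-check
    tcp-check send PING\r\n
    tcp-check expect string +PONG
    tcp-check send info\ replication\r\n
    tcp-check expect string role:master
    server redis-0 10.0.0.1:6379 check inter 1s
    server redis-1 10.0.0.2:6379 check inter 1s
    server redis-2 10.0.0.3:6379 check inter 1s
```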

Yet another option would be to dynamically generate an HAProxy config by using a monitor process that keeps asking Sentinel for the master and updates HAProxy to point to that master. This leverages Sentinel and avoids service discovery from within the proxy Pod. (Perhaps a Sentinel-aware Redis proxy already exists, but I haven't found such a thing yet.)
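(For the record, asking Sentinel for the master is a single command - `mymaster` is the conventional monitored-master name, so check what this operator actually registers:)

```sh
redis-cli -h <sentinel-host> -p 26379 SENTINEL get-master-addr-by-name mymaster
# 1) "10.244.1.17"   <- cluster-internal, which is exactly the problem above
# 2) "6379"
```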

Was there a specific design intent for this situation when this operator was built, or is this basically left up to the reader, with the operator holding no opinions on how best to achieve it?

@brucemcpherson

Yes - thank you for this - I've been scratching my head on this for the past two days, but your post made me realize that, of course, although the sentinel service can be exposed as a LoadBalancer service, it returns an internal cluster address like 10.x.x.x for the master - so it's inaccessible from an external client. I'd love to hear of a neat solution to this.

@seafoodbuffet
Author

For now, for dev purposes I'm running a single instance of the actual redis and exposing it via NodePort Service (this works just fine).
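In case it helps anyone, a minimal NodePort Service sketch for that dev setup (the selector and ports are assumptions - match them to the labels the operator actually puts on the redis pods):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: redis-external
spec:
  type: NodePort
  selector:
    app.kubernetes.io/component: redis   # assumption - check your pod labels
  ports:
    - port: 6379
      targetPort: 6379
      nodePort: 30079   # reachable on every node's IP at this port
```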

For production, my plan is to write an adapter that basically polls the sentinel service using its cluster-wide IP/DNS name, grabs the IP of the current master, dumps out an HAProxy config, and exposes THAT HAProxy outside of the cluster using a service. This solution suffers from never being able to route traffic to the slaves, which is not great from a load-balancing standpoint. I suppose if I really cared, I could expose a separate endpoint via HAProxy that specifically pointed to a slave.
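Roughly, the adapter would be a loop like this (service name and paths are hypothetical; a real version should also debounce reloads and survive Sentinel restarts):

```sh
#!/bin/sh
# Poll Sentinel for the current master and re-point HAProxy at it.
SENTINEL=rfs-redisfailover.default.svc.cluster.local   # hypothetical service name
LAST=""
while true; do
  MASTER=$(redis-cli -h "$SENTINEL" -p 26379 \
    SENTINEL get-master-addr-by-name mymaster | paste -sd: -)   # -> "ip:port"
  if [ -n "$MASTER" ] && [ "$MASTER" != "$LAST" ]; then
    sed "s/__MASTER__/$MASTER/" /etc/haproxy/haproxy.cfg.tmpl \
      > /etc/haproxy/haproxy.cfg
    haproxy -f /etc/haproxy/haproxy.cfg -sf $(pidof haproxy)    # hitless reload
    LAST="$MASTER"
  fi
  sleep 2
done
```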

If anyone comes up with a more elegant solution, by all means, please share.

@brucemcpherson

Yes, previously I was running the bitnami helm chart, and it worked ok - including externally - but it was a single point of failure, which is why I liked this approach using sentinel. I've decided to only use redis inside the Kubernetes cluster for now and have disabled redis access from the outside (which was only for occasional dev use - I'll use a local redis instance for that). I'm only using redis as a cache for graphql in front of a database anyway. Looking forward to seeing your solution implemented, and I'll post here if I come across any other workaround.

@rcodesmith

I had this question also. It seems like a proxy (e.g. haproxy) is a common solution. I saw that an earlier version of redis-operator had haproxy. Is there a prescribed solution now?

@slayerjain

Has anyone found any interesting solution to this?

@youngnicks

I haven't implemented it, but sentinel allows scripts to be called upon master change. This script could update labels on the pods, so that a master service selecting on the master label always fronts the current master (see the sketch below). The master service would be used for applications that need to write to redis. A second service can be used to serve all of the redis nodes for read-only access.

This method removes the need for applications to use Sentinel, since the service will always point to the current master. Care must be taken to limit access to the sentinel pods, as they run a script with some form of access to modify the Kubernetes cluster; proper RBAC rules should also be used.
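A sketch of that hook, assuming kubectl is baked into the sentinel image and RBAC permits patching pod labels (all names hypothetical). Sentinel calls a client-reconfig-script with `<master-name> <role> <state> <from-ip> <from-port> <to-ip> <to-port>`:

```sh
#!/bin/sh
# Registered in sentinel.conf as:
#   sentinel client-reconfig-script mymaster /scripts/reconfig.sh
TO_IP=$6   # IP of the newly promoted master
# Find the pod that owns the new master's IP...
NEW_MASTER=$(kubectl get pods -l app=redis \
  -o jsonpath="{.items[?(@.status.podIP==\"$TO_IP\")].metadata.name}")
# ...then move the role label onto it.
kubectl label pods -l app=redis,role=master role- 2>/dev/null  # clear old holder
kubectl label pod "$NEW_MASTER" role=master --overwrite
```

The master service would then simply select on app=redis,role=master.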

@jagol1312

Redis can use hostNetwork: true; Sentinel will then get and announce the correct (node-level) IP.
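For context, that puts the pods in the node's network namespace, so Sentinel sees and announces real node IPs (at the cost of host port conflicts). A plain pod-spec fragment - whether the operator's CRD exposes this depends on your version:

```yaml
spec:
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet   # keep cluster DNS resolution working
```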

lucming added a commit to lucming/redis-operator that referenced this issue Oct 29, 2021
fixes: spotahome#275
Signed-off-by: liuming6 <liuming6@360.cn>
@github-actions

This issue is stale because it has been open for 45 days with no activity.

github-actions bot added the stale label Jan 14, 2022
@github-actions

This issue was closed because it has been inactive for 14 days since being marked as stale.
