
feat(redis): Add pod label of redis role, to support Master/Slave model. #419

Merged
merged 8 commits into spotahome:master on Aug 24, 2022

Conversation

shangjin92

@shangjin92 shangjin92 commented Jul 1, 2022

Set a redis role label on each redis pod so clients can connect to the master directly by adding a redis-master Service. It also makes it possible to find the master or slave pod directly with `kubectl get po -o wide --show-labels`. In this way, the redis cluster can support the sentinel model and the master/slave model at the same time.
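As a minimal sketch of the "redis-master-service" idea above: a Service that selects the master pod by its role label. The label keys and values below are illustrative assumptions, since the exact label names depend on this PR's implementation; check the operator's generated pod labels before using them.

```yaml
# Hypothetical Service targeting the pod currently labelled as master.
# Selector label names are assumed for illustration only.
apiVersion: v1
kind: Service
metadata:
  name: redis-master-service
spec:
  selector:
    app.kubernetes.io/component: redis   # assumed base pod label
    redis-role: master                   # assumed role label added by this PR
  ports:
    - port: 6379
      targetPort: 6379
```

With such a Service in place, simple clients can connect to `redis-master-service:6379` without speaking the sentinel protocol.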

@shangjin92 shangjin92 requested a review from ese as a code owner July 1, 2022 07:13
jim.sj added 7 commits July 1, 2022 15:32
@jiuker
Contributor

jiuker commented Jul 29, 2022

@ese we need this feature too.

@ese
Member

ese commented Jul 29, 2022

@shangjin92 Thanks for the contribution, and sorry for the slow feedback. @jiuker I will be taking care of contributions and pushing a release cycle from the second week of August to the end of the month.

@samof76
Contributor

samof76 commented Aug 4, 2022

@ese any updates on this?

@ese
Member

ese commented Aug 24, 2022

I'm ok with it; just take into account that the labels could be briefly stale after a master failover, since the reliable mechanism we trust for failover is sentinel.
If we implement a sentinel monitor to keep the labels updated, I would also set up a Kubernetes Service for the redis master and another for the slaves, so they are easily discoverable by other apps that use simple redis clients.
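A minimal sketch of how such a monitor could decide which label to apply: Redis reports its current role in the output of `INFO replication`, so the monitor only needs to parse that field for each pod. The helper below is a hypothetical stand-in, not the operator's actual code.

```python
def parse_role(info_text: str) -> str:
    """Extract the replication role ("master" or "slave") from the
    text returned by Redis's INFO replication command."""
    for line in info_text.splitlines():
        line = line.strip()
        if line.startswith("role:"):
            return line.split(":", 1)[1]
    raise ValueError("no role field in INFO output")

# Abridged example of INFO replication output from a master:
info = "# Replication\r\nrole:master\r\nconnected_slaves:2\r\n"
print(parse_role(info))  # -> master
```

The monitor would run this against every redis pod on a short interval (or on sentinel events) and patch the pod's role label whenever the parsed role changed.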

@ese ese merged commit 7e1a068 into spotahome:master Aug 24, 2022
@samof76
Contributor

samof76 commented Aug 26, 2022

@shangjin92 @jiuker I agree with @ese: to configure the endpoint reliably, it should be done by querying sentinel or watching for events from the sentinels.

We do this:

  • Watch sentinel for successful failover events
  • A sidecar looks for these events and updates the Route 53 record
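The event-driven approach above can be sketched by subscribing to sentinel's `+switch-master` pub/sub channel, whose payload is `<master-name> <old-ip> <old-port> <new-ip> <new-port>`. The parser below is a hypothetical stand-in for the sidecar's logic; the DNS/Route 53 update itself is out of scope here.

```python
def parse_switch_master(payload: str):
    """Parse a sentinel +switch-master event payload into the new
    master address. Payload format:
    '<master-name> <old-ip> <old-port> <new-ip> <new-port>'"""
    name, _old_ip, _old_port, new_ip, new_port = payload.split()
    return name, new_ip, int(new_port)

# Example payload received after a failover:
name, ip, port = parse_switch_master("mymaster 10.0.0.5 6379 10.0.0.7 6379")
print(name, ip, port)  # -> mymaster 10.0.0.7 6379
```

A sidecar would subscribe to the sentinels (e.g. with redis-py's pub/sub on channel `+switch-master`), feed each message through this parser, and point the DNS record or master Service at the new address.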

@jiuker
Contributor

jiuker commented Aug 26, 2022

> @shangjin92 @jiuker I agree with @ese: to configure the endpoint reliably, it should be done by querying sentinel or watching for events from the sentinels.
>
> We do this:
>
>   • Watch sentinel for successful failover events
>   • A sidecar looks for these events and updates the Route 53 record

LGTM

@Cowboy-coder

I tried this on our cluster by adding a Service targeting the master pod. I'm curious whether it is within the scope of the operator to solve this more seamlessly by watching sentinel events, or is this expected to always be handled outside of the operator?

With the current approach, a failover (triggered by deleting the master pod) sometimes results in zero downtime and sometimes in roughly ~10s of downtime, according to my tests (a loop where I set and get a key each second while doing the failover).
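The failover test described above amounts to a once-per-second probe loop; as a minimal sketch, here is how the observed downtime could be computed from the probe results. A real run would wrap the `SET`/`GET` round trip against the master Service in a try/except to produce the per-second samples; the function name is hypothetical.

```python
def downtime_seconds(samples):
    """Approximate downtime observed during a failover. `samples` is
    a list of booleans, one per second of probing, True when the
    SET+GET round trip against the master Service succeeded."""
    return sum(1 for ok in samples if not ok)

# Ten seconds of probing with a ~3 s outage in the middle:
probes = [True, True, True, False, False, False, True, True, True, True]
print(downtime_seconds(probes))  # -> 3
```

A zero-downtime failover, as sometimes observed above, would simply be a sample list with no `False` entries.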
