
Azure Storage Queue Scaler: should report both visible and invisible message count to compute queue length. #4002

Closed
moria97 opened this issue Dec 12, 2022 · 8 comments · Fixed by #4003
Labels
bug Something isn't working

Comments

@moria97
Contributor

moria97 commented Dec 12, 2022

Report

According to the Azure Storage Queue documentation, a queue message becomes invisible when it is pulled from the queue for processing, is deleted when processing completes, and becomes visible again when processing fails. So when computing the queue length for scaling, it is better to consider both visible and invisible messages, as described in the get-queue-length documentation, instead of counting only the visible ones.
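For reference, here is a minimal sketch of reading that combined count with the azure-storage-queue-go SDK's ApproximateMessagesCount() (the same call mentioned later in this thread). This is an illustration, not KEDA's actual scaler code; the account name, key, and queue name are placeholders.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"net/url"

	"github.com/Azure/azure-storage-queue-go/azqueue"
)

func main() {
	// Placeholder credentials and queue name, for illustration only.
	credential, err := azqueue.NewSharedKeyCredential("myaccount", "base64-account-key")
	if err != nil {
		log.Fatal(err)
	}
	pipeline := azqueue.NewPipeline(credential, azqueue.PipelineOptions{})

	u, _ := url.Parse("https://myaccount.queue.core.windows.net/myqueue")
	queueURL := azqueue.NewQueueURL(*u, pipeline)

	// GetProperties returns queue metadata, including the approximate
	// message count. That count covers both visible messages and
	// messages that are currently invisible (dequeued but not yet
	// deleted), which is the queue length this issue argues the
	// scaler should use.
	props, err := queueURL.GetProperties(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("approximate queue length: %d\n", props.ApproximateMessagesCount())
}
```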

Expected Behavior

The queue length used for scaling should count both visible and invisible messages.

Actual Behavior

Only visible messages are counted.

Steps to Reproduce the Problem

None

Logs from KEDA operator

example

KEDA Version

None

Kubernetes Version

None

Platform

None

Scaler Details

Azure Storage Queue

Anything else?

No response

@moria97 moria97 added the bug Something isn't working label Dec 12, 2022
@moria97 moria97 changed the title Azure Storage Queue Scaler: should report both visible and invisible message count. Azure Storage Queue Scaler: should report both visible and invisible message count to compute queue length. Dec 12, 2022
@JorTurFer
Member

Hi,
It makes sense, but maybe there is a reason behind the current behavior. Do you know why we don't call ApproximateMessagesCount() directly, @ahmelsayed?
Maybe you know the reason too (or not), @v-shenoy.

@v-shenoy
Contributor

I am not really sure why this was implemented this way. Ahmed will know better.

@JorTurFer
Member

Let's wait until @ahmelsayed has time to answer before continuing with the PR.

@stale

stale bot commented Feb 17, 2023

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. Thank you for your contributions.

@stale stale bot added the stale All issues that are marked as stale due to inactivity label Feb 17, 2023
@stale

stale bot commented Feb 24, 2023

This issue has been automatically closed due to inactivity.

@stale stale bot closed this as completed Feb 24, 2023
@JorTurFer JorTurFer reopened this Feb 24, 2023
@stale stale bot removed the stale All issues that are marked as stale due to inactivity label Feb 24, 2023
@JorTurFer
Member

@ahmelsayed, do you have more info?

@jellevankerk

@JorTurFer @moria97 I think this fix has introduced a bug for me. It no longer scales down if there are invisible messages in the queue (which is good). However, it also keeps scaling up on these invisible messages. Was this the expected behavior, and if so, is there some way to disable it?

@JorTurFer
Member

> @JorTurFer @moria97 I think this fix has introduced a bug for me. It no longer scales down if there are invisible messages in the queue (which is good). However, it also keeps scaling up on these invisible messages. Was this the expected behavior, and if so, is there some way to disable it?

Hi,
That's not possible at the moment because there is a single value for both. In any case, could you open a separate issue so we can track and discuss your use case?
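For context on why a visible-only count isn't available: the Queue Storage service exposes only the single combined figure via queue properties, and the only way to inspect which messages are currently visible is to peek, which the service caps at 32 messages per call. A rough sketch of that peek-based alternative, reusing the azqueue import and the queueURL value from the sketch in the issue body above:

```go
// visibleMessageEstimate peeks at the queue and returns the number of
// currently visible messages. Peek never returns more than 32 messages,
// so this is only a lower bound once the queue holds more than 32.
func visibleMessageEstimate(ctx context.Context, queueURL azqueue.QueueURL) (int32, error) {
	peeked, err := queueURL.NewMessagesURL().Peek(ctx, 32)
	if err != nil {
		return 0, err
	}
	return peeked.NumMessages(), nil
}
```

Because of that cap, peeking cannot yield an exact visible-only length on larger queues, which is why the scaler works from the single combined approximate count.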
