
[Reporting] Reporting Job Params should not include timeout and max attempts. #76411

Closed
tsullivan opened this issue Sep 1, 2020 · 5 comments · Fixed by #64853
Labels
impact:low (Addressing this issue will have a low level of impact on the quality/strength of our product.)
loe:small (Small Level of Effort)

Comments

@tsullivan
Member

tsullivan commented Sep 1, 2020

Kibana version: 5.0+

Steps to reproduce:

  1. Have two Kibana instances, KibanaA and KibanaB.
  2. KibanaA settings:
    • set xpack.reporting.queue.timeout to a low value
    • set xpack.reporting.queue.pollEnabled to false
  3. KibanaB settings:
    • set xpack.reporting.queue.timeout to a high value
    • leave xpack.reporting.queue.pollEnabled at its default
  4. From the KibanaA UI, queue a reporting job that is expected to take a long time, but not longer than KibanaB's queue timeout.

Expected behavior:
The report will complete, because it runs on KibanaB and that instance has a high queue timeout set.

Actual behavior:
The report fails, because KibanaA wrote its own timeout into the job parameters at creation time, and that value is too low.
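A minimal sketch of how this failure can happen, using hypothetical names rather than the actual Reporting code: the creating instance serializes its own queue timeout into the job document, and the executing instance enforces that stored value instead of consulting its own config.

```ts
// Hypothetical simplification of the failure; names are illustrative, not the real Reporting code.

interface QueueConfig {
  timeout: number;
  pollEnabled: boolean;
}

interface ReportingJob {
  jobtype: string;
  payload: Record<string, unknown>;
  timeout: number;      // frozen from the *creating* instance's config
  max_attempts: number; // same problem
}

// KibanaA (low xpack.reporting.queue.timeout, pollEnabled: false) creates the job:
function createJob(configA: { queue: QueueConfig }): ReportingJob {
  return {
    jobtype: 'printable_pdf',
    payload: { /* search, layout, ... */ },
    timeout: configA.queue.timeout, // the low value travels with the job
    max_attempts: 1,
  };
}

// KibanaB (high queue timeout, pollEnabled left at default) claims and runs the job,
// but enforces the timeout stored in the job document rather than its own config:
function executeJob(job: ReportingJob, configB: { queue: QueueConfig }): void {
  const effectiveTimeout = job.timeout; // BUG: configB.queue.timeout is never consulted
  console.log(`aborting report after ${effectiveTimeout}ms`);
}
```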

Any additional context:
The same problem applies to the max attempts and browser settings in Reporting. These values should be read from the config at execution time, not set at job creation time.

@elasticmachine
Contributor

Pinging @elastic/kibana-reporting-services (Team:Reporting Services)

@elasticmachine
Contributor

Pinging @elastic/kibana-app-services (Team:AppServices)

@tsullivan
Member Author

Removing these fields makes it much more complicated for the Management UI to show the values for these settings. I suggest this issue be closed.

@tsullivan
Member Author

tsullivan commented Jul 8, 2021

> Removing these fields makes it much more complicated for the Management UI to show the values for these settings. I suggest this issue be closed.

This shouldn't be a blocking issue - the Management UI can simply show blank values for jobs that haven't been claimed yet and therefore haven't had these fields populated.

This can lead to confusion when investigating problems such as a job failing before the configured timeout. Runtime-specific fields like timeout and max_attempts should not be saved on the job at creation time, because that causes confusion in two regards (see the sketch after this list):

  • The job can end up running on a different instance that has different values for those settings.
  • Even with a single instance, changing the settings does not affect jobs that are already pending.
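A minimal sketch of the behavior argued for above, again with hypothetical names rather than the actual Reporting code: the instance that claims the job resolves timeout and max attempts from its own live config, which addresses both points above.

```ts
// Hypothetical sketch of the proposed behavior; not the actual Reporting implementation.

interface ReportingConfig {
  queue: { timeout: number };
  capture: { maxAttempts: number };
}

interface ReportingJob {
  jobtype: string;
  payload: Record<string, unknown>;
  // no timeout / max_attempts persisted at creation time
}

function executeJob(job: ReportingJob, config: ReportingConfig): void {
  // Resolved when the job is claimed, from the executing instance's config,
  // so a settings change also applies to jobs that are still pending.
  const timeout = config.queue.timeout;
  const maxAttempts = config.capture.maxAttempts;
  console.log(`running ${job.jobtype} with timeout=${timeout}, maxAttempts=${maxAttempts}`);
}
```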

@tsullivan tsullivan reopened this Jul 8, 2021
@exalate-issue-sync exalate-issue-sync bot added impact:low Addressing this issue will have a low level of impact on the quality/strength of our product. loe:small Small Level of Effort labels Jul 8, 2021
@tsullivan
Member Author

#105508 cleaned this up as part of an internal types cleanup. The code now uses the actual timeout and max attempts values from the config at execution time, and includes comments noting that timeout and max_attempts are added to the job payload for display purposes only.
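A rough sketch of that pattern, with hypothetical names rather than the code from #105508: the values written into the payload are only echoed back to the Management UI, while execution re-reads the live config.

```ts
// Hypothetical sketch of the "display purposes only" pattern described above.

interface ReportingConfig {
  queue: { timeout: number };
  capture: { maxAttempts: number };
}

function buildJobPayload(jobParams: Record<string, unknown>, config: ReportingConfig) {
  return {
    ...jobParams,
    // NOTE: stored only so the Management UI has something to display; execution
    // ignores these and reads timeout / max_attempts from the executing
    // instance's config at runtime.
    timeout: config.queue.timeout,
    max_attempts: config.capture.maxAttempts,
  };
}
```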

@exalate-issue-sync exalate-issue-sync bot added loe:medium Medium Level of Effort and removed loe:small Small Level of Effort labels Jul 22, 2021
@exalate-issue-sync exalate-issue-sync bot added loe:small Small Level of Effort and removed loe:medium Medium Level of Effort labels Aug 12, 2022