The catch-up flag does not respect the retries limit.
If you have a process with the following criteria:
Catch-Up (Run All) checked
Timeout configured
Process (consistently) takes longer than timeout
The task will continuously restart after the initial event is triggered.
It appears the "ran on schedule" flag isn't cleared unless the job completes successfully. My expectation was that it would attempt to run the task to catch up, but not retry if a failure occurred.
Details
Version 0.8.54
For example, my job below, titled "too long", is configured to run:
at hour :05
with a 1 minute timeout
retries set to None
catch-up selected
a script plug-in that just runs sleep 5m
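The runaway-restart behavior described above can be shown with a toy simulation. This is purely illustrative (the function name, parameters, and loop are my own, not Cronicle code); it models a catch-up job whose run time always exceeds its timeout, so the "ran on schedule" flag never clears and the job restarts on every pass, regardless of the retries setting:

```python
# Toy simulation of the reported behavior -- NOT Cronicle's actual code.
# A catch-up job that always exceeds its timeout is re-run on every
# scheduler pass; the retries setting never comes into play.
def simulate(ticks, run_time=5, timeout=1, retries=0, catch_up=True):
    runs = 0
    pending = True              # the :05 trigger fires once
    for _ in range(ticks):
        if pending:
            runs += 1
            if run_time > timeout:
                # Job aborted by timeout: with catch-up on, the
                # "ran on schedule" flag is never cleared, so the
                # job stays pending (retries is ignored entirely).
                pending = catch_up
            else:
                pending = False  # success clears the flag
    return runs

print(simulate(4))                   # -> 4: one scheduled run + endless reruns
print(simulate(4, catch_up=False))   # -> 1: without catch-up, one failed run
```

With catch-up disabled (or the timeout removed, as in the workaround below), the loop terminates after the first run.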
The run log shows the task starting on schedule and erroring out, but then running again, despite retries being disabled:
The job output confirms it's killed due to maximum run time:
# Job ID: jkk1a270qog
# Event Title: too long
# Hostname: fridgenas
# Date/Time: 2021/01/17 10:11:33 (GMT-5)
[2021/01/17 10:11:33] Sleeping for 5 minutes on a 1 minute max job
Caught SIGTERM, killing child: 1676820
Caught SIGTERM, killing child: 1676820
# Job failed at 2021/01/17 10:12:52 (GMT-5).
# Error: Job Aborted: Exceeded maximum run time (1 minute)
# End of log.
Comments
For my case I will work around this by disabling the time-out for this job (or the catch-up, since the server isn't likely to be down in a way that would cause problems).
I noticed this because I have multiple backup jobs in a category with a category limit of 1 job at a time. The long-running (and incorrectly configured) prune event was timing out, then re-running, seemingly delaying the entire backup schedule.
That seemed to be a side effect of the default job-abort behavior. Aborted jobs go into the catch-up queue (if catch-up is checked) by default. To avoid this, the no_rewind flag should be set to 1, which only happens if you manually abort the job. I guess it's neither a bug nor a feature, just a weird scenario. I'd agree a timeout abort should come with no_rewind=1 by default.
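The no_rewind interaction described in the comment above can be sketched as follows (a hypothetical helper with made-up names, not Cronicle's actual internals): an aborted catch-up job is re-queued unless no_rewind is set, and only a manual abort sets it:

```python
# Illustrative sketch of the no_rewind behavior -- hypothetical names,
# not Cronicle's actual implementation.
def on_job_abort(job, catch_up_queue, manual=False):
    if manual:
        job["no_rewind"] = 1          # manual abort: do not re-queue
    if job.get("catch_up") and not job.get("no_rewind"):
        catch_up_queue.append(job)    # timeout abort: job will run again

queue = []
job = {"id": "jkk1a270qog", "catch_up": True}
on_job_abort(job, queue)              # aborted by timeout
print(len(queue))                     # -> 1: job is back in the catch-up queue

queue2 = []
on_job_abort({"id": "jkk1a270qog", "catch_up": True}, queue2, manual=True)
print(len(queue2))                    # -> 0: manual abort sets no_rewind
```

Under this model, making a timeout abort also pass manual-style no_rewind=1 (as the comment suggests) would stop the re-run loop.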