remove duplicate exported scrape job name in file name #525
Conversation
Between voxpupuli#435 and voxpupuli#450, the file path for the exported job names was changed to use underscores instead of colons. That's fine, except that, in the end, the actual filename had a superfluous `$scrape_job_name` parameter added. Taking the Apache exporter as an example, I used to have:

```
/etc/prometheus/file_sd_config.d/apache_perdulce.torproject.org:9117.yaml
```

I now have:

```
/etc/prometheus/file_sd_config.d/apache_apache_eugeni.torproject.org_9117.yaml
```

That still works, of course: the old file is removed and the new one is picked up, but it does look silly and needlessly redundant.

I should note that this might mean some clashes between jobs, if the same host/port combination is scraped by multiple jobs on the Prometheus server. I have no idea if that's a valid use case, but I should point out that, if it is, then the `job_name` parameter should just be removed and instead the resource name should forcibly include the job name, instead of enforcing this weird duplication.

In other words, maybe this is also an API-breaking change that should warrant a changelog entry, but I'll let my co-maintainers be the judges of this.
Forcing a name to conform to some pattern rather than using parameters like […]
hum. maybe i'm being overly obsessive with details here. other maintainers have any thoughts? i'm happy to drop this one... i thought the change was accidental.
@treydock maybe we just got this backwards... why don't we pass the job name, host and port directly to scrape_job? in other words, instead of:
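Roughly, the define's current shape (a sketch only; parameter names are approximate, not copied verbatim from the module):

```puppet
# Approximate sketch of the existing prometheus::scrape_job define
# (parameter names assumed, not verbatim):
define prometheus::scrape_job (
  String[1]        $job_name,
  Array[String[1]] $targets,
  Hash             $labels = {},
) {
  # The collected file ends up named "${job_name}_${name}.yaml", but
  # callers already embed the job name in $name -> "apache_apache_...".
}
```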
which gets called as:
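Something along these lines, assuming the exporting side looks roughly like what `prometheus::daemon` does today (variable names are guesses):

```puppet
# Exported resource, with the job name baked into the resource title:
@@prometheus::scrape_job { "${scrape_job_name}_${scrape_host}_${scrape_port}":
  job_name => $scrape_job_name,
  targets  => ["${scrape_host}:${scrape_port}"],
  labels   => $scrape_job_labels,
}
```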
we could have:
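A hypothetical reworked define that takes the host and port directly (again just a sketch, not an implementation):

```puppet
# Hypothetical: pass job name, host and port as real parameters so the
# define can derive a unique filename itself.
define prometheus::scrape_job (
  String[1]    $job_name,
  Stdlib::Host $host,
  Stdlib::Port $port,
  Hash         $labels = {},
) {
  # The define now builds "${job_name}_${host}_${port}.yaml" on its own,
  # with no duplicated job name in the result.
}
```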
and then call it with:
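The call site would then shrink to something like (variable names assumed):

```puppet
# Resource title no longer needs to repeat the job name:
@@prometheus::scrape_job { "${scrape_host}:${scrape_port}":
  job_name => $scrape_job_name,
  host     => $scrape_host,
  port     => $scrape_port,
  labels   => $scrape_job_labels,
}
```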
and that way the scrape_job resource can construct a unique filename without redundancy... it makes scrape_job use a single target, unfortunately, but that's effectively how it's being used now anyways... otherwise i'm happy to just drop this PR as well.
One major issue I see with that proposal is the lack of […]. I also often don't need to set port like […]; I will have a target that is […]. The current parameters of […]
makes sense. i'll just drop this already. :) i assume you have ways of not having clashes when you define multiple hosts in a job? |