Hi, I have a set of custom metrics using the Spring Actuator/Micrometer framework that emit metrics in a Prometheus-compatible format. I want some of these metrics to be scraped at a 10-second interval, and some that are process/thread heavy to be scraped every 2 minutes. But as it stands we have just one Micrometer registry where all metrics are added, and a single URL that Prometheus is configured to hit, so either all the metrics get scraped at a 10-second interval or all of them every 2 minutes. Is there a way to segregate these metrics so that I can create different jobs in Prometheus with different scrape intervals?

Thanks Debashish

Comment From: wilkinsona

I am not a Prometheus expert, but as far as I can tell, its static config only allows a target to be configured with a host and port. I cannot see how you could configure Prometheus to scrape different portions of the same target at different intervals.

Can you share some information about how you would configure Prometheus and the requests that it would make to your Spring Boot application to scrape its metrics? That would allow us to understand what’s needed on the Spring Boot side. It may be possible already with some manual configuration or it may require some enhancements first.

Comment From: debashish-github

So I was thinking of creating a separate Actuator endpoint using @WebEndpoint.

So, for example, I have metrics A, B, C, D and E. I want metrics A, B and C to be scraped at a 10-second interval, so I create them as normal custom metrics and add them to the Micrometer registry. These are exposed at the default Prometheus endpoint, actuator/prometheus. For metrics D and E I create a custom endpoint, say actuator/myCustomEndPoint, and in its @ReadOperation I create metrics D and E and add them to the @Autowired registry.

I tried this and it works great for counters. The counter only gets updated when I hit the URL actuator/myCustomEndPoint, and when I access actuator/prometheus it returns the updated value. The API I use here is registry.counter("custom.health.metric").increment(Math.random());

When I try this for a gauge using the API registry.gauge("custom1_azure_storage_connection", isTableConnectionAvailable()); to get the gauge value of the connection, the value only gets updated at the Prometheus endpoint the first time I hit actuator/myCustomEndPoint. On subsequent hits of that URL, actuator/prometheus doesn't get the updated value. When I use the other API as below:

Gauge.builder(MetricConstants.METRIC_NAME_TYPE_CUSTOM + "1", this, value -> isTableConnectionAvailable())
        .description("Azure storage connection check")
        .tags(Tags.of(Tag.of(MetricConstants.METRIC_TAG_DATE, sf.format(date))))
        .baseUnit("azure_storage_connection")
        .register(registry);

Even Prometheus scraping calls isTableConnectionAvailable(), I guess because the meter is already registered. So now it gets called from both endpoints.
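I think what's happening is that registry.gauge(name, value) evaluates the value expression once, while the builder form keeps hold of a function that is re-invoked on every scrape. A plain-JDK sketch of the difference as I understand it (the names here are just stand-ins for my real connection check):

```java
import java.util.function.DoubleSupplier;

public class GaugeSnapshotDemo {

    // Stands in for the real Azure connection state (hypothetical)
    static int connectionState = 1;

    static double isTableConnectionAvailable() {
        return connectionState;
    }

    public static void main(String[] args) {
        // registry.gauge(name, value) sees only the result of this one call,
        // so the gauge is stuck with a snapshot:
        double snapshot = isTableConnectionAvailable();

        // Gauge.builder(name, obj, valueFunction) stores the function and
        // re-invokes it on every scrape, so it always sees current state:
        DoubleSupplier live = GaugeSnapshotDemo::isTableConnectionAvailable;

        connectionState = 0; // the connection goes down

        System.out.println(snapshot);           // prints 1.0
        System.out.println(live.getAsDouble()); // prints 0.0
    }
}
```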

I have attached both my files. The working one is CustomHealthEndPoint.java and the non-working one is AzureConnectionMetricCustom.java. I have also attached our prometheus.yml configuration file. I was planning to add another job here with a different endpoint URL and a different scrape interval.

Please let me know if you have any further questions.

Comment From: wilkinsona

Thanks, @debashish-github. Unfortunately, attachments to email replies don’t make it to the issue. Could you please comment directly via the GitHub web UI and attach the files there?

Comment From: debashish-github

Sure, please see all the attachments here.

AzureConnectionMetricCustom.txt CustomHealthEndPoint.txt prometheus.txt

Comment From: debashish-github

Any updates on this?

Comment From: wilkinsona

Thanks, I think I have a better understanding of what you're trying to do now.

I thought it may be possible to achieve what I think you're trying to do with custom endpoints that make use of CollectorRegistry.filteredMetricFamilySamples(Set<String> includedNames). However, this does not work due to the way that Micrometer's MicrometerCollector behaves. I've opened https://github.com/micrometer-metrics/micrometer/issues/1883 to see if that can be addressed.

An alternative approach is to define multiple MeterRegistry beans, each with an appropriate filter, and then to define custom scrape endpoints that use the CollectorRegistry from each of the filtered meter registries. It would look something like this:

@Configuration
static class PrometheusConfiguration {

    private final PrometheusMeterRegistry tenSecondRegistry;

    private final PrometheusMeterRegistry twoMinuteRegistry;

    PrometheusConfiguration(PrometheusConfig config, Clock clock) {
        // Only meters whose names appear in this list reach the 10-second registry
        Collection<String> tenSecondMeters = Arrays.asList("jvm.memory.max");
        this.tenSecondRegistry = new PrometheusMeterRegistry(config, new CollectorRegistry(true), clock);
        this.tenSecondRegistry.config().meterFilter(MeterFilter.denyUnless((id) -> tenSecondMeters.contains(id.getName())));
        // Likewise, the two-minute registry only accepts the meters listed here
        Collection<String> twoMinuteMeters = Arrays.asList("jvm.memory.used");
        this.twoMinuteRegistry = new PrometheusMeterRegistry(config, new CollectorRegistry(true), clock);
        this.twoMinuteRegistry.config().meterFilter(MeterFilter.denyUnless((id) -> twoMinuteMeters.contains(id.getName())));
    }

    @Bean
    PrometheusMeterRegistry tenSecondRegistry() {
        return this.tenSecondRegistry;
    }

    @Bean
    PrometheusMeterRegistry twoMinuteRegistry() {
        return this.twoMinuteRegistry;
    }

    @Bean
    PrometheusScrapeEndpoint tenSecondScrapeEndpoint() {
        return new TenSecondPrometheusScrapeEndpoint(this.tenSecondRegistry.getPrometheusRegistry());
    }

    @Bean
    PrometheusScrapeEndpoint twoMinuteScrapeEndpoint() {
        return new TwoMinutePrometheusScrapeEndpoint(this.twoMinuteRegistry.getPrometheusRegistry());
    }

}

@WebEndpoint(id = "prometheus10sec")
private static final class TenSecondPrometheusScrapeEndpoint extends PrometheusScrapeEndpoint {

    TenSecondPrometheusScrapeEndpoint(CollectorRegistry collectorRegistry) {
        super(collectorRegistry);
    }

}

@WebEndpoint(id = "prometheus2min")
private static final class TwoMinutePrometheusScrapeEndpoint extends PrometheusScrapeEndpoint {

    TwoMinutePrometheusScrapeEndpoint(CollectorRegistry collectorRegistry) {
        super(collectorRegistry);
    }

}

The above results in /actuator/prometheus10sec exposing max memory metrics:

$ http :8080/actuator/prometheus10sec
HTTP/1.1 200 
Connection: keep-alive
Content-Length: 546
Content-Type: text/plain; version=0.0.4;charset=utf-8
Date: Fri, 06 Mar 2020 12:31:05 GMT
Keep-Alive: timeout=60

# HELP jvm_memory_max_bytes The maximum amount of memory in bytes that can be used for memory management
# TYPE jvm_memory_max_bytes gauge
jvm_memory_max_bytes{area="heap",id="PS Survivor Space",} 2.2020096E7
jvm_memory_max_bytes{area="heap",id="PS Old Gen",} 5.726797824E9
jvm_memory_max_bytes{area="heap",id="PS Eden Space",} 2.819096576E9
jvm_memory_max_bytes{area="nonheap",id="Metaspace",} -1.0
jvm_memory_max_bytes{area="nonheap",id="Code Cache",} 2.5165824E8
jvm_memory_max_bytes{area="nonheap",id="Compressed Class Space",} 1.073741824E9

While /actuator/prometheus2min exposes used memory metrics:

$ http :8080/actuator/prometheus2min
HTTP/1.1 200 
Connection: keep-alive
Content-Length: 494
Content-Type: text/plain; version=0.0.4;charset=utf-8
Date: Fri, 06 Mar 2020 12:35:48 GMT
Keep-Alive: timeout=60

# HELP jvm_memory_used_bytes The amount of used memory
# TYPE jvm_memory_used_bytes gauge
jvm_memory_used_bytes{area="heap",id="PS Survivor Space",} 0.0
jvm_memory_used_bytes{area="heap",id="PS Old Gen",} 1.2129368E7
jvm_memory_used_bytes{area="heap",id="PS Eden Space",} 4.3157728E7
jvm_memory_used_bytes{area="nonheap",id="Metaspace",} 3.5945176E7
jvm_memory_used_bytes{area="nonheap",id="Code Cache",} 1.2710848E7
jvm_memory_used_bytes{area="nonheap",id="Compressed Class Space",} 4664176.0
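
On the Prometheus side, the two endpoints could then be scraped as two jobs with different intervals. A rough sketch of the scrape config, assuming the app runs on localhost:8080:

```yaml
scrape_configs:
  - job_name: 'app-10s'
    scrape_interval: 10s
    metrics_path: '/actuator/prometheus10sec'
    static_configs:
      - targets: ['localhost:8080']
  - job_name: 'app-2min'
    scrape_interval: 2m
    metrics_path: '/actuator/prometheus2min'
    static_configs:
      - targets: ['localhost:8080']
```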

If you have any further questions, please follow up on Stack Overflow or Gitter. As mentioned in the guidelines for contributing, we prefer to use GitHub issues only for bugs and enhancements.

Comment From: debashish-github

Thank you so much Andy! This really helps. I will try this solution. In the meantime, I see you have also created a ticket to resolve this issue in the Micrometer CollectorRegistry. Is there any rough guess as to when that will be available?

Regards Debashish

Comment From: wilkinsona

That's a question for the Micrometer team.

Comment From: izeye

@debashish-github For your information, filtered scrape support has been merged for Spring Boot 2.4.0.M1.
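
With that support, a single /actuator/prometheus endpoint can serve differently filtered scrapes via the includedNames query parameter, so separate jobs can use different intervals against the same endpoint. A sketch of one such job, reusing the sample metric name from Andy's example above (adjust names and target to your setup):

```yaml
scrape_configs:
  - job_name: 'app-10s'
    scrape_interval: 10s
    metrics_path: '/actuator/prometheus'
    params:
      includedNames: ['jvm_memory_max_bytes']
    static_configs:
      - targets: ['localhost:8080']
```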