Hello, I've tested the following issue with both Spring Boot 2.3.0 and 2.3.1
I am trying to override the default health check behaviour so that only the DB connection is checked, but when I include the configuration parameters below I get a 404 from the probe endpoints locally.
For example, with the following configuration:
```yaml
management:
  server:
    port: 10102
  health:
    probes:
      enabled: true
    defaults:
      enabled: false
    db:
      enabled: true
```
I cannot access http://localhost:10102/actuator/health/liveness or http://localhost:10102/actuator/health/readiness, both return a 404. However, the response from http://localhost:10102/actuator/health lists both liveness and readiness as groups.
If I remove the default override, like this:
```yaml
management:
  server:
    port: 10102
  health:
    probes:
      enabled: true
```
Then both endpoints work fine.
I am not getting any errors or exceptions when running the application.
Comment From: mbhave
It seems to be because the auto-configuration checks `@ConditionalOnEnabledHealthIndicator("livenessState")` and `@ConditionalOnEnabledHealthIndicator("readinessState")` to determine whether to configure the liveness and readiness indicators respectively, but the property we use to enable probes is `management.health.probes.enabled`.
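The mismatch described above can be sketched in plain Java. This is an illustrative model, not Spring Boot's actual implementation: the property names come from the issue, but the resolution logic (a per-indicator flag that falls back to `management.health.defaults.enabled`) is an assumption about how `@ConditionalOnEnabledHealthIndicator` behaves.

```java
import java.util.HashMap;
import java.util.Map;

public class HealthIndicatorConditionDemo {

    // Roughly mirrors @ConditionalOnEnabledHealthIndicator (assumption):
    // use management.health.<name>.enabled if set, otherwise fall back to
    // management.health.defaults.enabled (true when absent).
    public static boolean indicatorEnabled(Map<String, String> props, String name) {
        String specific = props.get("management.health." + name + ".enabled");
        if (specific != null) {
            return Boolean.parseBoolean(specific);
        }
        return Boolean.parseBoolean(
                props.getOrDefault("management.health.defaults.enabled", "true"));
    }

    public static void main(String[] args) {
        // The reporter's configuration from the issue.
        Map<String, String> props = new HashMap<>();
        props.put("management.health.probes.enabled", "true");
        props.put("management.health.defaults.enabled", "false");
        props.put("management.health.db.enabled", "true");

        // db is explicitly enabled, so its condition matches.
        System.out.println("db: " + indicatorEnabled(props, "db"));
        // livenessState has no dedicated property set, so it falls back to
        // defaults.enabled=false -- hence the 404 on /actuator/health/liveness.
        System.out.println("livenessState: " + indicatorEnabled(props, "livenessState"));
    }
}
```

Under this model, `management.health.probes.enabled=true` never reaches the condition at all, which matches the 404 the reporter saw.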
Comment From: wagnerluis1982
Hello, I am interested in contributing and coding a solution, if possible.
That said, I see the label type:bug, but I did not really understand why. Is it because the `management.health.defaults.enabled` value compulsorily overrode the other values?
Comment From: mbhave
@wagnerluis1982 Thanks for offering to help, but I'm not sure what the solution will look like. Unlike other health indicators, where the value used in the condition (https://github.com/spring-projects/spring-boot/blob/255f8197ab35a363283429173de9db5f0c7f5eb0/spring-boot-project/spring-boot-actuator-autoconfigure/src/main/java/org/springframework/boot/actuate/autoconfigure/system/DiskSpaceHealthContributorAutoConfiguration.java#L38) matches the flag used to enable the indicator (`management.health.diskspace.enabled`), the probes one does not. This causes the condition to check the value of `management.health.defaults.enabled` instead. I will mark this for team attention to discuss it with the rest of the team and update the issue based on the result of that discussion.
Comment From: FlinnBurgess
Hi, thanks for taking a look at this. On further investigation it seems that I have been using the configuration incorrectly.
A colleague suggested the following implementation which seems to work as expected.
```yaml
management:
  server:
    port: 10102
  endpoint:
    health:
      group:
        readiness:
          include:
            - readinessState
            - db
      show-details: always
  health:
    probes:
      enabled: true
```
Does this mean that my original post is not actually a bug? I'm afraid I don't understand it well enough to say.
Comment From: philwebb
I think we need to refine the conditions in `AvailabilityProbesAutoConfiguration`.
Comment From: bclozel
I think the confusion comes from several points:
1. The `management.health.probes.enabled` configuration is not aligned with the other `management.health.*.enabled` properties, which are only about enabling specific health indicators. `management.health.probes.enabled` is about enabling the `livenessState` and `readinessState` health indicators and the related `liveness` and `readiness` health groups.
2. `readinessState` and `livenessState` are health indicators, but they don't have their own `management.health.*.enabled` configuration property documented, even though it's available. Also, enabling such a property alone is not enough, since right now the `AvailabilityProbesAutoConfiguration` is guarded by a `ProbesCondition`.
3. The `ProbesCondition` first checks for the `management.health.probes.enabled` property, then the cloud environment (k8s); this is a bit unusual compared to other properties in that same space.
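The ordering described in point 3 can be modelled with a small plain-Java sketch. This is an assumption based on the description above, not the actual `ProbesCondition` source: the explicit property wins, and only in its absence does the cloud-platform check apply.

```java
import java.util.Map;

public class ProbesConditionDemo {

    // Rough model of ProbesCondition's decision order (assumption): the
    // explicit management.health.probes.enabled property takes precedence;
    // otherwise probes are enabled only when running on a known cloud
    // platform such as Kubernetes.
    public static boolean probesEnabled(Map<String, String> props, boolean onKubernetes) {
        String flag = props.get("management.health.probes.enabled");
        if (flag != null) {
            return Boolean.parseBoolean(flag);
        }
        return onKubernetes;
    }

    public static void main(String[] args) {
        // Explicit property: probes are on even outside Kubernetes.
        System.out.println(probesEnabled(Map.of("management.health.probes.enabled", "true"), false));
        // No property: falls back to the cloud-platform check.
        System.out.println(probesEnabled(Map.of(), true));
        System.out.println(probesEnabled(Map.of(), false));
    }
}
```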
Here are a few ideas to improve the situation.
### Move the `management.health.probes.enabled` configuration key

Move the `management.health.probes.enabled` configuration key to `management.endpoint.health.probes.enabled`; this makes it clear that this is not a health indicator and that this is about enabling a specific feature on the health endpoint.

### Add configuration metadata for the probes health checks

Add configuration metadata for `management.health.livenessState.enabled` and `management.health.readinessState.enabled` so that developers can enable those.

Arguably we could also change their names, since we don't usually have case-sensitive names. `liveness-state` is not a good candidate since this would imply a `-` within a key prefix: `management.health.liveness-state.enabled`. `liveness` is not really an option either, or we would get configurations like:
```yaml
management:
  endpoint:
    health:
      group:
        liveness:
          include:
            - liveness
            - db
```
### Improve the `AvailabilityProbesAutoConfiguration`

Right now the whole auto-configuration class is guarded by `@Conditional(ProbesCondition.class)`.
We could ensure that:
* `LivenessStateHealthIndicator` is created if `@ConditionalOnEnabledHealthIndicator("livenessState")` OR `@Conditional(ProbesCondition.class)` matches
* `ReadinessStateHealthIndicator` is created if `@ConditionalOnEnabledHealthIndicator("readinessState")` OR `@Conditional(ProbesCondition.class)` matches
* `AvailabilityProbesHealthEndpointGroupsPostProcessor` is created if `@Conditional(ProbesCondition.class)` matches
Note that it's actually more complex than that, since we can't enable the `livenessState` or `readinessState` health checks by default: this would be a breaking change, as the global health endpoint would then take those checks into account and applications might report failures where they didn't in the past.
Maybe we should use `@ConditionalOnProperty` here instead, with the help of `AnyNestedCondition`?
In any case, we should consider switching those defaults for the next major version (turning on the health checks and the health group).
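As a rough illustration of what the OR-composition proposed above would mean in practice, here is a plain-Java model. In Spring Boot this would be expressed declaratively, e.g. with `@ConditionalOnProperty` nested classes inside an `AnyNestedCondition`; the method names and fallback values here are assumptions for illustration, not the final design.

```java
import java.util.Map;

public class CompositeConditionDemo {

    // Proposed OR-composition (sketch): create the liveness/readiness
    // indicator when its own management.health.<name>.enabled flag matches,
    // OR when probes are enabled as a whole.
    public static boolean indicatorCreated(Map<String, String> props, String name) {
        return healthIndicatorEnabled(props, name) || probesFlagSet(props);
    }

    // Same per-indicator fallback model as @ConditionalOnEnabledHealthIndicator
    // (assumption): specific flag first, then management.health.defaults.enabled.
    public static boolean healthIndicatorEnabled(Map<String, String> props, String name) {
        String specific = props.get("management.health." + name + ".enabled");
        if (specific != null) {
            return Boolean.parseBoolean(specific);
        }
        return Boolean.parseBoolean(
                props.getOrDefault("management.health.defaults.enabled", "true"));
    }

    public static boolean probesFlagSet(Map<String, String> props) {
        return Boolean.parseBoolean(
                props.getOrDefault("management.health.probes.enabled", "false"));
    }

    public static void main(String[] args) {
        // The reporter's original configuration: defaults off, probes on.
        Map<String, String> props = Map.of(
                "management.health.probes.enabled", "true",
                "management.health.defaults.enabled", "false");
        // With the OR-composition, livenessState would now be created,
        // fixing the 404 reported in the issue.
        System.out.println(indicatorCreated(props, "livenessState"));
    }
}
```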