

Spring Boot Actuator 监控

官网链接: Spring Boot Reference Documentation

13 Production-ready Features

Spring Boot包括许多附加功能,帮助您在将应用程序推向生产时监视和管理应用程序。您可以选择使用HTTP端点或JMX来管理和监视应用程序。 审计,健康和指标收集 也可以自动应用于应用程序。

13.1 启用生产就绪功能 Enabling Production-ready Features

spring-boot-actuator 模块提供了 Spring Boot 所有的生产就绪功能。启用这些功能的推荐方式是添加 spring-boot-starter-actuator Starter 依赖。

An actuator is a manufacturing term that refers to a mechanical device for moving or controlling something. Actuators can generate a large amount of motion from a small change.

actuator(执行器)是一个制造业术语,指用于移动或控制某物的机械装置。执行器可以通过微小的变化产生大量的运动。

要添加actuator到基于Maven的项目,请添加以下 Starter 依赖:

<dependencies>
  <dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
  </dependency>
</dependencies>

对于Gradle,使用以下声明:

dependencies {
    implementation 'org.springframework.boot:spring-boot-starter-actuator'
}

13.2 端点 Endpoints

actuator端点允许您监视应用程序并与之交互。Spring Boot包含许多内置端点,并允许您添加自己的端点。例如,health 端点提供基本的应用程序健康信息。

您可以启用或禁用每个端点,并通过HTTP或JMX暴露它们(使它们可以远程访问)。只有当端点既被启用又被暴露时,它才被视为可用。内置端点仅在可用时才会被自动配置。大多数应用程序选择通过HTTP公开端点,此时端点的ID加上 /actuator 前缀映射为URL。例如,默认情况下,health 端点映射到 /actuator/health。

The following technology-agnostic endpoints are available:

ID

Description

auditevents

Exposes audit events information for the current application. Requires an AuditEventRepository bean.

beans

Displays a complete list of all the Spring beans in your application.

caches

Exposes available caches.

conditions

Shows the conditions that were evaluated on configuration and auto-configuration classes and the reasons why they did or did not match.

configprops

Displays a collated list of all @ConfigurationProperties.

env

Exposes properties from Spring’s ConfigurableEnvironment.

flyway

Shows any Flyway database migrations that have been applied. Requires one or more Flyway beans.

health

Shows application health information.

httptrace

Displays HTTP trace information (by default, the last 100 HTTP request-response exchanges). Requires an HttpTraceRepository bean.

info

Displays arbitrary application info.

integrationgraph

Shows the Spring Integration graph. Requires a dependency on spring-integration-core.

loggers

Shows and modifies the configuration of loggers in the application.

liquibase

Shows any Liquibase database migrations that have been applied. Requires one or more Liquibase beans.

metrics

Shows “metrics” information for the current application.

mappings

Displays a collated list of all @RequestMapping paths.

quartz

Shows information about Quartz Scheduler jobs.

scheduledtasks

Displays the scheduled tasks in your application.

sessions

Allows retrieval and deletion of user sessions from a Spring Session-backed session store. Requires a servlet-based web application that uses Spring Session.

shutdown

Lets the application be gracefully shutdown. Disabled by default.

startup

Shows the startup steps data collected by the ApplicationStartup. Requires the SpringApplication to be configured with a BufferingApplicationStartup.

threaddump

Performs a thread dump.

如果你的应用是一个web应用(Spring MVC, Spring WebFlux, 或 Jersey),你还可以使用下面额外的端点:

ID

Description

heapdump

Returns a heap dump file. On a HotSpot JVM, an HPROF-format file is returned. On an OpenJ9 JVM, a PHD-format file is returned.

jolokia

Exposes JMX beans over HTTP when Jolokia is on the classpath (not available for WebFlux). Requires a dependency on jolokia-core.

logfile

Returns the contents of the logfile (if the logging.file.name or the logging.file.path property has been set). Supports the use of the HTTP Range header to retrieve part of the log file’s content.

prometheus

Exposes metrics in a format that can be scraped by a Prometheus server. Requires a dependency on micrometer-registry-prometheus.

13.2.1 开启端点

默认情况下,除 shutdown 端点外的所有端点都已启用。要配置某个端点的启用,请使用其 management.endpoint.<id>.enabled 属性。以下示例启用 shutdown 端点(默认关闭):

management.endpoint.shutdown.enabled=true

如果您希望端点的启用是选择性加入(opt-in)而不是默认开启,请将 management.endpoints.enabled-by-default 属性设置为 false,并使用各端点单独的 enabled 属性重新开启。以下示例启用 info 端点并禁用其他所有端点:

management.endpoints.enabled-by-default=false
management.endpoint.info.enabled=true

Disabled endpoints are removed entirely from the application context. If you want to change only the technologies over which an endpoint is exposed, use the include and exclude properties instead.

禁用的端点将从应用程序上下文中完全删除。如果只想更改端点通过哪些技术暴露,请改用 include 和 exclude 属性。

13.2.2. 暴露端点

由于端点可能包含敏感信息,因此应仔细考虑何时公开它们。下表列出了默认暴露的内置端点:

ID: JMX / Web

auditevents: Yes / No
beans: Yes / No
caches: Yes / No
conditions: Yes / No
configprops: Yes / No
env: Yes / No
flyway: Yes / No
health: Yes / Yes
heapdump: N/A / No
httptrace: Yes / No
info: Yes / No
integrationgraph: Yes / No
jolokia: N/A / No
logfile: N/A / No
loggers: Yes / No
liquibase: Yes / No
metrics: Yes / No
mappings: Yes / No
prometheus: N/A / No
quartz: Yes / No
scheduledtasks: Yes / No
sessions: Yes / No
shutdown: Yes / No
startup: Yes / No
threaddump: Yes / No

要更改公开的端点,请使用以下特定的包含和排除属性:

Property / Default:

management.endpoints.jmx.exposure.exclude (默认: 空)

management.endpoints.jmx.exposure.include (默认: *)

management.endpoints.web.exposure.exclude (默认: 空)

management.endpoints.web.exposure.include (默认: health)

include 属性列出要公开的端点的ID。exclude 属性列出不应公开的端点的ID。exclude 属性优先于 include 属性。您可以使用端点ID列表来配置 include 和 exclude 属性。

例如,要停止在JMX上公开所有端点,仅公开health和info端点,请使用以下属性:

management.endpoints.jmx.exposure.include=health,info
* can be used to select all endpoints. For example, to expose everything over HTTP except the env and beans endpoints, use the following properties:

* 可用于选择所有端点。例如,要通过HTTP暴露除 env 和 beans 之外的所有端点,可以使用以下配置:

management.endpoints.web.exposure.include=*
management.endpoints.web.exposure.exclude=env,beans
* has a special meaning in YAML, so be sure to add quotation marks if you want to include (or exclude) all endpoints.

*在YAML中具有特殊含义,因此如果要包含(或排除)所有端点,请务必添加引号。

If your application is exposed publicly, we strongly recommend that you also secure your endpoints.

如果您的应用程序对外公开,我们强烈建议您同时保护您的端点。

If you want to implement your own strategy for when endpoints are exposed, you can register an EndpointFilter bean.

如果您想实现自己的端点公开策略,可以注册EndpointFilter bean。

13.2.3. Security

出于安全目的,默认情况下只有 /health 端点通过HTTP公开。您可以使用 management.endpoints.web.exposure.include 属性来配置要公开的端点。

在设置 management.endpoints.web.exposure.include 之前,请确保暴露的 actuator 端点不包含敏感信息,或者已通过防火墙或 Spring Security 之类的机制加以保护。

如果 Spring Security 位于类路径上,且不存在其他 WebSecurityConfigurerAdapter 或 SecurityFilterChain bean,则除 /health 之外的所有 actuator 端点都由 Spring Boot 自动配置保护。如果您定义了自定义的 WebSecurityConfigurerAdapter 或 SecurityFilterChain bean,Spring Boot 自动配置将不再生效,由您自己完全控制 actuator 的访问规则。

如果您希望为HTTP端点配置自定义安全性(例如,仅允许具有特定角色的用户访问它们),Spring Boot 提供了一些方便的 RequestMatcher 对象,可以与 Spring Security 结合使用。

典型的Spring Security配置可如下示例:

import org.springframework.boot.actuate.autoconfigure.security.servlet.EndpointRequest;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.web.SecurityFilterChain;

import static org.springframework.security.config.Customizer.withDefaults;

@Configuration(proxyBeanMethods = false)
public class MySecurityConfiguration {

    @Bean
    public SecurityFilterChain securityFilterChain(HttpSecurity http) throws Exception {
        http.requestMatcher(EndpointRequest.toAnyEndpoint());
        http.authorizeRequests((requests) -> requests.anyRequest().hasRole("ENDPOINT_ADMIN"));
        http.httpBasic(withDefaults());
        return http.build();
    }

}

The preceding example uses EndpointRequest.toAnyEndpoint() to match a request to any endpoint and then ensures that all have the ENDPOINT_ADMIN role. Several other matcher methods are also available on EndpointRequest. See the API documentation (HTML or PDF) for details.

前面的示例使用 EndpointRequest.toAnyEndpoint() 匹配到任意端点的请求,然后确保所有请求都具有 ENDPOINT_ADMIN 角色。EndpointRequest 上还提供了其他几种匹配器方法。有关详细信息,请参阅API文档(HTML或PDF)。

If you deploy applications behind a firewall, you may prefer that all your actuator endpoints can be accessed without requiring authentication. You can do so by changing the management.endpoints.web.exposure.include property, as follows:

如果您在防火墙后面部署应用程序,您可能希望无需身份验证即可访问所有 actuator 端点。您可以通过更改 management.endpoints.web.exposure.include 属性来做到这一点,如下所示:

management.endpoints.web.exposure.include=*

此外,如果存在Spring Security,则需要添加自定义安全配置,以允许对端点进行未经身份验证的访问,如下例所示:

import org.springframework.boot.actuate.autoconfigure.security.servlet.EndpointRequest;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.web.SecurityFilterChain;

@Configuration(proxyBeanMethods = false)
public class MySecurityConfiguration {

    @Bean
    public SecurityFilterChain securityFilterChain(HttpSecurity http) throws Exception {
        http.requestMatcher(EndpointRequest.toAnyEndpoint());
        http.authorizeRequests((requests) -> requests.anyRequest().permitAll());
        return http.build();
    }

}

在上述两个示例中,配置仅适用于 actuator 端点。由于只要存在任何 SecurityFilterChain bean,Spring Boot 的安全自动配置就会完全失效,因此您需要再配置一个 SecurityFilterChain bean,为应用程序的其余部分定义安全规则。

跨站点请求伪造保护 Cross Site Request Forgery Protection

由于Spring Boot依赖于Spring Security的默认值,因此CSRF保护默认打开。这意味着在使用默认安全配置时,需要 POST(shutdown 和 loggers 端点)、PUT 或 DELETE 请求的 actuator 端点会收到 403(禁止访问)错误。

We recommend disabling CSRF protection completely only if you are creating a service that is used by non-browser clients.

我们建议仅当您创建的服务供非浏览器客户端使用时,才完全禁用CSRF保护。

13.2.4. 配置端点 Configuring Endpoints

Endpoints automatically cache responses to read operations that do not take any parameters. To configure the amount of time for which an endpoint caches a response, use its cache.time-to-live property. The following example sets the time-to-live of the beans endpoint’s cache to 10 seconds:

端点会自动缓存对不带任何参数的读取操作的响应。要配置端点缓存响应的时间,请使用其 cache.time-to-live 属性。以下示例将 beans 端点缓存的生存时间设置为10秒:

management.endpoint.beans.cache.time-to-live=10s

The management.endpoint.<name> prefix uniquely identifies the endpoint that is being configured.

management.endpoint.<name> 前缀唯一标识正在配置的端点。

13.2.5. Actuator Web端点的超媒体 Hypermedia for Actuator Web Endpoints

系统会添加一个 "discovery page",其中包含指向所有端点的链接。默认情况下,"discovery page" 在 /actuator 上可用。

要禁用 "discovery page",请将以下属性添加到应用程序配置中:

management.endpoints.web.discovery.enabled=false

配置自定义管理上下文路径后,"discovery page" 会自动从 /actuator 移动到管理上下文的根目录。例如,如果管理上下文路径为 /management,则可以从 /management 获得 "discovery page"。当管理上下文路径设置为 / 时,将禁用 "discovery page",以防止与其他映射发生冲突。

13.2.6. CORS Support

Cross-origin resource sharing (CORS) 是W3C规范,允许您以灵活的方式指定授权哪些类型的跨域请求。如果您使用 Spring MVC 或 Spring WebFlux,则可以配置Actuator的web端点以支持此类场景。

CORS 在默认情况下是禁用的,只有在设置了 management.endpoints.web.cors.allowed-origins 属性后才启用。以下配置允许来自 example.com 域的 GET 和 POST 调用:

management.endpoints.web.cors.allowed-origins=https://example.com
management.endpoints.web.cors.allowed-methods=GET,POST

13.2.7 自定义端点 Implementing Custom Endpoints

如果您添加了一个用 @Endpoint 注解的 @Bean,则其中任何用 @ReadOperation、@WriteOperation 或 @DeleteOperation 注解的方法都会自动通过JMX公开;在web应用程序中,也会通过HTTP公开。端点可以通过 Jersey、Spring MVC 或 Spring WebFlux 以HTTP方式暴露。如果Jersey和Spring MVC都可用,则使用Spring MVC。

以下示例公开了一个返回自定义对象的读取操作:
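下面给出一个最小示意(并非本文档原有代码,端点 id "custom" 与 CustomData 类均为示例假设),演示通过 @ReadOperation 返回自定义对象:

import org.springframework.boot.actuate.endpoint.annotation.Endpoint;
import org.springframework.boot.actuate.endpoint.annotation.ReadOperation;
import org.springframework.stereotype.Component;

@Component
@Endpoint(id = "custom")
public class CustomEndpoint {

    // 读取操作: 若通过 web 暴露,则映射为 GET /actuator/custom
    @ReadOperation
    public CustomData getData() {
        return new CustomData("test", 5);
    }

    // 示例用的自定义返回对象,字段会被序列化为 JSON
    public static class CustomData {

        private final String name;

        private final int counter;

        public CustomData(String name, int counter) {
            this.name = name;
            this.counter = counter;
        }

        public String getName() {
            return this.name;
        }

        public int getCounter() {
            return this.counter;
        }

    }

}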

您还可以使用 @JmxEndpoint 或 @WebEndpoint 编写特定技术的端点。这些端点仅通过各自的技术暴露。例如,@WebEndpoint 仅通过HTTP公开,而不通过JMX公开。

您可以使用 @EndpointWebExtension 和 @EndpointJmxExtension 编写特定于技术的扩展。这些注解允许您提供特定于技术的操作来扩充现有端点。

最后,如果您需要访问 web框架的功能,您可以实现servlet或Spring @Controller和@RestController 端点,代价是它们在JMX上不可用,或者在使用不同的web框架时不可用。

Receiving Input

端点上的操作通过其参数接收输入。当通过web公开时,这些参数的值取自URL的查询参数和JSON请求体。当通过JMX公开时,参数映射到MBean操作的参数。参数默认是必需的。通过使用 @javax.annotation.Nullable 或 @org.springframework.lang.Nullable 对参数进行注解,可以使其成为可选参数。
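下面是一个简化示意(端点 id "greeting" 与参数语义均为假设),演示用 @Nullable 将参数变为可选:

import org.springframework.boot.actuate.endpoint.annotation.Endpoint;
import org.springframework.boot.actuate.endpoint.annotation.ReadOperation;
import org.springframework.lang.Nullable;
import org.springframework.stereotype.Component;

@Component
@Endpoint(id = "greeting")
public class GreetingEndpoint {

    // name 参数用 @Nullable 标注,调用时可以省略;省略时其值为 null
    @ReadOperation
    public String greet(@Nullable String name) {
        return (name != null) ? "Hello " + name : "Hello";
    }

}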

您可以将JSON请求体中的每个根属性映射到端点的一个参数。考虑以下JSON请求体:

{
  "name": "test",
  "counter": 42
}

您可以使用它来调用一个写操作,该操作使用 String name 和 int counter 两个参数,如下例所示:

@WriteOperation
public void updateData(String name, int counter) {
    // injects "test" and 42
}

因为端点是技术无关的,所以只能在方法签名中指定简单类型。特别是,不支持声明一个带有 name 和 counter 属性的自定义类型作为单个参数。

为了让输入映射到操作方法的参数,实现端点的Java代码应使用 -parameters 编译,Kotlin代码应使用 -java-parameters 编译。如果您使用Spring Boot的Gradle插件,或者使用Maven和 spring-boot-starter-parent,这将自动生效。

Input Type Conversion

如果需要,传递给端点操作方法的参数会自动转换为所需的类型。在调用操作方法之前,通过JMX或HTTP接收的输入参数将通过使用ApplicationConversionService的实例以及使用@EndpointConverter限定的任何Converter或GenericConverter bean转换为所需类型。

Custom Web Endpoints

用 @Endpoint、@WebEndpoint 或 @EndpointWebExtension 注解的操作会通过 Jersey、Spring MVC 或 Spring WebFlux 自动以HTTP方式暴露。如果Jersey和Spring MVC都可用,则使用Spring MVC。

Web Endpoint Request Predicates

系统会为 web 暴露端点上的每个操作自动生成请求断言(request predicate)。

Path

The path of the predicate is determined by the ID of the endpoint and the base path of the web-exposed endpoints. The default base path is /actuator. For example, an endpoint with an ID of sessions uses /actuator/sessions as its path in the predicate.

访问路径由端点的ID和web公开端点的基本路径确定。默认基本路径为/actuator。例如,端点 ID为 sessions 使用/actuator/sessions作为其路径。

通过使用 @Selector 注解操作方法的一个或多个参数,可以进一步自定义路径。这样的参数会作为路径变量添加到访问路径中。调用端点操作时,该变量的值将传递给操作方法。如果要捕获所有剩余的路径元素,可以在最后一个参数上添加 @Selector(Match=ALL_REMAINING),并使其类型与 String[] 转换兼容。
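下面给出一个使用 @Selector 的示意(端点 id "features" 及其内部逻辑均为假设,仅用于说明路径变量的映射方式):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.springframework.boot.actuate.endpoint.annotation.Endpoint;
import org.springframework.boot.actuate.endpoint.annotation.ReadOperation;
import org.springframework.boot.actuate.endpoint.annotation.Selector;
import org.springframework.boot.actuate.endpoint.annotation.WriteOperation;
import org.springframework.stereotype.Component;

@Component
@Endpoint(id = "features")
public class FeaturesEndpoint {

    private final Map<String, Boolean> features = new ConcurrentHashMap<>();

    // GET /actuator/features : 读取全部特性开关
    @ReadOperation
    public Map<String, Boolean> features() {
        return this.features;
    }

    // GET /actuator/features/{name} : @Selector 参数成为路径变量
    @ReadOperation
    public Boolean feature(@Selector String name) {
        return this.features.get(name);
    }

    // POST /actuator/features/{name} : 修改某个特性开关
    @WriteOperation
    public void configureFeature(@Selector String name, Boolean enabled) {
        this.features.put(name, enabled);
    }

}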

HTTP method

访问的HTTP方法由操作类型决定,如下表所示:

Operation: HTTP method

@ReadOperation: GET

@WriteOperation: POST

@DeleteOperation: DELETE

Consumes

For a @WriteOperation (HTTP POST) that uses the request body, the consumes clause of the predicate is application/vnd.spring-boot.actuator.v2+json, application/json. For all other operations, the consumes clause is empty.

对于使用请求体的 @WriteOperation(HTTP POST),断言的 consumes 子句是 application/vnd.spring-boot.actuator.v2+json, application/json。对于所有其他操作,consumes 子句为空。

Produces

The produces clause of the predicate can be determined by the produces attribute of the @DeleteOperation, @ReadOperation, and @WriteOperation annotations. The attribute is optional. If it is not used, the produces clause is determined automatically.

断言的 produces 子句可以由 @DeleteOperation、@ReadOperation 和 @WriteOperation 注解的 produces 属性确定。该属性是可选的。如果不使用,将自动确定 produces 子句。

If the operation method returns void or Void, the produces clause is empty. If the operation method returns a org.springframework.core.io.Resource, the produces clause is application/octet-stream. For all other operations, the produces clause is application/vnd.spring-boot.actuator.v2+json, application/json.

如果操作方法返回 void 或 Void,则 produces 子句为空。如果操作方法返回 org.springframework.core.io.Resource,则 produces 子句是 application/octet-stream。对于所有其他操作,produces 子句是 application/vnd.spring-boot.actuator.v2+json, application/json。

Web Endpoint Response Status

端点操作的默认响应状态取决于操作类型(read, write, or delete)以及操作返回的内容(如果有)。

如果 @ReadOperation 返回一个值,则响应状态为200(OK)。如果未返回值,则响应状态为404(未找到)。

如果 @WriteOperation 或 @DeleteOperation 返回一个值,则响应状态为200(OK)。如果未返回值,则响应状态为204(无内容)。

如果在缺少必需参数的情况下调用操作,或者某个参数无法转换为所需类型,则不会调用操作方法,响应状态为400(错误请求)。

Web Endpoint Range Requests

您可以使用HTTP范围请求(range request)来请求HTTP资源的一部分。使用 Spring MVC 或 Spring WebFlux 时,返回 org.springframework.core.io.Resource 的操作自动支持范围请求。

使用Jersey时不支持范围请求。

Web Endpoint Security

web端点或特定于web的端点扩展上的操作可以接收当前的 java.security.Principal 或 org.springframework.boot.actuate.endpoint.SecurityContext 作为方法参数。前者通常与 @Nullable 结合使用,为经过身份验证和未经身份验证的用户提供不同的行为。后者通常用于通过其 isUserInRole(String) 方法执行授权检查。

Servlet Endpoints

通过实现一个用 @ServletEndpoint 注解、同时实现 Supplier<EndpointServlet> 的类,可以将 servlet 作为端点公开。Servlet 端点提供了与Servlet容器的深度集成,但牺牲了可移植性。它们旨在用于将现有servlet公开为端点。对于新端点,应尽可能首选 @Endpoint 和 @WebEndpoint 注解。
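下面是一个假设性的草图(端点 id "legacy" 与内嵌的 Servlet 仅作演示),展示 @ServletEndpoint 配合 Supplier<EndpointServlet> 的基本写法:

import java.io.IOException;
import java.util.function.Supplier;

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.springframework.boot.actuate.endpoint.web.EndpointServlet;
import org.springframework.boot.actuate.endpoint.web.annotation.ServletEndpoint;
import org.springframework.stereotype.Component;

@Component
@ServletEndpoint(id = "legacy")
public class LegacyServletEndpoint implements Supplier<EndpointServlet> {

    @Override
    public EndpointServlet get() {
        // 将一个 Servlet 包装为 actuator 端点,路径为 /actuator/legacy
        return new EndpointServlet(new HttpServlet() {
            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
                resp.getWriter().print("legacy servlet response");
            }
        });
    }

}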

Controller Endpoints

You can use @ControllerEndpoint and @RestControllerEndpoint to implement an endpoint that is exposed only by Spring MVC or Spring WebFlux. Methods are mapped by using the standard annotations for Spring MVC and Spring WebFlux, such as @RequestMapping and @GetMapping, with the endpoint’s ID being used as a prefix for the path. Controller endpoints provide deeper integration with Spring’s web frameworks but at the expense of portability. The @Endpoint and @WebEndpoint annotations should be preferred whenever possible.

你可以使用 @ControllerEndpoint 和 @RestControllerEndpoint 来实现仅由 Spring MVC 或 Spring WebFlux 公开的端点。方法通过使用 Spring MVC 和 Spring WebFlux 的标准注解(如 @RequestMapping 和 @GetMapping)进行映射,并将端点的ID用作路径的前缀。Controller 端点提供了与Spring web框架的深度集成,但牺牲了可移植性。应尽可能首选 @Endpoint 和 @WebEndpoint 注解。
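下面给出一个示意(端点 id "web" 与路径 /hello 为假设),演示 @RestControllerEndpoint 如何以端点ID作为路径前缀:

import org.springframework.boot.actuate.endpoint.web.annotation.RestControllerEndpoint;
import org.springframework.stereotype.Component;
import org.springframework.web.bind.annotation.GetMapping;

@Component
@RestControllerEndpoint(id = "web")
public class WebControllerEndpoint {

    // 端点 id 作为前缀,最终映射为 GET /actuator/web/hello
    @GetMapping("/hello")
    public String hello() {
        return "hello from controller endpoint";
    }

}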

13.2.8. 健康信息 Health Information

您可以使用健康信息来检查正在运行的应用程序的状态。当生产系统发生故障时,监控软件经常使用它来发出告警。health 端点公开的信息取决于 management.endpoint.health.show-details 和 management.endpoint.health.show-components 属性,它们可以配置为以下值之一:

Name

Description

never

Details are never shown.

when-authorized

Details are shown only to authorized users. Authorized roles can be configured by using management.endpoint.health.roles.

always

Details are shown to all users.

如果您已经保护了应用程序并希望使用 always,则安全配置必须允许经过身份验证和未经身份验证的用户都能访问健康端点。

健康信息是从 HealthContributorRegistry 的内容中收集的(默认情况下是 ApplicationContext 中定义的所有 HealthContributor 实例)。Spring Boot包含许多自动配置的 HealthContributor,您也可以编写自己的。

HealthContributor 可以是 HealthIndicator 或 CompositeHealthContributor。HealthIndicator 提供实际的健康信息,包括状态(Status)。CompositeHealthContributor 提供其他 HealthContributor 的组合。总之,这些贡献者形成一个树结构来表示整个系统的健康状况。

默认情况下,最终的系统健康状况由 StatusAggregator 派生,它根据一个有序的状态列表对每个 HealthIndicator 的状态进行排序。排序列表中的第一个状态用作总体健康状况。如果没有 HealthIndicator 返回 StatusAggregator 已知的状态,则使用 UNKNOWN 状态。

You can use the HealthContributorRegistry to register and unregister health indicators at runtime.

您可以使用HealthContributorRegistry在运行时注册和注销健康指标。
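下面是一个运行时注册的简单示意(组件名与 "onDemand" 条目名均为假设):

import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthContributorRegistry;
import org.springframework.boot.actuate.health.HealthIndicator;
import org.springframework.stereotype.Component;

@Component
public class RuntimeHealthRegistration {

    private final HealthContributorRegistry registry;

    public RuntimeHealthRegistration(HealthContributorRegistry registry) {
        this.registry = registry;
        // 运行时注册一个名为 "onDemand" 的健康指标
        this.registry.registerContributor("onDemand", (HealthIndicator) () -> Health.up().build());
    }

    public void remove() {
        // 运行时注销同名指标
        this.registry.unregisterContributor("onDemand");
    }

}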

Auto-configured HealthIndicators

如果合适,Spring Boot会自动配置下表中列出的 HealthIndicator。您还可以通过配置 management.health.<key>.enabled 属性来启用或禁用选定的指标,其中 key 列在下表中:

Key

Name

Description

cassandra

CassandraDriverHealthIndicator

Checks that a Cassandra database is up.

couchbase

CouchbaseHealthIndicator

Checks that a Couchbase cluster is up.

db

DataSourceHealthIndicator

Checks that a connection to DataSource can be obtained.

diskspace

DiskSpaceHealthIndicator

Checks for low disk space.

elasticsearch

ElasticsearchRestHealthIndicator

Checks that an Elasticsearch cluster is up.

hazelcast

HazelcastHealthIndicator

Checks that a Hazelcast server is up.

influxdb

InfluxDbHealthIndicator

Checks that an InfluxDB server is up.

jms

JmsHealthIndicator

Checks that a JMS broker is up.

ldap

LdapHealthIndicator

Checks that an LDAP server is up.

mail

MailHealthIndicator

Checks that a mail server is up.

mongo

MongoHealthIndicator

Checks that a Mongo database is up.

neo4j

Neo4jHealthIndicator

Checks that a Neo4j database is up.

ping

PingHealthIndicator

Always responds with UP.

rabbit

RabbitHealthIndicator

Checks that a Rabbit server is up.

redis

RedisHealthIndicator

Checks that a Redis server is up.

solr

SolrHealthIndicator

Checks that a Solr server is up.

You can disable them all by setting the management.health.defaults.enabled property.

您可以通过将 management.health.defaults.enabled 属性设置为 false 来全部禁用它们。

一些健康指标可用,但默认情况下未启用:

Key

Name

Description

livenessstate

LivenessStateHealthIndicator

Exposes the “Liveness” application availability state.

readinessstate

ReadinessStateHealthIndicator

Exposes the “Readiness” application availability state.

Writing Custom HealthIndicators

为了提供自定义的健康信息,您可以注册实现了 HealthIndicator 接口的 Spring bean。您需要提供 health() 方法的实现并返回 Health 响应。Health 响应应包括状态,并且可以选择包含要显示的其他详细信息。以下代码显示了一个示例 HealthIndicator 实现:

import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;
import org.springframework.stereotype.Component;

@Component
public class MyHealthIndicator implements HealthIndicator {

    @Override
    public Health health() {
        int errorCode = check();
        if (errorCode != 0) {
            return Health.down().withDetail("Error Code", errorCode).build();
        }
        return Health.up().build();
    }

    private int check() {
        // perform some specific health check
        return ...
    }

}

给定 HealthIndicator 的标识符是去掉 HealthIndicator 后缀(如果存在)的bean名称。在前面的示例中,健康信息在名为 my 的条目中可用。

健康指示器通常通过HTTP调用,需要在任何连接超时之前做出响应。Spring Boot会为响应时间超过10秒的健康指示器记录警告消息。如果要配置此阈值,可以使用 management.endpoint.health.logging.slow-indicator-threshold 属性。

除了Spring Boot预定义的 Status 类型之外,Health 还可以返回表示新系统状态的自定义 Status。在这种情况下,还需要提供 StatusAggregator 接口的自定义实现,或者必须使用 management.endpoint.health.status.order 配置属性来配置默认实现。

例如,假设您的一个HealthIndicator实现中正在使用代码为FATAL的新状态。要配置严重性顺序,请将以下属性添加到应用程序属性中:

management.endpoint.health.status.order=fatal,down,out-of-service,unknown,up
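下面是一个返回自定义 FATAL 状态的 HealthIndicator 示意(类名与检查逻辑均为假设):

import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;
import org.springframework.boot.actuate.health.Status;
import org.springframework.stereotype.Component;

@Component
public class FatalAwareHealthIndicator implements HealthIndicator {

    @Override
    public Health health() {
        if (fatalConditionDetected()) {
            // 返回自定义的 FATAL 状态及详情
            return Health.status(new Status("FATAL")).withDetail("reason", "unrecoverable error").build();
        }
        return Health.up().build();
    }

    private boolean fatalConditionDetected() {
        // 此处的检查逻辑仅为占位示例
        return false;
    }

}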

响应中的HTTP状态代码反映了总体健康状况。默认情况下,OUT_OF_SERVICE 和 DOWN 映射到503。任何未映射的健康状态(包括 UP)都映射到200。如果您通过HTTP访问健康端点,您可能还需要注册自定义状态映射。配置自定义映射会禁用 DOWN 和 OUT_OF_SERVICE 的默认映射。如果要保留默认映射,则必须显式配置它们以及任何自定义映射。例如,以下属性将 FATAL 映射到503(服务不可用),并保留 DOWN 和 OUT_OF_SERVICE 的默认映射:

management.endpoint.health.status.http-mapping.down=503
management.endpoint.health.status.http-mapping.fatal=503
management.endpoint.health.status.http-mapping.out-of-service=503

如果需要更多控制,可以定义自己的 HttpCodeStatusMapper bean。
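下面是一个自定义 HttpCodeStatusMapper bean 的示意(映射规则为假设,仅演示接口用法):

import org.springframework.boot.actuate.health.HttpCodeStatusMapper;
import org.springframework.boot.actuate.health.Status;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration(proxyBeanMethods = false)
public class MyHealthStatusMappingConfiguration {

    @Bean
    public HttpCodeStatusMapper httpCodeStatusMapper() {
        return (status) -> {
            // DOWN、OUT_OF_SERVICE 以及假设的自定义 FATAL 状态返回 503,其余返回 200
            if (Status.DOWN.equals(status) || Status.OUT_OF_SERVICE.equals(status)
                    || "FATAL".equals(status.getCode())) {
                return 503;
            }
            return 200;
        };
    }

}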

下表显示了内置状态的默认状态映射:

Status

Mapping

DOWN

SERVICE_UNAVAILABLE (503)

OUT_OF_SERVICE

SERVICE_UNAVAILABLE (503)

UP

No mapping by default, so HTTP status is 200

UNKNOWN

No mapping by default, so HTTP status is 200

Reactive Health Indicators

对于反应式应用程序(例如使用Spring WebFlux的应用程序),ReactiveHealthContributor 为获取应用程序健康状况提供了一个非阻塞契约。与传统的 HealthContributor 类似,健康信息是从 ReactiveHealthContributorRegistry 的内容中收集的(默认情况下是 ApplicationContext 中定义的所有 HealthContributor 和 ReactiveHealthContributor 实例)。不基于反应式API的常规 HealthContributor 会在弹性调度器(elastic scheduler)上执行。

在反应式应用程序中,应使用 ReactiveHealthContributorRegistry 在运行时注册和注销健康指示器。如果需要注册常规的 HealthContributor,应使用 ReactiveHealthContributor#adapt 将其包装。
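下面是一个使用 ReactiveHealthContributor.adapt 包装常规 HealthContributor 的示意(组件名与 "legacy" 条目名为假设):

import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;
import org.springframework.boot.actuate.health.ReactiveHealthContributor;
import org.springframework.boot.actuate.health.ReactiveHealthContributorRegistry;
import org.springframework.stereotype.Component;

@Component
public class ReactiveRuntimeRegistration {

    public ReactiveRuntimeRegistration(ReactiveHealthContributorRegistry registry) {
        // 将一个常规(阻塞式)HealthIndicator 包装为反应式贡献者后注册
        HealthIndicator legacy = () -> Health.up().withDetail("source", "legacy").build();
        registry.registerContributor("legacy", ReactiveHealthContributor.adapt(legacy));
    }

}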

To provide custom health information from a reactive API, you can register Spring beans that implement the ReactiveHealthIndicator interface. The following code shows a sample ReactiveHealthIndicator implementation:

要通过反应式API提供自定义健康信息,可以注册实现 ReactiveHealthIndicator 接口的 Spring bean。以下代码显示了一个 ReactiveHealthIndicator 实现示例:

import reactor.core.publisher.Mono;

import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.ReactiveHealthIndicator;
import org.springframework.stereotype.Component;

@Component
public class MyReactiveHealthIndicator implements ReactiveHealthIndicator {

    @Override
    public Mono<Health> health() {
        return doHealthCheck().onErrorResume((exception) ->
            Mono.just(new Health.Builder().down(exception).build()));
    }

    private Mono<Health> doHealthCheck() {
        // perform some specific health check
        return ...
    }

}

To handle the error automatically, consider extending from AbstractReactiveHealthIndicator.

要自动处理错误,请考虑从AbstractReactiveHealthIndicator扩展。

Auto-configured ReactiveHealthIndicators

适当时,Spring Boot会自动配置以下ReactiveHealthIndicators:

Key

Name

Description

cassandra

CassandraDriverReactiveHealthIndicator

Checks that a Cassandra database is up.

couchbase

CouchbaseReactiveHealthIndicator

Checks that a Couchbase cluster is up.

elasticsearch

ElasticsearchReactiveHealthIndicator

Checks that an Elasticsearch cluster is up.

mongo

MongoReactiveHealthIndicator

Checks that a Mongo database is up.

neo4j

Neo4jReactiveHealthIndicator

Checks that a Neo4j database is up.

redis

RedisReactiveHealthIndicator

Checks that a Redis server is up.

如有必要,响应式指标取代常规指标。此外,任何未显式处理的HealthIndicator都会自动包装。

Health Groups

有时将健康指标组织成可用于不同目的的组是有用的。

要创建健康指标组,可以使用 management.endpoint.health.group.<name> 属性,并指定要包含或排除的健康指标ID列表。例如,要创建仅包含数据库指标的组,可以定义以下内容:

management.endpoint.health.group.custom.include=db

然后,您可以通过访问 localhost:8080/actuator/health/custom 来检查结果。

同样,要创建一个将数据库指标从组中排除并包括所有其他指标的组,可以定义以下内容:

management.endpoint.health.group.custom.exclude=db

默认情况下,组会继承与系统整体健康状况相同的 StatusAggregator 和 HttpCodeStatusMapper 设置。但是,您也可以基于每个组单独定义这些设置。如果需要,还可以覆盖 show-details 和 roles 属性:

management.endpoint.health.group.custom.show-details=when-authorized
management.endpoint.health.group.custom.roles=admin
management.endpoint.health.group.custom.status.order=fatal,up
management.endpoint.health.group.custom.status.http-mapping.fatal=500
management.endpoint.health.group.custom.status.http-mapping.out-of-service=500

You can use @Qualifier("groupname") if you need to register custom StatusAggregator or HttpCodeStatusMapper beans for use with the group.

如果需要注册供组使用的自定义 StatusAggregator 或 HttpCodeStatusMapper bean,可以使用 @Qualifier("groupname")。

健康组还可以包括/排除 CompositeHealthContributor。您还可以仅包括/排除 CompositeHealthContributor 的某个组件。这可以使用组件的完全限定名称来完成,如下所示:

management.endpoint.health.group.custom.include="test/primary"
management.endpoint.health.group.custom.exclude="test/primary/b"

In the example above, the custom group will include the HealthContributor with the name primary which is a component of the composite test. Here, primary itself is a composite and the HealthContributor with the name b will be excluded from the custom group.

在上面的示例中,custom 组将包含名为 primary 的 HealthContributor,它是名为 test 的组合(composite)的一个组件。这里,primary 本身也是一个组合,名为 b 的 HealthContributor 将从 custom 组中排除。

健康组可以在主端口或管理端口的附加路径上提供。这在 Kubernetes 等云环境中非常有用,因为出于安全目的,actuator端点通常使用单独的管理端口。使用单独的端口可能会导致不可靠的健康检查,因为即使健康检查成功,主应用程序也可能无法正常工作。健康组可以配置一个附加路径,如下所示:

management.endpoint.health.group.live.additional-path="server:/healthz"

这将使名为 live 的健康组在主服务器端口的 /healthz 上可用。前缀是必需的,必须是 server:(表示主服务器端口)或 management:(表示管理端口,如果已配置)。路径必须是单个路径段。

DataSource Health

DataSource 健康指示器显示标准数据源和路由数据源bean的健康状况。路由数据源的健康状况包括其每个目标数据源的健康状况。在健康端点的响应中,每个路由数据源的目标都使用其路由键(routing key)来命名。如果不希望在指示器的输出中包含路由数据源,请将 management.health.db.ignore-routing-data-sources 设置为 true。

13.2.9. Kubernetes Probes

部署在Kubernetes上的应用程序可以通过Container Probes提供有关其内部状态的信息。根据您的Kubernetes配置,kubelet调用这些探测并对结果做出反应。

默认情况下,Spring Boot管理应用程序可用性状态。如果部署在Kubernetes环境中,actuator将从 ApplicationAvailability 接口收集 "Liveness" 和 "Readiness" 信息,并将这些信息用于专用的健康指示器:LivenessStateHealthIndicator 和 ReadinessStateHealthIndicator。这些指示器显示在全局健康端点("/actuator/health")上。它们还通过健康组 "/actuator/health/liveness" 和 "/actuator/health/readiness" 作为单独的HTTP探针公开。

然后,您可以使用以下端点信息配置Kubernetes基础设施:

livenessProbe:
  httpGet:
    path: "/actuator/health/liveness"
    port: <actuator-port>
  failureThreshold: ...
  periodSeconds: ...

readinessProbe:
  httpGet:
    path: "/actuator/health/readiness"
    port: <actuator-port>
  failureThreshold: ...
  periodSeconds: ...
<actuator-port> 应设置为 actuator 端点可用的端口。它可以是主web服务器端口,也可以是单独的管理端口(如果设置了 "management.server.port" 属性)。

These health groups are automatically enabled only if the application runs in a Kubernetes environment. You can enable them in any environment by using the management.endpoint.health.probes.enabled configuration property.

只有当应用程序在Kubernetes环境中运行时,这些健康组才会自动启用。您可以使用 management.endpoint.health.probes.enabled 配置属性在任何环境中启用它们。

If an application takes longer to start than the configured liveness period, Kubernetes mentions the "startupProbe" as a possible solution. The "startupProbe" is not necessarily needed here, as the "readinessProbe" fails until all startup tasks are done. See the section that describes how probes behave during the application lifecycle.

如果应用程序的启动时间超过配置的 liveness 周期,Kubernetes会提到 "startupProbe" 作为可能的解决方案。这里不一定需要 "startupProbe",因为在所有启动任务完成之前 "readinessProbe" 都会失败。请参阅描述探针在应用程序生命周期中的行为的部分。

If your Actuator endpoints are deployed on a separate management context, the endpoints do not use the same web infrastructure (port, connection pools, framework components) as the main application. In this case, a probe check could be successful even if the main application does not work properly (for example, it cannot accept new connections). For this reason, is it a good idea to make the liveness and readiness health groups available on the main server port. This can be done by setting the following property:

如果Actuator端点部署在单独的管理上下文中,那么端点不会使用与主应用程序相同的web基础设施(端口、连接池、框架组件)。在这种情况下,即使主应用程序工作不正常(例如,它无法接受新连接),探测检查也可能成功。因此,在主服务器端口上设置活动状态和就绪状态健康组是一个好主意。这可以通过设置以下属性来实现:

management.endpoint.health.probes.add-additional-paths=true

This would make liveness available at /livez and readiness at readyz on the main server port.

这将使 liveness 在主服务器端口的 /livez 上可用,readiness 在 /readyz 上可用。

Checking External State With Kubernetes Probes

Actuator configures the “liveness” and “readiness” probes as Health Groups. This means that all the health groups features are available for them. You can, for example, configure additional Health Indicators:

Actuator 将 "liveness" 和 "readiness" 探针配置为健康组。这意味着所有健康组的功能对它们都可用。例如,您可以配置其他健康指示器:

management.endpoint.health.group.readiness.include=readinessState,customCheck

By default, Spring Boot does not add other health indicators to these groups.

默认情况下,Spring Boot不会向这些组添加其他健康指标。

The “liveness” probe should not depend on health checks for external systems. If the liveness state of an application is broken, Kubernetes tries to solve that problem by restarting the application instance. This means that if an external system (such as a database, a Web API, or an external cache) fails, Kubernetes might restart all application instances and create cascading failures.

"liveness" 探针不应依赖外部系统的健康检查。如果应用程序的存活状态被破坏,Kubernetes会尝试通过重新启动应用程序实例来解决这个问题。这意味着,如果外部系统(如数据库、Web API或外部缓存)发生故障,Kubernetes可能会重新启动所有应用程序实例并造成级联故障。

As for the “readiness” probe, the choice of checking external systems must be made carefully by the application developers. For this reason, Spring Boot does not include any additional health checks in the readiness probe. If the readiness state of an application instance is unready, Kubernetes does not route traffic to that instance. Some external systems might not be shared by application instances, in which case they could be included in a readiness probe. Other external systems might not be essential to the application (the application could have circuit breakers and fallbacks), in which case they definitely should not be included. Unfortunately, an external system that is shared by all application instances is common, and you have to make a judgement call: Include it in the readiness probe and expect that the application is taken out of service when the external service is down or leave it out and deal with failures higher up the stack, perhaps by using a circuit breaker in the caller.

至于 "readiness" 探针,是否检查外部系统必须由应用程序开发人员仔细决定。出于这个原因,Spring Boot在就绪探针中不包括任何额外的健康检查。如果应用程序实例的就绪状态为未就绪,Kubernetes不会将流量路由到该实例。某些外部系统可能不被各应用程序实例共享,在这种情况下,可以将它们包含在就绪探针中。其他外部系统可能对应用程序不是必需的(应用程序可能有断路器和回退),在这种情况下,它们肯定不应包括在内。不幸的是,被所有应用程序实例共享的外部系统很常见,此时您必须自行权衡:要么将其包含在就绪探针中,并接受当外部服务宕机时应用程序将停止服务;要么将其排除在外,在调用栈的更高层处理故障,例如在调用方中使用断路器。

如果应用程序的所有实例都未就绪,则类型为 ClusterIP 或 NodePort 的 Kubernetes Service 不接受任何传入连接。由于没有连接,也就没有HTTP错误响应(503等)。类型为 LoadBalancer 的 Service 可能接受连接,也可能不接受连接,具体取决于提供商。具有显式 Ingress 的 Service 的响应方式也取决于实现,Ingress 服务本身必须决定如何处理来自下游的"拒绝连接"。在负载均衡器和 Ingress 的场景下,HTTP 503 都是很可能的结果。

Also, if an application uses Kubernetes autoscaling, it may react differently to applications being taken out of the load-balancer, depending on its autoscaler configuration.

此外,如果应用程序使用Kubernetes自动伸缩(autoscaling),它对实例被移出负载均衡器的反应可能有所不同,这取决于其autoscaler配置。

Application Lifecycle and Probe States

An important aspect of the Kubernetes Probes support is its consistency with the application lifecycle. There is a significant difference between the AvailabilityState (which is the in-memory, internal state of the application) and the actual probe (which exposes that state). Depending on the phase of application lifecycle, the probe might not be available.

Kubernetes Probes支持的一个重要方面是它与应用程序生命周期的一致性。AvailabilityState(应用程序在内存中的内部状态)和实际探针(暴露该状态)之间存在显著差异。根据应用程序生命周期所处的阶段,探针可能不可用。

Spring Boot publishes application events during startup and shutdown, and probes can listen to such events and expose the AvailabilityState information.

Spring Boot在启动和关闭期间发布应用程序事件,探针可以监听这些事件并公开 AvailabilityState 信息。

The following tables show the AvailabilityState and the state of HTTP connectors at different stages.

下表显示了AvailabilityState和HTTP连接器在不同阶段的状态。

When a Spring Boot application starts:

Startup phase (LivenessState / ReadinessState / HTTP server):

Starting: BROKEN / REFUSING_TRAFFIC / Not started. Kubernetes checks the "liveness" Probe and restarts the application if it takes too long.

Started: CORRECT / REFUSING_TRAFFIC / Refuses requests. The application context is refreshed. The application performs startup tasks and does not receive traffic yet.

Ready: CORRECT / ACCEPTING_TRAFFIC / Accepts requests. Startup tasks are finished. The application is receiving traffic.

When a Spring Boot application shuts down:

Shutdown phase (Liveness State / Readiness State / HTTP server):

Running: CORRECT / ACCEPTING_TRAFFIC / Accepts requests. Shutdown has been requested.

Graceful shutdown: CORRECT / REFUSING_TRAFFIC / New requests are rejected. If enabled, graceful shutdown processes in-flight requests.

Shutdown complete: N/A / N/A / Server is shut down. The application context is closed and the application is shut down.

13.2.10. 应用信息 Application Information

Application information exposes various information collected from all InfoContributor beans defined in your ApplicationContext. Spring Boot includes a number of auto-configured InfoContributor beans, and you can write your own.

应用程序信息公开从ApplicationContext中定义的所有InfoContributor bean收集的各种信息。Spring Boot包含许多自动配置的InfoContributor bean,您可以自己编写。

自动配置信息收集 Auto-configured InfoContributors

When appropriate, Spring auto-configures the following InfoContributor beans:

适当的时候,Spring会自动配置以下InfoContributor bean:

ID / Name / Description / Prerequisites 前置条件:

build / BuildInfoContributor: Exposes build information. Prerequisite: A META-INF/build-info.properties resource.

env / EnvironmentInfoContributor: Exposes any property from the Environment whose name starts with info.. Prerequisite: None.

git / GitInfoContributor: Exposes git information. Prerequisite: A git.properties resource.

java / JavaInfoContributor: Exposes Java runtime information. Prerequisite: None.

os / OsInfoContributor: Exposes Operating System information. Prerequisite: None.

Whether an individual contributor is enabled is controlled by its management.info.<id>.enabled property. Different contributors have different defaults for this property, depending on their prerequisites and the nature of the information that they expose.

单个贡献者是否启用由其 management.info.<id>.enabled 属性控制。不同的贡献者对此属性有不同的默认值,这取决于它们的前置条件以及它们所公开信息的性质。

With no prerequisites to indicate that they should be enabled, the env, java, and os contributors are disabled by default. Each can be enabled by setting its management.info.<id>.enabled property to true.

由于没有前置条件来表明应该启用它们,默认情况下 env、java 和 os 贡献者是禁用的。每个都可以通过将其 management.info.<id>.enabled 属性设置为 true 来启用。

The build and git info contributors are enabled by default. Each can be disabled by setting its management.info.<id>.enabled property to false. Alternatively, to disable every contributor that is usually enabled by default, set the management.info.defaults.enabled property to false.

默认情况下,build 和 git 信息贡献者是启用的。每个都可以通过将其 management.info.<id>.enabled 属性设置为 false 来禁用。或者,要禁用所有默认启用的贡献者,请将 management.info.defaults.enabled 属性设置为 false。

自定义应用信息 Custom Application Information

When the env contributor is enabled, you can customize the data exposed by the info endpoint by setting info.* Spring properties. All Environment properties under the info key are automatically exposed. For example, you could add the following settings to your application.properties file:

启用 env 贡献者后,您可以通过设置 info.* Spring属性来自定义 info 端点公开的数据。info 键下的所有 Environment 属性都将自动公开。例如,您可以将以下设置添加到 application.properties 文件中:

info.app.encoding=UTF-8
info.app.java.source=11
info.app.java.target=11

Rather than hardcoding those values, you could also expand info properties at build time.

您还可以在构建时扩展信息属性,而不是对这些值进行硬编码。

Assuming you use Maven, you could rewrite the preceding example as follows:

假设您使用Maven,可以将前面的示例重写如下:

info.app.encoding=@project.build.sourceEncoding@
info.app.java.source=@java.version@
info.app.java.target=@java.version@

git提交信息 Git Commit Information

Another useful feature of the info endpoint is its ability to publish information about the state of your git source code repository when the project was built. If a GitProperties bean is available, you can use the info endpoint to expose these properties.

info 端点的另一个有用特性是,它能够发布项目构建时git源代码仓库状态的信息。如果 GitProperties bean 可用,则可以使用 info 端点公开这些属性。

A GitProperties bean is auto-configured if a git.properties file is available at the root of the classpath. See "how to generate git information" for more detail.

如果类路径根目录下存在 git.properties 文件,则会自动配置 GitProperties bean。有关详细信息,请参阅"如何生成git信息"。

By default, the endpoint exposes git.branch, git.commit.id, and git.commit.time properties, if present. If you do not want any of these properties in the endpoint response, they need to be excluded from the git.properties file. If you want to display the full git information (that is, the full content of git.properties), use the management.info.git.mode property, as follows:

默认情况下,端点公开 git.branch、git.commit.id 和 git.commit.time 属性(如果存在)。如果不希望在端点响应中包含这些属性,则需要从 git.properties 文件中排除它们。如果要显示完整的git信息(即 git.properties 的完整内容),请使用 management.info.git.mode 属性,如下所示:

management.info.git.mode=full

To disable the git commit information from the info endpoint completely, set the management.info.git.enabled property to false, as follows:

要完全禁用 info 端点的git提交信息,请将 management.info.git.enabled 属性设置为 false,如下所示:

management.info.git.enabled=false

构建信息 Build Information

If a BuildProperties bean is available, the info endpoint can also publish information about your build. This happens if a META-INF/build-info.properties file is available in the classpath.

如果 BuildProperties bean 可用,则 info 端点还可以发布有关您的构建的信息。当类路径下存在 META-INF/build-info.properties 文件时,即会如此。

The Maven and Gradle plugins can both generate that file. See "how to generate build information" for more details.

Maven和Gradle插件都可以生成该文件。有关详细信息,请参阅“如何生成构建信息”。

Java Information

The info endpoint publishes information about your Java runtime environment, see JavaInfo for more details.

info端点发布有关Java运行时环境的信息,有关详细信息,请参阅JavaInfo。

OS Information

The info endpoint publishes information about your Operating System, see OsInfo for more details.

信息端点发布有关操作系统的信息,有关详细信息,请参阅OsInfo。

Writing Custom InfoContributors

To provide custom application information, you can register Spring beans that implement the InfoContributor interface.

为了提供自定义应用程序信息,可以注册实现 InfoContributor 接口的 Spring bean。

The following example contributes an example entry with a single value:

以下示例提供了具有单个值的示例条目:

import java.util.Collections;

import org.springframework.boot.actuate.info.Info;
import org.springframework.boot.actuate.info.InfoContributor;
import org.springframework.stereotype.Component;

@Component
public class MyInfoContributor implements InfoContributor {

    @Override
    public void contribute(Info.Builder builder) {
        builder.withDetail("example", Collections.singletonMap("key", "value"));
    }

}

If you reach the info endpoint, you should see a response that contains the following additional entry:

如果访问 info 端点,您应该会看到包含以下附加条目的响应:

{
  "example": {
    "key" : "value"
  }
}

13.3. HTTP监控和管理 Monitoring and Management Over HTTP

If you are developing a web application, Spring Boot Actuator auto-configures all enabled endpoints to be exposed over HTTP. The default convention is to use the id of the endpoint with a prefix of /actuator as the URL path. For example, health is exposed as /actuator/health.

如果您正在开发一个web应用程序,Spring Boot Actuator 会自动配置所有启用的端点,以便通过HTTP公开。默认约定是使用端点的 id 加上 /actuator 前缀作为URL路径。例如,health 暴露为 /actuator/health。

Actuator is supported natively with Spring MVC, Spring WebFlux, and Jersey. If both Jersey and Spring MVC are available, Spring MVC is used.

Actuator由Spring MVC、Spring WebFlux和Jersey原生支持。如果Jersey和SpringMVC都可用,则使用SpringMVC。

Jackson is a required dependency in order to get the correct JSON responses as documented in the API documentation (HTML or PDF).

为了获得API文档(HTML或PDF)中记录的正确JSON响应,Jackson是必需的依赖项。

13.3.1. 自定义和管理端点路径 Customizing the Management Endpoint Paths

Sometimes, it is useful to customize the prefix for the management endpoints. For example, your application might already use /actuator for another purpose. You can use the management.endpoints.web.base-path property to change the prefix for your management endpoint, as the following example shows:

有时,自定义管理端点的前缀很有用。例如,您的应用程序可能已经将 /actuator 用于其他目的。您可以使用 management.endpoints.web.base-path 属性更改管理端点的前缀,如下例所示:

management.endpoints.web.base-path=/manage

The preceding application.properties example changes the endpoint from /actuator/{id} to /manage/{id} (for example, /manage/info).

前面的 application.properties 示例将端点从 /actuator/{id} 更改为 /manage/{id}(例如,/manage/info)。

Unless the management port has been configured to expose endpoints by using a different HTTP port, management.endpoints.web.base-path is relative to server.servlet.context-path (for servlet web applications) or spring.webflux.base-path (for reactive web applications). If management.server.port is configured, management.endpoints.web.base-path is relative to management.server.base-path.

除非已将管理端口配置为使用不同的HTTP端口来公开端点,否则 management.endpoints.web.base-path 是相对于 server.servlet.context-path(用于servlet web应用程序)或 spring.webflux.base-path(用于反应式web应用程序)的。如果配置了 management.server.port,则 management.endpoints.web.base-path 是相对于 management.server.base-path 的。

If you want to map endpoints to a different path, you can use the management.endpoints.web.path-mapping property.

如果要将端点映射到其他路径,可以使用 management.endpoints.web.path-mapping 属性。

The following example remaps /actuator/health to /healthcheck:

以下示例将 /actuator/health 重新映射到 /healthcheck:

management.endpoints.web.base-path=/
management.endpoints.web.path-mapping.health=healthcheck

13.3.2. 自定义&管理服务端口 Customizing the Management Server Port

Exposing management endpoints by using the default HTTP port is a sensible choice for cloud-based deployments. If, however, your application runs inside your own data center, you may prefer to expose endpoints by using a different HTTP port.

使用默认HTTP端口公开管理端点是基于云部署的明智选择。但是,如果您的应用程序在自己的数据中心内运行,您可能更喜欢使用不同的HTTP端口来公开端点。

You can set the management.server.port property to change the HTTP port, as the following example shows:

您可以设置 management.server.port 属性来更改HTTP端口,如下例所示:

management.server.port=8081

On Cloud Foundry, by default, applications receive requests only on port 8080 for both HTTP and TCP routing. If you want to use a custom management port on Cloud Foundry, you need to explicitly set up the application’s routes to forward traffic to the custom port.

在Cloud Foundry上,默认情况下,应用程序仅在端口8080上接收HTTP和TCP路由请求。如果您想在CloudFoundry上使用自定义管理端口,则需要明确设置应用程序的路由转发到自定义端口。

13.3.3. Configuring Management-specific SSL

When configured to use a custom port, you can also configure the management server with its own SSL by using the various management.server.ssl.* properties. For example, doing so lets a management server be available over HTTP while the main application uses HTTPS, as the following property settings show:

当配置为使用自定义端口时,您还可以使用各种 management.server.ssl.* 属性为管理服务器配置自己的SSL。例如,这样做可以让主应用程序使用HTTPS,而管理服务器通过HTTP访问,如下属性设置所示:

server.port=8443
server.ssl.enabled=true
server.ssl.key-store=classpath:store.jks
server.ssl.key-password=secret
management.server.port=8080
management.server.ssl.enabled=false

Alternatively, both the main server and the management server can use SSL but with different key stores, as follows:

或者,主服务器和管理服务器都可以使用SSL,但密钥存储不同,如下所示:

server.port=8443
server.ssl.enabled=true
server.ssl.key-store=classpath:main.jks
server.ssl.key-password=secret
management.server.port=8080
management.server.ssl.enabled=true
management.server.ssl.key-store=classpath:management.jks
management.server.ssl.key-password=secret

13.3.4. Customizing the Management Server Address

You can customize the address on which the management endpoints are available by setting the management.server.address property. Doing so can be useful if you want to listen only on an internal or ops-facing network or to listen only for connections from localhost.

通过设置 management.server.address 属性,可以自定义管理端点可用的地址。如果您只想在内部或面向运维的网络上侦听,或者只侦听来自 localhost 的连接,这样做会很有用。

You can listen on a different address only when the port differs from the main server port.

只有当端口与主服务器端口不同时,才能侦听不同的地址。

The following example application.properties does not allow remote management connections:

以下 application.properties 示例不允许远程管理连接:

management.server.port=8081
management.server.address=127.0.0.1

13.3.5. Disabling HTTP Endpoints

If you do not want to expose endpoints over HTTP, you can set the management port to -1, as the following example shows:

如果不想通过HTTP公开端点,可以将管理端口设置为-1,如下例所示:

management.server.port=-1

You can also achieve this by using the management.endpoints.web.exposure.exclude property, as the following example shows:

您也可以通过使用 management.endpoints.web.exposure.exclude 属性来实现这一点,如下例所示:

management.endpoints.web.exposure.exclude=*

13.4. Monitoring and Management over JMX

Java Management Extensions (JMX) provide a standard mechanism to monitor and manage applications. By default, this feature is not enabled. You can turn it on by setting the spring.jmx.enabled configuration property to true. Spring Boot exposes the most suitable MBeanServer as a bean with an ID of mbeanServer. Any of your beans that are annotated with Spring JMX annotations (@ManagedResource, @ManagedAttribute, or @ManagedOperation) are exposed to it.

Java管理扩展(JMX)提供了监视和管理应用程序的标准机制。默认情况下,此功能未启用。您可以通过将 spring.jmx.enabled 配置属性设置为 true 来打开它。Spring Boot将最合适的 MBeanServer 公开为一个ID为 mbeanServer 的bean。任何使用Spring JMX注解(@ManagedResource、@ManagedAttribute 或 @ManagedOperation)的bean都会暴露给它。
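下面给出一个通过 Spring JMX 注解暴露 MBean 的示意(objectName 与类名均为假设):

import org.springframework.jmx.export.annotation.ManagedAttribute;
import org.springframework.jmx.export.annotation.ManagedOperation;
import org.springframework.jmx.export.annotation.ManagedResource;
import org.springframework.stereotype.Component;

@Component
@ManagedResource(objectName = "com.example:type=Cache,name=myCache")
public class MyCacheMBean {

    private volatile int size;

    // 作为 JMX 属性暴露
    @ManagedAttribute
    public int getSize() {
        return this.size;
    }

    // 作为 JMX 操作暴露
    @ManagedOperation
    public void clear() {
        this.size = 0;
    }

}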

If your platform provides a standard MBeanServer, Spring Boot uses that and defaults to the VM MBeanServer, if necessary. If all that fails, a new MBeanServer is created.

如果您的平台提供标准MBeanServer,则Spring Boot将使用该服务器,并在必要时默认为VM MBeanServer。如果所有这些都失败,将创建一个新的MBeanServer。

See the JmxAutoConfiguration class for more details.

By default, Spring Boot also exposes management endpoints as JMX MBeans under the org.springframework.boot domain. To take full control over endpoint registration in the JMX domain, consider registering your own EndpointObjectNameFactory implementation.

默认情况下,Spring Boot还将管理端点公开为 org.springframework.boot 域下的JMX MBean。要完全控制JMX域中的端点注册,请考虑注册自己的 EndpointObjectNameFactory 实现。

13.4.1. Customizing MBean Names

The name of the MBean is usually generated from the id of the endpoint. For example, the health endpoint is exposed as org.springframework.boot:type=Endpoint,name=Health.

If your application contains more than one Spring ApplicationContext, you may find that names clash. To solve this problem, you can set the spring.jmx.unique-names property to true so that MBean names are always unique.

You can also customize the JMX domain under which endpoints are exposed. The following settings show an example of doing so in application.properties:

spring.jmx.unique-names=true
management.endpoints.jmx.domain=com.example.myapp

13.4.2. Disabling JMX Endpoints

If you do not want to expose endpoints over JMX, you can set the management.endpoints.jmx.exposure.exclude property to *, as the following example shows:

management.endpoints.jmx.exposure.exclude=*

13.4.3. Using Jolokia for JMX over HTTP

Jolokia is a JMX-HTTP bridge that provides an alternative method of accessing JMX beans. To use Jolokia, include a dependency to org.jolokia:jolokia-core. For example, with Maven, you would add the following dependency:

<dependency>
    <groupId>org.jolokia</groupId>
    <artifactId>jolokia-core</artifactId>
</dependency>

You can then expose the Jolokia endpoint by adding jolokia or * to the management.endpoints.web.exposure.include property. You can then access it by using /actuator/jolokia on your management HTTP server.

Customizing Jolokia

Jolokia has a number of settings that you would traditionally configure by setting servlet parameters. With Spring Boot, you can use your application.properties file. To do so, prefix the parameter with management.endpoint.jolokia.config., as the following example shows:

management.endpoint.jolokia.config.debug=true

Disabling Jolokia

If you use Jolokia but do not want Spring Boot to configure it, set the management.endpoint.jolokia.enabled property to false, as follows:

management.endpoint.jolokia.enabled=false

13.5. 日志 Loggers

Spring Boot Actuator includes the ability to view and configure the log levels of your application at runtime. You can view either the entire list or an individual logger’s configuration, which is made up of both the explicitly configured logging level as well as the effective logging level given to it by the logging framework. These levels can be one of:

Spring Boot Actuator 包括在运行时查看和配置应用程序日志级别的功能。您可以查看完整列表或单个 logger 的配置,该配置由显式配置的日志级别以及日志框架赋予它的有效日志级别组成。这些级别可以是以下之一:

  • TRACE
  • DEBUG
  • INFO
  • WARN
  • ERROR
  • FATAL
  • OFF
  • null

null indicates that there is no explicit configuration. null表示没有显式配置。

13.5.1. Configure a Logger

To configure a given logger,

POST

a partial entity to the resource’s URI, as the following example shows:

要配置给定的日志,

POST

请求部分实体资源的URI,如下例所示:

{
    "configuredLevel": "DEBUG"
}
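例如,可以用 Java 11 的 HttpClient 向 loggers 端点发送该请求(下面的 URL 与 logger 名称 com.example 为假设值,仅作示意):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class LoggerLevelClient {

    public static void main(String[] args) throws Exception {
        // 假设应用运行在本地 8080 端口,要调整的 logger 名称为 com.example
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/actuator/loggers/com.example"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"configuredLevel\": \"DEBUG\"}"))
                .build();
        HttpResponse<Void> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.discarding());
        System.out.println("status: " + response.statusCode());
    }

}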

To "reset" the specific level of the logger (and use the default configuration instead), you can pass a value of null as the configuredLevel.

要"重置" logger 的特定级别(并改用默认配置),可以将 null 作为 configuredLevel 的值传递。

13.6. 指标 Metrics

Spring Boot Actuator provides dependency management and auto-configuration for Micrometer, an application metrics facade that supports numerous monitoring systems, including:

Spring Boot Actuator 为 Micrometer 提供依赖管理和自动配置。Micrometer 是一个应用指标门面(facade),支持多种监控系统,包括:

  • AppOptics
  • Atlas
  • Datadog
  • Dynatrace
  • Elastic
  • Ganglia
  • Graphite
  • Humio
  • Influx
  • JMX
  • KairosDB
  • New Relic
  • Prometheus
  • SignalFx
  • Simple (in-memory)
  • Stackdriver
  • StatsD
  • Wavefront

To learn more about Micrometer’s capabilities, see its reference documentation, in particular the concepts section.

13.6.1. Getting started

Spring Boot auto-configures a composite MeterRegistry and adds a registry to the composite for each of the supported implementations that it finds on the classpath. Having a dependency on micrometer-registry-{system} in your runtime classpath is enough for Spring Boot to configure the registry.

Spring Boot 会自动配置一个组合(composite)MeterRegistry,并为在类路径上找到的每个受支持的实现向该组合中添加一个注册表。在运行时类路径中依赖 micrometer-registry-{system} 就足以让Spring Boot配置相应的注册表。
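例如,可以直接注入自动配置的 MeterRegistry 来注册并更新指标(下面的指标名 orders.created 与标签为假设,仅作示意):

import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.MeterRegistry;

import org.springframework.stereotype.Component;

@Component
public class OrderMetrics {

    private final Counter ordersCreated;

    public OrderMetrics(MeterRegistry registry) {
        // 自动配置的 MeterRegistry 可以直接注入使用
        this.ordersCreated = Counter.builder("orders.created")
                .description("Number of orders created")
                .tag("region", "us-east-1")
                .register(registry);
    }

    public void onOrderCreated() {
        this.ordersCreated.increment();
    }

}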

Most registries share common features. For instance, you can disable a particular registry even if the Micrometer registry implementation is on the classpath. The following example disables Datadog:

大多数注册中心都有共同的特点。例如,即使Micrometer注册表实现位于类路径上,也可以禁用特定注册表。以下示例禁用Datadog:

management.metrics.export.datadog.enabled=false

You can also disable all registries unless stated otherwise by the registry-specific property, as the following example shows:

您还可以禁用所有注册表,除非注册表特定属性另有说明,如下例所示:

management.metrics.export.defaults.enabled=false

Spring Boot also adds any auto-configured registries to the global static composite registry on the Metrics class, unless you explicitly tell it not to:

Spring Boot 还会将任何自动配置的注册表添加到 Metrics 类上的全局静态组合注册表中,除非您明确告诉它不要这样做:

management.metrics.use-global-registry=false

You can register any number of MeterRegistryCustomizer beans to further configure the registry, such as applying common tags, before any meters are registered with the registry:

您可以注册任意数量的 MeterRegistryCustomizer bean 来进一步配置注册表,例如在任何指标(meter)注册到注册表之前应用通用标签:

import io.micrometer.core.instrument.MeterRegistry;

import org.springframework.boot.actuate.autoconfigure.metrics.MeterRegistryCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration(proxyBeanMethods = false)
public class MyMeterRegistryConfiguration {

    @Bean
    public MeterRegistryCustomizer<MeterRegistry> metricsCommonTags() {
        return (registry) -> registry.config().commonTags("region", "us-east-1");
    }

}

You can apply customizations to particular registry implementations by being more specific about the generic type:

您可以通过更具体地描述泛型类型来将自定义应用于特定的注册表实现:

import io.micrometer.core.instrument.Meter;
import io.micrometer.core.instrument.config.NamingConvention;
import io.micrometer.graphite.GraphiteMeterRegistry;

import org.springframework.boot.actuate.autoconfigure.metrics.MeterRegistryCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration(proxyBeanMethods = false)
public class MyMeterRegistryConfiguration {

    @Bean
    public MeterRegistryCustomizer<GraphiteMeterRegistry> graphiteMetricsNamingConvention() {
        return (registry) -> registry.config().namingConvention(this::name);
    }

    private String name(String name, Meter.Type type, String baseUnit) {
        return ...
    }

}

Spring Boot also configures built-in instrumentation that you can control through configuration or dedicated annotation markers.

Spring Boot还配置了内置的工具,您可以通过配置或专用注解标记来控制这些工具。

13.6.2. Supported Monitoring Systems

本节简要介绍了每个受支持的监控系统。

AppOptics

By default, the AppOptics registry periodically pushes metrics to api.appoptics.com/v1/measurements. To export metrics to SaaS AppOptics, your API token must be provided:

默认情况下,AppOptics 注册表定期将指标推送到 api.appoptics.com/v1/measurements。要将指标导出到 SaaS AppOptics,必须提供API令牌:

management.metrics.export.appoptics.api-token=YOUR_TOKEN

Atlas

By default, metrics are exported to Atlas running on your local machine. You can provide the location of the Atlas server:

默认情况下,指标会导出到本地计算机上运行的 Atlas。您可以提供Atlas服务器的位置:

management.metrics.export.atlas.uri=https://atlas.example.com:7101/api/v1/publish

Datadog

A Datadog registry periodically pushes metrics to datadoghq. To export metrics to Datadog, you must provide your API key:

Datadog 注册表定期将指标推送到 datadoghq。要将指标导出到 Datadog,必须提供API密钥:

management.metrics.export.datadog.api-key=YOUR_KEY

If you additionally provide an application key (optional), then metadata such as meter descriptions, types, and base units will also be exported:

如果您另外提供了应用程序密钥(可选),则还将导出元数据,如指标说明、类型和基本单位:

management.metrics.export.datadog.api-key=YOUR_API_KEY
management.metrics.export.datadog.application-key=YOUR_APPLICATION_KEY

By default, metrics are sent to the Datadog US site (api.datadoghq.com). If your Datadog project is hosted on one of the other sites, or you need to send metrics through a proxy, configure the URI accordingly:

默认情况下,指标被发送到 Datadog US 站点(api.datadoghq.com)。如果您的Datadog项目托管在其他站点上,或者您需要通过代理发送指标,请相应地配置URI:

management.metrics.export.datadog.uri=https://api.datadoghq.eu

You can also change the interval at which metrics are sent to Datadog:

您还可以更改向Datadog发送指标的间隔:

management.metrics.export.datadog.step=30s

Dynatrace

Dynatrace offers two metrics ingest APIs, both of which are implemented for Micrometer. You can find the Dynatrace documentation on Micrometer metrics ingest here. Configuration properties in the v1 namespace apply only when exporting to the Timeseries v1 API. Configuration properties in the v2 namespace apply only when exporting to the Metrics v2 API. Note that this integration can export only to either the v1 or v2 version of the API at a time, with v2 being preferred. If the device-id (required for v1 but not used in v2) is set in the v1 namespace, metrics are exported to the v1 endpoint. Otherwise, v2 is assumed.

Dynatrace 提供了两个指标摄取(ingest)API,这两个API都已为 Micrometer 实现。Dynatrace 关于 Micrometer 指标摄取的文档可以在此处找到。v1命名空间中的配置属性仅在导出到 Timeseries v1 API 时适用。v2命名空间中的配置属性仅在导出到 Metrics v2 API 时适用。请注意,此集成一次只能导出到API的v1或v2版本,首选v2。如果在v1命名空间中设置了 device-id(v1需要但v2中未使用),则指标会导出到v1端点。否则,假定为v2。

v2 API

You can use the v2 API in two ways.

您可以通过两种方式使用v2 API。

1 Auto-configuration

Dynatrace auto-configuration is available for hosts that are monitored by the OneAgent or by the Dynatrace Operator for Kubernetes.

Dynatrace 自动配置可用于由 OneAgent 或 Dynatrace Operator for Kubernetes 监控的主机。

Local OneAgent: If a OneAgent is running on the host, metrics are automatically exported to the local OneAgent ingest endpoint. The ingest endpoint forwards the metrics to the Dynatrace backend.

Local OneAgent:如果主机上运行着 OneAgent,指标将自动导出到本地 OneAgent 摄取端点。该摄取端点会将指标转发到 Dynatrace 后端。

Dynatrace Kubernetes Operator: When running in Kubernetes with the Dynatrace Operator installed, the registry will automatically pick up your endpoint URI and API token from the operator instead.

Dynatrace Kubernetes Operator:当在安装了 Dynatrace Operator 的 Kubernetes 中运行时,注册表将改为自动从该 Operator 获取端点URI和API令牌。

This is the default behavior and requires no special setup beyond a dependency on io.micrometer:micrometer-registry-dynatrace.

这是默认行为,除了依赖 io.micrometer:micrometer-registry-dynatrace 之外,不需要特殊设置。

2 Manual configuration

If no auto-configuration is available, the endpoint of the Metrics v2 API and an API token are required. The API token must have the “Ingest metrics” (metrics.ingest) permission set. We recommend limiting the scope of the token to this one permission. You must ensure that the endpoint URI contains the path (for example, /api/v2/metrics/ingest):

如果没有可用的自动配置,则需要 Metrics v2 API 的端点和API令牌。API令牌必须具有"摄取指标"(metrics.ingest)权限。我们建议将令牌的范围限制为仅此权限。您必须确保端点URI包含路径(例如 /api/v2/metrics/ingest):
The URL of the Metrics API v2 ingest endpoint is different according to your deployment option:

根据您的部署选项,Metrics API v2获取端点的URL不同:

  • SaaS: https://{your-environment-id}.live.dynatrace.com/api/v2/metrics/ingest
  • Managed deployments: https://{your-domain}/e/{your-environment-id}/api/v2/metrics/ingest

The example below configures metrics export using the example environment id:

以下示例使用示例环境id配置度量导出:

management.metrics.export.dynatrace.uri=https://example.live.dynatrace.com/api/v2/metrics/ingest
management.metrics.export.dynatrace.api-token=YOUR_TOKEN

When using the Dynatrace v2 API, the following optional features are available (more details can be found in the Dynatrace documentation):

使用Dynatrace v2 API时,提供以下可选功能(更多详细信息,请参阅Dynatrace文档):

  • Metric key prefix: Sets a prefix that is prepended to all exported metric keys.
  • 指标key前缀:设置所有导出的指标关键字的前缀。
  • Enrich with Dynatrace metadata: If a OneAgent or Dynatrace operator is running, enrich metrics with additional metadata (for example, about the host, process, or pod).
  • 使用Dynatrace元数据丰富:如果OneAgent或Dynatrace运算符正在运行,请使用其他元数据(例如,关于主机、进程或pod)丰富指标。
  • Default dimensions: Specify key-value pairs that are added to all exported metrics. If tags with the same key are specified with Micrometer, they overwrite the default dimensions.
  • 默认维度:指定添加到所有导出指标的键值对。如果使用Micrometer指定了具有相同键的标记,它们将覆盖默认维度。
  • Use Dynatrace Summary instruments: In some cases the Micrometer Dynatrace registry created metrics that were rejected. In Micrometer 1.9.x, this was fixed by introducing Dynatrace-specific summary instruments. Setting this toggle to false forces Micrometer to fall back to the behavior that was the default before 1.9.x. It should only be used when encountering problems while migrating from Micrometer 1.8.x to 1.9.x.
  • 使用Dynatrace汇总工具:在某些情况下,Micrometer Dynatrace注册表创建的指标会被拒绝。在Micrometer 1.9.x中,通过引入Dynatrace特定的汇总工具解决了这一问题。将此开关设置为false会迫使Micrometer回退到1.9.x之前的默认行为。仅当从Micrometer 1.8.x迁移到1.9.x遇到问题时才应使用它。

It is possible to not specify a URI and API token, as shown in the following example. In this scenario, the automatically configured endpoint is used:

可以不指定URI和API令牌,如下例所示。在此场景中,使用自动配置的端点:

management.metrics.export.dynatrace.v2.metric-key-prefix=your.key.prefix
management.metrics.export.dynatrace.v2.enrich-with-dynatrace-metadata=true
management.metrics.export.dynatrace.v2.default-dimensions.key1=value1
management.metrics.export.dynatrace.v2.default-dimensions.key2=value2
management.metrics.export.dynatrace.v2.use-dynatrace-summary-instruments=true

v1 API (Legacy)

The Dynatrace v1 API metrics registry pushes metrics to the configured URI periodically by using the Timeseries v1 API. For backwards-compatibility with existing setups, when device-id is set (required for v1, but not used in v2), metrics are exported to the Timeseries v1 endpoint. To export metrics to Dynatrace, your API token, device ID, and URI must be provided:

Dynatrace v1 API指标注册表通过使用 Timeseries v1 API 定期将指标推送到配置的URI。为了与现有设置向后兼容,当设置了 device-id(v1需要,但v2中不使用)时,指标将导出到 Timeseries v1 端点。要将指标导出到Dynatrace,必须提供API令牌、设备ID和URI:

management.metrics.export.dynatrace.uri=https://{your-environment-id}.live.dynatrace.com
management.metrics.export.dynatrace.api-token=YOUR_TOKEN
management.metrics.export.dynatrace.v1.device-id=YOUR_DEVICE_ID

For the v1 API, you must specify the base environment URI without a path, as the v1 endpoint path is added automatically.

对于v1 API,必须指定没有路径的基本环境URI,因为v1端点路径是自动添加的。

Version-independent Settings

In addition to the API endpoint and token, you can also change the interval at which metrics are sent to Dynatrace. The default export interval is 60s. The following example sets the export interval to 30 seconds:

除了API端点和令牌之外,还可以更改向Dynatrace发送度量的间隔。默认导出间隔为60秒。以下示例将导出间隔设置为30秒:

management.metrics.export.dynatrace.step=30s

You can find more information on how to set up the Dynatrace exporter for Micrometer in the Micrometer documentation and the Dynatrace documentation.

您可以在 Micrometer documentation 和 Dynatrace documentation 中找到有关如何为 Micrometer 设置 Dynatrace 导出器的更多信息。

弹性 Elastic

By default, metrics are exported to Elastic running on your local machine. You can provide the location of the Elastic server to use by using the following property:

默认情况下,指标导出到本地计算机上运行的 Elastic。您可以使用以下属性提供要使用的 Elastic 服务器的位置:

management.metrics.export.elastic.host=https://elastic.example.com:8086

Ganglia

By default, metrics are exported to Ganglia running on your local machine. You can provide the Ganglia server host and port, as the following example shows:

默认情况下,指标导出到本地计算机上运行的Ganglia。您可以提供Ganglia服务器主机和端口,如下例所示:

management.metrics.export.ganglia.host=ganglia.example.com
management.metrics.export.ganglia.port=9649

Graphite

By default, metrics are exported to Graphite running on your local machine. You can provide the Graphite server host and port, as the following example shows:

默认情况下,指标导出到本地计算机上运行的 Graphite。您可以提供 Graphite 服务器主机和端口,如下例所示:

management.metrics.export.graphite.host=graphite.example.com
management.metrics.export.graphite.port=9004

Micrometer provides a default HierarchicalNameMapper that governs how a dimensional meter ID is mapped to flat hierarchical names.

Micrometer提供了一个默认的HierarchicalNameMapper,它控制如何将多维的meter ID映射到扁平的分层名称。

To take control over this behavior, define your GraphiteMeterRegistry and supply your own HierarchicalNameMapper. An auto-configured GraphiteConfig and Clock beans are provided unless you define your own:

要控制此行为,请定义自己的GraphiteMeterRegistry并提供自己的HierarchicalNameMapper。除非您自己定义,否则将提供自动配置的GraphiteConfig和Clock bean:

import io.micrometer.core.instrument.Clock;
import io.micrometer.core.instrument.Meter;
import io.micrometer.core.instrument.config.NamingConvention;
import io.micrometer.core.instrument.util.HierarchicalNameMapper;
import io.micrometer.graphite.GraphiteConfig;
import io.micrometer.graphite.GraphiteMeterRegistry;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration(proxyBeanMethods = false)
public class MyGraphiteConfiguration {

    @Bean
    public GraphiteMeterRegistry graphiteMeterRegistry(GraphiteConfig config, Clock clock) {
        return new GraphiteMeterRegistry(config, clock, this::toHierarchicalName);
    }

    private String toHierarchicalName(Meter.Id id, NamingConvention convention) {
        // Custom mapping logic goes here; this sketch delegates to Micrometer's default mapper
        return HierarchicalNameMapper.DEFAULT.toHierarchicalName(id, convention);
    }

}

Humio

By default, the Humio registry periodically pushes metrics to cloud.humio.com. To export metrics to SaaS Humio, you must provide your API token:

默认情况下,Humio注册表会定期将指标推送到 cloud.humio.com。要将指标导出到 SaaS Humio,您必须提供API令牌:

management.metrics.export.humio.api-token=YOUR_TOKEN

You should also configure one or more tags to identify the data source to which metrics are pushed:

您还应配置一个或多个标记,以标识将指标推送到的数据源:

management.metrics.export.humio.tags.alpha=a
management.metrics.export.humio.tags.bravo=b

Influx

By default, metrics are exported to an Influx v1 instance running on your local machine with the default configuration. To export metrics to InfluxDB v2, configure the org, bucket, and authentication token for writing metrics. You can provide the location of the Influx server to use by using:

默认情况下,指标导出到在本地计算机上以默认配置运行的 Influx v1 实例。要将指标导出到 InfluxDB v2,请配置用于写入指标的org、bucket和身份验证令牌。您可以通过以下方式提供要使用的 Influx 服务器的位置:

management.metrics.export.influx.uri=https://influx.example.com:8086
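
If you target InfluxDB v2, a minimal sketch of the additional properties could look like the following (assuming a Spring Boot version that exposes the v2 settings; the org, bucket, and token values are placeholders):

management.metrics.export.influx.api-version=v2
management.metrics.export.influx.org=my-org
management.metrics.export.influx.bucket=my-bucket
management.metrics.export.influx.token=YOUR_TOKEN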

JMX

Micrometer provides a hierarchical mapping to JMX, primarily as a cheap and portable way to view metrics locally. By default, metrics are exported to the metrics JMX domain. You can provide the domain to use by using:

Micrometer提供了到JMX的分层映射,主要是作为一种在本地查看指标的简易且可移植的方式。默认情况下,指标导出到名为 metrics 的JMX域。您可以通过以下方式提供要使用的域:

management.metrics.export.jmx.domain=com.example.app.metrics

Micrometer provides a default HierarchicalNameMapper that governs how a dimensional meter ID is mapped to flat hierarchical names.

Micrometer 提供了一个默认的 HierarchicalNameMapper,它控制如何将多维的meter ID映射到扁平的分层名称。

To take control over this behavior, define your JmxMeterRegistry and supply your own HierarchicalNameMapper. An auto-configured JmxConfig and Clock beans are provided unless you define your own:

要控制此行为,请定义 JmxMeterRegistry 并提供自己的 HierarchicalNameMapper。除非您自己定义,否则将提供自动配置的JmxConfig和Clock bean:

import io.micrometer.core.instrument.Clock;
import io.micrometer.core.instrument.Meter;
import io.micrometer.core.instrument.config.NamingConvention;
import io.micrometer.core.instrument.util.HierarchicalNameMapper;
import io.micrometer.jmx.JmxConfig;
import io.micrometer.jmx.JmxMeterRegistry;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration(proxyBeanMethods = false)
public class MyJmxConfiguration {

    @Bean
    public JmxMeterRegistry jmxMeterRegistry(JmxConfig config, Clock clock) {
        return new JmxMeterRegistry(config, clock, this::toHierarchicalName);
    }

    private String toHierarchicalName(Meter.Id id, NamingConvention convention) {
        // Custom mapping logic goes here; this sketch delegates to Micrometer's default mapper
        return HierarchicalNameMapper.DEFAULT.toHierarchicalName(id, convention);
    }

}

KairosDB

By default, metrics are exported to KairosDB running on your local machine. You can provide the location of the KairosDB server to use by using:

默认情况下,指标导出到本地计算机上运行的KairosDB。您可以通过以下方式提供要使用的KairosDB服务器的位置:

management.metrics.export.kairos.uri=https://kairosdb.example.com:8080/api/v1/datapoints

New Relic

A New Relic registry periodically pushes metrics to New Relic. To export metrics to New Relic, you must provide your API key and account ID:

NewRelic注册表定期向NewRelic推送指标。要将指标导出到NewRelic,必须提供API密钥和帐户ID:

management.metrics.export.newrelic.api-key=YOUR_KEY
management.metrics.export.newrelic.account-id=YOUR_ACCOUNT_ID

You can also change the interval at which metrics are sent to New Relic:

您还可以更改向NewRelic发送度量的间隔:

management.metrics.export.newrelic.step=30s

By default, metrics are published through REST calls, but you can also use the Java Agent API if you have it on the classpath:

默认情况下,度量是通过REST调用发布的,但如果您在类路径上有Java代理API,也可以使用它:

management.metrics.export.newrelic.client-provider-type=insights-agent

Finally, you can take full control by defining your own NewRelicClientProvider bean.

最后,您可以通过定义自己的NewRelicClientProvider bean来完全控制。

Prometheus

Prometheus expects to scrape or poll individual application instances for metrics. Spring Boot provides an actuator endpoint at /actuator/prometheus to present a Prometheus scrape with the appropriate format.

Prometheus期望抓取或轮询各个应用程序实例以获取指标。Spring Boot提供了一个位于 /actuator/prometheus 的actuator端点,以适当的格式提供 Prometheus 抓取数据。


The following example shows a scrape_config to add to prometheus.yml:

scrape_configs:
  - job_name: "spring"
    metrics_path: "/actuator/prometheus"
    static_configs:
      - targets: ["HOST:PORT"]

Prometheus Exemplars are also supported. To enable this feature, a SpanContextSupplier bean should be present. If you use Spring Cloud Sleuth, this will be auto-configured for you, but you can always create your own if you want.

也支持Prometheus Exemplars。要启用此功能,应存在一个 SpanContextSupplier bean。如果您使用 Spring Cloud Sleuth,这将为您自动配置,但如果需要,您也可以创建自己的。
Please check the Prometheus Docs, since this feature needs to be explicitly enabled on Prometheus' side, and it is only supported using the OpenMetrics format.

请检查普罗米修斯文档,因为普罗米修斯需要明确启用此功能,并且仅使用OpenMetrics格式支持此功能。

For ephemeral or batch jobs that may not exist long enough to be scraped, you can use Prometheus Pushgateway support to expose the metrics to Prometheus. To enable Prometheus Pushgateway support, add the following dependency to your project:

对于存在时间可能不够长而无法被抓取的短暂作业或批处理作业,您可以使用 Prometheus Pushgateway 支持将指标公开给Prometheus。要启用 Prometheus Pushgateway 支持,请将以下依赖项添加到项目中:

<dependency>
  <groupId>io.prometheus</groupId>
  <artifactId>simpleclient_pushgateway</artifactId>
</dependency>

When the Prometheus Pushgateway dependency is present on the classpath and the management.metrics.export.prometheus.pushgateway.enabled property is set to true, a PrometheusPushGatewayManager bean is auto-configured. This manages the pushing of metrics to a Prometheus Pushgateway.

当类路径上存在 Prometheus Pushgateway 依赖项,且 management.metrics.export.prometheus.pushgateway.enabled 属性设置为true时,将自动配置一个 PrometheusPushGatewayManager bean,它负责向 Prometheus Pushgateway 推送指标。
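
A minimal sketch of enabling Pushgateway publishing in application.properties (the base-url and job values are placeholders):

management.metrics.export.prometheus.pushgateway.enabled=true
management.metrics.export.prometheus.pushgateway.base-url=https://pushgateway.example.com:9091
management.metrics.export.prometheus.pushgateway.job=my-batch-job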

You can tune the PrometheusPushGatewayManager by using properties under management.metrics.export.prometheus.pushgateway. For advanced configuration, you can also provide your own PrometheusPushGatewayManager bean.

您可以使用 management.metrics.export.prometheus.pushgateway 下的属性来调整 PrometheusPushGatewayManager。对于高级配置,您也可以提供自己的 PrometheusPushGatewayManager bean。

SignalFx

SignalFx registry periodically pushes metrics to SignalFx. To export metrics to SignalFx, you must provide your access token:

SignalFx注册表定期将指标推送到SignalFx。要将指标导出到SignalFx,必须提供访问令牌:

management.metrics.export.signalfx.access-token=YOUR_ACCESS_TOKEN

You can also change the interval at which metrics are sent to SignalFx:

您还可以更改向SignalFx发送指标的间隔:

management.metrics.export.signalfx.step=30s

Simple

Micrometer ships with a simple, in-memory backend that is automatically used as a fallback if no other registry is configured. This lets you see what metrics are collected in the metrics endpoint.

Micrometer附带了一个简单的内存后端,如果没有配置其他注册表,该后端将自动用作备用。这让您看到在指标端点中收集了哪些指标。

The in-memory backend disables itself as soon as you use any other available backend. You can also disable it explicitly:

一旦使用任何其他可用的后端,内存中的后端就会立即禁用自己。您也可以显式禁用它:

management.metrics.export.simple.enabled=false

Stackdriver

The Stackdriver registry periodically pushes metrics to Stackdriver. To export metrics to SaaS Stackdriver, you must provide your Google Cloud project ID:

Stackdriver注册表定期将指标推送到Stackdriver。要将指标导出到 SaaS Stackdriver,您必须提供您的Google Cloud项目ID:

management.metrics.export.stackdriver.project-id=my-project

You can also change the interval at which metrics are sent to Stackdriver:

您还可以更改向Stackdriver发送度量的间隔:

management.metrics.export.stackdriver.step=30s

StatsD

The StatsD registry eagerly pushes metrics over UDP to a StatsD agent. By default, metrics are exported to a StatsD agent running on your local machine. You can provide the StatsD agent host, port, and protocol to use by using:

StatsD注册表急切地通过UDP将指标推送到StatsD代理。默认情况下,指标导出到本地计算机上运行的StatsD代理。您可以通过以下方式提供要使用的StatsD代理主机、端口和协议:

management.metrics.export.statsd.host=statsd.example.com
management.metrics.export.statsd.port=9125
management.metrics.export.statsd.protocol=udp

You can also change the StatsD line protocol to use (it defaults to Datadog):

management.metrics.export.statsd.flavor=etsy

Wavefront

The Wavefront registry periodically pushes metrics to Wavefront. If you are exporting metrics to Wavefront directly, you must provide your API token:

Wavefront注册表定期将指标推送到Wavefront。如果您要直接将指标导出到Wavefront,则必须提供API令牌:

management.metrics.export.wavefront.api-token=YOUR_API_TOKEN

Alternatively, you can use a Wavefront sidecar or an internal proxy in your environment to forward metrics data to the Wavefront API host:

或者,您可以在环境中使用Wavefront sidecar或内部代理将指标数据转发到WavefrontAPI主机:

management.metrics.export.wavefront.uri=proxy://localhost:2878

If you publish metrics to a Wavefront proxy (as described in the Wavefront documentation), the host must be in the proxy://HOST:PORT format.

如果将指标发布到Wavefront代理(如Wavefront文档中所述),则主机必须是proxy://HOST:PORT 格式。

You can also change the interval at which metrics are sent to Wavefront:

management.metrics.export.wavefront.step=30s

13.6.3. Supported Metrics and Meters

Spring Boot provides automatic meter registration for a wide variety of technologies. In most situations, the defaults provide sensible metrics that can be published to any of the supported monitoring systems.

Spring Boot为各种技术提供自动的meter注册。在大多数情况下,默认配置提供的合理指标可以发布到任何受支持的监控系统。

JVM Metrics

Auto-configuration enables JVM Metrics by using core Micrometer classes. JVM metrics are published under the jvm. meter name.

自动配置通过使用核心 Micrometer 类启用JVM指标。JVM指标以 jvm. 开头的meter名称发布。

The following JVM metrics are provided:

  • Various memory and buffer pool details
  • 各种内存和缓冲池详细信息
  • Statistics related to garbage collection
  • 与垃圾收集相关的统计信息
  • Thread utilization
  • 线程利用率
  • The number of classes loaded and unloaded
  • 加载和卸载的类数

System Metrics

Auto-configuration enables system metrics by using core Micrometer classes. System metrics are published under the system., process., and disk. meter names.

自动配置通过使用核心 Micrometer 类启用系统指标。系统指标以 system.、process. 和 disk. 开头的meter名称发布。

The following system metrics are provided:

  • CPU metrics(CPU指标)
  • File descriptor metrics (文件描述符指标)
  • Uptime metrics (both the amount of time the application has been running and a fixed gauge of the absolute start time)(正常运行时间指标(应用程序运行的时间和绝对启动时间的固定度量))
  • Disk space available(可用磁盘空间)

Application Startup Metrics

Auto-configuration exposes application startup time metrics:

自动配置导出应用程序启动时间指标

  • application.started.time: time taken to start the application.
  • application.ready.time: time taken for the application to be ready to service requests.

Metrics are tagged by the fully qualified name of the application class.

度量由应用程序类的完全限定名称标记

Logger Metrics

Auto-configuration enables the event metrics for both Logback and Log4J2. The details are published under the log4j2.events. or logback.events. meter names.

自动配置启用Logback和Log4J2的事件指标。详细信息以 log4j2.events. 或 logback.events. 的meter名称发布。

Task Execution and Scheduling Metrics

Auto-configuration enables the instrumentation of all available ThreadPoolTaskExecutor and ThreadPoolTaskScheduler beans, as long as the underlying ThreadPoolExecutor is available. Metrics are tagged by the name of the executor, which is derived from the bean name.

自动配置允许检测所有可用的ThreadPoolTaskExecutor和ThreadPoolTaskScheduler bean,只要底层的ThreadPoolExecutor可用。指标由执行器的名称标记,该名称源自bean名称。

Spring MVC Metrics

Auto-configuration enables the instrumentation of all requests handled by Spring MVC controllers and functional handlers. By default, metrics are generated with the name, http.server.requests. You can customize the name by setting the management.metrics.web.server.request.metric-name property.

自动配置启用对Spring MVC控制器和函数式处理程序所处理的所有请求的检测。默认情况下,生成的指标名称为 http.server.requests。您可以通过设置 management.metrics.web.server.request.metric-name 属性自定义名称。

@Timed annotations are supported on @Controller classes and @RequestMapping methods (see @Timed Annotation Support for details). If you do not want to record metrics for all Spring MVC requests, you can set management.metrics.web.server.request.autotime.enabled to false and exclusively use @Timed annotations instead.

@Controller 类和 @RequestMapping 方法支持 @Timed 注解(有关详细信息,请参阅@Timed Annotation Support)。如果不想记录所有Spring MVC请求的指标,可以将 management.metrics.web.server.request.autotime.enabled 设置为false,并仅使用 @Timed 注解代替。
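
The following is a minimal sketch of the @Timed support described above (the controller, request path, and tag values are made up for illustration):

import io.micrometer.core.annotation.Timed;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class MyTimedController {

    // Adds the extra "region" tag to the http.server.requests timing recorded for this handler
    @Timed(extraTags = { "region", "us-east-1" })
    @GetMapping("/api/people")
    public String listPeople() {
        return "[]";
    }

}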

By default, Spring MVC related metrics are tagged with the following information:

  • exception: The simple class name of any exception that was thrown while handling the request.
  • method: The request’s method (for example, GET or POST)
  • outcome: The request’s outcome, based on the status code of the response. 1xx is INFORMATIONAL, 2xx is SUCCESS, 3xx is REDIRECTION, 4xx is CLIENT_ERROR, and 5xx is SERVER_ERROR
  • status: The response’s HTTP status code (for example, 200 or 500)
  • uri: The request’s URI template prior to variable substitution, if possible (for example, /api/person/{id})

To add to the default tags, provide one or more @Beans that implement WebMvcTagsContributor. To replace the default tags, provide a @Bean that implements WebMvcTagsProvider.

要添加默认标记以外的标记,请提供一个或多个实现了 WebMvcTagsContributor 的 @Bean。要替换默认标记,请提供一个实现 WebMvcTagsProvider 的 @Bean。
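
The following is a minimal sketch of contributing an extra tag through WebMvcTagsContributor (the configuration class name and the "protocol" tag are illustrative, not from the reference documentation):

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import io.micrometer.core.instrument.Tag;
import io.micrometer.core.instrument.Tags;

import org.springframework.boot.actuate.metrics.web.servlet.WebMvcTagsContributor;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration(proxyBeanMethods = false)
public class MyMvcTagsConfiguration {

    @Bean
    public WebMvcTagsContributor customWebMvcTagsContributor() {
        return new WebMvcTagsContributor() {

            @Override
            public Iterable<Tag> getTags(HttpServletRequest request, HttpServletResponse response,
                    Object handler, Throwable exception) {
                // Illustrative extra tag: the HTTP protocol version of the request
                return Tags.of("protocol", request.getProtocol());
            }

            @Override
            public Iterable<Tag> getLongRequestTags(HttpServletRequest request, Object handler) {
                return Tags.empty();
            }

        };
    }

}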

In some cases, exceptions handled in web controllers are not recorded as request metrics tags. Applications can opt in and record exceptions by setting handled exceptions as request attributes.

在某些情况下,web控制器中处理的异常不会记录为请求度量标记。应用程序可以通过将已处理的异常设置为请求属性来选择并记录异常。

By default, all requests are handled. To customize the filter, provide a @Bean that implements FilterRegistrationBean<WebMvcMetricsFilter>.

默认情况下,会处理所有请求。要自定义过滤器,请提供一个实现 FilterRegistrationBean<WebMvcMetricsFilter> 的@Bean。

Spring WebFlux Metrics

Auto-configuration enables the instrumentation of all requests handled by Spring WebFlux controllers and functional handlers. By default, metrics are generated with the name, http.server.requests. You can customize the name by setting the management.metrics.web.server.request.metric-name property.

自动配置启用对Spring WebFlux控制器和函数式处理程序所处理的所有请求的检测。默认情况下,生成的指标名称为 http.server.requests。您可以通过设置 management.metrics.web.server.request.metric-name 属性自定义名称。

@Timed annotations are supported on @Controller classes and @RequestMapping methods (see @Timed Annotation Support for details). If you do not want to record metrics for all Spring WebFlux requests, you can set management.metrics.web.server.request.autotime.enabled to false and exclusively use @Timed annotations instead.

@Controller 类和 @RequestMapping 方法支持 @Timed 注解(有关详细信息,请参阅@Timed Annotation Support)。如果不想记录所有Spring WebFlux请求的指标,可以将 management.metrics.web.server.request.autotime.enabled 设置为false,并仅使用 @Timed 注解代替。

By default, WebFlux related metrics are tagged with the following information:

  • exception: The simple class name of any exception that was thrown while handling the request.
  • method: The request’s method (for example, GET or POST)
  • outcome: The request’s outcome, based on the status code of the response. 1xx is INFORMATIONAL, 2xx is SUCCESS, 3xx is REDIRECTION, 4xx is CLIENT_ERROR, and 5xx is SERVER_ERROR
  • status: The response’s HTTP status code (for example, 200 or 500)
  • uri: The request’s URI template prior to variable substitution, if possible (for example, /api/person/{id})

Jersey Server Metrics

Auto-configuration enables the instrumentation of all requests handled by the Jersey JAX-RS implementation. By default, metrics are generated with the name, http.server.requests. You can customize the name by setting the management.metrics.web.server.request.metric-name property.

@Timed annotations are supported on request-handling classes and methods (see @Timed Annotation Support for details). If you do not want to record metrics for all Jersey requests, you can set management.metrics.web.server.request.autotime.enabled to false and exclusively use @Timed annotations instead.

By default, Jersey server metrics are tagged with the following information:

  • exception: The simple class name of any exception that was thrown while handling the request.
  • method: The request’s method (for example, GET or POST)
  • outcome: The request’s outcome, based on the status code of the response. 1xx is INFORMATIONAL, 2xx is SUCCESS, 3xx is REDIRECTION, 4xx is CLIENT_ERROR, and 5xx is SERVER_ERROR
  • status: The response’s HTTP status code (for example, 200 or 500)
  • uri: The request’s URI template prior to variable substitution, if possible (for example, /api/person/{id})

To customize the tags, provide a @Bean that implements JerseyTagsProvider.

HTTP Client Metrics

Spring Boot Actuator manages the instrumentation of both RestTemplate and WebClient. For that, you have to inject the auto-configured builder and use it to create instances:

Spring Boot Actuator 管理 RestTemplate 和 WebClient 的检测。为此,您必须注入自动配置的构建器并使用它创建实例(见下方示例):

  • RestTemplateBuilder for RestTemplate
  • WebClient.Builder for WebClient
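
For example, a minimal sketch of creating an instrumented RestTemplate from the auto-configured builder (the service name and URL are placeholders):

import org.springframework.boot.web.client.RestTemplateBuilder;
import org.springframework.stereotype.Service;
import org.springframework.web.client.RestTemplate;

@Service
public class MyRemoteService {

    private final RestTemplate restTemplate;

    public MyRemoteService(RestTemplateBuilder restTemplateBuilder) {
        // Instances built from the auto-configured builder contribute to http.client.requests
        this.restTemplate = restTemplateBuilder.build();
    }

    public String fetchGreeting() {
        return this.restTemplate.getForObject("https://example.com/greeting", String.class);
    }

}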

You can also manually apply the customizers responsible for this instrumentation, namely MetricsRestTemplateCustomizer and MetricsWebClientCustomizer.

您还可以手动应用负责此检测的自定义程序,即 MetricsRestTemplateCustomizer 和 MetricsWebClientCustomizer。

By default, metrics are generated with the name, http.client.requests. You can customize the name by setting the management.metrics.web.client.request.metric-name property.

默认情况下,生成的指标名称为 http.client.requests。您可以通过设置 management.metrics.web.client.request.metric-name 属性自定义名称。

By default, metrics generated by an instrumented client are tagged with the following information:

  • clientName: The host portion of the URI
  • method: The request’s method (for example, GET or POST)
  • outcome: The request’s outcome, based on the status code of the response. 1xx is INFORMATIONAL, 2xx is SUCCESS, 3xx is REDIRECTION, 4xx is CLIENT_ERROR, and 5xx is SERVER_ERROR. Otherwise, it is UNKNOWN.
  • status: The response’s HTTP status code if available (for example, 200 or 500) or IO_ERROR in case of I/O issues. Otherwise, it is CLIENT_ERROR.
  • uri: The request’s URI template prior to variable substitution, if possible (for example, /api/person/{id})

To customize the tags, and depending on your choice of client, you can provide a @Bean that implements RestTemplateExchangeTagsProvider or WebClientExchangeTagsProvider. There are convenience static functions in RestTemplateExchangeTags and WebClientExchangeTags.

要自定义标记,根据您选择的客户端,您可以提供一个实现 RestTemplateExchangeTagsProvider 或 WebClientExchangeTagsProvider 的 @Bean。RestTemplateExchangeTags 和 WebClientExchangeTags 中有方便的静态方法。

If you do not want to record metrics for all RestTemplate and WebClient requests, set management.metrics.web.client.request.autotime.enabled to false.

如果不需要记录 RestTemplate 和 WebClient 的所有请求指标,可以将 management.metrics.web.client.request.autotime.enabled 设置为 false。

Tomcat Metrics

Auto-configuration enables the instrumentation of Tomcat only when an MBeanRegistry is enabled. By default, the MBeanRegistry is disabled, but you can enable it by setting server.tomcat.mbeanregistry.enabled to true.

自动配置仅在启用 MBeanRegistry 时才启用对Tomcat的检测。默认情况下,MBeanRegistry 处于禁用状态,但您可以通过将 server.tomcat.mbeanregistry.enabled 设置为true来启用它。
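
For example, enabling the MBean registry in application.properties so that Tomcat metrics are published:

server.tomcat.mbeanregistry.enabled=true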

Tomcat metrics are published under the tomcat. meter name.

Tomcat指标以 tomcat. 开头的meter名称发布。

Cache Metrics

Auto-configuration enables the instrumentation of all available Cache instances on startup, with metrics prefixed with cache. Cache instrumentation is standardized for a basic set of metrics. Additional, cache-specific metrics are also available.

自动配置允许在启动时检测所有可用的Cache实例,指标以 cache 为前缀。缓存检测已针对一组基本指标进行了标准化;此外还提供特定于具体缓存实现的指标。

The following cache libraries are supported:

  • Cache2k
  • Caffeine
  • EhCache 2
  • Hazelcast
  • Any compliant JCache (JSR-107) implementation
  • Redis

Metrics are tagged by the name of the cache and by the name of the CacheManager, which is derived from the bean name.

度量由缓存的名称和CacheManager的名称标记,CacheManager是从bean名称派生的。

Spring GraphQL Metrics

Auto-configuration enables the instrumentation of GraphQL queries, for any supported transport.

Spring Boot records a graphql.request timer with:

  • outcome: Request outcome. Sample values: "SUCCESS", "ERROR"

A single GraphQL query can involve many DataFetcher calls, so there is a dedicated graphql.datafetcher timer:

  • path: data fetcher path. Sample values: "Query.project"
  • outcome: data fetching outcome. Sample values: "SUCCESS", "ERROR"

The graphql.request.datafetch.count distribution summary counts the number of non-trivial DataFetcher calls made per request. This metric is useful for detecting "N+1" data fetching issues and considering batch loading; it provides the "TOTAL" number of data fetcher calls made over the "COUNT" of recorded requests, as well as the "MAX" calls made for a single request over the considered period. More options are available for configuring distributions with application properties.

graphql.request.datafetch.count 分布摘要统计每个请求所进行的非平凡DataFetcher调用数。该指标对于检测"N+1"数据获取问题和考虑批量加载非常有用;它提供了在记录的请求数("COUNT")上进行的数据获取器调用总数("TOTAL"),以及所考虑时间段内单个请求的最大("MAX")调用数。还有更多选项可用于通过应用程序属性配置分布。

A single response can contain many GraphQL errors, counted by the graphql.error counter:

单个响应可以包含许多GraphQL错误,由 graphql.error 计数器计数:

  • errorType: error type. Sample values: "DataFetchingException"
  • errorPath: error JSON Path. Sample values: "$.project"

DataSource Metrics

Auto-configuration enables the instrumentation of all available DataSource objects with metrics prefixed with jdbc.connections. Data source instrumentation results in gauges that represent the currently active, idle, maximum allowed, and minimum allowed connections in the pool.

自动配置启用对所有可用DataSource对象的检测,这些指标以 jdbc.connections 为前缀。数据源检测会生成表示池中当前活动连接、空闲连接、最大允许连接和最小允许连接数的指标。

Metrics are also tagged by the name of the DataSource computed based on the bean name.

统计指标还通过基于bean名称计算的DataSource的名称进行标记。

By default, Spring Boot provides metadata for all supported data sources. You can add additional DataSourcePoolMetadataProvider beans if your favorite data source is not supported. See DataSourcePoolMetadataProvidersConfiguration for examples.

默认情况下,Spring Boot为所有受支持的数据源提供元数据。如果不支持您所使用的数据源,则可以添加额外的 DataSourcePoolMetadataProvider bean。有关示例,请参阅 DataSourcePoolMetadataProvidersConfiguration。

Also, Hikari-specific metrics are exposed with a hikaricp prefix. Each metric is tagged by the name of the pool (you can control it with spring.datasource.name).

此外,Hikari特定的指标以 hikaricp 为前缀公开。每个指标都由连接池的名称标记(您可以使用 spring.datasource.name 控制它)。

Spring Data Repository Metrics

Auto-configuration enables the instrumentation of all Spring Data Repository method invocations. By default, metrics are generated with the name, spring.data.repository.invocations. You can customize the name by setting the management.metrics.data.repository.metric-name property.

自动配置启用对所有Spring Data Repository方法调用的检测。默认情况下,生成的指标名称为 spring.data.repository.invocations。您可以通过设置 management.metrics.data.repository.metric-name 属性自定义名称。

@Timed annotations are supported on Repository classes and methods (see @Timed Annotation Support for details). If you do not want to record metrics for all Repository invocations, you can set management.metrics.data.repository.autotime.enabled to false and exclusively use @Timed annotations instead.

Repository 类和方法支持 @Timed 注解(有关详细信息,请参阅@Timed Annotation Support)。如果不想记录所有Repository调用的指标,可以将 management.metrics.data.repository.autotime.enabled 设置为false,并仅使用 @Timed 注解代替。

By default, repository invocation related metrics are tagged with the following information:

默认情况下,repository 调用相关的指标标记有以下信息:

  • repository: The simple class name of the source Repository.
  • method: The name of the Repository method that was invoked.
  • state: The result state (SUCCESS, ERROR, CANCELED, or RUNNING).
  • exception: The simple class name of any exception that was thrown from the invocation.

To replace the default tags, provide a @Bean that implements RepositoryTagsProvider.

RabbitMQ Metrics

Auto-configuration enables the instrumentation of all available RabbitMQ connection factories with a metric named rabbitmq.

Spring Integration Metrics

Spring Integration automatically provides Micrometer support whenever a MeterRegistry bean is available. Metrics are published under the spring.integration. meter name.

Kafka Metrics

Auto-configuration registers a MicrometerConsumerListener and MicrometerProducerListener for the auto-configured consumer factory and producer factory, respectively. It also registers a KafkaStreamsMicrometerListener for StreamsBuilderFactoryBean. For more detail, see the Micrometer Native Metrics section of the Spring Kafka documentation.

自动配置分别为自动配置的消费者工厂和生产者工厂注册 MicrometerConsumerListener 和 MicrometerProducerListener。它还为StreamsBuilderFactoryBean注册了KafkaStreamsMicrometerListener。有关更多详细信息,请参阅Spring Kafka文档的Micrometer Native Metrics部分。

Redis Metrics

Auto-configuration registers a MicrometerCommandLatencyRecorder for the auto-configured LettuceConnectionFactory. For more detail, see the Micrometer Metrics section of the Lettuce documentation.

自动配置为自动配置的LettuceConnectionFactory注册MicrometerCommandLatencyRecorder。有关详细信息,请参阅Lettuce文档的 Micrometer Metrics 部分。

13.7. Auditing

Once Spring Security is in play, Spring Boot Actuator has a flexible audit framework that publishes events (by default, “authentication success”, “failure” and “access denied” exceptions). This feature can be very useful for reporting and for implementing a lock-out policy based on authentication failures.

一旦启用了Spring Security,Spring Boot Actuator就有一个灵活的审计框架来发布事件(默认情况下,“身份验证成功”、“失败”和“拒绝访问”异常)。此功能对于报告和基于身份验证失败实施锁定策略非常有用。

You can enable auditing by providing a bean of type AuditEventRepository in your application’s configuration. For convenience, Spring Boot offers an InMemoryAuditEventRepository. InMemoryAuditEventRepository has limited capabilities, and we recommend using it only for development environments. For production environments, consider creating your own alternative AuditEventRepository implementation.

您可以通过在应用程序的配置中提供 AuditEventRepository 类型的 bean 来启用审计。为了方便起见,Spring Boot提供了 InMemoryAuditEventRepository。InMemoryAuditEventRepository 的功能有限,我们建议仅在开发环境中使用它。对于生产环境,请考虑创建自己的 AuditEventRepository 实现。
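
A minimal sketch of enabling auditing by exposing the built-in InMemoryAuditEventRepository (the configuration class name is illustrative):

import org.springframework.boot.actuate.audit.AuditEventRepository;
import org.springframework.boot.actuate.audit.InMemoryAuditEventRepository;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration(proxyBeanMethods = false)
public class MyAuditConfiguration {

    @Bean
    public AuditEventRepository auditEventRepository() {
        // In-memory storage of audit events; recommended for development environments only
        return new InMemoryAuditEventRepository();
    }

}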

13.7.1. Custom Auditing

To customize published security events, you can provide your own implementations of AbstractAuthenticationAuditListener and AbstractAuthorizationAuditListener.

要自定义发布的安全事件,您可以提供自己的 AbstractAuthenticationAuditListener 和 AbstractAuthorizationAuditListener 实现。

You can also use the audit services for your own business events. To do so, either inject the AuditEventRepository bean into your own components and use that directly or publish an AuditApplicationEvent with the Spring ApplicationEventPublisher (by implementing ApplicationEventPublisherAware).

您还可以将审计服务用于自己的业务事件。为此,可以将 AuditEventRepository bean 注入到您自己的组件中并直接使用它,或者使用 Spring 的 ApplicationEventPublisher 发布 AuditApplicationEvent(通过实现ApplicationEventPublisherAware)。
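
A minimal sketch of publishing a business audit event through the ApplicationEventPublisher (the component name, event type, and data entry are made up for illustration):

import org.springframework.boot.actuate.audit.listener.AuditApplicationEvent;
import org.springframework.context.ApplicationEventPublisher;
import org.springframework.context.ApplicationEventPublisherAware;
import org.springframework.stereotype.Component;

@Component
public class MyAuditPublisher implements ApplicationEventPublisherAware {

    private ApplicationEventPublisher publisher;

    @Override
    public void setApplicationEventPublisher(ApplicationEventPublisher publisher) {
        this.publisher = publisher;
    }

    public void recordOrderPlaced(String username, String orderId) {
        // Publishes a custom audit event; it ends up in the configured AuditEventRepository
        this.publisher.publishEvent(new AuditApplicationEvent(username, "ORDER_PLACED", "orderId=" + orderId));
    }

}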

13.8. HTTP Tracing

You can enable HTTP Tracing by providing a bean of type HttpTraceRepository in your application’s configuration. For convenience, Spring Boot offers InMemoryHttpTraceRepository, which stores traces for the last 100 (the default) request-response exchanges. InMemoryHttpTraceRepository is limited compared to other tracing solutions, and we recommend using it only for development environments. For production environments, we recommend using a production-ready tracing or observability solution, such as Zipkin or Spring Cloud Sleuth. Alternatively, you can create your own HttpTraceRepository.

您可以通过在应用程序的配置中提供 HttpTraceRepository 类型的 bean 来启用HTTP跟踪。为了方便起见,Spring Boot 提供了 InMemoryHttpTraceRepository,它存储最近100次(默认)请求-响应交换的跟踪。与其他跟踪解决方案相比,InMemoryHttpTraceRepository 的功能有限,我们建议仅在开发环境中使用它。对于生产环境,我们建议使用生产就绪的跟踪或可观测性解决方案,例如 Zipkin 或 Spring Cloud Sleuth。或者,您可以创建自己的 HttpTraceRepository。
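
A minimal sketch of enabling HTTP tracing with the in-memory repository (the configuration class name is illustrative):

import org.springframework.boot.actuate.trace.http.HttpTraceRepository;
import org.springframework.boot.actuate.trace.http.InMemoryHttpTraceRepository;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration(proxyBeanMethods = false)
public class MyHttpTraceConfiguration {

    @Bean
    public HttpTraceRepository httpTraceRepository() {
        // Keeps the most recent request-response exchanges in memory; development use only
        return new InMemoryHttpTraceRepository();
    }

}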

You can use the httptrace endpoint to obtain information about the request-response exchanges that are stored in the HttpTraceRepository.

您可以使用 httptrace 端点获取有关存储在 HttpTraceRepository 中的请求-响应交换的信息。

13.8.1. Custom HTTP tracing

To customize the items that are included in each trace, use the management.trace.http.include configuration property. For advanced customization, consider registering your own HttpExchangeTracer implementation.

要自定义每个跟踪中包含的项目,请使用 management.trace.http.include 配置属性。对于高级定制,请考虑注册自己的 HttpExchangeTracer 实现。
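
For example, a property of the following form narrows what each trace records (the chosen values are illustrative members of the Include enum; check the supported values for your Spring Boot version):

management.trace.http.include=request-headers,response-headers,time-taken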

13.9. Process Monitoring

In the spring-boot module, you can find two classes to create files that are often useful for process monitoring:

在spring boot模块中,您可以找到两个类来创建通常对进程监控有用的文件:

  • ApplicationPidFileWriter creates a file that contains the application PID (by default, in the application directory with a file name of application.pid).
ApplicationPidFileWriter 创建一个包含应用程序PID的文件(默认情况下,在应用程序目录中,文件名为 application.pid)。

  • WebServerPortFileWriter creates a file (or files) that contain the ports of the running web server (by default, in the application directory with a file name of application.port).
WebServerPortFileWriter 创建一个或多个文件,其中包含正在运行的web服务器的端口(默认情况下,在应用程序目录中,文件名为 application.port)。

By default, these writers are not activated, but you can enable them:

  • By Extending Configuration
  • Programmatically Enabling Process Monitoring

13.9.1. Extending Configuration

In the META-INF/spring.factories file, you can activate the listener (or listeners) that writes a PID file:

在 META-INF/spring.factories 文件中,您可以激活写入PID文件的监听器:

org.springframework.context.ApplicationListener=\
org.springframework.boot.context.ApplicationPidFileWriter,\
org.springframework.boot.web.context.WebServerPortFileWriter

13.9.2. Programmatically Enabling Process Monitoring

You can also activate a listener by invoking the SpringApplication.addListeners(…) method and passing the appropriate Writer object. This method also lets you customize the file name and path in the Writer constructor.

您还可以通过调用SpringApplication.addListeners(…)方法并传递适当的Writer对象来激活侦听器。此方法还允许您自定义Writer构造函数中的文件名和路径。
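
A minimal sketch of registering both writers programmatically (the file paths are placeholders):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.context.ApplicationPidFileWriter;
import org.springframework.boot.web.context.WebServerPortFileWriter;

@SpringBootApplication
public class MyApplication {

    public static void main(String[] args) {
        SpringApplication application = new SpringApplication(MyApplication.class);
        // Write the PID and the web server port to custom locations instead of the defaults
        application.addListeners(new ApplicationPidFileWriter("./run/app.pid"),
                new WebServerPortFileWriter("./run/app.port"));
        application.run(args);
    }

}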

13.10. Cloud Foundry Support

Spring Boot’s actuator module includes additional support that is activated when you deploy to a compatible Cloud Foundry instance. The /cloudfoundryapplication path provides an alternative secured route to all @Endpoint beans.

Spring Boot actuator 模块包括在部署到兼容的 Cloud Foundry 实例时激活的额外支持。/cloudfoundryapplication 路径为所有 @Endpoint bean 提供了另一种安全路由。

The extended support lets Cloud Foundry management UIs (such as the web application that you can use to view deployed applications) be augmented with Spring Boot actuator information. For example, an application status page can include full health information instead of the typical “running” or “stopped” status.

扩展的支持使 Cloud Foundry 管理UI(例如可用于查看已部署应用程序的web应用程序)可以使用 Spring Boot actuator信息进行扩充。例如,应用程序状态页可以包含完整的健康信息,而不是典型的"正在运行"或"已停止"状态。

The /cloudfoundryapplication path is not directly accessible to regular users. To use the endpoint, you must pass a valid UAA token with the request.

普通用户无法直接访问/cloudfoundryapplication路径。要使用端点,必须随请求传递有效的UAA令牌。

13.10.1. Disabling Extended Cloud Foundry Actuator Support

If you want to fully disable the /cloudfoundryapplication endpoints, you can add the following setting to your application.properties file:

如果要完全禁用 /cloudfoundryapplication 端点,可以向 application.properties 文件添加以下设置:

management.cloudfoundry.enabled=false

13.10.2. Cloud Foundry Self-signed Certificates

By default, the security verification for /cloudfoundryapplication endpoints makes SSL calls to various Cloud Foundry services. If your Cloud Foundry UAA or Cloud Controller services use self-signed certificates, you need to set the following property:

默认情况下,/cloudfoundryapplication 端点的安全验证会对各种 Cloud Foundry 服务进行SSL调用。如果您的 Cloud Foundry UAA 或 Cloud Controller 服务使用自签名证书,则需要设置以下属性:

management.cloudfoundry.skip-ssl-validation=true

13.10.3. Custom Context Path

If the server’s context-path has been configured to anything other than /, the Cloud Foundry endpoints are not available at the root of the application. For example, if server.servlet.context-path=/app, Cloud Foundry endpoints are available at /app/cloudfoundryapplication/*.

如果服务器的context-path已配置为 / 以外的任何其他路径,则 Cloud Foundry 端点在应用程序的根路径下不可用。例如,如果 server.servlet.context-path=/app,则Cloud Foundry端点位于 /app/cloudfoundryapplication/*。

If you expect the Cloud Foundry endpoints to always be available at /cloudfoundryapplication/*, regardless of the server’s context-path, you need to explicitly configure that in your application. The configuration differs, depending on the web server in use. For Tomcat, you can add the following configuration:

如果您希望Cloud Foundry端点始终在

/cloudfoundryapplication/*

处可用,而不管服务器的上下文路径如何,则需要在应用程序中明确配置。根据使用的web服务器,配置有所不同。对于Tomcat,可以添加以下配置:

import java.io.IOException;
import java.util.Collections;

import javax.servlet.GenericServlet;
import javax.servlet.Servlet;
import javax.servlet.ServletContainerInitializer;
import javax.servlet.ServletContext;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;

import org.apache.catalina.Host;
import org.apache.catalina.core.StandardContext;
import org.apache.catalina.startup.Tomcat;

import org.springframework.boot.web.embedded.tomcat.TomcatServletWebServerFactory;
import org.springframework.boot.web.servlet.ServletContextInitializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration(proxyBeanMethods = false)
public class MyCloudFoundryConfiguration {

    @Bean
    public TomcatServletWebServerFactory servletWebServerFactory() {
        return new TomcatServletWebServerFactory() {

            @Override
            protected void prepareContext(Host host, ServletContextInitializer[] initializers) {
                super.prepareContext(host, initializers);
                StandardContext child = new StandardContext();
                child.addLifecycleListener(new Tomcat.FixContextListener());
                child.setPath("/cloudfoundryapplication");
                ServletContainerInitializer initializer = getServletContextInitializer(getContextPath());
                child.addServletContainerInitializer(initializer, Collections.emptySet());
                child.setCrossContext(true);
                host.addChild(child);
            }

        };
    }

    private ServletContainerInitializer getServletContextInitializer(String contextPath) {
        return (classes, context) -> {
            Servlet servlet = new GenericServlet() {

                @Override
                public void service(ServletRequest req, ServletResponse res) throws ServletException, IOException {
                    ServletContext context = req.getServletContext().getContext(contextPath);
                    context.getRequestDispatcher("/cloudfoundryapplication").forward(req, res);
                }

            };
            context.addServlet("cloudfoundry", servlet).addMapping("/*");
        };
    }

}