Collecting Metric Data from a Web API Endpoint: Prometheus Configuration in Practice


  

The main purpose of this article is to show you how to configure Prometheus so that it can collect metric data from a specified Web API endpoint. The case used here is an NGINX scrape configuration: the data comes from an NGINX metrics page that is protected with a username and password, so an alternative title for this article could be "Prometheus scrape configuration for NGINX" or "Prometheus scraping NGINX behind basic auth".

  

The figure above shows what the Grafana dashboard template looks like once the configuration is complete.

Anyone who has used Prometheus knows how to configure a target in the address:port form. For example, when scraping metrics from a Redis instance, the configuration can be written like this:

  - job_name: 'redis'
    static_configs:
      - targets: ['11.22.33.58:6087']

Note: the above assumes that the Redis Exporter's address and port are 11.22.33.58:6087.
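For orientation, each job fragment shown in this article lives under the scrape_configs key of prometheus.yml. A minimal sketch of the surrounding file, assuming a 15s scrape interval purely for illustration, looks like this:

global:
  # Assumed example interval; not specified in the original article.
  scrape_interval: 15s

scrape_configs:
  - job_name: 'redis'
    static_configs:
      - targets: ['11.22.33.58:6087']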

This is the simplest and most widely known approach. But if you want to monitor a specified Web API, you cannot write the configuration this way. If you had not come across this article, you would probably have gone looking for the answer in a search engine.

Unfortunately (as of March 2021), that search turns up no useful information; basically all you can find are the pitfalls.

Assumptions

Suppose we now need to scrape Prometheus monitoring metrics from an endpoint at the address ..., and that the endpoint uses basic auth for authorization (assume the username is weishidong and the password is 0099887kk).

Configuration in Practice

Following the Prometheus configuration we saw earlier, you would most likely write the configuration like this:

  - job_name: 'web'
    static_configs:
      - targets: ['http://www.weishidong.com/status/format/prometheus']
    basic_auth:
      username: weishidong
      password: 0099887kk

After saving the configuration file and restarting the service, you will find that no data gets collected this way. How dreadful.

The Official Configuration Guide

That failure is disheartening. When we run into a problem we do not understand, the natural place to go is, of course, the official documentation -> Prometheus Configuration. It is worth reading from top to bottom, but if you are in a hurry you can jump straight to this part. The official example is shown below (the full document is long, so only the parts relevant to this article are kept; reading the original is recommended):

# The job name assigned to scraped metrics by default.
job_name: <job_name>

# How frequently to scrape targets from this job.
[ scrape_interval: <duration> | default = <global_config.scrape_interval> ]

# Per-scrape timeout when scraping this job.
[ scrape_timeout: <duration> | default = <global_config.scrape_timeout> ]

# The HTTP resource path on which to fetch metrics from targets.
[ metrics_path: <path> | default = /metrics ]

# honor_labels controls how Prometheus handles conflicts between labels that are
# already present in scraped data and labels that Prometheus would attach
# server-side ("job" and "instance" labels, manually configured target
# labels, and labels generated by service discovery implementations).
#
# If honor_labels is set to "true", label conflicts are resolved by keeping label
# values from the scraped data and ignoring the conflicting server-side labels.
#
# If honor_labels is set to "false", label conflicts are resolved by renaming
# conflicting labels in the scraped data to "exported_<original-label>" (for
# example "exported_instance", "exported_job") and then attaching server-side
# labels.
#
# Setting honor_labels to "true" is useful for use cases such as federation and
# scraping the Pushgateway, where all labels specified in the target should be
# preserved.
#
# Note that any globally configured "external_labels" are unaffected by this
# setting. In communication with external systems, they are always applied only
# when a time series does not have a given label yet and are ignored otherwise.
[ honor_labels: <boolean> | default = false ]

# honor_timestamps controls whether Prometheus respects the timestamps present
# in scraped data.
#
# If honor_timestamps is set to "true", the timestamps of the metrics exposed
# by the target will be used.
#
# If honor_timestamps is set to "false", the timestamps of the metrics exposed
# by the target will be ignored.
[ honor_timestamps: <boolean> | default = true ]

# Configures the protocol scheme used for requests.
[ scheme: <scheme> | default = http ]

# Optional HTTP URL parameters.
params:
  [ <string>: [<string>, ...] ]

# Sets the `Authorization` header on every scrape request with the
# configured username and password.
# password and password_file are mutually exclusive.
basic_auth:
  [ username: <string> ]
  [ password: <secret> ]
  [ password_file: <string> ]

# Sets the `Authorization` header on every scrape request with
# the configured bearer token. It is mutually exclusive with `bearer_token_file`.
[ bearer_token: <secret> ]

# Sets the `Authorization` header on every scrape request with the bearer token
# read from the configured file. It is mutually exclusive with `bearer_token`.
[ bearer_token_file: <filename> ]

If you read carefully, a few key pieces of information stand out: metrics_path and basic_auth. metrics_path specifies the HTTP route from which metrics are scraped, with a default value of /metrics; the basic_auth field handles authorization, and instead of writing the password in plain text you can point password_file at a password file (generally, a password file is somewhat more secure than a plain-text password).
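As an aside, if you would rather not keep the password in prometheus.yml at all, the basic_auth block can reference a file instead of an inline value. A minimal sketch, assuming a hypothetical path /etc/prometheus/web.pass that contains nothing but the password:

    basic_auth:
      username: weishidong
      # Hypothetical file holding only the password.
      password_file: /etc/prometheus/web.pass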

A Working Configuration

Guided by the official documentation, we can quickly work out the correct way to write the configuration:

  - job_name: 'web'
    metrics_path: /status/format/prometheus
    static_configs:
      - targets: ['www.weishidong.com']
    basic_auth:
      username: weishidong
      password: 0099887kk

Note that the scheme field does not need to be filled in here, because Prometheus's default scheme is http. If the address uses https, we need to add the scheme field as the documentation describes; the corresponding configuration is:

  - job_name: 'web'
    metrics_path: /status/format/prometheus
    static_configs:
      - targets: ['www.weishidong.com']
    scheme: https
    basic_auth:
      username: weishidong
      password: 0099887kk

Once the configuration is in place, Prometheus should be able to scrape the data successfully. With Grafana, you can build the monitoring dashboard shown at the beginning of this article.
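Finally, if the same status path ever needs to be scraped on more than one host, the targets list of the working configuration can simply be extended. A sketch, where the second hostname is purely hypothetical:

  - job_name: 'web'
    metrics_path: /status/format/prometheus
    scheme: https
    static_configs:
      # 'www2.weishidong.com' is a hypothetical second host; both hosts share
      # the same metrics path, scheme, and credentials.
      - targets: ['www.weishidong.com', 'www2.weishidong.com']
    basic_auth:
      username: weishidong
      password: 0099887kk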

  
