Custom solution: debugging fluent-bit and collecting Kubernetes pod IPs



Tuning a fluent-bit configuration until it produces exactly the output you want is not always easy. Below, a debug image is used to work through collecting Kubernetes pod IPs with fluent-bit.

The official Fluent Bit documentation provides an image for debugging: docs.fluentbit.io/manual/installation/docker

The corresponding Docker Hub repository: /r/fluent/fluent-bit/

Deploying fluent-bit-debug

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/name: fluent-bit-debug
  name: fluent-bit-debug
  namespace: kubesphere-logging-system
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/name: fluent-bit-debug
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/name: fluent-bit-debug
      name: fluent-bit-debug
    spec:
      containers:
      - env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: spec.nodeName
        command:
        - /usr/local/bin/sh
        - -c
        - sleep 9999
        image: fluent/fluent-bit:1.6.9-debug
        imagePullPolicy: IfNotPresent
        name: fluent-bit
        ports:
        - containerPort: 2020
          name: metrics
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /var/lib/docker/containers
          name: varlibcontainers
          readOnly: true
        - mountPath: /fluent-bit/config
          name: config
          readOnly: true
        - mountPath: /var/log/
          name: varlogs
          readOnly: true
        - mountPath: /var/log/journal
          name: systemd
          readOnly: true
        - mountPath: /fluent-bit/tail
          name: positions
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: fluent-bit
      serviceAccountName: fluent-bit
      terminationGracePeriodSeconds: 30
      volumes:
      - hostPath:
          path: /var/lib/docker/containers
          type: ""
        name: varlibcontainers
      - name: config
        secret:
          defaultMode: 420
          secretName: fluent-bit-debug-config
      - hostPath:
          path: /var/log
          type: ""
        name: varlogs
      - hostPath:
          path: /var/log/journal
          type: ""
        name: systemd
      - emptyDir: {}
        name: positions
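Assuming the manifest above is saved as fluent-bit-debug.yaml (the file name is arbitrary), it can be deployed and checked with something like:

kubectl apply -f fluent-bit-debug.yaml
kubectl -n kubesphere-logging-system get pods -l app.kubernetes.io/name=fluent-bit-debug

Note that the manifest references a fluent-bit ServiceAccount and a fluent-bit-debug-config Secret (described next), both of which must already exist in the namespace.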

The Secret it uses, fluent-bit-debug-config, contains two keys:

The first, parsers.conf, is empty here; for configuration details, see the official documentation on configuration files.

The second, fluent-bit.conf, has to be configured case by case; the sections below give configurations for the different scenarios. They also involve KubeSphere and its Filter CRD.
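A sketch of creating that Secret, assuming both files sit in the current working directory (kubectl derives the key names from the file names):

kubectl -n kubesphere-logging-system create secret generic fluent-bit-debug-config \
  --from-file=parsers.conf \
  --from-file=fluent-bit.conf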

After exec-ing into the container, you can use /fluent-bit/bin/fluent-bit to debug:
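For example, a minimal way to get a shell in the pod (assuming kubectl access to the cluster; kubectl resolves the deployment to one of its pods):

kubectl -n kubesphere-logging-system exec -it deploy/fluent-bit-debug -- sh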

/fluent-bit/bin # ./fluent-bit -h
Usage: fluent-bit [OPTION]

Available Options
  -b  --storage_path=PATH specify a storage buffering path
  -c  --config=FILE       specify an optional configuration file
  -d, --daemon            run Fluent Bit in background mode
  -f, --flush=SECONDS     flush timeout in seconds (default: 5)
  -F  --filter=FILTER     set a filter
  -i, --input=INPUT       set an input
  -m, --match=MATCH       set plugin match, same as '-p match=abc'
  -o, --output=OUTPUT     set an output
  -p, --prop="A=B"        set plugin configuration property
  -R, --parser=FILE       specify a parser configuration file
  -e, --plugin=FILE       load an external plugin (shared lib)
  -l, --log_file=FILE     write log info to a file
  -t, --tag=TAG           set plugin tag, same as '-p tag=abc'
  -T, --sp-task=SQL       define a stream processor task
  -v, --verbose           increase logging verbosity (default: info)
  -H, --http              enable monitoring HTTP server
  -P, --port              set HTTP server TCP port (default: 2020)
  -s, --coro_stack_size   Set coroutines stack size in bytes (default: 24576)
  -q, --quiet             quiet mode
  -S, --sosreport         support report for Enterprise customers
  -V, --version           show version number
  -h, --help              print this help

Inputs
  cpu              CPU Usage
  mem              Memory Usage
  thermal          Thermal
  kmsg             Kernel Log Buffer
  proc             Check Process health
  disk             Diskstats
  systemd          Systemd (Journal) reader
  netif            Network Interface Usage
  docker           Docker containers metrics
  docker_events    Docker events
  tail             Tail files
  dummy            Generate dummy data
  head             Head Input
  health           Check TCP server health
  collectd         collectd input plugin
  statsd           StatsD input plugin
  serial           Serial input
  stdin            Standard Input
  syslog           Syslog
  exec             Exec Input
  tcp              TCP
  mqtt             MQTT, listen for Publish messages
  forward          Fluentd in-forward
  random           Random

Filters
  alter_size       Alter incoming chunk size
  aws              Add AWS Metadata
  record_modifier  modify record
  throttle         Throttle messages using sliding window algorithm
  kubernetes       Filter to append Kubernetes metadata
  modify           modify records by applying rules
  nest             nest events by specified field values
  parser           Parse events
  expect           Validate expected keys and values
  grep             grep events by specified field values
  rewrite_tag      Rewrite records tags
  lua              Lua Scripting Filter
  stdout           Filter events to STDOUT

Outputs
  azure            Send events to Azure HTTP Event Collector
  azure_blob       Azure Blob Storage
  bigquery         Send events to BigQuery via streaming insert
  counter          Records counter
  datadog          Send events to DataDog HTTP Event Collector
  es               Elasticsearch
  exit             Exit after a number of flushes (test purposes)
  file             Generate log file
  forward          Forward (Fluentd protocol)
  http             HTTP Output
  influxdb         InfluxDB Time Series
  logdna           LogDNA
  loki             Loki
  kafka            Kafka
  kafka-rest       Kafka REST Proxy
  nats             NATS Server
  nrlogs           New Relic
  null             Throws away events
  plot             Generate data file for GNU Plot
  pgsql            PostgreSQL
  slack            Send events to a Slack channel
  splunk           Send events to Splunk HTTP Event Collector
  stackdriver      Send events to Google Stackdriver Logging
  stdout           Prints events to STDOUT
  syslog           Syslog
  tcp              TCP Output
  td               Treasure Data
  flowcounter      FlowCounter
  gelf             GELF Output
  cloudwatch_logs  Send logs to Amazon CloudWatch
  kinesis_firehose Send logs to Amazon Kinesis Firehose
  s3               Send to S3

Internal
  Event Loop  = epoll
  Build Flags = FLB_HAVE_HTTP_CLIENT_DEBUG FLB_HAVE_PARSER FLB_HAVE_RECORD_ACCESSOR FLB_HAVE_STREAM_PROCESSOR JSMN_PARENT_LINKS JSMN_STRICT FLB_HAVE_TLS FLB_HAVE_AWS FLB_HAVE_SIGNV4 FLB_HAVE_SQLDB FLB_HAVE_METRICS FLB_HAVE_HTTP_SERVER FLB_HAVE_SYSTEMD FLB_HAVE_FORK FLB_HAVE_TIMESPEC_GET FLB_HAVE_GMTOFF FLB_HAVE_UNIX_SOCKET FLB_HAVE_PROXY_GO FLB_HAVE_JEMALLOC JEMALLOC_MANGLE FLB_HAVE_LIBBACKTRACE FLB_HAVE_REGEX FLB_HAVE_UTF8_ENCODER FLB_HAVE_LUAJIT FLB_HAVE_C_TLS FLB_HAVE_ACCEPT4 FLB_HAVE_INOTIFY

A simple configuration file

The following simple configuration file collects logs from the calico-node-* pods:

[Service]
    Parsers_File parsers.conf

[Input]
    Name tail
    Path /var/log/containers/*_kube-system_calico-node-*.log
    Refresh_Interval 10
    Skip_Long_Lines true
    DB /fluent-bit/bin/pos.db
    DB.Sync Normal
    Mem_Buf_Limit 5MB
    Parser docker
    Tag kube.*

[Filter]
    Name kubernetes
    Match kube.*
    Kube_URL https://kubernetes.default.svc:443
    Kube_CA_File /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    Kube_Token_File /var/run/secrets/kubernetes.io/serviceaccount/token
    Labels false
    Annotations true

[Output]
    Name stdout
    Match_Regex (?:kube|service)\.(.*)

Start a test inside the fluent-bit debug container with the following command:

  /fluent-bit/bin # ./fluent-bit -c /fluent-bit/config/fluent-bit.conf

You can see the log output on stdout:

  [0] kube.var.log.containers.calico-node-lp4lm_kube-system_calico-node-cca502a39695f7452fd999af97bfbca5d74d2a372d94e0cacf2045f5f9721a81.log: [1634870260.700108403, {"log"=>"{"log":"2021-10-22 02:37:40.699 [INFO][85] monitor-addresses/startup.go 774: Using autodetected IPv4 address on interface bond4: 172.24.248.50/30\n","stream":"stdout","time":"2021-10-22T02:37:40.700056471Z"}", "kubernetes"=>{"pod_name"=>"calico-node-lp4lm", "namespace_name"=>"kube-system", "pod_id"=>"5a829979-9830-4b9c-a3cb-eeb6eee38bdd", "annotations"=>{"kubectl.kubernetes.io/restartedAt"=>"2021-10-20T23:00:27+08:00"}, "host"=>"node02", "container_name"=>"calico-node", "docker_id"=>"cca502a39695f7452fd999af97bfbca5d74d2a372d94e0cacf2045f5f9721a81", "container_hash"=>"calico/node@sha256:bc4a631d553b38fdc169ea4cb8027fa894a656e80d68d513359a4b9d46836b55", "container_image"=>"calico/node:v3.19.1"}}]

Extracting the important part, this is the format of the collected Kubernetes log before any further processing:

[
  {
    "kubernetes"=>{
      "pod_name"=>"calico-node-lp4lm",
      "namespace_name"=>"kube-system",
      "pod_id"=>"5a829979-9830-4b9c-a3cb-eeb6eee38bdd",
      "annotations"=>{
        "kubectl.kubernetes.io/restartedAt"=>"2021-10-20T23:00:27+08:00"
      },
      "host"=>"node02",
      "container_name"=>"calico-node",
      "docker_id"=>"cca502a39695f7452fd999af97bfbca5d74d2a372d94e0cacf2045f5f9721a81",
      "container_hash"=>"calico/node@sha256:bc4a631d553b38fdc169ea4cb8027fa894a656e80d68d513359a4b9d46836b55",
      "container_image"=>"calico/node:v3.19.1"
    }
  }
]

Adding a nest filter

Lift the kubernetes block up a level and add the kubernetes_ prefix:

[Filter]
    Name nest
    Match kube.*
    Operation lift
    Nested_under kubernetes
    Add_prefix kubernetes_

Testing again, the important part of this output is:

{
  "kubernetes_pod_name"=>"calico-node-lp4lm",
  "kubernetes_namespace_name"=>"kube-system",
  "kubernetes_pod_id"=>"5a829979-9830-4b9c-a3cb-eeb6eee38bdd",
  "kubernetes_annotations"=>{
    "kubectl.kubernetes.io/restartedAt"=>"2021-10-20T23:00:27+08:00"
  },
  "kubernetes_host"=>"node02",
  "kubernetes_container_name"=>"calico-node",
  "kubernetes_docker_id"=>"cca502a39695f7452fd999af97bfbca5d74d2a372d94e0cacf2045f5f9721a81",
  "kubernetes_container_hash"=>"calico/node@sha256:bc4a631d553b38fdc169ea4cb8027fa894a656e80d68d513359a4b9d46836b55",
  "kubernetes_container_image"=>"calico/node:v3.19.1"
}

Removing the kubernetes_annotations block

[Filter]
    Name modify
    Match kube.*
    Remove kubernetes_annotations

Removing a field from the kubernetes_annotations block

[Filter]
    Name nest
    Match kube.*
    Operation lift
    Nested_under kubernetes_annotations
    Add_prefix kubernetes_annotations_

[Filter]
    Name modify
    Match kube.*
    Remove kubernetes_annotations_kubectl.kubernetes.io/restartedAt

Or using a regular expression:

[Filter]
    Name nest
    Match kube.*
    Operation lift
    Nested_under kubernetes_annotations
    Add_prefix kubernetes_annotations_

[Filter]
    Name modify
    Match kube.*
    Remove_regex kubernetes_annotations_kubectl*

  

Renaming a key in the kubernetes_annotations block

[Filter]
    Name nest
    Match kube.*
    Operation lift
    Nested_under kubernetes_annotations
    Add_prefix kubernetes_annotations_

[Filter]
    Name modify
    Match kube.*
    Rename kubernetes_annotations_kubectl.kubernetes.io/restartedAt podIPs

After the modification:

[
  {
    "kubernetes_pod_name"=>"calico-node-lp4lm",
    "kubernetes_namespace_name"=>"kube-system",
    "kubernetes_pod_id"=>"5a829979-9830-4b9c-a3cb-eeb6eee38bdd",
    "kubernetes_host"=>"node02",
    "kubernetes_container_name"=>"calico-node",
    "kubernetes_docker_id"=>"cca502a39695f7452fd999af97bfbca5d74d2a372d94e0cacf2045f5f9721a81",
    "kubernetes_container_hash"=>"calico/node@sha256:bc4a631d553b38fdc169ea4cb8027fa894a656e80d68d513359a4b9d46836b55",
    "kubernetes_container_image"=>"calico/node:v3.19.1",
    "podIPs"=>"2021-10-20T23:00:27+08:00"
  }
]

Collecting podIP with the KubeSphere configuration

Combine the KubeSphere Filter CR to configure podIP collection and remove the other, irrelevant annotations.

Since Calico is used as the CNI, the pod-IP-related annotations are added to each pod's annotations.

We want to keep one key from those annotations (cni.projectcalico.org/podIPs) and drop the others, so we first rename the key we want to keep, then remove the entire annotations set.
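To confirm which annotations Calico attaches, you can inspect a pod directly (a sketch; calico-node-lp4lm is the pod from the earlier output, substitute one from your cluster):

kubectl -n kube-system get pod calico-node-lp4lm -o jsonpath='{.metadata.annotations}'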

The Kubernetes Filter CR is configured as follows:

apiVersion: logging.kubesphere.io/v1alpha2
kind: Filter
metadata:
  labels:
    logging.kubesphere.io/component: logging
    logging.kubesphere.io/enabled: 'true'
  name: kubernetes
  namespace: kubesphere-logging-system
spec:
  filters:
  - kubernetes:
      annotations: true
      kubeCAFile: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      kubeTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
      kubeURL: 'https://kubernetes.default.svc:443'
      labels: false
  - nest:
      addPrefix: kubernetes_
      nestedUnder: kubernetes
      operation: lift
  - nest:
      addPrefix: kubernetes_annotations_
      nestedUnder: kubernetes_annotations
      operation: lift
  - modify:
      rules:
      - remove: stream
      - remove: kubernetes_pod_id
      - remove: kubernetes_host
      - remove: kubernetes_container_hash
      - rename:
          kubernetes_annotations_cni.projectcalico.org/podIPs: kubernetes_podIPs
      - removeRegex: kubernetes_annotations*
  - nest:
      nestUnder: kubernetes_annotations
      operation: nest
      removePrefix: kubernetes_annotations_
      wildcard:
      - kubernetes_annotations_*
  - nest:
      nestUnder: kubernetes
      operation: nest
      removePrefix: kubernetes_
      wildcard:
      - kubernetes_*
  match: kube.*
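A sketch of applying the CR and checking that it was created (assuming it is saved as kubernetes-filter.yaml; the filters.logging.kubesphere.io resource name is the plural defined by the KubeSphere logging CRD):

kubectl apply -f kubernetes-filter.yaml
kubectl -n kubesphere-logging-system get filters.logging.kubesphere.io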


The fluent-bit configuration generated from it is shown below (focus on the Filter sections; the Input and Output CRs themselves are omitted).

[Service]
    Parsers_File parsers.conf

[Input]
    Name tail
    Path /var/log/containers/*.log
    Exclude_Path /var/log/containers/*_kubesphere-logging-system_events-exporter*.log,/var/log/containers/kube-auditing-webhook*_kubesphere-logging-system_kube-auditing-webhook*.log
    Refresh_Interval 10
    Skip_Long_Lines true
    DB /fluent-bit/tail/pos.db
    DB.Sync Normal
    Mem_Buf_Limit 5MB
    Parser docker
    Tag kube.*

[Filter]
    Name kubernetes
    Match kube.*
    Kube_URL https://kubernetes.default.svc:443
    Kube_CA_File /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    Kube_Token_File /var/run/secrets/kubernetes.io/serviceaccount/token
    Labels false
    Annotations true

[Filter]
    Name nest
    Match kube.*
    Operation lift
    Nested_under kubernetes
    Add_prefix kubernetes_

[Filter]
    Name nest
    Match kube.*
    Operation lift
    Nested_under kubernetes_annotations
    Add_prefix kubernetes_annotations_

[Filter]
    Name modify
    Match kube.*
    Remove stream
    Remove kubernetes_pod_id
    Remove kubernetes_host
    Remove kubernetes_container_hash
    Rename kubernetes_annotations_cni.projectcalico.org/podIPs kubernetes_podIPs
    Remove_regex kubernetes_annotations*

[Filter]
    Name nest
    Match kube.*
    Operation nest
    Wildcard kubernetes_annotations_*
    Nest_under kubernetes_annotations
    Remove_prefix kubernetes_annotations_

[Filter]
    Name nest
    Match kube.*
    Operation nest
    Wildcard kubernetes_*
    Nest_under kubernetes
    Remove_prefix kubernetes_

[Output]
    Name es
    Match_Regex (?:kube|service)\.(.*)
    Host es-cdc-a-es-http.cdc.svc.xke.test.cn
    Port 9200
    HTTP_User elastic
    HTTP_Passwd elasticpwd
    Logstash_Format true
    Logstash_Prefix ks-logstash-log
    Time_Key @timestamp

You can then see the collected podIPs in Kibana.
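Alternatively, a sketch of verifying the field straight from Elasticsearch, reusing the host, credentials, and index prefix from the Output section above (the kubernetes.podIPs field name follows from the filter chain, which nests kubernetes_podIPs back under kubernetes):

curl -s -u elastic:elasticpwd \
  "http://es-cdc-a-es-http.cdc.svc.xke.test.cn:9200/ks-logstash-log-*/_search?q=kubernetes.podIPs:*&size=1&pretty"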

About the author

Author: 小碗汤, someone who loves writing and takes it seriously, currently maintaining the original-content WeChat public account "我的小碗汤", which focuses on hands-on articles about Go, Docker, Kubernetes, Java, and other development and operations topics. Reprints: please credit the source (from the public account "我的小碗汤", author: 小碗汤).

