Performance Monitoring: Real-Time Monitoring of Structured Logs with Telegraf + InfluxDB + Grafana

      Community contribution, 2022-05-30

      Background

      Telegraf logparser

      Grok parser

      Example

      Hands-on practice

      Log format

      Telegraf configuration

      Grafana setup

      Summary

      Background

      Our in-house client load-testing tool writes its test results as structured log files. Since performance monitoring now needs to be real-time and centralized, we need a collection agent that can gather structured log files on a schedule and in batches, and the Telegraf logparser plugin happens to meet this need.

      Telegraf logparser

      The logparser plugin streams and parses the given log files; it currently supports parsing with "grok" patterns and regular-expression patterns.

      Grok parser

      The best way to become familiar with the grok parser is to refer to the logstash documentation:

      https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html

      The Telegraf parser uses a slightly modified version of the logstash "grok" patterns, in the format:

      %{<capture_syntax>[:<semantic_name>][:<modifier>]}

      capture_syntax: defines the grok pattern used to parse the input line

      semantic_name: names the resulting field or tag

      modifier: extends the data type the parsed item is converted to, or other special handling

      By default, all named captures are converted to string fields. If a pattern has no semantic name, it is not captured. A timestamp modifier can be used to turn a capture into the timestamp of the parsed metric; if no timestamp is parsed, the metric is created with the current time.

      Note: each line must capture at least one field. A pattern that converts all captures to tags will produce points that cannot be written to the time-series database.
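As a minimal sketch of this syntax (the file path, measurement name, and field names below are made up for illustration), a pattern combining all three parts might look like:

```toml
[[inputs.logparser]]
  files = ["/var/log/app.log"]
  [inputs.logparser.grok]
    measurement = "app_log"
    ## TIMESTAMP_ISO8601 is the capture_syntax, "timestamp" the semantic_name,
    ## and "ts-rfc3339" a timestamp modifier; "level" becomes a tag and
    ## "elapsed" an integer field.
    patterns = ['%{TIMESTAMP_ISO8601:timestamp:ts-rfc3339} %{WORD:level:tag} %{NUMBER:elapsed:int}']
```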

      Available modifiers:

      string (default if nothing is specified)

      int

      float

      duration (ie, 5.23ms gets converted to int nanoseconds)

      tag (converts the field into a tag)

      drop (drops the field completely)

      Timestamp modifiers:

      ts (This will auto-learn the timestamp format)

      ts-ansic (“Mon Jan _2 15:04:05 2006”)

      ts-unix (“Mon Jan _2 15:04:05 MST 2006”)

      ts-ruby (“Mon Jan 02 15:04:05 -0700 2006”)

      ts-rfc822 (“02 Jan 06 15:04 MST”)

      ts-rfc822z (“02 Jan 06 15:04 -0700”)

      ts-rfc850 (“Monday, 02-Jan-06 15:04:05 MST”)

      ts-rfc1123 (“Mon, 02 Jan 2006 15:04:05 MST”)

      ts-rfc1123z (“Mon, 02 Jan 2006 15:04:05 -0700”)

      ts-rfc3339 (“2006-01-02T15:04:05Z07:00”)

      ts-rfc3339nano (“2006-01-02T15:04:05.999999999Z07:00”)

      ts-httpd (“02/Jan/2006:15:04:05 -0700”)

      ts-epoch (seconds since unix epoch, may contain decimal)

      ts-epochmilli (milliseconds since unix epoch, may contain decimal)

      ts-epochnano (nanoseconds since unix epoch)

      ts-syslog (“Jan 02 15:04:05”, parsed time is set to the current year)

      ts-“CUSTOM”

      Custom time layouts must be enclosed in quotes and must be a representation of the "reference time" Mon Jan 2 15:04:05 -0700 MST 2006.


      To match a comma as the decimal separator, use a period in the pattern string. For example, %{TIMESTAMP:timestamp:ts-"2006-01-02 15:04:05.000"} can be used to match "2018-01-02 15:04:05,000".

      For more details, see:

      https://golang.org/pkg/time/#Parse

      Telegraf has many built-in patterns of its own and supports most of logstash's built-in patterns. Note that Go regular expressions do not support lookahead or lookbehind, so logstash patterns that depend on them are not supported.

      If you need to debug a pattern you are building to match your logs, https://grokdebug.herokuapp.com is very useful!

      Example

      We can use logparser to convert the log lines generated by Telegraf itself into metrics.

      To do this, we need to configure Telegraf to write its logs to a file, which can be done with the agent.logfile parameter or by configuring syslog.

[agent]
  logfile = "/var/log/telegraf/telegraf.log"

      Logparser configuration:

[[inputs.logparser]]
  files = ["/var/log/telegraf/telegraf.log"]
  [inputs.logparser.grok]
    measurement = "telegraf_log"
    patterns = ['^%{TIMESTAMP_ISO8601:timestamp:ts-rfc3339} %{TELEGRAF_LOG_LEVEL:level:tag}! %{GREEDYDATA:msg}']
    custom_patterns = '''
TELEGRAF_LOG_LEVEL (?:[DIWE]+)
'''

      Log content:

2018-06-14T06:41:35Z I! Starting Telegraf v1.6.4
2018-06-14T06:41:35Z I! Agent Config: Interval:3s, Quiet:false, Hostname:"archer", Flush Interval:3s
2018-02-20T22:39:20Z E! Error in plugin [inputs.docker]: took longer to collect than collection interval (10s)
2018-06-01T10:34:05Z W! Skipping a scheduled flush because there is already a flush ongoing.
2018-06-14T07:33:33Z D! Output [file] buffer fullness: 0 / 10000 metrics.

      Data collected into InfluxDB:

telegraf_log,host=somehostname,level=I msg="Starting Telegraf v1.6.4" 1528958495000000000
telegraf_log,host=somehostname,level=I msg="Agent Config: Interval:3s, Quiet:false, Hostname:\"somehostname\", Flush Interval:3s" 1528958495001000000
telegraf_log,host=somehostname,level=E msg="Error in plugin [inputs.docker]: took longer to collect than collection interval (10s)" 1519166360000000000
telegraf_log,host=somehostname,level=W msg="Skipping a scheduled flush because there is already a flush ongoing." 1527849245000000000
telegraf_log,host=somehostname,level=D msg="Output [file] buffer fullness: 0 / 10000 metrics." 1528961613000000000
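Each point above follows the InfluxDB line-protocol shape measurement,tag_set field_set timestamp. A minimal sketch of assembling such a point by hand (the helper name is made up; in practice Telegraf's output plugin does this for you):

```go
package main

import (
	"fmt"
	"strings"
)

// buildPoint assembles one InfluxDB line-protocol point:
// measurement,tag=value field="value" timestamp
func buildPoint(measurement, host, level, msg string, ts int64) string {
	// Double quotes inside a string field value must be escaped.
	escaped := strings.ReplaceAll(msg, `"`, `\"`)
	return fmt.Sprintf("%s,host=%s,level=%s msg=\"%s\" %d",
		measurement, host, level, escaped, ts)
}

func main() {
	p := buildPoint("telegraf_log", "somehostname", "I",
		"Starting Telegraf v1.6.4", 1528958495000000000)
	fmt.Println(p)
}
```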

      Hands-on practice

      Log format

      An example of the structured logs to be collected:

TestConfig1,5.0,2019/3/6 17:48:23,2019/3/6 17:48:30,demo_1,open,3,1,6.8270219,openscreen>validatestage
TestConfig2,5.0,2019/3/6 17:48:33,2019/3/6 17:48:40,demo_2,open,3,2,6.9179322,openscreen>validatestage
TestConfig3,5.0,2019/3/6 17:48:43,2019/3/6 17:50:23,demo_1,open,3,3,100.1237885,switchscreen>validatestag
TestConfig3,5.0,2019/3/6 17:48:43,2019/3/6 17:50:23,demo_1,open,3,3,100.1237885,switchscreen>validatestag
TestConfig3,5.0,2019/3/6 17:48:43,2019/3/6 17:50:23,demo_1,open,3,3,100.1237885,switchscreen>validatestag
TestConfig3,5.0,2019/3/6 17:48:43,2019/3/6 17:50:23,demo_1,open,3,3,100.1237885,switchscreen>validatestag
TestConfig3,5.0,2019/3/6 17:48:43,2019/3/6 17:50:23,demo_1,open,3,3,100.1237885,switchscreen>validatestag
TestConfig3,5.0,2019/3/6 17:48:43,2019/3/6 17:50:23,demo_1,open,3,3,100.1237885,switchscreen>validatestag

      Note: these logs are generated in batches; each client load-test run produces a *.log file in the current directory. When the data is collected, each column needs to be given a column name.

      Telegraf configuration

      Configure telegraf.conf:

[[inputs.logparser]]
  ## Log files to parse.
  ## These accept standard unix glob matching rules, but with the addition of
  ## ** as a "super asterisk". ie:
  ##   /var/log/**.log     -> recursively find all .log files in /var/log
  ##   /var/log/*/*.log    -> find all .log files with a parent dir in /var/log
  ##   /var/log/apache.log -> only tail the apache log file
  files = ["C:\\Release\\TestConfigLog\\*.log"]

  ## Read files that currently exist from the beginning. Files that are created
  ## while telegraf is running (and that match the "files" globs) will always
  ## be read from the beginning.
  from_beginning = false

  ## Method used to watch for file updates. Can be either "inotify" or "poll".
  watch_method = "poll"

  ## Parse logstash-style "grok" patterns:
  ## Telegraf built-in parsing patterns: https://goo.gl/dkay10
  [inputs.logparser.grok]
    ## This is a list of patterns to check the given log file(s) for.
    ## Note that adding patterns here increases processing time. The most
    ## efficient configuration is to have one pattern per logparser.
    ## Other common built-in patterns are:
    ##   %{COMMON_LOG_FORMAT}   (plain apache & nginx access logs)
    ##   %{COMBINED_LOG_FORMAT} (access logs + referrer & agent)
    patterns = ['%{WORD:scene},%{NUMBER:version:float},%{TS_WIN:begtime},%{TS_WIN:endtime},%{WORD:canvasName},%{WORD:canvasCase},%{NUMBER:totaltimes:int},%{NUMBER:current:int},%{NUMBER:time_consuming:float}']

    ## Name of the outputted measurement name.
    measurement = "bigscreen"

    ## Full path(s) to custom pattern files.
    ## custom_pattern_files = []

    ## Custom patterns can also be defined here. Put one pattern per line.
    custom_patterns = 'TS_WIN %{YEAR}/%{MONTHNUM}/%{MONTHDAY}[T ]%{HOUR}:?%{MINUTE}(?::?%{SECOND})?'

    ## Timezone allows you to provide an override for timestamps that
    ## don't already include an offset
    ## e.g. 04/06/2016 12:41:45 data one two 5.43μs
    ##
    ## Default: "" which renders UTC
    ## Options are as follows:
    ##   1. Local            -- interpret based on machine localtime
    ##   2. "Canada/Eastern" -- Unix TZ values like those found in https://en.wikipedia.org/wiki/List_of_tz_database_time_zones
    ##   3. UTC              -- or blank/unspecified, will return timestamp in UTC
    timezone = "Local"

      Note:

      files = ["*.log"] meets the need to match multiple file objects in the directory.

      watch_method = "poll" makes Telegraf poll for file updates.

      custom_patterns defines a custom pattern for matching the timestamp format.

      The metric data generated in InfluxDB looks like this:

> select * from bigscreen limit 5
name: bigscreen
time                begtime           canvasCase canvasName current endtime           host            path                                                    scene       time_consuming totaltimes version
----                -------           ---------- ---------- ------- -------           ----            ----                                                    -----       -------------- ---------- -------
1552296231630588200 2019/3/6 17:48:43 open       demo_1     3       2019/3/6 17:50:23 DESKTOP-MLD0KTS C:\Users\htsd\Desktop\VBI5\Release\TestConfigLog\1.log  TestConfig3 100.1237885    3          5
1552296231630588201 2019/3/6 17:48:43 open       demo_1     3       2019/3/6 17:50:23 DESKTOP-MLD0KTS C:\Users\htsd\Desktop\VBI5\Release\TestConfigLog\1.log  TestConfig3 100.1237885    3          5
1552296231630588202 2019/3/6 17:48:43 open       demo_1     3       2019/3/6 17:50:23 DESKTOP-MLD0KTS C:\Users\htsd\Desktop\VBI5\Release\TestConfigLog\1.log  TestConfig3 100.1237885    3          5
1552296231631587700 2019/3/6 17:48:43 open       demo_1     3       2019/3/6 17:50:23 DESKTOP-MLD0KTS C:\Users\htsd\Desktop\VBI5\Release\TestConfigLog\1.log  TestConfig3 100.1237885    3          5
1552297570005076300 2019/3/6 17:48:23 open       demo_1     1       2019/3/6 17:48:30 DESKTOP-MLD0KTS C:\Users\htsd\Desktop\VBI5\Release\TestConfigLog\12.log TestConfig1 6.8270219      3          5

      The column names are all ones we defined ourselves.

      Grafana setup

      The overall approach is to display the data in a table, with support for filtering on selected fields.

      Set up filter variables to meet the field-filtering requirements:

      Create a dashboard and choose the table panel:

      Define the data source query:
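As an illustration of what the table panel's query might look like (InfluxQL; $scene is a hypothetical dashboard variable, and $timeFilter is Grafana's built-in time-range macro):

```sql
-- Hypothetical table panel query against the "bigscreen" measurement
SELECT "begtime", "endtime", "scene", "canvasName", "canvasCase",
       "totaltimes", "current", "time_consuming"
FROM "bigscreen"
WHERE $timeFilter AND "scene" =~ /^$scene$/
```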

      Set the table field styles and format the time fields:

      Apply different highlight levels to the response-time field (green, yellow, and red):

      The actual live result looks like this:

      Summary

      This article walked through a simple example of how Telegraf + InfluxDB + Grafana can monitor structured logs in real time. Unstructured log collection is supported as well; if you are interested, give it a try yourself.

      Related resources:

      https://github.com/zuozewei/blog-example/tree/master/Performance-testing/03-performance-monitoring/telegraf-Influxdb-grafana-jmx


