How should the system design for a securities operations log management and log analysis project be approached? What are the specific design elements?


1 peer answer

nkj827 (Project Manager, 长春长信华天):

Data collection targets and collection methods of the ELK Stack log management and analysis platform for securities business systems:

The following shows how data is collected and presented for the Db2 database and the IBM minicomputers (Power servers), the components most heavily used by securities business systems.

db2diag log collection is used as the example; other log types can follow the same approach as db2diag.

Source-side configuration: logstash-forwarder. In the fields block below, tags are added for the business system, OS platform, and IP address; the "type" field is used by Logstash for classification.

{
  "network": {
    "servers": [ "ip:5063" ],
    "ssl ca": "/home/sysadmin/certs/keystore.jks"
  },
  "files": [
    {
      "paths": [ "/home/db2inst1/sqllib/db2dump/db2diag.log" ],
      "fields": { "type": "db2log", "env": "交易业务", "platform": "AIX610704", "ip": "ipxx" }
    }
  ]
}
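The forwarder is then started against this file. A minimal sketch follows; the binary location and configuration path are assumptions, and the certificate referenced by "ssl ca" must match the certificate presented by the Logstash server.

# start logstash-forwarder with the configuration above; paths are illustrative
/opt/logstash-forwarder/bin/logstash-forwarder -config /etc/logstash-forwarder.conf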

Server-side Logstash configuration

input {
  lumberjack {
    port => 5063
    type => "db2log"
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
    # merge multi-line db2diag records: any line that does not start with a
    # db2diag timestamp is appended to the previous event
    codec => multiline {
      charset => "UTF-8"
      pattern => "^\d{4}-\d{2}-\d{2}-\d{2}\.\d{2}\.\d{2}\.\d{6}[\+-]\d{3}"
      negate => true
      what => previous
    }
  }
}

filter {
  if [type] == "db2log" {
    # flatten the merged record so grok can match it as a single line
    mutate {
      gsub => ['message', "\n", " "]
    }
    # parse the db2diag header fields (timestamp, LEVEL, PID, TID, PROC, INSTANCE, NODE, DB, ...)
    grok {
      match => { "message" => "(?<timestamp>%{YEAR}-%{MONTHNUM}-%{MONTHDAY}-%{HOUR}\.%{MINUTE}\.%{SECOND})%{INT:timezone}(?:%{SPACE}%{WORD:recordid}%{SPACE})(?:LEVEL%{SPACE}:%{SPACE}%{DATA:level}%{SPACE})(?:PID%{SPACE}:%{SPACE}%{INT:processid}%{SPACE})(?:TID%{SPACE}:%{SPACE}%{INT:threadid}%{SPACE})(?:PROC%{SPACE}:%{SPACE}%{DATA:process}%{SPACE})?(?:INSTANCE%{SPACE}:%{SPACE}%{WORD:instance}%{SPACE})?(?:NODE%{SPACE}:%{SPACE}%{WORD:node}%{SPACE})?(?:DB%{SPACE}:%{SPACE}%{WORD:dbname}%{SPACE})?(?:APPHDL%{SPACE}:%{SPACE}%{NOTSPACE:apphdl}%{SPACE})?(?:APPID%{SPACE}:%{SPACE}%{NOTSPACE:appid}%{SPACE})?(?:AUTHID%{SPACE}:%{SPACE}%{WORD:authid}%{SPACE})?(?:HOSTNAME%{SPACE}:%{SPACE}%{HOSTNAME:hostname}%{SPACE})?(?:EDUID%{SPACE}:%{SPACE}%{INT:eduid}%{SPACE})?(?:EDUNAME%{SPACE}:%{SPACE}%{DATA:eduname}%{SPACE})?(?:FUNCTION%{SPACE}:%{SPACE}%{DATA:function}%{SPACE})(?:probe:%{SPACE}%{INT:probe}%{SPACE})%{GREEDYDATA:functionlog}"
      }
    }
    # use the parsed db2diag timestamp as the event time
    date {
      match => [ "timestamp", "YYYY-MM-dd-HH.mm.ss.SSSSSS" ]
    }
    # discard records that cannot be parsed
    if "_grokparsefailure" in [tags] {
      drop {}
    }
  }
}

output {
  if [type] == "db2log" {
    elasticsearch {
      hosts => ["ip:9200"]
      index => "db2log-%{+YYYY.MM.dd}"
      user => "xxxx"
      password => "xxxx"
    }
    stdout {
      codec => rubydebug
    }
  }
}
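Once events are flowing, ingestion can be spot-checked directly against Elasticsearch. This is only a sketch, reusing the placeholder host and credentials from the output section above.

# fetch the newest document from the daily db2log indices; an empty hit list means nothing was indexed
curl -u xxxx:xxxx "http://ip:9200/db2log-*/_search?size=1&sort=@timestamp:desc&pretty"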

Kibana display

Kibana data statistics views (screenshots omitted)

IBM minicomputer log collection configuration

HMC events are collected and aggregated centrally every day by a shell script driven by a crontab job. This involves two steps: 1. establish an SSH trust relationship with the HMC; 2. run the shell script remotely on the schedule (see the sketch below).
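A minimal sketch of those two steps, assuming an HMC reachable as hmc01 with the hscroot user, the HMC CLI command lssvcevents for pulling serviceable events, and illustrative local paths:

# step 1 (one-off): establish SSH key trust with the HMC
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa                          # create a key pair if none exists
ssh hscroot@hmc01 "mkauthkeys --add '$(cat ~/.ssh/id_rsa.pub)'"   # register the public key on the HMC

# step 2 (daily): pull the last day's hardware serviceable events and append them locally
ssh hscroot@hmc01 "lssvcevents -t hardware -d 1" >> /var/log/hmc/hmc01_events.log

# crontab entry running the collection script at 01:00 every day
0 1 * * * /usr/local/bin/collect_hmc_events.sh >/dev/null 2>&1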

For AIX errpt logs, additional fields need to be added and an entry registered in the ODM (errnotify) database so that error-report events are formatted and forwarded in real time. The configuration is as follows:

errpt2logstash

Send events from AIX error report (errpt) to a logstash server

install

cp errpt2logstash.pl /usr/local/bin/errpt2logstash.pl

chown root:system /usr/local/bin/errpt2logstash.pl

chmod 750 /usr/local/bin/errpt2logstash.pl

customize the configuration file errpt2logstash.conf

cp errpt2logstash.conf /etc/errpt2logstash.conf

chown root:system /etc/errpt2logstash.conf

chmod 660 /etc/errpt2logstash.conf

odmadd errpt2logstash.add
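To confirm the registration, the entry can be read back from the ODM with the standard AIX odmget command; the entry name en_name=errpt2logstash matches the one removed in the deinstall step further below.

# list the errnotify record created by odmadd; empty output means the registration failed
odmget -q "en_name=errpt2logstash" errnotify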

example logstash input configuration

input {
  tcp {
    port => 5555
    type => "errpt"
    codec => json
  }
}

example logstash filter and alert output configuration

filter {
  #
  # AIX ERRPT
  # define and handle critical messages
  #
  if [type] == "errpt" {
    if [errpt_error_class] == "H" or [facility_label] == "Hardware" {
      if [errpt_error_type] == "PERM" or [severity_label] == "Permanent" {
        #
        # PERM H exclude list
        #
        # 07A33B6A SC_TAPE_ERR4 PERM H TAPE DRIVE FAILURE
        # 4865FA9B TAPE_ERR1 PERM H TAPE OPERATION ERROR
        # 68C66836 SC_TAPE_ERR1 PERM H TAPE OPERATION ERROR
        # E1D8D4A4 SC_TAPE_ERR2 PERM H TAPE DRIVE FAILURE
        # BFE4C025 SCAN_ERROR_CHRP PERM H UNDETERMINED ERROR
        #
        if [errpt_error_id] not in ["68C66836", "07A33B6A", "E1D8D4A4", "4865FA9B", "BFE4C025"] {
          mutate {
            add_tag => [ "critical" ]
          }
        }
      }
    }
    #
    # overall include list
    #
    # 0975DD6C KERNEL_ABEND PERM S KERNEL ABNORMALLY TERMINATED
    # 4B97B439 J2_METADATA_CORRUPT UNKN U FILE SYSTEM CORRUPTION
    # AE3E3FAD J2_FSCK_INFO INFO O FSCK FOUND ERRORS
    # B6DB68E0 J2_FSCK_REQUIRED INFO O FILE SYSTEM RECOVERY REQUIRED
    # C4C3339D LGPG_FREED INFO S ONE OR MORE LARGE PAGES HAS BEEN CONVERT
    # C5C09FFA PGSP_KILL PERM S SOFTWARE PROGRAM ABNORMALLY TERMINATED
    # FE2DEE00 AIXIF_ARP_DUP_ADDR PERM S DUPLICATE IP ADDRESS DETECTED IN THE NET
    #
    if [errpt_error_id] in ["0975DD6C", "4B97B439", "AE3E3FAD", "B6DB68E0", "C4C3339D", "C5C09FFA", "FE2DEE00"] {
      mutate {
        add_tag => [ "critical" ]
      }
    }
    #
    # Forward
    #
    if "critical" in [tags] {
      throttle {
        # max. one alert within five minutes per host and errpt identifier
        before_count => -1
        after_count => 1
        key => "%{logsource}:%{errpt_error_id}"
        period => 300
        add_tag => [ "throttled" ]
      }
    }
  }
}

# email is an output plugin in Logstash, so the alert itself is sent from the output section
output {
  if [type] == "errpt" and "critical" in [tags] and "throttled" not in [tags] {
    email {
      from => "logstash@server.de"
      subject => "CRITICAL: %{logsource} - %{errpt_description}"
      to => "admin@server.de"
      via => "sendmail"
      body => "%{message}"
      options => { "location" => "/usr/sbin/sendmail" }
    }
  }
}

testing

Logstash server

/opt/logstash/bin/logstash agent -e 'input {tcp { port => 5555 codec => json }} output { stdout { codec => rubydebug }}'

AIX server

errlogger "Hello World"

logger -plocal0.crit "Hello World"

deinstall

odmdelete -q 'en_name=errpt2logstash' -o errnotify

rm /usr/local/bin/errpt2logstash.pl

rm /etc/errpt2logstash.conf

On top of the collected logs, the ELK Stack log management and analysis platform for securities business systems supports:

  • daily hardware checks of the minicomputers based on the collected HMC logs;
  • per-team permissions, so each operations group can view and analyze the logs for its own area and investigate latent faults;
  • global log viewing and analysis by business system when business operations behave abnormally;
  • retention of historical logs for later data analysis;
  • collection of user login behavior and operation records for security auditing;
  • consolidation of application system logs into platform-wide log collection;
  • integration with Zabbix for log-based monitoring and alerting (see the sketch below).
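The answer does not describe how the Zabbix integration is wired up; one common pattern is to push log-derived counters from the Logstash/Elasticsearch host with zabbix_sender, as in this sketch (the Zabbix server name, monitored host name, and trapper item key are assumptions; the Elasticsearch host and credentials are the placeholders used earlier):

# count the severe Db2 events currently held in the db2log indices and push the number to a Zabbix trapper item
COUNT=$(curl -s -u xxxx:xxxx "http://ip:9200/db2log-*/_count?q=level:Severe" | sed 's/.*"count":\([0-9]*\).*/\1/')
zabbix_sender -z zabbixsrv -s logstash01 -k db2.severe.count -o "$COUNT"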
