Elasticsearch
The consistency parameter
- one (only the primary shard must be active)
- all (all shard copies must be active)
- quorum (the default: a majority of shard copies)
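For `quorum`, older Elasticsearch versions (the `consistency` parameter was replaced by `wait_for_active_shards` in 5.x) required `int((primary + number_of_replicas) / 2) + 1` active shard copies before accepting a write. A minimal sketch of that formula:

```python
def quorum(number_of_replicas: int) -> int:
    """Minimum active shard copies for a quorum write:
    int((primary + number_of_replicas) / 2) + 1, with one primary."""
    return (1 + number_of_replicas) // 2 + 1

# With 2 replicas (3 copies total), a quorum write needs 2 active copies.
print(quorum(2))  # 2
# With 4 replicas (5 copies total), it needs 3.
print(quorum(4))  # 3
```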
Deploy Elasticsearch
```shell
# Deploy with Docker
docker run -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:6.7.0
```
```yaml
# Deploy on Kubernetes
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: elasticsearch
  labels:
    app: elasticsearch
spec:
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      initContainers:
      - name: init-sysctl
        image: busybox:1.30.1
        command:
        - sysctl
        - -w
        - vm.max_map_count=262144
        securityContext:
          privileged: true
      containers:
      - name: elasticsearch
        image: docker.elastic.co/elasticsearch/elasticsearch:6.7.0
        ports:
        - containerPort: 9300
          name: es-tran-port
        - containerPort: 9200
          name: es-port
        env:
        - name: discovery.type
          value: "single-node"
        - name: path.data
          value: "/data"
        volumeMounts:
        - name: es-persistent-storage
          mountPath: /data
      volumes:
      - name: es-persistent-storage
        emptyDir: {}
        # /usr/share/elasticsearch/
---
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-svc
  namespace: default
  labels:
    app: elasticsearch
    kubernetes.io/cluster-service: "true"
spec:
  type: NodePort
  selector:
    app: elasticsearch
  ports:
  - port: 9300
    targetPort: 9300
    nodePort: 30093
    name: es-tran-port
  - port: 9200
    targetPort: 9200
    nodePort: 30092
    name: es-port
```
```shell
# Change the default ES password (default account: elastic, default password: changeme)
curl -XPUT -u elastic -H 'Content-Type: application/json' 'http://localhost:9200/_xpack/security/user/kibana/_password' -d '{"password" : "yourpasswd"}'
```
Deploy Filebeat
```shell
# Deploy with Docker
curl -L -O https://raw.githubusercontent.com/elastic/beats/6.7/deploy/docker/filebeat.docker.yml
docker run -d \
  --name=filebeat \
  --user=root \
  --volume="/data/filebeat.docker.yml:/usr/share/filebeat/filebeat.yml:ro" \
  --volume="/var/lib/docker/containers:/var/lib/docker/containers:ro" \
  --volume="/var/run/docker.sock:/var/run/docker.sock:ro" \
  docker.elastic.co/beats/filebeat:6.7.1 filebeat -e -strict.perms=false -E 'output.elasticsearch.hosts=["192.168.33.26:30092"]'
```
```shell
# Deploy on Kubernetes
curl -L -O https://raw.githubusercontent.com/elastic/beats/master/deploy/kubernetes/filebeat-kubernetes.yaml
```
Deploy Kibana
```shell
docker run -d --name kibana -e ELASTICSEARCH_HOSTS=http://192.168.33.26:30092 -e XPACK_MONITORING_ENABLED=false -p 5601:5601 docker.elastic.co/kibana/kibana:6.7.0
```

On first start Kibana spends a few minutes optimizing its bundles:

```
{"type":"log","@timestamp":"2019-04-17T08:26:34Z","tags":["info","optimize"],"pid":1,"message":"Optimizing and caching bundles for graph, space_selector, login, logout, logged_out, ml, dashboardViewer, apm, maps, canvas, infra, uptime, kibana, status_page, stateSessionStorageRedirect and timelion. This may take a few minutes"}
```
Deploy Fluentd

DaemonSet manifests: https://github.com/fluent/fluentd-kubernetes-daemonset
Fluentd fails to start when certificate verification against the Kubernetes API endpoint fails:

```
2019-08-21 14:35:04 +0000 [info]: parsing config file is succeeded path="/fluentd/etc/fluent.conf"
2019-08-21 14:35:04 +0000 [error]: config error file="/fluentd/etc/fluent.conf" error_class=Fluent::ConfigError error="Invalid Kubernetes API v1 endpoint https://10.254.0.1:443/api: SSL_connect returned=1 errno=0 state=error: certificate verify failed (unable to get issuer certificate)"
2019-08-21 15:16:31 +0000 [error]: config error file="/etc/fluent/fluent.conf" error_class=Fluent::ConfigError error="Invalid Kubernetes API v1 endpoint https://10.254.0.1:443/api: SSL_connect returned=1 errno=0 state=error: certificate verify failed"
2019-08-23 09:05:43 +0000 [info]: fluent/log.rb:322:info: starting fluentd-1.6.3 without supervision pid=1 ruby="2.3.3"
2019-08-23 09:05:43 +0000 [info]: fluent/log.rb:322:info: gem 'fluent-plugin-concat' version '2.4.0'
2019-08-23 09:05:43 +0000 [info]: fluent/log.rb:322:info: gem 'fluent-plugin-detect-exceptions' version '0.0.12'
2019-08-23 09:05:43 +0000 [info]: fluent/log.rb:322:info: gem 'fluent-plugin-elasticsearch' version '3.5.4'
2019-08-23 09:05:43 +0000 [info]: fluent/log.rb:322:info: gem 'fluent-plugin-kubernetes_metadata_filter' version '2.2.0'
2019-08-23 09:05:43 +0000 [info]: fluent/log.rb:322:info: gem 'fluent-plugin-multi-format-parser' version '1.0.0'
2019-08-23 09:05:43 +0000 [info]: fluent/log.rb:322:info: gem 'fluent-plugin-prometheus' version '1.4.0'
2019-08-23 09:05:43 +0000 [info]: fluent/log.rb:322:info: gem 'fluent-plugin-systemd' version '1.0.2'
2019-08-23 09:05:43 +0000 [info]: fluent/log.rb:322:info: gem 'fluentd' version '1.6.3'
2019-08-23 09:05:43 +0000 [info]: fluent/log.rb:322:info: adding match pattern="fluent.**" type="null"
2019-08-23 09:05:43 +0000 [info]: fluent/log.rb:322:info: adding match pattern="raw.kubernetes.**" type="detect_exceptions"
2019-08-23 09:05:43 +0000 [info]: fluent/log.rb:322:info: adding filter pattern="**" type="concat"
2019-08-23 09:05:43 +0000 [info]: fluent/log.rb:322:info: adding filter pattern="kubernetes.**" type="kubernetes_metadata"
2019-08-23 09:05:46 +0000 [debug]: [filter_kubernetes_metadata] Kubernetes URL is not set - inspecting environ
2019-08-23 09:05:46 +0000 [debug]: [filter_kubernetes_metadata] Kubernetes URL is now 'https://10.254.0.1:443/api'
2019-08-23 09:05:46 +0000 [debug]: [filter_kubernetes_metadata] Found directory with secrets: /var/run/secrets/kubernetes.io/serviceaccount
2019-08-23 09:05:46 +0000 [debug]: [filter_kubernetes_metadata] Creating K8S client
2019-08-23 09:05:46 +0000 [error]: fluent/log.rb:362:error: config error file="/etc/fluent/fluent.conf" error_class=Fluent::ConfigError error="Invalid Kubernetes API v1 endpoint https://10.254.0.1:443/api: SSL_connect returned=1 errno=0 state=error: certificate verify failed"
```
#### Fix: modify the ConfigMap

Disable SSL verification in the `kubernetes_metadata` filter (or point `ca_file` at the service-account CA):

```
<filter kubernetes.**>
  @id filter_kubernetes_metadata
  @type kubernetes_metadata
  verify_ssl false
  #ca_file /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
  #bearer_token_file /var/run/secrets/kubernetes.io/serviceaccount/token
</filter>
```
Fluentd may then still fail to reach Elasticsearch; here the host does not resolve, and retries back off:

```
2019-08-21 15:45:52 +0000 [warn]: [elasticsearch] Could not communicate to Elasticsearch, resetting connection and trying again. getaddrinfo: Temporary failure in name resolution (SocketError)
2019-08-21 15:45:52 +0000 [warn]: [elasticsearch] Remaining retry: 14. Retry to communicate after 2 second(s).
2019-08-21 15:45:56 +0000 [warn]: [elasticsearch] Could not communicate to Elasticsearch, resetting connection and trying again. getaddrinfo: Temporary failure in name resolution (SocketError)
2019-08-21 15:45:56 +0000 [warn]: [elasticsearch] Remaining retry: 13. Retry to communicate after 4 second(s).
2019-08-21 15:46:04 +0000 [warn]: [elasticsearch] Could not communicate to Elasticsearch, resetting connection and trying again. getaddrinfo: Temporary failure in name resolution (SocketError)
2019-08-21 15:46:04 +0000 [warn]: [elasticsearch] Remaining retry: 12. Retry to communicate after 8 second(s).
2019-08-21 15:46:20 +0000 [warn]: [elasticsearch] Could not communicate to Elasticsearch, resetting connection and trying again. getaddrinfo: Temporary failure in name resolution (SocketError)
2019-08-21 15:46:20 +0000 [warn]: [elasticsearch] Remaining retry: 11. Retry to communicate after 16 second(s).
2019-08-21 15:46:52 +0000 [warn]: [elasticsearch] Could not communicate to Elasticsearch, resetting connection and trying again. getaddrinfo: Temporary failure in name resolution (SocketError)
2019-08-21 15:46:52 +0000 [warn]: [elasticsearch] Remaining retry: 10. Retry to communicate after 32 second(s).
```
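The retry intervals above double each time (2, 4, 8, 16, 32 s), i.e. exponential backoff. A minimal sketch of that schedule:

```python
def retry_intervals(base: int = 2, retries: int = 5) -> list:
    """Exponential backoff: each retry waits twice as long as the previous one."""
    return [base * 2 ** i for i in range(retries)]

print(retry_intervals())  # [2, 4, 8, 16, 32]
```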
Verify connectivity and cluster state from inside the cluster:

```shell
curl -XGET http://elasticsearch-logging:9200
curl -XGET http://elasticsearch-logging:9200/_cat
curl -XGET http://elasticsearch-logging:9200/_cluster/health?pretty=true
curl -XGET http://elasticsearch-logging:9200/_cat/indices?v
curl -XGET http://elasticsearch-logging:9200/_cat/nodes?v
```
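The `_cluster/health` check is the one that matters here. A small sketch of reading its `status` field; the JSON below is an illustrative sample response, not captured from this cluster:

```python
import json

# Sample /_cluster/health response (illustrative values only).
sample = '''{
  "cluster_name": "docker-cluster",
  "status": "yellow",
  "number_of_nodes": 1,
  "active_primary_shards": 5,
  "unassigned_shards": 5
}'''

health = json.loads(sample)
# green = all shards allocated; yellow = replicas unassigned (normal for a
# single-node cluster); red = some primary shards unassigned.
print(health["status"])  # yellow
```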
The concat filter times out waiting for a continuation line and dumps the buffered event:

```
2019-08-22 01:37:18 +0000 [warn]: dump an error event: error_class=Fluent::Plugin::ConcatFilter::TimeoutError error="Timeout flush: kernel:default" location=nil tag="kernel" time=2019-08-22 01:37:18.358173931 +0000 record={"priority"=>"6", "boot_id"=>"e99bd0fe254f4f68a8ba83fb6d9cdf4b", "machine_id"=>"36bebd5f48e24ff7bef064391d657357", "hostname"=>"n50", "transport"=>"kernel", "syslog_facility"=>"0", "syslog_identifier"=>"kernel", "source_monotonic_timestamp"=>"922172463245", "message"=>"IPVS: Creating netns size=2048 id=15IPv6: ADDRCONF(NETDEV_UP): eth0: link is not readyIPv6: ADDRCONF(NETDEV_UP): calic92a001be2c: link is not readyIPv6: ADDRCONF(NETDEV_CHANGE): calic92a001be2c: link becomes readyIPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready"}
```
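These concat timeouts can be flushed to a normal label instead of being dumped as errors; fluent-plugin-concat supports `flush_interval` and `timeout_label` for this. A sketch only; the key and label names here are assumptions to adapt to your own config:

```
<filter **>
  @type concat
  key message
  flush_interval 5
  timeout_label @NORMAL
</filter>
```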
Bulk writes to Elasticsearch then hit read timeouts and are retried:

```
2019-08-22 01:37:53 +0000 [warn]: [elasticsearch] failed to flush the buffer. retry_time=0 next_retry_seconds=2019-08-22 01:37:54 +0000 chunk="590aabc5ee2488877eb924f9624000c3" error_class=Fluent::Plugin::ElasticsearchOutput::RecoverableRequestFailure error="could not push logs to Elasticsearch cluster ({:host=>\"elasticsearch-svc.default.svc.cluster.local\", :port=>9200, :scheme=>\"http\"}): read timeout reached"
2019-08-22 01:37:53 +0000 [warn]: /var/lib/gems/2.3.0/gems/fluent-plugin-elasticsearch-3.5.4/lib/fluent/plugin/out_elasticsearch.rb:758:in `rescue in send_bulk'
2019-08-22 01:37:53 +0000 [warn]: /var/lib/gems/2.3.0/gems/fluent-plugin-elasticsearch-3.5.4/lib/fluent/plugin/out_elasticsearch.rb:734:in `send_bulk'
2019-08-22 01:37:53 +0000 [warn]: /var/lib/gems/2.3.0/gems/fluent-plugin-elasticsearch-3.5.4/lib/fluent/plugin/out_elasticsearch.rb:628:in `block in write'
2019-08-22 01:37:53 +0000 [warn]: /var/lib/gems/2.3.0/gems/fluent-plugin-elasticsearch-3.5.4/lib/fluent/plugin/out_elasticsearch.rb:627:in `each'
2019-08-22 01:37:53 +0000 [warn]: /var/lib/gems/2.3.0/gems/fluent-plugin-elasticsearch-3.5.4/lib/fluent/plugin/out_elasticsearch.rb:627:in `write'
2019-08-22 01:37:53 +0000 [warn]: /var/lib/gems/2.3.0/gems/fluentd-1.6.3/lib/fluent/plugin/output.rb:1128:in `try_flush'
2019-08-22 01:37:53 +0000 [warn]: /var/lib/gems/2.3.0/gems/fluentd-1.6.3/lib/fluent/plugin/output.rb:1434:in `flush_thread_run'
2019-08-22 01:37:53 +0000 [warn]: /var/lib/gems/2.3.0/gems/fluentd-1.6.3/lib/fluent/plugin/output.rb:457:in `block (2 levels) in start'
2019-08-22 01:37:53 +0000 [warn]: /var/lib/gems/2.3.0/gems/fluentd-1.6.3/lib/fluent/plugin_helper/thread.rb:78:in `block in thread_create'
2019-08-22 01:37:58 +0000 [warn]: [elasticsearch] failed to flush the buffer. retry_time=1 next_retry_seconds=2019-08-22 01:37:59 +0000 chunk="590aabcad4c3065b9f04e06fcbe47999" error_class=Fluent::Plugin::ElasticsearchOutput::RecoverableRequestFailure error="could not push logs to Elasticsearch cluster ({:host=>\"elasticsearch-svc.default.svc.cluster.local\", :port=>9200, :scheme=>\"http\"}): read timeout reached"
2019-08-22 01:37:58 +0000 [warn]: suppressed same stacktrace
2019-08-22 01:38:01 +0000 [warn]: [elasticsearch] failed to flush the buffer. retry_time=2 next_retry_seconds=2019-08-22 01:38:04 +0000 chunk="590aabc5ee2488877eb924f9624000c3" error_class=Fluent::Plugin::ElasticsearchOutput::RecoverableRequestFailure error="could not push logs to Elasticsearch cluster ({:host=>\"elasticsearch-svc.default.svc.cluster.local\", :port=>9200, :scheme=>\"http\"}): read timeout reached"
2019-08-22 01:38:06 +0000 [warn]: [elasticsearch] failed to flush the buffer. retry_time=3 next_retry_seconds=2019-08-22 01:38:09 +0000 chunk="590aabcad4c3065b9f04e06fcbe47999" error_class=Fluent::Plugin::ElasticsearchOutput::RecoverableRequestFailure error="could not push logs to Elasticsearch cluster ({:host=>\"elasticsearch-svc.default.svc.cluster.local\", :port=>9200, :scheme=>\"http\"}): read timeout reached"
```
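The read timeouts above usually mean Elasticsearch is slow to acknowledge bulk requests. One common mitigation (an assumption here, not taken from the original) is raising the elasticsearch output's `request_timeout` and shrinking the buffer chunks so each bulk request is smaller:

```
<match **>
  @type elasticsearch
  host elasticsearch-svc.default.svc.cluster.local
  port 9200
  request_timeout 30s
  <buffer>
    chunk_limit_size 4M
    flush_interval 5s
  </buffer>
</match>
```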
Tail the fluentd DaemonSet pod logs:

```shell
kubectl -n kube-system logs $(kubectl -n kube-system get pod -l k8s-app=fluentd-es -o jsonpath={.items[0].metadata.name})
```
References
- filebeat.yml
- Running Filebeat on Docker
- Running Filebeat on Kubernetes
- Running Kibana on Docker
- Kibana configuration settings
- fluentd-elasticsearch
- Quickly building a secure and reliable logging service with EFK
- Setting up fluentd + ES on k8s
- Building a log analysis system on EFK
- fluentd in practice
- Routing fluentd logs to different Kafka clusters
- Data-loss issues encountered with Fluentd
- k8s series – log collection with Fluent Bit
- error_class=Fluent::ConcatFilter::TimeoutError error="Timeout flush: …
- Fluent::Plugin::ConcatFilter::TimeoutError error="Timeout flush
- fluent-plugin-concat/README
- filebeat DaemonSet
- https://docs.fluentbit.io/manual/installation/kubernetes#installation
- https://github.com/fluent/fluent-bit-kubernetes-logging
Last updated: 2019-03-31