docker07 Monitoring Containers


Monitoring containers

Use the docker inspect command to get detailed information about a container

[root@docker ~]# docker inspect r-Default-nginx-1-fc08b70a 
[
    {
        "Id": "bdbc735f0802a755bfc2e4c669a6a029cf06a00d17591b5b1daf987855a4be48",
        "Created": "2020-05-14T11:23:09.025156613Z",
        "Path": "/.r/r",
        "Args": [
            "nginx",
            "-g",
            "daemon off;"
        ],
  • The docker inspect command has a --format (-f) option; it takes a Go template, which lets you extract a specific piece of a container's or image's details
[root@docker ~]# docker inspect -f '{{.NetworkSettings.IPAddress}}' friendly_banach 
172.17.0.2
[root@docker ~]# docker inspect -f '{{.State.Running}}' friendly_banach 
true
  • docker-py can also be used to retrieve the same detailed information
[root@docker ~]# python3
Python 3.6.8 (default, Apr  2 2020, 13:34:55) 
[GCC 4.8.5 20150623 (Red Hat 4.8.5-39)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from docker import Client
>>> c=Client(base_url="unix://var/run/docker.sock")
>>> c.inspect_container('friendly_banach')['State']['Running']
True
>>> c.inspect_container('friendly_banach')['NetworkSettings']['IPAddress']
'172.17.0.2'
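
Building on the same client, a short script can walk every running container and print the fields you would otherwise dig out one inspect call at a time. This is a minimal sketch assuming the legacy docker-py Client API shown above; nothing here is specific to this host.

#!/usr/bin/env python
# Sketch: list name, state and IP address of every running container.
from docker import Client

c = Client(base_url="unix://var/run/docker.sock")
for container in c.containers():                 # running containers only, by default
    info = c.inspect_container(container['Id'])
    print(info['Name'].lstrip('/'),
          info['State']['Running'],
          info['NetworkSettings']['IPAddress'])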

Getting usage statistics from running containers

  • Monitor a container's resource usage
  • Use the docker stats command
[root@docker ~]# docker run -d -p 5000:5000 flask
[root@docker ~]# docker stats 0212
  • What you receive is a continuous stream of output (a docker-py version of the same stream is sketched below)
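The same stream is available programmatically. The sketch below, again assuming the legacy docker-py Client API, reads a few samples from a container's stats stream and prints its memory usage; 'flask_container' is a placeholder name, not one from the example above.

#!/usr/bin/env python
# Sketch: read a handful of samples from the stats stream of one container.
from docker import Client

c = Client(base_url="unix://var/run/docker.sock")
stats = c.stats('flask_container', decode=True)   # generator yielding one dict per sample
for i, sample in enumerate(stats):
    print(sample['memory_stats'].get('usage'))
    if i == 4:                                     # stop after five samples; the stream never ends
        break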
  • Collect the statistics through the daemon's remote API: make the daemon also listen on a TCP socket (restart the daemon after changing its options)
[root@docker ~]# vim /etc/sysconfig/docker
OPTIONS='-H tcp://127.0.0.1:2375'

[root@docker ~]# docker -H tcp://127.0.0.1:2375 run -d -p 5002:5000 localhost:5000/flask:foobar
dab59d1f84a1233be3d1a144f80e5a5edb9c7c7361f2da4f4868d8a7f77f064d
[root@docker ~]# docker -H tcp://127.0.0.1:2375 ps
CONTAINER ID        IMAGE                         COMMAND                  CREATED             STATUS              PORTS                    NAMES
dab59d1f84a1        localhost:5000/flask:foobar   "python /tmp/hello.py"   6 seconds ago       Up 5 seconds        0.0.0.0:5002->5000/tcp   nostalgic_wright

# Use curl to query the Docker remote API
[root@docker ~]# curl http://127.0.0.1:2375/containers/nostagic_wright/stats
{"message":"No such container: nostagic_wright"}
[root@docker ~]# curl http://127.0.0.1:2375/containers/nostalgic_wright/stats
{"read":"2020-05-14T11:57:54.149141537Z","preread":"0001-01-01T00:00:00Z","pids_stats":{"current":1},"blkio_stats":{"io_service_bytes_recursive":[{"major":8,"minor":0,"op":"Read","value":15392768},{"major":8,"minor":0,"op":"Write","value":0},{"major":8,"minor":0,"op":"Sync","value":0},{"major":8,"minor":0,"op":"Async","value":15392768},{"major":8,"minor":0,"op":"Total","value":15392768}],"io_serviced_recursive":[{"major":8,"minor":0,"op":"Read","value":410},{"major":8,"minor":0,"op":"Write","value":0},{"major":8,"minor":0,"op":"Sync","value":0},{"major":8,"minor":0,"op":"Async","value":410},{"major":8,"minor":0,"op":"Total","value":410}],"io_queue_recursive":[],"io_service_time_recursive":[],"io_wait_time_recursive":[],"io_merged_recursive":[],"io_time_recursive":[],"sectors_recursive":[]},"num_procs":0,"storage_stats":{},"cpu_stats":{"cpu_usage":{"total_usage":510879029,"percpu_usage":[28316780,482562249],"usage_in_kernelmode":260000000,"usage_in_usermode":160000000},"system_cpu_usage":6079130000000,"throttling_data":{"periods":0,"throttled_periods":0,"throttled_time":0}},"precpu_stats":{"cpu_usage":{"total_usage":0,"usage_in_kernelmode":0,"usage_in_usermode":0},"throttling_data":{"periods":0,"throttled_periods":0,"throttled_time":0}},"memory_stats":{"usage":27471872,"max_usage":27541504,"stats":{"active_anon":13373440,"active_file":589824,"cache":14098432,"hierarchical_memory_limit":9223372036854771712,"hierarchical_memsw_limit":9223372036854771712,"inactive_anon":0,"inactive_file":13508608,"mapped_file":3932160,"pgfault":6898,"pgmajfault":20,"pgpgin":7376,"pgpgout":669,"rss":13373440,"rss_huge":0,"swap":0,"total_active_anon":13373440,"total_active_file":589824,"total_cache":14098432,"total_inactive_anon":0,"total_inactive_file":13508608,"total_mapped_file":3932160,"total_pgfault":6898,"total_pgmajfault":20,"total_pgpgin":7376,"total_pgpgout":669,"total_rss":13373440,"total_rss_huge":0,"total_swap":0,"total_unevictable":0,"unevictable":0},"limit":1019826176},"name":"/nostalgic_wright","id":"dab59d1f84a1233be3d1a144f80e5a5edb9c7c7361f2da4f4868d8a7f77f064d","networks":{"eth0":{"rx_bytes":656,"rx_packets":8,"rx_errors":0,"rx_dropped":0,"tx_bytes":656,"tx_packets":8,"tx_errors":0,"tx_dropped":0}}}

Listening for Docker events on the Docker host

  • Monitor Docker events on the host, such as image untag and delete events as well as container lifecycle events
  • Use the docker events command
[root@docker ~]# docker events --since `date +%s`

2020-05-14T20:19:47.804778385+08:00 container destroy d453efdbe9ed7c5163861c807c2b17188a202f0672346b6b0624183a0411ec63 (image=c22, name=stupefied_bhaskara)
2020-05-14T20:20:38.504168502+08:00 container destroy 22197e73e2846564e42df146829fd76743098302f914a40c54b51278a0d18635 (image=c22, name=serene_feynman)
2020-05-14T20:21:13.541290011+08:00 container destroy f37dd785b8688e5d2168d14622a69a0f6470fbc007a87b323c684e75a6093d23 (image=hello-world, name=distracted_lumiere)
2020-05-14T20:21:16.308051302+08:00 image untag sha256:bf756fb1ae65adf866bd8c456593cd24beb6a0a061dedf42b26a993176745f6b (name=sha256:bf756fb1ae65adf866bd8c456593cd24beb6a0a061dedf42b26a993176745f6b)
2020-05-14T20:21:16.310636070+08:00 image delete sha256:bf756fb1ae65adf866bd8c456593cd24beb6a0a061dedf42b26a993176745f6b (name=sha256:bf756fb1ae65adf866bd8c456593cd24beb6a0a061dedf42b26a993176745f6b)

  • Use docker-py
[root@docker ~]# cat 111.py
#!/usr/bin/env python
import time
import docker

# Connect to the local Docker daemon over its Unix socket
cli = docker.Client(base_url='unix://var/run/docker.sock')
# Subscribe to the event stream, starting from the current time
events = cli.events(since=time.time())
for e in events:
    print(e.decode())

[root@docker ~]# python3 111.py
{"status":"destroy","id":"1dfd11ab934aae34ca62be0ff519f272784ea5b60299e045cf495d6715c01903","from":"bac","Type":"container","Action":"destroy","Actor":{"ID":"1dfd11ab934aae34ca62be0ff519f272784ea5b60299e045cf495d6715c01903","Attributes":{"image":"bac","name":"upbeat_hugle"}},"time":1589459334,"timeNano":1589459334373629869}

Getting a container's logs with the docker logs command

  • Access the process's logs on the host where the container is running
  • Use the docker logs command; the -f flag gives you a continuously streaming log output
[root@docker ~]# docker logs -f rancher-agent 
Found container ID: a551cf12eac99b78808d43b165b7366583bf6a2804e05f1465ba08b84e495a18
Checking root: /host/run/runc
Checking file: 4202ac2f9a86003d9bfb7e1970972d7157df0a7c4ed40267a446d4c86d4f577c
Checking file: a551cf12eac99b78808d43b165b7366583bf6a2804e05f1465ba08b84e495a18
Found state.json: a551cf12eac99b78808d43b165b7366583bf6a2804e05f1465ba08b84e495a18
time="2020-05-14T12:15:59Z" level=info msg="Execing [/usr/bin/nsenter --mount=/proc/7216/ns/mnt -F -- /var/lib/docker/overlay2/cc95229315d87f383653031597e09f89960f1f93d7a57d7a55d114afc5ad1cf0/merged/usr/bin/share-mnt --stage2 /var/lib/rancher/volumes /var/lib/kubelet -- norun]" 
INFO: Starting agent for CC0E75A94ABFD3800AF3

  • Use the docker top command to monitor the processes running inside a container
[root@docker ~]# docker top nginx
UID     PID     PPID    C    STIME   TTY     TIME     CMD
root   9587     9568    1    20:34    ?     00:00:00  nginx: master process nginx -g daemon off;
101    9613     9587    0    20:34    ?     00:00:00  nginx: worker process
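
Both of these are also exposed through docker-py, which is handy when you want the last few log lines or the process list from a script rather than the CLI. A minimal sketch with the same legacy Client API ('nginx' refers to the container used with docker top above):

#!/usr/bin/env python
from docker import Client

c = Client(base_url="unix://var/run/docker.sock")
# last 10 log lines, without streaming
print(c.logs('nginx', tail=10).decode())
# equivalent of `docker top nginx`
top = c.top('nginx')
for proc in top['Processes']:
    print(proc)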

Collecting container logs with Logspout

  • Collect and aggregate the logs of containers running on different hosts
  • Logspout can collect the logs of all containers on a Docker host and route them to another host. It runs as a container and is completely stateless
# On the 10.0.0.180 server
[root@docker ~]# docker pull nginx
[root@docker ~]# docker pull gliderlabs/logspout
[root@docker ~]# docker run -d --name webserver -p 80:80 nginx
[root@docker ~]# docker run -d --name logspout -v /var/run/docker.sock:/tmp/docker.sock docker.io/gliderlabs/logspout syslog://10.0.0.181:5000

# To collect the logs, run a Logstash container on the second host (10.0.0.181). It listens for syslog input on UDP port 5000 and writes everything to standard output on the same host

# On the 10.0.0.181 server
[root@docker02 ~]# cat Dockerfile
FROM ehazlett/logstash

COPY logstash.conf /etc/logstash.conf
ENTRYPOINT ["/opt/logstash/bin/logstash"]


# Once the base image has been pulled, build your own Logstash image with a custom Logstash configuration file
[root@docker02 ~]# cat logstash.conf 
input {
  udp {
    port => 5000
    type => syslog
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

output {
  stdout { codec => rubydebug }
}

# Build the image
[root@docker02 ~]# docker build -t logstash .

# Start the Logstash container, binding container port 5000 to port 5000 on the host
[root@docker02 ~]# docker run -d --name logstash -p 5000:5000/udp logstash -f /etc/logstash.conf

# Open a browser and request the Nginx running on the first Docker host; the corresponding log output then appears in the Logstash container
[root@docker02 ~]# docker logs logstash 
{:timestamp=>"2020-05-14T13:28:01.798000+0000", :message=>"Using milestone 2 input plugin 'udp'. This plugin should be stable, but if you see strange behavior, please let us know! For more information on plugin milestones, see http://logstash.net/docs/1.4.2-modified/plugin-milestones", :level=>:warn}
{:timestamp=>"2020-05-14T13:28:01.834000+0000", :message=>"Using milestone 1 filter plugin 'syslog_pri'. This plugin should work, but would benefit from use by folks like you. Please let us know if you find bugs or have suggestions on how to improve this plugin.  For more information on plugin milestones, see http://logstash.net/docs/1.4.2-modified/plugin-milestones", :level=>:warn}
{
                 "message" => "<11>1 2020-05-14T13:28:45Z bbb87d10eb4c logspout 11944 - - 2020/05/14 13:28:45 # logspout v3.2.11 by gliderlabs\n",
                "@version" => "1",
              "@timestamp" => "2020-05-14T13:28:45.497Z",
                    "type" => "syslog",
                    "host" => "10.0.0.180",
                    "tags" => [
        [0] "_grokparsefailure"
    ],
    "syslog_severity_code" => 5,
    "syslog_facility_code" => 1,
         "syslog_facility" => "user-level",
         "syslog_severity" => "notice"

Managing Logspout routes to store container logs

  • Change the URI of the remote server: view the logs directly in Logspout to debug a container, modify a route's URI, or add more route URIs
# Pull an image that ships with the curl command
[root@docker ~]# docker pull tutum/curl
# Start an interactive container
[root@docker ~]# docker run -it --link logspout:logspout tutum/curl /bin/bash
# Confirm that the Logspout container can be pinged
root@d5ae3aea7330:/# ping logspout
PING logspout (172.17.0.2) 56(84) bytes of data.
64 bytes from logspout (172.17.0.2): icmp_seq=1 ttl=64 time=0.098 ms
^C
--- logspout ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms
# Use curl to view the logs exposed by the Logspout container
root@d5ae3aea7330:/# curl http://logspout:80/logs
       webserver|10.0.0.1 - - [14/May/2020:13:47:15 +0000] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3902.4 Safari/537.36" "-"

# The routes API
root@d5ae3aea7330:/# curl http://logspout:80/routes
[
  {
    "id": "7f9822b34ec1",
    "adapter": "syslog",
    "address": "10.0.0.181:5000"
  }
]

root@d5ae3aea7330:/# curl http://logspout:80/routes/7f9822b34ec1
{
  "id": "7f9822b34ec1",
  "adapter": "syslog",
  "address": "10.0.0.181:5000"
}

# Delete a route
root@d5ae3aea7330:/# curl -X DELETE http://logspout:80/routes/7f9822b34ec1
root@d5ae3aea7330:/# curl http://logspout:80/routes 
[]

# Add new routes
root@d5ae3aea7330:/# curl -X POST -d '{"id": "7f9822b34ec1","adapter": "syslog","address": "10.0.0.181:5000"}' http://logspout:80/routes
{
  "id": "7f9822b34ec1",
  "adapter": "syslog",
  "address": "10.0.0.181:5000"
}
root@d5ae3aea7330:/# curl -X POST -d '{"id": "g59822b34qw2","adapter": "syslog","address": "10.0.0.182:5000"}' http://logspout:80/routes
{
  "id": "g59822b34qw2",
  "adapter": "syslog",
  "address": "10.0.0.182:5000"
}

# View the routes that were added
root@d5ae3aea7330:/# curl http://logspout:80/routes
[
  {
    "id": "7f9822b34ec1",
    "adapter": "syslog",
    "address": "10.0.0.181:5000"
  },
  {
    "id": "g59822b34qw2",
    "adapter": "syslog",
    "address": "10.0.0.182:5000"
  }
]
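
The same routes API can be driven from code. The sketch below uses the requests library against the HTTP endpoints exercised with curl above; the Logspout address and the route values are the same illustrative ones as in the curl session.

#!/usr/bin/env python
import json
import requests

LOGSPOUT = 'http://logspout:80'

# list the current routes
print(requests.get(LOGSPOUT + '/routes').json())

# add a route that ships logs to a second syslog target
route = {'id': 'g59822b34qw2', 'adapter': 'syslog', 'address': '10.0.0.182:5000'}
r = requests.post(LOGSPOUT + '/routes', data=json.dumps(route))
print(r.json())

# delete a route by id:
# requests.delete(LOGSPOUT + '/routes/g59822b34qw2')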

Storing and visualizing container logs with Elasticsearch and Kibana

  • Kibana is a dashboard system that lets you query Elasticsearch indices and visualize the results easily. Start a Logstash container from the ehazlett/logstash image with its default configuration
[root@docker02 ~]# docker run --name es -d -p 9200:9200 -p 9300:9300 docker.io/ehazlett/elasticsearch
[root@docker02 ~]# docker run --name kibana -d -p 80:80 docker.io/ehazlett/kibana
[root@docker02 ~]# docker run -d --name logstash -p 5000:5000/udp --link es:elasticsearch docker.io/ehazlett/logstash -f /etc/logstash.conf.sample
  • Browse to port 80 on 10.0.0.181 to reach Kibana
  • Hit port 80 on 10.0.0.180; the resulting logs are automatically stored in Elasticsearch and show up in Kibana
  • If you stop and remove the Elasticsearch container, the indices created for the log stream collected by Logspout are lost. Mount a local volume to persist the Elasticsearch data and back up the indices

Visualizing container metrics with collectd

  • Monitor various container metrics

  • Run a collectd container on every host whose containers you want to monitor, mounting /var/run/docker.sock into it. A collectd plugin collects metrics through the Docker stats API and ships them to a Graphite dashboard running on another host (a rough sketch of such a collector follows the Compose files below)

  • One host runs four containers: an Nginx container that writes test logs to standard output, a Logspout container that forwards all standard-output logs to a Logstash instance, a container that generates synthetic load, and a collectd container

  • The other host also runs four containers: a Logstash container that receives the logs shipped by Logspout, an Elasticsearch container to store the logs, a Kibana container to visualize them, and a Graphite container. The Graphite container also runs carbon to store the metric data

[root@docker ~]# vim docbook/ch09/collectd/Dockerfile 

FROM debian:jessie

RUN apt-get update && apt-get -y install \
    collectd \
    python \
    python-pip
RUN apt-get clean
RUN pip install docker-py

RUN groupadd -r docker && useradd -r -g docker docker

ADD docker-stats.py /opt/collectd/bin/docker-stats.py
ADD docker-report.py /opt/collectd/bin/docker-report.py
ADD collectd.conf /etc/collectd/collectd.conf

RUN chown -R docker /opt/collectd/bin

CMD ["/usr/sbin/collectd","-f"]

[root@docker ~]# vim docbook/ch09/collectd/worker.yml 

nginx:
  image: nginx
  ports:
   - 80:80
logspout:
  image: gliderlabs/logspout
  volumes:
   - /var/run/docker.sock:/tmp/docker.sock
  command: syslog://192.168.33.11:5000
collectd:
  build: .
  volumes:
   - /var/run/docker.sock:/var/run/docker.sock
load:
  image: borja/unixbench

[root@docker ~]# vim docbook/ch09/collectd/monitor.yml 

es:
  image: ehazlett/elasticsearch
  ports:
   - 9300:9300
   - 9200:9200
kibana:
  image: ehazlett/kibana
  ports:
   - 8080:80
graphite:
  image: hopsoft/graphite-statsd
  ports:
   - 80:80
   - 2003:2003
   - 8125:8125/udp
logstash:
  image: ehazlett/logstash
  ports:
   - 5000:5000
   - 5000:5000/udp
  volumes:
   - /vagrant/logstash.conf:/etc/logstash.conf
  links:
   - es:elasticsearch
  command: -f /etc/logstash.conf
  
  
# Run on the host being monitored
docker-compose -f docbook/ch09/collectd/worker.yml up -d
# Run on the host that collects the logs and metrics
docker-compose -f docbook/ch09/collectd/monitor.yml up -d
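
The Dockerfile above ships two scripts, docker-stats.py and docker-report.py, whose contents are not reproduced here. As a rough idea of what such a collector can look like, the sketch below (not the author's actual script) reads one stats sample per container with docker-py and prints values in the plain-text PUTVAL format consumed by collectd's Exec plugin; the plugin and type names are purely illustrative.

#!/usr/bin/env python
# Illustrative sketch only -- not the docker-stats.py referenced in the Dockerfile.
import socket
import time
from docker import Client

HOSTNAME = socket.gethostname()
INTERVAL = 10

c = Client(base_url='unix://var/run/docker.sock')
while True:
    for container in c.containers():
        name = container['Names'][0].lstrip('/')
        # take a single sample from the stats stream
        sample = next(c.stats(container['Id'], decode=True))
        mem = sample['memory_stats'].get('usage', 0)
        # PUTVAL line in the format collectd's Exec plugin expects
        print('PUTVAL "%s/docker-%s/memory" interval=%d N:%d'
              % (HOSTNAME, name, INTERVAL, mem))
    time.sleep(INTERVAL)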

Monitoring container resource usage with cAdvisor

  • cAdvisor is a resource-usage monitoring system for containers: software for watching container resource consumption and performance. It runs as a container on the Docker host; by mounting local volumes it can monitor every container running on that host. It also provides a local web UI and an API, and can write its data to InfluxDB. Sending the data from running containers to a remote InfluxDB cluster makes it possible to aggregate performance metrics for all containers running in a cluster
# Pull the cAdvisor image and the borja/unixbench image; the latter is used to generate synthetic load
[root@docker ~]# docker pull google/cadvisor
[root@docker ~]# docker pull borja/unixbench
[root@docker ~]# docker run -v /var/run:/var/run:rw -v /sys:/sys:ro -v /var/lib/docker/:/var/lib/docker:ro -p 8080:8080 -d --name cadvisor docker.io/google/cadvisor
[root@docker ~]# docker run -d docker.io/borja/unixbench
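
Besides the web UI on port 8080, cAdvisor exposes a REST API. The sketch below polls it for the machine description and for per-container Docker stats; the /api/v1.3/... paths and field names reflect cAdvisor's versioned API as I understand it, so adjust them if your cAdvisor version differs.

#!/usr/bin/env python
import requests

CADVISOR = 'http://localhost:8080'

# machine-level information (number of cores, memory capacity, ...)
machine = requests.get(CADVISOR + '/api/v1.3/machine').json()
print(machine['num_cores'], machine['memory_capacity'])

# recent stats for every Docker container cAdvisor is watching
containers = requests.get(CADVISOR + '/api/v1.3/docker/').json()
for name, data in containers.items():
    last = data['stats'][-1]          # most recent sample
    print(name, last['memory']['usage'])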

Visualizing the container layout with Weave Scope

[root@docker ~]# cat docbook/ch09/weavescope/docker-compose.yml
db1:
  image: peterbourgon/tns-db
db2:
  image: peterbourgon/tns-db
  links:
    - db1
db3:
  image: peterbourgon/tns-db
  links:
    - db1
    - db2

app1:
  image: peterbourgon/tns-app
  links:
    - db1
    - db2
    - db3
app2:
  image: peterbourgon/tns-app
  links:
    - db1
    - db2
    - db3

lb1:
  image: peterbourgon/tns-lb
  links:
    - app1
    - app2
  ports:
    - 0.0.0.0:8001:80
lb2:
  image: peterbourgon/tns-lb
  links:
    - app1
    - app2
  ports:
    - 0.0.0.0:8002:80
    
[root@docker ~]# docker-compose -f docbook/ch09/weavescope/docker-compose.yml up -d
