When running containers locally, a runaway container can sometimes eat so many resources that the whole machine becomes unresponsive: you want to shut the container down, but you can't even do that. I don't know whether you have run into this trouble, but I certainly have, so here is how to limit a Docker container's resource consumption.
Docker limits CPU in several dimensions.
- Simple: limit the number of CPU cores, which is easy to understand
docker run --cpus=4 means the container may use up to 4 cores' worth of CPU time (`--cpus` takes a single count such as 1, 2, 3 or 4, and fractional values like 0.5 also work)
- Complex: limit based on CPU time slices; Docker implements this on top of the CFS scheduler. This was common in older versions, while newer versions recommend the simple way above
docker run --cpu-period=100000 --cpu-quota=200000 means that in each 100 ms scheduling period the container may use up to 200 ms of CPU time (both values are in microseconds; this is roughly equivalent to a 2-core limit, but not an absolute one)
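The period/quota arithmetic above can be sanity-checked directly: the number of cores a container can effectively use is the quota divided by the period. A quick sketch (the variable names below are illustrative, not Docker options):

```shell
# CFS period and quota are both expressed in microseconds
period=100000   # 100 ms scheduling period
quota=200000    # up to 200 ms of CPU time per period

# Effective core limit = quota / period
cores=$((quota / period))
echo "$cores"   # 2 -> roughly equivalent to --cpus=2
```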
In addition, Docker provides other CPU options; without going into detail, briefly:
cpuset-cpus: pin the container so it can only run on the specified cores
cpu-shares: when multiple containers compete for CPU time, this value allocates CPU time between them proportionally
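Because cpu-shares is purely relative, the split only matters under contention. With the hypothetical values below (1024 vs. 512 shares), the first container gets about two thirds of the CPU time when both are busy:

```shell
# Hypothetical share values for two contending containers
shares_a=1024
shares_b=512

# Percentage of CPU time container A receives under full contention
pct_a=$(awk "BEGIN { printf \"%d\", 100 * $shares_a / ($shares_a + $shares_b) }")
echo "$pct_a"   # 66 -> about two thirds for container A
```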
There are several options for limiting memory, so I’ll pick 4 meaningful ones and talk about them.
- Limit memory size:
docker run -m 200m means the container can use only 200 MiB of memory; using more triggers an OOM kill
- Soft memory limit:
docker run --memory-reservation 200m pushes the container back toward 200 MiB only when host memory is tight; otherwise it can happily use more, which in practice makes this option of limited use
- Kernel memory limit:
docker run --kernel-memory 200m means the container can use only 200 MiB of kernel memory; using more triggers an OOM kill
- OOM setting:
docker run --oom-kill-disable means processes that exceed the memory limit are not OOM-killed; instead they are left blocked, unable to allocate more memory until some is freed
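One detail worth knowing: Docker's `m` suffix means MiB, so `-m 200m` ends up in the container's cgroup (e.g. `memory.limit_in_bytes` on cgroup v1) as a byte count, which you can compute yourself:

```shell
# -m 200m -> the limit Docker writes to the cgroup, in bytes
limit_bytes=$((200 * 1024 * 1024))
echo "$limit_bytes"   # 209715200
```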
Limit disk size
By default a Docker container can use only a 10 GB volume (this is the base size of the devicemapper storage driver); if you want more, you need to modify the daemon configuration:
cat /etc/docker/daemon.json
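The file contents did not survive here; a minimal sketch of what /etc/docker/daemon.json could look like, assuming the devicemapper storage driver (the one with the 10 GB default base size) and a hypothetical 20 GB target:

```json
{
  "storage-driver": "devicemapper",
  "storage-opts": [
    "dm.basesize=20G"
  ]
}
```

The daemon must be restarted for the change to take effect, and a larger base size only applies to newly created containers.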
Limit disk IO
These options are a bit more involved; the list below, taken from docker run --help, summarizes them.
--blkio-weight uint16          Block IO (relative weight), between 10 and 1000, or 0 to disable (default 0)
--blkio-weight-device list     Block IO weight (relative device weight) (default [])
--device-read-bps list         Limit read rate (bytes per second) from a device (default [])
--device-read-iops list        Limit read rate (IO per second) from a device (default [])
--device-write-bps list        Limit write rate (bytes per second) to a device (default [])
--device-write-iops list       Limit write rate (IO per second) to a device (default [])
# Limit writes to a whole disk
docker run -it --rm --device-write-bps /dev/sda:50mb ubuntu /bin/bash
# Limit writes to the device backing a particular file
docker run -it --rm --device-write-bps /dev/dm-x:50mb centos /bin/bash
- For details on how to find the device name of a file, see this: How to limit IO speed in docker and share file with system in the same time?
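The linked answer boils down to finding which block device backs a given path. `df` can report this (shown here for `/`; in practice you would point it at the file or volume you want to throttle, and the exact device name varies per host):

```shell
# Print the block device backing a path (here "/")
device=$(df / | awk 'NR==2 {print $1}')
echo "$device"   # e.g. /dev/sda1 or /dev/dm-0, depending on the host
```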
Limiting network bandwidth is a very common need, but as far as I can tell Docker does not support it officially. Some users simply implement it inside the container with the tc command (note that tc needs the NET_ADMIN capability, hence the --cap-add below), operating as follows:
docker run --rm -it --cap-add NET_ADMIN centos:7 /bin/sh
tc qdisc add dev eth0 handle 1: ingress
tc filter add dev eth0 parent 1: protocol ip prio 50 u32 match ip src 0.0.0.0/0 police rate 1mbit burst 10k drop flowid :1
tc qdisc add dev eth0 root tbf rate 1mbit latency 25ms burst 10k
This limits the eth0 interface to about 1 Mbit/s; see: How can I rate limit network traffic on a docker container.