Docker Calico Networking

By lizhibing · 2022-08-14 10:54
Systems Operations Engineer, Jianyitong (Beijing) Data Processing Information Co., Ltd.

1. Calico overview
(1) Calico is a pure layer-3 virtual networking solution.
(2) Calico assigns every container an IP address; each host acts as a router, connecting containers on different hosts.
(3) Unlike VxLAN, Calico adds no extra encapsulation to packets and needs no NAT or port mapping, so it scales and performs well.
(4) Compared with other container network solutions, Calico's key advantage is network policy: users can define ACL rules dynamically to control traffic entering and leaving containers.

2. Lab topology
Host   IP            OS         Packages
node1: 192.168.1.140 ubuntu1804 docker-ce:20.10.12, etcd-3.3.24
host1: 192.168.1.144 ubuntu1804 docker-ce:20.10.12, calico/node:v2.6.2, calicoctl-v1.6.2
host2: 192.168.1.145 ubuntu1804 docker-ce:20.10.12, calico/node:v2.6.2, calicoctl-v1.6.2

Calico relies on etcd to share and exchange information between hosts and to store the Calico network state.
node1 (192.168.1.140) runs etcd.
Every host in the Calico network (host1, host2) runs the Calico components, which handle container interface management, dynamic routing, dynamic ACLs, status reporting, and so on.

3. Start etcd
See the earlier etcd section for installation details.
Start etcd on node1 (192.168.1.140):
[root@node1 ~]# etcd -listen-client-urls http://192.168.1.140:2379 -advertise-client-urls http://192.168.1.140:2379

2022-01-27 10:56:20.900279 I | etcdmain: etcd Version: 3.3.24
2022-01-27 10:56:20.901895 I | etcdmain: Git SHA: bdd57848d
2022-01-27 10:56:20.902933 I | etcdmain: Go Version: go1.12.17
2022-01-27 10:56:20.903766 I | etcdmain: Go OS/Arch: linux/amd64
2022-01-27 10:56:20.904524 I | etcdmain: setting maximum number of CPUs to 1, total number of available CPUs is 1
2022-01-27 10:56:20.905327 W | etcdmain: no data-dir provided, using default data-dir ./default.etcd
2022-01-27 10:56:20.906553 N | etcdmain: the server is already initialized as member before, starting as etcd member...
2022-01-27 10:56:20.908606 I | embed: listening for peers on http://localhost:2380
2022-01-27 10:56:20.909852 I | embed: listening for client requests on 192.168.1.140:2379
2022-01-27 10:56:20.947753 I | etcdserver: name = default
2022-01-27 10:56:20.948925 I | etcdserver: data dir = default.etcd
2022-01-27 10:56:20.949576 I | etcdserver: member dir = default.etcd/member
2022-01-27 10:56:20.950310 I | etcdserver: heartbeat = 100ms
2022-01-27 10:56:20.950973 I | etcdserver: election = 1000ms
2022-01-27 10:56:20.951766 I | etcdserver: snapshot count = 100000
2022-01-27 10:56:20.952373 I | etcdserver: advertise client URLs = http://192.168.1.140:2379
2022-01-27 10:56:21.030243 I | etcdserver: restarting member 8e9e05c52164694d in cluster cdf818194e3a8c32 at commit index 7221
2022-01-27 10:56:21.032264 I | raft: 8e9e05c52164694d became follower at term 3
2022-01-27 10:56:21.033521 I | raft: newRaft 8e9e05c52164694d [peers: [], term: 3, commit: 7221, applied: 0, lastindex: 7221, lastterm: 3]
2022-01-27 10:56:21.037423 W | auth: simple token is not cryptographically signed
2022-01-27 10:56:21.058460 I | etcdserver: starting server... [version: 3.3.24, cluster version: to_be_decided]
2022-01-27 10:56:21.097010 I | etcdserver/membership: added member 8e9e05c52164694d [http://localhost:2380] to cluster cdf818194e3a8c32
2022-01-27 10:56:21.099223 N | etcdserver/membership: set the initial cluster version to 3.3
2022-01-27 10:56:21.135269 I | etcdserver/api: enabled capabilities for version 3.3
2022-01-27 10:56:23.038366 I | raft: 8e9e05c52164694d is starting a new election at term 3
2022-01-27 10:56:23.048735 I | raft: 8e9e05c52164694d became candidate at term 4
2022-01-27 10:56:23.048830 I | raft: 8e9e05c52164694d received MsgVoteResp from 8e9e05c52164694d at term 4
2022-01-27 10:56:23.049174 I | raft: 8e9e05c52164694d became leader at term 4
2022-01-27 10:56:23.049259 I | raft: raft.node: 8e9e05c52164694d elected leader 8e9e05c52164694d at term 4
2022-01-27 10:56:23.052111 I | etcdserver: published {Name:default ClientURLs:[http://192.168.1.140:2379]} to cluster cdf818194e3a8c32
2022-01-27 10:56:23.052430 E | etcdmain: forgot to set Type=notify in systemd service file?
2022-01-27 10:56:23.052530 I | embed: ready to serve client requests
2022-01-27 10:56:23.053841 N | embed: serving insecure client requests on 192.168.1.140:2379, this is strongly discouraged!

Modify the Docker daemon unit file /lib/systemd/system/docker.service on host1 and host2 so that Docker connects to etcd:

[root@host1 ~]# vim /lib/systemd/system/docker.service
Append --cluster-store=etcd://192.168.1.140:2379 to the ExecStart line:
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --cluster-store=etcd://192.168.1.140:2379

Restart the Docker daemon:
[root@host1 ~]# systemctl daemon-reload
[root@host1 ~]# systemctl restart docker.service

Repeat on host2:
[root@host2 ~]# vim /lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --cluster-store=etcd://192.168.1.140:2379

[root@host2 ~]# systemctl daemon-reload
[root@host2 ~]# systemctl restart docker.service
4. Install Calico

4.1 Download calicoctl v1.6.2
Install calicoctl on both host1 and host2. Either of the following downloads works:

[root@host1 ~]# wget -O /usr/local/bin/calicoctl https://ghproxy.com/https://github.com/projectcalico/calicoctl/releases/download/v1.6.2/calicoctl

[root@host1 ~]# curl -L https://ghproxy.com/https://github.com/projectcalico/calicoctl/releases/download/v1.6.2/calicoctl -o /usr/local/bin/calicoctl

[root@host1 ~]# chmod +x /usr/local/bin/calicoctl

Installation reference: https://www.cnblogs.com/xuchenCN/p/11381735.html

Copy the binary to host2:
[root@host1 ~]# scp /usr/local/bin/calicoctl root@192.168.1.145:/usr/local/bin/

4.2 Pull the calico/node image

Pull the calico/node image on both host1 and host2:
[root@host1 ~]# docker pull quay.io/calico/node:v2.6.2
v2.6.2: Pulling from calico/node
88286f41530e: Pull complete
451e44d240b0: Pull complete
564d30bd7dc2: Pull complete
39b8f29b8ec9: Pull complete
cd8e6a6bdbfe: Pull complete
Digest: sha256:99bce1d4b4d02e0f3a0ab1c9383654f07ef0fe2dab950c59f6cad4fbbc63291c
Status: Downloaded newer image for quay.io/calico/node:v2.6.2
quay.io/calico/node:v2.6.2
4.3 Pull the docker-ubuntu-with-ping image on host1 and host2

[root@host1 ~]# docker pull adiazmor/docker-ubuntu-with-ping    // pull on both hosts; used for testing, any image with ping would do
Using default tag: latest
latest: Pulling from adiazmor/docker-ubuntu-with-ping
af49a5ceb2a5: Pull complete
8f9757b472e7: Pull complete
e931b117db38: Pull complete
47b5e16c0811: Pull complete
9332eaf1a55b: Pull complete
a97364a03b95: Pull complete
e40915917595: Pull complete
Digest: sha256:f782d62c3f3cf73b3dad4654e3cc40e9e358995a55b91250dd7446fcef4bf446
Status: Downloaded newer image for adiazmor/docker-ubuntu-with-ping:latest
docker.io/adiazmor/docker-ubuntu-with-ping:latest

4.4 Create the configuration file /etc/calico/calicoctl.cfg

[root@host1 ~]# vim /etc/calico/calicoctl.cfg
apiVersion: v1
kind: calicoApiConfig
metadata:
spec:
  etcdEndpoints: "http://192.168.1.140:2379"
  etcdKeyFile:
  etcdCertFile:
  etcdCACertFile:

[root@host1 ~]# docker network ls
NETWORK ID NAME DRIVER SCOPE
a614af5ece2d bridge bridge local
c11bcacf251b host host local
9e556a3bedc2 none null local

[root@host1 ~]#

4.5 Start Calico

Start Calico on both host1 and host2.
host1:
[root@host1 ~]# calicoctl node run --node-image=calico/node:v2.6.2 --ip=192.168.1.144    // start Calico on host1
Running command to load modules: modprobe -a xt_set ip6_tables
Enabling IPv4 forwarding
Enabling IPv6 forwarding
Increasing conntrack limit
Removing old calico-node container (if running).
Running the following command to start calico-node:
docker run --net=host --privileged --name=calico-node -d --restart=always -e NODENAME=host1 -e CALICO_NETWORKING_BACKEND=bird -e CALICO_LIBNETWORK_ENABLED=true -e IP=192.168.1.144 -e ETCD_ENDPOINTS=http://192.168.1.140:2379 -v /var/log/calico:/var/log/calico -v /var/run/calico:/var/run/calico -v /lib/modules:/lib/modules -v /run:/run -v /run/docker/plugins:/run/docker/plugins -v /var/run/docker.sock:/var/run/docker.sock calico/node:v2.6.2

Image may take a short time to download if it is not available locally.
Container started, checking progress logs.
Skipping datastore connection test
Using IPv4 address from environment: IP=192.168.1.144
IPv4 address 192.168.1.144 discovered on interface eth0
No AS number configured on node resource, using global value
Using node name: host1
Starting libnetwork service
Calico node started successfully

[root@host1 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ad90ed03880b calico/node:v2.6.2 "start_runit" 46 seconds ago Up 44 seconds calico-node

[root@host1 ~]# docker network ls
NETWORK ID NAME DRIVER SCOPE
776c29babfe9 bridge bridge local
c11bcacf251b host host local
9e556a3bedc2 none null local
5. Create a Calico network
5.1 On host1 (or host2), create the Calico network cal_net1:

[root@host1 ~]# docker network create --driver calico --ipam-driver calico-ipam cal_net1
9d6100ea4e803dd0b3c1a8fd506ba4232a76bc28209fd7e930e1fab485acf376
You can also specify a subnet, for example:
docker network create --driver calico --ipam-driver calico-ipam --subnet=10.233.0.0/16 calico
Parameters:
--driver calico: use the calico network driver
--ipam-driver calico-ipam: use Calico's IPAM driver to manage IP addresses
--subnet: the IP range to draw container addresses from, required if you want to assign fixed container IPs

5.2 View the newly created network

cal_net1 is a global-scope network, so etcd syncs it to all hosts:
host1:
[root@host1 ~]# docker network ls
NETWORK ID NAME DRIVER SCOPE
776c29babfe9 bridge bridge local
9d6100ea4e80 cal_net1 calico global
c11bcacf251b host host local
9e556a3bedc2 none null local

host2:
[root@host2 ~]# docker network ls
NETWORK ID NAME DRIVER SCOPE
998d22ebd4a0 bridge bridge local
9d6100ea4e80 cal_net1 calico global
c11bcacf251b host host local
9e556a3bedc2 none null local

5.3 Run containers on the Calico network
Run container bbox1 on host1 and connect it to cal_net1:
[root@host1 ~]# docker run -itd --name bbox1 --net cal_net1 busybox
4daf57a135f565ddd2970b794c62480f3071218999d902426d02616aca022894

View the running containers:
[root@host1 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4daf57a135f5 busybox "sh" 7 minutes ago Up 7 minutes bbox1
ad90ed03880b calico/node:v2.6.2 "start_runit" 11 minutes ago Up 11 minutes calico-node

View bbox1's network configuration:
[root@host1 ~]# docker exec bbox1 ip address
1: lo: mtu 65536 qdisc noqueue qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
4: cali0@if5: mtu 1500 qdisc noqueue
link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff
inet 192.168.119.0/32 scope global cali0
valid_lft forever preferred_lft forever

// cali0 is the Calico interface, assigned IP 192.168.119.0/32. cali0's peer is interface 5 on host1, cali23572555706@if4, shown below:
[root@host1 ~]# ip link show

5: cali23572555706@if4: mtu 1500 qdisc noqueue state UP mode DEFAULT group default
link/ether 4e:27:31:b3:1b:77 brd ff:ff:ff:ff:ff:ff link-netnsid 0

5.4 Route analysis

host1 acts as a router, forwarding packets destined for bbox1:
[root@host1 ~]# ip route
default via 192.168.1.1 dev eth0 onlink
169.254.0.0/16 dev eth0 scope link metric 1000
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
192.168.1.0/24 dev eth0 proto kernel scope link src 192.168.1.144
192.168.119.0 dev cali23572555706 scope link    // forwards packets destined for bbox1
blackhole 192.168.119.0/26 proto bird
Every packet sent to bbox1 is handed to cali23572555706; because cali23572555706 and cali0 form a veth pair, bbox1 receives the traffic.
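The kernel picks cali23572555706 for bbox1-bound traffic by ordinary longest-prefix matching: the /32 host route is more specific than the blackhole /26. As a rough illustration (a simplified model of the lookup, not how the kernel stores routes), host1's table can be sketched in Python:

```python
import ipaddress

# Simplified model of host1's routing table: (prefix, action).
# The /32 host route wins over the blackhole /26 because it is more specific.
routes = [
    ("0.0.0.0/0",         "via 192.168.1.1 dev eth0"),
    ("192.168.1.0/24",    "dev eth0"),
    ("192.168.119.0/32",  "dev cali23572555706"),   # host route to bbox1
    ("192.168.119.0/26",  "blackhole"),
    ("192.168.183.64/26", "via 192.168.1.145 dev eth0"),
]

def lookup(dst: str) -> str:
    """Return the action of the most specific route matching dst."""
    addr = ipaddress.ip_address(dst)
    best = max(
        (ipaddress.ip_network(p) for p, _ in routes
         if addr in ipaddress.ip_network(p)),
        key=lambda n: n.prefixlen,
    )
    return dict(routes)[str(best)]

print(lookup("192.168.119.0"))   # bbox1's address -> its veth on host1
print(lookup("192.168.183.64"))  # bbox2's address -> forwarded to host2
```

Running the lookup for bbox1's address selects the veth route, while bbox2's address falls through to the aggregated /26 route toward host2.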

Next, run container bbox2 on host2, also connected to cal_net1.
host2:

6. host2: pull the images and start Calico

6.1 Pull the calico/node image
[root@host2 ~]# docker pull quay.io/calico/node:v2.6.2
v2.6.2: Pulling from calico/node
88286f41530e: Pull complete
451e44d240b0: Pull complete
564d30bd7dc2: Pull complete
39b8f29b8ec9: Pull complete
cd8e6a6bdbfe: Pull complete
Digest: sha256:99bce1d4b4d02e0f3a0ab1c9383654f07ef0fe2dab950c59f6cad4fbbc63291c
Status: Downloaded newer image for quay.io/calico/node:v2.6.2
quay.io/calico/node:v2.6.2
6.2 Pull the docker-ubuntu-with-ping image

[root@host2 ~]# docker pull adiazmor/docker-ubuntu-with-ping
Using default tag: latest
latest: Pulling from adiazmor/docker-ubuntu-with-ping
af49a5ceb2a5: Pull complete
8f9757b472e7: Pull complete
e931b117db38: Pull complete
47b5e16c0811: Pull complete
9332eaf1a55b: Pull complete
a97364a03b95: Pull complete
e40915917595: Pull complete
Digest: sha256:f782d62c3f3cf73b3dad4654e3cc40e9e358995a55b91250dd7446fcef4bf446
Status: Downloaded newer image for adiazmor/docker-ubuntu-with-ping:latest
docker.io/adiazmor/docker-ubuntu-with-ping:latest

6.3 Create the configuration file /etc/calico/calicoctl.cfg
[root@host2 ~]# vim /etc/calico/calicoctl.cfg
apiVersion: v1
kind: calicoApiConfig
metadata:
spec:
  etcdEndpoints: "http://192.168.1.140:2379"
  etcdKeyFile:
  etcdCertFile:
  etcdCACertFile:

6.4 Start Calico on host2

[root@host2 ~]# calicoctl node run --node-image=calico/node:v2.6.2 --ip=192.168.1.145

Running command to load modules: modprobe -a xt_set ip6_tables
Enabling IPv4 forwarding
Enabling IPv6 forwarding
Increasing conntrack limit
Removing old calico-node container (if running).
Running the following command to start calico-node:
docker run --net=host --privileged --name=calico-node -d --restart=always -e NODENAME=host2 -e CALICO_NETWORKING_BACKEND=bird -e CALICO_LIBNETWORK_ENABLED=true -e IP=192.168.1.145 -e ETCD_ENDPOINTS=http://192.168.1.140:2379 -v /var/log/calico:/var/log/calico -v /var/run/calico:/var/run/calico -v /lib/modules:/lib/modules -v /run:/run -v /run/docker/plugins:/run/docker/plugins -v /var/run/docker.sock:/var/run/docker.sock calico/node:v2.6.2

Image may take a short time to download if it is not available locally.
Container started, checking progress logs.
Skipping datastore connection test
Using IPv4 address from environment: IP=192.168.1.145
IPv4 address 192.168.1.145 discovered on interface eth0
No AS number configured on node resource, using global value
Using node name: host2
Starting libnetwork service
Calico node started successfully
6.5 Check the calico-node container, which is now running:
[root@host2 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
bfb95bc42ae4 calico/node:v2.6.2 "start_runit" 16 minutes ago Up 16 minutes calico-node

6.6 View the networks: cal_net1 has been synced over from etcd automatically.

[root@host2 ~]# docker network ls
NETWORK ID NAME DRIVER SCOPE
998d22ebd4a0 bridge bridge local
9d6100ea4e80 cal_net1 calico global
c11bcacf251b host host local
9e556a3bedc2 none null local

6.7 Run container bbox2 on host2, also connected to cal_net1:
[root@host2 ~]# docker run -itd --name bbox2 --net cal_net1 busybox
8c7e29b19709bb3968148000f68772883dedf6839dd2baf18d79fac464f1dc61

[root@host2 ~]#

View bbox2's network configuration:
[root@host2 ~]# docker exec bbox2 ip addr
1: lo: mtu 65536 qdisc noqueue qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
4: cali0@if5: mtu 1500 qdisc noqueue
link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff
inet 192.168.183.64/32 scope global cali0
valid_lft forever preferred_lft forever

// cali0 is the Calico interface, assigned IP 192.168.183.64/32. cali0's peer is interface 5 on host2, cali847c5f22084@if4, shown below:

[root@host2 ~]# ip link show
5: cali847c5f22084@if4: mtu 1500 qdisc noqueue state UP mode DEFAULT group default
link/ether d6:8f:d4:63:ec:55 brd ff:ff:ff:ff:ff:ff link-netnsid 0

6.8 host2 route analysis
host2 has gained two new routes:
[root@host2 ~]# ip route
default via 192.168.1.1 dev eth0 onlink
169.254.0.0/16 dev eth0 scope link metric 1000
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
192.168.1.0/24 dev eth0 proto kernel scope link src 192.168.1.145
192.168.119.0/26 via 192.168.1.144 dev eth0 proto bird    // (1) route to host1's container subnet 192.168.119.0/26
192.168.183.64 dev cali847c5f22084 scope link    // (2) route to the local host2 container 192.168.183.64
blackhole 192.168.183.64/26 proto bird

Likewise, host1 has automatically added a route to 192.168.183.64/26:

[root@host1 ~]# ip route
default via 192.168.1.1 dev eth0 onlink
169.254.0.0/16 dev eth0 scope link metric 1000
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
192.168.1.0/24 dev eth0 proto kernel scope link src 192.168.1.144
192.168.119.0 dev cali23572555706 scope link
blackhole 192.168.119.0/26 proto bird
192.168.183.64/26 via 192.168.1.145 dev eth0 proto bird    // route to host2's container subnet 192.168.183.64/26
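The /26 blocks in these tables come from Calico's IPAM: each host is handed a block out of the IP pool (the addresses above suggest the default 192.168.0.0/16 pool), so BGP only has to advertise one route per host block instead of one per container. A small sketch, using the block size and pool inferred from the routes above:

```python
import ipaddress

pool = ipaddress.ip_network("192.168.0.0/16")    # assumed default Calico IP pool
blocks = list(pool.subnets(new_prefix=26))       # per-host allocation blocks

host1_block = ipaddress.ip_network("192.168.119.0/26")
host2_block = ipaddress.ip_network("192.168.183.64/26")

# Both blocks are /26 slices of the pool and do not overlap, so each host
# is reachable through a single aggregated route.
print(len(blocks))                                        # 1024 blocks of 64 addresses
print(host1_block in blocks, host2_block in blocks)
print(host1_block.overlaps(host2_block))
```

This is why host2 needs only the single route `192.168.119.0/26 via 192.168.1.144` to reach every container on host1.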

7. Default Calico connectivity

Test connectivity between bbox1 and bbox2:
[root@host1 ~]# docker exec bbox1 ping -c 2 bbox2
PING bbox2 (192.168.183.64): 56 data bytes
64 bytes from 192.168.183.64: seq=0 ttl=62 time=3.415 ms
64 bytes from 192.168.183.64: seq=1 ttl=62 time=2.565 ms
The ping succeeds. The data path from bbox1 to bbox2:

(1) Per bbox1's routing table, the packet leaves through cali0:

[root@host1 ~]# docker exec bbox1 ip route
default via 169.254.1.1 dev cali0
169.254.1.1 dev cali0 scope link

(2) The packet crosses the veth pair to host1; per host1's routing table, it is sent out eth0 to host2 (192.168.1.145):
192.168.183.64/26 via 192.168.1.145 dev eth0 proto bird

(3) host2 receives the packet and, per its routing table, hands it to cali847c5f22084, which delivers it through the veth pair to cali0 inside bbox2:
192.168.183.64 dev cali847c5f22084 scope link

8. Connectivity between different Calico networks
8.1 Create the cal_net2 network

host1:
[root@host1 ~]# docker network create --driver calico --ipam-driver calico-ipam cal_net2
fa7477b22c8f43e61794ae4f03a2455cf62cf9139e580af2a16857b355cd3f0a

[root@host1 ~]# docker network ls
NETWORK ID NAME DRIVER SCOPE
776c29babfe9 bridge bridge local
9d6100ea4e80 cal_net1 calico global
fa7477b22c8f cal_net2 calico global
c11bcacf251b host host local
9e556a3bedc2 none null local

[root@host1 ~]#

cal_net2 is also visible on host2:
[root@host2 ~]# docker network ls
NETWORK ID NAME DRIVER SCOPE
998d22ebd4a0 bridge bridge local
9d6100ea4e80 cal_net1 calico global
fa7477b22c8f cal_net2 calico global
c11bcacf251b host host local
9e556a3bedc2 none null local

8.2 Run bbox3 on host1, connected to cal_net2:

[root@host1 ~]# docker run -itd --name bbox3 --net cal_net2 busybox
b97f5c5093e15cbdb7733cd5570fa32cba1c85da75d922a85c592bb63dacb027

[root@host1 ~]# docker ps -n 1
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b97f5c5093e1 busybox "sh" 34 seconds ago Up 30 seconds bbox3

View bbox3's network: Calico assigned it IP 192.168.119.1:
[root@host1 ~]# docker exec bbox3 ip addr show cali0
6: cali0@if7: mtu 1500 qdisc noqueue
link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff
inet 192.168.119.1/32 scope global cali0
valid_lft forever preferred_lft forever
Verify connectivity between bbox1 and bbox3:

[root@host1 ~]# docker exec bbox1 ping -c 2 192.168.119.1

PING 192.168.119.1 (192.168.119.1): 56 data bytes
--- 192.168.119.1 ping statistics ---
2 packets transmitted, 0 packets received, 100% packet loss

[root@host1 ~]#

Although bbox1 and bbox3 both run on host1 and even share the subnet 192.168.119.0/26, they belong to different networks and therefore cannot communicate by default.

Calico's default policy is: a container may only communicate with containers on the same Calico network.

Every Calico network has a profile of the same name, and the profile defines the network's policy. Inspect cal_net1's profile:

[root@host1 ~]# calicoctl get profile cal_net1 -o yaml
- apiVersion: v1
  kind: profile
  metadata:
    name: cal_net1        (1)
    tags:
    - cal_net1            (2)
  spec:
    egress:               (3)
    - action: allow
      destination: {}
      source: {}
    ingress:
    - action: allow
      destination: {}
      source:
        tag: cal_net1     (4)

(1) The profile is named cal_net1; this is the profile for Calico network cal_net1.
(2) A tag named cal_net1 is attached to the profile. Although the tag happens to be called cal_net1 as well, it could be any value and has no inherent relation to the name: cal_net1 above. The tag is used below.
(3) egress controls packets leaving the container; currently there are no restrictions.
(4) ingress restricts packets entering the container; the current rule accepts traffic only from containers carrying the tag cal_net1, which per (2) means only containers on this network. This explains the earlier test result.
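As a conceptual model of this tag-based ingress rule (a toy sketch only; real Calico compiles profile rules into iptables/ipset rules on each host), the default behavior can be mimicked in a few lines of Python:

```python
# Toy model of Calico's tag-based ingress check. Profile names and the
# one-tag-per-network layout mirror the cal_net1/cal_net2 setup above.
profiles = {
    "cal_net1": {"tags": {"cal_net1"}, "ingress_source_tag": "cal_net1"},
    "cal_net2": {"tags": {"cal_net2"}, "ingress_source_tag": "cal_net2"},
}

def ingress_allowed(src_profile: str, dst_profile: str) -> bool:
    """Allow ingress only if the sender carries the tag the receiver expects."""
    wanted = profiles[dst_profile]["ingress_source_tag"]
    return wanted in profiles[src_profile]["tags"]

print(ingress_allowed("cal_net1", "cal_net1"))  # same network: allowed
print(ingress_allowed("cal_net2", "cal_net1"))  # bbox3 -> bbox1: dropped
```

Under this model, bbox1 and bbox2 (both tagged cal_net1) can exchange traffic, while bbox3 (tagged cal_net2) is rejected at ingress, matching the failed ping above.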

Since this is only the default policy, it can be customized, and that is Calico's biggest advantage over other network solutions.

9. Calico policy

Calico lets users define flexible policy rules to control traffic in and out of containers precisely. Consider this scenario:
(1) Create a new Calico network cal_web and deploy an httpd container, web1.
(2) Define a policy that allows containers on cal_net2 to access port 80 of web1.

First, create the cal_web network:
[root@host1 ~]# docker network create --driver calico --ipam-driver calico-ipam cal_web
382e841a9d09320a3395854ed14b50f0ad74a94ec3bd9e9c34a74983fe49fb94

[root@host1 ~]# docker network ls -f 'driver=calico'
NETWORK ID NAME DRIVER SCOPE
9d6100ea4e80 cal_net1 calico global
fa7477b22c8f cal_net2 calico global
382e841a9d09 cal_web calico global

Run container web1 on host1, connected to cal_web:
[root@host1 ~]# docker run -itd --name web1 --net cal_web http:v1.0
8b1401751dd4acd5d3497884a670e369ee38cf5a1d65ecb3dd1d50959e202cf

[root@host1 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8b1401d751dd http:v1.0 "httpd-foreground" 44 seconds ago Up 41 seconds web1
b97f5c5093e1 busybox "sh" 2 hours ago Up 2 hours bbox3
4daf57a135f5 busybox "sh" 4 hours ago Up 4 hours bbox1
ad90ed03880b calico/node:v2.6.2 "start_runit" 4 hours ago Up 4 hours calico-node

[root@host1 ~]# docker exec web1 ip address show cali0
14: cali0@if15: mtu 1500 qdisc noqueue state UP group default
link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.119.3/32 scope global cali0
valid_lft forever preferred_lft forever
[root@host1 ~]# docker exec bbox3 ip addr show cali0
6: cali0@if7: mtu 1500 qdisc noqueue
link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff
inet 192.168.119.1/32 scope global cali0
valid_lft forever preferred_lft forever

At this point bbox3 cannot reach web1 on port 80:
[root@host1 ~]# docker exec bbox3 wget 192.168.119.3
Connecting to 192.168.119.3 (192.168.119.3:80)
wget: can't connect to remote host (192.168.119.3): Connection timed out

Create the policy file web.yml with the following content:
[root@host1 ~]# vim web.yml
- apiVersion: v1
  kind: profile
  metadata:
    name: cal_web
  spec:
    ingress:
    - action: allow
      protocol: tcp
      source:
        tag: cal_net2
      destination:
        ports:
        - 80

[root@host1 ~]# calicoctl apply -f /root/web.yml
Successfully applied 1 'profile' resource(s)
Now bbox3 can reach web1's HTTP service:
[root@host1 ~]# docker exec bbox3 wget 192.168.119.3
Connecting to 192.168.119.3 (192.168.119.3:80)
saving to 'index.html'
index.html           100%
'index.html' saved
ping still fails, though, because only port 80 was opened:

[root@host1 ~]# docker exec bbox3 ping -c 2 192.168.119.3
PING 192.168.119.3 (192.168.119.3): 56 data bytes
--- 192.168.119.3 ping statistics ---
2 packets transmitted, 0 packets received, 100% packet loss

The example above is simple, but it already shows the power of Calico policy.

With policies, very complex container access control can be implemented dynamically.

10. Calico IPAM

Unless configured otherwise, Calico assigns a subnet to each network automatically, but this can be customized.

First define an IP Pool, for example:
[root@host1 ~]# calicoctl create -f - <<EOF
- apiVersion: v1
  kind: ipPool
  metadata:
    cidr: 17.2.0.0/16
EOF

Then create a network my_net that uses this pool:
[root@host1 ~]# docker network create --driver calico --ipam-driver calico-ipam --subnet=17.2.0.0/16 my_net

A container started on my_net gets its address from the pool:
[root@host1 ~]# docker run --net my_net -it busybox sh
/ # ip address show cali0
cali0@if...: mtu 1500 qdisc noqueue
link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff
inet 17.2.119.0/32 scope global cali0
valid_lft forever preferred_lft forever

You can also assign a container a specific IP with --ip, but the address must fall within the subnet:
[root@host1 ~]# docker run --net my_net --ip 17.2.6.11 -it busybox sh
/ # ip address show cali0
18: cali0@if19: mtu 1500 qdisc noqueue
link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff
inet 17.2.6.11/32 scope global cali0
valid_lft forever preferred_lft forever
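The "must be within the subnet" rule is a simple membership check; a quick sketch with Python's standard ipaddress module (illustrative only, not the actual Docker/Calico validation code, and the subnet is the my_net pool from above):

```python
import ipaddress

subnet = ipaddress.ip_network("17.2.0.0/16")   # the my_net subnet

def valid_static_ip(ip: str) -> bool:
    """A --ip value is acceptable only if it falls inside the network's subnet."""
    return ipaddress.ip_address(ip) in subnet

print(valid_static_ip("17.2.6.11"))   # inside 17.2.0.0/16 -> True
print(valid_static_ip("17.3.6.11"))   # outside the subnet -> False
```

An address like 17.3.6.11 would be rejected by `docker run --ip` because it is outside the pool backing my_net.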
This concludes the Calico network experiment.
