The dpctl program is a command-line tool for inspecting and managing OpenFlow datapaths. It can display the current state of a datapath, including its features, configuration, and flow-table entries, and, when used with the OpenFlow kernel module, it can add, delete, modify, and monitor datapaths.

1. Inspecting switch ports

TCP port 6634 is the default port on which the switch listens for management connections.

# dpctl show tcp:9.123.137.25:6634
features_reply (xid=0x94af8117): ver:0x1, dpid:1
n_tables:255, n_buffers:256
features: capabilities:0xc7, actions:0xfff
 1(s1-eth1): addr:2e:d1:ca:aa:af:67, config: 0, state:0
     current:    10GB-FD COPPER
 2(s1-eth2): addr:66:93:32:1e:9b:9e, config: 0, state:0
     current:    10GB-FD COPPER
 LOCAL(s1): addr:5e:bc:ab:cc:dc:43, config: 0x1, state:0x1
get_config_reply (xid=0x92fc9e48): miss_send_len=0
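The capabilities field in the features_reply is a bitmask. As a rough illustration, the value 0xc7 shown above can be decoded with the bit positions defined by the OpenFlow 1.0 specification (ofp_capabilities):

```python
# Decode the OpenFlow 1.0 capabilities bitmask reported by "dpctl show".
# Bit positions follow the OpenFlow 1.0 ofp_capabilities definition.
CAPABILITIES = {
    1 << 0: "FLOW_STATS",
    1 << 1: "TABLE_STATS",
    1 << 2: "PORT_STATS",
    1 << 3: "STP",
    1 << 5: "IP_REASM",
    1 << 6: "QUEUE_STATS",
    1 << 7: "ARP_MATCH_IP",
}

def decode_capabilities(mask):
    """Return the names of all capability bits set in the mask."""
    return [name for bit, name in sorted(CAPABILITIES.items()) if mask & bit]

# 0xc7 = 0b11000111: bits 0, 1, 2, 6, 7 are set.
print(decode_capabilities(0xc7))
# -> ['FLOW_STATS', 'TABLE_STATS', 'PORT_STATS', 'QUEUE_STATS', 'ARP_MATCH_IP']
```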

2. Dumping the flow table

At this point the flow table is empty, so h1 ping h2 gets no reply. Flow entries must be added by hand to enable forwarding.

# dpctl dump-flows tcp:9.123.137.25:6634
stats_reply (xid=0xe2c7ea1e): flags=none type=1(flow)

3. Adding flow entries

After adding the entries below, dumping the flow table shows the new forwarding rules, and h1 and h2 can reach each other.

# dpctl add-flow tcp:9.123.137.25:6634 in_port=1,actions=output:2
# dpctl add-flow tcp:9.123.137.25:6634 in_port=2,actions=output:1
# dpctl dump-flows tcp:9.123.137.25:6634      
  stats_reply (xid=0x131ed782): flags=none type=1(flow)
  cookie=0, duration_sec=13s, duration_nsec=401000000s, table_id=0, priority=32768, \
  n_packets=0, n_bytes=0,idle_timeout=60,hard_timeout=0,in_port=1,actions=output:2
  cookie=0, duration_sec=5s, duration_nsec=908000000s, table_id=0, priority=32768, \
  n_packets=0, n_bytes=0,idle_timeout=60,hard_timeout=0,in_port=2,actions=output:1
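Each entry in the dump-flows output is a comma-separated list of key=value fields. A minimal parser (a sketch; it assumes a single action per entry, since actions containing commas would need smarter splitting) can turn one line into a dictionary:

```python
def parse_flow_entry(line):
    """Parse one 'key=value, key=value, ...' flow line into a dict."""
    fields = {}
    for part in line.strip().split(","):
        part = part.strip()
        if "=" in part:
            key, value = part.split("=", 1)
            fields[key.strip()] = value.strip()
    return fields

entry = ("cookie=0, duration_sec=13s, table_id=0, priority=32768, "
         "n_packets=0, n_bytes=0, idle_timeout=60, in_port=1, actions=output:2")
flow = parse_flow_entry(entry)
print(flow["in_port"], flow["actions"])  # -> 1 output:2
```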

4. Other common operations

4.1 Create a datapath numbered 0

# dpctl adddp nl:0

4.2 Add two network interfaces to the new datapath

# dpctl addif nl:0 eth0
# dpctl addif nl:0 eth1

4.3 Monitor traffic received by the datapath

# dpctl monitor nl:0

4.4 Remove a network interface from the datapath

# dpctl delif nl:0 eth0

5. Putting it together

Create a virtual network with one OpenFlow switch connected to three hosts and no controller attached. With no flows installed, the hosts cannot reach one another.

$ sudo mn --topo single,3 --mac --switch ovsk --controller remote  
*** Creating network
*** Adding controller
Unable to contact the remote controller at 127.0.0.1:6653
Unable to contact the remote controller at 127.0.0.1:6633
Setting remote controller to 127.0.0.1:6653
*** Adding hosts:
h1 h2 h3 
*** Adding switches:
s1 
*** Adding links:
(h1, s1) (h2, s1) (h3, s1) 
*** Configuring hosts
h1 h2 h3 
*** Starting controller
c0 
*** Starting 1 switches
s1 ...
*** Starting CLI:
mininet>  dpctl dump-flows
*** s1 ------------------------------------------------------------------------
NXST_FLOW reply (xid=0x4):
mininet> dump
<Host h1: h1-eth0:10.0.0.1 pid=24213> 
<Host h2: h2-eth0:10.0.0.2 pid=24215> 
<Host h3: h3-eth0:10.0.0.3 pid=24217> 
<OVSSwitch s1: lo:127.0.0.1,s1-eth1:None,s1-eth2:None,s1-eth3:None pid=24222> 
<RemoteController c0: 127.0.0.1:6653 pid=24205> 
mininet> link
invalid number of args: link end1 end2 [up down]
mininet> links
h1-eth0<->s1-eth1 (OK OK) 
h2-eth0<->s1-eth2 (OK OK) 
h3-eth0<->s1-eth3 (OK OK)
mininet> h1 ping -c 2 h3
PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
From 10.0.0.1 icmp_seq=1 Destination Host Unreachable
From 10.0.0.1 icmp_seq=2 Destination Host Unreachable

5.1 Check the listening port of s1

$ sudo ovs-vsctl show 
2b05a0c0-015b-44cd-8ce8-d540d9f551c4
    Bridge "s1"
        Controller "tcp:127.0.0.1:6653"
        Controller "ptcp:6654"
        fail_mode: secure
        Port "s1-eth2"
            Interface "s1-eth2"
        Port "s1-eth1"
            Interface "s1-eth1"
        Port "s1-eth3"
            Interface "s1-eth3"
        Port "s1"
            Interface "s1"
                type: internal
    ovs_version: "2.5.2"

As shown above, the passive listening port (ptcp) is 6654.

5.2 Add flow entries

~$ dpctl add-flow tcp:127.0.0.1:6654 in_port=1,actions=output:2
~$ dpctl add-flow tcp:127.0.0.1:6654 in_port=2,actions=output:1
~$ dpctl dump-flows tcp:127.0.0.1:6654
stats_reply (xid=0xbf1d157c): flags=none type=1(flow)
  cookie=0, duration_sec=18s, duration_nsec=579000000s, table_id=0, priority=32768, n_packets=0, n_bytes=0, idle_timeout=60,hard_timeout=0,in_port=1,actions=output:2
  cookie=0, duration_sec=12s, duration_nsec=103000000s, table_id=0, priority=32768, n_packets=0, n_bytes=0, idle_timeout=60,hard_timeout=0,in_port=2,actions=output:1

This installs bidirectional forwarding between ports 1 and 2 of s1. Inside Mininet, the forwarding entries between s1-eth1 and s1-eth2 are now in place, so h1 and h2 can ping each other, while h1 and h3 still cannot.
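The effect of these two entries can be modeled as a simple in_port-to-output map (a sketch, assuming h1, h2, h3 sit on ports 1, 2, 3 as in the topology above); a packet arriving on a port with no matching entry is dropped:

```python
# Model the installed flow table: in_port -> output port.
flow_table = {1: 2, 2: 1}

def forward(in_port):
    """Return the output port, or None if no flow matches (packet dropped)."""
    return flow_table.get(in_port)

# h1 (port 1) <-> h2 (port 2): both directions match, so ping succeeds.
print(forward(1), forward(2))   # -> 2 1
# h1 -> h3 fails: port-1 traffic is always sent to port 2, never to port 3,
# and traffic arriving from port 3 matches no entry at all.
print(forward(3))               # -> None
```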

mininet> dpctl dump-flows
*** s1 ------------------------------------------------------------------------
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=13.858s, table=0, n_packets=0, n_bytes=0, idle_timeout=60, idle_age=13, in_port=2 actions=output:1
 cookie=0x0, duration=6.319s, table=0, n_packets=0, n_bytes=0, idle_timeout=60, idle_age=6, in_port=1 actions=output:2
mininet> h1 ping -c 2 h2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.338 ms
64 bytes from 10.0.0.2: icmp_seq=2 ttl=64 time=0.072 ms
 
--- 10.0.0.2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1023ms
rtt min/avg/max/mdev = 0.072/0.205/0.338/0.133 ms
mininet> h1 ping -c 2 h3
PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
From 10.0.0.1 icmp_seq=1 Destination Host Unreachable
 
--- 10.0.0.3 ping statistics ---
2 packets transmitted, 0 received, +1 errors, 100% packet loss, time 1015ms

5.3 Delete flow entries

Once the flows are deleted, h1 and h2 can no longer reach each other.

~$ dpctl del-flows tcp:127.0.0.1:6654

6. Working with ovs-ofctl

6.1 Set the switch to forward automatically

sh ovs-ofctl add-flow s1 action=normal

This sets switch s1's forwarding behavior to normal, so it acts as an ordinary learning switch and forwarding state is maintained automatically.
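With actions=normal the switch falls back to ordinary L2 learning and forwarding. That learning behavior can be sketched roughly as follows (a simplified model, not the actual OVS implementation):

```python
class LearningSwitch:
    """Toy model of the L2 learning that 'actions=normal' enables."""

    def __init__(self):
        self.mac_table = {}  # MAC address -> port

    def handle(self, in_port, src_mac, dst_mac):
        # Learn which port the source MAC lives on.
        self.mac_table[src_mac] = in_port
        # Forward to the known port, or flood if the destination is unknown.
        return self.mac_table.get(dst_mac, "flood")

sw = LearningSwitch()
print(sw.handle(1, "00:00:00:00:00:01", "00:00:00:00:00:02"))  # -> flood
print(sw.handle(2, "00:00:00:00:00:02", "00:00:00:00:00:01"))  # -> 1 (learned)
```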

6.2 Clear the flow table of s1

sh ovs-ofctl del-flows s1

The following examples demonstrate custom flow matching at different network layers.

6.3 Custom layer-1 matching: match on the ingress port

sh ovs-ofctl add-flow s1 priority=500,in_port=1,actions=output:2
sh ovs-ofctl add-flow s1 priority=500,in_port=2,actions=output:1

6.4 Custom layer-2 matching: match on protocol and MAC address

sh ovs-ofctl add-flow s1 dl_src=00:00:00:00:00:01,dl_dst=00:00:00:00:00:02,actions=output:2
sh ovs-ofctl add-flow s1 dl_src=00:00:00:00:00:02,dl_dst=00:00:00:00:00:01,actions=output:1
sh ovs-ofctl add-flow s1 dl_type=0x806,nw_proto=1,actions=flood

6.5 Custom layer-3 matching: match on protocol and IP address

sh ovs-ofctl add-flow s1 priority=500,dl_type=0x800,nw_src=10.0.0.0/24,nw_dst=10.0.0.0/24,actions=normal
sh ovs-ofctl add-flow s1 priority=800, ip,nw_src=10.0.0.3,actions=mod_nw_tos:184,normal
sh ovs-ofctl add-flow s1 arp,nw_dst=10.0.0.1,actions=output:1
sh ovs-ofctl add-flow s1 arp,nw_dst=10.0.0.2,actions=output:2
sh ovs-ofctl add-flow s1 arp,nw_dst=10.0.0.3,actions=output:3
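Two constants in these rules are worth unpacking: dl_type=0x800 and dl_type=0x806 are the EtherType values for IPv4 and ARP, and mod_nw_tos:184 sets the IP ToS byte to DSCP 46 (Expedited Forwarding) shifted into the upper six bits:

```python
# EtherType values used in the match rules above.
ETHERTYPE_IPV4 = 0x0800
ETHERTYPE_ARP = 0x0806

# mod_nw_tos takes the full ToS byte; DSCP occupies its upper 6 bits.
DSCP_EF = 46            # Expedited Forwarding
tos = DSCP_EF << 2      # 46 << 2 = 184, the value passed to mod_nw_tos
print(hex(ETHERTYPE_IPV4), hex(ETHERTYPE_ARP), tos)
```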

6.6 A combined example

h3 python -m SimpleHTTPServer 80 &
sh ovs-ofctl add-flow s1 arp,actions=normal
sh ovs-ofctl add-flow s1 priority=500,dl_type=0x800,nw_proto=6,tp_dst=80,actions=output:3
sh ovs-ofctl add-flow s1 priority=800,ip,nw_src=10.0.0.3,actions=normal
h1 curl h3
h2 curl h3
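When several entries match a packet, the entry with the highest priority wins. Here, HTTP requests from h1 and h2 match the priority-500 rule and are steered to port 3 (where h3's server runs), while traffic from h3 itself matches the priority-800 rule and is handled normally, so the replies get back. A sketch of that selection logic (the packet field names mirror the ovs-ofctl match keys):

```python
# Each flow: (priority, match predicate, action), mirroring the rules above.
flows = [
    (500, lambda p: p.get("tp_dst") == 80, "output:3"),
    (800, lambda p: p.get("nw_src") == "10.0.0.3", "normal"),
]

def select_action(packet):
    """Pick the action of the highest-priority matching flow."""
    matching = [(prio, act) for prio, match, act in flows if match(packet)]
    return max(matching)[1] if matching else "drop"

# An HTTP request from h1 is steered to port 3:
print(select_action({"nw_src": "10.0.0.1", "tp_dst": 80}))     # -> output:3
# A reply from the server h3 is forwarded normally:
print(select_action({"nw_src": "10.0.0.3", "tp_dst": 45678}))  # -> normal
```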