People are also searching for
  • What are the characteristics of industrial Ethernet switches?

    Industrial Ethernet switches are used to build backbone networks in various automation systems, such as integrated substation automation systems, intelligent transportation systems, and other major industrial control and automation projects.

    2019-09-27 09:11

  • SRIOV in KVM and Ubuntu bonding

    00:04.0 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
    00:05.0 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)

    I have a KVM host with an Intel SRIOV card.

    5: enp4s0f0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
        link/ether 00:0c:bd:05:d9:82 brd ff:ff:ff:ff:ff:ff
        vf 0 MAC 00:0c:bd:05:d9:11, spoof checking off, link-state auto
        vf 1 MAC 00:0c:bd:05:d9:ab, spoof checking off, link-state auto
        vf 2 MAC 00:0c:bd:05:d9:a2, spoof checking off, link-state auto
        vf 3 MAC 00:0c:bd:05:d9:ac, spoof checking off, link-state auto
        vf 4 MAC 00:0c:bd:05:d9:ad, spoof checking off, link-state auto
        vf 5 MAC 00:0c:bd:05:d9:ae, spoof checking off, link-state auto
        vf 6 MAC 00:0c:bd:05:d9:af, spoof checking off, link-state auto
        vf 7 MAC 00:0c:bd:05:d9:a1, spoof checking off, link-state auto
    8: enp4s0f1: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
        link/ether 00:0c:bd:05:d9:83 brd ff:ff:ff:ff:ff:ff
        vf 0 MAC 00:0c:bd:05:d9:12, spoof checking off, link-state auto
        vf 1 MAC 00:0c:bd:05:d9:bb, spoof checking off, link-state auto
        vf 2 MAC 00:0c:bd:05:d9:bc, spoof checking off, link-state auto
        vf 3 MAC 00:0c:bd:05:d9:bd, spoof checking off, link-state auto
        vf 4 MAC 00:0c:bd:05:d9:be, spoof checking off, link-state auto
        vf 5 MAC 00:0c:bd:05:d9:bf, spoof checking off, link-state auto
        vf 6 MAC 00:0c:bd:05:d9:b1, spoof checking off, link-state auto
        vf 7 MAC 00:0c:bd:05:d9:b2, spoof checking off, link-state auto

    I have used vf 0 from each card in a GUEST Ubuntu machine.

    Physical Function enp4s0f0 has the following virtual functions:
    PCI BDF        Interface
    =======        =========
    0000:04:10.0
    0000:04:10.2
    0000:04:10.4
    0000:04:10.6
    0000:04:11.0
    0000:04:11.2
    0000:04:11.4
    0000:04:11.6

    Physical Function enp4s0f1 has the following virtual functions:
    PCI BDF        Interface
    =======        =========
    0000:04:10.1
    0000:04:10.3
    0000:04:10.5
    0000:04:10.7
    0000:04:11.1
    0000:04:11.3
    0000:04:11.5

    Snippet of dumpxml

    My bonding configuration:

    auto eth1
    iface eth1 inet manual
        bond-master bond0

    auto eth2
    iface eth2 inet manual
        bond-master bond0

    auto bond0
    iface bond0 inet static
        address 192.168.23.101
        netmask 255.255.255.0
        bond-slaves none
        bond-mode 2
        bond-miimon 100
        bond-downdelay 0
        bond-updelay 0

    cat /proc/net/bonding/bond0
    Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
    Bonding Mode: load balancing (xor)
    Transmit Hash Policy: layer2 (0)
    MII Status: up
    MII Polling Interval (ms): 100
    Up Delay (ms): 0
    Down Delay (ms): 0

    Slave Interface: eth1
    MII Status: up
    Speed: 1000 Mbps
    Duplex: full
    Link Failure Count: 0
    Permanent HW addr: 00:0c:bd:05:d9:11
    Slave queue ID: 0

    Slave Interface: eth2
    MII Status: up
    Speed: 1000 Mbps
    Duplex: full
    Link Failure Count: 0
    Permanent HW addr: 00:0c:bd:05:d9:12
    Slave queue ID: 0

    bond0  Link encap:Ethernet  HWaddr 00:0c:bd:05:d9:11
           inet addr:192.168.23.101  Bcast:192.168.23.255  Mask:255.255.255.0
           inet6 addr: fe80::20c:bdff:fe05:d911/64 Scope:Link
           UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
           RX packets:122 errors:0 dropped:0 overruns:0 frame:0
           TX packets:295 errors:0 dropped:0 overruns:0 carrier:0
           collisions:0 txqueuelen:0
           RX bytes:33881 (33.8 KB)  TX bytes:34294 (34.2 KB)

    eth1   Link encap:Ethernet  HWaddr 00:0c:bd:05:d9:11
           UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
           RX packets:122 errors:0 dropped:0 overruns:0 frame:0
           TX packets:183 errors:0 dropped:0 overruns:0 carrier:0
           collisions:0 txqueuelen:1000
           RX bytes:33881 (33.8 KB)  TX bytes:28212 (28.2 KB)

    eth2   Link encap:Ethernet  HWaddr 00:0c:bd:05:d9:11
           UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
           RX packets:0 errors:0 dropped:0 overruns:0 frame:0
           TX packets:112 errors:0 dropped:0 overruns:0 carrier:0
           collisions:0 txqueuelen:1000
           RX bytes:0 (0.0 B)  TX bytes:6082 (6.0 KB)

    Here the bond0 MAC is the same as the eth1 MAC because eth1 came up first. The issue is that when packets corresponding to eth2 are received on the PF, they do not show up on eth2 in the GUEST; eth2 RX stays at 0. What could be the issue? Sometimes eth2 comes up first after a reboot, bond0 takes the eth2 MAC, and at that point eth1 RX is zero.
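    For reference, the per-VF MAC and spoof-checking state shown in the ip link output above is normally configured from the KVM host with iproute2. A minimal sketch, assuming the PF names and MAC addresses quoted in this post:

        # Host side: pin a MAC on VF 0 of each PF and turn spoof checking off,
        # matching the "vf 0 MAC ..., spoof checking off" entries listed above.
        ip link set dev enp4s0f0 vf 0 mac 00:0c:bd:05:d9:11
        ip link set dev enp4s0f0 vf 0 spoofchk off
        ip link set dev enp4s0f1 vf 0 mac 00:0c:bd:05:d9:12
        ip link set dev enp4s0f1 vf 0 spoofchk off

        # Re-check the per-VF settings on both PFs.
        ip link show dev enp4s0f0
        ip link show dev enp4s0f1

    Note that in the guest ifconfig output both bond slaves report the bond's MAC (00:0c:bd:05:d9:11), while the VF MACs configured on the host keep the values listed above.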

    2018-11-07 11:13
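    The dumpxml snippet mentioned in the post is not reproduced above. Purely as an illustration of what such a definition usually looks like, a libvirt hostdev-type interface for one of the listed VFs might be written as follows (a sketch using the PCI address and MAC from the post, not the poster's actual XML):

        <!-- Hypothetical sketch: SR-IOV VF passthrough in a libvirt domain XML.      -->
        <!-- 0000:04:10.0 is the first VF listed for enp4s0f0; the MAC matches the    -->
        <!-- host-side "vf 0 MAC" entry above. This is not the poster's real snippet. -->
        <interface type='hostdev' managed='yes'>
          <mac address='00:0c:bd:05:d9:11'/>
          <source>
            <address type='pci' domain='0x0000' bus='0x04' slot='0x10' function='0x0'/>
          </source>
        </interface>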