BESS connecting to VM #874

Closed · ztz1989 opened this issue Dec 13, 2018 · 3 comments
ztz1989 commented Dec 13, 2018

Hello, I am trying to connect BESS to a QEMU VM through two virtual interfaces; the configuration is illustrated below:

[Figure: topology of the setup, showing traffic flowing from MoonGen through NIC1 into BESS, over vhost-user into the VM's l2fwd app, and back out through NIC2]

Our server has two physical interfaces (NIC1 and NIC2), and I run the DPDK l2fwd app inside the VM. BESS is built from the haswell-linux tarball. I use MoonGen to transmit packets from another server, and traffic flows as indicated by the arrows in the figure. The BESS script implementing the interconnection is as follows:

```
inport::PMDPort(port_id=0, num_inc_q=1, num_out_q=1)
outport::PMDPort(port_id=1, num_inc_q=1, num_out_q=1)

my_vhost1::PMDPort(vdev='eth_vhost0,iface=/tmp/bess/vhost-user-1,queues=1')
my_vhost2::PMDPort(vdev='eth_vhost1,iface=/tmp/bess/vhost-user-2,queues=1')

in0::QueueInc(port=inport, qid=0)
out0::QueueOut(port=outport, qid=0)

v1::QueueOut(port=my_vhost1, qid=0)
v2::QueueInc(port=my_vhost2, qid=0)

in0 -> v1
v2 -> out0
```
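For reference, BESS also provides PortInc/PortOut modules that drive a whole port rather than a single queue. A minimal sketch of the same pipeline in that style (my illustration, not from the report; port names reused from the script above, untested here):

```
# Hypothetical variant: PortInc/PortOut poll/emit on all queues of a
# port instead of one explicit qid.
pin::PortInc(port=inport)         # RX from NIC1
to_vm::PortOut(port=my_vhost1)    # TX toward the guest via vhost-user-1
from_vm::PortInc(port=my_vhost2)  # RX from the guest via vhost-user-2
pout::PortOut(port=outport)       # TX on NIC2

pin -> to_vm
from_vm -> pout
```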

The corresponding script to start the VM is as follows:
```bash
#!/bin/bash

export VM_NAME=vhost-vm
export GUEST_MEM=8192M
export CDROM=/home/CentOS-7-x86_64-Azure.qcow2
export VHOST_SOCK_DIR=/tmp/bess

sudo qemu-system-x86_64 -name $VM_NAME -cpu host -enable-kvm \
    -m $GUEST_MEM -drive file=$CDROM -nographic \
    -numa node,memdev=mem -mem-prealloc -smp sockets=1,cores=2 \
    -object memory-backend-file,id=mem,size=$GUEST_MEM,mem-path=/dev/hugepages,share=on \
    -chardev socket,id=char0,path=$VHOST_SOCK_DIR/vhost-user-1 \
    -netdev type=vhost-user,id=mynet1,chardev=char0 \
    -device virtio-net-pci,mac=00:00:00:00:00:01,netdev=mynet1,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off,mrg_rxbuf=off,mq=on,vectors=6 \
    -chardev socket,id=char1,path=$VHOST_SOCK_DIR/vhost-user-2 \
    -netdev type=vhost-user,id=mynet2,chardev=char1 \
    -device virtio-net-pci,mac=00:00:00:00:00:02,netdev=mynet2,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off,mrg_rxbuf=off,mq=on,vectors=6 \
    -net user,hostfwd=tcp::10020-:22 -net nic
```

Currently, when I start bessctl and execute the BESS script, it runs without any problem. But when I proceed to execute the QEMU command to instantiate the VM, bessctl disconnects from bessd immediately. I tried exactly the same configuration with OVS-DPDK and Snabb; both worked smoothly, and I could tx/rx packets using MoonGen. So I suspect I am not using BESS correctly. Could you give me some hints about the problem?

Thanks & regards,

ztz1989 (Author) commented Dec 31, 2018

Just for further info, here is the bessd error log:

```
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
F1231 15:43:41.152448 20477 debug.cc:405] A critical error has occured. Aborting...
Signal: 11 (Segmentation fault), si_code: 1 (SEGV_MAPERR: address not mapped to object)
pid: 20467, tid: 20477, address: 0x7f36f4897002, IP: 0x55fff995357e
Backtrace (recent calls first) ---

(0): /home/bess/core/bessd(rte_vhost_dequeue_burst+0x12e) [0x55fff995357e]
rte_vhost_dequeue_burst at /build/bess/deps/dpdk-17.11/lib/librte_vhost/virtio_net.c:1254
(file/line not available)

(1): /home/bess/core/bessd(+0x8a3c3c) [0x55fff98eec3c]
eth_vhost_rx at /build/bess/deps/dpdk-17.11/drivers/net/vhost/rte_eth_vhost.c:410
(file/line not available)

(2): /home/bess/core/bessd(_ZN7PMDPort11RecvPacketsEhPPN4bess6PacketEi+0x57) [0x55fff951a127]
rte_eth_rx_burst at /build/dpdk-17.11/lib/librte_ether/rte_ethdev.h:2897
(file/line not available)
(inlined by) PMDPort::RecvPackets(unsigned char, bess::Packet**, int) at /build/bess/core/drivers/pmd.cc:438
(file/line not available)

(3): /home/bess/core/bessd(_ZN8QueueInc7RunTaskEP7ContextPN4bess11PacketBatchEPv+0xbd) [0x55fff955364d]
QueueInc::RunTask(Context*, bess::PacketBatch*, void*) at /build/bess/core/modules/queue_inc.cc:102
(file/line not available)

(4): /home/bess/core/bessd(_ZNK4TaskclEP7Context+0x67) [0x55fff93cecc7]
Task::operator()(Context*) const at /build/bess/core/task.cc:53
(file/line not available)

(5): /home/bess/core/bessd(_ZN4bess16DefaultScheduler12ScheduleLoopEv+0x1c3) [0x55fff93fc3b3]
bess::DefaultScheduler::ScheduleOnce(Context*) at /build/bess/core/scheduler.h:271
(file/line not available)
(inlined by) bess::DefaultScheduler::ScheduleLoop() at /build/bess/core/scheduler.h:250
(file/line not available)

(6): /home/bess/core/bessd(_ZN6Worker3RunEPv+0x20f) [0x55fff93f97bf]
Worker::Run(void*) at /build/bess/core/worker.cc:316
(file/line not available)

(7): /home/bess/core/bessd(_Z10run_workerPv+0x7c) [0x55fff93f9a8c]
run_worker(void*) at /build/bess/core/worker.cc:330
(file/line not available)

(8): /home/bess/core/bessd(+0xc6aede) [0x55fff9cb5ede]
execute_native_thread_routine at thread.o:?

(9): /lib/x86_64-linux-gnu/libpthread.so.0(+0x76b9) [0x7f378bdeb6b9]
start_thread at ??:?

(10): /lib/x86_64-linux-gnu/libc.so.6(clone+0x6c) [0x7f378b3fe41c]
clone at /build/glibc-Cl5G7W/glibc-2.23/misc/../sysdeps/unix/sysv/linux/x86_64/clone.S:109
(file/line not available)
```

When I use only one NIC and one vdev, everything works perfectly; I can even transmit at line rate to the connected VM. But when I configure two vdevs with BESS, bessd crashes as soon as the VM starts.
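For comparison, the working single-NIC, single-vdev case presumably looks like this (a sketch under that assumption; names are hypothetical, and the PMDPort arguments mirror the original script):

```
# Assumed wiring for the single-vdev case: NIC -> VM and VM -> NIC
# over the same vhost-user socket.
nic::PMDPort(port_id=0, num_inc_q=1, num_out_q=1)
vm::PMDPort(vdev='eth_vhost0,iface=/tmp/bess/vhost-user-1,queues=1')

QueueInc(port=nic, qid=0) -> QueueOut(port=vm, qid=0)
QueueInc(port=vm, qid=0) -> QueueOut(port=nic, qid=0)
```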
Any help is highly appreciated. Thanks!

ztz1989 (Author) commented Dec 31, 2018

Dear all, some new updates: I have tried different configurations and found that whenever I use PortInc with a vdev PMD port, bessd immediately crashes, and QEMU reports:

`qemu-system-x86_64: Failed to read from slave`

Is there any solution for this issue?

P.S. I attach some log info below:

```
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1231 17:30:12.469645 26983 main.cc:68] bessd unknown
W1231 17:30:12.469811 26983 main.cc:75] LoadPlugins() failed to load from directory:
/home/bess/core/modules: No such file or directory [2]
I1231 17:30:12.469898 26983 dpdk.cc:202] Initializing DPDK
W1231 17:30:12.470826 26983 bessd.cc:307] EAL: Detected 48 lcore(s)
I1231 17:30:12.483736 26983 dpdk.cc:65] EAL: Probing VFIO support...
I1231 17:30:25.991405 26983 dpdk.cc:65] EAL: PCI device 0000:05:00.0 on NUMA socket 0
I1231 17:30:25.991425 26983 dpdk.cc:65] EAL: probe driver: 1137:43 net_enic
I1231 17:30:25.991430 26983 dpdk.cc:65] EAL: PCI device 0000:06:00.0 on NUMA socket 0
I1231 17:30:25.991434 26983 dpdk.cc:65] EAL: probe driver: 1137:43 net_enic
I1231 17:30:25.991438 26983 dpdk.cc:65] EAL: PCI device 0000:07:00.0 on NUMA socket 0
I1231 17:30:25.991442 26983 dpdk.cc:65] EAL: probe driver: 1137:43 net_enic
I1231 17:30:25.991446 26983 dpdk.cc:65] EAL: PCI device 0000:08:00.0 on NUMA socket 0
I1231 17:30:25.991449 26983 dpdk.cc:65] EAL: probe driver: 1137:43 net_enic
I1231 17:30:25.991454 26983 dpdk.cc:65] EAL: PCI device 0000:0b:00.0 on NUMA socket 0
I1231 17:30:25.991458 26983 dpdk.cc:65] EAL: probe driver: 8086:10fb net_ixgbe
I1231 17:30:26.129918 26983 dpdk.cc:65] EAL: PCI device 0000:0b:00.1 on NUMA socket 0
I1231 17:30:26.129930 26983 dpdk.cc:65] EAL: probe driver: 8086:10fb net_ixgbe
I1231 17:30:26.267974 26983 dpdk.cc:65] EAL: PCI device 0000:84:00.0 on NUMA socket 1
I1231 17:30:26.267984 26983 dpdk.cc:65] EAL: probe driver: 8086:10fb net_ixgbe
I1231 17:30:26.416899 26983 dpdk.cc:65] EAL: PCI device 0000:84:00.1 on NUMA socket 1
I1231 17:30:26.416909 26983 dpdk.cc:65] EAL: probe driver: 8086:10fb net_ixgbe
I1231 17:30:26.565845 26983 bessd.cc:297] Segment 0: IOVA:0x100000000, len:1073741824, virt:0x7fcc80000000, socket_id:0, hugepage_sz:1073741824, nchannel:0, nrank:0
I1231 17:30:26.565856 26983 bessd.cc:297] Segment 1: IOVA:0x1880000000, len:1073741824, virt:0x7fc400000000, socket_id:1, hugepage_sz:1073741824, nchannel:0, nrank:0
I1231 17:30:26.565860 26983 packet_pool.cc:49] Creating DpdkPacketPool for 262144 packets on node 0
I1231 17:30:26.565867 26983 packet_pool.cc:70] PacketPool0 requests for 262144 packets
I1231 17:30:26.652488 26983 packet_pool.cc:156] PacketPool0 has been created with 262144 packets
I1231 17:30:26.652509 26983 packet_pool.cc:49] Creating DpdkPacketPool for 262144 packets on node 1
I1231 17:30:26.652515 26983 packet_pool.cc:70] PacketPool1 requests for 262144 packets
I1231 17:30:26.723106 26983 packet_pool.cc:156] PacketPool1 has been created with 262144 packets
I1231 17:30:26.723165 26983 pmd.cc:68] 4 DPDK PMD ports have been recognized:
I1231 17:30:26.723181 26983 pmd.cc:90] DPDK port_id 0 (net_ixgbe) RXQ 128 TXQ 64 90:e2:ba:cb:f5:38 00000000:0b:00.00 8086:10fb numa_node 0
I1231 17:30:26.723188 26983 pmd.cc:90] DPDK port_id 1 (net_ixgbe) RXQ 128 TXQ 64 90:e2:ba:cb:f5:39 00000000:0b:00.01 8086:10fb numa_node 0
I1231 17:30:26.723193 26983 pmd.cc:90] DPDK port_id 2 (net_ixgbe) RXQ 128 TXQ 64 90:e2:ba:cb:f5:44 00000000:84:00.00 8086:10fb numa_node 1
I1231 17:30:26.723198 26983 pmd.cc:90] DPDK port_id 3 (net_ixgbe) RXQ 128 TXQ 64 90:e2:ba:cb:f5:45 00000000:84:00.01 8086:10fb numa_node 1
I1231 17:30:26.723242 26983 bessctl.cc:1924] Server listening on 127.0.0.1:10514
W1231 17:30:26.723309 26983 bessd.cc:307] I1231 17:30:26.723302736 26983 server_builder.cc:247] Synchronous server. Num CQs: 1, Min pollers: 1, Max Pollers: 1, CQ timeout (msec): 1000
I1231 17:30:32.042904 26996 bessctl.cc:487] *** All workers have been paused ***
I1231 17:30:32.059286 26997 worker.cc:312] Worker 0(0x7fcd5a9ff440) is running on core 9 (socket 0)
I1231 17:30:32.061713 26996 dpdk.cc:71] PMD: Initializing pmd_vhost for eth_vhost0
I1231 17:30:32.061733 26996 dpdk.cc:71] PMD: Creating VHOST-USER backend on numa socket 4294967295
I1231 17:30:32.061765 26996 dpdk.cc:71] VHOST_CONFIG: vhost-user server: socket created, fd: 26
I1231 17:30:32.061983 26996 dpdk.cc:71] VHOST_CONFIG: bind to /tmp/bess/vhost-user-0
I1231 17:30:32.067025 26996 bessctl.cc:690] Checking scheduling constraints
I1231 17:30:32.067615 26996 bessctl.cc:516] *** Resuming ***
I1231 17:30:35.810925 26998 dpdk.cc:71] VHOST_CONFIG: new vhost user connection is 27
I1231 17:30:35.811055 26998 dpdk.cc:71] VHOST_CONFIG: new device, handle is 0
I1231 17:30:35.811735 26998 dpdk.cc:71] VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
I1231 17:30:35.812407 26998 dpdk.cc:71] VHOST_CONFIG: read message VHOST_USER_GET_PROTOCOL_FEATURES
I1231 17:30:35.812433 26998 dpdk.cc:71] VHOST_CONFIG: read message VHOST_USER_SET_PROTOCOL_FEATURES
I1231 17:30:35.812443 26998 dpdk.cc:71] VHOST_CONFIG: read message VHOST_USER_GET_QUEUE_NUM
I1231 17:30:35.812486 26998 dpdk.cc:71] VHOST_CONFIG: read message VHOST_USER_SET_SLAVE_REQ_FD
I1231 17:30:35.812502 26998 dpdk.cc:71] VHOST_CONFIG: read message VHOST_USER_SET_OWNER
I1231 17:30:35.812510 26998 dpdk.cc:71] VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
I1231 17:30:35.812541 26998 dpdk.cc:71] VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
I1231 17:30:35.812901 26998 dpdk.cc:71] VHOST_CONFIG: vring call idx:0 file:29
I1231 17:30:35.812924 26998 dpdk.cc:71] VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
I1231 17:30:35.812986 26998 dpdk.cc:71] VHOST_CONFIG: vring call idx:1 file:30
I1231 17:30:40.385443 26998 dpdk.cc:71] VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
I1231 17:30:40.385488 26998 dpdk.cc:71] VHOST_CONFIG: set queue enable: 1 to qp idx: 0
I1231 17:30:40.385499 26998 dpdk.cc:71] PMD: vring0 is enabled
I1231 17:30:40.385514 26998 dpdk.cc:71] VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
I1231 17:30:40.385519 26998 dpdk.cc:71] VHOST_CONFIG: set queue enable: 1 to qp idx: 1
I1231 17:30:40.385524 26998 dpdk.cc:71] PMD: vring1 is enabled
I1231 17:30:40.385532 26998 dpdk.cc:71] VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
I1231 17:30:40.385537 26998 dpdk.cc:71] VHOST_CONFIG: set queue enable: 1 to qp idx: 0
I1231 17:30:40.385542 26998 dpdk.cc:71] PMD: vring0 is enabled
I1231 17:30:40.385550 26998 dpdk.cc:71] VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
I1231 17:30:40.385555 26998 dpdk.cc:71] VHOST_CONFIG: set queue enable: 1 to qp idx: 1
I1231 17:30:40.385560 26998 dpdk.cc:71] PMD: vring1 is enabled
I1231 17:30:40.386418 26998 dpdk.cc:71] VHOST_CONFIG: read message VHOST_USER_SET_FEATURES
I1231 17:30:40.386433 26998 dpdk.cc:71] VHOST_CONFIG: read message VHOST_USER_SET_MEM_TABLE
I1231 17:30:40.387137 26998 dpdk.cc:71] VHOST_CONFIG: guest memory region 0, size: 0x80000000
guest physical addr: 0x0
guest virtual addr: 0x7f6400000000
host virtual addr: 0x7fcc00000000
mmap addr : 0x7fcc00000000
mmap size : 0x80000000
mmap align: 0x40000000
mmap off : 0x0
I1231 17:30:40.387161 26998 dpdk.cc:71] VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
I1231 17:30:40.387171 26998 dpdk.cc:71] VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
I1231 17:30:40.387177 26998 dpdk.cc:71] VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
I1231 17:30:40.387189 26998 dpdk.cc:71] VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
I1231 17:30:40.387194 26998 dpdk.cc:71] VHOST_CONFIG: vring kick idx:0 file:32
I1231 17:30:40.387202 26998 dpdk.cc:71] VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
I1231 17:30:40.387207 26998 dpdk.cc:71] VHOST_CONFIG: vring call idx:0 file:33
I1231 17:30:40.387215 26998 dpdk.cc:71] VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
I1231 17:30:40.387223 26998 dpdk.cc:71] VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
I1231 17:30:40.387231 26998 dpdk.cc:71] VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
I1231 17:30:40.387240 26998 dpdk.cc:71] VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
I1231 17:30:40.387245 26998 dpdk.cc:71] VHOST_CONFIG: vring kick idx:1 file:29
I1231 17:30:40.387250 26998 dpdk.cc:71] VHOST_CONFIG: virtio is now ready for processing.
I1231 17:30:40.387257 26998 dpdk.cc:71] PMD: New connection established
I1231 17:30:40.387300 26998 dpdk.cc:71] VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
I1231 17:30:40.387310 26998 dpdk.cc:71] VHOST_CONFIG: vring call idx:1 file:34
I1231 17:30:40.387317 26998 dpdk.cc:71] VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
I1231 17:30:40.387323 26998 dpdk.cc:71] VHOST_CONFIG: set queue enable: 1 to qp idx: 0
I1231 17:30:40.387329 26998 dpdk.cc:71] PMD: vring0 is enabled
I1231 17:30:40.387337 26998 dpdk.cc:71] VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
I1231 17:30:40.387342 26998 dpdk.cc:71] VHOST_CONFIG: set queue enable: 1 to qp idx: 1
I1231 17:30:40.387347 26998 dpdk.cc:71] PMD: vring1 is enabled
I1231 17:30:40.387356 26998 dpdk.cc:71] VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
I1231 17:30:40.387362 26998 dpdk.cc:71] VHOST_CONFIG: set queue enable: 1 to qp idx: 0
I1231 17:30:40.387367 26998 dpdk.cc:71] PMD: vring0 is enabled
I1231 17:30:40.387374 26998 dpdk.cc:71] VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
I1231 17:30:40.387379 26998 dpdk.cc:71] VHOST_CONFIG: set queue enable: 1 to qp idx: 1
I1231 17:30:40.387384 26998 dpdk.cc:71] PMD: vring1 is enabled
I1231 17:30:42.730056 26998 dpdk.cc:71] VHOST_CONFIG: read message VHOST_USER_SET_MEM_TABLE
I1231 17:30:42.730808 26998 dpdk.cc:71] VHOST_CONFIG: guest memory region 0, size: 0x80000000
guest physical addr: 0x0
guest virtual addr: 0x7f6400000000
host virtual addr: 0x7fcc00000000
mmap addr : 0x7fcc00000000
mmap size : 0x80000000
mmap align: 0x40000000
mmap off : 0x0
F1231 17:30:43.317170 26997 debug.cc:405] A critical error has occured. Aborting...
Signal: 11 (Segmentation fault), si_code: 1 (SEGV_MAPERR: address not mapped to object)
pid: 26983, tid: 26997, address: 0x7fcc348ef002, IP: 0x5559b4d4657e
```

ztz1989 (Author) commented Jan 9, 2019

Problem solved by switching back to QEMU 2.5; there is a compatibility issue with the newest version of QEMU (in the log above, the segfault in rte_vhost_dequeue_burst occurs right after QEMU re-sends VHOST_USER_SET_MEM_TABLE). Sorry for the noise.

ztz1989 closed this as completed Jan 9, 2019