docker container bridged(?) via vHost User Interface socket on the same host
-
Hello everybody,
I was reading the documentation at https://docs.netgate.com/tnsr/en/latest/interfaces/types-vhost-user.html and was trying to understand how this would work, or whether it would work at all.
So I've created the interface:
interface vhost-user 0
with the socket filename /var/run/vpp/my_container-100-0.sock
and bridged it to one of my physical interfaces. That seems to work, but how do I bridge this new interface in the dataplane netns to my containers via the socket?
Do I need to bridge two different namespaces (my_container + dataplane) on the host, via veth or similar?
Or does my container app have to be socket-aware to be able to speak through the above socket?
I would like to have Proxmox on my host, since it now supports LXC containers, but TNSR ships on Ubuntu and not the Debian that Proxmox requires. Is there any way to solve this other than reinstalling TNSR fresh on Debian?
My host has 14 cores, enough RAM and an NVMe disk which should support additional load.
Any help or suggestions on this are highly welcome.
-
I've been working on the vhost-user support in TNSR. That work is focused on virtual machine networking interfaces: vhost-user sockets are connected to VMs through QEMU. Currently the vhost-user support with Proxmox is a technical preview. As you've noticed, vhost-user also works on Ubuntu and would be compatible with KVM/QEMU virtual machines.
https://docs.netgate.com/tnsr/en/latest/recipes/vm-router/index.html
I don't have experience with container connectivity; however, I do not think vhost-user will provide a way to interface with containers. It appears to me that veth is the typical interface for containers. I will do a little digging and come back to your question.
-
any updates on this?
I would also like to try something simpler like iperf (which usually uses 5001/tcp+udp) running on the same host as TNSR. Would that be more achievable?
https://stefano-garzarella.github.io/posts/2019-08-22-vsock-iperf3/
I know a TRex test would be more relevant, but for simpler cases iperf is good enough.
-
@pfctl said in docker container bridged(?) via vHost User Interface socket on the same host:
https://docs.netgate.com/tnsr/en/latest/recipes/vm-router/index.html
Would something like this help?
https://doc.dpdk.org/guides/howto/virtio_user_for_container_networking.html
-
@pfctl or, more specific to VPP, this:
https://wiki.fd.io/view/VPP/VPPCommunicationsLibrary
-
/usr/lib/x86_64-linux-gnu/libvcl_ldpreload.so.23.02
seems to be present and included at compile time. That is good, but for the rest I need a bit of guidance.
-
Sorry for the long delay,
I don't have a definitive answer for you yet, but from what I've been reading, I believe you should be able to use vppctl to create a veth device in the same manner as the VPP docs do.
This page shows a simple set of steps for creating the pair. You may have to make changes based on which namespaces your containers are in.
https://s3-docs.fd.io/vpp/23.06/gettingstarted/progressivevpp/interface.html
Just to be clear, and noting the title of the thread again, vhost-user isn't intended to solve container networking, and using veth via vppctl is very much going to put you in the 'experimental feature' category for the time being. If it does work for you, that would be helpful and important feedback to us that could help move things along.
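As a rough sketch of that approach, based on the linked VPP docs: the interface names and addresses below are just examples, the container netns name my_container is my assumption, and the commands may need to run inside (or be adjusted for) TNSR's dataplane network namespace, which I have not verified.
# Create a veth pair in the default namespace (example names)
sudo ip link add vpp1out type veth peer name vpp1host
sudo ip link set dev vpp1out up
# Move one end into the container's network namespace and configure it
sudo ip link set vpp1host netns my_container
sudo ip -n my_container link set dev vpp1host up
sudo ip -n my_container addr add 10.10.1.1/24 dev vpp1host
# Attach the other end to VPP as a host-interface and give it an address
sudo vppctl create host-interface name vpp1out
sudo vppctl set interface state host-vpp1out up
sudo vppctl set interface ip address host-vpp1out 10.10.1.2/24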
I'm going to answer the other posts individually as well.
-
@zez3 said in docker container bridged(?) via vHost User Interface socket on the same host:
I would like to have Proxmox on my host, since it now supports LXC containers, but TNSR ships on Ubuntu and not the Debian that Proxmox requires. Is there any way to solve this other than reinstalling TNSR fresh on Debian?
I will also note here that I believe the technical preview of Debian-specific packages is only available to licensed users, as it requires valid certificates for the repositories. If you do fit into this category, I believe a support ticket is the best way to get the details.
-
@zez3 said in docker container bridged(?) via vHost User Interface socket on the same host:
https://doc.dpdk.org/guides/howto/virtio_user_for_container_networking.html
Are you intending to run a DPDK application in the container? This might be possible with TNSR and vhost-user. In this context, your container would still need access to the socket, and you'd have to connect your DPDK application to that socket via DPDK's virtio components.
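For reference, the DPDK howto you linked does this with a virtio_user vdev pointed at the vhost-user socket. A minimal sketch along those lines, where the container image name, core list, and mount paths are placeholders and additional EAL memory options may be required:
# Run a DPDK app (testpmd) inside a container, attached to the host's vhost-user socket
docker run -it --privileged \
  -v /var/run/vpp/my_container-100-0.sock:/var/run/usvhost \
  -v /dev/hugepages:/dev/hugepages \
  my-dpdk-image dpdk-testpmd -l 6-7 --no-pci \
  --vdev=virtio_user0,path=/var/run/usvhost \
  --file-prefix=container -- -i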
-
@pfctl said in docker container bridged(?) via vHost User Interface socket on the same host:
Are you intending to run a DPDK application in the container? This might be possible with TNSR and vhost-user. In this context, your container would still need access to the socket, and you'd have to connect your DPDK application to that socket via DPDK's virtio components.
That would also be an idea if I cannot just bridge the host iperf socket to the VPP one.
I see someone has already prepared this https://github.com/ConnorDoyle/docker-iperf3-dpdk
and I just found this:
https://www.mail-archive.com/vpp-dev@lists.fd.io/msg04979.html
I guess I'll have to give it a try.
-
OK, so iperf was easier than I thought. I don't even need the special vsock build from Stefano Garzarella's repo.
iperf3 --version
iperf 3.9 (cJSON 1.7.13)
Linux tnsr 6.2.0-34-generic #34~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Thu Sep 7 13:12:03 UTC 2 x86_64
Optional features available: CPU affinity setting, IPv6 flow label, SCTP, TCP congestion algorithm setting, sendfile / zerocopy, socket pacing, authentication
With the help of this:
https://wiki.fd.io/view/VPP/HostStack/LDP/iperf
I added in /etc/vpp/startup.conf the stanza
session { use-app-socket-api enable }
In vcl.conf I had to change
app-socket-api
to /var/run/vpp/app_ns_sockets/default
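Putting those pieces together, the configuration looks roughly like this; a sketch only, where the fifo sizes and scope options come from the fd.io LDP/iperf wiki example rather than from the TNSR-shipped files:
# /etc/vpp/startup.conf (excerpt)
session { use-app-socket-api enable }

# vcl.conf (the file pointed to by VCL_CONFIG), roughly following the fd.io wiki example
vcl {
  rx-fifo-size 4000000
  tx-fifo-size 4000000
  app-scope-local
  app-scope-global
  app-socket-api /var/run/vpp/app_ns_sockets/default
}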
and I was able to start the iperf server on the TNSR host. Check your
/var/run/vpp/
if in doubt. I used another core on my same NUMA node 0:
echo $LDP_PATH
/usr/lib/x86_64-linux-gnu/libvcl_ldpreload.so.23.02
sudo taskset --cpu-list 5 sh -c "LD_PRELOAD=$LDP_PATH VCL_CONFIG=$VCL_CFG iperf3 -4 -s"
Then a client can use any iperf client (DPDK-based or traditional through the Linux kernel) to perform tests.
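For example, a plain kernel-stack run from another machine; the address and options are placeholders for whichever dataplane interface the server is reachable on:
# Ordinary iperf3 client pointed at the TNSR-hosted server
iperf3 -c 192.0.2.1 -t 30 -P 4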
I wonder if I could reach the same result through the vhost-user interface sock-filename configuration described in the TNSR documentation.
-
@zez3 I have tested the VPP host stack with iperf3 in the past as well. It works, as you found, but we don't have any pieces in TNSR to control it.
I have not tried the DPDK-enabled build of iperf3 that you linked, but if memory serves me correctly, they've pulled the FreeBSD TCP/IP stack into that build.
The vhost-user socket is the control socket for virtio between the VM and VPP. If you are using QEMU, you bind it to a character device and connect your virtio networking to that character device. I don't think it will function if you bind other things to that socket.
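For context, the QEMU side of that binding typically looks something like the following fragment, appended to an otherwise complete QEMU command line; a sketch only, reusing the socket path from earlier in the thread, with the shared hugepage memory backend that vhost-user generally requires:
# QEMU options binding a vhost-user socket to a virtio NIC
-object memory-backend-file,id=mem0,size=1G,mem-path=/dev/hugepages,share=on \
-numa node,memdev=mem0 \
-chardev socket,id=chr0,path=/var/run/vpp/my_container-100-0.sock \
-netdev type=vhost-user,id=net0,chardev=chr0 \
-device virtio-net-pci,netdev=net0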
TNSR does have memory interfaces (memif) available, which I believe DPDK supports as well; those may provide a more generic interconnect. However, I want to back up a little bit and ask what your specific goal is currently. I would like to get a better handle on your intended use case, as exploring new features and use cases for TNSR is something I am quite interested in.
As I understand it currently, your goal was to create a container to run iperf3 from? Is your reasoning for the container because you didn't see a way of running the iperf3 binary in a way that was accessible from the dataplane networks? Or was your goal to provide isolation to the iperf3 service AND have it be accessible from the dataplane networks?
Are you using iperf3 in these posts as just an example of a generic application to run in a container or link to the dataplane, with the intention of running other applications after you found a solution to an example application?
-
I want to back up a little bit and ask what your specific goal is currently. I would like to get a better handle on your intended use case, as exploring new features and use cases for TNSR is something I am quite interested in.
As I understand it currently, your goal was to create a container to run iperf3 from? Is your reasoning for the container because you didn't see a way of running the iperf3 binary in a way that was accessible from the dataplane networks? Or was your goal to provide isolation to the iperf3 service AND have it be accessible from the dataplane networks?
Are you using iperf3 in these posts as just an example of a generic application to run in a container or link to the dataplane, with the intention of running other applications after you found a solution to an example application?
I would say all of the above.
My self-built test box has enough cores to support multiple services, so I was looking to put those cores and memory to some use, such that I don't need another system. A TNSR AIO, if you want. Ideally the running services should be subject to some sort of resource control; the Linux kernel provides that via cgroups, with a multitude of implementations, Docker being just one of them. Of course, we should isolate/reserve/dedicate some cores for TNSR and DPDK only.
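As an illustration of that kind of isolation; the core range, memory cap, and image name below are placeholders of mine, and which cores are actually free depends on the dataplane's CPU configuration:
# Pin a container to cores not reserved for TNSR/DPDK and cap its memory
docker run -d --name iperf3-test \
  --cpuset-cpus="8-13" --memory=4g \
  my-iperf3-image iperf3 -s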
iperf was indeed just an example. The idea would eventually be that our monitoring system performs and records regular tests. I work for a Swiss university, so we already use iperf to measure different parts of our network. Nothing out of the ordinary here.
Anyway, the generic application case sounds closer to what I would like to achieve. I was thinking of exposing a web server for a not-so-trustworthy containerized app through the TNSR dataplane. If it gets compromised, it should not be possible to influence the TNSR router. I saw on the VPP wiki that they have nginx examples; I would try that next.
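If that follows the same LD_PRELOAD/VCL pattern as the iperf3 run above, the launch would presumably look roughly like this; a sketch only, reusing the earlier LDP_PATH and VCL_CONFIG variables, and I have not checked how the wiki's nginx example handles the master/worker process model:
# Run nginx through the VPP host stack via the VCL LD_PRELOAD shim
sudo taskset --cpu-list 5 sh -c \
  "LD_PRELOAD=$LDP_PATH VCL_CONFIG=$VCL_CFG nginx -g 'daemon off;'"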
I am not sure (I forgot to check) whether that iperf port is exposed on all TNSR interfaces. I would probably need to apply some ACLs. By the way, do TNSR ACLs protect/work against packet-fragmentation attacks?
Are the TNSR ACLs the VPP ones? The TNSR documentation is not clear about that... I hope this explains my use case a bit better.