Very slow traffic from other VMs through pfSense on XenServer
-
Well, this issue occurs when traffic flows from external machines through the pfSense WAN interface to resources on the internal LAN.
The host where this works has different hardware (including different NICs) than the other two hosts in the pool. So when I migrate pfSense to host 1 or 2, or restart it there, I can't get through the firewall from the outside (i.e. it's so slow that it doesn't work). But with pfSense on host 3 it works as expected.
Before, it worked on all 3 hosts. Now pfSense is not protected against host failure.
-
What are the eth specs when it's failing? And is it a live migration or a shutdown-boot migration?
If you want to protect against failure, it's better to use pfSense's failover options instead of hypervisor-based failover.
-
I think he was trying to do that, but he perceived one pfSense to work and two others not to.
I'll try to explain it another way… the interface (if any) which transmits traffic to machines on the same physical Xen server needs to have TX checksums turned off, as I noted in my post. That's the only interface affected.
If you have a pfSense on Xen and it does not route for any hosts on the same Xen box, you don't see any problem.
This would affect any traffic to which checksums are applicable (all of it, I think?), so it would affect CARP traffic too, I imagine, IF your pfSense boxes were on the same network; if they are on different boxes the CARP traffic will be fine.
Just turn off the TX checksums for all the pfSense interfaces if you don't understand what I mean; the method I described survives rebooting and only affects the pfSense VMs you apply the changes to.
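To be concrete, the commands I mean look roughly like this, run in dom0 on the XenServer host (the VM name "pfsense" and the UUIDs are placeholders; the setting survives reboots but only takes effect after the VIF is replugged or the VM restarted):
# find the VM and its VIF UUIDs
xe vm-list name-label=pfsense params=uuid
xe vif-list vm-uuid=<vm-uuid> params=uuid,device
# turn TX checksumming off on each of those VIFs
xe vif-param-set uuid=<vif-uuid> other-config:ethtool-tx="off"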
Hope that clarifies. Cheers.
-
Perhaps my explanation was not so clear. The offload settings mentioned here have been applied on all interfaces of pfSense from the start, when I was running it on XenServer 6.2. That fixed the problem then and pfSense worked perfectly fine on all 3 hosts. It was like living in a dream where the streets were paved with gold and there was free candy for everyone.
After upgrading to XS 6.5/SP1, pfSense only works on 1 host. It doesn't matter if I live migrate or shut down and restart on another host. It ONLY works on "host 3".
I am only running 1 instance of pfSense, and sure, it may be better to run 2 or more in an HA setup, but that's not really the question here. I had a fine working setup, but not anymore. The candy is all gone and the only change is that XS has been upgraded.
In reply to johnkeates: I don't know which eth spec I should look into…?
-
Use xe to get all the VIF specs from the hypervisor where pfSense works and from one non-functional hypervisor, as well as the ethtool parameters for both.
We're looking for other variables that might mess with the in-memory transport, because that's where the VirtIO-related issues seem to lie.
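Something along these lines should capture it, run in dom0 on both the working host and one failing host (the VM UUID and the backend interface name vif1.0 are placeholders; use whatever the pfSense domain's backend interface is actually called on each host):
# dump the full VIF records for the pfSense VM
xe vif-list vm-uuid=<pfsense-vm-uuid> params=all
# show the offload/ethtool settings of the dom0 backend interface
ethtool -k vif1.0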
If you could post those 4 outputs it'd help us diagnose.
-
My bad…
I noticed that the interfaces on the 2 failing XenServer hosts were reordered for some reason. Correcting this solved my problem, so it was not related to pfSense.
I am thankful for your effort to help out and apologize for confusing you!
-
Glad you got it fixed!
-
Just to keep this updated.
This problem still happens on XenServer 7.0 with pfSense 2.3.1.
-
Yep, until it's fixed in upstream FreeBSD it won't get fixed, ever.
-
@johnkeates:
Yep, until it's fixed in upstream FreeBSD it won't get fixed, ever.
Just figured I'd update this thread on these issues. It looks like FreeBSD 11 is adding dom0 support for Xen, so hopefully these issues will be fixed. I'm just getting a virtualized setup going, and with support for 32-bit ending soon, I may try pfSense 2.4 to see how it works out of the box with Xen.
Here is a link to the FreeBSD Xen support, though it is experimental at this stage:
https://wiki.freebsd.org/Xen
-
@johnkeates:
I suppose that could actually fix the netback/netfront problems because it will be BSD on the other end too. Interesting.
Yes, very. Although there is still some work to do. I got the latest 2.4 snapshot running (as of March 18th), with FreeBSD 11.0-p8, under XenServer 7.1 with all patches, and the issues with checksum offloading still exist. Disabling it still fixes the issue, though only on the rx and tx side, but I do believe there is a slight performance drop, like others have said here. I haven't tested local file transfers yet, but I do notice a slight drop in internet bandwidth. I'll do more testing when I get time.
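For the local transfer tests I'll probably just run iperf3 between a VM on the same host as pfSense and one on another host, roughly like this (the address is a placeholder and iperf3 may need installing first):
# on the receiving VM
iperf3 -s
# on the sending VM, pointed at the receiver's LAN address
iperf3 -c 192.168.1.10 -t 30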
-
So as I understand it, we need an upstream fix from FreeBSD for this to be magically solved once and for all. What about workarounds? Can someone summarize what steps to take so we can add it to the Wiki under Virtualization / Xen?
Out of curiosity, is it the same with other environments, like KVM or ESXi?
-
Hi guys, is this solution necessary? I mean, I've already disabled "hardware checksum offloading" and I'm running XS 7.2 with Citrix DVSC... I just can't get through to the internet from my second server; all my VMs hosted on my pool master are working fine...
pfSense is my gateway, running on the master.
-
Yes. It is still necessary if you want to use the PV NICs.
You can also put this in /boot/loader.conf.local:
hw.xen.disable_pv_nics=1
Your interfaces will now present as reX and you will not have to make those VM checksum changes. But they won't be paravirtualized.
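If loader.conf.local doesn't exist yet, one way to create it is from a pfSense shell (just a sketch; reboot afterwards so the tunable is picked up at boot):
echo 'hw.xen.disable_pv_nics=1' >> /boot/loader.conf.local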
-
I'm wondering... this thread is 4 years old... and the problem is still there.
The trick xe vif-param-set uuid={uuid of vif} other-config:ethtool-tx="off" is still working, too. Any idea when this will be fixed?
Thanks
-
Hello, I run my pfSense on XenServer. I edited both LAN cards:
xe vif-param-set uuid=... other-config:ethtool-rx="off"
and
xe vif-param-set uuid=... other-config:ethtool-rx="on"and speed of internet will be beter. But I still have problem with some pages (www) some pages open some pages don't open, some pages open very long and sometimes works fine :(.
Any idea ??
Thanks.
-
Hi.
Much better:
https://xcp-ng.org/ + https://xen-orchestra.com/docs/