pfSense on Proxmox 3.1 - low network speed with VirtIO
-
Hi all,
I have a big problem with pfSense in my Proxmox 3.1 environment. If I run pfSense without virtualization, the WAN speed is perfect (2x 150 Mbit/s bundled) and I receive the full 300 Mbit/s. But under virtualization I only get about 200 Mbit/s on the output (LAN) interface. With only one WAN connection (150 Mbit/s) I get the full 150 Mbit/s, so I suspect a problem with the LAN interface? All three NICs are Gbit/s capable and configured as VirtIO devices in Proxmox.
So there must be a virtualization problem or a config issue? Please help me.
greez
-
I am very new to pfSense (this is my second post), but I can confirm I had the same experience (although I did not take measurements). Having NICs on virtio caused everything to slow down. I removed virtio support and everything went back to normal.
Latest Proxmox (non-subscription) and pfSense.
-
I just made the switch from ESXi to Proxmox (KVM) this week and neglected to test pfSense throughput before putting my machine back into the colo. Unfortunately I am in the same boat as you: ~200 Mbit/s taps out the virtio interface and pegs two CPU cores doing it (e1000 is even worse), rendering the connection useless. Does anyone know whether the virtio drivers under FreeBSD are this terrible in general, or whether better ones just have not made it into pfSense yet? For the time being I had to abandon pfSense for IPFire; I have no other choice.
-
Same issue here, testing pfSense in a nested virtual KVM/CentOS 6.5 install on VMware Fusion. I don't think the problem is with pfSense directly; I think it comes from the FreeBSD 8 virtio drivers not being very good.
During my tests I can get 352 Mbit/s with iperf to pfSense using virtio, with CPU at 100% usage. After the test completes, the CPU stays stuck at ~20% usage while doing nothing. Running the exact same iperf commands against my FreeNAS (based on FreeBSD 9.2) test KVM install, I get 352 Mbit/s with only ~30% CPU usage, and once the test is done the CPU sits at 0%. I then installed IPFire into KVM and ran iperf: 452 Mbit/s and CPU usage similar to FreeNAS.
My test environment consists of VMware Fusion installed on my i5 system with 8 GB RAM. CentOS is given 2 cores and 4 GB RAM; each KVM domain is given no more than 1 CPU core and no more than 2 GB RAM, and I only run one KVM domain at a time.
From what I have read, FreeBSD 10 is supposed to have improved virtio drivers, so once pfSense 2.2 comes out it should work much better virtualized.
Diablo, thanks for mentioning IPFire; I'd never heard of it until now. It will work for me until pfSense 2.2 is out. I find IPFire hard to use compared to pfSense, but it'll do as a temporary fix.
-
From what I have read, FreeBSD 10 is supposed to have improved virtio drivers, so once pfSense 2.2 comes out it should work much better virtualized.
That would be great news; I really hope it happens. IPFire is able to push gigabit speeds all day without breaking a sweat, but it is extremely limited compared to pfSense. Anything even slightly advanced is left for the user to figure out with iptables rules, which isn't much fun.
-
A firewall on a virtual machine in production use is a no-go for me anyway, unless I hand the hardware interface over to the VM.
I don't know if this is possible in Proxmox; it also depends on the hardware. In KVM with newer mainboards and NICs it is. However, if you do so, you have to take care of hardware compatibility in pfSense directly, which can also be a challenge with some hardware.
-
A firewall on a virtual machine in production use is a no-go for me anyway, unless I hand the hardware interface over to the VM.
I don't know if this is possible in Proxmox; it also depends on the hardware. In KVM with newer mainboards and NICs it is. However, if you do so, you have to take care of hardware compatibility in pfSense directly, which can also be a challenge with some hardware.
I understand why you are opposed to this for mission-critical business use. I believe many of us here do not fit into that category. I run Proxmox virtualized at home and on my colo'd server (which I have for hobby/personal offsite backup use) for convenience and financial reasons. It has been rock solid for me for years now, running virtualized under both ESXi and Proxmox.
-
Just tested virtio in 2.2-ALPHA… the problem is still there. Maybe the issue isn't the virtio drivers, then. I was only able to get ~190 Mbit/s at 100% CPU usage using iperf.
-
Just tested virtio in 2.2-ALPHA… the problem is still there. Maybe the issue isn't the virtio drivers, then. I was only able to get ~190 Mbit/s at 100% CPU usage using iperf.
Damn… I was just about to test this myself tonight as well; guess I don't need to. Thanks for letting us know about the bad news :(
-
Just tested virtio in 2.2-ALPHA… the problem is still there. Maybe the issue isn't the virtio drivers, then. I was only able to get ~190 Mbit/s at 100% CPU usage using iperf.
Damn… I was just about to test this myself tonight as well; guess I don't need to. Thanks for letting us know about the bad news :(
I tested using nested virtualization: CentOS 6.5 with KVM running on VMware Fusion (2 cores, 4 GB RAM), and pfSense set up on KVM with 1 core and 2 GB RAM. I'm really doubting my test results, so if you have a dedicated Proxmox or KVM server, please test the alpha on that; I'm very curious to know if it performs better.
I ran a similar test using ESXi 5.5 installed on VMware Fusion and it had a similar issue: high CPU usage and ~300 Mbit/s. The CPU usage wasn't pegged at 100% like under KVM, though; it was up and down. That was with a 2-minute iperf test.
-
Alright, I just tested with the latest i386 pfSense 2.2 snapshot (20140404). The amd64 build does not boot (same with FreeBSD 10 amd64) on Proxmox 3.1 running on a Xeon E3-1230v2. Unfortunately the iperf package fails to install on this build of pfSense, so I had to improvise with netcat and dd. I ran the following command on a local Linux machine:
```
dd if=/dev/zero bs=1024K count=2048 | nc -v 192.168.1.89 2222
```
The pfSense machine ran:
```
nc -v -l 2222 > /dev/null
```
Results: 568 Mbit/s sustained transfer speed, 81% CPU usage on 2 cores.
So it's an improvement, but unfortunately still far below where it needs to be.
I also went ahead and installed FreeBSD 10 i386 in a VM on the same Proxmox host and ran an iperf test; the results were MUCH better:
```
[  3] local 192.168.1.152 port 12404 connected with 192.168.1.55 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.08 GBytes   930 Mbits/sec
```
CPU usage was about 60% on one core, still high but at least manageable. There is hope for pfSense on KVM, it seems.
For consistency I also ran the same dd/netcat test as above on FreeBSD 10. The result: 728 Mbit/s sustained transfer speed, 80% CPU usage.
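For anyone who wants to reproduce the dd/netcat measurement without relying on those exact tools, here is a minimal sketch of the same idea (stream zeroes over TCP and time it) as a self-contained Python script. It runs both halves against loopback just to show the mechanics; for a real test you would split it, run the receiver half on the pfSense box, and point the sender at its LAN address. The sizes and port choice are illustrative, not from the thread.

```python
import socket
import threading
import time

TOTAL_BYTES = 32 * 1024 * 1024   # 32 MiB of zeroes, like dd if=/dev/zero
CHUNK = 1024 * 1024              # matches dd's bs=1024K

def drain(server_sock):
    # Receiver half, the equivalent of `nc -l 2222 > /dev/null`:
    # accept one connection and discard everything it sends.
    conn, _ = server_sock.accept()
    with conn:
        while conn.recv(65536):
            pass

# Listen on loopback on an ephemeral port (the thread used port 2222).
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]
t = threading.Thread(target=drain, args=(srv,))
t.start()

# Sender half, the equivalent of `dd if=/dev/zero ... | nc <host> <port>`.
buf = b"\x00" * CHUNK
sent = 0
start = time.monotonic()
with socket.create_connection(("127.0.0.1", port)) as c:
    while sent < TOTAL_BYTES:
        c.sendall(buf)
        sent += len(buf)
t.join()
srv.close()

elapsed = time.monotonic() - start
mbit = sent * 8 / elapsed / 1e6
print(f"{sent} bytes in {elapsed:.3f} s = {mbit:.0f} Mbit/s")
```

Over loopback this mostly measures CPU, which is exactly what's interesting here: on a virtio guest the sender/receiver cores saturating long before gigabit line rate is the symptom being discussed.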
-
Good to know, thanks for all of that! I'm thinking I may scrap the idea of virtualizing pfSense and FreeNAS and go with a hardware router, installing FreeNAS by itself.
I should've mentioned before that I couldn't install iperf using the pfSense GUI. You can, however, install it using pkg from the command line:
```
pkg update
pkg install iperf
/usr/local/bin/iperf
```
/usr/local/bin isn't in the PATH, so the full path has to be specified.
-
Good to know, thanks for all of that! I'm thinking I may scrap the idea of virtualizing pfSense and FreeNAS and go with a hardware router, installing FreeNAS by itself.
I should've mentioned before that I couldn't install iperf using the pfSense GUI. You can, however, install it using pkg from the command line:
```
pkg update
pkg install iperf
/usr/local/bin/iperf
```
/usr/local/bin isn't in the PATH, so the full path has to be specified.
Thanks for the info. I wish I had the option of not virtualizing it; I just can't afford to colo more hardware in this case. At home I've been running it under Proxmox for years, but I only have 100 Mbit there, so it obviously isn't a problem.
-
I had a problem with Proxmox and pfSense: network speeds were slow, and the only way to fix it was to install ethtool, change to VirtIO NICs, and add two lines to the Proxmox host's /etc/network/interfaces file:
```
pre-up /sbin/ethtool -s eth1 speed 1000 duplex full autoneg off
pre-up /sbin/ethtool -K eth1 tx off
```
After a host reboot the speeds were back to several hundred Mbit/s.
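For anyone applying this, the pre-up hooks go under a stanza in /etc/network/interfaces on the Proxmox host. A rough sketch of what that might look like, assuming a bridge named vmbr1 backed by eth1 (the interface names and bridge options here are examples, adjust them to your own layout):

```
auto vmbr1
iface vmbr1 inet manual
    bridge_ports eth1
    bridge_stp off
    bridge_fd 0
    pre-up /sbin/ethtool -s eth1 speed 1000 duplex full autoneg off
    pre-up /sbin/ethtool -K eth1 tx off
```

The second line disables TX checksum offload on the physical NIC, which is a common workaround when virtio guests see checksum-related slowdowns.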
Sam