pfSense on OpenStack
-
Hi all,
I set up pfSense on OpenStack and tested bandwidth with iperf using the topology VM1--------pfSense--------VM2. All links are 10Gb, but the result is always only 600Mb-800Mb on the 10Gb link.
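For reference, the test looks roughly like this (a sketch; iperf3 is assumed, and 10.0.1.20 is a placeholder for VM2's address):

```shell
# On VM2: start an iperf3 server listening for test traffic
iperf3 -s

# On VM1: run a 30-second TCP test through pfSense to VM2
iperf3 -c 10.0.1.20 -t 30

# Same test with 4 parallel streams, which can show whether
# a single stream is the bottleneck
iperf3 -c 10.0.1.20 -t 30 -P 4
```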
Can you help me? -
What hardware are you running this on? And do pfSense's virtual NICs show that they are connected at 10GbE? What NICs does pfSense think it has?
And did you do the same test like this:
vm1 --- virtual network --- vm2
What speed does iperf show for that?
What does pfsense show for CPU usage when you run this test..
Are you also NATting between the vm1 and vm2 networks? Or just firewalling? -
Hi johnpoz,
- The hardware I'm running on is a Xeon E5-2690 v4 CPU
- Yes, the virtual NICs of pfSense are connected at 10Gb
- When I test vm1 ---- virtual network ---- vm2 directly, the result is 9.4Gb
- I run pfSense on a VM with 2 vCPUs and 2GB of RAM
- When running iperf, CPU usage is 20-30%. But I don't understand why, when I look at System Activity, CPU idle is always high:
"*last pid: 53160; load averages: 0.02, 0.20, 0.20 up 0+02:14:57 10:09:30
145 processes: 3 running, 115 sleeping, 27 waiting
Mem: 26M Active, 87M Inact, 112M Wired, 33M Buf, 1726M Free
Swap:
PID USERNAME PRI NICE SIZE RES STATE C TIME WCPU COMMAND
11 root 155 ki31 0K 32K CPU1 1 132:45 100.00% [idle{idle: cpu1}]
11 root 155 ki31 0K 32K RUN 0 132:17 100.00% [idle{idle: cpu0}]
12 root -92 - 0K 432K WAIT 0 0:29 0.00% [intr{irq263: virtio_pci2}]
0 root -16 - 0K 384K swapin 1 0:21 0.00% [kernel{swapper}]
316 root 26 0 88588K 34892K piperd 0 0:08 0.00% php-fpm: pool nginx (php-fpm)
12 root -92 - 0K 432K WAIT 0 0:07 0.00% [intr{irq257: virtio_pci0}]
17281 root 20 0 6600K 2348K bpf 1 0:07 0.00% /usr/local/sbin/filterlog -i pflog0 -p /
12 root -92 - 0K 432K WAIT 1 0:07 0.00% [intr{irq260: virtio_pci1}]
90798 root 52 0 92816K 35856K accept 0 0:05 0.00% php-fpm: pool nginx (php-fpm){php-fpm}
12 root -60 - 0K 432K WAIT 0 0:04 0.00% [intr{swi4: clock (0)}]
317 root 52 0 93072K 35928K accept 0 0:04 0.00% php-fpm: pool nginx (php-fpm){php-fpm}
17 root -16 - 0K 16K pftm 0 0:03 0.00% [pf purge]
18 root -16 - 0K 16K - 0 0:02 0.00% [rand_harvestq]
87705 root 20 0 6400K 2544K select 0 0:02 0.00% /usr/sbin/syslogd -s -c -c -l /var/dhcpd
83181 root 52 20 6968K 2852K wait 1 0:02 0.00% /bin/sh /var/db/rrd/updaterrd.sh
25 root 16 - 0K 16K syncer 0 0:01 0.00% [syncer]
54811 root 20 0 6900K 2472K nanslp 1 0:01 0.00% [dpinger{dpinger}]
12 root -92 - 0K 432K WAIT 0 0:01 0.00% [intr{irq261: virtio_pci1}]*"
- I don't configure NAT between vm1 and vm2; it's the default config (firewall)
-
Out of the box, if one network is WAN with a gateway and the other is LAN, then pfSense would NAT.
And you're running the test through pfSense, right.. vm1 is either client or server for iperf and vm2 is the opposite, right? Did you switch directions? What speeds do you get then?
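Switching directions just means swapping which side runs the iperf client; with iperf3 you can also keep the server in place and use reverse mode (a sketch; the address is a placeholder):

```shell
# Normal direction: VM1 sends to the iperf3 server on VM2
iperf3 -c 10.0.1.20 -t 30

# Reverse direction with the same server: VM2 sends to VM1
iperf3 -c 10.0.1.20 -t 30 -R
```

If one direction is much slower than the other, that points at an offload or queueing problem on one specific interface rather than raw CPU.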
This thread, while old, links to an old blog post, and one poster there states he sees 8Gbps on ESXi.. so I would think you should be seeing way higher than you are.
https://forum.netgate.com/topic/111302/pfsense-tuning-for-10-gbit-throughput/6
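The usual starting points for virtio throughput tuning are disabling the offloads the vtnet driver handles poorly and raising the socket buffer limits. A sketch of what that could look like in /boot/loader.conf.local and System > Advanced > System Tunables (the buffer values are assumptions to experiment with, not drop-in settings):

```shell
# /boot/loader.conf.local - vtnet(4) loader tunables
hw.vtnet.lro_disable="1"
hw.vtnet.tso_disable="1"
hw.vtnet.csum_disable="1"

# System Tunables (sysctl) - allow larger TCP windows
kern.ipc.maxsockbuf=16777216
net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.recvbuf_max=16777216
```

Also make sure hardware checksum/TSO/LRO offloading is disabled under System > Advanced > Networking in the GUI, and consider giving the VM more vCPUs, since with 2 vCPUs a single interrupt-heavy stream can saturate one core even when overall idle looks high.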