x540 and x550 performance the same
-
Internally my systems are running at 10Gbit. For example, my workstation to my Plex server:
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-1.00 sec 1.05 GBytes 8.99 Gbits/sec
[ 4] 1.00-2.00 sec 1.04 GBytes 8.95 Gbits/sec
[ 4] 2.00-3.00 sec 1.05 GBytes 9.01 Gbits/sec
[ 4] 3.00-4.00 sec 1.05 GBytes 9.04 Gbits/sec
[ 4] 4.00-5.00 sec 1.05 GBytes 9.01 Gbits/sec
[ 4] 5.00-6.00 sec 1.05 GBytes 9.00 Gbits/sec
[ 4] 6.00-7.00 sec 1.05 GBytes 9.01 Gbits/sec
[ 4] 7.00-8.00 sec 1.04 GBytes 8.94 Gbits/sec
[ 4] 8.00-9.00 sec 1.05 GBytes 9.05 Gbits/sec
[ 4] 9.00-10.00 sec 1.04 GBytes 8.96 Gbits/sec
Now through pfSense with the x540 or x550:
[ 4] 0.00-1.00 sec 346 MBytes 2.90 Gbits/sec
[ 4] 1.00-2.00 sec 419 MBytes 3.52 Gbits/sec
[ 4] 2.00-3.00 sec 419 MBytes 3.52 Gbits/sec
[ 4] 3.00-4.00 sec 399 MBytes 3.35 Gbits/sec
[ 4] 4.00-5.00 sec 412 MBytes 3.46 Gbits/sec
[ 4] 5.00-6.00 sec 418 MBytes 3.51 Gbits/sec
[ 4] 6.00-7.00 sec 420 MBytes 3.52 Gbits/sec
[ 4] 7.00-8.00 sec 420 MBytes 3.52 Gbits/sec
[ 4] 8.00-9.00 sec 405 MBytes 3.40 Gbits/sec
[ 4] 9.00-10.00 sec 415 MBytes 3.48 Gbits/sec
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-10.00 sec 3.98 GBytes 3.42 Gbits/sec sender
[ 4] 0.00-10.00 sec 3.98 GBytes 3.42 Gbits/sec receiver
I've set these in the loader.conf.local file
The pfSense box has dual E5-2699 v4s, 128GB RAM, and 4 SSDs. The CPU itself never goes above 1% usage.
CPU Type Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz
88 CPUs: 2 package(s) x 22 core(s) x 2 hardware threads
AES-NI CPU Crypto: Yes (active) -
Check the CPU usage per core. Use
top -aSH
on pfSense whilst testing.
Try iperf with multiple parallel streams to use more NIC queues and hence CPU cores.
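A minimal sketch of a multi-stream test (the server address 192.168.1.10 is a placeholder; substitute a host on your LAN):

```shell
# On the server side (e.g. the Plex box):
iperf3 -s

# On the client, open 8 parallel TCP streams so traffic is spread
# across multiple NIC queues (and hence CPU cores):
iperf3 -c 192.168.1.10 -P 8 -t 10
```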
Steve
-
@stephenw10 said in x540 and x550 performance the same:
Try iperf with multiple parallel streams to use more NIC queues and hence CPU cores.
This machine had Windows 10 on it before, and 9Gbit was an everyday thing.....
Sure, with multiple streams I can max it out, but my Windows servers and workstations on Windows 10 hit 9.6/9.7 on single streams without effort. I know pfSense is a firewall, but I'd assume I'd get 7 or 8 Gbit with this system's specs.
-
Are you testing directly to/from the firewall?
That's an invalid test for firewall throughput. You're actually testing pfSense's ability to run iperf, which is single threaded. So 1 out of 22 cores in use.
Steve
-
1 out of 88 actually, lol. Readout below; it doesn't seem busy:
PID USERNAME PRI NICE SIZE RES STATE C TIME WCPU COMMAND
11 root 155 ki31 0B 1408K CPU15 15 250:03 100.00% [idle{idle: cpu15}]
11 root 155 ki31 0B 1408K CPU25 25 250:03 100.00% [idle{idle: cpu25}]
11 root 155 ki31 0B 1408K CPU17 17 250:02 100.00% [idle{idle: cpu17}]
11 root 155 ki31 0B 1408K CPU23 23 250:02 100.00% [idle{idle: cpu23}]
11 root 155 ki31 0B 1408K CPU35 35 250:01 100.00% [idle{idle: cpu35}]
11 root 155 ki31 0B 1408K CPU33 33 250:01 100.00% [idle{idle: cpu33}]
11 root 155 ki31 0B 1408K CPU53 53 250:00 100.00% [idle{idle: cpu53}]
11 root 155 ki31 0B 1408K CPU37 37 250:00 100.00% [idle{idle: cpu37}]
11 root 155 ki31 0B 1408K CPU39 39 250:00 100.00% [idle{idle: cpu39}]
11 root 155 ki31 0B 1408K CPU45 45 249:58 100.00% [idle{idle: cpu45}]
11 root 155 ki31 0B 1408K CPU49 49 249:58 100.00% [idle{idle: cpu49}]
11 root 155 ki31 0B 1408K CPU36 36 249:56 100.00% [idle{idle: cpu36}]
11 root 155 ki31 0B 1408K CPU81 81 249:55 100.00% [idle{idle: cpu81}]
11 root 155 ki31 0B 1408K CPU46 46 249:55 100.00% [idle{idle: cpu46}]
11 root 155 ki31 0B 1408K CPU14 14 249:54 100.00% [idle{idle: cpu14}]
11 root 155 ki31 0B 1408K CPU56 56 249:53 100.00% [idle{idle: cpu56}]
11 root 155 ki31 0B 1408K CPU16 16 249:53 100.00% [idle{idle: cpu16}]
11 root 155 ki31 0B 1408K CPU76 76 249:52 100.00% [idle{idle: cpu76}]
11 root 155 ki31 0B 1408K CPU50 50 249:51 100.00% [idle{idle: cpu50}]
11 root 155 ki31 0B 1408K CPU68 68 249:50 100.00% [idle{idle: cpu68}]
11 root 155 ki31 0B 1408K CPU52 52 249:50 100.00% [idle{idle: cpu52}]
11 root 155 ki31 0B 1408K CPU54 54 249:49 100.00% [idle{idle: cpu54}]
11 root 155 ki31 0B 1408K CPU38 38 249:48 100.00% [idle{idle: cpu38}]
11 root 155 ki31 0B 1408K CPU79 79 249:47 100.00% [idle{idle: cpu79}]
11 root 155 ki31 0B 1408K CPU67 67 249:46 100.00% [idle{idle: cpu67}]
11 root 155 ki31 0B 1408K CPU62 62 249:44 100.00% [idle{idle: cpu62}]
11 root 155 ki31 0B 1408K CPU73 73 249:44 100.00% [idle{idle: cpu73}]
11 root 155 ki31 0B 1408K CPU87 87 249:43 100.00% [idle{idle: cpu87}]
11 root 155 ki31 0B 1408K CPU51 51 249:41 100.00% [idle{idle: cpu51}]
11 root 155 ki31 0B 1408K CPU40 40 249:40 100.00% [idle{idle: cpu40}]
11 root 155 ki31 0B 1408K CPU83 83 249:34 100.00% [idle{idle: cpu83}]
11 root 155 ki31 0B 1408K CPU75 75 249:34 100.00% [idle{idle: cpu75}]
11 root 155 ki31 0B 1408K CPU86 86 249:30 100.00% [idle{idle: cpu86}]
11 root 155 ki31 0B 1408K CPU22 22 249:30 100.00% [idle{idle: cpu22}]
11 root 155 ki31 0B 1408K CPU70 70 249:29 100.00% [idle{idle: cpu70}]
11 root 155 ki31 0B 1408K CPU65 65 249:06 100.00% [idle{idle: cpu65}]
11 root 155 ki31 0B 1408K CPU78 78 249:50 99.99% [idle{idle: cpu78}]
11 root 155 ki31 0B 1408K CPU77 77 249:36 99.99% [idle{idle: cpu77}]
11 root 155 ki31 0B 1408K CPU47 47 249:46 99.99% [idle{idle: cpu47}]
11 root 155 ki31 0B 1408K CPU9 9 250:03 99.99% [idle{idle: cpu9}]
11 root 155 ki31 0B 1408K CPU8 8 249:58 99.97% [idle{idle: cpu8}]
11 root 155 ki31 0B 1408K RUN 85 249:33 99.89% [idle{idle: cpu85}]
11 root 155 ki31 0B 1408K CPU7 7 250:02 99.88% [idle{idle: cpu7}]
11 root 155 ki31 0B 1408K CPU1 1 250:03 99.88% [idle{idle: cpu1}]
11 root 155 ki31 0B 1408K CPU6 6 250:00 99.87% [idle{idle: cpu6}]
11 root 155 ki31 0B 1408K CPU74 74 249:51 99.81% [idle{idle: cpu74}]
11 root 155 ki31 0B 1408K CPU69 69 249:42 99.81% [idle{idle: cpu69}]
11 root 155 ki31 0B 1408K CPU71 71 249:33 99.81% [idle{idle: cpu71}]
11 root 155 ki31 0B 1408K CPU58 58 250:00 99.81% [idle{idle: cpu58}]
11 root 155 ki31 0B 1408K CPU32 32 249:58 99.74% [idle{idle: cpu32}]
11 root 155 ki31 0B 1408K CPU60 60 249:40 99.62% [idle{idle: cpu60}]
11 root 155 ki31 0B 1408K CPU61 61 249:52 99.62% [idle{idle: cpu61}]
11 root 155 ki31 0B 1408K CPU64 64 249:38 99.61% [idle{idle: cpu64}]
11 root 155 ki31 0B 1408K CPU21 21 250:03 99.57% [idle{idle: cpu21}]
11 root 155 ki31 0B 1408K CPU34 34 245:48 99.56% [idle{idle: cpu34}]
11 root 155 ki31 0B 1408K CPU12 12 249:58 99.47% [idle{idle: cpu12}]
11 root 155 ki31 0B 1408K CPU13 13 250:00 99.41% [idle{idle: cpu13}]
11 root 155 ki31 0B 1408K CPU44 44 249:57 99.32% [idle{idle: cpu44}]
11 root 155 ki31 0B 1408K CPU48 48 249:43 99.29% [idle{idle: cpu48}]
11 root 155 ki31 0B 1408K CPU0 0 246:58 99.20% [idle{idle: cpu0}]
11 root 155 ki31 0B 1408K CPU84 84 249:24 99.01% [idle{idle: cpu84}]
11 root 155 ki31 0B 1408K CPU55 55 249:51 98.67% [idle{idle: cpu55}]
11 root 155 ki31 0B 1408K CPU66 66 249:30 97.60% [idle{idle: cpu66}]
11 root 155 ki31 0B 1408K CPU57 57 249:56 97.24% [idle{idle: cpu57}]
0 root -76 - 0B 7312K CPU20 20 0:18 96.78% [kernel{if_io_tqg_20}]
11 root 155 ki31 0B 1408K CPU26 26 249:55 95.12% [idle{idle: cpu26}]
11 root 155 ki31 0B 1408K CPU4 4 249:54 95.11% [idle{idle: cpu4}]
11 root 155 ki31 0B 1408K CPU59 59 249:59 95.02% [idle{idle: cpu59}]
11 root 155 ki31 0B 1408K CPU5 5 250:04 95.01% [idle{idle: cpu5}]
11 root 155 ki31 0B 1408K CPU19 19 250:02 95.00% [idle{idle: cpu19}]
11 root 155 ki31 0B 1408K CPU31 31 250:00 94.96% [idle{idle: cpu31}]
11 root 155 ki31 0B 1408K CPU30 30 249:50 94.93% [idle{idle: cpu30}]
11 root 155 ki31 0B 1408K CPU72 72 249:55 94.68% [idle{idle: cpu72}]
11 root 155 ki31 0B 1408K CPU24 24 238:00 94.67% [idle{idle: cpu24}]
11 root 155 ki31 0B 1408K CPU28 28 249:57 94.02% [idle{idle: cpu28}]
11 root 155 ki31 0B 1408K CPU10 10 249:58 93.92% [idle{idle: cpu10}]
11 root 155 ki31 0B 1408K CPU63 63 249:42 93.65% [idle{idle: cpu63}]
-
Ha, yup, there you go. You need to test through it with multiple streams to have any idea what it's capable of.
-
@stephenw10 Then why, on Windows, can I run a single stream and get 9+ Gbit bidirectionally? Exact same machine.
-
In iperf3? With a firewall running?
pfSense is specifically not optimised as a TCP endpoint. It's set up to forward packets.
What you're seeing there is pretty much exactly what I'd expect if you're running iperf3 on pfSense. In fact, it's better than I would expect.
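To measure what the firewall can actually forward, run the iperf3 endpoints on two hosts on opposite sides of pfSense rather than on pfSense itself. A sketch, assuming two placeholder hosts on different subnets routed through the firewall:

```shell
# Host A on the LAN side (e.g. 192.168.1.10) runs the server:
iperf3 -s

# Host B on a different subnet (e.g. 10.0.0.20) runs the client,
# so every packet must be forwarded through pfSense:
iperf3 -c 192.168.1.10 -P 4 -t 30
```

With this layout pfSense only forwards packets, which is the workload it is optimised for, rather than terminating the TCP streams itself.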
Steve
-
@stephenw10 The crappy thing is, I should have just left it as a VM. There's really no advantage to running on hardware vs a VM, since my speeds in a VM were no different.
-
Certainly pfSense is never going to use anywhere near 88 CPU cores. So in that respect you would get far better use from the hardware running it as a VM alongside other VMs.
It still looks like you're testing it wrong though.
Steve
-
@stephenw10 Oddly enough, I discovered the issue this AM.
Using UPnP for the device to negotiate to the outside world just works.
Manual NAT/port mapping seems to hit some weird packet limit, maybe in the software. Users couldn't get more than 20-25 Mbit streams going; now they're running at full speed.
-
Like in pfSense itself? Or on some upstream device?
Is this all inbound traffic then?
This is all new information.....
Steve
-
@stephenw10 So instead of NATing a port internally, I use UPnP and just let it do its thing. That cleared up my issue; no idea why.
-
To be clear, you're talking about inbound traffic and some server opening ports via UPnP?
UPnP does nothing with outbound traffic.
Steve
-
@stephenw10 It solved one issue, but now I need to find another. It's not related to the NIC, though.