Netgate Discussion Forum

    Performance tests of pfSense 2.1 vs. 2.2 and different NIC types on VMware ESXi

    Virtualization
heper:

Maxed out in vSphere, or inside the VM, or both?

heper:

I did some tests on ESXi 5.1.0:

This is an iperf test from a Debian VM -> Win 7 VM; a pfSense 2.2-x64 VM (1 core / 1 GB RAM) is the router between both, doing NAT.

Also, I think the Win 7 VM was the bottleneck in my quick 'n' dirty setup... it hogged more CPU time than the other VMs involved.

[Attachments: VMpf2.2_iperf_graph.png, VMpf2.2_iperf_1core.png]
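For reference, a test like this is usually run with iperf 2.x along these lines (a sketch; the exact flags used here aren't shown in the thread, and the address is a placeholder):

    # On the receiving VM (the Win 7 guest here), start an iperf server:
    iperf -s

    # On the sending VM (the Debian guest), run the client against the
    # receiver's address on the far side of pfSense (placeholder IP):
    iperf -c 192.168.2.10 -t 30 -i 5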

VFrontDe:

@heper:

    I did some tests on ESXi 5.1.0: this is an iperf test from a Debian VM -> Win 7 VM; a pfSense 2.2-x64 VM (1 core / 1 GB RAM) is the router between both, doing NAT. Also, I think the Win 7 VM was the bottleneck in my quick 'n' dirty setup... it hogged more CPU time than the other VMs involved.

I will try to do more tests and look closer at CPU usage. The maxing out I saw was on the ESXi host; I did not look at the load inside pfSense.

In this test with iperf... were all VMs running on the same host, using host-internal network connections only? If yes, can you try cross-host connections involving a physical wire?

Andreas Peetz
VMware Front Experience Blog | Twitter | Google+ | vExpert 2012-2015

heper:

Yes, all VMs were running on the same host. If I find some time I could set up a different VM on a different host, but then I'd just hit wire speed (I hit wire speed even with the legacy e1000).

Today I did the same test as yesterday but replaced the Win 7 VM with another Debian VM and hit 1.7 Gbit... still kind of slow. The ESXi host is hitting 100% CPU usage.

Perhaps the devs could help us with useful tuning tips for the vmxnet3 NICs?
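For anyone wanting to experiment, the usual first knobs on a FreeBSD-based guest are the NIC offload features (which can interact badly with pf) and the mbuf limits. A sketch of generic FreeBSD tuning, not dev-blessed pfSense settings:

    # Disable hardware offloads on the vmxnet3 interface; pfSense exposes the
    # same toggles under System > Advanced > Networking:
    ifconfig vmx0 -txcsum -rxcsum -tso -lro

    # Raise the mbuf cluster limit; add to /boot/loader.conf.local and reboot:
    echo 'kern.ipc.nmbclusters="131072"' >> /boot/loader.conf.local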

johnpoz:

    pfSense 2.2 with FreeBSD builtin vmxnet3 driver:
    Down 406 Mbit/s, Up 1.3 Mbit/s (WTF?!)

Clearly that is not right... I can tell you for a fact that is wrong, since that is what I am running, and I get 10+ Mbps from every box on my network, even wireless. Running on ESXi 5.5 build 2143827, which is the patch after Update 2... about ready to update to 2403361, which came out on 1/27.

Come on... clearly something is wrong if you're only seeing 1.3 Mbps, beyond one driver not performing as well as another. I don't have the ISP connection you do, so I cannot test at those speeds. But I pay for 50/10; I would have to connect something to the WAN side to test gig speeds, or I could fire up something on my other LAN segment... But clearly that number is way off!


cogumel0:

I'll be performing the same tests, so we have something to compare against.

On a different note, could some of this have to do with the lack of the other VMware Tools components? Especially if the CPU is being maxed out and there is obviously a lot going through the RAM/swap?

kejianshi:

I'm testing in a virtualized environment, with way more traffic than my old gigabit switch should have on it, from a virtualized Linux machine through pfSense 2.2.

It's no slower than before. I'm getting about 750-800 up and down, and the rest of the gigabit connection is occupied with actual traffic.

Also, you can push that test button 10 times on 10 different servers and get 10 different speeds on each server - a really crap way to benchmark pfSense.

I get widely varying results with speedtest.net depending on the server, my OS, my browser, random unexplainable differences when tested a few times with nothing changed, and which phase the moon is in...

cogumel0:

Agree with kejianshi. I'll be using iperf for my tests between two local machines and forcing the traffic through pfSense 2.2 / pfSense 2.1.5.

If there's any particular setup you'd like me to go for, let me know. Currently I'm thinking of going with the following:

pfSense 2.2 with 2 LANs, completely default config, on ESXi 5.5U2

Ubuntu VM (LAN1) > pfSense 2.2 > Ubuntu VM (LAN2) - all on the same physical host

I can also perform the tests as...

Ubuntu/Windows physical desktop (LAN1) > 1Gb switch > 1Gb switch > pfSense 2.2 > 1Gb switch > 1Gb switch > Ubuntu/Windows physical laptop (LAN2)

Though the speeds on the physical setup should be slightly slower, as the laptop's NIC measured about 150 Mbit/s slower than the desktop's the last time I tested it with iperf...

Let me know if there are any changes you'd like me to make to the setup.

kejianshi:

    Ubuntu VM (LAN1) > pfSense 2.2 > Ubuntu VM (LAN2) - all on the same physical host

Sounds good.

heper:

My tests were with NAT enabled... haven't tested without NAT.

kejianshi:

Me neither, and I have no need to.

cogumel0:

OK, so here are my test results.

These were carried out on ESXi 5.5U2.

The setup is as follows: 3 VMs, all on the same host, 2 running Ubuntu 14.04, 1 running pfSense.

The pfSense VM has 1 WAN and 2 LANs, 2 CPUs with 2 cores each, and 8 GB RAM.

Ubuntu1 has 4 CPUs, 1 NIC on LAN1, and 4 GB RAM, running from a LiveCD.

Ubuntu2 has 4 CPUs, 1 NIC on LAN2, and 4 GB RAM, running from a LiveCD.

The only changes made in pfSense were to enable DHCP on OPT1 and to set up any-to-any IPv4 and IPv6 rules on OPT1 (to mirror those on LAN1).

Tests were done using iperf, always from Ubuntu1 to Ubuntu2 (LAN1 to OPT1 in pfSense).

There is only one other VM running on this physical host (a live pfSense box, but its traffic was pretty much non-existent during the test and it has no packages installed, so its effect on the results is negligible).

Each figure below is the average of running iperf 5 times on that configuration.
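A run like that is easy to script; a minimal sketch of the kind of loop that produces such an average (10.0.2.10 is a placeholder for the far-side Ubuntu VM, which is already running "iperf -s"):

    # Run the client 5 times and average the reported bandwidth (iperf 2.x output):
    for i in 1 2 3 4 5; do
        iperf -c 10.0.2.10 -t 30 -f g
    done | awk '/Gbits\/sec/ { sum += $(NF-1); n++ } END { print sum/n, "Gbits/sec avg" }'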

Test results:

1.546 GB - pfSense 2.1.5, E1000, pf enabled
2.512 GB - pfSense 2.1.5, E1000, pf disabled
1.910 GB - pfSense 2.1.5, VMX3, pf enabled
2.554 GB - pfSense 2.1.5, VMX3, pf disabled

1.474 GB - pfSense 2.2, E1000, pf enabled
2.334 GB - pfSense 2.2, E1000, pf disabled
1.818 GB - pfSense 2.2, VMX3, pf enabled
2.732 GB - pfSense 2.2, VMX3, pf disabled

Based on these results, I think it's fair to draw the following conclusions:

• With pf enabled, pfSense 2.2 is always slower than 2.1.5
• The biggest gain in pfSense 2.2 comes with pf disabled and the VMX3 driver
• While there is a small decrease in speed with VMX3 in pfSense 2.2 compared to pfSense 2.1.5 with pf enabled, I don't think this is down to the driver so much as to pfSense itself, since the same decrease shows up with the E1000.

In short, I don't think the VMware Tools VMX3 driver will perform any better than the one built into FreeBSD 10.1 (though it's impossible to say without testing it, you never know...). An interesting thing to point out is that out of the box, pfSense 2.2 performs worse than pfSense 2.1.5 with pf enabled (though better with pf disabled when using VMX3).

Are there any configuration changes I can make to try to increase the performance in 2.2?
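The thread doesn't show how pf was toggled for these runs, but the standard way from a pfSense shell is pfctl; note that disabling pf on pfSense also disables NAT:

    # Disable the packet filter for a test run, then re-enable it:
    pfctl -d
    pfctl -e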

heper:

So basically there is "only" a 300-400 Mbit increase with the vmxnet drivers.

I was hoping FreeBSD 10.1 with the new pf would handle lots more... in my tests, increasing the number of virtual cores didn't make much of a difference.

Perhaps it needs serious tuning to get to speeds between 5-8 Gbit?
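Before tuning, it helps to see where the CPU time is actually going while iperf runs; on the pfSense console, a thread-level view shows whether the hot spot is the NIC driver, pf, or something else:

    # Show per-thread CPU usage, including kernel threads:
    top -aSH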

kejianshi:

Don't suppose you could do the same test on bare hardware?

cogumel0:

@kejianshi:

    Don't suppose you could do the same test on bare hardware?

Well, not on a server, but I could put pfSense on a desktop. But how would you then want me to carry out the tests? The speed would be capped by the NIC at 1 Gbit...

kejianshi:

That's OK - just a simple test that shows throughput and CPU load would still be informative.

Better if you had a 10Gb LAN, of course, but it should still be nice to see.

cogumel0:

Just for completeness, I took the 2 Ubuntu VMs I had, put them on the same LAN, and repeated the iperf test:

15.8GB/s

I wouldn't expect to get speeds like these with pfSense in the middle, even with pf disabled, but it shows what the server can handle.

Just FYI, it is an HP DL380 G6 with 2x 6-core Xeons and 72 GB RAM.

kejianshi:

Well - if the two Ubuntu machines were on the same LAN (same switch), then the traffic would do an end run around pfSense completely. (I'm sure you already know.)

I'm just stating the obvious for the few people it might not be obvious to.

cogumel0:

Correct. I just added that to show what the server can handle, and what the difference is between having the traffic go through pfSense and not.

Still... I expected pfSense not to slow things down as much, especially with pf disabled. That's roughly a 6x drop in throughput (15.8 down to ~2.7) from having the traffic go through pfSense. I'm sure the ESXi host has something to do with it too, since there are two different virtual switches involved, but...

Supermule:

This is 2.1.5 on E1000....
