Netgate Discussion Forum

    Performance tests of pfSense 2.1 vs. 2.2 and different NIC types on VMware ESXi

    Virtualization
VFrontDe:

      I wanted to find out if it's worth trying to get the native VMware supplied vmxnet3 driver to work in pfSense 2.2, i.e. how it compares to the FreeBSD builtin vmxnet3 driver and the e1000 driver. So I did some performance tests …

      The setup: I'm using a pfSense VM on a test ESXi host (version 5.5 Update 2) to connect a host internal network (using private IPv4 addresses) to the Internet (using one public IPv4 address), so I'm using NAT for IPv4. The same box also acts as an IPv6 router, but all tests were done using IPv4 connections only. On the private network I have a Windows 8.1 VM and I used the browser based http://www.speedtest.net to measure download and upload speed from/to selected remote servers.
I am aware that there are better ways to test network performance, but I wanted something that is very easy to set up and use. To keep the results comparable I ran all tests against the same remote server in a narrow time window of about 20 minutes, repeated each test at least 3 times, and present average values here. After all, I only wanted to get a feeling for the magnitude. These are my results:
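(For a more repeatable measurement than a browser speed test, iperf between two hosts on opposite sides of the firewall is the usual tool, and later posts in this thread use exactly that. A minimal sketch, assuming iperf2 is installed on both ends and 192.168.1.10 stands in for the server's address:)

```shell
# On the receiving host (placeholder address 192.168.1.10):
iperf -s

# On the sending host: a 20-second TCP test, then the same test with -r
# to also measure the reverse direction:
iperf -c 192.168.1.10 -t 20
iperf -c 192.168.1.10 -t 20 -r
```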

      pfSense 2.1.5 with VMware vmxnet3 driver:
      Down 750 Mbit/s, Up 923 Mbit/s

      pfSense 2.1.5 with e1000:
      Down 683 Mbit/s, Up 700 Mbit/s

      pfSense 2.2 with FreeBSD builtin vmxnet3 driver:
      Down 406 Mbit/s, Up 1.3 Mbit/s (WTF?!)

      pfSense 2.2 with e1000:
      Down 673 Mbit/s, Up 851 Mbit/s

      pfSense 2.1.5 and 2.2 were using the exact same hardware (1 vCPU/1 GB RAM) and pfSense configuration in all test cases. No tuning was applied.

      My conclusions:
The FreeBSD builtin vmxnet3 driver of pfSense 2.2 does not give you optimal performance (I wonder what causes the insanely slow upload?). The cause is probably not pfSense itself but really the driver, because with e1000 there is barely any difference between pfSense 2.1 and 2.2. I tried tinkering with the tunables of the vmxnet3 driver (described here: https://www.freebsd.org/cgi/man.cgi?query=vmx), but that made things rather worse than better.
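(For reference, the vmx(4) tunables mentioned are loader-time settings, set in /boot/loader.conf.local and applied at the next boot. A sketch with purely illustrative values; verify the tunable names against the vmx(4) man page for your FreeBSD version:)

```shell
# /boot/loader.conf.local -- vmx(4) loader tunables (values illustrative;
# in the tests above, changing these made things worse, not better)
hw.vmx.txndesc="512"    # transmit descriptors per queue
hw.vmx.rxndesc="256"    # receive descriptors per queue
```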
      I wish I could get the native VMware vmxnet3 driver to work, but for now e1000 seems to be the best choice for pfSense 2.2 running on ESXi.

      Any comments, suggestions? Did someone else already run some performance tests?

      Thanks
      Andreas

      Andreas Peetz
      VMware Front Experience Blog | Twitter | Google+ | vExpert 2012-2015

heper:

what version of ESXi?

VFrontDe:

          5.5 U2. I updated the post and added that.

heper:

i haven't updated my production machines to use the vmxnet drivers (they're still on legacy for now).

but i know that pfSense's own infrastructure runs partly on esxi and the devs use it a lot for testing purposes ….
            so i'm guessing there is a good reason for this.

Are the logs showing anything useful? What about cpu usage / mbuf / ram?
            Does anything seem wrong with the resources when watching from vsphere client/vcenter ?

pfsense 2.2 has a multithreaded pf now .... so more cores help now (although i don't think there is a known issue with degraded single-thread performance)

VFrontDe:

              Sure, the one CPU maxed out in all test cases … I can try 2.2 with more cores, but with 2.1 this did not make a difference for me in the past.
              mbuf / RAM usage was comparable in all cases and never reached critical thresholds.
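(For anyone reproducing this, those figures can be read from the pfSense shell with the standard FreeBSD tools; a quick sketch:)

```shell
top -aSH                  # per-thread CPU usage inside the VM
netstat -m | grep mbuf    # mbuf/cluster usage vs. configured limits
vmstat -i                 # interrupt rates per device (vmx0, em0, ...)
```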

heper:

                maxed out in vsphere or inside VM  or both ?

heper:

                  i did some tests on esxi 5.1.0:

this is an iperf test from a Debian VM –> Win 7 VM; the pfSense 2.2-x64 VM (1 core / 1 GB RAM) is the router between both, doing NAT

also i think the win7 VM was the bottleneck in my quick'n'dirty setup ..... it hogged more cpu time than the other VMs involved.

[Attached images: VMpf2.2_iperf_graph.png, VMpf2.2_iperf_1core.png]

VFrontDe:

                    @heper:

                    i did some tests on esxi 5.1.0:

this is an iperf test from a Debian VM –> Win 7 VM; the pfSense 2.2-x64 VM (1 core / 1 GB RAM) is the router between both, doing NAT

also i think the win7 VM was the bottleneck in my quick'n'dirty setup ..... it hogged more cpu time than the other VMs involved.

I will try to do more tests and look closer at CPU usage. The maxing out I saw was on the ESXi host; I did not look at the load inside pfSense.

In this test with iperf … were all VMs running on the same host, using host-internal network connections only? If yes, can you try cross-host connections involving a physical wire?

heper:

yes, all VMs were running on the same host. if i find some time i could set up a different VM on a different host, but then i'd just hit wire speed (i hit wire speed with the legacy e1000)

today i did the same test as yesterday but replaced the win7 VM with another debian VM and hit 1.7 Gbit/s …. still kind of slow.
the esxi host is hitting 100% cpu usage

                      perhaps the devs could help us with useful tuning tips for the vmxnet3 nics ?

johnpoz (Layer 8 Global Moderator):

                        pfSense 2.2 with FreeBSD builtin vmxnet3 driver:
                        Down 406 Mbit/s, Up 1.3 Mbit/s (WTF?!)

Clearly that is not right.. I can tell you for a fact that is wrong, since that is what I am running, and I get 10+ Mbps from every box on my network, even wireless. Running on ESXi 5.5 build 2143827, which is the patch after Update 2… About ready to update to build 2403361, which came out on 1/27.

Come on.. clearly something is wrong if you're only seeing 1.3 Mbps, beyond one driver just not performing as well as the other, etc.. I don't have the ISP connection you do, so I cannot test at those speeds. I pay for 50/10; I would have to connect something to the WAN side to test gig speeds, etc., or could fire up something on my other LAN segment. But clearly that number is way off!

                        An intelligent man is sometimes forced to be drunk to spend time with his fools
                        If you get confused: Listen to the Music Play
                        Please don't Chat/PM me for help, unless mod related
                        SG-4860 24.11 | Lab VMs 2.7.2, 24.11

cogumel0:

                          I'll be performing the same tests, so we have something to compare against.

On a different note, could some of this have to do with the lack of the other VMware Tools components? Especially if the CPU is being maxed out and there is obviously a lot going through the RAM/swap?

kejianshi:

I'm testing in a virtualized environment with way more traffic than my old gigabit switch should have on it, from a virtualized linux machine through pfsense 2.2.

It's no slower than before. I'm getting about 750-800 up and down, and the rest of the gigabit connection is occupied with actual traffic.

Also, you can push that test button 10 times on 10 different servers and get 10 different speeds on each server – a really crap way to benchmark pfsense.

I get widely varying results with speedtest.net depending on the server, my OS, my browser, random unexplainable differences when testing a few times with nothing changed, and which phase the moon is in….

cogumel0:

Agree with kejianshi. I'll be using iPerf for my tests between two local machines and forcing the traffic through pfSense 2.2 / pfSense 2.1.5.

                              If there's any particular setup you'd like me to go for, let me know. Currently I'm thinking of going with the following:

                              pfSense 2.2 with 2 LANs, completely default config on ESXi 5.5U2

                              Ubuntu VM (LAN1) > pfSense 2.2 > Ubuntu VM (LAN2) – all on same physical host

                              I can also perform the tests as...

                              Ubuntu/Windows physical desktop (LAN1) > 1GB switch > 1GB switch > pfSense 2.2 > 1GB switch > 1GB switch > Ubuntu/Windows physical laptop (LAN2)

Though the speeds on physical hardware should be slightly slower, as the NIC on the laptop was about 150 Mbit/s slower than the desktop last time I tested it with iPerf...

                              Let me know if there's any changes you'd like me to make on the setup.

kejianshi:

                                Ubuntu VM (LAN1) > pfSense 2.2 > Ubuntu VM (LAN2) – all on same physical host

                                sounds good.

heper:

                                  my tests were with NAT enabled … haven't tested without NAT

kejianshi:

                                    Me either and I have no need to.

cogumel0:

                                      Ok, so here are my test results.

                                      These were carried out on ESXi 5.5U2.

                                      The setup is as follows: 3 VMs, all on the same host, 2 running Ubuntu 14.04, 1 running pfSense.

                                      pfSense VM has 1 WAN and 2 LANs, 2 CPUs with 2 cores each and 8GB ram.

                                      Ubuntu1 has 4 CPUs, 1 NIC on LAN1 and 4GB ram, running from LiveCD

                                      Ubuntu2 has 4 CPUs, 1 NIC on LAN2 and 4GB ram, running from LiveCD.

The only changes made in pfSense were to enable DHCP on OPT1 and to set up any-to-any IPv4 and IPv6 rules on OPT1 (to mirror those on LAN1).

                                      Tests were done using iPerf and always from Ubuntu1 to Ubuntu2 (LAN1 to OPT1 in pfSense).

There is only one other VM running on this physical host (a live pfSense box, but traffic was pretty much non-existent during the test and no packages are installed, so its effect on the test results is negligible).

                                      The test results are the average of running iPerf 5x on each configuration.
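(As an aside, averaging repeated runs can be scripted. A small sketch: the awk filter below averages the per-run bandwidth column of iperf2 summary lines. The three lines here are canned sample output, not real results from these tests; on a live run you would pipe the actual iperf output through the same filter.)

```shell
# Average the Mbits/sec column of iperf2 summary lines.
# The sample lines below are canned stand-ins for real iperf output.
printf '%s\n' \
  '[  3]  0.0-10.0 sec  1.74 GBytes  1494 Mbits/sec' \
  '[  3]  0.0-10.0 sec  1.81 GBytes  1556 Mbits/sec' \
  '[  3]  0.0-10.0 sec  1.77 GBytes  1520 Mbits/sec' |
awk '/Mbits\/sec/ { sum += $(NF-1); n++ }
     END { printf "avg %.0f Mbits/sec\n", sum / n }'
# -> avg 1523 Mbits/sec
```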

                                      Test results:

1.546 Gbit/s - pfSense 2.1.5 E1000, pf enabled
2.512 Gbit/s - pfSense 2.1.5 E1000, pf disabled
1.91 Gbit/s  - pfSense 2.1.5 VMX3, pf enabled
2.554 Gbit/s - pfSense 2.1.5 VMX3, pf disabled

1.474 Gbit/s - pfSense 2.2 E1000, pf enabled
2.334 Gbit/s - pfSense 2.2 E1000, pf disabled
1.818 Gbit/s - pfSense 2.2 VMX3, pf enabled
2.732 Gbit/s - pfSense 2.2 VMX3, pf disabled
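(For reference, toggling pf on and off for tests like these is normally done from the pfSense shell with pfctl; whether that exact method was used here isn't stated:)

```shell
pfctl -d            # disable the pf packet filter (filtering/NAT bypassed)
pfctl -e            # re-enable pf
pfctl -s info       # first line reports "Status: Enabled/Disabled"
```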

                                      Based on these results, I think it's fair to draw the following conclusions:

• With pf enabled, performance is always slower in pfSense 2.2
• Bigger gains are achieved in pfSense 2.2 with pf disabled
• While there is a small decrease in speed with VMX3 in pfSense 2.2 compared to pfSense 2.1.5 with pf enabled, I don't think this has to do with the driver so much as with pfSense itself, as the same speed decrease was seen with E1000.

In short, I don't think the VMware Tools VMX3 driver will provide any performance increase over the one from FreeBSD 10.1 (though it's impossible to say without testing it, you never know…). An interesting thing to point out, though, is that out of the box pfSense 2.2 performs worse than pfSense 2.1.5 with pf enabled (though better with pf disabled when using VMX3).

                                      Are there any configuration changes I can do to attempt to increase the performance in 2.2?
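(One experiment worth trying, purely a suggestion and not something tested in this thread: disable the hardware offloads, which virtual NIC drivers have historically handled badly. pfSense exposes checkboxes for this under System > Advanced > Networking; the equivalent one-off test from the shell, assuming the interface is named vmx0, would be:)

```shell
# Temporarily turn off checksum offload, TSO and LRO on the vmx interface.
# The interface name vmx0 is an assumption; check with ifconfig first.
ifconfig vmx0 -txcsum -rxcsum -tso -lro
```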

heper:

so basically there is "only" a 300-400 Mbit/s increase with the vmxnet drivers.

i was hoping bsd 10.1 with the new pf would handle lots more …. in my tests, increasing virtual cores didn't make much of a difference.

perhaps it needs serious tuning to get to speeds between 5-8 Gbit/s?

kejianshi:

                                          Don't suppose you could do the same test on bare hardware?

cogumel0:

                                            @kejianshi:

                                            Don't suppose you could do the same test on bare hardware?

Well, not on a server, but I could put pfSense on a desktop. But how would you then want me to carry out the tests? The speed would be capped by the NIC at 1 Gbit/s…
