    Netgate Discussion Forum
    Performance tests of pfSense 2.1 vs. 2.2 and different NIC types on VMware ESXi

    Virtualization
      VFrontDe last edited by

      I wanted to find out if it's worth trying to get the native VMware supplied vmxnet3 driver to work in pfSense 2.2, i.e. how it compares to the FreeBSD builtin vmxnet3 driver and the e1000 driver. So I did some performance tests …

      The setup: I'm using a pfSense VM on a test ESXi host (version 5.5 Update 2) to connect a host-internal network (using private IPv4 addresses) to the Internet (using one public IPv4 address), so I'm using NAT for IPv4. The same box also acts as an IPv6 router, but all tests were done using IPv4 connections only. On the private network I have a Windows 8.1 VM, and I used the browser-based http://www.speedtest.net to measure download and upload speed from/to selected remote servers.
      I am aware that there are better ways to test network performance, but I wanted something that is very easy to set up and use (a rough iperf-based alternative is sketched after the results below). To keep the results comparable I did all tests against the same remote server and in a narrow time window of about 20 minutes, and I repeated every test at least 3 times and present average values here. After all, I only wanted to get a feeling for the magnitude, so these are my results:

      pfSense 2.1.5 with VMware vmxnet3 driver:
      Down 750 Mbit/s, Up 923 Mbit/s

      pfSense 2.1.5 with e1000:
      Down 683 Mbit/s, Up 700 Mbit/s

      pfSense 2.2 with FreeBSD builtin vmxnet3 driver:
      Down 406 Mbit/s, Up 1.3 Mbit/s (WTF?!)

      pfSense 2.2 with e1000:
      Down 673 Mbit/s, Up 851 Mbit/s

      pfSense 2.1.5 and 2.2 were using the exact same hardware (1 vCPU/1 GB RAM) and pfSense configuration in all test cases. No tuning was applied.
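
      For reference, a more controlled alternative to the browser speed test would be an iperf run between a box behind pfSense and one on the WAN side (iperf is what later posters use). A minimal sketch, assuming iperf 2 and a placeholder server address:

      # on the box on the far side of pfSense
      iperf -s
      # on the client behind pfSense: 30-second run, report every 5 s,
      # then again with -r for the reverse (upload) direction
      iperf -c 192.0.2.10 -t 30 -i 5
      iperf -c 192.0.2.10 -t 30 -i 5 -r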

      My conclusions:
      The FreeBSD builtin vmxnet3 driver of pfSense 2.2 does not give you optimal performance (I wonder what causes the insanely slow upload?). The reason is probably not pfSense itself but really the driver, because with e1000 there is barely a difference between pfSense 2.1 and 2.2. I tried tinkering with the tunables of the vmxnet3 driver (described here: https://www.freebsd.org/cgi/man.cgi?query=vmx), but that made it rather worse than better.
      I wish I could get the native VMware vmxnet3 driver to work, but for now e1000 seems to be the best choice for pfSense 2.2 running on ESXi.
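
      A minimal sketch of what tuning those vmx(4) loader tunables looks like in /boot/loader.conf.local (the names are from the man page linked above; the values here are examples only, not a recommendation):

      hw.vmx.txnqueue="8"    # max number of TX queues per vmx interface
      hw.vmx.rxnqueue="8"    # max number of RX queues per vmx interface
      hw.vmx.txndesc="512"   # TX descriptors per queue
      hw.vmx.rxndesc="512"   # RX descriptors per RX ring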

      Any comments, suggestions? Did someone else already run some performance tests?

      Thanks
      Andreas

        heper last edited by

        What version of ESXi?

          VFrontDe last edited by

          5.5 U2. I updated the post and added that.

            heper last edited by

            I haven't updated my production machines to use the vmxnet drivers (they're still on legacy for now).

            But I know that pfSense's own infrastructure runs partly on ESXi and the devs use it a lot for testing purposes…
            so I'm guessing there is a good reason for this.

            Are the logs showing anything useful? What about CPU usage / mbuf / RAM?
            Does anything seem wrong with the resources when watching from the vSphere client/vCenter?

            pfSense 2.2 has a multithreaded pf now... so more cores help now (although I don't think there is a known issue with degraded single-thread performance).
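
            For anyone wanting to check those things from a shell on the pfSense VM, a few stock FreeBSD commands cover it (just a sketch, nothing pfSense-specific):

            top -aSH                      # per-thread CPU usage; shows whether a single core is pegged
            netstat -m                    # mbuf cluster usage and any denials
            vmstat -i                     # interrupt rates per device
            sysctl kern.ipc.nmbclusters   # configured mbuf cluster limit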

              VFrontDe last edited by

              Sure, the one CPU was maxed out in all test cases… I can try 2.2 with more cores, but with 2.1 this did not make a difference for me in the past.
              mbuf/RAM usage was comparable in all cases and never reached critical thresholds.

                heper last edited by

                Maxed out in vSphere, inside the VM, or both?

                  heper last edited by

                  I did some tests on ESXi 5.1.0:

                  This is an iperf test from a Debian VM --> Win 7 VM; a pfSense 2.2-x64 / 1 core / 1 GB RAM VM is the router between both, doing NAT.

                  Also, I think the Win7 VM was the bottleneck in my quick-n-dirty setup... it hogged more CPU time than the other VMs involved.




                    VFrontDe last edited by

                    @heper:

                    I did some tests on ESXi 5.1.0:

                    This is an iperf test from a Debian VM --> Win 7 VM; a pfSense 2.2-x64 / 1 core / 1 GB RAM VM is the router between both, doing NAT.

                    Also, I think the Win7 VM was the bottleneck in my quick-n-dirty setup... it hogged more CPU time than the other VMs involved.

                    I will try to do more tests and look more closely at CPU usage. The maxing out I saw was on the ESXi host; I did not look at the load inside pfSense.

                    In this test with iperf… were all VMs running on the same host, using host-internal network connections only? If yes, can you try cross-host connections involving a physical wire?

                      heper last edited by

                      Yes, all VMs were running on the same host. If I find some time I could set up a different VM on a different host, but then I'd just hit wire speed (I hit wire speed with the legacy e1000).

                      Today I did the same test as yesterday but replaced the Win7 VM with another Debian VM and hit 1.7 Gbit… still kind of slow.
                      The ESXi host is hitting 100% CPU usage.

                      Perhaps the devs could help us with useful tuning tips for the vmxnet3 NICs?

                        johnpoz LAYER 8 Global Moderator last edited by

                        pfSense 2.2 with FreeBSD builtin vmxnet3 driver:
                        Down 406 Mbit/s, Up 1.3 Mbit/s (WTF?!)

                        Clearly that is not right.. I can tell you for a fact that is wrong, since that is what I am running and I get 10+ Mbps from every box on my network, even wireless.  Running on ESXi 5.5 build 2143827, which is the patch after Update 2… About ready to update to 2403361, which came out on 1/27.

                        Come on.. Clearly something is wrong if you're only seeing 1.3 Mbps, beyond one driver not performing as well as another, etc..  I don't have the ISP connection you do, so I cannot test at those speeds.  But I pay for 50/10; I would have to connect something to the WAN side to test gig speeds, etc.  Or I could fire up something on my other LAN segment..  But clearly that number is way off!

                          cogumel0 last edited by

                          I'll be performing the same tests, so we have something to compare against.

                          On a different note, could some of this have to do with the lack of the other VMware Tools components? Especially if the CPU is being maxed out and there is obviously a lot going through the RAM/swap?

                            kejianshi last edited by

                            I'm testing in a virtualized environment, with way more traffic than my old gigabit switch should have on it, from a virtualized Linux machine through pfSense 2.2.

                            It's no slower than before.  I'm getting about 750-800 Mbit/s up and down, and the rest of the gigabit connection is occupied with actual traffic.

                            Also, you can push that test button 10 times on 10 different servers and get 10 different speeds on each server - a really crap way to benchmark pfSense.

                            I get widely varying results with speedtest.net depending on the server, my OS, my browser, random unexplainable differences when testing a few times with nothing changed, and which phase the moon is in…

                              cogumel0 last edited by

                              Agree with kejianshi, I'll be using iPerf for my tests between two local machines and forcing the traffic through pfSense 2.2/pfSense 2.1.5.

                              If there's any particular setup you'd like me to go for, let me know. Currently I'm thinking of going with the following:

                              pfSense 2.2 with 2 LANs, completely default config on ESXi 5.5U2

                              Ubuntu VM (LAN1) > pfSense 2.2 > Ubuntu VM (LAN2) – all on same physical host

                              I can also perform the tests as...

                              Ubuntu/Windows physical desktop (LAN1) > 1Gb switch > 1Gb switch > pfSense 2.2 > 1Gb switch > 1Gb switch > Ubuntu/Windows physical laptop (LAN2)

                              Though the speeds on physical hardware should be slightly slower, as the NIC on the laptop was about 150 Mbit/s slower than the desktop the last time I tested it with iPerf...

                              Let me know if there's any changes you'd like me to make on the setup.
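
                              A quick way to confirm the traffic really crosses pfSense rather than a shared vSwitch segment, run from the LAN1 Ubuntu VM (addresses are placeholders):

                              traceroute -n 192.168.2.10    # first hop should be the pfSense LAN1 gateway
                              ip route get 192.168.2.10     # shows the interface/gateway the kernel will use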

                                kejianshi last edited by

                                Ubuntu VM (LAN1) > pfSense 2.2 > Ubuntu VM (LAN2) – all on same physical host

                                sounds good.

                                  heper last edited by

                                  my tests were with NAT enabled … haven't tested without NAT

                                    kejianshi last edited by

                                    Me neither, and I have no need to.

                                      cogumel0 last edited by

                                      Ok, so here are my test results.

                                      These were carried out on ESXi 5.5U2.

                                      The setup is as follows: 3 VMs, all on the same host, 2 running Ubuntu 14.04, 1 running pfSense.

                                      pfSense VM has 1 WAN and 2 LANs, 2 CPUs with 2 cores each and 8GB ram.

                                      Ubuntu1 has 4 CPUs, 1 NIC on LAN1 and 4GB ram, running from LiveCD

                                      Ubuntu2 has 4 CPUs, 1 NIC on LAN2 and 4GB ram, running from LiveCD.

                                      The only changes made in pfSense were to enable DHCP on OPT1 and to set up any-to-any IPv4 and IPv6 rules on OPT1 (to mirror those on LAN1).

                                      Tests were done using iPerf and always from Ubuntu1 to Ubuntu2 (LAN1 to OPT1 in pfSense).

                                      There is only one other VM running on this physical host (a live pfSense box, but traffic was pretty much non-existent during the test and no packages are installed, so its effect on the test results is negligible).

                                      The test results are the average of running iPerf 5x on each configuration.
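
                                      A sketch of how such a repeated run can be scripted from Ubuntu1 (assuming iperf 2 with "iperf -s" running on Ubuntu2; 10.0.2.10 is a placeholder address, and the last field of iperf's CSV output is the measured bandwidth in bits/s):

                                      #!/bin/sh
                                      for i in 1 2 3 4 5; do
                                          iperf -c 10.0.2.10 -t 10 -y C      # one 10-second run, CSV output
                                      done | awk -F, '{ sum += $NF; n++ } END { printf "average: %.2f Gbit/s\n", sum / n / 1e9 }'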

                                      Test results:

                                      1.546GB - pfSense 2.1.5 E1000 pf enabled
                                      2.512GB - pfSense 2.1.5 E1000 pf disabled
                                      1.91GB  - pfSense 2.1.5 VMX3 pf enabled
                                      2.554GB - pfSense 2.1.5 VMX3 pf disabled

                                      1.474GB - pfSense 2.2 E1000 pf enabled
                                      2.334GB - pfSense 2.2 E1000 pf disabled
                                      1.818GB - pfSense 2.2 VMX3 pf enabled
                                      2.732GB - pfSense 2.2 VMX3 pf disabled

                                      Based on these results, I think it's fair to draw the following conclusions:

                                      • Performance with pf enabled is always slower in pfSense 2.2
                                      • Bigger gains are achieved in pfSense 2.2 with pf disabled
                                      • While there is a small decrease in speed with VMX3 in pfSense 2.2 in comparison to pfSense 2.1.5 with pf enabled, I don't think this has to do with the driver as much as with pfSense itself, as the same speed decrease was noticed when using E1000.

                                      In short, I don't think the VMware Tools VMX3 driver will provide any performance increase over the one built into FreeBSD 10.1 (though it's impossible to say without testing it, you never know…), but an interesting thing to point out is that out of the box pfSense 2.2 performs worse than pfSense 2.1.5 with pf enabled (though better with pf disabled when using VMX3).

                                      Are there any configuration changes I can do to attempt to increase the performance in 2.2?

                                        heper last edited by

                                        So basically there is "only" a 300-400 Mbit increase with the vmxnet drivers.

                                        I was hoping FreeBSD 10.1 with the new pf would handle lots more… In my tests, increasing virtual cores didn't make much of a difference.

                                        Perhaps it needs serious tuning to get to speeds between 5 and 8 Gbit?
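
                                        One possible starting point, taken from general FreeBSD-on-ESXi tuning advice rather than from anything verified in this thread, would be a couple of /boot/loader.conf.local knobs (sketch only):

                                        hw.pci.honor_msi_blacklist="0"   # let FreeBSD use MSI/MSI-X on the VMware virtual chipset
                                        kern.ipc.nmbclusters="131072"    # more mbuf clusters for multi-gigabit forwarding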

                                          kejianshi last edited by

                                          Don't suppose you could do the same test on bare hardware?

                                            cogumel0 last edited by

                                            @kejianshi:

                                            Don't suppose you could do the same test on bare hardware?

                                            Well, not on a server, but I could put pfSense on a desktop. But how would you then want me to carry out the tests? The speed would be capped by the NIC at 1 Gbit/s…

                                              kejianshi last edited by

                                              That's ok - just a simple test that shows throughput and CPU load would still be informative.

                                              Better if you had a 10Gb LAN of course, but it should still be nice to see.

                                                cogumel0 last edited by

                                                Just for completeness, I took the 2 Ubuntu VMs I had and put them on the same LAN and repeated the iPerf test:

                                                15.8GB/s .. ..

                                                Didn't quite expect speeds like this compared to going through pfSense, even with pf disabled, but it shows what the server itself can handle.

                                                Just FYI, it is a Dell DL380 G6, 2x 6-core Xeon, 72GB RAM.

                                                  kejianshi last edited by

                                                  Well - if the two Ubuntu machines were on the same LAN (same switch) then it would do an end run around pfSense completely.  (I'm sure you already know.)

                                                  I'm just stating the obvious for the few people it might not be obvious to.

                                                    cogumel0 last edited by

                                                    Correct, I just added that to show what the server can handle, and what the difference is between having the traffic go through pfSense and not.

                                                    Still… I expected pfSense not to slow things down as much, especially with pf disabled... Throughput drops to roughly a sixth by having the traffic go through pfSense. I'm sure the ESXi host has something to do with it too, since two different virtual switches are involved, but...

                                                      Supermule Banned last edited by

                                                      This is 2.1.5 on E1000….

                                                        cmb last edited by

                                                        @johnpoz:

                                                        pfSense 2.2 with FreeBSD builtin vmxnet3 driver:
                                                        Down 406 Mbit/s, Up 1.3 Mbit/s (WTF?!)

                                                        Clearly that is not right.. I can tell you for fact that is wrong.

                                                        Yeah there's something seriously wrong there that has nothing to do with the type of NIC being used. Most of our dev/test setup is on vmxnet3. One quick test through one of those VMs:
                                                        http://www.speedtest.net/my-result/4106023142

                                                        It's hard to get great results when you have a gigabit connection in a datacenter, as speed test servers vary so much in performance at that level. But that's real Internet traffic, well beyond the "ideal circumstance" of local-only traffic.

                                                          Supermule Banned last edited by

                                                          And you are running 100gbit vswitches??? :D

                                                          @cogumel0:

                                                          Just for completeness, I took the 2 Ubuntu VMs I had and put them on the same LAN and repeated the iPerf test:

                                                          15.8GB/s .. ..

                                                          Didn't quite expect speeds like this compared to going through pfSense, even with pf disabled, but it shows what the server itself can handle.

                                                          Just FYI, it is a Dell DL380 G6, 2x 6-core Xeon, 72GB RAM.

                                                            FauxShow last edited by

                                                            Getting the official VMware Tools to work on 2.2 was a pain, but that offered the best speeds for me; I'm able to get 2.5Gb/s through pfSense using the VMXNET3 drivers.
                                                            Using Open-VM-Tools I was able to get just 2Gb/s.

                                                            Using vSphere 5.5 U2 with Enterprise Plus licensing on an HP c7000 blade chassis with two hosts running 10GbE FlexNET adapters.

                                                              KOM last edited by

                                                              Getting the official VMWare Tools to work on 2.2 was a pain, but that offered the best speeds for me

                                                              Considering that pfSense 2.2 is built on FreeBSD 10.1, which has direct support for vmxnet3 NICs, I'm wondering what the real VMware Tools install is giving you.  One of the regulars here, johnpoz, was testing just yesterday and saw problems when using VMware Tools under 2.2.

                                                              https://forum.pfsense.org/index.php?topic=90535.msg505761#msg505761
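
                                                              A quick way to see which driver is actually in use from a pfSense shell (interface naming as observed in this thread: the FreeBSD built-in driver shows up as vmx0, vmx1, …, while the VMware Tools driver shows up as vmx3f0, vmx3f1, …):

                                                              ifconfig -l               # list interface names
                                                              kldstat | grep -i vmx     # the VMware Tools driver loads as a separate vmxnet3 module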

                                                                johnpoz LAYER 8 Global Moderator last edited by

                                                                I was not able to get them to work.. I could ping stuff but could not access anything using the native tools.. Following the great instructions here..

                                                                http://www.v-front.de/2015/01/pfsense-22-was-released-how-to-install.html

                                                                He also had problems with the native tools.  The native tools work other than the driver for vmx3…  I would be very interested in how you got the native tools to work..

                                                                I can try disabling the checksumming

                                                                rx-checksumming: on
                                                                tx-checksumming: on

                                                                Which I show is still on at the ESXi level.  But I have offloading off.
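
                                                                On the pfSense side, checking and clearing those offload flags on a vmx3f interface would look something like this (interface name as used later in this thread; just a sketch):

                                                                ifconfig vmx3f0 | grep options          # lists RXCSUM/TXCSUM/TSO4/... if still enabled
                                                                ifconfig vmx3f0 -rxcsum -txcsum -tso4   # disable them for a quick test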

                                                                [root@esxi:~] ethtool --show-pause vmnic2
                                                                Pause parameters for vmnic2:
                                                                Autonegotiate:  on
                                                                RX:            off
                                                                TX:            off

                                                                I can play around with them a bit more.. So if you say they are working I would love to know how you got them working.

                                                                Here are the nics I have in my esxi box

                                                                [root@esxi:~] esxcfg-nics -l
                                                                Name    PCI          Driver      Link Speed    Duplex MAC Address      MTU    Description
                                                                vmnic0  0000:04:00.0 tg3        Up  1000Mbps  Full  2c:76:8a:ad:f6:56 1500  Broadcom Corporation NetXtreme BCM5723 Gigabit Ethernet
                                                                vmnic1  0000:02:00.0 e1000e      Up  1000Mbps  Full  00:1f:29:54:17:14 1500  Intel Corporation 82571EB Gigabit Ethernet Controller
                                                                vmnic2  0000:02:00.1 e1000e      Up  1000Mbps  Full  00:1f:29:54:17:15 1500  Intel Corporation 82571EB Gigabit Ethernet Controller
                                                                vmnic3  0000:03:00.0 e1000e      Up  1000Mbps  Full  68:05:ca:01:b3:26 1500  Intel Corporation 82574L Gigabit Network Connection

                                                                  FauxShow last edited by

                                                                  You were exactly correct about turning options off on the NICs to get pfSense 2.2 with VMware Tools to work on vSphere 5.5U2+.

                                                                  I noticed by monitoring the console that if I changed the link speed of a VMXNET3 adapter, the enabled options changed from what you show here as RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,VLAN_HWTSO to just VLAN_MTU,VLAN_HWTAGGING.

                                                                  So I made a script in /usr/local/etc/rc.d that has an "ifconfig vmx3f0 -rxcsum -txcsum -tso4" for each adapter, and that gets it to work upon startup. So far it works, and iperf between two RHEL VMs shows an average of 2.5 or so Gb/s through VMXNET3 pfSense interfaces.

                                                                  Don't know if it's worth it though, as it seems to be only about a 100Mb/s difference going from the native drivers to Open VM Tools and then up to VMware Tools.

                                                                    johnpoz LAYER 8 Global Moderator last edited by

                                                                    Well, I can validate that it does indeed seem to work.. As soon as I turned that stuff off, traffic flows through pfSense.

                                                                    This should be added to the docs; I will do that sometime this weekend I hope.  Want to play around with it a bit more, etc.  I want to check the performance between local network segments, etc.

                                                                    Thanks!  While I agree the native driver with the Open VM Tools is the much easier method.. unless there is some drastic difference in performance, just stick with that method.. The nice thing about the native drivers, etc. is that it is very easy to get pfSense up and running on ESXi vs having to install tools before it works, etc.  Guess I could put up an OVA for others with the tools already installed.  So you're saying this does not survive a reboot?

                                                                      FauxShow last edited by

                                                                      Correct, the adapters need to be snapped into action every time. I tried to do "ifconfig vmx3f0 media autoselect" but that just gave an error code and "ifconfig vmx3f0 media 10Gbase-T" didn't wake them up. If you use the webgui to select the link speed then that does work though, and you can monitor the console to see.

                                                                        dburkland last edited by

                                                                        Just wanted to say thanks for this post, as it helped me get past this connectivity issue with pfSense 2.2.3 + vSphere 6.0. I created the following RC script (not pretty, but it gets the job done) based on the example provided, which seems to do the trick:

                                                                        
                                                                        #!/bin/sh
                                                                        # Disable checksum and TSO offload on every VMware Tools vmxnet3 (vmx3fX)
                                                                        # interface and bounce the interface so the change takes effect.
                                                                        for vnic in $(/sbin/ifconfig | grep "vmx3f[0-9]:" | awk -F ':' '{ print $1 }'); do
                                                                        	/usr/bin/logger -t vmxnetfix.sh "Disabling checksumming on $vnic"
                                                                        	/sbin/ifconfig $vnic -rxcsum -txcsum -tso4
                                                                        	/sbin/ifconfig $vnic down
                                                                        	/sbin/ifconfig $vnic up
                                                                        	/usr/bin/logger -t vmxnetfix.sh "Checksumming has been properly disabled on $vnic"
                                                                        done
                                                                        
                                                                        

                                                                        Cheers,

                                                                        Dan
