Netgate Discussion Forum

    HYPERVISOR performance testing

    Virtualization
    31 Posts 11 Posters 12.4k Views
    • R
      redpine
      last edited by

      Any thoughts on how Xen would compare?

      • J
        jsone
        last edited by

        @redpine:

        Any thoughts on how Xen would compare?

        Just finished a XenServer 6.5 test run (managed through XenCenter); after some digging it appears its internal guts are mostly CentOS/Fedora based.

        iperf produced 18 Mbit/s.
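        (For reference, the exact invocation for this run wasn't recorded; the rough shape of the multi-stream throughput tests used through this thread looks like the below, with placeholder addresses.)

        #on the server behind the firewall
        iperf3 -s
        #on the client on the other side: multi-stream TCP for 30 seconds
        iperf3 -c 192.168.50.10 -P 10 -t 30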

        There's no way in the VM editor to change the network drivers; maybe you can hack around and find a way. I'd bet e1000 would be slightly faster and virtio even more so.

        CPU in the VM only used two of the cores (100% of each), which is exactly what happened with CentOS without multi-queue virtio drivers.

        I did not find any clear indication that XenServer supports BSD; you probably want to avoid Xen for BSD-based operating systems such as pfSense.

        You probably want to stick with CentOS 7 and KVM for BSD virtio support.

        I did not test the flood commands; who knows, maybe that's where it shines? ;D But probably not!

        It should be noted that I did disable hardware checksum offloads and rebooted the VM guest prior to the test.
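        (For anyone repeating this: the offload toggles live under System > Advanced > Networking in the pfSense GUI. As a rough per-boot shell equivalent, assuming a virtio interface named vtnet0 (on Xen the guest NIC would be xn0), something like:)

        #disable checksum offload, TSO and LRO on one interface; repeat per interface
        ifconfig vtnet0 -rxcsum -txcsum -tso -lro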

        Update 8-21-15:
        It may be that the scores were so dismally low because of this: https://forum.pfsense.org/index.php?topic=88467.0
        In KVM, disabling hardware checksum offload solves the issue; in Xen you may need to mess with the hypervisor NICs as that post describes. After scanning the post I feel safe saying XenServer isn't the place for a BSD guest; it appears to specialize in Linux and Windows.

        • R
          redpine
          last edited by

          @jsone:

          You probably want to stick with CentOS 7 and KVM for BSD virtio support. I did not test the flood commands; who knows, maybe that's where it shines? ;D But probably not!

          Thanks. I was all set up to install CentOS 6 and Xen (waiting for my SSD). Guess I'll change my DHCP server to serve up a CentOS 7 install via PXE instead. Really don't like KVM… ugh!!!

          Is your configuration/setup with CentOS 7 KVM documented anywhere? Hate to set up pfSense through trial and error.

          • J
            jsone
            last edited by

            @redpine:

            Is your configuration/setup with CentOS 7 KVM documented anywhere? Hate to set up pfSense through trial and error.

            Installing pfSense on CentOS 7.1

            1. Install CentOS; I choose "Server with GUI", then check all the boxes on the right for virtualization, and check the box for development tools.
            a. Storage is always tricky. I LOVE LVM for KVM storage; it has never failed me for snapshotting and dd backups, so you may want to shrink the install drive down to 20 GB and leave the rest of your main VG open for VM guests.
            i. vgdisplay
            ii. lvcreate --name pfsense -L 5G <vg-name-from-vgdisplay>
            b. For tuning sysctls on CentOS 7, run "tuned-adm profile virtual-host" and reboot.

            2. Copy/gunzip the pfSense ISO to your /iso/ directory using WinSCP.

            3. Using Xming (you'll need the donation version of Xming to use X11 forwarding with virt-manager) or X windows, run "virt-manager". When creating the VM choose "Other"; it will then populate more options; choose FreeBSD 10.x.
            a. virt-manager is where you'll set up your br0, br1, br2, etc. for each interface (you'll assign these to your VM interfaces later); make sure you tell all interfaces to start on boot. You'll want to reboot once it's all set up to see if you missed something.

            4. Tell the VM to boot from the ISO; at the end, check the box to customize the configuration before launch.

            5. Add your extra network interfaces as needed, set all the NIC device models to "virtio", and hit save.

            6. Launch the VM, then force the VM off.

            7. Set how many cores you want each interface to use for queues (usually the same as your core count).
            To do this, get on a terminal and type "virsh edit pfsense", find each virtio network interface,
            add <driver name='vhost' queues='8'/> at the bottom, just above each closing </interface> tag, and exit the editor with a save (see the sketch after this list).

            8. Start the VM.

            9. Once you are in the pfSense UI, check all the boxes to disable hardware offloads under System > Advanced > Networking.

            10. Increase the state table limit to at least 1 million.

            11. Start to DDoS yourself, responsibly.
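
            Here is a rough sketch of what one virtio <interface> block looks like inside "virsh edit pfsense" after step 7. The MAC and bridge values are whatever libvirt generated for your guest; only the <driver> line is added by hand:

            <interface type='bridge'>
              <mac address='52:54:00:xx:xx:xx'/>
              <source bridge='br0'/>
              <model type='virtio'/>
              <driver name='vhost' queues='8'/>
            </interface>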

            Update 9-7-15:

            On CentOS 7, if we issued a network restart, NetworkManager greedily removed the guest NIC interfaces and didn't add them back.

            Run these commands and a hypervisor network restart won't permanently cripple your pfSense guest:

            systemctl disable NetworkManager
            systemctl stop NetworkManager
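
            With NetworkManager disabled, the bridges are handled by the classic network scripts. A minimal sketch of the ifcfg files (device names below are just examples; adjust to your layout):

            #/etc/sysconfig/network-scripts/ifcfg-br0
            DEVICE=br0
            TYPE=Bridge
            ONBOOT=yes
            BOOTPROTO=none
            NM_CONTROLLED=no

            #/etc/sysconfig/network-scripts/ifcfg-eth0 (the physical port attached to the bridge)
            DEVICE=eth0
            ONBOOT=yes
            BOOTPROTO=none
            BRIDGE=br0
            NM_CONTROLLED=no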

            side notes:

            1. CentOS 7 in my tests automatically disabled its firewalld system for all bridged interfaces; you do want to make sure that's happening in your install too.

            You should see the following lines in /usr/lib/sysctl.d/00-system.conf:

            # Disable netfilter on bridges.

            net.bridge.bridge-nf-call-ip6tables = 0
            net.bridge.bridge-nf-call-iptables = 0
            net.bridge.bridge-nf-call-arptables = 0

            2. It should be completely fine to leave SELinux and firewalld running.

            3. I did play with increasing the ring buffers on the interfaces in CentOS. It didn't make a huge difference in overall performance, though it did appear that initial bursts were handled much faster with larger TX and RX rings. I left it all at the default of 256 for my tests and production; you may want to play with it in your setup.

            ethtool --show-ring enp0s20f0
            Ring parameters for enp0s20f0:
            Pre-set maximums:
            RX:            4096
            RX Mini:        0
            RX Jumbo:      0
            TX:            4096
            Current hardware settings:
            RX:            256
            RX Mini:        0
            RX Jumbo:      0
            TX:            256

            #to change it (do this for each interface)
            ethtool --set-ring enp0s20f0 rx 4096 tx 4096

            4. Depending on the nature of your setup you may want less power saving and more CPU readiness; to do that you can tell SpeedStep to be less conservative.

            #to see where you are at
            grep -i mhz /proc/cpuinfo
            cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor

            #to change it you can use this command
            for CPUFREQ in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do [ -f $CPUFREQ ] || continue; echo -n performance > $CPUFREQ; done

            • J
              jsone
              last edited by

              @Keljian:

              Is there any chance you can run some Hyper-V tests to compare? I'm curious.

              I had Hyper-V set up with pfSense 2.1 and it was a disaster; no benchmarks from then, it was just a problem with NIC driver compatibility. Since then, with Windows 8 and 10, I decided not to renew my MSDN. Despite the fact that MS said I'd keep access to all the software I paid for in the prior subscription, they lied; it's all locked down. The fact that I even have to pay over $1000/year to test Microsoft's shit software is absurd. I'd love to help by testing it, but Microsoft's business methods lead me to say forget about Microsoft as a hypervisor. Can you even imagine having your entire network go down every second Tuesday of the month because of Windows Update on your pfSense hypervisor? :)

              That's what CARP is for? lol, no.

              What's that you say? Windows Server 2012 doesn't reboot on the second Tuesday after updates anymore? You are right, LOL, it sits there vulnerable for days until you reboot it!

              Windows as a router hypervisor is really no less absurd than saying you need Java or Flash installed in your browser to manage your VMs.

              Just forget about MS altogether as a production environment OS.

              • H
                heper
                last edited by

                @jstar1:

                Just forget about MS altogether as a production environment OS.

                You do that in your reality, while the rest of us are stuck in this reality ;)

                • J
                  jsone
                  last edited by

                  @heper:

                  @jstar1:

                  Just forget about MS altogether as a production environment OS.

                  You do that in your reality, while the rest of us are stuck in this reality ;)

                  Hey, I've got my share of prod Windows servers like everyone else; every day is another opportunity for me to phase them out / move user interaction away from them ;)

                  I'll be over here in my nice soft padded reality. Just remember, Microsoft wants to be an ASP and grab market share; every time you pay them for software, you are paying your competitor to allow you to compete with them. If you aren't providing products that would suggest a conflict of interest, at a minimum you are stuck supporting a monopoly.

                  I found a "Hyper-V Server 2012 R2 Evaluation | Unlimited"; I might give that a try if I get some free time, although it sounds like a major waste of time.

                  http://www.microsoft.com/en-us/evalcenter/evaluate-hyper-v-server-2012-r2?i=1

                  https://technet.microsoft.com/en-us/library/dn792027.aspx

                  It claims 2012 Hyper-V supports FreeBSD, so it might produce some interesting test results.

                  Apparently Hyper-V has no UI to speak of, and to manage it remotely I would need a 2012 install with the Hyper-V MMC, along with a ton of other nonsense I read about here:
                  http://pc-addicts.com/12-steps-to-remotely-manage-hyper-v-server-2012-core/

                  I think I'll give up on testing this nightmare for now.

                  • K
                    Keljian
                    last edited by

                    I should mention I've noticed a lot less latency in ESXi with the new open-vm-tools, which was released a day or two ago.

                    • J
                      jsone
                      last edited by

                      Some things you might need to know to get Hyper-V installed and working on a C2758:

                      To install successfully I switched the C2758 to IDE SATA mode (probably could have just installed the driver?).

                      Install the Intel network drivers:

                      http://www.supermicro.com/products/motherboard/Atom/X10/A1SRM-2758F.cfm

                      PnPUtil -i -a d:\PRO1000\Winx64\NDIS64\e1s64x64.inf
                      PnPUtil -e

                      To turn the firewall off:
                          NetSh Advfirewall set allprofiles state off
                      To turn it on:
                          NetSh Advfirewall set allprofiles state on
                      To check the status of Windows Firewall:
                          Netsh Advfirewall show allprofiles

                      On some 2012 client somewhere, do this to get the Hyper-V tools:
                      Install-WindowsFeature RSAT-Hyper-V-Tools -IncludeAllSubFeature

                      Upload the ISO to the Hyper-V host via \\192.168.x.x\c$.

                      I left the firewall off for testing and setup.
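
                      Since there is no local GUI, the guest itself can be created from PowerShell on the Hyper-V host. A rough sketch (the names, sizes and switch/NIC binding below are placeholders, not necessarily what I used):

                      #create a virtual switch bound to the Intel NIC, then a Generation 1 guest for pfSense
                      New-VMSwitch -Name "LAN" -NetAdapterName "Ethernet"
                      New-VM -Name "pfsense" -Generation 1 -MemoryStartupBytes 2GB -NewVHDPath "C:\VMs\pfsense.vhdx" -NewVHDSizeBytes 20GB -SwitchName "LAN"
                      Set-VMProcessor -VMName "pfsense" -Count 4
                      Add-VMDvdDrive -VMName "pfsense" -Path "C:\iso\pfSense-2.2.4.iso"
                      Start-VM -Name "pfsense"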

                      • R
                        redpine
                        last edited by

                        @jsone:

                        3. Using Xming (you'll need the donation version of Xming to use X11 forwarding with virt-manager) or X windows, run "virt-manager". When creating the VM choose "Other"; it will then populate more options; choose FreeBSD 10.x.

                        First install of CentOS 7. I have an Xming license, but like OpenNX better. Unfortunately neither works with GNOME in CentOS 7, so I have to use KDE. Probably will install Xfce later. Now on to installing pfSense in a VM.

                        Thanks for the help.

                        • J
                          jsone
                          last edited by

                          @redpine:

                          First install of CentOS 7. I have an Xming license, but like OpenNX better. Unfortunately neither works with GNOME in CentOS 7, so I have to use KDE. Probably will install Xfce later. Now on to installing pfSense in a VM.

                          Thanks for the help.

                          Ya, Xming is what you would want to use for X11 forwarding over SSH to a Windows computer for virt-manager. On Linux, just start X and click the virt-manager icon within GNOME.

                          • J
                            jsone
                            last edited by

                            My next step was to test CentOS 7 KVM + pfSense 2.2.4 versus bare metal in a lagg configuration.

                            lagg allows you to bond multiple interfaces together; using mode 4 (LACP) gives you additional throughput AND redundancy.

                            Our switches support LACP with a layer 3+4 hash, so we set up all of the clients this way.

                            The LAN and WAN clients have two 1G NICs and two bridge interfaces with separate IPs each; then we run two iperfs at the same time to two different iperf backend IPs.

                            The pfSense setup has one bond interface with a WAN VLAN on it; lagg0 is the LAN.

                            The CentOS 7 guest worked like a charm; we got 1.5 Gbit throughput using the previously mentioned iperf3 tests (bond layout sketched below).
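
                            For reference, the 802.3ad bond on the CentOS 7 hypervisor is just the stock ifcfg setup; a minimal sketch (interface names and the bridge on top are from my layout, adjust as needed):

                            #/etc/sysconfig/network-scripts/ifcfg-bond0
                            DEVICE=bond0
                            TYPE=Bond
                            BONDING_MASTER=yes
                            BONDING_OPTS="mode=802.3ad miimon=100 xmit_hash_policy=layer3+4"
                            ONBOOT=yes
                            BOOTPROTO=none
                            BRIDGE=br0

                            #/etc/sysconfig/network-scripts/ifcfg-eth0 (repeat for eth1)
                            DEVICE=eth0
                            MASTER=bond0
                            SLAVE=yes
                            ONBOOT=yes
                            BOOTPROTO=none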

                            Switched over to the bare-metal unit; had to add igb2 to the "LAN" on its own subnet to configure the lagg group from the web UI.

                            Once the lagg was up I performed the same test; the bare metal maxes out at 950 Mbit.

                            Attempted to adjust the lagg hash:

                            ifconfig lagg0 lagghash l3,l4
                            ifconfig lagg0 lagghash l2
                            ifconfig lagg0 lagghash l2,l3,l4

                            No improvement.

                            Watching "systat -ifstat 1"

                            shows lagg member 0 (igb0) is maxed out, while lagg member 1 (igb1) has 2 Mbit inbound but no traffic going out:

                            lagg0_vlan100: 114 MB in, 2.1 MB out
                            lagg0: 114 MB in, 114 MB out
                            lo0: 0, 0
                            igb2: 0, 0
                            igb1: 2 Mbit in, 0 out
                            igb0: 114 MB in, 114 MB out

                            Removed all VLANs and interfaces not used by the lagg and rebooted. Performed the same test; still got the results above, no performance gain.

                            It should also be noted we attempted to adjust the LACP strict tunable without any improvement.

                            I then plugged the cables into a different lagg on the switch; when the lagg came up, we saw the same issue in reverse: igb1 would pass all the traffic, while igb0 did 2 Mbit max with 0 sent.

                            ### Giving up on lagg with bare metal ###
                            After reviewing the MAC info, it turns out that as of 2.2.4 pfSense reports only one MAC to the switch for LACP on a VLAN, while it knows to report both MACs on VLAN 1. As a result, when the switch returns traffic for VLAN 100 it only knows about one port, so a router in a LACP group will never go faster than one interface unless you band-aid it in the switch somehow. Either way, LACP doesn't work in 2.2.4 with VLANs; I'd assume redundancy remains, although I did not test it. The CentOS 7 hypervisor managing the lagg with a pfSense guest reports both lagg member MACs on all VLANs.

                            You can successfully get pfSense to route its traffic out of both lagg members using the following tunables, which are not set by default in 2.2.4:
                            sysctl net.link.lagg.default_use_flowid=0
                            sysctl net.link.lagg.0.use_flowid=0

                            • J
                              jsone
                              last edited by

                              Set up two C2758s with CentOS 7 1503 KVM and put pfSense 2.2.4 guests on each.

                              Set up pfsync and CARP.

                              Used Ethernet port 4 as a direct connection for state sync.

                              eth0,1 -> bond0 -> br0 (vtnet0_vlanX (WAN), vtnet0_vlanY (WAN2), vtnet0 (LAN))
                              eth3 -> br3 (vtnet0 (pfsync))

                              State sync link is at 900 Mbit, using 75-80 Mbit.

                              Configs are copying over perfectly.

                              Started the iperf test, then ran "virsh destroy" on MASTER.

                              BACKUP never changed the CARP IPs from backup to master; waited 20 minutes, no change. CARP still does not work in virtio. SAD faces.

                              Rebooted both units, gracefully shut down master using the menu "halt" option; backup successfully takes over.

                              Booted master; it took over.

                              Under no load, CARP failover appears to work; it's possible that after syncing CARP IPs each unit needs a reboot?

                              I forced a sync using the button in the UI and killed MASTER while under load from 40 iperfs; the backup picked up immediately, and only 1 of the 40 iperfs dropped to 0.0 for a second.

                              It's possible that after adding CARP IPs and then syncing them to the backup you'll want to do a reboot on the backup.

                              It appears that if you issue an /etc/init.d/network restart, the guests also die. I had not seen this in the past.
                              Normally I would yum remove NetworkManager, but I did not for these two tests.

                              I took an opportunity to test what would happen to the CARP setup under an hping flood with random source addresses (shape sketched below).
                              The pfSense 2.2.4 guests inside the CentOS 7 hypervisors did surprisingly well.
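
                              (The flood was shaped roughly like this; the target address and port are placeholders, and hping3 runs on a box that can reach a host behind the firewall:)

                              #SYN flood at max rate with randomized spoofed source addresses, forcing a new state per packet
                              hping3 --flood --rand-source -S -p 80 192.168.50.10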

                              They processed new states with only 30% packet loss.

                              I doubled down on the flood (now that I'm using stacked, CARPed Cisco switches this isn't lagging like the poor little desktop switches I had used originally).

                              That was 85 Mbit of random state flooding; MASTER went to 60% packet loss, and there was a 20 Mbit stream of states between MASTER and BACKUP.

                              Once MASTER got to 6.5 million states I killed the guest; amazingly, within 2-10 seconds all of the CARP IPs had moved over and the backup was handling it just as well.

                              Now, when I booted MASTER back up, things got a little shitty: I couldn't get back into the web interface until I stopped the floods; it kept giving me timeouts and CSRF / missing cookie errors.

                              It didn't appear that the backup transferred its 5 or so million states back to MASTER either; they both had about 1.5 million states when the master took over, and I'm not sure where the other 4 million went.

                              After 30 minutes, when I stopped the flood, MASTER had 4.5 million states.

                              • D
                                diablo266
                                last edited by

                                Thank you SO MUCH for this thread, it is extremely useful for me! I've been running IPFire under KVM as an alternative to pfSense due to the horrendous performance virtio in pfSense/BSD used to have a few years ago. I look forward to switching back to running pfSense now that it seems KVM/virtio support under pfSense is finally able to push gigabit. I noticed you gave it 8 cores to achieve that; I really hope it doesn't actually need all 8?

                                Hopefully performance continues to improve, as virtio under IPFire is able to saturate gigabit easily with a single core on ancient Nehalem-era hardware. I'm going to be throwing part of an E5-2680 v3 at pfSense this time around…

                                • J
                                  jsone
                                  last edited by

                                  @diablo266:

                                  Hopefully performance continues to improve, as virtio under ipfire is able to saturate gigabit easily with a single core on ancient nehalem era hardware. I'm going to be throwing part of an e5-2680v3 at pfsense this time around…

                                  Obtaining gigabit line speed did not take all 8 cores, though it did take more than one would like: 4-6. A native Linux firewall appears faster in a Linux hypervisor. There is a lot more going on in these tests than one giant download; it is 40+ downloads that attempt to go as fast as they can, so states get a lot more excited than a couple of browser downloads. The CPUs only get near maxed out when doing high packet-per-second throughput testing.

                                  I just finished doing some testing today with two CentOS hypervisors and two pfSense 2.2.4 guests running in a CARP failover. They are working better than I've ever seen before using virtio! No kernel panics or anything!

                                  The backup was about 100 Mbit slower (850 Mbit or so); I did not apply the tuned virtual-host profile on that CentOS hypervisor, which may be the reason why.

                                  I tested a Linux firewall guest on this setup also; here's the downside of a Linux firewall. The CentOS router guest got 930 Mbit, but I could not get it to go any faster with the bonding, even though I added two more interfaces to the guest to try to send traffic out.

                                  The CentOS router was getting 222 megabytes from the VLAN on its WAN (which it saw as a 1G interface), but could only send 118 megabytes no matter what I tried (bond mode 4, 5, etc.). The test wasn't comparable to the pfSense tests, as I had the Linux firewalld off and did not turn on IP masquerading (NAT). pfSense with pfctl -d (pf off) runs crazy fast too; it's really a shame there isn't a way to get it running that fast with the firewall on!

                                  pfSense negotiates its virtio interfaces at 10 gigabit even if you only have two bonded 1G NICs. Because Linux firewalls are (sort of, at this point) para-virtualized in their drivers, they perform WAY faster, but they report 1G even on a bonded bridge. So native Linux driver support is a blessing and also a curse, unless of course you have 10G hardware across the board; then a Linux firewall might be a faster option for you in a Linux hypervisor.

                                  I'm glad to see pfSense 2.2 is doing virtio flawlessly without kernel panics; now we need performance!

                                  • M
                                    Mats
                                    last edited by

                                    @jstar1:

                                    Apparently Hyper-V has no UI to speak of, and to manage it remotely I would need a 2012 install with the Hyper-V MMC, along with a ton of other nonsense I read about here:
                                    http://pc-addicts.com/12-steps-to-remotely-manage-hyper-v-server-2012-core/

                                    I think I'll give up on testing this nightmare for now.

                                    Or you do the management the easy way :)
                                    Download the free edition of 5nine Manager from their site, http://www.5nine.com/products.aspx, and install it directly on the Hyper-V host.

                                    • K
                                      Keljian
                                      last edited by

                                      You shouldn't need a third-party tool to make it easy to manage, sorry; that's not cool.

                                      • J
                                        jsone
                                        last edited by

                                        Another update on this front, related to 10 Gb Chelsio cards.

                                        We put a Chelsio T520-SO-CR into our C2758 test lab. Bare-metal performance was amazing; we actually couldn't push the connection in the test lab past 2 Gb due to the limits of our clients. The hping floods were handled amazingly well: only minor bursts of 100-300 ms ping spikes, rather than the major fireballs we saw before on bonded 1 Gb. On the bare-metal setup we were pushing 2 Gb in / 2 Gb out and the interrupts were at 33%.

                                        We then attempted the same tests in CentOS 7 with KVM and virtio.
                                        In a bridged setup the traffic would not go over 1 Gb; ethtool showed the link up at 10 Gbit, and so did the switch. Under full iperf load, pfSense was at 6.6 out of 8 load, so it appears we were mostly CPU maxed.

                                        Attempted to adjust the txqueuelen on the hypervisor to 5000 and 10000; it did make the connection go about 5% faster (example below).
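
                                        (For reference, the runtime change on the hypervisor looks like this; it does not persist across reboots, and the interface names depend on your box, so repeat it for the bridge and the underlying physical port:)

                                        ip link set dev br0 txqueuelen 10000
                                        #and the physical port, whatever name it has on your system, e.g.:
                                        ip link set dev enp0s20f0 txqueuelen 10000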

                                        Attempted to assign the Chelsio cards directly to the guest, but the C2758 doesn't support VT-d, so that doesn't work.

                                        Looks like Chelsio + C2758 on bare metal is really the best option,

                                        while bonding the built-in Intel 1G NICs works similarly well on both CentOS 7 and bare metal.

                                        • P
                                          pfoo
                                          last edited by

                                          Hi, just stumbled on this topic as I'm having trouble achieving the same performance on similar hardware.

                                          Hypervisor: Proxmox (KVM) on an Atom C2750 Supermicro board (A1SAM), 4 Intel GbE NICs.
                                          VM: up-to-date pfSense 2.2.5 with 4 cores assigned, 2 virtio NICs with 4 queues enabled on both interfaces, 2048 MB memory.
                                          MTU is 1500 everywhere.

                                          target iperf cmdline : iperf3 -s
                                          input iperf cmdline : iperf3 -c 192.168.50.10 -P 20 -t 30

                                          Direct switching (without passing through pfSense or the hypervisor at all):
                                          INPUT IPERF ---> switch ---> TARGET iperf
                                          941 Mbits/sec
                                          => input iperf, target iperf, switch and cables are able to sustain 941 Mbps.

                                          4 cores, 4 queues, through pfSense with pf disabled (pfctl -d):
                                          INPUT IPERF ---> switch ---> pfsenseNic0 ---> pfsenseNic1 ---> TARGET iperf
                                          935-941 Mbits/s
                                          nearly 100% interrupts on 2 CPU cores
                                          no significant system load on the CPU (somewhere around 2-5%)
                                          => the two NICs and the virtualised pfSense are able to sustain nearly the same bandwidth (a bit less actually, maybe some KVM overhead?)

                                          4 cores, 4 queues, through pfSense with pf enabled (pfctl -e):
                                          751-811 Mbits/s
                                          100% interrupts on 1 CPU core, 75% interrupts on another CPU core
                                          30% + 20% system CPU load on the 2 remaining cores

                                          • Why am I missing 130 Mbps on this last test?
                                          • Why do I "only" have 75% interrupt load on the second core (considering I had 100% on 2 cores during the previous test)?

                                          I already tried giving 8 cores to the VM, either with 4 or 8 virtio queues; it doesn't change anything.
                                          I also tried playing with NUMA/taskset in order to lock the KVM process to the same 4 cores.
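
                                          (For anyone else trying the pinning: with libvirt, as on the CentOS setups earlier in the thread, it can be done per-vCPU with virsh; Proxmox doesn't use libvirt, so pinning the qemu/kvm process with taskset is the blunter option. Guest name and core numbers below are placeholders.)

                                          #pin each vCPU of the guest to one host core (libvirt)
                                          virsh vcpupin pfsense 0 0
                                          virsh vcpupin pfsense 1 1
                                          virsh vcpupin pfsense 2 2
                                          virsh vcpupin pfsense 3 3
                                          #or pin the whole VM process by PID
                                          taskset -cp 0-3 <pid-of-the-vm-process>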

                                          Any ideas?

                                          [edit] pfSense bare-metal perf: 941 Mbps.

                                          • J
                                            jsone
                                            last edited by

                                            I know a few people tried my setup with other flavors of Linux and had similar issues to what you are seeing. I would consider retesting with CentOS 7.1 or later and see if that resolves it; your interrupts should be maxed out during your tests.

                                            The tunables I mentioned for CentOS may not exist in Proxmox; I've never used Proxmox.

                                            See if you can force all your cores to max performance, and tune the operating system sysctls for the hypervisor role like I mentioned in prior posts.

                                            See my prior posts about setting up CentOS to see if you can apply similar tunables to Proxmox.
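
                                            (If you can get tuned onto the box, the profile I used on CentOS is applied like this; on Proxmox/Debian you would have to install tuned first, and I have not verified it behaves the same there.)

                                            tuned-adm profile virtual-host
                                            #confirm it took
                                            tuned-adm active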
