Netgate Discussion Forum

    Network performance vtnet

      Ofloo

      pfSense has much worse vtnet performance than vanilla FreeBSD when run inside bhyve. What could the reason be?

      Turning the offload options on or off doesn't really make a performance difference, and yes, I've rebooted each time I changed a checkbox. However, I don't see any real change in the interface flags, so I'm not sure the settings actually apply to vtnet (a way to check the flags is sketched after the list below).

      Checked:
      Disable hardware checksum offload
      Disable hardware TCP segmentation offload
      Disable hardware large receive offload
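
      (For reference: whether those options actually took effect can be checked from the shell; the vtnet0 interface name below is an assumption.)

      # list the interface; the options= line shows flags such as TXCSUM, RXCSUM, TSO4 and LRO
      ifconfig vtnet0
      # the offloads can also be toggled by hand for a quick test
      ifconfig vtnet0 -txcsum -rxcsum -tso -lro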

      Tests performed with iperf
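
      (The exact iperf invocation isn't shown; a run along these lines is assumed, with 10.0.0.1 standing in for the host on the other end of the bridge.)

      # on the far end
      iperf -s
      # on the VM being measured
      iperf -c 10.0.0.1 -t 30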

      FreeBSD 11.1 vanilla:
      1.28 Gbits/sec
      pfSense 2.4.2p1:
      613 Mbits/sec
      pfSense 2.4.2p1 (pfctl -d):
      616 Mbits/sec

      I've checked that the vtnet interfaces on vanilla FreeBSD and pfSense have the same interface flags.

      Oh, right: the vanilla FreeBSD VM has 1 core at 2.4 GHz and 256 MB RAM, while the pfSense VM has 4 cores at 2.4 GHz and 4 GB RAM. I just tested the development release 2.4.3 and it shows the same bad results.

      And they're using the exact same bridge interface.

      edit: added version numbers

        Ofloo

        I've noticed that the web interface doesn't apply the disable-LRO/TSO/checksum values to the vtnet interfaces... not that it makes much of a difference, though.

        sysctl hw.vtnet
        hw.vtnet.rx_process_limit: 512
        hw.vtnet.mq_max_pairs: 8
        hw.vtnet.mq_disable: 0
        hw.vtnet.lro_disable: 1
        hw.vtnet.tso_disable: 1
        hw.vtnet.csum_disable: 1
        

        I had to manually enter the tunables in /boot/loader.conf.local for them to apply.
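
        (That is, lines like these, matching the sysctl names above:)

        hw.vtnet.csum_disable=1
        hw.vtnet.tso_disable=1
        hw.vtnet.lro_disable=1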

          KOM

          I don't see too many people using pfSense under bhyve, or at least if they do they don't post about it here. Most are using VMware ESXi or XenServer. That's likely why you aren't getting much response.

            Ofloo

            I don't mind; if anyone else hits the same issue, at least they might benefit from what I did.

            Did PCI passthrough: basically the same result.
            To check whether it was pfSense-specific, I installed OPNsense: same result.
            Made a lagg0 out of 4 PCI gigabit NICs (roughly as sketched below): same result; I haven't tested this scenario on OPNsense though.
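
            (On plain FreeBSD a lagg like that would be built roughly as follows; the igb0-igb3 interface names, the LACP protocol and the address are assumptions, and on pfSense/OPNsense the equivalent is configured through the web GUI.)

            # create the aggregate and add the four physical ports
            ifconfig lagg0 create
            ifconfig lagg0 laggproto lacp laggport igb0 laggport igb1 laggport igb2 laggport igb3
            ifconfig lagg0 inet 192.168.1.1/24 up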

            I've got the impression there's some sort of bottleneck; I'm not quite sure whether it's vtnet related.
