Netgate Discussion Forum

    No network traffic in pfSense 2.4 KVM VM

2.4 Development Snapshots
    14 Posts 5 Posters 3.9k Views
mjtbrady

Not yet, no.

I have experienced problems with hardware offloading in the past, and these symptoms are different. No traffic seems to be getting through to the VM at all. With hardware offloading issues some traffic gets through, enough to allow the use of the Web Configurator to turn off hardware offloading. At least that has always been the case in the past.

When I get some time I will bring up the 2.4 VM again and look at the adapter settings more closely, in case something is different in FreeBSD 11. I don't know when that will be, though.

athurdent

        @mjtbrady:

        Is there something additional that needs doing on 2.4 to get virtio drivers working that I have missed?

        No, working absolutely fine here on Proxmox. Everything virtio.

doktornotor

          @athurdent:

          @mjtbrady:

          Is there something additional that needs doing on 2.4 to get virtio drivers working that I have missed?

          No, working absolutely fine here on Proxmox. Everything virtio.

You must have a shitton of luck; pretty much every thread about virtualization I have noticed lately (I largely skip them) was one where virtio either did not work at all or had absolutely horrible throughput. All were fixed after replacing virtio with emulated Intel NICs.

athurdent

Hmm, odd. I'm even using VLANs on them. It works not only on Proxmox but also on an HA secondary (CARP/pfsync) on my QNAP (TS 253 Pro, 8GB) with Virtualization Station.
Proxmox (i3 7100 with Supermicro X11-SSL-F) does nearly Gigabit throughput, OpenVPN around 250MBit. The QNAP NAS throughput is at least enough for my 400MBit line.
Maybe it's the Intel NICs? IIRC the QNAP also has Intel NICs.

athurdent

Here's a test without involving any physical adapters. This is iperf3 from a LAN VM to a DMZ VM through the pfSense VM. No NAT, only routed:

              [ ID] Interval           Transfer     Bandwidth       Retr
              [  4]   0.00-10.00  sec  3.51 GBytes  3.02 Gbits/sec   98             sender
              [  4]   0.00-10.00  sec  3.51 GBytes  3.01 Gbits/sec                  receiver
              

              with -R

              [ ID] Interval           Transfer     Bandwidth       Retr
              [  4]   0.00-10.00  sec  3.08 GBytes  2.65 Gbits/sec  314             sender
              [  4]   0.00-10.00  sec  3.08 GBytes  2.65 Gbits/sec                  receiver
              

              Maybe it's Proxmox, not Intel?
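
For reference, a test like this can be reproduced with plain iperf3 invocations; the address and options below are placeholders, as the exact commands used are not shown in the thread:

    # on the DMZ VM (server side)
    iperf3 -s

    # on the LAN VM (client side), routed through the pfSense VM
    iperf3 -c 192.0.2.10 -t 10

    # reverse direction (-R: the server sends, the client receives)
    iperf3 -c 192.0.2.10 -t 10 -R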

Guest

                @athurdent:

Here's a test without involving any physical adapters. This is iperf3 from a LAN VM to a DMZ VM through the pfSense VM. No NAT, only routed:

                [ ID] Interval           Transfer     Bandwidth       Retr
                [  4]   0.00-10.00  sec  3.51 GBytes  3.02 Gbits/sec   98             sender
                [  4]   0.00-10.00  sec  3.51 GBytes  3.01 Gbits/sec                  receiver
                

                with -R

                [ ID] Interval           Transfer     Bandwidth       Retr
                [  4]   0.00-10.00  sec  3.08 GBytes  2.65 Gbits/sec  314             sender
                [  4]   0.00-10.00  sec  3.08 GBytes  2.65 Gbits/sec                  receiver
                

                Maybe it's Proxmox, not Intel?

VirtIO problems seem to happen only when traffic hits the NAT part of pf. With no NAT, VirtIO always works. The problem is that VirtIO does not calculate checksums, and pf's NAT does not like the resulting bad checksums.
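
As a sketch of how this can be checked and worked around from inside the guest (not something the posters describe doing), FreeBSD's ifconfig can show and toggle checksum offloading on the virtio interfaces at runtime:

    # show the current offload capabilities on the virtio NIC
    ifconfig vtnet0 | grep options

    # disable TX/RX checksum offload at runtime (not persistent across reboots)
    ifconfig vtnet0 -txcsum -rxcsum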

athurdent

                  Same setup, turned on NAT to my DMZ:

                  [ ID] Interval           Transfer     Bandwidth       Retr
                  [  4]   0.00-10.00  sec  3.37 GBytes  2.89 Gbits/sec  519             sender
                  [  4]   0.00-10.00  sec  3.37 GBytes  2.89 Gbits/sec                  receiver
                  
                  

                  with -R

                  [ ID] Interval           Transfer     Bandwidth       Retr
                  [  4]   0.00-10.00  sec  3.07 GBytes  2.64 Gbits/sec  243             sender
                  [  4]   0.00-10.00  sec  3.07 GBytes  2.64 Gbits/sec                  receiver
                  
                  
mjtbrady

Got around to doing some more testing today, and still no joy.

I reinstalled from pfSense-CE-2.4.0-BETA-amd64-20170327-1320.iso and disabled checksum offloading, TSO and LRO in loader.conf.

On occasion some pings were answered during the boot process, but this happened inconsistently.

The host that I am using is CentOS 6. I will try a CentOS 7 host at some point to see whether a newer KVM and kernel make a difference, but at this point 2.4 is completely unusable for me.
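
The loader.conf entries referred to here are presumably the vtnet(4) loader tunables; a minimal sketch of what they might look like (file name and values assumed, not quoted from the poster):

    # /boot/loader.conf (or /boot/loader.conf.local on pfSense)
    # disable virtio-net offloads at boot
    hw.vtnet.csum_disable="1"
    hw.vtnet.tso_disable="1"
    hw.vtnet.lro_disable="1"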

Guest

                      @mjtbrady:

Got around to doing some more testing today, and still no joy.

I reinstalled from pfSense-CE-2.4.0-BETA-amd64-20170327-1320.iso and disabled checksum offloading, TSO and LRO in loader.conf.

On occasion some pings were answered during the boot process, but this happened inconsistently.

The host that I am using is CentOS 6. I will try a CentOS 7 host at some point to see whether a newer KVM and kernel make a difference, but at this point 2.4 is completely unusable for me.

                      You must disable it on the hypervisor side.

mjtbrady

I have never had to do anything on the hypervisor side before. In the same VM, apart from the virtual disk, on the same host, 2.3 works and 2.4 doesn't.

When you say the hypervisor, what do you mean exactly? In the VM definition or on the actual host adapters?

Guest

                          @mjtbrady:

I have never had to do anything on the hypervisor side before. In the same VM, apart from the virtual disk, on the same host, 2.3 works and 2.4 doesn't.

When you say the hypervisor, what do you mean exactly? In the VM definition or on the actual host adapters?

What I mean is: you must prevent TCP and UDP packets on the virtual interfaces from arriving with bad checksums, because pf on BSD silently drops them. VirtIO doesn't calculate checksums, so the checksums are always bad. Disabling checksum offloading for the VirtIO driver means that the checksums will be calculated in the network stack instead of in the NIC driver.

On the hypervisor side, I set it using ethtool, e.g. ethtool -K vif1 tx off
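
As a fuller sketch of the same host-side approach (the interface name vnet0 is a placeholder for whatever tap/vif device KVM created for the guest, and the extra offloads are an assumption rather than something the poster lists):

    # disable TX checksumming and the segmentation/receive offloads on the host tap device
    ethtool -K vnet0 tx off tso off gso off gro off

    # verify the resulting offload settings
    ethtool -k vnet0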

mjtbrady

Tried all of the above suggestions and more without success.

                            Created a new VM on the same host (CentOS 6.9) using virt-manager and that worked.

The non-working VM definition had

                            <os><type arch="x86_64" machine="rhel5.4.0">hvm</type></os>

                            Changed this to

                            <os><type arch="x86_64" machine="rhel6.6.0">hvm</type></os>

based on the working VM (using virsh edit), and it now works.

                            There was also a difference in the "feature policy" loaded (invtsc was missing), which I reloaded using virt-manager.
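
For anyone reproducing this, the machine-type change can be inspected and made with standard libvirt tooling; the domain name pfsense24 and the qemu-kvm path are placeholders based on a typical CentOS 6 setup:

    # show the machine type currently in the domain definition
    virsh dumpxml pfsense24 | grep machine

    # list the machine types this host's qemu-kvm supports (CentOS/RHEL path)
    /usr/libexec/qemu-kvm -M ?

    # edit the <os><type machine="..."> attribute in place
    virsh edit pfsense24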
