Netgate Discussion Forum

    VLANs seems to be mostly broken with Intel SR-IOV VF

    L2/Switching/VLANs · 22 Posts · 3 Posters · 1.3k Views
    • HLPPC Galactic Empire @nazar-pc
      last edited by HLPPC

      @nazar-pc

      Here are Proxmox and NTP instructions too.

      pve

      https://tinyurl.com/ru7jn2c8

      NTP burst issues
      https://tinyurl.com/6fwfuezx

      • HLPPC Galactic Empire @nazar-pc
        last edited by HLPPC

        @nazar-pc

        https://docs.netgate.com/pfsense/en/latest/network/broadcast-domains.html

        The fewer broadcast domains the better, I think, at least from the VM's perspective, or the hypervisor's.

        • HLPPC Galactic Empire @nazar-pc
          last edited by

          @nazar-pc

          Could be a checksum offloading issue too. Disable hardware checksum offload when using VirtIO NICs with Proxmox VE.
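
          A quick way to test this from a shell, as a sketch only: the interface name vtnet0 is an assumption (a VirtIO NIC; an Intel SR-IOV VF would typically show up as ixv0 instead), and the permanent setting lives in the GUI under System > Advanced > Networking.

          # Temporarily disable hardware checksum offload on one interface
          # (hypothetical interface name; substitute the real one)
          ifconfig vtnet0 -rxcsum -txcsum
          # To make it permanent: System > Advanced > Networking >
          # "Disable hardware checksum offload", then reboot.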

          • nazar-pc
            last edited by

            @HLPPC said in VLANs seems to be mostly broken with Intel SR-IOV VF:

            You asked a lunatic question involving SFP+s in an unknown VM with an unknown CPU and mobo and whether or not it is official hardware. I gave you lunatic answers trying to make it work.

            I appreciate the effort, but you can always ask clarifying questions about the setup if I missed something important; I'm happy to clarify.

            As mentioned in the very first post, I have an Intel NIC with two SFP+ ports that supports SR-IOV. The VM is just a simple KVM-based one on a Linux host, with a virtual function device assigned to the VM running pfSense. I don't have InfiniBand, Wi-Fi, IPv6, cloud, TPM or the other seemingly random things you have mentioned. I have no idea what NTP and WoL have to do with any of this either.

            The interface works fine without VLANs, and also works with VLANs until reboot; but once VLANs are added it hangs on boot and the interfaces don't work after that.

            So as far as I'm concerned there are no hardware issues here, and no driver issues either; it is something pfSense-specific (or maybe FreeBSD in general) that is problematic with VLANs, specifically at boot time. Maybe the ordering of things at boot is off or something.

            • Gblenn @nazar-pc
              last edited by Gblenn

              @nazar-pc said in VLANs seems to be mostly broken with Intel SR-IOV VF:

              Honestly I'm not following what you're trying to say, @HLPPC, these messages look like AI-generated hallucination to me

              +1 on that... I'm seeing similar in other threads unfortunately.

              But regarding your problem, you mention pfSense running as a VM. So you create these "virtual functions" of your NIC in the hypervisor? What hypervisor are you running and how is your setup exactly?
              Are you saying that you are using the physical port for more VMs than pfSense, and for things other than WAN?

              • nazar-pc @Gblenn
                last edited by

                @Gblenn said in VLANs seems to be mostly broken with Intel SR-IOV VF:

                But regarding your problem, you mention pfSense running as a VM. So you create these "virtual functions" of your NIC in the hypervisor? What hypervisor are you running and how is your setup exactly?

                I'm creating the virtual functions with udev on the Linux host like this:

                ACTION=="add", SUBSYSTEM=="net", KERNEL=="intel-ocp-0", ATTR{device/sriov_numvfs}="2"
                

                By the time the VM starts they already exist like "normal" PCIe devices. The VM is created with libvirt and I just take such a PCIe device and assign it to the VM. pfSense mostly treats them as normal-ish Intel NICs as far as I can see.
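
                For reference, a minimal sketch of what that assignment looks like in the libvirt domain XML, assuming the VF enumerated at host PCI address 0000:03:10.0 (a placeholder; check lspci on the host for the real one):

                <!-- PCI passthrough of one SR-IOV VF to the pfSense guest
                     (hypothetical host address; substitute the VF's real bus/slot/function) -->
                <hostdev mode='subsystem' type='pci' managed='yes'>
                  <source>
                    <address domain='0x0000' bus='0x03' slot='0x10' function='0x0'/>
                  </source>
                </hostdev>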

                @Gblenn said in VLANs seems to be mostly broken with Intel SR-IOV VF:

                Are you saying that you are using the physical port for more VMs than pfSense, and for things other than WAN?

                The physical port I have is connected to a switch on the other end. The switch wraps two WANs into VLANs, and I want to extract both WAN and WAN2 from the virtual function in pfSense. In this particular case the physical function may or may not be used on the host, but to the best of my knowledge that is mostly irrelevant to what happens to the specific virtual function I'm assigning to the VM.

                As mentioned, this whole setup works: I boot the VM, create the VLANs, assign them to WAN and WAN2, and everything works as expected. It's just that when I reboot, it hangs, times out, and the VLANs are "dead" in pfSense.
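
                For anyone debugging the same thing: the VLANs can also be recreated by hand from a pfSense/FreeBSD shell after boot, which helps separate a driver problem from a boot-ordering problem. A sketch, assuming the VF appears as ixv0 and the VLAN IDs are 10 and 20 (both assumptions; substitute the real values):

                # Recreate the VLANs manually on the VF to see whether the
                # ixv driver accepts them outside of pfSense's boot sequence
                ifconfig vlan10 create vlan 10 vlandev ixv0
                ifconfig vlan20 create vlan 20 vlandev ixv0
                ifconfig vlan10 up
                ifconfig vlan20 up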

                • Gblenn @nazar-pc
                  last edited by Gblenn

                  @nazar-pc Aha, but do you really need pfSense to be "involved" with the VLANs on the switch? In fact, do you even need VLANs on the switch at all?

                  I guess it depends on your ISP and what type of connection you have, of course. I have two public IPs from the same ISP, and in my case it's the MAC on each respective WAN that determines which IP is offered to which port. But even if that doesn't work for you, which it doesn't if there are two different ISPs, couldn't you limit the VLANs to just be something between the switch and libvirt?

                  I run Proxmox and set IDs on some ports to "tunnel" some traffic between individual ports in my network. That VLAN ID is not used or even known by pfSense at all; it's only for the switches and e.g. one single VM...
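
                  With SR-IOV specifically there is a host-side way to do exactly that, as a sketch under some assumptions: one VF per WAN, the parent interface named intel-ocp-0 as in the udev rule above, and placeholder VLAN IDs. The NIC then tags/untags in hardware and pfSense only ever sees untagged frames, so no VLANs need to be configured in the guest at all.

                  # On the Linux host: pin a VLAN tag to each VF so the guest
                  # receives plain untagged traffic (interface name, VF indices
                  # and VLAN IDs are placeholders)
                  ip link set dev intel-ocp-0 vf 0 vlan 10   # VF 0 = WAN
                  ip link set dev intel-ocp-0 vf 1 vlan 20   # VF 1 = WAN2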

                  • HLPPC Galactic Empire @nazar-pc
                    last edited by HLPPC

                    @nazar-pc

                    As a system tunable, consider

                    hw.ix.unsupported_sfp=1 (or the equivalent tunable for whichever Intel driver your card uses)

                    Maybe try

                    sysctl -a | grep ix     # substitute your Intel driver name (ix, ixv, igb, ...)
                    pciconf -lvvv
                    ifconfig -vvv

                    And then consider disabling MSI-X in the VM if it is on. By the way, ifdisabled disables duplicate address detection with IPv6, and others have had success in FreeBSD VMs by disabling it; it isn't a robot suggestion. Dual stack sucks sometimes, and pfSense HAS to be dual-stack compliant to partner with AWS; hence it is forced to be enabled.
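
                    A sketch of what those tunables could look like in /boot/loader.conf.local, assuming the ix/ixv driver family; whether either actually helps in this case is untested:

                    # /boot/loader.conf.local (sketch only)
                    hw.ix.unsupported_sfp="1"   # accept non-Intel-branded SFP+ modules
                    hw.pci.enable_msix="0"      # fall back from MSI-X to MSI/INTx in the VM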

                    I haven't actually seen an Intel card driver show up inside a VM myself, or tried passthrough.

                    https://man.freebsd.org/cgi/man.cgi?query=iovctl&sektion=8&manpath=freebsd-release-ports

                    There might be a setup where bridging the WAN helps it out in the VM.

                    I am just throwing BSD at you to see if it helps. Because, you know, it is the reason it isn't :) There are certainly more efficient ways of doing things.

                    • HLPPC Galactic Empire @nazar-pc
                      last edited by HLPPC

                      @nazar-pc

                      Collect information.

                      RTFM.

                      Read everyone else's manual (definitely use AI).

                      Pray.

                      https://docs.netgate.com/pfsense/en/latest/bridges/create.html

                      Also, you have clones, so maybe I should be less worried about giving you info that crashes your stuff, I don't know.

                      https://docs.netgate.com/pfsense/en/latest/bridges/index.html

                      • HLPPC Galactic Empire @Gblenn
                        last edited by HLPPC

                        @Gblenn There are multicast VLANs, broadcast VLANs, Switch Virtual Interfaces (SVIs), and Multicast VLAN Registration. You may need an IGMP querier. And IGMP snooping. And to configure the NAT more completely in the VM. None of that is easy. The full-duplex 10 Gbps part seems wild. You may have to force speed/duplex instead of auto-negotiate for each VM.
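
                        If you do pin speed/duplex in FreeBSD, it is done with ifconfig media options. A sketch only: ix0 and the 10Gbase-SR media type are assumptions, and SR-IOV VFs driven by ixv typically ignore media changes entirely.

                        # Force a fixed media type instead of auto-negotiation
                        ifconfig ix0 media 10Gbase-SR
                        # ...and to return to auto-negotiation:
                        ifconfig ix0 media autoselect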

                        Some people recommend a NIC for each different VLAN, and plugging them all into the same switch, presumably to stabilize auto-negotiation.

                        I am just going to quote this since I have been stocking IGMP in traffic shaping: "IGMP Querier: An IGMP querier is a multicast router (a router or a Layer 3 switch) that sends query messages to maintain a list of multicast group memberships for each attached network, and a timer for each membership."

                        No clue where you get the time for an SFP+ module. Some people say try PTP, and others say NTP can slow you down by 30 ms. Most of the time it seems due to machdep on my Zen processor, which transmits data at 70 Gbps between each core. Some people have built GPS clocks for their pfSense.
