VLANs seem to be mostly broken with Intel SR-IOV VF
-
@nazar-pc https://man.freebsd.org/cgi/man.cgi?route
And none of that is AI generated. I search for crap until my Wi-Fi works. And my LAN runs smoothly. Usually. VMs were great for switching IPv4 public addresses on the fly. If I used AI to summarize all of that, maybe it'd make more sense.
You asked a lunatic question involving SFP+s in an unknown VM with an unknown CPU and mobo and whether or not it is official hardware. I gave you lunatic answers trying to make it work.
I definitely disable DHCP in these setups, and having it enabled even in the VM may cause issues too.
-
@nazar-pc also, the VMs may need to do some encryption with the cloud, and auto-configure your interface drivers. And maybe each VM with CPU encryption keys is a little off depending on the setup. Or whether there is TPM passthrough with other VMs.
Like, if you've only purchased one instance of pfSense+, can you clone it with 5 public IPs? I have, and it worked for a bit pre-pfSense monetization, but maybe it caused issues.
I eventually used VIPs on bare metal instead.
-
@nazar-pc There are also NTP files that sync the kernel and BIOS time, which affects interfaces. Disabling NTP, and maybe not sending kiss-of-death packets to the VM, could help. Wake-on-LAN/magic packets may need to be blocked too. Killing syslog PIDs could also reduce interference.
-
Here are Proxmox and NTP instructions too.
PVE:
https://tinyurl.com/ru7jn2c8
NTP burst issues:
https://tinyurl.com/6fwfuezx -
https://docs.netgate.com/pfsense/en/latest/network/broadcast-domains.html
The fewer broadcast domains, the better, I think, at least from the VM's perspective. Or the hypervisor's.
-
Could be a checksum offloading issue too. Disable hardware checksum offloading when using VirtIO NICs under Proxmox VE.
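If you want to test that quickly from a shell before touching the GUI checkbox under System > Advanced > Networking, here is a minimal sketch; the interface name ixv0 is an assumption, substitute whatever the VF shows up as in pfSense:
ifconfig ixv0 -rxcsum -txcsum -tso4 -tso6 -lro   # temporarily turn off checksum/TSO/LRO offload on this interface
ifconfig ixv0 -vlanhwtag -vlanhwfilter           # hardware VLAN tagging/filtering can be toggled the same way
If the VLANs behave after that, the offload settings are the place to look (changes made this way don't survive a reboot).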
-
@HLPPC said in VLANs seem to be mostly broken with Intel SR-IOV VF:
You asked a lunatic question involving SFP+s in an unknown VM with an unknown CPU and mobo and whether or not it is official hardware. I gave you lunatic answers trying to make it work.
I appreciate the effort, but you can always ask clarifying questions about the setup if I missed something important; I'm happy to clarify.
As mentioned in the very first post, I have an Intel NIC with two SFP+ ports that supports SR-IOV. The VM is just a simple KVM-based one on a Linux host, with a virtual function device assigned to the VM running pfSense. I don't have InfiniBand, Wi-Fi, IPv6, cloud, TPM or the other seemingly random things you have mentioned. I have no idea what NTP and WoL have to do with any of this either.
The interface works fine without VLANs and also works with VLANs until reboot, but once VLANs are added it hangs on boot and the interfaces are not working after that.
So as far as I'm concerned there are no hardware issues here, and no driver issues either; it is just something pfSense-specific (or maybe FreeBSD in general) that is problematic when it comes to VLANs, and specifically at boot time. Maybe the ordering of things at boot is off or something.
-
@nazar-pc said in VLANs seem to be mostly broken with Intel SR-IOV VF:
Honestly I'm not following what you're trying to say, @HLPPC, these messages look like AI-generated hallucination to me
+1 on that... I'm seeing similar in other threads unfortunately.
But regarding your problem, you mention pfSense running as a VM. So you create these "virtual functions" of your NIC in the hypervisor? What hypervisor are you running and how is your setup exactly?
Are you saying that you are using the physical port for more VMs than pfSense, and for other things than WAN? -
@Gblenn said in VLANs seem to be mostly broken with Intel SR-IOV VF:
But regarding your problem, you mention pfSense running as a VM. So you create these "virtual functions" of your NIC in the hypervisor? What hypervisor are you running and how is your setup exactly?
I'm creating the virtual functions with udev on the Linux host like this:
ACTION=="add", SUBSYSTEM=="net", KERNEL=="intel-ocp-0", ATTR{device/sriov_numvfs}="2"
By the time the VM starts they already exist like "normal" PCIe devices. The VM is created with libvirt and I just take such a PCIe device and assign it to the VM. pfSense mostly treats them as normal-ish Intel NICs as far as I can see.
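For completeness, the VFs can be sanity-checked on the host before the VM grabs them (plain sysfs/iproute2, using the intel-ocp-0 name from the rule above):
cat /sys/class/net/intel-ocp-0/device/sriov_numvfs   # should print 2
lspci | grep -i "virtual function"                   # the VFs show up as separate PCIe functions
ip link show intel-ocp-0                             # lists "vf 0 ..." / "vf 1 ..." with MAC, VLAN and spoof checking state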
@Gblenn said in VLANs seem to be mostly broken with Intel SR-IOV VF:
Are you saying that you are using the physical port for more VMs than pfSense, and for other things than WAN?
The physical port I have is connected to a switch on the other end. The switch wraps the two WANs into VLANs and I want to extract both WAN and WAN2 from the virtual function in pfSense. In this particular case the physical function may or may not be used on the host, but to the best of my knowledge it is mostly irrelevant to what is happening with the specific virtual function I'm assigning to the VM.
As mentioned, this whole setup works. I boot the VM, create VLANs, assign them to WAN and WAN2, and everything works as expected. It's just when I reboot that it hangs, times out, and the VLANs are "dead" in pfSense.
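For reference, the host side of the VF can also be relaxed in case host-enforced filters were part of the problem; whether VLAN/spoof-check filtering actually matters for this card is just a guess on my side (intel-ocp-0 and VF index 0 as above):
ip link set dev intel-ocp-0 vf 0 vlan 0         # make sure no host-enforced VLAN overrides guest tagging
ip link set dev intel-ocp-0 vf 0 spoofchk off   # some PF drivers drop guest-tagged frames with spoof checking on
ip link set dev intel-ocp-0 vf 0 trust on       # allow the guest to change MAC/VLAN settings (driver-dependent)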
-
@nazar-pc Aha, but do you really need pfSense to be "involved" with the VLANs on the switch? In fact, do you even need VLANs on the switch at all??
I guess it depends on your ISP and what type of connection you have, of course. I have two public IPs from the same ISP, and in my case it's the MAC on each respective WAN that determines which IP is offered to which port. But even if that doesn't work for you, which it doesn't if it is two different ISPs, couldn't you limit the VLAN to just be something between the switch and libvirt?
I run Proxmox and set IDs on some ports to "tunnel" some traffic between individual ports in my network. So that VLAN ID is not used or even known by pfSense at all, it's only for the switches and e.g. one single VM...
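On a plain libvirt host that could look something like this (I use Proxmox, so this is just a sketch of the idea; eno1 and VLAN 101 are made-up names): terminate the VLAN on the host and give pfSense an untagged bridge instead.
ip link add link eno1 name eno1.101 type vlan id 101   # VLAN terminated on the host, not in pfSense
ip link set eno1.101 up
ip link add br-wan2 type bridge
ip link set br-wan2 up
ip link set eno1.101 master br-wan2                    # attach pfSense's second WAN vNIC to br-wan2, untagged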
-
As a system tunable, consider
hw.ix.unsupported_sfp=1 (or the equivalent for whatever Intel driver your card uses)
Maybe try
sysctl -a | grep (your Intel driver)
pciconf -lvvv
ifconfig -vvv
and then consider disabling MSI-X in the VM if it is on. BTW, ifdisabled disables duplicate address detection with IPv6, and others have had success in FreeBSD VMs by disabling it; it isn't a robot suggestion. Dual stack sucks sometimes, and pfSense HAS to be dual-stack compliant to partner with AWS; hence it is forced to be enabled.
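If you do try those, they would normally go in /boot/loader.conf.local on pfSense, something like the lines below; the values are only examples, and with SR-IOV the SFP itself is handled by the host's PF driver, so the SFP tunable may not even apply inside the VM.
hw.ix.unsupported_sfp="1"   # let ix(4) bring the link up with third-party SFP+ modules
hw.pci.enable_msix="0"      # only for testing with MSI-X disabled; expect a performance hit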
I haven't actually seen an Intel card driver show up inside a VM myself, or tried passthrough.
https://man.freebsd.org/cgi/man.cgi?query=iovctl&sektion=8&manpath=freebsd-release-ports
There might be a setup where bridging the WAN helps it out in the VM.
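Roughly, on the FreeBSD side that bridge would be built like this (ixv0/ixv1 are just stand-ins for whatever the VF interfaces are called; in pfSense you would normally do it through the GUI bridges page instead):
ifconfig bridge0 create
ifconfig bridge0 addm ixv0 addm ixv1 up   # add members and bring the bridge up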
I am just throwing BSD at you to see if it helps. Because, you know, it is probably the reason it isn't working :) there are certainly more efficient ways of doing things.
-
collect information
rtfm
read everyone else's manual (definitely use ai)
pray
https://docs.netgate.com/pfsense/en/latest/bridges/create.html
Also, you have clones, so maybe I should be less worried about giving you info that crashes your stuff, idk.
https://docs.netgate.com/pfsense/en/latest/bridges/index.html
-
@Gblenn there are multicast VLANs, broadcast VLANs, Switch Virtual Interfaces (SVIs), and Multicast VLAN Registration. You may need an IGMP querier. And IGMP snooping. And to configure the NAT more completely in the VM. None of that is easy. The full-duplex 10 Gbps part seems wild. You may have to force speed/duplex instead of auto-negotiate for each VM.
Some people recommend a NIC for each different VLAN, and plugging them all into the same switch, presumably to stabilize autonegotiation.
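If anyone does want to pin the media instead of autoselect, on the FreeBSD side it would look roughly like the line below; ix0 and the media type are assumptions, and an SR-IOV VF normally just inherits link state from the PF, so this really only applies to the physical port.
ifconfig ix0 media 10Gbase-SR   # pin the media type rather than autoselect; full duplex is implied at 10G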
I am just going to quote this since I have been stocking IGMP in traffic shaping:
IGMP Querier
An IGMP querier is a multicast router (a router or a Layer 3 switch) that sends query messages to maintain a list of multicast group memberships for each attached network, and a timer for each membership.
No clue where you get the time for an SFP+ module. Some people say try PTP and others say NTP can slow you down by 30 ms. Most of the time it seems due to machdep on my Zen processor, which transmits data at 70 Gbps between each core. Some people have built GPS time sources for their pfSense.