• Chelsio T4 VF is not recognized as NIC

    9
    0 Votes
    9 Posts
    878 Views
    nazar-pcN
    My patch https://github.com/pfsense/FreeBSD-src/pull/57, which fixes the above Redmine ticket (by enabling the corresponding driver in the kernel config), was merged last month and will be part of 2.9.0, whenever that comes out.
  • vm_fault: pager read error, pid 98103 (rrdtool)

    2
    0 Votes
    2 Posts
    217 Views
    K
    So I ended up running memtest86+ on the XCP-ng host. No RAM errors reported -- all 14 tests x 4. I'll probably just end up reinstalling the VM, but it's really strange. There isn't much out on the web about this type of error, and what is out there suggests RAM module failure; however, I'm not certain that applies within a VM, as clearly the host RAM is not the cause. I'm not getting a lot of feedback on this issue, unfortunately.
  • UNMAP failed on Boot

    3
    0 Votes
    3 Posts
    287 Views
    P
    @geovaneg It may not be related, but whilst the PVSCSI controller is traditionally more performant, I've used the LSI Logic SAS controller on my installs without issue. I believe it is also mentioned in the docs.
  • IPv4 stops working on Hyper-V after some time

    1
    0 Votes
    1 Posts
    123 Views
    No one has replied
  • Does this setup work? PfSense 2.8.0 + VmWare 8.0 + Guest OS FreeBSD 14

    4
    0 Votes
    4 Posts
    450 Views
    G
    Hi, I don't know if there's any direct connection, but I decided to mention this issue here because this UNMAP error message is occurring on the new VMs I'm creating with this setup. I've created a specific thread here: https://forum.netgate.com/topic/198476/unmap-failed-on-boot Thanks, Geovane
  • pfSense install extremely slow under Proxmox 8.4.8

    5
    0 Votes
    5 Posts
    350 Views
    P
    @KOM Oh! :) Thanks!
  • C3xxx QAT via VFs

    2
    0 Votes
    2 Posts
    289 Views
    O
    Browsing the QAT driver sources for FreeBSD and Linux, it sure looks like there simply isn't a driver for the VF PCI device IDs for C3xxx in the FreeBSD tree, but there is for Linux. Am I reading this correctly? Can pfSense even use all of the QAT engines in the C3xxx silicon at the same time, to warrant passing through the entire device as is seemingly required? I was thinking it would be useful to leave some VFs for host ZFS use and possibly for another guest application. Is it worth the effort to copy and paste the PF driver to make a VF version?
  • Hyper-V Console Dimensions/Resolution

    3
    0 Votes
    3 Posts
    264 Views
    B
    @provels Thank you for the reply. Only two modes were available after the loader changes: 80x25 and 80x50. This gave me a starting point to learn more, but I got lost again as I tried to learn about KMS and DRM and Xorg and EDID and vt(4) and syscons and kernels and compiling and scteken and framebuffer and...
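    For anyone retracing those steps: vt(4) reads its framebuffer console mode from a loader tunable, so /boot/loader.conf is the usual place to experiment. A minimal sketch (the mode value is illustrative, and whether Hyper-V's synthetic video actually offers the requested mode is exactly the open question above):

    ```
    # /boot/loader.conf -- ask vt(4) for a larger framebuffer console mode
    kern.vt.fb.default_mode="1024x768"
    ```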
  • This topic is deleted!

    1
    0 Votes
    1 Posts
    14 Views
    No one has replied
  • pfSense 2.7.2 in Hyper-V freezing with no crash report after reboot

    Moved
    62
    0 Votes
    62 Posts
    10k Views
    T
    Yesterday we built a new pfSense 2.7.2 cluster. The master firewall had been running for over a week without problems, but about half an hour after setting up CARP and pfSync to the new slave, it died with the known hvevent problem. It then died several times, again and again. Not sure, but maybe it has something to do with either CARP/ConfigSync/pfSync or multicast traffic (we also know of dying pfSense setups without CARP configured, so it might be multicast traffic in the network that triggers something). We have had the same experience with our only OPNsense setup, whose master has been running smoothly since we removed the slave firewall.
  • Help with disk resize

    1
    0 Votes
    1 Posts
    130 Views
    No one has replied
  • How to use non-legacy virtio networking with libvirt?

    7
    0 Votes
    7 Posts
    687 Views
    nazar-pcN
    @wickeren I actually had it enabled with the legacy version (but it didn't make a difference); when switching to modern I removed it. I should probably add it back and see if there is a difference; however, as mentioned in the links in the first post, I don't think pfSense has the corresponding support enabled in the kernel anyway. There must be something equivalent in Proxmox as well; it probably lays out the PCIe architecture in a way that produces legacy devices, just like in my case originally. I'm still puzzled as to why that was the case, but glad it is resolved. Here is the full QEMU command that libvirt generates for the VM, in case it is helpful:

    /usr/bin/qemu-system-x86_64 -name guest=pfSense,debug-threads=on -S -object {"qom-type":"secret","id":"masterKey0","format":"raw","file":"/var/lib/libvirt/qemu/domain-26-pfSense/master-key.aes"} -blockdev {"driver":"file","filename":"/usr/share/OVMF/OVMF_CODE_4M.fd","node-name":"libvirt-pflash0-storage","auto-read-only":true,"discard":"unmap"} -blockdev {"node-name":"libvirt-pflash0-format","read-only":true,"driver":"raw","file":"libvirt-pflash0-storage"} -blockdev {"driver":"file","filename":"/var/lib/libvirt/qemu/nvram/pfSense_VARS.fd","node-name":"libvirt-pflash1-storage","auto-read-only":true,"discard":"unmap"} -blockdev {"node-name":"libvirt-pflash1-format","read-only":false,"driver":"raw","file":"libvirt-pflash1-storage"} -machine pc-q35-8.2,usb=off,dump-guest-core=off,memory-backend=pc.ram,pflash0=libvirt-pflash0-format,pflash1=libvirt-pflash1-format,hpet=off,acpi=on -accel kvm -cpu host,migratable=on -m size=2097152k -object {"qom-type":"memory-backend-ram","id":"pc.ram","size":2147483648} -overcommit mem-lock=off -smp 8,sockets=1,dies=1,cores=8,threads=1 -uuid REDACTED -no-user-config -nodefaults -chardev socket,id=charmonitor,fd=38,server=on,wait=off -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=delay -no-shutdown -global ICH9-LPC.disable_s3=1 -global ICH9-LPC.disable_s4=1 -boot menu=off,strict=on -device {"driver":"pcie-root-port","port":16,"chassis":1,"id":"pci.1","bus":"pcie.0","multifunction":true,"addr":"0x2"} -device {"driver":"pcie-root-port","port":17,"chassis":2,"id":"pci.2","bus":"pcie.0","addr":"0x2.0x1"} -device {"driver":"pcie-root-port","port":18,"chassis":3,"id":"pci.3","bus":"pcie.0","addr":"0x2.0x2"} -device {"driver":"pcie-root-port","port":19,"chassis":4,"id":"pci.4","bus":"pcie.0","addr":"0x2.0x3"} -device {"driver":"pcie-root-port","port":20,"chassis":5,"id":"pci.5","bus":"pcie.0","addr":"0x2.0x4"} -device {"driver":"pcie-root-port","port":21,"chassis":6,"id":"pci.6","bus":"pcie.0","addr":"0x2.0x5"} -device {"driver":"ich9-usb-ehci1","id":"usb","bus":"pcie.0","addr":"0x1d.0x7"} -device {"driver":"ich9-usb-uhci1","masterbus":"usb.0","firstport":0,"bus":"pcie.0","multifunction":true,"addr":"0x1d"} -device {"driver":"ich9-usb-uhci2","masterbus":"usb.0","firstport":2,"bus":"pcie.0","addr":"0x1d.0x1"} -device {"driver":"ich9-usb-uhci3","masterbus":"usb.0","firstport":4,"bus":"pcie.0","addr":"0x1d.0x2"} -blockdev {"driver":"file","filename":"/var/lib/libvirt/images/pfSense.qcow2","node-name":"libvirt-1-storage","auto-read-only":true,"discard":"unmap"} -blockdev {"node-name":"libvirt-1-format","read-only":false,"driver":"qcow2","file":"libvirt-1-storage","backing":null} -device {"driver":"virtio-blk-pci","bus":"pci.3","addr":"0x0","drive":"libvirt-1-format","id":"virtio-disk0","bootindex":1} -netdev {"type":"tap","fd":"39","vhost":true,"vhostfd":"44","id":"hostnet0"} -device {"driver":"virtio-net-pci","netdev":"hostnet0","id":"net0","mac":"REDACTED","bus":"pci.1","addr":"0x0"} -netdev {"type":"tap","fd":"45","vhost":true,"vhostfd":"46","id":"hostnet1"} -device {"driver":"virtio-net-pci","netdev":"hostnet1","id":"net1","mac":"REDACTED","bus":"pci.2","addr":"0x0"} -netdev {"type":"tap","fd":"47","vhost":true,"vhostfd":"48","id":"hostnet2"} -device {"driver":"virtio-net-pci","netdev":"hostnet2","id":"net2","mac":"REDACTED","bus":"pci.5","addr":"0x0"} -netdev {"type":"tap","fd":"49","vhost":true,"vhostfd":"50","id":"hostnet3"} -device {"driver":"virtio-net-pci","netdev":"hostnet3","id":"net3","mac":"REDACTED","bus":"pci.6","addr":"0x0"} -chardev pty,id=charserial0 -device {"driver":"isa-serial","chardev":"charserial0","id":"serial0","index":0} -audiodev {"id":"audio1","driver":"spice"} -spice port=5901,addr=127.0.0.1,disable-ticketing=on,seamless-migration=on -device {"driver":"qxl-vga","id":"video0","max_outputs":1,"ram_size":67108864,"vram_size":67108864,"vram64_size_mb":0,"vgamem_mb":16,"bus":"pcie.0","addr":"0x1"} -global ICH9-LPC.noreboot=off -watchdog-action reset -device {"driver":"virtio-balloon-pci","id":"balloon0","bus":"pci.4","addr":"0x0"} -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny -msg timestamp=on

    And this is the XML domain config it was generated from:

    <domain type="kvm"> <name>pfSense</name> <uuid>REDACTED</uuid> <metadata> <libosinfo:libosinfo xmlns:libosinfo="http://libosinfo.org/xmlns/libvirt/domain/1.0"> <libosinfo:os id="http://freebsd.org/freebsd/14.0"/> </libosinfo:libosinfo> </metadata> <memory unit="KiB">2097152</memory> <currentMemory unit="KiB">2097152</currentMemory> <vcpu placement="static" cpuset="8-11,24-27">8</vcpu> <os firmware="efi"> <type arch="x86_64" machine="pc-q35-8.2">hvm</type> <firmware> <feature enabled="no" name="enrolled-keys"/> <feature enabled="no" name="secure-boot"/> </firmware> <loader readonly="yes" secure="no" type="pflash">/usr/share/OVMF/OVMF_CODE_4M.fd</loader> <nvram template="/usr/share/OVMF/OVMF_VARS_4M.fd">/var/lib/libvirt/qemu/nvram/pfSense_VARS.fd</nvram> <boot dev="hd"/> <bootmenu enable="no"/> </os> <features> <acpi/> <apic/> </features> <cpu mode="host-passthrough" check="none" migratable="on"> <topology sockets="1" dies="1" cores="8" threads="1"/> </cpu> <clock offset="utc"> <timer name="rtc"
tickpolicy="catchup"/> <timer name="pit" tickpolicy="delay"/> <timer name="hpet" present="no"/> </clock> <on_poweroff>destroy</on_poweroff> <on_reboot>restart</on_reboot> <on_crash>restart</on_crash> <pm> <suspend-to-mem enabled="no"/> <suspend-to-disk enabled="no"/> </pm> <devices> <emulator>/usr/bin/qemu-system-x86_64</emulator> <disk type="file" device="disk"> <driver name="qemu" type="qcow2"/> <source file="/var/lib/libvirt/images/pfSense.qcow2"/> <target dev="vda" bus="virtio"/> <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/> </disk> <controller type="sata" index="0"> <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/> </controller> <controller type="pci" index="0" model="pcie-root"/> <controller type="pci" index="1" model="pcie-root-port"> <model name="pcie-root-port"/> <target chassis="1" port="0x10"/> <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/> </controller> <controller type="pci" index="2" model="pcie-root-port"> <model name="pcie-root-port"/> <target chassis="2" port="0x11"/> <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/> </controller> <controller type="pci" index="3" model="pcie-root-port"> <model name="pcie-root-port"/> <target chassis="3" port="0x12"/> <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/> </controller> <controller type="pci" index="4" model="pcie-root-port"> <model name="pcie-root-port"/> <target chassis="4" port="0x13"/> <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/> </controller> <controller type="pci" index="5" model="pcie-root-port"> <model name="pcie-root-port"/> <target chassis="5" port="0x14"/> <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/> </controller> <controller type="pci" index="6" model="pcie-root-port"> <model name="pcie-root-port"/> <target chassis="6" port="0x15"/> <address type="pci" domain="0x0000" bus="0x00" 
slot="0x02" function="0x5"/> </controller> <controller type="pci" index="7" model="pcie-root-port"> <model name="pcie-root-port"/> <target chassis="7" port="0x16"/> <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/> </controller> <controller type="usb" index="0" model="qemu-xhci" ports="15"> <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/> </controller> <interface type="bridge"> <mac address="REDACTED"/> <source bridge="wan"/> <target dev="pfsense-wan"/> <model type="virtio"/> <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/> </interface> <interface type="bridge"> <mac address="REDACTED"/> <source bridge="wan2"/> <target dev="pfsense-wan2"/> <model type="virtio"/> <address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/> </interface> <interface type="bridge"> <mac address="REDACTED"/> <source bridge="lan"/> <target dev="pfsense-lan"/> <model type="virtio"/> <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/> </interface> <interface type="bridge"> <mac address="REDACTED"/> <source bridge="guest"/> <target dev="pfsense-guest"/> <model type="virtio"/> <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/> </interface> <serial type="pty"> <target type="isa-serial" port="0"> <model name="isa-serial"/> </target> </serial> <console type="pty"> <target type="serial" port="0"/> </console> <input type="mouse" bus="ps2"/> <input type="keyboard" bus="ps2"/> <graphics type="spice" autoport="yes"> <listen type="address"/> </graphics> <audio id="1" type="spice"/> <video> <model type="virtio" heads="1" primary="yes"/> <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/> </video> <watchdog model="itco" action="reset"/> <memballoon model="virtio"> <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/> </memballoon> </devices> </domain>
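    One detail worth pulling out of the dump above: on a Q35 machine, QEMU's virtio-pci devices default to modern-only (virtio 1.0, no legacy interface) when they sit behind a pcie-root-port, which appears to be why attaching each NIC to its own root port resolved this. A minimal interface fragment in that style, modeled on the config above (the bridge name and PCI bus number are illustrative):

    ```xml
    <!-- virtio NIC placed on a pcie-root-port (bus 0x05): negotiates as modern virtio -->
    <interface type="bridge">
      <source bridge="lan"/>
      <model type="virtio"/>
      <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
    </interface>
    ```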
  • Isolated Network: Howto Configure a Lab Network with VLANs?

    1
    0 Votes
    1 Posts
    227 Views
    No one has replied
  • Export config from VMDK?

    3
    0 Votes
    3 Posts
    617 Views
    P
    @SteveITS Thanks, appreciate this. That's what I ended up doing. PITA for sure, lol. Having some residual weirdness: packages didn't come back, and it won't let me install any, saying I have an instance of pfSense already running. Wish there were a way to add the config I edited with my new interface names to my VMDK that had all the packages, etc.
  • A virtual pfSense as ^config viewer^ and as ^back-up pfSense^

    4
    0 Votes
    4 Posts
    751 Views
    G
    @louis2 I do passthrough of disks to TrueNAS, and I pass through the NICs (LAN and WAN) to pfSense, simply because I want each application to be in total control of what it does best. But I also have a second TrueNAS at our summer home where the disks are virtualized, and I have been running firewalls with virtualized NICs, no problem... In fact, I have my failover WAN assigned using VirtIO, and no issues whatsoever...
  • Azure pfSense Plus 23.09 VM backup failed

    Moved azure backup waagent 23.09
    21
    0 Votes
    21 Posts
    4k Views
    stephenw10S
    In the meantime you can still always make backups with it stopped. That has always worked, though obviously it's far more inconvenient. We are digging...
  • pfSense 2.7.2 crash report without OS reboot

    2
    0 Votes
    2 Posts
    341 Views
    K
    @mic-bummer That appears to be a bug in the pfsync code. Try the 2.8.0 beta. There are a large number of pfsync fixes in there, and it's quite likely that this problem is already fixed there.
  • 0 Votes
    13 Posts
    2k Views
    G
    @Ghost-0 Well, I have been using Docker/Portainer to deploy a number of applications and services, but I have no clue about the Frigate or Coral that you mention. The way I have tried to install most or all of my Docker containers is as Stacks, and when I have been unable to find a Docker Compose file to use, I have asked ChatGPT or Claude to create one based on a Docker command. The benefit is that the structure is much more visible and simpler to edit. It seems to me it's a fairly complex installation that you are trying to do. I found this YouTube video of a guy installing what I think you are also attempting: https://www.youtube.com/watch?v=zKk9dnAp8FM
  • 0 Votes
    3 Posts
    639 Views
    F
    I did reboot the pfSense VM and made sure from ip a on the Proxmox host that there was no IP set for the interfaces I was bridging, and it magically worked. Actually, now it's even better: in the Proxmox interface I managed to set an IP for the interface that is being bridged, so that when the pfSense VM is down I can still reach Proxmox on its fixed IP on the LAN side.
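    For anyone reproducing this: on a Proxmox host the bridge gets its host-side address in /etc/network/interfaces, the same as any other interface. A minimal sketch, with the bridge name, port, and addresses purely illustrative:

    ```
    # /etc/network/interfaces -- give the host its own IP on the LAN bridge
    auto vmbr1
    iface vmbr1 inet static
        address 192.168.1.2/24      # host stays reachable even when the VM is down
        bridge-ports enp2s0         # physical NIC attached to the bridge
        bridge-stp off
        bridge-fd 0
    ```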
  • pfSense Plus Beta not finding NICs

    2
    0 Votes
    2 Posts
    525 Views
    N
    @FInnishFlash Doesn't work even with bridged interfaces, and virtio
Copyright 2025 Rubicon Communications LLC (Netgate). All rights reserved.