• onboard 4 port NIC, secure for PFsense use?

    8
    0 Votes
    8 Posts
    963 Views

    Thanks for taking your time, appreciate the help.

  • KVM and pfsense CPU overhead

    4
    0 Votes
    4 Posts
    1k Views

    @l0rdraiden Hi,
    Yes, I found a solution for virtualizing pfSense: I switched from KVM/ESXi to Xen and it's working perfectly now.

  • VMware ESXi 6.7 increased latency with NIC Passthrough

    2
    0 Votes
    2 Posts
    2k Views

    Just a follow-up here in case anyone experiences issues like this in the future.

    I was able to resolve this by changing two settings. First, I set the ESXi host power management policy to High Performance (it defaults to Balanced). This stopped the clock throttling on the CPU, which in my case is a very low-power 10 W SoC. I did not notice an increase in power usage, but it helped stabilize the responsiveness of the ESXi host on this low-powered system.

    The second change was to edit the pfSense VM, go to advanced options, and set Latency Sensitivity to High (by default it is set to Normal). A High setting means the VM's CPU and memory must be fully reserved for it to power on, so ensure that your host has enough resources to support this.

    Changing the two settings above resolved the latency differences that I was seeing. I am now getting identical latency readings between bare metal and a virtualized pfSense VM with NIC passthrough.

    I tested this method with 2.4.4p2, 2.4.4p3, and current builds of the 2.5 development images. All showed the same result: latency reduced to that of a bare-metal system.
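
    The two changes above can also be captured in the VM's configuration. As a sketch (the VMX key below comes from VMware's latency-tuning guidance and should be verified against your ESXi build; as noted above, High latency sensitivity also requires full CPU and memory reservations on the VM):

```ini
; .vmx entry equivalent to setting "Latency Sensitivity: High" in the vSphere UI
sched.cpu.latencySensitivity = "high"
```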

  • Loving pfSense in my vSphere environment, but...high CPU...

    13
    0 Votes
    13 Posts
    4k Views

    OK:
    Plain FreeBSD 11 on ESXi 6.7u2 with an i350-T4 in PCI passthrough -> iperf at 1 Gbit/s = 80% CPU load on a 3.9 GHz i3-7100.
    Plain FreeBSD 12 on ESXi 6.7u2 with an i350-T4 in PCI passthrough -> iperf at 1 Gbit/s = 6% CPU load on a 3.9 GHz i3-7100.

    Edit: it seems the host CPU ramps up to 80% on both FreeBSD 11 and 12 when I run 1 Gbit/s through the VMs, while the VMs themselves show only 12% CPU load inside (htop).
    I guess that's just how FreeBSD handles packets; a Debian VM shows 12% host load and 8% VM load during the same iperf run.
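
    For anyone wanting to reproduce numbers like these, a typical measurement setup looks like the following (a sketch assuming iperf3; the server address is an example, and the original post may have used classic iperf):

```sh
# On the FreeBSD/pfSense VM (server side)
iperf3 -s

# On another LAN host (client side), run a 30-second test
iperf3 -c 192.168.1.1 -t 30

# Meanwhile, watch per-thread CPU on the FreeBSD guest,
# including kernel threads (-S system procs, -H threads)
top -SH
```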

  • pfSense on ESXi 6.7 - slow throughput

    15
    1 Votes
    15 Posts
    4k Views

    @johnpoz

    Hello,

    I did not understand it well: did you remove autonegotiation on the WAN in ESXi, or in pfSense? I will try it.

    Thank you

  • virtualbox pfsense

    6
    0 Votes
    6 Posts
    1k Views
    johnpoz

    @lmh1 said in virtualbox pfsense:

    http://192.168.2.2:2/

    What?? Select port 2??? Why in the hell would you think you need to select port 2? If you're trying to get the pfSense GUI to listen on port 2, that is a HORRIBLE idea... I would suggest you just leave it at the default, or use a more common port for the web GUI. Not port 2.

    As for what card to use in VirtualBox, I suggest you look at their docs; this is not a support forum for VirtualBox.
    https://www.virtualbox.org/wiki/Documentation

    I think your list of stuff you don't have much experience with would fill a library ;)

  • AWS AMI - Spot Instances not avail?

    1
    0 Votes
    1 Posts
    218 Views
    No one has replied
  • Azure Firewall Setup

    14
    0 Votes
    14 Posts
    2k Views

    Was this ever worked out? I am in a similar situation...

  • Pfsense

    3
    0 Votes
    3 Posts
    452 Views
    johnpoz

    You have a 4-port NIC in your host. Which vSwitches do you have each port connected to?

    What part of this is confusing?
    https://docs.netgate.com/pfsense/en/latest/virtualization/virtualizing-pfsense-with-vmware-vsphere-esxi.html

  • PfSense in VirtualBox

    15
    0 Votes
    15 Posts
    7k Views

    So I tried both Debian and ESXi 6.7.
    On both it is possible to do PCI passthrough of one of the network cards.
    Both run pretty much the same; I don't think there's much of a difference in the resources they use.
    Using Debian I can still use my PC as a media station / media server on the LAN. I could also install a few other things and have some LAN or WAN servers on it, but I've never gotten to it.
    Running ESXi I can have a web server, a file server, and my own mail server (a long-time obsession of mine), but I cannot use the unit for displaying any media to a TV / monitor.

    The issue with Debian is that it is set for automatic security updates (normal in my mind), and a couple of times already it rebooted on its own and the VM did not come back up due to various issues; in the last incident it was something related to the display adapter for the VM.

    ESXi, on the other hand, comes back up reliably and autostarts the VMs just fine, but I basically lose the GPU.

    Bottom line, there's no happy medium.

    On another note, what is the difference, security-wise, between running a bare-metal hypervisor like ESXi and running libvirt on Debian, all under the assumption that the network cards are passed through rather than bridged? My understanding is that with PCI passthrough the hardware is handed to the VM with no underlying hypervisor interaction, so to compromise anything one would have to exploit a firmware vulnerability in the NIC, or a software vulnerability in pfSense/OPNsense itself, before reaching the host in any way.

  • pfSense with Ubuntu 18.10 and XEN 4.9.2

    3
    1 Votes
    3 Posts
    633 Views

    I removed the parameter from /boot/defaults/loader.conf and put it in

    /boot/loader.conf.local

    The upgrade from 2.4.3 to 2.4.2_2 worked fine; the parameter (and NICs) are still there.
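
    For context: /boot/defaults/loader.conf belongs to the base system and is replaced on upgrade, while /boot/loader.conf.local is the place pfSense preserves for custom tunables. The original post doesn't name the parameter that was moved, so the tunable below is a placeholder:

```sh
# /boot/loader.conf.local -- read at boot, preserved across pfSense upgrades
hw.example.tunable="1"   # placeholder; substitute the tunable you actually need
```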

  • LAN NIC stops passing traffic on Proxmox

    4
    0 Votes
    4 Posts
    694 Views

    I think so.
    On my home installation on KVM with a virtIO NIC, the network was blocked after a few minutes until I checked "Disable Hardware Checksum Offloading"; after checking it there is no issue anymore.
    The other options are at their default values anyway.

    It's recommended to disable it in the pfSense docs as well:
    https://docs.netgate.com/pfsense/en/latest/virtualization/virtualizing-pfsense-with-proxmox.html
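
    As a sketch of what that checkbox does at the interface level (assuming the virtIO NIC shows up as vtnet0; the GUI setting is what makes it persistent):

```sh
# Disable hardware TX/RX checksum offload on a virtIO interface (runtime only)
ifconfig vtnet0 -txcsum -rxcsum
```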

  • PFsense with SR-IOV virtual function NIC

    6
    0 Votes
    6 Posts
    6k Views
    provels

    @ITFlyer Hello again. I found this; it appears it may be due to FreeBSD's lack of SR-IOV support, in any version, as a Hyper-V VM. https://docs.microsoft.com/en-us/windows-server/virtualization/hyper-v/supported-freebsd-virtual-machines-on-hyper-v#table-legend

  • 0 Votes
    6 Posts
    2k Views
    KOM

    Glad it's working for you.

  • Auto config backup with vmware templates (2.4.4)

    7
    0 Votes
    7 Posts
    872 Views

    @Gisle thanks for your reply.
    Cheers!

  • 0 Votes
    1 Posts
    480 Views
    No one has replied
  • On Restart no valid ip for Host

    3
    0 Votes
    3 Posts
    433 Views

    A static IP set in Windows 10 with gateway 192.168.1.253 (pfSense) and DNS 8.8.8.8 / 8.8.4.4 - WORKS.
    Thank you.

    A static IP via a pfSense MAC binding - DID NOT WORK.

    A static IP set in Windows with pfSense as both gateway and DNS - DID NOT WORK.
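
    The working combination above can be set from an elevated Windows prompt; the interface name "Ethernet" and the client address are assumptions, so adjust them to your setup:

```bat
netsh interface ipv4 set address name="Ethernet" static 192.168.1.50 255.255.255.0 192.168.1.253
netsh interface ipv4 set dns name="Ethernet" static 8.8.8.8
netsh interface ipv4 add dns name="Ethernet" 8.8.4.4 index=2
```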

  • Power Failures, vSwitch & Modems

    3
    0 Votes
    3 Posts
    476 Views

    @Gertjan Thanks!

    I found what you were referring to and noticed the dpinger processes. I don't have control of anything on the other side of my modem, but I do know the gateway address. I've had my IP change a few times over the years, to be honest... so I went with a Google DNS server as the monitor IP. It's 8 hops away, but it's not likely to change in the foreseeable future.

    Thanks for the assistance.
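
    For background, dpinger decides a gateway is down from packet-loss and latency thresholds computed over a sliding window of probes to the monitor IP. The Python sketch below is a hypothetical simplification of that bookkeeping (dpinger itself is a C daemon; the class and threshold names here are made up for illustration):

```python
# Sketch of the rolling loss/latency statistics a gateway monitor
# like dpinger maintains (simplified; names are hypothetical).
from collections import deque


class GatewayMonitor:
    def __init__(self, window=10, latency_alarm_ms=500.0, loss_alarm_pct=20.0):
        self.samples = deque(maxlen=window)   # RTT in ms, or None for a lost probe
        self.latency_alarm_ms = latency_alarm_ms
        self.loss_alarm_pct = loss_alarm_pct

    def record(self, rtt_ms):
        """Record one probe result; rtt_ms=None means no reply (loss)."""
        self.samples.append(rtt_ms)

    @property
    def loss_pct(self):
        if not self.samples:
            return 0.0
        lost = sum(1 for s in self.samples if s is None)
        return 100.0 * lost / len(self.samples)

    @property
    def avg_latency_ms(self):
        replies = [s for s in self.samples if s is not None]
        return sum(replies) / len(replies) if replies else 0.0

    @property
    def gateway_down(self):
        # Alarm if either loss or average latency crosses its threshold
        return (self.loss_pct >= self.loss_alarm_pct
                or self.avg_latency_ms >= self.latency_alarm_ms)
```

    With a 4-probe window, samples of 12 ms, 15 ms, one lost probe, and 13 ms give 25% loss, which trips the 20% loss alarm even though average latency stays low.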

  • pfSense + OpenVswitch issues

    1
    0 Votes
    1 Posts
    552 Views
    No one has replied
  • (solved) Multiple pfsense instances

    4
    0 Votes
    4 Posts
    985 Views

    It is the same ;)
    I have migrated Forefront TMGs that way.
    The only thing is to allow a private IP on the WAN interface of the sub-firewall.

Copyright 2025 Rubicon Communications LLC (Netgate). All rights reserved.