• Proxmox Pfsense only 1 public IP

    Moved
    22
    0 Votes
    22 Posts
    3k Views
    stephenw10S

    That's what it looks like to me.

  • pfSense running on Plex Media server host: 3 NICs, no LAN network access?

    3
    0 Votes
    3 Posts
    666 Views
    T

    I'll go ahead and answer my own question here: the issue was apparently driver related. On my third time installing pfSense, I manually uninstalled and reinstalled an updated version of the Intel PRO/1000 PT driver. After that, it worked without any issues, exactly how I wanted it to. I'm not sure why that would be the case, as even the updated driver was only slightly newer, but either way it works and is invisible to the host OS, which is all I need. I'm getting full gigabit speeds both ways with only 2 ms more latency than I had before. I'm running Snort.

  • Very slow upload speed outside vm host

    5
    1 Votes
    5 Posts
    1k Views
    Bob.DigB

    @raqib But it is not helping in every case; I had problems on Server 2022 and it only helped some of the time.

    What "helped" me all of the time was using two separate external vSwitches, one only for pfSense and one for all the other VMs. That meant there had to be a physical switch in place to connect those two vSwitches. Thankfully it is working in the latest Plus version.
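
    A minimal sketch of that two-vSwitch layout in Hyper-V PowerShell, assuming the two physical uplinks are named "NIC1" and "NIC2" (names are illustrative):

      # external vSwitch dedicated to pfSense, not shared with the host OS
      New-VMSwitch -Name "vSwitch-pfSense" -NetAdapterName "NIC1" -AllowManagementOS $false
      # second external vSwitch for all other VMs; both uplinks plug into the same physical switch
      New-VMSwitch -Name "vSwitch-VMs" -NetAdapterName "NIC2" -AllowManagementOS $true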

  • 0 Votes
    3 Posts
    830 Views
    A

    @bob-dig Vodafone gigafiber; it works fine on a dedicated box.

  • pfSense VM is malfunctioning

    Moved
    7
    0 Votes
    7 Posts
    881 Views
    J

    @stephenw10 Thanks, will check. As of now there are no issues with Ubuntu after setting up a static DNS and IP.

  • Adding NICs on pfSense guest ESXi

    3
    0 Votes
    3 Posts
    759 Views
    I

    OK! What a nice feature XD

    Many thanks for your reply

  • pfSense NICs in ESXi running half-duplex

    6
    0 Votes
    6 Posts
    1k Views
    E

    I'm having this same problem in ESXi 6.5 with standard vSwitches: the same duplex issues on two different VMware clusters. I'm seeing it in the Cisco logs because LLDP/CDP is turned on. CARP seems to work just fine for me after enabling promiscuous mode in the vSwitch. pfSense 2.6.0, Intel 82599 NIC.

    I can see this in the logs on our Cisco 6509 and Nexus 5K switches, depending on which hypervisor is running the VM. The 6509s connect to the hypervisors with a standard LACP port channel; the Nexus switches use a vPC LACP bond. I don't have any other gear throwing these errors, and I can see this issue on both standalone and clustered pfSense VMs.
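
    For reference, the promiscuous-mode change mentioned above can be made from the ESXi shell on a standard vSwitch; a rough sketch, assuming vSwitch0 carries the pfSense port groups (the name is illustrative):

      # allow promiscuous mode plus the related settings CARP typically needs
      esxcli network vswitch standard policy security set --vswitch-name=vSwitch0 --allow-promiscuous=true --allow-forged-transmits=true --allow-mac-change=true
      # confirm the resulting policy
      esxcli network vswitch standard policy security get --vswitch-name=vSwitch0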

  • pfSense 22.05 showing kernel panic on Proxmox after few hours

    4
    0 Votes
    4 Posts
    880 Views
    stephenw10S

    Not to a VM, not yet. However, I tested the upgrade process over the weekend and it completed with no problems, so I would expect a clean install of 2.6, then an upgrade to 22.01 and then 22.05, to work fine.

    Steve

  • Virtualization and pfsense+ subscription

    3
    0 Votes
    3 Posts
    551 Views
    stephenw10S

    Adding new hardware devices to the VM, such as NICs, will likely change the NDI and hence the subscription. Cores or RAM should not.
    I suggest adding as many NICs as you might need before you start. You can always simply leave them disabled in pfSense or even disconnected in Proxmox.

    Steve
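
    On Proxmox that can be done up front from the host shell; a minimal sketch, assuming the pfSense VM has ID 100 and a bridge named vmbr0 (both illustrative):

      # add spare virtio NICs now so the NDI does not change later;
      # link_down=1 leaves them disconnected until they are actually needed
      qm set 100 --net2 virtio,bridge=vmbr0,link_down=1
      qm set 100 --net3 virtio,bridge=vmbr0,link_down=1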

  • pfSense on Hetzner Cloud

    Moved
    10
    0 Votes
    10 Posts
    7k Views
    stephenw10S

    Ah, so more like a full AWS/Azure setup. That seems...complex!

  • pfSense on Proxmox

    7
    0 Votes
    7 Posts
    2k Views
    E

    @patch https://www.youtube.com/watch?v=3l0AySgYlkg&t=380s

  • Proxmox VLAN

    4
    0 Votes
    4 Posts
    1k Views
    E

    @michelvdriel

    https://www.youtube.com/watch?v=3l0AySgYlkg&t=380s

  • 0 Votes
    2 Posts
    620 Views
    stephenw10S

    What is 192.168.100.7?

    What IP are you pinging from?

    Which adapter is that subnet on?

    I recommend not using NAT networks in VirtualBox unless you have no choice; they make everything much more complex.

    Steve
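
    A bridged adapter is usually the simpler alternative; a minimal VBoxManage sketch, assuming a VM named "pfSense" and a host NIC called "em0" (both names are illustrative):

      # attach the first adapter to a bridged interface instead of a NAT network
      VBoxManage modifyvm "pfSense" --nic1 bridged --bridgeadapter1 "em0"
      # verify the adapter configuration
      VBoxManage showvminfo "pfSense"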

  • pfsense NIC speed

    2
    0 Votes
    2 Posts
    795 Views
    D

    Yeah, this issue is the same. I tried it on OpenStack Queens and Train, but pfSense still seems to hit a bottleneck on the KVM hypervisor. I don't know why.

    Someone pointed to https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=165059#c24

    Maybe you can try the suggestions from this reference: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=165059
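
    One common suggestion for vtnet/virtio throughput problems under KVM is disabling the hardware offloads on the guest; a hedged sketch of the FreeBSD loader tunables on the pfSense VM (whether they help depends on the virtio/KVM versions in play):

      # /boot/loader.conf.local on the pfSense VM (reboot required)
      hw.vtnet.csum_disable="1"   # disable checksum offload on vtnet interfaces
      hw.vtnet.tso_disable="1"    # disable TCP segmentation offload
      hw.vtnet.lro_disable="1"    # disable large receive offload

    The equivalent checkboxes under System > Advanced > Networking in the pfSense GUI cover checksum, TSO and LRO offload as well.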

  • pfSense on Proxmox - High RAM Usage

    13
    0 Votes
    13 Posts
    5k Views
    bearhntrB

    @viragomann

    For now, I will leave it at 2GB. I have a new SFF box coming which has 32GB of RAM. Plans are to move all of this over there, and put a few more VMs on it. The box that I am currently using has 16GB RAM and 2 VMs (this one now at 2GB and the other at 8GB).

    I may need to increase the RAM as I install packages and re-setup CloudFlare DDNS on here.

    I appreciate your and @Patch's input.
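
    If the VM does need more memory later, it can be resized from the Proxmox host shell; a minimal sketch, assuming VM ID 100 (illustrative; the value is in MB):

      # increase the pfSense VM's memory to 4 GB
      qm set 100 --memory 4096
      # optionally disable the balloon device so the full allocation stays fixed
      qm set 100 --balloon 0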

  • How to enable pfSense on ESXi to 10 Gbit?

    7
    0 Votes
    7 Posts
    765 Views
    P

    @epimpin

    That would be awesome.
    As far as I know, it is possible to adjust the firmware of the ConnectX-2 cards to enable SR-IOV, but this was never intended by Mellanox.

    So I did that and switched SR-IOV on in ESXi, but a message to reboot appeared, so I rebooted the server, and after the reboot I had the exact same message there.

    In case you figure out a way to enable it, it might be the solution to 10 Gbit.

    HW config: Mellanox ConnectX-2 MNPH-29D-XTR, FW: 2.9.1200
    .ini file:

    ;; Generated automatically by iniprep tool on Mon May 07 15:39:40 IDT 2012 from ./b0_hawk_gen2_464.prs
    ;; PRS FILE FOR Hawk
    ;; $Id: b0_hawk_gen2_464.prs,v 1.7.2.3 2012-04-24 12:43:10 ofirm Exp $
    [PS_INFO]
    Name = 81Y9992
    Description = Mellanox ConnectX-2 EN Dual-port 10GbE PCI-E 2.0 Adapter
    [ADAPTER]
    PSID = IBM0FC0000010
    pcie_gen2_speed_supported = true
    silicon_rev=0xb0
    adapter_dev_id = 0x6750
    ;;;;; {gpio_mode1, gpio_mode0} {DataOut=0, DataOut=1}
    ;;;;; 0 = Input PAD
    ;;;;; 1 = {0,1} Normal Output PAD
    ;;;;; 2 = {0,Z} 0-pull down the PAD, 1-float
    ;;;;; 3 = {Z,1} 0-float, 1-pull up the pad
    ;;;;; Under [ADAPTER] section
    ;;;;; Integer parameter. Values range : 0x0 - 0xffffffff.
    gpio_mode1 = 0x80010
    gpio_mode0 = 0x0b160bef
    gpio_default_val = 0x000e031f
    receiver_detect_time = 0x1e
    [HCA]
    hca_header_device_id = 0x6750
    hca_header_subsystem_id = 0x0019
    eth_xfi_en = true
    mdio_en_port1 = 0
    num_pfs = 1
    total_vfs = 64
    sriov_en = true
    [IB]
    gen_guids_from_mac = true
    port1_802_3ap_kx4_ability = false
    port2_802_3ap_kx4_ability = false
    phy_type_port1 = XFI
    phy_type_port2 = XFI
    new_gpio_scheme_en = true
    read_cable_params_port1_en = true
    read_cable_params_port2_en = true
    eth_tx_lane_polarity_port1 = 0x0
    eth_rx_lane_polarity_port1 = 0x0
    eth_tx_lane_polarity_port2 = 0x0
    eth_rx_lane_polarity_port2 = 0x0
    eth_tx_lane_reversal_port1 = off
    eth_tx_lane_reversal_port2 = off
    eth_rx_lane_reversal_port1 = off
    eth_rx_lane_reversal_port2 = off
    ;;;;; SerDes static parameters for FixedLinkSpeed
    ;;;;; Under [IB] section
    port1_sd0_muxmain_qdr = 0x1f
    port2_sd0_muxmain_qdr = 0x1f
    port1_sd1_muxmain_qdr = 0x1f
    port2_sd1_muxmain_qdr = 0x1f
    port1_sd2_muxmain_qdr = 0x1f
    port2_sd2_muxmain_qdr = 0x1f
    port1_sd3_muxmain_qdr = 0x1f
    port2_sd3_muxmain_qdr = 0x1f
    port1_sd0_ob_preemp_pre_qdr = 0x0
    port2_sd0_ob_preemp_pre_qdr = 0x0
    port1_sd1_ob_preemp_pre_qdr = 0x0
    port2_sd1_ob_preemp_pre_qdr = 0x0
    port1_sd2_ob_preemp_pre_qdr = 0x0
    port2_sd2_ob_preemp_pre_qdr = 0x0
    port1_sd3_ob_preemp_pre_qdr = 0x0
    port2_sd3_ob_preemp_pre_qdr = 0x0
    port1_sd0_ob_preemp_post_qdr = 0x2
    port2_sd0_ob_preemp_post_qdr = 0x2
    port1_sd1_ob_preemp_post_qdr = 0x2
    port2_sd1_ob_preemp_post_qdr = 0x2
    port1_sd2_ob_preemp_post_qdr = 0x2
    port2_sd2_ob_preemp_post_qdr = 0x2
    port1_sd3_ob_preemp_post_qdr = 0x2
    port2_sd3_ob_preemp_post_qdr = 0x2
    port1_sd0_ob_preemp_main_qdr = 0x10
    port2_sd0_ob_preemp_main_qdr = 0x10
    port1_sd1_ob_preemp_main_qdr = 0x10
    port2_sd1_ob_preemp_main_qdr = 0x10
    port1_sd2_ob_preemp_main_qdr = 0x10
    port2_sd2_ob_preemp_main_qdr = 0x10
    port1_sd3_ob_preemp_main_qdr = 0x10
    port2_sd3_ob_preemp_main_qdr = 0x10
    port1_sd0_ob_preemp_msb_qdr = 0x0
    port2_sd0_ob_preemp_msb_qdr = 0x0
    port1_sd1_ob_preemp_msb_qdr = 0x0
    port2_sd1_ob_preemp_msb_qdr = 0x0
    port1_sd2_ob_preemp_msb_qdr = 0x0
    port2_sd2_ob_preemp_msb_qdr = 0x0
    port1_sd3_ob_preemp_msb_qdr = 0x0
    port2_sd3_ob_preemp_msb_qdr = 0x0
    center_mix90phase = true
    ext_phy_board_port1 = HAWK3
    ext_phy_board_port2 = HAWK3
    ;;;;; External Phy: ignore mellanox OUI checking.
    ;;;;; Under [IB] section
    ;;;;; Integer parameter. Values range : 0x0 - 0x1.
    ignore_mellanox_oui = 0x1
    ;;;;; External Phy check GPIOs values for the 4 configurable GPIOs per port.
    ;;;;; every GPIO has 2 bits that can get the values "00", "01", "11" - dont check.
    ;;;;; Under [IB] section
    ;;;;; Integer parameter. Values range : 0x0 - 0xff.
    ext_phy_check_value_port1 = 0xff
    ext_phy_check_value_port2 = 0xff
    [PLL]
    lbist_en = 0
    lbist_shift_freq = 3
    pll_stabilize = 0x13
    flash_div = 0x3
    lbist_array_bypass = 1
    lbist_pat_cnt_lsb = 0x2
    core_f = 44
    core_r = 27
    ddr_6_db_preemp_pre = 0x4
    ddr_6_db_preemp_main = 0x7
    ddr_6_db_preemp_post = 0x0
    ddr_3_dot_5_db_preemp_pre = 0x2
    ddr_3_dot_5_db_preemp_main = 0x7
    ddr_3_dot_5_db_preemp_post = 0x0
    [FW]

    Server spec:
    Xeon E5-1620 v4, 3.5 GHz, socket 2011-3
    Supermicro X10SRA-F
    4x 16GB Samsung DDR4-2133 reg. ECC RAM

    ESXi:
    6.7.0 Update 3 (Build 19997733)
    I flashed ESXi from pre-U1 up to the latest patch after enabling SR-IOV (but it was not working) to see if something had changed. Nothing changed; from pre-U1 to post-U3, SR-IOV appears unsupported, as described above.

    [screenshot attached]

    cannot select the Mellanox card
    [screenshot attached]
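
    For anyone following along, ESXi can also report which NICs it already considers SR-IOV capable before any firmware changes are attempted; a rough check from the ESXi shell:

      # NICs that ESXi recognizes as SR-IOV capable (empty output means none)
      esxcli network sriovnic list
      # full PCI inventory, useful for confirming the adapter and its driver
      esxcli hardware pci list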

  • pfSense 2.6 on ESXi 6.7 Dell PE R320 | VMs unable to connect on reboot

    4
    0 Votes
    4 Posts
    688 Views
    stephenw10S

    Are you passing the Broadcom NICs through to pfSense? It looks like you're probably not, but if you are, there was an issue we've seen with that driver that required the NIC to be in promiscuous mode. That may be getting reset when pfSense is rebooted.

    Steve
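
    If the NICs are passed through and the flag is indeed being lost on reboot, it can be reapplied from the pfSense shell; a hedged sketch, with bxe0 used purely as a placeholder interface name:

      # put the passed-through Broadcom NIC back into promiscuous mode
      ifconfig bxe0 promisc

    To make it persistent it could be run at boot, for example via the shellcmd package or an earlyshellcmd entry.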

  • 0 Votes
    18 Posts
    2k Views
    E

    @loser8491 Okay, so there are some virtualization caveats on Windows 11. Virtualization on Intel CPUs older than 6th gen is disabled out of the box due to security concerns with those earlier generations; there is a workaround, but it requires some hackery, so check the Microsoft forums on the issue. If you have a 6th-gen Intel or later, or a Zen-architecture AMD CPU or later, check the BIOS settings: you have to enable virtualization and, if possible, directed I/O (VT-x and VT-d respectively), then reinstall VirtualBox. If you have an older CPU, you first need to verify that it is capable of virtualization.
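
    A quick way to confirm the firmware side on the Windows host, assuming Hyper-V itself is not installed (otherwise systeminfo only prints a hypervisor-detected notice for that section):

      rem the "Hyper-V Requirements" section reports whether virtualization is enabled
      systeminfo | findstr /C:"Virtualization Enabled In Firmware" /C:"VM Monitor Mode Extensions"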

Copyright 2025 Rubicon Communications LLC (Netgate). All rights reserved.