Just following up for others... all interfaces powered up with no fuss on 2.4.4.
igb0: <Intel(R) PRO/1000 Network Connection, Version - 2.5.3-k> port 0x9060-0x907f mem 0xe0c60000-0xe0c7ffff,0xe0c8c000-0xe0c8ffff at device 0.0 numa-domain 0 on pci8
igb0: Using MSIX interrupts with 9 vectors
igb0: Ethernet address: ac:1f:6b:73:87:e0
igb0: Bound queue 0 to cpu 0
igb0: Bound queue 1 to cpu 1
igb0: Bound queue 2 to cpu 2
igb0: Bound queue 3 to cpu 3
igb0: Bound queue 4 to cpu 4
igb0: Bound queue 5 to cpu 5
igb0: Bound queue 6 to cpu 6
igb0: Bound queue 7 to cpu 7
igb0: netmap queues/slots: TX 8/1024, RX 8/1024
igb1: <Intel(R) PRO/1000 Network Connection, Version - 2.5.3-k> port 0x9040-0x905f mem 0xe0c40000-0xe0c5ffff,0xe0c88000-0xe0c8bfff at device 0.1 numa-domain 0 on pci8
igb1: Using MSIX interrupts with 9 vectors
igb1: Ethernet address: ac:1f:6b:73:87:e1
igb1: Bound queue 0 to cpu 8
igb1: Bound queue 1 to cpu 9
igb1: Bound queue 2 to cpu 10
igb1: Bound queue 3 to cpu 11
igb1: Bound queue 4 to cpu 12
igb1: Bound queue 5 to cpu 13
igb1: Bound queue 6 to cpu 14
igb1: Bound queue 7 to cpu 15
igb1: netmap queues/slots: TX 8/1024, RX 8/1024
igb2: <Intel(R) PRO/1000 Network Connection, Version - 2.5.3-k> port 0x9020-0x903f mem 0xe0c20000-0xe0c3ffff,0xe0c84000-0xe0c87fff at device 0.2 numa-domain 0 on pci8
igb2: Using MSIX interrupts with 9 vectors
igb2: Ethernet address: ac:1f:6b:73:87:e2
igb2: Bound queue 0 to cpu 0
igb2: Bound queue 1 to cpu 1
igb2: Bound queue 2 to cpu 2
igb2: Bound queue 3 to cpu 3
igb2: Bound queue 4 to cpu 4
igb2: Bound queue 5 to cpu 5
igb2: Bound queue 6 to cpu 6
igb2: Bound queue 7 to cpu 7
igb2: netmap queues/slots: TX 8/1024, RX 8/1024
igb3: <Intel(R) PRO/1000 Network Connection, Version - 2.5.3-k> port 0x9000-0x901f mem 0xe0c00000-0xe0c1ffff,0xe0c80000-0xe0c83fff at device 0.3 numa-domain 0 on pci8
igb3: Using MSIX interrupts with 9 vectors
igb3: Ethernet address: ac:1f:6b:73:87:e3
igb3: Bound queue 0 to cpu 8
igb3: Bound queue 1 to cpu 9
igb3: Bound queue 2 to cpu 10
igb3: Bound queue 3 to cpu 11
igb3: Bound queue 4 to cpu 12
igb3: Bound queue 5 to cpu 13
igb3: Bound queue 6 to cpu 14
igb3: Bound queue 7 to cpu 15
igb3: netmap queues/slots: TX 8/1024, RX 8/1024
ixl0: <Intel(R) Ethernet Connection 700 Series PF Driver, Version - 1.9.9-k> mem 0xfa000000-0xfaffffff,0xfb018000-0xfb01ffff at device 0.0 numa-domain 0 on pci14
ixl0: using 1024 tx descriptors and 1024 rx descriptors
ixl0: fw 3.1.57069 api 1.5 nvm 3.33 etid 80001006 oem 1.262.0
ixl0: PF-ID: VFs 32, MSIX 129, VF MSIX 5, QPs 384, MDIO shared
ixl0: Using MSIX interrupts with 9 vectors
ixl0: Allocating 8 queues for PF LAN VSI; 8 queues active
ixl0: Ethernet address: ac:1f:6b:73:89:70
ixl0: SR-IOV ready
queues is 0xfffffe00017f4000
ixl0: netmap queues/slots: TX 8/1024, RX 8/1024
ixl1: <Intel(R) Ethernet Connection 700 Series PF Driver, Version - 1.9.9-k> mem 0xf9000000-0xf9ffffff,0xfb010000-0xfb017fff at device 0.1 numa-domain 0 on pci14
ixl1: using 1024 tx descriptors and 1024 rx descriptors
ixl1: fw 3.1.57069 api 1.5 nvm 3.33 etid 80001006 oem 1.262.0
ixl1: PF-ID: VFs 32, MSIX 129, VF MSIX 5, QPs 384, MDIO shared
ixl1: Using MSIX interrupts with 9 vectors
ixl1: Allocating 8 queues for PF LAN VSI; 8 queues active
ixl1: Ethernet address: ac:1f:6b:73:89:71
ixl1: SR-IOV ready
queues is 0xfffffe000192c000
ixl1: netmap queues/slots: TX 8/1024, RX 8/1024
ixl2: <Intel(R) Ethernet Connection 700 Series PF Driver, Version - 1.9.9-k> mem 0xf8000000-0xf8ffffff,0xfb008000-0xfb00ffff at device 0.2 numa-domain 0 on pci14
ixl2: using 1024 tx descriptors and 1024 rx descriptors
ixl2: fw 3.1.57069 api 1.5 nvm 3.33 etid 80001006 oem 1.262.0
ixl2: PF-ID: VFs 32, MSIX 129, VF MSIX 5, QPs 384, I2C
ixl2: Using MSIX interrupts with 9 vectors
ixl2: Allocating 8 queues for PF LAN VSI; 8 queues active
ixl2: Ethernet address: ac:1f:6b:73:89:72
ixl2: SR-IOV ready
queues is 0xfffffe0001a82000
ixl2: netmap queues/slots: TX 8/1024, RX 8/1024
ixl3: <Intel(R) Ethernet Connection 700 Series PF Driver, Version - 1.9.9-k> mem 0xf7000000-0xf7ffffff,0xfb000000-0xfb007fff at device 0.3 numa-domain 0 on pci14
ixl3: using 1024 tx descriptors and 1024 rx descriptors
ixl3: fw 3.1.57069 api 1.5 nvm 3.33 etid 80001006 oem 1.262.0
ixl3: PF-ID: VFs 32, MSIX 129, VF MSIX 5, QPs 384, I2C
ixl3: Using MSIX interrupts with 9 vectors
ixl3: Allocating 8 queues for PF LAN VSI; 8 queues active
ixl3: Ethernet address: ac:1f:6b:73:89:73
ixl3: SR-IOV ready
queues is 0xfffffe0001bc6000
ixl3: netmap queues/slots: TX 8/1024, RX 8/1024
CPU: Intel(R) Xeon(R) D-2146NT CPU @ 2.30GHz (2294.68-MHz K8-class CPU)
Origin="GenuineIntel" Id=0x50654 Family=0x6 Model=0x55 Stepping=4
Structured Extended Features=0xd39ffffb<FSGSBASE,TSCADJ,BMI1,HLE,AVX2,FDPEXC,SMEP,BMI2,ERMS,INVPCID,RTM,PQM,NFPUSG,MPX,PQE,AVX512F,AVX512DQ,RDSEED,ADX,SMAP,CLFLUSHOPT,CLWB,PROCTRACE,AVX512CD,AVX512BW,AVX512VL>
Structured Extended Features2=0x8<PKU>
Structured Extended Features3=0x9c000000<IBPB,STIBP,SSBD>
TSC: P-state invariant, performance statistics
@stephenw10 said in Atom E3950 Performance Question:
The i210 I believe is limited to 2 queues anyway; I'm unsure about the i211.
My understanding is the i210 has up to 4 queues and the i211 has up to 2, which is why the MBT-4220 (4-core) uses the i210 and the MBT-2220 (2-core) uses the i211.
See table 1-6 on page 11...
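For anyone wanting to verify this on their own box, the per-NIC queue count shows up in the boot log (as in the output above); a quick sanity check, assuming the messages are still in /var/run/dmesg.boot, might look like:
# Count how many queues the driver bound for igb0 (adjust the interface name)
grep -c "igb0: Bound queue" /var/run/dmesg.boot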
Moved to hardware. This is very definitely not official hardware!
Most socket 775 CPUs will work. The best source is to check the XTM5 thread.
However, why do you need to upgrade? That CPU is already close to the fastest you can fit there, and no CPU that will work in that socket has AES-NI.
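For anyone checking a candidate CPU, AES-NI shows up in the feature flags printed at boot; a quick check (standard FreeBSD, assuming the boot log is still in /var/run/dmesg.boot) would be:
# No output means the CPU (or the BIOS) doesn't expose AES-NI
grep -o AESNI /var/run/dmesg.boot | head -1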
Thank you for your swift reply :) Honestly I haven't tested too much, but your reply gave me a bit more confidence that the problem is not with the hardware but with poor networking elsewhere. I will do some more testing and report back my findings.
@slimaxpower said in Looking for Low Power Budget Build Suggestions (BC, Canada):
@rnatalli until they fail.
I have just replaced my current system (an N54L) with an i7 4770, multiple Intel PCIe NICs, and an 80+ PSU, so it's low power with way more grunt than I will ever need.
My current system's CPU is always above 60%, with 8 GB of 16 GB RAM in use. That's without Snort or a VPN active.
Edit: around 300 AUD.
Hi @SLIMaxPower, would it be possible to get some more details of your build please? I'm also in Australia and finding it hard to put together something low-power at around the $300 level that would suit.
There are already several open threads about this, and notes have been added to the upgrade guide.
Here's one of the other threads https://forum.netgate.com/topic/135852/console-no-longer-working-after-upgrade-to-2-4-4/
Edit- beaten to the post by Derelict...
@stephenw10 said in WGXepc on XTM5:
Because I added two sets of fans for boards that can control both. -f for the CPU fan and -f2 for the system fan.
Since the XTM5 only has controllable system fans it is now on the -f2 switch.
Thanks Steve, I'm going to try it tomorrow.
@stephenw10 Yeah, I've given up on trying to find a compatible USB WiFi adapter only to still have limited range and flakiness anyway. Easier to go the route of a USB Ethernet adapter, which has better support, and then I'll use an external AP.
@grimson The problem is the SG-1000 only has one LAN port (in use) and one WAN port. However, I believe this is close to an answer. I had a USB-to-Ethernet adapter lying around; plugged it in, it's an E1000 chipset, and it worked the first time. So now I have a second LAN port that I will put an external AP on, and I'll forget about trying to save space or keep things simple for the customer. The WiFi on our pfSense boxes (about 10+) has always been flaky, and I've migrated a couple of locations to an external AP with no more issues.
I tried both the UFS and the ZFS auto options, and both resulted in the same problem :S
I read up on the link @Grimson posted and of course the one thing I had not tried solved it.
I used "gpart set -a active" on both the mirrored drives and now it boots fine!.
Thank you both for the quick replies and the help :D
It's one of the Supermicro SuperServer C3000-series barebones, using the A2SDi-8C-HLN4F as its motherboard. But it has 8 cores, unlike the C3558, which has only 4.
Someone gets 1 Gbps on the Netgate XG-7100, which has the 4-core C3558 and a Marvell switch chip (link to Reddit). He also used Suricata.
Currently, I certainly don't need much more than the 2008 ALIX I have.
Because 20/2 VDSL.
But hopefully it will last for another decade or more, and FTTH might actually arrive here by then (or I'll move).
I think I'm mostly limited by the lack of RAM today, so I'd like to get something with 4 GB.
Unfortunately, Netgate products are much, much more expensive here (.ch) than PC-Engines - but I haven't fully made up my mind, yet.
Only one local Netgate partner shows the hardware on their homepage/shop - and the Minnow-Boards are not there, so I don't even know what they cost.
I know this topic is quite old, but I'd like to add my own successful attempt at building a low-cost pfSense router.
In detail, it was just meant to be an upgrade from an old D-Link 524 in a mechatronics lab environment, with just a few features, like static IP assignment of clients via their MAC address and MAC whitelisting, as well as blocking access to the internet from or to the lab equipment [including several Siemens Simatic PLCs, a Kubota robot arm, several printers, and a professional 3D printer], but not the PCs.
The D-Link was used as an AP instead, thus saving the cost of buying a WiFi card that isn't faster anyway...
[I hope 802.11ac support and drivers for the Intel 7260ac will come soon!]
I used an ASRock J1900M mainboard [€40],
an LC-Power 1400 case w/ 250 W PSU [€40],
3 Realtek 811x NICs with PCIe x1 [€5 each],
as well as a Corsair ValueRAM 2x 2 GB DDR3-1333 kit [€30]
and a Transcend 32 GB 2.5" SSD [€20],
which totaled around €145 in parts and €150 with shipping.
Since it needed neither VPN nor any crypto handling besides its HTTPS web interface, performance is sufficient.
On the WAN side, this unit goes straight into a cable CPE with roughly 150M/10M, and it can fully saturate that [before, the D-Link capped it with its 100M ports]...
This setup was easy to deploy and fully satisfied the customer's needs: it was just the needed and reasonably priced upgrade from a consumer/SoHo router, and while being cheaper than the famous Fritz!Box routers, it had significantly more features and none of the arbitrary, artificial limitations [like MAC whitelisting only on the wireless interface and limited to 25 devices, as on the D-Link].
Sadly, with some of those cheap ASRock boards in low supply, especially the QC5000M [the same board but with an AMD A4-5000, thus having AES-NI], the few offers on Amazon ramp prices up to 300% or more, at least in Germany.
But I'm pretty sure some Celeron J4xxx or J5xxx boards, as well as potentially upcoming low-end Ryzen-based SoCs, will fill the gap without getting too pricey.
Yes, those boxes are pretty much for fun or experimenting only now.
If you do go ahead, you might want to choose one of the 400 MHz FSB Pentium M CPUs, as they are supported directly by powerd, so you get proper frequency scaling.
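To see whether a given CPU actually exposes the frequency steps powerd needs, you can check the standard FreeBSD sysctls (pfSense uses the same mechanism under the hood):
# List the available frequency/power levels, then the current frequency in MHz
sysctl dev.cpu.0.freq_levels
sysctl dev.cpu.0.freq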
I know you are asking for something other than a Chelsio card; however, I have been running a Chelsio T520-SO-CR 2-port SFP+ card without issue for approximately a year. It was simply plug-and-play with my Supermicro X10SDV-8C-TLN4F board.
This card has been working with pfSense v2.4.3 and prior versions without issue.
The left port is connected to a Meraki Switch and the right port is connected to a Cisco Switch. Everything works great.
First thing would be to try a FreeBSD 12 snapshot to confirm it is recognised and working there. No point wasting time otherwise. https://download.freebsd.org/ftp/snapshots/ISO-IMAGES/12.0/
Recompiling to get that into 2.4.4 is non-trivial though.
The biggest factor there is how much of that traffic will be over OpenVPN. If the majority of it is and you want to get anywhere near 2 Gbps, you're going to need the fastest CPU you can get hold of. Each OpenVPN process is single-threaded, so fewer cores at higher clock speeds win here if you have only a few tunnels.
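An easy way to see this in practice (standard FreeBSD top, nothing pfSense-specific): run a throughput test through the tunnel and watch the openvpn thread pin a single core:
# -a shows full command lines, -S includes system processes, -H shows individual threads
top -aSH
If the openvpn process sits near 100% WCPU while the other cores idle, the clock speed is your bottleneck.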
@stephenw10 said in Temperatures on fanless systems:
I would probably consider swapping out two of the fans for quieter ones then or fitting a speed reducer resistor maybe.
Seems like a good idea. I am looking for some 1U 40mm fans that are quiet. I don't need too high a CFM, so I am hoping it shouldn't be hard to find.
@stephenw10 said in Temperatures on fanless systems:
Those fans are probably moving waaay more air than is required for that CPU.
True. And that was the crux of the question. If 39C or 40C idle is not too bad, then I can skip all fans and just run it fanless.
Over time, like I mentioned, I will move it into another 1U, and then I can buy fans that fit that chassis correctly. This current 1U chassis is a 4-bay server, and having 4 drive bays when I am using none of them seems like a waste. I plan to use it as a small server running ESXi on bare metal with 2-3 VMs in the future.
But that is going to happen after some time. Currently my 2 kids don't leave me with enough time to tinker around.
Yes, exactly: you can't use the shell script to start it, as that just calls the standard files created by the package. You need to call the binary directly, as I wrote above.
Be sure to check the post I linked above too. The actual commands you want to use in the shellcmd are:
/usr/bin/nice -20 /usr/local/sbin/LCDd -c /root/LCDd.conf > /dev/null &
/usr/bin/nice -20 /usr/local/bin/lcdproc C T U &
Assuming your customised LCDd.conf file is in /root.
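A quick way to confirm both are running after boot (plain shell; the bracket trick just stops grep from matching itself):
ps ax | grep -E '[L]CDd|[l]cdproc'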
The calculation used here doesn't seem too terrible:
There were a number of comparisons to actual throughput there and it was not massively out.
As stated there, though, there are many variables.
@perforado said in Both SSDs vanish from rpool -> pfSense hangs and does not recover:
The gaffa tape was gone the visit after that.
Mmm, I think that says it all. Someone went in there and removed it when they shouldn't have. You have a rogue admin IMO.