Budget 1Gbps hardware in 1U rack format
-
Hi guys,
right now I'm running pfSense on my old, power-hungry gear (an i5-750) and I would like to move to a more power-efficient processor, but I have no idea which hardware to choose.
My 1Gbps internet link is PPPoE over a specific VLAN (835). I also have a LAGG and 3 different VLANs/subnets, and I use a bit of OpenVPN for things like remote desktop and file transfer.
I have seen a few solutions based on these CPUs, but I have no idea which to choose:
- Celeron 3855U
- i3 3217U
- i5 3317U
If you know of a "cheap" solution in a 1U rack format, that would be awesome.
Thanks for the help, guys!
-
You are going to want high single-thread performance for both PPPoE (which is limited to a single queue) and OpenVPN.
I would run some tests on your current box to see how the cores are loaded under maximum throughput conditions.
From a very brief search, only the i5 3317U is comparable in single-thread performance. If your i5-750 is currently close to maxing out one core, you might look at something other than those laptop CPUs.
Steve
-
@stephenw10 said in Budget 1Gbps hardware in 1U rack format:
I would run some tests on your current box to see how the cores are loaded under maximum throughput conditions.
Right now I have opened a bunch of speedtest/dslreports tabs (10). I enabled Steam updates and added a few Linux distro downloads via torrent.
The CPU was at 15-25-30% under load.
Load average: 0.32, 0.29, 0.21
Could you suggest a proper way to test it?
Nicolò
-
No CPU with a "U" in the name would be a good start.
-
Open an SSH session to the firewall and at the command line run top -aSH. Hit q to quit out and leave the current data in the screen; you can then copy/paste it out if required. That will show you the per-core usage, which is what you need to know here. A 30% overall usage could be one core at 100%.
Steve
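To eyeball the per-core figures from a capture rather than the live screen, here is a minimal sketch. The snapshot file name, the sample rows, and the field positions are assumptions based on the top output shown later in this thread, not anything pfSense ships:

```shell
# Capture one batch-mode snapshot of per-thread CPU usage (FreeBSD):
#   top -aSHb > /tmp/topsnap.txt
# Sample data stands in for a real capture here.
cat > /tmp/topsnap.txt <<'EOF'
   11 root     155 ki31     0K    64K CPU1    1  20.0H  92.97% [idle{idle: cpu1}]
   11 root     155 ki31     0K    64K CPU2    2  20.0H  91.76% [idle{idle: cpu2}]
   11 root     155 ki31     0K    64K RUN     0  20.0H  85.57% [idle{idle: cpu0}]
   11 root     155 ki31     0K    64K CPU3    3  19.9H  59.24% [idle{idle: cpu3}]
EOF
# A core's busy percentage is 100 minus its idle thread's WCPU (field 10).
awk '/\[idle\{/ {
    pct = $10; sub(/%/, "", pct)          # idle percentage, e.g. 92.97
    core = $NF; gsub(/[^0-9]/, "", core)  # core number from "cpuN}]"
    printf "cpu%s busy: %.2f%%\n", core, 100 - pct
}' /tmp/topsnap.txt
```

With the sample rows above this prints cpu3 at roughly 40% busy while the others sit near idle, which is exactly the kind of uneven spread to look for.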
-
I think this is overthinking the problem. A Skylake or newer G-series Pentium will run rings around the i5-750, will have AES-NI, and will run much more efficiently without costing too much. Anything higher-end than that will still be faster and more power efficient, but will become progressively more expensive. Avoid low-TDP chips like the -U, -Y, or -T series because they aren't needed for this application, and just find the cheapest 1U rackmount you can with a reasonably modern CPU. (Understanding that you'll pay more for a rackmount.) I don't entirely understand why most of the possibilities you're looking at are 6 years old; even used, you should be able to find something newer than an Ivy Bridge CPU.
-
@vamike said in Budget 1Gbps hardware in 1U rack format:
I don't entirely understand why most of the possibilities you're looking at are 6 years old
Well, actually I'm looking for ready-made 1U "firewall" servers with 6 GbE ports.
The CPU list is limited/old due to my available budget.
Anyway, this is the top output during a speed test:
last pid: 40076;  load averages: 0.56, 0.34, 0.28  up 0+20:19:03  14:28:05
188 processes: 5 running, 155 sleeping, 28 waiting
CPU: 4.5% user, 0.0% nice, 3.6% system, 10.9% interrupt, 80.9% idle
Mem: 76M Active, 1210M Inact, 882M Wired, 394M Buf, 1763M Free
Swap:
  PID USERNAME PRI NICE SIZE RES STATE C TIME WCPU COMMAND
   11 root 155 ki31 0K 64K CPU1 1 20.0H 92.97% [idle{idle: cpu1}]
   11 root 155 ki31 0K 64K CPU2 2 20.0H 91.76% [idle{idle: cpu2}]
   11 root 155 ki31 0K 64K RUN 0 20.0H 85.57% [idle{idle: cpu0}]
   11 root 155 ki31 0K 64K CPU3 3 19.9H 59.24% [idle{idle: cpu3}]
   12 root -92 - 0K 448K WAIT 3 5:55 41.05% [intr{irq273: re0}]
    0 root -92 - 0K 368K - 2 0:58 13.42% [kernel{em1 que}]
75592 root 24 0 287M 169M bpf 1 0:55 6.70% /usr/local/bin/ntopng -d /var/db/ntopng -G /var/run/ntopng.pid -s -e -w 0 -W 3000 -i bridge0 -i lagg0.100 -i pppoe0 --dns-mode{
75592 root 23 0 287M 169M bpf 2 0:55 6.57% /usr/local/bin/ntopng -d /var/db/ntopng -G /var/run/ntopng.pid -s -e -w 0 -W 3000 -i bridge0 -i lagg0.100 -i pppoe0 --dns-mode{
    0 root -92 - 0K 368K - 3 1:09 1.04% [kernel{em0 que}]
75592 root 20 0 287M 169M nanslp 1 3:50 0.24% /usr/local/bin/ntopng -d /var/db/ntopng -G /var/run/ntopng.pid -s -e -w 0 -W 3000 -i bridge0 -i lagg0.100 -i pppoe0 --dns-mode{
   20 root -16 - 0K 16K - 0 0:08 0.19% [rand_harvestq]
   12 root -60 - 0K 448K WAIT 0 1:37 0.13% [intr{swi4: clock (0)}]
40076 root 20 0 22116K 4392K CPU0 0 0:00 0.08% top -aSH
75592 root 20 0 287M 169M nanslp 0 0:31 0.04% /usr/local/bin/ntopng -d /var/db/ntopng -G /var/run/ntopng.pid -s -e -w 0 -W 3000 -i bridge0 -i lagg0.100 -i pppoe0 --dns-mode{
75037 root 20 0 24544K 8860K kqread 2 0:52 0.03% redis-server: /usr/local/bin/redis-server 127.0.0.1:6379 (redis-server){redis-server}
71243 root 20 0 234M 44228K nanslp 0 0:20 0.03% /usr/local/bin/php -f /usr/local/pkg/pfblockerng/pfblockerng.inc dnsbl
   19 root -16 - 0K 16K pftm 0 0:12 0.02% [pf purge]
 9406 root 20 0 78872K 9248K select 1 0:00 0.01% sshd: admin@pts/0 (sshd)
56652 root 20 0 10376K 2104K select 0 0:07 0.01% /usr/sbin/powerd -b hadp -a hadp -n hadp
   12 root -72 - 0K 448K WAIT 3 0:17 0.01% [intr{swi1: netisr 1}]
28071 root 20 0 10988K 2444K nanslp 1 0:05 0.01% [dpinger{dpinger}]
75592 root 20 0 287M 169M select 1 0:04 0.00% /usr/local/bin/ntopng -d /var/db/ntopng -G /var/run/ntopng.pid -s -e -w 0 -W 3000 -i bridge0 -i lagg0.100 -i pppoe0 --dns-mode{
44926 dhcpd 20 0 16652K 8428K select 0 0:03 0.00% /usr/local/sbin/dhcpd -user dhcpd -group _dhcp -chroot /var/dhcpd -cf /etc/dhcpd.conf -pf /var/run/dhcpd.pid bridge0 lagg0.100
75592 root 20 0 287M 169M bpf 0 0:02 0.00% /usr/local/bin/ntopng -d /var/db/ntopng -G /var/run/ntopng.pid -s -e -w 0 -W 3000 -i bridge0 -i lagg0.100 -i pppoe0 --dns-mode{
39089 root 20 0 24656K 12480K select 2 0:03 0.00% /usr/local/sbin/ntpd -g -c /var/etc/ntpd.conf -p /var/run/ntpd.pid{ntpd}
   15 root -68 - 0K 160K - 1 0:01 0.00% [usb{usbus1}]
28071 root 20 0 10988K 2444K sbwait 2 0:01 0.00% [dpinger{dpinger}]
   15 root -68 - 0K 160K - 2 0:01 0.00% [usb{usbus0}]
  970 root 20 0 274M 38452K kqread 2 0:01 0.00% php-fpm: master process (/usr/local/lib/php-fpm.conf) (php-fpm)
   15 root -68 - 0K 160K - 3 0:01 0.00% [usb{usbus0}]
   15 root -68 - 0K 160K - 2 0:01 0.00% [usb{usbus1}]
65773 root 20 0 34752K 6396K kqread 1 0:01 0.00% /usr/local/sbin/lighttpd_pfb -f /var/unbound/pfb_dnsbl_lighty.conf
75592 root 20 0 287M 169M nanslp 0 0:01 0.00% /usr/local/bin/ntopng -d /var/db/ntopng -G /var/run/ntopng.pid -s -e -w 0 -W 3000 -i bridge0 -i lagg0.100 -i pppoe0 --dns-mode{
28071 root 20 0 10988K 2444K nanslp 0 0:01 0.00% [dpinger{dpinger}]
   12 root -88 - 0K 448K WAIT 0 0:01 0.00% [intr{irq16: ehci0}]
   22 root -16 - 0K 48K psleep 0 0:02 0.00% [pagedaemon{pagedaemon}]
   12 root -88 - 0K 448K WAIT 0 0:01 0.00% [intr{irq23: ehci1}]
75592 root 20 0 287M 169M nanslp 0 0:01 0.00% /usr/local/bin/ntopng -d /var/db/ntopng -G /var/run/ntopng.pid -s -e -w 0 -W 3000 -i bridge0 -i lagg0.100 -i pppoe0 --dns-mode{
   12 root -56 - 0K 448K WAIT 1 0:02 0.00% [intr{swi5: fast taskq}]
   28 root 16 - 0K 16K syncer 0 0:23 0.00% [syncer]
75592 root 20 0 287M 169M nanslp 1 0:01 0.00% /usr/local/bin/ntopng -d /var/db/ntopng -G /var/run/ntopng.pid -s -e -w 0 -W 3000 -i bridge0 -i lagg0.100 -i pppoe0 --dns-mode{
75592 root 20 0 287M 169M nanslp 0 0:01 0.00% /usr/local/bin/ntopng -d /var/db/ntopng -G /var/run/ntopng.pid -s -e -w 0 -W 3000 -i bridge0 -i lagg0.100 -i pppoe0 --dns-mode{
25657 root 20 0 12736K 2624K bpf 2 0:03 0.00% /usr/local/sbin/filterlog -i pflog0 -p /var/run/filterlog.pid
   26 root 20 - 0K 32K sdflus 2 0:01 0.00% [bufdaemon{/ worker}]
   12 root -92 - 0K 448K WAIT 1 0:01 0.00% [intr{irq19: re1 atapci0}]
   25 root 20 - 0K 16K - 3 0:00 0.00% [bufspacedaemon]
   27 root 20 - 0K 16K vlruwt 0 0:00 0.00% [vnlru]
   26 root 20 - 0K 32K psleep 3 0:00 0.00% [bufdaemon{bufdaemon}]
75592 root 20 0 287M 169M nanslp 2 7:00 0.00% /usr/local/bin/ntopng -d /var/db/ntopng -G /var/run/ntopng.pid -s -e -w 0 -W 3000 -i bridge0 -i lagg0.100 -i pppoe0 --dns-mode{
75592 root 27 0 287M 169M uwait 0 6:11 0.00% /usr/local/bin/ntopng -d /var/db/ntopng -G /var/run/ntopng.pid -s -e -w 0 -W 3000 -i bridge0 -i lagg0.100 -i pppoe0 --dns-mode{
75592 root 28 0 287M 169M uwait 3 6:10 0.00% /usr/local/bin/ntopng -d /var/db/ntopng -G /var/run/ntopng.pid -s -e -w 0 -W 3000 -i bridge0 -i lagg0.100 -i pppoe0 --dns-mode{
75592 root 28 0 287M 169M uwait 0 6:10 0.00% /usr/local/bin/ntopng -d /var/db/ntopng -G /var/run/ntopng.pid -s -e -w 0 -W 3000 -i bridge0 -i lagg0.100 -i pppoe0 --dns-mode{
75592 root 28 0 287M 169M uwait 2 6:08 0.00% /usr/local/bin/ntopng -d /var/db/ntopng -G /var/run/ntopng.pid -s -e -w 0 -W 3000 -i bridge0 -i lagg0.100 -i pppoe0 --dns-mode{
75592 root 27 0 287M 169M uwait 1 6:06 0.00% /usr/local/bin/ntopng -d /var/db/ntopng -G /var/run/ntopng.pid -s -e -w 0 -W 3000 -i bridge0 -i lagg0.100 -i pppoe0 --dns-mode{
75592 root 20 0 287M 169M uwait 0 1:09 0.00% /usr/local/bin/ntopng -d /var/db/ntopng -G /var/run/ntopng.pid -s -e -w 0 -W 3000 -i bridge0 -i lagg0.100 -i pppoe0 --dns-mode{
[2.4.3-RELEASE][admin@p55.lan]/root:
-
@hitech95 said in Budget 1Gbps hardware in 1U rack format:
@vamike said in Budget 1Gbps hardware in 1U rack format:
I don't entirely understand why most of the possibilities you're looking at are 6 years old
Well, actually I'm looking for ready-made 1U "firewall" servers with 6 GbE ports.
The CPU list is limited/old due to my available budget.
Honestly, I'd just leave things alone for now. Keep watching, and pull the trigger when something better comes along for a reasonable price.
You'll probably find much better options if you give up on a pre-built firewall with 6 GbE ports and just pop a quad-port card into a normal 2-port server. Are you sure you need that many ports? I hope you're not trying to use a bunch of NICs as a switch.
-
Ok, so you can see how that load is not distributed across the cores. I assume re0 is your WAN and em1 the LAN you were testing from?
Just what budget are you working with?
I agree with VAMike, most newer hardware should be fine here as long as it's not something with loads of cores but limited single-thread performance.
Steve
-
@vamike said in Budget 1Gbps hardware in 1U rack format:
You'll probably find much better options if you give up a pre-built firewall with 6 gbe ports and just pop a quad-port card into a normal 2 port server. Are you sure you need that many ports? I hope you're not trying to use a bunch of NICs as a switch.
Servers are noisy. I already have an Intel and a Dell. The fans always run at max speed; they are not intended to be used at home. My server room is on top of my bedroom... I would like a silent server as a router.
@stephenw10 said in Budget 1Gbps hardware in 1U rack format:
Ok, so you can see how that load is not distributed across the cores. I assume re0 is your WAN and em1 the LAN you were testing from?
No idea how to read TOP output; I have always used HTOP.
I have 5 ports in use right now on my P55 motherboard: 4 Intel, and 1 Realtek for the PPPoE to the ONU.
The LAGG goes to a managed switch with SERVER-side VLANs, VOIP with QoS priority, and the HOME network.
Each VLAN has its own subnet, for a total of 4 subnets.
My budget goes up to 300 USD shipped.
Dummy question: is this PPPoE single-queue problem only on FreeBSD? I think I have seen SMP MIPS routers with PPPoE running on multiple cores... using OpenWRT, so not the official BSP.
-
@hitech95 said in Budget 1Gbps hardware in 1U rack format:
@vamike said in Budget 1Gbps hardware in 1U rack format:
You'll probably find much better options if you give up a pre-built firewall with 6 gbe ports and just pop a quad-port card into a normal 2 port server. Are you sure you need that many ports? I hope you're not trying to use a bunch of NICs as a switch.
Servers are noisy. I already have an Intel and a Dell. The fans always run at max speed; they are not intended to be used at home. My server room is on top of my bedroom... I would like a silent server as a router.
Well, you decided you needed a rack. :) Racks are noisy. That said, if the fans are actually running at max speed, something is broken; servers typically have several fan levels and won't go full blast unless one of the fans is dead. (The extra capacity is for redundancy.)
There are also different levels of sound for different servers; there's quite a lot of range between "fanless" and "747", and you throw away a lot of performance and options if you insist on fanless. (And if you have other servers in the rack that have fans, that seems like a really bad deal.) In HP's lineup, for example, the bottom end is a DL20, which is fairly tiny and almost silent because its single-socket E3 CPU and other components simply don't need as much cooling as, say, the dual-socket DL360. In general, an integrated box is going to have cooling proportional to its requirements, and a lower-power system is going to be quieter than a higher-power one.
As far as the U/Y/T chips go: low-TDP CPUs have basically the same performance characteristics as the higher-power CPUs, but they're throttled. If you run a newish CPU at idle, the fans will be pretty much off because it isn't drawing much power; under heavy load they will speed up. But a low-TDP CPU simply wouldn't be able to do the thing that's causing the higher-TDP CPU to max out; you can get the same effect by killing whatever is maxing out the CPU if you don't need it anyway, or by limiting the CPU frequency on the higher-TDP chip.
I have 5 ports used right now on my P55 motherboard 4 intel and 1 realtek for the PPPoE to the ONU.
The LAGG goes to a managed switch with SERVER side vlans, VOIP with QOS priority and HOME network.
Each VLAN has its own subnet, for a total of 4 subnets.
It's unlikely that the LAGG is getting you much. You'd probably be better off putting the WAN onto one of the Intel ports, then splitting up the VLANs so that the 2 highest-traffic links are on their own interfaces, and either trunking everything else on the last Intel interface or putting the lowest-traffic VLAN on the Realtek. (Understanding that FreeBSD's Realtek driver is really, really bad.)
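As a rough sketch of what that reshuffle could look like at the FreeBSD level (pfSense does this through its Interfaces/VLAN assignment GUI, not these commands; the igb port names and the tag 30 are purely illustrative, only VLAN 835 comes from this thread):

```shell
# Hypothetical port layout after dropping the LAGG:
#   igb0 -> WAN: PPPoE carried inside VLAN 835 (as in this thread)
#   igb1 -> SERVER VLAN: highest traffic, untagged on its own port
#   igb2 -> HOME network: untagged on its own port
#   igb3 -> trunk for the remaining low-traffic VLANs (e.g. VOIP)
ifconfig igb0.835 create vlan 835 vlandev igb0   # WAN-side PPPoE VLAN
ifconfig igb3.30 create vlan 30 vlandev igb3     # example low-traffic VLAN
```

The idea is simply that the busiest subnets get a dedicated NIC each, so no single link has to carry the aggregate.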
My budget goes up to 300USD shipped.
I'd definitely just keep saving and waiting; nothing you get for $300 is going to be much of an upgrade over what you already have. In the meantime, see about tuning the fan speeds. More modern servers tend to handle everything automatically; older servers may need some kind of program running to set things from the OS, or have settings in the BIOS to override that.
-
It could well be a FreeBSD only issue, or maybe a *BSD issue. I haven't ever investigated.
It's something that has only relatively recently become an issue, as higher-speed connections using PPPoE have become available.
Just be aware of it.
Steve
-
@stephenw10 said in Budget 1Gbps hardware in 1U rack format:
Just be aware of it.
Just tried a workaround found on the FreeBSD issue tracker, but it looks like it reduces the speed and increases ping:
-100Mbps
+5ms.
I haven't done a proper test, just a couple of speedtests. The tunable is:
net.isr.dispatch = "deferred"
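For reference, here is a sketch of how that workaround is usually applied on pfSense, via System > Advanced > System Tunables or a loader.conf.local file. The two extra netisr tunables are the ones commonly suggested alongside it in PPPoE threads, not something tested here:

```shell
# /boot/loader.conf.local (or pfSense System Tunables) - illustrative only
net.isr.dispatch=deferred   # queue packets for netisr threads instead of direct dispatch
net.isr.maxthreads=-1       # one netisr worker thread per CPU core
net.isr.bindthreads=1       # pin each netisr thread to its core
```

maxthreads and bindthreads are loader tunables and need a reboot; dispatch can also be changed at runtime with sysctl.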
At least the queue is managed by all cores. I have installed HTOP :)
And now the load is more distributed.
-
Yes, that will likely have some drawbacks. It will probably break any traffic shaping you have applied for example which might be why your ping times increase.
Steve
-
Maybe you can consider an Atom C3000 series CPU, like Netgate's SG-5100 or XG-7100 with the 4-core C3558, or a Supermicro C3000 series motherboard.
I am testing a Supermicro C3758 barebone which has 8 cores and 4 1Gbps ports in a 1U rack mount. The fan speed of this barebone can be set lower than the default (30%) if you configure it through IPMI console tools.
-
@abcnew said in Budget 1Gbps hardware in 1U rack format:
Maybe you can consider an Atom C3000 series CPU, like Netgate's SG-5100 or XG-7100 with the 4-core C3558, or a Supermicro C3000 series motherboard.
Have you confirmed that hardware can do gigabit PPPoE?
-
Mmm, unfortunately I can only dream of 1G PPPoE here and I'm not sure how realistic a local test would be with PPPoE.
But I would want to run a test to be sure there. TBH, I kinda expected it to use more of a single core on the i5-750.
As another data point there is this: https://forum.netgate.com/topic/133704/poor-performance-on-igb-driver
That user seemed to be limited to 500Mbps over PPPoE with J1900.
I suspect this doesn't scale and there are a load of variables, though.
Steve
-
@vamike
1Gbps PPPoE was an issue before because PPPoE was single-threaded for a very long time in FreeBSD. (Use "PPPoE single threaded" as search keywords.)
But MikeFromOz posted 3 months ago that his i5-8400 and Intel X550 can do 1Gbps PPPoE:
https://forum.netgate.com/topic/117313/hardware-to-achieve-gigabit-over-pppoe/5
-
@abcnew Um, are you arguing that there's no need to worry about whether a low-power, low-IPC embedded CPU can do gigabit PPPoE because it's possible to hit 1Gbps with a 4GHz implementation of Intel's latest microarchitecture?
-
Mmm, that's not a good example. The i5-8400 has very high single-thread performance; I would expect that to handle 1Gbps on a single queue easily. As the poster says there, they may have over-spec'd it.
https://www.cpubenchmark.net/compare/Intel-i5-8400-vs-Intel-i5-750-vs-Intel-Celeron-J1900/3097vs772vs2131
A synthetic benchmark like that can only tell you so much, but it should provide some idea.
Steve