Budget 1Gbps hardware in 1U rack format



  • Hi guys,
    right now I'm running pfSense on my old, power-hungry gear (an i5-750).

    I would like to move to a more power-efficient processor,
    but I have no idea what hardware to choose.

    My 1Gbps internet link runs over PPPoE on a specific VLAN (835).
    I also have a LAGG with 3 different VLANs/subnets, and I use a bit of OpenVPN for things like remote desktop and file transfer.
    I have seen a few solutions based on these CPUs, but I have no idea which one to pick:

    • Celeron 3855U
    • i3 3217U
    • i5 3317U

    If you know a "cheap" solution in rack 1U format that would be awesome.

    Thanks guys for help!


  • Netgate Administrator

    You are going to want high single-thread performance for both PPPoE (which is limited to a single queue) and OpenVPN.

    I would run some tests on your current box to see how the cores are loaded under maximum throughput conditions.

    From a very brief search, only the i5 3317U is comparable in single-thread performance. If your i5-750 is currently close to maxing out one core, you might look at something other than those laptop CPUs.

    Steve



  • @stephenw10 said in Budget 1Gbps hardware in 1U rack format:

    I would run some tests on your current box to see how the cores are loaded under maximum throughput conditions.

    Right now I have opened a bunch of speedtest/dslreports tabs (10 of them), enabled Steam updates, and added a few Linux distro downloads via torrent.
    The CPU was at 15-25-30% under that load.
    Load average: 0.32, 0.29, 0.21.
    Could you suggest a proper way to test it?

    Nicolò



  • No CPU with a "U" in the name would be a good start.


  • Netgate Administrator

    Open an SSH session to the firewall and at the command line run top -aSH. Hit q to quit and leave the current data on the screen; you can then copy/paste it out if required. That will show you the per-core usage, which is what you need to know here. A 30% overall usage could be one core at 100%.
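
    For example, from another machine on the LAN (a minimal sketch; the address and user below are placeholders for your own firewall):

    ssh admin@192.168.1.1   # your firewall's LAN address; choose option 8 (Shell) if the pfSense console menu appears
    top -aSH                # -a: full command lines, -S: include system processes, -H: show individual threads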

    Steve



  • I think this is overthinking the problem. A Skylake or newer G-series Pentium will run rings around the i5-750, will have AES-NI, and will run much more efficiently without costing too much. Anything higher-end than that will still be faster and more power efficient, but will become progressively more expensive. Avoid low-TDP chips like the -U, -Y, or -T series because they aren't needed for this application, and just find the cheapest 1U rackmount you can with a reasonably modern CPU. (Understanding that you'll pay more for a rackmount.) I don't entirely understand why most of the possibilities you're looking at are 6 years old; even used, you should be able to find something newer than an Ivy Bridge CPU.



  • @vamike said in Budget 1Gbps hardware in 1U rack format:

    I don't entirely understand why most of the possibilities you're looking at are 6 years old

    Well, actually I'm looking for ready-made "firewall" 1U servers with 6 GbE ports.
    The CPU list is limited/old due to the amount of money I have available.

    Anyway, this is the top output during a speed test.

    last pid: 40076;  load averages:  0.56,  0.34,  0.28                                                                                                                                                                 up 0+20:19:03  14:28:05
    188 processes: 5 running, 155 sleeping, 28 waiting
    CPU:  4.5% user,  0.0% nice,  3.6% system, 10.9% interrupt, 80.9% idle
    Mem: 76M Active, 1210M Inact, 882M Wired, 394M Buf, 1763M Free
    Swap:
    
      PID USERNAME   PRI NICE   SIZE    RES STATE   C   TIME    WCPU COMMAND
       11 root       155 ki31     0K    64K CPU1    1  20.0H  92.97% [idle{idle: cpu1}]
       11 root       155 ki31     0K    64K CPU2    2  20.0H  91.76% [idle{idle: cpu2}]
       11 root       155 ki31     0K    64K RUN     0  20.0H  85.57% [idle{idle: cpu0}]
       11 root       155 ki31     0K    64K CPU3    3  19.9H  59.24% [idle{idle: cpu3}]
       12 root       -92    -     0K   448K WAIT    3   5:55  41.05% [intr{irq273: re0}]
        0 root       -92    -     0K   368K -       2   0:58  13.42% [kernel{em1 que}]
    75592 root        24    0   287M   169M bpf     1   0:55   6.70% /usr/local/bin/ntopng -d /var/db/ntopng -G /var/run/ntopng.pid -s -e -w 0 -W 3000 -i bridge0 -i lagg0.100 -i pppoe0 --dns-mode{
    75592 root        23    0   287M   169M bpf     2   0:55   6.57% /usr/local/bin/ntopng -d /var/db/ntopng -G /var/run/ntopng.pid -s -e -w 0 -W 3000 -i bridge0 -i lagg0.100 -i pppoe0 --dns-mode{
        0 root       -92    -     0K   368K -       3   1:09   1.04% [kernel{em0 que}]
    75592 root        20    0   287M   169M nanslp  1   3:50   0.24% /usr/local/bin/ntopng -d /var/db/ntopng -G /var/run/ntopng.pid -s -e -w 0 -W 3000 -i bridge0 -i lagg0.100 -i pppoe0 --dns-mode{
       20 root       -16    -     0K    16K -       0   0:08   0.19% [rand_harvestq]
       12 root       -60    -     0K   448K WAIT    0   1:37   0.13% [intr{swi4: clock (0)}]
    40076 root        20    0 22116K  4392K CPU0    0   0:00   0.08% top -aSH
    75592 root        20    0   287M   169M nanslp  0   0:31   0.04% /usr/local/bin/ntopng -d /var/db/ntopng -G /var/run/ntopng.pid -s -e -w 0 -W 3000 -i bridge0 -i lagg0.100 -i pppoe0 --dns-mode{
    75037 root        20    0 24544K  8860K kqread  2   0:52   0.03% redis-server: /usr/local/bin/redis-server 127.0.0.1:6379 (redis-server){redis-server}
    71243 root        20    0   234M 44228K nanslp  0   0:20   0.03% /usr/local/bin/php -f /usr/local/pkg/pfblockerng/pfblockerng.inc dnsbl
       19 root       -16    -     0K    16K pftm    0   0:12   0.02% [pf purge]
     9406 root        20    0 78872K  9248K select  1   0:00   0.01% sshd: admin@pts/0 (sshd)
    56652 root        20    0 10376K  2104K select  0   0:07   0.01% /usr/sbin/powerd -b hadp -a hadp -n hadp
       12 root       -72    -     0K   448K WAIT    3   0:17   0.01% [intr{swi1: netisr 1}]
    28071 root        20    0 10988K  2444K nanslp  1   0:05   0.01% [dpinger{dpinger}]
    75592 root        20    0   287M   169M select  1   0:04   0.00% /usr/local/bin/ntopng -d /var/db/ntopng -G /var/run/ntopng.pid -s -e -w 0 -W 3000 -i bridge0 -i lagg0.100 -i pppoe0 --dns-mode{
    44926 dhcpd       20    0 16652K  8428K select  0   0:03   0.00% /usr/local/sbin/dhcpd -user dhcpd -group _dhcp -chroot /var/dhcpd -cf /etc/dhcpd.conf -pf /var/run/dhcpd.pid bridge0 lagg0.100
    75592 root        20    0   287M   169M bpf     0   0:02   0.00% /usr/local/bin/ntopng -d /var/db/ntopng -G /var/run/ntopng.pid -s -e -w 0 -W 3000 -i bridge0 -i lagg0.100 -i pppoe0 --dns-mode{
    39089 root        20    0 24656K 12480K select  2   0:03   0.00% /usr/local/sbin/ntpd -g -c /var/etc/ntpd.conf -p /var/run/ntpd.pid{ntpd}
       15 root       -68    -     0K   160K -       1   0:01   0.00% [usb{usbus1}]
    28071 root        20    0 10988K  2444K sbwait  2   0:01   0.00% [dpinger{dpinger}]
       15 root       -68    -     0K   160K -       2   0:01   0.00% [usb{usbus0}]
      970 root        20    0   274M 38452K kqread  2   0:01   0.00% php-fpm: master process (/usr/local/lib/php-fpm.conf) (php-fpm)
       15 root       -68    -     0K   160K -       3   0:01   0.00% [usb{usbus0}]
       15 root       -68    -     0K   160K -       2   0:01   0.00% [usb{usbus1}]
    65773 root        20    0 34752K  6396K kqread  1   0:01   0.00% /usr/local/sbin/lighttpd_pfb -f /var/unbound/pfb_dnsbl_lighty.conf
    75592 root        20    0   287M   169M nanslp  0   0:01   0.00% /usr/local/bin/ntopng -d /var/db/ntopng -G /var/run/ntopng.pid -s -e -w 0 -W 3000 -i bridge0 -i lagg0.100 -i pppoe0 --dns-mode{
    28071 root        20    0 10988K  2444K nanslp  0   0:01   0.00% [dpinger{dpinger}]
       12 root       -88    -     0K   448K WAIT    0   0:01   0.00% [intr{irq16: ehci0}]
       22 root       -16    -     0K    48K psleep  0   0:02   0.00% [pagedaemon{pagedaemon}]
       12 root       -88    -     0K   448K WAIT    0   0:01   0.00% [intr{irq23: ehci1}]
    75592 root        20    0   287M   169M nanslp  0   0:01   0.00% /usr/local/bin/ntopng -d /var/db/ntopng -G /var/run/ntopng.pid -s -e -w 0 -W 3000 -i bridge0 -i lagg0.100 -i pppoe0 --dns-mode{
       12 root       -56    -     0K   448K WAIT    1   0:02   0.00% [intr{swi5: fast taskq}]
       28 root        16    -     0K    16K syncer  0   0:23   0.00% [syncer]
    75592 root        20    0   287M   169M nanslp  1   0:01   0.00% /usr/local/bin/ntopng -d /var/db/ntopng -G /var/run/ntopng.pid -s -e -w 0 -W 3000 -i bridge0 -i lagg0.100 -i pppoe0 --dns-mode{
    75592 root        20    0   287M   169M nanslp  0   0:01   0.00% /usr/local/bin/ntopng -d /var/db/ntopng -G /var/run/ntopng.pid -s -e -w 0 -W 3000 -i bridge0 -i lagg0.100 -i pppoe0 --dns-mode{
    25657 root        20    0 12736K  2624K bpf     2   0:03   0.00% /usr/local/sbin/filterlog -i pflog0 -p /var/run/filterlog.pid
       26 root        20    -     0K    32K sdflus  2   0:01   0.00% [bufdaemon{/ worker}]
       12 root       -92    -     0K   448K WAIT    1   0:01   0.00% [intr{irq19: re1 atapci0}]
       25 root        20    -     0K    16K -       3   0:00   0.00% [bufspacedaemon]
       27 root        20    -     0K    16K vlruwt  0   0:00   0.00% [vnlru]
       26 root        20    -     0K    32K psleep  3   0:00   0.00% [bufdaemon{bufdaemon}]
    75592 root        20    0   287M   169M nanslp  2   7:00   0.00% /usr/local/bin/ntopng -d /var/db/ntopng -G /var/run/ntopng.pid -s -e -w 0 -W 3000 -i bridge0 -i lagg0.100 -i pppoe0 --dns-mode{
    75592 root        27    0   287M   169M uwait   0   6:11   0.00% /usr/local/bin/ntopng -d /var/db/ntopng -G /var/run/ntopng.pid -s -e -w 0 -W 3000 -i bridge0 -i lagg0.100 -i pppoe0 --dns-mode{
    75592 root        28    0   287M   169M uwait   3   6:10   0.00% /usr/local/bin/ntopng -d /var/db/ntopng -G /var/run/ntopng.pid -s -e -w 0 -W 3000 -i bridge0 -i lagg0.100 -i pppoe0 --dns-mode{
    75592 root        28    0   287M   169M uwait   0   6:10   0.00% /usr/local/bin/ntopng -d /var/db/ntopng -G /var/run/ntopng.pid -s -e -w 0 -W 3000 -i bridge0 -i lagg0.100 -i pppoe0 --dns-mode{
    75592 root        28    0   287M   169M uwait   2   6:08   0.00% /usr/local/bin/ntopng -d /var/db/ntopng -G /var/run/ntopng.pid -s -e -w 0 -W 3000 -i bridge0 -i lagg0.100 -i pppoe0 --dns-mode{
    75592 root        27    0   287M   169M uwait   1   6:06   0.00% /usr/local/bin/ntopng -d /var/db/ntopng -G /var/run/ntopng.pid -s -e -w 0 -W 3000 -i bridge0 -i lagg0.100 -i pppoe0 --dns-mode{
    75592 root        20    0   287M   169M uwait   0   1:09   0.00% /usr/local/bin/ntopng -d /var/db/ntopng -G /var/run/ntopng.pid -s -e -w 0 -W 3000 -i bridge0 -i lagg0.100 -i pppoe0 --dns-mode{
    [2.4.3-RELEASE][admin@p55.lan]/root:
    


  • @hitech95 said in Budget 1Gbps hardware in 1U rack format:

    @vamike said in Budget 1Gbps hardware in 1U rack format:

    I don't entirely understand why most of the possibilities you're looking at are 6 years old

    Well, actually I'm looking for ready-made "firewall" 1U servers with 6 GbE ports.
    The CPU list is limited/old due to the amount of money I have available.

    Honestly, I'd just leave things alone for now. Keep watching, and pull the trigger when something better comes along for a reasonable price.

    You'll probably find much better options if you give up on a pre-built firewall with 6 GbE ports and just pop a quad-port card into a normal 2-port server. Are you sure you need that many ports? I hope you're not trying to use a bunch of NICs as a switch.


  • Netgate Administrator

    Ok, so you can see how that load is not distributed across the cores. I assume re0 is your WAN and em1 the LAN you were testing from?

    Just what budget are you working with?

    I agree with VAMike: most newer hardware should be fine here, as long as it's not something with loads of cores but limited single-thread performance.

    Steve



  • @vamike said in Budget 1Gbps hardware in 1U rack format:

    You'll probably find much better options if you give up on a pre-built firewall with 6 GbE ports and just pop a quad-port card into a normal 2-port server. Are you sure you need that many ports? I hope you're not trying to use a bunch of NICs as a switch.

    Servers are noisy. I already have an Intel and a Dell; the fans always run at max speed, and they are not intended to be used at home. My server room is right above my bedroom... I would like a silent server as a router.

    @stephenw10 said in Budget 1Gbps hardware in 1U rack format:

    Ok, so you can see how that load is not distributed across the cores. I assume re0 is your WAN and em1 the LAN you were testing from?

    No idea how to read the top output; I've always used htop.

    I have 5 ports in use right now on my P55 motherboard: 4 Intel and 1 Realtek for the PPPoE to the ONU.
    The LAGG goes to a managed switch carrying the SERVER-side VLANs, VOIP with QoS priority, and the HOME network.
    Each VLAN has its own subnet, for a total of 4 subnets.

    My budget goes up to 300USD shipped.

    Dumb question: is this PPPoE single-queue problem specific to FreeBSD? I think I have seen SMP MIPS routers running PPPoE on multiple cores... using OpenWrt, so not the official BSP.



  • @hitech95 said in Budget 1Gbps hardware in 1U rack format:

    @vamike said in Budget 1Gbps hardware in 1U rack format:

    You'll probably find much better options if you give up on a pre-built firewall with 6 GbE ports and just pop a quad-port card into a normal 2-port server. Are you sure you need that many ports? I hope you're not trying to use a bunch of NICs as a switch.

    Servers are noisy. I already have an Intel and a Dell; the fans always run at max speed, and they are not intended to be used at home. My server room is right above my bedroom... I would like a silent server as a router.

    Well, you decided you needed a rack. :) Racks are noisy. That said, if the fans are actually running at max speed, something is broken; servers will typically have several fan levels and won't go full blast unless one of the fans is dead. (The extra capacity is for redundancy.)

    There are also different levels of sound for different servers; there's quite a lot of range between "fanless" and "747", and you throw away a lot of performance and options if you insist on fanless. (And if you have other servers in the rack that have fans, that seems like a really bad deal.) In HP's lineup, for example, the bottom end is a DL20, which is fairly tiny and almost silent because its single-socket E3 CPU and other components simply don't need as much cooling as, say, the dual-socket DL360. In general, an integrated box is going to have cooling proportional to its requirements, and if you look for a lower-power system it's going to be quieter than a higher-power system.

    As for the U/Y/T chips, low-TDP CPUs have basically the same performance characteristics as the higher-power CPUs, but they're throttled. If you run a newish CPU at idle the fans are going to be pretty much off because it isn't drawing much power. If you have the CPU under heavy load the fans will speed up. But if you have a low-TDP CPU it just wouldn't be able to do the thing that's causing the higher-TDP CPU to max out; you can get the same effect by just killing whatever is maxing out the CPU if you don't need it anyway. (Or you can limit the CPU frequency on the higher-TDP CPU.)

    I have 5 ports in use right now on my P55 motherboard: 4 Intel and 1 Realtek for the PPPoE to the ONU.
    The LAGG goes to a managed switch carrying the SERVER-side VLANs, VOIP with QoS priority, and the HOME network.
    Each VLAN has its own subnet, for a total of 4 subnets.

    It's unlikely that the LAG is getting you much. You'd probably be better off putting the WAN onto one of the Intel ports, then splitting up the VLANs such that the 2 highest-traffic links are on their own interfaces, and either trunking everything else on the last Intel interface or putting the lowest-traffic VLAN on the Realtek. (Understanding that FreeBSD's Realtek driver is really, really bad.)
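
    A rough illustration of that layout at the FreeBSD level (a sketch only; the igb names and VLAN tags are made-up placeholders, and in pfSense you would do the equivalent from Interfaces > Assignments and the VLANs tab):

    # PPPoE WAN moved onto an Intel port (igb0) instead of the Realtek,
    # the two busiest subnets untagged on their own ports (igb1, igb2),
    # and everything else trunked as tagged VLANs on the last Intel port:
    ifconfig vlan30 create vlan 30 vlandev igb3
    ifconfig vlan40 create vlan 40 vlandev igb3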

    My budget goes up to 300USD shipped.

    I'd definitely just keep saving and waiting--nothing you get for $300 is going to be much of an upgrade from what you already have. In the meantime, see about tuning the fan speeds. More modern servers tend to handle everything automatically; older servers may need some kind of program running to set things from the OS, or may have settings in the BIOS to override that.


  • Netgate Administrator

    It could well be a FreeBSD-only issue, or maybe a *BSD issue; I haven't ever investigated.
    It's something that has only relatively recently become an issue, with higher-speed connections becoming available over PPPoE.
    Just be aware of it.

    Steve



  • @stephenw10 said in Budget 1Gbps hardware in 1U rack format:

    Just be aware of it.

    I just tried a workaround found on the FreeBSD issue tracker, but it looks like it reduces the speed and increases ping:
    -100Mbps
    +5ms
    I haven't done a proper test, just a couple of speedtests.

    net.isr.dispatch = "deferred"
    

    At least the queue is managed by all the cores. I have installed htop :)
    and now the load is more distributed.
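
    For reference, something like this should flip it at runtime for testing (a sketch; in pfSense the persistent setting would go under System > Advanced > System Tunables):

    sysctl net.isr.dispatch            # show the current mode (direct, hybrid or deferred)
    sysctl net.isr.dispatch=deferred   # apply at runtime for testing; revert with =direct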


  • Netgate Administrator

    Yes, that will likely have some drawbacks. It will probably break any traffic shaping you have applied, for example, which might be why your ping times increase.

    Steve



  • Maybe you can consider an Atom C3000 series CPU, like Netgate's SG-5100 or XG-7100 with the 4-core C3558, or a Supermicro C3000 series motherboard.

    I am testing a Supermicro C3758 barebone, which has 8 cores and 4 1Gbps ports in a 1U rack mount. The fan speed of this barebone can be set lower than the default setting (30%) if you configure it through the IPMI console tools.
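
    For example, the fan readings can be checked over IPMI with something like this (a sketch; it assumes ipmitool is installed, and the BMC address and credentials below are placeholders):

    ipmitool -I lanplus -H 10.0.0.50 -U ADMIN -P 'secret' sensor list | grep -i FAN   # current fan RPM and thresholds
    # The fan mode/threshold changes themselves are easiest to make from the BMC's IPMI web interface.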



  • @abcnew said in Budget 1Gbps hardware in 1U rack format:

    Maybe you can consider an Atom C3000 series CPU, like Netgate's SG-5100 or XG-7100 with the 4-core C3558, or a Supermicro C3000 series motherboard.

    Have you confirmed that hardware can do gigabit PPPoE?


  • Netgate Administrator

    Mmm, unfortunately I can only dream of 1G PPPoE here, and I'm not sure how realistic a local test with PPPoE would be.
    But I would want to run a test to be sure. I kinda expected it to use more of a single core on the i5-750, TBH.
    As another data point there is this: https://forum.netgate.com/topic/133704/poor-performance-on-igb-driver
    That user seemed to be limited to 500Mbps over PPPoE with a J1900.
    I suspect this doesn't scale and there are a load of variables, though.

    Steve



  • @vamike
    1Gbps PPPoE was an issue before because PPPoE was single-threaded in FreeBSD for a very long time. (Use "PPPoE single threaded" as search keywords.)

    But MikeFromOz posted 3 months ago that his i5-8400 and Intel X550 can do 1Gbps PPPoE:
    https://forum.netgate.com/topic/117313/hardware-to-achieve-gigabit-over-pppoe/5



  • @abcnew Um, are you arguing that there's no need to worry about whether a low-power, low-IPC embedded CPU can do gigabit PPPoE because it's possible to hit 1Gbps with a 4GHz implementation of Intel's latest microarchitecture?


  • Netgate Administrator

    Mmm, that's not a good example. The i5-8400 has very high single-thread performance; I would expect it to handle 1Gbps on a single queue easily. As the poster says there, they may have over-spec'd it. 😉
    https://www.cpubenchmark.net/compare/Intel-i5-8400-vs-Intel-i5-750-vs-Intel-Celeron-J1900/3097vs772vs2131

    A synthetic benchmark like that can only tell you so much but it should provide some idea.

    Steve



  • I think that for now I'll keep my hardware (trying to optimize it by downclocking) or move to something other than pfSense.
    Here all connections are PPPoE, and this is a huge drawback.

    Even my old MIPS router with a single-core 500MHz CPU can push 300Mbps! And a quad-core 2GHz+ x86 CPU can't.
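
    For reference, the clock can at least be inspected and capped from the pfSense shell, roughly like this (a sketch; powerd is already managing the frequency per the earlier top output, so a manual setting may get overridden):

    sysctl dev.cpu.0.freq          # current clock in MHz
    sysctl dev.cpu.0.freq_levels   # available frequency/power steps
    sysctl dev.cpu.0.freq=1333     # cap the clock for a quick test (use one of the listed levels)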


  • Netgate Administrator

    Well, you can push 300Mbps; you can't push 1Gbps over PPPoE with a CPU from 2014 that was designed to be low power. Assuming you're referring to the J1900 there.

    How much power is your current setup actually consuming?

    It clearly doesn't need a quad-core CPU in that application. You might save some power by just swapping out the CPU.

    Steve



  • @hitech95 said in Budget 1Gbps hardware in 1U rack format:

    I think that for now I'll keep my hardware (trying to optimize it by downclocking) or move to something other than pfSense.
    Here all connections are PPPoE, and this is a huge drawback.

    Yes. Ideally you could get a provider that doesn't use PPPoE (which is horrible), but if you're stuck you're stuck. You'd probably see better performance with a Linux-based firewall, but nothing you wrote above suggested anything about a performance problem--you only mentioned a desire for better power efficiency. Also, please see what I wrote about getting rid of the LAG and changing how you use the existing interfaces. It's possible to degrade any system by configuring it sub-optimally.

    Even my old MIPS router with a single-core 500MHz CPU can push 300Mbps! And a quad-core 2GHz+ x86 CPU can't.

    Where did this 300Mbps number come from? There are plenty of quad-core 2GHz+ x86 devices that can push more than that. (Although saying "2GHz x86" is mostly meaningless because it neglects to specify the implementation; a 2GHz Avoton has very different characteristics than a 2GHz Kaby Lake.) It's also worth pointing out that your MIPS router likely does PPPoE in hardware.



  • @stephenw10 said in Budget 1Gbps hardware in 1U rack format:

    You might save some power by just swapping out the CPU.

    I wouldn't expect much of an ROI from doing that--the old chipsets were power hogs themselves, so the total idle consumption probably won't drop much.



  • @vamike said in Budget 1Gbps hardware in 1U rack format:

    Ideally you could get a provider that doesn't use PPPoE

    In Italy I have only seen PPPoE connections.

    but nothing you wrote above suggested anything about a performance problem--you only mentioned a desire for better power efficiency

    Right now I don't have problems, except some with VoIP. (No idea if it is due to the VoIP provider or pfSense.)

    Where did this 300Mbps number come from?

    It was just a reference to show that BSD doesn't seem to use the full potential of the hardware; in comparison, small and not very powerful embedded hardware can do more.

    Anyway, according to the synthetic benchmarks, the i5 should be fine handling the connection; it has a higher single-thread score than my i5-750.

    The power consumption should be around 150W. The processor is quite old, and the motherboard is a gaming "extreme edition" with lots of fancy features, so it has a lot of stuff that is useless for a router and just sucks power.

    I was looking for a more efficient system.

    Why should I remove the LAG? It saved my life; I bought a switch capable of managing it just to have that!
    I move a lot of stuff on my network, and sometimes the router has to handle the traffic (different subnets). If I didn't have the LAG, the internet connection would be slow.

    Well, you can push 300Mbps; you can't push 1Gbps over PPPoE with a CPU from 2014 that was designed to be low power. Assuming you're referring to the J1900 there.

    I was talking about the i3 or i5 I listed before. The J1900 is quite a low-power processor, and I know its limitations.
    But the fact that I would have to buy a Skylake+ CPU for my router makes no sense. I'm on Ivy Bridge on my main desktop... and my router should be more powerful/newer?

    This makes no sense. Something is wrong in the "pipeline".


  • Netgate Administrator

    @vamike said in Budget 1Gbps hardware in 1U rack format:

    I wouldn't expect much of an ROI from doing that--the old chipsets were power hogs themselves, so the total idle consumption probably won't drop much.

    Yeah, I agree. Even at the cost of, say, an i5-650, you probably wouldn't make that back in power savings.
    Personally, I'd probably try it just to see.

    @hitech95 said in Budget 1Gbps hardware in 1U rack format:

    The power consumption should be around 150W

    It probably isn't though. I would measure it before changing anything.

    Steve



  • @hitech95 said in Budget 1Gbps hardware in 1U rack format:

    @vamike said in Budget 1Gbps hardware in 1U rack format:

    Ideally you could get a provider that doesn't use PPPoE

    In Italy I have only seen PPPoE connections.

    no country has a monopoly on stupidity :)

    Where did this 300Mbps number come from?

    It was just a reference to show that BSD doesn't seem to use the full potential of the hardware; in comparison, small and not very powerful embedded hardware can do more.

    If you offload a large part of the processing, then it certainly requires less CPU to do the rest. That said, I think you're engaging in a bit of hyperbole, because it's certainly possible to find a lot of consumer routers and other small embedded chips that choke on gigabit PPPoE. If you got a router along with a gigabit PPPoE connection, you got something that was specifically chosen for its PPPoE performance.

    Why should I remove the LAG? It saved my life; I bought a switch capable of managing it just to have that!
    I move a lot of stuff on my network, and sometimes the router has to handle the traffic (different subnets). If I didn't have the LAG, the internet connection would be slow.

    Honestly, I have no idea what you're saying here. If you want to have different subnets you can either trunk over a VLAN or use different interfaces as I suggested; link aggregation has nothing to do with it.

    Due to the way link aggregation works, if you have a small number of devices it's extremely unlikely that you're getting balanced high utilization across the interfaces. Much more likely, you've got high utilization on one or two interfaces, and the others are idle. Especially since your WAN is currently a Realtek NIC (which has terrible FreeBSD drivers), it's almost certain you'll get more bang for the buck by moving the LAN to one of the Intel interfaces than by hoping that the traffic spreads nicely across four interfaces internally. If your high-utilization hosts happen to hash unluckily, the LAG will actually provide less bandwidth than splitting the subnets onto separate interfaces. (In the worst case, a pair of connection endpoints will be sharing a single interface rather than being forced onto two.)

    In general, a lot of people think that link aggregation will provide much more benefit than it actually does. There's no way link aggregation is speeding up the WAN connection, which is capped at 1Gbps.
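
    If you do keep the LAG, it's at least worth checking which member ports actually carry traffic and how the hash is configured. Roughly, at the FreeBSD level (a sketch; lagg0 is the interface name visible in the top output above, and pfSense normally manages these settings from the LAGG interface configuration):

    ifconfig lagg0                     # shows laggproto, lagghash and which member ports are active
    ifconfig lagg0 lagghash l2,l3,l4   # hash on MAC + IP + port so distinct flows can spread across members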

    I was talking about the i3 or i5 I listed before. The J1900 is quite a low-power processor, and I know its limitations.
    But the fact that I would have to buy a Skylake+ CPU for my router makes no sense. I'm on Ivy Bridge on my main desktop... and my router should be more powerful/newer?

    This makes no sense. Something is wrong in the "pipeline".

    There's certainly confusion here. You haven't reported a performance problem, so recommendations for new CPUs are based mostly on your request for something more efficient, and somewhat with an eye toward what you might want to implement on the connection you say you have. The newer CPUs are very, very power efficient at idle, and that's why I suggest looking at them rather than a six-year-old Ivy Bridge, given your stated requirement. If you want to add things with greater CPU requirements, like VPN, they have plenty of capacity for that as well.

    The 3855U could probably handle things at your current level of activity, but you won't have a lot of excess capacity. If you try to do more with it and run into the limits, you've basically wasted your investment if you end up needing to upgrade. And (this is important) the U-series part costs more than the entry-level CPUs I was talking about--you pay extra for the throttling. The only reason to buy something in the U series is if you have a really compelling reason not to exceed a certain TDP at max load (for example, if you put the CPU in your lap and don't want to get too toasty).

    Again, you haven't said that you have a performance issue now, so I don't understand why you say you "need" to upgrade to Skylake--you could just leave everything alone.



  • Hi,
    I currently have a lot of traffic between subnets. If I have full gigabit traffic, the two links get saturated; this is why I have the LAG. The LAG is now running on the Intel NICs, and the Realtek is used only for the WAN to the ONU.

    In Italy we have 5 ISPs and all of them use PPPoE. (Only one uses plain IP, and only on VDSL.)

    About Skylake: as you have said, only the newer generations have low idle power, and the prices are too high for my budget.

    At this point I'll wait for some newer solutions.
    A mobile-grade CPU would be the greatest upgrade for me. The rack mount is under the roof and ventilation is poor...



  • I'm not an expert, but I just purchased a new-in-box Dell R210 II, added a legit Intel I340 from a server pull, and I have no issues getting my 1Gb service while running Suricata and pfBlockerNG.

    I got the Dell for $290 with shipping.



  • It seems that igb(4) NICs, including the i340-T4 and i350-T4, have no such issue with multi-threaded PPPoE.