Hardware To Achieve Gigabit over PPPoE

  • Hi All,

    Our ISP has run fibre to our location and requires us to authenticate via PPPoE to get online.

    Here's the proposed set-up:
    Fibre -> GBIC -> Switch (vlan 35 tagged) -> pfSense (currently a consumer grade router)

    My consumer router (ASUS RT-N66U) can get close, but it's bottlenecked by the CPU.
    Throughput during a speedtest: [screenshot]

    CPU consumption during a speedtest: [screenshot]

    The end goal would be to get rid of the consumer router and replace it with a pfSense server. I know that PPPoE on pfSense is single threaded, which might pose an issue; that's why I created this thread.
    I would appreciate it if some members of the community could share their pfSense hardware builds that get as close as possible to gigabit speeds on a WAN link over PPPoE.


  • https://redmine.pfsense.org/issues/4821
    You will need some Intel em NICs and/or a CPU that can do 1Gbps on one core; some i3 should be good.
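    On pfSense/FreeBSD you can check which driver each NIC attached with before buying anything; a quick sketch (the device names it prints, e.g. em0 or igb1, depend on your hardware):

    ```shell
    #!/bin/sh
    # Sketch for pfSense/FreeBSD: list PCI network devices and the driver each
    # one attached with. The driver is the prefix of the device name, e.g.
    # "em0" = em driver, "igb1" = igb driver (the one in the linked bug report).
    pciconf -lv | grep -B4 -i 'class.*network'
    ```

    Run from the pfSense shell, this shows one block per NIC, each starting with the driver-prefixed device name.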

    I thought only the igb driver was affected.

    em or ix should be fine, right?

  • I'm running:

    • PF 2.4.3-RELEASE-p1 (amd64)
    • CPU: Intel(R) Core(TM) i5-8400 CPU @ 2.80GHz (usually clocks itself down to 800MHz)
    • Intel® Ethernet Converged Network Adapter X550
    • 32GB RAM

    I run PPPoE w/ VLAN and I can speedtest at 950Mbit/s. CPU usage is 1-2%, RAM ~6% (running cache and ramdisk).

    I'm beginning to suspect I may have over spec'd the hardware.

    SqueezedJuice, that is exactly what I had in mind too.
    I am currently using a Linksys E6900 running the Xvrt-Vortex firmware and connecting with the same Nortel Baystack 5520 you show in your image. :)

    I got as high as 800Mbps down and 700Mbps up using the E6900. On average I get 700/700.

    I want to get my Bell Fibe as close to the theoretical 1Gbps as possible.

    My rig:

    PF 2.4.4-RELEASE-p4 (amd64)
    Asus Q87M-E
    CPU: Intel(R) Core(TM) i5-4570 CPU @ 3.60GHz on auto clock management
    Onboard Intel gigabit nic (driver e1000)
    Intel Pro 1000/Quad PCIe Ethernet (driver e1000)
    4GB RAM

    I am running the Ubuntu 18.04 LTS server image with KVM, with pfSense running as a virtual machine allocated 2 cores and 2GB RAM.

    I get nowhere near what you are able to achieve. The VLAN is controlled by the Nortel managed switch and PPPoE is authenticated by pfSense. The best I can see is 700/700. Speedtest.net may not be the best test, but it is consistent, since I keep the destination the same and run the tests back to back against my consumer grade router.

    I don't think it is RAM, since I did not see any spike in RAM usage, so 2GB should be more than enough. I don't think it is the virtualization either, since KVM is not currently running any other machine besides pfSense. I did notice pfSense only shows stats for 1 CPU. When the speedtest is running I see CPU usage spike up to 95%. It is nowhere near the 1-2% you describe.

    Any hints in getting your result would be appreciated.

    I was seeing high CPU and low speeds when I ran it in a VM too (even on an 8-thread, 32GB RAM laptop). My theory is that the network card is hardware-accelerating functions like PPPoE. Without pfSense having direct access to and ownership of a two-port card, it has to do everything on the CPU.

  • Netgate Administrator

    Hard to see how that would be. I don't think I've ever seen a network card that could do any sort of offloading for PPPoE. It doesn't even split the load across the cores in software as it would for IP traffic; that's the problem with PPPoE in FreeBSD currently.
    1-2% seems suspiciously low even for an i5-8400. That will probably all be on one core still, so it's likely 6x that on that core.

    I would expect a single core to be able to push 1Gbps from either of those CPUs though.


    I've looked a heap and I can't find a smoking gun in the documentation.

    The X550 NIC at least partially supports DPDK, and DPDK has PPPoE support.

    I might agree that the VM could be a factor if I were using a virtio driver for the card, where it would use the majority of the CPU. But I am using the Intel e1000 native driver.

    CPU load is quite high in the pfSense GUI. However, the host's CPU load is quite minimal.

    I suspect it is very much a software issue: not taking advantage of multiple cores, or even of the full clock speed. At 3.6GHz, I would think the CPU is more than enough to push up to the gigabit theoretical limit (940Mbps).

    I am going to install pfSense by itself, without a VM, on the same hardware and see the result. My guess is that the result will be the same.

    Then, after this issue is resolved, I still want to get as much throughput as possible with OpenVPN on pfSense. As of now, the OpenVPN server is maxing out at 45Mbps. Ridiculously slow compared to the symmetric gigabit service feeding into my home.

    There is too much information on the net about gigabit-over-OpenVPN solutions; the difficult part is filtering what is real from what is fake.
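    Since OpenVPN throughput is usually crypto-bound on a single core, a quick userspace benchmark of the tunnel cipher gives a rough ceiling; a sketch (AES-128-CBC is just an illustration; match it to your tunnel's configured cipher):

    ```shell
    #!/bin/sh
    # Rough ceiling check: benchmark the tunnel cipher in userspace to see what
    # one core can do at best. (On pfSense/FreeBSD you would also confirm
    # AES-NI is present with: grep -i aesni /var/run/dmesg.boot)
    openssl speed -evp aes-128-cbc -seconds 1
    ```

    If the single-threaded numbers here are far above 45Mbps, the limit is likely elsewhere (tunnel MTU, SMB, or the far end) rather than raw crypto.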

    I have about the same setup at my house. I have CenturyLink fiber that's symmetrical 1Gb. I had been running an Asus RT-AC87R. The best speeds the Asus would see were 650+ down and 780+ up. Ironically my CL C1100Z would see 970+ down and 2200+ up. CL also uses PPPoE and the line has to be VLAN tagged.
    Switching to pfSense, on a Qotom Q655G6 with an Intel i5-7400, 8GB of RAM and an SSD, I get 1Gb down and 1.5Gb up all the time. The processor and RAM are usually in single digits for % of use.
    My configuration looks like this: Fibre -> Qotom Q655G6 running pfSense, doing the PPPoE and the VLAN tagging for CL -> switches and APs in my house.

  • alpineaudio,

    How did you get from the fibre to your Qotom? I notice that unit does not have an SFP(+) port. I guess you have an optical termination to RJ45, then use pfSense to do PPPoE and VLAN tagging.

    I did some testing and found one source of the slowdown: my Nortel/Avaya 5520 managed switch. The switch has 4 x 1Gb SFP ports, so the max negotiation with the optical termination SFP module would be 1Gb. I use this switch to tag my VLAN. I guess the switch cannot achieve faster speeds, or I need to find the specific settings that will allow better throughput.

    If I use the original hub supplied by the ISP (Bell Canada), since it has the SFP, and use passthrough PPPoE, I can achieve 940/810Mbps.

    There is lots of talk on dslforum about using a Broadcom 10Gb SFP+ NIC and a firmware hack to allow syncing at 2.5Gb. This may be the only route to a true symmetric gigabit connection.

    I will continue playing around.

    My ONT from CL has 4 Ethernet ports and only 1 is active, which is what I ran my Ethernet from to my Qotom. I wasn't too concerned about SFP ports even though my switches do have them; the gig port just seems enough.

    Full duplex symmetric gig = 1Gb up and 1Gb down at the same time... no?

    Yep, that is what I did too, and I found out the Nortel 5520 is not really connecting at symmetric gigabit via the SFP ONT.

    I connected fibre (ONT) -> Bell hub (which has 4 LAN and 1 WAN) -> out 1 LAN port -> WAN port of the Asus Merlin router -> LAN -> switch. That is how I was able to get 940/810.

    My next test is to replace the Merlin router with the pfSense VM, and then with a non-VM pfSense, and see if there is any difference in speed.

    I have a Broadcom 10Gb NIC coming. I will experiment with pfSense when it arrives.

    Not sure if this is needed, but... for a 1Gb connection speed and an MTU of 1500, you only get ~940Mb max transfer because of protocol overhead.
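    A back-of-the-envelope version of that overhead math, assuming a typical PPPoE MTU of 1492 (1500 minus the 8-byte PPPoE header):

    ```shell
    #!/bin/sh
    # Rough TCP goodput ceiling on gigabit Ethernet over PPPoE.
    # Per frame: 38 bytes of Ethernet wire overhead (preamble 8 + header 14 +
    # FCS 4 + inter-frame gap 12) plus 8 bytes of PPPoE encapsulation;
    # 40 bytes of the MTU go to IP+TCP headers.
    awk 'BEGIN {
      mtu     = 1492            # typical PPPoE MTU (1500 - 8)
      wire    = mtu + 8 + 38    # bytes actually on the wire per frame
      payload = mtu - 40        # TCP payload left after IP+TCP headers
      printf "max TCP goodput: %.0f Mbit/s\n", 1000 * payload / wire
    }'
    # Prints about 944 Mbit/s - close to the ~940Mb figure quoted in the thread.
    ```

    Without PPPoE (plain MTU 1500) the same arithmetic gives roughly 949Mbit/s, which is why "gigabit" speedtests top out in the low 940s.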

  • Ok,

    I have some more updates. I'd like to record them for anyone who is trying to do the same; maybe they will help.

    I finally tested different combinations and found this to be the optimum:

    My rig as described above:
    Asus Q87M-E
    CPU: Intel(R) Core(TM) i5-4570 CPU @ 3.60GHz on auto clock management
    Onboard Intel gigabit nic (NOTE must use KVM virtio driver)
    Intel Pro 1000/Quad PCIe Ethernet (NOTE must use KVM virtio driver)
    4GB RAM

    Host: Ubuntu 18.04.2LTS with KVM installed
    PF 2.4.4-RELEASE-p4 (amd64) running as a VM with 2 cores and 2GB RAM
    Make sure to use virtio drivers and not e1000. e1000 uses too much CPU and reduces throughput; I saw about a 10% reduction in throughput when using e1000.
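    For anyone reproducing this under libvirt/KVM, the NIC model is set per interface; a hedged sketch (the guest name "pfsense" and bridge "br0" are placeholders, not from this thread):

    ```shell
    # Sketch: give a libvirt guest a virtio-model NIC instead of the default
    # emulated e1000. "pfsense" and "br0" are placeholder names.
    virsh attach-interface pfsense bridge br0 --model virtio --config

    # Equivalently, run "virsh edit pfsense" and make sure each <interface>
    # element contains:  <model type='virtio'/>
    ```

    pfSense then sees the card as vtnet0 rather than em0.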

    Fibre into the ONT of the ISP modem/hub -> LAN port out -> WAN port of the pfSense VM -> LAN port out -> Nortel L3 switch, then all devices connect to the L3 switch

    I see a consistent 940/810Mbps from speedtest.net. The best I can do with OpenVPN is about 50/50Mbps.

    With all that, the vCPUs only see about 15-25% load, compared to about 90% when using e1000.

    With this, I can run more VMs on the host.

    I will continue to look into OpenVPN throughput but it does not look good until OpenVPN is rewritten to be multithreaded.

    Otherwise, I am happy with my current solution.

  • Netgate Administrator

    50Mbps for OpenVPN is waaay lower than I would expect for a single core of that hardware. Though it depends how you're testing and where to. But that's like old school Atom speed, like a D510. A single thread on the 4570 should be many times faster.


    The test is what I would call real-life usage.

    I connect to the VPN from my Samsung S8 over mobile LTE data to my home pfSense OpenVPN server. I pass all traffic through the VPN, so every request flows through my pfSense (internet gateway). I run a speedtest.net test and see only 50/50. I even did a quick copy of a 100MB file from my home server to the phone; the peak rate was around 6MB/s, averaging about 3.5MB/s.

    So to me it looks like 50/50 is about as much as I can get with the current setup.


    If you have some tips to improve my setup, I would much appreciate them. You are correct in saying that is old-school Atom speed, because I had the same speed on my Asus router with a dual-core 1GHz ARM. I would think there is some cutoff preventing it from going faster.

  • Netgate Administrator

    Hmm, well in that scenario you are moving everything twice across your WAN, so it will be limited by the slowest speed, but you said it's nominally 1Gbps up and down? Also it's only encrypting in one of those directions...

    File copy may not be a good test depending on how it's done. SMB is notoriously bad over any sort of latency for example.
    An iperf test would be much better.

    Still seems really very low though. Check the output of top -aSH when testing. Do you see one core at 100%?
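    A sketch of that test combo, run through the tunnel instead of a file copy (the address 10.8.0.1 is a placeholder for whatever host sits at the far end of your VPN):

    ```shell
    #!/bin/sh
    # Sketch: measure the tunnel with iperf3 while watching per-thread CPU
    # on pfSense. 10.8.0.1 is a placeholder server address.

    # On a host at the far end of the VPN:
    #   iperf3 -s

    # From the client, test both directions (-R reverses to download):
    iperf3 -c 10.8.0.1 -t 30
    iperf3 -c 10.8.0.1 -t 30 -R

    # Meanwhile, on the pfSense console, a per-thread CPU view:
    #   top -aSH
    # A single openvpn thread pinned near 100% confirms the
    # single-core bottleneck.
    ```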

