Netgate Discussion Forum
    PFSense on fitlet-XA10-LAN - Decent throughput for a $350 platform?

    Hardware
    18 Posts 5 Posters 5.0k Views
    • S
      sirozha Banned
      last edited by

      Hi,

      I've purchased a fitlet-XA10-LAN fanless computer to run pfSense on it. The fitlet-XA10-LAN comes with an AMD A10 Micro-6700T SoC and four (4) Intel I211 NICs (one integrated on the main board and three more on a PCIe expansion card). I've installed 4 GB of DDR3 RAM and a 32 GB mSATA drive in this system. The pfSense version I installed on this system is release 2.3.2.

      Before I started using this system as my firewall, I wanted to benchmark the LAN-to-WAN throughput by running iperf3 tests.

      I had the pfSense LAN interface connected to a Cisco 3560CG switch port, and I ran the iperf3 client on a Linux host connected to the same Cisco 3560CG switch. Then, I plugged a 2012 Mac Mini (i7 CPU, 16 GB RAM) directly into the pfSense WAN port and ran the iperf3 server on the Mac Mini. This way, I simulated a NAT environment handled by pfSense, with a client on the LAN side of pfSense and a server on the WAN (Internet) side.

      The iperf3 test that I ran showed the LAN-to-WAN throughput between 257 Mbps and 288 Mbps. After some troubleshooting, I noticed that input errors (over a thousand) incremented on the pfSense LAN NIC after just one iperf3 test. Then, I decided to take the pfSense (NAT and Firewall) out of the equation, so I ran an iperf3 test from the same Linux host (iperf3 client) to the LAN IP of pfSense (iperf3 server). This time, the throughput was between 200 and 277 Mbps, and the input errors continued to increment by thousands on the LAN interface of pfSense after each such iperf3 test.

      I then ran the iperf3 client on pfSense and the iperf3 server on the Mac Mini. The throughput was between 600 and 621 Mbps, and there were no errors (output or input) on the pfSense WAN interface. Therefore, at that point, I had tested the throughput on all three legs of the Layer 3 path from the LAN segment to the WAN segment of pfSense. It was obvious that something was causing massive input errors on the LAN interface of pfSense, which affected the LAN-to-WAN throughput. While input errors were clearly visible on the LAN interface of pfSense, the Cisco 3560CG switch port connected to the fitlet-XA10-LAN (pfSense LAN interface) showed zero output (or input) errors.
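      Roughly, the three test legs looked like this (the IP addresses below are placeholders for illustration, not my actual lab addressing):

```shell
# Leg 1: Linux host (LAN) -> Mac Mini (WAN), traversing pfSense NAT + firewall.
# On the Mac Mini:
iperf3 -s
# On the Linux host:
iperf3 -c 203.0.113.10 -t 30        # Mac Mini's WAN-side address (placeholder)

# Leg 2: Linux host (LAN) -> pfSense LAN interface (pfSense terminates TCP).
# On pfSense (SSH or Diagnostics > Command Prompt):
iperf3 -s
# On the Linux host:
iperf3 -c 192.0.2.1 -t 30           # pfSense LAN address (placeholder)

# Leg 3: pfSense -> Mac Mini (WAN); pfSense is the sender.
iperf3 -c 203.0.113.10 -t 30        # run on pfSense itself
```

Between runs, the input-error counters on the pfSense interfaces can be watched with `netstat -i` from the pfSense shell.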

      I thought I may have a defective LAN NIC on the fitlet-XA10-LAN, so I re-assigned the pfSense LAN interface to another NIC on the fitlet-XA10-LAN system, but after I re-ran the test, I saw identical results. I tried to assign every one of four fitlet-XA10-LAN NICs (one by one) as the pfSense LAN interface, but there was no difference whatsoever in the throughput as well as in the massive amount of input errors on the pfSense LAN interface (in all four NICs) when I re-ran the iperf3 tests.

      So, I had a suspicion that the issue was the CPU on the fitlet-XA10-LAN not being able to process the packets in the input buffers of the Intel I211 NIC quickly enough, and that the input errors were occurring due to buffer overflow. The BIOS on this system allows increasing the TDP from 4.5 Watts to 25 Watts. I consulted the manufacturer and was told that it was safe to increase the TDP, and that any setting above 10 Watts would result in the same (highest) CPU performance.

      So, I increased the TDP to 10 Watts and re-ran the same iperf3 tests. The throughput from the Linux host (iperf3 client) to the pfSense LAN interface (iperf3 server) increased from the 257-288 Mbps range to 538-572 Mbps. This was a two-fold improvement in bandwidth, which made me happier. However, when I reviewed the LAN interface stats, I saw that the input errors still incremented by thousands with each iperf3 test. I then ran another iperf3 test with the client running on pfSense and the server running on the Mac Mini (off the pfSense WAN interface). That throughput (with the increased TDP) rose from the 600-621 Mbps range to 941 Mbps, which is basically wire speed for that leg of the L3 path. However, when I ran an iperf3 test with the client on the Linux host (off the pfSense LAN interface) and the server on the Mac Mini (off the pfSense WAN interface), the throughput remained dismal - between 211 Mbps and 274 Mbps - with the input errors on the pfSense LAN interface still incrementing by thousands with each test.

      I spent a few more hours researching a solution and stumbled upon instructions on the pfSense site on how to tune a NIC. I tried different combinations of settings in the /boot/loader.conf.local file, but the settings that made a difference were:

      hw.pci.enable_msix=0
      

      This setting increased the LAN-to-WAN throughput during an iperf3 test from the 211-274 Mbps range to the mid-500 Mbps range. That was a two-fold improvement for the NAT traffic traversing pfSense (finally!), but the input errors on the pfSense LAN interface still incremented by thousands during each test.

      hw.igb.fc_setting=2
      

      This setting enabled pfSense (FreeBSD) to send flow control (pause) frames to the upstream device so that the upstream device would pause sending traffic. I also enabled flowcontrol receive on the Cisco 3560CG interface to which the pfSense LAN interface is connected. Enabling flow control increased the LAN-to-WAN throughput from the mid-500 Mbps range to right around 600 Mbps, and I was FINALLY able to get rid of the input errors on the pfSense LAN interface.

      I've also tried the following settings to increase the size of input and output buffers allocated to Intel I211 NICs on the fitlet-XA10-LAN, but these settings made no noticeable difference in the throughput measured by iperf3.

      hw.igb.rxd=4096
      hw.igb.txd=4096 
      
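      Put together, the tunables discussed above end up in /boot/loader.conf.local something like this (values quoted from my testing; a reboot is required for loader tunables to apply):

```shell
# /boot/loader.conf.local - the combination tested on the fitlet-XA10-LAN
hw.pci.enable_msix=0      # disable MSI-X; this doubled NAT throughput here
hw.igb.fc_setting=2       # let igb(4) send Ethernet pause frames (rx flow control)
hw.igb.rxd=4096           # larger rx/tx descriptor rings - made no measurable
hw.igb.txd=4096           #   difference in my tests, listed for reference
```

On the Cisco side, the matching piece was enabling flow control reception on the switchport (e.g. `flowcontrol receive desired` in IOS) so the switch honors the pause frames pfSense sends.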
      

      --------------

      So, after hours of experimentation, I got the LAN-to-WAN throughput to right around 600 Mbps, and I figured out how to get rid of the input errors on the pfSense LAN interface - but at the expense of the Cisco 3560CG switch port pausing its transmission toward the pfSense LAN port (FreeBSD sends flow control frames upstream to the switch port, signaling it to stop sending traffic). The traffic now accumulates in the output buffers of the Cisco 3560CG switch interface. Though this improves the throughput along the L3 path that traverses NAT on pfSense, it does not solve the underlying problem: the input buffers on the fitlet-XA10-LAN are not drained quickly enough when an iperf3 test inundates them.

      Is there anything else that can be done to get the LAN-to-WAN throughput in pfSense running on the fitlet-XA10-LAN into the 900 Mbps range? I bought the fitlet-XA10-LAN to be the platform on which I can run pfSense and have LAN-to-WAN throughput close to 1 Gbps, and I can only squeeze 600 Mbps out of this system at this point.

      Thank you very much!

      • ?
        Guest
        last edited by

        I've purchased a fitlet-XA10-LAN fanless computer to run pfSense on it. The fitlet-XA10-LAN comes with AMD A10 Micro-6700T SoC

        This CPU (SoC) runs at 1.2 - 2.2 GHz (Turbo Boost), which should be sufficient to get close to
        ~900 MBit/s. But one CPU core is not like another: the clock frequency may be high enough while the
        core itself is too weak; it might need a stronger core, such as a Celeron, i3, i5, i7 or Xeon.

        This small barebone supports DDR3L-1333, so it could also be that the memory system is saturated
        rather than the CPU itself.

        and four (4) Intel I211 NICs (one integrated on the main board and three more on a PCIe expansion card). I've installed 4 GB of DDR3 RAM and a 32 GB mSATA drive in this system. The pfSense version I installed on this system is release 2.3.2.

        Should be all fine for pfSense!

        Is there anything else that can be done to get the LAN-to-WAN throughput in pfSense running on fitlet-XA10-LAN to the 900 Mbps range?

        • My first guess is that the memory system is saturated
        • Or the CPU (SoC) is too small and not powerful enough
        • The mbuf size is too small
        • PowerD is not activated
        • (TRIM is not set up, and the speed test was run locally)

        I bought the fitlet-XA10-LAN to be the platform on which I can run pfSense and have LAN-to-WAN throughput close to 1 Gbps, and I can only squeeze 600 Mbps at this point out of this system.

        • Set the mbuf size up to 1000000
          (Make a config backup first, please!)
        • Activate PowerD (high adaptive)
          (so that the CPU frequency can scale up when more speed is needed)
        • Activate TRIM
          (not sure whether it speeds up the throughput, but nice to have for the mSATA drive)

        Make sure the RAM size is not too low, or you could end up in a boot loop after raising the
        mbuf size on your pfSense box. But I would certainly try it, together with all the other options,
        to get a smoothly running system.

        • S
          sirozha Banned
          last edited by

          @BlueKobold:

          • Set up the mbuf size to 1000000
          • Activate PowerD (high adaptive)
          • Activate TRIM

          I've increased mbuf to 1000000, and I enabled TRIM. This made no difference for the throughput.

          I also tried disabling C6 in the BIOS, and that dropped the throughput by about 10%.

          How do I activate PowerD?

          ----------
          Also, I noticed that enabling an IKE Phase 1 policy under VPN / IPSec / Tunnels reduces the LAN-to-WAN throughput by about 10% even with no active VPN connections. As soon as I disable the policy, the throughput goes back where it was before - around 600 Mbps.

          Thank you.

          • ?
            Guest
            last edited by

            I've increased mbuf to 1000000. This made no difference for the throughput.

            Leave it as it is; it should work well in combination with PowerD, as I see it.

            How do I activate powerd?

            Advanced > Miscellaneous > activate PowerD (high adaptive)
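            Beyond the GUI checkbox, you can verify from the pfSense shell that powerd is actually running and that the CPU clock scales; a sketch (the exact flag string pfSense passes to powerd may differ):

```shell
# Is powerd running, and with which adaptive mode?
ps ax | grep '[p]owerd'

# Watch the CPU frequency scale up under load:
sysctl dev.cpu.0.freq          # current frequency in MHz
sysctl dev.cpu.0.freq_levels   # available frequency steps
```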

            • S
              sirozha Banned
              last edited by

              Thanks, I enabled PowerD and tried different settings. They all result in basically the same throughput. Nothing changed from the values I reported earlier in this thread.

              • ?
                Guest
                last edited by

                Thanks, I enabled PowerD and tried different settings.

                 PowerD (high adaptive), TRIM, and a higher mbuf size would be the first things I would change
                 on a fresh, fully installed pfSense system, whether or not they bring more throughput. Once
                 that is done, I would keep searching for better throughput, or accept that nothing more can
                 be squeezed out - limited by the CPU (SoC) and RAM.

                They all result in basically the same throughput. Nothing changed from the values I reported earlier in this thread.

                 Did you try them all together, or only one at a time, reverting before trying the next?
                 If no more speed shows up, you are most likely limited by that SoC or CPU, in my eyes.

                • W
                  whosmatt
                  last edited by

                   Sadly, I suspect the AMD SoC just won't do what you want. I get similar numbers with my AM1 system: around 600 Mbps maximum throughput. Specs are dual cores @ 1.45 GHz, DDR3 @ 533 MHz, and Intel 82571EB NICs.

                  • S
                    sirozha Banned
                    last edited by

                     I may be wrong, but it appears that each NIC is assigned to one core - at least I saw messages to that effect on the console when pfSense boots up. In my case, the CPU has four cores, and the box also has four NICs. So, if you are measuring throughput between two NICs, I believe only two CPU cores are involved. Perhaps if I ran two iperf3 tests concurrently, each one measuring the throughput between a unique pair of NICs, the total throughput through the box might be higher. I don't have the capability to run such a test right now.
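                     How the igb queues and their interrupts are actually spread across cores can be checked from the pfSense shell; a sketch:

```shell
# Interrupt vectors per NIC queue (igb registers per-queue vectors under MSI-X):
vmstat -i | grep igb

# Boot-time messages showing how many queues each NIC negotiated:
dmesg | grep -i 'igb.*queue'
```

Note that with hw.pci.enable_msix=0 (set earlier in this thread) each igb NIC falls back to a single interrupt vector, so there is little queue spreading to see.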

                     I'm still debating whether to keep the fitlet-XA10-LAN or to return it and buy a Check Point. The Check Point 620 provides pretty much the same performance as pfSense running on this fitlet - both unencrypted and encrypted throughput are basically the same. A Check Point 620 can be purchased for under $300 shipped.

                     There's also a Check Point 750, which can be had for about $600; it has a total throughput of 1 Gbps and an AES-encrypted traffic throughput of 500 Mbps.

                     I've never used Check Point, but I hear good reports about it, and it seems that to get hardware that can run pfSense and provide wire-speed throughput with NAT and DPI firewall rules enabled, as well as encrypted throughput of around 500 Mbps, one would have to spend at least USD $600-$700. So I don't think one can argue against Check Point: at the same price point, it delivers similar performance. For some reason, I was under the impression that pfSense could make weaker hardware perform better than the likes of Check Point - kind of what Ubiquiti Edge routers claim to do. After my extensive testing of pfSense on a $350 box, this does not seem to be the case.

                    • D
                      dwood
                      last edited by

                      You may find the system tune settings here worth trying:

                      https://ashbyte.com/ashbyte/wiki/pfSense/Tuning

                      I'd start with system tunables:

                      net.inet.tcp.syncookies=0
                      net.inet.raw.maxdgram=16384
                      net.inet.raw.recvspace=16384
                      net.inet.tcp.tcbhashsize=1024
                      kern.ipc.maxsockets=51200
                      kern.ipc.maxsockbuf=16777216
                      net.inet.tcp.recvbuf_max=16777216
                      net.inet.tcp.sendbuf_max=16777216
                      net.inet.tcp.recvbuf_inc=32768
                      net.inet.tcp.sendbuf_inc=32768

                       Let us know how you make out. I'll update the Quotom build thread accordingly :-)
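                       Since these are runtime sysctls (not loader tunables), they can be trial-run from the pfSense shell before persisting them under System > Advanced > System Tunables; a sketch of the buffer-related ones:

```shell
# Apply at runtime; values revert on reboot unless saved as System Tunables.
sysctl kern.ipc.maxsockbuf=16777216
sysctl net.inet.tcp.recvbuf_max=16777216
sysctl net.inet.tcp.sendbuf_max=16777216
```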

                      • S
                        sirozha Banned
                        last edited by

                        @dwood:

                        You may find the system tune settings here worth trying:

                        https://ashbyte.com/ashbyte/wiki/pfSense/Tuning


                        I would like to know what negative effects these settings may cause. In the beginning of that page, there's a warning that some of the settings break IPSec. I'm not sure which ones break IPSec, but I need IPSec. If I change some of these settings now, could I experience some issues in the future with some other protocols?

                        • N
                          Nullity
                          last edited by

                          @sirozha:


                          I would like to know what negative effects these settings may cause. In the beginning of that page, there's a warning that some of the settings break IPSec. I'm not sure which ones break IPSec, but I need IPSec. If I change some of these settings now, could I experience some issues in the future with some other protocols?

                          I am pretty sure that the fastforwarding setting broke IPSec.

                          The settings in the quoted post all seem safe.

                          Please correct any obvious misinformation in my posts.
                           -Not a professional; an arrogant ignoramus.

                          • S
                            sirozha Banned
                            last edited by

                            I've tested these settings for tuning pfSense performance.

                            The best performance I get from the fitlet is:
                            WAN-to-LAN

                            
                            iperf3 -c 192.168.160.100 -i3 -fm -P3 -R
                            Connecting to host 192.168.160.100, port 5201
                            Reverse mode, remote host 192.168.160.100 is sending
                            [  4] local 192.168.200.30 port 54482 connected to 192.168.160.100 port 5201
                            [  6] local 192.168.200.30 port 54483 connected to 192.168.160.100 port 5201
                            [  8] local 192.168.200.30 port 54484 connected to 192.168.160.100 port 5201
                            [ ID] Interval           Transfer     Bandwidth
                            [  4]   0.00-3.00   sec  38.8 MBytes   109 Mbits/sec                  
                            [  6]   0.00-3.00   sec   163 MBytes   455 Mbits/sec                  
                            [  8]   0.00-3.00   sec  38.0 MBytes   106 Mbits/sec                  
                            [SUM]   0.00-3.00   sec   239 MBytes   669 Mbits/sec                  
                            - - - - - - - - - - - - - - - - - - - - - - - - -
                            [  4]   3.00-6.00   sec  36.0 MBytes   101 Mbits/sec                  
                            [  6]   3.00-6.00   sec   168 MBytes   469 Mbits/sec                  
                            [  8]   3.00-6.00   sec  35.3 MBytes  98.7 Mbits/sec                  
                            [SUM]   3.00-6.00   sec   239 MBytes   668 Mbits/sec                  
                            - - - - - - - - - - - - - - - - - - - - - - - - -
                            [  4]   6.00-9.00   sec  41.1 MBytes   115 Mbits/sec                  
                            [  6]   6.00-9.00   sec   157 MBytes   440 Mbits/sec                  
                            [  8]   6.00-9.00   sec  40.5 MBytes   113 Mbits/sec                  
                            [SUM]   6.00-9.00   sec   239 MBytes   668 Mbits/sec                  
                            - - - - - - - - - - - - - - - - - - - - - - - - -
                            [  4]   9.00-10.00  sec  16.9 MBytes   142 Mbits/sec                  
                            [  6]   9.00-10.00  sec  46.0 MBytes   386 Mbits/sec                  
                            [  8]   9.00-10.00  sec  16.7 MBytes   140 Mbits/sec                  
                            [SUM]   9.00-10.00  sec  79.6 MBytes   668 Mbits/sec                  
                            - - - - - - - - - - - - - - - - - - - - - - - - -
                            [ ID] Interval           Transfer     Bandwidth
                            [  4]   0.00-10.00  sec   133 MBytes   112 Mbits/sec                  sender
                            [  4]   0.00-10.00  sec   133 MBytes   112 Mbits/sec                  receiver
                            [  6]   0.00-10.00  sec   535 MBytes   448 Mbits/sec                  sender
                            [  6]   0.00-10.00  sec   534 MBytes   448 Mbits/sec                  receiver
                            [  8]   0.00-10.00  sec   131 MBytes   110 Mbits/sec                  sender
                            [  8]   0.00-10.00  sec   131 MBytes   110 Mbits/sec                  receiver
                            [SUM]   0.00-10.00  sec   799 MBytes   670 Mbits/sec                  sender
                            [SUM]   0.00-10.00  sec   798 MBytes   669 Mbits/sec                  receiver
                            
                            iperf Done.
                            
                            

                            LAN-to-WAN:

                            
                            [~] # iperf3 -c 192.168.160.100 -i3 -fm -P3   
                            Connecting to host 192.168.160.100, port 5201
                            [  4] local 192.168.200.30 port 54487 connected to 192.168.160.100 port 5201
                            [  6] local 192.168.200.30 port 54488 connected to 192.168.160.100 port 5201
                            [  8] local 192.168.200.30 port 54489 connected to 192.168.160.100 port 5201
                            [ ID] Interval           Transfer     Bandwidth
                            [  4]   0.00-3.00   sec  53.9 MBytes   151 Mbits/sec                  
                            [  6]   0.00-3.00   sec  74.9 MBytes   209 Mbits/sec                  
                            [  8]   0.00-3.00   sec   112 MBytes   315 Mbits/sec                  
                            [SUM]   0.00-3.00   sec   241 MBytes   675 Mbits/sec                  
                            - - - - - - - - - - - - - - - - - - - - - - - - -
                            [  4]   3.00-6.00   sec  61.7 MBytes   172 Mbits/sec                  
                            [  6]   3.00-6.00   sec  78.2 MBytes   219 Mbits/sec                  
                            [  8]   3.00-6.00   sec   104 MBytes   290 Mbits/sec                  
                            [SUM]   3.00-6.00   sec   244 MBytes   681 Mbits/sec                  
                            - - - - - - - - - - - - - - - - - - - - - - - - -
                            [  4]   6.00-9.00   sec   110 MBytes   307 Mbits/sec                  
                            [  6]   6.00-9.00   sec  61.4 MBytes   172 Mbits/sec                  
                            [  8]   6.00-9.00   sec  65.6 MBytes   184 Mbits/sec                  
                            [SUM]   6.00-9.00   sec   237 MBytes   662 Mbits/sec                  
                            - - - - - - - - - - - - - - - - - - - - - - - - -
                            [  4]   9.00-10.00  sec  20.6 MBytes   173 Mbits/sec                  
                            [  6]   9.00-10.00  sec  20.1 MBytes   168 Mbits/sec                  
                            [  8]   9.00-10.00  sec  38.8 MBytes   325 Mbits/sec                  
                            [SUM]   9.00-10.00  sec  79.5 MBytes   667 Mbits/sec                  
                            - - - - - - - - - - - - - - - - - - - - - - - - -
                            [ ID] Interval           Transfer     Bandwidth
                            [  4]   0.00-10.00  sec   246 MBytes   206 Mbits/sec                  sender
                            [  4]   0.00-10.00  sec   245 MBytes   206 Mbits/sec                  receiver
                            [  6]   0.00-10.00  sec   235 MBytes   197 Mbits/sec                  sender
                            [  6]   0.00-10.00  sec   234 MBytes   196 Mbits/sec                  receiver
                            [  8]   0.00-10.00  sec   321 MBytes   269 Mbits/sec                  sender
                            [  8]   0.00-10.00  sec   320 MBytes   268 Mbits/sec                  receiver
                            [SUM]   0.00-10.00  sec   801 MBytes   672 Mbits/sec                  sender
                            [SUM]   0.00-10.00  sec   799 MBytes   670 Mbits/sec                  receiver
                            
                            iperf Done.
                            
                            
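                             As a quick sanity check (not part of the iperf3 output above), the summary lines are internally consistent: iperf3 with -fm reports MBytes as 2^20 bytes and Mbits as 10^6 bits, so 799 MBytes over 10 seconds comes out right at the reported ~670 Mbits/sec:

```shell
# Convert iperf3's "799 MBytes in 10 sec" to Mbits/sec:
# MBytes are 2^20 bytes (1048576), Mbits are 10^6 bits.
mbytes=799
seconds=10
mbits=$(awk -v mb="$mbytes" -v s="$seconds" \
  'BEGIN { printf "%.0f", mb * 1048576 * 8 / (s * 1000000) }')
echo "$mbits Mbits/sec"
```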

                             The system tunables I used to achieve these results are:
                            kern.ipc.maxsockbuf=16777216
                            net.inet.tcp.recvbuf_max=16777216   
                            net.inet.tcp.sendbuf_max=16777216

                             All other settings either don't affect the throughput or lower it by about 5-10%.

                             The suggested tunables with values of 1024 and 16384 are lower than the default values in my installation of pfSense. I tried those lower suggested values, but later removed the tunables and went back to the defaults, since the suggested values brought no throughput increase over the defaults.

                            The last two values on the list:

                            net.inet.tcp.recvbuf_inc=32768   
                            net.inet.tcp.sendbuf_inc=32768

                             are larger than the default values for these parameters, but they did not improve the throughput (in my opinion, they actually decreased it), so after trying the suggested values I went back to the defaults in my pfSense installation.

                            • S
                              sirozha Banned
                              last edited by

                               My L2TP over IPSec throughput is around 140 Mbps, both LAN-to-WAN and WAN-to-LAN.

                              • S
                                sirozha Banned
                                last edited by

                                 I've finally transitioned the fitlet-XA10-LAN to production. My Internet connection is 90 Mbps / 12 Mbps (from Comcast Xfinity), which the fitlet-XA10-LAN platform handles with ease. All of my previous iperf3 tests were in a lab environment, just to see what the limitations of the fitlet-XA10-LAN platform are; they seem to be around 670 Mbps throughput LAN-to-WAN and WAN-to-LAN (with NAT and firewall enabled) with an IPSec Phase 1 policy enabled. I noticed that when I disable the IPSec Phase 1 policy, the throughput improves marginally.

                                 When establishing a VPN connection to the pfSense running on the fitlet-XA10-LAN, I was able to achieve a throughput of 140 Mbps via an L2TP over IPSec tunnel in the lab environment (both WAN-to-LAN and LAN-to-WAN). Because my Internet connection is slower than 140 Mbps, I can't test the limits of the VPN throughput on a live Internet connection until I get more Internet bandwidth.

                                 In my opinion, the fitlet-XA10-LAN, which is made in Israel, is a solid fanless platform that performs quite well for a very small footprint. For example, when I run a bandwidth test via speedtest.net (with my Internet offering capped at 90 Mbps download), the CPU utilization does not exceed 15%, and the ping RTT is 11 ms to a server located about 30 miles away, via Wi-Fi from my Mac Mini. My topology is Mac Mini > Asus RT-N66U (AP mode) > Cisco 3560CG (L3 switch) > fitlet-XA10-LAN (running pfSense) > Comcast cable modem.

                                 Additionally, the fitlet comes with a serial port to which the console output can be directed, so one doesn't have to connect a monitor and keyboard to access the pfSense console. The BIOS can be accessed via this serial console as well. In other words, the serial port provides a complete appliance-like experience, unlike other fanless boxes that lack one.

                                 ----------------

                                 Whether one should consider this platform depends on the future of one's Internet pipe. If you think you will be getting Google Fiber (or a similar offering), with a very affordable symmetric 1 Gbps pipe, you should skip any hardware platform that cannot deliver throughput close to 1 Gbps and instead invest in something that can push at least 1 Gbps. If you think you will stay on cable (like Comcast Xfinity), then the fitlet-XA10-LAN is a good choice IMHO, because it will probably take another decade before Comcast offers symmetric 1 Gbps for $70 like Google Fiber does today.

                                 A well-discussed Chinese-made fanless quad-core Celeron system (with no AES hardware support) costs $260. The fitlet-XA10-LAN (which has a quad-core AMD CPU with hardware AES support) comes with four Gigabit NICs and sells for $315 barebones. There is not much reason to have more than 16 GB of SSD and 4 GB of RAM in this box to run pfSense, so it's feasible to get the complete system for under $370. The fitlet comes with a 5-year warranty, and it's made in Israel by a company known for the embedded systems it supplies to industrial and military sectors. They also offer a no-questions-asked return policy directly to the manufacturer - Compulab - once your return period with the reseller runs out. The downside of the fitlet-XA10-LAN is that today one can buy a Check Point 620, which provides 750 Mbps throughput and 150 Mbps VPN throughput, for under $300 - but the Check Point 620 is supposedly End of Sale, and reseller stock is diminishing.

                                If you want to go with a system that can push 1 Gbps (with NAT and Firewall enabled), Check Point 750 can do it. Additionally, Check Point 750 can do 500 Mbps of VPN throughput, but it costs around $600. From the information I've been able to gather so far, if you want to build a system for pfSense that can provide 1 Gbps of throughput (with NAT and Firewall enabled), you will end up paying between $600 and $700 for the hardware, so again it seems that Check Point matches pfSense-based hardware on the price-to-performance ratio at least in this SOHO/SMB segment.

                                One more thing - it may be a good idea to do inter-VLAN routing in a Layer 3 switch rather than in pfSense. There's an ongoing argument over whether a SOHO or SMB environment should have a Layer 3 switch versus a Layer 2 switch with inter-VLAN routing done in the router. In my opinion, now that we are approaching Gigabit bandwidth offerings from providers like Google Fiber, L3 switching between LAN subnets (aka inter-VLAN routing) at the firewall (or Internet router) will reduce the throughput available for traffic to and from the Internet. A decent Layer 3 managed switch can be had for about $100, so those who have multiple subnets (VLANs) on the LAN and whose Internet bandwidth is close to the hardware limits of their firewall should consider moving the Layer 3 boundary from the firewall (Internet router) to the Layer 3 switch. Additionally, decent Layer 3 switches provide backplane throughput equal to the combined bandwidth of all their ports, so the switching fabric should not be a bottleneck for inter-VLAN routing.
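                                As a sketch of what moving the Layer 3 boundary looks like on a small Cisco L3 switch (such as the 3560CG mentioned earlier), the switch routes between local VLANs and points a default route at the firewall. The VLAN numbers and addresses below are made up purely for illustration:

                                ```
                                ! Hypothetical inter-VLAN routing config on a Cisco L3 switch.
                                ! Enable the routing function (off by default on many models).
                                ip routing
                                !
                                interface Vlan10
                                 description LAN
                                 ip address 192.168.10.1 255.255.255.0
                                !
                                interface Vlan20
                                 description GUEST
                                 ip address 192.168.20.1 255.255.255.0
                                !
                                ! Send everything non-local to the pfSense LAN interface.
                                ip route 0.0.0.0 0.0.0.0 192.168.10.254
                                ```

                                With this in place, traffic between VLAN 10 and VLAN 20 stays in the switch fabric, and only Internet-bound traffic traverses pfSense.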

                                • S
                                  sirozha Banned
                                  last edited by

                                  pfSense has been running for over a year on the fitlet. The uptime as of today is 414 days. Not a single hang or any need to reboot in 414 days. That's pretty impressive, so I'm happy I built this system over a year ago.

                                  The system is running version 2.3.2-RELEASE.

                                  • ?
                                    Guest
                                    last edited by

                                    @sirozha:

                                    PfSense has been running for over a year on the Fitlet. The uptime as of today is 414 days. Not a single hang or any need to reboot in 414 days. That’s pretty impressive, so I’m happy I got this system built over a year ago.

                                    The system is running version 2.3.2-RELEASE.

                                    I think it's time to install some security updates ;D You are waaaaay behind.

                                    • S
                                      sirozha Banned
                                      last edited by

                                      The auto-update is showing 2.3.3_1

                                      Is this an intermediate image that's required to update from 2.3.2? I've read the release notes, and it seems I should be able to update directly from 2.3.2 to 2.4.0.

                                      Why doesn’t the auto-update show 2.4.0?

                                      • ?
                                        Guest
                                        last edited by

                                        @sirozha:

                                        The auto-update is showing 2.3.3_1

                                        Is this an intermediate image that's required to update from 2.3.2? I've read the release notes, and it seems I should be able to update directly from 2.3.2 to 2.4.0.

                                        Why doesn’t the auto-update show 2.4.0?

                                        Probably because of the auto-updater settings. You may need to take the intermediate version it offers before you can continue. Direct upgrades are possible, but you often have to supply the upgrade image yourself. This can be done from the web interface and also from the SSH console.
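                                        For reference, on pfSense 2.3.x and later the upgrade can also be started from the console or an SSH session. A sketch (menu option numbers vary between versions, so treat them as an assumption and verify against the pfSense docs for your release):

                                        ```shell
                                        # From the pfSense console menu, option 13 ("Update from console")
                                        # starts the upgrade. From a shell prompt (menu option 8), the same
                                        # can be done with the built-in upgrade tool:
                                        pfSense-upgrade

                                        # The branch the auto-updater tracks (e.g. stable vs. a specific
                                        # release train) is set under System > Update > Update Settings
                                        # in the web GUI.
                                        ```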

                                        Copyright 2025 Rubicon Communications LLC (Netgate). All rights reserved.