Netgate Discussion Forum
    PfSense i7-4510U + 2x Intel 82574 + 2x Intel i350 (miniPCIE) Mini-ITX Build

    Hardware
    51 Posts 11 Posters 20.6k Views
    • Paint

      @aGeekHere:

      What speed do you get from the squid cache?
      Download a file
      Test files here
      http://mirror.internode.on.net/pub/test/
      Then once it is downloaded try redownloading and check the speed from the squid cache

      The link http://mirror.internode.on.net/pub/test/ does not work….

      pfSense i5-4590
      940/880 mbit Fiber Internet from FiOS
      BROCADE ICX6450 48Port L3-Managed Switch w/4x 10GB ports
      Netgear R8000 AP (DD-WRT)

      • asterix

        Use this for enabling TRIM.

        https://gist.github.com/mdouchement/853fbd4185743689f58c

        You don't need to enable AHCI by adding ahci_load="YES" … it works for me without it.
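
        For reference, the linked gist boils down to roughly the following (a sketch only; the device name /dev/ada0s1a is a placeholder, check yours with "mount" or "gpart show", and the filesystem must not be mounted read-write, so drop to single-user mode first):

        ```sh
        # Sketch: enable the TRIM flag on a UFS filesystem (FreeBSD/pfSense).
        # /dev/ada0s1a is a placeholder device name.
        tunefs -t enable /dev/ada0s1a
        tunefs -p /dev/ada0s1a   # verify that "trim: (-t)" now reads "enabled"
        reboot
        ```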

        • aGeekhere

          @Paint:

          this link does not work….

          Must be location blocked. Try an Ubuntu ISO or any other large file that will be cached.
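
          One rough way to compare cold vs. cached speed through squid (a sketch; the proxy address, port, and URL are placeholders for your own setup):

          ```sh
          # First fetch is a cache miss; the second should be served from squid's
          # cache. Compare the two reported download speeds.
          URL=http://example.com/large-test-file.iso    # placeholder
          PROXY=http://192.168.1.1:3128                 # placeholder squid address
          curl -x "$PROXY" -o /dev/null -w 'cold:   %{speed_download} bytes/s\n' "$URL"
          curl -x "$PROXY" -o /dev/null -w 'cached: %{speed_download} bytes/s\n' "$URL"
          ```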

          Never Fear, A Geek is Here!

          • Paint

            @Asterix:

            Use this for enabling TRIM.

            https://gist.github.com/mdouchement/853fbd4185743689f58c

            You don't need to enable AHCI by adding ahci_load="YES" … it works for me without it.

            Thanks, that worked:

            [2.3.1-RELEASE][root@pfSense.lan]/root: tunefs -p /
            tunefs: POSIX.1e ACLs: (-a)                                disabled
            tunefs: NFSv4 ACLs: (-N)                                   disabled
            tunefs: MAC multilabel: (-l)                               disabled
            tunefs: soft updates: (-n)                                 enabled
            tunefs: soft update journaling: (-j)                       enabled
            tunefs: gjournal: (-J)                                     disabled
            tunefs: trim: (-t)                                         enabled
            tunefs: maximum blocks per file in a cylinder group: (-e)  4096
            tunefs: average file size: (-f)                            16384
            tunefs: average number of files in a directory: (-s)       64
            tunefs: minimum percentage of free space: (-m)             8%
            tunefs: space to hold for metadata blocks: (-k)            6408
            tunefs: optimization preference: (-o)                      time
            tunefs: volume label: (-L)
            

            I migrated my entire network over to pfSense as the main router, with two APs running DD-WRT. I have done a lot of tweaking and will finalize a few things over the weekend; I hope to post some performance benchmarks then.

            Next: on to Snort and traffic shaping. 8)

            • Guest

              @Asterix:

              You don't need to enable AHCI by adding ahci_load="YES" … it works for me without it.

              Considering this, I would recommend removing that line from /boot/loader.conf.local; it is not really needed on your pfSense machine.

              • Paint

                @BlueKobold:

                You don't need to enable AHCI by adding ahci_load="YES" … it works for me without it.

                Considering this, I would recommend removing that line from /boot/loader.conf.local; it is not really needed on your pfSense machine.

                I don't use ahci_load="YES" in my /boot/loader.conf.local file.

                I have made many System Tunables and loader.conf.local changes. Below is my /boot/loader.conf.local:

                
                legal.intel_ipw.license_ack=1
                legal.intel_iwi.license_ack=1
                aio_load="YES"
                pf_load="YES"
                pflog_load="YES"
                if_em_load="YES"
                hw.em.rxd=4096
                hw.em.txd=4096
                #ahci_load="YES"
                cc_htcp_load="YES"
                net.inet.tcp.hostcache.cachelimit="0"
                hw.em.num_queues="2"
                kern.ipc.nmbclusters="1000000"
                

                • asterix

                  Why do you need traffic shaping on a 100 Mbit line?

                  • Paint

                    @Asterix:

                    Why do you need traffic shaping on a 100 Mbit line?

                    QoS for bufferbloat? Would you suggest otherwise?
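
                    For context, bufferbloat shows up as latency spiking whenever the link is saturated; a quick before/after check for the shaper (a sketch; the ping target and iperf server are placeholders):

                    ```sh
                    # Terminal 1: watch baseline latency while the test runs
                    ping 8.8.8.8
                    # Terminal 2: saturate the link for 30 seconds
                    iperf -c iperf.example.net -t 30
                    # If ping times climb from ~10 ms to hundreds of ms during the
                    # transfer, a queue discipline such as CODELQ on the WAN shaper
                    # should help.
                    ```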

                    • whosmatt

                      @Paint:

                      This whole setup only cost me $400 USD + $30 USD for a Dell PowerConnect 2716 Managed Switch from eBay. For the price, I don't think it can be beat!

                      Please tell me that switch is fanless.  If it is and has the regular Dell CLI, I want one now.

                      • Paint

                        @whosmatt:

                        @Paint:

                        This whole setup only cost me $400 USD + $30 USD for a Dell PowerConnect 2716 Managed Switch from eBay. For the price, I don't think it can be beat!

                        Please tell me that switch is fanless.  If it is and has the regular Dell CLI, I want one now.

                        It is fanless, but unfortunately it only has web GUI configuration; no CLI.

                        • Paint

                          • Impersonation of G1100 FIOS DHCP Packet

                            • (updated instructions for the FiOS Quanum Gateway, coming soon)
                          • he.net IPv6 Tunnel

                          • Snort

                          • pfBlockerNG + DNSBL

                          • Traffic Shaper (CODELQ)

                          • ntopng

                          http://pastebin.com/DpzEjg5h

                          iperf -c 192.168.1.1 -w 64KB
                          ------------------------------------------------------------
                          Client connecting to 192.168.1.1, TCP port 5001
                          TCP window size: 64.0 KByte
                          ------------------------------------------------------------
                          [  3] local 192.168.1.50 port 8911 connected with 192.168.1.1 port 5001
                          [ ID] Interval       Transfer     Bandwidth
                          [  3]  0.0-10.0 sec  1.11 GBytes   949 Mbits/sec
                          

                          • aGeekhere

                            What speed do you get from the squid cache? Also, did you set up pfSense to act as your DNS server? Here is a video on it: https://m.youtube.com/watch?v=s3VXLIXGazM

                            • Paint

                              @aGeekHere:

                              What speed do you get from the squid cache? Also, did you set up pfSense to act as your DNS server? Here is a video on it: https://m.youtube.com/watch?v=s3VXLIXGazM

                              Yes, I am using Unbound as my DNS server.

                              I have not had a chance to setup squid yet - I will let you know if I do.

                              • aGeekhere

                                Did you increase your DNS cache and find the fastest DNS servers in your area?

                                • Paint

                                  @aGeekHere:

                                  Did you increase your DNS cache and find the fastest DNS servers in your area?

                                  Yea, I went through all of those settings. Thanks!
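
                                  For anyone following along, the DNS cache increase can be made through Services > DNS Resolver > Custom options with something like the following (illustrative values, not tuned recommendations):

                                  ```
                                  server:
                                    msg-cache-size: 64m       # cached responses (Unbound default 4m)
                                    rrset-cache-size: 128m    # cached resource records (Unbound default 4m)
                                    prefetch: yes             # refresh popular entries before they expire
                                  ```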

                                  • Paint

                                    I had some issues where the em0 or em1 driver would stop responding with the following error:

                                    em0: Watchdog timeout Queue[0]-- resetting
                                    

                                    I was able to resolve the issue by allowing for IRQ (interrupt) sharing between processors (see net.isr.* commands at the bottom of the loader.conf.local file).

                                    Below are some other tweaks that I have setup. Please let me know if you have any other suggestions:

                                    /boot/loader.conf.local:

                                    #Redirect Console to UART 2
                                    comconsole_port="0x3E0"
                                    hint.uart.2.flags="0x10"
                                    #Redirect Console to UART 1
                                    #comconsole_port="0x2F8"
                                    #hint.uart.0.flags="0x0"
                                    #hint.uart.1.flags="0x10"
                                    #hint.atrtc.0.clock="0"
                                    hw.acpi.cpu.cx_lowest="Cmax"
                                    kern.ipc.nmbclusters="1000000"
                                    #kern.ipc.nmbjumbop="524288"
                                    hw.pci.do_power_suspend="0"
                                    hw.pci.do_power_nodriver="3"
                                    #hw.pci.do_power_nodriver="0"
                                    hw.pci.realloc_bars="1"
                                    hint.ral.0.disabled="1"
                                    #hint.agp.0.disabled="1"
                                    kern.ipc.somaxconn="16384"
                                    kern.ipc.soacceptqueue="16384"
                                    legal.intel_ipw.license_ack=1
                                    legal.intel_iwi.license_ack=1
                                    #
                                    # Advanced Host Controller Interface (AHCI) 
                                    #hint.acpi.0.disabled="1"
                                    #ahci_load="YES"
                                    # H-TCP Congestion Control for a more aggressive increase in speed on higher
                                    # latency, high bandwidth networks with some packet loss. 
                                    cc_htcp_load="YES"
                                    #
                                    #hw.em.rxd="1024"
                                    #hw.em.txd="1024"
                                    #hw.em.rxd="2048"
                                    #hw.em.txd="2048"
                                    hw.em.rxd="4096"
                                    hw.em.txd="4096"
                                    hw.igb.rxd="4096"
                                    hw.igb.txd="4096"
                                    #hw.igb.rxd="1024"
                                    #hw.igb.txd="1024"
                                    # Intel igb(4): FreeBSD limits the number of received packets a network
                                    # card can process to 100 packets per interrupt cycle. This limit is in place
                                    # because of inefficiencies in IRQ sharing when the network card is using the
                                    # same IRQ as another device. When the Intel network card is assigned a unique
                                    # IRQ (dmesg) and MSI-X is enabled through the driver (hw.igb.enable_msix=1)
                                    # then interrupt scheduling is significantly more efficient and the NIC can be
                                    # allowed to process packets as fast as they are received. A value of "-1"
                                    # means unlimited packet processing and sets the same value to
                                    # dev.igb.0.rx_processing_limit and dev.igb.1.rx_processing_limit .
                                    hw.igb.rx_process_limit="-1"  # (default 100)
                                    hw.em.rx_process_limit="-1"
                                    #hw.em.rx_process_limit="400"
                                    #
                                    # Intel em: The Intel i350-T2 dual port NIC supports up to eight(8)
                                    # input/output queues per network port. A single CPU core can theoretically
                                    # forward 700K packets per second (pps) and a gigabit interface can
                                    # theoretically forward 1.488M packets per second (pps). Testing has shown a
                                    # server can most efficiently process the number of network queues equal to the
                                    # total number of CPU cores in the machine. For example, a firewall with
                                    # four(4) CPU cores and an i350-T2 dual port NIC should use two(2) queues per
                                    # network port for a total of four(4) network queues which correlate to four(4)
                                    # CPU cores. A server with four(4) CPU cores and a single network port should
                                    # use four(4) network queues. Query total interrupts per queue with "vmstat
                                    # -i" and use "top -H -S" to watch CPU usage per igb0:que. MSIX interrupts
                                    # start at 256 and the igb driver uses one vector per queue, known as a TX/RX
                                    # pair. The default hw.igb.num_queues value of zero(0) sets the number of
                                    # network queues equal to the number of logical CPU cores per network port.
                                    # Disable hyper threading as HT logical cores should not be used in routing as
                                    # hyper threading, also known as simultaneous multithreading (SMT), can lead to
                                    # unpredictable latency spikes.
                                    hw.em.max_interrupt_rate="32000"
                                    hw.igb.max_interrupt_rate="32000" # (default 8000)
                                    #hw.em.max_interrupt_rate="8000"
                                    hw.igb.enable_aim="1"  # (default 1)
                                    hw.igb.enable_msix="1"  # (default 1)
                                    #
                                    hw.pci.enable_msix="1"
                                    hw.pci.enable_msi="1"
                                    #hw.em.msix="0"
                                    hw.em.msix="1"
                                    #hw.em.enable_msix="0"
                                    hw.em.enable_msix="1"
                                    #hw.em.msix_queues="2"
                                    #hw.em.num_queues="2"
                                    hw.em.num_queues="0"
                                    #hw.igb.num_queues="0"
                                    hw.igb.num_queues="2"
                                    net.inet.tcp.tso="0"
                                    hw.em.smart_pwr_down="0"
                                    hw.em.sbp="0"
                                    hw.em.eee_setting="0"
                                    #hw.em.eee_setting="1"
                                    #hw.em.fc_setting="3"
                                    hw.em.fc_setting="0"
                                    #
                                    hw.em.rx_int_delay="0"
                                    hw.em.tx_int_delay="0"
                                    hw.em.rx_abs_int_delay="0"
                                    hw.em.tx_abs_int_delay="0"
                                    #
                                    #hw.em.rx_abs_int_delay="1024"
                                    #hw.em.tx_abs_int_delay="1024"
                                    #hw.em.tx_int_delay="128"
                                    #hw.em.rx_int_delay="100"
                                    #hw.em.tx_int_delay="64"
                                    #
                                    # "sysctl net.inet.tcp.hostcache.list" 
                                    net.inet.tcp.hostcache.cachelimit="0"
                                    #
                                    #net.inet.tcp.tcbhashsize="2097152"
                                    #
                                    net.link.ifqmaxlen="8192"  # (default 50)
                                    #
                                    # For high bandwidth systems setting bindthreads to "0" will spread the
                                    # network processing load over multiple cpus allowing the system to handle more
                                    # throughput. The default is faster for most lightly loaded systems (default 0)
                                    #net.isr.bindthreads="0"
                                    net.isr.bindthreads="1"
                                    
                                    # qlimit for igmp, arp, ether and ip6 queues only (netstat -Q) (default 256)
                                    #net.isr.defaultqlimit="2048"
                                    net.isr.defaultqlimit="4096"
                                    
                                    # interrupt handling via multiple CPU (default direct)
                                    net.isr.dispatch="direct"
                                    #net.isr.dispatch="hybrid"
                                    
                                    # limit per-workstream queues (use "netstat -Q"; if Qdrop is greater than 0,
                                    # increase this directive) (default 10240)
                                    net.isr.maxqlimit="10240"
                                    
                                    # Max number of threads for NIC IRQ balancing: use 3 for a 4-core box, leaving
                                    # at least one core for system or service processing (default 1). Again, if you
                                    # notice one CPU being overloaded due to network processing, this directive will
                                    # spread out the load at the cost of CPU affinity unbinding. The default of "1"
                                    # is faster if a single core is not already overloaded.
                                    #net.isr.maxthreads="2"
                                    #net.isr.maxthreads="3"
                                    #net.isr.maxthreads="4"
                                    net.isr.maxthreads="-1"
                                    
                                    

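                                    To check whether the queue and interrupt tunables above are taking effect, the commands mentioned in the comments can be run from a shell (FreeBSD, as root):

                                    ```sh
                                    vmstat -i    # interrupt totals per igb/em queue vector
                                    top -HS      # per-thread CPU usage; watch the irq and igb0:que threads
                                    netstat -Q   # netisr queue configuration, qlimit, and drop counters
                                    ```
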
                                    /etc/sysctl.conf (System Tunables)

                                    | Tunable Name | Description | Value | Modified |
                                    | net.inet.ip.forwarding | (default 0) | 1 | Yes |
                                    | net.inet.ip.fastforwarding | (default 0) | 1 | Yes |
                                    | net.inet.tcp.mssdflt | (default 536) | 1460 | Yes |
                                    | net.inet.tcp.minmss | (default 216) | 536 | Yes |
                                    | net.inet.tcp.syncache.rexmtlimit | (default 3) | 0 | Yes |
                                    | net.inet.ip.maxfragpackets | (default 13687) | 0 | Yes |
                                    | net.inet.ip.maxfragsperpacket | (default 16) | 0 | Yes |
                                    | net.inet.tcp.abc_l_var | (default 2) | 44 | Yes |
                                    | net.inet.ip.rtexpire | (default 3600) | 10 | Yes |
                                    | net.inet.tcp.syncookies | (default 1) | 0 | Yes |
                                    | net.inet.tcp.tso | Enable TCP Segmentation Offload | 0 | Yes |
                                    | hw.kbd.keymap_restrict_change | Disallow keymap changes for non-privileged users | 4 | Yes |
                                    | kern.msgbuf_show_timestamp | display timestamp in msgbuf (default 0) | 1 | Yes |
                                    | kern.randompid | Random PID modulus | 702 | Yes |
                                    | net.inet.icmp.drop_redirect | no redirected ICMP packets (default 0) | 1 | Yes |
                                    | net.inet.ip.check_interface | verify packet arrives on correct interface (default 0) | 1 | Yes |
                                    | net.inet.ip.process_options | ignore IP options in the incoming packets (default 1) | 0 | Yes |
                                    | net.inet.ip.redirect | Enable sending IP redirects | 0 | Yes |
                                    | net.inet.tcp.always_keepalive | disable tcp keep alive detection for dead peers, keepalive can be spoofed (default 1) | 0 | Yes |
                                    | net.inet.tcp.icmp_may_rst | icmp may not send RST to avoid spoofed icmp/udp floods (default 1) | 0 | Yes |
                                    | net.inet.tcp.msl | Maximum Segment Lifetime a TCP segment can exist on the network, 2*MSL (default 30000, 60 sec) | 5000 | Yes |
                                    | net.inet.tcp.nolocaltimewait | remove TIME_WAIT states for the loopback interface (default 0) | 1 | Yes |
                                    | net.inet.tcp.path_mtu_discovery | disable MTU discovery since many hosts drop ICMP type 3 packets (default 1) | 0 | Yes |
                                    | net.inet.tcp.sendbuf_max | (default 2097152) | 4194304 | Yes |
                                    | net.inet.tcp.recvbuf_max | (default 2097152) | 4194304 | Yes |
                                    | vfs.read_max | Cluster read-ahead max block count (Default 32) | 128 | Yes |
                                    | net.link.ether.inet.allow_multicast | Allow Windows Network Load Balancing and Open Mesh access points Multicast RFC 1812 | 1 | Yes |
                                    | hw.intr_storm_threshold | (default 1000) | 10000 | Yes |
                                    | hw.pci.do_power_suspend | (default 1) | 0 | Yes |
                                    | hw.pci.do_power_nodriver | (default 0) | 3 | Yes |
                                    | hw.pci.realloc_bars | (default 0) | 1 | Yes |
                                    | net.inet.tcp.delayed_ack | Delay ACK to try and piggyback it onto a data packet | 3 | Yes |
                                    | net.inet.tcp.delacktime | (default 100) | 20 | Yes |
                                    | net.inet.tcp.sendbuf_inc | (default 8192) | 32768 | Yes |
                                    | net.inet.tcp.recvbuf_inc | (default 16384) | 65536 | Yes |
                                    | net.inet.tcp.fast_finwait2_recycle | (default 0) | 1 | Yes |
                                    | kern.ipc.soacceptqueue | (default 128 ; same as kern.ipc.somaxconn) | 16384 | Yes |
                                    | kern.ipc.maxsockbuf | Maximum socket buffer size (default 4262144) | 16777216 | Yes |
                                    | net.inet.tcp.cc.algorithm | (default newreno) | htcp | Yes |
                                    | net.inet.tcp.cc.htcp.adaptive_backoff | (default 0 ; disabled) | 1 | Yes |
                                    | net.inet.tcp.cc.htcp.rtt_scaling | (default 0 ; disabled) | 1 | Yes |
                                    | kern.threads.max_threads_per_proc | (default 1500) | 1500 | Yes |
                                    | dev.em.0.fc | (default 3) | 0 | Yes |
                                    | dev.em.1.fc | (default 3) | 0 | Yes |
                                    | hw.acpi.cpu.cx_lowest | | Cmax | Yes |
                                    | kern.sched.interact | (default 30) | 5 | Yes |
                                    | kern.sched.slice | (default 12) | 3 | Yes |
                                    | kern.random.sys.harvest.ethernet | Harvest NIC entropy | 1 | Yes |
                                    | kern.random.sys.harvest.interrupt | Harvest IRQ entropy | 1 | Yes |
                                    | kern.random.sys.harvest.point_to_point | Harvest serial net entropy | 1 | Yes |
                                    | kern.sigqueue.max_pending_per_proc | (default 128) | 256 | Yes |
                                    | net.inet6.ip6.redirect | (default 1) | 0 | Yes |
                                    | net.inet.tcp.v6mssdflt | (default 1220) | 1440 | Yes |
                                    | net.inet6.icmp6.rediraccept | (default 1) | 0 | Yes |
                                    | net.inet6.icmp6.nodeinfo | (default 3) | 0 | Yes |
                                    | net.inet6.ip6.forwarding | (default 1) | 1 | Yes |
                                    | dev.igb.0.fc | (default 3) | 0 | Yes |
                                    | dev.igb.1.fc | (default 3) | 0 | Yes |
                                    | net.inet.ip.portrange.first | | 1024 | No (Default) |
                                    | net.inet.tcp.blackhole | Do not send RST on segments to closed ports | 2 | No (Default) |
                                    | net.inet.udp.blackhole | Do not send port unreachables for refused connects | 1 | No (Default) |
                                    | net.inet.ip.random_id | Assign random ip_id values | 1 | No (Default) |
                                    | net.inet.tcp.drop_synfin | Drop TCP packets with SYN+FIN set | 1 | No (Default) |
                                    | net.inet6.ip6.use_tempaddr | | 0 | No (Default) |
                                    | net.inet6.ip6.prefer_tempaddr | | 0 | No (Default) |
                                    | net.inet.tcp.recvspace | Initial receive socket buffer size | 65228 | No (Default) |
                                    | net.inet.tcp.sendspace | Initial send socket buffer size | 65228 | No (Default) |
                                    | net.inet.udp.maxdgram | Maximum outgoing UDP datagram size | 57344 | No (Default) |
                                    | net.link.bridge.pfil_onlyip | Only pass IP packets when pfil is enabled | 0 | No (Default) |
                                    | net.link.bridge.pfil_member | Packet filter on the member interface | 1 | No (Default) |
                                    | net.link.bridge.pfil_bridge | Packet filter on the bridge interface | 0 | No (Default) |
                                    | net.link.tap.user_open | Allow user to open /dev/tap (based on node permissions) | 1 | No (Default) |
                                    | net.inet.ip.intr_queue_maxlen | Maximum size of the IP input queue | 1000 | No (Default) |
                                    | hw.syscons.kbd_reboot | enable keyboard reboot | 0 | No (Default) |
                                    | net.inet.tcp.log_debug | Log errors caused by incoming TCP segments | 0 | No (Default) |
                                    | net.inet.icmp.icmplim | Maximum number of ICMP responses per second | 0 | No (Default) |
                                    | net.route.netisr_maxqlen | maximum routing socket dispatch queue length | 1024 | No (Default) |
                                    | net.inet.udp.checksum | compute udp checksum | 1 | No (Default) |
                                    | net.inet.icmp.reply_from_interface | ICMP reply from incoming interface for non-local packets | 1 | No (Default) |
                                    | net.inet6.ip6.rfc6204w3 | Accept the default router list from ICMPv6 RA messages even when packet forwarding enabled. | 1 | No (Default) |
                                    | net.enc.out.ipsec_bpf_mask | IPsec output bpf mask | 0x0001 | No (Default) |
                                    | net.enc.out.ipsec_filter_mask | IPsec output firewall filter mask | 0x0001 | No (Default) |
                                    | net.enc.in.ipsec_bpf_mask | IPsec input bpf mask | 0x0002 | No (Default) |
                                    | net.enc.in.ipsec_filter_mask | IPsec input firewall filter mask | 0x0002 | No (Default) |
                                    | net.key.preferred_oldsa | | 0 | No (Default) |
                                    | net.inet.carp.senderr_demotion_factor | Send error demotion factor adjustment | 0 (0) | No (Default) |
                                    | net.pfsync.carp_demotion_factor | pfsync's CARP demotion factor adjustment | 0 (0) | No (Default) |
                                    | net.raw.recvspace | Default raw socket receive space | 65536 | No (Default) |
                                    | net.raw.sendspace | Default raw socket send space | 65536 | No (Default) |
                                    | net.inet.raw.recvspace | Maximum space for incoming raw IP datagrams | 131072 | No (Default) |
                                    | net.inet.raw.maxdgram | Maximum outgoing raw IP datagram size | 131072 | No (Default) |
                                    | kern.corefile | Process corefile name format string | /root/%N.core | No (Default) |
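
                                    These tunables can be read back or overridden on the fly from the pfSense shell; a quick sketch, assuming a FreeBSD/pfSense console (the OID shown is just one example from the table above — use System > Advanced > System Tunables in the GUI to make a change persistent):

                                    ```sh
                                    # Read the current value of one of the tunables listed above
                                    sysctl net.inet.ip.intr_queue_maxlen

                                    # Override it for the running kernel only; this reverts on reboot
                                    sysctl net.inet.ip.intr_queue_maxlen=2000
                                    ```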

                                    pfSense i5-4590
                                    940/880 mbit Fiber Internet from FiOS
                                    BROCADE ICX6450 48Port L3-Managed Switch w/4x 10GB ports
                                    Netgear R8000 AP (DD-WRT)

                                    • P
                                      Paint
                                      last edited by

                                      I was still getting the Watchdog Queue Timeout on the em0 driver, until I got an error stating that the kernel hit the Maximum Fragment Entries in the firewall.

                                      I tweaked the Firewall Maximum Fragment Entries, Firewall Maximum Table Entries, and Firewall Maximum States in System->Advanced->Firewall & NAT to larger values and I haven't had a freeze yet!
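
                                      Before raising these limits, current pf usage can be compared against the configured caps from the shell; a minimal sketch, assuming a pfSense/FreeBSD box with pf enabled:

                                      ```sh
                                      # Current state-table usage ("current entries") and counters
                                      pfctl -si

                                      # Configured hard limits: states, frags, src-nodes, table-entries
                                      pfctl -sm
                                      ```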

                                      • R
                                        richtj99
                                        last edited by

                                        Hi,

                                        What was the cost of the PC & what sort of wattage is being used?

                                         Thanks,
                                        Rich

                                        • P
                                          Paint
                                          last edited by

                                          @richtj99:

                                          Hi,

                                          What was the cost of the PC & what sort of wattage is being used?

                                           Thanks,
                                          Rich

                                           Not sure about the wattage, but I can test it if it's really that important.

                                           The machine with the switch was 350 USD.

                                          • D
                                            duren
                                            last edited by

                                            @Paint:

                                            @mauroman33:

                                            Hi Paint,

                                            could you please run the simple OpenVPN benchmark referenced here:
                                            https://forum.pfsense.org/index.php?topic=105238.msg616743#msg616743 (Reply #9 message)

                                            Executing the command on my router with a Celeron N3150 I get
                                            27.41 real        25.62 user        1.77 sys

                                            (3200 / 27.41) = 117 Mbps OpenVPN performance (estimate)

                                            This value perfectly fits to the result of a real speed test.

                                            I recently got an upgrade to 250/100 connection and I'm considering buying a mini PC as your own if it were able to sustain this speed through the OpenVPN connection.

                                            Thanks!

                                            Here is the output:

                                            [2.3.1-RELEASE][root@pfSense.lan]/root: openvpn --genkey --secret /tmp/secret
                                            [2.3.1-RELEASE][root@pfSense.lan]/root: time openvpn --test-crypto --secret /tmp/secret --verb 0 --tun-mtu 20000 --cipher aes-256-cbc
                                            10.682u 0.677s 0:11.36 99.9%    742+177k 0+0io 1pf+0w
                                            [2.3.1-RELEASE][root@pfSense.lan]/root:
                                            

                                            (3200 / 11.36) = 281.7 Mbps OpenVPN performance (estimate)

                                             Wow, I'm a little surprised; I would have thought an i7-4510U would be able to do more than ~300 Mbps over VPN.

                                             I'm assuming this is without AES-NI? I'd be very curious to know the throughput once AES-NI support finally lands as part of OpenVPN 2.4 (https://forum.pfsense.org/index.php?topic=109539.0)
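
                                             The estimate mauroman33 and duren are using is just elapsed-time division; a small sketch in Python, assuming (as the benchmark thread does) that the --test-crypto run pushes roughly 3200 megabits through the cipher:

                                             ```python
                                             # Rough OpenVPN single-core throughput estimate from `time openvpn --test-crypto`.
                                             # The 3200-megabit constant comes from the benchmark thread linked above;
                                             # real_seconds is the elapsed ("real") time reported by time(1).
                                             def estimate_mbps(real_seconds, megabits=3200.0):
                                                 return megabits / real_seconds

                                             print(round(estimate_mbps(27.41)))  # Celeron N3150 -> 117
                                             print(round(estimate_mbps(11.36)))  # i7-4510U      -> 282
                                             ```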

                                            Copyright 2025 Rubicon Communications LLC (Netgate). All rights reserved.