Netgate Discussion Forum

pfSense i7-4510U + 2x Intel 82574 + 2x Intel i350 (Mini-PCIe) Mini-ITX Build

• aGeekhere

Speed from the Squid cache? Also, did you set up pfSense to act as your DNS server? Here is a video on it: https://m.youtube.com/watch?v=s3VXLIXGazM

      Never Fear, A Geek is Here!

• Paint

        @aGeekHere:

Speed from the Squid cache? Also, did you set up pfSense to act as your DNS server? Here is a video on it: https://m.youtube.com/watch?v=s3VXLIXGazM

Yes, I am using unbound as my DNS server.

I have not had a chance to set up Squid yet; I will let you know if I do.

        pfSense i5-4590
        940/880 mbit Fiber Internet from FiOS
        BROCADE ICX6450 48Port L3-Managed Switch w/4x 10GB ports
        Netgear R8000 AP (DD-WRT)

• aGeekhere

Did you increase your DNS cache and find the fastest DNS servers in your area?

• Paint

            @aGeekHere:

Did you increase your DNS cache and find the fastest DNS servers in your area?

Yeah, I went through all of those settings. Thanks!
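
For reference, the unbound cache sizes in question can be raised under Services > DNS Resolver (or pasted into the Custom Options box). A minimal sketch with illustrative values, not the exact ones used in this thread:

    server:
      # sizes are illustrative -- scale to available RAM
      msg-cache-size: 128m
      # rule of thumb: rrset cache about twice the message cache
      rrset-cache-size: 256m
      # refresh popular records shortly before they expire
      prefetch: yes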

• Paint

I had some issues where the em0 or em1 interface would stop responding with the following error:

              em0: Watchdog timeout Queue[0]-- resetting
              

I was able to resolve the issue by allowing IRQ (interrupt) sharing between processors (see the net.isr.* tunables at the bottom of the loader.conf.local file).

Below are some other tweaks that I have set up. Please let me know if you have any other suggestions:

              /boot/loader.conf.local:

              #Redirect Console to UART 2
              comconsole_port="0x3E0"
              hint.uart.2.flags="0x10"
              #Redirect Console to UART 1
              #comconsole_port="0x2F8"
              #hint.uart.0.flags="0x0"
              #hint.uart.1.flags="0x10"
              #hint.atrtc.0.clock="0"
              hw.acpi.cpu.cx_lowest="Cmax"
              kern.ipc.nmbclusters="1000000"
              #kern.ipc.nmbjumbop="524288"
              hw.pci.do_power_suspend="0"
              hw.pci.do_power_nodriver="3"
              #hw.pci.do_power_nodriver="0"
              hw.pci.realloc_bars="1"
              hint.ral.0.disabled="1"
              #hint.agp.0.disabled="1"
              kern.ipc.somaxconn="16384"
              kern.ipc.soacceptqueue="16384"
              legal.intel_ipw.license_ack=1
              legal.intel_iwi.license_ack=1
              #
              # Advanced Host Controller Interface (AHCI) 
              #hint.acpi.0.disabled="1"
              #ahci_load="YES"
              # H-TCP Congestion Control for a more aggressive increase in speed on higher
              # latency, high bandwidth networks with some packet loss. 
              cc_htcp_load="YES"
              #
              #hw.em.rxd="1024"
              #hw.em.txd="1024"
              #hw.em.rxd="2048"
              #hw.em.txd="2048"
              hw.em.rxd="4096"
              hw.em.txd="4096"
              hw.igb.rxd="4096"
              hw.igb.txd="4096"
              #hw.igb.rxd="1024"
              #hw.igb.txd="1024"
              # Intel igb(4): FreeBSD limits the number of received packets a network
              # card can process to 100 packets per interrupt cycle. This limit is in place
              # because of inefficiencies in IRQ sharing when the network card is using the
              # same IRQ as another device. When the Intel network card is assigned a unique
              # IRQ (dmesg) and MSI-X is enabled through the driver (hw.igb.enable_msix=1)
              # then interrupt scheduling is significantly more efficient and the NIC can be
              # allowed to process packets as fast as they are received. A value of "-1"
              # means unlimited packet processing and sets the same value to
              # dev.igb.0.rx_processing_limit and dev.igb.1.rx_processing_limit .
              hw.igb.rx_process_limit="-1"  # (default 100)
              hw.em.rx_process_limit="-1"
              #hw.em.rx_process_limit="400"
              #
              # Intel igb(4): The Intel i350-T2 dual port NIC supports up to eight (8)
              # input/output queues per network port. A single CPU core can theoretically
              # forward 700K packets per second (pps) and a gigabit interface can
              # theoretically forward 1.488M packets per second (pps). Testing has shown a
              # server can most efficiently process the number of network queues equal to the
              # total number of CPU cores in the machine. For example, a firewall with
              # four(4) CPU cores and an i350-T2 dual port NIC should use two(2) queues per
              # network port for a total of four(4) network queues which correlate to four(4)
              # CPU cores. A server with four(4) CPU cores and a single network port should
              # use four(4) network queues. Query total interrupts per queue with "vmstat
              # -i" and use "top -H -S" to watch CPU usage per igb0:que. MSIX interrupts
              # start at 256 and the igb driver uses one vector per queue known as a TX/RX
              # pair. The default hw.igb.num_queues value of zero(0) sets the number of
              # network queues equal to the number of logical CPU cores per network port.
              # Disable hyper-threading: HT logical cores should not be used for routing,
              # as hyper-threading, also known as simultaneous multithreading (SMT), can
              # lead to unpredictable latency spikes.
              hw.em.max_interrupt_rate="32000"
              hw.igb.max_interrupt_rate="32000" # (default 8000)
              #hw.em.max_interrupt_rate="8000"
              hw.igb.enable_aim="1"  # (default 1)
              hw.igb.enable_msix="1"  # (default 1)
              #
              hw.pci.enable_msix="1"
              hw.pci.enable_msi="1"
              #hw.em.msix="0"
              hw.em.msix="1"
              #hw.em.enable_msix="0"
              hw.em.enable_msix="1"
              #hw.em.msix_queues="2"
              #hw.em.num_queues="2"
              hw.em.num_queues="0"
              #hw.igb.num_queues="0"
              hw.igb.num_queues="2"
              net.inet.tcp.tso="0"
              hw.em.smart_pwr_down="0"
              hw.em.sbp="0"
              hw.em.eee_setting="0"
              #hw.em.eee_setting="1"
              #hw.em.fc_setting="3"
              hw.em.fc_setting="0"
              #
              hw.em.rx_int_delay="0"
              hw.em.tx_int_delay="0"
              hw.em.rx_abs_int_delay="0"
              hw.em.tx_abs_int_delay="0"
              #
              #hw.em.rx_abs_int_delay="1024"
              #hw.em.tx_abs_int_delay="1024"
              #hw.em.tx_int_delay="128"
              #hw.em.rx_int_delay="100"
              #hw.em.tx_int_delay="64"
              #
              # "sysctl net.inet.tcp.hostcache.list" 
              net.inet.tcp.hostcache.cachelimit="0"
              #
              #net.inet.tcp.tcbhashsize="2097152"
              #
              net.link.ifqmaxlen="8192"  # (default 50)
              #
              # For high bandwidth systems setting bindthreads to "0" will spread the
              # network processing load over multiple cpus allowing the system to handle more
              # throughput. The default is faster for most lightly loaded systems (default 0)
              #net.isr.bindthreads="0"
              net.isr.bindthreads="1"
              
              # qlimit for igmp, arp, ether and ip6 queues only (netstat -Q) (default 256)
              #net.isr.defaultqlimit="2048"
              net.isr.defaultqlimit="4096"
              
              # interrupt handling via multiple CPU (default direct)
              net.isr.dispatch="direct"
              #net.isr.dispatch="hybrid"
              
              # limit per-workstream queues (use "netstat -Q"; if Qdrop is greater than 0,
              # increase this directive) (default 10240)
              net.isr.maxqlimit="10240"
              
              # Max number of threads for NIC IRQ balancing: e.g. 3 for a 4-core box, leaving
              # at least one core for system or service processing (default 1). Again, if you
              # notice one cpu being overloaded due to network processing this directive will
              # spread out the load at the cost of cpu affinity unbinding. The default of "1"
              # is faster if a single core is not already overloaded.
              #net.isr.maxthreads="2"
              #net.isr.maxthreads="3"
              #net.isr.maxthreads="4"
              net.isr.maxthreads="-1"
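
For anyone replicating this, the comments above already name the verification tools; collected in one place (standard FreeBSD commands, nothing pfSense-specific):

    # confirm the netisr dispatch policy, thread binding, and any queue drops
    netstat -Q
    # per-vector interrupt counts for the em/igb queues
    vmstat -i
    # watch per-queue CPU usage (look for the igb0:que threads)
    top -H -S
    # read the loader tunables back at runtime
    sysctl net.isr.maxthreads net.isr.bindthreads net.isr.dispatch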
              
              

              /etc/sysctl.conf (System Tunables)

              | Tunable Name | Description | Value | Modified |
              | --- | --- | --- | --- |
              | net.inet.ip.forwarding | (default 0) | 1 | Yes |
              | net.inet.ip.fastforwarding | (default 0) | 1 | Yes |
              | net.inet.tcp.mssdflt | (default 536) | 1460 | Yes |
              | net.inet.tcp.minmss | (default 216) | 536 | Yes |
              | net.inet.tcp.syncache.rexmtlimit | (default 3) | 0 | Yes |
              | net.inet.ip.maxfragpackets | (default 13687) | 0 | Yes |
              | net.inet.ip.maxfragsperpacket | (default 16) | 0 | Yes |
              | net.inet.tcp.abc_l_var | (default 2) | 44 | Yes |
              | net.inet.ip.rtexpire | (default 3600) | 10 | Yes |
              | net.inet.tcp.syncookies | (default 1) | 0 | Yes |
              | net.inet.tcp.tso | Enable TCP Segmentation Offload | 0 | Yes |
              | hw.kbd.keymap_restrict_change | Disallow keymap changes for non-privileged users | 4 | Yes |
              | kern.msgbuf_show_timestamp | display timestamp in msgbuf (default 0) | 1 | Yes |
              | kern.randompid | Random PID modulus | 702 | Yes |
              | net.inet.icmp.drop_redirect | no redirected ICMP packets (default 0) | 1 | Yes |
              | net.inet.ip.check_interface | verify packet arrives on correct interface (default 0) | 1 | Yes |
              | net.inet.ip.process_options | ignore IP options in the incoming packets (default 1) | 0 | Yes |
              | net.inet.ip.redirect | Enable sending IP redirects | 0 | Yes |
              | net.inet.tcp.always_keepalive | disable tcp keep alive detection for dead peers, keepalive can be spoofed (default 1) | 0 | Yes |
              | net.inet.tcp.icmp_may_rst | icmp may not send RST to avoid spoofed icmp/udp floods (default 1) | 0 | Yes |
              | net.inet.tcp.msl | Maximum Segment Lifetime a TCP segment can exist on the network, 2*MSL (default 30000, 60 sec) | 5000 | Yes |
              | net.inet.tcp.nolocaltimewait | remove TIME_WAIT states for the loopback interface (default 0) | 1 | Yes |
              | net.inet.tcp.path_mtu_discovery | disable MTU discovery since many hosts drop ICMP type 3 packets (default 1) | 0 | Yes |
              | net.inet.tcp.sendbuf_max | (default 2097152) | 4194304 | Yes |
              | net.inet.tcp.recvbuf_max | (default 2097152) | 4194304 | Yes |
              | vfs.read_max | Cluster read-ahead max block count (Default 32) | 128 | Yes |
              | net.link.ether.inet.allow_multicast | Allow Windows Network Load Balancing and Open Mesh access points Multicast RFC 1812 | 1 | Yes |
              | hw.intr_storm_threshold | (default 1000) | 10000 | Yes |
              | hw.pci.do_power_suspend | (default 1) | 0 | Yes |
              | hw.pci.do_power_nodriver | (default 0) | 3 | Yes |
              | hw.pci.realloc_bars | (default 0) | 1 | Yes |
              | net.inet.tcp.delayed_ack | Delay ACK to try and piggyback it onto a data packet | 3 | Yes |
              | net.inet.tcp.delacktime | (default 100) | 20 | Yes |
              | net.inet.tcp.sendbuf_inc | (default 8192) | 32768 | Yes |
              | net.inet.tcp.recvbuf_inc | (default 16384) | 65536 | Yes |
              | net.inet.tcp.fast_finwait2_recycle | (default 0) | 1 | Yes |
              | kern.ipc.soacceptqueue | (default 128 ; same as kern.ipc.somaxconn) | 16384 | Yes |
              | kern.ipc.maxsockbuf | Maximum socket buffer size (default 4262144) | 16777216 | Yes |
              | net.inet.tcp.cc.algorithm | (default newreno) | htcp | Yes |
              | net.inet.tcp.cc.htcp.adaptive_backoff | (default 0 ; disabled) | 1 | Yes |
              | net.inet.tcp.cc.htcp.rtt_scaling | (default 0 ; disabled) | 1 | Yes |
              | kern.threads.max_threads_per_proc | (default 1500) | 1500 | Yes |
              | dev.em.0.fc | (default 3) | 0 | Yes |
              | dev.em.1.fc | (default 3) | 0 | Yes |
              | hw.acpi.cpu.cx_lowest | | Cmax | Yes |
              | kern.sched.interact | (default 30) | 5 | Yes |
              | kern.sched.slice | (default 12) | 3 | Yes |
              | kern.random.sys.harvest.ethernet | Harvest NIC entropy | 1 | Yes |
              | kern.random.sys.harvest.interrupt | Harvest IRQ entropy | 1 | Yes |
              | kern.random.sys.harvest.point_to_point | Harvest serial net entropy | 1 | Yes |
              | kern.sigqueue.max_pending_per_proc | (default 128) | 256 | Yes |
              | net.inet6.ip6.redirect | (default 1) | 0 | Yes |
              | net.inet.tcp.v6mssdflt | (default 1220) | 1440 | Yes |
              | net.inet6.icmp6.rediraccept | (default 1) | 0 | Yes |
              | net.inet6.icmp6.nodeinfo | (default 3) | 0 | Yes |
              | net.inet6.ip6.forwarding | (default 1) | 1 | Yes |
              | dev.igb.0.fc | (default 3) | 0 | Yes |
              | dev.igb.1.fc | (default 3) | 0 | Yes |
              | net.inet.ip.portrange.first | | 1024 | No (Default) |
              | net.inet.tcp.blackhole | Do not send RST on segments to closed ports | 2 | No (Default) |
              | net.inet.udp.blackhole | Do not send port unreachables for refused connects | 1 | No (Default) |
              | net.inet.ip.random_id | Assign random ip_id values | 1 | No (Default) |
              | net.inet.tcp.drop_synfin | Drop TCP packets with SYN+FIN set | 1 | No (Default) |
              | net.inet6.ip6.use_tempaddr | | 0 | No (Default) |
              | net.inet6.ip6.prefer_tempaddr | | 0 | No (Default) |
              | net.inet.tcp.recvspace | Initial receive socket buffer size | 65228 | No (Default) |
              | net.inet.tcp.sendspace | Initial send socket buffer size | 65228 | No (Default) |
              | net.inet.udp.maxdgram | Maximum outgoing UDP datagram size | 57344 | No (Default) |
              | net.link.bridge.pfil_onlyip | Only pass IP packets when pfil is enabled | 0 | No (Default) |
              | net.link.bridge.pfil_member | Packet filter on the member interface | 1 | No (Default) |
              | net.link.bridge.pfil_bridge | Packet filter on the bridge interface | 0 | No (Default) |
              | net.link.tap.user_open | Allow user to open /dev/tap (based on node permissions) | 1 | No (Default) |
              | net.inet.ip.intr_queue_maxlen | Maximum size of the IP input queue | 1000 | No (Default) |
              | hw.syscons.kbd_reboot | enable keyboard reboot | 0 | No (Default) |
              | net.inet.tcp.log_debug | Log errors caused by incoming TCP segments | 0 | No (Default) |
              | net.inet.icmp.icmplim | Maximum number of ICMP responses per second | 0 | No (Default) |
              | net.route.netisr_maxqlen | maximum routing socket dispatch queue length | 1024 | No (Default) |
              | net.inet.udp.checksum | compute udp checksum | 1 | No (Default) |
              | net.inet.icmp.reply_from_interface | ICMP reply from incoming interface for non-local packets | 1 | No (Default) |
              | net.inet6.ip6.rfc6204w3 | Accept the default router list from ICMPv6 RA messages even when packet forwarding enabled. | 1 | No (Default) |
              | net.enc.out.ipsec_bpf_mask | IPsec output bpf mask | 0x0001 | No (Default) |
              | net.enc.out.ipsec_filter_mask | IPsec output firewall filter mask | 0x0001 | No (Default) |
              | net.enc.in.ipsec_bpf_mask | IPsec input bpf mask | 0x0002 | No (Default) |
              | net.enc.in.ipsec_filter_mask | IPsec input firewall filter mask | 0x0002 | No (Default) |
              | net.key.preferred_oldsa | | 0 | No (Default) |
              | net.inet.carp.senderr_demotion_factor | Send error demotion factor adjustment | 0 (0) | No (Default) |
              | net.pfsync.carp_demotion_factor | pfsync's CARP demotion factor adjustment | 0 (0) | No (Default) |
              | net.raw.recvspace | Default raw socket receive space | 65536 | No (Default) |
              | net.raw.sendspace | Default raw socket send space | 65536 | No (Default) |
              | net.inet.raw.recvspace | Maximum space for incoming raw IP datagrams | 131072 | No (Default) |
              | net.inet.raw.maxdgram | Maximum outgoing raw IP datagram size | 131072 | No (Default) |
              | kern.corefile | Process corefile name format string | /root/%N.core | No (Default) |
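
Most of the entries above are set through System > Advanced > System Tunables, but they can be checked, and test-driven, from a shell first. A quick sketch:

    # read the current value of a tunable
    sysctl net.inet.tcp.cc.algorithm
    # try a value at runtime before committing it in the GUI (reverts on reboot)
    sysctl net.inet.tcp.delacktime=20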

• Paint

I was still getting the watchdog timeout on the em0 driver until I saw an error stating that the kernel had hit the firewall's Maximum Fragment Entries limit.

                I tweaked the Firewall Maximum Fragment Entries, Firewall Maximum Table Entries, and Firewall Maximum States in System->Advanced->Firewall & NAT to larger values and I haven't had a freeze yet!
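
If anyone wants to see how close they are to those limits before raising them, pf exposes the counters directly. A quick sketch:

    # configured hard limits: states, src-nodes, frags, table-entries
    pfctl -sm
    # state table counters, including current entries
    pfctl -si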

• richtj99

                  Hi,

                  What was the cost of the PC & what sort of wattage is being used?

Thanks,
                  Rich

• Paint

                    @richtj99:

                    Hi,

                    What was the cost of the PC & what sort of wattage is being used?

Thanks,
                    Rich

Not sure about the wattage, but I can test it if it's really that important.

The machine with the switch was 350 USD.

• duren

                      @Paint:

                      @mauroman33:

                      Hi Paint,

                      could you please run the simple OpenVPN benchmark referenced here:
                      https://forum.pfsense.org/index.php?topic=105238.msg616743#msg616743 (Reply #9 message)

                      Executing the command on my router with a Celeron N3150 I get
                      27.41 real        25.62 user        1.77 sys

                      (3200 / 27.41) = 117 Mbps OpenVPN performance (estimate)

This value fits perfectly with the result of a real speed test.

I recently got an upgrade to a 250/100 connection, and I'm considering buying a mini PC like yours if it is able to sustain this speed through the OpenVPN connection.

                      Thanks!

                      Here is the output:

                      [2.3.1-RELEASE][root@pfSense.lan]/root: openvpn --genkey --secret /tmp/secret
                      [2.3.1-RELEASE][root@pfSense.lan]/root: time openvpn --test-crypto --secret /tmp/secret --verb 0 --tun-mtu 20000 --cipher aes-256-cbc
                      10.682u 0.677s 0:11.36 99.9%    742+177k 0+0io 1pf+0w
                      [2.3.1-RELEASE][root@pfSense.lan]/root:
                      

                      (3200 / 11.36) = 281.7 Mbps OpenVPN performance (estimate)

Wow, I'm a little surprised; I would have thought an i7-4500U could do more than ~300 Mbps over VPN.

I'm assuming this is without AES-NI? I'd be very curious to know the throughput when AES-NI support finally arrives as part of OpenVPN 2.4 (https://forum.pfsense.org/index.php?topic=109539.0)
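
A quick way to confirm whether a box actually has AES-NI available on FreeBSD/pfSense (this assumes aesni(4) is built as a loadable module):

    # AESNI appears in the CPU Features2 line at boot if the CPU supports it
    grep -i aesni /var/run/dmesg.boot
    # check whether the aesni(4) driver is loaded
    kldstat | grep aesni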

• Paint

@duren:

Wow, I'm a little surprised; I would have thought an i7-4500U could do more than ~300 Mbps over VPN. I'm assuming this is without AES-NI?

                        That test is relatively theoretical.

The processor does support AES-NI. I have made some additional tweaks and plan on adding an additional Ethernet port via a Jetway Mini-PCIe board with the Intel i350 chipset.

                        I will run some more in depth tests tomorrow.

• Paint

@duren:

I'm assuming this is without AES-NI? I'd be very curious to know the throughput when AES-NI support finally arrives as part of OpenVPN 2.4.

I ran this test again with hw.acpi.cpu.cx_lowest="Cmax" set and AES-NI CPU-based acceleration enabled. I also have Snort + Barnyard2 running with pfBlockerNG.
Here is a full list of my services: avahi, dhcpd, dnsbl, dpinger, miniupnpd, ntopng, ntpd, openvpn, radvd, snort, sshd, and unbound.

                          [2.3.2-DEVELOPMENT][root@pfSense.lan]/root: openvpn --genkey --secret /tmp/secret
                          [2.3.2-DEVELOPMENT][root@pfSense.lan]/root: time openvpn --test-crypto --secret /tmp/secret --verb 0 --tun-mtu 20000 --cipher aes-256-cbc
                          10.106u 0.558s 0:10.67 99.8%    743+178k 0+0io 0pf+0w
                          

                          (3200 / 10.67) = 299.9 Mbps OpenVPN performance (estimate)
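
As a sanity check on the AES-NI effect, OpenSSL's built-in benchmark shows raw cipher throughput; the EVP form picks up AES-NI when available, while the plain form typically takes the software path, so comparing the two isolates the hardware speedup:

    # hardware-accelerated path (EVP uses AES-NI when present)
    openssl speed -evp aes-256-cbc
    # software path for comparison
    openssl speed aes-256-cbc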

• Paint

                            I tested my OpenVPN connection through work with iperf:

                            Server:

                            iperf.exe -s -u -p 5123 -i 5 -w 64K -P 100
                            

                            Client:

                            iperf.exe -c 192.168.1.50 -u -p 5123 -b 5000m -i 5 -t 120 -w 64K -P 100
                            

                            I was able to get the following averages:

                             | Bandwidth | Jitter |
                             | --- | --- |
                             | 787.89 Mbits/sec | 0.078 ms |
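
Note those figures are UDP (-u) across 100 parallel streams. For comparison, a TCP run against the same (example) endpoints would look roughly like this; the stream count of 4 is arbitrary:

Server:

    iperf.exe -s -p 5123 -i 5 -w 64K

Client:

    iperf.exe -c 192.168.1.50 -p 5123 -i 5 -t 120 -w 64K -P 4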

• pfcode

                              Hi,

I saw you have pfBlockerNG, IPv6, and unbound running. Do you use DNSBL? If so, do you have any issue where unbound isn't restarted properly with IPv6/DNSBL running each time the pfSense IPv6 WAN IP gets renewed (even though the IP doesn't actually change)? See also: https://forum.pfsense.org/index.php?topic=113193.0

                              Release: pfSense 2.4.3(amd64)
                              M/B: Supermicro A1SRi-2558F
                              HDD: Intel X25-M 160G
                              RAM: 2x8Gb Kingston ECC ValueRAM
                              AP: Netgear R7000 (XWRT), Unifi AC Pro

• Paint

                                @pfcode:

                                Hi,

I saw you have pfBlockerNG, IPv6, and unbound running. Do you use DNSBL? If so, do you have any issue where unbound isn't restarted properly with IPv6/DNSBL running each time the pfSense IPv6 WAN IP gets renewed (even though the IP doesn't actually change)? See also: https://forum.pfsense.org/index.php?topic=113193.0

Yes, I am also running DNSBL.

I haven't noticed any unbound restarts on WAN DHCP renewals. FiOS hasn't switched to DHCPv6, so I am only using DHCPv4 for my WAN plus a 6to4 HE.net tunnel (GIF).

• Paint

                                  @pfcode:

                                  Hi,

I saw you have pfBlockerNG, IPv6, and unbound running. Do you use DNSBL? If so, do you have any issue where unbound isn't restarted properly with IPv6/DNSBL running each time the pfSense IPv6 WAN IP gets renewed (even though the IP doesn't actually change)? See also: https://forum.pfsense.org/index.php?topic=113193.0

I actually experienced this issue last night! I will post about it in the thread you mentioned. Thank you!

• Paint

I am still getting the watchdog timeout on the em0 driver once in a while, so I decided to upgrade my Ethernet to the Intel i350 chipset.

Jetway is the only company producing a Mini-PCIe card with this server-based Intel Ethernet chipset, the ADMPEIDLB: http://www.jetwaycomputer.com/spec/expansion/ADMPEIDLB.pdf

I was able to speak to someone at their California headquarters (her name was Angel) and purchased this board for $75 shipped! It arrives on Thursday, so I will post updated Ethernet performance figures for everyone.

• Paint

I recently fixed my serial console by adding the following to my /boot/loader.conf.local:

                                      comconsole_port="0x2F8"
                                      hint.uart.0.flags="0x0"
                                      hint.uart.1.flags="0x10"
                                      

                                      as well as the following settings in the GUI:

[Screenshot: Serial.PNG (serial console settings in the GUI)]
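
To attach to that console from another machine, any serial terminal at pfSense's default 115200 baud works; a sketch from a FreeBSD box (the cuaU0 device node is an assumption, it varies by adapter):

    # connect to the first USB serial adapter at 115200 baud (~. to exit)
    cu -l /dev/cuaU0 -s 115200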

• Paint

                                        I added a Jetway Mini-PCIe Intel i350 ADMPEIDLB 2x Gigabit adapter to this machine.
The em(4) FreeBSD driver used with the on-board 2x Intel 82574 adapters would cause watchdog timeouts every 2-3 days.

                                        The Intel i350 ADMPEIDLB 2x Gigabit adapter uses the igb driver, which is much more stable.
I ran some iperf tests from my HTPC (which also has a 4x i350 Intel Ethernet adapter in it) and my laptop (wireless AC) at the same time. I was able to fully saturate both adapters at gigabit speeds while also maintaining my 150/150 outbound WAN. For my setup, this adapter works perfectly!

                                        I ordered the ADMPEIDLB board for $75 + s/h directly from Jetway. They have 3 more in stock, I believe (talk to Angel on the phone, tell them Josh sent you if you want one).
                                        http://www.jetwayipc.com/content/?ADMPEIDLB_3450.html

                                        I updated my thread with my loader.conf.local and sysctl.conf settings: https://forum.pfsense.org/index.php?topic=113610.msg637025#msg637025

                                        To install the board, I removed one of the 6 UART COM ports that this machine originally came with. I was able to route the wires through that hole and Velcro the board (without the PCI bracket) to the side of the machine. Looks pretty good for a home built machine, if you ask me!

[Photos: 20160728_183755.jpg, 20160728_184318.jpg (board installed)]
