PfSense i7-4510U + 2x Intel 82574 + 2x Intel i350 (miniPCIE) Mini-ITX Build
-
I am a long-time Unix/Linux user who has mainly used DD-WRT for my home router setup (over 30 devices, 2 routers + 1 AP). Now that I have 100/100 Mbit fiber Internet, I've decided it is time to venture into pfSense!
Any suggestions or questions are welcome!
I will update this post with the progress of my build. Here are the devices I plan on using in my setup:
pfSense Hardware:
Brand Name: HAMSING
Processor Main Frequency: 1.8GHz (Turbo 3.0GHz)
Processor Model: Intel i7-4500U
Model Number: HS-4500I
Hard Drive: Transcend 64GB SATA III 6Gb/s MSA370 mSATA Solid State Drive
RAM: 8GB 1600MHz DDR3L PC3-12800 ECC CL11 1.35V SODIMM
Video: VGA+HDMI
Audio: Realtek ALC6662
Network: 2x Intel 82574 (1000M)
USB: 6x USB 2.0, 2x USB 3.0
RS232: 6x RS232
WiFi: 300M

Managed Switch:
Dell PowerConnect 2716

Wireless AP:
1 x Netgear R8000 with DD-WRT (used as an Access Point)
1 x Asus RT-AC66U with DD-WRT (used as an Access Point)

-------- UPDATE 7/28/2016 --------
I added a Jetway ADMPEIDLB Mini-PCIe 2x Gigabit Intel i350 adapter to this machine.
The FreeBSD em(4) driver used with the on-board 2x Intel 82574 adapters would cause watchdog timeouts every 2-3 days. More information here: https://forum.pfsense.org/index.php?topic=113610.msg643350#msg643350
-
The setup looks great, but seems overkill for 100M internet…
-
The setup looks great, but seems overkill for 100M internet…
I agree that it is overkill, but hey, at least it is "future proof".
I was able to get the switch for 20 bucks on eBay and the whole MiniPC for less than 400 dollars. Overall I think for the price and size, you can't beat the setup above.
-
@Paint
Did you install pfSense and reach this throughput, or was the speed test done under Linux or DD-WRT?
How many and what kind of packages are installed on your pfSense box?
-
@BlueKobold:
@Paint
Did you install pfSense and reach this throughput, or was the speed test done under Linux or DD-WRT?
How many and what kind of packages are installed on your pfSense box?

The speed tests in my signature are using my Netgear R8000 router running DD-WRT (Kong) on my 100/100 Mbit fiber Internet connection.
The pfSense box I described in my OP arrives tomorrow so it will be at least a week before I post performance etc.
-
The speed tests in my signature are using my Netgear R8000 router running DD-WRT (Kong) on my 100/100 Mbit fiber Internet connection.
Ah, OK, that wasn't clear to me.
The pfSense box I described in my OP arrives tomorrow so it will be at least a week before I post performance etc.
I am really interested to hear about that! If you do a fresh, full install it will be really interesting
to see how well this pfSense box performs!
-
@BlueKobold:
The speed tests in my signature are using my Netgear R8000 router running DD-WRT (Kong) on my 100/100 Mbit fiber Internet connection.
Ah, OK, that wasn't clear to me.
The pfSense box I described in my OP arrives tomorrow, so it will be at least a week before I post performance etc.
I am really interested to hear about that! If you do a fresh, full install it will be really interesting
to see how well this pfSense box performs!

I am doing a clean install of pfSense 2.3.1 64-bit. I will let you know the benchmarks etc. once I have the machine built.
-
I have been considering a similar box myself; my goals were mainly future-proofing, and the Intel NICs plus the price point are very appealing. Are you running any type of VPN (IPsec etc.)? What packages are you running? You could probably get away with an i3-4005U, an i5-4200U, or even a Braswell N3150 if your setup is similar to mine. I was in the same boat as you: I started with DD-WRT and went to pfSense, mainly for VLAN flexibility.
-
I have been considering a similar box myself; my goals were mainly future-proofing, and the Intel NICs plus the price point are very appealing. Are you running any type of VPN (IPsec etc.)? What packages are you running? You could probably get away with an i3-4005U, an i5-4200U, or even a Braswell N3150 if your setup is similar to mine. I was in the same boat as you: I started with DD-WRT and went to pfSense, mainly for VLAN flexibility.
I will be using a VPN, snort, pfblocker, and possibly squid. I will post a full update once I configure the machine. I just received the MiniPC yesterday - looks like it is made pretty well actually.
-
Machine is built and pfSense is installed!
What performance tests would you like me to run (please provide the commands so I run the correct test)?
Thanks!
-
Hi Paint,
could you please run the simple OpenVPN benchmark referenced here:
https://forum.pfsense.org/index.php?topic=105238.msg616743#msg616743 (Reply #9 message)
Executing the command on my router with a Celeron N3150 I get:
27.41 real  25.62 user  1.77 sys
(3200 / 27.41) = 117 Mbps OpenVPN performance (estimate)
This value matches the result of a real speed test perfectly.
I recently got an upgrade to a 250/100 connection, and I'm considering buying a mini PC like yours if it can sustain this speed through an OpenVPN connection.
Thanks!
-
I would like to know the routing power and speed between two VLANs, if you get it working.
And on top of that, a new speed test like the one you show in your signature. An IPsec test would also be nice to see, but it usually won't be practical, since two VPN endpoints are needed.
If you want to do some tuning for your pfSense box, you could try these:
Processor Main Frequency: 1.8GHz (Turbo 3.0GHz)
Processor Model: Intel i7-4500U
- Please enable PowerD (hiadaptive).
This will scale the CPU frequency from its lowest to its highest step as needed, depending on the load on your network or pfSense firewall.
Hard Drive: Transcend 64GB SATA III 6Gb/s MSA370 mSATA Solid State Drive
- If this drive supports TRIM, enable TRIM support on the pfSense box as well.
RAM: 8GB 1600MHz DDR3L PC3-12800 ECC CL11 1.35V SODIMM
- Please set the mbuf size to 1000000.
You can do this without ending up in a boot loop as long as you have a sufficient amount of RAM; your 8 GB is ideal for that tuning.
And lastly, please create a /boot/loader.conf.local file if that hasn't been done yet, and put the "mbuf size" line there so that it survives all updates/upgrades of your pfSense system from version to version, because those files are completely rewritten!
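A minimal /boot/loader.conf.local implementing just the mbuf advice above might look like this (a sketch; the value is the one suggested in this thread, and it assumes the box has enough RAM to back that many clusters):

```shell
# /boot/loader.conf.local -- read at boot and, unlike /boot/loader.conf,
# not rewritten by pfSense upgrades, so tunables placed here survive updates.
kern.ipc.nmbclusters="1000000"   # mbuf clusters; needs plenty of RAM (8 GB here)
```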
-
Hi Paint,
could you please run the simple OpenVPN benchmark referenced here:
https://forum.pfsense.org/index.php?topic=105238.msg616743#msg616743 (Reply #9 message)
Executing the command on my router with a Celeron N3150 I get:
27.41 real  25.62 user  1.77 sys
(3200 / 27.41) = 117 Mbps OpenVPN performance (estimate)
This value matches the result of a real speed test perfectly.
I recently got an upgrade to a 250/100 connection, and I'm considering buying a mini PC like yours if it can sustain this speed through an OpenVPN connection.
Thanks!
Here is the output:
[2.3.1-RELEASE][root@pfSense.lan]/root: openvpn --genkey --secret /tmp/secret
[2.3.1-RELEASE][root@pfSense.lan]/root: time openvpn --test-crypto --secret /tmp/secret --verb 0 --tun-mtu 20000 --cipher aes-256-cbc
10.682u 0.677s 0:11.36 99.9% 742+177k 0+0io 1pf+0w
[2.3.1-RELEASE][root@pfSense.lan]/root:
(3200 / 11.36) = 281.7 Mbps OpenVPN performance (estimate)
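The estimate arithmetic used in this thread can be reproduced with a one-liner. The 3200 constant comes from the benchmark thread linked earlier (the --test-crypto run pushes roughly 3.2 Gbit through the cipher), so the estimate is simply 3200 divided by the "real" seconds reported by time(1):

```shell
# Derive the OpenVPN throughput estimate from the wall-clock time of the run above.
real_seconds=11.36
est_mbps=$(awk -v t="$real_seconds" 'BEGIN { printf "%.1f", 3200 / t }')
echo "(3200 / ${real_seconds}) = ${est_mbps} Mbps OpenVPN performance (estimate)"
# prints: (3200 / 11.36) = 281.7 Mbps OpenVPN performance (estimate)
```

The same formula with the Celeron N3150's 27.41 s gives the ~117 Mbps figure quoted earlier in the thread.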
-
@BlueKobold:
I would like to know the routing power and speed between two VLANs, if you get it working.
And on top of that, a new speed test like the one you show in your signature. An IPsec test would also be nice to see, but it usually won't be practical, since two VPN endpoints are needed.
If you want to do some tuning for your pfSense box, you could try these:
Processor Main Frequency: 1.8GHz (Turbo 3.0GHz)
Processor Model: Intel i7-4500U
- Please enable PowerD (hiadaptive).
This will scale the CPU frequency from its lowest to its highest step as needed, depending on the load on your network or pfSense firewall.
Hard Drive: Transcend 64GB SATA III 6Gb/s MSA370 mSATA Solid State Drive
- If this drive supports TRIM, enable TRIM support on the pfSense box as well.
RAM: 8GB 1600MHz DDR3L PC3-12800 ECC CL11 1.35V SODIMM
- Please set the mbuf size to 1000000.
You can do this without ending up in a boot loop as long as you have a sufficient amount of RAM; your 8 GB is ideal for that tuning.
And lastly, please create a /boot/loader.conf.local file if that hasn't been done yet, and put the "mbuf size" line there so that it survives all updates/upgrades of your pfSense system from version to version, because those files are completely rewritten!

Got this mostly up and working today. I am going to do some additional tweaks before I release any speed tests, but I can report that my WAN speeds are about the same (I'm capped at 100/100 Mbit anyway).
I tried to enable TRIM via this post: https://forum.pfsense.org/index.php?topic=83272.msg456248#msg456248
Unfortunately, after adding ahci_load to my loader.conf.local and running touch /root/TRIM_set; /etc/rc.reboot, I still do not have TRIM (I don't think it's a big deal though):
[2.3.1-RELEASE][root@pfSense.lan]/root: tunefs -p /
tunefs: POSIX.1e ACLs: (-a) disabled
tunefs: NFSv4 ACLs: (-N) disabled
tunefs: MAC multilabel: (-l) disabled
tunefs: soft updates: (-n) enabled
tunefs: soft update journaling: (-j) enabled
tunefs: gjournal: (-J) disabled
tunefs: trim: (-t) disabled
tunefs: maximum blocks per file in a cylinder group: (-e) 4096
tunefs: average file size: (-f) 16384
tunefs: average number of files in a directory: (-s) 64
tunefs: minimum percentage of free space: (-m) 8%
tunefs: space to hold for metadata blocks: (-k) 6408
tunefs: optimization preference: (-o) time
tunefs: volume label: (-L)
Here is a copy of my /boot/loader.conf.local:
ahci_load="YES"
kern.ipc.nmbclusters="1000000"
legal.intel_ipw.license_ack=1
legal.intel_iwi.license_ack=1
-
Here is the output:
[2.3.1-RELEASE][root@pfSense.lan]/root: openvpn --genkey --secret /tmp/secret
[2.3.1-RELEASE][root@pfSense.lan]/root: time openvpn --test-crypto --secret /tmp/secret --verb 0 --tun-mtu 20000 --cipher aes-256-cbc
10.682u 0.677s 0:11.36 99.9% 742+177k 0+0io 1pf+0w
[2.3.1-RELEASE][root@pfSense.lan]/root:
(3200 / 11.36) = 281.7 Mbps OpenVPN performance (estimate)
Thanks mate!
Now I know that I need to look in this CPU class.
-
What's the CPU usage like during the tests? Is that test anything like iperf, or does it simulate the OpenVPN throughput/bandwidth? Pretty impressive results!! I'm sold.
-
Got this mostly up and working today. I am going to do some additional tweaks before I release any speed tests, but I can report that my WAN speeds are about the same (I'm capped at 100/100 Mbit anyway).
With PowerD (hiadaptive) disabled, the CPU frequency may not scale from low to high as the load requires, so many kinds of tests might not be accurate! Please don't forget this.

I tried to enable TRIM via this post: https://forum.pfsense.org/index.php?topic=83272.msg456248#msg456248
Unfortunately, after adding ahci_load to my loader.conf.local and running touch /root/TRIM_set; /etc/rc.reboot, I still do not have TRIM (I don't think it's a big deal though)
Please follow the procedure shown in that thread/post exactly! It matches your case and works well:
Enable TRIM Support in pfSense

ahci_load="YES"
kern.ipc.nmbclusters="1000000"
legal.intel_ipw.license_ack=1
legal.intel_iwi.license_ack=1
This looks right to me. If you are doing tests now, you should not run out of kernel space or mbufs!
-
Here is the output:
[2.3.1-RELEASE][root@pfSense.lan]/root: openvpn --genkey --secret /tmp/secret
[2.3.1-RELEASE][root@pfSense.lan]/root: time openvpn --test-crypto --secret /tmp/secret --verb 0 --tun-mtu 20000 --cipher aes-256-cbc
10.682u 0.677s 0:11.36 99.9% 742+177k 0+0io 1pf+0w
[2.3.1-RELEASE][root@pfSense.lan]/root:
(3200 / 11.36) = 281.7 Mbps OpenVPN performance (estimate)
Thanks mate!
Now I know that I need to look in this CPU class.

Anytime! Loving this MiniPC so far!
-
What's the CPU usage like during the tests? Is that test anything like iperf, or does it simulate the OpenVPN throughput/bandwidth? Pretty impressive results!! I'm sold.
CPU usage is almost nonexistent (less than 0.1-0.2 on the 1-minute load in top). I will provide a more detailed update once I finish my firewall/traffic shaping/Snort/country-blocking setup.
I still need to do an iperf test, but I believe I will get very close to 1 Gbps over my LAN. Therefore, the CPU is your bottleneck when using a VPN: the previous test shows how fast your CPU can encrypt information and backs into a theoretical max in Mbps.
-
Thanks mate!
Now I know that I need to look in this CPU class.
If you are unsure, money is not a real problem for you, and you want high throughput on the WAN and LAN or over any VPN tunnel, go and buy an Intel Xeon E3-1240 v3 with 8 GB of DDR3 1600MHz RAM and you will get the maximum out of everything! Not cheap, but very effective in every way. You can also save money over a longer time or buy refurbished parts!
I still need to do an iperf test, but I believe I will get very close to 1 Gbps over my LAN. Therefore, the CPU is your bottleneck when using a VPN: the previous test shows how fast your CPU can encrypt information and backs into a theoretical max in Mbps.
Set up a subnet like 192.168.x.x on one LAN interface and another like 172.x.x.x on the second LAN interface, then run an iperf client-to-server test. You can repeat it through the WAN interface by putting a small gigabit switch there and running the iperf server outside the WAN interface.
-
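The inter-VLAN test described above might look like this in practice (a sketch using classic iperf2 syntax; all addresses are placeholders, not from the thread):

```shell
# Run on a host in the first LAN subnet (placeholder address 192.168.1.50):
#   iperf -s
#
# Run on a host in the second subnet (e.g. 172.16.1.0/24), targeting the server,
# so the traffic is routed by the pfSense box between the two networks:
#   iperf -c 192.168.1.50 -t 10 -w 64KB
```

The reported bandwidth is then the routing throughput of the firewall between the two VLANs, not just switch-local speed.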
@BlueKobold:
Thanks mate!
Now I know that I need to look in this CPU class.
If you are unsure, money is not a real problem for you, and you want high throughput on the WAN and LAN or over any VPN tunnel, go and buy an Intel Xeon E3-1240 v3 with 8 GB of DDR3 1600MHz RAM and you will get the maximum out of everything! Not cheap, but very effective in every way. You can also save money over a longer time or buy refurbished parts!
I still need to do an iperf test, but I believe I will get very close to 1 Gbps over my LAN. Therefore, the CPU is your bottleneck when using a VPN: the previous test shows how fast your CPU can encrypt information and backs into a theoretical max in Mbps.
Set up a subnet like 192.168.x.x on one LAN interface and another like 172.x.x.x on the second LAN interface, then run an iperf client-to-server test. You can repeat it through the WAN interface by putting a small gigabit switch there and running the iperf server outside the WAN interface.

I'll do a few different tests for iperf in the next few days. I already have my DHCP server cloning my G1100 MAC and DHCP request so that I can run the FiOS MoCA G1100 Quantum Router in parallel to my pfSense box - this eliminates a double-NAT situation, allows me to use my own router, and keeps all of the FiOS services (Remote DVR, VoD, Caller ID, etc.) without the need for my backend "three router" setup.
Setting up a new VLAN to test a fake WAN will be a piece of cake after that :P
This whole setup only cost me $400 USD + $30 USD for a Dell PowerConnect 2716 Managed Switch from eBay. For the price, I don't think it can be beat!
-
What speed do you get from the squid cache?
Download a file
Test files here
http://mirror.internode.on.net/pub/test/
Then once it is downloaded, try redownloading and check the speed from the squid cache.
-
What speed do you get from the squid cache?
Download a file
Test files here
http://mirror.internode.on.net/pub/test/
Then once it is downloaded, try redownloading and check the speed from the squid cache.

http://mirror.internode.on.net/pub/test/ - this link does not work…
-
Use this for enabling TRIM.
https://gist.github.com/mdouchement/853fbd4185743689f58c
You don't need to enable AHCI by adding ahci_load="YES"… it works for me without it.
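For anyone following along, the usual FreeBSD way to flag a UFS root for TRIM is tunefs from single-user mode; the linked gist is believed to describe the same steps, but check it yourself before relying on this sketch:

```shell
# From single-user mode (tunefs needs / unmounted or read-only):
#   /sbin/tunefs -t enable /
#   reboot
#
# After the reboot, verify the flag took effect:
#   tunefs -p / 2>&1 | grep trim     # should report: trim: (-t) enabled
```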
-
this link does not work….
Must be location blocked - try an Ubuntu ISO or any large file that will be cached.
-
Use this for enabling TRIM.
https://gist.github.com/mdouchement/853fbd4185743689f58c
You don't need to enable AHCI by adding ahci_load="YES"… it works for me without it.
Thanks, that worked:
[2.3.1-RELEASE][root@pfSense.lan]/root: tunefs -p /
tunefs: POSIX.1e ACLs: (-a) disabled
tunefs: NFSv4 ACLs: (-N) disabled
tunefs: MAC multilabel: (-l) disabled
tunefs: soft updates: (-n) enabled
tunefs: soft update journaling: (-j) enabled
tunefs: gjournal: (-J) disabled
tunefs: trim: (-t) enabled
tunefs: maximum blocks per file in a cylinder group: (-e) 4096
tunefs: average file size: (-f) 16384
tunefs: average number of files in a directory: (-s) 64
tunefs: minimum percentage of free space: (-m) 8%
tunefs: space to hold for metadata blocks: (-k) 6408
tunefs: optimization preference: (-o) time
tunefs: volume label: (-L)
I migrated my entire network over to pfSense as the main router, with two APs running DD-WRT. I have done a lot of tweaking, but will finalize some things over the weekend. I hope to then post some performance benchmarks.
Next up: Snort and traffic shaping 8) 8) 8)
-
You don't need to enable AHCI by adding ahci_load="YES"… it works for me without it.
I agree with that; I would recommend dropping this line from the /boot/loader.conf.local file, it is not
really needed for your pfSense machine.
-
@BlueKobold:
You don't need to enable AHCI by adding ahci_load="YES"… it works for me without it.
I agree with that; I would recommend dropping this line from the /boot/loader.conf.local file, it is not really needed for your pfSense machine.

I don't use ahci_load="YES" in my /boot/loader.conf.local file.
I have made many System Tunable changes and loader.conf.local changes. Below are my /boot/loader.conf.local changes:
legal.intel_ipw.license_ack=1
legal.intel_iwi.license_ack=1
aio_load="YES"
pf_load="YES"
pflog_load="YES"
if_em_load="YES"
hw.em.rxd=4096
hw.em.txd=4096
#ahci_load="YES"
cc_htcp_load="YES"
net.inet.tcp.hostcache.cachelimit="0"
hw.em.num_queues="2"
kern.ipc.nmbclusters="1000000"
-
Why do you need traffic shaping on a 100 Mbit line?
-
Why do you need traffic shaping on a 100 Mbit line?
QoS for bufferbloat? Would you suggest otherwise?
-
This whole setup only cost me $400 USD + $30 USD for a Dell PowerConnect 2716 Managed Switch from eBay. For the price, I don't think it can be beat!
Please tell me that switch is fanless. If it is and has the regular Dell CLI, I want one now.
-
This whole setup only cost me $400 USD + $30 USD for a Dell PowerConnect 2716 Managed Switch from eBay. For the price, I don't think it can be beat!
Please tell me that switch is fanless. If it is and has the regular Dell CLI, I want one now.
It is fanless, but unfortunately it only has WebGUI configuration - no CLI.
-
- Impersonation of G1100 FIOS DHCP Packet (updated instructions for the FiOS Quantum Gateway coming soon)
- he.net IPv6 Tunnel
- Snort
- pfBlockerNG + DNSBL
- Traffic Shaper (CODELQ)
- ntopng

iperf -c 192.168.1.1 -w 64KB
------------------------------------------------------------
Client connecting to 192.168.1.1, TCP port 5001
TCP window size: 64.0 KByte
------------------------------------------------------------
[ 3] local 192.168.1.50 port 8911 connected with 192.168.1.1 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 1.11 GBytes 949 Mbits/sec
-
Speed from the squid cache? Also, did you set up pfSense to act as your DNS server? Here is a video on it: https://m.youtube.com/watch?v=s3VXLIXGazM
-
Speed from the squid cache? Also, did you set up pfSense to act as your DNS server? Here is a video on it: https://m.youtube.com/watch?v=s3VXLIXGazM
Yes, I am using Unbound as my DNS server.
I have not had a chance to set up squid yet - I will let you know if I do.
-
Did you increase your DNS cache and find the fastest DNS servers in your area?
-
Did you increase your DNS cache and find the fastest DNS servers in your area?
Yea, I went through all of those settings. Thanks!
-
I had some issues where the em0 or em1 driver would stop responding with the following error:
em0: Watchdog timeout Queue[0]-- resetting
I was able to resolve the issue by allowing for IRQ (interrupt) sharing between processors (see net.isr.* commands at the bottom of the loader.conf.local file).
Below are some other tweaks that I have set up. Please let me know if you have any other suggestions:
/boot/loader.conf.local:
#Redirect Console to UART 2
comconsole_port="0x3E0"
hint.uart.2.flags="0x10"
#Redirect Console to UART 1
#comconsole_port="0x2F8"
#hint.uart.0.flags="0x0"
#hint.uart.1.flags="0x10"
#hint.atrtc.0.clock="0"
hw.acpi.cpu.cx_lowest="Cmax"
kern.ipc.nmbclusters="1000000"
#kern.ipc.nmbjumbop="524288"
hw.pci.do_power_suspend="0"
hw.pci.do_power_nodriver="3"
#hw.pci.do_power_nodriver="0"
hw.pci.realloc_bars="1"
hint.ral.0.disabled="1"
#hint.agp.0.disabled="1"
kern.ipc.somaxconn="16384"
kern.ipc.soacceptqueue="16384"
legal.intel_ipw.license_ack=1
legal.intel_iwi.license_ack=1
#
# Advanced Host Controller Interface (AHCI)
#hint.acpi.0.disabled="1"
#ahci_load="YES"
# H-TCP Congestion Control for a more aggressive increase in speed on higher
# latency, high bandwidth networks with some packet loss.
cc_htcp_load="YES"
#
#hw.em.rxd="1024"
#hw.em.txd="1024"
#hw.em.rxd="2048"
#hw.em.txd="2048"
hw.em.rxd="4096"
hw.em.txd="4096"
hw.igb.rxd="4096"
hw.igb.txd="4096"
#hw.igb.rxd="1024"
#hw.igb.txd="1024"
# Intel igb(4): FreeBSD limits the number of received packets a network
# card can process to 100 packets per interrupt cycle. This limit is in place
# because of inefficiencies in IRQ sharing when the network card is using the
# same IRQ as another device. When the Intel network card is assigned a unique
# IRQ (dmesg) and MSI-X is enabled through the driver (hw.igb.enable_msix=1)
# then interrupt scheduling is significantly more efficient and the NIC can be
# allowed to process packets as fast as they are received. A value of "-1"
# means unlimited packet processing and sets the same value to
# dev.igb.0.rx_processing_limit and dev.igb.1.rx_processing_limit.
hw.igb.rx_process_limit="-1" # (default 100)
hw.em.rx_process_limit="-1"
#hw.em.rx_process_limit="400"
#
# Intel em: The Intel i350-T2 dual port NIC supports up to eight(8)
# input/output queues per network port. A single CPU core can theoretically
# forward 700K packets per second (pps) and a gigabit interface can
# theoretically forward 1.488M packets per second (pps). Testing has shown a
# server can most efficiently process the number of network queues equal to the
# total number of CPU cores in the machine. For example, a firewall with
# four(4) CPU cores and an i350-T2 dual port NIC should use two(2) queues per
# network port for a total of four(4) network queues which correlate to four(4)
# CPU cores. A server with four(4) CPU cores and a single network port should
# use four(4) network queues. Query total interrupts per queue with "vmstat -i"
# and use "top -H -S" to watch CPU usage per igb0:que. MSI-X interrupts
# start at 256 and the igb driver uses one vector per queue known as a TX/RX
# pair. The default hw.igb.num_queues value of zero(0) sets the number of
# network queues equal to the number of logical CPU cores per network port.
# Disable hyper threading, as HT logical cores should not be used in routing:
# hyper threading, also known as simultaneous multithreading (SMT), can lead to
# unpredictable latency spikes.
hw.em.max_interrupt_rate="32000"
hw.igb.max_interrupt_rate="32000" # (default 8000)
#hw.em.max_interrupt_rate="8000"
hw.igb.enable_aim="1" # (default 1)
hw.igb.enable_msix="1" # (default 1)
#
hw.pci.enable_msix="1"
hw.pci.enable_msi="1"
#hw.em.msix="0"
hw.em.msix="1"
#hw.em.enable_msix="0"
hw.em.enable_msix="1"
#hw.em.msix_queues="2"
#hw.em.num_queues="2"
hw.em.num_queues="0"
#hw.igb.num_queues="0"
hw.igb.num_queues="2"
net.inet.tcp.tso="0"
hw.em.smart_pwr_down="0"
hw.em.sbp="0"
hw.em.eee_setting="0"
#hw.em.eee_setting="1"
#hw.em.fc_setting="3"
hw.em.fc_setting="0"
#
hw.em.rx_int_delay="0"
hw.em.tx_int_delay="0"
hw.em.rx_abs_int_delay="0"
hw.em.tx_abs_int_delay="0"
#
#hw.em.rx_abs_int_delay="1024"
#hw.em.tx_abs_int_delay="1024"
#hw.em.tx_int_delay="128"
#hw.em.rx_int_delay="100"
#hw.em.tx_int_delay="64"
#
# "sysctl net.inet.tcp.hostcache.list"
net.inet.tcp.hostcache.cachelimit="0"
#
#net.inet.tcp.tcbhashsize="2097152"
#
net.link.ifqmaxlen="8192" # (default 50)
#
# For high bandwidth systems setting bindthreads to "0" will spread the
# network processing load over multiple cpus allowing the system to handle more
# throughput. The default is faster for most lightly loaded systems (default 0)
#net.isr.bindthreads="0"
net.isr.bindthreads="1"
# qlimit for igmp, arp, ether and ip6 queues only (netstat -Q) (default 256)
#net.isr.defaultqlimit="2048"
net.isr.defaultqlimit="4096"
# interrupt handling via multiple CPU (default direct)
net.isr.dispatch="direct"
#net.isr.dispatch="hybrid"
# limit per-workstream queues (use "netstat -Q"; if Qdrop is greater than 0
# increase this directive) (default 10240)
net.isr.maxqlimit="10240"
# Max number of threads for NIC IRQ balancing: 3 for 4 cores in a box leaves at
# least one core (default 1) for system or service processing. Again, if you
# notice one cpu being overloaded due to network processing this directive will
# spread out the load at the cost of cpu affinity unbinding. The default of "1"
# is faster if a single core is not already overloaded.
#net.isr.maxthreads="2"
#net.isr.maxthreads="3"
#net.isr.maxthreads="4"
net.isr.maxthreads="-1"
/etc/sysctl.conf (System Tunables)
| Tunable Name | Description | Value | Modified |
| net.inet.ip.forwarding | (default 0) | 1 | Yes |
| net.inet.ip.fastforwarding | (default 0) | 1 | Yes |
| net.inet.tcp.mssdflt | (default 536) | 1460 | Yes |
| net.inet.tcp.minmss | (default 216) | 536 | Yes |
| net.inet.tcp.syncache.rexmtlimit | (default 3) | 0 | Yes |
| net.inet.ip.maxfragpackets | (default 13687) | 0 | Yes |
| net.inet.ip.maxfragsperpacket | (default 16) | 0 | Yes |
| net.inet.tcp.abc_l_var | (default 2) | 44 | Yes |
| net.inet.ip.rtexpire | (default 3600) | 10 | Yes |
| net.inet.tcp.syncookies | (default 1) | 0 | Yes |
| net.inet.tcp.tso | Enable TCP Segmentation Offload | 0 | Yes |
| hw.kbd.keymap_restrict_change | Disallow keymap changes for non-privileged users | 4 | Yes |
| kern.msgbuf_show_timestamp | display timestamp in msgbuf (default 0) | 1 | Yes |
| kern.randompid | Random PID modulus | 702 | Yes |
| net.inet.icmp.drop_redirect | no redirected ICMP packets (default 0) | 1 | Yes |
| net.inet.ip.check_interface | verify packet arrives on correct interface (default 0) | 1 | Yes |
| net.inet.ip.process_options | ignore IP options in the incoming packets (default 1) | 0 | Yes |
| net.inet.ip.redirect | Enable sending IP redirects | 0 | Yes |
| net.inet.tcp.always_keepalive | disable tcp keep alive detection for dead peers, keepalive can be spoofed (default 1) | 0 | Yes |
| net.inet.tcp.icmp_may_rst | icmp may not send RST to avoid spoofed icmp/udp floods (default 1) | 0 | Yes |
| net.inet.tcp.msl | Maximum Segment Lifetime a TCP segment can exist on the network, 2*MSL (default 30000, 60 sec) | 5000 | Yes |
| net.inet.tcp.nolocaltimewait | remove TIME_WAIT states for the loopback interface (default 0) | 1 | Yes |
| net.inet.tcp.path_mtu_discovery | disable MTU discovery since many hosts drop ICMP type 3 packets (default 1) | 0 | Yes |
| net.inet.tcp.sendbuf_max | (default 2097152) | 4194304 | Yes |
| net.inet.tcp.recvbuf_max | (default 2097152) | 4194304 | Yes |
| vfs.read_max | Cluster read-ahead max block count (Default 32) | 128 | Yes |
| net.link.ether.inet.allow_multicast | Allow Windows Network Load Balancing and Open Mesh access points Multicast RFC 1812 | 1 | Yes |
| hw.intr_storm_threshold | (default 1000) | 10000 | Yes |
| hw.pci.do_power_suspend | (default 1) | 0 | Yes |
| hw.pci.do_power_nodriver | (default 0) | 3 | Yes |
| hw.pci.realloc_bars | (default 0) | 1 | Yes |
| net.inet.tcp.delayed_ack | Delay ACK to try and piggyback it onto a data packet | 3 | Yes |
| net.inet.tcp.delacktime | (default 100) | 20 | Yes |
| net.inet.tcp.sendbuf_inc | (default 8192) | 32768 | Yes |
| net.inet.tcp.recvbuf_inc | (default 16384) | 65536 | Yes |
| net.inet.tcp.fast_finwait2_recycle | (default 0) | 1 | Yes |
| kern.ipc.soacceptqueue | (default 128 ; same as kern.ipc.somaxconn) | 16384 | Yes |
| kern.ipc.maxsockbuf | Maximum socket buffer size (default 4262144) | 16777216 | Yes |
| net.inet.tcp.cc.algorithm | (default newreno) | htcp | Yes |
| net.inet.tcp.cc.htcp.adaptive_backoff | (default 0 ; disabled) | 1 | Yes |
| net.inet.tcp.cc.htcp.rtt_scaling | (default 0 ; disabled) | 1 | Yes |
| kern.threads.max_threads_per_proc | (default 1500) | 1500 | Yes |
| dev.em.0.fc | (default 3) | 0 | Yes |
| dev.em.1.fc | (default 3) | 0 | Yes |
| hw.acpi.cpu.cx_lowest | | Cmax | Yes |
| kern.sched.interact | (default 30) | 5 | Yes |
| kern.sched.slice | (default 12) | 3 | Yes |
| kern.random.sys.harvest.ethernet | Harvest NIC entropy | 1 | Yes |
| kern.random.sys.harvest.interrupt | Harvest IRQ entropy | 1 | Yes |
| kern.random.sys.harvest.point_to_point | Harvest serial net entropy | 1 | Yes |
| kern.sigqueue.max_pending_per_proc | (default 128) | 256 | Yes |
| net.inet6.ip6.redirect | (default 1) | 0 | Yes |
| net.inet.tcp.v6mssdflt | (default 1220) | 1440 | Yes |
| net.inet6.icmp6.rediraccept | (default 1) | 0 | Yes |
| net.inet6.icmp6.nodeinfo | (default 3) | 0 | Yes |
| net.inet6.ip6.forwarding | (default 1) | 1 | Yes |
| dev.igb.0.fc | (default 3) | 0 | Yes |
| dev.igb.1.fc | (default 3) | 0 | Yes |
| net.inet.ip.portrange.first | | 1024 | No (Default) |
| net.inet.tcp.blackhole | Do not send RST on segments to closed ports | 2 | No (Default) |
| net.inet.udp.blackhole | Do not send port unreachables for refused connects | 1 | No (Default) |
| net.inet.ip.random_id | Assign random ip_id values | 1 | No (Default) |
| net.inet.tcp.drop_synfin | Drop TCP packets with SYN+FIN set | 1 | No (Default) |
| net.inet6.ip6.use_tempaddr | | 0 | No (Default) |
| net.inet6.ip6.prefer_tempaddr | | 0 | No (Default) |
| net.inet.tcp.recvspace | Initial receive socket buffer size | 65228 | No (Default) |
| net.inet.tcp.sendspace | Initial send socket buffer size | 65228 | No (Default) |
| net.inet.udp.maxdgram | Maximum outgoing UDP datagram size | 57344 | No (Default) |
| net.link.bridge.pfil_onlyip | Only pass IP packets when pfil is enabled | 0 | No (Default) |
| net.link.bridge.pfil_member | Packet filter on the member interface | 1 | No (Default) |
| net.link.bridge.pfil_bridge | Packet filter on the bridge interface | 0 | No (Default) |
| net.link.tap.user_open | Allow user to open /dev/tap (based on node permissions) | 1 | No (Default) |
| net.inet.ip.intr_queue_maxlen | Maximum size of the IP input queue | 1000 | No (Default) |
| hw.syscons.kbd_reboot | enable keyboard reboot | 0 | No (Default) |
| net.inet.tcp.log_debug | Log errors caused by incoming TCP segments | 0 | No (Default) |
| net.inet.icmp.icmplim | Maximum number of ICMP responses per second | 0 | No (Default) |
| net.route.netisr_maxqlen | maximum routing socket dispatch queue length | 1024 | No (Default) |
| net.inet.udp.checksum | compute udp checksum | 1 | No (Default) |
| net.inet.icmp.reply_from_interface | ICMP reply from incoming interface for non-local packets | 1 | No (Default) |
| net.inet6.ip6.rfc6204w3 | Accept the default router list from ICMPv6 RA messages even when packet forwarding enabled. | 1 | No (Default) |
| net.enc.out.ipsec_bpf_mask | IPsec output bpf mask | 0x0001 | No (Default) |
| net.enc.out.ipsec_filter_mask | IPsec output firewall filter mask | 0x0001 | No (Default) |
| net.enc.in.ipsec_bpf_mask | IPsec input bpf mask | 0x0002 | No (Default) |
| net.enc.in.ipsec_filter_mask | IPsec input firewall filter mask | 0x0002 | No (Default) |
| net.key.preferred_oldsa | | 0 | No (Default) |
| net.inet.carp.senderr_demotion_factor | Send error demotion factor adjustment | 0 (0) | No (Default) |
| net.pfsync.carp_demotion_factor | pfsync's CARP demotion factor adjustment | 0 (0) | No (Default) |
| net.raw.recvspace | Default raw socket receive space | 65536 | No (Default) |
| net.raw.sendspace | Default raw socket send space | 65536 | No (Default) |
| net.inet.raw.recvspace | Maximum space for incoming raw IP datagrams | 131072 | No (Default) |
| net.inet.raw.maxdgram | Maximum outgoing raw IP datagram size | 131072 | No (Default) |
| kern.corefile | Process corefile name format string | /root/%N.core | No (Default) |
-
I was still getting the Watchdog Queue Timeout on the em0 driver, until I got an error stating that the kernel had hit the Maximum Fragment Entries limit in the firewall.
I tweaked the Firewall Maximum Fragment Entries, Firewall Maximum Table Entries, and Firewall Maximum States in System->Advanced->Firewall & NAT to larger values and I haven't had a freeze yet!
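A quick way to sanity-check those limits after raising them is pfctl, which ships with pf on pfSense (a sketch; run from the pfSense shell as root):

```shell
# Show pf's hard limits (states, frags, src-nodes, table-entries):
#   pfctl -sm
#
# Show current state-table usage to compare against the configured maximum:
#   pfctl -si | grep -i state
```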
-
Hi,
What was the cost of the PC & what sort of wattage is being used?
Thanks,
Rich