What is the biggest attack in Gbps you have stopped?
-
Interrupts are almost entirely driven by packets per second; they are how the hardware talks to the OS. Complete guess, but I would assume (assumptions can be dangerous) that interrupts would not be high because of anything the firewall is doing, only because of the sheer volume of packets.
-
…
If the issue involves states, a good test would be to try a few extreme combinations. One million max states with a target of 10k should never get much past 10k, and shouldn't hit the max state limit. But what if the back-off rate is still too low?
Something else to test is flushing states during a storm, via Firewall Adaptive Timeouts:
Leave adaptive.start at the default (60%), but set adaptive.end to 101% of your maxstates (instead of the default 120%).
As you can calculate, the final 5% (>95% of maxstates) then flushes far more aggressively than the default setting. Hypothesis: if adaptation gets stronger as the state limit approaches, then pfSense hardly chokes. True || False?
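The effect can be sanity-checked with a quick calculation; a sketch of pf's linear adaptive scaling as described in pf.conf(5) (maxstates of 1 million is an assumed example):

```python
# Sketch of pf's adaptive timeout scaling per pf.conf(5): between
# adaptive.start and adaptive.end, state timeouts scale linearly down
# to zero. maxstates = 1,000,000 is an assumed example value.
def adaptive_scale(states, start, end):
    """Fraction of the normal timeout applied at a given state count."""
    if states <= start:
        return 1.0
    if states >= end:
        return 0.0
    return (end - states) / (end - start)

maxstates = 1_000_000
start = int(maxstates * 0.60)        # default adaptive.start: 60% of limit
end_default = int(maxstates * 1.20)  # default adaptive.end: 120% of limit
end_tight = int(maxstates * 1.01)    # the 101% suggestion above

# At 95% of maxstates, timeouts shrink far harder with the tighter end:
print(round(adaptive_scale(950_000, start, end_default), 3))  # 0.417
print(round(adaptive_scale(950_000, start, end_tight), 3))    # 0.146
```

So with adaptive.end at 101%, states near the limit keep only ~15% of their timeout instead of ~42%, which is the stronger flushing the hypothesis asks for.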
-
Interrupts are almost entirely driven by packets per second; they are how the hardware talks to the OS. Complete guess, but I would assume (assumptions can be dangerous) that interrupts would not be high because of anything the firewall is doing, only because of the sheer volume of packets.
https://www.freebsd.org/doc/en_US.ISO8859-1/books/arch-handbook/smp-design.html
" FreeBSD deals with interrupt handlers by giving them their own thread context. Providing a context for interrupt handlers allows them to block on locks. To help avoid latency, however, interrupt threads run at real-time kernel priority. Thus, interrupt handlers should not execute for very long to avoid starving other kernel threads. In addition, since multiple handlers may share an interrupt thread, interrupt handlers should not sleep or use a sleepable lock to avoid starving another interrupt handler."
Hardware interrupts differ from software interrupts in that hardware interrupts are treated as more important than most, but not all, software interrupts.
https://www.freebsd.org/cgi/man.cgi?query=swi&apropos=0&sektion=9
"These functions are used to register and schedule software interrupt handlers. Software interrupt handlers are attached to a software interrupt thread, just as hardware interrupt handlers are attached to a hardware interrupt thread. Multiple handlers can be attached to the same thread. Software interrupt handlers can be used to queue up less critical processing inside of hardware interrupt handlers so that the work can be done at a later time. Software interrupt threads are different from other kernel threads in that they are treated as an interrupt thread. This means that time spent executing these threads is counted as interrupt time, and that they can be run via a lightweight context switch."
Windows is not immune to them either; it's all in the drivers to a certain extent. https://msdn.microsoft.com/en-us/library/windows/hardware/ff540586%28v=vs.85%29.aspx
So in some respects, any basic NIC with little or no processing capability will rely more on the OS to do the packet processing. Because computers are just glorified clockwork machines, the OS is less likely to get out of shape with a basic NIC: everything just runs like clockwork, ignoring the electrons at the socket that the OS is physically incapable of processing while it's tied up elsewhere. A smart NIC, with its various built-in processing capabilities, makes the packet processing quicker but can then cause a flood upstream in the OS itself. It's like CPU caches (L1, L2 & L3): they can be a boon or a hindrance in certain circumstances as well.
With that in mind, some basic cheap NICs (e.g. Realtek) might actually be less hassle than, say, an Intel NIC when solving this sort of problem, which would explain why some expensive hardware in amateur hands can cause embarrassment after customers are advised to go with the latest and greatest. The trend is generally towards ever more sophisticated attack attempts, so this sort of thing will only become more common; and with the trend of employing young talent straight out of university, experience is lost and the cycle repeats. Consider the timescales of things like SYN floods (1990s), rootkits (1990s) and interrupt storms (1990s), all of which were seen in the DOS days before life became hidden behind GUIs.
-
https://doc.pfsense.org/index.php/Tuning_and_Troubleshooting_Network_Cards#Intel_ix.284.29_Cards
"On releases prior to pfSense 2.2, the following may be necessary. If using VLANs with Intel 10 Gb ix(4) cards, some features of the driver for VLANs may need to be disabled to work correctly. For instance, to apply these settings on NIC ix0, run the following. "
I wonder if it's worth increasing hw.intr_storm_threshold=10000 to something higher?
This is for FreeBSD 10.1: https://calomel.org/freebsd_network_tuning.html
"For 10gig NICs, set to 9000 and use a large MTU (default 1000):
#hw.intr_storm_threshold="9000""
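If testing that, the tunable can go in a loader file; a sketch (filename and value are illustrative, not a recommendation):

```
# /boot/loader.conf.local — raise the threshold at which an interrupt
# source is treated as a storm and throttled (value is an example only)
hw.intr_storm_threshold=9000
```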
-
System idle, 208 interrupts per second for the one queue that seems to process ICMP from my desktop. Pinging the interface with 67.3k/sec ICMP packets, 250 interrupts per second.
Packets: sent=1422309, rcvd=1422309, error=0, lost=0 (0.0% loss) in 21.131114 sec
RTTs in ms: min/avg/max/dev: 0.003 / 0.176 / 20.716 / 0.294
Bandwidth in kbytes/sec: sent=4038.525, rcvd=4038.525
33Mb/s of 64-byte ICMP packets barely made a dent.
An increase of 12% CPU time (48% CPU time on the core the queue is on) and a 19% increase in interrupts doesn't seem that bad for that many packets.
From a PPS and interrupt view, my system really doesn't care. Whatever the attack that was done before, 30Mb/s was bad enough that my admin interface went offline.
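A back-of-envelope check of those figures (attributing the batching to interrupt moderation on the i350 is my inference):

```python
# Rough arithmetic on the ping-flood numbers quoted above.
sent = 1_422_309           # packets sent/received
secs = 21.131114           # test duration in seconds
kbytes_per_sec = 4038.525  # reported bandwidth

pps = sent / secs
print(round(pps))                        # 67309 packets/sec, matching 67.3k/sec
print(round(kbytes_per_sec * 8 / 1000))  # 32 Mb/s, the "33Mb/s" above
print(round(pps / 250))                  # ~269 packets handled per interrupt
```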
-
I just realized that when ping flooding my firewall, the only thing that changed was the IRQ CPU time. I looked up ICMP and it seems the ICMP response is built right into the network stack. From what I can tell, the ICMP responses are being handled on that same realtime kernel thread. If this is the case and my admin interface stopped responding to pings because of load, that means the realtime thread on a different interface couldn't even run.
One of the features of MSI-X is interrupt masks. When a "hard" interrupt occurs, the current context thread gets interrupted and the realtime thread gets scheduled. That thread can then set a mask and block all other interrupts while it processes all of its data. Of course, blocking interrupts is bad if you don't have a way to indicate there is new work, so there are "soft" interrupts. If the hardware supports it, it can flag a shared memory location when new data is ready. When the current thread is done processing its current data, it can do one last check to see if the soft interrupt was signaled. If not, unmask the interrupts and return. If it was flagged, continue processing until all the work is done and no more soft interrupts have been signaled.
It may be possible that the WAN interface is in a constant state of backlog, so the realtime kernel thread never unschedules, starving my admin interface of CPU time.
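The mask/soft-interrupt loop described above can be sketched as a toy model; every name and the "hardware" behaviour here are illustrative, not a real driver:

```python
# Toy model of the MSI-X mask / soft-interrupt pattern: the hard IRQ
# masks the line, drains the ring, re-checks a hardware-set flag once
# more, and only then re-arms interrupts.
class ToyNic:
    def __init__(self):
        self.rx_queue = 0       # packets waiting in the RX ring
        self.soft_flag = False  # "new data" flag set while masked
        self.masked = False
        self.late_burst = 40    # packets that arrive mid-processing

    def hw_deliver(self, n):
        self.rx_queue += n
        if self.masked:
            self.soft_flag = True  # can't fire a hard IRQ: raise the flag

    def process_ring(self):
        done = 0
        while self.rx_queue:
            self.rx_queue -= 1
            done += 1
            if self.late_burst and done == 10:  # traffic keeps arriving
                self.hw_deliver(self.late_burst)
                self.late_burst = 0
        return done

    def hard_irq(self):
        self.masked = True          # mask further hard interrupts
        total = 0
        while True:
            self.soft_flag = False
            total += self.process_ring()
            if not self.soft_flag:  # one last check before re-arming
                break
        self.masked = False
        return total

nic = ToyNic()
nic.hw_deliver(25)     # initial burst fires the hard interrupt
print(nic.hard_irq())  # 65: both bursts drained under a single interrupt
```

Under a constant flood that inner loop never exits, which is exactly the starvation scenario suggested above.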
With that in mind, some basic cheap NICs (e.g. Realtek) might actually be less hassle than, say, an Intel NIC when solving this sort of problem.
With a minimum ping of 0.003ms and an average of 0.176ms, yet only ~250 interrupts per second, the i350 NIC is doing some nifty magic.
-
I will test with some offloading and other tunables in pfsense later when online again.
It's SYN packets that are spoofed, and of various sizes. Most of them don't have the ACK the FW needs, so the states remain open.
It's easy to fend off an ICMP flood since it's predictable traffic. In my case, when one core hits 100% the FW goes offline and packet loss occurs. I can't see what that specific CPU is doing, and it would be interesting to dig deeper into that and into which process consumes the CPU. It doesn't happen when there is no port forward enabled, but as soon as it routes, it goes ballistic.
Why can one core hold back everything else??
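One pf-level mitigation worth testing against exactly this pattern (spoofed SYNs that never ACK) is synproxy, where pf completes the three-way handshake itself before creating a state. A pf.conf sketch; the interface and port are illustrative:

```
# pf.conf sketch: pf answers the SYN itself and only inserts a state once
# the client completes the handshake, so spoofed SYNs don't pile up states.
pass in on em0 proto tcp to port 80 flags S/SA synproxy state
```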
-
procstat -ka from the command prompt during a SYN attack.
No packet loss this time. No port forward.
-
Same but with port forward.
-
I will test with some offloading and other tunables in pfsense later when online again.
It's SYN packets that are spoofed, and of various sizes. Most of them don't have the ACK the FW needs, so the states remain open.
It's easy to fend off an ICMP flood since it's predictable traffic. In my case, when one core hits 100% the FW goes offline and packet loss occurs. I can't see what that specific CPU is doing, and it would be interesting to dig deeper into that and into which process consumes the CPU. It doesn't happen when there is no port forward enabled, but as soon as it routes, it goes ballistic.
Why can 1 core keep back everything else??
Software was always written for single-core CPUs; programmers and designers never thought we'd get multicore CPUs in this timeframe, or at as little cost as we have, so just like the Y2K bug there was no planning for the future.
Now, a multi-core CPU still has to share resources, like L2 cache, hard disks and RAM. You can't have two cores working on a shared resource at the same time or you get a deadlock, so the programmer needs to make decisions about what is acceptable to offload to another core and what is not.
If you want speed, keep as much as possible in a tight loop on one core; if you want it to multitask at the expense of speed, offload more work to other cores, knowing that too much swapping between cores increases the lock time. In the extreme, the lock time can exceed the processing time, and then nothing is achieved.
Although this is framed from a software perspective, the points about multithreading are still relevant at the hardware level; it's just a different level of abstraction.
https://forum.pfsense.org/index.php?topic=91856.msg517843#msg517843
-
Yes… but why can other firewalls keep up with the traffic and pf can't?
If hardware were the limit here, then everything tested should suffer the same fate. They don't...
Fortigate's VMware appliance and Mikrotik handle the same traffic with no issues. Tell me why.... on the same resources and in the same hypervisor.
-
I just realized that when ping flooding my firewall, the only thing that changed was the IRQ CPU time. I looked up ICMP and it seems the ICMP response is built right into the network stack. From what I can tell, the ICMP responses are being handled on that same realtime kernel thread. If this is the case and my admin interface stopped responding to pings because of load, that means the realtime thread on a different interface couldn't even run.
One of the features of MSI-X is interrupt masks. When a "hard" interrupt occurs, the current context thread gets interrupted and the realtime thread gets scheduled. That thread can then set a mask and block all other interrupts while it processes all of its data. Of course, blocking interrupts is bad if you don't have a way to indicate there is new work, so there are "soft" interrupts. If the hardware supports it, it can flag a shared memory location when new data is ready. When the current thread is done processing its current data, it can do one last check to see if the soft interrupt was signaled. If not, unmask the interrupts and return. If it was flagged, continue processing until all the work is done and no more soft interrupts have been signaled.
It may be possible that the WAN interface is in a constant state of backlog, so the realtime kernel thread never unschedules, starving my admin interface of CPU time.
With that in mind, some basic cheap NICs (e.g. Realtek) might actually be less hassle than, say, an Intel NIC when solving this sort of problem.
With a minimum ping of 0.003ms and an average of 0.176ms, yet only ~250 interrupts per second, the i350 NIC is doing some nifty magic.
Having some devices do some of the work can be useful, but I suspect it's also helping create the problem being seen here.
http://en.wikipedia.org/wiki/Message_Signaled_Interrupts#MSI-X
https://forums.freebsd.org/threads/msi-msi-x-on-intel-em-nic.27736/
http://people.freebsd.org/~jhb/papers/bsdcan/2007/article/node8.html
Might be useful, as it looks at possible areas for MSI-X failures.
http://comments.gmane.org/gmane.os.freebsd.stable/71699
http://christopher-technicalmusings.blogspot.co.uk/2012/12/passthrough-pcie-devices-from-esxi-to.html
-
Yes… but why can other firewalls keep up with the traffic and pf can't?
Out of the box or have they been tuned?
If hardware were the limit here, then everything tested should suffer the same fate. They don't…
Fortigate's VMware appliance and Mikrotik handle the same traffic with no issues. Tell me why.... on the same resources and in the same hypervisor.
Tuning?
What makes pfSense's default settings suitable for your datacentre setup compared to my home-use setup?
It's like tuning an F1 race car: it's not going to do well in a World Rally Championship setting, on snow, across deserts or in forests; likewise, a rally car is not going to do well on a race track against F1 cars, is it?
-
Out of the box.
Tuning, no… It should default to handling the traffic it receives, and shouldn't fail on 3 Mbit/s of traffic.
You can't compare an F1 car to a WRC car. It's like comparing apples to bananas.
The only thing they have in common is that they are fruits :D
Same with firewalls: they handle traffic and block it if needed. Default behaviour. EOD.
-
A little weird thing I noticed…
I tried running the VM with an odd number of cores.
It didn't change a thing, but the behaviour did: two of the cores are switching between 0% and 100% after the attack was stopped, but neither of them is at 0% when the other one is...
Anybody care to explain why?
I haven't seen this before, and it's like it won't let go.
-
Look at the code in the OS that determines what to assign to the available cores.
The OS might be "smart" enough to work out which cores are already under load and also under load for long periods of time, and thus it might assign other cores to be used.
This is Windows thread scheduling: https://msdn.microsoft.com/en-us/library/ms685100%28VS.85%29.aspx
FreeBSD:
https://calomel.org/freebsd_network_tuning.html
Things you need to look at include processor affinity & thread scheduling, as mentioned in the link above and shown in part below.
"######################################### net.isr. tuning begin ##############
NOTE regarding "net.isr.*" : Processor affinity can effectively reduce cache
problems but it does not curb the persistent load-balancing problem.[1]
Processor affinity becomes more complicated in systems with non-uniform
architectures. A system with two dual-core hyper-threaded CPUs presents a
challenge to a scheduling algorithm. There is complete affinity between two
virtual CPUs implemented on the same core via hyper-threading, partial
affinity between two cores on the same physical chip (as the cores share
some, but not all, cache), and no affinity between separate physical chips.
It is possible that net.isr.bindthreads="0" and net.isr.maxthreads="3" can
cause more slowdown if your system is not cpu loaded already. We highly
recommend getting a more efficient network card instead of setting the
"net.isr.*" options. Look at the Intel i350 for gigabit or the Myricom
10G-PCIE2-8C2-2S for 10gig. These cards will reduce the machines nic
processing to 12% or lower."
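For concreteness, a sketch of those net.isr knobs as tunables; the values are illustrative (assuming a 4-core box) and, per the quote above, may make things worse on a lightly loaded system:

```
# /boot/loader.conf.local — illustrative values only, test on your own hardware
net.isr.bindthreads=1      # pin netisr threads to CPUs
net.isr.maxthreads=4       # one netisr thread per core (4-core example)
net.isr.dispatch=deferred  # queue to netisr threads instead of direct dispatch
```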
This might also be useful albeit for an earlier version of freebsd.
http://www.icir.org/gregor/tools/pthread-scheduling.html
-
Out of the box.
Tuning, no… It should default to handling the traffic it receives, and shouldn't fail on 3 Mbit/s of traffic.
You can't compare an F1 car to a WRC car. It's like comparing apples to bananas.
The only thing they have in common is that they are fruits :D
Same with firewalls: they handle traffic and block it if needed. Default behaviour. EOD.
I disagree, but that's because we don't know whether the other products have additional out-of-the-box code that lets them adapt to changing requirements, which may not be built into pfSense.
Can you provide examples of firewalls that do work out of the box, and can you show me what is different from pfSense that makes them work out of the box?
The phrase "you get what you pay for" springs to mind at the moment. :)
-
Mikrotik, Fortigate, ISA Server and Windows Firewall.
Nothing else we have tested passed.
-
FreeBSD is getting more SMP love for its network stack in 11. Each major release seems to have better core scaling for IO in general. There are some major plans to allow the network stack to both receive and send flows stickied to a single core and have flows randomly distributed among the cores.
One thing I do not know about the SMP work is NAT. The NAT implementation has been single-threaded for a while. It may get a rewrite once some of the new SMP network-stack APIs are finalized. Or just go IPv6.
-
Mikrotik, Fortigate, ISA Server and Windows Firewall.
Nothing else we have tested passed.
Mikrotik - RouterOS is based on Linux, so try some Linux hacks on it for stability testing.
http://en.wikipedia.org/wiki/MikroTik#RouterOS
Fortigate - FortiOS is based on Linux, so as above…
http://en.wikipedia.org/wiki/Fortinet#GPL_violations
Windows ISA Server / Forefront is no longer available, as MS have announced they are dropping it, so support will be gone in time. Try some Windows hacks for stability testing.
This matters because FreeBSD is primarily aimed at stability, although it has pioneered some features yet to be seen on other OS platforms, and it also holds an unofficial world record for the most data transmitted.
https://www.freebsd.org/advocacy/whyusefreebsd.html
http://www.serverwatch.com/tutorials/article.php/10825_3393051_2/Differentiating-Among-BSD-Distros.htm
"FreeBSD holds the unofficial record for transferring data, having achieved more than 2 Terabytes of data from one server running the OS. It follows from this statistic that FreeBSD is also one of the most stable OSes available."
The last part above is not what you want to hear given what you are experiencing, but it goes back to my point about tuning.
You can tune a little Ford Fiesta engine to compete over a quarter mile with similar performance to a bigger-engined car, but that Fiesta engine will then have no reliability and will likely explode after completing the quarter mile.
I guess what you need to do is define your aims, then select the correct firewall according to those aims.