What is the biggest attack in Gbps you have stopped?
-
I was just thinking: when my 4 mil state table fills up and the firewall is trying to expire states and whatever else it does, scanning 4 mil states could be a bit of work. I wonder how the attack would fare if the state table were made small, like 10k states. Expiring states may require an O(n) scan when lots of states are created at about the same time.
Another thought: I have "Firewall Adaptive Timeouts" (System->Advanced->Firewall/NAT) set to 4 mil states, which means that by the time my state table gets full, states are being expired instantly. If expiration causes a full scan, or at least triggers often, then creating states quickly and just as quickly expiring them may be the cause. I don't know; just a thought.
That could explain why it took a few tens of seconds before I started to feel the hurt of the attack: the issue didn't fully trigger until the state table got full.
edit: more thoughts
I wonder what happens if I set the target expiration to 4 mil but set the max to something larger. I'm not sure what pfSense/FreeBSD has to do when the table gets full; it may trigger a bad code path. Maybe I should set the table to something like 5 mil max states, but leave the adaptive timeout at 4 mil.
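The O(n) expiry worry above can be sketched as a toy model. To be clear, `purge_expired` is a made-up naive scan to illustrate the scaling, not pf's actual purge logic (pf expires states incrementally):

```python
# Toy model of the hypothesis above: if expiring states means a full
# O(n) scan of the table, the work per purge grows linearly with table
# size, so a 4 mil table does 400x the scanning of a 10k one.
import time

def purge_expired(states, now):
    """Naive expiry: scan every state, keep only the unexpired ones."""
    return [expiry for expiry in states if expiry > now]

for n in (10_000, 4_000_000):
    table = list(range(n))                       # fake per-state expiry times
    t0 = time.perf_counter()
    survivors = purge_expired(table, now=n // 2) # half have expired
    ms = (time.perf_counter() - t0) * 1e3
    print(f"{n:>9} states: scan took {ms:6.1f} ms, {len(survivors):,} kept")
```

The absolute times don't matter; the point is that every purge pays for the whole table, which is why a small table might ride out the attack better.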
You get 1 packet coming in, which causes x-number-of-lines-of-code to run and some memory to be filled up.
Consider the time it takes for 1 packet to be processed by the fw, then add the time for the state to expire, and before long you could easily fill up the available/free RAM and also swamp the CPU by making it run x-number-of-lines-of-code per incoming packet, with code pinned to one core dominating the CPU, especially if some threaded code has a higher priority than other code.
What if you could throttle the packets coming in before the states were processed? Would that prevent the firewall from crashing/hanging?
-
What if you could throttle the packets coming in before the states were processed? Would that prevent the firewall from crashing/hanging?
Isn't that the core idea of "a DDoS shouldn't be dealt with by the firewall, but upstream?"
-
@supermule So what's different with your setup compared to what I have set up?
edit:
CPU microcode perhaps? My VM is running on an AMD CPU which has some bugs affecting threading performance; all the others affected who have posted are running Intel CPUs, IIRC, so maybe that's why I don't see the problem?
edit2: This might also be a factor, especially considering the VMware point, which I've had to tweak as it's possible for the time to drift on VMs.
http://en.wikipedia.org/wiki/HPET
Anyone know if HPET is built into the pfSense builds?
http://www.freebsd.org/cgi/man.cgi?query=hpet&apropos=0&sektion=0&manpath=FreeBSD+9-current&format=html
Just to note: I was running bare metal with the same predictable results. I posted my machine specs in this thread, but will append them to my signature.
jimp or cmb made a recommendation somewhere in this thread about tuning pfSense settings to better handle DDoS.
I know that with the default state limit of 394000, the UI and almost every other service seems to lock up, but system utilization under top is minimal at best. The screen shot I posted of the console shows the system utilization with a 394K state table. Increasing the state table makes the box responsive, but the interface being attacked stops responding with 4Mbit of attack traffic.
-
I think I have an idea where the issue may be: interrupts
This console screen shot is very telling:
The system starts throttling interrupts, but the CPU utilization for interrupts is conspicuously fixed at 25.0%. Either the code handling interrupts has some challenges, or the system has throttled the CPU utilization for interrupts at 25%, and it's hit that limit and cannot process any more.
Does anyone know if interrupt CPU limits are adjustable, and where that can be done? According to the console shot, I have some additional headroom I could allocate to interrupts to see if that helps.
See this thread with a similar set of symptoms in 2011: https://forum.pfsense.org/index.php?topic=38589.msg198765#msg198765
-
What if you could throttle the packets coming in before the states were processed? Would that prevent the firewall from crashing/hanging?
Isn't that the core idea of "a DDoS shouldn't be dealt with by the firewall, but upstream?"
Yes & no.
We have our internet feeds, and we know what speed and amount of data we can get from them, i.e. some of it might be fast fibre but with a 10GB data limit, so surely it is up to us to ensure the fw can handle the speed of the data arriving? It also depends on what services the network provider provides for the money we pay for that internet feed, although depending on what country you are in, the spooks may also have a hand in what arrives at your fw as well.
-
http://en.wikipedia.org/wiki/Interrupt_storm
"In operating systems, an interrupt storm is an event during which a processor receives an inordinate number of interrupts that consume the majority of the processor's time. Interrupt storms are typically caused by hardware devices that do not support interrupt rate limiting."
It doesn't seem unlike what I described earlier https://forum.pfsense.org/index.php?topic=91856.msg523964#msg523964 especially when you consider that chips are just running code stored in the chip instead of on the hard drive, i.e. like a BIOS, and some Intel NICs provide network processing capabilities, unlike, say, a USB NIC running on an RPi. ;)
Might be able to glean some info & solutions from these links.
https://forums.freebsd.org/threads/tp-link-tl-wn781nd-version-2-works-with-10-1-but-with-one-caveat.49667/ (2014)
https://forums.freebsd.org/threads/interrupt-storm-detected-on-irq10.17192/ (2010)
http://lists.freebsd.org/pipermail/freebsd-questions/2011-August/232647.html "Interrupt storms (an olde but a goode)" - bit like rootkits, which a lot of people have forgotten about.
https://forums.freebsd.org/threads/intel-dq77kb-high-interrupt-rate-when-using-hdmi.39210/ (2013)
https://forums.freenas.org/index.php?threads/getting-lots-if-interrupt-storm-on-irq16.3425/ (2011)
http://freebsd.1045724.n5.nabble.com/how-to-fix-quot-interrupt-storm-quot-td3819772.html (2009)
http://daemonforums.org/showthread.php?t=500 (2008)
-
What's on IRQ 267, out of interest?
-
What's on IRQ 267, out of interest?
irq267: em1:rx 0 3529974 22
em1 is my WAN2 interface.
Intel 82574L Gigabit Ethernet Controller.
-
Interrupts are almost entirely due to packets per second; they're how the hardware talks to the OS. Complete guess, but I would assume (assumptions can be dangerous) that interrupts would not be high because of anything the firewall is doing, only because of lots of packets.
-
…
If the issue involves states, a good test at the extremes could be to try a few combinations: 1 mil max states with a target of 10k should never get much past 10k, and shouldn't hit the max state limit. But what if the back-off rate is still too low?
Something else to test for flushing states during a storm, via Firewall Adaptive Timeouts:
Leave [adaptive.start] at the default (60%), but set [adaptive.end] to 101% of your maxstates (instead of the default 120%).
As can be calculated, the last 5% (>95% of maxstates) then flushes much more aggressively than with the default setting. Hypothesis: if the adaptation grows stronger as the limit approaches 0, then pfSense hardly chokes. True || false?
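For reference, the scaling behind this is linear (per pf.conf(5)): once the state count passes adaptive.start, every timeout is multiplied by (adaptive.end - states) / (adaptive.end - adaptive.start). A quick sketch of the suggested 101% end versus the default 120%, assuming 4 mil maxstates:

```python
# pf's adaptive timeout scaling (formula from pf.conf(5)): once the state
# count exceeds adaptive.start, every timeout value is multiplied by
# (adaptive.end - states) / (adaptive.end - adaptive.start), floored at 0.

def adaptive_scale(states, start, end):
    """Factor applied to all state timeouts at a given state count."""
    if states <= start:
        return 1.0
    return max(0.0, (end - states) / (end - start))

MAXSTATES = 4_000_000
start = int(0.60 * MAXSTATES)        # default adaptive.start = 60%

# Compare the default adaptive.end (120%) with the suggested 101%,
# both evaluated at 95% table occupancy:
states = int(0.95 * MAXSTATES)
default = adaptive_scale(states, start, int(1.20 * MAXSTATES))
tuned = adaptive_scale(states, start, int(1.01 * MAXSTATES))
print(f"end=120%: timeouts scaled to {default:.3f} of normal")
print(f"end=101%: timeouts scaled to {tuned:.3f} of normal")
```

At 95% occupancy the tuned end scales timeouts to roughly a third of what the default gives (about 0.15 versus 0.42), so states near the limit are flushed about three times as aggressively.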
-
Interrupts are almost entirely due to packets per second; they're how the hardware talks to the OS. Complete guess, but I would assume (assumptions can be dangerous) that interrupts would not be high because of anything the firewall is doing, only because of lots of packets.
https://www.freebsd.org/doc/en_US.ISO8859-1/books/arch-handbook/smp-design.html
" FreeBSD deals with interrupt handlers by giving them their own thread context. Providing a context for interrupt handlers allows them to block on locks. To help avoid latency, however, interrupt threads run at real-time kernel priority. Thus, interrupt handlers should not execute for very long to avoid starving other kernel threads. In addition, since multiple handlers may share an interrupt thread, interrupt handlers should not sleep or use a sleepable lock to avoid starving another interrupt handler."
HW interrupts are different from SW interrupts, in that HW interrupts are treated as more important than most, but not all, SW interrupts.
https://www.freebsd.org/cgi/man.cgi?query=swi&apropos=0&sektion=9
"These functions are used to register and schedule software interrupt handlers. Software interrupt handlers are attached to a software interrupt thread, just as hardware interrupt handlers are attached to a hardware interrupt thread. Multiple handlers can be attached to the same thread. Software interrupt handlers can be used to queue up less critical processing inside of hardware interrupt handlers so that the work can be done at a later time. Software interrupt threads are different from other kernel threads in that they are treated as an interrupt thread. This means that time spent executing these threads is counted as interrupt time, and that they can be run via a lightweight context switch."
Windows is not immune to them either; it's all in the drivers to a certain extent. https://msdn.microsoft.com/en-us/library/windows/hardware/ff540586%28v=vs.85%29.aspx
So in some respects any basic NIC with little or no processing capability will rely more on the OS to do the packet processing, and because computers are just glorified clockwork Turk machines, the OS is less likely to get out of shape with a basic NIC: everything just runs like clockwork, ignoring those electrons at the socket the OS is physically incapable of processing because it is tied up elsewhere. A smart NIC with various built-in processing capabilities, by contrast, makes the packet processing quicker but then causes a flood upstream in the OS itself. It's like CPU caches (L1, L2 & L3), which can be a boon or a hindrance in certain circumstances as well.
With that in mind, some basic cheap NICs, i.e. Realtek, might actually be less hassle than, say, an Intel NIC when solving this sort of problem, which would explain why some expensive hw in amateur hands can cause embarrassment when customers are advised to go with the latest and greatest. The trend is generally toward more sophisticated hack attempts, so this sort of thing will only become more common; and with the trend to employ younger talent straight out of uni, the experience is lost and the cycle repeats, considering the timescales of things like SYN floods (1990s), rootkits (1990s) and interrupt storms (1990s), all of which were seen in the DOS days before life became hidden behind GUIs.
-
https://doc.pfsense.org/index.php/Tuning_and_Troubleshooting_Network_Cards#Intel_ix.284.29_Cards
"On releases prior to pfSense 2.2, the following may be necessary. If using VLANs with Intel 10 Gb ix(4) cards, some features of the driver for VLANs may need to be disabled to work correctly. For instance, to apply these settings on NIC ix0, run the following. "
I wonder if it's worth increasing hw.intr_storm_threshold=10000 to something higher?
This shows FreeBSD 10.1: https://calomel.org/freebsd_network_tuning.html
"For 10gig NICs set to 9000 and use large MTU. (default 1000)
#hw.intr_storm_threshold="9000""
-
System idle: 208 interrupts per second for the one queue that seems to process ICMP from my desktop. Pinging the interface with 67.3k/sec ICMP packets: 250 interrupts per second.
Packets: sent=1422309, rcvd=1422309, error=0, lost=0 (0.0% loss) in 21.131114 sec
RTTs in ms: min/avg/max/dev: 0.003 / 0.176 / 20.716 / 0.294
Bandwidth in kbytes/sec: sent=4038.525, rcvd=4038.525
33Mb/s of 64-byte ICMP packets barely made a dent.
An increase of 12% CPU time (48% CPU time of the core the queue is on) and a 19% increase in interrupts doesn't seem that bad for that many packets.
From a PPS and interrupt view, my system really doesn't care. Whatever the attack that was done before, 30Mb/s was bad enough that my admin interface went offline.
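Plugging the rough numbers above into a quick sanity check shows how hard the NIC must be coalescing (67.3k packets/s but only ~250 interrupts/s on that queue):

```python
# Rough coalescing math from the ping-flood test: ~67,300 ICMP packets/s
# arriving, but the queue only fired ~250 interrupts/s (208/s at idle).
pps = 67_300          # ICMP packets per second during the flood
irq_flood = 250       # interrupts/s on the queue under load
irq_idle = 208        # interrupts/s on the same queue at idle

pkts_per_irq = pps / irq_flood
irq_increase = (irq_flood - irq_idle) / irq_idle
print(f"~{pkts_per_irq:.0f} packets serviced per interrupt")
print(f"interrupt rate rose ~{irq_increase:.0%} under the flood")
```

Roughly 270 packets per interrupt: the NIC's interrupt moderation is doing almost all the work of keeping the IRQ rate flat.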
-
I just realized that when ping flooding my firewall, the only thing that changed was the IRQ CPU time. I looked up ICMP and it seems the ICMP response is built right into the network stack. From what I can tell, the ICMP responses are being handled on that same realtime kernel thread. If this is the case and my admin interface stopped responding to pings because of load, that means the realtime thread on a different interface couldn't even run.
One of the features of MSI-X is interrupt masks. When a "hard" interrupt occurs, the current context thread gets interrupted and the real-time thread gets scheduled. That thread can then set a mask, block all other interrupts, and process all of its data. Of course, blocking interrupts is bad if you don't have a way to indicate there is new work, so there are "soft" interrupts. If the hardware supports it, it can flag a shared memory location to indicate that new data is ready. When the current thread is done processing its current data, it can do one last check to see if the soft interrupt was signaled. If not, unmask the interrupts and return. If it was flagged, then continue processing until all the work is done and no more soft interrupts have been signaled.
It may be possible that the WAN interface is in a constant state of backlog and the realtime kernel thread never unschedules because of constant backlog, starving my admin interface from CPU time.
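The mask/soft-interrupt dance described above can be sketched roughly like this. `FakeNic` and the handler are hypothetical illustrations of the pattern, not any real FreeBSD driver API:

```python
# Sketch of the mask -> drain -> check-soft-flag -> unmask pattern.
from collections import deque

class FakeNic:
    def __init__(self):
        self.queue = deque()      # packets delivered by the "hardware"
        self.soft_flag = False    # shared-memory "new data ready" flag
        self.masked = False       # are hard interrupts masked?

def interrupt_handler(nic, process):
    nic.masked = True                     # mask further hard interrupts
    while True:
        while nic.queue:                  # drain everything queued
            process(nic.queue.popleft())
        if not nic.soft_flag:             # no new work signalled: done
            break
        nic.soft_flag = False             # consume soft interrupt, re-drain
    nic.masked = False                    # unmask and return

# Usage: a packet arriving mid-drain sets the soft flag instead of
# raising another hard interrupt.
nic = FakeNic()
nic.queue.extend(["p1", "p2"])
seen = []

def process(pkt):
    seen.append(pkt)
    if pkt == "p1":
        nic.queue.append("p3")
        nic.soft_flag = True

interrupt_handler(nic, process)
print(seen)   # -> ['p1', 'p2', 'p3'] : all handled in one hard interrupt
```

If the queue never empties (constant backlog), the handler never leaves the loop, which matches the starvation hypothesis above.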
With that in mind, some basic cheap nics ie realtek might actually be less hassle compared to say an intel nic when solving this sort of problem
With a minimum ping of 0.003ms and an average of 0.176, yet only ~250 interrupts per second, the i350 NIC is doing some nifty magic.
-
I will test with some offloading and other tunables in pfsense later when online again.
It's SYN packets that are spoofed and of various sizes. Most of them never get the ACK the FW needs, so the states remain open.
It's easy to fend off an ICMP flood since it's predictable traffic. In my case, when 1 core hits 100%, the FW goes offline and packet loss occurs. I can't see what that specific CPU does, and it would be interesting to dig deeper into that and into what process consumes the CPU. It doesn't happen when there is no port forward enabled, but as soon as it routes, it goes ballistic.
Why can 1 core hold back everything else??
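A back-of-envelope calculation shows why spoofed SYNs hurt so much: each SYN that matches a pass/port-forward rule creates a state that lingers until pf's first-packet timeout expires (tcp.first defaults to 120 s), so creation rate times lifetime gives the steady-state table size:

```python
# Half-open states accumulate at (SYN rate) x (state lifetime). With pf's
# default tcp.first of 120 s, modest SYN rates overwhelm even large tables.
TCP_FIRST = 120   # seconds a half-open state survives (pf default)

def steady_state_states(syn_pps, timeout=TCP_FIRST):
    """State-table entries once creation and expiry balance out."""
    return syn_pps * timeout

for pps in (1_000, 10_000, 100_000):
    print(f"{pps:>7,} SYN/s -> ~{steady_state_states(pps):>10,} states")
```

At 100k SYN/s that's 12 mil states, far past a 4 mil table, which is where the adaptive-timeout behaviour discussed earlier kicks in.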
-
procstat -ka at the command prompt during a SYN attack.
No packet loss this time. No port forward.
-
Same but with port forward.
-
I will test with some offloading and other tunables in pfsense later when online again.
It's SYN packets that are spoofed and of various sizes. Most of them never get the ACK the FW needs, so the states remain open.
It's easy to fend off an ICMP flood since it's predictable traffic. In my case, when 1 core hits 100%, the FW goes offline and packet loss occurs. I can't see what that specific CPU does, and it would be interesting to dig deeper into that and into what process consumes the CPU. It doesn't happen when there is no port forward enabled, but as soon as it routes, it goes ballistic.
Why can 1 core hold back everything else??
Software was always written for single-core CPUs; programmers & designers never thought we'd get multicore CPUs in this timeframe or for the little cost we have, so just like the Y2K bug, there has not been planning for the future.
Now, a multicore CPU still has to share resources, like L2 cache, hard disks and RAM. You can't have two cores working on a shared resource at the same time, or you get a deadlock. So the programmer needs to make decisions about what is acceptable to offload to another core and what is not.
If you want speed, keep as much as possible in a tight loop on one core; if you want to multitask at the expense of speed, offload more work to other cores, knowing that too much swapping between cores increases the lock time, and in the extreme the lock time could be greater than the processing time, ergo nothing is achieved.
Although this is framed from a software perspective, the points about multithreading are still relevant at the hw level; it's just a different level of abstraction.
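The shared-resource point can be illustrated in a few lines, with threads standing in for cores: the lock keeps the result correct, but it also serializes the two "cores", and when the critical section is this tiny the lock hand-off costs more than the work itself.

```python
# Two threads incrementing one shared counter under a lock: correct,
# but fully serialized on the shared resource.
import threading

counter = 0
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:                # every increment pays the lock overhead
            counter += 1

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)   # -> 200000
```

Without the lock the final count would be unpredictable; with it, adding a second "core" buys almost nothing, which is the trade-off described above.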
https://forum.pfsense.org/index.php?topic=91856.msg517843#msg517843
-
Yes… but why can other firewalls keep up with the traffic and pf can't?
If hardware were the limit here, then everything tested should suffer the same fate. They don't...
Fortigate's VMware appliance and Mikrotik handle the same traffic with no issues. Tell me why.... on the same resources and in the same hypervisor.
-
I just realized that when ping flooding my firewall, the only thing that changed was the IRQ CPU time. I looked up ICMP and it seems the ICMP response is built right into the network stack. From what I can tell, the ICMP responses are being handled on that same realtime kernel thread. If this is the case and my admin interface stopped responding to pings because of load, that means the realtime thread on a different interface couldn't even run.
One of the features of MSI-X is interrupt masks. When a "hard" interrupt occurs, the current context thread gets interrupted and the real-time thread gets scheduled. That thread can then set a mask, block all other interrupts, and process all of its data. Of course, blocking interrupts is bad if you don't have a way to indicate there is new work, so there are "soft" interrupts. If the hardware supports it, it can flag a shared memory location to indicate that new data is ready. When the current thread is done processing its current data, it can do one last check to see if the soft interrupt was signaled. If not, unmask the interrupts and return. If it was flagged, then continue processing until all the work is done and no more soft interrupts have been signaled.
It may be possible that the WAN interface is in a constant state of backlog and the realtime kernel thread never unschedules because of constant backlog, starving my admin interface from CPU time.
With that in mind, some basic cheap nics ie realtek might actually be less hassle compared to say an intel nic when solving this sort of problem
With a minimum ping of 0.003ms and an average of 0.176, yet only ~250 interrupts per second, the i350 NIC is doing some nifty magic.
Having some devices do some of the work can be useful, but I suspect it's also helping create the problem being seen here in this instance.
http://en.wikipedia.org/wiki/Message_Signaled_Interrupts#MSI-X
https://forums.freebsd.org/threads/msi-msi-x-on-intel-em-nic.27736/
http://people.freebsd.org/~jhb/papers/bsdcan/2007/article/node8.html
Might be useful as it's looking at possible areas for MSI-X failures.
http://comments.gmane.org/gmane.os.freebsd.stable/71699
http://christopher-technicalmusings.blogspot.co.uk/2012/12/passthrough-pcie-devices-from-esxi-to.html