What is the biggest attack in Gbps you stopped?
-
https://forum.pfsense.org/index.php?topic=91856.msg523921#msg523921
-
@Supermule, tell me, is the script posted in this thread the one causing the problem you see?
If so, does it also cause the problem you describe here? https://forum.pfsense.org/index.php?topic=87571.msg492268#msg492268
It seems like we are going around in circles at this stage, which is why I ask.
It's also like CMB says here https://forum.pfsense.org/index.php?topic=87571.msg493401#msg493401
"DDoS is hell on stateful firewalls is the basic summary of this thread. It's not specific to anything in any particular firewall."
The very nature of any stateful firewall will cause an increase in resource use like the one you are seeing.
With this in mind, what can you do to limit your exposure to the weakness of a stateful firewall? Lots of suggestions here:
http://www.cisco.com/web/about/security/intelligence/guide_ddos_defense.html
Some users hosting a website that only provides services/products to punters in their own country can limit access to the IP address blocks assigned to that country. If it's something offered further afield, like to a whole continent, then rinse and repeat the above with that continent's IP address blocks.
If it's something flogged globally, then consider a website sat behind the TLD specific to each country, i.e. if in the UK then a website assigned to a .co.uk could help, but then you'd need something to redirect the originating IP address to the correct country domain. This approach can lessen, but not 100% eradicate, a DDoS attack of sorts.
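A rough sketch of the country-restriction idea using a pf table (the table name and the address file are hypothetical, and on pfSense you would normally do this with a URL-table alias plus a WAN rule rather than raw pfctl):

# uk_blocks.txt is assumed to hold one CIDR per line, e.g. exported from a GeoIP database.
pfctl -t uk_only -T replace -f /root/uk_blocks.txt    # (re)load the table of allowed networks
# The matching rules would then look like this in pf.conf terms (pfSense builds these from GUI rules;
# the later pass rule wins for sources in the table):
#   block in on $wan_if proto tcp to $web_server port { 80 443 }
#   pass in on $wan_if proto tcp from <uk_only> to $web_server port { 80 443 } keep state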
Perhaps having something that temporarily disables the port forward in real time when CPU activity reaches a threshold might be a way around the problem, to avoid taking out the firewall if the other tuning options like increasing the default states et al don't work.
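As a very rough sketch of that idea (this is not a built-in pfSense feature; the load threshold, the polling interval, and using a full state flush as a stand-in for actually disabling the port forward are all assumptions):

#!/bin/sh
# Hypothetical watchdog: if the 1-minute load average crosses a threshold,
# flush the state table as a crude substitute for "disable the port forward".
THRESHOLD=8                 # assumed load threshold, tune to taste
while true; do
    LOAD=$(sysctl -n vm.loadavg | awk '{ print int($2) }')   # 1-minute load average
    if [ "$LOAD" -ge "$THRESHOLD" ]; then
        pfctl -F states     # drops every state, so legitimate connections get cut too
    fi
    sleep 10
done

Killing only the states that point at the forwarded host (pfctl -k with that host's address) would be a gentler version of the same idea.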
Either way, there's lots of ways to skin the cat!
DDoS attacks can't be stopped because they're trying to shove 100Gb/s down a 1Gb pipe. But lots of different IPs does not a DDoS make: 3Mb of traffic hitting a 10Gb firewall is not what anyone in their right mind would call a DDoS. If a firewall dies under that, it's because of a slow path that is definitely not O(1).
Packets hitting the firewall should trigger an O(1) lookup to see if a state exists; if it does, pass, if it does not, go to the next check.
Packets not matched by a state should trigger an O(n) walk of the firewall rules. If a rule passes them, add a state and pass; else block.
I can't see either of these being an issue without some absurdly crazy firewall rules. NAT sits in there somewhere, not sure where, and possibly some other routing-related stuff.
Whatever is going on, if I take the 30Mb/s that I saw SuperMule take down my firewall with, and assume I only have one CPU, then it's taking over 100k clock cycles per packet. Since I have a quad core and all 4 cores were getting hosed, and the packets were actually quite large, it was more like 1mil clock cycles per packet. Since my system was not keeping up, that means it was worse than 1mil/packet.
I'm not sure what kind of slow path warrants 1mil cycles to decide what to do with a packet. 1mil cycles is a lot of work. You can encrypt 2.3mil bits with AES. Another way to put it: 1Gb/s over SMB was about 0.5% CPU on my old 2.67GHz CPU, which once translated down to 1MHz (1mil cycles per second), then into 1500 byte packets, is about 780 packets per second. In the time pfSense processes one of these packets, my Windows box could have transferred 780 packets via SMB.
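As a sanity check on those numbers, the back-of-the-envelope arithmetic looks like this (the 1500-byte packet size is an assumption; smaller packets push the packet rate up and the per-packet cycle budget down):

# Rough estimate of CPU cycles available per attack packet on one core.
BITRATE=30000000       # 30Mb/s of attack traffic, as observed
PKT_BYTES=1500         # assumed packet size
CORE_HZ=2670000000     # one 2.67GHz core
PPS=$(( BITRATE / (PKT_BYTES * 8) ))
echo "packets per second:           $PPS"                   # 2500
echo "cycles per packet (one core): $(( CORE_HZ / PPS ))"   # ~1.07 million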
The funny thing is it isn't the number of states. SuperMule did an attack that hit a forwarded port, which means states were being created. It was up against the 4mil state limit, and yes, CPU was higher than normal, but the system was perfectly stable. A similar attack against blocked ports resulted in the same thing, everything was fine. pfSense does handle lots of blocked traffic just fine. Whatever is going on is triggering something other than just blocking traffic.
Let's make an analogy. If a normal person stands on a scale and it says they weigh 75 tons, you don't ask them to take off their shoes. That's about the same order-of-magnitude difference.
-
I was just thinking, when my 4mil states are full up and the firewall is trying to expire states and whatever else it does, scanning 4mil states could be a bit of work. I wonder how the attack would do if the state table was set to something small, like 10k states. Expiring states may require an O(n) scan when lots of states are created at about the same time.
Another thought: I have "Firewall Adaptive Timeouts" (System->Advanced->Firewall/NAT) set to 4mil states, which means by the time my state table gets full, states are being expired instantly. If expiration causes a full scan, or at least triggers often, then creating states quickly and just as quickly expiring them may be the cause. I don't know. Just a thought.
That could explain why it took a few tens of seconds before I started to feel the hurt of the attack: the issue didn't fully trigger until the state table got full.
edit: more thoughts
I wonder what happens if I set the target expiration to 4mil but set the max to something larger. I'm not sure what pfSense/FreeBSD has to do when the table gets full; it may trigger a bad code path. Maybe I should have the table set to something like 5mil max states, but leave the adaptive timeout at 4mil.
edit2:
If the issue involves states, a good test to make it extreme would be to try a few combinations: 1mil max states with a target of 10k (it should never get much past 10k, so it shouldn't hit the max state limit), 10k max with 10k target, 10k max with no adaptive, etc.
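In pf terms those combinations map onto the state limit and the adaptive timeout values (this is only an illustration of the knobs involved; on pfSense they are set under System > Advanced > Firewall & NAT rather than in a hand-edited pf.conf, and reading "target 10k" as adaptive.start is my assumption):

# Illustrative pf.conf equivalents of the first combination (shown as comments,
# since pfSense generates its own ruleset from the GUI settings):
#   set limit states 1000000
#   set timeout { adaptive.start 10000, adaptive.end 12000 }
# Check what is actually loaded:
pfctl -sm     # current limits, including the hard state limit
pfctl -st     # current timeouts, including adaptive.start / adaptive.end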
-
@supermule So what's different with your setup then, compared to what I have set up?
edit:
CPU microcode perhaps? My VM is running on an AMD CPU which has some bugs affecting threading performance; all the others affected who have posted are running Intel CPUs IIRC, so maybe that's why I don't see the problem?
edit2: This might also be a factor, especially considering the VMware point, which I've had to tweak as it's possible to get the time to drift on VMs.
http://en.wikipedia.org/wiki/HPET
Anyone know if HPET is built into the pfSense builds?
http://www.freebsd.org/cgi/man.cgi?query=hpet&apropos=0&sektion=0&manpath=FreeBSD+9-current&format=html
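A quick way to check from a shell on the box itself (these are stock FreeBSD sysctls, so they should exist on pfSense as well):

sysctl kern.eventtimer.choice      # available event timers, e.g. HPET, LAPIC, i8254
sysctl kern.timecounter.choice     # available timecounters, e.g. TSC, HPET, ACPI-fast
sysctl kern.timecounter.hardware   # the timecounter actually in use
-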
I was just thinking, when my 4mil states are full up and the firewall is trying to expire states and whatever else it does, scanning 4mil states could be a bit of work. I wonder how the attack would do if the state table was set to something small, like 10k states. Expiring states may require an O(n) scan when lots of states are created at about the same time.
Another thought: I have "Firewall Adaptive Timeouts" (System->Advanced->Firewall/NAT) set to 4mil states, which means by the time my state table gets full, states are being expired instantly. If expiration causes a full scan, or at least triggers often, then creating states quickly and just as quickly expiring them may be the cause. I don't know. Just a thought.
That could explain why it took a few tens of seconds before I started to feel the hurt of the attack: the issue didn't fully trigger until the state table got full.
edit: more thoughts
I wonder what happens if I set the target expiration to 4mil but set the max to something larger. I'm not sure what pfSense/FreeBSD has to do when the table gets full; it may trigger a bad code path. Maybe I should have the table set to something like 5mil max states, but leave the adaptive timeout at 4mil.
You get one packet come in, which causes some number of lines of code to run and some memory to be filled up.
Consider the time it takes for one packet to be processed by the firewall, then add the time before its state expires, and before long you could easily fill up the available/free RAM and also swamp the CPU by making it run that code per incoming packet, with the code assigned to one core dominating that CPU, especially if some threaded code has a higher priority than other code.
What if you could throttle the packets coming in before the states were processed? Would that prevent the firewall from crashing/hanging?
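pf does have some knobs in that direction; a hedged sketch of what they look like (the interface, server and numbers are made-up examples, and on pfSense these map to advanced options on a WAN firewall rule rather than a hand-edited pf.conf). Note that none of this helps once the flood simply fills the pipe, which is where the "deal with it upstream" argument below comes in:

# Illustrative pf rules (comments only, since pfSense generates its own ruleset):
#   table <abusers> persist
#   block in quick on $wan_if from <abusers>
#   pass in on $wan_if proto tcp to $web_server port 80 flags S/SA keep state \
#       (max-src-conn-rate 100/10, overload <abusers> flush global)
# max-src-conn-rate/overload shunts sources opening more than 100 connections per 10s into <abusers>;
# using 'synproxy state' instead of 'keep state' would additionally make pf complete the TCP
# handshake itself before any state reaches the server.
pfctl -t abusers -T show    # list whoever has been shunted into the table so far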
-
What if you could throttle the packets coming in before the states were processed? Would that prevent the firewall from crashing/hanging?
Isn't that the core idea of "a DDoS shouldn't be dealt with by the firewall, but upstream"?
-
@supermule So what's different with your setup then, compared to what I have set up?
edit:
CPU microcode perhaps? My VM is running on an AMD CPU which has some bugs affecting threading performance; all the others affected who have posted are running Intel CPUs IIRC, so maybe that's why I don't see the problem?
edit2: This might also be a factor, especially considering the VMware point, which I've had to tweak as it's possible to get the time to drift on VMs.
http://en.wikipedia.org/wiki/HPET
Anyone know if HPET is built into the pfSense builds?
http://www.freebsd.org/cgi/man.cgi?query=hpet&apropos=0&sektion=0&manpath=FreeBSD+9-current&format=html
Just to note: I was running bare metal with the same predictable results. I posted my machine specs in this thread, but will append them to my signature.
jimp or cmb made a recommendation somewhere on this thread about tuning pfSense settings to better handle DDoS.
I know that with the default state limit of 394000, the UI and most every other service seems to lock up, but system utilization under top is minimal at best. The screen shot I posted of the console shows the system utilization with a 394K state table. Increasing the state table makes the box responsive, but the interface being attacked stops responding with 4Mbit of attack traffic.
-
I think I have an idea where the issue may be: interrupts
This console screen shot is very telling:
The system starts throttling interrupts, but the CPU utilization for interrupts is conspicuously fixed at 25.0%. Either the code handling interrupts has some challenges, or the system has capped the CPU utilization for interrupts at 25% and it's hit that limit and cannot process any more. (On a quad-core box, a flat 25% would also be consistent with a single core being pegged.)
Does anyone know if interrupt CPU limits are adjustable, and where this can be done? According to the console shot, I have some additional headroom I could allocate to interrupts to see if that helps.
See this thread with a similar set of symptoms in 2011: https://forum.pfsense.org/index.php?topic=38589.msg198765#msg198765
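A couple of stock FreeBSD commands that should help answer both questions from the console (nothing pfSense-specific assumed):

vmstat -i                          # interrupt totals and average rate per IRQ / NIC queue
sysctl hw.intr_storm_threshold     # the kernel's interrupt-storm limit (discussed later in the thread)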
-
What if you could throttle the packets coming in before the states were processed? Would that prevent the firewall from crashing/hanging?
Isn't that the core idea of "a DDoS shouldn't be dealt with by the firewall, but upstream"?
Yes & no.
We have our internet feeds and we know what speed and amount of data we can get from them, i.e. some of it might be fast fibre but with a 10GB data cap, so surely it is up to us to ensure the firewall can handle the speed of the data arriving? It also depends on what services the network provider provides for the money we pay for that internet feed, although depending on what country you are in, the spooks may also have a hand in what arrives at your firewall as well.
-
http://en.wikipedia.org/wiki/Interrupt_storm
"In operating systems, an interrupt storm is an event during which a processor receives an inordinate number of interrupts that consume the majority of the processor's time. Interrupt storms are typically caused by hardware devices that do not support interrupt rate limiting."
It doesn't seem unlike what I described earlier (https://forum.pfsense.org/index.php?topic=91856.msg523964#msg523964), especially when you consider that chips are just running code stored on the chip instead of on the hard drive, i.e. like a BIOS, and some Intel NICs provide network processing capabilities, unlike say a USB NIC on a RPi. ;)
Might be able to glean some info & solutions from these links.
https://forums.freebsd.org/threads/tp-link-tl-wn781nd-version-2-works-with-10-1-but-with-one-caveat.49667/ (2014)
https://forums.freebsd.org/threads/interrupt-storm-detected-on-irq10.17192/ (2010)
http://lists.freebsd.org/pipermail/freebsd-questions/2011-August/232647.html (2011) - "Interrupt storms (an olde but a goode)". Bit like rootkits, which a lot of people forgot about.
https://forums.freebsd.org/threads/intel-dq77kb-high-interrupt-rate-when-using-hdmi.39210/ (2013)
https://forums.freenas.org/index.php?threads/getting-lots-if-interrupt-storm-on-irq16.3425/ (2011)
http://freebsd.1045724.n5.nabble.com/how-to-fix-quot-interrupt-storm-quot-td3819772.html (2009)
http://daemonforums.org/showthread.php?t=500 (2008)
-
What's on IRQ 267, out of interest?
-
What's on IRQ 267, out of interest?
irq267: em1:rx 0 3529974 22
em1 is my WAN2 interface.
Intel 82574L Gigabit Ethernet Controller.
-
Interrupts are almost entirely due to packets per second; it's how the hardware talks to the OS. Complete guess, but I would assume (assumptions can be dangerous) that interrupts would not be high because of anything the firewall is doing, only because of lots of packets.
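An easy way to watch that relationship directly (standard FreeBSD tools; em1 is the interface identified a few posts up):

netstat -w 1 -I em1    # packets in/out per second on em1, one line per second
vmstat -i | grep em1   # cumulative interrupt count and average rate for em1's queues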
-
…
If the issue involves states, a good test to make it extreme would be to try a few combinations: 1mil max states with a target of 10k (it should never get much past 10k, so it shouldn't hit the max state limit).
But what if the back-off rate is still too low?
Something else to test for flushing states during a storm, under .../Firewall Adaptive Timeouts:
Leave [adaptive.start] at the default (60%), but set [adaptive.end] to 101% of your maxstates (instead of the default 120%).
As can be calculated, the final 5% (above 95% of maxstates) then adapts, i.e. flushes, much more aggressively than the default setting. Hypothesis: if the adaptation gets stronger as the limit approaches 0, then pfSense hardly chokes. True || false?
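To put rough numbers on that: pf scales all timeouts by (adaptive.end - states) / (adaptive.end - adaptive.start) once the state count passes adaptive.start, so with the 4mil maxstates mentioned earlier the difference looks like this (a quick sketch, integer maths only):

MAXSTATES=4000000
START=$(( MAXSTATES * 60 / 100 ))     # adaptive.start at the default 60%
STATES=$(( MAXSTATES * 95 / 100 ))    # state table 95% full
END101=$(( MAXSTATES * 101 / 100 ))   # proposed adaptive.end
END120=$(( MAXSTATES * 120 / 100 ))   # default adaptive.end
echo "end=101%: timeouts cut to $(( (END101 - STATES) * 100 / (END101 - START) ))% of normal"   # ~14%
echo "end=120%: timeouts cut to $(( (END120 - STATES) * 100 / (END120 - START) ))% of normal"   # ~41%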
-
Interrupts are almost entirely due to packets per second; it's how the hardware talks to the OS. Complete guess, but I would assume (assumptions can be dangerous) that interrupts would not be high because of anything the firewall is doing, only because of lots of packets.
https://www.freebsd.org/doc/en_US.ISO8859-1/books/arch-handbook/smp-design.html
" FreeBSD deals with interrupt handlers by giving them their own thread context. Providing a context for interrupt handlers allows them to block on locks. To help avoid latency, however, interrupt threads run at real-time kernel priority. Thus, interrupt handlers should not execute for very long to avoid starving other kernel threads. In addition, since multiple handlers may share an interrupt thread, interrupt handlers should not sleep or use a sleepable lock to avoid starving another interrupt handler."
HW interrupts are different from SW interrupts, in that HW interrupts are treated as more important than most, but not all, SW interrupts.
https://www.freebsd.org/cgi/man.cgi?query=swi&apropos=0&sektion=9
"These functions are used to register and schedule software interrupt handlers. Software interrupt handlers are attached to a software interrupt thread, just as hardware interrupt handlers are attached to a hardware interrupt thread. Multiple handlers can be attached to the same thread. Software interrupt handlers can be used to queue up less critical processing inside of hardware interrupt handlers so that the work can be done at a later time. Software interrupt threads are different from other kernel threads in that they are treated as an interrupt thread. This means that time spent executing these threads is counted as interrupt time, and that they can be run via a lightweight context switch."Windows is not immune to them either, its all in the drivers to a certain extent. https://msdn.microsoft.com/en-us/library/windows/hardware/ff540586%28v=vs.85%29.aspx
So in some respects any basic nic with little or no processing capabilities will rely more on the OS to do the packet processing and because computers are just glorified clockwork turk machines, so the OS is less likely to get out of shape with a basic nic as everything will just run like clockwork ignoring those electrons at the socket the OS is physically incapable of processing due to being tied up elsewhere, unlike with a smart nic which has various builtin processing capabilities which whilst making the packet processing quicker then causes a flood upstream in the OS itself. Its like CPU caches (L1,L2 & L3) can be a boon or a hindrance in certain circumstances as well.
With that in mind, some basic cheap nics ie realtek might actually be less hassle compared to say an intel nic when solving this sort of problem and will explain why some expensive hw in amateur hands can cause some embarrassment with customers advised to go with the latest and greatest. As the trend is generally for greater more sophisticated hack attempts so this sort of thing will only become more common and with the trend to employ younger talent straight out of uni, so the experience is lost and the cycles repeat considering the timescales of things like syn floods (1990's), rootkits (1990's) and interrupt storms (1990's) all of which were seen in the dos days before life became hidden behind gui's.
-
https://doc.pfsense.org/index.php/Tuning_and_Troubleshooting_Network_Cards#Intel_ix.284.29_Cards
"On releases prior to pfSense 2.2, the following may be necessary. If using VLANs with Intel 10 Gb ix(4) cards, some features of the driver for VLANs may need to be disabled to work correctly. For instance, to apply these settings on NIC ix0, run the following. "
I wonder if it's worth increasing hw.intr_storm_threshold=10000 to something higher?
This is from the FreeBSD 10.1 tuning page at https://calomel.org/freebsd_network_tuning.html
"For 10gig NIC's set to 9000 and use large MTU. (default 1000)
#hw.intr_storm_threshold="9000""
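A hedged way to experiment with it (the 9000 figure is just the calomel suggestion above, not a recommendation; on pfSense the persistent place for this would normally be System > Advanced > System Tunables or /boot/loader.conf.local):

sysctl hw.intr_storm_threshold            # show the current value
sysctl hw.intr_storm_threshold=9000       # raise it for this boot only
# To make it persistent, the same line would go into /boot/loader.conf.local:
#   hw.intr_storm_threshold="9000"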
-
System idle, 208 interrupts per second for the one queue that seems to process ICMP from my desktop. Pinging the interface with 67.3k/sec ICMP packets, 250 interrupts per second.
Packets: sent=1422309, rcvd=1422309, error=0, lost=0 (0.0% loss) in 21.131114 sec
RTTs in ms: min/avg/max/dev: 0.003 / 0.176 / 20.716 / 0.294
Bandwidth in kbytes/sec: sent=4038.525, rcvd=4038.525
33Mb/s of 64 byte ICMP packets barely made a dent.
An increase of 12% CPU time (48% CPU time of the core the queue is on) and a 19% increase in interrupts doesn't seem that bad for that many packets.
From a PPS and interrupt view, my system really doesn't care. Whatever the attack that was done before was, 30Mb/s of it was bad enough that my admin interface went offline.
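The numbers above look like output from a Windows ping flooder; on a unix client, a rough equivalent for generating the same sort of load would be (needs root, and 192.0.2.1 is just a placeholder for the firewall's address):

ping -f 192.0.2.1    # flood ping: sends packets as fast as replies come back, or 100/s minimum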
-
I just realized that when ping flooding my firewall, the only thing that changed was the IRQ CPU time. I looked up ICMP and it seems the ICMP response is built right into the network stack. From what I can tell, the ICMP responses are being handled on that same realtime kernel thread. If this is the case and my admin interface stopped responding to pings because of load, that means the realtime thread on a different interface couldn't even run.
One of the features of MSI-X is interrupt masks. When a "hard" interrupt occurs, the current context thread gets interrupted and the real-time thread gets scheduled. Then the thread can set a mask and block all other interrupts while it processes all of its data. Of course blocking interrupts is bad if you don't have a way to indicate there is new work, so there are "soft" interrupts. If the hardware supports it, it can flag a shared memory location to say that new data is ready. When the current thread is done processing its current data, it can do one last check to see if the soft interrupt was signaled. If not, unmask the interrupts and return. If it was flagged, then continue processing until all the work is done and no more soft interrupts have been signaled.
It may be possible that the WAN interface is in a constant state of backlog and the real-time kernel thread never unschedules because of that backlog, starving my admin interface of CPU time.
With that in mind, some basic cheap NICs, i.e. Realtek, might actually be less hassle compared to, say, an Intel NIC when solving this sort of problem
With a minimum ping of 0.003ms and an average of 0.176, yet only ~250 interrupts per second, the i350 NIC is doing some nifty magic.
-
I will test with some offloading and other tunables in pfSense later, when I am online again.
It's SYN packets that are spoofed and of various sizes. Most of them don't complete the handshake with the ACK the firewall needs, so the states remain open.
It's easy to fend off an ICMP flood since it's predictable traffic. In my case, when 1 core hits 100% the firewall goes offline and packet loss occurs. I can't see what that specific CPU is doing, and it would be interesting to dig deeper into that and into which process consumes the CPU. It doesn't happen when there is no port forward enabled, but as soon as it routes, it goes ballistic.
Why can 1 core hold back everything else??
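A few stock FreeBSD commands that should put a name to whatever that one core is doing during the attack (run from an SSH shell or Diagnostics > Command Prompt; nothing here is pfSense-specific):

top -aSHP     # per-CPU view including kernel threads, so the thread pinning the core gets named
pfctl -si     # pf state-table counters: current entries, searches, inserts, removals
netstat -m    # mbuf usage, to see whether the flood is exhausting network buffers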
-
procstat -ka from the command prompt during a SYN attack.
No packet loss this time. No port forward.