What is the biggest attack, in Gbps, that you have stopped?
-
It seems to need it badly.
But when pfSense will get it is another interesting question.
FreeBSD is getting more SMP love for its network stack in 11. Each major release seems to bring better core scaling for I/O in general. There are some major plans to let the network stack both receive and send flows pinned to a single core, with flows randomly distributed among the cores.
One thing I don't know about on the SMP front is NAT. I know the NAT implementation has been single-threaded for a while. It may get a rewrite once some of the new SMP network-stack APIs are finalized. Or just go IPv6.
-
Last one for today. Still packet loss and an unresponsive GUI. Stateless traffic is around 15 Mbit/s.
I see filterlog consuming a lot of CPU during the attack. Big difference in VMware, CPU-wise… Core nr. 6 is still blasting away, but it's not attack-dependent. If it weren't running at 100%, then it could have survived (maybe).
-
SYN flood recording, stateless, running top on the console.
SYN flood recording with SYN proxy states and limiters, running top on the console.
If you wonder why you can't see core nr. 4 in top, you are not the only one.
It runs at 100% in VMware.
-
So have you identified the code that's running on the core that maxes out when this happens?
If you haven't, how can you fix the problem?
At the moment you are just reporting symptoms, which, as you can see from the length of this thread, hasn't been that useful at fixing the problem so far, has it?
-
I haven't got a clue where to begin or where to look.
I can't see what's using the core…
What do you make of this? Core nr. 4 says it's 100% idle.
-
So have you identified the code that's running on the core that maxes out when this happens?
I'm almost certain the issue is with the network driver in FreeBSD and it's also being contributed to by PF.
When my state table is low (394K) the attack cripples the entire box with the exception of the console. PF alerts that it hit its state table max in the console. I'm not sure why a full state table creates a more significant impact on the box, but it does.
When I increase the state table I get the IRQ warning; the interrupt storm. This disables the interface being attacked, and it's most likely due to the interface grabbing one CPU/core and filling it with software interrupts. PF takes all packets and puts them through the CPU, and in this case it would/should grab only one CPU. This makes sense because you don't want IRQ polling across all CPUs (I have a link to an excellent article regarding this design, I'll find it in a few). So generating an interrupt storm on any interface should max out one CPU and take that interface down because of the interrupts being generated. The CPU does not have the resources to process legitimate requests because it's overwhelmed with interrupts.
I guess that the network driver would have to include code to drop these packets before they got to the OS/kernel. Once the kernel gets involved in processing these packets, it generates the interrupt storm, bogs down one CPU, and the interface goes down.
I also assume there is probably some performance tuning I can do in pfSense, but I think the issue is at a lower level than that. If I have time this weekend, I'll spin up a FreeBSD 10.1 box running PF to validate these assumptions, but I have a strong feeling it's the networking driver that's creating this issue by passing every packet off to the kernel and PF for processing.
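For what it's worth, some of the interrupt- and queue-related knobs live in sysctl.conf/loader.conf on FreeBSD. A sketch of settings to experiment with during the flood; the numbers are illustrative guesses, not recommendations, so test each one before relying on it:

```
# /etc/sysctl.conf (illustrative values, tune for your hardware)
net.inet.tcp.syncookies=1          # answer SYNs without committing state
hw.intr_storm_threshold=10000      # raise the "interrupt storm detected" trip point
net.inet.ip.intr_queue_maxlen=2048 # deeper IP input queue before drops
net.isr.dispatch=deferred          # queue packets instead of processing in interrupt context

# /boot/loader.conf
kern.ipc.nmbclusters="262144"      # more mbuf clusters to absorb bursts
```

Raising `hw.intr_storm_threshold` only hides the storm warning rather than fixing the load, but it can stop the kernel from throttling the interface while you investigate.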
-
I disabled Device Polling and the 100% usage of the Core nr. 4 went away instantly.
This is the console with top -P running, first with no attack and then during the SYN attack.
The first shot is with no port forward, and the box is fine.
The second is with a port forward, and it dies.
![top-p_no portforward_SYN flood.PNG](/public/imported_attachments/1/top-p_no portforward_SYN flood.PNG)
![top-p_WITH_portforward_SYN flood.PNG](/public/imported_attachments/1/top-p_WITH_portforward_SYN flood.PNG)
-
It's possible that when the state table is full and a new packet for yet another new state comes in, if it's all being processed on the same thread, maybe a new state with a full table triggers some sort of "clean up" in an attempt to make room, and this clean up is really expensive to be doing per packet.
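If that's what's happening, pf's adaptive timeouts might soften the cliff: past a threshold, pf scales state lifetimes down smoothly instead of doing expensive eviction work at the hard limit. A pf.conf sketch, with made-up thresholds:

```
set limit states 1000000
# Above 600k states, start expiring states faster;
# as the count approaches 1M, timeouts scale toward zero.
set timeout { adaptive.start 600000, adaptive.end 1000000 }
```

That way the table sheds old states gradually under load rather than hitting the max and triggering per-packet cleanup.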
-
I think you mentioned this before, but I just want to make sure. When you say with/without forwarding, do you mean when targeting the forwarded port or just forwarding in general?
If it's forwarding in general, maybe it's NAT that's causing some or all of the issues. Since NAT is single-threaded, and in order to forward ports NAT has to rewrite the header information before the firewall sees the packet, we have a single chunk of code acting as gatekeeper to the firewall, and it's single-threaded to boot.
When you have no forwarding rules, NAT doesn't even need to be checked. But if you have one or more rules, NAT has to check every new-state packet that comes in: single-threaded, lots of locking.
edit: I see syslogd using a lot of CPU. Are you logging blocked packets? You may want to disable that during the test.
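For a raw FreeBSD test box (pfSense generates its own pf.conf, so this is just for the experiment), a sketch combining the port forward with synproxy and a no-log default deny; interface and addresses are made up:

```
ext_if  = "em0"             # WAN interface (hypothetical)
web_srv = "192.168.1.10"    # forwarded host (hypothetical)

# FreeBSD 10-era pf syntax: translation rule for the port forward
rdr on $ext_if proto tcp from any to ($ext_if) port 80 -> $web_srv

# default deny with no "log" keyword, so filterlog/syslogd stay out of the hot path
block in on $ext_if

# synproxy: pf completes the TCP handshake itself, so spoofed SYNs
# never reach the backend server
pass in on $ext_if proto tcp from any to $web_srv port 80 \
    flags S/SA synproxy state
```

With last-match-wins evaluation, the pass rule overrides the block for port 80 traffic, and everything else is dropped silently.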
-
It's also important to note that SM is running pfSense as a VM and I am running it on bare metal. This can impact the way it handles network traffic.
https://lists.freebsd.org/pipermail/freebsd-net/2015-March/041657.html
-
Okay, the script fucked up my Unbound; it lost its PID and couldn't start… I had to revert to the DNS forwarder to get internet access back...
Tim and Anthony are a great help! Getting closer....
-
Tomorrow we should be able to test on a real business class network, instead of my crappy Comcast CPE that dies. A fiber optic Internet connection through a Cisco switch port. I'll lug the VM box up there, and also see what I can scare up for bare metal…hopefully more than an unfurled tinfoil hat.
-
I haven't got a clue where to begin or where to look.
I can't see what's using the core…
What do you make of this? Core nr. 4 says it's 100% idle.
I was going to say DTrace might be a good start, but then I saw this: https://forum.pfsense.org/index.php?topic=94260.0
-
So have you identified the code that's running on the core that maxes out when this happens?
I'm almost certain the issue is with the network driver in FreeBSD and it's also being contributed to by PF.
When my state table is low (394K) the attack cripples the entire box with the exception of the console. PF alerts that it hit its state table max in the console. I'm not sure why a full state table creates a more significant impact on the box, but it does.
What makes you say that?
Difficult to tell really without dtrace running, wouldn't you say?
-
Yes. It would be great to have better logging tools built into pfSense.
We are fighting a weird battle right now.
Sometimes it handles the attacks fine, then the same config crashes instantly on the same attack seconds later.
The only difference on my system is that the number 4 core hits 100%. When that happens, it goes down and packet loss occurs.
When it doesn't, it can handle it. I have 8 cores and I can't see what uses that specific core.
-
I haven't got a clue where to begin and where to look.
I can't see what's using the core…
What do you make of this? Core nr. 4 says it's 100% idle.
I was going to say DTrace might be a good start, but then I saw this: https://forum.pfsense.org/index.php?topic=94260.0
You can load the DTrace modules included with the FreeBSD distribution. I just tried to get it working, but some trouble with the DTrace providers hindered me. After removing the /usr/lib/dtrace dir I got these not-so-clear results:
```
[2.2.2-RELEASE][admin@pfsense]/usr/lib: /usr/share/dtrace/toolkit/hotkernel
Sampling... Hit Ctrl-C to end.
dtrace: buffer size lowered to 2m
dtrace: aggregation size lowered to 2m
^C
FUNCTION                COUNT   PCNT
0xffffffff8035fbe2          1   0.0%
0xffffffff80dd0860          1   0.0%
0xffffffff80abad44          1   0.0%
0xffffffff8035fb5c          1   0.0%
0xffffffff8035fab5          1   0.0%
0xffffffff8035fbd7          1   0.0%
0xffffffff80f46fb6          1   0.0%
0xffffffff8035d4b2          1   0.0%
0xffffffff8035fbf9          1   0.0%
0xffffffff8097f4c0          1   0.0%
0xffffffff80d06b71          1   0.0%
0xffffffff8035fb8b          1   0.0%
0xffffffff80d06c28          1   0.0%
0xffffffff8035fac5          1   0.0%
0xffffffff8035fb67          1   0.0%
0xffffffff80f3275e          1   0.0%
0xffffffff8035faa0          1   0.0%
0xffffffff80f3712d          3   0.0%
0xffffffff80dd48bd          3   0.0%
0xffffffff80d069ea          8   0.0%
0xffffffff80f3726b        105   0.6%
0xffffffff80f2d8e6      17886  99.2%
```
Seems promising.
Edit: May be related: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=185290
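Those raw addresses mean hotkernel couldn't resolve kernel symbols. If that keeps failing, a plain profiling one-liner that aggregates whole kernel stacks sometimes still resolves usefully; this is the generic DTrace profile provider, nothing pfSense-specific:

```
# Sample on-CPU kernel stacks at 997 Hz for 10 seconds,
# then print the most common stacks (run as root during the attack)
dtrace -n 'profile-997 /arg0/ { @[stack()] = count(); } tick-10s { exit(0); }'
```

The hottest stack printed last should point straight at whatever is eating that one core.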
-
The last thing Supermule said was that the problem only occurs when port forwarding is enabled in NAT. My guess is NAT, unless we can get confirmation that what I read was incorrect.
-
So have you identified the code that's running on the core that maxes out when this happens?
I'm almost certain the issue is with the network driver in FreeBSD and it's also being contributed to by PF.
When my state table is low (394K) the attack cripples the entire box with the exception of the console. PF alerts that it hit its state table max in the console. I'm not sure why a full state table creates a more significant impact on the box, but it does.
What makes you say that?
Difficult to tell really without dtrace running, wouldn't you say?
Yes and no. There's enough empirical evidence to support this based on the architecture of PF.
Simply stated, PF does stateful packet inspection, therefore it must pass every packet to the kernel and then up to PF. A DDoS is a capacity-maxing attack. It doesn't matter where that capacity is (bandwidth, CPU, RAM, states, IRQ interrupts, etc.); it is designed to impair service by maxing out capacity. This particular attack will take down any device that has to do stateful packet inspection. It took down a FreeBSD 10.1/Apache 2.4 box and a CentOS 6.4/Apache 2.4 box. It also affected the hypervisor each of those systems was running on. Nothing that does stateful packet inspection is immune to this kind of attack.
As a comparison, on the same hypervisor as the FreeBSD and CentOS boxes, I was running a Windows 8.1 VM with Wireshark. The ingress and egress ports being attacked were mirrored to the Windows 8.1 box, so that interface was experiencing the same thing as the pfSense box. Since the Windows 8.1 box was not doing stateful packet inspection, even though it was capturing every packet that came into the interface, it was unaffected by the attack. Again, same hypervisor, different results.
Also, to clarify another point, this isn't a security issue per se (though I did discuss the tenets of security, the CIA triad of confidentiality, integrity, and availability, with my security practice today...) when you take into consideration that the pfSense device, as a firewall and stateful packet inspection engine, maintains the confidentiality and integrity of everything behind it. It is not suited, by design, to withstand a DDoS like this. You would need another stateless packet device to mitigate this kind of attack. If you attempt to make pfSense a stateless device, many of the other features would not be possible because they require stateful packet inspection to perform their tasks.
So in summary, there is no solution to this issue because of what pfSense does and how it does it. Not only is this true for pfSense, it's true for any stateful packet inspection device.
-
Not only is this true for pfSense, it's true for any stateful packet inspection device.
As I stated earlier in this thread, the attack took down my Cisco whateverthehellitis cough "Business Class" router installed by my ISP. It wasn't even the device being attacked! It just had to hand the packets over to an unfortunate instance of pfSense 2.2.2 running in a VM with a Windows box running RDP behind it.
I'm wondering if someone has thought of having the firewall do something similar to greylisting if a state table or TCP connection attempt rate starts to climb.
That still wouldn't address the issue of a malicious party sucking up all your bandwidth, though, but it might give the firewall an extra layer of defense against a SYN flood type attack. I just don't know how expensive that would be, processing-wise.
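pf actually has a primitive along those lines: per-source connection-rate tracking with an overload table. Sources that open states faster than a threshold get dumped into a table and blocked. A pf.conf sketch; the interface, target, and numbers are made up:

```
table <flooders> persist
block in quick on em0 from <flooders>

# Any source opening more than 50 new connections in 5 seconds is moved
# into <flooders>; "flush global" also tears down its existing states
pass in on em0 proto tcp to 192.168.1.10 port 80 \
    keep state (max-src-conn 100, max-src-conn-rate 50/5, \
                overload <flooders> flush global)
```

The catch for this thread: against a spoofed SYN flood it doesn't help much, since every packet claims a different source address. It's mainly useful against real hosts hammering you.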
-
Being susceptible to DDoS is not inherent to stateful firewalls; it's about not having a slow path that kills the machine. The fast path is existing states. If the slow path really has to be as bad as it is, say 1000 times slower, then have it give up when it spends too much time. Drop the packets for the non-existent states; don't allow existing states to be punished by blocked states.
nutshell: the slow path is a pathological corner case that can be triggered on demand, so make it lower priority so it doesn't blow stuff up. Existing states should not be affected.
edit: a lot of what I do involves Big-O scaling, edge and corner cases, and making sure the worst case allows the system to function in a well-defined limp mode. Rule of thumb: modern computers have way more CPU and memory than internet bandwidth. If your network breaks before running out of bandwidth, something is incorrectly designed.
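To make that concrete, here is a toy model of a budget-limited slow path (all names and numbers are hypothetical, and this is nothing like the real pf code): lookups for existing flows are cheap and always serviced, while state creation draws from a per-tick work budget and is shed once the budget runs out.

```python
class StatefulFilter:
    """Toy model: fast path for existing flows, budgeted slow path for new ones."""

    def __init__(self, slow_budget_per_tick=5):
        self.states = set()                 # established flows: O(1) membership test
        self.budget = slow_budget_per_tick  # slow-path work units allowed per tick
        self.spent = 0

    def new_tick(self):
        """Replenish the slow-path budget (called once per scheduling tick)."""
        self.spent = 0

    def handle(self, flow, new_state_cost=1):
        if flow in self.states:             # fast path: existing state, never starved
            return "pass"
        if self.spent + new_state_cost > self.budget:
            return "drop"                   # slow path over budget: shed the new state
        self.spent += new_state_cost        # pay the expensive state-creation cost
        self.states.add(flow)
        return "pass"
```

Under a SYN flood the attacker exhausts the per-tick budget and its packets get dropped, but established flows keep passing, which is exactly the well-defined limp mode described above.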