What is the biggest attack, in Gbps, that you've stopped?
-
I haven't got a clue where to begin or where to look.
I can't see what's using the core…
What do you make of this? Core nr. 4 says it's idle at 100%
-
So have you identified the code that's running on the core that maxes out when this happens?
I'm almost certain the issue is with the network driver in FreeBSD, and PF is a contributing factor as well.
When my state table is low (394K) the attack cripples the entire box with the exception of the console. PF alerts in the console that it hit its state table max. I'm not sure why a full state table creates a more significant impact on the box, but it does.
When I increase the state table I get the IRQ warning; the interrupt storm. This disables the interface being attacked, most likely because the interface grabs one CPU/core and fills it with software interrupts. PF takes all packets and puts them through the CPU, and in this case it would/should grab only one CPU. This makes sense because you don't want IRQ polling across all CPUs (I have a link to an excellent article regarding this design, I'll find it in a few). So generating an interrupt storm on any interface should max out one CPU and take that interface down because of the interrupts being generated. The CPU does not have the resources to process legitimate requests because it's overwhelmed with interrupts.
I guess that the network driver would have to include code to drop these packets before they got to the OS/kernel. Once the kernel gets involved in processing these packets, it generates the interrupt storm, bogs down one CPU, and the interface goes down.
I also assume that there is probably some performance tuning I can do in pfSense, but I think the issue is at a lower level than that. If I have time this weekend, I'll spin up a FreeBSD 10.1 box running PF to validate these assumptions, but I have a strong feeling it's the networking driver that's creating this issue by passing every packet off to the kernel and PF for processing.
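For reference, the kind of tuning I mean would look something like this. All knob names are from stock FreeBSD, but the values are pure guesses on my part, not recommendations; test on a lab box first:

```
# /boot/loader.conf (hypothetical values)
hw.igb.num_queues=0          # let the igb driver use one queue per core
net.isr.maxthreads=-1        # allow netisr threads on all cores

# /etc/sysctl.conf (hypothetical values)
kern.ipc.nmbclusters=262144          # more mbuf clusters so NIC rings can drain
net.inet.ip.intr_queue_maxlen=2048   # deeper IP input queue before drops
```

The idea is to spread interrupt and netisr work across cores instead of letting one core absorb the whole storm, which matches the single-maxed-core symptom above.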
-
I disabled Device Polling and the 100% usage of Core nr. 4 went away instantly.
Here is the console with top -p running, first with no attack and then during the SYN attack.
The first one is with no port forward, and the box is fine.
The second is with a port forward, and it dies.
![top-p_no portforward_SYN flood.PNG](/public/imported_attachments/1/top-p_no portforward_SYN flood.PNG)
![top-p_WITH_portforward_SYN flood.PNG](/public/imported_attachments/1/top-p_WITH_portforward_SYN flood.PNG)
-
It's possible that when the state table is full and a packet for yet another new state comes in, and it's all being processed on the same thread, the new state on a full table triggers some sort of "clean up" in an attempt to make room, and that clean-up is really expensive to be doing per packet.
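A toy model of that theory (this is not PF's actual code, just a sketch of why a per-packet purge would hurt): once the table is full, every new-state packet pays for a full scan of the table looking for expired entries.

```python
# Toy state table: key -> expiry time. TABLE_MAX is a made-up limit.
TABLE_MAX = 50_000

def insert_with_naive_purge(table, key, now):
    """Insert a new state; if the table is full, do an O(n) clean-up first."""
    if len(table) >= TABLE_MAX:
        # The expensive part: scan EVERY entry on EVERY new-state packet
        # while the table is full. An attacker keeps the table full, so
        # this worst case runs continuously.
        expired = [k for k, expiry in table.items() if expiry < now]
        for k in expired:
            del table[k]
        if len(table) >= TABLE_MAX:
            return False  # still full: drop the new state
    table[key] = now + 30.0  # arbitrary 30s TTL
    return True
```

If the clean-up were amortized (e.g. a background sweep, or purging a small bounded batch per insert) the per-packet cost would stay constant instead of scaling with the table size.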
-
I think you mentioned this before, but I just want to make sure. When you say with/without forwarding, do you mean when targeting the forwarded port or just forwarding in general?
If it's forwarding in general, maybe it's NAT that's causing some or all of the issues. NAT is single-threaded, and in order to forward ports it needs to rewrite the header information before the firewall sees the packet. That gives us a single chunk of code acting as gatekeeper to the firewall, and it's single-threaded to boot.
When you have no forwarding rules, NAT doesn't even need to be checked. But if you have one or more rules, NAT has to check every new-state packet that comes in: single-threaded, lots of locking.
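To illustrate the path I mean, here's roughly what a pfSense port forward looks like at the pf.conf level (interface and host names are made up for the example):

```
ext_if   = "igb0"
rdp_host = "192.168.1.10"

# The rdr rule rewrites the destination BEFORE the filter rules see the
# packet, so every new-state packet passes through NAT translation first...
rdr on $ext_if proto tcp from any to ($ext_if) port 3389 -> $rdp_host

# ...and only then does the pass rule create the state.
pass in on $ext_if proto tcp to $rdp_host port 3389 keep state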
edit: I see syslogd using a lot of CPU, are you logging blocked packets? May want to disable that during the test.
-
It's also important to note that SM is running pfSense as a VM and I am running it on bare metal. This can impact the way it handles network traffic.
https://lists.freebsd.org/pipermail/freebsd-net/2015-March/041657.html
-
Okay, the script fucked my unbound; it lost its PID and couldn't start… I had to revert to the DNS forwarder to get internet access back...
Tim and Anthony are a great help! Getting closer....
-
Tomorrow we should be able to test on a real business class network, instead of my crappy Comcast CPE that dies. A fiber optic Internet connection through a Cisco switch port. I'll lug the VM box up there, and also see what I can scare up for bare metal…hopefully more than an unfurled tinfoil hat.
-
I haven't got a clue where to begin and where to look.
I can't see what's using the core…
What do you make of this? Core nr. 4 says it's idle at 100%
I was going to say DTrace might be a good start, but then I saw this. https://forum.pfsense.org/index.php?topic=94260.0
-
So have you identified the code that's running on the core that maxes out when this happens?
I'm almost certain the issue is with the network driver in FreeBSD, and PF is a contributing factor as well.
When my state table is low (394K) the attack cripples the entire box with the exception of the console. PF alerts in the console that it hit its state table max. I'm not sure why a full state table creates a more significant impact on the box, but it does.
What makes you say that?
Difficult to tell, really, without dtrace running, wouldn't you say?
-
Yes. It would be great to have better logging tools built into pfSense.
We are fighting a weird battle right now.
Sometimes it handles the attacks fine, then the same config crashes instantly on the same attack seconds later.
The only difference on my system is that core number 4 hits 100%. When that happens, the box goes down and packet loss occurs.
When it doesn't, it can handle the attack. I have 8 cores and I can't see what uses that specific core.
-
I haven't got a clue where to begin and where to look.
I can't see what's using the core…
What do you make of this? Core nr. 4 says it's idle at 100%
I was going to say DTrace might be a good start, but then I saw this. https://forum.pfsense.org/index.php?topic=94260.0
You can load the DTrace modules included with the FreeBSD distribution. I just tried to get it working, but some trouble with the DTrace providers hindered me. After removing the /usr/lib/dtrace dir I got these not-so-clear results.
```
[2.2.2-RELEASE][admin@pfsense]/usr/lib: /usr/share/dtrace/toolkit/hotkernel
Sampling... Hit Ctrl-C to end.
dtrace: buffer size lowered to 2m
dtrace: aggregation size lowered to 2m
^C
FUNCTION                COUNT   PCNT
0xffffffff8035fbe2          1   0.0%
0xffffffff80dd0860          1   0.0%
0xffffffff80abad44          1   0.0%
0xffffffff8035fb5c          1   0.0%
0xffffffff8035fab5          1   0.0%
0xffffffff8035fbd7          1   0.0%
0xffffffff80f46fb6          1   0.0%
0xffffffff8035d4b2          1   0.0%
0xffffffff8035fbf9          1   0.0%
0xffffffff8097f4c0          1   0.0%
0xffffffff80d06b71          1   0.0%
0xffffffff8035fb8b          1   0.0%
0xffffffff80d06c28          1   0.0%
0xffffffff8035fac5          1   0.0%
0xffffffff8035fb67          1   0.0%
0xffffffff80f3275e          1   0.0%
0xffffffff8035faa0          1   0.0%
0xffffffff80f3712d          3   0.0%
0xffffffff80dd48bd          3   0.0%
0xffffffff80d069ea          8   0.0%
0xffffffff80f3726b        105   0.6%
0xffffffff80f2d8e6      17886  99.2%
```
Seems promising.
Edit: May be related: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=185290
-
The last thing Supermule said was that the problem only occurs when port forwarding is enabled in NAT. My guess is NAT, unless we can get confirmation that what I read was incorrect.
-
So have you identified the code that's running on the core that maxes out when this happens?
I'm almost certain the issue is with the network driver in FreeBSD, and PF is a contributing factor as well.
When my state table is low (394K) the attack cripples the entire box with the exception of the console. PF alerts in the console that it hit its state table max. I'm not sure why a full state table creates a more significant impact on the box, but it does.
What makes you say that?
Difficult to tell, really, without dtrace running, wouldn't you say?
Yes and no. There's enough empirical evidence to support this based on the architecture of PF.
Simply stated, PF does stateful packet inspection, therefore it must pass every packet to the kernel and then up to PF. A DDOS is a capacity-maxing attack. It doesn't matter where that capacity is (bandwidth, CPU, RAM, states, IRQ interrupts, etc.); it is designed to impair service by maxing out capacity. This particular attack will take down any device that has to do stateful packet inspection. It took down a FreeBSD 10.1/Apache 2.4 box and a CentOS 6.4/Apache 2.4 box. It also affected the hypervisor each of those systems was running on. Nothing that does stateful packet inspection is immune to this kind of attack.
As a comparison: on the same hypervisor as the FreeBSD and CentOS boxes, I was running a Windows 8.1 VM with Wireshark. The ingress and egress ports being attacked were mirrored to the Windows 8.1 box, so that interface was experiencing the same thing as the pfSense box. Since the Windows 8.1 box was not doing stateful packet inspection, and it was capturing every packet that came into the interface, that box was unaffected by the attack. Again, same hypervisor, different results.
Also, to clarify another point, this isn't a security issue per se (though I did discuss the tenets of security, CIA: confidentiality, integrity, and availability, with my security practice today...) when you take into consideration that the pfSense device, as a firewall and stateful packet inspection engine, maintains the confidentiality and integrity of everything behind it. It is not suited, by design, to withstand a DDOS like this. You would need another, stateless, packet device in front of it to mitigate this kind of attack. If you attempted to make pfSense a stateless device, many of the other features would not be possible, because they require stateful packet inspection to perform their tasks.
So in summary, there is no solution to this issue because of what pfSense does and how it does it. Not only is this true for pfSense, it's true for any stateful packet inspection device.
-
Not only is this true for pfSense, it's true for any stateful packet inspection device.
As I stated earlier in this thread, the attack took down my Cisco whateverthehellitis cough "Business Class" router installed by my ISP. It wasn't even the device being attacked! It just had to hand the packets over to an unfortunate instance of pfSense 2.2.2 running in a VM with a Windows box running RDP behind it.
I'm wondering if someone has thought of having the firewall do something similar to greylisting if the state table or the TCP connection attempt rate starts to climb.
That still wouldn't address the issue of a malicious party sucking up all your bandwidth, but it might give the firewall an extra layer of defense against a SYN-flood-type attack. I just don't know how expensive that would be, processing-wise.
-
Being susceptible to DDOS is not inherent to stateful firewalls; it's about not having a slow path that kills the machine. The fast path is existing states. If the slow path really has to be as bad as it is, like 1000 times slower, then have it give up when it spends too much time. Drop the packets for the non-existent states; don't allow existing states to be punished by blocked states.
In a nutshell: the slow path is a pathological corner case that can be triggered on demand, so make it lower priority so it doesn't blow stuff up. Existing states should not be affected.
edit: a lot of what I do involves Big-O scaling, edge and corner cases, and making sure the worst case allows the system to function in a well-defined limp mode. Rule of thumb: modern computers have way more CPU and memory than internet bandwidth. If your network breaks before running out of bandwidth, something is incorrectly designed.
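The fast-path/slow-path split above can be sketched like this (my framing, not PF internals; the budget number is made up): existing states are a cheap lookup that is always served, while state creation gets a per-tick budget and sheds load beyond it.

```python
SLOW_PATH_BUDGET = 1000  # max new states per scheduling tick (made-up number)

def handle_tick(packets, states, budget=SLOW_PATH_BUDGET):
    """Process one tick of packets against a set of existing state keys.

    Returns (passed, dropped). Existing states always pass; new states
    pass only while the slow-path budget lasts.
    """
    passed, dropped = 0, 0
    for pkt in packets:
        if pkt in states:        # fast path: existing state, always served
            passed += 1
        elif budget > 0:         # slow path: ruleset eval + state insert
            states.add(pkt)
            budget -= 1
            passed += 1
        else:                    # budget exhausted: shed only NEW states
            dropped += 1
    return passed, dropped
```

The point of the design is visible in the accounting: a flood of new-state packets exhausts the budget and gets dropped, but traffic on established states is never touched.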
-
Super, maybe this weekend we can test me turning off port forwarding and testing again, this time I'll have my console up.
-
Being susceptible to DDOS is not inherent to stateful firewalls; it's about not having a slow path that kills the machine. The fast path is existing states. If the slow path really has to be as bad as it is, like 1000 times slower, then have it give up when it spends too much time. Drop the packets for the non-existent states; don't allow existing states to be punished by blocked states.
In a nutshell: the slow path is a pathological corner case that can be triggered on demand, so make it lower priority so it doesn't blow stuff up. Existing states should not be affected.
edit: a lot of what I do involves Big-O scaling, edge and corner cases, and making sure the worst case allows the system to function in a well-defined limp mode. Rule of thumb: modern computers have way more CPU and memory than internet bandwidth. If your network breaks before running out of bandwidth, something is incorrectly designed.
Not entirely true.
The first test (the screenshots are in this thread) filled the state table. First capacity limit hit, interface goes down. The second test (screenshot again posted) created an IRQ interrupt storm. That is a hardware issue (probably a driver issue, but I'll explain more below). IRQ interrupt capacity hit, interface goes down.
An IRQ interrupt storm can be generated by any piece of hardware. Google it for some interesting examples. When SSDs fail, in some cases they generate an IRQ interrupt storm, and it affects the machine in a similar fashion.
When I increased the state limit in pfSense, I hit a system limitation where the incoming data could not be consumed fast enough by the hardware and software resources. I could probably tweak this setting, but there will always be an upper limit. Set high enough and the DDOS would consume all of my bandwidth, essentially achieving the same thing: encumbering the interface.
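Just to put a number on "there will always be an upper limit," here's the rough memory arithmetic. The per-state size is an assumption on my part (the real pf state struct varies by version); the point is only that the cost scales linearly, so raising the limit just moves the wall:

```python
BYTES_PER_STATE = 256  # assumed per-entry cost; NOT the real pf struct size

def state_table_mib(max_states, bytes_per_state=BYTES_PER_STATE):
    """Approximate RAM (MiB) consumed by a state table at its maximum."""
    return max_states * bytes_per_state / 2**20
```

So under that assumption a 394K-state table costs on the order of 100 MiB, and a million states around 250 MiB; memory isn't the bottleneck here, the per-packet processing is, which is why the box falls over long before RAM runs out.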
Some good reading if you want to tweak the performance of FreeBSD: https://calomel.org/freebsd_network_tuning.html
https://forums.freebsd.org/threads/igb-interrupt-storm-detected.9271/
http://www.keil.com/forum/21608/
From this link: http://conferences.sigcomm.org/imc/2010/papers/p206.pdf
"A packet’s journey through the capturing system begins at the network interface card (NIC). Modern cards copy the packets into the operating systems kernel memory using Direct Memory Access (DMA), which reduces the work the driver and thus the CPU has to perform in order to transfer the data into memory. The driver is responsible for allocating and assigning memory pages to the card that can be used for DMA transfer. After the card has copied the captured packets into memory, the driver has to be informed about the new packets through an hardware interrupt. Raising an interrupt for each incoming packet will result in packet loss, as the system gets busy handling the interrupts (also known as an interrupt storm). This well-known issue has lead to the development of techniques like interrupt moderation or device polling, which have been proposed several years ago [7, 10, 11]. However, even today hardware interrupts can be a problem because some drivers are not able to use the hardware features or do not use polling—actually, when we used the igb driver in FreeBSD 8.0, which was released in late 2009, we experienced bad performance due to interrupt storms. Hence, bad capturing performance can be explained by bad drivers; therefore, users should check the number of generated interrupts if high packet loss rates are observed.
The driver’s hardware interrupt handler is called immediately upon the reception of an interrupt, which interrupts the normal operation of the system. An interrupt handler is supposed to fulfill its tasks as fast as possible. It therefore usually doesn’t pass on the captured packets to the operating systems capturing stack by himself, because this operation would take too long. Instead, the packet handling is deferred by the interrupt handler. In order to do this, a kernel thread is scheduled to perform the packet handling in a later point in time. The system scheduler chooses a kernel thread to perform the further processing of the captured packets according to the system scheduling rules. Packet processing is deferred until there is a free thread that can continue the packet handling.
As soon as the chosen kernel thread is running, it passes the received packets into the network stack of the operating system. From there on, packets need to be passed to the monitoring application that wants to perform some kind of analysis. The standard Linux capturing path leads to a subsystem called PF PACKET; the corresponding system in FreeBSD is called BPF (Berkeley Packet Filter). Improvements for both subsystems have been proposed."
-
You are correct.
The last thing Supermule said was that the problem only occurs when port forwarding is enabled in NAT. My guess is NAT, unless we can get confirmation that what I read was incorrect.
-
Not only is this true for pfSense, it's true for any stateful packet inspection device.
As I stated earlier in this thread, the attack took down my Cisco whateverthehellitis cough "Business Class" router installed by my ISP. It wasn't even the device being attacked! It just had to hand the packets over to an unfortunate instance of pfSense 2.2.2 running in a VM with a Windows box running RDP behind it.
I'm wondering if someone has thought of having the firewall do something similar to greylisting if the state table or the TCP connection attempt rate starts to climb.
That still wouldn't address the issue of a malicious party sucking up all your bandwidth, but it might give the firewall an extra layer of defense against a SYN-flood-type attack. I just don't know how expensive that would be, processing-wise.
PF (at least in OpenBSD versions, and probably in the FreeBSD port) has the concept of max-rate qualifiers on rules. If you have a copy of Hansteen's "Book of PF", see p. 69. They're called "state tracking options". I think they would give you functionality similar to greylisting. I'm guessing it's implemented inside PF itself, though I'm not sure how. Rules with these rate limits (state tracking options) can dump the src address into a "block these addresses" table, and then a simple block quick early in the rule set would jump out quicker, perhaps keeping things alive.
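Roughly what that looks like in pf.conf, as far as I understand the state tracking options (table, macro, and threshold values here are made up for the example):

```
table <flooders> persist

# Evaluated early, so listed offenders short-circuit the ruleset cheaply.
block in quick from <flooders>

# Any source exceeding 100 concurrent states, or 15 new states per
# 5 seconds, gets dumped into <flooders> and its states flushed.
pass in on $ext_if proto tcp to $web_srv port 80 keep state \
    (max-src-conn 100, max-src-conn-rate 15/5, \
     overload <flooders> flush global)
```

The nice property is that the expensive decision (is this source flooding?) is made once per offender, after which the cheap quick-block rule handles every subsequent packet.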