What is the biggest attack, in Gbps, that you have stopped?
-
Normally we only run the dashboard to monitor traffic and packet loss.
If we need to, we run two more tabs (System Activity and pfInfo).
On the console, go into (8) Shell and type "top -P". The capital P is needed.
Then you can monitor the CPUs from there if your tabs crash.
The "-P" flag also removes a very important CPU metric from top. Just run top without "-P" and you should see a third line on the screen for aggregate CPU metrics. That is a critically important set of metrics.
You should also be watching the interrupts the interface is seeing, because interrupts are equally important.
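If the GUI tabs do die under load, a few console commands cover the same ground. A minimal sketch from the shell (option 8); nothing here is pfSense-specific:

  top -P              # one line per CPU; plain "top" keeps the single aggregate CPU line instead
  vmstat -i           # total interrupts per device since boot
  systat -vmstat 1    # live view, including interrupts per second per device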
-
Being susceptible to DDoS is not inherent to stateful firewalls; it's about not having a slow path that kills the machine. The fast path is existing states. If the slow path really has to be as bad as it is, say 1000 times slower, then have it give up when it spends too much time. Drop the packets for non-existent states; don't allow existing states to be punished by blocked ones.
In a nutshell: the slow path is a pathological corner case that can be triggered on demand, so make it lower priority so it doesn't blow everything up. Existing states should not be affected.
edit: a lot of what I do involves Big-O scaling, edge and corner cases, and making sure the worst case leaves the system functioning in a well-defined limp mode. Rule of thumb: modern computers have far more CPU and memory than Internet bandwidth. If your network breaks before you run out of bandwidth, something is designed incorrectly.
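For what it's worth, stock pf does ship a crude approximation of that limp mode: a hard state limit plus adaptive timeouts that expire states faster as the table fills. It is not the slow-path deprioritisation I'm describing, but it is the closest built-in knob. A sketch for plain FreeBSD pf, with illustrative numbers only (pfSense regenerates pf.conf itself, so there you would set the equivalents under System > Advanced instead):

  cat >> /etc/pf.conf <<'EOF'
  set limit states 1000000
  set timeout { adaptive.start 600000, adaptive.end 1200000 }
  EOF
  pfctl -f /etc/pf.conf   # reload the ruleset
  pfctl -st               # confirm the adaptive timeout values took effect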
Not entirely true.
The first test, and the screen shots are on this thread, filled the state table. First capacity limit hit, interface goes down. Second test, screen shot again posted, created an IRQ interrupt storm. That is a hardware issue (probably a driver issue, but I'll explain more). IRQ interrupt capacity hit, interface goes down.
An IRQ interrupt storm can be generated by any piece of hardware; Google it for some interesting examples. When SSDs fail, in some cases they generate an IRQ interrupt storm, and it affects the machine in a similar fashion.
When I increased the state limit in pfSense, I hit a system limitation where the incoming data could not be consumed fast enough by the hardware and software resources. I could probably tweak this setting, but there will always be an upper limit. Set high enough and the DDOS would consume all of my bandwidth, essentially achieving the same thing: encumbering the interface.
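If you want to watch that limit being hit during a test, pf reports the state table against its hard limit in real time; a quick sketch:

  pfctl -si | grep -A3 "State Table"   # current entries, searches, inserts, removals
  pfctl -sm                            # the hard limits in force (states, src-nodes, frags, table-entries)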
Some good reading if you want to tweak the performance of FreeBSD: https://calomel.org/freebsd_network_tuning.html
https://forums.freebsd.org/threads/igb-interrupt-storm-detected.9271/
http://www.keil.com/forum/21608/
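To give a flavour of what those guides have you change (values below are examples only, not recommendations, and the loader tunables need a reboot):

  # /boot/loader.conf.local on pfSense (or /boot/loader.conf on plain FreeBSD):
  #   kern.ipc.nmbclusters="262144"    # more mbuf clusters for the NIC rings
  #   hw.igb.rxd="4096"                # larger igb receive descriptor rings
  #   hw.igb.txd="4096"
  # runtime sysctls:
  sysctl net.inet.ip.intr_queue_maxlen=2048   # deeper IP input queue
  sysctl net.inet.ip.intr_queue_drops         # read-only counter; non-zero means the queue is overflowing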
From this link: http://conferences.sigcomm.org/imc/2010/papers/p206.pdf
"A packet’s journey through the capturing system begins at the network interface card (NIC). Modern cards copy the packets into the operating systems kernel memory using Di- rect Memory Access (DMA), which reduces the work the driver and thus the CPU has to perform in order to transfer the data into memory. The driver is responsible for allocat- ing and assigning memory pages to the card that can be used for DMA transfer. After the card has copied the captured packets into memory, the driver has to be informed about the new packets through an hardware interrupt. Raising an interrupt for each incoming packet will result in packet loss, as the system gets busy handling the interrupts (also known as an interrupt storm). This well-known issue has lead to the development of techniques like interrupt mod- eration or device polling, which have been proposed several years ago [7, 10, 11]. However, even today hardware inter- rupts can be a problem because some drivers are not able to use the hardware features or do not use polling—actually, when we used the igb driver in FreeBSD 8.0, which was re- leased in late 2009, we experienced bad performance due to interrupt storms. Hence, bad capturing performance can be explained by bad drivers; therefore, users should check the number of generated interrupts if high packet loss rates are observed.
The driver’s hardware interrupt handler is called imme- diately upon the reception of an interrupt, which interrupts the normal operation of the system. An interrupt handler is supposed to fulfill its tasks as fast as possible. It therefore usually doesn’t pass on the captured packets to the operating systems capturing stack by himself, because this operation would take to long. Instead, the packet handling is deferred by the interrupt handler. In order to do this, a kernel thread is scheduled to perform the packet handling in a later point in time. The system scheduler chooses a kernel thread to perform the further processing of the captured packets ac- cording to the system scheduling rules. Packet processing is deferred until there is a free thread that can continue the packet handling.
As soon as the chosen kernel thread is running, it passes the received packets into the network stack of the operat- ing system. From there on, packets need to be passed to the monitoring application that wants to perform some kind of analysis. The standard Linux capturing path leads to a subsystem called PF PACKET; the corresponding system in FreeBSD is called BPF (Berkeley Packet Filter). Improve- ments for both subsystems have been proposed."
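Since the paper calls out interrupt moderation and the igb driver specifically, it is worth knowing that FreeBSD's stock igb driver exposes its moderation behaviour as loader tunables; a sketch (tunable names are from the stock driver and may differ between driver versions):

  # /boot/loader.conf.local, then reboot:
  #   hw.igb.enable_aim="1"              # adaptive interrupt moderation on/off
  #   hw.igb.max_interrupt_rate="8000"   # cap on interrupts per second per queue
  # afterwards, watch whether the per-queue rates still spike under load:
  vmstat -i | grep igb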
"The first test, and the screen shots are on this thread, filled the state table. First capacity limit hit, interface goes down."
The interface should not go down because there is no more room for states. Can't add a new state, do nothing, packet is ignored. Like I said, bad design.
" Second test, screen shot again posted, created an IRQ interrupt storm. That is a hardware issue (probably a driver issue, but I'll explain more). IRQ interrupt capacity hit, interface goes down."
Exactly, hardware issue, not a lack of resources, but bad design.
"Set high enough and the DDOS would consume all of my bandwidth, essentially achieving the same thing: encumbering the interface."
This is the ONLY case where a DDoS should take down a system: you ran out of bandwidth. All other cases are because someone should be kicked in the head. Easy for me to say from my chair with hindsight, but true nonetheless.
In the end, a lot of these types of issues will go away with changes coming in FreeBSD 11 and later. There are some major changes being talked about that will effectively allow near-linear scaling for SMP, which forces them to re-examine a lot of their algorithms. I still think the issue is probably with NAT forwarding, which is really not part of the network stack in the "normal" way. Kind of a hack.
-
I have, and it's clear as day.
When folks run their testing, your box will be taken down with default settings. Increase the state table to 5M and the interface will still go down, but you should see an IRQ interrupt storm; the interface goes down for obvious reasons.
I've run several tests with SM using different routes, OSes, pfSense settings, and I had a quick conversation with the devs about my findings. This is very much a FreeBSD/PF/network driver issue.
Also, it's clear you didn't read any of the information I posted or any of the links. I've probably spent about 100 hours so far working through this, trying to help people understand what the issue actually is. But if people want to go down this road on their own, there's nothing I can do to stop that. I just wanted to help you not waste your time.
You've seen (and others have seen) the screenshot showing the interrupt storm. Interrupt storms affect all OSes to some degree or another, so I'd tend to agree with the statement that it's a FreeBSD issue rather than a pfSense issue, although we can tune pfSense, and thus FreeBSD, by altering some of the settings (System Tunables) via pfSense.
Ultimately, all OSes are just running in a tight loop; you add stuff to it, like a driver via an interrupt, and you trigger a whole load of additional code which then needs to run. How well that code is written determines whether we see pfSense (or any other firewall running on a variety of OSes) get taken out or not, as it depends on a variety of factors: was the driver code written with multi-core CPUs in mind, and likewise, further up the code base, has that been written to be multi-threaded or not?
Has anyone tried the FreeBSD links I've posted which show how to tune FreeBSD?
DTrace will help isolate the affected code in FreeBSD, and it might show us the variables we can tune to reduce the incidence of an interrupt storm, but IMO we need to be focusing on the interrupt storm as the cause; everything else we have seen is just a symptom of the underlying problem.
-
When I run the attacks on myself, which I do all the time to harden the damn thing, I have never ever had an interrupt error on the console.
I hardly see any interrupt load (0-2%) when hit, and that's not the issue IMHO.
The issue is something that acts as a bottleneck on the way through NAT.
It's no issue when NAT is not there and the traffic hits a blocked FW. When it does NAT, it crashes (one core hits 100%) and packet loss is observed.
So the difference between no NAT and NAT is what takes it offline.
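When that single core pegs, this is roughly how to see which kernel thread is eating it (standard FreeBSD tools, nothing pfSense-specific):

  top -HSP            # -H shows threads, -S shows kernel/system threads, -P one line per CPU
  netstat -Q          # netisr dispatch policy and per-protocol queue drop counters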
-
When I run the attacks on myself, which I do all the time to harden the damn thing, I have never ever had an interrupt error on the console.
I hardly see any interrupt load (0-2%) when hit, and that's not the issue IMHO.
The issue is something that acts as a bottleneck on the way through NAT.
It's no issue when NAT is not there and the traffic hits a blocked FW. When it does NAT, it crashes (one core hits 100%) and packet loss is observed.
So the difference between no NAT and NAT is what takes it offline.
So NAT takes the inbound packet, rewrites portions of the header, and probably redoes checksums; then does it push the mbuf back onto the stack, where it gets fed into PF processing again, or does it just continue running the PF rules? If it's redoing checksums, is that being offloaded to hardware or is software doing it?
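For the checksum half of that, at least the offload state is easy to inspect and toggle from the shell; a sketch (igb0 is a placeholder for whichever interface is the WAN, and on pfSense the supported switch is the "Disable hardware checksum offload" option under System > Advanced):

  ifconfig igb0 | grep options       # TXCSUM/RXCSUM in the options line means offload is active
  ifconfig igb0 -txcsum -rxcsum      # turn it off to force software checksums, for testing only
  pfctl -sn                          # dump the NAT rules pf is actually running
  pfctl -vvsn                        # same, with per-rule evaluation and packet counters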
-
DTrace will help isolate the affected code in FreeBSD, and it might show us the variables we can tune to reduce the incidence of an interrupt storm, but IMO we need to be focusing on the interrupt storm as the cause; everything else we have seen is just a symptom of the underlying problem.
Yes, this is definitely the next step. The guidance I received from the devs is to get a FreeBSD 10.1 image with DTrace to identify the issue in FreeBSD and subsequently FreeBSD/PF. However, there are certain features of DTrace that are not enabled by default, so it may/will require recompiling the kernel so you can capture those things. I was informed that "there are no dtrace probes currently in sys/netpfil, (look for SDT_PROVIDER_DEFINE) so you’ll be starting from scratch."
Again, it needs to be determined that the issue is not in FreeBSD 10.1 before you troubleshoot pfSense. What's the point in putting any effort into trying to remediate this in pfSense when that may have no bearing on the issue?
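Once a kernel with the DTrace hooks is built, even without SDT probes in sys/netpfil the fbt and profile providers should be enough for a first pass; a rough sketch (pf_test is the pf entry function in the FreeBSD source, and these one-liners assume the dtrace modules are loaded):

  kldload dtraceall                                                # load the DTrace providers if not already present
  dtrace -n 'fbt::pf_test:entry { @calls[probefunc] = count(); }'  # how often pf's entry point fires
  dtrace -n 'profile-997 /arg0/ { @stacks[stack()] = count(); }'   # kernel stack profile; feeds flame graphs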
-
I'll get a bare-metal FreeBSD test box up by this evening. It should be a beefy enough box: a Xeon W3565 with 4GB of RAM and a GigE NIC. I already have one loaded on my ESXi host.
We can test tomorrow.
-
This is very interesting!
Restarting services one by one to test. The first ones don't do anything: three attacks while restarting DNS, NTP and apinger change nothing.
Restarting Snort does it. A Snort restart does something to the firewall that the others don't. It's replicable, since the last two attacks fared in a similar fashion.
I have asked Bmeeks to tell us what Snort does to pfSense when restarted and what it resets.
Suddenly it's able to handle the traffic again (much nicer graphs).
Why??
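While we wait on that answer, one way to see what the restart actually changes on the pf side would be to snapshot pf before and after it; a rough sketch (snort2c is the table the pfSense Snort package uses for its blocks, assuming a stock package install, and the file names are just placeholders):

  pfctl -si > /tmp/before.txt            # state counts and counters before the restart
  pfctl -ss | wc -l                      # number of states right now
  pfctl -t snort2c -T show | wc -l       # hosts currently blocked by Snort
  # restart Snort from Services, rerun the three commands above, then:
  pfctl -si > /tmp/after.txt
  diff -u /tmp/before.txt /tmp/after.txt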
-
The "Pasta Method" of troubleshooting–throwing things against the wall to see what sticks--won't provide any value in identifying the root cause and resolving the issue. IMHO it's a waste of time.
Eliminate the most basic things first--FreeBSD, PF, and then pfSense. If you can recreate the issue in FreeBSD, poof, there's where the issue resides. Troubleshooting pfSense when the issue could be in FreeBSD is really a waste of time.
-
ok
-
When I run the attacks on myself, which I do all the time to harden the damn thing, I have never ever had an interrupt error on the console.
I hardly see any interrupt load (0-2%) when hit, and that's not the issue IMHO.
The issue is something that acts as a bottleneck on the way through NAT.
It's no issue when NAT is not there and the traffic hits a blocked FW. When it does NAT, it crashes (one core hits 100%) and packet loss is observed.
So the difference between no NAT and NAT is what takes it offline.
What model NIC are you using, and are you running pfSense on ESXi, thus going through ESXi's network stack?
The reason I ask is that some NICs can handle some of the network functionality that would otherwise be handled by the OS.
I wonder what NIC was in use when the photo showing the interrupt storm message was taken; it would be helpful to compare the difference in functionality to see what work is offloaded to the OS and what is being handled by the NIC. Mer is wondering the same in the quote a bit further down, with the checksum sentence.
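Comparing what each NIC claims it can do is straightforward, by the way; -m makes ifconfig print the supported capabilities next to the ones currently enabled (interface names here are just examples):

  ifconfig -m igb0     # "capabilities" = what the driver supports, "options" = what is enabled
  ifconfig -m em0      # the emulated E1000 under ESXi shows up as em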
@mer:
So NAT takes the inbound packet, rewrites portions of the header, and probably redoes checksums; then does it push the mbuf back onto the stack, where it gets fed into PF processing again, or does it just continue running the PF rules? If it's redoing checksums, is that being offloaded to hardware or is software doing it?
DTrace will help isolate the affected code in FreeBSD, and it might show us the variables we can tune to reduce the incidence of an interrupt storm, but IMO we need to be focusing on the interrupt storm as the cause; everything else we have seen is just a symptom of the underlying problem.
Yes, this is definitely the next step. The guidance I received from the devs is to get a FreeBSD 10.1 image with DTrace to identify the issue in FreeBSD and subsequently FreeBSD/PF. However, there are certain features of DTrace that are not enabled by default, so it may/will require recompiling the kernel so you can capture those things. I was informed that "there are no dtrace probes currently in sys/netpfil, (look for SDT_PROVIDER_DEFINE) so you’ll be starting from scratch."
Again, it needs to be determined that the issue is not in FreeBSD 10.1 before you troubleshoot pfSense. What's the point in putting any effort into trying to remediate this in pfSense when that may have no bearing on the issue?
There's no reason why we couldn't set up FreeBSD with some basic functionality and build up from there. Setting up pf tables, for example, is not difficult, but it is time-consuming, which is why we use pfSense: a lot of the functionality is set up for us.
Maybe it would be quicker to compare the XML backups of all those affected to isolate the differences between installations?
MS does a nice free XML Notepad app which makes it easier: a dual-pane view with the tree on the left and the XML properties on the right makes it fairly quick and easy to modify XML files. I know text-compare apps exist which are useful for comparing differences in program code/HTML/text between versions, so maybe that would be a quicker and easier approach to take?
The XML backup compare approach would be quickest IMO; we could have a few people comparing the XML backups whilst a few others work on setting up FreeBSD from the ground up, or try a different approach in parallel?
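For the comparison itself, the command-line route works too if you'd rather not use XML Notepad; a sketch (file names are placeholders, and xmllint comes with libxml2 on a desktop machine):

  xmllint --format config-box-a.xml > a.xml    # pretty-print both backups so the diff lines up
  xmllint --format config-box-b.xml > b.xml
  diff -u a.xml b.xml | less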
Another option is DTrace, but the setup time is unknown, as is the setup time for FreeBSD, and it will likely be more than trying to get DTrace running on pfSense; we might also be able to set up the flame graphs mentioned earlier in the thread more quickly than either DTrace or FreeBSD.
I personally would like to see DTrace in pfSense, as I think that would be a better low-level form of functionality to have in pfSense going forward if you are Western, or going backwards if you're from the South American continent (different regions of the planet view the future differently, i.e. some see it as a path laid out in front whilst others see it as a path behind their head, as it's unknown what the future holds, but I digress). ;)
So who's in favour of what?
Thoughts? Otherwise we will end up going around in circles and nothing gets achieved, with none of us any the wiser as to what's happening and no solution found (if one can even be found in this current version of pfSense; we don't know if it might be resolved in FreeBSD 11, just to chuck that variable in as well).
Lots of variables, but we need to sort out a plan, otherwise I can only insert the old cliché "if we don't plan then we plan to fail". ;D
-
The "Pasta Method" of troubleshooting–throwing things against the wall to see what sticks--won't provide any value in identifying the root cause and resolving the issue. IMHO it's a waste of time.
Eliminate the most basic things first--FreeBSD, PF, and then pfSense. If you can recreate the issue in FreeBSD, poof, there's where the issue resides. Troubleshooting pfSense when the issue could be in FreeBSD is really a waste of time.
I agree. This is the first I've heard that Snort was also being used in this. What version of Snort is in use, as a new version has been released over the last few months?
May I propose we all submit XML backups, and I can compare the differences in the XML to find the elements common to all those affected?
I think this will be the quickest way to resolve, or at least potentially eliminate, the odd things and find the common elements which might be affecting things.*
* I say "might" because if it's low level, as in deep in the FreeBSD OS, different packages or parts of the system may be calling the same parts of the OS at a low level, so comparing the XML backups is not 100% foolproof, but it's a start which shouldn't take too much time.
Don't worry about the encryption in the XML, it can be broken easily enough, so it's best to blank your passwords and anything else you want to keep private; don't say I didn't warn you. ;)
-
Emulating E1000 on Intel Dual Port server adapter on ESXi 4.1 U3.
Intel code is: E1G42ETBLK
-
I installed Snort quite late in the process and it didn't matter to the performance or the issue at hand.
Until I restarted it…
-
Emulating E1000 on Intel Dual Port server adapter on ESXi 4.1 U3.
Intel code is: E1G42ETBLK
That's quite old, at least a couple of years, and that's got a bug in it where it can be hacked from the network stack, IIRC?
Re testing the FW, I'm still getting set up at this end based on what I noticed last night and posted, so I'm still double-checking the rules and that the system is OK before the test. What time are you going home tonight? Don't forget the timezone you're in.
-
CET +1 is the timezone.
I believe they solved that one in U2 or U3.
Going out to eat tonight at 7.30 PM local time.
-
Got your XML file; I'll compare that to mine and any others which get PM'ed, and I'll try to build a table to show the differences and the common elements, to hopefully make it easier to solve.
I'm still curious to see if my home FW running pfSense 2.2.2 can be taken out, so if you wanted to do a quick test, my IP is 2.101.3.83. I haven't had a chance to set up Skype yet, as I've still got to get my mail server up and running, and I don't allow ping, so that's not something that needs to be in the test. I've got a VM recording the dashboard, pfInfo and System Activity, plus I'm also using a packet capture (full, unlimited, on the WAN) so I can see what's going on. If the system falls down, the ISP will automatically assign a new IP address, so for the moment 2.101.3.83 is mine to play with.
Drop me a PM to say when you're done, and I'll PM back to let you know whether I detect any problems here, either way.
Edit.
I had PM'ed the above, so something got screwy with the forum comms for it to appear here, which also explains my post at https://forum.pfsense.org/index.php?topic=94573.0 (weird, as I can access the forum via a free VPN no problem but can no longer access it directly), unless Supermule's XML backup triggered a Snort alert which is now blocking that machine; I'll have to check in a moment.
Anyway, have you run the script against my IP address yet? I'm still on that IP address and nothing appears to have happened if you have, so let us know, Supermule, whether you have run the script or not.
Much obliged.
-
Anyway, still nothing has happened at this end, but Supermule did say he was going out to eat tonight, whenever that is, so for now I'm still on that IP address if SM pops his head back in later tonight. I'll update the IP address when it changes next.
If anyone else affected by this scan wants to PM me their pf XML backup file, I can do a comparison to see which elements are common and which are exclusive to you, so hopefully we can start to narrow down the what, where and when.
Just remember to blank the bits you want to keep private, as encryption and the like could be useful to the wrong people, etc. Once I've got them compiled I can do a table, without names, showing the common bits; we can then test an example with the common bits, see if it falls down, and go from there to narrow it down further in the absence of anything else like DTrace, flame graphs et al.
Edit.
The ISP has forced an IP change, so when Supermule touches base again I'll pass on the latest IP address. The food must be good. :)
-
Anyway, still nothing has happened at this end, but Supermule did say he was going out to eat tonight, whenever that is, so for now I'm still on that IP address if SM pops his head back in later tonight. I'll update the IP address when it changes next.
If anyone else affected by this scan wants to PM me their pf XML backup file, I can do a comparison to see which elements are common and which are exclusive to you, so hopefully we can start to narrow down the what, where and when.
Just remember to blank the bits you want to keep private, as encryption and the like could be useful to the wrong people, etc. Once I've got them compiled I can do a table, without names, showing the common bits; we can then test an example with the common bits, see if it falls down, and go from there to narrow it down further in the absence of anything else like DTrace, flame graphs et al.
Edit.
The ISP has forced an IP change, so when Supermule touches base again I'll pass on the latest IP address. The food must be good. :)
Maybe it's not the food but the beer or wine? ;D
-
Just got home from a nice dinner with friends and it's 3.39 AM here :D
Going to bed; I will have a look during the day tomorrow (Saturday).
Got your XML file; I'll compare that to mine and any others which get PM'ed, and I'll try to build a table to show the differences and the common elements, to hopefully make it easier to solve.
I'm still curious to see if my home FW running pfSense 2.2.2 can be taken out, so if you wanted to do a quick test, my IP is 2.101.3.83. I haven't had a chance to set up Skype yet, as I've still got to get my mail server up and running, and I don't allow ping, so that's not something that needs to be in the test. I've got a VM recording the dashboard, pfInfo and System Activity, plus I'm also using a packet capture (full, unlimited, on the WAN) so I can see what's going on. If the system falls down, the ISP will automatically assign a new IP address, so for the moment 2.101.3.83 is mine to play with.
Drop me a PM to say when you're done, and I'll PM back to let you know whether I detect any problems here, either way.
Edit.
I had PM'ed the above, so something got screwy with the forum comms for it to appear here, which also explains my post at https://forum.pfsense.org/index.php?topic=94573.0 (weird, as I can access the forum via a free VPN no problem but can no longer access it directly), unless Supermule's XML backup triggered a Snort alert which is now blocking that machine; I'll have to check in a moment.
Anyway, have you run the script against my IP address yet? I'm still on that IP address and nothing appears to have happened if you have, so let us know, Supermule, whether you have run the script or not.
Much obliged.