What is the biggest attack in Gbps you have stopped?
-
System idle: 208 interrupts per second for the one queue that seems to process ICMP from my desktop. Pinging the interface with 67.3k ICMP packets/sec: 250 interrupts per second.
Packets: sent=1422309, rcvd=1422309, error=0, lost=0 (0.0% loss) in 21.131114 sec
RTTs in ms: min/avg/max/dev: 0.003 / 0.176 / 20.716 / 0.294
Bandwidth in kbytes/sec: sent=4038.525, rcvd=4038.525
33Mb/s of 64-byte ICMP packets barely made a dent.
An increase of 12% CPU time (48% CPU time on the core the queue is bound to) and a 19% increase in interrupts doesn't seem that bad for that many packets.
From a PPS and interrupt view, my system really doesn't care. Whatever attack was done before, 30Mb/s of it was bad enough that my admin interface went offline.
-
I just realized that when ping flooding my firewall, the only thing that changed was the IRQ CPU time. I looked up ICMP and it seems the ICMP response is built right into the network stack. From what I can tell, the ICMP responses are being handled on that same realtime kernel thread. If this is the case and my admin interface stopped responding to pings because of load, that means the realtime thread on a different interface couldn't even run.
One of the features of MSI-X is interrupt masks. When a "hard" interrupt occurs, the current context thread gets interrupted and the realtime thread gets scheduled. Then the thread can set a mask and block all other interrupts while it processes all of its data. Of course, blocking interrupts is bad if you don't have a way to indicate there is new work, so there are "soft" interrupts. If the hardware supports it, it can flag a shared memory location when new data is ready. When the current thread is done processing its current data, it can do one last check to see if the soft interrupt was signaled. If not, it unmasks the interrupts and returns. If it was flagged, it continues processing until all the work is done and no more soft interrupts have been signaled.
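The mask-and-poll cycle described above can be sketched in miniature. This is a toy Python model, not driver code: the deque and the `hard_irq`/`soft_irq` flags are invented stand-ins for the NIC's descriptor ring and MSI-X mask state.

```python
from collections import deque

def drain_with_masking(rx_queue, nic):
    """Toy model of the MSI-X pattern described above: a hard interrupt
    schedules this handler, which masks further hard interrupts, drains
    the queue, then re-checks a soft "new work" flag before unmasking."""
    hard_irqs_taken = 0
    processed = 0
    while nic["hard_irq"]:
        hard_irqs_taken += 1         # the hard interrupt that woke us up
        nic["hard_irq"] = False      # interrupts are now masked
        while True:
            while rx_queue:          # process everything queued so far
                rx_queue.popleft()
                processed += 1
            # one last check: did the NIC flag new work while we were busy?
            if not nic["soft_irq"]:
                break                # no new work -> unmask and return
            nic["soft_irq"] = False  # consume the soft signal, keep draining
    return processed, hard_irqs_taken

# 10,000 packets are queued, yet only one hard interrupt is ever taken:
nic_state = {"hard_irq": True, "soft_irq": False}
processed, irqs = drain_with_masking(deque(range(10_000)), nic_state)
```

The point of the sketch is the shape of the loop: one expensive context switch per burst, then cheap polling until the backlog is gone.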
It may be possible that the WAN interface is in a constant state of backlog and the realtime kernel thread never unschedules because of constant backlog, starving my admin interface from CPU time.
With that in mind, some basic cheap NICs (e.g. Realtek) might actually be less hassle than, say, an Intel NIC when troubleshooting this sort of problem.
With a minimum ping of 0.003 ms and an average of 0.176 ms, yet only ~250 interrupts per second, the i350 NIC is doing some nifty magic.
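A rough back-of-the-envelope check on those numbers (assuming the increase in interrupt rate is entirely attributable to the flood) shows just how aggressively the NIC must be coalescing:

```python
# Numbers from the test above: 67.3k ICMP packets/sec arriving, while
# interrupts only rose from ~208/sec idle to ~250/sec under flood.
pkts_per_sec = 67_300
idle_irqs, flood_irqs = 208, 250

extra_irqs = flood_irqs - idle_irqs        # interrupts attributable to the flood
pkts_per_irq = pkts_per_sec / extra_irqs   # packets serviced per extra interrupt

print(extra_irqs)           # 42
print(round(pkts_per_irq))  # 1602 -> heavy interrupt moderation/coalescing
```

On the order of 1600 packets per extra interrupt is consistent with the i350's interrupt moderation batching many received frames into a single interrupt.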
-
I will test with some offloading and other tunables in pfSense later when online again.
It's SYN packets that are spoofed and of various sizes. Most of them don't have the ACK the FW needs, so the states remain open.
It's easy to fend off an ICMP flood since it's predictable traffic. In my case, when one core hits 100% the FW goes offline and packet loss occurs. I can't see what that specific CPU does, and it would be interesting to dig deeper into that and into which process consumes the CPU. It doesn't happen when there is no port forward enabled, but as soon as it routes, it goes ballistic.
Why can one core hold back everything else??
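For what it's worth, pf does have knobs aimed at exactly the half-open-state problem described above. A hedged rule sketch (the macros, port and table name here are placeholders, not taken from this thread): `synproxy` makes pf complete the three-way handshake itself, so spoofed SYNs that never ACK never create a state entry.

```
# Hypothetical pf.conf fragment - adjust interface/ports to your setup.
# synproxy: pf answers the SYN itself and only creates a state (and
# passes traffic inward) once the handshake actually completes.
pass in on $wan_if proto tcp to $web_srv port 443 \
    flags S/SA synproxy state \
    (max-src-conn-rate 100/10, overload <abusers> flush global)
```

The `max-src-conn-rate` and `overload` options additionally rate-limit per-source connection attempts and dump abusive sources into a table you can block; whether those limits help against fully spoofed sources is another question, since every packet then looks like a new source.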
-
Ran procstat -ka at the command prompt during a SYN attack.
No packet loss this time. No port forward.
-
Same but with port forward.
-
Why can 1 core keep back everything else??
Software was always written for single-core CPUs; programmers and designers never thought we'd get multicore CPUs in this timeframe, or for as little cost as we have, so just like the Y2K bug there was no planning for the future.
Now, a multicore CPU still has to share resources, like L2 cache, hard disks and RAM. You can't have two cores working on a shared resource at the same time without locking, or you risk corruption or deadlock. So the programmer needs to make decisions about what is acceptable to offload to another core and what is not.
If you want speed, keep as much as possible in a tight loop on one core. If you want it to multitask at the expense of speed, offload more work to other cores, knowing that too much swapping between cores increases the lock time; in the extreme, the lock time can exceed the processing time, and then nothing is gained.
Although this is framed from a software perspective, the points about multithreading are still relevant at the hardware level; it's just a different level of abstraction.
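The lock-granularity trade-off described above can be illustrated with a small Python sketch (illustrative only; the `Counter` class and batch size are invented for the example). Doing the work in a tight local loop and touching the shared state once per batch takes far fewer lock acquisitions than locking per item, at the cost of less fine-grained sharing:

```python
import threading

class Counter:
    """Shared resource guarded by a lock; also counts lock acquisitions."""
    def __init__(self):
        self.value = 0
        self.acquisitions = 0
        self._lock = threading.Lock()

    def add_each(self, items):
        # Fine-grained: take the shared lock once per item.
        for it in items:
            with self._lock:
                self.acquisitions += 1
                self.value += it

    def add_batched(self, items, batch=100):
        # Coarse-grained: do the work locally in a tight loop with no
        # lock held, then touch the shared state once per batch.
        for i in range(0, len(items), batch):
            local = sum(items[i:i + batch])
            with self._lock:
                self.acquisitions += 1
                self.value += local

items = list(range(1000))
a, b = Counter(), Counter()
a.add_each(items)      # 1000 lock acquisitions
b.add_batched(items)   # 10 lock acquisitions, same result
```

Both paths compute the same total; the batched one simply pays the synchronization cost two orders of magnitude less often, which is the "keep it in a tight loop" argument in miniature.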
https://forum.pfsense.org/index.php?topic=91856.msg517843#msg517843
-
Yes… but why can other firewalls keep up with the traffic and pf can't?
If hardware were the limit here, then everything tested should suffer the same fate. They don't...
Fortigate's VMware appliance and Mikrotik handle the same traffic with no issues. Tell me why... on the same resources and in the same hypervisor.
-
Having some devices do some of the work can be useful, but I suspect it's also helping create the problem being seen here.
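If NIC offloading is suspected, FreeBSD lets you turn the usual offloads off per interface to test that theory. A sketch of the commands (em0 is a placeholder for the actual WAN interface, and which flags are available depends on the driver):

```
# Disable TCP segmentation and large receive offload
ifconfig em0 -tso -lro
# Disable hardware checksum offload (receive and transmit)
ifconfig em0 -rxcsum -txcsum
# Watch per-device interrupt rates before and after
vmstat -i
```

In pfSense these toggles also exist in the GUI under System > Advanced > Networking, which is the safer place to change them since they persist across reboots there.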
http://en.wikipedia.org/wiki/Message_Signaled_Interrupts#MSI-X
https://forums.freebsd.org/threads/msi-msi-x-on-intel-em-nic.27736/
http://people.freebsd.org/~jhb/papers/bsdcan/2007/article/node8.html
Might be useful, as it's looking at possible areas for MSI-X failures.
http://comments.gmane.org/gmane.os.freebsd.stable/71699
http://christopher-technicalmusings.blogspot.co.uk/2012/12/passthrough-pcie-devices-from-esxi-to.html
-
Yes… but why can other firewalls keep up with the traffic and pf can't?
Out of the box, or have they been tuned?
If hardware were the limit here, then everything tested should suffer the same fate. They don't…
Fortigate's VMware appliance and Mikrotik handle the same traffic with no issues. Tell me why... on the same resources and in the same hypervisor.
Tuning?
What makes pfSense's default settings suitable for your datacentre setup compared to my home-use setup?
It's like tuning an F1 race car: it's not going to do well in a World Rally Championship setting on snow, across deserts or in forests, is it? Likewise, a rally car is not going to do well on a race track against F1 cars.
-
Out of the box.
Tuning, no… It should default to handling the traffic received, and it shouldn't fail on 3 Mbit/s of traffic.
You can't compare an F1 car to a WRC car. It's like comparing apples to bananas.
The only thing they have in common is that they are fruits :D
Same with the firewalls: they handle traffic and block it if needed. Default behaviour. EOD.
-
A little weird thing I noticed…
I tried running the VM with an odd number of cores.
It didn't change a thing, but the behaviour did: two of the cores keep switching between 0% and 100% after the attack was stopped, but neither of them is at 0% while the other one is...
Anybody care to explain why?
I haven't seen this before, and it's like it won't let go.
-
Look at the code in the OS that determines what to assign to the available cores.
The OS might be "smart" enough to work out which cores are already under load, and under load for long periods of time, and thus assign other cores to be used.
This is Windows thread scheduling: https://msdn.microsoft.com/en-us/library/ms685100%28VS.85%29.aspx
FreeBSD:
https://calomel.org/freebsd_network_tuning.html
Things you need to look at include processor affinity and thread scheduling, as mentioned in the link above and partly shown below.
"######################################### net.isr. tuning begin ##############
NOTE regarding "net.isr.*" : Processor affinity can effectively reduce cache
problems but it does not curb the persistent load-balancing problem.[1]
Processor affinity becomes more complicated in systems with non-uniform
architectures. A system with two dual-core hyper-threaded CPUs presents a
challenge to a scheduling algorithm. There is complete affinity between two
virtual CPUs implemented on the same core via hyper-threading, partial
affinity between two cores on the same physical chip (as the cores share
some, but not all, cache), and no affinity between separate physical chips.
It is possible that net.isr.bindthreads="0" and net.isr.maxthreads="3" can
cause more slowdown if your system is not cpu loaded already. We highly
recommend getting a more efficient network card instead of setting the
"net.isr.*" options. Look at the Intel i350 for gigabit or the Myricom
10G-PCIE2-8C2-2S for 10gig. These cards will reduce the machines nic
processing to 12% or lower."
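If anyone wants to experiment with the tunables the quoted guide mentions, they go in the loader configuration. A hedged sketch (the values are examples to measure against, not recommendations; as the guide itself warns, they can make things worse on a lightly loaded box):

```
# Hypothetical /boot/loader.conf.local fragment for netisr experiments
net.isr.bindthreads="1"      # pin netisr threads to CPUs (processor affinity)
net.isr.maxthreads="3"       # allow netisr work to spread over several cores
net.isr.dispatch="deferred"  # queue packets to netisr threads instead of
                             # processing them inline in the interrupt path
```

Reboot is required for loader tunables to take effect, and `sysctl net.isr` afterwards shows what actually got applied.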
This might also be useful, albeit for an earlier version of FreeBSD.
http://www.icir.org/gregor/tools/pthread-scheduling.html
-
I disagree, but that's because we don't know whether the other products have additional code out of the box that lets them adapt to these conditions, which may not be built into pfSense.
Can you provide examples of out-of-the-box FWs which do work, and can you show me what is different from pfSense that makes them work out of the box?
The phrase "you get what you pay for" springs to mind at the moment. :)
-
Mikrotik, Fortigate, ISA Server and Windows Firewall.
Nothing else we have tested passed the tests.
-
FreeBSD is getting more SMP love for its network stack in 11. Each major release seems to bring better core scaling for IO in general. There are some major plans to allow the network stack to keep both the receive and send sides of a flow stickied to a single core, and to have flows distributed among the cores.
One thing I do not know about the SMP love is NAT. I know the NAT implementation has been single-threaded for a while. It's possible it may get a rewrite once some of the new SMP network stack APIs are finalized. Or just go IPv6.
-
Mikrotik - RouterOS, based on Linux, so try some Linux hacks on it for stability testing.
http://en.wikipedia.org/wiki/MikroTik#RouterOS
Fortigate - FortiOS, based on Linux, so as above…
http://en.wikipedia.org/wiki/Fortinet#GPL_violations
Windows ISA Server (Forefront) is no longer available, as MS have announced they are dropping it, so support will be gone in time. Try some Windows hacks for stability testing.
This matters because FreeBSD is primarily aimed at stability, although it has pioneered some features yet to be seen on other OS platforms, and it also holds an unofficial world record for the amount of data transmitted.
https://www.freebsd.org/advocacy/whyusefreebsd.html
http://www.serverwatch.com/tutorials/article.php/10825_3393051_2/Differentiating-Among-BSD-Distros.htm
"FreeBSD holds the unofficial record for transferring data, having achieved more than 2 Terabytes of data from one server running the OS. It follows from this statistic that FreeBSD is also one of the most stable OSes available."
The last part above is not what you want to hear considering what you are experiencing, but it goes back to my point about tuning.
You can tune a little Ford Fiesta engine to compete on a 1/4 mile with similar performance to a bigger-engined car, but that Fiesta engine will then have no reliability and will likely explode after completing the 1/4 mile.
I guess what you need to do is define your aims, then select the correct FW according to those aims.
-
The attack is currently scaled down to 2 Mbit/s and the FW still dies, despite limiting states per second and states per host.
Ran the top command; here is some info.
One core is still blasting away at full speed (core 6). If you pay for IOPS/CPU in a datacenter, this is not good news.
-
It seems to need it badly.
But when pfSense will have it is another interesting question.
-
Last one for today. Still packet loss and an unresponsive GUI. Stateless traffic is around 15 Mbit/s.
I see filterlog consuming a lot of CPU during the attack. Big difference in VMware CPU-wise… Core nr. 6 is still blasting away, but it's not attack-dependent. If it weren't going at 100%, then it could have survived (maybe).
-
SYN flood recording, stateless, running top on the console.
SYN flood recording running SYN proxy states with limiters, and top on the console.
If you wonder why you can't see core nr. 4 in top, you are not the only one.
It runs at 100% in VMware.