What is the biggest attack, in Gbps, that you've stopped?
-
2.1.x is on FreeBSD 8.3 and 2.2.x is on FreeBSD 10.1.
-
2.1.x is on FreeBSD 8.3 and 2.2.x is on FreeBSD 10.1.
Thanks. So there may be driver and network-stack changes between 8.3 and 10.1 that would be "interesting". I'm guessing there are also changes in the FreeBSD PF code between the two.
-
Lowprofile saw the same pattern. Even 2.1.5 suffered in terms of routing, but it didn't show the same unresponsiveness on 8.3 that the 2.2.x branch does on 10.1.
If I do back-to-back testing on the systems, 2.1.5 fares much better than 2.2.2.
Both with Snort running.
-
2.2 is on 10.1, where PF has better threading support. Maybe PF is able to consume more CPU in 2.2 than in 2.1 or earlier because of this.
-
But CPU load is not the issue?
Unless it exhausts the cache and cannot process the I/O queue.
But it should handle 3 GByte/s no problem, so I don't understand why it should be CPU-related at all.
-
I am more interested in the way PF handles the packets and what it does with them internally.
PF reads copies of packets and inspects them. Maybe something in that regard (a buffer) could be too small.
So when we reach some number of PPS, it falls apart and can't keep up.
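To illustrate that idea (a hypothetical sketch, not PF's actual internals), here is a minimal fixed-size ring in C; once arrivals outpace the consumer, every further packet is dropped, which is the "falls apart" point:

    #include <stdbool.h>

    /* Hypothetical fixed-size packet ring; not pf's real data structure. */
    #define RING_SLOTS 512

    struct ring {
        void *pkt[RING_SLOTS];
        unsigned head;   /* next slot the producer writes */
        unsigned tail;   /* next slot the consumer reads  */
    };

    /* Returns false (packet dropped) when the ring is full. */
    static bool ring_enqueue(struct ring *r, void *pkt)
    {
        unsigned next = (r->head + 1) % RING_SLOTS;
        if (next == r->tail)
            return false;    /* consumer too slow: drop */
        r->pkt[r->head] = pkt;
        r->head = next;
        return true;
    }

Below the PPS where the consumer keeps up, nothing is dropped; above it, drops grow with the overload, which would look exactly like falling apart at some threshold.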
-
In the case of that one attack, the kernel queue thread was using nearly 100% of my CPU. If it were less thread-friendly, it probably would have been closer to 25% CPU. The old PF could not consume 100% of a multi-core CPU; the new PF potentially can.
-
How do you inspect the kernel queue in PF?
I could then compare 2.1.5 vs 2.2.2 and see if there is any difference.
In the case of that one attack, the kernel queue thread was using nearly 100% of my CPU. If it were less thread-friendly, it probably would have been closer to 25% CPU. The old PF could not consume 100% of a multi-core CPU; the new PF potentially can.
-
As FreeBSD grew from 4.x to 5 and up to 10, I believe they've been pushing finer- and finer-grained locking into the kernel (like Solaris). This means there can be more kernel preemption happening. I'd have to dig into the code to verify, but I'm guessing the "kernel igb0 que" thread is the one sitting between the bottom half of the interrupt handler and the next layer up (basically holding packets). If PF is looking at packets, modifying them (or not) and putting them back, there may be a lot of queue manipulation going on (grab lock, dequeue/enqueue packet, release lock), so that may cause the que threads to start sucking CPU.
This is speculation, generalities; I have not really looked at this portion of the FBSD kernel so I could be totally off base.
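As a userspace analogy only (hypothetical names, not the actual FreeBSD code), the per-packet pattern described above looks roughly like this; the lock traffic around every dequeue and enqueue is where queue threads can start burning CPU:

    #include <pthread.h>
    #include <stddef.h>

    struct pkt { struct pkt *next; };

    struct pkt_queue {
        pthread_mutex_t lock;
        struct pkt     *head;
    };

    /* grab lock, dequeue, release: repeated once per packet */
    static struct pkt *queue_grab(struct pkt_queue *q)
    {
        pthread_mutex_lock(&q->lock);
        struct pkt *p = q->head;
        if (p != NULL)
            q->head = p->next;
        pthread_mutex_unlock(&q->lock);
        return p;   /* PF would inspect p here, then re-enqueue it
                       the same way: lock, insert, unlock */
    }

At high PPS the lock acquire/release itself happens hundreds of thousands of times per second, so the queue threads can consume a lot of CPU without any single packet being expensive.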
-
Is there any way to inspect the time or CPU cycles used by a process in pfSense?
-
System Activity or "ps" will tell you total CPU time consumed. Just remember, a quad core can consume 4 CPU seconds per second.
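To make the 4-seconds-per-second point concrete, here is a small hedged C sketch (compile with -pthread): it spins four threads for about one wall-clock second, then reads the process CPU time with getrusage(), the same accounting ps and top report. On a quad core it should print close to four seconds:

    #include <pthread.h>
    #include <stdio.h>
    #include <sys/resource.h>
    #include <time.h>

    static void *spin(void *arg)
    {
        (void)arg;
        time_t end = time(NULL) + 1;   /* busy-loop ~1s of wall time */
        while (time(NULL) < end)
            ;
        return NULL;
    }

    int main(void)
    {
        pthread_t t[4];
        for (int i = 0; i < 4; i++)
            pthread_create(&t[i], NULL, spin, NULL);
        for (int i = 0; i < 4; i++)
            pthread_join(t[i], NULL);

        struct rusage ru;              /* user time across all threads */
        getrusage(RUSAGE_SELF, &ru);
        printf("user CPU seconds: %ld.%06ld\n",
               (long)ru.ru_utime.tv_sec, (long)ru.ru_utime.tv_usec);
        return 0;
    }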
-
I'm sure you are familiar with the wide-ranging contracts that exist in the world today, be it non-disclosure agreements or the even more common non-compete agreements, as shown here: http://pando.com/2014/03/22/revealed-apple-and-googles-wage-fixing-cartel-involved-dozens-more-companies-over-one-million-employees/
http://www.businessinsider.com/emails-eric-schmidt-sergey-brin-hiring-apple-2014-3?IR=T
http://venturebeat.com/2014/05/23/4-tech-companies-are-paying-a-325m-fine-for-their-illegal-non-compete-pact/
Put simply, you are not in a position to prove your innocence, because to adhere to the terms of any NDA contract you may have been forced/coerced to sign would mean any disclosure would render you in breach of your contract and liable to whatever penalties may have been included in the agreement, and who in their "right" mind would put themselves at a disadvantage?
-
Even if you have not signed any NDA contract, you still can't prove your innocence, ergo the spooks & govt(s) still win. It's classic divide-and-conquer, which raises the question: why trust military & govt(s), or banks, who carry out their activities in secret?
What I can say is trust can take ages to build up, but can be destroyed in seconds.
On the point of passing enough traffic through pfSense: this has happened with less than 1 Mbit/s of traffic; a simple web page loading can trigger the OS cores to hang. Volume is irrelevant in the example I mention, but in relation to this thread and the amounts of data involved, I wondered about two things: an exploit of the CPU design, namely the cache, and/or something network-related, as also mentioned by KOM here:
https://forum.pfsense.org/index.php?topic=91856.msg517296#msg517296
I'm inclined at this stage to err towards something NIC-related, but I will examine the zip posted by supermule to see if I can spot anything untoward. This could be a variation on the heap-spraying exploit: http://en.wikipedia.org/wiki/Heap_spraying
I wonder if those affected are running Snort and, if so, whether the problems still exist, on the assumption that Snort already knows about the problem, much like AV software needs to have seen a virus before it can protect against it.
All of the above is said with the best of intentions, to be educational for those who might not be aware of the deceit and duplicity in the world today.
Edit.
Has anyone tried an earlier version of pfSense, like a 1.x version, by any chance? ;)
And likewise, by your logic, you can't prove you haven't signed a contract with some organization bent on sowing distrust in pfSense in general, or cmb specifically. Your post here looks exactly like what I would expect such an attack to look like. Don't bother denying it; you're obviously under an NDA or non-compete.
Conspiracy theories are great entertainment, but don't get carried away by them.
I can, but it will only come out in court, if it ever gets to court. The rabbit warren runs deeper than you may want to believe!
-
System Activity or "ps" will tell you total CPU time consumed. Just remember, a quad core can consume 4 CPU seconds per second.
Not always; you need to understand how the L2 cache works, i.e. it's shared between cores on Intel, but AMD tends to have a set amount of cache per core, i.e. AMD would be less prone to cache collisions than Intel CPUs.
-
@mer:
As FreeBSD grew from 4.x to 5 and up to 10, I believe they've been pushing finer- and finer-grained locking into the kernel (like Solaris). This means there can be more kernel preemption happening. I'd have to dig into the code to verify, but I'm guessing the "kernel igb0 que" thread is the one sitting between the bottom half of the interrupt handler and the next layer up (basically holding packets). If PF is looking at packets, modifying them (or not) and putting them back, there may be a lot of queue manipulation going on (grab lock, dequeue/enqueue packet, release lock), so that may cause the que threads to start sucking CPU.
This is speculation, generalities; I have not really looked at this portion of the FBSD kernel so I could be totally off base.
I suspect it's like KOM suggested, with the NIC driver integrating into the OS; that's why I have suggested to others, where possible, to go back to a 1.x pfSense version to test. The earlier OSes won't be so bogged down with "new" features, but your idea about locking is on the ball in terms of how OSes handle multithreading as against preemptive threading.
-
The 1st one is idle and the 2nd is under DoS.
Load is approx. 20 Mbit/s and maybe 50k PPS.
-
Out of curiosity, has anyone posted about this over on the FreeBSD mailing lists?
-
System Activity or "ps" will tell you total CPU time consumed. Just remember, a quad core can consume 4 CPU seconds per second.
Not always; you need to understand how the L2 cache works, i.e. it's shared between cores on Intel, but AMD tends to have a set amount of cache per core, i.e. AMD would be less prone to cache collisions than Intel CPUs.
Cache misses count as CPU time. If it takes an extra 250 cycles because of a cache miss, well, that's counting against you. CPU time is the amount of time a process has been scheduled. What it does during that time is irrelevant from the scheduler's standpoint.
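A hedged C sketch of that point (illustrative, not a proper benchmark): both loops below touch the same 64 MiB, but the strided walk defeats the prefetcher and misses cache far more, and all of that stall time is still charged as CPU time by clock(), just as ps/top would charge it:

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N (64UL * 1024 * 1024)   /* 64 MiB, much larger than L2 */

    int main(void)
    {
        unsigned char *buf = calloc(N, 1);
        volatile unsigned long sum = 0;

        clock_t t0 = clock();
        for (size_t i = 0; i < N; i++)      /* sequential: prefetch-friendly */
            sum += buf[i];
        clock_t t1 = clock();

        for (size_t s = 0; s < 4096; s++)   /* strided: cache miss per touch */
            for (size_t i = s; i < N; i += 4096)
                sum += buf[i];
        clock_t t2 = clock();

        printf("sequential %.2fs, strided %.2fs; both count as CPU time\n",
               (double)(t1 - t0) / CLOCKS_PER_SEC,
               (double)(t2 - t1) / CLOCKS_PER_SEC);
        free(buf);
        return 0;
    }

Both passes do the same number of additions; the extra seconds in the strided pass are pure memory stalls, and they show up in the process's CPU time all the same.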
-
250?
Cache misses count as CPU time. If it takes an extra 250 cycles because of a cache miss, well, that's counting against you.
-
I'm not sure I understand the question. 250 cycles is a number I just pulled from thin air for going to main memory. I was talking about reading CPU time from "ps" or "top", and I said that on a quad core, every second of real time can be 4 seconds of CPU time. firewalluser then mentioned "not always" and said some things about the L2 cache. I was responding by saying that cache behavior does not change how CPU time is accounted.
-
Sorry, I didn't understand the thin air part. :)
I'm not sure I understand the question. 250 cycles is a number I just pulled from thin air for going to main memory.