What is the biggest attack, in Gbps, that you have stopped?
-
Thanks.
Already changed that in system -> tunables and it made quite a difference on the low-core tests.
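For anyone following along, a minimal sketch of the kind of entries that go under system -> tunables. The OIDs are standard FreeBSD sysctls; the values below are placeholders, not recommendations:

# Illustrative only: common FreeBSD network sysctls set as tunables.
# Size the values to your own RAM and NICs; these numbers are placeholders.
sysctl kern.ipc.nmbclusters=262144        # mbuf cluster pool size
sysctl kern.ipc.maxsockbuf=16777216       # maximum socket buffer size
sysctl net.inet.tcp.recvbuf_max=4194304   # cap for TCP receive buffer auto-tuning
sysctl net.inet.tcp.sendbuf_max=4194304   # cap for TCP send buffer auto-tuning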
-
Indeed. I personally hate emoticons too, but I have personally seen how a negative focus, or a lack of a positive one, can send a whole thread into a negative, hateful tone of adversarial confrontation instead of people realizing they all share a common goal: solve the friggen problem and learn something.
Because neither screaming "oh noes, it suxxx, we're all doomed, use Windows Firewall instead" nor this YT testing is how you handle a perceived security issue.
https://www.freebsd.org/security/reporting.html
That would be the butterfly effect… or, in a biological sense, the emotions of fear and anger, driven by excessive dopamine levels derived from a variety of inputs…
-
See.
I haven't stated that people should use Windows Firewall instead.
I have stated that it's not affected.
Not the same, really.
-
Could be useful.
https://wiki.freebsd.org/NetworkPerformanceTuning
-
Where do we draw the line at being educational?
-
Already implemented under system -> tunables for what I use, plus the network MTU.
By the way: tested 1.2.3 and I got blown out of the water instantly using 4 GB RAM and 4 CPUs.
So the new OS is definitely an improvement.
-
Thanks for letting us know, it's been educational. ;D
-
OPNsense 4-core/8 GB test.
http://youtu.be/dH4ih76b_Ik
-
8 cores
http://youtu.be/-xTtzLEQx08
Not as good as hoped, but not running 100% CPU like all the others. It seems the responses on the WAN graph are related to the ping on WAN.
It seems the 2-core setup performs best at the beginning, until around 35 seconds into the attack. Then it crashes. 4 and 8 cores keep the GUI online.
You may be at 100% CPU, but according to the dashboard you're running at 311 MHz even when at 100%.
-
System Activity or "ps" will tell you total CPU time consumed. Just remember, a quad core can consume 4 CPU seconds per second.
Not always. You need to understand how the L2 cache works: it's shared between cores on Intel, but AMD tends to have a cache amount per core, i.e. AMD would be less prone to cache collisions, unlike Intel CPUs.
Cache misses count as CPU time. If it takes an extra 250 cycles because of a cache miss, well, that counts against you. CPU time is the amount of time a process has been scheduled. What it does during that time is irrelevant from the scheduler's standpoint.
Yes
Yes & No
If no cache collisions occur, then yes, your "4 CPU seconds per second" would be right. But when a cache collision occurs, it's a matter of debate whether the CPU is giving you any CPU time useful to the task being asked of it, because a cache collision is by definition a failure of the CPU/core (depending on where the collision occurs, i.e. L1, L2 or L3). That means no processing useful to the task while it backs out and resolves the collision.
To make it a little more complicated, or simpler depending on perspective: if the cache collision occurs on cache shared across all the cores, then no, you don't get your 4 CPU seconds per second, because the CPU backing out and resolving the collision holds up one or more other cores.
If the collision occurs on cache available only to a single core, like L1 and some L2 (L2 on some chips is shared; on others each core gets a small, private share of the total), then you could count it in your "4 CPU seconds per second" statement, but there is still the question of whether the CPU is giving you any "useful" processing time while it resolves the collision. Technically, the clock cycles spent filling the cache, having a collision and then resolving it are time wasted, but they can still show as 100% core or CPU activity depending on the cache affected. So yes, when you see CPU activity at 100%, that is correct, but it's not the whole picture, as it hides the cock-ups of the CPU cache and the bus waits that are occurring.
Now even if we don't have any cache collisions, on a multi-core CPU further time is wasted as the individual cores wait to access RAM or the disk, depending on the bus architecture.
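If anyone wants to actually measure those stalls rather than infer them, FreeBSD's hwpmc counters can do it. A sketch, with the caveat that the counter event names vary by CPU model:

kldload hwpmc           # load the hardware performance counter driver
pmccontrol -L           # list the counter events this CPU supports
pmcstat -w 1 -s EVENT   # sample system-wide once a second; substitute an
                        # L2/LLC-miss event name taken from the list above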
I've got software here, which I have written, that can run multithreaded across multiple cores, but it's also capable of running as a single thread on a single core, x threads on a single core, or x threads on x cores.
Guess which one runs the fastest?
The single threaded single core version.
Why is this?
It's because no time is wasted handshaking between threads at the OS level and between cores at the hardware level to access the RAM and disk. Disk activity shows this up the most, as disk/permanent storage, even SSDs, is orders of magnitude slower to access than RAM.
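A toy illustration of that point in plain sh, assuming a scratch directory you can write to (the file names w1..w4 are arbitrary): the same total amount of data, written by one process versus four competing ones.

# One writer, 1 GB total:
time sh -c 'dd if=/dev/zero of=w1 bs=1m count=1024 2>/dev/null'
# Four concurrent writers, still 1 GB total, now contending for the same bus/disk:
time sh -c 'for i in 1 2 3 4; do dd if=/dev/zero of=w$i bs=1m count=256 2>/dev/null & done; wait'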
In some respects, even though ARM chips are RISC, i.e. they don't have as many common software tasks baked into the CPU architecture (unlike, say, Intel's AES-NI, to pick a relevant example of common software functions making it into the CPU: http://en.wikipedia.org/wiki/AES_instruction_set), they generally, but not always, tend to speed up the software. All of this ultimately depends on how the software is written and, to a lesser extent, on the language and compiler used, as optimising compilers, like cache, can work for you and against you depending on the chip that runs the software.

This is why I suggested right back at the beginning to try a 1.x version of pfSense. Considering the new features and improvements to functionality made to OSes over time, not only can the code be compared easily, it will be possible to work out by elimination and some observation where the problem lies. Knowing how hardware drivers used to be, for printers (especially HP printers) in the Win 3.1, W95, W98, NT 3.5 and NT 4 days, I suspect the drivers have not been updated enough to keep pace with OS developments. Hence I agree with KOM and suspect it's a NIC hook issue in the OS, compounded by the multiple cores seen in today's CPUs, which is also why I suggested that those running it virtualised, e.g. on ESXi, restrict the available cores to 1.
Apologies if this is teaching you to suck eggs, but due to limited data, i.e. not knowing you or your past, I don't know how much you know or don't know, hence the explanation above. :)
You are correct, but I was not incorrect either. All I was saying was that you can see the CPU time spent. CPU time is the amount of time spent in a given context. Yes, AMD's new arch has a much greater chance of cache line collisions, especially given the size of their L2 caches and the limited n-way associativity, but that reduces the amount of work done per unit of time, not the amount of time spent. I do agree that AMD can take more time to get the same amount of work done, but "CPU time" is still wall-clock time spent in a context.
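To see that distinction on a live box, a quick sketch with stock tools; the TIME column below is exactly the scheduled "wall-clock time in context" being described, cache stalls included:

ps -axo pid,time,pcpu,command | head   # accumulated CPU time per process
top -P                                 # per-core utilization, if your top supports -P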
Nice to know other people share my affection for understanding computers :-)
-
What is "KERN.IPC.NMBUF"? I can't find anything about it?
-
Kernel buffers.
https://www.google.dk/search?q=KERN.IPC.NMBUF&ie=UTF-8
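The real OID is lower-case, by the way. A quick sketch for checking the mbuf situation on the box itself:

sysctl kern.ipc.nmbufs kern.ipc.nmbclusters   # configured mbuf / cluster limits
netstat -m                                    # current usage, plus denied/delayed requests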
-
It goes down so fast you don't see the utilization…
-
4 Mbps attack and 40% packet loss.
netstat -L doesn't show any exhaustion of the queues.
Anybody know how to change the backlog to 1024?
Just to see if it matters.
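On the backlog question, a hedged sketch, assuming the listen-queue cap is what's meant. On FreeBSD of this vintage it's governed by kern.ipc.somaxconn, and applications must still request the larger backlog in their own listen() call:

sysctl kern.ipc.somaxconn        # current cap on listen queue length (default 128)
sysctl kern.ipc.somaxconn=1024   # raise it
netstat -Lan                     # then re-check the listen queues under load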
-
Here is the output of vmstat -z
Anybody find something unusual in this?
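Rather than eyeballing the whole table in the attachments below, one way to pull out just the zones with failed allocations; a sketch assuming this era's comma-separated vmstat -z layout, where FAIL is the second-to-last field:

vmstat -z | awk -F, 'NR==1 || $(NF-1)+0 > 0'   # header plus any zone with FAIL > 0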
![pfsense.22tv - Diagnostics_ Execute command_Page_1.png](/public/imported_attachments/1/pfsense.22tv - Diagnostics_ Execute command_Page_1.png)
![pfsense.22tv - Diagnostics_ Execute command_Page_2.png](/public/imported_attachments/1/pfsense.22tv - Diagnostics_ Execute command_Page_2.png)
-
Guys, you need to be much more rigorous in collecting data. You are trying to diagnose a network packet-processing problem. Using the web interface to execute shell commands will not produce consistent and reliable results. Not only is the web interface heavyweight, it runs at lower priority than kernel packet processing. And most importantly, your diagnostic data collection is dependent upon the behavior of the system you are trying to diagnose.
Let's assume you don't want to build a custom kernel…
You need to shed as many variables as possible and get as close to real data as you can. Turn Snort off for crying out loud. And anything else optional that might interfere with metrics. If you want to use command line tools, execute them outside of network processing. This means using the console, not ssh. Create a shell script that collects information on a periodic basis. Elevate the priority of the script to ensure timely execution. And save the output for every run.
Here is a sample script:
#!/bin/sh
ps -axuwww
while true
do
    /bin/date
    /usr/bin/netstat -m
    sleep 2
done

Here is a sample execution:
/usr/bin/nice -n -19 ./myscript
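And to keep the output for every run, as suggested above, redirect it to a file (the log name is just an example):

/usr/bin/nice -n -19 ./myscript > dos-run-1.log 2>&1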
-
Did it at the console and no useful output was generated for people to see.
I stopped Snort running and here is the output from the DoS.
The first 2 are idle and the next 2 are under DoS.
-
Done some more testing this morning.
2-3 Mbps is all it takes. I have scaled the mbufs and state max down a little.
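For anyone reproducing this, the limits pf is actually running with can be confirmed with stock pfctl:

pfctl -sm   # show the current hard limits (states, frags, src-nodes, table-entries)
pfctl -si   # show state-table counters, including the current state count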
http://youtu.be/NPtDnM8ixXs
dennypage: Thanks for the info. If you want to help diagnose, contact me via PM.
-
This link is probably important to note the differences between versions: https://doc.pfsense.org/index.php/Does_pfSense_support_SMP_(multi-processor_and/or_core)_systems
2.1 was single-threaded and 2.2 is multi-threaded. That's why you're seeing an impact/performance difference between the two; it's not hard to extrapolate how and why.
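As an aside for anyone digging further, FreeBSD's netisr sysctls are one visible surface of that threading change; a sketch of where to look, not of what to set:

sysctl net.isr.dispatch     # direct / hybrid / deferred packet dispatch policy
sysctl net.isr.maxthreads   # number of netisr threads the kernel may use
sysctl net.isr.numthreads   # number actually running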
I think what you're trying to determine, and this is based on my review of the thread, is which part of pf is choking. In order to determine this you need to debug each component in the chain from the NIC to the CPU and back out as well as the code. I'm not entirely sure you know programmatically where and which networking event triggers the issue inside pf, only that a large volume of data of a specific type starts the event.
You've moved beyond evaluating pf from a networking perspective and more into evaluating the codebase. This requires a different kind of data collection and troubleshooting. It also takes an excruciatingly long time to identify and resolve these kinds of issues. It's a lot more than just tweaking a setting, in some cases.
Best of luck in determining the root cause and solution to this issue.