RCC-VE 2440 throughput results
-
Hi,
Since the RCC-VE 2440 (or SG-2440) doesn't have many throughput reports out there, I thought I'd share my quick findings as a new owner.
Below from left to right (with 'Connectivity' ruleset and ET mostly enabled): Snort enabled on sending and receiving interfaces, Snort enabled only on receiving interface, Snort disabled. One file was transferred from one Windows machine to another using SMB (Windows file sharing). Other enabled services include firewall/NAT.
Are these results expected? I was expecting to get closer to gigabit to be honest, since the pfSense store says Gigabit as well (although this is not the same product, per se). Good news is that Snort seems to have little effect.
![pfsense bandwidth.png](/public/imported_attachments/1/pfsense bandwidth.png)
-
> Since the RCC-VE 2440 (or SG-2440) don't have many reports about throughput, I thought I'd share my quick findings as a new owner.
Ok, it's a fine thing to share those results with us, but you should then also choose a more accepted, more common method for your test:
- Install the latest stable pfSense version
- Perhaps activate PowerD (Hiadaptive)
- Configure only the WAN and LAN gateways
- Connect one PC on the WAN side and another one on the LAN side
- Install iPerf or NetIO on the PCs and then do the test again!
> Below from left to right (with 'Connectivity' ruleset and ET mostly enabled): Snort enabled on sending and receiving interfaces, Snort enabled only on receiving interface, Snort disabled.
Using Snort, Squid, DPI or AV scanning during such a test is really not a good idea, because it distorts the test results more or less massively.

> One file was transferred from one Windows machine to another using SMB (Windows file sharing).
SMB/CIFS would be the last protocol I would personally pick for these tests. iPerf or NetIO would be the best way to measure the throughput correctly. :-*

> Other enabled services include firewall/NAT.
SPI/NAT and firewall rules are the normal things to have enabled in a firewall, for sure.
> Are these results expected?
With the way you were testing, yes, for sure! ::)
> I was expecting to get closer to gigabit to be honest,
Then please read the lines above and do the test again. 8)
You could expect this unit to come closer to that figure, but please don't forget that pfSense at the moment uses only one CPU core on the WAN port if the box connects to the ISP or the Internet link over PPPoE, and then it becomes more and more of a real challenge to hit 1000 MBit/s or even come close.

> since the pfSense store says Gigabit as well (although this is not the same product, per se).
Pretty sure they are using the same unit. But not together with Snort and over SMB with only one large file.
The pfSense store says… Do you know exactly how they did their test? Only in the lab, or under real-life conditions? Using which application, iPerf or NetIO? I really think that's where the problem lies: the test procedure, I mean!

> Good news is that Snort seems to have little effect.
This would be different if you used another program and thousands of smaller files; in your case Snort only has to look at one file. ::)
-
If my only sin was using SMB instead of iPerf I think I did pretty good :P
This was just a quick real life test, nothing more nothing less. I'm aware it isn't the most scientific test possible. I may do another test at some point with iPerf, but I doubt only that change would be giving me almost 2x speed?
Some more info:
- latest pfSense (2.2.5) was used
- transfer was done from one eth port to another on the device itself (no switches/VLANS)
- connecting the same equipment and using same file on a switch results in full gigabit speed (no slowdown on computer side)
-
> If my only sin was using SMB instead of iPerf I think I did pretty good :P
You want to tell us something about TCP/IP throughput by using the SMB protocol? Would you also try to answer all your English test questions in Chinese?

> This was just a quick real life test, nothing more nothing less.
But then please don't compare it against the test from the pfSense store!
> I'm aware it isn't the most scientific test possible. I may do another test at some point with iPerf, but I doubt only that change would be giving me almost 2x speed?
Why would you imagine getting 2x the speed? And from what? SMB is not really interesting if we are talking about the available TCP/IP throughput.

> Some more info:
> - latest pfSense (2.2.5) was used
ok
> - transfer was done from one eth port to another on the device itself (no switches/VLANS)
ok
> - connecting the same equipment and using same file on a switch results in full gigabit speed (no slowdown on computer side)
This might be a good way to test your switch, but nothing more.
-
SMB is an awful performance test. Involves disk activity on both sides which introduces unnecessary variables, and the protocol itself isn't quick especially where anything other than same subnet latency is involved. Just the very small increase inherent in routing between VLANs could drop it off quite a bit. Though there might be other things at play, which is why something like iperf between the hosts is a much better test.
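To illustrate the difference: a tool like iperf measures pure socket throughput, with no disk involved on either end. A minimal sketch of that idea in Python (my own illustration, not anything from iperf itself; it runs both ends over loopback for brevity, so the number says nothing about a firewall in the path):

```python
import socket
import threading
import time

CHUNK = 64 * 1024   # size of each send/recv buffer
DURATION = 1        # seconds to keep sending

def server(listener, result):
    # Accept one connection and count every byte received until the peer closes.
    conn, _ = listener.accept()
    total = 0
    while True:
        data = conn.recv(CHUNK)
        if not data:
            break
        total += len(data)
    conn.close()
    result["bytes"] = total

listener = socket.socket()
listener.bind(("127.0.0.1", 0))   # pick any free port
listener.listen(1)
port = listener.getsockname()[1]

result = {}
t = threading.Thread(target=server, args=(listener, result))
t.start()

# Client side: blast zero-filled buffers for DURATION seconds, no disk reads.
client = socket.create_connection(("127.0.0.1", port))
payload = b"\x00" * CHUNK
start = time.time()
while time.time() - start < DURATION:
    client.sendall(payload)
client.close()
t.join()

elapsed = time.time() - start
mbps = result["bytes"] * 8 / elapsed / 1e6
print(f"~{mbps:.0f} Mbit/s over loopback")
```

Replace the loopback addresses with two real hosts on either side of the firewall and you get roughly what a single-stream iperf run measures.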
The hardware itself is certainly capable of 1 Gbps, with capacity to spare.
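Worth keeping in mind, too, that even a perfect gigabit path never shows 1000 Mbit/s of application payload, because Ethernet framing plus IP/TCP headers consume a fixed share of every packet. A quick back-of-the-envelope calculation (assuming a standard 1500-byte MTU and TCP timestamps enabled; both are assumptions, not measurements from this thread):

```python
# Per 1500-byte IP packet on gigabit Ethernet:
preamble, eth_header, fcs, ifg = 8, 14, 4, 12            # bytes of framing overhead
wire_bytes = 1500 + preamble + eth_header + fcs + ifg    # 1538 bytes on the wire

ip_header, tcp_header, tcp_timestamps = 20, 20, 12
payload = 1500 - ip_header - tcp_header - tcp_timestamps  # 1448 bytes of TCP payload

goodput_mbps = 1000 * payload / wire_bytes
print(f"max single-stream TCP goodput: ~{goodput_mbps:.0f} Mbit/s")  # ~941 Mbit/s
```

So ~941 Mbit/s is the realistic ceiling an iperf TCP test can report on gigabit hardware.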
-
SMB is single-threaded as well and will only use one core, introducing yet another bottleneck to performance on multi-core/threaded systems.
-
Do you have recommendations for running iperf? I've been testing with different options (including nothing except the mandatory ones) but am having difficulties getting past 500 Mbps.
-
> but am having difficulties gettings past 500Mbps.
The RCC-VE 2440 comes pre-installed with CentOS, and you might have installed pfSense on the eMMC storage or on something else like an mSATA drive. Where did you install pfSense?
Did you activate the following things?
- PowerD (Hiadaptive)
  (it runs the CPU at a low frequency when nothing is needed and at maximum when needed)
- Enable TRIM support
  (if an mSATA or SSD was used for the install)
- Do a fresh and full install and then measure the throughput, please
  (after the test you can re-install the system and restore the configuration backup)
If you installed the ADI image you don't have to do the tunings shown above; they only apply to the normal community image of pfSense. You will also be able to run iPerf with the -P option to use all CPU cores, and that might deliver quite different results compared to the numbers seen on the pfSense store page.
-
Hi,
I installed pfSense according to this guide (https://www.netgate.com/docs/rcc-ve-2440/pfsense.html). I have no additional storage devices installed.
I did some more testing today. The test computers have i7 processors; one is Windows 8.1 and the other Ubuntu 14.04.3 (unfortunately I don't have access to two Ubuntus at the moment). iPerf version 3.0.11.
I'm getting a consistent ~560 Mbps ±10. Changing the server role didn't have an effect, and neither did starting the Ubuntu client with full affinity (I'm guessing you meant -A to use all cores) or running multiple streams with the -P flag.
In the attachment you can see what the pfSense System Activity page looks like during testing (with client/server roles switched). I don't know whether it is normal or not, but the client side would always use almost 100% of one core, while the other core is idling at around 70%. However, this didn't mean other computers had bandwidth to use (I temporarily connected and tested a third computer while the test was running, and its transfer speed dropped to almost zero).
Every once in a while (maybe 1 test out of 4) I would get a weird result where the transfer speed wasn't consistent during the test; you can see this in the image as well. I don't know whether this is a hiccup with iperf or what.
Thoughts? Anything to change in the setup?
-
Bump. Any more advice?
-
What if you disable snort?