Official Realtek Driver Binary 1.95 For 2.4.4 Release
-
@stephenw10 said in Official Realtek Driver Binary 1.95 For 2.4.4 Release:
top -aSH
re0 works great and I get the full 950Mbps during speedtest. CPU load for re0 is around 20-25% during speedtest, whereas re1 is around 10-15%.
Also, does this mean anything? Output of pciconfig -lv for re0 shows the correct controller, but re1 is missing the device attribute.
% pciconfig -lv re0
re0@pci0:2:0:0: class=0x020000 card=0x843f103c chip=0x816810ec rev=0x15 hdr=0x00
    vendor   = 'Realtek Semiconductor Co., Ltd.'
    device   = 'RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller'
    class    = network
    subclass = ethernet
% pciconfig -lv re1
re1@pci0:4:0:0: class=0x020000 card=0x012310ec chip=0x812510ec rev=0x00 hdr=0x00
    vendor   = 'Realtek Semiconductor Co., Ltd.'
    class    = network
    subclass = ethernet
-
The missing description is unlikely to matter.
I was more interested in the queue or interrupt loads on your CPU cores while testing. igb will use multiple CPU cores by default; re usually doesn't, but this is new hardware, so....
CPU usage between those NICs looks to be in line with the throughput, at least.
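For reference, watching that while a test runs would be something like this from a shell on the firewall (standard FreeBSD tools; nothing here is specific to this driver):
top -aSH      # per-thread view; watch the interrupt and re taskq kernel threads and which cores they land on
vmstat -i     # cumulative interrupt counts and rates per device IRQ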
Steve
-
@breakaway Thanks, you saved me!!!
-
THANKS! It works (on XigmaNAS 12.1).
Do you know if WOL works? I tried to activate it, but I wasn't able to make it work. Not sure if it's something I'm doing or if it's just not implemented in the driver (or the board, for that matter).
Thank you
Bye
-
Nope, sorry, I don't own any Realtek cards.
-Rico
-
Now that there is a FreeBSD package for this, I would use that over a relatively unknown binary source:
pkg add https://pkg.FreeBSD.org/FreeBSD:11:amd64/latest/All/realtek-re-kmod-v196.04_2.txz
That would be much easier if it was in our repo though. Let me see...
Steve
-
https://redmine.pfsense.org/issues/11079
Should be in snapshots soon.
-
Looks good:
[2.5.0-DEVELOPMENT][admin@apu.stevew.lan]/root: pkg search realtek
realtek-re-kmod-v196.04_2      Kernel driver for Realtek PCIe Ethernet Controllers
So do:
pkg install realtek-re-kmod
then
echo 'if_re_load="YES"' >> /boot/loader.conf.local
Then reboot and check the boot logs for:
re0: <Realtek PCIe GbE Family Controller> port 0x1000-0x10ff mem 0xf7a00000-0xf7a00fff,0xf7900000-0xf7903fff irq 16 at device 0.0 on pci1
re0: Using Memory Mapping!
re0: Using 1 MSI-X message
re0: ASPM disabled
re0: version:1.96.04
re0: Ethernet address: 00:0d:b9:37:30:10
The current version should be installed every time; it's built against our kernel source etc.
It's unlikely this will be backported to 2.4.5, but it's far easier (and probably safer) to use it in 2.5 when it's released, or in snapshots now.
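A quick sanity check after the reboot might look like this (a sketch; the exact dmesg strings will vary with the hardware):
kldstat | grep if_re
dmesg | grep '^re0:'    # should show the Realtek vendor driver lines, including the version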
Steve
-
@stephenw10 For some reason I cannot seem to get this to load.
I had the previous driver loaded on 2.4.5 but have upgraded to 2.5.0-DEVELOPMENT. I've followed your instructions and it appears to install OK, but I can't see it in kldstat, nor does my log look like yours. Do you have any suggestions on what I might be doing wrong?
[2.5.0-DEVELOPMENT][root@gw.griffo.co]/boot: cat loader.conf.local
if_re_load="YES"
My logs:
Dec 12 13:04:04 kernel re0: <RealTek 8168/8111 B/C/CP/D/DP/E/F/G PCIe Gigabit Ethernet> port 0xe000-0xe0ff mem 0x81300000-0x81300fff,0xa0100000-0xa0103fff irq 16 at device 0.0 on pci1
Dec 12 13:04:04 kernel re0: Using 1 MSI-X message
Dec 12 13:04:04 kernel re0: Chip rev. 0x2c800000
Dec 12 13:04:04 kernel re0: MAC rev. 0x00100000
Dec 12 13:04:04 kernel miibus0: <MII bus> on re0
[2.5.0-DEVELOPMENT][root@gw.griffo.co]/boot: kldstat
Id Refs Address                Size Name
 1   25 0xffffffff80200000  3ae7ee0 kernel
 2    1 0xffffffff846fa000     1000 cpuctl.ko
 3    1 0xffffffff846fb000     8c90 aesni.ko
 4    1 0xffffffff84704000     37f8 cryptodev.ko
 5    1 0xffffffff84708000      b28 coretemp.ko
 6    1 0xffffffff84709000    26fe8 ipfw.ko
 7    1 0xffffffff84730000    10e18 dummynet.ko
[2.5.0-DEVELOPMENT][root@gw.griffo.co]/boot: pkg info | grep realtek
realtek-re-kmod-v196.04_2      Kernel driver for Realtek PCIe Ethernet Controllers
[2.5.0-DEVELOPMENT][root@gw.griffo.co]/boot:
[2.5.0-DEVELOPMENT][root@gw.griffo.co]/boot/modules: ls -al
total 1300
drwxr-xr-x   2 root  wheel      512 Dec 12 13:42 .
drwxr-xr-x  10 root  wheel     1536 Dec 12 13:32 ..
-r-xr-xr-x   1 root  wheel   106328 Nov 11 13:23 bwi_v3_ucode.ko
-r-xr-xr-x   1 root  wheel  1168400 Nov 20 04:51 if_re.ko
-rw-r--r--   1 root  wheel      148 Dec 12 13:29 linker.hints
[2.5.0-DEVELOPMENT][root@gw.griffo.co]/boot/modules:
-
Never mind, I had so many other issues that I've rolled back to 2.4.5, so it's probably moot.
-
Hmm, looks like you have everything right. You could try this additional line in loader.conf.local, though I did not need it here; the loader looks there anyway:
if_re_name="/boot/modules/if_re.ko"
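For completeness, loader.conf.local would then hold both lines, and you could try loading the module by hand first, though a reboot is still the definitive test since the in-kernel re(4) driver may already have the NIC attached:
if_re_load="YES"
if_re_name="/boot/modules/if_re.ko"
kldload /boot/modules/if_re.ko    # manual attempt; may refuse or not re-attach if the built-in driver already claimed re0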
Steve
-
Hi all,
New to the pfSense world but wanted to follow up on this discussion. I've successfully compiled & installed the latest if_re.ko drivers from Realtek, v1.96.04, for my RTL8111/8168/8411 Gigabit ethernet controller (re0 interface). They are loaded upon boot without issue, as per kldstat and looking at logs in the web UI (Status > System Logs > System > General).
This is an integrated controller on my motherboard (an old P7H55-M PRO from Asus) which I'm testing for use as WAN in a homelab setup while I wait for my Intel NICs for LAN. The link is also recognized as 1Gbps as per ifconfig (1000Base-T <full-duplex>).
I am facing performance issues, though, when benchmarking with iperf3. I'm using Cat5e or Cat6 cabling all around, with short runs from the firewall to my client, and have tried new cables as well.
Using iperf3, when re0 is the SERVER, I get about 450Mbps. When re0 is the CLIENT, I get a big range, but about 450-600Mbps on average, sometimes peaking at 800Mbps for an entire run if I'm lucky. Retry (Retr column) is almost always all 0 or thereabouts.
The same client can easily get 900+ Mbps with another iperf3 server on the network.
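For reference, a minimal sketch of the kind of iperf3 runs being described (addresses are placeholders, not the actual setup):
iperf3 -s                     # on the re0 end (pfSense) when it acts as the server
iperf3 -c 192.0.2.1 -t 30     # on the client, pointed at re0
iperf3 -c 192.0.2.50 -t 30    # the reverse case, run from pfSense toward the other box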
I looked into CPU and memory usage during the tests: plenty of free memory (8GB RAM in the system) and swap not in use at all. As for the CPU (Intel Core i3-530), usage looks something like this, with lots of idle room...
CPU: 0.7% user, 0.0% nice, 8.0% system, 16.7% interrupt, 74.6% idle
I was wondering if anyone has faced similar performance issues, with the same lack of consistency (although never a steady 900Mbps+ or even 800Mbps+)? I saw that @nitewolfgtr and @stephenw10 spoke a bit about performance issues on a 2.5G card, but they never came to any conclusions.
Thanks!
-
@networkingmicrobe I had the same issue with my Qotom with the built-in Realteks. It seemed to hit some magic stop around 450Mbps, though sometimes it would go up to 600Mbps. There appeared to be plenty of CPU left, but it just maxed out. I replaced it with a new box with Intel NICs and hit wire-speed gigabit easily.
-
@networkingmicrobe said in Official Realtek Driver Binary 1.95 For 2.4.4 Release:
i3-530
That's a 2 core 4 thread CPU so 74% idle could be 1 virtual core at 100%. Potentially the core running iperf.
Try running top -aSH while you test.
Better to test through the firewall rather than to or from it directly.
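i.e. something along these lines, with a test host on each side so pfSense actually forwards the traffic (placeholder addresses; the host beyond WAN is just another test box):
iperf3 -s                     # server on the host behind the WAN (or OPT) interface
iperf3 -c 192.0.2.20 -t 30    # client on a LAN host, so the stream is routed/NATed by the firewall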
Steve
-
@networkingmicrobe If you want to benchmark pfSense on a particular device and setup, run iperf through pfSense, not on it.
It has been said MANY times on the forums.
Using iperf on the pfSense system itself is not a valid gauge of routing performance.
-
@networkingmicrobe I'm not entirely sure, but I suspect the driver is limited in that one TCP/UDP stream can only utilize one core on the CPU. Simply put, 450Mbps may be what a single core on your CPU (Intel Core i3-530) can handle if you run iperf with one client thread (the default).
You can test that by running iperf on the client side with more threads using -P (that's a capital P), something like:
iperf -c 192.168.0.1 -P 4
This will run the client with 4 threads, each opening its own TCP stream (adjust the IP address to whatever you are using).
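If the box has iperf3 rather than classic iperf, the equivalent would be (same placeholder address):
iperf3 -c 192.168.0.1 -P 4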
-
@napsterbater Correct, it is not a valid gauge of routing performance; however, I suspect that @networkingmicrobe is attempting to find out whether or not his NIC is capable of sending data close to the 1Gbit line speed, not the actual routing throughput.
-
@griffo - Thanks for the tip, looking forward to my Intel NICs then! Interesting that this seems to keep happening with integrated (on-motherboard) Realtek NICs, though.
@stephenw10 - You bring up a good point, thanks. That result I pasted above was actually from top -aSH. In that case, I'd expect the same results on an Intel NIC (over PCIe), since it may be an issue with single-core clock speed. It looks like cpu0 (the first core) is being used the most during a single-stream iperf3 test, with the WCPU% of [idle{idle: cpu0}] dropping to about 50% during the test and then back to 99% after.
I tried increasing the number of parallel streams to 2, and then 4, like @Cybermaze suggested, but saw the same results in terms of throughput (~400Mbps) and core utilization (mostly cpu0, perhaps some of cpu1 too).
@Napsterbater - I realize benchmarking pfSense is best done by going through the firewall, but what I wanted to simulate here was the maximum theoretical line speed by testing to the WAN port directly. I realize it's not at all indicative of performance through the firewall for other LAN devices, but it is a 'sanity check' I'd like to try first, if that makes sense, to see if I can ever get the theoretical ~1Gbps line speed to begin with. Basically what @Cybermaze suggested in their comment above.
Thanks all for your quick and informative replies, perhaps it's just a limitation of my onboard NIC for some reason.
-
iperf3 is in fact single threaded and will only ever use one CPU core. The -P switch can make it run multiple client streams, which can take advantage of multiqueue NICs for interrupt load, but it will still only use one CPU core on the client and server machine for iperf itself. Which is another good reason to test through the firewall.
If you need to test like that, try running multiple iperf servers or clients on different ports simultaneously.
https://fasterdata.es.net/performance-testing/network-troubleshooting-tools/iperf/multi-stream-iperf3/
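A minimal sketch of that approach, along the lines of the fasterdata write-up (ports and address are arbitrary):
iperf3 -s -p 5201 &                      # server side: one listener per port
iperf3 -s -p 5202 &
iperf3 -c 192.0.2.1 -p 5201 -t 30 &      # client side: one process per port, started in parallel
iperf3 -c 192.0.2.1 -p 5202 -t 30 &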
Steve
-
@Cybermaze - Small update, using multiple client streams into a pfSense iperf3 server yields similar results as a single stream of ~400-450Mbps.
However, I noticed that when using multiple client streams with pfSense as the client and another device as the server, I can get a steady 800Mbps throughput, much closer to the theoretical 1000Mbps max line speed. Interesting...
I'll look into running multiple iperf servers simultaneously as well, thanks @stephenw10.