Official Realtek Driver Binary 1.95 For 2.4.4 Release
-
@stephenw10
It does link at 2.5Gbps. My hardware is an HP 290-p0043w with a Celeron G4900 and 4GB of RAM. I have a Trendnet 2.5G network adapter installed in the PCIe x1 slot.
Here's the ifconfig output. The 2.5Gbps NIC is re1. When I set my WAN interface back to igb0, I get 950Mbps down on speedtest. I just ran another test on re1 and I'm getting 615Mbps down.

igb0: flags=8c02<BROADCAST,OACTIVE,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=6403bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,VLAN_HWTSO,RXCSUM_IPV6,TXCSUM_IPV6>
        ether ac:16:2d:95:08:dc
        hwaddr ac:16:2d:95:08:dc
        nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
        media: Ethernet autoselect
        status: no carrier
igb1: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=500b8<VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,VLAN_HWFILTER,VLAN_HWTSO>
        ether ac:16:2d:95:08:dd
        hwaddr ac:16:2d:95:08:dd
        nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
        media: Ethernet autoselect (1000baseT <full-duplex>)
        status: active
igb2: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=500b8<VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,VLAN_HWFILTER,VLAN_HWTSO>
        ether ac:16:2d:95:08:dd
        hwaddr ac:16:2d:95:08:de
        nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
        media: Ethernet autoselect (1000baseT <full-duplex>)
        status: active
igb3: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=500b8<VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,VLAN_HWFILTER,VLAN_HWTSO>
        ether ac:16:2d:95:08:dd
        hwaddr ac:16:2d:95:08:df
        nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
        media: Ethernet autoselect (1000baseT <full-duplex>)
        status: active
re0: flags=8803<UP,BROADCAST,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=2018<VLAN_MTU,VLAN_HWTAGGING,WOL_MAGIC>
        ether c4:65:16:30:2b:67
        hwaddr c4:65:16:30:2b:67
        inet 172.16.2.1 netmask 0xffffff00 broadcast 172.16.2.255
        inet6 fe80::1:1%re0 prefixlen 64 tentative scopeid 0x5
        inet6 2601:2c2:780:6919:c665:16ff:fe30:2b67 prefixlen 64 tentative
        nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
        media: Ethernet autoselect
        status: no carrier
re1: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=2018<VLAN_MTU,VLAN_HWTAGGING,WOL_MAGIC>
        ether 3c:8c:f8:f9:7b:30
        hwaddr 3c:8c:f8:f9:7b:30
        inet6 fe80::3e8c:f8ff:fef9:7b30%re1 prefixlen 64 scopeid 0x6
        inet6 2001:558:6022:76:cc0e:e073:c4d8:89b7 prefixlen 128
        inet <REDACTED> netmask 0xfffff800 broadcast 255.255.255.255
        nd6 options=23<PERFORMNUD,ACCEPT_RTADV,AUTO_LINKLOCAL>
        media: Ethernet autoselect (2500Base-T <full-duplex>)
        status: active
enc0: flags=0<> metric 0 mtu 1536
        nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
        groups: enc
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
        options=680003<RXCSUM,TXCSUM,LINKSTATE,RXCSUM_IPV6,TXCSUM_IPV6>
        inet6 ::1 prefixlen 128
        inet6 fe80::1%lo0 prefixlen 64 scopeid 0x8
        inet 127.0.0.1 netmask 0xff000000
        nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
        groups: lo
pflog0: flags=100<PROMISC> metric 0 mtu 33160
        groups: pflog
pfsync0: flags=0<> metric 0 mtu 1500
        groups: pfsync
lagg0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=500b8<VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,VLAN_HWFILTER,VLAN_HWTSO>
        ether ac:16:2d:95:08:dd
        inet 172.16.1.1 netmask 0xffffff00 broadcast 172.16.1.255
        inet6 fe80::1:1%lagg0 prefixlen 64 scopeid 0xb
        inet6 2601:2c2:780:6910:ae16:2dff:fe95:8dd prefixlen 64
        nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
        media: Ethernet autoselect
        status: active
        groups: lagg
        laggproto lacp lagghash l2,l3,l4
        laggport: igb1 flags=1c<ACTIVE,COLLECTING,DISTRIBUTING>
        laggport: igb2 flags=1c<ACTIVE,COLLECTING,DISTRIBUTING>
        laggport: igb3 flags=1c<ACTIVE,COLLECTING,DISTRIBUTING>
lagg0.20: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        ether ac:16:2d:95:08:dd
        inet 172.16.20.1 netmask 0xffffff00 broadcast 172.16.20.255
        inet6 fe80::1:1%lagg0.20 prefixlen 64 scopeid 0xc
        inet6 2601:2c2:780:6912:ae16:2dff:fe95:8dd prefixlen 64
        nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
        media: Ethernet autoselect
        status: active
        vlan: 20 vlanpcp: 4 parent interface: lagg0
        groups: vlan
lagg0.30: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        ether ac:16:2d:95:08:dd
        inet 172.16.30.1 netmask 0xffffff00 broadcast 172.16.30.255
        inet6 fe80::1:1%lagg0.30 prefixlen 64 scopeid 0xd
        inet6 2601:2c2:780:6913:ae16:2dff:fe95:8dd prefixlen 64
        nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
        media: Ethernet autoselect
        status: active
        vlan: 30 vlanpcp: 0 parent interface: lagg0
        groups: vlan
lagg0.40: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        ether ac:16:2d:95:08:dd
        inet 172.16.40.1 netmask 0xffffff00 broadcast 172.16.40.255
        inet6 fe80::1:1%lagg0.40 prefixlen 64 scopeid 0xe
        inet6 2601:2c2:780:6914:ae16:2dff:fe95:8dd prefixlen 64
        nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
        media: Ethernet autoselect
        status: active
        vlan: 40 vlanpcp: 2 parent interface: lagg0
        groups: vlan
lagg0.50: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        ether ac:16:2d:95:08:dd
        inet 172.16.50.1 netmask 0xffffff00 broadcast 172.16.50.255
        inet6 fe80::1:1%lagg0.50 prefixlen 64 scopeid 0xf
        inet6 2601:2c2:780:6915:ae16:2dff:fe95:8dd prefixlen 64
        nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
        media: Ethernet autoselect
        status: active
        vlan: 50 vlanpcp: 4 parent interface: lagg0
        groups: vlan
lagg0.60: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        ether ac:16:2d:95:08:dd
        inet 172.16.60.1 netmask 0xffffff00 broadcast 172.16.60.255
        inet6 fe80::1:1%lagg0.60 prefixlen 64 scopeid 0x10
        inet6 2601:2c2:780:6916:ae16:2dff:fe95:8dd prefixlen 64
        nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
        media: Ethernet autoselect
        status: active
        vlan: 60 vlanpcp: 1 parent interface: lagg0
        groups: vlan
lagg0.70: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        ether ac:16:2d:95:08:dd
        inet 172.16.70.1 netmask 0xffffff00 broadcast 172.16.70.255
        inet6 fe80::1:1%lagg0.70 prefixlen 64 scopeid 0x11
        inet6 2601:2c2:780:6917:ae16:2dff:fe95:8dd prefixlen 64
        nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
        media: Ethernet autoselect
        status: active
        vlan: 70 vlanpcp: 5 parent interface: lagg0
        groups: vlan
lagg0.80: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        ether ac:16:2d:95:08:dd
        inet 172.16.80.1 netmask 0xffffff00 broadcast 172.16.80.255
        inet6 fe80::1:1%lagg0.80 prefixlen 64 scopeid 0x12
        inet6 2601:2c2:780:6918:ae16:2dff:fe95:8dd prefixlen 64
        nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
        media: Ethernet autoselect
        status: active
        vlan: 80 vlanpcp: 1 parent interface: lagg0
        groups: vlan
lagg0.10: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        ether ac:16:2d:95:08:dd
        inet 172.16.10.1 netmask 0xffffff00 broadcast 172.16.10.255
        inet6 fe80::1:1%lagg0.10 prefixlen 64 scopeid 0x13
        inet6 2601:2c2:780:6911:ae16:2dff:fe95:8dd prefixlen 64
        nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
        media: Ethernet autoselect
        status: active
        vlan: 10 vlanpcp: 3 parent interface: lagg0
        groups: vlan
-
What if you use re0? What does the CPU load look like when you're testing?
top -aSH
-
@stephenw10 said in Official Realtek Driver Binary 1.95 For 2.4.4 Release:
top -aSH
re0 works great and I get the full 950Mbps during speedtest. CPU load for re0 is around 20-25% during speedtest, whereas re1 is around 10-15%.
Also, does this mean anything? Output of pciconf -lv for re0 shows the correct controller, but re1 is missing the device attribute.
% pciconf -lv re0
re0@pci0:2:0:0: class=0x020000 card=0x843f103c chip=0x816810ec rev=0x15 hdr=0x00
    vendor   = 'Realtek Semiconductor Co., Ltd.'
    device   = 'RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller'
    class    = network
    subclass = ethernet
% pciconf -lv re1
re1@pci0:4:0:0: class=0x020000 card=0x012310ec chip=0x812510ec rev=0x00 hdr=0x00
    vendor   = 'Realtek Semiconductor Co., Ltd.'
    class    = network
    subclass = ethernet
-
The missing description is unlikely to matter.
I was more interested in the queue or interrupt loads on your CPU cores while testing. igb will use multiple CPU cores by default; re usually doesn't, but this is new hardware, so....
CPU usage between those NICs looks to be in line with the throughput at least.
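If it helps, one way to watch that while a test runs (both are stock FreeBSD tools; the grep pattern is just a sketch matching the NIC names in this thread):
top -aSH
vmstat -i | grep -E 're|igb'
The first shows per-thread load, including interrupt threads; the second shows interrupt counts and rates per device queue.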
Steve
-
@breakaway Thanks, you saved me!!!
-
THANKS! It works (on XigmaNAS 12.1).
Do you know if WOL works? I tried to activate it, but I wasn't able to make it work. Not sure if it's something I'm doing or it's just not implemented in the driver (or the board, for that matter).
Thank you
Bye -
Nope, sorry, I don't own any Realtek cards.
-Rico
-
Now that there is a FreeBSD package for this, I would use that over a relatively unknown binary source:
pkg add https://pkg.FreeBSD.org/FreeBSD:11:amd64/latest/All/realtek-re-kmod-v196.04_2.txz
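A quick sanity check after installing, assuming the package drops the module in /boot/modules as usual:
kldload /boot/modules/if_re.ko
kldstat | grep if_re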
That would be much easier if it was in our repo though. Let me see...
Steve
-
https://redmine.pfsense.org/issues/11079
Should be in snapshots soon.
-
Looks good:
[2.5.0-DEVELOPMENT][admin@apu.stevew.lan]/root: pkg search realtek
realtek-re-kmod-v196.04_2      Kernel driver for Realtek PCIe Ethernet Controllers
So do:
pkg install realtek-re-kmod
then
echo 'if_re_load="YES"' >> /boot/loader.conf.local
Then reboot and check the boot logs for:
re0: <Realtek PCIe GbE Family Controller> port 0x1000-0x10ff mem 0xf7a00000-0xf7a00fff,0xf7900000-0xf7903fff irq 16 at device 0.0 on pci1
re0: Using Memory Mapping!
re0: Using 1 MSI-X message
re0: ASPM disabled
re0: version:1.96.04
re0: Ethernet address: 00:0d:b9:37:30:10
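You can also double-check after boot with standard commands (this assumes your NIC probes as re0):
dmesg | grep '^re0'
kldstat | grep if_re
You should see the Realtek version line above rather than the stock in-kernel driver banner.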
The current version should be installed every time; it's built against our kernel source etc.
It's unlikely this will be backported to 2.4.5, but it's far easier (and probably safer) to use in 2.5 when it's released, or in snapshots now.
Steve
-
@stephenw10 For some reason I cannot seem to get this to load.
I had the previous driver loaded on 2.4.5 but have upgraded to 2.5.0-DEVELOPMENT. I've followed your instructions, and it appears to install OK, but I can't see it in kldstat, nor does my log look like yours. Do you have any suggestions on what I might be doing wrong?
[2.5.0-DEVELOPMENT][root@gw.griffo.co]/boot: cat loader.conf.local
if_re_load="YES"
My logs:
Dec 12 13:04:04 kernel re0: <RealTek 8168/8111 B/C/CP/D/DP/E/F/G PCIe Gigabit Ethernet> port 0xe000-0xe0ff mem 0x81300000-0x81300fff,0xa0100000-0xa0103fff irq 16 at device 0.0 on pci1
Dec 12 13:04:04 kernel re0: Using 1 MSI-X message
Dec 12 13:04:04 kernel re0: Chip rev. 0x2c800000
Dec 12 13:04:04 kernel re0: MAC rev. 0x00100000
Dec 12 13:04:04 kernel miibus0: <MII bus> on re0
[2.5.0-DEVELOPMENT][root@gw.griffo.co]/boot: kldstat
Id Refs Address                Size Name
 1   25 0xffffffff80200000  3ae7ee0 kernel
 2    1 0xffffffff846fa000     1000 cpuctl.ko
 3    1 0xffffffff846fb000     8c90 aesni.ko
 4    1 0xffffffff84704000     37f8 cryptodev.ko
 5    1 0xffffffff84708000      b28 coretemp.ko
 6    1 0xffffffff84709000    26fe8 ipfw.ko
 7    1 0xffffffff84730000    10e18 dummynet.ko
[2.5.0-DEVELOPMENT][root@gw.griffo.co]/boot: pkg info | grep realtek
realtek-re-kmod-v196.04_2      Kernel driver for Realtek PCIe Ethernet Controllers
[2.5.0-DEVELOPMENT][root@gw.griffo.co]/boot/modules: ls -al
total 1300
drwxr-xr-x   2 root  wheel      512 Dec 12 13:42 .
drwxr-xr-x  10 root  wheel     1536 Dec 12 13:32 ..
-r-xr-xr-x   1 root  wheel   106328 Nov 11 13:23 bwi_v3_ucode.ko
-r-xr-xr-x   1 root  wheel  1168400 Nov 20 04:51 if_re.ko
-rw-r--r--   1 root  wheel      148 Dec 12 13:29 linker.hints
-
Never mind, I had so many other issues that I've rolled back to 2.4.5, so it's probably moot.
-
Hmm, looks like you have everything right. You could try this additional line in loader.conf.local, though I did not need it here; the loader looks there anyway:
if_re_name="/boot/modules/if_re.ko"
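You could also try loading it by hand to rule out a path problem (using the module location from your ls output above):
kldload /boot/modules/if_re.ko
kldstat | grep if_re
If the manual load works but the boot-time load doesn't, the loader.conf.local entries are the likely culprit.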
Steve
-
Hi all,
New to the pfSense world but wanted to follow up on this discussion. I've successfully compiled & installed the latest if_re.ko drivers from Realtek, v1.96.04, for my RTL8111/8168/8411 Gigabit ethernet controller (re0 interface). They are loaded upon boot without issue, as per kldstat and looking at logs in the web UI (Status > System Logs > System > General).
This is an integrated controller on my motherboard (an old P7H55-M PRO from Asus) which I'm testing for use as WAN in a homelab setup while I wait for my Intel NICs for LAN. The link is also recognized as 1Gbps as per ifconfig (1000Base-T <full-duplex>). I am facing performance issues, though, when benchmarking with iperf3. I'm using Cat5e or Cat6 cabling all around, with short runs from the firewall to my client, and have tried new cables as well.
Using iperf3, when re0 is the SERVER, I get about 450Mbps. When re0 is the CLIENT, I get a big range, but about 450-600Mbps on average, sometimes peaking at 800Mbps for an entire run if I'm lucky. Retry (Retr column) is almost always all 0 or thereabouts.
The same client can easily get 900+ Mbps with another iperf3 server on the network.
I looked into CPU and memory usage during the tests: there's plenty of free memory (8GB RAM on the system), and swap is not in use at all. As for the CPU (Intel Core i3-530), usage looks something like this, with lots of idle room...
CPU: 0.7% user, 0.0% nice, 8.0% system, 16.7% interrupt, 74.6% idle
I was wondering if anyone has faced similar performance issues, with this kind of inconsistency as well (although never a steady 900Mbps+ or even 800Mbps+)? I saw that @nitewolfgtr and @stephenw10 spoke a bit about performance issues on a 2.5G card, but never came to any conclusions.
Thanks!
-
@networkingmicrobe I had the same issue with my Qotom with the built-in Realteks. It seemed to hit some magic cap around 450Mbits; sometimes, though, it would go up to 600Mbits. There appeared to be plenty of CPU left, but it just maxed out. I replaced it with a new box with Intel NICs and hit wire-speed gigabit easily.
-
@networkingmicrobe said in Official Realtek Driver Binary 1.95 For 2.4.4 Release:
i3-530
That's a 2-core, 4-thread CPU, so 74% idle could be one virtual core at 100%. Potentially the core running iperf.
Try running top -aSH while you test.
Better to test through the firewall rather than to or from it directly.
Steve
-
@networkingmicrobe If you want to benchmark pfSense on a particular device and setup, run iperf through pfSense, not on it.
It has been said MANY times on the forums.
Using iperf on the pfSense system itself is not a valid gauge of routing performance.
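A minimal sketch of a through-the-firewall test (the addresses are hypothetical; the point is that pfSense only routes the traffic):
# on a host behind the LAN interface, e.g. 172.16.1.10:
iperf3 -s
# on a host on the other side of pfSense:
iperf3 -c 172.16.1.10 -t 30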
-
@networkingmicrobe I'm not entirely sure, but I suspect the driver is limited in that one TCP/UDP stream can only utilize one core on the CPU. Simply put, 450Mbps may be what a single core of your CPU (Intel Core i3-530) can handle if you run iperf with one client thread (the default).
You can test that by running iperf on the client side with more threads using -P (that's a capital P), something like:
iperf -c 192.168.0.1 -P 4
This will run the client with 4 threads, each opening its own TCP stream (adjust the IP address to whatever you are using).
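The flag is the same in iperf3, if that's what you're running:
iperf3 -c 192.168.0.1 -P 4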
-
@napsterbater Correct, it is not a valid gauge of routing performance; however, I suspect that @networkingmicrobe is attempting to find out whether or not his NIC is capable of sending data close to the 1Gbit line speed, not the actual routing throughput.
-
@griffo - Thanks for the tip, looking forward to my Intel NICs then! Interesting that this seems to keep happening with integrated (on-motherboard) Realtek NICs, though.
@stephenw10 - You bring up a good point, thanks. That result I pasted above was actually from top -aSH. In that case, I'd expect the same results on an Intel NIC (over PCIe), since it may be an issue with single-core clock speed. It looks like cpu0 (the first core) is being used the most during a single-stream iperf3 test, with the WCPU% of [idle{idle: cpu0}] dropping to about 50% during the test and back to 99% after.
I tried increasing the number of parallel streams to 2, and then 4, like @Cybermaze suggested, but saw the same results in terms of throughput (~400Mbps) and core utilization (mostly cpu0, perhaps some of cpu1 too).
@Napsterbater - I realize benchmarking pfSense is best done by going through the firewall, but what I wanted to simulate here was the maximum theoretical line speed by testing to the WAN port directly. I realize it's not at all indicative of performance through the firewall on other LAN devices, but it's a 'sanity check' I'd like to try first, if that makes sense, to see if I can ever get the theoretical ~1Gbps line speed expected to begin with. Basically what @Cybermaze suggested in their comment above.
Thanks all for your quick and informative replies, perhaps it's just a limitation of my onboard NIC for some reason.