APU1D4, Possibly Failing



  • I recently started having performance issues; I can't pull more than 10Mbit on a 60Mbit link. I thought the 2.2.5 update might have caused it, but rolling back didn't solve the issue.

    Connecting my laptop directly outside the pfSense box confirms that my ISP isn't the issue.

    Running iperf as a client against a FreeBSD server on the LAN side tops out at 13Mbit.

    Running iperf as a server causes the pfSense box to stop replying to all LAN traffic until it's rebooted.

    Has anyone else run into this with the APU1D4 boards? I'm trying to figure out whether it's dying or whether I somehow changed something that's causing the issue, though I didn't change any settings around the time the problem started.



  • Hardware failure likely wouldn't exhibit itself that way. The Realtek NICs on the APUs can be finicky with certain devices, with slow performance being a symptom at times. Might want to try putting a switch in between the APU WAN NIC and your modem. The loss and re-gain of link during the reboot would be the likely trigger if that's the cause. Just power cycling the modem might be worthwhile if you haven't already.



  • I found out that the iperf-as-server crash was due to me temporarily enabling device polling; removing that solved it. The iperf performance looks better now (but only 90Mbit, connected to a Dell PowerConnect 1G switch). However, throughput to the outside still won't top 13Mbit. I don't think it's an issue between the cable modem and the APU1D4's Realtek NIC, as it was working at 50-60Mbit less than a week ago.

    I did the 2.2.5 update on Saturday and noticed the issues Sunday. Restoring the full backup, along with a configuration restore, didn't resolve the issue.

    CPU & RAM usage appear normal.

    I guess I will build a VirtualBox VM on my FreeBSD server to temporarily duplicate all the settings into, then do a full rebuild of the APU1D4 and re-enter the settings, in case something got corrupted during the upgrade.



  • Nothing gets corrupted in a way that just makes things slow. If you have traffic shaping or limiters configured, try removing them and see if that makes any difference. Beyond that, I'm not just dreaming up those NIC issues; they sometimes pop up out of nowhere after a link loss and re-gain. Putting a switch in between is a quick thing to try, at least if you have a spare available.

    Reboot after disabling polling if you haven't already.



  • I don't have a spare, but I can set up a new VLAN on the LAN switch. I will need to do this anyway in order to set up the VM.
    I just don't believe that after 10 months of working fine between two devices it would suddenly stop, unless there is a power issue or failing components. Both the cable modem and the pfSense box are on the same power strip attached to a UPS. I don't have a spare power brick for the pfSense box, however, so I can't rule that out. I have swapped out LAN cables just in case, tried different ports on the cable modem, and rebooted both devices multiple times.
    Also, since I can't pull 100Mbit using iperf on the local LAN side, where I should be pulling 500-700Mbit according to other posts, I suspect heat over time may be causing the APU1D4 to fall short of its full potential. That said, it's correctly installed in the enclosure with the heat sink, and the room it's in has never been over 75°F.
    Local LAN performance between devices is running normally, and I have verified that the cable modem is running correctly; the only device that doesn't seem to be running correctly is the APU1D4.
    However, with CPU use and temperature reporting normally, I want to believe there is a bug or an incorrect setting, but I can't figure out what that would be. When I did the upgrade I checked the option to create a full backup and tried restoring that. It didn't resolve the problem, but the problem didn't exist prior to the upgrade.



  • Hello,

    if the APU1D4 had some tunings applied, the files they lived in would be rewritten during or after
    the upgrade and those tunings would be gone for good, unless they were placed in a
    /boot/loader.conf.local file, which survives an update or upgrade so the settings get applied again.
    Such tunings are often things like:

    • TRIM support if an mSATA or SSD is installed <- must or should be done
      Without it, old blocks are never trimmed and the mSATA or SSD slows down over time
    • PowerD (Hiadaptive), really important on the APU1D4 <- must be done
      Without PowerD the APU may run at only 600MHz CPU frequency
    • increasing the mbuf size, if really needed <- could be done
      Not critical, but in some cases an undersized mbuf pool can narrow overall throughput

    There are also well-known problems with auto-negotiation mismatches on the LAN ports between some
    modems and the APU1D4, so the tip to use a switch between them is a really good one in my
    eyes.
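
    A minimal sketch of such a file, assuming stock pfSense 2.2.x defaults (the mbuf value is illustrative, not a measured recommendation; TRIM is enabled with tunefs rather than here, and PowerD is set under System > Advanced):

    ```sh
    # /boot/loader.conf.local -- survives pfSense updates, unlike loader.conf
    # Illustrative mbuf cluster bump; only needed if "netstat -m" shows exhaustion
    kern.ipc.nmbclusters="131072"
    ```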



  • I actually didn't have TRIM enabled prior to the upgrade. It has since been enabled as part of the troubleshooting process. <- Maybe this is the cause... the SSD slowing down due to no TRIM?
    tunefs -p /
    tunefs: POSIX.1e ACLs: (-a)                                disabled
    tunefs: NFSv4 ACLs: (-N)                                  disabled
    tunefs: MAC multilabel: (-l)                              disabled
    tunefs: soft updates: (-n)                                enabled
    tunefs: soft update journaling: (-j)                      enabled
    tunefs: gjournal: (-J)                                    disabled
    tunefs: trim: (-t)                                        enabled
    tunefs: maximum blocks per file in a cylinder group: (-e)  4096
    tunefs: average file size: (-f)                            16384
    tunefs: average number of files in a directory: (-s)      64
    tunefs: minimum percentage of free space: (-m)            8%
    tunefs: space to hold for metadata blocks: (-k)            6408
    tunefs: optimization preference: (-o)                      time
    tunefs: volume label: (-L)

    Contents of /boot/loader.conf.local:
    kern.cam.boot_delay=10000
    ahci_load="YES"

    PowerD has been confirmed to be set to HiAdaptive.
    output from powerd -v
    load  0%, current freq  875 MHz ( 1), wanted freq  847 MHz
    load  4%, current freq  875 MHz ( 1), wanted freq  820 MHz
    load  10%, current freq  875 MHz ( 1), wanted freq  794 MHz
    load  0%, current freq  875 MHz ( 1), wanted freq  769 MHz
    load  0%, current freq  875 MHz ( 1), wanted freq  744 MHz
    changing clock speed from 875 MHz to 750 MHz
    load  11%, current freq  750 MHz ( 2), wanted freq  720 MHz
    load  38%, current freq  750 MHz ( 2), wanted freq  729 MHz
    load  6%, current freq  750 MHz ( 2), wanted freq  706 MHz
    load  14%, current freq  750 MHz ( 2), wanted freq  683 MHz
    load  0%, current freq  750 MHz ( 2), wanted freq  661 MHz
    load  64%, current freq  750 MHz ( 2), wanted freq 1128 MHz
    changing clock speed from 750 MHz to 1000 MHz
    load 110%, current freq 1000 MHz ( 0), wanted freq 2000 MHz
    load  45%, current freq 1000 MHz ( 0), wanted freq 2000 MHz
    load  4%, current freq 1000 MHz ( 0), wanted freq 1937 MHz
    load  7%, current freq 1000 MHz ( 0), wanted freq 1876 MHz

    I set up a new VLAN, moved the WAN interface to a port in that VLAN along with a cable to the cable modem. No change in performance.

    When I get home from work tonight I will try locking the interfaces at 100M full instead of 1G auto on both the pfSense box and the switch to see if that helps; since my Internet is only 60/5Mbps, restricting to 100Mbps shouldn't hurt.

    MBUF usage appears to be staying below 10%, so I doubt increasing it will help.
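
    For reference, locking speed/duplex on the FreeBSD side is a media setting; this is a sketch only, since pfSense actually stores it in its own config and applies it via the interface's Speed and Duplex option (the interface name re2 is taken from later in this thread):

    ```sh
    # Illustrative /etc/rc.conf-style entry -- pfSense sets the equivalent via the GUI
    ifconfig_re2="media 100baseTX mediaopt full-duplex"
    # Revert to auto-negotiation:
    # ifconfig_re2="media autoselect"
    ```

    The equivalent live command would be ifconfig re2 media 100baseTX mediaopt full-duplex.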



  • PowerD has been confirmed to be set to HiAdaptive.

    This was the most important point for me; if it's not set, the APU could be running at only 600MHz or 800MHz,
    and that could be too slow for the needed throughput.

    When I get home from work tonight I will try locking the interfaces at 100M full instead of 1G auto on both the pfSense box and the switch to see if that helps; since my Internet is only 60/5Mbps, restricting to 100Mbps shouldn't hurt.

    I don't think so, because you normally get around ~100 MBit/s of throughput through the GB ports, but
    if you use them only as 100 MBit/s ports you will normally only get about ~12 MBit/s of throughput out
    of them; that was also the reason to ask about a possible RJ45 auto-negotiation mismatch between the
    WAN ports of the modem and the pfSense box.



  • Yes, 100Mbits would only give you 12MBytes/s, but it most certainly will give you 100Mbits…
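
    For the unit arithmetic behind this (raw line rate, ignoring protocol overhead):

    ```
    100 Mbit/s  ÷ 8 bits/byte = 12.5 MByte/s
    1000 Mbit/s ÷ 8 bits/byte = 125 MByte/s
    ```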

    Locking the LAN interface bumped my throughput to the Internet from around 10Mbit to 25Mbit, still not the 60Mbit I should be seeing. Strangely, though, locking the WAN interface to 100Mbit as well dropped me to 5Mbit. This one baffles me a bit.

    Oddly enough, the iperf results on the LAN are slower with the interface at 100Mbit than at 1000Mbit; neither result reaches 100Mbit, but throughput through the firewall is faster with the LAN at 100Mbit. Both iperf results come in above the 60Mbit download speed that my ISP provides.



  • Well, it took a lot of testing with different settings; sadly, the one thing that finally broke past the 20-33Mbps download speed was setting both the LAN and WAN interfaces to 100Mbps half duplex. I can now pull 50-55Mbps on downloads. I guess I will live with this until I can find enough room in my budget to replace it. I can't seem to find any option that doesn't use Realtek cards for under $400.
    I thought about getting CARP set up with a VM as the primary, but my headless VirtualBox server is a little low on memory, so I didn't want to put that additional load on it.
    While I was running in the 20-33 range most things seemed fine; it was likely running in that range for some time and just recently started falling below it, which is when I noticed.



  • both LAN and WAN at 100Mbps half duplex.

    Some modems can be connected to a PC directly; you could try forcing the WAN
    port to 1 GBit/s full duplex and see if it links correctly. Alternatively, you could set up a small GB switch
    in front of the WAN port, between the modem and the WAN port.

    I can now pull 50-55Mbps on downloads.

    On a test or a real download?

    I guess I will live with this until I can find enough room in my budget to replace it.  Can't seem to find any option that doesn't use RealTek cards for under $400.

    pfSense Box 1
    pfSense Box 2



  • Well, after searching and testing, starting a thread on the PC Engines forum, and getting some feedback, I started testing with all three interfaces instead of just re1 and re2. I discovered something: the problem only exists when the re2 interface is one of the interfaces passing traffic. Not sure why I didn't think to try this earlier, but with re1 as the Internet and re0 as the LAN, I can now get the full 60Mbit speed with autoselect/flow control set on the interfaces; previously re2 was set up as the LAN. I am back to using all the same cables and ports as the original install, with the LAN now configured on re0 instead of re2.

    re0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
            options=8209b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,WOL_MAGIC,LINKSTATE>
            ether 00:0d:b9:3a:4e:d8
            inet6 fe80::20d:b9ff:fe3a:4ed8%re0 prefixlen 64 scopeid 0x1
            inet 192.168.5.1 netmask 0xffffff00 broadcast 192.168.5.255
            nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
            media: Ethernet autoselect <flowcontrol> (1000baseT <full-duplex,flowcontrol,rxpause,txpause>)
            status: active

    re0: <RealTek 8111/8168 B/C/CP/D/DP/E/F/G PCIe Gigabit Ethernet> port 0x1000-0x10ff mem 0xf7a00000-0xf7a00fff,0xf7900000-0xf7903fff irq 16 at device 0.0 on pci1
    re1: <RealTek 8111/8168 B/C/CP/D/DP/E/F/G PCIe Gigabit Ethernet> port 0x2000-0x20ff mem 0xf7c00000-0xf7c00fff,0xf7b00000-0xf7b03fff irq 17 at device 0.0 on pci2
    re2: <RealTek 8111/8168 B/C/CP/D/DP/E/F/G PCIe Gigabit Ethernet> port 0x3000-0x30ff mem 0xf7e00000-0xf7e00fff,0xf7d00000-0xf7d03fff irq 18 at device 0.0 on pci3

    Unless there is some strange memory addressing issue with the re2 interface and FreeBSD, this kind of leans me back towards believing that the hardware is beginning to die.



  • Weird, yeah sounds like one of the NICs has a hardware issue of some sort.

