No matter what I do, through pfSense I'm getting between 190-200Mb down, and between 400-600Mb up..
-
@stephenw10 I don't HAVE to set the speed/duplex.. I just do it to eliminate it as a variable. It's been running autonegotiation forever (and it finds the speed correctly). But when there are speed issues, the first thing is always to reduce/eliminate as many variables as possible for the cause. (It's why we dumbed the installation down to pretty much factory spec.)
-
Setting speed and duplex on copper gig is not a variable you should ever mess with.. It should always be left at auto.
-
@johnpoz While I do agree in principle, historically I've had issues sometimes with Cisco and Broadcom core switches (not with THIS system mind you, but with others, and eliminating autoneg got rid of those problems).
But as I'm at the end of my rope here, I'm willing to try just about anything.. (This problem has been going on for a while, but as our data needs are ramping up, it's now becoming more of an issue since we can't take advantage of our SLA speed.)
-
What happens many times when you try to force speed or duplex in Gigabit circuits is the other end of the conversation gets confused because Gigabit links want (just about demand, actually) to be set to auto-negotiate. If one end gets confused about speed or duplex (especially duplex), speeds will suffer tremendously.
I know where you are coming from by using prior bad experiences with auto-negotiate and thus wanting to hard-code, but that is strongly discouraged in Gigabit land. You can sometimes get away with it in 10/100 setups, but even there these days most everything expects auto-negotiate on copper.
-
It's not principle... it's part of the standard..
Clause 40 (1000BASE-T) makes special use of Auto-Negotiation and requires additional MII registers. This use is summarized below. Details are provided in 40.5.
- Auto-Negotiation is mandatory for 1000BASE-T (see 40.5.1).
- 1000BASE-T requires an ordered exchange of Next Page messages (see 40.5.1.2), or optionally an exchange of an Extended Next Page message. 1000BASE-T parameters are configured based on information provided by the exchange of Next Page messages.
- 1000BASE-T uses MASTER and SLAVE to define PHY operations and to facilitate the timing of transmit and receive operations. Auto-Negotiation is used to provide information used to configure MASTER-SLAVE status (see 40.5.2). 1000BASE-T transmits and receives Next Pages for exchange of information related to MASTER-SLAVE operation. The information is specified in MII registers 9 and 10 (see 32.5.2 and 40.5.1.1), which are required in addition to registers 0-8 as defined in 28.2.4.
- 1000BASE-T adds new message codes to be transmitted during Auto-Negotiation (see 40.5.1.3).
- 1000BASE-T adds 1000BASE-T full duplex and half duplex capabilities to the priority resolution table (see 28B.3) and MII Extended Status Register (see 22.2.2.4).
- 1000BASE-T is defined as a valid value for “x” in 28.3.1 (e.g., link_status_1GigT). 1GigT represents that the 1000BASE-T PMA is the signal source.
If you're hard-setting it - this for sure could cause you issues!
If you have issues with devices doing gig on auto - then you need to figure out why that is in order to get gig.. Not hard-set it.
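If you think auto is not doing its job, check what the link actually negotiated instead of hard-setting it. A minimal example from the pfSense shell (igb0 here is just a stand-in for whichever port is your WAN):
# Show the media line for the NIC - a healthy copper gig link should read
# something like: media: Ethernet autoselect (1000baseT <full-duplex>)
ifconfig igb0 | grep media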
-
Folks, while I appreciate this is a hot topic, I think we are moving away from the central issue. Autoneg or not, the problem of speed still exists (setting it fixed or autoneg has zero impact on the issue).
As I mentioned earlier, I have had autoneg on for quite a long time, but as our data needs are ramping up, it's now becoming more of an issue since we can't take advantage of our SLA speed; hence my interest in eliminating this as a variable. (I don't control the data center upstream switch, but I have confirmed the channel speed and optimum MTU size with them.)
-
@MrSassinak said in No matter what I do, through pfSense I'm getting between 190-200Mb down, and between 400-600Mb up..:
Folks, while I appreciate this is a hot topic, I think we are moving away from the central issue. Autoneg or not, the problem of speed still exists (setting it fixed or autoneg has zero impact on the issue).
As I mentioned earlier, I have had autoneg on for quite a long time, but as our data needs are ramping up, it's now becoming more of an issue since we can't take advantage of our SLA speed; hence my interest in eliminating this as a variable. (I don't control the data center upstream switch, but I have confirmed the channel speed and optimum MTU size with them.)
You said in an earlier post you had it hard-coded. We assumed you still did.
Actually, reading through again, you said two different things. You said in one post you had it hard-coded. Then later you said it has been set to auto-negotiation forever. Which is it?
-
You can not actually troubleshoot a speed issue if you're hard-coding gig.. You can not - because now you have thrown in a known problem that could be the cause of the problem..
You need to forget the old days.. The only reason you would hard-code a gig interface is if you're forcing it to use a lower speed than gig.
-
Also, have you tried swapping NICs if that is possible? Or swapping which NIC port is in use. It's not out of the realm of possibility that you have a physical hardware issue with a NIC or its port. Don't forget the obvious, such as the cables on the pfSense WAN and LAN connections. I once had a speed/connectivity issue caused by a slightly bent gold finger contact inside an RJ45 port on a Dell motherboard. I was the second-level network support guy and my field premises tech was at the site. We had fiddled with the Cisco switch port, tried a different port, pulled out the RJ45 jack from the wall and re-punched the terminations, and swapped the patch cables both in the wiring closet and at the PC. Nada. Then my field guy just happened to be peering at the right spot on the rear of the PC and saw the bent pin inside the RJ45 port on the motherboard. Swapped the motherboard and problem solved.
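A quick way to spot that kind of physical-layer problem from the pfSense shell is to look at the interface error counters. A rough sketch (substitute your actual WAN/LAN interface names):
# Non-zero or climbing Ierrs/Oerrs/Colls usually point at a bad cable,
# a damaged port, or a duplex mismatch.
netstat -i -I igb0
netstat -i -I igb1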
-
@bmeeks Historically it has been set to autonegotiation. Since the speed has become a problem, in my effort to uncover the root cause, I've made several changes/modifications/tests (in sequence):
- Disabling traffic shaping
- Uninstalling all packages
- Rolling back all tunables
- Setting Speed/Duplex to fixed (once this showed it had no impact, it was set back to auto speed/duplex. I think this is the part that confused people.. it was not left on; it was set, tested, then reset back to auto)
- And then reinstalling pfSense and rolling back to the previous configuration.
Then, per akuma1x, I reset it back to factory (i.e., out of the box with ZERO changes other than what's stock from the installation).
So right now, I have a clean zero-configuration pfSense 2.4.4p3 install running on an Intel C3558 CPU with 8GB of RAM and a 128GB SSD (NICs are igb), and I am still stuck with 200Mb down and 400Mb up on a 1Gb connection.
-
@MrSassinak said in No matter what I do, through pfSense I'm getting between 190-200Mb down, and between 400-600Mb up..:
@bmeeks Historically it has been set to autonegotiation. Since the speed has become a problem, in my effort to uncover the root cause, I've made several changes/modifications/tests (in sequence):
- Disabling traffic shaping
- Uninstalling all packages
- Rolling back all tunables
- Setting Speed/Duplex to fixed (once this showed it had no impact, it was set back to auto speed/duplex. I think this is the part that confused people.. it was not left on; it was set, tested, then reset back to auto)
- And then reinstalling pfSense and rolling back to the previous configuration.
Then, per akuma1x, I reset it back to factory (i.e., out of the box with ZERO changes other than what's stock from the installation).
So right now, I have a clean zero-configuration pfSense 2.4.4p3 install running on an Intel C3558 CPU with 8GB of RAM and a 128GB SSD (NICs are igb), and I am still stuck with 200Mb down and 400Mb up on a 1Gb connection.
Look over my post immediately above this one. Don't discount that you may have a hardware issue somewhere.
-
@bmeeks I did try that as well.. I can't swap the NICs themselves (they're part of the motherboard), but I did move from igb0 to igb1 (basically swapped the default LAN and WAN interfaces I used) just as a test (before the factory reset), as well as get the DC guys to test out their line to me..
I'm pretty sure I'm going to need to bring some peace offerings of the liquid or edible sort next time, since I've been asking them to double check all their connections, speed, configuration, isolation, and even run a new line to me.
-
Do you have a way to run something like an iperf client/server setup so you can test each pfSense interface to a local host? That would let you see if the issue is within the pfSense box or outside. For example, test from a laptop or machine directly connected to the LAN to an iperf instance on pfSense. Then do the same on the WAN port (or even better, through pfSense itself from LAN to WAN or vice-versa).
pfSense itself can most definitely do Gigabit without breaking a sweat. So if iperf indicates the problem is within the pfSense setup, my bet is hardware, because the software is known to be capable of gigabit transfers with ease.
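Something like this minimal sketch is all it takes (assuming the iperf package is installed on the pfSense box; the address is a placeholder for your LAN interface IP):
# On pfSense, start a listener:
iperf3 -s
# From a host wired directly to the LAN interface, run the client:
iperf3 -c <pfSense-LAN-address> -t 30
# Repeat with a host plugged into the WAN segment to test that side.
-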
@bmeeks Let me also add one other historical note..
Previously, before moving to this hardware set, we were running a D525 1.8GHz Atom with 4GB of RAM and a 128GB SSD (I know.. way underpowered), but we were getting symmetrical 400Mb. (It used the Intel em driver, not the igb driver.) What prompted the change was ramping up the speed (and since we were rolling out VPN, we needed AES-NI for more efficient connections and speeds).
At the time, I assumed the reason we could never crack above 400Mb was the system (D525s are fine for minor home use, but a heavily used 1Gb connection with TINC, IPsec, and OpenVPN connections and several outbound streaming services... that was just not going to cut it as a pfSense box). But to date, I have yet to crack the 400Mb barrier with pfSense.
I have not yet tried another system, mostly because I have a preference for pfSense (I've been using it commercially for a long time, since the 1.x days, and before that m0n0wall) so I'm mostly familiar with it.. and I find it to be faster than most Linux-derived systems (otherwise Untangle would be on my radar). Love Linux as a server (we have a lot), but for performance, I find BSD-based systems to be better. I know my Cisco router can do the full 1G, as does a Ubiquiti EdgeRouter, as does Vyatta. (I tested with all of them in this DC before moving our kit here.. so I know the speed itself is good.. I just have to figure out why my beloved pfSense is not doing its job.)
-
@MrSassinak said in No matter what I do, through pfSense I'm getting between 190-200Mb down, and between 400-600Mb up..:
All the interfaces are synced at 1Gb (actually Fixed Speed/Duplex since auto-negotiate sometimes causes issues).
If you do that, you have to do it at both ends, otherwise you will likely have problems. Whether fixed 1 Gb or auto, you must have the same configuration at both ends of the cable. Generally speaking, you should use auto unless you have some need to use fixed. One example would be if you're using fibre, where the SFP tends to be fixed.
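For reference, on the pfSense/FreeBSD side forcing or resetting the media from the shell looks roughly like this (igb0 is only an example interface; normally you would set this from the interface page in the GUI):
# Force 1000baseT full-duplex - the switch port must be configured to match:
ifconfig igb0 media 1000baseT mediaopt full-duplex
# Return to the recommended auto-negotiation:
ifconfig igb0 media autoselect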
-
@MrSassinak said in No matter what I do, through pfSense I'm getting between 190-200Mb down, and between 400-600Mb up..:
@bmeeks Let me also add one other historical note..
Previously, before moving to this hardware set, we were running a D525 1.8GHz Atom with 4GB of RAM and a 128GB SSD (I know.. way underpowered), but we were getting symmetrical 400Mb. (It used the Intel em driver, not the igb driver.) What prompted the change was ramping up the speed (and since we were rolling out VPN, we needed AES-NI for more efficient connections and speeds).
At the time, I assumed the reason we could never crack above 400Mb was the system (D525s are fine for minor home use, but a heavily used 1Gb connection with TINC, IPsec, and OpenVPN connections and several outbound streaming services... that was just not going to cut it as a pfSense box). But to date, I have yet to crack the 400Mb barrier with pfSense.
I have not yet tried another system, mostly because I have a preference for pfSense (I've been using it commercially for a long time, since the 1.x days, and before that m0n0wall) so I'm mostly familiar with it.. and I find it to be faster than most Linux-derived systems (otherwise Untangle would be on my radar). Love Linux as a server (we have a lot), but for performance, I find BSD-based systems to be better. I know my Cisco router can do the full 1G, as does a Ubiquiti EdgeRouter, as does Vyatta. (I tested with all of them in this DC before moving our kit here.. so I know the speed itself is good.. I just have to figure out why my beloved pfSense is not doing its job.)
Try a test using iperf to check out the interfaces. There are many folks using pfSense for Gigabit symmetrical connections. In fact, during testing of Snort with Inline IPS mode on pfSense-2.5 the Netgate tester recorded 1.8 Gigabits/sec of sustained throughput. Without Snort running, that rose to 3.2 Gigabits/sec. So the software is certainly capable. That was three interfaces running on an SG-5100.
Of course there could be an issue with a particular driver on the FreeBSD side. And since pfSense is based on FreeBSD any driver problems would show up. Don't know for sure about any identified igb issues, but then I don't keep up with that area.
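If you want to see exactly which igb driver revision your install is using, something like this from the shell will show it (unit 0 is just the first igb port; adjust as needed):
# Print the driver description string, including its version, for igb0.
sysctl dev.igb.0.%desc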
-
@bmeeks I completely agree.. I did a lot of research before jumping in the forums, and these days a 1G internet connection is passé. It's why I'm very puzzled.. (and why we tried a number of system tunes and other changes) because I know the software is capable (based on many accounts) and I believe the hardware should be fine (since it's basically the same hardware Netgate is selling for 10Gb connections).
If I use a public iperf server (from an internal client through pfSense to a public iperf server; thankfully HE has one in the same DC I'm in), I get:
Client connecting to iperf.he.net, TCP port 5201
TCP window size: 325 KByte (default)
[ 3] local 10.10.10.160 port 55814 connected with 216.218.227.10 port 5201
[ ID] Interval Transfer Bandwidth
[ 3] 0.0- 0.0 sec 109 KBytes 287 Mbits/sec
But if I do an internal iperf (from internal client to pfsense on the LAN side):
Client connecting to 10.10.10.254, TCP port 5201
TCP window size: 85.0 KByte (default)
[ 3] local 10.10.10.160 port 48020 connected with 10.10.10.254 port 5201
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 735 MBytes 617 Mbits/sec
[ 4] 0.00-8.03 sec 886 MBytes 926 Mbits/sec
-
@MrSassinak said in No matter what I do, through pfSense I'm getting between 190-200Mb down, and between 400-600Mb up..:
@bmeeks I completely agree.. I did a lot of research before jumping in the forums, and these days a 1G internet connection is passé. It's why I'm very puzzled.. (and why we tried a number of system tunes and other changes) because I know the software is capable (based on many accounts) and I believe the hardware should be fine (since it's basically the same hardware Netgate is selling for 10Gb connections).
If I use a public iperf server (from an internal client through pfSense to a public iperf server; thankfully HE has one in the same DC I'm in), I get:
Client connecting to iperf.he.net, TCP port 5201
TCP window size: 325 KByte (default)
[ 3] local 10.10.10.160 port 55814 connected with 216.218.227.10 port 5201
[ ID] Interval Transfer Bandwidth
[ 3] 0.0- 0.0 sec 109 KBytes 287 Mbits/sec
But if I do an internal iperf (from internal client to pfsense on the LAN side):
Client connecting to 10.10.10.254, TCP port 5201
TCP window size: 85.0 KByte (default)
[ 3] local 10.10.10.160 port 48020 connected with 10.10.10.254 port 5201
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 735 MBytes 617 Mbits/sec
[ 4] 0.00-8.03 sec 886 MBytes 926 Mbits/sec
Is there a way for you to put something on the WAN side (another machine perhaps connected to the same switch) and do an iperf test through pfSense from LAN to WAN? The key is to be able to reliably eliminate any dependency on an external host. To really test the pfSense hardware and software you need an iperf endpoint directly on the WAN network and the other endpoint directly on the LAN network.
With that HE test, I suspect there are other hosts, routers or connections between you and them even if in the same DC. Just trying to make sure you test only pfSense itself. That's the only way to narrow down the problem.
Do you perhaps have another Intel server NIC, say one that uses the em driver, that you could substitute for testing? It's certainly not impossible for there to be a software issue in the NIC driver for igb. However, if true and widespread I would expect to see a lot of posts here complaining. The igb driver is used by some of the NICs in Netgate's SG-5100.
-
Don't know if anything in this thread directly applies to you, and the hardware is likely somewhat different, but here is a post from last year about igb throughput problems: https://forum.netgate.com/topic/133704/poor-performance-on-igb-driver/43.
And are the NIC ports on your board genuine Intel chips, or are they a clone from another vendor? If not genuine Intel, that may be part of the equation to consider. I found some instances on a Google search where the HP clone of, say, the Intel i350 didn't perform as well with the igb driver as the native Intel card did in the same box.
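One way to check what silicon is actually behind the igb ports is to pull the PCI vendor/device strings from the pfSense shell, for example:
# List network devices with their vendor/device strings; genuine Intel
# parts report vendor 'Intel Corporation' (PCI vendor ID 0x8086).
pciconf -lv | grep -B 3 -i network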
-
Yeah you really need to just test pfsense here, and take the internet out of the equation completely..
Test 1
boxA --- switch --- boxB
Can they do gig without pfsense between..
example
$ iperf3.exe -c 192.168.9.10
warning: Ignoring nonsense TCP MSS 0
Connecting to host 192.168.9.10, port 5201
[ 5] local 192.168.9.101 port 62734 connected to 192.168.9.10 port 5201
[ ID] Interval           Transfer     Bitrate
[ 5]   0.00-1.00   sec   110 MBytes   922 Mbits/sec
[ 5]   1.00-2.00   sec   113 MBytes   952 Mbits/sec
[ 5]   2.00-3.00   sec   113 MBytes   949 Mbits/sec
[ 5]   3.00-4.00   sec   115 MBytes   966 Mbits/sec
[ 5]   4.00-5.00   sec   113 MBytes   950 Mbits/sec
[ 5]   5.00-6.00   sec   113 MBytes   949 Mbits/sec
[ 5]   6.00-7.00   sec   113 MBytes   950 Mbits/sec
[ 5]   7.00-8.00   sec   113 MBytes   950 Mbits/sec
[ 5]   8.00-9.00   sec   113 MBytes   948 Mbits/sec
[ 5]   9.00-10.00  sec   113 MBytes   950 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate
[ 5]   0.00-10.00  sec  1.10 GBytes   949 Mbits/sec   sender
[ 5]   0.00-10.03  sec  1.10 GBytes   945 Mbits/sec   receiver
iperf Done.
Now do test with pfsense
boxA --- wan pfsense lan --- boxB
What do you get?
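And the second run is just the same thing with pfSense routing between the two boxes (addresses are placeholders for whatever your test hosts use):
# boxB sits on the WAN-side switch and runs the server:
iperf3 -s
# boxA sits on the LAN and points the client at boxB through pfSense:
iperf3 -c <boxB-address> -t 30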