-
Will igb+altq be fixed before 2.1.1 is released? This new driver resolves the MSI-X & num_queues issues I was having (thank you) but has broken altq.
My assumption was that since there was no mention of this issue on the "2.1.1 New Features & Changes" page, you guys had worked around it.
If the issue won't be fixed by release, might I suggest adding a user-selectable driver option? Stable (2.4.0) vs. Traffic Shaping (2.3.1).
EDIT: Note from jimp in the stickied thread for anyone who has this issue. Sounds like it will be fixed soon. https://forum.pfsense.org/index.php/topic,71546.msg390951.html#msg390951
-
Thanks for this post. I almost installed it, and I need altq. You saved me the hassle of reverting.
About the "legacy TX option" mentioned in the thread you linked to: do you know what it actually does? Will it revert the driver to older code that still has the "enable multiple queues" issues? I currently limit the queues to 1 on the systems that have igb interfaces (they have 4 to 8 processors and about 8 interfaces each). It would be nice to not have to do that, but I can't give up altq for it.
-
Thanks for this post. I almost installed it, and I need altq. You saved me the hassle of reverting.
About the "legacy TX option" mentioned in the thread you linked to: do you know what it actually does? Will it revert the driver to older code that still has the "enable multiple queues" issues? I currently limit the queues to 1 on the systems that have igb interfaces (they have 4 to 8 processors and about 8 interfaces each). It would be nice to not have to do that, but I can't give up altq for it.
As per the patch I posted in my support ticket (http://svnweb.freebsd.org/base?view=revision&revision=248906):
IGB_LEGACY_TX will override the stack if_transmit path and
instead use the older if_start non-multiqueue capable interface.
This might be desireable for testing, or to enable the use of
ALTQ.
… so it sounds like yes, it will revert to the single-queue path. With any luck it will still be less buggy, and maybe I can keep MSI-X enabled instead of having to fall back to MSI.
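For anyone who would rather build a test kernel than wait for a snapshot, IGB_LEGACY_TX is just a kernel build option. A minimal sketch of a custom kernel config derived from GENERIC (the official pfSense builds handle this themselves, so treat the names here as illustrative only):

# sys/amd64/conf/ALTQ_IGB — hypothetical custom config
include GENERIC
ident   ALTQ_IGB
options ALTQ
options ALTQ_HFSC          # plus whichever other ALTQ disciplines you use
options IGB_LEGACY_TX      # use the older if_start path so ALTQ can attach to igb

Then the usual make buildkernel KERNCONF=ALTQ_IGB / make installkernel KERNCONF=ALTQ_IGB and a reboot.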
-
Thanks for that info. I'm just surprised that Intel cards have complications like this; I figured Intel would be one of the most reliable and trouble-free choices for a 4-port 1 Gb card. I don't have any experience with igb cards (I have read about issues with 1.2.0 which might be fixed with 1.2.1), but I'm now wondering which 4-port 1 Gb cards are the most stable, perform well, and work by default even with a bunch of interfaces and CPUs. There aren't many to choose from, really. I do realize that pf is single threaded, so multiple queues probably don't help as much as it sounds like they would. That's especially true since I read that the old igb driver (the new one too?) just spinlocks on the queues, with each queue waiting for its turn behind the others (or something like that), which wastes CPU when there are a ton of queues and a lot of waiting going on (hopefully the new driver code fixes that, but without altq right now).
EDIT: I don't remember if it is the igb driver queues or the pf code that spinlocks when multiple queues are waiting. I will have to find that very old thread again.
-
I suspect the troubles have more to do with FreeBSD 8.x. We're not exactly talking about the newest code here (FreeBSD 8.0 was originally released in late 2009) and I'm pretty sure that the newest drivers were backported from ones Intel wrote for FreeBSD 9.x and 10.x. I think most of the issues we've been having with the newer Intel parts will go away with pfSense 2.2.
-
The next snapshots should be fine with regards to ALTQ and the new drivers imported.
-
@ermal:
The next snapshots should be fine with regards to ALTQ and the new drivers imported.
Great. Is there a specific date I should be looking for on the snapshot or are they ready now?
-
Thanks, ermal. Will new installations with igb have to change anything to get altq to work, or will the defaults work?
-
@ermal:
The next snapshots should be fine with regards to ALTQ and the new drivers imported.
Great. Is there a specific date I should be looking for on the snapshot or are they ready now?
There is a set that is uploading now that is from just before his change. The next set after this will be fixed.
-
No tweaking needed, as expected.
The next snapshot should have it.
-
@ermal:
The next snapshots should be fine with regards to ALTQ and the new drivers imported.
Great. Is there a specific date I should be looking for on the snapshot or are they ready now?
There is a set that is uploading now that is from just before his change. The next set after this will be fixed.
Great, thanks guys. I'll give it a try on my backup box as soon as I see the next snapshot.
-
I just installed the latest snapshot and altq works. I didn't lose the multiple queues in the process either, so I think this is a win-win. Now I just need 2.1.1 to go gold so I can upgrade my master system as well as my backup and bring up some 10GbE goodness.
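In case anyone else wants to sanity-check both halves after upgrading, these are the commands I'd reach for (a rough sketch; exact output differs a bit between builds):

# each igb port should still show one MSI-X vector per queue ("que 0", "que 1", …)
vmstat -i | grep igb

# ALTQ queues should be listed with live counters once the shaper is enabled
pfctl -vsq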
-
I also upgraded a non-critical but heavily used cluster yesterday. I upgraded the backup member to…
2.1.1-PRERELEASE (amd64)
built on Wed Jan 22 16:56:12 EST 2014
FreeBSD 8.3-RELEASE-p14
No issues for a day, and altq still worked. I upgraded the primary about 3 hours ago, which got the 23rd snapshot. No issues there either, but I upgraded them from an old 2.1.0 RC1 candidate from several months ago.
-
I also upgraded a non-critical but heavily used cluster yesterday. I upgraded the backup member to…
2.1.1-PRERELEASE (amd64)
built on Wed Jan 22 16:56:12 EST 2014
FreeBSD 8.3-RELEASE-p14
No issues for a day, and altq still worked. I upgraded the primary about 3 hours ago, which got the 23rd snapshot. No issues there either, but I upgraded them from an old 2.1.0 RC1 candidate from several months ago.
You're running both members of a production cluster on 2.1.1 snapshots? I ask because I've had a few instaboots on my backup box.
-
I had one cluster member on the 2.1.1 prerelease and the other on the old 2.1.0 RC1 for about a day, with the backup (2.1.1) as master. I just upgraded both to the 2.1.1 prerelease. We will see how it goes.
This network doesn't have any complex configuration, but I do use it to test new things I want to use in other production environments. It mainly offloads web traffic to a cable modem, plus some other non-critical uses that do stress the connection at times. I have redundancy built into the proxy setup I implemented internally, so any downtime shifts the traffic to the production internet connection within about 10 seconds of web requests timing out (a perl script runs constantly and switches the proxy configs over to a backup link if the primary fails).