Is bandwidth shaping across multiple Chelsio cards helpful?
-
I've been bandwidth shaping with IPFW on pfSense for the last few years, always on single Chelsio cards. Interrupt load from the Chelsio card has always been the problem, and the fix, even before my time, has just been to add more servers. CPU usage is about 20~30%, split between miniupnp and interrupt handling.
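For context, this is roughly how I eyeball the interrupt load (t5nex is what my Chelsio T5s show up as under vmstat -i; your driver may name them differently):
# sketch: where the interrupt time goes (t5nex = my T5 cards, adjust for yours)
vmstat -i | grep t5nex
top -SHPz   # the intr{...} kernel threads show which cores are eating interrupts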
Would adding more Chelsio cards and more CPU cores help alleviate this, by spreading the interrupt load from a single card across multiple cards?
I'm planning to test this next summer, but I'd like more information first, and some clarification on why this might not work.
Currently on 4 R320s (E5-1410 v2 @ 2.80GHz) with Chelsio 10Gbps cards. No bandwidth shaping at the moment; we restrict allowed devices by MAC address using the nice ipfw MAC tables. We don't really use the pipes: they're set to 0 and ignored.
00150 allow ip from any to any MAC table(resident_macs)
Resident macs being:
--- table(resident_macs), set(0) --- kindex: 4, type: mac references: 1, valtype: legacy algorithm: mac:hash items: 58520, size: 8895336
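For anyone wanting to reproduce the setup, the table is created and populated roughly like this (addresses here are made up):
# sketch of building the table (fake addresses)
ipfw table resident_macs create type mac
ipfw table resident_macs add 00:11:22:33:44:55
ipfw table resident_macs add 66:77:88:99:aa:bb
ipfw table resident_macs info   # prints the summary line quoted above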
The plan is to go with an R710 with dual X5690s (6 cores each at 3.46GHz) and multiple 10Gbps cards. Fast processors, and about 16GB of RAM, 4GB on each side of each proc. I don't think I can go smaller than that.
Would this be enough to traffic shape 1.5Gbps on each Chelsio card? I tried bandwidth shaping (dummynet) at 2Gbps across 3,800 devices (58,500 MAC entries, in and out) and interrupts from the Chelsio would block everything. Netgate had already warned me that rate limiting above 2Gbps would be impractical on the hardware they sell, and I agree, since everything available from Netgate has only a single PCI Express slot usable for 10Gbps cards.
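For anyone unfamiliar with dummynet, the basic shape of what I was testing looks roughly like this (rate and interface name are just from my setup; cxl0 is a T5 port, and the real config is per-resident pipes rather than one big one):
# minimal sketch of a dummynet pipe (values illustrative)
ipfw pipe 1 config bw 2000Mbit/s
ipfw add 100 pipe 1 ip from any to any via cxl0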
Am I thinking of interrupts correctly?
-
Hi @CuteBoi - if you don't mind me asking, what particular use case requires you to shape traffic in the multi-gigabit range? Thanks in advance.
-
Use case?
9k+ active users (pipes x2) across 28k devices (MACs x2). But I can only actively shape about 3k devices (MACs x2) under 1.5Gbps before I hit bottlenecks. I divide the residents by usage patterns across all the APs, then centralize them via VLANs onto different pfSense servers acting as bridges, and I monitor them. Each resident has a pair of pipes, shared between his devices that hit a given pfSense gateway.
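Concretely, the per-resident pipe pairs work roughly like this (subnet, rates, and the per-IP mask are illustrative; in production the queues are keyed per resident, not per device):
# sketch of a pipe pair with dynamic queues (values illustrative)
ipfw pipe 1 config bw 50Mbit/s mask dst-ip 0xffffffff   # download side
ipfw pipe 2 config bw 50Mbit/s mask src-ip 0xffffffff   # upload side
ipfw add 300 pipe 1 ip from any to 10.20.0.0/16
ipfw add 301 pipe 2 ip from 10.20.0.0/16 to any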
I'm trying to centralize it further, but I don't know if adding more NICs and processors would help alleviate the interrupts. The logic says yes.
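If it helps anyone, the knobs I plan to experiment with for spreading the load are roughly these (names per cxgbe(4) and netisr on recent FreeBSD; values are guesses for my hardware, not recommendations):
# /boot/loader.conf sketch (check cxgbe(4) for your FreeBSD version)
hw.cxgbe.nrxq="8"         # rx queues per port, spreads rx interrupts over cores
hw.cxgbe.ntxq="8"         # tx queues per port
net.isr.maxthreads="-1"   # one netisr thread per core
net.isr.bindthreads="1"   # pin netisr threads to cores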
After reading this paper, it seems like adding more cores and more NICs should work, but papers don't normally guarantee that current OSes behave as described. I guess I have to try:
https://www.net.in.tum.de/fileadmin/bibtex/publications/papers/MMBnet15-2.pdf