Inter-VLAN routing
-
Hey guys,
We have pfSense 2.0.1 on a Dell PowerEdge 650 (Celeron 2GHz and 1GB of RAM) and we are using 2 NICs. One is a quad-port D-Link (ste) and the other is a dual-port Intel gigabit NIC (em). So far only 3 ports of the D-Link NIC are in use (WAN, LAN, DMZ).
We have several IPTV developers, and their experiments cause problems; the most annoying ones are broadcasts/multicasts and rogue DHCP servers. I would like to isolate those developers in their own subnet while still leaving them access to a few of our servers and the Internet.
So I tried VLANs on the Intel gigabit NIC. All our switches are 3Com L2 models, and pfSense routes traffic between the VLANs. To my surprise the performance was quite poor: it floats between 20 and 30MBps, CPU load is at 100%, and even the pfSense web interface does not respond… Question one: is there some trick to speed up inter-VLAN routing on pfSense?
Question two: is VLAN the wrong technology for my case? Could someone point me to the right one, please?
Thanks for your time.
-
You're using a 10-year-old server, with a processor and D-Link NICs better suited to low-end desktops than to a server even 10 years ago, so don't expect miracles. Plus, 30 Mbps (assuming you mean Mbps, bits, not MBps, bytes) means it's actually pushing 60 Mbps (in + out), and I doubt that D-Link card has hardware VLAN support, aside from missing other hardware found on any good NIC that lessens CPU load. You're killing the CPU, partly because of the low-end NIC and partly because it's a low-end CPU (the minimal L1 and L2 cache take a serious bite out of network throughput).
The way to speed it up is to use a faster box and a better-quality NIC. A PE 1850 with Xeon processors can be found for under $100 USD on eBay and would blow away that 650; you should be able to get gigabit wire speed on one. VLANs are your best, and probably only, option to accomplish what you're looking for. But if you expect to route several Gbps sustained between VLANs, a firewall of any type is not the answer; you'll need an L3 switch.
-
Thank you cmb, very detailed answer.
My second NIC is an Intel PRO/1000 Dual Port Server Adapter; that is where I run my VLAN experiments. It was not clear from my original post, sorry. It has hardware support for VLANs and 'Intelligent Offloads'. With your help I see the bottleneck: the CPU and the PCI-X bus.
I will check if we can fit an L3 switch in the budget.
Using the PE 650 is company policy, and I have to respect it. All the branch offices have the same hardware and software for their perimeter firewall, and there are a few spare boxes in stock as part of the disaster recovery plan (also very handy for tests and experiments). pfSense is very good on my PE 650 box for all the other tasks. I will probably look for a CPU upgrade later and test inter-VLAN routing performance again. I am curious. ;)
Another question: is there a way to cap routing at a maximum of 60% of CPU time, so there is some reserve for other pfSense tasks?
-
Ah, the Intel NIC would definitely be better. But that Celeron is such a weak CPU that it's not going to do much; just putting the fastest compatible P4 CPU in there would likely make a significant difference. The only way to prevent interrupt load from overloading the system is to use polling.
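A hedged sketch of what that looks like on the FreeBSD 8.x base that pfSense 2.0.x is built on (the interface name and the percentage below are examples, not recommendations; pfSense also exposes a device polling checkbox under System > Advanced):

# enable polling on the VLAN parent NIC; needs a kernel built with
# DEVICE_POLLING, which the stock pfSense kernel has, given the GUI option
ifconfig em0 polling

# reserve ~40% of CPU for userland (web GUI, daemons), leaving ~60% for
# packet processing; this is also roughly the answer to your 60% question
sysctl kern.polling.user_frac=40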
-
@cmb:
The only way to prevent interrupt load from overloading the system is to use polling.
Agreed. But some Intel NICs include various forms of "interrupt moderation" and can be configured to delay interrupts to increase the likelihood the driver will be able to process multiple frames on an interrupt, thus reducing interrupt overhead.
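For illustration only: em(4) exposes those delays as loader tunables in FreeBSD 8.x. The tunable names below are real, but the values are invented starting points rather than tested recommendations; on pfSense they would go in /boot/loader.conf.local so upgrades don't overwrite them:

# em(4) interrupt moderation, values in units of ~1.024 microseconds
hw.em.rx_int_delay=32        # delay RX interrupts so several frames batch up (default 0 = off)
hw.em.rx_abs_int_delay=66    # absolute ceiling so a delayed interrupt still fires
hw.em.tx_int_delay=66
hw.em.tx_abs_int_delay=66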
-
Wallabybob, cmb, Thanks for your answers.
I have found a P4 2.4GHz@533 CPU to replace the Celeron in my PE650 box, and I have run the inter-VLAN copy test again to see the difference. In this test my VLANs are on an HP NC7770 gigabit server adapter.
Transfer rate is between 45-55MBps, CPU load is within 50-65%, and the web interface responds quickly.
Indeed, the P4 is better than the Celeron. :)
-
So you do mean MB/sec and not Mbps, which means you were getting up to almost 250 Mbps through the Celeron. That's actually really impressive; you would drop a significant amount of money on a commercial firewall that can push that. Getting it up to 440 Mbps is even better. In both instances, that's what I would expect to see with such a CPU.
-
Transfer rate is between 45-55MBps, CPU load is within 50-65%
Interesting; obviously not limited by your CPU. If this were a PCI interface I would think that is your new limiting factor, but you mentioned PCI-X. Are both VLANs on the same NIC?
Steve
-
Interesting; obviously not limited by your CPU. If this were a PCI interface I would think that is your new limiting factor, but you mentioned PCI-X. Are both VLANs on the same NIC?
There are two slots and two network cards; one VLAN per card is used.
Both connectors look like PCI-X (long green slots), but after your question I checked the documentation: one is just PCI.
Is it a better idea to put all VLANs on the same NIC? Are there any recommendations?
-
Most important: don't mix tagged and untagged traffic on one NIC, as that can cause problems.
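As a sketch of what tagged-only looks like (the VLAN tags, interface names and addresses are invented for the example; in pfSense you would normally set this up under Interfaces > Assign > VLANs rather than by hand):

# all IP configuration lives on the VLAN children; em0 itself gets no address
ifconfig vlan10 create vlan 10 vlandev em0
ifconfig vlan10 inet 192.168.10.1/24 up
ifconfig vlan20 create vlan 20 vlandev em0
ifconfig vlan20 inet 192.168.20.1/24 up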
If you had both VLANs on one NIC then all the traffic has to share both the same cable and the same bus interface. With gigabit Ethernet you can potentially have 1Gbps simultaneously in both directions, though in reality you probably won't get close to that; it would still mean a total throughput of 2Gbps at the NIC-to-motherboard interface. A single PCI bus has a bandwidth of just over 1Gbps, and that is shared with everything else on the bus. A PCI-X slot can handle a lot more and is on a separate bus.
If you had both VLANs on a single PCI gigabit card, I would expect around 440Mbps to be about the maximum speed you could ever get.
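To put rough numbers on that (these are standard PCI figures, not measurements from this box): conventional PCI is 32 bits wide at 33MHz, so 4 bytes x 33MHz gives about 133MB/s, roughly 1.06Gbps, shared and half-duplex. Routed traffic crosses the bus twice, once from the NIC into RAM and once back out, which halves that to roughly 530Mbps; arbitration and protocol overhead eat a good chunk more, which is how you land near 440Mbps.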
For best throughput I would try one VLAN on the NIC in the PCI-X slot and one on the NIC in the PCI slot, if you can do that; which is how you have it now.
You should probably try putting both VLANs on the PCI-X NIC, if only as an experiment. PCI-X equipment tends to be much higher quality, and the bus bandwidth is high enough for it not to be a restriction.
Steve
-
There are two slots and two network cards; one VLAN per card is used.
Both connectors look like PCI-X (long green slots), but after your question I checked the documentation: one is just PCI.
Is it a better idea to put all VLANs on the same NIC? Are there any recommendations?
One is a PCI-X slot; the other is a PCI 64bit/33MHz slot, which has double the bandwidth of the typical PCI slot you find on consumer boards. It should theoretically support a dual-gigabit NIC without issues.
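Putting numbers to that (standard bus figures): 64bit/33MHz PCI moves 8 bytes x 33MHz, about 266MB/s or roughly 2.1Gbps shared, versus about 1.06Gbps for common 32bit/33MHz PCI. A PCI-X slot at 64bit/133MHz is around 1066MB/s, roughly 8.5Gbps.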
-
Thank you very much for all your useful and prompt answers to my question.
Actually it is already answered: I am going to use a cheap L2+/L3 switch for inter-VLAN routing, something like the HP V1910-16G.
But my test setup is still on my desk this week. Let me know if I can run an inter-VLAN test for you… or a performance test… or some other exercise. :)
-
It would be interesting to compare performance, throughput, CPU usage etc. with both VLANs on the PCI-X card. If you have time. :)
Steve
-
It would be interesting to compare performance, throughput, CPU usage etc. with both VLANs on the PCI-X card. If you have time. :)
Here is the same test with both VLANs on the PCI-X card: transfer rate is 50-55MBps and CPU load is 35-45% (no device polling enabled)… Not much faster…
-
A slightly reduced CPU load though. Hmm.
I guess a 64-bit PCI slot is not going to be restricting you. Is the PCI card 64-bit though? I'd be surprised.
Steve
-
It's an interesting topic, this; to me at least.
I had quite a long discussion a while ago when I discovered that a box I was working on had 4 gigabit NICs that were all on the same PCI bus. I was trying to get a definitive answer on the maximum theoretical throughput between two interfaces on that bus, but never quite got to grips with it. It would be nice to know, just so I don't end up hoping to push more data than I could ever possibly achieve.
I guess with almost everything going to PCI-e it's less of an issue these days.
Steve
Edit: I wonder if both your slots are on the same bus by any chance? What do you see from pciconf -lv?
-
I wonder if both your slots are on the same bus by any chance? What do you see from pciconf -lv?
Here is the output:
$ pciconf -lv
hostb0@pci0:0:0:0:   class=0x060000 card=0x00000000 chip=0x00171166 rev=0x32 hdr=0x00
    class    = bridge
    subclass = HOST-PCI
hostb1@pci0:0:0:1:   class=0x060000 card=0x00000000 chip=0x00171166 rev=0x00 hdr=0x00
    class    = bridge
    subclass = HOST-PCI
skc0@pci0:0:3:0:     class=0x020000 card=0x4b011186 chip=0x4b011186 rev=0x11 hdr=0x00
    class    = network
    subclass = ethernet
vgapci0@pci0:0:4:0:  class=0x030000 card=0x01411028 chip=0x47521002 rev=0x27 hdr=0x00
    class    = display
    subclass = VGA
atapci0@pci0:0:5:0:  class=0x010185 card=0x01411028 chip=0x06801095 rev=0x02 hdr=0x00
    class    = mass storage
    subclass = ATA
hostb2@pci0:0:15:0:  class=0x060000 card=0x02011166 chip=0x02031166 rev=0xa0 hdr=0x00
    class    = bridge
    subclass = HOST-PCI
atapci1@pci0:0:15:1: class=0x01018a card=0x01411028 chip=0x02131166 rev=0xa0 hdr=0x00
    class    = mass storage
    subclass = ATA
ohci0@pci0:0:15:2:   class=0x0c0310 card=0x02201166 chip=0x02211166 rev=0x05 hdr=0x00
    class    = serial bus
    subclass = USB
isab0@pci0:0:15:3:   class=0x060100 card=0x02301166 chip=0x02271166 rev=0x00 hdr=0x00
    class    = bridge
    subclass = PCI-ISA
hostb3@pci0:0:16:0:  class=0x060000 card=0x00000000 chip=0x01101166 rev=0x12 hdr=0x00
    class    = bridge
    subclass = HOST-PCI
hostb4@pci0:0:16:2:  class=0x060000 card=0x00000000 chip=0x01101166 rev=0x12 hdr=0x00
    class    = bridge
    subclass = HOST-PCI
bge0@pci0:1:3:0:     class=0x020000 card=0x007c0e11 chip=0x164514e4 rev=0x15 hdr=0x00
    class    = network
    subclass = ethernet
Another note: when I run the same test without pfSense in the path (both test workstations on the same switch and in the same subnet) I see a transfer rate of 80-90MBps…
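In case anyone reproduces these tests: a file copy measures the disks and the file-sharing stack as much as the network, so these MBps figures may understate what the wire can do. Something like iperf measures the network path alone; the address and options below are only an example:

# on one workstation (server side):
iperf -s

# on the other workstation, pointed at the first (example address):
iperf -c 192.168.10.5 -t 30 -i 5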
-
Hmmm, I don't see either em or ste interfaces in that list.
However, skc0 and bge0 are on different buses. skc0 is on bus 0 along with everything else, so it will have to share that bandwidth.
Steve
Edit: I should have said pciconf -lc. That should show your cards' capabilities further.