Inter-VLAN routing
-
There are two slots and two network cards, with one VLAN per card.
Both connectors look like PCI-X - long green slots. But after your question I checked the documentation: one is just PCI.
Is it a better idea to put all VLANs on the same NIC? Are there any recommendations?
One is a PCI-X slot; the other is a 64-bit/33MHz PCI slot, which has double the bandwidth of the typical PCI slot you find on consumer boards. It should theoretically support a dual-gigabit NIC without issues.
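For a rough sense of the theoretical numbers (back-of-the-envelope figures, not measurements): plain 32-bit/33MHz PCI tops out at about 133 MB/s, 64-bit/33MHz at about 266 MB/s, and 64-bit/133MHz PCI-X at roughly 1067 MB/s, all shared by every device on that bus. A gigabit link carries at most ~125 MB/s per direction, and routed traffic crosses the bus twice (DMA in to RAM, then back out to the NIC), so a single saturated gigabit flow already wants around 250 MB/s of that 266 MB/s.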
-
Thank you very much for all your useful and prompt answers to my question.
Actually it is already answered: I am going to use a cheap L2+/L3 switch for inter-VLAN routing, something like the HP V1910-16G.
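Just to sketch what that means on the switch side (purely illustrative, generic L3-switch-style configuration, not V1910-specific; the VLAN IDs and addresses are made up): one routed interface per VLAN, with the hosts in each VLAN pointing their default gateway at it.

vlan 10
vlan 20
interface vlan 10
 ip address 192.168.10.1 255.255.255.0
interface vlan 20
 ip address 192.168.20.1 255.255.255.0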
But my test setup is still on my desk this week. Let me know if I can do an inter-VLAN test for you… or a performance test... or some other exercise :)
-
It would be interesting to compare performance, throughput, CPU usage etc. with both VLANs on the PCI-X card. If you have time. :)
Steve
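If you want numbers that are easy to compare, something like iperf between the two workstations would do it (assuming iperf is installed on both; the address is only an example), while watching CPU on the pfSense box with top:

iperf -s                              (on the workstation in the first VLAN)
iperf -c 192.168.10.10 -t 60 -i 10    (on the workstation in the second VLAN)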
-
It would be interesting to compare performance, throughput, CPU usage etc. with both VLANs on the PCI-X card. If you have time. :)
The same test, replayed with both VLANs on the PCI-X card: the transfer rate is 50-55 MBps and CPU load is 35-45% (no device polling enabled)… Not much faster...
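For reference, the setup behind that number is just two tagged VLANs on the one bge interface. Done by hand on FreeBSD it would look roughly like this (pfSense does the equivalent through its GUI; the VLAN IDs and addresses here are only placeholders):

ifconfig bge0 up
ifconfig vlan10 create vlan 10 vlandev bge0
ifconfig vlan10 inet 192.168.10.1/24
ifconfig vlan20 create vlan 20 vlandev bge0
ifconfig vlan20 inet 192.168.20.1/24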
-
A slightly reduced CPU load though. Hmm.
I guess a 64-bit PCI slot is not going to be restricting you. Is the PCI card 64-bit though? I'd be surprised.
Steve
-
It's an interesting topic this, to me at least.
I had quite a long discussion a while ago when I discovered that a box I was working on had 4 gigabit NICs that were all on the same PCI bus. I was trying to get a definitive answer on what the maximum theoretical throughput would be between two interfaces on that bus but never quite got to grips with it. It would be nice to know just so I don't end up hoping to push more data than I could ever possibly achieve.
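The furthest I got was the back-of-the-envelope version, assuming a plain 32-bit/33MHz bus: it's shared and half duplex at about 133 MB/s, and every forwarded packet crosses it twice (DMA from the ingress NIC to RAM, then from RAM to the egress NIC), so the ceiling between two interfaces on that bus should be somewhere around 133 / 2 ≈ 66 MB/s of routed traffic, before arbitration and protocol overhead take their cut.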
I guess with almost everything going to PCI-e it's less of an issue these days.
Steve
Edit: I wonder if both your slots are on the same bus by any chance? What do you see from pciconf -lv?
-
I wonder if both your slots are on the same bus by any chance? What do you see from pciconf -lv?
Here is the output:
$ pciconf -lv
hostb0@pci0:0:0:0:   class=0x060000 card=0x00000000 chip=0x00171166 rev=0x32 hdr=0x00
    class    = bridge
    subclass = HOST-PCI
hostb1@pci0:0:0:1:   class=0x060000 card=0x00000000 chip=0x00171166 rev=0x00 hdr=0x00
    class    = bridge
    subclass = HOST-PCI
skc0@pci0:0:3:0:     class=0x020000 card=0x4b011186 chip=0x4b011186 rev=0x11 hdr=0x00
    class    = network
    subclass = ethernet
vgapci0@pci0:0:4:0:  class=0x030000 card=0x01411028 chip=0x47521002 rev=0x27 hdr=0x00
    class    = display
    subclass = VGA
atapci0@pci0:0:5:0:  class=0x010185 card=0x01411028 chip=0x06801095 rev=0x02 hdr=0x00
    class    = mass storage
    subclass = ATA
hostb2@pci0:0:15:0:  class=0x060000 card=0x02011166 chip=0x02031166 rev=0xa0 hdr=0x00
    class    = bridge
    subclass = HOST-PCI
atapci1@pci0:0:15:1: class=0x01018a card=0x01411028 chip=0x02131166 rev=0xa0 hdr=0x00
    class    = mass storage
    subclass = ATA
ohci0@pci0:0:15:2:   class=0x0c0310 card=0x02201166 chip=0x02211166 rev=0x05 hdr=0x00
    class    = serial bus
    subclass = USB
isab0@pci0:0:15:3:   class=0x060100 card=0x02301166 chip=0x02271166 rev=0x00 hdr=0x00
    class    = bridge
    subclass = PCI-ISA
hostb3@pci0:0:16:0:  class=0x060000 card=0x00000000 chip=0x01101166 rev=0x12 hdr=0x00
    class    = bridge
    subclass = HOST-PCI
hostb4@pci0:0:16:2:  class=0x060000 card=0x00000000 chip=0x01101166 rev=0x12 hdr=0x00
    class    = bridge
    subclass = HOST-PCI
bge0@pci0:1:3:0:     class=0x020000 card=0x007c0e11 chip=0x164514e4 rev=0x15 hdr=0x00
    class    = network
    subclass = ethernet
Another note: when I do the same test without pfSense (both test workstations on the same switch and in the same subnet) I see a transfer rate of 80-90 MBps…
-
Hmmm, I don't see either em or ste interfaces in that list.
However, skc0 and bge0 are on different buses. skc0 is on bus 0 along with everything else, so it will have to share the bandwidth.
Steve
Edit: I should have said pciconf -lc. That should show your cards' capabilities further.
-
Here it is:
$ pciconf -lc
hostb0@pci0:0:0:0:   class=0x060000 card=0x00000000 chip=0x00171166 rev=0x32 hdr=0x00
hostb1@pci0:0:0:1:   class=0x060000 card=0x00000000 chip=0x00171166 rev=0x00 hdr=0x00
skc0@pci0:0:3:0:     class=0x020000 card=0x4b011186 chip=0x4b011186 rev=0x11 hdr=0x00
    cap 01[48] = powerspec 2  supports D0 D1 D2 D3  current D0
    cap 03[50] = VPD
vgapci0@pci0:0:4:0:  class=0x030000 card=0x01411028 chip=0x47521002 rev=0x27 hdr=0x00
    cap 01[5c] = powerspec 2  supports D0 D1 D2 D3  current D0
atapci0@pci0:0:5:0:  class=0x010185 card=0x01411028 chip=0x06801095 rev=0x02 hdr=0x00
    cap 01[60] = powerspec 2  supports D0 D1 D2 D3  current D0
hostb2@pci0:0:15:0:  class=0x060000 card=0x02011166 chip=0x02031166 rev=0xa0 hdr=0x00
atapci1@pci0:0:15:1: class=0x01018a card=0x01411028 chip=0x02131166 rev=0xa0 hdr=0x00
ohci0@pci0:0:15:2:   class=0x0c0310 card=0x02201166 chip=0x02211166 rev=0x05 hdr=0x00
isab0@pci0:0:15:3:   class=0x060100 card=0x02301166 chip=0x02271166 rev=0x00 hdr=0x00
hostb3@pci0:0:16:0:  class=0x060000 card=0x00000000 chip=0x01101166 rev=0x12 hdr=0x00
    cap 07[60] = PCI-X 64-bit supports 133MHz, 512 burst read, 8 split transactions
hostb4@pci0:0:16:2:  class=0x060000 card=0x00000000 chip=0x01101166 rev=0x12 hdr=0x00
    cap 07[60] = PCI-X 64-bit supports 133MHz, 512 burst read, 8 split transactions
bge0@pci0:1:3:0:     class=0x020000 card=0x007c0e11 chip=0x164514e4 rev=0x15 hdr=0x00
    cap 07[40] = PCI-X 64-bit supports 133MHz, 512 burst read, 1 split transaction
    cap 01[48] = powerspec 2  supports D0 D3  current D0
    cap 03[50] = VPD
    cap 05[58] = MSI supports 8 messages, 64 bit
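The cap 07 lines are the PCI-X capability entries; to pull just those out of the listing you can filter the same output, for example:

$ pciconf -lc | grep -B1 "PCI-X"

That shows bge0 advertising a 64-bit PCI-X capability up to 133MHz, so the card itself is 64-bit.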
-
What happened to your Intel NIC?
Steve
-
It was swapped for an HP NC7770 PCI-X gigabit server adapter, see my Reply #5. The Intel NIC goes into another test.
In this test my VLANs are on the HP NC7770 gigabit server adapter.
-
Ah yes. :-[