Netgate Discussion Forum

Inter-VLAN routing

cmb

You're using a 10-year-old server, with a processor and D-Link NICs better suited to low-end desktops than to a server even 10 years ago, so don't expect miracles. Plus 30 Mbps (assuming you mean Mbps - bits - not MBps - bytes) means it's actually pushing 60 Mbps (in + out), and I doubt that D-Link card has hardware VLAN support, aside from missing other hardware found on any good NIC that lessens CPU load. You're killing the CPU, partly because of the low-end NIC and partly because it's a low-end CPU (the minimal L1 and L2 cache takes a serious toll on network throughput).

The way to speed it up is to use a faster box and a better-quality NIC. A PE 1850 with Xeon procs can be found for under $100 USD on eBay and would blow away that 650; you should be able to get gigabit wire speed on one. VLANs are your best and probably only option to accomplish what you're looking to do. But if you expect to route several Gbps sustained between VLANs, a firewall of any type is not the answer; you'll need an L3 switch.

ka3ax

        Thank you cmb, very detailed answer.

My second NIC is an Intel PRO/1000 Dual Port Server Adapter. That is where I run my VLAN experiments; it was not clear from my original post, sorry. It has hardware support for VLANs and 'Intelligent Offloads'. With your help I see the bottleneck: CPU and PCI-X bus.

I will check if we can fit an L3 switch in the budget.

Using the PE 650 is company policy, and I have to respect it. All the branch offices have the same hardware / software for the perimeter firewall, and there are a few spare boxes in stock as part of the disaster recovery plan (also very handy for tests and experiments). pfSense is very good on my PE 650 box for all the other tasks. I will probably look for a CPU upgrade later and check inter-VLAN routing performance again. I am curious.  ;)

Another question: is there a way to limit routing to a maximum of 60% of CPU time, so there is some reserve left for other pfSense tasks?

cmb

Ah, the Intel NIC would definitely be better. But that Celeron is such a weak CPU that it's not going to do much; just putting the fastest compatible P4 CPU in there would likely make a significant difference. The only way to prevent interrupt load from overloading the system is to use polling.
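
As a rough sketch of what enabling polling can look like on a FreeBSD-based box like pfSense (the interface name and the value below are only examples, and pfSense of this era also exposes an 'Enable device polling' option under System > Advanced):

# example only: enable kernel device polling on one interface
# (requires a kernel built with DEVICE_POLLING - see polling(4))
ifconfig em0 polling
# kern.polling.user_frac reserves a share of CPU time for userland processes;
# a value of 40 leaves roughly 40% for other tasks and about 60% for packet
# processing, which is in the spirit of the 60% question above
sysctl kern.polling.user_frac=40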

wallabybob

            @cmb:

            The only way to prevent interrupt load from overloading the system is to use polling.

            Agreed. But some Intel NICs include various forms of "interrupt moderation" and can be configured to delay interrupts to increase the likelihood the driver will be able to process multiple frames on an interrupt, thus reducing interrupt overhead.
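
As a hedged illustration, the FreeBSD em(4) driver exposes interrupt-delay tunables that can be set in /boot/loader.conf; the values below are only placeholders and should be checked against the driver version in use:

# /boot/loader.conf - example values, not recommendations
hw.em.rx_int_delay=100       # hold an RX interrupt briefly, hoping to batch more frames
hw.em.rx_abs_int_delay=1000  # absolute ceiling so a lone frame is never delayed too long
hw.em.tx_int_delay=100       # same idea on the transmit side
hw.em.tx_abs_int_delay=1000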

ka3ax

Wallabybob, cmb, thanks for your answers.

I have found a P4 2.4 GHz (533 MHz FSB) CPU to replace the Celeron in my PE 650 box and have run the inter-VLAN copy test again to see the difference. In this test my VLANs are on an HP NC7770 gigabit server adapter.

Transfer rate is between 45-55 MBps, CPU load is within 50-65%, and the web interface responds fast.

Indeed, the P4 is better than the Celeron. :)

cmb

So you do mean MB/sec and not Mbps. That means you were getting up to almost 250 Mbps through the Celeron, which is actually really impressive; you would drop a significant amount of money on a commercial firewall that can push that. Getting it up to 440 Mbps is even better. In both instances, that's what I would expect to see with such a CPU.

stephenw10 (Netgate Administrator)

                  @ka3ax:

Transfer rate is between 45-55 MBps, CPU load is within 50-65%

Interesting, so it's obviously not limited by your CPU. If this were a PCI interface I would think that was your new limiting factor, but you mentioned PCI-X. Are both VLANs on the same NIC?

                  Steve

ka3ax

                    @cmb:

                    So you do mean MB/sec and not Mbps.

Yes, megabytes per second.

                    @cmb:

                    you would drop a significant amount of money on a commercial firewall that can push that

Sure. We switched from a Juniper SSG5. :)

ka3ax

                      @stephenw10:

Interesting, so it's obviously not limited by your CPU. If this were a PCI interface I would think that was your new limiting factor, but you mentioned PCI-X. Are both VLANs on the same NIC?

There are two slots and two network cards; one VLAN per card is used.

Both connectors look like PCI-X - long green slots - but after your question I checked the documentation. One is just PCI.

Is it a better idea to put all VLANs on the same NIC? Are there any recommendations?

stephenw10 (Netgate Administrator)

Most important: don't mix tagged and untagged traffic on one NIC, as that can cause problems.

If you had both VLANs on one NIC then all the traffic would have to share both the same cable and the same bus interface. You can potentially have 1 Gbps simultaneously in both directions with gigabit Ethernet, although in reality you probably won't get close to that. However, that would be a total throughput at the NIC-to-motherboard interface of 2 Gbps. A single PCI bus has a bandwidth of just over 1 Gbps, and that is shared with all the other devices on that bus. A PCI-X slot can handle a lot more and is on a separate bus.

If you had both VLANs on a single PCI gigabit card, I would expect 440 Mbps to be about the maximum theoretical speed you could ever get.
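
(For rough context, the usual theoretical figures are: 32-bit/33 MHz PCI is about 133 MB/s, a little over 1 Gbps, shared by every device on the bus; 64-bit/33 MHz PCI is about 266 MB/s; and 64-bit/133 MHz PCI-X is about 1066 MB/s, over 8 Gbps. Routed traffic has to cross the bus twice, once in and once out, so on a shared 32-bit slot the practical ceiling works out to somewhere around half of that 1 Gbps, which is roughly where an estimate like 440 Mbps comes from.)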

For best throughput I would try one VLAN on the NIC in the PCI-X slot and one on the NIC in the PCI slot, if you can do that, which is how you have it now.

You should probably also try putting both VLANs on the PCI-X NIC, if only as an experiment. PCI-X equipment tends to be much higher quality, and the bus bandwidth is high enough for it not to be a restriction.
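
If it helps, a minimal sketch of what two tagged VLANs on the PCI-X NIC (bge0) could look like at the FreeBSD level; the VLAN IDs 10 and 20 are made up, and on pfSense you would normally create these from the VLANs tab on the interface assignment page rather than from the shell:

# example only: create two tagged VLAN interfaces on top of the PCI-X NIC
ifconfig vlan10 create
ifconfig vlan10 vlan 10 vlandev bge0 up
ifconfig vlan20 create
ifconfig vlan20 vlan 20 vlandev bge0 up
# then assign and enable them as interfaces in the pfSense GUI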

                        Steve

dreamslacker

                          @ka3ax:

There are two slots and two network cards; one VLAN per card is used.

Both connectors look like PCI-X - long green slots - but after your question I checked the documentation. One is just PCI.

Is it a better idea to put all VLANs on the same NIC? Are there any recommendations?

One is a PCI-X slot; the other is a PCI 64-bit/33 MHz slot. That has double the bandwidth of the typical PCI slot you find on consumer boards, so it should theoretically support a dual-gigabit NIC without issues.

ka3ax

                            Thank you very much for all your useful and prompt answers to my question.

Actually, it is already answered: I am going to use a cheap L2+/L3 switch for inter-VLAN routing, something like the HP V1910-16G.

But my test setup is still on my desk this week. Let me know if I can run an inter-VLAN test for you… or a performance test… or some other exercise. :)

stephenw10 (Netgate Administrator)

It would be interesting to compare performance, throughput, CPU usage, etc. with both VLANs on the PCI-X card, if you have time.  :)

                              Steve

ka3ax

                                @stephenw10:

It would be interesting to compare performance, throughput, CPU usage, etc. with both VLANs on the PCI-X card, if you have time.  :)

I ran the same test with both VLANs on the PCI-X card. Transfer rate is 50-55 MBps and CPU load is 35-45% (no device polling enabled)… Not much faster…

stephenw10 (Netgate Administrator)

A slightly reduced CPU load, though. Hmm.

I guess a 64-bit PCI slot is not going to be restricting you. Is the PCI card 64-bit though? I'd be surprised.

                                  Steve

ka3ax

                                    @stephenw10:

Is the PCI card 64-bit though? I'd be surprised.

The PCI card is 32-bit indeed.

stephenw10 (Netgate Administrator)

This is an interesting topic, to me at least.
I had quite a long discussion a while ago when I discovered that a box I was working on had 4 gigabit NICs that were all on the same PCI bus. I was trying to get a definitive answer on what the maximum theoretical throughput between two interfaces on that bus would be, but never quite got to grips with it. It would be nice to know, just so I don't end up hoping to push more data than I could ever possibly achieve.
I guess with almost everything going to PCI-e it's less of an issue these days.

                                      Steve

Edit: I wonder if both your slots are on the same bus by any chance? What do you see from pciconf -lv?

ka3ax

                                        @stephenw10:

I wonder if both your slots are on the same bus by any chance? What do you see from pciconf -lv?

                                        Here is the output:

                                        
                                        $ pciconf -lv
                                        hostb0@pci0:0:0:0:	class=0x060000 card=0x00000000 chip=0x00171166 rev=0x32 hdr=0x00
                                            class      = bridge
                                            subclass   = HOST-PCI
                                        hostb1@pci0:0:0:1:	class=0x060000 card=0x00000000 chip=0x00171166 rev=0x00 hdr=0x00
                                            class      = bridge
                                            subclass   = HOST-PCI
                                        skc0@pci0:0:3:0:	class=0x020000 card=0x4b011186 chip=0x4b011186 rev=0x11 hdr=0x00
                                            class      = network
                                            subclass   = ethernet
                                        vgapci0@pci0:0:4:0:	class=0x030000 card=0x01411028 chip=0x47521002 rev=0x27 hdr=0x00
                                            class      = display
                                            subclass   = VGA
                                        atapci0@pci0:0:5:0:	class=0x010185 card=0x01411028 chip=0x06801095 rev=0x02 hdr=0x00
                                            class      = mass storage
                                            subclass   = ATA
                                        hostb2@pci0:0:15:0:	class=0x060000 card=0x02011166 chip=0x02031166 rev=0xa0 hdr=0x00
                                            class      = bridge
                                            subclass   = HOST-PCI
                                        atapci1@pci0:0:15:1:	class=0x01018a card=0x01411028 chip=0x02131166 rev=0xa0 hdr=0x00
                                            class      = mass storage
                                            subclass   = ATA
                                        ohci0@pci0:0:15:2:	class=0x0c0310 card=0x02201166 chip=0x02211166 rev=0x05 hdr=0x00
                                            class      = serial bus
                                            subclass   = USB
                                        isab0@pci0:0:15:3:	class=0x060100 card=0x02301166 chip=0x02271166 rev=0x00 hdr=0x00
                                            class      = bridge
                                            subclass   = PCI-ISA
                                        hostb3@pci0:0:16:0:	class=0x060000 card=0x00000000 chip=0x01101166 rev=0x12 hdr=0x00
                                            class      = bridge
                                            subclass   = HOST-PCI
                                        hostb4@pci0:0:16:2:	class=0x060000 card=0x00000000 chip=0x01101166 rev=0x12 hdr=0x00
                                            class      = bridge
                                            subclass   = HOST-PCI
                                        bge0@pci0:1:3:0:	class=0x020000 card=0x007c0e11 chip=0x164514e4 rev=0x15 hdr=0x00
                                            class      = network
                                            subclass   = ethernet
                                        
                                        

Another note: when I do the same test without pfSense (both test workstations on the same switch and in the same subnet) I see a transfer rate of 80-90 MBps…
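
As an aside, a raw TCP throughput test with a tool such as iperf would take disk and file-copy overhead out of those numbers; a minimal sketch, assuming iperf is installed on both workstations (192.168.10.5 is a made-up address for the receiving host):

# on the receiving workstation
iperf -s
# on the sending workstation: run a 30 second TCP test towards it
iperf -c 192.168.10.5 -t 30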

stephenw10 (Netgate Administrator)

Hmmm, I don't see either em or ste interfaces in that list.
However, skc0 and bge0 are on different buses: skc0 is on bus 0 along with everything else, so it will have to share that bandwidth.

                                          Steve

Edit: I should have said pciconf -lc. That should show your cards' capabilities in more detail.

ka3ax

Here it is:

                                            
                                            $ pciconf -lc
                                            hostb0@pci0:0:0:0:	class=0x060000 card=0x00000000 chip=0x00171166 rev=0x32 hdr=0x00
                                            hostb1@pci0:0:0:1:	class=0x060000 card=0x00000000 chip=0x00171166 rev=0x00 hdr=0x00
                                            skc0@pci0:0:3:0:	class=0x020000 card=0x4b011186 chip=0x4b011186 rev=0x11 hdr=0x00
                                                cap 01[48] = powerspec 2  supports D0 D1 D2 D3  current D0
                                                cap 03[50] = VPD
                                            vgapci0@pci0:0:4:0:	class=0x030000 card=0x01411028 chip=0x47521002 rev=0x27 hdr=0x00
                                                cap 01[5c] = powerspec 2  supports D0 D1 D2 D3  current D0
                                            atapci0@pci0:0:5:0:	class=0x010185 card=0x01411028 chip=0x06801095 rev=0x02 hdr=0x00
                                                cap 01[60] = powerspec 2  supports D0 D1 D2 D3  current D0
                                            hostb2@pci0:0:15:0:	class=0x060000 card=0x02011166 chip=0x02031166 rev=0xa0 hdr=0x00
                                            atapci1@pci0:0:15:1:	class=0x01018a card=0x01411028 chip=0x02131166 rev=0xa0 hdr=0x00
                                            ohci0@pci0:0:15:2:	class=0x0c0310 card=0x02201166 chip=0x02211166 rev=0x05 hdr=0x00
                                            isab0@pci0:0:15:3:	class=0x060100 card=0x02301166 chip=0x02271166 rev=0x00 hdr=0x00
                                            hostb3@pci0:0:16:0:	class=0x060000 card=0x00000000 chip=0x01101166 rev=0x12 hdr=0x00
                                                cap 07[60] = PCI-X 64-bit supports 133MHz, 512 burst read, 8 split transactions
                                            hostb4@pci0:0:16:2:	class=0x060000 card=0x00000000 chip=0x01101166 rev=0x12 hdr=0x00
                                                cap 07[60] = PCI-X 64-bit supports 133MHz, 512 burst read, 8 split transactions
                                            bge0@pci0:1:3:0:	class=0x020000 card=0x007c0e11 chip=0x164514e4 rev=0x15 hdr=0x00
                                                cap 07[40] = PCI-X 64-bit supports 133MHz, 512 burst read, 1 split transaction
                                                cap 01[48] = powerspec 2  supports D0 D3  current D0
                                                cap 03[50] = VPD
                                                cap 05[58] = MSI supports 8 messages, 64 bit 
                                            
                                            