Netgate Discussion Forum

    Inter-VLAN routing

    General pfSense Questions
    24 Posts · 5 Posters · 13.0k Views
    • stephenw10 (Netgate Administrator)

      It would be interesting to compare performance, throughput, CPU usage, etc. with both VLANs on the PCI-X card. If you have time.  :)

      Steve
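
      (For reference, a minimal sketch of one way to run such a comparison, assuming iperf is installed on the two test hosts; the addresses below are placeholders, not taken from this thread:)

          # on the receiving test host, e.g. 192.168.10.10 on one VLAN
          $ iperf -s

          # on the sending test host on the other VLAN, so the traffic is routed through pfSense
          $ iperf -c 192.168.10.10 -t 60 -i 10

          # meanwhile, on the pfSense console, watch per-thread system/interrupt CPU load
          $ top -SH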

      • ka3ax

        @stephenw10:

        It would be interesting to compare performance, throughput, CPU usage, etc. with both VLANs on the PCI-X card. If you have time.  :)

        I ran the same test with both VLANs on the PCI-X card. The transfer rate is 50-55 MB/s and CPU load is 35-45% (no device polling enabled)… Not much faster...

        • stephenw10 (Netgate Administrator)

          A slightly reduced CPU load though. Hmm.

          I guess a 64-bit PCI slot is not going to be restricting you. Is the PCI card 64-bit though? I'd be surprised.

          Steve

          • ka3ax

            @stephenw10:

            Is the PCI card 64-bit though? I'd be surprised.

            The PCI card is 32-bit indeed.

            • stephenw10 (Netgate Administrator)

              This is an interesting topic, to me at least.
              I had quite a long discussion a while ago when I discovered that a box I was working on had 4 gigabit NICs that were all on the same PCI bus. I was trying to get a definitive answer on what the maximum theoretical throughput would be between two interfaces on that bus, but never quite got to grips with it. It would be nice to know, just so I don't end up hoping to push more data than I could ever possibly achieve.
              I guess with almost everything going to PCIe it's less of an issue these days.

              Steve

              Edit: I wonder if both your slots are on the same bus by any chance? What do you see from pciconf -lv?
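
              (As a rough back-of-the-envelope for that theoretical ceiling, assuming a classic shared 32-bit/33 MHz PCI bus:)

                  32 bit x 33 MHz       ~ 1066 Mbit/s ~ 133 MB/s   theoretical peak, shared by every device on the bus
                  gigabit Ethernet      ~ 125 MB/s line rate
                  routing between NICs  each routed byte crosses the bus twice (in to RAM, out again), so ~250 MB/s for full gigabit

              So a single 32-bit/33 MHz bus can never route gigabit at line rate, while 64-bit/133 MHz PCI-X (~1067 MB/s) leaves plenty of headroom.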

              • ka3ax

                @stephenw10:

                I wonder if both your slots are on the same bus by any chance? What do you see from pciconf -lv?

                Here is the output:

                
                $ pciconf -lv
                hostb0@pci0:0:0:0:	class=0x060000 card=0x00000000 chip=0x00171166 rev=0x32 hdr=0x00
                    class      = bridge
                    subclass   = HOST-PCI
                hostb1@pci0:0:0:1:	class=0x060000 card=0x00000000 chip=0x00171166 rev=0x00 hdr=0x00
                    class      = bridge
                    subclass   = HOST-PCI
                skc0@pci0:0:3:0:	class=0x020000 card=0x4b011186 chip=0x4b011186 rev=0x11 hdr=0x00
                    class      = network
                    subclass   = ethernet
                vgapci0@pci0:0:4:0:	class=0x030000 card=0x01411028 chip=0x47521002 rev=0x27 hdr=0x00
                    class      = display
                    subclass   = VGA
                atapci0@pci0:0:5:0:	class=0x010185 card=0x01411028 chip=0x06801095 rev=0x02 hdr=0x00
                    class      = mass storage
                    subclass   = ATA
                hostb2@pci0:0:15:0:	class=0x060000 card=0x02011166 chip=0x02031166 rev=0xa0 hdr=0x00
                    class      = bridge
                    subclass   = HOST-PCI
                atapci1@pci0:0:15:1:	class=0x01018a card=0x01411028 chip=0x02131166 rev=0xa0 hdr=0x00
                    class      = mass storage
                    subclass   = ATA
                ohci0@pci0:0:15:2:	class=0x0c0310 card=0x02201166 chip=0x02211166 rev=0x05 hdr=0x00
                    class      = serial bus
                    subclass   = USB
                isab0@pci0:0:15:3:	class=0x060100 card=0x02301166 chip=0x02271166 rev=0x00 hdr=0x00
                    class      = bridge
                    subclass   = PCI-ISA
                hostb3@pci0:0:16:0:	class=0x060000 card=0x00000000 chip=0x01101166 rev=0x12 hdr=0x00
                    class      = bridge
                    subclass   = HOST-PCI
                hostb4@pci0:0:16:2:	class=0x060000 card=0x00000000 chip=0x01101166 rev=0x12 hdr=0x00
                    class      = bridge
                    subclass   = HOST-PCI
                bge0@pci0:1:3:0:	class=0x020000 card=0x007c0e11 chip=0x164514e4 rev=0x15 hdr=0x00
                    class      = network
                    subclass   = ethernet
                
                

                Another note: when I run the same test without pfSense (both test workstations on the same switch and on the same subnet), I see a transfer rate of 80-90 MB/s…

                • stephenw10 (Netgate Administrator)

                  Hmmm, I don't see either em or ste interfaces in that list.
                  However, skc0 and bge0 are on different buses. skc0 is on bus 0 along with everything else, so it will have to share that bandwidth.

                  Steve

                  Edit: I should have said pciconf -lc. That should show your card's capabilities further.
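
                  (For anyone following along, the bus number can be read straight off the pciconf selector, which on FreeBSD has the form name@pci<domain>:<bus>:<slot>:<function>:)

                      skc0@pci0:0:3:0  ->  domain 0, bus 0, slot 3, function 0  (shares bus 0 with the onboard devices)
                      bge0@pci0:1:3:0  ->  domain 0, bus 1, slot 3, function 0  (on its own bus, bus 1)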

                  • ka3ax

                    Here it is:

                    
                    $ pciconf -lc
                    hostb0@pci0:0:0:0:	class=0x060000 card=0x00000000 chip=0x00171166 rev=0x32 hdr=0x00
                    hostb1@pci0:0:0:1:	class=0x060000 card=0x00000000 chip=0x00171166 rev=0x00 hdr=0x00
                    skc0@pci0:0:3:0:	class=0x020000 card=0x4b011186 chip=0x4b011186 rev=0x11 hdr=0x00
                        cap 01[48] = powerspec 2  supports D0 D1 D2 D3  current D0
                        cap 03[50] = VPD
                    vgapci0@pci0:0:4:0:	class=0x030000 card=0x01411028 chip=0x47521002 rev=0x27 hdr=0x00
                        cap 01[5c] = powerspec 2  supports D0 D1 D2 D3  current D0
                    atapci0@pci0:0:5:0:	class=0x010185 card=0x01411028 chip=0x06801095 rev=0x02 hdr=0x00
                        cap 01[60] = powerspec 2  supports D0 D1 D2 D3  current D0
                    hostb2@pci0:0:15:0:	class=0x060000 card=0x02011166 chip=0x02031166 rev=0xa0 hdr=0x00
                    atapci1@pci0:0:15:1:	class=0x01018a card=0x01411028 chip=0x02131166 rev=0xa0 hdr=0x00
                    ohci0@pci0:0:15:2:	class=0x0c0310 card=0x02201166 chip=0x02211166 rev=0x05 hdr=0x00
                    isab0@pci0:0:15:3:	class=0x060100 card=0x02301166 chip=0x02271166 rev=0x00 hdr=0x00
                    hostb3@pci0:0:16:0:	class=0x060000 card=0x00000000 chip=0x01101166 rev=0x12 hdr=0x00
                        cap 07[60] = PCI-X 64-bit supports 133MHz, 512 burst read, 8 split transactions
                    hostb4@pci0:0:16:2:	class=0x060000 card=0x00000000 chip=0x01101166 rev=0x12 hdr=0x00
                        cap 07[60] = PCI-X 64-bit supports 133MHz, 512 burst read, 8 split transactions
                    bge0@pci0:1:3:0:	class=0x020000 card=0x007c0e11 chip=0x164514e4 rev=0x15 hdr=0x00
                        cap 07[40] = PCI-X 64-bit supports 133MHz, 512 burst read, 1 split transaction
                        cap 01[48] = powerspec 2  supports D0 D3  current D0
                        cap 03[50] = VPD
                        cap 05[58] = MSI supports 8 messages, 64 bit 
                    
                    
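                    (A quick reading of those capability lines, for anyone comparing their own output: cap 07 is the PCI-X capability.)

                        hostb3 / hostb4   cap 07 = PCI-X 64-bit, 133 MHz          <- PCI-X host bridges
                        bge0              cap 07 = PCI-X 64-bit, 133 MHz          <- the NC7770, PCI-X capable
                        skc0              no cap 07 (only power management, VPD)  <- plain 32-bit PCI
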
                    • stephenw10 (Netgate Administrator)

                      What happened to your Intel NIC?

                      Steve

                      • ka3ax

                        It has been swapped for the HP NC7770 PCI-X gigabit server adapter; see my Reply #5. The Intel NIC went into another test.

                        @ka3ax:

                        In this test my VLANs are on the HP NC7770 gigabit server adapter.

                        • stephenw10 (Netgate Administrator)

                          Ah yes.  :-[
