How to force interface to be 1000 Full duplex?



  • Greetings,

    I've only very recently started using pfSense, so I'm a bit rough around the edges regarding its inner workings.

    I have an interface:
    em0: <Intel(R) PRO/1000 Legacy Network Connection 1.0.3> port 0xec80-0xecbf mem 0xfdf80000-0xfdf9ffff,0xfdf60000-0xfdf7ffff irq 7 at device 9.0 on pci1

    em0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
            options=209b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,WOL_MAGIC>
            capabilities=138db<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,POLLING,VLAN_HWCSUM,WOL_UCAST,WOL_MCAST,WOL_MAGIC,VLAN_HWFILTER>
            ether <removed>
            inet 10.20.XX.XX netmask 0xffff0000 broadcast 10.20.255.255
            inet6 fe80::XXX:XXXX:XXXX:XXXX%em0 prefixlen 64 scopeid 0x2
            nd6 options=3<PERFORMNUD,ACCEPT_RTADV>
            media: Ethernet autoselect (100baseTX <full-duplex>)
            status: active
            supported media:
                    media autoselect
                    media 1000baseT
                    media 1000baseT mediaopt full-duplex
                    media 100baseTX mediaopt full-duplex
                    media 100baseTX
                    media 10baseT/UTP mediaopt full-duplex
                    media 10baseT/UTP

    It's a 10/100/1000-capable card, and I have it on a 1000-based switch. However, it appears the stack is forcing it to 100.
    How does one force the card to 1000?

    Thanks!

    Gabriel



  • @gjdunga:

    It's a 10/100/1000 capable card,

    Are you sure it's 1000 capable? If I recall correctly, some years ago I saw some PRO/1000 cards that were only 10/100 capable (not 1000 capable).



  • Sorry, I had edited the post to include the ifconfig -m info.. Yes, it's quite capable, and I've had it in use at 1000 speed with a different distro.

    Gabriel



  • Can you force it to 1000 from the switch?



  • Let me ask this a different way..

    Because this is my first time messing with pfSense/BSD, I don't know what the command is to force an interface..

    If I were using CentOS or another distro, it would be something like..

    
    ethtool -s eth0 speed 1000 duplex full autoneg off
    
    

    I don't see an ethtool in this flavor, so I'm wondering what the correct command is..
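    In FreeBSD the same knobs live in ifconfig(8) itself rather than a separate tool. Assuming the interface is em0, a rough equivalent of that ethtool line would be (a sketch, run as root, not pfSense-specific advice):

```shell
# List the media/mediaopt keywords the driver supports
# (this is the same "supported media" list ifconfig -m prints)
ifconfig -m em0

# Then force one of the listed combinations; selecting a fixed media type
# replaces "autoselect", which is ifconfig's rough analogue of "autoneg off":
ifconfig em0 media 1000baseT mediaopt full-duplex
```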

    To answer your question about the switch..
    No..

    Gabriel

    EDIT:  More info

    vendor=0x8086 device=0x107c subvendor=0x8086 subdevice=0x1376 class=0x020000

    Chip ID: 82541PI
    Chip Description: Gigabit Ethernet Controller (Copper) rev 5

    Intel BSD 7 Driver: http://downloadcenter.intel.com/Detail_Desc.aspx?agr=Y&ProdId=1938&DwnldID=19786&ProductFamily=Ethernet+Components&ProductLine=Ethernet+Controllers&ProductProduct=Intel®+82541PI+Gigabit+Ethernet+Controller&keyword="82541PI"eng



  • Finally found the command; so BSD uses ifconfig for this stuff.. Okay.. Trying:

    ifconfig em0 media 1000baseTX mediaopt full-duplex 
    

    Running this causes the interface to report "no carrier"…
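    One detail worth checking against the supported-media list printed earlier: this card reports its copper gigabit media as 1000baseT, not 1000baseTX, so the keyword itself may be the problem. A retry might look like (sketch, as root):

```shell
# Use the exact keyword from the "supported media" list:
ifconfig em0 media 1000baseT mediaopt full-duplex

# If the link still shows "no carrier", fall back to autonegotiation:
ifconfig em0 media autoselect
```

    Note also that gigabit copper (802.3ab) normally relies on autonegotiation, so hard-forcing 1000/full while the switch port is left autonegotiating can itself produce link problems.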

    So, okay, I'm looking around, and there appear to be issues with the generic Intel(R) PRO/1000 Legacy Network Connection driver [em(4)], not only with the copper versions but with fiber as well..

    Looking at the Intel site, their version appears to be 6.9.21.

    I know that the Linux version of the same driver from Intel works like a charm, mostly because I've compiled and used it before (with exactly the same hardware setup).

    So..

    My question has now changed..

    What are the requirements/steps to compile and insert this vendor-specific driver into the stack?

    Again, this is my first date with BSD/pfSense. I've compiled in other flavors, but not here.. Please be gentle.

    Does it help that I installed the devel kernel when I did the install?

    Gabriel

    Edit:

    It may also help to know that I'm trying the 2.0 RC1 version of this…

    FreeBSD secureexit 8.1-RELEASE-p2 FreeBSD 8.1-RELEASE-p2 #1: Fri Mar  4 18:16:17 EST 2011     sullrich@FreeBSD_8.0_pfSense_2.0-snaps.pfsense.org:/usr/obj.pfSense/usr/pfSensesrc/src/sys/pfSense_Dev.8  i386 


  • I'm running a dual server-NIC variant of the Intel chip (PCI-X in a PCI slot).
    I have it connected to a NETGEAR gigabit switch (ProSafe 16).
    I don't need to force 1000Mbit in any version of pfSense I've tried recently.
    Maybe it is the cable or the switch that is picky about transfer speeds (autoselect).
    Currently testing pfsense 2.0 RC1
    I'm running : 2.0-RC1 (amd64) built on Fri Mar 4 11:03:45 EST 2011

    em0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
            options=9b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM>
            capabilities=100db<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,POLLING,VLAN_HWCSUM,VLAN_HWFILTER>
            ether xx:xx:xx:xx:xx:xx
            inet6 xxxx::xxx:xxxx:xxxx:xxxx%em0 prefixlen 64 scopeid 0x1
            inet xx.xx.xx.xx netmask 0xfffffe00 broadcast 255.255.255.255
            nd6 options=3<PERFORMNUD,ACCEPT_RTADV>
            media: Ethernet autoselect (1000baseT <full-duplex>)
            status: active
            supported media:
                    media autoselect
                    media 1000baseT
                    media 1000baseT mediaopt full-duplex
                    media 100baseTX mediaopt full-duplex
                    media 100baseTX
                    media 10baseT/UTP mediaopt full-duplex
                    media 10baseT/UTP
    em1: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
            options=9b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM>
            capabilities=100db<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,POLLING,VLAN_HWCSUM,VLAN_HWFILTER>
            ether xx:xx:xx:xx:xx:xx
            inet 192.168.0.1 netmask 0xffffff00 broadcast 192.168.0.255
            inet6 xxxx::xxx:xxxx:xxxx:xxxx%em1 prefixlen 64 scopeid 0x2
            nd6 options=3<PERFORMNUD,ACCEPT_RTADV>
            media: Ethernet autoselect (1000baseT <full-duplex>)
            status: active
            supported media:
                    media autoselect
                    media 1000baseT
                    media 1000baseT mediaopt full-duplex
                    media 100baseTX mediaopt full-duplex
                    media 100baseTX
                    media 10baseT/UTP mediaopt full-duplex
                    media 10baseT/UTP


  • @wallabybob:

    @gjdunga:

    It's a 10/100/1000 capable card,

    Are you sure it's 1000 capable? If I recall correctly, some years ago I saw some PRO/1000 cards that were only 10/100 capable (not 1000 capable).

    I have a Dell that I thought this was true of, and it only needed a driver update…  When put into "Auto-Negotiate 1000" it will only try to connect at 1000 and will not negotiate down to a slower speed if connected to a 10/100 device. To do that you have to set it to "Auto Detect" manually, but then it will only connect at 10/100. Kind of a pain, but as long as I keep it on a gigabit switch I'm fine...

    It's a 2004-vintage Dell desktop server machine with an onboard NIC...




  • The reason I'm being all monkey about this:

    In the distro I was using before switching, it worked just fine. The card, switch, cables, you name it..

    The only thing I've changed here is switching to pfSense.

    I don't see how formatting a hard drive and installing a different OS could change the quality of a patch cable, or cause a switch to degrade! Even more so when those cables and that switch were not touched in the process..

    I can verify with a live CD that more than one OS will light the card up at 1000 with no errors, collisions, or CRC issues.. None..
    That would be Ubuntu 10.04 LTS, Windows XP (with Intel vendor drivers), and LFS, again with the vendor-compiled driver.

    I'm sorry to be adamant about this, but If it barks like a spider, smells like a spider, and leaves stains like a spider..  It must be a spider!

    Am I to assume that one can compile with this distro, or is it purely binary?


  • Netgate Administrator

    You can't compile directly in pfSense. It's a firewall; that would be an unnecessary risk. If you want to compile for 2.0, you do so from a standard FreeBSD 8.1-RELEASE install.
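    As a rough sketch of that workflow (assuming a stock FreeBSD 8.1-RELEASE machine with the system source installed in /usr/src; the module path is the standard one, and the Intel tarball layout is an assumption based on their usual FreeBSD driver packages):

```shell
# On a stock FreeBSD 8.1-RELEASE box with /usr/src populated,
# the in-tree em(4) module builds like any other kernel module:
cd /usr/src/sys/modules/em
make clean && make            # produces if_em.ko

# For Intel's vendor driver, unpack their tarball and build its src/
# directory instead (see the README that ships with it). Then copy the
# resulting if_em.ko to the target box and load it there:
kldload ./if_em.ko
```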
    Out of interest, what is your output from sysctl dev.em? Here's mine:

    
    [1.2.3-RELEASE]                                                                 [root@fire.box]/root(1): sysctl dev.em
    dev.em.0.%desc: Intel(R) PRO/1000 Network Connection 6.9.6
    dev.em.0.%driver: em
    dev.em.0.%location: slot=1 function=0
    dev.em.0.%pnpinfo: vendor=0x8086 device=0x1075 subvendor=0x8086 subdevice=0x1075 class=0x020000
    dev.em.0.%parent: pci2
    dev.em.0.debug: -1
    dev.em.0.stats: -1
    dev.em.0.rx_int_delay: 0
    dev.em.0.tx_int_delay: 66
    dev.em.0.rx_abs_int_delay: 66
    dev.em.0.tx_abs_int_delay: 66
    dev.em.0.rx_processing_limit: 100
    
    

    Steve



  • The first thing I see is that you're running 6.9.6 on [1.2.3-RELEASE] and I'm running [2.0-RC1] with "LEGACY" 1.0.3

    That's what I was trying to get at.. How does one get 6.9.6 onto [2.0-RC1]?

    If I need to downgrade to 1.x, so be it..  I was hoping for the updated features in 2.x

    
    dev.em.0.%desc: Intel(R) PRO/1000 Legacy Network Connection 1.0.3
    dev.em.0.%driver: em
    dev.em.0.%location: slot=9 function=0
    dev.em.0.%pnpinfo: vendor=0x8086 device=0x107c subvendor=0x8086 subdevice=0x1376 class=0x020000
    dev.em.0.%parent: pci1
    dev.em.0.nvm: -1
    dev.em.0.rx_int_delay: 0
    dev.em.0.tx_int_delay: 66
    dev.em.0.rx_abs_int_delay: 66
    dev.em.0.tx_abs_int_delay: 66
    dev.em.0.rx_processing_limit: 100
    dev.em.0.flow_control: 3
    dev.em.0.mbuf_alloc_fail: 0
    dev.em.0.cluster_alloc_fail: 0
    dev.em.0.dropped: 0
    dev.em.0.tx_dma_fail: 0
    dev.em.0.tx_desc_fail1: 0
    dev.em.0.tx_desc_fail2: 0
    dev.em.0.rx_overruns: 0
    dev.em.0.watchdog_timeouts: 0
    dev.em.0.device_control: 1077674561
    dev.em.0.rx_control: 32770
    dev.em.0.fc_high_water: 47104
    dev.em.0.fc_low_water: 45604
    dev.em.0.fifo_workaround: 0
    dev.em.0.fifo_reset: 0
    dev.em.0.txd_head: 224
    dev.em.0.txd_tail: 224
    dev.em.0.rxd_head: 131
    dev.em.0.rxd_tail: 130
    dev.em.0.mac_stats.excess_coll: 0
    dev.em.0.mac_stats.single_coll: 0
    dev.em.0.mac_stats.multiple_coll: 0
    dev.em.0.mac_stats.late_coll: 0
    dev.em.0.mac_stats.collision_count: 0
    dev.em.0.mac_stats.symbol_errors: 0
    dev.em.0.mac_stats.sequence_errors: 0
    dev.em.0.mac_stats.defer_count: 0
    dev.em.0.mac_stats.missed_packets: 0
    dev.em.0.mac_stats.recv_no_buff: 0
    dev.em.0.mac_stats.recv_undersize: 0
    dev.em.0.mac_stats.recv_fragmented: 0
    dev.em.0.mac_stats.recv_oversize: 0
    dev.em.0.mac_stats.recv_jabber: 0
    dev.em.0.mac_stats.recv_errs: 0
    dev.em.0.mac_stats.crc_errs: 0
    dev.em.0.mac_stats.alignment_errs: 0
    dev.em.0.mac_stats.coll_ext_errs: 0
    dev.em.0.mac_stats.xon_recvd: 0
    dev.em.0.mac_stats.xon_txd: 0
    dev.em.0.mac_stats.xoff_recvd: 0
    dev.em.0.mac_stats.xoff_txd: 0
    dev.em.0.mac_stats.total_pkts_recvd: 7728330
    dev.em.0.mac_stats.good_pkts_recvd: 7728330
    dev.em.0.mac_stats.bcast_pkts_recvd: 1397
    dev.em.0.mac_stats.mcast_pkts_recvd: 0
    dev.em.0.mac_stats.rx_frames_64: 1236612
    dev.em.0.mac_stats.rx_frames_65_127: 1259960
    dev.em.0.mac_stats.rx_frames_128_255: 177716
    dev.em.0.mac_stats.rx_frames_256_511: 117288
    dev.em.0.mac_stats.rx_frames_512_1023: 129781
    dev.em.0.mac_stats.rx_frames_1024_1522: 4806973
    dev.em.0.mac_stats.good_octets_recvd: 7303276885
    dev.em.0.mac_stats.good_octets_txd: 3475148119
    dev.em.0.mac_stats.total_pkts_txd: 6293613
    dev.em.0.mac_stats.good_pkts_txd: 6293613
    dev.em.0.mac_stats.bcast_pkts_txd: 1
    dev.em.0.mac_stats.mcast_pkts_txd: 5
    dev.em.0.mac_stats.tx_frames_64: 1997714
    dev.em.0.mac_stats.tx_frames_65_127: 1796254
    dev.em.0.mac_stats.tx_frames_128_255: 235933
    dev.em.0.mac_stats.tx_frames_256_511: 57109
    dev.em.0.mac_stats.tx_frames_512_1023: 62766
    dev.em.0.mac_stats.tx_frames_1024_1522: 2143837
    dev.em.0.mac_stats.tso_txd: 0
    dev.em.0.mac_stats.tso_ctx_fail: 0
    
    

    Edit:

    Out of kicks and giggles, I forced an update check..
    Appears I don't have the newest bleeding edge..
    ---------------------------------------------------
      Current Version : 2.0-RC1
      Latest Version  : Fri Mar  4 22:36:09 EST 2011

    I'm forcing an update.


  • Netgate Administrator

    There are still daily snapshots built. It's unlikely there will be an em(4) update though.

    Although the version numbers say one thing, the driver you are running seems to be the newer one, offering far more adjustment. Obviously the actual NICs are different chips.

    I should be updating to 2.0RC1 this weekend.

    Steve



  • You're right, no em(4) update.

    Thank you for your attention to this! I apologize for being a pain..
    If the chipset is different, that could explain why it's going legacy..
    Then again, I don't know enough about the setup in BSD to know squat yet.
    If you're going to be patching the RC this weekend for this, I'll happily wait!

    I guess I need to build a BSD desktop!

    Gabriel

    Edit: Oh wait.. You said "YOU" are switching to 2.0 this weekend, not doing a patch of the RC..



  • I guess this is an issue to move to a 2.0 thread?

