Interface Speed with SNMP
Hello to all. First of all I would like to thank the developers and the community for such a product as pfSense. At the moment I am testing a configuration with two pfSense 1.2 boxes as routers and trying to monitor them with Zenoss.
Unfortunately snmpwalk is reporting wrong interface speeds. All of the interfaces are 1 Gbit/s, but some get marked as 10 Mbit/s, some report an unknown speed, and there is no way to find out the interface speed for the carpX interfaces.
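For anyone comparing results, the speeds Zenoss sees come from IF-MIB. A quick manual check looks like the following; the host address and community string are placeholders, so substitute your own:

```shell
# Placeholders: replace 192.0.2.1 and "public" with your pfSense box and community string.
# IF-MIB::ifSpeed reports the speed in bits per second (a 32-bit value).
snmpwalk -v 2c -c public 192.0.2.1 IF-MIB::ifSpeed
# IF-MIB::ifHighSpeed reports the speed in Mbit/s and avoids the 32-bit
# ceiling, so it is the better object to check for gigabit links.
snmpwalk -v 2c -c public 192.0.2.1 IF-MIB::ifHighSpeed
```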
I had the same issue on an Ubuntu server, but I solved it there by setting a fixed interface speed in the snmpd.conf file, after which everything worked correctly. In pfSense, no luck: the snmpd.conf file looks very different from Ubuntu's.
I have read that it could be a problem related to 1 Gbit/s NICs connected with auto-negotiated speed and duplex, but is there a way to set the interface speed in the SNMP configuration?
pfSense is currently running on VMware ESXi.
Regards to all!
As far as I have managed to get with my problem, I noticed that ifconfig -m le0 shows that pfSense (FreeBSD) does not support 1000baseTX mode, and that is why SNMP is showing a maximum interface speed of 10 Mbit/s. Has anybody figured out how to get pfSense/FreeBSD to work with 1 Gbit/s NICs?
As a reminder: I am using an ESXi server and have tried both the vmxnet and e1000 drivers, with no difference.
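For reference, the media capabilities that the SNMP speed is derived from can be checked from the pfSense shell. This is a sketch assuming the default AMD PCnet device (le0); with the e1000 driver the interface appears as em0 instead:

```shell
# List every media type the driver supports (FreeBSD's -m flag).
# The AMD PCnet device (le0) only advertises 10baseT/UTP in its
# "supported media" list; a gigabit-capable driver such as em(4)
# lists 1000baseTX here as well.
ifconfig -m le0
```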
fastcon68 last edited by
I have used several different GB adapters (Intel and others) and pfSense reports the GB interfaces without issue. I am currently just using a single dual-port Intel server adapter (100 Mb) and everything is running great.
YoMarK last edited by
AMD PCnet, the standard network device VMware uses for unsupported operating systems, always shows 10 Mbit. This has nothing to do with pfSense.
If you use e1000 or vmxnet in the VMware .vmx file, the interface WILL show 1000baseTX full-duplex; the same works for Ubuntu.
Note that an le0 interface is always the AMD PCnet driver. If you, for example, correctly use e1000 in your .vmx file, pfSense will detect an em0 interface; otherwise you did something wrong.
Thank you for your comments on my problem. I had mixed up the process of changing to the e1000 driver in ESXi (probably the same in ESX). The .vmx file should be changed after the VM is removed from the inventory; then, once the .vmx file is changed, add it to the inventory again. That worked for the "physical" interfaces, and SNMP now reports the correct speed of 1000 Mbit/s full duplex.
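For others hitting the same thing, the sequence I used looks roughly like this; the datastore path and VM name are examples only, and ethernetN must match your own adapter numbering:

```
# Example only: path and VM name are placeholders.
# 1. Remove the VM from the ESXi inventory (keep the files on disk).
# 2. Edit the .vmx, e.g. /vmfs/volumes/datastore1/pfSense/pfSense.vmx, and set:
ethernet0.virtualDev = "e1000"
# 3. Add the VM back to the inventory and boot; pfSense should now detect em0.
```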
But the problem still remains for the CARP interfaces. I presumed that the CARP interface settings could somehow be taken from the "physical" interfaces, but I have no real understanding of this problem.
It really turned out that this has nothing to do with pfSense, although I assumed I could get comments on similar situations here, or pointers to other places. I admit that I know more about pfSense than about FreeBSD, so sorry for that, but anyway…
CARP interface speed remains unsolved.
The hardware used for the test is HP DL360 G5 servers with two built-in LAN interfaces and one additional 4-port Intel 1 Gbit card.
YoMarK last edited by
Are you sure that your CARP interfaces are using the correct drivers (check your .vmx file)?
Note that (I don't think this is clear enough) virtual network interfaces have nothing to do with physical interfaces: VMs are hardware independent.
You can have a 100 Mbit physical interface, and when using a 1000 Mbit e1000 interface in your virtual machine, it will show 1000 Mbit/s inside the VM and can actually reach 1000 Mbit when communicating with another VM on the same physical machine (external connections are physically limited to 100 Mbit, of course).
That was the reason why I assumed that CARP interfaces could take settings from the physical ones. To be more precise: vmnics (in this post I call them "physical" NICs, meaning the NICs assigned to the VM during VM setup from the console, not the physical NICs on the server hardware). My .vmx file contains only records for these "physical" NICs:
ethernet0.virtualDev = "e1000"
ethernet3.virtualDev = "e1000"
The CARP interfaces are created inside the VM using the web GUI configuration. Should I manually add these drivers to the .vmx file? And what about the limitation that a VM can have a maximum of 4 vmnics?