Intel DN2800MT x64 2.0.3-2.1 bandwidth
-
What I have gotten so far is "Device polling needs to be removed from the GUI as an option because it only breaks things worse".
Yeah, good idea…
-
I'm feeling a little defensive right now.
I'm going off to the corner to cry a while and drink coffee :P -
@kejianshi:
I'm going off to the corner to cry a while and drink coffee :P
Beer >> coffee :P
P.S. Started a new thread on the device polling "feature".
-
It's a little early in the day for me to start swilling beer. I'm waiting till noon for that.
I have been wondering this for a long while but have never gotten a clear answer.
Does / can an HDD slow down the throughput of a build like this?
Is there an advantage in throughput for an SSD over an HDD (not talking about caching or swapping)? -
Tried forwarding, gained minimal bandwidth: 32 Mb exactly. So now we're near 352 Mb-ish, still way below capability.
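(For anyone wanting to reproduce these numbers without involving a NAS or disks, a throughput run with iperf between hosts on each side of the firewall is one rough way to do it. Just a sketch; the figures in this thread appear to come from file transfers, and the IP below is made up.)
  iperf -s                           # on a host on the WAN side of the firewall
  iperf -c 203.0.113.10 -t 30 -P 4   # from a LAN host, pointed at that WAN-side host: 30 s, 4 parallel streams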
-
I've read in some post on this forum where a guy LOVES pfSense but tried MikroTik and got 800-something on his board. I'm gonna try that real quick and see if this board is capable of routing faster on a different distro.
-
That will be interesting to see…
-
Still working on it. Tried m0n0wall: bandwidth was almost halved, at 199 Mb/s. Tried installing Smoothwall, but it wouldn't boot the CD; it sat at GRUB. Tried MikroTik; I can't figure out the goofy key system they want to use. Tried installing Untangle; it sat at a black screen during install. Now retrying ClearOS, but it takes a while to install from CD.
Could all this be due to the new NM10 chipset on this generation of Atoms?
-
"Could all this be that it is the new NM10 chipset on this generation of atoms?"
That would get back to my assertion to newer and better is only better if compatibility is there 100% and its usually not in the first couple years.
But, I don't know the answer to that question. Do you have a old relic of a computer with a gigabit WAN port to try with?
If that blazes away, I'd maybe blame the new chipset.
-
"Could all this be that it is the new NM10 chipset on this generation of atoms?"
That would get back to my assertion to newer and better is only better if compatibility is there 100% and its usually not in the first couple years.
But, I don't know the answer to that question. Do you have a old relic of a computer with a gigabit WAN port to try with?
If that blazes away, I'd maybe blame the new chipset.
Nothing that would be slower than this Atom.
I can't get ClearOS to register; I'm sure the firewall my district is using to block traffic has sniped my connection. I can't get any other distro to work at the moment either, not having any luck at all. The best I can do is take the card out and put it in a Core 2 Duo 8400, but that wouldn't prove anything other than that the card works… I'm doing it.
-
There are a lot of boards using that chipset now. If it alone were causing this, I'd expect more questions on the forum.
As you have found in the thread linked, it's often possible to get better throughput with a Linux-based OS; an unfortunate fact. However, it's normally not an issue; the limit you're seeing is something more, in my opinion. ;)
Steve
-
Same NIC in an OptiPlex 755: 109.8 MB/s, which equals 878 Mbps. Had to use 2.0.3 on this one though, as 2.1 didn't give out a DHCP address.
-
Got my mSATA SSD in today, so we will kill the HDD theory in a bit.
-
"Do you have hardware flow control enabled on all the links?"
"VLAN side of the subject pfSense box is direct to the PC. Flow control is enabled on rx/tx."
The last part of the answer is incomplete. Do you have hardware flow control enabled on the NICs sourcing and sinking the traffic AND on intermediate switch ports AND on the relevant pfSense physical interfaces? If so, how did you do it on the pfSense interfaces?
-
"Do you have hardware flow control enabled on the NICs sourcing and sinking the traffic AND on intermediate switch ports AND on the relevant pfSense physical interfaces? If so, how did you do it on the pfSense interfaces?"
As for the source and client, flow control is enabled. On the pfSense interfaces I haven't even looked yet; I'll see what I can find.
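(Side note on that last part: I don't think the GUI exposes flow control at all, so the usual way to see what the em(4) driver negotiated is from a shell on the pfSense box, e.g. Diagnostics > Command Prompt or the SSH shell. Rough sketch; the exact OID names vary by driver version, which is why the greps are loose.)
  ifconfig em0                            # link state / media report for the NIC in question
  sysctl -a | grep -iE 'em.*(fc|flow)'    # list whatever flow-control OIDs this em(4) build actually exposes
  kenv | grep hw.em                       # loader tunables (e.g. hw.em.fc_setting, used later in this thread) picked up at boot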
As for the hard drive: the SSD didn't make one bit of difference, except dropping wattage. It was at 20 W with the 3.5" HDD, now at 13-14 W back and forth with the SSD.
As I was saying, I installed fresh, so I didn't change anything in /boot/loader.conf.local.
At the moment I'm trying to figure out how to get into it and add things, change them, etc. I'm not too familiar with how to do it. Nvm, just found the edit file option. -
No change there either.. this is what my loader.conf.local file looks like now.
kern.cam.boot_delay=10000
kern.em.nmbclusters="131072"
hw.em.num_queues=1
hw.em.fc_setting=1
Does this look right, or am I going to have to do one for each interface? For example:
hw.em0.num_queues=1
hw.em1.num_queues=1
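(As far as I know the hw.em.* loader tunables are driver-wide, so per-interface copies like hw.em0.num_queues shouldn't be needed; also, the usual mbuf tunable is kern.ipc.nmbclusters rather than kern.em.nmbclusters, so that line may simply be ignored. A quick way to check what actually took effect after a reboot, assuming shell access:)
  kenv | grep -iE 'hw.em|nmbclusters'    # loader tunables the kernel picked up at boot
  sysctl kern.ipc.nmbclusters            # effective mbuf cluster limit
  sysctl dev.em.0                        # per-NIC em(4) sysctls; look for queue / flow-control entries if present
-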
So is the box using vlans directly?
Steve
-
Correct. One side of the current VLAN to the WAN on the second pfSense box, to my computer directly…
I have one in place now that's a power hog, so I dabbled in Atom to save; watt, I don't know.
I found it, I think. I have to reinstall, then change this setting to see if I have the same result. Right now as I type it's transferring @ 602 Mbit.
-
Cool - So my current drive has an MTTF of about 1.5 million hours, which is similar to a great SSD, so if there is no benefit other than wattage, I'll leave it be. I usually do things for a particular reason, not just to be trendy, so if the SSD isn't going to bump my performance, I'll wait for a drive failure to replace it. With this drive that's probably going to be about 6 more years minimum.
Still no luck tracking down the bandwidth-killing culprit?
-
Yeah, I tracked it down. For some reason, if powerd is not enabled (which it isn't by default), the power state of the CPU is misrepresented, I believe, or it isn't running full throttle, or it's reported wrong; something along the lines of the SpeedStep control. Anyway, when I set powerd to hiadaptive my bandwidth doubles. Now, you would think Maximum would be the best performing; in this case it is the worst. It's almost like they are totally reversed. I'm sure if powerd is off it's max by default, per se. Anyway, problem solved. It pulled 75 MB/s from my NAS, which is equivalent to 600+ Mbps.
Beforehand it was only pulling 22-30 MB/s.
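(For anyone who wants to watch this themselves: the powerd knob lives under System > Advanced > Miscellaneous in the 2.x GUI, if I remember right, with modes minimum / adaptive / hiadaptive / maximum. From a shell, the standard FreeBSD cpufreq sysctls show what the CPU is actually doing:)
  sysctl dev.cpu.0.freq_levels    # clock/power steps the CPU advertises
  sysctl dev.cpu.0.freq           # current clock; watch it while a transfer is running
  ps ax | grep '[p]owerd'         # confirm powerd is running and which mode flags it was started with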
As far as the SSD goes, yeah, it will slightly improve performance if you use a cache like Squid, but otherwise it's primarily for power savings. If you don't use any cache and aren't worried about an additional 5-15 watts depending on the drive, there's really not that big of an advantage, especially for home use; I dropped 6-7 watts swapping mine from a WD 80 GB Caviar. Just about any off-the-shelf hard drive from the past 10 years can do at least 50 MB/s, and that's plenty for pulling in a corporate-type account. The SSD will find the files a little quicker and push them along, but we're talking milliseconds there.
All in all, the project went from a 75 W off-the-wall Dell OptiPlex 755 to a 14 W Atom, and I can probably tweak BIOS settings to drop another few. While the 755 can reach higher bandwidth, I'll probably never be seeing 800+ at home. By the time we reach something like that from our ISP, something better will come along.
Core 2 Duo machine = 11.09 Mb/W, Atom = 43 Mb/W.
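(For reference, the Atom figure lines up with the numbers above: roughly 600 Mb/s ÷ 14 W ≈ 43 Mb/W. The Core 2 Duo figure is presumably its measured throughput divided by its ~75 W wall draw.)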
built for $67