Intel DN2800MT x64 2.0.3-2.1 bandwidth
-
"Could all this be that it is the new NM10 chipset on this generation of atoms?"
That gets back to my assertion that newer and better is only better if compatibility is there 100%, and it usually isn't in the first couple of years.
But I don't know the answer to that question. Do you have an old relic of a computer with a gigabit WAN port to try with?
If that blazes away, I'd maybe blame the new chipset.
Nothing that would be slower than this Atom.
I can't get ClearOS to register; I'm sure the firewall my district is using to block traffic has sniped my connection. I can't get any other distro to work at the moment either, not having any luck at all. The best I can do is take the card out and put it in a Core 2 Duo 8400, but that wouldn't prove anything other than that the card works… I'm doing it.
-
There are a lot of boards using that chipset now. If it alone was causing this I'd expect more questions on the forum.
As you have found in the thread linked, it's often possible to get better throughput with a Linux-based OS. An unfortunate fact. However, it's normally not an issue; the limit you're seeing is something more, in my opinion. ;)
Steve
-
Same NIC in an OptiPlex 755: 109.8MB/s, which works out to 878Mbps. Had to use 2.0.3 on this one though, as 2.1 didn't give out a DHCP address.
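(The conversion is just 8 bits per byte: 109.8 MB/s × 8 = 878.4 Mbps, so that NIC is close to saturating gigabit in the 755.)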
-
Got my mSATA SSD in today, so we will kill the HD theory in a bit.
-
Do you have hardware flow control enabled on all the links?
VLAN side of the subject pfSense box is direct to the PC. Flow control is enabled on rx/tx.
Last part of the answer is incomplete. Do you have hardware flow control enabled on the NICs sourcing and sinking the traffic AND on intermediate switch ports AND on relevant pfSense physical interfaces? If so, how did you do it on the pfSense interfaces?
-
As for the source and client, flow control is enabled. On the pfSense interface I haven't even looked; I'll see what I can find.
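For the pfSense side, one candidate knob (an assumption on my part; the em driver exposes this on reasonably recent versions, and I haven't confirmed it on this exact build) is the per-NIC flow control sysctl:
# Per-interface flow control on em NICs that expose it:
# 0 = none, 1 = rx pause, 2 = tx pause, 3 = full
sysctl dev.em.0.fc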
As for the hard drive: the SSD didn't make one bit of difference, except dropping wattage. Was at 20W with the 3.5" HDD, now at 13~14W back and forth with the SSD.
As I was saying, I installed fresh, so I didn't change anything in /boot/loader.conf.local.
At the moment I'm trying to figure out how to get into it and add or change things; I'm not too familiar with how to do it. Nvm, just found Edit File.
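For anyone else hunting for it: besides that Diagnostics > Edit File page in the web GUI, a minimal way from the console or SSH (assuming shell access) looks like this:
# Drop to a shell (option 8 on the console menu), then edit the file:
vi /boot/loader.conf.local
# Loader tunables are only read at boot, so reboot afterwards:
shutdown -r now
-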
No change there either. This is what my loader.conf.local file looks like now:
kern.cam.boot_delay=10000
kern.em.nmbclusters="131072"
hw.em.num_queues=1
hw.em.fc_setting=1
Does this look right, or am I going to have to do one for each? For example:
hw.em0.num_queues=1
hw.em1.num_queues=1
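One easy sanity check, for what it's worth: everything in loader.conf.local lands in the kernel environment, and a misspelled tunable just silently does nothing, so after a reboot you can confirm what the kernel actually saw:
# List the tunables the kernel picked up at boot:
kenv | grep -E 'hw\.em|kern\.(em|cam)'
-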
So is the box using VLANs directly?
Steve
-
Correct. One side of the current VLAN goes to the WAN on the second pfSense box, then to my computer directly…
I have one in place now that's a power hog, so I dabbled in the Atom to save… watts, I don't know.
I found it, I think. I have to reinstall, then change this setting to see if I have the same result. Right now as I type it's transferring @ 602Mbit.
-
Cool - So my current drive has an MTTF of about 1.5 million hours, which is similar to a great SSD, so if there is no benefit other than wattage, I'll leave it be. I usually do things for a particular reason, not just to be trendy, so if the SSD isn't going to bump my performance, I'll wait for a drive failure to replace it. With this drive that's probably going to be about 6 more years minimum.
Still no luck tracking down the bandwidth-killing culprit?
-
Yeah, I tracked it down. For some reason, if powerd is not enabled (which it isn't by default), the power state of the CPU is misrepresented, I believe, or it isn't running at full throttle, or it's reported wrong; something along the lines of the SpeedStep control. Anyway, when I set powerd to hiadaptive, my bandwidth doubles. Now, you would think "maximum" would be the best performing; in this case it is the worst. It's almost like they are totally reversed. I'm sure if powerd is off it's at max by default, per se. Anyway, problem solved: it pulled 75MB/s from my NAS, which is equivalent to 600+ Mbps.
Beforehand it was only pulling 22-30MB/s.
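For reference, the powerd option lives under System > Advanced > Miscellaneous, at least on the 2.x GUIs I've seen. Under the hood pfSense runs FreeBSD's powerd, so the hiadaptive behaviour can be approximated from a shell like this (a sketch of the generic FreeBSD invocation, not pfSense's exact command line):
# Hiadaptive policy for AC, battery, and unknown power states:
powerd -a hiadaptive -b hiadaptive -n hiadaptive
# Watch the effective CPU clock ramp during a transfer:
sysctl dev.cpu.0.freq dev.cpu.0.freq_levels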
As far as the SSD goes: yeah, it will slightly improve performance if you use a cache like Squid, but otherwise it's primarily for power savings, if you don't use any cache and aren't worried about an additional 5-15 watts depending on the drive. I dropped 6-7 watts swapping mine from a WD 80GB Caviar. There's really not that big of an advantage, especially for home use. Just about any off-the-shelf hard drive from the past 10 years can do at least 50MB/s, and that's plenty for pulling in a corporate-type account. The SSD will find the files a little quicker and push them along, but we're talking milliseconds there.
All in all, the project went from a Dell OptiPlex 755 pulling 75W at the wall to a 14W Atom. I can probably tweak BIOS settings to drop another few. While the 755 can reach higher bandwidth, I'll probably never be seeing 800+ at home; by the time we reach something like that from our ISP, something better will come along.
Core 2 Duo machine = 11.09Mb/W, Atom = 43Mb/W.
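(Taking the figures above at face value: the Atom side is roughly 600 Mbps ÷ 14 W ≈ 43 Mb/W, and the Core 2 Duo number is its measured throughput divided by its 75 W draw the same way.)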
Built for $67.
-
I have Atom boxes like yours, just not here. I'll check the powerd feature on all of them because of your hard work.
(I had a fan go bad in one, so I'm expecting it to come back to me any day now.)
I'm not at all sure it even needs the dang fan, but it's noisy, so they are sending it to me.
-
Yeah, they really don't. I have a fan on my chassis just for the hell of it, because it was sitting there already. I could drop the fan and save another 2W; I have it trolling around 20% to keep it fairly quiet.
With the chassis open the CPU sits at 19°C; closed with the fan @ 10%, 24°C; with no fan it may be around 30°C.
-
Another nice bonus: with a computer tied directly into the cable modem I was pulling a solid 55.4Mbps. With this pfSense box on 2.1, it pulls DHCPv6 from the modem and passes it along to all machines. Ran another speed test and picked up another 2Mb… so I'm happy. Ping dropped a couple around here locally, but it's still a solid 33ms midway across the country.
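A quick generic check that LAN clients really got usable v6 out of that DHCPv6 pass-through (my own habit, not something from this setup) is to ping a known public IPv6 address from a client, e.g. Google's DNS:
# FreeBSD/macOS style; on Linux use "ping -6" instead:
ping6 -c 4 2001:4860:4860::8888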
-
Pulling that much bandwidth isn't something unique to pfSense. Even my cheapy Linksys can do as high as 80 or 90. But doing anything big with the bandwidth, like VPN, does take something like pfSense.
-
I've messed around with Linksys boxes running DD-WRT, Tomato, etc. Yeah, I can pull the 55 of bandwidth no problem on them, but when it comes to ping they're all over the place, and under any heavy load they fall like a ton of bricks. I was even contemplating purchasing a Netgear 4000 series, or maybe even an AC router, but they ALL have 128MB of RAM or less, and the majority fall in the 32MB range. Not so good if you're trying to pull P2P at full bandwidth.
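(Rough numbers for scale, my estimate rather than anything measured here: pfSense figures each state table entry at about 1 KB of RAM, so even if a 32 MB consumer router spent every byte on states it would top out around 32k connections before firmware overhead; a busy P2P client can blow through that.)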
-
Yeah - they have weak state tables, and the CPUs are on the ragged edge of maxed out with a fast connection. And VPN… that maxes the CPU at only 5MB or so. It's much weaker than the pfSense box if you are doing anything other than speedtest.net.
-
I completely agree. Now I gotta make sure my coffee pot pulls its IP.
-
Don't forget to check your sneakers…
-
I believe the number of queues setting has no effect on the flow control setting. I think if you want to set queues individually for each interface it would be done by hw.em.1.num_queues=1 rather than hw.em1.num_queues=1.
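Since the two spellings are easy to mix up, one way to settle it on a given box (my suggestion, nothing authoritative) is to check what actually survives a reboot:
# loader.conf.local values land in the kernel environment verbatim:
kenv | grep num_queues
# and see whether the driver exposes a matching per-device value:
sysctl -a | grep -i queue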