10 gigabit questions from a n00b.
-
I think this forum has more high-speed networking expertise than any other I'm on, and I'm hoping a few off-topic, possibly stupid networking questions will be tolerated.
OK, so I've done the networking for the small companies I've worked for, off and on, for a long time now. Only gigabit and lower, no high-speed stuff. Everyone I talk to on a forum who handles 10GbE does so in a corporate environment, so I need to think outside the box to get past gigabit speeds for a small number of nodes.
I have two small offices where gigabit ethernet is not cutting it. In each case I have a small number of servers accessed mostly from outside. For example, one or two NAS boxes and maybe 2 or 3 VM hosts. A handful of gigabit clients outside of that, but the key issue is performance either from VM to NAS or VM to alien VM, meaning between two different physical hosts.
I've been looking at 10 gigabit NICs for a while, and reading the currently active 10 gigabit tuning discussions here. It seems like there are quite a few reasonably priced, if not scary-cheap, NICs out there. The problem comes with 10 gigabit switches. We need VLAN awareness at least at the VM host, and AFAIK the switch must be VLAN-aware if the host is, right? So from what I know this means I need at least a semi-smart switch.
So real-world example, not my hardware. One Synology NAS, one 'baby dragon 2' ESXi host with 32g and an SSD, hopefully to be upgraded to a second VM host. http://www.rootwyrm.com/2011/09/better-than-ever-its-the-babydragon-ii/
I noticed that network copies between VMs on the dragon run up to about 7 gigabits/second if the data was recently handled, less if it's 'cold.' Transfers outside the host are limited by physical LAN speed, of course.
The dragon is loaded up pretty well: RAM usage is nearly constantly above 90%, CPU is typically around 50%, and the SSD is basically full. Some non-essential or occasional-use VMs live on the NAS. So the first thing I want is for him to get another ESXi box.
IMO it would be great to get 10 gigabit performance between the two hosts and from hosts to nas. I doubt I'd need anywhere near wire speed, but we have demonstrated that ethernet is a bottleneck that we can't seem to get around by rearranging VMs on the hosts. If I could get the same 7 gigabits per second performance from ESXi to NAS it would be awesome. If I could also get it from ESXi to ESXi that would be even better.
Finally we get to the questions:
-
Is there a non-outrageously priced switch with say 6 or 8 SFP+ ports on it?
-
Can I get multiple-interface NICs and go host-to-host and host-to-NAS?
-
Can I use something like a Tilera board as a switch? http://www.tilera.com/sites/default/files/productbriefs/TILEncore-Gx72_PB043-Rel_1_3_WEB.pdf
-
Is there some documentation describing the difference between real-world gigabit and real-world 10g networks to someone who's not a certified network admin?
Nobody mentions Tilera except for Tilera and Mikrotik. Email and forum responses from either company are insanely slow, which either indicates that they're busy as heck and don't have enough people to do it, or that they only show up at the office every third Wednesday. I don't know which it is. I'm really intrigued by grid computing but have no idea how well it works based on any third-party assessment.
I also have no idea how much Tilera products cost. They want you to sign an NDA to even get a price list. I'm still trying to decide if I want to go that far.
Thanks.
-
-
I forgot questions about multiple 1GbE NICs per host.
These questions are for a network as described or slightly larger, say up to 2 NAS and 4 VM hosts.
-
Is it reasonable to get 4-port NICs and bond them ESXi-to-ESXi to make a 4Gbps network?
-
Assuming the Synology has the slots for it, is it reasonable to expect that to be bondable too?
-
Would a grid network be a help or a hindrance to real-world speed? I'm thinking of the sort of thing the early Beowulf clusters used, only with far fewer computers. Four NICs, each going to a neighboring host. Probably a crazy routing table.
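For what it's worth, the bonding question above can be sketched on a plain Linux box. ESXi and Synology DSM each do this through their own UIs rather than these commands, and the interface names and address here are assumptions, so this is only illustrative of what the bond itself looks like:

```shell
# Hypothetical sketch: bond four gigabit ports (names eth0..eth3 are
# assumptions) into one 802.3ad (LACP) group. The switch ports on the
# other end must be configured as a matching LACP group.
ip link add bond0 type bond mode 802.3ad xmit_hash_policy layer3+4
for port in eth0 eth1 eth2 eth3; do
    ip link set "$port" down
    ip link set "$port" master bond0
done
ip link set bond0 up
ip addr add 192.168.10.2/24 dev bond0   # example address
```

Note the catch that comes up later in this thread: the bond hashes each flow onto one member link, so a single connection still tops out at 1Gbps; only many parallel connections see the aggregate bandwidth.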
I know most of these questions are so insanely stupid nobody would ask them, I'm just still in sticker shock at the idea of going to 10gbE. I've already bought way more hardware in the last month than I should have, and have more to buy yet even without the 10 gigabit stuff.
Thanks.
-
-
I am going to prefix this with: this is my theory and I think it will work but I don't know for sure and haven't tested it.
X99 boards have 40 PCIe lanes, enough for up to four x8 slots.
If you put four eBay-bought dual-port 10Gbit NICs in one of those motherboards, plus a decent processor, memory, etc...
You would end up with a pretty powerful 8-port 10 gigabit router, or you could bridge the ports for a switch. It may not be as "fast" as a dedicated switching fabric, but damn, it'd be close. It may be cheaper, too...
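The "bridge them for a switch" part of this idea would look roughly like the following on Linux. The port names are assumptions for four dual-port cards; this is a sketch of the concept, not a tested build:

```shell
# Hypothetical sketch: enslave all eight 10G ports (names assumed) to
# one Linux bridge, making the box behave like an 8-port switch.
ip link add br0 type bridge
for port in enp1s0f0 enp1s0f1 enp2s0f0 enp2s0f1 \
            enp3s0f0 enp3s0f1 enp4s0f0 enp4s0f1; do
    ip link set "$port" master br0
    ip link set "$port" up
done
ip link set br0 up
# Turning on VLAN filtering makes the bridge VLAN-aware, like a
# smart switch; per-port VLAN membership is then set with
# 'bridge vlan add'.
ip link set br0 type bridge vlan_filtering 1
```

All forwarding happens in software on the CPU, which is exactly the "no switching fabric" caveat raised above: whether it holds up near wire speed at 10Gbps depends heavily on the CPU and NIC offloads.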
-
If you need more than 1Gbps but not as much as 10Gbps, then several gigabit links in a LAGG is a common way to go. However, it only helps if the traffic over it is spread across multiple connections. See, for example, this:
http://www.admin-magazine.com/Articles/Increasing-Throughput-with-Link-Aggregation

Steve
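The "multiple connections" caveat comes from how a LAGG picks an egress link: it hashes each flow's addresses and ports, so every packet of one connection lands on the same member link. A toy illustration of that idea (this is not the kernel's actual hash, just the same principle):

```python
# Illustrative layer3+4-style link selection for a LAGG.
# Not the real bonding-driver algorithm -- just the principle:
# one flow always maps to one member link.
def pick_link(src_ip, dst_ip, src_port, dst_port, n_links):
    flow = (src_ip, dst_ip, src_port, dst_port)
    return hash(flow) % n_links

# A single big file copy is one flow, so it always uses one link
# and tops out at that link's speed:
a = pick_link("10.0.0.1", "10.0.0.2", 50000, 445, 4)
b = pick_link("10.0.0.1", "10.0.0.2", 50000, 445, 4)
assert a == b  # same flow -> same link, every packet

# Many parallel connections (different source ports) can spread
# across the members, which is when a LAGG actually helps:
links = {pick_link("10.0.0.1", "10.0.0.2", p, 445, 4)
         for p in range(50000, 50032)}
print(sorted(links))
```

This is exactly why a LAGG suits many-client workloads but does nothing for one huge single-stream transfer.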
-
This is all theoretical for me above 1Gbps, but from what I know about 1Gbps, the lack of dedicated switching hardware in a general-purpose box seriously hampers forwarding speed even at that rate. My guess is that at 10Gbps the missing switch hardware would cripple forwarding, and I'm not talking so much about routing as about within-the-VLAN switched traffic.
You'll notice the options I mentioned didn't include a cut-rate (as in cut-corners) switch or a home-built switch. If I go with a switch at all, it would have to be one with at least VLAN support and forwarding rules; I've tried bottom-feeding before and it never works out happily. I almost always build my own boxes, but not in an attempt to make things cheaper, because in the long run it never is.
I WOULD try a grid network with either bonded 1Gbps NICs or with multi-port 10Gbps NICs. And I'm loosely defining 'grid' as a series of point-to-point high-speed connections directly between the hosts where high-speed traffic is desired, without a switch. The thing is, above a certain small number of hosts you could have just bought a switch, and the routing table looks crazy on setups like this.
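To make the 'grid' idea concrete: each pair of hosts shares its own tiny point-to-point subnet. A sketch for one host in a hypothetical three-host full mesh (all addresses and interface names are made up for illustration):

```shell
# Hypothetical 3-host grid, no switch. Each host pair gets its own
# /30 point-to-point subnet. Shown from host A's point of view,
# with direct links to B (eth1) and C (eth2) -- names assumed.
ip addr add 10.9.1.1/30 dev eth1   # A<->B link; B is 10.9.1.2
ip addr add 10.9.2.1/30 dev eth2   # A<->C link; C is 10.9.2.2
ip link set eth1 up
ip link set eth2 up
# In a full mesh every host is a direct neighbor, so no extra routes
# are needed. In a partial mesh you add static routes through a
# neighbor, e.g. to reach a host D that only B connects to:
# ip route add 10.9.3.0/30 via 10.9.1.2
```

The routing table stays sane in a full mesh, but every host added means a new link and subnet on every existing host, which is the "you could have bought a switch" scaling problem.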
I haven't read Steve's link yet but will.
The reason for my link to the Tilera board is that it's an 8-port SFP+ NIC with 72 64-bit cores (supposedly each roughly equivalent to a recent MIPS core) right on the card, oriented toward network infrastructure. If I could find a happy customer who wasn't a company employee or under contract with them, I would probably buy one even though I don't know how much they cost. If their hype is anywhere close to right, I think it would make a fantastic switch and router at 10Gbps. The higher-end gigabit Mikrotik units are the only hardware you can buy with these chips (lower-end Tilera parts), and they seem to easily manage wire speed in just about any configuration tested.
Their documentation claims the card can do deep packet inspection and intrusion detection/prevention at 80Gbps right on the card, and that it can run standalone without a host system. I'm not sure how much of that to believe without actually seeing it; it seems like smoke and mirrors, but with just a little too much credibility.
You can read all sorts of promises by new companies and they may be right, but inevitably they're tooting their horn more than a little bit. And they never tell you the bad parts.
There are almost certainly people on this forum who started out in my situation, needing a small number of 10gbps nodes and not being able to afford (or justify) an all-out 10gbps switch. Somehow there has to be a solution I'm not seeing.
Thanks.
-
Reading that link on aggregation tells me this is not the way for me. Our normal network traffic is satisfied by a solid 1Gbps link. When we need high bandwidth, it's usually a single huge file transfer or a single connection to a database, something like that. In other words, the fact that a single connection cannot go faster than a single NIC in the LAG sinks that ship.
So is it feasible to do direct host-to-host with 10Gbps NICs? Does it work the same way it does at 1Gbps?
If that sort of thing works, I could get dual-port SFP+ NICs and handle everything with direct attachment over short-range optical connections. The other guy might need more than two ports, but for my situation I don't see any immediate need beyond dual-port.
-
Also, does the fact that these are virtual machine hosts affect the ability to direct-attach? I know I'll need NICs suitable for virtual networking (SR-IOV is the term, I think), but will that affect the ability to directly attach?
Thanks.