Questions about 10Gbps NICs



  • Hi,

    Trying to educate myself about 10gbps networking.  I have recently purchased a SuperMicro board to be used as a router, details here: http://www.supermicro.com/products/motherboard/Atom/X10/A1SRM-LN7F-2758.cfm

    All questions pertain to buying a good quality nic, probably Intel.

    • Is it feasible to put a 10 gigabit card in the pcie slot?  It's a 4-lane pcie2 slot in an 8-lane socket.

    • Why is it that a single-port 10gbE (fiber or copper) uses 2 pcie lanes but pretty much every two-port I can find wants 8 lanes?  Does the nic use all these lanes or just 4 lanes?

    • Is it reasonable that this board can route 10gbps traffic while also handling a high speed VPN on a gigabit connection?

    • Is it reasonable to put a 2-interface nic in this slot, and route between 2 10gbps connections?

    I've been watching the 10gbe tuning thread, and some other 10 gigabit threads.  You don't have to restate things found there unless I'm obviously missing it.

    Details about my box:

    • 16gb ECC registered RAM in 2 slots.

    • 1x OCZ Vector 150, 250GB

    • 1x spinner, 512GB

    • This was going to be a VM host, but I realize there probably won't be much left if I'm routing 10gbE traffic.

    Details about my network:

    • This is a home office.

    • Will have gigabit Internet in less than 2 years; I'm planning for it.  Will have 200 Mbps after I get everything set up.

    • I have 3 boxes planned where 10-gigabit networking would be really nice: a NAS, and 2x VM hosts.

    • I might buy 3x 2-way nics, one for each, and direct connect to put off buying a switch until I can afford something decent.

    My need for high speed networking at the moment is sporadic.  It's generally a single transfer or single network socket, either a database backup or a VM backup going across the wire, or a database connection to a really big database.  In the event of these transfers, time is money.  The rest of the time 1gbps would be fine.

    Thanks.



  • I found this: http://www.tested.com/tech/457440-theoretical-vs-actual-bandwidth-pci-express-and-thunderbolt/

    Basically it says that you need 2 pcie lanes for 10gbps in one direction.

    Electrically you will be able to plug any pcie card into any pcie slot, but you will only get the speed of whichever side has fewer lanes.

    10gbps cards do 10gbps up and 10gbps down, for a combined throughput of 20gbps total. With 4 lanes you can expect 20gbps total.

    That said: when you are doing transfers you're not likely to max out up and down at the same time….
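    For what it's worth, the lane arithmetic can be sketched out. This is a rough calculation using commonly quoted nominal per-lane payload rates (~4 Gbit/s for PCIe 2.0 and ~7.9 Gbit/s for 3.0, each direction), not measured figures:

```python
import math

# Nominal per-lane, per-direction payload rates in Gbit/s (after 8b/10b or
# 128b/130b line coding); commonly quoted figures, not measured numbers.
LANE_GBPS = {"2.0": 4.0, "3.0": 7.877}

def lanes_needed(ports: int, pcie_gen: str) -> int:
    """Minimum lanes so every port can push 10 Gbit/s in one direction at once."""
    return math.ceil(ports * 10.0 / LANE_GBPS[pcie_gen])

print(lanes_needed(1, "3.0"))  # 2 -> why single-port cards can live in an x2/x4 slot
print(lanes_needed(2, "3.0"))  # 3 -> rounds up to the next standard width, x4
print(lanes_needed(2, "2.0"))  # 5 -> rounds up to x8, hence dual-port cards wanting 8 lanes
```

    Cards only come in x1/x2/x4/x8/x16 widths, so the 3- and 5-lane answers round up, which lines up with the slot requirements you see on dual-port cards.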



  • http://www.lannerinc.com/products/x86-network-appliances/?option=com_content&view=article&id=1596:fw-8894&catid=25:rackmount

    This looks more up your alley. You could combine everything into one box, run VMs galore and if you need to you can upgrade to 40gbps.


  • Netgate Administrator

    @Keljian:

    With 4 lanes you can expect 20gbps total.

    That does depend on which PCIe version you have though:
    http://en.wikipedia.org/wiki/PCI_Express#History_and_revisions

    That Supermicro board is using PCIe 2.0 so the 4x slot will give you 16Gbps total.

    Steve



  • OK that helps a lot.

    It seems that most of these single-port NICs are half duplex then.

    @Keljian, I don't see a price tag on that but it looks to be a whole lot more money than I have on hand.  I already bought the SuperMicro board and have it set up, except no time to put an operating system on it.  It is plenty for what I had originally planned.

    I knew the pcie version on this board before I bought it, I didn't like it but it still looked to be the best option for me.

    @steve, actually if I can get that it would probably be fine.  It would be neat if I could route between two 10gbE networks at full speed but it's not really critical.  If I can get just one good NIC into a switch from the c2758 box then all my 1gbps subnets can hit close to wire speed to the fast network.

    If I could have routed full speed then that would save me some hardware purchases, but since I can't just having a high speed uplink port would be enough.  And I seriously doubt I'll saturate both directions on a 10 gbps nic in this office for quite awhile.  My use cases just don't have that sort of load.

    Generally speaking there's not a lot of equipment here that could benefit from 10gbps.  I have those three items planned and will have to build most of it yet so it's not an immediate problem.

    I'm trying to do this the right way, but without spending a lot of money right now.  I can swing some NICs, but a switch with actual VLANs and routing capabilities looks pretty intimidating.

    Thanks guys you've been a big help here.


  • Netgate Administrator

    It's hard for me to really comment on this because I've not tried to push very high speed traffic of this order but….
    I doubt the 16Gbps limit of the PCIe bus will be the limiting factor in your setup. That CPU will really fly under 2.2 but even so.
    Though this post suggests it will get close to 10Gbps with the same CPU:
    https://forum.pfsense.org/index.php?topic=71949.msg449762#msg449762

    Steve



  • I'm very conscious that we're looking into a crystal ball to predict the future.  I've been watching that tuning thread for quite awhile, I don't understand everything there but it gives me words to search on.

    Going into this I anticipated two physical routing devices.  I am building for gigabit Internet that is not here yet, using a c2758 board for a VPN which I'm hoping comes close to being able to keep up with the internet connection.

    The 10gbE idea was not originally for this box.  Advice here suggested I needed only one pfSense instance, which I'm still not sure I believe can be as secure as two instances, especially separated by being on different physical hardware and layered.

    Realistically speaking, I can't see any of these large transfers going faster than the storage medium.  A file transfer needs to go to a disk or SSD.  A database query is running against a disk of some sort.  There might be a certain amount of information cached but it's not going to last all that long.  I'm trying to save time that's worth saving, but if the network gets so fast that it's done in a few seconds and I sit there twiddling my thumbs looking at page 1 of the output then that's useless optimization.

    There is a very real advantage/profit for me in getting some file needed for customer support onto the correct machine and running, and in getting test hardware that can run at close to the speeds the customer uses, but no money for me in crazy-fast transfers that only make a few minutes difference in real life.

    In other words, 10gbE is worth a look, but an Enterprise-class SAN or any device that can keep a 10gbE card saturated in both directions is not.

    Your post for the FW-8894 gives me something to look at in terms of server hardware.  It would be really nice to get hardware compression support and maybe encryption support directly on VM hosts, if I can get guest OS tools to recognize the functionality.  I need to look at benchmarks, some of my VMs will benefit from lots of small cores but I need a couple with faster, beefier cores too.

    Thanks for your time.



  • To an extent I have to fall back on "are you sure you are not throwing hardware at a software problem?"

    I don't know what data you are transferring that you need >100MB per sec (i.e. gigabit). Certainly most database use wouldn't fall into that category, as you run into limits with disk access and other network connections.

    Really the only valid use I can think of that shifts around that amount of data is video work.

    Therefore my questions are: what are you transferring (I don't need specifics, just the type of data), and what compression tools are you using?

    If encryption is required, are you compressing first? Compressing already-encrypted data is a mistake.
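    A quick toy demonstration of that point, with zlib standing in for zip/7z and random bytes standing in for ciphertext (illustrative only):

```python
import os
import zlib

# Toy demo of why you compress *before* encrypting: good ciphertext is
# statistically close to random bytes, and random bytes don't compress.
plaintext = b"2015-06-01,ORDER,42,PAID\n" * 4096  # redundant data, like a DB dump
pseudo_ciphertext = os.urandom(len(plaintext))    # stand-in for encrypt-first output

print(len(zlib.compress(plaintext)) < len(plaintext))                  # True
print(len(zlib.compress(pseudo_ciphertext)) < len(pseudo_ciphertext))  # False
```

    Good ciphertext leaves no redundancy for the compressor to find, so compress first, then encrypt.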



  • This is a home office, I'm a software developer.  I work in collaboration with several other developers on the same projects.  It's custom software using databases.  We're basically using home offices. We don't want any of this on the cloud because of contractual obligations for security.  I have been using my place as a traditional home office user, meaning no access in and going to another office for servers.  I have a server here and a client and a couple small appliances right now but only for my part of the work.  I want to start using my better Internet connection for the benefit of my group, and I'm trying to do it right.  Much of what I have planned is not purchased yet.

    The hardware I recently purchased (SuperMicro c2758 board) will be a VM host with pfSense.  pfSense will be the VPN, UTM and router.  This box will totally isolate normal household use, including wifi, from my professional setup.  If I go with 10gbE then this box may also connect to 10gbE, although all the hardware I have 10gbE on will also have regular gigabit so it might not be practical; in fact the more I think of it, the less I think there's a need for the c2758 to have 10gbE.

    The data I'm transferring is a database backup.  Often in the 50 gb range in size.  If I have to transfer data then generally it's a 'production system down' type of issue where a bunch of people are sitting around on their butts waiting for the system to come back up.  The only video around here (my network) is the normal YouTube stuff, and occasionally we watch a TV show online.  That doesn't even come close to stressing our existing 60mbps connection.  Oh yeah, a lot of skype video for my wife's family.  None of that will be happening during business hours, since she's working elsewhere and I'm working here.  We do use Skype and join.me and webex for desktop shares and conference calls though.  Again, nothing so fast that it couldn't happen over wifi, and that stuff would never touch a 10gbE nic anyway, or any server it's attached to.

    Back to the point, generally somebody from our (me and my coworkers) side is hooked to their server and diagnosing what they can there, while another is getting the backup in case we need it.  I'm trying to make a setup that is a workable replacement for a customer setup.

    With current hardware and software, it often takes more than 2 hours to compress the file backup using zip or 7z on their end.  A QuickAssist card on their end could reduce that to less than a minute and would require no extra CPU from their often heavily loaded database server.  Then transferring across the Internet can take a bit too, especially since the current servers on our side are behind an insanely slow connection.  If I get some of the servers here then the transfer time will be much smaller.
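    To put a number on that (back-of-the-envelope, assuming the full 2 hours is spent compressing):

```python
# Implied throughput if compressing a 50GB backup takes ~2 hours: the current
# zip/7z pass, not the disk or the network, is the slow stage.
backup_gb = 50
hours = 2
mb_per_s = backup_gb * 1000 / (hours * 3600)  # GB -> MB, hours -> seconds
print(f"~{mb_per_s:.1f} MB/s")  # ~6.9 MB/s
```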

    So on my end, I'm going to upload the file from their server (probably their server is the client, in other words) and then move it to the appropriate database server and unzip/decrypt/restore.  That's the first task on my end.

    From that point I'm either diagnosing data or running an app against that data trying to recreate the problem.

    Right now we're not encrypting at all due to lack of time.  We're relying on a VPN connection to the customer site to transfer data, using RDP usually, which sucks because if we drop the connection then the transfer fails.  One of the things I'm trying to address is a more robust way of doing this that isn't too hard to get the customer's IT security guys to sign off on.  A QA card offering both encryption and compression acceleration is going to be a lot easier to sell IMO.

    If I had 10gbE then I would start putting VM images on a shared drive on the NAS, or might try to make a SAN and forego tcp/ip on 10gbE.  The file would already be on that physical hardware (NAS) from the VPN transfer (using 1gbE since it's from outside), and then running a VM across the wire would take the 10gbE link.  The advantage to me would be simpler backups and fewer steps.

    Compression tools are 7z if we can get the customer to install it on their server, otherwise just a zip.  If I can get a QuickAssist card over there it would be whatever the best compression is that the card offers.  Again if we get QA then we'll have a strong encryption key with a public key on their side, encrypt the zip in a second or two and send it on its way.  We'd probably script that.

    The 10gbE might be overkill here, I don't know yet.  I haven't bought anything on the assumption of 10gbE being a key player, and have lots to set up here anyway.  I'm going to have to run actual numbers, maybe get some actual throughput data from a customer site to see how much they actually use when in production.  I don't know that part.

    So the image shows the basic physical block diagram.

    The linksys will be its own network, the only thing I might allow is access to printers and the like, which would be on a VLAN.  There are a couple small appliances that would go into a general purpose VLAN too, like DNS and DHCP and a stratum 1 time server.  I didn't draw a VLAN arrangement because right now I'm concerned with physical layout.

    The home net would have anything hooked to wifi and anything not in my office.  There are some wired wall ports in my house, those would go on linksys, which I'm considering to be a low security network.  This net would have TVs and BluRay and phones and whatever else is around.  It would not be allowed anywhere except the Internet and maybe my utility VLAN.

    The high speed net will have VM hosts and a NAS.  Each of these has or will have at least one gigabit NIC anyway, so it's possible I'll just put a gigabit switch on there and hook it to pfSense if the 10gbE numbers don't work.

    The low speed work net has a few boxes on it, including my workstation.

    VLANs:

    • DMZ will have at least one VM guest for file transfers, and at least one http server.

    • Utility will have a printer and DNS and DHCP, no access at all from outside.

    • Low speed office net will require a switch, but doesn't have to be a smart switch.  Already have it, I get close to line speed.  It's adequate for the purpose.

    • High speed work net will have a couple VMs in the DMZ but will otherwise only be accessible from the low speed work net.

    Sorry this is such a big post.




    Thank you for the description; now I will do my best to help you find an answer :)  (This is a really interesting problem, which is the kind of thing I like working on - please see my private message if you want to get in touch directly.)

    First of all - pfSense is definitely capable of doing all of that work in one box.

    Second - 7zip makes use of AES-NI, so is likely to be reasonably fast with encryption on cpus that support it (4-6 gig a second is likely, certainly it won't slow things down significantly). Some example speeds here:
    http://www.reddit.com/r/hardware/comments/2ckwai/aesni_and_hyperthreading_in_i7_cpu/

    Thirdly - your bottleneck is primarily your compression. Compiler choice doesn't seem to make much of a difference with 7zip, though Microsoft's Visual C compiler seems to do the best job, and you may be able to shave a bit of time off as per here: http://www.behardware.com/articles/847-13/the-impact-of-compilers-on-x86-x64-cpu-architectures.html

    Fourthly - you need more processor power to compress faster. Here is an example of what different processors can do compression-wise with 7zip: http://techreport.com/review/27018/intel-xeon-e5-2687w-v3-processor-reviewed/7 - basically the more horsepower you have, the faster you go.

    Once you have the power/compression speeds under control, you're going to need I/O - very fast I/O. Most SATA SSDs will saturate a 6Gbps (~500 MB/s) bus, so you're covered there on your side. Thing is, you're not going to get things uncompressing faster than your client's I/O, even with QuickAssist.
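    One way to frame it: a crude model where the pipeline runs at the speed of its slowest stage. All figures below are illustrative guesses, not benchmarks:

```python
# Crude pipeline model: end-to-end rate is just the slowest stage.
# All figures are illustrative guesses, not benchmarks.
stages_mb_s = {
    "source disk read": 500,    # SATA SSD on a ~6Gbps bus
    "7z compression":    60,    # CPU-bound; varies wildly with level and cores
    "10GbE wire":      1150,    # roughly 9.2 Gbit/s of usable payload
    "dest disk write":  450,
}

bottleneck = min(stages_mb_s, key=stages_mb_s.get)
print(f"pipeline runs at ~{stages_mb_s[bottleneck]} MB/s, limited by {bottleneck}")
```

    Until the compression stage speeds up, a faster wire changes nothing.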





  • Keljian,

    I appreciate your permission to ping you privately, but as long as I'm not sharing trade secrets I'd just as soon have it be public.  I learned a lot from reading other people's support threads; it seems selfish to hide mine from them.

    I guess I need to restate my goals here with 10gbE.  I'm trying to eliminate bottlenecks that significantly hurt.  It may be that 10gbE doesn't help, I don't know.

    Compression accelerator looks really interesting.  I might have to look at that.

    AES and hyperthreading: this was a really interesting read.  The AES part is new to me, and the hyperthreading is something I've been puzzled about for awhile - not from not knowing what it was, but rather why people are still so confused after all this time.  It's two prefetch modules for a single core.  Again, trying to eliminate bottlenecks.

    It's too bad they didn't include atom c2000 processors in their benchmarks.  I'm really curious.  I guess I'll get some first-hand experience fairly soon anyway.

    We seem to be on the same page right now.

    Once I get to the point of max load on the VM hosts and NAS I don't care if there's network speed left over.  I'm not chasing infinity here, I just want to kill bottlenecks.  The 10gbps nics will be the last thing I install, guessing from where I sit now.  I'm just trying to understand all the issues.

    These e5 chips look incredibly interesting to me.  I'm going to have to google some database performance benchmarks on them.

    Thanks again for your time.



  • I don't know about your databases, but the ones I typically work with have lots of padding data, where a 20GB file will only have 1GB of actual data. Most of the time, using a fast compression is all that is needed. SQL2008, or was it 2008R2, and newer support compressed back-ups. They back up nearly as fast as an uncompressed backup.



  • Usually a 50gb backup for us translates to about 7 or 8 gb zipped.  The ratio I think depends on how densely populated the rows are and for us a lot of columns have some sort of value in every row.  The biggest zipped file I can recall was a little over 20 gb.  We push our customers to prune their data but they hate doing it.  Some have gone so far as to add disk storage several times to accommodate everything, which IMO is crazy.

    Right now the backups are going to a 25/3 connection.  Once I get my network together I'll upgrade to 200/45?  Can't remember the upload speed right now, it's not as important.  We never send back to the customer in a high pressure situation, it's always our direction.

    Enterprise mssql will compress, but AFAIK the cheaper versions don't, and can't restore the zipped backups.  For my home support I'm not forking over the price for a full enterprise database, that's real money.  I can check again though, maybe they changed something.  I'm getting the cheapest one I can get away with, our apps don't use the extra features anyway.  But a lot of times our database winds up on an Enterprise server because that's what they're running.

    Really in my scenario I think getting the backups, zipping/encrypting, transferring and decrypt/unzip is the lion's share of the bottleneck.  The Internet speed is going to take care of part of that, the QuickAssist hardware on my end will take care of part of it, and hopefully I can get some sort of acceleration on their end.
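    The back-of-the-envelope transfer times, assuming a 50GB backup zips to ~7.5GB as above, ignoring protocol overhead, and assuming the sending side can fill the pipe:

```python
def transfer_minutes(gigabytes: float, mbit_per_s: float) -> float:
    """Wall-clock minutes to move a file at a given line rate (no overhead)."""
    return gigabytes * 8e3 / mbit_per_s / 60

zipped_gb = 7.5  # a 50GB backup zipped down, per the ratio above
print(round(transfer_minutes(zipped_gb, 25), 1))   # 40.0 minutes on the current 25Mbps line
print(round(transfer_minutes(zipped_gb, 200), 1))  # 5.0 minutes after the 200Mbps upgrade
```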

    After I get all that set up, I'll evaluate if adding 10gbE between the 'big three' makes a difference.  The hardware will surely come with gigabit nics anyway, so I won't lose out by waiting.

    Thanks.



  • It's worth looking at this board instead if you want to use 10GbE effectively: http://www.supermicro.com/products/motherboard/Atom/X10/A1SRM-2758F.cfm
    It has both x4 and x8 PCIe slots. I use mine with a dual-port 10GbE card and an additional i350 quad, giving me a total of 8 1GbE NICs.
    I work from home with large video and disk images (software development), and occasionally other developers, and this gives me enough ports to firewall subnets effectively as well as 10-gig throughput from my primary workstation (MacBook Pro + Thunderbolt>PCIe adapter) to the FreeNAS box. My 40TB FreeNAS archive can provide traditional platter-based disk access at 500MB/s (10 x 4TB disks in RAIDZ2), which is fast enough to store and retrieve large images without inconvenience. The network itself is capable of much more (9.91Gbps).
    This kind of stuff is far from plug and play, and in my experience is an exercise in balancing the bottlenecks between all of the components, i.e. storage disks, server CPU, transport mechanism (CIFS vs NFS etc), switches, cables and network stack configuration (tuned for latency or throughput). Plugging a 10-gig card into an x4 slot is a compromise already, but it won't be your biggest limiting factor. There's also a huge difference between routing and bridging the interfaces, obviously, and introducing jumbo frames is likely a requirement due to the limited processor specs on those Atom boxes, and that can cause further hassles.

    Edit: if you haven't bought your 10-gig cards yet, Chelsio 5th-gen cards are likely to see an increase in performance over Intel's cheaper x520/x540 hardware.



  • 10G: we ship Chelsio T5 on the c2758.
    We may pick up the Intel x710.

    Everything else is crap.

    We've enabled AES-NI/AES-GCM, more work to be done.  Linux does 840Mb/s IPSec on the c2758 platform. We do less, investigating.

    We will enable QAT on this platform and faster THIS YEAR.  The C2758 should be good for 8Gb/s IPSec with QAT.

    We have hw coming this year that will do 6 x 10G with IPSec @ 60Gb/s with headroom.

    Yes.I.Said.This.Year.

    Many in this thread have zero clue.  Half-duplex 10G?  WTF, over?



    OK, and I see all of those require an 8-lane PCIe v3 slot.  That's really what I needed to know.  It would have been nice to get at least a single 10GbE port into a switch or something when the time came, but I guess it is what it is.

    Thanks.



  • @gonzopancho:

    10G: we ship Chelsio T5 on the c2758.
    We may pick up the Intel x710

    Many in this thread have zero clue.  Half-duplex 10G?  WTF, over?

    I admit I made a mistake - the figures I quoted were based on the assumption that PCI-e was serial, and it isn't. It would have been nice to have it corrected rather than being told I have no clue… but whatever.

    Per lane, PCI-e, in each direction (full duplex):

    v1.x: 250 MB/s (2.5 GT/s)
    v2.x: 500 MB/s (5 GT/s)
    v3.0: 985 MB/s (8 GT/s)
    v4.0: 1969 MB/s (16 GT/s)
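    For completeness, those per-lane figures fall straight out of the signalling rate and the line-coding overhead (8b/10b for v1/v2, 128b/130b for v3 and later):

```python
# Where the per-lane figures come from: raw signalling rate times coding
# efficiency (8b/10b for v1/v2, 128b/130b for v3+), per lane, per direction.
revisions = [
    ("1.x",  2.5,   8 / 10),
    ("2.x",  5.0,   8 / 10),
    ("3.0",  8.0, 128 / 130),
    ("4.0", 16.0, 128 / 130),
]
for ver, gt_per_s, coding in revisions:
    mb_per_s = gt_per_s * coding * 1000 / 8  # GT/s -> payload Gbit/s -> MB/s
    print(f"v{ver}: {mb_per_s:.0f} MB/s per lane, each direction")
```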


  • Netgate Administrator

    PCIe is serial, it's just not over a single communication medium like, say, 10base2 Ethernet.
    I think Jim was pointing out that 10Gbit Ethernet is not half duplex - unless, presumably, you've wired it very very wrong.  ;) A simple misunderstanding.
    In fact my earlier post was incorrect. I said 4 lanes of PCIe 2.0 would give you 16Gbps total, but in fact that's available in each direction. So a 4X slot could saturate a 10Gb Ethernet link in theory, if nothing else throttles the data.

    Steve



  • OK for the record I'm pretty familiar with gigabit and lower ethernet, but not at certified network admin level.  I've been "the guy" for years because I've typically worked at small companies and had an interest, while nobody else does.

    10 gigabit is a whole different ballgame and that's why I started the thread.  Everyone makes mistakes and I'm not holding any grudges or making judgements.

    I have three use cases for 10gbps and it will be awhile before I implement any of them.  They are:

    • A small number of hosts (probably 3) with 2-port 10gbps nics with direct interconnect.

    • The same number of hosts (maybe +1 in the case below) connected directly to a managed or smart switch which can handle routing and some sort of security directly.

    • My new but as yet unconfigured router which is a SuperMicro c2758 board with a single-port 10gbps nic to hook into the above switch.

    All of the main systems will be VM hosts.  Probably the router will be too, although the plan is to install that in several different ways to evaluate what Atom c2000 systems can do for other aspects of my network.  So the NICs need to be aware of virtualization optimizations.

    I can see that the Chelsio nics would work for any of the three main systems.

    For the router, if the 10gbps switch can handle VLANs and some fairly simple firewall rules between them, all I would need is to allow near-wire-speed gigabit VLAN traffic to hit the servers without the server-side nic or my router as the bottleneck.

    I can see right off the bat that the board I have can't route at high speed between two NICs at 10gbps with the 4-lane pciev2 slot it has, and having it route the high speed traffic through a single port NIC is not reasonable.  So really I'm just worried about high speed VPN performance plus routing with the 7 gigabit nics and a possible 10gbps nic.

    So I'm still looking for a possible single-port NIC that can work with a 4-lane pciev2 slot which is good enough to do the job.

    Thanks.



  • Interesting topic… but what is QAT, or what does it stand for?


  • Netgate Administrator

    QAT is Intel QuickAssist Technology: dedicated hardware offload for encryption and compression.


  • I'm just a n00b but IMO if you're doing any sort of VPN without QAT hardware you're probably doing it wrong.

    The software doesn't support it yet but it will, I'm guessing soon.


  • Netgate Administrator

    Ha! Well it depends if you need the throughput. I have an OpenVPN server running here at home to use for remote access and my hardware is way too old to support Quickassist. It's still fast enough to stream Dr Who to America though so that's fine (if you ask my sister!). Fast enough to secure my traffic when I'm using public wifi also.

    Steve



  • @kroberts:

    I'm just a n00b but IMO if you're doing any sort of VPN without QAT hardware you're probably doing it wrong.

    The software doesn't support it yet but it will, I'm guessing soon.

    AES-NI is more than enough for a good proportion of VPN use.



  • When did girls start watching Dr. Who?!!?  I've never heard of such a thing.

    Technically I don't "need" acceleration, but if you're buying hardware in anticipation of gigabit Internet and want a VPN which can even come close to that speed, you're going to need at least AES-NI.

    I'm a bit too suspicious to put all my eggs in that one basket for encryption acceleration though, which is why I'm so excited about QAT.  I also have a significant need for compression acceleration.


  • Netgate Administrator

    @kroberts:

    When did girls start watching Dr. Who?!!?  I've never heard of such a thing.

    When they started giving the role to actors like David Tennant and Matt Smith.  ::)

    Steve



  • @stephenw10:

    PCIe is serial, it's just not over a single communication medium like, say, 10base2 Ethernet.
    I think Jim was pointing out that 10Gbit Ethernet is not half duplex - unless, presumably, you've wired it very very wrong.  ;) A simple misunderstanding.
    In fact my earlier post was incorrect. I said 4 lanes of PCIe 2.0 would give you 16Gbps total, but in fact that's available in each direction. So a 4X slot could saturate a 10Gb Ethernet link in theory, if nothing else throttles the data.

    Steve

    I could be wrong, but I thought half duplex only worked with 10BaseT and 100BaseT networks.  As soon as we got to 1000BaseT, if the connection isn't running in full duplex, it isn't functioning at all.

    Regardless, I find this thread to be a very interesting read.



    Forget half duplex - what I was getting at was that you won't see the NIC's full bandwidth if the PCI-e slot doesn't have the bandwidth to carry it.



  • @kroberts:

    I'm just a n00b but IMO if you're doing any sort of VPN without QAT hardware you're probably doing it wrong.

    The software doesn't support it yet but it will, I'm guessing soon.

    http://www.dumpaday.com/?attachment_id=58505



  • @Keljian:

    @kroberts:

    I'm just a n00b but IMO if you're doing any sort of VPN without QAT hardware you're probably doing it wrong.

    The software doesn't support it yet but it will, I'm guessing soon.

    AES-NI is more than enough for a good proportion of VPN use.

    Probably, and it's the best you can get right now, so…



  • @vsxi-13:

    @stephenw10:

    PCIe is serial, it's just not over a single communication medium like, say, 10base2 Ethernet.
    I think Jim was pointing out that 10Gbit Ethernet is not half duplex - unless, presumably, you've wired it very very wrong.  ;) A simple misunderstanding.
    In fact my earlier post was incorrect. I said 4 lanes of PCIe 2.0 would give you 16Gbps total, but in fact that's available in each direction. So a 4X slot could saturate a 10Gb Ethernet link in theory, if nothing else throttles the data.

    Steve

    I could be wrong, but I thought half duplex only worked with 10BaseT and 100BaseT networks.  As soon as we got to 1000BaseT, if the connection isn't running in full duplex, it isn't functioning at all.

    Regardless, I find this thread to be a very interesting read.

    Half-duplex gigabit links connected through hubs are allowed by the specification(*), but the relevant sections of the specification are not updated anymore, and full duplex is used exclusively with switches.

    (*) A single repeater per collision domain is defined in IEEE 802.3 2008/2012 Section 3:41



  • @stephenw10:

    I have an OpenVPN server running here at home to use for remote access and my hardware is way too old to support Quickassist.

    I suspect this will change soon enough.



  • Hello kroberts,

    perhaps some information that might interest you:
    new boards with built-in dual 10GbE or SFP+.

    Do you know HotLava?
    They produce 1GbE, 10GbE and 40GbE Intel-based NICs!

    One tip from my side: build a pfSense-based firewall with the D-1500 based boards and a NAS or server with the Xeon E3, but please don't connect the pfSense firewall directly over 10GBit/s - that will not give the best throughput. You would be better off going with an Infinion SX2 card, which can be connected directly from the pfSense firewall to the NAS and will give more speed and throughput than the 10GBit/s SFP+ option, as I see it.



  • @BlueKobold:

    Hello kroberts,

    perhaps some information that might interest you:
    new boards with built-in dual 10GbE or SFP+.

    Do you know HotLava?
    They produce 1GbE, 10GbE and 40GbE Intel-based NICs!

    One tip from my side: build a pfSense-based firewall with the D-1500 based boards and a NAS or server with the Xeon E3, but please don't connect the pfSense firewall directly over 10GBit/s - that will not give the best throughput. You would be better off going with an Infinion SX2 card, which can be connected directly from the pfSense firewall to the NAS and will give more speed and throughput than the 10GBit/s SFP+ option, as I see it.

    We'll likely be moving to Xeon-D (Supermicro at first, something better to follow).

    All the HotLava 10Gbps NICs appear to be based on Intel 82599ES.  These work, but don't work as well as Fortville (Intel) or T5 (Chelsio).

