Recommendations on hardware for gigabit WAN



  • Hello

    I'm looking to build a new pfSense box.
    The box should be able to handle a 1000/1000 internet connection.
    The motherboard should be mini-ITX size.
    Also, on the LAN side I have a game server, so stable low latency is vital.

    I would also like the box to push a decent amount of bandwidth using OpenVPN or something similar; actually, the closer to gigabit throughput the better.
    I'm not using any of the packages as it is right now, but I would like a large performance overhead in case the need arises in the future.

    Going through the forums I've been looking at the Supermicro A1SRi-2758F.
    Will this be good enough for my use? Should I consider anything else? Any recommendations on amount of RAM?

    Thanks



  • I'm looking to build a new pfSense box.
    The box should be able to handle a 1000/1000 internet connection.

    If you are able to get a static public IP address from your ISP to put into the WAN setup, so much
    the better for you, and it will also be faster: over PPPoE only one CPU core
    is used, while with a static IP address all CPU cores can be used on the WAN side, so it
    will be faster there too. The SG-4860 or SG-8860 would also be fine for your needs as I see it, and a real
    pre-tuned pfSense image comes pre-installed and available on those boxes!

    The motherboard should be mini-ITX size.
    Also, on the LAN side I have a game server, so stable low latency is vital.

    Then set up a DMZ and a LAN zone and buy two switches that are really fast and managed:
    Layer2 for the DMZ, and Layer2 or Layer3 for the LAN area as you need it. I would prefer
    to solve this with the following switches at this time.

    • Cisco SG200-series Layer2
    • Cisco SG300-series Layer3
    • D-Link DGS1510-series Layer3
    • Zyxel GS1910-series Layer2

    I would also like the box to push a decent amount of bandwidth using OpenVPN or something similar; actually, the closer to gigabit throughput the better.

    The SG-4860 is able to push 500 MBit/s over IPSec (AES-GCM), and the SG-8860 considerably more than that!
    But the SG-8860 is not silent, which you may want for a more private or home setup.

    I'm not using any of the packages as it is right now, but I would like a large performance overhead in case the need arises in the future.

    There are some interesting units for you that should perhaps also be named in this thread:

    • Supermicro C2758 (as you found it) + RAM + case + SSD + PSU (the classic self-made box)
    • SG-8860 from the pfSense shop + mSATA (enough for everything + support and future-proof)
    • mini-ITX board + E3-12xx v3/v4 CPU + RAM + case + SSD + PSU (really powerful)
    • Supermicro X10SDV-6C+-TLN4F + RAM + case + M.2 SSD + PSU (the pfSense bomb)

    Going through the forums I've been looking at Supermicro A1SRi-2758F

    It is the go-to pfSense platform right now, but in a few months (Q1/2016) we will see more Supermicro D-15x8
    boards coming out and being sold cheaper and cheaper, so this could change in 2016.

    A Gigabyte GA-6LISL with an Intel Xeon E3-12xx v3 is also very powerful and fine for any setup you will
    need and can pay for.

    Will this be good enough for my use?

    Yes, for sure it will be enough for the use case you described here.

    Should I consider anything else? Any recommendations on amount of RAM?

    2 GB: pfSense firewall only
    2-4 GB: pfSense firewall, VPN and Snort
    4-8 GB: pfSense firewall, VPN, Snort, and Squid & SquidGuard
    8-16 GB: pfSense firewall, VPN, Snort, Squid & SquidGuard, HAVP & ClamAV,
    plus NIC tuning (mbuf size), Squid tuning (Squid RAM size) and/or high
    throughput to servers or DMZ clients and LAN clients

    Possible spare parts for the Supermicro A1SRi-2758F are the following:

    ….push a decent amount of bandwidth using OpenVPN or something similar,

    I would use the IPSec VPN together with the AES-GCM algorithm instead of OpenVPN for now!
    AES-GCM makes use of AES-NI, and IPSec uses AES-GCM, so you will be able to get 4x or 5x more throughput
    than with other options for your VPN. Only my opinion on that. If Intel QuickAssist or AES-GCM support becomes ready
    for use in pfSense, you might be able to push the OpenVPN side too.
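
    If you want a rough feel for what AES-NI-accelerated AES-GCM can do on a given CPU before buying hardware, a quick user-space benchmark gives a ballpark ceiling. Below is a minimal sketch, assuming the third-party Python `cryptography` package is installed (its OpenSSL backend picks up AES-NI automatically when the CPU supports it); real VPN throughput will be lower once packetization and tunnel overhead are added.

    ```python
    # Ballpark AES-128-GCM throughput test; nonce reuse is acceptable here only
    # because this is a throwaway benchmark, never for real traffic.
    import os
    import time

    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=128)
    aead = AESGCM(key)
    nonce = os.urandom(12)
    chunk = os.urandom(1 << 20)  # encrypt 1 MiB per call, a bulk-transfer pattern

    total = 0
    start = time.perf_counter()
    while time.perf_counter() - start < 2.0:  # run for roughly two seconds
        aead.encrypt(nonce, chunk, None)
        total += len(chunk)
    elapsed = time.perf_counter() - start

    print(f"AES-128-GCM: {total / elapsed / 1e6:,.0f} MB/s "
          f"({total * 8 / elapsed / 1e9:.2f} Gbit/s) on one core")
    ```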



  • Hello

    Thank you for your detailed answer, I just want to clarify a few things:

    This is my current setup:

    This is not working well; the game server has some pretty high CPU and RAM demands, and to begin with I would like to split pfSense and the game server apart and stop using ESXi.

    This is what I had imagined my network would look like after splitting it up:

    However, the way you explain it, I think it would look like this:

    Is this correct?
    I am unsure how I would benefit from your suggestion, as I have never used managed switches or VLANs, but if there's an advantage to it in my setup I will try to read up on it.



  • Picture 3 is in my opinion the best choice to realize it.



  • Realistically speaking, picture 2 and picture 3 will both work equally well for you.  You can get almost the exact same results either way with some careful planning.

    1. Picture 2 will require you to use VLANs and VLAN tagging to segregate your game server onto a DMZ. Picture 3 can be done with or without VLANs, and no VLAN tagging is required.
    2. Picture 3 will allow you to better prioritize traffic to the game server. As a simple example, if the LAN and the DMZ are attached to two separate NICs on your router, you could prioritize all traffic on the DMZ port over traffic on the LAN port. In contrast, picture 2 has all game and LAN traffic moving from the single switch to the router along a single fiber. You can still prioritize game traffic over LAN traffic, but it won't be as effective, because the switch could send the game and LAN traffic to your router in any sequence or order it wants. A managed switch with VLANs can alleviate this by allowing you to implement QoS on the switch, prioritizing which traffic gets sent to the router first. With picture 3, you don't have to worry about QoS on the switch, so a cheaper unmanaged switch would work just fine. (A toy sketch of the strict-priority idea follows below.)
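
    To make the prioritization point concrete, here is a toy sketch (plain Python, nothing pfSense-specific) of strict-priority queueing, the idea behind letting the DMZ port jump ahead of the LAN port. pfSense's traffic shaper and switch QoS implement far more elaborate versions of this.

    ```python
    # Toy strict-priority scheduler: DMZ (game) packets always leave before
    # LAN packets, regardless of the order in which they arrived.
    import heapq

    PRIORITY = {"dmz": 0, "lan": 1}  # lower value = dequeued first

    outbound = []
    for seq, (iface, packet) in enumerate([
        ("lan", "web download"),
        ("dmz", "game update 1"),
        ("lan", "backup job"),
        ("dmz", "game update 2"),
    ]):
        # seq breaks ties so equal-priority packets keep their arrival order
        heapq.heappush(outbound, (PRIORITY[iface], seq, packet))

    while outbound:
        _, _, packet = heapq.heappop(outbound)
        print(packet)  # game update 1, game update 2, web download, backup job
    ```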


  • @rajl:

    Realistically speaking, picture 2 and picture 3 will both work equally well for you.  You can get almost the exact same results either way with some careful planning.

    1. Picture 2 will require you to use VLANs and VLAN tagging to segregate your game server onto a DMZ. Picture 3 can be done with or without VLANs, and no VLAN tagging is required.
    2. Picture 3 will allow you to better prioritize traffic to the game server. As a simple example, if the LAN and the DMZ are attached to two separate NICs on your router, you could prioritize all traffic on the DMZ port over traffic on the LAN port. In contrast, picture 2 has all game and LAN traffic moving from the single switch to the router along a single fiber. You can still prioritize game traffic over LAN traffic, but it won't be as effective, because the switch could send the game and LAN traffic to your router in any sequence or order it wants. A managed switch with VLANs can alleviate this by allowing you to implement QoS on the switch, prioritizing which traffic gets sent to the router first. With picture 3, you don't have to worry about QoS on the switch, so a cheaper unmanaged switch would work just fine.

    Thank you for clearing it up for me. While reading it, I suddenly realized that security-wise my current approach is not the best idea (game server on the same LAN as the rest of my equipment, with just basic port forwards).
    I think I'll do your version of picture 3 with two unmanaged switches; since I already have a couple of unmanaged gigabit switches lying around, it would not add any costs.

    Edit:
    On second thought, I might go with a managed switch; I just realized that I could actually prioritize all the other PCs as well - the possibilities!



  • I've been putting some extra thought into the actual pfSense hardware:

    SuperMicro A1SRI-2758F-O (4 x I354) + 2x8GB ECC RAM ~ 495€
    ASRock H110M-ITX (1 x I219-V) + 2x8GB non-ECC RAM + i5-6600 + Intel I350-T2 ~ 529€
    Asus P10S-I (2 x I210-AT) + 2x8GB ECC RAM + E3-1220 v5 ~ 538€
    Gigabyte GA-6LISL (2 x I210-AT) + 2x8GB ECC RAM + E3-1220 v3 ~ 567€

    A bit over my price point:
    ----------------------------------------------------------------------------
    Supermicro X10SDV-6C+-TLN4F + 2x8GB ECC RAM ~ 848€

    All of them will be using the following:

    Enclosure: Streacom F7C Alpha without ODD (This will allow me to use the PCI-E slot on the motherboard if I should need it)
    PSU: PicoPSU 12V
    SSD: Samsung 850 EVO

    I know that the ASRock + i5 setup isn't exactly server-grade, but its cost is on par with the other options, and it comes with a better network card than the two Xeon options.

    I must say I'm leaning towards either the Asus P10S setup or the Supermicro A1SRI-2758F setup.
    If anyone wants to share their thoughts on any of the listed setups I would appreciate it, especially on the I354 vs I350 vs I210 options.

    I realize that with only two network interfaces I would need to go for a managed switch or an additional PCI-E network card.



  • Definitely #3 - with separate ports you can segregate for security and use the traffic shaper to give the game server priority over the users - the users will hardly notice this, but the game server will probably feel a lot more responsive.



  • SuperMicro A1SRI-2758F-O(4 x i354) + 2x8GB ECC RAM ~ 495€

    That would be a cool setup, and it would serve you longer than you might imagine.

    Realistically speaking, picture 2 and picture 3 will both work equally well for you.  You can get almost the exact same results either way with some careful planning.

    With the network schematic shown in picture 3, the problems of the schematic in picture 2 are mostly prevented!
    Or in shorter words: picture 3 prevents you from making the "failure" of picture 2.

    1. Picture 2 will require you to use VLANs and VLAN tagging to segregate your game server onto a DMZ. Picture 3 can be done with or without VLANs, and no VLAN tagging is required.

    With VLANs alone there is inter-VLAN hopping, so the zones would not really be cut off from one another, and
    the LAN traffic would also not be kept away from the DMZ-homed game server.

    For the LAN you should go with a managed switch; for the DMZ you only need a dumb Layer2 switch.



  • Thank you all for your answers

    So this is my final shopping list:

    Supermicro A1SRI-2758F-O ~ 388€
    2 x Kingston ValueRAM Server Premier 8GB (KVR16LSE11/8KF) ~ 116€
    120GB Intel 320 (Got one I don't use) ~ 0€
    PicoPSU-80 (got an AC adapter I don't use) ~ 28€
    M300 enclosure ~ 70€ (changed from the Streacom F7C since I'll get a smaller footprint, and the PCI slot above the board should be a non-issue with this particular motherboard)
    Shipping for PicoPSU and M300 ~ 13.50€

    Total cost ~ 615.50€

    It will be a couple of days before I order, so if anyone wants to suggest some last minute changes, please let me know.

    I'm still contemplating the managed switch, but I'm considering the Cisco SG200-08 based on a search of the pfSense forums.



  • I'm still contemplating the managed switch, but I'm considering the Cisco SG200-08 based on a search of the pfSense forums.

    The SG200-08 is ~115€ here and the Cisco SG300-08 is ~139€ here, so it could be nice to get a Layer3
    switch for a few coins more! Think about what you need or want, and also think about upcoming things in your network.

    A Layer3 switch is able to route traffic itself, including between the VLANs if any are in use,
    so the pfSense box will be freed from that load.

    Alternative switches:

    • ZYXEL GS1920-24 24-port GbE L2 Smart Switch ~99 €

    • ZYXEL GS1920-24 28-port GbE L2 Smart Switch ~135 € (24 RJ45 + 4 SFP ports)

    • Netgear GS108Tv2 ~65 €

    • Netgear JGS524Ev2 ~155 €

    • Netgear JGS516PE ~199 €

    • TP-Link TL-SG3424 ~199 €

    • Cisco SG300-10 ~ 190 €

    All in all, these are nice switches that I myself or friends have in use without any problems.



  • Personally, I would stick to one box/ESXi.

    How much load (in terms of MHz, as ESXi talks in MHz) is the game server putting on the box, and how much memory?

    Remember also that KVM/Proxmox/oVirt etc. have less overhead than ESXi and could run CentOS in OpenVZ.

    I would opt for a secondhand Haswell-based i7/ASRock B85 box with 32GB of RAM and a 10gbit NIC (Chelsio T420-SO-CR) hooked up to a MikroTik CRS226. Have separate vSwitches and do it all virtually.



  • @Keljian:

    Personally, I would stick to one box/ESXi.

    How much load (in terms of MHz, as ESXi talks in MHz) is the game server putting on the box, and how much memory?

    Remember also that KVM/Proxmox/oVirt etc. have less overhead than ESXi and could run CentOS in OpenVZ.

    I would opt for a secondhand Haswell-based i7/ASRock B85 box with 32GB of RAM and a 10gbit NIC (Chelsio T420-SO-CR) hooked up to a MikroTik CRS226. Have separate vSwitches and do it all virtually.

    If memory serves, it seems to hover at around 12000 MHz on a decent night with players populating most of the servers.
    Symptoms when the servers are loaded are:

    • Server framerate fluctuates

    • Latency rises for all players

    • Players get dropped for no apparent reason

    I'm not sure if the build you are suggesting is an upgrade of my current ESXi system or a standalone pfSense box?
    Anyway, unless I get one of those extended-ITX boards from ASRock, it will be almost impossible to get 32GB of DDR3 in.

    My current ESXi build is an i7-4770 with 16GB of RAM; this would be repurposed to run CentOS for my game server, no hypervisor.



  • My suggestion would be to beef up your ESXi machine - is ITX a requirement? There are some 8- and 10-core Xeons showing up secondhand on eBay cheaply…



  • @Keljian:

    My suggestion would be to beef up your ESXi machine - is ITX a requirement? There are some 8- and 10-core Xeons showing up secondhand on eBay cheaply…

    The footprint is somewhat important, however it is quite possible even with mini-itx:
    http://www.asrockrack.com/general/productdetail.asp?Model=EPC612D4I#Specifications

    It was my first intention to upgrade the ESXI, however the upgrade path is quite expensive.

    ASRock EPC612D4I ~ 260€
    CPU: E5-2670 v3 from Hong Kong (eBay) ~ 388€
    RAM: 2x16GB DDR4 SO-DIMM ECC ~ 351€
    Enclosure: Cooltek Coolcube ~45€
    PSU: Silverstone SX500-LG ~ 99€
    SSD: Reused ~ 0€

    Total: 1143€
    I might be able to get 300€ for my old ESXi server, so the total cost would be 843€.

    If I decide to go micro-ATX, the cost would be:
    ASRock EPC612D4U ~ 280€
    CPU: E5-2670 v3 from Hong Kong (eBay) ~ 388€
    RAM: 2x16GB DDR4 DIMM ECC ~ 233€
    Enclosure: Cooltek Coolcube Maxi ~58€
    PSU: Sea Sonic G Series 360 ~ 73€
    SSD: Reused ~ 0€
    Total: 1032€

    Total cost after selling current ESXI:  732€

    The cost would still be higher than simply splitting things up and getting a dedicated pfSense box.
    On top of that, everything is passively cooled as it is right now; I could make a Xeon almost as quiet, but it would cost me at least 100€ more to do so.
    The idle power consumption would most likely be on par with my 4770 + Supermicro pfSense box combination.

    So, if we take the ESXi box one step further and include my current NAS in the equation:

    HFX PowerNAS
    i7-4770T
    Running Windows 10
    SABnzbd
    MediaBrowser server
    FlexRAID

    ASRock EPC612D4U ~ 280€
    CPU: E5-2670 v3 from Hong Kong (eBay) ~ 388€
    RAM: 2x16GB DDR4 DIMM ECC ~ 233€
    Enclosure: Fractal Design Node 804 ~ 112€
    PSU: Sea Sonic Platinum Series 400 Fanless ~ 132€
    SSD: Reused ~ 0€
    CPU Cooler: Noctua NH-D14 ~74 €
    I added some options to make it quieter.

    Total: 1219€
    Suppose I can get 300€ for my current ESXi server and 300€ for my HFX PowerNAS:
    Total after they're sold: 619€

    An interesting idea, I'll need to think about it.



  • My home server is described in a bit of detail here: https://forums.servethehome.com/index.php?threads/constellation-my-new-home-home-office-server-build.6801/

    It has plenty of cycles free, but I don't run game servers.

    I highly recommend a 10gbit NIC (Chelsio or Mellanox - both are cheap on eBay: $70 USD for a dual-port Chelsio, or $20 for a single-port Mellanox) into a MikroTik CRS226, or the smaller CRS210, to feed your home network. That said, you can't have that and an HBA in the same box if you go ITX.

    One particularly nice thing about the Chelsio is that it shows up as multiple devices in ESXi (a la SR-IOV), so you can do passthrough to more than one VM easily if you want to.

    Other options for you to consider:
    Xeon-D (8 core/16 thread, low power, 2.6GHz turbo) - Broadwell-based low power. Lots of threads, 2x integrated 10GBase-T NICs.

    Xeon E5-2670 - AKA Sandy Bridge generation 8-core - lots of these going cheaply on eBay these days. http://www.ebay.com.au/itm/151964337392 for instance - 2.6 GHz, 3.3 GHz turbo, 8 cores, 16 threads, 20 MB cache.



  • @Keljian:

    My home server is described in a bit of detail here: https://forums.servethehome.com/index.php?threads/constellation-my-new-home-home-office-server-build.6801/

    It has plenty of cycles free, but I don't run game servers.

    I highly recommend a 10gbit NIC (Chelsio or Mellanox - both are cheap on eBay: $70 USD for a dual-port Chelsio, or $20 for a single-port Mellanox) into a MikroTik CRS226, or the smaller CRS210, to feed your home network. That said, you can't have that and an HBA in the same box if you go ITX.

    Other options for you to consider:
    Xeon-D (8 core/16 thread, low power, 2.6GHz turbo) - Broadwell-based low power. Lots of threads, 2x integrated 10GBase-T NICs.

    Xeon E5-2670 - AKA Sandy Bridge generation 8-core - lots of these going cheaply on eBay these days. http://www.ebay.com.au/itm/151964337392 for instance - 2.6 GHz, 3.3 GHz turbo, 8 cores, 16 threads, 20 MB cache.

    To be precise, I'm running CS:GO servers; unfortunately they're not multithreaded on Linux, meaning each instance likes to hog a full core on my 4770, which is somewhat problematic if I get a low-frequency, high-core-count CPU.
    Secondly, the Sandy Bridge Xeons are cheap, but their idle consumption seems substantial compared to their Haswell-E counterparts.
    The price difference between Sandy Bridge and Haswell-E would of course cover the Sandy Bridge's increased consumption for ~4 years, but it sorta bothers me.
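
    A rough sketch of that payback estimate; the wattage delta and electricity price below are illustrative assumptions, not measured figures.

    ```python
    # Back-of-envelope: how long the Sandy Bridge price advantage covers
    # its higher idle power draw. All inputs are assumptions.
    price_gap_eur = 300.0   # assumed price difference vs the Haswell-E option
    idle_delta_w = 25.0     # assumed extra idle draw of the Sandy Bridge box
    eur_per_kwh = 0.30      # assumed electricity price

    extra_cost_per_year = idle_delta_w / 1000 * 24 * 365 * eur_per_kwh
    print(f"extra running cost: {extra_cost_per_year:.0f} EUR/year")
    print(f"price gap covers about {price_gap_eur / extra_cost_per_year:.1f} years")
    ```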

    Still, it is something to consider.

    There's one thing I don't fully understand about your recommendations - the 10gbit NIC. Why is it a good idea when my LAN is 1 gigabit and my WAN is also 1 gigabit? Wouldn't it just sit at 10% capacity, since the bottleneck is, well… everywhere else?

    Also, the MikroTik switch - what's the advantage of getting this switch instead of, say, the Cisco SG300-10?

    I've always used unmanaged switches, so this is new territory for me.



  • OK, so you run a CS:GO server, NAS, pfSense (presumably DHCP), and likely other stuff on your ESXi box.

    The NAS and some CS:GO traffic is local, as are DHCP and a few other things. So that you don't limit your local bandwidth or latency by passing data through to the net, it makes sense to feed your switch with 10gbit.

    You may only use 4-5gbit at peak, but 10gbit gear is relatively cheap anyhow, and you get a bit of future-proofing in the mix.

    My reason for suggesting the MikroTik switches is that they are effectively "dumb" switches with some management functions; more specifically, they easily and cheaply link 10gbit SFP+ as a trunk to multiple 1gbit twisted-pair ports. So as a core switch they are good value. They can also handle VLANs.



  • @Keljian:

    OK, so you run a CS:GO server, NAS, pfSense (presumably DHCP), and likely other stuff on your ESXi box.

    The NAS and some CS:GO traffic is local, as are DHCP and a few other things. So that you don't limit your local bandwidth or latency by passing data through to the net, it makes sense to feed your switch with 10gbit.

    My reason for suggesting the MikroTik switches is that they are effectively "dumb" switches with some management functions; more specifically, they easily and cheaply link 10gbit SFP+ as a trunk to multiple 1gbit twisted-pair ports. So as a core switch they are good value. They can also handle VLANs.

    At the moment they are split up: the NAS is on its own hardware by itself, and pfSense + CentOS (for the game server) are on an ESXi box. However, if I was going to beef up the ESXi machine, it seemed like a good idea to consolidate all machines onto one ESXi host; it would be easier to justify the increased cost and footprint this way.
    With ECC memory and server-grade hardware I could also consider FreeNAS instead of FlexRAID.

    Good point on the MikroTik switch + 10gbit NIC, I'll have to take that into account.
    On a side note, I began to wonder if I could do away with the media converter and attach the fiber from my ISP directly to pfSense if I got a dual-port Chelsio, but I have to research that before making any assumptions.

    I'll have to consider my options; thank you for providing some new ideas on how to set up my network.



  • To further clarify:

    The way I see it, you have two streams: external (up and down from the internet) and local.

    External traffic you will limit to game serving/downloading/uploading etc.

    Local traffic could be:

    • Traffic To/From your NAS

    • DHCP

    • Traffic between clients on your network

    • Upstream (external/internet) traffic to the internet from your clients

    • Downstream traffic (external/internet) traffic to your clients

    • Game traffic to the server

    • Update traffic if you share updates across the clients on your network

    • Wireless traffic including negotiations/wireless overhead

    • Bonjour server/client traffic

    • Mythtv traffic, if you run something like a Silicondust HDhomerun

    • Remote Desktop traffic to your VMs

    • IP Camera feeds

    While you can have all of this on 1gbit, if you have multiple VMs accessing the local network clients and vice versa, there could be latency induced by lack of bandwidth or traffic during peaks. If you do it on 10gbit, you basically end up with a topology that looks like two or more switches (the vSwitches and the physical switches) with a 10gbit trunk between them.

    Therefore, even if your local traffic takes up 5gbit or so, with the NAS VM streaming 4K to multiple clients @ 1gbit, it's not going to impact your game performance, as you still have plenty of spare network bandwidth to share around. Obviously, if you had 10 clients grabbing files from your file server at 1gbit each and your file server VM could manage 10gbit worth of reads at the same time, that would flood the network, but it's a highly unlikely scenario in home use.
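
    A quick headroom calculation along those lines; the per-stream figures are made-up illustrative loads, not measurements.

    ```python
    # Does peak local + internet traffic fit in a 10gbit vSwitch-to-switch trunk?
    TRUNK_GBIT = 10.0

    peak_loads_gbit = {                    # illustrative assumptions
        "NAS streaming to clients": 4.0,   # several clients pulling ~1gbit each
        "client-to-client copies":  1.0,
        "internet up + down":       2.0,   # gigabit WAN loaded both ways
        "game server traffic":      0.1,
    }

    used = sum(peak_loads_gbit.values())
    print(f"peak demand {used:.1f} gbit / trunk {TRUNK_GBIT:.0f} gbit "
          f"-> {TRUNK_GBIT - used:.1f} gbit headroom")
    ```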

    As for running fibre to pfSense - I can't say whether this will work - you'll have to speak to your fibre provider and work out the setup you have. Typically, fibre providers don't like this, as they run VoIP and other services over the fibre too, and thus have redundancies built in for emergencies.



  • @Keljian:

    To further clarify:

    The way I see it, you have two streams: external (up and down from the internet) and local.

    External traffic you will limit to game serving/downloading/uploading etc.

    Local traffic could be:

    • Traffic To/From your NAS

    • DHCP

    • Traffic between clients on your network

    • Upstream (external/internet) traffic to the internet from your clients

    • Downstream traffic (external/internet) traffic to your clients

    • Game traffic to the server

    • Update traffic if you share updates across the clients on your network

    • Wireless traffic including negotiations/wireless overhead

    • Bonjour server/client traffic

    • Mythtv traffic, if you run something like a Silicondust HDhomerun

    • Remote Desktop traffic to your VMs

    While you can have all of this on 1gbit, if you have multiple VMs accessing the local network clients and vice versa, there could be latency induced by lack of bandwidth or traffic during peaks. If you do it on 10gbit, you basically end up with a topology that looks like two or more switches (the vSwitches and the physical switches) with a 10gbit trunk between them.

    Therefore, even if your local traffic takes up 5gbit or so, with the NAS VM streaming 4K to multiple clients @ 1gbit, it's not going to impact your game performance, as you still have plenty of spare network bandwidth to share around. Obviously, if you had 10 clients grabbing files from your file server at 1gbit each and your file server VM could manage 10gbit worth of reads at the same time, that would flood the network, but it's a highly unlikely scenario in home use.

    As for running fibre to pfSense - I can't say whether this will work - you'll have to speak to your fibre provider and work out the setup you have. Typically, fibre providers don't like this, as they run VoIP and other services over the fibre too, and thus have redundancies built in for emergencies.

    It actually makes a lot of sense now, especially if I put the NAS on the ESXi box; I can easily see one PC saturating a 1gbit link when accessing the NAS. The mechanical hard drives would most likely end up being the bottleneck if two tried to access it at once, but still, you could easily saturate a 1gbit link.

    I've been looking a bit more into the NIC, but I do not fully understand the transceiver module.
    If I buy the MikroTik CRS226 and a 10gbit NIC, wouldn't I need a transceiver module in both the CRS226 and the NIC? If yes, how do I figure out which type I should use and what cable (or optical fibre) to go along with it?
    Looking up the X520-DA2 on the Intel webpage gave me a list of modules I could use and a reference to a "direct attach cable" that just needed to comply with some specifications that I'm guessing are pretty standard for direct attach cables.
    I am guessing that I could use a direct attach cable since the switch and NIC will be placed right next to each other, but I do not know if this is best practice.

    I've been looking at the Intel X520-DA2 instead of the ones you suggested, the only reason being that it seemed better supported in ESXi.



  • I wrote this in PM for another user:

    You need SFP+ transceivers at both ends of the 10gbit connection (about $20 each) (eBay or http://www.fs.com/c/10g-sfp-plus_63 - the cheapest model will do 300 metres, or approx. 1000 ft): http://www.fs.com/10gbase-sr-sfp-850nm-300m-multi-mode-optical-transceiver-p-11589.html - if in doubt about compatibility, select Cisco as the compatible brand.

    You need fibre to join the transceivers (which is relatively cheap; you're looking for OM3 or OM4 multimode duplex optic fibre with LC-LC connectors. LC stands for Lucent Connector). Here is one example: http://www.fs.com/lc-lc-duplex-10g-om4-50-125-multimode-fiber-patch-cable-p-17235.html - eBay is also an option but is more expensive.

    You of course need a 10gbit Ethernet card for the server. The Mellanox ConnectX-2 (MNPA19-XTR) is very cheap on eBay ($18-20) and highly recommended, as is the Chelsio T420-SO-CR (dual port, approx. $75 on eBay).

    You need a switch that can handle it. The MikroTik CRS210 (2x 10gig + 8x 1gig ports) and CRS226 (2x 10gig + 24x 1gig ports) are both reasonably well liked, and quiet/cheap for what they are; approx. prices are $180 and $240 respectively - eBay or Amazon. Despite their names, do not use these for routing (that is what pfSense is for!).

    http://www.amazon.com/MikroTik-CRS210-8G-2S-IN-Router-Switch/dp/B00RSNN17G
    http://www.amazon.com/Mikrotik-CRS226-24G-2S-IN-Gigabit-Router/dp/B00KVF7S40

    More when I have a moment



  • You do not want an Intel 10gig card, as with some of them you need to use Intel transceivers, which can be quite expensive.

    Really, both Chelsio and Mellanox are well supported by ESXi, as they are server/enterprise hardware. Neither brand needs optics keyed to them; they work with any optics.

    You can use a passive twinax cable; however, twinax has higher latency and a shorter maximum length than optical. On the other hand it is cheaper up to 5 meters, and can do up to 10 meters. As you mentioned, if the devices are right next to each other, then twinax (SFP+ DAC) will work for you.

    With software RAID and 4+ rotating disks you should be able to push at least 400 megabytes a second if it is tuned correctly.
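
    As a sanity check on that number, here is the aggregate-throughput arithmetic, assuming a typical sequential rate per spindle (the per-disk figure is an assumption).

    ```python
    # Striped reads scale roughly with spindle count until the bus or CPU limits.
    disks = 4
    mb_per_s_per_disk = 120   # assumed sequential rate of one 7200rpm drive

    aggregate = disks * mb_per_s_per_disk
    gigabit_link = 125        # ~1gbit Ethernet expressed in MB/s
    print(f"~{aggregate} MB/s aggregate vs ~{gigabit_link} MB/s on 1gbit Ethernet")
    ```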



  • @Keljian:

    You do not want an Intel 10gig card, as with some of them you need to use Intel transceivers, which can be quite expensive.

    Really, both Chelsio and Mellanox are well supported by ESXi, as they are server/enterprise hardware. Neither brand needs optics keyed to them; they work with any optics.

    You can use a passive twinax cable; however, twinax has higher latency and a shorter maximum length than optical. On the other hand it is cheaper up to 5 meters, and can do up to 10 meters. As you mentioned, if the devices are right next to each other, then twinax (SFP+ DAC) will work for you.

    With software RAID and 4+ rotating disks you should be able to push at least 400 megabytes a second if it is tuned correctly.

    Thanks for making me understand it, I think I get it now. One of my concerns was vendor lock-in; I could see that the Intel one was locked, but if the Chelsio and Mellanox cards aren't locked it will just make things easier (and cheaper).

    I think I'll go with a Mellanox card; both Mellanox and Chelsio seem to be mostly sold in the US, so I have to factor in shipping costs and +25% import duties, and that quickly adds up.

    Also, I'll go with the fiber modules + cable you linked me to; the fs.com store will ship them very cheaply if I'm patient.

    FlexRAID is not really RAID; it's a set of disks with an added parity drive, meaning if I move one of the disks (except the parity disk) to another PC, I'll see its contents like on any other non-RAID disk.

    This has some benefits:

    • I can lose one disk and rebuild it
    • If I lose two disks before rebuilding, I'll 'only' lose the contents of those two disks and not the entire set
    • You can manage individual disks like any other non-RAID disk

    The downside is that you only get the read and write rate of one disk when the content being accessed sits on a single disk,
    hence the previously mentioned mechanical HDD limit. (A toy illustration of the parity idea is below.)
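
    For anyone curious how a single parity drive can rebuild a lost disk, here is a toy illustration of the XOR idea behind FlexRAID-style snapshot parity; byte strings stand in for whole disks, and the real product is obviously far more involved.

    ```python
    # Parity = XOR of all data disks; any one lost disk is the XOR of the rest.
    disk1 = bytes([1, 2, 3, 4])
    disk2 = bytes([9, 8, 7, 6])
    disk3 = bytes([5, 5, 5, 5])

    parity = bytes(a ^ b ^ c for a, b, c in zip(disk1, disk2, disk3))

    # Pretend disk2 died: XOR the survivors with the parity to rebuild it.
    rebuilt = bytes(a ^ c ^ p for a, c, p in zip(disk1, disk3, parity))
    assert rebuilt == disk2
    print("disk2 rebuilt:", list(rebuilt))
    ```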

    I used an Adaptec 52445 in the past, but sold it off with my old file server (it took up too much room).
    Compared to the Adaptec interface, FlexRAID doesn't seem as polished, but it did the job when I lost a disk.
    I am, however, contemplating either

    A) going back to hardware RAID 6, since I now have room for the controller on an mATX platform
    or
    B) getting some more memory and trying out FreeNAS

    Ah well, that was a sidestep into something completely unrelated to pfSense; better get back on track.

    Thank you for the feedback. I think I have all of it covered now; it's just a matter of ordering the items and assembling them.



  • A few last things.

    Considering the Mellanox cards are so inexpensive, and you will need to pay for shipping, I suggest you buy one or two more than you need in case you get a dud. This way if one is dead, you don't have to pay shipping twice.

    Also, the ConnectX-3 is a newer model and is still receiving updates; this (or the Chelsio mentioned) would be the preferred card.

    Regardless of which Mellanox card you get, it would pay to put it in a Windows box and update the firmware before use.



  • @Keljian:

    A few last things.

    Considering the Mellanox cards are so inexpensive, and you will need to pay for shipping, I suggest you buy one or two more than you need in case you get a dud. This way if one is dead, you don't have to pay shipping twice.

    Also, the ConnectX-3 is a newer model and is still receiving updates; this (or the Chelsio mentioned) would be the preferred card.

    Regardless of which Mellanox card you get, it would pay to put it in a Windows box and update the firmware before use.

    I managed to find a new MCX312A-XCBT for 110€ including shipping from Europe, so I decided to go with that.

    Still need the rest of the hardware, but your suggestion of the MikroTik switch + 10gbit card seemed like a good idea regardless of what setup I end up with.

    I'll probably order the transceivers + cable today as well, and I'll order the switch soon; that way I can get to know it on my existing setup before ordering new hardware.



  • @LeetDonkey:

    The box should be able to handle a 1000/1000 internet connection.
    The motherboard should be mini-ITX size.
    Also, on the LAN side I have a game server, so stable low latency is vital.

    I would also like the box to push a decent amount of bandwidth using OpenVPN or something similar; actually, the closer to gigabit throughput the better.
    I'm not using any of the packages as it is right now, but I would like a large performance overhead in case the need arises in the future.

    Going through the forums I've been looking at Supermicro A1SRi-2758F
    Will this be good enough for my use? Should I consider anything else? Any recommendations on amount of RAM?

    1. Forget ITX
    2. A barebones midtower is significantly cheaper than customizing an ITX server
    3. E3 Xeons and similar desktop-class CPUs are significantly faster than that Atom, see #2
    4. OpenVPN is currently implemented in a fashion where you want fast single-threaded performance; more cores mean little, see #3
    5. E3/desktop cores are currently 1-2 generations ahead of E5 cores (IPC ~+5%/generation on average) and a much, much cheaper way to put your single-threaded workload on the fastest possible core, see #4


  • @Aluminum:

    @LeetDonkey:

    The box should be able to handle a 1000/1000 internet connection.
    The motherboard should be mini-ITX size.
    Also, on the LAN side I have a game server, so stable low latency is vital.

    I would also like the box to push a decent amount of bandwidth using OpenVPN or something similar; actually, the closer to gigabit throughput the better.
    I'm not using any of the packages as it is right now, but I would like a large performance overhead in case the need arises in the future.

    Going through the forums I've been looking at Supermicro A1SRi-2758F
    Will this be good enough for my use? Should I consider anything else? Any recommendations on amount of RAM?

    1. Forget ITX
    2. A barebones midtower is significantly cheaper than customizing an ITX server
    3. E3 Xeons and similar desktop-class CPUs are significantly faster than that Atom, see #2
    4. OpenVPN is currently implemented in a fashion where you want fast single-threaded performance; more cores mean little, see #3
    5. E3/desktop cores are currently 1-2 generations ahead of E5 cores (IPC ~+5%/generation on average) and a much, much cheaper way to put your single-threaded workload on the fastest possible core, see #4

    Already did; I'm going for at least mATX right now. I'm still restricted when it comes to physical size, but mATX gives a much broader selection.
    In fact I found an ATX case that was perfect for me, but alas, it's out of production… the Lian Li PC-V650.
    Right now I'm considering the PC-V354 as an alternative, but that's only mATX; for almost the same physical size the PC-V650 would let me use a full ATX board.

    Indeed it is; it's also easier to customize.

    3+4+5)
    The problem with quad-core Xeons is that they don't put me in a better position than I'm in now; sure, I could virtualize pfSense and my NAS, but the game server would most likely still need to run separately. That's why I wanted to go with E5 Xeons, to consolidate all systems onto one ESXi host.
    The problem being, of course, price; even with the ES Xeons on eBay the setup would be somewhat costly.
    To mitigate this slightly I'm considering an ASRock X99 board; they support Xeons + ECC but also support running all cores at the full turbo multiplier.
    A lot of possibilities, but right now I'm trying to determine which option would suit my usage best.



  • Would a Core2Quad with 4GB of RAM running pfSense be able to handle gigabit WAN and LAN speeds?

    You can find quad-core Dell Optiplex 755s on eBay all day long for $99 - built-in Intel gigabit NIC - and I put an Intel CT gigabit PCIe card into the graphics slot and it works perfectly. You could even find a dual- or quad-port Intel NIC for not too much and get more than one port out of the PCIe slot - probably from $15 for an older one, to $45 for a Chinese i350-T2, to $60 for a Chinese i350-T4, or $200 or so for an official Intel i350, depending on how many Ethernet ports you need. The power supply is 235W on an Optiplex 755/760/780.

    I have an SFF (small form factor) which only has the graphics PCIe slot and a PCI slot, but it might fit your size requirements. It's quiet and can lie flat or on its side. Built-in NIC for each model…
    755  Intel 82566DM - Gigabit
    760  Intel 82567LM - Gigabit
    780  Intel 82567LM - Gigabit

    Here are the 4 form factors for Dell Optiplex 755
    http://www.dell.com/downloads/global/products/optix/en/opti_755_techspecs.pdf

    The USFF is a no-go because it has an external brick power supply and does not have a PCIe slot. You probably want the SFF (small form factor).


    For those who don't mind a little basic modding, you should be able to get an Optiplex 755 and drop in an Intel Xeon X3363 for a bump to 2.83GHz.
    An X3363 is $30 on eBay; search eBay for "lga 771 775 adapter" for a stick-on adapter to get the X3363 (quad core) working in a 755. You just need to update to the latest BIOS first. This should still keep the PC under $100.
    Good site for the mod:
    http://www.delidded.com/lga-771-to-775-adapter/

    ******** x 2
    Super overkill I'm sure, but if the AES-NI instruction set in the processor is important for VPN - and I'm not sure it even matters if your processor is powerful enough (Core2Quad) - you can get a Dell Precision T5500 on eBay with a Xeon 56xx processor (around $150-200 shipped with 8GB RAM). Obviously big, heavy, and way more of a power hog than an Optiplex 755.

    ******** x 3
    If you get a Core2Duo Optiplex 755, 760 or 780, all three form factors look and operate the same. If you want to bump the 760 or 780 to an X3363, the microcode isn't in the BIOS for either of those, so you have to flash a microcoded BIOS first, then do the X3363 upgrade as outlined above.
    The microcode is already in the BIOS of the 755, so no pre-flashing is necessary.
    https://www.bios-mods.com/forum/Thread-OptiPlex-360-380-760-780-960-Xeon-LGA-771-E0-1067A-Microcode

    Since this can all be done with these three boxes (755, 760 and 780), you should be able to find something affordable on eBay or used/refurbished.



  • @LeetDonkey:

    3+4+5)
    The problem with quad-core Xeons is that they don't put me in a better position than I'm in now; sure, I could virtualize pfSense and my NAS, but the game server would most likely still need to run separately. That's why I wanted to go with E5 Xeons, to consolidate all systems onto one ESXi host.
    The problem being, of course, price; even with the ES Xeons on eBay the setup would be somewhat costly.
    To mitigate this slightly I'm considering an ASRock X99 board; they support Xeons + ECC but also support running all cores at the full turbo multiplier.
    A lot of possibilities, but right now I'm trying to determine which option would suit my usage best.

    If you really want to go that route, consider that E5-16xx Xeons are often unlocked - and no, I do not mean those wacky engineering samples on eBay. There are conflicting reports on the cheapest versions, but a 1660 v3 is confirmed unlocked if you want to have 8 fast cores and eat ECC cake too. If you don't want to pay the OCD-uber-all-in-one ESXi tax (I definitely see a pattern in the people that post their builds…) but you still want performance, it's cheaper to just run multiple real servers.

    I have a 1680 v2 (8-core Ivy Bridge for X79/C602) that eats through torture tests at 4.5GHz, YMMV. Another fun fact: Broadwell (E5 v4) is coming soon to socket 2011-3 and will be a drop-in upgrade after a BIOS update on decent X99/C612 boards.



  • @andrews:

    Would a Core2Quad with 4GB of RAM running pfSense be able to handle gigabit WAN and LAN speeds?

    Typical routing and standard firewall duties? Sure.

    OpenVPN or similar? Not even close.



  • @andrews:

    Would a Core2Quad with 4GB of RAM running pfSense be able to handle gigabit WAN and LAN speeds?

    Aluminum pointed it out pretty well; on top of that, I'm currently considering putting everything on an ESXi host, so in either case it's not going to be enough.
    Also, a Core2Quad is pretty dated, and its idle power consumption is somewhat high compared to later CPUs.
    As a platform to get to know pfSense on, I can see it's attractive if the price is right, but for my use I need a bit more performance.

    @Aluminum:

    If you really want to go that route, consider that E5 16xx Xeons are often unlocked, and no I do not mean those wacky engineering samples on ebay. Conflicting reports on the cheapest versions, but a 1660v3 is confirmed unlocked if you want to have 8 fast cores and eat ECC cake too. If you don't want to pay the OCD-uber-all-in-one ESXi tax (I definitely see a pattern in the people that post their builds…) but you still want performance its cheaper to just run multiple real servers.

    I have a 1680v2 (8 core Ivy for X79/C602) that eats through torture tests at 4.5Ghz, YMMV. Another fun fact: Broadwell (E5 v4) is coming soon to socket 2011v3 and will be a drop-in upgrade after bios update on decent X99/C612 boards.

    Well, I am also considering putting pfSense and the NAS on one PC with ESXi and reinstalling the game server bare-metal on my i7-4770.
    This would most likely be the cheapest way to go.

    If I settle for putting just pfSense and the NAS on one PC, a quad-core Xeon should be sufficient.
    I could get an LGA2011-3 board with a quad-core Xeon (they're not that expensive new or on the second-hand market), and if I ever wanted to migrate the game server to ESXi I could do a CPU upgrade - perhaps when Broadwell-EP CPUs are available second-hand.
    ES Xeons are attractive when you look at the price, but if I'm going to use it for a NAS there are too many unanswered questions when it comes to ES vs retail.



  • You don't need a quad-core Xeon for pfSense and a NAS.

    You just don't.

    One fast core is enough for pfSense at 1gig. Your NAS will most likely only need one (especially if you use a RAID card) or two.

    I am in the habit of giving my VMs 2 vCPUs, and for a while there I was running on 2 (hyper-threaded) cores with both my software-RAID NAS and pfSense. It didn't miss a beat. The only reason I went to 4 cores is that I occasionally do heavy work in a Windows VM on the same box.

    If all you are doing is running a NAS and pfSense, then I would encourage you to consider Skylake i3s.



  • @Keljian:

    You don't need a quad-core Xeon for pfSense and a NAS.

    You just don't.

    One fast core is enough for pfSense at 1gig. Your NAS will most likely only need one (especially if you use a RAID card) or two.

    I am in the habit of giving my VMs 2 vCPUs, and for a while there I was running on 2 (hyper-threaded) cores with both my software-RAID NAS and pfSense. It didn't miss a beat. The only reason I went to 4 cores is that I occasionally do heavy work in a Windows VM on the same box.

    If all you are doing is running a NAS and pfSense, then I would encourage you to consider Skylake i3s.

    If all it was doing was sharing files, that would be correct.
    However, it is doing a bit more than that at the moment.

    Among other things:

    • Emby media server
    • SABNzbd
    • x264 encoding(I would love a Xeon behemoth for this)

    Also, if I go the FreeNAS route, Xeon + ECC seems to be the recommended setup.

    I could also go with the original plan of a dedicated pfSense box and leave my NAS and game server alone, but I still think it makes good sense to consider combining at least pfSense and my current NAS setup.

    NAS might be the wrong term, as NAS is only part of the role it fulfills, but I think Synology, FreeNAS and others have blurred the lines between a 'generic' multipurpose server and a dedicated NAS.

    If I insist on Xeon + ECC, this is the cheapest way to get ECC support and quad cores on LGA1151 & LGA2011-3:

    MSI C236M WORKSTATION - 167€
    Xeon E3-1220 v5 - 218€

    Total: 385€

    ASRock X99M Killer - 179€
    Xeon E5-1620 v3 - 311€

    Total: 490€

    If I could live with a slower CPU:
    Xeon E5-2603 v3 - 245€ (performance would be similar to my current NAS in multithreaded applications like x264)

    Total: 424€

    So, taking the cheapest ECC-enabled motherboard plus the slowest CPU for each platform, the difference is 39€ (of course the Skylake platform would be the fastest in this lineup).
    Getting CPUs of roughly similar performance, the difference is 105€.

    So the real question is: would I want to upgrade to a faster CPU with more cores in the future or not?

    No - LGA1151.

    Yes - then another question arises: would I want to upgrade the CPU on my existing platform or build a completely new platform?

    Upgrade the CPU - LGA2011-3.
    New platform - LGA1151 seems like the cheapest/best-performing option then.

    Either of the mentioned options would require me to run the game server on a separate PC; the LGA2011-3 platform would let me move it to ESXi with a CPU upgrade.

    Anyway, a lot to consider.



  • I have been doing a lot of transcoding lately, using MediaCoder with the NVIDIA (NVENC) encoder for H.265. The speeds are incredible and much faster than you could achieve on a CPU (450-515 fps, limited only by decode speed). The quality is very good.

    The cheapest card with this ASIC is the GTX 960, which you could pass through to a VM…..
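
    The poster used MediaCoder; for reference, here is a hypothetical equivalent driven from Python via ffmpeg's hevc_nvenc encoder. It assumes an ffmpeg build compiled with NVENC support on the PATH, an NVENC-capable NVIDIA GPU, and placeholder file names.

    ```python
    # Hardware H.265/HEVC encode via NVENC; file names are placeholders.
    import subprocess

    subprocess.run([
        "ffmpeg", "-i", "input.mkv",
        "-c:v", "hevc_nvenc",  # NVIDIA NVENC HEVC encoder
        "-preset", "slow",     # quality-leaning NVENC preset
        "-c:a", "copy",        # pass the audio through untouched
        "output.mkv",
    ], check=True)
    ```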



  • @Keljian:

    I have been doing a lot of transcoding lately, using MediaCoder with the NVIDIA (NVENC) encoder for H.265. The speeds are incredible and much faster than you could achieve on a CPU (450-515 fps, limited only by decode speed). The quality is very good.

    The cheapest card with this ASIC is the GTX 960, which you could pass through to a VM…..

    That's actually pretty sweet; last time I checked NVENC and CUDA encoding, there was a significant quality difference between them and CPU encoding, so it seems they've narrowed the gap.
    I have a GTX 970 in my own PC; at the speed it can encode, I don't really need to have the NAS do it.
    Skylake with an iGPU should even support it via Quick Sync.
    There are still some limitations (8-bit vs 10-bit), but I can live with that.



  • Well, the 970 is very, very fast and will use less power encoding too… :)