OpenVPN bandwidth, CPU utilization, multi-client, .... Do I understand this correctly?

  • Greetings, Netgate community,

    I'm new here! hello hi and hello.

    I'm in the planning stages of maturing the networking/server infrastructure at the small company I work for. I want to deploy solutions to improve data protection at rest and in flight. We're around 70 users and growing fast (we could be 200 in 6-8 years).

    It's not uncommon for more than half of our employees to be on the road at any given time. Time to batten down the hatches: VPN, drive encryption, device management, configuration compliance, vulnerability assessments, security information and event management, endpoint monitoring, full packet capture, and so on. pfSense looks like a great component to put at the core of our network to juggle some of these capabilities and properly isolate various networks, but I'm not clear on whether OpenVPN will scale for this sort of deployment.

    I want the connection experience to be the same whether users are in the office, at home, or on the road, so I intend to transform the entire access layer of the building network into an "untrusted" network. I intend to configure all domain-joined, company-issued computing devices to connect using the same full-tunnel VPN capability for a unified connection experience in and out of the office.

    If I understand correctly, OpenVPN supports a "full tunnel" mode that tunnels ALL TCP/IP traffic on the client device through the VPN. I would like the effective gateway of the domain-joined client devices to always be the gateway on our office secure-side network, so that all network traffic to/from the device can be looped through the secured side of the network and captured/inspected by a SIEM (Security Onion), regardless of where the client computer connects to the domain/internet (either the local untrusted network or on the road). Is my interpretation correct?
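    Just to pin down what I mean by "full tunnel," here's a minimal sketch of the server-side directives I believe accomplish this (the addresses are placeholders, not our real topology):

    ```
    # Server-side sketch (hypothetical addresses): push a default route to
    # clients so ALL their traffic traverses the tunnel.
    push "redirect-gateway def1"        # replace the client's default route with the VPN
    push "dhcp-option DNS 10.10.0.53"   # hand out internal DNS so lookups stay inside too
    ```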

    If I understand correctly, OpenVPN supports a "bridged" (TAP) mode that extends the IP addressing scheme of the "secure" side network out to OpenVPN clients, so all devices share the same broadcast domain with no NAT required. I would like this for the sake of simple file sharing from Windows file servers.
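    Again, just so my terminology is clear, my understanding is that a bridged server would look something like this sketch (subnet and address pool are made-up placeholders):

    ```
    # Bridged-mode sketch (hypothetical subnet). The TAP device is bridged to
    # the secure-side LAN, so clients get addresses from the LAN's own range
    # and share its broadcast domain.
    dev tap0
    server-bridge 10.10.0.4 255.255.255.0 10.10.0.200 10.10.0.250
    #             gateway   netmask       pool-start  pool-end
    ```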

    If I understand correctly, an OpenVPN tunnel runs as a single process on a single thread, and bandwidth per tunnel is limited to roughly 50-300 Mbps depending on the CPU and many other variables (assuming no other constraints on the network connection).

    What I'm not totally clear on is whether the server side automatically spawns a new process for each OpenVPN tunnel/client connection. If I can get 100 Mbps or better on average to most clients I'd be very happy, though 200+ would be preferred. The kicker is: will it automatically scale across multiple CPU cores on the server side as multiple clients connect?

    The OpenVPN hardware guides and pfSense hardware guides imply that the server side does scale to many cores just fine for multiple connected clients (an example scenario with many hundreds of clients and 2.5 Gbps needing approximately 16 fast cores comes to mind). However, there are dozens of threads around the interwebs, including OpenVPN documentation at ClearOS, discussing the single-threaded bandwidth bottleneck in OpenVPN, and it's not obvious in those threads/guides whether they are strictly speaking about a client-side bandwidth concern (a single tunnel from a single client). I would like to build out a load-balancing/failover dual-pfSense-box configuration that can move a few Gbps of aggregate OpenVPN traffic in worst-case scenarios (many simultaneous connections). Is this feasible? Will it require an abnormal configuration to accomplish? (One response I was reading suggested I would need to establish a separate interface for each VPN server instance.)
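    To make sure I'm asking the right question: my assumption (untested on my part) is that each OpenVPN server instance is a separate process, so the usual workaround is to run several instances on different ports and tun devices to spread load across cores, roughly like this sketch (all ports, device names, and subnets hypothetical):

    ```
    # Two separate server instances, each its own single-threaded process:
    #
    # server1.conf                         server2.conf
    #   port 1194                            port 1195
    #   dev tun0                             dev tun1
    #   server 10.8.0.0 255.255.255.0        server 10.8.1.0 255.255.255.0

    # Clients list both instances and pick one at random, giving crude
    # load balancing across the processes (and therefore across cores):
    remote vpn.example.com 1194
    remote vpn.example.com 1195
    remote-random
    ```

    If that's the "separate interface per VPN" approach people are alluding to, I'd like to know whether pfSense handles it gracefully at this scale.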

    I'm looking at single- and dual-socket EPYC and Skylake Scalable solutions in a 2U form factor, likely 24+ cores per box, with two boxes configured for load balancing/failover. I want to make sure we can leverage the high core count to scale out VPN connections to many users; if that isn't the case, I may need to approach this problem with a different solution. To be clear, the cost of building a pair of $10-15K servers to support this is not an issue. If it works, I don't care if we need a ton of CPU power to deploy. There are advantages to OpenVPN that I really want to incorporate here (primarily, the fact that it is likely to work almost anywhere, from behind almost any other network/firewall situation). I don't want to buy $25,000 worth of equipment and then find out I have 95 cores twiddling their thumbs and the entire organization sharing 100 Mbps of bandwidth.

    I appreciate any insight/ideas/corrections on this matter.

    I haven't had time to do more thorough testing, but I will no doubt run a non-production environment, starting with VMs and moving on to real hardware, for many months to learn all the quirks and work out the kinks before moving users/devices into the new domain.


  • @eric-marshall

    I guess that was just way TL;DR. Sorry, guys.