Netgate Discussion Forum

    How can I get OpenVPN to use QAT acceleration offload?

    OpenVPN
    openvpn quickassist wireguard
    • ensnare
      last edited by

      I'm running pfSense Plus 21.02.2-RELEASE and just uninstalled WireGuard (sad face), which I was previously using to connect to Private Internet Access. I've now reverted to OpenVPN, where single-threaded CPU utilization has jumped significantly and throughput has tanked.

      I have QAT offload enabled, which works great for IPsec (AES-128-CBC and GCM). But when I set AES-128-CBC or GCM in OpenVPN, there appears to be no QAT acceleration. I can verify that by running "vmstat -i" from a console: the counters do not increase.
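
      For reference, a rough sketch of that kind of console check (the "qat" interrupt name is an assumption and depends on which driver is attached):

          # Interrupt counters for the QAT device(s); these should climb while crypto is being offloaded
          vmstat -i | grep -i qat

          # List the engines OpenSSL can actually load; OpenVPN uses OpenSSL, so if no QAT
          # engine shows up here there is nothing for OpenVPN to offload to
          openssl engine -t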

      Am I doing something wrong? Or is there any other way to get OpenVPN to take advantage of QAT acceleration? Thanks

      • johnnyfive @ensnare
        last edited by johnnyfive

        @ensnare My guess is it's related to this issue. There's no actual QAT acceleration of OpenSSL. I tried compiling it myself, but I can't manage to get a FreeBSD build environment up to Intel's standards. All pfSense needs to do is ship the built binary in /usr/lib/engines/ and it will work.

        related thread on OpenSSL & QAT
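
        If that engine binary were shipped, wiring it up would in principle just be OpenSSL configuration. A minimal sketch, assuming the engine is built as qatengine.so and installed in the directory above (the file name, engine_id, and config path are assumptions):

            # /etc/ssl/openssl.cnf (sketch)
            openssl_conf = openssl_init

            [openssl_init]
            engines = engine_section

            [engine_section]
            qat = qat_section

            [qat_section]
            engine_id = qatengine
            dynamic_path = /usr/lib/engines/qatengine.so
            default_algorithms = ALL
            init = 1

        OpenVPN could then be pointed at an OpenSSL engine via its "engine" directive (e.g. "engine qatengine" in the client config), but none of that helps until the engine library actually exists on the box.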

        • ensnare @johnnyfive
          last edited by

          @johnnyfive Yeah this is the problem - what a shame. It would be really great to have full acceleration using QuickAssist!
