Netgate Discussion Forum

WireGuard in pfSense 2.5 Performance

  • D
    dennis_s
    last edited by Jan 27, 2021, 9:52 PM

    Our new blog post compares the performance of the kernel-resident WireGuard implementation against the "WireGuard Go" userspace port.

    • P
      Pippin
      last edited by Jan 28, 2021, 1:15 PM

      Glad to see Netgate shares these results in an honest way 😉

      I gloomily came to the ironic conclusion that if you take a highly intelligent person and give them the best possible, elite education, then you will most likely wind up with an academic who is completely impervious to reality.
      Halton Arp

      • D
        dirtyfreebooter
        last edited by Jan 28, 2021, 3:59 PM

        The blog post mentions setting the MSS to 1380. Is that a WireGuard setting? A WireGuard interface setting? A global pfSense setting? Thanks!

        • Y
          yon 0 @dirtyfreebooter
          last edited by Feb 1, 2021, 12:53 PM

          @dirtyfreebooter

           I set the MTU to 8920; the MSS can be set to 1380, or 1360 for IPv6.

          • C
            cmcdonald Netgate Developer @dirtyfreebooter
            last edited by cmcdonald Mar 1, 2021, 5:34 PM Mar 1, 2021, 5:32 PM

             @dirtyfreebooter If I understand the GUI correctly, the value entered into the MSS field on the interface settings should really be the MTU value: 40 bytes are subtracted from the value in the MSS field to account for the TCP/IP headers. So if you enter 1420 for both MTU and MSS, an MSS clamp of 1420-40=1380 will be applied. This doesn't appear to happen OOTB, even though it probably should in most cases (especially if your other interfaces are at 1500). Entering 1420 in the MSS field on both ends of my routed WG link fixed all my issues with random TCP connections failing, random TLS failures, etc.

            Need help fast? https://www.netgate.com/support
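             To make the arithmetic described above concrete, here is a minimal sketch, assuming the GUI behaves as described in this post (the exact field semantics may differ between pfSense releases; the values are example values):

                 # Hypothetical example values for a WireGuard interface:
                 #   MTU field = 1420, MSS field = 1420
                 # Assuming pfSense subtracts the 40-byte TCP/IPv4 header overhead
                 # from the MSS field, the effective clamp works out to:
                 MSS_FIELD=1420
                 echo $((MSS_FIELD - 40))    # prints 1380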

            • D
              dem @cmcdonald
              last edited by Mar 1, 2021, 5:49 PM

              @vbman213 said in WireGuard in pfSense 2.5 Performance:

              If I understand the GUI correctly, then the value entered into the MSS field on the interface settings really should be the MTU value...

              It sure sounds that way. What value do you see in /tmp/rules.debug?

              • D
                dem @dem
                last edited by Mar 1, 2021, 6:07 PM

                Never mind, I checked myself and putting 1420 in the MSS field in the GUI results in max-mss 1380 in the rules.
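                 A quick way to confirm this yourself, assuming shell access to the firewall (the exact wording of the generated rule can vary between pfSense versions):

                     # Show the MSS clamp pfSense generated from the GUI setting.
                     # With 1420 in the interface MSS field, the matching line
                     # should contain "max-mss 1380".
                     grep -n "max-mss" /tmp/rules.debug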

                • C
                  cmcdonald Netgate Developer @dem
                  last edited by Mar 1, 2021, 7:26 PM

                   @dem I wonder if this is worth opening a redmine issue for. I can't see a reason why the max-mss shouldn't be set to 1380 by default (1420-40) (...and rewording the GUI might be useful as well).

                  Need help fast? https://www.netgate.com/support

                  • Y
                    yon 0 @cmcdonald
                    last edited by Mar 2, 2021, 1:50 PM

                     According to the report of the problem I found and submitted, WireGuard has bugs in the Linux kernel; I don't know whether FreeBSD/pfSense is affected. It concerns MTU, ICMP, and related issues.

                     [wireguard kernel bug](link url)

                    • C
                      cmcdonald Netgate Developer @yon 0
                      last edited by Mar 2, 2021, 3:09 PM

                       @yon-0 So it does look like issues with path MTU discovery, ICMP, etc. That would make sense. I still think that, at least in the interim, an MSS clamp should be enabled by default in pfSense until there is an upstream fix.

                      Need help fast? https://www.netgate.com/support
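                       One rough way to check whether path MTU discovery is actually broken across the tunnel is to ping through it with the Don't Fragment bit set, at sizes just inside and just outside the tunnel MTU. A sketch from a FreeBSD/pfSense shell, assuming a 1420-byte WireGuard MTU and a hypothetical peer address of 10.6.0.2:

                           # 1392-byte payload + 8-byte ICMP header + 20-byte IP header = 1420,
                           # which should just fit a 1420-byte tunnel MTU.
                           ping -D -s 1392 -c 3 10.6.0.2
                           # One byte larger should fail (or elicit a "frag needed" ICMP error)
                           # if PMTU discovery is working end to end.
                           ping -D -s 1393 -c 3 10.6.0.2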

                      • C
                        cmcdonald Netgate Developer @cmcdonald
                        last edited by cmcdonald Mar 2, 2021, 3:14 PM Mar 2, 2021, 3:13 PM

                        https://redmine.pfsense.org/issues/11600

                        Need help fast? https://www.netgate.com/support

                        • Y
                          yon 0 @cmcdonald
                          last edited by Mar 6, 2021, 1:01 PM

                          @rcmcdonald91

                           They are still working on a fix...

                          commit ee576c47db60432c37e54b1e2b43a8ca6d3a8dca upstream.
                          
                          The icmp{,v6}_send functions make all sorts of use of skb->cb, casting
                          it with IPCB or IP6CB, assuming the skb to have come directly from the
                          inet layer. But when the packet comes from the ndo layer, especially
                          when forwarded, there's no telling what might be in skb->cb at that
                          point. As a result, the icmp sending code risks reading bogus memory
                          contents, which can result in nasty stack overflows such as this one
                          reported by a user:
                          
                              panic+0x108/0x2ea
                              __stack_chk_fail+0x14/0x20
                              __icmp_send+0x5bd/0x5c0
                              icmp_ndo_send+0x148/0x160
                          
                          In icmp_send, skb->cb is cast with IPCB and an ip_options struct is read
                          from it. The optlen parameter there is of particular note, as it can
                          induce writes beyond bounds. There are quite a few ways that can happen
                          in __ip_options_echo. For example:
                          
                              // sptr/skb are attacker-controlled skb bytes
                              sptr = skb_network_header(skb);
                              // dptr/dopt points to stack memory allocated by __icmp_send
                              dptr = dopt->__data;
                              // sopt is the corrupt skb->cb in question
                              if (sopt->rr) {
                                  optlen  = sptr[sopt->rr+1]; // corrupt skb->cb + skb->data
                                  soffset = sptr[sopt->rr+2]; // corrupt skb->cb + skb->data
                                  // this now writes potentially attacker-controlled data, over
                                  // flowing the stack:
                                  memcpy(dptr, sptr+sopt->rr, optlen);
                              }
                          
                          In the icmpv6_send case, the story is similar, but not as dire, as only
                          IP6CB(skb)->iif and IP6CB(skb)->dsthao are used. The dsthao case is
                          worse than the iif case, but it is passed to ipv6_find_tlv, which does
                          a bit of bounds checking on the value.
                          
                          This is easy to simulate by doing a `memset(skb->cb, 0x41,
                          sizeof(skb->cb));` before calling icmp{,v6}_ndo_send, and it's only by
                          good fortune and the rarity of icmp sending from that context that we've
                          avoided reports like this until now. For example, in KASAN:
                          
                              BUG: KASAN: stack-out-of-bounds in __ip_options_echo+0xa0e/0x12b0
                              Write of size 38 at addr ffff888006f1f80e by task ping/89
                              CPU: 2 PID: 89 Comm: ping Not tainted 5.10.0-rc7-debug+ #5
                              Call Trace:
                               dump_stack+0x9a/0xcc
                               print_address_description.constprop.0+0x1a/0x160
                               __kasan_report.cold+0x20/0x38
                               kasan_report+0x32/0x40
                               check_memory_region+0x145/0x1a0
                               memcpy+0x39/0x60
                               __ip_options_echo+0xa0e/0x12b0
                               __icmp_send+0x744/0x1700
                          
                          Actually, out of the 4 drivers that do this, only gtp zeroed the cb for
                          the v4 case, while the rest did not. So this commit actually removes the
                          gtp-specific zeroing, while putting the code where it belongs in the
                          shared infrastructure of icmp{,v6}_ndo_send.
                          
                          This commit fixes the issue by passing an empty IPCB or IP6CB along to
                          the functions that actually do the work. For the icmp_send, this was
                          already trivial, thanks to __icmp_send providing the plumbing function.
                          For icmpv6_send, this required a tiny bit of refactoring to make it
                          behave like the v4 case, after which it was straight forward.
                          
                          • B
                            brians
                            last edited by Mar 8, 2021, 4:12 AM

                             Here is real-world performance using a custom pfSense 2.5 box at home... it is an older HP EliteDesk 800 G1, quad-core i5-4570, 12 GB RAM, 40 GB SSD. I added a second Intel NIC for WAN.

                             My pfSense at home is on a Telus gigabit PureFibre connection, 1 Gbps up/down. The remote site with WireGuard is an SG-5100 on 21.02, on Telus managed business fibre, symmetrical 1 Gbps up/down.

                             Here is a screenshot taken during a 70 GB file transfer over SMB from a local Windows Server 2016 machine to an OMV NAS at the remote end, which took about 13 minutes.

                             (screenshot attachment)

                            • X
                              xparanoik @brians
                              last edited by Mar 8, 2021, 5:11 PM

                               @brians Thanks for sharing! Would you mind running iperf3 tests and sharing those as well? That would rule out any bottlenecks from the SMB protocol or your NAS disks. You seem to have a very good setup since both locations share the same ISP, so I am curious to see iperf3 results. Thanks!

                              • B
                                brians @xparanoik
                                last edited by brians Mar 9, 2021, 1:48 AM Mar 9, 2021, 1:33 AM

                                @xparanoik
                                 I waited until after work to do this.
                                 (iperf3 screenshot attachment)
                                 This is from a Windows 10 PC (192.168.10.140) at home connected to the pfSense at work (192.168.21.1).

                                 In past testing I sometimes get a somewhat higher send rate from my house, in the 900s, but not today.
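                                 For anyone who wants to reproduce this kind of test, a minimal sketch, assuming iperf3 is installed on a host at each end of the tunnel (the addresses below are hypothetical):

                                     # On a host at the remote site:
                                     iperf3 -s

                                     # On a host at the local site, pointed at the remote host
                                     # across the WireGuard tunnel:
                                     iperf3 -c 192.168.21.10 -t 30 -P 4       # send direction, 4 parallel streams
                                     iperf3 -c 192.168.21.10 -t 30 -P 4 -R    # reverse direction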

                                • X
                                  xparanoik @brians
                                  last edited by Mar 9, 2021, 12:52 PM

                                  @brians Nice, thanks for sharing

                                  • P
                                    perlenbacher @xparanoik
                                    last edited by perlenbacher Mar 15, 2021, 7:54 PM Mar 15, 2021, 7:53 PM

                                    WireGuard performance should soon be much improved:

                                    https://www.phoronix.com/scan.php?page=news_item&px=FreeBSD-New-WireGuard


                                    • KOMK
                                      KOM @perlenbacher
                                      last edited by Mar 15, 2021, 8:02 PM

                                       Oof. Not exactly a shining endorsement. I feel bad for Netgate here. They paid for WireGuard in FreeBSD because nobody else gave a damn, and then a month after release the protocol's creator shows up and redoes it all for free.

                                      • C
                                        cmcdonald Netgate Developer @KOM
                                        last edited by Mar 16, 2021, 2:59 AM

                                        @kom ugh... I’ll be anxiously biting my nails. The next 24-48 hrs are delicate for everyone involved.

                                        Need help fast? https://www.netgate.com/support

                                        • D
                                          dirtyfreebooter
                                          last edited by Mar 16, 2021, 5:04 AM

                                          https://lists.zx2c4.com/pipermail/wireguard/2021-March/006499.html

                                           JFC, this is not shaping up to be a professional conversation and collaboration. Netgate/pfSense, I am so disappointed... Argh...
