Netgate Discussion Forum

    Weird encrypted traffic (HTTPS) issue over IPSec

    • silviub

      Hello,

      The setup's quite simple: pfSense 2.7.0 on one side, pfSense 2.6.0 on the other, connected via IPsec so that traffic can flow freely between the two routers.
      Now, the issue is that the traffic does flow, on any port BUT 443. Actually, it seems it's not the port itself but the TLS encryption. On one side I've set up an Nginx server with a self-signed cert (just for testing), and while traffic on port 80 works, traffic on port 443 doesn't. To rule out the port (maybe I'm missing some config specific to 443), I swapped them: encrypted traffic on 8443 and unencrypted traffic on 443. Unencrypted works (port 443) but encrypted doesn't (port 8443), so I'm thinking it has something to do with the traffic being encrypted.
      Still, this setup worked until last night, when... well, nothing. I didn't change anything in the configs when this started; I wasn't even online at the time. It just seemed to start dropping traffic.

      I ran tcpdump on the enc0 interface on both sides. What I'm seeing is traffic leaving one enc0 interface but never reaching the other. Weirdly enough, I see this on both sides. What the hell might be the issue?
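In command form, the capture on each firewall looks something like this (a sketch; enc0 is pfSense's IPsec virtual interface, and the size filter reflects what turned out to be the failing range):

```shell
# Run on both firewalls' shells. -n skips DNS lookups, -i enc0 selects the
# IPsec virtual interface. 'greater 1400' keeps only packets of 1400 bytes
# or more, so you can watch whether large packets leave one side and
# arrive on the other.
tcpdump -ni enc0 'greater 1400'
```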

      Thank you.

      • silviub @silviub

        I'll reply to my own question, as I've more or less fixed it.
        After inspecting the traffic via tcpdump, I saw that packets 1448 bytes and over were not being passed through the tunnel. On both pfSense boxes I went to System -> Advanced -> Firewall & NAT, checked Enable Maximum MSS, and set it to 1400, even though 1400 appears to be the default.

        After doing this on both sides (later tests revealed that it's enough to do it on one side), HTTPS connections started to work properly.
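The numbers are consistent with ESP encapsulation overhead. A rough back-of-the-envelope sketch, assuming a 1500-byte WAN MTU and an example ~57-byte ESP tunnel-mode overhead (the real figure varies with the negotiated cipher and mode):

```shell
# Assumed example numbers: 1500-byte WAN MTU, IPv4, no TCP options,
# ~57 bytes of ESP tunnel-mode overhead (new IP header + SPI/seq + IV +
# padding + ICV; depends on the proposal, e.g. AES-GCM vs AES-CBC+SHA).
WAN_MTU=1500
IP_HDR=20
TCP_HDR=20
ESP_OVERHEAD=57

# Without clamping, hosts derive the MSS from their own 1500-byte MTU:
DEFAULT_MSS=$((WAN_MTU - IP_HDR - TCP_HDR))
echo "negotiated MSS without clamping: $DEFAULT_MSS"

# A full-size segment, once wrapped in ESP, no longer fits the WAN MTU:
FULL=$((DEFAULT_MSS + TCP_HDR + IP_HDR + ESP_OVERHEAD))
echo "encapsulated size without clamping: $FULL"

# Clamped to 1400, the encapsulated packet stays within 1500 bytes:
CLAMPED=$((1400 + TCP_HDR + IP_HDR + ESP_OVERHEAD))
echo "encapsulated size with clamping: $CLAMPED"
```

Small TCP segments (plain HTTP test pages) fit either way; only full-size segments, like a TLS handshake carrying a certificate, cross the threshold, which matches the observed port-80-works/port-443-fails symptom.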

        Anyone got any clue what might have happened though?!

        • keyser @silviub

          @silviub While it may look like 1400 is the default, it's actually only the suggested value. MSS clamping is not enabled by default, and if you enable it but don't enter a size, 1400 is used.

          Love the no fuss of using the official appliances :-)

          • silviub @keyser

            @keyser Ok, that makes sense. Still, what doesn't make sense: why did it work until last evening without enabling the MSS clamping?

            • keyser @silviub

              @silviub Agreed, that is a bit weird. Could it be a certificate renewal? Otherwise it might be some change in MTU on the Internet route between the sites.


              • silviub @keyser

                @keyser It can't be the cert: today I set up a dummy machine, installed Nginx on it with a self-signed cert, and had no luck with that either.
                As for the MTU change, wouldn't it also affect traffic via the public IPs? I mean, if the MTU has changed on the route from site A to site B, HTTPS requests from site A to site B over the public IPs should also have a problem, right? I'm asking whether testing this would make sense, as I can put a public IP on a test machine and run a request against it.
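One way to probe the MTU theory directly is a DF-bit ping from one site toward the other. A sketch, assuming a nominal 1500-byte MTU (the target IP is whatever host you test against):

```shell
# ICMP payload for a full-size probe = MTU - 20 (IP header) - 8 (ICMP header).
MTU=1500
PAYLOAD=$((MTU - 20 - 8))
echo "payload for a $MTU-byte probe: $PAYLOAD"

# Then, from a test host (DF set, so routers must drop rather than fragment):
#   Linux:            ping -M do -s $PAYLOAD <peer-ip>
#   FreeBSD/pfSense:  ping -D -s $PAYLOAD <peer-ip>
# Walk -s downward until replies come back; the largest size that works
# reveals the actual path MTU on that route.
```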

                Let me make this even weirder: I disabled MSS clamping again and, sure enough, the issue came back. The even weirder part: about one in 10-15 requests actually works. The request is 100% identical, I'm just running curl https://10.41.1.203/, which fails about 95% of the time, but 5% of the time it returns the expected response: HTTP 301, Location: https://10.41.1.203/ui/

                I don't understand what is happening. The issue is fixed by enabling the MSS clamping, but I'm trying to understand what the hell's happening.

                • keyser @silviub

                  @silviub I’ll admit that is strange. The public-IP test, however, will likely not expose anything, because that route very likely has “ICMP packet needs fragmentation” working (your session will adjust itself to the actual MTU size).

                  That ICMP mechanism (Path MTU Discovery) does NOT work across pfSense IPsec tunnels, which is why you can encounter packets that are never tunneled (like your TLS session setup).

                  But pfSense IPsec normally fragments all packets (including ones with the DF bit set) and reassembles them on the other side, completely transparently, as long as the packet size does not exceed the link MTU on either side of the tunnel. So it is equally strange that you see some packets not being tunneled at all. I have no firm idea, but my guess is that the TLS handshake packets are larger than the LAN interface MTU (that should not happen, yet I have seen it happen multiple times, from web servers and RADIUS servers in particular). Perhaps do a packet capture on the web server's LAN side of pfSense and check the sizes of packets when it succeeds and when it fails?
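That capture could look something like this (a sketch; the interface name is an assumed example, and you would add a host filter for your test server):

```shell
# On the pfSense closest to the web server, capture its LAN-facing
# interface (igb1 here is an assumed example name) and compare the packet
# lengths tcpdump prints for a successful vs. a failing TLS handshake.
tcpdump -ni igb1 'tcp port 443'
```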


                  • silviub @keyser

                    @keyser Let me start by saying that I appreciate the time you're putting into this. At this point it's just a matter of curiosity, but it bugs me to the core that I don't know why it suddenly needs MSS clamping.
                    Now, I've run the test and captured the packets (see the attached pcap files), but I am unable to determine why it works or doesn't. I've included two pcaps: one with the non-working HTTPS connection, one with the working one. From my point of view (I'm sure I'm missing something), it doesn't look like an issue. I don't know why there are all those retransmissions in the non-working capture, but as I said, I'm probably missing something.

                    P.S. 10.41.199.205 is the HTTPS server, 172.31.254.251 is the client.

                    Thank you.
                    working.pcap non-working.pcap

                    Copyright 2025 Rubicon Communications LLC (Netgate). All rights reserved.