HE IPv6 Tunnel terrible bandwidth



  • Hello

    I set up an HE IPv6 tunnel which is "working" so far. I can ping IPv6 hosts on my local network and on the internet. The tunnel has very low latency (< 25 ms) and pings work reliably.
    But I'm not able to get more than 1 Kb/s through the tunnel. The bandwidth of the tunnel is just terrible. I tried many different MTUs from 1452 down to 1280 and none of them seemed to make any difference.
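    For reference, the MTU range being tried lines up with the 6in4 encapsulation overhead. A quick sanity check of the arithmetic (assuming the usual 1500-byte WAN MTU; HE's default tunnel MTU is 1480):

    ```shell
    # 6in4 wraps each IPv6 packet in a 20-byte IPv4 header, so the tunnel MTU
    # is the WAN MTU minus 20.
    WAN_MTU=1500
    TUNNEL_MTU=$((WAN_MTU - 20))
    # A TCP segment over IPv6 additionally carries a 40-byte IPv6 header and a
    # 20-byte TCP header, so the matching MSS clamp is the tunnel MTU minus 60.
    TCP_MSS=$((TUNNEL_MTU - 60))
    echo "tunnel MTU: $TUNNEL_MTU  IPv6 TCP MSS: $TCP_MSS"
    ```

    Since even 1280 (the IPv6 minimum MTU) made no difference, an MTU mismatch alone probably does not explain the throughput collapse.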

    I made an HTTP request via curl on a Linux box: https://youtu.be/0-BJNQR8UT8
    The same happens if I run curl directly from pfSense.

    I attached a screenshot from Wireshark with a few captured lines while I tried to download a package.

    I'm running pfSense 2.3.2-RELEASE-p1 (amd64) and have an IPv4 bandwidth of 50 Mbit/s.

    Do you have any idea what I'm doing wrong?

    Thank you for your help.





  • Nobody?


  • Banned

    WFM. Perhaps try https://forums.he.net/ instead?


  • Rebel Alliance Global Moderator

    Why is your box sending a window size update of 16777216? That is a freaking HUGE window size... not going to help, for sure..
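    For context, the receive window a flow actually needs is bounded by the bandwidth-delay product. Using the ~50 Mbit/s line and <25 ms RTT mentioned earlier in the thread (assumed figures, not measured on the tunnel itself):

    ```shell
    # Bandwidth-delay product: the bytes in flight needed to keep the pipe full.
    BANDWIDTH_BPS=50000000   # 50 Mbit/s
    RTT_MS=25
    BDP_BYTES=$(( BANDWIDTH_BPS / 8 * RTT_MS / 1000 ))
    echo "BDP: $BDP_BYTES bytes"
    ```

    That works out to roughly 156 KB, so a 16 MB advertised window is about 100x more than this link could ever keep in flight.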

    I can tell you I am using HE, and I am not seeing any sort of issue.  Which tunnel server are you going through?  I use Chicago, for example - check the HE site for any issues they might have on the one you're using.

    Also, you'd probably get better support on their forums for an issue like this.. But I can tell you I have not had to tweak or mess with any MSS or MTU settings and it looks fine to me.. And that test server is in freaking Alaska.. I'm in Chicago ;)




  • Hi

    I'm using the server in Zürich (Switzerland), which is the nearest one to me. I could try another one.
    I asked on their forums (about a week ago) but nobody could help me. (https://forums.he.net/index.php?topic=3670.0)



    I tested it with the server in Frankfurt, Germany (216.66.80.30) and it shows exactly the same behavior.
    ping6 works, and curl over IPv6 is very slow, just like in the video I linked in the first post.


  • Rebel Alliance Global Moderator

    So maybe the site you're grabbing the files from has problems.. Do the IPv6 speed test I did.. http://ipv6-test.com/



    I did the curl with google.com.
    With IPv4, google.com loads very fast and without any issues.

    I tried to load the website http://ipv6-test.com/speedtest/ but after 1 min it had barely loaded the "ipv6 test" logo in the top left corner.

    If I disable IPv6 I can run the speed test, which shows about 45 Mbit/s.


  • Rebel Alliance Global Moderator

    Well, if it were HE, I would think their status page and forums would reflect there being issues.  Their status page for the tunnel endpoints has in the past shown when there were problems.

    My guess would be it's something on your end.. If you're trying to use a window size of 16 million, that could be an issue.. There is a very thorough test site here http://netalyzr.icsi.berkeley.edu/ but it uses Java.. When I get a chance, either when I get home or when I VPN to my home network, I'll run a test; it does do IPv6 testing..



    Xfinity speed tests do IPv6 as well. I also see peak-time problems and have not identified where yet; most likely the Windstream network.



  • @johnpoz:

    well, if it were HE, I would think their status page and forums would reflect there being issues.  Their status page for the tunnel endpoints has in the past shown when there were problems.

    My guess would be it's something on your end.. If you're trying to use a window size of 16 million, that could be an issue.. There is a very thorough test site here http://netalyzr.icsi.berkeley.edu/ but it uses Java.. When I get a chance, either when I get home or when I VPN to my home network, I'll run a test; it does do IPv6 testing..

    Where am I supposed to change the TCP window size?

    Is there maybe a UDP speed test from the CLI or something like that?


  • Rebel Alliance Global Moderator

    Are you on Windows or Linux, i.e. CentOS?

    If on Linux, clearly you messed with:

    net.core.wmem_max
    net.core.rmem_max
    net.ipv4.tcp_rmem
    net.ipv4.tcp_wmem

    16 million is sure as hell not the default..



    These are the values on my test system (CentOS 7):

    [root@test01 ~]# sysctl -n net.core.wmem_max
    212992
    [root@test01 ~]# sysctl -n net.core.rmem_max
    212992
    [root@test01 ~]# sysctl -n net.ipv4.tcp_rmem
    4096    87380  6291456
    [root@test01 ~]# sysctl -n net.ipv4.tcp_wmem
    4096    16384  4194304

    I checked them on a few of my other CentOS systems and they all return the same values. I did not make any kernel modifications via sysctl.
    The bandwidth issue is also visible when I do a curl -vv http://www.google.com from the pfSense box.
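    One way to quantify the slowdown instead of eyeballing curl -vv output is curl's --write-out timers, which separate connection setup from sustained transfer speed. A sketch (the file:// URL is only a placeholder so the example runs offline; substitute the real target and compare runs with -4 and -6 to isolate the tunnel path from the server):

    ```shell
    # curl --write-out variables: time_connect = connection setup,
    # time_total = whole transfer, speed_download = average bytes/second.
    url="file:///etc/hosts"   # placeholder; use e.g. http://www.google.com with -6
    curl -s -o /dev/null \
         -w 'connect=%{time_connect}s total=%{time_total}s speed=%{speed_download}B/s\n' \
         "$url"
    ```

    If the IPv4 run shows the full 50 Mbit/s and the IPv6 run crawls at the same endpoint, that points at the tunnel path rather than the remote server.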