• Hello,

    I was thinking of using a small portion of my SSD to store some ~4 GB ISO files and accessing them via SFTP on my pfsense.
    Access will be local only, with nothing exposed to the outside.

    All seems well, but I have a small problem with the throughput.
    Connected locally to the pfsense SFTP server, the download speed stabilizes at ~75 MBps, and upload at ~86 MBps.
    CPU usage does not go over 20% during SFTP downloads/uploads.
    Downloading from an internet-based SFTP server through the same pfsense, both download and upload run at ~90 MBps, so the pfsense NICs and the switch can clearly deliver the speed. Somehow, though, I cannot get close to gigabit speeds from the pfsense itself.

    diskinfo shows good speeds

    [2.3-BETA][root@pfSense.local]/dev: diskinfo -t /dev/nvme0ns1
    /dev/nvme0ns1
            512             # sectorsize
            128035676160    # mediasize in bytes (119G)
            250069680       # mediasize in sectors
            0               # stripesize
            0               # stripeoffset

    Seek times:
            Full stroke:      250 iter in  0.009355 sec =    0.037 msec
            Half stroke:      250 iter in  0.007768 sec =    0.031 msec
            Quarter stroke:   500 iter in  0.014024 sec =    0.028 msec
            Short forward:    400 iter in  0.008610 sec =    0.022 msec
            Short backward:   400 iter in  0.012877 sec =    0.032 msec
            Seq outer:       2048 iter in  0.026922 sec =    0.013 msec
            Seq inner:       2048 iter in  0.025721 sec =    0.013 msec
    Transfer rates:
            outside:       102400 kbytes in  0.077488 sec =  1321495 kbytes/sec
            middle:        102400 kbytes in  0.066514 sec =  1539526 kbytes/sec
            inside:        102400 kbytes in  0.066588 sec =  1537815 kbytes/sec

    Any clue how I can squeeze a bit more speed out of SFTP on pfsense?
    One idea that comes to mind: on the external SFTP server I run the "optimized" HPN version of OpenSSH. Might that be the only difference needed to reach the gigabit target?

    Thank you.

  • Rebel Alliance Developer Netgate

    Being an endpoint for encrypted traffic is much, much harder than just passing packets!

    pfSense is optimized for passing packets through (the firewall/router use case), so that behavior doesn't surprise me too much. Also, your choice of SFTP encryption cipher (and core count!) can affect your throughput. You say the CPU never went above 20%, but if you have 4+ cores, that could mean one core was at or near its peak usage.

    Also, mind your B's and b's when discussing throughput. 75-86 MBytes/s is still good throughput for encrypted traffic (~600-700 Mbit/s), especially for the types of systems that typically run pfSense.
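
    One quick way to check whether a single core's cipher throughput is the ceiling is to push zeros through a cipher locally, which pins one core much like an sshd worker does during a transfer. A rough sketch (the cipher name is just an example, and openssl's raw throughput only approximates what sshd achieves):

    ```shell
    # Push 256 MB of zeros through one cipher on a single core; the MB/s
    # figure the final dd reports approximates the per-core ceiling an
    # sshd worker hits with that cipher (real SFTP adds MAC and framing
    # overhead on top of this).
    dd if=/dev/zero bs=1048576 count=256 2>/dev/null \
      | openssl enc -aes-128-ctr -pass pass:benchmark -nosalt 2>/dev/null \
      | dd of=/dev/null bs=1048576
    ```

    Comparing a couple of ciphers this way (say, aes-128-ctr vs aes-256-cbc) shows how much the negotiated SSH cipher matters on a given CPU.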


  • Thank you for taking the time to reply.

    Regarding the B's and b's :)
    I am used to: MBps for Megabytes/second and Mbps for Megabits/second. As in 1 MBps is 8 Mbps.
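
    Spelled out with the numbers from this thread (simple shell arithmetic, taking gigabit as 1000 Mbps):

    ```shell
    # Convert the observed SFTP rates from megabytes/s to megabits/s.
    down_MBps=75
    up_MBps=86
    echo "download: $((down_MBps * 8)) Mbps"   # prints 600 Mbps
    echo "upload:   $((up_MBps * 8)) Mbps"     # prints 688 Mbps
    # A gigabit link carries at most 1000 Mbps = 125 MBps raw; after
    # TCP/IP and SSH framing the practical ceiling is roughly 110-118 MBps.
    echo "line rate: $((1000 / 8)) MBps"       # prints 125 MBps
    ```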

    I guess until I find a way to put HPN SSH on my pfsense, I should be happy with my current performance, even though I still wonder why the same system can sustain a higher upload speed than download (around 20 MBps higher). Maybe support for Skylake and my Intel i219-V is not yet mature enough; I really have no clue.

  • Rebel Alliance Developer Netgate

    Encryption vs Decryption, perhaps.

    Either way, I would not use the firewall as file storage, but that's me. :-)


  • It seems there is something wrong in pfsense regarding SFTP speed, at least on my platform.
    I just tried the same test under IPFire and, surprise: 100 MBps (megabytes, to be clear) both up and down.


  • ipfire = linux
    pfSense = freebsd

    cats & dogs are not of the same species, they behave differently.
    they do have similarities: 4 legs, 2 eyes, ….

    getting closer to your problem: you'd need to run stock freebsd 10.3 on that system and try again.
    if default freebsd performs differently from pfSense, then you can open a ticket at https://redmine.pfsense.org


  • I truly fail to see the point of cats and dogs being thrown into this discussion.
    Both Linux and BSD can run an OpenSSH daemon, so at least they share a common dog (or cat) ancestor.
    I had some spare time today and tested FreeBSD 10.3. It works as expected: 100 MBps up and down.


  • Stock FreeBSD also isn't pushing the traffic through pf, scrub, etc. Disable the packet filter under System>Advanced, Firewall/NAT and you'll have an equivalent test from that perspective. Try that and see if that's the difference.
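
    For console users, the same toggle can be done from a shell on the firewall (a sketch; `pfctl` is pf's standard control utility, and disabling pf stops all filtering, so re-enable it as soon as the test is done):

    ```shell
    pfctl -d   # disable the packet filter (all filtering/NAT rules stop!)
    # ...re-run the local SFTP transfer test here...
    pfctl -e   # re-enable the packet filter immediately afterwards
    ```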


  • @cmb:

    Stock FreeBSD also isn't pushing the traffic through pf, scrub, etc. Disable the packet filter under System>Advanced, Firewall/NAT and you'll have an equivalent test from that perspective. Try that and see if that's the difference.

    Cats, dogs (love them both) or throughput related.  This is not a 2.3 snag.

    Please punt