High memory usage on Hyper-V compared to vSphere



  • Hi,
    I transferred part of our infrastructure from vSphere to Hyper-V on Windows Server 2019.
    The same VM (pfSense 2.4.4-p3, 512 MB RAM) shows about 30% memory usage on vSphere and about 85% on Hyper-V (from the dashboard).
    I thought it was a problem with the configuration import, so I tried a fresh install, but even on a fresh install (without any packages or configuration) the memory on Hyper-V sits around 55%, and it increases to 85% with radius and ssltunnel.
    pfsense memory.JPG

    Vsphere VM top:
    54 processes: 1 running, 53 sleeping
    CPU: 0.7% user, 0.0% nice, 2.4% system, 0.8% interrupt, 96.1% idle
    Mem: 18M Active, 192M Inact, 126M Wired, 39M Buf, 114M Free
    Swap: 512M Total, 512M Free

    Hyperv VM top:
    54 processes: 1 running, 53 sleeping
    CPU: 0.0% user, 0.0% nice, 0.0% system, 0.0% interrupt, 100% idle
    Mem: 49M Active, 4224K Inact, 48M Laundry, 296M Wired, 25M Buf, 53M Free
    Swap: 824M Total, 7540K Used, 816M Free
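
    Comparing the two top snapshots, the gap looks like it is almost entirely wired (kernel) memory; a quick sum in MB (rounding 4224K to 4M):

    ```shell
    # Non-free memory per the two `top` outputs above (MB)
    echo "vSphere (Active+Inact+Wired):         $(( 18 + 192 + 126 )) MB"    # 336 MB
    echo "Hyper-V (Active+Inact+Laundry+Wired): $(( 49 + 4 + 48 + 296 )) MB" # 397 MB
    echo "Wired difference alone:               $(( 296 - 126 )) MB"         # 170 MB
    ```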

    Vsphere VM State table size 4% (2045/46000)
    Vsphere VM MBUF Usage 17% (4560/26584)

    Hyperv VM State table size 0% (25/46000)
    Hyperv VM MBUF Usage 0% (508/1000000)

    The vSphere VM was upgraded from 2.x, while the Hyper-V VM is a fresh 2.4.4-p3 install.
    It seems the default mbuf cluster limit is no longer 26584 but 1000000.

    I thought it was the mbuf limit, so I tried to set it back to 26584. I set kern.ipc.nmbclusters in System Tunables and rebooted, but the mbuf limit stayed the same, so I had to set it from /boot/loader.conf.local for it to take effect.
    Unfortunately, this changed nothing: the amount of memory used is exactly the same.
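
    For reference, this is roughly what finally took effect (a sketch of /boot/loader.conf.local; loader tunables are read at boot, so a reboot is required):

    ```shell
    # /boot/loader.conf.local
    # Restore the old mbuf cluster cap in place of the new 1000000 default
    kern.ipc.nmbclusters="26584"
    ```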

    The main point is that on vSphere the memory was fine, stable around 30% with packages and connections, not much wired memory, and no swap at all; now it is at 85% RAM with swap in use.

    It also seems the jumbo cluster limits increased to much higher values:

    [2.4.4-RELEASE][root@HYPERV]/root: netstat -m
    2/28/30 mbufs in use (current/cache/total)
    0/508/508/26584 mbuf clusters in use (current/cache/total/max)
    0/0 mbuf+clusters out of packet secondary zone in use (current/cache)
    0/0/0/524288 4k (page size) jumbo clusters in use (current/cache/total/max)
    0/0/0/524288 9k jumbo clusters in use (current/cache/total/max)
    0/0/0/2397 16k jumbo clusters in use (current/cache/total/max)

    0K/1023K/1023K bytes allocated to network (current/cache/total)
    0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
    0/0/0 requests for mbufs delayed (mbufs/clusters/mbuf+clusters)
    0/0/0 requests for jumbo clusters delayed (4k/9k/16k)
    0/0/0 requests for jumbo clusters denied (4k/9k/16k)
    0 sendfile syscalls
    0 sendfile syscalls completed without I/O request
    0 requests for I/O initiated by sendfile
    0 pages read by sendfile as part of a request
    0 pages were valid at time of a sendfile request
    0 pages were requested for read ahead by applications
    0 pages were read ahead by sendfile
    0 times sendfile encountered an already busy page
    0 requests for sfbufs denied
    0 requests for sfbufs delayed

    [2.4.4-RELEASE][root@VSPHERE]/root: netstat -m
    3126/3459/6585 mbufs in use (current/cache/total)
    2714/1846/4560/26584 mbuf clusters in use (current/cache/total/max)
    2714/1840 mbuf+clusters out of packet secondary zone in use (current/cache)
    0/223/223/13291 4k (page size) jumbo clusters in use (current/cache/total/max)
    0/0/0/3938 9k jumbo clusters in use (current/cache/total/max)
    0/0/0/2215 16k jumbo clusters in use (current/cache/total/max)

    6209K/5448K/11658K bytes allocated to network (current/cache/total)
    0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
    0/0/0 requests for mbufs delayed (mbufs/clusters/mbuf+clusters)
    0/0/0 requests for jumbo clusters delayed (4k/9k/16k)
    0/0/0 requests for jumbo clusters denied (4k/9k/16k)
    0 sendfile syscalls
    0 sendfile syscalls completed without I/O request
    0 requests for I/O initiated by sendfile
    0 pages read by sendfile as part of a request
    0 pages were valid at time of a sendfile request
    0 pages were requested for read ahead by applications
    0 pages were read ahead by sendfile
    0 times sendfile encountered an already busy page
    0 requests for sfbufs denied
    0 requests for sfbufs delayed

    I tuned them to match the vSphere VM:
    kern.ipc.nmbclusters="26584"
    kern.ipc.nmbjumbop="13291"
    kern.ipc.nmbjumbo9="3938"
    kern.ipc.nmbjumbo16="2215"
    but nothing changed with memory.

    This is very strange, because if nmbclusters actually allocated memory, the amount of memory occupied would have to move when I change its value, not stay the same.
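
    As a back-of-envelope check (assuming the usual sizes of 2 KB per mbuf cluster and 4 KB per page-size jumbo cluster), the limits only describe what could be allocated at most:

    ```shell
    # Upper bounds implied by the caps (not actual usage); 1 MB = 1048576 bytes
    echo "mbuf clusters, old cap:     $(( 26584   * 2048 / 1048576 )) MB"  # 51 MB
    echo "mbuf clusters, new cap:     $(( 1000000 * 2048 / 1048576 )) MB"  # 1953 MB
    echo "4k jumbo clusters, new cap: $(( 524288  * 4096 / 1048576 )) MB"  # 2048 MB
    ```

    Since netstat -m shows only 508 clusters in use on the Hyper-V VM, these limits look like caps rather than up-front reservations, which would explain why changing them does not move actual memory usage.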

    The same problem is present with a clean OPNsense installation (226 MB of RAM on vSphere vs 340 MB on Hyper-V), and the load average on Hyper-V is more than double that on VMware.

    Can anyone help?
    Thanks



  • Can you really not allocate even a GB of RAM to PF? I have 1.5 GB on mine running on 2012r2 (and no tuning).
    c8f52fef-35d4-4adc-9f64-9feb14f03fd8-image.png



  • @provels, hello.
    The question, for me, is not whether I can allocate less than 1 GB of RAM, because the answer is already yes: in the original VM, pfSense works at 30% with 512 MB of RAM. Most of my pfSense installs have 512 MB where more is not required.
    81837767-c437-47c4-9e0a-90f3240d3eb9-image.png

    I would like to stay focused on the specific problem:

    • A full installation on vSphere, with radius and ssltunnel, uses about 32% of 512 MB and no swap.
    • A full installation on Hyper-V, with radius and ssltunnel, uses about 85% of 512 MB and some swap.
    • Modifying the mbuf limits didn't move a single MB of RAM.

    I tried with a copied machine and with a new installation; nothing changes.



  • @m4rv1n Not less, at least or more. Be a trailblazer, then. Figure it out. ;) Good luck!



  • @provels, for sure I will keep digging into the problem, especially for environments where you have to think about system sizing and RAM is not free, and keeping in mind that hundreds of systems run perfectly with the same size and configuration at less than 35%.
    I'll add: if tomorrow you move to vSphere and pfSense sits at 85% with the same configuration that now runs at 34%, I will try to help you too ☺
    If you have any news or ideas, knowledge of pfSense/FreeBSD, or even just a test, for example a clean installation of pfSense with 512 MB in your 2012r2 environment, you are and will be welcome, if only to share knowledge. Thank you for the good luck; I will pair it with heavy testing, and if I have news I will update this thread. The final aim is to help as many people as we can.


  • Netgate Administrator

    Same NIC type in both?



  • @stephenw10 hello.

    They are different NIC types: one is on the vSphere hypervisor and the other is on Hyper-V.

    vSphere, 1Gb:
    em0: <Intel(R) PRO/1000 Legacy Network Connection 1.1.0> port 0x2000-0x203f mem 0xfd560000-0xfd57ffff,0xfdff0000-0xfdffffff irq 18 at device 0.0 on pci2

    Hyper-V, 10Gb:
    hn0: <Hyper-V Network Interface> on vmbus0

    I also tried with a generation 1 VM on Hyper-V and nothing changed.
    At the moment the Hyper-V VM is doing nothing: no traffic, no states, no VPN.

    @stephenw10
    Note that even after changing the mbuf cluster limits, nothing changed in RAM usage:

    mbuf clusters 1000000 -> 26584
    4k (page size) jumbo clusters 524288 -> 13291
    9k jumbo clusters 524288 -> 3938
    16k jumbo clusters 2397 -> 2215


  • Netgate Administrator

    Same RAM usage using legacy NICs in Hyper-V?

