TM107710 Thank you for your patience while we worked on this issue.
We can see that 50% of the server's memory is allocated to shared_buffers alone, which explains the high memory consumption.
In PostgreSQL, the configuration parameter huge_pages controls whether huge memory pages are used for shared_buffers. On your server this parameter is set to "try", which means PostgreSQL attempts to use huge pages if they are available and falls back to regular pages if they are not. During the last restart, PostgreSQL successfully allocated huge pages for shared_buffers, which is set to 8 GB on this server. Since the server has 16 GB of memory in total, the metrics will show memory consumption of at least 50%. This explains the increase in memory usage you observed after the maintenance.
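If you want to verify this yourself, here is a minimal sketch you can run from psql (assuming you can connect to the server; the huge_pages_status parameter is only available on PostgreSQL 17 and later, so on older versions you would rely on the operating system's huge page counters instead):

```sql
-- Whether PostgreSQL is asked to use huge pages: "try", "on", or "off"
SHOW huge_pages;

-- Size of the shared buffer pool (8 GB on this server)
SHOW shared_buffers;

-- PostgreSQL 17+ only: whether huge pages are actually in use right now
SHOW huge_pages_status;
```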
Huge pages are a useful feature: they can improve PostgreSQL performance by reducing the overhead of managing a large number of small memory pages and by improving Translation Lookaside Buffer (TLB) hit rates, which reduces how often the CPU needs to access the page table. However, if you want to disable them completely, you can set the huge_pages server parameter to "off", as sketched below.
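As a rough sketch of how that looks on a self-managed instance (on a managed offering such as Azure Database for PostgreSQL you would change the huge_pages server parameter through the portal or CLI instead), keep in mind that huge_pages can only change at server start, so a restart is required:

```sql
-- Disable huge pages entirely; this takes effect only after a restart.
ALTER SYSTEM SET huge_pages = 'off';

-- Reload the configuration and confirm the change is pending a restart.
SELECT pg_reload_conf();
SELECT name, setting, pending_restart FROM pg_settings WHERE name = 'huge_pages';
```

Note that turning huge pages off also gives up the TLB benefit described above, so it is a trade-off between memory visibility and performance.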
Why were huge pages allocated this time and not before? Huge pages require contiguous blocks of memory. If the system does not have sufficient contiguous memory available when PostgreSQL starts and attempts to allocate huge pages, it cannot use them. Fragmentation of the system's memory affects the availability of contiguous blocks; in other words, huge pages were most likely not used previously because of fragmentation, and since the parameter is set to "try", PostgreSQL silently fell back to regular pages on those earlier starts.
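To give a sense of the scale involved, here is a rough illustration, assuming the common 2 MB huge page size on x86-64 Linux (PostgreSQL 15 and later can report the exact figure through the read-only shared_memory_size_in_huge_pages parameter):

```sql
-- An 8 GB shared memory area needs roughly 8 GB / 2 MB = 4096 contiguous
-- 2 MB huge pages (plus a small overhead for other shared structures),
-- which is why memory fragmentation can prevent the allocation.
SELECT pg_size_bytes('8GB') / pg_size_bytes('2MB') AS approx_huge_pages_needed;
```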
Please comment below if you have any additional questions.
Please don't forget to mark this as the accepted answer if the reply was helpful.
Regards,
Oury