Native Memory Not Released After Watermarker.close() — Confirmed via RSS vs NMT Delta Analysis

Hello GroupDocs Support Team,

We are using GroupDocs.Watermark for Java in our Spring Boot application running on Kubernetes (Java 8, OpenJDK, -Xmx5120m). We have observed that native memory is not being released after Watermarker.close() is called, and we would like to confirm whether this is a known issue or expected behavior.

Environment

  • GroupDocs.Watermark for Java
  • Java 8 (OpenJDK, java-8-openjdk-amd64)
  • Spring Boot, Kubernetes Pod
  • JVM flags: -Xmx5120m -Xms5120m -XX:+UseParallelGC -XX:NativeMemoryTracking=summary

What We Observed

We measured RSS (Resident Set Size) before and after processing watermark requests using /proc/1/status:

Before watermark (2 PDF files): VmRSS: 3,356,916 kB
After watermark  (2 PDF files): VmRSS: 3,580,956 kB
Increase                       :         +224,040 kB (+219MB)
After 11 minutes (no activity) : VmRSS: 3,580,932 kB  (unchanged)

The two files processed were 5.9MB and 2.7MB respectively.
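For reproducibility, this is roughly how we script the measurement. A minimal sketch (the class and helper names are ours, not production code); inside the pod the application is PID 1, so /proc/1/status and /proc/self/status describe the same process:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class RssReader {
    /** Extracts the VmRSS value in kB from the text of /proc/<pid>/status, or -1 if absent. */
    static long parseVmRssKb(String statusText) {
        for (String line : statusText.split("\n")) {
            if (line.startsWith("VmRSS:")) {
                // A status line looks like: "VmRSS:\t 3356916 kB"
                String[] parts = line.substring("VmRSS:".length()).trim().split("\\s+");
                return Long.parseLong(parts[0]);
            }
        }
        return -1L;
    }

    public static void main(String[] args) throws IOException {
        Path status = Paths.get("/proc/self/status");
        if (Files.exists(status)) { // Linux only
            String text = new String(Files.readAllBytes(status));
            System.out.println("VmRSS = " + parseVmRssKb(text) + " kB");
        }
    }
}
```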

What We Verified

To determine whether this was a JVM heap issue or native memory issue, we compared JVM Native Memory Tracking (NMT) before and after:

NMT Total committed — Before: 5,811MB
NMT Total committed — After : 5,812MB
NMT increase                :    +1MB
RSS increase                : +219MB

All NMT categories (Java Heap, Class, Thread, Code, GC, Internal) showed virtually no change, while RSS increased by 219MB. This strongly suggests that GroupDocs.Watermark is allocating memory directly via OS-level native calls, outside of JVM-tracked memory.

We also performed the following additional verifications:

  1. Watermarker is properly closed via try-with-resources — close() is guaranteed to be called
  2. Forced GC via jcmd 1 GC.run — RSS did not decrease after GC
  3. Waited 11 minutes after processing — RSS remained unchanged
  4. Old Gen heap usage was only 6.43% at the time — GC trigger conditions were far from met, ruling out GC timing as the cause
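On point 1, our code relies on the try-with-resources guarantee that close() runs even when processing throws. The pattern is sketched below with a stub class standing in for Watermarker (the stub is only for illustration; our real code wraps the GroupDocs Watermarker the same way):

```java
public class CloseGuarantee {
    /** Stand-in for Watermarker; only records whether close() ran. */
    static class FakeWatermarker implements AutoCloseable {
        boolean closed = false;
        void addWatermarkAndSave() { throw new RuntimeException("processing failed"); }
        @Override public void close() { closed = true; }
    }

    public static void main(String[] args) {
        FakeWatermarker w = new FakeWatermarker();
        try (FakeWatermarker wm = w) {
            wm.addWatermarkAndSave(); // throws mid-processing
        } catch (RuntimeException e) {
            // close() has already run by the time control reaches this handler
        }
        System.out.println("closed = " + w.closed); // prints "closed = true"
    }
}
```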

Our Questions

  1. Is it expected behavior that GroupDocs.Watermark allocates native memory outside the JVM heap?
  2. Is native memory expected to be released upon close() / dispose()? If so, why is RSS not decreasing in our tests?
  3. Is there a known workaround or configuration option to ensure native memory is returned to the OS after processing?
  4. Is this a known issue in the current version, and if so, is there a fix planned?

We are experiencing repeated OOM Kill events in our Kubernetes Pod (limit: 10Gi) due to this memory accumulation, and resolving this is critical for stable operation.

Thank you for your time and support. We look forward to your response.

Hello,

Thank you for the detailed report and the NMT/RSS measurements. Below are answers to your questions and some recommendations.

  1. Is it expected that GroupDocs.Watermark allocates native memory outside the JVM heap?

Yes. GroupDocs.Watermark for Java uses native engines (including Aspose.PDF and other Aspose components) for PDF and other document formats. These engines allocate memory outside the Java heap for document structures, fonts, images, and internal buffers. That is why NMT shows almost no change while RSS increases: NMT only tracks memory allocated by the JVM, not memory allocated by native code (e.g. via JNI/native libraries).

  2. Is native memory expected to be released on close() / dispose()? Why does RSS not decrease?

From the API contract, calling Watermarker.close() (or dispose()) is the correct way to release resources. In our implementation, close() disposes the underlying document object (e.g. for PDF, the Aspose PDF document is disposed). So at the Java/API level, resources are released as designed.

However, even when the native layer frees memory, the process RSS often does not go down because:

  • Many native allocators (e.g. glibc on Linux) keep freed memory in the process for reuse and do not return it to the OS, so RSS can stay high after disposal.

  • The native libraries may use internal caches or pools that are not fully released on dispose.

So the behaviour you see (RSS staying high after close() and after GC) can occur even when our API is used correctly and dispose is called. This is well-known behaviour for JVM applications that rely on native libraries.

  3. Is there a workaround or configuration to return native memory to the OS after processing?

There is no API option in GroupDocs.Watermark to “force” return of native memory to the OS; that is controlled by the native allocator and the underlying libraries.

Practical options that help in environments like Kubernetes are:

  • Run watermarking in a separate process (e.g. a worker pod or one-off job that exits after handling a batch of files). When the process exits, the OS reclaims all its memory, including native. This avoids long-term growth of RSS in a long-running service.

  • Increase the pod memory limit (e.g. above 10Gi) if you need to keep processing in the same long-running process, and/or reduce concurrency (fewer simultaneous watermark operations) to lower peak native memory usage.
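A minimal sketch of the first option, assuming a hypothetical one-shot worker main class (e.g. com.example.WatermarkWorker, a name we invent here) that watermarks the files passed as arguments and then exits. The parent service launches a short-lived child JVM per batch, so the OS reclaims everything, native memory included, when the child terminates:

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class WorkerLauncher {
    /**
     * Runs one watermarking batch in a throwaway child JVM and returns its
     * exit code. When the child exits, the OS reclaims all of its memory,
     * including native allocations that a long-running parent would retain.
     */
    static int runBatchInChildProcess(String workerMainClass, String... files)
            throws IOException, InterruptedException {
        List<String> cmd = new ArrayList<>(Arrays.asList(
                "java", "-cp", System.getProperty("java.class.path"), workerMainClass));
        cmd.addAll(Arrays.asList(files));
        Process child = new ProcessBuilder(cmd).inheritIO().start();
        return child.waitFor(); // 0 on success; nonzero means the batch failed
    }
}
```

A heap limit (e.g. -Xmx) and a timeout on waitFor are worth adding in practice; the sketch omits them for brevity.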

  4. Is this a known issue in the current version, and is a fix planned?

The fact that RSS may not decrease after close() when using native document engines is a known limitation of JVM + native-library setups. We have noted your case in our product and dependency (e.g. Aspose) tracking. Any future improvement would depend on changes in the native layers and allocator behaviour; we do not have a specific fix or timeline to share at this time.

We recommend using a separate process (or short-lived worker pods) for heavy or batch watermarking so that the OS reclaims memory when the process ends. If you share your deployment pattern (e.g. request-per-pod vs shared service), we can suggest a more concrete setup.

Thank you for your patience and for the thorough diagnostics (NMT, RSS, try-with-resources, GC.run). If you have further details or logs, we can keep them on file for future improvements.