
Free Support Forum - groupdocs.com

Memory Optimization

GroupDocs Viewer seems to use a lot more memory than our previous library for converting PDFs to images. Are there any memory optimization settings that can be adjusted to limit memory usage?


Are there any memory optimization settings that can be adjusted to limit memory usage?

We do not provide any settings that can be used to limit memory usage.

Can you please share the file and the sample code you’re using, and how much memory is used when you convert PDF to image?

This is an example of the method that converts a file to images. The file is too big to upload, about 40 MB. We’ve always allocated 900 MB of memory to the service that performs the conversions. The previous library we used did not cause memory issues of this scale, but GroupDocs seems to use a huge amount of memory.

public List<MemoryStream> ParsePagesToImages(Stream fileStream, string extension, int? maxWidth = null, int? maxHeight = null) {
    var pages = new List<MemoryStream>();

    var fileType = FileType.FromExtension(extension);
    LoadOptions loadOptions = new LoadOptions(fileType);

    using (Viewer viewer = new Viewer(fileStream, loadOptions)) {
        PageStreamFactory pageStreamFactory = new PageStreamFactory(pages);

        JpgViewOptions viewOptions = new JpgViewOptions(pageStreamFactory);

        if (maxWidth != null) {
            viewOptions.MaxWidth = maxWidth.Value;
        }

        if (maxHeight != null) {
            viewOptions.MaxHeight = maxHeight.Value;
        }

        // Render every page; each page's JPG data is written to a stream
        // created by pageStreamFactory.
        viewer.View(viewOptions);
    }

    return pages;
}
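(For context, `PageStreamFactory` above is our own class; a minimal sketch of what it does, assuming it implements GroupDocs.Viewer's `IPageStreamFactory` interface and simply collects the page streams into the list, might look like this:)

```csharp
// Sketch only; the actual class in our service may differ.
class PageStreamFactory : IPageStreamFactory {
    private readonly List<MemoryStream> _pages;

    public PageStreamFactory(List<MemoryStream> pages) {
        _pages = pages;
    }

    public Stream CreatePageStream(int pageNumber) {
        // Called by Viewer before rendering each page.
        var pageStream = new MemoryStream();
        _pages.Add(pageStream);
        return pageStream;
    }

    public void ReleasePageStream(int pageNumber, Stream pageStream) {
        // Intentionally keep the streams open: the caller reads them
        // after rendering completes.
    }
}
```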


Please take into account that memory consumption will depend on the number of pages you’re converting, since all the pages are stored in MemoryStreams. It may be reasonable to store pages on disk or to split rendering.
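As a sketch of the store-on-disk approach (assuming the `JpgViewOptions(filePathFormat)` constructor, where `{0}` is replaced with the page number):

```csharp
using (Viewer viewer = new Viewer(fileStream, loadOptions)) {
    // Each page is written to page_1.jpg, page_2.jpg, ... on disk instead of
    // a MemoryStream, so rendered pages don't accumulate in memory.
    JpgViewOptions viewOptions = new JpgViewOptions("page_{0}.jpg");
    viewer.View(viewOptions);
}
```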

Does split rendering mean rendering pages individually? If so, does GroupDocs only load the page it is attempting to render, so that I can then have it render the next page and total memory stays lower? Also, does it work that way when loading from a stream, or only when loading a file from disk?


Possibly “paging” would be a better term to use here.

For example, if you’re building an API that uses Viewer to convert PDF to images, you can render the first set of pages on the first API call, the second set of pages on the next API call, and so on, instead of rendering the complete file. Please note that you’ll pay in CPU and memory, because each API call reopens the file and rebuilds its object model in memory. Of course, the actual numbers should be measured with a benchmark.
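A sketch of this paging approach, using the `View(options, pageNumbers)` overload to render only a range of pages per call (the page numbers here are illustrative):

```csharp
// On this API call, render only pages 1-10; a later call renders 11-20, etc.
// Requires: using System.Linq;
int[] pageNumbers = Enumerable.Range(1, 10).ToArray();

using (Viewer viewer = new Viewer(fileStream, loadOptions)) {
    JpgViewOptions viewOptions = new JpgViewOptions(pageStreamFactory);
    viewer.View(viewOptions, pageNumbers);
}
```

The trade-off noted above applies: the source file is opened and parsed on every call, but only the requested pages are rendered and held in memory.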