Hello,
I’m working with astrophysical simulation data and using Python to generate `.vtkhdf` files based on `vtkOverlappingAMR`. My code was adapted from a sample for converting overlapping AMR data to the `.vtkhdf` format. The output files load correctly in ParaView 5.13 when the dataset is small. However, with moderately larger data, opening the resulting file causes extremely high memory usage on my system, eventually leading to a full system freeze and a Blue Screen of Death (BSOD).
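For reference, the writing logic is roughly the following (heavily condensed; `amr_levels`, the per-block field dictionaries, and the array ordering are placeholders for my own data structures, and I’m not certain the attribute set below matches the current VTKHDF spec exactly):

```python
import h5py
import numpy as np

def write_vtkhdf_amr(path, origin, spacings, amr_levels):
    """amr_levels: list of levels, each a list of (amr_box, cell_arrays)
    tuples, with amr_box = (imin, imax, jmin, jmax, kmin, kmax)."""
    with h5py.File(path, "w") as f:
        root = f.create_group("VTKHDF")
        root.attrs["Version"] = (2, 0)
        # The reader expects "Type" as a fixed-length ASCII string.
        root.attrs.create("Type", np.bytes_("OverlappingAMR"))
        root.attrs["Origin"] = np.asarray(origin, dtype="f8")
        for level, boxes in enumerate(amr_levels):
            g = root.create_group(f"Level{level}")
            g.attrs["Spacing"] = np.asarray(spacings[level], dtype="f8")
            # One row per block: (imin, imax, jmin, jmax, kmin, kmax).
            g.create_dataset("AMRBox",
                             data=np.asarray([b for b, _ in boxes], dtype="i4"))
            cell_data = g.create_group("CellData")
            for name in ("density", "pressure"):
                # Arrays of all blocks in a level are concatenated into one
                # dataset; chunking + gzip shrinks the file on disk but not
                # the memory a reader needs once the data is decompressed.
                flat = np.concatenate(
                    # order="F" assumes blocks indexed [i, j, k] with x
                    # varying fastest, as VTK expects.
                    [a[name].ravel(order="F") for _, a in boxes])
                cell_data.create_dataset(name, data=flat,
                                         chunks=True, compression="gzip")
            # velocity is written the same way, as an (N, 3) dataset.
```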
The `.vtkhdf` file includes fields such as density, pressure, and velocity. I’m trying to understand whether this kind of memory issue is expected when working with larger AMR datasets in this format, or whether there are known practices for optimizing file generation or controlling loading behavior in ParaView.
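To quantify "moderately larger": a quick check I’ve been using is to walk the file with `h5py` and sum the uncompressed dataset sizes, which should roughly lower-bound the memory a reader needs if it loads everything at once (my own ad-hoc check, not an official tool):

```python
import h5py

def uncompressed_size(path):
    """Sum the in-memory (uncompressed) size of every dataset in the file."""
    total = 0
    def visit(name, obj):
        nonlocal total
        if isinstance(obj, h5py.Dataset):
            total += obj.size * obj.dtype.itemsize
    with h5py.File(path, "r") as f:
        f.visititems(visit)
    return total

print(f"{uncompressed_size('output.vtkhdf') / 1e9:.2f} GB uncompressed")
```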
Questions:

- Are there known memory limitations or considerations when working with large `.vtkhdf` files containing overlapping AMR data?
- Does ParaView load all AMR blocks and fields into memory at once when reading `.vtkhdf`, or is there a way to control this behavior?
- Are there recommended practices for generating or visualizing large `.vtkhdf` files to avoid such system-level crashes?
Thanks in advance for your guidance!