About VTKHDF and vtkOverlappingAMR


In my application, I have a list of AMR blocks of cells, but the blocks are not sorted level by level, and I'd like to write a VTKHDF file that can be read back into a vtkOverlappingAMR. From reading the VTKHDF page, I understand that a VTKHDF writer currently needs to write blocks sorted level by level. Am I right? Is there a way for my app to sort the blocks? Could VTKHDF support a list of blocks where the level can change from block to block?
Thank you.

@lgivord @danlipsa @Francois_Mazen

Hi @pkestene

a VTKHDF writer currently needs to write blocks sorted level by level. Am I right?

Yes, the AMR file format is built around levels of axis-aligned blocks.
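For reference, the on-disk layout groups everything per level. Roughly (a sketch from my reading of the VTKHDF specification; the exact attributes may differ between format versions):

```
VTKHDF/                  (group; attributes: Version, Type = "OverlappingAMR", Origin)
  Level0/                (group; attribute: Spacing)
    AMRBox               (dataset: one row of 6 ints per block at this level)
    PointData/ CellData/ (groups holding the arrays for all blocks of the level)
  Level1/
    ...
```

Because each level owns a single `AMRBox` dataset and contiguous data arrays, blocks must be gathered per level before writing.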

Could you share the code of your app to help with the sorting issue?


To be more precise: currently I write a parallel HDF5 file (wrapped into XDMF) to expose my data to ParaView as an unstructured grid (though it really is AMR).

Writing parallel HDF5 is efficient, and I don't need to sort my blocks level by level (inside each MPI process).
Maybe I should keep my current writer and instead investigate writing a VTK reader that would recast my data into vtkNonOverlappingAMR (which is actually more accurate than vtkOverlappingAMR in my case)?

Actually, my blocks are ordered in memory (and in the HDF5 file) according to a space-filling curve (the Morton curve); that is why I was asking about sorting blocks by level. I guess I need to pay the price of reordering, either at write time or at read time, if I want to use vtk(Non)OverlappingAMR instead of unstructured grid + XDMF…
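For what it's worth, regrouping Morton-ordered blocks by level only needs a single O(n) bucketing pass, which preserves the Morton order within each level. A minimal sketch in Python, assuming each block descriptor carries its refinement level (the `Block` type and its fields are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Block:
    # Hypothetical block descriptor: refinement level plus its Morton key.
    level: int
    morton_key: int

def group_by_level(blocks):
    """Bucket Morton-ordered blocks into per-level lists.

    One linear pass; the Morton order is preserved inside each level,
    which is all a level-by-level writer needs.
    """
    levels = {}
    for b in blocks:
        levels.setdefault(b.level, []).append(b)
    # Return the lists ordered by level number (level 0 first).
    return [levels[lvl] for lvl in sorted(levels)]

# Blocks as they appear along the Morton curve (levels interleaved).
blocks = [Block(0, 0), Block(1, 1), Block(1, 2), Block(0, 3), Block(2, 4)]
per_level = group_by_level(blocks)
# per_level[0] now holds the level-0 blocks in their original Morton order.
```

The same bucketing works at read time if you prefer to keep the file in Morton order and reorder only when building the VTK data object.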

Currently, non-overlapping AMR exists as a VTK data object (VTK documentation here), but the format is not supported in VTKHDF at the moment.

This is something that we can add to the VTKHDF reader and writer; just reach out to me at francois.mazen@kitware.com if you want to discuss it.

Otherwise, I agree that you would have to sort the blocks by level on each of your nodes, which could be costly due to the Morton-curve ordering. We can't advise for or against it until you've tried, as the performance will likely depend on your data type and size.

Hope to talk to you soon!