How to preserve the color of vtkPolyData after voxelizing it

Hi, everyone!
After voxelizing a vtkPolyData, I need the corresponding parts of the voxel grid model to display the colors of the original model.
The result I want is shown below: the first picture is the original model, and the second is the voxelized mesh model.


Currently I can obtain a voxel grid by following the official examples PointOccupancy and MarchingCubes, but the colors of the original model are lost.
To sum up, I have two questions: is the voxel grid obtained correctly, and how can the colors of the original model be mapped onto the voxel grid?
Could you give me some suggestions?
Thanks for your reply!

Hello,

If your output object is a vtkImageData, then you may try vtkResampleToImage: VTK: vtkResampleToImage Class Reference . If the output object is something else, you may try vtkResampleWithDataSet: VTK: vtkResampleWithDataSet Class Reference . When reading the documentation, the “Source” object is the original one, while the object dubbed “Input” is the one that receives the collocated values of the “Source”.
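For reference, a minimal C++ sketch of the vtkResampleWithDataSet wiring could look like the following; the file names model.vtp and voxel.vtp are placeholder assumptions, not tied to any particular dataset:

```cpp
// Minimal sketch: map the point-data arrays of an original surface
// ("Source") onto a voxelized mesh ("Input").
#include <vtkNew.h>
#include <vtkResampleWithDataSet.h>
#include <vtkXMLPolyDataReader.h>

int main()
{
  vtkNew<vtkXMLPolyDataReader> sourceReader;  // original colored model
  sourceReader->SetFileName("model.vtp");     // placeholder file name

  vtkNew<vtkXMLPolyDataReader> inputReader;   // voxelized mesh
  inputReader->SetFileName("voxel.vtp");      // placeholder file name

  vtkNew<vtkResampleWithDataSet> resample;
  // "Input" receives the interpolated values of "Source".
  resample->SetInputConnection(inputReader->GetOutputPort());
  resample->SetSourceConnection(sourceReader->GetOutputPort());
  resample->Update();
  // resample->GetOutput() now carries the source's point-data arrays.
  return 0;
}
```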

best,

PC

Thanks for your reply!
I’ll try it!

I performed ResampleWithDataSet in ParaView: the Source Data Arrays input is model.vtp and the Destination Mesh is voxel.vtp.


But the result still shows no corresponding color.
I don’t know whether what I’m doing is correct.

:warning::point_up:

Thanks for your reply!
But I can’t achieve my result based on the information you provided.
If you have code or a method that implements it, could you share it with me?

What are you trying? How far have you gone? Sharing the source code of your current attempt is a good start.

Thank you so much for patiently answering my questions!
First I tried the related ParaView filters. I selected the model.vtp file, which has an attribute array named displacement. After applying the ResampleToImage filter to the vtp, the resulting image also carries the displacement array. However, Point Gaussian rendering shows colors that differ from the original model’s appearance.


Second, I tried the code; the reference example is MarchingCubes. The code is below.

        // Resample the polydata's attributes onto a regular 32^3 image
        // and grab its bounds.
        vtkNew<vtkResampleToImage> resampler;
        resampler->SetInputDataObject(polyData);
        resampler->SetSamplingDimensions(32, 32, 32);
        resampler->Update();

        double bounds[6];
        resampler->GetOutput()->GetBounds(bounds);

        // Build a binary occupancy volume over the same bounds.
        // Note: vtkVoxelModeller voxelizes geometry, so it is fed the
        // original polyData here rather than the resampled image.
        vtkNew<vtkVoxelModeller> voxelModeller;
        voxelModeller->SetSampleDimensions(32, 32, 32);
        voxelModeller->SetModelBounds(bounds);
        voxelModeller->SetScalarTypeToFloat();
        voxelModeller->SetMaximumDistance(1);
        voxelModeller->SetInputData(polyData);
        voxelModeller->Update();

        vtkNew<vtkImageData> volume;
        volume->DeepCopy(voxelModeller->GetOutput());

        // This surface filter's output is never used below.
        vtkNew<vtkDataSetSurfaceFilter> surfaceFilter;
        surfaceFilter->SetInputData(voxelModeller->GetOutput());
        surfaceFilter->Update();

        // Extract an isosurface from the occupancy volume.
        // Caveat: vtkMarchingCubes outputs only the contour scalars,
        // so attribute arrays such as displacement are not carried
        // through and must be resampled onto the surface afterwards.
        // vtkNew<vtkFlyingEdges3D> surface;
        vtkNew<vtkMarchingCubes> surface;
        double isoValue = 0.5;
        surface->SetInputData(volume);
        surface->ComputeNormalsOn();
        surface->SetValue(0, isoValue);

        vtkNew<vtkXMLPolyDataWriter> writer;
        writer->SetInputConnection(surface->GetOutputPort());
        const std::string fileName = yj_output_file_base + "voxel.vtp";
        writer->SetFileName(fileName.c_str());
        writer->Write();

Finally, the vtp file output by the code above is only 3 KB in size, and it contains no displacement-related attribute array.
I will keep posting the latest updates here!
I’m looking forward to your reply.

Why don’t you simply render the resampler’s output? Simplify your pipeline. Once you have a minimal working example, improve the pipeline.

Thanks for your reply!
First, today I have new progress: I can achieve the effect I need in ParaView.
Before mapping, I process model.vtp through the Delaunay3D filter, and then use ResampleWithDataSet to map the attribute data of the Delaunay3D output (as the source mesh) onto the voxel grid (as the destination mesh).
The following is a simple mind map.


Below is the rendering

I may need to choose better voxelization to achieve what I want.
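The ParaView pipeline described above (Delaunay3D output as source, voxel grid as destination) could be sketched in C++ roughly as follows; this is a sketch only, and the file names are placeholder assumptions:

```cpp
// Sketch of the Delaunay3D -> ResampleWithDataSet pipeline.
#include <vtkDelaunay3D.h>
#include <vtkNew.h>
#include <vtkResampleWithDataSet.h>
#include <vtkXMLPolyDataReader.h>

int main()
{
  // Original surface carrying the attribute arrays (e.g. displacement).
  vtkNew<vtkXMLPolyDataReader> modelReader;
  modelReader->SetFileName("model.vtp");  // placeholder file name

  // Tetrahedralize the surface so that probe points lying inside the
  // model can find an enclosing cell to interpolate from.
  vtkNew<vtkDelaunay3D> delaunay;
  delaunay->SetInputConnection(modelReader->GetOutputPort());

  // Destination voxel grid.
  vtkNew<vtkXMLPolyDataReader> voxelReader;
  voxelReader->SetFileName("voxel.vtp");  // placeholder file name

  // Map the tetrahedral mesh's attributes onto the voxel grid.
  vtkNew<vtkResampleWithDataSet> resample;
  resample->SetInputConnection(voxelReader->GetOutputPort());
  resample->SetSourceConnection(delaunay->GetOutputPort());
  resample->Update();
  return 0;
}
```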

Second, the voxel grid can also be obtained through ResampleToImage with the data mapped correctly, but I found a problem. Some models cannot produce a voxel grid through the ResampleToImage filter, such as the model above, while other models work fine. When I reduce the Sampling Dimensions, the model gradually becomes incomplete. Figure a shows Sampling Dimensions = 100 × 100 × 100, and figure b shows Sampling Dimensions = 32 × 32 × 32.
a:
b:

Lastly, I don’t know what causes this effect, but if you know why, I’d love to hear it. For now I will use ResampleWithDataSet to achieve my effect, and make improvements based on your latest suggestions.
I’m looking forward to your reply!

I believe this is self-evident. If you sample your model on a 100×100×100 grid, you get a much more detailed raster than on a 32×32×32 grid. At some point you will lose information.

Thanks for your reply!
I have another question: how did you know that using vtkResampleToImage and vtkResampleWithDataSet might achieve the desired effect? If possible, can you share your thought process?
I’m looking forward to your reply!

vtkImageData is a somewhat different class because it is optimized for large, regularly spaced datasets (Cartesian grids). Hence, it makes sense to use anything related to it when it comes to regular grids. Of course, you can store a volume as a vtkStructuredGrid or even a vtkUnstructuredGrid, but these have a far greater memory footprint and unnecessarily lower rendering performance.
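As a small illustration of why vtkImageData is so cheap, its geometry is implicit, so no point coordinates are ever stored (a sketch, not from the original post):

```cpp
#include <iostream>

#include <vtkImageData.h>
#include <vtkNew.h>

int main()
{
  // A vtkImageData stores only dimensions, origin and spacing;
  // its point coordinates are implicit, unlike vtkStructuredGrid or
  // vtkUnstructuredGrid, which store every point explicitly.
  vtkNew<vtkImageData> image;
  image->SetDimensions(32, 32, 32);
  image->SetOrigin(0.0, 0.0, 0.0);
  image->SetSpacing(1.0, 1.0, 1.0);

  // 32 * 32 * 32 = 32768 points, yet no coordinate array is allocated.
  std::cout << image->GetNumberOfPoints() << std::endl;
  return 0;
}
```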

Thanks for your reply!
I will keep the latest updates here!

I have the latest progress now.
I found that the reason some models could not be sampled before is that the spatial positions of some voxel vertices lie outside the original model.
Now I am using vtkProbeFilter, and I want to switch to a more advanced search strategy via SetFindCellStrategy, such as selecting the nearest cell or vertex as the attribute source.
I have not seen similar examples. Can you provide me with some help?

        vtkNew<vtkProbeFilter> probe_filter;
        probe_filter->SetInputConnection(threshold->GetOutputPort());
        probe_filter->SetSourceData(reader->GetOutput());

        // vtkFindCellStrategy is abstract, so a concrete subclass such
        // as vtkCellLocatorStrategy (or vtkClosestPointStrategy) must
        // be used; the probe filter initializes it against the source.
        vtkNew<vtkCellLocatorStrategy> findCellStrategy;
        probe_filter->SetFindCellStrategy(findCellStrategy);
        probe_filter->Update();

I’m looking forward to your reply!

So, I believe the original question has been answered. If you have further questions, please, start new ones.

Thanks for your reply!
At present, the problem has not been solved, but I have some ideas for solving it.
I will open a new problem based on your suggestion.
Thank you for your help during this time!

This looks good to me.

Yes! But it still has some problems. I still have some time, though, so I should be able to do better! :grinning: