This prints 73.9275, which would be zero in an ideal world. To give an idea, this translates to a Hausdorff distance of 0.012.
In fact, translating sphere1 back and forth adds floating-point error to which vtkDecimatePro is, for some reason, sensitive.
I also noticed that decimating the same input at different scales produces similar (but not identical) results. I believe this is related to the issue described here.
Note that I slightly modified the original code, which uses homemade nested functions, so it might not compile on the first try.
Environment
OS: Windows 10 Pro
Compiler: Microsoft (R) C/C++ Optimizing Compiler Version 19.29.30147 for x64
Build variant: Release
VTK version: 9.1.0
P.S. I tried to submit this issue in the GitLab repo but it was flagged as spam!
This comparison code seems strange. Why aren’t you computing the root mean squared distance?
When decimating a mesh with many nearly-identical facets (e.g. the output of vtkSphereSource), I’m not very surprised that small roundoff errors cause significant differences in the output. Consider the description of the algorithm:
Each vertex in the mesh is classified and inserted into a priority queue. The priority is based on the error to delete the vertex and retriangulate the hole.
With a shape like a sphere, there are going to be many, many vertices that have nearly identical “priority”. So even very small roundoff errors will change the priority of the vertices, thereby changing the result.
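To make that concrete, here is a toy illustration with hand-picked numbers (nothing from vtkDecimatePro itself): two vertices whose deletion errors differ by a single ulp, where a roundoff-sized change to one of them reverses the order in which a priority queue hands them back.

```cpp
#include <cstdio>
#include <queue>
#include <utility>

int main()
{
  // Two vertices whose deletion "priorities" differ by one ulp, as happens
  // all over a tessellated sphere. The values are hand-picked to make the
  // effect explicit; in the filter the perturbation comes from translation.
  const double errA = 1.0000000000000002; // vertex A: 1 + 1 ulp
  const double errB = 1.0;                // vertex B

  std::priority_queue<std::pair<double, char>> clean;
  clean.push({ errA, 'A' });
  clean.push({ errB, 'B' });

  std::priority_queue<std::pair<double, char>> noisy;
  noisy.push({ errA - 3e-16, 'A' }); // roundoff-sized change to A's error
  noisy.push({ errB, 'B' });

  // Prints "clean: A first, noisy: B first" -- the processing order,
  // and therefore the decimation result, changes.
  std::printf("clean: %c first, noisy: %c first\n",
              clean.top().second, noisy.top().second);
  return 0;
}
```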
I am looking for zero difference, so an L1-like comparison suits better. Even the RMS distance would end up non-zero. The Hausdorff distance of 0.012 (computed with cell locators) confirms the same observation.
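For reference, here is a minimal sketch of how such a cell-locator-based Hausdorff distance can be computed. It assumes vtkHausdorffDistancePointSetFilter (FiltersModeling module) with its POINT_TO_CELL target distance method and GetHausdorffDistance() accessor; please check against the 9.1.0 headers. The helper name is made up.

```cpp
#include <vtkHausdorffDistancePointSetFilter.h>
#include <vtkNew.h>
#include <vtkPolyData.h>

// Hypothetical helper: decimatedA and decimatedB are the two decimation
// outputs being compared (e.g. original vs. translated-and-shifted-back).
double HausdorffBetween(vtkPolyData* decimatedA, vtkPolyData* decimatedB)
{
  vtkNew<vtkHausdorffDistancePointSetFilter> hausdorff;
  hausdorff->SetInputData(0, decimatedA);
  hausdorff->SetInputData(1, decimatedB);
  // POINT_TO_CELL uses cell locators, i.e. distance to the other surface,
  // not just to its vertices.
  hausdorff->SetTargetDistanceMethod(
    vtkHausdorffDistancePointSetFilter::POINT_TO_CELL);
  hausdorff->Update();
  return hausdorff->GetHausdorffDistance();
}
```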
Concerning the shape, we noticed the same issue across different shapes with different point densities; the sphere is just an example. In fact, we are looking for identical outputs for morphologically identical but shifted inputs.
I suspect vtkDecimatePro is not the only filter that uses a priority queue. Does that mean those filters suffer from the same issue? And does it mean one may encounter the same issue across different operating systems? Do you suggest a workaround anyway?
You know that shifting the input adds noise to the data, in the form of roundoff error. But you’re still hoping to achieve zero difference in the output? In other words, you want to achieve exact math with floating-point numbers.
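A tiny illustration of that noise, using plain IEEE-754 doubles and nothing VTK-specific (the 0.1 / 1000.0 values are arbitrary):

```cpp
#include <cstdio>

int main()
{
  const double x = 0.1;                  // a point coordinate
  const double t = 1000.0;               // translation distance
  const double roundTrip = (x + t) - t;  // translate forth and back
  // On a typical IEEE-754 build this prints a nonzero value on the order
  // of 1e-14: the coordinate does not survive the round trip exactly.
  std::printf("%.17g\n", roundTrip - x);
  return 0;
}
```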
It might be possible to reduce the difference (but not make it zero) by normalizing the input before applying the algorithm. In your decimation function, you can normalize the input like this (a minimal sketch follows the list):
compute the center of the input (e.g. center of its bounding box)
use this to shift the input so that it is centered at the origin
apply the decimation filter
shift the output back to the original location
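Here is a minimal sketch of those four steps. The wrapper name DecimateNormalized and the targetReduction parameter are placeholders; plug in whatever your real decimation function does.

```cpp
#include <vtkDecimatePro.h>
#include <vtkNew.h>
#include <vtkPolyData.h>
#include <vtkSmartPointer.h>
#include <vtkTransform.h>
#include <vtkTransformPolyDataFilter.h>

// Placeholder wrapper illustrating the four steps above.
vtkSmartPointer<vtkPolyData> DecimateNormalized(vtkPolyData* input,
                                                double targetReduction)
{
  // 1. Compute the center of the input's bounding box.
  double b[6];
  input->GetBounds(b);
  const double cx = 0.5 * (b[0] + b[1]);
  const double cy = 0.5 * (b[2] + b[3]);
  const double cz = 0.5 * (b[4] + b[5]);

  // 2. Shift the input so that it is centered at the origin.
  vtkNew<vtkTransform> toOrigin;
  toOrigin->Translate(-cx, -cy, -cz);
  vtkNew<vtkTransformPolyDataFilter> centered;
  centered->SetTransform(toOrigin);
  centered->SetInputData(input);

  // 3. Apply the decimation filter.
  vtkNew<vtkDecimatePro> decimate;
  decimate->SetInputConnection(centered->GetOutputPort());
  decimate->SetTargetReduction(targetReduction);

  // 4. Shift the output back to the original location.
  vtkNew<vtkTransform> back;
  back->Translate(cx, cy, cz);
  vtkNew<vtkTransformPolyDataFilter> restored;
  restored->SetTransform(back);
  restored->SetInputConnection(decimate->GetOutputPort());
  restored->Update();

  return restored->GetOutput();
}
```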
This normalization will provide more consistent results for the output. You still won’t get identical outputs, but they should be closer. I also recommend increasing the precision of the filter:
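Presumably something along these lines, assuming your vtkDecimatePro instance is called decimate (the setting comes from vtkAlgorithm::DesiredOutputPrecision; check that your 9.1.0 build exposes OutputPointsPrecision on this filter):

```cpp
decimate->SetOutputPointsPrecision(vtkAlgorithm::DOUBLE_PRECISION);
```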
Well, the whole point of the post was to look for a way to make the filter algorithm robust against floating-point rounding errors.
Yes, I already tried the normalization, and it improved the results as you expected.