vtkDecimatePro sensitivity: subject to floating-point error and scale

Hi,

vtkDecimatePro seems to produce slightly different outputs for the “same” input:

#include <vtkDecimatePro.h>
#include <vtkHomogeneousTransform.h>
#include <vtkMath.h>
#include <vtkNew.h>
#include <vtkPolyData.h>
#include <vtkSmartPointer.h>
#include <vtkSphereSource.h>
#include <vtkTransform.h>
#include <vtkTransformPolyDataFilter.h>

#include <iostream>

vtkSmartPointer<vtkPolyData> createSphere(
   const double* center, 
   const double radius, 
   const int phiRes = 30,
   const int thetaRes = 30)
{
   vtkNew<vtkSphereSource> sphereSource;
   sphereSource->SetCenter(center);
   sphereSource->SetRadius(radius);
   sphereSource->SetPhiResolution(phiRes);
   sphereSource->SetThetaResolution(thetaRes);
   sphereSource->Update();

   return sphereSource->GetOutput();
}

vtkSmartPointer<vtkPolyData> transform(
   const vtkSmartPointer<vtkPolyData> mesh,
   const vtkSmartPointer<vtkHomogeneousTransform> transform)
{
   vtkNew<vtkTransformPolyDataFilter> transformFilter;
   transformFilter->SetInputData(mesh);
   transformFilter->SetTransform(transform);
   transformFilter->Update();
   return transformFilter->GetOutput();
}

vtkSmartPointer<vtkPolyData> decimate(
   const vtkSmartPointer<vtkPolyData> polydata,
   const double targetReduction) // fraction of points to remove, in [0, 1]
{
   vtkNew<vtkDecimatePro> decimater;
   decimater->SetInputData(polydata);
   decimater->SetTargetReduction(targetReduction);
   decimater->SetPreserveTopology(1);
   decimater->Update();

   return decimater->GetOutput();
}

// Create a sphere of radius 0.5 centered at the origin
const double radius = .5;
const double sphereCenter[3] = {0., 0., 0.};
auto sphere0 = createSphere(sphereCenter, radius);
auto sphere1 = createSphere(sphereCenter, radius);

vtkNew<vtkTransform> tr;
tr->Translate(1., 1., 1.);
tr->Update();

sphere1 = transform(sphere1, tr);
tr->Inverse();
sphere1 = transform(sphere1, tr);

auto sphere0_decimated = decimate(sphere0, .5);
auto sphere1_decimated = decimate(sphere1, .5);

// Sum of squared distances between corresponding points
double distance = 0.;
for (vtkIdType i = 0; i < sphere0_decimated->GetNumberOfPoints(); ++i)
   distance += vtkMath::Distance2BetweenPoints(sphere0_decimated->GetPoint(i), sphere1_decimated->GetPoint(i));

std::cout << "distance: " << distance << std::endl;

This prints 73.9275, which would be zero in an ideal world. To give an idea of the magnitude, this corresponds to a Hausdorff distance of 0.012.

In fact, translating sphere1 back and forth adds floating-point error, to which vtkDecimatePro is, for some reason, sensitive.
I also noticed that decimating the same input at different scales produces similarly diverging results, which I believe is related to the same issue.
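
For illustration, a single coordinate already fails to round-trip exactly (a standalone toy snippet, not part of the code above):

#include <cstdio>

int main()
{
   // Translate a coordinate by +1 and back by -1, as the transforms above do.
   const double x = 0.1;
   const double roundTripped = (x + 1.0) - 1.0;
   std::printf("%.17g\n%.17g\n", x, roundTripped);
   // Prints 0.10000000000000001 and 0.10000000000000009.
   return 0;
}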

Note that I slightly adapted the original code, which uses homemade nested functions, so the snippet might not compile on the first try.

Environment
OS: Windows 10 Pro
Compiler: Microsoft (R) C/C++ Optimizing Compiler Version 19.29.30147 for x64
Build variant: Release
VTK version: 9.1.0

P.S. I tried to submit this issue to the GitLab repo but it was flagged as spam!

This comparison code seems strange. Why aren’t you computing the root-mean-square distance?

When decimating a mesh with many nearly identical facets (like the output of vtkSphereSource), I’m not very surprised that small roundoff errors cause significant differences in the output. Consider the description of the algorithm:

Each vertex in the mesh is classified and inserted into a priority queue. The priority is based on the error to delete the vertex and retriangulate the hole.

With a shape like a sphere, there are going to be many, many vertices that have nearly identical “priority”. So even very small roundoff errors will change the priority of the vertices, thereby changing the result.
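
As a toy illustration (plain C++, not the actual vtkDecimatePro internals), a roundoff-sized difference between otherwise equal priorities is enough to decide which vertex is processed first:

#include <iostream>
#include <queue>
#include <utility>

int main()
{
   // Two vertices whose deletion errors agree except for roundoff:
   const double errorA = 0.1 + 0.2; // 0.30000000000000004
   const double errorB = 0.3;       // 0.29999999999999999

   std::priority_queue<std::pair<double, int>> priorities;
   priorities.push({errorA, /* vertex id */ 0});
   priorities.push({errorB, /* vertex id */ 1});

   // The roundoff alone decides the processing order, and every
   // subsequent retriangulation depends on that choice.
   std::cout << "first vertex: " << priorities.top().second << std::endl;
   return 0;
}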

Hello David,

Thank you for the quick reply.

I am aiming for zero difference, so an L1-like comparison suits the purpose better. Even the RMS distance would end up non-zero. The Hausdorff distance of 0.012 (computed using cell locators) confirms this observation.

Concerning the shape, we noticed the same issue with different shapes and different point densities; the sphere is just an example. As a matter of fact, we are looking for identical outputs for morphologically identical but shifted inputs.

I suspect that vtkDecimatePro is not the only filter using a priority queue. Does that mean those filters would have the same issue? And might one encounter the same issue across different operating systems? Do you suggest a workaround anyway?

You know that shifting the input adds noise to the data, in the form of roundoff error. But you’re still hoping to achieve zero difference in the output? In other words, you want to achieve exact math with floating-point numbers.

It might be possible to reduce the difference (though not eliminate it) by normalizing the input before applying the algorithm. In your decimation function, you can normalize the input like this (a sketch follows the list):

  1. compute the center of the input (e.g. center of its bounding box)
  2. use this to shift the input so that it is centered at the origin
  3. apply the decimation filter
  4. shift the output back to the original location
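
A minimal sketch of these four steps, reusing the transform() and decimate() helpers from your post (untested, and the function name decimateNormalized is mine):

vtkSmartPointer<vtkPolyData> decimateNormalized(
   const vtkSmartPointer<vtkPolyData> polydata,
   const double targetReduction)
{
   // 1. compute the center of the input's bounding box
   double bounds[6];
   polydata->GetBounds(bounds);
   const double center[3] = {0.5 * (bounds[0] + bounds[1]),
      0.5 * (bounds[2] + bounds[3]), 0.5 * (bounds[4] + bounds[5])};

   // 2. shift the input so that it is centered at the origin
   vtkNew<vtkTransform> toOrigin;
   toOrigin->Translate(-center[0], -center[1], -center[2]);
   auto centered = transform(polydata, toOrigin);

   // 3. apply the decimation filter
   auto decimated = decimate(centered, targetReduction);

   // 4. shift the output back to the original location
   toOrigin->Inverse();
   return transform(decimated, toOrigin);
}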

This normalization will provide more consistent results for the output. You still won’t get identical outputs, but they should be closer. I also recommend increasing the precision of the filter:

decimator->SetOutputPointsPrecision(vtkAlgorithm::DOUBLE_PRECISION);

Well, the whole point of the post was to look for a way to make the filter robust against floating-point rounding errors.
Yes, I had already tried the normalization, and it improved the results as you expected.

I also recommend increasing the precision of the filter:

decimator->SetOutputPointsPrecision(vtkAlgorithm::DOUBLE_PRECISION);

This one I completely missed. Thanks for the pointer!