Slight offset in vtkWindowedSincPolyDataFilter if normalization is enabled

When using vtkWindowedSincPolyDataFilter with NormalizeCoordinates enabled, we have found that the smoothing result is slightly shifted when the bounding box of the mesh is changed (originally reported by a 3D Slicer user here).

Example

  • Data set: download
  • Processing parameters: NumberOfIterations = 20, PassBand = 0.00158489
  • VTK version 9.0.20201111
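
For reference, a minimal sketch of the setup described above (the file name is a placeholder for the downloaded data set; the parameters are the ones listed in the example):

import vtk

# Read the downloaded surface (placeholder path)
reader = vtk.vtkXMLPolyDataReader()
reader.SetFileName("path/to/input-surface.vtp")

# Smooth with the parameters from the example above
smoother = vtk.vtkWindowedSincPolyDataFilter()
smoother.SetInputConnection(reader.GetOutputPort())
smoother.SetNumberOfIterations(20)
smoother.SetPassBand(0.00158489)
smoother.NormalizeCoordinatesOn()  # switch to NormalizeCoordinatesOff() to compare the two results
smoother.Update()

smoothedPolyData = smoother.GetOutput()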

No smoothing (perfect alignment):

Smoothing with NormalizeCoordinates enabled (~0.1mm shift):

Smoothing with NormalizeCoordinates disabled (perfect alignment):


According to the filter’s documentation, coordinate normalization is supposed to improve the results.

Should we disable coordinate normalization to get consistent results?
What problems might we encounter if we disable coordinate normalization?

@will.schroeder

Interesting. I wonder if there is some sort of integral math occurring during the normalization process. Or somehow processing an intermediate iteration. I’ll take a look tomorrow morning.

Thanks in advance!

Considering that smoothing moves the surface by several tenths of a millimeter, the one-tenth-of-a-millimeter shift would be acceptable. The main issue is the inconsistency: the movement of the surface depends on the bounding box size.

I’ve downloaded the data and will check it out soon.

Changing the API / behavior is something I would do only as a last resort, although the documentation certainly needs updating. It wouldn’t surprise me if there are folks in the oil/gas/geomapping areas with extremely large coordinate values who may rely on normalization. I’m guessing that this normalization was originally added for a reason due to numerical sensitivity and this may be an example of this sensitivity. Or it could be a bug :slight_smile: It’s probably worth looking at Taubin’s original paper to see if there are any comments related to this behavior.

I did not find any mention of coordinate normalization in Taubin’s original paper.

If coordinate normalization is only recommended for special cases then we are completely fine with disabling it.

We used coordinate normalization because there was a scaling issue in the initial implementation, and we kept it because the documentation suggested that coordinate normalization should be turned on for better stability (it is just not enabled by default to maintain exact backward compatibility).

Hi @will.schroeder , @lassoan
This is really nice work. Is there any filter that can smooth by joining the outer points (vtkWindowedSincPolyDataFilter seems to shrink the object)?
Thanks.

The filter smoothes out the ripples, which is the desired behavior.

If you computed some kind of convex hull then that would artificially inflate the volume.

This issue has come up again. This time it is a very simple and clear case, and it is hard to justify the error.

Example use case: We have a head CT, we create a binary image using thresholding to segment the bone, and convert it to polydata - everything looks good. However, when we apply vtkWindowedSincPolyDataFilter, the scaling of the resulting polydata can change by up to about 0.7% (either making the output smaller or larger, depending on the chosen number of iterations).
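
As a rough sketch of that pipeline (the reader class and the details below are assumptions for illustration, not the exact code we use):

import vtk

# Read the CT volume (any suitable image reader can be used here)
reader = vtk.vtkNIFTIImageReader()
reader.SetFileName("path/to/head-ct.nii.gz")

# Threshold the bone into a binary labelmap
threshold = vtk.vtkImageThreshold()
threshold.SetInputConnection(reader.GetOutputPort())
threshold.ThresholdByUpper(270)
threshold.SetInValue(1)
threshold.SetOutValue(0)
threshold.SetOutputScalarTypeToUnsignedChar()

# Convert the binary labelmap to polydata
surface = vtk.vtkFlyingEdges3D()
surface.SetInputConnection(threshold.GetOutputPort())
surface.SetValue(0, 0.5)

# Smooth the surface
smoother = vtk.vtkWindowedSincPolyDataFilter()
smoother.SetInputConnection(surface.GetOutputPort())
smoother.SetNumberOfIterations(43)
smoother.SetPassBand(0.063)
smoother.NormalizeCoordinatesOn()
smoother.Update()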

The image below shows the result with 43 iterations. You can see that the smoothed red contour appears shifted outward in the slice views, because the output is somewhat larger. It is also clear that the smoothed red model is larger because the outer surface is all red, while inside the cavity mostly the non-smoothed yellow model is visible.

With 78 iterations, the smoothed model is somewhat smaller than the input (the red model mostly appears inside):

With 60 iterations, the scaling looks about correct - you can see that the non-smoothed yellow surface oscillates around the smoothed red contour very similarly both inside and outside:

Unfortunately, the error does not converge to 0 as the number of iterations grows; it is periodic, so we cannot avoid the error by setting the number of iterations to a large value. For example, for a passband value of 0.063, the period is 60 iterations. The smoothed polydata is too large when the number of iterations is 43, 103, 163, …; too small when it is 78, 148, 208, …; and the scaling is correct between them, at every half period: 30, 60, 90, … Unfortunately, the period length depends on the passband value, so we cannot just hardcode the filter to always use 30 iterations.
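
A quick way to quantify this (a sketch, not the script used to generate the results below) is to sweep the number of iterations and track the enclosed volume of the output; the volume oscillates with the same period:

import vtk

reader = vtk.vtkXMLPolyDataReader()
reader.SetFileName("path/to/HeadCT-threshold270-surface.vtp")

smoother = vtk.vtkWindowedSincPolyDataFilter()
smoother.SetInputConnection(reader.GetOutputPort())
smoother.SetPassBand(0.063)
smoother.NormalizeCoordinatesOn()

# vtkMassProperties requires a closed triangle mesh
triangulate = vtk.vtkTriangleFilter()
triangulate.SetInputConnection(smoother.GetOutputPort())
massProperties = vtk.vtkMassProperties()
massProperties.SetInputConnection(triangulate.GetOutputPort())

for iterations in range(1, 301):
    smoother.SetNumberOfIterations(iterations)
    massProperties.Update()
    print(iterations, massProperties.GetVolume())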

This video shows the smoothing result with the number of iterations varying from 1 to 300:

The challenge of making the filter preserve the overall scale of the mesh is well known, and there is a mechanism implemented in the filter to address it, but it does not seem to be working well and I don’t see any obvious reason why.

Test data set: https://github.com/lassoan/PublicTestingData/releases/download/data/HeadCT-threshold270-surface.vtp

Code to reproduce the problem:

# Reproduce shift by smoothing

import vtk

inputModelFilename = "path/to/HeadCT-threshold270-surface-ncd.vtp"

#iterations = 43  # too large
#iterations = 60  # correct size
iterations = 78  # too small

passBand = 0.063

# Read input
inputModelReader = vtk.vtkXMLPolyDataReader()
inputModelReader.SetFileName(inputModelFilename)
inputModelReader.Update()
inputPolyData = inputModelReader.GetOutput()

# Apply smoothing
smoother = vtk.vtkWindowedSincPolyDataFilter()
smoother.SetInputData(inputPolyData)
smoother.SetPassBand(passBand)
smoother.SetNumberOfIterations(iterations)
smoother.NormalizeCoordinatesOn()
smoother.Update()

# Render results

renderWindow = vtk.vtkRenderWindow()
renderWindow.SetSize(600, 600)

ren = vtk.vtkRenderer()
renderWindow.AddRenderer(ren)
iren = vtk.vtkRenderWindowInteractor()
iren.GetInteractorStyle().SetCurrentStyleToTrackballCamera()
iren.SetRenderWindow(renderWindow)

mapperInput = vtk.vtkPolyDataMapper()
mapperInput.SetInputData(inputPolyData)
actorInput = vtk.vtkActor()
actorInput.SetMapper(mapperInput)
actorInput.GetProperty().SetColor(1, 1, 0)
ren.AddActor(actorInput)

mapper = vtk.vtkPolyDataMapper()
mapper.SetInputConnection(smoother.GetOutputPort())
actor = vtk.vtkActor()
actor.SetMapper(mapper)
actor.GetProperty().SetColor(1, 0, 0)
actor.GetProperty().BackfaceCullingOff()
ren.AddActor(actor)

iren.Initialize()
ren.GetActiveCamera().Pitch(90)
ren.ResetCamera()
ren.GetActiveCamera().Yaw(30)
ren.ResetCamera()
ren.GetActiveCamera().Zoom(1.5)

renderWindow.Render()
iren.Start()

@will.schroeder could you look into this? Many people use this filter to create 3D-printable models from images, and it would be important to avoid any unnecessary systematic scaling errors (even if they are less than 1%).

Thanks Andras, I will peer into this. My suspicion is that there are some numerical noise effects at play here. Although I have to say, if I remember correctly from the original paper, I thought the authors were advocating for much fewer iterations (on the order of a couple dozen). In the smoothing algorithms I’m familiar with, high numbers of iterations cause bad things to happen, e.g., Laplacian smoothing causes extreme shrinkage.

Thanks a lot for looking into this.

To me it seems that this is actually not an iterative method; the method just happens to compute the polynomials using a recursion. The “number of iterations” should actually be called “degree of polynomial”. That is probably why the filter is remarkably stable even when doing hundreds of “iterations”. I’ve tried switching to double-precision point data and it made no difference.
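
For what it’s worth, my rough reading of the Taubin/Zhang/Golub formulation (not a quote of the VTK code) is that the output is a fixed polynomial of a Laplacian-like operator K applied to the input points, evaluated via the Chebyshev recursion:

p_out = f_N(K) p_in,   f_N(k) ≈ sum_{j=0..N} w_j θ_j T_j(k'),
T_0(k') = 1,   T_1(k') = k',   T_j(k') = 2 k' T_{j-1}(k') − T_{j-2}(k'),

where k' is a linearly rescaled frequency variable, θ_j are the Chebyshev coefficients of the ideal low-pass response, w_j are the window weights, and N is what the filter calls the “number of iterations”. Increasing N raises the degree of the polynomial approximation rather than re-running a smoothing pass, which would explain the stability at high “iteration” counts.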

Smoothing results typically start to become acceptable after 20-30 iterations; after that, the shape of the output remains about the same and only the scaling oscillates. The first time the scaling is right and the output is smooth enough is at about 60 iterations. The difficulty is that we cannot specify a number of iterations that works well for all cases. You can find precomputed result images for 1-300 iterations here.

I hear you. The original paper http://mesh.brown.edu/taubin/pdfs/taubin-sg95.pdf talks about some of the tradeoffs and we need to revisit that. One thing I am pondering is that the threaded implementation is a little different in that it uses a double-buffering approach, as compared to in-place processing. I am wondering if this might be related. Have you tried going back to the original non-threaded implementation and compared that?

Just to point out if people didn’t look, the movie that @lassoan posted in the link above is very informative: https://1drv.ms/f/s!Arm_AFxB9yqH0LpHERdqOKhkQPFGNg?e=5hYq5h (scroll down to the bottom to see it)

I’ve tested it now with VTK-8.2 and it behaves the same way (scaling is inaccurate).

I’ve replaced the Hamming window with a Hanning window in the CoefficientsWorker:

w[i] = 0.5 * (1 + cos(((double)i) * vtkMath::Pi() / (double)(numIters + 1)));

This greatly reduced the oscillation of the scaling, and most importantly, the oscillation gets smaller as the number of iterations grows. At 60 iterations or more, all the results look acceptable. This was only tested on one mesh, for one passband value (0.063), but I thought I would share it immediately because it might be useful for your investigation.

Very nice Andras. I can see how using a Hanning window would reduce oscillation since the function shoulder points touch zero etc. I’ll run some tests over the weekend to see if I can spot some issues, but you are probably in a better position to assess whether this change should be made (since you have the data and workflows to test harder than I can at this point).

The other thing I noticed is that since the number of iterations is much larger than expected when this code was written, it may be useful to investigate threading some of the weight calculations for performance reasons.

I’ve implemented and tested all the window functions described in the Taubin paper. It turned out that Blackman is even better: it has no oscillations and provides the correct scale within 10-20 iterations.

Smoothing result for passband=0.0, with the number of iterations varying from 1 to 300:

Hamming (original) - lots of oscillation in scale, regardless of how many iterations we do:

Hanning - perhaps more oscillation for very low iteration counts, but as the number of iterations grows, it converges to the correct scale:

Blackman - no oscillations, immediately converges to the correct scale:
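
For reference (textbook forms, not copied from the VTK source), the three windows in the same half-window parameterization as the snippet above, with i = 0..N and N the number of iterations:

Hamming:  w[i] = 0.54 + 0.46 * cos(i * pi / (N + 1))
Hanning:  w[i] = 0.50 + 0.50 * cos(i * pi / (N + 1))
Blackman: w[i] = 0.42 + 0.50 * cos(i * pi / (N + 1)) + 0.08 * cos(2 * i * pi / (N + 1))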

I’ve tested on other images with different passband and iteration values, and Blackman seems to be better than the originally used Hamming window in every case. I’m wondering if we should change the default window function to Blackman or keep the current inaccurate behavior by default.

There are a number of similar fixes in VTK that are not applied by default (vtkSelectPolyData::EdgeSearchMode, vtkProcrustesAlignmentFilter::StartFromCentroid) because they change the output, which existing users may not want. However, this means that all new users of VTK get suboptimal behavior by default. Is there some CMake flag or other mechanism that we could use to choose between defaults that lead to better results and defaults that preserve backward compatibility? I know about deprecation macros, but this is not exactly the same (we don’t need to force every user to change their software to use this new mode; existing users could keep doing what they have been doing, we would just want to change the default behavior for new users).

I’ve created a merge request with the suggested change (keeping the suboptimal but backward-compatible Hamming function as the default).

Fantastic! We should also try the Nuttall window (with continuous first derivative). It’s not mentioned in the article AFAIK, but the derivative continuity may be a good thing.

IMO I would set the default to provide the best behavior. We could argue about this, but this change is relatively subtle and will not break the API, etc. I know I am aggressive in this regard, but there is significant debt incurred when we try to keep track of every single change, and I’ve seen cases where users end up with suboptimal use of filters because they don’t know about a particular button to push, which hurts the perception of VTK. As a new user, I know I want systems to do the “best” thing and I don’t want to be burdened with needing to dig into the bowels of a complex algorithm. Plus it’s not clear how to legacy this over time…

I’ve implemented Nuttall and indeed it performs slightly better than Blackman, so I’ve set Nuttall as default.
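
For completeness, the textbook Nuttall window (the variant with a continuous first derivative) in the same parameterization is w[i] = 0.355768 + 0.487396 * cos(i * pi / (N + 1)) + 0.144232 * cos(2 * i * pi / (N + 1)) + 0.012604 * cos(3 * i * pi / (N + 1)). Selecting a window from Python should then look roughly like the sketch below (the convenience setter name follows VTK’s usual naming pattern and is my assumption, not a quote from the merge request):

import vtk

smoother = vtk.vtkWindowedSincPolyDataFilter()
smoother.SetInputConnection(reader.GetOutputPort())  # reader as in the earlier examples
smoother.SetNumberOfIterations(20)
smoother.SetPassBand(0.063)
smoother.SetWindowFunctionToNuttall()  # assumed setter; Hamming can still be selected for the old behavior
smoother.Update()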

The fix has been integrated into VTK. To illustrate the effect of what has changed, here is the result (in red) of smoothing a bumpy surface (white), generated by the newly added smoothCylNuttall test:


  • Left: result when using the old Hamming window function. The outer surface of the cylinder is mostly white and the inside surface is mostly red, which indicates that the surface has slightly shrunk as a result of smoothing.
  • Right: result when using the new Nuttall window function. Both the inner and outer surfaces of the cylinder are uniformly red-and-white, which indicates that the smoothing evened out the bumps without changing the overall size.