I’m representing essentially the same data as lines and as tubes/spheres, as separate mappers and actors. I’m expecting the tubes to cover the lines (which mathematically would be inside the tubes) where they coincide. But it seems there is some accuracy issue and sometimes the lines are drawn “on top” of the tubes, depending on the zooming factor and orientation. For example, these screenshots differ only in slight orientation changes: the first one is what I expect, the second one is weird and wrong.
The tube diameter is 0.1, so not ridiculously small, and I still see the same when scaling everything up by 10. Is there some setting to increase the accuracy? Or is there something else I’m missing?
That effect is likely caused by scene misconfiguration. For example, if the depth of the view frustum is six to seven orders of magnitude greater than the separation between the lattice and the tubes, you can certainly expect Z-buffer accuracy issues. Could you post the code that configures the graphics system? The six-to-seven-orders figure comes from a typical graphics card using 32-bit float depth buffers. Some cards may even have 24- or 16-bit Z-buffers (check whether yours is one of those), which worsens the accuracy issue.
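A back-of-the-envelope sketch of that claim, assuming a fixed-point depth buffer and a standard hyperbolic perspective depth mapping (the function name and the specific near/far values are illustrative, not from your scene):

```python
# Hedged estimate: with a fixed-point Z-buffer, depth resolution near the
# far plane is roughly (far/near) / 2^bits of the far distance, because
# perspective depth is hyperbolic and most of the buffer's integer range
# is spent close to the near plane.
def worst_case_resolution(near, far, bits):
    # Smallest eye-space separation still distinguishable at the far plane,
    # derived from d(z) = far/(far-near) * (1 - near/z) and one LSB of
    # window depth, 1/(2^bits - 1).
    return far * (far - near) / (near * (2**bits - 1))

# A ~1000:1 clipping range, similar to the one in the question:
for bits in (16, 24, 32):
    print(bits, worst_case_resolution(0.022, 22.0, bits))
```

For a 1000:1 range and a 24-bit buffer this comes out around a thousandth of the far distance, which is the kind of margin where nearly coincident lines and tube walls start to flicker.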
That’s probably the issue. I get the artifacts whenever the ratio of the clipping range is 1000 (0.02202210246807234, 22.02210246807234), and not if it’s smaller (0.05935975293667006, 21.495882829046867). Now, if you take 1000 squared, that’s indeed six orders of magnitude…
It also seems that VTK is limiting the ratio to exactly 1000 at most. So I wonder if I could limit it to something slightly smaller… Is there any way to affect the clipping range behavior?
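If I understand the VTK API correctly, that 1000:1 cap comes from the renderer's near-clipping-plane tolerance, which clamps near to at least tolerance * far when the clipping range is reset; raising the tolerance shrinks the allowed ratio. A minimal configuration sketch, assuming the standard VTK Python bindings (untested against your scene):

```python
import vtk

renderer = vtk.vtkRenderer()

# Allow at most a 100:1 far/near ratio instead of the default 0.001
# tolerance (i.e. 1000:1).
renderer.SetNearClippingPlaneTolerance(0.01)

# Alternatively, pin the range yourself after VTK computes a tight one.
# Note: interactor styles typically call ResetCameraClippingRange on
# every camera move, so a manual SetClippingRange may be overwritten.
renderer.ResetCameraClippingRange()
near, far = renderer.GetActiveCamera().GetClippingRange()
renderer.GetActiveCamera().SetClippingRange(max(near, far / 100.0), far)
```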
Edit: it’s also happening for somewhat smaller ratios. But the ratio changes quite a lot for very small camera movements. Consider this series:
I hardly changed the view with a small rotation, and the ratio goes from 1000 to ~100. And the artifact goes away from the spheres first and then from the tubes.
24-bit is a typical figure. An IEEE-style 24-bit floating point number has a mantissa of about 15 bits. That means the depth test fails if the depth difference between two fragments is 4-5 orders of magnitude below the depth of the view frustum. Suppose you have a 10-kilometer clipping range: if your scene has objects less than 1 meter apart, you can expect occlusion artifacts like that.
IEEE doesn’t define a 24-bit float, as far as I’m aware. A 24-bit depth buffer gives an integer value in the range [0,16777215]. The mapping of depth to depth buffer Z for an integer depth buffer is:
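A sketch of that mapping, assuming OpenGL-style conventions (eye-space depth z in [near, far], hyperbolic window depth quantized to an integer; the function name is illustrative):

```python
# Hedged sketch of the usual fixed-point depth mapping: window depth
# d = far/(far-near) * (1 - near/z) lies in [0, 1] and is quantized
# to an integer in [0, 2^bits - 1].
def depth_to_buffer(z, near, far, bits=24):
    d = (far / (far - near)) * (1.0 - near / z)
    return round(d * (2**bits - 1))

print(depth_to_buffer(0.022, 0.022, 22.0))  # near plane -> 0
print(depth_to_buffer(22.0, 0.022, 22.0))   # far plane -> 16777215
```

Because d is hyperbolic in z, roughly half of those 16777216 steps are spent between the near plane and twice the near distance, which is why a large far/near ratio starves the far end of precision.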