vtkLookupTable range max + eps

I’ve been using a LUT in my application since VTK 6, and at some point (probably with VTK 8) I had to add

# workaround: pad the upper end of the range for floating-point images
if data.dtype == np.float32 or data.dtype == np.float64:
    table.SetRange(np.min(data), np.max(data) + 0.00001)
else:
    table.SetRange(np.min(data), np.max(data))

Then I removed the condition (for VTK 9) and now I always use the + 0.00001, because all image types sometimes needed it. Sometimes the pixels equal to the max value were black (transparent?), or purple (even on a grayscale LUT), or other colors I don’t remember now. For example, a seemingly random set of my binary image screenshots are totally black. They are fine with the epsilon fix.

Is it normal that I need to add an epsilon like this? It “looks” wrong.

If you are using a LUT to display images then please use the SetRampToLinear() option. Otherwise the LUT will use an S-curve (i.e. a gradual tail-off at both ends).
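For illustration, here is a small sketch that compares the two ramps on a grayscale table (the table size and the sampled entries are arbitrary):

import vtk

# Compare gray levels produced by the default S-curve ramp and by
# SetRampToLinear() for a 256-entry grayscale table.
for linear in (False, True):
    lut = vtk.vtkLookupTable()
    lut.SetNumberOfTableValues(256)
    lut.SetSaturationRange(0.0, 0.0)   # grayscale
    lut.SetValueRange(0.0, 1.0)
    if linear:
        lut.SetRampToLinear()          # default ramp is an S-curve
    lut.Build()
    # sample a few entries near the low end; the S-curve flattens them toward 0
    print("linear" if linear else "s-curve",
          [round(lut.GetTableValue(i)[0], 3) for i in (0, 16, 32, 128)])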

The floating-point math for applying the range is inexact (it contains an optimization that changes a division into a multiplication by a reciprocal), but I’m still surprised that the epsilon is needed, especially since no one has ever commented on it in the twenty-some years that vtkLookupTable has been implemented this way.
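To make the rounding issue concrete, here is a standalone sketch (an illustration of the idea, not VTK’s actual code) of how a precomputed reciprocal can map the range maximum to the second-to-last table entry for some ranges:

import numpy as np

# index_by_division: the "exact" mapping of a scalar into an n-entry table
# index_by_reciprocal: the same mapping using a precomputed reciprocal,
# mirroring the kind of optimization described above (hypothetical code)
def index_by_division(x, lo, hi, n):
    return int((x - lo) / (hi - lo) * (n - 1))

def index_by_reciprocal(x, lo, hi, n):
    scale = 1.0 / (hi - lo)
    return int((x - lo) * scale * (n - 1))

# brute-force search for a range where the two disagree at x == hi;
# when (hi - lo) * (1 / (hi - lo)) rounds to just below 1.0, the maximum
# value lands on index n - 2 instead of n - 1
rng = np.random.default_rng(1)
for _ in range(100000):
    lo = rng.uniform(0.0, 100.0)
    hi = lo + rng.uniform(0.01, 100.0)
    if index_by_division(hi, lo, hi, 256) != index_by_reciprocal(hi, lo, hi, 256):
        print(lo, hi,
              index_by_division(hi, lo, hi, 256),
              index_by_reciprocal(hi, lo, hi, 256))
        break

With an epsilon added to the upper bound, the maximum value lies strictly inside the range, so a truncation like the one above can no longer push it down to the previous entry.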

Can you give a min and max for which the epsilon was definitely required?

I tested some more, and it seems to be because I’m using a vtkWindowLevelLookupTable. Here’s an example that’s supposed to show a black square with a white ‘t’ on a blue background.

import numpy as np
import vtk

width = 30
height = 30

# black (0) image with a one-pixel-wide 't' of value 1
data = np.zeros((width, height), dtype='uint8')
data[:, 8] = 1
data[8, :] = 1

data_importer = vtk.vtkImageImport()
data_str = data.tobytes()  # tostring() is deprecated in recent NumPy
data_importer.SetDataSpacing(1.0, 1.0, 0.0)
data_importer.SetDataOrigin(0.5, 0.5, 0.0)
data_importer.SetWholeExtent(0, width - 1, 0, height - 1, 0, 0)
data_importer.SetDataExtentToWholeExtent()
data_importer.SetDataScalarTypeToUnsignedChar()
data_importer.SetNumberOfScalarComponents(1)
data_importer.CopyImportVoidPointer(data_str, len(data_str))
data_importer.Update()

lut = vtk.vtkWindowLevelLookupTable()
lut.SetRange(np.min(data), np.max(data))
lut.SetAlpha(1.0)
lut.SetRampToLinear()
lut.SetAlphaRange(0.0, 1.0)
lut.SetHueRange(0.0, 0.0)
lut.SetSaturationRange(0.0, 0.0)
lut.SetValueRange(0.0, 1.0)
lut.SetNumberOfTableValues(256)
lut.Build()

color = vtk.vtkImageMapToColors()
color.SetLookupTable(lut)
color.PassAlphaToOutputOn()
color.SetInputConnection(data_importer.GetOutputPort())

actor = vtk.vtkImageActor()
actor.SetInterpolate(False)
actor.GetMapper().SetInputConnection(color.GetOutputPort())

renderer = vtk.vtkRenderer()
renderer.SetBackground(0.0, 0.0, 1.0)
renderer.AddActor(actor)
renderer.ResetCamera()

render_window = vtk.vtkRenderWindow()
render_window.AddRenderer(renderer)

iren = vtk.vtkRenderWindowInteractor()
iren.SetRenderWindow(render_window)
iren.Initialize()
iren.Start()

Most of the time I get a different color for the ‘t’, but it’s often equal or close to the background color. The color is fine when I add an epsilon. What’s the problem?

I can confirm the behavior, and it is weird as heck.

Only vtkWindowLevelLookupTable seems to have this problem. The problem went away when I used vtkLookupTable instead.

Note that vtkWindowLevelLookupTable always builds a linear grayscale ramp with a linear alpha ramp, so SetAlphaRange, SetHueRange, SetSaturationRange, SetValueRange, and SetRampToLinear are all ignored by this class. It is designed to be used as follows:

# can use SetRange or SetTableRange instead of Window/Level
lut = vtk.vtkWindowLevelLookupTable()
lut.SetWindow(1.0)
lut.SetLevel(0.5)
lut.Build()

Of course, even when used like this it still demonstrates the strange behavior. My suspicion is that its Build() method is missing code that is needed to set the min, max, and out-of-range color table entries.
If this is indeed the problem, then I will be able to submit a bug fix before the VTK 9.1 release.
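For context, these are the kinds of entries meant here; on a plain vtkLookupTable they can be set explicitly (a sketch of the standard vtkLookupTable calls, not the fix itself):

import vtk

# The special table entries a LUT keeps besides the in-range ramp.
lut = vtk.vtkLookupTable()
lut.SetNanColor(1.0, 0.0, 1.0, 1.0)            # color used for NaN scalars
lut.SetUseBelowRangeColor(True)
lut.SetBelowRangeColor(0.0, 0.0, 0.0, 1.0)     # values below the scalar range
lut.SetUseAboveRangeColor(True)
lut.SetAboveRangeColor(1.0, 1.0, 1.0, 1.0)     # values above the scalar range
lut.Build()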

My recommendation is to use vtkLookupTable in place of vtkWindowLevelLookupTable. Also, I do not recommend using an alpha ramp unless you specifically need one, i.e. use

SetAlphaRange(1.0, 1.0)

instead of

SetAlphaRange(0.0, 1.0)

Typically one would use an alpha ramp OR a grayscale ramp, but rarely does it make sense for a single lut to use both.
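Putting the recommendation together for the grayscale case (a sketch that reuses the `data` array and imports from the example above; adjust to your own setup):

# vtkLookupTable in place of vtkWindowLevelLookupTable: linear grayscale ramp,
# fully opaque, over the data range
lut = vtk.vtkLookupTable()
lut.SetRampToLinear()
lut.SetHueRange(0.0, 0.0)
lut.SetSaturationRange(0.0, 0.0)
lut.SetValueRange(0.0, 1.0)          # black to white
lut.SetAlphaRange(1.0, 1.0)          # opaque, per the advice above
lut.SetRange(np.min(data), np.max(data))
lut.SetNumberOfTableValues(256)
lut.Build()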

Hi Nil, I used your sample code to test a fix for vtkWindowLevelLookupTable. The fix has now been merged into the VTK master branch.

Twenty or so years in C++, yes. But Python and its libraries like NumPy handle floating-point types in ways that may cause unexpected behavior when pushing values to VTK.

You will have to explain this to me. An IEEE float is the same in numpy as in C++. When a numpy array is passed to VTK, e.g. as an image, the values are bit-for-bit identical.

Of course both languages follow the IEEE standard, but there the similarities end. As you know, Python is dynamically typed. Conversions, memory alignment, optimizations, etc., taking place both in the Python context and inside the libraries, may result in fluctuations in the least significant bits:

From: Data types — NumPy v1.26 Manual

Extended Precision

Python’s floating-point numbers are usually 64-bit floating-point numbers, nearly equivalent to np.float64. In some unusual situations it may be useful to use floating-point numbers with more precision. Whether this is possible in numpy depends on the hardware and on the development environment: specifically, x86 machines provide hardware floating-point with 80-bit precision, and while most C compilers provide this as their long double type, MSVC (standard for Windows builds) makes long double identical to double (64 bits). NumPy makes the compiler’s long double available as np.longdouble (and np.clongdouble for the complex numbers). You can find out what your numpy provides with np.finfo(np.longdouble).

NumPy does not provide a dtype with more precision than C’s long double; in particular, the 128-bit IEEE quad precision data type (FORTRAN’s REAL*16) is not available.

For efficient memory alignment, np.longdouble is usually stored padded with zero bits, either to 96 or 128 bits. Which is more efficient depends on hardware and development environment; typically on 32-bit systems they are padded to 96 bits, while on 64-bit systems they are typically padded to 128 bits. np.longdouble is padded to the system default; np.float96 and np.float128 are provided for users who want specific padding. In spite of the names, np.float96 and np.float128 provide only as much precision as np.longdouble, that is, 80 bits on most x86 machines and 64 bits in standard Windows builds.

Be warned that even if np.longdouble offers more precision than python float, it is easy to lose that extra precision, since python often forces values to pass through float. For example, the % formatting operator requires its arguments to be converted to standard python types, and it is therefore impossible to preserve extended precision even if many decimal places are requested. It can be useful to test your code with the value 1 + np.finfo(np.longdouble).eps.

So I wouldn’t assume that C++ <-> Python <-> NumPy data exchange goes without any issues.

take care,

Paulo

I agree that it’s useful to be aware of edge cases, but what you’re doing here is scaremongering.

The “extended precision” argument is not applicable when one is simply moving 64-bit or 32-bit floats from numpy to VTK. The “bit fluctuations” that you mention only occur when actually performing mathematical operations, and the warnings about x87 80-bit floats are just as valid in pure C++ code as they are in Python and NumPy.

In other words, I agree that math in pure Python vs. NumPy vs. C++ might possibly give three different answers. But I’m not sure how this is relevant to simply moving floats (of the non-extended variety) from numpy <-> C++.
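As a concrete check of that point, a minimal round-trip through vtk.util.numpy_support (a sketch; deep=1 forces a copy into a VTK array):

import numpy as np
from vtk.util.numpy_support import numpy_to_vtk, vtk_to_numpy

# copy a float64 array into a vtkDoubleArray and back, then compare raw bits
a = np.random.default_rng(0).random(1000)
v = numpy_to_vtk(a, deep=1)
b = vtk_to_numpy(v)
print(np.array_equal(a.view(np.uint64), b.view(np.uint64)))   # True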

Wow, thank you for the advice and for the quick fix!