What’s your opinion about the use of size_t for arguments in VTK that are defined as vtkIdType? Look at this:
// p is a vector
std::size_t i, num = p.size();
vtkIdList *cell = vtkIdList::New();
cell->SetNumberOfIds(num); // what happens if num is bigger than int?
for (i = 0; i < num; i++) {
  cell->SetId(i, p[i]); // also here
}
size_t isn’t viable for two reasons. One is that it matters that vtkIdType is signed. The other is that 64-bit builds with 32-bit ID types are still useful (decent memory savings).
Usually we just static_cast the warnings away, though proper checks should really be done.
Python and Java do not have native unsigned integer types, so it’s good to limit the use of unsigned types in the VTK API. But I agree that vtkIdType is inconvenient when working with the STL.
I think the most we can expect is an ssize() method (though perhaps not even that; it seems we get std::ssize(container) instead). Hopefully these have assertions or some internal check if the size grows beyond what the signed return type (std::ptrdiff_t) can represent.
By default, vtkIdType is configured to have the same width as size_t. The only time you will see a 32-bit vtkIdType with a 64-bit size_t is if you build VTK with VTK_USE_64BIT_IDS=OFF.
Signed vs. unsigned is a problem. This code can be optimized much more easily in C or C++:
for (int i = 0; i <= N; ++i)
  do_something(i);
because signed overflow is undefined behavior (UB). Since it is UB, the compiler can assume it never happens and is free to unroll or vectorize the loop. Change int to unsigned, and if the compiler can’t prove N isn’t UINT_MAX, it has to pessimize: unsigned wraparound is well-defined, so the loop might legitimately never terminate.
In any case, .size() on containers will be unsigned forever and it looks like std::ssize(container) will be the C++20 solution for these things.
But vtkIdType can be 32-bit even on 64-bit platforms, so range checking is actually always needed.