Hello,
I need to determine whether points from one polydata project onto cells in another polydata.
I could use vtkOBBTree.IntersectWithLine() but I’m concerned about performance.
It seems that GPU-based ray casting could solve the performance problem, and I see that VTK supports GPU-based ray casting via vtkGPUVolumeRayCastMapper, vtkOSPRayPolyDataMapperNode and others, but I can’t figure out how I could leverage these classes to do what I want. Basically, I don’t want to render using ray casting; I just want to project a bunch of rays emanating from the points in one polydata, along their normals, and figure out which cell (if any) they intersect in the other polydata.
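For reference, the vtkOBBTree route I’d otherwise fall back on looks roughly like this (untested sketch; ProjectAlongNormals and rayLength are just illustrative names, and IntersectWithLine needs a finite second endpoint, hence the ray length):

#include <vtkDataArray.h>
#include <vtkIdList.h>
#include <vtkNew.h>
#include <vtkOBBTree.h>
#include <vtkPointData.h>
#include <vtkPoints.h>
#include <vtkPolyData.h>

// For each point of 'source', cast a segment of length 'rayLength' along its
// normal and record which cells of 'target' (if any) it hits.
void ProjectAlongNormals(vtkPolyData* source, vtkPolyData* target, double rayLength)
{
  vtkNew<vtkOBBTree> tree;
  tree->SetDataSet(target);
  tree->BuildLocator();

  vtkDataArray* normals = source->GetPointData()->GetNormals();
  vtkNew<vtkPoints> hitPoints;
  vtkNew<vtkIdList> hitCells;

  for (vtkIdType i = 0; i < source->GetNumberOfPoints(); ++i)
  {
    double p0[3], n[3], p1[3];
    source->GetPoint(i, p0);
    normals->GetTuple(i, n);
    for (int c = 0; c < 3; ++c)
      p1[c] = p0[c] + rayLength * n[c];

    // A non-zero return value means the segment p0->p1 intersects 'target';
    // hitCells then lists the cells intersected by the segment.
    if (tree->IntersectWithLine(p0, p1, hitPoints, hitCells) != 0)
    {
      vtkIdType cellId = hitCells->GetId(0);
      (void)cellId; // use as needed
    }
  }
}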
Any ideas, hints, examples on how to do this will be gratefully received,
It seems super easy to do this using Intel Embree, which conveniently has Python bindings, so unless I’m missing an obvious solution in VTK I’m leaning towards this approach instead.
During my graduate years I was dealing with the same problem when I needed ray tracing for global ambient occlusion. For CPU, I had to go with IGL, which integrates Embree, and for GPU I had to go with NVIDIA OptiX. I don’t know if Embree now supports GPU as well. But it would be really interesting to add CPU support using Embree and build a structure like vtkOBBTree, or GPU support using OptiX inside VTK-m, and build a similar structure.
Indeed, it appears to not be useful directly, but could be a basis for implementing a ray casting filter in VTK based on Embree. I’m afraid I’m not aware of anything in or planned for VTK that provides the functionality you are looking for.
I quickly tested Embree for point projection onto polydata and it was very easy to do. I strip the points, tris and quads out of the target polydata as numpy arrays, construct Embree geometry directly from these arrays and add it to an Embree Scene. Tris and quads have to go into separate geometries.
For testing I use another polydata and extract the points and normals as numpy arrays and use them to directly construct an Embree RayHit object. For my test I only use the normals, but I should really use the reversed normals too.
Then I intersect the RayHit with the Scene and the intersection results are written back into the RayHit: the index of the cell that was hit, the uv location on that cell, the global location of the intersection, and the distance from the source point to the intersection.
I’m using the method that terminates after the first intersection and an infinite ray length, but there are other options, including all intersections, closest point, etc.
The only issue I have at the moment is that rays that originate from a location that lies ON a cell are ignored, which isn’t good for my application. There does seem to be an Embree option to disable this behavior, but it’s a compile-time option…
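For anyone who prefers C++, the per-point projection loop described above boils down to something like this (rough sketch against the Embree 3 API; it assumes the Scene has already been built and committed from the target polydata, and ProjectPoints/Projection are just illustrative names):

#include <embree3/rtcore.h>
#include <limits>
#include <vector>

struct Projection
{
  unsigned primID; // index of the hit cell, RTC_INVALID_GEOMETRY_ID on a miss
  float u, v;      // parametric location on that cell
  float t;         // distance to the intersection (assuming unit-length normals)
};

// Fire one closest-hit ray per source point along its normal into 'scene'.
std::vector<Projection> ProjectPoints(RTCScene scene,
                                      const std::vector<float>& srcPoints,   // xyz per point
                                      const std::vector<float>& srcNormals)  // xyz per point
{
  std::vector<Projection> result(srcPoints.size() / 3);
  RTCIntersectContext context;
  rtcInitIntersectContext(&context);

  for (size_t i = 0; i < result.size(); ++i)
  {
    RTCRayHit rayhit;
    rayhit.ray.org_x = srcPoints[3 * i + 0];
    rayhit.ray.org_y = srcPoints[3 * i + 1];
    rayhit.ray.org_z = srcPoints[3 * i + 2];
    rayhit.ray.dir_x = srcNormals[3 * i + 0];
    rayhit.ray.dir_y = srcNormals[3 * i + 1];
    rayhit.ray.dir_z = srcNormals[3 * i + 2];
    rayhit.ray.tnear = 0.f;
    rayhit.ray.tfar = std::numeric_limits<float>::infinity(); // infinite ray length
    rayhit.ray.mask = -1;
    rayhit.ray.time = 0.f;
    rayhit.ray.flags = 0;
    rayhit.hit.geomID = RTC_INVALID_GEOMETRY_ID;
    rayhit.hit.instID[0] = RTC_INVALID_GEOMETRY_ID;

    rtcIntersect1(scene, &context, &rayhit); // finds the closest hit along the ray

    if (rayhit.hit.geomID != RTC_INVALID_GEOMETRY_ID)
    {
      result[i] = { rayhit.hit.primID, rayhit.hit.u, rayhit.hit.v, rayhit.ray.tfar };
    }
    else
    {
      result[i] = { RTC_INVALID_GEOMETRY_ID, 0.f, 0.f, -1.f };
    }
  }
  return result;
}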
Exactly. The PointLocator seemed to me to be an ideal first use case to demonstrate the utility of applying optimized RT capabilities for general processing in VTK, but I’ve got a couple of others in my wish list to try later. I am excited to learn about this case too and am hoping that embree as an optional part of VTK will be helpful here.
Something that is puzzling me: I am using VTK 9.2.5.
The code below works great as anticipated. However, if I include an autoinit in the CMakeLists.txt file, the memory layout is changed and it no longer works.
#include <vtkActor.h>
#include <vtkCellArray.h>
#include <vtkIdList.h>
#include <vtkNamedColors.h>
#include <vtkNew.h>
#include <vtkPoints.h>
#include <vtkPolyData.h>
#include <vtkPolyDataMapper.h>
#include <vtkProperty.h>
#include <vtkSmartPointer.h>
#include <vtkTriangle.h>
#include <embree3/rtcore.h>
#include <iostream>
#include <limits>
int main(int, char*[])
{
// Create a triangle
vtkNew<vtkPoints> points;
points->InsertNextPoint(0.0, 0.0, 0.0);
points->InsertNextPoint(1.0, 0.0, 0.0);
points->InsertNextPoint(0.0, 1.0, 0.0);
vtkNew<vtkTriangle> triangle;
triangle->GetPointIds()->SetId(0, 0);
triangle->GetPointIds()->SetId(1, 1);
triangle->GetPointIds()->SetId(2, 2);
vtkNew<vtkCellArray> triangles;
triangles->InsertNextCell(triangle);
// Create a polydata object
vtkNew<vtkPolyData> trianglePolyData;
// Add the geometry and topology to the polydata
trianglePolyData->SetPoints(points);
trianglePolyData->SetPolys(triangles);
int numberOfFaces = trianglePolyData->GetNumberOfCells();
std::cout << "nFaces: " << numberOfFaces << std::endl;
vtkPoints* points0 = trianglePolyData->GetPoints();
vtkDataArray* vertices = points0->GetData();
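// NOTE: RTC_FORMAT_FLOAT3 below assumes the point coordinates are stored as
// 32-bit floats, which is vtkPoints' default precision.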
RTCDevice device;
RTCScene scene;
static const char config[] = "frequency_level=simd128";
device = rtcNewDevice(config);
scene = rtcNewScene(device);
RTCGeometry geom = rtcNewGeometry(device, RTC_GEOMETRY_TYPE_TRIANGLE);
rtcSetSharedGeometryBuffer(geom,
RTC_BUFFER_TYPE_VERTEX,
0, /* slot */
RTC_FORMAT_FLOAT3,
(void*) vertices->GetVoidPointer(0),
0, /* byteOffset */
3*sizeof(float), /* stride */
trianglePolyData->GetNumberOfPoints()); /* itemCount = number of vertices */
for (int i = 0 ; i < 3 ; i++) {
std::cout << "Point " << i << ": " <<
*(static_cast<float*>(vertices->GetVoidPointer(0)) + i*3 + 0) << ", " <<
*(static_cast<float*>(vertices->GetVoidPointer(0)) + i*3 + 1) << ", " <<
*(static_cast<float*>(vertices->GetVoidPointer(0)) + i*3 + 2) << std::endl;
}
unsigned* ib =
(unsigned*) rtcSetNewGeometryBuffer(geom,
RTC_BUFFER_TYPE_INDEX, 0,
RTC_FORMAT_UINT3, 3*sizeof(unsigned),
numberOfFaces); /* itemCount = number of triangles */
for( int i = 0; i < numberOfFaces; i++) {
vtkSmartPointer<vtkIdList> face = vtkSmartPointer<vtkIdList>::New();
trianglePolyData->GetCellPoints(i,face);
ib[i*3 + 0] = (unsigned) face->GetId(0);
ib[i*3 + 1] = (unsigned) face->GetId(1);
ib[i*3 + 2] = (unsigned) face->GetId(2);
}
rtcCommitGeometry(geom);
rtcAttachGeometry(scene, geom);
rtcReleaseGeometry(geom);
rtcCommitScene(scene);
RTCRayHit rayhit;
rayhit.ray.org_x = 0.f; rayhit.ray.org_y = 0.f; rayhit.ray.org_z = -1.f;
rayhit.ray.dir_x = 0.f; rayhit.ray.dir_y = 0.f; rayhit.ray.dir_z = 1.f;
rayhit.ray.tnear = 0.f;
rayhit.ray.tfar = std::numeric_limits<float>::infinity();
// You need to initialize these hit/ray fields, otherwise you get undefined
// behavior. This requirement was introduced in Embree 3.
for (size_t i = 0; i < RTC_MAX_INSTANCE_LEVEL_COUNT; i++)
rayhit.hit.instID[i] = RTC_INVALID_GEOMETRY_ID;
rayhit.hit.geomID = RTC_INVALID_GEOMETRY_ID;
rayhit.hit.primID = RTC_INVALID_GEOMETRY_ID;
rayhit.ray.mask = -1; /* -1 = do not filter out any geometry by ray mask */
rayhit.ray.time = 0.0f;
rayhit.ray.flags = 0;
RTCIntersectContext context;
rtcInitIntersectContext(&context);
rtcIntersect1(scene, &context, &rayhit);
if (rayhit.hit.geomID != RTC_INVALID_GEOMETRY_ID) {
std::cout << "Intersection at t = " << rayhit.ray.tfar << std::endl;
} else {
std::cout << "No Intersection" << std::endl;
}
rtcReleaseScene(scene);
rtcReleaseDevice(device);
return EXIT_SUCCESS;
}
I don’t know. As I recall, autoinit helps to set up VTK’s library loading/initialization. The details can be found in the VTK 6 transition guide, where that was introduced, or likely by asking this list in a dedicated post.
There is some variability in vtkUnstructuredGrid/vtkPolyData internal data layout that makes GetVoidPointer() kind of suspicious, and likewise with whether vtkPoints stores floats or doubles, but I don’t know how either would be affected by autoinit.
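Something along these lines would at least make the float assumption explicit (just a sketch; GetFloatVertexBuffer is an illustrative name):

#include <vtkFloatArray.h>
#include <vtkPoints.h>
#include <vtkPolyData.h>
#include <vector>

// Returns true when the polydata's points can be shared with Embree directly
// (contiguous 32-bit floats); otherwise fills 'copy' with a float conversion.
bool GetFloatVertexBuffer(vtkPolyData* pd, const float*& shared, std::vector<float>& copy)
{
  vtkFloatArray* fa = vtkArrayDownCast<vtkFloatArray>(pd->GetPoints()->GetData());
  if (fa && fa->GetNumberOfComponents() == 3)
  {
    shared = fa->GetPointer(0); // safe to pass to rtcSetSharedGeometryBuffer
    return true;
  }
  // Fall back to an explicit copy, e.g. when the points are stored as doubles.
  copy.resize(3 * static_cast<size_t>(pd->GetNumberOfPoints()));
  for (vtkIdType i = 0; i < pd->GetNumberOfPoints(); ++i)
  {
    double p[3];
    pd->GetPoint(i, p);
    copy[3 * i + 0] = static_cast<float>(p[0]);
    copy[3 * i + 1] = static_cast<float>(p[1]);
    copy[3 * i + 2] = static_cast<float>(p[2]);
  }
  shared = copy.data();
  return false;
}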
I figured it out using valgrind: I had left a variable uninitialized, and by coincidence the error was only triggered when I added the autoinit statement to the CMakeLists.txt file. I have added a comment about this in the initial post.