Interest in GPU image processing pipeline

So let's start with an understanding of what the goals of VTK-m
are:

  • To have visualization algorithms execute performantly on
    exascale HPC machines, such as Summit and the announced
    Frontier.

  • Provide developers a single-source model, where they can
    write small worklets (aka functors) in C++ and have them
    compiled and executed using different acceleration backends
    such as OpenMP, TBB, and CUDA.

  • Provide a collection of feature-rich parallel primitives
    such as those seen in the C++17 parallelism proposal or Thrust.
    We currently have more parallel primitives than C++17
    but fewer than Thrust.

  • Provide a collection of abstractions for common scientific
    visualization concepts, such as accessing the points of a cell,
    point neighborhoods, cell/point locators, and so on.

If you are interested in finding out more about the concepts
of VTK-m, I recommend our users guide ( http://m.vtk.org/images/c/c8/VTKmUsersGuide.pdf ).

So, how does this relate to filters and GPU memory residency?

VTK-m has a design that fully supports keeping data on the GPU as
long as it is needed. The original challenge was that, since VTK
and VTK-m use slightly different memory layouts, we had to bring
the data back to the host after executing VTK-m algorithms inside
VTK to make sure that downstream VTK filters could access the data.

This data-transfer cost is alleviated in our new design for VTK-m/VTK
filters, which we are currently developing ( https://gitlab.kitware.com/vtk/vtk/merge_requests/5395 ).
In this model a VTK-m dataset lives inside VTK, which allows data
to stay resident on the device between filters instead of being
copied back and forth. This design will also work with GPUs that
support pageable UVM memory, allowing CPUs to access GPU-allocated
memory without a full copy.

If you have any questions on VTK-m in general please ask :slight_smile: