Proposal: Introduce data object caching in vtkCompositeDataPipeline


I work on an FEA application. Our pipeline data objects are composite
data structures of vtkUnstructuredGrid or vtkPolyData. Both the number of leaf
blocks and the size of the blocks may be large. During interactive processing,
typically only a small subset of the blocks changes: e.g. when working on a dam
model, the bedding, which is usually a complex data object, does not change at
all, and modelling a complex building may mean that only one or a few of
several hundred parts change. However, all leaf objects in the composite
pipeline are processed, including the unchanged ones.

The vtkCompositeDataPipeline executive executes its algorithm
assuming the input data is a vtkCompositeDataSet. Algorithms may natively handle
such inputs or, as is the case with many algorithms, expect a
non-composite data object, e.g. vtkPointSet. The vtkCompositeDataPipeline
creates an output composite data object with the same structure as the input
and executes the algorithm for each leaf data object of the input composite
data structure to generate the corresponding output data object.

Algorithms are re-executed when their inputs or ivars change. In the latter
case we must assume that each leaf object will yield different output.
However, when the algorithm’s state remains the same but the structure of the
composite input changes, we could save a lot of processing time by not
re-computing blocks for which nothing has changed.


My proposal is to cache pairs of input and output leaf blocks so that output
blocks can be reused instead of recomputed.

A new class vtkDataObjectCache stores pairs of leaf vtkDataObject pointers.
Its interface is tailored to working with vtkCompositeDataSet objects during
the execution of an algorithm:

// Empties the cache.
bool Clear();
// Updates the cache based only on the input data set.
bool Update( vtkCompositeDataSet* in );
// Updates the output objects in the cache.
bool Finalize( vtkCompositeDataSet* in, vtkCompositeDataSet* out );
// Finds the output leaf object associated with the input leaf object
// that the iterator points to.
vtkDataObject* FindObject( vtkCompositeDataIterator* iter );


The vtkCompositeDataPipeline can use a vtkDataObjectCache for reuse of output
leaf objects under the following conditions:

  1. The algorithm is not composite-aware (as deduced from the required input
    data types);
  2. The input is not flagged to be released (by the GlobalReleaseDataFlag or
    the RELEASE_DATA property);
  3. There is only one input composite data object.

A protected method could be added to vtkCompositeDataPipeline to check these
conditions.

Before iterating over the input data set, the cache is updated with the
current state of the input dataset:

  • If the algorithm’s MTime is later than that of the cache, the cache is
    cleared.
  • If there are other input data objects besides the input composite data
    object, and any of them has an MTime later than that of the cache, the cache
    is cleared.
  • If the cache is not empty and has not been cleared, it is updated against
    the input composite object, removing invalid cached items. An item is
    valid only if both the input and output leaf objects exist and the output
    leaf object is newer than the input leaf object (MTime comparison).
    If the cache changes, Modified is called.

During execution, the cache is queried while iterating over the input leaf
objects:

  • If an existing output object is found, it is used as the output;
  • If no input-output object pair exists, the algorithm is invoked.

At the end of execution the cache is finalized by iterating over the input and
output data objects and storing all output leaf data objects.
If the cache changes, Modified is called.

If the algorithm satisfies the above conditions but has multiple outputs a cache
is needed for each output.


I have several vtkPassInputTypeAlgorithm instantiations that are
composite-aware only in a limited sense, because the processing details depend
on the meta-data of a block. These algorithms could also benefit from caching,
but they are not covered by the scenario above.

By implementing vtkDataObjectCache as a separate object, it can also be used by
algorithms that are composite-data-aware or have multiple (composite) inputs.
These algorithms need to store the vtkDataObjectCache as a member and populate
it during or at the end of the RequestData call, and the conditions for reuse
of leaf objects must be implemented locally. Specifically, the abstract base
class vtkPassInputTypeAlgorithm could be modified to hold a vtkDataObjectCache
which gets populated/used in the RequestDataObject method.

Controlling the use of caching

If caching is not wanted, a global flag and local ivar on vtkAlgorithm (e.g.
GlobalDataObjectCacheFlag, and vtkAlgorithm::SetDataObjectCache()) could be
used to switch it on/off for all or specific algorithms.


There are a couple of points I would like to discuss:

  1. What are your thoughts on the feasibility and usefulness?
  2. Is this proposal generally useful enough to justify implementation in
    vtkCompositeDataPipeline, or should we create a vtkCachedCompositeDataPipeline?
  3. Is this going to work for all composite data structures, or should it be
    limited to vtkDataObjectTree implementations?
  4. Are there more restrictions to using the cache in the
    vtkCompositeDataPipeline other than the ones mentioned?
  5. If an error occurs during algorithm execution should anything special
    happen to the cache?
  6. The vtkCompositeDataPipeline is already a fairly complex class, adding this
    caching behavior complicates it even more. My current understanding is that
    I can do all cache work in ExecuteEach but I may be wrong …
  7. If the cache is integrated in vtkCompositeDataPipeline,
    vtkThreadedCompositeDataPipeline can/must use it as well, but I have not
    looked at how/whether the cache behavior should be implemented to safely
    work with the threaded pipeline, and I don’t have a lot of experience with
    threaded pipelines.
If this proposal is supported I would very much appreciate some additional
help/support with the details of design and implementation.

Check this proposal out:

I think the objectives are similar for both proposals.

Oh wow. It’s great to see that work is going on already. I’m a VTK user, so I tend to neglect the ParaView Discourse site.

Can you make a crude estimate about when the cache implementation is going to be available?

The implementation of the cache is one part of my proposal, the other is how to use it to avoid calling algorithms in the composite data pipeline. Do you have thoughts or implementation plans for that part already too?

I am not actively working on it, I am afraid. There’s no direct funding for it, so I was just playing with it in my spare time.

I am not sure this should be necessary. If an algorithm can short circuit its computation if data is found in cache, then we won’t have to worry about skipping the algorithm for leaf nodes for which it has already computed results. What am I missing?

You’re not missing anything, I think; we’re just using different words for the same issue.