Simplifying algorithms by making parallel/composite-data support universal

Time and time again we run into implementations of algorithms of this form:

  • a basic implementation A
  • a subclass of A, say B, to handle composite-dataset input
  • a subclass of B, say C, to handle parallel/distributed use-cases

Such implementations are not only tedious to write and debug, but they also obfuscate the algorithm to the point where its intent is barely understandable. The XML readers/writers and their PXML counterparts are a good example; the same is true for filters like vtkProbeFilter, vtkCompositeProbeFilter, and vtkPProbeFilter. We really need to start moving away from such artificial splits.

To facilitate that, here’s what I believe is needed:

  • Move the parallel controller and basic distributed-processing support to a core module, so that algorithms can depend on the global controller without any hesitation.
  • Allow core filters to use DIY as appropriate, which makes DIY a core dependency. DIY makes composite + distributed processing incredibly easy, and hence we should use it as freely as possible too.

Thoughts? What do folks think?