I am very used to the idea of pipelines in VTK, but I end up following this convention for a loop.
I have always been unclear on the proper way to use them: some filters allow it and others don't? Can the pipelines recursively update when connected with ports?
for file_ in files:
    reader = vtk.vtkXMLPolyDataReader()
    reader.SetFileName(file_)
    reader.Update()

    normals = vtk.vtkPolyDataNormals()
    normals.SetInputConnection(reader.GetOutputPort())
    normals.Update()

    writer = vtk.vtkOBJWriter()
    writer.SetInputConnection(normals.GetOutputPort())
    writer.SetFileName("new_name_{0}".format(file_))
    # ..., etc.
I think with pipelines I should be able to do something like this:
reader = vtk.vtkXMLPolyDataReader()
normals = vtk.vtkPolyDataNormals()
normals.SetInputConnection(reader.GetOutputPort())
writer = vtk.vtkOBJWriter()
writer.SetInputConnection(normals.GetOutputPort())

for file_ in input_test:
    reader.SetFileName(file_)
    writer.SetFileName("new_name_{0}".format(file_))
    writer.Update()
Is that correct, or is there a better way? Obviously I am going to test it out now that I have written the example, but I'd still like to understand it.
I was able to get the second code snippet working and it makes sense that it does.
I think the thing I have a hard time wrapping my head around is the correct mental model for when the updates happen and how they happen. "Recursively"? And in what instances does the approach fail?
I know vaguely that there has been some work to separate data changes from functional changes, i.e. input data versus a filter flag. How does that fit into the correct mental model to have for pipelines?
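For context, VTK's pipeline is demand-driven rather than recursive: calling Update() on a downstream object propagates the request upstream, and each filter re-executes only if one of its inputs (or its own parameters) was modified after its last execution, based on modification times. A toy sketch of that idea in pure Python (no VTK; all class and function names here are invented for illustration):

```python
# Toy model of a demand-driven pipeline: each object carries a
# modification time (like VTK's MTime), and update() walks upstream,
# re-executing a stage only when something changed since its last run.

_clock = 0
def _tick():
    global _clock
    _clock += 1
    return _clock

class ToySource:
    def __init__(self):
        self.mtime = _tick()      # bumped whenever a parameter changes
        self.exec_time = 0        # when we last produced output
        self.filename = None
        self.output = None
        self.runs = 0             # count of actual executions

    def set_filename(self, name):
        self.filename = name
        self.mtime = _tick()      # analogous to VTK's Modified()

    def update(self):
        if self.mtime > self.exec_time:       # stale -> re-execute
            self.output = "data({0})".format(self.filename)
            self.runs += 1
            self.exec_time = _tick()

class ToyFilter(ToySource):
    def __init__(self, upstream):
        super().__init__()
        self.upstream = upstream

    def update(self):
        self.upstream.update()    # the request propagates upstream first
        if (self.upstream.exec_time > self.exec_time
                or self.mtime > self.exec_time):
            self.output = "filtered[{0}]".format(self.upstream.output)
            self.runs += 1
            self.exec_time = _tick()

reader = ToySource()
filt = ToyFilter(reader)

reader.set_filename("a.vtp")
filt.update()                     # both stages execute
filt.update()                     # nothing changed: no re-execution
reader.set_filename("b.vtp")
filt.update()                     # both stages re-execute
print(reader.runs, filt.runs)     # -> 2 2
```

This is why the looped-SetFileName pattern works: changing the file name bumps the reader's modification time, so the next Update() on the writer re-runs the whole chain, while a second Update() with nothing changed is a no-op.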
There is no recursion taking place there. The longer learning curve is a characteristic of pipeline-oriented APIs. A pipeline makes building a scene more difficult, since it is a lower-level design. That's the price you pay in exchange for making things easier to parallelize and scale up. If you want to build scenes easily, you should go for a scene-graph-oriented API.
I recommend implementing all the examples to get the hang of it. And since you're using Python, it is very easy to run experiments with VTK.
You also have to think about how a 3D API works. To achieve high performance, it avoids the so-called immediate mode whenever possible. Immediate mode is the command-by-command style of issuing calls to OpenGL to render a scene that we first learn in Computer Graphics 101 in those glut exercises (by glut I mean the OpenGL Utility Toolkit, not the muscle!). Immediate mode is easier to understand, but it is a naïve approach.
Professional 3D applications never use immediate-mode design in their code. Instead, they use what is called retained mode. To put it simply, in one part of the code you feed things to the graphics card such as geometry, textures, tessellation, etc. In another part of the code you set up the scene rendering. This non-sequential thinking makes things a bit difficult to understand for the beginner, but you become familiar with it with enough practice. Hence, high-performance APIs like VTK, OpenInventor, Ogre3D, etc. are designed to interface with OpenGL in retained mode.
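The structural difference can be sketched without any real graphics calls. In this pure-Python toy (all names invented for illustration), immediate mode re-submits every primitive on every frame, while retained mode uploads geometry once and replays a cheap draw list each frame:

```python
# Toy contrast between immediate-mode and retained-mode rendering.
# No actual GPU work happens; we just count the per-frame commands.

def draw_immediate(triangles):
    """Immediate mode: geometry is re-sent to the 'card' every frame."""
    commands = []
    for tri in triangles:
        commands.append(("upload", tri))   # uploaded again each frame
        commands.append(("draw", tri))
    return commands

class RetainedScene:
    """Retained mode: upload once, then render from stored buffers."""
    def __init__(self):
        self.buffers = []

    def add_geometry(self, tri):
        self.buffers.append(tri)           # one-time upload, like a VBO

    def render(self):
        # per frame: only cheap draw calls referencing stored buffers
        return [("draw_buffer", i) for i in range(len(self.buffers))]

tris = ["t{0}".format(i) for i in range(3)]

frame_im = draw_immediate(tris)            # 2 commands per triangle, every frame

scene = RetainedScene()
for t in tris:
    scene.add_geometry(t)                  # setup phase, done once
frame_rm = scene.render()                  # 1 command per buffer, every frame
```

The two phases of the retained version (feeding the scene, then rendering it) mirror the non-sequential split described above.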
Here's a more in-depth look at the pros and cons of both modes: https://stackoverflow.com/questions/6733934/what-does-immediate-mode-mean-in-opengl . Notice how the code in immediate mode is quite simple. Code in retained mode that renders the same scene is much more complex, but it certainly performs much better as you scale the scene up.