How to render multiple volumes with correct z-order?

Hello everyone, I’m new to VTK and I have a very basic problem that I can’t find a way to solve.
I’m trying to display two non-overlapping volumes by adding two vtkVolume objects to one renderer. However, the volume that is added later is always drawn in front of the other one. I tried adding other kinds of 3D objects as well, and only volumes have this z-order issue.
Below is an example that shows the problem:
[animation]

import vtkmodules.all as vtk

render = vtk.vtkRenderer()
render.SetBackground(25 / 255, 25 / 255, 64 / 255)

# Volume: two constant-valued 100^3 volumes, offset along x so they do not overlap
for idx in range(2):
    img = vtk.vtkImageData()
    img.SetDimensions(100, 100, 100)
    img.AllocateScalars(vtk.VTK_DOUBLE, 1)
    for i in range(100):
        for j in range(100):
            for k in range(100):
                # fill with a constant value (0 for the first volume, 1000 for the second)
                img.SetScalarComponentFromDouble(i, j, k, 0, 1000 * idx)

    volumeMapper = vtk.vtkSmartVolumeMapper()
    volumeMapper.SetInputData(img)

    volumeActor = vtk.vtkVolume()
    volumeActor.SetMapper(volumeMapper)
    volumeActor.SetPosition(150 * idx, 0, 0)
    render.AddVolume(volumeActor)

# Cube: two reference cubes to show that polydata actors are sorted correctly
for idx in range(2):
    cubeSource = vtk.vtkCubeSource()
    cubeSource.SetXLength(100)
    cubeSource.SetYLength(100)
    cubeSource.SetZLength(100)
    cubeMapper = vtk.vtkPolyDataMapper()
    cubeMapper.SetInputConnection(cubeSource.GetOutputPort())
    cubeActor = vtk.vtkActor()
    cubeActor.SetMapper(cubeMapper)
    cubeActor.GetProperty().SetColor(1.0 * idx, 1.0 * (1 - idx), 0.0)  # RGB components are in [0, 1]
    cubeActor.SetPosition(50 + 150 * idx, 200, 0)
    render.AddActor(cubeActor)

renWin = vtk.vtkRenderWindow()
renWin.AddRenderer(render)
iren = vtk.vtkRenderWindowInteractor()
iren.SetRenderWindow(renWin)
iren.SetInteractorStyle(vtk.vtkInteractorStyleTrackballCamera())
iren.Initialize()
iren.Start()

I added two volumes (black and white) and two cubes (red and green) to the renderer. As you can see, the white volume is always drawn in front of the black volume because it was added later. The ordering between volumes and cubes, and between the two cubes, is correct. Why does this happen, and what is the right way to add multiple volumes? Any idea is appreciated, thank you!

Only a single volume can be rendered correctly using single-volume mappers. You need to use the multi-volume mapper if you want to render multiple volumes. Unfortunately, this mapper still has a number of small issues (custom transforms, clipping, sampling distance computation, etc. do not work well).
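In Python the basic pattern looks roughly like this (a minimal sketch, not tested here; img0 and img1 stand in for your two vtkImageData objects and render for your renderer, and each volume still needs its color/opacity transfer functions set up):

multiVolMapper = vtk.vtkGPUVolumeRayCastMapper()  # vtkMultiVolume works with the GPU ray-cast mapper
multiVolume = vtk.vtkMultiVolume()
multiVolume.SetMapper(multiVolMapper)
for idx, img in enumerate([img0, img1]):         # img0/img1: placeholder names for your datasets
    vol = vtk.vtkVolume()                        # carries the per-volume transform and property
    vol.SetPosition(150 * idx, 0, 0)
    multiVolMapper.SetInputDataObject(idx, img)  # one mapper input port per volume
    multiVolume.SetVolume(vol, idx)              # register the volume on the same port index
render.AddVolume(multiVolume)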


I’m having the same problem here, but in vtk.js rather than Python. I don’t see a multi-volume mapper in my case… is it going to be implemented any time soon?

According to this discussion, multi-volume rendering might be implemented in vtk.js sometime in the future.

Thanks for the help! I switched to “vtkGPUVolumeRayCastMapper” and “vtkMultiVolume”, and now the depth order is correct. Below is the updated code in case anyone needs my solution:

multiVolMapper = vtk.vtkGPUVolumeRayCastMapper()
multiVolActor = vtk.vtkMultiVolume()
multiVolActor.SetMapper(multiVolMapper)
for idx, vol_path in enumerate(["./test_data/1.tif", "./test_data/2.tif"]):
    vReader = vtk.vtkTIFFReader()
    vReader.SetFileName(vol_path)
    volumeActor = vtk.vtkVolume()
    volumeActor.SetPosition(150 * idx, 0, 0)
    volumeActor.GetProperty().SetColor(vtk.vtkColorTransferFunction())
    multiVolMapper.SetInputConnection(idx, vReader.GetOutputPort())  # one mapper input port per volume
    multiVolActor.SetVolume(volumeActor, idx)  # register the volume on the same port index
render.AddVolume(multiVolActor)
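(The color transfer function above is left empty for brevity; for the volumes to actually show up, each volume property also needs color and scalar-opacity points added inside the loop, for example with placeholder scalar values like these:)

    ctf = vtk.vtkColorTransferFunction()
    ctf.AddRGBPoint(0.0, 0.0, 0.0, 0.0)    # scalar value -> RGB (placeholder values)
    ctf.AddRGBPoint(255.0, 1.0, 1.0, 1.0)
    otf = vtk.vtkPiecewiseFunction()
    otf.AddPoint(0.0, 0.0)                 # scalar value -> opacity (placeholder values)
    otf.AddPoint(255.0, 0.5)
    volumeActor.GetProperty().SetColor(ctf)
    volumeActor.GetProperty().SetScalarOpacity(otf)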

There is still one problem left. In my application I need to set the volume blend mode to MIP, but when I call “multiVolMapper.SetBlendModeToMaximumIntensity()” I get the following error. Is it because it’s not implemented yet? It works well in the single-volume case.

ERROR: In vtkShaderProgram.cxx, line 452
vtkShaderProgram (000001400AA257A0): 1: #version 150
2: #ifdef GL_ES
3: #ifdef GL_FRAGMENT_PRECISION_HIGH
4: precision highp float;
5: precision highp sampler2D;
6: precision highp sampler3D;
7: #else
8: precision mediump float;
9: precision mediump sampler2D;
10: precision mediump sampler3D;
11: #endif
12: #define texelFetchBuffer texelFetch
13: #define texture1D texture
14: #define texture2D texture
15: #define texture3D texture
16: #else // GL_ES
17: #define highp
18: #define mediump
19: #define lowp
20: #if __VERSION__ == 150
21: #define texelFetchBuffer texelFetch
22: #define texture1D texture
23: #define texture2D texture
24: #define texture3D texture
25: #endif
26: #endif // GL_ES
27: #define varying in
28: 
29: 
30: /*=========================================================================
31: 
32:   Program:   Visualization Toolkit
33:   Module:    raycasterfs.glsl
34: 
35:   Copyright (c) Ken Martin, Will Schroeder, Bill Lorensen
36:   All rights reserved.
37:   See Copyright.txt or http://www.kitware.com/Copyright.htm for details.
38: 
39:      This software is distributed WITHOUT ANY WARRANTY; without even
40:      the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
41:      PURPOSE.  See the above copyright notice for more information.
42: 
43: =========================================================================*/
44: 
45: //////////////////////////////////////////////////////////////////////////////
46: ///
47: /// Inputs
48: ///
49: //////////////////////////////////////////////////////////////////////////////
50: 
51: /// 3D texture coordinates form vertex shader
52: in vec3 ip_textureCoords;
53: in vec3 ip_vertexPos;
54: 
55: //////////////////////////////////////////////////////////////////////////////
56: ///
57: /// Outputs
58: ///
59: //////////////////////////////////////////////////////////////////////////////
60: 
61: vec4 g_fragColor = vec4(0.0);
62: 
63: //////////////////////////////////////////////////////////////////////////////
64: ///
65: /// Uniforms, attributes, and globals
66: ///
67: //////////////////////////////////////////////////////////////////////////////
68: vec3 g_dirStep;
69: float g_lengthStep = 0.0;
70: vec4 g_srcColor;
71: vec4 g_eyePosObj;
72: bool g_exit;
73: bool g_skip;
74: float g_currentT;
75: float g_terminatePointMax;
76: 
77: // These describe the entire ray for this scene, not just the current depth
78: // peeling segment. These are texture coordinates.
79: vec3 g_rayOrigin; // Entry point of volume or clip point
80: vec3 g_rayTermination; // Termination point (depth, clip, etc)
81: 
82: // These describe the current segment. If not peeling, they are initialized to
83: // the ray endpoints.
84: vec3 g_dataPos;
85: vec3 g_terminatePos;
86: 
87: float g_jitterValue = 0.0;
88: 
89: 
90: 
91: out vec4 fragOutput0;
92: 
93: 
94: uniform sampler3D in_volume[2];
95: uniform vec4 in_volume_scale[2];
96: uniform vec4 in_volume_bias[2];
97: uniform int in_noOfComponents;
98: 
99: uniform sampler2D in_depthSampler;
100: 
101: // Camera position
102: uniform vec3 in_cameraPos;
103: uniform mat4 in_volumeMatrix[3];
104: uniform mat4 in_inverseVolumeMatrix[3];
105: uniform mat4 in_textureDatasetMatrix[3];
106: uniform mat4 in_inverseTextureDatasetMatrix[3];
107: uniform mat4 in_textureToEye[3];
108: uniform vec3 in_texMin[3];
109: uniform vec3 in_texMax[3];
110: uniform mat4 in_cellToPoint[3];
111: // view and model matrices
112: uniform mat4 in_projectionMatrix;
113: uniform mat4 in_inverseProjectionMatrix;
114: uniform mat4 in_modelViewMatrix;
115: uniform mat4 in_inverseModelViewMatrix;
116: in mat4 ip_inverseTextureDataAdjusted;
117: 
118: // Ray step size
119: uniform vec3 in_cellStep[2];
120: uniform vec2 in_scalarsRange[8];
121: uniform vec3 in_cellSpacing[2];
122: 
123: // Sample distance
124: uniform float in_sampleDistance;
125: 
126: // Scales
127: uniform vec2 in_windowLowerLeftCorner;
128: uniform vec2 in_inverseOriginalWindowSize;
129: uniform vec2 in_inverseWindowSize;
130: uniform vec3 in_textureExtentsMax;
131: uniform vec3 in_textureExtentsMin;
132: 
133: // Material and lighting
134: uniform vec3 in_diffuse[4];
135: uniform vec3 in_ambient[4];
136: uniform vec3 in_specular[4];
137: uniform float in_shininess[4];
138: 
139: // Others
140: vec3 g_rayJitter = vec3(0.0);
141: 
142: uniform vec2 in_averageIPRange;
143: vec4 g_eyePosObjs[2];
144: 
145: 
146:       
147:  const float g_opacityThreshold = 1.0 - 1.0 / 255.0;
148: 
149: 
150: 
151: 
152: 
153: #define EPSILON 0.001
154: 
155: // Computes the intersection between a ray and a box
156: // The box should be axis aligned so we only give two arguments
157: struct Hit
158: {
159:   float tmin;
160:   float tmax;
161: };
162: 
163: struct Ray
164: {
165:   vec3 origin;
166:   vec3 dir;
167:   vec3 invDir;
168: };
169: 
170: bool BBoxIntersect(const vec3 boxMin, const vec3 boxMax, const Ray r, out Hit hit)
171: {
172:   vec3 tbot = r.invDir * (boxMin - r.origin);
173:   vec3 ttop = r.invDir * (boxMax - r.origin);
174:   vec3 tmin = min(ttop, tbot);
175:   vec3 tmax = max(ttop, tbot);
176:   vec2 t = max(tmin.xx, tmin.yz);
177:   float t0 = max(t.x, t.y);
178:   t = min(tmax.xx, tmax.yz);
179:   float t1 = min(t.x, t.y);
180:   hit.tmin = t0;
181:   hit.tmax = t1;
182:   return t1 > max(t0, 0.0);
183: }
184: 
185: // As BBoxIntersect requires the inverse of the ray coords,
186: // this function is used to avoid numerical issues
187: void safe_0_vector(inout Ray ray)
188: {
189:   if(abs(ray.dir.x) < EPSILON) ray.dir.x = sign(ray.dir.x) * EPSILON;
190:   if(abs(ray.dir.y) < EPSILON) ray.dir.y = sign(ray.dir.y) * EPSILON;
191:   if(abs(ray.dir.z) < EPSILON) ray.dir.z = sign(ray.dir.z) * EPSILON;
192: }
193: 
194: // the phase function should be normalized to 4pi for compatibility with surface rendering
195: //VTK::PhaseFunction::Dec
196: 
197: uniform sampler2D in_colorTransferFunc_0[1];
198: uniform sampler2D in_colorTransferFunc_1[1];
199: 
200: 
201:         
202:  bool l_firstValue;        
203:  vec4 l_maxValue;
204: 
205: 
206: 
207: 
208: 
209: 
210: 
211: uniform sampler3D in_transfer2DYAxis;
212: uniform vec4 in_transfer2DYAxis_scale;
213: uniform vec4 in_transfer2DYAxis_bias;
214: 
215: 
216: float computeGradientOpacity(vec4 grad, const in sampler2D gradientTF)
217: {
218:   return texture2D(gradientTF, vec2(grad.w, 0.0)).r;
219: }
220: 
221: 
222: uniform sampler2D in_opacityTransferFunc_0[1];
223: uniform sampler2D in_opacityTransferFunc_1[1];
224: float computeOpacity(vec4 scalar, const in sampler2D opacityTF)
225: {
226:   return texture2D(opacityTF, vec2(scalar.w, 0)).r;
227: }
228: 
229: 
230: //VTK::ComputeRGBA2DWithGradient::Dec
231: 
232: vec4 computeGradient(in vec3 texPos, in int c, in sampler3D volume, in int index)
233: {
234:   return vec4(0.0);
235: }
236: 
237: 
238: //VTK::ComputeDensityGradient::Dec
239: 
240: //VTK::ComputeVolumetricShadow::Dec
241: 
242:       
243: vec4 computeLighting(vec3 texPos, vec4 color, const in sampler3D volume, const in sampler2D opacityTF, const int volIdx, int component)      
244:   {      
245:   vec4 finalColor = vec4(0.0);
246: 
247:   finalColor = vec4(color.rgb, 0.0);      
248:   finalColor.a = color.a;      
249:   return clamp(finalColor, 0.0, 1.0);      
250:   }
251: 
252: vec4 computeColor(vec3 texPos, vec4 scalar, float opacity, const in sampler2D colorTF, const in sampler3D volume, const in sampler2D opacityTF, const int volIdx)
253: 
254: {
255:   return clamp(computeLighting(texPos, vec4(texture2D(colorTF,
256:                          vec2(scalar.w, 0.0)).xyz, opacity), volume, opacityTF,volIdx, 0), 0.0, 1.0);
257: }
258: 
259: 
260:         
261: vec3 computeRayDirection()        
262:   {        
263:   return normalize(ip_vertexPos.xyz - g_eyePosObj.xyz);        
264:   }
265: 
266: //VTK::Picking::Dec
267: 
268: //VTK::RenderToImage::Dec
269: 
270: //VTK::DepthPeeling::Dec
271: 
272: uniform float in_scale;
273: uniform float in_bias;
274: 
275: //////////////////////////////////////////////////////////////////////////////
276: ///
277: /// Helper functions
278: ///
279: //////////////////////////////////////////////////////////////////////////////
280: 
281: /**
282:  * Transform window coordinate to NDC.
283:  */
284: vec4 WindowToNDC(const float xCoord, const float yCoord, const float zCoord)
285: {
286:   vec4 NDCCoord = vec4(0.0, 0.0, 0.0, 1.0);
287: 
288:   NDCCoord.x = (xCoord - in_windowLowerLeftCorner.x) * 2.0 *
289:     in_inverseWindowSize.x - 1.0;
290:   NDCCoord.y = (yCoord - in_windowLowerLeftCorner.y) * 2.0 *
291:     in_inverseWindowSize.y - 1.0;
292:   NDCCoord.z = (2.0 * zCoord - (gl_DepthRange.near + gl_DepthRange.far)) /
293:     gl_DepthRange.diff;
294: 
295:   return NDCCoord;
296: }
297: 
298: /**
299:  * Transform NDC coordinate to window coordinates.
300:  */
301: vec4 NDCToWindow(const float xNDC, const float yNDC, const float zNDC)
302: {
303:   vec4 WinCoord = vec4(0.0, 0.0, 0.0, 1.0);
304: 
305:   WinCoord.x = (xNDC + 1.f) / (2.f * in_inverseWindowSize.x) +
306:     in_windowLowerLeftCorner.x;
307:   WinCoord.y = (yNDC + 1.f) / (2.f * in_inverseWindowSize.y) +
308:     in_windowLowerLeftCorner.y;
309:   WinCoord.z = (zNDC * gl_DepthRange.diff +
310:     (gl_DepthRange.near + gl_DepthRange.far)) / 2.f;
311: 
312:   return WinCoord;
313: }
314: 
315: /**
316:  * Clamps the texture coordinate vector @a pos to a new position in the set
317:  * { start + i * step }, where i is an integer. If @a ceiling
318:  * is true, the sample located further in the direction of @a step is used,
319:  * otherwise the sample location closer to the eye is used.
320:  * This function assumes both start and pos already have jittering applied.
321:  */
322: vec3 ClampToSampleLocation(vec3 start, vec3 step, vec3 pos, bool ceiling)
323: {
324:   vec3 offset = pos - start;
325:   float stepLength = length(step);
326: 
327:   // Scalar projection of offset on step:
328:   float dist = dot(offset, step / stepLength);
329:   if (dist < 0.) // Don't move before the start position:
330:   {
331:     return start;
332:   }
333: 
334:   // Number of steps
335:   float steps = dist / stepLength;
336: 
337:   // If we're reeaaaaallly close, just round -- it's likely just numerical noise
338:   // and the value should be considered exact.
339:   if (abs(mod(steps, 1.)) > 1e-5)
340:   {
341:     if (ceiling)
342:     {
343:       steps = ceil(steps);
344:     }
345:     else
346:     {
347:       steps = floor(steps);
348:     }
349:   }
350:   else
351:   {
352:     steps = floor(steps + 0.5);
353:   }
354: 
355:   return start + steps * step;
356: }
357: 
358: //////////////////////////////////////////////////////////////////////////////
359: ///
360: /// Ray-casting
361: ///
362: //////////////////////////////////////////////////////////////////////////////
363: 
364: /**
365:  * Global initialization. This method should only be called once per shader
366:  * invocation regardless of whether castRay() is called several times (e.g.
367:  * vtkDualDepthPeelingPass). Any castRay() specific initialization should be
368:  * placed within that function.
369:  */
370: void initializeRayCast()
371: {
372:   /// Initialize g_fragColor (output) to 0
373:   g_fragColor = vec4(0.0);
374:   g_dirStep = vec3(0.0);
375:   g_srcColor = vec4(0.0);
376:   g_exit = false;
377: 
378:           
379:   // Get the 3D texture coordinates for lookup into the in_volume dataset        
380:   g_rayOrigin = ip_textureCoords.xyz;      
381:       
382:   // Eye position in dataset space      
383:   g_eyePosObj = in_inverseVolumeMatrix[0] * vec4(in_cameraPos, 1.0);      
384:   g_eyePosObjs[0] = in_inverseVolumeMatrix[1] * vec4(in_cameraPos, 1.0);      
385:   g_eyePosObjs[1] = in_inverseVolumeMatrix[2] * vec4(in_cameraPos, 1.0);
386:       
387:   // Getting the ray marching direction (in dataset space)      
388:   vec3 rayDir = computeRayDirection();      
389:       
390:   // 2D Texture fragment coordinates [0,1] from fragment coordinates.      
391:   // The frame buffer texture has the size of the plain buffer but       
392:   // we use a fraction of it. The texture coordinate is less than 1 if      
393:   // the reduction factor is less than 1.      
394:   // Device coordinates are between -1 and 1. We need texture      
395:   // coordinates between 0 and 1. The in_depthSampler      
396:   // buffer has the original size buffer.      
397:   vec2 fragTexCoord = (gl_FragCoord.xy - in_windowLowerLeftCorner) *      
398:                       in_inverseWindowSize;      
399:       
400:   // Multiply the raymarching direction with the step size to get the      
401:   // sub-step size we need to take at each raymarching step      
402:   g_dirStep = (ip_inverseTextureDataAdjusted *      
403:               vec4(rayDir, 0.0)).xyz * in_sampleDistance;      
404:   g_lengthStep = length(g_dirStep);      
405:           
406:  float jitterValue = 0.0;          
407:         
408:     g_rayJitter = g_dirStep;        
409:         
410:   g_rayOrigin += g_rayJitter;        
411:       
412:   // Flag to determine if voxel should be considered for the rendering      
413:   g_skip = false;
414: 
415:   
416: 
417:         
418:   // Flag to indicate if the raymarch loop should terminate       
419:   bool stop = false;      
420:       
421:   g_terminatePointMax = 0.0;      
422:       
423:   vec4 l_depthValue = texture2D(in_depthSampler, fragTexCoord);      
424:   // Depth test      
425:   if(gl_FragCoord.z >= l_depthValue.x)      
426:     {      
427:     discard;      
428:     }      
429:       
430:   // color buffer or max scalar buffer have a reduced size.      
431:   fragTexCoord = (gl_FragCoord.xy - in_windowLowerLeftCorner) *      
432:                  in_inverseOriginalWindowSize;      
433:       
434:   // Compute max number of iterations it will take before we hit      
435:   // the termination point      
436:       
437:   // Abscissa of the point on the depth buffer along the ray.      
438:   // point in texture coordinates      
439:   vec4 rayTermination = WindowToNDC(gl_FragCoord.x, gl_FragCoord.y, l_depthValue.x);      
440:       
441:   // From normalized device coordinates to eye coordinates.      
442:   // in_projectionMatrix is inversed because of way VT      
443:   // From eye coordinates to texture coordinates      
444:   rayTermination = ip_inverseTextureDataAdjusted *      
445:                     in_inverseVolumeMatrix[0] *      
446:                     in_inverseModelViewMatrix *      
447:                     in_inverseProjectionMatrix *      
448:                     rayTermination;      
449:   g_rayTermination = rayTermination.xyz / rayTermination.w;      
450:       
451:   // Setup the current segment:      
452:   g_dataPos = g_rayOrigin;      
453:   g_terminatePos = g_rayTermination;      
454:       
455:   g_terminatePointMax = length(g_terminatePos.xyz - g_dataPos.xyz) /      
456:                         length(g_dirStep);      
457:   g_currentT = 0.0;
458: 
459:   
460: 
461:   //VTK::RenderToImage::Init
462: 
463:   //VTK::DepthPass::Init
464: 
465:   //VTK::Matrices::Init
466: 
467:   g_jitterValue = jitterValue;
468: }
469: 
470: /**
471:  * March along the ray direction sampling the volume texture.  This function
472:  * takes a start and end point as arguments but it is up to the specific render
473:  * pass implementation to use these values (e.g. vtkDualDepthPeelingPass). The
474:  * mapper does not use these values by default, instead it uses the number of
475:  * steps defined by g_terminatePointMax.
476:  */
477: vec4 castRay(const float zStart, const float zEnd)
478: {
479:   //VTK::DepthPeeling::Ray::Init
480: 
481:   
482: 
483:   //VTK::DepthPeeling::Ray::PathCheck
484: 
485:           
486:   // We get data between 0.0 - 1.0 range        
487:   l_firstValue = true;        
488:   l_maxValue = vec4(0.0);
489: 
490:   /// For all samples along the ray
491:   while (!g_exit)
492:   {
493:           
494:     g_skip = false;
495: 
496:     
497: 
498:     
499: 
500:     
501: 
502:     //VTK::PreComputeGradients::Impl
503: 
504:         if (!g_skip)
505:     {
506:       vec3 texPos;
507:       texPos = (in_cellToPoint[1] * in_inverseTextureDatasetMatrix[1] * in_inverseVolumeMatrix[1] *
508:         in_volumeMatrix[0] * in_textureDatasetMatrix[0] * vec4(g_dataPos.xyz, 1.0)).xyz;
509:       if ((all(lessThanEqual(texPos, vec3(1.0))) &&
510:         all(greaterThanEqual(texPos, vec3(0.0)))))
511:       {
512:         vec4 scalar = texture3D(in_volume[0], texPos);
513:         scalar = scalar * in_volume_scale[0] + in_volume_bias[0];
514:         scalar = vec4(scalar.r);
515:         g_srcColor = vec4(0.0);
516:         g_srcColor.a = computeOpacity(scalar,in_opacityTransferFunc_0[0]);
517:         if (g_srcColor.a > 0.0)
518:         {
519:           g_srcColor = computeColor(texPos, scalar, g_srcColor.a, in_colorTransferFunc_0[0], in_volume[0], in_opacityTransferFunc_0[0], 0);
520:           g_srcColor.rgb *= g_srcColor.a;
521:           g_fragColor = (1.0f - g_fragColor.a) * g_srcColor + g_fragColor;
522:         }
523:       }
524: 
525:       texPos = (in_cellToPoint[2] * in_inverseTextureDatasetMatrix[2] * in_inverseVolumeMatrix[2] *
526:         in_volumeMatrix[0] * in_textureDatasetMatrix[0] * vec4(g_dataPos.xyz, 1.0)).xyz;
527:       if ((all(lessThanEqual(texPos, vec3(1.0))) &&
528:         all(greaterThanEqual(texPos, vec3(0.0)))))
529:       {
530:         vec4 scalar = texture3D(in_volume[1], texPos);
531:         scalar = scalar * in_volume_scale[1] + in_volume_bias[1];
532:         scalar = vec4(scalar.r);
533:         g_srcColor = vec4(0.0);
534:         g_srcColor.a = computeOpacity(scalar,in_opacityTransferFunc_1[0]);
535:         if (g_srcColor.a > 0.0)
536:         {
537:           g_srcColor = computeColor(texPos, scalar, g_srcColor.a, in_colorTransferFunc_1[0], in_volume[1], in_opacityTransferFunc_1[0], 1);
538:           g_srcColor.rgb *= g_srcColor.a;
539:           g_fragColor = (1.0f - g_fragColor.a) * g_srcColor + g_fragColor;
540:         }
541:       }
542: 
543:     }
544: 
545: 
546:     //VTK::RenderToImage::Impl
547: 
548:     //VTK::DepthPass::Impl
549: 
550:     /// Advance ray
551:     g_dataPos += g_dirStep;
552: 
553:           
554:     if(any(greaterThan(max(g_dirStep, vec3(0.0))*(g_dataPos - in_texMax[0]),vec3(0.0))) ||      
555:       any(greaterThan(min(g_dirStep, vec3(0.0))*(g_dataPos - in_texMin[0]),vec3(0.0))))      
556:       {      
557:       break;      
558:       }      
559:       
560:     // Early ray termination      
561:     // if the currently composited colour alpha is already fully saturated      
562:     // we terminated the loop or if we have hit an obstacle in the      
563:     // direction of they ray (using depth buffer) we terminate as well.      
564:     if((g_fragColor.a > g_opacityThreshold) ||       
565:        g_currentT >= g_terminatePointMax)      
566:       {      
567:       break;      
568:       }      
569:     ++g_currentT;
570:   }
571: 
572:            
573:   g_srcColor = computeColor(l_maxValue,         
574:                             computeOpacity(l_maxValue));         
575:   g_fragColor.rgb = g_srcColor.rgb * g_srcColor.a;         
576:   g_fragColor.a = g_srcColor.a;
577: 
578:   return g_fragColor;
579: }
580: 
581: /**
582:  * Finalize specific modes and set output data.
583:  */
584: void finalizeRayCast()
585: {
586:   
587: 
588:   
589: 
590:   
591: 
592:   
593: 
594:   //VTK::Picking::Exit
595: 
596:   g_fragColor.r = g_fragColor.r * in_scale + in_bias * g_fragColor.a;
597:   g_fragColor.g = g_fragColor.g * in_scale + in_bias * g_fragColor.a;
598:   g_fragColor.b = g_fragColor.b * in_scale + in_bias * g_fragColor.a;
599:   fragOutput0 = g_fragColor;
600: 
601:   //VTK::RenderToImage::Exit
602: 
603:   //VTK::DepthPass::Exit
604: }
605: 
606: //////////////////////////////////////////////////////////////////////////////
607: ///
608: /// Main
609: ///
610: //////////////////////////////////////////////////////////////////////////////
611: void main()
612: {
613:       
614:   initializeRayCast();    
615:   castRay(-1.0, -1.0);    
616:   finalizeRayCast();
617: }


ERROR: In vtkShaderProgram.cxx, line 453
vtkShaderProgram (000001400AA257A0): 0(574) : error C1103: too few parameters in function call
0(573) : error C7011: implicit cast from "vec4" to "vec3"
0(574) : error C7011: implicit cast from "float" to "vec4"
0(574) : error C1103: too few parameters in function call


ERROR: In vtkOpenGLGPUVolumeRayCastMapper.cxx, line 2833
vtkOpenGLGPUVolumeRayCastMapper (0000014004855600): Shader failed to compile

tif_test_data.zip (322.7 KB)

This is one of several bugs/limitations of multi-volume rendering. The fix might be easy (if this mode simply was never tested) or very hard (if the whole rendering technique needs to be reimplemented for multi-volume rendering). Judging from the shader dump above, the generated MIP finalization code (lines 573-574) still calls computeColor()/computeOpacity() with their single-volume signatures, which is why the compilation fails with “too few parameters in function call”.