2D Transfer Function - Shader failed to compile - ScatteringBlending

Hi all,

I need assistance with my vtkGPUVolumeRayCastMapper pipeline in Python. I want to use a 2D transfer function and render the volume as realistically as possible, using the following parameters (a simplified sketch of my setup follows the list):

  • Global Illumination Reach
  • VolumetricScatteringBlending
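
Roughly, the relevant part of my setup looks like this (simplified sketch; image_data and tf_2d stand in for my actual volume data and my 2D transfer-function image, which are built elsewhere):

import vtk

mapper = vtk.vtkGPUVolumeRayCastMapper()
mapper.SetInputData(image_data)                # image_data: my vtkImageData volume (loading not shown)
mapper.SetGlobalIlluminationReach(0.6)
mapper.SetVolumetricScatteringBlending(0.1)    # the error appears as soon as this is nonzero

volume_property = vtk.vtkVolumeProperty()
volume_property.ShadeOn()
volume_property.SetTransferFunctionModeTo2D()
volume_property.SetTransferFunction2D(tf_2d)   # tf_2d: 4-component vtkImageData lookup table

volume = vtk.vtkVolume()
volume.SetMapper(mapper)
volume.SetProperty(volume_property)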

However, when I use a nonzero value for VolumetricScatteringBlending, the program crashes and I get a repeating error message:

Shader failed to compile

781: //////////////////////////////////////////////////////////////////////////////
782: void main()
783: {
784:       
785:   initializeRayCast();    
786:   castRay(-1.0, -1.0);    
787:   finalizeRayCast();
788: }

2023-06-14 10:54:21.960 (   8.762s) [                ]   vtkShaderProgram.cxx:453    ERR| vtkShaderProgram (000001F6DBCD66D0): 0(367) : error C0000: syntax error, unexpected identifier, expecting ',' or ';' at token "shadow"

2023-06-14 10:54:21.970 (   8.773s) [                ]vtkOpenGLGPUVolumeRayCa:2833   ERR| vtkOpenGLGPUVolumeRayCastMapper (000001F6DB2CF6E0): Shader failed to compile

Can anyone help me with this problem? If you need more information, let me know.

Thanks in advance for your help!

Best regards
David

Volumetric scattering is not supported with 2D transfer functions yet.

Thanks for the quick feedback @sankhesh!

I have four follow-up questions:

  1. Why can ParaView handle the volumetric scattering model with a 2D transfer function?
  2. Do you think it’s possible to implement a custom workaround for this, maybe with my own shader replacement? (A rough sketch of what I have in mind follows the list.)
  3. Is support for volumetric scattering with 2D transfer functions already in development?
  4. Do you know of an alternative method to realistically shade the volume?
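
Regarding question 2, what I have in mind is something along these lines (just a sketch; I have not verified that this is the right replacement tag, or that the volume ray cast mapper picks it up this way):

shader_property = volume.GetShaderProperty()
shader_property.AddFragmentShaderReplacement(
    "//VTK::ComputeColor::Dec",   # tag name is a guess, taken from the comments in the generated shader
    True,                         # replace the first occurrence
    my_glsl_snippet,              # my own GLSL code as a Python string (not shown)
    False)                        # do not also replace later occurrences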

Thank you so much for your support!

David

@DMelenberg Glad you checked against ParaView. My brain cells didn’t remember it but yes, scattering is supported with 2D transfer functions. This means you’re running into an issue because of something else. What are the parameters you set on the mapper and property?

Thanks again for the fast reply @sankhesh. I have dumped a few parameters that are identical between two renderings. The only adjustment I made was to change SetVolumetricScatteringBlending from 0 to 0.1; when the program was run again with that value, no rendering occurred.
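
In code, the only difference between the two runs is essentially this (sketch; volume_mapper is simply how I refer to the mapper in my pipeline class):

# 0.0 renders fine, any nonzero value leaves the render window empty
self.volume_mapper.SetVolumetricScatteringBlending(0.1)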

The parameters are written to a JSON file right after calling render_window.Render():

        "Volume Property": {
            "Transfer Function": "_2D_Testing_",
            "Shade": "1",
            "Shading Properties": {
                "Ambient": "0.1",
                "Diffuse": "0.9",
                "Specular": "1.0",
                "Specular Power": "100.0"
            },
            "Scattering Anisotropy": "0.0",
            "Interpolation Type": "Linear",
            "Scalar Opacity Unit Distance": "1.0",
            "Independent Components": "1",
            "####################": ""
        },
        "Volume": {
            "Position": "(-275.0, -275.0, -212.5)",
            "Origin": "(275.0, 275.0, 212.5)",
            "Center": "(0.5000000000000284, 0.5, 0.5)",
            "Orientation": "(-90.0, -0.0, -180.0)",
            "Dimensions": {
                "X": "(-274.00000000000006, 275.0000000000001)",
                "Y": "(-211.50000000000009, 212.50000000000009)",
                "Z": "(-274.00000000000006, 275.00000000000006)"
            },
            "Shader Name": "vtkOpenGLShaderProperty",
            "Volume Scale": "(1.0, 1.0, 1.0)",
            "####################": ""
        },
        "Volume Mapper": {
            "Blend Mode": "0",
            "Volumetric Scattering Blending Mode": "0.10000000149011612",
            "Global Illumination Reach": "0.6000000238418579",
            "Auto Adjust Sample Distances": "1",
            "Sample Distance": "1.0",
            "Algorithm": "vtkOpenGLGPUVolumeRayCastMapper",
            "####################": ""

Furthermore, the rendering backend is OpenGL2.

The fact that my transfer function is a 4-component vtkImageData object, which works fine without volumetric scattering, can’t be the reason, can it?

Thank you for your help!

David

Thanks David.

If you’re using a 2D transfer function, it is expected to be a 4-component float vtkImageData.

The parameters you printed seem fine to me. Could you please share the full error report? It should have printed the dynamically generated shader string with line numbers and the errors themselves.
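
For reference, building such a table can look roughly like this (illustrative sketch only; volume_property and the way the RGBA values are filled are placeholders you would adapt to your pipeline):

import numpy as np
from vtkmodules.vtkCommonDataModel import vtkImageData
from vtkmodules.util import numpy_support

# 256 x 256 table: x axis = scalar value, y axis = gradient magnitude
tf_2d = vtkImageData()
tf_2d.SetDimensions(256, 256, 1)

rgba = np.zeros((256 * 256, 4), dtype=np.float32)
# ... fill rgba[:, 0:3] with color and rgba[:, 3] with opacity ...
tf_2d.GetPointData().SetScalars(numpy_support.numpy_to_vtk(rgba, deep=True))

volume_property.SetTransferFunctionModeTo2D()
volume_property.SetTransferFunction2D(tf_2d)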

Hello @sankhesh, thanks again for your fast reply. Hope you had a great weekend.

To check this, I printed the data type, dimensions, and number of components right where the transfer function is assigned to the volume property:

self.volume_property.SetTransferFunctionModeTo2D()
self.volume_property.SetTransferFunction2D(self.two_dimensional_tf)
print('#############################################')
print(' -->  Transfer-Function-Datatype: ', type(self.two_dimensional_tf))
print(' -->  Transfer-Function-Dimensions: ', self.two_dimensional_tf.GetDimensions())
print(' -->  Transfer-Function-Components: ', self.two_dimensional_tf.GetNumberOfScalarComponents())

With the following result:

#############################################
 -->  Transfer-Function-Datatype:  <class 'vtkmodules.vtkCommonDataModel.vtkImageData'>
 -->  Transfer-Function-Dimensions:  (256, 256, 1)
 -->  Transfer-Function-Components:  4
#############################################

So the data type should be correct. Below you can see the entire error message from the vtkOutputWindow:

ERROR: In vtkShaderProgram.cxx, line 452
vtkShaderProgram (0000023F57F066F0): 1: #version 150
2: #ifdef GL_ES
3: #ifdef GL_FRAGMENT_PRECISION_HIGH
4: precision highp float;
5: precision highp sampler2D;
6: precision highp sampler3D;
7: #else
8: precision mediump float;
9: precision mediump sampler2D;
10: precision mediump sampler3D;
11: #endif
12: #define texelFetchBuffer texelFetch
13: #define texture1D texture
14: #define texture2D texture
15: #define texture3D texture
16: #else // GL_ES
17: #define highp
18: #define mediump
19: #define lowp
20: #if __VERSION__ == 150
21: #define texelFetchBuffer texelFetch
22: #define texture1D texture
23: #define texture2D texture
24: #define texture3D texture
25: #endif
26: #endif // GL_ES
27: #define varying in
28: 
29: 
30: /*=========================================================================
31: 
32:   Program:   Visualization Toolkit
33:   Module:    raycasterfs.glsl
34: 
35:   Copyright (c) Ken Martin, Will Schroeder, Bill Lorensen
36:   All rights reserved.
37:   See Copyright.txt or http://www.kitware.com/Copyright.htm for details.
38: 
39:      This software is distributed WITHOUT ANY WARRANTY; without even
40:      the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
41:      PURPOSE.  See the above copyright notice for more information.
42: 
43: =========================================================================*/
44: 
45: //////////////////////////////////////////////////////////////////////////////
46: ///
47: /// Inputs
48: ///
49: //////////////////////////////////////////////////////////////////////////////
50: 
51: /// 3D texture coordinates form vertex shader
52: in vec3 ip_textureCoords;
53: in vec3 ip_vertexPos;
54: 
55: //////////////////////////////////////////////////////////////////////////////
56: ///
57: /// Outputs
58: ///
59: //////////////////////////////////////////////////////////////////////////////
60: 
61: vec4 g_fragColor = vec4(0.0);
62: 
63: //////////////////////////////////////////////////////////////////////////////
64: ///
65: /// Uniforms, attributes, and globals
66: ///
67: //////////////////////////////////////////////////////////////////////////////
68: vec3 g_dirStep;
69: float g_lengthStep = 0.0;
70: vec4 g_srcColor;
71: vec4 g_eyePosObj;
72: bool g_exit;
73: bool g_skip;
74: float g_currentT;
75: float g_terminatePointMax;
76: 
77: // These describe the entire ray for this scene, not just the current depth
78: // peeling segment. These are texture coordinates.
79: vec3 g_rayOrigin; // Entry point of volume or clip point
80: vec3 g_rayTermination; // Termination point (depth, clip, etc)
81: 
82: // These describe the current segment. If not peeling, they are initialized to
83: // the ray endpoints.
84: vec3 g_dataPos;
85: vec3 g_terminatePos;
86: 
87: float g_jitterValue = 0.0;
88: 
89: 
90: 
91: out vec4 fragOutput0;
92: 
93: 
94: uniform sampler3D in_volume[1];
95: uniform vec4 in_volume_scale[1];
96: uniform vec4 in_volume_bias[1];
97: uniform int in_noOfComponents;
98: 
99: uniform sampler2D in_depthSampler;
100: 
101: // Camera position
102: uniform vec3 in_cameraPos;
103: uniform mat4 in_volumeMatrix[1];
104: uniform mat4 in_inverseVolumeMatrix[1];
105: uniform mat4 in_textureDatasetMatrix[1];
106: uniform mat4 in_inverseTextureDatasetMatrix[1];
107: uniform mat4 in_textureToEye[1];
108: uniform vec3 in_texMin[1];
109: uniform vec3 in_texMax[1];
110: uniform mat4 in_cellToPoint[1];
111: // view and model matrices
112: uniform mat4 in_projectionMatrix;
113: uniform mat4 in_inverseProjectionMatrix;
114: uniform mat4 in_modelViewMatrix;
115: uniform mat4 in_inverseModelViewMatrix;
116: in mat4 ip_inverseTextureDataAdjusted;
117: 
118: // Ray step size
119: uniform vec3 in_cellStep[1];
120: mat4 g_eyeToTexture = in_inverseTextureDatasetMatrix[0] * in_inverseVolumeMatrix[0] * in_inverseModelViewMatrix;
121: mat4 g_texToView = in_modelViewMatrix * in_volumeMatrix[0] *in_textureDatasetMatrix[0];
122: uniform vec2 in_scalarsRange[4];
123: uniform vec3 in_cellSpacing[1];
124: 
125: // Sample distance
126: uniform float in_sampleDistance;
127: 
128: // Scales
129: uniform vec2 in_windowLowerLeftCorner;
130: uniform vec2 in_inverseOriginalWindowSize;
131: uniform vec2 in_inverseWindowSize;
132: uniform vec3 in_textureExtentsMax;
133: uniform vec3 in_textureExtentsMin;
134: 
135: // Material and lighting
136: uniform vec3 in_diffuse[4];
137: uniform vec3 in_ambient[4];
138: uniform vec3 in_specular[4];
139: uniform float in_shininess[4];
140: 
141: // Others
142: vec3 g_rayJitter = vec3(0.0);
143: 
144: uniform vec2 in_averageIPRange;
145: vec4 g_eyePosObjs[1];
146: uniform bool in_twoSidedLighting;
147: 
148: uniform float in_giReach;
149: uniform float in_anisotropy;
150: uniform float in_volumetricScatteringBlending;
151: 
152: #define TOTAL_NUMBER_LIGHTS 5
153: #define NUMBER_POS_LIGHTS 0
154: vec4 g_fragWorldPos;
155: uniform vec3 in_lightAmbientColor[TOTAL_NUMBER_LIGHTS];
156: uniform vec3 in_lightDiffuseColor[TOTAL_NUMBER_LIGHTS];
157: uniform vec3 in_lightSpecularColor[TOTAL_NUMBER_LIGHTS];
158: uniform vec3 in_lightDirection[TOTAL_NUMBER_LIGHTS];
159: vec3 g_lightDirectionTex[TOTAL_NUMBER_LIGHTS];
160: 
161:       
162:  const float g_opacityThreshold = 1.0 - 1.0 / 255.0;
163: 
164: 
165: 
166: 
167: 
168: #define EPSILON 0.001
169: 
170: // Computes the intersection between a ray and a box
171: // The box should be axis aligned so we only give two arguments
172: struct Hit
173: {
174:   float tmin;
175:   float tmax;
176: };
177: 
178: struct Ray
179: {
180:   vec3 origin;
181:   vec3 dir;
182:   vec3 invDir;
183: };
184: 
185: bool BBoxIntersect(const vec3 boxMin, const vec3 boxMax, const Ray r, out Hit hit)
186: {
187:   vec3 tbot = r.invDir * (boxMin - r.origin);
188:   vec3 ttop = r.invDir * (boxMax - r.origin);
189:   vec3 tmin = min(ttop, tbot);
190:   vec3 tmax = max(ttop, tbot);
191:   vec2 t = max(tmin.xx, tmin.yz);
192:   float t0 = max(t.x, t.y);
193:   t = min(tmax.xx, tmax.yz);
194:   float t1 = min(t.x, t.y);
195:   hit.tmin = t0;
196:   hit.tmax = t1;
197:   return t1 > max(t0, 0.0);
198: }
199: 
200: // As BBoxIntersect requires the inverse of the ray coords,
201: // this function is used to avoid numerical issues
202: void safe_0_vector(inout Ray ray)
203: {
204:   if(abs(ray.dir.x) < EPSILON) ray.dir.x = sign(ray.dir.x) * EPSILON;
205:   if(abs(ray.dir.y) < EPSILON) ray.dir.y = sign(ray.dir.y) * EPSILON;
206:   if(abs(ray.dir.z) < EPSILON) ray.dir.z = sign(ray.dir.z) * EPSILON;
207: }
208: 
209: // the phase function should be normalized to 4pi for compatibility with surface rendering
210: 
211: float phase_function(float cos_angle)
212: {
213:   return 1.0;
214: }
215:     
216: 
217: 
218: 
219: 
220: 
221: 
222: 
223: 
224: 
225: vec4 g_gradients_0[1];
226: 
227: 
228: uniform sampler2D in_transfer2D_0[1];
229: uniform sampler3D in_transfer2DYAxis;
230: uniform vec4 in_transfer2DYAxis_scale;
231: uniform vec4 in_transfer2DYAxis_bias;
232: 
233: 
234: //VTK::ComputeGradientOpacity1D::Dec
235: 
236: float computeOpacity(vec4 scalar)
237: {
238:   return texture2D(in_transfer2D_0[0],
239:     vec2(scalar.a, g_gradients_0[0].w)).a;
240: }
241: 
242: 
243: vec4 computeRGBAWithGrad(vec4 scalar, vec4 grad)
244: {
245:   return texture2D(in_transfer2D_0[0],
246:     vec2(scalar.a, grad.w));
247: }
248: 
249: 
250: // c is short for component
251: vec4 computeGradient(in vec3 texPos, in int c, in sampler3D volume,in int index)
252: {
253:   // Approximate Nabla(F) derivatives with central differences.
254:   vec3 g1; // F_front
255:   vec3 g2; // F_back
256:   vec3 xvec = vec3(in_cellStep[index].x, 0.0, 0.0);
257:   vec3 yvec = vec3(0.0, in_cellStep[index].y, 0.0);
258:   vec3 zvec = vec3(0.0, 0.0, in_cellStep[index].z);
259:   vec3 texPosPvec[3];
260:   texPosPvec[0] = texPos + xvec;
261:   texPosPvec[1] = texPos + yvec;
262:   texPosPvec[2] = texPos + zvec;
263:   vec3 texPosNvec[3];
264:   texPosNvec[0] = texPos - xvec;
265:   texPosNvec[1] = texPos - yvec;
266:   texPosNvec[2] = texPos - zvec;
267:   g1.x = texture3D(volume, vec3(texPosPvec[0]))[c];
268:   g1.y = texture3D(volume, vec3(texPosPvec[1]))[c];
269:   g1.z = texture3D(volume, vec3(texPosPvec[2]))[c];
270:   g2.x = texture3D(volume, vec3(texPosNvec[0]))[c];
271:   g2.y = texture3D(volume, vec3(texPosNvec[1]))[c];
272:   g2.z = texture3D(volume, vec3(texPosNvec[2]))[c];
273: 
274:   // Apply scale and bias to the fetched values.
275:   g1 = g1 * in_volume_scale[index][c] + in_volume_bias[index][c];
276:   g2 = g2 * in_volume_scale[index][c] + in_volume_bias[index][c];
277: 
278:   // Scale values the actual scalar range.
279:   float range = in_scalarsRange[4*index+c][1] - in_scalarsRange[4*index+c][0];
280:   g1 = in_scalarsRange[4*index+c][0] + range * g1;
281:   g2 = in_scalarsRange[4*index+c][0] + range * g2;
282: 
283:   // Central differences: (F_front - F_back) / 2h
284:   g2 = g1 - g2;
285: 
286:   float avgSpacing = (in_cellSpacing[index].x +
287:    in_cellSpacing[index].y + in_cellSpacing[index].z) / 3.0;
288:   vec3 aspect = in_cellSpacing[index] * 2.0 / avgSpacing;
289:   g2 /= aspect;
290:   float grad_mag = length(g2);
291: 
292:   // Handle normalizing with grad_mag == 0.0
293:   g2 = grad_mag > 0.0 ? normalize(g2) : vec3(0.0);
294: 
295:   // Since the actual range of the gradient magnitude is unknown,
296:   // assume it is in the range [0, 0.25 * dataRange].
297:   range = range != 0 ? range : 1.0;
298:   grad_mag = grad_mag / (0.25 * range);
299:   grad_mag = clamp(grad_mag, 0.0, 1.0);
300: 
301:   return vec4(g2.xyz, grad_mag);
302: }
303: 
304: 
305: //VTK::ComputeDensityGradient::Dec
306: 
307: float volumeShadow(vec3 sample_position, vec3 light_pos_dir, float is_Pos,  in int c, in sampler3D volume, int index, float label)
308: {
309: 
310:   float shadow = 1.0;
311:   vec3 direction = vec3(0.0);
312:   vec3 norm_dir = vec3(0.0);
313:   float maxdist = 0.0;
314:   float scalar;
315:   vec4 gradient;
316:   float opacity = 0.0;
317:   vec3 color;
318:   Ray ray;
319:   Hit hit;
320:   float sampled_dist = 0.0;
321:   vec3 sampled_point = vec3(0.0);
322:     
323:   // direction is light_pos_dir when light is directional
324:   // and light_pos_dir - sample_position when positional
325:   direction = light_pos_dir - is_Pos * sample_position;
326:   norm_dir = normalize(direction);
327:   // introduce little offset to avoid sampling shadows at the exact
328:   // sample position
329:   sample_position += g_lengthStep * norm_dir;
330:   direction = light_pos_dir - is_Pos * sample_position;
331:   ray.origin = sample_position;
332:   ray.dir = norm_dir;
333:   safe_0_vector(ray);
334:   ray.invDir = 1.0/ray.dir;
335:   if(!BBoxIntersect(vec3(0.0), vec3(1.0), ray, hit))
336:   {
337:     // it can happen around the bounding box
338:     return 1.0;
339:   }
340:   if(hit.tmax < g_lengthStep)
341:   {
342:     // if we're too close to the bounding box
343:     return 1.0;
344:   }
345:   // in case of directional light, we want direction not to be normalized but to go
346:   // all the way to the bbox
347:   direction *= pow(hit.tmax / length(direction), 1.0 - is_Pos);
348:   maxdist = min(hit.tmax, length(direction));
349:   maxdist = min(in_giReach, maxdist);
350:   if(maxdist < EPSILON) return 1.0;
351: 
352:     
353:   float current_dist = 0.0;
354:   float current_step = g_lengthStep;
355:   float clamped_step = 0.0;
356:   while(current_dist < maxdist)
357:   {
358:     clamped_step = min(maxdist - current_dist, current_step);
359:     sampled_dist = current_dist + clamped_step * g_jitterValue;
360:     sampled_point = sample_position + sampled_dist * norm_dir;
361:       scalar = texture3D(volume, sampled_point)[c];
362:   scalar = scalar * in_volume_scale[index][c] + in_volume_bias[index][c];
363:   gradient = computeGradient(sampled_point, c, volume, index);
364:   vec4 lutRes = computeRGBAWithGrad(vec4(scalar), gradient);
365:     opacity = lutRes.a;
366:     color = lutRes.xyz
367: 
368:     shadow *= 1.0 - opacity;
369:     current_dist += current_step;
370:   }
371:   return shadow;
372: }
373:   
374: 
375:       
376: vec4 computeLighting(vec4 color, int component, float label)      
377: {      
378:   vec4 finalColor = vec4(0.0);
379:   vec4 shading_gradient = computeGradient(g_dataPos, component, in_volume[0], 0);
380:   vec4 gradient = shading_gradient;
381: 
382:   g_fragWorldPos = g_texToView * vec4(g_dataPos, 1.0);
383:   if (g_fragWorldPos.w != 0.0)
384:   {
385:   g_fragWorldPos /= g_fragWorldPos.w;
386:   }
387:   vec3 viewDirection = normalize(-g_fragWorldPos.xyz);
388:   vec3 ambient = vec3(0,0,0);
389:   vec3 diffuse = vec3(0,0,0);
390:   vec3 specular = vec3(0,0,0);
391:   vec3 vertLightDirection;
392:   vec3 normal = normalize((in_textureToEye[0] * vec4(shading_gradient.xyz, 0.0)).xyz);
393:   vec3 lightDir;
394:         
395:   for (int dirNum = NUMBER_POS_LIGHTS; dirNum < TOTAL_NUMBER_LIGHTS; dirNum++)
396:   {
397:     vertLightDirection = in_lightDirection[dirNum];
398:     float nDotL = dot(normal, vertLightDirection);
399:     if (nDotL < 0.0 && in_twoSidedLighting)
400:     {
401:       nDotL = -nDotL;
402:     }
403:     if (nDotL > 0.0)
404:     {
405:       float df = max(0.0, nDotL);
406:       diffuse += (df * in_lightDiffuseColor[dirNum]);
407:       vec3 r = normalize(2.0 * nDotL * normal - vertLightDirection);
408:       float rDotV = dot(-viewDirection, r);
409:       if (rDotV > 0.0)
410:       {
411:         float sf = pow(rDotV, in_shininess[component]);
412:         specular += (sf * in_lightSpecularColor[dirNum]);
413:       }
414:     }
415:     ambient += in_lightAmbientColor[dirNum];
416:   }
417:   finalColor.xyz = in_ambient[component] * ambient +
418:                    in_diffuse[component] * diffuse * color.rgb +
419:                    in_specular[component] * specular;
420: 
421:       vec3 view_tdir = normalize((g_eyeToTexture * vec4(viewDirection, 0.0)).xyz);
422: 
423:   vec3 secondary_contrib = vec3(0.0);
424:   vec3 tex_light = vec3(0.0);
425:   shading_gradient.w = length(shading_gradient.xyz);
426:   vec3 diffuse_light = vec3(0.0);
427:   float attenuation = 0.0;
428:   float vol_shadow = 0.0;
429:   float phase = 1.0;
430:     
431:   for(int dirNum = NUMBER_POS_LIGHTS; dirNum < TOTAL_NUMBER_LIGHTS; dirNum++)
432:   {
433:     tex_light = g_lightDirectionTex[dirNum];
434:     phase = phase_function(dot(normalize(-tex_light), view_tdir));
435:     vol_shadow = volumeShadow(g_dataPos, tex_light, 0.0, component, in_volume[0], 0, label);
436:     secondary_contrib += vol_shadow * phase * color.rgb * in_diffuse[component] * in_lightDiffuseColor[dirNum];
437:     secondary_contrib += in_ambient[component] * in_lightAmbientColor[dirNum];
438:   }
439:         float vol_coef = 2.0 * in_volumetricScatteringBlending * exp( - 2.0 * in_volumetricScatteringBlending * shading_gradient.w * color.a);
440: 
441:   finalColor.xyz = (1.0 - vol_coef) * finalColor.xyz + vol_coef * secondary_contrib;
442:             
443:   finalColor.a = color.a;      
444:   return finalColor;      
445:   }
446: 
447: vec4 computeColor(vec4 scalar, float opacity)
448: {
449:   vec4 color = texture2D(in_transfer2D_0[0],
450:     vec2(scalar.w, g_gradients_0[0].w));
451:   return computeLighting(color, 0, 0);
452: }
453: 
454: 
455:         
456: vec3 computeRayDirection()        
457:   {        
458:   return normalize(ip_vertexPos.xyz - g_eyePosObj.xyz);        
459:   }
460: 
461: //VTK::Picking::Dec
462: 
463: //VTK::RenderToImage::Dec
464: 
465: //VTK::DepthPeeling::Dec
466: 
467: uniform float in_scale;
468: uniform float in_bias;
469: 
470: //////////////////////////////////////////////////////////////////////////////
471: ///
472: /// Helper functions
473: ///
474: //////////////////////////////////////////////////////////////////////////////
475: 
476: /**
477:  * Transform window coordinate to NDC.
478:  */
479: vec4 WindowToNDC(const float xCoord, const float yCoord, const float zCoord)
480: {
481:   vec4 NDCCoord = vec4(0.0, 0.0, 0.0, 1.0);
482: 
483:   NDCCoord.x = (xCoord - in_windowLowerLeftCorner.x) * 2.0 *
484:     in_inverseWindowSize.x - 1.0;
485:   NDCCoord.y = (yCoord - in_windowLowerLeftCorner.y) * 2.0 *
486:     in_inverseWindowSize.y - 1.0;
487:   NDCCoord.z = (2.0 * zCoord - (gl_DepthRange.near + gl_DepthRange.far)) /
488:     gl_DepthRange.diff;
489: 
490:   return NDCCoord;
491: }
492: 
493: /**
494:  * Transform NDC coordinate to window coordinates.
495:  */
496: vec4 NDCToWindow(const float xNDC, const float yNDC, const float zNDC)
497: {
498:   vec4 WinCoord = vec4(0.0, 0.0, 0.0, 1.0);
499: 
500:   WinCoord.x = (xNDC + 1.f) / (2.f * in_inverseWindowSize.x) +
501:     in_windowLowerLeftCorner.x;
502:   WinCoord.y = (yNDC + 1.f) / (2.f * in_inverseWindowSize.y) +
503:     in_windowLowerLeftCorner.y;
504:   WinCoord.z = (zNDC * gl_DepthRange.diff +
505:     (gl_DepthRange.near + gl_DepthRange.far)) / 2.f;
506: 
507:   return WinCoord;
508: }
509: 
510: /**
511:  * Clamps the texture coordinate vector @a pos to a new position in the set
512:  * { start + i * step }, where i is an integer. If @a ceiling
513:  * is true, the sample located further in the direction of @a step is used,
514:  * otherwise the sample location closer to the eye is used.
515:  * This function assumes both start and pos already have jittering applied.
516:  */
517: vec3 ClampToSampleLocation(vec3 start, vec3 step, vec3 pos, bool ceiling)
518: {
519:   vec3 offset = pos - start;
520:   float stepLength = length(step);
521: 
522:   // Scalar projection of offset on step:
523:   float dist = dot(offset, step / stepLength);
524:   if (dist < 0.) // Don't move before the start position:
525:   {
526:     return start;
527:   }
528: 
529:   // Number of steps
530:   float steps = dist / stepLength;
531: 
532:   // If we're reeaaaaallly close, just round -- it's likely just numerical noise
533:   // and the value should be considered exact.
534:   if (abs(mod(steps, 1.)) > 1e-5)
535:   {
536:     if (ceiling)
537:     {
538:       steps = ceil(steps);
539:     }
540:     else
541:     {
542:       steps = floor(steps);
543:     }
544:   }
545:   else
546:   {
547:     steps = floor(steps + 0.5);
548:   }
549: 
550:   return start + steps * step;
551: }
552: 
553: //////////////////////////////////////////////////////////////////////////////
554: ///
555: /// Ray-casting
556: ///
557: //////////////////////////////////////////////////////////////////////////////
558: 
559: /**
560:  * Global initialization. This method should only be called once per shader
561:  * invocation regardless of whether castRay() is called several times (e.g.
562:  * vtkDualDepthPeelingPass). Any castRay() specific initialization should be
563:  * placed within that function.
564:  */
565: void initializeRayCast()
566: {
567:   /// Initialize g_fragColor (output) to 0
568:   g_fragColor = vec4(0.0);
569:   g_dirStep = vec3(0.0);
570:   g_srcColor = vec4(0.0);
571:   g_exit = false;
572: 
573:           
574:   // Get the 3D texture coordinates for lookup into the in_volume dataset        
575:   g_rayOrigin = ip_textureCoords.xyz;      
576:       
577:   // Eye position in dataset space      
578:   g_eyePosObj = in_inverseVolumeMatrix[0] * vec4(in_cameraPos, 1.0);      
579:   g_eyePosObjs[0] = in_inverseVolumeMatrix[0] * vec4(in_cameraPos, 1.0);
580:       
581:   // Getting the ray marching direction (in dataset space)      
582:   vec3 rayDir = computeRayDirection();      
583:       
584:   // 2D Texture fragment coordinates [0,1] from fragment coordinates.      
585:   // The frame buffer texture has the size of the plain buffer but       
586:   // we use a fraction of it. The texture coordinate is less than 1 if      
587:   // the reduction factor is less than 1.      
588:   // Device coordinates are between -1 and 1. We need texture      
589:   // coordinates between 0 and 1. The in_depthSampler      
590:   // buffer has the original size buffer.      
591:   vec2 fragTexCoord = (gl_FragCoord.xy - in_windowLowerLeftCorner) *      
592:                       in_inverseWindowSize;      
593:       
594:   // Multiply the raymarching direction with the step size to get the      
595:   // sub-step size we need to take at each raymarching step      
596:   g_dirStep = (ip_inverseTextureDataAdjusted *      
597:               vec4(rayDir, 0.0)).xyz * in_sampleDistance;      
598:   g_lengthStep = length(g_dirStep);      
599:           
600:  float jitterValue = 0.0;          
601:         
602:     g_rayJitter = g_dirStep;        
603:         
604:   g_rayOrigin += g_rayJitter;        
605:       
606:   // Flag to determine if voxel should be considered for the rendering      
607:   g_skip = false;
608: 
609:   
610: 
611:         
612:   // Flag to indicate if the raymarch loop should terminate       
613:   bool stop = false;      
614:       
615:   g_terminatePointMax = 0.0;      
616:       
617:   vec4 l_depthValue = texture2D(in_depthSampler, fragTexCoord);      
618:   // Depth test      
619:   if(gl_FragCoord.z >= l_depthValue.x)      
620:     {      
621:     discard;      
622:     }      
623:       
624:   // color buffer or max scalar buffer have a reduced size.      
625:   fragTexCoord = (gl_FragCoord.xy - in_windowLowerLeftCorner) *      
626:                  in_inverseOriginalWindowSize;      
627:       
628:   // Compute max number of iterations it will take before we hit      
629:   // the termination point      
630:       
631:   // Abscissa of the point on the depth buffer along the ray.      
632:   // point in texture coordinates      
633:   vec4 rayTermination = WindowToNDC(gl_FragCoord.x, gl_FragCoord.y, l_depthValue.x);      
634:       
635:   // From normalized device coordinates to eye coordinates.      
636:   // in_projectionMatrix is inversed because of way VT      
637:   // From eye coordinates to texture coordinates      
638:   rayTermination = ip_inverseTextureDataAdjusted *      
639:                     in_inverseVolumeMatrix[0] *      
640:                     in_inverseModelViewMatrix *      
641:                     in_inverseProjectionMatrix *      
642:                     rayTermination;      
643:   g_rayTermination = rayTermination.xyz / rayTermination.w;      
644:       
645:   // Setup the current segment:      
646:   g_dataPos = g_rayOrigin;      
647:   g_terminatePos = g_rayTermination;      
648:       
649:   g_terminatePointMax = length(g_terminatePos.xyz - g_dataPos.xyz) /      
650:                         length(g_dirStep);      
651:   g_currentT = 0.0;
652: 
653:   
654: 
655:   //VTK::RenderToImage::Init
656: 
657:   //VTK::DepthPass::Init
658: 
659:   
660:   for(int i=0; i<TOTAL_NUMBER_LIGHTS; i++)
661:   {
662:     g_lightDirectionTex[i] = (g_eyeToTexture * vec4(-in_lightDirection[i], 0.0)).xyz;
663:   }
664:   
665: 
666:   g_jitterValue = jitterValue;
667: }
668: 
669: /**
670:  * March along the ray direction sampling the volume texture.  This function
671:  * takes a start and end point as arguments but it is up to the specific render
672:  * pass implementation to use these values (e.g. vtkDualDepthPeelingPass). The
673:  * mapper does not use these values by default, instead it uses the number of
674:  * steps defined by g_terminatePointMax.
675:  */
676: vec4 castRay(const float zStart, const float zEnd)
677: {
678:   //VTK::DepthPeeling::Ray::Init
679: 
680:   
681: 
682:   //VTK::DepthPeeling::Ray::PathCheck
683: 
684:   
685: 
686:   /// For all samples along the ray
687:   while (!g_exit)
688:   {
689:           
690:     g_skip = false;
691: 
692:     
693: 
694:     
695: 
696:     
697: 
698:     g_gradients_0[0] = computeGradient(g_dataPos, 0, in_volume[0], 0);
699: 
700: 
701:           
702:     if (!g_skip)      
703:       {      
704:       vec4 scalar;      
705:       
706:       scalar = texture3D(in_volume[0], g_dataPos);      
707:         
708:       scalar.r = scalar.r * in_volume_scale[0].r + in_volume_bias[0].r;        
709:       scalar = vec4(scalar.r);             
710:       g_srcColor = vec4(0.0);             
711:       g_srcColor.a = computeOpacity(scalar);             
712:       if (g_srcColor.a > 0.0)             
713:         {             
714:         g_srcColor = computeColor(scalar, g_srcColor.a);           
715:         // Opacity calculation using compositing:           
716:         // Here we use front to back compositing scheme whereby           
717:         // the current sample value is multiplied to the           
718:         // currently accumulated alpha and then this product           
719:         // is subtracted from the sample value to get the           
720:         // alpha from the previous steps. Next, this alpha is           
721:         // multiplied with the current sample colour           
722:         // and accumulated to the composited colour. The alpha           
723:         // value from the previous steps is then accumulated           
724:         // to the composited colour alpha.           
725:         g_srcColor.rgb *= g_srcColor.a;           
726:         g_fragColor = (1.0f - g_fragColor.a) * g_srcColor + g_fragColor;             
727:         }      
728:       }
729: 
730:     //VTK::RenderToImage::Impl
731: 
732:     //VTK::DepthPass::Impl
733: 
734:     /// Advance ray
735:     g_dataPos += g_dirStep;
736: 
737:           
738:     if(any(greaterThan(max(g_dirStep, vec3(0.0))*(g_dataPos - in_texMax[0]),vec3(0.0))) ||      
739:       any(greaterThan(min(g_dirStep, vec3(0.0))*(g_dataPos - in_texMin[0]),vec3(0.0))))      
740:       {      
741:       break;      
742:       }      
743:       
744:     // Early ray termination      
745:     // if the currently composited colour alpha is already fully saturated      
746:     // we terminated the loop or if we have hit an obstacle in the      
747:     // direction of they ray (using depth buffer) we terminate as well.      
748:     if((g_fragColor.a > g_opacityThreshold) ||       
749:        g_currentT >= g_terminatePointMax)      
750:       {      
751:       break;      
752:       }      
753:     ++g_currentT;
754:   }
755: 
756:   
757: 
758:   return g_fragColor;
759: }
760: 
761: /**
762:  * Finalize specific modes and set output data.
763:  */
764: void finalizeRayCast()
765: {
766:   
767: 
768:   
769: 
770:   
771: 
772:   
773: 
774:   //VTK::Picking::Exit
775: 
776:   g_fragColor.r = g_fragColor.r * in_scale + in_bias * g_fragColor.a;
777:   g_fragColor.g = g_fragColor.g * in_scale + in_bias * g_fragColor.a;
778:   g_fragColor.b = g_fragColor.b * in_scale + in_bias * g_fragColor.a;
779:   fragOutput0 = g_fragColor;
780: 
781:   //VTK::RenderToImage::Exit
782: 
783:   //VTK::DepthPass::Exit
784: }
785: 
786: //////////////////////////////////////////////////////////////////////////////
787: ///
788: /// Main
789: ///
790: //////////////////////////////////////////////////////////////////////////////
791: void main()
792: {
793:       
794:   initializeRayCast();    
795:   castRay(-1.0, -1.0);    
796:   finalizeRayCast();
797: }


ERROR: In vtkShaderProgram.cxx, line 453
vtkShaderProgram (0000023F57F066F0): 0(368) : error C0000: syntax error, unexpected identifier, expecting ',' or ';' at token "shadow"


ERROR: In vtkOpenGLGPUVolumeRayCastMapper.cxx, line 2833
vtkOpenGLGPUVolumeRayCastMapper (0000023F579EBDE0): Shader failed to compile

@DMelenberg I see the syntax error in the shader code but I am not sure why you run into it. Which version of VTK are you using? Are you using or setting any shader replacements?

Hello @sankhesh. It’s really strange.
I am not using or setting any shader replacements in my code. :thinking:
I am only setting shading properties, like Ambient etc.
I’m using the latest VTK version, 9.2.6.

Of course, but I wanted to mention it anyway: with Shade set to 0, the pipeline runs even when volumetric scattering is enabled.

Thank you very much for your help.

David

@sankhesh
I tried again today to see whether I can render with the following parameters:

"set_shade": 0,
"global_illumination_reach": 1,
"volumetric_scattering_blending": 1,

As it turns out, it makes no difference whether I set volumetric_scattering_blending to 0 or 1, in both cases with a 2D transfer function.

I can’t explain this, as I created a rendering on June 9th with these exact settings, which looked like this:

With exactly the same parameters, this is what I get today:

I compared my code with Local History and it is identical in the essential steps of the pipeline.

David

@DMelenberg, when you set Shade = 0, all lighting in the volume mapper is turned off, including light scattering. So the behavior you saw in your latest experiment is expected.
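
In other words, for the scattering blend to have any effect, shading has to stay on; roughly (sketch with generic names for your property and mapper objects):

volume_property.ShadeOn()                        # Shade = 1 is required for any lighting
mapper.SetVolumetricScatteringBlending(1.0)      # blends the local shading model towards scattering
mapper.SetGlobalIlluminationReach(1.0)           # how far secondary (shadow) rays reach into the volume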

Can you try the latest master?
Instructions: https://gitlab.kitware.com/vtk/vtk/-/blob/master/Documentation/dev/build.md

Hi @sankhesh.
Thanks for your fast feedback!
I am unfortunately not familiar with CMake. I installed VTK with pip :sweat_smile:

Ah, in that case, try downloading the latest release wheel via pip.

Instructions: Additional Python Wheels - VTK documentation

Thanks @sankhesh!!

I got rid of the error message!

But a new problem appeared :sweat_smile:

As soon as I stop any cursor interaction with the interactor, the volume loses its global shading. This is independent of whether I use a 1D or 2D transfer function.
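
For context, my interactor is set up like this (sketch; variable names as in my script):

interactor = vtk.vtkRenderWindowInteractor()
interactor.SetRenderWindow(render_window)
interactor.SetInteractorStyle(vtk.vtkInteractorStyleTrackballCamera())
render_window.Render()
interactor.Start()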

Here is an example:

Moved a little bit using the interaction style TrackballCamera:

To make this easier to understand, here are the parameters of the two renders. Except for the pointers of the lights and the changed camera position, the metadata is identical.