Error on linux when creating QVTKOpenGLNativeWidget

I am using VTK 9 + Qt6 on Windows without any issue. I am now trying to build my application on Linux, but I get errors when I instantiate the QVTKOpenGLNativeWidget and do nothing more than display it.

    mVTKWidget = new QVTKOpenGLNativeWidget;
    mVTKWidget->setMinimumSize( 400, 200 );
    lVLayout->addWidget( mVTKWidget );

It gives me these errors:

2021-06-25 16:00:47.175 (   2.920s) [        593F58C0]   vtkShaderProgram.cxx:452    ERR| vtkShaderProgram (0x560b2d21cea0)

2021-06-25 16:00:47.176 (   2.921s) [        593F58C0]   vtkShaderProgram.cxx:453    ERR| vtkShaderProgram (0x560b2d21cea0): 0:31(22): error: syntax error, unexpected NEW_IDENTIFIER, expecting '{'

2021-06-25 16:00:47.176 (   2.921s) [        593F58C0]vtkOpenGLVertexArrayObj:265    ERR| vtkOpenGLVertexArrayObject (0x560b2d732100): attempt to add attribute without a program for attribute ndCoordIn
2021-06-25 16:00:47.176 (   2.921s) [        593F58C0]vtkOpenGLQuadHelper.cxx:62    WARN| Error binding ndCoords to VAO.
2021-06-25 16:00:47.176 (   2.921s) [        593F58C0]vtkOpenGLRenderWindow.c:1031   ERR| vtkGenericOpenGLRenderWindow (0x560b2bdc3340): Couldn't build the shader program for resolving msaa.

Then it still kinda works.
If I display points, everything is fine. But if I try to display a vtkQuad or vtkHexahedron using a vtkCellArray, it crashes with this error:

2021-06-25 16:27:10.714 (   7.349s) [        537E58C0]   vtkShaderProgram.cxx:452    ERR| vtkShaderProgram (0x55a52b556c60): 1: #version 140

2021-06-25 16:27:10.751 (   7.386s) [        537E58C0]   vtkShaderProgram.cxx:453    ERR| vtkShaderProgram (0x55a52b556c60): 0:133(45): error: `gl_PrimitiveID' undeclared
0:133(45): error: operands to arithmetic operators must be numeric
0:133(18): error: no function with name 'texelFetchBuffer'
0:134(41): warning: `texColor' used uninitialized
0:135(41): warning: `texColor' used uninitialized
0:136(35): warning: `texColor' used uninitialized

I'm trying to run this on Ubuntu 20.04.

Any idea what could cause this?
Do I need to install something on my Ubuntu?
Or maybe use a compilation option?


Hi, David,

Make sure you have the latest version of your graphics card driver, which includes the OpenGL backend. If you’re using WSL, make sure you have WSL2.
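Also, in case it is not already done: VTK’s Qt documentation recommends setting the widget’s default surface format before the QApplication is constructed, so Qt creates a context that VTK’s shaders can compile under. A minimal sketch, assuming VTK 9’s GUISupportQt module is linked (the rest of main is elided):

```cpp
#include <QApplication>
#include <QSurfaceFormat>
#include <QVTKOpenGLNativeWidget.h>

int main( int argc, char* argv[] )
{
    // Must happen before the QApplication is created; otherwise the
    // QVTKOpenGLNativeWidget may get a context with the wrong profile/version.
    QSurfaceFormat::setDefaultFormat( QVTKOpenGLNativeWidget::defaultFormat() );

    QApplication lApp( argc, argv );
    // ... build the main window containing the QVTKOpenGLNativeWidget ...
    return lApp.exec();
}
```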

take care,


I'm trying to run it on VirtualBox.
3D acceleration was disabled, so I tried to enable it, and I also forced software OpenGL with ‘LIBGL_ALWAYS_SOFTWARE=1’, but I have the same errors.

Is it supposed to work on VirtualBox?

By the way, it's good to know for WSL2.

LIBGL_ALWAYS_SOFTWARE should be zero, right?

VirtualBox used to have GPU passthrough, but it was dropped a few versions ago. So I guess you have to stick to software-only OpenGL. But I think you’re better off trying WSL2 if you have Windows 10.
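If you do stay on software OpenGL, one more thing that may be worth a try (an assumption on my part, not something I’ve verified against this exact error): Mesa has environment switches to advertise a newer GL/GLSL version. Your second log compiles the shader as #version 140, while gl_PrimitiveID only exists from GLSL 1.50, so overriding the advertised versions sometimes helps:

```shell
# Mesa's standard environment switches (Mesa's names, nothing VTK-specific):
export LIBGL_ALWAYS_SOFTWARE=1          # render with the llvmpipe software rasterizer
export MESA_GL_VERSION_OVERRIDE=3.2     # advertise OpenGL 3.2 ...
export MESA_GLSL_VERSION_OVERRIDE=150   # ... and GLSL 1.50, where gl_PrimitiveID exists
echo "$MESA_GL_VERSION_OVERRIDE / $MESA_GLSL_VERSION_OVERRIDE"
```

Then launch the application from that same shell.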

Ok, I just set up an X server to use WSL2. I followed this:

I have the same errors.

Here is the setup info; does it look correct?

glxinfo -B
name of display:
display:  screen: 0
direct rendering: Yes
Extended renderer info (GLX_MESA_query_renderer):
    Vendor: Mesa/ (0xffffffff)
    Device: llvmpipe (LLVM 11.0.0, 256 bits) (0xffffffff)
    Version: 20.2.6
    Accelerated: no
    Video memory: 11074MB
    Unified memory: no
    Preferred profile: core (0x1)
    Max core profile version: 4.5
    Max compat profile version: 3.1
    Max GLES1 profile version: 1.1
    Max GLES[23] profile version: 3.2
OpenGL vendor string: Mesa/
OpenGL renderer string: llvmpipe (LLVM 11.0.0, 256 bits)
OpenGL core profile version string: 4.5 (Core Profile) Mesa 20.2.6
OpenGL core profile shading language version string: 4.50
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile

OpenGL version string: 3.1 Mesa 20.2.6
OpenGL shading language version string: 1.40
OpenGL context flags: (none)

OpenGL ES profile version string: OpenGL ES 3.2 Mesa 20.2.6
OpenGL ES profile shading language version string: OpenGL ES GLSL ES 3.20

Hmmm… I may be missing something, but why do you use an X server with WSL? WSL is supposed to allow you to run Linux executables directly under Windows. No need for emulation, X clients/servers, etc. Just run the binary as you’d do with a .EXE.

As far as I understood, you do need an X server; if I try without one, I get an error.

There is a preview build (Run Linux GUI apps with WSL | Microsoft Docs) that allows WSL to run Linux GUI apps without an X server, but it's not released yet.

True. GUI programs need to stream their graphical output somewhere.

And from a comment, it seems it's an X server implementation:

By default, its graphics drivers (llvmpipe) run in software mode, so if you want to do some serious hardware-accelerated GUI work, you have to install these:

1. GPU drivers specifically for WSL functionality
2. The latest Mesa drivers from a PPA.

Right now, hardware acceleration in WSL only goes up to OpenGL 3.0, so running OpenGL 4.0 applications will throw an error. Also, for AMD GPU users, it is strongly recommended to install the OpenGL & OpenCL compatibility pack from the Windows Store. Windows 11 already supports a WSL2 GUI, but with the right setup you can get Linux GUI apps launched without any third-party apps.
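A sketch of step 2 as shell commands (the kisak-mesa PPA is one commonly used source of recent Mesa builds for Ubuntu 20.04; treat the exact PPA name as an assumption, not a requirement):

```shell
# Add a PPA carrying recent Mesa builds (kisak-mesa is one common choice),
# then upgrade so llvmpipe and the rest of the GL stack come from the newer Mesa.
sudo add-apt-repository -y ppa:kisak/kisak-mesa
sudo apt-get update
sudo apt-get -y upgrade
glxinfo -B | grep "OpenGL version"   # verify the new Mesa version is picked up
```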

It seems to me that your program is trying to call the deprecated texelFetchBuffer function in one of its shaders: updating texelFetchBuffer to texelFetch · Issue #1 · raganmd/learningGLSL · GitHub. It fails to compile, triggering the error in your original post. According to what I found on the Internet, you should change the program to use the texelFetch function instead.

Well, for the first error I'm not doing anything special; I'm just instantiating and displaying the QVTKOpenGLNativeWidget.

And for the crash, the only different code is this:

    int i                              = 0;
    vtkSmartPointer<vtkPoints> lPoints = vtkSmartPointer<vtkPoints>::New();
    vtkSmartPointer<vtkActor> lActor   = vtkSmartPointer<vtkActor>::New();
    // Create a quad on the four points
    vtkSmartPointer<vtkQuad> lQuad = vtkSmartPointer<vtkQuad>::New();
    // Create a cell array to store the quads in
    vtkSmartPointer<vtkCellArray> lQuads = vtkSmartPointer<vtkCellArray>::New();

    // lCenterPoints and lColors are filled elsewhere
    for( auto it = lCenterPoints.begin(); it != lCenterPoints.end(); ++it, ++i )
    {
        // Insert the four corner points of quad i
        // (corner offsets simplified here)
        lPoints->InsertNextPoint( std::get<0>( *it ), std::get<1>( *it ), std::get<2>( *it ) );
        lPoints->InsertNextPoint( std::get<0>( *it ), std::get<1>( *it ), std::get<2>( *it ) );
        lPoints->InsertNextPoint( std::get<0>( *it ), std::get<1>( *it ), std::get<2>( *it ) );
        lPoints->InsertNextPoint( std::get<0>( *it ), std::get<1>( *it ), std::get<2>( *it ) );

        // Connect quad i to its four points
        lQuad->GetPointIds()->SetId( 0, 4 * i );
        lQuad->GetPointIds()->SetId( 1, 4 * i + 1 );
        lQuad->GetPointIds()->SetId( 2, 4 * i + 2 );
        lQuad->GetPointIds()->SetId( 3, 4 * i + 3 );

        // InsertNextCell copies the point ids, so the same lQuad can be reused
        lQuads->InsertNextCell( lQuad );
    }

    // Create a polydata to store everything in
    vtkSmartPointer<vtkPolyData> lPolydata = vtkSmartPointer<vtkPolyData>::New();

    // Add the points and quads to the dataset
    lPolydata->SetPoints( lPoints );
    lPolydata->SetPolys( lQuads );
    lPolydata->GetCellData()->SetScalars( lColors );

    // Setup mapper and actor
    vtkSmartPointer<vtkPolyDataMapper> lMapper = vtkSmartPointer<vtkPolyDataMapper>::New();
    lMapper->SetInputData( lPolydata );
    lActor->SetMapper( lMapper );

So I have no idea how or where I could be using texelFetchBuffer.

And our QA team just tested it on a VMware virtual machine (Ubuntu 20.04), and it's working.