Thanks for the additional information. I can confirm that with the default linear interpolation, a highly zoomed image in Slicer looks like this:
After switching to cubic interpolation you get this - no more diamond-shaped interpolation artifacts:
You can temporarily switch to cubic interpolation in 3D Slicer by typing this into the Python console:
# Switch the red slice view's background layer to cubic interpolation
sliceNode = slicer.mrmlScene.GetNodeByID('vtkMRMLSliceNodeRed')
reslice = slicer.app.applicationLogic().GetSliceLogic(sliceNode).GetBackgroundLayer().GetReslice()
reslice.SetInterpolationModeToCubic()
# Re-render all views so the change shows up immediately
slicer.util.forceRenderAllViews()
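If you want the change in all slice views (and in the foreground layer as well), a small loop over the layout manager's slice widgets should work; this is just a sketch using the same calls as above:

layoutManager = slicer.app.layoutManager()
for sliceViewName in layoutManager.sliceViewNames():
    sliceLogic = layoutManager.sliceWidget(sliceViewName).sliceLogic()
    # Switch both the background and foreground layers of this slice view to cubic interpolation
    for layer in [sliceLogic.GetBackgroundLayer(), sliceLogic.GetForegroundLayer()]:
        layer.GetReslice().SetInterpolationModeToCubic()
slicer.util.forceRenderAllViews()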
I think we use simple linear interpolation in Slicer simply because it has not been reported as a problem over the last 20 years and across thousands of medical image computing projects. We could easily make a cubic interpolation option available in the GUI, or perhaps switch to cubic interpolation automatically when a certain zoom factor is reached.
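For example, an automatic switch could be implemented by observing the slice node and checking how many millimeters one screen pixel covers. This is only a rough sketch of the idea for the red slice view (the 0.2 mm threshold and the observer wiring are my own assumptions, not an existing Slicer feature):

import vtk

sliceNode = slicer.mrmlScene.GetNodeByID('vtkMRMLSliceNodeRed')
sliceLogic = slicer.app.applicationLogic().GetSliceLogic(sliceNode)

def setInterpolationForZoom(caller=None, event=None):
    # Millimeters covered by one screen pixel: field of view (mm) / viewport width (pixels)
    mmPerScreenPixel = sliceNode.GetFieldOfView()[0] / sliceNode.GetDimensions()[0]
    reslice = sliceLogic.GetBackgroundLayer().GetReslice()
    # Hypothetical threshold: switch to cubic once a screen pixel covers less than 0.2 mm
    if mmPerScreenPixel < 0.2:
        reslice.SetInterpolationModeToCubic()
    else:
        reslice.SetInterpolationModeToLinear()
    slicer.util.forceRenderAllViews()

observerTag = sliceNode.AddObserver(vtk.vtkCommand.ModifiedEvent, setInterpolationForZoom)
setInterpolationForZoom()  # apply once immediately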
In 3D Slicer, we run the image reslicing and layer blending pipeline on the CPU. This has many advantages: there is no need to transfer the volume to GPU memory (so there is no risk that a volume cannot be visualized because it does not fit into GPU memory), we can display non-linearly transformed volumes without warping the entire 3D volume (simply by passing the transformation pipeline to the reslice filter), we can use standard VTK filters for blending and other processing, etc.
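Just to illustrate the non-linear transform point (the node names 'MRHead' and 'WarpTransform' are assumptions; both would already have to exist in your scene): attaching the transform to the volume is enough, and the slice views reslice through it on the fly.

volumeNode = slicer.util.getNode('MRHead')            # assumed volume node name
transformNode = slicer.util.getNode('WarpTransform')  # assumed non-linear transform node name
# Slice views now reslice the volume through the transform on the fly;
# the full 3D volume is never resampled.
volumeNode.SetAndObserveTransformNodeID(transformNode.GetID())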
Of course, running this pipeline on the CPU is considerably slower than running it on the GPU, but since the speed is acceptable on a typical computer with typical images (about 50 fps on an i7 laptop for a 512^3 volume), we are not in a rush to move everything to the GPU.