I have a large set of cardiac CT images. The CT volumes created from this image set need to be processed, but before processing I need to check the orientation of each volume to verify that the long axis of the heart points "upward", because the orientation may vary from volume to volume. How can I automate this orientation check so that it does not need to be done manually?
Cardiac CTs are usually acquired in head-first supine patient position, so the images should already have a standard orientation. Was some unknown conversion applied to your images that resulted in an unknown orientation?
Oh, I see. It sounds like a fun project. Do you use any contrast agent for the scans? If not, then in CT it would be hard to reliably find the heart without relying on other structures.

I would probably use volume rendering to visualize the bones, manually define a few landmarks on the spine, and align the specimen to a standard template using landmark registration. If the embryos are in a somewhat similar pose, then you may do an initial alignment by segmenting bone (simple thresholding) and computing its principal axis directions (see the sketch below). You might even try to register the volumes using intensity-based automatic registration.

For all of these, you need to split the volumes (if multiple eggs were acquired in one scan) and remove the egg shell from the image, which should be straightforward: segment the air using thresholding, keep the largest island, then add a small margin.

3D Slicer (a VTK-based medical image computing application) has all these tools implemented, and you can automate the processing with Python scripting. If you need help performing these steps in 3D Slicer, post your questions to https://discourse.slicer.org.
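To illustrate the shell-removal and principal-axes steps, here is a minimal sketch using plain numpy and scipy on a voxel array (which you could obtain with slicer.util.arrayFromVolume when scripting 3D Slicer). The threshold values, the assumption that intensities are roughly HU-calibrated, and the choice of which array axis counts as "upward" are placeholders that you would need to adapt to your data:

```python
import numpy as np
from scipy import ndimage


def remove_egg_shell(volume, air_threshold=-400, margin_voxels=3):
    """Blank everything outside the egg: threshold air, keep the largest
    connected component (the exterior air), grow it by a small margin so the
    thin shell is covered, and set those voxels to background."""
    air = volume < air_threshold                        # placeholder threshold
    labels, n = ndimage.label(air)
    if n == 0:
        return volume.copy()
    sizes = ndimage.sum(air, labels, index=range(1, n + 1))
    outside = labels == (np.argmax(sizes) + 1)          # largest island = exterior air
    outside = ndimage.binary_dilation(outside, iterations=margin_voxels)
    masked = volume.copy()
    masked[outside] = volume.min()
    return masked


def bone_principal_axes(volume, bone_threshold=300):
    """Segment bone by simple thresholding and return the centroid and the
    principal axes of the bone voxels (eigenvectors of their covariance,
    columns sorted from largest to smallest spread)."""
    bone = volume > bone_threshold                      # placeholder threshold
    coords = np.argwhere(bone).astype(float)            # (N, 3) voxel indices
    centroid = coords.mean(axis=0)
    eigenvalues, eigenvectors = np.linalg.eigh(np.cov((coords - centroid).T))
    order = np.argsort(eigenvalues)[::-1]
    return centroid, eigenvectors[:, order]


# Example: flag volumes whose longest bone axis is not roughly aligned with
# the first array axis (adapt the axis index and tolerance to your data).
# volume = slicer.util.arrayFromVolume(volumeNode)   # when scripting 3D Slicer
# masked = remove_egg_shell(volume)
# centroid, axes = bone_principal_axes(masked)
# long_axis = axes[:, 0]
# needs_review = abs(long_axis[0]) < 0.7
```

The eigenvectors of the coordinate covariance are the same axes a PCA of the bone voxels would give, so this only provides a coarse initial alignment; fine alignment would still come from landmark or intensity-based registration as described above.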