How to identify limbs from a 3D image of a human body


I’m working on 3D scans of the human body and I need to identify the limbs. For example, I need to compare measurements of the left (or right) arm across scans taken of the same person over a period of time. The person will always have their arms spread out to the sides and their legs apart, but the pose will most probably not be identical in every scan.

I’m hoping someone has done something similar to this so that I do not have to reinvent the wheel.



That is a typical problem solvable by a convolutional neural network (deep learning). There are plenty of CNN segmentation architectures out there, and you may well find a ready-to-use Python script. Alternatively, you can try OpenCV’s Haar cascade: Body Detection with Computer Vision | by Instrument | Instrument Stories | Medium . If you don’t know OpenCV, you can think of it roughly as an “inverse VTK”: it takes a raster image and tries to find the “actors” in it. While VTK is a visualisation technology (computer world → human world), OpenCV is a computer vision one (human world → computer world).



ITK (VTK’s sister library, specialized in raster image/volume processing) probably also has something that can help you with this, such as ITK-SNAP: ITK-SNAP Home .

This looks like a case for deformable registration. While you could solve the problem using ITK directly, it is easier to use one of the tools built on ITK, such as elastix, BRAINS, or ANTs. A good GUI tool to try them out is Slicer.
