Image Segmentation and Labeling

Dear All,
I am working on the DICOM images provided in the Mendeley dataset [Source (https://data.mendeley.com/datasets/zbf6b4pttk/2)]. I have done the following:
a. Extracted mid-sagittal views for both T1- and T2-weighted images.
b. Performed basic pre-processing for image preparation.
c. Created a BW mask as well as a pseudo-colored image.
Now I am trying to:
a. Segment out the lumbar and sacral vertebrae (using regionprops).
b. Mark the centroids on the lumbar and sacrum bones.
c. Assign labels to the image, like L1, L2, ..., L5 (lumbar) and S for the sacrum (I am treating S1 as the sacrum bone and ignoring the other sacral levels).
d. Input the labelled images to a deep network for training, followed by testing/validation.
Please suggest best practices for image segmentation for this particular problem.

Accepted Answer

Image Analyst
Image Analyst on 15 Aug 2020
What I'd do is first call bwareafilt() to keep only blobs within a certain area range. Then ask regionprops() for the bounding box and centroid. If the bounding box height and width are not both in a reasonable range, then throw out those blobs. Hopefully what remains is the roughly square vertebrae. Then you can identify the individual vertebrae by the y coordinates of the centroids. Pretty easy, but let me know if you can't figure it out.
labeledImage = bwlabel(mask);
props = regionprops(labeledImage, 'Area', 'Centroid', 'BoundingBox');
allAreas = [props.Area] % Inspect this to find the area1 and area2
mask = bwareafilt(mask, [area1, area2]);
labeledImage = bwlabel(mask); % Relabel so blob indices match the filtered mask
props = regionprops(labeledImage, 'Centroid', 'BoundingBox');
bb = vertcat(props.BoundingBox)
widths = bb(:, 3)
heights = bb(:, 4)
aspectRatios = widths ./ heights
keepers = (widths > width1 & widths < width2) & (heights > height1 & heights < height2);
mask = ismember(labeledImage, find(keepers));
labeledImage = bwlabel(mask);
props = regionprops(labeledImage, 'Centroid');
xy = vertcat(props.Centroid)
y = xy(:, 2)
[sortedY, sortOrder] = sort(y, 'ascend')
% Sort the centroids and props the same way, from top to bottom based on y
props = props(sortOrder);
xy = xy(sortOrder, :);
x = xy(:, 1);
% Label them with a number
for k = 1 : length(props)
xt = x(k);
yt = sortedY(k);
str = sprintf('Blob #%d', k);
text(xt, yt, str, 'Color', 'r', 'FontSize', 20, 'FontWeight', 'bold')
end
So you can see it's pretty easy. It's untested, but it only requires a few things to be figured out. Figure out the parameters for width1, etc., and plug them in. Let me know if you have any trouble.
You can use ismember to extract a binary image of only a particular vertebra (the 3rd one, the 4th one, etc.) and then save those with imwrite as a training set for your deep learning network for that particular vertebra. Like
thisVertebra = ismember(labeledImage, sortOrder(k));
filename = whatever....
imwrite(thisVertebra, filename);
or whatever.

13 comments

RFM
RFM on 15 Aug 2020
Dear Image Analyst,
Thank you for the quick and detailed reply. For the part where you told me to filter out the regions of interest, I am already doing a similar implementation. As mentioned in the question post, using regionprops I am applying MajorAxisLength, MinorAxisLength, Area and Solidity checks to filter out the lumbar and sacral vertebrae. I am not getting very consistent results, though: out of 500-plus images, I am getting on average 400-plus with the desired outcome.
I was also able to mark centroids / bounding box on the segmented regions too...
As can clearly be seen, I am having filtering issues in the first image, while the second image shows the desired outcome.
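For reference, the shape filtering described above can be sketched as follows; every numeric limit is a placeholder (an assumption) that would need tuning on the actual images:

```matlab
% Hypothetical sketch of shape-based filtering with regionprops.
% All numeric limits below are placeholders to be tuned per dataset.
labeledImage = bwlabel(mask);
props = regionprops(labeledImage, 'Area', 'Solidity', ...
    'MajorAxisLength', 'MinorAxisLength');
aspect = [props.MajorAxisLength] ./ [props.MinorAxisLength];
keepers = [props.Area] > 300 & [props.Area] < 5000 & ...
    [props.Solidity] > 0.85 & aspect < 2.5;
% Keep only the blobs that pass every shape check.
mask = ismember(labeledImage, find(keepers));
```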
THE MAJOR ISSUE:
a. How can I create ground-truth data with labels L1-L5 and S for all the images automatically, without manual annotation?
b. Training and testing of the network will be done subsequently.
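One possible shortcut, assuming the traditional pipeline is trusted on the images where it succeeds, is to turn those segmentations into pixel-label ground truth automatically by labeling the blobs top-to-bottom. A minimal sketch, which assumes exactly six blobs survive per image (L1..L5, then S) and uses a placeholder file name:

```matlab
% Hypothetical sketch: auto-generate a pixel-label ground-truth image.
% Assumes the filtering reliably leaves exactly six blobs, ordered
% top-to-bottom as L1, L2, L3, L4, L5, S.
labelNames = {'L1', 'L2', 'L3', 'L4', 'L5', 'S'};
labeledImage = bwlabel(mask);
props = regionprops(labeledImage, 'Centroid');
xy = vertcat(props.Centroid);
[~, sortOrder] = sort(xy(:, 2), 'ascend'); % topmost blob first
groundTruth = zeros(size(mask), 'uint8'); % class map: 0 = background
for k = 1 : numel(sortOrder)
    % Class index k corresponds to labelNames{k}.
    groundTruth(labeledImage == sortOrder(k)) = k;
end
imwrite(groundTruth, 'groundTruth_001.png'); % placeholder file name
```

Images where the surviving blob count is not six would still need manual review, so this only reduces, rather than eliminates, the annotation effort.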
Regards
Image Analyst
Image Analyst on 15 Aug 2020
If the problem is that the sacrum is connected to that other clutter and forming a huge blob, then you'll just have to figure out some ad hoc code to separate them, some code that gives better segmentation. It looks like there is a black line between them. Perhaps your threshold was too low and it included some dark pixels in there. Try raising the threshold. Or you could use SegNet and train it with the shapes that you want. We've done that before. It will clip off portions that go off into la-la land and give you a reasonable shape. But you'd have to train it by manually painting/labeling the sacrum for hundreds of images.
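A hedged sketch of the "raise the threshold" idea, combined with a morphological opening to snap any remaining thin bridge between the sacrum and the clutter; the input variable name, threshold value, and disk radius are all assumptions to tune:

```matlab
% Hypothetical sketch: higher threshold plus opening to separate blobs.
grayImage = mat2gray(dicomImage);       % dicomImage is an assumed variable
mask = grayImage > 0.45;                % deliberately above the Otsu level
mask = imopen(mask, strel('disk', 3));  % erode-then-dilate breaks thin necks
mask = bwareaopen(mask, 200);           % discard small leftover specks
```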
RFM
RFM on 15 Aug 2020
Can you share a good resource for SegNet? I have not used it before, so I am having difficulty with its implementation. The dataset that I am using also mentioned the use of SegNet, but only for axial views. Thanks.
RFM
RFM on 16 Aug 2020
Thank you. Appreciate taking your time and effort.
It looks like the most laborious part is the creation of annotated images (ground truth), and seemingly there isn't any workaround available to ease this process.
Image Analyst
Image Analyst on 16 Aug 2020
Yes, those are drawbacks of deep learning compared to traditional methods:
  1. The need to label ground-truth images, often manually or with computer assistance.
  2. The need for hundreds or thousands of images to train with.
RFM
RFM on 1 Sep 2020
Edited: RFM on 1 Sep 2020
I need some assistance with image binarization.
Test image (once I apply Otsu's method for thresholding, I get disconnected regions):
Original image
Otsu's method
The region in blue is missing bone structure.
Any help?
Image Analyst
Image Analyst on 2 Sep 2020
Yes, Otsu is often a lousy method. You need to employ more sophisticated methods. Check out the literature: Vision Bibliography
RFM
RFM on 3 Sep 2020
Thank you for the suggestion. But can you comment on the idea of smoothing or averaging the pixels in a certain window and then binarizing the smoothed/averaged image? Should that work?
In the literature, Otsu (global) is mostly referred to; I have also tried the adaptive method for thresholding.
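A minimal sketch of the smooth-then-binarize idea, combining a Gaussian blur with adaptive thresholding; the input variable, sigma, sensitivity, and polarity here are assumptions to tune for these MR images:

```matlab
% Hypothetical sketch: smooth first, then binarize the smoothed image.
smoothed = imgaussfilt(grayImage, 2); % Gaussian blur, sigma = 2
T = adaptthresh(smoothed, 0.5, 'ForegroundPolarity', 'bright');
mask = imbinarize(smoothed, T);
mask = imfill(mask, 'holes'); % close interior gaps left by binarization
```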
Image Analyst
Image Analyst on 6 Sep 2020
Not sure what to say. Smoothing/blurring will definitely change the image, and the binarized shapes would be different. You just have to try it and see what you get with your images. Maybe it will be better, maybe not. You should also try using a different connectivity, 4 instead of the default 8, and see if that helps.
I know Otsu is often/usually used in papers, but that doesn't mean it's good or the best. It might just mean that it worked okay for their images and was easy since it's a built-in threshold. I find that it's often not good, because it's best suited to images with a nice bimodal histogram where the splitting threshold is rather obvious. For images with many humps in the histogram, or, like what I often encounter, a single mode skewed to one side, I find that the triangle threshold works better. I'm attaching that so you can try it and compare.
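For illustration only (this is not the attached function), the triangle-threshold idea can be sketched like this: draw a line from the histogram peak to the far end of the tail and place the threshold where the histogram falls farthest below that line. This sketch assumes the peak lies to the left of the tail:

```matlab
% Illustrative sketch of the triangle threshold on a grayscale image.
counts = imhist(grayImage, 256);
[peakHeight, peakBin] = max(counts);
tailBin = find(counts > 0, 1, 'last'); % far end of the histogram tail
bins = (peakBin : tailBin)';
% Height of the straight line from the peak to the tail end, per bin.
lineHeights = peakHeight + (counts(tailBin) - peakHeight) .* ...
    (bins - peakBin) / (tailBin - peakBin);
% Threshold where the histogram dips farthest below that line.
[~, idx] = max(lineHeights - counts(bins));
threshold = (bins(idx) - 1) / 255; % convert bin index to [0, 1] intensity
mask = imbinarize(grayImage, threshold);
```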
RFM
RFM on 8 Sep 2020
Dear Image Analyst,
Can you share your ideas about this Question?
Regards
P.S. I tried looking for a direct means to contact you regarding this question, but I am afraid I couldn't find one.
Hamza Ayari
Hamza Ayari on 25 May 2022
Can you send the mask code for it?
Thank you.
Image Analyst
Image Analyst on 25 May 2022
@Hamza Ayari I don't know what code you're asking for. I don't have any vertebrae-finding code.


More Answers (0)
