How to blend an image patch within an image?
I have an image from which I am taking a 128x128 patch, performing some operation on it, and then reinserting it into the image.

The image on the right shows a clear boundary between the new image patch and the surrounding image. My question is, is there some blending operation to make the values of the patch more similar to the surroundings? I tried a basic conv2 to blur the boundaries, but it didn't seem to help. I also tried to pad the patch with 0's to make it the same size as the original image, then use
imfuse(im, patch, 'blend')
to try to blend it, but it still gives a very distinct boundary, and the values inside the patch don't match the surroundings.
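For reference, this is roughly what I'm doing (a simplified sketch; the Gaussian blur below is just a stand-in for my actual operation):
im = imread('cameraman.tif');   % stand-in for my image
rows = 65:192;                  % a 128x128 region (location is arbitrary here)
cols = 65:192;
patch = im(rows,cols);          % extract the patch
patch = imgaussfilt(patch,2);   % stand-in for my actual processing
im(rows,cols) = patch;          % reinsert -- this is where the visible seam appears
imshow(im)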

Any advice would be greatly appreciated, thanks.
0 comments
Accepted Answer
DGM on 7 Apr 2022
Edited: DGM on 7 Apr 2022
First off, imfuse() is more of a tool for offhand comparison of two images than a practical image composition or blending tool. The best it can do is a uniform 50% opacity composition, which is what you're doing there.
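To make that concrete, here's a rough sketch of what the 'blend' option amounts to (assuming both inputs are the same size and class, and ignoring imfuse's internal intensity rescaling):
A = imread('cameraman.tif');
B = imgaussfilt(A,3);   % any second image of the same size
% 'blend' is essentially a fixed 50/50 weighted average of the two inputs
C = im2uint8(0.5*im2double(A) + 0.5*im2double(B));
imshow(C)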
There are many ways to do image blending, but at this point, I don't yet see that it's necessary. I'd start with a basic logical composition. Since the ROI is rectangular and aligned to the grid, it can be reduced to simple indexing.
inpict = imread('cameraman.tif');
% extract a sample region
sampx = 100:200;
sampy = 50:150;
roisample = inpict(sampy,sampx,:);
% just modify the sample for sake of demonstration
roisample = imgaussfilt(roisample,1.5);
% insert the sample back into the image
outpict = inpict;
outpict(sampy,sampx,:) = roisample;
% show the result
imshow(outpict)
EDIT: If the ROI becomes more complex than this (non-rectangular, not grid-aligned), you'd use some form of mask to combine the images. The two simplest composition methods are logical masking (using a binarized mask as a logical index array) and opacity/alpha blending, which is simply a weighted summation.
Logical composition:
inpict = imread('cameraman.tif');
% create a logical mask with a rectangular region in it
mask = false(size(inpict));
mask(50:100,100:150) = true;
% create a modified copy of the entire image
foreground = imgaussfilt(inpict,1.5);
% compose both images using the mask
outpict = inpict;
outpict(mask) = foreground(mask);
imshow(outpict)
Opacity/alpha composition:
inpict = imread('cameraman.tif');
% create a numeric mask with a rectangular region in it
mask = zeros(size(inpict));
mask(50:100,100:150) = 1;
% feather the mask edges (for example)
mask = imgaussfilt(mask,10);
% create a modified copy of the entire image
foreground = imgaussfilt(inpict,1.5);
% compose both images using the mask
outpict = im2double(inpict).*(1-mask) + im2double(foreground).*mask;
outpict = im2uint8(outpict);
imshow(outpict)
If the latter case is what's required, MIMT replacepixels() is generally far easier to deal with, but it's not part of IPT or base MATLAB.
inpict = imread('cameraman.tif');
% create a numeric mask with a rectangular region in it
mask = zeros(size(inpict));
mask(50:100,100:150) = 1;
% feather the mask edges (for example)
mask = imgaussfilt(mask,10);
% create a modified copy of the entire image
foreground = imgaussfilt(inpict,1.5);
% compose both images using the mask
% doing this with replacepixels() is concise and avoids class-dependence
outpict = replacepixels(foreground,inpict,mask);
imshow(outpict)
2 comments
DGM on 8 Apr 2022
Sorry about the late reply. If you don't want to feather the mask but just want to shift the levels back into a similar range, you can do that. I was thinking the point of adjusting the area was to increase the contrast, so shifting the levels back would seem counterproductive.
Start with an unadjusted example:
inpict = imread('pout.tif');
% extract a sample region
sampx = 100:200;
sampy = 50:150;
roisample = inpict(sampy,sampx,:);
% just modify the sample for sake of demonstration
roisample = imadjust(roisample,[0.29 0.78],[0 1],0.85); % some modification
% insert the sample back into the image
outpict = inpict;
outpict(sampy,sampx,:) = roisample;
% show the result
imshow(outpict)
The levels in the sample region can be shifted back to match those in the original image. Any sort of nonlinear operations won't be undone, so the applied gamma still makes the mask edges slightly visible in this example. I'm not sure what operations you performed on the area and how closely you need the result to match its surroundings.
% extract a sample region
sampx = 100:200;
sampy = 50:150;
roisample = inpict(sampy,sampx,:);
% just modify the sample for sake of demonstration
samprange = stretchlim(roisample,0); % need this later
roisample = imadjust(roisample,[0.29 0.78],[0 1],0.85); % some modification
% shift sample extrema back to those in the original image region
% the only thing that's left is the gamma adjustment
newsamprange = stretchlim(roisample,0);
roisample = imadjust(roisample,newsamprange,samprange);
% insert the sample back into the image
outpict = inpict;
outpict(sampy,sampx,:) = roisample;
% show the result
imshow(outpict)
More Answers (0)