MATLAB Answers

Can we change the input size of a pretrained network for transfer learning?

Asked by Arjun Desai on 26 May 2018
Latest activity: commented on by Lenin Falconi on 16 Apr 2019
I want to use transfer learning on the ResNet-50 architecture trained on ImageNet. I noticed that the input size for the ResNet-50 architecture is [224 224 3], but my images are [150 150 3]. I was wondering if there is a way to change the size of the input layer rather than resizing my images.

  2 Comments

I am also interested in this answer.
Hello, I think you could try the following code:
%% Load data
% unzip('v_200x400.zip');
imds = imageDatastore('v_200x400', ...
    'IncludeSubfolders', true, ...
    'LabelSource', 'foldernames');
%% Resize images to fit GoogLeNet
imds.ReadSize = numpartitions(imds);
% Resize each image to the network's 224x224 input on read
imds.ReadFcn = @(loc) imresize(imread(loc), [224 224]);
I used this code to resize images that were larger than 224 down to 224. Since you are upsampling, though, this routine may not be of much help, because it does not control the interpolation needed to fill in the missing pixels when you go from 150 to 224. I would suggest checking https://la.mathworks.com/help/images/ref/imresize.html and paying attention to the Interpolation Methods section: some methods are better for upsampling, others for downsampling.
I also leave here this code that I used; although it is Python, it could be written in MATLAB:
# Rescaling: two cases, depending on whether the image is smaller
# or larger than the desired size.
# img_seg, width, height, new_w and new_h are assumed to be defined earlier.
import cv2

if width <= new_w and height <= new_h:
    # Upsampling: cubic interpolation fills in the missing pixels
    img_scaled = cv2.resize(img_seg, (new_w, new_h), interpolation=cv2.INTER_CUBIC)
else:
    # Downsampling: area interpolation averages the source pixels
    img_scaled = cv2.resize(img_seg, (new_w, new_h), interpolation=cv2.INTER_AREA)
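A rough MATLAB equivalent of that Python logic, as a sketch (img_seg, new_w, and new_h are placeholders as in the Python version, and imresize needs the Image Processing Toolbox):
% Pick the interpolation method according to up- vs. downsampling
[height, width, ~] = size(img_seg);
if width <= new_w && height <= new_h
    % Upsampling: bicubic interpolation fills in the missing pixels
    img_scaled = imresize(img_seg, [new_h new_w], 'bicubic');
else
    % Downsampling: box (area) averaging reduces aliasing
    img_scaled = imresize(img_seg, [new_h new_w], 'box');
end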


1 Answer

Answer by BERGHOUT Tarek on 16 Apr 2019

Yes, this method has a name:
1- If you change the number of neurons from N to n where N > n, this is called 'destructive' (pruning) learning.
2- If you change the number of neurons from N to n where N < n, this is called 'constructive' learning.
But retraining is always required, not from the beginning but starting from the final weights of the trained net.
If you want to find papers in this area, use keywords such as: neural networks with additive hidden nodes; destructive neural nets, etc. Good luck.
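For the original question, the usual MATLAB workflow keeps ResNet-50's 224x224x3 input layer and retrains from the pretrained weights, resizing the images on the way in. A minimal sketch, assuming the Deep Learning Toolbox Model for ResNet-50 support package, an imageDatastore imds like the one in the comments, and the layer names ('fc1000', 'ClassificationLayer_fc1000') of the shipped model:
net = resnet50;                                  % pretrained on ImageNet
lgraph = layerGraph(net);
numClasses = numel(categories(imds.Labels));
% Swap the ImageNet-specific head for layers sized to our classes
lgraph = replaceLayer(lgraph, 'fc1000', ...
    fullyConnectedLayer(numClasses, 'Name', 'fc_new'));
lgraph = replaceLayer(lgraph, 'ClassificationLayer_fc1000', ...
    classificationLayer('Name', 'output_new'));
% Let an augmented datastore upsample the 150x150 images to 224x224
augimds = augmentedImageDatastore([224 224], imds);
opts = trainingOptions('sgdm', 'InitialLearnRate', 1e-4, ...
    'MaxEpochs', 5, 'MiniBatchSize', 32);
netTransfer = trainNetwork(augimds, lgraph, opts);
Because training starts from the pretrained weights, this is retraining "not from the beginning", as the answer notes.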

  1 Comment

I am having a hard time understanding your answer. Constructive and destructive learning are certainly new to me.
However, it seems that Arjun Desai wants to change the input layer of ResNet-50 so that he can try transfer learning on his images. As far as I understand, since ResNet-50 is trained on a specific natural-image dataset with dimensions 224x224x3, I don't think the input layer can be changed, because this would affect all the internal dimensions of the ConvNet architecture.
Because of that, I contributed code to simply change the image size.
About retraining not from the beginning, I totally agree with you. That is what should happen with transfer learning, but for that to happen the image of interest must go forward through the ConvNet until it reaches the last fully connected layers, where retraining starts (or a pertinent node, in the case of fine tuning). But again, the image must have the dimensions expected by the architecture.
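That forward pass can also be used on its own for feature extraction. A sketch, assuming the pretrained resnet50, an imageDatastore imds, and fitcecoc from the Statistics and Machine Learning Toolbox ('avg_pool' is the global pooling layer just before the fully connected head in the shipped ResNet-50):
net = resnet50;
% Upsample the 150x150 images to the 224x224 the architecture expects
augimds = augmentedImageDatastore([224 224], imds);
% Forward pass through the frozen ConvNet up to the layer before the head
features = activations(net, augimds, 'avg_pool', 'OutputAs', 'rows');
% Train a lightweight classifier on the extracted deep features
classifier = fitcecoc(features, imds.Labels);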
Some time ago I found a paper that proposed an architecture to deal with images of different sizes. If I find it, I will share the name.
