How to adapt the output size of a given feature map in the Deep Learning Toolbox by using the "pool" operation?

I understand that the Deep Learning Toolbox currently provides layers such as averagePooling2dLayer, maxPooling2dLayer, globalAveragePooling2dLayer, and globalMaxPooling2dLayer, as well as direct-call functions such as maxpool, but nothing like PyTorch's adaptive_max_pool2d. Is there a function that lets me directly specify the size of the output feature map for a pool operation?
The following simple example is PyTorch code; how can MATLAB achieve the same result?
import torch
import torch.nn.functional as F

input = torch.rand(8, 3, 224, 224)              # input tensor, NCHW
outSize = (20, 20)                              # desired output size, (H_out, W_out)
output = F.adaptive_max_pool2d(input, outSize)  # adaptively pooled output
print(output.shape)                             # result --> torch.Size([8, 3, 20, 20])

Accepted Answer

KaSyow Riyuu on 15 Apr 2022
Stride = floor( InputSize / OutputSize )
KernelSize = InputSize - ( OutputSize - 1 ) * Stride
Padding = 0
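With this choice the pooled size comes out exactly right: floor((InputSize - KernelSize)/Stride) + 1 = OutputSize. For the example above (InputSize = 224, OutputSize = 20): Stride = floor(224/20) = 11, KernelSize = 224 - 19*11 = 15, and floor((224 - 15)/11) + 1 = 20.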
You can implement an adaptive max pool like this:
function Output = AdaptiveMaxPool(Input, OutputSize)
% Input is a formatted dlarray (e.g. "SSCB"); OutputSize is [H_out W_out].
InputSize = size(Input, [1 2]);                        % spatial dimensions only
Stride = floor(InputSize ./ OutputSize);
KernelSize = InputSize - (OutputSize - 1) .* Stride;
Output = maxpool(Input, KernelSize, "Stride", Stride); % padding defaults to 0
end
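For example, here is a minimal usage sketch, assuming the input is a formatted dlarray of size 224-by-224-by-3-by-8 to match the PyTorch snippet in the question (requires Deep Learning Toolbox):

% Reproduce the PyTorch example with the function above.
% "SSCB" = spatial, spatial, channel, batch.
X = dlarray(rand(224, 224, 3, 8, "single"), "SSCB");
Y = AdaptiveMaxPool(X, [20 20]);
size(Y)   % 20 20 3 8, i.e. H_out x W_out x C x N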

More Answers (0)
