In this Python article, we will discuss how to solve the PyTorch runtime error "mat1 and mat2 shapes cannot be multiplied". Let's get started.
Runtime Error: mat1 and mat2 shapes cannot be multiplied in pytorch
- How to solve Runtime Error: mat1 and mat2 shapes cannot be multiplied in pytorch
The size mismatch error reports the shapes 32x119072 and 800x300. The first shape refers to the input tensor, while the second is the weight of the layer. If you look at your model definition, you will see that it matches the first fully connected layer, the one following the flatten: nn.Linear(800, 300) was expecting 800-feature tensors, but got 119072-feature tensors instead.
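A minimal sketch reproduces the mismatch described above. The layer and batch shapes are taken from the error message itself; the input tensor is random data standing in for the flattened CNN features:

```python
import torch
import torch.nn as nn

# The layer from the error message: it expects 800 input features.
fc = nn.Linear(800, 300)

# A batch of 32 flattened feature maps with 119072 features each,
# matching the shapes in the error (32x119072 and 800x300).
x = torch.randn(32, 119072)

try:
    fc(x)
except RuntimeError as e:
    print(e)  # mat1 and mat2 shapes cannot be multiplied (32x119072 and 800x300)
```

The matrix multiply inside nn.Linear needs the input's last dimension to equal the layer's in_features, which is exactly what fails here.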
You need to modify this linear layer to match the flattened spatial shape of the incoming tensor. Notice, though, that this value depends on the size of the image fed to the CNN, which ultimately dictates the size of the tensor passed to the classifier. The general way to solve this is to use an adaptive layer such as nn.AdaptiveMaxPool2d, which always produces the same output shape regardless of the input's spatial dimensions.
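Here is a sketch of that fix. The convolution channels (32) and the pooled output size (5x5) are illustrative choices, not taken from the original model; they are picked so that the flattened size works out to 32 * 5 * 5 = 800, matching nn.Linear(800, 300):

```python
import torch
import torch.nn as nn

# Classifier head that works for any input image size, thanks to
# the adaptive pooling layer placed before the flatten.
model = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveMaxPool2d((5, 5)),  # always outputs 32 x 5 x 5 per image
    nn.Flatten(),                  # -> 32 * 5 * 5 = 800 features
    nn.Linear(800, 300),           # in_features now matches for any input size
)

# The same model handles several different image sizes without error.
for size in (64, 100, 224):
    out = model(torch.randn(2, 3, size, size))
    print(out.shape)  # torch.Size([2, 300]) for every input size
```

Because the adaptive pool fixes the spatial size of the feature maps, the flattened feature count no longer changes with the input image, and the linear layer's in_features can be a constant.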
Original author of this solution: Ivan.
Conclusion
That is all for this tutorial. We hope it helped you. Thank you for reading.