1D CNN on Pytorch: mat1 and mat2 shapes cannot be multiplied (10×3 and 10×2)

In this article we discuss how to fix the PyTorch error "mat1 and mat2 shapes cannot be multiplied (10x3 and 10x2)" in a 1D CNN. So let's start.


Solution 1

The shape of the output of the line x = self.layer2(x) (which is also the input of the next line x = self.fc1(x)) is torch.Size([1, 10, 3]).

Now, from the definition of self.fc1, it expects the last dimension of its input to be 10 * 1 * 1, which is 10, whereas the last dimension of your input is 3, hence the error.
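The mismatch can be reproduced in isolation with a random tensor standing in for the real layer2 output (shapes taken from the answer above):

```python
import torch
import torch.nn as nn

# Stand-in for the output of self.layer2(x), which has shape (1, 10, 3).
x = torch.randn(1, 10, 3)
# The original definition of self.fc1: it expects a last dimension of 10.
fc1 = nn.Linear(10 * 1 * 1, 2)

try:
    fc1(x)  # last dimension is 3, not 10 -> shape mismatch
except RuntimeError as e:
    print(e)  # mat1 and mat2 shapes cannot be multiplied (10x3 and 10x2)
```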

I don’t know what it is you’re trying to do, but assuming you want to do one of the following:

  1. label the entire 500-long sequence with one of two labels, then do this:
# replace self.fc1 = nn.Linear(10 * 1 * 1, 2) with
self.fc1 = nn.Linear(10 * 3, 2)

# replace x = self.fc1(x) with
x = x.view(1, -1)
x = self.fc1(x)
  2. label each of the 10 timesteps with one of two labels, then do this:
# replace self.fc1 = nn.Linear(10 * 1 * 1, 2) with
self.fc1 = nn.Linear(3, 2)  # the last input dimension is 3

The output shape for option 1 will be (batch size, 2), and for option 2 it will be (batch size, 10, 2).
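Putting the two fixes together, here is a minimal sketch, again using a random tensor in place of the real layer2 output. Note that nn.Linear acts on the last dimension of its input, which is 3 here, so option 2 needs nn.Linear(3, 2) to produce the (batch size, 10, 2) output shape stated above:

```python
import torch
import torch.nn as nn

# Stand-in for the output of self.layer2(x), shape (1, 10, 3).
x = torch.randn(1, 10, 3)

# Option 1: one label for the whole sequence.
# Flatten to (1, 30), then map 10 * 3 features to 2 classes.
fc1_seq = nn.Linear(10 * 3, 2)
out_seq = fc1_seq(x.view(1, -1))
print(out_seq.shape)   # torch.Size([1, 2])

# Option 2: one label per timestep.
# The layer maps the last dimension (3) to 2 classes.
fc1_step = nn.Linear(3, 2)
out_step = fc1_step(x)
print(out_step.shape)  # torch.Size([1, 10, 2])
```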

Original author of this content: Nerveless_child

Conclusion

That is all for this tutorial. We hope it helped you. Thank you.


ittutorial team

I am an Information Technology engineer. I have completed my MCA and have more than 4 years of experience. I am a web developer with knowledge of multiple back-end platforms like PHP, Node.js, and Python, and front-end JavaScript frameworks like Angular, React, and Vue.
