RuntimeError: “nll_loss_forward_reduce_cuda_kernel_2d_index” not implemented for ‘Int’: Pytorch

In this article, we discuss how to fix the PyTorch error RuntimeError: “nll_loss_forward_reduce_cuda_kernel_2d_index” not implemented for 'Int'. Let's start.


Solution 1

In my case, I solved this problem by converting the type of targets to torch.LongTensor before storing the data into the GPU as follows:

for inputs, targets in data_loader:
    targets = targets.type(torch.LongTensor)   # casting to long
    inputs, targets = inputs.to(device), targets.to(device)
    ...
    ...
 
    loss = self.criterion(output, targets)

Original author of this content: Phoenix
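
To see the failure mode in isolation, here is a minimal, hypothetical sketch (assuming PyTorch is installed) that reproduces the dtype problem on CPU — no GPU needed, since the same Long-target requirement applies — and then fixes it with the cast shown above:

```python
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()  # wraps log_softmax + nll_loss internally

logits = torch.randn(4, 3)  # float model outputs for 4 samples, 3 classes
bad_targets = torch.tensor([0, 2, 1, 0], dtype=torch.int32)  # Int targets trigger the error

try:
    criterion(logits, bad_targets)
except RuntimeError as err:
    # nll_loss expects Long (int64) targets, so Int raises a RuntimeError
    print("Raised:", type(err).__name__)

good_targets = bad_targets.type(torch.LongTensor)  # cast to int64 (Long)
loss = criterion(logits, good_targets)
print(loss.item())
```

The message text differs between the CPU and CUDA kernels, but the cause (and the fix) is the same on both devices.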

Solution 2

I guess you followed Python Engineer’s tutorial on YouTube (I did too and ran into the same problem!). @Phoenix’s solution worked for me. All I needed to do was cast the label (he calls it target) like this:

for epoch in range(num_epochs):
    for (words, labels) in train_loader:
        words = words.to(device)
        labels = labels.type(torch.LongTensor) # <---- Here (casting)
        labels = labels.to(device)
        
        #forward
        outputs = model(words)
        loss = criterion(outputs, labels)
        
        #backward and optimizer step
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    if (epoch + 1) % 100 == 0:
        print(f'epoch{epoch+1}/{num_epochs}, loss={loss.item():.4f}')

It worked and the evolution of the loss was printed in the terminal.
Thank you @Phoenix !

P.S.: here is the link to the series of videos I got this code from: Python Engineer’s video (this is part 4 of 4)

Original author of this content: Elias ALICHE

Solution 3

Just verify what your model is returning: your outputs variable should be of float type.
If it isn't, cast it to float.
You may have returned an int type from the forward method.

Original author of this content: Prajot Kuvalekar
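
To illustrate this case, here is a hypothetical sketch (assuming PyTorch is installed) of what happens when the model outputs are integer-typed instead of float, and how casting them fixes it:

```python
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()
labels = torch.tensor([0, 1])  # Long targets, as required

# int64 logits: what a forward() that returns ints would produce
int_logits = torch.tensor([[2, 0, 0], [0, 3, 0]])
try:
    criterion(int_logits, labels)
except RuntimeError as err:
    # log_softmax is not implemented for integer types, so this raises
    print("Raised:", type(err).__name__)

float_logits = int_logits.float()  # cast the model outputs to float
loss = criterion(float_logits, labels)
print(loss.item())
```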

Solution 4

In my case, using torch.autocast took care of this error when calling the criterion:

with torch.autocast('cuda'):
    loss = self.criterion(out, torch.tensor(labels).cuda())

Original author of this content: Mona Jalal
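
For what it's worth, part of why this line works is that torch.tensor on a list of Python ints defaults to torch.int64 (Long), which already satisfies the target-dtype requirement. A CPU sketch (assuming PyTorch is installed, with the device type swapped to 'cpu' so it runs without a GPU):

```python
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()
out = torch.randn(4, 3)
labels = [0, 2, 1, 0]  # plain Python ints, as in the snippet above

targets = torch.tensor(labels)  # a list of Python ints defaults to torch.int64 (Long)
print(targets.dtype)  # torch.int64

# torch.autocast also has a CPU mode; used here only so the sketch runs without CUDA
with torch.autocast('cpu', dtype=torch.bfloat16):
    loss = criterion(out, targets)
print(loss.item())
```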

Conclusion

That's all for this tutorial. We hope it helped you. Thank you.

ittutorial team
