loss.backward() error?

Renee Zacharowicz

HW3 Part I

I am facing a 'RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn'.

I thought it was due to an issue trying to backprop through the embedding (nn.Linear) + mean, but the error persists even with the requires_grad flag set to False...

Is anyone else facing the same issue, or does anyone know what I am doing wrong?
Any direction is much appreciated!
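
For reference, here is a minimal standalone sketch (not from my actual notebook) of the situation that raises this exact error: nothing upstream of the loss requires grad, so the loss has no grad_fn.

    import torch

    x = torch.randn(4, 3)   # inputs: requires_grad is False by default
    w = torch.randn(3, 1)   # weights created without requires_grad=True
    loss = (x @ w).mean()   # no grad_fn anywhere upstream of the loss
    loss.backward()         # RuntimeError: element 0 of tensors does not require grad ...

Making any tensor in the chain trainable (e.g. w = torch.randn(3, 1, requires_grad=True)) gives the loss a grad_fn and backward() succeeds.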


Jonathan Shlomi
Hi Renee,

can you post your code (a link to a github repo is ok too)?

Renee Zacharowicz
So sorry I missed your reply -- here is a link to my buggy code. Thank you!!
https://github.com/ren-e1011/DL1010/blob/master/Tutorial6/Renee_HW3_part_1_point_cloud_mnist.ipynb

Jonathan Shlomi
You should not turn off the gradient of your embedding layer; you want to train it as well as the classifier. Otherwise the embedding is just a random transformation of your input.

Same with the .clone().detach() you put on the mean: the first part of the network (the node embedding and the mean over the nodes) is part of your computation graph. Why did you feel you needed to detach it? (Just trying to understand what you missed so I can explain it better.)
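
To illustrate (with hypothetical layer sizes, not the ones in your notebook), keep the whole pipeline trainable so gradients flow from the loss back through the mean and the embedding:

    import torch
    import torch.nn as nn

    class PointCloudClassifier(nn.Module):
        # hypothetical dimensions, for illustration only
        def __init__(self, in_dim=3, embed_dim=64, n_classes=10):
            super().__init__()
            self.embed = nn.Linear(in_dim, embed_dim)    # trainable: do not freeze
            self.classifier = nn.Linear(embed_dim, n_classes)

        def forward(self, x):                # x: (batch, n_points, in_dim)
            h = torch.relu(self.embed(x))    # node embedding, stays in the graph
            h = h.mean(dim=1)                # mean over nodes, no .clone().detach()
            return self.classifier(h)        # gradients reach embed through here

With nothing detached, loss.backward() updates the embedding together with the classifier.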

Also, you should avoid logging things like the training accuracy at every single batch; that's going to slow down your training.
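
Something like this sketch logs only every log_every batches (assuming the usual model, train_loader, criterion, and optimizer are already defined; the names and the interval are just placeholders):

    log_every = 100                              # hypothetical logging interval

    for i, (x, y) in enumerate(train_loader):
        optimizer.zero_grad()
        logits = model(x)
        loss = criterion(logits, y)
        loss.backward()
        optimizer.step()
        if i % log_every == 0:                   # accuracy computed only occasionally
            acc = (logits.argmax(dim=1) == y).float().mean().item()
            print(f"batch {i}: loss={loss.item():.4f}, acc={acc:.3f}")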

Renee Zacharowicz
Thank you for these points!

I had turned off the embedding grads in an attempt to fix the error... but after reading your post, yeah, it doesn't make much sense.

The source of the error is still unclear, but after a kernel restart it seems to be working now...