Why doesn't torch.autograd compute the gradient in this case?
import torch
x = torch.tensor([1., 2.], requires_grad=True)
y = torch.tensor([x[0] + x[1], x[1] - x[0]], requires_grad=True)
z = y[0] + y[1]
z.backward()
x.grad
The output is a blank line, i.e. x.grad is None. The same happens for x[0].grad. Why doesn't backward() populate x.grad here?
PS: In retrospect, I realize the motivation for making y a tensor with requires_grad=True was so I could examine its own gradient. I have since learned that one must call retain_grad() on an intermediate tensor for that: Why does autograd not produce gradient for intermediate variables?
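For reference, here is a minimal sketch of what I was aiming for. It assumes the intermediate y is built with torch.stack (rather than torch.tensor) so it stays connected to the graph; retain_grad() then makes its gradient available after backward():

import torch

x = torch.tensor([1., 2.], requires_grad=True)
y = torch.stack([x[0] + x[1], x[1] - x[0]])  # non-leaf tensor, still on the graph
y.retain_grad()                              # keep y's gradient after backward()
z = y[0] + y[1]
z.backward()
print(x.grad)  # tensor([0., 2.])
print(y.grad)  # tensor([1., 1.])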