What is the PyTorch equivalent of tf.stop_gradient() (which provides a way to not compute gradients with respect to some variables during back-propagation)?
– aerin
- Do any of these answer your question? https://datascience.stackexchange.com/questions/32651/what-is-the-use-of-torch-no-grad-in-pytorch https://stackoverflow.com/questions/56816241/difference-between-detach-and-with-torch-nograd-in-pytorch/56817594 – Stef Sep 16 '20 at 14:30
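For context, the torch.no_grad() context manager covered in those links disables gradient tracking for everything computed inside it; a minimal sketch:

    import torch

    x = torch.tensor([2.0], requires_grad=True)
    with torch.no_grad():
        y = x * 3               # computed without recording operations in the graph
    print(y.requires_grad)      # False -- y cannot propagate gradients back to x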
 
2 Answers
You could check x.detach(). It returns a new tensor that shares the same data but is detached from the current computation graph, so gradients never flow back through it.
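A minimal sketch of the behavior (tensor names are illustrative):

    import torch

    x = torch.tensor([2.0], requires_grad=True)
    y = x ** 2           # y is part of the graph; gradients flow through it
    z = y.detach()       # z shares y's data but is cut out of the graph
    out = y + 3 * z      # only the y term contributes to x's gradient
    out.backward()
    print(x.grad)        # tensor([4.]) -- d(x**2)/dx = 2x; the detached branch adds nothing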
– Deepali
- Link to the documentation: https://pytorch.org/docs/master/generated/torch.Tensor.detach.html – Astariul Apr 19 '21 at 01:56
 
Tensors in PyTorch have a requires_grad attribute. Set it to False to prevent gradient computation for those tensors.
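A minimal sketch of freezing one tensor this way (tensor names are illustrative):

    import torch

    trainable = torch.randn(3, requires_grad=True)
    frozen = torch.randn(3, requires_grad=True)
    frozen.requires_grad_(False)   # in-place toggle; assigning frozen.requires_grad = False
                                   # also works on leaf tensors

    loss = (trainable * frozen).sum()
    loss.backward()

    print(trainable.grad)   # populated (equals the values of frozen)
    print(frozen.grad)      # None -- no gradient was computed for frozen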
– Shai