Often, I code on my laptop, which is not equipped with a GPU (a MacBook, if it makes a difference). The files are then transferred to a server with a GPU. Before running the code on the server, I just want to perform a sanity check to avoid errors related to tensors being on different devices. I am looking for a GPU emulator, which takes in some tensors and outputs some other random tensors.
            Viewed 446 times
        
    1 Answer
-2
            
            
        Just add .to('cuda:0') to your model (which inherits from nn.Module) and to any tensor you create within the forward/backward pass.
Note that cuda:0 refers to the GPU at index 0.
Moreover, I like to define a hyperparameter dictionary to pass to the model: set hparams['device'] = 'cpu'/'cuda:0'/'cuda:1' in the dict, and when the model is initialized, assign self.device = hparams['device']. Then any tensor or module in the model can easily be migrated to whatever device is configured by appending .to(self.device).
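A minimal sketch of that pattern (the dict keys, the model class, and its layer sizes are all illustrative, not from any particular codebase):

```python
import torch
import torch.nn as nn

# Hypothetical hyperparameter dict; 'device' can be 'cpu', 'cuda:0', 'cuda:1', ...
hparams = {"device": "cuda:0" if torch.cuda.is_available() else "cpu"}

class Net(nn.Module):
    def __init__(self, hparams):
        super().__init__()
        self.device = hparams["device"]
        self.fc = nn.Linear(4, 2)
        self.to(self.device)  # moves all registered parameters at once

    def forward(self, x):
        # Any tensor created inside forward goes to the configured device too
        bias = torch.ones(2, device=self.device)
        return self.fc(x.to(self.device)) + bias

model = Net(hparams)
out = model(torch.randn(3, 4))
print(out.shape)  # torch.Size([3, 2])
```

Because the device appears in exactly one place, switching the whole model between CPU (for local sanity checks) and GPU is a one-line change to the dict.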
 
    
    
        namespace-Pt
        
                    Thanks, that is what I normally do. Unfortunately, there are too many details and some might be forgotten. I need a way to check before running the code. – Arman Mar 22 '21 at 17:24
 
    