I am aware that TensorFlow can explicitly place computation on a given device with "/cpu:0" or "/gpu:0". However, this is hard-coded. Is there a built-in API to iterate over all visible devices?
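For context, the kind of hard-coded placement I mean looks roughly like this (a minimal sketch, assuming TF 1.x graph mode):

```python
import tensorflow as tf

# Hard-coded placement: the device string is fixed at graph-construction time.
with tf.device("/cpu:0"):
    a = tf.constant([1.0, 2.0])
    b = tf.constant([3.0, 4.0])
    c = a + b

with tf.Session() as sess:
    print(sess.run(c))  # the addition runs on the pinned device
```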
– Jes
- Possible duplicate of [How to get current available GPUs in tensorflow?](https://stackoverflow.com/questions/38559755/how-to-get-current-available-gpus-in-tensorflow) – E_net4 Jun 23 '17 at 18:49
1 Answer
Here is what you are looking for:
```python
import tensorflow as tf
from tensorflow.python.client import device_lib


def get_all_devices():
    # Return the names of all devices visible to the local process,
    # e.g. "/device:CPU:0", "/device:GPU:0", ...
    local_device_protos = device_lib.list_local_devices()
    return [x.name for x in local_device_protos]


all_devices = get_all_devices()
for device_name in all_devices:
    with tf.device(device_name):
        # Device names may be reported as "/cpu:0" or "/device:CPU:0"
        # depending on the TensorFlow version, so compare case-insensitively.
        if "cpu" in device_name.lower():
            # Build CPU-specific ops here
            pass
        if "gpu" in device_name.lower():
            # Build GPU-specific ops here
            pass
```
The code is inspired by the best answer here: [How to get current available GPUs in tensorflow?](https://stackoverflow.com/questions/38559755/how-to-get-current-available-gpus-in-tensorflow)
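For completeness, the GPU-only variant from that linked answer is essentially the same idea, filtering on the proto's `device_type` field (a short sketch; the exact device-name format, e.g. "/gpu:0" versus "/device:GPU:0", varies across TensorFlow versions):

```python
from tensorflow.python.client import device_lib


def get_available_gpus():
    # Keep only GPU devices; x.device_type is "CPU", "GPU", etc.
    local_device_protos = device_lib.list_local_devices()
    return [x.name for x in local_device_protos if x.device_type == "GPU"]


print(get_available_gpus())  # e.g. ["/device:GPU:0"] on a single-GPU machine
```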
– Guillaume Chevalier