I recently set up WSL2 (Windows 10, 21H2) with CUDA/Docker support to train some neural networks. For this I followed the instructions from this guide: https://docs.nvidia.com/cuda/wsl-user-guide/index.html
Once I was done, I ran an image as specified in the document I linked:
docker run -it --gpus all -p 8888:8888 tensorflow/tensorflow:latest-gpu-py3-jupyter
At this point I am able to launch the browser and work on a Jupyter Notebook with GPU support. I verify this by running:
import tensorflow as tf
tf.config.list_physical_devices('GPU')
The output confirms the GPU is available:
[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
When I'm done, I save my changes and stop the container with CTRL-C.
Up to this point, everything works as I expect.
To resume work I've been using docker start -i happy_bose, where happy_bose is the auto-generated name of the container from the earlier docker run.
However, the restarted container has no GPU support: the same call to tf.config.list_physical_devices('GPU') now returns an empty list.
I was expecting docker start -i happy_bose to bring the container back up with GPU support, but this is not the case. Is there any way to reuse a container with GPU support, or do I need to create a fresh one with docker run every time?
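For context, the only thing that reliably works for me right now is re-creating the container each time. A sketch of that (tf-gpu is a hypothetical fixed name I picked so I don't have to look up the auto-generated one; it is not from the guide):

```shell
# Remove any previous container with this name (ignore the error if none exists),
# then create a new one. --gpus all requests GPU access at container creation time.
docker rm -f tf-gpu 2>/dev/null
docker run -it --gpus all --name tf-gpu -p 8888:8888 \
    tensorflow/tensorflow:latest-gpu-py3-jupyter
```

The obvious downside is that any state not saved into a mounted volume is lost between runs, which is exactly what I was hoping docker start would avoid.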