System Information
OS: All Unix-based OSes
Python version: 3.x
Graphics card: All NVIDIA graphics cards
Blender Version
This should apply to any version of Blender built with CUDA support.
Short description of error
I recently ran into an issue when trying to move data to the GPU using PyTorch's Python API. After reading a few threads [1][2][3], I learned that CUDA is unfortunately not fork-safe. The only way to avoid the problem is to not make any driver call that triggers cuInit() before forking a process (it appears you can do whatever you want inside the forked process without hitting this issue).
After some trial and error, I realized that doing `import bpy` triggers this issue for me; I assume the import process calls cuInit() somewhere to initialize the CUDA driver while loading. In PyTorch, they avoided any call that initializes CUDA until such a call is actually needed, and they called this fix lazy init. I don't know exactly what happens during `import bpy`, but I suspect this problem could be resolved if the call to cuInit() were only made when people change a GPU-related setting, explicitly select GPU rendering in Cycles, or do anything else that clearly indicates they are going to use the GPU for something (including starting real-time rendering with EEVEE).
Here is a potentially helpful comment from someone in PyTorch's Slack channel who has a better idea of what's happening under the hood:
CUDA, as a complex, multithreaded set of libraries, is totally and permanently incompatible with a fork() not immediately followed by exec(). That means the multiprocessing method fork cannot work, unless the fork is done before CUDA is initialized (by a direct or indirect call to cuInit()). Once a process goes multithreaded or initializes CUDA, it's usually too late. Second, torch.cuda.manual_seed() is lazy, meaning it will not initialize CUDA if it hasn't been done already. That's a good thing, for the reasons above.
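The fork-then-exec requirement in that comment maps directly onto Python's multiprocessing start methods: the default "fork" method on Unix is the problematic one, while "spawn" starts a fresh interpreter (fork followed by exec), so anything the parent initialized cannot poison the child. A minimal sketch, using a CUDA-free worker so it runs anywhere:

```python
import multiprocessing as mp


def worker(procID):
    # In the real scenario this is where data would be moved to the GPU;
    # with "spawn" the child is a fresh process, so a prior `import bpy`
    # (and whatever CUDA initialization it did) in the parent is irrelevant.
    print('child %d started cleanly' % procID)


if __name__ == '__main__':
    # "spawn" = fork + exec, unlike the default "fork" on Unix.
    ctx = mp.get_context('spawn')
    p = ctx.Process(target=worker, args=(0,))
    p.start()
    p.join()
```

This is only a workaround on the user's side, of course; it does not fix `import bpy` initializing CUDA eagerly, and "spawn" requires the worker and its arguments to be picklable.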
P.S. I'm not entirely sure but this might also be relevant to this bug or this bug that I reported earlier this year.
Exact steps for others to reproduce the error
python3 -m pip install torch
or
conda install pytorch
Compile Blender (master branch) as Python module with the following CMake flags:
-DCMAKE_INSTALL_PREFIX=/usr/local/lib/python3.6/dist-packages \
-DWITH_PYTHON_INSTALL=OFF \
-DWITH_PYTHON_MODULE=ON \
-DPYTHON_ROOT_DIR=/usr/local \
-DPYTHON_SITE_PACKAGES=/usr/local/lib/python3.6/dist-packages \
-DPYTHON_INCLUDE=/usr/include/python3.6/ \
-DPYTHON_INCLUDE_DIR=/usr/include/python3.6m \
-DPYTHON_LIBRARY=/usr/lib/python3.6/config-3.6m-x86_64-linux-gnu/libpython3.6.so \
-DPYTHON_VERSION=3.6 \
-DWITH_OPENAL=OFF \
-DWITH_OPENCOLORIO=ON \
-DWITH_GAMEENGINE=OFF \
-DWITH_PLAYER=OFF \
-DWITH_INTERNATIONAL=OFF \
-DCMAKE_BUILD_TYPE:STRING=Release
Note that I manually changed `PYTHON_VERSION_MIN="3.7"` to `PYTHON_VERSION_MIN="3.6"` in install_deps.sh.
Then in Python:
# main.py
from multiprocessing import Process

import numpy as np
import torch


def moveDataToGPU(procID, importBpy=False):
    if importBpy:
        # Doing `import bpy` inside the forked process, or outside of it
        # (before moveDataToGPU is forked), crashes either way.
        import bpy
    # Running the next two lines before forking moveDataToGPU would make this
    # function crash, but PyTorch's lazy initialization of the CUDA driver
    # keeps them fork-safe here. Look at the following for a better idea of
    # what's going on:
    # https://github.com/pytorch/pytorch/blob/master/torch/cuda/random.py
    # https://github.com/pytorch/pytorch/blob/master/torch/cuda/__init__.py
    torch.cuda.manual_seed(1)
    print(torch.cuda.get_rng_state().sum())
    data = np.random.uniform(0, 1, (5, 5))
    print('data created for procID ' + str(procID))
    torchData = torch.from_numpy(data)
    torchData = torchData.cuda()
    print('successfully moved the data to GPU for procID ' + str(procID))
    print('')


if __name__ == '__main__':
    # Works: bpy is never imported, so CUDA is first touched inside the child.
    forkedProcess = Process(target=moveDataToGPU, kwargs={'procID': 0, 'importBpy': False})
    forkedProcess.start()
    forkedProcess.join()

    # Crashes: `import bpy` initializes CUDA, which is not fork-safe.
    forkedProcess = Process(target=moveDataToGPU, kwargs={'procID': 1, 'importBpy': True})
    forkedProcess.start()
    forkedProcess.join()