Tutorial: Can I share CUDA GPU device memory between host processes?



Question:

Is it possible to have two or more Linux host processes that can access the same device memory? I have two processes streaming data between them at a high rate, and I don't want to bring the data back out of the GPU to the host in process A just to pass it to process B, which would then memcpy it host-to-device back into the GPU.

Combining the multiple processes into a single process is not an option.
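For reference, a minimal sketch of the round trip described above, the one I'd like to avoid. The function names and the host-side handoff are illustrative only; the actual IPC channel (shared memory, pipe, socket) is not shown:

```c
#include <cuda_runtime.h>

// Process A: bring the result down from the GPU so it can be handed to B.
void processA_export(const float* d_result, float* h_staging, size_t bytes) {
    cudaMemcpy(h_staging, d_result, bytes, cudaMemcpyDeviceToHost);
    // ... hand h_staging to process B via shared memory, a pipe, a socket, etc.
}

// Process B: copy the same bytes back up into its own device allocation.
void processB_import(float* d_input, const float* h_staging, size_t bytes) {
    cudaMemcpy(d_input, h_staging, bytes, cudaMemcpyHostToDevice);
    // ... continue processing on the GPU in B's own CUDA context.
}
```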


Solution 1:

My understanding of the CUDA APIs is that this cannot be done. Device pointers are only meaningful within a given CUDA context, and there is no way to share them between processes.
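As a hedged illustration of that point (the pointer value and the transport are made up for the example): a raw device pointer received from another process is not valid in the receiver's own context, so dereferencing it there should just fail rather than reach the other process's allocation.

```c
#include <cuda_runtime.h>
#include <stdio.h>

int main(void) {
    // Pretend this pointer value arrived from another process, e.g. read from
    // a pipe. It was valid in that process's CUDA context, not in this one.
    void* foreign_ptr = (void*)0x700000000000ULL;  // illustrative value only

    char buf[16];
    cudaError_t err = cudaMemcpy(buf, foreign_ptr, sizeof(buf),
                                 cudaMemcpyDeviceToHost);

    // Expect an error (e.g. an invalid-argument error), not the data that the
    // other process wrote to its own allocation.
    printf("cudaMemcpy returned: %s\n", cudaGetErrorString(err));
    return 0;
}
```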

