It dawned on me that for compute shaders doing non-graphics work, it would be nice to assign them to the integrated GPU or a second card so they don't interfere with the real graphics output pipeline.
For example, I process depth camera data and do various kinds of processing to effectively get my own user tracking and skeleton data and just use the derived numbers. Is there a way to do this? Thanks!
I don’t think that would work, unless the data-only compute shader is running in its own vvvv instance (which is an option though). But it’s something to look into, thanks.
Ideally I would like to be able to assign the compute shader to an unused GPU in the same instance, since DX11 (as I understand it) automatically uses the GPU attached to the display the renderer is on. Oh sensei @Vux, possible?
Running multiple instances is a valid strategy: you can make concurrent GPU calls, and the depth-image-processing instance can run at only 30 fps, sharing the results and maybe textures with the rendering instance.
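Inside vvvv you would wire this up with its own networking nodes, but the underlying pattern of one instance publishing derived tracking numbers to another is simple to sketch. This is a minimal illustration, not vvvv's actual mechanism; the port number and JSON payload format are assumptions for the example:

```python
import json
import socket

PORT = 9000  # illustrative port, not any vvvv default


def publish_skeleton(sock, joints):
    """Processing instance: send derived numbers (e.g. joint positions)
    as a single UDP datagram to the rendering instance."""
    payload = json.dumps({"joints": joints}).encode("utf-8")
    sock.sendto(payload, ("127.0.0.1", PORT))


def receive_skeleton(sock):
    """Rendering instance: blocking receive of one datagram;
    returns the joint list decoded from the JSON payload."""
    data, _addr = sock.recvfrom(65535)
    return json.loads(data.decode("utf-8"))["joints"]
```

Because it is fire-and-forget UDP, the 30 fps processing instance never blocks the renderer; the renderer just uses the most recent skeleton it received.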
The source code tells me DX11 nodes work with only one adapter at a time, but maybe @vux has some insights.
Yeah, I do run multiple instances in some cases, like handling four cameras at once, but in cases where low latency is important or I'm just using one camera for simplicity, it's nice to have it all in one. When you go multi-instance you have to implement all this process-checking and recovery logic for unsupervised installations. And my depthcam plugins use polling checks on the camera, so its framerate does not determine the overall FPS. These compute shaders may not be making any real impact on the output graphics pipeline anyway; I was just curious to check.
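The process-checking and recovery logic mentioned above usually boils down to a small watchdog that restarts a child process whenever it dies. A minimal sketch, assuming the instance is launched as an external command (the restart limit and backoff are illustrative choices):

```python
import subprocess
import time


def run_supervised(cmd, max_restarts=3, backoff=0.5):
    """Launch cmd and restart it each time it exits abnormally,
    up to max_restarts times; returns the number of restarts used.
    This is the kind of recovery loop an unsupervised installation
    needs once the work is split across processes."""
    restarts = 0
    while restarts <= max_restarts:
        proc = subprocess.Popen(cmd)
        proc.wait()
        if proc.returncode == 0:
            return restarts  # clean exit, stop supervising
        restarts += 1
        time.sleep(backoff)  # brief pause before relaunching
    return restarts
```

A real installation watchdog would also log exits and perhaps escalate (reboot the machine) after repeated failures, but the core loop is just this.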