Deinterlacing BNC/S-Video-Capture


anybody know which multi-BNC (or S-Video) capture cards support deinterlacing during capture with vvvv?

Holo3D does, but we were unhappy with the image quality.
Some Blackmagic cards promise to do the job too.

The Hauppauge Impact VCB does not. (otherwise it’s a friendly and cheap card)

for max:
Do you have a Holo3D?
Are you unhappy in YUV or SDI mode?

AFAIK (correct me, joreg) with either input it had color seams which could not be corrected. We have two of them and they both show this issue. The deinterlacing was fine, though.

I know nothing about it, but how about the DScaler node? And can’t you use the deinterlacing feature of the VideoTexture (EX9.Texture.VMR9 YUVMixingMode)?

ah, and in a recent project we simply resorted to setting the video camera (which was a nice upper-class Canon model) to a shutter speed of 1/25th of a second, which made it deliver non-interlaced video.

we were unhappy with both YUV and SDI mode, because it was impossible to set the chroma delay parameter to a value without artifacts. It is also necessary to start the provided application once before you can use the card within vvvv.
i guess it should be possible to write a pixel shader to correct that chroma delay driver bug, but it might be too much effort.
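just to sketch the idea (untested, and all names here are made up): such a shader could keep the luma of the current pixel and take the chroma from a horizontally shifted sample, converting via BT.601 since the texture presumably arrives as RGB.

```hlsl
// hypothetical sketch: compensate a fixed horizontal chroma delay.
// ChromaDelay (in pixels) and Width are assumed parameters, not from any real driver.
sampler Samp : register(s0);
float Width = 720;
float ChromaDelay = 2;

float3 RGBtoYUV(float3 c)
{
    return float3(dot(c, float3( 0.299,  0.587,  0.114)),
                  dot(c, float3(-0.147, -0.289,  0.436)),
                  dot(c, float3( 0.615, -0.515, -0.100)));
}

float3 YUVtoRGB(float3 yuv)
{
    return float3(yuv.x + 1.140*yuv.z,
                  yuv.x - 0.395*yuv.y - 0.581*yuv.z,
                  yuv.x + 2.032*yuv.y);
}

float4 main(float2 tex : TEXCOORD0) : COLOR
{
    float3 here    = RGBtoYUV(tex2D(Samp, tex).rgb);
    float3 shifted = RGBtoYUV(tex2D(Samp, tex - float2(ChromaDelay / Width, 0)).rgb);
    // luma from the current pixel, chroma from the shifted sample
    return float4(YUVtoRGB(float3(here.x, shifted.yz)), 1);
}
```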

i still feel that writing an adaptive deinterlacing pixel shader should be possible and would be the preferred solution in almost all cases.
i was never able to find any pointers to advanced shader-based deinterlacing algorithms, though.

it is just that joreg needs to implement a pin on the videotexture which outputs 1 or 0 depending on whether a new video frame was received in the current vvvv frame…

i have found this about shaders and deinterlacing:

from my understanding of the way pixel shaders work, the problem the guys are having sounds like a trivial one and should be completely solvable. it sounds like they are after some simple bob algorithm. this article explains some interlacing basics
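a bob deinterlacer in shader form could be as short as this sketch (untested; Height and FieldParity are assumed parameters that would have to be fed in from the patch): keep the lines of the current field and interpolate the missing field’s lines from their vertical neighbours.

```hlsl
// hypothetical bob sketch: keeps lines of the current field,
// fills the missing field's lines by averaging the lines above and below.
sampler Samp : register(s0);
float Height = 576;       // texture height in lines (assumed)
float FieldParity = 0;    // 0 or 1: which field is current; must alternate per frame

float4 main(float2 tex : TEXCOORD0) : COLOR
{
    float row = floor(tex.y * Height);
    float h = 1.0 / Height;
    if (fmod(row, 2) == FieldParity)
        return tex2D(Samp, tex);    // this line belongs to the current field
    // otherwise interpolate from the neighbouring lines of the current field
    return (tex2D(Samp, tex - float2(0, h)) + tex2D(Samp, tex + float2(0, h))) / 2;
}
```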

De-interlacing - an overview by G. de Haan and E.B. Bellers goes into considerable detail on motion-adaptive deinterlacing and includes many references.
just no mention of shaders…

i think you don’t really need the videotexture output, because smooth deinterlaced video playback only works at 50fps anyway.

so if you have a 25fps interlaced input, just set the videotexture to wait for every 2nd frame to get a vvvv rate of 50fps. then you know that every second frame you have a new texture. in theory this should be enough information for a pixel shader. in practice there may be timing issues… but a simple bob deinterlacer could do for a proof of this concept. anybody?
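as a sketch of what i mean (all names made up, untested): the patch would only have to feed a running frame counter into the shader, and the field parity falls out of a modulo.

```hlsl
// hypothetical sketch: FrameCount is a running counter fed in from the patch,
// assumed to be 0 on the frame where the first new texture arrived.
float FrameCount = 0;

// at a vvvv rate of 50fps on a 25fps interlaced source, a new texture arrives
// every second frame, so the field to display alternates with the counter.
float CurrentField()
{
    return fmod(FrameCount, 2);   // 0 = even field, 1 = odd field
}
```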

@joreg: but you’ll need to know on which frame the whole thing starts. otherwise you’d have a 50% chance that odd and even fields are swapped, right? but i agree that for a proof-of-concept shader this issue could be ignored…

right… maybe you could take the VideoIn’s Enable pin as trigger for the first frame and then modulo 2?

i mean the frame where you set VideoIn.Enabled = 1 should be the first.

might work. but every dropped frame will lead to a disaster. until the next dropped frame.

for info, i found this deinterlacing (blend) code in Media Player Classic (there are other shaders too):

sampler s0 : register(s0);
float4 p0 : register(c0);
float4 p1 : register(c1);

#define width (p0[0])
#define height (p0[1])
#define counter (p0[2])
#define clock (p0[3])
#define one_over_width (p1[0])
#define one_over_height (p1[1])

#define PI acos(-1)

float4 main(float2 tex : TEXCOORD0) : COLOR
{
    float4 c0 = tex2D(s0, tex);

    float2 h = float2(0, 1/height);
    float4 c1 = tex2D(s0, tex-h);
    float4 c2 = tex2D(s0, tex+h);
    c0 = (c0*2+c1+c2)/4;

    return c0;
}


My workaround for this is to capture at 720x288 instead of 576: less overhead, and it gets rid of a field!
I was just about to try writing a shader to deinterlace, then I remembered I’d done this before.