A set of nodes to encode and decode Linear (or Longitudinal) Timecode (LTC) in VL.Audio.

Code borrowed from VVVV.Audio by @tonfilm. Depends on libltc by Robin Gareus and LTCSharp (enclosed) by @elliotwoods.

Initial development and release sponsored by wirmachenbunt.

It’s not a proper nuget yet, due to the issues stated below, but it can be used just like one. One thing to keep in mind: if you export your application you’ll have to manually copy Ijwhost.dll from VL.Audio.LTC\lib\net6.0-windows or VL.Audio.LTC\runtimes\win-x64\native into the application folder.

Issues with C++/CLI

C++/CLI libraries in .NET Core need a shim called Ijwhost.dll for finding and loading the runtime. Currently there seems to be no “official” method for dealing with this shim when a C++/CLI library is part of a NuGet package. Some say to include it in the package, others say that this can easily lead to DLL hell (what happens when two packages come with different versions of the shim?). Here are some related GitHub issues with further info:

I tried the “workaround” of including a manifest file for Ijwhost.dll. But:

  • vvvv only finds the shim when it is located alongside LTCSharp.dll, e.g. both files in VL.Audio.LTC\lib\net6.0-windows
  • when the shim is located in VL.Audio.LTC\runtimes\win-x64\native, as suggested by some, vvvv doesn’t pick it up
  • in neither case is the shim copied to the output when exporting a document referencing VL.Audio.LTC

It’d be nice if the devvvvs could look into this.*

Some Remarks

LTC only supports framerates of 24, 25 and 30 FPS, as traditionally encountered in the context of film/video. As long as your software runs at those framerates all should be fine and dandy: seconds will be split into [0 … 23], [0 … 24] or [0 … 29] frames.
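To illustrate the frame split, here is a minimal Python sketch (not the actual VL node, just the arithmetic) that turns a time in seconds into hours/minutes/seconds/frames for a given LTC framerate:

```python
def seconds_to_timecode(seconds, fps):
    """Split a time in seconds into (hours, minutes, seconds, frames)
    for an LTC framerate of 24, 25 or 30 FPS."""
    total_frames = int(round(seconds * fps))
    frames = total_frames % fps            # [0 .. fps-1]
    total_seconds = total_frames // fps
    return (total_seconds // 3600,
            (total_seconds // 60) % 60,
            total_seconds % 60,
            frames)
```

For example, 1.5 s at 24 FPS gives frame 12 of second 1, and 0.96 s at 25 FPS gives frame 24, the last frame of second 0.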

60 (or 50) FPS, as you’d more likely use in real-time media, are not supported though. You need to pick an LTC standard whose framerate is a divisor of your desired framerate; the frame count returned will then be multiplied by the ratio of that division. So for a desired mainloop framerate of 60 FPS you’d pick an LTC standard that uses 30 FPS. The decoder will then ideally return the frames [0,0,1,1,2,2,3,3 … 29,29] and you need to account for that (see Interpolate on the ToSeconds node).
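One way to account for the repeated frames is to count how often the current frame has repeated and use that as a sub-frame index. This is only a sketch of the idea, not how the ToSeconds node is implemented:

```python
def upsample(decoded_frames, ratio):
    """Map repeated low-rate LTC frames to a higher-rate frame count,
    e.g. [0,0,1,1,2,2] with ratio 2 -> [0,1,2,3,4,5].
    Extra repeats (decoder glitches) are clamped instead of overrunning."""
    out = []
    prev, reps = None, 0
    for f in decoded_frames:
        reps = reps + 1 if f == prev else 0   # how often f repeated so far
        prev = f
        out.append(f * ratio + min(reps, ratio - 1))
    return out
```

Note that a glitchy input like [0,0,1,1,1,2] comes out as [0,1,2,3,3,4]: the extra repeat holds the count instead of running ahead.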

But things can get finicky. During testing I for example encountered [0,0,1,1,1,2,3,3 … 29,29], and @tgd even reported that some frames got lost entirely, like [0,0,1,1,1,1,3,3 … 29,29]. From what I can tell this is more likely to happen when using WASAPI or ASIO with an audio buffer size >256. It might be hardware- and/or driver-dependent. So make sure to check the continuity of the incoming frames (by queuing them, for example).
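A continuity check could look something like this sketch (hypothetical helper, not part of the package): collect the decoded frame values and flag any position where the count jumps by more than one, taking the wrap at the second boundary into account.

```python
def check_continuity(frames, fps):
    """Return the indices where the decoded frame count jumps by more
    than 1 relative to the previous reading (i.e. a dropped frame)."""
    gaps = []
    prev = None
    for i, f in enumerate(frames):
        if prev is not None:
            step = (f - prev) % fps   # wraps cleanly at the second boundary
            if step > 1:
                gaps.append(i)
        prev = f
    return gaps
```

Repeats (step 0) are expected with the ratio trick above; only steps greater than 1 indicate lost frames.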

TBH, if you are working in a professional context and can spare the money, using a dedicated timecode reader card might be the better option.

* I wanted to see how it works with other C++/CLI nugets so I tried VST.Net (which was also my cheat sheet for the manifest stuff) but unfortunately vvvv doesn’t seem to like it. I can reference it and it shows up in the Solution Explorer but no classes or methods are accessible. Idk if this is related to my original issue or something different.


Incredible work! Recently I jumped into TCNet, which seems to be the successor of LTC, but I am facing a similar issue when it comes to FPS. So I was thinking: in your scenario, how feasible would it be, instead of making the division, finding the ratio and multiplying again, to just use an internal clock and keep it in sync?
This is my holy grail atm, because I tried different approaches but none of them is really good for real-time stuff.


Nice work, and thanks for documenting the process. So you basically got the original code from my vvvv beta package working in 2 hours and then spent 2 weeks getting the setup working, is that a correct assumption? :)


Basically, yep… C++/CLI seems to be such a mess in this use case. I was wondering if it’d be possible to do the wrapper using P/Invoke instead, but I guess that would just open another can of worms.


I guess it’s possible in theory, but most likely pretty hard to get right and pretty easy to mess up. The ImagePlayer tries to do this if I am not mistaken, but according to @tgd it’s not working properly.

I’ve always wanted to try out syncing computer clocks via PTP, for example, but never got around to it.


I’ve done this before when frame-syncing video over networks: basically you run an adjustable clock and nudge it by microseconds if it runs faster/slower than the incoming clock, with an ignore if there is a gap, so the clock freewheels until it picks up a regular TC again. I’ll see if I can find a patch if it helps, but I seem to remember it being fairly straightforward, as internal clocks are pretty steady and you are just adjusting for ticks in the mainloop/video playback decode.
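The nudged-clock idea described above could be sketched like this (names and thresholds are made up for illustration, not taken from any patch):

```python
class DisciplinedClock:
    """Free-running clock nudged toward an external timecode.
    Small errors are corrected gradually; big jumps (gaps/dropouts) are
    ignored so the clock freewheels until regular TC returns."""

    def __init__(self, gap_threshold=1.0, nudge=0.0005):
        self.offset = 0.0                  # seconds added to the local clock
        self.gap_threshold = gap_threshold # errors above this are gaps
        self.nudge = nudge                 # max correction per update (s)

    def now(self, local_time):
        return local_time + self.offset

    def update(self, local_time, tc_time):
        error = tc_time - self.now(local_time)
        if abs(error) > self.gap_threshold:
            return  # gap or dropout: freewheel, don't chase it
        # nudge by at most `nudge` seconds toward the incoming clock
        self.offset += max(-self.nudge, min(self.nudge, error))
```

Each decoded TC reading calls `update` with the local time at which it arrived; everything else reads `now`, which stays smooth even when TC glitches.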


Yes, if you only need some vvvv machines to sync, the vvvv clock sync is the easiest, it should be way below a frame in precision. See these help patches:


I think the point here is when you want to interact with running show systems and need an official standard?

Yes, the standard time is LTC, and from that I sync an internal clock that gives millisecond time, which filters out the frame repeats and possible dropouts.


Yes, that would be the best way. Just manage an offset for a local stopwatch.

@bjoern you can do that by calculating the offset and smoothing it at the exact moment the new LTC time frame arrives. Then it doesn’t matter how many frames it’s working with. The only important thing is that you don’t do it in the main loop but when the time is actually decoded. You can store the last few hundred offsets and calculate a weighted average. That should give you a good estimate of what the offset should be.
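A minimal sketch of that weighted-average smoothing, with linear weights favouring the newest samples (the class name and weighting scheme are my own choices for illustration):

```python
from collections import deque

class OffsetSmoother:
    """Keep the last N offsets (measured when a new LTC frame is decoded,
    not in the main loop) and estimate the true offset as a weighted
    average that favours recent samples."""

    def __init__(self, size=100):
        self.samples = deque(maxlen=size)  # oldest samples fall out

    def add(self, offset):
        self.samples.append(offset)

    def estimate(self):
        if not self.samples:
            return 0.0
        # linear weights: the newest sample counts most
        weights = range(1, len(self.samples) + 1)
        total = sum(w * s for w, s in zip(weights, self.samples))
        return total / sum(weights)
```

The local clock then simply adds `estimate()` to its own time, so individual jittery or dropped readings barely move the result.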


Let’s see if @catweasel can unearth that patch before I try to reinvent the wheel. Or if someone else wants to give it a go, I’ll be happy to add it.
TBH I am currently not that motivated to put even more time into this :)


So there is a deeper issue here, I think. I’m running a stopwatch that adds to the seconds counter (on the right), the middle is TC as digital, and the third column is the frames from TC. So you can see that the seconds tick over but the frames are still on the previous count. I was measuring the frame difference and occasionally it would go over 1 sec, so I had to see why!
I’ve added the patch below; it may need some extra logic to make it stop and start better, and to allow jumping/scrubbing time. That’s just: if the output is more than x out, then bang the S+H. The trouble is, with the frames being out of sync, it needs to be more than 1 second out. Or just use the seconds as scrub, and it will be back in sync after the next second count. TC isn’t really meant to be scrubbed anyway, and as it used to be used to sync tape, you always had a preroll to get the tapes to speed, so it is probably within spec to resync on the seconds.

Looking back at my old patch, I was actually using an audio file of TC and multimedia-timing it into sync (on seconds again), using the file position out to drive my timeline keyframes.

Explanation Overview LTC.vl (60.6 KB)

VL.Audio.LTC.vl (124.6 KB)
I’ve reworked FromSeconds slightly: instead of Round I am using a Frac so we don’t get any rounding errors. This helps in encode and I think stops some of the weird repeats (although some still slip through). I have also seen a frame skip ahead when changing whole seconds, so frame 1 and then frame 0 appearing, and this is what the Frac fixes.
Also attached is my test patch; see if this works better for you.

Explanation Overview LTC0.1.vl (66.0 KB)

@bjoern yes, I was expecting that this may be overcomplicated. Ideally I would like to interpolate frames in between a second. Since the most precise representation of time (because of missing or scrambled data) is the second itself, my thought was not to take into account the frames coming through, or the ms I am calculating out of them, but to interpolate from second to second. I am still wondering if this is the proper direction and if it is safe for production.

That’s what @catweasel is doing in the latest patch he posted if I understood it correctly.

I’m also not sure. Especially if this is done on multiple clients independently. Or would the way to go be to have only one “server” that reads / interpolates LTC and then sends that time (via UDP for example) to the other vvvv clients?

Not really sure what that means.

Expanding a bit on this subject.
LTC is cool, but implementing MTC aka MIDI Time Code should be relatively simple and would enable the use of something like the MOTU micro express, which can convert both ways between LTC and MTC and works standalone.

So having both MTC and LTC would be both achievable and desirable.


I would say the way to find out if it is production-ready is to test it! I think that for run time, i.e. a show where the clock starts and runs to the end, this will work fine. At edit time you might hit stop and be a frame or 2 out, but as the input-to-output latency of the encode/decode is a frame or 2, I’m not sure it gets much better than that. The sync works by resetting a stopwatch, which gives ms rather than patch-frame-time precision, and when I tested simply setting it running, and not resetting every frame, it kept perfect sync, certainly over the minute or 2 it was running. So resetting every second should be as bang on as you can get, really.
Re: multiple clients, I would run timecode to each, as they should all get the second tick within a sound-card buffer size of each other, and UDP latency is variable anyway.
Tonfilm, I think, is talking about putting the stopwatch in the decoder and having it be an async process, but the stopwatch is async itself, I believe?
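The stopwatch-reset-on-seconds approach can be sketched like this (a hypothetical illustration of the idea, not the actual patch or node):

```python
class FreewheelSeconds:
    """Millisecond time from whole-second LTC ticks: restart a local
    stopwatch whenever the decoded seconds value changes, and report
    seconds plus the elapsed stopwatch time in between."""

    def __init__(self):
        self.base_seconds = None  # last decoded whole-second value
        self.started_at = None    # local time when it was decoded

    def tick(self, local_time, ltc_seconds):
        if ltc_seconds != self.base_seconds:
            self.base_seconds = ltc_seconds  # new second: reset stopwatch
            self.started_at = local_time

    def now(self, local_time):
        if self.base_seconds is None:
            return 0.0
        return self.base_seconds + (local_time - self.started_at)
```

Between second ticks the stopwatch freewheels, so repeated or dropped frames within a second never disturb the output.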

There is an MTC Out node implementation here

but in this case MTC IN is needed.

Could you move the Midi discussion to another thread please?

Just released a “proper” nuget, thanks to @Elias for the help.
Added @catweasel’s fixed version of FromSeconds and a new version of ToSeconds that “discards” the frames and uses a stopwatch instead; I also kept the old interpolation version. The new one is called ToSeconds (TimeCode Freewheel), the old one is now called ToSeconds (TimeCode Interpolate) (maybe someone can come up with better names).
I haven’t tested the Freewheel version in a multi-client scenario yet.

In order to use this library you have to install the nuget, which is available via nuget.org. For information on how to use nugets with VL, see Managing Nugets in the VL documentation. As described there, go to the commandline and type:

nuget install VL.Audio.LTC