Audiopack MIDI VST graphic live patch

Hi to all patchers!
I’m a student exploring the fascinating vvvv world!
I’m trying to develop a graphic sound-reactive patch, but I’m stuck in the middle of nowhere. I’ve watched many tutorials and read a lot of documentation on this amazing forum, but I wasn’t able to change the situation.
I have an external MIDI keyboard, a Korg nanoPAD and an external sound card. These devices will all be connected together in vvvv to generate graphics during live musical performances: the keyboard for piano/pad/lead VSTs, the sound card with the Guitar Rig VST, and the nanoPAD for the drum session using another VST.

I’m using the vvvv audiopack, and my first idea was to generate MIDI data with my external devices, route the signal to a VSTHost node and send the audio signal out so I can hear it, while an FFT node attached to the VSTHost node analyses the sound and splits it into different frequency channels that drive the reaction of the (work-in-progress) graphics part. I can’t understand why the FFT (4 channels) is not working properly. The node splits the frequencies in a really weird way, outputting high frequencies when I play basses and vice versa; the frequencies are not balanced. Is the FFT working like a spectrum analyser?

Another goal of my project is to visualize in a renderer the MIDI data sent by the keyboard and the nanoPAD, but I can’t understand whether it’s possible to interact with the VAudio MidiIn node or if there is another way to do that.

Thank you all for the support!

First off, I think it would be helpful if you post your patch; then it is much easier to help you out.

Second, FFT works by analysing the audio into half the number of frequency bands that you set with Buffer Size. They are linearly distributed between DC and half the sample rate you are operating at, in equally sized frequency bands, where each band shows the energy at that frequency (@tonfilm please confirm or correct this, especially the DC part). Our hearing is not linear, so you need to do some summing of the frequency bands to get something that is meaningful to humans. That is what is done in FFT (4 channels), but that node works on DShow audio drivers, so it will not work with VAudio; it should however be pretty simple to change it to work with VAudio.
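
To illustrate the idea (outside vvvv, just a rough Python sketch with an assumed sample rate and buffer size, not the actual node internals), this is roughly what summing the linear FFT bins into a few perceptually wider bands looks like:

```python
import numpy as np

SAMPLE_RATE = 44100     # assumed sample rate
BUFFER_SIZE = 1024      # assumed FFT buffer size -> ~512 usable bins
NUM_BANDS = 4           # like the FFT (4 channels) module

# placeholder input buffer: a 440 Hz sine instead of real audio
t = np.arange(BUFFER_SIZE) / SAMPLE_RATE
buf = np.sin(2 * np.pi * 440 * t)

# magnitude spectrum; bin 0 is DC, each bin is SAMPLE_RATE / BUFFER_SIZE Hz wide
mags = np.abs(np.fft.rfft(buf * np.hanning(BUFFER_SIZE)))
bin_freqs = np.arange(len(mags)) * SAMPLE_RATE / BUFFER_SIZE

# sum the linear bins into logarithmically spaced bands (rough perceptual grouping)
edges = np.geomspace(20, SAMPLE_RATE / 2, NUM_BANDS + 1)   # 20 Hz .. Nyquist
bands = [mags[(bin_freqs >= lo) & (bin_freqs < hi)].sum()
         for lo, hi in zip(edges[:-1], edges[1:])]
print(bands)   # 4 values: bass, low-mid, high-mid, treble energy
```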

Third, there might be a bug right now where FFT (VAudio Sink) is not working properly until you save and reopen the patch.

Fourth, come to Node17 and join the VAudio Basics workshop; it should be pretty simple to make such audio-reactive patches after that workshop.

Ok, thanks Sunep!
I’ve set up the patch to be understandable and posted it! Inside you can find a description of the entire (simple) process.
My objective is to get a range of numeric slices from the FFT, synchronized with the frequency channels. The second objective is to properly understand how to get MIDI data from the MidiIn node (if that is possible).

Coming to the Node17 workshop would be amazing for me, but I’m in the middle of the university exam rush! (The patch I’m developing is for one of those exams.)

EngineWork02.v4p (481.8 KB)

hi,

after reading your patch I chose a slightly different approach to the FFT solution. I used the ReaFIR VST to filter the audio signal into the low, mid and high frequencies and track their volume outputs. These three outputs can easily be used to manipulate your spheres. It is also a little bit easier to filter precisely which frequency range you want to use, but there is a little latency…
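
The same band-split-and-envelope idea, sketched in Python (using scipy and made-up crossover frequencies, not the actual ReaFIR settings), just to show the signal flow: filter the audio into three bands and take one volume value per band:

```python
import numpy as np
from scipy.signal import butter, sosfilt

SR = 44100
t = np.arange(SR) / SR
audio = np.sin(2 * np.pi * 80 * t) + 0.3 * np.sin(2 * np.pi * 2000 * t)  # fake input

# three simple band filters; 200 Hz and 2 kHz are made-up crossover points
filters = {
    'low':  butter(4, 200,         btype='lowpass',  fs=SR, output='sos'),
    'mid':  butter(4, [200, 2000], btype='bandpass', fs=SR, output='sos'),
    'high': butter(4, 2000,        btype='highpass', fs=SR, output='sos'),
}

# one volume value per band (RMS of the last buffer) -- these three numbers
# are what would drive the size/colour of the spheres
levels = {name: float(np.sqrt(np.mean(sosfilt(sos, audio)[-1024:] ** 2)))
          for name, sos in filters.items()}
print(levels)
```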

Your custom FFT node was missing, so it is difficult to guess what leads to the strange frequency reaction…

Here is the patch:

audio.zip (2.1 MB)

And in case you have to rebuild the VST settings:

The MidiIn node can be used with the MidiSplit node to get readable MIDI data. In the patch there is a little example of how you can get readable notes out of the MIDI data. It only looks at the velocity of your MIDI notes and sets a boolean to true if the velocity is greater than 1. It should work with MIDI controllers that use the velocity to trigger notes on or off (I only tested it with the NI Maschine…).
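
For reference, the raw MIDI logic behind that check is tiny. A Python sketch (plain byte values, no vvvv involved) of what MidiSplit is essentially exposing:

```python
def handle_midi(status, data1, data2):
    """Minimal note-on detector, mirroring the velocity check in the patch."""
    msg_type = status & 0xF0   # upper nibble: message type (0x90 = note-on)
    channel  = status & 0x0F   # lower nibble: MIDI channel
    if msg_type == 0x90 and data2 > 0:   # note-on with velocity > 0
        return True, data1, data2        # triggered, note number, velocity
    return False, data1, 0               # note-off (or note-on with velocity 0)

print(handle_midi(0x90, 60, 100))  # note-on  C4, velocity 100 -> (True, 60, 100)
print(handle_midi(0x80, 60, 0))    # note-off C4               -> (False, 60, 0)
```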

This is a convolution equalizer with extreme latency… you should use a simple FIR EQ or filter with zero latency, set to low-pass, band-pass and high-pass, to achieve this, also because sound quality and phase issues are not of any concern here.

Also note that you can spread VSTs! One VSTHost node can open all 3 instances of the plugin with individual settings; just spread any of the input pins of the VSTHost node.

You can find that out very easily by connecting an OSC node set to sine wave to the FFT. Then sweep the frequency value of the OSC and check which slices of the FFT react. This way you don’t have to do all the maths with sample rate and buffer size.
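
If you do want the maths anyway: each FFT slice is sampleRate / bufferSize Hz wide. A tiny Python check, assuming 44.1 kHz and a buffer size of 1024 (both just placeholder values):

```python
SAMPLE_RATE = 44100   # placeholder value
BUFFER_SIZE = 1024    # placeholder value
bin_width = SAMPLE_RATE / BUFFER_SIZE   # ~43 Hz per FFT slice

def slice_for(freq_hz):
    """Which FFT output slice a sine of this frequency should light up."""
    return round(freq_hz / bin_width)

print(slice_for(440))    # -> 10
print(slice_for(5000))   # -> 116
```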

To get the MIDI values, use the MidiSplit node, as @jjh pointed out.
