I'm doing a small experiment with vocal notation and need some help.
It's real-time notation of speaking and singing.
Basically I'm trying to move a circle between -1 and +1 on the Y axis depending on the voice's pitch.
So the range is limited to roughly 37.6831 Hz – 1495.5576 Hz, as suggested by other software.
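The mapping itself is simple; here's a minimal sketch of what I mean (log-scaled between those two limits so every octave covers the same vertical distance; plain Python, the names are mine, not from any vvvv node):

```python
import math

F_MIN = 37.6831    # lower pitch limit (Hz)
F_MAX = 1495.5576  # upper pitch limit (Hz)

def pitch_to_y(freq_hz):
    """Map a pitch in [F_MIN, F_MAX] to a circle Y position in [-1, +1].

    Log-scaled, so every octave covers the same vertical distance.
    Frequencies outside the range are clamped to the nearest limit.
    """
    freq_hz = max(F_MIN, min(F_MAX, freq_hz))
    t = math.log(freq_hz / F_MIN) / math.log(F_MAX / F_MIN)  # 0..1
    return 2.0 * t - 1.0                                     # -1..+1
```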
I've tried the PitchTracker contribution, but it only detects very clear notes. No speaking. Singing works, but it seems better suited to instruments. (Its VAudio version is not working for me.)
Now I'm trying to use the VAudio Gist node, but I'm confused by its output.
If it's based on this library, it should work in a similar way to the PitchTracker contribution: https://github.com/adamstark/Gist
The ofxGist version of this can limit the minimum and maximum input frequency and outputs pitch via the pYIN method, returning -1 when the estimate is not reliable. It works very well for vocals.
I tested it here: https://code.soundsoftware.ac.uk/projects/tony
Here is a patch screenshot of what I'm trying to do, and here's the patch:
Gist VocalPitch.v4p (32.7 KB)
Is it possible to get less jittery results using the VAudio Gist version?
How can I filter the incoming audio and limit the incoming spectrum?
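To illustrate what I mean by limiting the spectrum: a band-pass in front of the detector, restricted to the vocal range above. A sketch in plain Python using a standard RBJ-cookbook biquad (this is not any existing VAudio node, the function and its parameters are mine):

```python
import math

def bandpass_biquad(fs, f_low, f_high):
    """Return a per-sample band-pass filter (RBJ audio-EQ-cookbook biquad).

    Centered geometrically between f_low and f_high, with Q derived
    from the bandwidth. One biquad is a gentle 2nd-order filter;
    cascade several instances for steeper edges.
    """
    f0 = math.sqrt(f_low * f_high)   # geometric center frequency (Hz)
    q = f0 / (f_high - f_low)        # Q from the desired bandwidth
    w0 = 2.0 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2.0 * q)
    a0 = 1.0 + alpha
    b0, b1, b2 = alpha / a0, 0.0, -alpha / a0
    a1, a2 = -2.0 * math.cos(w0) / a0, (1.0 - alpha) / a0
    x1 = x2 = y1 = y2 = 0.0          # filter state (previous samples)

    def process(x):
        nonlocal x1, x2, y1, y2
        y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        x1, x2 = x, x1
        y1, y2 = y, y1
        return y

    return process

# Usage: filter the mono input before it reaches the pitch detector.
# f = bandpass_biquad(44100, 37.68, 1495.56)
# filtered = [f(s) for s in samples]
```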
How can I ignore inaccurate pitch results, or define better rules for discarding its output?
Frequency difference or RMS doesn't work that well for this…
Tony works really accurately and seems to be based on the same pitch detection. What am I doing wrong? :) My main problem is that the results are very random while speaking. I know that makes some sense, but I need a way of filtering out results that appear inaccurate, e.g. short peaks while talking.
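Something like this post-processing is what I imagine (a sketch in plain Python; the -1 is pYIN's "not confident" value, while the window size, jump threshold, and 3-frame rule are guesses on my part, not values from any existing node):

```python
from collections import deque

class PitchSmoother:
    """Gate and median-smooth a stream of raw pitch estimates.

    Holds the last good value when the detector reports -1 ("not
    confident") and ignores jumps shorter than 3 frames, which is
    roughly what short peaks while talking look like.
    """

    def __init__(self, window=5, max_jump_hz=200.0):
        self.window = deque(maxlen=window)  # recent accepted estimates
        self.max_jump_hz = max_jump_hz      # bigger jumps need confirmation
        self.outliers = 0                   # consecutive rejected frames
        self.last = None                    # last smoothed output

    def push(self, freq_hz):
        """Feed one raw estimate; return the smoothed pitch (or None)."""
        if freq_hz <= 0:                    # -1: detector not confident
            return self.last                # hold the previous value
        if self.last is not None and abs(freq_hz - self.last) > self.max_jump_hz:
            self.outliers += 1
            if self.outliers < 3:           # a short peak: ignore it
                return self.last
            self.window.clear()             # a sustained jump: follow it
        self.outliers = 0
        self.window.append(freq_hz)
        vals = sorted(self.window)
        self.last = vals[len(vals) // 2]    # median of the window
        return self.last
```

The idea is that a single wild frame (or a -1) never moves the circle, but a new pitch that persists for a few frames is accepted.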