my knowledge of vvvv is limited, but I have an idea for something that could maybe be realized in vvvv…
I wanna create an equalizer that, instead of letting new LEDs flash, enhances an already existing image by adding new parts to it as different frequencies increase and decrease in volume.
My strongest idea at the moment is a photographed skyline where the skyscrapers go up and down like the bars of a music equalizer, with the input controlled by vvvv!
edit: just noticed that this project might suit the dreams and rock’n’roll forum better… admins, consider moving!
wow, that thing on Vimeo is precisely what I wanted to do, except that I wanna create a module which can handle any audio input.
Do you still know which module by tonfilm it was?
And apart from that, I'm thinking of the input being not a file but rather an external source, e.g. a DJ set or something!
Well, I think what I'll actually use is the FFT4Channels module included with the BeatDetector, using its outputs bass, lo, himid and high as the 4 amps for the equalizer.
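The core of this idea is mapping each band's amplitude to a discrete "skyscraper height", i.e. which texture slice to show. A minimal sketch in plain Python (not vvvv), assuming amplitudes in the 0..1 range and an example step count of 16:

```python
# Sketch: quantize a band amplitude (0..1) into a texture-slice index.
# The band names bass/lo/himid/high come from FFT4Channels;
# the step count of 16 is just an example assumption.

def amp_to_slice(amp: float, steps: int = 16) -> int:
    """Quantize a 0..1 amplitude into a slice index 0..steps-1."""
    amp = max(0.0, min(1.0, amp))          # clamp out-of-range input
    return min(int(amp * steps), steps - 1)

# one amplitude per band, e.g. from the FFT outputs
bands = {"bass": 0.9, "lo": 0.4, "himid": 0.1, "high": 0.0}
indices = [amp_to_slice(a) for a in bands.values()]
```

In the patch this quantization would simply be the GetSlice index input; the sketch just shows the arithmetic.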
So I need to figure out how to add parts of an image as the volume on one of the outs increases.
But how do I do that?
Is there a node in vvvv that can show images, i.e. JPEGs and such?
of course there is, see the chapter ‘texturing’ at this site: DX9 Rendering
please note that the examples there were made with an old version; in the new versions you have to connect the Quad to the Renderer.
Ok, had a first try with a switch between two textures (one containing a taller skyscraper than the other), where the switch turns on whenever a strong bass frequency comes through (e.g. a kick drum). But it loads the CPU way too much to stay synchronized even with a slow techno beat, since loading the texture apparently takes my PC a few milliseconds too long and I get skipping! I'd need something that only adds a part to an image and then removes it again when the volume goes down…
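The switch described here boils down to a threshold test on the bass amplitude. A minimal sketch in plain Python (not vvvv); the file names and the 0.6 threshold are hypothetical examples:

```python
# Sketch: threshold-triggered texture switch.
# File names and the 0.6 threshold are example assumptions.

def pick_texture(bass_amp: float, threshold: float = 0.6) -> str:
    """Return which of the two textures to show for the current bass level."""
    return "skyline_tall.jpg" if bass_amp >= threshold else "skyline_low.jpg"
```

This also hints at why the two-texture approach stutters: each switch forces a full texture load, rather than reusing textures that are already in memory.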
Hmm, yeah, that works out… for 1 of the 4 frequencies that should be rendered! I can easily connect a FileTexture to the GetSlice and connect the bass-frequency output to the index pin so it chooses which texture to show. BUT the problem is that I wanna do the same with the rest of the frequencies, while each of them has its own individual textures to show. The Quad apparently accepts only one texture input. It's probably just a basic spreading issue on my part, but even after hours of patching I couldn't figure out how to let four skyscrapers grow individually according to the given input…
the index pin of the GetSlice node (like most input pins) also accepts a spread.
if you have the frequencies as a spread, just put them all into the index pin; if not, use a Cons or Vector join to build one. you might want to add an offset to each index with a + node.
now that you have a spread of 4 textures at the texture input of the quad, the quad is drawn 4 times, but all at the same position. to put the quads at different places, connect a Transform 2d node and spread the x and/or y position…
Actually, I wanna have the objects drawn in one and the same quad, IF that's possible, because the background is important. I just tried using a BarSpread for spreading the frequencies, since it works pretty similarly to an equalizer. But if I'm putting 16 bar-spreaded slices (the number may vary depending on how high the frequencies are) into the index pin of the GetSlice, I get no reaction in my renderer :/ - see picture below!
Apart from that, I'm interested in how the renderer decides which texture to show in front when there are multiple texture inputs; I'm sending 16 different filenames into it, but I see barely 3 of them.
Thanks for the help so far; I hope I get to what I want,