I’m researching reactive visuals. My ultimate goal is to be able to perform music and have visuals that change according to the music, without a human controlling them. I’m still early in the development of the idea, and one of the first things I’d like to come up with is a list of meaningful musical gestures/parameters that could drive the visuals.
I don’t want to get into implementation details yet, but given that Ableton Live is my musical tool, the ideas will be somewhat tied to its way of working.
So far I have the following:
This category comprises the main aspects of working live with the sequencer:
All transmitted from the audio computer to the video computer via MIDI/OSC.
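As a concrete sketch of what that transmission could look like, here is a minimal OSC message encoder in pure Python. The `/live/beat` address and the int argument are made up for illustration; in practice you’d likely use a library such as python-osc, but this shows the wire format (null-terminated, 4-byte-padded strings, big-endian numbers):

```python
import struct

def _pad_str(s: str) -> bytes:
    b = s.encode("ascii") + b"\x00"      # OSC strings are null-terminated
    return b + b"\x00" * (-len(b) % 4)   # and padded to a multiple of 4 bytes

def osc_message(address: str, *args) -> bytes:
    """Encode a minimal OSC message (int and float arguments only)."""
    tags = ","
    payload = b""
    for a in args:
        if isinstance(a, float):
            tags += "f"
            payload += struct.pack(">f", a)  # big-endian float32
        elif isinstance(a, int):
            tags += "i"
            payload += struct.pack(">i", a)  # big-endian int32
        else:
            raise TypeError("only int/float supported in this sketch")
    return _pad_str(address) + _pad_str(tags) + payload

# Example: a beat event carrying the current bar number (hypothetical address)
msg = osc_message("/live/beat", 4)
```

The resulting bytes would then be sent over UDP with a plain socket, e.g. `socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(msg, ("127.0.0.1", 9000))` — the port is whatever the video computer listens on.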
This category comprises data that can be obtained from analysing the audio data:
For both categories, visuals could react in an immediate fashion (such as drawing something on screen on every detected beat) or to changes over time (such as a filter going up or down).
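To make those two reaction modes concrete, here is a small Python sketch (the class name, threshold, and smoothing factor are my own choices, not tied to any particular tool): an immediate trigger that fires on a threshold crossing, plus an exponentially smoothed value for slow changes over time.

```python
class FeatureFollower:
    """Turns a raw audio feature (e.g. per-frame RMS loudness) into two
    control signals: an immediate trigger and a smoothed value."""

    def __init__(self, threshold=0.5, factor=0.9):
        self.threshold = threshold
        self.factor = factor      # closer to 1.0 = slower, smoother response
        self.smoothed = 0.0
        self._above = False

    def step(self, value):
        # Smoothed value: drives gradual visual changes (e.g. a filter sweep).
        self.smoothed = self.factor * self.smoothed + (1 - self.factor) * value
        # Trigger: fires once each time the raw value crosses the threshold,
        # suitable for "draw something on every beat"-style reactions.
        trigger = value >= self.threshold and not self._above
        self._above = value >= self.threshold
        return trigger, self.smoothed

f = FeatureFollower()
t1, s1 = f.step(1.0)   # first loud frame: trigger fires, smoothed value rises
t2, s2 = f.step(1.0)   # still loud: no new trigger, smoothed value keeps rising
```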
I think this list gives me quite an interesting point of departure for developing the idea further. What do you think? Any other parameters that would be really interesting for driving the visuals?
THANKS GUYS a lot, any input will help this research!
I like the idea. I’m new to vvvv, but I’ve programmed computers for 50 years, and I use Python and lots of other languages, and I’ve worked with dataflow language representations and XML, so I’m happy to learn anything new I can, and will be happy to contribute whatever knowledge I have.
Suggestions for other inputs include: note sequences, as in relative note changes on an instrument, or changes in rhythm patterns, or instrument combinations. Note length (as in staccato vs. legato) could be used. One interesting animation could be of an orchestra, with sections of the orchestra affected by which instruments are currently playing in the MIDI stream, e.g. notes rising from the proper section and floating or zipping around above it. vvvv should be great for this type of thing. More generally, any aspect of music that creates an effect in a person could be used as input, as long as vvvv can recognize it. An example would be repetitions, and the number of repetitions, but practically anything detectable is fair game. Using vvvv, the generated effects can be incredibly varied.
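The staccato/legato idea above could be estimated from MIDI note-on/note-off timing. A Python sketch of one possible heuristic (the 0.8 cutoff is an arbitrary choice for illustration, not a standard rule):

```python
def classify_articulation(note_on, note_off, next_on):
    """Classify a note as staccato or legato by how much of the gap
    to the next note's onset it fills (times in seconds)."""
    duration = note_off - note_on      # how long the note actually sounded
    inter_onset = next_on - note_on    # time until the next note starts
    # A note filling most of the inter-onset interval reads as legato;
    # a short note followed by silence reads as staccato.
    return "legato" if duration / inter_onset > 0.8 else "staccato"
```

For example, a note held for 0.95 s before the next onset at 1.0 s would classify as legato, while one released after 0.2 s would classify as staccato.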
Ignore the patch. It helped me understand why I couldn’t get MidiNote to work. It works now. The problem was that another program had the midi port I was trying to use. It seemed to last across a reboot of my XP, but went away when I reassigned the midi port in the other program to a different port.
Do you know how to display a spread of values in an IOBox? Because hovering the mouse over the outlet of MidiNote shows just the first slice of the whole output spread, and by default an attached IOBox also shows just one slice.
Spreading 16 channels and 128 notes results in an output spread of 16*128 = 2048. In such a huge spread, a change of a MIDI note can easily be overlooked. I made a patch where just one channel shows at a time, but in the IOBox you can see all 128 notes for that channel. Try it and see if you spot something…
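For readers less familiar with spreads: the channel picking that patch does could be sketched in Python like this, assuming the 2048-value output is a flat, channel-major list (function names are mine):

```python
def channel_slice(spread, channel, notes_per_channel=128):
    """Pick one channel's 128 note values out of the flat
    16*128 = 2048 spread (channel-major layout assumed)."""
    start = channel * notes_per_channel
    return spread[start:start + notes_per_channel]

# Example: a fake spread where each slice holds its own index
spread = list(range(16 * 128))
notes_ch1 = channel_slice(spread, 1)   # the 128 values for channel 1
```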
Hm, just to be sure that the MidiNote node really doesn’t receive anything, try out this patch. It shows at which index a change happens…
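What that change-detecting patch does can be expressed as a small Python sketch: compare the current spread against the previous frame’s spread and report every index that differs.

```python
def changed_indices(prev, cur):
    """Return the indices where two equally sized spreads differ —
    any hit means the MidiNote output actually changed this frame."""
    return [i for i, (a, b) in enumerate(zip(prev, cur)) if a != b]

# Example: velocities for note 1 and note 4 changed between frames
hits = changed_indices([0, 0, 0, 0, 0], [0, 64, 0, 0, 127])
```

An empty result frame after frame would confirm that the node really receives nothing.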
Maybe you really don’t receive anything; in that case I have no clue what it could be. Maybe it’s blocked by another program? And make sure to close MIDI-OX if you have it open while trying to get MIDI into vvvv, because MIDI-OX wants the MIDI port for itself!
When I decided to retry Anvil after rebooting, I was surprised to find that it came up with the B-side of the 2x2 selected. I hadn’t expected that. When I cleared that condition, vvvv was able to use the port. Apparently the port stays selected even when the offending program isn’t active. And through a reboot? That’s hard to believe. Anybody have another suggestion for what happened?
Thanks for the help, electromeier. It was very, very timely!