hey velcrome,
there was also some vodka :)
regarding the Live setup, it's quite simple: I parse the names of the tracks and of the playing clips and send them to vvvv
the track name defines the category
the clip name defines the patch or the media file to recall (an evaluated patch switcher with dynamic R/S nodes)
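to give an idea, here is a minimal Python sketch of that name parsing (the OSC port and address layout are my assumptions, not the actual setup):

```python
# sketch: forward Live track/clip names to vvvv over OSC (pythonosc assumed installed)
from pythonosc import udp_client

# hypothetical port and address layout; the real setup may differ
client = udp_client.SimpleUDPClient("127.0.0.1", 4444)

def on_clip_launched(track_name: str, clip_name: str) -> None:
    # track name = category, clip name = patch or media file to recall
    category = track_name.lower()
    client.send_message(f"/vvvv/{category}/recall", clip_name)

on_clip_launched("Mesh", "torus_field")  # -> /vvvv/mesh/recall "torus_field"
```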
each track has a Max for Live device with 8 parameters assigned to the corresponding vvvv content:
Mesh, GSFX, media, Texture Transform, Mesh Transform, etc.
everything is based on name matching with some special keywords
e.g. a clip named “random” on the media track uses the mesh count and assigns n textures (media or generative) to the meshes
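the keyword dispatch could look roughly like this (a hedged sketch; the function name and texture pool are hypothetical, only the “random” behavior is from the description above):

```python
import random

# special clip names trigger behaviors instead of recalling a file
def resolve_media(clip_name: str, mesh_count: int, texture_pool: list[str]) -> list[str]:
    if clip_name == "random":
        # "random" keyword: one texture (media or generative) per mesh
        return [random.choice(texture_pool) for _ in range(mesh_count)]
    # default: the clip name is the media file itself, applied to every mesh
    return [clip_name] * mesh_count

print(resolve_media("random", 3, ["noise.fx", "clouds.mov", "grid.png"]))
```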
it's of course bidirectional and WYSIWYG:
recalling a patch updates the 8 parameter values and names in Live
you can combine this with immediate or pickup automation behaviors
and you can also prepare automation curves inside your clips, or record them on the fly.
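the return channel could be sketched like this (the address scheme and port are assumptions; the real Max for Live device surely differs):

```python
from pythonosc import udp_client

# hypothetical feedback path to the Max for Live device on each track
live = udp_client.SimpleUDPClient("127.0.0.1", 9000)

def push_patch_state(track_index: int, params: dict[str, float]) -> None:
    # on patch recall, update the 8 parameter names and values shown in Live
    for slot, (name, value) in enumerate(params.items()):
        live.send_message(f"/track/{track_index}/param/{slot}/name", name)
        live.send_message(f"/track/{track_index}/param/{slot}/value", value)

push_patch_state(2, {"scale": 0.5, "rotation": 0.0, "blend": 1.0})
```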
here mrboni is right: the data sent by Live is generic
the question is: do we need linear sequencing like Timeliner or Duration…
or non-linear like Ableton or Vux's Feraltic Timeline?
I guess it massively depends on the project…
but I'm pretty sure we all spend a huge amount of time reinventing the (visual sequencing) wheel each time we start a new project! please don't tell me I'm the only one :)
using Ableton may solve part of this and let you get the best of both the linear
and non-linear worlds:
you can use a master clip to trigger scenes with predefined timings and content,
or manually trigger scenes, clips, or sequences of clips (launch clip feature) in
a non-linear way.
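for the linear side, a master-clip-style driver could look something like this sketch (the OSC bridge and its address are illustrative assumptions, not any specific tool's API):

```python
import time
from pythonosc import udp_client

# assumes some OSC bridge into Live; the /live/scene/fire address is illustrative
live = udp_client.SimpleUDPClient("127.0.0.1", 9000)

# predefined timings and content: (seconds from start, scene index)
timeline = [(0.0, 0), (16.0, 1), (48.0, 2)]
start = time.time()
for at, scene in timeline:
    time.sleep(max(0.0, start + at - time.time()))
    live.send_message("/live/scene/fire", scene)
```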
some more cool features:
tempo quantization and multi-user operation (up to 6 users) with different hardware…
I haven't found the time to try your Message contrib yet, but I'm sure it's the way to go! so next time it's champagne!