The Sleeping Orchestra

hi folks,

i am working on a project where the aim is to trigger a soundbank through the movements people make in their sleep.
videos are recorded as time lapse (1 frame/second).
i use EyesWeb, FreeFrame and vvvv for this.

there are a few questions i have:

  • i would play ~40 videos at the same time. due to cpu and memory constraints i think i will need more than one machine, that's ok - my question is how to render all the videos, including the analysis, side by side on the walls (i will not use xx beamers)
  • is there a way to merge any number of video outputs into one with vvvv?

the first very small preview was in berlin (that's in germany), but i would prefer not to use so many computers, because of the trouble that much hardware brings.

presumably the videos could be smaller than PAL; with the low frame rate you could maybe get away with 1 machine. if you have 40 videos at once, that's an 8 by 5 grid, so maybe they could be 100px wide?
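The tile arithmetic above can be sketched quickly. This is a hypothetical sketch: the 1024x768 output resolution is an assumption (the thread never names one), and `tile_size` is a made-up helper, not anything from vvvv.

```python
# Hypothetical sketch: how small do 40 clips get when tiled on one output?
# The 1024x768 output resolution is an assumption, not from the thread.

def tile_size(n_videos, cols, out_w, out_h):
    """Return (tile_w, tile_h) for a grid of `cols` columns holding n_videos."""
    rows = -(-n_videos // cols)  # ceiling division
    return out_w // cols, out_h // rows

w, h = tile_size(40, 8, 1024, 768)
print(w, h)  # 8 columns x 5 rows -> 128 x 153 px per clip
```

So each clip ends up well under PAL size, which is roughly the point catweasel is making: at 1 frame/second and thumbnail resolutions, one machine may cope.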

thx, catweasel, i think i have to be more precise:

i do analyse the movies (one night of sleep @ time lapse (1 frame/sec) = 18 minutes) at normal PAL resolution (codec independent), because i need to catch even the smallest changes in the (IR-recorded) video, extracting them through a frame analyser for use with EyesWeb/FreeFrame/vvvv.
because i do not want to play the movies on one system and analyse/trigger on another, i plan to do all of this - playing and analysing, audio included - on one cluster of machines.
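For what it's worth, the core of such a frame analyser is often just a frame difference. A minimal numpy sketch under stated assumptions: the frames here are synthetic arrays standing in for grayscale IR frames, and the threshold of 10 is an arbitrary guess to be tuned against the real footage.

```python
import numpy as np

def motion_score(prev, curr, threshold=10):
    """Fraction of pixels whose brightness changed by more than `threshold`.

    `prev` and `curr` are 8-bit grayscale frames. The threshold of 10 is
    an arbitrary assumption, to be tuned against the actual IR footage."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return float(np.count_nonzero(diff > threshold)) / diff.size

# two synthetic 576x720 "frames": in the second one a small region changed
a = np.zeros((576, 720), dtype=np.uint8)
b = a.copy()
b[100:120, 100:120] = 200  # a sleeper shifts an arm
print(motion_score(a, b))
```

One score per time-lapse frame is a very small data stream, which matters for the cluster question: the analysis itself is cheap next to decoding and rendering 40 videos.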

well, i am a programmer, not an artist, but in this case i do worry about the amount of hardware i need for the performance - this is my biggest problem. i do not want to destroy the aesthetics of the room with a roaring, fervent pyramid full of cables, cables, fans and cables…;-)

hmm, i know, that all sounds a bit weird - at the moment i record movies of old people having a nap in the parks, kids sleeping in their strollers, and stuff like that. hmmm…however, thanks for any suggestions, have a few nice work-free days, cheers.

yeah sounds weird. but nice.
don't know if i get it right… you want to analyse 40!? videos at the same time? pre-recorded or realtime footage? (do you have 40 IR cams?! or just one with a fish-eye?)

if so, i would look for another way to get the data (maybe some kind of pressure sensors inside the beds) - i think that could save a lot of … everything.
what i don't understand: if you use EyesWeb for the recognition, why do you need FreeFrame in v4 too?
for multiple sound playback on one machine, maybe Max/MSP or Pd is more what you're looking for…


I'd cheat and record the tracking data, in a similar way to the FFT recorder, and then play the data back along with the movie - would that work?
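The record-then-replay idea above could be sketched like this: run the analysis once offline, write one motion value per frame, then at show time just stream the values back in sync with the movie. The file format and names here are made up for illustration.

```python
import csv

def record(values, path):
    """Write one motion value per time-lapse frame: frame index, value."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        for i, v in enumerate(values):
            writer.writerow([i, v])

def playback(path):
    """Yield (frame, value) pairs in order, to be fired in sync with the movie."""
    with open(path) as f:
        for frame, value in csv.reader(f):
            yield int(frame), float(value)

# hypothetical per-frame motion scores for one night's recording
record([0.0, 0.12, 0.03], "night01_motion.csv")
for frame, value in playback("night01_motion.csv"):
    print(frame, value)
```

This removes the realtime analysis load entirely: the machines only need to decode video and look up precomputed numbers.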

@catweasel, this may work. but i also wish to have some realtime component, some interaction stuff. is that possible to do? hmmm…

good idea anyway, thx catweasel - my horizons are a bit wider now…

@milo: no, no, no, sorry, it's not realtime (where would i put all the sleeping people)…hehe…;-)
i have prerecorded footage of many friends and other people sleeping - i've been collecting it for a long time…


Yeah, you could keep some realtime stuff in - a live camera and you asleep on the bed, tired out from all your manic patching!
Or you could mix the movies on screen, and maybe interpolate between the data assigned to each; this could be automatic or manual…?
Good luck, and keep us informed!
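The interpolation idea could be as simple as a crossfade between the per-frame motion data of two clips. A hypothetical sketch (the function name and the mix parameter are made up; `t` could be driven automatically by an LFO or manually by a fader):

```python
def mix_data(a, b, t):
    """Linear crossfade between two equal-length motion-data streams.

    t=0.0 plays stream `a` only, t=1.0 plays stream `b` only; anything in
    between blends them. `t` is the hypothetical mix control."""
    return [(1.0 - t) * x + t * y for x, y in zip(a, b)]

print(mix_data([0.0, 1.0], [1.0, 0.0], 0.5))  # [0.5, 0.5]
```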