I can’t quite figure out how to start this, but I’m sure it’s possible. Is there a way to use the Pipet node to sample in 3D space?
As an example, I have a matrix of LED tubes (like Versa Tubes), 10 x 10, and I’m trying to pass a 3D object (a sphere) through the space. I’m trying to sample where it intersects and output the data so I can do volumetric video in real time.
I’m trying to use 10 cameras to look at the scene at intervals, with a very small gap between the near and far planes, but I’m struggling to get this to work.
I’ve thought about this in the past but haven’t tried to implement it until now. It’s tricky!
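Outside vvvv, the slice-camera idea can be sketched in a few lines of plain Python: each camera’s near/far gap defines a thin slab, and a sphere’s cross-section in that slab is a circle whose radius follows from Pythagoras. The grid size, sphere centre, and radius below are made-up example values, not anything from an actual patch.

```python
import math

# Each of the 10 cameras sees a thin slab of the scene. A sphere of
# radius R centred at (CX, CY, CZ) intersects the slab at depth z in a
# circle of radius sqrt(R^2 - (z - CZ)^2)  -- basic Pythagoras.
GRID = 10                              # 10 x 10 tubes
R, CX, CY, CZ = 3.0, 4.5, 4.5, 4.5     # hypothetical sphere

def slice_mask(z):
    """Return a 10x10 boolean mask: which tubes the sphere's
    cross-section covers at depth z (one camera slab)."""
    dz = z - CZ
    if abs(dz) > R:                    # slab misses the sphere entirely
        return [[False] * GRID for _ in range(GRID)]
    r2 = R * R - dz * dz               # squared cross-section radius
    return [[(x - CX) ** 2 + (y - CY) ** 2 <= r2
             for x in range(GRID)]
            for y in range(GRID)]

# one mask per camera slice, sampled at slab centres z = 0..9
masks = [slice_mask(z) for z in range(GRID)]
```

This is the same data each render slice would produce: a per-depth on/off map for the tube matrix.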
Hey AndyC, I had that idea a long time ago. I never tried it, but we’re taking the same approach! :)
At NODE13, though, there was an LED matrix with all of this already solved, with the advantage that it calculated the internal geometry (you don’t just get a slice where you see a circle outline — it’s really a filled circle, all the internal points are “solid”)…
I don’t have the files; I had a 1 TB hard drive crash yesterday, and it held my whole NODE13 fileserver backup :( … I’m trying to contact the NSA so they can give me their backup — still no response (hehe)…
Besides that, I think your approach is valid… try the following: on the 10 render outputs for your 10 slices, limit the back buffer width/height to 10; that will give you a much cleaner area. After that I would add the 10 slices into one big render and use Pipet on that…
Next step: are you planning to do this with DMX or with an Arduino? Do you have an Ethernet shield? We could try sending all the data through OSC, or raw UDP as well…
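The “one big render” step is just laying the ten 10x10 slices side by side so a single Pipet-style sample pass reads everything at once. A tiny sketch of that indexing, with dummy data standing in for the render outputs:

```python
# Ten hypothetical 10x10 slices (value = slice*100 + y*10 + x, so every
# pixel is unique and the mapping is easy to verify).
slices = [[[s * 100 + y * 10 + x for x in range(10)]
           for y in range(10)]
          for s in range(10)]

# Tile them horizontally into one 100x10 image: slice s, pixel (x, y)
# ends up at big[y][s * 10 + x].
big = [[slices[s][y][x] for s in range(10) for x in range(10)]
       for y in range(10)]
```

One sample pass over `big` then gives you every voxel; to go back from a sampled column index `c` to a slice, use `s, x = divmod(c, 10)`.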
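The raw-UDP option could look something like this in plain Python: flatten a 10x10 frame of 0–255 brightness values into a 100-byte packet and fire it at the Arduino’s Ethernet shield. The IP, port, and one-byte-per-pixel packet layout here are made-up placeholders, not from any real setup.

```python
import socket

ARDUINO_ADDR = ("192.168.1.177", 8888)   # hypothetical address/port

def pack_frame(frame):
    """Flatten a 10x10 list of ints (0-255) into a 100-byte payload."""
    payload = bytes(v for row in frame for v in row)
    assert len(payload) == 100, "expected a 10x10 frame"
    return payload

def send_frame(frame, sock):
    """Fire one frame at the Arduino; UDP is fire-and-forget."""
    sock.sendto(pack_frame(frame), ARDUINO_ADDR)

# example frame: a simple diagonal gradient
frame = [[(x + y) * 12 % 256 for x in range(10)] for y in range(10)]
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# send_frame(frame, sock)
```

On the Arduino side you would read 100 bytes per packet and map them straight onto the tube matrix; OSC would work the same way, just with the values wrapped in an OSC message instead of raw bytes.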
By the way, your first patch works, but in the second upload the ADC/look-at part doesn’t…
Hi vjc4, I guessed I wouldn’t be the first to try this :)
What was the technique used at NODE13 for this? One day I’ll eventually make it to one of the NODEs!
Hope you manage to recover from your crash, fingers crossed!
There are a couple of reasons I’m doing this. One is that some guys here at work may “hopefully” allow me to have a play and try this on some suitable hardware they have rigged for a show I’m working on. Currently their animators animate to an unwrapped UV layout that matches the pixel map. Secondly, I want to prove to myself that this works; I have ideas for a future project projecting onto multiple layers of gauze.
Oh, thanks for letting me know about the broken patch — the previous one didn’t work on a friend’s computer when I sent it to him! I need to double-check my relative file paths, I guess.
Try using an ortho node rather than a perspective transform.
What I’ve done to create ‘solids’ rather than shells is to spread the scale so you have objects nested inside objects — that only works with primitives, of course.
Ideally, voxels are the way to go, but you’ll need to brush up on your maths to get far with them!
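A minimal voxel sketch (plain Python, not vvvv) shows why this route gives filled interiors directly: instead of rendering nested shells, you test every grid cell’s centre against the *solid* sphere, which is the “all internal points are solid” behaviour mentioned above for the NODE13 patch. Sphere parameters are example values.

```python
GRID = 10
R2 = 3.0 ** 2            # squared radius, hypothetical
CX = CY = CZ = 4.5       # sphere centre, hypothetical

# voxels[z][y][x] is True when the cell centre lies inside the sphere;
# every slice of this grid is a filled disc, not just an outline.
voxels = [[[(x - CX) ** 2 + (y - CY) ** 2 + (z - CZ) ** 2 <= R2
            for x in range(GRID)]
           for y in range(GRID)]
          for z in range(GRID)]

lit = sum(v for plane in voxels for row in plane for v in row)  # lit tube count
```

Swapping the sphere test for any other inside/outside function (a signed distance function, say) animates any solid shape through the matrix the same way.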
The other option is to use layers of 2D objects; depending on your resolution, that can be easier and more reliable. For my current project, all the volume stuff I did has now gone and it’s all layers of 2D, which is a shame as I like the clean lines of volumes — but that’s clients for you!