great to see you do blender in vl!
here are a couple of thoughts:
- naming: the “vl” is not really important for the name. i’d rather suggest something like: Camera2Blender (Transform)
- in the main vl2blender vl-patch you use a pad named AspectRatio: instead you can simply make a connection from where you write the pad to where you read from it (i.e. you can remove the pad, as it is not necessary)
- is there a reason you do part of the transform-conversion inside the BlenderTransformations patch and other parts (the conversion to radians) outside? i’d put it all in the conversion patch, for clarity.
- same for FovFinder and Fov2fmm: looks to me like they could be one operation. it wouldn’t even have to be a patch, since it’s all just a stateless conversion.
- is there a reason you do the encoding to OSC in vvvv and not directly in vl?
- is there a reason you encode to 8 different osc messages instead of only one with multiple values, or at least only one for translate, rotate, shift, lens?
let me know if you have any questions regarding any of the above and i’ll elaborate…
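To illustrate the last two points about OSC: on the wire, a single OSC message can carry any number of arguments, so the 8 separate messages could collapse into one. Here is a minimal pure-Python sketch of such an encoding; the address and values are made up for illustration, and a real setup would of course use an OSC library or vvvv/vl’s own OSC nodes:

```python
import struct

def _pad(b: bytes) -> bytes:
    # OSC strings are NUL-terminated, then padded to a multiple of 4 bytes
    b += b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def osc_message(address: str, *values: float) -> bytes:
    # address, then a type-tag string ("," + one "f" per float), then big-endian float32 args
    tags = "," + "f" * len(values)
    args = b"".join(struct.pack(">f", v) for v in values)
    return _pad(address.encode("ascii")) + _pad(tags.encode("ascii")) + args

# one message carrying all 9 camera values instead of 9 separate messages
msg = osc_message("/blender/camera", 1.0, 2.0, 3.0, 0.0, 0.0, 0.0, 0.1, 0.2, 35.0)
```

The receiver then decodes one address and reads all 9 arguments in a known order, instead of matching 8 addresses.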
I am glad that you already had a look at this; I was expecting it (it is my first attempt since node 15).
naming: indeed, but as far as I can guess it is not bad to motivate people to use vl :P if you believe this is redundant I’ll remove it of course.
yes, I forgot to remove it; I was sure I’d need it at some point but…
not really, I thought it would be easier to follow and more readable (in general I keep all my patches organised that way, in smaller groups). And yes, I still have trouble understanding how the old spreads work inside VL (with the “ForEach” surrounding them), which is why I didn’t finish the blender2vvvv camera.
as above, I can’t get used to VL so fast, one thing at a time (it is really hell to try things without help patches), especially when I understand less than 50% of the whole VL thing.
I didn’t know that it works that way too; in the very first version it was the only workaround to change the fields quickly and dynamically.
I have watched the video tutorials with Tebjan and Elias on Vimeo many times, and some spare ones on YouTube too; I also checked what delegates are in C# (in general), but I feel I am missing many parts of VL, so I still have some fundamental questions:
In a case like mine (“vvvv2blender camera”), should I use stateful or stateless VL templates?
assuming that this is not very important, I am going with stateful (because I have a loose idea that it gives me more capabilities in terms of scaling).
For ease of use I also picked Process, instead of Record or Class,
but when should I use a Record instead of a Class (maybe this can be clarified in a future tutorial?), and how the heck do I initialize a spread so it can be updated with pads (that was my main question)? My intuition said “Create and Update”, but that is not the case since I am in a Process. Then I thought “come on, just change it to a Class”, and then I noticed that Create/Update couldn’t be applied that way either, and then I tried yet another workaround, and then… This is what we call “Kykeon”: being lost in an almost infinite loop without any certain result or any certain direction to follow.
and my second (simpler) question is:
assuming that I have solved all my problems (even my financial ones), how can I get the index of a slice (former slice)? I found IndexOf, but without any sort of description it was really difficult to find out what it does. I want to update the value (SetSlice) when the bin size of my indices array changes (I need this in VL; the following example is in vvvv).
thx in advance; I want to learn VL, I can see its potential and I truly believe that this will (hopefully) be the future of programming.
- naming: the vl nodes already have a special icon, so there is no real need to point that out any further. node names should best describe what the node does, not how it is implemented
- re BlenderTransformations: there isn’t a definite right/wrong in the way you modularize your patches. it just occurred to me that in this case i’d argue the full transformation (including the conversion to radians) is part of the same operation
- i understand it is hard without helppatches, thanks for still taking the effort!
once you become a bit more familiar with VL you’ll intuitively go for the stateless template in such a case. why? because all you’re doing here is a transformation of data: converting from Transformations to OSC. as the patch does not need to remember any data between frames, you’re not storing any state and thus have just a stateless operation. having said that, you can still use the normal (stateful) template anyway, because it will always work. using the stateless template is merely a way to help you create better/cleaner patches, but you don’t have to start with that already…take your time. the only real reason for using the stateful template in your case would be if you also used the udp-client in vl directly…but let’s leave that for now since it is still quite experimental…
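As a textual analogy to the stateless/stateful distinction (VL itself is visual, so this Python sketch is only an illustration; the names are made up):

```python
import math

# stateless: the output depends only on the inputs, nothing is remembered between calls
def degrees_to_radians(deg: float) -> float:
    return deg * math.pi / 180.0

# stateful: data must survive from one frame to the next, so state is stored
class FrameCounter:
    def __init__(self) -> None:   # roughly VL's "Create": runs once
        self.count = 0

    def update(self) -> int:      # roughly VL's "Update": runs every frame
        self.count += 1
        return self.count
```

The camera conversion is like `degrees_to_radians`: nothing needs to be remembered, so a stateless operation is enough.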
yes, always just use Process for a start. don’t bother with Record/Class. those are only needed when you want to create instances of patches dynamically (think particlesystem), not for things like this (conversion of data).
absolutely, still to come!
i can understand, thanks for that lesson!
i’m afraid i don’t understand question 2. IndexOf returns the index of the slice that has the value you’re looking for. can you rephrase the question? maybe i’m missing what is happening outside of the screenshot (at the top)…
That was a huge amount of helpful points, thank you Joreg; no further questions concerning these topics at the moment.
As for my last question, yes I can rephrase it (foreign languages are not my strongest point).
In vvvv I am capturing the OSC messages and getting them as spreads, so the bin size changes according to the size of the bundled OSC message. To achieve that I use a Select node: I feed Select’s Select inlet from the OSCDecoder’s Match Count outlet, get the indices of the slices with value = True from Select’s Former Slice outlet, and pass the newly generated spread of indices to SetSlice’s Index inlet. With this flow I am able to update the specific values by selecting the right ones from OSCDecoder’s Arguments outlet.
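If I read the description right, the Select/SetSlice flow boils down to something like this sketch (Python is used only to illustrate the logic; all names are made up):

```python
def indices_where(flags):
    # like vvvv's Select driven by match flags: indices of all slices that are True
    return [i for i, f in enumerate(flags) if f]

def set_slices(values, indices, new_values):
    # like SetSlice: write new_values into a copy of values at the given indices
    out = list(values)
    for i, v in zip(indices, new_values):
        out[i] = v
    return out

matches = [False, True, False, True]          # e.g. per-address match flags
idx = indices_where(matches)                  # indices of the matched slices
updated = set_slices([0.0, 0.0, 0.0, 0.0], idx, [7.0, 9.0])
```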
I didn’t want to ask you how to do it in VL, but if you have any directions, or if you have understood my problem, anything would be helpful at this point (from a nod to a node to a patch).
Thank you again for your time and your quick responses !
Here is my progress so far. I know, I am missing many parts in theory.
there is a blend file for testing purposes under the assets folder
blender2vvvv.7z (84.5 KB)
alright, first i should say that you could make your life easier if, instead of sending all 9 values as individual osc-messages, you sent them in one message with 9 arguments. like that you’d not have to worry at all about how to collect them again in vvvv. but maybe this is a limitation on the blender side?
anyway, find attached a solution of what i think you were looking for, in vvvv and the same in VL for you to compare. let me know if this needs more explanations.
blender2vvvv.zip (6.6 KB)
mixed-type messages are a great folly; I would recommend NOT doing it if you can.
rather, bundle messages of simple types together: this is much more useful, and usually better supported. in the wild, many developers mix because they want to optimize, when in fact it just makes things harder to read, and optimization like that hardly pays off.
the reason for this is that you lose semantics when you mix (i.e. only one address for all data, instead of one address for each semantic chunk of data).
instead of sending one bundle with a message with the id, a message with the new color palette and a message with a description, you would have to know in advance that the id is always the integer in first place, the color palette would need a fixed length, and the description is always the string at the end.
it is much more resilient and easier to handle when you keep the messages purely typed. send arrays if you want to, but never mix semantically separate data into a single message. that’s what bundles are for!
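This suggestion of one purely-typed message per semantic chunk, wrapped in a single bundle, could look like the following on the wire. A minimal pure-Python sketch; the addresses are made up, and a real project would normally use an OSC library:

```python
import struct

def _pad(b: bytes) -> bytes:
    # OSC strings are NUL-terminated, then padded to a multiple of 4 bytes
    b += b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def message(address: str, *values: float) -> bytes:
    # one message: address, type tags ("," + one "f" per float), float32 args
    tags = "," + "f" * len(values)
    args = b"".join(struct.pack(">f", v) for v in values)
    return _pad(address.encode("ascii")) + _pad(tags.encode("ascii")) + args

def bundle(*messages: bytes) -> bytes:
    # "#bundle" header, an "immediate" timetag (1), then size-prefixed messages
    out = _pad(b"#bundle") + struct.pack(">Q", 1)
    for m in messages:
        out += struct.pack(">i", len(m)) + m
    return out

# one purely-typed message per semantic chunk, all in one bundle
packet = bundle(
    message("/camera/translate", 0.0, 1.0, 2.0),
    message("/camera/rotate", 0.0, 0.0, 0.0),
    message("/camera/shift", 0.1, 0.2),
    message("/camera/lens", 35.0),
)
```

Each address keeps its own semantics (translate, rotate, shift, lens), yet all of them arrive together in one UDP packet.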
@joreg sorry about the multiple edits of this thread, but I got so excited at some point, and then I realised that I was looking at the vvvv solution with the framedelay…
So right now I have to deal with another problem. The provided solution works really nicely, but as you predicted, on the blender side the properties are sent as a bundle.
This is why I am treating it as a spread, by using UDP:Queue Mode -> Spread (instead of Discard) and OSCDecoder:Match Rule -> All (instead of Last).
With your configuration the values can only be updated one by one, so users have to do it themselves in blender. This is the opposite of what I had imagined in the first place, i.e. to move the camera freely in blender and have the camera in vvvv update automatically.
With my configuration I think I had what @velcrome is mentioning (bundles so to speak, correct me if I am wrong) and I was able to make the changes simultaneously (by moving the camera freely in blender, as I said).
Anyway, I’ll study the VL patch for a deeper understanding of VL.
Thanks both of you again!
nono, the example i provided is doing it your way (one osc-message per value). but indeed i forgot to set the UDPServer’s QueueMode to ‘Spread’, which could potentially have lost you some messages (bundles/osc-packages). but this is easily fixed, as you can see in this version:
receivingOSC.zip (7.1 KB)
i think you’re confusing the idea of bundles with udp-packages. you’d typically not worry about bundles, since they have been the OSC default for many years now and it doesn’t really matter… what is happening with Discard vs. Spread, though, is related to the UDP packages: one osc-bundle will be one udp-package, and by setting the queue-mode to Spread we simply make sure that all packages that arrived within a vvvv-frame are taken care of.
note: i did not change the MatchRule of the OSCDecoder in vvvv; that was left as in your example above. i think, though, it is actually better set to ‘Last’, because it is only about individual osc-addresses, and there you’d typically only want to use the last value that arrived per address.
(reading through the above again, i understand that this can still be confusing, let me know if i have to try again)
@velcrome: think about the thing that he wants to transfer here as a single type called ‘Camera’ that consists of some parameters (in that case 9 floats, 3 of which make a translate, another 3 a rotate, another 2 shift and the last one some fov) so i’d argue it could actually make sense to simplify to one osc-message. but of course i don’t know all the details about where the parameters do originate from on the blender side. maybe splitting in view/projection messages could make sense…
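For the lens part of such a ‘Camera’ type, the usual focal-length/field-of-view conversion could look like this sketch. It assumes a horizontal sensor width (Blender’s camera defaults to 36 mm); real setups may use vertical FOV or a different sensor fit, so treat the numbers as illustrative:

```python
import math

SENSOR_WIDTH_MM = 36.0  # Blender's default camera sensor width

def focal_to_fov(focal_mm: float, sensor_mm: float = SENSOR_WIDTH_MM) -> float:
    # horizontal field of view in radians for a given focal length
    return 2.0 * math.atan(sensor_mm / (2.0 * focal_mm))

def fov_to_focal(fov_rad: float, sensor_mm: float = SENSOR_WIDTH_MM) -> float:
    # the inverse: focal length in mm for a given horizontal field of view
    return sensor_mm / (2.0 * math.tan(fov_rad / 2.0))
```

With these two, a single FOV value (or lens-in-mm value) sent over OSC is enough to reconstruct the other side’s camera projection.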
@joreg Yes you are right, and the patch is working as being expected to.
NOT vvvv related stuff :
From the Blender side things are a bit fuzzy and strange. You see, Python takes control over everything running in the window, as it is responsible for the UI and for any Operator running under the hood, so you always have to interrupt the main thread to do fancy stuff with blender’s “core” per se (i.e. draw a box in the 3d viewport or bake textures).
This addon (and it is a great addon, believe me) runs as a “modal operator”; that means it is literally a script running in a parallel thread, constantly being updated about what is going on in the Scene and interrupting by sending back status flags (such as ‘FINISHED’, ‘CANCELLED’, etc). That’s why this addon is so handy and nice: it lets you work in the viewport and at the same time (almost the same time) it sends data via the socket to any OSC-compatible platform.
To sum up, there are two options: either you put a function in the “modal operator” (hardcoded) to retrieve the camera position and orientation, or you just use the “Keyset” (much, much easier) and treat it as animation data (blender vector object = python list = typical array => spread in vvvv).
However, it is really hard to get the camera transformations, and even harder (impossible, I think) to set the camera transformation matrices via python, besides the differences between Blender’s and vvvv’s transformation matrices and their interpretation.