Intersection of meshes, spatial detection, etc

Hi, I often deal with the problem of detection in space. In DX9 there were Intersect nodes that worked with meshes; sometimes I made buttons with them, sometimes they let me walk on terrain, etc.

But what about DX11 and detection in space? Most of the time I actually use things like Length, or = with epsilon (first pin is the position of the object, second pin the position of the detector, and epsilon is the size of the cube that's doing the detecting).
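For reference, here is a minimal sketch (plain Python, not vvvv code) of the kind of per-axis epsilon comparison described above: the "detector" is an axis-aligned cube of half-size epsilon around a point, and an object counts as detected when its position falls inside it. All names are illustrative.

```python
def inside_epsilon_cube(obj_pos, detector_pos, epsilon):
    """True if obj_pos is within an axis-aligned cube of half-size epsilon
    centred on detector_pos (a per-axis comparison, like = with epsilon)."""
    return all(abs(o - d) <= epsilon for o, d in zip(obj_pos, detector_pos))

# Example: detector at the origin, epsilon 0.5
print(inside_epsilon_cube((0.2, -0.3, 0.1), (0.0, 0.0, 0.0), 0.5))  # True
print(inside_epsilon_cube((0.8,  0.0, 0.0), (0.0, 0.0, 0.0), 0.5))  # False
```

This is cheap but limited to axis-aligned cubes, which is exactly the limitation the rest of this thread is about.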

In Unity you can very easily set up raycasting or detection between meshes; if there were a few nodes that could do these things, it would be an awesome boost in functionality. I know that detecting any mesh entering some area sounds like a job for a whole physics engine / vvvv50, but I think we would benefit from a few simpler nodes: node(detection mesh input, detection mesh transformation, mesh input, mesh transformation).

Is it hard to do? I know there are, for example, a few nodes for detecting a Kinect point cloud with a few boxes; maybe I can dig around there. But is an operation like this too heavy to execute on the CPU rather than the GPU?

When I was working on an installation for a client that needed touch detection on animated meshes, I spent so much time getting the right transformations for the meshes and setting up Intersect; I think in the end I had to remake the whole project from DX11 back to DX9.

And now, if I have an idea for a game-like thing in vvvv in which I have to, for example, transform a cube to any rotation and scale, how do I do this in the simplest way without a physics engine? (For example, make a long corridor with scale 10,1,1 and a random rotation.) In the past I even made an array of cubes that I could arrange into the shape of the detection volume I wanted, but that's not really fun.
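One standard trick for exactly this case (an arbitrarily scaled and rotated box trigger, no physics engine) is to invert the problem: instead of intersecting in world space, transform the test point by the inverse of the box's transform and then check it against a unit cube. A minimal Python sketch, with rotation around Z only for brevity (a full version would invert the whole 4x4 world matrix):

```python
import math

def point_in_transformed_cube(point, cube_pos, cube_scale, cube_rot_z):
    """True if `point` lies inside a unit cube that was scaled by cube_scale,
    rotated by cube_rot_z (radians, around Z) and translated to cube_pos."""
    # Apply the inverse transform in reverse order: untranslate, unrotate, unscale.
    x, y, z = (p - c for p, c in zip(point, cube_pos))
    c, s = math.cos(-cube_rot_z), math.sin(-cube_rot_z)
    x, y = x * c - y * s, x * s + y * c          # inverse rotation
    x, y, z = x / cube_scale[0], y / cube_scale[1], z / cube_scale[2]
    # In local space the test is just "inside the unit cube?" (half-size 0.5).
    return all(abs(v) <= 0.5 for v in (x, y, z))

# The corridor example from above: scale (10, 1, 1), unrotated, at the origin.
print(point_in_transformed_cube((4.0, 0.2, 0.0), (0, 0, 0), (10, 1, 1), 0.0))  # True
print(point_in_transformed_cube((4.0, 0.8, 0.0), (0, 0, 0), (10, 1, 1), 0.0))  # False
```

This gives point-vs-oriented-box triggers with just one matrix inverse per box, no Bullet needed.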

I am now kind of spoiled by how easy Unity makes this kind of thing.

thanks :3

You can use Pipet for that; it depends on what exactly you want to detect, the position of the mesh or the subset number… There is a trickier way where you read back the object ID directly from the shader, but I didn't have time to produce it. Also a multiple-renderers implementation would be tricky…
I can send you a patch in the morning; it's on another machine.

Pipet for checking if mesh X is inside mesh Y? :3
Like a depth pipet?

Pipet is mostly used in games to detect touches on 3D objects. Each subset can have a colour, and when you touch the screen you pipet that pixel and check which subset the colour belongs to.
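The colour-ID idea above can be sketched without any real rendering: each subset is drawn with a unique flat colour into an off-screen buffer, and the pixel under the touch is looked up in a table. A minimal Python sketch (function names are illustrative, not vvvv nodes):

```python
def make_id_colors(subset_count):
    """Assign each subset a unique RGB colour by packing its index into RGB."""
    return {(i & 0xFF, (i >> 8) & 0xFF, (i >> 16) & 0xFF): i
            for i in range(subset_count)}

def picked_subset(pixel_rgb, color_table):
    """Return the subset index for the pipetted pixel, or None (background)."""
    return color_table.get(pixel_rgb)

table = make_id_colors(4)
print(picked_subset((2, 0, 0), table))   # 2: the touch landed on subset 2
print(picked_subset((9, 9, 9), table))   # None: background
```

Packing the index into RGB rather than hand-picking colours is what makes the lookup exact, as long as the ID pass is rendered without lighting, blending or anti-aliasing.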

In your case, you could make a module using DX9 stuff and send the result to the rest of your application, which is using DX11.

RenderPipet is the one I use to select subsets.
The one that will tell you the XYZ of the touch you can grab from the projector calibration patch.

Pipet for checking if mesh X is inside mesh Y?
Hmm, I don't know… raycasting is not exactly the option, I think.

RenderPipet.v4p (23.6 kB)

Thanks for sharing, antokhio.

Well, I am interested in spatial recognition not linked to the screen but rather to a position in virtual space, something like colliders that act as triggers, but without a physics engine.

That actually sounds like a job for Bullet; you just kind of move static objects from external params. Not sure about collision tests; if I remember right, the last DX11 x64 version had some troubles like objects not colliding…
You can try 32-bit… I know there is a memory leak in Bullet which makes it crash when a lot of objects collide together, but in your case that might not be so bad.

Yeah, I just hate the workflow with physics engines in vvvv; it's so messy, and using the whole thing just for a simple collision is meh.

I am curious whether there are any alternatives.

I was about to ask the same question. It really would be great to have an INTERSECT (DX11.Geometry Mesh) node; I can't see many 3D games being made in v4 without it…
Otherwise, we should be able to do some raycasting in a compute shader? Does anyone know the approach to this?

It would be great to have as an easy module, but not so super easy to make one that has good performance, I guess.

For sure you would want to do some really rough checks on the bounding box or sphere first, maybe even a stage before that where you check whether the objects are even in the same vicinity.
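The rough broad-phase check suggested above is cheap to sketch: compare bounding spheres first, and only run the expensive per-triangle test when the spheres actually overlap. An illustrative Python version (squared distance, so no sqrt is needed):

```python
def spheres_overlap(center_a, radius_a, center_b, radius_b):
    """True if the two bounding spheres intersect."""
    d2 = sum((a - b) ** 2 for a, b in zip(center_a, center_b))
    r = radius_a + radius_b
    return d2 <= r * r

print(spheres_overlap((0, 0, 0), 1.0, (1.5, 0, 0), 1.0))  # True: spheres touch
print(spheres_overlap((0, 0, 0), 1.0, (3.0, 0, 0), 1.0))  # False: too far apart
```

In a scene with many objects, this single comparison typically rejects the vast majority of pairs before any mesh-level work happens.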

Raycasting, which I guess would come last. I think vux did an example somewhere in a geometry shader? Mini xmas pack?

Maybe I misunderstood. You mean vux's sphere-tracing example? See this thread:
anyone out there making progress with raymarching, spheretracing, volume rendering etc. in dx11? i got totally lost.

@everyoneishappy Thanks a lot for the raycasting gsfx example. I am going to try and convert this into some sort of compute shader with a collision-detection output. Will report on this very soon; I am doing a DX11 project where I need sharp collision detection, the raycasting way. Will share my findings. Failing is not an option! lol

@evvvvil I really hate when a project needs to pass that stage; it makes for some sleepless nights… Worst case scenario, you can still make a DX9 scene for collision detection and pass the info to the DX11 final output, though there are some crazy differences in how DX9/DX11 understand meshes and their orientation, etc.

I think I did this hack once, but it needed a lot of tinkering.