Discussion about how to do this project

I am a newbie in vvvv.
Recently I saw the following project by 엔에이유 NAU on Instagram: "(short version) NAU presents a new dimension of experience, Fuse. This creation transforms every movement you make. The moment you step into the art space, expect a magical experience where all your actions become fused with Fuse. You can enjoy Fuse without wearing any motion capture equipment. It's set to appear at various locations across Seoul, South Korea, in the second half of 2024."
I checked the comments section and it seemed they used vvvv's Fuse to make it, but I also saw them say that AI technology was used.

I would like to ask if you can guess what techniques they used in this project,
like what equipment (webcam? Kinect? …).

If you have any ideas please share with me thanks!

Maybe MediaPipe for the tracking (which could be the AI reference) and FUSE for the gfx?
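
Just to illustrate that guess (no idea if this is what NAU actually did): a minimal Python sketch that tracks a person with MediaPipe Pose from a webcam and streams the joint positions over OSC, which vvvv could then receive. The "/pose" address and port 4444 are arbitrary placeholders.

```python
# Minimal sketch: webcam -> MediaPipe Pose -> OSC.
# Assumes opencv-python, mediapipe and python-osc are installed.
# The "/pose" address and port 4444 are arbitrary examples.
import cv2
import mediapipe as mp
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 4444)  # wherever vvvv listens
cap = cv2.VideoCapture(0)                    # default webcam

with mp.solutions.pose.Pose() as pose:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB, OpenCV delivers BGR
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            # Flatten all 33 landmarks into one float list: x0, y0, z0, x1, ...
            values = []
            for lm in results.pose_landmarks.landmark:
                values.extend([lm.x, lm.y, lm.z])
            client.send_message("/pose", values)

cap.release()
```

On the vvvv side, the OSC receiver nodes could pick up those values and drive whatever FUSE particle setup you like.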

what a nice project

Pretty sure their project (that specific visual) is called Fuse and it’s not really related to VL.Fuse.

https://nerdyau.com/fuse

You can do similar visuals in vvvv though of course.

Wondering if the delay between movement and visuals is intentional or if the motion capture lags that much. AI might refer to move.ai.

Looks like Unreal Engine with a particle emitter on a rigged mesh; they showed this feature some versions ago. ATM this is not yet possible in gamma, because there is no support for controlling rigs. You can bake an animation into a rigged model, but idk if it is possible yet to access its vertices in gamma.

it seems to be gamma and FUSE

I think they did not only use gamma & FUSE.
Maybe they used move.ai to capture people.
I saw the designer say they used a lot of commercial tools to make it.
Now I'm trying to contact move.ai to get a test!

We have some open source Unity projects that achieve similar effects.

These projects use our webcam-based mocap solution as input.

It is also possible to implement these in Unreal Engine.

Late to the party:)

If it is FUSE and vvvv gamma, then I would love to know how to achieve such visuals - for me, it doesn’t look like Stride engine graphics.

As noticed before, vvvv gamma doesn’t support loading rigged meshes, so if you want to go the rigging/skinning route, you have to implement your own system.
On the other hand, it is possible to go with a procedural approach and generate random points (distributed on a sphere, box, capsule, whatever) at runtime based only on joint data. In that case, you still need a way to send/receive the joint data, but that looks like a simple task for OSC, provided the tracking system you use can send bone/joint data.
If the procedural approach sounds good to you, I could share some of my patches that solve a similar task (the only difference is that pre-recorded animation data is used).
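
In the meantime, here is a rough text sketch of that procedural idea (not a vvvv patch; in a real setup the scattering would happen in FUSE on the GPU). It assumes the joints arrive as one flat float list on a "/pose" OSC address at port 4444, matching the sender sketch above; both values are arbitrary and would have to match whatever your tracking system actually sends.

```python
# Sketch of the procedural approach: receive joint positions via OSC
# and scatter N random points on a sphere around every joint.
# Requires numpy and python-osc; the "/pose" address and port 4444
# are arbitrary and must match the tracking side.
import numpy as np
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

POINTS_PER_JOINT = 200
RADIUS = 0.05  # sphere radius around each joint, in tracking units

def points_on_sphere(center, n, radius):
    """Uniform random points on a sphere: normalize Gaussian samples."""
    d = np.random.normal(size=(n, 3))
    d /= np.linalg.norm(d, axis=1, keepdims=True)
    return center + radius * d

def on_pose(address, *values):
    # Incoming values are a flat list: x0, y0, z0, x1, y1, z1, ...
    joints = np.array(values, dtype=np.float32).reshape(-1, 3)
    cloud = np.vstack([points_on_sphere(j, POINTS_PER_JOINT, RADIUS)
                       for j in joints])
    # 'cloud' is the per-frame point set you would hand to the renderer.
    print(f"{len(joints)} joints -> {len(cloud)} points")

dispatcher = Dispatcher()
dispatcher.map("/pose", on_pose)
BlockingOSCUDPServer(("127.0.0.1", 4444), dispatcher).serve_forever()
```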

And yes, speaking outside the univvvverse, most body tracking systems provide native integration with popular game engines (UE, Unity).