This is far from my first attempt to figure out how to use the laser library implementation. But I keep running into the fact that either I don’t understand how to do it right, or there are bugs.
I’m trying this implementation, but in recent years I’ve tried others with about the same result:
The main thing I get is very crooked figures with very low resolution, although my colleagues are connected via TD and everything works properly and with very high quality. I seem to be missing something, but I can’t figure out what it is. Or there’s a bug somewhere.
I wouldn’t call those “hacks”; these are convenient workflow features that you would need in a laser controller to shape the output and offset the physical properties of a specific laser model. Some lasers are faster and have better stop/move behavior than others.
Yes, you can definitely add these features to the patch as well. I just thought it would be faster to copy the C# code.
That’s very interesting. Can you give some hints on reproducing this workflow?
Maybe you can point me to which source code to look at?
I don’t really understand what “stop values” are yet, but it’s already clear that they’re something very important. I also want to understand: did Tebjan’s original approach use some kind of adaptive rate tweaking?
Hey @yar , sorry, just seeing this, so I don’t know if my answer will be relevant or helpful now…
@sebescudie and I have been working on a Gamma port of the EtherDream.NET nuget that is available; I still have some work to do on it before sharing, after some funny things were spotted on a show we currently have on tour.
as for the incomplete circle you are showing and the crooked figures you are reporting, it’s just a matter of tweaking the start/end repeat points, in my opinion. ILDA communication is pretty basic in itself: just a list of points (coords + color). All the “intelligence” in drawing your shapes happens beforehand, in how you build your spread of points before sending. For this you need to experiment and understand how the mirrors/diode behave, and adapt all the settings (start/end points, point repeat, etc.) to every shape in your show based on its scale and complexity (welcome to the delightful world of lasers 😅)
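To illustrate the idea of building your spread of points with start/end repeats before sending: here is a rough Python sketch (not the actual vvvv/C# code; every name in it is hypothetical). An ILDA-style frame is just a flat list of points, here `(x, y, r, g, b)` tuples.

```python
import math

def circle_points(cx, cy, radius, n):
    """Sample a circle into n points, full white."""
    return [(cx + radius * math.cos(2 * math.pi * i / n),
             cy + radius * math.sin(2 * math.pi * i / n),
             255, 255, 255) for i in range(n)]

def with_repeats(points, start_repeat=8, end_repeat=8, blank_lead=4):
    """Dwell on the first/last point and prepend blanked points so the
    galvos have settled before the diode switches on."""
    if not points:
        return []
    first, last = points[0], points[-1]
    blanked = [(first[0], first[1], 0, 0, 0)] * blank_lead  # color off while mirrors travel
    return (blanked
            + [first] * start_repeat   # let the mirrors settle at the start
            + points
            + [last] * end_repeat)     # hold the end so the shape closes cleanly

frame = with_repeats(circle_points(0.0, 0.0, 0.5, 200))
```

The repeat counts are exactly the per-shape settings mentioned above: a big, fast shape on sluggish galvos needs more dwell points than a small, simple one, so expect to tune them per figure.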
You have probably done that since your post, but just in case you left that aside for lack of a little guidance, this might inspire you to give it another try!
I have been using EtherDream extensively over the last few years, and it has done the job perfectly for me.
The tricky part is rather on the software side (talking to the EtherDream) and building your own laser figures properly, as opposed to Pangolin, which handles all of that for you. If you are just starting out and think you can do with basic shapes and effects, Pangolin is probably a good way to start. But if you need anything fancy and/or sophisticated real-time interaction, then I guess there’s no alternative to raw ILDA, and EtherDream is really just that: an ILDA interface.
I’m curious to hear about the downsides you heard about it from professionals.
All the problems I had turned out to be in our port of the library to vvvv and/or in how I drew my shapes.
DMX/sACN: some lasers do have DMX input, but as far as I know (I’ve never used it) it is more like macro variables for scaling/rotating/translating pre-recorded shapes or effects; you will be very limited (a bit like pre-recorded effects in lighting fixtures).
Hope it helps!
And indeed for more questions you’d probably be better off starting a dedicated thread 🤓
@TremensS It just so happens that sometimes I have the equipment and sometimes I don’t. This topic is immortal as long as there are no simple laser tools — lasers are practically a staple of modern media.
I will try to check your tricks. But I have other problems: it seems that I send too many dots and copies are formed; sometimes it seems that something is duplicated, or breaks off in the middle and starts drawing again. In general, I get unstable, glitchy results.
There seems to be something wrong with the sampling. My guess was that the sampling level must be related to the number of dots being sent.
But I suspect that I just wasn’t keeping track of whether the EtherDream was ready to accept a new batch of dots.
I also want to figure out whether it’s worth putting all of this in a separate loop.
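In case it helps anyone arriving here later: the usual pattern is indeed a dedicated send loop that only writes the next chunk when the device reports free buffer space; pushing points regardless of readiness is exactly what produces duplicated or truncated frames. A Python sketch with an entirely hypothetical `dac` object (not the real EtherDream.NET API):

```python
import time

def send_frame(dac, frame, chunk_size=256):
    """Feed one frame of points to the DAC, waiting whenever its buffer is full.
    `dac` is hypothetical: .buffer_free() -> free point slots, .write(points)."""
    i = 0
    while i < len(frame):
        if dac.buffer_free() < chunk_size:
            time.sleep(0.001)          # back off instead of overflowing the buffer
            continue
        dac.write(frame[i:i + chunk_size])
        i += chunk_size

# The dedicated loop then just pulls the latest frame and streams it,
# e.g. on its own thread so rendering hiccups don't starve the DAC:
#
#   while True:
#       send_frame(dac, frame_queue.get())
```

Running this off the render loop means the point stream stays steady even when frame building stalls for a moment, which is one argument in favor of the separate loop.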
@TremensS Thanks so much for your detailed response. I was completely overwhelmed and couldn’t read it earlier.
At the moment I use Pangolin with Beyond. I found out that it was possible to control the galvos live, but I didn’t manage to do it in the end, so I had to render my stuff (live) and pass it through Spout and NDI to MadMapper (MadLaser), a thing I want to avoid next time.
I also asked why some people don’t like EtherDream, and I have to admit I wasn’t very satisfied by their answers. Basically the conclusion seems to come down to rental companies’ marketing, specifically machines like the Kvant 30 (as in our case) being sold to customers with Pangolin Beyond licenses. So it seems that in some productions it has to be that way, since I don’t get involved in the pre-production/ordering process or the light setup.
Anyway, I feel like this topic must expand and grow a bit more, so I’ll open a new thread.
@nissidis one of the main reasons rental services don’t like EtherDreams so much is that these things can blow up galvanometers. I have myself witnessed a laser end up exploding in inexperienced hands: lots of small mirror fragments left behind the window, and that’s about it. In a way, it’s equipment for nerds.
I’ve seen your shows on Instagram. Do you use Pangolin, but just pre-record and transfer to MadLaser?
@yar hello! Not exactly, but something like that. My initial plan was to use live control through Beyond with OSC. I didn’t manage it: although I was sending and receiving the OSC data properly, I was missing something (I hope to have more time to prepare next time).
But for the show everything was in real time, no pre-rendered content; the only drawback in my workflow was that I had to put MadLaser (streaming via Spout and NDI) in the middle to do the vectorization for me, as you guessed above.
Besides that, I kept everything else on the vvvv gamma side: interaction with the beams, content rendering in multiple textures, and streaming.