I am using the API of the Theta S camera; it is based on JSON communication. More info here: https://developers.theta360.com/en/docs/v2.1/api_reference/
So far it is working as expected, but I can't get the live preview, as it is a multipart MJPEG stream. I am not sure whether it is even possible with the HTTP nodes or the https://vvvv.org/contribution/vobjects addon.
EDIT: I can see that when I send the POST to start the stream, data is coming in (I can see the bandwidth used in Windows Monitor), but the raw output is not changing. I fear that those nodes are not prepared to accept this kind of data stream.
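For reference, the call I'm sending to start the preview looks roughly like this (a Python sketch of the JSON payload only; the endpoint path, the command name and the default AP-mode address are my reading of the v2.1 API reference, so treat them as assumptions):

```python
import json

# Assumed from the Theta v2.1 API reference: commands go as JSON
# to /osc/commands/execute; 192.168.1.1 is the camera's usual
# AP-mode address (assumption, check your setup).
CAMERA_URL = "http://192.168.1.1/osc/commands/execute"

payload = json.dumps({"name": "camera.getLivePreview"})
headers = {"Content-Type": "application/json;charset=utf-8"}

# POSTing this payload should answer with a multipart body
# (Content-Type: multipart/x-mixed-replace) whose parts are JPEG frames.
print(payload)
```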
I’m not familiar with this kind of streaming method, but the vobject HTTP nodes can be set to show data immediately when it’s received rather than when the entire response is ready. Maybe it will help?
Hi, is it something I can change in vvvv, or does the node have to be modified?
you can set it in vvvv on the Send (Http) node: set the “Completion On” pin to “ResponseHeadersRead”.
Ok, I think I managed to get the stream without changing the method, but now the problem is that each section of the response represents an MJPEG frame, i.e. a compressed JPEG image (or so I believe). I am trying to write it to a file and read it back, with no results so far; I am probably messing up the raw bytes.
Is there a way to decode the frame within vvvv?
try DynamicTexture (EX9.Texture Raw)
Isn't DynamicTexture expecting raw pixel information? What I've got here is compressed JPEG data.
I attach a file with 2 frames decoded to a string.
preview_string.zip (64.8 KB)
did you check its helppatch?
Ok, I should have done that as usual… so yes, it does, but I can't separate the frames properly yet.
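For anyone stuck at the same point: one way to separate the frames without fully parsing the multipart headers is to scan the raw bytes for the JPEG start/end markers (0xFFD8 / 0xFFD9). A minimal sketch in Python rather than a vvvv patch, with made-up frame data:

```python
SOI = b"\xff\xd8"  # JPEG start-of-image marker
EOI = b"\xff\xd9"  # JPEG end-of-image marker

def split_jpeg_frames(buf: bytes) -> list[bytes]:
    """Extract every complete SOI..EOI span from a raw MJPEG buffer."""
    frames = []
    pos = 0
    while True:
        start = buf.find(SOI, pos)
        if start == -1:
            break
        end = buf.find(EOI, start + 2)
        if end == -1:
            break  # incomplete frame at the tail; wait for more data
        frames.append(buf[start:end + 2])
        pos = end + 2
    return frames

# fake stream: two tiny "frames" separated by multipart boundary noise
stream = (b"--boundary\r\n" + SOI + b"AAAA" + EOI +
          b"\r\n--boundary\r\n" + SOI + b"BB" + EOI)
print(len(split_jpeg_frames(stream)))  # 2
```

Note this is a heuristic: the EOI byte pair can in principle also appear inside the compressed image data, so splitting on the multipart boundary announced in the Content-Type header is the more robust route.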
I am not sure it is the best and most elegant solution, but it works:
preview_raw.zip (1.0 MB)
test_theta_preview.zip (2.7 MB)
So far I am still failing at getting the stream as it comes in, with both the vobjects and the NetworkHTTP-REST nodes. If I stop the stream, the raw bytes are sent out of the node, but there is no apparent way to get the stream as it comes in. Another issue I can see is that the stream keeps adding to the RAM consumption, and subsequent commands won't clear that memory up…
Yes, I thought about that as well, but it doesn't look like it is going to be that easy: it is not just an MJPEG URL you can access, you'll probably need a Node.js server and whatnot…
On the other hand I am thinking about ditching everything I've done up to this moment (that is, using the API) and going the PTP way. Using PTP has the advantage of freeing the USB and HDMI connections to get the texture in a friendlier stream; commands in hex are a bit of a pita though.
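To give an idea of what "commands in hex" means: as far as I understand it, a PTP command travels in a small binary container (little-endian length, type, opcode, transaction id, then parameters). A hedged sketch in Python; the container layout and the OpenSession opcode 0x1002 are my reading of the PTP spec, so double-check against it:

```python
import struct

def ptp_command(opcode: int, transaction_id: int, *params: int) -> bytes:
    """Build a PTP (USB bulk) command container, per my reading of
    the PTP spec: little-endian u32 length, u16 type (1 = command
    block), u16 opcode, u32 transaction id, then u32 parameters."""
    length = 12 + 4 * len(params)
    return struct.pack("<IHHI", length, 1, opcode, transaction_id) + \
           b"".join(struct.pack("<I", p) for p in params)

OPEN_SESSION = 0x1002  # standard PTP opcode (assumed from the spec)
pkt = ptp_command(OPEN_SESSION, 0, 1)  # OpenSession with SessionID = 1
print(pkt.hex())  # 10000000010002100000000001000000
```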
Finally, after spending a couple of days trying to get some PTP commands to work on Windows, I realised that when the camera is in streaming mode the PTP interface is not available, so… we are back at the same point. It seems there is no way to get full-resolution images while at the same time having a decent stream coming in, except for the low-res MJPEG preview at 10 fps.
not sure i really get the problem.
you have to make a webrequest (http post) to a certain port and the response stream will (continuously) deliver the frames, right?
maybe you want to check out the experimental package in VL. there are nodes for lowlevel handling of webrequest/response stuff. girlpower/_Experimental/Async is a simple example of requesting a file, and reading the response stream (and async writing it to disk).
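The idea of reading the response stream as it arrives, rather than waiting for the (endless) request to complete, can be sketched like this; io.BytesIO stands in for the network stream here, since in VL you would be reading the actual WebResponse stream:

```python
import io

def read_in_chunks(stream, chunk_size=4096):
    """Hand each chunk onward as soon as it is read, instead of
    buffering the whole response to completion."""
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:
            break
        yield chunk

fake_response = io.BytesIO(b"x" * 10000)  # stand-in for the HTTP response stream
sizes = [len(c) for c in read_in_chunks(fake_response)]
print(sizes)  # [4096, 4096, 1808]
```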
Hallo, I think this is not a solution, as the problem is that apparently there are no nodes that can handle a multipart HTTP response.
well then write one, it's pretty easy ;) see https://channel9.msdn.com/coding4fun/articles/MJPEG-Decoder. even creating a dx11 texture out of it isn't hard either
apparently because you tried or assuming?
looking at the .NET WebRequest class, it can do exactly that. and the whole class is available as VL nodes.
you can create a WebRequest with headers configured to your liking, read/write the request stream and read the response stream, which should contain your mjpeg data in some form
@microdee: the problem is not decoding the MJPEG stream; as Joreg wrote, DynamicTexture will take care of that (it does!). Also consider there is not a *.mjpeg URL you can get: it is an HTTP call with the stream inside its response, and a payload must be sent. Anyway, the problem with the current HTTP nodes is that they won't show the response as long as they are running.
@woei: of course I am talking about vvvv nodes; VL is a brand new (and brave) world and I may need a little while to catch up (doing my homework though)
the thing I've linked above will get an MJPEG stream from HTTP too; afaik it's just what you need, with 4 effective lines of code. I don't think you need .mjpeg at the end of the URL if the camera has some resource-URL mapping or service stuff going on (aka http://dom.ain/feed )
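In the same spirit as the linked decoder, but sketched in Python instead of C# and purely hypothetical: a small stateful assembler that is fed chunks as they arrive and emits complete JPEG frames, using the SOI/EOI markers as delimiters.

```python
SOI = b"\xff\xd8"  # JPEG start-of-image marker
EOI = b"\xff\xd9"  # JPEG end-of-image marker

class MjpegAssembler:
    """Feed it network chunks as they arrive; it returns every
    complete JPEG frame found so far and keeps partial data."""
    def __init__(self):
        self.buf = b""

    def feed(self, chunk: bytes) -> list[bytes]:
        self.buf += chunk
        frames = []
        while True:
            start = self.buf.find(SOI)
            if start == -1:
                break
            end = self.buf.find(EOI, start + 2)
            if end == -1:
                self.buf = self.buf[start:]  # keep the partial frame
                break
            frames.append(self.buf[start:end + 2])
            self.buf = self.buf[end + 2:]
        return frames

# simulate one frame arriving split across two network chunks
asm = MjpegAssembler()
print(asm.feed(b"--bnd\r\n" + SOI + b"half"))  # [] (frame not complete yet)
print(asm.feed(b"rest" + EOI + b"\r\n"))       # one complete frame
```

Each list returned by feed() could be turned into textures downstream; the same marker caveat applies as above, since EOI bytes can occur inside compressed data.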