EDIT: I can see that when I send the POST to request the stream, the data is coming in (I can see the bandwidth usage in Windows Resource Monitor), but the node's raw output is not changing. I fear those nodes are not prepared to handle this kind of continuous data stream.
Ok, I think I managed to get the stream without changing the method, but now the problem is that each section of the response represents an MJPEG frame, i.e. a compressed JPEG image (or so I believe). I am trying to write it to a file and read it back, with no results so far; I am probably messing up the raw bytes.
Is there a way to decode the frame within vvvv?
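In case it helps with the raw-byte confusion: frames in a raw MJPEG byte stream can usually be located by scanning for the JPEG start-of-image (0xFFD8) and end-of-image (0xFFD9) markers. A minimal Python sketch (outside vvvv, just to verify that the bytes you are receiving actually contain valid JPEGs; the function name is mine):

```python
def extract_jpeg_frames(buf: bytes):
    """Split a raw MJPEG byte buffer into complete JPEG frames.

    Returns (frames, leftover): a list of complete JPEG byte strings,
    plus any trailing bytes belonging to a not-yet-complete frame.
    """
    frames = []
    while True:
        soi = buf.find(b"\xff\xd8")           # JPEG start-of-image marker
        if soi == -1:
            return frames, b""
        eoi = buf.find(b"\xff\xd9", soi + 2)  # JPEG end-of-image marker
        if eoi == -1:
            # frame not finished yet: keep the partial bytes for the next read
            return frames, buf[soi:]
        frames.append(buf[soi:eoi + 2])
        buf = buf[eoi + 2:]
```

Each returned frame can then be written to a `.jpg` file as-is and opened in any image viewer to check it decodes.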
So far I am still failing to get the stream as it comes in with both the vobjects and the NetworkHTTP-REST nodes. If I stop the stream, the raw bytes are sent out of the node, but there is no apparent way to read the stream while it is running. Another issue I can see is that the stream keeps adding to the RAM consumption, and subsequent commands won't free that memory up…
Yes, I thought about that as well, but it doesn't look like it is going to be that easy: it is not just an MJPEG URL you can access, you'll probably need a Node.js server and whatnot…
On the other hand, I am thinking about ditching everything I've done up to this point (that is, using the API) and going the PTP way. Using PTP has the advantage of freeing up the USB and HDMI connections to get the texture in a more friendly stream; commands in hex are a bit of a pain, though.
Finally, after spending a couple of days trying to get some PTP commands to work on Windows, I realised that when the camera is in streaming mode the PTP interface is not available, so… we are back at the same point. It seems there is no way to get full-resolution images while at the same time having a decent stream coming in, except for the low-res MJPEG preview at 10 fps.
you have to make a web request (HTTP POST) to a certain port, and the response stream will (continuously) deliver the frames, right?
maybe you want to check out the experimental package in VL. there are nodes for low-level handling of webrequest/response stuff. girlpower/_Experimental/Async is a simple example of requesting a file and reading the response stream (and asynchronously writing it to disk).
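The pattern that example demonstrates (read the response stream chunk by chunk as it arrives, instead of waiting for the whole body) can be sketched in a few lines; shown here in Python with the stdlib `urllib`, since the idea is the same regardless of the framework. URL, output path, and chunk size are placeholders:

```python
import urllib.request

def download_stream(url: str, out_path: str, chunk_size: int = 8192):
    """Read an HTTP response stream incrementally and write it to disk.

    Each read() returns as soon as data is available, so the file grows
    while the response is still being delivered.
    """
    with urllib.request.urlopen(url) as resp, open(out_path, "wb") as f:
        while True:
            chunk = resp.read(chunk_size)  # blocks only until data arrives
            if not chunk:                  # empty read means the stream ended
                break
            f.write(chunk)
```

For a never-ending MJPEG stream you would keep the loop running and hand each chunk to a frame parser instead of writing forever to one file.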
“apparently” because you tried it, or because you are assuming?
looking at the .NET WebRequest class, it can do exactly that. and the whole class is available as VL nodes.
you can create a WebRequest with the headers configured to your liking, write the request stream, and read the response stream, which should contain your MJPEG data in some form
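That request/response flow could look roughly like this (again sketched in Python with the stdlib `urllib` standing in for the .NET WebRequest; the `Content-Type` header and the payload are placeholder assumptions, adjust them to whatever your camera's API expects):

```python
import urllib.request

def stream_post(url: str, payload: bytes, chunk_size: int = 4096):
    """POST a payload and yield the response body incrementally.

    Header and payload values are examples only; the camera's API
    will dictate the real ones.
    """
    req = urllib.request.Request(
        url,
        data=payload,                                # written to the request stream
        headers={"Content-Type": "application/json"},  # assumed header
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        while True:
            chunk = resp.read(chunk_size)  # returns as data comes in
            if not chunk:
                break
            yield chunk
```

Each yielded chunk would then be appended to a buffer and scanned for complete MJPEG frames.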
@microdee: the problem is not decoding the MJPEG stream; as Joreg wrote, the dynamic texture will take care of that (it does!). Also consider there is no *.mjpeg URL you can get: it is an HTTP call with the stream inside its response, and a payload must be sent as well. Anyway, the problem with the current HTTP nodes is that they won't show the response as long as they are running.
@woei: of course I am talking about vvvv nodes; VL is a brand new (and brave) world and I may need a little while to catch up (doing my homework, though)
the thing I've linked above will get an MJPEG stream from HTTP too; afaik it's just what you need, with 4 effective lines of code. I don't think you need .mjpeg at the end of the URL if the camera has some resource-URL mapping or service stuff going on (aka http://dom.ain/feed )