How to listen to POST requests via HTTP and/or read the request body

Hi

I think we managed to find out that what we want is not possible with vvvv right now - but in case anyone else is searching for something similar, or someone actually has a surprising solution to this, I am posting it anyway.

We are trying to connect smartwatches with vvvv using JSON via an HTTP server.

Using JSON for communication is generally super easy and very convenient.

Our problem is: we do not get the HTTP body in vvvv, or anything else that would give us the POSTs the watches send to the server.

What would work is to write the JSON into the header and use vvvv with HTTP (Network Receiver) to listen to the server.

But we would really like to write the JSON into the body, not the header, and not use key-value pairs the way HTTP (Network Receiver) does. This does not seem to be possible with anything that exists in vvvv right now. In any case, it would generally be very nice to have a listener which exposes the body.
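Just to make the distinction concrete, this is the kind of request we mean - a plain POST with the JSON in the body rather than in headers or key-value pairs. A minimal C# sketch; the URL and payload are made up:

    using System;
    using System.Net.Http;
    using System.Text;
    using System.Threading.Tasks;

    class PostJsonBody
    {
        static async Task Main()
        {
            // Hypothetical endpoint and payload, only to illustrate
            // "JSON in the body" instead of key-value pairs.
            var json = "{ \"device\": \"watch-01\", \"heartRate\": 72 }";

            using (var client = new HttpClient())
            {
                var content = new StringContent(json, Encoding.UTF8, "application/json");
                var response = await client.PostAsync("http://192.168.1.148:8080/test", content);
                Console.WriteLine((int)response.StatusCode);
            }
        }
    }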

So right now we are trying to write a plugin which listens to POST requests and lets us see the body, not only the header.

woei pointed us in a good direction and suggested this as a start:
-> https://msdn.microsoft.com/en-us/library/system.net.httplistener%28v=vs.110%29.aspx
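For anyone reading along, here is a minimal sketch of what such a plugin would do at its core, based on the HttpListener class from that link (prefix and status handling kept deliberately simple; on Windows, listening on anything but localhost additionally needs a URL ACL):

    using System;
    using System.IO;
    using System.Net;

    class BodyListener
    {
        static void Main()
        {
            // Listen on port 8080 on all interfaces.
            var listener = new HttpListener();
            listener.Prefixes.Add("http://+:8080/");
            listener.Start();

            while (true)
            {
                // Blocks until a request arrives.
                HttpListenerContext context = listener.GetContext();
                HttpListenerRequest request = context.Request;

                // This is the part HTTP (Network Receiver) does not expose: the body.
                string body;
                using (var reader = new StreamReader(request.InputStream, request.ContentEncoding))
                    body = reader.ReadToEnd();

                Console.WriteLine("{0} {1}: {2}", request.HttpMethod, request.Url.AbsolutePath, body);

                // Always close the response, otherwise the client keeps the connection open.
                context.Response.StatusCode = 200;
                context.Response.Close();
            }
        }
    }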

hm. indeed. HTTP handling has been a bit clunky in v4.

You might want to try out this (rather quick) sketch: https://github.com/jens-a-e/hhhhttp-listener

Please ping me through the issues there if anything comes up, or send a PR.

A quick perf test shows ~2.6k requests/second with 500 concurrent connections. Not too bad, but not really fast either. I guess one would need to bump the MainLoop ;)

$ wrk -t2 -c500 -d5s http://192.168.1.148:8080
Running 5s test @ http://192.168.1.148:8080
  2 threads and 500 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    94.03ms   44.84ms 372.61ms   92.55%
    Req/Sec     1.38k   480.12     2.20k    69.79%
  13315 requests in 5.05s, 2.50MB read
  Socket errors: connect 249, read 0, write 0, timeout 0
Requests/sec:   2635.63
Transfer/sec:    507.09KB

YEA thanks - what a pleasant surprise. This looks very promising.

It does work for me as long as I am posting from localhost, as done in your helppatch.

But my HTTP knowledge is very limited, so most likely the following problem is based on my confusion regarding HTTP.

Unfortunately I am not able to post anything from another PC using HTTP (Network Post).

I do receive something on the server side if I use the standard HTTP (Network Receiver), but not with your plugin.
(It also does not work if I just use my IP address instead of “localhost” in your helppatch, without another PC - only locally.)

I am not quite sure what I have to write as the address in that case (both on the client and the server side), or whether I already have to use ACLs for this…

I was not able to find out what “SCL” is supposed to be (you wrote in your helppatch: “…Add an SCL for “http://*:8080” and enable the appropriate users.”)

If I read your http-listener plugin correctly, it does open a server and it runs on a separate thread, right? That is really nice!

And I have no idea why you have to add the proxy port to the address even for HTTP (Network Post) in your helppatch to make it work, even though it is already specified on a pin.

There were multiple things in your response, so I split them :) TL;DR: your problem is related to Windows, not HTTP knowledge. It has become a longer reply than I intended; maybe it helps to clarify the HTTP stuff :)

Access Control Lists:

Oops, there is a typo: it should be ‘ACL’, not ‘SCL’.

Yes, allowing a process in Windows to listen on a network socket (anything other than the local machine itself is considered insecure by Windows) is … let’s say ‘unconventional’ to set up. However, this is needed as soon as one wants to serve on interfaces other than ‘localhost’. It involves setting the type of traffic/packets, the interface (usually identified by the IP), the port and the user running the process.

It is possible to run a special netsh ... command (as Admin) to allow serving, e.g. HTTP (or HTTPS), on a certain address scheme.
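If you prefer the command line, the usual incantation is something along these lines (run in an elevated prompt; the URL pattern and user are up to you):

    netsh http add urlacl url=http://+:8080/ user=Everyone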

It is much easier to do this with the tool mentioned in the patch. This needs to be done only once on the machine. When you run it, select the ACL tab and add the following schemes:

  • ‘http://*:8080’
  • ‘http://+:8080’ (this works most flawlessly, although the pattern is not conventional)

Go to the properties of each and make sure the user is allowed to listen and delegate. Optionally, add ‘Everyone’ (or ‘Jeder’ on a German Windows) as a user group with the same liberal rights. Interestingly, one does not need to set an executable binary for this.

Now you should be able to ‘listen’ to all (*, +) HTTP traffic coming in on port 8080 with the HttpListener or the HttpReceiver node (or any other node or process - consider security ;). I really mean it: consider the security. Depending on where the machine is located this can be a risk; a moving laptop with an HTTP service running along… But we are all careful patchers anyway.

I found the project on CodePlex, and by looking at the source it seems this could also be moved into the plugin, but it is not straightforward.

When you want to listen on a different port, add more schemes and you’re done.

HTTP:

On the left side of the test patch, the HTTP ‘sending’ nodes are just for testing; you could also use a browser, but this way you don’t have to. Setting a proxy can be a requirement when you have to use one; 8080 is confusing here, but it is a quasi-standard port for proxies. Maybe there should be a PR to make them hidden by default. AFAIK these are very old nodes…

HTTP (well, 1.x at least) is a stateless protocol transported over TCP. A good server manages the connections for you, so you just have to take care of the requests: it checks whether something is a valid HTTP request, aggregates packets, etc. The HttpListener does just that. It can be queried for incoming connections (each one an HttpListenerContext), which the plugin does in a background thread. It pumps the contexts into a concurrent queue and puts them in a spread on every frame. So there might be a backlog if your FPS drops…

However, these contexts must be handled (read: ‘.Close()’ed), otherwise the remote party (the client, browser, etc.) will have dangling open connections and at some point will decide to close them due to a timeout. At least a properly implemented client does. The server also manages things like ‘KeepAlive’, etc.
With the plugin you can do this with the Writer (HTTP), NotFound (HTTP) or Abort (HTTP) nodes. They finally ‘Close()’ the context. So make sure a spread of contexts always ends up in one of these! The patch shows an example by responding to different paths (URLs) differently, simulating a “404 Not Found” for anything other than ‘/’ or ‘/test’.
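To make the mechanics a bit more tangible, here is a rough sketch of that pattern in plain C# - not the actual plugin source, just the idea of “accept in a background thread, drain per frame, always close”, with made-up paths:

    using System.Collections.Concurrent;
    using System.Net;
    using System.Threading;

    class ContextPump
    {
        static readonly ConcurrentQueue<HttpListenerContext> Pending =
            new ConcurrentQueue<HttpListenerContext>();

        static void Main()
        {
            var listener = new HttpListener();
            listener.Prefixes.Add("http://+:8080/");
            listener.Start();

            // Background thread: accept contexts and queue them, nothing else.
            var accept = new Thread(() =>
            {
                while (true)
                    Pending.Enqueue(listener.GetContext());
            });
            accept.IsBackground = true;
            accept.Start();

            // "Mainloop": once per frame, drain the queue and make sure every
            // context ends up closed, like the Writer/NotFound/Abort nodes do.
            while (true)
            {
                HttpListenerContext context;
                while (Pending.TryDequeue(out context))
                {
                    var path = context.Request.Url.AbsolutePath;
                    if (path == "/" || path == "/test")
                        context.Response.StatusCode = 200;  // would be Writer (HTTP)
                    else
                        context.Response.StatusCode = 404;  // would be NotFound (HTTP)

                    context.Response.Close();               // always close!
                }
                Thread.Sleep(16); // stand-in for one frame at ~60 FPS
            }
        }
    }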

The reason why this is a separate node is that one can now also use the Reader (HTTP) to examine/process the request data/object before computing a response; this is also shown in the patch. With the old nodes this was not possible (as you pointed out), or it made a FrameDelay unavoidable.

So another benefit, besides accessing the request body (header, method, etc.), is that each request gathered between two frames can be served, at best, as fast as the next frame.

Maybe this can be done ‘properly’ with VL and async subpatches though.

By the way, another advantage of handling the HttpListenerContexts separately is that the streams (Request.InputStream and Response.OutputStream) are not read when accepting requests. They are read only in the related nodes (read: as late as possible). This way one could handle a request with a large body (think file upload) without bloating the process memory, by simply not reading it in. On the other hand, it is possible to do so :)
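In code terms, “as late as possible” means something like the following - the body is only pulled off the wire once something actually reads the stream (a sketch, assuming a context that was accepted earlier):

    using System.IO;
    using System.Net;

    static class RequestBody
    {
        // Read the whole body only when we actually want it; until then
        // the InputStream has not been consumed.
        public static string ReadAll(HttpListenerContext context)
        {
            var request = context.Request;
            using (var reader = new StreamReader(request.InputStream, request.ContentEncoding))
                return reader.ReadToEnd();
        }

        // For a large upload, stream it to disk in chunks instead of
        // loading everything into process memory.
        public static void SaveToFile(HttpListenerContext context, string path)
        {
            using (var file = File.Create(path))
                context.Request.InputStream.CopyTo(file);
        }
    }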

Also note that you have to filter the incoming requests by their ‘HttpMethod’; you have to decide how to respond to GET, POST, PUT, DELETE, HEAD, etc. You can do so by applying the [ OR away…
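In the patch that is just string comparison on the method plus boolean logic; expressed in code, the decision looks roughly like this (the status codes are only an example):

    using System.Net;

    static class MethodFilter
    {
        // Decide per request how to react to each HTTP method.
        public static void Handle(HttpListenerContext context)
        {
            switch (context.Request.HttpMethod)
            {
                case "GET":
                case "POST":
                    context.Response.StatusCode = 200; // handle these
                    break;
                default:
                    context.Response.StatusCode = 405; // Method Not Allowed
                    break;
            }
            context.Response.Close();
        }
    }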

This reminds me: there should be a Close bang on the terminating nodes. That way one could write to a response over multiple frames and close it when the whole thing is sent; e.g. sending a large file or buffer without dropping frames.