A small proof of concept for working with ComfyUI’s workflows.
You will need to install the Newtonsoft.Json NuGet.
ComfyUIGamma.vl (63.7 KB)
Nice!
I’m playing with the attached json workflow and the SaveImageWebsocket node in ComfyUI:
Any idea on how to display an image from the bytes?
Here’s my working code from an app on the web:
ws.current.onmessage = (message) => {
  if (typeof message.data !== "string") {
    const reader = new FileReader();
    reader.onload = (event) => {
      console.log("BLOB MESSAGE:", event.target?.result);
      const arrayBuffer = event.target?.result as ArrayBuffer;
      const imageData = arrayBuffer.slice(8); // skip the 8-byte binary header
      const blob = new Blob([imageData], { type: "image/png" });
      const url = URL.createObjectURL(blob);
      setImageSrc(url);
    };
    reader.readAsArrayBuffer(message.data); // message.data is a Blob here
  }
};
I cannot find any node to get the bytes, and I have no idea on how to create a texture…
Is there a way to achieve this with the set of nodes from the VL.IO.Websocket nuget?
Am I missing something?
Cheers
PS: Can’t upload json on this thread…
hi @robe
I used to get the bytes of images, but honestly, if you’re a beginner, I wouldn’t advise it: you’d need to build a system that gets tasks executed, extracts the file names from them, and then fetches the bytes from the server with a special request.
If you are working locally, just specify the file path explicitly.
Use a FileWatcher, or the execution data coming over the socket, to display the result once execution finishes.
upd: But saving the image to a websocket is interesting! I just now realized what you meant. I’ll have a look at it in my spare time.
Hi @yar ,
Thanks for the quick answer!
The SaveImageWebsocket comes from the ComfyUI repository as an example in the custom_nodes folder.
I’ve succeeded in getting the image via WS in the following test app:
The idea is to have only the SaveImageWebsocket node in the workflow and the previews disabled. In this way, all the messages that aren’t String are bytes of an image.
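For reference, here’s a sketch of how that binary frame can be unpacked outside of Gamma. I’m assuming the header is two big-endian uint32s (event type, then image format), as in the SaveImageWebsocket example from the ComfyUI repo; the function name is mine, so verify the layout against your ComfyUI version:

```typescript
// Sketch: split a ComfyUI binary websocket frame into its 8-byte header
// and the actual image bytes. Header layout assumed: uint32 event type
// (e.g. 1 = preview image) followed by uint32 image format (e.g. 2 = PNG).
function extractImageBytes(frame: ArrayBuffer): Uint8Array {
  const view = new DataView(frame);
  const eventType = view.getUint32(0);   // big-endian by default
  const imageFormat = view.getUint32(4);
  console.log(`event=${eventType} format=${imageFormat}`);
  return new Uint8Array(frame, 8);       // everything after the header
}
```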
Is there a node or a region in Gamma to process the output from the Websocket node and get the latest bytes message instead of the strings?
EDIT: If you want to try it, there’s an example workflow in the repo:
@robe look at Data output – it’s observable.
https://thegraybook.vvvv.org/reference/libraries/reactive.html
Also, can suggest to enable “browsable packages” option and to look at “sources” of nodes involved in processing.
Hi @Yar, thanks for the hints! Of course, I’ve read the grey book several times; the part on reactive programming is pretty thin… Exploring the VL packages helps, but that domain is pretty hard to grasp for a noob like me (every time I come back to Gamma, I feel like a beginner again). It’s quite difficult to guess the names of the nodes/regions, the paradigms, etc.
This scenario is pretty simple: to make things even simpler, I’m using the VL.SimpleHTTP package for the POST (Blocking) request on /prompt.
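For context, the request itself is tiny. Here’s a hedged TypeScript sketch of the same POST; the payload shape ({ prompt, client_id }) and default port follow ComfyUI’s basic API examples, while the host and function names are placeholders of mine:

```typescript
// Build the JSON body ComfyUI's /prompt endpoint expects:
// the workflow under "prompt", plus a client_id so websocket
// messages can be matched back to this request.
function buildPromptPayload(workflow: object, clientId: string): string {
  return JSON.stringify({ prompt: workflow, client_id: clientId });
}

// Queue the workflow on a locally running ComfyUI instance.
async function queuePrompt(workflow: object, clientId: string) {
  const res = await fetch("http://127.0.0.1:8188/prompt", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: buildPromptPayload(workflow, clientId),
  });
  return res.json(); // contains prompt_id on success
}
```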
In the attached patch, I tried everything I could imagine (as a beginner) to take the WS binary shortcut and get the image:
As you can see in the patch, I built the whole journey of requests for the SaveImage approach. Getting the binary image straight from the WebSocket would be a REAL shortcut.
gamma-comfy.zip (37.4 KB)
Thanks in advance for your help.
And thanks to anyone who contributes to the Gamma project
After a long break, I’m here again scratching my head to understand things outside my comfy zone…
You’ll need something like this:
WebSocketClient returns bytes from its outputs (actually a WSMessage containing byte data). The essence of Observable is that you build a chain of programmed events, and your code (a node or group of nodes) subscribes to perform certain actions on each event.
In the screenshot, the following happens: each message received by the WebSocketClient is converted to bytes, and messages that cannot be converted to JSON are discarded. Either way, you will still need to collect the resulting bytes (e.g. via SpreadBuilder or something else), group them, and figure out how to display them. Can I suggest you do that? If you feel strong enough)
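Outside of VL, that collecting step is just concatenation. A minimal TypeScript sketch (in Gamma a SpreadBuilder plays this role; the assumption that the bytes arrive as separate chunks to be joined is mine):

```typescript
// Join a series of binary chunks into one contiguous buffer,
// e.g. to reassemble image bytes gathered from websocket messages.
function concatChunks(chunks: Uint8Array[]): Uint8Array {
  const total = chunks.reduce((n, c) => n + c.length, 0);
  const out = new Uint8Array(total);
  let offset = 0;
  for (const c of chunks) {
    out.set(c, offset); // copy each chunk at its running offset
    offset += c.length;
  }
  return out;
}
```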
upd: you can do that too:
Woah! That’s Pure Gold!
Thanks @yar! Your help was more than welcome! (and essential!)
I implemented both ways. The approach in upd2 seems cleaner, but I can’t get it to work.
I can get the bytes! To do that, you need to use the SaveImageWebsocket node in Comfy. To be sure you receive only one image on the WS for each POST on /prompt, the workflow must contain only that node that manages the base64 data:
And previews must be turned off:
Here’s a .zip containing a comfy workflow that uses SDXL turbo and the SaveImageWebSocket Node:
gamma-comfy.zip (35.4 KB)
There’s also the patch, if you want to have a look; I made a switch to test both the WS and the HTTP way.
Of course, I feel brave enough, or at least I’m trying to…
At the moment, as you can see in the patch, my attempt was unsuccessful.
If I set ToImage to R8G8B8A8 the node fails…
Anyway
Thanks for the help and for the trick to use the console…
Bye