HTMLTexture: Scrolling via touch (commandline --touch-events=enabled)

Hi everybody, hi Elias,

I want to benefit from the touch event functionality that comes with the Chromium Embedded Framework (CEF) and thus with HTMLTexture (EX9.Texture URL).

With the Chromium flag “--touch-events=enabled” set, a website normally scrolls when you interact with it on a touchscreen (as you know it from your smartphone).

According to Elias’ blog post, HTMLTexture should pick up Chromium command line flags passed via vvvv’s command line arguments.

Therefore I added the following to “args.txt”:

--touch-events=enabled
Somehow nothing changes: touch interactions still select the website’s content (as we are used to from mouse interactions) instead of scrolling the website.
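One way to check whether the flag actually reached Chromium is to query the page’s touch support via the JavaScript pin. A sketch using only standard web APIs, written as a function over the page’s window and navigator objects so it can be tried in isolation:

```javascript
// Sketch: report whether the page believes touch events are available.
// If --touch-events=enabled took effect, 'ontouchstart' should exist on window.
function touchSupport(win, nav) {
  return {
    ontouchstart: 'ontouchstart' in win,
    maxTouchPoints: (nav && nav.maxTouchPoints) || 0
  };
}

// In the page (via the JavaScript pin), something like:
// document.title = JSON.stringify(touchSupport(window, navigator));
```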

Tested with:

  • vvvv_45beta34.2_x86
  • Win7
  • iiyama ProLite T2336MSC-B1 touch monitor

Opening the same test URLs 1 2 directly in Chrome/Chromium or in the CEF test app, everything works fine…

The functionality of the flag can be tested directly in the Chrome browser at chrome://flags/

thanks :-)

htmltexture-touch-debugging.v4p (14.2 kB)
args.txt (22 Bytes)

No touch events are fired, likely because the vvvv mouse is passed in as mouse events, which of course makes sense.

You could try to fire “manual” touch events via the JavaScript pin instead of passing in the mouse.
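For example, a synthetic touch sequence could be dispatched from within the page. This is a sketch only: it assumes a Chromium new enough to expose the Touch/TouchEvent constructors (older builds may only offer document.createTouch/createTouchList), and the coordinates are placeholders:

```javascript
// Sketch: dispatch a synthetic touch sequence at page coordinates (x, y).
// Assumes the Touch and TouchEvent constructors are available in this Chromium.
function fireTouch(type, x, y) {
  var target = document.elementFromPoint(x, y) || document.body;
  var touch = new Touch({ identifier: 1, target: target, clientX: x, clientY: y });
  var ended = (type === 'touchend');
  target.dispatchEvent(new TouchEvent(type, {
    touches: ended ? [] : [touch],        // no active touches after touchend
    targetTouches: ended ? [] : [touch],
    changedTouches: [touch],
    bubbles: true,
    cancelable: true
  }));
}

// Example (only meaningful inside the page): a short upward drag, i.e. a scroll gesture.
// fireTouch('touchstart', 200, 400);
// fireTouch('touchmove', 200, 350);
// fireTouch('touchend', 200, 350);
```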

I’m not sure whether CEF’s built-in touch functionality has anything to do with Windows’ touch functionality in general, because it can be activated and used without any touch device (just with a traditional mouse), too, at least in the Chrome browser (when the flag is enabled) and with the CEF test app (by default).

Therefore I think it could and should generally be implemented for vvvv’s HTMLTexture, too.

That way patchers would not need to build their own logic for dragging and scrolling via touch, especially since dragging/scrolling within a website and scrolling the whole HTMLTexture cannot be distinguished at patch level, as HTMLTexture does not (and cannot?) output information on whether an interactive HTML object is clicked.

I’m not sure whether CEF’s DragHandler is a starting point for implementing this…

There’s no official support for touch events in CEF using off-screen rendering (in case you test with cefclient.exe, make sure to start it with --off-screen-rendering-enabled). See
As of now there’s no way to pass touch events down to the CEF browser; only mouse and keyboard events can be passed. The above-mentioned issue tries to add those missing methods, but as you can see it’s rather old and never made it into the master branch of CEF, probably because it’s Windows-only and has no tests attached to it. So that’s basically the starting point…

The drag handler you mentioned is about drag/drop, not touch events. For example dragging a file into the browser window.

Elias, thanks for that research!

I haven’t tried --off-screen-rendering-enabled yet, as I have no computer at hand at the moment… just two thoughts for now:

  • 1
    Inputting touch events directly into HTMLTexture would have been nice. But wouldn’t it also be possible to get the native touch behaviour with scrolling and pinching by only using mouse events?

I know that I can set the flag --touch-events=enabled in the Chrome browser and cefclient.exe even with no touch input device connected at all. What then happens in my case is that e.g. dragging the background of a website scrolls the website. I haven’t tried to get multiple cursors running yet, but it would make sense that pinch gestures would then work as well. So, in my incoherent understanding, to use the native touch functionality of CEF it would be enough to convert vvvv touch events into vvvv mouse events and pass those!?

  • 2
    Would it alternatively be possible to not set --off-screen-rendering-enabled, but instead put the drawn CEF browser window into presentation mode in order to again get something like an off-screen-rendered result? Or is presentation mode something like a ‘real fullscreen’ mode which needs to be applied to a monitor device?

Ad 1) I can’t answer that either, you’ll need to check for yourself. Translating touch to mouse should be doable with the existing node set.
Ad 2) It would behave and feel exactly like the existing HTML renderer node, except that it uses Chromium and not Internet Explorer. Not sure if that’s what you want…
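An alternative to the node-based touch-to-mouse translation suggested above is a page-side workaround: a minimal sketch (assuming the page can be scripted via the JavaScript pin and that window.scrollBy is the scrolling you want) that turns the mouse drag HTMLTexture already forwards into touch-style scrolling. It takes the document and window as parameters instead of using the globals so it can also be exercised outside a browser:

```javascript
// Sketch: make mouse drags scroll the page, emulating touch-style scrolling.
function installDragScroll(doc, win) {
  var dragging = false, lastY = 0;

  doc.addEventListener('mousedown', function (e) {
    dragging = true;
    lastY = e.clientY;
    e.preventDefault(); // suppress text selection while dragging
  });

  doc.addEventListener('mousemove', function (e) {
    if (!dragging) return;
    win.scrollBy(0, lastY - e.clientY); // dragging up scrolls down, like touch
    lastY = e.clientY;
  });

  doc.addEventListener('mouseup', function () {
    dragging = false;
  });
}

// In the page (via the JavaScript pin): installDragScroll(document, window);
```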