Hello Guys,
As some of you know, I do UX/UI design for a living. At some point the mouse starts to feel like a strain from all the repetitive work; Wacom tablets help to an extent, but they still require us to perform every single step by hand.
While designing, we often have ten thoughts queued up in advance, and executing all of them can take anywhere from 20 seconds to 20 minutes. It feels like tailing a slow-moving car. So instead of manually interfacing with the mouse and tablet… how about interacting through mind sensors like the Emotiv headset?
There is already a contribution along these lines here:
art-and-brainwaves-with-vvvv
A bridge that translates our thoughts into actions, Left click, Right click, Middle click, Zoom, Pan, plus the ability to cache our thoughts as a queue of 10-20 commands (a rough sketch of that idea follows below), could change the whole game in how we interact with the computer, at least for designers. I have Photoshop in mind, but it would also apply to most digital artists.
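Just to make the "cache our thoughts" part concrete, here is a minimal sketch in Python (function and command names are made up placeholders, not anything from Emotiv or vvvv): a bounded FIFO that keeps the last 10-20 recognised commands and replays them later.

```python
# Sketch of the "thought cache": a bounded FIFO buffering recognised mental
# commands (the command strings are hypothetical) until they are replayed.
from collections import deque

MAX_CACHED = 20  # keep roughly the last 10-20 commands

command_cache = deque(maxlen=MAX_CACHED)  # oldest entries drop off automatically

def on_mental_command(command_name):
    """Called whenever the classifier reports a command, e.g. 'left_click'."""
    command_cache.append(command_name)

def replay_cached_commands(execute):
    """Flush the cache through an executor callback (mouse driver, vvvv patch, etc.)."""
    while command_cache:
        execute(command_cache.popleft())
```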
I thought about a few ways to make this work. Without having the device yet, I am assuming we could map the sensor data onto the global mouse node so it controls the mouse system-wide. We would also want a filter, a held key press or something similar, so commands are only processed while it is active, to avoid acting on unwanted commands.
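Here is a rough sketch of that key-press gate as a desktop-side Python script, just to show the logic: the mouse side uses the real pynput library, while the headset read-out is left as a hypothetical callback since I don't have the device. In vvvv itself this would of course be a patch around the global mouse node instead.

```python
# Minimal sketch: mental commands only act on the mouse while a gate key is held.
# handle_command() is the hypothetical hook the sensor loop would call.
from pynput.mouse import Controller, Button
from pynput.keyboard import Listener, Key

mouse = Controller()
gate_open = False  # process commands only while the gate key is held down

def on_press(key):
    global gate_open
    if key == Key.ctrl_l:      # hold left Ctrl to "arm" the brain commands
        gate_open = True

def on_release(key):
    global gate_open
    if key == Key.ctrl_l:
        gate_open = False

# Map recognised command names to mouse actions (names are placeholders)
ACTIONS = {
    "left_click":   lambda: mouse.click(Button.left),
    "right_click":  lambda: mouse.click(Button.right),
    "middle_click": lambda: mouse.click(Button.middle),
    "zoom_in":      lambda: mouse.scroll(0, 1),
    "zoom_out":     lambda: mouse.scroll(0, -1),
}

def handle_command(name):
    """Run a recognised mental command only while the gate key is held."""
    if gate_open and name in ACTIONS:
        ACTIONS[name]()

# The keyboard listener runs in the background; the sensor loop would feed
# handle_command() with whatever the headset classifier reports.
listener = Listener(on_press=on_press, on_release=on_release)
listener.start()
```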
If this works like a charm, it could steal some headlines and open up VVVV to a whole new crowd of designers.
Does this sound interesting, or is the sensor simply not stable enough to carry this much adaptation?
Cheers!