A digital interface manipulation aid using eye-tracking and AI


Since my mother started using touchscreen devices, I’ve had the opportunity to observe how the relationship between elders and technology can be tricky in ways I couldn’t have imagined before.

I assumed the major problem would be understanding the semiotics of the interfaces: the meaning of all the different signs and symbols currently in use in common applications and operating systems.

I was worried about the lack of conventions governing the behaviour of interface elements, and about the gestures a user has to learn in order to master the software.

I was wrong. Or, at least, these weren’t the major problems.

What creates the most trouble is the physical manipulation of the device, especially on a mobile, where the sensitive part – the screen – covers roughly half of the object.

My father recently started using a tablet to read books, and he ended up placing the device on a stand on the desk and avoiding touching it: being used to handling books, he would rest his fingers on the screen, triggering a bunch of functions he didn’t ask for but that are implemented in the interface.

My mother, on the other hand, keeps pressing links too slowly, performing a “press and hold” instead of a “tap”. The timing thresholds, well tuned for the typical target audience of these devices, are not good enough for them: they are too fast and unforgiving.

In the “First Principles of Interaction Design”, Bruce Tognazzini suggests using an eye-tracking system to improve the precision of link triggering.

Basically, he suggests that if you are looking at a certain point and at the same time you tap on that particular spot, that’s probably the action you intend to perform.
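To make the idea concrete, here is a minimal sketch of how a tap point and a gaze point could be blended to pick the intended target. Everything here – the function names, the weight, the radius – is an assumption for illustration, not an implementation of Tognazzini’s proposal.

```python
import math

# Hypothetical sketch: blend a gaze fixation with a tap to pick the most
# likely intended target. All names and thresholds are assumptions.

def pick_target(tap, gaze, targets, gaze_weight=0.4, max_radius=60):
    """Return the target the user most likely meant to hit.

    tap, gaze:   (x, y) screen coordinates in pixels.
    targets:     list of dicts, each with an (x, y) 'center'.
    gaze_weight: how much the gaze point pulls the effective touch point.
    max_radius:  ignore targets farther than this from the blended point.
    """
    # The tap is imprecise; the gaze hints at intent. Blend the two.
    bx = (1 - gaze_weight) * tap[0] + gaze_weight * gaze[0]
    by = (1 - gaze_weight) * tap[1] + gaze_weight * gaze[1]

    best, best_dist = None, max_radius
    for t in targets:
        cx, cy = t["center"]
        dist = math.hypot(cx - bx, cy - by)
        if dist < best_dist:
            best, best_dist = t, dist
    return best  # None if nothing plausible is near the blended point


# Example: a tap that lands between two buttons is resolved by the gaze.
targets = [{"name": "Back", "center": (40, 40)},
           {"name": "Menu", "center": (200, 40)}]
print(pick_target(tap=(130, 45), gaze=(195, 38), targets=targets))  # -> Menu
```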

From this point I started wondering whether it would be possible to use a similar principle to give the user a better, more tailored experience. Theoretically, the machine should adapt to the user, not the other way around.

Maybe, using artificial intelligence (AI), it would be possible to “teach” the device to behave differently with different users, so that my mother’s “press and hold” could be correctly interpreted by the device as a simple “tap”.
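As a rough sketch of what I mean by “teaching” the device, the long-press threshold could be learned from a user’s own history instead of being a fixed system-wide value. The class below is only an illustration under my own assumptions: the names, the sample size, and the headroom factor are all made up.

```python
from statistics import median

# Hypothetical sketch of per-user gesture adaptation: learn a press-duration
# threshold from a user's own confirmed taps, so a slow press that the OS
# would call "press and hold" is reinterpreted as a plain tap.

class AdaptiveTapClassifier:
    def __init__(self, default_hold_ms=500):
        self.default_hold_ms = default_hold_ms  # typical OS long-press delay
        self.tap_durations = []                 # confirmed taps for this user

    def record_confirmed_tap(self, duration_ms):
        """Store the duration of a press the user confirmed was a simple tap."""
        self.tap_durations.append(duration_ms)

    def hold_threshold(self):
        """Long-press threshold tailored to this user."""
        if len(self.tap_durations) < 10:
            return self.default_hold_ms         # not enough data: use default
        # Allow generous headroom above the user's typical tap duration.
        return max(self.default_hold_ms, 2 * median(self.tap_durations))

    def classify(self, duration_ms):
        return "tap" if duration_ms < self.hold_threshold() else "press and hold"


# A user whose taps routinely last ~700 ms gets a higher threshold,
# so an 800 ms press is still treated as a tap rather than a long press.
clf = AdaptiveTapClassifier()
for d in [650, 700, 720, 680, 710, 690, 740, 700, 660, 730]:
    clf.record_confirmed_tap(d)
print(clf.classify(800))  # -> "tap"
```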

There are a lot of problems to solve, both on the hardware and the software side, but imagine a device that can recognise you and adapt its speed and sensitivity to your personal parameters: able to wait for a slower user and speed up for a faster one, and able to understand whether a trigger is intentional or accidental.
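On the intentional-versus-accidental question, the gaze could also be used as a filter: a touch that doesn’t roughly coincide, in time and space, with where the user is looking – like my father’s fingers resting on the screen while he reads – could simply be ignored. The sketch below is only an assumption of how such a check might look; the gaze source, names, and thresholds are invented for illustration.

```python
import math, time

# Hypothetical sketch: reject accidental touches (e.g. fingers resting on the
# screen while reading) by requiring a recent gaze fixation near the touch.

def is_intentional(touch, recent_fixations, max_dist_px=150, max_age_s=0.5):
    """touch:            dict with 'x', 'y', 'timestamp'.
    recent_fixations: dicts with 'x', 'y', 'timestamp' from a gaze tracker.
    Returns True only if a recent fixation lands close to the touch point."""
    for fix in recent_fixations:
        close_in_time = abs(touch["timestamp"] - fix["timestamp"]) <= max_age_s
        close_in_space = math.hypot(touch["x"] - fix["x"],
                                    touch["y"] - fix["y"]) <= max_dist_px
        if close_in_time and close_in_space:
            return True
    return False


now = time.time()
# The eyes are on the middle of the page; a finger rests near the bottom edge.
fixations = [{"x": 400, "y": 600, "timestamp": now}]
print(is_intentional({"x": 30, "y": 900, "timestamp": now}, fixations))   # False
print(is_intentional({"x": 390, "y": 620, "timestamp": now}, fixations))  # True
```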

Of course, the user should be able to enable or disable it. The system would need some time to learn from the user (storing information to compare against and working out the correct response), and it should be possible to use, for example, the built-in front camera as the tracking device. It should also work in low-light environments and from an acceptable distance (for example, the iris scanner built into the Lumia 950 requires the user to come closer than 30 cm to the sensor in order to be recognised).

But it’s something I think is worth considering.

I’ll publish the concept for this project on my website and update it over time. Hopefully one day I’ll be able to create a prototype, although it requires some technical competencies I don’t possess; in any case, I’m more than open to collaboration.