Tags: AI, UX

Developed in the 1970s, the ‘mirror test’ involves placing a child of around 18 months in front of a mirror with a coloured sticker on their forehead, to determine whether they possess the ability of self-recognition. A child younger than 18 months will typically ignore the sticker; an older child will reach up and remove it. This is taken as an indication that the child perceives the reflected image as itself, rather than as another.

This self-realisation marks one of the milestones of the brain’s development: in this case, acquiring “theory of mind”. This is “the ability to attribute mental states—beliefs, intents, desires, emotions, knowledge, etc.—to oneself and to others, and to understand that others have beliefs, desires, intentions, and perspectives that are different from one's own.”

A special application of this is the ability to model the thoughts of others and guess their intentions. In social animals, this acts as a sort of philosophical lubricant, transforming collections of hungry, “red in tooth and claw” creatures into a functioning society greater than the sum of its individuals.

Even our smartest televisions have only the faintest sense of self-awareness. They know what a viewer is watching, some even down to the individual video frame. Applications can guess at who is watching but, as OTT marketers and advertisers will tell you, this is not an exact science. In a world where entire households share a single login, it’s often difficult to distinguish which individual (or group of individuals!) is watching. Television is, often, blind.

When a device we are watching reaches out to ask, “Are you still watching?”, it feels profound, almost intelligent, but this is just an act of apparent kindness masking a desire to avoid wasting bandwidth.

The technology to improve the ‘intelligence’ of our ‘smart’ machines is already available. Almost everyone in the developed world carries a mobile phone that continually emits trackable radio signals, such as Wi-Fi and Bluetooth. It would be straightforward for a television to measure the strength of these signals and estimate the number of people watching. Tracking how close they are sitting to the screen, and even their small movements, is also possible.
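As a rough illustration, here is a minimal sketch of how such presence detection might work, assuming a log-distance path-loss model. The transmit-power constant, path-loss exponent, and scan results are all hypothetical and would need per-room calibration:

```python
# A minimal sketch of presence detection from phone radio signals.
# Everything here is an assumption for illustration: a log-distance
# path-loss model, a calibrated transmit power of -59 dBm at 1 m, and
# a free-space path-loss exponent of 2.0. Real rooms need calibration.

TX_POWER_AT_1M = -59.0    # dBm; a typical BLE calibration constant
PATH_LOSS_EXPONENT = 2.0  # ~2 in free space, higher through walls

def estimate_distance_m(rssi_dbm):
    """Estimate distance from one RSSI sample (very noisy in practice)."""
    return 10 ** ((TX_POWER_AT_1M - rssi_dbm) / (10 * PATH_LOSS_EXPONENT))

def viewers_in_room(rssi_by_device, max_distance_m=4.0):
    """Count devices whose estimated distance puts them near the TV."""
    return sum(
        1 for rssi in rssi_by_device.values()
        if estimate_distance_m(rssi) <= max_distance_m
    )

# Hypothetical scan results: device ID -> RSSI in dBm.
scan = {"phone-1": -55.0, "phone-2": -65.0, "phone-3": -95.0}
print(viewers_in_room(scan))  # 2 devices estimated within ~4 m of the set
```

A single RSSI reading is jittery, so a real system would average over a sliding window before trusting a distance estimate.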

This opens up a world of possibilities for operators to improve the overall consumer experience. Content could be paused when a viewer leaves the room. Video could be sacrificed for higher-quality audio for users who have popped to the kitchen, saving power and bandwidth. Whether a phone is in a person’s hand or resting on the sofa can offer insights into customer engagement. The possibilities are endless.
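Building on the presence sketch above, a playback policy could be as simple as the following. The Mode values and distance thresholds are illustrative assumptions, not any real player API:

```python
from enum import Enum, auto

class Mode(Enum):
    FULL_VIDEO = auto()
    AUDIO_ONLY = auto()
    PAUSED = auto()

def choose_mode(nearest_viewer_m):
    """Pick a playback mode from the nearest viewer's estimated distance."""
    if nearest_viewer_m is None:   # no phones detected: nobody is watching
        return Mode.PAUSED
    if nearest_viewer_m > 6.0:     # viewer has popped to the kitchen
        return Mode.AUDIO_ONLY     # drop video, keep (or upgrade) audio
    return Mode.FULL_VIDEO

print(choose_mode(2.5))   # Mode.FULL_VIDEO
print(choose_mode(None))  # Mode.PAUSED
```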

Technological advances are on the cusp of taking this vision even further. Imagine a TV that could not only see who is watching but also track their eye movements. Where on the screen are viewers looking? How engaged are they? How long does it take them to understand and navigate the user interface?

As we move from a time when engagement is guessed to an era when it can be precisely measured, we have to step back and ask: what does that mean for our industry? The opportunities for advertisers are obvious, but there is so much more.

Our visual reality is constructed. What you think you see is a memory buffer of your self-created view. Of your roughly 190-degree field of view, less than 10% is used to focus on detail, and only about 1.5% on extreme detail. The rest is devoted to much lower resolution and movement. A future application of this in VR is foveated rendering: rendering in detail only what the viewer’s eye needs. The first device with fast enough eye tracking will be able to deliver superior rendering performance by devoting CPU and GPU resources solely to where detail is needed. In theory, the same approach could be applied to phones and TVs. Why fully decompress a frame when no one is looking at a particular patch of pixels? Why pick a 4K stream when the viewer is so far from the TV set that a lower-quality stream would look the same?
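To make that last point concrete, here is a back-of-the-envelope sketch of distance-based stream selection. The 60-pixels-per-degree acuity figure is a common rule of thumb for 20/20 vision (not from this article), and the screen sizes and resolutions are illustrative:

```python
import math

ACUITY_PPD = 60.0  # pixels per degree a typical 20/20 eye can resolve

def pixels_per_degree(h_pixels, screen_width_m, distance_m):
    """Angular pixel density of a screen as seen from `distance_m`."""
    screen_degrees = math.degrees(2 * math.atan(screen_width_m / (2 * distance_m)))
    return h_pixels / screen_degrees

def pick_stream(screen_width_m, distance_m):
    """Lowest horizontal resolution that still saturates the eye."""
    for h_pixels in (1280, 1920, 3840):  # 720p, 1080p, 4K
        if pixels_per_degree(h_pixels, screen_width_m, distance_m) >= ACUITY_PPD:
            return h_pixels
    return 3840

# A 1.4 m wide (~65-inch) TV viewed from a sofa 3 m away:
print(pick_stream(1.4, 3.0))  # 1920: 1080p already saturates the eye
```

From an ordinary sofa distance, the maths suggests a 1080p stream is visually indistinguishable from 4K, which is exactly the bandwidth saving the paragraph above hints at.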

Reliable eye tracking enables something even more exciting. Imagine a user interface that could measure how long it takes to be understood, have its “cognitive load” constantly analysed, and evolve over time to measurably improve the viewer’s experience.
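As a toy illustration of what ‘evolving’ might mean here, the following sketch hill-climbs on a gaze-derived metric. The measure_time_to_find function is entirely hypothetical; a real system would aggregate eye-tracking data from many viewers over many sessions:

```python
import random

def measure_time_to_find(layout):
    """Placeholder 'seconds to locate a target' for a given layout."""
    ideal = {"tile_size": 180, "columns": 5}  # pretend optimum, for the demo
    error = sum(abs(layout[k] - ideal[k]) for k in ideal)
    return error + random.uniform(0, 5)  # measurement noise

def mutate(layout):
    """Randomly nudge one layout parameter."""
    candidate = dict(layout)
    key = random.choice(list(candidate))
    candidate[key] = max(1, candidate[key] + random.randint(-20, 20))
    return candidate

layout = {"tile_size": 120, "columns": 3}
best = measure_time_to_find(layout)
for _ in range(200):            # each generation trials one mutant layout
    trial = mutate(layout)
    t = measure_time_to_find(trial)
    if t < best:                # keep mutations viewers find faster
        layout, best = trial, t

print(layout)  # drifts toward the layout viewers navigate fastest
```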

Why not take this further? Over time, devices could learn how your brain works and begin to craft, in real time, an interface that best suits the way you navigate. To paraphrase Oprah: “everyone gets an interface!”

Perhaps, after an initial “Cambrian Explosion” of UI body-plans, a few clearly superior layouts will emerge that fit the majority of viewers; or we could end up with a multitude of species reflecting different segments of viewers. We won’t know until we try.

But even that, in and of itself, is an unknown. It could be that, as a society, we reject giving our devices cameras that can be used to study and classify us, or even the ability for software to evolve in the ways described. If that is the case, we have to ask ourselves: where do we go from here?

Either way, exciting times are ahead.
