iUi

Eyefluence was in the midst of a pivot when I came aboard. Its proprietary eye-tracking technology had been developed to enable “locked-in” quadriplegics to communicate. The system had been a life-saver for many, but the company was floundering. It needed to expand its market to survive, and it had set its sights on the emerging XR industry.

We weren’t the only player in this arena. But our competitors were mostly pitching passive use-cases: foveated rendering and heatmap analysis. Eyefluence founder Jim Marggraff had a hunch that gaze could serve as an active input: the centerpiece of an interaction system that might in fact prove superior to manual controls.

This went against prevailing wisdom. It’s well-known that our gaze isn’t entirely under our conscious control: it flits about distractedly while we think, or it can be captured by passing shiny objects. The “dwell-based” input system that had been devised for quadriplegics was functional but slow: to signal intent, the user generally had to focus on an affordance until a progress-circle filled.
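
To make the contrast concrete, here is a minimal sketch of dwell-based selection, assuming a simple per-frame timer and a hypothetical one-second threshold; it is illustrative only, not the assistive system’s actual code.

```python
# Minimal sketch of dwell-based selection (illustrative; not Eyefluence's code).
# Gaze must stay inside a target for DWELL_TIME seconds before it activates.

DWELL_TIME = 1.0  # seconds of sustained fixation required (hypothetical value)

class DwellTarget:
    def __init__(self, bounds):
        self.bounds = bounds      # (x, y, width, height) in screen space
        self.progress = 0.0       # 0.0 -> 1.0, drives the on-screen progress circle

    def contains(self, gaze_x, gaze_y):
        x, y, w, h = self.bounds
        return x <= gaze_x <= x + w and y <= gaze_y <= y + h

    def update(self, gaze_x, gaze_y, dt):
        """Advance the dwell timer each frame; return True when the target activates."""
        if self.contains(gaze_x, gaze_y):
            self.progress = min(1.0, self.progress + dt / DWELL_TIME)
        else:
            self.progress = 0.0   # any glance away resets the timer -- hence the slowness
        return self.progress >= 1.0
```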

Marggraff understood that in order to be viable for a general audience, a gaze-based UI needed to operate at the speed of thought. The eyes are, after all, the fastest-moving organs in the human body, and the organs most closely wired to the brain. How to develop a UI that was both lightning fast and foolproof, one that might enable the user to perform all the functions of a traditional interface without triggering “inadvertent activations”? This was the challenge I took on as Creative Director.

We dove into the physiology, psychology and physics of our subject. The eye exhibits two types of motion: the saccade, a staccato glance between two fixed points, and the pursuit, the fluid motion that occurs when the gaze is locked on a moving target. Our tracking technology enabled us to reliably distinguish between these two motions, and with this assurance we set about constructing our interaction system.
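
The classifier itself was proprietary, but a standard velocity-threshold approach conveys the idea: saccades reach angular velocities of several hundred degrees per second, while pursuit stays far slower. The thresholds and function names below are hypothetical.

```python
# Illustrative velocity-threshold classifier, not the proprietary Eyefluence
# pipeline. Saccades reach very high angular velocities (hundreds of deg/s);
# smooth pursuit rarely exceeds a few tens of deg/s; fixations sit near zero.

import math

SACCADE_THRESHOLD = 100.0   # deg/s (hypothetical threshold)
PURSUIT_THRESHOLD = 5.0     # deg/s; below this we call it a fixation (hypothetical)

def classify_sample(prev_angle, curr_angle, dt):
    """Label one gaze sample as 'saccade', 'pursuit', or 'fixation'.

    prev_angle, curr_angle: gaze direction as (azimuth, elevation) in degrees.
    dt: time between samples in seconds.
    """
    dx = curr_angle[0] - prev_angle[0]
    dy = curr_angle[1] - prev_angle[1]
    velocity = math.hypot(dx, dy) / dt   # angular speed in deg/s

    if velocity >= SACCADE_THRESHOLD:
        return "saccade"
    if velocity >= PURSUIT_THRESHOLD:
        return "pursuit"
    return "fixation"
```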

The centerpiece of the system was the “activation saccade,” an intentional glance between two points. Because the saccade is ballistic, we found we could accurately predict the endpoint before the eye landed. This enabled us to serve up feedback before the triggering glance was complete. The user experienced this as the system “reading my mind,” something akin to magic.
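
A rough sketch of how such a prediction can work, assuming a textbook “main sequence” model in which a saccade’s peak velocity grows predictably with its amplitude; the constants and functions below are hypothetical stand-ins, not Eyefluence’s method.

```python
# Illustrative saccade-endpoint prediction, not Eyefluence's method. Saccades are
# ballistic and follow the "main sequence": peak velocity grows predictably with
# amplitude. Once the peak is observed mid-flight, amplitude -- and hence the
# landing point -- can be estimated before the eye gets there.

import math

def amplitude_from_peak_velocity(peak_velocity, vmax=500.0, k=0.04):
    """Invert a saturating main-sequence fit: v_peak = vmax * (1 - exp(-k * amplitude)).

    vmax and k are hypothetical constants; a real system would calibrate per user.
    """
    ratio = min(peak_velocity / vmax, 0.999)
    return -math.log(1.0 - ratio) / k

def predict_endpoint(start_point, direction, peak_velocity):
    """Estimate where a saccade will land from its start, heading, and peak velocity.

    start_point: (azimuth, elevation) in degrees at saccade onset.
    direction:   unit vector of the saccade's initial heading.
    """
    amplitude = amplitude_from_peak_velocity(peak_velocity)
    return (start_point[0] + direction[0] * amplitude,
            start_point[1] + direction[1] * amplitude)
```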

Keeping the user on-task is a prime challenge of the gaze-based UI: if I’m using my eyes both to read my environment and to work the controls, I can easily lose focus. We pioneered a technique we called “eye-holding”: a subtle moving particle that leads the gaze to the desired spot.
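
A minimal sketch of the idea, assuming a simple ease-out animation from the current gaze point to the next affordance; the real behavior and tuning were Eyefluence’s own.

```python
# Illustrative "eye-holding" particle: it gives the eye something to pursue so
# attention is led -- rather than yanked -- to the next control. The animation
# curve and duration here are hypothetical.

class EyeHoldParticle:
    def __init__(self, start, destination, duration=0.4):
        self.start = start              # (x, y) where the gaze currently rests
        self.destination = destination  # (x, y) of the affordance to lead toward
        self.duration = duration        # seconds (hypothetical value)
        self.elapsed = 0.0

    def position(self, dt):
        """Advance the animation and return the particle's current position."""
        self.elapsed = min(self.duration, self.elapsed + dt)
        t = self.elapsed / self.duration
        ease = 1.0 - (1.0 - t) ** 3     # cubic ease-out: fast start, gentle arrival
        x = self.start[0] + (self.destination[0] - self.start[0]) * ease
        y = self.start[1] + (self.destination[1] - self.start[1]) * ease
        return (x, y)
```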

By utilizing saccade and pursuit in various combinations, we enabled a range of interactions: scroll and rotate, pan and zoom, search and organize, and more. We even tested a gaze-based version of the Android swipe keyboard. The system was instantiated in a suite of demos, which we exhibited at conventions and private presentations. We demonstrated that, with very little practice, the user could learn to perform complex tasks, rapidly and intuitively, using just her eyes. This showcase eventually attracted the notice of Google, which acquired Eyefluence in 2016.
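
To give a flavor of how the two primitives compose into such interactions, here is a hypothetical pursuit-driven scroller, building on the classifier sketched earlier; it is an illustration, not a description of the shipped demos.

```python
# Hypothetical pursuit-driven scroller, sketched to show how the primitives can
# compose. A saccade to the scroll handle arms it; while the gaze pursues the
# moving handle, the content scrolls at a matching rate; a glance away stops it.

class PursuitScroller:
    def __init__(self, handle_radius=50.0):
        self.handle_radius = handle_radius  # px tolerance around the handle (hypothetical)
        self.armed = False
        self.scroll_offset = 0.0

    def update(self, gaze, handle_pos, handle_velocity, gaze_label, dt):
        """gaze_label comes from a saccade/pursuit classifier like the one above."""
        dx = gaze[0] - handle_pos[0]
        dy = gaze[1] - handle_pos[1]
        on_handle = (dx * dx + dy * dy) ** 0.5 <= self.handle_radius

        if gaze_label == "saccade" and on_handle:
            self.armed = True                  # a deliberate glance arms the control
        elif not on_handle:
            self.armed = False                 # looking away releases it

        if self.armed and gaze_label == "pursuit":
            self.scroll_offset += handle_velocity * dt   # content follows the handle
        return self.scroll_offset
```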

I worked closely with Marggraff to define the system and spec out the demos. We typically brainstormed them together on whiteboards. I would capture the boards and turn them into documentation and wireframes, working mostly in Illustrator. I then directed a small team of Unity developers through the iterative build process, occasionally making adjustments in-engine.