Merge pull request #617 from pupil-labs/update-gaze-meta
add meta image tags
marc-tonsen committed Aug 15, 2023
2 parents 4464d64 + c258e67 commit 3a90e4e
Showing 1 changed file with 26 additions and 20 deletions.
46 changes: 26 additions & 20 deletions src/alpha-lab/gaze_contingency_assistive.md
title: A practical guide to implementing gaze contingency in assistive technology
description: "Gaze contingent systems for assistive technology"
permalink: /alpha-lab/gaze-contingency-assistive
meta:
  - name: twitter:image
    content: "https://img.youtube.com/vi/cuvWqVOAc5M/sddefault.jpg"
  - property: og:image
    content: "https://img.youtube.com/vi/cuvWqVOAc5M/sddefault.jpg"
tags: [Neon]
---

# A practical guide to implementing gaze contingency for assistive technology

<TagLinks />

'Gaze contingency' refers to a type of human–computer interaction where interfaces or display systems adjust their content based on the user's gaze. It's commonly used for assistive applications as it enables people to interact with a computer or device using their eyes instead of a mouse or keyboard, like in the video above. This is particularly valuable for individuals with physical disabilities and offers new opportunities for communication, education, and overall digital empowerment.

## Limitations and current prospects

Gaze-contingent assistive technologies have become much easier to use recently thanks to advancements in the field of eye tracking. Traditional assistive systems require frequent calibration, which can be problematic in practice, as highlighted in the story of [Gary Godfrey](https://pupil-labs.com/blog/community/cycling-for-als/). Modern calibration-free approaches like [Neon](https://pupil-labs.com/products/neon/) overcome this issue and provide a more robust and user-friendly input modality for producing gaze data.


**Mapping gaze to screen**

Neon is a wearable eye tracker that provides gaze data in scene camera coordinates, i.e., relative to its forward-facing
camera. We therefore need to transform gaze from _scene-camera_ to _screen-based_ coordinates in real time, such that
the user can interact with the screen. Broadly speaking, we need to locate the screen, send gaze data from Neon to the
computer, and map gaze into the coordinate system of the screen.
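
As a rough illustration of the "send gaze data from Neon to the computer" step, the sketch below uses the simple interface of the `pupil-labs-realtime-api` Python package to stream gaze samples in scene-camera coordinates; exact attribute names may differ between package versions.

```python
# Minimal sketch: stream gaze (in scene-camera pixel coordinates) from Neon.
# Assumes the simple interface of the pupil-labs-realtime-api package.
from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device()  # find a Neon companion device on the local network
try:
    while True:
        gaze = device.receive_gaze_datum()  # blocks until the next gaze sample
        print(f"scene-camera gaze: ({gaze.x:.1f}, {gaze.y:.1f}) px")
except KeyboardInterrupt:
    pass
finally:
    device.close()
```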

To locate the screen, we use [AprilTags](https://april.eecs.umich.edu/software/apriltag) to identify the image of the
screen as it appears in Neon’s scene camera. Gaze data is transferred to the computer via Neon's
[Real-time API](/neon/real-time-api/introduction/). We then transform gaze from _scene camera_ to _screen-based_
coordinates using a [homography](<https://en.m.wikipedia.org/wiki/Homography_(computer_vision)>) approach like the [Marker Mapper](/enrichments/marker-mapper/)
enrichment we offer in Pupil Cloud as a post-hoc solution. The heavy lifting of all this is handled by
our [Real-time Screen Gaze](https://github.com/pupil-labs/realtime-screen-gaze/) package (written for this guide).
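
The Real-time Screen Gaze package wraps this up for you, but to make the mapping itself concrete, here is an illustrative sketch of the homography step using OpenCV, with placeholder corner coordinates standing in for the detected marker positions.

```python
# Illustrative only: map gaze from scene-camera to screen coordinates with a
# homography. The corner values below are placeholders, not real detections.
import cv2
import numpy as np

# Screen corners as seen in Neon's scene camera (pixels): TL, TR, BR, BL,
# e.g. derived from the outer corners of the four AprilTag markers.
corners_in_scene = np.array(
    [[412, 230], [1180, 245], [1165, 720], [400, 705]], dtype=np.float32
)

# The same corners in screen coordinates, here for a 1920x1080 display.
corners_on_screen = np.array(
    [[0, 0], [1920, 0], [1920, 1080], [0, 1080]], dtype=np.float32
)

H, _ = cv2.findHomography(corners_in_scene, corners_on_screen)

# Map one gaze point (x, y) from scene-camera pixels to screen pixels.
gaze_scene = np.array([[[800.0, 450.0]]], dtype=np.float32)
gaze_screen = cv2.perspectiveTransform(gaze_scene, H)[0, 0]
print(f"gaze on screen: ({gaze_screen[0]:.0f}, {gaze_screen[1]:.0f}) px")
```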

**Gaze-controlling a mouse**

The second challenge is using screen-mapped gaze to control an input device, e.g., a mouse. To demonstrate how to do this, we
wrote the [Gaze-controlled Cursor Demo](https://github.com/pupil-labs/gaze-controlled-cursor-demo). This demo leverages the
[Real-time Screen Gaze](https://github.com/pupil-labs/realtime-screen-gaze/) package to obtain gaze in screen-based coordinates, and then uses
that to control a mouse in a custom browser window, as shown in the video above. A simple dwell-time filter implemented
in the demo enables mouse clicks when gaze hovers over different elements of the browser.
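
To make the dwell idea concrete (this is a sketch, not the demo's actual implementation), the filter below triggers a click once gaze has stayed within a small radius for a set time; `pyautogui` is used purely as a stand-in for mouse control.

```python
# Minimal dwell-time filter: click when gaze stays within `radius` pixels of
# the same spot for `dwell_time` seconds. Sketch only; the demo has its own logic.
import math
import time

import pyautogui


class DwellDetector:
    def __init__(self, radius=25, dwell_time=0.75):
        self.radius = radius          # max gaze scatter (screen px) still counted as dwelling
        self.dwell_time = dwell_time  # seconds of dwelling needed to trigger a click
        self._anchor = None           # (x, y, start_time) of the current dwell candidate

    def update(self, x, y):
        """Feed one screen-mapped gaze sample; returns True when a dwell completes."""
        now = time.monotonic()
        if self._anchor is None:
            self._anchor = (x, y, now)
            return False
        ax, ay, start = self._anchor
        if math.hypot(x - ax, y - ay) > self.radius:
            self._anchor = (x, y, now)  # gaze moved away: restart the dwell
            return False
        if now - start >= self.dwell_time:
            self._anchor = None  # reset so a single dwell produces a single click
            return True
        return False


detector = DwellDetector(radius=25, dwell_time=0.75)

def on_gaze(x, y):
    """Call with each screen-mapped gaze sample, e.g. from the homography sketch above."""
    pyautogui.moveTo(x, y)
    if detector.update(x, y):
        pyautogui.click(x, y)
```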

**It’s your turn…**

1. Follow the instructions in [Gaze-controlled Cursor Demo](https://github.com/pupil-labs/gaze-controlled-cursor-demo) to download and run it locally on your computer.
2. Start up [Neon](/neon/getting-started/first-recording.html), make sure it’s detected in the demo window, then check out the settings:
   - Adjust the `Tag Size` and `Tag Brightness` settings as necessary until all four AprilTag markers are successfully tracked (markers that are not tracked will display a red border as shown in the image below).
   - Modify the `Dwell Radius` and `Dwell Time` values to customize the size of the gaze circle and the dwell time required for gaze to trigger a mouse action.
   - Click on `Mouse Control` and embark on your journey into the realm of gaze contingency.
   - Right-click anywhere in the window or on any of the tags to show or hide the settings window.

<img src="../media/alpha-lab/Settings-gaze-controlled-cursor-demo.png"/>

## What's next?

The packages we created contain code that you can build on to fashion your own custom implementations, opening up
possibilities for navigation, typing on a virtual keyboard, and much more.

Dig in and hack away. The potential is boundless. Let us know what you build!
