
Commit

Merge pull request #601 from pupil-labs/phone_alphalabs
alpha lab content: neon with phone screens
N-M-T committed May 30, 2023
2 parents f22cb1f + 7258daf commit e25dda0
Showing 9 changed files with 153 additions and 7 deletions.
4 changes: 4 additions & 0 deletions src/.vuepress/config.js
@@ -413,6 +413,10 @@ module.exports = {
title: "Generate scanpaths with Reference Image Mapper",
path: "scanpath-rim",
},
{
title: "Uncover gaze behaviour on phone screens with Neon",
path: "phone-screens",
},
],
},
sidebarDepth: 1,
6 changes: 6 additions & 0 deletions src/alpha-lab/README.md
@@ -118,6 +118,12 @@ export default {
to: "/alpha-lab/nerfs",
img: "nerf.png",
},
{
title: "Neon and mobile apps!",
text: "Evaluating Neon's accuracy on phone screens.",
to: "/alpha-lab/phone-screens",
img: "phone.png",
},
],
banner: this.loadRandomImage(),
};
14 changes: 7 additions & 7 deletions src/alpha-lab/map-your-gaze-to-a-2d-screen.md
@@ -9,15 +9,15 @@ tags: [Pupil Invisible, Neon, Cloud]
<TagLinks />
<Youtube src="OXIUjIzCplc"/>

In this guide, we will show you how to map and visualise gaze onto a screen with dynamic content, e.g. a video, web browsing or any other content of your choice, using the [Reference Image Mapper](https://docs.pupil-labs.com/enrichments/reference-image-mapper/) enrichment and a few clicks.
In this guide, we will show you how to map and visualise gaze onto a screen with dynamic content, e.g. a video, web browsing or any other content of your choice, using the [Reference Image Mapper](/enrichments/reference-image-mapper) enrichment and a few clicks.

::: tip
**Note:** This tutorial requires some technical knowledge, but don't worry. We made it almost click and run for you! You can learn as much or as little as you like.
:::

## What you'll need

Before continuing, ensure you are familiar with the [Reference Image Mapper](https://docs.pupil-labs.com/enrichments/reference-image-mapper/) enrichment. Check out [this explainer video](https://www.youtube.com/watch?v=ygqzQEzUIS4&t=56s) for reference.
Before continuing, ensure you are familiar with the [Reference Image Mapper](/enrichments/reference-image-mapper) enrichment. Check out [this explainer video](https://www.youtube.com/watch?v=ygqzQEzUIS4&t=56s) for reference.

We recommend you run the enrichment, e.g. with a short recording of your desktop + monitor/screen to ensure it's working okay. Once satisfied, you can use the same reference image + scanning recording for your dynamic screen content.

@@ -36,25 +36,25 @@ Let's assume you have everything ready to go – your participant is sat in front of the screen

So that we can capture your participant's visual interactions with the screen content, we will need to make sure that both the _eye tracking_ **and** _screen recordings_ happen at the same time.

Importantly, both sources (eye tracking and screen recording) record individually. As such, you'll need what we call an [event annotation](https://docs.pupil-labs.com/invisible/basic-concepts/events/) to synchronise them later.
Importantly, both sources (eye tracking and screen recording) record individually. As such, you'll need what we call an [event annotation](/invisible/basic-concepts/events/) to synchronise them later.

The [event annotation](https://docs.pupil-labs.com/invisible/basic-concepts/events/) should be used to indicate the beginning of the _screen content recording_ in the _eye tracking recording_, and be named `start.video`.
Check [here](https://docs.pupil-labs.com/invisible/basic-concepts/events/) how you can create these events in the Cloud.
The [event annotation](/invisible/basic-concepts/events/) should be used to indicate the beginning of the _screen content recording_ in the _eye tracking recording_, and be named `start.video`.
Check [here](/invisible/basic-concepts/events/) how you can create these events in the Cloud.

::: tip
**Tip:**
When you initiate your recordings, you'll need to know when the screen recording started, relative to your eye tracking recording. Thus, start your eye tracker recording first, and make sure that the eye tracker scene camera faces the OBS program on the screen. Then, start the screen recording.
<br>
<br>
By looking at the screen when you press the button, you'll have a visual reference to create the [event annotation](https://docs.pupil-labs.com/invisible/basic-concepts/events/) later in Cloud.
By looking at the screen when you press the button, you'll have a visual reference to create the [event annotation](/invisible/basic-concepts/events/) later in Cloud.
<br>
<br>
**Recap**: Eye tracking **first**; screen recording **second**
:::

## Once you have everything recorded

- Create a new [Reference Image Mapper](https://docs.pupil-labs.com/enrichments/reference-image-mapper/) enrichment, or add your new eye tracking recordings to an existing enrichment. Run the enrichment, and download the results by right-clicking the enrichment in Cloud once it's computed (see the screenshot below).
- Create a new [Reference Image Mapper](/enrichments/reference-image-mapper) enrichment, or add your new eye tracking recordings to an existing enrichment. Run the enrichment, and download the results by right-clicking the enrichment in Cloud once it's computed (see the screenshot below).

<div class="pb-4" style="display:flex;justify-content:center;">
<v-img
136 changes: 136 additions & 0 deletions src/alpha-lab/phone-neon.md
@@ -0,0 +1,136 @@
---
title: Uncover gaze behaviour on phone screens with Neon
description: "Evaluate Neon's accuracy on phone screens"
permalink: /alpha-lab/phone-screens
tags: [Neon, Cloud]
---
# Uncover gaze behaviour on phone screens with Neon


<TagLinks />
<Youtube src="gp5O1uskDME"/>


Have you ever wondered what your eyes focus on when scrolling through your favourite app?

In this article, we test whether Neon, our latest eye tracker, can accurately capture and characterise viewing behaviour
as users gaze at small icons in mobile applications. To achieve this, we used some of Alpha Lab's existing tutorials, covering
how to generate scanpaths, define areas of interest (AOIs) and calculate outcome metrics, and map gaze onto dynamic content.

Shall we start?

## What you'll need
Below you can find the tools we used for this project. Using these, you can replicate the content of this article with
your own applications.

### Cloud enrichment
- [Reference Image Mapper](/enrichments/reference-image-mapper/)

### Alpha Lab tutorials
- [How to generate scanpaths](/alpha-lab/scanpath-rim/)
- [How to define areas of interest (AOIs) and calculate basic metrics](/alpha-lab/gaze-metrics-in-aois/)
- [How to map and visualise gaze onto dynamic screen content](/alpha-lab/map-your-gaze-to-a-2d-screen/)

### How we used them
We first used the Reference Image Mapper enrichment to build a 3D model of a phone positioned on a desk and to map gaze onto a 2D reference image of the phone and desk. We then followed the Alpha Lab tutorials to process the exported results, generate advanced visualisations, and compute outcome metrics.

:::tip
:bulb:
When preparing the Reference Image Mapper Enrichment, make sure your phone is stable on a phone mount or stand. The
scanning video needed for this tool requires relatively static features in the environment. If there is a lot of movement
or the objects change in appearance or shape, the mapping can fail. More on this [in the docs](/enrichments/reference-image-mapper/#setup)!
:::
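
If you prefer to follow along programmatically, below is a minimal sketch of how the exported results could be loaded before feeding them into the tutorials above. It assumes a typical Reference Image Mapper download (`gaze.csv`, `fixations.csv`) with the column names shown in the code; double-check them against your own export.

```python
# Minimal sketch: load a Reference Image Mapper export with pandas and keep
# only the samples that were successfully mapped onto the reference image.
# File and column names follow a typical Pupil Cloud export; verify yours.
from pathlib import Path

import pandas as pd

export_dir = Path("rim_export")  # hypothetical folder with the unzipped enrichment download

gaze = pd.read_csv(export_dir / "gaze.csv")
fixations = pd.read_csv(export_dir / "fixations.csv")

# The "detected in reference image" columns flag samples that landed on the phone image.
mapped_gaze = gaze[gaze["gaze detected in reference image"].astype(str).str.lower() == "true"]
mapped_fixations = fixations[
    fixations["fixation detected in reference image"].astype(str).str.lower() == "true"
]

print(f"{len(mapped_gaze)} of {len(gaze)} gaze samples mapped onto the reference image")
print(f"{len(mapped_fixations)} fixations, mean duration "
      f"{mapped_fixations['duration [ms]'].mean():.0f} ms")
```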

## Gaze behaviour on mobile apps: Insights from Neon
### Heatmaps and scanpaths
What catches your attention, and how do you visually navigate the interface of a mobile app?

Two visualisations that help to illustrate these patterns are heatmaps and scanpaths (left and right panel below). Heatmaps show the areas of the app that receive the most attention, with warmer colours indicating more fixations, and can be generated natively in Pupil Cloud. Scanpaths, meanwhile, trace the eyes' path, showing the sequence of fixations made during visual exploration. The circle size in the scanpath below reflects fixation duration: the bigger the circle, the longer the user fixated on that area of the screen. Use the scanpath tutorial linked above to generate this visualisation.

<div class="mcontainer">
<div class="col-mcontainer">
<v-img class="rounded" :src="require(`../media/alpha-lab/1.phone-heatmap.jpeg`)" title="Heatmap over a phone screen" alt="Heatmap over a phone screen" cover/>
</div>
<div class="col-mcontainer">
<v-img class="rounded" :src="require(`../media/alpha-lab/2.phone-nadia_scanpath.jpeg`)" title="Scanpath over a phone screen" alt="Scanpath over a phone screen" cover/>
</div>
</div>
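
For reference, here is a rough sketch of how a basic scanpath like the one above could be drawn from the mapped fixations, reusing the `mapped_fixations` dataframe from the earlier snippet. The scanpath tutorial produces a more polished version; treat the column names and the reference image file name as assumptions.

```python
# Rough scanpath sketch over the reference image, assuming the
# `mapped_fixations` dataframe from the previous snippet.
import matplotlib.image as mpimg
import matplotlib.pyplot as plt

reference = mpimg.imread("reference_image.jpeg")  # hypothetical file name of your reference image

fix = mapped_fixations.sort_values("start timestamp [ns]")
x = fix["fixation x [px]"]
y = fix["fixation y [px]"]

fig, ax = plt.subplots(figsize=(5, 9))
ax.imshow(reference)
ax.plot(x, y, color="white", linewidth=1, alpha=0.7)  # connect fixations in temporal order
ax.scatter(x, y, s=fix["duration [ms]"], alpha=0.6)   # circle size scales with fixation duration
ax.axis("off")
fig.savefig("scanpath.png", dpi=150, bbox_inches="tight")
```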

### Calculation of gaze metrics on AOIs

Analysing eye tracking data can provide valuable insights into user behaviour, but visualisations like heatmaps and
scanpaths alone may not reveal the full story. This is why we also analysed our data quantitatively, calculating gaze
metrics such as [dwell time](/alpha-lab/gaze-metrics-in-aois/#dwell-time) and
[time to first contact](/alpha-lab/gaze-metrics-in-aois/#time-to-first-contact). These metrics offer tangible,
quantitative outcomes about the salience of each AOI: a longer dwell time implies a longer total fixation on a specific AOI
and can be considered a proxy for attentional allocation. Conversely, the shorter the time to first contact, the faster
that AOI captured the user's attention, pointing to its increased salience. Follow along with the AOI tutorial
for these calculations and charts!

<div class="pb-4" style="display:flex;justify-content:center;">
<v-img class="rounded" :src="require(`../media/alpha-lab/3.phone-dwell-time.png`)" title="Graph showing dwell time on defined AOIs over the phone screen" alt="Graph showing dwell time on defined AOIs over the phone screen" cover/>
</div>

<div class="pb-4" style="display:flex;justify-content:center;">
<v-img class="rounded" :src="require(`../media/alpha-lab/4.phone-first-contact.png`)" title="Graph showing time to first contact on defined AOIs over the phone screen" alt="Graph showing time to first contact on defined AOIs over the phone screen" cover/>
</div>
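
As a starting point, the sketch below shows how dwell time and time to first contact could be computed for rectangular AOIs defined in reference-image pixel coordinates. The AOI rectangles are invented examples, and it again builds on the `mapped_fixations` dataframe from the first snippet; the AOI tutorial covers this in full.

```python
# Sketch: dwell time and time to first contact per AOI, assuming axis-aligned
# rectangles in reference-image pixel coordinates and the `mapped_fixations`
# dataframe from the first snippet. Replace the rectangles with your own AOIs.
import pandas as pd

aois = {
    "header":     (0, 0, 1080, 250),     # (x_min, y_min, x_max, y_max) in pixels
    "feed":       (0, 250, 1080, 1650),
    "navigation": (0, 1650, 1080, 1920),
}

t0 = mapped_fixations["start timestamp [ns]"].min()  # first mapped fixation as time zero

rows = []
for name, (x0, y0, x1, y1) in aois.items():
    inside = mapped_fixations[
        mapped_fixations["fixation x [px]"].between(x0, x1)
        & mapped_fixations["fixation y [px]"].between(y0, y1)
    ]
    first_contact_s = None
    if not inside.empty:
        first_contact_s = (inside["start timestamp [ns]"].min() - t0) / 1e9
    rows.append({
        "AOI": name,
        "dwell time [ms]": inside["duration [ms]"].sum(),
        "time to first contact [s]": first_contact_s,
    })

print(pd.DataFrame(rows))
```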


### Map your gaze onto dynamic phone screen content
So far, we've only scratched the surface by examining static images. Now, it's time to dive into the dynamic world of
our smartphones and explore gaze behaviour more naturally while scrolling. Use the dynamic screen mapping tutorial for this one!

Checking out the recording below, you can see that Neon accurately captures gaze behaviour and provides a nice high-level
overview of what the wearer was looking at.

Visualisations are great, but the real power of this tool is that it generates a CSV file containing gaze data mapped
onto the screen, in 2D x, y coordinates. This offers many possibilities for further customisation and in-depth analysis.

<Youtube src="RKrf3YQjzao"/>
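
To give a flavour of that further analysis, here is a hypothetical example of working with such a CSV. The file name, column names, and screen resolution are all assumptions; adapt them to the output you actually get from the screen mapping tutorial.

```python
# Hypothetical example of analysing gaze mapped to 2D screen coordinates.
# File name, column names, and screen resolution are assumptions; adapt them
# to the CSV produced by your own run of the screen mapping tutorial.
import pandas as pd

screen_gaze = pd.read_csv("gaze_on_screen.csv")  # hypothetical output file
width, height = 1170, 2532                       # assumed phone resolution [px]

on_screen = screen_gaze[
    screen_gaze["x [px]"].between(0, width) & screen_gaze["y [px]"].between(0, height)
]
print(f"{len(on_screen) / len(screen_gaze):.1%} of gaze samples landed on the screen")

# Example follow-up: split the screen into vertical thirds to see where gaze concentrates.
bands = pd.cut(
    on_screen["y [px]"],
    bins=[0, height / 3, 2 * height / 3, height],
    labels=["top", "middle", "bottom"],
)
print(on_screen.groupby(bands, observed=True).size())
```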

## In the wild

That's all fine, but what does it look like to interact with phone screens when out and about? Consider this scenario:
Imagine searching for products on your mobile app while looking at the physical products on a supermarket shelf. We
brought Neon into the wild to assess its performance beyond controlled environments. See an example of real-world user
behaviour below!

<Youtube src="enkOC7_wf0U"/>

## Let's wrap it up!

This article has showcased the capability of Neon's calibration-free eye tracking to capture viewing behaviour during
mobile app interactions, both indoors and out and about. This tutorial and its outcomes are not limited to our specific
use case and could be particularly useful for other types of UI/UX research. By combining Neon with the techniques we've
highlighted here, you can gain invaluable insights into user engagement.

Curious about how Neon can fit into your work? Need assistance in implementing your own analysis pipelines? Reach out to
us [by email](mailto:info@pupil-labs.com) or visit our [Support Page](https://pupil-labs.com/products/support/)!

<style scoped>
.mcontainer{
display: flex;
flex-wrap: wrap;
}
.col-mcontainer{
flex: 50%;
padding: 0 4px;
}
@media screen and (min-width: 1025px) and (max-width: 1200px) {
.col-mcontainer{
flex: 100%;
}
}
@media screen and (max-width: 800px) {
.col-mcontainer{
flex: 50%;
}
}
@media screen and (max-width: 400px) {
.col-mcontainer{
flex: 100%;
}
}
</style>
Binary file added src/media/alpha-lab/1.phone-heatmap.jpeg
Binary file added src/media/alpha-lab/2.phone-nadia_scanpath.jpeg
Binary file added src/media/alpha-lab/3.phone-dwell-time.png
Binary file added src/media/alpha-lab/4.phone-first-contact.png
Binary file added src/media/alpha-lab/phone.png
