An Advanced Eye-Tracking Data Analysis Software for Researchers
SPEED is a Python-based project for processing, analyzing, and visualizing eye-tracking data. This project offers three main components: a user-friendly Desktop App, a powerful speed-analyzer Python package, and an experimental speed-analyzerR package for R.
SPEED is designed for cognitive and behavioral experiments, providing a modular workflow that saves time and computational resources. From data processing to generating plots and videos, SPEED offers a comprehensive suite of tools for eye-tracking analysis.
To get started, you can either download the pre-compiled desktop application or install the Python package for coding use.
Desktop App: an application with a graphical user interface (GUI) for a complete, visually-driven analysis workflow. Download the .zip file for your operating system (Windows or macOS) and run the SpeedApp executable.
speed-analyzer (Python Package for Developers): the core analysis engine of SPEED, now available as a reusable package. It’s designed for automation and integration into custom data pipelines.
You can install the package directly from the Python Package Index (PyPI) using pip:
pip install speed-analyzer==5.3.8.3
The package exposes a main function, run_full_analysis, that takes paths and options as arguments. See the example_usage.py file for a complete demonstration.
Here is a basic snippet:
import pandas as pd
from speed_analyzer import run_full_analysis

# Define input and output paths
raw_path = "./data/raw"
unenriched_path = "./data/unenriched"
output_path = "./analysis_results"
subject_name = "participant_01"

# Run the full analysis with the default options
run_full_analysis(raw_data_path=raw_path, unenriched_data_path=unenriched_path,
                  output_path=output_path, subject_name=subject_name)
The speed-analyzer package allows you to define Areas of Interest (AOIs) on-the-fly, directly in your code. This is the recommended workflow when you do not have a pre-existing enriched_data_path. The system is designed to handle a list of multiple, mixed-type AOIs in a single analysis run.
When you provide the defined_aois parameter, the software will automatically generate new enriched data files (gaze_enriched.csv, fixations_enriched.csv) where each gaze point and fixation is mapped to the name of the AOI it falls into.
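As a quick check after a run, you can load one of these generated files and inspect the AOI mapping. This is only a minimal sketch: the output location and the name of the AOI label column are assumptions here, so print the header first to discover the actual schema.

import pandas as pd

# Minimal sketch: inspect the enriched fixations produced by the analysis.
# The file location and the AOI column name are assumptions; check your
# output folder and the printed header for the actual schema.
fixations = pd.read_csv("./analysis_results/fixations_enriched.csv")
print(fixations.columns.tolist())  # discover the actual column names
# Once you know the label column, you can count fixations per AOI, e.g.:
# print(fixations["aoi_name"].value_counts())  # 'aoi_name' is hypothetical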
You define AOIs by creating a list of Python dictionaries. Each dictionary must have three keys: name, type, and data.
my_aois = [
    { "name": "AOI_Name_1", "type": "...", "data": ... },
    { "name": "AOI_Name_2", "type": "...", "data": ... },
]
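Because each dictionary must follow this schema, a small sanity check can catch mistakes before a long analysis run. The helper below is purely illustrative and not part of the package:

# Illustrative helper (not part of speed-analyzer): validate an AOI list
# against the schema described above before starting an analysis.
VALID_AOI_TYPES = {"static", "dynamic_auto", "dynamic_manual"}

def check_aois(aois):
    for aoi in aois:
        missing = {"name", "type", "data"} - set(aoi)
        if missing:
            raise ValueError(f"AOI {aoi.get('name', '<unnamed>')} is missing keys: {missing}")
        if aoi["type"] not in VALID_AOI_TYPES:
            raise ValueError(f"Unknown AOI type: {aoi['type']}")

check_aois([{"name": "Control_Panel", "type": "static",
             "data": {'x1': 100, 'y1': 150, 'x2': 800, 'y2': 600}}])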
Static AOIs (type "static"): use this for a fixed rectangular region that does not move throughout the video. The data is a dictionary containing the pixel coordinates of the rectangle’s corners.
static_aoi = {
    "name": "Control_Panel",
    "type": "static",
    "data": {'x1': 100, 'y1': 150, 'x2': 800, 'y2': 600}
}
Automatic dynamic AOIs (type "dynamic_auto"): use this to have an AOI automatically follow an object detected by YOLO. This requires setting run_yolo=True. The data is the integer track_id of the object you want to follow, which you would typically obtain from a preliminary YOLO analysis.
object_id_to_track = 1
dynamic_auto_aoi = {
    "name": "Tracked_Ball",
    "type": "dynamic_auto",
    "data": object_id_to_track
}
Manual dynamic AOIs (type "dynamic_manual"): use this to define a custom path for a moving and resizing AOI. You set the AOI’s position and size at specific frames (keyframes), and the software interpolates its position for all frames in between (see the sketch after the example below). The data is a dictionary whose keys are frame indices and whose values are tuples of coordinates (x1, y1, x2, y2).
manual_keyframes_aoi = {
    "name": "Animated_Focus_Area",
    "type": "dynamic_manual",
    "data": {
        0: (50, 50, 250, 250),       # Position at the start (frame 0)
        1000: (400, 300, 600, 500),  # Position at frame 1000
        2000: (50, 50, 250, 250)     # Return to the start position at frame 2000
    }
}
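To make the keyframe behavior concrete, here is a minimal sketch of how such interpolation can work, assuming simple linear interpolation between the two keyframes surrounding a frame (the package’s exact scheme may differ):

# Sketch only: linear interpolation between surrounding keyframes.
def interpolate_aoi(keyframes, frame):
    frames = sorted(keyframes)
    if frame <= frames[0]:
        return keyframes[frames[0]]
    if frame >= frames[-1]:
        return keyframes[frames[-1]]
    for f0, f1 in zip(frames, frames[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)
            return tuple(a + t * (b - a)
                         for a, b in zip(keyframes[f0], keyframes[f1]))

# Halfway between frames 0 and 1000 -> (225.0, 175.0, 425.0, 375.0)
print(interpolate_aoi({0: (50, 50, 250, 250), 1000: (400, 300, 600, 500)}, 500))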
You can combine any number of AOIs of any type into a single list and pass it to the analysis function.
import pandas as pd
from speed_analyzer import run_full_analysis
# 1. Define paths
raw_path = "./data/raw"
unenriched_path = "./data/unenriched"
output_path = "./analysis_results_multi_aoi"
subject_name = "participant_02"
# 2. Define multiple, mixed-type AOIs
my_aois = [
    { "name": "Left_Monitor", "type": "static", "data": {'x1': 0, 'y1': 0, 'x2': 960, 'y2': 1080}},
    { "name": "Right_Monitor", "type": "static", "data": {'x1': 961, 'y1': 0, 'x2': 1920, 'y2': 1080}},
    { "name": "Moving_Target", "type": "dynamic_auto", "data": 3 }
]
# 3. Run the analysis
run_full_analysis(
    raw_data_path=raw_path,
    unenriched_data_path=unenriched_path,
    output_path=output_path,
    subject_name=subject_name,
    run_yolo=True,        # Required for the 'dynamic_auto' AOI
    defined_aois=my_aois  # Pass the complete list of AOIs
)
The real-time window provides a suite of interactive tools.
For more advanced use cases and automation, a command-line interface is also available.
# Example: Run real-time analysis with a specific YOLO model and define two static AOIs
python realtime_cli.py --model yolov8n.pt --record --aoi "Screen,100,100,800,600" --aoi "Panel,850,300,1200,500"
You can also use a simulated data stream for testing without a physical device:
# Run with a mock device for testing purposes
python realtime_cli.py --use-mock
To ensure maximum scientific reproducibility and to eliminate any issues with installation or dependencies, we provide a pre-configured Docker image that contains the exact environment needed to run the speed-analyzer package.
You must have Docker Desktop installed on your computer. You can download it for free from the official Docker website.
Pull the Image:
docker pull ghcr.io/danielelozzi/speed:latest
Run the Analysis:
To launch an analysis, you need to use the docker run command. The most important part is to “mount” your local folders (containing the data and where to save the results) inside the container.
Here is a complete example. Replace the /path/to/...
placeholders with the actual absolute paths on your computer.
docker run --rm \
  -v "/path/to/your/RAW/folder:/data/raw" \
  -v "/path/to/your/un-enriched/folder:/data/unenriched" \
  -v "/path/to/your/output/folder:/output" \
  ghcr.io/danielelozzi/speed:latest \
  python -c "from speed_analyzer import run_full_analysis; run_full_analysis(raw_data_path='/data/raw', unenriched_data_path='/data/unenriched', output_path='/output', subject_name='docker_test')"
Command Explanation:
- docker run --rm: runs the container and automatically removes it when finished.
- -v "/local/path:/container/path": the -v (volume) option creates a bridge between a folder on your computer and a folder inside the container. We are mapping your data folders into /data/ and your output folder into /output inside the container.
- ghcr.io/danielelozzi/speed:latest: the name of the image to use.
- python -c "...": the command executed inside the container. In this case, it launches a Python one-liner that imports and runs the run_full_analysis function, using the paths internal to the container (/data/, /output/).

This approach guarantees that your analysis is always executed in the same controlled environment, regardless of the host computer.
SPEED 5.3.8.3 operates on a two-step workflow designed to save time and computational resources.
This is the main data processing stage. You run this step only once per participant for a given set of events. The software loads the events.csv file into the GUI, allowing you to select which events to analyze, and creates a processed_data directory containing intermediate files. Once this step is complete, you do not need to run it again unless you want to analyze a different combination of events.
After the core analysis is complete, you can use the dedicated tabs in the GUI to generate as many plots and videos as you need, with any combination of settings, without re-processing the raw data.
The “Generate Plots” tab allows you to create a wide range of visualizations for each event segment, organized into several plot categories. All plots are saved in PDF format, producing high-quality figures suitable for publication.
Simply select the desired plot types in the GUI and click “GENERATE SELECTED PLOTS”. The software will use the pre-processed data to generate the figures for all selected events.
The “Generate Videos” tab allows you to create highly customized videos with synchronized data overlays. One mode uses the enriched gaze data and synchronizes different screen recording clips to specific events; between events, a gray screen is shown. A dedicated editor allows you to map video files to events. To generate a video, select the desired options in this tab.
New tools are available for more in-depth, interactive analysis. One of them produces an nsi_results.csv file containing the NSI value for each time window. This feature provides a powerful, user-driven way to analyze visual attention patterns during specific moments of a recording.
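A minimal sketch for working with that output, assuming the file sits in your analysis output folder (the column names are not documented here, so print the header first):

import pandas as pd

# Minimal sketch: load the NSI results for further analysis. The file
# location is an assumption; adjust it to your own output folder.
nsi = pd.read_csv("./analysis_results/nsi_results.csv")
print(nsi.columns.tolist())  # discover the actual schema
print(nsi.head())            # NSI value per time window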
SPEED integrates the powerful YOLO (You Only Look Once) object detection model to add a layer of semantic understanding to the eye-tracking data. When this option is enabled during the “Core Analysis” step, the software analyzes the video to detect and track objects frame by frame.
- Object tracking: YOLO assigns a persistent track_id to each detected object throughout its appearance in the video. The results are saved in a cache file (yolo_detections_cache.csv) to avoid re-processing on subsequent runs.
- Per-instance statistics: statistics are reported for each tracked object instance (e.g., person_1, car_3), including the total number of fixations it received and the average pupil diameter when looking at it.
- Per-class statistics: results can also be aggregated by object class (e.g., person, car).
- Dynamic AOIs: any track_id generated by YOLO can be used to define a “Dynamic AOI”, where the Area of Interest automatically follows a specific object (see the sketch below).

This feature transforms raw gaze coordinates into meaningful interactions with the environment, opening up new possibilities for analyzing human behavior in complex scenes.
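As a sketch of that last point, you can run a preliminary pass with run_yolo=True, pick a track_id from the cache file, and re-run the analysis with a dynamic_auto AOI. The cache location and its track_id column name are assumptions here; inspect the file produced by your own run.

import pandas as pd
from speed_analyzer import run_full_analysis

# Assumption: the cache was written to the previous output folder and has
# a 'track_id' column; print the header to verify before relying on it.
cache = pd.read_csv("./analysis_results/yolo_detections_cache.csv")
print(cache.columns.tolist())
chosen_id = int(cache["track_id"].iloc[0])  # hypothetical column name

run_full_analysis(
    raw_data_path="./data/raw",
    unenriched_data_path="./data/unenriched",
    output_path="./analysis_results_dynamic",
    subject_name="participant_01",
    run_yolo=True,  # required for 'dynamic_auto' AOIs
    defined_aois=[{"name": "Followed_Object", "type": "dynamic_auto",
                   "data": chosen_id}],
)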
SPEED now supports a powerful multi-stage analysis workflow. You can select a re-identification (Re-ID) model (e.g., yolov8n.pt from the “Re-ID Model” dropdown) and a tracker configuration. This enhances the tracking algorithm (such as BoT-SORT) by using appearance features to re-identify an object that has been occluded or has left and re-entered the scene. It helps ensure that person_1 who disappears and reappears is still identified as person_1, rather than being assigned a new ID like person_5.
How It Works in the GUI:
- Tracking results are stored in the yolo_detections_cache.csv file, so subsequent runs reuse them.
- SPEED ships with a default_yaml.yaml tracker configuration, which is set up for Re-ID. You can also provide your own custom tracker configuration file.
- For classification, select a classification model (*-cls.pt) from the dropdown; the results are saved to yolo_classification_results.csv.

Classification vs. Re-identification: classification describes what an object is, while re-identification keeps a consistent track_id for the same object over time. These tools can be used independently or together to build a rich, multi-layered understanding of the scene content.
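A minimal sketch for inspecting the classification output, with the same caveat as above: the file location and column names are assumptions, so check the header first.

import pandas as pd

# Assumption: the classification results live in the analysis output folder;
# print the header to discover the actual schema before aggregating.
cls_results = pd.read_csv("./analysis_results/yolo_classification_results.csv")
print(cls_results.columns.tolist())
print(cls_results.head())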
To run the project from source or contribute to development, you’ll need Python 3 and several libraries.
conda create --name speed
conda activate speed
conda install pip
pip install -r requirements.txt
# Navigate to the project root folder
cd SPEED
python -m desktop_app.GUI
Synthetic Data Generator (generate_synthetic_data.py): included in this project is a utility script to create a full set of dummy eye-tracking data. This is extremely useful for testing the SPEED software without needing Pupil Labs hardware or actual recordings.
Run the script from your terminal:
python generate_synthetic_data.py
The script will create a new folder named synthetic_data_output in the current directory.
This folder will contain all the necessary files (gaze.csv, fixations.csv, external.mp4, etc.), ready to be used as input for the GUI application or the speed-analyzer package.
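For example, a quick smoke test of the package against the synthetic data might look like the sketch below. Whether the same folder can serve as both the raw and un-enriched input is an assumption (the BIDS example later in this document uses a single folder for both); adjust the paths if your layout differs.

from speed_analyzer import run_full_analysis

# Sketch: smoke-test the pipeline on the synthetic recording. Using the
# same folder for raw and un-enriched data is an assumption here.
run_full_analysis(
    raw_data_path="./synthetic_data_output",
    unenriched_data_path="./synthetic_data_output",
    output_path="./synthetic_analysis",
    subject_name="synthetic_test",
)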
You can also generate a synthetic real-time stream for the GUI using:
python generate_synthetic_stream.py
or for LSL testing:
python lsl_stream_simulator.py
SPEED 5.3.8.3 introduces a new feature to convert processed eye-tracking data into a format compatible with the Brain Imaging Data Structure (BIDS), following the BEP020 for Eye Tracking guidelines. This facilitates data sharing and standardization for the research community.
The conversion requires three BIDS identifiers: a subject identifier (e.g., 01), a session identifier (e.g., 01), and a task name (e.g., reading, visualsearch).

Using the speed-analyzer package, this functionality is also available via the convert_to_bids function.
from pathlib import Path
from speed_analyzer import convert_to_bids
# Define input and output paths
unenriched_path = Path("./data/unenriched")
bids_output_path = Path("./bids_dataset")
# Define BIDS metadata
subject = "01"
session = "01"
task = "visualsearch"
# Perform the conversion
convert_to_bids(
    unenriched_dir=unenriched_path,
    output_bids_dir=bids_output_path,
    subject_id=subject,
    session_id=session,
    task_name=task
)
SPEED can also load and analyze eye-tracking datasets already structured according to the BIDS standard.
Select the root of the BIDS dataset (the directory containing the sub-... folders) and specify the Subject, Session, and Task identifiers you want to load. SPEED converts the corresponding BIDS files (_eyetrack.tsv.gz, _events.tsv, etc.) into a temporary folder in the “un-enriched” format that the software can analyze.

In the speed-analyzer package, the load_from_bids function converts a BIDS dataset and returns the path to a temporary “un-enriched” folder.
from pathlib import Path
from speed_analyzer import load_from_bids, run_full_analysis
# 1. Define the BIDS dataset path and metadata
bids_input_path = Path("./bids_dataset")
subject = "01"
session = "01"
task = "visualsearch"
# 2. Run the conversion to obtain an "un-enriched" folder
temp_unenriched_path = load_from_bids(
    bids_dir=bids_input_path,
    subject_id=subject,
    session_id=session,
    task_name=task
)
print(f"BIDS data ready for analysis in: {temp_unenriched_path}")
# 3. You can now use this folder for full analysis with SPEED
# (Note: A RAW folder is not needed in this case)
run_full_analysis(
    raw_data_path=str(temp_unenriched_path),  # Use the same folder for simplicity
    unenriched_data_path=str(temp_unenriched_path),
    output_path="./analysis_from_bids",
    subject_name=f"sub-{subject}_ses-{session}"
)
To enhance interoperability with medical imaging systems and workflows, SPEED now supports basic import and export of eye-tracking data using the DICOM standard.
Inspired by standards for storing time-series data, this feature encapsulates gaze coordinates, pupil diameter, and event markers into a single DICOM file using the Waveform IOD (Information Object Definition). This allows eye-tracking data to be archived and managed within Picture Archiving and Communication Systems (PACS).
The functionality is accessible through dedicated buttons in the graphical interface.
When exporting, the patient information you enter is stored in the PatientName tag of the DICOM file, and the data is saved as a .dcm file. When importing, you select a .dcm file containing the eye-tracking waveform data, and SPEED converts it back into the .csv formats required for analysis.

You can also access the DICOM conversion tools programmatically through the speed-analyzer package. Use the convert_to_dicom function to export your data.
from pathlib import Path
from speed_analyzer import convert_to_dicom
# 1. Define paths and patient information
unenriched_path = Path("./data/unenriched")
output_dicom_file = Path("./dicom_exports/subject01.dcm")
output_dicom_file.parent.mkdir(exist_ok=True)
patient_info = {
    "name": "Subject 01",
    "id": "SUB01"
}
# 2. Run the conversion
convert_to_dicom(
    unenriched_dir=unenriched_path,
    output_dicom_path=output_dicom_file,
    patient_info=patient_info
)
print(f"DICOM file successfully created at: {output_dicom_file}")
Use the load_from_dicom function to import a DICOM file for analysis. The function returns the path to a temporary “un-enriched” folder.
from pathlib import Path
from speed_analyzer import load_from_dicom, run_full_analysis
# 1. Define the path to the DICOM file
dicom_file_path = Path("./dicom_exports/subject01.dcm")
# 2. Load and convert the DICOM data
# This creates a temporary folder with the required CSV files
temp_unenriched_path = load_from_dicom(dicom_path=dicom_file_path)
print(f"DICOM data is ready for analysis in: {temp_unenriched_path}")
# 3. Use the temporary path to run a full analysis with SPEED
run_full_analysis(
    raw_data_path=str(temp_unenriched_path),  # For DICOM import, raw and unenriched can be the same
    unenriched_data_path=str(temp_unenriched_path),
    output_path="./analysis_from_dicom",
    subject_name="Subject_01_from_DICOM"
)
This tool is developed by the Cognitive and Behavioral Science Lab (LabSCoC), University of L’Aquila and Dr. Daniele Lozzi.
If you use this script in your research or work, please cite the following publications:
Please also cite the Pupil Labs publication, as requested on their website: https://docs.pupil-labs.com/neon/data-collection/publication-and-citation/
If you also use the YOLO-based Computer Vision features, please cite the following publication:
If you use the BIDS converter, please cite the BIDS specification for eye-tracking data:
If you use the DICOM converter, please cite the paper that inspired the DICOM implementation:
A list of features and improvements is planned for future versions of SPEED.
This code was written in Vibe Coding with Google Gemini 2.5 Pro.