Speed Analyzer Documentation
Welcome to the Speed Analyzer library documentation. This library is a comprehensive toolkit for processing, analyzing, visualizing, and converting eye-tracking data. It includes modules for real-time analysis, video generation with data overlays, object detection correlation, and conversion to standard scientific formats like BIDS and DICOM.
Modules
Below is an overview of the different modules available in this library. Each module is designed to handle a specific part of the eye-tracking data analysis workflow.
BIDS Converter
Handles the conversion of raw eye-tracking data to the Brain Imaging Data Structure (BIDS) format and vice-versa.
DICOM Converter
Manages the conversion of eye-tracking data into a DICOM Waveform object for integration into medical imaging workflows.
Device Converters
A high-level module for converting data from specific commercial eye-tracking devices (e.g., Tobii) into standard formats.
Real-time Analyzer
Connect to a Pupil Labs Neon device, stream data in real-time (including LSL), perform live multi-task object detection, and visualize data on the live video feed.
Event-based Analysis
Segments data based on events, calculates summary statistics, and generates various scientific plots on demand.
Video Generator
Creates custom videos by overlaying synchronized eye-tracking data and object detection results onto the original scene video.
YOLO Analyzer
Applies YOLO models to detect objects and correlates this information with eye-tracking data to generate statistics.
Data Viewer / Editor
A powerful GUI tool for visualizing BIDS/DICOM/Un-enriched data, adding/editing events, running multi-task YOLO analysis, and exporting videos with custom overlays.
Data Plotter
An interactive tool to plot time-series data (pupil, gaze, fixations, etc.) and calculate statistics on user-selected time ranges.
LSL Time Series Viewer
A real-time plotting tool that discovers and displays any LSL data stream on the network, ideal for monitoring live data.
NSI Calculator
A post-analysis interactive tool to calculate the Normalized Switching Index (NSI) within user-defined time windows.
BIDS Converter (bids_converter.py)
This module provides functions to convert eye-tracking data to and from the Brain Imaging Data Structure (BIDS) format. BIDS is a standard for organizing and describing neuroimaging and psychological data, which makes datasets easier to share and use.
convert_to_bids(...)
Converts raw eye-tracking data from a standard directory structure into the BIDS format. It creates the necessary folder hierarchy and generates BIDS-compliant `.tsv.gz` and `.json` files.
Parameters:
- `unenriched_dir` (Path): The path to the directory containing the raw eye-tracking CSV files.
- `output_bids_dir` (Path): The root directory where the BIDS dataset will be created.
- `subject_id` (str): The subject identifier (e.g., "01").
- `session_id` (str): The session identifier (e.g., "01").
- `task_name` (str): The name of the task performed during the recording.
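As a rough illustration of the layout a call to `convert_to_bids(...)` produces, the sketch below builds the expected output paths with `pathlib`. The `eyetrack` suffix and folder naming are assumptions for illustration only; the converter's actual entity names may differ.

```python
from pathlib import Path

def bids_eyetrack_paths(output_bids_dir: Path, subject_id: str,
                        session_id: str, task_name: str) -> dict:
    """Sketch of the BIDS paths a converter like convert_to_bids might create.

    The 'eyetrack' suffix and directory layout are illustrative assumptions,
    not the library's confirmed output.
    """
    base = output_bids_dir / f"sub-{subject_id}" / f"ses-{session_id}" / "eyetrack"
    stem = f"sub-{subject_id}_ses-{session_id}_task-{task_name}_eyetrack"
    return {
        "data": base / f"{stem}.tsv.gz",   # compressed time-series samples
        "sidecar": base / f"{stem}.json",  # metadata (sampling rate, units, ...)
    }
```

For example, `bids_eyetrack_paths(Path("bids"), "01", "01", "reading")` yields a data path under `bids/sub-01/ses-01/eyetrack/`.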
load_from_bids(...)
Loads an eye-tracking dataset from a BIDS-compliant directory and converts it back into the raw format expected by other modules in this library.
Parameters:
- `bids_dir` (Path): The root directory of the BIDS dataset.
- `subject_id` (str): The subject identifier to load.
- `session_id` (str): The session identifier to load.
- `task_name` (str): The task name to load.
Returns:
A `Path` object pointing to a temporary directory containing the converted, "un-enriched" data files.
DICOM Converter (dicom_converter.py)
This module handles the conversion of eye-tracking data into and out of the DICOM (Digital Imaging and Communications in Medicine) format. It uses the Waveform IOD to store time-series data like gaze position and pupil diameter within a standard DICOM file.
convert_to_dicom(...)
Reads raw eye-tracking data from CSV files and packages it into a single DICOM file.
Parameters:
- `unenriched_dir` (Path): The directory containing the raw CSV files.
- `output_dicom_path` (Path): The full path where the output `.dcm` file will be saved.
- `patient_info` (dict): A dictionary containing patient information, such as `name` and `id`.
load_from_dicom(...)
Extracts eye-tracking data from a DICOM file and reconstructs the original raw data CSV files in a temporary directory.
Parameters:
- `dicom_path` (Path): The path to the input DICOM file.
Returns:
A `Path` object pointing to a temporary directory containing the reconstructed data files.
Device Converters (device_converters.py)
This module is designed to act as a high-level interface for converting data from specific, proprietary eye-tracker formats (like those from Tobii) into standardized formats like BIDS.
convert_device_data(...)
Acts as a dispatcher or router function. It takes the name of a device and the desired output format, then calls the appropriate conversion function.
Parameters:
- `device_name` (str): The name of the source device (e.g., "tobii").
- `source_folder` (str): The path to the folder containing the raw data from the device.
- `output_folder` (str): The path where the converted data will be saved.
- `output_format` (str): The desired output format (e.g., "bids").
- `bids_info` (Optional[Dict]): Required dictionary for BIDS conversion.
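The dispatcher pattern described above can be sketched as follows. The registry contents and the helper name `_tobii_to_bids` are illustrative stand-ins, not the module's actual internals.

```python
from typing import Callable, Dict, Optional, Tuple

def _tobii_to_bids(source_folder: str, output_folder: str,
                   bids_info: Optional[Dict]) -> str:
    # Placeholder for the real Tobii-to-BIDS conversion logic.
    if bids_info is None:
        raise ValueError("BIDS conversion requires a bids_info dictionary")
    return f"converted {source_folder} -> {output_folder} (BIDS)"

# (device_name, output_format) -> conversion function
_CONVERTERS: Dict[Tuple[str, str], Callable] = {
    ("tobii", "bids"): _tobii_to_bids,
}

def convert_device_data(device_name: str, source_folder: str,
                        output_folder: str, output_format: str,
                        bids_info: Optional[Dict] = None) -> str:
    """Route the request to the converter registered for (device, format)."""
    try:
        converter = _CONVERTERS[(device_name.lower(), output_format.lower())]
    except KeyError:
        raise ValueError(f"No converter for {device_name!r} -> {output_format!r}")
    return converter(source_folder, output_folder, bids_info)
```

A registry keyed on `(device, format)` keeps the router open for extension: supporting a new device or target format is just one more dictionary entry.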
Real-time Analyzer (realtime_analyzer.py)
This module provides the `RealtimeNeonAnalyzer` class, a powerful tool for connecting to a Pupil Labs Neon eye-tracking device, streaming data in real time, and performing live analysis and visualization.
Class: RealtimeNeonAnalyzer
Manages the entire real-time analysis workflow, from device connection to data visualization and recording.
Key Methods:
- `__init__(...)`: Initializes the analyzer, loading the specified YOLO model.
- `connect(...)`: Searches for and connects to a Neon device on the local network.
- `start_recording(...)`: Starts recording raw video streams and eye-tracking data.
- `stop_recording()`: Stops the recording and finalizes all data files.
- `add_event(...)`: Adds a timestamped event marker to the recording.
- `add_static_aoi(...)`: Defines a static Area of Interest for live analysis.
- `process_and_visualize(...)`: Fetches the latest data, runs YOLO, and draws all enabled overlays onto the scene video frame.
- `close()`: Safely disconnects from the device.
New Features:
- **Audio Recording**: Can now record audio alongside video streams.
- **LSL Streaming**: Can stream gaze, events, and video data over the network via Lab Streaming Layer.
- **Interactive Filtering**: A panel to dynamically show/hide specific object classes or individual tracked objects from the YOLO overlay.
- **Advanced Visualizations**: Includes a dynamic heatmap, gaze path with fading trail, and a dedicated blink detection plot.
Event-based Analysis (speed_script_events.py)
A powerful analysis script to segment eye-tracking data based on recorded events, compute a comprehensive set of metrics for each segment, and generate a variety of scientific plots on demand.
run_analysis(...)
The main function for the analysis stage. It loads data, segments it by event, calculates summary statistics, and saves the processed data for later use.
Functionality:
- Loads all relevant CSV files (events, gaze, fixations, etc.).
- Segments data based on event timestamps.
- Calculates features like fixation count, duration, pupil diameter, and gaze speed for each segment.
- Saves processed data per segment to `.pkl` files for efficient plotting.
- Saves an aggregated summary of all features to a main CSV file.
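The segmentation step above can be sketched in miniature, assuming gaze samples as `(timestamp, x, y)` tuples and events as `(timestamp, name)` pairs; the real script operates on full CSV DataFrames, so this is an illustration of the logic only.

```python
from typing import Dict, List, Tuple

Sample = Tuple[float, float, float]  # (timestamp, x, y)

def segment_by_events(samples: List[Sample],
                      events: List[Tuple[float, str]]) -> Dict[str, List[Sample]]:
    """Assign each sample to the segment opened by the most recent event."""
    events = sorted(events)
    segments: Dict[str, List[Sample]] = {name: [] for _, name in events}
    for ts, x, y in samples:
        current = None
        for ev_ts, name in events:
            if ev_ts <= ts:
                current = name  # latest event at or before this sample
            else:
                break
        if current is not None:
            segments[current].append((ts, x, y))
    return segments
```

Samples recorded before the first event fall into no segment, mirroring the usual convention that analysis windows start at an event marker.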
generate_plots_on_demand(...)
Generates visualizations based on the pre-processed data. It uses multiprocessing to efficiently create plots for each segment.
Generated Plots:
- Histograms of fixation, blink, and saccade durations.
- Scanpath plots and heatmaps for gaze and fixations.
- Time-series plots for pupillometry and gaze fragmentation.
- Spectral analysis (periodogram and spectrogram) for pupil data.
Video Generator (video_generator.py)
This module is dedicated to creating high-quality videos that visualize eye-tracking data by overlaying it onto the original scene recording. It's highly customizable, allowing the user to select which data streams and visualizations to include.
create_custom_video(...)
This is the main function that orchestrates the entire video generation process. It reads the source video and eye-tracking data, iterates through each frame, draws the selected overlays, and encodes the final video file.
Parameters:
- `data_dir` (Path): Directory containing synchronized data and the scene video.
- `output_dir` (Path): Directory where the final video will be saved.
- `subj_name` (str): The subject identifier.
- `options` (dict): A dictionary of boolean flags to control overlays (e.g., `overlay_gaze`, `overlay_pupil_plot`, `overlay_dynamic_heatmap`).
- `un_enriched_mode` (bool): If False, looks for enriched data to add more context.
- `selected_events` (list, optional): A list of event names to trim the video to.
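Drawing overlays frame by frame requires matching each video frame to the gaze sample nearest in time. A sketch of that lookup with `bisect` (the function name is illustrative, not the module's API):

```python
from bisect import bisect_left
from typing import List

def nearest_sample(frame_ts: float, sample_ts: List[float]) -> int:
    """Index of the gaze sample closest in time to a video frame.

    sample_ts must be sorted ascending, as gaze timestamps normally are.
    """
    i = bisect_left(sample_ts, frame_ts)
    if i == 0:
        return 0
    if i == len(sample_ts):
        return len(sample_ts) - 1
    # choose whichever neighbor is closer in time
    return i if sample_ts[i] - frame_ts < frame_ts - sample_ts[i - 1] else i - 1
```

Binary search keeps the per-frame cost logarithmic, which matters when a 30 fps video is matched against gaze data sampled an order of magnitude faster.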
YOLO Analyzer (yolo_analyzer.py)
This module integrates YOLO (You Only Look Once) object detection models into the eye-tracking analysis workflow. It runs a YOLO model on the scene video, tracks objects, and correlates this with eye movements to understand what was being looked at.
run_yolo_analysis(...)
The main function of the module. It orchestrates the entire process from loading data to running the model, correlating results, and saving statistical outputs.
Parameters:
- `data_dir` (Path): Path to the directory containing eye-tracking data and the scene video.
- `output_dir` (Path): Path where analysis results will be saved.
- `yolo_models` (Dict[str, str]): A dictionary mapping a task (e.g., 'detect', 'segment') to a model file name (e.g., 'yolov8n.pt').
- `custom_classes` (Optional[List[str]]): A list of class names for zero-shot detection.
- `yolo_detections_df` (Optional[pd.DataFrame]): A pre-filtered DataFrame of YOLO detections to use, bypassing video analysis.
Functionality:
- **Multi-Task Analysis**: Can run detection, segmentation, and pose estimation models simultaneously on the video.
- Runs YOLO tracking on the video or loads from a cache file.
- Correlates fixations with detected object bounding boxes (or segmentation masks).
- Calculates statistics like fixation count per object and average pupil size.
- Saves all results to CSV files.
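At its core, correlating a fixation with a detection is a point-in-box test. A minimal sketch, assuming boxes in `(x1, y1, x2, y2)` pixel coordinates (the real module may also test against segmentation masks):

```python
from typing import List, Optional, Tuple

Box = Tuple[str, float, float, float, float]  # (label, x1, y1, x2, y2)

def fixated_object(fx: float, fy: float, boxes: List[Box]) -> Optional[str]:
    """Return the label of the first detected box containing the fixation point."""
    for label, x1, y1, x2, y2 in boxes:
        if x1 <= fx <= x2 and y1 <= fy <= y2:
            return label
    return None
```

When boxes overlap, this naive version returns the first match; a fuller implementation might prefer the smallest box or the one with the highest detection confidence.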
Data Viewer (data_viewer.py)
This module provides a comprehensive graphical interface for advanced, interactive exploration and analysis of a recorded session. It can load data from BIDS, DICOM, or un-enriched folders and combines event management with powerful, on-the-fly computer vision analysis.
Class: DataViewerWindow
A Toplevel Tkinter window that allows users to visualize data, manage events, and run complex analyses directly on a video.
Key Features:
- Flexible Data Loading: Load data from BIDS, DICOM, un-enriched folders, or even a single video file.
- Interactive Timeline: Play video with synchronized audio. Add, remove, and drag-and-drop event markers directly on the timeline.
- Multi-Task YOLO Analysis: Select and run different YOLO models for object detection, segmentation, and pose estimation simultaneously. The results from all tasks are merged and visualized.
- Interactive Filtering: Use a tabbed interface to view detected objects, segmented areas, and poses. Users can toggle the visibility of entire classes or individual tracked instances.
- Rich Overlays: Toggle visualizations for gaze point, gaze path, dynamic heatmaps, pupillometry plots, and AOIs.
- Video Export: Export the current view as a new video file, with all selected overlays and optional audio baked in.
- Data Persistence: When saving, the editor returns both the modified event data and a DataFrame containing only the YOLO detections that were visible (not filtered out) by the user.
Data Plotter (data_plotter.py)
This module provides an interactive window for visualizing all time-series data from an analysis on a single, scrollable, and zoomable chart. It's an excellent tool for detailed data exploration and quick statistical analysis.
Class: DataPlotterWindow
A Toplevel Tkinter window that loads data from an output folder (or an un-enriched folder) and plots it on synchronized axes.
Key Features:
- Synchronized Plots: Visualizes pupillometry, gaze position, fixations, saccades, and events on a shared time axis.
- Interactive Navigation: Use the standard Matplotlib toolbar to zoom and pan. A horizontal scrollbar allows for easy navigation through long recordings.
- Statistical Analysis on Selection: Click and drag on any plot to select a time range. The tool instantly calculates and displays descriptive statistics (mean, median, std dev, etc.) for all data types within that specific interval.
- Event and Blink Visualization: Events are marked with vertical lines and labels, while blinks are shown as shaded regions, providing context to the time-series data.
- Flexible Data Loading: Can load data from a full SPEED analysis output folder or directly from an un-enriched data folder.
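The statistics-on-selection feature reduces to slicing samples by the dragged time range and computing descriptives. A stdlib-only sketch of that step (the function name is illustrative):

```python
import statistics
from typing import Dict, List, Tuple

def stats_in_range(samples: List[Tuple[float, float]],
                   t_start: float, t_end: float) -> Dict[str, float]:
    """Descriptive statistics for (timestamp, value) samples in [t_start, t_end]."""
    values = [v for t, v in samples if t_start <= t <= t_end]
    if not values:
        return {}
    return {
        "n": len(values),
        "mean": statistics.fmean(values),
        "median": statistics.median(values),
        "std": statistics.stdev(values) if len(values) > 1 else 0.0,
    }
```

In the GUI, the same computation would run once per data type (pupil diameter, gaze x/y, etc.) over the selected interval.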
LSL Time Series Viewer (lsl_time_series_viewer.py)
This module provides a real-time data visualization tool for any data stream available on the local network via Lab Streaming Layer (LSL). It is designed to function like a simple EEG or oscilloscope, plotting multi-channel numerical data as it arrives.
Class: LSLTimeSeriesViewer
A Toplevel Tkinter window that discovers, connects to, and plots LSL streams.
Key Features:
- Stream Discovery: Automatically scans the network for available LSL streams and displays them in a list for selection.
- Multi-Stream Visualization: Can connect to and display multiple streams simultaneously on the same plot.
- Real-time Plotting: Uses Matplotlib to render scrolling time-series plots, showing the most recent data within a configurable time window (default: 10 seconds).
- Event Marker Support: If a stream of type 'Markers' is selected, it displays incoming string events as labeled vertical lines on the plot.
- Thread-Safe Operation: All LSL data reception and processing happens in background threads to ensure the user interface remains responsive.
- Simple Controls: Includes buttons to refresh the stream list and to start/stop the real-time visualization.
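Keeping only the most recent few seconds of a stream is a small rolling-buffer problem. A sketch of how such a window might be maintained, independent of pylsl (the class name is illustrative):

```python
from collections import deque
from typing import Deque, List, Tuple

class RollingWindow:
    """Keep (timestamp, value) samples from the last `span` seconds."""

    def __init__(self, span: float = 10.0):
        self.span = span
        self._buf: Deque[Tuple[float, float]] = deque()

    def push(self, ts: float, value: float) -> None:
        self._buf.append((ts, value))
        # drop samples that have fallen out of the time window
        while self._buf and self._buf[0][0] < ts - self.span:
            self._buf.popleft()

    def samples(self) -> List[Tuple[float, float]]:
        return list(self._buf)
```

Because samples arrive in time order, eviction from the left of the deque is O(1) amortized, which keeps the plot update loop cheap even at high sampling rates.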
NSI Calculator (nsi_calculator.py)
This module provides an interactive, post-analysis tool for calculating the Normalized Switching Index (NSI). The NSI is a metric that quantifies the frequency of gaze shifts between different Areas of Interest (AOIs). This tool is available in the main GUI after a full analysis has been completed with at least two AOIs defined.
Class: NsiCalculatorWindow
A Toplevel Tkinter window that allows users to define time windows and calculate NSI for those specific periods.
Key Features:
- Interactive Timeline: Users can drag on the video timeline to create one or more time windows for analysis.
- Contextual Activation: The tool is only enabled in the GUI after a `RUN FULL ANALYSIS` is complete and if at least two AOIs were used in that analysis.
- Targeted Calculation: The NSI is calculated for each defined time window, considering only the gaze data within that window.
- Data-driven: It uses the `gaze_enriched.csv` file generated during the main analysis, which contains the mapping of gaze points to AOI names.
- CSV Output: The results are saved to `nsi_results.csv`, with each row containing the NSI value, the list of AOIs, and the start/end timestamps of the analysis window.
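The exact NSI formula is not stated here; one common normalization divides the number of AOI-to-AOI switches by the number of opportunities to switch. A sketch under that assumption, taking the per-sample AOI labels as they would appear in `gaze_enriched.csv`:

```python
from typing import List, Optional

def normalized_switching_index(aoi_sequence: List[Optional[str]]) -> float:
    """Switches between consecutive AOI labels, divided by switch opportunities.

    Assumes one common NSI definition (an illustrative choice, not the
    module's confirmed formula). Gaze samples outside any AOI (None) are
    ignored; returns 0.0 when fewer than two on-AOI samples exist.
    """
    labels = [a for a in aoi_sequence if a is not None]
    if len(labels) < 2:
        return 0.0
    switches = sum(1 for a, b in zip(labels, labels[1:]) if a != b)
    return switches / (len(labels) - 1)
```

Under this definition, a gaze that alternates between two AOIs on every sample scores 1.0, while a gaze that stays on one AOI scores 0.0.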