pymovements.datasets.GazeBaseVR#

class pymovements.datasets.GazeBaseVR(name: str = 'GazeBaseVR', mirrors: tuple[str] = ('https://figshare.com/ndownloader/files/', ), resources: tuple[dict[str, str]] = ({'filename': 'gazebasevr.zip', 'md5': '048c04b00fd64347375cc8d37b451a22', 'resource': '38844024'}, ), experiment: Experiment = <pymovements.gaze.experiment.Experiment object>, filename_format: str = 'S_{round_id:1d}{subject_id:d}_S{session_id:d}_{task_name}.csv', filename_format_dtypes: dict[str, type] = <factory>, custom_read_kwargs: dict[str, Any] = <factory>, column_map: dict[str, str] = <factory>, trial_columns: list[str] = <factory>, time_column: str = 'n', time_unit: str = 'ms', pixel_columns: list[str] | None = None, position_columns: list[str] = <factory>, velocity_columns: list[str] | None = None, acceleration_columns: list[str] | None = None, distance_column: str | None = None)#

GazeBaseVR dataset [Lohr et al., 2023].

This dataset includes binocular (plus an additional cyclopean) eye tracking data from 407 participants, captured over a 26-month period. Participants attended up to three rounds during this time frame, with each round consisting of two contiguous sessions.

Eye movements are recorded at a sampling frequency of 250 Hz using SensoMotoric Instruments' (SMI's) tethered eye-tracking (ET) VR head-mounted display based on the HTC Vive (hereafter called the ET-HMD) and are provided as positional data in degrees of visual angle.

In each of the two sessions per round, participants are instructed to complete a series of tasks: a vergence task (VRG), a smooth pursuit task (PUR), a video-viewing task (VID), a reading task (TEX), and a random saccade task (RAN).

Check the respective paper for details [Lohr et al., 2023].
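Because each recording is identified by round, subject, session, and task, many analyses only need a subset of the data. The lines below are a hedged sketch, assuming the named groups parsed from the filenames (round_id, task_name, ...) are usable as subset keys in Dataset.load(); they are not part of the official example.

>>> import pymovements as pm
>>>
>>> # Sketch: load only the reading task (TEX) of round 1.
>>> dataset = pm.Dataset("GazeBaseVR", path='data/GazeBaseVR')
>>> dataset.download()
>>> dataset.load(subset={'round_id': [1], 'task_name': ['TEX']})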

name#

The name of the dataset.

Type:

str

mirrors#

A tuple of mirrors of the dataset. Each entry must be of type str and end with a ‘/’.

Type:

tuple[str, …]

resources#

A tuple of dataset resources. Each list entry must be a dictionary with the following keys:

- resource: The URL suffix of the resource. This will be concatenated with the mirror.
- filename: The filename under which the file is saved.
- md5: The MD5 checksum of the respective file.

Type:

tuple[dict[str, str], …]
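For illustration, the download URL of a resource is simply the mirror prefix concatenated with the resource suffix. The sketch below reproduces this with the default values shown above; the actual download and checksum logic lives inside pymovements.

>>> mirror = 'https://figshare.com/ndownloader/files/'
>>> resource = {'filename': 'gazebasevr.zip', 'md5': '048c04b00fd64347375cc8d37b451a22', 'resource': '38844024'}
>>> mirror + resource['resource']
'https://figshare.com/ndownloader/files/38844024'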

experiment#

The experiment definition.

Type:

Experiment

filename_format#

Regular expression which will be matched before trying to load the file. Named groups will appear in the fileinfo dataframe.

Type:

str

filename_format_dtypes#

If named groups are present in the filename_format, this makes it possible to cast specific named groups to a particular datatype.

Type:

dict[str, type], optional
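To illustrate how the named groups and dtype casts interact, the sketch below parses a hypothetical filename with a hand-written regular expression equivalent to the pattern above. It is not the parser pymovements uses internally, the example filename is assumed, and the concrete dtype mapping is an illustrative stand-in for the dataclass default.

>>> import re
>>>
>>> # Hand-written equivalent of 'S_{round_id:1d}{subject_id:d}_S{session_id:d}_{task_name}.csv'.
>>> pattern = re.compile(r'S_(?P<round_id>\d)(?P<subject_id>\d+)_S(?P<session_id>\d+)_(?P<task_name>\w+)\.csv')
>>> groups = pattern.fullmatch('S_1001_S1_VRG.csv').groupdict()
>>> # Assumed casts in the spirit of filename_format_dtypes.
>>> dtypes = {'round_id': int, 'subject_id': int, 'session_id': int}
>>> {key: dtypes.get(key, str)(value) for key, value in groups.items()}
{'round_id': 1, 'subject_id': 1, 'session_id': 1, 'task_name': 'VRG'}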

column_map#

The keys are the columns to read; the values are the names to which they should be renamed.

Type:

dict[str, str]

custom_read_kwargs#

If specified, these keyword arguments will be passed to the file reading function.

Type:

dict[str, Any], optional
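As a rough sketch of the idea only: the keyword arguments are forwarded to the CSV reading function, presumably polars.read_csv for this dataset. The file path and the keyword arguments below are placeholders, not the dataclass defaults.

>>> import polars as pl
>>>
>>> csv_filepath = 'S_1001_S1_VRG.csv'  # placeholder; any raw data file of the dataset
>>> custom_read_kwargs = {'separator': ',', 'null_values': 'NaN'}
>>> gaze_frame = pl.read_csv(csv_filepath, **custom_read_kwargs)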

Examples

Initialize your Dataset object with the GazeBaseVR definition:

>>> import pymovements as pm
>>>
>>> dataset = pm.Dataset("GazeBaseVR", path='data/GazeBaseVR')

Download the dataset resources:

>>> dataset.download()

Load the data into memory:

>>> dataset.load()
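A possible continuation, not part of the upstream example: inspect the metadata parsed from the filenames and derive velocities from the positional data (already given in degrees of visual angle). The calls below assume the standard Dataset attributes and the pos2vel() transformation.

>>> dataset.fileinfo
>>> dataset.pos2vel()
>>> dataset.gaze[0]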
__init__(name: str = 'GazeBaseVR', mirrors: tuple[str] = ('https://figshare.com/ndownloader/files/', ), resources: tuple[dict[str, str]] = ({'filename': 'gazebasevr.zip', 'md5': '048c04b00fd64347375cc8d37b451a22', 'resource': '38844024'}, ), experiment: Experiment = <pymovements.gaze.experiment.Experiment object>, filename_format: str = 'S_{round_id:1d}{subject_id:d}_S{session_id:d}_{task_name}.csv', filename_format_dtypes: dict[str, type] = <factory>, custom_read_kwargs: dict[str, Any] = <factory>, column_map: dict[str, str] = <factory>, trial_columns: list[str] = <factory>, time_column: str = 'n', time_unit: str = 'ms', pixel_columns: list[str] | None = None, position_columns: list[str] = <factory>, velocity_columns: list[str] | None = None, acceleration_columns: list[str] | None = None, distance_column: str | None = None) → None

Methods

__init__([name, mirrors, resources, ...])

Attributes

acceleration_columns

distance_column

experiment

filename_format

mirrors

name

pixel_columns

position_columns

resources

time_column

time_unit

trial_columns

velocity_columns

filename_format_dtypes

column_map

custom_read_kwargs