Working with a Local Dataset#
In this tutorial, we will show how to use your own local dataset with the Dataset class. The Dataset class helps you manage and process your eye-tracking data.
Preparations#
We import pymovements as the alias pm for convenience.
[1]:
import pymovements as pm
For demonstration purposes, we will use the raw data provided by the Toy dataset, a sample dataset that comes with pymovements.
We will download the resources of this dataset to this directory to simulate a local dataset for you. All downloaded archive files are automatically extracted and then removed. The directory of the dataset will be data/my_dataset.
After that we won't use the Dataset object anymore and delete it (the files on your system will stay in place). Don't worry if you're confused about these lines, as they are not relevant to your use case.
Just keep in mind that we now have some files with gaze data in the directory data/my_dataset.
[2]:
toy_dataset = pm.Dataset('ToyDataset', path='data/my_dataset')
toy_dataset.download(remove_finished=True)
del toy_dataset
Downloading http://github.com/aeye-lab/pymovements-toy-dataset/zipball/6cb5d663317bf418cec0c9abe1dde5085a8a8ebd/ to data/my_dataset/downloads/pymovements-toy-dataset.zip
pymovements-toy-dataset.zip: 100%|██████████| 3.06M/3.06M [00:00<00:00, 3.62MB/s]
Checking integrity of pymovements-toy-dataset.zip
Extracting pymovements-toy-dataset.zip to data/my_dataset/raw
Define your Experiment#
To use the Dataset class, we first need to create an Experiment instance. This class represents the properties of the experiment, such as the screen dimensions and sampling rate.
[3]:
experiment = pm.gaze.Experiment(
screen_width_px=1280,
screen_height_px=1024,
screen_width_cm=38,
screen_height_cm=30.2,
distance_cm=68,
origin='upper left',
sampling_rate=1000,
)
Parameters for File Parsing#
We also define a filename_format, which is a pattern expression used to match and extract values from filenames of data files in the dataset. For example, r'trial_{text_id:d}_{page_id:d}.csv' will match filenames that follow the pattern trial_{text_id}_{page_id}.csv and extract the values of text_id and page_id for each file.
[4]:
filename_format = r'trial_{text_id:d}_{page_id:d}.csv'
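To get an intuition for how such a pattern behaves, here is a rough stand-in built with Python's re module. This is illustration only; pymovements performs this parsing internally, and the regular expression below is just an assumed equivalent of the curly-brace pattern.

```python
import re

# Hypothetical illustration: the pattern r'trial_{text_id:d}_{page_id:d}.csv'
# behaves roughly like this regular expression, where each {name:d} field
# becomes a named group restricted to digits.
pattern = re.compile(r'trial_(?P<text_id>\d+)_(?P<page_id>\d+)\.csv')

match = pattern.match('trial_1_2.csv')
print(match.groupdict())  # {'text_id': '1', 'page_id': '2'}
```

A filename that does not follow the pattern, such as practice_1_2.csv, would simply not match and yield no field values.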
Both text_id and page_id are numeric. We can use a map to define the casting of these values.
[5]:
filename_format_dtypes = {
'text_id': int,
'page_id': int,
}
We can also adjust how the CSV files are read. Here, we specify that the separator in the CSV files is a tab (\t).
[6]:
custom_read_kwargs = {
'separator': '\t',
}
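As a quick standard-library sketch of what the tab separator means when parsing (the Dataset class itself reads the files using these keyword arguments, so this is illustration only):

```python
import csv
import io

# A tab-separated line is split at '\t' characters rather than commas.
raw = 'timestamp\tx\ty\n0\t176.8\t140.2\n'
rows = list(csv.reader(io.StringIO(raw), delimiter='\t'))
print(rows[0])  # ['timestamp', 'x', 'y']
```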
Column Definitions#
The trial_columns argument can be used to specify which columns define a single trial.
This is important for correctly applying all preprocessing methods.
For this very small single-user dataset, a trial is just defined by text_id and page_id.
[7]:
trial_columns = ['text_id', 'page_id']
The time_column and pixel_columns arguments can be used to correctly map the columns in your dataframes. If the time unit differs from the default of milliseconds (ms), you must also specify the time_unit for correct computations.
Depending on the content of your dataset, you can alternatively provide position_columns, velocity_columns and acceleration_columns.
Specifying these columns is needed for correctly applying preprocessing methods. For example, if you want to apply the pix2deg method, you will need to specify pixel_columns accordingly.
If your dataset has gaze positions available only in degrees of visual angle, you have to specify the position_columns instead.
[8]:
time_column = 'timestamp'
time_unit = 'ms'
pixel_columns = ['x', 'y']
Define and load the Dataset#
Next we use all these definitions to create a DatasetDefinition by passing in the dataset name, the Experiment instance, and other optional parameters such as the filename format and custom CSV reading parameters.
[9]:
dataset_definition = pm.DatasetDefinition(
name='my_dataset',
experiment=experiment,
filename_format=filename_format,
filename_format_dtypes=filename_format_dtypes,
custom_read_kwargs=custom_read_kwargs,
time_column=time_column,
time_unit=time_unit,
pixel_columns=pixel_columns,
)
Finally we create a Dataset instance by using the DatasetDefinition and specifying the directory path.
[10]:
dataset = pm.Dataset(
definition=dataset_definition,
path='data/my_dataset/',
)
If you have a root data directory that holds all your local datasets, you can additionally define the paths of the dataset.
The dataset, raw, preprocessed, and events parameters define the names of the directories for the dataset, raw data, preprocessed data, and events data, respectively.
[11]:
dataset_paths = pm.DatasetPaths(
root='data/',
raw='raw',
preprocessed='preprocessed',
events='events',
)
dataset = pm.Dataset(
definition=dataset_definition,
path=dataset_paths,
)
Now let’s load the dataset into memory. Here we select a subset including the first page of texts with ID 1 and 2.
[12]:
subset = {
'text_id': [1, 2],
'page_id': 1,
}
dataset.load(subset=subset)
100%|██████████| 2/2 [00:00<00:00, 21.83it/s]
[12]:
<pymovements.dataset.dataset.Dataset at 0x7efca37c2c40>
Use the Dataset#
Once we have created the Dataset instance, we can use its methods to preprocess and analyze data in our local dataset.
[13]:
dataset.gaze[0].frame
[13]:
time | stimuli_x | stimuli_y | text_id | page_id | pixel |
---|---|---|---|---|---|
i64 | f64 | f64 | i64 | i64 | list[f64] |
2415266 | -1.0 | -1.0 | 1 | 1 | [176.8, 140.2] |
2415267 | -1.0 | -1.0 | 1 | 1 | [176.7, 139.8] |
2415268 | -1.0 | -1.0 | 1 | 1 | [176.7, 139.3] |
2415269 | -1.0 | -1.0 | 1 | 1 | [176.6, 139.3] |
2415270 | -1.0 | -1.0 | 1 | 1 | [176.7, 139.3] |
2415271 | -1.0 | -1.0 | 1 | 1 | [176.8, 139.5] |
2415272 | -1.0 | -1.0 | 1 | 1 | [177.3, 139.8] |
2415273 | -1.0 | -1.0 | 1 | 1 | [177.8, 140.0] |
2415274 | -1.0 | -1.0 | 1 | 1 | [178.3, 140.0] |
2415275 | -1.0 | -1.0 | 1 | 1 | [178.3, 139.9] |
2415276 | -1.0 | -1.0 | 1 | 1 | [178.0, 140.2] |
2415277 | -1.0 | -1.0 | 1 | 1 | [177.7, 140.4] |
… | … | … | … | … | … |
2438308 | -1.0 | -1.0 | 1 | 1 | [649.1, 633.7] |
2438309 | -1.0 | -1.0 | 1 | 1 | [648.8, 633.9] |
2438310 | -1.0 | -1.0 | 1 | 1 | [649.1, 634.1] |
2438311 | -1.0 | -1.0 | 1 | 1 | [649.6, 634.2] |
2438312 | -1.0 | -1.0 | 1 | 1 | [650.1, 634.1] |
2438313 | -1.0 | -1.0 | 1 | 1 | [650.0, 634.0] |
2438314 | -1.0 | -1.0 | 1 | 1 | [649.9, 633.9] |
2438315 | -1.0 | -1.0 | 1 | 1 | [649.9, 633.9] |
2438316 | -1.0 | -1.0 | 1 | 1 | [650.1, 633.7] |
2438317 | -1.0 | -1.0 | 1 | 1 | [650.2, 633.5] |
2438318 | -1.0 | -1.0 | 1 | 1 | [650.0, 633.2] |
2438319 | -1.0 | -1.0 | 1 | 1 | [649.7, 633.1] |
Here we use the pix2deg method to convert the pixel coordinates to degrees of visual angle.
[14]:
dataset.pix2deg()
dataset.gaze[0].frame
100%|██████████| 2/2 [00:00<00:00, 12.97it/s]
[14]:
time | stimuli_x | stimuli_y | text_id | page_id | pixel | position |
---|---|---|---|---|---|---|
i64 | f64 | f64 | i64 | i64 | list[f64] | list[f64] |
2415266 | -1.0 | -1.0 | 1 | 1 | [176.8, 140.2] | [-11.420403, -9.148145] |
2415267 | -1.0 | -1.0 | 1 | 1 | [176.7, 139.8] | [-11.422806, -9.157834] |
2415268 | -1.0 | -1.0 | 1 | 1 | [176.7, 139.3] | [-11.422806, -9.169943] |
2415269 | -1.0 | -1.0 | 1 | 1 | [176.6, 139.3] | [-11.42521, -9.169943] |
2415270 | -1.0 | -1.0 | 1 | 1 | [176.7, 139.3] | [-11.422806, -9.169943] |
2415271 | -1.0 | -1.0 | 1 | 1 | [176.8, 139.5] | [-11.420403, -9.1651] |
2415272 | -1.0 | -1.0 | 1 | 1 | [177.3, 139.8] | [-11.408386, -9.157834] |
2415273 | -1.0 | -1.0 | 1 | 1 | [177.8, 140.0] | [-11.396367, -9.15299] |
2415274 | -1.0 | -1.0 | 1 | 1 | [178.3, 140.0] | [-11.384348, -9.15299] |
2415275 | -1.0 | -1.0 | 1 | 1 | [178.3, 139.9] | [-11.384348, -9.155412] |
2415276 | -1.0 | -1.0 | 1 | 1 | [178.0, 140.2] | [-11.39156, -9.148145] |
2415277 | -1.0 | -1.0 | 1 | 1 | [177.7, 140.4] | [-11.398771, -9.143301] |
… | … | … | … | … | … | … |
2438308 | -1.0 | -1.0 | 1 | 1 | [649.1, 633.7] | [0.240135, 3.033792] |
2438309 | -1.0 | -1.0 | 1 | 1 | [648.8, 633.9] | [0.232631, 3.038748] |
2438310 | -1.0 | -1.0 | 1 | 1 | [649.1, 634.1] | [0.240135, 3.043704] |
2438311 | -1.0 | -1.0 | 1 | 1 | [649.6, 634.2] | [0.252642, 3.046182] |
2438312 | -1.0 | -1.0 | 1 | 1 | [650.1, 634.1] | [0.265149, 3.043704] |
2438313 | -1.0 | -1.0 | 1 | 1 | [650.0, 634.0] | [0.262648, 3.041226] |
2438314 | -1.0 | -1.0 | 1 | 1 | [649.9, 633.9] | [0.260146, 3.038748] |
2438315 | -1.0 | -1.0 | 1 | 1 | [649.9, 633.9] | [0.260146, 3.038748] |
2438316 | -1.0 | -1.0 | 1 | 1 | [650.1, 633.7] | [0.265149, 3.033792] |
2438317 | -1.0 | -1.0 | 1 | 1 | [650.2, 633.5] | [0.26765, 3.028836] |
2438318 | -1.0 | -1.0 | 1 | 1 | [650.0, 633.2] | [0.262648, 3.021402] |
2438319 | -1.0 | -1.0 | 1 | 1 | [649.7, 633.1] | [0.255144, 3.018924] |
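The conversion itself is plain trigonometry: a pixel coordinate is mapped to a physical offset from the screen center and then to an angle via the arctangent. The following minimal sketch assumes the pixel grid is centered at (n_pixels - 1) / 2 and may differ in details from pymovements' actual implementation; it reproduces the first x value from the table above.

```python
import math

# Hedged sketch of the trigonometry behind pix2deg.
def pix2deg_sketch(pixel, n_pixels, screen_cm, distance_cm):
    center = (n_pixels - 1) / 2                       # assumed screen center in pixel coordinates
    offset_cm = (pixel - center) * screen_cm / n_pixels  # offset from center in cm
    return math.degrees(math.atan2(offset_cm, distance_cm))

# First sample of the table above: x = 176.8 px on a 1280 px wide,
# 38 cm wide screen viewed from 68 cm.
x_deg = pix2deg_sketch(176.8, n_pixels=1280, screen_cm=38, distance_cm=68)
print(round(x_deg, 4))  # -11.4204
```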
We can use the pos2vel method to calculate the velocity of the gaze position.
[15]:
dataset.pos2vel(method='savitzky_golay', degree=2, window_length=7)
dataset.gaze[0].frame
100%|██████████| 2/2 [00:00<00:00, 31.85it/s]
[15]:
time | stimuli_x | stimuli_y | text_id | page_id | pixel | position | velocity |
---|---|---|---|---|---|---|---|
i64 | f64 | f64 | i64 | i64 | list[f64] | list[f64] | list[f64] |
2415266 | -1.0 | -1.0 | 1 | 1 | [176.8, 140.2] | [-11.420403, -9.148145] | [-0.772495, -4.238523] |
2415267 | -1.0 | -1.0 | 1 | 1 | [176.7, 139.8] | [-11.422806, -9.157834] | [-0.686663, -4.671012] |
2415268 | -1.0 | -1.0 | 1 | 1 | [176.7, 139.3] | [-11.422806, -9.169943] | [-0.257498, -3.806023] |
2415269 | -1.0 | -1.0 | 1 | 1 | [176.6, 139.3] | [-11.42521, -9.169943] | [1.459231, -1.557032] |
2415270 | -1.0 | -1.0 | 1 | 1 | [176.7, 139.3] | [-11.422806, -9.169943] | [4.034446, 1.556983] |
2415271 | -1.0 | -1.0 | 1 | 1 | [176.8, 139.5] | [-11.420403, -9.1651] | [6.695697, 3.459956] |
2415272 | -1.0 | -1.0 | 1 | 1 | [177.3, 139.8] | [-11.408386, -9.157834] | [7.983442, 3.20046] |
2415273 | -1.0 | -1.0 | 1 | 1 | [177.8, 140.0] | [-11.396367, -9.15299] | [6.78167, 3.200507] |
2415274 | -1.0 | -1.0 | 1 | 1 | [178.3, 140.0] | [-11.384348, -9.15299] | [3.948804, 2.941092] |
2415275 | -1.0 | -1.0 | 1 | 1 | [178.3, 139.9] | [-11.384348, -9.155412] | [0.343335, 3.460254] |
2415276 | -1.0 | -1.0 | 1 | 1 | [178.0, 140.2] | [-11.39156, -9.148145] | [-1.717019, 4.152379] |
2415277 | -1.0 | -1.0 | 1 | 1 | [177.7, 140.4] | [-11.398771, -9.143301] | [-1.974598, 5.36358] |
… | … | … | … | … | … | … | … |
2438308 | -1.0 | -1.0 | 1 | 1 | [649.1, 633.7] | [0.240135, 3.033792] | [-0.268006, 0.708004] |
2438309 | -1.0 | -1.0 | 1 | 1 | [648.8, 633.9] | [0.232631, 3.038748] | [2.23337, 2.566488] |
2438310 | -1.0 | -1.0 | 1 | 1 | [649.1, 634.1] | [0.240135, 3.043704] | [4.109403, 2.566496] |
2438311 | -1.0 | -1.0 | 1 | 1 | [649.6, 634.2] | [0.252642, 3.046182] | [5.181423, 0.707998] |
2438312 | -1.0 | -1.0 | 1 | 1 | [650.1, 634.1] | [0.265149, 3.043704] | [4.73475, -0.530993] |
2438313 | -1.0 | -1.0 | 1 | 1 | [650.0, 634.0] | [0.262648, 3.041226] | [3.037385, -1.769984] |
2438314 | -1.0 | -1.0 | 1 | 1 | [649.9, 633.9] | [0.260146, 3.038748] | [1.518691, -2.654987] |
2438315 | -1.0 | -1.0 | 1 | 1 | [649.9, 633.9] | [0.260146, 3.038748] | [0.268004, -3.451512] |
2438316 | -1.0 | -1.0 | 1 | 1 | [650.1, 633.7] | [0.265149, 3.033792] | [-0.357339, -3.982536] |
2438317 | -1.0 | -1.0 | 1 | 1 | [650.2, 633.5] | [0.26765, 3.028836] | [-0.982682, -3.982549] |
2438318 | -1.0 | -1.0 | 1 | 1 | [650.0, 633.2] | [0.262648, 3.021402] | [-1.69736, -3.54005] |
2438319 | -1.0 | -1.0 | 1 | 1 | [649.7, 633.1] | [0.255144, 3.018924] | [-2.233368, -2.389544] |
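A Savitzky-Golay differentiation filter fits a low-degree polynomial to a sliding window of samples and evaluates its derivative at the window center. For degree=2 and window_length=7 this reduces to a fixed convolution kernel. The following is a minimal sketch of that idea for interior samples only, ignoring the edge handling and exact conventions of pymovements' implementation:

```python
# Hedged sketch of Savitzky-Golay differentiation (degree=2, window_length=7).
# For a symmetric 7-point window, the least-squares first derivative at the
# center reduces to sum(i * y[t + i]) / sum(i * i) for i in -3..3.
def savgol_velocity(samples, sampling_rate=1000):
    dt = 1 / sampling_rate                     # seconds per sample
    offsets = range(-3, 4)                     # 7-point window
    norm = sum(i * i for i in offsets)         # = 28
    velocities = []
    for t in range(3, len(samples) - 3):       # interior samples only
        slope = sum(i * samples[t + i] for i in offsets) / norm
        velocities.append(slope / dt)          # units per second
    return velocities

# A linear ramp has constant slope: a step of 0.5 units per sample at
# 1000 Hz corresponds to 500 units per second.
ramp = [0.5 * t for t in range(10)]
print(savgol_velocity(ramp))  # [500.0, 500.0, 500.0, 500.0]
```

Because the fit is a local polynomial, this approach smooths measurement noise while differentiating, which is why it is a popular choice for gaze velocity estimation.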