Video-EEG Encoding-Decoding Dataset KU Leuven
Description
If you use this dataset, please cite this Zenodo repository and the paper below.
The dataset is described in detail in the following paper:
Introduction
The research work leading to this dataset was conducted at the Department of Electrical Engineering (ESAT), KU Leuven.
This dataset contains electroencephalogram (EEG) data collected from 19 young participants with normal or corrected-to-normal eyesight while they watched a series of carefully selected YouTube videos. The videos were muted to avoid confounds introduced by audio. For synchronization, a square box was rendered outside the original frames and flashed every 30 seconds in the top-right corner of the screen. A photosensor, which detected the light changes of this flashing box, was affixed to that region of the screen with black tape, ensuring that the box did not distract participants. The EEG data were recorded with a BioSemi ActiveTwo system at a sampling rate of 2048 Hz. Participants wore a 64-channel EEG cap, and 4 electrooculogram (EOG) sensors were placed around the eyes to track eye movements.
The dataset includes a total of 19 subjects × 63 min + 9 subjects × 24 min (about 23.5 hours) of EEG data. Further details can be found in the sections below.
Content
- YouTube Videos: Due to copyright constraints, the dataset includes links to the original YouTube videos along with precise timestamps for the segments used in the experiments. A sketch for downloading these segments is given after the Single-shot table below.
- Raw EEG Data: Organized by subject ID, the dataset contains EEG segments corresponding to the presented videos. Both EEGLAB .set files (containing metadata) and .fdt files (containing the raw data) are provided; these can also be read by popular Python EEG packages such as MNE (see the loading sketch after this list).
- The naming convention links each EEG segment to its corresponding video. For example, the EEG segment 01_eeg corresponds to video 01_Dance_1, 03_eeg to video 03_Acrob_1, Mr_eeg to video Mr_Bean, etc.
- The raw data have 68 channels: the first 64 channels are EEG and the last 4 are EOG (also handled in the loading sketch below). The position coordinates of the standard BioSemi headcaps can be downloaded here: https://www.biosemi.com/download/Cap_coords_all.xls.
- Due to minor synchronization ambiguities, different clocks in the PC and the EEG recorder, and (rarely) missing or extra video frames during playback, the length of an EEG segment may not perfectly match that of the corresponding video. The difference, typically within a few milliseconds, can be resolved by truncating the modality with the excess samples, as in the alignment sketch after this list.
- Signal Quality Information: a supplementary .txt file details potentially bad channels. Users can apply their own criteria for identifying and handling bad channels.
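As a starting point, here is a minimal loading sketch in Python, assuming MNE is installed; the file path and the bad-channel name are illustrative placeholders, not values prescribed by the dataset:

```python
# Minimal sketch: load one EEG segment with MNE and mark the EOG channels.
# The path "sub-01/01_eeg.set" is illustrative; adapt it to the actual layout.
import mne

# read_raw_eeglab() reads the .set metadata and pulls the samples from the
# companion .fdt file automatically.
raw = mne.io.read_raw_eeglab("sub-01/01_eeg.set", preload=True)

# The first 64 channels are EEG; the last 4 are EOG.
raw.set_channel_types({name: "eog" for name in raw.ch_names[-4:]})

# Optionally record bad channels from the signal-quality .txt file
# ("A12" is a hypothetical example); many MNE routines then skip them.
raw.info["bads"] = ["A12"]
```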
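And a hedged sketch of the truncation described above, assuming the video frame count and frame rate are known (both values below are placeholders):

```python
# Sketch: trim EEG and video to their common duration. The frame count and
# frame rate below are placeholders, not values taken from this dataset.
import mne

raw = mne.io.read_raw_eeglab("sub-01/01_eeg.set", preload=True)

fs = raw.info["sfreq"]        # EEG sampling rate (2048 Hz)
n_frames, fps = 6680, 30.0    # placeholder video length and frame rate

common = min(raw.n_times / fs, n_frames / fps)  # shared duration in seconds

# Truncate the longer modality: crop the EEG ...
raw.crop(tmin=0.0, tmax=common, include_tmax=False)
# ... and keep only the first int(common * fps) video frames.
n_frames_kept = int(common * fps)
```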
The dataset is divided into two subsets: Single-shot and MrBean, based on the characteristics of the video stimuli.
Single-shot Dataset
The stimuli of this subset consist of 13 single-shot videos (63 min in total), each depicting a single individual engaged in activities such as dancing, mime, acrobatics, or magic shows. All 19 participants watched this video collection.
Video ID | Link | Start time (s) | End time (s) |
---|---|---|---|
01_Dance_1 | https://youtu.be/uOUVE5rGmhM | 8.54 | 231.20 |
03_Acrob_1 | https://youtu.be/DjihbYg6F2Y | 4.24 | 231.91 |
04_Magic_1 | https://youtu.be/CvzMqIQLiXE | 3.68 | 348.17 |
05_Dance_2 | https://youtu.be/f4DZp0OEkK4 | 5.05 | 227.99 |
06_Mime_2 | https://youtu.be/u9wJUTnBdrs | 5.79 | 347.05 |
07_Acrob_2 | https://youtu.be/kRqdxGPLajs | 183.61 | 519.27 |
08_Magic_2 | https://youtu.be/FUv-Q6EgEFI | 3.36 | 270.62 |
09_Dance_3 | https://youtu.be/LXO-jKksQkM | 5.61 | 294.17 |
12_Magic_3 | https://youtu.be/S84AoWdTq3E | 1.76 | 426.36 |
13_Dance_4 | https://youtu.be/0wc60tA1klw | 14.28 | 217.18 |
14_Mime_3 | https://youtu.be/0Ala3ypPM3M | 21.87 | 386.84 |
15_Dance_5 | https://youtu.be/mg6-SnUl0A0 | 15.14 | 233.85 |
16_Mime_6 | https://youtu.be/8V7rhAJF6Gc | 31.64 | 388.61 |
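The stimulus segments can be reconstructed from the links and timestamps above. A minimal sketch using Python and the yt-dlp command-line tool; the output naming is an assumption, and only the first two videos are listed for brevity:

```python
# Sketch: download the exact stimulus segments with yt-dlp
# (https://github.com/yt-dlp/yt-dlp).
import subprocess

videos = {
    "01_Dance_1": ("https://youtu.be/uOUVE5rGmhM", 8.54, 231.20),
    "03_Acrob_1": ("https://youtu.be/DjihbYg6F2Y", 4.24, 231.91),
    # ... remaining rows of the table above
}

for video_id, (url, start, end) in videos.items():
    subprocess.run(
        [
            "yt-dlp",
            "--download-sections", f"*{start}-{end}",  # clip to [start, end] s
            "--force-keyframes-at-cuts",  # re-encode at cuts for frame accuracy
            "-o", f"{video_id}.%(ext)s",
            url,
        ],
        check=True,
    )
```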
MrBean Dataset
Additionally, 9 of the participants watched an extra 24-minute clip from the first episode of Mr. Bean, in which multiple (moving) objects can appear and interact, and the camera viewpoint changes. The subject IDs and the signal-quality files are inherited from the Single-shot dataset.
Video ID | Link | Start time (s) | End time (s) |
---|---|---|---|
Mr_Bean | https://www.youtube.com/watch?v=7Im2I6STbms | 39.77 | 1495.00 |
Acknowledgement
This research is funded by the Research Foundation - Flanders (FWO) project No. G081722N, a junior postdoctoral fellowship for fundamental research of the FWO (No. 1242524N, for S. Geirnaert), the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement No. 802895), the Flemish Government (AI Research Program), and a PDM mandate from KU Leuven (No. PDMT1/22/009, for S. Geirnaert).
We also thank the participants for their time and effort in the experiments.
Contact Information
Executive researcher: Yuanyuan Yao, [email protected]
Led by: Prof. Alexander Bertrand, [email protected]
Files (32.8 GB in total)

- SingleShot.zip
- md5:3dd4daf20c4e51531738fb426040f691 (5.0 GB)
- md5:64788ce59c316c7cb8126e1f8c1d2dde (27.8 GB)
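After downloading, the archives can be checked against the MD5 checksums listed above; a small Python sketch:

```python
# Sketch: compute a file's MD5 and compare it with the checksums listed above.
import hashlib

def md5sum(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the hex MD5 digest of a file, read in 1 MiB chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        while block := f.read(chunk_size):
            h.update(block)
    return h.hexdigest()

# Compare the printed digest with the corresponding md5 entry above.
print(md5sum("SingleShot.zip"))
```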