MobileStress

Welcome to the repository of Mobile-Stress, a public dataset containing smartphone typing and sensor data, together with recordings from wearable physiological sensors, intended for research on stress. The dataset includes data from 20 participants who completed typing-intensive exercises under neutral and stressed conditions.

Request access to the dataset

To obtain access to Mobile-Stress, please contact Dennis Silva (silva.dennis@tamu.edu) of the research staff. In your e-mail request, please include your name, e-mail address, institution (if any), and intended purpose (e.g., personal use, research, commercial use).

Experimental protocol

Each participant enrolled in the study completed four sessions. A high-level representation of the protocol is displayed in the image below:

As shown in the image above, the protocol is divided into two major blocks, a neutral block and a stress block, which were counterbalanced to avoid carryover effects. To elicit the desired mental state, we first used a priming task (Stroop Color-Word Test [CWT]; article) prior to the typing tasks in both blocks. We used the same priming task for both conditions, but it was configured in a more challenging fashion for the stress block (congruent vs. incongruent CWT; article) to elicit a stressful mental state. Second, while completing the writing task in the stress block, participants were also asked to complete mental arithmetic tasks. Between the two blocks, participants watched a 5-minute relaxing video with landscape images and relaxing background audio played on the smartphone’s speaker. The purpose of this task was to bring participants to a neutral, relaxed state prior to starting the next block.

Typing task

Writing task

The user interface of the writing task is shown below. We displayed classical paintings and asked participants to describe them. Specifically, we instructed participants to “describe the characters in the paintings, what activities they are engaged in, their clothing, and the environment they are in”. The app displayed up to three paintings per condition (i.e., neutral, stressed) and asked participants to type at least 200 words before submitting the description of each painting. After the participant submitted their text for a particular painting, the app loaded the next one and cleared the text area. Participants were given ten minutes to complete the typing task, and whatever they had written when the time ran out was submitted as their final description. In total, we used 24 paintings in the experiments (4 sessions × 2 conditions per session × 3 paintings per condition), which were never repeated for the same participant.

In the neutral condition, participants had no interruptions and were allowed to describe the paintings freely (left-hand side of the figure above). During the stressed condition, however, besides describing the paintings, participants also completed mental arithmetic tasks (MATs), which popped up every 30 seconds on average. The time interval between MATs followed a Poisson distribution with a mean of 30 seconds, which prevented participants from anticipating the next MAT exactly 30 seconds after the previous one. An example of a MAT is shown on the right-hand side of the figure above. Further, the app played a loud buzzer whenever participants selected a wrong option or failed to make a choice within five seconds. The purpose of the MATs during the stressed condition was to maintain a stressful mental state following the priming task.
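
The app’s actual implementation is not part of the dataset, but the sketch below illustrates the MAT timing scheme described above. The 30-second mean interval and the 10-minute task length come from the protocol; the function name, the random seed, and the use of NumPy’s Poisson sampler are assumptions made for this example only.

```python
import numpy as np

def simulate_mat_schedule(task_duration_s=600, mean_interval_s=30, seed=0):
    """Sketch: sample MAT onset times with Poisson-distributed gaps (mean 30 s)."""
    rng = np.random.default_rng(seed)
    onsets, t = [], 0
    while True:
        t += rng.poisson(mean_interval_s)  # random gap until the next MAT, in seconds
        if t >= task_duration_s:           # stop once the 10-minute writing task is over
            break
        onsets.append(int(t))
    return onsets

print(simulate_mat_schedule())  # onset times (in seconds) of the simulated MATs
```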

Self-reporting

Participants were asked to complete a self-assessment questionnaire after the CWT and typing tasks in both blocks. This allowed us to better understand the participants’ perceived mental state and cognitive workload after performing the different tasks in the experimental protocol. For self-reporting, we implemented two psychometric instruments widely used in psychophysiological studies. First, the Self-Assessment Manikin (article), with which participants reported their perceived valence (i.e., degree of pleasantness) and arousal (i.e., degree of excitement) on a 7-point Likert scale. Second, the NASA Task Load Index (NASA-TLX) (article), which measures perceived task workload in six dimensions, namely mental demand, physical demand, temporal demand, performance, effort, and frustration, each on a 20-point scale. The self-reporting user interface is shown below:

Sensors used

Participants wore three physiological sensors while they completed the protocol: first, an Empatica E4 wristband (url), used to collect heart rate; second, a Zephyr Bioharness 3.0 chest strap (url), embedded with heart rate and breathing rate sensors; and third, a Thought Technology electrodermal activity (EDA) sensor (url). EDA sensors are commonly attached to the fingers or palms, but since having a sensor in either of these locations would interfere with proper smartphone usage, we attached the EDA sensor to the sole of each participant’s right foot.

Status of the dataset

All participants enrolled in the study successfully completed four sessions, for a total of 80 sessions. Most of the sessions had no technical issues, as seen in the image below. In eight of the 80 sessions, we had technical issues with the EDA sensor, which resulted in partial or complete loss of EDA data for those sessions. The most common reasons for these issues were sensor failure, the electrode detaching from the participant’s foot during the session, and battery problems. In three of the 80 sessions, the keylogging module we developed encountered issues saving the collected keystroke data to the phone’s internal memory. Finally, in three of the 80 sessions, we had issues with the Bioharness sensor caused by sensor failure. None of the sessions had simultaneous technical issues with EDA acquisition, Bioharness recording, and keylogging.

The dataset

As mentioned above, if you are interested in downloading Mobile-Stress, please reach out to Dennis Silva (silva.dennis@tamu.edu).

Note: the clocks of the smartphone and the physiological sensors were synchronized prior to the beginning of each session. Therefore, the time columns across all .CSV files refer to the same shared clock.

Typing data

The typing information is available in this url. Each column in this .CSV file is explained below:

| Column | Explanation |
| --- | --- |
| index | unique identifier for each row |
| participant_id | number between 1 and 20 representing each participant in the dataset |
| session | number between 1 and 4 representing the session number |
| condition | condition under which the typing session was completed (E: neutral, H: stressed) |
| time | time elapsed since the current session started |
| key | key involved in the keystroke event (e.g., A, DELETE, SPACE) |
| event_type | type of keystroke event registered (0: key-down, 1: key-up, 2: key-drag) |
| pressure | force exerted onto the touch screen during the current keystroke event |
| x | x coordinate of the keystroke event |
| y | y coordinate of the keystroke event |
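
As a starting point, the snippet below sketches how this file could be loaded and used to derive a simple keystroke-timing feature. The local file name typing.csv and the naive in-order pairing of key-down/key-up events are assumptions made for the example, not part of the dataset specification.

```python
import pandas as pd

# Assumed local file name; download the typing .CSV via the link above.
typing = pd.read_csv("typing.csv")

# Keystrokes of one participant/session, in chronological order.
sample = typing[(typing.participant_id == 1) & (typing.session == 1)].sort_values("time")

# Naive hold-time estimate: pair the i-th key-down with the i-th key-up
# (event_type codes from the table above: 0 = key-down, 1 = key-up).
downs = sample[sample.event_type == 0]["time"].to_numpy()
ups = sample[sample.event_type == 1]["time"].to_numpy()
n = min(len(downs), len(ups))
hold_times = ups[:n] - downs[:n]
print(f"mean hold time: {hold_times.mean():.3f}")
```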

Physiological sensors

Thought Technology Electrodermal activity

Electrodermal activity data was pre-processed using NeuroKit2 (url). The electrodermal activity data is provided in this url. Each column in this .CSV file is detailed below:

| Column | Explanation |
| --- | --- |
| participant_id | number between 1 and 20 representing each participant in the dataset |
| session | number between 1 and 4 representing the session number |
| condition | number representing under which condition the typing session was completed (0: neutral, 1: stressed) |
| eda_phasic | phasic component of the electrodermal activity |
| eda_tonic | tonic component of the electrodermal activity |
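
Since the phasic and tonic components are already provided, a typical first step is to summarize them per condition, as in the minimal sketch below. The local file name eda.csv is an assumption; the actual file is obtained through the link above. (Re-doing the tonic/phasic decomposition would require the raw EDA signal and its sampling rate, which are not part of this file.)

```python
import pandas as pd

# Assumed local file name; download the EDA .CSV via the link above.
eda = pd.read_csv("eda.csv")

# Mean phasic and tonic EDA per condition (0 = neutral, 1 = stressed).
print(eda.groupby("condition")[["eda_phasic", "eda_tonic"]].mean())
```
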
Zephyr Bioharness 3.0

Bioharness data is provided with one sample per second. The data was pre-processed internally and is available in this url. Each column in this .CSV file is detailed below:

| Column | Explanation |
| --- | --- |
| index | unique identifier for each row |
| participant_id | number between 1 and 20 representing each participant in the dataset |
| session | number between 1 and 4 representing the session number |
| condition | condition under which the typing session was completed (E: neutral, H: stressed) |
| time | date and time of day, in the format mm/dd/yyyy HH:MM:ss.SSS |
| HR | heart rate inferred by the sensor |
| BR | breathing rate inferred by the sensor |
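
As an example, the timestamps in this file can be parsed with the format documented above and the heart and breathing rates summarized per condition. The local file name bioharness.csv is an assumption made for this sketch.

```python
import pandas as pd

# Assumed local file name; download the Bioharness .CSV via the link above.
bio = pd.read_csv("bioharness.csv")

# Parse the timestamp column (format documented above) and summarize per condition.
bio["time"] = pd.to_datetime(bio["time"], format="%m/%d/%Y %H:%M:%S.%f")
print(bio.groupby("condition")[["HR", "BR"]].mean())
```
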
Empatica E4

Empatica E4 data is provided with one sample per second. Unlike the Zephyr Bioharness chest strap, the Empatica E4 infers heart rate using a photoplethysmography (PPG) sensor. The data was pre-processed internally and is available in this url. Each column in this .CSV file is detailed below:

| Column | Explanation |
| --- | --- |
| index | unique identifier for each row |
| participant_id | number between 1 and 20 representing each participant in the dataset |
| session | number between 1 and 4 representing the session number |
| condition | condition under which the typing session was completed (E: neutral, H: stressed) |
| time | date and time of day, in the format mm/dd/yyyy HH:MM:ss |
| HR | heart rate inferred by the sensor |
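
Because the smartphone and sensor clocks were synchronized (see the note above), the two heart-rate streams can be compared by aligning them on their timestamps. The sketch below does this with a nearest-timestamp merge for a single participant and session; the local file names are assumptions made for this example.

```python
import pandas as pd

# Assumed local file names for the two heart-rate files described above.
e4 = pd.read_csv("empatica_e4.csv")
bio = pd.read_csv("bioharness.csv")

# Parse the timestamp formats documented in the tables above.
e4["time"] = pd.to_datetime(e4["time"], format="%m/%d/%Y %H:%M:%S")
bio["time"] = pd.to_datetime(bio["time"], format="%m/%d/%Y %H:%M:%S.%f")

def one_session(df, participant=1, session=1):
    """Keep a single participant/session, sorted by time (required by merge_asof)."""
    keep = (df.participant_id == participant) & (df.session == session)
    return df[keep].sort_values("time")

# Align the two HR streams on their nearest timestamps (within 1 second).
merged = pd.merge_asof(one_session(e4), one_session(bio), on="time",
                       suffixes=("_e4", "_bio"), direction="nearest",
                       tolerance=pd.Timedelta("1s"))
print(merged[["time", "HR_e4", "HR_bio"]].head())
```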

Accelerometer

We captured accelerometer data at a frequency of 60 Hz using the smartphone’s built-in sensor. The accelerometer data is available in this url. Each column in this .CSV file is detailed below:

| Column | Explanation |
| --- | --- |
| participant_id | number between 1 and 20 representing each participant in the dataset |
| session | number between 1 and 4 representing the session number |
| condition | condition under which the typing session was completed (E: neutral, H: stressed) |
| x | acceleration registered along the x-axis |
| y | acceleration registered along the y-axis |
| z | acceleration registered along the z-axis |
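
A common first feature from this file is the magnitude of the acceleration vector, which gives a rough picture of how much the phone was moving under each condition. The local file name accelerometer.csv is assumed for the sketch below.

```python
import numpy as np
import pandas as pd

# Assumed local file name; download the accelerometer .CSV via the link above.
acc = pd.read_csv("accelerometer.csv")

# Magnitude of the 60 Hz acceleration vector, summarized per condition.
acc["magnitude"] = np.sqrt(acc["x"]**2 + acc["y"]**2 + acc["z"]**2)
print(acc.groupby("condition")["magnitude"].agg(["mean", "std"]))
```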

Self-Reports

As discussed earlier, we also collected perceived valence/arousal (SAM) and mental workload (NASA-TLX) after the CWT and typing tasks. The self-report data is available in this url. Each column in this .CSV file is detailed below:

| Column | Explanation |
| --- | --- |
| participant_id | number between 1 and 20 representing each participant in the dataset |
| session | number between 1 and 4 representing the session number |
| task | task performed (0: CWT, 1: typing task) |
| condition | number representing under which condition the task was completed (0: neutral, 1: stressed) |
| arousal | degree of excitement the participant was feeling during the task (the higher, the more energetic) |
| valence | degree of pleasantness the participant was feeling during the task (the higher, the more positive) |
| TLX-1 | NASA-TLX mental demand (the higher, the more demanding) |
| TLX-2 | NASA-TLX physical demand (the higher, the more demanding) |
| TLX-3 | NASA-TLX temporal demand (the higher, the more demanding) |
| TLX-4 | NASA-TLX performance (the higher, the better the performance) |
| TLX-5 | NASA-TLX effort (the higher, the more effort was put into the task) |
| TLX-6 | NASA-TLX frustration (the higher, the more frustrating) |
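
The sketch below summarizes the self-reports per condition and computes an unweighted ("raw") NASA-TLX score as the mean of the six subscales. The local file name self_reports.csv is an assumption; also note that TLX-4 (performance) is coded so that higher means better performance, so you may want to reverse it before averaging it into a workload index.

```python
import pandas as pd

# Assumed local file name; download the self-report .CSV via the link above.
reports = pd.read_csv("self_reports.csv")

# Unweighted ("raw") NASA-TLX score: the mean of the six subscales.
tlx_cols = [f"TLX-{i}" for i in range(1, 7)]
reports["raw_tlx"] = reports[tlx_cols].mean(axis=1)

# Compare perceived arousal, valence, and workload across conditions
# (0 = neutral, 1 = stressed) for the typing task (task == 1).
typing_reports = reports[reports["task"] == 1]
print(typing_reports.groupby("condition")[["arousal", "valence", "raw_tlx"]].mean())
```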

License

The dataset is released under the CC BY-NC 4.0 license; a summary of the license can be found here, and the full license can be found here. For any usage not covered by the CC BY-NC 4.0 license, please contact Dr. Ricardo Gutierrez-Osuna (rgutier@tamu.edu).

Acknowledgments

The development of this dataset was made possible with financial support from the National Science Foundation under Grant No. 1704636. We would also like to thank all participants for their time and commitment.