Publications

  • 2019
  • The Effects of Sharing Awareness Cues in Collaborative Mixed Reality
    Thammathip Piumsomboon, Arindam Dey, Barrett Ens, Gun Lee, Mark Billinghurst

    Piumsomboon, T., Dey, A., Ens, B., Lee, G. and Billinghurst, M., 2019. The Effects of Sharing Awareness Cues in Collaborative Mixed Reality. Frontiers in Robotics and AI, 6, p.5.

    @ARTICLE{10.3389/frobt.2019.00005,
    AUTHOR={Piumsomboon, Thammathip and Dey, Arindam and Ens, Barrett and Lee, Gun and Billinghurst, Mark},
    TITLE={The Effects of Sharing Awareness Cues in Collaborative Mixed Reality},
    JOURNAL={Frontiers in Robotics and AI},
    VOLUME={6},
    PAGES={5},
    YEAR={2019},
    URL={https://www.frontiersin.org/article/10.3389/frobt.2019.00005},
    DOI={10.3389/frobt.2019.00005},
    ISSN={2296-9144},
    }
    Augmented and Virtual Reality provide unique capabilities for Mixed Reality collaboration. This paper explores how different combinations of virtual awareness cues can provide users with valuable information about their collaborator’s attention and actions. In a user study (n=32, 16 pairs), we compared different combinations of three cues: Field-of-View (FoV) frustum, Eye-gaze ray, and Head-gaze ray against a baseline condition showing only virtual representations of each collaborator’s head and hands. Through a collaborative object finding and placing task, the results showed that awareness cues significantly improved user performance, usability, and subjective preferences, with the combination of the FoV frustum and the Head-gaze ray being best. This work establishes the feasibility of room-scale MR collaboration and the utility of providing virtual awareness cues.
  • Indexing Multivariate Mobile Data through Spatio-Temporal Event Detection and Clustering
    Reza Rawassizadeh, Chelsea Dobbins, Mohammad Akbari and Michael Pazzani

    Reza Rawassizadeh, Chelsea Dobbins, Mohammad Akbari and Michael Pazzani, “Indexing Multivariate Mobile Data through Spatio-Temporal Event Detection and Clustering” in Sensors, vol. 19, no. 3, pp. 448, 2019. Doi: https://doi.org/10.3390/s19030448

    Mobile and wearable devices are capable of quantifying user behaviors based on their contextual sensor data. However, few indexing and annotation mechanisms are available, due to difficulties inherent in raw multivariate data types and the relative sparsity of sensor data. These issues have slowed the development of higher level human-centric searching and querying mechanisms. Here, we propose a pipeline of three algorithms. First, we introduce a spatio-temporal event detection algorithm. Then, we introduce a clustering algorithm based on mobile contextual data. Our spatio-temporal clustering approach can be used as an annotation on raw sensor data. It improves information retrieval by reducing the search space and is based on searching only the related clusters. To further improve behavior quantification, the third algorithm identifies contrasting events within a cluster's content. Two large real-world smartphone datasets have been used to evaluate our algorithms and demonstrate the utility and resource efficiency of our approach to search.
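A toy version of the first two pipeline stages described above: detect events as abrupt changes in a sensor stream, then group nearby events so the resulting clusters can index the raw data. The change threshold and the 1-D clustering radius are illustrative assumptions, not the paper's parameters:

```python
def detect_events(stream, threshold=10):
    """Mark an event at each index where the reading jumps by more
    than `threshold` from the previous reading."""
    return [i for i in range(1, len(stream))
            if abs(stream[i] - stream[i - 1]) > threshold]

def cluster_events(times, radius=3):
    """Group event timestamps whose successive gaps are within `radius`;
    each cluster can then annotate (index) the raw samples it spans,
    so a query need only search the related clusters."""
    clusters = []
    for t in sorted(times):
        if clusters and t - clusters[-1][-1] <= radius:
            clusters[-1].append(t)
        else:
            clusters.append([t])
    return clusters
```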
  • Detecting and Visualizing Context and Stress via a Fuzzy Rule-Based System During Commuter Driving
    Chelsea Dobbins and Stephen Fairclough

    Chelsea Dobbins and Stephen Fairclough, “Detecting and Visualizing Context and Stress via a Fuzzy Rule-Based System During Commuter Driving” in 2019 IEEE International Conference on Pervasive Computing and Communications (PerCom’19), Kyoto, Japan, 11th – 15th March, 2019, pp. 499–504. Doi: https://doi.org/10.1109/PERCOMW.2019.8730600

    Stress is a negative emotion that occurs in everyday life, such as driving. Recurrent exposure to stress can be detrimental to cardiovascular health in the long term. Nevertheless, the development of adaptive coping strategies can mitigate the influence of everyday stress on cardiovascular health. Understanding context is essential to modelling the occurrence of stress and other negative emotions during everyday life. However, driving is a highly dynamic environment, whereby the context is often described using ambiguous linguistic terms, which can be difficult to quantify. This paper proposes a Fuzzy Logic Mamdani Model to automatically estimate different categories of driving context. The system comprises two Membership Functions (MFs), which convert the inputs of speed and traffic density into linguistic variables. Our approach then uses these data to identify six states of driving – Idling, Journey Impedance, High Urban Workload, Low Urban Workload, High Non-Urban Workload and Low Non-Urban Workload. An interactive visualization has then been implemented that links this fuzzy logic model with psychophysiological data to identify the context of stress experienced on the road. The system has been validated using real-world data collected from eight participants during their daily commuter journeys.
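The Mamdani-style pipeline above (fuzzify speed and traffic density, fire AND-rules, take the strongest rule) can be sketched in a few lines. The membership-function breakpoints and the rule table below are illustrative assumptions, not the values from the paper:

```python
def tri(x, a, b, c):
    """Triangular membership function: rises from a to a peak at b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify_speed(kph):
    # Assumed linguistic variables for speed (breakpoints are made up).
    return {
        "stationary": tri(kph, -1, 0, 10),
        "urban": tri(kph, 5, 30, 60),
        "non_urban": tri(kph, 50, 90, 130),
    }

def fuzzify_density(vehicles_per_km):
    # Assumed linguistic variables for traffic density.
    return {
        "low": tri(vehicles_per_km, -1, 0, 40),
        "high": tri(vehicles_per_km, 20, 60, 101),
    }

def classify_state(kph, density):
    """Mamdani rule evaluation: AND = min; return the strongest rule's label."""
    s, d = fuzzify_speed(kph), fuzzify_density(density)
    rules = {
        "Idling": min(s["stationary"], d["low"]),
        "Journey Impedance": min(s["stationary"], d["high"]),
        "High Urban Workload": min(s["urban"], d["high"]),
        "Low Urban Workload": min(s["urban"], d["low"]),
        "High Non-Urban Workload": min(s["non_urban"], d["high"]),
        "Low Non-Urban Workload": min(s["non_urban"], d["low"]),
    }
    return max(rules, key=rules.get)
```

For example, 30 km/h in dense traffic falls cleanly into "High Urban Workload" under these assumed breakpoints.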
  • 2018
  • A Systematic Review of 10 Years of Augmented Reality Usability Studies: 2005 to 2014
    Arindam Dey, Mark Billinghurst, Robert W. Lindeman, J. Edward Swan II

    Dey A, Billinghurst M, Lindeman RW and Swan JE II (2018) A Systematic Review of 10 Years of Augmented Reality Usability Studies: 2005 to 2014. Front. Robot. AI 5:37. doi: 10.3389/frobt.2018.00037

    @ARTICLE{10.3389/frobt.2018.00037,
    AUTHOR={Dey, Arindam and Billinghurst, Mark and Lindeman, Robert W. and Swan, J. Edward},
    TITLE={A Systematic Review of 10 Years of Augmented Reality Usability Studies: 2005 to 2014},
    JOURNAL={Frontiers in Robotics and AI},
    VOLUME={5},
    PAGES={37},
    YEAR={2018},
    URL={https://www.frontiersin.org/article/10.3389/frobt.2018.00037},
    DOI={10.3389/frobt.2018.00037},
    ISSN={2296-9144},
    }
    Augmented Reality (AR) interfaces have been studied extensively over the last few decades, with a growing number of user-based experiments. In this paper, we systematically review 10 years of the most influential AR user studies, from 2005 to 2014. A total of 291 papers with 369 individual user studies have been reviewed and classified based on their application areas. The primary contribution of the review is to present the broad landscape of user-based AR research, and to provide a high-level view of how that landscape has changed. We summarize the high-level contributions from each category of papers, and present examples of the most influential user studies. We also identify areas where there have been few user studies, and opportunities for future research. Among other things, we find that there is a growing trend toward handheld AR user studies, and that most studies are conducted in laboratory settings and do not involve pilot testing. This research will be useful for AR researchers who want to follow best practices in designing their own AR user studies.
  • He who hesitates is lost (… in thoughts over a robot)
    James Wen, Amanda Stewart, Mark Billinghurst, Arindam Dey, Chad Tossell, Victor Finomore

    James Wen, Amanda Stewart, Mark Billinghurst, Arindam Dey, Chad Tossell, and Victor Finomore. 2018. He who hesitates is lost (...in thoughts over a robot). In Proceedings of the Technology, Mind, and Society (TechMindSociety '18). ACM, New York, NY, USA, Article 43, 6 pages. DOI: https://doi.org/10.1145/3183654.3183703

    @inproceedings{Wen:2018:HHL:3183654.3183703,
    author = {Wen, James and Stewart, Amanda and Billinghurst, Mark and Dey, Arindam and Tossell, Chad and Finomore, Victor},
    title = {He Who Hesitates is Lost (...In Thoughts over a Robot)},
    booktitle = {Proceedings of the Technology, Mind, and Society},
    series = {TechMindSociety '18},
    year = {2018},
    isbn = {978-1-4503-5420-2},
    location = {Washington, DC, USA},
    pages = {43:1--43:6},
    articleno = {43},
    numpages = {6},
    url = {http://doi.acm.org/10.1145/3183654.3183703},
    doi = {10.1145/3183654.3183703},
    acmid = {3183703},
    publisher = {ACM},
    address = {New York, NY, USA},
    keywords = {Anthropomorphism, Empathy, Human Machine Team, Robotics, User Study},
    }
    In a team, the strong bonds that can form between teammates are often seen as critical for reaching peak performance. This perspective may need to be reconsidered, however, if some team members are autonomous robots since establishing bonds with fundamentally inanimate and expendable objects may prove counterproductive. Previous work has measured empathic responses towards robots as singular events at the conclusion of experimental sessions. As relationships extend over long periods of time, sustained empathic behavior towards robots would be of interest. In order to measure user actions that may vary over time and are affected by empathy towards a robot teammate, we created the TEAMMATE simulation system. Our findings suggest that inducing empathy through a back story narrative can significantly change participant decisions in actions that may have consequences for a robot companion over time. The results of our study can have strong implications for the overall performance of human machine teams.
  • Effects of Manipulating Physiological Feedback in Immersive Virtual Environments
    Arindam Dey, Hao Chen, Mark Billinghurst, Robert W Lindeman

    Dey, A., Chen, H., Billinghurst, M. and Lindeman, R.W., 2018, October. Effects of Manipulating Physiological Feedback in Immersive Virtual Environments. In Proceedings of the 2018 Annual Symposium on Computer-Human Interaction in Play (pp. 101-111). ACM.

    @inproceedings{dey2018effects,
    title={Effects of Manipulating Physiological Feedback in Immersive Virtual Environments},
    author={Dey, Arindam and Chen, Hao and Billinghurst, Mark and Lindeman, Robert W},
    booktitle={Proceedings of the 2018 Annual Symposium on Computer-Human Interaction in Play},
    pages={101--111},
    year={2018},
    organization={ACM}
    }
    Virtual environments have been proven to be effective in evoking emotions. Earlier research has found that physiological data is a valid measurement of the emotional state of the user. Being able to see one's physiological feedback in a virtual environment has proven to make the application more enjoyable. In this paper, we have investigated the effects of manipulating heart rate feedback provided to the participants in a single user immersive virtual environment. Our results show that providing slightly faster or slower real-time heart rate feedback can alter participants' emotions more than providing unmodified feedback. However, altering the feedback does not alter real physiological signals.
  • Effects of Sharing Real-Time Multi-Sensory Heart Rate Feedback in Different Immersive Collaborative Virtual Environments
    Arindam Dey, Hao Chen, Chang Zhuang, Mark Billinghurst, Robert W Lindeman

    Dey, A., Chen, H., Zhuang, C., Billinghurst, M. and Lindeman, R.W., 2018, October. Effects of Sharing Real-Time Multi-Sensory Heart Rate Feedback in Different Immersive Collaborative Virtual Environments. In 2018 IEEE International Symposium on Mixed and Augmented Reality (ISMAR) (pp. 165-173). IEEE.

    @INPROCEEDINGS{8613762,
    author={A. {Dey} and H. {Chen} and C. {Zhuang} and M. {Billinghurst} and R. W. {Lindeman}},
    booktitle={2018 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)},
    title={Effects of Sharing Real-Time Multi-Sensory Heart Rate Feedback in Different Immersive Collaborative Virtual Environments},
    year={2018},
    pages={165-173},
    keywords={feedback;groupware;virtual reality;immersive collaborative virtual environments;real-time multisensory heart rate feedback;collaborative VR environments;Heart rate;Collaboration;Real-time systems;Physiology;Avatars;Task analysis;Virtual environments},
    doi={10.1109/ISMAR.2018.00052},
    ISSN={1554-7868},
    month={Oct},
    }
    Collaboration is an important application area for virtual reality (VR). However, unlike in the real world, collaboration in VR misses important empathetic cues that can make collaborators aware of each other's emotional states. Providing physiological feedback, such as heart rate or respiration rate, to users in VR has been shown to create a positive impact in single user environments. In this paper, through a rigorous mixed-factorial user experiment, we evaluated how providing heart rate feedback to collaborators influences their collaboration in three different environments requiring different kinds of collaboration. We have found that when provided with real-time heart rate feedback participants felt the presence of the collaborator more and felt that they understood their collaborator's emotional state more. Heart rate feedback also made participants feel more dominant when performing the task. We discuss the implication of this research for collaborative VR environments, provide design guidelines, and directions for future research.
  • A Lifelogging Platform Towards Detecting Negative Emotions in Everyday Life using Wearable Devices
    Chelsea Dobbins, Stephen Fairclough, Paulo Lisboa and Félix Fernando González Navarro

    Chelsea Dobbins, Stephen Fairclough, Paulo Lisboa and Félix Fernando González Navarro, “A Lifelogging Platform Towards Detecting Negative Emotions in Everyday Life using Wearable Devices” in 2018 IEEE International Conference on Pervasive Computing and Communications (PerCom’18), Athens, Greece, 19th – 23rd March, 2018, pp. 306–311. Doi: https://doi.org/10.1109/PERCOMW.2018.8480180

    Repeated experiences of negative emotions, such as stress, anger or anxiety, can have long-term consequences for health. These episodes of negative emotion can be associated with inflammatory changes in the body, which are clinically relevant for the development of disease in the long-term. However, the development of effective coping strategies can mediate this causal chain. The proliferation of ubiquitous and unobtrusive sensor technology supports an increased awareness of those physiological states associated with negative emotion and supports the development of effective coping strategies. Smartphone and wearable devices utilise multiple on-board sensors that are capable of capturing daily behaviours in a permanent and comprehensive manner, which can be used as the basis for self-reflection and insight. However, there are a number of inherent challenges in this application, including unobtrusive monitoring, data processing, and analysis. This paper posits a mobile lifelogging platform that utilises wearable technology to monitor and classify levels of stress. A pilot study has been undertaken with six participants, who completed up to ten days of data collection. During this time, they wore a wearable device on the wrist during waking hours to collect instances of heart rate (HR) and Galvanic Skin Resistance (GSR). Preliminary data analysis was undertaken using three supervised machine learning algorithms: Linear Discriminant Analysis (LDA), Quadratic Discriminant Analysis (QDA) and Decision Tree (DT). An accuracy of 70% was achieved using the Decision Tree algorithm.
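As a sketch of the classification step above, the following extracts windowed mean HR/GSR features and trains a depth-one decision tree (a stump) by exhaustive threshold search. The windowing scheme and the stump (standing in for the full Decision Tree) are simplifications assumed for illustration, not the study's pipeline:

```python
def window_features(hr, gsr, win=5):
    """Mean HR and mean GSR per non-overlapping window of `win` samples."""
    feats = []
    for i in range(0, min(len(hr), len(gsr)) - win + 1, win):
        feats.append((sum(hr[i:i + win]) / win, sum(gsr[i:i + win]) / win))
    return feats

def stump_predict(model, x):
    """Predict 0/1 from a (feature, threshold, polarity) stump."""
    f, t, pol = model
    return pol if x[f] > t else 1 - pol

def train_stump(X, y):
    """Exhaustive search for the (feature, threshold, polarity) split
    with the fewest training errors."""
    best_model, best_err = None, len(y) + 1
    for f in range(len(X[0])):
        for t in {row[f] for row in X}:
            for pol in (0, 1):
                model = (f, t, pol)
                err = sum(stump_predict(model, x) != lab for x, lab in zip(X, y))
                if err < best_err:
                    best_model, best_err = model, err
    return best_model
```

With synthetic windows where stressed periods show elevated HR and GSR, the stump recovers a clean split on one of the two features.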
  • Detecting Negative Emotions During Real-Life Driving via Dynamically Labelled Physiological Data
    Chelsea Dobbins and Stephen Fairclough

    Chelsea Dobbins and Stephen Fairclough, “Detecting Negative Emotions During Real-Life Driving via Dynamically Labelled Physiological Data” in 2018 IEEE International Conference on Pervasive Computing and Communications (PerCom’18), Athens, Greece, 19th – 23rd March, 2018, pp. 830–835. Doi: https://doi.org/10.1109/PERCOMW.2018.8480369

    Driving is an activity that can induce significant levels of negative emotion, such as stress and anger. These negative emotions occur naturally in everyday life, but frequent episodes can be detrimental to cardiovascular health in the long term. The development of monitoring systems to detect negative emotions often relies on labels derived from subjective self-report. However, this approach is burdensome, intrusive, low fidelity (i.e. scales are administered infrequently) and places huge reliance on the veracity of subjective self-report. This paper explores an alternative approach that provides greater fidelity by using psychophysiological data (e.g. heart rate) to dynamically label data derived from the driving task (e.g. speed, road type). A number of different techniques for generating labels for machine learning were compared: 1) deriving labels from subjective self-report and 2) labelling data via psychophysiological activity (e.g. heart rate (HR), pulse transit time (PTT), etc.) to create dynamic labels of high vs. low anxiety for each participant. The classification accuracy associated with both labelling techniques was evaluated using Linear Discriminant Analysis (LDA) and Support Vector Machines (SVM). Results indicated that classification of driving data using subjective labelled data (1) achieved a maximum AUC of 73%, whilst the labels derived from psychophysiological data (2) achieved equivalent performance of 74%. Whilst classification performance was similar, labelling driving data via psychophysiology offers a number of advantages over self-reports: it is implicit, dynamic, objective and high fidelity.
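The dynamic-labelling idea can be sketched as a per-participant median split on a psychophysiological signal, scored with AUC via the rank-sum identity. The median-split rule is an assumed simplification of the paper's labelling procedure:

```python
def median(xs):
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

def dynamic_labels(signal):
    """Label each sample high (1) or low (0) anxiety relative to the
    participant's own median, instead of a one-off self-report scale."""
    m = median(signal)
    return [1 if v > m else 0 for v in signal]

def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney identity:
    the fraction of (positive, negative) pairs ranked correctly."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```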
  • The Influence of Game Demand on Distraction from Experimental Pain: A fNIRS Study
    Kellyann Stamp, Chelsea Dobbins, Stephen Fairclough and Helen Poole

    Kellyann Stamp, Chelsea Dobbins, Stephen Fairclough and Helen Poole, "The Influence of Game Demand on Distraction from Experimental Pain: A fNIRS Study" in Frontiers in Human Neuroscience Conference Abstract: 2nd International Neuroergonomics Conference, Philadelphia, PA, USA, June 27-29, 2018, Doi: 10.3389/conf.fnhum.2018.227.00020

    Video games are the most effective form of distraction from procedural pain compared to other distraction techniques, such as watching television or reading a book (Hussein, 2015). The degree of cognitive engagement with the game is a strong influence on the capacity of game-playing to distract from pain. By increasing game demand to a level that demands maximum levels of attention, it is possible to optimise distraction from pain; however, if the game becomes too difficult, it will fail to act as a distraction.
  • Signal Processing of Multimodal Mobile Lifelogging Data towards Detecting Stress in Real-World Driving
    Chelsea Dobbins and Stephen Fairclough

    Chelsea Dobbins and Stephen Fairclough, “Signal Processing of Multimodal Mobile Lifelogging Data towards Detecting Stress in Real-World Driving” in IEEE Transactions on Mobile Computing, vol. 18, no. 3, pp. 632 – 644, 2018. Doi: https://doi.org/10.1109/TMC.2018.2840153

    Stress is a negative emotion that is part of everyday life. However, frequent episodes or prolonged periods of stress can be detrimental to long-term health. Nevertheless, developing self-awareness is an important aspect of fostering effective ways to self-regulate these experiences. Mobile lifelogging systems provide an ideal platform to support self-regulation of stress by raising awareness of negative emotional states via continuous recording of psychophysiological and behavioural data. However, obtaining meaningful information from large volumes of raw data represents a significant challenge because these data must be accurately quantified and processed before stress can be detected. This work describes a set of algorithms designed to process multiple streams of lifelogging data for stress detection in the context of real world driving. Two data collection exercises have been performed where multimodal data, including raw cardiovascular activity and driving information, were collected from twenty-one people during daily commuter journeys. Our approach enabled us to 1) pre-process raw physiological data to calculate valid measures of heart rate variability, a significant marker of stress, 2) identify/correct artefacts in the raw physiological data and 3) provide a comparison between several classifiers for detecting stress. Results were positive and ensemble classification models provided a maximum accuracy of 86.9% for binary detection of stress in the real-world.
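The pre-processing described in 1) and 2) above can be sketched as artefact rejection over raw inter-beat (RR) intervals followed by standard heart-rate-variability measures. The rejection thresholds here are common rules of thumb, not the paper's exact algorithms:

```python
from math import sqrt

def clean_rr(rr_ms, lo=300, hi=2000, max_jump=0.2):
    """Drop physiologically implausible intervals and sudden jumps
    (ectopic beats / missed detections) from a raw RR series in ms."""
    out = []
    for v in rr_ms:
        if not lo <= v <= hi:
            continue  # outside plausible heart-beat range
        if out and abs(v - out[-1]) > max_jump * out[-1]:
            continue  # >20% change from previous valid beat: artefact
        out.append(v)
    return out

def sdnn(rr_ms):
    """Standard deviation of RR intervals (overall variability)."""
    m = sum(rr_ms) / len(rr_ms)
    return sqrt(sum((v - m) ** 2 for v in rr_ms) / len(rr_ms))

def rmssd(rr_ms):
    """Root mean square of successive differences (short-term variability)."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return sqrt(sum(d * d for d in diffs) / len(diffs))
```

The cleaned series then feeds whatever classifier is used downstream; a perfectly regular heartbeat gives zero variability on both measures.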
  • Towards Clustering of Mobile and Smartwatch Accelerometer Data for Physical Activity Recognition
    Chelsea Dobbins and Reza Rawassizadeh

    Chelsea Dobbins and Reza Rawassizadeh, “Towards Clustering of Mobile and Smartwatch Accelerometer Data for Physical Activity Recognition” in Informatics, vol. 5, no. 2, pp. 29, 2018. Doi: https://doi.org/10.3390/informatics5020029

    Mobile and wearable devices now have a greater capability of sensing human activity ubiquitously and unobtrusively through advancements in miniaturization and sensing abilities. However, outstanding issues remain around the energy restrictions of these devices when processing large sets of data. This paper presents our approach that uses feature selection to refine the clustering of accelerometer data to detect physical activity. This also reduces the computational burden associated with processing large sets of data, as less data is processed by the clustering algorithms, which decreases energy and resource use. Raw accelerometer data, obtained from smartphones and smartwatches, have been preprocessed to extract both time and frequency domain features. Principal component analysis feature selection (PCAFS) and correlation feature selection (CFS) have been used to remove redundant features. The reduced feature sets have then been evaluated against three widely used clustering algorithms, including hierarchical clustering analysis (HCA), k-means, and density-based spatial clustering of applications with noise (DBSCAN). Using the reduced feature sets resulted in improved separability, reduced uncertainty, and improved efficiency compared with the baseline, which utilized all features. Overall, the CFS approach in conjunction with HCA produced higher Dunn Index results of 9.7001 for the phone and 5.1438 for the watch features, which is an improvement over the baseline. This comparative study of feature selection and clustering, with the specific algorithms used, has not been performed previously and provides an optimistic and usable approach to recognizing activities using either a smartphone or smartwatch.
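A minimal version of the correlation feature selection step described above: compute pairwise Pearson correlations over the feature columns and greedily drop any feature that is highly correlated with one already kept. The threshold and greedy order are assumptions for illustration, not the paper's CFS configuration:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def drop_correlated(columns, threshold=0.9):
    """Greedily keep a feature column only if its |r| with every
    already-kept column stays below the threshold; redundant
    (near-duplicate) features are dropped. Returns kept indices."""
    kept = []
    for i, col in enumerate(columns):
        if all(abs(pearson(col, columns[j])) < threshold for j in kept):
            kept.append(i)
    return kept
```

Removing near-duplicate columns before clustering shrinks the feature space, which is what lowers the processing (and hence energy) cost on the device.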