Mark Billinghurst

Mentor

Prof. Mark Billinghurst has a wealth of knowledge and expertise in human-computer interface technology, particularly in the area of Augmented Reality (the overlay of three-dimensional images on the real world).

A former HIT Lab US Research Associate, Dr Billinghurst completed his PhD in Electrical Engineering at the University of Washington in 2002, under the supervision of Professor Thomas Furness III and Professor Linda Shapiro. As part of the research for his thesis, Shared Space: Exploration in Collaborative Augmented Reality, he invented the Magic Book – an animated children’s book that comes to life when viewed through a lightweight head-mounted display (HMD).

Not surprisingly, Dr Billinghurst has received several accolades in recent years for his contribution to Human Interface Technology research. He was awarded a 2001 Discover Magazine Award for Entertainment for creating the Magic Book technology. He was selected as one of eight leading New Zealand innovators and entrepreneurs to be showcased at the Carter Holt Harvey New Zealand Innovation Pavilion at the America’s Cup Village from November 2002 until March 2003. In 2004 he was nominated for a prestigious World Technology Network (WTN) World Technology Award in the education category, and in 2005 he was appointed to the New Zealand Government’s Growth and Innovation Advisory Board.

Originally educated in New Zealand, Dr Billinghurst is a two-time graduate of the University of Waikato, where he completed a BCMS (Bachelor of Computing and Mathematical Science) with first-class honours in 1990 and a Master of Philosophy (Applied Mathematics & Physics) in 1992.

Research interests: Dr Billinghurst’s research focuses primarily on advanced 3D user interfaces such as:

  • Wearable Computing – Spatial and collaborative interfaces for small wearable computers. These interfaces address the idea of what is possible when you merge ubiquitous computing and communications on the body.
  • Shared Space – An interface that demonstrates how augmented reality, the overlaying of virtual objects on the real world, can radically enhance face-to-face and remote collaboration.
  • Multimodal Input – Combining natural language and artificial intelligence techniques to allow human-computer interaction with an intuitive mix of speech, gesture, gaze and body motion.

Projects

  • Adaptive Virtual Interfaces

    This project explores the possibilities of making virtual reality interfaces more effective by making them adaptive to the users' emotional and cognitive needs.

Publications

  • Exploration of an EEG-Based Cognitively Adaptive Training System in Virtual Reality
    Arindam Dey, Alex Chatburn, Mark Billinghurst

    A. Dey, A. Chatburn and M. Billinghurst, "Exploration of an EEG-Based Cognitively Adaptive Training System in Virtual Reality," 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Osaka, Japan, 2019, pp. 220-226.

    @INPROCEEDINGS{8797840,
    author={A. {Dey} and A. {Chatburn} and M. {Billinghurst}},
    booktitle={2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)},
    title={Exploration of an EEG-Based Cognitively Adaptive Training System in Virtual Reality},
    year={2019},
    volume={},
    number={},
    pages={220-226},
    keywords={Virtual Reality;Cognitively Adaptive Training;Electroencephalography;Alpha Activity;H.1.2 [Models and Principles]: User/Machine Systems-Human Factors;H.5.1 [Multimedia Information Systems]: Artificial-Augmented and Virtual Realities},
    doi={10.1109/VR.2019.8797840},
    ISSN={2642-5254},
    month={March},}
    Virtual Reality (VR) is effective in various training scenarios across multiple domains, such as education, health and defense. However, most of those applications do not adapt to the real-time cognitive or subjectively experienced load placed on the trainee. In this paper, we explore a cognitively adaptive training system based on real-time measurement of task-related alpha activity in the brain. This measurement was made by a 32-channel mobile Electroencephalography (EEG) system and was used to adapt the task difficulty to an ideal level that challenged our participants, and thus theoretically induced the best performance gains from training. Our system required participants to select target objects in VR, and the complexity of the task adapted to the alpha activity in the brain. A total of 14 participants undertook our training and completed 20 levels of increasing complexity. Our study identified significant differences in brain activity in response to increasing levels of task complexity, but response time did not change as a function of task difficulty. Collectively, we interpret this to indicate the brain's ability to compensate for higher task load without affecting behaviourally measured visuomotor performance.
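
    The adaptation loop the abstract describes – measure alpha-band EEG activity in real time, compare it to a reference level, and raise or lower task difficulty accordingly – can be sketched roughly as follows. This is a hypothetical illustration, not the paper's actual implementation: the sampling rate, band limits, baseline comparison, and thresholds are all assumed values.

    ```python
    # Hypothetical sketch of an alpha-power-driven difficulty controller.
    # Assumed parameters; the published system's processing pipeline differs.
    import numpy as np

    FS = 250          # EEG sampling rate in Hz (assumed)
    ALPHA = (8, 12)   # alpha band in Hz

    def alpha_power(window: np.ndarray, fs: int = FS) -> float:
        """Mean spectral power in the alpha band for one EEG channel window."""
        freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
        psd = np.abs(np.fft.rfft(window)) ** 2
        band = (freqs >= ALPHA[0]) & (freqs <= ALPHA[1])
        return float(psd[band].mean())

    def adapt_difficulty(level: int, alpha: float, baseline: float,
                         lo: float = 0.8, hi: float = 1.2) -> int:
        """Raise difficulty when alpha is high relative to baseline
        (suggesting spare cognitive capacity), lower it when alpha drops
        (suggesting overload); otherwise hold the current level."""
        ratio = alpha / baseline
        if ratio > hi:
            return level + 1
        if ratio < lo:
            return max(1, level - 1)
        return level
    ```

    For example, with a baseline of 1.0, a measured alpha power of 1.5 advances a participant from level 5 to level 6, while 0.5 drops them to level 4. A real system would smooth the power estimate across channels and time windows before feeding it to the controller.
    
    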