Steve Bromley, Cyril Rebetez, Luis Duarte
Abstract: Within the last few years, there has been a massive amount of interest and investment in virtual reality (VR), with high-profile headsets in production from Sony, Oculus, Valve, and Samsung. When these products launch, a whole new audience who have never tried virtual reality before will be exposed to the technology and its potential.
Virtual Reality has long been known to be a potential cause of Simulator Sickness, a form of Motion Sickness, from the early VR experiments of the nineties onwards. This is obviously an undesirable effect for end users, and teams working on VR content have been vocal about the need to avoid or reduce simulator sickness. To help with these efforts, user researchers need to be able to detect and quantify simulator sickness.
The current main method for measuring this is the Simulator Sickness Questionnaire (SSQ). This is a short survey which can be administered before and after a play session to identify a change in the symptoms associated with simulator sickness. The main advantage of this method is that it is widely used and quick to administer. Other methods do exist, such as the Postural Equilibrium Test, but these are more cumbersome and usually require the player to stand. Attempts have also been made using psychophysiological methods, with some promising results.
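As an illustration of the pre/post comparison described above, the SSQ's standard scoring (Kennedy et al., 1993) can be sketched in a few lines: 16 symptoms are each rated 0 to 3, three weighted subscales are computed, and the change between the pre- and post-session totals indicates symptom onset. The item-to-subscale mapping and weights below follow the standard instrument; the function and dictionary names are illustrative.

```python
# Sketch of SSQ scoring, assuming the standard Kennedy et al. (1993)
# subscale weights. Each symptom is rated 0 (none) to 3 (severe);
# some symptoms load on more than one subscale.

NAUSEA = ["general_discomfort", "increased_salivation", "sweating", "nausea",
          "difficulty_concentrating", "stomach_awareness", "burping"]
OCULOMOTOR = ["general_discomfort", "fatigue", "headache", "eyestrain",
              "difficulty_focusing", "difficulty_concentrating",
              "blurred_vision"]
DISORIENTATION = ["difficulty_focusing", "nausea", "fullness_of_head",
                  "blurred_vision", "dizzy_eyes_open", "dizzy_eyes_closed",
                  "vertigo"]

def ssq_scores(ratings: dict) -> dict:
    """Compute SSQ subscale and total scores from symptom ratings (0-3)."""
    n = sum(ratings.get(symptom, 0) for symptom in NAUSEA)
    o = sum(ratings.get(symptom, 0) for symptom in OCULOMOTOR)
    d = sum(ratings.get(symptom, 0) for symptom in DISORIENTATION)
    return {
        "nausea": n * 9.54,
        "oculomotor": o * 7.58,
        "disorientation": d * 13.92,
        "total": (n + o + d) * 3.74,
    }

# Pre/post comparison: a positive delta suggests symptoms arose during play.
pre = ssq_scores({"eyestrain": 1})
post = ssq_scores({"nausea": 2, "eyestrain": 2, "dizzy_eyes_open": 1})
delta_total = post["total"] - pre["total"]
```

This also makes the paper's sampling concern concrete: with most participants reporting a delta of zero, many sessions are needed before the mean delta becomes informative.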
However, there are a number of drawbacks to the Simulator Sickness Questionnaire which make it impractical for video games research. Because many people do not experience simulator sickness at all, a large number of participants is required before conclusions can be drawn. When iterating through different versions of a game, this quickly becomes expensive and time-consuming.
The SSQ can also only identify whether simulator sickness occurred during the session; it does not help pinpoint the cause of the sickness, which limits its usefulness to developers.
There is therefore a need to develop a methodology and tool for measuring Simulator Sickness which can draw meaningful conclusions from statistically small numbers of users and combine this with qualitative data identifying the causes of Simulator Sickness. [PDF]
Rina R. Wehbe and Lennart E. Nacke, University of Waterloo, Canada
Abstract: By overlapping information from a variety of techniques, researchers are able to gain a better overall picture of the user experience. In Games User Research (GUR), a variety of methodologies are in use, ranging from qualitative approaches (e.g. interviews) and quantitative approaches (e.g. metrics) to physiological approaches (e.g. electroencephalography (EEG)). With the combination of different techniques, synchrony of data collection becomes essential. In this paper, details such as sampling rate, marker placement, and time stamps are discussed. [PDF]
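The synchronization problem the abstract raises (sampling rates and time stamps across instruments) can be illustrated with a minimal sketch: converting a physiological sample index into a shared clock and matching it to the nearest game event. The function and field names here are hypothetical, not taken from the paper.

```python
# Hypothetical sketch: aligning an EEG marker (a sample index at a fixed
# sampling rate) with game telemetry events stamped on a shared clock.

def sample_to_time(sample_index: int, sampling_rate_hz: float,
                   recording_start: float) -> float:
    """Convert an EEG sample index to a shared-clock timestamp in seconds."""
    return recording_start + sample_index / sampling_rate_hz

def nearest_event(timestamp: float, events: list) -> dict:
    """Find the telemetry event closest in time to a physiological marker."""
    return min(events, key=lambda e: abs(e["t"] - timestamp))

# EEG recorded at 512 Hz, started 2.0 s after the telemetry clock's zero.
marker_t = sample_to_time(sample_index=1024, sampling_rate_hz=512.0,
                          recording_start=2.0)

events = [{"t": 3.2, "name": "enemy_spawn"},
          {"t": 4.1, "name": "player_hit"}]
matched = nearest_event(marker_t, events)
```

Even a small clock offset or a wrong sampling-rate assumption shifts every marker, which is why the paper treats these details as essential rather than incidental.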
Jean-Luc Potte, Ubisoft, France
Abstract: At Ubisoft, user tests involving 16 participants playing the same game over several days are frequent. These tests raise the challenge of deciding how much video data to use. Do you record the screen of each participant during the whole session? Do you limit the recording to only specific parts of the session? And if something exceptionally interesting happens to one of your participants, will the video be enough to make an informed analysis of their experience at the time?
This talk will present a methodology currently in development that tries to retain the benefits of screen recording while reducing the quantity of data recorded. It will cover how we used this methodology on several Ubisoft games, the principles behind it, and the problems we faced using it. The talk will make the case for the use of smaller but more meaningful data sets in order to save time, effort, and money. [PDF]
Ansgar Depping and Regan Mandryk, University of Saskatchewan, Canada
Abstract: This position paper aims to illustrate the benefits of understanding player attribution in the context of games user research. [PDF]
Johanna Pirker and Christian Gütl, Graz University of Technology, Austria
Abstract: Think-aloud protocols and eye-tracking studies have been shown to be valuable tools for evaluating different systems, such as websites, software, or games. Some early studies also suggest combining eye tracking and think-aloud protocols to gain deeper insights into usability and design issues. The outcomes of this combined evaluation approach are often used to inform design strategies; however, the approach is scarcely integrated into the design process itself. In this paper we explain how eye tracking combined with think-aloud protocols can be used to redesign user interactions with an existing system, with the goal of integrating game design elements and gamification strategies. This approach can be applied in early application stages, before the main game design elements are integrated. In a first experiment, we use the outcomes of the combined evaluation approach to make initial design decisions for gamification strategies. First findings indicate that combining think-aloud protocols with eye tracking can give insights into the design of appropriate task goals and a fitting feedback design. [PDF]
Jonathan Barbara, Saint Martin’s Institute of Higher Education, Malta
Abstract: Consistency across experiences in a transmedial production is a criterion of user experience that has so far eluded capture. This position paper highlights the challenges met in designing a tool to measure consistency and suggests a consistency scale model across the four dimensions of transmedia experiences. [PDF]
Simon Wallner, Vienna University of Technology, Austria
Abstract: Ephemeral tools, tools that provide data only in the moment and in real time without retaining it for later analysis, can be a valuable addition to the testing setup for both small and large game productions. As absurd as it may sound not to hold on to as much data from a user test as possible, the use of ephemeral tools can be justified by lower cost, quicker use, and a faster turnaround. [PDF]
Naeem Moosajee, Thomas Galati, Brandon Drenikow, Pejman Mirza-Babaei, UOIT, InterGUR Lab
Abstract: Formative user test sessions are becoming more integrated in game development cycles. However, user tests are not always feasible or affordable for smaller independent game studios, as they require specialised equipment and expertise. Given the recent growth and prevalence of independent developers, there is a need to adapt games user research methods to conduct fast, easy-to-learn, and affordable user tests. Therefore, our research focuses on developing tools for integrating games user research more effectively in the development cycles of independent studios. The paper showcases two of these tools that are currently under development in our lab. [PDF]