GAP: Developing Out-of-Game Assessment

The H2020-funded Gaming for Peace (GAP) project is now entering a more mature stage, in which all components created over the past years are being tied together into a coherent product: the serious game for peacekeepers. The evaluation activities are key to understanding whether the GAP application delivers in terms of operational suitability, functionality, content, usability and user experience.

Over the past months the Consortium has been working on the evaluation methodology for out-of-game assessment. This assessment is carried out with the players before and after the game. It consists of an individual unconscious-bias test and a questionnaire on gender, communication and culture, in which soft skills are broken down into categories (competencies) that reflect the learning objectives. Items are rated on a Likert scale.
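
As a purely illustrative sketch (the names, fields and competency labels below are hypothetical, not the consortium's actual data model), such a questionnaire could be represented roughly like this:

    from dataclasses import dataclass

    # Hypothetical model of one questionnaire item and a trainee's answer.
    # Competency labels and field names are illustrative only.
    @dataclass
    class Item:
        text: str        # the statement the trainee rates
        competency: str  # e.g. "gender awareness", "communication", "cultural awareness"

    @dataclass
    class Response:
        item: Item
        rating: int      # Likert rating, e.g. 1 (strongly disagree) to 5 (strongly agree)

        def __post_init__(self):
            if not 1 <= self.rating <= 5:
                raise ValueError("Likert rating must be between 1 and 5")

Tying each item to a competency is what later allows responses to be rolled up against the learning objectives.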

One key issue encountered in developing this out-of-game evaluation is its scalability, both in terms of content (the number of indicators and analyses) and in terms of deployment (how to reach the users). The current set-up is pen and paper, which quickly becomes challenging: three sets of questionnaires for ten users already means thirty forms to administer and transcribe. Following up on the responses and tracking learning then becomes a burden for the trainer and the organisation. The process is also error prone and does not allow aggregated analyses.

In practice, this administrative burden means that only a few indicators end up being administered, as trainers feel overwhelmed. The consequence is that trainees, trainers and organisations do not get the granular insight into the achievement of the learning objectives that they require.

One key improvement the consortium wants to test in the coming months is whether this out-of-game assessment can be digitised. The idea is to support trainers in the methodology workshops with a data-collection application that works on iPads and in browsers. It will push the unconscious-bias tests and soft-skills questionnaires before and after the game, making it possible to measure the achievement of the learning objectives. Together with the in-game assessment, this will be a powerful tool for understanding how a trainee is progressing.
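
A minimal sketch of the kind of pre/post comparison such an application could automate, building on the hypothetical model above (again, illustrative only, not the GAP implementation):

    from collections import defaultdict
    from statistics import mean

    def competency_means(responses):
        """Average Likert rating per competency for one trainee."""
        by_comp = defaultdict(list)
        for r in responses:
            by_comp[r.item.competency].append(r.rating)
        return {comp: mean(ratings) for comp, ratings in by_comp.items()}

    def learning_gain(pre_responses, post_responses):
        """Pre/post difference per competency; positive values suggest progress."""
        pre = competency_means(pre_responses)
        post = competency_means(post_responses)
        return {comp: post[comp] - pre[comp] for comp in post if comp in pre}

Done by hand, this is exactly the bookkeeping that overwhelms trainers; done in software, it runs instantly for every trainee and every indicator.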

Achieving this result requires a significant amount of work, some already done and some still ahead. The first iteration of the questions has been digitised and a demo set-up has been produced, which was field tested in May 2018 in Finland. The participants' feedback has been very positive and informs the next steps: refining the question set and preparing it for aggregated analysis, preparing the trainer guidelines, and defining the analytical needs of trainers.
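
Once responses are captured digitally, aggregated analysis across a whole training group also becomes straightforward. A hedged continuation of the sketches above, reusing their hypothetical helpers:

    def cohort_gains(trainee_pairs):
        """Average learning gain per competency across a cohort.

        trainee_pairs is a list of (pre_responses, post_responses) tuples,
        one per trainee, using the hypothetical helpers sketched above.
        """
        totals = defaultdict(list)
        for pre, post in trainee_pairs:
            for comp, gain in learning_gain(pre, post).items():
                totals[comp].append(gain)
        return {comp: mean(gains) for comp, gains in totals.items()}

This is the sort of aggregated view that could feed the trainer guidelines and the analytical needs of trainers mentioned above.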