The written contributions to this workshop are now published:
- Methods and Tools for (Semi-)Automated Evaluation in Long-Term In-the-Wild Deployment Studies (Koch, Fietkau, Draheim, Schwarzer, von Luck) (workshop summary)
- Setting up a long term evaluation environment for interactive semi-public information displays (Koch, Fietkau, Stojko)
- Using an Elastic Stack as a Base for Logging and Evaluation of Public Displays (Rohde, Koch, Stojko)
- A New Software Toolset for Recording and Viewing Body Tracking Data (Fietkau)
- Evaluating the engagement of users from public interactive displays (Mathes, Cabalo, Gatzemeyer)
- Using machine learning to determine attention towards public displays from skeletal data (Lacher, Bieschke, Michalowski, Münch)
- Evaluation of a gamification approach for increasing interaction with public displays (Buhl, Buller, Engel)
We are a small team of researchers with an interest in long-term HCI deployment studies and their evaluation. If that’s one of your topics of interest or expertise, we would love to meet you at our half-day workshop at Mensch und Computer 2023 in Rapperswil, Switzerland.
HCI research increasingly focuses on long-term evaluation of real-world field deployments. This includes challenging tasks such as observing usage behavior for longer periods of time and around the clock. Existing methods such as in-situ observations and manual video analysis are hardly sufficient for these situations. Automated evaluation techniques, using different kinds of sensor hardware including body tracking cameras, have been used in recent studies to capture usage behavior in long-term evaluations more efficiently. However, these automatic approaches may not be the only feasible option.
In our workshop we aim to gather and reflect on the current state of the art regarding this development as well as outline perspectives for future research. Topics for discussion are, for example, existing methods and tools, noise and errors in sensor data, best practices for the correlation of automated observations with ground truth data, field work (e.g., interviews and observations) for the contextualization of findings, ethical concerns, and algorithmic approaches for detecting patterns in data.
When designing and evaluating technology in HCI research, the increasing complexity and ubiquity of technological artifacts are combined with the emerging need to take entire sociotechnical systems into account. Methodologies for collecting, combining, and analyzing data are also maturing. For example, the use of both quantitative and qualitative usage behavior data (i.e., mixed methods) is becoming more common in real-life field deployment studies.
One area where we can observe this development is ambient display research. Here, research questions have started to arise that can only be meaningfully investigated in the field. These questions encompass topics such as user behavior (e.g., walking paths, interaction phases), user experience, acceptance (e.g., with respect to privacy or data protection), and the social impact of new technologies. In investigating these questions, the ecological validity of the collected data is crucial, i.e., whether data was collected in a realistic environment reflecting authentic usage behavior.
A recent trend is to augment and automate the processes of data collection and analysis using optical sensors (e.g., 3D cameras). Such research finds motivation in learning more about the spatial, temporal, and social behavior of users. Questions revolve around, for example, the impact of the presence of interactive systems on user walking paths, or how different interaction techniques attract potential users.
Long-term sensor data complements system interaction logs and makes passive as well as active use explicit. However, such methods have yet to prove their applicability in long-term field deployment studies. Methods need to be developed to integrate their data into an overarching research design.
Discussion in the workshop will be conducted in the context of two fundamental questions: (1) What is the current state of the art in automated data processing for evaluation in HCI field deployment studies? (2) How does this knowledge need to be advanced practically (e.g., development of new tools) and methodologically (e.g., introduction of new means for data analysis)?
This workshop aims to act as a forum for researchers active in this area, to promote exchange and collaborative work. As a result, the workshop can bring forth insights into tools and methods used by different research groups, and collaborative development of tools and methods in the future. Let’s prepare and exchange our experiences and ideas, come together to expand our shared understanding of the challenges and solutions, and then, if possible, pave the way for future collaborations.
Call for Papers
For this workshop, we accept paper submissions that present finished or ongoing work related to the workshop topic and not yet published. Apart from recent and current projects, this may also be an opportunity to revisit past experiments that have already appeared in publications and present additional details on their evaluation methods.
We are open to a variety of types of contributions, including but not limited to:
- What values and guidelines should steer future empirical long-term evaluations? Potential topics include: contextual aspects, privacy and (de-)anonymization, user autonomy, …
- How do you approach the issues of data collection and interpretation in long-term empirical deployment studies? Which approaches can be generalized and what are their advantages?
- Have you/has your group created novel tools for handling data from long-term empirical deployment studies? What is their purpose and can they be made available to other researchers?
- Reports of prior long-term evaluations you have done: How did you collect data? How was it interpreted? What conclusions were you able to reach?
Papers need to fit the MuC 2023 short paper format with some adjustments:
- Max. 6 pages in ACM SigConf single-column format, excluding references
- Submission language: English
- Anonymization: no
- No separate poster or video is required
Submissions are to be made through the MuC 2023 ConfTool, although at the time of this writing, the system is not yet set up for workshop submissions. The submission deadline is June 12th, 2023. Notifications about acceptance or rejection will be sent on or before June 21st, 2023, with the final version deadline on July 20th, 2023.
All accepted contributions will be published in the Digital Library „Mensch-Computer Interaktion“ of the Gesellschaft für Informatik e.V. (German Informatics Society) as open access publications. The conference organizers will require non-exclusive usage rights for your submission to that effect.
This workshop will take place on Sunday, September 3rd from 9:00 AM to 12:00 PM. It will be split into two sections:
- Short presentations of the submitted papers.
- A structured collaborative phase tackling some of the central questions through discussion and documentation.
Please note: You can also register for the workshop without submitting a paper (e.g., if you just want to partake in the discussions), assuming there is enough capacity.
Registration for the workshop is limited to Mensch und Computer 2023 attendees. However, if you are not interested in the full conference, it is possible to register for a workshop day ticket for a significantly reduced fee. See MuC 2023 registration information for details.
We are the research team working on the DFG-funded project “Investigation of the honeypot effect on (semi-)public interactive ambient displays in long-term field studies.” The idea for this workshop stems from this context. Through it, we would like to extend our discussion beyond our own scope and learn more about your projects as well as the methods and tools you use.
Michael Koch, University of the Bundeswehr Munich
Julian Fietkau, University of the Bundeswehr Munich
Susanne Draheim, Hamburg University of Applied Sciences
Jan Schwarzer, Hamburg University of Applied Sciences
Kai von Luck, Hamburg University of Applied Sciences
For questions about this workshop, please contact Julian Fietkau via email or on Mastodon.