Authors: G Leiva, V Frau, U Radhakrishnan, MS Lunding

Published in: Proceedings of the ACM on Human-Computer Interaction 9 (4), 1-20, 2025

DOI: 10.1145/3734185

Abstract

Prototyping multimodal interaction in XR (Augmented/Virtual Reality) is challenging for non-technical designers. Moreover, existing XR prototyping tools do not support concurrent multimodal interaction and lack an overall view of the experience's interaction design. We created Mucho, a novel no-code immersive tool for prototyping interactive multimodal XR experiences. Mucho supports three modes: in recording mode, designers demonstrate input examples that populate a timeline representation with events; in playback mode, designers create a video prototype based on the timeline while adding system actions at the corresponding timeframes; in live mode, Mucho generates a state machine to test the interactive experience at runtime. We present an evaluation by demonstration to illustrate how Mucho can prototype a diverse set of examples spanning modalities such as hand gestures, proximity, speech, and gaze. Finally, we discuss the limitations of our approach and outline future directions for supporting immersive prototyping of multimodal XR experiences.