Effects of Uncertainty and Cognitive Load on User Trust in Predictive Decision Making
Abstract
The rapid increase of data across many fields has led to the wide application of Machine Learning (ML) based intelligent systems in predictive decision making scenarios. Unfortunately, these systems appear as a 'black box' to users because of their complex working mechanisms, which significantly affects user trust in human-machine interactions. This is partly due to the uncertainty tightly coupled with the ML models that underlie predictive decision making recommendations. Furthermore, when such analytics-driven intelligent systems are used in modern complex high-risk domains (such as aviation), user decisions are influenced not only by trust but also by higher levels of cognitive load. This paper investigates the effects of uncertainty and cognitive load on user trust in predictive decision making, in order to inform the design of effective user interfaces for such ML-based intelligent systems. Our user study with 42 subjects in a repeated factorial design experiment found that both uncertainty types (risk and ambiguity) and cognitive workload levels affected user trust in predictive decision making. Presenting uncertainty increased trust, but only under low cognitive load conditions, when users had sufficient cognitive resources to process the information. Presenting uncertainty under high load conditions, when cognitive resources were in short supply, decreased trust in the system and its recommendations.
Domains
Computer Science [cs]