Augmenting Visualization with Natural Language Translation of Interaction: A Usability Study
As visualization tools grow more complex, users often find it increasingly difficult to learn interaction sequences, recall past queries, and interpret visual states. We examine a query-to-question (Q2Q) support system that leverages natural language generation (NLG) techniques to automatically translate query interactions into natural language questions and display them alongside the visualization. We focus on cross-filtered views, a symmetric pattern of multiple coordinated views that involves only nominal/categorical data. We describe a study of the effects of pairing a visualization with a Q2Q interface on several aspects of usability. We find that Q2Q considerably improves the learnability, efficiency, and memorability of the visualization, as measured by task completion speed and the length of the interaction sequences users follow, along with a modest reduction in error rate. From a visual language perspective, we analyze how Q2Q speeds up users' comprehension of interaction, particularly when a visual representation falls short in revealing hidden items or relationships.