Evaluation in visualization remains a difficult problem because of the unique constraints and opportunities inherent to visualization use. While many potentially useful methodologies have been proposed, there remain significant gaps in assessing the value of the open-ended exploration and complex task-solving that the visualization community holds up as an ideal. In this paper, we propose a methodology to quantitatively evaluate a visual analytics (VA) system by measuring what its users learn as they reapply that knowledge to a different problem or domain. The motivation for this methodology is the observation that the ultimate goal of a user of a VA system is to gain knowledge of and expertise with the dataset, the task, or the tool itself. We propose a framework for describing and measuring knowledge gain in the analytical process based on these three types of knowledge and discuss considerations for evaluating each. We argue that through careful design of tests that examine how well participants can reapply knowledge learned from using a VA system, the utility of the visualization can be more directly assessed.
Chang, Remco, Caroline Ziemkiewicz, Roman Pyzh, Joseph Kielman, and William Ribarsky. "Learning-Based Evaluation of Visual Analytic Systems." Proceedings of the 3rd BELIV Workshop: BEyond Time and Errors: Novel evaLuation Methods for Information Visualization (BELIV '10), 2010. doi:10.1145/2110192.2110197.