Showing posts with label evaluation. Show all posts

Friday, May 18, 2012

Spotted: How accurate are models of text corpora? -- Interpretation and trust: designing model-driven visualizations for text analysis

Interpretation and trust: designing model-driven visualizations for text analysis

Jason Chuang, Daniel Ramage, Christopher Manning, Jeffrey Heer

Statistical topic models can help analysts discover patterns in large text corpora by identifying recurring sets of words and enabling exploration by topical concepts. However, understanding and validating the output of these models can itself be a challenging analysis task. In this paper, we offer two design considerations - interpretation and trust - for designing visualizations based on data-driven models. Interpretation refers to the facility with which an analyst makes inferences about the data through the lens of a model abstraction. Trust refers to the actual and perceived accuracy of an analyst's inferences. These considerations derive from our experiences developing the Stanford Dissertation Browser, a tool for exploring over 9,000 Ph.D. theses.

Sunday, December 4, 2011

Spotted: Empirical Studies in Information Visualization: Seven Scenarios

Sheelagh and Catherine are leaders in viz evaluation. Should be interesting.

PrePrint: Empirical Studies in Information Visualization: Seven Scenarios

We take a new, scenario-based look at evaluation in information visualization. Our seven scenarios (evaluating visual data analysis and reasoning; evaluating user performance; evaluating user experience; evaluating environments and work practices; evaluating communication through visualization; evaluating visualization algorithms; and evaluating collaborative data analysis) were derived through an extensive literature review of over 800 visualization publications. These scenarios distinguish different study goals and types of research questions and are illustrated through example studies. Through this broad survey and the distillation of these scenarios we make two contributions. One, we encapsulate the current practices in the information visualization research community and, two, we provide a different approach to reaching decisions about what might be the most effective evaluation of a given information visualization. Scenarios can be used to choose appropriate research questions and goals, and the provided examples can be consulted for guidance on how to design one's own study.


Saturday, November 12, 2011

Report: Kosara on VisWeek: Visualization is Growing Up

Maturity means effectiveness. 

Visualization is Growing Up

Several topics at this year's VisWeek conference have come up because visualization is playing a bigger role in important decisions. When the consequences can be severe, it is important to know whether a visualization actually works, whether we can trust it, and what biases it might present.