Monday, January 14, 2008

Visual meaning readings: cognitive science

Hi folks,

Here's the second set of readings, focusing on the perspective from cognitive scientists:

Best, Ben.


Ryan said...

I'm not sure where you want the reactions so I figure the comments seemed like a good place. The first paper from Zhang and Norman was interesting and had some pretty good experimental work to back their ideas.

Experiment 1 seemed the weakest and least interesting of the three. Because the rules kept changing, this experiment didn't seem to have much to do with the rest of the paper. It didn't really measure how representations affected problem-solving ability, but instead how changing the problem changed its difficulty.

Experiment 2 was much more interesting to me. It makes sense that moving some rules into the physical setup of the problem would allow the subjects to solve the problems faster. With all internal rules, you have to keep consciously thinking over each rule and checking to make sure you're following them. With external rules you can use some subconscious thought processes to make sure you don't put a little coffee cup into the bigger one. I bet that if you could give subjects a large number of tests with some of the same internal rules over and over, they would eventually develop a similar subconscious process to handle those rules, and the difference between internal and external rules would disappear.
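The internal-vs-external distinction can be made concrete with a toy sketch (my own illustration, not from the paper): in an "internal" representation, every rule is an explicit check the solver must consciously run on each move, whereas in an "external" one (the nested coffee cups) the illegal moves can't even be physically expressed.

```python
# Toy sketch of "internal" rules in a 3-disk Tower of Hanoi move check.
# Each rule must be verified explicitly; in an "external" representation
# (nested cups), the same violations are impossible to express, so no
# checking is needed. Entirely illustrative, not the paper's materials.

def legal_move_internal(pegs, src, dst):
    """All rules checked consciously: there must be something to move,
    only the top disk moves, and it can never go onto a smaller disk."""
    if not pegs[src]:
        return False                                  # rule: something to move
    disk = pegs[src][-1]                              # rule: only the top disk
    return not pegs[dst] or pegs[dst][-1] > disk      # rule: no big-on-small

# Pegs hold disk sizes bottom-to-top.
pegs = [[3, 2, 1], [], []]
print(legal_move_internal(pegs, 0, 1))  # True
print(legal_move_internal(pegs, 1, 2))  # False: nothing on peg 1
```

In the cup version, the physics does the last check for you; that offloading is exactly what Ryan suggests practice could eventually internalize.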

Experiment 3 was also interesting and it also made a lot of intuitive sense to me. Just like with the papers from Tuesday, I feel like I can work much more intuitively with categorizing and prioritizing things by position than by other attributes like size, shape or color.

The second paper by MacKinlay was intriguing, but didn't seem to come to any real conclusions. I can definitely see the utility of a system like the one they describe as I try to teach students, with no practical computer background besides MySpace, to use Excel and create charts and graphs. The paper seems to be relatively positive about the system, but doesn't really discuss much in the way of actual results. A simple experiment would have been nice. Give a few different charts produced by the APT system to human subjects for them to rate in usefulness and compare that to the effectiveness order produced by APT.
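The experiment proposed above amounts to comparing two rankings, which could be scored with a rank correlation. Here is a minimal sketch (all the data is made up for illustration; APT never produced these numbers):

```python
# Sketch of the proposed experiment: compare APT's effectiveness
# ordering of charts against mean human usefulness ratings using
# Spearman rank correlation. All numbers are hypothetical.

def spearman_rho(xs, ys):
    """Spearman rank correlation for equal-length lists without ties."""
    n = len(xs)
    rank = lambda vs: {v: i for i, v in enumerate(sorted(vs))}
    rx, ry = rank(xs), rank(ys)
    d2 = sum((rx[x] - ry[y]) ** 2 for x, y in zip(xs, ys))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

apt_rank = [1, 2, 3, 4, 5]                 # 1 = most effective per APT
human_rating = [9.1, 8.4, 6.0, 6.5, 3.2]   # higher = rated more useful

# Negate ratings so both sequences should rise together if APT agrees.
rho = spearman_rho(apt_rank, [-r for r in human_rating])
print(round(rho, 2))  # 0.9
```

A rho near 1 would mean APT's effectiveness order matches human judgments; near 0, no relationship.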

Okay, after looking over this I think I wrote too much, but oh well.

Stuart Heinrich said...

Representations in Distributed Cognitive Tasks

The authors showed that tasks using implicitly understood rules are easier than logically identical tasks that require memorizing newly stated rules. They also showed that rules pertaining to location, size, and color vary in difficulty (in that order, from easiest to hardest). Both of these facts could be useful in the creation of problems designed to be perceptually easier or harder.

Although the authors are probably looking for ways to make cognition easier, it's probably more likely that this kind of information be used against us in the design of standardized tests!

Unfortunately, the data sets are quite small, so it is hard to be confident in the results. That said, they appear conclusive in that the results are mostly consistent. My only other criticism is that the paper was quite lengthy for such a small set of results.

Automating the Design of Graphical Presentations

The authors do not significantly motivate the problem they are trying to solve. They point out that some effort must go into designing a graphical presentation, and jump immediately to the conclusion that this would be better done by an automated program. However, they seem to be looking for a solution to a problem that really does not exist, because I cannot think of any situation in which such decisions would be better made by an algorithm -- even a good one. In order for an application to collect data, the designer must by definition already want to present some particular aspect of that data -- or else they wouldn't know what to collect. This defines a specific set of data, and the proper style of presentation is practically a one-to-one mapping.

Even if an algorithm were capable of making this decision for a human (which is, in my estimation, not at all a costly decision for a human to make), the human must still provide the contextual information that describes which properties of the data are important. Defining this information must be at LEAST as difficult as designing the ideal graphical representation, if not harder. Don't people usually already know what kind of graph they want to make? Who wants to have the program say, "no! You should make a pie graph!" when you want a bar graph?

The only situation I can think of where it would be necessary or desirable to have a computer design the graphical presentation is if the computer also independently decided to collect the data for some specific purpose. But that implies an autonomous intelligent computer agent doing its own original research, which is not on the visible horizon of future technologies... which is good, because they might put us all out of work.

With that said, I think their method of defining a graphical language that attempts to represent the information content of the data in a complete and concise way is not bad, although it may be overdoing it a bit to cast it as a full grammar, when it might be more simply represented by a set of components and/or features that are known to cover the required bases.
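The "set of components" alternative could be sketched roughly like this (a toy of my own devising; the channel names and per-type orderings only loosely echo MacKinlay's effectiveness ranking and are not his actual tables):

```python
# Toy sketch of a component-based alternative to APT's grammar: rank
# perceptual channels per data type, then greedily assign each data
# field the most effective channel still free. Illustrative only.

EFFECTIVENESS = {
    "quantitative": ["position", "length", "angle", "area", "color_saturation"],
    "ordinal":      ["position", "density", "color_saturation", "length"],
    "nominal":      ["position", "color_hue", "texture", "shape"],
}

def assign_channels(fields):
    """Give each (name, type) field the best channel not yet used."""
    used, plan = set(), {}
    for name, dtype in fields:
        for channel in EFFECTIVENESS[dtype]:
            if channel not in used:
                used.add(channel)
                plan[name] = channel
                break
    return plan

plan = assign_channels([("price", "quantitative"),
                        ("year", "ordinal"),
                        ("brand", "nominal")])
print(plan)
```

This covers the required bases with a flat lookup instead of a grammar, which is roughly the simplification being suggested.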

Unknown said...

Zhang & Norman:
So my first question is: were these academics politically correct before PC was all over the place? I ask because they mention the "Hobbits-Orcs Problem" which, as it turns out, is the same as the Missionaries and Cannibals problem.

This paper is about internal and external representations. My first thought while reading was whether we can really know what our internal representation of a concept, thought, etc., actually is. Do we even know this now, versus when this was published in '94? It seems to me there is still much mystery about the brain and how it processes things, so how can internal representations be accurately discussed here?

One simple thing that really struck me was their concepts of the theorist and the task performer. The theorist works in representations, and the performer just solves a problem. We often act as both, so it was interesting to think about things from only one point of view (like their example of the theorist seeing three representations of the Tower of Hanoi where the solver sees three separate problems).

I think I need a better explanation of how to separate internal and external. At times in the paper the difference was as clear as day, but at others as clear as mud. I originally thought it was a discrete difference, but there's a lot of fuzzy gray area (the authors often switch whether a given rule is internal or external).

In all I really enjoyed this paper, very interesting.

MacKinlay:
I think the author chose a very hard problem to tackle here. Defining something like "what looks good" with an algorithm is no easy task, possibly impossible in the general case. I think this work is useful, though, in that it at least builds some understanding of the process of creation, even if the end product is not the best solution. I would disagree with Stu that there is "no situation" where an algorithm is useful for creating representations. For one, Stu says there is often a one-to-one mapping, so why can't a computer make that decision rather than making someone press a button? Second, in a focused study with limited data it is probably obvious what is needed on the visualization end. But look at a study like the Women's Health Initiative: massive amounts of data that will take humans years and years to comb through. An automated system that could make meaningful graphs out of portions of that data would probably be very useful, and might even relate data that researchers would not think to put together.

Ben Watson said...

Hey folks! Thanks for all your comments.

In reaction to Ryan:

Zhang & Norman: I liked exp 1 just as much as 2 and 3. Exp 1 indeed showed that more rules (more constraints) make things harder -- up until the point that the next constraint reduced the problem space significantly. The really interesting thing, though, was that when the rule space was externalized (all but one rule), additional rules did not harm performance as much, nor did the fourth rule improve performance. The authors basically say that with externalized rules, a certain cognitive load disappeared, revealing a portion of cognitive work that strictly increased as a function of the number of rules (they call it "planning", but I'm not sure what that means in this context).

On experiment 3: yes, the authors show spatial externalization is better. In fact, they hypothesize that the reason it's better is that spatial information is needed to do conjunctive comparisons (e.g. it's red and on the left end) efficiently. Apparently size and color are harder. But they also point out that it's better to truly externalize the ordinal dimension (disk size) of the problem: size and location are ordinal, but color is not. Size also has the advantage of being spatial!

More about Stu and Alex's contributions soon.