Archive for March, 2010

Choosing Colors for Data Visualization

March 2, 2010

This article explains that good use of color can enhance and clarify a presentation, while poor use has a negative effect. The use of color is all about function: what information are you trying to convey, and how can color enhance it? The author uses many examples to show the effects of color. In this summary I'll pick out the conclusions from those examples that I think could be important for my thesis.

One of the functions of color is to distinguish one element from another, but one should not forget that every visible part of a presentation has some color, and all colors taken together must be effective. Effective in this case means making it easy for the viewer to understand the roles of and relationships between the elements. To do this one can define categories of information and group and order the information, using color to group related items and to command attention in proportion to importance.

The next step is choosing an effective set of colors; to explain this, the author introduces the principles of color design. Contrasting colors are different, analogous colors are similar; contrast draws attention, analogy groups. In color design, a color is specified along three dimensions. The first, hue, is the color's name, and hues are typically drawn as a hue circle: analogous hues are close together and contrasting hues are on opposite sides of the circle. Next is the value of a color, the perceived lightness or darkness of the color. Contrast in value determines legibility and has a powerful effect on attention. Last is the chroma, which indicates how bright, saturated, vivid or colorful a color is. High-chroma colors are vivid and bright. Using darker and grayer colors has many benefits: they look less garish, more sophisticated, …
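As a rough sketch of these three dimensions, here is a small Python snippet using the standard colorsys module. HSV is only an approximation of the perceptual hue/value/chroma dimensions the author describes (saturation stands in for chroma), and the specific numbers are my own choices, not from the paper:

```python
import colorsys

def hsv_to_hex(h, s, v):
    """Convert hue/saturation/value (each 0-1) to a hex color string."""
    r, g, b = colorsys.hsv_to_rgb(h, s, v)
    return "#{:02x}{:02x}{:02x}".format(int(r * 255), int(g * 255), int(b * 255))

base_hue = 0.6  # a blue
# Analogous hues: close together on the hue circle -> good for grouping.
analogous = [hsv_to_hex((base_hue + d) % 1.0, 0.5, 0.8) for d in (-0.05, 0.0, 0.05)]
# Contrasting hue: opposite side of the hue circle -> draws attention.
contrasting = hsv_to_hex((base_hue + 0.5) % 1.0, 0.5, 0.8)
# Lower chroma (saturation) and value -> a more muted, less garish color.
muted = hsv_to_hex(base_hue, 0.2, 0.6)
print(analogous, contrasting, muted)
```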

Different dimensions have different applications in information display. Making related items the same color (analogous hue) is a powerful way to label and group. Hue contrast is easy to overuse to the point of visual clutter; a better approach is to use a few high-chroma colors as contrast in a presentation consisting primarily of grays and muted colors.


Legibility means being able to read, decipher, discover, and be understood. The difference in value between a symbol and its background is important for legibility: the higher the luminance contrast (difference in value), the easier it is to see the edge between one shape and another. Variation in luminance can also be used to separate overlaid elements into layers, where low-contrast layers can sit behind high-contrast ones without causing visual clutter. A primary rule in many forms of design is to “get it right in black and white”, meaning that the important information should remain legible even if chroma were reduced to zero.
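The effect of luminance contrast on legibility can be made concrete with the WCAG relative-luminance and contrast-ratio formulas. These are not from the paper itself, but they quantify the same "difference in value" idea:

```python
def relative_luminance(rgb):
    """Relative luminance of an sRGB color (components 0-255), per the WCAG 2 formula."""
    def channel(c):
        c = c / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG contrast ratio: 1 (no contrast) up to 21 (black on white)."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Black on white: maximum luminance contrast, close to the ceiling of 21.
print(contrast_ratio((0, 0, 0), (255, 255, 255)))
# Red on green: strong hue contrast but similar value, so the ratio is low.
print(contrast_ratio((200, 60, 60), (60, 150, 60)))
```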

Summarized, these previous statements tell us to “assign color according to function”:

  • use contrast to highlight
  • use analogy to group
  • control value contrast for legibility

In most design situations, the best results are achieved by limiting hue to a palette of two or three colors, and using value and chroma variations within these hues to create distinguishably different colors. The article gives some examples that make this clearer, and refers to ColorBrewer, a website that helps with choosing colors for data display. The examples in the paper always use a white background with the contextual information in shades of gray. As a general rule, a white background with supporting information in shades of gray provides the most effective foundation for a color palette.
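Following that rule, such a palette could be sketched in Python: a white background, grays for context, and two data hues with value/chroma variations within each. The hue choices and numbers here are just illustrative, not prescribed by the paper:

```python
import colorsys

def shades(hue, n=3):
    """n distinguishable colors within a single hue, varying value and chroma."""
    out = []
    for i in range(n):
        step = i / max(n - 1, 1)
        v = 0.45 + 0.5 * step   # dark -> light
        s = 0.9 - 0.3 * step    # vivid -> muted
        r, g, b = colorsys.hsv_to_rgb(hue, s, v)
        out.append("#{:02x}{:02x}{:02x}".format(int(r * 255), int(g * 255), int(b * 255)))
    return out

palette = {
    "background": "#ffffff",                        # white background
    "context": ["#cccccc", "#999999", "#666666"],   # grays for supporting info
    "blues": shades(0.60),                          # first data hue
    "oranges": shades(0.08),                        # second data hue
}
print(palette)
```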

The paper ends with a few notes on background color, noting that most color palettes are designed to be printed on white paper. White as a background color has the advantage that the human visual system adapts its color perception relative to the local definition of white: a white background gives a stable definition of white, and a stable “surface” to focus on.


This paper helped me realize that color is very important to making a visualization easy to understand. I already applied the contrast rule to all of the tags in my graph. I will most likely change the background of my application to white and give the supporting information appropriate colors.

Toward Measuring Visualization Insight

March 2, 2010

This paper starts by telling us that one of the purposes of visualization is gaining insight. Insight is hard to define in the context of visualization, so the article identifies some essential characteristics of it. Insight is: complex, deep, qualitative, unexpected, and relevant. An insight is more interesting the more of these characteristics it has. Visualizations are often evaluated using controlled experiments, but when benchmark tasks are used in these experiments they are not proper tools for measuring insight, since the method depends on the assumption that the benchmark tasks and metrics represent insight. According to the author there are four fundamental problems when compared with the previously mentioned characteristics:

  • they must be predefined by test administrators, leaving little room for unexpected insight and even forcing users into a line of thought that they might not otherwise take.
  • they need definitive completion times
  • they must have definitive answers that measure accuracy
  • they require simple answers

This forces the experimenter toward search-like tasks that don’t represent insight well. These benchmark tasks are far too simplistic and constrained to indicate the insight of a visualization. A claim often made to generalize results of simple benchmark tasks is that complex tasks are built from simple tasks. The author counters this: first, the efficiency of simple benchmark tasks is often due to specific visualization interface features that don’t generalize to more complex tasks; second, a clear decomposition doesn’t exist yet. Another problem often arising in the interpretation of benchmark results is the tradeoff between performance and accuracy. Users are often forced to continue until correctly completing a task, leading to a trial-and-error approach and a misrepresentation of accuracy. It is concluded that controlled experiments on benchmarks are not the right method to evaluate insight.

First of all the author suggests including more complex benchmark tasks, though this still involves some uncertainty because such tasks generally favor visualization overviews rather than detail views. Another method is to let users interpret the visualization in a textual answer, but this is difficult to score, and offering multiple-choice answers could again bias the user. These methods also lead to longer task times and require a larger group of participants to obtain statistically significant results.

A second suggestion is to eliminate benchmark tasks altogether and let researchers observe what insights users gain on their own. An open-ended protocol is one possible method: users are instructed to explore the data and report their insights. A qualitative insight analysis, like the think-aloud protocol, is another possibility. For each insight, a coding method quantifies various metrics (insight category, complexity, …), and these categories can be assigned to common clusters like usability, … The coding converts qualitative data into quantitative data; it is still subjective, but it preserves the qualitative nature of insight. The advantage of eliminating benchmark tasks is that the results reveal what insights visualization users actually gained. The measures are closely related to the fundamental characteristics of insight mentioned above, and the gained insights can also be compared to the insights a researcher expected users to gain.
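To make the coding idea concrete, here is a hypothetical Python sketch of how coded insights from an open-ended session might be aggregated. The metric names loosely follow the characteristics the paper discusses; the records and values are entirely invented:

```python
from collections import Counter

# Hypothetical coded insights from one open-ended session.
# Each record is one reported insight with its coded metrics.
insights = [
    {"category": "overview",  "complexity": 2, "unexpected": False, "correct": True},
    {"category": "outlier",   "complexity": 4, "unexpected": True,  "correct": True},
    {"category": "usability", "complexity": 1, "unexpected": False, "correct": True},
]

# Aggregate the coded metrics into simple quantitative summaries.
by_category = Counter(i["category"] for i in insights)
avg_complexity = sum(i["complexity"] for i in insights) / len(insights)
unexpected_count = sum(i["unexpected"] for i in insights)
print(by_category, avg_complexity, unexpected_count)
```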

The author concludes by pointing out that both types of controlled experiments are needed: benchmark tasks for low-level effects, and the elimination of benchmark tasks for broader insight. It is noted that when combining both approaches into a single experiment, benchmark tasks should not precede the open-ended portion, as this could constrain the user.


This article helped me understand that I need to pay more attention to the open-ended portion of the evaluation of my visualizations. I will combine both methods to gain more information; in my previous evaluations I allowed the user to explore the visualization for only a very short time, and this should be extended. I’ll also need to note what kind of insights I’d like users to gain from my visualization and compare these to the insights gained during the evaluation. In my previous evaluation I also noticed how hard it is to find a good benchmark to test a visualization; this article confirms my impression that such benchmarks are often too simple and force the user in a certain direction. I’ll also need to pay more attention to how I formulate my questions so as not to bias the user.