From page 42 of Universal Principles of Design:
Comparison: a method of illustrating relationships and patterns in system behaviors by representing two or more system variables in a controlled way.
I would hope we all know what a comparison is, but thinking about them from a scholarly perspective may aid us in designing reports, dashboards, and applications that communicate more accurately and effectively. The book goes on to list three key techniques for making valid comparisons, which I will now discuss.
Apples to apples
Comparison data should be presented using common measures and common units.
In my experience, this principle is most frequently violated with respect to time. For instance, imagine comparing total monthly sales for two calendar months that contain greatly different numbers of business days.
A real-life example I have seen of this was a daily report comparing weekly sales averages for the current and prior month. The problem was that each new week was added to the denominator as if it had already fully elapsed as of its first day. Here is how it would have worked for a report run on March 9:
Note that the weekly sales average for March is artificially low, as the current week in progress is already being counted as a full week in the calculation. The average daily sales have actually improved from 18.2 to 20.1, month over month. Projecting the total sales for the current week based on the number of days that have elapsed would be more accurate.
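The denominator bug described above can be sketched in a few lines. The daily sales figures below are hypothetical stand-ins (the original report's table is not reproduced here); the point is only the difference between counting the in-progress week as a full week versus using the fraction of the week that has actually elapsed:

```python
# Hypothetical daily sales for March 1-9 (illustrative values only).
daily_sales = [22, 18, 25, 19, 21, 17, 20, 24, 15]

total_sales = sum(daily_sales)      # 181
days_elapsed = len(daily_sales)     # 9 days as of March 9

# Flawed version: March 9 falls in the second week, and that week in
# progress is counted as if it were already complete.
weeks_counted = 2
flawed_weekly_avg = total_sales / weeks_counted

# Corrected version: convert days elapsed into fractional weeks, so the
# current week contributes only the days that have actually passed.
weeks_elapsed = days_elapsed / 7
corrected_weekly_avg = total_sales / weeks_elapsed

print(f"Flawed weekly average:    {flawed_weekly_avg:.1f}")
print(f"Corrected weekly average: {corrected_weekly_avg:.1f}")
```

With these stand-in numbers, the flawed average (90.5) badly understates the corrected one (about 140.8), which is exactly the "artificially low" effect described above.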
Another violation I frequently see in newspapers and magazines is money that is not adjusted for inflation over time, as in this screen capture from an article in Advertising Age. How instructive is it to see that the average price of a movie theater ticket shot upward from $1.75 in 1970 to $6.88 in 2006? Without accounting for inflation, the prices of most things track upward in a diagonal line, and comparing a dollar in 1970 to a dollar in 2006 is not an apples-to-apples comparison.
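Adjusting for inflation is a one-line calculation once you have a price index. A minimal sketch, using approximate CPI-U annual averages that I am supplying here (roughly 38.8 for 1970 and 201.6 for 2006; check BLS data for precise figures, and note they are not from the article):

```python
# Approximate CPI-U annual averages (assumed values, not from the article).
cpi_1970 = 38.8
cpi_2006 = 201.6

ticket_1970 = 1.75  # average movie ticket price, 1970
ticket_2006 = 6.88  # average movie ticket price, 2006

# Express the 1970 price in 2006 dollars.
ticket_1970_in_2006_dollars = ticket_1970 * (cpi_2006 / cpi_1970)

print(f"$1.75 in 1970 is about ${ticket_1970_in_2006_dollars:.2f} in 2006 dollars")
```

With these assumed index values, the 1970 ticket works out to roughly $9 in 2006 dollars, so the apparent fourfold increase looks very different once the dollars are put on a common basis.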
Comparison data should be presented in a single context, so that subtle differences and patterns in the data are detectable.
Here is something to which I alluded in an earlier post: Mint.com chose to display monthly spending amounts by category in a pie chart, with a slider to change the date. This design is problematic because it is difficult for your brain to hold multiple pieces of information in short-term memory, necessitating incessant flipping back and forth to try to identify differences.
Try it yourself with this simple test:
[swfobj src="http://axisgroup.com/wp-content/uploads/2009/03/pie-line.swf" alt="You need Flash to see this awesomeness" allowfullscreen="false"]
Which type of graph took longer to answer the question? Can you remember the dimension amounts from one month after you have flipped to another month? It’s not easy, but that’s what you have to do in order to divine what has increased or decreased.
Even comparing the slice sizes in a single month can be difficult when the relative amounts are close. (I’m parroting Stephen Few here, who has written extensively about the weaknesses of pie charts.)
This problem largely boils down to choosing the best chart type for your data, and there are a few places to help you get started:
- Juice Analytics Chart Chooser
- The Extreme Presentation Method chart chooser
- Effectively Communicating Numbers: Selecting the Best Means and Manner of Display (PDF)
Claims about evidence or phenomena should be accompanied by benchmark variables so that clear and substantive comparisons can be made.
A number in a vacuum is meaningless. For instance, let’s say your company’s revenue was $1 million in 2008. Okay…is that good or bad?
- If your revenue was $500,000 in 2007, it would be pretty good.
- If your revenue was $2 million in 2007, it would be pretty bad.
- If your projected revenue in 2008 was $750,000, it would be good.
- If your competitors increased their revenue by a much larger percentage than your company did between 2007 and 2008, it would be bad.
- If your revenue was the same in 2007, but your expenses in 2008 were much lower than in 2007, it would be good.
As you can see, without benchmarks like past performance, goals, projections, competitor performance, and complementary metrics, it’s difficult to know the significance of a number.
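The bullet points above can be sketched as a tiny comparison loop. The revenue and benchmark figures are the hypothetical ones from the examples; the "verdict" logic is deliberately simplistic, since real significance depends on the metric:

```python
# The same $1M figure judged against several benchmarks (hypothetical
# numbers from the examples above).
revenue_2008 = 1_000_000

benchmarks = {
    "2007 revenue of $500K": 500_000,
    "2007 revenue of $2M": 2_000_000,
    "2008 projection of $750K": 750_000,
}

for label, benchmark in benchmarks.items():
    change = (revenue_2008 - benchmark) / benchmark
    verdict = "good" if change > 0 else "bad"
    print(f"vs {label}: {change:+.0%} -> {verdict}")
```

The same number reads as +100% (good), -50% (bad), or +33% (good) depending entirely on which benchmark frames it, which is the point: the figure alone carries no verdict.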
How will you know if you’re doing a good job? The simplest test is to show your data to somebody who is not familiar with it and see if they can accurately say what is going well, what isn’t, and what outliers they can identify.