Posts tagged “mental model”

Mixed Signals

[Photo: green “cool” dashboard indicator]
From a recent rental, here’s a dashboard indicator I’d never seen before. As far as I could figure out, while the car is warming up, the engine temperature light shows a green “cool” indicator. At least, it disappeared after a few minutes, so I concluded it was associated with the car warming up. We don’t want the engine to be too cold, and any indicator at all is perhaps a bad (or less good) thing, so it seemed to be a warning. But green is good, so is it good that it’s lit up? Is it good that the engine is cool? Is it bad when it goes out?

See more of my Vancouver pictures here.

Are mythological optimizations as satisfying as real ones?


When Netflix movies arrive, the barcode on the DVD envelope peeks through a window on the back of the outer envelope. When I put the DVD back in and seal it up before returning it, should I make sure the barcode is still lined up? There’s no indication this is necessary, nor is it very easy to do, since there are eight different ways (four edges and front/back) to orient the DVD.

At this point in Netflix’s history there has been a lot written about their sorting process and envelope design; the whole Netflix experience smacks of optimization (plenty of feedback by email or RSS, consistently rapid shipping in either direction, and of course, the throttling scandal). So it makes some sort of sense that they are scanning incoming packages and those that are scannable will be returned (and the next movie sent out) fastest.

According to general consensus and the official word, this is false. It makes no difference; it’s only scanned when it’s sent out, not when it comes back in.

This gap between perception and reality can create real challenges for companies that deliver technology solutions and hope that the user’s mental model matches the engineer’s or designer’s. We worked with a software vendor whose loyal customer base used a time-intensive transactional system. We heard many stories from these customers about how the system “really” worked. Some had even conducted experiments to document their beliefs. Even as our client brought in increasingly senior technologists to explain how their product worked, people found ways to justify their own models. The technology decisions in the product were somewhat arbitrary (thresholds for the number of milliseconds, or the number of transactions, had been refined over time, from 25 to 15 to 10). The fact that the system was being tweaked created mistrust and lent credence to the customers’ theories about what was really going on behind the scenes. Transparency isn’t sufficient: our client was making other business decisions that customers didn’t see as being in their best interest, and that colored how they viewed the partial information about the technology’s inner workings.

Arthur C. Clarke famously said “Any sufficiently advanced technology is indistinguishable from magic.” Put another way, we often develop complex and irrational mental models about technology. The joke that “a clean car goes faster” demonstrates how we attach emotional attributes to some product or system, despite an intellectual awareness that it isn’t true.

I just sealed up my Netflix envelope; it took some willpower not to fiddle with the barcode. Sure, there’s the written word that says it won’t make a difference. But it just might, maybe, right?
