Better be deaf

As promised, today I am going to share an amusing anecdote which shows how a poorly designed user interface can turn a good, simple system into something hardly usable.

I already told you about my elevator front panel. Another device in my apartment building deserves, I believe, a few words: my entry phone!

In fact, my entry phone has nothing special compared to the other entry phones in the building; they are all the same, and their usage is perfectly ordinary. Each phone is connected to an intercom located at the entrance of the building. When a visitor wants to enter, she presses a button next to the door. The host hears a short tone melody announcing the presence of a visitor, comes to the entry phone, and speaks with the visitor through the intercom. Eventually, the host lets the visitor into the building by pressing another button. As I said, nothing unusual.

But the entry phone has a second function. There is also a doorbell at the door of the apartment, and that bell is connected to the entry phone as well. When a visitor rings, the host hears a melody announcing the presence of the visitor (déjà vu?). The host comes to the door, looks through the peephole, and eventually opens the door to welcome the visitor.

Perhaps you have already figured out the problem: the tone melodies. In both scenarios, the entry phone plays a tone melody, and there is no easy way to know where the visitor is: at the entrance of the building, or at the entrance of the apartment. One could argue that any visitor will first come to the entrance of the building, but that is no help, for several reasons: the building entrance is sometimes left open by a previous visitor; you are sometimes visited by your own neighbor; and some people visit every inhabitant, so they obviously only need to enter the building once.

At the beginning, I believed it was only me. Perhaps I have a bad memory for sounds, or perhaps I would eventually learn which melody goes with which event. But surprisingly, ten years later, the problem remains the same. Each time the entry phone plays its little music, I don't know where to go: to the entry phone or to the door. My wife experiences exactly the same problem. And it seems my neighbors do too! A few weeks ago, I needed to talk to one of them, so I rang at his door. After a few seconds, I clearly heard my neighbor, not far behind the door, vainly trying to speak into the entry phone: "Hello? Hello? Who is it?" So I knocked at the door to tell him he should stop talking to himself in the intercom and come to the door instead.

The fact is that the system exposes two different features through a problematic user interface:
  • It uses a single communication channel: sound. No light, no display, nothing else. But this is not the worst issue, as many existing user interfaces do the same without any major problem (phones, traffic signs, and even software applications, which most of the time focus on the visual user experience only).
  • Both features are exposed through symbols (the melodies) which are totally disconnected from their meaning. They do not evoke anything. They are just short, abstract series of unrelated tones without any semantics. You have to be very imaginative to associate them with one feature or the other.
I am unsure what solution could have been chosen to design an effective interface for that system: visual support, multiple sound sources (one at the door and one at the entry phone), more distinct melodies, an artificial voice? (I'm a bit reluctant to use an artificial human voice when it is not absolutely necessary; it tends to become irritating quickly. This is something I will probably come back to in a future post.)

What I know for sure is that the designer should have called a brainstorming meeting with his colleagues.



The lead developer of NDepend, Patrick Smacchia, explains what he considers a key to successful interface design:

Make the simple things simple and hard things possible. IMHO, this tenet applies perfectly in how UI should be designed. Typically, the most direct way to use a UI control should result in the most awaited feature from a user perspective (make the simple things simple). Then, some extra/hidden UI control facilities can be added to the control to support more in-depth scenario (make hard things possible).

But achieving simplicity in interface design is anything but easy. A common mistake (though apparently not so commonly known) is oversimplification: the designer sets the threshold between simple things and hard things too low. This is an error because it may frustrate the average user. Every time she needs to activate such a designated "hard" feature, even though she considers it a basic one, she has to manipulate a more complex interface to reach the hidden functionality.
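In code, the "simple things simple, hard things possible" tenet often shows up as sensible defaults plus optional parameters. Here is a minimal Python sketch (all names are hypothetical, invented for illustration): the bare call does the most expected thing, while advanced behaviour stays reachable through optional arguments.

```python
# Hypothetical sketch of "simple things simple, hard things possible".
# The no-frills call covers the common case; advanced scenarios remain
# possible through optional keyword arguments.

def save_document(doc, path=None, *, encoding="utf-8", backup=False):
    """Save a document. The common case is just save_document(doc)."""
    target = path or doc.get("default_path", "untitled.txt")
    if backup:
        # A "hard but possible" feature: keep a copy of the old version.
        print(f"backing up previous version to {target}.bak")
    print(f"saving with encoding {encoding} to {target}")
    return target

# Simple thing, simple call:
save_document({"default_path": "letter.txt"})
# Hard thing, still possible:
save_document({}, path="report.txt", encoding="latin-1", backup=True)
```

The danger described above is putting the threshold in the wrong place: if `backup` were something most users need daily, hiding it behind an extra parameter would be oversimplification.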

A good example is the automatically collapsed/expanded menu of Microsoft Office 2003. Why were features like text replace or page setup put in the "hard things" list by default? The authors apparently decided that those functionalities are less commonly used. Perhaps that is actually true for most users. But what about people who, for one reason or another, need to use them twenty or forty times a day?
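Menus of that kind typically adapt by tracking how often each command is used. A toy Python sketch (the class and command names are invented, not how Office actually implements it): the collapsed view shows only the most frequently used commands, so a heavy user of text replace eventually sees it without expanding.

```python
from collections import Counter

# Hypothetical adaptive menu: the collapsed view shows only the N most
# frequently used commands; expanding the menu reveals the rest.
class AdaptiveMenu:
    def __init__(self, commands, visible_count=3):
        self.commands = list(commands)
        self.visible_count = visible_count
        self.usage = Counter()

    def invoke(self, command):
        self.usage[command] += 1

    def collapsed_view(self):
        # Most used first; unused commands keep their original order.
        ranked = sorted(self.commands,
                        key=lambda c: (-self.usage[c], self.commands.index(c)))
        return ranked[:self.visible_count]

menu = AdaptiveMenu(["Open", "Save", "Replace", "Page Setup", "Print"])
for _ in range(20):
    menu.invoke("Replace")   # someone who replaces text twenty times a day
menu.invoke("Print")
print(menu.collapsed_view())  # "Replace" has risen into the short menu
```

Even so, the adaptation only mitigates the problem: until the counter catches up, the frequent user still pays the expansion cost every time.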
But generally speaking, determining what is simple and what is hard is a difficult task, especially if you target a wide audience, from the modest grandma to the tech geek. Mitigation is always possible by providing options for personal customization, or automatic adaptation based on the most frequently used commands (like the menus described above, in fact). But in the real world (I mean the non-IT world), this is sometimes just impossible. Look at this remote control I saw recently in a paper advertisement.

Although very simple, the idea is great and not so common for that kind of device. When folded, it presents just a few buttons for the very basic features: turning the TV on and off, changing the channel, and so on. Only by unfolding it can you access the advanced functionalities. One can suppose that a feature like video recording or TV internet access is probably only accessible from inside the remote control. This is surely not an issue for grandma. But for the tech geek, after all, it is perhaps not such a well-designed device.

Think twice about your audience and its level of expertise before implementing such a solution, whether your system is software- or hardware-based.


Turn right... but not so much!

In his excellent blog, Jeff Atwood has already written several times about road signs from the perspective of user interface design. Indeed, as a professional software developer, I occasionally have to design interfaces (end-user panels, APIs, or frameworks), and I share Jeff's fascination with road signs and with the usage of symbols in general.

In a previous article, I already made an analogy between road signs and software interfaces, about the necessity of designing consistent models.

But even when you follow your own design rules and make a consistent interface, you still have to make proper use of it. Let's continue our analogy with traffic signs and look at the following picture I took recently.

So what? Obviously, you must turn right soon. But why install two nearly identical traffic signs only 10 meters apart? The first sign indicates that the road turns 90° to the right; a few meters later, the second indicates only 45°.

According to the local driving rules (page 121), the two signs actually have the same meaning: they indicate to drivers the obligatory way(s) to follow, according to the direction indicated by the arrow. So in fact, I cannot see any logical or legal reason for the presence of these two signs with different arrows.

This example is a perfect illustration of an interface with a good design (a simple sign, intuitively understandable) but used improperly or inconsistently. Or did I miss something?


Written interfaces and localization

I live in a non-English-speaking country. Although a significant part of the population speaks a little English, the vast majority does not know a single word. This is the case for my wife. She speaks Portuguese and French perfectly, and she has some solid notions of Italian and Spanish, but no English at all.

Of course, there is nothing wrong with that. The most annoying thing for her is the plethora of little devices we use daily (computers, recorders, music players, etc.) which have been designed for international markets but without any consideration for localizing the user interface. Look at this radio alarm clock, for instance:

Each time my wife wants to change its settings, she has to ask me whether she must switch it on or off, because she has no idea what "on" or "off" means. For her, they could just as well be written "sglurmf" and "zxqwaghs"! Without any reference, it is very difficult to remember a foreign word and its meaning.

In the IT world, and especially on the Internet, the ability to switch between languages is so common that we do not notice it any more. Unfortunately, it is not so easy for non-software user interfaces.