Last week my colleague Emily McManus, Managing Editor of TED, wrote "How to rate complicated content?" It's a fascinating piece that ponders the challenges and options a content creator faces when trying to openly rate their own content. Give it a read.
After reading Emily's piece, I wanted to look at the question from a user perspective. I wanted to think about what factors might motivate a person to rate a bit of content, as that should inform the rating system's design. Based on past and general experience with ratings systems, I came up with the following possible motivators. Ultimately, of course, I'd want to do research with actual users to vet my thoughts.
Note: these possible motivators are not mutually exclusive; several may apply to the same person at the same time:
A person may rate something in the hope that doing so will result in better future recommendations for themselves. They perceive that a downvote will lead to fewer similar recommendations, while an upvote will lead to more. The design implication here is a clear mental model: the user should understand that their rating works in their own interest.
The act of rating may stem from a sense of altruism. Hard to believe these days, but some people are genuinely motivated to help others—even online! They may see a positive rating as a way to help people find something good, and a negative rating as a way to protect someone from making a mistake. The implication here is to design a system that encourages trust and good behavior, and offers a meaningful ego play for thoughtful ratings. Likewise, it should discourage bad or disingenuous actors from gaming the system.
Egos may be involved. Depending on the context, people like the halo offered by rating something they love. They may hope the rating confers the cool factor of the item being rated back onto themselves. In these cases, people usually want to broadcast what they love rather than what they hate, though there can be exceptions. An obvious design implication is that the ego play should have appropriate reach; a cloistered system without reach is going to be problematic (see, for example, Apple’s failed Ping social network built into iTunes).
People may want to rate things as a way to leave feedback for an org. They perceive the rating as a way to encourage the good and punish the bad. There may be a bit of altruism here depending on the context and the organization—people might see their feedback as a form of guidance. But there is also a bit of self-interest here as well; users might see a negative rating, for example, as a way to discourage the org from doing something they find objectionable. A design implication is that the act of rating should be portrayed as valuable to, and appreciated by, the org. Moreover, this kind of rating system might benefit from a feedback-response loop: rather than the rating being a terminus, it might be the beginning of an exchange between the org and the user.
In short, a good rating system not only helps the platform on which it exists, it also has meaning for the people who use it. It should feel transparent, honest, usable, and engaging. Above all, it should have both a perceived and a real value for the user.