Elizabeth Barrette (ysabetwordsmith) wrote,


Award Scoring

This article talks about the arbitrary nature of awards. But it doesn't do a good job of exploring different types of award structure or how to make them more fair if that's what you want to do. So let's see ...


* Popularity contests. You want to know what people like, so you invite a vast number of folks to vote, and whatever floats to the top wins. The fair aspect is that these are very cosmopolitan in letting lots of people participate; the unfair aspect is that they have no guidance other than personal taste. It doesn't say a thing about quality, only crowd appeal. It's primarily subjective.

* Juried awards. A panel of experts examines the material and sifts out the best of it, either based on personal taste or parameters furnished by the award. This tends to be a pretty reliable gauge of quality; it rarely turns up duds. The fairness aspect is that people know what they're doing and don't tend to vote on a whim; the unfairness aspect is that it has few participants with a lot of power, and if they are poorly chosen, much bias may be introduced. It tends to be more objective.

* Performance awards. These aren't even voted on; they're scored based on something the material does, usually a market factor such as selling a certain number of copies or making so much money. Again, it says nothing about quality, just popularity, but it's a fantastic gauge of how something behaves in use. It's objective; anyone can simply look at the numbers to see what's highest.

There are other formats; those are just a few examples.
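The performance format above reduces to pure arithmetic: rank entries by a single objective metric and the top number wins. A minimal sketch, with made-up titles and sales figures standing in for whatever metric an award actually uses:

```python
# Performance-award scoring: rank entries purely by an objective metric.
# The titles and sales numbers here are hypothetical examples.
sales = {"Title A": 120_000, "Title B": 45_000, "Title C": 98_000}

# Highest metric wins; no judgment call is involved.
ranked = sorted(sales, key=sales.get, reverse=True)
print(ranked[0])  # Title A
```

The point of the sketch is that the whole "judging" step is a sort: anyone with the same numbers gets the same winner.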

Some things that may help or hinder fairness:

* Some awards have a participation fee. This necessarily narrows the field, favoring those with access to more funds. The higher the fee, the worse the discrimination. These fees may be overt (entry fees) or covert (membership fees) and are a source of much underrepresentation across many fields.

* Blinding means trying to separate the entry from its creator. When feasible, this is a fantastic way to limit bias. However, sometimes the connection is too well known (e.g. who made a movie), and other times the content can give it away (male authors rarely write childbirth scenes). It's a good tool for fairness in some cases, but not all.

* Judges may be handed a rubric to assist in scoring. The components may be objective (e.g. grammar in literature, run time in film) or subjective (e.g. artistic merit), but most rubrics lean on objective elements because those are easier to define. By detailing what the award considers important, a rubric greatly increases objectivity and makes it easier for judges to keep track of many entries, or of different factors within a single entry. It may or may not include a section for the judge's personal taste. For novice judges, it also teaches them what to look for in entries, rather than leaving them to their own devices. The rubric may or may not be public; some awards hand the scoresheets and judge notes to the participants, which can be very valuable feedback. If nothing else, it improves transparency, because people know what they are being scored on. Ideally, you want people to know the main goals of the award before collecting entries; over time, the rubric can really sharpen that aim -- which is great if you're trying to encourage a particular quality.

There are lots of other options; these are just some samples.
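Rubric scoring, as described above, usually boils down to per-criterion scores combined into a weighted total. A minimal sketch, with hypothetical criteria and weights (a real award would define its own):

```python
# Rubric-based scoring: weight per-criterion scores into one total.
# The criteria and weights here are hypothetical; weights sum to 1.0.
WEIGHTS = {"grammar": 0.3, "structure": 0.3, "artistic_merit": 0.4}

def rubric_score(scores):
    """Combine per-criterion scores (each 0-10) into a weighted total."""
    return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)

entry = {"grammar": 8, "structure": 7, "artistic_merit": 9}
print(round(rubric_score(entry), 2))  # 8.1
```

Publishing the weights is the transparency step the paragraph mentions: entrants can see exactly how much each factor counts.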

Know how awards work and what they're trying to accomplish. Objectivity is great for some goals, subjectivity for others. Don't try to use a screwdriver to pound nails.
Tags: awards, entertainment, news