Saturday, April 04, 2009

Silk Purses and Sow's Ears? Saturday Morning

If you accept the metrics you have always used, have the same audience, etc., you are setting yourself up to fail. Always look for a different way to tie the knot.

LJ Index: Ray Lyons

Contingent valuation, defined: the value someone is willing to trade for something else; in other words, what else would equal the item in question.

Looking at measures in general would be a research project in itself -- let's just look at one.

  • Library ratings are contests
  • Comparison to library peers
  • Performance is not gauged according to objective standards
The rules are chosen by the person running the contest. You must have rules to have any kind of evaluation. HAPLR was the first, the pioneer. We have to compare libraries to peers because we do not have standards.

They are based on standard library statistics. They do not measure quality, excellence, goodness, greatness, or value.

They do not account for mission, service responses, community demographics, or other factors.

Selection of statistics and weightings is arbitrary. They assume higher statistics are always better, and adopt a one-size-fits-all approach (all libraries are rated using a similar formula).

Simplistic measures are created primarily for library advocacy. They are subject to misinterpretation by the library community, the press, and the public.

Current rating systems: BIX (German Library Association), HAPLR, LJ Index

It is a totally arbitrary method. The more different methods there are, the more different views of the world you get.

The LJ Index uses library expenditure levels to form peer comparison groups. If you chose population instead, a similar distribution would exist.

It measures service output only. Libraries must "qualify" to be rated: a population over 1,000; expenditures of more than $1K; meeting the IMLS definition of a public library; and reporting all of those figures to IMLS.

Reference questions differ significantly in how they correlate with the other items. Look at the outlying values, most of which occur in the smallest libraries.

Indicators chosen: circulation per capita, visits per capita, program attendance per capita, and public Internet computer uses per capita. If libraries do not report data, it cannot be retrospectively added. This is a contest, not a pure scientific exercise.

There are anomalies in the data; they reflect the "untidiness" of the IMLS data. The choice of per capita statistics can be an unfair advantage or disadvantage, depending on whether the official population accurately represents the service population.

Libraries are rated by how far above or below the average they are. Calculate the mean and standard deviation; the standard deviation shows how spread out the data are.

Create a standard score: subtract the group mean from a library's visits and divide by the standard deviation. In a real scientific evaluation your score should not be influenced by the others in your group; here it is, so this is not a real scientific evaluation process, and it does not measure quality.
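A minimal sketch of that standard-score step in Python, using made-up visits-per-capita figures for one hypothetical peer group (none of these numbers come from the LJ Index data):

    from statistics import mean, stdev

    # Made-up visits-per-capita figures for one peer comparison group.
    visits_per_capita = [3.2, 5.1, 4.4, 7.8, 2.9, 6.0]

    group_mean = mean(visits_per_capita)   # peer-group average
    group_sd = stdev(visits_per_capita)    # spread within the peer group

    # Standard (z) score: how many standard deviations a library sits
    # above or below its peer-group average.
    z_scores = [(v - group_mean) / group_sd for v in visits_per_capita]

    for v, z in zip(visits_per_capita, z_scores):
        print(f"visits/capita {v:4.1f} -> standard score {z:+.2f}")

Because the mean and standard deviation come from the peer group, a library's score shifts whenever its peers' numbers change, which is exactly the speaker's point about this not being a scientific evaluation.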

What is the point, given that the data are old? Advocacy is the reason to do it. We are in a profession where technology is driving change. Perhaps we really need to change.

What can you squeeze out of the data we have? Is this what we should do?

(Handout: an adjustment is applied to get rid of negative values, then the decimal point is dropped.) The number looks very precise, but it is not.
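A guess at what that handout adjustment could look like, continuing from the z-score sketch above; the shift and scale values are placeholders, since the session does not give the actual constants:

    # Placeholder constants: the session only says the adjustment removes
    # negative values and then drops the decimal point.
    SHIFT = 5     # assumed offset large enough to make every score positive
    SCALE = 100   # assumed multiplier that removes the decimal point

    index_scores = [round((z + SHIFT) * SCALE) for z in z_scores]
    print(index_scores)  # whole numbers that look precise but carry the same uncertainty as the raw data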

Advocacy -- showcases more libraries, encourages conducting and publicizing local evaluation

Encourages submission of data, and emphasizes the limitations of the indicators.

The model is inherently biased: it measures service delivery. If other statistics were chosen, other libraries could move to the top. Comparison between peer groups is inherently impossible.

Encourages assessment and the collection of data not previously collected. How many can you list? This is a contest and not a rigorous evaluation. The number of libraries getting five stars was arbitrary, partly determined by space in the journal.

Customer Satisfaction -- Joe Matthews

Customer satisfaction is performance minus expectations. It is a lagging, backward-looking factor. It is not an output or an outcome; it is an artifact.
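A trivial illustration of the performance-minus-expectations idea, with invented mean ratings on a 1-7 survey scale (the service aspects and numbers are made up for the example):

    # Hypothetical mean patron ratings on a 1-7 scale.
    expectations = {"hours": 6.2, "collection": 5.8, "staff helpfulness": 6.5}
    performance  = {"hours": 5.4, "collection": 6.0, "staff helpfulness": 6.3}

    # Satisfaction gap = performance minus expectations, per service aspect.
    # Negative gaps flag areas where the library falls short of what patrons expect.
    gaps = {k: performance[k] - expectations[k] for k in expectations}

    for aspect, gap in gaps.items():
        print(f"{aspect}: {gap:+.1f}")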

Surveys: you can create your own, borrow from others, or use a commercial service (Counting Opinions); you also need to decide when to use one.

You need to go beyond "how are we doing" and ask about particular services, and ask respondents how they are doing. Open-ended questions elicit a high response rate. Most surveys focus on perceptions and rarely ask about expectations. (PAPE - Priority And Performance Evaluation)

Service quality: SERVPERF - Service Performance; SERVQUAL - Service Quality (see handout); LibQUAL+ for academic libraries.

LibQUAL+ is web based, costs $5K per cycle, and public libraries that have tried it have generally been disappointed.

Telephone surveys are being abandoned as more and more people are dropping land lines (partly to avoid surveys).

You cannot do inferential analysis if the response rate is less than 75%.

Single-question customer satisfaction survey aimed at loyal customers: "How likely is it that you would recommend X to a friend or colleague?" on a 10-point scale. Net Promoter Score (NPS) (handout).
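A quick sketch of how an NPS is usually tallied; the session does not spell out the cut-offs, so this uses the standard convention of 9-10 as promoters and 0-6 as detractors:

    # Hypothetical recommendation ratings on the 0-10 scale.
    ratings = [10, 9, 9, 8, 7, 6, 10, 3, 9, 5]

    promoters  = sum(1 for r in ratings if r >= 9)   # rated 9 or 10
    detractors = sum(1 for r in ratings if r <= 6)   # rated 0 through 6

    # NPS = percent promoters minus percent detractors (range -100 to +100).
    nps = 100 * (promoters - detractors) / len(ratings)
    print(f"Net Promoter Score: {nps:+.0f}")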

The library fails 40% of the time (although for a range of reasons). One of the worst things is to tell people that there is a long wait for best sellers.

Look at wayfinding in the library. Hours and cleanliness, as well as comfort and security, are very important. One library put flowers in both the men's and women's restrooms. Review the situations in which you say no.

Take a walk in your customer's shoes, but remember that you are also wearing rose colored glasses.

Hotel staff are trained in making eye contact and greeting.
