Wednesday, June 03, 2009

Silk Purses and Sows Ears? Assessing the Quality of Public Library Statistics and Making the Most of Them: PLA Spring Symposium 2009 - Morning I

The program began with an introduction by Joe Matthews. He went over the paper handouts and reviewed the agenda. He reminded us that the only dumb question is the unasked one.

Is it possible to develop a "library goodness scale"?

What is a good library, and what is a great library? This is an interesting challenge to define.

In a library organization, management's responsibilities are:

  • defining goals;
  • obtaining the needed resources;
  • identifying programs and services to reach the goals;
  • and using the resources wisely.

There are benefits and challenges. There are lots of performance measures -- most libraries have too many, and many are never used. (You have the authority to stop collecting data if it is not being used.)

A very important concept is "You get what you measure." He cited an example from police performance measurement. As a result of the measure used (minor quality-of-life issues), the community had many cops reporting potholes -- including the same potholes day after day. The measure, reports filed, was incredibly high. The solving of crimes was not. As managers we need to refine the performance measurement system to reflect what we actually want.

Benefits and challenges: the role of evaluation is not to prove but to improve; it provides feedback on actual performance; it develops a culture of assessment. When data is disconfirming, the report is often ignored rather than the raised issue being addressed.

Efficiency & Effectiveness

Efficiency is the internal perspective: are we doing things right? Effectiveness is the external perspective: are we doing the right things? It is an important distinction.

The Library-centered view: how much, how many, how economical, how prompt?

Types of measures: leading vs. lagging. Circulation is a lagging measure -- it tells you what you did last month; it is historic data.

A leading measure is something that lets you forecast demand, such as pre-registration figures. In Joe's opinion there is no relationship between inputs and outputs in libraries!

Leading indicators at reference: Very few libraries use the reference data they already have to change the staffing pattern at the reference desk. There is no true leading data for reference queries... it might be the number of Google searches that month. He quoted OCLC Perceptions data showing that users turn to library reference as their first source only 3% of the time. You can forecast from trends in past data. We should change the staffing pattern, and should get rid of reference questions....

A leading indicator could be a "high holds list" for items on order; another could be the school district calendar for staffing the reference desk.
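As an illustration of forecasting from past data trending (not something shown at the symposium), here is a minimal Python sketch that fits a straight line to hypothetical monthly counts and projects the next month; the figures and function name are invented for the example.

    def forecast_next(monthly_counts):
        # Fit a simple straight-line trend to past monthly counts
        # and project one month ahead. Illustrative only.
        n = len(monthly_counts)
        xs = range(n)
        mean_x = sum(xs) / n
        mean_y = sum(monthly_counts) / n
        slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, monthly_counts))
        slope /= sum((x - mean_x) ** 2 for x in xs)
        intercept = mean_y - slope * mean_x
        return slope * n + intercept

    # Hypothetical reference-question counts for the last six months
    history = [412, 398, 420, 385, 374, 360]
    print(round(forecast_next(history)))  # projected count for next month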

A question was asked about interpreting data when users are asked what they want. The answer is triangulation: partly asking what they want, partly customer satisfaction data, partly focus groups.

Measures need to be SMART: Specific (accurate), Measurable, Action-oriented, Relevant (clear), and Timely.

It is also important to review the data and how it is collected and reported. In one library, the gate count suddenly doubled. When a manager went to check, the manager discovered that a new staff member was reporting it -- the gate counted both those entering and exiting, and the former staff member had correctly reported half of the number as the attendance. The new staff member did not.
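To make the arithmetic concrete, here is a tiny sketch (with hypothetical numbers, not from the talk) of the correction the former staff member was applying: a gate that counts both entries and exits overstates attendance by a factor of two.

    def attendance_from_gate(raw_count, counts_both_directions=True):
        # A two-way gate increments on entry and exit, so attendance
        # is half the raw reading; a one-way counter needs no correction.
        return raw_count // 2 if counts_both_directions else raw_count

    yesterday_raw = 1240                        # hypothetical raw gate reading
    print(attendance_from_gate(yesterday_raw))  # 620 visitors, not 1240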

Why do we use the data? There are several reasons: to help understand demand; to demonstrate accountability; to help focus; to improve services; to move from opinions to data and become more responsive to customer needs; and to communicate value.

When we collect data we make some assumptions. One is comparability (why does a three-week book checkout count the same as a two-day DVD checkout?); Joe also argued for not including renewals as part of circulation. Another is accuracy: how do we count reference questions -- tick marks? He argued for using gate count as an indicator instead, and for sampling. We have demonstrated busy-ness; we need to demonstrate value. Blow up reference desks... get rid of them.
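The sampling idea can be sketched simply: count reference questions during a few sample weeks and scale up to an annual estimate instead of keeping tick marks every day. The sketch below uses hypothetical figures; it is not a method presented at the symposium.

    # Hypothetical counts from four sample weeks
    sample_week_counts = [142, 133, 158, 126]
    weeks_open_per_year = 51                  # assumed number of open weeks

    avg_per_week = sum(sample_week_counts) / len(sample_week_counts)
    annual_estimate = avg_per_week * weeks_open_per_year
    print(round(annual_estimate))             # rough annual estimate of reference questions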

Performance reports are often a bunch of numbers with no historical context; they should include at least the last two to three years of data.

The problem is a failure to keep pace with ever-rising expectations.

Larry Nash White presented next on the Library Quality Assessment Environment

He noted that he was raised by his grandfather, who was an efficiency expert.

The performance person in the library actually knows more about what is going on in the library. Statistics and metrics are like tight-fitting clothes: they are suggestive, but not completely revealing.

History helps tell us where we have been. Most of what we measure we stole from somewhere else.

We have measured the parts; how do we measure the whole? In 1934 Rider developed a way to maximize efficiency using costs. "If we don't assess things and do it correctly, then others from outside of the library will come and do it for us." (Rider, 1934) About 100 library systems around the country are run by outsourced firms (LSSI and others).

Google answers as many reference questions in nine hours as all libraries in the US answered in 2006.

The first customer service survey was in 1939. The 1950s and 1960s saw the quantitative crunch. The "smile ratio" as a measure? Especially when there are more smiles on the other side of the counter.

What is happening today? What are the influencing factors?

How many have enough resources (money, time, staff)? No one. [Great story about a Santiago, Chile library: a single building of 275,000 square feet, with 75 staff and 75,000 items, serving a city of over 5 million.]

Increasing stakeholder involvement is important. When you want to keep your stakeholders out, that is a bad sign. They bring in their own perceptions, biases, etc., which you must work with.

Technology is neutral; it is intent that gives it value. How we use it to deliver service makes it good or bad. How effective is our technology service? Consider total cost of ownership studies. He is against tick marks: use technology to count wherever possible. Use the automation system to count computer use, reference questions, and directional questions. An ILS is really good at counting, and it can do so location by location and hour by hour.
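For example, if the automation system can export transactions with a timestamp and a branch code, a short script can tally activity location by location and hour by hour without any hand counting. The file name and column names below are assumptions for illustration, not a real export format.

    import csv
    from collections import Counter
    from datetime import datetime

    counts = Counter()
    with open("transactions.csv", newline="") as f:   # hypothetical export file
        for row in csv.DictReader(f):
            hour = datetime.fromisoformat(row["timestamp"]).hour
            counts[(row["branch"], hour)] += 1

    # Print a simple branch-by-hour activity table
    for (branch, hour), n in sorted(counts.items()):
        print(f"{branch}  {hour:02d}:00  {n}")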

We are always borrowing from someone else. Libraries are using what the business world gave up years ago, and those are tools that were often designed for something else.

Time is affecting what we do.

More quantitative data is wanted by stakeholders, while more qualitative data is wanted by the profession. This is a tension and a division.

A wider scope is needed to assess and improve the process. Dynamic alignment: he held up a knotted string (not a macramé) as an analogy for our performance assessment environment -- not much give. Do you have the right things in place, counting the right things and giving the right answers? (Pulled in the right way, it became a single string.) When we align our assessment, we need to keep re-aligning it because of changes in the environment.

Future predictions

  • More assessment.
  • More quantitative data to support quality outcomes.
  • More intangible assessment. (Many things we do are intangible, and they are important.) What would it look like if we started reporting the air?
  • More assessment of organizational knowledge.
  • More assessment of staff knowledge (human capital): are we effectively assessing the use of that resource?
  • Increased alignment of the assessment process.
  • [Intellectual capital: human capital -- what people know; structural value -- what is left when people go home; the value of relationships -- stakeholders, vendors, partners.] Report the value created. Wherever we spend money we need to report the value of what we do.

Ray Lyons then talked about input-process-output-outcomes models.

IMLS has now embraced the United Way's language. But there are also other program evaluation frameworks, and the Government Performance and Results Act of 1993.

He showed several graphics, including a "Program Evaluation Feedback Loop." It is considered to be a rational process. It is also very static and ignores political issues.

If you remember why you are doing this, you can often come up with your own answers to your questions.

Evaluation questions include "merit." Orr's model does not include stakeholders very well; they are listed only as "Demand." How can you produce demand?

Performance assessment is often blind to unintended consequences. It does not ask: what are the real needs of the community?

Input statistics should be used only in connection with outputs; they represent only the potential for services. Output statistics measure the current level of performance.

Goals are often tied to the statistics. But aren't you going to reach a point where you can no longer improve?

Interpreting output statistics: interpretation is done in relation to goals and is left up to the library. There are no standards for evaluating the quality or the value of the items. We also don't look at the relationships between the data elements (or we don't trust the judgments we make).
