Friday, April 24, 2009

One year ago

It was a year ago yesterday that my mom died.

Here are the two posts from that time; one is actually what my daughter said.




Caption for Megan's photo:

L to R sitting on the floor: Gregory, Brian, Mark Leach, Justin Elmore, Andy Doucette, Matthew Toney, Kyle, Ed. Second row: Christina Kennedy (now Greg's wife), Megan, Carolyn, Karen Doucette, my mom (holding Bridget Elmore), Colleen, Nancy Doucette, Amanda Leach. Third row: Jill (ex-wife), Meg Leach (sister #1, child #3), Helen Elmore (sister #4, child #8), Sue Doucette (sister #2, child #4), Diane (Paul's wife), Beth Toney (sister #3, child #7), Jeanne (Peter's wife). Back row: me (eldest all around), John Elmore (Helen's husband), Jimbo Doucette (Sue's husband), Thomas (brother #3, child #6), Paul (brother #2, child #5), Peter (brother #1, child #2).

Edit Note: My daughter properly pointed out that I had flip-flopped my eldest sister and her daughter. I have now corrected that. Thanks, Megan!

Friday, April 17, 2009

Judith Krug: Tribute and Thoughts

Many folks have written about the death of Office for Intellectual Freedom (OIF) Director Judy Krug. I won't even point to the wonderful articles in the New York Times and Washington Post. There is a somewhat incomplete article in Wikipedia (my recollection is that it used to be fuller, and had a photo!). There are some wonderful quotes in Wikiquotes, though. Most of them I can mentally hear Judy say!

She was a staunch supporter of the First Amendment to the US Constitution, which enshrines "free speech" as a core value of life in the United States.

I had heard of Judy for years before I ever met her, which happened when I became an ALA Councilor. From then on I learned to listen carefully to what she said, and to respect her incredible commitment to a value which is important to me.

I was not going to post on this until I read John Berry III's recent Blatant Berry post (which I expect to see in the print version of Library Journal). He has followed that up with a shorter post that is pure tribute and expresses the need to have the ALA OIF headed by a librarian. The longer post says some of the things I would have said about Judith. However, John (and yes, I know him in person) has lost sight of what made Judy so critically important in defending the First Amendment. Let me quote the third-from-last and the penultimate paragraphs:

The new chief of OIF and FTRF must be a consolidator, a diplomat, and a lobbyist of high skill. This IF leader must not only defend ALA’s IF apparatus but manage its continued evolution in an environment of easy technological access to information, where censorship is often practiced not by removal of information but by its online manipulation. Beyond that, innovations like Google Book Search pose new challenges. The new leader must possess the legal, political, and moral fiber to outmaneuver the opponents of free inquiry and individual privacy in the courts, the marketplace, and the civic community.

Just as important, the new leader of OIF must face the longstanding gap between our principles and our practice. This gap comes in part from transposing policies born in the print age, such as providing open access to all library materials for juveniles, and the difficulty of allowing unrestricted use of public access computer terminals.

The first of the two paragraphs above hits the nail on the head. The next leader of the OIF must be incredibly tactful, articulate, visionary, and politically skilled. Where I take issue with John Berry is the second of the two paragraphs. In a country (and profession) as large and diverse as ours, there will always be differences, including places where there is a gap between principles and practice. But just because there are gaps does not mean that on a "core value" issue such as this we should be any less diligent in expressing our views. Policies (in public libraries) are generally made by appointed or elected boards. In an age of technological change there will be a lag before policies (created in an earlier environment) match the new environment.

The next head of the OIF *must* continue to express unqualified support for all that the First Amendment stands for. To do anything less will allow us to slide down a slippery slope.

Wednesday, April 08, 2009

"The Cloud"

I'm reading the latest issue of American Libraries (April 2009), and got to Meredith Farkas' column on technology. (Meredith also writes Information Wants to be Free -- one of the first blogs I read, and one of my favorites.)

She talks in the column about the model of SaaS (Software as a Service) for delivering software.

SaaS seems to be becoming more prevalent, but I have to admit that it is not new. For a lot of years the technology moved toward having more computing power on the client side of our client/server networks, but not only is centralized computing an old idea (remember "mainframes" like HAL in 2001: A Space Odyssey?), I well remember the first "live" library automation systems, where terminals were hard-wired to the mainframe.

Two jobs ago, my organization was facing a dilemma. We had received "end of life" notices not only for the software for the ILS, but also for the servers on which it was hosted. At the same time, a regional library consortium was moving to its next generation of automation system. Because the consortium had purchased powerful enough software, they "sold" space on their server for our data, and we agreed on a cost for maintenance and upgrades (keeping our own license to the system housed on their machine). Not long after, the consortium moved all its servers to a server farm (meaning that local power outages did not disrupt operations). All of this happened nearly 5 years ago. For my organization it represented an opportunity to move to new software and abandon hardware while saving money. (Isn't that every administrator's dream -- better and more services at a lower cost?)

So I guess it is an idea that is coming.

What Meredith does not talk about is the possibility of portable applications on a flash drive. In my current position I have an 8-GB "Data Traveler" which has a whole office productivity suite on the drive, so I am not dependent on anyone else's software setup.

Saturday, April 04, 2009

Silk Purses and Sow's Ears? Saturday Morning

If you accept the metrics you have always used, keep the same audience, etc., you are setting yourself up to fail. Always look for a different way to tie the knot.

LJ Index: Ray Lyons

Contingent valuation, defined: the value someone is willing to trade for something else; what else would equal the item in question.

Looking at measures in general would be a research project -- let's just look at one.

  • Library ratings are contests
  • Comparison to library peers
  • Performance is not gauged according to objective standards
Rules are chosen by the person running the contest. You must have rules to have any kind of evaluation. HAPLR was the first, the pioneer. We have to compare libraries to peers because we do not have standards.

They are based on standard library statistics. They do not measure quality, excellence, goodness, greatness, or value.

They do not account for mission, service responses, community demographics, or other factors.

The selection of statistics and weightings is arbitrary. The ratings assume higher statistics are always better, and adopt a one-size-fits-all approach (all libraries are rated using a similar formula).

Simplistic measures are created primarily for library advocacy. They are subject to misinterpretation by the library community, the press, and the public.

Current rating systems: BIX (German Library Association), HAPLR, LJ Index

It is a totally arbitrary method. The more different methods, the more different views of the world.

The LJ Index uses library expenditure levels to form peer comparison groups. If you chose population instead, a similar distribution would exist.

It measures service output only. Libraries must "qualify" to be rated: population over 1,000; expenditures of more than $1K; meeting the IMLS definition; and reporting all of those to IMLS.

Reference questions correlate with the other items in a statistically significantly different way. Look at the outlying values, most of which occur in the smallest libraries.

Indicators chosen: circulation per capita; visits per capita; program attendance; public Internet computer uses. If libraries do not report data, it cannot be retrospectively added. This is a contest, not a pure scientific event.

There are anomalies in the data, it reflects the "untidiness" of the IMLS data. Chose to do per capita statistics. It can be an unfair advantage/disadvantage depending on whether the official population accurately represents service population.

Libraries are rated by how far above or below the average they fall. Calculate the mean and the standard deviation; a score is given to the data to show how spread out it is.

Create a standard score: compare visits to the mean and divide by the standard deviation to get a score. In a truly scientific evaluation your score would not be influenced by the others in your group; therefore this is not a real scientific evaluation process, and it does not measure quality.

What is the point? The data is old... advocacy is the reason to do it. We are in a profession where technology is driving change. Perhaps we really need to change.

What can you squeeze out of the data we have? Is this what we should do?

(Handout: an adjustment to get rid of the negatives, then get rid of the decimal point.) The number looks very precise, but it is not.
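
A minimal sketch in Python of the standard-score arithmetic described above; the offset and multiplier that remove the negatives and the decimal point are hypothetical stand-ins for whatever the handout actually specifies.

    import statistics

    # Per-capita visits for a hypothetical peer group of libraries.
    visits_per_capita = [3.1, 4.7, 2.2, 6.0, 3.9, 5.4]

    mean = statistics.mean(visits_per_capita)
    sd = statistics.stdev(visits_per_capita)  # sample standard deviation

    for v in visits_per_capita:
        z = (v - mean) / sd  # standard (z) score: distance from the mean in SDs
        # Hypothetical adjustment: shift to remove negatives, scale to drop
        # the decimal point. The result looks very precise; it is not.
        index = round((z + 5) * 100)
        print(f"{v:4.1f}  z = {z:+.2f}  index = {index}")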

Advocacy -- it showcases more libraries and encourages conducting and publicizing local evaluation.

It encourages submission of data, and it emphasizes the limitations of the indicators.

The model is inherently biased: it measures service delivery. If other statistics were chosen, other libraries could move to the top. Comparison between groups is inherently impossible.

It encourages assessment and the collection of data not previously collected. How many can you list? This is a contest and not a rigorous evaluation. Five stars went to an arbitrary number of libraries, partly determined by space in the journal.

Customer Satisfaction -- Joe Matthews

Customer satisfaction is performance minus expectations. It is a lagging, backward-looking factor; it is not an output or an outcome, it is an artifact.
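
A quick sketch of the performance-minus-expectations idea; the service dimensions and the 1-to-7 rating scale are my assumptions for illustration, not Matthews's.

    # Gap score per service dimension: perceived performance minus expectation.
    # Positive = exceeding expectations; negative = falling short.
    ratings = {
        # dimension: (expectation, perceived performance) on a 1-7 scale (assumed)
        "hours": (6.5, 5.8),
        "staff help": (6.0, 6.3),
        "collection": (5.5, 5.1),
    }

    for dimension, (expected, performed) in ratings.items():
        gap = performed - expected
        print(f"{dimension:<12} gap = {gap:+.1f}")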

Surveys: you can create your own, borrow from others, or use a commercial service (Counting Opinions); you need to decide when to use one.

You need to go beyond asking how the library is doing overall and ask about particular services; ask respondents how they are doing; open-ended questions elicit a high response rate. Most surveys focus on perceptions and rarely ask about expectations. (PAPE - Priority And Performance Evaluation)

Service quality: SERVPERF - service performance; SERVQUAL - service quality (see handout); LibQUAL+ for academic libraries.

LibQUAL+ is web based, costs $5K per cycle, and public libraries who have tried it have generally been disappointed.

Telephone surveys are being abandoned as more and more people are dropping land lines (partly to avoid surveys).

You cannot do inferential analysis if the response rate is less than 75%.

The single customer satisfaction survey question for loyal customers: "How likely is it that you would recommend X to a friend or colleague?" using a 10-point scale. Net Promoter Score (NPS) (handout)
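
The NPS arithmetic, assuming the conventional cutoffs (9-10 = promoter, 0-6 = detractor on a 0-10 scale); the sample responses are invented.

    # Net Promoter Score: percent promoters minus percent detractors.
    responses = [10, 9, 8, 7, 6, 9, 10, 3, 8, 9]  # invented survey answers

    promoters = sum(1 for r in responses if r >= 9)    # 9 or 10
    detractors = sum(1 for r in responses if r <= 6)   # 0 through 6
    nps = 100 * (promoters - detractors) / len(responses)
    print(f"NPS = {nps:.0f}")  # 50% promoters - 20% detractors = 30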

The library fails 40% of the time (although for a range of reasons). One of the worst things is to tell people that there is a long wait for best sellers.

Look at wayfinding in the library. Hours and cleanliness, as well as comfort and security, are very important. One library put flowers in both the men's and women's restrooms. Review when you say no.

Take a walk in your customers' shoes, but remember that you are also wearing rose-colored glasses.

Hotel staff are trained in making eye contact and greeting.

Friday, April 03, 2009

Silk Purses and Sow's Ears? Assessing the Quality of Public Library Statistics and Making the Most of Them: PLA Spring Symposium 2009 – Afternoon

Conundrum: is a library excellent because it is busy or is the library busy because it is excellent?

Measuring excellence is difficult.

Output statistics are not useless, but they cannot tell us about quality, excellence, or if there is a match between community need and services.

Interesting examples of the use of data from the afternoon: one branch was looking at cutting hours. Circulation per hour was the same in the first hour as in the last, but all of the last hour's circulation came from one person, while many people accounted for the first hour's, so closing the last hour would inconvenience only one person. Another library rearranged its collection so materials were easier to find; circulation went up, reference questions went down, and patrons were more satisfied.

Cost-benefit analysis has several advantages; it quantifies the monetary value of library services. How do you choose the economic value? Techniques include consumer surplus valuation and contingent valuation (how much would you pay for ...), also called willingness to pay.

Select specific services delivered to specific audiences.

One advantage is that cost-benefit analysis is well known in the business community. It can be used over time for specific services or products. The cost can be high, since it involves surveying the community. And you may choose one area to study while missing other areas which the community values more than you think.
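
A toy illustration of the contingent-valuation arithmetic, assuming hypothetical survey answers to "how much would you pay for this service per year?" and invented population and cost figures.

    # Contingent valuation: average respondents' willingness to pay,
    # scale to the service population, and compare to the service's cost.
    willingness_to_pay = [5.00, 12.00, 0.00, 8.50, 20.00]  # $/year, hypothetical
    service_population = 40_000    # assumed service-area population
    annual_service_cost = 250_000  # assumed cost of the service

    avg_wtp = sum(willingness_to_pay) / len(willingness_to_pay)
    estimated_benefit = avg_wtp * service_population
    print(f"benefit-cost ratio = {estimated_benefit / annual_service_cost:.2f}")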

Larry White

The estimated cost of performance assessment in Florida in 2000 was $16 million, but the state gave only $32 million in state aid; therefore the equivalent of one-quarter to one-half of state aid was wasted.

Outcomes-based performance measurement takes a long time to generate results... years.

Return on investment: everyone hopes for a high return on investment.

ROI is used by business. Happy stories and smiling kids did not work, so he created an ROI figure: 6 to 1 in the first year. He took a buck from a commissioner and promised a return on it. He showed the stats and the ROI, then told a story.
The figure combined cost avoidance and returned revenue. From the genealogy/history room sign-in list he took the local patrons and multiplied by local tourism spending figures, then estimated overnight stays for distant customers. The library accounted for $500,000 in tourism revenue. Cost avoidance was the average cost per book times the number of circulations -- books that people then did not have to purchase.
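
A sketch of the arithmetic White describes: tourism revenue attributed to the library, plus cost avoidance, divided by the budget. Every number below is a placeholder; only the mechanics follow his description.

    # ROI = (tourism revenue + cost avoidance) / library budget
    local_visitors = 2_000       # from the sign-in list (hypothetical count)
    local_tourism_spend = 150    # local tourism dollars per visitor (hypothetical)
    overnight_visitors = 400     # estimated distant customers (hypothetical)
    overnight_spend = 500        # dollars per overnight stay (hypothetical)
    tourism_revenue = (local_visitors * local_tourism_spend
                       + overnight_visitors * overnight_spend)

    circulation = 120_000        # annual circulation (hypothetical)
    avg_book_cost = 20           # average cost per book (hypothetical)
    cost_avoidance = circulation * avg_book_cost  # purchases patrons avoided

    budget = 500_000             # annual budget (hypothetical)
    print(f"ROI = {(tourism_revenue + cost_avoidance) / budget:.1f} to 1")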

Data mining can give the library an important role in the community. You need to use a combination of numbers and words in a creative fashion.

You can use it to justify what you want to do and to save what you already do. It is scalable.

There is a lack of consensus on its value and use. It is usually used defensively (preserving library funding) and reactively; we wait for disasters to tell the story.

Joe Matthews

Summer Reading Program

In Portland, OR, 75,000 kids were in the program; they raised $60K and flew a family of 4 to Disneyland. Only 30% complete the program. The real outcome is improvement in reading level, but measuring it is costly. It can be done through a third-party agency to keep the kids' data confidential.

He encouraged us to start thinking about the outcome side of things: do they spend more time reading, do they do better in school? One place does a survey of caregivers about their perceptions of outcomes.

Statistics

Context, trends, history.

Age of collection as a stat.

Unintended consequences of performance measures
  • assessment process consequences (survey fatigue)
  • tactical process consequences
  • strategic process consequences
Assessment process consequences: changes in organizational culture; changes in operational processes; changes in organizational procedures/policies; technology's impact. We can assess more often, and faster, too! How far do we assess? What about the user of your web page in China who wants the Mandarin version of the page?

Tactical consequences: operational (how you work); systems (can create an 'us v. them' mentality; can look at the forest and forget the trees); financial (the ten-to-one return speech -- staff loaned out to the economic development department); service ("new Coke"); organizational impacts (which can bring good things).

Strategic consequences occur over a long period of time. One operation supported unethical behavior to feed the need to constantly increase circulation. The problem comes when assessment drives the mission rather than the mission driving the assessment.

Final of the day: Management Frameworks (Joe Matthews)

Three Rs: Resources, Reach, and Results.

Resources: how much money do we need to achieve our aims?
Reach: who do we want to reach, and where?
Results: what do we want, and why?

Choose only two or three measures. It is important to think about customer segments (beyond demographics): they come with different needs and different expectations.

Performance Prism (see handout), used in England, New Zealand, and Australia

Balanced Scorecard

Financial perspective: saving time, reducing costs, generating new revenues, share of the pie
(see handout....)

Building a Library Balanced Scorecard: mission and vision statements; strategy (how to reach the mission); selecting performance measures; setting targets or goals; projects or initiatives

Strategic plans often do not use the word strategy. Most use one of two approaches: conservative and reachable, or the scientific wild-ass guess; but you may want to have a BHAG (a big, hairy, audacious goal).

The scorecard is usually created by a small group with the results shared with the stakeholders.

Silk Purses and Sow's Ears? Assessing the Quality of Public Library Statistics and Making the Most of Them: PLA Spring Symposium 2009 - Morning II

Larry White

We need to tell our stories better.

Every one of the 2,000 FedEx outlets reports several thousand data elements every day. The data is collected electronically, compiled company-wide, and delivered to upper management with comparatives, with a response lag of less than 12 hours. FedEx took assessment and made it a value-added service.

Walmart has a new data server farm with multi-petabyte storage. All data is kept and stored for 2 years. When something is sold, a replacement item is already leaving the warehouse to replace it on the shelf.

More metrics need to be automated and performed more frequently. If we don't figure out how to do this for ourselves, someone is going to come in and do it for us.

Ray Lyons

Challenges of Comparative Statistics

Choosing a comparable library - there is no answer.

We need to be as diligent as we can to get a satisfactory answer even if it is not as satisfactory as we want it to be.

It is also about accountability.

See the book on municipal benchmarks in the footnotes. (The organization favors having standards to see if you are doing OK.) If money is not attached to standards, there may be no reason to adhere to them.

Types of benchmarking: data comparison using peer organizations; analyzing practices for comparative purposes; and seeking the most effective, best practices.

Benchmarking steps: define the measures to use (and who will be involved); identify appropriate partners; agree on ground rules (ethical issues); identify objectives; document profile and context; collect data; analyze data (including normalizing, e.g. per capita); report results; and then use them for change.
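
A small sketch of the normalizing step in Python: raw counts divided by service population so that peers of different sizes can be compared (library names and figures invented).

    # Normalize raw circulation by service population before comparing peers.
    peers = {
        # library: (annual circulation, service population) -- invented figures
        "Library A": (480_000, 60_000),
        "Library B": (150_000, 12_000),
        "Library C": (900_000, 150_000),
    }

    for name, (circ, pop) in peers.items():
        print(f"{name}: {circ / pop:.1f} circulation per capita")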

It is important to think about community need and culture, and to choose libraries which have the same service responses.

You need to count what you want and need, but you also need to report using the standard definitions so that the data can be compared. Another person noted that staff feel that what is counted is what is valued.

Peer measures: average individual income; college education; # women not working outside the home; school age children; geographic area

A study of the output measures Ohio library directors use found three used regularly: material expenditures, operating expenditures, and circulation. [These are the easiest to find, and easy to define.]

Directors are now moving to measures of internet use, job application sessions, and computer use.

Recommendation: at a minimum, identify peer libraries by:
  • service area population
  • key demographic characteristics
  • library budget
  • service responses

Some libraries are in "fertile settings" which can explain statistical performance.

Joe Matthews
Activity Based Costing

A handout-based process:

Figure costs, starting with salaries. The handout includes the cost of the library which, as a municipal library, does not include utilities.
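
A minimal sketch of the usual activity-based-costing arithmetic: allocate salary dollars to activities by staff time share, then divide by activity volume to get a unit cost. The activities, time shares, salaries, and volumes are all assumptions, not figures from the handout.

    # Activity-based costing: salaries -> activities (by time share)
    # -> unit cost per activity (divide by annual volume).
    total_salaries = 600_000  # assumed annual salary cost

    time_share = {"circulation": 0.40, "reference": 0.25,
                  "programs": 0.15, "cataloging": 0.20}  # assumed; sums to 1.0
    volume = {"circulation": 300_000, "reference": 25_000,
              "programs": 600, "cataloging": 12_000}     # assumed annual counts

    for activity, share in time_share.items():
        unit_cost = total_salaries * share / volume[activity]
        print(f"{activity}: ${unit_cost:.2f} per unit")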