Library Technology Reports, Vol. 47, Issue 4, p. 5
Chapter 1: About the Perceptions Survey
Marshall Breeding
Andromeda Yelton

Abstract

Chapter 1 details the methodology of the Perceptions survey and notes its limits; readers are encouraged not to base decisions solely on the content of this report. Chapter 1 also outlines the findings of the remaining chapters.


For the last four years, Marshall Breeding has conducted an online survey to measure satisfaction with multiple aspects of the automation products used by libraries. The results of each of the four editions of the survey, along with brief interpretive narratives, have been published on Library Technology Guides. This issue of Library Technology Reports takes a deeper look at the survey data, including an expansion of findings based on the 2010 iteration, an examination of trends seen across the four years, and additional analysis not previously published. For this report, the survey data have been extended with additional fields that make it possible to break the findings into categories, revealing some interesting trends not otherwise apparent.


Goals of the Survey

Why conduct this survey? In a time of tight budgets, when libraries face difficult decisions regarding how to invest their technology resources, it is helpful to have data on how libraries perceive the quality of their automation systems and the companies that support them. This report, based on survey responses from more than two thousand libraries, aims to give some measure of how libraries perceive their current environment, to probe their inclinations for the future, and to investigate trends that have emerged over the four years of the Perceptions survey.

Some libraries may refer to the results of this survey as they formulate technology strategies or even consider specific products. Libraries are urged not to base any decision solely on this report. While it reflects the responses of a large number of libraries using these products, this survey serves best as an instrument to guide the questions a library might raise in its deliberations. We caution libraries not to draw premature conclusions from subjective responses. Especially for libraries with more complex needs, it is unrealistic to expect satisfaction scores at the very top of the rankings. Large and complex libraries exercise all aspects of an automation system and at any given time may have outstanding issues that would naturally result in survey responses short of the highest marks.


How the Data Were Collected

The survey instrument included six numeric ratings, three yes/no responses, three short response fields, and a text field for general comments. The numeric rating fields allow responses from 0 through 9. Each scale was labeled to indicate the meaning of the numeric selection.

Five of the numeric questions probe the level of satisfaction with, and loyalty to, the company or organization that provides the library's current automation system:

  • How satisfied is the library with your current Integrated Library System?
  • How satisfied is the library overall with the company from which you purchased your current ILS?
  • How satisfied is this library with this company's customer support services?
  • Has the customer support for your ILS gotten better or gotten worse in the last year?
  • Would your library consider working with this company again if your library were to migrate to a new ILS in the future?

One yes/no question asks whether the library is considering migrating to a new ILS, and a fill-in text field allows respondents to list the specific systems under consideration. Another yes/no question asks whether the automation system currently in use was installed on schedule and according to the terms of the contract.

Given the recent interest in new search interfaces, a third yes/no question asks, “Is this library currently considering acquiring a discovery interface or Next-generation catalog for its collection that is separate from the ILS?” and provides a fill-in form to indicate products under consideration.

The survey includes two questions that aim to gauge interest in open source integrated library systems: a numeric rating that asks “How likely is it that this library would consider implementing an open source ILS?” and a fill-in text field for indicating products under consideration.

The survey concludes with a text box inviting comments. A copy of the survey may be viewed online. (This version of the survey does not accept or record response data.)
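
To make the shape of the instrument concrete, the following minimal sketch models a single response as a record in Perl, the language of the survey infrastructure described below. All field names and sample values are hypothetical; the report does not publish the actual schema.

  # Hypothetical layout of one survey response; field names are
  # illustrative only, not the actual schema.
  my %response = (
      libraryid => 12345,          # key into the lib-web-cats directory
      ils       => 'Millennium',   # prefilled from the lib-web-cats entry

      # six numeric ratings, each 0 through 9
      ils_satisfaction     => 7,
      company_satisfaction => 6,
      support_satisfaction => 6,
      support_trend        => 5,   # better or worse in the last year
      company_loyalty      => 8,   # would work with this company again
      open_source_interest => 2,   # likelihood of considering an open source ILS

      # three yes/no responses
      considering_migration => 'no',
      installed_on_schedule => 'yes',
      considering_discovery => 'yes',

      # three short response fields, plus general comments
      migration_candidates   => '',
      discovery_candidates   => 'VuFind',
      open_source_candidates => '',
      comments => 'Support response times have improved this year.',
  );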

In order to correlate the responses with particular automation systems and companies, the survey links to entries in the lib-web-cats directory of libraries. Each entry in lib-web-cats indicates the automation system currently in use as well as data on the type of library, location, collection size, and other factors that might be of interest. In order to fill out the survey, respondents first had to find their library in lib-web-cats and then press a button that launched the response form. Some potential respondents indicated that they found this process complex.

The link between the lib-web-cats entry and the survey automatically populated fields for the library name and current automation system and provided access to other data elements about the library as needed. The report on survey response demographics, for example, relies on data from lib-web-cats.

A number of methods were used to solicit responses to the survey. E-mail messages were sent to library-oriented mailing lists such as WEB4LIB, PUBLIB, and NGC4LIB. Invitational messages were also sent to many lists for specific automation systems and companies. Where contact information was available in lib-web-cats, an automated script produced e-mail messages with a direct link to the survey response form for that library.
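
The report does not reproduce that script; as a rough illustration, a mail merge of this kind might look like the sketch below, written against the Perl and DBI stack described later in this chapter. The table, column, and URL details are assumptions, not the actual implementation.

  use strict;
  use warnings;
  use DBI;

  # Sketch of the invitation mailer; table, column, and URL details
  # are hypothetical.
  my $dbh = DBI->connect( 'dbi:ODBC:libwebcats', 'user', 'secret',
                          { RaiseError => 1 } );
  my $sth = $dbh->prepare(
      'SELECT libraryid, libraryname, email FROM libraries
       WHERE email IS NOT NULL' );
  $sth->execute;

  while ( my ( $id, $name, $email ) = $sth->fetchrow_array ) {
      # each message carries a direct link that opens the response
      # form already tied to this library's lib-web-cats entry
      my $link = "https://librarytechnology.org/perceptions/?library=$id";
      open my $mail, '|-', '/usr/sbin/sendmail -t' or die "sendmail: $!";
      print $mail "To: $email\n",
                  "Subject: Library automation perceptions survey\n\n",
                  "Dear $name,\n\nPlease share your library's experience: $link\n";
      close $mail;
  }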

The survey limited responses to one per library, though it allowed responses from multiple branches or facilities associated with a system. This restriction was imposed to encourage respondents to reflect the broad perceptions of their institutions rather than their personal opinions.

The survey instrument was created using the same infrastructure as the Library Technology Guides website—a custom interface written in Perl using MySQL to store the data, with ODBC as the connection layer. Access to the raw responses is controlled through a user name and password available only to the author. Scripts allow public access to the survey results in a way that does not expose individual responses.
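
As one illustration of that aggregate-only access, a public-facing script might compute per-product averages without ever selecting individual rows. The sketch below follows the Perl/DBI-over-ODBC stack just described, with hypothetical table and column names.

  use strict;
  use warnings;
  use DBI;

  # Public reporting script: aggregates only, never raw responses.
  # Table and column names are hypothetical.
  my $dbh = DBI->connect( 'dbi:ODBC:perceptions', 'user', 'secret',
                          { RaiseError => 1 } );
  my $sth = $dbh->prepare( q{
      SELECT ils,
             COUNT(*)                  AS responses,
             AVG(ils_satisfaction)     AS avg_ils,
             AVG(support_satisfaction) AS avg_support
      FROM   responses
      GROUP  BY ils
      HAVING COUNT(*) >= 5   -- suppress products with too few responses
  } );
  $sth->execute;

  while ( my $row = $sth->fetchrow_hashref ) {
      printf "%-20s n=%-4d ILS %.2f  support %.2f\n",
          $row->{ils}, $row->{responses},
          $row->{avg_ils}, $row->{avg_support};
  }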

In order to provide access to the comments without violating the stated agreement not to attribute individual responses to any given institution or individual, an additional field was created for edited comments. This field was manually populated with text selected from the comment text provided by the respondent. Any information that might identify the individual or library was edited out, with an ellipsis indicating the removed text. Comments that only explained a response or described the circumstances of the library were not transferred to the edited comments field.


Caveats and Limitations of the Survey Data

There are several limitations to keep in mind while analyzing the survey data.

First, although the survey is quite large (at more than 2,000 libraries), it is by no means comprehensive. There are well over 57,000 libraries in lib-web-cats, which itself represents only a portion of the total libraries worldwide, and the methods used do not ensure that the survey respondents are a random or representative sample. For example, Innovative customers had a relatively high response rate, giving Millennium prominence in the survey out of proportion to its market share. Survey responses, though including many international libraries, skew heavily toward North America; nearly 1,700 of the respondents are from libraries in the United States. Similarly, although many library types are represented, public and academic libraries alone comprise more than 1,700 of the responses. Other demographics may therefore be underrepresented.

Second, it cannot be guaranteed that respondents’ choices fully represented the libraries’ views. Though survey instructions requested that respondents speak for their institutions, the survey cannot ensure this. In addition, respondents sometimes commented that they did not have direct contact with their support vendors or direct influence over their library automation choices because those were handled through a central IT office or consortially. It is not clear how this impacts their satisfaction ratings.

Third, libraries do not consistently fill out the survey from year to year. While comparing results over time can reveal broad trends, it is not necessarily possible to track how individual libraries’ opinions changed, and comparisons between different years are not apples-to-apples.


Basic Findings of the Data

Because the survey included both numeric data and a comment field, we were able both to gauge overall satisfaction with various products and services and to speculate on the reasons behind those ratings.

In chapter 2, we discuss issues frequently raised in the comments, including cost, consortia, open source software, and ILS functionality. Comments on cost, of course, were almost universally negative, reflecting libraries’ concerns about limited budgets and the increasing price of software. Many libraries feel that they pay too much for their automation systems. Libraries have mixed feelings about consortia, appreciating the savings and shared expertise they offer but sometimes feeling that their individual needs are lost in the mix. They also complain about not having a direct voice in software selection and support.

The survey was designed to probe perceptions regarding open source library automation systems, with both a numeric indicator and a corresponding comment. About 10 percent of responding libraries had already implemented open source systems; others appeared drawn to such systems as a potential low-cost alternative, though still others questioned whether the total cost of ownership would truly yield savings. Many libraries, however, expressed concern about the functionality and maturity of open source products or the expertise needed to maintain them, and do not consider them viable alternatives at this time. It is unclear what these concerns will mean for the future.

Comments on ILS functionality also varied tremendously. Some libraries expressed pleasure at the modern features of their ILS, while others said it was outdated and clunky—even when they used the same software. Some libraries are doing local development or customization, which places specific technical demands on the ILS, but many do not have the in-house expertise to do this. Almost a quarter of respondents are looking for discovery layers or other next-generation catalog features—in some cases to replace an existing product of that type, in others as a first system; however, comments rarely go into depth on libraries’ opinions of these products.

Although libraries’ demands on ILS functionality varied, there was general agreement that ILSes should be modern, fast, and easy to use. There was also some interest in the potential simplicity and cost savings of cloud solutions.

In chapter 3, we move on to the numerical ratings for ILS, company, and customer support satisfaction and examine trends by size and type of library. Although there are some products used in a wide variety of market niches, in general larger and smaller libraries gravitate toward different ILSes. Smaller libraries use a wider variety of ILSes than larger ones and tend to be more satisfied with them; Apollo, OPALS, and Polaris scored particularly well. Similarly, library type (public or academic) affects both the ILSes used and the ratings. While Millennium, Horizon, and Symphony are widely used in both types of library, public libraries also commonly use Polaris and Library•Solution, whereas academic libraries use Voyager and Aleph. Public libraries are somewhat more satisfied with their software, vendors, and support than academic libraries; this is true even for larger libraries, and even when comparing the same ILS. Libraries’ satisfaction with their software has remained roughly constant or perhaps, in some cases, increased slightly over the four years of the Perceptions survey, while average satisfaction with companies and customer support has generally increased, along with libraries’ loyalty to their current vendor. Satisfied libraries tend to cite the quality of support and say that their vendors listen to them, whereas reasons for dissatisfaction vary, including concerns over software functionality, support quality, and vendors’ business direction.

We also examine interest in open source in 2010 and over time and find a complicated picture. Although the most common level of interest in open source is 0, the next most common is 9. This polarization appears to have increased over time, with more libraries indicating extreme scores and fewer libraries at most of the scores between 1 and 8. The growth in high interest can be partly, but not entirely, accounted for by open source adopters, who almost always indicate very high levels of interest in open source. Comments indicate interest in the potential affordability and flexibility of open source software, but also concerns about its functionality and maturity and about a lack of in-house technical expertise. It is not clear what this means for future trends. Historically, highly interested libraries have been much more likely to adopt open source ILSes, so the growth in that category may indicate future adoptions; on the other hand, libraries that are interested but have not yet migrated may not consider such products viable options at present. Either way, far more libraries are averse to open source than interested in it.
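
One simple way to see this polarization is to tabulate the distribution of the 0–9 interest scores and compare the mass at the extremes against the middle. The sketch below, again in Perl, assumes the scores have been exported one per line; it is an analysis aid, not part of the survey infrastructure.

  use strict;
  use warnings;

  # Tabulate 0-9 open source interest scores (one per line on stdin)
  # and report the share at the extremes (0 or 9) versus 1 through 8.
  my @counts = (0) x 10;
  my $total  = 0;

  while ( my $line = <STDIN> ) {
      chomp $line;
      next unless $line =~ /^[0-9]$/;
      $counts[$line]++;
      $total++;
  }

  for my $score ( 0 .. 9 ) {
      printf "%d: %-5d %s\n", $score, $counts[$score],
          '#' x ( $total ? int( 40 * $counts[$score] / $total ) : 0 );
  }

  my $extremes = $counts[0] + $counts[9];
  printf "scores of 0 or 9: %.1f%% of %d responses\n",
      $total ? 100 * $extremes / $total : 0, $total;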

Also in chapter 3, we examine the relationship between libraries’ stated loyalty and whether they are shopping for a new ILS. Indeed, low-loyalty libraries are much more likely than high-loyalty ones to be in the market for a new ILS. Low-loyalty libraries are also much more likely to be considering an open source candidate, whereas high-loyalty libraries that are nonetheless seeking a new ILS are likely to be looking at another product line from the same company. In comparing 2007 Perceptions survey data to migration data in lib-web-cats, we find that libraries that indicated they were shopping are, indeed, much more likely to have migrated; company loyalty therefore likely affects the chance of migration.

Finally, because we now have four years of Perceptions survey data, we look for trends over time. We find that average satisfaction with ILSes, companies, and customer support has remained roughly constant, with perhaps a slight upward trend in some scores for some products. Nonetheless, company loyalty has increased. It is not clear why. Perhaps economic concerns make migrations less likely, so libraries are loyal to their current vendors by necessity, or perhaps libraries that formerly had low loyalty have already switched vendors.

In chapter 4, we look closely at specific vendors (Polaris, Apollo, SirsiDynix, Millennium, and several Koha support strategies), which span a range of library types and satisfaction ratings. By examining the comments, we look for the reasons behind those ratings.






Published by ALA TechSource, an imprint of the American Library Association.