LRTS: Vol. 52, Issue 1, p. 4
Use of the Checklist Method for Content Evaluation of Full-text Databases: An Investigation of Two Databases Based on Citations from Two Journals
Thomas E. Nisonger

Thomas E. Nisonger is Professor, Indiana University, School of Library and Information Science, Bloomington; nisonge@indiana.edu
The author gratefully acknowledges his graduate assistants in Indiana University’s School of Library and Information Science, Sara Franks, Catherine Hall, and Suzanne Switzer, who assisted in a variety of ways, including checking citations in the databases and helping tabulate the results.

Abstract

Following a detailed (but not comprehensive) review of the use of citation data as checklists for library collection evaluation, the use of this technique for evaluating database content is explained. This paper reports an investigation of the full-text and indexing and abstracting coverage of Library Literature & Information Science Full Text and EBSCOhost Academic Search Premier, based on checking citations to journal articles in the 2004 volumes of Library Resources & Technical Services and Collection Building. Analysis of these citations shows they were predominantly to English-language library and information science journals published in the United States, with the majority dating from 2000 to 2004. Library Literature & Information Science Full Text contained 21.1 percent of the citations in full-text format, while the corresponding figure for Academic Search Premier was 16.1 percent. The database coverage also is analyzed by publication date, country of origin, and Library of Congress classification number of cited items. Some limitations to the study are acknowledged, and issues for future research are outlined.


That the librarianship paradigm is rapidly changing with the evolution from a print to an electronic environment is almost a cliché. Relatively new formats, such as full-text databases, electronic journals, electronic books, and the Web, offer numerous challenges to contemporary librarians, including a need for evaluation techniques. While a host of generally accepted collection evaluation methods were developed for the twentieth century’s relatively stable, mostly print environment, identifying appropriate evaluation methodologies ranks among the library profession’s major challenges in the first decade of the twenty-first century. As will be illustrated in the following literature review, the checklist method, dating to the mid-nineteenth century, is one of the oldest and among the most often used approaches to library collection evaluation. This paper’s purpose is to demonstrate the use of a citation-based checklist approach by evaluating the content of two full-text databases: Library Literature & Information Science Full Text and Academic Search Premier.

The Guide to the Evaluation of Library Collections offers a succinct definition of the checklist approach: “With this procedure the evaluator selects lists of titles or works appropriate to the subjects collected, to the programs or goals of the library, or to the programs and goals of consortia. These lists are then searched in the library files to determine the percentage the library has in its own collection.”1 More specifically, the lists are checked in the library’s catalog (originally a card catalog, now an online public access catalog [OPAC]).
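In computational terms, the procedure reduces to checking each listed item against the holdings and reporting the proportion held. The following minimal sketch (in Python, with purely illustrative titles and holdings that are not drawn from any study cited here) shows the calculation:

```python
def checklist_percentage(checklist, holdings):
    """Return the percentage of checklist items found in the holdings."""
    held = sum(1 for item in checklist if item in holdings)
    return 100.0 * held / len(checklist)

# Illustrative values only: a four-item checklist against a tiny "catalog."
opac = {"Library Trends", "Collection Building", "Serials Review"}
sample_list = ["Library Trends", "Collection Building",
               "Against the Grain", "Booklist"]
print(checklist_percentage(sample_list, opac))  # 50.0
```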

The benefits and drawbacks associated with the checklist technique have been discussed in the literature by Lockett, Lundin, and the author, among others.2 On the positive side, lists can be compiled to meet the needs of a particular library or type of library, and they can be examined to increase knowledge of the literature. Checklists also are straightforward to implement, require little subject expertise, and provide objective data that are easily understood. On the negative side, the collection might hold other resources better than those on the list; all items on the list are not of equal value; appropriate lists might be difficult to locate; held items might not be available because they are checked out, missing, or otherwise inaccessible; and many lists focusing on a single subject area do not consider resources from other disciplines. One of the more compelling criticisms is that the checklist approach was developed to test ownership in the traditional model of librarianship and usually does not consider items obtained on interlibrary loan or licensed electronically.


History of the Checklist Method

According to Mosher and other authorities, the earliest reported collection evaluation in an American library, published in 1849, used the checklist method.3 That investigation, written by the Smithsonian Institution’s assistant secretary Charles Coffin Jewett, used the citations in leading mid-nineteenth century textbooks in chemistry, commerce, ethnography, and international law as the checklist and concluded that North American libraries were inadequate compared to their European counterparts.4

A major collection evaluation at the University of Chicago during the early 1930s relied upon the checklist method. As part of an ambitious collection-building project led by M. Llewellyn Raney, more than four hundred bibliographies were checked by approximately two hundred faculty members, resulting in a multimillion dollar desiderata list.5 In the mid-1930s, Waples and Lasswell used the checklist approach to evaluate select social science areas in six major American research libraries, including the Library of Congress (LC), Harvard, and the New York Public Library.6 Two checklist collection evaluations published during the 1960s have sometimes been termed classic studies: Coale's evaluation of the Newberry Library's Latin American Colonial history holdings, with comparative data for the University of Texas at Austin, the University of California at Berkeley Libraries, and the Hispanic Society of America; and Webb's assessment of medieval studies, art history, political science, physics, Slavic studies, and United States and United Kingdom social and literary history at the University of Colorado.7

The checklist technique (sometimes in combination with other approaches) also has been used for the evaluation of library holdings in science and technology at the University of Idaho by Burns; the periodicals collection at James Madison University by Bolgiano and King; history of Christianity at Ohio State University by Shiels and Alt; music at Louisiana State University by Taranto and Perrault; irrigation at the University of Illinois by Porta and Lancaster; biocatalysis and applied molecular biology at Columbia University by Kehoe and Stein; theatre arts at California State University, Sacramento by Snow; mathematics at Winona State University by Dennison; the legal collection at Suffolk University Law Library by Flaherty; and graphic novels at the University of Memphis (although specific results are not reported) by Matz.8 In addition, the method was used by Larson to test the accuracy and consistency of assigned Conspectus collection levels in French literature by twenty Research Libraries Group (RLG) libraries in a Conspectus verification study.9 Note that this paragraph is not a comprehensive listing, as numerous other examples could be cited.


Citation-based Checklists

Most of the earliest checklist evaluations used bibliographies, recommended lists, or other so-called “authoritative” sources. Yet the Guide to the Evaluation of Library Collections outlines fifteen possible sources for a checklist, such as course syllabi or reading lists, publisher or dealer catalogs, bestseller lists, the holdings of important libraries, and so on.10 Two of the fifteen relate to citations: lists of highly cited journals, such as those in the Institute for Scientific Information’s Journal Citation Reports (JCR) and “citations contained in publications.”11

Citation analysis is a well-established library and information science research methodology that is frequently used to analyze scholarly communication patterns as well as for numerous evaluative purposes. Citations selected from journals, textbooks, dissertations and theses, faculty publications, and other sources have frequently been used as collection evaluation checklists. The advantages and disadvantages of using citations for checklists have been reviewed by this author.12 The technique is based on the assumption that the cited sources were used by researchers, and thus should be contained in a library collection supporting research. Relevant interdisciplinary or multidisciplinary citations might be included that would not appear on other lists specific to a particular subject. Among the disadvantages, some citations may be peripheral to the topic, the technique focuses on library patrons who publish, and an item might be cited simply because it is available rather than because it is the best resource.

Heidenwolf asserts that the use of citations for checklists originated during the 1950s and cites a 1957 study by Emerson.13 She categorizes Jewett’s well-known 1849 evaluation, described above, as an example of checking an “authoritative bibliography,” but Jewett’s study also was a citation-based checklist, as it used references from textbooks.14 The most frequently used methods for selecting citations for checklist evaluation will be reviewed below. Note that illustrative examples are provided for each category rather than a comprehensive review.

Citations from Journals

This researcher used two methods for selecting citations from political science journals (the first based on three years of the American Political Science Review and the second based on one year of five other journals) to evaluate the political science collections of George Washington, Georgetown, Howard, Catholic, and George Mason university libraries.15 Utilizing the author’s second method, Heidenwolf used citations from five epidemiology journals to evaluate the epidemiology collection in the University of Michigan Library system and its Public Health Library.16 Gleason and Deffenbaugh selected citations from three biblical studies journals to evaluate the University of Notre Dame Library’s holdings on that topic.17 In addition to other methods, Crawley-Low evaluated the University of Saskatchewan’s toxicology collection by using citations to books from a three-year run of the Annual Review of Pharmacology and Toxicology as a checklist.18 Journal citations also were used in a checklist evaluation of irrigation at the University of Illinois at Urbana-Champaign by Porta and Lancaster.19

Citations from Textbooks

This is the approach used by Jewett in the 1840s.20 Among numerous methods employed in the evaluation of the Washington University School of Medicine's ophthalmology monograph collection, Gallagher used the one hundred monographic citations in the classic textbook Ophthalmology: Principles and Concepts as a checklist to address the question of whether the book could have been written with the library's resources.21 In a similar vein, Watson selected citations from Duane's Clinical Ophthalmology along with another non-citation source.22 Her checklists were used by members of the Association of Vision Science Librarians to evaluate their collections, with results for twenty-one unidentified libraries reported.

Bland checked citations from twenty-five textbooks (five each in mathematics, philosophy, physics, psychology, and sociology) against the holdings of the Western Carolina University Library, predicated on the assumption that the collection's relevance for teaching purposes would be tested because the citations were taken from textbooks for courses taught in the curriculum.23 Following up on Bland's work, Stelk and Lancaster checked citations from five religious studies textbooks against the holdings of the University of Illinois at Urbana-Champaign undergraduate and main university libraries, and confirmed the technique's usefulness for evaluation of undergraduate collections.24 In another permutation on the use of textbooks, Currie selected one citation from the textbook for each of eighty courses taught at Firelands College, a two-year branch of Bowling Green State University, to create a checklist for evaluating both the branch and the main library.25

Citations from Dissertations and Theses

Citations from dissertations and theses have been used as a checklist to test the ability of university libraries to support doctoral- and master’s-level research. In the earliest known study, Emerson analyzed the citations in twenty-three engineering dissertations completed at Columbia University from 1950 through 1954, and then used them as a checklist to evaluate the Columbia Libraries engineering holdings.26

Herubel used a list of the journals and serials cited twice in philosophy dissertations written at Purdue University as a checklist for evaluating the Purdue library’s periodical collection.27 The University of California at Irvine’s library was evaluated by Buzzard and New based on a checklist of citations selected from thirty-six dissertations (twelve each from the sciences, social sciences, and humanities) completed at that institution.28 Citations from sixty-five master’s theses in human resources development were used by Moulden to evaluate the National College of Education’s ability to support off-campus programs.29

Citations from Faculty Publications

The selection of citations from dissertations as well as faculty-authored books and articles at Loughborough University in the United Kingdom has been reported by Lewis in a study that also examined interlibrary loan (ILL) records to determine if unheld items had been borrowed.30 To test the capability of the Pennsylvania State University’s branch campus libraries to support faculty research, Neal and Smith checked citations from journal articles published by branch faculty against the system holdings.31 Haas and Lee evaluated the University of Florida library’s periodical holdings in forestry by checking journal titles cited in faculty publications as well as articles written by faculty.32

The Lopez Method

Although infrequently used, the Lopez method offers an interesting variation on the citations-as-checklist technique that is worth noting. Lopez described an evaluation method, developed at the State University of New York at Buffalo Library, that extends the checklist technique through four hierarchical levels.33 He explains the approach as follows:

Select at random from a critical bibliography, a number of references. Check these references against the library’s holdings. If those references are available, then take as your second reference, the first citation in that publication’s footnote. Repeat the procedure until either the library lacks the material cited or until a fourth and final citation is obtained.34

Lopez then outlined a 10-20-40-80 scoring method for items held at levels one through four respectively.35 This researcher reported a test of Lopez’s method at the University of Manitoba Library in four subject areas (family therapy, the American novel, modern British history, and Medieval French literature) that concluded that the method measured a collection’s depth for supporting research, but was unreliable because of inconsistent results between the two different tests in each subject.36
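For readers who prefer a procedural statement, the Lopez method can be sketched as follows. This is a hedged reconstruction from the description above, not code from Lopez's or the author's studies; is_held and first_footnote_citation are hypothetical helper functions standing in for a catalog lookup and for retrieval of the first reference cited by a held item.

```python
LEVEL_SCORES = [10, 20, 40, 80]  # Lopez's weights for levels one through four

def lopez_score(start_reference, is_held, first_footnote_citation):
    """Follow a citation chain up to four levels deep, scoring held items.

    The chain ends when the library lacks the cited material or when the
    fourth citation has been checked; the maximum score is 150.
    """
    score = 0
    reference = start_reference
    for depth, weight in enumerate(LEVEL_SCORES, start=1):
        if not is_held(reference):
            break  # the library lacks the material; the chain ends here
        score += weight
        if depth < len(LEVEL_SCORES):
            reference = first_footnote_citation(reference)
    return score
```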

Use of Checklists in Database Evaluation

Ever since so-called full-text databases emerged during the 1980s, the completeness of their coverage has been debated and, to some extent, researched. One of the earlier investigations of full-text database content, published by Pagell in 1987, bore the provocative and catchy-sounding subtitle, “How Full Is Full?”37 A variety of methods have been used or proposed to assess full-text database content coverage and quality, including Pagell’s comparison of print issues with database coverage; Black’s average JCR impact factor for journals contained in the database (that also were covered by JCR); and Jacsó’s summing of the impact factor of all of a database’s JCR journals.38

The checklist method also has been used to evaluate indexing and abstracting coverage, full-text content of databases, or both. In this modification of the traditional checklist approach, each item on the list is checked against the database under evaluation rather than in a library's OPAC. Typically, a list of journal titles is checked against the vendor's list of titles theoretically contained in the database. For example, Carr and Wolfe used core lists of education and biology journals to evaluate four electronic databases at the University of Wisconsin system libraries.39 At the University of Hawaii at Manoa, Brier and Lebbin used Magazines for Libraries as a checklist to evaluate the title coverage of three databases.40 Black used the list of journals covered in JCR to evaluate four full-text databases.41 Jacobs, Woodfield, and Morris compiled core journal lists, based on local citations by British researchers, that were checked against the coverage of four major databases as well as the British Library Document Supply Centre.42 Instead of checking titles, Grzeszkiewicz and Hawbaker checked the articles from sample issues of journals subscribed to by the University of the Pacific Library in Business Index ASAP.43 This literature review identified only two published cases in which citations were directly checked in databases—the method used in this study. Tyler, Boudreau, and Leach selected 6,170 citations from the first available 2000 issue of an unspecified number of core communication studies journals and checked them for coverage in three communication studies indexes and five multidisciplinary databases.44 Schaffer used a sample of 368 citations from more than 150 articles published by psychology department faculty at Texas A&M University between 2000 and 2002 as a checklist (although that term is not used by Schaffer) for evaluating the content of twenty-six electronic full-text databases licensed by the library.45


Databases Evaluated in this Investigation

Library Literature & Information Science Full Text, published by H. W. Wilson, contains "full text of articles from nearly 150 journals as far back as 1997" and indexing coverage for four hundred journals dating to 1984.46 Although this database has undergone name changes and migration from print to CD-ROM to a Web interface, it can be traced to Library Literature, the well-known library science index originally published in print format by the American Library Association in 1921.47 This product was chosen for evaluation because of its pedigree and reputation as a premier library and information science database.

Part of the EBSCOhost suite of databases marketed by EBSCO, Academic Search Premier is advertised as "designed specifically for academic institutions" and offering "the world's largest, multidisciplinary full-text database."48 This product contains full text for "nearly" 4,650 serials, with backfiles as far back as 1975 "or further" for more than one hundred journals; furthermore, indexing and abstracts are provided for 8,200 titles.49 This service was selected for investigation because it is an important multidisciplinary database that includes library and information science with an academic focus.

One might ask why a specialized full-text database is compared with a general one (rather than two specialized or two general databases), and whether such a comparison unfairly advantages the former when the checklist is drawn from its own discipline. Library Literature & Information Science Full Text is the only full-text database specific to that discipline, as LISA: Library and Information Science Abstracts and Library, Information Science & Technology Abstracts are not advertised as full-text services. Academic Search Premier, although a multidisciplinary database, is known to have significant library and information science content and is actually listed under "library and information science" in the "Databases by Subject" menu selection on the Indiana University library Web page.50 While a better performance by Library Literature & Information Science Full Text would be presumed, it is useful to gather empirical evidence to test this assumption and to examine the differences in the two databases' coverage. At the project's conclusion, the results from the two databases were similar enough to suggest it was not unreasonable to compare them.


Procedures

The citations to periodicals in the 2004 volumes of Library Resources & Technical Services (LRTS), volume 48, and Collection Building, volume 23, served as the source for this investigation. All citations in endnotes (referred to as "references" in both journals) or appended in "further reading" or bibliography sections were consecutively numbered, classified by format, and entered into an Excel spreadsheet. Citations were counted according to the item-to-item link approach developed by Garfield and used in the Institute for Scientific Information's Web of Science. Thus, if a specific bibliographic item was cited twice in one article, it was counted as only one citation, but if cited in two different articles it counted as two citations. A small number of nonbibliographical items (editor inquiries to the author included as numbered footnotes, apparently in error) were disregarded.
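A minimal sketch of this counting rule (with illustrative identifiers rather than actual citation data) may make it concrete:

```python
def count_citations(articles):
    """Count cited items by item-to-item links: a cited item counts once
    per citing article, however many times that article cites it."""
    counts = {}
    for cited_items in articles:
        for item in set(cited_items):  # deduplicate within one article
            counts[item] = counts.get(item, 0) + 1
    return counts

# One article cites item "A" twice (counted once); a second article
# also cites "A" (counted again), so "A" totals two citations.
print(count_citations([["A", "A", "B"], ["A", "C"]]))
# counts: A -> 2, B -> 1, C -> 1 (dictionary order may vary)
```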

The cited periodical titles were checked in the OCLC WorldCat database to verify their subject (based on Library of Congress classification number) and country of publication. During the spring 2005 semester, the citations to periodicals were checked (by author and, if not found, by title) in two databases: Library Literature & Information Science Full Text and Academic Search Premier. Each checked periodical citation was initially classified into one of four categories:

  1. a citation only
  2. a citation plus an abstract
  3. a full-text entry
  4. no record in the database

An Excel spreadsheet was used to calculate the overall periodical coverage for LRTS and Collection Building in both databases; in other words, the distribution of each journal's citations to periodicals among the four categories outlined above. For purposes of final analysis, categories one and two were combined into a single indexing and abstracting coverage category. The spreadsheet also was used to tabulate the results by title and by publication date of the cited articles, facilitating analysis by those variables. Note that analysis by language, subject, and place of publication did not require a spreadsheet.
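The tabulation itself is straightforward. The sketch below assumes each checked citation has been coded with one of the four category numbers listed above; the function and sample codes are hypothetical stand-ins for the spreadsheet calculation, not the author's actual worksheet:

```python
from collections import Counter

def coverage_distribution(category_codes):
    """Report the percentage of citations found as full text (category 3),
    indexed or abstracted (categories 1 and 2 combined), or absent (4)."""
    counts = Counter(category_codes)
    total = len(category_codes)
    return {
        "full text": 100.0 * counts[3] / total,
        "indexed/abstracted": 100.0 * (counts[1] + counts[2]) / total,
        "not covered": 100.0 * counts[4] / total,
    }

# Illustrative codes for ten checked citations, not actual study data.
print(coverage_distribution([3, 3, 1, 2, 4, 4, 4, 1, 3, 4]))
# {'full text': 30.0, 'indexed/abstracted': 30.0, 'not covered': 40.0}
```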


Analysis of the Citations

The 2004 LRTS contained 910 citations, counted according to the method described in the preceding section. Table 1 presents a breakdown of these citations by format. A majority of the citations (60.0 percent) were to periodical articles, while books were the second most frequently cited format (12.4 percent). If the citations for books and book chapters (3.8 percent) are combined, 16.3 percent (calculated from the raw data rather than by adding percentages) of the citations were to monographs. The Web accounted for 11.7 percent of the citations: 10.7 percent to Web documents and 1.0 percent to Web sites.

Table 2’s summary of journals cited in LRTS shows that LRTS itself was the most frequently cited title, with its 43 citations accounting for 7.9 percent of the 546 total. The ten most cited journals (those cited 22 times or more) accounted for more than half the citations (52.0 percent). Yet a total of 115 different titles were cited, with 62 cited only once, 14 twice, 9 cited three times, and 9 cited four times. In counting titles, a title change is considered a different title (following the policy of the Institute for Scientific Information). Accordingly, Library Acquisitions: Practice & Theory and its later title, Library Collections, Acquisitions & Technical Services, are listed separately.

The 2004 volume of Collection Building contained 256 citations. Table 3 indicates that journal articles were the most frequently cited format (41.8 percent), although they accounted for a smaller proportion of citations than in LRTS. In contrast to LRTS, where monographs were the second most frequently cited format, Web resources ranked second in Collection Building: Web sites (18.4 percent) and Web documents (9.8 percent) together accounted for 28.1 percent of the citations, while books (19.9 percent) and book chapters (3.1 percent) comprised 23.0 percent. (These combined figures are calculated from the raw data rather than by adding the rounded percentages; for example, (47 + 25)/256 = 28.1 percent, whereas 18.4 + 9.8 = 28.2.)

The journals cited in Collection Building are displayed in table 4. The two most frequently cited journals, Collection Building itself and Library Trends, both cited 11 times, contributed 20.6 percent of the 107 journal citations. The top 8 journals (contributing three or more citations) accounted for 43.0 percent of journal citations, while 45 titles were cited only once and 8 titles twice. Altogether, 61 different titles were cited in Collection Building. The listing of titles in tables 2 and 4 should definitely not be interpreted as a formal journal ranking. Rather, these titles are presented in order to provide information about the citations used as the basis for the evaluation.

The periodical citation data for both LRTS and Collection Building conform to two well-known patterns: journal self-citation and the law of concentration and scatter. For a variety of reasons (the explanations for which are beyond this paper’s scope), journals tend to cite themselves. Citation and use studies typically display a pattern of concentration in a small number of highly cited or used journals and scatter among a large number of infrequently cited or used titles.

The publication dates of the periodical articles cited in LRTS ranged from 1964 through 2004, while the periodical articles cited by Collection Building were published from 1981 to 2004. Table 5 summarizes cited periodical publication dates, organized into five-year intervals, for the two journals. It is striking that for both journals the majority of cited periodical articles were published in 2000 or later, and thus within the most recent five years: 53.8 percent for LRTS and 57.9 percent for Collection Building. Only a small fraction of both journals' periodical citations predate 1990 (8.4 percent in LRTS and 2.8 percent in Collection Building), probably reflecting the rapid changes within the field.

Most citations were to journals published in the United States. In LRTS, 90.1 percent of the 546 citations were to United States journals, followed by 7.1 percent to journals published in the United Kingdom, and 1.1 percent to German journals. Four countries each received fewer than 1 percent of the LRTS citations: Denmark (0.9 percent), Australia (0.4 percent), Canada (0.2 percent), and the Netherlands (0.2 percent). The international citation rate was somewhat higher in Collection Building, where 73.8 percent of 107 citations were to United States journals and 17.8 percent to United Kingdom titles. There was a smattering of citations to seven other countries. Nigeria and Malaysia each received 1.9 percent of the Collection Building journal citations, and five countries received 0.9 percent (1 citation only): Australia, India, Germany, Netherlands, and the Philippines. The journal citations were almost exclusively in English, exceeding 99 percent in both journals. One citation in Collection Building was in Spanish, while two citations in LRTS were in German.

As noted in the preceding section, WorldCat was consulted to determine the LC classification number for the cited titles. Not unexpectedly, a strong majority of citations in both LRTS and Collection Building were to journals classified in Z (Bibliography, Library Science, Information Resources [General]). In LRTS, 93.0 percent of the 546 journal citations were to Z-classified titles: specifically, 89.0 percent to "Libraries" (Z662 to Z1000.5); 2.0 percent to "General Bibliography" (Z1001 to Z1121); 1.5 percent to "Information Resources (General)" (ZA); and 0.5 percent to "Book Industries and Trade" (Z116 to Z659). In addition to Z, 2.9 percent of LRTS citations were to P (Language and Literature), while seven other broad classes were represented: L (Education)—0.9 percent; Q (Science)—0.9 percent; T (Technology)—0.5 percent; C (Auxiliary Sciences of History)—0.4 percent; H (Social Sciences)—0.4 percent; A (General Works)—0.2 percent; and M (Music and Books on Music)—0.2 percent. Classification numbers were not available for 0.5 percent of the citations.

For Collection Building, 82.2 percent of the 107 journal citations were to Z, including 72.0 percent to "Libraries," 9.3 percent to "General Bibliography," and 0.9 percent to "Information Resources (General)." In addition, 5.6 percent were to P; 3.7 percent to H; 2.8 percent to Q; 1.9 percent to E (American and U.S. history, other than local); 1.9 percent to T; and 0.9 percent to L. A classification number could not be determined for 0.9 percent of the citations.


Results of Checking the Databases
Overall Results

The results of checking the citations in the two databases under investigation are tabulated in table 6. In terms of full-text entries, the best result, not unexpectedly, was found in Library Literature & Information Science Full Text, which contained full text for 21.1 percent of the total citations for the two journals. Academic Search Premier held 16.1 percent of the total citations in full-text format. While this article's primary focus is on the comparison of databases rather than journals, it is noteworthy that Collection Building received equal full-text coverage from Library Literature & Information Science Full Text and Academic Search Premier, as both provided 25.2 percent of its periodical citations in that form. In contrast, LRTS's full-text coverage was higher in Library Literature & Information Science Full Text (20.3 percent) than in Academic Search Premier (14.3 percent).

Apart from the issue of full-text coverage, Library Literature & Information Science Full Text contained some type of record (full text or indexing and abstracting coverage) for a much higher proportion of the citations than did Academic Search Premier. For illustration, Academic Search Premier had no coverage for 65.6 percent of LRTS and 55.1 percent of Collection Building citations (63.9 percent for both), whereas Library Literature & Information Science Full Text covered all but 17.4 percent of LRTS’s citations and 24.3 percent of those in Collection Building (18.5 percent in the two journals). With a small number of exceptions, Library Literature & Information Science Full Text listed only a citation, whereas Academic Search Premier contained both a citation and abstract. This paragraph’s data show that when full-text entries and indexing or abstracting coverage are collectively considered, LRTS received fuller coverage than did Collection Building in Library Literature & Information Science Full Text, while Collection Building’s coverage was better in Academic Search Premier.

To provide some comparative results from similar studies, Tyler, Boudreau, and Leach found that indexing coverage in three communication studies indexes ranged from 25.0 percent to 34.1 percent and from 4.0 percent to 77.8 percent in five other multisubject databases.51 Schaffer discovered that “less than one-third” of the cited articles in his study were available in full text in at least one of the twenty-six online databases licensed by Texas A&M.52

Finally, because this investigation is based on checking the entire universe of periodical citations in LRTS and Collection Building during 2004 rather than random samples, the use of statistical significance tests would be inappropriate.

Results by Publication Date

Table 7 analyzes the results by publication date. One can observe that items published during the 2000–2004 interval received the highest proportion of full-text coverage in both databases and for both journals, with the percentages declining steadily across the two earlier five-year intervals (1995–1999 and 1990–1994). For items published before 1990, only two (LRTS citations covered in Academic Search Premier) received full-text coverage. When one examines the table's final column (which indicates the percentage of titles not covered in the database as either full-text or indexing or abstracting entries), the proportions usually increase for the older intervals, although not strictly so. Thus, one can conclude that coverage is generally higher for more recent citations, as would be expected.

Results by Country of Publication and Language

Analysis of full-text coverage by country of publication found a primary focus on the United States. For Academic Search Premier, 77 of the 78 LRTS citations contained in full text were published in the United States, and 1 was published in the Netherlands, while 26 of the 27 full-text Collection Building entries were published in the United States, and 1 in the United Kingdom. All 111 of the LRTS full-text items in Library Literature & Information Science Full Text were published in the United States, as were 22 of the 27 Collection Building full-text entries in that database, with 2 published in Malaysia and 1 each in Australia, India, and Germany. A similar but less strong pro-United States bias was found in the indexing or abstracting coverage for the cited items. In the Academic Search Premier database, 106 of 110 LRTS indexing or abstracting entries were published in the United States, with 4 in the United Kingdom, while for Collection Building 20 of 21 originated in the United States, and 1 in the United Kingdom. In Library Literature & Information Science Full Text, 295 of 340 non-full-text entries (in other words, indexing or abstracting entries) from LRTS came from United States–published journals, whereas 35 were published in the United Kingdom, 4 in Denmark, 3 in Germany, 2 in Australia, and 1 in the Netherlands. For Collection Building, 40 of 54 non-full text entries were United States–published, with 13 in the United Kingdom, and 1 in the Netherlands.

Both databases held full-text entries for a significantly larger proportion of the citations published in the United States than those published outside the country. Academic Search Premier contained 15.7 percent (77 of 492) of LRTS citations published in the United States, versus 1.9 percent (1 of 54) of those from other countries. For Collection Building, 32.9 percent (26 of 79) of United States publications were held in full text, contrasted to 3.6 percent (1 of 28) for non-United States publications. In Library Literature & Information Science Full Text, the percentage for United States versus non-United States publications was 22.6 percent (111 of 492), contrasted with 0 percent (0 of 54) for LRTS, and 27.8 percent (22 of 79) compared with 17.9 percent (5 of 28) for Collection Building.

When limited to indexing or abstracting entries, a stronger coverage for the United States also was observed in Academic Search Premier but not Library Literature & Information Science Full Text. The former database held citations or abstracts for 21.5 percent (106 of 492) of the United States publications in LRTS, and 25.3 percent (20 of 79) of the United States publications in Collection Building, although it only held 7.4 percent (4 of 54) of LRTS’s non-United States citations and 3.6 percent (1 of 28) of Collection Building’s. In contrast, Library Literature & Information Science Full Text contained non-full-text entries for a larger proportion of LRTS citations published outside the United States than inside the United States: 83.3 percent (45 of 54) versus 60.0 percent (295 of 492). For Collection Building, the percentages were almost identical: 50.6 percent (40 of 79) for United States publications, and 50.0 percent (14 of 28) for non-United States publications.

These data suggest that Library Literature & Information Science Full Text has a stronger international coverage, as far as indexing or abstracting entries are concerned, than Academic Search Premier. Regarding language, there was no record in either database for the one Spanish citation in Collection Building or the two German citations in LRTS, although these numbers are obviously too small to allow conclusions about language coverage.

Results by LC Classification

A breakdown by classification number revealed that most of the entries in both databases were classed in the Z segment for “Libraries” (Z662-Z1000.5). In the Academic Search Premier database, 54 of the 78 LRTS full-text entries were classified there, while 13 were classed in P, 4 in Q, 3 in L, 2 in T, 1 in H, and 1 in M. Also, 101 of 110 LRTS indexing or abstracting entries were classified in the “Libraries” range of Z, 4 in Z’s “General Bibliography” segment, 2 in L, 2 in P, and 1 in Q. Of the 27 full-text Collection Building entries in Academic Search Premier, 15 were classed in Z’s “Libraries” range, 5 in P, 4 in Z’s “General Bibliography,” 1 in E, 1 in H, and 1 in T. For Collection Building’s 21 non-full-text items, 15 were in Z’s “Libraries” range, 3 in Z’s “General Bibliography,” 1 in H, 1 in P, and 1 in Q.

Classification analysis of the Library Literature & Information Science Full Text database discovered that 109 of the 111 LRTS full-text entries fell in Z’s “Libraries” range, with 1 in L, and 1 in M. For the 340 indexing or abstracting entries, 310 were in the Z range for “Libraries,” 14 in P, 9 in “General Bibliography” in Z, 5 in ZA, and 2 in C. Of the 27 Collection Building full-text entries, 23 were in Z’s “Libraries” section, 3 in Z’s “General Bibliography,” and 1 in T. For non-full-text entries, 43 of 54 fell in the “Libraries” range of Z, 5 in P, 4 in Z’s “General Bibliography,” 1 in ZA, and 1 in H.

Further analysis revealed that Academic Search Premier was more likely to contain a full-text entry if the citation was classed outside the Z range for "Libraries." The database held full-text entries for 11.1 percent (54 of 486) of the LRTS citations classed in Z's "Libraries" section, contrasted to 40.0 percent (24 of 60) for all the remaining citations classed elsewhere. For Collection Building in Academic Search Premier, 19.5 percent (15 of 77) of the citations in Z's "Libraries" range were held in full text, whereas 40.0 percent (12 of 30) of the citations classified elsewhere were found in full text. However, this pattern did not hold up for indexing or abstracting coverage. Academic Search Premier contained non-full-text entries for 20.8 percent (101 of 486) of the LRTS citations classed in "Libraries," compared to 15.0 percent (9 of 60) for the citations classed outside Z's "Libraries." Furthermore, 19.5 percent (15 of 77) of Collection Building's citations classed under "Libraries" were included in the database as indexing or abstracting entries—a percentage almost identical to the 20.0 percent (6 of 30) of the citations classed elsewhere that were indexed or abstracted.

In contrast to Academic Search Premier, Library Literature & Information Science Full Text held, for the two journals in both full-text and indexing or abstracting form, a higher proportion of citations classed in Z's "Libraries" range than of those classified elsewhere. For LRTS, it contained 22.4 percent of the former (109 of 486), contrasted with 3.3 percent (2 of 60) of the latter, in full text, and 63.8 percent of the former (310 of 486), contrasted with 50.0 percent (30 of 60) of the latter, as indexing or abstracting entries. For Collection Building, the corresponding data were 29.9 percent (23 of 77), contrasted with 13.3 percent (4 of 30), for full text, and 55.8 percent (43 of 77) versus 36.7 percent (11 of 30) for indexing or abstracting.

Results by Title

Another approach is to analyze database coverage by title rather than by citation, as has been done in the preceding sections. In Academic Search Premier, of the 115 titles cited in LRTS, 15 titles (13.0 percent) received full-text coverage for all citations, 10 titles (8.7 percent) received partial full-text coverage (in other words, some but not all citations were in full text), 7 (6.1 percent) received complete indexing or abstracting coverage, 7 titles (6.1 percent) received partial indexing or abstracting coverage, and 76 titles (66.1 percent) received no coverage. For the 61 titles cited in Collection Building, 10 (16.4 percent) received complete full-text coverage, 2 (3.3 percent) partial full-text coverage, 11 (18.0 percent) complete indexing or abstracting coverage, and 38 (62.3 percent) no coverage. For Library Literature & Information Science Full Text, the coverage for the 115 titles cited in LRTS was: complete full-text coverage—11 (9.6 percent); partial full-text coverage—11 (9.6 percent); complete indexing or abstracting coverage—41 (35.7 percent); partial indexing or abstracting coverage—10 (8.7 percent); and no coverage—42 (36.5 percent). The corresponding data for the 61 Collection Building titles were: complete full-text coverage—11 (18.0 percent); partial full-text coverage—3 (4.9 percent); complete indexing or abstracting coverage—22 (36.1 percent); partial indexing or abstracting coverage—none; and no coverage—25 (41.0 percent).

Because it is highly skewed by the large number of titles cited only once, this breakdown by title is less useful than analysis by citation. Yet it offers the benefit, unlike some previous checklist evaluations of database coverage, of demonstrating incomplete or mixed coverage for some titles. For example, of the 33 citations to College & Research Libraries in LRTS, Library Literature & Information Science Full Text provided full text for 20, indexing or abstracting for 10, and no coverage for 3. (Note that this was counted as partial full-text coverage in the preceding title analysis.)
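The title-level categories amount to a simple decision rule. The sketch below is a hypothetical restatement of that rule; the sample data reproduce the College & Research Libraries figures just cited (20 full-text, 10 indexed or abstracted, and 3 uncovered citations in Library Literature & Information Science Full Text):

```python
def classify_title(results):
    """Classify a cited title from its per-citation outcomes, each being
    'full' (full text), 'index' (indexed or abstracted), or 'none'."""
    if all(r == "full" for r in results):
        return "complete full-text coverage"
    if any(r == "full" for r in results):
        return "partial full-text coverage"
    if all(r == "index" for r in results):
        return "complete indexing/abstracting coverage"
    if any(r == "index" for r in results):
        return "partial indexing/abstracting coverage"
    return "no coverage"

crl_results = ["full"] * 20 + ["index"] * 10 + ["none"] * 3
print(classify_title(crl_results))  # partial full-text coverage
```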


Limitations to the Study

A number of limitations to this investigation are acknowledged. Some of the citations in LRTS and Collection Building are from other disciplines, so one would not expect to find them in a discipline-specific database such as Library Literature & Information Science Full Text. Because database content frequently changes, this investigation's results represent a snapshot as of spring 2005. Also, items not held in the two databases under investigation might have been available in others licensed by the library, such as ScienceDirect. Finally, coverage is only one among many factors in database evaluation, along with pricing structure, licensing terms, search features, screen display, accuracy of records, compatibility with technological infrastructure, and others.


Summary and Conclusions

A citation-based checklist addresses the extent to which a collection or database would meet the needs of researchers. This research shows that both databases provide full-text entries for only a fraction of the articles cited by LRTS and Collection Building authors in 2004. Thus, the answer to that catchy-sounding question, “How full is full?” is, in this instance, “not very full.” However, one should acknowledge that neither database provider claims complete full-text coverage.

Library Literature & Information Science Full Text is somewhat better than Academic Search Premier for full-text coverage (containing 21.1 percent versus 16.1 percent of the citations), and far more likely to contain an indexing or abstracting record of a cited item: it lacked a record for only 18.5 percent of the citations, compared to 63.9 percent in Academic Search Premier. The latter finding implies that Library Literature & Information Science Full Text is clearly the preferred database for users interested in identifying existing resources on a library and information science topic, even though they may not have immediate access to those resources through the database.

Generally, full text and indexing and abstracting coverage is stronger for more current citations and declines with citation age. The fact that more than half the periodical citations in LRTS and Collection Building date from 2000 or later, with fewer than 10 percent predating 1990, suggests that, at least in reference to these two journals, deep backruns in a database covering library and information science may not be of vital importance.

The full-text coverage of both databases is highly skewed toward United States publications, as measured by the percentage of United States versus non-United States citations held and the origin of those citations actually held in full text. Academic Search Premier’s indexing or abstracting coverage also is skewed towards the United States, but, somewhat surprisingly, Library Literature & Information Science Full Text provides indexing or abstracting for a higher proportion of non-United States than United States citations in LRTS, and essentially equal coverage for Collection Building. Consequently, in terms of overall international coverage, Library Literature & Information Science Full Text performs better than Academic Search Premier.

Analysis by the LC classification system, which serves as a proxy for the cited item’s subject, shows that Library Literature & Information Science Full Text provides better full-text and indexing or abstracting coverage for items classed in Z’s “Libraries” section than for those classed elsewhere. Academic Search Premier provides stronger full-text coverage for citations classed outside Z’s “Libraries” segment, although that pattern does not hold for its indexing or abstracting coverage. In the final analysis, Academic Search Premier provides better overall coverage for items outside traditional librarianship than does Library Literature & Information Science Full Text—an expected finding, as Academic Search Premier advertises itself as a multidisciplinary database.

The author makes no explicit recommendation concerning whether a library should license either, both, or neither of the databases investigated here. Such a licensing decision would incorporate a variety of additional factors, such as budgetary considerations, collecting priorities, curricular and teaching needs, researcher interests, and other databases already licensed, that would vary from library to library.

This research is significant because it serves as further evidence that the citation-based checklist technique can be adapted to database content evaluation. A detailed analysis of coverage by such critical parameters as publication date, country of origin, and subject is offered. Moreover, the study represents the first known application of the technique to database evaluation for the field of library and information science.

Some questions for future research should be mentioned. What proportion of the items would be available in other electronic databases licensed by the library, elsewhere on the Web through open-access journals or author self-archiving, or in the library’s print collection? How often would patrons wanting a specific item successfully locate it in full-text form in the two databases under evaluation? As database content is commonly believed to be unstable, what results would be obtained by searching the same databases at later points for longitudinal comparison?


References
1. Barbara Lockett, Guide to the Evaluation of Library Collections (Chicago: ALA, 1989): 5.
2. Ibid., 6; Anne H. Lundin, “List-Checking in Collection Development: An Imprecise Art,” Collection Management 11, no. 3/4 (1989): 103–12; Thomas E. Nisonger, “A Test of Two Citation Checking Techniques for Evaluating Political Science Collections in University Libraries,” Library Resources & Technical Services 27, no. 2 (Apr./June 1983): 163–76.
3. Paul H. Mosher, “Quality and Library Collections: New Directions in Research and Practice in Collection Evaluation,” in Advances in Librarianship, ed. Wesley Simonton, vol. 13, 211–38 (Orlando, Fla.: Academic Pr., 1984).
4. Charles C. Jewett, “Report of the Assistant Secretary Relative to the Library, Presented December 13, 1848,” in Third Annual Report of the Board of Regents of the Smithsonian Institution to the Senate and House of Representatives (Washington, D.C.: Tippin and Streeper, 1849): 39–47.
5. Paul H. Mosher, “Collection Evaluation in Research Libraries: The Search for Quality, Consistency, and System in Collection Development,” Library Resources & Technical Services 23, no. 1 (Winter 1979): 16–32. The original report, M. Llewellyn Raney, The University Libraries (Chicago: Univ. of Chicago Pr., 1933), was unavailable to the author of this article.
6. Douglas Waples and Harold D. Lasswell, National Libraries and Foreign Scholarship: Notes on Recent Selections in Social Science (Chicago: Univ. of Chicago Pr., 1936).
7. Robert Peerling Coale, “Evaluation of a Research Library Collection: Latin-American Colonial History at the Newberry,” Library Quarterly 35, no. 3 (July 1965): 173–84; William Webb, “Project CoEd: A University Library Collection Evaluation and Development Program,” Library Resources & Technical Services 13, no. 4 (Fall 1969): 457–62.
8. Robert W. Burns Jr., Evaluation of the Holdings in Science/Technology in the University of Idaho Library (Moscow, Idaho: Univ. of Idaho Library, 1968); Christina E. Bolgiano and Mary Kathryn King, “Profiling a Periodicals Collection,” College & Research Libraries 39, no. 1 (Jan. 1978): 99–104; Richard D. Shiels and Martha S. Alt, “Library Materials on the History of Christianity at Ohio State University: An Assessment,” Collection Management 7, no. 2 (Summer 1985): 69–81; Cheryl Taranto and Anna H. Perrault, “An Evaluation of the Music Collection at Louisiana State University,” LLA Bulletin 51, no. 2 (Fall 1988): 89–92; Maria A. Porta and F. Wilfrid Lancaster, “Evaluation of a Scholarly Collection in a Specific Subject Area by Bibliographic Checking: A Comparison of Sources,” Libri 38, no. 2 (June 1988): 131–37; Kathleen Kehoe and Elida B. Stein, “Collection Assessment of Biotechnology Literature,” Science & Technology Libraries 9, no. 3 (Spring 1989): 47–55; Marina Snow, “Theatre Arts Collection Assessment,” Collection Management 12, no. 3/4 (1990): 69–89; Russell F. Dennison, “Quality Assessment of Collection Development Through Tiered Checklists: Can You Prove You Are a Good Collection Developer?” Collection Building 19, no. 1 (2000): 24–26; Brian Flaherty, “Assessing Legal Collections: Trying to Eke Out a Method from the Madness,” Against the Grain 14, no. 1 (Feb. 2002): 66–68, 70; Chris Matz, “Collecting Comic Books for an Academic Library,” Collection Building 23, no. 2 (2004): 131–37.
9. Jeffry Larson, “The RLG Conspectus French Literature Collection Assessment Project,” Collection Management 6 (Spring/Summer 1984): 97–114.
10. Lockett, Guide to the Evaluation of Library Collections.
11. Ibid., 6.
12. Nisonger, “A Test of Two Citation Checking Techniques for Evaluating Political Science Collections in University Libraries.”
13. Terese Heidenwolf, “Evaluating an Interdisciplinary Research Collection,” Collection Management 18, no. 3/4 (1994): 34; William L. Emerson, “Adequacy of Engineering Resources for Doctoral Research in a University Library,” College & Research Libraries 18, no. 6 (Nov. 1957): 455–60, 504.
14. Jewett, “Report of the Assistant Secretary Relative to the Library, Presented December 13, 1848.”
15. Nisonger, “A Test of Two Citation Checking Techniques for Evaluating Political Science Collections in University Libraries.”
16. Heidenwolf, “Evaluating an Interdisciplinary Research Collection,” 33–48.
17. Maureen L. Gleason and James T. Deffenbaugh, “Searching the Scriptures: A Citation Study in the Literature of Biblical Studies: Report and Commentary,” Collection Management 6, no. 3/4 (Fall/Winter 1984): 107–17.
18. Jill V. Crawley-Low, “Collection Analysis Techniques Used to Evaluate a Graduate-Level Toxicology Collection,” Journal of the Medical Library Association 90, no. 3 (July 2002): 310–16.
19. Porta and Lancaster, “Evaluation of a Scholarly Collection in a Specific Subject Area by Bibliographic Checking.”
20. Jewett, “Report of the Assistant Secretary Relative to the Library, Presented December 13, 1848.”
21. Kathy E. Gallagher, “The Application of Selected Evaluative Measures to the Library’s Monographic Ophthalmology Collection,” Bulletin of the Medical Library Association 69, no. 1 (Jan. 1981): 36–39; F. W. Newell, Ophthalmology: Principles and Concepts, 4th ed. (St. Louis: Mosby, 1978).
22. Maureen Martin Watson, “The Association of Vision Science Librarians’ Citation Analysis of Duane’s Clinical Ophthalmology,” Journal of the Medical Library Association 91, no. 1 (Jan. 2003): 83–85; Thomas David Duane, William Tasman, and Edward A. Jaeger, Duane’s Clinical Ophthalmology, rev. ed. (Philadelphia: Lippincott Williams and Wilkins, 1998).
23. Robert N. Bland, “The College Textbook As a Tool for Collection Evaluation, Analysis, and Retrospective Collection Development,” Library Acquisitions: Practice & Theory 4, no. 3/4 (1980): 193–97.
24. Ibid.; Roger Edward Stelk and F. Wilfrid Lancaster, “The Use of Textbooks in Evaluating the Collection of an Undergraduate Library,” Library Acquisitions: Practice & Theory 14, no. 2 (1990): 191–93.
25. William W. Currie, “Evaluating the Collection of a Two-Year Branch Campus by Using Textbook Citations,” Community & Junior College Libraries 6, no. 2 (1989): 75–79.
26. Emerson, “Adequacy of Engineering Resources for Doctoral Research in a University Library.”
27. Jean-Pierre V. M. Herubel, “Simple Citation Analysis and the Purdue History Periodical Collection,” Indiana Libraries 9, no. 2 (1990): 18–21; Jean-Pierre V. M. Herubel, “Philosophy Dissertation Bibliographies and Citations in Serials Evaluation,” Serials Librarian 20, no. 2/3 (1991): 65–73.
28. Marion L. Buzzard and Doris E. New, “An Investigation of Collection Support for Doctoral Research,” College & Research Libraries 44, no. 6 (Nov. 1983): 469–75.
29. Carol M. Moulden, “Evaluation of Library Collection Support for an Off-Campus Degree Program,” in The Off-Campus Library Services Conference Proceedings, Charleston, South Carolina, October 20–21, 1988, ed. Barton M. Lessin, 340–46 (Mount Pleasant, Mich.: Central Michigan Univ., 1989).
30. D. E. Lewis, “A Comparison between Library Holdings and Citations,” Library & Information Research News 11, no. 43 (Autumn 1988): 18–23.
31. James G. Neal and Barbara J. Smith, “Library Support of Faculty Research at the Branch Campuses of a Multi-campus University,” Journal of Academic Librarianship 9, no. 5 (Nov. 1983): 276–80.
32. Stephanie C. Haas and Kate Lee, “Research Journal Usage by the Forestry Faculty at the University of Florida, Gainesville,” Collection Building 11, no. 2 (1991): 23–25.
33. Manuel D. Lopez, “A Guide for Beginning Bibliographers,” Library Resources & Technical Services 13, no. 4 (Fall 1969): 462–70.
34. Ibid., 469–70.
35. Ibid.
36. Ibid.; Thomas E. Nisonger, “An In-Depth Collection Evaluation at the University of Manitoba Library: A Test of the Lopez Method,” Library Resources & Technical Services 24, no. 4 (Fall 1980): 329–38.
37. Ruth Pagell, “Searching Full-Text Periodicals: How Full Is Full?” Database 10, no. 5 (Oct. 1987): 33–36.
38. Ibid.; Steve Black, “An Assessment of Social Sciences Coverage by Four Prominent Full-Text Online Aggregated Journal Packages,” Library Collections, Acquisitions, & Technical Services 23, no. 4 (Winter 1999): 411–19; Péter Jacsó, “Evaluating the Journal Base of Databases Using the Impact Factor of the ISI Journal Citation Reports,” in National Online Meeting Proceedings 2000: Proceedings of the 21st National Online Meeting, New York, May 16–18, 2000, ed. Martha E. Williams, 169–72 (Medford, N.J.: Information Today, 2000).
39. Jo Ann Carr and Amy Wolfe, “Core Journal Titles in Full-Text Databases,” in Racing Toward Tomorrow: Proceedings of the Ninth National Conference of the Association of College and Research Libraries, April 8–11, 1999, ed. Hugh A. Thompson, 234–41 (Chicago: ACRL, 1999).
40. David J. Brier and Vickery Kaye Lebbin, “Evaluating Title Coverage of Full-Text Periodical Databases,” Journal of Academic Librarianship 25, no. 6 (Nov. 1999): 473–78; Bill Katz and Linda Sternberg Katz, Magazines for Libraries, 9th ed. (New York: Bowker, 1997).
41. Black, “An Assessment of Social Sciences Coverage by Four Prominent Full-Text Online Aggregated Journal Packages.”
42. N. Jacobs, J. Woodfield, and A. Morris, “Using Local Citation Data to Relate the Use of Journal Articles by Academic Researchers to the Coverage of Full-Text Document Access Systems,” Journal of Documentation 56, no. 5 (Sept. 2000): 563–81.
43. Anna Grzeszkiewicz and A. Craig Hawbaker, “Investigating a Full-Text Journal Database: A Case of Detection,” Database 19, no. 6 (Dec. 1996): 59–62.
44. David C. Tyler, Signe O. Boudreau, and Susan M. Leach, “The Communication Studies Researcher and the Communication Studies Indexes,” Behavioral & Social Sciences Librarian 23, no. 2 (2005): 119–46.
45. Thomas Schaffer, “Psychology Citations Revisited: Behavioral Research in the Age of Electronic Resources,” Journal of Academic Librarianship 30, no. 5 (Sept. 2004): 354–60.
46. H. W. Wilson, “Library Literature & Information Science Full Text,” www.hwwilson.com/databases/liblit.htm (accessed Sept. 20, 2006).
47. Indiana University, Herman B Wells Library, “IUCAT” [online public access catalog], www.iucat.iu.edu/authenticate.cgi?status=start (accessed Jan. 12, 2007).
48. EBSCOhost Web, “Academic Search Premier,” http://web.ebscohost.com/ehost/selectdb?hid=111&sid=159646b9-7f69-4b2a-9cec-8dbcb2fd82cf%40sessionmgr102 (accessed Sept. 20, 2006).
49. Ibid.
50. Indiana University, Herman B Wells Library, “Databases by Subject,” www.libraries.iub.edu/index.php?pageId=1697&subjectId=100&mode=subjectId (accessed Apr. 5, 2007).
51. Tyler, Boudreau, and Leach, “The Communication Studies Researcher and the Communication Studies Indexes,” 35.
52. Schaffer, “Psychology Citations Revisited,” 354.

Tables
Table 1

Format of items cited in LRTS in 2004


Format No. %
Journal articles 546 60.0
Books 113 12.4
Web documents 97 10.7
Conferences 50 5.5
Book chapters 35 3.8
Government documents 22 2.4
E-mail 11 1.2
Web sites 9 1.0
Zines 7 0.8
Spec kits 5 0.5
CD-ROM 4 0.4
Master’s theses 4 0.4
Internal documents 3 0.3
Private conversations 2 0.2
Electronic discussion list 1 0.1
Video recording 1 0.1
Total 910 99.8

Note. Total does not add to 100% due to rounding.


Table 2

Summary of journals cited in LRTS


Title (N=115) Times cited % Cumulative %
Library Resources & Technical Services 43 7.9 7.9
Collection Management 42 7.7 15.6
College & Research Libraries 33 6.0 21.6
Cataloging & Classification Quarterly 27 4.9 26.6
Against the Grain 24 4.4
Library Trends 24 4.4 35.3
Collection Building 23 4.2
Journal of Library Administration 23 4.2
Library Collections, Acquisitions & Technical Services 23 4.2 48.0
Information Technology & Libraries 22 4.0 52.0
Serials Librarian 18 3.3 55.3
Serials Review 14 2.6 57.9
Journal of Academic Librarianship 13 2.4 60.3
Acquisitions Librarian 12 2.2 62.5
American Libraries 10 1.8 64.3
Library Hi Tech 9 1.6 65.9
Library Acquisitions: Practice & Theory 8 1.5 67.4
Library Quarterly 7 1.3 68.7
ARL: A Bimonthly Report 6 1.1
D-Lib Magazine 6 1.1
Portal: Libraries & the Academy 6 1.1 72.0
9 titles 4 6.6 78.6
9 titles 3 4.9 83.5
14 titles 2 5.1 88.6
62 titles 1 11.4 100
Total 546 99.9

Notes. Cumulative percentages were calculated from the raw data, rather than by adding percentages. Total does not add to 100% due to rounding.


Table 3

Format of items cited in Collection Building in 2004


Format No. %
Journal articles 107 41.8
Books 51 19.9
Web sites 47 18.4
Web documents 25 9.8
Book chapters 8 3.1
Conferences 8 3.1
Compact discs 6 2.3
Ph.D. dissertation 1 0.4
Internal document 1 0.4
Master’s thesis 1 0.4
Newspaper article 1 0.4
Total 256 100

Table 4

Summary of journals cited in Collection Building in 2004


Title (N=61) Times cited % Cumulative %
Collection Building 11 10.3
Library Trends 11 10.3 20.6
Collection Management 5 4.7
Library Collections, Acquisitions & Technical Services 5 4.7
Serials Review 5 4.7 34.6
Journal of Academic Librarianship 3 2.8
Public Libraries 3 2.8
Publishers Weekly 3 2.8 43.0
Acquisitions Librarian 2 1.9
Against the Grain 2 1.9
Booklist 2 1.9
Malaysian Journal of Library & Information Science 2 1.9
Online Information Review 2 1.9
Reference & User Services Quarterly 2 1.9
Rural Libraries 2 1.9
Serials Librarian 2 1.9 56.9
45 titles 1 42.1 100
Total 107 100.4

Notes. Cumulative percentages were calculated from the raw data, rather than by adding percentages. Total does not add to 100% due to rounding.


Table 5

Analysis of periodical citations by publication date


Library Resources & Technical Services
Years No. %
2000–2004 294 53.8
1995–1999 159 29.1
1990–1994 47 8.6
1985–1989 29 5.3
1980–1984 8 1.5
Pre-1980 9 1.6
Total 546 99.9
Collection Building
Years No. %
2000–2004 62 57.9
1995–1999 29 27.1
1990–1994 13 12.1
1985–1989 2 1.9
1980–1984 1 0.9
Pre-1980 0 0.0
Total 107 99.9

Note. Totals do not add to 100% due to rounding.


Table 6

Results from searching LRTS and Collection Building citations in two databases


Academic Search Premier
Full-text entry Indexed/abstracted Not covered
No. No. % No. % No. %
LRTS 546 78 14.3 110 20.1 358 65.6
Collection Building 107 27 25.2 21 19.6 59 55.1
Total 653 105 16.1 131 20.1 417 63.9
Library Literature & Information Science Full Text
Full-text entry Indexed/abstracted Not covered
No. No. % No. % No. %
LRTS 546 111 20.3 340 62.3 95 17.4
Collection Building 107 27 25.2 54 50.5 26 24.3
Total 653 138 21.1 394 60.3 121 18.5

Table 7

Analysis of searching results by publication date


Academic Search Premier—citations from LRTS
Full-text entry Indexed/abstracted Not covered
Publication Date No. No. % No. % No. %
2000–2004 294 47 16.0 65 22.1 182 61.9
1995–1999 159 24 15.1 42 26.4 93 58.5
1990–1994 47 5 10.6 3 6.4 39 83.0
1985–1989 29 0 0 0 0 29 100
1980–1984 8 0 0 0 0 8 100
Pre-1980 9 2 22.2 0 0 7 77.8
Total 546 78 14.3 110 20.1 358 65.6
Academic Search Premier—citations from Collection Building
Full-text entry Indexed/abstracted Not covered
Publication Date No. No. % No. % No. %
2000–2004 62 20 32.3 12 19.4 30 48.4
1995–1999 29 6 20.7 5 17.2 18 62.1
1990–1994 13 1 7.7 4 30.8 8 61.5
1985–1989 2 0 0 0 0 2 100
1980–1984 1 0 0 0 0 1 100
Total 107 27 25.2 21 19.6 59 55.1
Library Literature & Information Science Full Text—citations from LRTS
Full-text entry Indexed/abstracted Not covered
Publication Date No. No. % No. % No. %
2000–2004 294 82 27.9 172 58.5 40 13.6
1995–1999 159 29 18.2 103 64.8 27 17.0
1990–1994 47 0 0 36 76.6 11 23.4
1985–1989 29 0 0 27 93.1 2 6.9
1980–1984 8 0 0 1 12.5 7 87.5
Pre-1980 9 0 0 1 11.1 8 88.9
Total 546 111 20.3 340 62.3 95 17.4
Library Literature & Information Science Full Text—citations from Collection Building
Full-text entry Indexed/abstracted Not covered
Publication Date No. No. % No. % No. %
2000–2004 62 24 38.7 22 35.5 16 25.8
1995–1999 29 3 10.3 20 69.0 6 20.7
1990–1994 13 0 0 11 84.6 2 15.4
1985–1989 2 0 0 1 50.0 1 50
1980–1984 1 0 0 0 0 1 100
Total 107 27 25.2 54 50.5 26 24.3

