Identifying E-Resources

An Exploratory Study of University Students

Amy Buhler (abuhler@ufl.edu) is the Engineering Librarian at Marston Library, University of Florida, Gainesville, and Tara Cataldo (tara@uflib.ufl.edu) is the Science Collections Coordinator, Marston Science Library, University of Florida, Gainesville.

Manuscript submitted December 19, 2014; returned to authors for revision March 6, 2015; revised manuscript submitted April 28, 2015; returned to authors for minor revisions July 22, 2015; revised manuscript submitted August 21, 2015; accepted for publication September 14, 2015.

We express our deepest gratitude to Dr. David Schwieder for his assistance with our statistics-related questions. The research was funded by a University of Florida Faculty Enhancement Opportunity award.

This preliminary study assesses university students’ ability to identify document types or information containers (journal, article, book, etc.) and different types of search tools (database, search engine) in the online environment. It is imperative to understand students’ behaviors because of the pervasiveness of online resources and their impact on information literacy. A survey administered at the University of Florida investigated this phenomenon and queried respondents about their age, higher education level, exposure to bibliographic instruction, and time devoted to school-related online searching. Analyses of 765 responses show that many students cannot consistently and correctly identify these containers and that the behavioral characteristics examined have no apparent influence on this ability. This has implications for the online information-seeking process and for judging credibility, and it is of importance to the library, education, and publishing communities. Recommendations for these communities are discussed.

In 2010, The Economist noted that digital information is increasing tenfold every five years.1 Navigating this vast amount of information to find what a seeker needs to answer a question, solve a problem, or complete a task becomes more challenging as the amount of digital information grows. This is particularly problematic for college students, who by necessity must navigate this sea of information as a critical part of their education. Head recently described this phenomenon in a study of freshmen as an “information tsunami that engulfed them.”2 Librarians often encounter students with the question “How do I cite this book?” only to discover that the resource in question is a journal article, conference proceeding, or other type of resource that they found online. Additionally, library instruction sessions reveal that students do not readily distinguish between the various types of resources when searching online (e.g., Google versus a library database). Because of these behaviors, we hypothesize that in the online environment many students do not differentiate between the varieties of electronic information. It is prudent to understand students’ behaviors not only because of the pervasiveness of online resources but, more importantly, because of the impact that this population’s information literacy will eventually have on society. Identification of the container plays a role in judging the reliability or authoritativeness of a source. Students are told to use peer-reviewed journal articles over books, or books over Wikipedia, presumably because of the greater authority of one, but what happens when the student cannot distinguish between them? If a student cannot identify these containers, it can have a negative effect on how they seek information and assess credibility. This issue also has ramifications for libraries: how we provide reference and instruction services, market resources, create metadata to describe resources, and design our online presence.

Until this point, our stated hypothesis has only been represented in the literature as a byproduct of other types of studies. Our study administered a survey to students at the University of Florida to preliminarily evaluate this phenomenon that Abram and Luther call “format agnostic.”3 We attempted to answer the following questions:

  1. Do university students have difficulty in identifying different digital information resources?
  2. Do factors such as age, level of university experience, amount of bibliographic instruction, or amount of time spent searching play a role in a student’s ability to identify digital information resources correctly?

This paper offers an exploration of the survey results, including comparative analyses of these different containers and search tools. In addition, a brief discussion of implications, future research directions, and recommendations for libraries, publishers, and educators is included.

Literature Review

Many students feel confident locating information resources for papers or projects, but experience confusion when they need to identify the document type (from here on referred to as the information container).4 This could be for the purposes of formatting a bibliography or ensuring that they have used the required types of sources for an assignment. The impetus for this project stems from our observations as practitioners and anecdotal evidence found in the literature. A catalyst for our research was a pair of ebrary surveys examining e-book usage, administered in 2008 and 2011.5 During that three-year period, self-reported usage of e-books declined whereas actual usage significantly increased. This discrepancy led to a third follow-up survey in late 2011 in which respondents were asked: When you are using electronic resources at your library how often do you know what type of document you are using? Only 47.39 percent replied “Always,” indicating that over half of students experience confusion regarding information containers. Our study seeks to expound on this trend. In the current world of scholarly digital information, the lines between the various traditional information containers (book, journal, conference proceeding, etc.) are blurred. We surmise that within this environment, many information consumers, particularly current university students, cannot consistently and correctly identify these containers. Because this issue has not been thoroughly explored and understood, it is not currently addressed in most teaching opportunities.

A search of the literature yields a few articles over the past decade that allude to this issue, particularly in the area of e-book usage studies. Croft and Bedi discovered the phenomenon in the open-ended responses to their 2003 e-book usage survey of students at Royal Roads University (a distance-based university). Among the most intriguing results of their survey were the students’ comments:

  • “We were shown during our residency how to access journals and info. Is this the same as ebooks?”
  • “An explanation of what an ebook is would be helpful. I’ve answered these questions as if they refer to the journals and articles that I accessed through the LRCsite.”
  • “I think that I used eBooks. For sure, I searched for articles. For some limited material, I had access to a whole book. I must confess that I am unsure by exactly what you mean by elibrary and netlibrary!”6

Levine-Clark conducted an e-book usage study at the University of Denver in 2005 and noted “ . . . a small, but significant portion of those responding to the survey indicated a degree of confusion about the concept of the electronic book.” He continues by stating “It is hard to draw any conclusions from the limited responses to open-ended questions, but it is clear that some degree of confusion exists between electronic resource types. This blurring of the distinction between book and journal may mean that for some users the online/print division is more important than the traditional book/journal distinction.”7

In our own experience as practitioners, students do not appear to care about the type of information they find until it is absolutely necessary. Palfrey and Gasser elaborate on this notion in their 2008 book, stating that students do not care about the quality of information they find on the web until they get poor grades on an assignment. They trust the search engine to give them reliable information and judge quality by what “makes sense.”8 Shelburne’s 2008 study also produced findings that support this idea:

The open comments on why e-books have not been used are especially interesting and indicate that lack of awareness of the content is clearly a problem. It appears that users may be accessing e-books without knowing that the resources they are using are actually e-books. . . . Further, several of the open responses indicate that some users may not even be aware of any difference between an electronic journal and an electronic book, a phenomenon also noted by Levine-Clark.9

The Primary Research Group’s 2009 Survey of American College Students produced a report on library e-book usage that asked “What do you think of your college library’s E-book collection?”10 Approximately 32 percent chose the response “I am not sure what an e-book is.” This was also noted in the UK Joint Information Systems Committee’s (JISC) National E-book Observatory study, in which the authors state, “The lack of awareness about the availability of e-books was accompanied by confusion about what an e-book actually is.”11

In her discussion of e-book studies, Soules makes an observation in line with the authors’ experiences. She argues that this is not an issue limited to e-books but is pervasive across all e-content. From her perspective, users are only concerned with content, and the ability to detect the differences between resources is a relic of the print era.12 Holman echoes this in her study of millennial students: “Having grown up with online information sources, they do not discriminate between websites and more traditional print and broadcast media.”13 She noted problems such as a student who found a newspaper article online but was unsure whether it was from a newspaper.14 A 2007 report from JISC on the information-seeking behavior of future researchers stated that the “Google Generation are format agnostic and have little interest in the containers (reports, book chapters, encyclopedia entries) that provide the context and wrapping for information ‘nuggets.’” This report describes the issue as one of importance that has yet to be addressed by the literature, but one that should be studied given its impact on libraries and publishers.15 Clearly, there is a call for the type of research that our preliminary study seeks to explore.

Methods

Qualtrics survey software (www.qualtrics.com) was used to construct the instrument assessing this information container phenomenon. Two identical instruments were created: one requiring the respondent to click on live links and one allowing respondents to view screen captures of each example. Once these initial surveys were completed, a pilot commenced with approximately twenty subjects to compare the survey formats and to test different response choices (the option of “other” was one response choice). Unlike the screen capture survey, the live link survey did not offer consistency and uniformity across responses. For this reason, and because it affected overall response time (taking three to four times longer to complete), the live link survey was discarded. Further, after consulting with a statistician, the response choices of “other” and “textbook” were eliminated to allow for more targeted analysis. Eighteen online resources were selected to test users’ perceptions. The resources are broken down into the two respective categories of individual resource and search tool (see tables 1 and 2).

The final version of the survey (see the appendix) used the question “What would you call this?” when querying respondents. We felt that this was a neutral question and would not provide textual cues that would skew respondents’ choices. Choice selections were standardized by category; however, the order in which the choices appeared was randomized. For individual resources, the choices were e-book, e-journal, article, and website or webpage. The choices for search tools were search engine, database, catalog, and website or webpage. The option of “website or webpage” was listed to allow respondents to select a generic term for a particular resource. We did not provide definitions of the choices so as not to alter respondents’ established perceptions.

For our own analyses, we have used definitions from the Oxford English Dictionary:16

  • Article—A separate portion of something written.
  • Catalog—Now usually distinguished from a mere list or enumeration, by systematic or methodical arrangement, alphabetical or other order, and often by the addition of brief particulars, descriptive, or aiding identification, indicative of locality, position, date, price, or the like.
  • Database—A structured set of data held in computer storage and typically accessed or manipulated by means of specialized software.
  • E-—Prefixed to nouns to denote involvement in electronic media and telecommunications (esp. the use of electronic data transfer over the Internet, etc.), usually to distinguish objects or actions from their non-electronic counterparts.
  • E-book—A hand-held electronic device on which the text of a book can be read. Also: a book whose text is available in an electronic format for reading on such a device or on a computer screen; (occas.) a book whose text is available only or primarily on the Internet.
  • Journal—A daily newspaper or other publication; hence, by extension, any periodical publication containing news or dealing with matters of current interest in any particular sphere.
  • Search Engine—A program that searches for and identifies items in a database that correspond to one or more keywords specified by the user; spec. such a program used to search for information available over the Internet, using its own previously compiled database of Internet files and documents.
  • Webpage—A hypertext document that is accessible via the World Wide Web.
  • Website—A document or a set of linked documents, usually associated with a particular person, organization, or topic, that is held on such a computer system and can be accessed as part of the World Wide Web.

Data were collected with the survey instrument in two different ways: in person using a peer-to-peer model and online via a pop-up on all of the libraries’ 400 public computers. This was done to assess how the peer-to-peer model would compare with a computer pop-up in terms of response rate. Two student assistants were hired to conduct the peer-to-peer collection using surveys loaded on iPads. They primarily collected data in the lobbies of the two largest campus libraries, in quad areas, and in the student union. They worked for fifty-nine days for a total of seventy-five hours and collected 436 surveys with a 100 percent completion rate. The online collection method ran for eighteen days and was available for a total of 314 hours (corresponding to the hours when any given library was open); it yielded 327 surveys, also with a 100 percent completion rate. The peer-to-peer method gathered an average of 5.8 completed surveys per hour compared to the online method, which yielded an average of one completed survey per hour. However, the peer-to-peer method was much more labor intensive in terms of hiring, training, scheduling, and managing the student workers. Additionally, there was a potential for bias in the survey population due to the respondents recruited by the student workers. Given these factors, the online delivery method appears to be ideal for gathering unbiased survey responses with little effort (and cost) on the part of the researcher. However, it would be beneficial for future studies to partner with campus computer labs to reach a broader audience. The results were analyzed using tools housed within Qualtrics as well as SPSS software.
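
As a quick check of the collection rates reported above, the following sketch (illustrative arithmetic only, not part of the study’s analysis) reproduces the surveys-per-hour figures from the counts and hours given in the text:

```python
# Illustrative arithmetic only: reproduces the per-hour collection rates
# reported above from the counts and hours given in the text.

def surveys_per_hour(completed: int, hours: float) -> float:
    """Completed surveys collected per hour of availability."""
    return completed / hours

peer_to_peer = surveys_per_hour(completed=436, hours=75)    # in-person, iPad
online_popup = surveys_per_hour(completed=327, hours=314)   # library computer pop-up

print(f"Peer-to-peer: {peer_to_peer:.1f} surveys/hour")   # ~5.8
print(f"Online pop-up: {online_popup:.1f} surveys/hour")  # ~1.0
```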

Results

Seven hundred and eighty respondents completed the survey. Six hundred fifty-six (84 percent) were undergraduate students and 109 (14 percent) were graduate students. The remaining respondents (2 percent) fell into the categories “High School Student” or “Other.” For the purposes of this study, only the university students’ responses were analyzed (N = 765). Due to the size difference between the graduate and undergraduate pools, figures comparing these two groups use percentages as opposed to raw numbers. The age breakdown of these university students appears in figure 1.

Figure 2 shows how long students spent searching online for class assignments in an average week. Graduate students were more likely to spend over six hours per week performing school-related searches, whereas undergraduates were more likely to spend five hours or fewer. This is most likely due to research associated with graduate students’ theses or dissertations. However, close to half of both graduate and undergraduate students fall in the 2–5 hour range.

Participants were also asked what type of bibliographic instruction (BI) they had received, if any. Figure 3 shows the breakdown between graduate and undergraduate students. There were similar trends in the responses of both groups, with the most common form of BI exposure being a librarian visiting their class. It is important to note that nearly 30 percent of both graduate and undergraduate respondents had not received any form of BI.

Tables 3 and 4 report the 765 university student responses to the e-resource questions broken down into the categories of undergraduate and graduate student.

Discussion

When we presented the preliminary survey results, the following question arose: “Is it really wrong to call any of these resources a website?”17 By definition, it is not: these resources could all fall under the technical description of a website or webpage. However, we wanted to determine whether students apply this generic label or whether they can categorize a resource as a specific information container or search tool, so for the purposes of this study we treated the generic label as incorrect. The generic label becomes problematic when a student (or any information seeker) needs to reference an electronic resource and identifies an item as a website when the more precise container is an e-journal. For example, the JSTOR journal article shown in the survey (see the appendix) would correctly be cited in MLA style as:

Nilsson, Lena Maria, Ingegerd Johansson, Per Lenner, Bernt Lindahl, and Bethany Van Guelpen. “Consumption of filtered and boiled coffee and the risk of incident cancer: a prospective cohort study.” Cancer Causes & Control 21.10 (2010): 1533–1544. Web. 22 Apr. 2013.

As a webpage, which 8 percent of survey respondents identified it as, the citation would likely look like this:

Nilsson, Lena Maria, Ingegerd Johansson, Per Lenner, Bernt Lindahl, and Bethany Van Guelpen. “Consumption of filtered and boiled coffee and the risk of incident cancer: a prospective cohort study.” JSTOR. 2010. Web. 22 Apr. 2013.

The second citation does not provide the precise detail (the journal title) needed by a reader to locate the resource the student used. The webpage citation would therefore be considered incorrect, and a student citing it this way in a paper, poster, or bibliography would lose points on the assignment. This can present real problems not only for students during their academic careers but also for them once they become professionals.
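
To make this information loss concrete, the following small sketch is hypothetical and illustrative only (it is not part of the study or the authors’ method); it shows how a citation assembled from an item’s metadata drops the journal details when the item is treated as a generic webpage rather than as a journal article:

```python
# Illustrative sketch only (not the authors' method): shows how treating an
# article as a generic webpage drops the journal details a reader needs to
# relocate the source.

article = {
    "authors": "Nilsson, Lena Maria, et al.",
    "title": ("Consumption of filtered and boiled coffee and the risk of "
              "incident cancer: a prospective cohort study"),
    "journal": "Cancer Causes & Control",
    "volume_issue": "21.10",
    "year": 2010,
    "pages": "1533-1544",
    "platform": "JSTOR",
    "accessed": "22 Apr. 2013",
}

def cite(item: dict, container: str) -> str:
    """Build a simplified MLA-style citation based on the assumed container."""
    if container == "article":
        return (f'{item["authors"]} "{item["title"]}." {item["journal"]} '
                f'{item["volume_issue"]} ({item["year"]}): {item["pages"]}. '
                f'Web. {item["accessed"]}.')
    # Generic "webpage" treatment: journal title, volume, and pages are lost.
    return (f'{item["authors"]} "{item["title"]}." {item["platform"]}. '
            f'{item["year"]}. Web. {item["accessed"]}.')

print(cite(article, "article"))
print(cite(article, "webpage"))
```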

E-Books

E-books seemed to be the most problematic, according to the literature. Five different e-books were shown in the survey and comparing the responses for these yields interesting findings and additional questions. The five resources were:

  • A Springer e-book
  • A Google e-book
  • An e-textbook in Knovel
  • A Gale encyclopedia
  • An annual report from the National Endowment for the Arts (NEA)

Figure 4 provides a breakdown of the respondents’ answers.

The NEA report showed the widest distribution in responses, and we concluded that its designation as an e-book was more tenuous than that of the other examples, so we excluded it from further analysis. The Google e-book was the most recognizable of the remaining four, with 77 percent (N = 589) choosing “e-book” as their answer. This was followed by the Knovel e-book at 74 percent (N = 567) and the Gale encyclopedia at 54 percent (N = 416); the Springer e-book proved the least identifiable, with only 35 percent (N = 264) identifying it as an e-book. This raises the question of why such discrepancies exist. This survey cannot answer that question, but it can provide some observations and hypotheses for further study. The least and most recognizable e-books, the Springer e-book (see figure 5) and the Google e-book (see figure 6), are examined below.

The Springer e-book is hosted on the same platform as the publisher’s e-journals, and the layout for each is almost identical. The image of the book cover is a small icon, and the text on the page uses the word “book” in four places. In contrast, the Google e-book’s page is dominated by a large image of the book cover. There is minimal text on this page, and within that text the word “book” appears six times. Based on these observations and the survey results, a hypothesis for further study could be that imagery and heavy labeling are key to users labeling an electronic resource correctly. After the survey was conducted, Springer launched a new interface and the look changed substantially; additional study would be needed to determine whether these changes improve identification.
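
One crude way to operationalize this labeling hypothesis in a follow-up study would be to count how often container terms appear in a page’s visible text. The sketch below is a hypothetical illustration of that idea (it was not used in this study, and the sample text is invented):

```python
import re

# Hypothetical illustration of measuring "labeling density": count how often
# container terms appear, as whole words, in a page's visible text.

CONTAINER_TERMS = ["book", "journal", "article", "database", "catalog"]

def label_counts(page_text: str) -> dict:
    """Return case-insensitive whole-word counts for each container term."""
    return {
        term: len(re.findall(rf"\b{term}\b", page_text, flags=re.IGNORECASE))
        for term in CONTAINER_TERMS
    }

# Invented sample text; a real study would extract text from the rendered
# e-book or e-journal landing page.
sample = "About this book. Download book PDF. Book metrics. Cite this chapter."
print(label_counts(sample))  # e.g., {'book': 3, 'journal': 0, ...}
```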

When respondents answered the question “wrong,” how did they identify the resource? In the case of the e-books, the most popular “wrong” answer tended to be the generic “website or webpage” choice: 41 percent of respondents chose this option for the Springer e-book, 12 percent for Knovel, and 26 percent for Gale. The exception was the Google e-book, where the next most common wrong answer was “article” (16 percent). It is presumed that many respondents were unsure what to call the resource and therefore reverted to the most generic option. Regarding the response to the Google e-book, we cannot determine any rationale for how it could be identified as an article.

We also compared graduate student and undergraduate student perceptions. It seems plausible to hypothesize that graduate students would more accurately identify the information container, but this was not the case for e-books. In three of the four examples, a greater percentage of the undergraduate students correctly identified the resource than the graduate students (see figure 7). Seventy-nine percent of the undergraduate students identified the Google e-book correctly, compared to 66 percent of the graduate students. Only with the Gale encyclopedia did the graduate students identify more accurately than the undergraduates with 58 percent versus 54 percent. Firm correlations are not possible with this study because the undergraduate respondents far outnumber the graduate (N = 656 and N = 109, respectively).

E-Journals

The survey asked respondents to examine two e-journal homepages. One featured a more traditional set-up with the table of contents page of an academic journal on Elsevier’s ScienceDirect platform. The other was the main page of the born-digital video journal, JoVE (Journal of Visualized Experiments). Figure 8 shows how respondents labeled these resources.

Responses were split between labeling the ScienceDirect e-journal correctly (N = 320) and labeling it a website (N = 301). The high number of website responses reinforces the idea that students select this generic designation when they are unsure which more specific choice to select. Additionally, the graduate students recognized the more traditional ScienceDirect journal as an e-journal more frequently than the undergraduate students (54 percent to 40 percent). JoVE was labeled an article most often (N = 378), with 49 percent of all students providing this response; 28 percent referred to JoVE as an e-journal. This could be attributed to the fact that, even though this page serves as what would traditionally be considered a table of contents, it prominently features a video article.

Articles

The survey included four articles: a blog article, a Wikipedia article, an academic journal article from JSTOR, and a newspaper article. Although different in nature, they lend themselves to cross comparison because a student might readily choose any of them for a project or paper. This is especially true when students begin their research with Google or a discovery service search, as a mix of these containers will appear in their results. Figure 9 shows the distribution of responses.

By far, the newspaper article was the most recognizable, with 80 percent (N = 610) calling it an article. The Wikipedia article was most often given the generic “website” label, with 62 percent (N = 476) identifying it as such. The JSTOR article had the most variance across the labels, which we found surprising. However, more graduate students recognized it as an article than undergraduates, at 42 percent and 27 percent respectively. Perhaps the high level of recognition for the newspaper article stems from the fact that many students have used online newspapers from an early age and thus have a good understanding of this information container. Both the blog and Wikipedia articles were often labeled “website” by participants, which is not surprising considering that both are open, born-digital resources.

Search Tools

The survey included the biomedical literature database PubMed; the ProQuest database Computer and Information Systems Abstracts; the Zappos shopping catalog; the Stanford University Library catalog; the Google Scholar search screen; the search screen of the discovery service Summon; and the website MedlinePlus. All are analyzed below except MedlinePlus, which was excluded because it was inadvertently assigned the answer choices for an individual resource rather than for a search tool.

When cross comparing these tools as a group, several interesting trends are revealed (see figure 10). Most notably, Google Scholar was the most correctly identified search tool, by a margin of 29 percentage points. Additionally, Zappos was the most likely to be labeled with the generic designation of website or webpage, which is not unusual given the commercial nature of this resource. However, it was surprising that, when offered the option to assign the label “catalog,” only 39 percent (N = 301) of respondents chose this container.

Given the current library landscape, a comparison should be made between discovery services, the traditional library catalog, and Google. Discovery services are marketed as more effective search tools because they mimic web search engine aesthetics and functionality. As previously noted, there was little ambiguity in correctly labeling Google Scholar (90 percent, or 686 respondents, choosing search engine). A slight majority labeled Summon correctly as a search engine (52 percent, or 395), and a plurality labeled the Stanford catalog as a catalog (49 percent, or 376). However, the responses for these tools were more widely distributed than for Google Scholar, suggesting some confusion. It is also interesting to note that the Stanford catalog and Summon had a nearly identical incidence of being labeled a database.

PubMed and the ProQuest database showed similar response distributions, with database being the most popular response, followed by website or webpage. However, there is a margin of difference in the correct responses for these two databases: 49 percent (N = 375) for PubMed and 61 percent (N = 463) for ProQuest. We hypothesize that, as in the e-book comparison, labeling played a role. The ProQuest database used the term “database” four times on the page and even included the term in its description; on the PubMed page, the term “database” appeared only in references to other search tools.

Influence of Respondent Characteristics

The survey queried respondents about their age, higher education level, exposure to bibliographic instruction, and time devoted to school-related online searching. We hypothesized that one or more of these factors would influence the rate of correctly identifying information containers. After analysis, however, comparisons of these characteristics against the results proved inconclusive, with no significant trends emerging. For example, the theory that a graduate student would be more likely than an undergraduate to correctly identify an academic e-book was not supported.

We expected a positive relationship between the amount of bibliographic instruction received (see figure 3) and container identification, but no significant results emerged. Students with no BI identified the Springer e-book correctly 32 percent of the time, whereas those who had received BI in at least three forms did so only 39 percent of the time. In the case of the Gale encyclopedia, the relationship was negative: those with no BI correctly identified it 60 percent of the time, compared to 48 percent for those with three forms of BI.
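
For future analyses of this kind, a simple contingency-table test is one way to check whether identification rates differ across levels of BI exposure. The sketch below is illustrative only, with invented counts; the study’s actual analysis was conducted with Qualtrics tools and SPSS:

```python
from scipy.stats import chi2_contingency

# Illustrative only: invented counts of correct vs. incorrect identifications
# of a single resource, grouped by bibliographic instruction (BI) exposure.
# Rows: no BI, 1-2 forms of BI, 3+ forms of BI; columns: correct, incorrect.
observed = [
    [70, 150],   # no BI
    [160, 280],  # 1-2 forms of BI
    [40, 65],    # 3+ forms of BI
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
# A p-value above the chosen threshold (e.g., 0.05) would be consistent with
# the finding that BI exposure did not significantly affect identification.
```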

Recommendations for Practitioners and Publishers

Though the study of this phenomenon is at an early stage, some interventions can be implemented in reference and instructional services to address the issue. Previously, librarians devoted time to explaining the characteristics that differentiated various print resources. We argue that this component of instruction should be restored for electronic resources. This is not an easy task in the online environment, but librarians can help facilitate users’ identification through visual cues such as the structure of an online document and what its front matter denotes. Simple, even elementary, rules may need to be emphasized, such as “a book has chapters; a journal has articles.” We see the creation of such rules stemming from a partnership between public services and technical services librarians. We, as a library community, can disseminate this information via traditional instruction sessions, face-to-face reference interactions, and virtually using online tools and tutorials (e.g., LibGuides or YouTube videos). This intervention is not something that should begin at the university level; it could start as early as elementary school. This is a shared opportunity for media specialists/school librarians and educators to impart this skillset during the formative years.

Further recommendations, which would involve broader conversation and stakeholder buy-in and would take more time to implement, include:

  • Marketing and branding of different containers to clearly differentiate between resource types
  • Leveraging metadata to “tag” items with a container type and enabling this as a search parameter for discovery services and search engines (one possible form of such tagging is sketched below)
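
The following sketch illustrates what such container-type tagging might look like in item metadata and how a discovery layer could expose it as a filter. The field names and records are hypothetical and do not reflect any particular discovery service’s schema:

```python
# Hypothetical illustration of container-type tagging in item metadata.
# The "container_type" field and the sample records are invented for this
# example; they do not reflect any particular discovery service's schema.

records = [
    {"title": "Sample journal article", "container_type": "journal article"},
    {"title": "Sample encyclopedia entry", "container_type": "e-book"},
    {"title": "Sample monograph", "container_type": "e-book"},
]

def filter_by_container(items, container_type):
    """Return only the records tagged with the requested container type."""
    return [r for r in items if r["container_type"] == container_type]

# A discovery interface could expose this tag as a facet or search parameter.
for record in filter_by_container(records, "e-book"):
    print(record["title"])
```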

Producers of online information should look to strong labeling models if container recognition is important. Likewise, if container recognition is truly valued, educators and librarians should dedicate instruction time to teaching the concept and the role it plays in scholarly communication. Ideally, a future dialogue should occur between information producers, educators, and librarians (from both public services and technical services) to determine whether content and container are independent of one another. This conversation is already beginning with the introduction of the ACRL Framework for Information Literacy for Higher Education (www.ala.org/acrl/standards/ilframework#authority), which suggests that information-literate learners recognize that authoritative content can be presented both formally and informally. This dialogue can also lead to better mechanics for citation management tools, which exist to assist users in organizing digital information. We feel confident that labeling and branding play a role in recognition, but more study is needed to deduce the reasoning behind users’ choices.

Study Limitations and Future Research

Because there were no prior studies that directly investigate this phenomenon, we approached this as a pilot that was bound to expose limitations, raise more questions, and serve as a foundation for further research with more rigorous methodologies. Examples of such limitations include not using a live online search environment, a one-dimensional data collection method, and too many disparate individual resources, which hindered cross comparisons. Future research designs should use multimodal data collection methods that feature banks of more directly comparable containers (e.g., a bank of several academic journals from different platforms). To better determine which experiences influence accurate identification of information containers, collection of demographic data should be expanded to factors such as major, country of origin, socioeconomic status, and early exposure to Internet technologies.

Conclusion

This preliminary study begins to provide insight into the ambiguity of information containers in the eyes of the information consumer, namely university students. It begins to answer two questions: (1) Do university students have difficulty identifying different digital information resources? (2) Do factors such as age, level of university experience, amount of bibliographic instruction, or amount of time spent searching influence a student’s ability to identify digital information resources correctly?

The most basic answers to these questions are yes and no, respectively. The results suggest that students often cannot correctly identify these resources, at least not to a degree that we find academically acceptable. This has implications for the online information-seeking process and for judging credibility. Students are instructed to use peer-reviewed journal articles over books, or books over Wikipedia, presumably because of the greater authority of one, but what happens when the student cannot distinguish between them? Clearly, there is confusion surrounding the identification of online information containers. Further, we found no correlation between student level, age, experience in online searching, or bibliographic instruction and students’ ability to identify an electronic resource correctly. If the information container is still important, then this knowledge can not only lead to improvements in the navigation and presentation of digital resources but also provide further insights for the librarians, educators, and businesses that serve them.

References

  1. Kenneth Cukier, “Data, Data Everywhere,” Economist 394, no. 8671 (2010): 3.
  2. Alison Head, “Learning the Ropes: How Freshmen Conduct Course Research Once They Enter College,” Project Information Literacy, 2013, accessed November 13, 2014, http://projectinfolit.org/images/pdfs/pil_2013_freshmenstudy_fullreport.pdf.
  3. Stephen Abram and Judy Luther, “Born with the Chip,” Library Journal 129, no. 8 (2004): 34.
  4. Lynn Silipigni Connaway and Timothy J. Dickey, “The Digital Information Seeker: Findings from Selected OCLC, RIN and JISC User Behaviour Projects,” 2010, accessed October 10, 2014, www.jisc.ac.uk/publications/reports/2010/digitalinformationseekers.aspx.
  5. Allen W. McKiel, “2011 Global Student E-book Survey,” ebrary, 2012, accessed June 26, 2013, http://site.ebrary.com/lib/surveys/docDetail.action?docID=80076107&ppg=1.
  6. Rosie Croft and Shailoo Bedi, “eBooks for a Distributed Learning University: The Royal Roads University Case,” Journal of Library Administration 41, no. 1–2 (2004): 130.
  7. Michael Levine-Clark, “Electronic Book Usage: A Survey at the University of Denver,” portal: Libraries and the Academy 6, no. 3 (2006): 289, http://muse.jhu.edu/journals/portal_libraries_and_the_academy/v006/6.3levine-clark.html.
  8. John G. Palfrey and Urs Gasser, Born Digital: Understanding the First Generation of Digital Natives (New York: Basic Books, 2008), 161.
  9. Wendy Allen Shelburne, “E-book Usage in an Academic Library: User Attitudes and Behaviors,” Library Collections, Acquisitions, & Technical Services 33, no. 2–3 (2009): 61.
  10. Primary Research Group, The Survey of American College Students: Student Use of Library E-book Collections (New York: Primary Research Group, 2009): 15.
  11. Hamid R. Jamali, David Nicholas, and Ian Rowlands, “Scholarly E-books: The Views of 16,000 Academics: Results from the JISC National E-book Observatory,” Aslib Proceedings 61, no. 1 (2009): 42.
  12. Aline Soules, “E-books and User Assumptions,” Serials 22, no. 3 (2009): S4.
  13. Lucy Holman, “Millennial Students’ Mental Models of Search: Implications for Academic Librarians and Database Developers,” Journal of Academic Librarianship 37, no. 1 (2011): 20.
  14. Ibid., 25.
  15. Peter Williams and Ian Rowlands, “The Literature on Young People and Their Information Behaviour,” British Library/JISC, 2007, accessed October 12, 2013, www.jisc.ac.uk/media/documents/programmes/reppres/ggworkpackageii.pdf, 20.
  16. OED Online, Oxford University Press, 2014, accessed April 10, 2015, www.oed.com.
  17. Tara T. Cataldo and Amy G. Buhler, “Positively Perplexing E-books: Digital Natives’ Perceptions of Electronic Information Resources,” in Charleston Conference Proceedings, 2012, http://dx.doi.org/10.5703/1288284315106.

Appendix. The E-Resources Survey

http://ufdc.ufl.edu/IR00004920/0001

  1. What would you call this?
    • A website or webpage
    • An e-book
    • An e-journal
    • An article
  2. What would you call this?
    • An article
    • A website or webpage
    • An e-book
    • An e-journal
  3. What would you call this?
    • An e-journal
    • A website or webpage
    • An e-book
    • An article
  4. What would you call this?
    • A website or webpage
    • An article
    • An e-book
    • An e-journal
  5. What would you call this?
    • An e-journal
    • A website or webpage
    • An e-book
    • An article
  6. What would you call this?
    • An e-book
    • A website or webpage
    • An e-journal
    • An article
  7. What would you call this?
    • An article
    • An e-book
    • An e-journal
    • A website or webpage
  8. What would you call this?
    • A website or webpage
    • An e-book
    • An e-journal
    • An article
  9. What would you call this?
    • An article
    • An e-book
    • A website or webpage
    • An e-journal
  10. What would you call this?
    • An e-journal
    • A website or webpage
    • An e-book
    • An article
  11. What would you call this?
    • An article
    • An e-journal
    • A website or webpage
    • An e-book
  12. What would you call this?
    • A website or webpage
    • An e-book
    • An e-journal
    • An article
  13. What would you call this?
    • A website or webpage
    • A search engine
    • A database
    • A catalog
  14. What would you call this?
    • A website or webpage
    • A catalog
    • A search engine
    • A database
  15. What would you call this?
    • A catalog
    • A website or webpage
    • A search engine
    • A database
  16. What would you call this?
    • A database
    • A website or webpage
    • A search engine
    • A catalog
  17. What would you call this?
    • A search engine
    • A website or webpage
    • A database
    • A catalog
  18. What would you call this?
    • A website or webpage
    • A search engine
    • A database
    • A catalog
  19. I am a _____
    • High School Student
    • Undergraduate Student
    • Graduate Student
    • Other ____________________
  20. What year were you born?
  21. Honestly, I spend about this amount of time a week searching online for class-related assignments
    • 0–1 hours
    • 2–5 hours
    • 6–10 hours
    • More than 10 hours
  22. I have . . . (you can choose more than one response)
    • Never had library instruction
    • Had a librarian speak in at least one of my classes
    • Gone to the library for an instruction session or a workshop
    • Received library instruction online (i.e. online tutorial)
    • No idea what these choices mean

Figure 1. Respondents’ Age Range

Figure 2. Time Spent per Week Searching Online for Class-Related Assignments

Figure 3. Survey Respondents’ Exposure to Library Instruction

Figure 4. Respondents’ Labels for E-Books

Figure 5. Springer E-book Screenshot

Figure 6. Google E-book Screenshot

Figure 7. Percentage of Students Who Correctly Identified the E-Books

Figure 8. Respondents’ Labels for E-Journal Front Pages

Figure 9. Respondents’ Labels for Articles

Figure 10. Respondents’ Labels for Search Tools

Table 1. Individual resources included in survey instrument (Individual Resource: Authors’ Designation)

  • An e-journal article (JSTOR): Article
  • An e-journal title/table of contents page (Science Direct): E-journal
  • An e-book front matter from a publisher (Springer): E-book
  • An e-book front matter from Google Books: E-book
  • An e-textbook front matter from an aggregator (Knovel): E-book
  • An e-encyclopedia (Gale): E-book
  • A Wikipedia article: Article
  • A video journal (JoVE): E-journal
  • A blog post: Article
  • An organization’s online annual report (NEA): E-book
  • A newspaper article (Chicago Sun Times): Article

Table 2. Search tools included in survey instrument (Search Tool: Authors’ Designation)

  • An abstracting & indexing database search page (PubMed): Database
  • An abstracting & indexing database search page (ProQuest): Database
  • A medical website (MedlinePlus): Website
  • A library catalog search screen (Stanford): Catalog
  • A discovery service search screen (Summon): Search engine
  • Google Scholar search screen: Search engine
  • A shopping catalog search screen (Zappos): Catalog

Table 3. Survey Responses—Individual Resources
Percentages are listed in the order Article, E-book, E-Journal, Website or Webpage; * marks the authors’ designated “correct answer.”

Resource | % Undergraduates | % Graduates
Springer e-book | 3, 36*, 21, 41 | 8, 28*, 19, 44
Science Direct e-journal | 8, 12, 40*, 41 | 9, 6, 54*, 31
Knovel e-book | 8, 74*, 6, 11 | 9, 72*, 6, 12
Blog post | 45*, 2, 9, 43 | 26*, 0, 6, 68
Wikipedia article | 36*, 0, 2, 62 | 31*, 0, 5, 64
Google e-book | 2, 79*, 4, 15 | 6, 66*, 5, 23
JoVE e-journal | 51, 2, 29*, 18 | 41, 4, 27*, 28
JSTOR article | 27*, 25, 40, 8 | 42*, 18, 34, 6
Gale e-encyclopedia | 4, 54*, 16, 27 | 4, 59*, 16, 22
NEA Annual Report | 15, 22*, 33, 30 | 15, 27*, 35, 24
Chicago Sun Times article | 85*, 1, 2, 12 | 50*, 3, 8, 39

Table 4. Survey Responses—Search Tools
Percentages are listed in the order Catalog, Database, Search Engine, Website or Webpage; * marks the authors’ designated “correct answer.”

Search Tool | % Undergraduates | % Graduates
PubMed | 6, 50*, 20, 24 | 10, 40*, 22, 28
Zappos | 40*, 1, 6, 52 | 34*, 1, 10, 55
Google Scholar | 0, 3, 90*, 7 | 0, 4, 87*, 9
Library Catalog (Stanford) | 51*, 31, 15, 3 | 37*, 29, 29, 5
Summon | 13, 32, 50*, 5 | 16, 17, 61*, 6
Database (ProQuest) | 8, 61*, 10, 21 | 6, 57*, 17, 19
