rusq: Vol. 53 Issue 4: p. 334
Teaching Information Evaluation with the Five Ws: An Elementary Method, an Instructional Scaffold, and the Effect on Student Recall and Application
Rachel Radom, Rachel W. Gammons

Rachel Radom is Instructional Services Librarian for Undergraduate Programs, University of Tennessee Libraries, Knoxville, Tennessee
Rachel W. Gammons is Learning Design Librarian, McNairy Library and Learning Forum, Millersville University, Millersville, Pennsylvania

Researchers developed an information evaluation activity used in one-shot library instruction for English composition classes. The activity guided students through evaluation using the “Five Ws” method of inquiry (who, what, when, etc.). A summative assessment determined student recall and application of the method. Findings, consistent over two semesters, include that 66.0 percent of students applied or recalled at least one of the Five Ws, and 20.8 percent of students applied or recalled more than one of its six criteria. Instructors were also surveyed, with 100 percent finding value in the method and 83.3 percent using or planning to use it in their own teaching.

Undergraduate instruction librarians face the common challenge of addressing a wide variety of information literacy competencies in sessions that follow short, one-shot, guest lecturer formats. Of these competencies, one of the most complicated and time-consuming to teach is the evaluation of information sources. It can also be one of the most difficult competencies for students to effectively learn.1 In this study, the researchers aimed to find or develop a framework that would efficiently assist students in the acquisition and application of information evaluation skills. The desired framework would be memorable, familiar to students, scalable (used in face-to-face sessions or asynchronous, online instruction), and valuable to course instructors.

The following study introduces an information evaluation method based on a well-known framework of inquiry—the “Five Ws,” or who, what, when, where, why, and how. Researchers modified the Five Ws to create a formative assessment that introduced evaluation skills to students and piloted it in fall 2011 during one-shot library instruction sessions for English composition classes. Full implementation followed in fall 2012. In both the pilot and formal study, a summative assessment was sent to students an average of three weeks after the library session to assess recall and application of the evaluation method. Composition instructors were also surveyed to assess their responses to the Five Ws evaluation method and determine whether they had added, or would consider adding, the method to their own instruction. The findings of these assessments may be relevant to instruction librarians and composition instructors, as well as those interested in the connections between information literacy competencies and student learning outcomes in general education.


LITERATURE REVIEW

In 2000, the Association of College and Research Libraries (ACRL) published the “Information Literacy Competency Standards for Higher Education.”2 Intended to facilitate the development of lifelong learners, the standards outline the skills needed for students to identify an information need and then locate, evaluate, and utilize resources to fulfill that need.3 For more than a decade, the ACRL guidelines have directed the library profession’s approach to instruction, shaping the ways that librarians conceptualize, design, provide, and assess library instruction. Corresponding to the widespread adoption of these standards, there has been an increase in research investigating students’ skills (or lack thereof) in critical thinking, and more specifically, information evaluation. The majority of these research studies, however, are based on the evaluation of web and print sources as separate materials. As the numbers of online and open access publications increase and the boundaries between formats of information recede, the depiction of print and electronic resources as existing in distinct and separate categories does not accurately reflect the modern search experience.4 It is also misleading to students who are used to accessing a variety of media and information sources in multiple formats.

Student confusion about the format and quality of information sources is substantiated by recent research. In a 2009 report for the United Kingdom’s Joint Information Systems Committee (JISC), researchers identified a dissonance between college and university students’ expectations of published research and the realities of those bodies of work.5 When asked what types of information a student would recognize as “research,” an overwhelming majority (97 percent) identified traditional formats such as books and articles. When confronted with less well-known formats, such as posters or dissertations, the number of students willing to identify the documents as “research” greatly decreased.6 Additional qualitative results describing student confusion were obtained in small focus group sessions. While the majority of students “distrusted” the Internet, they widely accepted “all published materials” as appropriate for academic use.7 This inaccurate distinction between the credibility of print and electronic resources was also reported in research by Biddix et al., who found that students view the information available from an academic library as “vetted” or “pre-accepted.”8 Students have oversimplified the relationships between publication format, library resources, and credibility, a situation that has been further complicated by the increase in federated search tools. Although federated searching may simplify the research experience, it also increases the quantity of unfamiliar materials to which students are exposed, while simultaneously making distinctions between information sources less discrete.

As the information landscape undergoes radical shifts, librarians’ approaches to teaching information literacy and information evaluation have remained relatively static. Approximately ten years ago, two information evaluation methods associated with different mnemonic devices were shared in the library literature and were subsequently incorporated into many library instruction sessions. In 2004, Blakeslee described the motivation behind designing California State University Chico’s CRAAP Test as a desire to create a memorable acronym because of its “associative powers.”9 Intended to guide users through evaluating the Currency, Relevance, Authority, Accuracy, and Purpose of a document, the method’s accompanying checklist and questions can be applied to both print and online resources; however, its emphasis on the evaluation of electronic materials has resulted in a loose categorization of the method as a website evaluation tool.10 In contrast, the CRITIC method was incorporated into library instruction as a tool to be utilized in the evaluation of print resources.11 In a presentation on the method at a 2004 conference, Matthies and Helmke describe CRITIC as a “practical system of applied critical thought”; repurposing the steps of the scientific method, it encourages users to approach evaluation as an iterative process and to interrogate the Claim, Role of the Claimant, Testing, Independent Verification, and Conclusion of a given document.12

Both the CRAAP Test and CRITIC method attempt to simplify the evaluation process by breaking down complex ideas into a set of accessible criteria, but little research has been conducted on the effectiveness of the methods themselves. However, one recently published study on the advantages of formative assessment in information literacy instruction includes a series of anecdotal observations that may provide insight into the effectiveness of the CRAAP Test.13 Following an instruction workshop in which the test was taught, many students self-reported a persisting difficulty with “determining the quality of different sources.”14 The authors found that some students continued to have trouble “distinguishing between popular magazines and scholarly journals” and “finding authoritative websites” even after follow-up consultations.15 Their findings suggest that the CRAAP Test may not effectively bridge the gap between determining easily identifiable qualities, such as date of publication, and those that require a greater level of independent judgment and critical thinking, such as authority, especially if used in only a single instruction session.

Meola contends that it is problematic to use models such as CRAAP and CRITIC to teach information evaluation because of their structural dependence on linear processes and checklists.16 He describes such checklist-based models as “question-begging” and criticizes them for offering “slim guidance” as to how the questions should be answered.17 Meola also argues that a linear organization encourages students to view evaluation as a “mechanical and algorithmic” process, thereby separating “higher level judgment and intuition” from the evaluation process.18 Bowles-Terry et al. expand on Meola’s ideas, writing that the checklist approach “reduces critical thinking about the value of information to easily memorized and superficial criteria.”19 The solution, the authors suggest, is to reconceptualize the evaluation of information as a meaningful process rather than a “look up skill.”20 Librarians can support this by broadening the evaluation methods they teach to include contextualizing a document within a student’s “wider social experience.”21

Bowles-Terry et al. also encourage information literacy instructors to enhance their teaching efforts by incorporating aspects of social constructivist theory, developed in large part by Lev Vygotsky.22 In his preeminent writings on child psychology, Vygotsky made highly influential contributions not only to sociological but also to educational theory, including the concept of the “zone of proximal development,” or ZPD, which he describes as the distance between what a learner can accomplish independently and what he or she can accomplish under the “guidance of an adult or in collaboration with more capable peers.”23 According to Vygotsky, a learner’s transition to a more advanced skill set or level of thinking is facilitated in collaboration with a person or group of people at a higher developmental level than the learner.24

Related to the ZPD is the educational theory of instructional scaffolding, a process by which a tutor or instructor helps a learner successfully achieve a task that the learner would be unable to accomplish alone, thus spanning the ZPD. Scaffolding processes assist learners by building on behaviors and tasks they have already mastered to achieve those that require higher levels of thought. In a seminal work on scaffolding, Wood, Bruner, and Ross write that scaffolding begins when a tutor actively interacts with learners and controls the “elements of a task initially beyond the learner’s capacity.”25 According to Bruner, responsive tutors gradually remove their support (the scaffold) as learners develop skills and need less assistance.26 By working with instructors or more competent peers, learners who successfully negotiate skill development are then able to build on their accomplishments by achieving the component steps of a process individually and then progressing to skills of greater intellectual complexity.

Vygotsky theorized that learners may surpass their developmental level by working with others more capable, while Wood, Bruner, and Ross found that learners are capable of recognizing good solutions to a task or problem before they are capable of completing the steps needed to reach that solution by themselves.27 These theories are useful to consider in the design of information literacy instruction and formative learning assessments. Integrating group work into instruction sessions may help learners achieve more success together than if they were to work alone. Utilizing instructional scaffolds may also assist learners in the development of new skills. Furthermore, if the scaffold helps students accomplish goals that they recognize as purposeful and relevant to their near-future success, they may be more invested in developing the skills and learning the process being taught. Based on these criteria, a useful evaluation method in library instruction would be associated with something already familiar to students and valued by course instructors to the extent that they would incorporate the method into their own classes after the library session. An evaluation method that met these ideal qualities would then have the potential to be more fully integrated into a student’s greater learning process by surpassing the limitations of one-shot instruction sessions.


METHODS

At the University of Tennessee Knoxville, the first-year composition program includes two sequential courses, English 101 and 102. Although the common syllabus for English 101 includes three standardized composition assignments, only one of these, the argumentative paper, requires students to cite outside sources. Despite the applicability of library instruction to the composition curriculum, not all composition sections attend a library instruction session. In fall 2011 and 2012, an average of 24 percent of all English 101 sections requested library instruction, while 70 percent of instructors for English 118 (an Honors course that combines English 101 and 102) requested library instruction for a similar assignment.

Although the argumentative assignment does not require scholarly sources, many composition instructors encourage their students to cite sources with differing points of view. As a result, librarians dedicate a significant portion of the corresponding library instruction session to the development of information evaluation skills. To facilitate this process, an instructional services librarian and a graduate teaching assistant (both hereafter referred to as “the researchers”) sought to employ an in-class evaluation activity that could be consistently used in each 101/118 library session, and would accomplish two aims. First, the activity should effectively introduce students to an information evaluation method. Second, the evaluation method itself should be conducive to student recall and application after the library session.

The researchers first identified an evaluation method and created the in-class evaluation activity, which was completed in small groups during the instruction session and served as a formative assessment. A post-session summative assessment measured student application and recall of the evaluation method. To determine composition instructors’ responses to the session and, in particular, if those instructors found the evaluation method valuable or would consider adding it to their own teaching repertoire, the researchers also created a follow-up survey for composition instructors. With approval from the Institutional Review Board, the researchers piloted the assessments in fall 2011 and implemented them with post-pilot improvements in fall 2012.

When selecting an information evaluation method, researchers searched for a tool that would serve as an instructional scaffold.28 Rather than introducing students to a new evaluation method, the researchers hypothesized that introducing students to a method based on a concept with which they were already familiar would have several benefits: It might allow students to grasp the evaluation criteria more quickly, interpret the steps involved more effectively, and reduce the number of clarifying questions necessary before launching into the activity and applying the method. If such benefits were actualized, the instructional scaffold would also facilitate an efficient use of time for library instructors, who were operating under the time constraints of either a fifty- or seventy-five-minute session.

Between CRAAP and CRITIC, the two methods popular in library instruction, only CRITIC is associated with a concept first-year university students might have encountered in previous learning experiences, as its steps are based on the scientific method, a process taught in most elementary and secondary schools.29 However, while the method’s guiding questions may seem familiar, terms associated with the scientific method are not mirrored in the words of the acronym, thereby making it appear new to users. To facilitate the effectiveness of the scaffold, researchers also wanted to teach a “catchy” evaluation method, that is, one that is easily remembered and effectively recalled. Though this study did not compare student recall of different evaluation methods, anecdotal conversations among library colleagues revealed that the CRAAP and CRITIC criteria were difficult for library instructors to remember. While many of the researchers’ colleagues had utilized the methods more than once in previous information literacy sessions, few were able to recall the components of either acronym.

Therefore, in the interests of familiarity and memorability, the researchers looked outside of library literature. They selected what is colloquially known as the “Five Ws” method of inquiry as a foundation for the activity and subsequent study. The method is composed of six guiding questions: who, what, when, where, why, and how. Frequently taught in primary schools as an introduction to basic rhetoric, the Five Ws method is often associated with journalistic investigations and authorship. The likelihood that students would have been introduced to the Five Ws criteria at an early age satisfied the researchers’ desire to present a method with which students were already familiar, while the guiding questions provided a framework of interrogation on which the researchers could build a more complex activity.

Using its six basic questions as the foundation for the in-class evaluation activity, researchers supplemented each main Five Ws question with more extensive questions to create an activity appropriate for university students. The “who” question, for example, asked students not only to identify the author, but also to investigate the author’s credentials, including where the author worked, if the author had been published more than once, and if the author had research or work experience that contributed to his or her authority. The resulting Five Ws activity served as a formative assessment that measured students’ existing abilities in comprehending and evaluating documents. Students had the opportunity to improve these skills by working through the Five Ws evaluation method in small groups, with a librarian available to direct or correct students’ progress.

During the instruction session, the Five Ws activity was presented to students as an online worksheet, managed and maintained in the UT Libraries’ SurveyMonkey account (appendix A). A link to the activity, as well as a PDF of the document that students evaluated, was available on all library computers used in instruction sessions. The evaluated document was a column by Nicholas Kristof about the 2011 Tōhoku earthquake, tsunami, and Fukushima nuclear radiation leaks in Japan, which appeared in PDF as a full page from The New York Times opinion section.30 The decisions to have all students evaluate the same document, and to have them analyze a column rather than an article, were deliberate, based on observations from and results of the pilot study. Analyzing an opinion piece challenged students without making the exercise aggravating and, consequently, presented the best opportunity for student learning.31

In the library session, students were directed to skim Kristof’s column, which was referred to by the researchers as neither a “column” nor an “article,” but simply the “document.” After skimming the document, students were asked to work in small groups of two to five to evaluate it using the Five Ws criteria via the online worksheet. They were also directed and encouraged to use Internet search engines to help them complete the evaluation, for example, to find more information about the author, his work, and his previous publications. After completing the activity, researchers asked each group to explain to the class how each of the Ws contributed to their group’s final decision of whether they would or would not cite the column in a college research paper.

During the fall 2011 pilot, researchers tested the Five Ws activity with an estimated 682 students.32 Results of the pilot study prompted researchers to make several minor adjustments to the Five Ws activity, including simplifying the phrasing of some questions, choosing to evaluate a single document rather than multiple types in one section, and adding links to definitions for several terms, such as methodology, with which students had struggled. After the pilot project, the improved Five Ws activity was incorporated into many 101 and 118 library instruction sessions. An estimated 391 students in small groups participated in the fall 2012 research study.33

The pilot study also included a post-session survey, designed in SurveyMonkey and distributed to students in the last quarter of the semester. This twelve-question summative assessment was intended to determine whether several student learning outcomes had been met; namely, whether students found and used library resources after the library session and whether students recalled and used the Five Ws method for evaluating an information source for authority, credibility, and bias. Except for minor clarifications to phrasing, the post-session assessment sent to students in the fall 2012 study was nearly identical to the one distributed during the pilot project.

The post-session summative assessment was distributed to students via their respective composition instructors. During the fall 2011 pilot, sixteen composition instructors taught the thirty composition sections in which the Five Ws activity was trialed. During the formal study in fall 2012, this number fell to eleven composition instructors for seventeen sections. In each iteration of the study, librarians sent course instructors an email containing an invitation to and directions for completing the twelve-question follow-up survey, which they were asked to forward to their students. The emailed invitations were sent to instructors an average of three weeks after the library instruction session. Composition instructors were also sent at least one email reminder to forward to students before the last day of classes.

A separate, qualitative survey was distributed to the same sixteen composition instructors in fall 2011 and eleven composition instructors in fall 2012. This twenty-one-question survey was distributed two to five weeks after the library session and was intended to gather composition instructors’ feedback about the library instruction session. Among other questions, instructors were asked whether or not they found the Five Ws evaluation method valuable and if they had used it or planned to use it in their own classes. The follow-up survey sent to instructors in fall 2012 was nearly identical to the fall 2011 pilot with very minor clarifications to wording in some questions.

In both semesters, students were offered an incentive for participation in the post-session summative assessment. During the pilot project, participants were entered into a drawing for a single $30 gift certificate to the university bookstore. In fall 2012, the incentive was increased and participants were entered into a drawing for one of four $50 gift certificates to the university bookstore. Composition instructors received no incentive in either semester.


RESULTS

Responses are summarized below in an order that matches the question order as presented to participants in the assessments/surveys, with several responses included in table format. The results refer to responses gathered in the fall 2012 study, with comparisons to the pilot project results provided only at the end of each section.

Formative Assessment: Five Ws Activity

With an average of six small groups per section working together to complete the Five Ws activity, an expected 102 groups (six groups in each of the seventeen participating sections) would have submitted online worksheets in fall 2012; however, 180 groups started the Five Ws activity. Of these, 99 submitted worksheets and are included in this analysis. The high number of worksheets not submitted is likely due to the nature of group activities; researchers observed many students reviewing the activity on their own computers to read through the questions and help their group finish the worksheet, though only one group member submitted each group’s collective response. The number of submitted responses includes 44 incomplete responses, in which students submitted the activity by visiting the last page of the worksheet without providing answers to each individual question.

The first criterion, the “what” of the Five Ws, consisted of questions about the document type and the overall tone the author used throughout the document. The vast majority of student groups incorrectly identified the document as a popular article. Less than 10 percent correctly identified the document as a column (figure 1). When asked about the author’s writing tone (n=96), all but one group agreed that the tone was conversational rather than technical.

Students were next asked to investigate the author of the document (“who”). Student groups agreed that the author had qualifications that made him an authority in 98.9 percent of cases (n = 94). In an open-ended question asking respondents to identify any credentials that contributed to the author’s authority, the most commonly listed were that the author had earned a law degree, attended Magdalen College/Oxford, was a Rhodes Scholar, had been awarded Pulitzer Prizes, or had graduated from Harvard University. Two student groups specifically referred to the author’s work as a journalist in Asia as contributing to his authority. Of 94 groups, most reported finding information about the author from Wikipedia’s entry about him (60, or 64.5 percent). Some checked The New York Times website for his biography (18, or 19.4 percent), and a relatively small number referred to both websites (5, or 5.4 percent). The remaining groups claimed to find author information from Google or from other sources, such as the website for the Public Broadcasting System (PBS).

The “why” criterion was made up of five questions to help determine the author’s primary purpose for writing, one of which asked students to provide a quote from the document as justification for their choice. Most groups decided that the author’s main purpose was to convince readers of something (as befits a column), but one quarter of groups indicated that the author’s purpose was to inform readers. A majority agreed that the author’s point of view was interested and opinionated, and thought that he favored emotional language (table 1). Over 90 percent of groups (91 of 98) correctly identified the author’s main audience as “the general public,” while 7.1 percent thought his main audience was “an educated audience interested in a specific topic (i.e., a marketing professional addressing others in the marketing field).”

Though the “when” questions were fairly straightforward—all but 4 of 96 respondent groups correctly identified the publication date—students consistently demonstrated difficulty in identifying when the “event or research being discussed in the document occurred.” Of 95 short answer responses, fewer than half (43, or 45.3 percent) referred in some way to the 2011 earthquake, tsunami, or Fukushima nuclear radiation leaks that were the impetus for the columnist’s writing. The majority of the remaining 52 groups identified the Japanese earthquakes in 1923 and 1995 to which the columnist referred but failed to identify a connection to more recent natural disasters.

The subsequent “where” criterion focused on the publication in which the document appeared. Of 95 responding groups, all stated that the document was published in The New York Times, except for 2 who referred to the publication as “The Sunday Opinion” and 6 others who referred to it as “The New York Times Sunday Opinion.” It is unclear if those six understood this was the newspaper’s opinion section, or if they incorrectly believed it was a publication distinct from The New York Times. Of the 94 groups that identified the type of publication, 91 groups (96.8 percent) described it as a “newspaper,” with the remaining groups identifying the publication as an academic or scholarly journal, a magazine, or a website. Another question asked students to provide contact information for the author and/or publication. Most groups (72 of 79, or 91.1 percent) provided the newspaper’s phone number or address, or stated that a message could be sent to either the author or The New York Times company via email, Facebook, Twitter, or GooglePlus. Seven groups (8.9 percent) were unable to locate any contact information.

Of all the Five Ws criteria, the questions relating to “how” Kristof gathered and presented information received the fewest responses. One question asked if and how the author cited outside sources (the column included one quote attributed to a Japanese shop owner). Of 82 submitted responses, 1 group stated that references were cited throughout the document in a scholarly style; 16 (19.5 percent) stated that references were cited throughout the document in a popular style, that is, with in-text quotes and attributions but no bibliography at the end of the document; and 65 (79.3 percent) stated that references were not listed.

When asked how the author gathered data to reach his conclusions, a question to which multiple answers were permitted and 63 groups responded, over half of student groups (57.1 percent) inaccurately claimed that the author gathered data from a research study he conducted. Several groups (22, or 34.9 percent) opted to write in additional answers. Among these write-in responses, 16 groups (one quarter of all 63 respondents) stated that the author gathered data from his personal experience (figure 2).

The final question in the “how” category asked students to identify the document’s elements or component parts (i.e., how the information was presented). Almost 34 percent of groups incorrectly stated that the document contained an abstract and almost 18 percent stated that it contained a methodology (figure 3). It should be noted that the text of this question provided a link to “What is an abstract?” next to the word “abstract,” and “What is a methodology?” next to the word “methodology.” Both links took students to definitions of these terms from a website at George Mason University.34

In the concluding questions of the formative in-class assessment, students were asked (1) if the document was scholarly or popular, (2) to list the strengths and weaknesses of the document, and (3) whether they would use it as a source in a college paper. Of 74 groups, 6 stated that the document was scholarly (8.1 percent). Justifications for why it was scholarly included that it was “written by a graduate of Harvard” or “written by a Rhodes Scholar,” or because it “uses facts” or “has facts in it.” Of these 6 groups, 5 also stated that the article was popular (the survey did not limit respondents to one answer only). Of the groups who stated it was popular (73, or 98.6 percent), their justifications included that the document was published in a newspaper (38, or 52.1 percent), appealed to or was written for the public or used nontechnical language/no jargon (29, or 39.7 percent), included or was mostly opinion (17, or 23.3 percent), or that the author did not cite sources (9, or 12.3 percent). Groups provided more than one of these explanations in 28.8 percent of cases.

Student groups listed strengths of the document in a write-in text box (n = 63). Researchers coded responses by assigning them to the appropriate Five Ws criteria. Respondents attributed the document’s strengths to the credentials of the author (“who,” 35, or 55.6 percent), the positive reputation of the publication in which it appeared (“where,” 17, or 27.0 percent), or that the author included examples from personal experiences (“how,” 14, or 22.2 percent). A total of 27.0 percent of groups provided more than one of these answers. An additional 17 groups (27.0 percent) provided unclear or incomplete responses in describing strengths.

In identifying weaknesses of the document (n = 53), also in a write-in text box, most student groups responded that a weakness was in “how” the author gathered his information or cited his sources. Student groups wrote that the lack of citations was a weakness (16, or 30.2 percent), the lack of views other than the author’s was a weakness (5, or 9.4 percent), or simply wrote that “how” was a weakness with no further explanation (6, or 11.3 percent). Adding these responses together, 50.9 percent of student groups identified some element of “how” as a weakness of the document. The bias or opinion in the document was another characteristic commonly listed as a weakness (22, or 41.5 percent), which related to both the “what” criteria (whether the document was opinion-based or fact-based) and “why” (author’s purpose). One group referred to the source as a weakness because the document was not published in a scholarly journal, and three groups (5.7 percent) stated that the “why” was a weakness without providing further explication. A total of 15.1 percent of groups listed more than one of these criteria as weaknesses.

The final question asked groups, “Thinking about the Five Ws of your source, would you cite this source in a paper? Why or why not? Might your answer depend on the type of paper you’re writing? How so?” Researchers coded responses according to whether or not the respondents provided a reasonable justification for their answer. Such justifications included

  • “Yes if the paper was for persuasion. No if it was an informative paper.”
  • “Wouldn’t site [sic] it as evidence, but could use it to demonstrate an opinion.”
  • “Yes [because it is] from very credible newspaper and a well-respected writer.”
  • “If I needed the opinion of an American familiar with Japanese culture and living there I would use Kristof as a reputable source.”

Of the 55 student groups responding to this question, 37 (67.3 percent) provided what the researchers considered a reasonable justification for their decision to cite or not cite the document in a college paper. A total of 27 (49.1 percent) provided particularly strong or compelling justifications, of which the four quotations above are indicative.

There was a great degree of similarity between student responses in both fall 2011 and fall 2012. Comparisons are provided in table 2, which highlights select questions in each of the Five Ws criteria. Between semesters, one of the biggest differences was in responses to how the author presented information, including which particular elements the document contained. This difference may have resulted from the inclusion of links to definitions of component terminology (e.g., “What is a methodology?”) in the 2012 assessment, which were not included in the 2011 pilot.

Summative Assessment: Follow-up Survey

After the instruction sessions, a summative assessment measured student recall and application of the Five Ws. Though eleven composition instructors were asked to forward to their students an invitation to participate in the survey, responses indicate that only nine instructors distributed the invitations to students. Based on this assumption, fifteen sections of English 101 and 118, or approximately 345 students, would have received an invitation to participate. Of the 55 student responses received, 53 were usable, making the response rate 15.4 percent when calculated out of fifteen sections (or 13.6 percent if calculated out of seventeen sections with eleven instructors).
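
For readers who want to trace the arithmetic, the two response-rate figures can be reproduced as shown below. The per-section enrollment used here is inferred from the reported totals (approximately 345 students across fifteen sections) rather than taken from class rosters, so it is an approximation.

    # Illustrative recomputation of the fall 2012 response rates.
    # Assumption: roughly equal enrollment per section, inferred from
    # the reported 345 students across the fifteen surveyed sections.
    usable_responses = 53
    students_in_15_sections = 345
    students_per_section = students_in_15_sections / 15          # about 23
    students_in_17_sections = round(students_per_section * 17)   # about 391

    rate_15 = usable_responses / students_in_15_sections * 100   # 15.4 percent
    rate_17 = usable_responses / students_in_17_sections * 100   # 13.6 percent
    print(f"{rate_15:.1f}% of 15 sections; {rate_17:.1f}% of 17 sections")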

The survey’s twelve questions included several that assessed student recall of the evaluation method. Among 51 respondents, 25 stated that they recalled the method or technique of evaluating sources that was taught in the library session (49.0 percent). Of these, 3 students identified the Five Ws method by name (12.0 percent), 2 indicated using more than one of the Five Ws (e.g., a student wrote that “We looked at the author’s credibility, the style of the article, what type of article it was, etc.”), and 2 more recalled researching an author to evaluate authority. In total, 7 of the 25 respondents who claimed to recall the method were able to recall (in spirit, if not in letter) at least one of the Five Ws criteria (28.0 percent).35

The survey also asked students about their method of evaluating sources after the library session. Of the 53 respondents, 45 stated they had evaluated the credibility and authority of sources they cited in at least one paper completed in the semester (84.9 percent). Of the 44 respondents who described their evaluation techniques, nearly three quarters described evaluating sources using at least one of the Five Ws criteria. Just over 18 percent recalled two or more of the Five Ws (table 3).

After combining and de-duplicating responses to related questions that asked about recall of the library-taught method and the method of evaluation students actually used, a total of 66.0 percent of all respondents recalled and/or applied at least one of the Five Ws criteria after the session (table 4). The “who,” or authority criterion, was “stickiest”; those students who recalled or applied only one of the Five Ws most often described evaluating the author. Approximately 20 percent of students recalled or applied more than one of the Five Ws evaluation criteria, with 7.5 percent of all respondents referring to the Five Ws method by name.
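
The combining and de-duplicating step described above is effectively a set union over respondents, so that a student who both recalled the method and applied one of its criteria is counted only once. A minimal sketch of that logic, using hypothetical respondent identifiers rather than the actual survey data, follows.

    # Minimal sketch of the combine-and-de-duplicate step; the respondent
    # IDs below are hypothetical and do not reproduce the survey data.
    recalled = {"r02", "r07", "r11", "r15"}   # recalled at least one of the Five Ws
    applied = {"r07", "r09", "r15", "r21"}    # applied at least one of the Five Ws

    # A set union counts each respondent once, even if they appear in both groups.
    recalled_or_applied = recalled | applied

    total_respondents = 53  # usable fall 2012 responses
    share = len(recalled_or_applied) / total_respondents * 100
    print(f"{len(recalled_or_applied)} of {total_respondents} respondents ({share:.1f}%)")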

The response rate of the fall 2011 pilot summative assessment was too low (5.1 percent) to justify any in-depth comparisons. It may still be of interest to report that responses from the pilot study were similar to those from fall 2012. Of the fifteen completed surveys, nine students (60.0 percent) recalled and/or applied at least one of the Five Ws criteria an average of three weeks after the Five Ws library instruction session.

Instructor Survey

Eleven instructors were sent a follow-up survey after the library session in fall 2012. Six instructors completed the survey for a response rate of 55 percent. All respondents thought the Five Ws had value for their students. One instructor reported the Five Ws method to be a “quick, efficient, and easy-to-remember tool to help students evaluate a source.” Another stated, “I like that it reminded them of ‘the W’s’ they learned in high school (several, I noticed, expressed recognition), while moving them forward into new territory/information.”

Instructors were also asked if they might use the Five Ws method of evaluation in their own instruction. Four of six stated that, at the time of the study, they had already incorporated some form of the Five Ws method into their teaching (table 5). Five reported that they intended to utilize the method in the future, and one respondent was unsure about future use. When asked how they might include the method in their classes in the future, one instructor wrote that they would repeat the activity in another class meeting but might also consider adding it as a homework assignment. Another wrote, “I have already been using it in 102, but will begin stressing it in 101 as soon as we begin talking about research for the source-based paper.” These responses echoed the positive feedback reported in the fall 2011 pilot project, in which six out of six instructors reported that the Five Ws was valuable for their students, and four of six were considering using the method in their own instruction.

Notably, students who identified being enrolled in a course in which their instructor had used the Five Ws performed better in recalling and/or applying the Five Ws than those students in a course in which the instructor did not use the Five Ws outside of the library session, or in a course in which the instructors’ use of the Five Ws was unknown.36 In sections in which course instructors were known to have used the Five Ws, over half of students self-reported that they recalled the evaluation method taught in the library class (19, or 52.8 percent of 36 respondents). In sections in which the Five Ws were not referred to during regular class times, 40.0 percent of students reported recalling the method (6 of 15 respondents). When asked to explain this library-taught method, 31.6 percent of students recalled at least one of the Five Ws criteria when they were in a section in which the instructor used the Five Ws, as opposed to 16.7 percent of those enrolled in sections in which the instructor did not/was not known to reinforce the Five Ws (table 6).

Additionally, when students were asked if they had evaluated sources that semester, 84.2 percent of students in sections that used the Five Ws outside of the library session stated that they evaluated their sources (32 of 38). Similarly, 80.0 percent of students in sections who did not use the Five Ws outside of the library session stated that they evaluated their sources (12 of 15). Yet, when asked how they evaluated sources, 78.1 percent of students in courses in which the Five Ws were used outside of the library session applied at least one of the Five Ws, while 58.3 percent of students in which the Five Ws were not used outside of the library session did the same (table 7). After combining both recall and application responses, 65.8 percent of those with repeated exposure to the method recalled and/or applied aspects of the Five Ws evaluation, and 46.7 percent of students enrolled in sections in which the Five Ws were not used outside of the library class were able to do so.


DISCUSSION

In assigning the initial in-class formative assessment, the researchers had three goals: (1) to introduce students to a systematic information evaluation method that would serve as an instructional scaffold to develop evaluation skills, (2) to measure how many students could accurately characterize features of a given source (for example, determining that a given source was opinionated, popular, and written by a credible author), and (3) to examine whether students would be able to present a reasonable argument about why they would or would not cite an opinionated, popular source in a college paper, and whether they would use criteria from the library method in their rationales.

On the first point, the use of the Five Ws as an instructional scaffold was successful. Students asked very few questions about the Five Ws method or how to use it. While no formal assessment measured student familiarity with the Five Ws before the library session, more than three quarters of students in each section confirmed by vocal agreement, a head nod, or raised hand that they had heard of the Five Ws before the library session. Because very few students had questions about the evaluation method itself, the scaffold was helpful in using class time efficiently. Most student groups (82, or 82.8 percent) completed at least three-quarters of the activity during class time, and 55 out of 99 student groups (55.6 percent) completed the entire in-class activity.

The effectiveness of the Five Ws as a scaffold was also supported by the summative assessment results. Students in sections where the Five Ws method was reiterated after the library session were better at recalling and applying the evaluation method than those exposed to the Five Ws only once (65.8 percent versus 46.7 percent). Scaffolds are tools put in place temporarily to help students master a skill, and learners may need to use a scaffold for some time before they develop or internalize the steps involved in a particular skill. Those students who used the Five Ws method in a class setting more than once were able to apply the skills of source evaluation more consistently than students who used the scaffold only once in a library instruction setting, suggesting that the former group had made more progress in internalizing these skills.

Regarding source evaluation, students were, overall, successful in determining the opinionated and popular nature of the source—nearly 90 percent of student groups described the piece as being written from an opinionated point of view and 70.7 percent of student groups stated that the author’s main purpose in writing was to persuade readers to agree with his opinion (table 1). Students accurately determined that the author had multiple degrees, first-hand knowledge of the topic, and was highly regarded by other journalists and authors. In fact, this particular author’s credentials made such an impression on students that, if a respondent recalled only one of the Five Ws on the follow-up summative assessment, it was most often the importance of evaluating the author (the “who” criterion) of a document. In addition, the vast majority of student groups correctly identified the document as having been written for the general public (92.9 percent) and published in a popular newspaper (96.8 percent).

Most students performed well in evaluating the “who” (authority of the author), “where” (credibility of the publication), and “why” (author’s purpose) characteristics of the document. Because of the emphasis on rhetorical analysis in composition courses, some students may have been attuned to analyzing the tone, language, and purpose of an author. As a result, the number of correct responses to questions about authorial tone and intent is, perhaps, unsurprising. However, the remaining three criteria—the “what” (document type), “how” (gathering and presentation of data), and “when” (recency/currency/timely impetus for publication)—were ones with which many, if not most, students struggled.

Of these three criteria, the formative assessment results associated with the “what” and “how” criteria provide insights into what students do not know. These gaps in student knowledge might be classified, in general, as a lack of awareness of publication jargon and processes. In particular, this manifested in students’ inability to recognize either types of documents or types of authors. For instance, the majority of student groups (85.6 percent) claimed that the document was a popular article and not a column. This mistake persisted despite the author’s identification as a columnist in his biographical statements and student descriptions of the author’s opinionated perspective. Furthermore, in the library session wrap-up when groups presented their Five Ws answers to the class, most students were unable to describe to the researchers the differences between a column and an article, or between a columnist and a journalist. Although this provided the researchers with a built-in teachable moment during the session, it also indicates that, though students are capable of identifying an opinion piece when they read it, they do not recognize terminology typically associated with such pieces.

Additionally, student responses demonstrate a lack of understanding of elements included in scholarly publications, as well as ignorance of the jargon used to explain scholarly authors’ research processes. The researchers were, frankly, surprised at the number of student groups claiming that the newspaper column included an abstract and a methodology. In the pilot project, 44.4 percent of student groups claimed the column included a methodology and 33.3 percent of student groups claimed it included an abstract. After the pilot project, the researchers added links to definitions of “abstract” and “methodology” in the assessment, and this reduced the number of groups claiming the column included a methodology (17.9 percent) but had little to no impact on the number of student groups claiming that an abstract was presented (33.9 percent). Students were also asked about how the author gathered data for his argument. Most students claimed that the author conducted a research study (52.2 percent in the fall 2011 pilot, 57.1 percent in fall 2012). Approximately one quarter of students in either semester accurately stated that the author gathered data from personal experience.

While the responses to these “how” questions had the fewest number of respondents (probably because time constraints kept some students from reaching these penultimate questions), the findings are notable because they indicate that many students are unfamiliar with scholarly article components. Although the time-constrained library session did not include specific instruction on defining or identifying the parts of scholarly documents (including abstracts and methodologies), students were asked whether the author included such components. They were also directed to ask the library instructor if they had any questions, and the researchers were readily available to provide any necessary assistance. Additionally, students had access to both Internet search engines and links to definitions of these terms. Yet, for the most part, students did not ask for help defining these terms. This is further evidence that students can identify a popular piece in aggregate, but they are largely unaware of the defining characteristics and categories of publications that help knowledgeable readers distinguish between document types and authorial processes.

These findings point to an illiteracy that is important to address. Lower division undergraduate students can clearly distinguish between a scholarly document and a popular one with little instruction. What students are missing, however, is an awareness of distinctions among the processes by which columnists, journalists, and researchers arrive at their conclusions and an ability to correctly distinguish opinion pieces from factual ones. This is of particular concern in terms of scholarly publications, such as The Lancet, in which letters to the editor often include citations and refer to the letter writers’ employment at universities or other research institutes. To a new student, such a letter could easily look like a scholarly research article as opposed to criticism of another researcher’s study. Without knowledge of publishing jargon and processes, students may find criticism and opinion pieces, such as book reviews and letters to the editor, indistinguishable from their research-based counterparts in a list of database search results. These critically important abilities were underdeveloped in these first-year students, who were at the end of their first semester at the university, and these findings were consistent over a two-semester period.

The other challenging criterion, the “when” questions, proved difficult for students for two reasons. First, one “when” question asked students whether they needed to cite something recently published for their assignment or if a historical piece was suitable for their topic. Because students were not reviewing this document in connection with a particular research assignment, the question was irrelevant and confusing in this context. Second, and more significant, were student difficulties regarding when the events discussed in the document occurred. Though the column was published in March 2011, the same month in which the Tōhoku earthquake, tsunami, and Fukushima nuclear disaster struck Japan, the columnist did not explicitly state that his writing was prompted by those disasters. At the time of publication, most readers would have been bombarded with media reports covering those terrible events, but these students were evaluating the document eighteen months after the events (six months after for the pilot group), and they approached the “when” at face value, providing only the dates of earthquakes that were specifically referenced in the column (1995 and 1923). Fewer than half of the student groups approached the question as one about a recent news event; only 45.3 percent made a connection between the March 2011 disasters in Japan and the March 2011 column. The value of situating a publication in its appropriate context was a discussion point at the end of the library session, after students had submitted their responses via SurveyMonkey and presented their group’s findings to the class.

The third and final purpose of the formative assessment was to examine student arguments for why they would, or would not, cite the opinionated, popular source in a college paper. Following examination of the document using the six criteria, the activity concluded by asking students to articulate their overall impressions of the document, both verbally and in the written assessment. These reflections were valuable for both students and researchers in that they not only prompted students to consider the document holistically and for a definitive purpose, but also provided researchers a glimpse into students’ decision-making processes. For example, several groups thought that the author was a scholar but reasoned that, because he was published in a newspaper, addressed a popular audience, and provided his opinion, the document was more of a popular source than a scholarly one. Thus, most groups (98.6 percent) capably weighed multiple criteria to accurately describe the popular nature of the document.

This holistic processing was again demonstrated in the final question, in which students were asked to state definitively whether or not they would cite the document in a college paper. The researchers deliberately left this as a “judgment call” to see if responses would speak to their abilities as first-year students to navigate the complexities of the evaluation process. There was no right or wrong answer for whether the source was worthy of citation in a college paper, and some students may have been influenced by the fact that citing popular sources was permitted in their composition assignment. Just over 67 percent of respondents provided a reasonable explanation for their decisions, often referring to the author’s credentials or the expression of opinion in the document as reasons for why they would, or would not, cite the material. Nearly half of all respondents provided a comparatively well-synthesized or nuanced justification.

As a result, the Five Ws can be considered an effective instructional scaffold and evaluation method. Results indicate that students were familiar with the Five Ws before attending library instruction, were able to apply it successfully during class, and that instructors unanimously found value in the method. However, one limitation of this study is its lack of comparisons among different types of evaluation methods. At this point, researchers are unable to determine whether the Five Ws is more or less effective or memorable than alternative methods, such as CRITIC. It is also unknown if composition instructors would have preferred or valued a different evaluation method over the Five Ws, though it should be noted that no instructor offered an alternative approach.

Regarding recall, the memorability and application of the Five Ws method was less successful than researchers originally hoped. Summative assessment results demonstrate that few students (7.5 percent) recalled the Five Ws evaluation method by name, indicating that the method is not overly “catchy” as a mnemonic device. In describing their own evaluation processes, however, most student responses (66.0 percent) suggest an internalization of some aspects of the Five Ws activity. After the library session, more than half of students (60.3 percent) reported researching the backgrounds of the authors they cited in papers. The 13.2 percent of students who considered more than one aspect of a source (but not all Five Ws) most often evaluated the reputation and reliability of both the author (who) and the publication (where). Consequently, while students may not have replicated the method in its entirety, many applied aspects of the Five Ws and understood it as part of an evaluation process.

The findings of the summative assessment also point to the value of collaborating with course instructors. The impact of library instruction beyond the one-shot session was enhanced by creating an assessment/activity that served as a scaffold for the skill development needed for established assignments. At the conclusion of the study, one composition instructor requested a copy of the Five Ws activity to use in her class, with several additional instructors indicating their intention to incorporate aspects of the activity in future classes. Given the demonstrated increase in memorability and application among students who received additional instruction on the method outside of the library, the researchers plan to use this collaborative scaffold approach as a model for other library instruction. Activities based on this model will be similar to the formative Five Ws assessment in their value to instructors and students, repeatability in a later nonlibrary class session, and ease of incorporation into an existing assignment.


CONCLUSION

To effectively prepare students for a lifetime of learning, it is essential that information literacy instruction sessions develop skills, such as source evaluation, that transfer beyond classroom walls. Navigating the complexities of the modern search experience in a one-shot session can be difficult, but learners are increasingly likely to encounter information sources, such as online journals, that defy traditional relationships between content and means of access; gone are the days when scholarly output was more likely to be found in print than online. As a result, an evaluation method that works well with any source, regardless of means of access and retrieval, is vital. The information evaluation methods published in the library literature over the past decade tend to privilege distinctions in access (online versus print) over distinctions in document type (e.g., articles versus editorials). In doing so, these methods fail to emphasize the unique characteristics that well-informed readers use to distinguish between information sources, such as the inclusion of a research methodology or the author’s affiliation.

In this study, the Five Ws was introduced as a means of evaluating sources regardless of format or mode of access. The researchers were primarily concerned with testing the memorability of the Five Ws evaluation method, instructors’ perceived value of that method, and its effectiveness as a scaffold. Although fewer than 10 percent of students recalled the Five Ws in its entirety, a majority used salient evaluation points from the method in their own research later in the semester. In addition, most instructors found value in the method and, in those classes where instructors reiterated the method outside of the library session, student retention and use of the Five Ws increased. These results suggest that instructional scaffolding can overcome some of the limitations of one-shot library instruction, including time restrictions and an abundance of learning outcomes to address, by integrating library instruction into course-level instruction through an information literacy activity that is based on a concept familiar to students and easily incorporated into course instruction and assignments. Though these results are promising, more research is needed on applying instructional scaffolding to library instruction and on collaborating with academic departments to teach the type of information evaluation students most need in the current information environment.

Although this study was primarily concerned with testing the memorability and value of the Five Ws, the formative assessment also yielded several unanticipated findings. Chief among these was that the majority of students lacked the background knowledge necessary to differentiate among the information-gathering techniques of various types of authors (e.g., journalists versus researchers). Additionally, students’ difficulty in explaining the differences among scholarly articles, popular articles, and columns may have resulted from their lack of familiarity with the jargon and function of publication components, such as abstracts. As a formative assessment, the Five Ws activity did not address such gaps in knowledge, but instead identified their existence among students who were close to completing their first semester at a university.

For many lower division undergraduate students, general education courses punctuate the first two years of their college careers. Introducing evaluation and research skills in these general, interdisciplinary, required courses may help to equip students with the critical thinking skills needed to succeed in advanced and specialized courses. Undergraduates who are able to acquire and internalize skills for evaluating information at both the source and document level may be better prepared for upper division courses in which evaluation becomes deeper, involving the comparing and contrasting of methodologies and scholarly findings within a particular field. The findings from this study suggest a need for increased attention to developing these skills in general education courses. In particular, students should learn to look at documents not only from a narrow, disciplinary view, but also contextually, in an attempt to understand the greater forces at play in their creation. There is lifelong value in ensuring that students not only summarize competently but also analyze competently; that they not only understand what they read but also recognize that there are people and processes involved in creating what they read. If upper division work indeed demands this kind of analysis, there may also be value in assessing students’ knowledge of publication processes before they enter those courses.

Despite the low recall of the Five Ws by name or in its entirety, the overall effectiveness of the method has led the researchers to continue to use, revise, and improve the Five Ws formative assessment for English 101/118 instruction sessions. The researchers have also adjusted the activity for use in high school library instruction sessions and plan to adapt the method for an online tutorial. In the near future, the researchers plan to share the results of this study with the First-Year Composition department in an effort to support lower division undergraduate student learning. In the long term, the researchers hope that these findings will encourage more studies on information evaluation instruction and the role it might play in the development of information literate citizens and scholars.


References and Notes
1. J. Patrick Biddix, Chung Joo Chung, and Han Woo Park, “Convenience or Credibility? A Study of College Student Online Research Behaviors,” Internet & Higher Education 14, no. 3 (2011): 175–82; Lea Currie et al., “Undergraduate Search Strategies and Evaluation Criteria: Searching for Credible Sources,” New Library World 111, no. 3/4 (2010): 113–24.
2. Association of College and Research Libraries, Information Literacy Competency Standards for Higher Education (Chicago: ACRL, 2001), accessed July 18, 2013, www.ala.org/acrl/sites/ala.org.acrl/files/content/standards/standards.pdf.
3. Ibid. 
4. Mikael Laakso et al., “The Development of Open Access Journal Publishing from 1993–2009,” PLoS ONE 6, no. 6 (June 13, 2011), accessed July 24, 2013, www.plosone.org/article/info:doi/10.1371/journal.pone.0020961.
5. Stuart Hampton-Reeves et al., Students’ Use of Research Content in Teaching and Learning, Report for the Joint Information Systems Committee (University of Central Lancashire: Center for Research-Informed Teaching, 2009), accessed July 15, 2013, www.jisc.ac.uk/media/documents/aboutus/workinggroups/studentsuseresearchcontent.pdf.
6. Ibid., 26
7. Ibid., I, 47
8. Biddix, Chung, and Park, “Convenience or Credibility?” 180
9. Sarah Blakeslee, “The CRAAP Test,” LOEX Quarterly (2004): 6, accessed July 24, 2013, http://commons.emich.edu/cgi/viewcontent.cgi?article=1009&context=loexquarterly.
10. Meriam Library, California State University, Chico, “Evaluating Information—Applying the CRAAP Test,” September 17, 2010, accessed July 18, 2013, www.csuchico.edu/lins/handouts/eval_websites.pdf; Andrew B. Pachtman, “Developing Critical Thinking for the Internet,” Research & Teaching in Developmental Education 29, no. 1 (2012): 39–47.
11. Brad Matthies and Jonathan Helmke, “Using the CRITIC Acronym to Teach Information Evaluation,” in Library Instruction: Restating the Need, Refocusing the Response: Papers and Session Materials Presented at the Thirty-Second National LOEX Library Instruction Conference held in Ypsilanti, Michigan 6 to 8 May 2004, ed. D. B. Thomas, Randal Baier, Eric Owen, and Theresa Valko, 65–70 (Ann Arbor, MI: Pierian Press, 2005), accessed July 25, 2013, http://works.bepress.com/brad_matthies/28.
12. Wayne R. Bartz, “Teaching Skepticism via the CRITIC Acronym and the Skeptical Inquirer,” Skeptical Inquirer 26 (September 2002): 42–44.
13. Sara Seely, Sara Fry, and Margie Ruppel, “Information Literacy Follow-Through: Enhancing Pre-Service Teachers’ Information Evaluation Skills Through Formative Assessment,” Behavioral & Social Sciences Librarian 30, no. 2 (2012): 72–84.
14. Ibid., 78
15. Ibid., 83
16. Marc Meola, “Chucking the Checklist: A Contextual Approach to Teaching Undergraduates Web-Site Evaluation,” portal: Libraries and the Academy 4, no. 2 (2004): 331–44.
17. Ibid., 336
18. Ibid., 337
19. Melissa Bowles-Terry, Erin Davis, and Wendy Holliday, “‘Writing Information Literacy’ Revisited: Application of Theory to Practice in the Classroom,” Reference & User Services Quarterly 49, no. 3 (2010): 225–30.
20. Ibid., 229
21. Ibid., 230
22. Ibid., 226
23. Lev Semyonovich Vygotsky, “Interaction Between Learning and Development,” in Mind in Society: The Development of Higher Psychological Processes, ed. Michael Cole, Vera John-Steiner, Sylvia Scribner, and Ellen Souberman, 79–91 (Cambridge, MA: Harvard University Press, 1978).
24. Ibid. 
25. David Wood, Jerome S. Bruner, and Gail Ross, “The Role of Tutoring in Problem Solving,” Journal of Child Psychology & Psychiatry 17 (1974): 89–100.
26. Jerome S. Bruner, “The Ontogenesis of Speech Acts,” Journal of Child Language 2, no. 1 (1975): 1–19.
27. Vygotsky, “Interaction Between Learning and Development”; Wood, Bruner, and Ross, “The Role of Tutoring in Problem Solving.”
28. Bruner, “The Ontogenesis of Speech Acts”; Wood, Bruner, and Ross, “The Role of Tutoring in Problem Solving.”
29. Bartz, “Teaching Skepticism via the CRITIC Acronym and the Skeptical Inquirer.”
30. Nicholas Kristof, “The Japanese Could Teach Us a Thing or Two,” New York Times, March 19, 2011, accessed July 29, 2013, www.nytimes.com/2011/03/20/opinion/20kristof.html.
31. During the pilot study, students attempted the in-class Five Ws activity with one of three separate documents: a report from the World Health Organization (WHO), a scholarly article from a geography journal, and the aforementioned newspaper page that included Kristof’s column. These documents were assigned randomly to groups and provided researchers an opportunity to observe student experiences evaluating different document types. Most students were able to identify the scholarly article right away. The unambiguous nature of the document presented little challenge in terms of conducting a nuanced evaluation and, as such, was of minimal value to students. The WHO report led to some confusion and difficulty (e.g., finding information about the authors of the report) and first-year composition students who became “stuck” on a question were unable to complete the assessment in the time allotted. The column, on the other hand, presented an appropriate balance of difficulty and accessibility. The material was familiar in that most students easily identified the New York Times as a newspaper, but Kristof was unfamiliar to most of them, and his academic achievements helped students question their assumptions about scholarly versus popular authors
32. Though the exact number of student participants is unknown, the pilot group consisted of 30 first-year composition sections, including eight English 118 sections and 22 sections of English 101. In fall 2011, each English 101 section was capped at 23 students and each English 118 was capped at 22 students
33. In 2012, both English 101 and English 118 sections were capped at 23 students, and researchers taught 17 101/118 sections in which the Five Ws learning activity was used
34. Jennifer Morse et al.,  “A Guide to Writing in the Biological Sciences: The Scientific Paper: Abstract,” George Mason University Department of Biology, accessed July 29, 2013, http://wac.gmu.edu/supporting/guides/BIOL/Abstract.htm; Jennifer Morse, et al., “A Guide to Writing in the Sciences: The Scientific Paper: Methods,” George Mason University Department of Biology, accessed July 29, 2013, http://wac.gmu.edu/supporting/guides/BIOL/Methods.htm
35. It should be noted that even if a student did not use a Five Ws term to describe their evaluation method (e.g., a student did not say “I evaluated ‘who’ wrote the document”), as long as a student’s comments and explanations clearly referred to a Five Ws criterion, the comment was coded for the corresponding criterion. For example, one student’s response to how he or she evaluated a source was, “I researched their degree level, literary accomplishments, and involvement in the field I was writing in.” This response was coded as an application of the “who,” or author criterion
36. The four instructors who used the Five Ws in some way in their own instruction outside of the library session taught eight of the fifteen sections whose students participated in the student survey. Two instructors who participated in the instructors’ survey did not use the Five Ws in their own instruction, and taught three sections of 101/118. Three instructors did not participate in the follow-up survey, but their students participated in the student survey. These three instructors taught four sections of 101/118, and their use of the Five Ws outside of the library session remains unknown

Figures

Figure 1. Student Responses to “What is the Document?” (N = 97)

Figure 2. Student Responses to How the Author Gathered Data (N = 63)

Figure 3. Student Responses to Components of the Document (N = 56)



Tables
Table 1

Student Responses to Questions in the “Why” Criterion


Question: What Was the Author’s . . . (correct responses identify an opinion piece; incorrect responses identify a non-opinion piece)
Main Purpose? n = 99
 Convince Readers (Correct Answer) 70 (70.7%)
 Inform Readers 25 (25.3%)
 Other 4 (4.0%)
Point of View? n = 97
 Opinionated (Correct Answer) 87 (89.7%)
 Objective 10 (10.3%)
Language? n = 98
 Emotional (Correct Answer) 72 (73.5%)
 Factual 26 (26.5%)

Table 2

Select Responses to the Five Ws Criteria: Comparison between Fall 2011 Pilot Project and Fall 2012 Study


Criteria Fall 2011 Pilot Fall 2012
What: Type of Document n = 125 n = 97
 Popular Article 64.8% 85.6%
 Editorial 29.6% 4.1%
 Column* (Correct Answer) 0.8% 9.3%
Why: Author’s Purpose n = 117 n = 99
 To Convince (Correct Answer) 57.3% 70.7%
 To Inform 35.9% 25.3%
When: Occurrence that Precipitated Publication n = 88 n = 95
 2011 Events in Japan (Correct Answer) 40.9% 45.3%
Where: Publication Type n = 116 n = 94
 Newspaper (Correct Answer) 94.8% 96.8%
How: Author’s Method of Gathering Data n = 92 n = 63
 Author’s Research Study 52.2% 57.1%
 Variety of Outside Sources 35.9% 42.9%
 Interviewed Similar People 20.7% 27.0%
 Interviewed Variety of People 22.8% 25.4%
 Personal Experience (Write-In; Correct Answer) 27.2% 25.4%
How: Author’s Presentation of Information** n = 81 n = 56
 Abstract 33.3% 33.9%
 Bibliography 12.3% 1.8%
 Methodology 44.4% 17.9%
 Designs/Illustrations/Cartoons 9.9% 5.4%
 Eye-Catching Fonts (Correct Answer) 11.1% 50.0%

*“Column” was not offered as a multiple-choice option in the pilot assessment.

**Links to definitions of “abstract” and “methodology” were not provided in the pilot assessment but were included in the fall 2012 assessment.


Table 3

Techniques Students Used to Evaluate Sources: Application of the Five Ws


Evaluation Method Respondents (N = 44)
The Five Ws Exactly 2 (4.5%)
Author (Who) Only 21 (47.7%)
Publication (Where) Only 2 (4.5%)
Author’s Purpose (Why) Only 1 (2.2%)
2–4 Ws 6 (13.6%)
At Least 1 W 32 (72.7%)

Table 4

Combined Responses, Recall, and/or Application of the Five Ws Evaluation Method


Evaluation Method Respondents (N = 53)
The Five Ws Exactly 4 (7.5%)
Author (Who) Only 21 (39.6%)
Publication (Where) Only 2 (3.8%)
Author’s Purpose (Why) Only 1 (1.9%)
2–4 Ws 7 (13.2%)
At Least 1 W 35 (66.0%)

Table 5

Instructors’ Use of the Five Ws: How They Used It Outside of the Library Session


“Yes, in our next class meeting after the session we reviewed the five Ws as a tool for source evaluation.”
“I have referenced it in class discussion and particularly in one-on-one conferences.”
“Modified: I had my student evaluate sources by doing in-class research on a few of the W’s, like ‘who,’ ‘what,’ and ‘where,’ though I didn’t call it ‘the three Ws,’ or anything.”
“I do, but not as overtly. I incorporate it into our discussion about the readings as we go—the types of questions I ask them are shaped by the five Ws.”

Table 6

Comparison of Student Recall of the Five Ws


Students Who Recalled Learning to Evaluate . . . Enrolled in Sections in which Instructors Used the Five Ws Outside of the Library Session (n = 19) Enrolled in Sections in which Instructors Did Not Use the Five Ws Outside of the Library Session, or Instructors’ Use of the Five Ws is Unknown (n = 6)
Five Ws Exactly 2 (10.5%) 1 (16.7%)
Author (Who) Only 2 (10.5%) 0 (0.0%)
2–4 Ws 2 (10.5%) 0 (0.0%)
At Least 1 W 6 (31.6%) 1 (16.7%)

Table 7

Comparison of Student Application of the Five Ws


Students Who Explained Evaluating their Sources by Using . . . Enrolled in Sections in which Instructors Reiterated the Five Ws Outside of the Library Session (n = 32) Enrolled in Sections in which Instructors Did Not Use the Five Ws Outside of the Library Session, or Instructors’ Use of the Five Ws Was Unknown (n = 12)
The Five Ws Exactly 0 (0.0%) 2 (16.7%)
Author (Who) Only 19 (59.4%) 2 (16.7%)
Publication (Where) Only 1 (3.1%) 1 (8.3%)
Author’s Purpose (Why) Only 0 (0.0%) 1 (8.3%)
2–4 Ws 5 (15.6%) 1 (8.3%)
At Least 1 W 25 (78.1%) 7 (58.3%)


