
Chapter 7. Assessment

The library world faces pressure to prove its worth and demonstrate measurable impact. This pressure has driven a growing body of research that tries to correlate library use with improved academic performance or that emphasizes the important role libraries play in the open educational resources movement. Fortunately, digital badges have evidence and other forms of assessment metadata baked into them. This technical standard, the Open Badges framework, is critical to demonstrating the value of digital badges and is one of their greatest strengths. Educause published a 2017 article analyzing the mismatch between the rhetoric and the reality of digital badges.1

One of the most powerful aspects of a digital badge is that an open badge has “metadata fields that function as dynamic narratives of learning.”2 The badge ties the learner’s story to the submitted evidence, and it makes both the approval of that evidence and the validity of the badge visible. Not every metadata field needs to be filled in every time, but the more descriptive the data, the more searchable and findable the badge becomes within the database and the better it can communicate with other relevant systems. This chapter explores assessment at three levels: within badges, across badge programs, and through badge ecosystems.
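Before turning to those levels, it may help to see what these metadata fields look like in practice. The short Python sketch below builds a simplified assertion in the style of the Open Badges 2.0 vocabulary (recipient, evidence, narrative, issuedOn, and so on); every URL, name, and evidence entry in it is invented for illustration rather than taken from a real badge.

import json

# A simplified Open Badges 2.0-style assertion. The field names follow the
# published vocabulary; all values below are illustrative only.
assertion = {
    "@context": "https://w3id.org/openbadges/v2",
    "type": "Assertion",
    "id": "https://example.edu/assertions/credible-sources/1234",
    "recipient": {"type": "email", "hashed": False, "identity": "learner@example.edu"},
    "badge": {
        "type": "BadgeClass",
        "id": "https://example.edu/badges/credible-sources",
        "name": "Evaluating Credible Sources",
        "description": "Awarded for locating and evaluating credible sources.",
        "criteria": {"narrative": "Complete all steps of the source evaluation module."},
        "issuer": "https://example.edu/issuer",
    },
    "issuedOn": "2019-03-01T00:00:00Z",
    "verification": {"type": "HostedBadge"},
    # Evidence entries carry the learner's story: each one can point to an
    # artifact and include a narrative describing the work behind it.
    "evidence": [
        {
            "id": "https://example.edu/evidence/1234/keywords",
            "narrative": "Initial and narrowed keyword searches with result counts.",
        }
    ],
}

print(json.dumps(assertion, indent=2))

The evidence entries are where the learner’s story lives; the more descriptive they are, the more the badge can say for itself to anyone who inspects it.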

Assessment within Badges

Most assessment of individual badges comes from the evidence submitted for each step of the badge. Evidence can be submitted in several ways: learners can take a short quiz demonstrating their knowledge, upload a screenshot of something they did, record a video of their project, create a web-based object, upload a file, or enter a response in a textbox. The evidence you choose to accept should be driven primarily by your learning outcomes and instructional design approach. Automated assessments are appropriate for some activities, while others are designed to have students externalize and articulate their thinking. There is value in both types of evidence, and each step of a badge can require a different type depending on what completing that step demands.
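To make this range concrete, the hypothetical sketch below models badge steps and the kind of evidence each one accepts as simple Python classes. The step names, evidence types, and the automated flag are all illustrative; they stand in for whatever your badging platform actually supports.

from dataclasses import dataclass
from enum import Enum, auto

class EvidenceType(Enum):
    """Kinds of evidence a step might accept (an illustrative list)."""
    QUIZ = auto()           # auto-graded quiz
    SCREENSHOT = auto()     # uploaded image showing completed work
    VIDEO = auto()          # recorded walk-through of a project
    WEB_OBJECT = auto()     # link to a web-based artifact
    FILE_UPLOAD = auto()    # any other document or file
    TEXT_RESPONSE = auto()  # free-text reflection entered in a textbox

@dataclass
class BadgeStep:
    name: str
    learning_outcome: str
    evidence_type: EvidenceType
    automated: bool  # True if the platform can score it without a reviewer

# Each step of one badge can ask for a different kind of evidence,
# driven by the outcome it is meant to assess.
steps = [
    BadgeStep("Pick a citation style", "Identify the style used in your field",
              EvidenceType.QUIZ, automated=True),
    BadgeStep("Narrow your keywords", "Refine a search based on initial results",
              EvidenceType.TEXT_RESPONSE, automated=False),
]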

While learning outcomes and design philosophy are a large part of deciding which evidence to accept, another factor to consider is scale and sustainability. Naturally, text-entry responses and file uploads take longer to grade than an automated quiz, but they also provide different insights into student learning. When designing a badge step, always ask whether its assessment can be automated, and if not, why not.

When you think about economies of scale, there is always a balance between the kinds of activities in which you want students to apply their knowledge and the resources you have available to evaluate that work. For example, if you will be the only one administering the badge program, relying more on automated assessments will make the program more sustainable. Just realize that, by automating assessments, you give up the ability to read the thought process of every student completing the badge. Krajcik and Blumenfeld emphasize that having learners externalize and articulate their thinking as they learn a concept is important for assessing their formative understanding of it.3

Two examples from our work illustrate the decisions behind badge evidence types. To earn one of our badges, students select the citation style they would use in their field. We link out to a resource that lists most citation styles and the fields that tend to use them. While it is interesting to read comments such as “I had no idea there were citation styles beyond APA or MLA,” assessment of this step could be automated with a multiple-choice survey in which students select the citation style most closely related to their field.

In another step, toward a different badge, students create initial keywords and then narrow their focus based on the results they receive. Students enter their initial keywords and search results in a textbox, followed by their narrowed keywords and search results. It would be nearly impossible to automate assessment of this step, because we want insight into the descriptors students are using for their specific topics. One way to automate it would be to choose a topic for students, create keyword searches, and then ask students to select the best search. However, because our badges are designed to be meaningfully tied to assignments, we want students to choose a topic that interests them and that they will use in their course assignment. If this connection is not part of your instructional design, you could automate this step. Choosing what evidence to require in a badge is an intentional decision among assessment types, and it can be refined over time. You might start by automating the assessment of a step and later discover that seeing students’ articulated thinking in that step would be helpful.

The benefits of automated assessment are clear: badges become highly scalable and sustainable, with very little staff time required from badge creators and evaluators. However, there are constraints on the types of questions that can be asked in multiple-choice assessments and on the level of learning that can be assessed. Textboxes and other creative entries allow deep insight into student thinking, but this evidence takes time to evaluate and limits how far a program can scale.
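To make the trade-off concrete, the sketch below routes each piece of submitted evidence either to a simple auto-grader or to a queue for human review. The functions and data are hypothetical, but they illustrate the point: the automated path scales with no added effort, while the manual path accumulates reviewer work.

review_queue = []  # submissions a librarian will read and evaluate by hand

def score_quiz(answers, answer_key):
    """Auto-grade a multiple-choice submission: every item must match the key."""
    return all(answers.get(q) == correct for q, correct in answer_key.items())

def handle_submission(step_name, automated, submission, answer_key=None):
    """Route one piece of evidence to the auto-grader or the review queue."""
    if automated:
        return score_quiz(submission, answer_key or {})
    # Free-text and uploaded evidence preserves the learner's thinking,
    # but a person has to read it, so it waits in a manual queue.
    review_queue.append((step_name, submission))
    return None  # pending human review

# The citation-style step can be auto-graded; the keyword step cannot.
print(handle_submission("Pick a citation style", True,
                        {"style": "APA"}, {"style": "APA"}))
print(handle_submission("Narrow your keywords", False,
                        "Started with 'climate'; narrowed to 'urban heat islands'."))
print(len(review_queue))  # one submission now awaits human review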

To assess the effectiveness and design of a single badge, consider scheduling time to review the full set of evidence submitted for that badge. Depending on the badge system you use, you should be able to pull this evidence and analyze it to determine whether there are pain points or other areas where the badge is not producing the desired results.
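What that review looks like in practice depends entirely on your platform. The sketch below assumes a hypothetical CSV export, evidence_export.csv, with one row per submitted step and columns for the step name and its review status, and then flags steps with an unusually high share of returned or abandoned submissions, which is one rough signal of a pain point.

import csv
from collections import Counter

# Assumes a hypothetical export with columns: step_name, status
# (approved / returned / abandoned). Adjust to your platform's format.
attempts = Counter()
problems = Counter()

with open("evidence_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        attempts[row["step_name"]] += 1
        if row["status"] in ("returned", "abandoned"):
            problems[row["step_name"]] += 1

# Steps where a large share of submissions are returned or abandoned
# are likely pain points worth redesigning.
for step, total in attempts.most_common():
    rate = problems[step] / total
    if rate > 0.25:
        print(f"{step}: {problems[step]}/{total} submissions flagged ({rate:.0%})")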

Assessment of Badge Programs

Assessing a badge program moves beyond individual badges to the overall quality and effectiveness of the complete program. To assess quality and effectiveness, it is helpful to create surveys or other feedback measures for key stakeholders, including students, instructors, and other librarians who might be assisting in badge evaluation.

It is important to realize that if the survey is not required, the response rate may be very low. Consider survey results holistically, alongside any other evidence that points to the quality and effectiveness of the badge program, such as free-text responses within the badges, comments from students, and overall completion numbers for the badges. When evaluating the program, it is also important to assess the overall process and technical logistics of earning the digital badges. If the user experience is clunky and unintuitive, learners can become frustrated before they even begin the activities you have designed.
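One lightweight way to keep the quantitative measures mentioned above side by side is a small script that reports completion numbers alongside whatever survey responses you did receive. The figures below are invented for illustration; substitute exports from your own badging platform and survey tool.

# Hypothetical program-level numbers for illustration only.
enrolled = 420          # learners who started any badge in the program
completed = 265         # learners who finished at least one badge
survey_ratings = [5, 4, 4, 3, 5, 4]  # optional post-badge survey, 1-5 scale

completion_rate = completed / enrolled
response_rate = len(survey_ratings) / completed
average_rating = sum(survey_ratings) / len(survey_ratings)

print(f"Completion rate: {completion_rate:.0%}")
print(f"Survey response rate: {response_rate:.0%} (interpret cautiously when low)")
print(f"Average satisfaction: {average_rating:.1f} / 5")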

Assessment of Ecosystems

Badging ecosystems extend beyond the individual badges and badge programs at one library. If your institution has a larger badging program, that is an ecosystem you can assess. If not, the wider digital badging world offers large, connected ecosystems.

One of the biggest critiques of digital badges is that it is difficult to distinguish a valid, high-quality badge from one backed by less evidence and fewer requirements.4 This is a realistic concern, but efforts are under way to assess badging ecosystems. One of the most important developments is BadgeRank, from Badgr, a search engine that allows badges to be searched and ranked. In theory, with wide adoption of this system, quality badges will rise to the top, and the service gives employers a quick way to check the validity and worth of a badge.

BadgeRank

https://badgerank.org

Another aspect of assessing the badging ecosystem is looking at the connection to social media platforms, such as LinkedIn, or to a learner’s experiential learning record. This assessment could explore how often learners choose to push their badges to their social media accounts or how often employers view digital badges that have been pushed to LinkedIn.

We conducted a badging ecosystem assessment, reported in a 2016 article in College and Research Libraries, that explored the willingness of human resources professionals in ten distinct fields to accept digital badges as evidence of students’ information literacy skills.5 Other colleagues and researchers have also conducted research on badging ecosystems.6

Future Assessment Directions

Learning analytics will drive much of the future of assessment, and digital badges are no exception. The field of learning analytics is still very much in its infancy, but the nature of digital badges means that a massive amount of data is collected and stored, and this data can be used to analyze the effectiveness of digital badges and digital badge ecosystems.
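As a small example of the kind of question this data can answer, the sketch below takes a hypothetical log of badge events (learner, step completed, timestamp) and counts how many learners reached each step, one of the simplest learning-analytics views of where a badge loses people.

from collections import Counter
from datetime import datetime

# Hypothetical event log: (learner_id, step_completed, timestamp).
events = [
    ("a1", "step1", datetime(2019, 2, 1, 10, 0)),
    ("a1", "step2", datetime(2019, 2, 1, 10, 20)),
    ("b2", "step1", datetime(2019, 2, 2, 9, 5)),
    ("c3", "step1", datetime(2019, 2, 3, 14, 30)),
    ("c3", "step2", datetime(2019, 2, 3, 15, 10)),
    ("c3", "step3", datetime(2019, 2, 3, 15, 45)),
]

# How many learners reached each step? Sharp drops between steps suggest
# where the badge loses people.
reached = Counter(step for _, step, _ in events)
for step in sorted(reached):
    print(step, reached[step])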

As mentioned in the previous chapter, another possible technology that might help with assessment and the entire badging ecosystem is artificial intelligence. This field is also in its infancy, but it has the potential to help scale badging programs and reduce the labor involved in creating and organizing badging systems.

Notes

  1. Viktoria Strunk and James Willis, “Digital Badges and Learning Analytics Provide Differentiated Assessment Opportunities,” Educause Review, February 13, 2017, https://er.educause.edu/articles/2017/2/digital-badges-and-learning-analytics-provide-differentiated-assessment-opportunities.
  2. Strunk and Willis, “Digital Badges and Learning Analytics.”
  3. Joseph S. Krajcik and Phyllis C. Blumenfeld, “Project-Based Learning,” in Cambridge Handbook of the Learning Sciences, ed. R. Keith Sawyer (New York: Cambridge University Press, 2006), 317–34, https://doi.org/10.1017/CBO9780511816833.020.
  4. Troy Markowitz, “The Seven Deadly Sins of Digital Badging in Education,” Forbes, September 16, 2018, https://www.forbes.com/sites/troymarkowitz/2018/09/16/the-seven-deadly-sins-of-digital-badging-in-education-making-badges-student-centered/#79cdb1670b8b.
  5. Victoria Raish and Emily Rimland, “Employer Perceptions of Critical Information Literacy Skills and Digital Badges,” College and Research Libraries 77, no. 1 (2016): 87–113, https://doi.org/10.5860/crl.77.1.87.
  6. Nate Otto and Daniel T. Hickey, “Design Principles for Digital Badge Systems: A Comparative Method for Uncovering Lessons in Ecosystem Design,” in New Horizons in Web Based Learning ICWL 2014 International Workshops SPeL, PRASAE, IWMPL, OBIE, and KMEL, FET Tallinn, Estonia, August 14–17, 2014 Revised Selected Papers, ed. Yiwei Cao, Terje Väljataga, Jeff K. T. Tang, Howard Leung, and Mart Laanpere (Cham, Switzerland: Springer International, 2014), 179–84, https://doi.org/10.1007/978-3-319-13296-9_20; James E. Willis, J. Quick, and Daniel Hickey, “Digital Badges and Ethics: The Uses of Individual Learning Data in Social Contexts,” in Proceedings of the 2nd International Workshop on Open Badges in Education co-located with the 5th International Learning Analytics and Knowledge Conference (LAK 2015), Poughkeepsie, NY, March 16, 2015 (New York: ACM, 2015), 41–45, https://dblp.org/db/conf/lak/obie2015.
