
Chapter 6. Deployment and Sustainability

Once you have your badges designed, partnerships secured, and a path forward, you may be ready to launch your program. What follows are considerations for the deployment of your digital badges as well as for keeping the program sustainable and manageable. Some of these ideas may affect the learning design of your badges and cause you to alter your original designs, but now is the time to make these fixes, before unleashing your badges into the wild! Other details you probably won’t be able to anticipate until you see student responses and get a sense of the “flow” of the badge evaluation process. Nonetheless, these considerations are meant to help you head off some of the major pitfalls and give you ideas for the next steps to take.

Deployment

As with other new initiatives and programs libraries offer, a tried-and-true way to launch a badge program is with a pilot. A small group of users who can authentically work through your digital badge program; find and report any bugs, glitches, or confusing wording; and successfully navigate to the right places will be extremely helpful. A pilot will also give you a sense of what the evaluation workload will look like and how much time and effort evaluators will truly need. The pilot phase is also a good time to collect and incorporate user experience feedback on the design of the badges and navigation of the digital badge platform. At Penn State, we have a homegrown badging platform and a working relationship with the developers, who are always open to user experience feedback to make the platform better. If you are using a commercial badging platform, consider submitting help-desk tickets or feedback about your issues; many vendors are receptive to this kind of feedback and incorporate fixes and upgrades regularly.

For a pilot, we recommend finding a partner who is an early adopter or a champion of the library or information literacy. By working with someone who is enthusiastic about launching a new technology or supportive of the library’s goals, you will have a smoother rollout. Additionally, a partner who is a supporter will tell all their friends about the program, helping you spread the word. To put it plainly, a smooth rollout and enthusiastic partners are critical to the success of your badge program’s deployment, and the pilot is key to getting your program off the ground. We also highly recommend developing evaluation or grading rubrics before launch. What follows are the most common types of submissions for badge steps, their pros and cons, and evaluation considerations. Since each badge platform operates differently, take your platform’s particular variations into consideration.

Free-Text Responses

Free-text response submissions are simply written answers by learners that will be read and verified by evaluators. In this scenario, the learner is responding with original ideas to a question or prompt in the badge step’s instructions, and the response could be in list or paragraph form. In the embryonic days of our own badge platform, this was one of only three types of submissions that were offered, and thus we used it (and still do) for many of our badge steps. An example screenshot from our own sets of badges is shown in figure 6.1; it includes the grading criteria (aka rubric), student response, and sample reply using the Penn State system. Additionally, some other open-ended questions and prompts from our badges are listed below.

Example prompts for free-text responses:

  • “For this step, type in your research question and come up with a few keywords for your particular research topic and list them in the box below. Try to create three keywords.” (For a badge on developing a research question)
  • “Do you now feel more comfortable evaluating a website? Do you feel that you could evaluate information on your own after completing this badge? Would you change anything about this activity? Does evaluating a website remain confusing for you or is it clearer now? Please include a short paragraph (4–5 sentences) or the equivalent web 2.0 technology creation in the textbox.” (A final reflection for a badge about evaluating web credibility)
  • “Your evidence for this step is to write 2–3 sentences on what part of the librarians’ job surprised you the most and a question that you might have as a result of viewing the video. Is there a librarian whose job you want to learn more about? Is there a service these librarians provide that surprised you? We want to know what you thought of the video.” (From a badge introducing our virtual reference service to undergraduates)

Additionally, in this type of submission, you can use third-party online tools to let students get more creative. For example, students may prefer to create a quick slide show in Google Slides or a VoiceThread response. Students can simply enter the URL of their web-based multimedia response in the evidence box; evaluators view the work on that website (students should be reminded to make the work open, at least to the reviewers) and then respond as usual via the badging platform.

VoiceThread

https://voicethread.com

One of the clearest benefits of free-text response submissions is the insight into the learner’s thinking that you gain when reviewing the evidence (see figure 6.1). A related benefit is that text responses help you keep a finger on the pulse of what is popular with students in terms of research topics, what they value, and where they have trouble or ease with learning the content of the badges. Below are a few collected anonymous responses that give a sense of the insights we tend to see when evaluating student work. For the librarian who is accustomed to teaching one-shots and having only surface-level interactions with students, evaluating these responses can be eye-opening and very informative.

  • “I think this badge will help me with practically every paper I write in the future. Research is such a big part of Psychology.”
  • “At first the keywords I was using were not very effective as I was not getting many helpful sources. Once I learned how to broaden and narrow my keywords, I found that my research skills improved when I was exposed to much more helpful information.”
  • “This badge activity has caused me [to] reexamine my own method of how I select keywords, and it has increased my understanding of investigating topics for research.”
  • “I have used scholarly articles for the research in my papers for the past 3 years, but this helped me identify a few new differences between scholarly and popular articles. I was not aware that popular articles did not cite their information, so if I have a questionable source, I can use this idea to find out whether or not it is scholarly. Truthfully, I wish I would have been given this badge when I was a freshman. Honestly, I had no idea there was a difference between scholarly and popular articles then, and this would have helped tremendously.”

Conversely, the main drawback of free-text responses is the time-intensive nature of evaluating them. As one might imagine, if you deploy your badges to many people who are actively engaged with the content, you can quickly be inundated with responses, and evaluating lengthy text submissions can become cumbersome. Some solutions for this are listed in the sustainability section below.

Document Uploads

The document upload submission type is one where the learner attaches a file or document in the badging platform for the evaluator to review. This type of response can be an extension of the text response, allowing the learner to use word processing software to make a more formal document that includes things like tables or charts. It is also appropriate for a capstone-type project, such as a research paper, which would be much longer than a simple response to a question. Other types of files, such as spreadsheets or slides, can be submitted this way as well, so again, returning to your badge design and determining what kinds of outcomes you’d like to see from learners will help you decide on the format that fits the badge.

This submission type has benefits and drawbacks similar to those of the free-text response. As an evaluator, you’d be able to see learners’ work directly and gain insights from their evidence. The format allows learners to use other software or applications, expanding the range of formats for their work and allowing more creativity than text alone. On the other hand, adding a layer of complexity with an upload option can make evaluation even longer or more intense. Again, the number of submissions you expect has huge implications for the workload: reviewing research papers from five students is quite different from reviewing papers from 100 students. Another aspect to consider is the variety of file types learners might upload. Speaking from experience, unless students are explicitly told which file formats to upload, you might find yourself with submitted files that are not platform-agnostic (for example, .pages files that cannot be read on a Windows machine). Also, depending on your badge platform and computer environment, downloading, opening, and viewing files may quickly become tedious, and this workflow can differ significantly depending on the number of responses in your queue.

Auto-graded Quizzes

Most badging platforms offer a quizzing tool that can be used in badge steps to assess learners’ understanding of the content. In most cases, these quizzes are auto-graded within the platform against answer keys that the designer enters ahead of time. Multiple-choice questions are the most popular type, but true/false, matching, or ordering questions may be options as well. Quizzes are generally good for quick assessments, particularly formative assessments along the course of a badge where learners can self-check their comprehension along the way. In the early days of our own badge platform’s development, auto-graded quizzes were not an option, so we didn’t initially include any. Today we have sprinkled a few quizzes into our badges (see figure 6.2), but from an instructional design perspective, we feel they are not the best fit for designs that focus on students articulating their learning through reflection.

One pro of this submission type is that students seem to like the familiar and often easy format of a quick quiz, as opposed to writing a thoughtful response to a question; in our student feedback, they occasionally suggest that more, or even all, of the steps be quizzes. Another pro is that the step is graded immediately for both the student and the evaluator, and as an evaluator you may be able to see quiz answers individually or in the aggregate to spot where students have trouble. Conversely, depending on the platform, it can be difficult or impossible to respond to a student’s quiz work with personalized feedback. While it’s tempting to make every step of a badge a quiz because of learners’ preferences and ease of evaluation, we caution against this kind of blanket approach to submissions. Quizzes are not an assessment that fits every type of learning, learner, or topic. Consider the design of the badge and what you would like students to learn: if the content requires critical thinking and articulation of knowledge, a quiz may not be the best fit, but if the step is providing new information and facts about a topic, a quiz might work well and provide some welcome variety over the course of a badge.
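To make the answer-key mechanism concrete, here is a minimal sketch in Python of how a platform might auto-grade a quiz step. The data structure, function name, and passing threshold are hypothetical illustrations, not features of any particular badging platform.

# Hypothetical sketch of answer-key-based auto-grading for a quiz step.
# The badge designer defines the answer key ahead of time; the platform
# scores each submission against it immediately.

ANSWER_KEY = {
    "q1": "b",               # multiple choice: correct option
    "q2": True,              # true/false
    "q3": ["c", "a", "b"],   # ordering: correct sequence
}

def grade_quiz(submission: dict) -> dict:
    """Compare a learner's submission against the answer key and return
    per-question results plus an overall score for the badge step."""
    results = {q: submission.get(q) == correct for q, correct in ANSWER_KEY.items()}
    score = sum(results.values()) / len(ANSWER_KEY)
    return {"results": results, "score": score, "passed": score >= 0.8}

# Example: a learner's answers as the platform might collect them
print(grade_quiz({"q1": "b", "q2": False, "q3": ["c", "a", "b"]}))

In a real platform, the answer key and passing threshold would be set through the badge designer’s interface rather than in code; the point is simply that grading against a predefined key is what makes immediate feedback to the learner possible.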

No Evidence or Optional Evidence

Occasionally, you may have a step that asks a learner to do some task or take note of information that is required as part of the learning journey but doesn’t necessarily require the learner to submit evidence. In this case you could have a badge step where no evidence is required or where submitting evidence is optional. We have two examples of such steps in our own digital badge program (see figure 6.3 for one example). In a step where no evidence is required, we ask students to review and bookmark a site for future reference; we don’t require that they submit evidence and simply take their word that they’ve done it. In a badge about citations, we include an optional-evidence step that gives students the option of submitting a citation to us for review and feedback. Not all students submit something, which means the responses we do receive come from students who are focused on learning the topic. Another use case for this type of evidence would be a reading you want students to complete but do not need them to answer questions about.

The benefit of requiring no or optional evidence is that you can still place needed or supplemental materials into the design of the badge while giving both learners and graders a break from submitting and evaluating evidence. If you have a large number of learners working through a badge, this option allows you to incorporate something that might be hard to test or reflect on, without interrupting the flow of the badge. The obvious drawback is that you don’t have explicit confirmation that the learner did the task, and you also don’t have data or feedback about this particular step. This type of submission may not be the most common one for badges, but it can be useful and is worth considering at times.

Sustainability

This section covers some ideas to consider for keeping all of the different aspects of your digital badge program working smoothly, your users’ expectations managed, and your work sustainable. At Penn State, the question of scale always looms large because having over 90,000 total students (online and residential) means that most classes we interact with are either large or have multiple sections. Any program we launch needs built-in growing room if we want it to have an impact on larger groups or programs. If you aren’t at a large institution like ours, you will still want to consider these suggestions for your own situation, as they will help you plan for potential pain points ahead of time, or at least be prepared to deal with an issue should it arise.

Evaluation Time

Time needed for evaluation is one of the biggest sustainability issues we’ve faced and an area to consider proactively. The design of your badges will affect not only the learners but also you and your colleagues, and the amount of time it takes to evaluate the evidence learners submit is probably the factor that will affect you most day to day. Among the types of responses outlined above, some clearly require more time and effort than others, with document uploads and text responses being the most time-intensive and auto-graded quizzes and no or optional responses being the least. The learning theory driving our design was connectivism, with a focus on placing resources at key moments within the learning experience; therefore, we didn’t want the majority of the student work to be auto-graded quizzes. Rather, we wanted students to think critically and respond. One approach we’ve taken over time is to provide a mix of response types in each digital badge, and giving the learner a choice of ways to respond to questions helps as well. Also consider the type of response students will be articulating. Is it a reflection on their experiences, where there isn’t a “wrong” answer per se, or are you looking for a specific response? Due to the nature of the evidence, the first is easier to evaluate than the second, and that difference is a factor in the time required for evaluation.

As an evaluator, you become faster and more skilled the more responses you verify. Once you get a handle on what you are looking for in a response, you will be able to deftly identify a “good” response. In our own experience, we find that the large majority of students do the work appropriately and don’t need multiple attempts to pass a step or earn a badge, which also helps to speed evaluations. As mentioned earlier, grading rubrics or criteria for evaluating responses will also help limit time needed for evaluating responses, especially text or multimedia responses.

Another way you can ease evaluation is to enlist your colleagues and crowdsource this aspect of your program. When our program started taking off and we were inundated with evidence to evaluate, we quickly found a few supporters who were willing to pitch in to help. That cadre of evaluators soon grew to over a dozen people and is the main way we’ve been able to expand our program. We’ve created an orientation and training curriculum for volunteer evaluators and put out a call twice a year to find new helpers. Once new evaluators are onboarded, we offer to co-evaluate with them until they feel secure responding to students and go at a pace they are comfortable with.

If you don’t have many colleagues or helpers to draw upon, another way to keep your work manageable is to limit the number of participants completing badges. You can do this by making the badges optional; however, if you want learners to complete badges in their entirety, or if the badges are part of a scaffolded program, you’ll want to make them mandatory, in which case limiting the number of people who can earn them may be the better option. This method can make your program seem more exclusive while keeping your workload manageable. It also means you can spend more time on each response and provide meaningful feedback, if that’s how your badges are designed.

Artificial Intelligence

Aside from changes in staff support to manage a digital badge program, there is a technology on the horizon that may help more in the future, one we’ve recently had the chance to explore in detail: artificial intelligence (AI). While our crowdsourcing approach has been a success, it is not likely to remain sustainable at the current growth rate. The integration of digital badges changed our pedagogy by deepening the learning experience for the student and the teaching experience for the librarian. We didn’t want to move away from the philosophy of providing personalized feedback in our digital badge designs, but at the same time we wanted a way to automate parts of the process to make it more efficient. This is when we turned to AI.

The type of AI we are exploring, automated essay scoring (AES), is used to assess the quality, accuracy, and relevancy of natural language writing. Recent advances in machine learning (ML) have led to significant improvements in the accuracy of AES, and evaluating student responses in micro-credentials is a natural, yet underexplored, application of this technology, certainly within libraries. Our micro-credential data is well suited to various ML techniques because adoption has been so successful that thousands of responses are available to train a model.

Luckily, our institution was offered seed funding for AI-based projects, and we partnered with our I-School (College of Information Sciences and Technology) to develop an AI tool that integrates human and algorithmic capabilities. The AI gives students indicators as to whether their response is likely to be successfully scored and speeds the grader’s response time so that personalized feedback will still be possible at scale. Through this process, we learned how challenging it is to integrate AI into an environment very concerned with data privacy and how integrating new technology into existing systems requires careful coordination.
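As a rough illustration of this kind of approach, and not a description of the actual Penn State tool, the sketch below trains a simple text classifier on previously evaluated badge responses and estimates whether a new submission is likely to be scored as passing. The file name, column names, model choice, and threshold are all assumptions for illustration.

# Illustrative sketch only: a simple AES-style classifier trained on
# previously evaluated badge responses. The CSV layout and model choice
# below are assumptions, not the Penn State implementation.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Hypothetical export of past submissions: response text plus whether an
# evaluator marked the step as passed (1) or returned it for revision (0).
data = pd.read_csv("graded_responses.csv")  # columns: "response", "passed"

X_train, X_test, y_train, y_test = train_test_split(
    data["response"], data["passed"], test_size=0.2, random_state=42
)

# TF-IDF features feed a logistic regression classifier.
model = make_pipeline(
    TfidfVectorizer(min_df=2, ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(X_train, y_train)
print("Held-out accuracy:", model.score(X_test, y_test))

# For a new submission, surface the probability of passing as an indicator
# to the student; a human evaluator still makes the final call.
new_response = ["My keywords are climate change, policy, and adaptation."]
prob_pass = model.predict_proba(new_response)[0][1]
print(f"Likelihood this response would pass: {prob_pass:.0%}")

In practice, a probability estimate like this is most useful for flagging borderline submissions for closer human review and for reassuring students before they submit, rather than for replacing the evaluator’s judgment.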

Although the use of AI is on the rise and we’ve started to use it in our everyday lives, it is still a developing technology, and using a developing technology where student grades are potentially affected is an area in which to tread especially lightly. In our experimentation with AI, we felt it was, and will remain, critical to have a human in the loop throughout the process. Even so, we can clearly see how AI will affect digital badges, as well as other areas of libraries, in the future.

Conclusion

It is our hope that, with our own experiences in mind, you will have a clearer path forward for your own digital badge program, and that by considering some of our challenges and ideas up front, you will be better positioned to stay agile and responsive to your learners’ needs and to launch your program successfully.

Figure 6.1

A screenshot of Penn State’s badging platform showing the grading criteria for the evaluator, a student’s response to a prompt (evidence), and a follow-up answer by an evaluator.

Figure 6.2

A screenshot of Penn State’s badging platform showing a step using an auto-graded quiz.

Figure 6.3

A screenshot of Penn State’s badging platform showing a badge step that doesn’t ask the learner to submit a response.

