Randall, D. L., Farmer, T., & West, R. E. (2019). Effectiveness of undergraduate instructional design assistants in scaling up a teacher education open badge system. Contemporary Issues in Technology and Teacher Education, 19(4). https://citejournal.org/volume-19/issue-4-19/general/effectiveness-of-undergraduate-instructional-design-assistants-in-scaling-a-teacher-education-open-badge-system

Effectiveness of Undergraduate Instructional Design Assistants in Scaling a Teacher Education Open Badge System

by Daniel L. Randall, Brigham Young University; Tadd Farmer, Purdue University; & Richard E. West, Brigham Young University

Abstract

This article describes an examination of how undergraduate instructional design assistants (IDAs) scaled up an open badge system by assisting in creating open badges. External reviewers rated the open badge rubrics created by seven of these IDAs along with those created by instructors, and the results were compared by scored components as well as overall totals. Interviews were conducted with the seven IDAs, which were coded using cross-case thematic analysis. With the help of IDAs the number of badges increased without compromising the quality of the badge rubrics, as IDAs’ rubrics were of quality equal to those created by instructors. Benefits experienced by IDAs included technology skills and professional growth. Several practitioner tips are provided for those wanting to employ IDAs effectively in creating open badges, including finding students with strong content expertise, creating a rigorous mentoring process that guides the IDAs in their tasks, allowing IDAs to own their badge development from beginning to end, involving the IDAs as teaching assistants so they can see the implementation of their badges, and encouraging peer collaboration among the IDAs to share best practices.

Of the many educational technology innovations created over the years, a surprising number have failed (Coleman, 2014). For example, in the 1920s and 1930s several prominent universities began offering college courses via radio (Matt & Fernandez, 2013). At the time, radio was predicted to change the way students were educated, perhaps even to replace the lecture hall. By 1941 only one radio course still offered credit, and no one enrolled in it. Matt and Fernandez (2013) suggested that possible reasons radio instruction floundered included low completion rates, distractions in the home, and lack of social interaction.

Massive open online courses (MOOCs) began with similar hype. They appeared to be infinitely scalable, and many thought they would democratize education (Portmess, 2013; Skiba, 2013) and possibly disrupt higher education (Horn & Christensen, 2013). Like radio courses, however, the hype around MOOCs did not last long. Enrollments were high, but so were the attrition rates (Breslow et al., 2013; Jordan, 2014).

Although Ahearn (2018) argued that MOOCs should be evaluated as digital content rather than courses, they were marketed as an educational innovation. Many scholars claimed numerous MOOCs used poor or less-effective pedagogical practices, and they questioned how much students learned (Naidu, 2013; Prensky, 2013; Siemens, 2014). By 2015, after poor performance (see Kolowich, 2013), MOOCs were not dead, but the hype was (Hill, 2015), and even some leaders in the MOOC space had earlier seemed to question the usefulness of MOOCs in higher education (Chafkin, 2013).

Radio instruction and MOOCs appear to have followed the path of many previous technological innovations for education. As Cuban (2001) famously declared about computers in the classroom, they “have been oversold and underused, at least for now” (p. 179).

In preservice teacher education, in particular, technology is consistently found to be “significantly under-used” (Cuban, 2001, p. 134). Although there has been an abundance of research into pedagogical strategies surrounding educational technologies, these past technological failures may indicate that more attention needs to be placed on studying how to make technological innovations sustainable and more easily integrated into organizational practices, so that they can truly affect teaching and learning.

Another new educational technology innovation, open badges, has been hailed as a potential “game changer for higher education” (Moore, 2013, p. 75), and many have declared it particularly valuable for teacher education (Gamrat, Zimmerman, Dudek, & Peck, 2014). Open badges are a type of alternative microcredential (West, Newby, Cheng, & Clements, in press) that represent skills and knowledge acquired within traditional institutions or through more informal and dynamic ways, such as internships, independent learning, and other learning experiences. Open badges provide an efficient and economical representation of accomplishments and capabilities:

  1. Access to information about the criteria, evidence, and performance required to earn the credential.
  2. Freedom to share the credential openly through social media and electronic portfolios.
  3. Low cost of hosting and issuing badges.
  4. Independent verifiability to prevent forgery.

These affordances make open badges a potentially disruptive innovation that can serve new educational markets outside of and across formal higher education institutions, including teacher education (Gamrat et al., 2014; Randall, Harrison, & West, 2013).

Although the concept of open badges was said to be “rapidly gaining traction among educational practitioners, education-oriented companies, and nonprofit organizations” (Devedžić & Jovanović, 2015, p. 603) and popular with learners (Cross, Whitelock, & Galley, 2014), badges have faced threats to permanence similar to those faced by MOOCs and radio courses. As Cuban (2001) warned, the problem often appears not in the design of an innovation but in the scaling of its adoption and implementation to achieve a positive impact on learners and educational systems. The on-campus courses on which radio courses and MOOCs were based were likely effective, but the methods used to scale them up (via radio or MOOC) introduced problems that prevented them from being successful.

In preservice teacher education, in particular, barriers to adopting an innovation like open badges include institutional requirements for earning and compensating credit, already strenuous degree requirements, state and national accreditation requirements, and the distribution of preservice education across several departments in a university. For open badges to be successful in preservice education, institutions must be able to scale their badging systems without sacrificing quality pedagogical practices or overburdening faculty and departments. Such success also requires covering the financial cost of open badges.

Because badges are usually issued after an assessment, a human time cost is often involved. If open badges prove popular, the time teacher education faculty members need to manage the open badging system, review the work submitted, and issue the badges could increase substantially. With time already at a premium, faculty members are often reluctant to take on more work, despite the benefits it may provide to their students.

For this reason, for open badging and other educational innovations to be successful, methods must be developed to integrate the technology into the daily practice of colleges of education without drastically increasing faculty workloads. Educational innovations that can scale up at a lower cost than alternatives can be especially beneficial to people who are underserved — due to lack of space or lack of funding — by providing them with greater access to education (Selingo, 2013).

Despite the emergence of increasing research into the potential benefits of open badges and the perceptions of those who receive them, little has been published on how faculty members can implement this innovation. In this article, we report on our efforts to improve the scalability of open badges by employing skilled undergraduate instructional design assistants (IDAs) to address these challenges.

Development and Scaling of Open Badges in Teacher Training

Open badges are a type of open microcredential created by the Mozilla Foundation in 2012 with funding from the MacArthur Foundation. The number of badge issuers on the Mozilla Open Badges platform grew to 1,450 by 2013 (Mozilla Open Badges, n.d.). The common characteristic of open badges is that they adhere to the open badge infrastructure specification maintained by the IMS Global Learning Consortium, which allows the badges to be portable across platforms. The typical focus is on micro, skill-based achievements, allowing the badges to be collected and stacked in various ways to form learner profiles of achievements as part of digital portfolios.
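Because a badge is defined by a shared specification rather than by any single platform, each issued badge is ultimately a structured metadata record that any compliant system can read and verify. As a rough illustration only (not the system described in this article), the following Python sketch assembles a badge assertion using field names from the Open Badges 2.0 vocabulary; all URLs, the salt, and the email address are hypothetical placeholders.

    import hashlib
    import json

    # Hypothetical hosted BadgeClass URL and earner email (placeholders).
    BADGE_CLASS_URL = "https://badges.example.edu/badges/google-sites.json"
    EARNER_EMAIL = "learner@example.edu"
    SALT = "s3cr3t-salt"

    # Recipients are commonly identified by a salted SHA-256 hash of their
    # email address so the assertion can be published without exposing it.
    identity_hash = "sha256$" + hashlib.sha256(
        (EARNER_EMAIL + SALT).encode("utf-8")
    ).hexdigest()

    assertion = {
        "@context": "https://w3id.org/openbadges/v2",
        "type": "Assertion",
        "id": "https://badges.example.edu/assertions/12345.json",
        "recipient": {
            "type": "email",
            "hashed": True,
            "salt": SALT,
            "identity": identity_hash,
        },
        "badge": BADGE_CLASS_URL,            # link to the BadgeClass metadata
        "verification": {"type": "hosted"},  # verified by re-fetching the id URL
        "issuedOn": "2019-01-15T00:00:00+00:00",
        "evidence": "https://example.edu/portfolio/my-google-site",
    }

    print(json.dumps(assertion, indent=2))

Because the assertion lives at a stable URL and references the issuer’s own BadgeClass definition, a consumer can independently re-fetch and check it, which is what makes the credential portable across platforms and resistant to forgery.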

Open badges can be given for recognizing honor or authority, achievement, experience, or community service (Araújo, Santos, Pedro, & Batista, 2017); developing digital literacies (Harvey, 2017); gamifying education (Ebner et al., 2017); developing specialized skills, such as in a laboratory (Hensiek et al., 2017; Young, West, & Nylin, in press) or for computational thinking (Hunsaker & West, in press); enhancing student interaction or engagement (Chou & He, 2017); validating competencies (Niehaus, Platz, Herselman, & Botha, 2017); developing online tutors (Hrastinski, Cleveland-Innes, & Stenbom, 2018); improving goal setting (Cheng, Watson, & Newby, 2018); or many other purposes, including promoting and recognizing diverse kinds of learning mapped to standards and endorsements (Hickey & Willis, 2017).

Badges have garnered particular attention for teacher training. Our review of the literature on open badges in teacher education found enthusiasm for the concept and several initial positive results. For example, Jones, Hope, and Adams (2018) collected qualitative and quantitative survey data from 99 K–12 teachers who were awarded digital badges and found that these teachers liked and supported the initiative and often shared their badges on social media.

Diamond and Gonzalez (2016) similarly studied 29 middle and high school teachers, who responded with strong support for badging as external credentials, with the caveat that external validation (e.g., from employers) was important. Despite these initial studies, we found scant research on how effective open badges could be for teacher professional development. Even scarcer are guidelines and research on how to overcome institutional barriers to sustaining this innovation in teacher training practices, a gap that created a strong need for the current study.

Badges for Professional Development

As Fontichiaro and Elkordy (2016) asserted, “Digital badges, a version of microcredentialing, offer an opportunity to go beyond a seat time paradigm to more accurately and vividly document professional learning” (p. 287) in a way that helps teachers connect professional development to their authentic practice. Diamond and Gonzalez (2016) tested this idea, and after their first year of implementation reported insights from 29 middle and high school history and social studies teachers from multiple states who had participated in an online badging experience for professional development. These teachers felt the badges could be useful, particularly to encourage competency-based education and disciplinary mastery, but reported not being rewarded at their schools for earning badges.

Similarly, Gamrat et al. (2014) developed the Teacher Learning Journeys program to enable teachers to customize their professional development experience using open badges. After evaluating 154 professional development activities from 36 teachers, they found the strategy enabled personalization in some aspects of learning and assessment. Jobe, Östlund, and Svensson (2014) and later Hodges, Lowenthal, and Grant (2016) added that badges could be used as credentials for MOOCs used by teachers in training, facilitating more open learning for teachers.

These initial positive results have led to many companies emerging to offer professional development to teachers via open badges (for example, Digital Promise, n.d.). In addition, many school districts and badge platforms are targeting the use of microcredentials for teacher professional development (Jennings & Roome, 2017).

Our Open Badge System

We use open badges as part of a larger badge system that serves undergraduate preservice teachers in multiple fields. Each badge, which typically corresponds to a single technology, is earned by mastering a number of competencies listed in a rubric. In our system we use the word badge generically to include, in addition to the digital credential, its corresponding rubric and any relevant instructional aids.

We have incorporated our badge system within the course EdTech 200, which requires students to complete three educational technology projects, each chosen from several badge options. For example, the Internet Communications Project can be satisfied by completing the Google Sites badge, the Wix badge, or another badge in the same category. Students have greater choice and autonomy, and the system also reduces demands on instructors’ time, so they can provide individual help to students who need it (Randall et al., 2013). We desired to scale our badging system to serve more people with more options.

Challenges to Scaling

The current challenges to scaling up badge systems include maintaining rigor and quality control while creating new badges and assessing them (West & Randall, 2016). As we create new badges, we must ensure that they are developed according to the core philosophy of the badging entity and that all elements associated with the badge, such as the rubric and instructional materials, are completed.

Maintaining rigor while completing all steps in the process can be time intensive and a challenge to scalability, particularly for teacher education faculty members who have limited time. Some badging systems contend with additional challenges related to having sufficient content knowledge in the areas represented in the badges. For example, as an educational technology department, we do not have specific science, mathematics, or dance education content knowledge, although we understand the framework of technological pedagogical content knowledge (Koehler, Mishra, & Cain, 2013). A badge development team with an inadequate understanding of content knowledge can slow down development and reduce quality.

In addition, teachers know that the assessment process should involve more than a simple review of a submission to determine whether or not a badge should be issued. Earning the badge should provide learners with specific formative feedback that enables them to reach mastery. This feedback not only promotes and enhances learning but also increases the credibility of the badge as a legitimate credential (West & Randall, 2016). Providing high-quality feedback based on a rigorous assessment process can be time intensive, however, particularly as many skills are best assessed by human graders, which complicates the challenge of scaling a badging system without sacrificing quality. As with badge development, assessment requires knowledge specific to content areas as well as technologies — and preparing teachers involves many content areas.

Involvement of Undergraduate Assistants

Undergraduate teaching assistants (UTAs) may be an affordable solution to some of these challenges in scaling open badging while maintaining rigor in assessment. While many UTAs perform more clerical functions and have less responsibility than graduate teaching assistants (Weidert, Wendorf, Gurung, & Filz, 2012), UTAs who have been given more responsibilities have demonstrated ability to perform these tasks well (Mendenhall & Burr, 1983; Weidert et al., 2012).

Abilities and Responsibilities

UTAs have been successful in such functions as reviewing assignments and tests and making suggestions for improvement, giving feedback to students, and writing test items (Hogan, Norcross, Cannon, & Karpiak, 2007; McKeegan, 1998; Mendenhall & Burr, 1983). Some studies have found UTAs to be effective in improving the experiences of students in the courses (e.g., Dickson, 2011).

Indeed, Mendenhall and Burr (1983) advocated giving UTAs more responsibilities, noting that precedent shows they have been able to meet the expectations. In particular, UTAs in preservice teacher education programs are already being trained as teachers and have a better understanding of teaching, mentoring, and learning than UTAs in other disciplines.

Serving as a UTA or a graduate teaching assistant (GTA) provides important benefits for student teachers (Weidert et al., 2012), including improved research abilities when they are trained in inquiry-based approaches (French & Russell, 2002). Studying UTAs teaching sociology, Fingerson and Culley (2001) found benefits for students and faculty as well as for the UTAs themselves, particularly when best practices were followed, such as faculty collaboration with the UTA and use of dialogue-based teaching strategies. Another best practice they recommended was making the work of the UTA more visible to students by involving the UTA in more authentic aspects of teaching.

Training and Mentoring

For teaching assistants (TAs), whether graduate or undergraduate, to be effective, they must be well trained. The scholarship concerning the efficacy of TAs and the nature of their training is thin, and the available research focuses on GTAs. Several institutions that utilize TAs have provided extensive training via orientation seminars, weekly meetings, and personal mentoring (Hogan et al., 2007; McKeegan, 1998; Mendenhall & Burr, 1983; Weidert et al., 2012).

Luft, Kurdziel, Roehrig, and Turner (2004) studied the training of science GTAs and found little research on the training of this increasingly crucial pool of university instructors. They found that GTAs were working autonomously and following traditional teaching practices but had limited opinions about undergraduates’ abilities. The researchers noted the importance of faculty and staff participating in GTA training programs. In a separate paper, they also found that chemistry GTAs needed training focused on instruction and assessment, which has relevance to our study.

Carroll (1980) had similarly noted the scarcity of research on graduate and undergraduate TA training — especially studies providing evaluation data about the effectiveness of these programs. In his review, he considered 48 studies that were mostly descriptive. Eight studies reported data on TA feedback about training, indicating they had found it very helpful. Only one study considered cognitive outcomes from the TA training (it reported positive effects).

Observations of TA performance after training were reported by 13 studies, which documented improvements in teaching. Carroll (1980) criticized the generalizability and validity of these studies, but concluded that TA training seemed to improve satisfaction and performance, with a caveat that more research and better research methods were needed on both UTAs and GTAs.

Since Carroll’s (1980) review, the amount of research on UTA or GTA performance or training has, unfortunately, been minimal. Boyle and Boice (1998) found that systematic mentoring produced better results than elaborate training programs. McKeegan (1998) described using UTAs to help with course preparation, course instruction, and individual tutorial mentoring, as well as teaching and grading.

These activities are somewhat equivalent to developing badge rubrics and assignments as we have done in our study reported in this paper. Surveying students and UTAs, McKeegan (1998) found that 73% of the UTAs (n = 15) rated their learning experience as “excellent,” and 91% of the students they worked with (n = 47) believed the UTAs provided “good” or “excellent” help (p. 13). When TAs are given a great deal of training and responsibility, they are still frequently instructed to contact the course instructor for guidance when needed (Mendenhall & Burr, 1983).

Undergraduates as Designers

Johnson (2014) reported on a university that, instead of hiring a team of instructional designers to implement a new learning management system (LMS), chose to hire 55 undergraduates part-time to serve as implementation assistants (IAs). These IAs helped train faculty on the new LMS, assisted faculty in migrating their courses, and even rebuilt courses in the new LMS.

Approximately 1,242 faculty and staff members received one-on-one training from IAs. These assistants logged nearly 11,000 phone calls and over 6,000 emails to accomplish their work, a number that could have been achieved only by a large group of employees. These numbers provide evidence that a large group of well-trained and qualified undergraduates can effectively perform many professional tasks. This approach may be more cost effective, as Johnson (2014) noted, “because we were able to hire as many students as we did, we were able to support more faculty members than we could have had we hired more [full-time instructional design] consultants” (p. 84).

This was the first professional job for many of the IAs, and they received extensive training once they were hired. Johnson maintained that “even though they are not trained pedagogues, student employees can be taught principles of effective course design and can teach these to faculty members, who will listen” (p. 87). Many faculty members asked these IAs questions about pedagogy and accepted suggestions from them about how to improve the design of their course.

Undergraduates as Assessors

Besides being effective designers, undergraduates can be taught to assess specific performances at a level similar to experts in relatively little time, as shown in preservice teacher education research. Prusak, Dye, Graham, and Graser (2010) examined preservice physical education students’ abilities to accurately and reliably code videos of teachers performing a skill. Students were trained on the competencies they would assess. After only 2 hours of training and three practice attempts, these preservice teachers were moderately reliable and highly accurate when compared to expert reviewers, which led to this conclusion: “It seems evident, from the results of this study that students can become capable analyzers” (p. 151).

While studies such as Johnson (2014) and Prusak et al. (2010) showed that preservice teachers can often perform well in design and assessment tasks, the research in this area is limited, and additional studies are needed. In particular, research needs to examine how undergraduates might help in successfully scaling up a potentially disruptive innovation, such as open badges, by serving simultaneously as assistants in the design and assessment process.

Methods

Research Context

This research was conducted at our university, which is a large undergraduate-focused private institution in the Intermountain West with a sizable College of Education focused on training and certifying K–12 teachers through a 4-year baccalaureate program. Within this 4-year program, we taught a course on technology integration for teachers, which was the context for this study.

To support our open badge initiative in the preservice technology courses, we followed a practice similar to that described by Johnson (2014), hiring undergraduates (an average of two or three per year) to assist in creating, assessing, and maintaining our badges. We invited these undergraduates to apply for the positions based on their previous successful performance in the course. Some worked as UTAs who helped assess submitted projects, while others were instructional design assistants (IDAs) who helped design and test new badges. Some undergraduates worked as both a UTA and an IDA simultaneously.

UTAs performed much of the grading required by our assessment model. To ensure quality grading and feedback, UTAs received group instruction and one-on-one mentoring. Course instructors periodically spot-checked grading done by UTAs and, if needed, provided additional mentoring or instruction. Our detailed badge rubrics, along with grading guides and other job aids, also supported the UTAs. By using a group of specially trained UTAs, we were able to grade far more submissions at a relatively low cost.

The challenges we faced in creating more badges included identifying the needs of the academic content areas and developing badge rubrics for content-specific technologies in areas where we lacked specialized disciplinary knowledge. To meet these challenges, we hired undergraduate teaching majors specializing in the subjects where our knowledge was insufficient. Working with instructors who were experienced in badge design, these IDAs were able to identify and create rubrics for discipline-specific and general-use technologies. IDAs also created student examples and other instructional materials.

IDAs’ Backgrounds

Because the IDAs are central to this story of how they assisted in the open badging project, we now provide a brief description of each IDA and their relevant background. None of them had a background in instructional design or open badges specifically; all were teaching majors who had previously completed the course that contained the open badges.

Steve had a double major: a primary major in social science teaching, with linguistics as a secondary major. He had worked previously at the university as a writing tutor, which gave him teaching experience as well as experience giving feedback to students.

Hollie also had a double major, with physical science teaching (including physics and geology courses) and chemistry. She had finished all the science classes for her majors and was beginning to take teaching classes when she took the blended section of EdTech 200. Before being hired as an IDA, she described her technology skills as a “standard college student . . . capable, but not well-versed in a variety of [technologies].”

Carter was majoring in social studies teaching while he was an IDA. Previously he had worked as a research assistant for a history education professor. In that job, he often focused on teaching methods and educational tactics, especially in social studies.

Dena was a biology education major who had finished all her course work except her student teaching when she became an IDA. She had done some substitute teaching. She had also worked at the university’s Museum of Life Sciences for 2 years, adding plants to a database, mounting specimens, and helping her professor collect data on two new species he had found. This background helped her create a badge for a leaf identification application (“app”). Dena did not have an advanced technology background but described herself as a typical user. For an undergraduate, Dena had a strong background in teaching theories including classroom application. She believed this knowledge helped her as an IDA.

Janelle was an English teaching major close to completing her degree. She had already taken most of her education classes, so she had classroom teaching experience by the time she became an IDA. Her English teaching courses had included instruction on creating rubrics, which was helpful to her. She also had strong editing skills, which she was able to use. Before being an IDA, she had worked for the statistics department for 2 years, first as a front desk secretary and later as an administrative assistant. She used basic office programs, as well as Adobe InDesign software. In her technology skills, Janelle felt comfortable helping others with programs she had used before, but she also felt confident that she easily could learn new technologies.

Liz was a physics teaching major who had already taken several education classes before becoming an IDA. She had also been a teaching assistant for several physics classes at the university. Liz had used Logger Pro software before she became an IDA (both in high school and in college), but due to a time lapse she did not remember everything about it. However, her early experiences with Logger Pro helped her develop ideas for what to include in the badge. Liz had “always felt like [she] was good with technology” and if she did not know how to do something she could figure it out.

Joy earned a bachelor’s degree in English. She then worked for a year as a “paraeducator for kids with mild to moderate disabilities in Grades K through 2” before returning to the university to get her teaching license. She had previously worked as a research assistant studying reader identity with tablets versus traditional books. As a part of getting her teaching certificate, she took EdTech 200, which was her first technology class. She enjoyed it and believed that she learned a lot. She had little technology experience before taking the class, other than using technology on her own. She did not describe herself as a “techie,” but felt she was tech-savvy because she had grown up during the “digital age.”

Research Questions

Employing IDAs and UTAs allowed us to scale our badge system effectively, while providing IDAs and UTAs with valuable experiences that could positively affect their careers. In this study, we sought to validate these expectations. While employing undergraduate preservice education UTAs to grade learner submissions is not unprecedented, the use of IDAs is rarer in research and practice. We thus chose to focus this examination on the effectiveness and experiences of the IDAs. Specifically, we sought answers to the following research questions:

  1. What experiences did IDAs have while creating these open badge materials?
  2. How effective were IDAs in creating quality, well-aligned content for open badge rubrics?
  3. How could this approach to having IDAs create open badges be improved?

Data Gathering and Processing

Because the research questions sought to investigate IDAs’ experiences as well as the effectiveness of their work, we used a convergent parallel, mixed methods research design (Creswell & Clark, 2017). In a convergent parallel design, two modes of data are collected and analyzed concurrently but separately and then compared in the interpretation phase.

In our study, to examine IDAs’ experiences (Research Question 1), semistructured interviews were conducted with seven IDAs. These interviews gathered information about what the interviewees did as IDAs, explored how they believed they benefited from this form of employment, and asked what changes could be made to improve their experience in these positions.

We considered each IDA as a separate case and wrote short vignettes describing their experiences. A cross-case thematic analysis was used to determine common themes in their experiences. Codes were based on interview questions and grouped into the following categories:

  • Technology skills (improved or unimproved).
  • Subject matter knowledge (improved or unimproved).
  • Perspective (changed or unchanged after being an IDA compared to being a student in EdTech 200).
  • Job duties (positive and negative).
  • Project contributions (positive and negative).
  • Ways subject matter expertise helped them perform their job.
  • Professional growth.
  • Attitudes regarding badges (positive and negative).

Constant comparison analysis techniques were used to allow additional categories to emerge (Glaser & Strauss, 1967). Two researchers coded all the interviews. They first coded the same interview separately and then met to compare categories and discuss discrepancies to improve trustworthiness of the categories. They coded a second interview separately and met again to discuss it to strengthen further the integrity of their coding, after which they coded the rest of the interviews. When they found passages that they did not know how to code, they worked together to determine what code should be used. Finally, member checking by sharing the interview transcripts and analysis with the participants was used to ensure the interviewees’ responses had been interpreted correctly.

The second source of data for our mixed methods study was quantitative. To determine the quality of the badge rubrics created by IDAs (Research Question 2), three external badge designers familiar with the badge system were asked to rate 11 of our badge rubrics. They were not informed that six had been created by instructors with experience designing badges and five had been created by IDAs. These external badge designers were scholars from outside our university who had several years of experience designing open badging systems and conducting research on open badges. One is a recent PhD graduate now working as an instructional designer, and another is a full professor and program chair for his academic program.

Rubrics were given a score of 1–4, with 4 being the highest, on the following criteria, which we developed for this study. The criteria of spelling/grammar and clarity rated the quality of the rubrics’ writing. The rigor/comprehensiveness criterion indicated how strong the reviewers thought the rubrics were at assessing the underlying skills. The demonstrable tasks criterion reflects our belief that badges should represent skills by collecting authentic evidence of performance (see Randall et al., 2013).

  • Spelling and grammar
  • Demonstrable tasks
  • Rigor/comprehensiveness
  • Clarity
  • Adoptability

The adoptability criterion was used as another way reviewers could indicate how strong they thought a rubric was, by whether they would adopt it themselves. This criterion was also relevant as these badges were developed as part of a consortium of professors who shared rubrics/badges with each other.

Seven of the rubrics reviewed were for large projects (6–8 hours), and four were for small projects (1–3 hours). The ratings were compared to determine whether scoring differences existed between the rubrics made by instructors and those prepared by the IDAs. These statistics were reported descriptively in order to make these comparisons.
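To make these comparisons concrete, the short Python sketch below computes the same descriptive statistics (mean, median, sample standard deviation, minimum, and maximum) from a set of three reviewer totals. The illustrative data are the reviewers’ totals for the two Logger Pro rubrics reported later in the Findings section; ratings for any other rubric could be substituted.

    import statistics

    # Three reviewers' total scores (out of 20) for the two Logger Pro
    # rubrics, as reported in the Findings section of this article.
    ratings = {
        "Logger Pro (Original, instructor-made)": [15, 15, 10],
        "Logger Pro (Updated, IDA-made)": [17, 19, 15],
    }

    for rubric, totals in ratings.items():
        print(
            f"{rubric}: mean={statistics.mean(totals):.2f}, "
            f"median={statistics.median(totals)}, "
            f"sd={statistics.stdev(totals):.2f}, "
            f"min={min(totals)}, max={max(totals)}"
        )

Note that statistics.stdev computes the sample standard deviation, which reproduces the values reported in Table 1 (e.g., 2.89 for the original Logger Pro rubric and 2.00 for the updated one).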

The third research question, which focused on how this methodology can be improved, was answered using insights from analyzing the data for the first two questions.

Findings

Wolcott (1994) explained that qualitative research should provide description, analysis, and interpretation. This discussion of findings begins by describing the IDAs and their experiences, to provide context for the analysis and interpretation that will follow.

IDAs’ Experiences Creating Open Badges

Our IDAs were all former EdTech 200 students who had achieved high grades in the class and had extensive knowledge in their academic field compared to most undergraduates. Four of them had completed (or nearly completed) the content courses for their major. At least four were former research assistants, two had double majors, and several had teaching experience from having worked as UTAs or in other educational positions. Most thought their technology skills were about average, but generally felt they could learn new technologies fairly easily.

Formats. IDAs worked in two different formats. Five of them (Hollie, Dena, Janelle, Liz, and Joy) worked within a collaborative format with each other and mentors, while Steve and Carter worked independently. (Names were changed except for Janelle, who requested that her real name be used.) Both formats are described before IDA experiences are presented, as the difference appears to have affected their overall experience.

Collaborative format. While each IDA had individual assignments to design specific badges, Hollie believed that creating badges in the collaborative format was “very much a team thing.” For instance, IDAs reviewed rubrics made by other IDAs, going through each step like a student in the class. The reviewer then collaborated with the IDA authoring the rubric to resolve any problems before the rubric went to the instructor for final review. This process also produced a sample of a completed project that we could put online along with the badge rubric.

IDAs in the collaborative format met each week as a group with the instructor leading the project to report on their progress, receive new assignments, and discuss any challenges they faced. Hollie said these meetings were also a good time to arrive at solutions to particularly challenging problems encountered during the peer-review process. Janelle summed up the collaborative format:

We worked well together, and it was nice because we could do kind of our own work, and we only had to get together maybe once a week for an hour and just kind of rocket through each of the ones that we had done. And then we would send them off to [the instructor] to do . . . the final approval. So that worked well. If they’re not doing that now, then that would definitely be a helpful thing for them.

Independent format. Steve and Carter, the two IDAs who worked independently, wrote rubrics without peer review and collaboration with other IDAs. Steve said, “I never collaborated with another badge designer on any badge that I was working on. I would create the badge and then get feedback [from an instructor] and maybe tweak it a little bit before sending it out to the students.”

Training and support. While the training and support given to the IDAs varied, most IDAs in the collaborative format were mentored by the instructor leading the project while creating their first one or two badges. In addition to the weekly group meeting, they had job aids to help them, such as a guide explaining how to make a badge and a badge creation template. Steve and Carter had less interaction with the instructors and received less training and support.

Mentoring and collaboration. Janelle was one of our first IDAs. Because we had not yet developed any training materials, she received extensive one-on-one mentoring. Unlike most of the later IDAs, Janelle had previously worked in the same office as the instructor leading the project and was able to get immediate support if she needed it. She recalled,

It was helpful that I had a good relationship with [the instructors]. I knew that [making badges] was a new thing, and . . . I knew that I could go to them at any point and say, “I don’t understand what you want here.”

Joy said for the first couple of rubrics she made, the mentoring instructor would make suggestions and provide additional instruction, but after she became more experienced she required less help. She felt she had support throughout her time as an IDA, as instructors and other IDAs were “always really quick to answer questions that I had.”

Although Carter was not in the collaborative format, he too believed he received adequate training on the badge creation process. He said instructors told him, “If you need help, you can come in and talk [or] sit down with [us].” Steve said he was “tossed in there,” although he did say he could always go to instructors if he had questions. Steve was an IDA when we were examining new ways of designing badges. Consequently, he rightly felt he had needed “more clear direction” for what a badge rubric should and should not include.

Prior experience with our badge rubrics. Because all our IDAs had successfully completed the course, they already had some familiarity with the badge rubrics. Liz believed she did not receive much training; however, because she had been “shown some examples of some other badges, and [she] had taken the class . . . [she] had an idea of what [the instructor was] looking for.”

Carter said that “the biggest thing that helped” him was the fact that he proofread all of the instructors’ badge rubrics before he wrote a rubric himself. This step allowed him to “become familiar with how the rubrics worked” and the “general structure of the rubrics,” which “guided [his] process” as he created rubrics. Carter said training new IDAs using this structure would be “very effective.”

Opportunities to grade projects. Liz said that having the opportunity to grade projects might have improved her efforts at writing badge rubrics:

I think I would have liked to have the experience of grading . . . so that I could experience that frustration and be able to see where some of the pitfalls might be when I was writing my rubrics.

Her comment was supported when contrasted with Janelle’s experience of having opportunities to grade numerous projects, some that even used rubrics she had created; Janelle affirmed that grading had helped her make better rubrics.

Benefits to IDAs

Their experiences working as IDAs benefited these undergraduate assistants far beyond the hourly pay they received, strengthening both their technology skills and their professional experience. Both Joy and Janelle enjoyed becoming the de facto technology gurus in the schools of their major discipline because of their ability to learn and use technology effectively.

Attitudes toward technology. Joy said the greatest benefit she received from working as an IDA and a UTA was “having a more adventurous attitude about using technology.” This sentiment was shared by most of the IDAs. At least six of the seven IDAs expressed feeling more confidence learning and using technology as a result of their IDA experience. Hollie said being an IDA gave her “the majority of [her confidence] . . . 90 percent of it.”

Carter developed the ability to “approach something that’s totally unfamiliar and [figure] out how to use it to the point where [he knew] it well enough to help somebody else be able to use it.” Liz said she was surprised by “how much [she] could just figure out on [her] own” by exploring a technology tool.

Increase in technology skills. Several IDAs said that being an IDA exposed them to many technologies they would not have been aware of otherwise. For instance, Dena used biology apps she discovered as an IDA to start an after-school biology club, while Carter became familiar with most of the 50 technologies available as badges. Additionally, Liz learned much more about Logger Pro while making badges for it, even though she had already used it many times before as a student. Carter said he gained “the skills and disposition and aptitude and the tendency to just try to stay on top of new innovations.”

Skills outside technology. Besides improved confidence and skills with technology, former IDAs reported other strengthened skills and forms of growth. Liz, Carter, and Steve reported that writing badge rubrics helped them approach learning from a student’s perspective, which allowed them to create better learning materials in areas besides technology. Liz said she learned how to relate physics to newcomers who do not have the same background and understanding of terminology that she has.

Janelle and Steve mentioned that their time as IDAs helped them write better rubrics. Janelle had experience creating rubrics and other teaching materials for education classes but had never seen someone do a project she developed. As an IDA she was able to see how students interpreted what she had written and was able to create better rubrics as a result. Carter and Joy specifically mentioned the desire to apply a mastery approach in their teaching, like that used with the badges in EdTech 200.

IDAs’ Reflections and Perspectives

Positive contributions and suggestions for improving IDA work. All of the IDAs said that being able to help others — by making badges or by working with students as a UTA — was among their favorite experiences as an IDA. All the IDAs believed the badges they designed were well made and would benefit others; several considered this contribution the best part of their experience. Liz said, “Knowing that I was able to help another teacher down the line … was really the most rewarding part.” Similarly, Hollie enjoyed watching people use her badges and recognizing that the badges would help other teachers:

I feel like I did do something that contributed to … teachers learning how to better use technology in their classroom … and that was the coolest thing for me. … I feel like I contributed to … the education of the world.

Suggestions for improvement were relatively few. Hollie wished badges could have moved more quickly from start to finish. Most suggestions related to specific rather than general needs. Dena did not particularly enjoy creating her first badge because she did not believe the technology would be useful to biology teachers. Steve mentioned that, as one of the workers outside the collaborative group, he needed more direction about how to create a badge.

Positive contributions and suggestions for improving UTA roles. Our original intent had not been to discuss the UTA role in which some IDAs also served, but we received so much UTA feedback that we chose to include a few items. Readers contemplating using IDAs may find this feedback helpful when deciding whether to combine the two roles, as some of our participants chose to do.

Carter said the opportunity to provide feedback on projects and mentor others was very rewarding to him. For Janelle and Joy, the most rewarding aspect of their time as IDA/UTAs was working directly with students. Joy loved teaching workshops about various technologies, and Janelle particularly liked working with students who needed extra help. Hollie enjoyed grading student projects because it allowed her to see everyone’s “different interpretations” of the requirements.

Those also serving as UTAs as well as IDAs were able to identify more difficulties for the former role. At one point, we experimented with allowing anyone, not just EdTech 200 students, to submit a project. Janelle, who was tasked with grading the submissions from people outside the course, said this activity was her “least favorite part” because it was “less rewarding or less interesting” to her. Many outside submissions failed to address several criteria in the rubrics, which unnecessarily created more work for her.

The other UTAs mentioned challenges that are common to all TAs and teachers. Joy, who graded only submissions from EdTech 200 students, said grading was not “the most fun thing” to do because “it tends to be a little bit tedious,” but recognized “that’s just the nature of grading.” Hollie recalled that the hardest thing for her was trying to help students in class with a technology not completely familiar to her.

IDAs’ perspectives on badges. Because the IDAs experienced badges first as students and then as badge designers, we were interested in their perspectives on open badges. Thus, we briefly describe some of their thoughts on the badging movement.

While some IDAs, including Liz and Dena, liked the concept of badges when they were students and continued to like them as they worked on creating them, others were not enthusiastic about badges until becoming IDAs. Both Hollie and Joy reported that as students they cared only about what was required to complete assignments and earn their grade. This viewpoint changed once they became IDAs. Hollie became enthusiastic when she realized our badge system focused not on the products students create, but on the skills they develop in order to produce them. Hollie believed many students feel as she once did and hoped they, too, would gain that realization.

Several IDAs liked that the badge system provides students with options for what they want to learn. All of them continued to remain enthusiastic about badges, although some expressed disappointment that the idea had not spread more widely in the field of education. Three former IDAs mentioned they wished they could earn badges for professional development credit and relicensing credit.

Ironically, none of the IDAs have displayed their badges online or otherwise shared them with prospective employers, even though most have now gone through the teacher hiring process. Janelle and Carter said they believed principals would not know what badges were and thus showing them would have little meaning.

Liz and Joy, however, believed badges would be useful in showcasing their skills and could increase their chance of receiving a job offer. Unfortunately, neither Liz nor Joy displayed their badges digitally because, as all the IDAs said, they could not find an easy way to do so. Joy listed the technologies she had badges for on her résumé but did not provide links to the badges themselves. It seems everyone would like easier ways to display badges. Liz hoped someday there might be an easy way to display them on LinkedIn.

Quality of Materials Created by IDAs

Overall, the IDAs appear to have done as well as or even better than our instructors, who were experienced badge designers. Speaking of all the rubrics she reviewed (both those by instructors and those by IDAs), one reviewer noted that most of the problems she found were “grammatical … I thought the content for the badges in general was high quality.” There were, however, some isolated challenges. In the following section we discuss one rubric that was an outlier before discussing overall results.

Logger Pro. The greatest discrepancy between the work of the IDAs and the instructors was with the rubric for Logger Pro, a data analysis program used in several fields of science. While Logger Pro is used frequently in those fields, the EdTech 200 instructors were unfamiliar with this type of software and with the ways science teachers use it. The instructors produced a rubric that they acknowledged was not rigorous. To address this problem, we hired Liz, a teaching major with a strong science background, who, though mentored by an instructor, was ultimately the author of the updated Logger Pro rubric.

Every reviewer who rated both the original Logger Pro rubric and the updated version gave a higher overall score to the rubric created by Liz (17/20, 19/20, and 15/20) than to the one created by the instructors (15/20, 15/20, and 10/20). All three reviewers also rated the instructors’ Logger Pro rubric lower than any other rubric. This result provides evidence that mentored IDAs with greater familiarity with the subject matter can produce better rubrics for content-specific technologies than experienced instructors. Because this is a single case, more studies are needed to determine whether this finding can be generalized.

Median ratings. Because the ratings produced data that were not normally distributed, we compared the median scores of the large rubrics produced by the instructors with the median scores of the large rubrics made by IDAs, and followed the same procedure for the small rubrics. As Table 1 shows, both IDAs and instructors produced one large rubric with a median score of 20/20.

Instructors’ scores were slightly higher than IDAs’ when we compared the other large rubrics, with the notable exception of the original and updated Logger Pro rubrics. When comparing the small rubrics, we found that the IDAs and instructors received the same median scores for each of their rubrics, suggesting that the quality was essentially the same, with the IDAs performing slightly better when comparing standard deviations.

Table 1
Summary of Ratings for Each Rubric

Instructors or IDAs | Rubric Size | Badge Rubric           | Mean Total Score | Median Total Score | SD   | Min | Max
Instructors         | Large       | Google Earth           | 19.33            | 19                 | 0.58 | 19  | 20
Instructors         | Large       | Google Sites           | 19.33            | 19                 | 0.58 | 19  | 20
Instructors         | Large       | iMovie                 | 19.33            | 20                 | 1.15 | 18  | 20
Instructors         | Large       | Logger Pro (Original)  | 13.33            | 15                 | 2.89 | 10  | 15
IDAs                | Large       | Blogger                | 18.00            | 18                 | 1.00 | 17  | 19
IDAs                | Large       | Logger Pro (Updated)   | 17.00            | 17                 | 2.00 | 15  | 19
IDAs                | Large       | Wix                    | 19.33            | 20                 | 1.15 | 18  | 20
Instructors         | Small       | Canva & ThingLink      | 18.67            | 19                 | 1.53 | 17  | 20
Instructors         | Small       | Ubersense              | 19.33            | 19                 | 0.58 | 19  | 20
IDAs                | Small       | Biology Classification | 19.00            | 19                 | 0.00 | 19  | 19
IDAs                | Small       | BioDigital Human       | 19.33            | 19                 | 0.58 | 19  | 20

Note. Total scores are out of a possible 20. SD = standard deviation.

Comparing the median ratings of IDA-made rubrics to instructor-made rubrics along each criterion showed that both groups received the highest possible median rating (i.e., 4) for each criterion (Table 2). Looking specifically at rigor/comprehensiveness, not only did the IDAs’ median score match the instructors’, but the standard deviation did as well. Despite the strong ceiling effect requiring us to be cautious in our interpretations, this result may suggest that rubrics made by IDAs were similar in rigor to those made by instructors. Since maintaining rigor was one of our greatest concerns when employing IDAs, this finding was encouraging.

Table 2
Summary of Ratings for Each Criterion

Instructors or IDAs | Criterion                | Mean Score | Median Score | SD   | Min | Max
Instructors         | Spelling and grammar     | 3.53       | 4            | 0.83 | 1   | 4
Instructors         | Demonstrable tasks       | 3.87       | 4            | 0.35 | 3   | 4
Instructors         | Rigor/comprehensiveness  | 3.93       | 4            | 0.26 | 3   | 4
Instructors         | Clarity                  | 3.87       | 4            | 0.35 | 3   | 4
Instructors         | Adoptability             | 4.00       | 4            | 0.00 | 4   | 4
IDAs                | Spelling and grammar     | 3.47       | 4            | 0.64 | 2   | 4
IDAs                | Demonstrable tasks       | 3.60       | 4            | 0.63 | 2   | 4
IDAs                | Rigor/comprehensiveness  | 3.93       | 4            | 0.26 | 3   | 4
IDAs                | Clarity                  | 3.67       | 4            | 0.62 | 2   | 4
IDAs                | Adoptability             | 3.93       | 4            | 0.26 | 3   | 4

Note. Criterion scores range from 1 (lowest) to 4 (highest). SD = standard deviation.

Discussion and Recommendations

By employing IDAs, we were able to greatly increase the number of badges in our badge system while maintaining quality, because the badge rubrics IDAs created were on par with those created by our instructors as rated by experienced reviewers. Participation of IDAs also enabled us to create badges for subject-matter-specific technologies, which we had failed to do effectively in the past. IDAs also benefited from this arrangement, as they gained valuable experience and skills that have helped them in their careers and other pursuits. Employing IDAs as we have could prove beneficial to other organizations that are seeking to increase the scale of a project or initiative. We offer the following suggestions to those interested in this practice.

Selecting Qualified IDAs

Choosing well-qualified undergraduates to serve as IDAs likely helped us succeed. As previously mentioned, all the IDAs had taken and performed well in the course. Additionally, all demonstrated expertise in their academic major subject matter above what would be expected of a typical undergraduate. Many also had prior experience as a teaching assistant or research assistant that had further prepared them for serving as an IDA.

Mentoring

Using a mentoring process to both train and support IDAs appears to have been effective. Both Janelle and Joy specifically mentioned that the mentoring they received helped them succeed. Building a relationship of trust by telling IDAs what they were doing well and offering suggestions for improvement was an important aspect of the mentoring process.

The willingness of mentors to make themselves available to sit and work with an IDA if needed was also important. Some IDAs chose to work in the same lab as one of our instructors and knew they could approach him at any time with questions. Our instructors were quick to respond to emails when IDAs needed help, which Joy specifically mentioned was “key” to performing her job successfully. Even IDAs who seemed to have received less mentoring, such as Carter, still felt they could succeed because they knew help was readily available. The weekly meetings during which IDAs met together with the instructors and talked about their projects continued the mentoring relationship.

Ownership

Another aspect we consider important was that IDAs felt ownership of their badges. They were tasked with a full badge project, which gave more meaning to their work. IDAs knew the badges they were creating would be seen both by undergraduates taking EdTech 200 and by people outside the university. Thus, they wanted to do their best. Hollie said that after launching badges she had made, she liked seeing students working on them. “I was like a little proud mom.”

Peer Collaboration

While IDAs had ownership over their project, they also collaborated by reviewing each other’s rubrics and providing feedback. Joy described it this way:

It was definitely a team effort. I would learn the technology, create the rubric, and then there would be a couple of other people who would vet it and make sure that it was a good rubric. … They’d suggest revisions and I’d make them. I definitely felt like there was a team effort involved. … I had a lot of support because my teammates were always really quick to answer questions that I had.

This process improved the quality of the rubrics and reduced the time the instructors had to spend reviewing rubrics and providing feedback. The process also enabled us to support a larger number of IDAs and helped extend the mentoring relationship to IDAs’ peers.

Job Aids

Although the instructors were available when IDAs needed them, the how-to guide and other job aids gave IDAs more autonomy and served as scaffolding. The how-to guide was especially helpful when IDAs were learning new parts of the process. While the instructors wrote much of the guide, particularly the section on the philosophy behind our badge system, anyone on the team could add to the document. Eventually, nearly every aspect of our process was documented there.

We used a Google Sheets spreadsheet as the badge creation template in which IDAs drafted their rubrics, which streamlined reviewing drafts and providing feedback. Because the template listed each step in our design process, it also let us track when each step was completed, making it particularly helpful for task and project management. Before we created this template, it was not uncommon for IDAs and instructors to overlook steps in the process.

Dual Role as IDA and TA

As mentioned previously, some undergraduates were both IDAs and UTAs, while others were IDAs only. Those who functioned only as IDAs did so either because they could not take on the additional role or did not wish to. Those who filled both roles, however, seemed to benefit even more from the experience. Hollie expressed how rewarding it was to see the badges she had designed being used by the students she worked with in her UTA role, and she also said being a UTA showed her how she could improve her badge rubrics, which likely made her a better designer. Similarly, Liz, who served only as an IDA, wished in hindsight that she had had the chance to grade a project, as she believed that experience would have improved her rubrics.

Janelle said her favorite part of being an IDA/UTA was teaching workshops on different technologies to students in EdTech 200. Joy also said she greatly enjoyed teaching workshops. Allowing IDAs to serve as both an IDA and a UTA might make the experience more rewarding and could improve their design skills. We do not suggest requiring IDAs to also serve as UTAs, because some of our most qualified IDAs could not have joined our team had that requirement been in place, and the subject matter knowledge they brought to the team was critical to our success. However, giving IDAs the option of also acting as UTAs bears serious consideration.

Conclusion

In this study, we examined how employing undergraduates as instructional design assistants could allow us to scale our open badge system by adding hours to the project at a relatively low cost. We found that with the help of IDAs we were able to greatly increase the number of badges in our system without compromising the quality of the badge rubrics; in fact, IDA- and instructor-created badge rubrics were generally of the same quality.

We also found that employing IDAs not only benefited our project but also provided many benefits to the IDAs themselves, including improved attitudes toward technology, stronger technology skills, and professional growth that has helped them in their careers. We further identified several practices that allowed our IDAs to be successful, including making mentoring freely available, giving them ownership of projects, encouraging peer collaboration, and providing them with job aids.

Enthusiasm is increasing for open badges in education as a credential that can support a “new culture of learning” (Grant, 2016, p. 3), one that is more open and recognizes all varieties of learning and accomplishment. However, a major concern for many institutions considering open badges is how to handle the workload of creating and managing a badging system. This concern is particularly acute in teacher education, with its scant resources and multiple complex competencies to be mastered (Koehler et al., 2013).

The strategy explained in this article provides one solution that warrants exploring: training undergraduate teaching majors to use their emerging teaching and assessment skills to support a badging initiative. Following this strategy could make badges feasible for more institutions at an affordable cost. In addition, we found substantial positive benefits for the preservice IDAs, who increased their content knowledge and teaching skills through the process.

Although open badges are often employed to engage learners (e.g., Harvey, 2017), we found they also increased the engagement of the UTAs, providing them with a stronger learning experience of their own. While additional research is needed, IDAs appear to have the potential to provide a win–win–win situation for student teachers, learners, and institutions. As Rosenberger (2018) found, effective badging programs are often designed in nontraditional ways to meet local needs. When exploring how to make badging successful, we found it helps to consider all available resources and to think creatively about how to apply them.

Limitations and Future Research

While we are confident in the ability of well-qualified undergraduates to serve as IDAs, this study was limited to seven IDAs in one program. The sample was not random or diverse: All our IDAs were preservice teaching majors, most of whom had already completed the bulk of their pedagogy coursework. It is unclear whether IDAs from nonteaching majors would be as successful in the role or would benefit professionally from it. A study looking specifically at IDAs from a wide range of nonteaching majors could answer these questions, and more studies are needed to support and extend our conclusions. Additionally, future studies could examine how the IDA model and open pedagogy (Wiley, 2013) could be mutually reinforcing.

This study was also limited as a single case study of an open badging system at one university, so replication in other settings is needed for more generalizable findings. In addition, the expert ratings of the rubrics showed a ceiling effect: Scores were based on a 1–4 rating scale, and high marks were given to nearly all of the rubrics. The limited score variability makes strong comparison claims difficult. Nevertheless, while further research is needed for more generalizable claims, the insights from this study could still encourage further development of the IDA strategy.

References

Ahearn, A. (2018, November 28). Stop asking about completion rates: Better questions to ask about MOOCs in 2019. EdSurge. Retrieved from https://www.edsurge.com/news/2018-11-28-stop-asking-about-completion-rates-better-questions-to-ask-about-moocs-in-2019

Araújo, I., Santos, C., Pedro, L., & Batista, J. (2017). Digital badges on education: Past, present and future. In A. Skarzauskiene & N. Gudeliene (Eds.), Proceedings of the 4th European Conference on Social Media (pp. 27–35). Red Hook, NY: Curran Associates.

Boyle, P., & Boice, B. (1998). Systematic mentoring for new faculty teachers and graduate teaching assistants. Innovative Higher Education, 22, 157–179. https://doi.org/10.1023/A:1025183225886

Breslow, L., Pritchard, D. E., DeBoer, J., Stump, G. S., Ho, A. D., & Seaton, D. T. (2013). Studying learning in the worldwide classroom: Research into edX’s first MOOC. Research and Practice in Assessment, 8, 13–25.

Carroll, J. G. (1980). Effects of training programs for university teaching assistants: A review of empirical research. The Journal of Higher Education, 51, 167–183. https://doi.org/10.1080/00221546.1980.11780043

Chafkin, M. (2013, November 14). Udacity’s Sebastian Thrun, godfather of free online education, changes course. Fast Company. Retrieved from http://www.fastcompany.com/3021473/udacity-sebastian-thrun-uphill-climb

Cheng, Z., Watson, S. L., & Newby, T. J. (2018). Goal setting and open digital badges in higher education. TechTrends, 62,190–196. https://doi.org/10.1007/s11528-018-0249-x

Chou, C. C., & He, S.-J. (2017). The effectiveness of digital badges on student online contributions. Journal of Educational Computing Research, 54, 1092–1116.

Coleman, C. (2014, September 18). 5 reasons why great edtech products don’t succeed. EdSurge. Retrieved from https://www.edsurge.com/news/2014-09-18-5-reasons-why-great-edtech-products-don-t-succeed

Creswell, J. W., & Clark, V. L. P. (2017). Designing and conducting mixed methods research (3rd ed.). Thousand Oaks, CA: Sage.

Cross, S., Whitelock, D., & Galley, R. (2014). The use, role and reception of open badges as a method for formative and summative reward in two massive open online courses. International Journal of e-Assessment, 1(1), 1–16.

Cuban, L. (2001). Oversold and underused: Computers in the classroom. Cambridge, MA: Harvard University Press.

Devedžić, V., & Jovanović, J. (2015). Developing open badges: A comprehensive approach. Educational Technology Research and Development, 63, 603–620. https://doi.org/10.1007/s11423-015-9388-3

Diamond, J., & Gonzalez, P. C. (2016). Digital badges for professional development: Teachers’ perceptions of the value of a new credentialing currency. In D. Ifenthaler, N. Bellin-Mularski, & D.-K. Mah (Eds.), Foundation of digital badges and micro-credentials (pp. 391–409). Switzerland: Springer International Publishing. https://doi.org/10.1007/978-3-319-15425-1_21

Dickson, P. E. (2011, March). Using undergraduate teaching assistants in a small college environment. In Proceedings of the 42nd ACM Technical Symposium on Computer Science Education (pp. 75–80). New York, NY: ACM.  https://doi.org/10.1145/1953163.1953187

Digital Promise. (n.d.). Educator micro-credentials. Retrieved from https://digitalpromise.org/initiative/educator-micro-credentials/

Ebner, M., Lorenz, A., Lackner, E., Kopp, M., Kumar, S., Schön, S., & Wittke, A. (2017). How OER enhances MOOCs — A perspective from German-speaking Europe. In M. Jemni, Kinshuk, & M. K. Khribi (Eds.), Open education: From OERs to MOOCs (pp. 205–220). Berlin, Germany: Springer. https://doi.org/10.1007/978-3-662-52925-6_11

Fingerson, L., & Culley, A. B. (2001). Collaborators in teaching and learning: Undergraduate teaching assistants in the classroom. Teaching Sociology, 29, 299–315. https://doi.org/10.2307/1319189

Fontichiaro, K., & Elkordy, A. (2016). Digital badges: Purposeful design in professional learning outcomes for K–12 educators. In D. Ifenthaler, N. Bellin-Mularski, & D.-K. Mah (Eds.), Foundation of digital badges and micro-credentials (pp. 287–305). Switzerland: Springer International Publishing. https://doi.org/10.1007/978-3-319-15425-1_16

French, D., & Russell, C. (2002). Do graduate teaching assistants benefit from teaching inquiry-based laboratories? AIBS Bulletin, 52, 1036–1041.

Gamrat, C., Zimmerman, H. T., Dudek, J., & Peck, K. (2014). Personalized workplace learning: An exploratory study on digital badging within a teacher professional development program. British Journal of Educational Technology, 45, 1136–1148. https://doi.org/10.1111/bjet.12200

Glaser, B. G., & Strauss, A. L. (1967). The discovery of grounded theory: Strategies for qualitative research. Chicago, IL: AldineTransaction.

Grant, S. L. (2016). History and context of open digital badges. In L. Y. Muilenburg & Z. L. Berge (Eds.), Digital badges in education: Trends, issues, and cases (pp. 3–11). New York, NY: Routledge.

Harvey, F. (2017). Using open badges to support student engagement and evidence based practice. Journal of Educational Innovation, Partnership and Change, 3(1), 234–242.

Hensiek, S., DeKorver, B. K., Harward, C. J., Fish, J., O’Shea, K., & Towns, M. (2017). Digital badges in science: A novel approach to the assessment of student learning. Journal of College Science Teaching, 46(3), 28–33.

Hickey, D. T., & Willis, J. E., III. (2017, May 1). Where open badges appear to work better: Findings from the Design Principles Documentation project. Retrieved from http://www.badgenumerique.com/wp-content/uploads/2017/08/DPD-Project-Final-Report-Dan-Hickey-Willis-May-2017.pdf

Hill, P. (2015, April 22). ASU, edX and the Black Knight: MOOCs are not dead yet. E-Literate. Retrieved from https://mfeldstein.com/asu-edx-and-the-black-knight-moocs-are-not-dead-yet/

Hodges, C., Lowenthal, P., & Grant, M. (2016). Teacher professional development in the digital age: Design considerations for MOOCs for teachers. In G. Chamblee & L. Langub (Eds.), Proceedings of SITE 2016: Society for Information Technology & Teacher Education International Conference (pp. 2075–2081). Chesapeake, VA: Association for the Advancement of Computing in Education.

Hogan, T. P., Norcross, J. C., Cannon, J. T., & Karpiak, C. P. (2007). Working with and training undergraduates as teaching assistants. Teaching of Psychology, 34, 187–190. https://doi.org/10.1080/00986280701498608

Horn, M., & Christensen, C. (2013, February 20). Beyond the buzz, where are MOOCs really going? Wired. Retrieved from https://www.wired.com/2013/02/beyond-the-mooc-buzz-where-are-they-going-really/

Hrastinski, S., Cleveland-Innes, M., & Stenbom, S. (2018). Tutoring online tutors: Using digital badges to encourage the development of online tutoring skills. British Journal of Educational Technology, 49, 127–136. https://doi.org/10.1111/bjet.12525

Hunsaker, E., & West, R. E. (in press). Designing computational thinking and coding badges for early childhood educators. TechTrends. Advance online publication. https://doi.org/10.1007/s11528-019-00420-3

Jennings, J., & Roome, B. (2017, August 31). How digital badges are shaking up teacher PD. eSchool News. Retrieved from https://www.eschoolnews.com/2017/08/31/digital-badges-shaking-teacher-pd/

Jobe, W., Östlund, C., & Svensson, L. (2014). MOOCs for professional teacher development. In M. Searson & M. Ochoa (Eds.), Proceedings of SITE 2014: Society for Information Technology & Teacher Education International Conference (pp. 1580–1586). Chesapeake, VA: Association for the Advancement of Computing in Education.

Johnson, C. A. (2014). Holding hands and drying tears: Effectiveness of student employees in promoting a successful LMS implementation (Doctoral dissertation). Available from ProQuest Dissertations and Theses database. (UMI No. 3680936)

Jones, W. M., Hope, S., & Adams, B. (2018). Teachers’ perceptions of digital badges as recognition of professional development. British Journal of Educational Technology, 49, 427–438. https://doi.org/10.1111/bjet.12557

Jordan, K. (2014). Initial trends in enrollment and completion of massive open online courses. The International Review of Research in Open and Distance Learning, 15(1), 133–160. https://doi.org/10.19173/irrodl.v15i1.1651

Koehler, M. J., Mishra, P., & Cain, W. (2013). What is technological pedagogical content knowledge (TPACK)? Journal of Education, 193(3), 13–19. https://doi.org/10.1177/002205741319300303

Kolowich, S. (2013, July 19). San Jose State U. puts MOOC project with Udacity on hold. The Chronicle of Higher Education. Retrieved from https://www.chronicle.com/article/San-Jose-State-U-Puts-MOOC/140459/

Luft, J. A., Kurdziel, J. P., Roehrig, G. H., & Turner, J. (2004). Growing a garden without water: Graduate teaching assistants in introductory science laboratories at a doctoral/research university. Journal of Research in Science Teaching, 41, 211–233. https://doi.org/10.1002/tea.20004

Matt, S., & Fernandez, L. (2013, April 23). Before MOOCs, ‘colleges of the air.’ The Chronicle of Higher Education. Retrieved from https://www.chronicle.com/blogs/conversation/2013/04/23/before-moocs-colleges-of-the-air/

McKeegan, P. (1998). Using undergraduate teaching assistants in a research methodology course. Teaching of Psychology, 25, 11–14. https://doi.org/10.1207/s15328023top2501_4

Mendenhall, M., & Burr, W. R. (1983). Enlarging the role of the undergraduate teaching assistant. Teaching of Psychology, 10, 184–185. https://doi.org/10.1207/s15328023top1003_27

Moore, M. G. (2013). Independent learning, MOOCs, and the open badges infrastructure. American Journal of Distance Education, 27, 75–76. https://doi.org/10.1080/08923647.2013.786935

Mozilla Open Badges. (n.d.). In Wikipedia. Retrieved April 22, 2019, from https://en.wikipedia.org/wiki/Mozilla_Open_Badges

Naidu, S. (2013). Transforming MOOCs and MOORFAPs into MOOLOs. Distance Education, 34, 253–255. https://doi.org/10.1080/01587919.2013.842524

Niehaus, E., Platz, M., Herselman, M., & Botha, A. (2017). Using digital badges in South Africa informing the validation of a multi-channel open badge system at a German university. In Proceedings of the 2017 IST-Africa Week Conference (pp. 1–12). Danvers, MA: IEEE. https://doi.org/10.23919/ISTAFRICA.2017.8101977

Portmess, L. (2013). Mobile knowledge, karma points, and digital peers: The tacit epistemology of linguistic representation of MOOCs. Canadian Journal of Learning and Technology, 39(2), 1–8. https://doi.org/10.21432/T23S30

Prensky, M. (2013). My [2013] take on MOOCs. Retrieved from http://marcprensky.com/wp-content/uploads/2013/05/Prensky-My_2013_take_on_MOOCs-EDTECH-July-Aug-2013-FINAL.pdf

Prusak, K., Dye, B., Graham, C., & Graser, S. (2010). Reliability of pre-service physical education teachers’ coding of teaching videos using Studiocode® analysis software. Journal of Technology and Teacher Education, 18, 131–159.

Randall, D. L., Harrison, J. B., & West, R. E. (2013). Giving credit where credit is due: Designing Open Badges for a technology integration course. TechTrends, 57, 88–95. https://doi.org/10.1007/s11528-013-0706-5

Rosenberger, K. (2018). Designing digital badging programs: Findings from an interview-based study with instructional designers. TechTrends. Advance online publication. https://doi.org/10.1007/s11528-018-0349-7

Selingo, J. J. (2013). College (un)bound: The future of higher education and what it means for students. Boston, MA: New Harvest.

Siemens, G. (2014, January 31). The attack on our higher education system — And why we should welcome it. TED. Retrieved from https://ideas.ted.com/the-attack-on-our-higher-education-system-and-why-we-should-welcome-it/

Skiba, D. J. (2013). MOOCs and the future of nursing. Nursing Education Perspectives, 34, 202–204.

Weidert, J. M., Wendorf, A. R., Gurung, R. A. R., & Filz, T. (2012). A survey of graduate and undergraduate teaching assistants. College Teaching, 60, 95–103. https://doi.org/10.1080/87567555.2011.637250

West, R. E., Newby, T., Cheng, Z., & Clements, K. (in press). Acknowledging all learning: Flexible, micro, and open credentials. In M. J. Bishop, E. Boling, J. Elen, & V. Svihla (Eds.), Handbook of research on educational communications technology (5th ed.). New York, NY: Springer.

West, R. E., & Randall, D. L. (2016). The case for rigor in open badges. In L. Muilenburg & Z. Berge (Eds.), Digital badges in education: Trends, issues, and cases (pp. 21–29). New York, NY: Routledge.

Wiley, D. (2013, October 21). What is open pedagogy? [Web log post]. Retrieved from https://opencontent.org/blog/archives/2975

Wolcott, H. F. (1994). Transforming qualitative data: Description, analysis, and interpretation. Thousand Oaks, CA: Sage.

Young, D., West, R. E., & Nylin, T. A. (in press). Value of open badge microcredentials to employees, customers, and the organization: A case study. International Review of Research in Open and Distributed Learning.
