Haughton, N., & Keil, V. (2013). Technology-supported assessment systems: A comparison of accredited and unaccredited programs. Contemporary Issues in Technology and Teacher Education, 13(4). https://citejournal.org/volume-13/issue-4-13/current-practice/technology-supported-assessment-systems-a-comparison-of-accredited-and-unaccredited-programs

Technology-Supported Assessment Systems: A Comparison of Accredited and Unaccredited Programs

by Noela Haughton, University of Toledo; & Virginia Keil, University of Toledo

Abstract

The debate surrounding teacher quality often fails to differentiate effectively between teacher-preparation providers. This failure also extends to distinguishing teachers prepared in traditional campus-based accredited programs from those prepared in unaccredited campus-based programs. This paper compares the assessment infrastructure and expenditure levels of accredited and unaccredited schools, colleges, and departments of education (SCDEs). The College of Education Assessment Infrastructure Survey was administered to 1,011 campus-based programs in 2007 and 2009. Six hundred seven responses—341 (33.7%) from 2009 and 266 (26.3%) from 2007—were analyzed. Results support the notion that accredited SCDEs are significantly more likely than their unaccredited counterparts (a) to implement electronic assessment systems and (b) to invest at higher levels in assessment infrastructure. Implications include the role of accreditation reporting and other requirements in SCDE assessment policy and in the allocation of resources to support the growing need for enhanced capacity.

Much debate in the United States has occurred regarding teacher effectiveness and the colleges and universities that prepare beginning teachers. The common belief is that university-based programs prepare the vast majority of new teachers (Baines, 2010). Yet, nontraditional preparers such as for-profit higher education institutions and a variety of alternate route programs continue to increase their share of the educator preparation market.

The public debates surrounding teacher quality and preparation accountability often do not address this increasing segmentation and, thus, focus on university-based programs. For example, the National Council on Teacher Quality (2011, 2012) annual policy yearbooks focused exclusively on traditional university-based preparation programs. National reports frequently fail to differentiate between the quality of teachers prepared at colleges and universities and those prepared through other types of institutions.

Furthermore, there is a failure to distinguish teachers prepared in traditional campus-based accredited programs from those prepared in unaccredited campus-based programs, as well as from those prepared in programs like the American Board for Certification of Teacher Excellence. Baines (2010) cited the case of the University of North Texas, which offered both a relatively small nationally accredited program and a much larger unaccredited online program designed to compete with similar providers in the state.

Unlike accredited programs in schools, colleges, and departments of education (SCDEs), which are required to comply with a number of professional and accreditation standards, these preparation programs operate with significantly fewer regulations. A specific example is the disparity that exists in terms of accountability and accreditation requirements.

Much attention has been paid to assessment in higher education, with special focus on teacher preparation as a result of the federal Higher Education Act (Title II) and the No Child Left Behind Act of 2001 (Bales, 2006). Consequently, many accredited SCDEs have added student outcomes to their strategic planning, including infrastructure that supports program evaluation at multiple levels.

Palomba and Banta (1999) defined assessment as “the systematic collection, review, and use of information about educational programs undertaken for the purpose of improving student learning and development” (p. 4). For accredited SCDEs, this system of collection, review, reporting, and use is written into accreditation standards and accountability reporting. The National Council for Accreditation of Teacher Education (NCATE, 2008) required an assessment system “that collects and analyzes data on applicant qualifications, candidate and graduate performance, and unit operations to evaluate and improve the performance of candidates, the unit, and its programs” (p. 25).

Although the Teacher Education Accreditation Council (TEAC, 2009) quality principles do not make specific reference to a technology-based assessment system, programs are required to use multiple measures and assessment methods to demonstrate preservice teacher learning. NCATE and TEAC have since merged to form the Council for the Accreditation of Educator Preparation (CAEP). However, because NCATE and TEAC were the accrediting bodies at the time of this study, the separate NCATE and TEAC accreditation processes are detailed in this article. As with NCATE and TEAC, the new CAEP standards focus on evidence, continuous improvement, and clinical practice for all educator preparation providers. However, the CAEP standards also require evidence of innovation within programs and apply these requirements uniformly to all educator preparation providers.

Accreditation reporting is not unique to education colleges. In fact, higher education in general has seen an increase in mandated data reporting. As data collection and reporting requirements increase, especially for accredited programs, so does the need for data systems to support the management of the various processes. Data systems typically comprise a number of external and internal systems. The external systems are usually implemented by external entities, such as the federal government, accreditors, state departments of education, and test companies in collaboration with these entities.

One of the best known external data systems is the Integrated Postsecondary Education Data System, which was established by a 1965 amendment to the Higher Education Act. It requires postsecondary institutions that participate in federal student aid programs to report data on enrollment, program completers, graduation rates, faculty, student aid, and so forth (National Center for Education Statistics, n.d.). These data and a number of reports are publicly available. The Higher Education Arts and Data Services Project was established in 1982 as a joint effort by the National Association of Schools of Music, the National Association of Schools of Art and Design, and the National Association of Schools of Theatre to compile, manage, and report data on the arts in higher education. Participation is mandatory for accredited programs (Council of Arts Accrediting Associations, 2013).

Academic programs in communication sciences and disorders are required to complete annual reports and accreditation applications in the Higher Education Data System (American Speech-Language-Hearing Association, 2013). What seems to be unique to education colleges is an ongoing increase in the volume of accountability and accreditation reporting.

Accountability reporting refers to reports required by federal and state agencies. For example, sections 205 through 208 of Title II of the Higher Education Act (HEA), as amended in 2008, require reports from institutions that conduct traditional programs and alternate route to licensure programs enrolling students who receive federal assistance under Title IV of the HEA (U.S. Department of Education, 2013). This reporting, typically referred to as the Title II report, comprises three annual reports on the quality of teacher preparation that include licensure test pass rates, selectivity, and other program data.

Accreditation reporting refers to reports required by educator preparation accreditors. For example, NCATE and the American Association of Colleges for Teacher Education (AACTE) collect data from accredited programs through the Professional Education Data System, Parts A, B, and C. Parts A and B of this annual report series comprise a number of institutional, faculty, and student characteristics and outcomes data, including student enrollment, faculty demographics, and degree completion by degree level, degree area, gender, and race (AACTE, 2013).

Part C of the Professional Education Data System is a required NCATE report, in which institutions respond to areas for improvement that were cited on the previous on-site visit, report substantive changes to their programs in areas such as content delivery and budget, and report on continuous improvement initiatives including data tables (NCATE, 2013).

TEAC-accredited institutions are also required to submit annual reports that include an updated statement of the evidence used by or to be used by the program, an update to the program’s data sheet that supports claims of effectiveness, and an updated count of the number of students enrolled and graduated by program option (TEAC, 2010). These annual reports for each accreditor are completely separate, in terms of effort and resources, from cyclical accreditation visits.

Even though NCATE reaccreditation occurs on a 7-year cycle, preparation consumes institutions for a minimum of 3 to 4 years, beginning with the specialized professional association recognition process. Specialized professional associations are NCATE member organizations that provide discipline-level oversight (e.g., National Science Teachers Association recognition for science education) in partnership with NCATE. Under the NCATE system, discipline-level licensure programs are required to submit national recognition reports beginning 3 years in advance of the onsite NCATE visit.

Many of the reporting requirements across specialized professional associations are standardized. Discipline-level licensure programs are required to support their reports with six to eight key assessments, six of which are required and five of which are prescribed:

  • Assessment 1: licensure test pass rates, which must be at least 80%
  • Assessment 2: content knowledge assessment
  • Assessment 3: ability to plan instruction
  • Assessment 4: student teaching evaluations
  • Assessment 5: teacher candidate impact on P-12 student learning outcomes
  • Assessment 6: required but undefined for most specialized professional associations
  • Assessments 7-8: optional

The “NCATE unit” accreditation process is separate from the specialized professional association recognition process. The term unit refers to the collective of discipline-level educator preparation programs that, depending on size and university organizational structure, can be viewed as the college. This unit approval process requires separate preparation, analysis, and reporting, which typically takes place after the specialized professional association process is completed or mostly completed. The NCATE report includes summaries of candidate, faculty, and institutional data by discipline (each licensure area), level (baccalaureate, post-baccalaureate, masters, post-masters, doctoral), and mode of delivery (off-site, online, traditional), organized according to six standards (NCATE, 2008).

The unit accreditation process also requires the preparation of an electronic exhibit room that houses hundreds of pieces of supporting evidence, as well as the hosting of a four- to five-day onsite visit. The visit allows the accreditation team to meet with various stakeholders, including teacher candidates, faculty, staff, senior college and university administrators, graduates, and members of the P-12 community.

The TEAC accreditation process shares many similarities with the NCATE process. Eligible programs (the term program is used here as the equivalent of the NCATE unit) submit an inquiry brief, in which claims of program effectiveness are supported in accordance with the three quality principles: evidence of teacher candidate achievement, evidence that the program monitors quality, and evidence of the program’s capacity for quality (TEAC, 2009).

Unlike the more prescriptive NCATE accreditation, TEAC accreditation allows its accredited programs more flexibility in terms of supportive evidence. Another key difference is that TEAC-accredited programs do not submit discipline-level accreditation reports to specialized professional associations. However, each discipline-level submission is a part of the inquiry brief and attracts separate review costs. Depending on program size and number of disciplines offered, program review-related costs can be significant.

Accredited university-based teacher preparation programs are increasingly diverse in a number of characteristics, including the number of programs offered, number of candidates, and size of faculty (Haughton & Keil, 2009). All states require their institutions to be accountable. Moreover, NCATE-accredited programs are required to implement and utilize an electronic assessment system as part of the assessment system and unit evaluation standard. Depending on a state’s partnership agreements with one or both national accreditors, additional program approval must be sought from the respective state.

Therefore, it is fair to state that accredited programs are required to demonstrate quality outcomes related to their respective standards at multiple levels—national, state department of education, and discipline. Often, little similarity exists among state standards, accreditation requirements, and accountability requirements. Depending on the national accreditor and respective state partnerships, the nature and extent of the largely unfunded reporting burden vary substantially from institution to institution.

Variance also exists in the type and level of support from state to state. This variation includes the level of sophistication of state data systems and support mechanisms—data, personnel, and data collection support—available to institutions. Differences in characteristics, accreditor, accreditation status (nationally accredited vs. not nationally accredited) and accompanying mandates create important differences in infrastructure requirements and reporting burdens among institutions. For example, as previously mentioned, accredited SCDEs must track, among other data, outcomes related to content knowledge, field experience, clinical practice, professionalism, diversity, faculty qualifications, graduate impact, employer satisfaction, and technology competence at all levels of preparation and modes of delivery (NCATE, 2008; TEAC, 2010).

All states want their institutions to be accountable but do not provide sufficient state-level support. States also change the rules governing reporting requirements with little, if any, negotiation with their institutions and with no additional resources. The differences and associated costs of regulatory burdens and national accreditation have led some smaller institutions to close their teacher education programs. Coupland (2011) documented the decision by Hillsdale College, a small liberal arts institution founded in 1844, to close its teacher education programs.

Serious resource-related dilemmas for SCDEs have arisen from the demands for accountability and accreditation and the use of multiple sources of data and methods to demonstrate effectiveness. A major challenge for teacher educators in accredited institutions is how to negotiate these technology and infrastructural demands within their own institutions. The assessment requirements for accredited programs demand an infrastructure of technology, staffing, and, most importantly, initial and ongoing funding that exceeds the existing technological infrastructure of most colleges and universities. All of this comes on the heels of shrinking budgets, including retroactive cuts to state funding.

Given this accountability and accreditation reporting burden, this paper examines the influence, or lack of influence, of the accreditation process on assessment system infrastructure. The specific questions, as related to accreditation status, are as follows:

  • What types of assessment systems are SCDEs implementing?
  • What is the level of investment being made (i.e., high, medium, low)?

Methods

Participants

The College of Education Assessment Infrastructure Survey (CEAIS) was administered to the heads of 1,011 schools, colleges, and departments of education during spring 2007 and spring 2009. Each administration occurred over a 4-month period. The list of university-based educator preparation programs was compiled from the U.S. News and World Report college finder resource (http://colleges.usnews.rankingsandreviews.com/best-colleges/sitemap?int=a557e6) as part of the first administration in 2007. The list was updated with current contact information for the second administration. The targeted response was 279 institutions, the number required for a 95% confidence level.
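The 279-institution target is consistent with the standard finite-population sample-size calculation. The following is a minimal sketch of that arithmetic; the ±5% margin of error and maximum variability (p = .5) are conventional assumptions, not figures stated in the survey description.

```python
import math

def required_sample(population, z=1.96, margin=0.05, p=0.5):
    """Cochran's sample-size formula with finite population correction.

    z      -- z-score for the desired confidence level (1.96 for 95%)
    margin -- margin of error (assumed +/-5% here)
    p      -- expected proportion; 0.5 maximizes the required sample size
    """
    n0 = (z ** 2) * p * (1 - p) / margin ** 2       # infinite-population estimate
    return math.ceil(n0 / (1 + (n0 - 1) / population))  # finite-population correction

print(required_sample(1011))  # -> 279, the study's stated target
```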

Instrumentation and Procedures

The authors developed the CEAIS. The goal of the survey was to determine the assessment infrastructure and practices of university-based SCDEs. Both authors have significant experience with education accreditation in their capacities as associate dean and NCATE/assessment coordinator. Survey items were based on the assessment system requirements written into state, national, and professional accreditation standards.

Content validation was undertaken throughout the survey’s development. The primary methods were alignment with accreditation requirements and expert paneling by faculty from different program areas, including some with accreditation-related responsibilities. Prior to each electronic administration, the questionnaire was tested for flow, readability, appearance, and technical bugs by the researchers and other faculty and staff colleagues.

The resulting 27-item survey included the following five sections:

  • Institution characteristics: four items, including control and Carnegie classification
  • SCDE characteristics: five items, including accreditation status, school size (annual number of graduates), and size of faculty
  • Assessment system characteristics: seven items, including type of system and factors influencing choice
  • Support infrastructure: eight items, including the number of support personnel and the annual estimated cost of the assessment system and infrastructure
  • Qualitative questions: three open-ended items regarding unexpected benefits and challenges, assessment-driven changes implemented, and whether the assessment system is “worth it”

Redropping, or following up with nonresponding institutions (Alreck & Settle, 2004), was done via electronic mail throughout each data collection period.

Data Analysis

Web-based results were imported into SPSS 17 for Windows. Data elements (institution and SCDE characteristics, type of assessment system, education accreditation status, and estimated expenditure) from both the 2007 and 2009 administrations were merged. Responses were coded, and a series of summaries, cross-tabulations, and measures of association based on education accreditation status were produced for the questions being addressed in this report.
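The analyses behind Tables 2 and 3 are standard contingency-table procedures. As a rough illustrative sketch, not the authors’ SPSS workflow, the following Python code (numpy and scipy are assumptions of this example) reproduces the general approach using Table 2’s 2007 breakdown of paper-based, third-party, and proprietary systems; the results come out close to, though not identical to, the published statistics, presumably because of rounding or differences in category handling.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Contingency table built from Table 2 (2007):
# rows = accreditation status (accredited, not accredited);
# columns = system type (paper-based, third-party, proprietary/in-house).
observed = np.array([[7, 70, 87],
                     [16, 19, 27]])

chi2, p, df, expected = chi2_contingency(observed)

# Cramer's V, the effect-size measure reported alongside each chi-squared
# statistic: V = sqrt(chi2 / (N * (k - 1))), with k the smaller table dimension.
n = observed.sum()
v = np.sqrt(chi2 / (n * (min(observed.shape) - 1)))

print(f"chi2 = {chi2:.3f}, df = {df}, p = {p:.5f}, Cramer's V = {v:.3f}")
# -> chi2 approx. 22.97, df = 2, V approx. .319
#    (Table 2 reports 22.715 and .318 for this comparison)
```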

Results

Description of Responding Units

Table 1 describes the sample of responding institutions in terms of institution and SCDE characteristics. Six hundred seven responses – 341 from the 2009 administration and 266 from the 2007 administration – were analyzed. The 341 responses (33.7%) received for the 2009 administration were well above the 279 required for a 95% confidence level for a target population of 1,011. The 2007 administration yielded 266 responses (a 26.3% response rate), which represents a 90% confidence level. Of the 2007 sample, 170 (72%) were accredited and 65 (28%) were unaccredited. For 2009, 266 (78%) were accredited and 75 (22%) were unaccredited.

Table 1
Description of Responding Institutions

Institutional Description                                    2007           2009
Institution Control (2007: N = 266; 2009: N = 339)
   Public                                                    133 (50%)      158 (46.6%)
   Private                                                   133 (50%)      181 (53.4%)
National Education Accreditation (2007: N = 235; 2009: N = 341)
   NCATE                                                     152 (64.7%)    230 (67.4%)
   TEAC                                                      18 (7.6%)      36 (10.6%)
   Not Accredited                                            65 (27.7%)     75 (22%)
Regional Accreditors (2007: N = 263; 2009: N = 330)
   Middle States                                             42 (15.8%)     52 (15.8%)
   New England Association                                   18 (6.8%)      19 (5.7%)
   North Central                                             91 (34.2%)     118 (35.8%)
   Northwest Commission                                      13 (4.9%)      22 (6.7%)
   Western Association                                       10 (3.8%)      14 (4.2%)
   Southern Association                                      89 (33.5%)     105 (31.8%)
Carnegie Classification (2007: N = 265; 2009: N = 339)
   Bachelors                                                 70 (26.3%)     98 (28.9%)
   Masters                                                   120 (45.1%)    144 (42.5%)
   Doctoral                                                  75 (28.2%)     97 (28.6%)
Number of Graduates (2007: N = 249; 2009: N = 337)
   < 50                                                      57 (21.4%)     45 (25.8%)
   50-99                                                     48 (18%)       54 (16%)
   100-149                                                   30 (11.3%)     42 (12.5%)
   150-199                                                   19 (7.1%)      29 (8.6%)
   200-249                                                   14 (5.3%)      24 (7.1%)
   250-299                                                   11 (4.1%)      20 (5.9%)
   300-349                                                   14 (5.3%)      12 (3.6%)
   ≥ 350                                                     56 (21.1%)     69 (20.2%)

Table 2 summarizes the responses addressing the first research question and supports the notion that accredited SCDEs are significantly more likely to implement electronic assessment systems: 96% vs. 74% in 2007 (χ² = 22.715, df = 2, N = 226, p < .001) and 84% vs. 74% in 2009 (χ² = 6.375, df = 2, N = 339, p < .05).

Table 2
Type of Assessment and Data Management Systems by Accreditation Status

2007 (Accredited: n = 164; Not accredited: n = 62; Total N = 226)

Type of Assessment System                 Accredited    Not Accredited   Total   χ² (df = 2)   Cramer’s V   Sig.
Primarily paper-based/manual system       7 (4%)        16 (26%)         23      22.715        .318         .000
Electronic supported                      157 (96%)     46 (74%)         203     22.649        .317         .000
   Primarily third-party supported        70 (43%)      19 (31%)         89
   Proprietary/in-house                   87 (50%)      27 (43%)         114

2009 (Accredited: n = 265; Not accredited: n = 74; Total N = 339)

Type of Assessment System                 Accredited    Not Accredited   Total   χ² (df = 2)   Cramer’s V   Sig.
Primarily paper-based/manual system       43 (16%)      19 (26%)         62      6.375         .137         .041
Electronic supported                      222 (84%)     55 (74%)         277     5.578         .128         .018
   Primarily third-party supported        108 (41%)     24 (32%)         132
   Proprietary/in-house                   114 (43%)     31 (42%)         145

Paper-based systems were more likely to be used by unaccredited SCDEs at private master’s-level institutions. However, there was some narrowing of the electronic-supported-system gap between 2007 and 2009, as indicated by the corresponding Cramer’s V statistics of .317 and .128, respectively. As a measure of effect size, the former is considered medium, while the latter is considered small (Cohen, 1988). Regardless of national accreditation status, state departments of education require reporting from all university-based preparation programs.
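For reference, the Cramer’s V values reported in Tables 2 and 3 follow the standard definition, computed from the chi-squared statistic, the sample size N, and the smaller table dimension k = min(rows, columns):

$$V = \sqrt{\frac{\chi^{2}}{N\,(k - 1)}}$$

For the 2007 electronic-supported comparison, for example, sqrt(22.649 / (226 × 1)) ≈ .317, matching Table 2.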

In 2009, the top three reasons for assessment system choice by unaccredited SCDEs were cost (60%), ease of use (49%), and the ability to aggregate and disaggregate data (40%). Accredited SCDE choices were primarily based on the ability to aggregate and disaggregate (65%), ease of use (56%), and compatibility with existing institution systems (48%).

Third-party assessment systems such as LiveText and TaskStream had a strong presence in both the accredited (43% in 2007 and 41% in 2009) and unaccredited (31% in 2007 and 32% in 2009) markets. For accredited SCDEs, the three most commonly used third-party assessment system packages in both survey years were LiveText (37% in 2007 and 46% in 2009), TaskStream (19% in 2007 and 24% in 2009), and Tk20 (13% in 2007 and 20% in 2009).

For unaccredited SCDEs, the top three third-party assessment systems in 2007 were LiveText (53%), Chalk and Wire (21%), and Tk20 (13%). The picture was slightly different in 2009, when the top choice for unaccredited SCDEs was still LiveText (39%), followed by TaskStream (21%) and Chalk and Wire (17%).

Across the board, there was strong evidence that existing reporting systems in higher education were not sufficient to support SCDEs’ reporting requirements, regardless of accreditation status. As their assessment system choices indicate, accredited SCDEs must be able to aggregate and disaggregate data, including institution- and college-level characteristics, as part of their assessment, accreditation, and accountability reporting. This type of reporting becomes increasingly difficult as size and complexity, in terms of the number of licensure or certification options, grow. While paper-based systems are not viable options, purchasing off-the-shelf packages is a desirable and “easier” alternative compared with diverting scarce resources to in-house development.

Table 3 presents summaries of the estimated level of investment in assessment systems and support infrastructure by accredited and unaccredited SCDEs, as evidence in answer to the second research question. In both survey years, accredited SCDEs were significantly more likely than unaccredited SCDEs to invest in assessment systems at higher levels (2007: χ² = 12.189, df = 2, N = 218, p < .01, Cramer’s V = .24; 2009: χ² = 17.858, df = 2, N = 322, p < .001, Cramer’s V = .24).

Table 3
Summary of Annual Expenditure on Assessment Infrastructure

Survey Year   Level of Investment               Accredited     Not Accredited   Total   χ² (df = 2)   Cramer’s V   Sig.
2007          Low (< $75,000)                   105 (66.5%)    53 (88.3%)       158     12.189        .236         .002
              Moderate ($75,000 - $149,999)     35 (22.2%)     7 (11.7%)        42
              High (≥ $150,000)                 18 (11.4%)     0                18
              Total                             158 (100%)     60 (100%)        218
2009          Low (< $75,000)                   153 (60.2%)    59 (86.8%)       212     17.858        .235         .000
              Moderate ($75,000 - $149,999)     64 (25.2%)     8 (11.7%)        72
              High (≥ $150,000)                 37 (14.6%)     1 (1.5%)         38
              Total                             254 (100%)     68 (100%)        322

While expenditure levels for unaccredited SCDEs remained mostly static, accredited SCDEs reported increased spending levels between 2007 and 2009. Moderate and high expenditure levels rose from 22% to 25% and from 11% to 15%, respectively. In the same period, low levels of investment fell from 67% to 60%.

Limitations and Conclusions

In addition to sample size, other study limitations related to the selection of institutions affected the extent to which these findings can be generalized. The 2007 sample fell short of the desired number of responses. This issue was further compounded by missing data, which resulted in a smaller number of responses for some questions. Responses were limited to those who were willing to provide this information. Respondents were SCDE heads, some of whom may have had limited interaction with and knowledge about the assessment process. The potential impact of additional redropping is unknown.

Further, the list of targeted institutions was compiled from the U.S. News and World Report database, whose comprehensiveness and accuracy are unknown. The relatively small sample of unaccredited SCDEs makes drawing extensive conclusions about these institutions impossible.

Despite these limitations, some preliminary conclusions can be drawn from this inquiry into a limited slice of the operations of accredited and unaccredited campus-based teacher preparation programs. By and large, education programs are coming under heavy scrutiny as a result of the teacher quality debate and international student achievement comparisons. Real differences exist between accredited and unaccredited SCDEs in terms of assessment system choice, infrastructure, and investment levels.

Accredited SCDEs are significantly more likely to rely on technology-supported assessment systems. Most likely, this is related to accreditation requirements and the additional levels of reporting that require aggregated and disaggregated data, as well as integration with institution-level data. Accredited SCDEs are also significantly more likely to invest scarce resources at a higher rate to build the system capacity needed to comply with national, state, and professional standards.

Even though the 2009 sample showed an increase in paper-based systems used by accredited SCDEs, this could be an artifact of the larger sample or of incorrect responses to the question. It may also reflect evaluation decisions that do not reflect accreditation standards. A closer examination of the 2009 responses by national accreditor reveals that 36 (15.7%) of the NCATE-accredited institutions relied on primarily paper-based systems. Although assessing the reasons and motivations of unaccredited SCDEs is difficult, the small sample indicates no change in their adoption rate.

Despite resource constraints, many SCDEs are making relatively large investments in assessment systems, with many spending at least $75,000 annually. As expected, accredited SCDEs tend to spend more. The reported costs were estimates and, as such, may not represent the true costs (e.g., opportunity costs) of this expenditure. Also, the impact of money spent on the assessment function, in terms of system effectiveness and unit quality outcomes, is yet to be determined. The spending levels of unaccredited SCDEs were significantly lower than those of their accredited peers and reflected limited change over the 2-year period.

SCDEs, by and large, responded to the accountability mandates and made varying levels of investment in their own college-level assessment systems and infrastructure. This college-level investment was often separate from institutional-level systems and infrastructure, which cannot fully support these accountability and accreditation reporting requirements.

Responding to the accreditation and accountability mandate to invest in data management systems and infrastructure that facilitate the collection, analysis, reporting, and use of data reflects an important difference in the operations of accredited and unaccredited SCDEs. Is all this investment worth it in terms of improved teacher quality?

In April 2013, the National Council on Teacher Quality (NCTQ), in a joint effort with U.S. News and World Report, released a report that assigned letter grades to states and institutions. Nontraditional providers were not included. Despite the early debates surrounding NCTQ’s methodology—its sources of data and its heavy reliance on inputs, such as syllabi, over outputs, such as performance data—a segment of the public and some policymakers embraced this report as a more credible source of information on teacher preparation quality. In other words, has the NCTQ/U.S. News rating system trumped accreditation? A letter grade is certainly more understandable and intuitive than the complexities of state requirements, specialized professional association standards, and national accreditation standards and evidence. Despite the issues related to NCTQ’s motives and methodology, CAEP/NCATE/TEAC and state departments of education may need to learn something from this organization’s approach. If the NCTQ approach is deemed a legitimate assessment of preservice teacher quality, accredited university-based programs may need to adopt a similar approach, particularly if it promises reduced costs and fewer infrastructural demands.

Differences between university-based programs and nontraditional educator preparers as they relate to assessment systems are unknown. Additional research that empirically determines other critical differences between educator preparation entities is required. Once identified, the real impacts of these differences on teacher quality, an already elusive concept, should be determined. Improving the quality of elementary, secondary, and postsecondary education is an important national imperative. This goal becomes even more complex and elusive given the lack of agreement about what constitutes a qualified educator.

Berliner (2005) stated, “Under the best of circumstances, it would be difficult to define a quality teacher; under political mandate to do so, it is likely to lead to silly and costly compliance-oriented actions by each of the states” (pp. 206-207). By default units of education are also likely to engage, and may have already engaged, in similar “silly and costly compliance-oriented actions.” 

References

Alreck, P., & Settle, R. (2004). The survey research handbook (3rd ed.). New York, NY: McGraw-Hill/Irwin.

American Association of Colleges for Teacher Education. (2013). Professional education data system. Retrieved from http://aacte.org/professional-education-data-system-peds/

American Speech-Language-Hearing Association. (2013). The higher education data system. Retrieved from http://www.asha.org/academic/hes/

Baines, L. (2010). The teachers we need vs. the teachers we have: The realities and the possibilities. Plymouth, United Kingdom: Rowman & Littlefield Education.

Bales, B. (2006). Teacher education policies in the United States: The accountability shift since 1980. Teaching and Teacher Education, 22, 395-407.

Berliner, D. (2005). The near impossibility of testing for teacher quality. Journal of Teacher Education, 56(3), 205-213. doi:10.1177/0022487105275904

Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). New York, NY: Academic Press.

Council of Arts Accrediting Associations. (2013). Higher Education Arts Data Services (HEADS) project. Retrieved from https://secure3.verisconsulting.com/HEADS/

Coupland, D. (2011). The cost of accreditation: Hillsdale ends its teacher certification program. Academic Questions, 24(2), 209-221.

Haughton, N., & Keil, V. (2009). Assessment systems and data management in colleges of education: An examination of systems and infrastructure. Mid-Western Educational Researcher, 22(2), 38-48.

National Center for Education Statistics. (n.d.). Integrated postsecondary education data system. Retrieved from http://nces.ed.gov/ipeds/about/

National Council for Accreditation of Teacher Education. (2008). Professional standards for the accreditation of schools, colleges, and departments of education. Washington, DC: Author.

National Council for Accreditation of Teacher Education. (2012). 2012 annual report. Retrieved from http://www.ncate.org/Accreditation/AnnualReports/AnnualReport/tabid/126/Default.aspx

National Council for Accreditation of Teacher Education. (2013). PEDS data entry. Retrieved from http://aacte.org/professional-education-data-system-peds/peds-data-entry/

National Council on Teacher Quality. (2011). 2011 state teacher policy yearbook. Retrieved from http://www.nctq.org/dmsStage/2011_State_Teacher_Policy_Yearbook_National_Summary_NCTQ_Report

National Council on Teacher Quality. (2012). 2012 state teacher policy yearbook. Retrieved from http://www.nctq.org/stpy11/reports/stpy12_national_report.pdf

Palomba, C., & Banta, T. (1999). Assessment essentials: Planning, implementing, and improving assessment in higher education. San Francisco, CA: Jossey-Bass.

Teacher Education Accreditation Council. (2009). TEAC principles and standards for teacher education programs. Retrieved from http://www.teac.org/wp-content/uploads/2009/03/quality-principles-for-teacher-education-programs.pdf

Teacher Education Accreditation Council. (2010). TEAC annual reports. Retrieved from http://www.teac.org/accreditation/annual-reports/

U.S. Department of Education. (2013). Title II reports on the quality of teacher preparation. Retrieved from http://www2.ed.gov/about/reports/annual/teachprep/index.html

Author Notes

Noela Haughton
University of Toledo
Email: [email protected]

Virginia Keil
University of Toledo

 
