Implementing System-Level Graduation Standards

Driven by external pressure for increased accountability and internal pressure for improved learning outcomes, colleges across the country have been developing and refining assessment systems for several decades. In some cases, assessment results have a significant positive impact, for example, when used to enhance teaching and learning or as a lever for organizational change. In other cases, the results have little impact, are not seen as useful, or were not designed for program improvement in the first place. Assessment can have substantial negative effects as well, including ill will among faculty or other key constituents, reputational damage or reduced funding.

In 1999, the Vermont State Colleges (VSC)—comprising Castleton, Johnson and Lyndon state colleges, the Community College of Vermont and the Vermont Technical College—initiated a systemwide planning process that identified multiple strategic initiatives, including several designed to improve outcomes assessment and accountability. One initiative called for the establishment of common graduation standards for all students across the five colleges, at both the associate and bachelor’s levels. The board of trustees wanted to provide a “guarantee” to the public and employers that every graduate of the VSC could demonstrate essential skills for success after college.

The chancellor established a systemwide steering committee to oversee the graduation standards initiative. The committee included faculty representatives from each college and academic deans, and was co-chaired by the academic vice president of the system and the president of one of the four-year colleges. Faculty on the committee were expected to serve as liaisons to the faculty assemblies on each campus, to allow for broader faculty input and to facilitate endorsement of the committee’s plan. Likewise, reports were provided frequently for the state colleges’ Council of Presidents, the chancellor and the broader VSC community.

Areas of competency

The steering committee ultimately proposed six areas of competency: writing, quantitative reasoning, information literacy, oral communication, civic engagement, and critical thinking. Facing significant opposition to the entire initiative from a vocal group of faculty, the steering committee formed faculty-majority subcommittees to define the outcomes and propose assessment strategies for each standard. Several months into this process, civic engagement and critical thinking were permanently tabled because the subcommittees were sharply divided about the feasibility of valid assessment in those areas. Narrowing the initiative in this way heightened the political challenge of assessing a limited set of skills rather than a broad set of learning outcomes, such as those identified by the Association of American Colleges and Universities through Liberal Education and America’s Promise (LEAP).

Unexpectedly, it was easier to come to agreement about specific language for defining learning outcomes than about what to call the entire set of competencies. Faculty vehemently opposed the initial label of “minimum competencies,” on the grounds that it potentially conflated expectations for collegiate learning with those at the high school level. Faculty ultimately agreed to the term “graduation standards.” Of course, this semantic shift did not mitigate the challenges of establishing appropriate performance levels for the standards, a question made politically charged by the VSC’s public access mission and the fact that over 60% of its students are the first in their families to attend college. Many expressed concerns about creating barriers to graduation. But by far the most controversy centered on the assessment tool itself.

Fundamental methodological questions were debated. Would faculty design the assessments or would the VSC select commercially available instruments? Who would set the standards for passing? Would all students be assessed or would a sampling technique be employed? At what point in time would students be assessed? Ultimately the steering committee recommended a politically acceptable compromise—adoption of common statements of learning outcomes across the five colleges and agreement on a set of parameters for assessing the outcomes (including that every student would be assessed), while allowing each college to develop and implement campus-specific assessments for each standard. This plan satisfied the demands of the board of trustees and chancellor for common learning outcomes and a “guarantee” of minimum competency, and provided a mechanism for faculty buy-in at the campus level.

Implementation

The academic vice president in the system office worked closely with the college presidents and academic deans to ensure progress on the development of local assessments. The implementation timeline was staggered over a five-year period, beginning with the development of a writing assessment that met the requirements established by the steering committee. One college already had in place an institutional writing proficiency exam, and another had in place portfolio-based writing assessment. These models and others were shared among faculty and provided a foundation for the timely and relatively smooth implementation of writing assessments across the system.

The other three areas proved more difficult to implement. There was wide disagreement about the level at which students should demonstrate proficiency in quantitative reasoning, especially for students in STEM fields as opposed to those majoring in the humanities. There was disagreement about how to differentiate minimum competency in information literacy from what might be expected of high school graduates. Finally, there was ongoing confusion about how to differentiate expectations at the associate and bachelor’s levels. Concerns arose about the potential for wide variation across colleges in the performance levels being assessed, as well as in the overall quality of the assessments.

Several years into the implementation process, the academic vice president in the system office and academic deans at the colleges designed and implemented a process to regularly review the assessment methods and results at the colleges. In addition to annual monitoring of results across all assessments, one competency is evaluated comprehensively per year on a rotating basis. Faculty from across the colleges come together in a retreat format to reconsider the common learning outcomes, analyze local assessment methodology and results, and make recommendations to the presidents and chancellor for improving the process. This provided a mechanism for faculty to have a significant role in the ongoing improvement of the assessment system, while supporting the broader strategy of engaging faculty in assessment as part of the regular work of teaching and learning.

Given that writing was the first area to be implemented, it was also the first to be evaluated. As a result, revisions were made to the learning outcomes, as were recommendations for improving the reliability and validity of the local assessments. Writing faculty from across the system shared student writing samples and assessment rubrics, a process they found both useful and engaging, particularly given the opportunity for expanded colleagueship beyond the small departments in VSC colleges. Most recently, the assessment of information literacy was reviewed, a process that identified areas of concern in the current approach, including the wide variability in expectations across departments within colleges. Additionally, there was agreement that the standards and their implementation are not rigorous enough with respect to intellectual property and the ethical use of information.

Results and lessons learned

Given that all students would be assessed across all standards, the instruments developed by faculty at each college were, in theory, high-stakes. The VSC policy remains that no student can graduate without demonstrating competency in all four graduation standards. However, now that the assessments have been in place for several years, very few students fail to pass them in time to graduate. Students routinely require multiple attempts to pass (and benefit from a variety of academic supports in place to help them), but none of the colleges limits the number of times a student can attempt to demonstrate competence. The de facto pass rate, then, remains nearly 100%.

The perception of a high-stakes model may have brought about low standards (as did the original concept of “minimum” competencies). But the most consequential decision was to allow for the design of local assessments within a system-level model. This approach provided for substantial faculty ownership of the process but precluded any cross-college analysis or national benchmarking with similar institutions (although two colleges use a nationally normed online assessment of information literacy). Equally significant was the decision to measure competence at a single point in time rather than at multiple points in order to measure learning gains over time. While the notion of measuring the “value added” by a college degree is fraught with methodological problems related to isolating the effects of the institution (versus those resulting from maturation or experiences outside the institution), it has become the gold standard in outcomes assessment, particularly at a time when popular books such as Academically Adrift: Limited Learning on College Campuses (Arum and Roksa, 2011) have raised questions about the extent to which students learn anything at all in college. Further, measuring competence at a single point in time provides little insight into how students acquire skills and the extent to which particular curricular or pedagogical approaches affect learning gains.

To a large extent, the approach did not take advantage of the opportunity to aggregate and analyze system-level data to improve teaching and learning. Despite having a single administrative information system across the colleges, inadequate attention was paid to developing robust data-collection and analysis systems to support the graduation standards initiative. The strategy of early compromise was critical to ensuring faculty engagement in the assessment process, but it leaned too far in the direction of local autonomy. This manifests an inherent tension in higher education system leadership: supporting strong, unique colleges while maximizing the benefits of the system.

In other respects, the assessment approach did maximize the benefits of being a system. VSC policy remains that meeting the graduation requirements at one college also meets the graduation requirements at any other VSC college, despite the variation of assessment methodology. This benefits transfer students and encourages community college students to continue their studies in the VSC. Other benefits of the assessment model include systemwide awareness of national trends in assessment and accountability, faculty agreement on essential learning outcomes for all VSC graduates, and increased student awareness of performance expectations for college graduates.

Perhaps most valuable has been the annual systemwide retreat devoted to analyzing assessment methods and results in particular areas. In order for an assessment model to ultimately succeed as a means of improving learning outcomes, systemic processes must be in place at all levels to continually monitor, evaluate and strengthen the approach. The annual review process could potentially be enhanced through student involvement, reflecting the growing body of literature on the potential benefits of engaging students in the study of teaching and learning. By bringing together faculty from across colleges, systems have the opportunity to establish what the Carnegie Foundation calls “networked improvement communities,” which provide for highly structured, cross-functional, cross-institutional inquiry. Finally, the decision to focus on a limited set of outcomes, while for some creating the perception of diluting the greater purpose of a college education, provides the opportunity for in-depth analysis of how students learn a discrete set of skills commonly viewed as essential for success in and beyond college.

Carol Moore is the past president of Lyndon State College and currently works as a consultant. Karrin Wilks is the past senior vice president of the Vermont State Colleges and currently serves as university dean for undergraduate studies at the City University of New York.
