From Institutional Effectiveness and Accreditation
In art criticism, horror vacui is the meticulous filling of empty space with visual clutter. A Senior Integrative Experience paper I read recently by Jenna Chander likened it to obsessive-compulsive disorder. Schizophrenia is another psychological diagnosis sometimes associated with paintings that exhibit horror vacui. In contrast, many world-class modern architects practice minimalism. Instead of a cluster of gargoyles in every niche, buildings are restricted to essential features, and each feature serves multiple purposes, resulting in an overall economy of design.
Place the Learning Outcome Network (LON), advocated in these postings, in the context of horror vacui. An LON is accomplished by assessing every student in every course in a program with the same multidimensional set of developmental rubrics across all courses. Developmental levels anchor each dimension before and after the program, so that the measurement does not act like a rubber yardstick: beginning strategies occur before the program, easy ones during the first few months, practical ones by the end of the program, and inspiring ones many years afterwards out in the field. Typically, faculty rate eight to twelve dimensions using software that can produce multiple-choice questions (e.g., SurveyMonkey or LiveText). Also typically, their first rating term is prefaced by complaints that entering grades is hard enough without adding assessment during the end-of-term crunch.
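The rating scheme described above can be sketched in a few lines of code. This is a minimal illustration, not an actual program's rubric: the dimension names, level labels, and function names below are invented for the example.

```python
# Hypothetical sketch of an LON rating record. Dimension names and
# developmental-level labels are illustrative assumptions only.

# Developmental levels anchor each dimension before, during, and after
# the program, so every rating lands on the same fixed "yardstick".
LEVELS = ["beginning", "early-program", "end-of-program", "in-the-field"]

# A program typically rates eight to twelve dimensions; three shown here.
DIMENSIONS = ["problem framing", "evidence use", "communication"]

def rate_student(student_id, term, ratings):
    """Record one faculty member's ratings for one student in one term.

    `ratings` maps each dimension to a developmental level. Because the
    same rubric is reused in every course, records stay comparable
    across courses and terms."""
    for dim, level in ratings.items():
        assert dim in DIMENSIONS, f"unknown dimension: {dim}"
        assert level in LEVELS, f"unknown level: {level}"
    return {"student": student_id, "term": term, "ratings": dict(ratings)}

record = rate_student("S001", "2024-fall",
                      {"problem framing": "early-program",
                       "evidence use": "beginning",
                       "communication": "early-program"})
```

Because each record is just a handful of level choices, one student's term rating takes the minute or two the text describes.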
LONs contrast with the common approach of assessing only the final or capstone outcome of a program in order to identify the program’s accomplishments. Is not this summative approach a minimalist answer to an LON’s horror vacui? After all, having one assessment per program creates a distinct impression of simplicity that contrasts markedly with the scores of assessments needed for an LON.
Thinking that LONs are the horror vacui distorts both them and the capstone assessment process. Summative or capstone assessments only appear to be minimalist. A better architectural analogy is that they are like the depiction of Princess Diana’s tiara atop a skyline-defining building, as architect Gyo Obata placed on the Great American Tower at Queen City Square in Cincinnati, Ohio. The tiara has no function other than to draw attention to the building’s being the tallest in the skyline, yet it took an excessive amount of time and resources to accomplish. Likewise, the only function of summative assessment is to show what faculty members already know: that their students learned something. Yet it requires student artifacts to be collected and made available to multiple raters, a tedious process even in the day of portfolio software, and those raters must first be recruited and then put through special reliability training.
In contrast, LON assessments are analogous to the scaffolding that enables the building and maintenance of the skyscraper. To build and maintain learning, great educators, coaches, and leaders know to
1. focus on fundamentals,
2. present attainable challenges,
3. define the challenges in ways that are easily remembered by both themselves and those they teach, coach, or lead,
4. repeat the practice of fundamentals even long after they are automated, and above all,
5. track progress in an efficient way that can be learned by all participants.
Because they generate eight to twelve practical approaches to the activity being taught, LONs help to define the fundamentals of the program. Because they identify the developmentally prior steps to achieving these approaches, they facilitate the choice of attainable challenges. Because faculty members have worked through the definitions within a developmental framework, they begin with definitions phrased in their own terms. Because they take only a minute or two per student per term, LONs can be used many times. Because they are used many times, faculty practice thinking about the fundamentals they contain even long after they have learned their names and definitions. Because multiple faculty members rate the same students using the same criteria, reliability calculations require no additional data storage or collection. And above all, LONs provide quick, low-resource, and easily learned ways to track the success of each student and each course innovation over time. To top it all off, the most common response after the first round has been "that was a lot easier than I thought."
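The reliability point above can be made concrete with a small sketch. Because two faculty members rate the same students on the same rubric dimensions, a simple percent-agreement statistic falls out of data already on hand; no new artifacts need collecting. The ratings and level labels below are invented for illustration, and percent agreement is only one of several common agreement statistics.

```python
# Minimal sketch of an inter-rater agreement check on LON-style ratings.
# All names and data here are hypothetical examples.

def percent_agreement(rater_a, rater_b):
    """Fraction of (student, dimension) cells on which two raters agree."""
    shared = set(rater_a) & set(rater_b)
    if not shared:
        return 0.0
    matches = sum(1 for key in shared if rater_a[key] == rater_b[key])
    return matches / len(shared)

# Each rater's dictionary maps (student, dimension) to a developmental level.
a = {("S001", "communication"): "early-program",
     ("S001", "evidence use"): "beginning",
     ("S002", "communication"): "end-of-program"}
b = {("S001", "communication"): "early-program",
     ("S001", "evidence use"): "early-program",
     ("S002", "communication"): "end-of-program"}

agreement = percent_agreement(a, b)  # 2 of the 3 shared cells agree
```

The same stored ratings that track each student's progress over time also feed this check, which is the sense in which reliability calculations require no additional data collection.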
In true minimalist fashion, LONs serve many educational purposes in a single essential function. As they are built and learned, they become the scaffolding of the program.