Assessing Assessment

From Institutional Effectiveness and Accreditation

There is a bigger problem facing higher education assessment than closing the loop. This problem involves establishing the loop in the first place. Assessment's missing link seems almost as elusive as the ape-man once did. In our case, progress in finding the link can come from investigating not old bones, but rather culture shock and undergraduate research.

Culture shock is the real effect of facing novel but inescapable pressures while relying on ingrained habits. It is well known to affect immigrant well-being and the success of study-abroad experiences. It is less recognized, but still potent, in less dramatic changes ranging from business transfers to life passages. Last week I had the pleasure of conducting developmental interviews with some scholar-educators on the topic of global studies. One of them was Aubrey Westfall, an expert in international migration, who had just returned from leading a student tour of Spain and Morocco. When the interview turned to culture shock, a fascinating hypothesis emerged:

Adjustment to culture shock begins when people experience events that show they are making progress toward their objectives.

This hypothesis has a converse. What if the objectives are so fuzzy that no events clearly relate to them, or so trivial that any event would serve regardless of the effort required? Hence, conversely:

Clear, significant objectives are necessary to produce an adjustment.

So how is research linked to culture shock? I once spent a decade at a small college teaching research courses in psychology. To stimulate discussion, I instituted a Friday forum in which students were required to come to class with a research problem related to the chapter being studied that week. In their first encounters with this activity, students would often start with “I want to show that…,” to which I quickly acquired the habit of interrupting with “Research is time consuming and difficult. If you already know the answer, why bother?” Today, I could add that there is culture shock in doing research. We must not only learn many new habits but also delay gratification far beyond our common expectations.

Later courses, with much improved questions, would involve excursions to the library (or, more recently, to a research database) to find out what others had learned. I knew the Friday forums worked when a class of 16 students gave 31 presentations at undergraduate research conferences and 6 presentations at the national conference of the Association for Psychological Science in the same year. Moreover, the topics were fascinatingly diverse and occasionally yielded findings that truly added insights to the discipline. For example, Donna Michal discovered, and Matthew Vande Brake revealed more about the causes of, the pattern that English-speaking children use more Anglo-Saxon words while adolescents use more Norman words; Susan Perry discovered not only that rhythms and melodies are processed in different parts of the brain, but also that the interval between notes in the average song melody is Poisson distributed with a mean and standard deviation of one. Research is time consuming and difficult, but those in pursuit of really good questions just don't notice the effort.
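As an aside for the statistically minded, the melodic-interval finding is internally consistent: a Poisson distribution's variance equals its mean, so a mean of one implies a standard deviation of exactly one. A quick numerical sketch (not from the original study) illustrates this:

```python
import numpy as np

# For a Poisson distribution, variance = mean (lambda), so with
# lambda = 1 the standard deviation is sqrt(1) = 1, matching the
# melodic-interval finding described above.
rng = np.random.default_rng(0)
samples = rng.poisson(lam=1, size=100_000)
mean, sd = samples.mean(), samples.std()
print(round(mean, 2), round(sd, 2))  # both close to 1.0
```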

Instructors are used to assessing students, not their instructional approaches. Changing approaches that were modeled decades earlier requires new habits that, even with the best of questions, will not be rewarded for months. Assessment can be culture shock. So how often do assessment offices ask faculty why they are doing program assessment? When they do, the first responses are likely to be variations of “to get federal financial aid” and “to show accreditation agencies what the program has accomplished.” Those answers provide a good opportunity to interrupt with “Assessment is time consuming and difficult. If you already know the answer, accreditation review teams will suspect you are biased, and you will be bored and frustrated with the process. Can you think of something you don't know about student learning in your program that assessment data might answer?” The most rewarding task of higher education assessment is helping programs design assessment research that leads to discoveries about significant educational questions. But if we fail to ask for and expect significant questions, we will miss the reward.

Assessment still has a gap when the loop is closed, unless significant questions have first been posed.