You Can't Change What You Can't Measure

This is an adaptation of remarks I made on a panel on October 29 at the SUNY Critical Thinking in Higher Education Conference. The panel was billed as "You Can't Change What You Can't Measure," so I had to slightly disagree right off the bat.

You can change what you can't measure. But you might not be as effective at that change if you don't measure it somehow.  My career in higher education has, for 25 years, focused on conducting research and evaluation in order to provide information for decision makers at colleges and universities.  At the local level, at Dartmouth, I created the office of student affairs planning, evaluation, and research.  I worked with faculty, administrators, and trustees across the full spectrum of change.  Starting with defining and refining the basic question, we conducted research through surveys, focus groups, interviews, and observation to refine what we knew, and then created ways to measure whether we were successful in making change happen.

People love innovation, but people hate change.  

Change disrupts the status quo. And in higher education, we like the way things are. So there will be those who do not want change. Having data that effectively demonstrates the problem can be very helpful. Be sure to collect that and use it when you communicate why change needs to happen.  Explain your thought process and refer to the research you have done to support your conclusion.  Lead with that.  Don't lead with the end.

In the late 90s at Dartmouth, the word came down that the president and trustees wanted to improve student social life. This was in a boom period, and there were millions of dollars coming to this initiative. There were committees and working groups and consultants, and I worked with a lot of them, providing information from surveys and focus groups on social life. I had detailed data on student behaviors, attitudes, and beliefs from surveys we created in my office, as well as that all-important comparison data on social life at other schools.  There were plans for new residence halls, a new student center, new dining areas, and new extracurricular opportunities. Great, right?

Most of that never happened.  

It never happened because instead of leading with “we want to improve student life because it is an integral part of the experience that is 1) impeding student learning, 2) not providing enough opportunities for students to experience leadership, and 3) in some cases leading to unhealthy behaviors and attitudes, and let me illustrate what the issues are…” the lead skipped all that and was “the fraternity system as we know it will end.”

It was a bold statement. But it never happened, because the change was then not about why we needed to change and how things would be better; it was about emotion.  And, overnight, one of the college presidents most popular with students and alumni became one of the less popular. The administration had been part of the thought process, and some of the faculty, but key stakeholders were left out of that process and were blindsided.

Dartmouth still has fraternities. And the grand plans were scrapped.  The student life initiative went forward, but it was a shadow of what had been envisioned.

The funny thing was that I had a presentation that lined it all up beautifully.  I had years of survey information that compared us with our peers.  I had detailed local information.  It was one of the most popular presentations I gave.  The alumni office saw my presentation and had me give it to alumni groups, such as the alumni council, when they visited campus.  Alumni loved it. Students loved it. We had great conversations about student life and what could change. (I also had a legendary talk I gave about the same time on the phenomenon of beer pong that at one time was referenced in the Wikipedia entry on beer pong, but that's another story.)

So, lead with why. That is how to persuade people to consider change. 

It also gives you the benchmarks from which to measure change.  It's great to have national data from peer institutions to give perspective to your situation.  It's easy to make abstract statements like: “Our students have academically rigorous experiences here!” In looking at the data, however, I might reply (and this is a fictional case, by the way), "Well, actually, we have fewer students going into graduate school, lower scores on the Collegiate Learning Assessment, and on our senior survey fewer students report academic gains than at our peer institutions.  Our analysis shows that our students study on average four hours less a week and produce five fewer papers longer than 20 pages over the academic year when compared with our peers.  One thing we have noticed, though, is that students who take more blended learning classes than strict lecture classes report higher levels of engagement and, when we track them, we see they score higher on the CLA than those without those experiences."

OK.  So now we have laid out the problem not as an abstraction, but as specific issues.  We want more students going on to grad school.  We want a better demonstration of learning, and we are going to use the Collegiate Learning Assessment as one measure to look at change over time. Those are our outcome measures.  And, this is important: having students study more, write longer papers, and take more blended learning courses are not the outcomes.  Those are methods that you think, based upon research, will help you achieve your real outcomes.  Don't confuse process with the goal.

But you need to measure both, because that tells you if you are on the right track.  If all you measured was the final outcome, you would have no way of telling which experiences some students had that made it more likely that you would achieve your goal. Because, frankly, sometimes it doesn't matter what the intermediate steps are.  Don't tie your initiative to increasing study time.  That's not the goal. But be sure to measure all those possible behaviors and experiences so you can make informed choices on how to reach your goal.  And of course, you need to effectively measure whether you achieve your goals.

Which means measuring change.  Use nationally available tools like CIRP or NSSE to measure change.  Use them longitudinally, with the same students over time, to make sure you are looking at actual change and not comparing different groups.  For instance, if you want to see if you have impacted your students, survey them coming in with the CIRP Freshman Survey and then survey those same students as seniors with the CIRP College Senior Survey.  Or use BCSSE and NSSE.  But don't compare first-year students in 2014 with seniors in 2014 and think you are measuring change.  You are not. You are looking at different cohorts. Useful information, but it doesn't measure change.
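To make that distinction concrete, here is a minimal sketch in Python with pandas, using made-up student IDs and a hypothetical engagement_score column rather than real CIRP or NSSE data, of the difference between following the same students over time and simply comparing this year's first-years to this year's seniors.

```python
import pandas as pd

# Hypothetical survey extracts: one row per respondent.
freshmen = pd.DataFrame({
    "student_id": [1, 2, 3, 4],
    "engagement_score": [3.1, 2.8, 3.5, 2.9],  # e.g., a scale score from the entering-student survey
})
seniors = pd.DataFrame({
    "student_id": [2, 3, 4, 5],
    "engagement_score": [3.4, 3.6, 3.3, 3.0],  # the senior-year administration
})

# Longitudinal: match the SAME students across both surveys and look at change.
matched = freshmen.merge(seniors, on="student_id", suffixes=("_fr", "_sr"))
matched["change"] = matched["engagement_score_sr"] - matched["engagement_score_fr"]
print("Mean within-student change:", matched["change"].mean())

# Cross-sectional: comparing this year's freshmen with this year's seniors
# compares different cohorts -- useful context, but it is not change.
print("Cohort difference:",
      seniors["engagement_score"].mean() - freshmen["engagement_score"].mean())
```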

And you need to measure change.  From day one. That means getting people who can do that for you and giving them the resources to help you gauge whether you are achieving your outcomes and, based upon the data, which programs or policies you enact make success more, or less, likely.
