From Inside-Out to Backward Design: An aid for deeper learning

Eric Giordano

Political Science

The concept of backward design prompted Eric to actively examine his existing teaching style and launched him into a self-directed reflective redesign project for one of his introductory courses. The process forced him to identify and prioritize appropriate learning outcomes and to differentiate more clearly among the types of learning experiences he wanted students to explore.

Recently, I made it about halfway through a semester and realized I was teaching exactly the kind of course that I most dreaded taking as a student. Though I reflexively made mid-course corrections, the end result invariably proved less than the sum of its parts. Redesigning a course in the middle of a semester, from the inside out as it were, could create a fantastic finish, but one completely disconnected from a lethargic beginning. Enter the FACETS experience. Beginning with well-articulated learning outcomes and following through with effective measurement and correlated pedagogy, I have begun to plot my courses with a destination firmly in mind—in short, from inside out to backward design.

FACETS—The Motivation to Change

I would get about halfway through each course and bump into one of those great and terrible “aha!” moments—you know, the ones when you are suddenly, lucidly cognizant that you are teaching exactly the kind of course you most dreaded taking as a student: a content-driven, lecture-style, memorization-oriented, ho-hum class that requires an unhealthy dose of caffeine just to remain awake, let alone endure.

In the fall of 2004 I took a deep breath and plunged into my first academic year with all the expectant vigor and vim of a newly minted assistant professor at a “teaching” university. I was finally living my dream, the one conjured up in response to the recurring nightmare where, as an ABD, I enjoy a long career selling appliances at the mall. Like any dedicated educator, I entered the classroom determined to teach the socks off those tabulae rasae we euphemistically refer to as students. But after two short semesters and eight long courses, I noticed a disturbing pattern. I would get about halfway through each course and bump into one of those great and terrible “aha!” moments—you know, the ones when you are suddenly, lucidly cognizant that you are teaching exactly the kind of course you most dreaded taking as a student: a content-driven, lecture-style, memorization-oriented, ho-hum class that requires an unhealthy dose of caffeine just to remain awake, let alone endure. Savvy enough to recognize a problem, I invariably made mid-course corrections—in some cases, complete overhauls (fixing the shuttle after launch, as it were)—to engineer what I hoped would be a better second-half teaching and learning experience. Not surprisingly, the sum proved much less than the parts. While undertaking serious redesign midway through the course led to some fantastic finishes, they rarely overcame lethargic beginnings. In short, I found myself teaching from the middle, creating my courses from the inside out, only to end in a kind of self-reflective muddle. I could only imagine how the students felt. Still, my course evaluations ended up fairly positive. But what did my students know? It turns out they don’t even wear socks!

The FACETS experience first lured, then cajoled, and ultimately convinced me to think differently about what I do or don’t do in the classroom.

Enter FACETS. The FACETS experience first lured, then cajoled, and ultimately convinced me to think differently about what I do or don’t do in the classroom. It demystified and filled some crucial gaps in my understanding of pedagogy and the process of learning. Just as important, FACETS facilitators provided useful resources and tools to explore the latest developments in cognitive science and the nature of learning, while also offering concrete pedagogical strategies to better serve my students and my teaching interests. Finally, FACETS educators administered a healthy diet of challenges, motivation, and support for me to realistically engage in the scholarship of teaching and learning as a means of improving learning outcomes in the classroom.

A Crazy Notion

Any educated dunderhead can intuit the importance of linking effective teaching with prescribed learning objectives. But inside my head at least, there remained a substantial disconnect between pedagogy and realistic expectations of student learning outcomes. On the one hand, I was painfully self-reflective about the teaching process (hence my mid-course awareness of “problems”); on the other, I often found myself comfortably rooted in a transmission-centric teaching method typical of the classic education model I desperately wanted to escape.

A recent letter from a former student illustrates the dilemma. This student had taken an introductory course from me and later transferred to another university, where she enrolled in an upper-level class in the same discipline. She was, to put it graciously, an average student. In her words,

Once I got into the class here I found that the information I retained in your class was crazy [sic]. I was able to quote word for word definitions (like power, direct democracy, etc.). I have never retained information like I did in your class. I believe it is in direct correlation with your teaching skills and test set-up.

Naturally, I was flattered. So I was doing something right. This was more than an assistant professor in his first year could ask for. But as I began pondering her words, I realized that there was something disturbing—even “crazy”—about her praise. While I am glad she retained key information presented in class (indeed, I had set up my lectures and tests in a manner that forced students to remember important concepts), I realized that this was not what I most wanted her to remember about the class or about my teaching. The feedback I vainly wished for would have gone something like this (with changes in brackets):

Once I got into the class here, I found that the [connections I made] in your class [were] crazy. I was able to [understand and relate concepts (like power and legitimacy to direct democracy and republicanism)]. I have never [thought about ideas] like I did in your class. I believe it is in direct correlation with your teaching skills and test set-up.

Yes, students need to know key concepts. Yes, they need to speak a common language to progress in a discipline. But they also need a foundation of critical thinking skills that moves them beyond a stovepipe mentality.

Some colleagues argued that I should be content. A C+ student mastered essential information from one of my classes and actually thanked me for my efforts. Sing hey nonny-nonny! And, isn’t this sort of retention of key definitions a core goal of any introductory course? Yes, I counter, as I contemplate how intensely gratifying it will be to spend an entire career unlocking the mysteries of memorization. Is it really too much to want more? Should I be satisfied creating automatons capable of rattling off definitions, easily found in any basic course text, but prepared for little other purpose than to climb the next rung on the ever-narrowing disciplinary ladder? Yes, students need to know key concepts. Yes, they need to speak a common language to progress in a discipline. But they also need a foundation of critical thinking skills that moves them beyond a stovepipe mentality.

How I Learned Backward Design

The introduction of backward design by FACETS educators may have saved me from a career of second-guessing. With a little imagination and some timely hand-holding, FACETS launched me on a self-directed reflective redesign project for one of my introductory courses, implemented in the fall 2005 semester. The first stage of the process has been to determine what my students should know or understand at the conclusion of the course—preferably something of lasting value. As a social scientist, I envision students in my introductory courses learning the teleology of critical thinking: a process beginning with empirical observation, analysis, and hypothesis building, and culminating in syntheses of arguments and of the key concepts and texts they embrace. For the disciplinary purist, this does not preclude the idea of learning knowledge as fact. But it certainly encourages thinking about what type of knowledge, and how that knowledge, fits into a larger “need to know” category within the framework of the course. To create hypotheses about the role of money in election politics, for instance, students must know something about the Bipartisan Campaign Reform Act. They also need to know where, how, and how much money was donated and spent by candidates, parties, PACs, and 501(c)(3) organizations. But, devoid of a contextual relationship, these concepts and figures do not promote critical thinking.

The second stage of the backward design process, which appeals to my social science instinct, is to identify a mechanism for acquiring evidence of successful learning outcomes. In a traditional classroom setting, we still call these tests. Because most of academe has outlived the post-modern moment, I don’t propose to break entirely from this tradition. But rather than testing students largely on the minutiae of a particular text or on their ability to regurgitate a portion of my lecture, I am interested in mechanisms that actually test those enduring principles and skills that my backward design process initially articulated. In other words, I propose to measure student understanding that conforms to my imagined end-state of student learning. For example, using the case study method, I propose to test students’ ability to use empirical facts, say from the 2004 presidential election, to critically analyze extant arguments and hypotheses about the role of money in election politics. First, can students critically analyze important texts in the discipline? Second, can they identify factual information and use it to reasonably argue for or against a particular hypothesis? Finally, can they create more robust and testable hypotheses based on a critical analysis of the empirical data?

In sum, backward design has forced me to differentiate among types of learning outcomes, juxtaposing, for example, “higher learning” skills, such as analysis, critical thinking, comparison, and synthesis, with “lower learning” skills, such as memorization and retention of facts.

The third stage of backward design demands the use of learning experiences and teaching methods that will encourage students to acquire the skills that I propose to measure. Naturally, if I expect students to improve critical thinking skills during the semester, I must include activities that teach students how to critically read and analyze texts. I must model and give them practical experience in reading case studies; I must teach them to identify and sort key information; and I must help them learn to cull out potential competing arguments. Finally, I must provide them with assistance to understand the process of creating and testing hypotheses, and articulating the strengths and weaknesses of competing arguments.

In sum, backward design has forced me to differentiate among types of learning outcomes, juxtaposing, for example, “higher learning” skills, such as analysis, critical thinking, comparison, and synthesis, with “lower learning” skills, such as memorization and retention of facts. Ultimately, because both types are important, both need to find a place in the classroom. But the backward design process has helped me to identify and prioritize appropriate learning outcomes and follow through with appropriate testing and pedagogy to support them.

The most intriguing aspect of backward design, however, is that it encourages a rational—even testable—approach to teaching and learning. In other words, the design process itself, if done with care, lends itself to empirical investigation. While it may be difficult to measure and compare the effectiveness of backward design versus traditional course design methods for an entire course (because of inherent problems regarding sample size, creating a control group, and accounting for other variables), it is certainly reasonable to test a module from one course to the next. In fact, a major selling point of the FACETS project for me is the opportunity I have to conduct just such a test using two sections of the same introductory course.

Afterthoughts—Looking Backward

Fast-forward six months. I completed my backward design project during the fall 2005 semester using two sections of a standard Introduction to Politics survey course as the backdrop. My objective was to compare learning outcomes from two cohorts of students—one enrolled in a classic content-driven course and the other enrolled in a redesigned course centered on outcome-driven learning. To make the contrast even more vivid, I had the opportunity to teach both courses during the same semester, back to back on the same day. To measure and compare learning outcomes of the two cohorts, I designed a two-part assessment tool:

  1. An in-class examination including identification questions, to measure students’ ability to memorize and recall the meanings of key terms, and an essay question, to measure students’ ability to critically analyze and synthesize ideas from lectures, readings, and activities and to measure deeper understanding of subject matter; and
  2. A critical essay paper assignment.

My expected outcome was that the backward design cohort would produce higher-quality answers for the essay portions of the assessment. (I was unsure what to expect with regard to the memorization portion.) My reasoning was that carefully planned lectures, critical thinking activities, and new content delivery methods—all inspired by the backward design process—would produce better essay answers in the aggregate. To account for potential intervening variables, I borrowed a testing tool from the Institute for Personality and Ability Testing and administered it to both cohorts at the beginning of the semester, controlling for competing causal factors including personality differences, gender, learning style preferences, and overall intelligence and learning abilities.

Looking backward, this project proved highly ambitious. In combination with my normal teaching load, the experiment with my Introduction to Politics courses meant four completely separate preparations two to three days a week. I realized very early on that my ambitions would have to give way in part to sanity. I decided to shorten the experiment to cover only the first third of the semester, after which I administered the two-part assessment. After crunching the numbers, I came to the following conclusions:

  1. Some of the data was inconclusive. For example, there was no significant difference between the cohorts on the in-class examination assessment. A learning styles analysis showed that concrete-linear students did better on the identification/memorization portion of the assessment, while abstract students did better on the essay portion of the test. These results match well-researched learning trends. (The two classes did not differ significantly in terms of gender, intelligence, or personality variables.)
  2. The backward design cohort did perform significantly better than the control cohort on the critical essay assignment. In terms of other potential intervening variables, the only statistically significant trend was that abstract learners outperformed concrete learners on the critical essay paper, also conforming to expectations. The results seem to suggest that the backward design process helped students, on average, write better papers, though abstract thinkers in general did better as well. Of the remaining intervening variables, general intelligence (based on an IQ measure) was the next best statistical predictor of better essays, followed by the personality measures of conscientiousness and agreeableness.
  3. Beyond mere numbers, from a pedagogical standpoint, the backward design process improved the internal rationality and logic of the course and created a better ends-means match between class lectures, activities and assignments, and desired learning outcomes.
  4. Finally, the process of backward design itself engaged me in a self-evaluative process that led to improvements in content, delivery, and presentation; I found myself more highly motivated and engaged in the classroom than in the past, and more attentive than usual to student interests, needs, and performance. To the extent that this influenced the outcome, I can only report that I found myself feeling this way about both cohorts of students.

Rather than sweeping midstream overhauls, consistent self-evaluation encouraged minor and specific course corrections along the way to ensure that classroom instruction was in fact matching planned course objectives.

With the benefit of reflection, I have come to view my experiment with backward design as a unique thought exercise. During the FACETS workshops, one of the facilitators suggested that, as part of my project, I might want to keep a log recording what I did in class each week along with a summary of lessons learned and overall impressions of the process. While seemingly a small part of the overall experiment, I now look back at this “journal-writing” exercise as the capstone of the backward design process itself. Though the weekly log was obviously a conscious stab at self-assessment, only much later, during post-project reflection and writing, did I realize that keeping the log encouraged me to evaluate my teaching performance more consistently and rationally than in previous iterations. Rather than sweeping midstream overhauls, consistent self-evaluation encouraged minor and specific course corrections along the way to ensure that classroom instruction was in fact matching planned course objectives. In short, not only did the process of backward design revolutionize the way I went about organizing my course, it offered a more rational approach to adjusting content, activities, and delivery to match pre-designated learning outcomes.

Admissions from a Successful Teacher

Fortunately, FACETS provided a timely map, allowing me to successfully plot my course with a destination firmly in mind.

I can admit now that I went to the initial FACETS informational meeting to impress my colleagues. “See? I care about teaching.” I neglected to share with them my belief that I already had a good handle on what it took to be a great teacher. I had energy. I had commitment. I had some exceptional ideas for teaching in my discipline. And, it turns out, I had unending irrational hubris masquerading as confidence. In the end I was more like Bullwinkle in a china shop: high on theory but with little practical experience to navigate a successful path without damaging something (or someone) in the process. Fortunately, FACETS provided a timely map, allowing me to successfully plot my course with a destination firmly in mind. By redirecting my energy from inside out to backward design, the FACETS experience helped restore my idealistic ambition to make a difference in the lives of students, this time grounded in a practical strategy for success.

Brief Bio: Eric is an assistant professor of political science at UW-Marathon County in Wausau. He received his Ph.D. from The Fletcher School of Law and Diplomacy at Tufts University in 2003. He currently teaches introductory politics, international politics, and a U.S. Foreign Policy course at Stevens Point through the Collaborative Degree Program. His current research focuses on how the U.S. military has adapted strategies and tactics in stability and reconstruction operations. Contact Eric at: egiordan@uwc.edu