Six Principles for Measuring and Communicating the Value of Your Faculty Development Center

This is an era of rapid transformation and heightened opportunities for Faculty Development Centers (FDCs). There is a growing realization that faculty development can be a crucial component in addressing some of the most significant challenges facing higher education, including technology’s impact on teaching, reliance on part-time and distance faculty, and student success.

Tightening higher education budgets, on the other hand, pose a significant threat to programs and centers that are unable to show the value of the services they provide. FDCs rarely operate under financial models that give them direct control over their budget or that generate income directly. It is vital, therefore, for FDC leaders to demonstrate the value of what they do to those who control the budget process. Outcomes-based assessment is a powerful tool for evaluating and improving the services FDCs provide while simultaneously communicating that value.

Outputs vs. Outcomes
Output-based evaluation assesses direct products, particularly the volume of activities in which one is engaged. FDCs that evaluate themselves solely by the number of workshops they provide, or by the number of participants in their events, are measuring an output. While there is definite value in knowing that information, it provides no direct indication of the activities' worth to participants, nor any indication that any change occurred as a result of that activity. Outcomes-based evaluation, on the other hand, assesses changes in the behavior or attitudes of the participants themselves. Here are the general principles I recommend when developing and using outcomes-based assessment:

Principle 1: Identify the key organizational goals for your center
FDCs are widely divergent in their mission and constitution. Is your center in charge of technology training? Adjunct development? Online instructional resources? Institutional assessment? Graduate student development? Regardless of your mission (defined or implied), you need to be able to establish clear goals, or your assessment plan will be ineffective.

Principle 2: Identify two or three outcomes for each organizational goal
If your programs are effective, then what changes might be measurable in faculty or student behavior? One of the organizational goals for my center, for example, is to provide faculty with educational technology instruction. If that instruction is effective, then faculty should become more comfortable with and proficient in using educational technology, and students should report that faculty use technology more frequently and more effectively in their classes.

Principle 3: Identify ways to collect data relevant to the outcomes chosen above
Data can be gathered through faculty surveys, student surveys, institutional data reports, grant reports, publications, focus groups, case study methodology, and so on. If possible, use evaluation tools that have already been developed, rather than reinventing the wheel. Our outcomes for the technology goal above, for example, include the following (a brief sketch of how such targets might be checked against raw survey responses appears after the list):

  • 80% of the faculty who participate in a technology workshop will identify on the workshop evaluation a specific idea or action from that workshop that they plan to implement. (AY14 results: 80% of the faculty identified a specific plan, so we just met this goal.)
  • 75% of the respondents to our faculty survey will agree or strongly agree with the statement “I felt confident creating technology-enhanced learning experiences for my students.” (AY14 results: 73.4% responded as desired, so although we were close, we’ll need to continue to grow in this area.)
  • 80% of the faculty who participate in the full CTL faculty consultation process will show improvement, in the semester of and the semester after consultation, in the categories “Excellent Instructor” (Raw Score All Courses) and “Excellent Course” (Raw Score All Courses) on the IDEA student survey of instruction, compared to the semester before consultation. (Cumulative results AY11-AY14: 90% of the faculty improved, by an average of 15.52 percentage points for “Excellent Instructor” and 16.03 percentage points for “Excellent Course.” This is a strong measure of value for us.)
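
To make this concrete, here is a minimal sketch, in Python, of how a Likert-based target like the second outcome above might be checked. The responses and labels below are hypothetical, invented for illustration; they are not drawn from our actual instruments.

    # Minimal sketch: check a percentage-based outcome target against
    # raw survey responses. All data here is hypothetical.
    responses = ["agree", "strongly agree", "neutral", "agree", "disagree",
                 "strongly agree", "agree", "neutral", "agree", "strongly agree"]

    TARGET = 0.75  # e.g., 75% should agree or strongly agree

    favorable = sum(1 for r in responses if r in ("agree", "strongly agree"))
    rate = favorable / len(responses)

    print(f"{rate:.1%} favorable (target {TARGET:.0%}): "
          f"{'met' if rate >= TARGET else 'not met'}")

The same pattern, counting favorable responses, dividing by the total, and comparing to the target, applies to any percentage-based outcome.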

Principle 4: Understand the limitations of your data
It is very difficult to show a direct (statistically reliable and meaningful) correlation between faculty development activities and improved student learning. Try to establish measures that are as valid as possible, and then accept that much of your data will be quite “messy” from a statistical standpoint. Understand what each of your assessment tools measures. The IDEA form, for example, provides powerful data, but it measures only student perceptions. Students may perceive that a faculty member is doing well (or poorly) at implementing technology in the class when the reality is quite different.
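
As one illustration of that messiness, consider the 73.4% survey result above, just short of the 75% target. Here is a minimal sketch of a normal-approximation confidence interval for that proportion; the sample size is an assumption made for illustration, since the number of respondents is not reported here.

    import math

    # Sketch: normal-approximation 95% confidence interval for a survey
    # proportion. The sample size is an assumption for illustration only.
    p = 0.734   # observed share agreeing or strongly agreeing (AY14)
    n = 120     # hypothetical number of respondents
    se = math.sqrt(p * (1 - p) / n)
    low, high = p - 1.96 * se, p + 1.96 * se

    print(f"95% CI: {low:.1%} to {high:.1%}")  # roughly 65.5% to 81.3%

With 120 hypothetical respondents, the interval comfortably contains the 75% target: a near miss in one year may be sampling noise rather than a real decline.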

Principle 5: Use the data to improve
Use the data to intentionally guide strategic planning. Based on the data you have generated, what areas or services might you improve?

Principle 6: Use the data to tell your story
Data speaks to upper administration! The statement that “84% of our faculty rate our 1:1 technology assistance as effective or very effective” is far more powerful than stating “100 faculty visited the center for 1:1 technology assistance.” I have found that this aspect of outcomes-based assessment—the ability to communicate the impact and value of our center—has been invaluable.

In conclusion, outcomes-based assessment provides FDC leaders with data that can be used both to guide the strategic improvement of their centers and to speak convincingly about the value of what they do.

Dr. Bruce C. Kelley is a professor of music and Director of the Center for Teaching and Learning at the University of South Dakota.