Program Evaluation: Assessing the design and effectiveness of CEDAM’s AmeriCorps program

Written by Madeleine March-Meenagh, program analyst

Program evaluations can take on many forms, but they all serve the same fundamental purpose: to see what’s working and what can be done to improve. (Sometimes they can even help us check off a funding requirement!) Every three years, CEDAM reevaluates its AmeriCorps State program for all of those reasons—and in 2020, they hired me to do their evaluation. 

At the time, I was a first-year graduate assistant for Michigan State’s Master of Public Policy Program, (conveniently) taking a course on program evaluation. It was through that course, and a well-timed class presentation by CEDAM’s Rachel Diskin, that I learned about CEDAM’s AmeriCorps evaluation needs. CEDAM’s relationship with our program is mutually beneficial: MPP candidates have the opportunity to fulfill course requirements with real-world experience, while CEDAM can trust that its evaluation is carried out by someone with access to expert training and guidance.

After consulting previous evaluations on the program and administrative data, we decided to re-evaluate program design and implementation. In other words: was the design of the program sound and was the program being implemented according to that design?

CEDAM’s AmeriCorps State program design rests primarily on the assumption that financial education interventions positively impact client financial behavior. The research on such interventions is vast and varied, but effectiveness seems to depend largely on implementation, the target population served, and program design. Because client demographic data is not collected uniformly across sites, we were unable to evaluate how the target population served may impact program effectiveness. However, the FDIC Money Smart curriculum and the structure underpinning the program design appeared to align with commonly accepted best practices.

Satisfied that the program design reflected best practice in theory, we turned to consider program implementation in practice. The evaluation aggregated data in two phases: 

1) surveys for each member and supervisor; and 

2) follow-up one-on-one interviews with each member and supervisor. 

Because we began evaluating the program last March, just as Michigan communities were shutting down in response to the emergence of COVID-19, the AmeriCorps State program’s day-to-day implementation had shifted significantly from its status quo. While this certainly affected the data collected over the course of the evaluation, the pandemic appeared to magnify pre-existing themes more than generate new issues.

Results from the evaluation indicated that improvements in 1) communication and 2) program/host site understanding are both key to more successful program implementation. Our recommendations spanned from host site applications all the way to further evaluation.

In the end, my colleague and I gained invaluable experience to ground our academic training, and CEDAM received a comprehensive evaluation complete with recommendations on the next steps it can take. If you’re looking to evaluate a program of your own, consider reaching out to your local higher education institution to see if it offers a course in evaluation—or #GoGreen and reach out to the MSU MPP program.

Keep an eye out for our next blog post about important considerations for your own in-house evaluation!