American Evaluation Association 365 Blog

A Tip-a-Day by and for Evaluators

MIE TIG Week: Stafford Hood and Rodney Hopson on Continuing the Journey on Culture and Cultural Context in Evaluation

Wed, 08/20/2014 - 01:15

Greetings AEA and evaluation family, we’re Stafford Hood, professor, University of Illinois at Urbana-Champaign and Director, Center for Culturally Responsive Evaluation and Assessment (CREA), and Rodney Hopson, professor, George Mason University and Senior Research Fellow, Center for Education Policy and Evaluation.

We are long-time members of the AEA Multiethnic Issues in Evaluation (MIE) TIG and have watched the TIG grow over twenty years.  Additionally, we promote the historical and contemporary development of Culturally Responsive Evaluation (CRE). Grounded in the tradition of Robert Stake’s Responsive Evaluation from the 1970s, and influenced by the work of Gloria Ladson-Billings, Jackie Jordan Irvine, and Carol Lee, who coined Culturally Responsive Pedagogy twenty years later, CRE marries these perspectives into a holistic evaluation framework that centers culture throughout the evaluation.  Attending particularly to groups that have been historically marginalized, CRE seeks to bring their interests and matters of equity into the evaluation process.

Hot Tip: Refer to the CRE framework in the 2010 NSF User-Friendly Guide (especially the chapter by Henry Frierson, Stafford Hood, Gerunda Hughes, and Veronica Thomas) and the previous Hot Tip to see how CRE can be applied in evaluation practice.

Lesson Learned: There is recognizable growth in what some may now call our culturally responsive evaluation community, particularly in the presence of a younger and more diverse cadre of evaluators. A recent search of scholar.google.com for the terms culturally responsive evaluation (CRE) and culturally competent evaluation (CCE) appearing anywhere in an article, chapter, or title between 1990 and 2013 indicates a major increase in this discourse over little more than a decade, as illustrated in the table below:

Rad Resources:

  • CREA is an international and interdisciplinary evaluation center grounded in the need to design and conduct evaluations and assessments that embody cognitive, cultural, and interdisciplinary diversity and that are actively responsive to culturally diverse communities and their academic performance goals;
  • CREA’s second conference is coming up: “Forging Alliances For Action: Culturally Responsive Evaluation Across Fields of Practice” will be held September 18-20, 2014 at the Oak Brook Hills Resort, Chicago – Oak Brook, IL, and will feature seasoned and emerging scholars and practitioners in the field;
  • The AEA Statement on Cultural Competence in Evaluation is the membership-approved (2011) document that grew out of the Building Diversity Initiative (co-sponsored by AEA and the W.K. Kellogg Foundation in 1999);
  • Indigenous Framework for Evaluation, which synthesizes Indigenous ways of knowing and Western evaluation practice, is summarized in a Canadian Journal of Program Evaluation 2010 paper by Joan LaFrance and Richard Nichols.

The American Evaluation Association is celebrating Multiethnic Issues in Evaluation (MIE) Week with our colleagues in the MIE Topical Interest Group. The contributions all this week to aea365 come from MIE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Related posts:

  1. Stafford Hood on the 2013 CREA Conference: Repositioning Culture in Evaluation and Assessment
  2. Cultural Competence Week: Karen Anderson on Cultural Competence in Evaluation Resources
  3. Cultural Competence Week: Leah Neubauer on Resources and Updates from the Practice Committee

MIE TIG Week: Ray Kennard Haynes on the Use of Domestic Diversity Evaluation Checklists in Higher Education

Tue, 08/19/2014 - 01:15

My name is Ray Kennard Haynes and I am an Assistant Professor at Indiana University Bloomington with a keen interest in domestic racial Diversity in Higher Education (HE). Since the 1970s, the United States (U.S.) has attempted to address Diversity by focusing primarily on race and gender through Equal Employment Opportunity (EEO) legislation. This legislation produced some gains; however, those gains have now eroded and are under threat due to legal challenges.

HE institutions in the US have ostensibly embraced Diversity and even claim to manage it. Evidence of this commitment to diversity can be seen in the proliferation of Diversity offices and programs at HE institutions and with the advent of the position of Chief Diversity Officer (CDO). The casual observer could reasonably conclude that Diversity has been achieved in HE. Surely, we see evidence of this reality with the CDO position and ubiquitous Diversity commitment statements. Note, too, that the term university can also be construed as: the many and different in one place. Given this meaning and the fact that one in every two U.S. residents will be non-white by the year 2050, Diversity in higher education is a fait accompli. But is HE really diverse with respect to domestic racial groups (i.e. African-Americans and Latino-Americans)?

Hot Tips: Research suggests that despite increasing racial diversity, communities and schools are re-segregating to levels representative of the 1960s. In highly selective institutions, diversity has come to mean many things, and underrepresented domestic students and faculty are becoming an increasingly smaller part of the Diversity calculus. The evidence suggests HE is becoming less domestically diverse: overall increases in diversity co-vary with decreasing access to higher education for African-Americans and Latino-Americans, especially at highly selective schools.

One way for HE to show its commitment to domestic Diversity is to define and evaluate it within the broader construct of DIVERSITY that includes visible and non-visible differences.

Evaluation checklists can be applied to assess domestic diversity deficits and related program implementation thoroughness.

For HE institutions and evaluators who believe that domestic diversity matters, a good place to start is to create Domestic Diversity Evaluation Checklists that assess for both Diversity and Inclusion. These checklists should include dimensions that capture:

  • Diversity investment: the budget (investment) associated with domestic racial diversity
  • Structural diversity: the numbers of underrepresented domestic students and faculty
  • Diversity Climate: decision making and the level of meaningful cross-race interaction and inclusion in shaping the culture and direction of the HE institution

Rad Resources: For practical help with checklists, see Western Michigan University’s page on evaluation checklists and its examples of evaluation checklists.

The American Evaluation Association is celebrating Multiethnic Issues in Evaluation (MIE) Week with our colleagues in the MIE Topical Interest Group. The contributions all this week to aea365 come from MIE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Related posts:

  1. Cultural Competence Week: Lisa Aponte-Soto and Leah Christina Neubauer on Increasing the AEA Latino/a Visibility and Scholarship
  2. GEDI Week: Faheemah Mustafaa on Pursuing Racial Equity in Evaluation Practice
  3. GEDI Week: Nnenia Campell and Saúl Maldonado on Amplifying Definitions of Diversity in the Discourse and Practice of Culturally Responsive Evaluation, Part 1

MIE TIG Week: Nicole Clark on Engaging Young Women of Color in Program Design & Evaluation

Mon, 08/18/2014 - 01:15

Hello! I’m Nicole Clark, a licensed social worker and independent evaluator for Nicole Clark Consulting. I specialize in working with organizations and agencies to design, implement, and evaluate programs and services specifically for women and young women of color.

Young women of color (YWOC) face many intersecting issues, including racism, sexism, ageism, and marginalization related to immigration status, socioeconomic status, and sexuality. How can evaluators make sure the programs we design and evaluate are affirming, inclusive, and raise the voices of YWOC?

To help you be more effective at engaging young Black, Latina, Asian/Pacific Islander, and Native/Indigenous women in your evaluation work, here are my lessons learned and a rad resource on engaging YWOC:

Lessons Learned: Not all YWOC are the same – YWOC are not a monolithic group. Within communities of color, there are a variety of cultures, customs, and regional differences to consider.

Meet YWOC where they are – What are the priorities of the YWOC involved in the program or service? When an organization is developing a program on HIV prevention while the YWOC they’re targeting are more concerned with the violence happening in their community, there’s a disconnect. What the organization (and even you as the evaluator) considers a high priority may not be a priority for the YWOC involved.

Be mindful of slang and unnecessary jargon – Make your evaluation questions easy to understand and free from jargon. Be mindful of using slang words with YWOC. Given cultural and regional considerations (along with the stark difference in age between you as the evaluator and the YWOC), slang words may not go over well.

Start broad, then get specific – Let’s use the example of creating evaluation questions on reproductive rights for YWOC. Creating evaluation questions around “reproductive rights” may not be as effective with YWOC as creating evaluation questions on “taking care of yourself.” While both can mean the same thing, “taking care of yourself” evokes an overall feeling of wellness and can get YWOC thinking of specific ways in which they want to take care of themselves. This can be narrowed down to aspects of their health they want to feel more empowered about, and you can help organizations home in on these needs to develop a program or service that YWOC would be interested in.

Rad Resource: A great example of a YWOC-led program is the Young Women of Color Leadership Council (YWOCLC), a youth initiative through Advocates For Youth. Through thoughtful engagement of young people in their work, the YWOCLC cultivates a message of empowerment for young women of color, and it serves as a great example of a true youth-organization partnership framework. Pass this resource along to the youth-focused organizations you work with!

The American Evaluation Association is celebrating Multiethnic Issues in Evaluation (MIE) Week with our colleagues in the MIE Topical Interest Group. The contributions all this week to aea365 come from MIE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Related posts:

  1. Cheri Hoffman on Youth-Led Evaluation
  2. Kim Sabo Flores on Engaging Youth in Evaluation
  3. YFE Week: Katie Richards-Schuster on a Declaration for Youth Participation

MIE TIG Week: Dominica McBride on Living with the People

Sun, 08/17/2014 - 01:15

Hi again, I’m Dominica McBride, Founder and CEO of Become: Center for Community Engagement and Social Change. A few weeks ago, I wrote a tip on the importance of cultural competence. I wrote on the perpetual sociopolitical dilemmas we face as a society. Today, I’m providing a way to contribute to alleviating these issues.

Start with the wisdom of Lao Tzu: “Go to the people. Live with them. Learn from them. Love them. Start with what they know. Build with what they have. But with the best leaders, when the work is done, the task accomplished, the people will say ‘We have done this ourselves.’”

It is out of this concept that real and sustainable transformation happens. I work in communities that are marginalized both socio-politically and economically. A remedy to this reality is co-creation; those affected by the decisions sit at the table, are equal partners in making the decisions, and co-create the conditions they desire.

Lesson Learned: For this to happen, framing and language are key. In one community project, we decided to call the community evaluation team the “elevation team,” which connotes a collective process of creating and realizing a vision. From this framing and subsequent relationship building, we built an evaluation team with parents, youth, elders, and organizational staff. Together, we’ve established a team vision and mission, evaluation questions, and methods, and are now collecting data.

Hot Tips:

Believe in people. The fact that someone may not have completed high school or may not be over the age of 12 doesn’t mean they are not capable. The youth on our community evaluation team have come up with some of the best evaluation questions and are now engaging other youth in ways we, as adults, are less able to.

Ask. Some make the mistake of thinking that community members (especially in marginalized areas) would not want to be involved in an evaluation or social change process. I’ve found this to be far from the truth, especially if the evaluation targets an issue about which they are passionate. In the discovery process, we learned what they cared about and then asked if they would be involved.

Build genuine, interdependent relationships. One-on-ones are at the heart of community organizing. Why? It’s because relationships are necessary in developing and maintaining cohesion and motivation. They are the glue for teams, especially those addressing challenging social issues. If relationships fall apart, the initiative will likely fail.

Rad Resources:  Check out the Community Tool Box for tips and tools on strengthening partnerships, advocacy, and sustaining the initiative.

Read Whatever It Takes by Paul Tough, an inspiring story about how to create change.

The American Evaluation Association is celebrating Multiethnic Issues in Evaluation (MIE) Week with our colleagues in the MIE Topical Interest Group. The contributions all this week to aea365 come from MIE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. 


Related posts:

  1. YFE Week: Katie Richards-Schuster on a Declaration for Youth Participation
  2. Linda Lee on Using Visual Methods to Give Voice in Evaluations
  3. Pam Larson Nippolt on Soft Skills for Youth

Sheila B Robinson on Seeking an aea365 Intern!

Sat, 08/16/2014 - 05:50

Hello loyal readers! I am Sheila B Robinson, aea365’s Lead Curator and sometimes Saturday contributor and I’m looking for a partner!

Get Involved: We’re looking for an aea365 intern  – a volunteer curator willing to donate perhaps an hour or two a week to help aea365 continue its commitment to maintaining a high quality daily blog by and for evaluators.

The aea365 intern will primarily assist with:

  • Recruiting contributors – sending invitations, communicating with leaders of sponsoring groups
  • Shepherding contributors – sending reminders, asking questions, giving thanks
  • Uploading contributions – entering posts into the aea365 WordPress-based website (very easy to learn!)
  • Contributing occasional aea365 posts

The commitment averages 1-2 hours per week for six months, beginning approximately October 1 and running through approximately April 1.

Lesson Learned: I began my position as Lead Curator with a six-month commitment and that was a year and a half ago! Yes, it’s just that fun and rewarding. I learn so much from reading each post, and I love “meeting” evaluators through my communication with them.

Hot Tip: The ideal intern has contributed before to aea365, or at a minimum is a regular reader familiar with the format, breadth, and style of entries. She or he has good writing and communication skills and is interested in making connections across the evaluation community. Finally, the work can be done remotely, from anywhere, and thus the intern should be self-directed, organized, and adept at meeting deadlines.

Serving as an aea365 intern is a great way to build your professional network and expand your knowledge of the breadth and depth of the field. The intern will receive ongoing mentoring throughout the term of the internship as well as support in learning how to use WordPress.

This is a volunteer position, and as such compensation will be in the form of our sincerest gratitude, thanks, and recognition of your contribution!

To apply – on or before Friday, September 12, send the following to aea365@eval.org: (1) a brief letter of interest noting your favorite type(s) of aea365 posts and why, and (2) an example original aea365 post following the contribution guidelines and demonstrating your writing/editing capacity.

Cool Trick: Be sure to follow the contribution guidelines, and proofread your work when submitting your example post.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Related posts:

  1. Susan Kistler on Seeking an aea365 Intern
  2. Susan Kistler on Seeking an aea365 Intern
  3. Sheila B. Robinson on Joining the aea365 Author Community

Kirk Knestis on Innovation Research and Development (R&D) vs. Program Evaluation

Fri, 08/15/2014 - 01:15

Kirk Knestis here, CEO of Hezel Associates—a research and evaluation firm specializing in education innovations. Like many of you, I’ve participated in “evaluation versus research” conversations. That distinction is certainly interesting, but our work studying science, technology, engineering, and math (STEM) education leaves me more intrigued with what I call the “NSF Conundrum”—confusion among stakeholders (not least National Science Foundation [NSF] program officers) about the expected role of an “external evaluator” as described in a proposal or implemented for a funded project. This has been a consistent challenge in our practice, and is increasingly common among other agencies’ programs (e.g., Departments of Education or Labor). The good news is that a solution may be at hand…

Lessons Learned – The most constructive distinction here is between (a) studying the innovation of interest, and (b) studying the implementation and impact of the activities required for that inquiry. For this conversation, call the former “research” (following NSF’s lead) and the latter “evaluation”—or more particularly “program evaluation,” to further elaborate the differences. Grantees funded by NSF (and increasingly by other agencies) are called “Principal Investigators.” It is presumed that they are doing some kind of research. The problem is that their research sometimes looks like, or gets labeled “evaluation.”

Hot Tip – If it seems like this is happening (purposes and terms are muddled), reframe planning conversations around the differences described above—again, between research, or more accurately “research and development” (R&D) of the innovation of interest, and assessments of the quality and results of that R&D work (“evaluation” or “program evaluation”).

Hot Tip – When reframing planning conversations, take into consideration the new-for-2013 Common Guidelines for Education Research and Development developed by NSF and the U.S. Department of Education’s Institute of Education Sciences (IES). The Guidelines delineate six distinct types of R&D, based on the maturity of the innovation being studied. More importantly, they clarify “justifications for and evidence expected from each type of study.” Determine where in that conceptual framework the proposed research is situated.

Hot Tip – Bearing that in mind, explicate ALL necessary R&D and evaluation purposes associated with the project in question. Clarify questions to be answered, data requirements, data collection and analysis strategies, deliverables, and roles separately for each purpose. Define, budget, assign, and implement the R&D and the evaluation, noting that some data may support both. Finally, note that the evaluation of research activities poses interesting conceptual and methodological challenges, but that’s a different tip for a different day…

Rad Resources – The BetterEvaluation site features an excellent article framing the research-evaluation distinction: Ways of Framing the Difference between Research and Evaluation.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Related posts:

  1. Climate Ed Eval Week: Dan Zalles on Maintaining Flexibility in Evaluating Outcomes Across Varying Implementations
  2. STEM TIG Week: Kim Kelly on Key Insights on the Journey from Psychological Science Researcher to Program Evaluator
  3. STEM Week: Veronica Smith on Evaluating Student Learning in STEM Subjects

Carla Hillerns and Pei-Pei Lei on You had me at Hello: Effective Email Subject Lines for Survey Invitations

Thu, 08/14/2014 - 01:15

Did we get your attention? We hope so. We are Carla Hillerns and Pei-Pei Lei – survey enthusiasts at the Office of Survey Research at the University of Massachusetts Medical School.

An email subject line can be a powerful first impression of an online survey. It has the potential to convince someone to open your email and take your survey. Or it can be dismissed as unimportant or irrelevant. Today’s post offers ideas for creating subject lines that maximize email open rates and survey completion rates.

Hot Tips:

  • Make it compelling – Include persuasive phrasing suited for your target recipients, such as “make your opinion count” and “brief survey.” Research in the marketing world shows that words that convey importance, like “urgent,” can lead to higher open rates.
  • Be clear – Use words that are specific and recognizable to recipients. Mention elements of the study name if they will resonate with respondents but beware of cryptic study names – just because you know what it means doesn’t mean that they will.
  • Keep it short – Many email systems, particularly on mobile devices, display a limited number of characters in the subject line. So don’t exceed 50 characters.
  • Mix it up – Vary your subject line if you are sending multiple emails to the same recipient.
  • Avoid words like “Free Gift” (even if you offer one) – Certain words may cause your email to be labeled as spam.
  • Test it – Get feedback from stakeholders before you finalize the subject line. To go one step further, consider randomly assigning different subject lines to pilot groups to see if there’s a difference in open rates or survey completion rates.
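The last tip above lends itself to a small worked example. Below is a minimal, hypothetical Python sketch (not from the original post) of randomly assigning two candidate subject lines to a pilot sample and then testing whether open rates differ; the subject lines, sample size, and open counts are all invented for illustration.

```python
# Hypothetical pilot test of two subject lines; all names and numbers are invented.
import random
from scipy.stats import chi2_contingency

subject_lines = [
    "Make your opinion count - brief patient survey",  # candidate A
    "Rate your recent visit in 5 minutes",             # candidate B
]

# Randomly assign each pilot recipient to a subject line; the actual sending and
# open tracking would happen in your survey or email platform.
recipients = [f"person_{i}@example.org" for i in range(400)]
random.seed(42)
assignment = {email: random.choice(subject_lines) for email in recipients}

# Illustrative tallies after the pilot: opens and non-opens per subject line.
opened = {subject_lines[0]: 58, subject_lines[1]: 81}
not_opened = {subject_lines[0]: 142, subject_lines[1]: 119}

# Two-proportion comparison via a chi-square test of the 2x2 table.
table = [[opened[s], not_opened[s]] for s in subject_lines]
chi2, p_value, _, _ = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p_value:.3f}")
```

The same layout works for survey completion rates: swap the open/non-open counts for completed/not-completed counts per subject line.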

Cool Trick:

  • Personalization – Some survey software systems allow you to merge customized/personalized information into the subject line, such as “Rate your experience with [Medical Practice Name].”
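Many survey platforms handle this merge internally with their own field syntax; the snippet below is only a generic Python illustration of the idea, with invented field names and addresses.

```python
# Hypothetical subject-line merge; field names and recipients are invented.
recipients = [
    {"email": "a.patient@example.org", "practice_name": "Riverside Family Medicine"},
    {"email": "b.patient@example.org", "practice_name": "Lakeview Pediatrics"},
]

template = "Rate your experience with {practice_name}"

for person in recipients:
    subject = template.format(**person)  # fill the merge field
    print(f"To: {person['email']} | Subject: {subject}")
```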

Lesson Learned:

  • Plan ahead for compliance – Make sure that any recruitment materials and procedures follow applicable regulations and receive Institutional Review Board approval if necessary.

Rad Resource:

  • This link provides a list of spam trigger words to avoid.

We’re interested in your suggestions. Please leave a comment if you have a subject line idea that you’d like to share.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Related posts:

  1. Shortcut Week: Ana Drake on Macros and Hotkeys
  2. Susan Kistler on Formatting Qualitative Questions for Online Surveys
  3. Katya Petrochenkov on Surveygizmo

Molly Ryan on Using Icon Array to Visualize Data

Wed, 08/13/2014 - 01:15

Hello! I’m Molly Ryan, a Research Associate at the Institute for Community Health (ICH), a non-profit in Cambridge, MA that specializes in community based participatory research and evaluation. I am part of the team evaluating the Central Massachusetts Child Trauma Center (CMCTC) initiative, which seeks to strengthen and improve access to evidence-based, trauma-informed mental health treatment for children and adolescents. I would like to share a great resource that we use to visualize and communicate findings with our CMCTC partners.

Rad Resource: Icon Array. University of Michigan researchers developed Icon Array to communicate risks to patients simply and effectively. For more information on why icons are so rad, check out Icon Array’s explanation and bibliography.

Hot Tip: Icon Array offers 5 different icons to choose from.

[Image: sample icon arrays showing “6 out of 11 reassessments (54.5%) received”]

Hot Tip: Icons aren’t just for risk communication! We use icons to help our partners understand and visualize their progress collecting reassessment data for clients.

[Image: icon array showing “14 out of 24 reassessments (58.3%) received” – 9 out of 14 (64.3%) complete, 5 out of 14 (35.7%) incomplete]

Cool Trick: Icon Array allows you to illustrate partial risk by filling only a portion of the icon. We used this feature to communicate whether a reassessment was complete or incomplete for a given client.
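Icon Array itself is a point-and-click web tool, so no code is needed to use it. For readers who want a similar icon-array (waffle-style) graphic inside their own reporting pipeline, here is a hedged matplotlib sketch of the “14 out of 24 reassessments received” example above; the grid size, marker shape, and colors are arbitrary choices for illustration, not features of the Icon Array tool.

```python
# Sketch of an icon-array-style grid with matplotlib; layout and colors are invented.
import matplotlib.pyplot as plt

total, received = 24, 14
cols = 6                       # icons per row
rows = -(-total // cols)       # ceiling division

fig, ax = plt.subplots(figsize=(4, 3))
for i in range(total):
    row, col = divmod(i, cols)
    color = "#2b6cb0" if i < received else "#d0d0d0"  # filled vs. unfilled icon
    ax.add_patch(plt.Circle((col, rows - 1 - row), 0.4, color=color))

ax.set_xlim(-0.5, cols - 0.5)
ax.set_ylim(-0.5, rows - 0.5)
ax.set_aspect("equal")
ax.axis("off")
ax.set_title(f"{received} of {total} reassessments received ({received / total:.1%})")
plt.show()
```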

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Related posts:

  1. Sheila B Robinson on Being an “Iconic” Presenter! ;-)
  2. Susan Kistler on a Free Tool for Adding Interactivity to Online Reports: Innovative Reporting Part IV
  3. Loraine Park, Carolyn Verheyen, and Eric Wat on Tips on Asset Mapping

Emily Lauer and Courtney Dutra on Person-Centered Evaluation: Aging and Disability Services

Tue, 08/12/2014 - 01:15

Hello, we are Emily Lauer and Courtney Dutra from the University of Massachusetts Medical School’s Center for Developmental Disability Evaluation and Research (CDDER). We have designed and conducted a number of evaluations of programs and projects for elders and people with disabilities. In this post, we focus on the topic of person-centered evaluations. We have found this type of evaluation to be one of the most effective strategies for evaluating aging and/or disability services, as it tends to produce more valid and useful results by empowering consumers in the evaluation process.

Why person-centered evaluation? Traditional evaluations tend to use a one-size-fits-all approach that risks substituting the evaluator’s judgment for consumers’ individual perspectives and may not evaluate components that consumers feel are relevant. In a person-centered evaluation, consumers of the program’s or project’s services are involved throughout the evaluation process. A person-centered evaluation ensures the program or project is evaluated in a way that:

  • is meaningful to consumers;
  • is flexible enough to incorporate varied perspectives; and
  • results in findings that are understandable to and shared with consumers.

Lessons Learned:

Key steps to designing a person-centered evaluation:

  1. Design the evaluation with consumers. Involve consumers in the development process for the evaluation and its tools.
  2. Design evaluations that empower consumers
    • Utilize evaluation tools that support consumers in thinking critically and constructively about their experiences and the program under evaluation. Consider using a conversational format to solicit experiential information.
    • Minimize the use of close-ended questions that force responses into categories. Instead, consider methods such as semi-structured interviews that include open-ended questions which enable consumers to provide feedback about what is relevant to them.
    • Consider the evaluation from the consumer’s perspective. Design evaluation tools that support varied communication levels, are culturally relevant, and consider the cognitive level (e.g. intellectual disabilities, dementia) of consumers.
  3. Involve consumers as evaluators. Consider training consumers to help conduct the evaluation (e.g. as interviewers).
  4. Use a supportive environment. In a supportive environment, consumers are more likely to feel they can express themselves without repercussion, their input is valued, and their voices are respected, resulting in more meaningful feedback.

Hot Tip: Conduct the evaluation interview in a location that is comfortable and familiar for the consumer. When involving family or support staff to help the consumer communicate or feel comfortable, ensure they do not speak “for” the consumer, and that the consumer chooses their involvement.

  5. Involve consumers in synthesizing results. Involve consumers in formulating the results of the evaluation.

Rad Resource: Use Plain Language to write questions and summarize findings that are understandable to consumers.

Many strategies exist to elicit feedback from consumers who do not communicate verbally. Use these methods to include the perspective of these consumers.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Related posts:

  1. EPE TIG Week: Yvonne M. Watson on Buying Green Today to Save and Sustain Our Tomorrow
  2. MA PCMH Eval Week: Christine Johnson on Self-Assessment Medical Home Transformation Tools
  3. Joi Moore on Facilitating Systematic Evaluation Activities

PD Presenters: Kerry Zaleski, Mary Crave and Tererai Trent on Whose Judgment Matters Most? Using Child-to-Child approaches to evaluate vulnerability-centered programs

Mon, 08/11/2014 - 01:15

Hi Eval Friends! We are Kerry Zaleski and Mary Crave of the University of Wisconsin-Extension and Tererai Trent of Tinogona Foundation and Drexel University. Over the past few years we have co-facilitated workshops on participatory M&E methods for centering vulnerable voices at AEA conferences and eStudies.

This year, we are pleased to introduce participatory processes for engaging young people in evaluation during a half day professional development workshop, borrowing from Child-to-Child approaches. Young people can be active change agents when involved in processes to identify needs, develop solutions and monitor and evaluate changes in attitudes and behaviors for improved health and well-being.

Child-to-Child approaches help center evaluation criteria around the values and perspectives of young people, creating environments for continual learning among peers and families. Children learn new academic skills and evaluative thinking while having fun solving community problems!

Child-to-Child approaches help young people lead their communities to:

  • Investigate, plan, monitor and evaluate community programs by centering the values and perspective of people affected most by poverty and inequality.
  • Overcome stigma and discrimination by intentionally engaging marginalized people in evaluation processes.

We are excited to introduce Abdul Thoronka, a community health specialist from Sierra Leone, as a new member of our team. Abdul has extensive experience using participatory methods and Child-to-Child approaches in conflict- and trauma-affected communities in Africa and the US.

Lessons Learned:

  • Adult community members tend to be less skeptical and more engaged when ‘investigation’ types of exercises are led by children in their community rather than external ‘experts’. The exercises make learning about positive behavior change fun and entertaining for the entire community.
  • Young people are not afraid to ‘tell the truth’ about what they observe.
  • Exercises to monitor behaviors often turn into a healthy competition between young people and their families.

Hot Tips:

  • Child-to-child approaches can be used to engage young people at all stages of an intervention. Tools can include various forms of community mapping, ranking, prioritizing, values-based criteria-setting and establishing a baseline to measure change before and after an intervention.
  • Build in educational curricula by having the children draw a matrix, calculate percentages or develop a bar chart to compare amounts or frequency by different characteristics.
  • Explain the importance of disaggregating data to understand health and other disparities by different attributes (e.g. gender, age, ability, race, ethnicity)
  • Ask children to think of evaluation questions that would help them better understand their situation.

Rad Resources:

Child-to-Child Trust

The Barefoot Guide Connection

AEA Coffee Break Webinar 166: Pocket-Chart Voting-Engaging vulnerable voices in program evaluation with Kerry Zaleski, December 12, 2013 (recording available free to AEA members).

Robert Chambers’ 2002 book: Participatory Workshops: A Sourcebook of 21 Sets of Ideas and Activities.

Want to learn more? Register for Whose Judgment Matters Most: Using Child-to-Child approaches to evaluate vulnerability-centered programs at Evaluation 2014.

We’re featuring posts by people who will be presenting Professional Development workshops at Evaluation 2014 in Denver, CO. Click here for a complete listing of Professional Development workshops offered at Evaluation 2014. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Related posts:

  1. Kim Sabo Flores on Engaging Youth in Evaluation
  2. Ed Eval Week: Silvana Bialosiewicz on Tips for Collecting Valid and Reliable Data from Children
  3. YFE Week: Kim Sabo Flores and David White on Youth Focused Evaluation: One Year And Growing Strong!

PD Presenters: Brian Yates on Doing Cost-Inclusive Evaluation. Part IV: Cost-Effectiveness Analysis and Cost-Utility Analysis

Sun, 08/10/2014 - 05:30

Hi! I’m Brian Yates. This is the fourth piece in a series of aea365 posts on using costs in evaluation. I started using costs as well as outcomes in my program evaluations in the mid-1970s, when I joined the faculty of the Department of Psychology at American University in Washington, DC. Today I’m still including costs in my research and consultation on mental health, substance abuse, and consumer-operated services.

Three earlier posts in this series focused on evaluating costs, benefits, and cost-benefit of programs; there’s even more to cost-inclusive evaluation!

Lesson Learned: What if important outcomes of a program are not monetary, and cannot be converted into monetary units? Easy answer: do a cost-effectiveness analysis or a cost-utility analysis!

Cost-effectiveness analysis (CEA) describes relationships between types, amounts, and values of resources consumed by a program and the outcomes of that program — with outcomes measured in their natural units. For example, the outcome of a prevention program for seasonal depression could be measured as days free of depression. Program costs could be contrasted to these outcomes by calculating “dollars per depression-free day” or “average hours of therapy A versus therapy B per depression-free day generated.”

Hot Tip: How to compare apples and oranges. “But how can you compare costs of generating one outcome with costs of generating another? Cost per depression-free day versus cost per drug-free day?!” No problem: compare these “apples” and “oranges” by bumping the units up one notch of generality, to fruit. Diverse health program outcomes now are measured in common units of Quality-Adjusted Life Years (QALYs), with a year of living with depression worth substantially less than a year of living without depression. This and other forms of cost-utility analysis (CUA) are increasingly used for health services funding.
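To make the arithmetic behind these ratios concrete, here is a small illustrative Python sketch; the programs, costs, and outcome figures are invented rather than drawn from the post. It computes cost per depression-free day, cost per QALY, and the incremental cost per QALY gained when comparing two hypothetical therapies.

```python
# Illustrative cost-effectiveness and cost-utility ratios; all figures are invented.
programs = {
    # program: (total cost in dollars, depression-free days produced, QALYs gained)
    "Therapy A": (120_000, 9_500, 14.0),
    "Therapy B": (150_000, 11_000, 17.5),
}

for name, (cost, dfd, qalys) in programs.items():
    print(f"{name}: ${cost / dfd:.2f} per depression-free day, "
          f"${cost / qalys:,.0f} per QALY")

# Incremental cost-utility ratio: extra dollars per extra QALY from choosing B over A.
(cost_a, _, qaly_a), (cost_b, _, qaly_b) = programs["Therapy A"], programs["Therapy B"]
icer = (cost_b - cost_a) / (qaly_b - qaly_a)
print(f"Incremental cost per QALY gained (B vs. A): ${icer:,.0f}")
```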

Lessons Learned:

Insight Offered: It’s easy to dismiss the use of costs in evaluation with “…shows the price of everything and the value of nothing.” Actually, cost-inclusive evaluation encompasses the types and amounts of limited societal resources used to achieve outcomes, measured in ways meaningful to funders and other stakeholders.

More? Yes! Lately I’ve gained a better understanding of relationships between resources invested in programs and outcomes produced by programs when I work with stakeholders to also include information on program activities and clients’ biopsychosocial processes. More on that later.

Rad Resources:

Cost-effectiveness analysis (2nd edition) by Levin and McEwan.

Analyzing costs, procedures, processes, and outcomes in human services by Yates.

Want to learn more? Brian will be presenting a Professional Development workshop at Evaluation 2014 in Denver, CO. Click here for a complete listing of Professional Development workshops offered at Evaluation 2014. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Related posts:

  1. Brian Yates on Doing Cost-Inclusive Evaluation. Part II: Measuring Monetary Benefits
  2. Brian Yates on Doing Cost-Inclusive Evaluation – Part I: Measuring Costs
  3. PD Presenters Week: Brian Yates on Doing Cost-Inclusive Evaluation. Part III: Cost-Benefit Analysis

Dan McDonnell on Becoming an Amateur Graphic Designer with Canva

Sat, 08/09/2014 - 11:16

Hello, my name is Dan McDonnell and I am a Community Manager at the American Evaluation Association (AEA) and a regular contributor to AEA365. Sharing photos has always been a popular pastime, and the rise of social media has made it easier than ever. The only major obstacle is the differing file-dimension requirements by channel: you’ll often have to resize a photo to make it fit within the dimensions of the channel you’re using, and there’s no uniformity in requirements between Facebook, LinkedIn and Twitter.

Canva is an online tool that turns you into an amateur graphic designer. Let’s say your research has uncovered some interesting facts, and you’d like to visualize this data in an interesting way. Canva can help you turn that data into a visually stunning Infographic, or a cool flyer – with no graphic design experience needed!

With drag-and-drop functionality, you can create collages, create or alter text, pull stunning backgrounds or personal photos into preset templates, and browse an extensive library of graphics, stock photos, and layout options to use in your designs. The main graphics manipulation features come free, and some stock graphics are available for purchase (usually around $1.00 each).

Hot Tip: Create a Photo Collage

[Image: the Canva template selection bar]

Starting a new design on Canva is a cinch. First, select from one of the preset templates from the top bar (see image above). You can choose from a number of options, including a header photo for a handful of social networks, business cards, general social media graphics or you can even set your own custom dimensions. Once you’ve made your selection, you can choose a layout to customize, from hundreds of different examples. I recommend starting with something simple – like a photo collage. Click ‘Search’ and select the ‘Grids’ option.

Make Your Selection

Select one of the different Grid layouts by clicking, then click the ‘Uploads’ button to upload photos from either your Facebook page or your hard drive. Once they’ve been uploaded, simply drag and drop into the design area, and once you’re pleased with how it looks, click ‘Link and Publish,’ and you’re ready to go. Simply download the image file, and share it on the social media platform of your choice!

This is really just scratching the surface of what Canva is capable of, but hopefully it gives you enough skills to be dangerous as an amateur graphic designer. Enjoy!

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Dan McDonnell on Google + | Dan McDonnell on Twitter  

Related posts:

  1. Dan McDonnell on Setting Up Your New Twitter Profile Page
  2. Dan McDonnell on Using Lists to Become a Twitter Power User
  3. Dan McDonnell on Upcoming Changes to Twitter

CP TIG Week: Karen Countryman-Roswurm and Bailey Patton on Qualitative Research Methods as an Empowering Practice with Marginalized Youth

Fri, 08/08/2014 - 01:15

We are Karen Countryman-Roswurm, Executive Director, and Bailey Patton, Community Outreach Coordinator, at Wichita State University’s Center for Combating Human Trafficking (CCHT). CCHT provides education, training, consultation, research, and public policy services to build the capacity for effective anti-trafficking prevention, intervention, and aftercare responses.

A primary service of CCHT is training organizations on The Lotus Victim to Vitality Anti-Trafficking ModelTM – a model that includes practice tools such as the Domestic Minor Sex Trafficking Risk and Resiliency Assessment (DMST-RRA). The DMST-RRA is based on interviews with 258 youth and is intended to assist direct-service providers in 1) increasing identification of young people at-risk of and/or subjugated to DMST; and 2) providing individualized strengths-based prevention and intervention strategies. During the development of the DMST-RRA, we learned invaluable lessons on engaging youth in empowering practices through qualitative research.

Lessons Learned:

  • Allow the process to be organically healing – This could be the first time the participant has spoken about or reflected on their experience. The process of sharing one’s story can be empowering and healing when done in a safe and non-exploitive environment.
  • Let the participant lead – Be flexible and fully engaged in the process. Allow the participant to take the interview where it needs to go. Do not let your desire for information or research curiosities define the experience for the participant.
  • Reflect the participant’s words back to them – By hearing their words repeated back to them, participants have the opportunity to gain insight, process, and reach their own epiphany.
  • Allow participants the opportunity to find and use their own voice – Do not try to define the experience for the participant. Let them give words to their feelings, emotions and thoughts. Telling participants what you think of their situation is disempowering.
  • Offer Validation – Help relieve the participant of self-blame and guilt for their past experiences and encourage them to focus on resiliency factors and strength.

Hot Tip:

  • During the development of the DMST-RRA, facilitating, transcribing, and coding qualitative interviews – accounts of the real lives of those at-risk of and/or subjugated to DMST – was at times painfully heart-wrenching. Therefore: 1) Have supportive, competent partners. Tara Gregory with Wichita State University’s Center for Community Support and Research was extremely helpful during this process. 2) Recognize yourself as a “wounded healer.” Whether engaging in therapeutic and/or research practices, we must consistently seek to heal ourselves in a manner that enables us to utilize our full professional selves.

Rad Resources:

  • Wichita State University, Center for Combating Human Trafficking. Our website includes resources such as Sharing the Message of Human Trafficking: A Public Awareness and Media Guide to assist those interested in joining the anti-trafficking movement.

The American Evaluation Association is celebrating CP TIG Week with our colleagues in the Community Psychology Topical Interest Group. The contributions all week come from CP TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Related posts:

  1. CP Week: Tara Gregory on Using Storytelling to Help Organizations Develop Logic Models
  2. Oliwier Dziadkowiec and Trish Peaster on Using Social Network Analysis in Evaluation
  3. Martha Henry on Data Confidentiality and Data Ownership

CP TIG Week: Jeff Sheldon on using SEEPPO to determine empowerment evaluation model fidelity, adherence to empowerment evaluation principles, and the likelihood of psychological well-being outcomes

Thu, 08/07/2014 - 01:15

I’m Jeff Sheldon from the School of Social Science, Policy, and Evaluation at Claremont Graduate University and today I’m introducing the Survey of Empowerment Evaluation Practice, Principles, and Outcomes (SEEPPO). I developed SEEPPO for my dissertation, but more important, as a tool that can be modified for use by researchers on evaluation and evaluation practitioners.

For practitioners, SEEPPO is an 82-item self-report survey (92 items for researchers) across seven sections (nine for researchers).

  • Section one items (“Your evaluation activities”) ask for a report on behaviors in terms of the specific empowerment evaluation steps implemented.
  • Section two (“Evaluation Participant Activities”) asks for observations on the evaluation-specific behaviors of those engaged in the evaluation as they relate to the empowerment evaluation steps implemented.
  • Section three (“Changes you observed in individual’s values”) asks for a report on changes in evaluation-related values by comparing the values observed at the beginning of the evaluation to those observed at the end of the evaluation.
  • Section four items (“Changes you observed in individual’s behaviors”) ask for a report on changes observed in evaluation-related behavior and whether the sub-constructs characterizing the psychological well-being outcomes of empowerment (i.e. knowledge, skills/capacities, self-efficacy) and self-determination (competence, autonomy, and relatedness) were present by comparing observed behaviors at the beginning of the evaluation to those at evaluation’s end.
  • Section five (“Changes you observed within the organization”) items ask for a report on the changes observed within the organization as a result of the evaluation by comparing various organizational capacities at the beginning of the evaluation to those observed at evaluation’s end.
  • Section six (“Inclusiveness”) asks about the extent to which everyone who wanted to fully engage in the evaluation was included.
  • Section seven (“Accountability”) items ask about who the evaluator was accountable to during the evaluation.
  • Lastly, the items in sections eight and nine, for researchers, ask about the evaluation model used and demographics.

This is a brief “snap-shot” of SEEPPO. Item development was based on: 1) constructs found in the literature regarding the three known empowerment evaluation models and their respective implementation steps; 2) the ten principles (i.e., six process and four outcome) of empowerment evaluation; 3) the purported empowerment and self-determination outcomes for individuals and organizations engaged in the process of an empowerment evaluation; and 4) constructs found in the humanistic psychology literature on empowerment theory and self-determination theory.

Hot Tip: The results of SEEPPO can be used to determine whether you or your subjects are adhering with fidelity to the empowerment evaluation model being implemented, which principles of empowerment evaluation are in evidence, and the likelihood of empowerment and self-determination outcomes.

Rad Resource: Coming soon! SEEPPO will soon be widely available.

The American Evaluation Association is celebrating CP TIG Week with our colleagues in the Community Psychology Topical Interest Group. The contributions all week come from CP TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Related posts:

  1. Alice Hausman on Measuring Community -Defined Indicators of Success
  2. CPE Week: Abraham Wandersman on Empowerment Evaluation and Getting to Outcomes
  3. Jeff Sheldon on the Readiness for Organizational Learning and Evaluation instrument

CP TIG Week: Rachel Becker-Klein on Using Most Significant Change: A Participatory Evaluation Strategy That Empowers Clients and Builds Their Evaluation Capacity

Wed, 08/06/2014 - 01:15

My name is Rachel Becker-Klein and I am an evaluator and a Community Psychologist with almost a decade of experience evaluating programs. Since 2005, I have worked with PEER Associates, an evaluation firm that provides customized, utilization-focused program evaluation and educational research services for organizations nationwide.

Recently I have been using an interview and analysis methodology called Most Significant Change (MSC). MSC is a strategy that involves collecting and systematically analyzing significant changes that occur in programs and the lives of program participants. The methodology has been found to be useful in monitoring programmatic changes, as well as evaluating the impact of programs.

Lessons Learned: Many clients are interested in taking an active role in their evaluations, but may not be sure how to do so. MSC is a fairly intuitive approach to collecting and analyzing data that clients and participants can be trained to use. Having project staff interview their own constituents can help to create a high level of comfort for interviewees, allowing them to share more openly. Staff-conducted interviews also give staff a sense of empowerment in collecting data. The MSC approach also includes a participatory approach to analyzing the data. In this way, the methodology can be a capacity-building process in and of itself, supporting project staff to learn new and innovative monitoring and evaluation techniques that can be integrated into their own work once the external evaluators leave.

Cool Trick: In 2012, Oxfam Canada contracted with PEER Associates to conduct a case study of their partner organization in the Engendering Change (EC) program in Zimbabwe – the Matabeleland AIDS Council (MAC). The EC program funds capacity-building of Oxfam Canada’s partner organizations. This program is built around a theory of change that suggests partners become more effective change agents for women’s rights when their organizational structures, policies, procedures, and programming are also more democratic and gender-just.

The evaluation employed a case study approach, using MSC methodology to collect stories from MAC staff and their constituents. In this case study, PEER Associates trained MAC staff to conduct the MSC interviews, while the external evaluators documented the interviews with video and/or audio and facilitated discussions on the themes that emerged from those interviews.

Rad Resources:

The American Evaluation Association is celebrating CP TIG Week with our colleagues in the Community Psychology Topical Interest Group. The contributions all week come from CP TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Related posts:

  1. Wilder Research Week: Caryn Mohr on Case Studies
  2. Chris Michael Kirk on Negotiating the Value of Evaluation
  3. Ann Zukoski on Participatory Evaluation Approaches

CP TIG Week: Abraham Wandersman on Demystifying Evaluation: Even A Fourth Grader Likes Empowerment Evaluation

Tue, 08/05/2014 - 01:15

Hi, I’m Abe Wandersman and I have been working since the last century to help programs achieve outcomes by building capacity for program personnel to use evaluation proactively.  The words “evaluation” and “accountability” scare many people involved in health and human services programs and in education.   They are afraid that evaluation of their program will prove embarrassing or worse and/or they may think the evaluation didn’t really evaluate their program.   Empowerment evaluation (EE) has been devoted to demystifying evaluation and putting the logic and tools of evaluation into the hands of practitioners so that they can proactively plan, implement, self-evaluate, continuously improve the quality of their work, and thereby increase the probability of achieving outcomes.

Lesson Learned: Accountability does not have to be relegated solely to “who is to blame” after a failure occurs, e.g., problems in the U.S. government’s initial rollout of the health insurance website (and Secretary of Health and Human Services Kathleen Sebelius’ resignation) and the Veterans Administration scandal (and Secretary Shinseki’s resignation). It actually makes sense to think that individuals and organizations should be proactive and strategic about their plans, implement the plans with quality, and evaluate whether or not the time and resources spent led to outcomes. It is logical to want to know why certain things are being done and others are not, what goals an organization is trying to achieve, that the activities are designed to achieve the goals, that a clear plan is put into place and carried out with quality, and that there be an evaluation to see if it worked. EE can provide funders, practitioners, evaluators, and other key stakeholders with a results-based approach to accountability that helps them succeed.

Hot Tip: I am very pleased to let you know that in September 2014, there will be a new EE book: Empowerment Evaluation: Knowledge and Tools for Self-Assessment, Evaluation Capacity Building, and Accountability (Sage, second edition), edited by Fetterman, Kaftarian, & Wandersman. Several chapters are authored by community psychologists, including: Langhout and Fernandez describe EE conducted by fourth and fifth graders; Imm et al. write about the SAMHSA Service to Science program that helps practice-based programs meet evidence-based criteria; Haskell and Iachini describe empowerment evaluation in charter schools to reach educational impacts; Chinman et al. describe a decade of research on the Getting To Outcomes® accountability approach; Suarez-Balcazar, Taylor-Ritzler, & Morales-Curtin describe their work on building evaluation capacity in a community-based organization; and Lamont, Wright, Wandersman, & Hamm describe the use of practical implementation science in building quality implementation in a district school initiative integrating technology into education.

Rad Resources:

The American Evaluation Association is celebrating CP TIG Week with our colleagues in the Community Psychology Topical Interest Group. The contributions all week come from CP TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Related posts:

  1. CPE Week Bonus: Abraham Wandersman on Empowerment Accountability
  2. CPE Week: Abraham Wandersman on Empowerment Evaluation and Getting to Outcomes
  3. CP TIG Week: Wendi L. Siebold on Where The Rubber Meets The Road: Taking A Hard Look At Organizational Capacity And The Use Of Empowerment Evaluation

CP TIG Week: Wendi L. Siebold on Where The Rubber Meets The Road: Taking A Hard Look At Organizational Capacity And The Use Of Empowerment Evaluation

Mon, 08/04/2014 - 01:15

Hi! I’m Wendi Siebold, President of Strategic Prevention Solutions, a consulting firm that works to address and prevent social and health problems through research, evaluation and training. We spend a lot of time in communities working with non-profit organizations to improve staff and organizational evaluation capacity. Currently, we are the “empowerment evaluator” for domestic violence and/or sexual assault organizations in Alaska, Idaho and Florida.

Empowerment evaluators act as coaches, or critical friends, for the people who actually implement evaluation activities. There are a number of tensions in the balancing act of coaching someone’s capacity building. What I’m highlighting today is the tension between organizational capacity for evaluation and realistic expectations of “empowerment.”

Empowerment evaluators intend to improve a person’s or organization’s capacity to a notch above where it starts. However, it’s essential for the evaluator and client to be on the same page about the organization’s ability to devote resources to evaluation and to define their desired level of capacity. For example, how much time do staff have to enter survey data? Who can review the data and find the story to report? Does a scale score need to be calculated? Usually people want to evaluate; it’s simply that they don’t have the resources. This is why it’s vital to start capacity building only after knowing where you will end. Let’s practice what we preach and determine capacity-building goals. Even if you build skills, will this get your client to a finished product? Isn’t the merit of program evaluation to improve the program and reach outcomes? If you never get to the stage of finding the “story” in the data or having findings to use, was the evaluation meaningful?
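
To make the “scale score” question concrete, here is a minimal sketch of what that analysis step can look like, assuming a small set of 1–5 Likert items with one reverse-worded item; the column names and data are hypothetical, and any spreadsheet or statistics package would do the same job:

    # A minimal sketch of computing a scale score, assuming five 1-5 Likert
    # items, one of which (item_3) is reverse-worded. Column names and data
    # are hypothetical.
    import pandas as pd

    responses = pd.DataFrame({
        "item_1": [4, 5, 3, 2],
        "item_2": [5, 4, 4, 3],
        "item_3": [2, 1, 3, 4],   # reverse-worded item
        "item_4": [4, 5, 4, 2],
        "item_5": [5, 5, 3, 3],
    })

    # Reverse-code item_3 so a higher value always means a more favorable response.
    responses["item_3"] = 6 - responses["item_3"]

    # The scale score here is simply each respondent's mean across the five items.
    item_columns = ["item_1", "item_2", "item_3", "item_4", "item_5"]
    responses["scale_score"] = responses[item_columns].mean(axis=1)

    print(responses[["scale_score"]].describe())

Even a sketch this small makes the resource question visible: someone has to enter the data, run the calculation, and decide what the resulting numbers mean for the program.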

Hot Tip: Figure out organizational and staff capacity for evaluation immediately, before jumping into building capacity. A fatal flaw of empowerment evaluation is that the true time and resources needed to move from writing outcomes to summarizing findings are greater than most nonprofit staff and even evaluators realize. This requires diligence on the part of the evaluator – you’re the person who understands the reality of how many resources each step of an evaluation process will take. Only after you have this discussion about feasibility can you effectively coach your clients through completing evaluation work, rather than leaving them stranded in a pile of unanalyzed data. That just gives evaluation the bad name people have come to expect, and we’re better than that!

Rad Resources:

The American Evaluation Association is celebrating CP TIG Week with our colleagues in the Community Psychology Topical Interest Group. The contributions all week come from CP TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Related posts:

  1. CPE Week: José M. Díaz–Puente on the Empowerment Evaluator’s Role
  2. Oscar Figueroa on Community Development and Empowerment Evaluation
  3. CP TIG Week: Tara Gregory and Natalie Wilkins on An Introduction to Community Psychology Topical Interest Group’s Series on Empowerment Evaluation

CP TIG Week: Tara Gregory and Natalie Wilkins on An Introduction to Community Psychology Topical Interest Group’s Series on Empowerment Evaluation

Sun, 08/03/2014 - 05:16

Hello. We are Tara Gregory, Director of Research and Evaluation at Wichita State University’s Center for Community Support and Research, and Natalie Wilkins, Behavioral Scientist at the Centers for Disease Control and Prevention. We’re members of the Leadership Council for the Community Psychology TIG and are excited to introduce this week’s blogs highlighting connection among empowerment, evaluation and community psychology.

As community psychologists who are evaluators, we often think of the tenet of meeting people where they are. When it comes to evaluation, “where people are” may be overwhelmed, confused, and even resistant. This is not a criticism of those trying to make a difference in our communities, but rather a recognition of the need to approach evaluation from an empowerment perspective – both in helping people learn evaluation themselves and in providing the results of our own evaluations in a way that helps empower people. Either way, the role of the community psychologist in evaluation is to meet people where they are and walk with them as a partner, with the intention of preparing them to go forward independently.

Lessons Learned:

  • Empowerment evaluation – Listening to key stakeholders is essential. Often, people are resistant to evaluation because they are overwhelmed by the idea of having to do something outside their area of expertise. Listening to stakeholders’ stories about how their program works, and how they know it works, can often reveal strengths and evaluation capacity that people and programs never knew they had. Lots of folks have the building blocks of evaluation in place already – they’re just not calling it “evaluation”!
  • Facilitating reflection – Encouraging reflection on evaluation results and helping people come to their own conclusions is a way to create ownership and empowerment to continue good work or make changes where needed.
  • Qualitative methods – Offering an opportunity for people to share their own stories as part of an evaluation can also be empowering, particularly when they’re encouraged to focus on strengths, successes, resiliency or other positives that sometimes get lost.

Hot Tip:

  • Check out the Empowerment Evaluation TIG! They host their own blog weeks, webinars, and many other educational opportunities. Many of us community psychologists belong to this group and gain valuable knowledge and skills through membership.

Rad Resources:

These teaching materials are designed to introduce individuals to empowerment evaluation and are intended to be a resource for facilitating an introductory lecture on the topic.

Dr. David Fetterman’s blog provides a range of resources on empowerment evaluation theory and practice, including links to videos, guides and relevant academic literature.

The American Evaluation Association is celebrating CP TIG Week with our colleagues in the Community Psychology Topical Interest Group. The contributions all week come from CP TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Related posts:

  1. CPE Week: Abraham Wandersman on Empowerment Evaluation and Getting to Outcomes
  2. CPE Week Bonus: Abraham Wandersman on Empowerment Accountability
  3. CP TIG Week: Wendi L. Siebold on Where The Rubber Meets The Road: Taking A Hard Look At Organizational Capacity And The Use Of Empowerment Evaluation

Sheila B. Robinson on Making the Most of Evaluation 2014, Even if You Cannot Attend

Sat, 08/02/2014 - 09:34

Happy Summer! I’m Sheila B. Robinson, aea365’s Lead Curator and sometimes Saturday contributor. Last year at this time, Susan Kistler contributed some fabulous ideas for enjoying AEA’s annual conference even for those not attending, and they’re well worth repeating this year. So, with thanks to Susan, here we go!

Hot Tip #1 – Leverage the Evaluation 2014 Online Conference Program (coming soon!) to Build Your Professional Network: The Evaluation 2014 Online Conference Program is searchable by Topical Interest Group Sponsor, speaker, and keyword. If you are attending, researching the conference program and its 700+ sessions in advance is a must-do in order to make the most of your time. Even if you are not attending, you can search the conference program for colleagues working in your area and connect via email to raise a question.

Hot Tip #2 – Check the AEA eLibrary for Handouts and Related Materials: AEA’s online public eLibrary has over 1,000 items in its repository, and that number will grow considerably as the conference nears and immediately afterward. All speakers are encouraged to post their materials in the eLibrary, and anyone may search and download items of interest, whether attending the conference or not.

Hot Tip #3 – Follow Hashtag #Eval14 on Twitter: If you are on Twitter, use hashtag #Eval14 to tag your conference-related tweets. If you aren’t attending, follow #Eval14 to stay abreast of the conversation, and follow @aeaweb, AEA’s Headlines and Resources Twitter feed, in particular. Check out #Eval13 for an idea of what folks were tweeting last year!

Bonus Cool Trick – Get the H&R Compilation: Not up for joining Twitter quite yet, but still want the field’s headlines and resources for the week? You can subscribe to AEA’s Headlines and Resources compilation and have it arrive via email or RSS once each week. Learn more here.
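
If you would rather pull an RSS feed programmatically than receive it by email, a minimal sketch using the third-party Python package feedparser might look like the following; the feed URL shown is an assumption for illustration, and any ordinary feed reader works just as well:

    # A minimal sketch of reading an RSS feed in Python, assuming the
    # third-party feedparser package is installed (pip install feedparser).
    # The feed URL below is illustrative; substitute the feed you actually follow.
    import feedparser

    FEED_URL = "http://aea365.org/blog/feed/"  # assumed, for illustration only

    feed = feedparser.parse(FEED_URL)

    # Print the five most recent entry titles and links.
    for entry in feed.entries[:5]:
        print(entry.title)
        print(entry.link)
        print()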

Hot Tip #4 – Check in Regularly or Subscribe to EvalCentral: Chris Lysy maintains EvalCentral, a compilation of 57 evaluation-related blogs where you can always find the newest posts. Lots of bloggers will be in attendance at Evaluation 2014, and EvalCentral allows you to find them all in one place. BONUS: Download the Evaluation 2013 – A Conference Story ebook, written and illustrated by Chris himself!

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Related posts:

  1. Susan Kistler on 4 Ways to Make the Most of Evaluation 2013 – Even if you cannot attend
  2. Susan Kistler on the eLibrary and Headlines List
  3. Susan Kistler on Subscribing to the eLibrary

PD Presenters Week: Mike Trevisan and Tamara Walser on Evaluability Assessment

Fri, 08/01/2014 - 01:15

Hello from Mike Trevisan and Tamara Walser! Mike is Dean of the College of Education at Washington State University and Tamara is Director of Assessment and Evaluation in the Watson College of Education at the University of North Carolina Wilmington. We’ve published, presented, and conducted workshops on evaluability assessment and are excited about our pre-conference workshop at AEA 2014!

Evaluability assessment (EA) got its start in the 1970s as a pre-evaluation activity to determine the readiness of a program for outcome evaluation. Since then, it has evolved into much more and is currently experiencing a resurgence in use across disciplines and around the globe.

We define EA as the systematic investigation of program characteristics, context, activities, processes, implementation, outcomes, and logic to determine

  • The extent to which the theory of how the program is intended to work aligns with the program as it is implemented and perceived in the field;
  • The plausibility that the program will yield positive results as currently conceived and implemented; and
  • The feasibility of and best approaches for further evaluation of the program.

EA results lead to decisions about the feasibility of and best approaches for further evaluation and can provide information to fill in gaps between program theory and reality—to increase program plausibility and effectiveness.

Lessons Learned:  The following are some things we and others have learned about the uses and benefits of EA—EA can:

  • Foster interest in the program and program evaluation.
  • Result in more accurate and meaningful program theory.
  • Support the use of further evaluation.
  • Build evaluation capacity.
  • Foster understanding of program culture and context.
  • Be used for program development, formative evaluation, developmental evaluation, and as a precursor to summative evaluation.
  • Be particularly useful for multi-site programs.
  • Foster understanding of program complexity.
  • Increase the cost-benefit of evaluation work.
  • Serve as a precursor to a variety of evaluation approaches—it’s not exclusively tied to quantitative outcome evaluation.

Rad Resources:

Our book situates EA in the context of current EA and evaluation theory and practice and focuses on the “how-to” of conducting quality EA.

An article by Leviton, Kettel Khan, Rog, Dawkins, and Cotton describes how EA can be used to translate research into practice and to translate practice into research.

An article by Thurston and Potvin introduces the concept of “ongoing participatory EA” as part of program implementation and management.

An issue of New Directions for Evaluation focuses on the Systematic Screening Method, which incorporates EA for identifying promising practices.

A report by Davies describes the use of EA in international development evaluation in a variety of contexts.

Want to learn more? Register for Evaluability Assessment: What, Why and How at Evaluation 2014.

This week, we’re featuring posts by people who will be presenting Professional Development workshops at Evaluation 2014 in Denver, CO. Click here for a complete listing of Professional Development workshops offered at Evaluation 2014. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Related posts:

  1. Nicola Dawkins on Evaluability Assessment and Systematic Screening Assessment
  2. CP TIG Week: Hsin-Ling (Sonya) Hung on Resources for Evaluability Assessment
  3. EPE Week: Valerie Williams on Evaluating Environmental Education Programs