Monitoring, Evaluation and Learning Systems

NA TIG Week: Sue Hamann on Tips for Novice Needs Assessors

American Evaluation Association 365 Blog - Tue, 05/27/2014 - 01:15

Hello, my name is Sue Hamann. I work at the National Institutes of Health as a Science Evaluation Officer, and I teach program evaluation to graduate students. Today I’m providing tips to novices in needs assessment (NA).

Hot Tips:

Use the original definition of needs.

  • The original definition of NA is the measurement of the difference between currently observed outcomes and future desired outcomes, that is, the difference between “what is” and “what should be.” Novices often plan to address either status or desired future, but they do not realize how much more valuable it is to collect data about both status and future and analyze the difference between these two conditions. Read anything about NA written by Roger Kaufman, Belle Ruth Witkin, James Altschuld, or Ryan Watkins to get started.

Collect data using multiple methods.

  • A rewarding and challenging aspect of needs assessment is that an evaluator gets to take almost all her tools out of the toolbox. From census data and epidemiologic data to document reviews to group and individual interviews, needs assessment typically requires multiple methods. The best way to start is to review the literature, both in the problem area of interest and in the evaluation journals. You can start with the New Directions for Evaluation issue (#138, summer 2013) on Mixed Methods and Credibility of Evidence in Evaluation, edited by Mertens and Hesse-Biber. Also use listservs such as AEA’s Evaltalk to discover work that has been done but not published.

Keep an open mind about the validity of qualitative data, particularly interviews.

Remember that needs assessment and program planning go hand in hand.

  • Collecting needs assessment data is just the first step in program planning. Use Jim Altschuld's Needs Assessment Kit or other resources to plan for the work needed to conduct this vital component of program planning and evaluation.

Rad Resources:

Coming in Fall 2014, Jim Altschuld and Ryan Watkins are editing an issue of New Directions for Evaluation dedicated to Needs Assessment.

The American Evaluation Association is celebrating Needs Assessment (NA) TIG Week with our colleagues in the Needs Assessment Topical Interest Group. The contributions all this week to aea365 come from our NA TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Related posts:

  1. NA Week: James Altschuld on Lessons Learned: Use the 3 Phase Model
  2. EEE Week: Siri Scott on Conducting Interviews with Youth
  3. Jim Altschuld on So They Think They Need a Needs Assessment!

NA TIG Week: Ryan Watkins on Planning to Measure Needs

American Evaluation Association 365 Blog - Mon, 05/26/2014 - 01:15

I’m Ryan Watkins, Associate Professor at George Washington University and manager of two websites, www.WeShareScience.com and www.NeedsAssessment.org.

Needs assessments require planning. You must plan for the "who, what, why, where, when, and how" of each step – from soliciting participation of stakeholder groups to the iterative development of grounded recommendations. An often overlooked planning task is planning for the actual measurement of needs. Below are considerations to guide your planning:

Lessons Learned From Experience:

Consideration 1: What is a need? Needs can be defined in many ways. While discrepancy definitions are common (such as the gap between “what is” and “what should be”), varied definitions are frequently applied (such as needs as desired resources or programs).

Ask:

  • Have you agreed with stakeholders on a definition of need?
  • Are you assessing needs at the state, local, institutional, and/or individual level?
  • Are needs exclusively related to results (most useful), or do they include processes and inputs as well (not useful)?
  • Are needs to be assessed along with assets?

Consideration 2: What data is really required?

When you know how needs are to be defined, next determine what data is required to document the needs.

Ask:

  • What indicators would suggest what needs exist?
  • How could the size and scope of identified needs be measured?
  • Who else is collecting data on similar issues?
  • Which measures are “nice to have” but not absolutely required?
  • When can indirect measures be applied?
  • If applying a discrepancy definition, what data is required for measuring the current state, and what is required for comparably defining the desired state? (A minimal sketch of this gap calculation follows this list.)
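
A minimal sketch, in Python, of the gap calculation referenced in the last question above. The indicator names and values are hypothetical placeholders, not data from any real assessment; the point is simply that each indicator needs comparable "current" and "desired" measures before the size and scope of a need can be quantified.

```python
# Minimal sketch of a discrepancy-based need calculation ("what should be" minus "what is").
# All indicator names and values below are hypothetical, for illustration only.

indicators = {
    # indicator name: (current state, desired state); higher values are "better" here
    "children_screened_pct": (42.0, 80.0),
    "providers_trained": (12, 30),
    "families_reached_per_month": (150, 400),
}

for name, (current, desired) in indicators.items():
    gap = desired - current                            # size of the need
    relative_gap = gap / desired if desired else 0.0   # scope, as a share of the target
    print(f"{name}: current={current}, desired={desired}, "
          f"gap={gap:g} ({relative_gap:.0%} of the desired state)")
```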

Consideration 3: What is feasible?

There are always constraints on a needs assessment, and these will guide what is feasible in terms of measuring needs.

Ask:

  • What techniques can be used to collect data?
  • What is an appropriate timeframe for measuring (e.g., weekly, monthly, bi-monthly, # of weeks/months after critical interventions)?
  • What is the appropriate sequence for measuring (e.g., indicators #1 and #3, followed by indicator #2)?
  • What resources (people, time, money, technology, access, etc.) are readily available?

Consideration 4: How will it be managed?

Measuring needs is a process that must be managed. Take time to assign responsibilities and hold people accountable for results.

Ask:

  • Who specifically (individuals or organizations) will provide necessary information (e.g., who is the sample, or who has the desired information)?
  • Who is responsible for collecting and analyzing data to measure needs?
  • Who is accountable for the validity of data?
  • Who is going to interpret the data in order to make recommendations?

Rad Resources: Find and share needs assessment resources at www.NeedsAssessment.org, including lesson learned videos, free publications, podcast interviews, and a new document repository.

The American Evaluation Association is celebrating Needs Assessment (NA) TIG Week with our colleagues in the Needs Assessment Topical Interest Group. The contributions all this week to aea365 come from our NA TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 

Related posts:

  1. NA Week: Roger Kaufman on Needs Assessment
  2. Mary Talbut on Testing What You Teach
  3. NA TIG Week: Sue Hamann on Tips for Novice Needs Assessors

NA TIG Week: Maurya West Meiers on Introducing Needs Assessment TIG Week and Identifying and Staying in Touch with Your Needs Assessment Stakeholders and Informants

American Evaluation Association 365 Blog - Sun, 05/25/2014 - 01:15

My name is Maurya West Meiers. I work at the World Bank as a Senior Evaluation Officer and am coauthor of A Guide to Assessing Needs: Essential Tools for Collecting Information, Making Decisions, and Achieving Development Results (available free here). This week’s blog postings are from members of the Needs Assessment TIG. Check out our TIG website for even more resources.

I’m writing about ways to identify and stay in touch with “hard to find” stakeholders and potential informants for your large scale needs assessments.

Lessons Learned:

Consider who might be your stakeholders and informants:

  • Primary. They typically have some direct relationship with the assessment (e.g., managers and employees, mayor’s office, neighborhood representatives, community members).
  • Secondary. They usually have a lesser or indirect relationship to the assessment, but should not be overlooked (e.g., residents from the neighboring community).
  • Experts and other informants. These people may have useful data to inform the assessment, but may not have a direct or even indirect relationship with it (e.g., experts in the field of study, database managers).
  • Research stakeholders. These are others who could benefit and learn from the results of your assessment (e.g., academics, policy makers). Be sure to publish your needs assessment methods and papers to build the needs assessment literature base.

How do you find your informants and stakeholders, and stay in touch with them throughout the assessment? Here are some high- and low-tech ideas:

  • Websites are easy to build and provide a central place to share information about the assessment and resources.
  • Blogs allow you to share quick and informal updates – and to offer two-way engagement through comments.
  • Social media (Twitter, Facebook, Instagram, LinkedIn, Flickr, YouTube, etc.) help you to connect to those interested in the assessment and build a following.
  • Mobile and text-message updates, or short ‘pulse’ surveys are becoming more common.
  • Community or organization meetings are the old standby, but essential.
  • Existing networks (such as community leaders, association representatives, etc.) allow you to find key people through snowball sampling.
  • Letters to stakeholders/groups help to get the word out formally.
  • Newsletters should not be overlooked as a way to engage stakeholders.
  • Newspaper articles, television broadcasts, advertisements, and other media are useful for broad outreach.
  • Posters, announcements or events in spaces visited by stakeholders (such as municipal buildings, libraries) are inexpensive and easy to create.
  • Street billboard announcements are common in many countries.
  • Radio broadcasts and call-in shows are especially effective in certain regions, such as Africa.

Rad Resources:

  • Many cities engage with community members about needs (especially "this should be fixed/addressed" communications) through social media services such as PublicStuff.
  • McKinsey Quarterly’s latest edition provides useful tips on Tapping the Power of Hidden Influencers.

The American Evaluation Association is celebrating Needs Assessment (NA) TIG Week with our colleagues in the Needs Assessment Topical Interest Group. The contributions all this week to aea365 come from our NA TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Related posts:

  1. Xiaomei Song on Evaluating a Large-Scale High-Stakes Testing System from the Stakeholder Perspective
  2. MN EA Week: Leah Goldstein Moses on Cultivating Relationships
  3. Silvana Bialosiewicz on Turning Organizational Learning into Action: Evaluation and Strategic Planning

Sheila B Robinson on Playing “Trivial Pursuit” – AEA Style!

American Evaluation Association 365 Blog - Sat, 05/24/2014 - 05:57

Hello! I'm Sheila B Robinson, aea365's Lead Curator and sometimes Saturday contributor, and I have some questions for you today!

Hot Tip: Poke around AEA’s website (or go directly to this link under the “Learning” tab) and you’ll find some fabulous trivia about the annual conference. For instance…

1.) Who was president of AEA in 2005?

2.) In what year(s) was the AEA Annual Conference actually held in Canada?

3.) How many different US states have hosted the conference?

4.) When was the conference theme: Evaluation for a New Century: A Global Perspective?

5.) Where will Evaluation 2015 be held?

The answers are all there!*

Cool Trick: Want to know about the sessions your favorite evaluator presented in any given year? Curious to see what the hot topics were when the conference theme was Evaluation Quality? Perhaps you have an idea about a new topic and wonder if anyone has presented on it before. Or, you’re just learning something new about evaluation and want to see who the thought leaders on that topic appear to have been over the past few years, so you can follow up on their work or even network with them. You can access conference programs for the last 15 years from the Conference History page for all of this information. Most are even searchable online!

Cooler trick: Want to know how the conference was evaluated and how it performed in any given year? How many evaluators attended in 2003? What do we know about them? How many were students, researchers, professors, or consultants? How many conferences had they attended before? Did they consider themselves novice or expert evaluators? What were their reactions to the conference in that year? Evaluation data and reports are available for several conference years.

Rad Resource: AEA's Conference History page: Everything you wanted to know about Evaluation 1986 – Evaluation 2016 (30 years!) but were afraid to ask. (Well, perhaps not afraid…)

*Except for this one: #3. I counted 15 different states plus Washington, DC.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Related posts:

  1. Evaluation 2013 Conference Week: Michael Quinn Patton on Using the Conference Program as a Data Source About Trends in Evaluation
  2. LAWG Week: Herb Baum on the Upcoming AEA Meeting
  3. Susan Kistler on Submitting a Proposal to Present at Evaluation 2011

FIE TIG Week: Kathryn Sielbeck-Mathes and Rebecca Selove on Feminist Evaluation and Framing

American Evaluation Association 365 Blog - Fri, 05/23/2014 - 01:15

Hi! We are Kathryn Sielbeck-Mathes and Rebecca Selove, co-authors of Chapter 6 of “Feminist Evaluation and Research: Theory and Practice”. In our article, based on three evaluations of substance abuse treatment programs for individuals with co-occurring mental illness and substance abuse issues, we discuss the importance of framing and shared understanding between evaluators and evaluation stakeholders.

Lesson Learned: Although the work linked closely with our own values of fairness, social justice, and gender equity in social programming, we did not spend sufficient time understanding the differing values, language, perspectives, and frames of the program manager and his staff. Instead, we assumed we were all interpreting trauma in the same ways and sharing the same values about addressing trauma during treatment specifically, and about programming for women in general. In hindsight, building this understanding should have held the same importance in the evaluation as monitoring fidelity and measuring outcomes.

Hot Tip:

In order to gain attention and respect for the adoption of feminist frameworks, principles, and values in program evaluation, it is imperative that we frame our conversations to connect rather than compete, align rather than malign, and foster acceptance rather than objection from those we need to communicate to and with. This requires an understanding of their positions on issues, as these follow from the language or lens of their value and belief systems.

Lesson Learned: Connecting through words, images, symbols, and stories grounded in values helps make solutions accessible and relevant to program stakeholders, service organizations, and funding agencies. Linking an issue to a widely held cultural value or belief helps start the framing process by appealing to program managers and staff, increasing their interest in learning more.

Hot Tip:

If it seems as if you are not being heard, you probably are not. A feeling of frustration can be a signal that reconstruction of shared meaning, based upon shared values, is necessary!

Lesson Learned: Key tasks associated with feminist evaluation include 1) understanding the problem from the perspective of the women the program is designed to serve, 2) studying the interior and external context of the program to understand the realities and lived experiences of women, and 3) identifying the invisible structures that can undermine even the most diverse, gender-responsive, trauma-informed program.

Hot Tip:

Feminist evaluators must engage in attentive conversations with those implementing and managing human service/treatment programs, listening closely for congruence and dissonance regarding the feminist frame. From the outset of a program evaluation, the feminist evaluator must be mindful and prepared for changing assumptions and language/communication that perpetuates injustice and the disempowerment of women.

Rad Resource: Combating structural disempowerment in the stride towards gender equality: an argument for redefining the basis of power in gendered relationships.

The American Evaluation Association is celebrating Feminist Issues in Evaluation (FIE) TIG Week with our colleagues in the FIE Topical Interest Group. The contributions all this week to aea365 come from our FIE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Related posts:

  1. FIE TIG Week: Donna Podems on the Difference Between Feminist Evaluation and Gender Approaches
  2. Kathryn Bowen on “Framing” Feminist Evaluation
  3. FIE/MME Week: Denise Seigart on Implementing a Feminist Evaluation of School Health Care

FIE TIG Week: Silvia Salinas Mulder and Fabiola Amariles on Latin American Feminist Perspectives on Gender Power Issues in Evaluation

American Evaluation Association 365 Blog - Thu, 05/22/2014 - 01:15

Hi! We are Silvia Salinas Mulder and Fabiola Amariles, co-authors of Chapter 9 of “Feminist Evaluation and Research: Theory and Practice”. Our article examines the fact that in our region, understanding and accepting gender mainstreaming as an international mandate is still slow and even decreasing in some political and cultural contexts, where the indigenous agenda and other internal and geopolitical issues are gaining prominence. Feminist evaluation may play an important role in getting evidence to create policies to improve the lives of women, but it is necessary to make feminist principles operational in the context of the multicultural Latin American countries.

Lesson Learned: We should re-consider and reflect on concepts and practices usually taken-for-granted like “participation.” In evaluations, members of the target population are usually treated as information resources but not as key audiences, owners and users of the findings and recommendations of the evaluation. Interactions with excluded groups usually reproduce hierarchical power relations and paternalistic communication patterns between the evaluator and the interviewed people, which may shape participation patterns, as well as the honesty and reliability of responses.

Hot Tip: Emphasize that everyone should have the real opportunity to participate and also to decline from participating (e.g., informed consent), and should not fear any implications of such a decision (e.g., formal or informal exclusion from future program activities). Having people decide about their own participation is a good indicator of ethical observance in the process.

Lesson Learned: Sensitivity and respect for the local culture often lead to misinterpreting rural communities as homogenous entities, paying little attention to internal diversity, inequality and power dynamics, which influence and are influenced by the micro-political atmosphere of an evaluation, oftentimes reproducing exclusion patterns.

Hot Tip: Pay attention and listen to formal leaders and representatives, but also search actively for the marginalized and most excluded people, enabling a secure and confidential environment for them to speak. The role of cultural brokers knowledgeable about the local culture is key to achieving an inclusive, context-sensitive approach to evaluation.

Lesson Learned: Another key concept to reflect on is "success." On one hand, treating success as an objective, logically derived conclusion of "neutral" analysis usually obscures its power dimensions and its intrinsically political and subjective character. On the other hand, evaluation cultures that privilege limited funder-driven definitions of success reproduce ethnocentric perspectives, distorting experiences and findings, and diminishing their relevance and usefulness.

Hot Tip: Openly discussing the client’s and donor’s ideas about “success” and their expectations regarding a “good evaluation” beyond the terms of reference diminishes resistance to rigorous analysis and constructive criticism.

Rad Resources:

Silvia Salinas-Mulder and Fabiola Amariles on Gender, Rights and Cultural Awareness in Development Evaluation

Batliwala, S. & Pittman, A. (2010). Capturing Change in Women’s Realities.

The American Evaluation Association is celebrating Feminist Issues in Evaluation (FIE) TIG Week with our colleagues in the FIE Topical Interest Group. The contributions all this week to aea365 come from our FIE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Related posts:

  1. Kathryn Bowen on “Framing” Feminist Evaluation
  2. FIE TIG Week: Donna Podems on the Difference Between Feminist Evaluation and Gender Approaches
  3. FIE/MME Week: Bessa Whitmore on Researcher and Evaluator Roles and Social Justice

FIE TIG Week: Tristi Nichols on A Feminist-Ecological Model for Evaluation

American Evaluation Association 365 Blog - Wed, 05/21/2014 - 01:15

Bismillah! My name is Tristi Nichols, and with an advanced degree from Cornell University, I have been continuously involved in a wide-ranging evaluation practice over the past fifteen years.

In my area of work – the international development arena – I find that I am frequently asked to evaluate the extent to which gender inequity has been increased or decreased by an intervention in a given context (e.g., agriculture, education, or post-conflict settings). I searched high and low for a framework with insightful questions to guide my evaluation practice. Over the years, I found only gaps and tid-bits of information here and there. Frustrated, I decided to develop my own framework, with guiding questions to measure the degree to which an intervention furthers female empowerment and/or minimizes gender inequity within an international context. In the recently released book, "Feminist Evaluation and Research: Theory and Practice", this rubric of questions is included in my chapter entitled Measuring Gender Inequality in Angola: A Feminist-Ecological Model for Evaluation.

Since you may not go out right away to purchase or download the book, this contribution includes a few "Rad Resources" to get you thinking.

Rad Resources: Measurement draws from a feminist-ecological lens assessing program effects on women, using Urie Bronfenbrenner's three systems of environmental influences: (a) Micro-; (b) Meso-/Exo-; and (c) Macro-systems. My labels are slightly different: (i) Micro-level systems have been renamed the "individual level"; (ii) Meso-/Exo-systems are referred to as the "composite community level"; and (iii) Macro-level systems are assigned the term "collective level."

At the collective level, the focus of the evaluative effort is on the social or political environment.

The composite community level concentrates on critical factors for program effectiveness and/or reasons for implementation failure, and how these affect women's and girls' participation.

The individual level looks at the complex relations between the woman [or adolescent girl] and her environment (e.g., home, family, school, marketplace, and workplace).

Daily, there are new and interesting blogs and websites that can be easily categorized into these three measurement areas. Here are just a few which are on my desktop at the moment…

The American Evaluation Association is celebrating Feminist Issues in Evaluation (FIE) TIG Week with our colleagues in the FIE Topical Interest Group. The contributions all this week to aea365 come from our FIE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Related posts:

  1. FIE TIG Week: Donna Podems on the Difference Between Feminist Evaluation and Gender Approaches
  2. FIE/MME Week: Katherine Hay on How to do Feminist Evaluation
  3. FIE TIG Week: Katherine Hay on Using Feminist Evaluation to Increase Equity in the Real World

FIE TIG Week: Michael Bamberger on Identifying Unintended Outcomes of Programs Promoting Gender Equality

American Evaluation Association 365 Blog - Tue, 05/20/2014 - 01:15

Hi! This is Michael Bamberger, an Independent Consultant specializing in the evaluation of social development and gender programs. Over the past few years I have worked with United Nations organizations, bilateral programs, and NGOs, helping to strengthen the evaluations of their gender equity policies.

Most international development agencies have now defined the promotion of gender equality (or gender equity) as one of their development goals, and most conduct periodic evaluations to assess the extent to which their programs and policies contribute to strengthening the economic, social and political empowerment of women and to reducing the differences between women and men on these dimensions. However, many of these evaluations tend to overestimate the positive effects of programs on promoting gender equality and frequently underestimate or even ignore some of their negative outcomes. Frequently the evaluations only interview the women participating in the project, and produce glowing reports on the significant benefits, but fail to talk to women who did not participate. The failure to identify negative outcomes is unfortunately very common and has serious implications.

Hot Tips:

  • Build into the Terms of Reference for the evaluation a requirement that the evaluators, even when working on a limited budget and under time-constraints, must identify and interview women (and where appropriate men) who did not benefit from the project.
  • Assess carefully the evaluation methodology to ensure that it is capable of identifying unintended outcomes.  Many evaluation methodologies such as results-based evaluations, many theories of change, and most experimental and quasi-experimental designs only measure the extent to which intended results have been achieved and are not able to capture unintended outcomes.
  • Be aware that many evaluations only obtain information from people directly involved in the program, most of whom will be reluctant to criticize the program which pays their salary.
  • In most communities there are people who are familiar with the program being evaluated and its effects, but who are not directly involved and who are able to provide a more balanced perspective.  Examples include: the district nurse (who usually knows almost all of the women in the community), the police chief (a useful source in cases of domestic violence), women’s organizations, school teachers and local religious leaders.
  • Use a mixed methods design that combines quantitative and qualitative data and that emphasizes the importance of triangulation to increase validity by systematically comparing information collected from different sources.

Rad Resources: The World Bank "Gender and Transport" website illustrates many of the challenges and unintended consequences for women and girls of transport initiatives that are assumed to be "gender neutral". Module 2 identifies the challenges and Module 5 identifies tools, including research tools, for promoting gender equity in this sector.

The American Evaluation Association is celebrating Feminist Issues in Evaluation (FIE) TIG Week with our colleagues in the FIE Topical Interest Group. The contributions all this week to aea365 come from our FIE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 

Related posts:

  1. CP TIG Week: Nidal Karim on Innovating for Gender Transformative M&E
  2. FIE TIG Week: Katherine Hay on Using Feminist Evaluation to Increase Equity in the Real World
  3. FIE/MME Week: Bessa Whitmore on Researcher and Evaluator Roles and Social Justice

FIE TIG Week: Katherine Hay on Using Feminist Evaluation to Increase Equity in the Real World

American Evaluation Association 365 Blog - Mon, 05/19/2014 - 01:15

Hi, I’m Katherine Hay. I’ve spent the last 20 years in India working on development, research, and evaluation.

Lessons Learned: One thing I’ve learned, and say all the time is, “equity is not an intervention.”

In societies where equity is the driving goal, perhaps fairly straightforward evaluations of interventions can increase equity. In such societies evaluations could identify the ‘best’ programs, where ‘best’ is defined as reducing inequities. Armed with that knowledge we would design more of the ‘right programs’ and equity would be achieved.

But these societies do not exist. Most of the world is characterized by increasing inequity and development models that put equity on the back seat.

So how can evaluation increase equity in the real world? Is it reasonable to expect that interventions generated from systems that perpetuate gender and other inequities will lead to equity or that evaluation of interventions will deepen equity?

I practice evaluation because I think it is reasonable, but only if such evaluations are understood as intentional disruptions to inequitable systems. This entails seeing equity as emerging from and integral to movements to change societies rather than from technical tinkering within existing systems. This is why applying a feminist lens to evaluation is so core to the way I practice evaluation.

At worst, evaluation can reinforce inequities; on average it might reflect them, but at best it can challenge them. Feminist evaluation offers a lens that fosters designs, approaches, and tools which bring inequity to the foreground.

Hot Tips: I’m often asked, ‘how do you do feminist evaluation?’ There isn’t a simple checklist. The only way is by applying feminist principles at each stage of an evaluation.

Reach out and get involved. I've come to feminist evaluation by working with social activists, researchers and evaluators. We share designs, instruments, processes and challenges. I give time to feminist NGOs that have limited resources, but a lot of desire, to use evaluation to guide their work. Being part of these groups deepens my practice and experience. Find peers to challenge and inspire you.

Rad Resources: A few of us formed a gender and evaluation group that now has 646 members from around the world. Why not join the discussion?

Try to make feminist evaluation relevant to issues on the ground. Following a spate of brutal attacks on women in India, I changed a planned keynote at the last minute to discuss how evaluation can help end violence against women. It was a risk I’m glad I took. You can see the video here.

EvalPartners gives small grants to voluntary evaluation organizations to implement peer-to-peer, teaching, and evaluation advocacy projects. All proposals need to address equity and gender, but these can also be the focus.

The American Evaluation Association is celebrating Feminist Issues in Evaluation (FIE) TIG Week with our colleagues in the FIE Topical Interest Group. The contributions all this week to aea365 come from our FIE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 

Related posts:

  1. FIE/MME Week: Katherine Hay on How to do Feminist Evaluation
  2. FIE TIG Week: Donna Podems on the Difference Between Feminist Evaluation and Gender Approaches
  3. Kathryn Bowen on “Framing” Feminist Evaluation

FIE TIG Week: Donna Podems on the Difference Between Feminist Evaluation and Gender Approaches

American Evaluation Association 365 Blog - Sun, 05/18/2014 - 01:15

Greetings from South Africa. I am Donna Podems, a Research Fellow at Stellenbosch University, and founder and director of OtherWISE: Research and Evaluation, a small monitoring and evaluation firm in Cape Town, South Africa.

As a practitioner and academic, I often engage in discussions around evaluation theory and practice. One common discussion is around the differences between gender evaluation approaches and feminist evaluation. While they have many commonalities, and one often draws on the other, they each bring their own strengths—and weaknesses.

Understanding the difference between the two approaches in their purest form enables evaluators to choose what elements of which approach are appropriate for their evaluation design—if any at all.

Lesson Learned: Four Key Distinctions between Feminist Evaluation and Gender Approaches:

  1. Feminist evaluation and gender approaches have different historical roots and bring their own strengths (and weaknesses) to an evaluation.
  2. Gender approaches aim to document and map the lives of women, while feminist evaluation aims to change them.
  3. Feminist evaluation would be used to guide the evaluation methodology if the evaluation questions seek to understand why differences exist between men and women and to bring about social change.
  4. Feminist evaluation offers broad guidance that shapes how an evaluator thinks about an evaluation, and how to use that reflection to inform the evaluation's design, data collection, and communication of findings. Gender approaches often provide more concrete guidelines and prescriptive methods for data collection and analysis.

Feminist evaluation and gender approaches can be quite complementary, with both integrating smoothly into an evaluation design. They can also be used as distinct approaches on their own, or incorporated into other approaches, such as economic evaluations.

Rad Resources:

The American Evaluation Association is celebrating Feminist Issues in Evaluation (FIE) TIG Week with our colleagues in the FIE Topical Interest Group. The contributions all this week to aea365 come from our FIE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Related posts:

  1. FIE/MME Week: Katherine Hay on How to do Feminist Evaluation
  2. FIE/MME Week: Donna Podems on Applying Feminist Evaluation for Non-feminist Evaluators
  3. FIE/MME Week: Donna Mertens and Mika Yamashita on Co-hosting Mixed Methods Evaluation and Feminist Issues in Evaluation

Dan McDonnell on Uncovering Interesting Facts About Your Twitter Network with Twitonomy

American Evaluation Association 365 Blog - Sat, 05/17/2014 - 17:34

Hello, my name is Dan McDonnell and I am a Community Manager for the American Evaluation Association. With the plethora of social media tools, apps and platforms available, it can often be overwhelming to find the right one to best suit your needs. I recently came across a tool that I’m very excited to share with you all today – one that can open the doors to more interesting insights on your Twitter network.

Rad Resource

Twitonomy is a powerful Twitter analytics platform that can reveal intriguing statistics about Twitter profiles and hashtags. Let's start by looking at what you can learn about your own profile.

What your profile on Twitonomy might look like.

Once you’ve signed in via your own Twitter profile, type your Twitter handle into the ‘Analyze Twitter’s profile of’ box, and wait for the results. The next screen (example above) will break down the entirety of your Twitter history (yes, every Tweet you’ve ever sent!) and some cool data points, including how many Tweets a day you send on average, the breakdown of your Tweet responses vs. Retweets vs. @mentions and even a full activity chart that shows what days, times and seasons in which you Tweet most frequently. It looks like 11:00 AM is my sweet spot, and I Tweet pretty consistently Monday through Friday.
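
If you ever want to reproduce these kinds of summaries outside the tool, they are straightforward to compute from an export of your own tweet timestamps. Below is a rough sketch, not Twitonomy's implementation; the file name ("my_tweets.csv") and its "created_at" column are hypothetical stand-ins for whatever export you have.

```python
# Rough sketch of the activity summary described above (average tweets per day,
# busiest hours and weekdays), computed from exported tweet timestamps.
# This is not Twitonomy's code; the file and column names are hypothetical.

import pandas as pd

tweets = pd.read_csv("my_tweets.csv", parse_dates=["created_at"])

span_days = max((tweets["created_at"].max() - tweets["created_at"].min()).days, 1)
print("Average tweets per day:", round(len(tweets) / span_days, 2))

print("\nTweets by hour of day:")
print(tweets["created_at"].dt.hour.value_counts().sort_index())

print("\nTweets by day of week:")
print(tweets["created_at"].dt.day_name().value_counts())
```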

Hot Tip

With so many different data points available, it’s tough to even know where to begin. Here are a few things I’d recommend researching with Twitonomy to start:

  • See who you interact with most in your network with the Users most mentioned, most retweeted and most replied to statistics.
  • Find out what your best performing Tweets of all time are, and use that knowledge to inform what you’re Tweeting about in the future.
  • Check out patterns and trends in how your fellow evaluators are Tweeting – which days and times and who they are Tweeting with and about.
  • Take a look at some of your favorite Twitter users' "most frequently used hashtags" lists to see what types of Twitter conversations you might be missing out on.

These few tips are really just scratching the surface of what Twitonomy can do. With a Premium subscription, you can unlock loads of new features – but all of what I’ve outlined so far is doable in the free version of the tool. In a future post, I’ll tackle some of the cool Premium features and what it can reveal to make you a smarter Tweeter. Stay tuned!

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Dan McDonnell is a regular Saturday contributor to AEA365, where he blogs on social media-related topics for evaluators. You can reach Dan on Twitter at @Dan_McD.

 

Related posts:

  1. Dan McDonnell on Upcoming Changes to Twitter
  2. Dan McDonnell on Making New Friends and Mastering Lesser-Known Twitter Features Without Third Party Apps
  3. Dan McDonnell on Keyboard Shortcuts and Other Advanced Twitter Features

Judy Savageau and Kathy Muhr on Working with Different Data Sources

American Evaluation Association 365 Blog - Fri, 05/16/2014 - 01:15

Hi. We’re Judy Savageau and Kathy Muhr from the University of Massachusetts Medical School’s Center for Health Policy and Research. Within our Research and Evaluation Unit, we work on a number of projects using qualitative and quantitative methods as well as primary and secondary data sources. We’ve come to appreciate that different types of data from different sources need varying levels of data management and quality oversight.

One of our current projects is evaluating a screening program that requires primary care providers to screen children for potential behavioral health conditions. Among a random sample of 4000 children seen for a well-child visit during one of two study years, we collected data both from medical records (primary data source: quantitative data and qualitative chart notes) and from administrative/claims data (secondary data source: solely quantitative). Given the nature of data from the two sources, we implemented different data quality checks and cross-checks between them.

Lessons Learned:

  • Claims data comes from the insurance payer, having already gone through the payer's own internal data cleaning and data management processes. However, much of the patient demographic data is captured at the time of insurance enrollment and is not updated at the time of a clinical visit. Some data elements are often incomplete and not updated even after numerous clinical encounters, especially data such as gender, race, ethnicity and primary language. While a provider might 'know' this information when seeing a patient, it's not necessarily updated in administrative datasets.
  • Many practices don't necessarily collect demographic data in a uniform manner unless they're required to report on this data. Primary care providers are well connected to their patients' demographics in terms of needs for interpreters, cultural health beliefs, and age- or gender-specific anticipatory guidance needs. Unfortunately, medical records data often had nearly as much missing data as did the administrative claims data!
  • Cross-checking data between these two sources was an important step for us to take in this project, as we hypothesized that there might be differences in screening children for behavioral health needs. Wanting to assess potential health service disparities was an important factor in this evaluation, given the interest in vulnerable populations. (A minimal sketch of one such cross-check appears after this list.)
  • While electronic medical records (EMRs) were evident in at least 60% of practices where charts were abstracted, it was no surprise to find that EMRs vary from practice to practice. It was clear that projects such as this one might then need to use text-based data within the chart notes to obtain vital information in order to assess potential disparities.
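
As flagged in the cross-checking item above, comparing the two sources field by field can be scripted once each source is exported with a shared identifier. The sketch below is a minimal illustration only; the file names, column names, and "patient_id" key are hypothetical placeholders, not the project's actual data structures.

```python
# Minimal sketch of cross-checking demographic fields between two data sources.
# File names, column names, and the "patient_id" key are hypothetical; a real
# project would also handle de-identification, coding differences, and more fields.

import pandas as pd

claims = pd.read_csv("claims_extract.csv")      # secondary/administrative source
charts = pd.read_csv("chart_abstraction.csv")   # primary/medical-record source

merged = claims.merge(charts, on="patient_id", suffixes=("_claims", "_chart"))

for field in ["gender", "race", "ethnicity", "primary_language"]:
    a = merged[f"{field}_claims"]
    b = merged[f"{field}_chart"]
    missing = (a.isna() | b.isna()).mean()   # share missing in at least one source
    both = a.notna() & b.notna()
    agree = (a[both] == b[both]).mean() if both.any() else float("nan")
    print(f"{field}: missing in at least one source = {missing:.0%}, "
          f"agreement when both present = {agree:.0%}")
```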

Hot Tip: Although data quality is key, find a balance between budgetary and personnel resources and the time required to cross-check data through multiple sources and/or impute missing data using a variety of techniques.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Related posts:

  1. MA PCMH Eval Week: Ann Lawthers on Triangulation Using Mixed Methods Appeals to Diverse Stakeholder Interests
  2. Gary Huang on Improper Payment Studies
  3. MME Week: Terri Anderson on Using Best Practices for Mixed Methods Research in Evaluation

Jeanne Hubelbank on Assessing Audience or Client Knowledge in a Sweet Way

American Evaluation Association 365 Blog - Thu, 05/15/2014 - 01:15

Hello, my name is Jeanne Hubelbank. I am an independent evaluation consultant. Most of my work is in higher education where, most recently, I help faculty evaluate their classes, develop proposals, and evaluate professional development programs offered to public school teachers. Sometimes, I am asked to make presentations or conduct workshops on evaluation. When doing this, I find it helpful to know something about the audience's background. Clickers, hand raising, holding up colored cards, standing up, and clapping are ways to approach this. A recent AEA365 post, Innovative Reporting Part I: The Data Diva's Chocolate Box, which showed how to present results on candy wrappers, served as an impetus for another way to introduce evaluation and to assess people's understanding of it.

Instead of results, write evaluation terms such as use, user, and methods on stickers and place them on the bottom of Hershey’s Kisses®; one word to a kiss. Participants arrange their candy in any format that they think represents how one approaches the process of conducting an evaluation. This can give one a quick view of how the participants view evaluation and most people like to eat the candy afterwards.

Hot tips:

  • Use three-quarter inch dots
  • Hand write or print terms you want your clients to display
  • Besides Hershey's Kisses®, provide Starbursts® for those who are allergic or averse to chocolate
  • Use different colored kisses for key terms, such as use and uses in silver and assessment in red, for a quick view on where people place them in the process
  • Wrap each collection of candy terms into a piece of plastic wrap and tie with a curled ribbon
  • Ask people to arrange candy in any format that they think represents how one approaches the process of doing an evaluation
  • You can do this before and after a presentation, but if you do it again, remind people to wait to eat.

Rad Resources:

Susan Kistler’s chocolate results

Stephanie Evergreen's cookie results and her book Presenting Data Effectively: Communicating Your Findings for Maximum Impact.

Hallie Preskill and Darlene Russ-Eft’s book Building Evaluation Capacity: 72 Activities for Teaching and Training.

Michael Q. Patton’s book Creative Evaluation.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Related posts:

  1. Nicole Vicinanza on Explaining Random Sampling to Stakeholders
  2. Best of aea365 week: Nicole Vicinanza on Explaining Random Sampling to Stakeholders
  3. Susan Kistler on Innovative Reporting Part I: The Data Diva’s Chocolate Box

Sue Griffey on Ready or Not: Mentor and Mentee Readiness

American Evaluation Association 365 Blog - Wed, 05/14/2014 - 01:15

I'm Sue Griffey; I lead the Evaluation Center at Social & Scientific Systems, Inc. in Silver Spring, MD. Outside of work, I mentor professionals in evaluation and public health, both in formal programs (Cherie Blair Foundation (CBF), APHA-SA National Mentoring Program, Aspire Foundation, Rollins School of Public Health Annual Mentoring Program) and through individual connections.

I have noticed over the past few years, as my mentoring work has increased, that my ability to assess mentoring readiness is critical to the success of the mentoring relationship.

Hot Tip: Mentoring is a voluntary activity. Don't just assume that the Mentor-Mentee pairing results in both being ready. It may appear as Mr+/Me+ (as in the table below) but the pairing may actually be in a discordant cell.

Mentee isn't ready: The mentee may not realize she isn't ready for mentoring; you as the mentor may need to help her see that. A mentee may identify mentoring as what she needs when it really isn't. As the mentor, develop and apply metrics for readiness as you would in an evaluation.

Hot Tip: I have a 3-email rule. If I have to track down the mentee more than 2 times because he has missed a scheduled session or not confirmed a session time, my third email lays out my perspective that there may be a mismatch in what the mentee is able to do (as shown below).

Hot Tip: Don’t rule out a mentoring program because you don’t think you offer the program’s content or focus. I became a CBF mentor in its initial program even though I didn’t necessarily have the business focus I thought they wanted. My match was a mentee who two years later still benefits from my experience in public health and leadership.

Mentor isn’t ready: If you have agreed to mentor, respect the commitment or acknowledge that you can’t.

As the mentee, make sure you are getting what you need from the mentor. And if you aren't getting what you need, don't be afraid to let the mentor or the mentoring program manager know. It may be that the mentor really isn't ready for the mentoring relationship.

Hot Tip: It may help you as a mentee to think of the relationship overall, and of each session, as answering 3 questions:

  1. What do you need right now?
  2. What do you want to do and why?
  3. How can your mentor help you?

Hot Tip: Being a mentee is as important as your work or schooling. Be proactive in communications, making sure to check your email daily, letting a mentor know what your schedule is, what time zone you are in, and how and when to reach you.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Related posts:

  1. Tamara Bertrand Jones on Finding and Working With a Mentor
  2. Norma Martinez-Rubin on Mentorship and Involvement in AEA
  3. DOVP Week: Nivedita Ranade and Tom McKlin on The Importance of Nurturing in Mentoring Students with Disabilities (SwD)

Susan Kistler on Innovative Reporting Part III: Taking It to the Streets

American Evaluation Association 365 Blog - Tue, 05/13/2014 - 01:15

I’m Susan Kistler, of TheSmarterOne.com, and I’m on to Part III of an Innovative Reporting Series (see Part I on chocolate reports and Part II on adding video to your toolbox). Today I wanted to share lessons from a project marrying street art, evaluation, infographics, technology, and community building.

Lessons Learned: Last month, I had the privilege of interviewing Lisa Koeman, co-lead for the Visualising Mill Road project. The project placed small voting devices in 18 shops on Mill Road in Cambridge, United Kingdom. The devices asked straightforward questions about perceptions of happiness, safety, and social connectedness and had three buttons with which visitors could indicate a positive, neutral, or negative response. The questions were replaced every two days over a three-week period, and the information gathered was reflected in growing chalked infographics on the street in front of each shop, with a new line added in the early hours every other day to reflect question responses in that shop.

As the project unfolded, they evaluated "the effect of the public visualisation of community data…How can it inform people on what other members of the community think of specific local issues? Does it encourage reflection and discussion?" Although the analysis of the results is still in progress, Lisa was pleased to share that the project encouraged community discussion about the issues raised generally and the project specifically, at both the point of data collection (what's up with those little boxes and buttons?) and reporting (what's up with those infographics on the sidewalk?). I'll be sure that we share the final report when it comes out.
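
For readers curious about the mechanics, the tallying behind each chalked line can be imagined as a simple roll-up of button presses per shop and per question. The sketch below is a toy illustration only, not the project's actual code; every shop, question, and vote in it is invented.

```python
# Toy sketch (not the Visualising Mill Road code): rolling up three-button responses
# per shop and per question into the shares a chalked infographic line could represent.
# All shops, questions, and votes below are invented for illustration.

from collections import Counter

votes = [  # (shop, question_id, response)
    ("bakery", 1, "positive"), ("bakery", 1, "neutral"), ("bakery", 1, "positive"),
    ("bookshop", 1, "negative"), ("bookshop", 1, "positive"), ("bookshop", 1, "positive"),
]

tallies = {}
for shop, question, response in votes:
    tallies.setdefault((shop, question), Counter())[response] += 1

for (shop, question), counts in sorted(tallies.items()):
    total = sum(counts.values())
    shares = {r: counts[r] / total for r in ("positive", "neutral", "negative")}
    print(shop, f"question {question}:",
          ", ".join(f"{r} {share:.0%}" for r, share in shares.items()))
```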

Hot Tip: For me, this statement from Lisa was the most compelling in terms of rethinking data visualization and reporting: “Lots of people go for screens, for sharing results digitally, but by positioning graphics of local data on the pavement, at people’s feet, it attracted attention and made it accessible for the community. It gave the data back to the people.” She noted that people who would be unlikely to view an online report were talking with others about the graphics and voting devices that had appeared right on their doorsteps.

Rad Resource: Learn more about the project at http://visualisingmillroad.com/.

Hot Tip: Spray-painted chalk lasted all three weeks, even through rain, yet faded away over time, leaving no permanent marks.

Hot Tip: Work with community groups to gain buy-in and get permissions and to ensure that the questions asked are meaningful.

Hot Tip: By working with local artists, as part of their 5:00 AM chalking team, they found an affordable way to complete the project and gained further buy-in from an additional group living in the Mill Road area.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Related posts:

  1. Lisa Townson on Tailoring Evaluation to Your Audience
  2. Courtney Heppner and Sarah Rand on Producing Online Evaluation Reports
  3. Susan Kistler on Learning From DVR Innovation

Bob Kahle on Practical Strategies to Leverage Technology for Deeper Insights

American Evaluation Association 365 Blog - Mon, 05/12/2014 - 01:15

I am Bob Kahle, a veteran evaluator and frequent AEA workshop presenter. As owner of Kahle Research Solutions, a research and evaluation firm with a qualitative methods focus, I am writing today to offer practical tips to leverage new tools enabled by advancing technology.

Lesson Learned: Ease into it. Most evaluators are aware of tools like Bulletin Board Focus Groups (BBFGs), web-enabled telephone focus groups or in-depth interviews, and mobile devices for collecting data in text, audio or visual forms. Knowing about these techniques is a good start, but many of us get stuck on how to actually implement some of these new digital methods, as it seems risky and may be out of our personal comfort zones. Consider using some of the new methods in combination with existing approaches to gain experience, confidence and rich insights.

Hot Tip: Using new online tools is not an all or nothing situation. Consider hybrid designs where you couple traditional tried and true techniques with new digital methods. For example, if you just completed focus groups with your target population in traditional face-to-face settings, consider inviting back the most articulate "rock star" respondents to participate in an online BBFG. In this way, you can still gain the benefit of in-person discussions, but can leverage technology by bringing together especially insightful participants in a virtual and convenient asynchronous data collection mode. Ease the burden on respondents by letting them work in their environment and on their schedule. Finally, if you are like me, you always have the nagging feeling after the last focus group of "I wish I would have followed up on…." Instead of beating yourself up, organize and implement a BBFG to ask those follow-up items you did not have the time for (or think of) in the face-to-face setting. BBFG results are often so rich and detailed that your new problem becomes synthesizing and organizing the wealth of information so clients can digest it.

Rad Resource: If you want to learn more about new digital methods and how to apply them, consider attending “Digital Qualitative: Leveraging Technology for Deeper Insights” an AEA hosted eStudy which I will conduct on May 20 and 22.

Rad Resource: Attend the AEA Summer Evaluation Institute June 1-4 in Atlanta, GA for an array of great sessions including one with the same title as above.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Related posts:

  1. PD Presenters: Bob Kahle on Digital Qualitative Methods
  2. Kristina Mycek on Spatial Analysis
  3. Poster Week: Anna Douglas on Using Asynchronous Discussion Groups for Evaluation

Miki Tsukamoto on Using Video as a Tool to Capture Baseline Surveys

American Evaluation Association 365 Blog - Sun, 05/11/2014 - 01:15

Greetings AEA365! My name is Miki Tsukamoto and I am a Senior Monitoring and Evaluation Officer at the Planning and Evaluation Department in the International Federation of Red Cross and Red Crescent Societies (IFRC).

What if you had an opportunity to add a human face to baseline surveys and present the numbers in a more appealing way?

In a joint initiative with the Uganda Red Cross Society (URCS) and the Swedish Red Cross, I recently had such an opportunity. We piloted video as a tool to complement a baseline survey that had been carried out for URCS’s community resilience programme. The video aimed to capture stories from communities according to selected objectives/indicators of the programme, with the idea that in three years’ time the tool could be used again to measure and demonstrate change or highlight gaps in programming.

Lessons Learned: Baseline data are important for planning, monitoring, and evaluating a project’s performance. In many organizations, such a survey often ends in a report filled with numbers, which, although useful for some purposes, is not always understood by all stakeholders, including some of the communities we aim to assist. Taking this into consideration, video seemed an ideal medium for what the IFRC needed since it:

  • Offers visual imagery and can transcend language barriers if needed;
  • Gives communities an opportunity to participate and directly express their views during the interviews; and
  • Provides a more appealing way to capture and report on the baseline.

Here are 3 lessons that I took away from this experience:

Gatekeepers: It is important to identify your gatekeeper(s), since they are essential for meeting community members on the ground, obtaining permission to film, and gaining acceptance of the film crew’s presence in the communities and in the randomly selected individual households.

Independent Interpreter: If interpretation is necessary, an independent interpreter is key, since s/he serves as the voice of both the interviewee and the interviewer. S/he plays an important role in reducing bias and providing a comfortable environment for honest dialogue during the interview process.

Community buy-in: The filming process, and the community’s better understanding of the aims of the video project, can help build stronger buy-in from the community for your programme overall.

Rad Resources: We have two versions of the baseline video (if you are reading this via email that does not support embedded video, please click through back to the online post):

Short Version: 

Long Version: 

Hot Tip: For those interested in innovations in humanitarian technology and its practical impact, the Humanitarian Technology: Science, Systems and Global Impact 2014 conference is coming up soon in Boston, MA, from 13 to 15 May 2014.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Related posts:

  1. Video in #Eval Week: Cindy Banyai on Putting it in Their Hands – Using participatory video to foster evaluation ownership
  2. Video in #Eval Week: Kas Aruskevich on Telling the Story Through Video
  3. SWB Week: Cindy Weng, Chris Barker, and Larry George on Survival Analysis using Current Status Data

Sheila B. Robinson on a Fabulous Way to Start Your Summer: AEA’s Summer Evaluation Institute

American Evaluation Association 365 Blog - Sat, 05/10/2014 - 06:23

Hello Evaluation Learners! I’m Sheila B. Robinson, aea365’s Lead Curator and sometimes Saturday contributor, coming to you from Rochester, NY, where we’re enjoying our first warm weekend in ages and I’m thinking summer! In March I wrote about a great opportunity for summer - the 2014 AEA Summer Evaluation Institute - and gave a preview of some of the high quality professional development courses offered. Today, I’ll share a few more.

Note: Descriptions are truncated, so please visit the site for complete descriptions:

Rad Resources: The Institute opens with two concurrent pre-institute workshops:

Practical Methods for Improving Evaluation Communication with Stephanie Evergreen

…attendees will learn the science behind presenting data effectively and will leave with direct, pointed changes that can be immediately administered to their own conference presentations and other evaluation deliverables. … the workshop will address principles of graph, slideshow, and report design that support legibility, comprehension, and retention of our data in the minds of our clients. Grounded in visual processing theory, the principles will enhance attendees’ ability to communicate more effectively with peers, colleagues, and clients through a focus on the proper use of color, arrangement, graphics, and text in written evaluation documents.

Introduction to Evaluation with Thomas Chapel

…an overview of program evaluation for Institute participants with some, but not extensive, prior background in program evaluation. The session will be organized around the Centers for Disease Control and Prevention’s (CDC) six-step Framework for Program Evaluation in Public Health as well as the four sets of evaluation standards from the Joint Commission on Evaluation Standards. …The course will touch on all six steps, but particular emphasis will be put on the early steps, including identification and engagement of stakeholders, creation of logic models, and selecting/focusing evaluation questions.

Among concurrent session offerings during the Institute are:

Focus Group Research: Understanding, Designing and Implementing with Michelle Revels

As a qualitative research method, focus groups are an important tool to help researchers understand the motivators and determinants of a given behavior. This course provides a practical introduction to focus group research.

Using Theory to Improve Evaluation Practice with Stewart Donaldson

…provide evaluators with an opportunity to improve their understanding of how to use theory to improve evaluation practice. We’ll examine social science theory and stakeholder theories, including theories of change and their application to making real improvements in how evaluations are framed and conducted.

Fundamentals of Survey Sampling for Evaluators with Michael Cohen

…targets evaluators who do not have experience in developing or assessing the various sampling frameworks, but who have an interest in better understanding this fundamental concept. Beginning with clarifying the scientific language and operational lingo of the field, you will review basic approaches and identify the benefits and pitfalls of each.

Hot Tip: Registration is still open for the 2014 AEA Summer Evaluation Institute - June 1-4 in Atlanta, GA - but courses do fill up!

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Related posts:

  1. Sheila B. Robinson on a Really Rad Resource for Summer Evaluation Learning
  2. Susan Kistler on Handouts from the 2011 AEA/CDC Summer Evaluation Institute
  3. Susan Kistler on Taking a Professional Development Break This Summer

Ed Eval TIG Week: Tara Donahue on Avoiding Surprises: Collaborating with the District’s Research Departments

American Evaluation Association 365 Blog - Fri, 05/09/2014 - 01:15

My name is Tara Donahue, and I am a managing evaluator at McREL International.  In my role as an evaluator, I spend a lot of time working with K-12 districts and collaborating with their research departments to gain access to district, school, and student data. Even as an “external” partner, I have found that many strategies can be put in place to develop a positive working relationship with the district in order to receive datasets to complete evaluation reports within budget and on time.

Lesson Learned: At the very beginning of the project, share not only your evaluation timeline but also the purpose for which you need the data with the research director and any staff member who may be assigned to work with you.  By sharing your timeline, the district staff understand what deadlines you are working under.  They can also tell you what is possible.  For example, if you need end-of-year grades and have a report due August 30, the district can tell you the earliest possible time those grades will be available.  If you think you may need to adjust the timeline, those negotiations can happen at the beginning of the project, and contingency plans can be developed “just in case” the data are not available when you need them.

No one knows the data better than the district’s research staff, and by explaining the purpose for which you need a particular dataset, you give them the chance to help you think through how the dataset should be created.  On some multi-year projects, our team has worked directly with the district to develop templates during the first year of the project that can then be used annually with few revisions.  By not having to reinvent the wheel each year, you develop project efficiencies that save staff time and reduce costs.  Another benefit of discussing with the research staff why you are requesting certain data is simply to increase buy-in.  By being transparent and openly discussing why you are requesting something, the research staff gain a better sense of your purpose and a vision of how your evaluation will benefit students in the district.
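For evaluators who handle recurring data requests programmatically, here is one minimal sketch of the reusable-template idea: checking each year’s district extract against the column layout agreed on in year one. This is an illustrative assumption, not the author’s actual workflow; the column names, types, and file name are hypothetical.

    # Illustrative sketch only: validate an annual district extract against a
    # year-one column template, so later deliveries need few revisions.
    # Column names, dtypes, and the file name are hypothetical examples.
    import pandas as pd

    TEMPLATE = {
        "student_id": "object",    # district-assigned ID, read as text
        "school_code": "object",
        "grade_level": "int64",
        "final_grade": "float64",
    }

    def check_extract(path):
        """Return a list of problems; an empty list means the extract matches."""
        df = pd.read_csv(path, dtype={"student_id": str, "school_code": str})
        problems = []
        for col, expected in TEMPLATE.items():
            if col not in df.columns:
                problems.append("missing column: " + col)
            elif str(df[col].dtype) != expected:
                problems.append(col + ": expected " + expected + ", got " + str(df[col].dtype))
        return problems

    if __name__ == "__main__":
        issues = check_extract("district_extract_2014.csv")  # hypothetical file name
        print("Extract matches the template." if not issues else "\n".join(issues))

A check like this simply makes the agreed template explicit, so any change in the district’s extract surfaces immediately rather than during analysis.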

Hot Tip: Work with the research department to ensure that you are complying with the district’s IRB and research policies.  Even if your organization has its own IRB, districts may require that you also go through theirs.  Having that conversation at the beginning of the project can save a lot of time, energy, and stress later on.

The American Evaluation Association is celebrating Ed Eval TIG Week with our colleagues in the PK12 Educational Evaluation Topical Interest Group. The contributions all this week to aea365 come from our Ed Eval TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Related posts:

  1. Jack Mills on Project Requirements
  2. Jack Mills on Setting Up Evaluation Contracts
  3. Ed Eval Week: Susan Shebby and Sheila Arens on Using Evaluation to Support the Sustainability of District Grant Initiatives

Ed Eval TIG Week: Krista Collins on Resources to Inform Evaluations on Personalized Learning, Blended Learning and Digital Learning

American Evaluation Association 365 Blog - Thu, 05/08/2014 - 01:15

Hi!  My name is Krista Collins, and I am an Evaluation Associate at The Evaluation Group in Decatur, GA.   I am an educational evaluator, with expertise in child development, and my work primarily consists of conducting multi-level evaluations of large federally-funded projects awarded to school districts and non-profit organizations.  One of the questions that I ask myself daily is, “How do I develop strong, evidence-based instruments to evaluate innovative educational strategies that are currently being defined?”

One of my current projects is an evaluation of a Race to the Top District grant focused on revolutionizing instruction by building the capacity for personalized learning, blended learning, and digital learning to improve student achievement and educator effectiveness.  With limited empirical evidence to define the best implementation practices, pathways, outcomes, or methodologies to evaluate these innovative learning strategies, I turned to Google to find credible resources that could inform my evaluation design.  Here are a few useful resources that can provide a good starting point for other evaluators working with innovative educational programs focused on incorporating digital learning into classrooms.

Rad Resource: The International Association for K-12 Online Learning (iNACOL) is a non-profit organization that supports research on online and blended learning strategies to inform policy, standards, and professional development opportunities that guide innovative instruction in schools.  I found their list of National Quality Standards and Promising Practices to be very helpful in identifying the goals and best practices that guided the impact evaluation.

Rad Resource: The Center for Digital Education is a national research and advisory institute that reports on the current trends and policy efforts that guide educational technology in the US.  Join their email list to receive Special Reports, Papers, and Newsletters that can provide valuable information on how innovative practices in digital learning are being implemented and assessed.  These reports provide an objective perspective on how school districts, schools, and individual teachers can work together to modernize the learning environment, and they allowed me to identify key strategies to include in the implementation evaluation.

Rad Resource: ASCD is a professional association for educators focused on professional development, capacity building, and educational leadership around innovative programs to educate the whole child.  Through conferences, publications, and professional learning services, ASCD has developed standards and tools that informed our evaluation of professional development and project implementation.

The American Evaluation Association is celebrating Ed Eval TIG Week with our colleagues in the PK12 Educational Evaluation Topical Interest Group. The contributions all this week to aea365 come from our Ed Eval TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Related posts:

  1. CREATE Week: Barbara B. Howard and Don Klinger on The Classroom Assessment Standards: Guidelines for Teacher Practice
  2. Chad Green on Context -Specific Evaluation of School-wide Initiatives
  3. Ed Eval TIG Week: Amy Gaumer Erickson on Evaluating the Quality of Professional Development