Resource Feeds

We should defend the rights of Gaza’s children

ODI general feed - Mon, 08/04/2014 - 00:00
The carnage in Gaza has taken a terrifying toll on Palestinian children. It has also called into question the commitment of western governments to a raft of international human rights provisions aimed at protecting children trapped in conflict.
Categories: Resource Feeds

CP TIG Week: Tara Gregory and Natalie Wilkins on An Introduction to Community Psychology Topical Interest Group’s Series on Empowerment Evaluation

American Evaluation Association 365 Blog - Sun, 08/03/2014 - 05:16

Hello. We are Tara Gregory, Director of Research and Evaluation at Wichita State University’s Center for Community Support and Research, and Natalie Wilkins, Behavioral Scientist at the Centers for Disease Control and Prevention. We’re members of the Leadership Council for the Community Psychology TIG and are excited to introduce this week’s blogs highlighting the connections among empowerment, evaluation, and community psychology.

As community psychologists who are evaluators, we often think of the tenet of meeting people where they are. Where people are, with respect to evaluation, may be overwhelmed, confused, and even resistant. This is not a criticism of those trying to make a difference in our communities, but rather a recognition of the need to approach evaluation from an empowerment perspective – both in helping people learn evaluation themselves and in providing the results of our own evaluations in a way that helps empower people. Either way, the role of the community psychologist in evaluation is to meet people where they are and walk with them as a partner, with the intention of preparing them to go forward independently.

Lessons Learned:

  • Empowerment evaluation – Listening to key stakeholders is essential. Often, people will be resistant to evaluation because they are overwhelmed by the idea of having to do something outside their area of expertise. Listening to stakeholders’ stories about how their program works, and how they know it works, can often reveal strengths and evaluation capacity that people and programs never knew they had. Lots of folks have the building blocks of evaluation in place already – they’re just not calling it “evaluation!”
  • Facilitating reflection – Encouraging reflection on evaluation results and helping people come to their own conclusions is a way to create ownership and empowerment to continue good work or make changes where needed.
  • Qualitative methods – Offering an opportunity for people to share their own stories as part of an evaluation can also be empowering, particularly when they’re encouraged to focus on strengths, successes, resiliency or other positives that sometimes get lost.

Hot Tip:

  • Check out the Empowerment Evaluation TIG! They host their own blog weeks, webinars, and many other educational opportunities. Many of us community psychologists belong to this group and gain valuable knowledge and skills through membership.

Rad Resources:

These teaching materials are designed to introduce individuals to empowerment evaluation and intended to be a resource for facilitating an introductory lecture on the topic.

Dr. David Fetterman’s blog provides a range of resources on empowerment evaluation theory and practice, including links to videos, guides and relevant academic literature.

The American Evaluation Association is celebrating CP TIG Week with our colleagues in the Community Psychology Topical Interest Group. The contributions all week come from CP TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Related posts:

  1. CPE Week: Abraham Wandersman on Empowerment Evaluation and Getting to Outcomes
  2. CPE Week Bonus: Abraham Wandersman on Empowerment Accountability
  3. CP TIG Week: Wendi L. Siebold on Where The Rubber Meets The Road: Taking A Hard Look At Organizational Capacity And The Use Of Empowerment Evaluation

Sheila B. Robinson on Making the Most of Evaluation 2014, Even if You Cannot Attend

American Evaluation Association 365 Blog - Sat, 08/02/2014 - 09:34

Happy Summer! I’m Sheila B. Robinson, aea365’s Lead Curator and sometimes Saturday contributor. Last year at this time, Susan Kistler contributed some fabulous ideas for enjoying AEA’s annual conference even for those not attending, and they’re well worth repeating this year. So, with thanks to Susan, here we go!

Hot Tip #1 – Leverage the Evaluation 2014 Online Conference Program (coming soon!) to Build Your Professional Network: The Evaluation 2014 Online Conference Program is searchable by Topical Interest Group Sponsor, speaker, and keyword. If you are attending, researching the conference program and its 700+ sessions in advance is a must-do to make the most of your time. Even if you are not attending, you can search the conference program for colleagues working in your area and connect via email to raise a question.

Hot Tip #2 – Check the AEA eLibrary for Handouts and Related Materials: AEA’s online public eLibrary has over 1,000 items in its repository, a number that will grow considerably as the conference nears and immediately afterward. All speakers are encouraged to post their materials in the eLibrary, and anyone may search and download items of interest, whether attending the conference or not.

Hot Tip #3 – Follow Hashtag #Eval14 on Twitter: If you are on Twitter, use hashtag #Eval14 to tag your conference-related tweets. If you aren’t attending, follow #Eval14 to stay abreast of the conversation, and follow @aeaweb, AEA’s Headlines and Resources Twitter feed, in particular. Check out #Eval13 for an idea of what folks were tweeting last year!

Bonus Cool Trick – Get the H&R Compilation: Not up for joining Twitter quite yet, but want to get the field’s headlines and resources for the week nevertheless? You can subscribe to AEA’s Headlines and Resources compilation to arrive via email or RSS once each week. Learn more here.

Hot Tip #4 – Check in Regularly or Subscribe to EvalCentral: Chris Lysy maintains EvalCentral, a compilation of 57 evaluation-related blogs where you can always find the newest posts. Lots of bloggers will be in attendance at Evaluation 2014, and EvalCentral allows you to find them all in one place. BONUS: Download Evaluation 2013 – A Conference Story ebook, written and illustrated by Chris himself!


Related posts:

  1. Susan Kistler on 4 Ways to Make the Most of Evaluation 2013 – Even if you cannot attend
  2. Susan Kistler on the eLibrary and Headlines List
  3. Susan Kistler on Subscribing to the eLibrary

CrisisWatch N°132

Increasing Israeli-Palestinian tensions culminated in Israel launching "Operation Protective Edge" in Gaza in early July. The assault, which started as an aerial campaign and was later extended to include ground operations, reportedly killed more than 1,400 Palestinians throughout the month, while 64 Israelis were killed in clashes inside the Gaza Strip and by Hamas rocket fire. Several attempts at reaching a ceasefire agreement failed in July. Israel backed proposals demanding a cessation of hostilities as a prerequisite for negotiating a long-term truce, while Hamas insisted that ceasefire modalities not agreed to during the fighting would never be addressed. As CrisisWatch goes to press, there are reports that a three-day humanitarian ceasefire announced 1 August has already collapsed.

PD Presenters Week: Mike Trevisan and Tamara Walser on Evaluability Assessment

American Evaluation Association 365 Blog - Fri, 08/01/2014 - 01:15

Hello from Mike Trevisan and Tamara Walser! Mike is Dean of the College of Education at Washington State University and Tamara is Director of Assessment and Evaluation in the Watson College of Education at the University of North Carolina Wilmington. We’ve published, presented, and conducted workshops on evaluability assessment and are excited about our pre-conference workshop at AEA 2014!

Evaluability assessment (EA) got its start in the 1970s as a pre-evaluation activity to determine the readiness of a program for outcome evaluation. Since then, it has evolved into much more and is currently experiencing a resurgence in use across disciplines and globally.

We define EA as the systematic investigation of program characteristics, context, activities, processes, implementation, outcomes, and logic to determine

  • The extent to which the theory of how the program is intended to work aligns with the program as it is implemented and perceived in the field;
  • The plausibility that the program will yield positive results as currently conceived and implemented; and
  • The feasibility of and best approaches for further evaluation of the program.

EA results lead to decisions about the feasibility of and best approaches for further evaluation and can provide information to fill in gaps between program theory and reality—to increase program plausibility and effectiveness.

Lessons Learned:  The following are some things we and others have learned about the uses and benefits of EA—EA can:

  • Foster interest in the program and program evaluation.
  • Result in more accurate and meaningful program theory.
  • Support the use of further evaluation.
  • Build evaluation capacity.
  • Foster understanding of program culture and context.
  • Be used for program development, formative evaluation, developmental evaluation, and as a precursor to summative evaluation.
  • Be particularly useful for multi-site programs.
  • Foster understanding of program complexity.
  • Increase the cost-benefit of evaluation work.
  • Serve as a precursor to a variety of evaluation approaches—it’s not exclusively tied to quantitative outcome evaluation.

Rad Resources:

Our book situates EA in the context of current EA and evaluation theory and practice and focuses on the “how-to” of conducting quality EA.

An article by Leviton, Kettel Khan, Rog, Dawkins, and Cotton describes how EA can be used to translate research into practice and to translate practice into research.

An article by Thurston and Potvin introduces the concept of “ongoing participatory EA” as part of program implementation and management.

An issue of New Directions for Evaluation focuses on the Systematic Screening Method, which incorporates EA for identifying promising practices.

A report by Davies describes the use of EA in international development evaluation in a variety of contexts.

Want to learn more? Register for Evaluability Assessment: What, Why and How at Evaluation 2014.

This week, we’re featuring posts by people who will be presenting Professional Development workshops at Evaluation 2014 in Denver, CO. Click here for a complete listing of Professional Development workshops offered at Evaluation 2014.


Related posts:

  1. Nicola Dawkins on Evaluability Assessment and Systematic Screening Assessment
  2. CP TIG Week: Hsin-Ling (Sonya) Hung on Resources for Evaluability Assessment
  3. EPE Week: Valerie Williams on Evaluating Environmental Education Programs

PD Presenters Week: Mary Crave, Kerry Zaleski and Tererai Trent on Do your participatory methods contribute to an equitable future?

American Evaluation Association 365 Blog - Thu, 07/31/2014 - 01:15

Greetings from Mary Crave and Kerry Zaleski of the University of Wisconsin – Extension, and Tererai Trent of Tinogona Foundation and Drexel University. For the past few years we’ve teamed up to teach participatory methods for engaging vulnerable and historically under-represented persons in monitoring and evaluation. We’ve taught hands-on professional development workshops at AEA conferences, eStudies, and Coffee Breaks. “Visionary Evaluation for a Sustainable, Equitable Future” is not only the theme for Evaluation 2014, it is a succinct description of why we believe so strongly in what we teach.

Lessons Learned: We’ve noticed during our trainings around the world that there is a continuum, or range, of what an evaluator might consider to be “participatory”. Being aware of our own position and philosophy of participatory methods is especially critical when working with persons who traditionally may have been excluded from participation due to income, location, gender, ethnicity or disability. Trent suggests these lenses, or levels, from low to high: Spectator Participation > Tokenism Participation > Incentive Participation > Functional Participation > Ownership Participation. The more ownership, or the higher the level of participation, the more impact a program will have on social justice issues and sustainable, equitable futures for people. Those who want their methods lens to focus on “ownership participation” sometimes have trouble reaching that aim because they have a small tool box or get stuck using the wrong tool at a particular time in the program cycle. Rubrics for success often leave out the voices of the vulnerable, though those voices can also be included using participatory tools.

Hot Tips:

  • There are M & E tools especially suited for working with vulnerable persons that allow all voices to be heard, that do not depend on literacy skills, that consider cultural practices and power relationships in decision making and discussion, and that engage program beneficiaries in determining rubrics for success. These tools can be used in the planning, monitoring, data collection, analysis, and reporting stages of the program cycle.
  • You can expand your tool box of methods, and widen your lens on participatory methods, at our 2-day workshop at AEA 2014, Reality Counts (Workshop #6). We’ll be joined by Abdul Thoronka, an international community health specialist and manager of a community organization that works with persons with disabilities.

Rad Resources:

  • Robert Chambers’ 2002 book, Participatory Workshops: A Sourcebook of 21 Sets of Ideas and Activities.
  • The Food and Agriculture Organization (FAO) of the UN: click on publications, then type PLA in the search menu.
  • AEA Coffee Break Webinar 166: Pocket-Chart Voting – Engaging vulnerable voices in program evaluation with Kerry Zaleski, December 12, 2013 (recording available free to AEA members).

Want to learn more? Register for Reality Counts: Participatory methods for engaging marginalized and under-represented persons in M&E at Evaluation 2014.

Related posts:

  1. Mary Crave, Kerry Zaleski and Tererai Trent on Participatory Methods for Engaging Vulnerable and Under-Represented Persons in M&E
  2. Mary Kane and Scott Rosas on Leveraging Concept Mapping
  3. Ann Zukoski on Participatory Evaluation Approaches

Negotiating perceptions: Al-Shabaab and Taliban views of aid agencies

ODI general feed - Thu, 07/31/2014 - 00:00
Based on research and interviews with members of the Taliban and Al-Shabaab, this HPG policy brief explores how these armed groups perceive aid agencies and the implications for humanitarian response in those areas.
Categories: Resource Feeds

Complex Change: An emerging field

Networking Action - Wed, 07/30/2014 - 13:31

Complex change challenges are a specific type of change challenge that is the growing focus of an impressive array of work.  An earlier blog distinguished complex environments from simple, complicated and chaotic ones through Dave Snowden’s cynefin framework.  It can …

PD Presenters Week: Brian Yates on Doing Cost-Inclusive Evaluation. Part III: Cost-Benefit Analysis

American Evaluation Association 365 Blog - Wed, 07/30/2014 - 01:15

“But wait — there’s more!” Hi! I’m Brian Yates. Yes, this is the third piece in a series of aea365 posts on using cost data in evaluation … and not entirely inappropriate for your former AEA Treasurer to author. (I’m also Professor in the Department of Psychology at American University in Washington, DC.)

My past two 365ers (you can find them here and here) focused on evaluating costs of resources consumed by programs, and the monetary and monetizable outcomes produced by programs. With those two types of data, we can begin evaluation of the programs’ cost-benefit.

Lesson Learned – Funders should ask “Is it worth it?” and not just “How much does it cost?” The impulse to cut programs to meet budget goals actually can increase costs and bust budgets if the programs would have increased employment (and taxes paid), reduced consumers’ use of other services, or both. Evaluators can investigate this by comparing program costs to program benefits.

Hot Tip – Strategies for quantifying Value. “Is it worth it?” can be answered in different ways, including:

  • dividing benefits by costs (benefits/costs ratio),
  • subtracting costs from benefits (net benefit), and
  • measuring the time required before benefits exceed costs (Time to Return On Investment).

Each cost-benefit index describes a different aspect of a program’s cost-benefit relationship, and all can be reported in a Cost-Benefit Analysis or CBA.
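As a back-of-the-envelope sketch of the three indices, consider the following (all dollar figures and the two-year benefit schedule are invented for illustration, not drawn from any real program evaluation):

```python
# Hypothetical figures, invented for illustration only.
program_cost = 120_000.0     # total cost of resources consumed (USD)
program_benefit = 180_000.0  # total monetary + monetized outcomes (USD)

bc_ratio = program_benefit / program_cost     # benefits/costs ratio
net_benefit = program_benefit - program_cost  # net benefit

# Time to Return On Investment: months until cumulative benefits
# first exceed costs, assuming costs are paid up front and benefits
# accrue evenly over two years.
monthly_benefit = program_benefit / 24

months = 0
cumulative = 0.0
while cumulative <= program_cost:
    cumulative += monthly_benefit
    months += 1

print(f"benefits/costs ratio: {bc_ratio:.2f}")  # 1.50
print(f"net benefit: ${net_benefit:,.0f}")      # $60,000
print(f"time to ROI: {months} months")          # 17
```

Note that the indices can tell different stories: a small program may have a high ratio but a modest net benefit, which is one reason to report all three in a CBA.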

Hot Tip Too - Costs, benefits, and cost-benefit relationships can be measured at several levels of specificity (such as for an individual in a program or for a new implementation of the program). Cost-benefit differences between programs as well as between consumers in the same program are important to understand, too.

Lesson Learned – Programs whose costs exceed measurable benefits still can be funds-worthy. A “news” story in the satirical paper The Onion a few years back made an important point with its headline, “Cost of living now outweighs benefits” (http://www.theonion.com/articles/cost-of-living-now-outweighs-benefits,1316/). Few of us would be inclined to act in a summative manner based on these “findings” (even if they were valid)! In the same way, the value of many programs may be underestimated or missed entirely when viewed through the “green eyeshade” of an exclusively pecuniary perspective.

Also, some programs provide services to which all people are legally entitled. Evaluations of these can instead ask, “What is the best way to deliver the highest quality services to the most people for the least money?”

Next time: how to include costs when evaluating nonmonetary program outcomes.

Rad Resource: Michael F. Drummond’s (2005) text, Methods for the Economic Evaluation of Health Care Programmes.

CBA for environmental interventions

Want to learn more? Register for Evaluating and Improving Cost, Cost-Effectiveness, and Cost-Benefit at Evaluation 2014.


Related posts:

  1. Brian Yates on Doing Cost-Inclusive Evaluation. Part II: Measuring Monetary Benefits
  2. Brian Yates on Doing Cost-Inclusive Evaluation – Part I: Measuring Costs
  3. Agata Jose-Ivanina on Identifying and Evaluating a Program’s Costs

Incredible Budgets - budget credibility in theory and practice

ODI general feed - Wed, 07/30/2014 - 00:00
Drawing from case study examples of budget credibility problems in Liberia, Tanzania, and Uganda, this paper addresses why governments struggle to execute their budgets to plan and why they continue to expend time and energy on a budget process if this budget is not adhered to.
Categories: Resource Feeds

Are we getting things done? Rethinking operational leadership

ODI general feed - Wed, 07/30/2014 - 00:00

What does good leadership look like in humanitarian operations? How can we promote it? Join us for a panel discussion to launch ALNAP's new study, where we will discuss the findings of our literature review, survey and interviews into effective leadership.

Categories: Resource Feeds

PD Presenters Week: Mindy Hightower King and Courtney Brown on A Framework for Developing High Quality Performance Measurement Systems of Evaluation

American Evaluation Association 365 Blog - Tue, 07/29/2014 - 01:15

Hello. We are Mindy Hightower King, Research Scientist at Indiana University and Courtney Brown, Director of Organizational Performance and Evaluation at Lumina Foundation. We have been working to strengthen and enhance performance management systems for the last decade and hope to provide some tips to help you create your own.

Lesson Learned: Why is this important?

Funders increasingly emphasize the importance of evaluation, often through performance measurement. To meet this expectation, you must develop high quality project objectives and performance measures, both of which are critical to good proposals and successful evaluations.

A performance measurement system:

  • Makes it easier for you to measure your progress
  • Allows you to report progress easily and quantitatively
  • Allows everyone to easily understand the progress your program has made
  • Can make your life a lot easier

Two essential components of a performance measurement system are high quality project objectives and performance measures.

Project objectives are statements that reflect specific goals that can be used to gauge progress. Objectives help orient you toward a measure of performance outcomes and typically focus on only one aspect of a goal. Strong project objectives concisely communicate the aims of the program and establish a foundation for high quality performance measures.

Cool Trick: When developing project objectives, be sure to consider the following criteria of high quality project objectives: relevance, applicability, focus, and measurability.

Performance measures are indicators used at the program level to track the progress of specific outputs and outcomes a program is designed to achieve. Strong performance measures are aligned with program objectives. Good performance measurement maximizes the potential for meaningful data reporting.

Cool Trick: When developing project measures, be sure to account for the following questions:

  • What will change?
  • How much change do you expect?
  • Who will achieve the change?
  • When will the change take place?

Hot Tip: Make sure your performance variables:

  • Have an action verb
  • Are measurable
  • Don’t simply state what activity will be completed 

Rad Resources: There are a host of books and articles on performance measurement systems, but here are two good online resources with examples and tips for writing high quality objectives and measures:  Guide for Writing Performance Measures  and Writing good work objectives: Where to get them and how to write them.

Want to learn more? Register for A Framework for Developing and Implementing a Performance Measurement System of Evaluation at Evaluation 2014.



Related posts:

  1. GOVT Week: David Bernstein on Top 10 Indicators of Performance Measurement Quality
  2. MA PCMH Eval Week: Ann Lawthers, Sai Cherala, and Judy Steinberg on How You Define Success Influences Your Findings
  3. Susan Wolfe on When You Can’t Do An Evaluability Assessment

PD Presenters Week: Osman Özturgut and Cindy Crusto on Remembering the “Self” in Culturally Competent Evaluation

American Evaluation Association 365 Blog - Mon, 07/28/2014 - 01:15

Hi, we’re Osman Özturgut, assistant professor, University of the Incarnate Word and Cindy Crusto, associate professor, Yale University School of Medicine. We are members of the AEA Public Statement on Cultural Competence in Evaluation Dissemination Working Group. We are writing to inform you of our forthcoming professional development workshop at Evaluation 2014 in Denver. Since the last meeting in Washington, we have been “learning, unlearning, and relearning” with several groups and workshop participants with respect to cultural competence. We wanted to share some of our learning experiences.

Our workshop is entitled, “Acknowledging the ‘Self’ in Developing Cultural Competency.” We developed the workshop to highlight key concepts written about in the AEA Public Statement that focus on the evaluator’s self and what evaluators themselves can do to better engage in work across cultures. As the AEA’s Public Statement explains, “Cultural competence requires awareness of self, reflection on one’s own cultural position.” Cultural competence begins with awareness and an understanding of one’s own viewpoints (learning). Once we become aware of, reflect on, and critically analyze our existing knowledge and viewpoints, we may need to reevaluate some of our assumptions (unlearning). Only then can we reformulate our knowledge to accommodate and adapt to new situations (relearning). This process of learning, unlearning, and relearning is the foundation of becoming a more culturally competent evaluator.

We learned that evaluators really want a safe place to talk about culture, human diversity, and issues of equity. In our session, we provide this safe place and allow for learning. Participants can explore their “half-baked ideas”, as one of our previous workshop participants put it. This is the idea that we don’t always have the right words or fully formulated thoughts regarding issues of culture, diversity, and inclusion. We believe it is crucial to provide a safe place to share ideas, even if they are “half-baked”.

Lessons Learned: We learned that the use of humor is critically important when discussing sensitive topics and communicating across cultures. It reduces anxiety and tension.

Providing a safe place for discussion is crucial, especially with audiences with diverse cultural backgrounds and viewpoints. Be open to unlearning and relearning – Remember, culture is fluid and there is always room for improvement. Get out of your comfort zone to realize the “self”.

Rad Resource: AEA’s Public Statement on Cultural Competence in Evaluation

Also, see Dunaway, Morrow & Porter’s (2012) Development and validation of the cultural competence of program evaluators (CCPE) self-report scale.

Want to learn more? Register for Acknowledging the “Self” in Developing Cultural Competency at Evaluation 2014.


Related posts:

  1. Scribing: Vidhya Shanker on Discussions Regarding the AEA Cultural Competence Statement
  2. CC Week: Osman Özturgut and Tamera Bertrand Jones on Integrating Cultural Competence into Your AEA Presentation
  3. Cultural Competence Week: Rupu Gupta and Tamara Bertrand Jones on Cultural Competence Working Group Evaluation

Humanitarian trends and trajectories to 2030: North and South-East Asia

ODI general feed - Mon, 07/28/2014 - 00:00
This briefing supports the World Humanitarian Summit regional consultation for North and South-East Asia, being held in Tokyo 23-24 July 2014. It sets out the current trends and forecasts future threats and their humanitarian implications in the post-2015 era.
Categories: Resource Feeds

Social protection and growth: Research synthesis

ODI general feed - Mon, 07/28/2014 - 00:00
Social protection is one of many policy interventions that can contribute to poverty reduction goals. Evidence is growing of the positive impacts it can have on economic growth, especially in protecting and enhancing productivity and labour force participation among poor households. However, disentangling the effects of social protection on aggregate growth from the impacts of other economic and social policies is challenging. This review paper identifies the channels through which social protection policies and programmes have impacts on growth and productivity and provides evidence of this from academic and evaluative literature.
Categories: Resource Feeds

PD Presenters Week: M. H. Clark and Haiyan Bai on Using Propensity Score Adjustments

American Evaluation Association 365 Blog - Sun, 07/27/2014 - 01:15

Hi! We are M. H. Clark and Haiyan Bai from the University of Central Florida in Orlando, Florida. Over the last several years, propensity score adjustments (PSAs) have become increasingly popular; however, many evaluators are unsure of when to use them. A propensity score is the predicted probability of a participant selecting into a treatment program, based on several covariates. These scores are used to make statistical adjustments (i.e., matching, weighting, stratification) to data from quasi-experiments to reduce selection bias.
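To make the matching flavor of adjustment concrete, here is a toy sketch. The data, the logistic coefficients, and the 0.02 caliper are all invented for illustration; in a real evaluation the propensity model would be estimated from observed covariates (e.g., by logistic regression), not hand-set:

```python
import math
import random

random.seed(0)

def propensity(age, b0=-4.0, b1=0.1):
    """Probability of choosing treatment, as a logistic function of age.
    Coefficients are hand-set for this toy example."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * age)))

# Toy sample: older participants are more likely to self-select into
# treatment, so the raw groups differ in age (selection bias).
people = [{"age": random.randint(20, 70)} for _ in range(200)]
for p in people:
    p["ps"] = propensity(p["age"])
    p["treated"] = random.random() < p["ps"]

treated = [p for p in people if p["treated"]]
control = [p for p in people if not p["treated"]]

# Nearest-neighbor matching on the propensity score with a caliper:
# each treated unit is paired with the closest remaining control,
# and pairs farther apart than the caliper are discarded.
CALIPER = 0.02
matches = []
for t in sorted(treated, key=lambda p: p["ps"], reverse=True):
    if not control:
        break
    best = min(control, key=lambda c: abs(c["ps"] - t["ps"]))
    if abs(best["ps"] - t["ps"]) < CALIPER:
        matches.append((t, best))
        control.remove(best)  # match without replacement

# After matching, the covariate should be far better balanced.
mean_age_t = sum(t["age"] for t, _ in matches) / len(matches)
mean_age_c = sum(c["age"] for _, c in matches) / len(matches)
print(f"matched pairs: {len(matches)}")
print(f"mean age, treated:  {mean_age_t:.1f}")
print(f"mean age, controls: {mean_age_c:.1f}")
```

The same scores could instead drive weighting or stratification; matching is simply the easiest adjustment to show in a few lines.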

Lesson Learned:

PSAs are not the magic bullet we had hoped they would be. Never underestimate the importance of a good design. Many researchers assume that they can fix poor designs with statistical adjustments (either with individual covariates or propensity scores). However, if you are able to randomly assign participants to treatment conditions or test several variations of your intervention, try that first. Propensity scores are meant to reduce selection bias due to non-random assignment, but can only do so much.

Hot Tip:

Plan ahead! If you know that you cannot randomly assign participants to conditions and you MUST use a quasi-experiment with propensity score adjustments, be sure to measure covariates (individual characteristics) that are related to both the dependent variable and treatment choice. Ideally, you want to include in your propensity score model all variables that may contribute to selection bias. Many evaluators consider propensity score adjustments only after they have collected data and find they cannot account for some critical factors that cause selection bias; in that case, treatment effects may still be biased even after PSAs.

Hot Tip:

Consider whether or not you need propensity scores to make your adjustments. If participants did not self-select into a treatment program but were placed there because they met a certain criterion (e.g., a test score above the 80th percentile), a traditional analysis of covariance used with a regression discontinuity design may be more efficient than PSAs. Likewise, if your participants are randomly assigned in pre-existing groups (like classrooms), a mixed-model analysis of variance might be preferable. On the other hand, sometimes random assignment does not achieve its goal of balancing all covariates between groups. If you find that the parameters of some of your covariates (e.g., average age) differ between treatment conditions even after randomly assigning your participants, PSAs may be a useful way of achieving the balance that random assignment failed to provide.
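A common way to check whether randomization actually balanced a covariate is the standardized mean difference (SMD). The short sketch below (illustrative, not from the post; the data and threshold are assumptions) computes the SMD for one covariate in a small randomized sample; a rule of thumb flags |SMD| above about 0.1 as meaningful imbalance.

```python
import numpy as np

def std_mean_diff(x, treated):
    """Standardized mean difference of covariate x between groups."""
    xt, xc = x[treated], x[~treated]
    pooled_sd = np.sqrt((xt.var(ddof=1) + xc.var(ddof=1)) / 2)
    return (xt.mean() - xc.mean()) / pooled_sd

rng = np.random.default_rng(1)
age = rng.normal(40, 10, 60)            # one covariate, small sample
treated = rng.random(60) < 0.5          # coin-flip random assignment

smd = std_mean_diff(age, treated)
print(f"SMD for age: {smd:.2f}")        # |SMD| > ~0.1 suggests imbalance
```

In a small randomized study like this, chance imbalance on age is quite possible, which is exactly the situation where a PSA can restore the balance that randomization failed to deliver.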

Rad Resource:

William Holmes recently published a great introduction to using propensity scores, and Wei Pan and Haiyan Bai have a book that will be published next year.

Want to learn more? Register for Propensity Score Matching: Theories and Applications at Evaluation 2014.

This week, we’re featuring posts by people who will be presenting Professional Development workshops at Evaluation 2014 in Denver, CO. Click here for a complete listing of Professional Development workshops offered at Evaluation 2014. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 

Related posts:

  1. Wei Pan on Propensity Score Analysis
  2. LAWG Week: Joseph Gasper on Propensity Score Matching for “Real World” Program Evaluation
  3. LAWG WEEK: Michelle Slattery on the Need for Evaluators in Problem Solving Courts

Blockages to preventing malnutrition in Kambia, Sierra Leone: a semi-quantitative causal analysis

ODI general feed - Sun, 07/27/2014 - 00:00
Over the last four decades, attempts to reduce malnutrition in Sierra Leone have been met with mixed success. To tackle the country's high rate of malnutrition the Government has made a commitment to ensure that 60% of infants are exclusively breastfed by 2016. Based on SLRC data from Kambia, there is still a way to go.
Categories: Resource Feeds

Dan McDonnell on Learning More About Your Twitter Community with FollowerWonk

American Evaluation Association 365 Blog - Sat, 07/26/2014 - 12:37

Hello, my name is Dan McDonnell and I am a Community Manager at the American Evaluation Association (AEA). If you’re a frequent Twitter user, you’re probably familiar with Twitter’s ‘Who to Follow’ feature – a widget in the right sidebar that suggests Twitter users for you to follow, based on your profile and following list. If you’re like me, you use this feature often, have exhausted the suggestions Twitter provides, and are interested in digging a bit deeper. Enter: Followerwonk!

Hot Tip: Search Twitter Bios

For starters, Followerwonk offers a robust Twitter bio/profile search feature. When you search a keyword like ‘evaluation’, Followerwonk will return a full list of results with several sortable criteria: social authority, followers, following, and age of the account. The really cool part, however, is the Filters option. You can narrow these results down to only individuals with whom you have a relationship (they follow you or you follow them), reciprocal followers, or only those with whom you are not currently connected, which is a great way to find interesting new people to follow.

Hot Tip: Learn More About Your Followers

Using the ‘Analyze Followers’ tab, you can search for a Twitter handle and find some really interesting details about your network of followers (or folks that you follow). Like Twitonomy, Followerwonk will map out the location of your followers and the most active hours that they are Tweeting (great for identifying optimal times to post!). In addition, you’ll see demographic details, Tweet frequency information and even a nifty wordcloud of the most frequently Tweeted keywords.

Hot Tip: Compare Followers/Following

Now here’s where Followerwonk really shines. Let’s say I want to see how many followers of @aeaweb also follow my personal Twitter account, @Dan_McD. Or maybe you’re a data visualization geek, and want to see what accounts both Stephanie Evergreen (@evalu8r) and AEA (@aeaweb) are following to find some new, interesting Twitter users to follow. The Compare Users tab allows you to see what followers certain accounts have in common and add them to your network!

Using Followerwonk can give you a better overall view of your Twitter community, whether it be identifying interesting connections between your followers or surfacing new users to follow by comparing followers of those you trust. Many of the features of Followerwonk (including some I didn’t cover today) are available for free – and for those that aren’t, a 30-day free trial is all you need. What are you waiting for?

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Dan McDonnell on Google + | Dan McDonnell on Twitter

 

Related posts:

  1. Dan McDonnell on Making New Friends and Mastering Lesser-Known Twitter Features Without Third Party Apps
  2. Dan McDonnell on Even More Ways to Analyze Tweets with Twitonomy
  3. Dan McDonnell on Twitter Etiquette and Data Archiving

Why Overtime in Nuclear Talks with Iran is Better than Game Over

After nearly three weeks of round-the-clock negotiations to achieve a comprehensive nuclear agreement with Iran, the United States, joined by its major allies Britain, France and Germany, as well as Russia and China — the P5+1 — chose to extend the current agreement for four months and continue negotiations.
