Resource Feeds

CAPE conference

ODI general feed - Tue, 11/18/2014 - 01:00
Categories: Resource Feeds

The 2014 CAPE conference: Does money matter? The role of finance in supporting the Sustainable Development Goals

ODI general feed - Wed, 11/12/2014 - 01:00

Will money matter for the future of development? While much of the wider discourse focuses on how to raise more money, the 2014 CAPE conference will ask what sort of difference that money can make in practice.

Categories: Resource Feeds

Finding ‘evidence’ for what works in security and justice programming

ODI general feed - Mon, 10/20/2014 - 00:00
This session will discuss the struggle with the notion of ‘evidence’ in justice and security programming. Using the categories of proof, principle and plausibility, the session will reframe the question of ‘what works’ by broadening it to ‘why something might plausibly work’, and ask why, on some issues, neither research nor practice seems able to learn.
Categories: Resource Feeds

Professional Course on Disaster Management in Asia and the Pacific

ODI general feed - Thu, 10/16/2014 - 00:00
The Professional Course on Disaster Management in Asia and the Pacific in Beijing is aimed at professionals from government agencies, international organisations, NGOs, charities and the private sector who are interested in learning about managing disasters and other crises in China and throughout the Asia-Pacific region.

Categories: Resource Feeds

Global value chains in Asia: Is everyone benefitting?

ODI general feed - Fri, 09/05/2014 - 00:00
The seminar will discuss critical issues for inclusive growth in Asia and the rest of the developing world.

Categories: Resource Feeds

How to ensure accountability and oversight in security and justice programming

ODI general feed - Mon, 09/01/2014 - 00:00
A number of major UK and internationally delivered security and justice programmes have been characterised by a lack of focus on issues of accountability. At this event, Piet Biesheuvel will outline multiple models of accountability and explain how accountability affects major security and justice actors.

Categories: Resource Feeds

Aid in danger: Violence against aid workers and the future of humanitarianism

ODI general feed - Tue, 08/19/2014 - 00:00
On World Humanitarian Day, join us for the launch of 'Aid in Danger: The Perils and Promise of Humanitarianism' - a hard look at violent attacks against aid workers on the frontlines of humanitarian crises.
Categories: Resource Feeds

Incredible Budgets - budget credibility in theory and practice

ODI general feed - Wed, 08/06/2014 - 00:00

In many countries, particularly low-income and/or fragile states, national budgets are often poor predictors of actual revenue and expenditure. This roundtable will bring together CAPE staff, academics and public financial management practitioners to discuss why, despite considerable efforts to strengthen budget credibility, many governments still struggle to execute their budgets as planned.

Categories: Resource Feeds

PD Presenters Week: Brian Yates on Doing Cost-Inclusive Evaluation. Part III: Cost-Benefit Analysis

“But wait — there’s more!” Hi! I’m Brian Yates. Yes, this is the third piece in a series of “AEA365s” on using cost data in evaluation … and not entirely inappropriate for your former AEA Treasurer to author. (I’m also Professor in the Department of Psychology at American University in Washington, DC.)

My past two 365ers (you can find them here and here) focused on evaluating the costs of resources consumed by programs, and the monetary and monetizable outcomes produced by programs. With those two types of data, we can begin to evaluate the programs’ cost-benefit.

Lesson Learned – Funders should ask “Is it worth it?” and not just “How much does it cost?” The impulse to cut programs to meet budget goals actually can increase costs and bust budgets if the programs would have increased employment (and taxes paid), reduced consumers’ use of other services, or both. Evaluators can investigate this by comparing program costs to program benefits.

Hot Tip – Strategies for quantifying value. “Is it worth it?” can be answered in different ways, including:

  • dividing benefits by costs (the benefit-cost ratio),
  • subtracting costs from benefits (net benefit), and
  • measuring the time required before cumulative benefits exceed costs (time to return on investment).

Each cost-benefit index describes a different aspect of a program’s cost-benefit relationship, and all can be reported in a Cost-Benefit Analysis or CBA.
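
To make these three indices concrete, here is a minimal Python sketch using entirely hypothetical numbers (a program costing $40,000 that returns $12,000 a year in monetizable benefits over five years). It illustrates the arithmetic only and is not a template for any particular program evaluation.

```python
# Hypothetical example: a $40,000 program yielding $12,000 per year
# in monetizable benefits over five years.
program_cost = 40_000.0
annual_benefits = [12_000.0] * 5

total_benefits = sum(annual_benefits)

# 1. Benefit-cost ratio: benefits divided by costs.
bc_ratio = total_benefits / program_cost

# 2. Net benefit: benefits minus costs.
net_benefit = total_benefits - program_cost

# 3. Time to return on investment: years until cumulative benefits
#    first exceed costs (None if they never do).
cumulative, time_to_roi = 0.0, None
for year, benefit in enumerate(annual_benefits, start=1):
    cumulative += benefit
    if cumulative >= program_cost:
        time_to_roi = year
        break

print(f"Benefit-cost ratio: {bc_ratio:.2f}")   # 1.50
print(f"Net benefit: ${net_benefit:,.0f}")     # $20,000
print(f"Time to ROI: {time_to_roi} year(s)")   # 4
```

Each index tells a different story about the same hypothetical program: a ratio of 1.5, a net benefit of $20,000, and costs recovered in the fourth year.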

Hot Tip Too - Costs, benefits, and cost-benefit relationships can be measured at several levels of specificity (such as for an individual in a program or for a new implementation of the program). Cost-benefit differences between programs as well as between consumers in the same program are important to understand, too.

Lesson Learned – Programs whose costs exceed measurable benefits still can be funds-worthy. A “news” story in the satirical paper The Onion a few years back made an important point with its story, “Cost of living now outweighs benefits” (http://www.theonion.com/articles/cost-of-living-now-outweighs-benefits,1316/). Few of us would be inclined to act in a summative manner based on these “findings” (even if they were valid)! In the same way, the value of many programs may be underestimated or missed entirely when viewed through the “green eyeshade” of an exclusively pecuniary perspective.

Also, some programs provide services to which all people are legally entitled. Evaluations of these can instead ask, “What is the best way to deliver the highest quality services to the most people for the least money?”

Next time: how to include costs when evaluating nonmonetary program outcomes.

Rad Resources: Michael F. Drummond’s (2005) Methods for the Economic Evaluation of Health Care Programmes.

CBA for environmental interventions

Want to learn more? Register for Evaluating and Improving Cost, Cost-Effectiveness, and Cost-Benefit at Evaluation 2014.

This week, we’re featuring posts by people who will be presenting Professional Development workshops at Evaluation 2014 in Denver, CO. Click here for a complete listing of Professional Development workshops offered at Evaluation 2014. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Related posts:

  1. Brian Yates on Doing Cost-Inclusive Evaluation. Part II: Measuring Monetary Benefits
  2. Brian Yates on Doing Cost-Inclusive Evaluation – Part I: Measuring Costs
  3. Agata Jose-Ivanina on Identifying and Evaluating a Program’s Costs

Are we getting things done? Rethinking operational leadership

ODI general feed - 8 hours 8 min ago

What does good leadership look like in humanitarian operations? How can we promote it? Join us for a panel discussion to launch ALNAP's new study, where we will discuss the findings of our literature review, survey and interviews on effective leadership.

Categories: Resource Feeds

PD Presenters Week: Mindy Hightower King and Courtney Brown on A Framework for Developing High Quality Performance Measurement Systems of Evaluation

American Evaluation Association 365 Blog - Tue, 07/29/2014 - 01:15

Hello. We are Mindy Hightower King, Research Scientist at Indiana University and Courtney Brown, Director of Organizational Performance and Evaluation at Lumina Foundation. We have been working to strengthen and enhance performance management systems for the last decade and hope to provide some tips to help you create your own.

Lesson Learned: Why is this important?

Funders increasingly emphasize the importance of evaluation, often through performance measurement. To meet this expectation, you must develop high quality project objectives and performance measures, both of which are critical to good proposals and successful evaluations.

A performance measurement system:

  • Makes it easier for you to measure your progress
  • Allows you to report progress easily and quantitatively
  • Allows everyone to easily understand the progress your program has made
  • Can make your life a lot easier

Two essential components of a performance measurement system are high quality project objectives and performance measures.

Project objectives are statements that reflect specific goals that can be used to gauge progress. Objectives orient you toward a measure of performance outcomes and typically focus on only one aspect of a goal. Strong project objectives concisely communicate the aims of the program and establish a foundation for high quality performance measures.

Cool Trick: When developing project objectives, be sure to consider the following criteria of high quality objectives: relevance, applicability, focus, and measurability.

Performance measures are indicators used at the program level to track the progress of specific outputs and outcomes a program is designed to achieve. Strong performance measures are aligned with program objectives. Good performance measurement maximizes the potential for meaningful data reporting.

Cool Trick: When developing performance measures, be sure to answer the following questions:

  • What will change?
  • How much change do you expect?
  • Who will achieve the change?
  • When will the change take place?

Hot Tip: Make sure your performance measures:

  • Have an action verb
  • Are measurable
  • Don’t simply state what activity will be completed 
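
As a purely hypothetical illustration of how the four questions and this checklist fit together (the program, population, and dates below are invented), a performance measure can be sketched as a small data structure whose fields answer each question and whose rendered statement leads with an action verb and a measurable target:

```python
from dataclasses import dataclass

@dataclass
class PerformanceMeasure:
    """Hypothetical structure whose fields answer the four questions above."""
    what_will_change: str   # What will change?
    how_much: str           # How much change do you expect?
    who: str                # Who will achieve the change?
    by_when: str            # When will the change take place?

    def as_statement(self) -> str:
        return (f"By {self.by_when}, {self.who} will "
                f"{self.what_will_change} by {self.how_much}.")

# Invented example; note the action verb ("increase") and the measurable
# target, rather than a simple statement of a completed activity.
measure = PerformanceMeasure(
    what_will_change="increase attendance at after-school tutoring sessions",
    how_much="20 percentage points over baseline",
    who="students at participating schools",
    by_when="the end of the 2014-15 school year",
)
print(measure.as_statement())
```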

Rad Resources: There are a host of books and articles on performance measurement systems, but here are two good online resources with examples and tips for writing high quality objectives and measures:  Guide for Writing Performance Measures  and Writing good work objectives: Where to get them and how to write them.

Want to learn more? Register for A Framework for Developing and Implementing a Performance Measurement System of Evaluation at Evaluation 2014.

This week, we’re featuring posts by people who will be presenting Professional Development workshops at Evaluation 2014 in Denver, CO. Click here for a complete listing of Professional Development workshops offered at Evaluation 2014. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 

Related posts:

  1. GOVT Week: David Bernstein on Top 10 Indicators of Performance Measurement Quality
  2. MA PCMH Eval Week: Ann Lawthers, Sai Cherala, and Judy Steinberg on How You Define Success Influences Your Findings
  3. Susan Wolfe on When You Can’t Do An Evaluability Assessment

PD Presenters Week: Osman Özturgut and Cindy Crusto on Remembering the “Self” in Culturally Competent Evaluation

American Evaluation Association 365 Blog - Mon, 07/28/2014 - 01:15

Hi, we’re Osman Özturgut, assistant professor, University of the Incarnate Word and Cindy Crusto, associate professor, Yale University School of Medicine. We are members of the AEA Public Statement on Cultural Competence in Evaluation Dissemination Working Group. We are writing to inform you of our forthcoming professional development workshop at Evaluation 2014 in Denver. Since the last meeting in Washington, we have been “learning, unlearning, and relearning” with several groups and workshop participants with respect to cultural competence. We wanted to share some of our learning experiences.

Our workshop is entitled, “Acknowledging the ‘Self’ in Developing Cultural Competency.” We developed the workshop to highlight key concepts written about in the AEA Public Statement that focus on the evaluator hirself and what the evaluator hirself can do to better engage in work across cultures. As the AEA’s Public Statement explains, “Cultural competence requires awareness of self, reflection on one’s own cultural position.” Cultural competence begins with awareness and an understanding of one’s own viewpoints (learning). Once we become aware of, reflect on, and critically analyze our existing knowledge and viewpoints, we may need to reevaluate some of our assumptions (unlearning). It is only then that we can reformulate our knowledge to accommodate and adapt to new situations (relearning). This process of learning, unlearning, and relearning is the foundation of becoming a more culturally competent evaluator.

We learned that evaluators really want a safe place to talk about culture, human diversity, and issues of equity. In our session, we provide this safe place and allow for learning. Participants can explore their “half-baked ideas”, as one of our previous workshop participants had mentioned. This is the idea that we don’t always have the right words or have fully formulated thoughts and ideas regarding issues of culture, diversity, and inclusion. We believe it is crucial to provide a safe place to share ideas, even if they are “half-baked”.

Lessons Learned: We learned that the use of humor is critically important when discussing sensitive topics and communicating across cultures. It reduces anxiety and tension.

Providing a safe place for discussion is crucial, especially with audiences with diverse cultural backgrounds and viewpoints. Be open to unlearning and relearning – Remember, culture is fluid and there is always room for improvement. Get out of your comfort zone to realize the “self”.

Rad Resource: AEA’s Public Statement on Cultural Competence in Evaluation

Also, see Dunaway, Morrow & Porter’s (2012) Development and validation of the cultural competence of program evaluators (CCPE) self-report scale.

Want to learn more? Register for Acknowledging the “Self” in Developing Cultural Competency at Evaluation 2014.

This week, we’re featuring posts by people who will be presenting Professional Development workshops at Evaluation 2014 in Denver, CO. Click here for a complete listing of Professional Development workshops offered at Evaluation 2014. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Related posts:

  1. Scribing: Vidhya Shanker on Discussions Regarding the AEA Cultural Competence Statement
  2. CC Week: Osman Özturgut and Tamera Bertrand Jones on Integrating Cultural Competence into Your AEA Presentation
  3. Cultural Competence Week: Rupu Gupta and Tamara Bertrand Jones on Cultural Competence Working Group Evaluation

Social protection and growth: Research synthesis

ODI general feed - Mon, 07/28/2014 - 00:00
Social protection is one of many policy interventions that can contribute to poverty reduction goals. Evidence is growing of the positive impacts it can have on economic growth, especially in protecting and enhancing productivity and labour force participation among poor households. However, disentangling the effects of social protection on aggregate growth from the impacts of other economic and social policies is challenging. This review paper identifies the channels through which social protection policies and programmes have impacts on growth and productivity and provides evidence of this from academic and evaluative literature.
Categories: Resource Feeds

PD Presenters Week: M. H. Clark and Haiyan Bai on Using Propensity Score Adjustments

American Evaluation Association 365 Blog - Sun, 07/27/2014 - 01:15

Hi! We are M. H. Clark and Haiyan Bai from the University of Central Florida in Orlando, Florida. Over the last several years propensity score adjustments (PSAs) have become increasingly popular; however, many evaluators are unsure of when to use them. A propensity score is the predicted probability of a participant selecting into a treatment program based on several covariates. These scores are used to make statistical adjustments (e.g., matching, weighting, stratification) to data from quasi-experiments to reduce selection bias.
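
As a minimal sketch of the idea, and assuming the pandas and scikit-learn libraries plus entirely hypothetical covariates, the propensity score can be estimated with a logistic regression of treatment status on the covariates and then used, for example, to match each treated case to its nearest control:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical quasi-experimental data: a treatment indicator plus covariates
# thought to drive both program selection and the outcome.
df = pd.DataFrame({
    "treated":   [1, 1, 1, 0, 0, 0, 0, 0],
    "age":       [34, 29, 41, 38, 27, 45, 31, 36],
    "income":    [42, 38, 55, 51, 30, 60, 35, 47],  # in $1,000s
    "prior_use": [1, 0, 1, 1, 0, 1, 0, 1],
})
covariates = ["age", "income", "prior_use"]

# Propensity score: predicted probability of selecting into treatment.
model = LogisticRegression().fit(df[covariates], df["treated"])
df["pscore"] = model.predict_proba(df[covariates])[:, 1]

# Simple 1:1 nearest-neighbor matching (with replacement) on the score.
treated = df[df["treated"] == 1]
control = df[df["treated"] == 0]
matches = {i: (control["pscore"] - row.pscore).abs().idxmin()
           for i, row in treated.iterrows()}

print(df[["treated", "pscore"]])
print("Matched control for each treated unit:", matches)
```

In practice evaluators would check covariate balance after matching and typically use a dedicated package; the sketch only shows the core logic of modeling selection into treatment and then comparing units with similar predicted probabilities.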

Lesson Learned:

PSAs are not the magic bullet we had hoped they would be. Never underestimate the importance of a good design. Many researchers assume that they can fix poor designs with statistical adjustments (either with individual covariates or propensity scores). However, if you are able to randomly assign participants to treatment conditions or test several variations of your intervention, try that first. Propensity scores are meant to reduce selection bias due to non-random assignment, but can only do so much.

Hot Tip:

Plan ahead! If you know that you cannot randomly assign participants to conditions and you MUST use a quasi-experiment with propensity score adjustments, be sure that you measure covariates (individual characteristics) that are related to both the dependent variable and treatment choice. Ideally, you want to include in your propensity score model all variables that may contribute to selection bias. Many evaluators consider propensity score adjustments only after they have collected data and cannot account for some critical factors that cause selection bias; in that case, treatment effects may still be biased even after PSAs.

Hot Tip:

Consider whether or not you need propensity scores to make your adjustments. If participants did not self-select into a treatment program, but were placed there because they met a certain criterion (e.g., having a test score above the 80th percentile), a traditional analysis of covariance used with regression discontinuity designs may be more efficient than PSAs. Likewise, if your participants are randomly assigned by pre-existing groups (like classrooms), using a mixed-model analysis of variance might be preferable. On the other hand, sometimes random assignment does not achieve its goal of balancing all covariates between groups. If you find that the parameters of some of your covariates (e.g., average age) differ between treatment conditions even after randomly assigning your participants, PSAs may be a useful way of achieving the balance random assignment failed to provide.
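
A common way to check the kind of covariate imbalance described above, whether after random assignment or after a propensity score adjustment, is the standardized mean difference. The short sketch below uses invented ages for two groups; the 0.1 cut-off mentioned in the comment is a widely cited rule of thumb, not a hard threshold.

```python
import numpy as np

def standardized_mean_difference(treated: np.ndarray, control: np.ndarray) -> float:
    """Difference in group means divided by the pooled standard deviation.

    Values near zero suggest the covariate is balanced across groups; a
    common rule of thumb flags absolute values above roughly 0.1 as worth
    a closer look.
    """
    pooled_sd = np.sqrt((treated.var(ddof=1) + control.var(ddof=1)) / 2)
    return (treated.mean() - control.mean()) / pooled_sd

# Hypothetical ages in the two groups after assignment.
age_treated = np.array([34.0, 29.0, 41.0, 38.0, 27.0])
age_control = np.array([45.0, 31.0, 36.0, 44.0, 39.0])
print(round(standardized_mean_difference(age_treated, age_control), 2))
```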

Rad Resource:

William Holmes recently published a great introduction to using propensity scores, and Haiyan Bai and Pan Wei have a book that will be published next year.

Want to learn more? Register for Propensity Score Matching: Theories and Applications at Evaluation 2014.

This week, we’re featuring posts by people who will be presenting Professional Development workshops at Evaluation 2014 in Denver, CO. Click here for a complete listing of Professional Development workshops offered at Evaluation 2014. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 

Related posts:

  1. Wei Pan on Propensity Score Analysis
  2. LAWG Week: Joseph Gasper on Propensity Score Matching for “Real World” Program Evaluation
  3. LAWG WEEK: Michelle Slattery on the Need for Evaluators in Problem Solving Courts

Blockages to preventing malnutrition in Kambia, Sierra Leone: a semi-quantitative causal analysis

ODI general feed - Sun, 07/27/2014 - 00:00
Over the last four decades, attempts to reduce malnutrition in Sierra Leone have been met with mixed success. To tackle the country's high rate of malnutrition, the Government has committed to ensuring that 60% of infants are exclusively breastfed by 2016. Based on SLRC data from Kambia, there is still a way to go.
Categories: Resource Feeds

Dan McDonnell on Learning More About Your Twitter Community with FollowerWonk

American Evaluation Association 365 Blog - Sat, 07/26/2014 - 12:37

Hello, my name is Dan McDonnell and I am a Community Manager at the American Evaluation Association (AEA). If you’re a frequent Twitter user, you’re probably familiar with Twitter’s ‘Who to Follow’ feature – a widget in the right sidebar that ‘suggests’ Twitter users for you to follow, based on your profile and following list. If you’re like me, you use this feature often but feel as if you’ve exhausted the suggestions Twitter provides and want to dig a bit deeper. Enter: Followerwonk!

Followerwonk


Hot Tip: Search Twitter Bios

For starters, Followerwonk offers a robust Twitter bio/profile search feature. When you search a keyword like ‘evaluation’, Followerwonk returns a full list of results with several sortable criteria: social authority, followers, following, and age of the account. The really cool part, however, is the Filters option. You can narrow the results to people with whom you already have a relationship (they follow you or you follow them), to reciprocal followers, or to those with whom you are not currently connected, which is a great way to find interesting new people to follow.

Hot Tip: Learn More About Your Followers

Using the ‘Analyze Followers’ tab, you can search for a Twitter handle and find some really interesting details about your network of followers (or folks that you follow). Like Twitonomy, Followerwonk will map out the location of your followers and the most active hours that they are Tweeting (great for identifying optimal times to post!). In addition, you’ll see demographic details, Tweet frequency information and even a nifty wordcloud of the most frequently Tweeted keywords.

Hot Tip: Compare Followers/Following

Now here’s where Followerwonk really shines. Let’s say I want to see how many followers of @aeaweb also follow my personal Twitter account, @Dan_McD. Or maybe you’re a data visualization geek, and want to see what accounts both Stephanie Evergreen (@evalu8r) and AEA (@aeaweb) are following to find some new, interesting Twitter users to follow. The Compare Users tab allows you to see what followers certain accounts have in common and add them to your network!
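
Conceptually, that comparison is just set arithmetic on follower lists. The toy sketch below uses invented handles to illustrate the idea; it is not Followerwonk's actual implementation and does not call the Twitter API.

```python
# Hypothetical follower lists for two accounts (conceptual illustration only).
followers_aeaweb = {"@eval_fan", "@data_viz_dana", "@method_maven", "@stats_sam"}
followers_dan = {"@data_viz_dana", "@stats_sam", "@nonprofit_nick"}

shared = followers_aeaweb & followers_dan       # people who follow both accounts
only_aeaweb = followers_aeaweb - followers_dan  # accounts Dan might want to add

print("Shared followers:", sorted(shared))
print("Followed only by @aeaweb:", sorted(only_aeaweb))
```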

Using Followerwonk can give you a better overall view of your Twitter community, whether it be identifying interesting connections between your followers or surfacing new users to follow by comparing followers of those you trust. Many of the features of Followerwonk (including some I didn’t cover today) are available for free – and for those that aren’t, a 30-day free trial is all you need. What are you waiting for?

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Dan McDonnell on Google + | Dan McDonnell on Twitter

 

Related posts:

  1. Dan McDonnell on Making New Friends and Mastering Lesser-Known Twitter Features Without Third Party Apps
  2. Dan McDonnell on Even More Ways to Analyze Tweets with Twitonomy
  3. Dan McDonnell on Twitter Etiquette and Data Archiving

Why Overtime in Nuclear Talks with Iran is Better than Game Over

After nearly three weeks of round-the-clock negotiations to achieve a comprehensive nuclear agreement with Iran, the United States, joined by its major allies Britain, France and Germany, as well as Russia and China — the P5+1 — chose to extend the current agreement for four months and continue negotiations.

GSNE Week: Alice Walters on Stakeholder Engagement

American Evaluation Association 365 Blog - Fri, 07/25/2014 - 01:15

I’m Alice Walters, a member of AEA’s Graduate Student and New Evaluator TIG. I am a doctoral student in human services and work as a non-profit consultant in fund development, marketing, and evaluation. Here, I explore potential pitfalls and offer recommendations for new evaluators, based on my experience working with stakeholders.

Hot Tip 1:  Stakeholders are central to evaluation – include them in every step of the process.

This may be Evaluation 101, but it bears emphasizing. Identify, include, and inform stakeholders. Think carefully and critically about all parties with a stake in the evaluation's outcomes. Leaving out key stakeholders can undermine the quality of an evaluation because important perspectives go unrepresented. Key decision-making stakeholders should be engaged throughout the evaluation process to keep the evaluation relevant.

Rad Resource: Engaging Stakeholders. This CDC guide has a worksheet for identifying and including stakeholders in evaluation.

Hot Tip 2:  Be proactive in frequent and ongoing communication with stakeholders.

Don’t assume that the views expressed in initial evaluation conversations remain unchanged. Frequent communication with stakeholders will alert you to any shifts in their perspectives on the evaluation. Ongoing communication also keeps the lines of communication open and keeps stakeholders informed of the evaluation's progress.

Rad Resource: A Practical Guide for Engaging Stakeholders in Developing Evaluation Questions. This 48-page resource from the Robert Wood Johnson Foundation covers engaging stakeholders throughout the evaluation process. It provides worksheets and a range of useful communication strategies.

Hot Tip 3:  Take the time to consider stakeholders’ views at every stage of evaluation.

Stakeholders may be unclear about the evaluation process, its steps, and the methods used. Be sure to explain and continue to inform at every stage of evaluation. As a new evaluator, I made the faulty assumption that stakeholder views were unchanged from initial evaluation meetings. I also missed opportunities to communicate during later evaluation stages, opportunities that might have revealed changing circumstances through stakeholder responses. Evaluators should be cautious about assuming that evaluation environments and stakeholder views are static.

Rad Resource: Who Wants to Know? A 4-page tip sheet from Wilder Research on stakeholder involvement. Evaluators’ expertise may require working at a remove from direct stakeholder contact, particularly from key decision-making stakeholders. Yet the relevance of an evaluation depends on ongoing stakeholder input, and successful evaluation requires keeping communication channels with stakeholders open.

AEA is celebrating GSNE Week with our colleagues in the Graduate Student and New Evaluators AEA Topical Interest Group. The contributions all this week to aea365 come from our GSNE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Related posts:

  1. OL-ECB Week: Bonnie Richards on Setting the Stage: Evaluation Preparation and Stakeholder Buy-In
  2. Bikash Kumar Koirala on The Importance of Effective Communication in Participatory Monitoring and Evaluation
  3. Cultural Competence Week: Asma Ali and Anthony Heard on Beyond the Findings: Reflections on the Culturally Competent Evaluator’s Role