Monitoring, Evaluation and Learning Systems

WE Week: David J. Bernstein on Sustaining an Evaluation Community of Practice

American Evaluation Association 365 Blog - Mon, 06/02/2014 - 01:15

I am David J. Bernstein, a Senior Study Director with Westat, an employee-owned research and evaluation company. I am the President-elect of Washington Evaluators (WE), the Washington, DC/Virginia/Maryland area affiliate of the American Evaluation Association (AEA). This year marks the 30th anniversary of WE.

As WE founder Michael Hendricks has noted, affiliates help develop an evaluation community in a local area: an evaluation community of practice. WE membership draws from the U.S. Federal government, state and local governments, nonprofits, academia, consulting firms, independent consultants, and the private sector.

WE is the second oldest AEA affiliate, and has been a model for other affiliates, providing a local or regional focus to complement AEA membership and services. Specific activities have evolved to meet the needs and interests of our members. WE offers monthly brown bag lunches and other professional development activities. WE worked with AEA’s Evaluation Policy Task Force to host Evaluators Visit Capitol Hill, an effort to reach out to members of Congress and their staff to inform them about evaluation and AEA. WE offers local evaluators informal opportunities to socialize and network, including an annual holiday party and happy hours. As an all-volunteer organization, the activities are a direct reflection of the interests and needs of our members.

The 2014 AEA Conference theme calls attention to the issues of sustainable and equitable living and the importance of building relationships. WE is all about sustainability and building relationships, and has provided leadership and membership opportunities for a wide variety of disciplines, institutions, political perspectives (a reflection of WE’s DC zeitgeist), and cultural traditions.

WE is not just an acronym, it is also a not-so-subliminal message: WE are in this together. WE is a collective effort, made up of activities and networks developed by volunteers for volunteers. WE focuses on developing professional and social relationships among its members. You don’t just belong to WE, you join it, become part of it, and hopefully take advantage of it. Community is WE’s raison d’être. We exist so evaluators have a place to network, meet other evaluators, learn about evaluation, develop professionally, celebrate the holidays, and sometimes find new work partners and employment.

Hot Tip: Are you from the DC area? Join WE. Is there an affiliate in your area? Join up, or start a new affiliate. WE would be glad to help.

The American Evaluation Association is celebrating Washington Evaluators (WE) Affiliate Week. The contributions all this week to aea365 come from WE Affiliate members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Related posts:

  1. LAWG Week: Bernadette Wright and Ladel Lewis on “Delivering the Goods”
  2. CEA Affiliate Week: Leah Christina Neubauer on Chicago-Based Evaluators and Including AEA Local Affiliates in Your 2014 Evaluator Learning Resolution Plans
  3. OPEG Affiliate Week: Sheri Chaney Jones on Steps to Strengthen Affiliate Membership

WE Week: Brian Yoder on Evaluators Visit Capitol Hill

American Evaluation Association 365 Blog - Sun, 06/01/2014 - 01:15

I’m Brian Yoder. I live in Washington, D.C., and I serve as president of the local AEA affiliate, the Washington Evaluators. I’m writing about an initiative I spearheaded called Evaluators Visit Capitol Hill (EVCH), which took place during the AEA conference in Washington, D.C. last year.


EVCH is a collaboration between AEA’s Evaluation Policy Task Force (EPTF) and Washington Evaluators (WE). EPTF provided policy documents on the role of evaluation in government; WE organized AEA members to visit their members of Congress. During the conference, participating AEA members visited their congressional offices, dropped off the documents, spoke about evaluation, and asked whether anyone in the office would be interested in being contacted by a member of EPTF for further discussion or additional information about evaluation.

A total of 69 participants from 31 states and the District of Columbia signed up and participated in an initial training conference call. The federal government shut down for two weeks prior to the AEA conference, creating challenges for some participants in scheduling appointments with their congressional offices. Eighteen participants, visiting twenty-one different congressional offices, completed a post-meeting survey. One third of the congressional offices visited by an AEA member said they were interested in receiving additional materials from EPTF. AEA members reported having opportunities to speak with congressional staff about issues related to the evaluation of government programs. An unanticipated outcome of the shutdown was that some AEA members were able to meet with their representative or senator directly, since those members were available during the closure.

My hope is that this initiative helped accomplish three things:

  1. Make more policy makers aware of AEA and the work of EPTF.
  2. Expand EPTF’s reach by creating new connections for the task force.
  3. Give evaluators the opportunity to be part of the early policy-making process by providing materials on evaluation to policy makers prior to the policy being made.

We plan to continue and expand the initiative the next time AEA’s annual meeting is in Washington, D.C. Please be on the lookout for additional information on how you can participate in EVCH in the run-up to the next AEA conference in Washington, D.C.

Rad Resource: A Rad Resource I would like to share is the Washington Evaluators, one of the oldest local affiliates. If you are ever in D.C., please join us for one of our storied brown bag sessions or other events.  You’ll find information on the website: washingtonevaluators.org/events

The American Evaluation Association is celebrating Washington Evaluators (WE) Affiliate Week. The contributions all this week to aea365 come from WE Affiliate members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 

Related posts:

  1. LAWG Week: Brian Yoder on Evaluators Visit Capitol Hill
  2. Stephanie Shipman on Developing an Effective Evaluation Agenda
  3. Scribing: Anne Vo on President Obama’s Evaluation Policies

Dan McDonnell on Even More Ways to Analyze Tweets with Twitonomy

American Evaluation Association 365 Blog - Sat, 05/31/2014 - 16:44

Hello, my name is Dan McDonnell and I am a Community Manager for the American Evaluation Association. In my last Saturday post, I introduced a Twitter measurement and analytics tool called Twitonomy and a few of the basic features available. While you can certainly pull some great data and interesting stats with the free version, the premium version of Twitonomy ($20 a month) offers a lot more in-depth reporting and analytics: a must-have for any data geek.  Today, I’ll peek behind the curtain of premium membership and shine a light on some of the most useful features it unlocks.

Hot Tip: Download everything. Yes, everything!

Whether you’re looking at your own tweets, those of another user, or searching a hashtag like #eval for analysis purposes, Twitonomy Premium allows you to download this data right to your desktop. You can export these Tweets to an Excel file for some data crunching of your own, or to a PDF for easy printing. This also works for lists of Twitter users – you can export all of your followers’ Twitter handles to a spreadsheet, or any lists of users that you’ve set up. Pretty cool!

Downloading Tweets with Twitonomy
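If you want to try some of that data crunching yourself, here is a minimal sketch of what it might look like once an export is in hand, assuming a hypothetical tweets.xlsx file with Text and Retweets columns (Twitonomy’s actual export layout and column names may differ):

```python
import pandas as pd

# Load a hypothetical Twitonomy export; real column names may differ.
tweets = pd.read_excel("tweets.xlsx")

# Count hashtags across all exported tweets to see which topics come up most.
top_hashtags = (
    tweets["Text"]
    .str.findall(r"#\w+")   # pull every #hashtag out of each tweet's text
    .explode()              # one row per hashtag occurrence
    .str.lower()
    .value_counts()
)
print(top_hashtags.head(10))

# Rank tweets by retweets to find the best-performing content in the export.
print(tweets.sort_values("Retweets", ascending=False).head(5)[["Text", "Retweets"]])
```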

 

Cool Tool: Mentions Map

There’s a really cool data visualization feature that Premium unlocks: the Mentions Map. This tool automatically reviews all of the @mentions directed at the Twitter account you have connected to Twitonomy, and generates a world map with pins at the location of each Tweet. If you’ve ever wondered how global (or local) your network is, this is a really neat way to find out. Here’s the Mentions Map generated by the recent tweets that mention @aeaweb.

@aeaweb Mentions Map

 

Hot Tip: Followers Report

While the free tool lets you analyze Tweets from individual users manually, the Followers Report available in Premium will pull a list of all of your Twitter followers and do the data analysis automatically. Once the tool has crawled your followers, it will spit out a report with no shortage of interesting facts: location (via a map similar to the Mentions Map), age of Twitter account, language, top hashtags and keywords, and tons more. Take a look at a snapshot of @aeaweb’s Followers Report.

Followers Report

Whether you’re using the free or the paid version of the tool, Twitonomy offers tons of value to evaluators looking to dig into Twitter data to understand more about their social media network. Give it a whirl! 

What features do you wish Twitonomy had?

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Dan McDonnell is a regular Saturday contributor to AEA365, where he blogs on social media-related topics for evaluators. You can reach Dan on Twitter at @Dan_McD.

Related posts:

  1. Dan McDonnell on Uncovering Interesting Facts About Your Twitter Network with Twitonomy
  2. Dan McDonnell on Twitter Etiquette and Data Archiving
  3. Dan McDonnell on Making New Friends and Mastering Lesser-Known Twitter Features Without Third Party Apps

NA TIG Week: Lisle Hites on Conducting a Needs Assessment on HIV/AIDS Issues in the South

American Evaluation Association 365 Blog - Fri, 05/30/2014 - 01:15

I’m Lisle Hites, Director of the Evaluation and Assessment Unit (EAU) at the University of Alabama at Birmingham (UAB). I’m writing to share my team’s experiences in conducting needs assessments.

We frequently have opportunities to work with our colleagues on campus to conduct needs assessments for grant-funded projects. One such example was a training grant through the School of Nursing, and we describe it here to highlight the value of gathering more than one perspective in assessing needs.

In 2012, CDC data revealed that the South is the epicenter of new HIV infections: 46% of all new infections occurred in the region, with women (24%) and African-Americans (58%) disproportionately represented among those newly infected compared to other regions. It is therefore critically important that healthcare providers receive HIV/AIDS training in order to provide HIV/AIDS primary care and meet current and future healthcare demands.

To establish workforce training capacity, we sent surveys to two key healthcare audiences: (1) potential training sites (Ryan White Grantees) and (2) future family nurse practitioners (FNPs). Responses identified both a shortage of trained HIV/AIDS healthcare providers and an interest among providers and students in establishing clinical training opportunities. Additionally, 78% of current FNP students enrolled at one research institution in the South resided within 60 miles of a Ryan White Grantee site in a tri-state region.

Lessons Learned:

  • The design of this needs assessment allowed us to consider the capacity of Ryan White Grantee sites to provide clinical training opportunities for FNP students.
  • The survey captured the interest and desire of FNP students to seek the skills necessary to provide HIV/AIDS primary care.

Despite the current and future needs for a trained healthcare workforce, healthcare providers in the Deep South still encounter many of the same attitudes toward people living with HIV/AIDS as were found in the early years of the epidemic; therefore, it was necessary to identify a pool of potential candidates for training (i.e., FNP students). At the same time, little was known regarding the capacity and willingness of Ryan White Grantee sites to provide an adequate number of opportunities to meet the training needs of these students. By considering both sides of the equation, we could accurately match the number of students and training sites to ensure a high degree of satisfaction and success for both parties.

Rad Resources: 

The American Evaluation Association is celebrating Needs Assessment (NA) TIG Week with our colleagues in the Needs Assessment Topical Interest Group. The contributions all this week to aea365 come from our NA TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Related posts:

  1. Shar McLean and Esteban Colon on Tools for Organizational Assessment
  2. NA TIG Week: James Altschuld on Hybrid Vigor – It’s not just Needs Assessment or Asset/Capacity Building
  3. Bonnie Stabile on Adapting Student Research Papers

NA TIG Week: James Altschuld on Hybrid Vigor – It’s not just Needs Assessment or Asset/Capacity Building

American Evaluation Association 365 Blog - Thu, 05/29/2014 - 01:15

I’m James Altschuld, Professor Emeritus of Ohio State University. I’ve written a lot over the years about needs assessment.  Today’s posting is about having hybrid vigor in how we approach our work.  It’s not just needs assessment or asset/capacity building, it’s both!

My premise is that there are two contrasting stances. One builds from strengths, resources, and assets (positives). The other starts from a negative (something is missing) needs perspective. And yet these stances are eternally interdependent and share enough common ground that we should assess both together.

Lessons Learned From Experience:

Observations

  1. Mind the philosophies: they are different, they use unique methods, and the resulting improvement plans can be quite distinct. One is the glass half full and the other half empty; in reality, the glass is both half full and half empty.
  2. Hybrid asset/capacity building and needs assessment approaches are now appearing in the literature (health, community development, governmental activities, and related areas). I’ve found examples of implementation in Scotland, Indonesia, Spain, Minnesota, and elsewhere.
  3. Hybrid assessments always begin from assets to avoid a possible negative taint of needs.
  4. Hybrids require more cost, time, facilitation, coordination, and management than traditional needs assessment.
  5. The voice of the people (the bottom up) is much more prominent in asset/capacity building and hence in hybrid applications.
  6. For intractable problems (health, violence, etc.) hybrids are thought to be better than either needs assessment or asset/capacity building by themselves.

Some Implementation Ideas

  1. Use two working groups so that needs and assets can be looked at independently, with neither contaminated by the other, before comparing what is found for each.
  2. Expect hybrids to take longer to complete; just build in more time.

Rad Resources:

Coming in Fall 2014, Ryan Watkins and I have edited an issue of New Directions for Evaluation dedicated to Needs Assessment.

The American Evaluation Association is celebrating Needs Assessment (NA) TIG Week with our colleagues in the Needs Assessment Topical Interest Group. The contributions all this week to aea365 come from our NA TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Related posts:

  1. Scott Chaplowe on International M&E Training & Capacity Building Modules
  2. CP TIG Week: Jim Altschuld on Observations on Evaluability Assessment from a Practitioner
  3. Veena Pankaj and Kat Athanasiades on Coalition Assessment: Approaches for Measuring Capacity and Impact

NA TIG Week: Hsin-Ling (Sonya) Hung on Needs Assessment and Decision-making

American Evaluation Association 365 Blog - Wed, 05/28/2014 - 01:15

My name is Hsin-Ling (Sonya) Hung. I am an assistant professor in the Department of Educational Foundations and Research at the University of North Dakota and Program Co-chair for the Needs Assessment (NA) TIG. The main concern I want to share in this post is that we need to pay more attention to the influence of decision-making in NA work.

Lessons Learned Through Experience:

Decision-making is an essential part of the NA endeavor

  • There are many decisions in any NA endeavor. Examples of decisions required relate to the type of needs to be investigated, the selection of methods for data collection, identification of discrepancies, allocation of resources, development of action plans, and others.

Early decision-making has influence on the NA process

  • Decisions made at the very beginning determine how the NA will be conducted. Using the three-phase model (Altschuld & Kumar, 2010) as a starting point, you must decide how to go through all three phases (pre-assessment, assessment, and post-assessment). Sometimes you might minimize or leave out a phase, affecting such things as methods of data collection, personnel, time, and budget.

Decision-making has impact on NA results or quality of NA

  • Decision-making affects the overall outcome and quality of an NA, which in turn may change the program or organization funding the endeavor. One of my students worked with a campus organization to identify graduate students’ health needs. A comprehensive approach was discussed initially but could not be carried out due to the sponsor’s funding concerns. Under pressure to produce something quickly, the student was only able to collect data via an online survey near the end of the semester. The timing resulted in small n’s because most students were busy with school work, and the press for quick results called the quality of the survey into question. The data were not meaningful and participant representation was a concern. Thus decisions made during the actual NA process have an impact on its quality and what it produces.

Rad Resources:

The American Evaluation Association is celebrating Needs Assessment (NA) TIG Week with our colleagues in the Needs Assessment Topical Interest Group. The contributions all this week to aea365 come from our NA TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Related posts:

  1. NA Week: James Altschuld on Lessons Learned: Use the 3 Phase Model
  2. MME Week: Hongling Sun on Mixed Methods Design
  3. Karen Widmer on Knowledge Flow: Making Evaluation the Reference Point

NA TIG Week: Sue Hamann on Tips for Novice Needs Assessors

American Evaluation Association 365 Blog - Tue, 05/27/2014 - 01:15

Hello, my name is Sue Hamann. I work at the National Institutes of Health as a Science Evaluation Officer, and I teach program evaluation to graduate students. Today I’m providing tips to novices in needs assessment (NA).

Hot Tips:

Use the original definition of needs.

  • The original definition of NA is the measurement of the difference between currently observed outcomes and future desired outcomes, that is, the difference between “what is” and “what should be.” Novices often plan to address either status or desired future, but they do not realize how much more valuable it is to collect data about both status and future and analyze the difference between these two conditions. Read anything about NA written by Roger Kaufman, Belle Ruth Witkin, James Altschuld, or Ryan Watkins to get started.
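To make the discrepancy idea concrete, here is a minimal sketch of the arithmetic, using made-up illustrative ratings (not data from any actual assessment) for a few outcome areas rated on a 1–5 scale:

```python
# Hypothetical "what is" and "what should be" ratings on a 1-5 scale.
current = {"reading proficiency": 2.8, "attendance": 4.1, "family engagement": 2.2}
desired = {"reading proficiency": 4.5, "attendance": 4.5, "family engagement": 4.0}

# Need = the measured difference between desired and current conditions.
gaps = {area: desired[area] - current[area] for area in current}

# Rank areas by the size of the gap to suggest where to look more closely.
for area, gap in sorted(gaps.items(), key=lambda item: item[1], reverse=True):
    print(f"{area}: gap = {gap:.1f}")
```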

Collect data using multiple methods.

  • A rewarding and challenging aspect of needs assessment is that an evaluator gets to take almost all her tools out of the toolbox. From census data and epidemiologic data to document reviews to group and individual interviews, needs assessment typically requires multiple methods. The best way to start is to review the literature, both in the problem area of interest and in the evaluation journals. You can start with the New Directions for Evaluation issue (#138, summer 2013) on Mixed Methods and Credibility of Evidence in Evaluation, edited by Mertens and Hesse-Biber. Also use listservs such as AEA’s Evaltalk to discover work that has been done but not published.

Keep an open mind about the validity of qualitative data, particularly interviews.

Remember that needs assessment and program planning go hand in hand.

  • Collecting needs assessment data is just the first step in program planning. Use Jim Altschuld’s Needs Assessment Kit or other resources to plan for the work needed to conduct this vital component of program planning and evaluation.

Rad Resources:

Coming in Fall 2014, Jim Altschuld and Ryan Watkins are editing an issue of New Directions for Evaluation dedicated to Needs Assessment.

The American Evaluation Association is celebrating Needs Assessment (NA) TIG Week with our colleagues in the Needs Assessment Topical Interest Group. The contributions all this week to aea365 come from our NA TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Related posts:

  1. NA Week: James Altschuld on Lessons Learned: Use the 3 Phase Model
  2. EEE Week: Siri Scott on Conducting Interviews with Youth
  3. Jim Altschuld on So They Think They Need a Needs Assessment!

NA TIG Week: Ryan Watkins on Planning to Measure Needs

American Evaluation Association 365 Blog - Mon, 05/26/2014 - 01:15

I’m Ryan Watkins, Associate Professor at George Washington University and manager of two websites, www.WeShareScience.com and www.NeedsAssessment.org.

Needs assessments require planning. You must plan for the “who, what, why, where, when, and how” of each step – from soliciting the participation of stakeholder groups to the iterative development of grounded recommendations. One often-overlooked planning task is planning for the actual measurement of needs. Below are considerations to guide your planning:

Lessons Learned From Experience:

Consideration 1: What is a need? Needs can be defined in many ways. While discrepancy definitions are common (such as the gap between “what is” and “what should be”), varied definitions are frequently applied (such as needs as desired resources or programs).

Ask:

  • Have you agreed with stakeholders on a definition of need?
  • Are you assessing needs at the state, local, institutional, and/or individual level?
  • Are needs exclusively related to results (most useful), or do they include processes and inputs as well (not useful)?
  • Are needs to be assessed along with assets?

Consideration 2: What data is really required?

When you know how needs are to be defined, next determine what data is required to document the needs.

Ask:

  • What indicators would suggest what needs exist?
  • How could the size and scope of identified needs be measured?
  • Who else is collecting data on similar issues?
  • Which measures are “nice to have” but not absolutely required?
  • When can indirect measures be applied?
  • If applying a discrepancy definition, what data is required for measuring the current state, and what is required for comparably defining the desired state?

Consideration 3: What is feasible?

There are always constraints on a needs assessment, and these will guide what is feasible in terms of measuring needs.

Ask:

  • What techniques can be used to collect data?
  • What is an appropriate timeframe for measuring (e.g., weekly, monthly, bi-monthly, # of weeks/months after critical interventions)?
  • What is the appropriate sequence for measuring (e.g., indicators #1 and #3, followed by indicator #2)?
  • What resources (people, time, money, technology, access, etc.) are readily available?

Consideration 4: How will it be managed?

Measuring needs is a process that must be managed. Take time to assign responsibilities and hold people accountable for results; one simple way to record these assignments is sketched after the questions below.

Ask:

  • Who specifically (individuals or organizations) will provide necessary information (e.g., who is the sample, or who has the desired information)?
  • Who is responsible for collecting and analyzing data to measure needs?
  • Who is accountable for the validity of data?
  • Who is going to interpret the data in order to make recommendations?

Rad Resources: Find and share needs assessment resources at www.NeedsAssessment.org, including lesson learned videos, free publications, podcast interviews, and a new document repository.


The American Evaluation Association is celebrating Needs Assessment (NA) TIG Week with our colleagues in the Needs Assessment Topical Interest Group. The contributions all this week to aea365 come from our NA TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 

Related posts:

  1. NA Week: Roger Kaufman on Needs Assessment
  2. Mary Talbut on Testing What You Teach
  3. NA TIG Week: Sue Hamann on Tips for Novice Needs Assessors

NA TIG Week: Maurya West Meiers on Introducing Needs Assessment TIG Week and Identifying and Staying in Touch with Your Needs Assessment Stakeholders and Informants

American Evaluation Association 365 Blog - Sun, 05/25/2014 - 01:15

My name is Maurya West Meiers. I work at the World Bank as a Senior Evaluation Officer and am coauthor of A Guide to Assessing Needs: Essential Tools for Collecting Information, Making Decisions, and Achieving Development Results (available free here). This week’s blog postings are from members of the Needs Assessment TIG. Check out our TIG website for even more resources.

I’m writing about ways to identify and stay in touch with “hard to find” stakeholders and potential informants for your large scale needs assessments.

Lessons Learned:

Consider who might be your stakeholders and informants:

  • Primary. They typically have some direct relationship with the assessment (e.g., managers and employees, mayor’s office, neighborhood representatives, community members).
  • Secondary. They usually have a lesser or indirect relationship to the assessment, but should not be overlooked (e.g., residents from the neighboring community).
  • Experts and other informants. These people may have useful data to inform the assessment, but may not have a direct or even indirect relationship with it (e.g., experts in the field of study, database managers).
  • Research stakeholders. These are others who could benefit and learn from the results of your assessment (e.g., academics, policy makers). Be sure to publish your needs assessment methods and papers to build the needs assessment literature base.

How do you find your informants and stakeholders? And stay in touch with them throughout the assessment? Here are some high and low-tech ideas:

  • Websites are easy to build and provide a central place to share information about the assessment and resources.
  • Blogs allow you to share quick and informal updates – and to offer two-way engagement through comments.
  • Social media (Twitter, Facebook, Instagram, Linkedin, Flickr, Youtube, etc.) help you to connect to those interested in the assessment and build a following.
  • Mobile and text-message updates, or short ‘pulse’ surveys are becoming more common.
  • Community or organization meetings are the old standby, but essential.
  • Existing networks (such as community leaders, association representatives, etc.) allow you to find key people through snowball sampling.
  • Letters to stakeholders/groups help to get the word out formally.
  • Newsletters should not be overlooked as a way to engage stakeholders.
  • Newspaper articles, television broadcasts, advertisements, and other media outreach are useful for broad outreach.
  • Posters, announcements or events in spaces visited by stakeholders (such as municipal buildings, libraries) are inexpensive and easy to create.
  • Street billboard announcements are common in many countries.
  • Radio broadcasts and call-in shows are especially effective in certain regions, such as Africa.

Rad Resources:

  • Many cities engage with community members about needs (especially “this should be fixed/addressed” communications) through social media services such as PublicStuff.
  • McKinsey Quarterly’s latest edition provides useful tips on Tapping the Power of Hidden Influencers.

The American Evaluation Association is celebrating Needs Assessment (NA) TIG Week with our colleagues in the Needs Assessment Topical Interest Group. The contributions all this week to aea365 come from our NA TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Related posts:

  1. Xiaomei Song on Evaluating a Large-Scale High-Stakes Testing System from the Stakeholder Perspective
  2. MN EA Week: Leah Goldstein Moses on Cultivating Relationships
  3. Silvana Bialosiewicz on Turning Organizational Learning into Action: Evaluation and Strategic Planning

Sheila B Robinson on Playing “Trivial Pursuit” – AEA Style!

American Evaluation Association 365 Blog - Sat, 05/24/2014 - 05:57

Hello! I’m Sheila B Robinson, aea365’s Lead Curator and sometimes Saturday contributor, and I have some questions for you today!

Hot Tip: Poke around AEA’s website (or go directly to this link under the “Learning” tab) and you’ll find some fabulous trivia about the annual conference. For instance…

1.) Who was president of AEA in 2005?

2.) In what year(s) was the AEA Annual Conference actually held in Canada?

3.) How many different US states have hosted the conference?

4.) When was the conference theme: Evaluation for a New Century: A Global Perspective?

5.) Where will Evaluation 2015 be held?

The answers are all there!*

Cool Trick: Want to know about the sessions your favorite evaluator presented in any given year? Curious to see what the hot topics were when the conference theme was Evaluation Quality? Perhaps you have an idea about a new topic and wonder if anyone has presented on it before. Or, you’re just learning something new about evaluation and want to see who the thought leaders on that topic appear to have been over the past few years, so you can follow up on their work or even network with them. You can access conference programs for the last 15 years from the Conference History page for all of this information. Most are even searchable online!

Cooler trick: Want to know how the conference was evaluated and how it performed in any given year? How many evaluators attended in 2003? What do we know about them? How many were students, researchers, professors, or consultants? How many conferences had they attended before? Did they consider themselves novice or expert evaluators? What were their reactions to the conference in that year? Evaluation data and reports are available for several conference years.

Rad Resource: AEA’s Conference History page: Everything you wanted to know about Evaluation 1986 – Evaluation 2016 (30 years!) but were afraid to ask. (Well, perhaps not afraid…)


*Except for this one: #3. I counted 15 different states plus Washington, DC.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Related posts:

  1. Evaluation 2013 Conference Week: Michael Quinn Patton on Using the Conference Program as a Data Source About Trends in Evaluation
  2. LAWG Week: Herb Baum on the Upcoming AEA Meeting
  3. Susan Kistler on Submitting a Proposal to Present at Evaluation 2011

FIE TIG Week: Kathryn Sielbeck-Mathes and Rebecca Selove on Feminist Evaluation and Framing

American Evaluation Association 365 Blog - Fri, 05/23/2014 - 01:15

Hi! We are Kathryn Sielbeck-Mathes and Rebecca Selove, co-authors of Chapter 6 of “Feminist Evaluation and Research: Theory and Practice”. In our article, based on three evaluations of substance abuse treatment programs for individuals with co-occurring mental illness and substance abuse issues, we discuss the importance of framing and shared understanding between evaluators and evaluation stakeholders.

Lesson Learned: Although the work linked closely with our own values of fairness, social justice, and gender equity within social programming, we did not spend sufficient time understanding the differing values, language, perspectives, and frames of the program manager and his staff. Instead, we assumed we were all interpreting trauma in the same ways and sharing the same values associated with addressing trauma during treatment specifically and with programming for women in general. In hindsight, building this understanding should have held the same importance in the evaluation as monitoring fidelity and measuring outcomes.

Hot Tip:

In order to gain attention and respect for the adoption of feminist frameworks, principles, and values in conducting program evaluation, it is imperative that we frame our conversations to connect rather than compete, align rather than malign, and foster acceptance rather than objection from those we need to communicate to and with. This requires understanding their positions on issues as they follow from the language and lens of their value and belief systems.

Lesson Learned: Connecting through words, images, symbols, and stories grounded in values helps make solutions accessible and relevant to program stakeholders, service organizations, and funding agencies. Linking an issue to a widely held cultural value or belief helps start the framing process by appealing to program managers and staff, increasing their interest in learning more.

Hot Tip:

If it seems as if you are not being heard… you probably are not. A feeling of frustration can be a signal that reconstruction of a shared meaning based upon shared values is necessary!

Lesson Learned: Key tasks associated with feminist evaluation include 1) understanding the problem from the perspective of the women the program is designed to serve, 2) studying the interior and external context of the program to understand the realities and lived experiences of women, and 3) identifying the invisible structures that can undermine even the most diverse, gender-responsive, trauma informed program.

Hot Tip:

Feminist evaluators must engage in attentive conversations with those implementing and managing human service/treatment programs, listening closely for congruence and dissonance regarding the feminist frame. From the outset of a program evaluation, the feminist evaluator must be mindful and prepared for changing assumptions and language/communication that perpetuates injustice and the disempowerment of women.

Rad Resource: Combating structural disempowerment in the stride towards gender equality: an argument for redefining the basis of power in gendered relationships.

The American Evaluation Association is celebrating Feminist Issues in Evaluation (FIE) TIG Week with our colleagues in the FIE Topical Interest Group. The contributions all this week to aea365 come from our FIE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Related posts:

  1. FIE TIG Week: Donna Podems on the Difference Between Feminist Evaluation and Gender Approaches
  2. Kathryn Bowen on “Framing” Feminist Evaluation
  3. FIE/MME Week: Denise Seigart on Implementing a Feminist Evaluation of School Health Care

FIE TIG Week: Silvia Salinas Mulder and Fabiola Amariles on Latin American Feminist Perspectives on Gender Power Issues in Evaluation

American Evaluation Association 365 Blog - Thu, 05/22/2014 - 01:15

Hi! We are Silvia Salinas Mulder and Fabiola Amariles, co-authors of Chapter 9 of “Feminist Evaluation and Research: Theory and Practice”. Our article examines the fact that in our region, understanding and acceptance of gender mainstreaming as an international mandate is still slow, and even decreasing in some political and cultural contexts where the indigenous agenda and other internal and geopolitical issues are gaining prominence. Feminist evaluation may play an important role in producing evidence to create policies that improve the lives of women, but it is necessary to make feminist principles operational in the context of the multicultural Latin American countries.

Lesson Learned: We should re-consider and reflect on concepts and practices usually taken-for-granted like “participation.” In evaluations, members of the target population are usually treated as information resources but not as key audiences, owners and users of the findings and recommendations of the evaluation. Interactions with excluded groups usually reproduce hierarchical power relations and paternalistic communication patterns between the evaluator and the interviewed people, which may shape participation patterns, as well as the honesty and reliability of responses.

Hot Tip: Emphasize that everyone should have the real opportunity to participate and also to decline from participating (e.g., informed consent), and should not fear any implications of such a decision (e.g., formal or informal exclusion from future program activities). Having people decide about their own participation is a good indicator of ethical observance in the process.

Lesson Learned: Sensitivity and respect for the local culture often lead to misinterpreting rural communities as homogenous entities, paying little attention to internal diversity, inequality and power dynamics, which influence and are influenced by the micro-political atmosphere of an evaluation, oftentimes reproducing exclusion patterns.

Hot Tip: Pay attention and listen to formal leaders and representatives, but also search actively for the marginalized and most excluded people, enabling a secure and confidential environment for them to speak. The role of cultural brokers knowledgeable about the local culture is key to achieving an inclusive, context-sensitive approach to evaluation.

Lesson Learned: Another key concept to reflect on is “success.” On one hand, the approach of success as an objective and logically-derived conclusion of “neutral” analysis usually omits its power essence and intrinsic political and subjective dimensions. On the other hand, evaluation cultures that privilege limited funder-driven definitions of success reproduce ethnocentric perspectives, distorting experiences and findings, and diminishing their relevance and usefulness.

Hot Tip: Openly discussing the client’s and donor’s ideas about “success” and their expectations regarding a “good evaluation” beyond the terms of reference diminishes resistance to rigorous analysis and constructive criticism.

Rad Resources:

Silvia Salinas-Mulder and Fabiola Amariles on Gender, Rights and Cultural Awareness in Development Evaluation

Batliwala, S. & Pittman, A. (2010). Capturing Change in Women’s Realities.

The American Evaluation Association is celebrating Feminist Issues in Evaluation (FIE) TIG Week with our colleagues in the FIE Topical Interest Group. The contributions all this week to aea365 come from our FIE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Related posts:

  1. Kathryn Bowen on “Framing” Feminist Evaluation
  2. FIE TIG Week: Donna Podems on the Difference Between Feminist Evaluation and Gender Approaches
  3. FIE/MME Week: Bessa Whitmore on Researcher and Evaluator Roles and Social Justice

FIE TIG Week: Tristi Nichols on A Feminist-Ecological Model for Evaluation

American Evaluation Association 365 Blog - Wed, 05/21/2014 - 01:15

Bismillah! My name is Tristi Nichols, and with an advanced degree from Cornell University, I have been continuously involved in a wide-ranging evaluation practice over the past fifteen years.

In my area of work – the international development arena – I find that I am frequently asked to evaluate the extent to which gender inequity has been increased or decreased by an intervention in a given context (e.g., agriculture, education, or post-conflict). I searched high and low for a framework with insightful questions to guide my evaluation practice. Over the years, I found only gaps and tidbits of information here and there. Frustrated, I decided to develop my own framework, which includes guiding questions to measure the degree to which an intervention furthers female empowerment and/or minimizes gender inequity within an international context. In the recently released book “Feminist Evaluation and Research: Theory and Practice”, this rubric of questions is included in my chapter entitled “Measuring Gender Inequality in Angola: A Feminist-Ecological Model for Evaluation.”

Since you may not go out right away to purchase or download the book, this contribution includes a few “Rad Resources” to start you thinking.

Rad Resources: Measurement draws on a feminist-ecological lens for assessing program effects on women, using Urie Bronfenbrenner’s three systems of environmental influences: (a) Micro-; (b) Meso-/Exo-; and (c) Macro-systems. My labels are slightly different: (i) Micro-level systems have been renamed “individual-level”; (ii) Meso-/Exo-systems are referred to as the “composite community-level”; and (iii) Macro-level systems are assigned the term “collective-level.”

At the collective level, the focus of the evaluative effort is at the social or political environment level;

The composite community-level concentrates on critical factors for program effectiveness and/or reasons for implementation failure, and how these affect women’s and girls’ participation.

The individual-level looks at the complex relations between the woman [or adolescent girl] and her environment (e.g., home, family, school, marketplace, and workplace).

Daily, there are new and interesting blogs and websites that can be easily categorized into these three measurement areas. Here are just a few which are on my desktop at the moment…

The American Evaluation Association is celebrating Feminist Issues in Evaluation (FIE) TIG Week with our colleagues in the FIE Topical Interest Group. The contributions all this week to aea365 come from our FIE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Related posts:

  1. FIE TIG Week: Donna Podems on the Difference Between Feminist Evaluation and Gender Approaches
  2. FIE/MME Week: Katherine Hay on How to do Feminist Evaluation
  3. FIE TIG Week: Katherine Hay on Using Feminist Evaluation to Increase Equity in the Real World

FIE TIG Week: Michael Bamberger on Identifying Unintended Outcomes of Programs Promoting Gender Equality

American Evaluation Association 365 Blog - Tue, 05/20/2014 - 01:15

Hi! This is Michael Bamberger, an Independent Consultant specializing in the evaluation of social development and gender programs. Over the past few years I have worked with United Nations organizations, bilateral programs, and NGOs, helping them strengthen their evaluations of gender equity policies.

Most international development agencies have now defined the promotion of gender equality (or gender equity) as one of their development goals, and most conduct periodic evaluations to assess the extent to which their programs and policies contribute to strengthening the economic, social, and political empowerment of women and reducing the differences between women and men on these dimensions. However, many of these evaluations tend to over-estimate the positive effects of programs on gender equality and frequently under-estimate or even ignore some of their negative outcomes. Frequently the evaluations interview only the women participating in the project and produce glowing reports on the significant benefits, but fail to talk to women who did not participate. The failure to identify negative outcomes is unfortunately very common and has serious implications.

Hot Tips:

  • Build into the Terms of Reference for the evaluation a requirement that the evaluators, even when working on a limited budget and under time-constraints, must identify and interview women (and where appropriate men) who did not benefit from the project.
  • Assess carefully the evaluation methodology to ensure that it is capable of identifying unintended outcomes.  Many evaluation methodologies such as results-based evaluations, many theories of change, and most experimental and quasi-experimental designs only measure the extent to which intended results have been achieved and are not able to capture unintended outcomes.
  • Be aware that many evaluations only obtain information from people directly involved in the program, most of whom will be reluctant to criticize the program which pays their salary.
  • In most communities there are people who are familiar with the program being evaluated and its effects, but who are not directly involved and who are able to provide a more balanced perspective.  Examples include: the district nurse (who usually knows almost all of the women in the community), the police chief (a useful source in cases of domestic violence), women’s organizations, school teachers and local religious leaders.
  • Use a mixed methods design that combines quantitative and qualitative data and that emphasizes the importance of triangulation to increase validity by systematically comparing information collected from different sources.

Rad Resources: The World Bank “Gender and Transport” website illustrates many of the challenges and unintended consequences for women and girls of transport initiatives that are assumed to be “gender neutral”. Module 2 identifies the challenges and Module 5 identifies tools, including research tools, for promoting gender equity in this sector.

The American Evaluation Association is celebrating Feminist Issues in Evaluation (FIE) TIG Week with our colleagues in the FIE Topical Interest Group. The contributions all this week to aea365 come from our FIE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 

Related posts:

  1. CP TIG Week: Nidal Karim on Innovating for Gender Transformative M&E
  2. FIE TIG Week: Katherine Hay on Using Feminist Evaluation to Increase Equity in the Real World
  3. FIE/MME Week: Bessa Whitmore on Researcher and Evaluator Roles and Social Justice

FIE TIG Week: Katherine Hay on Using Feminist Evaluation to Increase Equity in the Real World

American Evaluation Association 365 Blog - Mon, 05/19/2014 - 01:15

Hi, I’m Katherine Hay. I’ve spent the last 20 years in India working on development, research, and evaluation.

Lessons Learned: One thing I’ve learned, and say all the time, is that “equity is not an intervention.”

In societies where equity is the driving goal, perhaps fairly straightforward evaluations of interventions can increase equity. In such societies evaluations could identify the ‘best’ programs, where ‘best’ is defined as reducing inequities. Armed with that knowledge we would design more of the ‘right programs’ and equity would be achieved.

But these societies do not exist. Most of the world is characterized by increasing inequity and by development models that put equity in the back seat.

So how can evaluation increase equity in the real world? Is it reasonable to expect that interventions generated from systems that perpetuate gender and other inequities will lead to equity or that evaluation of interventions will deepen equity?

I practice evaluation because I think it is reasonable, but only if such evaluations are understood as intentional disruptions to inequitable systems. This entails seeing equity as emerging from and integral to movements to change societies rather than from technical tinkering within existing systems. This is why applying a feminist lens to evaluation is so core to the way I practice evaluation.

At worst, evaluation can reinforce inequities; on average it might reflect them, but at best it can challenge them. Feminist evaluation offers a lens that fosters designs, approaches, and tools which bring inequity to the foreground.

Hot Tips: I’m often asked, ‘how do you do feminist evaluation?’ There isn’t a simple checklist. The only way is by applying feminist principles at each stage of an evaluation.

Reach out and get involved. I’ve come to feminist evaluation by working with social activists, researchers, and evaluators. We share designs, instruments, processes, and challenges. I give time to feminist NGOs with limited resources, but a lot of desire, to use evaluation to guide their work. Being part of these groups deepens my practice and experience. Find peers to challenge and inspire you.

Rad Resources: A few of us formed a gender and evaluation group that now has 646 members from around the world. Why not join the discussion?

Try to make feminist evaluation relevant to issues on the ground. Following a spate of brutal attacks on women in India, I changed a planned keynote at the last minute to discuss how evaluation can help end violence against women. It was a risk I’m glad I took. You can see the video here.

EvalPartners gives small grants to voluntary evaluation organizations to implement peer-to-peer, teaching, and evaluation advocacy projects. All proposals need to include equity and gender, but these can also be the focus.

The American Evaluation Association is celebrating Feminist Issues in Evaluation (FIE) TIG Week with our colleagues in the FIE Topical Interest Group. The contributions all this week to aea365 come from our FIE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 

Related posts:

  1. FIE/MME Week: Katherine Hay on How to do Feminist Evaluation
  2. FIE TIG Week: Donna Podems on the Difference Between Feminist Evaluation and Gender Approaches
  3. Kathryn Bowen on “Framing” Feminist Evaluation

FIE TIG Week: Donna Podems on the Difference Between Feminist Evaluation and Gender Approaches

American Evaluation Association 365 Blog - Sun, 05/18/2014 - 01:15

Greetings from South Africa. I am Donna Podems, a Research Fellow at Stellenbosch University, and founder and director of OtherWISE: Research and Evaluation, a small monitoring and evaluation firm in Cape Town, South Africa.

As a practitioner and academic, I often engage in discussions around evaluation theory and practice. One common discussion is around the differences between gender evaluation approaches and feminist evaluation. While they have many commonalities, and one often draws on the other, they each bring their own strengths—and weaknesses.

Understanding the difference between the two approaches in their purest form enables evaluators to choose what elements of which approach are appropriate for their evaluation design—if any at all.

Lesson Learned: Four Key Distinctions between Feminist Evaluation and Gender Approaches:

  1. Feminist evaluation and gender approaches have different historical roots and bring their own strengths (and weaknesses) to an evaluation.
  2. Gender approaches aim to document and map the lives of women, while feminist evaluation aims to change them.
  3. Feminist evaluation would be used to guide the evaluation methodology if the evaluation questions seek to understand why differences exist between men and women and to bring about social change.
  4. Feminist evaluation offers broad guidance on how to think about an evaluation and how to use that reflection to inform the evaluation’s design, data collection, and communication of findings. Gender approaches often provide more concrete guidelines and prescriptive methods for data collection and analysis.

Feminist evaluation and gender approaches can be quite complementary, with both integrating effortlessly into an evaluation design. They can also be used as distinct approaches on their own, or incorporated into other approaches, such as economic evaluations.

Rad Resources:

The American Evaluation Association is celebrating Feminist Issues in Evaluation (FIE) TIG Week with our colleagues in the FIE Topical Interest Group. The contributions all this week to aea365 come from our FIE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Related posts:

  1. FIE/MME Week: Katherine Hay on How to do Feminist Evaluation
  2. FIE/MME Week: Donna Podems on Applying Feminist Evaluation for Non-feminist Evaluators
  3. FIE/MME Week: Donna Mertens and Mika Yamashita on Co-hosting Mixed Methods Evaluation and Feminist Issues in Evaluation

Dan McDonnell on Uncovering Interesting Facts About Your Twitter Network with Twitonomy

American Evaluation Association 365 Blog - Sat, 05/17/2014 - 17:34

Hello, my name is Dan McDonnell and I am a Community Manager for the American Evaluation Association. With the plethora of social media tools, apps and platforms available, it can often be overwhelming to find the right one to best suit your needs. I recently came across a tool that I’m very excited to share with you all today – one that can open the doors to more interesting insights on your Twitter network.

Rad Resource

Twitonomy is a powerful Twitter analytics platform that can reveal intriguing statistics about Twitter profiles and hashtags. Let's start by looking at what you can learn about your own profile.

[Image: What your profile on Twitonomy might look like.]

Once you've signed in via your own Twitter profile, type your Twitter handle into the 'Analyze Twitter's profile of' box, and wait for the results. The next screen (example above) will break down the entirety of your Twitter history (yes, every Tweet you've ever sent!) along with some cool data points, including how many Tweets a day you send on average, the breakdown of your Tweet responses vs. Retweets vs. @mentions, and even a full activity chart that shows the days, times, and seasons in which you Tweet most frequently. It looks like 11:00 AM is my sweet spot, and I Tweet pretty consistently Monday through Friday.
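
If you'd like to poke at a few of these numbers yourself, here is a minimal do-it-yourself sketch, separate from Twitonomy, that computes comparable statistics from an exported archive of your own Tweets. The file name ("my_tweets.csv") and its "timestamp" and "text" columns are assumptions for illustration, not anything Twitonomy or Twitter provides under those names.

    # Rough, do-it-yourself version of the profile statistics described above,
    # computed with pandas from a hypothetical CSV export of your own Tweets.
    # Assumed (not part of Twitonomy): "my_tweets.csv" with a "timestamp"
    # column (date-time of each Tweet) and a "text" column (the Tweet body).
    import pandas as pd

    tweets = pd.read_csv("my_tweets.csv", parse_dates=["timestamp"])

    # Average Tweets per day over the life of the archive
    days_active = max((tweets["timestamp"].max() - tweets["timestamp"].min()).days, 1)
    print("Tweets per day:", round(len(tweets) / days_active, 2))

    # Rough breakdown of replies vs. Retweets vs. everything else
    def tweet_type(text):
        if text.startswith("RT "):
            return "retweet"
        if text.startswith("@"):
            return "reply"
        return "original"

    print(tweets["text"].fillna("").apply(tweet_type).value_counts())

    # Activity pattern: which weekdays and hours you Tweet most often
    print(tweets["timestamp"].dt.day_name().value_counts())
    print("Busiest hour:", tweets["timestamp"].dt.hour.value_counts().idxmax())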

Hot Tip

With so many different data points available, it’s tough to even know where to begin. Here are a few things I’d recommend researching with Twitonomy to start:

  • See who you interact with most in your network with the Users most mentioned, most retweeted, and most replied to statistics (see the sketch after this list for a do-it-yourself version).
  • Find out what your best performing Tweets of all time are, and use that knowledge to inform what you’re Tweeting about in the future.
  • Check out patterns and trends in how your fellow evaluators are Tweeting – which days and times and who they are Tweeting with and about.
  • Take a look at some of your favorite Twitter users' 'most frequently used hashtag' lists to see what types of Twitter conversations you might be missing out on.
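
For the "users most mentioned" idea in the first bullet, here is a small sketch of how you could tally mentions and hashtags yourself from the same hypothetical "my_tweets.csv" archive used in the earlier sketch. It illustrates the idea only; it is not how Twitonomy computes its statistics.

    # Tally @mentions and #hashtags from the hypothetical Tweet archive;
    # the file and column names are assumptions, as in the earlier sketch.
    import re
    from collections import Counter

    import pandas as pd

    tweets = pd.read_csv("my_tweets.csv")

    mentions, hashtags = Counter(), Counter()
    for text in tweets["text"].dropna():
        mentions.update(m.lower() for m in re.findall(r"@\w+", text))
        hashtags.update(h.lower() for h in re.findall(r"#\w+", text))

    print("Users most mentioned:", mentions.most_common(5))
    print("Hashtags most used:", hashtags.most_common(5))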

These few tips really just scratch the surface of what Twitonomy can do. With a Premium subscription, you can unlock loads of new features, but everything I've outlined so far is doable in the free version of the tool. In a future post, I'll tackle some of the cool Premium features and what they can reveal to make you a smarter Tweeter. Stay tuned!

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Dan McDonnell is a regular Saturday contributor to AEA365, where he blogs on social media-related topics for evaluators. You can reach Dan on Twitter at @Dan_McD.


Related posts:

  1. Dan McDonnell on Upcoming Changes to Twitter
  2. Dan McDonnell on Making New Friends and Mastering Lesser-Known Twitter Features Without Third Party Apps
  3. Dan McDonnell on Keyboard Shortcuts and Other Advanced Twitter Features

Judy Savageau and Kathy Muhr on Working with Different Data Sources

American Evaluation Association 365 Blog - Fri, 05/16/2014 - 01:15

Hi. We’re Judy Savageau and Kathy Muhr from the University of Massachusetts Medical School’s Center for Health Policy and Research. Within our Research and Evaluation Unit, we work on a number of projects using qualitative and quantitative methods as well as primary and secondary data sources. We’ve come to appreciate that different types of data from different sources need varying levels of data management and quality oversight.

One of our current projects is evaluating a screening program that requires primary care providers to screen children for potential behavioral health conditions. Among a random sample of 4000 children seen for a well-child visit during one of two study years, we collected data both from medical records (primary data source: quantitative data and qualitative chart notes) and from administrative/claims data (secondary data source: solely quantitative). Given the nature of the data from the two sources, we implemented different data quality checks and cross-checks between them.

Lessons Learned:

  • Claims data comes from the insurance payer and has already gone through the payer's own internal data cleaning and data management processes. However, much of the patient demographic data is captured at the time of insurance enrollment and is not updated at the time of a clinical visit. Some data elements are often incomplete and remain so even after numerous clinical encounters, especially data such as gender, race, ethnicity, and primary language. While a provider might 'know' this information when seeing a patient, it's not necessarily updated in administrative datasets.
  • Many practices don't necessarily collect demographic data in a uniform manner unless they're required to report on it. Primary care providers are well connected to their patients' demographics in terms of needs for interpreters, cultural health beliefs, and age- or gender-specific anticipatory guidance needs. Unfortunately, the medical records data often had nearly as much missing data as the administrative claims data!
  • Cross-checking data between these two sources was an important step for us to take in this project as we hypothesized that there might be differences in screening children for behavioral health needs. Wanting to assess potential health service disparities was an important factor in this evaluation given the interest in vulnerable populations.
  • While electronic medical records (EMRs) were in place in at least 60% of the practices where charts were abstracted, it was no surprise to find that EMRs vary from practice to practice. It was clear that projects such as this one may need to use text-based data within the chart notes to obtain vital information in order to assess potential disparities.

Hot Tip: Although data quality is key, find a balance between budgetary and personnel resources and the time required to cross-check data through multiple sources and/or impute missing data using a variety of techniques.
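
To make the cross-checking idea concrete, here is a minimal sketch in Python (pandas) of how one might quantify missingness and compare demographic fields across two sources. The file names, the "patient_id" key, and the column names are hypothetical placeholders for illustration, not the authors' actual data or methods.

    # Hedged sketch: quantify missingness in each source, then check agreement
    # between claims (administrative) and chart-review (medical record) data.
    # File names, the "patient_id" key, and column names are hypothetical.
    import pandas as pd

    claims = pd.read_csv("claims.csv")        # secondary/administrative source
    charts = pd.read_csv("chart_review.csv")  # primary/medical-record source
    demo_fields = ["gender", "race", "ethnicity", "primary_language"]

    # Percent missing for each demographic field, by source
    for name, df in [("claims", claims), ("charts", charts)]:
        print(name, (df[demo_fields].isna().mean() * 100).round(1).to_dict())

    # Where both sources report a value, how often do they agree?
    merged = claims.merge(charts, on="patient_id", suffixes=("_claims", "_chart"))
    for field in demo_fields:
        both = merged.dropna(subset=[field + "_claims", field + "_chart"])
        agree = (both[field + "_claims"] == both[field + "_chart"]).mean()
        print(field, ": agreement =", format(agree, ".0%"), "on", len(both), "records")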

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Related posts:

  1. MA PCMH Eval Week: Ann Lawthers on Triangulation Using Mixed Methods Appeals to Diverse Stakeholder Interests
  2. Gary Huang on Improper Payment Studies
  3. MME Week: Terri Anderson on Using Best Practices for Mixed Methods Research in Evaluation

Jeanne Hubelbank on Assessing Audience or Client Knowledge in a Sweet Way

American Evaluation Association 365 Blog - Thu, 05/15/2014 - 01:15

Hello, my name is Jeanne Hubelbank. I am an independent evaluation consultant. Most of my work is in higher education, where, most recently, I help faculty evaluate their classes, develop proposals, and evaluate professional development programs offered to public school teachers. Sometimes I am asked to make presentations or conduct workshops on evaluation. When doing this, I find it helpful to know something about the audience's background. Clickers, hand raising, holding up colored cards, standing up, and clapping are ways to approach this. A recent AEA365 post, Innovative Reporting Part I: The Data Diva's Chocolate Box, which showed how to present results on candy wrappers, served as the impetus for another way to introduce evaluation and to assess people's understanding of it.

Instead of results, write evaluation terms such as use, user, and methods on stickers and place them on the bottoms of Hershey's Kisses®, one word to a kiss. Participants arrange their candy in any format that they think represents how one approaches the process of conducting an evaluation. This gives you a quick view of how the participants view evaluation, and most people like to eat the candy afterwards.

Hot tips:

  • Use three-quarter inch dots
  • Hand write or print terms you want your clients to display
  • Besides Hershey's Kisses®, provide Starbursts® for those who are allergic or averse to chocolate
  • Use different colored kisses for key terms, such as use and uses in silver and assessment in red, for a quick view of where people place them in the process
  • Wrap each collection of candy terms into a piece of plastic wrap and tie with a curled ribbon
  • Ask people to arrange candy in any format that they think represents how one approaches the process of doing an evaluation
  • You can do this before and after a presentation, but if you do it again, remind people to wait to eat.

Rad Resources:

Susan Kistler’s chocolate results

Stephanie Evergreen's cookie results and her book Presenting Data Effectively: Communicating Your Findings for Maximum Impact.

Hallie Preskill and Darlene Russ-Eft’s book Building Evaluation Capacity: 72 Activities for Teaching and Training.

Michael Q. Patton’s book Creative Evaluation.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Related posts:

  1. Nicole Vicinanza on Explaining Random Sampling to Stakeholders
  2. Best of aea365 week: Nicole Vicinanza on Explaining Random Sampling to Stakeholders
  3. Susan Kistler on Innovative Reporting Part I: The Data Diva’s Chocolate Box

Sue Griffey on Ready or Not: Mentor and Mentee Readiness

American Evaluation Association 365 Blog - Wed, 05/14/2014 - 01:15

I'm Sue Griffey; I lead the Evaluation Center at Social & Scientific Systems, Inc. in Silver Spring, MD. Outside of work, I mentor professionals in evaluation and public health, both through formal programs (Cherie Blair Foundation (CBF), APHA-SA National Mentoring Program, Aspire Foundation, Rollins School of Public Health Annual Mentoring Program) and through individual connections.

I have noticed over the past few years, as my mentoring work has increased, that my ability to assess mentoring readiness is critical to the success of the mentoring relationship.

Hot Tip: Mentoring is a volunteer activity. Don't just assume that the Mentor-Mentee pairing results in both being ready. It may appear as Mr+/Me+ (as in the table below), but the pairing may actually be in a discordant cell.

Mentee isn't ready: The mentee may not realize she isn't ready for mentoring; you as the mentor may need to help her see that. A mentee may identify mentoring as what she needs when it really isn't. As the mentor, develop and apply metrics for readiness as you would in an evaluation.

Hot Tip: I have a 3-email rule. If I have to track down the mentee more than 2 times because he has missed a scheduled session or not confirmed a session time, my third email lays out my perspective that there may be a mismatch in what the mentee is able to do (as shown below).

Hot Tip: Don’t rule out a mentoring program because you don’t think you offer the program’s content or focus. I became a CBF mentor in its initial program even though I didn’t necessarily have the business focus I thought they wanted. My match was a mentee who two years later still benefits from my experience in public health and leadership.

Mentor isn’t ready: If you have agreed to mentor, respect the commitment or acknowledge that you can’t.

As the mentee, make sure you are getting what you need from the mentor. And if you aren't getting what you need, don't be afraid to let the mentor or the mentoring program manager know. It may be that the mentor really isn't ready for the mentoring relationship.

Hot Tip: It may help you as a mentee to think of the mentoring relationship overall, and of each session, as answering three questions:

  1. What do you need right now?
  2. What do you want to do and why?
  3. How can your mentor help you?

Hot Tip: Being a mentee is as important as your work or schooling. Be proactive in communications: check your email daily, and let your mentor know your schedule, your time zone, and how and when to reach you.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Related posts:

  1. Tamara Bertrand Jones on Finding and Working With a Mentor
  2. Norma Martinez-Rubin on Mentorship and Involvement in AEA
  3. DOVP Week: Nivedita Ranade and Tom McKlin on The Importance of Nurturing in Mentoring Students with Disabilities (SwD)