Monitoring, Evaluation and Learning Systems

Dawn Henderson on Developing a Program Theory to Guide your Evaluation Plan

American Evaluation Association 365 Blog - Tue, 02/25/2014 - 01:15

I am Dawn Henderson, an Assistant Professor at Winston-Salem State University and former AEA GEDI (Graduate Education Diversity Initiative) intern. My GEDI experience gave me the opportunity to deepen my knowledge of program theory in evaluation, which I have used in designing research and in work with a school-based intervention. I want to share with you some tips I have used to help programs think about their particular theory and ways to evaluate and measure it. These tips come from work with youth programs, so modify them for your specific needs.

Hot Tip: Conduct interviews with program staff and collect program materials. Interview key staff using questions like: What are the unique characteristics of this program? How do these impact youth? When youth finish your program, what do you want them to achieve or have? How would you measure that? These perspectives can assist you with understanding the factors that may potentially lead to outcomes. Review program materials such as curricula, lesson plans, and activities, and organize them into common themes. For example, I read through the lesson plans of one program, identified managing conflict as a consistent theme, and used this to help develop program outcomes.

I also used the findings to identify literature to support the program’s efforts.  For example, I framed the activities of the program using research in positive youth development.

Hot Tip: Use visual aids.  Use a visual aid to draw connections between program characteristics and outcomes. This can be created using basic shape features in Microsoft Word or SmartArt.  I found that this visual aid helped the program think about how to achieve its objectives and communicate its model to external constituents.

Hot Tip: The “if” and “then” logic. It seems that everyone talks about using logic models in evaluation. Using the information you collected from program staff, program materials, and your visual aid, develop a series of “if” and “then” statements. For example, IF program X provides activities in conflict management and resolution, THEN youth participants will improve their ability to manage and resolve conflict.

Hot Tip: The evaluation plan. This is when you start gearing the program and yourself up for the evaluation. This can be similar to logic model development; I integrate each of the previous steps in this phase and include objectives, measures, participants, analysis, and outcomes. This not only helped me think through the evaluation, but also assisted the program in communicating its efforts to external constituents.

Rad Resource: Program Theory and Logic Models by the Amherst H. Wilder Foundation

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Related posts:

  1. Monica Hargraves and Miranda Fang on Operationalizing the “Golden Spike” – Practical Guidance for Literature Searches to Complement Program Evaluation
  2. SIOP Week: Dale S. Rose on Organization Development: A Program Worth Evaluating (Logically)
  3. Bernadette Sangalang on Developing or Improving Evaluation Efforts in Nonprofit Organizations

Laura Beals and Noah Schectman on Data Formatting for Performance Management Systems

American Evaluation Association 365 Blog - Mon, 02/24/2014 - 01:15

Hello! We are Laura Beals (program evaluator) and Noah Schectman (database administrator) from Jewish Family and Children’s Service (JF&CS), located in Boston, MA. At JF&CS we use a cloud-based case management system to facilitate data collection about our clients and services. We are both part of the internal evaluation department at JF&CS (i.e., Noah is not part of IT), so we work closely together to ensure that we are collecting data in a way that allows us to complete our analyses in the most efficient manner possible.

Lessons Learned: Though our system has built-in reporting tools, we often download data for more complex analysis in another tool, such as Access or SPSS. In addition, though the data collection tools are designed as easy-to-complete forms in the system, we do have to bulk upload data regularly.

Many case management/performance management systems allow for back-end customization of the data collection tools—you may have the ability to do so in-house (as we do) or you may have to work with a third-party developer. Regardless, as an evaluator, if you are working with an online performance management system, you should ask yourself: “What does the data need to look like when it is downloaded? When it is uploaded?” In general, we first think about how the data will be used, then design the data architecture to match.

Hot Tips: When designing new data collection tools in our database, we ask several key questions about how the data should be formatted on the back-end (a small illustrative sketch follows the list), including:

  • What are the unique identifiers for each case that will need to be downloaded with or uploaded to the database?
  • Should the data be arranged so that each case is on a row, or each assessment is on a row?
  • For each variable, should the variable labels or the numerical values be used?
  • How are multiple-response variables formatted? As dummy variables?
  • If names are used, how will they be formatted? What about addresses? What about dates?
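
To make the row-layout and dummy-variable questions concrete, here is a minimal sketch in Python with pandas; the column names and values are invented for illustration and are not JF&CS’s actual schema:

    import pandas as pd

    # Hypothetical download: one row per assessment ("long" format),
    # with a multiple-response field stored as a delimited string.
    long_df = pd.DataFrame({
        "client_id": [101, 101, 102],
        "assessment": ["intake", "exit", "intake"],
        "score": [3, 5, 4],
        "services": ["housing;food", "food", "housing"],
    })

    # One case per row ("wide" format): pivot assessments into columns.
    wide_df = long_df.pivot(index="client_id", columns="assessment", values="score")

    # A multiple-response variable split into dummy (0/1) columns.
    dummies = long_df["services"].str.get_dummies(sep=";")
    print(wide_df)
    print(long_df[["client_id"]].join(dummies))

Deciding between the long and wide layouts up front, as the authors suggest, saves exactly this kind of reshaping later.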

Even when we think we have it figured out, we always enter fake assessments for fake clients in the system, through the online form and through a bulk upload, and then download the data. We then review the resulting import/download and triple-check that the data is formatted in the manner we expect. We find doing the work to prepare the system ahead of time saves us a lot of data formatting and manipulation down the road!

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 

Related posts:

  1. Pei-Pei Lei on Using InfoPath as a Project Management/Data Collection Tool
  2. Felix Blumhardt on Simplifying the Data Collection Process with User-Friendly Workbooks
  3. PD Presenters Week: Jennifer Ann Morrow on What to Do With “Dirty” Data – Steps For Getting Evaluation Data Clean and Useable

Cameron Norman on The Evaluator-as-Designer

American Evaluation Association 365 Blog - Sun, 02/23/2014 - 06:59

You might not think so, but I think you’re a designer.

My name is Cameron Norman and I work with health and human service clients doing evaluation and design for innovation. As the Principal of CENSE Research + Design, I bring together concepts like developmental evaluation, complexity science, and design to help clients learn about what they do and better create and re-create their programs and services to innovate for changing conditions.

Nobel Laureate Herbert Simon once wrote: “Everyone designs who devises courses of action aimed at changing existing situations into preferred ones”.

By that standard, most of us who are doing work in evaluation probably are contributing designers as well.

Lessons Learned: Design is about taking what is and transforming it into what could be. It is as much a mindset as it is a set of strategies, methods and tools. Designing is about using evidence and blending it with vision, imagination and experimentation.

Here are some key lessons I’ve learned about design and design thinkers that relate to evaluation:

  1. Designers don’t mind trying something and failing as they see it as a key to innovation. Evaluation of those attempts is what builds learning.
  2. When you’re operating in a complex context, you’re inherently dealing with novelty, lots of information, dynamic conditions, and no known precedent, so past practice will only help so much. Designers know that every product intended for this kind of environment will require many iterations to get right; don’t be afraid to tinker.
  3. Wild ideas can be very useful. Sometimes being free to come up with something outlandish in your thinking reveals patterns that can’t be seen when you try too hard to be ‘realistic’ and ‘practical’. Give yourself space to be creative.
  4. Imagination is best when shared. Design is partly about individual creativity and group sharing. Good designers work closely with their clients to stretch their thinking, but also to enlist them as participants throughout the process.
  5. Design (and the learning from it) doesn’t stop at the product (or service). Creating an evaluation is only part of the equation. How the evaluation is used and what comes from that is also part of the story, because that informs the next design and articulates the next set of needs.

I write regularly on this topic on my blog, Censemaking, which has a library section where you can find more resources on design and design thinking. Design is fun, engaging and taps into our creative energies for making things and making things better. Try it out and unleash your inner designer in your next evaluation.


Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Related posts:

  1. Cameron Norman on Complexity Science for Evaluators
  2. Bloggers Week: Cameron Norman on Censemaking
  3. Susan Kistler on the OECD Better Life Index

Dan McDonnell on 3 Tools to Make Reading and Sharing On Twitter Even Easier

American Evaluation Association 365 Blog - Sat, 02/22/2014 - 17:31

Hello, my name is Dan McDonnell and I am a Community Manager for the American Evaluation Association. As someone who uses Twitter extensively in both a personal and professional capacity, I have found that, while powerful, the basic client is occasionally lacking in functionality. In this post, I provide a brief overview of a few tools that can make for a more feature-rich and convenient Twitter experience. As more and more evaluators look to Twitter as a potent avenue for knowledge sharing, my hope is that this list provides some valuable information on how to improve that experience.

Rad Resource: Hootsuite

Hootsuite is a powerful social media management tool that offers free, paid, and enterprise versions. It puts a social media dashboard at your fingertips: access to behind-the-scenes data on follower interaction, the ability to create and schedule tweets and other social media posts, and ways to customize how you visualize different hashtags and Twitter feeds. Hootsuite is best used to keep track of who mentions you on Twitter and retweets your tweets, and to organize the type of information you want to consume on Twitter. Think of it as your enhanced Twitter homepage!

Rad Resource: Buffer
Buffer, quite simply, makes it easy to schedule tweets. By adding Buffer as a browser add-on or as an app on your mobile device, you can automatically tweet out any article or blog post. Once installed, you’ll see Buffer added to the ‘Share on social media’ option across the web. But the best feature of Buffer? It automatically schedules your posts at an optimal time for your followers to read, helping you space out your posts, plan ahead, and share great knowledge and content more easily.

Rad Resource: Pocket
Found a blog post that you’d love to read, but don’t have time right now? Pocket has both an app and a browser add-on that lets you save interesting posts and content for later with one click, similar to the bookmarking feature on most browsers. What makes Pocket different? Any pages that you save to Pocket will be available offline (and on any device that you can use to access your Pocket account), so if you’re on a plane, in an airport, or in another location with a limited internet connection, you still have full access to your Pocket-ed content. I love using it while on the subway!

Do you use any of the above tools, and if so, what are your favorite features?

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Dan McDonnell on Google + | Dan McDonnell on Twitter

Related posts:

  1. Dan McDonnell on Using Lists to Become a Twitter Power User
  2. Dan McDonnell on Using Twitter to Add Value to Your Evaluation 2013 Experience
  3. Dan McDonnell on Making New Friends and Mastering Lesser-Known Twitter Features Without Third Party Apps

Veena Pankaj and Kat Athanasiades on Coalition Assessment: Approaches for Measuring Capacity and Impact

American Evaluation Association 365 Blog - Fri, 02/21/2014 - 08:32

Greetings fellow evaluators!  We are Veena Pankaj and Kat Athanasiades from Innovation Network.  Over the past few years we’ve been evaluating coalition capacity and we are excited to share our learnings with you.

Building and activating coalitions is an increasingly common strategy in the social sector, especially when working towards advocacy and policy change.  Research and experience tell us that high-capacity coalitions are better positioned to advance policy. So how do you measure coalition capacity?

While standard tools for evaluating coalition capacity may provide value to the sector, we advocate for situation-specific tools because of the different contexts coalitions face.

Hot Tips: Here are some tips to get started.

Step 1: Clarify the Purpose of the Assessment

Think about why you are interested in assessing coalition capacity in the first place.  What is the purpose of the assessment? How is it intended to strengthen the coalition? Think critically about what information coalition members and other stakeholders will want and be able to use. A good starting point is identifying the coalition’s advocacy goals and strategies.

Step 2: Identify Specific Capacities

Given your coalition’s goals and strategies, which capacities are likely to be important? Examples of coalition capacities include coalition leadership, ability to cultivate champions, and sustainability. Your coalition’s capacity categories may be similar—or they may be very different. Identifying these capacities will help frame your coalition assessment tool.

Step 3: Involve Stakeholders in the Assessment and Vetting Process

Involving stakeholders who are knowledgeable about the context and the work of the coalition is critical to the overall vetting process of the tool.  Which individuals—internal and external to the coalition—have a valuable perspective? Who should be engaged throughout the assessment process to build buy-in and support?

Step 4: Share Assessment Results in a Variety of Formats

Make the data actionable. This involves reporting relevant information back to key stakeholders in a meaningful way. Figure out who needs which information to learn, adapt, and improve. Return the results in weeks rather than months or years.

Here are some examples of the kinds of charts we used to communicate coalition assessment results to various stakeholders (charts not reproduced here): one surfacing differences of opinion between technical assistance providers and coalition members; one comparing responses across coalitions, with each column showing a coalition’s average score across capacities; and a deeper dive showing differences in how capacities are scored within a single coalition.

Rad Resource: Want to learn more about coalition assessment? Look no further! Check out our hot-off-the-press white paper, Coalition Assessment: Approaches for Measuring Capacity and Impact.

Rad Resource: Jared Raynor’s What Makes an Effective Coalition will give you more ideas about important coalition capacities.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Related posts:

  1. Fran Butterfoss on Evaluating Community Coalitions
  2. Shar McLean and Esteban Colon on Tools for Organizational Assessment
  3. Stacy Carruth on using Wordle to Understand Substance Abuse

Molly Hamm on Translation in Cross-Cultural Research

American Evaluation Association 365 Blog - Thu, 02/20/2014 - 01:15

Greetings! I’m Molly Hamm, the Monitoring, Evaluation and Learning Coordinator at The DREAM Project, an educational non-profit organization in the Dominican Republic.

Working in a multilingual environment, I must continuously switch back and forth between languages as I complete my daily tasks. From developing evaluation plans and designing research instruments to facilitating focus groups and presenting results, I am constantly employing either English or Spanish based on the needs of specific audiences. This often doubles the work on any one project, as most documents need to be produced in both English and Spanish. Additionally, translating information into multiple languages can present significant challenges when it comes to validity, reliability, accuracy, and comprehension. This post focuses on challenges related to written translations for instrument design.

Lessons Learned: Because evaluators painstakingly select wording when they are designing instruments, it can be easy to fall into the trap of trying to achieve word-for-word translations. However, it’s most important to focus on translating meaning. Beyond the fact that some ideas simply have no direct translation between languages, you want to make certain that your tools are measuring the same constructs across translations. Wording may need to be adapted significantly to elicit the desired responses from participants.

Hot Tip: Use back translation. Once you have an initial draft in the original (source) language, translate into the target language.  Clean up the target language translation, and then “retranslate” into the source language. This process enables you to see how well the meaning is retained through translation. If the back translation results in a question that is measuring something different than originally intended, continue the process until satisfied with the results.

Hot Tip: Be sure to pilot translated instruments, as these should be validated in both the source and target languages. Watch for variation in the target language across countries. Due to significant regional differences in vocabulary and even grammar, having an instrument successfully translated into a language such as Spanish for use in one country does not mean it will be well understood in another country. Always adapt your translations as necessary in a new cultural context, even when using the same language.

Resources: Check out translation tips from the University of Michigan’s Cross Cultural Survey Guidelines, Duke University’s Tip Sheet on Cross-Cultural Surveys, and University of California-San Francisco’s Annotated Bibliography for Translating Surveys in Cross-Cultural Research.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Related posts:

  1. Susan Kistler on Going Global With aea365
  2. LAWG Week: Jeff Williams on Pre-Pilot Language Testing
  3. Susan Kistler on the AEA/BetterEvaluation Webinar Series and Making the Most of YouTube

Audrey Rorrer on Google Tools for Multi-site Evaluation

American Evaluation Association 365 Blog - Wed, 02/19/2014 - 01:15

Hi, I’m Audrey Rorrer and I’m an evaluator for the Center for Education Innovation in the College of Computing and Informatics at the University of North Carolina at Charlotte, where several projects I evaluate operate at multiple locations across the country.  Multisite evaluations are loaded with challenges, such as data collection integrity, evaluation training for local project leaders, and the cost of resources. My go-to resource has become Google because it’s cost-effective both in terms of efficiency and budget (it’s free). I’ve used it as a data collection tool and resource dissemination tool.

Lessons Learned:

Data Collection and Storage:

  • Google Forms works like a survey reporting tool with a spreadsheet of data behind it, for ease in collecting and analyzing project information (see the sketch after this list).
  • Google Forms can be sent as an email so that the recipients can respond to items directly within the email.
  • Google documents, spreadsheets, and forms can be shared with any collaborators, whether or not they have a Gmail account.
  • Google Drive is a convenient storage source in ‘the cloud.’
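
Because Google Forms responses land in a spreadsheet, you can pull them straight into your analysis environment. Here is a minimal sketch using Python and the gspread library, assuming a service account has been granted access to the response sheet; the file and sheet names are invented for illustration:

    import gspread
    import pandas as pd

    # Authenticate with a (hypothetical) service-account key file.
    gc = gspread.service_account(filename="service_account.json")

    # Open the spreadsheet that backs the Google Form and read all rows.
    worksheet = gc.open("Site Survey (Responses)").sheet1
    records = worksheet.get_all_records()  # list of dicts, one per response

    # Load into a DataFrame for cleaning and analysis across sites.
    responses = pd.DataFrame(records)
    print(responses.head())

This keeps multisite data collection centralized in one sheet while letting each analyst work with a local copy.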

Resource Dissemination:

  • Google Sites provides easy-to-use website templates that enable fast website building for people without web development skills.
  • Google Groups is a way to create membership wikis, for project management and online collaboration.

Rad Resource: Go to www.google.com and search for products. Then scroll down the page to check out the business & office options, and the social options.

For a demonstration of how I’ve used google tools in multisite evaluation, join the AEA Coffee Break Webinar on February 27, “Doing it virtually: Online tools for managing multisite evaluation.” You can register here.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 

Related posts:

  1. Dreolin Fleischer on Organizing Quantitative and Qualitative Data
  2. Summer Evaluation Institute Attendees on Low-cost No-cost Tools for Evaluators Part IV
  3. Katya Petrochenkov on Surveygizmo

Call for Papers: Large Systems Change, Transformations and Transitions: An emerging field

Networking Action - Tue, 02/18/2014 - 07:51
Call For Papers: The Journal of Corporate Citizenship Special Issue:  Large Systems Change, Transformations and Transitions: An emerging field

NOTE:  THERE IS A VERY TIGHT PRODUCTION SCHEDULE.

The Editorial Team

This issue is being developed by the GOLDEN Ecosystems Lab

Alexey Kuzmin on Locating Evaluation Consultants Outside Your Home Country

American Evaluation Association 365 Blog - Tue, 02/18/2014 - 01:15

My name is Alexey Kuzmin and I am the President of the Process Consulting Company based in Moscow, Russia.

Lesson Learned: I often work internationally and believe that the best (if not the only) way to conduct a quality evaluation outside your home country is to involve local (national) consultants. Local specialists have an in-depth understanding of the local situation (political context, socio-economic environment, culture) and can help make the evaluation most relevant in terms of overall design, methods and tools, and data analysis. They speak the local language and can interview any informant without an interpreter, which is a huge benefit in terms of data quality. Finally, they better understand how things work in their country and can assist with the evaluation logistics.

The question is: how do you find a local consultant in a country you have not been to before?

Rad Resources: I would propose two resources that may help:

  1. The International Organization for Cooperation in Evaluation (IOCE) website has links to dozens of evaluation associations around the world. The website has an interactive map that makes it easy to find out if there is an association in the country you are going to. You may contact local evaluators through their respective national/regional associations.

  2. XCeval is a listserv for persons interested in issues associated with international and cross-cultural evaluation. It was initially set up for the International and Cross-Cultural Topical Interest Group of the American Evaluation Association. Many of the postings are announcements of short-term consultancies or full-time positions in international M&E-related jobs. It also features exchanges of ideas of current interest to persons involved in the evaluation of international development.


Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 

Related posts:

  1. Linda Cabral on Using Cultural Brokers on Evaluation Teams
  2. Susan Kistler on EvalPartners, Free eLearning in Development Evaluation, and Mapping Evaluation Associations Around the World
  3. Dianne Hofner Saphiere on Interculturally Competent Evaluation

WINGSForum: On networks and cultivating “more leaderful change agents”

Networking Action - Mon, 02/17/2014 - 08:31

WINGSForum 2014 will be the premier event for connecting with a global network of grantmakers, community foundation support organizations, and leaders in the sphere of philanthropy from across the globe.  I am honored to be keynote speaker and leading a …

Monika Mitra and Lauren Smith on Conducting a Health Needs Assessment of People with Disabilities

American Evaluation Association 365 Blog - Mon, 02/17/2014 - 01:15

Hi, we are Monika Mitra and Lauren Smith from the Disability, Health, and Employment Policy unit in the Center for Health Policy and Research at the University of Massachusetts Medical School.  Our research is focused on health disparities between people with and without disabilities.

Evaluating a Population of People with Disabilities

In collaboration with the Health and Disability Program (HDP) at the Massachusetts Department of Public Health (MDPH), we conducted a health needs assessment of people with disabilities in Massachusetts.  The needs assessment helped us better understand the unmet public health needs and priorities of people with disabilities living in MA.  We learned a tremendous amount in doing this assessment and wanted to share our many lessons learned with the AEA365 readership!

Lessons Learned:

  • 3-Pronged approach

Think about your population and how you can reach people who might be missed by more traditional methodologies: In order to reach people with disabilities who may not be included in existing health surveys, we used two other approaches to complement data from the MA Behavioral Risk Factor Surveillance System (BRFSS). These included an anonymous online survey on the health needs of MA residents with disabilities and interviews with selected members of the MA disability community.

  • Leveraging Partnerships

Think about alternative ways to reach your intended population: For the online survey, we decided on a snowball sampling method. This method consists of identifying potential respondents who in turn identify other respondents; it is particularly useful for populations that are difficult to reach and may generally be excluded from traditional surveys, though it can affect the generalizability of findings. HDP’s Health and Disability Partnership provided a network to spread the survey to people with disabilities, caregivers, advocates, service providers, and friends/family of people with disabilities.

  • Accessibility is Key

Focus on accessibility:  In an effort to increase the accessibility of the survey, Jill Hatcher from DEAF, Inc. developed a captioned vlog (a type of video blog) to inform the Deaf, DeafBlind, Hard of Hearing, and Late-Deafened community about the survey.  In the vlog, she mentioned that anyone could call DEAF, Inc. through videophone if they wanted an English-to-ASL translation of the survey.  Individuals could also respond to the survey via telephone.

Rad Resources:

  • Disability and Health Data System (DHDS)

DHDS is an online tool developed by the CDC providing access to state-level health data about people with disabilities.

  • Health Needs Assessment of People with Disabilities Living in MA, 2013

To access the results of the above-mentioned needs assessment, please contact the Health and Disability Program at MDPH.

  • A Profile of Health Among Massachusetts Residents, 2011

This report published by the MDPH contains information on the health of people with disabilities in Massachusetts.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 

Related posts:

  1. DOVP Week: Jennifer Sullivan Sulewski on Data Sources for Tracking Outcomes of People with and without Disabilities
  2. DOVP Week: Mary Moriarty on Planning and Implementing Disability-based Evaluations
  3. DOVP Week: JS Sulewski on Using Universal Design to Make Evaluations Inclusive

Caren Oberg on Using Tablets for Data Collection

American Evaluation Association 365 Blog - Sun, 02/16/2014 - 08:06

My name is Caren Oberg and I am the Principal and Owner of Oberg Research. I am a proud late adopter. Proof? I still use a paper calendar and have Moleskine notebooks dating back years. But I have joyfully embraced tablet applications for data collection. The applications below, not to mention many others, have made the process cheaper, greener, less prone to human error, and more innovative.

Rad Resources: All resources below work on iPads and Android tablets, except StoryKit, which is iPad only.

TrackNTime is designed for tracking participant interactions or behaviors in a learning environment.

QuickTap Survey is a survey platform designed specifically for tablets. It is easy to read, pretty to look at, and you can collect data without an internet connection.

Sticky Notes come pre-installed on most tablets. Participants can move sticky notes around the screen, grouping and regrouping, based on questions you ask.

StoryKit allows your participants to recreate their experiences through images and text by using an electronic storybook.

Hot Tips: Consider the type of data you are trying to collect. The majority of tablet apps I have come across can do one type of data collection extremely well, but are not yet built for multi-method data collection. That said, you can easily switch back and forth between two applications and link the data manually by assigning a single id number to both.
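
As one illustration of that manual linking step, here is a minimal Python sketch that joins exports from two hypothetical apps on a shared id; the file and column names are invented, not from any particular app:

    import pandas as pd

    # Hypothetical CSV exports from two different data-collection apps.
    observations = pd.read_csv("observation_export.csv")  # behavior tracking
    survey = pd.read_csv("survey_export.csv")             # survey responses

    # Join on the shared participant id so each person's records
    # from both tools line up in one table.
    merged = observations.merge(survey, on="participant_id", how="outer")
    merged.to_csv("combined_data.csv", index=False)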

Apps eliminate data entry. They do not eliminate data cleaning, nor do they do advanced analyses. Yet.

Lessons Learned: The number of applications developed specifically for evaluators is small. Learning to manipulate applications to fit my needs has been very important, as has letting go of an app when it is just not going to work for me. Knowledge sharing is also important: I was made aware of QuickTap Survey and StoryKit by my colleague Jennifer Borland of Rockman et al, who in turn learned about StoryKit at Evaluation 2013.

In that vein, I will be talking about all four resources in an AEA Coffee Break webinar on February 20, 2014. Hope you can join.


Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 

Related posts:

  1. Dawn Henderson on Consider Telling a Story
  2. MNEA Week: Joseph Curiel and Kirsten Rewey on Making the Most of EVAL2013
  3. Cindy Tananis and Cara Ciminillo on Round Robins

Dan McDonnell on Keyboard Shortcuts and Other Advanced Twitter Features

American Evaluation Association 365 Blog - Sat, 02/15/2014 - 11:05

My name is Dan McDonnell and I am a Community Manager at the American Evaluation Association (AEA). For today’s Saturday post, we’ll be covering a few advanced Twitter tips for evaluators: features that fly under the radar enough to be considered ‘secret!’

Rad Resource: Hidden Keyboard Shortcuts

Find yourself  without a computer mouse or working with an unreliable laptop trackpad? As it happens, you can enjoy the full functionality of Twitter using only keyboard shortcuts. Here is a short list of commands to get you started:

Tweeting and Other Actions:

  • Create a New Tweet: n
  • Favorite: f (this can only be done if you open up the tweet in a new page)
  • Reply: r (see above)
  • Retweet: t (see above)

Navigate Your Feeds:

  • Next Tweet: j
  • Previous Tweets: k
  • Open Tweet in a New Page: ENTER
  • Homepage: g h
  • Your Profile: g p
  • View Mentions: g r

To quickly view all available keyboard shortcuts, press the ? key.

Rad Resource: Advanced Search

Now for a feature that you may have noticed but never taken advantage of: advanced search. This tool comes in extremely handy if you are looking to segment tweets by region for evaluation purposes, or to create search parameters that help you drill down to fit a specific profile. On the main search screen, you’ll notice an option on the left sidebar labelled ‘Advanced Search’: click it. The new screen gives you the option to conduct a search using Boolean queries, location details, tweet sentiments, and related accounts. Once you click the search button, your results are displayed. You can even save the search by clicking the ‘Save’ button in the upper right-hand corner of the search feed. Access your saved search queries from the dropdown menu in the top search bar.

Hot Tip: Embed Your Tweet on Your Website

Looking for an easy way to display a tweet on your blog or website? This feature lets you drop a short line of code into the HTML of your page to display any tweet just as it appears on the Twitter client. Simply find the tweet you want, expand it, and click ‘Details’ (if you don’t see this, click ‘More’ to drop down the menu of options). You’ll see the option ‘Embed this tweet’ pop up, which you can click. Copy the code with CTRL + C, and paste it (CTRL + V) into the HTML of your site or blog post where you’d like the tweet to appear.

Have any neat Twitter hacks or work-arounds of your own? Share your own tips and tricks by leaving a comment on this post or tweeting us @aeaweb.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Dan McDonnell on Google +

Related posts:

  1. Shortcut Week: Sarah Mann-Voss on Increasing Screen Readability
  2. Shortcut Week: Ana Drake on Macros and Hotkeys
  3. Shortcut Week: Sharon Rodriguez on Shortcuts for Inserting Diacritics

STEM TIG Week: Sam Held on Federal Data Sharing Policies

American Evaluation Association 365 Blog - Fri, 02/14/2014 - 01:15

My name is Sam Held and I am the Data Manager for Science Education Programs at ORAU— Oak Ridge Associated Universities. We are involved with science education and STEM workforce development from K-12 through postgraduate fellowships. I am involved with evaluations done internally (programs we manage) and externally in addition to all data reporting needs.

A recent trend in the STEM fields is the call to share or provide access to research data, especially data collected with federal funding. The result is that federal agencies now require data management plans in grant proposals, but the requirements differ by agency. NSF requires a plan for every grant, while NIH only requires plans for grants over $500,000.

The common theme in all policies is that “data should be made as widely and freely available as possible while safeguarding the privacy of participants, and protecting confidential and proprietary data” (NIH’s Statement on Sharing Data, 2/26/2003). The call for a data sharing plan forces the PIs, evaluators, and those involved with the proposals to consider what data will be collected, how it will be stored and preserved, and what the procedures will be for sharing or distributing the data within privacy or legal requirements (i.e., HIPAA or IRB requirements). To me, the most important feature here is data formatting: what format will the data be in now that will still be accessible or usable in the future, or to those who cannot afford expensive software?
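
One common answer to that formatting question is to convert proprietary files to plain-text formats. Here is a minimal sketch, assuming Python with the pyreadstat library and an invented file name, that exports an SPSS file to CSV plus a plain-text codebook so the variable labels are not lost:

    import pyreadstat

    # Hypothetical SPSS file; pyreadstat returns the data plus metadata.
    df, meta = pyreadstat.read_sav("program_survey.sav")

    # CSV keeps the data readable without proprietary software, but
    # drops the variable labels, so write those to a codebook file too.
    with open("program_survey_codebook.txt", "w") as f:
        for name, label in zip(meta.column_names, meta.column_labels):
            f.write(f"{name}: {label or ''}\n")

    df.to_csv("program_survey.csv", index=False)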

Rad Resource: DMPTool – a website from the University of California system for developing Data Management Plans. The best component of this site is their collection of funder requirements, including those for NIH, NSF, NEH, and some private foundations.  This site includes templates for the plans.

Rad Resource: Your local university – many universities have Offices of Research which have templates for these plans as well. For example, see:

http://scholcomm.columbia.edu/data-management/data-management-plan-templates/


Sam Held is a leader in the newly formed STEM Education and Training TIG. Check out our TIG Website for more resources and information.

The American Evaluation Association is celebrating STEM Education and Training TIG Week with our colleagues in the STEM Education and Training Topical Interest Group. The contributions all this week to aea365 come from our STEM TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Related posts:

  1. STEM Week: Kim Kelly on Responding to National Curriculum and Policy Initiatives in STEM Education Evaluation
  2. STEM Week: Jack Mills on STEM Education Evaluation in Higher Education
  3. STEM TIG Week: Susan Eriksson on the Education of Stakeholders

STEM TIG Week: Susan Eriksson on the Education of Stakeholders

American Evaluation Association 365 Blog - Thu, 02/13/2014 - 01:15

Hi, I’m Susan Eriksson, a geologist and science educator reformed as an evaluator for science-related programs.  I write from my experience as a scientist turned evaluator with many years of working with evaluators, doing my own internal evaluation, and now doing evaluation for others.

Lesson Learned: It seems that many people really ‘don’t get’ evaluation.  “Why do we need this?”  “Evaluators just make work for themselves.” “You put WHAT in the budget!” Grants administrators, financial people, boards and advisory committees, heads of organizations, and STEM Principal Investigators commonly ask why evaluation is important and why it costs so much.

As an independent evaluator, I am still educating people about what evaluation is. One of the more interesting comments I’ve heard recently was from a program officer in an unnamed federal agency: “Susan, why would anyone hire YOU? Evaluators are social science researchers!” Although being a reformed scientist/educator does not necessarily qualify one as an evaluator, many people equate social science research with evaluation.

Evaluation is deemed increasingly important by our government – knowledge generation that is faster and supported by evidence! People giving out the grants want to do the ‘right thing’, but many admit they don’t know what good evaluation looks like. In addition, many grant proposal reviewers are inexperienced in evaluation. I just sat on a review panel in which relatively inexperienced science faculty spoke highly of proposals that mentioned the phrase ‘external evaluator’. At Evaluation 2013, an NSF officer told us to always include a logic model because reviewers are just beginning to understand those.

We have a long way to go for people to understand the breadth and depth of good evaluation.

Hot Tip: Continue to use any opportunity to educate your clients, your peers, your friendly grant administrator about what evaluation is, what good evaluation looks like, and why evaluation is important in helping people ask the right questions and get significant answers.

Rad Resources: Three websites are great for our colleagues and clients who need a boost in evaluation:

  1. Better Evaluation is an international collaboration to improve evaluation practice and theory by sharing information about options and approaches.
  2. The National Science Foundation’s well-used 2002 User-Friendly Handbook for Project Evaluation
  3. And a tip from my colleague Ayesha Tillman, writing in this same STEM evaluation blog series: “Read and become familiar with AEA’s Guiding Principles For Evaluators.”


 

Susan Eriksson is a leader in the newly formed STEM Education and Training TIG. Check out our TIG Website for more resources and information.

The American Evaluation Association is celebrating STEM Education and Training TIG Week with our colleagues in the STEM Education and Training Topical Interest Group. The contributions all this week to aea365 come from our STEM TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 

Related posts:

  1. STEM TIG Week: Susan Eriksson on Scientists Becoming Evaluators
  2. Hui-Hui Wang on Assessing STEM Education
  3. STEM Week: Jack Mills on STEM Education Evaluation in Higher Education

A New Look and New Offerings

The Center for Effective Philanthropy Blog - Wed, 02/12/2014 - 08:15

In the coming days, CEP’s new website will be live (and it may even be by the time you read this), along with a new logo, brand identity, and tag line: Improving foundation performance through data + insight. The launch of this new site comes on the heels of a year-long planning process that concluded in December 2013 with the Board of Directors’ approval of a new strategic plan for CEP, laying out seven directions for 2014-2015.

Among those directions is that CEP has begun to offer advising services to funders in order to better help them improve their effectiveness. Over the past 12 years, we have focused on three activities to help foundations become more effective, according to our working definition of effectiveness: research, assessment tools, and programming. We have received many requests to work on more customized projects for foundation clients, or to do follow-on work to help foundations move from our assessment tool data to changes in practice, but have generally demurred.

We have now begun to take on some advising engagements, very selectively, in cases where we think our skill sets, knowledge, and analytic and data analysis skills can be of use to foundation clients. For example, we are working with a large corporate foundation on a tailored benchmarking project focused on the application review process, and for a large private foundation on a series of facilitated staff and board conversations about the potential challenges and paths forward to becoming a more strategic foundation – tapping into our research on the unique challenges of foundation strategy. We recently conducted a training for all new program staff of a large foundation based on the fundamentals of strong funder-grantee relationships laid out in our guide, Working Well with Grantees.

In these efforts and the others we will take on, we are committed to avoiding the traps that have befallen too many consultants in the corporate and philanthropic sectors. We will look for opportunities to address substantive programmatic and operational questions in which the kind of data and rigorous analysis that CEP is known for can lead to clear action. From our work, particularly with donor and grantee surveys, we understand the importance of context, and we’ll seek to avoid overgeneralizing from past advisory engagements. We firmly believe that there’s no one-size-fits-all answer to most of the questions asked by funders. We’ll take on only those engagements where we think the needs of foundations are aligned with what we can offer. For example, while we know that we’d like to take on projects about how funders operationalize their goals and strategies, we don’t think we’re best positioned to be the consultants designing strategy. And while we believe that our experience seeking feedback from community and field stakeholders positions us well to do the type of environmental scans that many funders commission, we don’t plan on taking on pure evaluations. We will rigorously assess our own performance, and whether we are making the difference we seek.

This is just one of seven directions laid out in the Plan, and I’ll share information about the others in the weeks and months ahead. Once the site is live, I hope you’ll explore it and give us your feedback. We have sought to make our work as accessible and clear as possible. I hope you’ll download our research reports, learn about our assessment tools, watch videos of our past conferences, and explore our upcoming programming and events.

I want to thank my colleagues who have worked so hard on this effort: Sara Dubois, our senior graphic designer, who designed our logo and everything about our new site; Emily Giudice, our communications coordinator, who contributed to much of the writing and editing that went into this effort; and Grace Chiang Nicolette, manager on the assessment tools team and, since October, interim director of marketing and programming. As we just announced, Grace will assume the full time role of director of marketing and programming when she returns from maternity leave in June and I am thrilled to have her in that role. Great job – and thanks!

Phil Buchanan is President of CEP and a regular columnist for The Chronicle of Philanthropy. You can find him on Twitter @philCEP.

STEM TIG Week: Kim Kelly on Key Insights on the Journey from Psychological Science Researcher to Program Evaluator

American Evaluation Association 365 Blog - Wed, 02/12/2014 - 01:15

I’m Kim Kelly, PhD, from the Psychology Department at the University of Southern California, where I teach courses in statistics and research methods. I have been involved in the evaluation of STEM curriculum and professional development programs since 2002. I have been reflecting on the career path that led me from basic research in psychological science to working as an independent program evaluator of STEM education initiatives. I offer two insights that have been instrumental in my own professional journey from researcher to evaluator.

Rad Resources: Social scientists in particular struggle with the distinction between research and evaluation. To be honest, I still struggle with this distinction, and there are many varieties of opinion on the matter. It’s worth the time to consider published ideas, not to end the debate, but to weigh the goals and methods of research and evaluation in order to appreciate the practical and intellectual differences between the pursuit of generalizable knowledge in research and the program-specific feedback needed in most program evaluations. Gene Glass wrote about this back in 1971 in Curriculum Theory Network, and the subject regularly appears in books and journals. See more recent comments in Jane Davidson’s editorial in the 2007 Journal of Multidisciplinary Evaluation and by Miri Levin-Rozalis in the 2005 Canadian Journal of Evaluation. Reflecting on this key distinction has enabled me to appropriately refine my deep knowledge of the goals and methods of psychological science research to become a more effective program evaluator.

Cool Trick:  It may seem like a no-brainer to suggest establishing a good relationship with those we evaluate or evaluate for. The training of researchers often emphasizes a detached, objective approach to interaction with participants. Further, participants are typically cooperative as they have often volunteered to participate. When I first began program evaluation, I failed to appreciate the interpersonal dynamics associated with evaluations—the perceptions of threat often experienced by participants and clients, the reality of unwilling participants and investigators, and the barriers this lack of trust posed to obtaining valid data. In my work with programs, I emphasize rapport building on both social and programmatic levels to build trust. Rapport building at a programmatic level includes looking for ways to make evaluation data more useful and utilized as part of program development. For example, I shared results of content knowledge assessments with teachers in a metacognitive reflection activity. Being both a familiar and friendly face maximizes the likelihood that you will get the access and cooperation you need to do an effective program evaluation.

Kim Kelly is a leader in the newly formed STEM Education and Training TIG. Check out our TIG Website for more resources and information.

The American Evaluation Association is celebrating STEM Education and Training TIG Week with our colleagues in the STEM Education and Training Topical Interest Group. The contributions all this week to aea365 come from our STEM TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Related posts:

  1. STEM Week: Alyssa Na’im on Using Culturally and Contextually Responsive Practices in STEM Education Evaluation
  2. STEM TIG Week: Susan Eriksson on Scientists Becoming Evaluators
  3. STEM Week: Disa Cornish on Statewide STEM Initiative Evaluations

STEM TIG Week: Susan Eriksson on Scientists Becoming Evaluators

American Evaluation Association 365 Blog - Tue, 02/11/2014 - 01:15

Hi, I’m Susan Eriksson, a geologist and science educator reformed as an evaluator for science-related programs.  I’ve worked in industry, academia and the non-profit sector and have worked with many evaluators and served on many grant review panels.  My tips for today might help you decide if you need to partner with an evaluator who comes from a STEM field.

Lessons learned: Scientists have the content knowledge to delve into the important questions needed for significant evaluation. They have insights into the scientific community’s expectations.

Scientists have the language and culture to work with STEM people. A great deal of ‘eye-rolling’ goes on when a scientist hears the word ‘evaluator’. Recently a scientist kept saying, “but that’s not proof…” I calmly replied, “we are looking for measurable results that support multiple and independent lines of evidence that point to significant impact.” He understood the ‘multiple and independent lines of evidence’ because that is the way science works.

Scientists working as evaluators have the ‘trust’ because we have been ‘one of them’. Scientists inherently don’t trust social scientists!  I could add an emoticon with a smiley face here, but it is generally true!  I know, I have been one of those ‘eye-rolling’ natural scientists.

The other side of the coin: Without additional training, scientists don’t have the skills to do evaluation. As a proposal reviewer, I see random people put on grants as “external evaluators”. Recently a proposal listed a mathematics professor as the external evaluator. No credentials, no experience in evaluation… no good rating from me! Good evaluation has standards, ways of thinking, and research methodologies that are normally not part of a person’s scientific training.

Scientists can, however, learn the profession of evaluation, gaining the right knowledge, expertise, and experience. Certificate programs, short courses, webinars, and working with experienced evaluators give many scientists evaluation skills which, combined with their content knowledge, trust, and insights, can make them very effective evaluators.

Hot Tip: For evaluators working with scientists, go beyond measurement of generic outcomes to find out what your client’s expectations are. Craft your language about evidence and methodology in terms that are valid to a STEM audience. Partner with or talk to a scientist/evaluator to gain insights into the rich world of STEM.

Susan Eriksson is a leader in the newly formed STEM Education and Training TIG. Check out our TIG Website for more resources and information.

The American Evaluation Association is celebrating STEM Education and Training TIG Week with our colleagues in the STEM Education and Training Topical Interest Group. The contributions all this week to aea365 come from our STEM TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Related posts:

  1. STEM TIG Week: Martin, Barnes, Chambers and Pippin on Internal STEM Evaluation from a Stakeholder Perspective at NASA
  2. STEM Week: Jack Mills on STEM Education Evaluation in Higher Education
  3. Hui-Hui Wang on Assessing STEM Education

STEM TIG Week: Martin, Barnes, Chambers and Pippin on Internal STEM Evaluation from a Stakeholder Perspective at NASA

American Evaluation Association 365 Blog - Mon, 02/10/2014 - 01:15

Hello from NASA Langley Research Center! We represent NASA Innovations in Climate Education (NICE), which provides funding for external institutions to carry out (and evaluate!) climate education projects. We’re Ann Martin (a postdoctoral fellow working on NICE evaluation internally, at the portfolio level), Monica Barnes (NICE project manager), Lin Chambers (project scientist), and Margaret Pippin (deputy project scientist).

The NICE team has the benefit of an embedded, internal evaluator, and today we’ll be sharing some of the lessons we’ve learned from the experience.

Here at NASA, we’re well aware that robust evaluation is a necessity for determining the true impact of a project, and that evaluation can also help shape a project and its strategy. An internal evaluator is somewhat rare within our context. How has NICE benefited from having an internal evaluator – and how could other federal STEM education initiatives benefit?

Lesson Learned: As a team, we’ve found that it is very helpful to have a go-to person who is focused on evidence and data.  Programmatic issues, reporting requirements, and other short turn-around requests are always on the management team’s radar, which doesn’t leave a lot of time for monitoring and evaluation. Having an internal evaluator has also given NICE the opportunity to build a community among the project-level evaluators within our portfolio. The evaluation of climate change education is a relatively new, and rapidly changing, field. We see community building as a key part of what an internal evaluator can do.

Hot Tip: An internal evaluator can also be helpful in a more informal way, providing a fairly independent and somewhat on-the-fly view of “how things are going.” While we may not formally evaluate basic programmatic elements like meetings and webinars, our conversations touch on ideas like, “What outcome are we hoping to achieve with this?”

Hot Tip: NASA is full of people who are used to thinking systematically and to applying data to answer questions, so evaluation fits right in. Working on a team with an internal evaluator is a great opportunity to learn about evaluation and its relationship to STEM’s ways of knowing. When Ann returned from Evaluation 2013, this year’s AEA meeting in Washington, DC, the team discussed her poster on meta-evaluation, and increased their evaluation literacy. In turn, Ann (who is also from a physical sciences background) has thought a lot about “translating” evaluation ideas into concepts that scientists and engineers recognize. She’s learned about how to reconcile evaluation “ideals” with NASA limitations and realities, leading to conversations about evaluation that more quickly get to a productive, valuable place.

Ann Martin is a leader in the newly formed STEM Education and Training TIG. Check out our TIG Website for more resources and information.

 The American Evaluation Association is celebrating STEM Education and Training TIG Week with our colleagues in the STEM Education and Training Topical Interest Group. The contributions all this week to aea365 come from our STEM TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Related posts:

  1. Climate Ed Eval Week: Ann Martin on Building a Community of Climate Education Evaluators
  2. STEM Week: Kim Kelly on Responding to National Curriculum and Policy Initiatives in STEM Education Evaluation
  3. Climate Ed Eval Week: Nicole Holthuis on Lessons Learned from Measuring Intermediary Outcomes

STEM TIG Week: Ayesha Tillman on Advice to New Evaluators

American Evaluation Association 365 Blog - Sun, 02/09/2014 - 05:57

Hello, I am Ayesha Tillman, a fifth-year Ph.D. candidate in Educational Psychology at the University of Illinois at Urbana-Champaign. Advice for new evaluators is something that my colleagues Holly Downs, Lorna Rivera, Maria Jimenez, Gabriela Juarez and I have been thinking and presenting about for quite some time now.

Lessons Learned:

  • Decide what type of evaluator you want to be. There are many types of evaluation positions: academic, internal, external, etc. There are even more types of evaluation: process, development, outcome, etc. Take some time to think about the evaluation setting you are most comfortable and interested in.
  • Find a good advisor or mentor. Whether you plan to train on the job or obtain a degree related to evaluation, don’t underestimate the value of a good mentor. In fact, the Graduate Student and New Evaluator TIG has just launched a mentorship program for new evaluators. Contact Rae Clementz for more information.
  • Get involved with AEA. AEA has over 50 topical interest groups (TIGs). Join one or three! Participating in TIGs is a great way to find those with similar interests and to develop professional contacts in evaluation.
  • Read and become familiar with AEA’s Guiding Principles For Evaluators. Systematic inquiry, competence, integrity/honesty, respect for people, and responsibility for the public welfare are the five guiding principles. If you don’t do any of the above, at the very least take a second to read this brief document.

Rad Resources: AEA has a number of resources for evaluators at every level within their career!

  • AEA offers an online eStudy program. These in-depth virtual professional development opportunities cost $40/$80 for student AEA members and $75/$150 for all other AEA members (depending on the number of contact hours).
  • Check out the AEA Public eLibrary. If you are interested in studying on your own, this is a great place to start. The eLibrary includes PowerPoints and papers from AEA conferences, all tagged to help the user easily search for their topic of interest.
  • Graduate and certificate programs. AEA lists universities that offer certificate and graduate programs in evaluation. The list includes the institution name, location, the department the program is housed in, and the names of AEA members on faculty.
  • Find an evaluation job. The AEA Career Center is a great place to start looking for jobs. You can search by keyword or by state. You can also subscribe and receive emails as new jobs are posted.

Ayesha Tillman is a leader in the newly formed STEM Education and Training TIG. Check out our TIG Website for more resources and information.

 


The American Evaluation Association is celebrating STEM Education and Training TIG Week with our colleagues in the STEM Education and Training Topical Interest Group. The contributions all this week to aea365 come from our STEM TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Related posts:

  1. Hui-Hui Wang on Assessing STEM Education
  2. STEM Week: Jack Mills on STEM Education Evaluation in Higher Education
  3. STEM Week: Kim Kelly on Responding to National Curriculum and Policy Initiatives in STEM Education Evaluation