American Evaluation Association 365 Blog

A Tip-a-Day by and for Evaluators

EPE TIG Week: Yvonne M. Watson on Buying Green Today to Save and Sustain Our Tomorrow


Hi everyone! I’m Yvonne M. Watson, an Evaluator in U.S. EPA’s Evaluation Support Division and Chair of AEA’s Environmental Program Evaluation Topical Interest Group. As we celebrate Earth Week in April and prepare for the annual American Evaluation Association (AEA) conference in October, the theme of sustainability looms large.

As I think about areas where organizations and individuals can make a significant difference in ensuring a sustainable future, consumer choice and green purchasing/procurement come to mind. The federal government’s role as the leading purchaser of green products is vital to ensuring a sustainable future. Equally important is the role that households and individuals play in this equation.

Lesson Learned: According to Fischer’s 2010 report, Green Procurement: Overview and Issues for Congress, federal government procurement accounts for $500 billion annually at the institutional level. Because of its size and purchasing power, the federal government’s influence on the market is broad—“affecting manufacturing (product planning and development), and purchasing (large institutions and States that mimic federal specifications) both nationally, and internationally.” Established in 1993, EPA’s Environmentally Preferable Purchasing (EPP) Program has two purposes: (1) achieve dramatic reductions in the environmental footprint of federal purchasing through the creation of guidelines, tools, recognition programs, environmental standards, and other incentives and requirements, and (2) make the overall consumer marketplace more sustainable through federal leadership. In 2011, the EPP program initiated an evaluation to examine changes in spending on green products across the federal government since 2001. The results indicate greater awareness of, and more positive attitudes towards, green procurement among the federal purchasers surveyed.

At the individual level, consumers not only vote with their feet – they vote with their purses and wallets too, through the purchase of food, cars, electronics, clothes and a host of other goods and services. In addition, the prominence of green and eco-labels is a prime example of the manufacturing industry’s response to greater demand from consumers who look for green products. During Earth Week, I encourage organizations, individuals and evaluators alike to take a step back and assess our individual and collective consumer purchasing decisions and their implications for a sustainable future. After all, the purchasing choices we make today affect the future we have tomorrow.

Rad Resources: EPA’s Greener Products website provides information for consumers, manufacturers and institutional purchasers related to green products.

The EPP Evaluation Report is available here.

The American Evaluation Association is celebrating Environmental Program Evaluation TIG Week with our colleagues in the Environmental Program Evaluation Topical Interest Group. All contributions this week to aea365 come from our EPE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Related posts:

  1. EPE Week: Annelise Carleton-Hug on Evaluating Your Own Environmental Impact
  2. David Erickson on Measuring the Social Impact of Investments
  3. Canadian Evaluation Conference – Going Green, Gold, and Global

EPE TIG Week: Andy Rowe on Cats and Climate Change

Tue, 04/22/2014 - 01:15

Andy Rowe here; I evaluate sustainable development and natural resource interventions. I am convinced evaluation is facing a key adapt-or-decline juncture.

Connectivity is the mechanism that enables us to understand how interventions reach the public interest and produce effects in the natural system. Our siloed governance approaches come from cost and accountability structures in the for-profit sector. For-profits recognize the importance of connections to the larger mission and judge performance accordingly; now the mission includes sustainability. Major corporations such as Mars and Walmart are acting decisively to ensure sustainable supply chains, which they judge essential to the survival of their businesses. We need to begin the process of incorporating sustainability into evaluation.

The story of how domesticated cats contribute to climate change illustrates how obscure but important these causal connections can be.

Lesson Learned: Domesticated cats living with humans, along with feral cats, are significant predators of songbirds, taking an estimated 40% annually. Infected birds pass the parasite Toxoplasma gondii to the cats that eat them. The parasite, harmless to the cat, departs in its stools, often in litter, which ends up in landfills. Landfills are often connected to the sea via groundwater and streams, so the parasites enter coastal waters, where bivalves ingest them. Sea otters love bivalves and ingest the parasite along with them; it attacks the otters’ brains. Poor otters.

Another system connects with our story. Fertilizer and waste from sewage treatment and other sources deliver nutrients to the sea, causing algal growth that weakens sea grasses. Otters counter the effects of this excessive nutrient loading, keeping the sea grasses alive. Sea grasses are amazingly effective at storing carbon: with the help of otters, Pacific sea grasses store the equivalent of the annual carbon dioxide emissions of 3 to 6 million cars.

So, cats contribute to climate change via mechanisms that are far from transparent. As evaluators we need to attend to the connections from the intervention to important effects, including effects in the natural system. By tracing connectivity within and across systems, evaluation can play an important role in ensuring that interventions are designed and undertaken so that the world we leave for our grandchildren is at least as good as the world we inherited. It is time that sustainability becomes an expected element in evaluation. Several years ago the National Academy of Sciences gave sustainability science a room of its own; it is time now for sustainability to become a required element in our Standards.

Lesson Learned: Take a look at sustainability in the for-profit sector: (1) Mars Corporation here and here, and (2) Walmart here.

Rad Resources:  Otters and weeds:

Also, see “Sustainability Science: A Room of Its Own” by William C. Clark (2007).


Related posts:

  1. EPE Week: Andy Rowe on Time for Evaluators to Get Ahead of the Curve
  2. EPE Week: Tracy Dyke-Redmond on Evaluating Adaptation Plans
  3. Climate Ed Eval Week: John Baek on Online Collection of Climate Knowledge Assessment Items

EPE TIG Week: Juha Uitto on Sustainability Evaluation and the Need to Keep an Eye on the Big Picture

Mon, 04/21/2014 - 01:15

Hi all! I’m Juha Uitto, Deputy Director of the Independent Evaluation Office of the United Nations Development Programme (UNDP). I’ve spent many years evaluating environment and development in international organizations, like UNDP and the Global Environment Facility (GEF).

As we all know, evaluating sustainability is not easy or simple. Sustainability as a concept and construct is complex. It is by definition multidimensional, encompassing environmental, social, cultural, political and economic dimensions. It cannot be evaluated from a single point of view or as just one dimension of a programme. Apart from the above considerations, sustainability also refers to whether the programme or intervention that is the evaluand is in itself sustainable. Sustainability evaluation must take all of the above into account.

At its simplest, sustainability evaluation would look into whether the intervention would ‘do no harm’ when it comes to the various environmental, social, cultural and other dimensions that may or may not be the main target of the programme. At this level, the evaluation does little more than ensure that safeguards are in place. The evaluation also has to look at whether the intervention itself was sustainable, i.e. whether it has developed exit strategies so that benefits will continue beyond the life of the intervention.

But this is not enough. It is essential for evaluations and evaluators to be concerned with whether the evaluand makes a positive difference and whether it has unintended consequences. In environment and development evaluation, a micro-macro paradox is recognized: evaluations show that many individual projects are performing well and achieving their stated goals, yet the overall trends are downward. There are lots of projects focused on protected areas and biodiversity conservation; still, we are facing one of the most severe species extinction crises ever. Many projects successfully address climate change mitigation in sectors ranging from industry to transportation to energy; still, global greenhouse gas emissions continue their rising trend. It is not enough for evaluators to focus on ascertaining that processes, activities, outputs and immediate outcomes are achieved.

Lessons learned: In evaluating environment and poverty linkages, one should never underestimate the silo effect. Sustainable development requires a holistic perspective but few organizations operate that way. People have their own responsibilities, priority areas, disciplinary perspectives, partners, networks, and accountabilities that often preclude taking a holistic perspective. Evaluators must rise above such divisions. An evaluation – such as the Evaluation of UNDP Contributions to Environmental Management for Poverty Reduction – can make a major contribution to how an organization acknowledges, encourages and rewards intersectoral and transdisciplinary cooperation.

Rad resource: All UNDP evaluation reports and management responses to them are available on a publicly accessible website, the Evaluation Resources Centre, and independent evaluations at Independent Evaluation Office of UNDP.



Related posts:

  1. Jindra Cekan on Furthering Community Self-sustainability of Our Projects
  2. EPE Week: Andy Rowe on Getting the Evaluand Right
  3. Canadian Evaluation Conference – Going Green, Gold, and Global

EPE TIG Week: Sara El Choufi on Thinking Globally: How to evaluate the effectiveness of aid to the environment on a global level

Sun, 04/20/2014 - 01:15

My name is Sara El Choufi and I want to share with you some of my thoughts on evaluating the effectiveness of aid to the environment.

As evaluators, we tend to focus our work on programs and projects. We thoroughly evaluate a project, or a set of projects, and draw out conclusions, best practices, lessons learned, etc. But I wonder if we ever take a step back and look at the bigger picture. I mean really take a step back and try to figure out what the world has achieved in terms of environmental protection over the past four decades.

Of course such a study is not an easy task to undertake; for starters, where do we get the data? How reliable is it? Assuming we do have remarkably detailed and reliable accounts, how can we generalize and draw conclusions? To what degree do we rely on quantitative studies, and how much thematic and qualitative work needs to be done?

Thinking about this led me to Greening Aid? – a book solely focused on foreign assistance and its impact on the environment. I also discovered what could be considered the most comprehensive database of foreign aid – AidData. Collecting data from the OECD, donors, and recipients, AidData “aimed to create a database of development finance activities with as much descriptive detail as possible at the project level for use in the research community.”

Ok, so we have data, now what? How does one begin to evaluate the impact of aid to the environment on the protection and conservation of the global commons, or forests for example? What about measuring to what degree aid has contributed to the reduction of CO2 emissions? What about marine ecosystems? The list goes on…
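One way such a study might begin is with simple descriptive aggregation before any causal claims. Here is a minimal, hypothetical Python sketch of that first step: summing environment-coded aid commitments by year. The field names (“sector”, “year”, “commitment_usd”) are invented for illustration; a real AidData or OECD extract has its own schema and purpose codes.

```python
from collections import defaultdict

def total_environment_aid_by_year(projects):
    """Sum aid commitments (USD) per year for projects coded as environmental.

    `projects` is a list of dicts, one per aid activity, with string fields
    as they might arrive from a CSV export (hypothetical column names).
    """
    totals = defaultdict(float)
    for p in projects:
        if p["sector"] == "environment":
            totals[int(p["year"])] += float(p["commitment_usd"])
    return dict(totals)

# Tiny illustrative sample, not real data.
sample = [
    {"year": "2005", "sector": "environment", "commitment_usd": "1200000"},
    {"year": "2005", "sector": "health", "commitment_usd": "900000"},
    {"year": "2006", "sector": "environment", "commitment_usd": "2500000"},
]
print(total_environment_aid_by_year(sample))  # {2005: 1200000.0, 2006: 2500000.0}
```

Of course, the hard part comes after this: joining such totals to outcome indicators, deflating currencies, and deciding what counts as “environmental” aid in the first place.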

Another layer is the question of which indicators to use. Are the World Development Indicators enough? Do we rely on locally assembled data (be it from government, research institutions, or civil society)? Do we need boots on the ground to do our own data collection? So on and so forth…

This seems like an impossible undertaking, or at least an impractical one. Should it be done? How can we as evaluators contribute to such a study?

This is meant as a thought piece, and I hope it compels you to respond and weigh in.

Rad Resources: AidData – “a research and innovation lab making information on development finance more accessible and actionable.”

Greening Aid?: Understanding the Environmental Impact of Development Assistance by R. L. Hicks et al. (2010).



Related posts:

  1. EPE Week: Annelise Carleton-Hug on Evaluating Your Own Environmental Impact
  2. EPE Week: Matt Keene, Alex Ortega-Argueta, Lieven De Smet, and Lisa Eriksson on The Environmental Evaluators Network in 2012
  3. Susan Kistler on Getting Engaged in Environmental Evaluation

Dan McDonnell on Upcoming Changes to Twitter

Sat, 04/19/2014 - 11:08

Hello, my name is Dan McDonnell and I am a Community Manager for the American Evaluation Association. My last couple of posts have been focused around evaluating Tweets and Twitter data as well as sharing useful Twitter tools for evaluators. This week, I’d like to talk about an upcoming change to Twitter to keep an eye on.

In the coming weeks, Twitter will be giving users’ profile pages a major facelift. A small group of Twitter users, including First Lady Michelle Obama, already have the new page enabled. Besides just looking sharp, the new and improved Twitter profile page adds a number of new features that can come in very handy for users.

Rad Resource: Show Off Your Top Tweets

One of the biggest features that Twitter is advertising is the new ability to prominently display a list of your top tweets. Imagine a fellow evaluation professional who is not currently following you sees you mentioned in a Tweet and clicks through to your profile to learn more. If they did this today, they’d see whatever your most recent tweet was – whether it was sharing a tip on #dataviz or checking in at a local eatery. Post-update, your most engaging tweets (whether retweeted, favorited or responded to) will feature prominently – which helps you put your best foot forward on Twitter.

The New Look and Feel

Rad Resource: Pin a Favorite Tweet

Have a tweet that you feel really represents YOU, or the reason that you’re on Twitter? On your new Twitter profile page, you can select your favorite tweet that you’ve sent and wear it like a badge of honor. Like the above feature, this lets you customize the first meeting that many Twitter users will have with you, helping you make an introduction on your terms. See how the band Weezer has taken advantage of this on their new page.

There are a couple more features that the new pages offer, as well as some changing guidelines for the design of your Twitter page – but we’ll save those for a future post. For now, start sifting through your Tweets to find your favorite, and keep your eyes peeled for the update!

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Dan McDonnell is a regular Saturday contributor to AEA365, where he blogs on social media-related topics for evaluators. You can reach Dan on Twitter at @Dan_McD.

Related posts:

  1. Dan McDonnell on Making New Friends and Mastering Lesser-Known Twitter Features Without Third Party Apps
  2. Dan McDonnell on Keyboard Shortcuts and Other Advanced Twitter Features
  3. Dan McDonnell on Twitter Etiquette and Data Archiving

David Fetterman on Google Glass Part II: Using Glass as an Evaluation Tool

Fri, 04/18/2014 - 01:52

I’m David Fetterman, evaluator, author, entrepreneur, and Google Glass user. Yesterday, we talked about what Google Glass is and how it can revolutionize communications. Today, let’s turn to thinking about how Glass could be used as an evaluation tool.

Hot Tips – Glass for Empowerment Evaluation: Youth (with parental permission) can wear Glass to produce photovoice productions, sharing pictures of their neighborhoods and videos of their activities. It’s easy (and fun) – that’s my son over on the right trying out Glass. Their stories can be used as part of their self-assessment, gaining insight into their lives and potentially transforming their worlds.

Community and staff members can post their digital photographs (and videos) on a common server or blog while conducting their self-assessment with the blink of an eye. This ensures community access, a sense of immediacy, and transparency.

Community and staff members can use Google Hangout on Glass to communicate with each other about their ratings, preliminary findings, and plans for the future.

Hot Tips – Glass for Traditional Evaluation: Evaluators can use it to communicate with colleagues on the fly, share data (including pictures and video) with team members, and conduct spontaneous videoconference team meetings. Note that not everyone needs to have Glass: Glass users can leverage its capabilities while connecting with others who are using smartphones or computers.

Glass date-stamps photos, videos, and correspondence, ensuring historical accuracy.

Glass can be used as an effective “ice breaker” to gain access to a new group.

Evaluators can also solicit feedback from colleagues about their performance, with brief videos of their data collection and reporting behavior. There is a precedent for this type of critique – assessments of student teaching videos.

Glass can be used to provide “on the fly” professional development with streaming video of onsite demonstrations for colleagues working remotely.

In addition, Glass can help maximize evaluators’ multi-tasking (when appropriate).

Lessons Learned – Caveats:

Take time to get to know people before disrupting their norm with this innovation.

Plan to use it over time to allow people to become accustomed to it and drop their company manners.

Respect people’s privacy. Ask for permission to record any behavior.

Do not use it in bathrooms, while driving, or in areas requiring additional sensitivity, e.g. bars, gang gatherings, and funerals.

In the short term, expect the shock factor, concerns about invasion of privacy, and a lot of attention. Over time, as the novelty wears off and they become more commonplace, Glass will be less obtrusive than a bag of digital cameras, laptops, and smartphones.

Rad Resources:


Related posts:

  1. David Fetterman on Google Glass Part I: Redefining Communications
  2. Kim Sabo Flores on Using Rubrics to Support Positive Youth Development
  3. Innovative #Eval Week: Cakici, Pleasants, and Kankane on Three…Two…One…Action! Evaluators Under the Spotlight

David Fetterman on Google Glass Part I: Redefining Communications

Thu, 04/17/2014 - 01:47

“Ok, glass.” That’s how you activate Google Glass. I’m David Fetterman and that’s me to the right wearing Google Glass. I’m an empowerment evaluation synergist and consultant, busy father and spouse, and owner of Fetterman & Associates.

Rad Resource – Google Glass: Google Glass is a voice and gesture activated pair of glasses that lets you connect with the world through the internet. You can take a picture, record a video, send a message, listen to music, or make a telephone or video call – all hands free.

Hot Tips – Redefining Communications: Google Glass is not just another expensive (currently about $1500) gadget. It can free us up to do what we do best – think, communicate, facilitate, and, in our case, assess. Here is a brief example.

I said “Ok, Glass,” then “make a call to Kimberly James.” She is a Planning and Evaluation Research Officer I am working with at the W.K. Kellogg Foundation.

Kimberly asked how the evaluation capacity building webinar was coming along. Via Glass, I took a screenshot and mailed it to her so we could discuss it. When a colleague was mentioned, with a few swipes of my finger on the frame I found a picture on the web and miraculously remembered who we were talking about.

Mid-conversation, Kimberly needed to step away briefly. While on hold, I sent a note to colleagues in Arkansas to ask them to check on the data collection for our tobacco prevention empowerment evaluation.

Kimberly returned to the call and we discussed a recent survey. With a simple request, the display of our results appeared, reminding me what the patterns look like.

Did I mention that I did all of these things while making lunch, picking up my son’s clothes off the floor, letting the dogs out, and emptying the dishwasher?

Later in the day, with a tap on the frame, I confirmed our scope of work with Linh Nguyen, the Vice President of Learning and Impact at the Foundation, while dropping my son off for piano lessons.

Later in the week I plan to use Google Hangout to videoconference with another colleague using Glass. When she connects during a project site visit, she will be able to take pictures and stream video of her walk around the facilities, bringing me closer to the “hum and buzz” of site activities.

Lessons Learned:

Respect people’s privacy – do not wear Google Glass where it is not wanted, will put people off, or will disrupt activities. Do not take pictures without permission. Remove it when you enter a bathroom.

Rad Resources

Hot Tip: Stay tuned for Part II tomorrow when I will cover using Google Glass as an evaluation tool.


Related posts:

  1. David Fetterman on Participation and Collaboration
  2. David Fetterman on Google
  3. CPE Week: David Fetterman on Empowerment Evaluation

Susan Kistler on Innovative Reporting Part II: Book Giveaway and #altreporting Videos

Wed, 04/16/2014 - 01:16

My name is Susan Kistler and I am on a crusade to expand our reporting horizons. Earlier this month, we looked at little chocolate reports. Today, let’s consider adding videos to your evaluation reporting toolbox.

Get Involved: But first, a little incentive for you to share your best alternative reporting ideas. And possibly get a reward for doing it. In the notes to this blog, or via twitter using the hashtag #altreporting, share either (a) your best unique evaluation reporting idea, or (b) a link to a great alternative evaluation report, and in either case note why you love it. I’ll randomly draw one winner from among the commenters/tweeters and send you a copy of “How to Shoot Video That Doesn’t Suck,” a book that can help anyone create video that isn’t embarrassing. Contribute as often as you like, but you will be entered only once in the random drawing on May 1.

Back to our programming. If you are reading this via a medium that does not allow you to view the embedded videos, such as most email, please click back through to the blog now by clicking on the title to the post.

Rad Resource – Unique Reporting Videos: Kate Tinworth, via a post on her always thought-provoking ExposeYourMuseum blog, recently shared three wonderful short video reports made by her audience insights team when she was working at the Denver Museum of Nature and Science. Each uses everyday objects to help visualize evaluation findings in an engaging way.

This video is my favorite of the three. It introduces the evaluators, reports demographics via a stacked bar chart built from jellybeans, and is at once professional and accessible.

Cool Trick: Kate’s team met museum volunteers and staff at the door with small bags of jellybeans that included a cryptic link to the report in order to get people to view the video.

Rad Resource – Unique Reporting Videos: This video from a team in Melbourne, Australia, shares findings from an evaluation of a primary school kitchen gardening program. It introduces the key stakeholders and deepens our understanding of the program without listing its components.

Rad Resource – Unique Reporting Videos: I wrote before on aea365 about getting this mock reporting video made for $5. I can still envision it embedded on an animal shelter’s website, noting how the shelter is using its evaluation findings. My favorite part is that it talks about evaluation use – how things at a small business are changing because of the evaluation.

Rad Resource: Visit the Alternative Reporting – Videos Pinterest Page I’m curating for TheSmarterOne.com for more reporting video examples and commentary.


Related posts:

  1. Video in #Eval Week: Kas Aruskevich on Telling the Story Through Video
  2. Courtney Heppner and Sarah Rand on Producing Online Evaluation Reports
  3. Susan Kistler on Learning From DVR Innovation

Ann Price on The (Evaluation) Road Less Traveled

Tue, 04/15/2014 - 01:40

Hello fellow evaluators! My name is Ann Price and I am President of Community Evaluation Solutions, near Atlanta, Georgia. A few weeks ago a friend and I spent the weekend in the Georgia mountains at the Hike Inn, a state park lodge accessible only via a five-mile “moderate” hike. There is no cell phone, no TV, no internet. It was nice to disconnect and reflect on life and work. This blog is about my reflections over the weekend as an external evaluation consultant.

My friend and I have found over the years that even though we work in different areas, our processes and our relationships with clients are quite similar. We both have a penchant for metaphor so we had fun over the weekend applying metaphors to our clients and our work.

The first thing we did was spend half an hour just trying to find the trailhead. I told my friend this was similar to programs not doing the groundwork for an evaluation (i.e., failing to design a program logic model or a strategic plan – or, in our case, having the map but not following it). When all else fails, read the directions….

The hike was a lovely, albeit up-and-down, trek. So the second thing we learned was something my son’s scout leader once said: “Everyone is on their own hike.” We reminded ourselves of that as folks of all ages passed us by (that was a bit discouraging). But the main point is to start on the path. Similarly, you may not have the biggest, most well-funded program, but it is important to start the evaluation journey or you will never “get there.” You do this by building your program’s organizational and evaluation capacity.

Tips and Tricks:
The hike was pretty steep at times, so we had to stop every once in a while and catch our breath. We kept ourselves motivated by setting goals (Let’s just make it to the next tree! Think benchmarks and indicators). Evaluation work is the same way. It’s important to take a break and look at your data. If you don’t, you might miss some pretty awesome sights (or findings). So stop every once in a while and see where you are. Is your program where it needs to be? If it is not, make an adjustment. And if you need help, here are a few great resources to guide you on your way.

Rad Resources:
Start with baby steps if you must. There are plenty of free resources out there to help you on your journey:


Related posts:

  1. Dan McDonnell on Twitter Etiquette and Data Archiving
  2. Stephanie Evergreen on Project Management Tools
  3. GEDI Week: D. Pearl Barnett on Cultural Responsiveness in a Representative Bureaucracy

Jordan Slice on How Being a Creator Informs Being an Evaluator

Mon, 04/14/2014 - 01:58

Greetings to my fellow #DataNerds! My name is Jordan Slice. I am a Research Specialist at Richland One, an urban school district in Columbia, South Carolina. In addition to being a full-time evaluator, I create handmade pieces for my Etsy shop, resliced.

As a handmade business owner, I find that many of the sales I make are custom orders. People really appreciate when something is tailored to meet their needs. The same is true for evaluation stakeholders: your results are much more likely to be appreciated (and used!) if they answer the questions your stakeholders need answered.

Lesson Learned: Whether I’m making a custom purse (that’s one of my bags to the right) or designing a program evaluation, clear communication is key. For example, if a customer sends me her grandfather’s favorite shirt and requests that I make her a purse using the fabric, it is imperative that we come to a clear agreement about the design of the purse before I start constructing. Similarly, when evaluating a program, it is imperative that you consult with the stakeholders before developing your evaluation if you expect the results to be utilized.

Hot Tip: Keep it simple. While you and I may love geek speak, flooding your stakeholders with evaluation jargon may impair their ability to understand your results. Whether you are talking with stakeholders, constructing a presentation, or writing a report, commit to the mantra that less is more. Once I have my summary in writing, I use a two-step revision process. First, I focus on organizing the content for better flow. Second, I put on my minimalist cap and cut out all the excess fluff (usually repetitive statements or unnecessary detail). Before finalizing any report, always ask a colleague (or a stakeholder, when appropriate) to proof it and provide feedback. I employ the same technique when I am building newsletters (Rad Resource: Mail Chimp – free & user-friendly!) or item listings on Etsy.

Rad Resource: Stephanie Evergreen has some really great posts (like this one!) on her blog with tips for creating better visualizations with your data.

Another Hot Tip: Allow yourself time to focus on something creative (even just a daydream) several times a week. This can give your mind the break it needs to process information and improve your focus. Pursue a new hobby or build on an existing interest. You may be surprised at how this new skill can help you grow as an evaluator.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Related posts:

  1. Paul Watkins on Itemized Report-Writing Template
  2. Judith Kallick Russell on Translating Findings Into Action
  3. Stephanie Evergreen on Project Management Tools

Kylie Hutchinson on The Ever Expanding Evaluator’s Toolbox

Sun, 04/13/2014 - 05:34

My name is Kylie Hutchinson.  I am an independent evaluation consultant with Community Solutions Planning & Evaluation.  In addition to evaluation consulting and capacity building, I tweet at @EvaluationMaven and co-host the monthly evaluation podcast, Adventures in Evaluation along with my colleague @JamesWCoyle.

When I started out in evaluation 26 years ago, I was focused on being a good methodologist and statistician.  After deciding to work primarily with NGOs I learned the importance of being a good program planner.  Employing a participatory approach required me to become a competent facilitator and consensus-builder.  These days, the increased emphasis on utilization and data visualization is forcing me to upgrade my skills in communications and graphic design.  New developments in mobile data collection are making me improve my technical skills.  A recent foray into development evaluation has taught me the important role that a knowledge manager plays in evaluation. Finally, we are starting to understand evaluation capacity development as a process rather than a product, so now I need expertise in organizational development, change management, and the behavioral sciences.  Whoa.

Don’t get me wrong, I’m not complaining. Every day I wake up and think how lucky I am to have picked such a diverse career as evaluation. But with all these responsibilities on my plate, my toolbox is starting to get full and sometimes keeps me awake at night. How can I manage to be effective at all of these things? Should I worry about being a Jack of all trades, Master of none?

Hot Tip:  You don’t have to do it all.  Determine your strengths and outsource your weaknesses. Pick several areas of specialization and ask for assistance with the others.  This help may come in the form of other colleagues or departments.  For example, if you think you need help with change management, sub-contract an organizational development consultant to your team.  If you work in an organization with a communications or graphic design department, don’t forget to call on their expertise when you need it.

Hot Tip:  Take baby steps.  If you want to practice more innovative reporting, don’t assume you have to become an expert in communication strategies overnight. Select one or two new skills you want to develop annually and pick away at those.

Hot Tip:  If you can, strategically select those evaluations that will expose you to a new desired area, e.g. mobile data collection or use of a new software.

Rad Resource:  Even if you’re not Canadian, the Canadian Evaluation Society’s Competencies for Canadian Evaluation Practice provide a great basis from which to reflect on your skills.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Related posts:

  1. Stephanie Evergreen on Graphic Design
  2. Cheryl Poth on Articulating Program Evaluation Skills Using the CES Competencies
  3. Kerry Bruce on Getting Started with Mobile Phones

Sheila B. Robinson on Delightful Diagrams from Design Diva Duarte

Sat, 04/12/2014 - 05:45

Hello! I’m Sheila B. Robinson, aea365’s Lead Curator and sometimes Saturday contributor, with a new cool tool to spice up your evaluation presentations and reports!

Do you know the feeling you get when you stumble upon something so good you want to share it, but then again, part of you wants to keep it all to yourself? It will be apparent from this post which side won out for me.

Lesson Learned: Based on advice from respected presentation and information designers, I now shy away from canned, cliche, or clip art images, including charts and diagrams. I’m no designer though, and often find it challenging to start with a blank page when I have something to share that calls for a good visual representation of a relationship.

I’ve enjoyed Microsoft’s SmartArt graphics that come with Office, and they are quite customizable. But with only 185 or so choices, I find I start recognizing them in other people’s presentations, especially when users have not customized them, and they begin to remind me of the overused, 20th-century clip art we’ve all come to loathe.

Rad Resource: Turns out, one of my favorite presentation designers, Nancy Duarte, has offered her expertise in a fabulous resource she has made available to all of us, and it’s FREE! Diagrammer™ is “a visualization system” featuring over 4,000 downloadable, customizable diagrams. Duarte, Inc. makes it easy to search for exactly what you need by allowing you to search all diagrams, or filter by relationship (flow, join, network, segment, or stack), style (2D or 3D), or number of nodes (1-8) needed.

Once you choose a diagram (and “shopping” for one is half the fun!), you simply download it as a PowerPoint slide, and fill in your text, or customize the various components. You can change shapes, colors, sizes and more. Diagrams range from the very simplest to somewhat complex. Here are just a few examples:

Most diagrams you see come in a variety of configurations. Each of the above examples are also available with different numbers of nodes.

Hot Tip: Duarte’s diagrams are in a gorgeous color palette if you ask me, but often it’s the colors you want to customize to match your report style or the colors of your organization. Here’s a before and after with the original diagram and my redesign.

Cool Trick: Take some time searching diagrams as you’re thinking about the relationship you want to communicate. This added reflection time will give you the opportunity to dig a little deeper into your data and you may be rewarded with new insights.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Related posts:

  1. John Nash on Creating Outstanding Presentation Slides
  2. Michelle Landry and Judy Savageau on No Need to Reinvent the Wheel: Project Management Tools for Your Evaluation Projects
  3. Best of aea365 week: John Nash on Creating Outstanding Presentation Slides

EEE Week: Cheryl Peters on Measuring Collective Impact

Fri, 04/11/2014 - 01:08

My name is Cheryl Peters and I am the Evaluation Specialist for Michigan State University Extension, working across all program areas.

Measuring collective impact of agricultural programs in a state with diverse commodities is challenging. Many states have an abundance of natural resources like fresh water sources, minerals, and woodlands. Air, water and soil quality must be sustained while fruit, vegetable, crop, livestock and ornamental industries remain efficient in yields, quality and input costs.

Extension’s outreach and educational programs operate on different scales in each state of the nation: individual efforts, issue-focused work teams, and work groups based on commodity types. Program evaluation efforts contribute to statewide assessment reports demonstrating the value of Extension Agricultural programs, including public value. Having different program scales allows applied researchers to align to the same outcome indicators as program staff.

Hot Tip: Just as Extension education has multiple pieces (e.g., visits, meetings, factsheets, articles, demonstrations), program evaluation has multiple pieces (e.g., individual program evaluation about participant adoption practices, changes in a benchmark documented from a secondary source, and impact assessment from modeling or extrapolating estimates based on data collected from clientele).

Hot Tip:  All programs should generate evaluation data related to identified, standardized outcomes. What differs in the evaluation of agriculture programs is the evaluation design, including sample and calculation of values. Impact reports may be directed at commodity groups, legislature, farming groups, and constituents. State Extension agriculture outcomes can use the USDA impact metrics. Additionally, 2014 federal requirements for competitive funds now state that projects must demonstrate impact within a project period. Writing meaningful outcomes and impact statements continues to be a focus of USDA National Institute of Food and Agriculture (NIFA).

Hot Tip: Standardizing indicators into measurable units has made aggregation of statewide outcomes possible. Examples include pounds or tons of an agricultural commodity, dollars, acres, number of farms, and number of animal units. Units are then reported by the practice adopted. Dollars estimated by growers/farmers are extrapolated from research values or secondary data sources.
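To make the aggregation idea concrete, here is a minimal sketch of totaling standardized outcome units by practice adopted. The practices, units, and values below are invented for illustration; none of them come from an actual Extension dataset, and real statewide reporting systems will use their own schemas:

```python
from collections import defaultdict

# Hypothetical records of practice adoption reported by county programs.
# Field names and values are illustrative only.
records = [
    {"practice": "cover cropping", "unit": "acres", "value": 1200},
    {"practice": "cover cropping", "unit": "acres", "value": 800},
    {"practice": "rotational grazing", "unit": "animal units", "value": 350},
    {"practice": "irrigation scheduling", "unit": "dollars", "value": 45000},
]

# Aggregate statewide totals keyed by (practice, unit) so that
# incompatible units (acres vs. dollars) are never summed together.
totals = defaultdict(float)
for r in records:
    totals[(r["practice"], r["unit"])] += r["value"]

for (practice, unit), total in sorted(totals.items()):
    print(f"{practice}: {total:g} {unit}")
```

Keying the totals on both practice and unit is the important design choice: it makes the spreadsheet-style aggregation the post describes safe to automate, because mismatched units surface as separate rows rather than being silently added.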

Hot Tip: Peer-learning with panels to demonstrate scales and types of evaluation with examples has been very successful. There are common issues and evaluation decisions across programming areas. Setting up formulas and spreadsheets for future data collection and sharing extrapolation values has been helpful to keep program evaluation efforts going. Surveying similar audiences with both outcomes and program needs assessment has also been valuable.

Rad Resource: NIFA provides answers to frequently asked questions, such as when to use program logic models, how to report outcomes, and how logic models are part of evaluability assessments.

The American Evaluation Association is celebrating Extension Education Evaluation (EEE) TIG Week with our colleagues in the EEE Topical Interest Group. The contributions all this week to aea365 come from our EEE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Related posts:

  1. Lisa Townson on Tailoring Evaluation to Your Audience
  2. EEE Week: Suzanne Le Menestrel on Developing Common Measures
  3. EEE Week: Melissa Cater on Extension Evaluation

EEE Week: Laura Downey on Community-Based Participatory Research Logic Models

Thu, 04/10/2014 - 01:04

Hi! This is Laura Downey with Mississippi State University Extension Service. In my job as an evaluation specialist, I commonly receive requests to help colleagues develop a program logic model. I am always thankful when I receive such a request early in the program development process. So, I was delighted a few weeks ago when academic and community colleagues asked me to facilitate the development of a logic model for a grant proposing to use a community-based participatory research (CBPR) approach to evaluate a statewide health policy. For those of you who are not familiar with CBPR, it is a collaborative research approach designed to ensure participation by communities throughout the research process.

As I began to assemble resources to inform this group’s CBPR logic model, I discovered a Conceptual Logic Model for CBPR available on the University of New Mexico’s School of Medicine, Center for Participatory Research, website.




Rad Resource:

What looked like a simple conceptual logic model at first glance was actually a web-based tool complete with metrics and measures (instruments) to assess CBPR processes and outcomes. Over 50 instruments related to the most common concepts in CBPR, such as organizational capacity, group relational dynamics, empowerment, and community capacity, are profiled and available through this tool. Each profile includes the instrument name; a link to the original source; the number of items in the instrument; the concept(s) originally assessed; reliability; validity; and the population the instrument was created with.

With great ease, I was able to download surveys to measure those CBPR concepts in the logic model that were relevant to the group I was assisting. Given the policy focus of that specific project, I explored the measures related to policy impact.

Hot Tip:

Even if you do not typically take a CBPR approach to program development, implementation, and/or evaluation, the CBPR Conceptual Logic Model website might have a resource relevant to your current or future evaluation work.

The American Evaluation Association is celebrating Extension Education Evaluation (EEE) TIG Week with our colleagues in the EEE Topical Interest Group. The contributions all this week to aea365 come from our EEE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Related posts:

  1. Laura Myerchin Sklaroff on Community Based Participatory Research
  2. Sally Honeycutt on Developing Logic Models
  3. Michael Duttweiler on Talking Your Way Into a Logic Model

EEE Week: Siri Scott on Conducting Interviews with Youth

Wed, 04/09/2014 - 01:02

Hi, I’m Siri Scott, and I work as a Graduate Assistant with the University of Minnesota Extension.

Conducting interviews with youth is one way to consider gathering information for program improvement, especially when you want to bring youth voice into your improvement process. While more time intensive than doing a survey, interviews will provide you with rich contextual information. Below is a brief overview of the planning process as well as tips for conducting interviews.

Hot Tips: Planning Phase

One main decision is whether or not you will need IRB approval for conducting interviews. Even when done for program improvement purposes, it is a good idea to comply with IRB regulations for data practices and protection for youth. Other major decision points in the planning process include how many individuals you will interview, how you will choose your participants, and how you will collect and analyze your data. In addition, you must decide what type of interview you want to conduct, the purpose of the interview, and then create an interview protocol (if appropriate).

Hot Tips: Conducting interviews

Here are some tips for conducting interviews:

  • Practice: Test the use of the protocol with a colleague (or a young person who you know well) and ask for feedback about the questions themselves and how they fit together.
  • Space: Find a quiet, secluded space for the interview in a public setting (or in a private home if the young person’s guardian can be present). You don’t want other people overhearing your conversation or your participant being distracted by anything.
  • Warm up: Start the interview with some informal chit chat. This will build your rapport with the participant and ease the participant’s (and your) nerves.
  • Probe: If you are not doing a structured interview, make sure to ask participants to clarify or elaborate on their responses. This will provide you with much better data.
  • Notes: If you are using an audio recorder, don’t trust it (fully). Jot down some quick notes during the interview. You can elaborate on these later if the audio recorder malfunctions.
  • Relax! If you mess up, that’s okay. Also, if you’re nervous and tense, the participant will sense that. Do whatever you can to put the participant (and yourself) at ease.
  • Learn More: A good resource for learning how to conduct interviews is this newly released, comprehensive overview of the interviewing process: InterViews: Learning the Craft of Qualitative Research Interviewing (3rd Edition) by Brinkmann and Kvale (2014).

The American Evaluation Association is celebrating Extension Education Evaluation (EEE) TIG Week with our colleagues in the EEE Topical Interest Group. The contributions all this week to aea365 come from our EEE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Related posts:

  1. Lisanne Brown on Involving Youth in Interviewing
  2. Dreolin Fleischer on Organizing Quantitative and Qualitative Data
  3. Nicole Jackson on Improving Interview Techniques During Formative Evaluations

EEE Week: Kevin Andrews on Connecting Students with Extension

Tue, 04/08/2014 - 01:59

Howdy! I am Kevin Andrews, a program specialist at Texas A&M AgriLife Extension Service. In addition to my Extension duties, I co-teach a graduate evaluation course at Texas A&M University.

I came across a post from March about students partnering with community agencies to apply their evaluation skills. I’d like to build upon Dr. Brun’s idea for evaluators who have ties to a university, especially those in Extension.

Many of our students have no idea what Extension (or any other agency) is. Any engaged university seeks to tie together the scholarships of teaching, research, and service, and hands-on evaluations are a perfect way to accomplish this.

Lessons Learned: When students partner with us on evaluations, they not only receive practical experience and make an impact, they also get to learn who we are. This can aid in recruiting talented students to work for the agency; we’ve had several ask about careers in Extension.

Hot Tip: Students are going to ask a lot of questions. We can get pretty set in our ways and think we know our agency well. When you have to pause to explain why we do what we do in basic terms, you are forced to reflect on exactly why it is we have been doing things a certain way all these years!

Hot Tip: Our employees just want their voices heard. With students conducting interviews we get far more coverage than a single evaluator using a sample, and employees are able to feel their opinions matter. Our staff is also much more likely to be open with a student than they are a peer.

Lessons Learned: I like to be in total control over my projects, but part of delegating work is letting others do their own thing. By developing goals together early in the project, I can ensure the outcome is as I intended while allowing students to experiment and develop their own processes.

Hot Tip: Often, when a class is over, the student-teacher relationship ends. Keep contact information and follow up with students a year later to let them know the impact of their work. No matter where life takes them, they are your stakeholders and you want them to hold you in high esteem.

Lessons Learned: I’m lucky to get to straddle teaching and Extension. For those who don’t, simply reach out and ask! I’ve been approached by others with projects for students, and I’ve approached others with projects of my own. Everyone has something they need done!

Two years ago, I was the student participating in a class evaluation. Three from my class, including myself, now work for Extension and our report generated $200,000 of funding – the model works!

The American Evaluation Association is celebrating Extension Education Evaluation (EEE) TIG Week with our colleagues in the EEE Topical Interest Group. The contributions all this week to aea365 come from our EEE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Related posts:

  1. Aubrey Perry on Ensuring a Positive Practica Experience
  2. GSNE Week: Kristin Woods on Gaining Practical Experience as a New Evaluator
  3. PD CoP Week: David Brewer on Evaluating Professional Development: Guskey Level 5: Student Learning Outcomes

EEE Week: Suzanne Le Menestrel on Developing Common Measures

Mon, 04/07/2014 - 01:52

Greetings! My name is Suzanne Le Menestrel and I am a National Program Leader for Youth Development Research at the 4-H National Headquarters, National Institute of Food and Agriculture, U.S. Department of Agriculture.  4-H is a national youth development organization serving 6 million youth throughout the country. We partner with the nation’s Cooperative Extension system operated by the more than 100 land-grant universities and colleges and with National 4-H Council, our private, non-profit partner. Recent trends in funding have elevated the importance of illustrating impact and accountability for nonformal educational programs.  We were also interested in building capacity for evaluation through the creation of easy-to-use and accessible tools.  We partnered with National 4-H Council, state 4-H program leaders, 4-H specialists and Extension evaluators from around the country to create a national 4-H common measures system that will also enable us to aggregate data across very diverse 4-H programs.

I have learned a number of lessons through the implementation of this new system.

Lessons Learned:

    • Common measures must be developmentally appropriate. Children and youth who participate in 4-H range in age from 5 to 19. Because of concerns about reading levels and developmental appropriateness, we focused the common measures on ages 9 to 18. We also divided the measures into two levels—one for children and youth in grades 4 through 7 and one for youth in grades 8 through 12.
    • Common measures must have strong psychometric properties.  As much as possible, we drew from existing measures but have been conducting analyses with both pilot and preliminary data.
    • Measures must be applicable to a broad variety of programs. 4-H looks very different from county to county and state to state. We started with the creation of a national 4-H logic model that represents desired program outcomes.



  • Common measures must be available through a flexible, easy-to-use, and robust on-line platform.  This includes the ability to add custom items.
  • Training and technical assistance are key to the implementation of common measures in a complex, multi-faceted organization such as 4-H.
  • Buy-in and support from stakeholders are critical, as is creating an ongoing system for soliciting stakeholder feedback.
  • Such a system cannot be developed without sufficient funding to support the on-line platform, technical assistance, and on-going formative evaluation.
  • Common measures are a flexible product that needs to grow and change with the outcomes of the organization.

Rad Resource:

Check out this article written by Pam Payne and Dan McDonald on using common evaluation instruments.

The American Evaluation Association is celebrating Extension Education Evaluation (EEE) TIG Week with our colleagues in the EEE Topical Interest Group. The contributions all this week to aea365 come from our EEE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Related posts:

  1. Pam Larson Nippolt on Soft Skills for Youth
  2. Lisa Townson on Tailoring Evaluation to Your Audience
  3. Elizabeth Harris on A Measure of Youth Resiliency

EEE Week: Melissa Cater on Extension Evaluation

Sun, 04/06/2014 - 01:42

My name is Melissa Cater, and I am an assistant professor and evaluation specialist at Louisiana State University AgCenter. I am also serving as Chair of the AEA Extension Education Evaluation Topical Interest Group (EEE-TIG) this year. The EEE-TIG provides a professional development home for Extension professionals who are interested in program evaluation; we also welcome other individuals who are evaluating non-formal education outreach programs in a community setting. The EEE-TIG goals provide a guiding framework for the membership.

Hot Tip: Our TIG has provided a place for Extension professionals to become more collaborative. If you are searching for a way to become more involved in evaluation, join a TIG. The networking opportunities are endless.

This week’s aea365 blog posts are sponsored by the EEE-TIG. I invite you to learn more about who we are through this week’s series of posts. You’ll see that we have a range of interests within our membership, from evaluating agricultural programs, to teaching evaluation, to supporting participatory community research, to building evaluation capacity.

Hot Tip: You can learn even more about the EEE-TIG and the varied interests of our members by viewing our archived blog posts.




Hot Tip: Want to learn about the diversity of programs that are being evaluated in Extension? Check out the Journal of Extension to see the breadth of topics.

The American Evaluation Association is celebrating Extension Education Evaluation (EEE) TIG Week with our colleagues in the EEE Topical Interest Group. The contributions all this week to aea365 come from our EEE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Related posts:

  1. EEE Week: Sarah Baughman on Building Evaluation Capacity Across Disciplines: From Feral Hogs and Fire Ants to Families
  2. EEE Week: Mary Arnold on Building Capacity
  3. Lisa Townson on Tailoring Evaluation to Your Audience

Dan McDonnell on Evaluating Your Tweets

Sat, 04/05/2014 - 13:09

Hello, my name is Dan McDonnell and I am a Community Manager at the AEA. 

I’ve written much in the past about different tools and tricks that can help evaluators be more productive in using Twitter, which hopefully have proved worthwhile in helping you make smart use of your time on social media. By evaluating your Twitter activity and engagement, you can better understand what content resonates with your followers, and how your tweets might help you expand your network of contacts and followers.

Hot Tip: Monitor Tweet Click Throughs with a URL Shortener

While you can’t necessarily measure whether people are reading your tweets, you can see who is taking action and clicking the links that you share – which in turn lets you know that you’re sharing content your followers find interesting! Using a link-shortening tool like Bit.ly or Ow.ly (HootSuite’s built-in shortener) will automatically track the number of times followers click on your links. Periodically check in to see the types of content that get the most attention. Are tweets using certain hashtags, or shorter tweets, getting clicked more often? Let that inform future content and topics for things you tweet about.

Hot Tip: Measure Your Most Engaging Tweets

Another set of metrics that you can look to for wisdom is engagement. Keep an eye on the number of times your tweets are being retweeted, favorited, or replied to through the basic Twitter client, or sign up for a free tool along the lines of Sprout Social or HootSuite. This lets you keep track of your top-engaging tweets so you can easily see what stories, resources, and thoughts are most likely to be engaging to your followers.
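As a concrete illustration, here is a minimal sketch of ranking tweets by a simple engagement score. The tweets and counts below are invented, and the column names are assumptions; a real export from Twitter, Sprout Social, or HootSuite will use its own format:

```python
import csv
import io

# Hypothetical CSV export of tweet metrics (illustrative data only).
export = """text,retweets,favorites,replies
"New eval toolkit posted",5,12,2
"Conference hashtag announced",20,30,8
"Survey design tip",3,4,1
"""

rows = list(csv.DictReader(io.StringIO(export)))

# Score each tweet as the sum of its retweets, favorites, and replies.
for row in rows:
    row["engagement"] = (
        int(row["retweets"]) + int(row["favorites"]) + int(row["replies"])
    )

# Sort most-engaging tweets first.
top = sorted(rows, key=lambda r: r["engagement"], reverse=True)
for row in top:
    print(f'{row["engagement"]:>3}  {row["text"]}')
```

A flat sum is the simplest possible score; if, say, replies matter more to you than favorites, you could weight the three counts differently before sorting.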

Hot Tip: Evaluate Your Favorite Hashtags

In my last post, I mentioned Tweetbinder as a handy tool for digging into hashtag data. The amount of data you can find is staggering! Simply visit the site and type in your hashtag of choice. The report you pull up will show you the top contributors, when the hashtag is most active, and examples of recent tweets. With this knowledge, you can find new, interesting people to follow, identify good times to tweet on the hashtag, and see where you rank among tweeters for impact, influence, and more.

This is really just scratching the surface on what Twitter metrics can tell you, and how you can use them to your advantage in evaluation. In a future post, I hope to be able to expand upon these topics and provide additional tips and tricks on digging into the data. How do you use Twitter metrics to your advantage?

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Related posts:

  1. Dan McDonnell on Making New Friends and Mastering Lesser-Known Twitter Features Without Third Party Apps
  2. Dan McDonnell on Twitter Etiquette and Data Archiving
  3. Dan McDonnell on 5 Social Media Tools For Curation and Visualization

Erin Blake on Healthy Tips for Traveling Evaluators

Fri, 04/04/2014 - 01:15

My name is Erin Blake and I am worried about your health!

Working for the Caribbean Public Health Agency has raised my awareness of the growing problems with obesity and associated diseases. The go-to axiom in public health is ‘the health of the nation is the wealth of the nation.’ Well, I think that is true for evaluators too. We need to take care of our physical and mental health in order to do the best job possible for our clients and stakeholders.

Many of us (myself included) struggle with our weight and maintaining our health. It can be hard making good choices when travelling frequently, working long hours under stressful deadlines, in places where food options are limited, and/or in locations that have no facilities for exercise.

So how can we better take care of ourselves when we are on the road?

Hot Tip: Do your research. When booking accommodation, look for places that have facilities for exercise or are close to places you can go for a jog/walk/swim. You don’t have to take exercise seriously, just regularly. Thirty minutes a day!

Hot Tip: Be prepared. ALWAYS pack your gym gear and swimwear when you are heading out on the road. Be prepared to exercise in your room. Sit-ups, push-ups, squats, yoga, dancing, star jumps and many more exercises don’t need a gym. There are loads of resources on the web that can help you identify exercise that can work for you.

Hot Tip: If you can’t measure it, you can’t manage it. Download some apps for tracking your calories and exercise (or just write it down!).
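In the spirit of "just write it down," even a plain-Python log is enough to measure and manage. A minimal sketch (the days, calories, and minutes below are invented example entries):

```python
# Minimal sketch of "just write it down": a plain-Python daily log of
# calories consumed and exercise minutes. All numbers are invented.
log = []

def record(day, calories_in, exercise_minutes):
    """Append one day's entry to the log."""
    log.append({"day": day, "calories": calories_in, "exercise": exercise_minutes})

record("Mon", 2100, 30)
record("Tue", 2400, 0)
record("Wed", 1950, 45)

# Summarize: average intake and total exercise so far.
avg_calories = sum(e["calories"] for e in log) / len(log)
total_exercise = sum(e["exercise"] for e in log)
print(f"Average intake: {avg_calories:.0f} kcal; total exercise: {total_exercise} min")
```

Apps like the one below automate exactly this kind of logging and summarizing, with a food database added on top.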

Rad Resource: MyFitnessPal is a free app (available for iPhone, Android, Blackberry, and Windows) that includes a daily diary for tracking your nutritional intake and calories burned. It has a surprisingly large database of different foods and their nutritional content which is particularly handy. It also has some fun graphs for your inner data viz nerd!

Get Involved: I want to encourage more AEA members to share their experiences and tips for maintaining their health.

Related posts:

  1. Shortcut Week: John Paul Manning on RescueTime for Time Tracking
  2. Clare Nolan on Transforming Health Care: Evaluating Accountable Care Organizations
  3. BLP TIG Week: Michelle Baron on The Importance of Strategic Planning in Building a Culture of Evaluation