13/02/08

Evaluating science communication projects

The trick to evaluating a science communication project, explains Marina Joubert, is to plan carefully — and learn from your mistakes.

Communicating science to the public comprises diverse approaches such as public talks, debates, exhibitions, publications, science theatre, television documentaries and citizen projects like consensus conferences. Often, these activities form part of a wider campaign to engage people in science.

Audiences range from young children and teenagers to parents and community, business or political leaders. Formulating programmes requires a collective effort between scientists and science communicators, and often between communicators in different countries; a programme may run for several years or span national borders.

The rationale behind investing in science communication often includes encouraging young people to consider science careers, supporting dialogue and informed decision-making about science and society issues, promoting public acceptance of new technologies, changing attitudes or behaviour, or simply fostering a science ‘culture’.

The achievement of such goals can only be measured by comprehensive surveys that are regularly repeated to monitor trends. So although a project may contribute to broad societal objectives, its evaluation must be planned around its own specific, measurable objectives.

Evaluating your science communication project will not only provide feedback to investors, it will also help you to:

  • improve the project as it develops;
  • document what works (and what doesn’t); and
  • assess how effectively the project has reached its targets.

Planning for evaluation

Don’t shy away from evaluation because it seems expensive or difficult, or because you fear being seen in a negative light. And don’t leave evaluation until after the project is over. Unless you plan for evaluation from the start, you won’t have the people, instruments or resources in place to do it. You could miss critical opportunities to gather data and could end up with insufficient evidence about your project’s impact.

In a competitive proposal, a credible evaluation plan will inspire confidence in investors. Consider carefully what you want to evaluate and why; how the evaluation strategy will be implemented; and how the information will be used. Find out, for example, whether your sponsor is planning an independent evaluation so that you can optimise cooperation between the internal and external evaluations.

Involve your co-workers in designing the evaluation plan so that they feel shared ownership, and thus, responsibility — their buy-in to the evaluation process is essential to its success.

Most importantly, set clear and measurable objectives and don’t be too ambitious.

Evaluation reports from other science communicators may help you design your own (see, for example, the reports by Graphic Science at the University of the West of England).

Types of evaluation

Evaluation can be quantitative — monitoring the number of people at an event, or the number of visits to a website for example — or qualitative, using interviews or questionnaires. Both are important in determining a project’s impact.
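
To make this concrete, here is a minimal sketch in Python of how the two strands of evidence might be tallied side by side. All event names, themes and numbers are invented for illustration; substitute your own records.

```python
from collections import Counter

# Invented example data. Attendance counts are the quantitative strand;
# themes coded from open-ended interview answers are the qualitative strand.
attendance = {"school visit": 120, "public talk": 85, "open day": 310}
interview_themes = ["inspiring", "too technical", "inspiring", "fun",
                    "too technical", "inspiring", "fun", "fun"]

print(f"Total attendance across events: {sum(attendance.values())}")

# Tally how often each qualitative theme came up in the interviews.
for theme, count in Counter(interview_themes).most_common():
    print(f"  '{theme}': mentioned {count} times")
```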

Evaluation happens before, during and after a project. ‘Formative’ evaluation is done early on to test prototypes, messages and ideas with individuals and groups from your target audience. Your co-workers can also pre-test evaluation tools. Early feedback can stop you making costly mistakes — use it to adjust the project before you start.

Once the project is underway, continuous monitoring will ensure that you have comprehensive and relevant data to evaluate the project’s impact.

Periodic checks against your original objectives should show you whether adjustments are needed.

Mid-term evaluations are done specifically so that lessons learned so far can improve delivery and impact in the rest of the project.

At the end, a ‘summative’ evaluation based on project outcomes can determine how effective a programme has been in reaching its objectives.

Evaluation tools

Different tools can be used in evaluation — their appropriateness will depend on the nature and context of your project. Broadly, evaluation tools can help you observe people’s response to a project, collect feedback on it or gauge its reach.

Observational tools include using photographs and video footage to see how people interact with an exhibition or how they respond during a public debate. Observing visitors’ behaviour in science centres and museums can document how people relate to displays.

If your project relies on a website, make sure you monitor how people navigate it and how much time they spend on specific sections so that you can measure individual pages’ appeal. Incorporate user-friendly response options to gather online feedback.
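
Most analytics services report these figures directly, but the arithmetic behind 'views' and 'time on page' is simple. Below is a hedged Python sketch, assuming you can export page-view records as (visitor, page, seconds-into-visit) triples; the data and layout are invented for illustration.

```python
from collections import defaultdict

# Invented export from an analytics tool: (visitor, page, seconds into visit).
views = [
    ("v1", "/home", 0), ("v1", "/exhibits", 40), ("v1", "/contact", 250),
    ("v2", "/home", 0), ("v2", "/exhibits", 30),
]

by_visitor = defaultdict(list)
for visitor, page, t in views:
    by_visitor[visitor].append((t, page))

page_views = defaultdict(int)
dwell = defaultdict(list)
for visits in by_visitor.values():
    visits.sort()  # order each visitor's clicks in time
    for (t, page), (t_next, _) in zip(visits, visits[1:]):
        page_views[page] += 1
        dwell[page].append(t_next - t)  # time before the next click
        # (the last page of each visit is not counted, a known limitation
        # of time-on-page measures)

for page, times in dwell.items():
    print(f"{page}: {page_views[page]} views, "
          f"avg {sum(times) / len(times):.0f}s before moving on")
```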

For events, use simple questionnaires to collect feedback, document what worked particularly well, and identify things that should be reconsidered. Remember to also get feedback from the project’s participants — speakers, actors, educators and others — whose experience can also be valuable.

Use an attendance register if you need a demographic profile of your audience. A visitors’ book is a good way of capturing the impressions and recommendations of audience members at a show, play, event or exhibition. Alternatively, interviews can be done in person at the event or via a telephone or e-mail survey soon afterwards. Remember to collect participants’ contact details so that you can contact them later.
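
As an illustration, a register typed up as a spreadsheet can be profiled in a few lines. The sketch below assumes a hypothetical file register.csv with columns age_group and gender; adapt the column names to whatever your register actually records.

```python
import csv
from collections import Counter

# Assumes the register was saved as 'register.csv' with hypothetical
# columns: name, age_group, gender.
with open("register.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

print(f"{len(rows)} attendees registered")
for field in ("age_group", "gender"):
    print(f"Breakdown by {field}:")
    for value, count in Counter(row[field] for row in rows).most_common():
        print(f"  {value}: {count} ({100 * count / len(rows):.0f}%)")
```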

Focus groups, small groups of participants who are interviewed in depth, can provide detailed feedback on your project’s relevance.

And self-evaluation, where project leaders and collaborators provide their own views on a project’s successes and failures, can complement objective, external evaluations.

Gauging a project’s reach is more difficult. Tracking changes in public attitudes or opinions requires large-scale, expensive surveys, which are often complicated and best done in collaboration with experienced social science researchers.

Analysing press clippings and radio or television coverage of an event or science communication programme can also help to judge a project’s wider impact. Be sure to look at the coverage’s quantity and quality (position, tone, etc) and get expert help if you need it.
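
A simple tally is often enough to start with. The sketch below, with invented clippings and tones assigned by a human reader, shows how quantity and tone might be summarised; it is an illustration, not a substitute for expert media analysis.

```python
from collections import Counter

# Invented clippings log: (outlet, tone), with tone judged by a human reader.
clippings = [
    ("Daily Nation", "positive"), ("Radio One", "neutral"),
    ("Daily Nation", "positive"), ("Science Weekly", "negative"),
    ("TV Two", "positive"),
]

tones = Counter(tone for _, tone in clippings)
outlets = {outlet for outlet, _ in clippings}
print(f"{len(clippings)} items of coverage across {len(outlets)} outlets")
for tone in ("positive", "neutral", "negative"):
    print(f"  {tone}: {tones[tone]} ({100 * tones[tone] / len(clippings):.0f}%)")
```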

Evaluation in practice

Remember to keep your evaluation simple, practical and user-friendly. Choose tools that are easy to implement. Telephone interviews, for example, may be more affordable than face-to-face interviews. Lengthy or complex survey forms may scare people away. Pre-test your tools to make sure they deliver data that are relevant and easy to process.

Consider the language and style of your evaluation materials carefully and make sure they are appropriate, especially if you are targeting ‘niche’ audiences, such as rural communities in developing countries. If, for example, you are working with teenagers, use young interviewers and avoid a formal ‘clipboard’ approach.

Your evaluation should be tailored to meet your project’s unique challenges. For example, it is easy to gauge public opinion of an event such as a science festival, but difficult to demonstrate its long-term effects. To show a trend, you need baseline data that captures how things stood before your science communication project began. It may be very difficult to claim, let alone prove, that a single project contributed to change at a national or international level.
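
To illustrate the role of baseline data, the sketch below compares an invented baseline survey with an invented follow-up and applies a standard two-proportion z-test to ask whether the change is larger than sampling noise. The numbers are made up, and a real comparison should be designed with a social science researcher.

```python
from math import erf, sqrt

# Invented numbers: respondents agreeing that 'science is relevant to my life'
# in a baseline survey (before the project) and a follow-up (after).
before_yes, before_n = 210, 500
after_yes, after_n = 265, 500

p1, p2 = before_yes / before_n, after_yes / after_n
print(f"Agreement rose from {p1:.1%} to {p2:.1%}")

# Two-proportion z-test: is the change bigger than sampling noise?
pool = (before_yes + after_yes) / (before_n + after_n)
se = sqrt(pool * (1 - pool) * (1 / before_n + 1 / after_n))
z = (p2 - p1) / se
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided, normal approx.
print(f"z = {z:.2f}, p = {p_value:.4f}")
```

A small p-value suggests the before-and-after difference is unlikely to be noise alone, though it still does not prove that your project caused the change.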

A print media project demands a different approach, since there may be no direct contact with readers. In this case, incentives could entice people to provide feedback.

Using your evaluation results

Make sure all your collaborators know how you plan to use the evaluation results. You don’t want people to be surprised to find themselves quoted in a public article or report when they believed the interview was confidential.

Position your evaluation as an empowerment tool. People are more likely to collaborate with your evaluation if they understand that it aims to build confidence and improve future performance, rather than to find fault and identify shortcomings.

Feed your results into future activities. Both successes and failures can teach you how to increase new projects’ impact and cost-effectiveness.

Share your results — your mistakes, and any solutions you developed, could be valuable information for others. Disseminating your findings in a journal or at a conference also lets you share best practice and learn from others.

Finally, don’t forget to communicate evaluation outcomes to your team members, partners, collaborators and sponsors.

Further reading

Gascoigne, T. and Metcalfe, J. Report: The evaluation of national programs of science awareness. Science Communication 23, 66–76 (2001)

Metcalfe, J. and Perry, D. The evaluation of science-based organisations’ communication programs. Australian Science Communicators conference, Sydney (2001)

Marina Joubert is a science communication consultant at Southern Science in South Africa and coordinator of SciDev.Net’s science communication topic gateway

This article was previously part of SciDev.Net’s e-guide to science communication and has been reformatted to become this practical guide.