Chapter 13. Evaluation

Human Rights Law Centre

Evaluation enables CLCs to track the value of their systemic work and to demonstrate that value to the community. It draws attention to the strengths of projects and shows where there is room to improve, helping CLCs to sharpen their advocacy efforts and make them as effective as possible. This chapter explains how and why CLCs should evaluate their law reform work. We are grateful to Emma Pritchard for her assistance in preparing this chapter.

What is evaluation?

Evaluation is the process of assessing the merit or value of something. Evaluating a project means reviewing its operation and outcomes in order to determine its success to date and the areas that can be improved in future.

Advocacy evaluation is the evaluation of social or policy change projects. Advocacy evaluation involves assessing how and to what extent advocacy work (e.g. campaigns, lobbying) is contributing to intended policy (or other) changes. It looks at achievement of the final policy goal as well as progress toward that goal.

Advocacy evaluation has attracted substantial and growing interest in recent years, particularly among community organisations and philanthropic bodies in the USA.

Why should CLCs evaluate advocacy work?

Evaluation of advocacy work is important. CLCs can and should track the results of their efforts to improve unfair laws.

Evaluation can benefit CLCs in a number of ways.

First, evaluation can help CLCs to do better policy work. It enables CLCs to learn what works to accomplish change. Evaluation techniques offer a basis upon which staff can re-think campaign tactics if things do not go to plan. In this sense, it builds CLC capacity to improve project design and delivery, both during a project (if the project under evaluation is still in progress) and also in relation to future work.

Second, evaluation aids in attracting grants and other funding, as it demonstrates the value of your work to potential funders. Strong evaluative practices show rigour and accountability, and thereby boost credibility and enhance the quality of your grant applications. Furthermore, grant-making organisations increasingly require that projects have an evaluation component, so incorporating evaluative practices is becoming necessary to win grants.

Evaluation can take a variety of forms. At one end of the spectrum sits a large, formal, external evaluation; at the other, a small, internal reflection or learning event (for example, a half-day gathering of staff and partners to consider what worked, what didn't and ways forward).

Evaluation can be a particularly powerful tool for CLCs if it is integrated into systemic work. If you gather data about how your work is tracking and reflect on it regularly as you proceed, the resulting insights can inform ongoing refinements to your strategies and approach.

What should a CLC evaluate?

All systemic work – whether substantial or minor – merits some form of evaluation.[i] You can evaluate the impact of a one-off submission or you can evaluate the impact of an entire campaign.

Your evaluation should assess a range of things, including outcomes, processes (how the project is being implemented), strategies and partnerships.

When should a CLC evaluate?

It is best practice to build evaluation into your advocacy work from the outset. If possible, plan how to track the progress of your project as you design it. It can be difficult to assess performance only at the end, as opportunities to collect the information needed to draw meaningful conclusions may have been missed.

That said, it is never too late to implement evaluative practices, even if your project is already underway: some data to help you assess the success of your work is better than none.

How should a CLC evaluate a specific advocacy project?

Key Messages

Plan to evaluate from the beginning

It is important to build evaluation into your project design from the outset. A prospective approach to evaluation allows you to set your indicators of success in advance and collect data as you go. This also enables you to collect baseline data (i.e. data that shows what the circumstances were before the project commenced).

Formulate a series of outcomes

Your outcomes are the things you want to accomplish and the changes you aim to bring about. It can be helpful to think of your outcomes in a hierarchy: immediate outcomes are achieved in the short term; intermediate outcomes are achieved in the longer term; final results are achieved within the life of your project; and social outcomes ultimately stem from those final results. For example, an immediate outcome might be a meeting with the relevant Minister, an intermediate outcome a government commitment to review the law, a final result the amendment of the law, and a social outcome fewer people harmed by the unfair law. If you have developed a theory of change, you will be able to use it to help identify your intended outcomes. For more information about preparing a theory of change, please see Chapter Three.

Set solid and tangible indicators of success

Your indicators show whether (and to what extent) you achieve your intended outcomes. They are measures that together tell a credible story about the extent to which outcomes have been achieved. Indicators may be quantitative (e.g. numbers, percentages) or qualitative (descriptive). Both qualitative and quantitative indicators are important for advocacy evaluation.

Develop consistent data collection methods

In order to produce information that is credible and useful to your CLC and others, accurate and consistent data collection methods are crucial. In particular, planned and systematic documentation of your activities and their results will boost the reliability of your data.

An example

If one of your outcomes is increased public awareness of the issue, then your indicators may include:

  1. awareness levels among the target audience on specific elements of the issue before the project commenced, along the way, and after its completion (i.e. what public awareness of the issue was before your campaign started and what it was after you finished). This suits some contexts and projects – particularly large and well-resourced ones – better than others;
  2. the estimated audience size reached by campaign messages (e.g. the number of times the campaign is mentioned on radio or television, weighted by the size of the listenership), plus a description of the key messages contained in those mentions;
  3. the number of articles about the issue written by the project and appearing in newspapers, plus a description of the key messages contained in those articles and estimates of readership per publication;
  4. the number of downloads or hits your online publications received.

Different campaigns will require different data collection methods at different times to generate usable data.[ii] In relation to the indicators outlined above, media monitoring is the key data collection method.
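Even simple tooling can support consistent documentation. The sketch below is illustrative only: it assumes a hypothetical media log kept in a spreadsheet and exported as media_log.csv, with columns date (YYYY-MM-DD), outlet, audience_estimate and key_message, and it tallies mentions per month, mentions per key message and the total estimated audience reached.

    # Illustrative sketch (Python): tally media-monitoring indicators from a
    # hypothetical CSV log with columns: date, outlet, audience_estimate, key_message.
    import csv
    from collections import Counter

    mentions_by_month = Counter()
    mentions_by_message = Counter()
    total_estimated_audience = 0

    with open("media_log.csv", newline="") as f:
        for row in csv.DictReader(f):
            mentions_by_month[row["date"][:7]] += 1      # "2024-03" style keys
            mentions_by_message[row["key_message"]] += 1
            # Treat a missing audience estimate as zero rather than failing.
            total_estimated_audience += int(row["audience_estimate"] or 0)

    print("Mentions per month:", dict(mentions_by_month))
    print("Mentions per key message:", dict(mentions_by_message))
    print("Estimated total audience reached:", total_estimated_audience)

Whatever tools you use, the point is the same: record each mention once, in one place, in a consistent format, so that tallies remain meaningful across the life of the campaign.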

Sample framework of the evaluation process

There are three main stages of evaluation.[iii]

The framework below summarises the basic steps and standards essential to an effective evaluation process.

‘Six Steps’ Framework for Project Evaluation.[iv]

These are steps that can be taken in any evaluation. They are designed to be used as a starting point and tailored to suit a particular issue and context.

[Figure: the six steps – engage stakeholders; describe the project; focus the evaluation design; gather credible evidence; justify conclusions; and ensure use and share lessons learned – surrounded by the four standards of utility, feasibility, propriety and accuracy.]

Where can I learn more?

For a collection of practical tools, articles, reports and other resources relevant to advocacy evaluation, visit:

Sources

This chapter draws on the work of Emma Pritchard from Emma Pritchard Consulting (e.pritchard.consulting@gmail.com or +61 466 652 867).

 

[i] The same applies to CLC programs and services more broadly.

[ii] For more information about identifying indicators (or ‘measures’) and selecting methods to capture them, see Harvard Family Research Project, ‘A User’s Guide to Advocacy Evaluation Planning’, 13–17.

[iii] See Emma Pritchard's presentation for more information.

[iv] Adapted from the Centers for Disease Control and Prevention, Framework for Program Evaluation in Public Health (1999) MMWR 48, 4. See that framework for a full explanation of these steps and standards.

 
