Evaluation – generating useful evidence for learning and impact

Evaluation is central to the Centre for Ageing Better's ambition of creating measurable and lasting change to help everybody enjoy a good later life. Whether it's working with other organisations to evaluate existing programmes and fill gaps in the evidence base, or evaluating our own programmes and strategies to test new ideas and to scale or spread what we know works, the appetite for evaluation and learning is high and the opportunities are huge. This is at the heart of our fundamental aim as an organisation: to find out 'what works'.

Only two weeks into the job, I'm already getting involved in some very interesting evaluations covering different aspects of ageing. For example, I'm advising on the evaluation of the BLF's Ageing Better programme, which is testing approaches to tackling social isolation in 14 areas. We're also at an early stage of thinking about evaluation for a project we're undertaking with the Calouste Gulbenkian Foundation to test new ways of supporting people to plan for their retirement.

So I've started thinking about what we are hoping to learn from evaluation across these different programmes.

Well, as we work with partners to develop and shape interventions, we'll want to explore some of the key 'how' and 'why' questions. First, we'll want to understand what the intervention is: what is it made up of, how is it delivered, and by whom? Then we'll want to know more about how the intervention achieves its impacts, so we'll need to explore questions such as: how do interventions successfully attract the right participants? Once people are participating, what are the things that really matter to them and make a difference in their lives? And what works in keeping people involved and bringing about sustainable change?

And we know that successful interventions in the real world are rarely simple or straightforward. To work well and be effective for most people, services or programmes need to be adapted to suit the requirements and circumstances of different types of people. They need to capitalise on supportive features of the implementation context and work in harness with existing services to meet deficits or needs that cannot be addressed through the intervention alone. So we'll need to understand how interventions are tailored to suit the needs and circumstances of different types of people, and which additional services or interventions are needed to create sustainable impacts.

As we build stronger theory and a stronger evidence base around these kinds of questions, we'll be better able to develop and apply evaluation designs capable of robustly measuring outcomes and providing strong accounts of overall impacts and cost-effectiveness.

We expect that, by generating this strong and comprehensive evidence base covering these kinds of questions, we'll be providing organisations up and down the country with the evidence they need to make practical use of the learning from evaluations: the knowledge of how to apply that evidence in the different local contexts each organisation operates in.

We'll be continuing to develop and refine this core list of evaluation questions and our evaluation approach over the coming months. If you're currently involved in developing and testing an ageing-related intervention, or considering evaluating an existing programme, I would love to hear from you.

Matt Baumann