
Measuring impact: our thoughts and exploration

I’m a regular customer at Crate Brewery in Hackney Wick. However – despite my best efforts – I’ve never been able to engage my friends in a conversation about data measurement over beer and pizza. As such, I was very happy to find myself upstairs at The White Building for the London-based follow-up to the ‘Measured’ summit (which took place in New York City earlier this year), taking part in an evening’s discussion about measuring the social impact of design.

We’ve been doing a lot of thinking here at Uscreates about measuring impact, and it was a pleasure to be in the company of lots of experts exploring new avenues around this age-old problem. This blog summarises some of that conversation, as well as our musings from the studio.

Why and for whom? Why do we measure impact, and for whom? Broadly, there are two reasons:

Evaluative: to prove something works

Formative: to learn and iteratively improve something

Skim or splash? Impact has traditionally been measured at the end of a trial or test; however – as outcome impact often takes a long time to materialise – this approach can pose problems. Colleagues from Nile and the Royal Bank of Scotland, presenting on their redesign of the Scottish £5 note at the SDN London conference in 2016, explained how they measured the ripple effects of design throughout the process as well as the large splash at the end. We’ve started to create a matrix of different proxy measures along the way.
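To make that concrete, here is a minimal sketch (in Python) of what such a proxy-measure matrix might look like: proxy measures captured at each stage of the work, rather than a single end-of-project figure. The stages, measures and numbers are purely illustrative assumptions, not taken from the Nile/RBS work or from our own projects.

```python
# A hypothetical proxy-measure matrix: rows are stages of a design process,
# columns are proxy measures captured along the way rather than only at the end.
# All stage names, measures and figures below are illustrative examples.

proxy_matrix = {
    "discovery":   {"stakeholders_engaged": 24, "user_interviews": 18},
    "prototyping": {"prototypes_tested": 3, "usability_issues_fixed": 11},
    "pilot":       {"service_uptake_pct": 42, "staff_confidence_score": 3.8},
    "rollout":     {"service_uptake_pct": 61, "user_satisfaction_score": 4.1},
}

def ripple_report(matrix):
    """Summarise the 'ripples' captured at each stage, not just the final splash."""
    for stage, measures in matrix.items():
        summary = ", ".join(f"{name}={value}" for name, value in measures.items())
        print(f"{stage}: {summary}")

ripple_report(proxy_matrix)
```

The point of holding the measures this way is simply that the earlier columns exist at all: the ripples are recorded while the work is still in flight, so the team can act on them.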

However, measuring shorter-term output or activity as impact can also drive perverse behaviours. Over the last 10 years, central government has slowly pulled back from directive targets because they have incentivised people to act in strange ways: notoriously, the Offences Brought to Justice (OBTJ) target drove the police to focus on young people – the ‘low-hanging fruit’. Our work with clients has also shown us that target-driven cultures disempower frontline staff, affecting their ability to devote time and creativity to problem solving and ‘doing the right thing’ for users (which might not necessarily be the activity demanded by the target).

“Pilot with a view to roll-out nationally”. The other issue with traditional, end-of-test impact measurements is that you cannot change anything along the way. As a civil servant, I have been guilty of writing “we are going to pilot XXX with a view to rolling it out nationally” in various strategies. It means that pilots are set up to succeed rather than to be allowed to fail, even if the experiment doesn’t actually work. Jesper Christiansen from Nesta gave the ultimate example of an expensive trial in Denmark that was set on a course to succeed by its political masters, despite evidence to the contrary (written up in Nina Holm Vohnsen’s PhD thesis).

Measuring impact at the end of a trial is evaluative: it is done to prove that something works. It takes a long time and means that you can’t make alterations during the process. The alternative – focusing on shorter-term output measures – can, as stated earlier, drive perverse incentives and action.

Innovations don’t happen in isolation. They take place in organisations and places where other things – innovations and otherwise – are happening, and these will also affect the impact. Place-based approaches to health will be affected by the particular physical (buildings, transport, air quality) and human (communities, services, politics) elements of that place. Some new types of innovation are actively encouraging this complexity. ‘Combinatorial innovation’ (as being trialled by the NHS test-bed programme) deliberately tests technological innovations alongside the development of other new approaches. Accepting that things are complex and evolving, their approach is to assess impact through a rigorous and regular process of qualitative interviews with those involved.

It seems clear, therefore, that a new approach is needed. A more effective procedure would be to a) create a theory of change that links outputs and outcomes; and b) create a learning culture so that staff actively explore and reflect on their activity, and have a safe space to change the activity if they don’t feel it is fulfilling the theory of change and leading to the desired outcome.
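As a purely illustrative sketch, a theory of change can be held as explicit links between outputs and the outcomes they are expected to drive, so a team can always ask which outcome a given activity is meant to serve. The outputs and outcomes below are invented examples, not a real programme’s theory of change.

```python
# A minimal, hypothetical theory-of-change structure: each output is linked to
# the outcome(s) it is expected to contribute to. The entries are invented
# examples for illustration only.

theory_of_change = {
    "outputs": {
        "drop_in_sessions_run": ["earlier_help_seeking"],
        "staff_trained_in_coaching": ["earlier_help_seeking", "fewer_repeat_crises"],
    },
    "outcomes": {
        "earlier_help_seeking": "people approach services before reaching crisis point",
        "fewer_repeat_crises": "fewer people return to the service within 12 months",
    },
}

def outcomes_for(output, toc=theory_of_change):
    """Which outcomes is this output meant to contribute to? A prompt for
    reflection: if the answer is 'none', the activity may need to change."""
    return [toc["outcomes"][key] for key in toc["outputs"].get(output, [])]

print(outcomes_for("staff_trained_in_coaching"))
```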

Iterative impact measurement is a phrase that is beginning to pop up. Sharath Jeevan from STIR talked about a variation – agile impact measurement – at the DfID Tech and Education workshop in April. If frontline staff are given real-time feedback from users about how well they are doing – whether qualitative (users’ opinions) or quantitative (users’ behaviour measured by service interactions) – they can reflect critically on whether their actions are leading to outcomes. Digital services – and the data they generate – provide a big opportunity for real-time feedback, and they also make it easy for users to give it. If the feedback relates to how users are interacting with the online platform, they might not even realise they are giving it. But it might also be interesting to see feedback as a value exchange: if users know that service providers are going to act on feedback (and improve their offer), they might be more likely to give it.
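To illustrate the kind of feedback loop this implies, here is a small hypothetical sketch that pools a quantitative signal (service interactions) and qualitative signals (users’ comments) for a team’s weekly reflection. The field names and the 25% drop-off threshold are assumptions made for the example, not a recommended standard or anyone’s actual method.

```python
# A hypothetical weekly feedback bundle for frontline staff: behavioural data
# from the digital service sits alongside free-text opinions, so the team can
# reflect on both. All names, figures and thresholds are illustrative.

from dataclasses import dataclass, field
from typing import List

@dataclass
class WeeklyFeedback:
    completions: int = 0                                # users who finished an online journey
    drop_offs: int = 0                                  # users who abandoned it part-way
    comments: List[str] = field(default_factory=list)   # free-text opinions from users

    def drop_off_rate(self) -> float:
        total = self.completions + self.drop_offs
        return self.drop_offs / total if total else 0.0

week = WeeklyFeedback(completions=120, drop_offs=45,
                      comments=["Couldn't find the upload step", "Quick and easy"])

# Surface a simple signal for the team's weekly reflection session.
if week.drop_off_rate() > 0.25:
    print(f"Drop-off rate {week.drop_off_rate():.0%} - worth reviewing the journey with users")
for comment in week.comments:
    print("A user said:", comment)
```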

Ironically, perhaps – when compared with randomised controlled trials and pilots that are desired to succeed – this approach, by building reflective learning and iteration into the process, is designed to succeed.

Systems thinking. In this context, it is useful to think of impact measurement through the lens of systems thinking. Systems are made up of elements and the relationships between them, which are constantly changing the shape of the overall system. Social design is often applied to complex and evolving problems. Design – or designers – are often only part of the solution, and there are often bigger structural issues at play. How can we measure the impact of service re-design on preventing homelessness, when the forces of house prices and welfare reform are at play? What part can be apportioned to designers when frontline staff are implementing any reforms?

If design is part of a complex system, is there any point in measuring the impact of interventions (and their design) at a single point in time? Or would it be better to measure smaller feedback in order to constantly evolve the intervention as the wider system changes around it? Lankelly Chase’s and Point People’s System Changers programme encourages frontline staff to reflect and create feedback loops to the wider system. Nesta’s Rapid Results Team encourages frontline teams to identify issues and solutions and create change. And an increasing number of our projects require us to build this problem-solving mindset and approach in frontline staff as a way of implementing and sustaining innovation. Perhaps we need to see measuring impact as a way of delivering services rather than evaluating them?

Feel free to join the conversation, share your ideas or comment by contacting cat.drew@uscreates.com.

Cat Drew is Uscreates’ Delivery Director. She oversees the delivery and direction of projects, using data and design techniques and ensuring that they link to the client’s wider strategic objectives and deliver impact. Previously Cat was a senior policy advisor at Policy Lab, working with government departments to promote design-based techniques in policymaking, and Head of Police IT & Digitisation Policy at the Home Office, where she was responsible for supporting police forces to digitally transform their services and processes, working with tech experts and forces to co-design the digital capabilities that set out what a digital force looks like. Prior to that Cat was Head of Neighbourhood Policing at the Home Office, and a researcher at IPPR.