The writer Saville Kushner, commenting on the difficulty of evaluating participatory arts projects taking place in education contexts, once observed that ‘evaluation – as a representation of human experience – is as intractable a problem as the art it observes and all evaluators can ever do is their best’.
Arguably, there is a particularly urgent need for evaluators to ‘do their best’ at the moment, as policy makers, funders, project co-ordinators and participants recognise the importance of understanding and articulating what constitutes ‘quality’. Effective evaluation is needed not only to assess the ‘success’ of projects, but also to enhance the progress of a programme, represent different participants’ experiences, disseminate good practice and learn from previous activities. In simple terms, we need to understand and make explicit what we are doing, how we are doing it, why we are doing it, the effects it has and what we could do better. Identifying ‘quality’ is no easy task.
One response to this challenge is to search for the holy grail in the form of a workable and ‘objective’ evaluation framework: the single, simple, yet effective tool that can make sense of the total human experience. One of the most commonly used of these, the Generic Learning Outcomes (GLO) model, provides a method for demonstrating impacts and outcomes. It is an approach that has undoubted value and, in my experience, can prove useful in terms of advocacy. Others I know have used it as a starting point for discussion internally, with the ‘outcomes’ serving as provocations. But what this framework does not do is present the whole picture. For instance, in my view, the ‘what’, the ‘how’ and the ‘why’ of arts education activity are difficult to evaluate using the GLOs, which in turn can make it challenging to consider how the processes, structures and systems we are using can be improved. Focusing on only one aspect (in this case, outcomes) can lead to partial representations and insights. This in turn suggests that multiple frameworks need to be employed in order to achieve a ‘complete’ understanding.
But the need for evaluation methods that are not too time-consuming or unwieldy also remains. Funders understandably need meaningful evaluation of work they have supported, and practitioners want ways of understanding and improving programmes. All this when, in most cases, time and resources are limited. Complicated data collection techniques and multiple approaches that require hours of analysis are not realistic. Within the Learning department at Tate Gallery, despite its size and scale, the situation is no different. We seek to understand what constitutes quality in all that we do, but in a realistic and sustainable way, and we are therefore adopting approaches to interrogating and mapping ‘quality’ that are simultaneously pragmatic and idealistic. We are doing this by going back to first principles: by identifying the complexity of what we are trying to look at, while acknowledging the need for simplicity; by making clear the values that underpin our activity, to provide a basis against which to evaluate, while recognising that there are other agendas to consider; by drawing on a range of evaluation frameworks and methods that best suit the nature and ambitions of the specific programme and the audience for whom the evaluation is intended; and by trying to be as clear and honest as possible about what we are attempting to do. As far as possible, we aspire to embed evaluation within the activity, rather than bolting it on at specific moments.
We are, without question, facing challenges and frustrations, but greater understanding is emerging and the usefulness and relevance of particular concepts and frameworks are becoming clearer (for example, the GLOs for advocacy and ‘Appreciative Enquiry’ for programme development). It is a work in progress, with a view to using and developing evaluation frameworks (understood within a ‘meta-framework’) that are genuinely fit for purpose. Above all, and with Saville Kushner’s observation in mind, we are trying to do ‘our best’.
Head of Learning Practice, Research and Policy