Written by Dr Sara Javanparast, Research Fellow, Research Centre for Palliative Care, Death and Dying.
In 2021, the Research Centre for Palliative Care, Death and Dying (RePaDD) was commissioned by the SA Department for Health and Wellbeing to develop an evaluation framework and to implement the evaluation of the Comprehensive Palliative Care in Aged Care (CPCiAC) pilot projects in South Australia.
The CPCiAC program is an Australian national initiative that aims to improve access to quality palliative and end-of-life care for older Australians living in residential aged care. Joint funding from the Commonwealth and state/territory governments is allocated to trial new models of palliative care, or to improve existing models, in residential aged care facilities. In South Australia, combined funding of $7.65 million was allocated to improve shared care between specialist palliative care services, general practitioners and the aged care workforce, increase specialist palliative care in-reach support, enhance workforce education and training, and provide grief and bereavement support to families. Two sites participated in South Australia: 1) Eldercare (a not-for-profit aged care provider); 2) the Rural Support Service (a government entity providing support to the six regional Local Health Networks).
The program evaluation has been successfully completed and the findings are being disseminated through various strategies. In this blog, I would like to reflect on the evaluation process and highlight three critical considerations when evaluating complex, large-scale projects.
FIRST, a good understanding of the ‘context’ within which the program is implemented is crucial. This includes the broader policy context, but also the organisation’s internal context and culture.
For the purpose of this evaluation, we undertook a comprehensive review of national and South Australian palliative care policies, the report of the Royal Commission into Aged Care Quality and Safety, and the Aged Care Quality Standards. This gave us great insight into palliative care policy priorities and challenges in the aged care setting, and into what we needed to consider while designing and implementing the evaluation. This process was particularly important for assessing how program outcomes align with the policy context, and the potential for those outcomes to be sustained and scaled up.
At the organisational level, contextual factors including workforce, infrastructure, internal capacities, and resources significantly affect the level of engagement with the project team, the aged care workforce, and residents and families. These factors can also enable or hinder the collection and quality of evaluation data. The two sites involved in our evaluation comprised a not-for-profit organisation with facilities mainly in metropolitan Adelaide, and public facilities in rural and regional areas. Indeed, without acknowledging these contextual differences and taking contextual factors into consideration, it would have been difficult to thoroughly evaluate models of palliative care across multiple sites.
SECOND, engaging stakeholders through participatory evaluation methodologies is the way to build mutually respectful relationships, which in turn improve evaluation quality.
A key to successful evaluation is to engage with those who influence, and are influenced by, the project. We used various strategies to ensure meaningful engagement with stakeholders. These included regular meetings with SA Health and the project teams, phone and email communication as needed, and pre- and post-evaluation forums to discuss evaluation processes and potential challenges. Stakeholder mapping helped us identify potential stakeholders, whom to engage with, and how. The forums provided great opportunities to seek input from the sites, discuss priorities, and explore ways to address challenges. As a result of the pre-evaluation forum, we adjusted some evaluation approaches and methodologies to meet each site’s needs and priorities. After completing the evaluation, we shared lessons learnt, dissemination plans, and ways forward in a post-evaluation forum. This participatory approach helped us as evaluators to refine our methodologies based on needs, and also supported the project sites to improve their data quality, documentation, and reporting.
THIRD, there is no ‘one size fits all’ approach to successfully implementing and evaluating complex projects in complex health and aged care settings.
We employed a critical realist approach to answer the question of ‘what works, for whom, and in what circumstances’ (Pawson and Tilley, 1997). Although the same few models of care were implemented across both project sites, the sites took quite different approaches to implementation, and so did we in the evaluation. A good example was palliative care Needs Rounds, which were tailored at each site according to its needs, geographical location, workforce issues, and organisational capacities and resources. As evaluators, it was crucial for us to recognise that, depending on the circumstances, a given model of care may or may not work the same way across different settings and populations.
This work reaffirmed that any project evaluation should be seen as a transformative and powerful change process with great potential to inform practice and policy. Building relationships can be costly and time-consuming, but it is a critical element of successful evaluation, and it requires evaluation knowledge and skills, expertise in participatory approaches, policy analysis, and systems thinking. Our learnings from the evaluation of the CPCiAC project are applicable to other healthcare interventions, and their consideration is important for evaluators, project implementers, and program funders.
- Pawson R, Tilley N. Realistic evaluation. Thousand Oaks, CA: SAGE; 1997.
Hear more from Dr Sara Javanparast about the Comprehensive Palliative Care in Aged Care (CPCiAC) program at the RePaDD Seminar Series:
Dr Sara Javanparast