The domain of numbers and science
The randomised controlled trial has taken its place as the gold standard for providing evidence that a treatment or intervention works. It describes the important features and characteristics that make the intervention work, which must be replicated in order to achieve the desired results in other settings. The intervention is compared with a control group that lacks those features, and the differences are carefully enumerated. Systematic reviews and meta-analyses then establish the validity of the findings across studies, further boosting the value of the message. It comes dressed up in science and numbers to give it greater gravitas, but that equally makes it less accessible to many of the people who most need to be aware of the messages and how to implement them.
The power of numbers is very clear in the scientific world of precise measurement, but as Goldacre (2009) points out, unless measurement is undertaken transparently it has tremendous potential to be used in ways that deceive people into believing whatever the procurer of the evidence wants them to believe. The report of a Practice Based Evidence initiative with the National Audit Office (Morgan and Hunte, 2008) also illustrates how the power of numbers, channelled through the slavish pursuit of targets, can completely distort priorities on the ground away from recognised best practice.
If so much attention is placed on demonstrating the evidence for what we do, just how much of what we do in healthcare is evidence-based? Goldacre (2009) draws a distinction between true evidence-based medicine and the wider activity that happens in delivering a healthcare service: “From the state of current knowledge, around 13% of all treatments have good evidence, and a further 21% are likely to be beneficial… These real world studies give a more meaningful figure: lots were done in the 1990’s, and it turns out, depending on speciality, that between 50% and 80% of all medical activity is ‘evidence-based’.” (p. 199) The power of the medical lobby is still evident in these broader findings: he is still talking about ‘medical activity’. Is this really the most desirable approach to adopt for developing the wider health and social care sector, which is not medically based?
Venturing into touchy-feely territory
Academics, researchers, policy-makers and the medical establishment are very quick to dismiss areas of interest that do not fall conveniently into their evidence-based domain; such work is branded the touchy-feely stuff of casual distraction, of little scientific value! Models of team-working have received great attention from the research community, but it is arguable whether researchers are really concerned to engage with the questions of micro-detail, preferring again to judge whether a particular type of team works by its numerical impacts, particularly on hospital bed use and medical symptomatology. Only casual significance is accorded to the narrative messages of case studies and service users’ statements of their experience.
Practice Based Evidence is not about undermining good quality research, but it does place a fundamentally different emphasis on the priorities of active practice development and geography (Morgan, 2008). Evidence-based practice is often concerned with examining controlled variables from the distance of apparent scientific impartiality, then reporting the findings to practitioners through papers and workshops, leaving them to interpret the messages for their own practice. Practice-based evidence should be more about working alongside people in the workplace, helping them to develop their interpretations of recognised good practice and subsequently to evaluate the outcomes of the changes they are able to put into place.
Finding the common ground
The focus and approach of evidence-based practice provide vitally important messages for the development of defined and controlled types of intervention or treatment. However, it has a weaker role to play in addressing the challenges of wider questions, such as:
- What does good practice in team-working look like?
- How can we promote a more integrated configuration of teams across a local service?
- How and why should we develop a strengths approach to individual practice and team-working?
- How and why should we be taking risks?
These questions require an entirely different approach to developing and examining evidence. The narrative and collaborative approaches to evaluating practice development should not be judged through the prism owned by the evidence-based practice lobby, because that evidence is collected differently, to address different but equally necessary questions.
Goldacre, B. (2009) Bad Science. London: Harper Collins.
Morgan, S. and Hunte, K. (2008) One foot in the door. Mental Health Today, March, pp. 32-35.