Re: Implementation strategies – moving guidance into practice

Authors

  • Glenn Blanchette MBChB FRANZCOG FRCOG BSc DipPG


Dear Sir

I disagree with the philosophy that provoked a recent article in TOG.[1] As Latibeaudiere et al. point out, many authors have complained about the difficulty of implementing research evidence in clinical practice. The disparaging term 'inertia' has been used, most commonly to describe those of us who are ageing practitioners. I would like to present a complementary opinion.

My PhD research interest is in artificial neural networks. Artificial intelligence is a younger science than medicine, but I believe it offers an objective view of learning. In a supervised context, machine learning relies on a delta derived from present experience (the current research): the difference between the desired output and the actual output. However, any machine that trains on this delta alone is likely to fail. The best algorithms include a mathematical term representing the machine's past accumulated experience.[2] This adaptive factor carries the machine forward across 'potholes in the problem surface', which are inadequate solutions. Researchers in the field of machine learning have used a far less disparaging term for it: momentum, a protection against the machine getting stuck in blind alleys (or, in mathematical parlance, local minima).
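For readers unfamiliar with the formalism, a minimal sketch of the standard gradient-descent weight update with momentum may help (the symbols w, E, η and α are the conventional weight, error function, learning rate and momentum coefficient; they are illustrative rather than taken from the cited work):

\[
\Delta w_t = -\eta\,\nabla E(w_t) + \alpha\,\Delta w_{t-1}, \qquad w_{t+1} = w_t + \Delta w_t, \qquad 0 \le \alpha < 1
\]

The first term is the correction demanded by the present error (the delta); the second, weighted by α, carries forward the accumulated direction of past updates, smoothing the trajectory so the machine is not captured by the first local minimum it meets.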

Moreover, the concepts of inertia and momentum differ subtly. Inertia can only be negative; it results only in delay. Momentum can either limit or accelerate change. Where the current evidence is at odds with past experience, it reduces the change of direction that the evidence alone might be expected to produce. Where the directions of past and present coincide, however, reinforcement occurs. As an illustration, I cite the much-maligned 'Breech Trial' by Hannah et al.[3] Here is a piece of research that was rapidly taken into common practice from the day it was published, because the results of the trial agreed with the past accumulated experience of the profession: there was reinforcement.

I would like to ask the authors this: do they see a difference between research evidence and best practice for an individual patient?

And if they don't: what point do they see in training doctors?
