Clinical equipoise. Spencer Hey, Alex John London, and Charles Weijer argue that there is no better framework for justifying patient participation in research. But Annette Rid and Franklin Miller say that it is a mistake to require clinical research ethics to align with the norms of personal practice. Doctors also have a duty to provide patients with care consistent with professional standards. Clinical equipoise was first proposed as a solution to the problem of randomisation 30 years ago. It is defined as a state of disagreement or uncertainty in an informed, expert medical community about the relative merits of the intervention arms in a trial. It can also mean that some experts favour one intervention over the others, but different experts prefer different interventions for the same patients.
The focus of the paper was to inform the reader about the perils of failing to control for a lack of personal equipoise in randomized controlled trials and, in a related sense, the hazards of an inappropriate study design that lacks clinical equipoise; in other words, a study that was set up for one intervention to succeed over another.
A true state of equipoise exists when one has no good basis for choosing between two or more care options. Joe Brence joins me to further discuss this important issue. If a team of McKenzie-based clinicians provided a McKenzie or MDT approach versus a generic application of unsophisticated manual therapy care, one might also suggest bias.
Further, because of assumed bias, studies of sponsored interventions from device companies, equipment suppliers, or others with a financial interest in the outcome are often very difficult to publish. The authors of these studies are frequently required to report their vested interest in one side of the intervention. A JBJS study is a characteristic example of this challenge, since the primary author was also a paid consultant for the company behind the STAR total ankle replacement device.
We the authors were required to disclose our personal interests in the outcome of the trial, which indeed did favor the STAR device. I was able to disclose that I had no personal interests. The Committee on Publication Ethics (COPE) guidelines actually require this disclosure for all publications, and when it is absent, that is considered a strong enough reason to request retraction of the publication. Recently, there have been a number of manual therapy studies that were designed by clinicians who have a personal interest in the success of one intervention over another.
These clinicians either have a vested interest in the applications that are part of the interventional model because they profit from teaching these techniques in continuing education courses (and although that profit may seem minimal, it is not), or because the tools are part of a philosophical approach or a decision tool that was designed from their own efforts or the efforts of those with whom they are affiliated.
In nearly all cases, the bias is unintentional and certainly not malicious. Nonetheless, in most cases, the comparator intervention is designed in such a manner that it does not adequately represent clinical practice, and in some cases, the same comparator is used despite the fact that it has been demonstrated to be ineffective in past clinical trials.
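The mechanism described above can be illustrated with a toy simulation (not from the post; all numbers and names are invented for illustration): the very same intervention produces a much larger apparent effect when pitted against a comparator already known to be ineffective than against one that reflects realistic clinical practice.

```python
# Hypothetical illustration of comparator choice shaping trial results.
# Effect sizes and sample size are invented for the sketch.
import random

def simulate_trial(treatment_effect, comparator_effect, n=200, seed=0):
    """Return the mean outcome difference (treatment - comparator)."""
    rng = random.Random(seed)
    treated = [treatment_effect + rng.gauss(0, 1) for _ in range(n)]
    control = [comparator_effect + rng.gauss(0, 1) for _ in range(n)]
    return sum(treated) / n - sum(control) / n

# Same "new" intervention (effect 1.0) against two different comparators:
vs_realistic = simulate_trial(treatment_effect=1.0, comparator_effect=0.8)
vs_ineffective = simulate_trial(treatment_effect=1.0, comparator_effect=0.0)
```

With identical noise, the gap between the two results is exactly the gap between the comparators, which is why a weak comparator can make an ordinary intervention look impressive.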
It is my impression that these studies lack clinical and personal equipoise and require the disclosure of conflict of interest that is outlined within the COPE guidelines. This concerns me greatly for a number of reasons. Certainly, once a practice is advocated within the clinical population, it takes years for its use to fade, even after its findings are acknowledged to be erroneous.
You know how some patients have learned helplessness? Our profession perpetuates that type of attitude when it comes to research: research learned helplessness. Dr. Cook, first of all, thanks for a fantastic post. Second, I agree that when there is a personal bias, anyone can design a study that will come out in their favor.
We are neurobiologically wired for bias. Otherwise, for most of us, our heads would be spinning as we tried to make sense of what is going on in the world. However, the next time I talk with an expert, I will ask: where is the evidence? Journals share an important responsibility, in my mind, in sorting out these kinds of papers where authors have a vested interest in one of the interventions. That is why they should be very cautious when deciding whether an article is fit for publication.
It is also why I cannot understand why some journals let papers with very questionable methodology make it into their pages. That is especially true when there are relatively few papers on the topic or question addressed, as this gives the questionable paper a disproportionately large weight among the currently published evidence on that topic. I think this applies really well to the current manipulations-versus-mobilizations debate. I also agree with Arthur in this recent post: not naming names either, but other therapists who tweet media reports with inaccurate interpretations of a single journal article are promoting some of that learned research helplessness.
Belief to me means that one has reconciled or accepted that something is true, whereas for the most part we live in a world of probabilities, where truth is a dynamic mystery. As a wannabe brilliant epidemiologist, it seems to me that the more I dig, the more complexity is revealed. What are the assumptions of using said bootstrap method and were the assumptions met?
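The bootstrap question raised above can be made concrete. Here is a minimal sketch of a percentile bootstrap for a confidence interval around a sample mean (function name, parameters, and data are invented for illustration); its key assumptions are that observations are independent and the sample is representative of the population it was drawn from.

```python
# Minimal percentile bootstrap sketch; everything here is illustrative.
import random

def bootstrap_ci(sample, n_resamples=2000, alpha=0.05, seed=42):
    """Percentile bootstrap confidence interval for the sample mean."""
    rng = random.Random(seed)
    n = len(sample)
    # Resample with replacement, compute the mean of each resample, sort.
    means = sorted(
        sum(rng.choice(sample) for _ in range(n)) / n
        for _ in range(n_resamples)
    )
    lo = means[int((alpha / 2) * n_resamples)]
    hi = means[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

data = [12.1, 9.8, 11.4, 10.6, 13.0, 8.7, 10.9, 12.5]  # made-up outcomes
low, high = bootstrap_ci(data)
```

If the sample is small, non-independent (e.g., repeated measures on the same patients), or the statistic is badly behaved, the percentile interval can be misleading, which is exactly the kind of unmet assumption the commenter is asking about.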
Would an above-average clinician, well versed in statistics, be able to critique a meta-analysis with that depth of knowledge, understanding, and expertise? I find myself in the latter category more often than not, and am comfortable nowadays with the uncertainty of it all. If I were a fly on the wall watching an EBM-type physical therapist, clipboard out, taking notes during their assessment, treatment, and advice, how many times would I find a logical fallacy, or a point during the diagnosis when the test results were ambiguous but ignored?
I base this confidence on personal experience first, and then on my direct observation of many therapists. Very interesting thoughts posted by all. As a relative newcomer to the PT blog scene, I am continually impressed by the passion and debate that is ongoing in cyberspace.
As a new graduate, I was inundated with the importance of EBM in graduate school and essentially developed a blind faith that all research was conducted for the benefit of our patients. Unfortunately, as I have continued to grow as a clinician and my knowledge has started to catch up with my inquisitive nature, I have come to realize that, sadly, ulterior motives exist for the publication of research.
I share some of the same concerns that others do regarding research design and bias and I am simply left in a state of frustration. I do not have nearly the amount of research experience as others on this post and thus am left hoping that the research I encounter is performed for a just cause. As I read more and more articles and debates about EBM, more and more flags are raised cautioning me to not always believe what I read.
Whether it is two seemingly well-constructed studies providing contrasting results, or researcher bias as stated above, I find it difficult to weed through the amount of material to find the information that is truly applicable and valuable to my patients.
Furthermore, I am very concerned about the number of clinicians who base their decisions and practice philosophy on supposedly keystone articles that may or may not support their pre-existing bias. How can they be certain there is not a fatal flaw in the research design that they are overlooking because of psychosocial factors they are unaware of?
I acknowledge and understand the importance of EBM, but how, as a novice in the research realm, can I find the truth for my patients? Any thoughts from those with a greater understanding? Mr. Maiers, it is very simple. In order for you to gain a greater understanding, you must take a course in clinical research. If you cannot find a course, obtaining a copy of Foundations of Clinical Research will serve you well. To give a man a fish is to feed him for a day.
To teach a man to fish is to feed him for a lifetime. Thanks, Alden; I will investigate course offerings and certainly look into the book you mentioned. Perhaps more important is the need for a specific research outcome to be reproduced in multiple locations, by multiple researchers, over an extended period of time.
Think resveratrol, glucosamine, chondroitin, fish oil… Dr. Cook, thanks for an excellent post that provides some excellent food for thought for those of us trying to grasp EBM and how to provide the best care for our patients on a daily basis. I think your point regarding competing interests is an important one. As you stated, many times the authors may list none, but it is hard to imagine there truly are none.
Getting the study published might help with promoting the continuing education course they are involved with, or with getting tenure at their university; many possible conflicts of interest exist. It is hard to imagine that, while most papers state the authors have no competing interests, they really have none. Nic, thanks for pointing out that it is okay to be comfortable with a little uncertainty. I think we search for things that may or may not be there just to try to create certainty for ourselves with some patients.
I think being comfortable with uncertainty is very important in being an effective physical therapist. And it is often what we are asking of our patients as we reassure them that a bioanatomical cause of their pain may never be found and in fact is not necessary to relieve their pain and restore their function.
I think there is sufficient reason to believe there is bias in all research, so I read all of it with some cynicism. I would like some actual references to a paper or two here; I wonder about the hesitancy to provide an example when it is published research, making it fair game for lively discussion. Post-publication peer review is a great way to generate some critical thinking. I need to be able to defend my clinical assumptions, and to do that I have to read the literature.
I think everything we do in the clinic should be fair game for fresh reviews. I really enjoyed this editorial follow-up by Dr. Cook and am glad to see a follow-up here on a blog. I would love to have follow-ups from authors on my site too. I think this drives a significant amount of discussion and provides a learning environment for all, especially from experts in our field.
I agree with Sandy that I would like to know more details about which articles are being discussed here. I have a good idea but feel somewhat out of the loop.