THE EDITOR'S CORNER

Bias vs. Science in Clinical Decision-Making

Over the years, I have written several times about statistics and evidence-based dentistry.1-3 In my very first JCO interview, I was the interviewee rather than the interviewer, as our Senior Editor, Dr. Gene Gottlieb, questioned me about the science of statistics and its application in clinical orthodontic research.4 One of the major points of that interview was that the results of presumably valid, statistically analyzed, evidence-based papers need to be evaluated by individual practitioners within the context of their own actual experience, sophisticated judgment, and critical-thinking skills. More simply stated, you should not necessarily believe something you read in a journal merely because the authors have relied on a scientific research design or a statistical analysis to conclude that their results are "significant". Always remember that in the parlance of statistics, unlike everyday language, "significance" is not synonymous with "importance". In statistics, "significance" simply means that the obtained results were unlikely to have occurred due to random chance alone. "Importance" is another matter entirely.
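To make that distinction concrete, consider a hypothetical two-arm trial (all numbers invented purely for illustration): with a large enough sample, even a clinically trivial difference of 0.1mm between two appliances will register as highly "significant". The short sketch below, using a simple two-sample z-test, shows the arithmetic.

```python
# Hypothetical illustration: statistical "significance" without clinical
# "importance". All numbers are invented for the sake of the example.
import math

def two_sample_z_test(mean1, mean2, sd, n):
    """Two-sided z-test comparing two equal-size groups with a common SD."""
    se = sd * math.sqrt(2.0 / n)   # standard error of the difference in means
    z = (mean1 - mean2) / se       # test statistic
    # Two-sided p-value from the standard normal CDF, Phi(x) = 0.5*(1 + erf(x/sqrt(2)))
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return z, p

# Appliance A improves overjet a mere 0.1 mm more than appliance B
# (clinically meaningless), SD = 1.0 mm, 5,000 patients per arm.
z, p = two_sample_z_test(mean1=4.0, mean2=3.9, sd=1.0, n=5000)
print(f"z = {z:.2f}, p = {p:.1e}")  # z = 5.00, p ~ 6e-07: "significant"
```

A result like this passes any conventional significance threshold, yet no clinician would change a treatment plan over a tenth of a millimeter. That is the whole point: the p-value measures improbability under chance, not clinical importance.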

I have also mentioned in this column that in addition to being a professor of orthodontics and a practicing orthodontist at the University of Southern California's Ostrow School of Dentistry, I am a professor of statistics and research methodology, teaching classes and supervising student and federally funded faculty research in USC's Department of Biokinesiology and Rossier School of Education. Basically, I teach PhD and EdD students how to conduct research projects and how to generate and interpret statistics. There is one quotation that I always repeat to my beginning statistics students because of its ironic humor and underlying existential truth. Its original authorship is debatable--it has been variously ascribed to Mark Twain, Benjamin Disraeli, and others--but no matter who first said it, it rings true to this day: "There are lies, there are damned lies, and there are statistics." Though it is almost impossible to actually "lie" with honest, objectively designed research, investigators can "spin" scientific studies and statistical results in many ways to support conclusions that they wanted to demonstrate from the start.

This type of scientific bias--in reality, scientific cheating--was first brought to my attention when I was in school, finishing the research paper that is required of every orthodontic specialty graduate. After objectively and conscientiously gathering all my data, I took them to our program's consulting statistician. When I handed him the floppy disk (remember those?), he looked at me and asked, "How do you want this to come out?" As a young idealist who had started out in the field of experimental biology, I was a little shocked and disillusioned by such a question. I didn't want it to come out in any particular way; I wanted to let the chips fall where they might and arrive at some valid conclusions about my research subject, based on the probabilities associated with the outcomes.

A couple of years later, as a budding assistant professor just beginning a long career in academic orthodontics, I was "volunteered" to participate in a study comparing treatment outcomes obtained by "real" orthodontists to those obtained by general dentists practicing orthodontics, using like samples of finished cases. When I suggested to the senior lead in the project, a well-known academic orthodontist, that we have the cases evaluated by a matched and calibrated panel of judges with equal representation of orthodontists and general dentists, in order to eliminate rater bias, his immediate reply was, "No! They will mess it up!" In other words, he wanted no GPs to muddle up the judging; he wanted only experienced orthodontists on the panel, so he could demonstrate that the cases treated by specialists were better than those treated by generalists. This was a classic case of experimenter bias being exerted a priori with the intent of achieving a desired outcome. Fortunately, the study never took place, due to our inability to obtain a viable random sample of patients.

A final example of research bias was brought to my attention by a colleague who, like me, is both an academic and a wet-fingered (or wet-gloved) orthodontist. A clinical study was in progress to compare two treatment modalities with regard to their effectiveness in Class II correction. The first appliance had been used for many years and was generally approved by the orthodontic establishment; the second had traditionally been cast in a negative light by more conservative clinicians. As the trial progressed, it became clear that the second appliance was every bit as effective as the first--if not more so. It was also obvious to the researchers that the reason was poor patient compliance with the first method: the appliance could not bring about satisfactory treatment outcomes if the patients would not use it. This apparently upset the researchers, who favored the first modality in their own practices and in their teaching, so they started paying the patients in the first group to use their appliances. It was scientific cheating at its most blatant. One has to wonder how often similar biases have influenced clinical trials of appliances that have not found favor with the conservative establishment. Devices such as self-ligating brackets, aligners, and functional appliances come to mind.

Am I implying that all published "scientific", evidence-based papers should be regarded as heavily biased scientific hokum? Absolutely not. I know many, many orthodontic researchers, and the vast majority of them are people I admire, honest scientists of the highest personal and professional integrity. What I am saying, however, is that some research presented as "science" is less than honest and therefore questionable. The problem for clinicians is that it is difficult to identify such studies by their stated methodologies once they hit the press. Moreover, many other research outcomes are withheld from publication for fear of incurring the wrath of the powers that be. Experience-based decision-making--often referred to as "anecdotal", in contrast to evidence-based decision-making--has taken considerable heat from organized science in recent years. This is partially due to genuine concerns about scientific validity, but it is also motivated by the desire of an entrenched, academically based hierarchy to endorse certain treatment modalities over others.

When a highly trained, reasonably experienced orthodontist is faced with a clinical conundrum, there are three options: 1) rely on his or her own background, education, and clinical judgment to make a "best guess" decision; 2) consult respected colleagues or their writings; or 3) depend on the published evidence-based literature. I submit that the best approach is to pursue all three options, adding a healthy dose of skepticism and critical thought. In reality, most experienced and ethical practitioners do quite well in "best guess" situations. No two clinical scenarios are exactly alike; making inferences and predictions about one case based on experience with similar cases is what day-to-day decision-making is all about. As I have shown here, any study you read--whether anecdotal or evidence-based--can and will be influenced by the authors' biases, no matter what their motivation and credentials. You need to subject every paper to your own experiential filters because, as I remind all my students, there are lies, there are damned lies, and there are statistics. Be highly selective about the experts you choose to follow, and don't believe everything you read in the scientific literature.

RGK

REFERENCES

1. Keim, R.G.: The weight of the evidence, J. Clin. Orthod. 38:121-122, 2004.
2. Keim, R.G.: The power of the pyramid, J. Clin. Orthod. 41:587-588, 2007.
3. Keim, R.G.: Lies, damned lies, and statistics, J. Clin. Orthod. 45:61-62, 2011.
4. Gottlieb, E.L.: JCO Interviews Robert G. Keim, DDS, on living with statistics, J. Clin. Orthod. 31:307-314, 1997.

DR. ROBERT G. KEIM, DDS, EdD, PhD
