I’m confused. And that’s not necessarily such a bad thing.
I’m as confused as most people, or perhaps more so. And even that’s not necessarily a bad thing.
But what I’m confused about are issues around health and medicine, and that is a bad thing.
After years of schooling and training, experience in medical research and teaching, and decades of clinical practice, I should not be so confused about health and medical issues.
If I am, what does that say about the available data and information, and about how they are being conveyed to the public and to medical and scientific professionals? And how are professionals and non-professionals supposed to act in the face of such confusion and uncertainty?
The number of confusing issues in health and medicine is staggering: lifestyles, vaccinations, medications, referrals and consultations, non-physician “healthcare providers,” access to care, costs, public policies, ageing, and so on. The list seems endless.
Why so much confusion?
I certainly don’t have all the answers, but some things seem evident to me.
First, science evolves. New findings, new studies, new techniques, new inquiries, new analyses can show that a prior belief is false or inaccurate. We accept that things that we thought were so are, in fact, no longer supported by solid evidence. I recall being told in medical school that half the things we were being taught would turn out to be untrue. (Of course, we didn’t know which half.)
But when scientific “facts” or conclusions are proven wrong, it undermines faith in science itself. How can we believe in science when it changes its mind? The key point, I think, is that when science changes, it is because new and better evidence has come to light. Most science proceeds incrementally; new findings usually add to prior ones rather than destroy them. Creative destruction is not a scientific commonplace. Occasionally a brilliant insight changes our understanding dramatically, but it must still meet some test of plausibility, of reality. That’s good: science marches on.
Second, some things we believe are matters of faith, not evidence. Belief in a supreme deity, belief in an afterlife, belief in a soul separate from the body — these are held as immutable truths by some but regarded as fictions and superstitions by others. Where proof is lacking to some but conviction is unshakeable to others, there is bound to be confusion, uncertainty, and disagreement.
Third, not all evidence is equal, and our faith and confidence in conclusions drawn from evidence may vary depending on the strength of the supporting data and information. Also, depending on the nature of the evidence, and the importance in our lives of the “facts” the evidence supports, we may have different levels of reliance on the same information. In so-called Evidence Based Medicine (EBM), for example, there are generally accepted hierarchies of medical evidence indicating the strength of the data. And in current medical guidelines prepared by panels of experts, there are even gradations in the strength of the recommendations based on the perceived strength of the underlying evidence.
Fourth, there are some individuals and groups who hold and promote ideas that lie far outside the mainstream of accepted evidence. Perhaps the ultimate example of this is the Flat Earth Movement. Yes, believe it or not, there are people who firmly maintain that the earth is flat, despite the idea having been disproven in ancient Greece and again and again throughout history.
When people with unfounded beliefs are in positions of authority, their opinions often achieve more acceptance than they deserve. While in power, their ideas may command attention and credence due to the “weight” of their office; but take away the bully pulpit, and their beliefs lose their semblance of credibility.
Fifth, there are some who promote ideas based largely on their self-interest. When awards, reputations, compensation, career advancement, and other benefits depend on a specific outcome in science, business or some other area, people may be willing to fudge the truth or outright lie to protect their “investment.” There are numerous examples of forced retractions of scientific papers, even from highly reputable scientific journals, when authors have been found to have falsified data. I had the personal experience of foiling a medical undertaking promoted by a group who would benefit from its acceptance — I pointed out the dangers to patients that were essentially hidden by the promoters — and while the project was cancelled, my action was met with resentment and retaliation.
The notion that people with a stake in a particular outcome might falsify data and information is recognized in many circles. In medical science, this largely takes the form of requiring researchers and lecturers to list their so-called Conflicts of Interest (COI) when they publish an article or present their findings. This does not imply that everybody with a personal interest in a specific outcome will be dishonest, but it at least alerts the community to possible falsifications or misemphases. Of course, the requirement of disclosure is itself often abused.
Finally, there are those who simply like the idea of creating mistrust, disbelief, loss of confidence, and even fear. They may not have a personal stake in a specific outcome. They may not have a personal belief that they wish to promote. They may not even care about the particular issues or ideas with which they seem to be concerned. They seek, for whatever reason, to manipulate, to cause dissension and disagreement. When the effect on community, on society, is profound and disruptive, they achieve their goal.
So, how does one conduct oneself in the face of uncertainty about scientific “truth,” particularly in regard to health and medical issues? My own strategy, and it is not rigidly defined, is to first look at the evidence supporting any claim. Does the evidence seem reliable? Was it gathered in a way that makes sense? Does it fit with the reality of the world around it? Are the people reporting it reliable, honest, and free of entanglements? Do they have personal credibility? Does the evidence cancel or contradict old beliefs, simply modify them, or does it add to their relevance?
I then try to assess what would happen if I acted on the new evidence or belief. Would it affect me or my family or my community in a major way? Would it be disruptive? Would it be salutary? Would it require a big effort to act on the new belief? What would have to change in our lives? Would there be any impact at all? What would be the costs, not just monetary but in terms of safety, comfort, security?
If the evidence, new or old, and those endorsing it, seem reliable, and acting on the evidence produces satisfactory results at reasonable costs, I’ll adopt the belief and the actions it supports. If the evidence is reliable but the costs are high, then I require that the results of acting upon the evidence be extremely beneficial. If the evidence and its supporters are unreliable, then I’ll require clear and unequivocal proof of benefit, and will discount any unproven claims.
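The three-branch rule above can be sketched as a toy decision function. This is a personal heuristic rendered illustratively, not a clinical tool; the function name, numeric threshold, and parameter labels are my own assumptions, not anything stated in the original.

```python
def adopt_belief(evidence_reliable: bool, endorsers_reliable: bool,
                 benefit: float, cost: float,
                 proof_unequivocal: bool = False) -> bool:
    """Toy sketch of the heuristic: weigh the reliability of evidence
    and its endorsers against the real-life costs and benefits of
    acting on it. The 3x threshold is an illustrative assumption."""
    if evidence_reliable and endorsers_reliable:
        if cost <= benefit:
            # Reliable evidence, satisfactory results at reasonable cost.
            return True
        # High cost: demand that the benefit be extremely large.
        return benefit >= 3 * cost
    # Unreliable evidence or endorsers: require unequivocal proof,
    # and discount any unproven claims.
    return proof_unequivocal

# Strong evidence, trusted endorsers, clear net benefit -> adopt.
print(adopt_belief(True, True, benefit=10, cost=2))
```

The point of writing it this way is only to make the structure visible: the bar for adoption rises as the reliability of the evidence falls or the cost of acting on it climbs.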
Applying this process to the currently controversial issue of vaccines, the evidence for the “old” vaccine recommendations seems quite strong, the endorsers quite reliable, and the personal and societal consequences very beneficial. The promoters of newer vaccine schedules provide little reliable evidence, in my mind, that their ideas are sound, that their conclusions are based on knowledge and expertise, and that the consequences of acting upon their recommendations are safe and beneficial.
This sort of heuristic process does not guarantee the right decision. But it allows me to approach scientific uncertainty in a somewhat organized way and come to conclusions that not only make intellectual sense, but account for the real-life consequences of belief and action.