
InFocus

“We need to follow the evidence, but we also need to try to work out what the data is not saying, what is missing and why. We need to understand the evidence we don’t realise we have”

The rapid fire of daily consultations that we all deal with does not always lend itself to working in the most evidence-based way. Many of us will be familiar with the idea of an evidence pyramid made up of multiple layers. The layers of the pyramid represent different types of evidence, and the further up the pyramid you go, the more valid that evidence becomes.

So, at the base we have the hundreds of “n=1” or “n=2” experiences that you and your colleagues report – reliable people, but only very small numbers. Next up are the many case reports, not usually proving cause and effect but feeding into the next layer up of case-control studies. Then there are cohort studies, and the penultimate layer is the gold-standard clinical evidence of randomised, blinded clinical trials. Sitting above these, and slightly aloof, are the systematic reviews and meta-analyses, where we academics gather many of these trials together and pool their results.

In veterinary medicine, however, large-scale trials are few and far between. One of the largest and best-known was the EPIC study of pimobendan use in dogs, a double-blinded, placebo-controlled trial of 360 dogs across 11 countries (Boswood et al., 2020). Yet even this would be viewed as tiny in human medicine.

These all deal with the “known unknowns” – they look at things we know about and want to test. However, there are many things that walk into our consult room that we don’t know about: as Donald Rumsfeld (2002) famously said, “Reports that say that something hasn’t happened are always interesting to me, because as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say, we know there are some things we do not know. But there are also unknown unknowns – the ones we don’t know we don’t know.”


These thoughts were brought home to me when I read about the mathematician Abraham Wald, and how his insights saved the lives of many airmen over the years. During the Second World War, aircraft would return from battle riddled with bullet holes from enemy fire, but the holes seemed to have a predilection for certain areas of the planes. Naturally, the Allies’ first instinct was to fortify the areas where they had plotted the most holes – an obvious move. Wald, however, pointed out that perhaps the reason some areas of the returning planes had no bullet holes was that planes shot in those areas did not return at all. This insight led to a change of tactics: the armour was instead reinforced in the areas where the returning planes generally had no bullet holes. The policy was continued in the Korean War and has been credited with saving many lives.

This story shows that the reasons we are missing certain data might be more meaningful than the available data itself. We need to follow the evidence, but we also need to try to work out what the data is not saying, what is missing and why. We need to understand the evidence we don’t realise we have.


A recent veterinary Facebook discussion brought these two thoughts together and is a simple example of where evidence, or the lack of it, is interesting. Lightly salt with a bit of confirmation bias and you have a very confusing picture! The discussion was about whether a certain brand of cat food led to more male cats developing a urethral obstruction. Cue quite a few people saying that in their experience this was true. Others had never heard of it with that cat food, but they had with another brand. Cue more people agreeing. I added that, hypothetically, if a certain cat food accounted for 60 percent of cases, that would be a serious cause for concern – but what if that brand had a 65 percent market share? Then, relative to other brands, it would actually appear protective. Lastly, I added: what about the ones who, like the planes, never made it? There may be a brand of food so bad for causing urethral obstructions that the cats die before they get to the vets, or whose buyers cannot afford veterinary treatment, so those cases never make the statistics.
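The market-share point can be checked with a little back-of-the-envelope arithmetic. The sketch below uses the hypothetical numbers from the discussion (60 percent of cases, 65 percent of the market) – these are illustrative figures, not real data – and computes a simple relative risk for the brand versus all other brands combined:

```python
def relative_risk(case_share, market_share):
    """Risk of obstruction on brand X relative to all other brands,
    given the brand's share of cases and its share of the market.
    Assumes cases are rare and cats are otherwise comparable."""
    risk_on_brand = case_share / market_share              # proportional risk per cat on brand X
    risk_off_brand = (1 - case_share) / (1 - market_share) # proportional risk per cat on other brands
    return risk_on_brand / risk_off_brand

# Hypothetical: brand X accounts for 60% of obstruction cases
# but 65% of the cat food market
rr = relative_risk(0.60, 0.65)
print(round(rr, 2))  # 0.81 - below 1, so the brand appears protective
```

A relative risk below 1 means a cat eating brand X is, on these made-up numbers, less likely to obstruct than a cat eating anything else – the opposite of the conclusion the raw case count suggests. And, as the Wald story warns, even this calculation only covers the cats that made it to a vet.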

In veterinary medicine, as with our Second World War planes, we need to question the data and, if there are noticeable absences, ask ourselves “why?” and “what does that mean?”
