Student Stories
April 2nd, 2025
The World Health Organization (WHO) 1 publishes continuous and comprehensive statistics on clinical trials worldwide. In 2024 there were 38,788 newly registered clinical trials, adding up to more than 900,000 trials since 1999. The majority of clinical trials are registered in the Western Pacific region, whilst the EU and the USA register similar numbers, together accounting for around 35% of all clinical trials. Clinical trials inform us about new interventions and diagnostic or prognostic tools that can then be used in medicine and are therefore of immense importance for healthcare progress. They are designed to accommodate the conditions specific to the disease, population, and intervention so that the results translate into real-world practice.
This replicative characteristic of clinical trials contributes to their external validity by confirming that results generalise across different populations and settings. That is, a clinical trial conducted in one population should be replicable in another population under the same conditions and yield similar results. Unfortunately, this is often not the case. In a Lancet commentary, Macleod and colleagues 2 outline that around half of studies did not refer to existing systematic reviews when designing trials, nor did they implement proper procedures to minimise bias. In science, we like seeing papers that report contradictory results just as much as papers that build upon or confirm recent evidence. However, such comparisons usually have to be made under rather similar conditions, which brings us to reporting issues. In the same commentary, the authors note that around 30% of trial interventions are not fully described. This failure makes it impossible to replicate the study conditions and, in turn, to compare results appropriately. A similar issue was addressed by Stark 3 , who argues that reproducibility requires "preproducibility" first: methods must be reported thoroughly and robustly enough for anyone to reproduce them.
The irreproducibility crisis is not limited to clinical trials; it extends to other fields such as the natural sciences and psychology. Lithgow, Driscoll, and Phillips wrote a Nature commentary 4 on their attempts to standardise worm handling across labs. It took experiments on roughly 100,000 worms before they realised that subtle disparities, such as handling and age determination, affect the results. Even then, they still observed in-house run-to-run differences.
In medicine, clinical trials undergo critical appraisal when they are read. That means doctors and scientists analyse them to understand the strengths and weaknesses of the trial design and the extent to which these could positively or negatively affect the outcomes. Several major domains are analysed. One is field-specific: clinical trials must be appraised by specialists in the respective fields, as they can spot missing clinical features in the design. A biostatistician builds on this by appraising the statistical methods employed in the study. Finally, both the clinician and the biostatistician appraise potential biases and the degree to which they impact the outcomes. This process is crucial for developing a systematic review with meta-analysis.
Systematic reviews ensure that all papers on a topic of interest are gathered in one place and summarised narratively, whilst a meta-analysis provides a quantitative summary of the data. Together, this is called evidence synthesis, and it would not be possible without replication studies. Yet even when replicated studies exist, they are not comparable if their designs differ significantly, especially in a meta-analysis. Consequently, studies addressing, for instance, the same condition but differing in population age or drug combinations are excluded from such reviews. This limitation may impede the production of systematic reviews and meta-analyses, as there may not be enough homogeneous data to produce one. Ultimately, this affects guideline recommendations on disease management, since guidelines rely on high-quality evidence, and it forces clinicians to base their rational decision-making on lower-quality evidence.
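For readers curious how the "quantitative summary" in a meta-analysis is actually computed, here is a minimal sketch of the common inverse-variance (fixed-effect) pooling approach. The effect estimates and standard errors below are made-up illustrative numbers, not data from any real trial.

```python
# Minimal fixed-effect (inverse-variance) meta-analysis sketch.
# The three (effect, standard error) pairs are hypothetical, for illustration only.
import math

studies = [(0.30, 0.10), (0.25, 0.15), (0.40, 0.12)]

# Each study is weighted by the inverse of its variance,
# so more precise studies contribute more to the pooled estimate.
weights = [1 / se**2 for _, se in studies]
pooled = sum(w * e for (e, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))
ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)

print(f"pooled effect = {pooled:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```

Note that this simple pooling is only valid when the studies are sufficiently homogeneous; when designs and populations differ too much, the pooled number is meaningless, which is exactly why heterogeneous studies are excluded.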
Finally, a high-quality study design without methodological deficiencies produces trustworthy, high-quality evidence that supports rational, better-informed decision-making. This, in turn, improves healthcare and patient-related outcomes.
Tarik Suljic
Faculty of Medicine, University of Sarajevo
2024/2025 BOSANA Scholarship holder
1 https://www.who.int/observatories/global-observatory-on-health-research-and-development/monitoring/number-of-trial-registrations-by-year-location-disease-and-phase-of-development
2 https://doi.org/10.1016/S0140-6736(13)62329-6