Dear Editor,
Nguyen et al have underlined key issues with systematic reviews and meta-analyses as part of the REPRISE project (1). These research modalities exist to offer reliable, high-quality conclusions that inform clinical and non-clinical practice.
Among the 300 included reviews, Nguyen et al demonstrated an increase in reviews citing reporting guidelines such as PRISMA, yet these reviews often failed to report all the elements the guideline requires (1). Only 1 of the 300 included reviews cited a protocol and registration record, which highlights a significant area for improvement (1).
The authors stated that over 50% of reviews originated from China, the UK, and the United States (1). It would be interesting to investigate whether reporting and adherence to PRISMA guidelines vary between countries, and the possible reasons behind any such variability. Do the authors have any information regarding this?
Lastly, this study highlights the practice of review authors acquiring unpublished data from the authors of individual studies for later use in analysis, which reduces the ability of others to reproduce such analyses. Data sharing in systematic reviews and meta-analyses is therefore advocated. Word count limits and restrictions on the number of figures permitted at submission may be one reason for reduced publication of review data. Even where supplementary information is accepted, there is often a limit on the number of files allowed. Would minimising such journal restrictions facilitate data sharing in reviews?
References:
1. Nguyen P, Kanukula R, McKenzie JE, Alqaidoom Z, Brennan SE, Haddaway NR, et al. Changing patterns in reporting and sharing of review data in systematic reviews with meta-analysis of the effects of interventions: cross sectional meta-research study. BMJ 2022;379:e072428. doi:10.1136/bmj-2022-072428