Workshops

Objectives: 
To provide the review author with background on the problem of outcome reporting bias (ORB) and how it can lead to misleading conclusions, to demonstrate how a review author might identify such bias in their review, and to present techniques for assessing the robustness of a meta-analysis to such bias.

Description:
Within-study selective reporting bias has been defined as the selection, on the basis of the results, of a subset of the analyses undertaken for inclusion in a study publication. Sources of such bias will be described. The workshop will focus on ORB. Direct empirical evidence for the existence of outcome reporting bias is accumulating. In a meta-analysis it is often the case that a total of k eligible studies is identified but only n of them report the data of interest.

The reviewer needs to examine the remaining (k-n) studies to establish whether the outcome of interest has been collected but not reported. This should ideally involve contact with the original trialists, which may result in the missing data being made available, or it may confirm that the outcome data were not recorded. However, it is likely that for a subset of these studies, m (< k-n), no such information will be forthcoming. It is important to assess the level of suspicion that selective non-reporting has occurred in these m studies.
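
As a rough illustration of this bookkeeping, the sketch below tallies k, n, and m for a hypothetical set of eligible studies; the study names and their reporting statuses are invented purely for illustration.

```python
# Hypothetical bookkeeping of eligible studies for one review outcome.
# Study names and statuses are invented for illustration only.
eligible = {
    "Trial A": "reported",        # outcome data available for the meta-analysis
    "Trial B": "reported",
    "Trial C": "not measured",    # trialists confirm the outcome was never collected
    "Trial D": "no information",  # outcome probably measured but not reported
    "Trial E": "no information",
}

k = len(eligible)                                          # eligible studies
n = sum(s == "reported" for s in eligible.values())        # studies contributing data
m = sum(s == "no information" for s in eligible.values())  # suspect studies

print(f"k = {k}, n = {n}, k - n = {k - n}, m = {m}")
# With these invented statuses: k = 5, n = 2, k - n = 3, m = 2.
```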

Methods for the identification of ORB in a meta-analysis and in an individual study will be described and illustrated using examples. If the level of suspicion is high, a useful first step is to undertake a sensitivity analysis to assess the robustness of the pooled result to selective reporting. Two methods for such an analysis will be illustrated and compared using examples. Participants will be encouraged to undertake such assessments for the examples provided and to discuss the issues arising in their own reviews.
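
As a minimal sketch of what such a sensitivity analysis can look like, the example below compares a fixed-effect pooled estimate from the n reporting studies with one obtained after imputing a worst-case (null) result for the m suspect studies. The effect sizes and variances are hypothetical, and this simple worst-case imputation is only a generic illustration, not necessarily one of the two methods covered in the workshop.

```python
import math

def pooled(effects, variances):
    """Fixed-effect inverse-variance pooled estimate and its standard error."""
    weights = [1.0 / v for v in variances]
    est = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    return est, math.sqrt(1.0 / sum(weights))

# Hypothetical log odds ratios (negative = benefit) from the n reporting studies.
reported_effects = [-0.40, -0.25, -0.55]
reported_vars = [0.04, 0.06, 0.05]

est, se = pooled(reported_effects, reported_vars)
print(f"Reported studies only: {est:.2f} (SE {se:.2f})")

# Worst-case imputation: assume the m suspect studies went unreported because
# they showed no benefit (log OR = 0), with a variance typical of the others.
worst_effects = reported_effects + [0.0, 0.0]
worst_vars = reported_vars + [0.05, 0.05]

est_w, se_w = pooled(worst_effects, worst_vars)
print(f"With worst-case imputed studies: {est_w:.2f} (SE {se_w:.2f})")
```

If the conclusions of the review change materially between the two analyses, the pooled result is sensitive to the suspected selective reporting and should be interpreted with caution.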