Testing and Implementing Signal Impact Analysis in a Regulatory Setting
- 1 January 2005
- journal article
- Published by Springer Nature in Drug Safety
- Vol. 28 (10), 901-906
- https://doi.org/10.2165/00002018-200528100-00006
Abstract
Background and aim: Statistical signal detection methods such as proportional reporting ratios (PRRs) detect many drug safety signals when applied to databases of spontaneous suspected adverse drug reactions (ADRs). Impact analysis is a tool that was developed as an aid to the prioritisation of such signals. This paper describes a pilot project in which impact analysis was simultaneously introduced into practice in a regulatory setting and tested against the existing approach.

Methods: Impact analysis was run on signals detected during a 26-week period from the UK Adverse Drug Reactions On-line Information Tracking (ADROIT) database of spontaneous ADRs that met minimum criteria (PRR ≥3.0, χ² ≥4.0 and ≥3 reported cases) and related to established drugs (i.e. those that have been available for at least 2 years and no longer carry the ‘black triangle’ symbol). The current method of signal prioritisation (i.e. the collective judgement at a weekly meeting) was initially performed without knowledge of the findings of impact analysis. Subsequently, the meeting was presented with the findings and, where appropriate, given the opportunity to reconsider the judgement made. The categories arising from the two methods were compared and the ultimate action recorded. Inter-observer variation between scientists performing impact analysis was also assessed.

Results: Eighty-six separate signals were analysed by impact analysis, of which 5% were categorised as high priority (A), 14% as requiring further information (B), 31% as low priority (C) and 50% as no action required (D). In general, the new method tended to give a higher level of priority to signals than the existing approach. Overall, there was 59% agreement between the impact analysis and the collective judgement at the meetings (kappa statistic = 0.30). There was slightly greater agreement between impact analysis and the final action taken (kappa statistic = 0.39), indicating that the findings of an impact analysis had an influence on the outcome. Assessment of inter-observer variation demonstrated that the method is repeatable (kappa statistic for overall category = 0.77). Almost 70% of those who participated in the pilot study believed that impact analysis represented an improvement in how signals were prioritised.

Conclusions: Impact analysis is a repeatable method of signal prioritisation that tended to give a higher level of priority to signals than the standard approach and had an influence on the ultimate outcome.
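The Methods section screens signals with three thresholds: PRR ≥3.0, χ² ≥4.0 and at least 3 reported cases. The sketch below illustrates how such a screen can be computed from a 2×2 table of spontaneous reports; it is a minimal illustration, not the authors' implementation. The function names (`prr`, `chi_squared`, `meets_minimum_criteria`), the use of the Pearson χ² statistic without continuity correction, and the example counts are all assumptions introduced here for clarity.

```python
# Minimal sketch (not the paper's code) of the screening criteria
# described in Methods: PRR >= 3.0, chi-squared >= 4.0, >= 3 cases.

def prr(a, b, c, d):
    """Proportional reporting ratio from a 2x2 table of spontaneous reports.

    a: reports of the drug of interest with the reaction of interest
    b: reports of the drug of interest with any other reaction
    c: reports of all other drugs with the reaction of interest
    d: reports of all other drugs with any other reaction
    """
    return (a / (a + b)) / (c / (c + d))


def chi_squared(a, b, c, d):
    """Pearson chi-squared statistic (1 df, no continuity correction;
    an assumption, since the abstract does not specify the variant)."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))


def meets_minimum_criteria(a, b, c, d):
    """Apply the thresholds quoted in the abstract."""
    return a >= 3 and prr(a, b, c, d) >= 3.0 and chi_squared(a, b, c, d) >= 4.0


# Illustrative, made-up counts: 12 reports of the drug-reaction pair,
# 488 other reports for the drug, 300 reports of the reaction with other
# drugs and 99,200 remaining reports in the database.
print(meets_minimum_criteria(12, 488, 300, 99_200))  # True
```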
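The Results section summarises agreement between priority categorisations (A–D) using the kappa statistic. As a point of reference, the sketch below computes Cohen's kappa for two raters; the function name, the hypothetical ratings and the resulting value are illustrative only and do not reproduce any figure from the study.

```python
from collections import Counter


def cohens_kappa(ratings1, ratings2):
    """Cohen's kappa for two raters assigning the same set of categories
    (e.g. the A-D priority categories used in the study)."""
    assert len(ratings1) == len(ratings2)
    n = len(ratings1)
    # Observed proportion of exact agreement.
    observed = sum(r1 == r2 for r1, r2 in zip(ratings1, ratings2)) / n
    # Chance-expected agreement from the two raters' marginal distributions.
    marg1, marg2 = Counter(ratings1), Counter(ratings2)
    expected = sum(marg1[c] * marg2[c] for c in set(marg1) | set(marg2)) / n**2
    return (observed - expected) / (1 - expected)


# Hypothetical category assignments for a handful of signals.
rater1 = ["D", "C", "C", "B", "D", "A", "C", "D"]
rater2 = ["D", "C", "B", "B", "D", "A", "D", "D"]
print(round(cohens_kappa(rater1, rater2), 2))  # 0.65
```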