

LITERATURE REVIEW


2. CARDIOVASCULAR BIOMARKERS AS EXAMPLES OF SUCCESS AND FAILURE IN PREDICTING SAFETY IN HUMANS

This second literature review document presents the concept of the biomarker within the framework of drug development. The book chapter follows an invitation from a publisher for a book on drug development (Wiley Interscience, Hoboken, NJ, USA, 2009) and was written by Dr. Simon Authier with the assistance of his mentors. The chapter presents the complexity associated with the use of models intended to predict responses in patients. In addition to giving a more thorough picture of the regulatory context of drug development, the chapter presents the interdependence between clinical and preclinical studies and the important consequences that arise from the association, or lack of association, between biomarkers and clinical outcomes, including mortality and morbidity.

This chapter fits within the general approach of the thesis, which aims to define the issues and possible limitations involved in developing a plan for evaluating the safety of a drug. It introduces a thorny subject for the pharmaceutical and medical community, namely the demonstration of the efficacy of new drugs.

Indeed, as presented in the chapter, the pharmaceutical development industry is experiencing stagnation in drug development. This is attributable, at least in part, to the difficulty of demonstrating the efficacy of new therapies. The present thesis, whose objective is the evaluation of the safety but also the efficacy of OT in the treatment of myocardial infarction, is built around this inextricable use of biomarkers as the principal tools for evaluating the two essential considerations: safety and efficacy. As Zhao et al. (2009) suggest, failures in drug development must force a […] as the foundation of a medicine focused on identifying therapeutic targets and then putting in place new therapies directed at them.

Cardiovascular biomarkers as examples of success and failure in predicting safety in humans

Simon Authier, Michael K Pugsley, Eric Troncy, Michael J Curtis

Simon Authier, D.V.M., M.Sc., M.B.A.
LAB Research Inc.
445 Armand Frappier, Laval, QC
Canada, H7V 4B3

Michael K. Pugsley, M.Sc., Ph.D., F.B.Pharmacol.S.
Johnson & Johnson PR&D, Global Preclinical Toxicology/Pathology
Raritan, NJ 08869, USA

Eric Troncy, D.V., M.Sc., Ph.D., D.Un.
Faculté de médecine vétérinaire, Université de Montréal
1500 des Vétérinaires, C.P. 5000
Saint-Hyacinthe, QC
Canada, J2S 7C6

Michael J. Curtis, Ph.D., F.H.E.A., F.B.Pharmacol.S.
Cardiovascular Division, School of Medicine, Rayne Institute, St Thomas' Hospital
London SE1 7EH, UK
tel. 0207-1881095 fax 0207-1880970

“Biomarker: A characteristic that is objectively measured and evaluated as an indicator of normal biological processes, pathogenic processes, or pharmacologic responses to a therapeutic intervention.” (National Institutes of Health Biomarkers Definitions Working Group, 2001).

“A surrogate endpoint or marker is a laboratory measurement or physical sign that is used in therapeutic trials as a substitute for a clinically meaningful endpoint that is a direct measure of how a patient feels, functions or survives and is expected to predict the effect of the therapy” (Temple, 1999).

Introduction

Drug discovery begins with a hypothesis. This hypothesis leads to development of a new chemical entity (NCE) that is designed to alter a particular target. However, before clinical efficacy can be tested, proof of concept requires identification of a likelihood of efficacy. This is interrogated using animal models of disease and usually involves use of biomarkers. Biomarkers are also used as surrogate indicators of the state of a disease (for diagnosis and prognosis). Biomarkers (those used for safety assessment purposes as well as efficacy) are therefore cornerstones of medical research. In their central role, biomarkers reflect the presumed link between conceptual understanding of pathophysiological processes and their modulation for therapeutic purposes. In this unique position, biomarkers serve a role in translational medicine where research advances are converted into practical decision algorithms for clinicians or potential new therapies for development by the pharmaceutical industry. While this concept may appear simple, it depends on the predictivity of the biomarker. Given the close relatedness of biomarkers used in drug discovery (non-clinical) and clinical applications, both perspectives will be presented and discussed.

For clinicians, biomarkers are useful for diagnosis, prognosis or to help select and guide treatment for a given patient. In drug development, biomarkers are the scientific endpoints that orient study design and decision-making. In clinical trials, a biomarker may be elevated to the status of ‘surrogate endpoint’ where it serves as a quantitative measure of the effectiveness or the potential safety of a treatment. A surrogate endpoint is therefore a biomarker which (to be of value) has been validated for a given application as a predictive measure of the true clinical outcome. The controversial process by which a biomarker is validated to become a surrogate marker will be discussed throughout the chapter.

Biomarkers are used in all areas of medicine and many well-known examples include the use of bone density as a surrogate of fractures in osteoporosis (Marshall et al. 1996; Cranney et al., 2007), CD4 lymphocyte count and quantitative measurement of viral load and proviral DNA in human immunodeficiency virus (HIV) (Antiretroviral Therapy Cohort Collaboration 2008; Torti et al., 2008), or albuminemia in chronic renal disease (Honda et al., 2008).

The interdependency of non-clinical research, clinical trials and therapeutics

To set the basis for discussion, the context of drug development will be outlined. Drug development can be divided into clinical and non-clinical areas of investigation. Clinical investigation includes research activities commensurate with study in healthy volunteers or patients while non-clinical investigation encompasses research activities for drug development that include a diverse spectrum of in vitro and in silico assays and animal models. Non-clinical research is initiated prior to first in human (FIH) administration (preclinical) but can continue during clinical trials. Non-clinical testing requirements increase as the process proceeds and the NCE advances in development; however, note that data received in clinical trials may also prompt additional non-clinical testing. The different requirements of non-clinical testing for drug development in relation to clinical trials and drug approval will be presented later. Drug development research and clinical diagnosis share the same final interest - the patient. As a result, a significant proportion of biomarkers used in non-clinical research and early clinical trials are also used in the clinic during the conduct of Phase-3 (i.e., randomized controlled multicenter trials on large patient populations) and 4 (post-marketing safety surveillance) studies.

There is a growing consensus among regulatory authorities and the pharmaceutical industry regarding the need for biomarkers that are common to non-clinical research and clinical trials, and which can eventually be used by clinicians. This is important because non-clinical data can be used to inform clinical decision making when issues of efficacy and (more commonly) safety arise. For example, a biomarker used in non-clinical research to quantify liver toxicity could later be used to monitor signs of possible hepatic toxicity in clinical trials. Thus, common clinical and non-clinical biomarkers offer the promise of greater coherence and ease of decision making. However, for this to work, a question about the process of biomarker development must be addressed.

Biomarkers are developed by integrating the outcomes of clinical trials, clinical research (independent of ongoing evaluation of therapeutic interventions), and non-clinical research. Drugs with known clinical effects are most useful for development and validation of biomarkers in non-clinical models, where non-clinical biomarkers are assessed for their ability to predict a confirmed clinical outcome. Clinical research excluding therapeutic interventions may identify biomarkers for diagnostic or prognostic purposes. However, establishing causality between change to the biomarker and improvement of clinical outcome due to the treatment (drug) is often no more than a ‘leap of faith’ as will be explained later using high-density lipoproteins (HDL) for cardiovascular disease as an example.

Evolution of biomarker development

The development of biomarkers and their role in drug development has evolved rapidly, leveraged primarily by advances in life science technologies. At a time when biomarkers carry great hope for medical advances, the history of biomarkers may have a lesson to teach the modern medical world. Considerable efforts have been invested to characterize the predictive value of each current biomarker as a surrogate endpoint. The iconic Framingham heart study (started in 1947) was amongst the pioneer initiatives of the era of prospective epidemiological clinical studies to undertake systematic investigation of causes of cardiovascular disease and risk factors. Findings from the Framingham study allowed for an assessment of biomarker validation for cardiovascular disease resulting in the utilization and subsequent adoption of serum cholesterol (Oppenheimer 2005) as a primary biomarker for cardiovascular health status. As described in the initial study outline by Meadors (1947): “this project is designed to study the expression of coronary artery disease in a normal or unselected population and to determine the factors predisposing to the development of the disease through clinical and laboratory examination and long term follow-up of such a group”. While serum cholesterol can be used as a biomarker to establish the general health status of the cardiovascular system, the troponins (T and I), recognized as highly sensitive and specific markers of myocardial damage, illustrate biomarkers that have been developed to provide direct evidence of disease (The Joint European Society of Cardiology/American College of Cardiology Committee, 2000). The use of biomarkers to assess disease risk factors is common in clinical diagnosis (e.g. for identification of signs of malignancy by histology of tumor biopsy) but also in clinical trials (e.g. from assessment of QT prolongation) whereas the effect of a treatment on a biomarker may be used to predict efficacy or safety (e.g. troponin T and I, serum level of low density lipoprotein (LDL) or glomerular filtration rate).

However, the use of unvalidated biomarkers (i.e., characteristics that have not yet been determined to be reliable) is potentially hazardous. First, it is recognized that a treatment effect on a surrogate endpoint does not necessarily guarantee correct inference of the treatment effect on the relevant clinical endpoint (Baker & Kramer, 2003; Berger 2004; Prentice, 1989). The concept of biomarkers as risk factors is intimately related to validation of surrogate endpoints. Surrogate endpoints may include biomarkers which represent direct evidence of disease, as illustrated previously with troponins, or biomarkers validated as predictive of clinical outcome, exemplified by the QT interval that is widely used to assess the risk of the syndrome torsades de pointes (TdP). The QT interval, which represents the interval between the start of ventricular depolarization and the end of repolarization, is recognized by the scientific community (Lawrence et al., 2006; Wallis 2007) and regulatory agencies (Anon, 2005) as the most convenient biomarker to assess the risk of developing TdP. Consequently, most of the attention from both the scientific community and regulatory agencies (FDA, EMEA and MHLW) has been directed toward QT prolongation as a risk factor for drug-induced TdP. Sensitivity and specificity limitations of QT prolongation have been reported by several groups (Eckardt et al., 2002; Redfern et al., 2003) and there has been criticism of over-reliance on the use of the QT interval (Hondeghem 2008). Some drugs such as amiodarone and pentobarbital induce QT prolongation but have no reported ability to cause TdP. The use of QT prolongation as a surrogate for TdP in drug development may lead to discontinuation of valuable treatments. On the other hand, while increasing evidence is emerging that QT shortening predisposes to ventricular fibrillation (Lu et al., 2008), regulatory guidelines on the QT interval have yet to address this possible concern. As a result, a widely accepted and validated biomarker used as a risk factor for a potentially fatal condition relies on questionable and evolving foundations. Despite the limitations of QT prolongation, ethical and economic considerations prevent use of the true clinical endpoint (TdP in patients) to assess the safety of new treatments. Considerable efforts have been made to refine and validate the use of QT as a surrogate marker for TdP (Fossa, 2008; Nolan et al., 2006; Ollerstam et al. 2007; Pugsley et al., 2008) but this has tended to serve only to emphasize its limitations. Increasingly, QT is seen as just one part of an integrated risk assessment (Gintant 2008).
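The disconnect between a responsive surrogate and an unchanged clinical endpoint is easy to illustrate numerically. The short Python sketch below is not drawn from the chapter or from any cited dataset; every distribution, effect size and sample size is hypothetical. It simulates a drug that clearly lowers a surrogate biomarker through a pathway unrelated to the one driving clinical events, so the event rate does not improve despite a convincing biomarker response.

import numpy as np

rng = np.random.default_rng(0)
n = 5000  # hypothetical subjects per arm

# An unmeasured process 'u' drives the clinical events; the surrogate reflects
# 'u' plus a separate, drug-sensitive pathway.
u_placebo = rng.normal(0.0, 1.0, n)
u_treated = rng.normal(0.0, 1.0, n)                             # the drug does NOT change 'u'

surrogate_placebo = u_placebo + rng.normal(0.0, 1.0, n)
surrogate_treated = u_treated - 1.0 + rng.normal(0.0, 1.0, n)   # the drug lowers the surrogate

# Clinical events depend only on 'u'.
def p_event(u):
    return 1.0 / (1.0 + np.exp(-(u - 1.5)))

events_placebo = rng.random(n) < p_event(u_placebo)
events_treated = rng.random(n) < p_event(u_treated)

print("mean surrogate (placebo, treated):",
      round(surrogate_placebo.mean(), 2), round(surrogate_treated.mean(), 2))
print("clinical event rate (placebo, treated):",
      round(events_placebo.mean(), 3), round(events_treated.mean(), 3))
# The surrogate shifts by about one unit while the event rate barely moves.

In this invented scenario, a validation scheme that looked only at the surrogate would wrongly conclude that the drug is beneficial, which is precisely the ‘leap of faith’ described above.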

Composite endpoints

As one might expect, given the complex nature of most diseases, if there is no single definitive biomarker for a given condition, a combination of biomarkers is normally used to forecast potential clinical outcome (mortality, morbidity and quality of life). Medicine has always aimed at improving the predictive value of biomarkers. Selection and validation processes have evolved into an organized framework where evidence based medicine (EBM) benefits from meta-analyses of the medical literature, risk-benefit assessment and randomized controlled trials to weigh the predictive value of biomarker combinations. The use of an integrated approach combining more than one biomarker to increase the predictive value is noted in the clinic where prognostic indexes using multiple biomarkers have been developed in major areas of medicine including, but not limited to, cardiology (Meuwissen et al., 2008, Lev 2008), oncology (Rees et al., 2008; Mitry et al., 2004) and neurology (Hansson et al., 2006). Similar approaches have been developed and are now utilized in non-clinical drug development where an integrated risk assessment is used to estimate the sensitivity and specificity of a combination of non-clinical models. Pollard et al. (2008) recently assessed the predictive value of a combination of non-clinical assays to quantify TdP risk potentials. When combining in vitro (hERG) and in vivo QT data, the predictive value to man was reported to be >80%. This may seem high, but in fact it implies a 20% failure rate to predict a potentially life threatening condition. Initiatives to assess the integrated predictive value of drug development screening platforms may have a long-term impact on development of new therapies where selection and timing of the various assays is traditionally based on experience of the research groups rather than on calculated and proven predictive value. With calculated predictive value, one could reassess the construct of a drug development program and optimize timeline and resource allocation.
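To make the arithmetic behind such figures explicit, the brief Python sketch below computes sensitivity, specificity and predictive values for a hypothetical integrated screen; the 2x2 counts are invented for illustration only and are not taken from Pollard et al. (2008).

def screen_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity and predictive values from a 2x2 screening table."""
    sensitivity = tp / (tp + fn)   # proportion of torsadogenic drugs flagged
    specificity = tn / (tn + fp)   # proportion of non-torsadogenic drugs passed
    ppv = tp / (tp + fp)           # flagged drugs that are truly torsadogenic
    npv = tn / (tn + fn)           # passed drugs that are truly non-torsadogenic
    return sensitivity, specificity, ppv, npv

# Hypothetical screen of 100 compounds of known clinical TdP liability
sens, spec, ppv, npv = screen_metrics(tp=32, fp=8, fn=8, tn=52)
print(f"sensitivity {sens:.0%}, specificity {spec:.0%}, PPV {ppv:.0%}, NPV {npv:.0%}")
print(f"torsadogenic compounds missed by the screen: {1 - sens:.0%}")

With these assumed counts, a screen whose headline figures sit at around 80% still lets one torsadogenic compound in five pass undetected, which is the 20% failure rate noted above.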

Among the disciplines using multiple factor analysis, genomic and proteomic approaches offer potentially one of the best hopes for rapid medical progress. With the increasing availability of microarray technologies, genomics and proteomics give rise to a new paradigm in biomarker development. The quest to establish a relationship between biomarkers and clinical outcome has challenged medical research for the past century. The modern medical world is now faced with a unique challenge: determination of whether the correlations identified have genuine predictive value. Genomics and proteomics are particularly affected by this since although they allow extensive characterization of chromosome and protein expression, so much data is generated by these powerful screening technologies that correlation of one or more biomarker with an experimental variable is inevitable. It then becomes necessary to interrogate the relevance of the correlation. This validation of biomarker candidates is an area of intensive activity. The imperative to confirm the scientific value of “discoveries” from high output microarray technologies requires novel approaches to data analysis supported by bioinformatics (Gormley et al., 2007; Hwang et al., 2008). At the bedside, patient genome screening is now commercially available and can be used to evaluate multiple single-nucleotide polymorphisms (SNPs) for disease susceptibility. Genome profiling is a start point for personalized preventive medicine and targeted therapies (Sawyers, 2008). In spite of recognized potential for improved diagnosis, the clinical utility of SNPs remains limited given the lack of controlled clinical trials to evaluate the clinical value of genetic biomarker screening (Hunter et al., 2008).
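The scale of this correlation problem can be shown with a simple simulation. The Python sketch below uses purely random data and an arbitrary sample size and threshold (none of it drawn from the cited studies): it correlates 10,000 noise “transcripts” with an unrelated clinical variable and still finds on the order of five hundred nominally significant associations, none of which survive a basic multiplicity correction.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_patients, n_genes = 40, 10_000

expression = rng.normal(size=(n_patients, n_genes))   # pure-noise "microarray"
outcome = rng.normal(size=n_patients)                 # clinical variable unrelated to it

# Correlate every transcript with the outcome and collect the p-values.
pvals = np.array([stats.pearsonr(expression[:, g], outcome)[1] for g in range(n_genes)])

print("nominally significant at p < 0.05:", int((pvals < 0.05).sum()))   # roughly 500
print("surviving Bonferroni (p < 0.05 / n_genes):",
      int((pvals < 0.05 / n_genes).sum()))                               # typically 0

This is why candidate biomarkers emerging from high-output screens require statistical correction and independent validation rather than reliance on nominal significance.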

Considerations for the use of biomarkers: Is validation achievable?

Validation requires value as well as validity. A biomarker is useful only if it is sufficiently accurate and a therapeutically useful drug with a good risk/benefit ratio is available (i.e., the biomarker can be used to usefully inform therapeutic decision making). Prostate specific antigen (PSA; also known as kallikrein III or P30 antigen) is a prostate-specific protein that is usually present in minute quantities in the serum of normal men but which is elevated in prostate cancer (Thompson et al., 2004). The measurement of PSA for use in prostate cancer assessment began commercially in 1982 – yet more than twenty-five years later, its value as a routine screening diagnostic tool is still debated (Lin et al., 2008), partly due to a relatively high rate of false negatives (reported to be as high as 27%; Carter, 2004). The psychological consequences of a false positive in the case of a cancer biomarker may outweigh the biomarker’s diagnostic value as a routine screening tool.

This emphasizes the key driver in validation: the patient is the primary focus. New generations of biomarkers succeed older generations on the basis of improved sensitivity, specificity or other considerations such as economic and psychological impact. Thus, lactate dehydrogenase (LDH), a marker of cardiac ischemia (Randall & Jones, 1997), has been largely replaced by troponin T, and troponin T is now challenged by a more sensitive marker (heart-type fatty acid-binding protein, H-FABP) for early detection of myocardial ischemia (Ishii et al., 2005; McCann et al., 2008).

Biomarkers used as surrogate endpoints evolve in a regulated environment where generic validation for a clinical condition takes priority over validation for a given drug or treatment (Katz, 2004). In other words, an ideal validation would demonstrate the predictive value of a surrogate endpoint across different drug classes to treat a given clinical indication (Hughes, 2002). However, even widely accepted biomarkers struggle to comply with such stringent validation requirements, as will be discussed below, but first the regulatory context of biomarker validation will be presented.

Regulatory considerations

In the pharmaceutical industry, guidelines that have been provided by regulatory authorities serve as a start point for non-clinical and clinical study designs. Regulatory approval is usually based on the manifest effects of the treatment on survival or on the symptoms of the disease (Katz, 2004). Approval is based “…upon a determination that the product has an effect on a clinical endpoint or on a surrogate endpoint that is reasonably likely to predict clinical benefit”.

Examples of surrogate endpoints that were accepted by the US Food and Drug Administration (FDA, www.fda.gov) and the European Medicines Agency (EMEA, http://www.emea.europa.eu) include blood pressure and cholesterol for heart attacks, stroke and death.

Validation of biomarkers is recognized as a process that needs to be independent from drug submission review (Goodsaid & Frueh, 2007). Several initiatives by regulatory authorities have provided for a better understanding of biomarkers and their use in the regulatory approval of investigational drugs. A pilot group structure was developed by the FDA around the Interdisciplinary Pharmacogenomic Review Group (IPRG). Although the primary mission of the IPRG was to establish a scientific and regulatory framework for reviewing genomic data, it was also logical to allow the contributors from this group to aid in the qualification of new biomarkers for the evaluation of new drugs. This subsequent initiative, comprising FDA experts from the Center for Drug Evaluation and Research (CDER), the Center for Biologics Evaluation and Research (CBER), the Center for Devices and Radiological Health and the National Center for Toxicological Research, is known as the Biomarker Qualification Review Team. This team is mandated to coordinate the evaluation of data submitted as related to the qualification of novel biomarkers of drug safety using clinical, non-clinical and statistical methodology (Goodsaid & Frueh, 2007). Coordination initially involves a review of the intended context of use of the biomarker utilizing data submitted from the applicant. The context of use is a critical component of qualification, since a biomarker may be relevant in more than one particular clinical setting. Thus, once the context of use has been reviewed, the biomarker qualification study strategy is devised and, in an iterative process, a consensus can be sought between the regulatory authority and the sponsor. After completion of the qualification study, the Biomarker Qualification
