1. History of The QT Interval and Its Importance in Early Drug Development
In 1895, Einthoven developed the human electrocardiogram (ECG) and described the PR segment, QRS complex and the T wave, while a decade later he and Lewis also recognized the existence of U waves [1]. In subsequent years, while T wave morphology changes were deemed pathologic and QT lengthening was noted in the setting of myocardial infarction, routine measurement of this interval was not deemed warranted. However, in 1957 Jervell and Lange-Nielsen identified a family in which 4 of 6 children had QT prolongation associated with deafness in the absence of any structural heart disease [2]. The affected persons were noted to have syncopal episodes, and 3 of the children subsequently died before the age of 10 without any autopsy evidence of cardiac pathology. In the ensuing years, multiple families were discovered with long QT intervals, often without associated deafness, who had a proclivity to syncope and sudden death. In the early 1960s, Romano and Ward [3,4] both described young individuals with long QT intervals and a history of fainting and ventricular fibrillation, frequently mediated by exercise and ameliorated by beta blockers. In light of these observations, cardiologists and other physicians became sensitized to the importance of evaluating the QT interval and to the recognition that abnormal lengthening may predispose to serious ventricular arrhythmias and sudden death.
In 1964 [5], a small series of patients being treated with quinidine was reported to have syncope due to ventricular tachycardia in the setting of a prolonged QT interval. The morphology of the ventricular tachycardia had a peculiar undulating or twisting appearance, which in 1966 was termed Torsades de Pointes (TdP) by Dessertenne [6]. In the subsequent decades, attention was consequently directed to routine measurement of the QT interval in patients receiving cardiac antiarrhythmic drugs such as procainamide, quinidine and disopyramide, but routine surveillance for other classes of medicines was not performed as there was no known link between those agents and QT prolongation or TdP. The landscape changed in 1988, when prenylamine (Segontin) [7], a calcium channel blocking analogue of amphetamine used in the treatment of angina pectoris, became the first drug to be withdrawn from the market due to sudden death associated with QT prolongation. Several years later, a case report of TdP and QT lengthening with the antihistamine terfenadine (Seldane) was published, and this drug was eventually removed from the market in 1997 [8]. In the following years, additional classes of medications including antibiotics and psychotropic drugs were linked to TdP, and a number of these agents were subsequently withdrawn by the Food and Drug Administration (FDA) [9]. In response to these events, government regulators and the pharmaceutical industry realized that a comprehensive evaluation of the QT interval and arrhythmia risk should be incorporated into a new compound’s development program.
The occurrence of life threatening ventricular arrhythmias in the presence of a prolonged QT interval also garnered the interest of British regulators, who in 1999 drafted a paper entitled “Points to consider: the assessment of the potential for QT interval prolongation by non-cardiovascular medicinal products” [10]. This document was written in response to multiple reports of non-antiarrhythmic drugs prolonging the QT interval on the ECG resulting in TdP. It formed the basis for regulatory agencies worldwide to focus attention on the arrhythmic potential of drugs that were previously deemed to be safe.
Shortly thereafter, Canada followed suit and developed a similar document in 2001 entitled “Assessment of the QT Prolongation Potential of Non Antiarrhythmic Drugs” [11], but no formal testing protocol was enumerated. In 2003, the International Conference on Harmonization (ICH) S7B document was published [12], which delineated the preclinical strategy to assess the potential of non-cardiovascular compounds to alter ventricular repolarization. This was followed in 2005 by regulators from the United States, Europe and Japan updating ICH S7B while also drafting the seminal clinical guidance document ICH E14 [13], which outlined recommendations to assess the arrhythmic risk of investigational human non-cardiac pharmaceuticals and their effect on ventricular repolarization. To date, ICH S7B has not been formally amended, whereas ICH E14 has been updated with Q&A responses on multiple occasions in 2008, 2012, 2014 and 2015 [12] but has not been officially revised in its entirety.
2. Overview of Ventricular Repolarization Electrophysiology
In order to properly understand the QT interval’s role in the assessment of ventricular repolarization as well as contextualize its evolution in regulatory guidance, a basic review of the cardiac action potential and principal ion channels is warranted.
The electrical activity in mammalian cardiac cells is mediated by regenerative action potentials resulting from numerous voltage gated and ligand or receptor gated channels. These channels are proteins which change configuration so as to regulate the movement of various ions across the cell membrane. The channels may be open, closed or inactivated, with transitions dependent upon changes in myocyte membrane potential. Coordination of these action potentials in the ventricles is responsible for ventricular depolarization, while their recovery and return to the baseline resting membrane potential results in repolarization. The cardiac action potential is divided into 5 phases as noted in Figure 1.
- ➢ Phase 0: The sharp upstroke of the action potential is primarily the result of a transient and rapid influx of Na+ (INa) through opening of Na+ channels;
- ➢ Phase 1: The termination of the action potential upstroke and the initiation of early repolarization are mediated by inactivation of Na+ channels along with the transient outward movement of K+ (Ito) through K+ channels and of chloride (Cl−) ions;
- ➢ Phase 2: The action potential plateau is ascribed to the balance between the influx of Ca2+ (ICa) through long opening L-type Ca2+ channels and the efflux of repolarizing K+ currents;
- ➢ Phase 3: The sustained downward slope of the action potential and the late repolarization phase are due to the egress of K+ (IKr and IKs) through delayed rectifier K+ channels;
- ➢ Phase 4: The resting membrane potential is supported by the inward rectifier K+ current (IK1), the sodium potassium ATPase pump, and the Na+/Ca2+ exchanger.
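The five phases listed above can be condensed into a simple lookup table. The sketch below is purely didactic, mapping each phase to the principal currents named in the text; it is not a simulation of membrane dynamics:

```python
# Didactic summary of the cardiac action potential phases described above;
# maps phase number to (role, principal currents as named in the text).
ACTION_POTENTIAL_PHASES = {
    0: ("upstroke", ["INa"]),
    1: ("early repolarization", ["Ito", "Cl-"]),
    2: ("plateau", ["ICa (L-type)", "repolarizing K+ efflux"]),
    3: ("late repolarization", ["IKr", "IKs"]),
    4: ("resting potential", ["IK1", "Na+/K+ ATPase", "Na+/Ca2+ exchanger"]),
}

def currents_for(phase: int):
    """Return the principal currents attributed to a given phase."""
    return ACTION_POTENTIAL_PHASES[phase][1]
```

A table like this makes the pharmacological discussion that follows easy to anchor: drug-induced QT prolongation most often involves the phase 3 currents, IKr in particular.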
Potassium channels are the most abundant of all mammalian ion channels. There are four physiologic subtypes: those which are calcium mediated, those which are known as tandem pore domain channels seen in neurons, those which are inwardly rectifying, and those which are voltage gated. The latter two subtypes are the main channels governing cardiac cell function (reviewed in [15]). Voltage gated cardiac potassium channels are pores in the membrane of excitable cells that open and close due to alterations in transmembrane voltage and electrochemical gradients, thereby allowing the egress of potassium out of the cell. Of the multiple genes that have been identified that encode potassium currents, the rapidly activating delayed rectifier current IKr, encoded by the human ether-a-go-go related gene (hERG, also termed KCNH2) on chromosome 7, and the slowly activating delayed rectifier current IKs, encoded by the KCNQ1 and KCNE1 genes, are the biggest contributors influencing action potential duration (APD) [16,17]. The inner region of the potassium pores is dense with aromatic and polar residues, which act as promiscuous target binding sites for both cardiac and non-cardiac pharmacological drugs [18]. Genes that encode the potassium channels are responsible for the formation of both alpha and beta subunits, which combine into hetero-oligomeric complexes that affect the gating function of these channels. Blockade of the alpha subunit carrying the IKr current is the most common mechanism by which pharmaceuticals prolong the action potential, resulting in QT lengthening, which forms the basis for TdP.
In the absence of dispersion of depolarization, changes in phase 3 of the action potential are primarily responsible for QT prolongation and increases in the effective refractory period of cardiac tissue. This lengthening, in turn, may be related to modification or blockade of various cation voltage gated channels. To illustrate, prolongation of the action potential can result from inhibition of one or more of the outward K+ currents, decreased inactivation of the inward Na+ or Ca2+ currents, increased activation of the Ca2+ current, alteration of potassium channel trafficking, or disruptions in protein synthesis [19], see Figure 1. Less commonly, QT lengthening may also be mediated by drugs such as quinidine that prolong APD and the effective refractory period of cardiac cells via blockade of the fast inward Na+ current channels corresponding to the upstroke (phase 0) of the action potential. The resultant impact on cardiac tissue is to produce a transmural gradient of repolarization, which provides the substrate for re-entrant arrhythmias.
3. How Is the QTc Calculated: Popular Correction Formulae for QT Values
There is typically an inverse relationship between the heart rate (RR interval) and the QT interval, with shorter QT intervals at higher heart rates and longer QT intervals at lower heart rates. As such, multiple efforts have been advanced in an attempt to correct for this relationship throughout a range of heart rates, ideally demonstrating that the regression line comparing the corrected QT (QTc) value against the RR interval is flat with a slope of zero. In 1920, Bazett determined that the duration of ventricular systole on the ECG was related to the square root of the heart rate and proposed a correction formula based upon measurements in only 12 subjects [20], see Table 1. In the same year, Fridericia posited that this relationship was a function of the cube root of the RR interval in a small sample of ECGs from a presumably normal population of individuals [21]. Since then, numerous correction formulae involving exponential, linear, and logarithmic equations have been advanced in an effort to identify which formula optimally negates the effect of heart rate on the QT interval. In general, each of these population-based formulae imposed a predetermined mathematical model upon the dataset rather than allowing the dataset to drive the best fit relationship between the heart rate and QT interval. Moreover, these mathematical models were derived from heterogeneous populations of individuals, thereby contributing to a lack of congruity, especially when there is heart rate variability. As such, the most common population-based formulae have all demonstrated that the plot of QTc vs. the RR interval is usually imperfect, with positive slopes that do not completely eliminate the impact of heart rate. Consequently, a single mathematical formula may not be appropriate to account for the QT-RR relationship when there is significant heart rate variability.
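As a concrete illustration of the two classic formulae (square root and cube root of the RR interval, as listed in Table 1), the following minimal sketch computes QTc by both methods. The example values (QT = 400 ms at 75 bpm) are illustrative and not drawn from the text:

```python
import math

def qtc_bazett(qt_ms: float, rr_s: float) -> float:
    """Bazett: QTc = QT / RR^(1/2), with QT in ms and RR in seconds."""
    return qt_ms / math.sqrt(rr_s)

def qtc_fridericia(qt_ms: float, rr_s: float) -> float:
    """Fridericia: QTc = QT / RR^(1/3)."""
    return qt_ms / rr_s ** (1.0 / 3.0)

# Illustrative example: QT = 400 ms at 75 bpm, i.e., RR = 60/75 = 0.8 s.
rr = 60.0 / 75.0
print(round(qtc_bazett(400.0, rr), 1))      # 447.2 ms
print(round(qtc_fridericia(400.0, rr), 1))  # 430.9 ms
```

At RR = 1 s (60 bpm) both formulae leave the QT unchanged; above 60 bpm Bazett corrects more aggressively than Fridericia, consistent with its known overcorrection at faster heart rates discussed below.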
Historically, the most commonly used correction formula is Bazett’s, which over the years was integrated into the output of most standalone digital ECG acquisition carts and is still utilized today by many clinicians worldwide. It is most accurate between heart rates of 60–100 beats per minute (bpm). At heart rates slower than 60 bpm the formula undercorrects the QTc value, while at rates over 100 bpm it overcorrects the QTc interval. The 2005 ICH E14 guidance document originally relied upon the Bazett calculation for reporting of cardiodynamic and safety ECG data in early clinical trials. Subsequently, due to the shortcomings noted above, Bazett-corrected data were shown to be inferior to the Fridericia correction method, and Bazett reporting is no longer routinely warranted by the FDA [29]. It still, however, enjoys a primary role in screening and management of patients with congenital long QT syndromes, especially in the pediatric population. To amplify this point, there is some evidence that in a very young pediatric population, Bazett’s formula best corrects for heart rate. In support of this proposal, Phan et al. [30] interrogated a database of 702 children under 2 years of age being screened for long QT syndrome and compared regression slopes of heart rate to QTc employing the Bazett, Fridericia, Hodges and Framingham formulae (Table 1). In this specialized cohort, Bazett proved to yield the most consistent calculation of QTc, and the upper limit of normal (ULN) was 460 ms, which coincides with that recommended for congenital long QT screening in this population.
Fridericia’s correction (QTcF) is currently the standard adopted by the FDA when submitting QT data for review. When there is suboptimal QTc correction by Fridericia, consideration should be given to using an individual correction factor, especially when there is some variability in heart rate, as described below. This approach requires determining an individual exponent that best eliminates the impact of heart rate on the QT interval. The magnitude of variability that has been proposed by regulators to justify this calculation is a documented heart rate change of 6–10 beats per minute. Morganroth [31] recommended that a number of ECGs under non-treatment conditions be obtained for each individual across a sufficient range of heart rates (35 to 50 ECGs covering a range of heart rates from 50 to 80 bpm) in order to adequately make this calculation. Zareba and coworkers [32] have published data recommending that at least 400 predose QT-RR pairs for each individual subject are needed to compute an acceptable individual correction. Malik recently presented his viewpoint that QT/RR regression analysis in small cohorts of subjects, as seen in early phase human studies, is fraught with both false positive and false negative conclusions due to significant inter-individual variability [33,34]. In addition, he stated that a single mathematical QT/RR relationship that fits all persons individually is unobtainable and should therefore be derived separately for each subject in a clinical trial. He further commented that the individual correction factor is superior to population-based formulae, although this position is not universally supported.
When electing to perform an individual correction (QTcI), one popular approach is to obtain a full day of Holter recording at baseline prior to any treatment (Day −1) and then examine all evaluable QT-RR pairs to calculate the correction factor according to the following protocol: all QT/RR pairs from each subject are used for that person’s individual correction coefficient, which is derived from a linear regression model: log(QT) = log(a) + b × log(RR). The coefficient of log(RR) for each subject, bi, is then used to calculate QTcI for that subject as follows: QTcI = QT/RR^bi. In addition, the effects of any potential hysteresis need to be an important adjunct to this calculation [35].
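The log-linear protocol above can be sketched in a few lines. This is a minimal ordinary least squares fit of log(QT) = log(a) + b·log(RR), assuming QT in ms and RR in seconds; it deliberately omits the hysteresis adjustment mentioned in the text:

```python
import math

def individual_exponent(qt_rr_pairs):
    """Fit log(QT) = log(a) + b*log(RR) by ordinary least squares and
    return b, the subject-specific correction exponent."""
    xs = [math.log(rr) for _, rr in qt_rr_pairs]
    ys = [math.log(qt) for qt, _ in qt_rr_pairs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return sxy / sxx

def qtci(qt_ms: float, rr_s: float, b: float) -> float:
    """Individual correction: QTcI = QT / RR^b."""
    return qt_ms / rr_s ** b

# Synthetic check: QT-RR pairs generated with a known exponent (b = 0.35)
# are recovered exactly, and QTcI then removes the rate dependence.
pairs = [(400.0 * rr ** 0.35, rr) for rr in (0.6, 0.7, 0.8, 0.9, 1.0, 1.1)]
b = individual_exponent(pairs)
print(round(b, 3))  # 0.35
```

In practice, hundreds of predose QT-RR pairs per subject (as per the Zareba recommendation above) are needed before the fitted exponent is stable against measurement noise.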
There are a number of additional methodologies for calculating QTcI in the presence of an increased heart rate, as reported by Garnett et al. [35]. These authors reviewed the relative strengths and weaknesses of alternative techniques including Holter bin analysis, beat to beat comparisons and one stage fixed effect pharmacodynamic-pharmacokinetic modeling, although there was no consensus as to which of these approaches was optimal. They further discussed the role of exercise and autonomic maneuvers including postural changes designed to augment the heart rate, although these latter protocols are not standardized and routine acceptance has proven elusive.
Most recently, using a large US National Health and Nutrition Survey (NHANES) database of subjects from the general population, a cross validated spline correction factor was reported which essentially “obliterated” the relationship between the QTc and RR intervals, and this effect was independent of gender [27]. The authors also compared this correction factor to 6 other well-known methods and found that the spline method showed superior performance. They deemed their approach “heart rate agnostic” as the regression lines of heart rate vs. QTc interval were virtually flat in all cases. Adoption of this novel approach, while intriguing, must await further validation and reproducibility in additional studies involving large populations of healthy subjects encompassing a spectrum of heart rates and ages.
4. What Is a Normal QTc Value
There is considerable debate as to what constitutes a prolonged QTc interval, and a large number of studies on this subject have been published over many decades. In general, the studies are not homogeneous, as some included manual caliper measurement on paper while others involved the more precise method of utilizing on-screen digital calipers with digitally acquired ECGs. In addition, some protocols entailed measuring a single complex in a single lead, measuring multiple complexes in a single lead and then averaging the results, choosing the longest lead on the 12-lead tracing, or using a representative global median beat for QTc determination. Moreover, studies were not always performed in healthy volunteers and comprised populations with different age cutoffs as well as those who may have had underlying cardiovascular or other clinical pathology. Lastly, not all studies employed the same QT correction method when reporting results, although corrected values for the lower limit of normal (LLN) were generally consistent and independent of gender, while those for the ULN showed substantial variation [20].
In 2006, Goldenberg and colleagues examined 581 subjects of varying ages and found the ULN threshold for men was 450 ms while for women it was 470 ms [32]. Thereafter, in 2009, Rautaharju et al. published the American Heart Association (AHA)/American College of Cardiology (ACC)/Heart Rhythm Society (HRS) recommendations for normal QTc values, which have been the most widely accepted, with healthy males having an ULN value of 450 ms while for females the value was 460 ms [36]. In contrast, according to the AHA expert panel report by Drew and colleagues [37] on prevention of TdP, a prolonged QTc should be defined as a value that exceeds the 99th percentile of the population in question. For healthy postpubertal males this was 470 ms and for females this was 480 ms. Rijnbeek et al. studied 13,354 individuals ranging in age from 16–90 years and evaluated 12 lead QTc intervals employing a broad array of different correction formulae, with a value exceeding the 98th percentile considered abnormal [38]. They found that the ULN for men was 460 ms while for women it was 470 ms employing Bazett’s correction, which was routinely 10–15 ms higher when compared to the Fridericia formula. They also determined that the QTc value increased slightly with age and was always higher in women by around 10 ms, independent of the correction factor chosen. One criticism of this study is that it was a meta-analysis of data from 4 other trials, which included heterogeneous populations of both healthy individuals and those with underlying disease.
Mason et al. derived reference ranges from 79,743 subjects enrolled in clinical drug trials with ages ranging from 3 months to 99 years [39]. Some of these individuals did have cardiovascular disease or other pathology, and the QTc was determined using both single lead median beat measurement as well as by taking the average of 3 complexes measured in the lead with the longest QTc value. There were a total of 21,457 males and 24,562 females with suitable ECGs for QTc analysis using digital on-screen calipers. They employed the 98th percentile for the ULN threshold and found that for men the value using Fridericia’s correction was 438 ms while for women the number was 450 ms. The corresponding QTcF LLN values were 355 ms for men and 365 ms for women. A final report of interest is that of Olbertz and colleagues, who reviewed a cohort of 39,303 ECGs from healthy volunteers using an ULN cutoff at the 97.5th percentile and an LLN cutoff at the 2.5th percentile [40]. They showed that the ULN for the entire population was 450 ms, with females exhibiting an approximately 11 ms longer QTcF than males. The LLN cutoff for the entire population was <350 ms, and there was no significant gender difference for this determination.
The decision as to which set of criteria should be utilized depends upon the goal of screening. To best identify individuals from the general adult population who may have congenital long or short QT syndromes, LLN values of <330 ms and ULN values of >460 ms have been suggested by Ackerman [41]. In contrast, for early clinical trials there is a tradeoff between using lower ULN values, which reduce the risk of the QTc reaching a threshold of regulatory concern during study conduct while offering a better margin of subject safety, and higher QTc values, which facilitate the recruitment of healthy volunteers. In this regard, a value of >450 ms irrespective of gender, as originally suggested by the FDA in their ICH E14 guidance document, is most often utilized in first in human protocols [12]. Alternatively, given the known gender differences, a cutoff value of >450 ms for males and >460 ms for females as per the AHA/ACC/HRS consensus document would also seem reasonable for screening in early phase clinical trials. Whatever numbers are chosen, these thresholds should be reviewed with sponsors when designing protocols and modified when appropriate depending upon preclinical data, the compound under investigation, and the study design, acknowledging that subject safety is always paramount in these discussions.
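For protocol writing, the gender-specific AHA/ACC/HRS cutoffs mentioned above can be encoded as a simple eligibility check. The function name and structure below are illustrative only; the thresholds themselves remain a sponsor and regulator decision, which is why they are configurable:

```python
# Screening sketch using the AHA/ACC/HRS ULN values cited in the text:
# >450 ms for males, >460 ms for females. Defaults are configurable
# because protocols may instead use a uniform >450 ms cutoff per ICH E14.
def qtc_exceeds_uln(qtc_ms: float, sex: str,
                    male_uln: float = 450.0,
                    female_uln: float = 460.0) -> bool:
    limit = male_uln if sex.lower() == "male" else female_uln
    return qtc_ms > limit

print(qtc_exceeds_uln(455.0, "male"))    # True
print(qtc_exceeds_uln(455.0, "female"))  # False
```

A borderline value such as 455 ms illustrates the tradeoff described above: it would exclude a male volunteer under the gender-specific cutoffs but not a female one.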
5. Measurement of the QT Interval
The QT interval is measured from the beginning of the QRS complex, which depicts the surface representation of the initiation of ventricular depolarization, acknowledging that some myocytes will begin repolarizing during this phase of the cardiac cycle. The end of the T wave corresponds to the surface representation of the termination of ventricular repolarization (reviewed in [42,43]), as shown in Figure 2. Ventricular repolarization, in turn, is primarily demarcated from the end of the QRS or J point to the end of the T wave, also referred to as the J-T interval. Measurement can be performed on multiple QT complexes in a single lead, on a single lead median beat, on a 12-lead representative complex, or on a computer generated 12 lead global composite median beat.
The ECG complex(es) used for interval measurement is not uniform and differs between core laboratories. When using a single lead for analysis, controversy exists as to which lead or leads to utilize for measurement, since the QT interval can vary by over 20 ms within a single lead or between leads in a 10 s tracing even in the presence of stable sinus rhythm [45]. Furthermore, an individual’s QT and QTc demonstrate diurnal variability, with the longest values occurring during sleep and the shortest values occurring during the day. This individual variability due to circadian rhythm was reported to be as high as 95 ± 20 ms by Molnar and associates in a small group of 21 subjects, with the longest QT being recorded just after awakening [46]. In addition, the QT interval adapts to changes in heart rate with a delay known as QT hysteresis, which may not be evident for several minutes, thereby underscoring the importance of measurements during periods of heart rate stability and in the absence of artifact or arrhythmias. As such, when assessing the impact of drugs on the QT interval, it is imperative to measure in the same lead or using the same composite lead methodology and at the same time of day during the study (time matched). Finally, standard and rigorous acquisition procedures should be adopted to control for the role of food, sleep, body position, gender, autonomic tone, electrolyte abnormalities and environmental factors, all of which can influence the QT measurement.
For decades, the QT interval was measured on a 10 s gridline paper ECG by manual caliper placement at a paper speed of 25 mm/s in which a 1 mV calibration pulse signal produced a 10 mm deflection. Higher paper speeds of either 50 mm/s or 100 mm/s were occasionally used to provide more accuracy especially when there was a need to discern the presence of heart block or arrhythmias. However, these higher recording speeds are not routinely advised as they can lead to distortion of U waves and other low amplitude waveforms thereby confounding accurate interval measurement.
Prior to the 2005 ICH E14 guidance document, 3 QT complexes were usually assessed in lead II and the three measured intervals were then averaged to provide a mean QT value. Thereafter, the QTc was calculated from this number based upon the associated RR intervals bracketing each complex. The rationale for selecting lead II for measurement was rooted in a number of factors. First, Bazett used lead II for QT analysis since at that time ECGs were devoid of any precordial leads, and this therefore became the de facto choice for many years. Secondly, in the majority of children with congenital long QT syndromes, lead II was recommended as it usually yielded the longest QT result in this group. Lastly, lead II has been the most popular lead since the P, QRS and T wave vector axes are mainly directed inferolaterally towards that lead and confounding U waves are not usually evident or as prominent as in the precordial leads.
The 2009 AHA/ACC/HRS article states that the QT should be measured in the lead demonstrating the longest interval, and suggests using leads V2 and V3 as these leads tend to have earlier QRS initiation compared to the limb leads [36]. This recommendation is somewhat contrary to findings from an observational study of 4429 volunteer subjects enrolled in clinical trials (RML personal observation), where the shortest QT intervals were noted in leads I, V1, and V2 while the longest intervals were seen in leads V3, V4 and V5 (Figure 3).
However, other authors have argued that midprecordial leads should not be used as they may overestimate the true QT interval, and they suggest using either lead II or V5, which remain the most commonly employed by ECG core laboratories as well as in the congenital long QT registry. Lead II and a “precordial lead” are also those that were initially recommended by the FDA in their 2005 guidance and are therefore widely accepted for regulatory submission.
FDA guidelines from the 2005 ICH E14 and Q&A R3 documents state that ECGs should be measured by a “few skilled readers,” ideally in a centralized core laboratory. They advocated for measurement by either a fully manual or a manual adjudication methodology, in which computers initially placed the interval markers which were then adjusted by the skilled reader. In view of the variability between different algorithms and equipment manufacturers, fully automated measurements were only deemed appropriate for later phase trials with compounds that did not reveal any QT liability concerns in preclinical or first in human studies.
An important caveat in this recommendation is that the FDA left open who should be considered a “skilled reader.” A landmark paper by Viskin et al. found that the majority of physicians could not identify a long QT interval when one was present [47]. They provided 2 normal ECGs and 2 ECGs from patients with long QT syndrome to a worldwide group of physicians, including 25 QT experts, 102 arrhythmia specialists, 332 cardiologists and 442 non-cardiologists, and asked them to classify the QT as normal or abnormal. Ninety-six percent of QT experts correctly identified the QT intervals as long or normal, whereas only 62% of arrhythmia experts and less than 42% of cardiologists and non-cardiologists were correct in their QT diagnoses. This study underscores the importance of utilizing a centralized core laboratory with highly experienced pharma QT experts as the “skilled reader” pool. Furthermore, these readers should ideally be blinded to the treatment, time and all subject identifiers when interpreting ECGs in clinical trials.
Another factor of considerable significance is that intra- and inter-reader variability should be minimized and preferably established for each clinical trial, while core laboratory quality metrics should be periodically obtained and tabulated to ensure reader certification and maintenance of their reading skills. As a standard operating procedure, when reading studies, assigning a single reader per subject or, if feasible, a single reader to the entire trial would optimize consistency and reduce variability. In this same vein, a comprehensive lexicon of diagnostic statements with associated definitions and criteria should be adopted by the core laboratory to establish uniformity in diagnostic interpretation and measurement. In core laboratories where the skilled readers only review computer generated outliers, it is further suggested that a blinded sample of 2% or more of computer generated “inliers” be procured and sent to the skilled reader(s) on a study specific basis to ensure the output of quality data.
As previously mentioned, the initial ICH E14 document stated that all ECGs in a clinical trial could be measured by the skilled reader pool using “manual adjudication” with or without computer assistance. In view of the high cost and considerable time involved in this scheme, and pursuant to the 2014 FDA Questions and Answers (Q&A R2) guidance, there has been a shift away from using “skilled readers” to manually review all ECGs towards the use of computer assisted ECG algorithms [12]. These algorithms are designed to accelerate drug development timelines at reduced expense while maintaining data quality and integrity. These computerized platforms have been validated, their measurements are not statistically different from manually adjudicated ECGs, and datasets from these highly automated systems have been successfully submitted to and accepted by the FDA for New Drug Applications (NDAs). With the advent of on-screen digital calipers in combination with digital ECG acquisition, the ability to measure the QT interval precisely and accurately has dramatically improved. Today there are a number of software programs that permit the reader to evaluate this interval in a variety of presentations, most commonly in a single lead II median beat format or in a representative superimposed 12 lead median beat. Finally, the approach of analyzing a limited number of single complexes may soon be supplanted by technology which permits automated measurement of all representative complexes over a 5 min or longer period of time so as to provide a more accurate temporal representation of the QT interval and repolarization lability.
An essential point to clarify is the fundamental difference between safety and cardiodynamic ECGs produced in clinical trials both of which may be obtained from the same digital recording device. The former are typically single replicate 12 lead 10 s ECG tracings secured for screening of subjects prior to enrollment, again prior to drug dosing, and periodically after dosing so as to enable
real time review by the principal investigator to evaluate subject safety. Safety ECGs are also used in conjunction with clinical and laboratory information as an integral part of dose escalation decisions. In contrast, cardiodynamic ECGs are also 12 lead 10 s tracings that are typically obtained over a 24 h period from a continuous Holter recording device at specific timepoints coinciding with pharmacokinetic drug determinations. These ECGs are most commonly extracted in triplicate (or higher multiples) over a 5 min time window when subjects have been supine for at least 10 min with careful attention directed at minimizing any environmental stimuli that might affect data quality. The extracted cardiodynamic ECGs are then processed and blindly
reviewed at a later time by the ECG core laboratory readers to determine whether the test compound has any impact on ECG morphology or fiducial intervals related to cardiac conduction and ventricular repolarization. This information is then sent to the sponsor in a Statistical Analysis System (SAS) validated dataset for review. The waveform data is concomitantly sent to the FDA warehouse in an HL7-XML format, where the FDA provides ECG quality metrics to the core laboratory [
13], a representative excerpt of which is noted in
Figure 4. An added benefit of the government’s ECG warehouse is that it serves as an open access repository for research projects by all stakeholders to further the science in this field.
The traditional approach advocated by the FDA for cardiodynamic ECG sampling was to obtain ECGs in triplicate, a recommendation supported by several analyses showing that obtaining more than 3 ECGs at each nominal timepoint yields diminishingly small reductions in QT variability at increased cost [
48,
49]. However, within the past several years, there has been growing attention centered on the methodology known as high precision QT analysis [
50], which involves extracting up to ten 10 s 12 lead ECGs at each protocol specified nominal timepoint as a means to enhance measurement precision. While this approach may indeed improve precision compared to conventional triplicate ECG tracings, it is unclear whether the additional data points significantly reduce reader variability, justify the higher costs of this strategy, enhance the ability to detect a QT signal, or improve the likelihood that a new compound will be approved. Moreover, the FDA does not endorse any specific methodology, certification process, or core laboratory for early clinical trials, nor has it mandated that more than triplicate ECGs be obtained to assess proarrhythmic risk. Thus, the advantage(s) of obtaining higher numbers of replicates remains controversial.
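The diminishing-returns argument can be made concrete with a simple error model: if replicate QT measurements at a timepoint are independent with a common standard deviation, the standard error of their mean shrinks only with the square root of the replicate count. The 8 ms within-subject SD below is an illustrative assumption, not a figure from the text.

```python
import math

def qt_sem(within_subject_sd_ms: float, n_replicates: int) -> float:
    """Standard error of the mean QT at one timepoint, assuming
    independent replicate measurements with a common SD."""
    return within_subject_sd_ms / math.sqrt(n_replicates)

# Moving from 1 to 3 replicates roughly halves the noise; moving from
# 3 to 10 buys a smaller absolute improvement at over 3x the cost.
for n in (1, 3, 10):
    print(n, round(qt_sem(8.0, n), 2))
```

Under this sketch, tripling the replicate count from 3 to 10 reduces the standard error by only about 2 ms, which is the trade-off the cited analyses weigh against cost.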
10. The Evolution of Regulatory Guidance Regarding Ventricular Repolarization
ICH S7B [
12] refers to the
pre-clinical “Evaluation of the Potential for Delayed Ventricular Repolarization (QT Interval Prolongation) by Human Pharmaceuticals.” The elements of cardiac safety testing detailed in this publication were formulated based upon the knowledge that all drugs withdrawn from the market due to QT prolongation and TdP were associated with prolongation of phase 3 of the cardiac action potential, primarily due to hERG channel block. As such, the major foci of the ICH S7B strategy included an in vitro assessment of ionic currents targeting a compound’s effect on IKr along with an in vivo QT assay in conscious or anesthetized non-human primates. Additionally, evaluation of action potential pharmacodynamics and proarrhythmic effects was suggested in isolated cardiac preparations, the results of which were to be integrated with the above assays into a composite risk assessment for QT prolongation. All studies were to be conducted under Good Laboratory Practice (GLP) standards to comply with regulatory submission requirements.
Of interest is the ability of preclinical assays to predict clinical QT effects as recently reviewed by Park et al. [
71]. They examined hERG assays, APD and in vivo QTc effects obtained for 150 drugs in which ventricular repolarization had been evaluated in thorough QT trials [
71]. They found that these assays showed a high false positive rate (poor specificity) but a low false negative rate (high sensitivity) at low concentrations with QTc as an endpoint. hERG and in vivo QTc data had better positive predictive values than the APD and were of moderate utility, although they cautioned that these results can vary depending on drug concentration and testing conditions. Somewhat similar findings were reported by Kleiman and colleagues who computed the sensitivity and specificity of preclinical in vitro and in vivo studies to predict clinical QT prolongation [
72]. Reduced sensitivity (30%–68%) across all assays was again noted, while APD demonstrated a slightly higher specificity of 93% compared with 85% for the other assays.
ICH E14 [
12] was designed to assess the
clinical impact of pharmaceutical agents on the QT/QTc interval. This document recommended that most new chemical entities (NCEs) with systemic bioavailability be subjected to a randomized dedicated ECG trial involving normal volunteers, termed a thorough QT (TQT) study. The purpose of this study is to determine if the novel agent under investigation demonstrates a threshold effect on the QTc interval. TQT study designs can be crossover, parallel or adaptive in configuration, and recruitment of both male and female subjects, when appropriate, is encouraged. The conventional TQT study typically comprises 4 components: a therapeutic dose of the investigational compound, a supratherapeutic dose of the test compound, a placebo cohort, and a positive control arm with a known QT prolonging agent, most often the antibiotic moxifloxacin, designed to confirm assay sensitivity. The parameters to be analyzed in these protocols include either a change from baseline of the heart rate corrected QT interval (ΔQTc) or the maximum time matched placebo adjusted change in heart rate corrected QT (ΔΔQTc) at any time point during the study. The threshold level of regulatory concern was set at a mean ΔΔQTc effect of approximately 5 ms. This translates to a 10 ms upper bound of the one sided 95% confidence interval (CI) around the largest time matched mean effect of the baseline adjusted difference between placebo and study drug. This boundary applies regardless of whether it is breached by the therapeutic or the supratherapeutic dose of the compound. A mean ΔΔQTc effect less than 10 ms provides reasonable assurance that the compound under investigation does not adversely alter ventricular repolarization. In this case, routine collection of on-drug ECGs in later stage trials would be sufficient to monitor cardiac liability.
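The decision rule just described can be sketched numerically. The snippet below computes a time-matched ΔΔQTc and the upper bound of its one-sided 95% CI using a plain two-sample normal approximation; real TQT analyses use mixed models and the intersection union test, and all numbers here are hypothetical.

```python
import math
from statistics import mean, stdev

def ddqtc_upper_bound(drug_dqtc, placebo_dqtc):
    """Placebo-adjusted change in QTc (ΔΔQTc, ms) at one timepoint and
    the upper bound of its one-sided 95% CI (normal approximation)."""
    diff = mean(drug_dqtc) - mean(placebo_dqtc)
    se = math.sqrt(stdev(drug_dqtc) ** 2 / len(drug_dqtc)
                   + stdev(placebo_dqtc) ** 2 / len(placebo_dqtc))
    return diff, diff + 1.645 * se  # 1.645 = one-sided 95% z value

drug = [6.0, 4.5, 7.2, 5.1, 6.8, 5.5]   # hypothetical ΔQTc values (ms)
plac = [0.5, -1.0, 1.2, 0.3, -0.4, 0.9]
dd, ub = ddqtc_upper_bound(drug, plac)
print(f"ddQTc={dd:.1f} ms, upper bound={ub:.1f} ms, below 10 ms: {ub < 10}")
```

In this toy example the mean effect exceeds the 5 ms level of concern, yet the upper confidence bound stays under 10 ms, illustrating why the bound, not the point estimate, is the regulatory criterion.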
In contrast, with pharmaceuticals that display a 10 ms (20 ms for oncology agents) or greater ΔΔQTc signal, expanded ECG surveillance in later stage development would almost always be required to characterize the QT effect of the agent. However, a positive signal should not automatically be interpreted as proarrhythmic, nor should it constitute the sole grounds for halting the compound’s development. In analyzing the pharmacodynamic effect of the test compound on the ECG, an important consideration is that while the mean change in a cohort may not reach the 10 ms threshold, there may well be individuals within the study who have abnormally high changes in QTc exceeding 10 ms from baseline values. These “concealed outliers” would likely impact drug development, including go/no-go decisions as well as the intensity of ECG surveillance in later stage protocols.
The TQT study is typically carried out late in phase II or early in phase III of drug development, after pharmacokinetic and proof of concept information is available, and is not limited to NCEs. It might also be undertaken for approved drugs when a different patient population is targeted, or when a new dosing regimen or route of administration is being considered. In contrast, regulators have suggested that TQT studies are not routinely required for drug combinations when the individual components have previously been evaluated and did not manifest any QT liability. They have also proposed that TQT trials are usually not indicated for large molecular weight biologic proteins or monoclonal antibodies, which do not enter the cell and have minimal direct ion channel effects, or for agents which do not manifest any systemic bioavailability. Finally, during execution of these TQT studies, compliance with Good Clinical Practice (GCP) is necessary to ensure that data meet the regulatory standards for submission [
43].
An additional topic that was recently addressed in a 2016 FDA document is the role of race and ethnicity in clinical trials [
73,
74]. Historically, TQT studies have focused on the QT liability or intrinsic risk of a compound independent of its effects on people of various racial or ethnic backgrounds, and these subgroups are underrepresented in many trials. ICH E14 stated that ethnicity would likely not affect the results of a TQT trial. Although large-scale scientific studies are scarce, it is now recognized that different populations may have a genetic predisposition to being more susceptible and sensitive to a pharmaceutical’s effects, either by attaining higher exposures or through altered drug disposition. This in turn may be due to heterogeneity in alleles that code for different ion channels or that impact metabolizing enzymes. To this point, one well known example of differential racial sensitivity is that of the agent BiDil [
75], a combination of a nitrate and hydralazine. Despite considerable controversy, it was approved by the FDA in 2005 as there was a statistically significant improvement in all-cause mortality and first hospitalizations in self-identified Black subjects versus the overall study population. In contrast to TQT trials, early phase studies typically encompass small cohorts of subjects and therefore are underpowered to provide clinically meaningful results regarding ethnicity, race, and pharmacogenetic susceptibility and sensitivity. Therefore, efforts should be expended to recruit subjects of varying backgrounds especially as drug development proceeds into larger scale later phase studies. To this end, the FDA recommends that all Investigational New Drug (IND), NDAs and Biologic License Applications (BLAs) report tabulated demographic data per their 2016 guidance document [
74].
11. Current FDA Guidance for Assessing QT Liability
TQT studies are resource intensive and expensive to conduct, costing on average between 1 and 4 million dollars depending upon study design. As such, all stakeholders have been interested in developing a more cost effective and equally informative scheme to assess a compound’s effect on the QT interval and its propensity to induce arrhythmias. To this end, there has been a paradigm shift towards a more comprehensive mechanistic approach to defining a compound’s cardiac risk and potential to produce TdP, rather than only profiling its effects on the QT interval. This strategy entails a more robust preclinical and clinical evaluation of new compounds integrated with concentration QT (cQT) analysis in early phase clinical trials.
11.1. Comprehensive In Vitro Proarrhythmia Assay (CiPA)
In 2013 an initiative was undertaken by industry, academicians and government regulators to develop a suite of preclinical and early clinical assays which would complement the ICH S7B guidance and provide an integrated assessment of arrhythmia risk. This initiative was known as the Comprehensive in vitro Proarrhythmia Assay (CiPA) [
76]. In its most current iteration, it consists of the following 4 elements:
11.1.1. Preclinical
Evaluation of multiple cardiac ion channels as a tool to profile risk when clinical studies demonstrate a modest QTc signal (typically <20 ms) and there is evidence of potassium channel/hERG block. In addition to a hERG kinetic score, the most important channels to be evaluated include the L-type calcium (L-type Ca2+), late sodium (late Na+), and slow potassium (IKs) channels, and occasionally other voltage gated channels carrying inward and outward currents which may affect ventricular repolarization.
Robust in silico computer modeling of a compound’s likelihood to cause TdP based upon data derived from ion channel studies. Compounds are to be classified as low, moderate, or high risk relative to a training and validation set of 28 approved reference drugs with known torsadogenic risk. A TdP metric score known as qNet has been devised to quantify risk into three subsets at exposures that are a multiple of the maximal concentration (Cmax). This model targets the assessment of early afterdepolarizations, which are thought to be involved in the pathogenesis of TdP. It is not, however, designed to examine a compound’s direct effects on the cardiac action potential.
Human induced pluripotent stem cell cardiomyocyte preparations have been formulated to ascertain whether a compound has any unanticipated effects on ventricular cardiac tissue not documented by other investigations. This assay may be useful to uncover an NCE’s effects not identified by the in silico model, in situations where there is inadequate human exposure in early phase studies, or if there is discordant data from other preclinical assays.
11.1.2. Clinical
Novel ECG biomarkers, particularly evaluation of the early phase of cardiac repolarization, known as the J-Tpeak interval, a mechanistic biomarker of multichannel ion block, which may mitigate cardiac risk when a modest (<20 ms) QT signal is observed in first in human studies. The J-T interval primarily corresponds to ventricular repolarization and is governed by the interplay of multiple voltage gated currents. This interval, in turn, can be subdivided into the J-Tpeak and Tpeak-Tend segments (see Figure 7).
While IKr is the predominant outward current influencing the length of ventricular repolarization and hence the overall J-T interval, inward currents mediated by calcium and late Na+ channels are the principal factors responsible for the J-Tpeakc subinterval. It has been demonstrated that among compounds that prolong the QT interval through IKr block, an associated change of <9 ms in the J-Tpeakc value of the test compound is an indicator of balanced multichannel ion block involving late sodium current [78]. This late sodium channel block has been postulated to attenuate the effects of IKr blockade and may therefore reduce the likelihood of ventricular arrhythmias.
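The decision logic implied by this biomarker can be sketched as a small classifier. The 9 ms J-Tpeakc threshold and the 10 ms QTc threshold come from the text; the function name and the exact branching are illustrative only.

```python
def repolarization_profile(delta_qtc_ms: float, delta_jtpeakc_ms: float) -> str:
    """Classify a compound's repolarization profile from its QTc signal
    and its change in the corrected J-Tpeak interval (both in ms)."""
    if delta_qtc_ms < 10.0:
        return "no QTc signal of regulatory concern"
    if delta_jtpeakc_ms < 9.0:
        # inward late Na+/Ca2+ block is likely offsetting hERG block
        return "QTc prolongation with likely multichannel block"
    return "QTc prolongation consistent with predominant hERG block"

print(repolarization_profile(15.0, 4.0))
print(repolarization_profile(15.0, 12.0))
```

A compound flagged as "likely multichannel block" would, per the rationale above, carry a lower presumed torsadogenic risk despite its positive QTc signal.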
The presence of multichannel ion block was first evaluated by Johannesen et al. [79], who measured the QT, J-Tpeakc and Tpeak-Tend intervals for 4 drugs known to prolong the QT interval: ranolazine, quinidine, dofetilide and verapamil. They found that the pure hERG blocker dofetilide prolonged both the early and late components of the J-Tc interval and therefore had a higher risk of proarrhythmia. In contrast, the other three agents had a low risk of proarrhythmia and TdP due to multichannel ion block, as evidenced by no appreciable prolongation of the corrected J-Tpeakc interval. In a follow up investigation to amplify their findings, these authors studied the ability to counteract the QT prolonging effects of the pure hERG blocker dofetilide by administering mexiletine and lidocaine, two drugs known to affect late Na+ current [80]. They showed that these late Na+ channel blocking agents shortened the QTc by 20 ms and that this effect was due exclusively to shortening of the J-Tpeakc interval.
Pursuant to these observations, at a recent presentation by Vicente [81], updated commentary was provided about the utility of measuring the early phase (J-Tpeakc) and late phase (Tpeak-Tend) of the J-T interval as biomarkers that may modulate proarrhythmic concerns in the setting of a moderately positive QT signal and reduce the need for intensive ECG monitoring in late stage trials. To this end, open source code using a vector magnitude function was released by the FDA, and software vendors including AMPS [82], Mortara and Philips have developed technology which offers measurement of these intervals based upon different mathematical models [83]. Although a meaningful disparity between the different vendors’ algorithms has not been demonstrated, a number of critical elements in this approach have yet to be standardized, and these may affect the accuracy and utility of this biomarker. They include which computer algorithm to use; how best to determine the J point origin and the peak of the T wave, especially when the T wave apex is not monophasic; in which lead(s) the J-T component intervals should be measured; and what heart rate correction factor is optimal. It is unclear at this time whether these metrics will be routinely required by regulators to assess QT liability or only recommended to qualify risk whenever a modest QT signal is identified. Moreover, the decision as to which algorithm an ECG core laboratory should use and validate is problematic given the FDA’s posture of not favoring or endorsing any specific methodology.
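On the open question of the optimal heart rate correction, the two corrections most often compared are Bazett and Fridericia. The formulas below are the standard definitions, included as a brief illustration rather than as part of the cited work.

```python
def qtc_bazett(qt_ms: float, rr_s: float) -> float:
    """Bazett correction: QTc = QT / RR^(1/2), with RR in seconds."""
    return qt_ms / rr_s ** 0.5

def qtc_fridericia(qt_ms: float, rr_s: float) -> float:
    """Fridericia correction: QTc = QT / RR^(1/3); it distorts less than
    Bazett at heart rates far from 60 bpm."""
    return qt_ms / rr_s ** (1.0 / 3.0)

# At 60 bpm (RR = 1 s) both corrections leave QT unchanged; at 100 bpm
# (RR = 0.6 s) Bazett corrects more aggressively than Fridericia.
print(qtc_bazett(400.0, 1.0), qtc_fridericia(400.0, 1.0))
print(round(qtc_bazett(360.0, 0.6), 1), round(qtc_fridericia(360.0, 0.6), 1))
```

The divergence between the two at fast or slow heart rates is one reason the choice of correction factor materially affects subinterval biomarkers such as J-Tpeakc.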
To date the FDA has recommended CiPA assays to 5 sponsors as part of their IND submissions although it is not a mandatory requirement that they be conducted or incorporated into drug protocols [
84]. It therefore remains to be determined when CiPA information will be requested by government regulators, which elements of CiPA will be required, and who should be entrusted to provide these analyses. Furthermore, whether these assays should be pre-emptively undertaken by sponsors prior to IND submission, and what the economic impact of adding them to drug development would be, are both open questions for discussion. Hence, as additional CiPA assays are performed and the FDA compiles this information in its databases, their precise role in a compound’s development scheme will be better delineated.
In summary, the major purpose of the CiPA initiative appears to be to improve product labeling regarding proarrhythmic potential, enable further development of drugs that previously may have been prematurely terminated due to perceived cardiac risk, and influence the intensity of ECG monitoring in late phase trials [
85]. It may also prove helpful when uninterpretable TQT data due to heart rate changes are present, when the test compound cannot be given to healthy volunteers such as in oncology patients, and when inadequate or low exposure margins are seen in phase I trials. Its implementation may further be beneficial for pharmaceuticals that manifest significant heart rate changes and when there is confounding QT data from preclinical and early clinical studies. Finally, it should be viewed as a complementary approach to the ICH S7B guidance as hERG assays and animal telemetry observations are still required to assess cardiovascular safety.
11.2. Concentration QT Modelling
The second major development in QT liability assessment involves the use of exposure response analysis (ER) or concentration QT modelling (cQT) [
86,
87]. This type of analysis was under investigation in 2003 and became an integral part of TQT trials in 2006 albeit as a secondary analysis tool relative to the primary central tendency by time point approach known as the Intersection Union Test (IUT) as detailed in ICH E14. In 2008 the FDA drafted a guidance document elucidating the role of cQT in drug development [
88] and in 2015 it became a key element in the FDA’s Q&A (R3) publication [
12] that established it as a primary analysis tool to be applied predominantly in dose escalation studies as an alternative or “substitute” to conducting a large scale TQT study.
The emergence of cQT as a primary analysis modality is supported by a number of factors. Chief amongst these are the results of the Innovation and Quality Cardiac Safety Research Consortium (IQ-CSRC) SAD-like study [
89], supporting the value of this approach in a small cohort of subjects, multiple computer simulation studies undertaken by stakeholders during the past 10 years, and a large body of knowledge garnered by conducting this analysis in hundreds of TQT trials. cQT has the distinct benefits of reducing resources and development timelines while enabling earlier go/no-go decisions which may obviate the need to undertake a large scale conventional TQT study. Additional advantages include facilitating extrapolation of a compound’s cardiac risk at exposures that may not have been evaluated but are clinically important, obtaining information in challenging populations such as oncology patients, and clarifying ambiguous results that may have been generated when the IUT was undertaken. The correlation between the findings of cQT analysis and those of the IUT applied to traditional QT studies is high when the highest clinically relevant exposure is achieved, and regulators have reviewed and accepted cQT with multiple IND submissions. cQT modeling has now become an important part of the discussion with sponsors as they plan their drug development programs and study designs.
When applying cQT analysis, most commonly, a pre-specified linear mixed effects model is used to compute the mean and two-sided 90% CI of ΔΔQTc at clinically relevant exposures covering concentrations that are therapeutic as well as those that might be seen with hepatic or renal impairment or in the presence of CYP enzyme polymorphisms. When the upper bound of the two-sided 90% CI for the QTc effect of a drug treatment as determined by exposure-response analysis is <10 ms, one can infer that the compound under investigation is not high risk and further testing beyond routine ECG surveillance in later stage development is usually not required assuming that the highest clinically relevant exposure has been attained. In contrast, when the cQT regression line manifests a positive slope that crosses the 10 ms threshold of regulatory concern, the compound is deemed to be at higher risk for proarrhythmia and may justify additional preclinical testing per the CiPA paradigm or more extensive ECG monitoring in later phase studies. Finally, when employing the cQT model, it is essential to avoid extrapolating data at concentrations that were not clinically observed and to ensure that tests for hysteresis are applied.
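The regression-and-threshold logic described above can be sketched with ordinary least squares. The snippet below fits ΔΔQTc against concentration and reports the upper bound of the two-sided 90% CI for the predicted mean effect at a reference exposure; a normal approximation stands in for the linear mixed-effects machinery used in real analyses, and all data are hypothetical.

```python
import math

def cqt_upper_bound(conc, ddqtc, c_ref):
    """Least-squares linear cQT fit (ΔΔQTc = a + b·C) and the upper bound
    of the two-sided 90% CI for the predicted mean effect at c_ref."""
    n = len(conc)
    mx, my = sum(conc) / n, sum(ddqtc) / n
    sxx = sum((x - mx) ** 2 for x in conc)
    b = sum((x - mx) * (y - my) for x, y in zip(conc, ddqtc)) / sxx
    a = my - b * mx
    resid = [y - (a + b * x) for x, y in zip(conc, ddqtc)]
    s2 = sum(r * r for r in resid) / (n - 2)           # residual variance
    se = math.sqrt(s2 * (1 / n + (c_ref - mx) ** 2 / sxx))
    pred = a + b * c_ref
    return pred, pred + 1.645 * se                     # z for 90% two-sided CI

# Hypothetical (concentration ng/mL, ΔΔQTc ms) pairs across dose cohorts.
conc = [0, 50, 100, 200, 400, 800]
dd = [0.2, 1.0, 1.8, 3.1, 5.9, 11.8]
pred, ub = cqt_upper_bound(conc, dd, 400)
print(round(pred, 1), round(ub, 1))
```

Note that `c_ref` should be kept within the observed concentration range, mirroring the caution in the text against extrapolating beyond clinically observed exposures.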
cQT is typically performed in SAD studies, as the highest drug concentrations are usually achieved in these protocols and the probability of reaching a targeted exposure level is greater than in multiple ascending dose (MAD) protocols. A requirement for these SAD trials is that there be a sufficient number of subjects to provide adequate power to exclude a false negative result. This typically means 3 or more cohorts of 6–9 subjects on active drug and at least 2 subjects per cohort receiving placebo. However, there are situations when cQT in SAD investigations may be challenging, and the preferred strategy in these cases would be to incorporate cQT analysis in a MAD design. These scenarios include extended-release drugs for which the model fails to capture a sufficient range of drug exposures, drugs which exhibit significant hysteresis and heart rate variability, and drugs for which the hERG blocking effects of the parent moiety and an associated active metabolite may not be fully manifest in a single dose escalation protocol. Moreover, in circumstances when the investigational agent has non-linear kinetics, or when there is a long half-life, major metabolites or significant drug accumulation, the SAD study may not have enough subjects for meaningful analysis. In these cases, data from the SAD investigation may be combined with pharmacokinetic (PK) and pharmacodynamic data from a complementary MAD study, assuming that clinical conduct of both trials is homogeneous and ECGs are rigorously collected, of high quality, and reviewed in a consistent and blinded manner so as to minimize data variability [
43].
The decision to undertake cQT in early phase trials may be influenced by budgetary constraints, although the expense associated with this strategy is significantly less than that of a dedicated TQT trial. In these situations, several cost effective alternatives may be attractive. One would be to analyze the PK/QTc relationship only in the highest dose cohorts, while another would be to collect Holter recordings for each cohort and store the information for future analysis, as most NCEs will not proceed beyond phase II in development.
An important issue of regulatory concern is whether or not a positive control to document assay sensitivity is required for exposure response analysis to reduce the likelihood of false negative results. The original ICH E14 document stated that “the confidence in the ability of the study to detect QT/QTc prolongation can be greatly enhanced by the use of a concurrent positive control group (pharmacological or non-pharmacological) to establish assay sensitivity.” Current guidance for cQT does not mandate the use of a pharmacologic positive control assuming the drug achieves an adequate exposure margin defined as twice the highest clinically relevant exposure which is generally higher than the supratherapeutic dose observed in conventional TQT trials. This exposure represents a worst-case clinical scenario accounting for both intrinsic and extrinsic factors that can affect drug metabolism. These include renal or hepatic impairment, drug-drug interactions, gender, age, food and metabolic enzymes that impact drug disposition. As such, the margin for patient safety and the likelihood of false negative responses are optimally addressed by analyzing data bracketing the Cmax, the highest clinical exposure, and at the timepoint when a multiple of the highest relevant clinical exposure occurs. In the event that a 2X exposure margin is not reached, then a positive control with moxifloxacin administered to 20–24 subjects has been proposed to establish assay sensitivity.
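The 2X waiver condition described above reduces to a one-line comparison. The function name and interface below are illustrative, not regulatory terminology.

```python
def positive_control_waived(achieved_exposure: float,
                            highest_clinical_exposure: float) -> bool:
    """Per the 2x exposure-margin rule described in the text, a
    pharmacologic positive control may be waived when the early phase
    study covers at least twice the highest clinically relevant exposure."""
    return achieved_exposure >= 2.0 * highest_clinical_exposure

print(positive_control_waived(450.0, 200.0))  # margin of 2.25x -> True
print(positive_control_waived(300.0, 200.0))  # margin of 1.5x -> False
```

When the check fails, the text indicates that a moxifloxacin arm in roughly 20–24 subjects has been proposed instead to establish assay sensitivity.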
Alternatively, a non-pharmacologic approach to assess assay sensitivity known as “bias metric” [
90], is currently undergoing examination and is most suitable in cases where the exposure margin falls just short of waiving the need for a positive control. This approach uses the slope of Bland Altman plots to address any potential bias between fully automated computer-generated QT measurements and those that are procured from the core laboratory algorithm used in the primary analysis. Darpo et al. published their results using this metric for 5 drugs with known QT prolonging effects that were also administered in the IQ-CSRC study [
91]. When bias severity was greater than −20 ms over a range of QTc values of 100 ms or greater, the ability to predict the QT effects of the five studied drugs failed. Conversely, when bias severity corresponded to a QTc difference of <−10 ms, the chance of a false negative finding was lowered to less than 5%. They concluded that a metric of bias severity “has to be included” in all reported QTc information obtained in early phase studies using exposure response analysis to minimize the risk of false negative results and ensure accuracy and consistency of fiducial measurements. To date, this recommendation has not been routinely endorsed by government regulators, who view its application on a case by case basis where it seems to have the greatest impact with drugs that demonstrate some degree of QT prolongation. Other non-pharmacologic approaches that have been proposed to establish assay sensitivity include the effects of standardized meals as advocated by Taubel [
92], and the role of positional changes reported by Wheeler [
93]. Neither of these have gained traction within industry and all non-pharmacologic approaches to determine assay sensitivity await further testing and validation before they become viable and accepted alternatives to moxifloxacin.
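As a sketch of the Bland-Altman style computation behind the bias metric: regress the difference between automated and core laboratory QT values on their pairwise mean, then scale the slope over a 100 ms QTc range. This severity definition is a plausible reading of the cited metric, not the published formula, and the data are fabricated for illustration.

```python
def bias_severity(auto_qt, core_qt, qtc_range_ms=100.0):
    """Slope of a Bland-Altman plot (difference vs. pairwise mean)
    between automated and core-lab QT values, scaled over a QTc range."""
    means = [(a + c) / 2 for a, c in zip(auto_qt, core_qt)]
    diffs = [a - c for a, c in zip(auto_qt, core_qt)]
    n = len(means)
    mx, md = sum(means) / n, sum(diffs) / n
    sxx = sum((x - mx) ** 2 for x in means)
    slope = sum((x - mx) * (d - md) for x, d in zip(means, diffs)) / sxx
    return slope * qtc_range_ms

# Hypothetical readings where the automated algorithm drifts upward
# relative to the core laboratory as QT lengthens.
auto = [384.0, 405.0, 426.0, 447.0]
core = [376.0, 395.0, 414.0, 433.0]
print(bias_severity(auto, core))  # bias in ms accumulated per 100 ms of QTc
```

A severity near zero indicates the automated and core-lab measurements track each other across the QTc range, which is the property the assay-sensitivity argument depends on.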
From December 2015–November 2016, the QT-Interdisciplinary Review Team (QT-IRT) reviewed 25 proposals for cQT analysis submitted under the aegis of ICH E14 (Q&A R3) [
94], 18 of which were early phase SAD/MAD study designs including some where data was pooled from the separate trials. The reviewers accepted 11 proposals while rejecting 14 of the submissions with the predominant reason for rejection being inadequate exposure margins or an inadequate modeling analysis plan. In a more recent update of 16 study designs of non-oncology compounds, the QT IRT accepted 14 as a substitute for conducting a TQT study while rejecting 2 [
84]. Those that were rejected were once again primarily due to inadequate exposure margins. Other possible reasons for protocol modification or outright rejection have included failure to sample at Cmax, failure to sample for a sufficient time out to 24 h, failure to consider the impact of metabolites and inadequate justification for pooling of data from multiple studies. Moreover, when submitting a proposal involving cQT, there are other important elements to consider in the study design that might prompt rejection if omitted. These entail exploratory plots to test model assumptions, utilizing a linear mixed effects model or other alternative models such as Emax, and comprehensive treatment−placebo difference calculations. While not all protocols are mandated to be submitted for review to the FDA’s cardio-renal division and QT-IRT, submission of all first in human drug proposals should be considered if there are concerns about study design, modelling or exposure levels. This review process typically encompasses 30–60 days following submission and should be incorporated by sponsors into their development timelines.
12. Commentary
The 10 ms threshold of regulatory concern, while having been successful in preventing any new torsadogenic drugs from coming to market since 2005, is not typically a value that would prompt intervention by clinicians, particularly if the calculated QTc after drug administration remains well within the normal range. Moreover, depending upon their specialty of practice, clinicians may not be aware that certain compounds carry a risk of inducing rhythm disturbances and, more importantly, that drug combinations in the setting of underlying disease or electrolyte abnormalities may synergistically increase the risk of an arrhythmic event. In addition, even when a drug has documented and well known QT liability, clinicians may not obtain pre- and post-treatment ECGs to monitor the QTc interval. For example, a study by Choo et al. from the UK surveyed the prescribing practices of hospital based cardiologists and general practitioners over 6 months involving drugs which carried a risk of prolonging the QTc interval [
95]. A baseline prolonged QTc was recognized by only 14% of cardiologists and 6% of general practitioners. Of the 4133 patients in the study, 22 had QTc values >500 ms at baseline but only 2 of these were identified prior to drug dosing. Surprisingly, despite manifesting a prolonged QTc at baseline, a significant number of subjects (37.8%) were nonetheless prescribed QTc prolonging drugs and only 8% had a follow up ECG at 48 h. Broszko and Stanciu surveyed a small group of psychiatrists from a university program and found that despite American Psychiatric Association (APA) recommendations [
96], a large number of the physicians did not routinely screen for syncope, family history of sudden death, heart disease or congenital long QT syndromes. Moreover, ECGs were obtained prior to initiating treatment much less frequently in the outpatient cohorts compared to the inpatient population. This study may well be representative of other subspecialties, underscoring the need for all practitioners to be cognizant of a drug’s potential for producing arrhythmias prior to initiating therapy, coupled with careful cardiovascular screening. To aid in this endeavor, the website CredibleMeds is an important and user-friendly reference in the public domain, as it maintains a comprehensive list of medications that may be linked to TdP and classifies them as “known risk, possible risk and conditional risk”. It also has a category of drugs to be avoided in the setting of congenital long QT syndromes, and the drug lists are continually updated as new information becomes available.
Based upon the aforementioned information, it is evident that prescribers, including cardiac specialists, fall short in appropriate surveillance for QT liability and need to be more diligent. This would entail performing a careful history for cardiovascular disease and obtaining a baseline ECG if indicated. While it is incumbent upon regulatory agencies to focus on public safety, including the challenge of weighing a compound’s benefit versus risk during the approval process, more attention needs to be directed towards practitioners to increase their awareness of drugs with known QTc effects, often embedded in drug insert black box warnings. To illustrate this need, in a survey of roughly 5 million prescriptions in the UK, 23% of patients were prescribed a drug that the prescriber did not recognize could prolong the QT interval [
97]. Accordingly, the mission of protecting the public is a shared responsibility involving both government regulators and health care providers and additional safeguards should be instituted to ensure patient safety when drugs that may affect cardiac repolarization are administered.
As an aid in this endeavor, computer software programs that identify adverse drug responses and potential drug interactions based upon individual patient demographic and laboratory data need to be developed and universally adopted to optimally safeguard patients from QT liability. To this end, the Mayo Clinic and Indiana University have independently implemented clinical decision support systems (CDSS) in an effort to alert pharmacists and physicians to medications which may predispose to arrhythmias [
98]. For example, at Indiana University Medical Center, 2400 electronic records from coronary care unit patients were interrogated and a QTc alert risk score was developed based upon a QTc >500 ms or a change from baseline of >60 ms. Using this approach, they demonstrated that the risk of exceeding either of these thresholds could be reduced for both cardiac and non-cardiac drugs that have been associated with QT prolongation and TdP. Unfortunately, however, in the majority of cases when alerts were generated by the CDSS, prescribers and pharmacists ignored them, thereby compromising the favorable impact of these systems on reducing the likelihood of serious ventricular arrhythmias.
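The two alert thresholds used in the Indiana University study (an absolute QTc above 500 ms, or a rise of more than 60 ms over baseline) can be sketched as a minimal screening rule. The function name, signature, and units below are illustrative assumptions for exposition, not part of any published CDSS implementation, which would incorporate many additional clinical risk factors.

```python
def qtc_alert(baseline_qtc_ms: float, current_qtc_ms: float) -> bool:
    """Flag a potentially proarrhythmic QTc using the two thresholds
    described in the text: an absolute QTc > 500 ms, or an increase of
    > 60 ms over the patient's baseline. Illustrative sketch only; a real
    CDSS weighs demographics, labs, and co-prescribed drugs as well."""
    ABSOLUTE_LIMIT_MS = 500.0
    DELTA_LIMIT_MS = 60.0
    return (current_qtc_ms > ABSOLUTE_LIMIT_MS
            or (current_qtc_ms - baseline_qtc_ms) > DELTA_LIMIT_MS)

# A QTc of 470 ms after a baseline of 400 ms exceeds the 60 ms delta limit.
print(qtc_alert(400.0, 470.0))  # True
print(qtc_alert(430.0, 470.0))  # False: below both thresholds
```

Framing the rule this way makes clear that a drug can trigger an alert without ever crossing 500 ms, provided the change from the patient's own baseline is large enough.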
Despite these shortcomings, the further development of comprehensive CDSS that are integrated with laboratory and clinical proarrhythmic risk factors should be pursued in combination with continuing education for those entrusted with dispensing and prescribing medications. The development of wearable smart devices or handheld software applications that can monitor a patient’s rhythm, assess the QT value, and transmit an alert would be useful for risk stratification and surveillance. Moreover, as genetic screening becomes more widespread and cost effective, this information could be integrated into these computer alert platforms, thereby allowing persons with congenital long QT syndromes to be identified sooner. This in turn would translate into heightened awareness of all parties involved and permit earlier prophylactic intervention so as to reduce patient morbidity and mortality. Future strategies to mitigate arrhythmia risk may include stereoselective and molecular engineering of new compounds to reduce undesirable arrhythmogenic effects, the development of agents that shorten the QT interval, such as hERG activators that could be targeted to counteract the QT-prolonging activity of drugs, and compounds that can shorten APD via blockade of the Ca2+/Na+ exchanger. However, as cautioned by Malik [99], the science to support these approaches is yet to be substantiated, and there is the possibility that QT-shortening compounds may paradoxically create unwanted heterogeneity of repolarization and arrhythmogenicity in some cases.