Online First

Testosterone Therapy (TRT) in Men With Untreated Prostate Cancer (PrCa)


Testosterone therapy has been shown to have a number of beneficial effects in men with testosterone deficiency, including improvement in fatigue, decreased libido, and impaired sexual performance. However, a major concern has been that increased serum testosterone may cause growth of prostate cancer. This concern is based on the observations that androgen deprivation causes PrCa regression, as reflected by decreased serum PSA, and that normalization of serum T in androgen-deprived men causes an increase in PSA. Historically there has been an absolute prohibition against the use of TRT in men with any prior history of PrCa.

However, recent literature has called this traditional paradigm into question. A small series of retrospective studies has reported benign outcomes in men who underwent TRT following definitive treatment of localized PrCa. Of these studies, 3, with a total population of 74 men, involved men with testosterone deficiency (TD) who had undergone radical prostatectomy for PrCa. None of the participants had biochemical recurrence with follow-up as long as 12 years. Another study reported no biochemical recurrence in 31 men who underwent TRT for a median of 4.5 years following brachytherapy for localized PrCa.

One explanation for the lack of biochemical recurrence is that these men were cured of PrCa. Another is that a higher serum T concentration does not necessarily lead to greater PrCa growth. A global pooled collaborative analysis of 18 longitudinal studies of sex hormones and PrCa, comprising 3,886 men with PrCa and 6,438 age-matched controls, found no relationship between endogenous serum androgen concentrations and PrCa. In another study, intraprostatic T and DHT concentrations were unchanged after 6 months of TRT in men with TD despite large changes in serum T concentrations. A saturation model has been proposed to account for the substantial prostatic changes observed with institution or cessation of androgen deprivation therapy and the relative indifference of prostate tissue to changes in serum T above the near-castrate range.
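The saturation model can be caricatured numerically. The sketch below is purely illustrative and not from the paper: the saturating function, the half-saturation constant K, and the example serum T values are all assumptions chosen only to show the shape of the curve, in which androgen stimulation plateaus well below the normal serum T range.

```python
# Illustrative sketch of the saturation model: the androgen-driven growth
# stimulus rises steeply at very low serum T and plateaus far below the
# normal range. K (half-saturation constant, 50 ng/dl here) and the serum
# T values are hypothetical, chosen only to illustrate the curve's shape.

def growth_stimulus(serum_t_ng_dl, k=50.0):
    """Fraction of maximal androgen-driven stimulus at a given serum T."""
    return serum_t_ng_dl / (serum_t_ng_dl + k)

for t in (20, 250, 700):  # castrate, deficient, and mid-normal T (ng/dl)
    print(f"T = {t:3d} ng/dl -> stimulus = {growth_stimulus(t):.2f}")
```

On this toy curve, nearly tripling serum T from 250 to 700 ng/dl raises the stimulus by only about 0.10, whereas dropping to castrate levels (20 ng/dl) cuts it by more than half, mirroring the "relative indifference" of prostate tissue above the near-castrate range.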

Without large, randomized, long-term studies of TRT in men with TD it remains an unanswered question whether TRT may cause greater PrCa growth in this cohort. This question is of particular relevance to the increasing number of men who currently elect active surveillance for newly diagnosed PrCa, some of whom are symptomatic from T deficiency and desirous of TRT.

As new data have suggested a less risky relationship between T and PrCa than previously assumed, we and others have become more open to the practice of offering TRT to men with a history of PrCa. A decrease in PSA was noted in an elderly man who received TRT during PrCa surveillance. In this study we report on prostate biopsy and PSA results in a group of symptomatic T deficient men who received TRT while undergoing active surveillance for untreated PrCa. The emergence of serial prostate biopsies as a critical component of active surveillance protocols in men with PrCa presents a new and unique opportunity to evaluate the effect of TRT on PrCa growth.


Morgentaler A, Lipshultz LI, Bennett R, Sweeney M, Avila D Jr, Khera M. Testosterone Therapy in Men With Untreated Prostate Cancer. J Urol 2011.

PURPOSE: A history of prostate cancer has been a longstanding contraindication to the use of testosterone therapy due to the belief that higher serum testosterone causes more rapid prostate cancer growth. Recent evidence has called this paradigm into question. In this study we investigate the effect of testosterone therapy in men with untreated prostate cancer.

MATERIALS AND METHODS: We report the results of prostate biopsies, serum prostate specific antigen and prostate volume in symptomatic testosterone deficient cases receiving testosterone therapy while undergoing active surveillance for prostate cancer.

RESULTS: A total of 13 symptomatic testosterone deficient men with untreated prostate cancer received testosterone therapy for a median of 2.5 years (range 1.0 to 8.1). Mean age was 58.8 years. Gleason score at initial biopsy was 6 in 12 men and 7 in 1. Mean serum concentration of total testosterone increased from 238 to 664 ng/dl (p <0.001). Mean prostate specific antigen did not change with testosterone therapy (5.5 +/- 6.4 vs 3.6 +/- 2.6 ng/ml, p = 0.29). Prostate volume was unchanged. Mean number of follow-up biopsies was 2. No cancer was found in 54% of follow-up biopsies. Biopsies in 2 men suggested upgrading, but subsequent biopsies in 1 and radical prostatectomy in the other indicated no progression. No local prostate cancer progression or distant disease was observed.

CONCLUSIONS: Testosterone therapy in men with untreated prostate cancer was not associated with prostate cancer progression in the short to medium term. These results are consistent with the saturation model, ie maximal prostate cancer growth is achieved at low androgen concentrations. The longstanding prohibition against testosterone therapy in men with untreated or low risk prostate cancer or treated prostate cancer without evidence of metastatic or recurrent disease merits reevaluation.
 
Handelsman DJ. Commentary: Androgens and "Anabolic Steroids": The One-Headed Janus. Endocrinology 2011;152(5):1752-4. http://endo.endojournals.org/cgi/content/full/152/5/1752?etoc


Steroid nomenclature has the difficult task of bridging the gap between the approved but arcane systematic nomenclature (1) and the generic names needed to provide the clear functional descriptions that serve as essential shorthand for scientific discourse. The gap between the strictly correct and the necessarily practical creates the possibility of misleading terminology. When this gap grows sufficiently large, it can become a chasm of misunderstanding, and cleaning up becomes a thankless task over which lingers a disagreeable whiff of pedantry. Ultimately, however, lucid scientific thinking requires accurate terminology. For estrogens, endocrinology was done a valuable service by an editorial (2), hopefully to be cited more among trainees than in bibliometric indices. Now the spotlight turns to an analogous task for androgens, notably the meaningless term "anabolic steroid" when used alone or in the oxymoron "androgenic-anabolic steroid."

An androgen is classically defined as a substance capable of developing and maintaining masculine reproductive characteristics and contributing to the anabolic status of somatic tissues (3). This physiological definition is complemented by a biochemical definition that an androgen is a chemical that binds to and activates the androgen receptor (AR) (4). Following its characterization as the mammalian male sex hormone in the mid-1930s, testosterone was quickly evaluated in numerous clinical and experimental applications before the hiatus of the war. The early postwar decades, the golden age of steroid pharmacology, saw the pharmaceutical industry successfully commercialize synthetic estrogens, progestins, and glucocorticoids in an epoch that witnessed the development of the oral contraceptives and synthetic glucocorticoids that remain among the most widely used medicines.

These monumental achievements encouraged industry research in a parallel quest for wide application of synthetic androgens. The goal was to identify an anabolic steroid, an androgen without virilising properties, making it suitable for use in women and children, not just men. Unlike the other challenges, this systematic search was abandoned by the 1970s, having failed comprehensively (5). The reasons for the failure are now understood to include both flaws in the whole animal androgen (Hershberger) bioassay guiding the search as well as the singularity of the AR, which directs essentially similar receptor-mediated effects in reproductive tissue and muscle. The original prewar whole animal androgen bioassay used to characterize testosterone involved measuring prostate and seminal vesicle responses of castrated mature rats (6). The postwar search for an anabolic steroid required a measure of myotrophic activity and ultimately the levator ani muscle was selected (7) in what became standardized as the Hershberger bioassay (8) whereby myotropic (anabolic) could be separated from androgenic activity of synthetic androgens. However, even the original description noted the choice of the levator ani as being because "this muscle is more responsive to castration and testosterone than other striated muscles" (7), a choice of convenience probably reflecting the unusual androgen responsiveness characteristic of autonomically innervated pelvic organs (9) rather than intrinsic to that muscle (as implied by a muscle-specific endpoint). This undermines the goal of reflecting all striated muscles, as implied by searching for an anabolic steroid. Recent studies (re)discover other limitations of the Hershberger-type bioassays whereby the relative (anabolic:androgenic) potency of a test chemical depends markedly on bioassay design features rather than being a relatively fixed characteristic of that chemical (10). 
Although the choice of muscle endpoint seemed adroitly practical, especially compared with the cumbersome alternatives (like nitrogen retention, or its modern successor, lean mass), this may have been at the expense of the overall goal. By the late 1950s, the limitations of the levator ani endpoint were understood (11) as summarized in effectively a postmortem of that failed search (5). Despite the modern improvements in the Hershberger bioassay (12), it still embodies fundamentally the same approach so that repeating the search based on the same bioassay is most likely to yield the same outcome.

In the intervening three decades, there have been major advances in understanding androgen action. Testosterone, the major natural androgen, has distinctive features including dual prereceptor steroidogenic activation (5α-reduction, aromatization), a singular AR, and postreceptor coregulator modulation. AR differs from estrogen and progestin receptors, which each exhibit two receptor isoforms with usually opposing physiological effects (13, 14), a duality that facilitates exploitation of tissue differences in net estrogen or progestin action. Tissue-specific differences have been developed in nonsteroidal synthetic estrogens as selective estrogen receptor modulators, mixed estrogen agonists/antagonists (15) with fortuitous and advantageous differences in estrogen target tissues (14). Despite remaining uncertainty over the responsible mechanisms, this serendipitous discovery stimulated interest in analogous synthetic steroid analogs for other nuclear receptor classes, including androgens (selective AR modulators). Mineralocorticoid and glucocorticoid receptors represent a unique pairing with prereceptor steroid metabolism as a gatekeeper determining net tissue effects of analogs (16). Although development of the first nonsteroidal androgens (17, 18) as candidate selective AR modulators (19) raises hope of resurrecting this defunct term (20), prereceptor activation mechanisms cannot apply to nonsteroidal androgens, and the singular AR lacks the dual drive mechanism of the other paired sex steroid receptors. Consequently, it is not surprising that available knowledge (21) provides only slender hope that this failed, and probably false, dichotomy will now succeed through a renewed search guided by the same in vivo bioassay.

However, this failed search left a residue, the now meaningless term anabolic steroid, which perpetuates a distinction without a difference. Now surviving long after its scientific eclipse, devoid of meaning, it serves principally as a journalistic device for demonization outside science, adding to public misunderstanding about "steroids," which confuses anabolic steroids and glucocorticoids and mystifies discussion within science. Dispensing with the confused term anabolic steroid, whether used in isolation or joined to the word "androgen" in the oxymoron anabolic-androgenic steroid, is overdue. Although in poetry anything mellifluous goes, accurate terminology matters in scientific communication. Although it may be argued that anabolic-androgenic steroid conveys two apparently different endpoints of androgen action, applying Occam's razor, we never refer to "luteal-gestational progestins" or "mammary-uterine estrogens." A little thought experiment highlights the issue. Imagine that some scientists come to believe that a unicorn exists and they habitually write about an animal species called the "horse-unicorn" as the generic name for a species, including both unicorns and horses. There would be no real alternative to rejecting such inaccurate terminology and ignoring claims that a unicorn will soon be found until one is.

Only if the scientists set this example can the vanguard of knowledgeable scientific journalists gradually educate public thinking. This misnomer distorts logical thinking and, whether by application of Occam's razor or scientific commonsense, should have been quietly but firmly exiled long ago. In the happy but unlikely event that a nonsteroidal androgen ever proves to have the desired tissue specificity, this term would become legitimate for the first time. In the meantime, all androgens should, for the sake of clear thinking, be termed simply androgens.
 

Attachments

Re: Prostate Cancer

And for an ENTIRELY different POV:


Morales A. Effect of testosterone administration to men with prostate cancer is unpredictable: a word of caution and suggestions for a registry. BJU International 2011;107(9):1369-73.

What’s known on the subject? and What does the study add?

Very little is known about the effect of T in men with untreated PCa. There are only two small series (including the present one) available, comprising fewer than 25 men with PCa receiving TTh.

It raises a warning that testosterone administration to men with PCa is not always safe, that current precautions should be maintained, and that only an international registry would provide prompt answers to these issues.

OBJECTIVE: To assess the evidence for the concept that the androgen receptor of prostate cancer (PCa) cells becomes saturated when testosterone values exceed castrate levels, so that testosterone administration in hypogonadal men with untreated PCa does not stimulate tumour growth. To propose basic criteria for administration of testosterone to untreated patients with PCa and, as this is a rare clinical situation, to encourage the establishment of an international registry for these patients.

PATIENTS AND METHODS: Men with a diagnosis of PCa and symptomatic testosterone deficiency received testosterone therapy (TTh). Patients were assessed quarterly. Prostate-specific antigen (PSA) velocity was used as the criterion to discontinue therapy, and a return to nadir PSA levels allowed re-initiation of testosterone supplementation.

RESULTS: The responses to testosterone supplementation varied from individual to individual and were unpredictable. While some men showed little change after years of treatment, others exhibited a rapid and significant increase in PSA levels. In others, the use of intermittent therapy resulted in synchronous changes in PSA levels. Interruption of TTh invariably translated into a decrease in PSA to pre-therapy levels.

CONCLUSIONS: Available evidence regarding the effect of testosterone administration to hypogonadal men with untreated PCa is too limited to be considered reliable. In addition, the response to this treatment appears to be varied and unpredictable.

Hypogonadism associated with untreated PCa is not common, therefore, we propose the establishment of an international registry as the quickest way to establish the basic parameters for consideration of TTh in this situation and recommendations for follow-up. Until credible evidence becomes available, the current restrictions regarding the administration of testosterone to men with PCa should remain in place.
 
Athletes at Risk: New, Inexpensive Test for 'Sudden Death Syndrome'

ScienceDaily (Apr. 22, 2011) — When a high school athlete drops dead, the rare but fatal condition called "sudden death syndrome" dominates the headlines. For reasons that remain a mystery to scientists, some young athletes -- especially young males -- begin to experience an unusual heart arrhythmia. With over-exertion, their hearts stop pumping, leading to sudden death.

Until now, screening for the hard-to-detect syndrome has been prohibitively expensive. But cardiologist Dr. Sami Viskin of Tel Aviv University's Sackler Faculty of Medicine has developed a new test that's already being used by doctors in America -- and may have already saved lives.

The "Viskin Test" is based on the researcher's discovery that subtle abnormalities distinguishing normal from at-risk patients can be made more visible using a simple bedside test that requires a subject to suddenly stand up. When standing, at-risk patients will experience a measurable difference in a portion of the heartbeat called the QT interval. The difference can be detected by an electrocardiogram (ECG).

Dr. Viskin described his research in a recent issue of the Journal of the American College of Cardiology.

The long and short of the QT test

"Current screening methods offer no real therapeutic value, because very few people who experience arrhythmias, up to 20 percent of the population, will ever die from sudden death," Dr. Viskin says. "Moreover, there is such a significant overlap between what's normal and abnormal on an ECG that we need additional screening parameters. This test, when done on people with strong symptoms, can really give us doctors a yardstick to compare those at risk for sudden death syndrome to those who would otherwise go on to live a healthy life."

According to Dr. Viskin, the QT interval shortens when the heart rate accelerates, but this response is not instantaneous. In the study, he tested whether sudden changes in the body as it stands up would reveal an abnormally prolonged QT interval in patients with long QT syndrome (LQTS), the most common cause of sudden death. Those affected with LQTS have a normal heart structure but have problems in the electrical discharge in the heart -- they have trouble "recharging" after these sudden changes.

Young people who suddenly faint for no reason, have dizzy spells or have a family history of LQTS are very good potential candidates for this new test, Dr. Viskin says.

A warning for immediate intervention

In the study that led to the "Viskin Test," Dr. Viskin assessed 68 patients with LQTS and 82 control subjects who all underwent a baseline ECG while resting in the supine position. They were then asked to get up quickly and stand still during continuous ECG recording. The QT interval was then studied over various time periods.

In response to brisk standing, both the LQTS patients and the control subjects showed similar heart rate acceleration. But Dr. Viskin discovered that the QT interval in LQTS patients increased significantly.

"This test adds diagnostic value. And the beauty of the test," Dr. Viskin adds, "is that it's easily done at the patient's bedside. It eliminates the need for more expensive IV tests and more strenuous exercise tests."

He adds that, while untreated LQTS can be life-threatening, the therapies to treat it can be very effective.


Gollob MH, Redpath CJ, Roberts JD. The Short QT Syndrome: Proposed Diagnostic Criteria. Journal of the American College of Cardiology 2011;57(7):802-12.

Objectives: We aimed to develop diagnostic criteria for the short QT syndrome (SQTS) to facilitate clinical evaluation of suspected cases.

Background: The SQTS is a cardiac channelopathy associated with atrial fibrillation and sudden cardiac death. Ten years after its original description, a consensus regarding an appropriate QT interval cutoff and specific diagnostic criteria has yet to be established.

Methods: The MEDLINE database was searched for all reported cases of SQTS in the English language, and all relevant data were extracted. The distribution of QT intervals and electrocardiographic (ECG) features in affected cases was analyzed and compared with data derived from ECG analysis in general population studies.

Results: A total of 61 reported cases of SQTS were identified. Index events, including sudden cardiac death, aborted cardiac arrest, syncope, and/or atrial fibrillation, occurred in 35 of 61 (57.4%) cases. The cohort was predominantly male (75.4%) and had a mean QTc value of 306.7 ms, with values ranging from 248 to 381 ms in symptomatic cases. In reference to the ECG characteristics of the general population, and in consideration of clinical presentation, family history, and genetic findings, a highly sensitive diagnostic scoring system was developed.

Conclusions: Based on a comprehensive review of 61 reported cases of the SQTS, formal diagnostic criteria have been proposed that will facilitate diagnostic evaluation in suspected cases of SQTS. Diagnostic criteria may lead to a greater recognition of this condition and provoke screening of at-risk family members.
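The QTc values quoted in these abstracts are heart-rate-corrected QT intervals. The correction formula is not stated in the abstract itself, so as a hedged aside: the most widely used correction is Bazett's formula, QTc = QT / √RR, with the RR interval in seconds. A minimal sketch:

```python
from math import sqrt

def qtc_bazett(qt_ms, heart_rate_bpm):
    """Bazett-corrected QT interval: QTc = QT / sqrt(RR), RR in seconds."""
    rr_s = 60.0 / heart_rate_bpm  # RR interval (seconds) from heart rate
    return qt_ms / sqrt(rr_s)

# At 60 bpm, RR = 1 s, so QTc equals the measured QT.
print(qtc_bazett(360, 60))          # 360.0
# The same measured QT at a faster rate corrects to a longer QTc.
print(round(qtc_bazett(360, 100)))  # 465
```

The example values (360 ms at 60 and 100 bpm) are hypothetical and chosen only to show why rate correction matters when comparing QT intervals across patients.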
 
If anyone is interested in the above topic, they should also look into Brugada syndrome. It's something that most docs in the West have rarely heard of, but it is certainly a "silent killer." It is also tested for via ECG and a drug challenge.
 

Brugada Syndrome

Brugada syndrome (BS) is a distinct form of idiopathic ventricular fibrillation (VF) characterized by a unique ECG pattern consisting of ST elevation in the anterior precordial leads, with or without right bundle branch block-like morphology. It is generally accepted that patients with the type 1 ECG pattern (coved type) and documented ventricular tachyarrhythmias (symptomatic BS patients) must receive an implantable cardioverter defibrillator to prevent a second VF attack. However, in those without syncope, family history, or documented VF (asymptomatic BS patients), the best strategy is controversial: the prevalence of Brugada-type ECG changes detected on routine medical check-ups has been reported to be approximately 0.1–1.0% in healthy subjects, and their prognosis is good compared with that of symptomatic patients. Nevertheless, some asymptomatic BS patients eventually become symptomatic, and sudden cardiac death can occur with the first VF attack. Therefore, a marker that can differentiate high-risk from low-risk asymptomatic patients is needed. To date, several invasive and noninvasive parameters have been proposed for identification of patients at risk of VF, including spontaneous type 1 ST elevation, characteristics of the S wave, presence of late potentials, coexisting atrial fibrillation, augmented ST elevation after exercise, fragmented QRS wave, early repolarization pattern in the inferior/lateral leads, the third intercostal space ECG, and inducibility of VF using programmed electrical stimulation, but the usefulness of these indexes remains controversial.


Ohkubo K, Watanabe I, Okumura Y, et al. Prolonged QRS Duration in Lead V2 and Risk of Life-Threatening Ventricular Arrhythmia in Patients With Brugada Syndrome. Int Heart J 2011;52(2):98-102. http://www.jstage.jst.go.jp/article/ihj/52/2/98/_pdf

Brugada syndrome is an inherited disorder that predisposes some patients to sudden cardiac death. It is not well established which Brugada syndrome patients are at risk of life-threatening arrhythmias. We investigated whether standard 12-lead electrocardiograms (ECG) can identify such patients. The subjects were 35 men with Brugada syndrome (mean age, 50.1 +/- 12.4 years). Documented ventricular fibrillation or aborted sudden cardiac arrests were judged to be related to the Brugada syndrome. Ten patients (mean age, 49.6 +/- 14.9 years) were symptomatic, and 25 (mean age, 50.3 +/- 11.5 years) were asymptomatic. We determined the PR interval, QRS duration, and QT interval from baseline 12-lead ECG leads II and V2 as well as the J point elevation amplitude of lead V2. The QRS interval was measured from QRS onset to the J point in leads II and V2. The only significant difference between the symptomatic and asymptomatic patients was the QRS duration measured from lead V2. The mean QRS interval was 129.0 +/- 23.9 ms in symptomatic patients versus 108.3 +/- 15.9 ms in asymptomatic patients (P = 0.012). A QRS interval in lead V2 >/= 120 ms was found to be a possible predictor of a life-threatening ventricular arrhythmia and/or syncope (P = 0.012). Prolonged QRS duration as measured on a standard 12-lead ECG is associated with ventricular arrhythmia and could serve as a simple noninvasive marker of vulnerability to life-threatening cardiac events in patients with Brugada syndrome.
 
Disturbances in serum calcium and phosphorus concentrations have been associated with increased risks of total and cardiovascular mortality. Low concentrations of vitamin D [25-hydroxyvitamin D (25[OH]D) and its active metabolite 1,25-dihydroxyvitamin D (1,25[OH]2D)] have also been associated with an increased risk of death. In addition, 1,25(OH)2D regulates calcium and phosphorus metabolism, and abnormalities in serum calcium concentrations affect electrocardiographic QT interval duration. Therefore, it is possible that changes in QT interval duration might mediate the mortality effects of vitamin D, calcium, and phosphorus. The association of 25(OH)D, phosphorus, and calcium concentrations with electrocardiographic QT interval duration, however, has not been evaluated in general population samples. The purpose of this analysis was to evaluate these associations in two large samples of the U.S. general population: the Third National Health and Nutrition Examination Survey (NHANES III) and the Atherosclerosis Risk in Communities (ARIC) study.


Zhang Y, Post WS, Dalal D, et al. Serum 25-Hydroxyvitamin D, Calcium, Phosphorus, and Electrocardiographic QT Interval Duration: Findings from NHANES III and ARIC. J Clin Endocrinol Metab 2011.

Context: Disturbances in 25-hydroxyvitamin D, calcium, and phosphorus concentrations have been associated with increased risks of total and cardiovascular mortality. It is possible that changes in electrocardiographic QT interval duration may mediate these effects, but the association of 25-hydroxyvitamin D, phosphorus, and calcium concentrations with QT interval duration has not been evaluated in general population samples.

Objective: The objective of the study was to evaluate the association of 25-hydroxyvitamin D, phosphorus, and calcium concentrations with QT interval duration in two large samples of the U.S. general population.

Design: This study included cross-sectional analyses of the Third National Health and Nutrition Examination Survey (NHANES III) and the Atherosclerosis Risk in Communities (ARIC) study.

Setting: The study was conducted in the general community.

Patients or Other Participants: Patients included 7,312 men and women from NHANES III and 14,825 men and women from the ARIC study.

Interventions: Serum 25-hydroxyvitamin D, total and ionized calcium, and inorganic phosphorus were measured in NHANES III, and serum total calcium and inorganic phosphorus were measured in ARIC.

Main Outcome Measure: QT interval duration was obtained from standard 12-lead electrocardiograms.

Results: In NHANES III, the multivariate adjusted differences in average QT interval duration comparing the highest vs. the lowest quartiles of serum total calcium, ionized calcium, and phosphorus were -3.6 msec (-5.8 to -1.3; P for trend = 0.005), -5.4 msec (-7.4 to -3.5; P for trend <0.001), and 3.9 msec (2.0-5.9; P for trend <0.001), respectively. The corresponding differences in ARIC were -3.1 msec (-4.3 to -2.0; P for trend <0.001), -2.9 msec (-3.8 to -1.9; P for trend <0.001), and 2.3 msec (1.3-3.3; P for trend <0.001). No association was found between 25-hydroxyvitamin D concentrations and QT interval duration.

Conclusions: In two large samples of the general population, QT interval duration was inversely associated with the serum total and ionized calcium and positively associated with serum phosphorus.
 
Sudden Death in Athletes May Be Undercounted

Previous studies may have underestimated the rate of sudden cardiac death among college athletes, researchers found.

Combining data from the National Collegiate Athletic Association (NCAA) and media reports, Kimberly Harmon, MD, of the University of Washington in Seattle, and colleagues estimated the rate of sudden cardiac death to be one in every 43,770 student-athletes per year.

That makes dying suddenly from cardiac reasons the leading medical cause of death among college athletes, the researchers reported in the April 19 issue of Circulation: Journal of the American Heart Association.

The rate is higher than in most previous studies, which have relied primarily on media reports and insurance claims data.

According to Harmon and her colleagues, the findings could have implications for the routine use of electrocardiographic screening of athletes before participation in sports, because estimates of cost-effectiveness rely on an accurate assessment of the risk of sudden cardiac death.

Professional organizations differ on the cost-effectiveness of routine ECG screening.

The European Society of Cardiology and the International Olympic Committee recommend an ECG as part of routine screening before sports participation, but the American Heart Association recommends using a detailed medical history and physical examination, with an ECG reserved for the follow-up of concerning signs.

But, Harmon and her colleagues argued, "a history and physical examination without an ECG are of questionable value and have been shown not to be cost-effective because of their poor sensitivity and specificity. Targeting high-risk groups may prove a reasonable starting point to begin ECG screening programs in the United States."

Numerous studies have tried to assess the rate of sudden cardiac death, but the estimates vary widely because of differences in methodology. Most previous studies have relied on media reports, retrospective surveys, voluntary registries, and NCAA catastrophic insurance claims data.

The current study used a database that combined information from two sources -- the NCAA's database on student-athletes who died and a database from Parent Heart Watch, a nonprofit organization that performs weekly systematic searches of media reports of sudden cardiac death and cardiac arrest in young people.

From 2004 through 2008 -- which covered 1,969,663 athlete participant-years -- there were 273 deaths from any cause.

About two-thirds (68%) did not occur during athletic participation and included accidents, suicide, homicide, and drug overdoses.

Another 29% of deaths were from medical causes. Of those, 45 (56%) were attributed to sudden cardiac death.

There were differences in the rate of sudden death in various athlete subgroups.

Males were more than twice as likely as females to die suddenly, with one out of every 33,134 male athletes dying suddenly compared with one out of every 76,646 female athletes per year.

The risk was higher among blacks compared with whites -- one out of every 17,696 versus one out of every 58,653 per year.

Basketball players were most likely to die suddenly from cardiac causes, followed by swimmers, lacrosse players, football players, and cross-country runners.

The athletes with the highest risk were Division I male basketball players -- one out of every 3,126 per year.
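The "one out of every N" figures above follow directly from dividing participant-years by deaths. A minimal sketch of that arithmetic (the function name is ours, for illustration; the figures are the study's):

```python
def one_in_n(deaths, participant_years):
    # Annual incidence expressed as "1 in N athletes per year":
    # N is simply participant-years divided by the number of deaths.
    return round(participant_years / deaths)

# Overall study figures: 45 sudden cardiac deaths over
# 1,969,663 athlete participant-years -> roughly 1 in 43,770,
# matching the overall rate reported in the abstract below.
print(one_in_n(45, 1_969_663))
```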

Harmon and her colleagues determined that 87% of the sudden cardiac deaths would have been identified with the NCAA database alone, 56% would have been identified with the use of public media reports alone, and only 20% would have been identified from NCAA catastrophic claims data alone.

In addition to reconsideration of routine ECG screening -- at least in high-risk groups -- the findings suggest that consideration should be given to placing automated external defibrillators (AEDs) in venues where the sports with the highest risks of sudden cardiac death occur, according to the researchers.

They noted that preparing for a sudden cardiac arrest in distance runners is more difficult.

"Runners should be encouraged to train with a partner and to carry cell phones," the authors wrote. "Organizers of cross-country meets should consider having AEDs accessible via carts or bikes, and spotters on the course should have a communication system with access to the medical team."

Potential limitations of the study included the lack of confirmation of the cause of death by autopsy in most cases, the retrospective data collection, and the possibility that some deaths were missed.


Harmon KG, Asif IM, Klossner D, Drezner JA. Incidence of Sudden Cardiac Death in National Collegiate Athletic Association Athletes. Circulation 2011;123(15):1594-600.

Background--The true incidence of sudden cardiac death (SCD) in US athletes is unknown. Current estimates are based largely on case identification through public media reports and estimated participation rates. The purpose of this study was to more precisely estimate the incidence of SCD in National Collegiate Athletic Association (NCAA) student-athletes and assess the accuracy of traditional methods for collecting data on SCD.

Methods and Results--From January 2004 through December 2008, all cases of sudden death in NCAA student-athletes were identified by use of an NCAA database, weekly systematic search of public media reports, and catastrophic insurance claims. During the 5-year period, there were 273 deaths and a total of 1 969 663 athlete participant-years. Of these 273 deaths, 187 (68%) were due to nonmedical or traumatic causes, 80 (29%) to medical causes, and 6 (2%) to unknown causes. Cardiovascular-related sudden death was the leading cause of death in 45 (56%) of 80 medical cases, and represented 75% of sudden deaths during exertion. The incidence of SCD was 1:43 770 participants per year. Among NCAA Division I male basketball players, the rate of SCD was 1:3100 per year. Thirty-nine (87%) of the 45 cardiac cases were identified in the NCAA database, only 25 (56%) by use of public media reports, and 9 (20%) from catastrophic claims data.

Conclusions--SCD is the leading medical cause of death and death during exercise in NCAA student-athletes. Current methods of data collection underestimate the risk of SCD. Accurate assessment of SCD incidence is necessary to shape appropriate health policy decisions and develop effective strategies for prevention.
 
American Heart Association Lowers Triglyceride Target to 100 mg/dL From 150 mg/dL

In a newly released scientific statement on triglycerides, the AHA recommends that 100 mg/dL replace 150 mg/dL as the upper limit of the “optimal level” for triglycerides. But, the statement acknowledges, the cut point should not be used as a therapeutic target for drug therapy, “because there is insufficient evidence that lowering triglyceride levels” will reduce cardiovascular risk. Instead, the statement puts a large emphasis on lifestyle changes, especially diet and exercise, to cut triglycerides and reduce risk.

The new statement “is not intended to serve as a specific guideline,” according to the AHA, but instead “will be of value” to the Adult Treatment Panel IV (ATP IV) - http://www.nhlbi.nih.gov/guidelines/cholesterol/atp4/index.htm - of the National Cholesterol Education Program, which will be available for public review and comment this fall and is expected to be published in the spring of 2012.

“The good news is that high triglycerides can, in large part, be reduced through major lifestyle changes,” said the chair of the committee, Michael Miller, in an AHA press release. “In contrast to cholesterol, where lifestyle measures are important but may not be the solution, high triglycerides are often quite responsive to lifestyle measures that include weight loss if overweight, changes in diet and regular physical activity.”

The statement includes specific recommendations for reducing added sugar, fructose, saturated fat, trans fat, and alcohol for those with elevated triglycerides. According to the statement, 31% of US adults have triglyceride levels greater than 150 mg/dL.


Miller M, Stone NJ, Ballantyne C, et al. Triglycerides and Cardiovascular Disease: A Scientific Statement From the American Heart Association. Circulation. doi:10.1161/CIR.0b013e3182160726.
 
Calcium Score May Refine Cardio Risk (MedPage Today)

Adding a coronary artery calcium (CAC) score to a Framingham risk assessment might provide useful information in patients with low or intermediate cardiovascular risk, researchers found.

In patients with a very low Framingham risk (0% to 5%), only 1.7% to 4.4% had an advanced CAC score of 300 or higher, according to senior author Donald Lloyd-Jones, MD, of Northwestern University in Chicago, and colleagues.

In patients with intermediate Framingham risk (10% to 20%), however, 15.6% to 24% had an advanced CAC score, the researchers reported in the May 3 issue of the Journal of the American College of Cardiology.

"The high rate of CAC of 300 or more in the Framingham risk score category with predicted risk of 15.1% to 20% suggests a group at high risk for coronary heart disease who particularly may benefit from screening for CAC of 300 or more to aid further risk factor interventions, especially in situations where there is uncertainty regarding the use of drug therapy," they wrote.

"Our study data suggest that CAC measurement should be carried out within the context of traditional cardiovascular risk factors, rather than in isolation," the authors concluded, "and provide support for avoidance of radiation exposure as well as time, money, and effort spent on CAC measurement and scoring for clinical guidance in very low-risk patients."

Both CAC scores and Framingham risk scores are predictors of coronary heart disease. A CAC score of 300 or more is associated with the highest risk, even in patients with a low Framingham risk score.

But professional organizations -- including the American College of Cardiology, American Heart Association, and U.S. Preventive Services Task Force -- have not recommended CAC scoring for widespread use.

They have suggested that such screening should be used for risk refinement in patients with intermediate Framingham risk (10% to 20%), in whom therapeutic options are less well defined than in those with higher risk.

To explore the issue, Lloyd-Jones and his colleagues examined the distribution of CAC scores across Framingham risk categories in the Multi-Ethnic Study of Atherosclerosis (MESA). The current analysis included 5,660 patients aged 79 years and younger (mean age 60.9).

Overall, 46.4% had a CAC score greater than 0, 20.6% had a score of 100 or more, and 10.1% had a score of 300 or more, which was considered advanced.

Both the prevalence of advanced CAC score and the median amount of CAC increased as Framingham risk scores increased (P<0.01 for trends), patterns that held up in multivariate analyses controlling for race or ethnicity, socioeconomic factors, and other cardiovascular risk factors.

Also, the number needed to screen to identify one patient with an advanced CAC score dropped as Framingham risk score increased.

For example, the numbers needed to screen in the lowest Framingham risk categories -- 0% to 2.5% and 2.6% to 5% -- were 59.7 and 22.7, respectively.

The numbers needed to screen in higher Framingham risk categories -- 5.1% to 7.5%, 7.6% to 10%, 10.1% to 15%, 15.1% to 20%, and greater than 20% -- were 13.4, 7.6, 6.4, 4.2, and 3.3, respectively.
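These numbers needed to screen are consistent with NNS being the reciprocal of the prevalence of an advanced CAC score in each stratum, as the abstract's definition implies. A quick check (function name is ours, not the authors'):

```python
def number_needed_to_screen(prevalence):
    # NNS: how many people must be screened to find one person
    # with an advanced CAC score (300 or more); the reciprocal
    # of that score's prevalence in the risk stratum.
    return 1 / prevalence

# Reported prevalences of CAC >= 300: 4.4% in the 2.6%-5%
# Framingham stratum and 24% in the 15.1%-20% stratum.
print(round(number_needed_to_screen(0.044), 1))  # ~22.7
print(round(number_needed_to_screen(0.24), 1))   # ~4.2
```

Both values match the ones reported in the article.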

"Taken together, our prevalence and number needed to screen data suggest the benefit of CAC testing for further risk stratification in asymptomatic low-risk (Framingham risk score of 5.1% to 10.0%) and intermediate-risk (Framingham risk score of 10.1% to 20.0%) persons," Lloyd-Jones and his colleagues wrote.

They acknowledged that the study was limited by the small numbers of participants in each age, sex, and race or ethnicity category and by the cross-sectional design, which precluded any definitive conclusions about the cost benefit of CAC measurement.


Okwuosa TM, Greenland P, Ning H, et al. Distribution of Coronary Artery Calcium Scores by Framingham 10-Year Risk Strata in the MESA (Multi-Ethnic Study of Atherosclerosis): Potential Implications for Coronary Risk Assessment. J Am Coll Cardiol 2011;57(18):1838-45.

Objectives By examining the distribution of coronary artery calcium (CAC) levels across Framingham risk score (FRS) strata in a large, multiethnic, community-based sample of men and women, we sought to determine if lower-risk persons could benefit from CAC screening.

Background The 10-year FRS and CAC levels are predictors of coronary heart disease. A CAC level of 300 or more is associated with the highest risk for coronary heart disease even in low-risk persons (FRS, <10%); however, expert groups have suggested CAC screening only in intermediate-risk groups (FRS, 10% to 20%).

Methods We included 5,660 Multi-Ethnic Study of Atherosclerosis participants. The number needed to screen (number of people that need to be screened to detect 1 person with CAC level above the specified cutoff point) was used to assess the yield of screening for CAC. CAC prevalence was compared across FRS strata using chi-square tests.

Results CAC levels of more than 0, of 100 or more, and of 300 or more were present in 46.4%, 20.6%, and 10.1% of participants, respectively. The prevalence and amount of CAC increased with higher FRS. A CAC level of 300 or more was observed in 1.7% and 4.4% of those with FRS of 0% to 2.5% and of 2.6% to 5%, respectively (number needed to screen, 59.7 and 22.7, respectively). Likewise, a CAC level of 300 or more was observed in 24% and 30% of those with FRS of 15.1% to 20% and more than 20%, respectively (number needed to screen, 4.2 and 3.3, respectively). Trends were similar when stratified by age, sex, and race or ethnicity.

Conclusions Our study suggests that in very low-risk individuals (FRS ≤5%), the yield of screening and probability of identifying persons with clinically significant levels of CAC is low, but becomes greater in low- and intermediate-risk persons (FRS 5.1% to 20%).
 
Changes In Relative Fitness And Frailty Across The Adult Lifespan

Frailty: Not Just in the Aged Any More (MedPage Today)

Frailty -- known to be associated with a greater risk of death -- is not just a problem of older people, Canadian researchers reported.

And people of any age who are frail -- defined as having more self-reported deficits such as health problems and difficulty with tasks like climbing stairs -- are more likely to die than those who are relatively more fit at the same age, according to Kenneth Rockwood, MD, of Dalhousie University in Halifax, Nova Scotia, and colleagues.

And the proportion of those who are frail grows steadily as the population ages, Rockwood and colleagues reported online in CMAJ.

"The prevalence of frailty increased exponentially with age throughout the adult life span and not just after age 65, where the sharpest inflection of the curve occurred," Rockwood and colleagues reported.

The analysis is based on data from Canada's National Population Health Survey, which followed 14,713 respondents in two-year cycles over 12 years, 1994-1995 to 2006-2007.

Rockwood and colleagues rated participants on a 42-item "Frailty Index" that included symptoms, diseases, and disabilities, as well as items not usually measured in studying older people, such as asthma and food allergies.

The frailty index was defined as the number of self-reported deficits, divided by 42. For example, the researchers wrote, a person with poor self-rated health, arthritis, and trouble climbing stairs would have an index score of 3/42 or 0.07.

Participants at any age were regarded as relatively fit if their index score was 0.03 or less, as less fit if their score was greater than 0.03 but no more than 0.10, as least fit if the score was more than 0.10 and not more than 0.21, and frail at scores above 0.21.
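The index and cut points described above reduce to simple arithmetic; a minimal sketch (the cut points and the worked example are from the study, the function names are ours):

```python
def frailty_index(deficits, total_items=42):
    # Frailty Index = number of self-reported deficits
    # divided by the 42 items assessed.
    return deficits / total_items

def fitness_category(index):
    # Cut points used in the study, applied at any age.
    if index <= 0.03:
        return "relatively fit"
    if index <= 0.10:
        return "less fit"
    if index <= 0.21:
        return "least fit"
    return "frail"

# The paper's example: poor self-rated health, arthritis, and
# trouble climbing stairs -> 3/42, about 0.07 ("less fit").
fi = frailty_index(3)
print(round(fi, 2), fitness_category(fi))
```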

The study also looked at death, use of health services, and change in health status as measured by the frailty index, Rockwood and colleagues reported.

At baseline, 7,183 participants were relatively fit and only 1,019 were frail, the researchers found. The mean frailty index across the whole population was 0.068, corresponding to an average of 2.8 deficits.

As might be expected, the prevalence of frailty increased with age, from 2.0% among those younger than 30 to 22.4% for those older than 65 and 43.7% for those 85 and older, the researchers found.

The prevalence of relative fitness decreased monotonically with age, while the prevalence of frailty increased exponentially, the researchers found. The two curves crossed at age 75.

But at all ages, Rockwood and colleagues reported, frailty predicted mortality.

For example, they found, the 160-month mortality rate was 2% among relatively fit people at age 40, compared with 16% among those who were frail according to the Frailty Index.

Although the proportions increased, the same pattern was seen at 75 or older: 42% among the relatively fit and 83% among the frail.

In a multivariate regression analysis, the researchers found that frailty, age, sex, and education were associated with 12-year survival. Specifically:

• For each 1% increase in the Frailty Index, the hazard ratio was 1.04, with a 95% confidence interval from 1.03 to 1.04.

• For each year of age, the hazard ratio was 1.08, with a 95% confidence interval from 1.08 to 1.09.

• For each additional year of education, the hazard ratio was 0.96, with a 95% confidence interval from 0.94 to 0.97.

• And being female was associated with a hazard ratio of 0.54, with a 95% confidence interval from 0.50 to 0.59.
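Because these are per-unit hazard ratios from a Cox regression, the ratio for a larger difference compounds multiplicatively. A sketch of that standard interpretation (our illustration, not a calculation the authors report):

```python
def scaled_hazard_ratio(per_unit_hr, units):
    # In a Cox proportional hazards model, a per-unit hazard
    # ratio compounds multiplicatively: HR(k units) = HR ** k.
    return per_unit_hr ** units

# HR 1.04 per 1% increase in the Frailty Index: a 10-point
# higher index therefore implies roughly 1.04**10, about 1.48,
# i.e. a ~48% higher hazard of death over follow-up.
print(round(scaled_hazard_ratio(1.04, 10), 2))
```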

"That deficits accumulate with age is not surprising," the authors concluded. But the data "suggest that deficit accumulation is a fact of aging, not age, and that the antecedents of frailty in late life manifest at least by middle age."

They cautioned that the study had limitations, especially the use of self-reported data "such that the relatively fittest people were those who reported having the least things wrong with them."

As well, complete data were missing for about one in four participants for at least one cycle; those nonrespondents had slightly (but not statistically significantly) higher values on the Frailty Index, the authors noted.

Rockwood and colleagues suggested that the relationship between the Frailty Index and adverse outcomes "suggests that this approach offers a good guide and is feasible for use in population studies"; whether it can also be used clinically is under investigation.

However, they also noted that it is "already known that deficit counts based on clinical data are related to short-term outcomes in clinical settings."


Rockwood K, Song X, Mitnitski A. Changes in relative fitness and frailty across the adult lifespan: evidence from the Canadian National Population Health Survey. CMAJ. doi:10.1503/cmaj.101271. http://www.cmaj.ca/cgi/rapidpdf/cmaj.101271v1

Background: The prevalence of frailty increases with age in older adults, but frailty is largely unreported for younger adults, where its associated risk is less clear. Furthermore, less is known about how frailty changes over time among younger adults. We estimated the prevalence and outcomes of frailty, in relation to accumulation of deficits, across the adult lifespan.

Methods: We analyzed data for community-dwelling respondents (age 15–102 years at baseline) to the longitudinal component of the National Population Health Survey, with seven two-year cycles, beginning 1994–1995. The outcomes were death, use of health services and change in health status, measured in terms of a Frailty Index constructed from 42 self-reported health variables.

Results: The sample consisted of 14 713 respondents (54.2% women). Vital status was known for more than 99% of the respondents. The prevalence of frailty increased with age, from 2.0% (95% confidence interval [CI] 1.7%–2.4%) among those younger than 30 years to 22.4% (95% CI 19.0%–25.8%) for those older than age 65, including 43.7% (95% CI 37.1%–50.8%) for those 85 and older. At all ages, the 160-month mortality rate was lower among relatively fit people than among those who were frail (e.g., 2% v. 16% at age 40; 42% v. 83% at age 75 or older). These relatively fit people tended to remain relatively fit over time. Relative to all other groups, a greater proportion of the most frail people used health services at baseline (28.3%, 95% CI 21.5%–35.5%) and at each follow-up cycle (26.7%, 95% CI 15.4%–28.0%).

Interpretation: Deficits accumulated with age across the adult spectrum. At all ages, a higher Frailty Index was associated with higher mortality and greater use of health care services. At younger ages, recovery to the relatively fittest state was common, but the chance of complete recovery declined with age.
 
In Hypertension, Strong Men Live Longer (MedPage Today)

Hypertensive men with the most muscle strength appear to have a lower risk of dying than their weaker counterparts, researchers found.

Even after controlling for cardiorespiratory fitness level and other potential confounders, men in the upper third of muscle strength were 34% less likely to die during an average follow-up of about 18 years (HR 0.66, 95% CI 0.45 to 0.98), according to Enrique Artero, PhD, of the University of Granada in Spain, and colleagues.

The men with the greatest reduction in mortality risk were those who had the most muscular strength and high fitness (HR 0.49, 95% CI 0.30 to 0.82), the researchers reported in the May 3 issue of the Journal of the American College of Cardiology.

Although the researchers urged caution in interpreting the results because of the low number of deaths (183), the findings are consistent with previous studies in nonhypertensive individuals.

"The apparent protective effect of muscular strength against risk of death might be due to muscular strength in itself, to respiratory muscular strength and pulmonary function, to muscle fiber type or configuration, or as a consequence of regular physical exercise, specifically resistance exercise," Artero and his colleagues wrote.

"Hypertensive men should follow current physical activity guidelines and engage in muscle-strengthening activities that involve major muscle groups, not only to reduce resting blood pressure but also to potentially reduce long-term mortality risk," they wrote.

The researchers noted that the physical activity guidelines from the U.S. Department of Health and Human Services encourage adults to perform muscle-strengthening activities that involve major muscle groups at least two days a week, with supervision from a healthcare professional for individuals with chronic medical conditions.

To find out whether previously observed inverse relationships between muscular strength and all-cause and cancer mortality applied to men with hypertension, Artero and his colleagues turned to the Aerobics Center Longitudinal Study, which followed patients treated at the Cooper Clinic in Dallas.

The analysis included 1,506 men, ages 40 and up, who had a resting blood pressure of at least 140/90 mm Hg or had received a hypertension diagnosis from a physician. All underwent muscular strength tests -- a bench press and a leg press -- and fitness testing on a treadmill at baseline. The men were mostly white, well educated, and in the middle and upper classes.

During an average follow-up of 18.3 years, 183 men (12.2%) died.

The age-adjusted death rates per 10,000 person-years fell linearly across increasing tertiles of muscular strength -- 81.8, 65.5, and 52.0 (P<0.05 for trend).

After adjustment for age, physical activity, smoking, alcohol intake, body mass index, blood pressure, total cholesterol, diabetes, abnormal electrocardiogram, and family history of cardiovascular disease, the risk of dying during follow-up decreased with increasing muscular strength (P=0.02 for trend).

Although additional adjustment for cardiorespiratory fitness rendered the trend nonsignificant, men with the most muscular strength still had a significantly lower risk of death than those with the least (HR 0.66).

The authors noted that the small number of deaths did not allow for an assessment of the relationship between muscular strength and disease-specific mortality.

Additional limitations included the lack of sufficient information on diet or medication use and the use of measures of muscular strength and fitness at baseline only.


Artero EG, Lee D-c, Ruiz JR, et al. A Prospective Study of Muscular Strength and All-Cause Mortality in Men With Hypertension. J Am Coll Cardiol 2011;57(18):1831-7.

Objectives This study sought to assess the impact of muscular strength on mortality in men with hypertension.

Background Muscular strength is inversely associated with mortality in healthy men, but this association has not been examined in men with hypertension.

Methods We followed 1,506 hypertensive men age 40 years and older enrolled in the Aerobics Center Longitudinal Study from 1980 to 2003. Participants received an extensive medical examination at baseline. Muscular strength was quantified by combining 1 repetition maximum (1-RM) measures for leg and bench press and cardiorespiratory fitness assessed by maximum exercise test on a treadmill.

Results During an average follow-up of 18.3 years, 183 deaths occurred. Age-adjusted death rates per 10,000 person-years across incremental thirds of muscular strength were 81.8, 65.5, and 52.0 (p < 0.05 for linear trend). Multivariable Cox regression hazard ratios were 1.0 (reference), 0.81 (95% confidence interval [CI]: 0.57 to 1.14), and 0.59 (95% CI: 0.40 to 0.86) across incremental thirds of muscular strength. After further adjustment for cardiorespiratory fitness, those participants in the upper third of muscular strength still had a lower risk of death (hazard ratio: 0.66; 95% CI: 0.45 to 0.98). In the muscular strength and CRF combined analysis, men simultaneously in the upper third of muscular strength and high fitness group had the lowest mortality risk among all combination groups (HR: 0.49; 95% CI: 0.30 to 0.82), with men in the lower third of muscular strength and low fitness group as reference.

Conclusions High levels of muscular strength appear to protect hypertensive men against all-cause mortality, and this is in addition to the benefit provided by cardiorespiratory fitness.
 
Machismo Kills Men
Men With Conservative Ideas About Masculinity Skip Doctors' Visits (WSJ Ideas Market)

By Christopher Shea

Sixty-five-year-old men with “macho” attitudes are about half as likely as their peers to have gotten basic preventative medical care in the past year, a research paper shows.

Men with stereotypically masculine views often avoid doctors.

Researchers drew their information from the Wisconsin Longitudinal Study, a continuing survey of roughly 1,000 men who graduated from high school in 1957. Among other things, the men were shown statements about male identity: Men should have the final say about large family purchases, men should never admit to weakness, strong men with large muscles are especially attractive to women, men should have jobs and women should be homemakers, and so on. They rated agreement on a scale of 1 to 8.

Men who rated in the top quarter on this test of “idealized” masculinity were 46% less likely than the rest of the men to have gotten all three basic preventative measures -- a physical, a prostate exam, and a flu shot -- in the past year. The tendency, the researchers said, may help explain the paradox of American men having shorter average lifespans than women, despite having more financial resources on average.

Curiously, unlike almost every other factor concerning health, education did not influence this one: Macho men with college degrees were just as likely to skip checkups as macho men without them. The effect of occupational prestige was actually negative: the more high-paying and prestigious his job, the less likely a man with hyper-masculine ideals was to pay attention to preventative care.


Springer KW, Mouzon DM. "Macho Men" and Preventive Health Care: Implications for Older Men in Different Social Classes. Journal of Health and Social Behavior.

The gender paradox in mortality—where men die earlier than women despite having more socioeconomic resources—may be partly explained by men’s lower levels of preventive health care. Stereotypical notions of masculinity reduce preventive health care; however, the relationship between masculinity, socioeconomic status (SES), and preventive health care is unknown. Using the Wisconsin Longitudinal Study, the authors conduct a population-based assessment of masculinity beliefs and preventive health care, including whether these relationships vary by SES. The results show that men with strong masculinity beliefs are half as likely as men with more moderate masculinity beliefs to receive preventive care. Furthermore, in contrast to the well-established SES gradient in health, men with strong masculinity beliefs do not benefit from higher education and their probability of obtaining preventive health care decreases as their occupational status, wealth, and/or income increases. Masculinity may be a partial explanation for the paradox of men’s lower life expectancy, despite their higher SES.
 
Re: Thyroid Metabolism Quick Post

Local Control Of Thyroid Hormone Action – Role Of Type 2 Deiodinase

Thyroid hormones are important homeostatic regulators that act via nuclear thyroid hormone receptors (TRs) in virtually all tissues during development and throughout post-natal life. 3,5,3’,5’-L-tetraiodothyronine (thyroxine, T4) is a prohormone that circulates at a high concentration in peripheral blood relative to the active hormone 3,5,3’-L-triiodothyronine (T3). Concentrations of T4 and T3 in target tissues are controlled by metabolism; local conversion of T4 to T3 is catalyzed by the type 2 iodothyronine deiodinase enzyme (DIO2), whilst the type 3 enzyme (DIO3) prevents activation of T4 and inactivates T3. This pre-receptor control of ligand availability to TRs in target cells is a crucial mechanism that regulates the timing of cellular responses to thyroid hormones in a tissue-specific manner. The physiological importance of this co-ordinated process has been demonstrated in several organ systems by a series of elegant in vivo studies, and here the authors review recent developments with particular emphasis on the importance of hormone activation mediated by DIO2.


Williams GR, Bassett D. Local control of thyroid hormone action - role of type 2 deiodinase. J Endocrinol. doi:10.1530/JOE-10-0448.

The thyroid gland predominantly secretes the pro-hormone thyroxine (T4) which is converted to the active hormone 3,5,3'-L-triiodothyronine (T3) in target cells. Conversion of T4 to T3 is catalyzed by the type 2 iodothyronine deiodinase enzyme (DIO2) and T3 action in target tissues is determined by DIO2-regulated local availability of T3 to its nuclear receptors, TRα and TRβ. Studies of Dio2 knockout mice have revealed new and important roles for the enzyme during development and in adulthood in diverse tissues including the cochlea, skeleton, brown fat, pituitary and hypothalamus. In this review we discuss the molecular mechanisms by which DIO2 controls intra-cellular T3 availability and action.


Figure 1: Negative feedback regulation of the hypothalamic-pituitary-thyroid axis. The role of DIO2 in negative feedback control of the HPT axis occurs predominantly in thyrotrophs of the anterior pituitary gland. Paraventricular nucleus (PVN); thyrotropin-releasing hormone (TRH); thyroid stimulating hormone (TSH); type 2 deiodinase enzyme (DIO2); thyroid hormone receptor β2 (TRβ2); thyroxine (T4); 3,5,3’-L-triiodothyronine (T3).


Figure 2: Regulation of intra-cellular supplies of T3 to the nucleus of T3 target cells. Monocarboxylate transporters 8 and 10 (MCT8, MCT10); organic anion transporting polypeptide 1C1 (OATP1C1); type 2 and 3 deiodinase enzymes (DIO2, DIO3); thyroid hormone receptor (TR); retinoid X receptor (RXR); thyroxine (T4); 3,5,3’-L-triiodothyronine (T3); 3,3’,5’-triiodothyronine (rT3); 3,3’-diiodothyronine (T2).
 
The Association of Age With Body Temperature

Studies of body temperature in large cohorts of human participants are rare. Although body temperature is recognized as a clinically useful physiological parameter in the context of infection or extreme environmental exposures, few epidemiological studies have included body temperature as a routine measurement. One consistent observation that has emerged from studies of human body temperature, however, has been that advanced age is associated with lower body temperature. In landmark cross-sectional studies, which included 25,000 participants and established the “normal” body temperature as 98.6°F, lower body temperatures were observed among the elderly participants. The largest reported longitudinal study of body temperature in humans, reported a decade ago among men in the Baltimore Longitudinal Study of Aging, found that participants with core body temperatures in the lower 50% of the study population had significantly lower mortality over 25 years of follow-up than those with body temperatures in the upper 50%, suggesting an element of selection. Alternatively, lower body temperatures at older ages may reflect a loss of thermoregulation, which has been demonstrated with aging and is associated with shorter life spans in mice.

Most studies of longevity and body temperature in animals have been performed in the context of caloric restriction. Studies in a variety of mammalian homeothermic species have shown that caloric restriction is associated with both lengthened life span and lower core body temperature. The effect of enforced reduced dietary intake on lowering temperature in mice appears to be under genetic control. Other studies in mice have demonstrated that the association of lower body temperature and increased longevity can occur even in the absence of caloric restriction.

Here, researchers have studied the distribution of body temperature across the age spectrum in more than 18,000 participants who underwent oral body temperature determination by a defined protocol (an accurate measure of core body temperature in humans) as part of a standardized health appraisal visit. An extensive associated database allowed them to explore associations of other physiological parameters with oral temperature.


Waalen J, Buxbaum JN. Is Older Colder or Colder Older? The Association of Age With Body Temperature in 18,630 Individuals. The Journals of Gerontology Series A: Biological Sciences and Medical Sciences 2011;66A(5):487-92.

In animal studies, caloric restriction resulting in increased longevity is associated with a reduction in body temperature, which is strain specific and likely under genetic control. Small studies in humans have suggested that temperatures may be lower among elderly populations, usually attributed to loss of thermoregulation. We analyzed cross-sectional data from 18,630 white adults aged 20–98 years (mean 58.3 years) who underwent oral temperature measurement as part of a standardized health appraisal at a large U.S. health maintenance organization. Overall, women had higher mean temperatures (97.5 ± 1.2°F) than men (97.2 ± 1.1°F; p < .0001). Mean temperature decreased with age, with a difference of 0.3°F between oldest and youngest groups after controlling for sex, body mass index, and white blood cell count. The results are consistent with low body temperature as a biomarker for longevity. Prospective studies are needed to confirm whether this represents a survival advantage associated with lifetime low steady state temperature.
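For readers who work in Celsius, the Fahrenheit values reported in the abstract can be converted with the standard formula C = (F − 32) × 5/9. A minimal sketch (the temperatures are those reported above; the conversion itself is generic, not part of the study):

```python
def f_to_c(f):
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (f - 32) * 5 / 9

# Mean temperatures reported in the abstract
for label, temp_f in [("women", 97.5), ("men", 97.2), ('"normal"', 98.6)]:
    print(f"{label}: {temp_f}°F = {f_to_c(temp_f):.1f}°C")

# The 0.3°F age-related difference is a temperature *interval*,
# so only the 5/9 scale factor applies: 0.3°F ≈ 0.17°C.
```

Note that the reported sex difference (97.5°F vs 97.2°F) and the age-related difference (0.3°F) both correspond to fractions of a fifth of a degree Celsius.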
 
Prevalence Of Hypertrophic Cardiomyopathy (HCM) And Long QT Syndrome Mutations In Young Sudden Cardiac Death (SD)

Geneticists, forensic pathologists, and physicians all share the need to determine the cause of death in sudden death (SD) cases. The most frequent cause of SD is sudden cardiac death (SCD), defined as an unexpected death from a cardiac cause that occurs within a short time, generally within 1 hour of symptom onset, in a person with or without pre-existing heart disease.

SCD is one of the most common causes of death in developed countries with an incidence of 30–200/100,000 people each year. Atherosclerosis and acquired forms of cardiomyopathies are the most frequent findings after autopsy in adults with SCD, whereas in people under 35 years old, non-ischemic diseases are responsible for a large number of cases.

In around 5–10% of SD cases, especially in young people, the cause of death can be explained neither by autopsy nor by laboratory tests. The investigation of the causes of these unexpected deaths has become one of the most important objectives of forensic pathologists since, in many cases, death occurs in previously healthy young people. Inherited heart diseases such as hypertrophic cardiomyopathy (HCM) and arrhythmogenic right ventricular cardiomyopathy (ARVC), and primary electrical diseases such as long QT syndrome (LQTS), Brugada syndrome (BrS), and catecholaminergic polymorphic ventricular tachycardia (CPVT), are the main causes of death in young adults with no previous pathological clinical history. Most often, these inherited cardiac disorders give rise to lethal ventricular arrhythmias, and they show an autosomal dominant pattern of inheritance. Genetic screening of the major genes involved may help to determine the cause of death and also to evaluate the potential risk to relatives.

The main objective of this study was to understand the real impact of the genetic causes hidden behind SCD cases. The researchers aimed to evaluate whether genetic screening is advisable in SCD cases with negative autopsies, not only to clarify the cause of death but also to provide clinical and genetic counseling to relatives who are asymptomatic carriers of causative mutations, in order to prevent the recurrence of a fatal cardiac event, both in individuals diagnosed with one of these pathologies and in asymptomatic relatives of SCD cases. To achieve this goal, they investigated the prevalence of LQTS- and HCM-causing mutations both in a cohort with a personal history of SCD (group I) and in a cohort of relatives of individuals who suffered SCD (group II).


Allegue C, Gil R, Blanco-Verea A, et al. Prevalence of HCM and long QT syndrome mutations in young sudden cardiac death-related cases. Int J Legal Med 2011.

Cardiomyopathies and channelopathies are major causes of sudden cardiac death. The genetic study of these diseases is difficult because of their heterogeneous nature, not only in their genetic traits but also in their phenotypic expression. The purpose of the present study is the analysis of a wide spectrum of previously known genetic mutations in key genes related to hypertrophic cardiomyopathy (HCM), long QT syndrome (LQTS), and Brugada syndrome (BrS) development. The samples studied include cases of sudden cardiac death (SCD) in young adults and their relatives in order to identify the real impact of genetic screening of SCD in forensic cases. Genetic screening of described variation in 16 genes implicated in the development of HCM and three more genes implicated in LQTS and BrS was performed by using MassARRAY technology. In addition, direct sequencing of the two most prevalent genes implicated in the development of LQTS types 1 and 2 was also carried out.

Genetic screening allowed us to unmask four possibly pathogenic mutation carriers among the 49 SCD cases considered; mutation carriers represent 9% (2/23) of the probands with structural anomalies found after autopsy and 7% (1/14) of the probands with structurally normal hearts after an in-depth autopsy protocol. One mutation was found among the 12 recovered SCD cases considered. Among people with a direct family history of sudden cardiac death but no personal history of it, 11 additional mutation carriers were found. Three different mutations were found in six of the 19 LQTS patients, representing three families, and two different mutations were found among six patients with previous syncope. Genetic analysis in sudden cardiac death cases could help to elucidate the cause of death, but it can also help in the prevention of future deaths in families at risk. The study presented here shows the importance and relevance of genetic screening in patients with signs of cardiac hypertrophy and in family cases with more than one relative affected.
 
Inherited Cardiomyopathies

Inherited cardiomyopathies are a major cause of heart disease in all age groups, often with an onset in adolescence or early adult life. Not only the patients but also their families can be severely burdened by these illnesses. More than 20 years ago, the first “disease gene” for hypertrophic cardiomyopathy was identified. This finding led to the concept that hypertrophic cardiomyopathy is a disease of the sarcomere. Similar advances in the elucidation of the genetic basis of other forms of cardiomyopathy, as well as in other inherited cardiovascular diseases, soon followed.

The identification of disease genes in numerous inherited diseases has raised expectations for new forms of treatment, but experience has shown that such novel therapies rarely follow. For some inherited cardiomyopathies, however, there are realistic prospects that molecular insights will soon lead to novel treatments. This review focuses on recent findings regarding the mechanisms underlying cardiomyopathies that will inform clinical practice and guide the search for therapeutic targets.


Watkins H, Ashrafian H, Redwood C. Inherited Cardiomyopathies. New England Journal of Medicine 2011;364(17):1643-56.
 
Levothyroxine Dose And Risk Of Fractures

Hypothyroidism is common in older people, particularly women, and over 20% of older people receive levothyroxine replacement long term. With normal ageing, thyroid hormone production, secretion, and degradation decreases, and therefore older people with hypothyroidism have lower requirements for levothyroxine replacement than younger people. Most people with hypothyroidism are diagnosed in early or middle adulthood, thus most will have been treated for many years by the time they reach older age. Although regular monitoring of levothyroxine doses is indicated, evidence suggests that the dose often remains unchanged as people age, and over 20% of older adults are overtreated, leading to iatrogenic hyperthyroidism.

Chronic hyperthyroidism may increase the risk of fractures, particularly in older people and postmenopausal women who already have a higher risk of osteoporosis and fractures. Studies have found that higher compared with lower doses of levothyroxine replacement, and subclinical hyperthyroidism, are associated with lower bone density and bone quality, as measured by ultrasonography. An excess of thyroid hormone can also affect neuromuscular function and muscle strength and increase the risk of arrhythmias and falls, which can raise the risk of fractures independent of bone density. Previous studies of the association between levothyroxine and fractures have had mixed results, largely because of small sample sizes and the inclusion of younger, lower risk populations. This problem has not been dealt with adequately in older women, and older people in general, who are at higher risk of fractures, more likely to be treated with levothyroxine, and more vulnerable to adverse events related to overtreatment. Similarly, the effect of levothyroxine dose on risk of fracture has not been explored. In this population based study the researchers quantified the risk of any fracture associated with increasing doses of replacement levothyroxine therapy in older men and women.


Turner MR, Camacho X, Fischer HD, et al. Levothyroxine dose and risk of fractures in older adults: nested case-control study. BMJ 2011;342. http://www.bmj.com/content/342/bmj.d2238.full.pdf

Objective To quantify the effect of levothyroxine dose on risk of fractures in older adults.
Design Nested case-control study.

Setting Population based health databases, Ontario, Canada.

Participants Adults aged 70 or more prescribed levothyroxine between 1 April 2002 and 31 March 2007 and followed for fractures until 31 March 2008. Cases were cohort members admitted to hospital for any fracture, matched with up to five controls from within the cohort who had not yet had a fracture.

Main outcome measure Primary outcome was fracture (wrist or forearm, shoulder or upper arm, thoracic spine, lumbar spine and pelvis, hip or femur, or lower leg or ankle) in relation to levothyroxine use (current, recent past, remote). Risk among current users was compared between those prescribed high, medium, and low cumulative levothyroxine doses in the year before fracture.

Results Of 213,511 prevalent levothyroxine users identified, 22,236 (10.4%) experienced a fracture over a mean 3.8 years of follow-up, 18,108 (88%) of whom were women. Compared with remote levothyroxine use, current use was associated with a significantly higher risk of fracture (adjusted odds ratio 1.88, 95% confidence interval 1.71 to 2.05), despite adjustment for numerous risk factors. Among current users, high and medium cumulative doses (>0.093 mg/day and 0.044-0.093 mg/day) were associated with a significantly increased risk of fracture compared with low cumulative doses (<0.044 mg/day): 3.45 (3.27 to 3.65) and 2.62 (2.50 to 2.76), respectively.

Conclusion Among adults aged 70 or more, current levothyroxine treatment was associated with a significantly increased risk of fracture, with a strong dose-response relation. Ongoing monitoring of levothyroxine dose is important to avoid overtreatment in this population.
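The dose-response figures in this abstract are odds ratios from a matched case-control comparison. As a rough illustration of how an odds ratio and its confidence interval arise from a 2×2 table, the sketch below uses the Woolf (log-normal) approximation with hypothetical counts, not data from this study (the study's published estimates were adjusted for matching and covariates, which a crude 2×2 calculation does not reproduce):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio with a Woolf 95% CI from a 2x2 table:
    a = exposed cases, b = unexposed cases,
    c = exposed controls, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts (not from the study): 100 high-dose cases,
# 50 low-dose cases, 200 high-dose controls, 300 low-dose controls.
or_, lo, hi = odds_ratio_ci(100, 50, 200, 300)
print(f"OR = {or_:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

With these illustrative counts the crude odds ratio is 3.0, read as "the odds of high-dose exposure among cases are three times the odds among controls."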
 
Naltrexone Effective for Opioid Addiction, but Concerns Linger
Findings From Single Study Formed Basis for FDA approval

April 28, 2011 — Extended-release (ER) naltrexone (Vivitrol), a receptor antagonist, is a safe and effective option for treating opioid dependence disorder (ODD), new research suggests.

In a phase 3 randomized controlled trial of 250 patients in Russia with ODD, investigators found that those who received once-monthly injections of ER naltrexone had significantly more opioid-free weeks during a 6-month period and fewer cravings than those who received placebo.

In addition, only 2 patients in each group dropped out of the study because of adverse events (AEs), and only 2% of those treated with naltrexone experienced one or more serious AEs (compared with 3% of the placebo-treated group).

"I was very pleased by the positive results. These findings demonstrate that Vivitrol is an important new treatment for this disease," lead study author Evgeny Krupitsky, MD, professor and chief of the Laboratory of Clinical Psychopharmacology of Addictions at St. Petersburg State Pavlov Medical University and chief of the Department of Addictions at the Bekhterev Research Psychoneurological Institute in St. Petersburg, Russia, told Medscape Medical News.

Calling the findings "robust and conclusive," Dr. Krupitsky said they are consistent with previous studies showing that naltrexone "prevents the reestablishment of opioid dependence."

The study was published online April 28 in The Lancet.

Why Russia?

ER naltrexone was approved by the US Food and Drug Administration (FDA) for ODD last October and reported by Medscape Medical News at that time. However, a month before the approval, an FDA advisory panel voiced concerns that the sanctioning of the drug for this indication was based solely on data from this trial, which was conducted outside the United States.

"One study does bother me a little bit if we approve this drug [but] we must understand that one drug is not going to be perfect or efficacious for everyone. We have so few options for addiction, especially opioid addiction," FDA panel member Chung-yui Betty Tai, PhD, with the National Institute on Drug Abuse (NIDA) said at the time.

Further, in an accompanying editorial also published online April 28 in The Lancet, 6 authors from the United States, France, and Australia also express concerns about the FDA's approval of the drug for ODD.

"The study is striking...for the questions it raises about the FDA's approval processes and clinical trial ethics," write Daniel Wolfe, from the Open Society Foundation and International Harm Reduction Development Program in New York City, and colleagues.

The editorialists note that the Declaration of Helsinki states that the benefits, risks, burdens, and effectiveness of a new medication should be tested against "best available treatment." In addition, they write, a placebo group should only be authorized when no accepted standard of care exists.

"This is not the case for opioid dependence. The fact that Russia does not permit methadone or buprenorphine treatment does not excuse the use of placebo but rather raises the question of why investigators chose that country to test a drug for which US approval would be sought," they write.

"The testing of depot naltrexone in Russia is akin to finding a location with no access to antiretrovirals and then testing a new HIV drug against placebo."

Turning Point

The editorialists also note that data submitted to the FDA did not adequately investigate risk for posttreatment opioid overdose in the participants. This is particularly worrisome, they write, because past studies have shown a 3-fold higher risk for overdose for those taking the oral formulation of this drug compared with those taking methadone or buprenorphine and a 6-fold higher risk of experiencing a heroin overdose once out of treatment compared with during treatment.

They also question the FDA's approval of injectable ER naltrexone based primarily on findings from this study, which were at that time unpublished.

"The FDA should justify why it has lowered the scientific, regulatory, and ethical standards in approving depot naltrexone. Although there is public demand and a market for new treatments for [ODD], approval in this instance might endanger patients and sets a precedent that unjustifiably degrades standards for all treatment of opioid dependence," conclude the editorial authors.

Still, in a message on the NIDA Web site posted immediately following the FDA's approval of naltrexone last year, NIDA director Nora Volkow, MD, noted that injectable ER naltrexone marks "an important turning point in our approach to treatment."

In addition, it "obviates the daily need for patients to motivate themselves to stick to a treatment regimen — a formidable task, especially in the face of multiple triggers of cravings and relapse," she added.

Agonists vs Antagonists

Methadone and buprenorphine are opioid agonists that have previously been shown to be effective for managing ODD.

"However, in 122 of 192 [United Nations] member states, agonist therapy is restricted or unavailable because of philosophical preferences for opioid-free treatment or policy concerns about physiological dependence or abuse and illegal drug diversion," write the researchers.

Naltrexone, on the other hand, is a µ-opioid receptor antagonist.

"It has a differentiated mechanism of action that blocks opioid receptors in the brain, producing no euphoria or sedation and generating no physical opioid dependence," explained Dr. Krupitsky.

"NIDA has been calling for a long-acting formulation of naltrexone since 1976. So conducting this clinical program was an important step in advancing the field," he added.

For this trial, 250 patients older than 18 years with ODD were enrolled at 13 sites in Russia between July 2008 and October 2009.

Participants were randomized to receive either monthly injections of 380 mg of ER naltrexone (n = 126; mean age, 29.4 years; 90% male; mean duration of ODD, 9.1 years) or placebo (n = 124; mean age, 29.7 years; 86% male; mean duration of ODD, 10 years) during 24 weeks. They also underwent 12 biweekly drug counseling sessions.

At enrollment, all patients had 30 days or less of inpatient detoxification and 7 days or more off all opioids. The study's primary endpoint was the response profile for confirmed abstinence during weeks 5 to 24, assessed by urine drug tests and self-report of nonuse.

Secondary endpoints included safety, opioid craving scores, self-reported opioid-free days, and days of retention.

A "naloxone challenge" was also conducted if needed to assess relapse to physiological opioid dependence. If a study participant was considered by the site investigator to be at risk of having relapsed, a short-acting opioid antagonist was administered. If signs of withdrawal were shown, the next dose of the blind study drug was withheld and the patient was removed from the trial.

Nonaddictive Option

Results showed that the naltrexone-treated group had a significantly higher median proportion of weeks of abstinence at 90% (95% confidence interval [CI], 69.9% – 92.4%) vs 35% (95% CI, 11.4% – 63.8%) for the placebo group (P = .0002). The number of patients with total confirmed abstinence was also higher in the naltrexone group at 45 vs 28 in the placebo group (P = .0224).

The patients receiving the study drug also had a higher proportion of opioid-free days, with a median of 99.2% vs 60.4% for those receiving placebo (P = .0004), more median days of retention (168 vs 96, P = .0042), and a greater mean reduction in craving scores (−10.1 vs 0.7, P < .0001).

In addition, relapse to opioid dependence was confirmed in 17 of the placebo-treated patients compared with only 1 in the naltrexone group (P < .0001).

Although 50% of the naltrexone-treated patients and 32% of the placebo-treated patients experienced at least 1 AE, only 2 in each group dropped out due to this cause. Nasopharyngitis, insomnia, hypertension, and injection site pain were the AEs most reported by those who received naltrexone.

Serious AEs "were uncommon and no episodes of intractable pain management were reported," note the study authors.

Overall, ER naltrexone "in conjunction with psychosocial treatment might improve acceptance of opioid dependence pharmacotherapy and provide a useful treatment option for many patients," they write.

"Vivitrol offers an antagonist, or nonaddictive, nonnarcotic option. And the once-a-month administration helps to ensure patient compliance and that therapeutic concentrations of the medication are maintained," added Dr. Krupitsky.

Dr. Krupitsky added that addiction treatment medication "is still a newer area of research and new drug development" compared with other disorders.

"I will continue my work, and encourage other researchers to continue theirs, to bring forward new options to help with this serious, life-threatening disease."

[The study was funded by Alkermes, the drug's manufacturer. Dr. Krupitsky is a consultant for Alkermes, 1 of the 6 study authors is on the Alkermes advisory board, and 3 are full-time employees of the company. The other study author reports being an advisory board member for Alkermes and US World Med, has received research funding from Titan Pharmaceuticals and Hythiam, and received research support, an unrestricted educational grant, and speaker support from Reckitt Benckiser. Dr. Wolfe has disclosed no relevant financial relationships. The other editorialists report event speaker honorarium from Reckitt Benckiser, advisory board membership and honorarium from King Pharmaceuticals and Covidien, and/or support from Mundipharma.]


Wolfe D, Carrieri MP, Dasgupta N, Wodak A, Newman R, Bruce RD. Concerns about injectable naltrexone for opioid dependence. The Lancet 2011;377(9776):1468-70.


Krupitsky E, Nunes EV, Ling W, Illeperuma A, Gastfriend DR, Silverman BL. Injectable extended-release naltrexone for opioid dependence: a double-blind, placebo-controlled, multicentre randomised trial. The Lancet 2011;377(9776):1506-13.

Background - Opioid dependence is associated with low rates of treatment-seeking, poor adherence to treatment, frequent relapse, and major societal consequences. We aimed to assess the efficacy, safety, and patient-reported outcomes of an injectable, once monthly extended-release formulation of the opioid antagonist naltrexone (XR-NTX) for treatment of patients with opioid dependence after detoxification.

Methods - We did a double-blind, placebo-controlled, randomised, 24-week trial of patients with opioid dependence disorder. Patients aged 18 years or over who had 30 days or less of inpatient detoxification and 7 days or more off all opioids were enrolled at 13 clinical sites in Russia. We randomly assigned patients (1:1) to either 380 mg XR-NTX or placebo by an interactive voice response system, stratified by site and gender in a centralised, permuted-block method. Participants also received 12 biweekly counselling sessions. Participants, investigators, staff, and the sponsor were masked to treatment allocation. The primary endpoint was the response profile for confirmed abstinence during weeks 5–24, assessed by urine drug tests and self-report of non-use. Secondary endpoints were self-reported opioid-free days, opioid craving scores, number of days of retention, and relapse to physiological opioid dependence. Analyses were by intention to treat.

Findings - Between July 3, 2008, and Oct 5, 2009, 250 patients were randomly assigned to XR-NTX (n=126) or placebo (n=124). The median proportion of weeks of confirmed abstinence was 90.0% (95% CI 69.9–92.4) in the XR-NTX group compared with 35.0% (11.4–63.8) in the placebo group (p=0.0002). Patients in the XR-NTX group self-reported a median of 99.2% (range 89.1–99.4) opioid-free days compared with 60.4% (46.2–94.0) for the placebo group (p=0.0004). The mean change in craving was −10.1 (95% CI −12.3 to −7.8) in the XR-NTX group compared with 0.7 (−3.1 to 4.4) in the placebo group (p<0.0001). Median retention was over 168 days in the XR-NTX group compared with 96 days (95% CI 63–165) in the placebo group (p=0.0042). XR-NTX was well tolerated. Two patients in each group discontinued owing to adverse events. No XR-NTX-treated patients died, overdosed, or discontinued owing to severe adverse events.

Interpretation - XR-NTX represents a new treatment option that is distinct from opioid agonist maintenance treatment. XR-NTX in conjunction with psychosocial treatment might improve acceptance of opioid dependence pharmacotherapy and provide a useful treatment option for many patients.
 