Conflicts of Interest

[This is not unlike the use of MRI for hypogonadism!!! The Meso readers are aware of the Cascade Effect - https://thinksteroids.com/community/posts/747132 ; https://thinksteroids.com/community/posts/670064 ; https://thinksteroids.com/community/posts/659856 ]

Sports Medicine Said to Overuse M.R.I.’s
http://www.nytimes.com/2011/10/29/health/mris-often-overused-often-mislead-doctors-warn.html

By GINA KOLATA
Published: October 28, 2011

Dr. James Andrews, a widely known sports medicine orthopedist in Gulf Breeze, Fla., wanted to test his suspicion that M.R.I.’s, the scans given to almost every injured athlete or casual exerciser, might be a bit misleading. So he scanned the shoulders of 31 perfectly healthy professional baseball pitchers.

The pitchers were not injured and had no pain. But the M.R.I.’s found abnormal shoulder cartilage in 90 percent of them and abnormal rotator cuff tendons in 87 percent. “If you want an excuse to operate on a pitcher’s throwing shoulder, just get an M.R.I.,” Dr. Andrews says.

He and other eminent sports medicine specialists are taking a stand against what they see as the vast overuse of magnetic resonance imaging in their specialty.

M.R.I.’s can be invaluable in certain situations — finding serious problems like tumors or helping distinguish between competing diagnoses that fit a patient’s history and symptoms. They also can make money for doctors who own their own machines. And they can please sports medicine patients, who often expect a scan.

But scans are easily misinterpreted and can result in misdiagnoses leading to unnecessary or even harmful treatments.

For example, said Dr. Bruce Sangeorzan, professor and vice chairman of the department of orthopedics and sports medicine at the University of Washington, if a healthy, uninjured person goes out for a run, a scan afterward will show fluid in the knee bone. It is inconsequential. But in an injured person, fluid can be a sign of a bone that is stressed or even has a crack and is trying to heal.

“An M.R.I. is unlike any other imaging tool we use,” Dr. Sangeorzan said. “It is a very sensitive tool, but it is not very specific. That’s the problem.” And scans almost always find something abnormal, although most abnormalities are of no consequence.
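A rough way to see why a sensitive-but-unspecific test misleads is to work the arithmetic on positive findings. The sketch below is purely illustrative; the sensitivity, specificity, and injury-prevalence figures are assumptions made up for the example, not numbers from the article.

[code]
# Illustrative sketch only: the figures below are assumptions, not data from the article.
# A scan that catches nearly every real injury (high sensitivity) but also flags
# harmless "abnormalities" (low specificity) yields mostly false alarms when
# treatable injuries are uncommon among the people being scanned.

def positive_predictive_value(sensitivity, specificity, prevalence):
    """Probability that a positive finding reflects an injury that needs treatment."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# Assume 95% sensitivity, 50% specificity, and that 10% of scanned joints
# actually harbor an injury that warrants treatment.
ppv = positive_predictive_value(0.95, 0.50, 0.10)
print(f"Chance a 'positive' scan marks a treatable injury: {ppv:.0%}")  # about 17%
[/code]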

“It is very rare for an M.R.I. to come back with the words ‘normal study,’ ” said Dr. Christopher DiGiovanni, a professor of orthopedics and a sports medicine specialist at Brown University. “I can’t tell you the last time I’ve seen it.”

In sports medicine, where injuries are typically torn muscles or tendons or narrow cracks in bones, specialists like Dr. Andrews and Dr. DiGiovanni say M.R.I.’s often are not needed — they usually can figure out what is wrong with just a careful medical history, a physical exam and, sometimes, a simple X-ray.

M.R.I.’s are not the only scans that are overused in medicine but, in sports medicine, where many injuries involve soft tissues like muscles and tendons, they rise to the fore.

In fact, one prominent orthopedist, Dr. Sigvard T. Hansen, Jr., a professor of orthopedics and sports medicine at the University of Washington, says he pretty much spurns such scans altogether because they so rarely provide useful information about the patients he sees — those with injuries to the foot and ankle.

“I see 300 or 400 new patients a year,” Dr. Hansen says. “Out of them, there might be one that has something confusing and might need a scan.”

The price, which medical facilities are reluctant to reveal, depends on where the scan is done and what is being scanned. One academic medical center charges $1,721 for an M.R.I. of the knee to look for a torn ligament. The doctor who interprets the scan gets $244. Doctors who own their own M.R.I. machines — and many do — can pocket both fees. Insurers pay less than the charges — an average of $150 to the doctor and $960 to the facility.

Steve Ganobcik is something of a poster child for what can go wrong with the scans. A salesman who turns 44 on Saturday, Mr. Ganobcik twisted his knee skiing in Colorado in February. He continued skiing anyway and skied again the next two days as well, not wanting to cut his vacation short.

When he got home to Cleveland, his knee still bothered him, so he saw a sports medicine orthopedist. The doctor immediately ordered an M.R.I. and said it showed a torn anterior cruciate ligament, or A.C.L. It is one of the most common — and most devastating — sports injuries. The standard treatment is surgery, with a difficult recuperation lasting six months to a year.

Mr. Ganobcik looked into surgical techniques and decided he wanted a different one than the one his doctor offered. So he saw another sports medicine orthopedist who, agreeing that Mr. Ganobcik’s ligament was torn, scheduled the operation.

Meanwhile, Mr. Ganobcik heard that Dr. Freddie H. Fu, chairman of the division of sports medicine at the University of Pittsburgh, had what might be an even better technique, so he went to see him.

To Mr. Ganobcik’s surprise, Dr. Fu told him his ligament was not torn after all. His pain was from a fracture in a long bone in the lower leg, which the other doctors had also noticed. An M.R.I. at the University of Pittsburgh confirmed it, showing a perfectly normal A.C.L. (Dr. Fu adds that Mr. Ganobcik’s original scans had an image that was ambiguous. He wanted a better one, to see if Mr. Ganobcik’s ligament had been partly torn and was healing or had never been torn at all. He would not need surgery with a partial tear, but he would need more careful recuperation.)

Dr. Fu’s suspicions were raised by Mr. Ganobcik’s story. He could never have continued skiing with a torn A.C.L. The diagnosis “made no sense,” Dr. Fu said.

And that, Dr. Fu says, illustrates a common problem: relying on an M.R.I. instead of a history and an exam. Dr. Fu’s diagnosis “was a shock,” Mr. Ganobcik said. “I thought he was going to talk about options for surgery.”

M.R.I.’s can be extremely useful in sports medicine, said Dr. Andrew Green, the chief of shoulder and elbow surgery at Brown University. But, he says, there is a fine line between appropriate use and overuse.

That, at least, is what he found in one of the few studies to address the issue. The ideal study would randomly assign patients to have scans or not and then assess their outcomes. Such a study has not been done. Instead, a few researchers asked if scans made a difference for people who happened to have them. They found they did not — at least in two common situations.

Dr. Green and his colleagues reviewed the records of 101 patients who had shoulder pain lasting at least six weeks and that had not resulted from trauma, like a fall. Forty-three arrived bearing M.R.I.’s from a doctor who had seen them previously. The others did not have scans. In all cases, Dr. Green made a diagnosis on the basis of a physical exam, a history, and regular X-rays.

A year later, Dr. Green re-assessed the patients. There was no difference in the outcome of the treatment of the two groups of patients despite his knowledge of the findings on the scans. M.R.I.’s, he said, are not needed for the initial evaluation and treatment of many whose shoulder pain does not result from an actual injury to the shoulder.

Dr. DiGiovanni did a similar study with foot and ankle patients, looking back at 221 consecutive patients over a three-month period, 201 of whom did not have fractures. More than 15 percent arrived with M.R.I.’s obtained by doctors they had seen before coming to Dr. DiGiovanni. Nearly 90 percent of those scans were unnecessary and half had interpretations that either made no difference to the patient’s diagnosis or were at odds with the diagnosis.

“Patients often feel like they are getting better care if people are ordering fancy tests, and there are some patients who come in demanding an M.R.I. — that’s part of the problem,” he said.

Some doctors might also feel they are providing better care if they order the scans, Dr. DiGiovanni said, and doctors often feel that they risk malpractice charges if they fail to scan a patient and then miss a diagnosis.

Dr. Hansen teaches his fellows — doctors in training — to be careful with scans and explains the risks of making the wrong diagnosis if they order them unnecessarily. He also knows it is not easy to refrain from ordering an M.R.I.

It’s different for him, Dr. Hansen says. He is so eminent that patients tend not to question him.

“When I say ‘You don’t need a scan,’ then it’s over,” Dr. Hansen said. His fellows get a different response. Patients, he says, “look at them like, ‘You don’t know what you’re doing.’ ”
 
Scientists' Elusive Goal: Reproducing Study Results
WSJ.com

By GAUTAM NAIK

Two years ago, a group of Boston researchers published a study describing how they had destroyed cancer tumors by targeting a protein called STK33. Scientists at biotechnology firm Amgen Inc. quickly pounced on the idea and assigned two dozen researchers to try to repeat the experiment with a goal of turning the findings into a drug.

Researchers at Bayer's labs often find their experiments fail to match claims made in the scientific literature.

It proved to be a waste of time and money. After six months of intensive lab work, Amgen found it couldn't replicate the results and scrapped the project.

"I was disappointed but not surprised," says Glenn Begley, vice president of research at Amgen of Thousand Oaks, Calif. "More often than not, we are unable to reproduce findings" published by researchers in journals.

This is one of medicine's dirty secrets: Most results, including those that appear in top-flight peer-reviewed journals, can't be reproduced.

"It's a very serious and disturbing issue because it obviously misleads people" who implicitly trust findings published in a respected peer-reviewed journal, says Bruce Alberts, editor of Science. On Friday, the U.S. journal is devoting a large chunk of its Dec. 2 issue to the problem of scientific replication.

Reproducibility is the foundation of all modern research, the standard by which scientific claims are evaluated. In the U.S. alone, biomedical research is a $100 billion-a-year enterprise. So when published medical findings can't be validated by others, there are major consequences.

Drug manufacturers rely heavily on early-stage academic research and can waste millions of dollars on products if the original results are later shown to be unreliable. Patients may enroll in clinical trials based on conflicting data, and sometimes see no benefits or suffer harmful side effects.

There is also a more insidious and pervasive problem: a preference for positive results.

Unlike pharmaceutical companies, academic researchers rarely conduct experiments in a "blinded" manner. This makes it easier to cherry-pick statistical findings that support a positive result. In the quest for jobs and funding, especially in an era of economic malaise, the growing army of scientists need more successful experiments to their name, not failed ones. An explosion of scientific and academic journals has added to the pressure.
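A small simulation can make the cherry-picking point concrete. This is my own illustration under assumed conditions (20 measured outcomes, no real effect anywhere), not a description of any study mentioned here.

[code]
# Illustrative simulation: if an unblinded analyst measures many outcomes and
# reports whichever one clears p < 0.05, "positive" results appear routinely
# even when there is no real effect at all.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_experiments, n_outcomes, n_per_group = 1000, 20, 30

hits = 0
for _ in range(n_experiments):
    # Treatment and control are drawn from the same distribution: no true effect.
    p_values = [
        stats.ttest_ind(rng.normal(size=n_per_group),
                        rng.normal(size=n_per_group)).pvalue
        for _ in range(n_outcomes)
    ]
    if min(p_values) < 0.05:  # keep only the best-looking outcome
        hits += 1

print(f"Experiments yielding at least one 'significant' finding: {hits / n_experiments:.0%}")
# Expected to be roughly 1 - 0.95**20, i.e. around 64%.
[/code]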

When it comes to results that can't be replicated, Dr. Alberts says the increasing intricacy of experiments may be largely to blame. "It has to do with the complexity of biology and the fact that methods [used in labs] are getting more sophisticated," he says.

It is hard to assess whether the reproducibility problem has been getting worse over the years; there are some signs suggesting it could be. For example, the success rate of Phase 2 human trials—where a drug's efficacy is measured—fell to 18% in 2008-2010 from 28% in 2006-2007, according to a global analysis published in the journal Nature Reviews in May.

"Lack of reproducibility is one element in the decline in Phase 2 success," says Khusru Asadullah, a Bayer AG research executive.

In September, Bayer published a study describing how it had halted nearly two-thirds of its early drug target projects because in-house experiments failed to match claims made in the literature.

The German pharmaceutical company says that none of the claims it attempted to validate were in papers that had been retracted or were suspected of being flawed. Yet, even the data in the most prestigious journals couldn't be confirmed, Bayer said.

In 2008, Pfizer Inc. made a high-profile bet, potentially worth more than $725 million, that it could turn a 25-year-old Russian cold medicine into an effective drug for Alzheimer's disease.

The idea was promising. Published by the journal Lancet, data from researchers at Baylor College of Medicine and elsewhere suggested that the drug, an antihistamine called Dimebon, could improve symptoms in Alzheimer's patients. Later findings, presented by researchers at the University of California Los Angeles at a Chicago conference, showed that the drug appeared to prevent symptoms from worsening for up to 18 months.

"Statistically, the studies were very robust," says David Hung, chief executive officer of Medivation Inc., a San Francisco biotech firm that sponsored both studies.

In 2010, Medivation, along with Pfizer, released data from their own clinical trial for Dimebon, involving nearly 600 patients with mild to moderate Alzheimer's disease symptoms. The companies said they were unable to reproduce the Lancet results. They also indicated they had found no statistically significant difference between patients on the drug and those on the inactive placebo.

Pfizer and Medivation have just completed a one-year study of Dimebon in over 1,000 patients, another effort to see if the drug could be a potential treatment for Alzheimer's. They expect to announce the results in coming months.

Scientists offer a few theories as to why duplicative results may be so elusive. Two different labs can use slightly different equipment or materials, leading to divergent results. The more variables there are in an experiment, the more likely it is that small, unintended errors will pile up and swing a lab's conclusions one way or the other. And, of course, data that have been rigged, invented or fraudulently altered won't stand up to future scrutiny.

According to a report published by the U.K.'s Royal Society, there were 7.1 million researchers working globally across all scientific fields—academic and corporate—in 2007, a 25% increase from five years earlier.

"Among the more obvious yet unquantifiable reasons, there is immense competition among laboratories and a pressure to publish," wrote Dr. Asadullah and others from Bayer, in their September paper. "There is also a bias toward publishing positive results, as it is easier to get positive results accepted in good journals."

Science publications are under pressure, too. The number of research journals has jumped 23% between 2001 and 2010, according to Elsevier, which has analyzed the data. Their proliferation has ratcheted up competitive pressure on even elite journals, which can generate buzz by publishing splashy papers, typically containing positive findings, to meet the demands of a 24-hour news cycle.

Dr. Alberts of Science acknowledges that journals increasingly have to strike a balance between publishing studies "with broad appeal," while making sure they aren't hyped.

Drugmakers also have a penchant for positive results. A 2008 study published in the journal PLoS Medicine by researchers at the University of California San Francisco looked at data from 33 new drug applications submitted between 2001 and 2002 to the U.S. Food and Drug Administration. The agency requires drug companies to provide all data from clinical trials. However, the authors found that a quarter of the trial data—most of it unfavorable—never got published because the companies never submitted it to journals.

The upshot: doctors who end up prescribing the FDA-approved drugs often don't get to see the unfavorable data.

"I would say that selectively publishing data is unethical because there are human subjects involved," says Lisa Bero of UCSF and co-author of the PLoS Medicine study.

In an email statement, a spokeswoman for the FDA said the agency considers all data it is given when reviewing a drug but "does not have the authority to control what a company chooses to publish."

Venture capital firms say they, too, are increasingly encountering cases of nonrepeatable studies, and cite it as a key reason why they are less willing to finance early-stage projects. Before investing in very early-stage research, Atlas Ventures, a venture-capital firm that backs biotech companies, now asks an outside lab to validate any experimental data. In about half the cases the findings can't be reproduced, says Bruce Booth, a partner in Atlas' Life Sciences group.

There have been several prominent cases of nonreproducibility in recent months. For example, in September, the journal Science partially retracted a 2009 paper linking a virus to chronic fatigue syndrome because several labs couldn't replicate the published results. The partial retraction came after two of the 13 study authors went back to the blood samples they analyzed from chronic-fatigue patients and found they were contaminated.

Some studies can't be redone for a more prosaic reason: the authors won't make all their raw data available to rival scientists.

John Ioannidis of Stanford University recently attempted to reproduce the findings of 18 papers published in the respected journal Nature Genetics. He noted that 16 of these papers stated that the underlying "gene expression" data for the studies were publicly available.

But the supplied data apparently weren't detailed enough, and results from 16 of the 18 major papers couldn't fully be reproduced by Dr. Ioannidis and his colleagues. "We have to take it [on faith] that the findings are OK," said Dr. Ioannidis, an epidemiologist who studies the credibility of medical research.

Veronique Kiermer, an editor at Nature, says she agrees with Dr. Ioannidis' conclusions, noting that the findings have prompted the journal to be more cautious when publishing large-scale genome analyses.

When companies trying to find new drugs come up against the nonreproducibility problem, the repercussions can be significant.

A few years ago, several groups of scientists began to seek out new cancer drugs by targeting a protein called KRAS. The KRAS protein transmits signals received on the outside of a cell to its interior and is therefore crucial for regulating cell growth. But when certain mutations occur, the signaling can become continuous. That triggers excess growth such as tumors.

The mutated form of KRAS is believed to be responsible for more than 60% of pancreatic cancers and half of colorectal cancers. It has also been implicated in the growth of tumors in many other organs, such as the lung.

So scientists have been especially keen to impede KRAS and, thus, stop the constant signaling that leads to tumor growth.

In 2008, researchers at Harvard Medical School used cell-culture experiments to show that by inhibiting another protein, STK33, they could prevent the growth of tumor cell lines driven by the malfunctioning KRAS.

The finding galvanized researchers at Amgen, who first heard about the experiments at a scientific conference. "Everyone was trying to do this," recalls Dr. Begley of Amgen, which derives nearly half of its revenues from cancer drugs and related treatments. "It was a really big deal."

When the Harvard researchers published their results in the prestigious journal Cell, in May 2009, Amgen moved swiftly to capitalize on the findings.

At a meeting in the company's offices in Thousand Oaks, Calif., Dr. Begley assigned a group of Amgen researchers the task of identifying small molecules that might inhibit STK33. Another team got a more basic job: reproduce the Harvard data.

"We're talking about hundreds of millions of dollars in downstream investments" if the approach works," says Dr. Begley. "So we need to be sure we're standing on something firm and solid."

But over the next few months, Dr. Begley and his team got increasingly disheartened. Amgen scientists, it turned out, couldn't reproduce any of the key findings published in Cell.

For example, there was no difference in the growth of cells where STK33 was largely blocked, compared with a control group of cells where STK33 wasn't blocked.

What could account for the irreproducibility of the results?

"In our opinion there were methodological issues" in Amgen's approach that could have led to the different findings, says Claudia Scholl, one of the lead authors of the original Cell paper.

Dr. Scholl points out, for example, that Amgen used a different reagent to suppress STK33 than the one reported in Cell. Yet, she acknowledges that even when slightly different reagents are used, "you should be able to reproduce the results."

Now a cancer researcher at the University Hospital of Ulm in Germany, Dr. Scholl says her team has reproduced the original Cell results multiple times, and continues to have faith in STK33 as a cancer target.

Amgen, however, killed its STK33 program. In September, two dozen of the firm's scientists published a paper in the journal Cancer Research describing their failure to reproduce the main Cell findings.

Dr. Begley suggests that academic scientists, like drug companies, should perform more experiments in a "blinded" manner to reduce any bias toward positive findings. Otherwise, he says, "there is a human desire to get the results your boss wants you to get."

Adds Atlas' Mr. Booth: "Nobody gets a promotion from publishing a negative study."
 
The High Cost of Failing Artificial Hips
http://www.nytimes.com/2011/12/28/business/the-high-cost-of-failing-artificial-hips.html

The most widespread medical implant failure in decades — involving thousands of all-metal artificial hips that need to be replaced prematurely — has entered the money phase.

Medical and legal experts estimate the hip failures may cost taxpayers, insurers, employers and others billions of dollars in coming years, contributing to the soaring cost of health care. The financial fallout is expected to be unusually large and complex because the episode involves a class of products, not a single device or just one company.
 
Hopkins scientists retract prostate cancer screening study at center of 2009 lawsuits
Retraction Watch

The authors of a study in Urology that was at the center of two 2009 lawsuits brought by a company that funded the work have retracted the paper.

The idea behind the research — by Robert Getzenberg and colleagues at Johns Hopkins — was to find an alternative to the prostate specific antigen (PSA) test, which many urologists recommend, but which many groups — including the US Preventive Services Task Force — find wanting. The work gave rise to a company, Onconome, Science reported in a 2009 story about the lawsuits:
 
Who Else Is Paying Your Doctor?
http://www.nytimes.com/2012/01/21/opinion/who-else-is-paying-your-doctor.html

January 20, 2012

It took longer than expected, but the Obama administration is finally poised to enact badly needed regulations requiring that the manufacturers of drugs, medical devices and medical supplies disclose all payments they make to doctors or teaching hospitals. The information, which would be posted on a government Web site, will allow patients to decide whether they need to worry about any possible conflicts of interest.

Such payments can be for legitimate research and consulting. But there is also a lot of cash being spread around to pay for doctors’ travel and entertainment or for gifts or modest meals for a prescribing doctor’s staff.

As Robert Pear reported in The Times this week, some prominent doctors and researchers receive hundreds of thousands or even millions of dollars a year in exchange for providing advice to a company or giving lectures on its behalf. About a quarter of all doctors take some cash payments from drug or device makers and nearly two-thirds accept meals or food gifts. Analysts contend that even seemingly trivial gifts can influence doctors to prescribe expensive drugs that may not be best for a patient’s health or pocketbook.

The new rules were championed by Senator Charles Grassley, a Republican, and Senator Herb Kohl, a Democrat, and incorporated into the health care reforms enacted in 2010. The reform law required the Department of Health and Human Services to establish reporting procedures by Oct. 1, 2011, and required manufacturers to start collecting the relevant data by Jan. 1, 2012. The proposed rules were finally issued on Dec. 14 and are subject to comment until Feb. 17, after which they will be revised and issued in final form.

The Centers for Medicare and Medicaid Services will publish the disclosure data on a public Web site that the law says must be searchable and understandable so that patients and advocacy groups can see which doctors are being paid and how much. Manufacturers could be fined up to $150,000 a year for failing to report payments and up to $1 million a year for “knowingly” failing to report.

The new rules should give a welcome boost to otherwise spotty efforts by some companies, medical centers, scientific journals, states and ethical codes to eliminate, minimize or at least disclose financial interests that might cloud medical judgments. The existence of the Web site could deter some questionable payments. And it could help patients decide which doctors to rely on.
 
Summary Points

The American Psychiatric Association (APA) instituted a financial conflict of interest disclosure policy for the 5th edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM).

The new disclosure policy has not been accompanied by a reduction in the financial conflicts of interest of DSM panel members.

Transparency alone cannot mitigate the potential for bias and is an insufficient solution for protecting the integrity of the revision process.

Gaps in APA's disclosure policy are identified and recommendations for more stringent safeguards are offered.


Cosgrove L, Krimsky S. A Comparison of DSM-IV and DSM-5 Panel Members' Financial Associations with Industry: A Pernicious Problem Persists. PLoS Med 2012;9(3):e1001190.
 
No wonder there is a rush to pathologize so many psychological conditions...

Three-fourths of the work groups continue to have a majority of their members with financial ties to the pharmaceutical industry. It is also noteworthy that, as with the DSM-IV, the most conflicted panels are those for which pharmacological treatment is the first-line intervention. For example, 67% (N = 12) of the panel for Mood Disorders, 83% (N = 12) of the panel for Psychotic Disorders, and 100% (N = 7) of the panel for Sleep/Wake Disorders (which now includes “Restless Leg Syndrome”) have ties to the pharmaceutical companies that manufacture the medications used to treat these disorders or to companies that service the pharmaceutical industry.
 
March 22, 2012: Victims Speak Out about Dangerous Loophole, Unsafe Medical Devices on the Market (YouTube)
http://www.youtube.com/watch?v=qCda6GK0S38
 
Open Clinical Trial Data for All

In this issue of PLoS Medicine, Doshi and colleagues argue that the full clinical trial reports of authorized drugs should be made publicly available to enable independent re-analysis of drugs' benefits and risks. We offer comments on their call for openness from a European Union drug regulatory perspective.

For the purpose of this discussion, we consider “clinical study reports” to comprise not just the protocol, summary tables, and figures of (mostly) randomized controlled trials (RCTs), but the full “raw” data set, including data at the patient level. We limit discussion to data on drugs for which the regulatory benefit-risk assessment has been completed.


Eichler H-G, Abadie E, Breckenridge A, Leufkens H, Rasi G. Open Clinical Trial Data for All? A View from Regulators. PLoS Med 2012;9(4):e1001202.

Hans-Georg Eichler from the European Medicines Agency and colleagues provide a view from regulators on access to clinical trial data.


Doshi P, Jefferson T, Del Mar C. The Imperative to Share Clinical Study Reports: Recommendations from the Tamiflu Experience. PLoS Med 2012;9(4):e1001201.

Peter Doshi and colleagues describe their experience trying and failing to access clinical study reports from the manufacturer of Tamiflu and challenge industry to defend their current position of RCT data secrecy.

Summary Points

Systematic reviews of published randomized clinical trials (RCTs) are considered the gold standard source of synthesized evidence for interventions, but their conclusions are vulnerable to distortion when trial sponsors have strong interests that might benefit from suppressing or promoting selected data.

More reliable evidence synthesis would result from systematic reviewing of clinical study reports—standardized documents representing the most complete record of the planning, execution, and results of clinical trials, which are submitted by industry to government drug regulators.

Unfortunately, industry and regulators have historically treated clinical study reports as confidential documents, impeding additional scrutiny by independent researchers.

We propose that clinical study reports become available to such scrutiny, and describe one manufacturer's unconvincing reasons for refusing to provide us access to full clinical study reports. We challenge industry to either provide open access to clinical study reports or publicly defend their current position of RCT data secrecy.


Hrynaszkiewicz I, Altman DG. Towards agreement on best practice for publishing raw clinical trial data. Trials 2009;10:17.

Many research-funding agencies now require open access to the results of research they have funded, and some also require that researchers make available the raw data generated from that research. Similarly, the journal Trials aims to address inadequate reporting in randomised controlled trials, and in order to fulfil this objective, the journal is working with the scientific and publishing communities to try to establish best practice for publishing raw data from clinical trials in peer-reviewed biomedical journals. Common issues encountered when considering raw data for publication include patient privacy - unless explicit consent for publication is obtained - and ownership, but agreed-upon policies for tackling these concerns do not appear to be addressed in the guidance or mandates currently established. Potential next steps for journal editors and publishers, ethics committees, research-funding agencies, and researchers are proposed, and alternatives to journal publication, such as restricted access repositories, are outlined.
 
Reproducibility Project
The project is a large-scale, open collaboration to estimate the reproducibility of a sample of studies from the scientific literature.
http://openscienceframework.org/project/shvrbV8uSkHewsfD4/wiki/index

Do normative scientific practices and incentive structures produce a biased body of research evidence? The Reproducibility Project is the first known empirical effort to estimate the reproducibility of a sample of studies from the scientific literature. The project is a large-scale, open collaboration involving dozens of scientists from around the world. The investigation is currently sampling from the 2008 issues of three prominent psychology journals - Journal of Personality and Social Psychology, Psychological Science, and Journal of Experimental Psychology: Learning, Memory, and Cognition. Individuals or teams of scientists follow a structured protocol for designing and conducting a close, high-powered replication of a key effect from the selected articles. The project will evaluate the ability to reproduce the original study procedures and the overall probability of replicating the original results. Further, it will examine the predictors of replication success - e.g., publishing journal, number of conceptual/direct replications in the published literature, citation impact of the original article, closeness of the replication to the original circumstances: sample, setting, materials. Interested contributors can still join the project.
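For readers unfamiliar with the term, "high-powered" has a concrete statistical meaning. The sketch below is a generic power calculation of my own, not the project's actual protocol; it shows roughly how many subjects per group a replication would need to detect an originally reported effect size with 90% power.

[code]
# Minimal sketch (normal approximation) of the sample size a two-group replication
# needs to detect a given standardized effect size d with 90% power at alpha = 0.05.
from scipy.stats import norm

def n_per_group(effect_size, alpha=0.05, power=0.90):
    z_alpha = norm.ppf(1 - alpha / 2)
    z_power = norm.ppf(power)
    return 2 * ((z_alpha + z_power) / effect_size) ** 2

for d in (0.2, 0.5, 0.8):  # conventional small, medium, large effects
    print(f"effect size d = {d}: about {n_per_group(d):.0f} subjects per group")
# Small published effects demand far larger replication samples than most original studies used.
[/code]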

[While not intimately familiar with these journals, I have seen a number of published studies from them that are amusing and fanciful. They are entertaining at best and make for good conversation.]
 
A Sharp Rise in Retractions Prompts Calls for Reform
http://www.nytimes.com/2012/04/17/s...etractions-prompts-calls-for-reform.html?_r=1

April 16, 2012
By CARL ZIMMER

In the fall of 2010, Dr. Ferric C. Fang made an unsettling discovery. Dr. Fang, who is editor in chief of the journal Infection and Immunity, found that one of his authors had doctored several papers.

It was a new experience for him. “Prior to that time,” he said in an interview, “Infection and Immunity had only retracted nine articles over a 40-year period.”

The journal wound up retracting six of the papers from the author, Naoki Mori of the University of the Ryukyus in Japan. And it soon became clear that Infection and Immunity was hardly the only victim of Dr. Mori’s misconduct. Since then, other scientific journals have retracted two dozen of his papers, according to the watchdog blog Retraction Watch.

“Nobody had noticed the whole thing was rotten,” said Dr. Fang, who is a professor at the University of Washington School of Medicine.

Dr. Fang became curious how far the rot extended. To find out, he teamed up with a fellow editor at the journal, Dr. Arturo Casadevall of the Albert Einstein College of Medicine in New York. And before long they reached a troubling conclusion: not only that retractions were rising at an alarming rate, but that retractions were just a manifestation of a much more profound problem — “a symptom of a dysfunctional scientific climate,” as Dr. Fang put it.

Dr. Casadevall, now editor in chief of the journal mBio, said he feared that science had turned into a winner-take-all game with perverse incentives that lead scientists to cut corners and, in some cases, commit acts of misconduct.

“This is a tremendous threat,” he said.

Last month, in a pair of editorials in Infection and Immunity, the two editors issued a plea for fundamental reforms. They also presented their concerns at the March 27 meeting of the National Academies of Sciences committee on science, technology and the law.

Members of the committee agreed with their assessment. “I think this is really coming to a head,” said Dr. Roberta B. Ness, dean of the University of Texas School of Public Health. And Dr. David Korn of Harvard Medical School agreed that “there are problems all through the system.”

No one claims that science was ever free of misconduct or bad research. Indeed, the scientific method itself is intended to overcome mistakes and misdeeds. When scientists make a new discovery, others review the research skeptically before it is published. And once it is, the scientific community can try to replicate the results to see if they hold up.

But critics like Dr. Fang and Dr. Casadevall argue that science has changed in some worrying ways in recent decades — especially biomedical research, which consumes a larger and larger share of government science spending.

In October 2011, for example, the journal Nature reported that published retractions had increased tenfold over the past decade, while the number of published papers had increased by just 44 percent. In 2010 The Journal of Medical Ethics published a study finding the raft of recent retractions was a mix of misconduct and honest scientific mistakes.

Several factors are at play here, scientists say. One may be that because journals are now online, bad papers are simply reaching a wider audience, making it more likely that errors will be spotted. “You can sit at your laptop and pull a lot of different papers together,” Dr. Fang said.

But other forces are more pernicious. To survive professionally, scientists feel the need to publish as many papers as possible, and to get them into high-profile journals. And sometimes they cut corners or even commit misconduct to get there.

To measure this claim, Dr. Fang and Dr. Casadevall looked at the rate of retractions in 17 journals from 2001 to 2010 and compared it with the journals’ “impact factor,” a score based on how often their papers are cited by scientists. The higher a journal’s impact factor, the two editors found, the higher its retraction rate.
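As a rough illustration of the metric, a "retraction index" can be computed as retractions per published paper (scaled per 1,000 papers here for readability) and then correlated with impact factor. The journal names and figures below are made up for demonstration; they are not the editors' dataset.

[code]
# Illustrative only: hypothetical journals and numbers, not the study's data.
from scipy.stats import spearmanr

journals = {
    # name: (impact_factor, retractions_2001_2010, papers_published_2001_2010)
    "Journal A": (30.0, 25, 10000),
    "Journal B": (10.0, 6, 12000),
    "Journal C": (4.0, 2, 15000),
    "Journal D": (2.0, 1, 20000),
}

def retraction_index(retractions, papers):
    """Retractions per 1,000 papers published over the window."""
    return 1000 * retractions / papers

impact = [v[0] for v in journals.values()]
index = [retraction_index(v[1], v[2]) for v in journals.values()]

rho, _ = spearmanr(impact, index)
print(f"Rank correlation between impact factor and retraction index: {rho:.2f}")
[/code]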

The highest “retraction index” in the study went to one of the world’s leading medical journals, The New England Journal of Medicine. In a statement for this article, it questioned the study’s methodology, noting that it considered only papers with abstracts, which make up only a small fraction of what the journal publishes in each issue. “Because our denominator was low, the index was high,” the statement said.

Monica M. Bradford, executive editor of the journal Science, suggested that the extra attention high-impact journals get might be part of the reason for their higher rate of retraction. “Papers making the most dramatic advances will be subject to the most scrutiny,” she said.

Dr. Fang says that may well be true, but adds that it cuts both ways — that the scramble to publish in high-impact journals may be leading to more and more errors. Each year, every laboratory produces a new crop of Ph.D.’s, who must compete for a small number of jobs, and the competition is getting fiercer. In 1973, more than half of biologists had a tenure-track job within six years of getting a Ph.D. By 2006 the figure was down to 15 percent.

Yet labs continue to have an incentive to take on lots of graduate students to produce more research. “I refer to it as a pyramid scheme,” said Paula Stephan, a Georgia State University economist and author of “How Economics Shapes Science,” published in January by Harvard University Press.

In such an environment, a high-profile paper can mean the difference between a career in science or leaving the field. “It’s becoming the price of admission,” Dr. Fang said.

The scramble isn’t over once young scientists get a job. “Everyone feels nervous even when they’re successful,” he continued. “They ask, ‘Will this be the beginning of the decline?’ ”

University laboratories count on a steady stream of grants from the government and other sources. The National Institutes of Health accepts a much lower percentage of grant applications today than in earlier decades. At the same time, many universities expect scientists to draw an increasing part of their salaries from grants, and these pressures have influenced how scientists are promoted.

“What people do is they count papers, and they look at the prestige of the journal in which the research is published, and they see how many grant dollars scientists have, and if they don’t have funding, they don’t get promoted,” Dr. Fang said. “It’s not about the quality of the research.”

Dr. Ness likens scientists today to small-business owners, rather than people trying to satisfy their curiosity about how the world works. “You’re marketing and selling to other scientists,” she said. “To the degree you can market and sell your products better, you’re creating the revenue stream to fund your enterprise.”

Universities want to attract successful scientists, and so they have erected a glut of science buildings, Dr. Stephan said. Some universities have gone into debt, betting that the flow of grant money will eventually pay off the loans. “It’s really going to bite them,” she said.

With all this pressure on scientists, they may lack the extra time to check their own research — to figure out why some of their data doesn’t fit their hypothesis, for example. Instead, they have to be concerned about publishing papers before someone else publishes the same results.

“You can’t afford to fail, to have your hypothesis disproven,” Dr. Fang said. “It’s a small minority of scientists who engage in frank misconduct. It’s a much more insidious thing that you feel compelled to put the best face on everything.”

Adding to the pressure, thousands of new Ph.D. scientists are coming out of countries like China and India. Writing in the April 5 issue of Nature, Dr. Stephan points out that a number of countries — including China, South Korea and Turkey — now offer cash rewards to scientists who get papers into high-profile journals. She has found these incentives set off a flood of extra papers submitted to those journals, with few actually being published in them. “It clearly burdens the system,” she said.

To change the system, Dr. Fang and Dr. Casadevall say, start by giving graduate students a better understanding of science’s ground rules — what Dr. Casadevall calls “the science of how you know what you know.”

They would also move away from the winner-take-all system, in which grants are concentrated among a small fraction of scientists. One way to do that may be to put a cap on the grants any one lab can receive.

Such a shift would require scientists to surrender some of their most cherished practices — the priority rule, for example, which gives all the credit for a scientific discovery to whoever publishes results first. (Three centuries ago, Isaac Newton and Gottfried Leibniz were bickering about who invented calculus.) Dr. Casadevall thinks it leads to rival research teams’ obsessing over secrecy, and rushing out their papers to beat their competitors. “And that can’t be good,” he said.

To ease such cutthroat competition, the two editors would also change the rules for scientific prizes and would have universities take collaboration into account when they decide on promotions.

Ms. Bradford, of Science magazine, agreed. “I would agree that a scientist’s career advancement should not depend solely on the publications listed on his or her C.V.,” she said, “and that there is much room for improvement in how scientific talent in all its diversity can be nurtured.”

Even scientists who are sympathetic to the idea of fundamental change are skeptical that it will happen any time soon. “I don’t think they have much chance of changing what they’re talking about,” said Dr. Korn, of Harvard.

But Dr. Fang worries that the situation could become much more dire if nothing happens soon. “When our generation goes away, where is the new generation going to be?” he asked. “All the scientists I know are so anxious about their funding that they don’t make inspiring role models. I heard it from my own kids, who went into art and music respectively. They said, ‘You know, we see you, and you don’t look very happy.’ ”
 
Author retracts weight loss surgery paper after admitting most, if not all, of the subjects were made up
http://retractionwatch.wordpress.com/2012/05/02/author-retracts-weight-loss-surgery-paper-after-admitting-most-if-not-all-of-the-subjects-were-made-up/
 
Research: Uncovering misconduct
http://www.nature.com/naturejobs/2012/120503/pdf/nj7396-137a.pdf

Biostatisticians Keith Baggerly and Kevin Coombes, like many, were intrigued by claims of personalized chemotherapy treatments by geneticist Anil Potti in 2006. Then at Duke University in Durham, North Carolina, Potti published results indicating that gene expression signatures could identify which chemotherapy drug could best treat lung or breast cancer — results that led to the setup of three clinical trials. But Baggerly and Coombes quickly found something amiss in the data. What began as concerns over apparent errors, including mislabelled samples and mismatched gene names, eventually snowballed into one of the most notorious cases of scientific misconduct in the United States in recent years.

During some 1,500 hours of work over four years, Baggerly and Coombes, both of the University of Texas MD Anderson Cancer Center in Houston, repeatedly showed that Potti's findings did not match the raw data. They analysed the data, had conversations with Potti and his supervisor, alerted the US National Cancer Institute to the likely mistakes and contacted the editors of the journal publishing Potti's work.

Repeated enquiries and complaints by Baggerly and Coombes led senior officials at the University of Texas to advise them to drop what was starting to look like a vendetta. “We were focused on the fact that the data used to justify clinical trials were wrong; we thought that should be enough,” says Baggerly. “How the data got in this shape was not our immediate concern.”

Their objections were finally proved valid. Six years on, ten papers by Potti have been retracted, and the clinical trials were halted eventually. Baggerly and Coombes say that their persistence was down to their obligation to scientific ethics and the consequences of a clinical trial based on incorrect data.
 
Is misconduct more likely in drug trials than in other biomedical research?
http://retractionwatch.wordpress.com/2012/05/17/is-misconduct-more-likely-in-drug-trials-than-in-other-biomedical-research/
 
Insurers Pay Big Markups as Doctors Dispense Drugs
http://www.nytimes.com/2012/07/12/b...ng-millions-selling-drugs.html?pagewanted=all

July 11, 2012
By BARRY MEIER and KATIE THOMAS

When a pharmacy sells the heartburn drug Zantac, each pill costs about 35 cents. But doctors dispensing it to patients in their offices have charged nearly 10 times that price, or $3.25 a pill.

The same goes for a popular muscle relaxant known as Soma, insurers say. From a pharmacy, the per-pill price is 60 cents. Sold by a doctor, it can cost more than five times that, or $3.33.

At a time of soaring health care bills, experts say that doctors, middlemen and drug distributors are adding hundreds of millions of dollars annually to the costs borne by taxpayers, insurance companies and employers through the practice of physician dispensing.

Most common among physicians who treat injured workers, it is a twist on a typical doctor’s visit. Instead of sending patients to drugstores to get prescriptions filled, doctors dispense the drugs in their offices to patients, with the bills going to insurers. Doctors can make tens of thousands of dollars a year operating their own in-office pharmacies. The practice has become so profitable that private equity firms are buying stakes in the businesses, and political lobbying over the issue is fierce.

Doctor dispensing can be convenient for patients. But rules in many states governing workers’ compensation insurance contain loopholes that allow doctors to sell the drugs at huge markups. Profits from the sales are shared by doctors, middlemen who help physicians start in-office pharmacies and drug distributors who repackage medications for office sale.

Alarmed by the costs, some states, including California and Oklahoma, have clamped down on the practice. But legislative and regulatory battles over it are playing out in other states like Florida, Hawaii and Maryland.

In Florida, a company called Automated HealthCare Solutions, a leader in physician dispensing, has defeated repeated efforts to change what doctors can charge. The company, which is partly owned by Abry Partners, a private equity firm, has given more than $3.3 million in political contributions either directly or through entities its principals control, public records show.

Insurers and business groups said they were amazed by the little-known company’s spending spree. To plead its case to Florida lawmakers, Automated HealthCare hired one of the state’s top lobbyists, Brian Ballard, who is also a major national fund-raiser for the Mitt Romney campaign.

“I consider the fees that these people are charging to be immoral,” said Alan Hays, a Republican state senator in Florida who introduced a bill to bar physicians from dispensing pills that was defeated. “They’re legal under the current law, but they’re immoral.”

Physician dispensing works like this: Middlemen like Automated HealthCare help doctors set up office pharmacies by providing them with billing software and connecting them with suppliers who repackage medications for office sale. Doctors sell the drugs, but they do not collect payments from insurers. In the case of Automated HealthCare, the company pays the doctor 70 percent of what the doctor charges, then seeks to collect the full amount from insurers.

The number of doctors nationwide who dispense drugs in their offices is not known, and the practice is prevalent only in states where workers’ compensation rules allow for large markups.

Dr. Paul Zimmerman, a founder of Automated HealthCare, said that insurers and other opponents of doctor dispensing were distorting its costs by emphasizing the prices of a few drugs, rather than the typical price spread between physician- and pharmacy-dispensed drugs.

Both Dr. Zimmerman and physicians who sell drugs also said the workers’ compensation system was so bureaucratic and complex that an injured employee could wait days before getting a needed medication through a pharmacy.

“We did not institute this because of the money,” Dr. Marc Loev, a managing partner of the Spine Center, a chain of clinics in Maryland, testified last year at a public hearing in Baltimore. “We instituted it because we were having significant difficulty providing the care for workers’ compensation patients.”

The loophole that raises the price of physician-dispensed drugs often involves a benchmark called “average wholesale price.” The cost of a medication dispensed through a workers’ compensation plan is pegged in some states to that benchmark, which is supposed to represent a drug’s typical wholesale cost.

But doctor-dispensed drugs can undergo an “average wholesale price” makeover. It happens when firms that supply doctors with medications buy them in bulk from wholesalers and repackage them for office sale. These “repackagers” can set a new “average wholesale price,” one that is often many times higher than the original.

For example, in 2010, a physician associated with the Spine Center, Dr. Loev’s practice in Maryland, gave a patient a prescription for 360 patches containing a pain-numbing drug, lidocaine. The worker’s insurer was charged $7,304, according to a copy of that bill provided to The New York Times by a lawyer, Michael S. Levin, who represents insurance companies.

The cost for a similar number of patches dispensed by a doctor in California, which changed its regulations in 2007, is about $4,068, according to the California Workers’ Compensation Institute, a research group.

Warren G. Moseley, the president of a company in Tulsa, Okla., Physicians Total Care, that repackages drugs for office sale by doctors, said it charged physicians $2,863 for 360 patches.
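Putting the article's own figures for the 360 lidocaine patches side by side shows the size of the markup; the short sketch below simply restates that arithmetic.

[code]
# Arithmetic on the figures quoted above for 360 lidocaine patches.
patches = 360
acquisition_cost = 2863   # what the repackager says it charges physicians
maryland_bill = 7304      # what the Maryland insurer was charged
california_rate = 4068    # approximate charge under California's post-2007 rules

print(f"Per-patch cost to the doctor:  ${acquisition_cost / patches:.2f}")          # ~$7.95
print(f"Per-patch bill in Maryland:    ${maryland_bill / patches:.2f}")             # ~$20.29
print(f"Maryland markup over cost:     {maryland_bill / acquisition_cost:.1f}x")    # ~2.6x
print(f"California markup over cost:   {california_rate / acquisition_cost:.1f}x")  # ~1.4x
[/code]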

Dr. Loev, who uses Automated HealthCare’s services, declined to be interviewed and did not respond to specific written questions from The Times.

Dr. Charles Thorne, a principal at Multi-Specialty HealthCare, another Maryland-based chain of clinics that dispenses drugs, also declined to be interviewed.

Dr. Zimmerman, the co-founder of Automated HealthCare, said that drug prices are set by companies that repackage medications for office sales.

He added that Automated HealthCare referred doctors to about a dozen repackagers. But the company has a relationship with one repackaging company called Quality Care Products, based in the Midwest. The two firms have exhibited their services together and jointly sponsor a charity golf tournament.

The president of Quality Care, Gene Gunderson, declined to be interviewed and the company did not respond to written questions.

Data collected by Florida insurers who handle workers’ compensation claims shows that Quality Care supplies about 40 percent of the drugs sold by doctors in the state, a market share three times as high as that of its closest competitor.

Robert M. Mernick, the president of Bryant Ranch Prepack, a company in North Hollywood, Calif., that repackages medications for office sale, said he found it extraordinary that lawmakers in other states like Florida and Maryland were allowing such drug markups to continue.

“I see it as corruption,” he said. “I think it is horrible.”

In 2010, Abry Partners, a private equity firm in Boston, bought a stake in Automated HealthCare for $85 million. Officials of Abry also declined to be interviewed for this article.

That same year, Florida lawmakers tried to clamp down on how much doctors could charge for drugs. Automated HealthCare responded with a major lobbying and spending campaign, focusing its efforts on state leaders like the president of the Florida senate, Mike Haridopolos.

When the bill was reintroduced this year, Mr. Haridopolos declined to allow a vote. The state’s insurance commissioner had backed the move, saying it would annually save firms and taxpayers $62 million, a figure disputed by Automated HealthCare.

Mr. Haridopolos said he didn’t believe the bill had a chance of winning. “It seemed like a big political food fight,” he said.

Mr. Hays, the legislator who introduced the measure, said he found that hard to believe. “The strategy of the people that were opposed to this bill was to put the right amount of dollars in the right hands and get the bill blocked,” he said. “And they were successful in doing that.”
 
Vast Effort by FDA Spied on E-Mails of Its Own Scientists
http://www.nytimes.com/2012/07/15/us/fda-surveillance-of-scientists-spread-to-outside-critics.html

July 14, 2012
By ERIC LICHTBLAU and SCOTT SHANE

WASHINGTON — A wide-ranging surveillance operation by the Food and Drug Administration against a group of its own scientists used an enemies list of sorts as it secretly captured thousands of e-mails that the disgruntled scientists sent privately to members of Congress, lawyers, labor officials, journalists and even President Obama, previously undisclosed records show.

What began as a narrow investigation into the possible leaking of confidential agency information by five scientists quickly grew in mid-2010 into a much broader campaign to counter outside critics of the agency’s medical review process, according to the cache of more than 80,000 pages of computer documents generated by the surveillance effort.

Moving to quell what one memorandum called the “collaboration” of the F.D.A.’s opponents, the surveillance operation identified 21 agency employees, Congressional officials, outside medical researchers and journalists thought to be working together to put out negative and “defamatory” information about the agency.

The agency, using so-called spy software designed to help employers monitor workers, captured screen images from the government laptops of the five scientists as they were being used at work or at home. The software tracked their keystrokes, intercepted their personal e-mails, copied the documents on their personal thumb drives and even followed their messages line by line as they were being drafted, the documents show.

The extraordinary surveillance effort grew out of a bitter dispute lasting years between the scientists and their bosses at the F.D.A. over the scientists’ claims that faulty review procedures at the agency had led to the approval of medical imaging devices for mammograms and colonoscopies that exposed patients to dangerous levels of radiation.

A confidential government review in May by the Office of Special Counsel, which deals with the grievances of government workers, found that the scientists’ medical claims were valid enough to warrant a full investigation into what it termed “a substantial and specific danger to public safety.”

The documents captured in the surveillance effort — including confidential letters to at least a half-dozen Congressional offices and oversight committees, drafts of legal filings and grievances, and personal e-mails — were posted on a public Web site, apparently by mistake, by a private document-handling contractor that works for the F.D.A. The New York Times reviewed the records and their day-by-day, sometimes hour-by-hour accounting of the scientists’ communications.

Congressional staff members who were regarded as sympathetic to the scientists were then cataloged by name in 66 huge directories. Drafts and final copies of letters to Mr. Obama about the scientists’ safety concerns were also included.

Last year, the scientists found that a few dozen of their e-mails had been intercepted by the agency. They filed a lawsuit over the issue in September, after four of the scientists had been let go, and The Washington Post first disclosed the monitoring in January. But the wide scope of the F.D.A. surveillance operation, its broad range of targets across Washington and the huge volume of computer information it generated were not previously known, even to some of the targets.

The F.D.A. defended the surveillance operation in a statement on Friday. The computer monitoring “was consistent with F.D.A. policy,” the agency said, and the e-mails “were collected without regard to the identity of the individuals with whom the user may have been corresponding.” The statement did not explain the inclusion of Congressional officials, journalists and others referred to as “actors” who were in contact with the disaffected scientists.

While federal agencies have broad discretion to monitor their employees’ computer use, the F.D.A. program may have crossed legal lines by grabbing and analyzing confidential information that is specifically protected under the law, including attorney-client communications, whistle-blower complaints to Congress and workplace grievances filed with the government.

White House officials were so alarmed to learn of the F.D.A. operation that they sent a governmentwide memo last month from the Office of Management and Budget emphasizing that while the internal monitoring of employee communications was allowed, it could not be used under the law to intimidate whistle-blowers. Any monitoring must be done in ways that “do not interfere with or chill employees’ use of appropriate channels to disclose wrongdoing,” the memo said.

Although some senior F.D.A. officials appear to have been made aware of aspects of the surveillance, which went on for months, the documents do not make clear who at the agency authorized the program or whether it is still in operation.

But Stephen Kohn, a lawyer who represents six scientists who are suing the agency, said he planned to go to federal court this month seeking an injunction to stop any surveillance that may be continuing against the two medical researchers among the group who are still employed there.

The scientists who have been let go say in a lawsuit that their treatment was retaliation for reporting their claims of mismanagement and safety abuses in the F.D.A.’s medical reviews.

Members of Congress from both parties were irate to learn that correspondence between the scientists and their own staff members had been gathered and analyzed.

Representative Chris Van Hollen, a Maryland Democrat who has examined the agency’s medical review procedures, was listed as No. 14 on the surveillance operation’s list of targets — an “ancillary actor” in the efforts to put out negative information on the agency. (An aide to Mr. Van Hollen was No. 13.)

Mr. Van Hollen said in a statement on Friday after learning of his status on the list that “it is absolutely unacceptable for the F.D.A. to be spying on employees who reach out to members of Congress to expose abuses or wrongdoing in government agencies.”

Senator Charles E. Grassley, an Iowa Republican whose former staff member’s e-mails were cataloged in the surveillance database, said that “the F.D.A. is discouraging whistle-blowers.” He added that agency officials “have absolutely no business reading the private e-mails of their employees. They think they can be the Gestapo and do anything they want.”

While national security agencies have become more aggressive in monitoring employee communications, such tactics are unusual at domestic agencies that do not handle classified information.

Much of the material the F.D.A. was eager to protect centered on trade secrets submitted by drug and medical device manufacturers seeking approval for products. Particular issues were raised by a March 2010 article in The New York Times that examined the safety concerns about imaging devices and quoted two agency scientists who would come under surveillance, Dr. Robert C. Smith and Dr. Julian Nicholas.

Agency officials saw Dr. Smith as the ringleader, or “point man” as one memo from the agency put it, for the complaining scientists, and the surveillance documents included hundreds of e-mails that he wrote on ways to make their concerns heard. (Dr. Smith and the other scientists would not comment for this article because of their pending litigation.)

Lawyers for GE Healthcare charged that the 2010 article in The Times — written by Gardiner Harris, who would be placed first on the surveillance program’s list of “media outlet actors” — included proprietary information about their imaging devices that may have been improperly leaked by F.D.A. employees.

F.D.A. officials went to the inspector general at the Department of Health and Human Services to seek a criminal investigation into the possible leak, but they were turned down. The inspector general found that there was no evidence of a crime, noting that “matters of public safety” can legally be released to the news media.

Undeterred, agency officials began the electronic monitoring operation on their own.

The software used to track the F.D.A. scientists, sold by SpectorSoft of Vero Beach, Fla., costs as little as $99.95 for individual use, or $2,875 to place the program on 25 computers. It is marketed mainly to employers to monitor their workers and to parents to keep tabs on their children’s computer activities.

“Monitor everything they do,” says SpectorSoft’s Web site. “Catch them red-handed by receiving instant alerts when keywords or phrases are typed or are contained in an e-mail, chat, instant message or Web site.”

The F.D.A. program did all of that and more, as its operators analyzed the results from their early e-mail interceptions and used them to search for new “actors,” develop new keywords to search and map out future areas of concern.

The intercepted e-mails revealed, for instance, that a few of the scientists under surveillance were drafting a complaint in 2010 that they planned to take to the Office of Special Counsel. A short time later, before the complaint was filed, Dr. Smith and another complaining scientist were let go and a third was suspended.

In another case, the intercepted e-mails indicated that Paul T. Hardy, another of the dissident employees, had reapplied for an F.D.A. job “and is being considered for a position.” (He did not get it.)

While the surveillance was intended to protect trade secrets for companies like G.E., it may have done just the opposite. The data posted publicly by the F.D.A. contractor — and taken down late Friday after inquiries by The Times — includes hundreds of confidential documents on the design of imaging devices and other detailed, proprietary information.

The posting of the documents was discovered inadvertently by one of the researchers whose e-mails were monitored. The researcher did Google searches for scientists involved in the case to check for negative publicity that might hinder chances of finding work.

Within a few minutes, the researcher stumbled upon the database.

“I couldn’t believe what I was seeing,” said the researcher, who did not want to be identified because of pending job applications. “I thought: ‘Oh my God, everything is out there. It’s all about us.’ It was just outrageous.”
 
This is why I am my own DOC!
If I have something wrong, I go get it checked out and get the report back (ignoring ANY SORT of drug/treatment talk there). Then I do my OWN research and either deal with it my way (naturally) or go get the needed drugs with the least side effects and the longest track record.

SCREW these docs!
Most are just drug pushers!
The most messed up I ever felt was after I mentioned I didn't like high school and felt depressed.
They gave me Effexor, and I never felt so messed up in my life!
The first time was "free samples," too!

Same story with my GF, and the same drug!
In her case she had reason to be depressed, but she should not have been given pills. Cognitive therapy (talking) was what was needed there.

PISSES ME OFF!


Anyway GREAT posts!
As always!
 
