Research Findings Are Not Necessarily True And Might Be False

Deacon

New Member
here is an article for you bros, with all of the studies and science articles:


http://www.livescience.com/othernews...l_studies.html

Study Finds One-third of Medical Studies are Wrong
By Lindsey Tanner
Associated Press
posted: 14 July 2005
10:16 am ET


CHICAGO (AP) -- New research highlights a frustrating fact about science: What was good for you yesterday frequently will turn out to be not so great tomorrow.

The sobering conclusion came in a review of major studies published in three influential medical journals between 1990 and 2003, including 45 highly publicized studies that initially claimed a drug or other treatment worked.

Subsequent research contradicted results of seven studies -- 16 percent -- and reported weaker results for seven others, an additional 16 percent.

That means nearly one-third of the original results did not hold up, according to the report in Wednesday's Journal of the American Medical Association.

"Contradicted and potentially exaggerated findings are not uncommon in the most visible and most influential original clinical research,'' said study author Dr. John Ioannidis, a researcher at the University of Ioannina in Greece.

Ioannidis examined research in the New England Journal of Medicine, JAMA and Lancet -- prominent journals whose weekly studies help feed a growing public appetite for medical news.

Experts say the report is a reminder to doctors and patients that they should not put too much stock in a single study and understand that treatments often become obsolete with medical advances.

"The crazy part about science and yet the exciting part about science is you almost never have something that's black and white,'' said Dr. Catherine DeAngelis, JAMA's editor-in-chief.

Editors at the New England Journal of Medicine added in a statement: "A single study is not the final word, and that is an important message.''

The refuted studies dealt with a wide range of drugs and treatments. Hormone pills were once thought to protect menopausal women from heart disease but later were shown to do the opposite, and Vitamin E pills have not been shown to prevent heart attacks, contrary to initial results.

Contradictions also included a study that found nitric oxide does not improve survival in patients with respiratory failure, despite earlier claims. And a study suggested an antibody treatment did not improve survival in certain sepsis patients; a smaller previous study found the opposite.

Ioannidis acknowledged an important but not very reassuring caveat: "There's no proof that the subsequent studies ... were necessarily correct.'' But he noted that in all 14 cases in which results were contradicted or softened, the subsequent studies were either larger or better designed. Also, none of the contradicted treatments is currently recommended by medical guidelines.

Not by accident, this week's JAMA also includes a study contradicting previous thinking that stomach-lying helped improve breathing in children hospitalized with acute lung injuries. The new study found they did no better than patients lying on their backs.

DeAngelis said she included the study with Ioannidis' report to highlight the issue. She said the media can complicate matters with misleading or exaggerated headlines about studies.

Ioannidis said scientists and editors should avoid "giving selective attention only to the most promising or exciting results'' and should make the public more aware of the limitations of science.

"The general public should not panic'' about refuted studies, he said. "We all need to start thinking more critically.''


in other words sometimes the guys who have actually used stuff do know what they are talking about
 
Re: For you science types

There is a fundamental reason for this that is interesting, or at least can be depending on what one finds interesting.

It's a really unfortunate thing, because scientists use probability (for example p values, which have to do with how random variation relates to the findings) constantly. Yet plenty of scientists -- and certainly doctors and other professionals -- have a completely wrong "understanding" of what the statistics mean.

If, let's say, something is found "statistically significant" at p = 0.01, most think, "Oooh! That is pretty good! Only a one percent probability that that was just luck -- we can ignore that, this is good stuff!"

In fact the standard for publication usually is p of 0.05 (five percent) or better.

But that is NOT what it means, at all.

What it actually means is that WHEN chance alone is at work, results like these will show up that percentage of the time -- whether 1% or 5% or whatever the statistics calculated -- and it will falsely APPEAR as if there is a real effect.

This has got little to do (think about it!) with what the probability is that THESE results are from chance. Generally speaking the probability is far higher!

It is a natural consequence of statistics that something like the above-reported 1/3 of apparent results are in fact not real at all, but were only the product of chance -- the treatment group happening to be a group of people that mostly got better and the placebo group happening to be a group that mostly didn't.

This will happen by chance much more often than the percentage indicated by the p value, for a couple of reasons.

Yet one can absolutely go all the way through a doctoral science program and never be taught this about the MEANING of the statistics.

Very briefly, and oversimplified because I don't remember the details exactly: suppose the best prior estimate of the likelihood of a real effect is low. Take an untried drug, where previous trials of related drugs tell us that, say, 99% of the time such drugs don't work, so going into the new study the probability that this one is actually effective is best estimated at 1%. Then the p value needed to have even a mere 95% confidence that the effect is real is not p <= 0.05, but roughly p <= 0.0005!!

Which needless to say, scarcely any medical study meets.
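
To put rough numbers on that, here is a quick back-of-the-envelope sketch in Python (my own illustration, not taken from any of the studies quoted in this thread; the 1% prior and the 80% statistical power are assumed numbers, chosen only to show how the arithmetic works):

# Rough Bayesian sketch: how believable is a "significant" result when the
# prior chance that the drug really works is low?
# Assumptions (illustrative only): prior = 1% chance of a real effect,
# power = 80% chance a real effect reaches significance.

def posterior_prob_real(p_value, prior=0.01, power=0.80):
    """Probability the effect is real, given a result significant at p_value."""
    true_pos = prior * power            # real effect, and the study detects it
    false_pos = (1 - prior) * p_value   # no effect, but chance alone crosses the threshold
    return true_pos / (true_pos + false_pos)

for p in (0.05, 0.01, 0.0005):
    print(f"p = {p:<7} -> probability the effect is real: {posterior_prob_real(p):.0%}")

# With a 1% prior, p = 0.05 leaves you only about 14% sure the effect is real;
# you need p in the neighborhood of 0.0005 before you get anywhere near 95%.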

It gets worse.

That is only when considering one study in isolation!

When considering published studies, there is on top of this the selection bias. The detected failures don't get reported. Only the success stories, or apparent success stories, are published, for the most part.

So let's say that there was a class of drugs where in fact NONE of them work, but many studies are done on very many of them.

By chance alone, if many studies are done, in some of them the treatment group will do "significantly" better.

So with the sort of p values that are accepted as being "significant" and with selection bias on top of that, it's no wonder that a high percentage of evidence "showing" that various things supposedly work actually is the product of chance alone.

It's not only no wonder: it's essentially inevitable.
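
To see how inevitable, here is a crude simulation (again my own sketch, with made-up numbers: 1,000 trials of drugs that do nothing at all, 20 patients per arm, the usual p < 0.05 cutoff, and a rough normal approximation instead of a proper t test):

# Crude simulation: 1,000 trials of completely useless drugs, 20 patients per arm.
# Count how many come out "statistically significant" anyway -- those are the
# trials most likely to be written up and published.
import math
import random
import statistics

def approx_two_sample_p(a, b):
    """Very rough two-sided p value using a normal approximation."""
    se = math.sqrt(statistics.variance(a) / len(a) + statistics.variance(b) / len(b))
    z = abs(statistics.mean(a) - statistics.mean(b)) / se
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

random.seed(0)
significant = 0
for _ in range(1000):
    drug = [random.gauss(0, 1) for _ in range(20)]     # no real effect
    placebo = [random.gauss(0, 1) for _ in range(20)]  # same distribution
    if approx_two_sample_p(drug, placebo) < 0.05:
        significant += 1

print(f"{significant} of 1000 trials of useless drugs came out 'significant'")
# Expect a number in the neighborhood of 50.  If mostly the "positive" trials
# get published, the literature on this drug class is built almost entirely on chance.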
 
Re: For you science types

Ioannidis JPA. Contradicted and Initially Stronger Effects in Highly Cited Clinical Research. JAMA 2005;294(2):218-28.

Context Controversy and uncertainty ensue when the results of clinical research on the effectiveness of interventions are subsequently contradicted. Controversies are most prominent when high-impact research is involved.

Objectives To understand how frequently highly cited studies are contradicted or find effects that are stronger than in other similar studies and to discern whether specific characteristics are associated with such refutation over time.

Design All original clinical research studies published in 3 major general clinical journals or high-impact-factor specialty journals in 1990-2003 and cited more than 1000 times in the literature were examined.

Main Outcome Measure The results of highly cited articles were compared against subsequent studies of comparable or larger sample size and similar or better controlled designs. The same analysis was also performed comparatively for matched studies that were not so highly cited.

Results Of 49 highly cited original clinical research studies, 45 claimed that the intervention was effective. Of these, 7 (16%) were contradicted by subsequent studies, 7 others (16%) had found effects that were stronger than those of subsequent studies, 20 (44%) were replicated, and 11 (24%) remained largely unchallenged. Five of 6 highly-cited nonrandomized studies had been contradicted or had found stronger effects vs 9 of 39 randomized controlled trials (P = .008). Among randomized trials, studies with contradicted or stronger effects were smaller (P = .009) than replicated or unchallenged studies although there was no statistically significant difference in their early or overall citation impact. Matched control studies did not have a significantly different share of refuted results than highly cited studies, but they included more studies with "negative" results.

Conclusions Contradiction and initially stronger effects are not unusual in highly cited research of clinical interventions and their outcomes. The extent to which high citations may provoke contradictions and vice versa needs more study. Controversies are most common with highly cited nonrandomized studies, but even the most highly cited randomized trials may be challenged and refuted over time, especially small ones.
 


Re: For you science types

Shrier I, Boivin J-F, Steele RJ, et al. Should Meta-Analyses of Interventions Include Observational Studies in Addition to Randomized Controlled Trials? A Critical Examination of Underlying Principles. American Journal of Epidemiology 2007;166(10):1203-9.

Some authors argue that systematic reviews and meta-analyses of intervention studies should include only randomized controlled trials because the randomized controlled trial is a more valid study design for causal inference compared with the observational study design. However, a review of the principal elements underlying this claim (randomization removes the chance of confounding, and the double-blind process minimizes biases caused by the placebo effect) suggests that both classes of study designs have strengths and weaknesses, and including information from observational studies may improve the inference based on only randomized controlled trials. Furthermore, a review of empirical studies suggests that meta-analyses based on observational studies generally produce estimates of effect similar to those from meta-analyses based on randomized controlled trials. The authors found that the advantages of including both observational studies and randomized studies in a meta-analysis could outweigh the disadvantages in many situations and that observational studies should not be excluded a priori.
 


Re: For you science types

Hackam DG, Redelmeier DA. Translation of Research Evidence From Animals to Humans. JAMA 2006;296(14):1731-2.

To the Editor: Most medical therapies in use today were initially developed and tested in animals, yet animal experiments often fail to replicate when tested in rigorous human trials. We conducted a systematic review to determine how often highly cited animal studies translate into successful human research.

Comment

Only about a third of highly cited animal research translated at the level of human randomized trials. This rate of translation is lower than the recently estimated 44% replication rate for highly cited human studies. Limitations of this review include a focus on highly cited animal studies published in leading journals, which by their positive and highly visible nature may have been more likely to translate than less frequently cited research. In addition, this study had limited power to discern individual predictors of translation.

Nevertheless, we believe these findings have important implications. First, patients and physicians should remain cautious about extrapolating the findings of prominent animal research to the care of human disease. Second, major opportunities for improving study design and methodological quality are available for preclinical research. Finally, poor replication of even high-quality animal studies should be expected by those who conduct clinical research.
 


Re: For you science types

after all of this I do have one observation to add - I have never understood how studies on average men and women, as well as lab animals, equate to steroid-using bodybuilders - you doctors may disagree, but for years many of the well-respected medical experts who haunted the boards always clarified their posts with this - lately it just seems as if everyone posts studies as if they are discussing training athletes - this is not the case, I am sure you will agree - studies on 60 year old hypogonadal men in eastern europe do not relate to a steroid-using bodybuilder


or has this changed??
 
Re: For you science types

It depends on what you are talking about.

For example, suppose a study finds, to our utter surprise, that unlike the 3b-hydroxy-17-keto derivative of testosterone (4-dehydroepiandrosterone), the 3b-hydroxy-17-keto derivative of DHT is a potent anabolic.

It happened to be that they did the study in 60 year old men.

Even though they were 60 year old men, the qualitative fact that this compound can activate androgenic anabolic pathways would have applicability to bb'ers.

It wouldn't show whether it is better or worse, but it would indicate that it will have at least some effect.

Of course judgment has to be applied in assessing how likely it is that findings for one population will apply for another population, and what extent is likely.

But suppose someone takes the path of nobility and insists he isn't interested in any information other than what is acquired from his own body or that of his identical twin -- he simply won't listen to anything else because it might not apply, since the people in the study are not exactly the same as him. He will find himself at a disadvantage compared to those who use judgment and recognize that, while the subjects of a study are not the same as themselves, human physiology and biochemistry don't vary so extremely much that there is unlikely to be any relation at all.
 
Re: For you science types

this is a valid observation - however I do not believe that the majority of bodybuilders/lifters care about the facts much past their own bodies - it is quite evident in their questions and replies on any board - they want to know what will get THEM big and strong or ripped and whatever else they wish to gain - so while I agree with your points about keeping one's mind open to new information, that would require looking past personal gain - and I do not believe our community is looking for that, outside of a very few

expanding your knowledge base is commendable, but for most of the "bros" here and on other boards it is not their focus, and what posting a lot of studies and findings does is simply add to their confusion without giving them simple answers

what you and Doc Scally do is commendable with your answers - but sometimes it would be better to cut the medical talk a bit and explain in layman's terms - I have seen both of you do this from time to time and those are often the best answers
 
Re: For you science types

Agreed: it's already known how to use anabolic steroids and other performance-enhancing drugs successfully and from the standpoint of the bb'er or athlete, that experience base -- if they can sort out the good material from the bad -- is incomparably more productive than is searching through Pubmed abstracts.
 
Why Most Published Research Findings Are False

Lies, Damned Lies, and Medical Science

Much of what medical researchers conclude in their studies is misleading, exaggerated, or flat-out wrong. So why are doctors—to a striking extent—still drawing upon misinformation in their everyday practice? Dr. John Ioannidis has spent his career challenging his peers by exposing their bad science.

By DAVID H. FREEDMAN
NOVEMBER 2010 ATLANTIC MAGAZINE

IN 2001, RUMORS were circulating in Greek hospitals that surgery residents, eager to rack up scalpel time, were falsely diagnosing hapless Albanian immigrants with appendicitis. At the University of Ioannina medical school’s teaching hospital, a newly minted doctor named Athina Tatsioni was discussing the rumors with colleagues when a professor who had overheard asked her if she’d like to try to prove whether they were true—he seemed to be almost daring her. She accepted the challenge and, with the professor’s and other colleagues’ help, eventually produced a formal study showing that, for whatever reason, the appendices removed from patients with Albanian names in six Greek hospitals were more than three times as likely to be perfectly healthy as those removed from patients with Greek names. “It was hard to find a journal willing to publish it, but we did,” recalls Tatsioni. “I also discovered that I really liked research.” Good thing, because the study had actually been a sort of audition. The professor, it turned out, had been putting together a team of exceptionally brash and curious young clinicians and Ph.D.s to join him in tackling an unusual and controversial agenda.

Last spring, I sat in on one of the team’s weekly meetings on the medical school’s campus, which is plunked crazily across a series of sharp hills. The building in which we met, like most at the school, had the look of a barracks and was festooned with political graffiti. But the group convened in a spacious conference room that would have been at home at a Silicon Valley start-up. Sprawled around a large table were Tatsioni and eight other youngish Greek researchers and physicians who, in contrast to the pasty younger staff frequently seen in U.S. hospitals, looked like the casually glamorous cast of a television medical drama. The professor, a dapper and soft-spoken man named John Ioannidis, loosely presided.

One of the researchers, a biostatistician named Georgia Salanti, fired up a laptop and projector and started to take the group through a study she and a few colleagues were completing that asked this question: were drug companies manipulating published research to make their drugs look good? Salanti ticked off data that seemed to indicate they were, but the other team members almost immediately started interrupting. One noted that Salanti’s study didn’t address the fact that drug-company research wasn’t measuring critically important “hard” outcomes for patients, such as survival versus death, and instead tended to measure “softer” outcomes, such as self-reported symptoms (“my chest doesn’t hurt as much today”). Another pointed out that Salanti’s study ignored the fact that when drug-company data seemed to show patients’ health improving, the data often failed to show that the drug was responsible, or that the improvement was more than marginal.

Salanti remained poised, as if the grilling were par for the course, and gamely acknowledged that the suggestions were all good—but a single study can’t prove everything, she said. Just as I was getting the sense that the data in drug studies were endlessly malleable, Ioannidis, who had mostly been listening, delivered what felt like a coup de grâce: wasn’t it possible, he asked, that drug companies were carefully selecting the topics of their studies—for example, comparing their new drugs against those already known to be inferior to others on the market—so that they were ahead of the game even before the data juggling began? “Maybe sometimes it’s the questions that are biased, not the answers,” he said, flashing a friendly smile. Everyone nodded. Though the results of drug studies often make newspaper headlines, you have to wonder whether they prove anything at all. Indeed, given the breadth of the potential problems raised at the meeting, can any medical-research studies be trusted?

That question has been central to Ioannidis’s career. He’s what’s known as a meta-researcher, and he’s become one of the world’s foremost experts on the credibility of medical research. He and his team have shown, again and again, and in many different ways, that much of what biomedical researchers conclude in published studies—conclusions that doctors keep in mind when they prescribe antibiotics or blood-pressure medication, or when they advise us to consume more fiber or less meat, or when they recommend surgery for heart disease or back pain—is misleading, exaggerated, and often flat-out wrong. He charges that as much as 90 percent of the published medical information that doctors rely on is flawed. His work has been widely accepted by the medical community; it has been published in the field’s top journals, where it is heavily cited; and he is a big draw at conferences. Given this exposure, and the fact that his work broadly targets everyone else’s work in medicine, as well as everything that physicians do and all the health advice we get, Ioannidis may be one of the most influential scientists alive. Yet for all his influence, he worries that the field of medical research is so pervasively flawed, and so riddled with conflicts of interest, that it might be chronically resistant to change—or even to publicly admitting that there’s a problem.

Continue Reading: Lies, Damned Lies, and Medical Science - Magazine - The Atlantic
 
Re: Why Most Published Research Findings Are False

Drug companies funding drug research is just as crazy as corporations funding election campaigns.
 
The Truth Wears Off
Is there something wrong with the scientific method?
The decline effect and the scientific method : The New Yorker

DECEMBER 13, 2010
By Jonah Lehrer

On September 18, 2007, a few dozen neuroscientists, psychiatrists, and drug-company executives gathered in a hotel conference room in Brussels to hear some startling news. It had to do with a class of drugs known as atypical or second-generation antipsychotics, which came on the market in the early nineties. The drugs, sold under brand names such as Abilify, Seroquel, and Zyprexa, had been tested on schizophrenics in several large clinical trials, all of which had demonstrated a dramatic decrease in the subjects’ psychiatric symptoms. As a result, second-generation antipsychotics had become one of the fastest-growing and most profitable pharmaceutical classes. By 2001, Eli Lilly’s Zyprexa was generating more revenue than Prozac. It remains the company’s top-selling drug.

But the data presented at the Brussels meeting made it clear that something strange was happening: the therapeutic power of the drugs appeared to be steadily waning. A recent study showed an effect that was less than half of that documented in the first trials, in the early nineteen-nineties. Many researchers began to argue that the expensive pharmaceuticals weren’t any better than first-generation antipsychotics, which have been in use since the fifties. “In fact, sometimes they now look even worse,” John Davis, a professor of psychiatry at the University of Illinois at Chicago, told me.

Before the effectiveness of a drug can be confirmed, it must be tested and tested again. Different scientists in different labs need to repeat the protocols and publish their results. The test of replicability, as it’s known, is the foundation of modern research. Replicability is how the community enforces itself. It’s a safeguard for the creep of subjectivity. Most of the time, scientists know what results they want, and that can influence the results they get. The premise of replicability is that the scientific community can correct for these flaws.

But now all sorts of well-established, multiply confirmed findings have started to look increasingly uncertain. It’s as if our facts were losing their truth: claims that have been enshrined in textbooks are suddenly unprovable. This phenomenon doesn’t yet have an official name, but it’s occurring across a wide range of fields, from psychology to ecology. In the field of medicine, the phenomenon seems extremely widespread, affecting not only antipsychotics but also therapies ranging from cardiac stents to Vitamin E and antidepressants: Davis has a forthcoming analysis demonstrating that the efficacy of antidepressants has gone down as much as threefold in recent decades.

For many scientists, the effect is especially troubling because of what it exposes about the scientific process. If replication is what separates the rigor of science from the squishiness of pseudoscience, where do we put all these rigorously validated findings that can no longer be proved? Which results should we believe? Francis Bacon, the early-modern philosopher and pioneer of the scientific method, once declared that experiments were essential, because they allowed us to “put nature to the question.” But it appears that nature often gives us different answers.

Jonathan Schooler was a young graduate student at the University of Washington in the nineteen-eighties when he discovered a surprising new fact about language and memory. At the time, it was widely believed that the act of describing our memories improved them. But, in a series of clever experiments, Schooler demonstrated that subjects shown a face and asked to describe it were much less likely to recognize the face when shown it later than those who had simply looked at it. Schooler called the phenomenon “verbal overshadowing.”

The study turned him into an academic star. Since its initial publication, in 1990, it has been cited more than four hundred times. Before long, Schooler had extended the model to a variety of other tasks, such as remembering the taste of a wine, identifying the best strawberry jam, and solving difficult creative puzzles. In each instance, asking people to put their perceptions into words led to dramatic decreases in performance.

But while Schooler was publishing these results in highly reputable journals, a secret worry gnawed at him: it was proving difficult to replicate his earlier findings. “I’d often still see an effect, but the effect just wouldn’t be as strong,” he told me. “It was as if verbal overshadowing, my big new idea, was getting weaker.” At first, he assumed that he’d made an error in experimental design or a statistical miscalculation. But he couldn’t find anything wrong with his research. He then concluded that his initial batch of research subjects must have been unusually susceptible to verbal overshadowing. (John Davis, similarly, has speculated that part of the drop-off in the effectiveness of antipsychotics can be attributed to using subjects who suffer from milder forms of psychosis which are less likely to show dramatic improvement.) “It wasn’t a very satisfying explanation,” Schooler says. “One of my mentors told me that my real mistake was trying to replicate my work. He told me doing that was just setting myself up for disappointment.”

Schooler tried to put the problem out of his mind; his colleagues assured him that such things happened all the time. Over the next few years, he found new research questions, got married and had kids. But his replication problem kept on getting worse. His first attempt at replicating the 1990 study, in 1995, resulted in an effect that was thirty per cent smaller. The next year, the size of the effect shrank another thirty per cent. When other labs repeated Schooler’s experiments, they got a similar spread of data, with a distinct downward trend. “This was profoundly frustrating,” he says. “It was as if nature gave me this great result and then tried to take it back.” In private, Schooler began referring to the problem as “cosmic habituation,” by analogy to the decrease in response that occurs when individuals habituate to particular stimuli. “Habituation is why you don’t notice the stuff that’s always there,” Schooler says. “It’s an inevitable process of adjustment, a ratcheting down of excitement. I started joking that it was like the cosmos was habituating to my ideas. I took it very personally.”

Schooler is now a tenured professor at the University of California at Santa Barbara. He has curly black hair, pale-green eyes, and the relaxed demeanor of someone who lives five minutes away from his favorite beach. When he speaks, he tends to get distracted by his own digressions. He might begin with a point about memory, which reminds him of a favorite William James quote, which inspires a long soliloquy on the importance of introspection. Before long, we’re looking at pictures from Burning Man on his iPhone, which leads us back to the fragile nature of memory.

Although verbal overshadowing remains a widely accepted theory—it’s often invoked in the context of eyewitness testimony, for instance—Schooler is still a little peeved at the cosmos. “I know I should just move on already,” he says. “I really should stop talking about this. But I can’t.” That’s because he is convinced that he has stumbled on a serious problem, one that afflicts many of the most exciting new ideas in psychology.

One of the first demonstrations of this mysterious phenomenon came in the early nineteen-thirties. Joseph Banks Rhine, a psychologist at Duke, had developed an interest in the possibility of extrasensory perception, or E.S.P. Rhine devised an experiment featuring Zener cards, a special deck of twenty-five cards printed with one of five different symbols: a card was drawn from the deck and the subject was asked to guess the symbol. Most of Rhine’s subjects guessed about twenty per cent of the cards correctly, as you’d expect, but an undergraduate named Adam Linzmayer averaged nearly fifty per cent during his initial sessions, and pulled off several uncanny streaks, such as guessing nine cards in a row. The odds of this happening by chance are about one in two million. Linzmayer did it three times.

Rhine documented these stunning results in his notebook and prepared several papers for publication. But then, just as he began to believe in the possibility of extrasensory perception, the student lost his spooky talent. Between 1931 and 1933, Linzmayer guessed at the identity of another several thousand cards, but his success rate was now barely above chance. Rhine was forced to conclude that the student’s “extra-sensory perception ability has gone through a marked decline.” And Linzmayer wasn’t the only subject to experience such a drop-off: in nearly every case in which Rhine and others documented E.S.P. the effect dramatically diminished over time. Rhine called this trend the “decline effect.”

Schooler was fascinated by Rhine’s experimental struggles. Here was a scientist who had repeatedly documented the decline of his data; he seemed to have a talent for finding results that fell apart. In 2004, Schooler embarked on an ironic imitation of Rhine’s research: he tried to replicate this failure to replicate. In homage to Rhine’s interests, he decided to test for a parapsychological phenomenon known as precognition. The experiment itself was straightforward: he flashed a set of images to a subject and asked him or her to identify each one. Most of the time, the response was negative—the images were displayed too quickly to register. Then Schooler randomly selected half of the images to be shown again. What he wanted to know was whether the images that got a second showing were more likely to have been identified the first time around. Could subsequent exposure have somehow influenced the initial results? Could the effect become the cause?

The craziness of the hypothesis was the point: Schooler knows that precognition lacks a scientific explanation. But he wasn’t testing extrasensory powers; he was testing the decline effect. “At first, the data looked amazing, just as we’d expected,” Schooler says. “I couldn’t believe the amount of precognition we were finding. But then, as we kept on running subjects, the effect size”—a standard statistical measure—“kept on getting smaller and smaller.” The scientists eventually tested more than two thousand undergraduates. “In the end, our results looked just like Rhine’s,” Schooler said. “We found this strong paranormal effect, but it disappeared on us.”

The most likely explanation for the decline is an obvious one: regression to the mean. As the experiment is repeated, that is, an early statistical fluke gets cancelled out. The extrasensory powers of Schooler’s subjects didn’t decline—they were simply an illusion that vanished over time. And yet Schooler has noticed that many of the data sets that end up declining seem statistically solid—that is, they contain enough data that any regression to the mean shouldn’t be dramatic. “These are the results that pass all the tests,” he says. “The odds of them being random are typically quite remote, like one in a million. This means that the decline effect should almost never happen. But it happens all the time! Hell, it’s happened to me multiple times.” And this is why Schooler believes that the decline effect deserves more attention: its ubiquity seems to violate the laws of statistics. “Whenever I start talking about this, scientists get very nervous,” he says. “But I still want to know what happened to my results. Like most scientists, I assumed that it would get easier to document my effect over time. I’d get better at doing the experiments, at zeroing in on the conditions that produce verbal overshadowing. So why did the opposite happen? I’m convinced that we can use the tools of science to figure this out. First, though, we have to admit that we’ve got a problem.”

In 1991, the Danish zoologist Anders Møller, at Uppsala University, in Sweden, made a remarkable discovery about sex, barn swallows, and symmetry. It had long been known that the asymmetrical appearance of a creature was directly linked to the amount of mutation in its genome, so that more mutations led to more “fluctuating asymmetry.” (An easy way to measure asymmetry in humans is to compare the length of the fingers on each hand.) What Møller discovered is that female barn swallows were far more likely to mate with male birds that had long, symmetrical feathers. This suggested that the picky females were using symmetry as a proxy for the quality of male genes. Møller’s paper, which was published in Nature, set off a frenzy of research. Here was an easily measured, widely applicable indicator of genetic quality, and females could be shown to gravitate toward it. Aesthetics was really about genetics.

In the three years following, there were ten independent tests of the role of fluctuating asymmetry in sexual selection, and nine of them found a relationship between symmetry and male reproductive success. It didn’t matter if scientists were looking at the hairs on fruit flies or replicating the swallow studies—females seemed to prefer males with mirrored halves. Before long, the theory was applied to humans. Researchers found, for instance, that women preferred the smell of symmetrical men, but only during the fertile phase of the menstrual cycle. Other studies claimed that females had more orgasms when their partners were symmetrical, while a paper by anthropologists at Rutgers analyzed forty Jamaican dance routines and discovered that symmetrical men were consistently rated as better dancers.

Then the theory started to fall apart. In 1994, there were fourteen published tests of symmetry and sexual selection, and only eight found a correlation. In 1995, there were eight papers on the subject, and only four got a positive result. By 1998, when there were twelve additional investigations of fluctuating asymmetry, only a third of them confirmed the theory. Worse still, even the studies that yielded some positive result showed a steadily declining effect size. Between 1992 and 1997, the average effect size shrank by eighty per cent.

And it’s not just fluctuating asymmetry. In 2001, Michael Jennions, a biologist at the Australian National University, set out to analyze “temporal trends” across a wide range of subjects in ecology and evolutionary biology. He looked at hundreds of papers and forty-four meta-analyses (that is, statistical syntheses of related studies), and discovered a consistent decline effect over time, as many of the theories seemed to fade into irrelevance. In fact, even when numerous variables were controlled for—Jennions knew, for instance, that the same author might publish several critical papers, which could distort his analysis—there was still a significant decrease in the validity of the hypothesis, often within a year of publication. Jennions admits that his findings are troubling, but expresses a reluctance to talk about them publicly. “This is a very sensitive issue for scientists,” he says. “You know, we’re supposed to be dealing with hard facts, the stuff that’s supposed to stand the test of time. But when you see these trends you become a little more skeptical of things.”

What happened? Leigh Simmons, a biologist at the University of Western Australia, suggested one explanation when he told me about his initial enthusiasm for the theory: “I was really excited by fluctuating asymmetry. The early studies made the effect look very robust.” He decided to conduct a few experiments of his own, investigating symmetry in male horned beetles. “Unfortunately, I couldn’t find the effect,” he said. “But the worst part was that when I submitted these null results I had difficulty getting them published. The journals only wanted confirming data. It was too exciting an idea to disprove, at least back then.” For Simmons, the steep rise and slow fall of fluctuating asymmetry is a clear example of a scientific paradigm, one of those intellectual fads that both guide and constrain research: after a new paradigm is proposed, the peer-review process is tilted toward positive results. But then, after a few years, the academic incentives shift—the paradigm has become entrenched—so that the most notable results are now those that disprove the theory.

Jennions, similarly, argues that the decline effect is largely a product of publication bias, or the tendency of scientists and scientific journals to prefer positive data over null results, which is what happens when no effect is found. The bias was first identified by the statistician Theodore Sterling, in 1959, after he noticed that ninety-seven per cent of all published psychological studies with statistically significant data found the effect they were looking for. A “significant” result is defined as any data point that would be produced by chance less than five per cent of the time. This ubiquitous test was invented in 1922 by the English mathematician Ronald Fisher, who picked five per cent as the boundary line, somewhat arbitrarily, because it made pencil and slide-rule calculations easier. Sterling saw that if ninety-seven per cent of psychology studies were proving their hypotheses, either psychologists were extraordinarily lucky or they published only the outcomes of successful experiments. In recent years, publication bias has mostly been seen as a problem for clinical trials, since pharmaceutical companies are less interested in publishing results that aren’t favorable. But it’s becoming increasingly clear that publication bias also produces major distortions in fields without large corporate incentives, such as psychology and ecology.

While publication bias almost certainly plays a role in the decline effect, it remains an incomplete explanation. For one thing, it fails to account for the initial prevalence of positive results among studies that never even get submitted to journals. It also fails to explain the experience of people like Schooler, who have been unable to replicate their initial data despite their best efforts. Richard Palmer, a biologist at the University of Alberta, who has studied the problems surrounding fluctuating asymmetry, suspects that an equally significant issue is the selective reporting of results—the data that scientists choose to document in the first place. Palmer’s most convincing evidence relies on a statistical tool known as a funnel graph. When a large number of studies have been done on a single subject, the data should follow a pattern: studies with a large sample size should all cluster around a common value—the true result—whereas those with a smaller sample size should exhibit a random scattering, since they’re subject to greater sampling error. This pattern gives the graph its name, since the distribution resembles a funnel.

The funnel graph visually captures the distortions of selective reporting. For instance, after Palmer plotted every study of fluctuating asymmetry, he noticed that the distribution of results with smaller sample sizes wasn’t random at all but instead skewed heavily toward positive results. Palmer has since documented a similar problem in several other contested subject areas. “Once I realized that selective reporting is everywhere in science, I got quite depressed,” Palmer told me. “As a researcher, you’re always aware that there might be some nonrandom patterns, but I had no idea how widespread it is.” In a recent review article, Palmer summarized the impact of selective reporting on his field: “We cannot escape the troubling conclusion that some—perhaps many—cherished generalities are at best exaggerated in their biological significance and at worst a collective illusion nurtured by strong a-priori beliefs often repeated.”
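
A minimal numerical sketch of that funnel idea (an illustration only, not Palmer's data; the sample sizes, the zero true effect, and the report-only-if-significant rule are all assumptions made up for the example):

# Sketch of the funnel-graph logic: many studies of the same effect, at different
# sample sizes.  Large studies cluster near the true value; small studies scatter
# widely -- and if only the "significant" small studies get reported, the
# reported record skews toward positive effects that are not really there.
import math
import random

random.seed(1)
TRUE_EFFECT = 0.0   # assume there is actually nothing to find

def run_study(n):
    """Return the estimated effect and its standard error for n subjects per arm."""
    treated = [random.gauss(TRUE_EFFECT, 1) for _ in range(n)]
    control = [random.gauss(0, 1) for _ in range(n)]
    estimate = sum(treated) / n - sum(control) / n
    return estimate, math.sqrt(2.0 / n)

def average(xs):
    return sum(xs) / len(xs)

all_large, all_small, reported_small = [], [], []
for _ in range(2000):
    n = random.choice([10, 10, 10, 200])     # mostly small studies, a few large ones
    estimate, se = run_study(n)
    if n == 200:
        all_large.append(estimate)
    else:
        all_small.append(estimate)
        if estimate / se > 1.96:             # "significant" and positive -> gets reported
            reported_small.append(estimate)

print(f"true effect:                          {TRUE_EFFECT:+.2f}")
print(f"average effect, all large studies:    {average(all_large):+.2f}")
print(f"average effect, all small studies:    {average(all_small):+.2f}")
print(f"average effect, 'reported' small:     {average(reported_small):+.2f}")
# The selectively reported small studies average a sizable positive "effect"
# even though the true effect is zero -- the lopsided funnel described above.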

Palmer emphasizes that selective reporting is not the same as scientific fraud. Rather, the problem seems to be one of subtle omissions and unconscious misperceptions, as researchers struggle to make sense of their results. Stephen Jay Gould referred to this as the “shoehorning” process. “A lot of scientific measurement is really hard,” Simmons told me. “If you’re talking about fluctuating asymmetry, then it’s a matter of minuscule differences between the right and left sides of an animal. It’s millimetres of a tail feather. And so maybe a researcher knows that he’s measuring a good male”—an animal that has successfully mated—“and he knows that it’s supposed to be symmetrical. Well, that act of measurement is going to be vulnerable to all sorts of perception biases. That’s not a cynical statement. That’s just the way human beings work.”

One of the classic examples of selective reporting concerns the testing of acupuncture in different countries. While acupuncture is widely accepted as a medical treatment in various Asian countries, its use is much more contested in the West. These cultural differences have profoundly influenced the results of clinical trials. Between 1966 and 1995, there were forty-seven studies of acupuncture in China, Taiwan, and Japan, and every single trial concluded that acupuncture was an effective treatment. During the same period, there were ninety-four clinical trials of acupuncture in the United States, Sweden, and the U.K., and only fifty-six per cent of these studies found any therapeutic benefits. As Palmer notes, this wide discrepancy suggests that scientists find ways to confirm their preferred hypothesis, disregarding what they don’t want to see. Our beliefs are a form of blindness.

John Ioannidis, an epidemiologist at Stanford University, argues that such distortions are a serious issue in biomedical research. “These exaggerations are why the decline has become so common,” he says. “It’d be really great if the initial studies gave us an accurate summary of things. But they don’t. And so what happens is we waste a lot of money treating millions of patients and doing lots of follow-up studies on other themes based on results that are misleading.” In 2005, Ioannidis published an article in the Journal of the American Medical Association that looked at the forty-nine most cited clinical-research studies in three major medical journals. Forty-five of these studies reported positive results, suggesting that the intervention being tested was effective. Because most of these studies were randomized controlled trials—the “gold standard” of medical evidence—they tended to have a significant impact on clinical practice, and led to the spread of treatments such as hormone replacement therapy for menopausal women and daily low-dose aspirin to prevent heart attacks and strokes. Nevertheless, the data Ioannidis found were disturbing: of the thirty-four claims that had been subject to replication, forty-one per cent had either been directly contradicted or had their effect sizes significantly downgraded.

The situation is even worse when a subject is fashionable. In recent years, for instance, there have been hundreds of studies on the various genes that control the differences in disease risk between men and women. These findings have included everything from the mutations responsible for the increased risk of schizophrenia to the genes underlying hypertension. Ioannidis and his colleagues looked at four hundred and thirty-two of these claims. They quickly discovered that the vast majority had serious flaws. But the most troubling fact emerged when he looked at the test of replication: out of four hundred and thirty-two claims, only a single one was consistently replicable. “This doesn’t mean that none of these claims will turn out to be true,” he says. “But, given that most of them were done badly, I wouldn’t hold my breath.”

According to Ioannidis, the main problem is that too many researchers engage in what he calls “significance chasing,” or finding ways to interpret the data so that it passes the statistical test of significance—the ninety-five-per-cent boundary invented by Ronald Fisher. “The scientists are so eager to pass this magical test that they start playing around with the numbers, trying to find anything that seems worthy,” Ioannidis says. In recent years, Ioannidis has become increasingly blunt about the pervasiveness of the problem. One of his most cited papers has a deliberately provocative title: “Why Most Published Research Findings Are False.”

The problem of selective reporting is rooted in a fundamental cognitive flaw, which is that we like proving ourselves right and hate being wrong. “It feels good to validate a hypothesis,” Ioannidis said. “It feels even better when you’ve got a financial interest in the idea or your career depends upon it. And that’s why, even after a claim has been systematically disproven”—he cites, for instance, the early work on hormone replacement therapy, or claims involving various vitamins—“you still see some stubborn researchers citing the first few studies that show a strong effect. They really want to believe that it’s true.”

That’s why Schooler argues that scientists need to become more rigorous about data collection before they publish. “We’re wasting too much time chasing after bad studies and underpowered experiments,” he says. The current “obsession” with replicability distracts from the real problem, which is faulty design. He notes that nobody even tries to replicate most science papers—there are simply too many. (According to Nature, a third of all studies never even get cited, let alone repeated.) “I’ve learned the hard way to be exceedingly careful,” Schooler says. “Every researcher should have to spell out, in advance, how many subjects they’re going to use, and what exactly they’re testing, and what constitutes a sufficient level of proof. We have the tools to be much more transparent about our experiments.”

In a forthcoming paper, Schooler recommends the establishment of an open-source database, in which researchers are required to outline their planned investigations and document all their results. “I think this would provide a huge increase in access to scientific work and give us a much better way to judge the quality of an experiment,” Schooler says. “It would help us finally deal with all these issues that the decline effect is exposing.”

Although such reforms would mitigate the dangers of publication bias and selective reporting, they still wouldn’t erase the decline effect. This is largely because scientific research will always be shadowed by a force that can’t be curbed, only contained: sheer randomness. Although little research has been done on the experimental dangers of chance and happenstance, the research that exists isn’t encouraging.

In the late nineteen-nineties, John Crabbe, a neuroscientist at the Oregon Health and Science University, conducted an experiment that showed how unknowable chance events can skew tests of replicability. He performed a series of experiments on mouse behavior in three different science labs: in Albany, New York; Edmonton, Alberta; and Portland, Oregon. Before he conducted the experiments, he tried to standardize every variable he could think of. The same strains of mice were used in each lab, shipped on the same day from the same supplier. The animals were raised in the same kind of enclosure, with the same brand of sawdust bedding. They had been exposed to the same amount of incandescent light, were living with the same number of littermates, and were fed the exact same type of chow pellets. When the mice were handled, it was with the same kind of surgical glove, and when they were tested it was on the same equipment, at the same time in the morning.

The premise of this test of replicability, of course, is that each of the labs should have generated the same pattern of results. “If any set of experiments should have passed the test, it should have been ours,” Crabbe says. “But that’s not the way it turned out.” In one experiment, Crabbe injected a particular strain of mouse with cocaine. In Portland the mice given the drug moved, on average, six hundred centimetres more than they normally did; in Albany they moved seven hundred and one additional centimetres. But in the Edmonton lab they moved more than five thousand additional centimetres. Similar deviations were observed in a test of anxiety. Furthermore, these inconsistencies didn’t follow any detectable pattern. In Portland one strain of mouse proved most anxious, while in Albany another strain won that distinction.

The disturbing implication of the Crabbe study is that a lot of extraordinary scientific data are nothing but noise. The hyperactivity of those coked-up Edmonton mice wasn’t an interesting new fact—it was a meaningless outlier, a by-product of invisible variables we don’t understand. The problem, of course, is that such dramatic findings are also the most likely to get published in prestigious journals, since the data are both statistically significant and entirely unexpected. Grants get written, follow-up studies are conducted. The end result is a scientific accident that can take years to unravel.

This suggests that the decline effect is actually a decline of illusion. While Karl Popper imagined falsification occurring with a single, definitive experiment—Galileo refuted Aristotelian mechanics in an afternoon—the process turns out to be much messier than that. Many scientific theories continue to be considered true even after failing numerous experimental tests. Verbal overshadowing might exhibit the decline effect, but it remains extensively relied upon within the field. The same holds for any number of phenomena, from the disappearing benefits of second-generation antipsychotics to the weak coupling ratio exhibited by decaying neutrons, which appears to have fallen by more than ten standard deviations between 1969 and 2001. Even the law of gravity hasn’t always been perfect at predicting real-world phenomena. (In one test, physicists measuring gravity by means of deep boreholes in the Nevada desert found a two-and-a-half-per-cent discrepancy between the theoretical predictions and the actual data.) Despite these findings, second-generation antipsychotics are still widely prescribed, and our model of the neutron hasn’t changed. The law of gravity remains the same.

Such anomalies demonstrate the slipperiness of empiricism. Although many scientific ideas generate conflicting results and suffer from falling effect sizes, they continue to get cited in the textbooks and drive standard medical practice. Why? Because these ideas seem true. Because they make sense. Because we can’t bear to let them go. And this is why the decline effect is so troubling. Not because it reveals the human fallibility of science, in which data are tweaked and beliefs shape perceptions. (Such shortcomings aren’t surprising, at least for scientists.) And not because it reveals that many of our most exciting theories are fleeting fads and will soon be rejected. (That idea has been around since Thomas Kuhn.) The decline effect is troubling because it reminds us how difficult it is to prove anything. We like to pretend that our experiments define the truth for us. But that’s often not the case. Just because an idea is true doesn’t mean it can be proved. And just because an idea can be proved doesn’t mean it’s true. When the experiments are done, we still have to choose what to believe.
 
I have a study showing that studies that have proven medical studies are often wrong... are wrong themselves.

;)
 
I can't let this one go

two types of errors one can make in doing research

1. alpha errors = showing false differences = rejecting the null hypothesis when it in fact is true

consider the null hypothesis

drug effect = placebo effect
(if we accept this, the drug doesn't work better than sugar pills)

so if we reject it, the maximum probability of being wrong = the alpha value, which is usually set a priori at .05, .01, or .001

but say we got a biased sample by chance: we picked 20 random subjects for the drug treatment and 20 random people for the placebo, but the 20 people we picked for the drug treatment are all studs and studdettes (not reflective of the population) and the 20 people we picked for the placebo are sick fuckers (not reflective of the population). we do the test and it looks like the drug works when in fact it doesn't -- it was just the healthy physiology of the studs and studdettes.

we fucked up -- because of the samples

2. beta error = accepting the null hypothesis when it in fact is wrong = not showing true differences.

imagine the opposite of the previous example -- the sick fuckers got the drug and the healthy people got the placebo. we do the experiment and it looks like the drug doesn't work, but that was just because the drug group were sick fuckers and the placebo group were healthy studs and studdettes
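
for what it's worth, both error types are easy to see in a quick simulation (my own sketch; the alpha of .05, the 20 subjects per group, and the half-standard-deviation true effect are just example numbers):

# Quick sketch of alpha and beta errors with a simple two-group comparison.
# Alpha error: the drug does nothing, but the test says it works.
# Beta error: the drug really works, but the test misses it.
import math
import random
import statistics

Z_CUT = 1.96        # two-sided cutoff for alpha = .05 (normal approximation)
N = 20              # subjects per group
REAL_EFFECT = 0.5   # true benefit of the drug, in standard-deviation units

def looks_significant(true_drug_effect):
    drug = [random.gauss(true_drug_effect, 1) for _ in range(N)]
    placebo = [random.gauss(0, 1) for _ in range(N)]
    se = math.sqrt(statistics.variance(drug) / N + statistics.variance(placebo) / N)
    return abs(statistics.mean(drug) - statistics.mean(placebo)) / se > Z_CUT

random.seed(42)
trials = 5000
alpha_errors = sum(looks_significant(0.0) for _ in range(trials))              # null is actually true
beta_errors = sum(not looks_significant(REAL_EFFECT) for _ in range(trials))   # null is actually false

print(f"alpha errors (drug useless, test says it works): {alpha_errors / trials:.1%}")
print(f"beta errors  (drug works, test misses it):       {beta_errors / trials:.1%}")
# Alpha errors land near 5% by construction.  The beta error rate depends on the
# size of the real effect and the number of subjects -- with only 20 per group and
# a modest effect it is large, which is the unlucky-sample problem in both directions.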
 
Re: OnLine First

Comparison of Effect Sizes Associated With Biomarkers Reported in Highly Cited Individual Articles

Many new biomarkers are continuously proposed as potential determinants of disease risk, prognosis, or response to treatment. The plethora of statistically significant associations increases expectations for improvements in risk appraisal. However, many markers get evaluated only in 1 or a few studies. Among those evaluated more extensively, few reach clinical practice.

This translational attrition requires better study. Are the effect sizes proposed in the literature accurate or overestimated? It is interesting to address this question in particular for biomarker studies that are highly cited. Many of these risk factors are also evaluated in meta-analyses that allow overviews of the evidence. However, some meta-analyses may suffer bias from selective reporting, especially among small data sets; then large studies may provide more unbiased evidence.

Here, researchers examined biomarkers that had been evaluated in at least 1 highly cited study and for which at least 1 meta-analysis had been performed for that same association. They aimed to compare the effect size of these associations in the most highly cited studies vs what was observed in the largest studies and the corresponding meta-analyses.

This empirical evaluation of 35 top-cited biomarker studies suggests that many of these highlighted associations are exaggerated. In some cases, these markers may have no predictive ability, if one trusts the subsequent replication record, in particular the results of the largest studies on the same associations. Less than half of these biomarkers have shown nominally significant results in the largest studies that have been conducted on them, and only 1 in 5 has shown an RR greater than 1.37. There are several true associations, but they correspond predominantly to small or modest effects with uncommon exceptions. Such effects, even if genuine, may have only incremental translational value for clinical use.


Ioannidis JPA, Panagiotou OA. Comparison of Effect Sizes Associated With Biomarkers Reported in Highly Cited Individual Articles and in Subsequent Meta-analyses. JAMA 2011;305(21):2200-10.

Context Many biomarkers are proposed in highly cited studies as determinants of disease risk, prognosis, or response to treatment, but few eventually transform clinical practice.

Objective To examine whether the magnitude of the effect sizes of biomarkers proposed in highly cited studies is accurate or overestimated.

Data Sources We searched ISI Web of Science and MEDLINE until December 2010.

Study Selection We included biomarker studies that had a relative risk presented in their abstract. Eligible articles were those that had received more than 400 citations in the ISI Web of Science and that had been published in any of 24 highly cited biomedical journals. We also searched MEDLINE for subsequent meta-analyses on the same associations (same biomarker and same outcome).

Data Extraction In the highly cited studies, data extraction was focused on the disease/outcome, biomarker under study, and first reported relative risk in the abstract. From each meta-analysis, we extracted the overall relative risk and the relative risk in the largest study. Data extraction was performed independently by 2 investigators.

Results We evaluated 35 highly cited associations. For 30 of the 35 (86%), the highly cited studies had a stronger effect estimate than the largest study; for 3 the largest study was also the highly cited study; and only twice was the effect size estimate stronger in the largest than in the highly cited study. For 29 of the 35 (83%) highly cited studies, the corresponding meta-analysis found a smaller effect estimate. Only 15 of the associations were nominally statistically significant based on the largest studies, and of those only 7 had a relative risk point estimate greater than 1.37.

Conclusion Highly cited biomarker studies often report larger effect estimates for postulated associations than are reported in subsequent meta-analyses evaluating the same associations.
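
To make the comparison in the abstract concrete, here is a sketch (with invented numbers, not data from the paper) of the standard fixed-effect, inverse-variance pooling a meta-analysis uses, set against a hypothetical highly cited first report and the largest follow-up study:

```python
# Invented example (NOT data from Ioannidis & Panagiotou 2011): pool log relative
# risks with fixed-effect inverse-variance weights and compare the pooled RR and
# the largest study's RR against a hypothetical highly cited early report.
import math

# (relative risk, approximate standard error of the log RR) -- all made up
highly_cited = (2.10, 0.30)                                  # small, early, highly cited
later_studies = [(1.40, 0.20), (1.15, 0.12), (1.05, 0.08)]   # larger follow-ups
all_studies = [highly_cited] + later_studies

def pooled_rr(studies):
    """Fixed-effect inverse-variance pooled RR from (RR, SE of log RR) pairs."""
    weights = [1.0 / se ** 2 for _, se in studies]
    log_rrs = [math.log(rr) for rr, _ in studies]
    pooled_log = sum(w * lr for w, lr in zip(weights, log_rrs)) / sum(weights)
    return math.exp(pooled_log)

largest_rr = min(all_studies, key=lambda s: s[1])[0]   # smallest SE ~ largest study

print(f"highly cited RR : {highly_cited[0]:.2f}")
print(f"largest study RR: {largest_rr:.2f}")
print(f"meta-analysis RR: {pooled_rr(all_studies):.2f}")
```

Because the bigger studies carry far more weight, the pooled estimate ends up much closer to the largest study than to the eye-catching early report, which is the pattern the paper reports for most of the 35 associations.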
 
It’s Science, but Not Necessarily Right
http://www.nytimes.com/2011/06/26/opinion/sunday/26ideas.html

June 25, 2011
By CARL ZIMMER

ONE of the great strengths of science is that it can fix its own mistakes. “There are many hypotheses in science which are wrong,” the astrophysicist Carl Sagan once said. “That’s perfectly all right: it’s the aperture to finding out what’s right. Science is a self-correcting process.”

If only it were that simple. Scientists can certainly point with pride to many self-corrections, but science is not like an iPhone; it does not instantly auto-correct. As a series of controversies over the past few months have demonstrated, science fixes its mistakes more slowly, more fitfully and with more difficulty than Sagan’s words would suggest. Science runs forward better than it does backward.

Why? One simple answer is that it takes a lot of time to look back over other scientists’ work and replicate their experiments. Scientists are busy people, scrambling to get grants and tenure. As a result, papers that attract harsh criticism may nonetheless escape the careful scrutiny required if they are to be refuted.

In May, for instance, the journal Science published eight critiques of a controversial paper that it had run in December. In the paper, a team of scientists described a species of bacteria that seemed to defy the known rules of biology by using arsenic instead of phosphorus to build its DNA. Chemists and microbiologists roundly condemned the paper; in the eight critiques, researchers attacked the study for using sloppy techniques and failing to rule out more plausible alternatives.

But none of those critics had actually tried to replicate the initial results. That would take months of research: getting the bacteria from the original team of scientists, rearing them, setting up the experiment, gathering results and interpreting them. Many scientists are leery of spending so much time on what they consider a foregone conclusion, and graduate students are reluctant because they want their first experiments to make a big splash, not confirm what everyone already suspects.

“I’ve got my own science to do,” John Helmann, a microbiologist at Cornell and a critic of the Science paper, told Nature. The most persistent critic, Rosie Redfield, a microbiologist at the University of British Columbia, announced this month on her blog that she would try to replicate the original results — but only the most basic ones, and only for the sake of science’s public reputation. “Scientifically I think trying to replicate the claimed results is a waste of time,” she wrote in an e-mail.

For now, the original paper has not been retracted; the results still stand.

Even when scientists rerun an experiment, and even when they find that the original result is flawed, they still may have trouble getting their paper published. The reason is surprisingly mundane: journal editors typically prefer to publish groundbreaking new research, not dutiful replications.

In March, for instance, Daryl Bem, a psychologist at Cornell University, shocked his colleagues by publishing a paper in a leading scientific journal, The Journal of Personality and Social Psychology, in which he presented the results of experiments showing, he claimed, that people’s minds could be influenced by events in the future, as if they were clairvoyant.

Three teams of scientists promptly tried to replicate his results. All three teams failed. All three teams wrote up their results and submitted them to The Journal of Personality and Social Psychology. And all three teams were rejected — but not because their results were flawed. As the journal’s editor, Eliot Smith, explained to The Psychologist, a British publication, the journal has a longstanding policy of not publishing replication studies. “This policy is not new and is not unique to this journal,” he said.

As a result, the original study stands.

Even when follow-up studies manage to see the light of day, they still don’t necessarily bring matters to a close. Sometimes the original authors will declare the follow-up studies to be flawed and refuse to retract their paper. Such a standoff is now taking place over a controversial claim that chronic fatigue syndrome is caused by a virus. In October 2009, the virologist Judy Mikovits and colleagues reported in Science that people with chronic fatigue syndrome had high levels of a virus called XMRV. They suggested that XMRV might be the cause of the disorder.

Several other teams have since tried — and failed — to find XMRV in people with chronic fatigue syndrome. As they’ve published their studies over the past year, skepticism has grown. The editors of Science asked the authors of the XMRV study to retract their paper. But the scientists refused; Ms. Mikovits declared that a retraction would be “premature.” The editors have since published an “editorial expression of concern.”

Once again, the result still stands.

But perhaps not forever. Ian Lipkin, a virologist at Columbia University who is renowned in scientific circles for discovering new viruses behind mysterious outbreaks, is also known for doing what he calls “de-discovery”: intensely scrutinizing controversial claims about diseases.

Last September, Mr. Lipkin laid out several tips for effective de-discovery in the journal Microbiology and Molecular Biology Reviews. He recommended engaging other scientists — including those who published the original findings — as well as any relevant advocacy groups (like those for people suffering from the disease in question). Together, everyone must agree on a rigorous series of steps for the experiment. Each laboratory then carries out the same test, and then all the results are gathered together.

At the request of the National Institutes of Health, Mr. Lipkin is running just such a project with Ms. Mikovits and other researchers to test the link between viruses and chronic fatigue, based on a large-scale study of 300 subjects. He expects results by the end of this year.

This sort of study, however, is the exception rather than the rule. If the scientific community put more value on replication — by setting aside time, money and journal space — science would do a better job of living up to Carl Sagan’s words.
 
Re: For you science types

this is a valid observation - however I do not believe that the majority of bodybuilders/lifters care about the facts much past their own bodies - it is quite evident in their questions and replies on any board - they want to know what will get THEM big and strong or ripped and whatever else they wish to gain - so while I agree with your points about keeping one's mind open to new information, that would require looking past personal gain - and I do not believe our community is looking for that outside of a very few

expanding your knowledge base is commendable, but for most of the "bros" here and on other boards it is not their focus, and posting a lot of studies and findings simply adds to their confusion without giving them simple answers

what you and Doc Scally do with your answers is commendable - but sometimes it would be better to cut the medical talk a bit and explain things in layman's terms - I have seen both of you do this from time to time and those are often the best answers

I agree completely with the last statement you made. While there are many here who can follow what the Docs are saying, many of us can't, and while I appreciate all the studies that are posted, I actually lose interest in the reading because it all looks Greek to me.
 