Meta Analysis as "Evidence"?
James

Posts: 2,827
Joined: Feb 2012
Reputation: 15
Research is essential to separating real therapies from quackery. There are different forms of studies, each with its pros and cons. One form is the "meta-analysis". A meta-analysis is, first of all, just a review that pools the results of multiple studies to come to a conclusion about a hypothesis.
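To make the pooling idea concrete, here is a toy sketch of the inverse-variance weighting a fixed-effect meta-analysis uses to combine studies. The effect sizes and variances below are entirely made up for illustration:

```python
# Toy illustration (hypothetical numbers): how a meta-analysis pools
# several studies into one estimate using inverse-variance weighting.
# More precise studies (smaller variance) get more weight.

def pooled_effect(studies):
    """Fixed-effect pooled estimate: weight each study by 1/variance."""
    weights = [1.0 / var for (_, var) in studies]
    effects = [eff for (eff, _) in studies]
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# (effect size, variance) for three made-up studies
studies = [(0.30, 0.04), (0.10, 0.01), (0.50, 0.09)]
print(round(pooled_effect(studies), 3))  # 0.169
```

Note how the pooled estimate sits closest to the second study, the one with the smallest variance, rather than being a simple average of the three.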

Are studies always good sources of evidence?

Studies can be excellent sources of evidence provided they are done properly and interpreted properly. Unfortunately this is not always the case.

There are various reasons for this, but the bottom line is that studies are often designed to produce the results the tester wishes to derive, often to please the group financing the study. These researchers are not going to bite the hand that feeds them.

Great examples are seen in various studies bashing herbs and supplements. How many times have we looked at these studies only to find that the test subjects were either severely underdosed or overdosed to make the herbs or supplements appear either ineffective or dangerous?

For example, one study claimed the herb Tribulus terrestris was ineffective at raising testosterone. Looking at the study, though, I found that the test subjects were given doses far too low to elicit an effect. In addition, the study only ran for a few weeks, not long enough to raise testosterone even if proper doses had been used.

Another study claimed chromium picolinate could cause cancer. Again, when I looked at the study, I found that the mice had been given 6,000 times the equivalent dose that would be used in humans! By that same reasoning we could conclude that all pharmaceutical drugs are 100% lethal, since 6,000 times the recommended dose of any pharmaceutical drug would kill a person. As we can see, it is very simple to manipulate a study to "prove" whatever it is a person wishes to prove.
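For reference, the standard way to compare animal and human doses is body-surface-area scaling, as described in the FDA guidance on estimating maximum safe starting doses. A minimal sketch, using a hypothetical mouse dose:

```python
# Sketch of body-surface-area dose conversion (per the FDA guidance
# "Estimating the Maximum Safe Starting Dose"): animal doses don't map
# 1:1 onto humans, which is why "thousands of times the human dose"
# findings need scrutiny. Km factors are the guidance's
# weight-to-surface-area constants.

KM = {"mouse": 3, "rat": 6, "human": 37}

def human_equivalent_dose(animal_dose_mg_per_kg, species):
    """Convert an animal dose (mg/kg) to a human-equivalent dose (mg/kg)."""
    return animal_dose_mg_per_kg * KM[species] / KM["human"]

# Hypothetical example: a 1,000 mg/kg mouse dose
print(round(human_equivalent_dose(1000, "mouse"), 1))  # 81.1 mg/kg in humans
```

In other words, a mouse dose overstates the human-equivalent dose by roughly a factor of 12 before any multiplier is even applied, so a "6,000 times" figure is even further from clinical reality than it first appears.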

Another problem with meta-analyses is that the person performing the analysis can simply pick and choose which studies fit their belief system in order to show whatever it is they wish to "prove". For example, let's say the head of a big pharmaceutical company hires a researcher to prove statins lower the risk of cardiovascular events. Are they going to include the studies showing that liver damage from statins can raise cholesterol, or the studies showing that depleting CoQ10 with statins will lead to heart failure? Of course not, because this is not going to please the person(s) funding their research and thus paying their bills.

In fact, why is it that when they talk about the side effects of statins they mention rhabdomyolysis (muscle tissue breakdown), but fail to mention that the heart is also a muscle and prone to the same breakdown? And why do they use the term "cardiovascular events" to refer to heart attack and stroke, but fail to mention the heart failure the statins cause? The wording is carefully chosen to imply that statins protect the heart. Since most of the public has no real idea what "cardiovascular events" entails, they assume it means the drug will protect the heart. But heart failure from statins is not necessarily counted as a "cardiovascular event", so they are playing with words to make their drugs appear safer than they are.
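A quick illustration, with entirely made-up effect sizes, of how excluding unfavorable studies can flip the conclusion of a pooled analysis:

```python
# Made-up effect sizes for five hypothetical studies. Positive values
# represent studies that found benefit; negative values represent
# studies that found harm. Dropping the unfavorable ones reverses the
# pooled conclusion.

all_studies = [0.2, 0.3, -0.4, 0.25, -0.5]
favorable_only = [e for e in all_studies if e > 0]

def mean(xs):
    return sum(xs) / len(xs)

print(round(mean(all_studies), 2))     # -0.03 (slight net harm)
print(round(mean(favorable_only), 2))  # 0.25 (apparent benefit)
```

The underlying data never changed; only the inclusion criteria did. This is exactly why a meta-analysis is only as trustworthy as its study-selection process.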

Next is the problem of interpretation. Studies are very easy to manipulate, especially when it comes to the interpretation of the findings.

This is frequently done with chemotherapy drug testing for example as was reported in the Journal of the American Medical Association (JAMA) a while back. Several doctors had reported that the drug companies were dropping people from studies who died or did not respond to the drugs in order to make the chemotherapy drugs "appear" effective when they really were not.

As another example, there was a drug company claiming in their commercials that taking their aspirin reduced the risk of a "second heart attack" by 50%. Unfortunately for them, a reporter looked into how they came to that conclusion, and it was aired on national TV. The drug company had tested the drug on over 100 people, then carefully chosen 6 cases out of those 100-plus and found that 3 of them had not had a second heart attack by that time. Thus they concluded a 50% reduction in second heart attacks. Of course this is misleading on several grounds. First of all, it did not take into account all of the test subjects, which would have given a more accurate percentage. Secondly, it only established that those three people had not had a second heart attack by that time. What if they had a second heart attack a day later? Those heart attacks would not be counted, since they occurred after the study was completed.
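A toy calculation showing how a chosen subset can inflate the apparent benefit. The full-cohort numbers here are hypothetical, since the company never published them:

```python
# Hypothetical numbers: suppose 100 patients were treated and 40 avoided
# a second heart attack, but the company reports only a hand-picked
# 6-patient subset in which 3 avoided one.

full_cohort = {"no_second_attack": 40, "total": 100}
cherry_picked = {"no_second_attack": 3, "total": 6}

def rate(group):
    """Fraction of the group with no second heart attack."""
    return group["no_second_attack"] / group["total"]

print(f"full cohort: {rate(full_cohort):.0%}")      # full cohort: 40%
print(f"chosen subset: {rate(cherry_picked):.0%}")  # chosen subset: 50%
```

With a subset that small, the headline number is whatever the subset selection makes it, which is why the denominator matters as much as the percentage.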

Another great example I saw recently concerned oleander extract as a treatment for cancer. The article was about oleander extract passing phase 1 FDA trials, but the author tried to mislead readers into thinking the study showed oleander extract was effective against cancer by stating that oleander "apparently can be effective against a wide variety of cancers" according to the study. Why is this claim a problem? First of all, phase 1 studies have NOTHING to do with testing the effectiveness of a compound; phase 1 testing is to determine the safety of dosing. Secondly, had the author actually read the study, he would have found that the effects were minimal. The author claims "7 of the 46 trial participants had their cancers stabilized for 4 or more months". To start with, "stabilized" does not mean cured, and thus does not mean effective. You can stabilize a fractured leg, but this does not mean the fracture is healed and you can go running. So this was a very misleading statement.

There were also side effects to contend with:

This second abstract, though, is more telling:

First of all it shows the author's conflicts of interest with stock ownership, research funding and advisory role.

The most important part of this abstract though is this statement "Of the 15 evaluable pts, 3 had stable disease for > 4 months, with bladder, colorectal, and fallopian tube cancer pts having an 11, 16, and 10% reduction by RECIST respectively. "

"RECIST" means Response Evaluation Criteria In Solid Tumors. This simply is a set of standards by which it is determined if a patient gets better, worse or remains stable during the course of treatment. This is primarily by the measurement of tumor size. One problem that has been reported with RECIST though is that there is no standard for how measurements are done, which has called the whole RECIST program in to question.

Let's assume, though, that there was standardization. If we look at the improvements, they are not that impressive. Over a 4-month period there was only a 10-16% reduction in tumor size in those three patients. A 10-16% reduction over 4 months is nothing, really. Many chemotherapy drugs have a better track record than that, and many people, myself included, consider chemotherapy to be quackery based on its low success rates.
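For context, here is a sketch of the RECIST 1.1 response categories. It is simplified (the real criteria also involve absolute size changes and new lesions), but it shows why reductions of 10-16% only count as "stable disease", not as a response:

```python
# Simplified RECIST 1.1 thresholds: a "partial response" requires at
# least a 30% decrease in the sum of target-lesion diameters, and
# "progressive disease" at least a 20% increase. Anything in between
# is "stable disease". (Real RECIST also has absolute-size and
# new-lesion rules, omitted here.)

def recist_category(percent_change):
    """Classify tumor-burden change (negative = shrinkage) per RECIST 1.1."""
    if percent_change <= -100:
        return "complete response"
    if percent_change <= -30:
        return "partial response"
    if percent_change >= 20:
        return "progressive disease"
    return "stable disease"

for change in (-11, -16, -10):  # the three patients from the abstract
    print(recist_category(change))  # stable disease, all three
```

So even by the trial's own measurement standard, none of the three patients came close to the 30% shrinkage needed to register as an actual response.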

There is another statement in the study that really bothers me, though. First they state "To date 15 pts have received PBI-05204" (oleander extract). So they claim 15 patients to date have received the oleander extract, but then state "Of the 15 evaluable pts". If there were only 15 patients to begin with, why say "of the 15 evaluable patients"? To me this implies that there were more than 15 patients to begin with, and only 15 were evaluable. Is this another case where patients are dropped from studies if they die or do not respond, to make the drug appear effective? In fact, both abstracts claim to be about the first phase 1 trials of PBI-05204, yet one shows 15 participants while the other shows 46.

Regardless, the results were not as impressive as they were portrayed, showing how easy it is to misinterpret findings and mislead the public about the effectiveness of a product.

The above also shows how easy it is to manipulate the findings, especially when conflicts of interest are present.
06-18-2012 12:08 AM