Bad science methods
Bad science is a term for practices in the scientific publication system that amount to intentional or unintentional fraud and misconduct.[1]
Some scientists and others use these methods to maintain their position in the scientific community or to obtain research grants,[2] which leads to contradictory findings and, in some cases, for example in medicine, can result in the introduction of dangerous drugs or harmful therapies.[3] Journalists and popular media channels, such as science blogs and websites, also often misinterpret or exaggerate the results of studies to make them more attractive to the public.[4] The peer review system does not prevent misleading publications.[5] Publication bias is one of the most common and most important factors distorting research outcomes.[6]
Prevalence
Research suggests that bad science practices are common and widespread and, in certain situations, a source of serious problems.[7] Misleading data can, for example, lead to the approval of ineffective or dangerous medicines. In 2005 John Ioannidis examined the forty-nine most cited, high-impact studies published in several major medical journals between 1990 and 2003. Of the forty-five that claimed a medical intervention was effective, "7 (16%) were contradicted by subsequent studies, 7 others (16%) had found effects that were stronger than those of subsequent studies, 20 (44%) were replicated, and 11 (24%) remained largely unchallenged".[8] Only twenty of the studies he checked were later replicated, which means that more than half of single published studies of medical interventions are untrustworthy, even randomized trials.[9][10]
In 2009 Daniele Fanelli reviewed surveys on falsification and fabrication in science.[11] Nearly 2% of scientists admitted having committed a serious form of misconduct at least once, and about 34% admitted other questionable research practices, confirming the findings of previous surveys.[12][13][14] When asked about colleagues' behavior, however, respondents reported that about 14% had falsified data and up to 72% had engaged in other questionable research practices. Most misconduct is probably committed by male scientists.[15]
The situation is similar in forensic science: in the United States, for example, much of the work performed by crime laboratories suffers from flawed analysis, contamination, or outright dishonesty.[16]
Some research argues that, because of small participant samples, 79% of published articles in neuroscience are misleading, rising to 92% in the brain-imaging field.[17]
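To illustrate why small samples undermine reliability, the sketch below (a minimal illustration, not code from the cited paper; the effect size and sample sizes are assumed) estimates the power of a two-sample comparison using the normal approximation.

```python
import math

def normal_cdf(x: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def approx_power(effect_size: float, n_per_group: int) -> float:
    """Approximate power of a two-sided two-sample test at alpha = 0.05
    (normal approximation; effect_size is Cohen's d)."""
    z_crit = 1.96  # two-sided 5% critical value
    z_effect = effect_size / math.sqrt(2.0 / n_per_group)
    return (1.0 - normal_cdf(z_crit - z_effect)) + normal_cdf(-z_crit - z_effect)

# With an assumed medium effect (d = 0.5), 15 participants per group (a size
# typical of the small studies criticized above) gives power of only about 0.28,
# whereas 100 participants per group give about 0.94.
print(round(approx_power(0.5, 15), 2))   # ~0.28
print(round(approx_power(0.5, 100), 2))  # ~0.94
```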
Nearly all papers in economics could be wrong because of publication bias and other questionable practices.[18]
In 2013 Randall and Gibson examined ninety-four studies of the ethical beliefs and behavior of organizational members and found that the "majority of empirical research articles expressed no concern for the reliability or validity of measures, were characterized by low response rates, used convenience samples, and did not offer a theoretic framework, hypotheses, or a definition of ethics".[19] Fewer than half of the articles provided full methodological detail.
More than half of biostatisticians know of fraudulent projects in medical research.[20]
Another bad science practice is withholding raw data. Scientists often do not share their data when the evidence is weak.[21] About 30% of articles published in high-impact journals do not make raw data available.[22][23]
Surrogates in research
In many studies scientists use surrogates and generalize the observed effect to the whole population, even though this practice is questionable.[24] In social science and psychology, students are a popular surrogate, and some research finds that outcomes from studies with student participants differ from those with non-students.[25][26] Students can be considered a good surrogate only in specific situations.[27]
In some trials the participants do not form a representative sample of the whole population; many are homeless or poor, or addicted to alcohol or drugs.[28] In medical research, especially in pharmaceutics, a variety of mammals are used as surrogates for humans. Mammalian physiology differs across species, and a drug that looks safe in animals can be dangerous for humans. The best-known example is TGN1412, which nearly killed several volunteers in 2006 even though a much larger dose had been safe in animals.[29] Around three out of four drugs are rejected in phase I human trials despite having looked safe in animal studies.[30] Research indicates that animal models can fail to mimic human clinical disease.[31] One systematic review revealed that only about one third of influential animal drug studies were later translated into approved treatments for people, partly because the animal trials were performed with less care.[32] Most animal trials lack methodological quality, for example in the testing of stroke drugs.[33]
Poor statistical standards
In the social sciences, about 17-25% of all findings are probably false because of poor statistical standards[34][35] and publication bias.[36] In psychology, 15% of published papers may contain a statistical error that changes the paper's conclusion.[37]
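A rough sense of how weak thresholds and low power translate into false findings can be given by a Bayes-style calculation in the spirit of the papers cited above; the sketch below is illustrative only, and the prior, power, and alpha values are assumed rather than taken from those studies.

```python
def false_finding_rate(prior_true: float, power: float, alpha: float) -> float:
    """Share of 'significant' results that are false positives, given the
    proportion of tested hypotheses that are actually true (prior_true),
    the average statistical power, and the significance threshold alpha."""
    true_positives = prior_true * power
    false_positives = (1.0 - prior_true) * alpha
    return false_positives / (true_positives + false_positives)

# Assumed illustrative values: if 1 in 4 tested hypotheses is true, studies run
# at 50% power, and alpha = 0.05, roughly 23% of "positive" findings are false.
print(round(false_finding_rate(prior_true=0.25, power=0.5, alpha=0.05), 2))  # 0.23
```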
In genetics, many studies try to link specific genes to diseases or to other biological and even psychological factors such as personality traits. Such findings should be treated with caution because of the complexity of the possible factors involved.[38][39][40] Mathematical analyses have shown that small genetic studies suffer from low statistical quality[41] and are less replicable.[42] In 2008 Nicole Allen and colleagues calculated that, in the case of schizophrenia, only about one study in a hundred could be correct in pointing to an association between a gene combination and the disease.[43] Similar odds have been calculated for Alzheimer's disease,[44] Parkinson's disease,[45] several other diseases,[46] and for associations with violence and aggression[47] or race.[48] Even such studies, however, can suffer from inconsistent results and interpretations.[49][50]
Genomic methods fail at predicting human height: research showed that genes correlated with height could explain only 4-6% of the variance, compared with 40% for the 125-year-old technique of averaging the heights of both parents.[51]
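The 125-year-old "averaging" technique referred to above is essentially Galton's mid-parental height. A minimal sketch of that kind of predictor is shown below; the 13 cm sex adjustment and the example heights are conventional illustrative values, not figures from the cited study.

```python
def midparental_height(father_cm: float, mother_cm: float, child_is_male: bool) -> float:
    """Conventional mid-parental ('Victorian') target height: average the parents'
    heights after adjusting for the child's sex by 13 cm."""
    adjustment = 13.0 if child_is_male else -13.0
    return (father_cm + mother_cm + adjustment) / 2.0

# Illustrative example: a son of a 180 cm father and a 165 cm mother
# has a predicted adult height of about 179 cm.
print(midparental_height(180.0, 165.0, child_is_male=True))  # 179.0
```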
The peer review system does not prevent the publication of articles with statistical errors.[52]
Mismeasurement
Measurement is essential to science, but mismeasurement sometimes leads to problems and misleading findings. In 2008 scientists identified a temperature bias when they noticed that oceanic temperature records had been collected with different methodologies in different periods of the twentieth century.[53] In medicine, measurement bias can distort the final results of trials, for example in blood pressure measurement,[54] and in individual cases can lead to misdiagnosis, for example of children's weight and height.[55]
Epidemiological studies are very often misinterpreted because of the large number of possible interactions between confounding factors, yet such studies are the basic reference for health and diet recommendations. For example, one large study found that red meat consumption is associated with an increased risk of all-cause mortality,[56] another large study suggested that processed meat rather than red meat is the culprit,[57] and a third implicated both.[58] For years overweight and obesity were considered a main cause of chronic disease, but some recent studies suggest that overweight is only incidentally correlated with such outcomes and that the main causes are rather lack of physical activity and chronic inflammation.[59]
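The sketch below simulates, with invented numbers, how a confounding factor can create an apparent diet-disease association even when the dietary exposure itself has no effect; it illustrates the general problem and is not a model of any study cited above.

```python
import random

random.seed(0)

# A hidden lifestyle factor drives both the exposure (e.g. a dietary habit)
# and the outcome (e.g. disease), so exposure and outcome correlate even
# though the exposure has no causal effect in this simulation.
n = 100_000
confounder = [random.random() for _ in range(n)]
exposure = [1 if random.random() < c else 0 for c in confounder]
outcome = [1 if random.random() < 0.1 + 0.2 * c else 0 for c in confounder]

def risk(indices):
    """Observed outcome rate within a group of participants."""
    return sum(outcome[i] for i in indices) / len(indices)

exposed = [i for i in range(n) if exposure[i] == 1]
unexposed = [i for i in range(n) if exposure[i] == 0]

# The exposed group shows a clearly higher risk (about 0.23 vs 0.17),
# entirely because of the confounder.
print(round(risk(exposed), 2), round(risk(unexposed), 2))
```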
Conflict of interest
Research shows strong correlations between positive findings in medicine and industry sponsorship,[60] and studies rarely report such conflicts of interest.[61] Some companies have paid university researchers to sign research they did not conduct, so-called ghostwriting, which blurs the real scale of the problem.[62][63] In biomedical research the rate of conflicts of interest is about 30%.[64] Among the roughly 170 psychiatric experts who contributed to the DSM-IV, every one who worked on mood and psychotic disorders had financial ties to manufacturers of psychiatric drugs.[65]
Findings replication
Replication of findings is at the core of modern science. However, studies have found that most published findings are never replicated, and even when they are, it is hard to publish the replication in a high-impact journal.[66] Wrong findings persist in the literature for a long time,[67] and the retraction rate in prestigious journals is lower than should be expected.[68] Because so much is published, scientists struggle to keep up with recent findings.[69]
Real life consequences
The poor quality of published studies and bad science methods can have real-life consequences. In psychology, for example, many individual studies have reported substantial differences between males and females, but more reliable studies, meta-analyses, and reviews clearly show that gender differences are minor and have been exaggerated.[70][71][72] Many individual neuroimaging studies have likewise suggested that gender differences in human psychology can be observed at the level of the brain, but again systematic reviews show that such differences in brain physiology are largely artifacts and essentially do not exist.[73] One study found that reported sex differences in the relative size of the human corpus callosum are greater in research with smaller samples,[74] and another observed that most studies of group differences in corpus callosum size are of poor quality and reach conflicting findings.[75] Nevertheless, such untrustworthy findings can be used to support discriminatory beliefs such as sexism.[76]
In psychology, questionable surrogates, surrogate measures, and small participant samples often lead to false positive findings.[77] Even large reviews and meta-analyses can draw contradictory conclusions from such findings because of publication bias and weak statistical standards.[78][79][80] For example, one recent meta-analytic review concluded that women clearly change their mate preferences across the menstrual cycle,[81] while another meta-analysis observed no such effect.[82]
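How publication bias alone can push a pooled estimate around is easy to see in a toy fixed-effect meta-analysis; the study effects and standard errors below are invented for illustration and are not data from the meta-analyses cited above.

```python
# Invented study results: (effect estimate, standard error).
studies = [
    (0.40, 0.15),   # small "positive" study, statistically significant
    (0.35, 0.16),   # small "positive" study, statistically significant
    (0.05, 0.10),   # larger null study
    (-0.02, 0.09),  # larger null study
    (0.01, 0.08),   # larger null study
]

def pooled_effect(data):
    """Fixed-effect inverse-variance pooled estimate."""
    weights = [1.0 / se ** 2 for _, se in data]
    return sum(w * effect for (effect, _), w in zip(data, weights)) / sum(weights)

# Keep only studies that reached conventional significance (|effect/se| > 1.96),
# mimicking a literature where null results stay in the file drawer.
published_only = [s for s in studies if abs(s[0]) / s[1] > 1.96]

print(round(pooled_effect(studies), 2))         # ~0.08 with every study included
print(round(pooled_effect(published_only), 2))  # ~0.38 with only the significant studies
```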
In the agricultural sciences, one large study showed that organic food contains more nutrients and fewer pesticide residues than conventional food,[83] another found no such benefit,[84] and a third concluded that the evidence is insufficient to draw any strong conclusion.[85]
Some research has found that efficacy outcomes in 50% of drug trials and harm outcomes in 65% of drug trials were incompletely reported, and that scientists tend to exclude inconvenient data.[86][87][88] About 31% of antidepressant studies go unpublished because of publication bias, most of them studies showing the drug to be ineffective,[89] and the overall efficacy of antidepressants is questionable because of the poor quality of the evidence.[90] The effects of such practices can be harmful: some physicians have reported that in Europe 50% of available and approved drugs are useless, 20% are poorly tolerated, and 5% are potentially very dangerous, causing 20,000 deaths yearly in France.[91] Even the efficacy of major drugs cannot be generalized to the whole population; they work in only 25-60% of patients.[92]
See also
- Scientific method
- Junk science
- Antiscience
- Fringe science
- Cargo cult science
- Normative science
- Pathological science
- Scientific misconduct
- Superseded scientific theories
- Publication bias
- Meta-analysis
- Systematic review
References
- ↑ Goldacre, Ben (2008). Bad Science. UK: HarperCollins. ISBN 9780007283194.
- ↑ De Vries, Raymond (2006). "Normal Misbehavior: Scientists Talk about the Ethics of Research". Journal of Empirical Research on Human Research Ethics. doi:10.1525/jer.2006.1.1.43.
- ↑ Goldacre, Ben (2012). Bad Pharma: How Medicine is Broken, And How We Can Fix It. UK: HarperCollins. ISBN 9780007363643.
- ↑ Freedman, David (2010). Wrong. Little, Brown. ISBN 9780316087919.
- ↑ Triggle, Chris; Triggle, David (2007). "What is the future of peer review? Why is there fraud in science? Is plagiarism out of control? Why do scientists do bad things? Is it all a case of:"All that is necessary for the triumph of evil is that good men do nothing?"". Vascular Health and Risk Management. PMID 1994041.
- ↑ Dwan, Kerry; et al. (2008). "Systematic Review of the Empirical Evidence of Study Publication Bias and Outcome Reporting Bias". PLOS ONE. doi:10.1371/journal.pone.0003081.
- ↑ Buchanan, Anne; et al. (2006). "Dissecting complex disease: the quest for the Philosopher's Stone?". International Journal of Epidemiology. doi:10.1093/ije/dyl001.
- ↑ Ioannidis, John (2005). "Contradicted and Initially Stronger Effects in Highly Cited Clinical Research". JAMA. doi:10.1001/jama.294.2.218.
- ↑ Ioannidis, John (2005). "Why Most Published Research Findings Are False". PLOS Medicine. doi:10.1371/journal.pmed.0020124.
- ↑ Carr, Daniel (2008). "When bad evidence happens to good treatments". Regional anesthesia and pain medicine. doi:10.1016/j.rapm.2008.01.005.
- ↑ Fanelli, Daniele (2009). "How Many Scientists Fabricate and Falsify Research? A Systematic Review and Meta-Analysis of Survey Data". PLOS ONE. doi:10.1371/journal.pone.0005738.
- ↑ Martinson, Brian; et al. (2005). "Scientists behaving badly". Nature. doi:10.1038/435737a.
- ↑ Geggie, D. (2001). "A survey of newly appointed consultants' attitudes towards research fraud". Journal of Medical Ethics. doi:10.1136/jme.27.5.344.
- ↑ Gardner, William; et al. (2005). "Authors' reports about research integrity problems in clinical trials". Contemporary Clinical Trials. doi:10.1016/j.cct.2004.11.013.
- ↑ Fang, Ferric; et al. (2013). "Males Are Overrepresented among Life Science Researchers Committing Scientific Misconduct". mBio. doi:10.1128/mBio.00640-12.
- ↑ Fountain, Henry (2009). "Plugging Holes in the Science of Forensics". New York Times.
- ↑ Button, Katherine; et al. (2013). "Power failure: why small sample size undermines the reliability of neuroscience". Nature Reviews Neuroscience. doi:10.1038/nrn3475.
- ↑ De Long, Bradford; Lang, Kevin (1992). "Are all economic hypotheses false?". Journal of Political Economy. doi:10.2307/2138833.
- ↑ Randall, Donna; Gibson, Annetta (2013). "Methodology in Business Ethics Research: A Review and Critical Assessment". Citation Classics from the Journal of Business Ethics. doi:10.1007/978-94-007-4126-3_10.
- ↑ Ranstam, Jonas; et al. (2000). "Fraud in Medical Research: An International Survey of Biostatisticians". Controlled Clinical Trials. doi:10.1016/S0197-2456(00)00069-6.
- ↑ Wicherts, Jelte; et al. (2011). "Willingness to Share Research Data Is Related to the Strength of the Evidence and the Quality of Reporting of Statistical Results". PLOS ONE. doi:10.1371/journal.pone.0026828.
- ↑ Alsheikh-Ali, Alawi; et al. (2011). "Public Availability of Published Research Data in High-Impact Journals". PLOS ONE. doi:10.1371/journal.pone.0024357.
- ↑ Piwowar, Heather (2011). "Who Shares? Who Doesn't? Factors Associated with Openly Archiving Raw Research Data". PLOS ONE. doi:10.1371/journal.pone.0018657.
- ↑ Dipboye, Robert; Flanagan, Michael (1979). "Research settings in industrial and organizational psychology: Are findings in the field more generalizable than in the laboratory?" (PDF). American Psychologist. doi:10.1037/0003-066X.34.2.141.
- ↑ Gordon, Michael; et al. (1986). "The "Science of the Sophomore" Revisited: from Conjecture to Empiricism". Academy of Management Review. doi:10.5465/AMR.1986.4282666.
- ↑ Ashton, Robert (1980). "Students as Surrogates in Behavioural Accounting Research: Some Evidence". Journal of Accounting Research. doi:10.2307/2490389.
- ↑ Greenberg, Jerald (1987). "The College Sophomore as Guinea Pig: Setting the Record Straight". Academy of Management Review. doi:10.5465/AMR.1987.4306516.
- ↑ Elliott, Carl (2008). "Guinea-pigging". The New Yorker.
- ↑ Hanke, Thomas (2006). "Lessons from TGN1412". The Lancet. doi:10.1016/S0140-6736(06)69651-7.
- ↑ Kola, Ismail; Landis, John (2004). "Can the pharmaceutical industry reduce attrition rates?" (PDF). Nature. doi:10.1038/nrd1470.
- ↑ Perel, Pablo; et al. (2007). "Comparison of treatment effects between animal experiments and clinical trials: systematic review". BMJ. doi:10.1136/bmj.39048.407928.BE.
- ↑ Hackam, Daniel; Redelmeier, Donald (2006). "Translation of Research Evidence From Animals to Humans" (PDF). JAMA. doi:10.1001/jama.296.14.1731.
- ↑ Horn, J.; et al. (2001). "Nimodipine in Animal Model Experiments of Focal Cerebral Ischemia. A Systematic Review". Stroke. doi:10.1161/hs1001.096009.
- ↑ Hayden, Erica Check (2013). "Weak statistical standards implicated in scientific irreproducibility". Nature. doi:10.1038/nature.2013.14131.
- ↑ Johnson, Valen (2013). "Revised standards for statistical evidence". PNAS. doi:10.1073/pnas.1313476110.
- ↑ Yong, Ed (2012). "Replication studies: Bad copy". Nature. doi:10.1038/485298a.
- ↑ Bakker, Marjan; Wicherts, Jelte (2011). "The (mis)reporting of statistical results in psychology journals". Behavior Research Methods. doi:10.3758/s13428-011-0089-5.
- ↑ Ioannidis, John; et al. (2009). "Validating, augmenting and refining genome-wide association signals". Nature Reviews Genetics. doi:10.1038/nrg2544.
- ↑ McCarthy, Mark; et al. (2008). "Genome-wide association studies for complex traits: consensus, uncertainty and challenges". Nature Reviews Genetics. doi:10.1038/nrg2344.
- ↑ Hewitt, John (2012). "Editorial Policy on Candidate Gene Association and Candidate Gene-by-Environment Interaction Studies of Complex Traits". Behavior Genetics. doi:10.1007/s10519-011-9504-z.
- ↑ Ioannidis, John; et al. (2003). "Genetic associations in large versus small studies: an empirical assessment" (PDF). The Lancet. doi:10.1016/S0140-6736(03)12516-0.
- ↑ Ioannidis, John; et al. (2001). "Replication validity of genetic association studies". Nature Genetics. doi:10.1038/ng749.
- ↑ Allen, Nicole; et al. (2008). "Systematic meta-analyses and field synopsis of genetic association studies in schizophrenia: the SzGene database" (PDF). Nature Genetics. doi:10.1038/ng.171.
- ↑ Bertram, Lars; et al. (2007). "Systematic meta-analyses of Alzheimer disease genetic association studies: the AlzGene database". Nature Genetics. doi:10.1038/ng1934.
- ↑ Lill, Christina; et al. (2012). "Comprehensive Research Synopsis and Systematic Meta-Analyses in Parkinson's Disease Genetics: The PDGene Database". PLOS Genetics. doi:10.1371/journal.pgen.1002548.
- ↑ Khoury, Muin; et al. (2009). "Genome-Wide Association Studies, Field Synopses, and the Development of the Knowledge Base on Genetic Variation and Human Diseases". American Journal of Epidemiology. doi:10.1093/aje/kwp119.
- ↑ Vassos, E.; et al. (2013). "Systematic meta-analyses and field synopsis of genetic association studies of violence and aggression". Molecular Psychiatry. doi:10.1038/mp.2013.31.
- ↑ Ioannidis, John; et al. (2004). "'Racial' differences in genetic effects for complex diseases". Nature Genetics. doi:10.1038/ng1474.
- ↑ Kavvoura, Fotini; Ioannidis, John (2008). "Methods for meta-analysis in genetic association studies: a review of their potential and pitfalls" (PDF). Human Genetics. doi:10.1007/s00439-007-0445-9.
- ↑ Sagoo, Gurdeep; et al. (2009). "Systematic Reviews of Genetic Association Studies". PLOS Medicine. doi:10.1371/journal.pmed.1000028.
- ↑ Aulchenko, Yurii; et al. (2009). "Predicting human height by Victorian and genomic methods". European Journal of Human Genetics. doi:10.1038/ejhg.2009.5.
- ↑ Godlee, Fiona; et al. (1998). "Effect on the Quality of Peer Review of Blinding Reviewers and Asking Them to Sign Their Reports: A Randomized Controlled Trial". JAMA. doi:10.1001/jama.280.3.237.
- ↑ Forest, Chris; Reynolds, Richard (2008). "Climate change: Hot questions of temperature bias". Nature. doi:10.1038/453601a.
- ↑ Wingfield, David; et al. (2002). "Terminal digit preference and single-number preference in the Syst-Eur trial: influence of quality control" (PDF). Blood Pressure Monitoring. PMID 12131074.
- ↑ Rifas-Shiman, Sheryl; et al. (2005). "Misdiagnosis of Overweight and Underweight Children Younger Than 2 Years of Age Due to Length Measurement Bias". Medscape General Medicine. PMID 1488725.
- ↑ Pan, An; et al. (2012). "Red Meat Consumption and Mortality: Results From 2 Prospective Cohort Studies". Archives of Internal Medicine. doi:10.1001/archinternmed.2011.2287.
- ↑ Micha, Renata (2010). "Red and Processed Meat Consumption and Risk of Incident Coronary Heart Disease, Stroke, and Diabetes Mellitus: A Systematic Review and Meta-Analysis". Circulation. doi:10.1161/CIRCULATIONAHA.109.924977.
- ↑ Larsson, Susanna; Orsini, Nicola (2013). "Red Meat and Processed Meat Consumption and All-Cause Mortality: A Meta-Analysis". American Journal of Epidemiology. doi:10.1093/aje/kwt261.
- ↑ Lavie, Carl (2014). The Obesity Paradox: When Thinner Means Sicker and Heavier Means Healthier. Penguin. ISBN 9780698148512.
- ↑ Bekelman, Justin E.; et al. (2003). "Scope and Impact of Financial Conflicts of Interest in Biomedical Research: A Systematic Review". JAMA.
- ↑ Roseman, Michelle; et al. (2011). "Reporting of Conflicts of Interest in Meta-analyses of Trials of Pharmacological Treatments". JAMA. doi:10.1001/jama.2011.257.
- ↑ Ross, Joseph (2008). "Guest Authorship and Ghostwriting in Publications Related to Rofecoxib: A Case Study of Industry Documents From Rofecoxib Litigation" (PDF). JAMA. doi:10.1001/jama.299.15.1800.
- ↑ Gøtzsche, Peter; et al. (2007). "Ghost Authorship in Industry-Initiated Randomised Trials". PLOS Medicine. doi:10.1371/journal.pmed.0040019.
- ↑ Warner, Teddy; Gluck, John (2003). "What do we really know about conflicts of interest in biomedical research?". Psychopharmacology. doi:10.1007/s00213-003-1657-x.
- ↑ Vedantam, Shankar (2006). "Psychiatric experts found to have financial links to drugmakers / All who worked with mood and psychotic disorders had such ties". Washington Post.
- ↑ Giles, Jim (2006). "The problem with replication". Nature. doi:10.1038/442344a.
- ↑ Tatsioni, Athina; et al. (2007). "Persistence of Contradicted Claims in the Literature". JAMA. doi:10.1001/jama.298.21.2517.
- ↑ Cokol, Murat (2007). "How many scientific papers should be retracted?". EMBO reports. doi:10.1038/sj.embor.7400970.
- ↑ Parolo, Pietro; et al. (2015). "Attention Decay in Science". SSRN. doi:10.2139/ssrn.2575225.
- ↑ Hyde, Janet (2005). "The Gender Similarities Hypothesis" (PDF). American Psychologist. doi:10.1037/0003-066X.60.6.581.
- ↑ Carothers, Bobbi; Reis, Harry (2013). "Men and women are from Earth: Examining the latent structure of gender" (PDF). Journal of Personality and Social Psychology. doi:10.1037/a0030437.
- ↑ Hyde, Janet (2014). "Gender Similarities and Differences". Annual Reviews. doi:10.1146/annurev-psych-010213-115057.
- ↑ Wallentin, Mikkel (2009). "Putative sex differences in verbal abilities and language cortex: A critical review" (PDF). Brain and Language. doi:10.1016/j.bandl.2008.07.001.
- ↑ Bishop, Katherine; Wahlsten, Douglas (1997). "Sex Differences in the Human Corpus Callosum: Myth or Reality?" (PDF). Neuroscience and Biobehavioral Reviews. doi:10.1016/S0149-7634(96)00049-8.
- ↑ Beaton, Alan (1997). "The Relation of Planum Temporale Asymmetry and Morphology of the Corpus Callosum to Handedness, Gender, and Dyslexia: A Review of the Evidence" (PDF). Brain and Language. doi:10.1006/brln.1997.1825.
- ↑ Fine, Cordelia (2010). Delusions of Gender: How Our Minds, Society, and Neurosexism Create Difference. Norton. ISBN 9780393068382.
- ↑ Bakker, Marjan; et al. (2012). "The Rules of the Game Called Psychological Science" (PDF). Perspectives on Psychological Science. doi:10.1177/1745691612459060.
- ↑ Ioannidis, John (2009). "Integration of evidence from multiple meta-analyses: a primer on umbrella reviews, treatment networks and multiple treatments meta-analyses". CMAJ. doi:10.1503/cmaj.081086.
- ↑ Linde, Klaus; Willich, Stefan (2003). "How objective are systematic reviews? Differences between reviews on complementary medicine". JRSM. PMID 539366.
- ↑ Moher, David; et al. (2009). "Preferred Reporting Items for Systematic Reviews and Meta-Analyses: The PRISMA Statement". Annals of Internal Medicine. doi:10.7326/0003-4819-151-4-200908180-00135.
- ↑ Gildersleeve, Kelly; et al. (2014). "Do women's mate preferences change across the ovulatory cycle? A meta-analytic review" (PDF). Psychological Bulletin. doi:10.1037/a0035438.
- ↑ Wood, Wendy; et al. (2014). "Meta-Analysis of Menstrual Cycle Effects on Women's Mate Preferences" (PDF). Emotion Review. doi:10.1177/1754073914523073.
- ↑ Barański, Marian; et al. (2014). "Higher antioxidant and lower cadmium concentrations and lower incidence of pesticide residues in organically grown crops: a systematic literature review and meta-analyses". The British Journal of Nutrition. doi:10.1017/S0007114514001366.
- ↑ Dangour, Alan; et al. (2010). "Nutrition-related health effects of organic foods: a systematic review". The American Journal of Clinical Nutrition. doi:10.3945/ajcn.2010.29269.
- ↑ Smith-Spangler, Crystal; et al. (2012). "Are Organic Foods Safer or Healthier Than Conventional Alternatives?: A Systematic Review" (PDF). Annals of Internal Medicine. doi:10.7326/0003-4819-157-5-201209040-00007.
- ↑ Chan, An-Wen; et al. (2004). "Empirical Evidence for Selective Reporting of Outcomes in Randomized Trials: Comparison of Protocols to Published Articles". JAMA. doi:10.1001/jama.291.20.2457.
- ↑ Chan, An-Wen (2004). "Outcome reporting bias in randomized trials funded by the Canadian Institutes of Health Research" (PDF). CMAJ. doi:10.1503/cmaj.1041086.
- ↑ Chan, An-Wen; Altman, Douglas (2005). "Identifying outcome reporting bias in randomised trials on PubMed: review of publications and survey of authors". BMJ. doi:10.1136/bmj.38356.424606.8F.
- ↑ Turner, Eric; et al. (2008). "Selective Publication of Antidepressant Trials and Its Influence on Apparent Efficacy" (PDF). The New England Journal of Medicine. doi:10.1056/NEJMsa065779.
- ↑ Moncrieff, Joanna (2001). "Are Antidepressants Overrated? A Review of Methodological Problems in Antidepressant Trials". The Journal of Nervous & Mental Disease. doi:10.1097/00005053-200105000-00003.
- ↑ Even, Philippe; Debré, Bernard (2012). Guide des 4000 médicaments utiles, inutiles ou dangereux (in French). Le Cherche Midi. ISBN 9782749130019.
- ↑ Wilkinson, Grant (2005). "Drug Metabolism and Variability among Patients in Drug Response" (PDF). The New England Journal of Medicine. doi:10.1056/NEJMra032424.
This article "Bad science methods" is from Wikipedia. The list of its authors can be seen in its historical. Articles copied from Draft Namespace on Wikipedia could be seen on the Draft Namespace of Wikipedia and not main one.