Open Access

Does reductio ad absurdum have a place in evidence-based medicine?

BMC Medicine 2014, 12:106

DOI: 10.1186/1741-7015-12-106

Received: 23 May 2014

Accepted: 23 May 2014

Published: 25 June 2014


In a meta-analysis published in BMC Medicine, we explored whether evidence-based medicine can actually be sure that ‘sucrose = sucrose’ in the treatment of depression. The paper, built on a reductio ad absurdum, addressed an epistemological question using a ‘scientific’ approach, which may be disconcerting, as Cipriani and Geddes’ commentary suggests. However, most papers are based upon a mixture of observations and discussions about sense and meaning. Ultimately, there is nothing more than a story, told with words or numbers. Randomised controlled trials provide information about average patients who do not exist. Their results ignore an entire segment of therapeutics that plays a crucial role, namely care. This information is usually set out using a ‘grammar’ that is ambiguous, since statistical tests of hypotheses raise epistemological questions that are not yet solved. Moreover, many of these stories remain untold, and unpublished. For these reasons, evidence-based medicine is a vehicle for many paradoxes and controversies. Reductio ad absurdum can be useful in precisely this case, to underline how and why the medical literature can sometimes give an impression of absurdity of this sort. Even if the data analysis in our paper was rather rhetorical, we agree that it should comply with the classic standards of reporting, and we provide the important extra data that Cipriani and Geddes have requested.



Keywords: Epistemology, Evidence-based medicine, Publication bias, Reductio ad absurdum, Statistics


In our recent paper published in BMC Medicine [1], we explored in a thought-provoking manner whether science can actually be sure that ‘sucrose = sucrose’ in the treatment of major depressive disorder. That paper was an original piece of research based upon a reductio ad absurdum, but it was, at heart, an essay about science and care with certain epistemological dimensions. We submitted it to stimulate reflection on the validity of scientific knowledge in medicine, and we are glad that it has worked, prompting a very interesting commentary by Cipriani and Geddes on our published article [2].

Beyond addressing a complex and controversial issue in the field of antidepressant research, we believe that the form of this unusual paper raises an important question, implicitly suggested by Cipriani and Geddes’ comments: does reductio ad absurdum have a place in evidence-based medicine?

Evidence-based stories

Cipriani and Geddes are right to point out that while our paper addressed an epistemological question, we approached it ‘scientifically’ by conducting a systematic review and meta-analysis. Indeed, this can be seen as a challenge to the comfortable dualism at the basis of modern science: words are for philosophers, numbers for ‘hard’ scientists. We argue strongly for abandoning such a position, which sterilises the debate, especially in the field of psychiatric research [3]. To sum up our position on this point: most papers, whether they come from the social and human sciences or from ‘harder’ sciences, mix observation (the core activity of science, present in the methods and results sections) with discussion about sense and meaning (more related to philosophy, and generally present in the discussion section). Ultimately, there is nothing more than a story, told with words or numbers, even if it can sometimes be an evidence-based story.

Evidence-based stories concerning an imaginary average patient

To deal with the question of variability and randomness, randomised controlled trials (RCTs) tell stories about average patients who, unfortunately or hopefully, do not exist in practice. The statistical inferences underpinning RCT conclusions concern expected values of random variables. In more human terms, these inferences compare two or three run-of-the-mill patients (the average), with blurred profiles (the standard deviation). The story told by an RCT focuses on efficacy, sometimes effectiveness, and specifically on the pre-post difference in a very limited aspect of the average patient. It says little about the individual patient’s story that a clinician faces [4], and it ignores an entire segment of therapeutics that plays a crucial role: care that draws on what we might call the patient’s ‘irrationality’, which has no place in mainstream evidence-based stories.
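To make the point concrete with a hedged sketch (the numbers below are hypothetical, not taken from any trial), an average treatment effect can describe no single patient in the sample:

```python
# Hypothetical illustration: the "average patient" of an RCT need not
# resemble any actual patient. Suppose half the sample improves by 10
# points on some outcome scale while the other half worsens by 4 points.
improvers = [10.0] * 50
worseners = [-4.0] * 50
effects = improvers + worseners

# The trial's story is about this expected value ...
average_effect = sum(effects) / len(effects)
print(f"average effect: {average_effect:+.1f}")  # prints "average effect: +3.0"

# ... yet no individual in the sample experienced anything like it.
nobody_is_average = all(e != average_effect for e in effects)
print(f"any patient with the average effect? {not nobody_is_average}")
```

The mean improvement of +3 points is a perfectly valid summary of the sample, yet it matches the trajectory of none of its 100 members.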

Stories with ambiguous grammar

Natural languages are mainly bottom-up constructions, which means that stories have more or less the same meaning for everybody who reads them. This is not true of evidence-based stories, which rely on a top-down grammar that is substantially misunderstood. Indeed, statistical tests of hypotheses raise epistemological questions that are not yet solved [5]. Statistical tests can be used for ‘behavioural inference’ (the perspective proposed by Neyman and Pearson) or for ‘inductive inference’ (the perspective of Fisher). In the first case, a pre-specified type I error rate governs a binary decision; in the second, the P-value is read as a graded measure of evidence. Unfortunately, most clinicians and non-statistician researchers habitually mix the two approaches, which blurs the conclusions of much biomedical research in general, and of RCTs in particular. The importance of significance testing in science is surely overstated, with a fallacious tendency to deflect attention from the actual size of an effect [6, 7] and a dramatic suggestive power. Translated into clinical practice, statistical significance becomes an argument from authority.
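The difference between the two readings can be sketched on simulated data (a hypothetical illustration, not part of the original meta-analysis; the effect size, sample sizes and alpha are assumptions made here for demonstration):

```python
# Illustrative sketch: the same test statistic read in two ways --
# Neyman-Pearson (fixed alpha, binary decision) versus Fisher
# (P-value as a graded measure of evidence).
import math
import random
import statistics

random.seed(0)

# Two simulated treatment arms (hypothetical outcome scores).
control = [random.gauss(50, 10) for _ in range(100)]
treated = [random.gauss(53, 10) for _ in range(100)]

# Welch-style t statistic computed from first principles.
m1, m2 = statistics.mean(control), statistics.mean(treated)
v1, v2 = statistics.variance(control), statistics.variance(treated)
n1, n2 = len(control), len(treated)
t = (m2 - m1) / math.sqrt(v1 / n1 + v2 / n2)

# Fisher: report the (approximate, normal-based) two-sided P-value
# and let the reader weigh it as evidence.
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))

# Neyman-Pearson: pre-register alpha and emit only a binary decision,
# discarding the graded information the P-value carried.
alpha = 0.05
decision = "reject H0" if p_value < alpha else "retain H0"

print(f"t = {t:.2f}, P = {p_value:.4f}, decision at alpha={alpha}: {decision}")
```

The point of the sketch is that both readings start from the same number: mixing them, as often happens, amounts to making a Neyman–Pearson decision while talking about it in Fisherian, evidential terms.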

Family secrets and storytellers

Most families have secrets, but their kind and importance vary. When evidence-based stories are told, the storytellers listed at the top of the paper are not always a good indication of who actually wrote them, since ghost-writing is endemic [8]. Moreover, many evidence-based stories remain untold, especially when they do not convey a positive picture [9]. The highly competitive environment for research funding and career advancement, authors’ own ideological conflicts of interest, and the tendency of editors, reviewers and readers to be ‘scientific novelty-seekers’ all select for the newest and most attractive storylines [10]. The recent challenge by AbbVie and InterMune to the European Medicines Agency’s plans for sharing data from clinical trials [11] illustrated how untold stories can be explicitly treated as trade secrets.

‘Family secrets’ can sometimes cause illness, and in the case of evidence-based stories, scientific remedies cannot be the cure. Europe has recently taken an important step forward by voting in favour of a law that will require all clinical drug trials in Europe to be registered and their results reported in a public database [12].

Absurd stories and reductio ad absurdum

For these reasons, and despite a grammar based upon probability, evidence-based medicine boasts more than its fair share of improbable stories. All the ingredients listed above are present in the antidepressant literature, resulting in many paradoxes and controversies based upon different ways of telling the same stories [13, 14]. Reductio ad absurdum can be useful in precisely this case, to underline how and why the medical literature can give an impression of absurdity of this sort. In evidence-based medicine, this mode of reasoning is usually confined to the Christmas issues of general journals. This curious ritual probably serves as an outlet, authorising a return of evidence-based medicine’s repressed material in a socially acceptable manner.


We are glad that the BMC Medicine editors and reviewers accepted our improbable story in which the data analysis was in fact rather rhetorical (as indeed are all evidence-based stories). We agree that it should nonetheless comply with the classic standards of story-telling [15] and we are pleased to provide the important extra data that Cipriani and Geddes asked for in two files (Additional files 1 and 2).

Authors’ information

NF is a psychiatrist who works as a chief resident at the Department of Psychiatry of the University of Rennes. His main research focuses on methodological peculiarities of the evaluation of treatments in psychiatry. FB is a psychiatrist and professor in biostatistics at the Université Paris-Sud. He works on subjective measurements and more generally in innovative methods that can be used to grasp the patient’s perspective.



Abbreviations

RCT: Randomised controlled trial.



We thank Angela Swaine Verdier for revising the English. This paper was supported by the Institut National de la Santé et de la Recherche Médicale (INSERM). The sponsor had no role in the preparation, review or approval of the manuscript.

Authors’ Affiliations

INSERM, U669 Maison de Solenn
Centre d’Investigation Clinique CIC-P INSERM 0203, Hôpital de Pontchaillou, Centre Hospitalier Universitaire de Rennes et Université de Rennes 1
Centre Hospitalier Guillaume Régnier, Service Hospitalo-Universitaire de Psychiatrie
Université Paris-Sud, Université Paris Descartes
Département de Santé Publique, AP-HP, Hôpital Paul Brousse


  1. Naudet F, Millet B, Charlier P, Reymann JM, Maria AS, Falissard B: Which placebo to cure depression? A thought-provoking network meta-analysis. BMC Med. 2013, 11: 230. doi:10.1186/1741-7015-11-230.
  2. Cipriani A, Geddes JR: Placebo for depression? We need to improve the quality of scientific information but also reject too simplistic approaches or ideological nihilism. BMC Med. 2014, 12: 105.
  3. Falissard B, Revah A, Yang S, Fagot-Largeault A: The place of words and numbers in psychiatric research. Philos Ethics Humanit Med. 2013, 8: 18. doi:10.1186/1747-5341-8-18.
  4. Cartwright N: A philosopher’s view of the long road from RCTs to effectiveness. Lancet. 2011, 377: 1400-1401. doi:10.1016/S0140-6736(11)60563-1.
  5. Lehmann EL: The Fisher, Neyman-Pearson theories of testing hypotheses: one theory or two? In: Selected Works of EL Lehmann. Edited by Rojo J. New York: Springer US; 2012: 201-208.
  6. Nuzzo R: Scientific method: statistical errors. Nature. 2014, 506: 150-152. doi:10.1038/506150a.
  7. Johnson VE: Revised standards for statistical evidence. Proc Natl Acad Sci U S A. 2013, 110: 19313-19317. doi:10.1073/pnas.1313476110.
  8. Collier R: Prevalence of ghostwriting spurs calls for transparency. CMAJ. 2009, 181: E161-E162. doi:10.1503/cmaj.109-3036.
  9. Joober R, Schmitz N, Annable L, Boksa P: Publication bias: what are the challenges and can they be overcome? J Psychiatry Neurosci. 2012, 37: 149-152. doi:10.1503/jpn.120065.
  10. Turner EH: Publication bias, with a focus on psychiatry: causes and solutions. CNS Drugs. 2013, 27: 457-468. doi:10.1007/s40263-013-0067-9.
  11. Groves T, Godlee F: The European Medicines Agency’s plans for sharing data from clinical trials. BMJ. 2013, 346: f2961. doi:10.1136/bmj.f2961.
  12. European Parliament: Clinical trials: clearer rules, better protection for patients [press release]. 2014.
  13. Davis JM, Giakas WJ, Qu J, Prasad P, Leucht S: Should we treat depression with drugs or psychological interventions? A reply to Ioannidis. Philos Ethics Humanit Med. 2011, 6: 8. doi:10.1186/1747-5341-6-8.
  14. Ioannidis JP: Effectiveness of antidepressants: an evidence myth constructed from a thousand randomized trials? Philos Ethics Humanit Med. 2008, 3: 14. doi:10.1186/1747-5341-3-14.
  15. Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gotzsche PC, Ioannidis JP, Clarke M, Devereaux PJ, Kleijnen J, Moher D: The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. PLoS Med. 2009, 6: e1000100. doi:10.1371/journal.pmed.1000100.


© Naudet and Falissard; licensee BioMed Central Ltd. 2014

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver applies to the data made available in this article, unless otherwise stated.