Friday, December 21, 2012
Jim Coetzee Recommends Jim Franklin
Face-to-Face Confrontation in Canada
R. v. N.S., 2012 SCC 72 (Dec. 20, 2012) (4-2-1):
Portion of summary of the majority opinion on the Supreme Court's web site (full opinion is here):
McLachlin C.J. and Deschamps, Fish and Cromwell JJ.:
The issue is when, if ever, a witness who wears a niqab for religious reasons can be required to remove it while testifying. Two sets of Charter rights are potentially engaged — the witness’s freedom of religion and the accused’s fair trial rights, including the right to make full answer and defence. An extreme approach that would always require the witness to remove her niqab while testifying, or one that would never do so, is untenable. The answer lies in a just and proportionate balance between freedom of religion and trial fairness, based on the particular case before the court. A witness who for sincere religious reasons wishes to wear the niqab while testifying in a criminal proceeding will be required to remove it if (a) this is necessary to prevent a serious risk to the fairness of the trial, because reasonably available alternative measures will not prevent the risk; and (b) the salutary effects of requiring her to remove the niqab outweigh the deleterious effects of doing so.
Applying this framework involves answering four questions. First, would requiring the witness to remove the niqab while testifying interfere with her religious freedom? To rely on s. 2(a) of the Charter, N.S. must show that her wish to wear the niqab while testifying is based on a sincere religious belief. The preliminary inquiry judge concluded that N.S.’s beliefs were not sufficiently strong. However, at this stage the focus is on sincerity rather than strength of belief.
The second question is: would permitting the witness to wear the niqab while testifying create a serious risk to trial fairness? There is a deeply rooted presumption in our legal system that seeing a witness’s face is important to a fair trial, by enabling effective cross-examination and credibility assessment. The record before us has not shown this presumption to be unfounded or erroneous. However, whether being unable to see the witness’s face threatens trial fairness in any particular case will depend on the evidence that the witness is to provide. Where evidence is uncontested, credibility assessment and cross-examination are not in issue. Therefore, being unable to see the witness’s face will not impinge on trial fairness. If wearing the niqab poses no serious risk to trial fairness, a witness who wishes to wear it for sincere religious reasons may do so.
If both freedom of religion and trial fairness are engaged on the facts, a third question must be answered: is there a way to accommodate both rights and avoid the conflict between them? The judge must consider whether there are reasonably available alternative measures that would conform to the witness’s religious convictions while still preventing a serious risk to trial fairness.
If no accommodation is possible, then a fourth question must be answered: do the salutary effects of requiring the witness to remove the niqab outweigh the deleterious effects of doing so? Deleterious effects include the harm done by limiting the witness’s sincerely held religious practice. The judge should consider the importance of the religious practice to the witness, the degree of state interference with that practice, and the actual situation in the courtroom – such as the people present and any measures to limit facial exposure. The judge should also consider broader societal harms, such as discouraging niqab-wearing women from reporting offences and participating in the justice system. These deleterious effects must be weighed against the salutary effects of requiring the witness to remove the niqab. Salutary effects include preventing harm to the fair trial interest of the accused and safeguarding the repute of the administration of justice. When assessing potential harm to the accused’s fair trial interest, the judge should consider whether the witness’s evidence is peripheral or central to the case, the extent to which effective cross-examination and credibility assessment of the witness are central to the case, and the nature of the proceedings. Where the liberty of the accused is at stake, the witness’s evidence central and her credibility vital, the possibility of a wrongful conviction must weigh heavily in the balance. The judge must assess all these factors and determine whether the salutary effects of requiring the witness to remove the niqab outweigh the deleterious effects of doing so.
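The majority's framework is, in structure, a sequential decision procedure, which makes it a natural candidate for the kind of schematic rendering an evidence-marshaling tool might record. The sketch below is purely illustrative: the function and its boolean parameters are invented placeholders for fact-specific judicial findings, not computable tests, and nothing in the opinion suggests the balancing can be mechanized.

```python
# Schematic rendering of the R. v. N.S. framework as a decision procedure.
# The four inputs are hypothetical placeholders for judicial findings;
# each stands in for a fact-specific legal judgment, not a computable test.

def may_wear_niqab(sincere_belief: bool,
                   serious_risk_to_fairness: bool,
                   accommodation_available: bool,
                   salutary_outweigh_deleterious: bool) -> bool:
    """Return True if the witness may testify wearing the niqab."""
    if not sincere_belief:
        # Q1: s. 2(a) of the Charter is not engaged; no religious claim
        # stands against an order to remove the niqab.
        return False
    if not serious_risk_to_fairness:
        # Q2: wearing the niqab poses no serious risk to trial fairness.
        return True
    if accommodation_available:
        # Q3: a reasonably available alternative measure resolves the conflict.
        return True
    # Q4: weigh salutary against deleterious effects of ordering removal.
    return not salutary_outweigh_deleterious
```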
Thursday, December 20, 2012
The Brouhaha Over Replicability of Experimental Results in Psychology
Perspectives on Psychological Science (Nov. 2012):
Editors’ Introduction to the Special Section on Replicability in Psychological Science: A Crisis of Confidence?
Hal Pashler, University of California, San Diego, Department of Psychology, 9500 Gilman Drive #0109, La Jolla, CA 92093-0109. E-mail: hpashler@ucsd.edu
Is there currently a crisis of confidence in psychological science reflecting an unprecedented level of doubt among practitioners about the reliability of research findings in the field? It would certainly appear that there is. These doubts emerged and grew as a series of unhappy events unfolded in 2011: the Diederik Stapel fraud case (see Stroebe, Postmes, & Spears, 2012, this issue), the publication in a major social psychology journal of an article purporting to show evidence of extrasensory perception (Bem, 2011) followed by widespread public mockery (see Galak, LeBoeuf, Nelson, & Simmons, in press; Wagenmakers, Wetzels, Borsboom, & van der Maas, 2011), reports by Wicherts and colleagues that psychologists are often unwilling or unable to share their published data for reanalysis (Wicherts, Bakker, & Molenaar, 2011; see also Wicherts, Borsboom, Kats, & Molenaar, 2006), and the publication of an important article in Psychological Science showing how easily researchers can, in the absence of any real effects, nonetheless obtain statistically significant differences through various questionable research practices (QRPs) such as exploring multiple dependent variables or covariates and only reporting these when they yield significant results (Simmons, Nelson, & Simonsohn, 2011).
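The mechanism behind that last finding is easy to see in a small simulation. The sketch below is in the spirit of Simmons and colleagues rather than a reproduction of their procedure; the design and all parameter values are invented for illustration. It measures two correlated dependent variables when there is no true effect and "reports" whichever one comes out significant.

```python
# Minimal simulation of one QRP: two correlated DVs under a true null,
# reporting a result if either t-test reaches p < .05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_per_group, n_sims, r = 20, 10_000, 0.5
cov = [[1, r], [r, 1]]  # the two DVs are correlated at r = .5

false_positives = 0
for _ in range(n_sims):
    a = rng.multivariate_normal([0, 0], cov, size=n_per_group)  # group A
    b = rng.multivariate_normal([0, 0], cov, size=n_per_group)  # group B; no true difference
    p1 = stats.ttest_ind(a[:, 0], b[:, 0]).pvalue
    p2 = stats.ttest_ind(a[:, 1], b[:, 1]).pvalue
    if min(p1, p2) < 0.05:  # report whichever DV "worked"
        false_positives += 1

# The realized rate runs well above the nominal .05 (around .08 here),
# even though no data were fabricated and each test was conventional.
print(f"Nominal alpha: 0.05; realized false-positive rate: "
      f"{false_positives / n_sims:.3f}")
```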
For those psychologists who expected that the embarrassments of 2011 would soon recede into memory, 2012 offered instead a quick plunge from bad to worse, with new indications of outright fraud in the field of social cognition (Simonsohn, 2012), an article in Psychological Science showing that many psychologists admit to engaging in at least some of the QRPs examined by Simmons and colleagues (John, Loewenstein, & Prelec, 2012), troubling new meta-analytic evidence suggesting that the QRPs described by Simmons and colleagues may even be leaving telltale signs visible in the distribution of p values in the psychological literature (Masicampo & Lalande, in press; Simonsohn, 2012), and an acrimonious dust-up in science magazines and blogs centered around the problems some investigators were having in replicating well-known results from the field of social cognition (Bower, 2012; Yong, 2012).
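A minimal sketch of why such telltale signs might appear, using one QRP (optional stopping) as a stand-in for the various practices those analyses examine; all parameter values are invented for illustration.

```python
# Compare the distribution of reported p values with and without a simple
# optional-stopping rule: if p lands just above .05, add subjects and re-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def study(effect=0.3, n=30, top_up=0):
    """Two-group study; optionally top up the sample once if p is 'almost' significant."""
    a, b = rng.normal(effect, 1, n), rng.normal(0, 1, n)
    p = stats.ttest_ind(a, b).pvalue
    if top_up and 0.05 <= p < 0.10:  # "almost significant": collect more data, re-test
        a = np.concatenate([a, rng.normal(effect, 1, top_up)])
        b = np.concatenate([b, rng.normal(0, 1, top_up)])
        p = stats.ttest_ind(a, b).pvalue
    return p

for label, extra in [("fixed-n design", 0), ("optional stopping", 10)]:
    ps = np.array([study(top_up=extra) for _ in range(20_000)])
    share = np.mean((ps >= 0.045) & (ps < 0.05))
    # Optional stopping piles extra mass into the bin just below .05.
    print(f"{label:>17}: share of p in [.045, .05) = {share:.4f}")
```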
Although the very public problems experienced by psychology over this 2-year period are embarrassing to those of us working in the field, some have found comfort in the fact that, over the same period, similar concerns have been arising across the scientific landscape (triggered by revelations that will be described shortly). Some of the suspected causes of unreplicability, such as publication bias (the tendency to publish only positive findings), have been discussed for years; in fact, the phrase file-drawer problem was first coined by a distinguished psychologist several decades ago (Rosenthal, 1979). However, many have speculated that these problems have been exacerbated in recent years as academia reaps the harvest of a hypercompetitive academic climate and an incentive scheme that provides rich rewards for overselling one’s work and few rewards at all for caution and circumspection (see Giner-Sorolla, 2012, this issue). Equally disturbing, investigators seem to be replicating each other’s work even less often than they did in the past, again presumably reflecting an incentive scheme gone askew (a point discussed in several articles in this issue, e.g., Makel, Plucker, & Hegarty, 2012).
The frequency with which errors appear in the psychological literature is not presently known, but a number of facts suggest it might be disturbingly high. Ioannidis (2005) has shown through simple mathematical modeling that any scientific field that ignores replication can easily come to the miserable state wherein (as the title of his most famous article puts it) “most published research findings are false” (see also Ioannidis, 2012, this issue, and Pashler & Harris, 2012, this issue). Meanwhile, reports emerging from cancer research have made such grim scenarios seem more plausible: In 2012, several large pharmaceutical companies revealed that their efforts to replicate exciting preclinical findings from published academic studies in cancer biology were only rarely verifying the original results (Begley & Ellis, 2012; see also Osherovich, 2011; Prinz, Schlange, & Asadullah, 2011).
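Ioannidis's "simple mathematical modeling" is, at bottom, Bayes' rule applied to study outcomes. A worked version under illustrative assumptions follows; the prior-odds and power figures below are invented for the example, not Ioannidis's.

```python
# Post-study probability that a "significant" finding is true (PPV), given
# prior odds R that a tested hypothesis is true, power (1 - beta), and
# significance level alpha:
#     PPV = (1 - beta) * R / ((1 - beta) * R + alpha)

def ppv(prior_odds: float, power: float, alpha: float = 0.05) -> float:
    """Positive predictive value of a statistically significant result."""
    return power * prior_odds / (power * prior_odds + alpha)

# Exploratory field: 1 in 10 tested hypotheses true, modest power.
print(f"R = 1/10, power = 0.35: PPV = {ppv(1/10, 0.35):.2f}")  # ~0.41
# Low prior odds and low power: most "positive" findings are false (PPV < .5).
print(f"R = 1/20, power = 0.20: PPV = {ppv(1/20, 0.20):.2f}")  # ~0.17
```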
Closer to home, the replicability of published findings in psychology may become clearer with the Reproducibility Project (Open Science Collaboration, 2012, this issue; see also Carpenter, 2012). Individuals and small groups of service-minded psychologists are each contributing their time to conducting a replication of a published result following a structured protocol. The aggregated results will provide the first empirical evidence of reproducibility and its predictors. The open project is still accepting volunteers. With small contributions from many of us, the Reproducibility Project will provide an empirical basis for assessing our reproducibility as a field (to find out more, or sign up yourself, visit: http://openscienceframework.org/project/EZcUj/).
This special section brings together a set of articles that analyze the causes and extent of the replicability problems in psychology and ask what can be done about them. The first nine articles focus principally on diagnosis; the following six articles focus principally on treatment. Those readers who need further motivation to change their research practices are referred to the illustration provided by Neuroskeptic (2012). The section ends with a stimulating overview by John Ioannidis, the biostatistician whose work has led the way in exposing problems of replicability and bias across the fields of medicine and the life sciences.
Many of the articles in this special issue make it clear why the replicability problems will not be so easily overcome, as they reflect deep-seated human biases and well-entrenched incentives that shape the behavior of individuals and institutions. Nevertheless, the problems are surely not insurmountable, and the contributors to this special section offer a great variety of ideas for how practices can be improved.
In the opinion of the editors of this special section, it would be a mistake to try to rely upon any single solution to such a complex problem. Rather, it seems to us that psychological science should be instituting parallel reforms across the whole range of academic practices—from journals and journal reviewing to academic reward structures to research practices within individual labs—and finding out which of these prove effective and which do not. We hope that the articles in this special section will not only be stimulating and pleasurable to read, but that they will also promote much wider discussion and, ultimately, collective actions that we can take to make our science more reliable and more reputable. Having found ourselves in the very unwelcome position of being (to some degree at least) the public face for the replicability problems of science in the early 21st century, psychological science has the opportunity to rise to the occasion and provide leadership in finding better ways to overcome bias and error in science generally.
Article Notes
- Declaration of Conflicting Interests The authors declared that they had no conflicts of interest with respect to their authorship or the publication of this article.
References
- Begley, C. G., & Ellis, L. M. (2012). Drug development: Raise standards for preclinical cancer research. Nature, 483, 531–533.
- Bem, D. J. (2011). Feeling the future: Experimental evidence for anomalous retroactive influences on cognition and affect. Journal of Personality and Social Psychology, 100, 407–425.
- Bower, B. (2012). The hot and cold of priming. Science News, 181(10), 26–29.
- Carpenter, S. (2012). Psychology’s bold initiative. Science, 335, 1558–1561.
- Galak, J., LeBoeuf, R. A., Nelson, L. D., & Simmons, J. P. (in press). Correcting the past: Failures to replicate psi. Journal of Personality and Social Psychology.
- Giner-Sorolla, R. (2012). Science or art? How aesthetic standards grease the way through the publication bottleneck but undermine science. Perspectives on Psychological Science, 7, 562–571.
- Ioannidis, J. P. A. (2005). Why most published research findings are false. PLoS Medicine, 2(8), e124.
- Ioannidis, J. P. A. (2012). Why science is not necessarily self-correcting. Perspectives on Psychological Science, 7, 645–654.
- John, L. K., Loewenstein, G., & Prelec, D. (2012). Measuring the prevalence of questionable research practices with incentives for truth telling. Psychological Science, 23, 524–532.
- Makel, M., Plucker, J., & Hegarty, B. (2012). Replications in psychology research: How often do they really occur? Perspectives on Psychological Science, 7, 537–542.
- Masicampo, E. J., & Lalande, D. R. (in press). A peculiar prevalence of p values just below .05. Quarterly Journal of Experimental Psychology.
- Neuroskeptic. (2012). The nine circles of scientific hell. Perspectives on Psychological Science, 7, 643–644.
- Open Science Collaboration. (2012). An open, large-scale, collaborative effort to estimate the reproducibility of psychological science. Perspectives on Psychological Science, 7, 657–660.
- Osherovich, L. (2011). Hedging against academic risk. SciBX: Science–Business eXchange, 4(15).
- Pashler, H., & Harris, C. R. (2012). Is the replicability crisis overblown? Three arguments examined. Perspectives on Psychological Science, 7, 531–536.
- Prinz, F., Schlange, T., & Asadullah, K. (2011). Believe it or not: How much can we rely on published data on potential drug targets? Nature Reviews Drug Discovery, 10, 712.
- Rosenthal, R. (1979). The “file drawer problem” and tolerance for null results. Psychological Bulletin, 86, 638–641.
- Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22, 1359–1366.
- Simonsohn, U. (2012). Just post it: The lesson from two cases of fabricated data detected by statistics alone.
- Stroebe, W., Postmes, T., & Spears, R. (2012). Scientific misconduct and the myth of self-correction in science. Perspectives on Psychological Science, 7, 670–688.
- Wagenmakers, E. J., Wetzels, R., Borsboom, D., & van der Maas, H. L. J. (2011). Why psychologists must change the way they analyze their data: The case of psi. Journal of Personality and Social Psychology, 100, 426–432.
- Wicherts, J. M., Bakker, M., & Molenaar, D. (2011). Willingness to share research data is related to the strength of the evidence and the quality of reporting of statistical results. PLoS ONE, 6(11), e26828.
- Wicherts, J. M., Borsboom, D., Kats, J., & Molenaar, D. (2006). The poor availability of psychological research data for reanalysis. American Psychologist, 61, 726–728.
- Yong, E. (2012). Replication studies: Bad copy. Nature, 485, 298–300.