Sunday, December 30, 2012

2013 Summer School on Law and Logic

The dates for the Harvard Law School - European University Institute 2013 Summer School on Law and Logic are now settled: July 15 - 26, 2013. Venue: "The school will take place at the Badia Fiesolana, in San Domenico di Fiesole, which is located in the hills above Florence, at a short distance from both Florence and Fiesole." My law school will be one of the sponsors of this summer program. See

Join us!
The dynamic evidence page

Evidence marshaling software MarshalPlan

Thursday, December 27, 2012

Formalizing Visual Thinking? Formalizing Insight?

In an interesting interview in the most recent issue of the ever-interesting electronic journal The Reasoner, Mateja Jamnik of the University of Cambridge argues that it might be possible to automate the invention of the kinds of visual images, or representations, that mathematicians sometimes or often use to get insights into the development of mathematical proofs. Her argument is significant in part because, if successful, it would undercut one of Roger Penrose's arguments against purely computational theories of the mind (or brain). Beyond that, Jamnik's argument raises the ever-interesting question of why visual models (sometimes considered "informal") are as important as they seem to be in "exact sciences" such as mathematics and physics.

Here are some snippets from this fascinating interview:

Mateja Jamnik: I think that a proof is a social construct in mathematics. I think the history of mathematics shows us that where mathematicians—famous mathematicians—came up with a solution or a proof of a problem or a theorem, and presented it to other famous mathematicians, as long as they convinced them, they trusted that that was the correct proof. And it was only when the logicians came along that they formalised the logic of that domain and were able to either prove or disprove the proof, to actually formalise it. And that’s what we mean when we talk about the formal proof, that you can verify that it is correct, that it follows from a set of axioms and so on. Whereas mathematicians, I don’t think, are very interested in that. They are really more interested in the insight in the proof, and trying to understand why the theorem holds, and that’s why I say it’s a social construct because as long as they convince enough people that it’s correct, nobody’s going to go and check it out to see whether it is or not. I mean, as long as fellow mathematicians believe that they understand and that they trust the process of the proof, then they’re fine. And history of mathematics is full of examples like that, where there are proofs that were thought to be proofs for fifty years and even by very famous people, and they were disproved and it was shown that they were not proofs at all. Whereas, from a formalist point of view, formal proof has a very technical meaning, which is that it follows from a set of axioms.

[snip, snip]

ML [interviewer Mary Leng]: So what interests me in your work is that you’re saying that these methods, though visual, can be formalized.

MJ: Yes, that’s exactly what I’m trying to do.

ML: Whereas some of the thought in thinking about diagrammatic reasoning in philosophy is to say that there’s this element of our cognition of mathematics that isn’t formalizable. I suppose that’s something that Penrose is pushing at as well, in his claim that there are diagrammatic proofs that cannot be automated, because he wants to push the idea that there’s something special about us, that we’re not purely computational.

MJ: That’s right. It is. But what I’m trying to show is that all these so-called ‘informal’ methods, they can be formalized. And so I’m looking into the use of diagrams. In fact I just came from a conference on diagrams where everybody was coming from areas of mathematics, philosophy, cognitive science, cognitive psychology, computer science, artificial intelligence, and basically we have a common interest, which is to study diagrams, and the applications and theoretical foundations of the use of diagrams. And there are lots of people who come up with representations—diagrammatic representations—that they formalize and they use reasoning on. So it’s totally possible. And also my work for my PhD, on DIAMOND [Jamnik’s interactive theorem prover, which made use of diagrams to construct proofs of theorems], was in the domain of natural number arithmetic, where the proofs were not at all like the normal logical proofs. In fact, I would say that one of the hypotheses that came out of that work was that people use something like what we call ‘schematic proof’ to find a solution to a problem. So basically, you look at a few examples—concrete examples—of your problem and you solve them, and then you spot the pattern and you generalize that pattern, and you try to make an argument about how that pattern is a justification for the general statement for all cases. So what I did with natural number arithmetic was that I would represent these theorems and statements in mathematics using diagrams, and then use just manipulations of concrete cases of diagrams—for some natural number like 5, 6, or whatever—and then spot the pattern and generalize this into a program which, basically upon input, will produce a solution for that particular case. We call this—this program, this general pattern, this procedure—we call this ‘schematic proof’. My hypothesis is that this is one possible model of how people do proofs, and there’s plenty of evidence of that from history.
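Jamnik's notion of a "schematic proof"—solve a few concrete cases, spot the pattern, and generalize the pattern into a uniform procedure—can be illustrated in a few lines of code. The sketch below is my own illustration, not code from DIAMOND: for any concrete n, it carries out the diagrammatic decomposition of an n × n square of dots into L-shaped shells ("ells") and checks that each ell contains the next odd number of dots, so that the first n odd numbers sum to n-squared.

```python
def ell(k):
    # The k-th "ell": the dots added when a (k-1) x (k-1) square of dots
    # grows into a k x k square -- its last row plus its last column.
    return {(k - 1, j) for j in range(k)} | {(i, k - 1) for i in range(k - 1)}

def schematic_case(n):
    # For the concrete case n: split the n x n square into ells and check
    # (a) each ell contains the next odd number of dots, and
    # (b) the ells exactly tile the square, so the odd numbers sum to n**2.
    square = {(i, j) for i in range(n) for j in range(n)}
    ells = [ell(k) for k in range(1, n + 1)]
    assert [len(e) for e in ells] == [2 * k - 1 for k in range(1, n + 1)]
    assert set().union(*ells) == square
    assert sum(len(e) for e in ells) == n * n
    return True
```

The program is the "general pattern": running it on any particular n reproduces the diagrammatic argument for that case, which is just what the schematic-proof hypothesis describes (the step from the pattern to the general statement, of course, still needs its own justification).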

[snip, snip]

ML [interviewer Mary Leng]: This brings me to the issue about Penrose, because Penrose wants to say that we’re fundamentally different from machines. You mention in your book (Mathematical Reasoning with Diagrams, 2001) Penrose’s scepticism about the possibility of modelling diagrammatic reasoning in computers, and I suppose behind all that is the thought that we want to find things that we can do that computers can’t. So if it turns out that we can model this reasoning in automated settings, then that speaks against this idea that we’re so different.

MJ: Yes absolutely. I think that we don’t understand reasoning enough to be able to make claims like this. So what spurred me on to say something about Penrose in my book is because he presented this example about a cube—about something which is innately human reasoning that machines wouldn’t be able to do. He presented this example of a proof that the sum of the first n hexagonal numbers is n-cubed. He presented this visual proof which basically says, “I’ll give you an example for a cube, that is of size three, and if you split it up in this way you can see that those are the first three hexagonal numbers, and if you continue doing that, then you understand that the general theorem holds.” And that’s precisely what, for example, the theorem about the odd natural numbers, in 2-D, is. It’s an analogue of that in 2-D. And I thought, there’s something very procedural about this. You know, and we see, I suppose he’s appealing to the fact, the visual effect, that you sort of ‘splatter’ the 3-D onto 2-D to see that those three shells of the cube form a hexagonal number. But you can think of it in the same way about the odd natural numbers. You have that the sum of the first n odd natural numbers is n-squared. You can take a square of size three and then you split it into ‘ells’, and you see that each ell is the subsequent odd natural number. And that’s exactly an analogue of what Penrose presented. And obviously I showed we can capture that, you know? DIAMOND could easily do that. OK it doesn’t have a 3-D interface, so technically you can’t, but in principle, there’s nothing that would stop it really. So I suspect that Penrose is asking, really, “Could the computer come up with an idea like that?” and we don’t know that yet.
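Penrose's cube example and its 2-D analogue are easy to check mechanically, which is at least consistent with Jamnik's point that there is something procedural about the argument. A small sketch (again my own illustration, not DIAMOND's code): the k-th "shell" of a cube—the cells added when a (k−1)-cube grows into a k-cube—contains exactly the k-th centered hexagonal number of cells, and the shells fill the cube, so the first n hexagonal numbers sum to n-cubed.

```python
def hexagonal(k):
    # The k-th centered hexagonal number: a centre dot surrounded by
    # rings of 6, 12, ..., 6*(k-1) dots; equivalently 3*k*(k-1) + 1.
    return 1 + 6 * sum(range(k))

def cube_shell(k):
    # Cells of a k x k x k cube not contained in the (k-1)-cube,
    # i.e. cells with at least one coordinate equal to k - 1.
    return sum(1 for i in range(k) for j in range(k) for m in range(k)
               if max(i, j, m) == k - 1)

for n in range(1, 15):
    # "Splattering" each 3-D shell onto 2-D yields a hexagonal number...
    assert cube_shell(n) == hexagonal(n)
    # ...and the shells tile the cube, so the hexagonal numbers sum to n**3.
    assert sum(cube_shell(k) for k in range(1, n + 1)) == n ** 3
```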

ML: So just to clarify, DIAMOND is a proof checker rather than a theorem prover?

MJ: It’s an interactive theorem prover, so it means that the user constructs the sample cases, so it means the user has the insight, really, into what the proof should look like. So the computer’s not coming up with an insight.

ML: In your book you mention the hope that you could actually develop a computer program that could do the insight steps. How have things moved in that regard?

MJ: I haven’t moved much in that direction yet, because I’ve been looking at a different direction with reasoning with diagrams, but that would definitely be the next step, to put some sort of search procedure on top of all these visual methods and geometric manipulations of diagrams and check whether anything interesting comes up. Now of course Penrose would probably say, the computer doesn’t have the insight. But where does that come from in a mathematician? It comes from experience, it comes from...well we don’t know. That’s why I’m interested in modelling this type of reasoning.



"Law, Virtue and Justice," eds., Amalia Amaya and Ho Hock Lai

From Hart Publishing (2012):

Law, Virtue and Justice
Edited by Amalia Amaya and Ho Hock Lai

This book explores the relevance of virtue theory to law from a variety of perspectives. The concept of virtue is central in both contemporary ethics and epistemology. In contrast, in law, there has not been a comparable trend toward explaining normativity on the model of virtue theory. In the last few years, however, there has been an increasing interest in virtue theory among legal scholars. 'Virtue jurisprudence' has emerged as a serious candidate for a theory of law and adjudication. Advocates of virtue jurisprudence put primary emphasis on aretaic concepts rather than on duties or consequences. Aretaic concepts are, on this view, crucial for explaining law and adjudication. This book is a collection of essays examining the role of virtue in general jurisprudence as well as in specific areas of the law. Part I puts together a number of papers discussing various philosophical aspects of an approach to law and adjudication based on the virtues. Part II discusses the relationship between law, virtue and character development, with some of the essays selected analysing this relationship by combining both eastern perspectives on virtue and character with western approaches. Parts III and IV examine problems of substantive areas of law, more specifically, criminal law and evidence law, from within a virtue-based framework. Last, Part V discusses the relevance of empathy to our understanding of justice and legal morality.

Amalia Amaya is a Researcher in the Institute of Philosophical Research at the National Autonomous University of Mexico.
Ho Hock Lai is a Professor in the Faculty of Law at the National University of Singapore.



Friday, December 21, 2012

Jim Coetzee Recommends Jim Franklin

Jim Coetzee, the Nobel Prize winner, when asked for his list of 2012 best reads, said:

"... In The Science of Conjecture, James Franklin shows us how deeply and subtly jurists and philosophers from ancient Greece onwards have explored how we can deal rationally with real-life cases (law cases, for instance, or scientific experiments) where the link between cause and effect is not obvious...."

Jim Franklin's classic book was published in 2002.

Face-to-Face Confrontation in Canada

R. v. N.S., 2012 SCC 72 (Dec. 20, 2012) (4-2-1):

Portion of summary of the majority opinion on the Supreme Court's web site (full opinion is here):

McLachlin C.J. and Deschamps, Fish and Cromwell JJ:
The issue is when, if ever, a witness who wears a niqab for religious reasons can be required to remove it while testifying. Two sets of Charter rights are potentially engaged — the witness’s freedom of religion and the accused’s fair trial rights, including the right to make full answer and defence. An extreme approach that would always require the witness to remove her niqab while testifying, or one that would never do so, is untenable. The answer lies in a just and proportionate balance between freedom of religion and trial fairness, based on the particular case before the court. A witness who for sincere religious reasons wishes to wear the niqab while testifying in a criminal proceeding will be required to remove it if (a) this is necessary to prevent a serious risk to the fairness of the trial, because reasonably available alternative measures will not prevent the risk; and (b) the salutary effects of requiring her to remove the niqab outweigh the deleterious effects of doing so.

 Applying this framework involves answering four questions. First, would requiring the witness to remove the niqab while testifying interfere with her religious freedom? To rely on s. 2(a) of the Charter, N.S. must show that her wish to wear the niqab while testifying is based on a sincere religious belief. The preliminary inquiry judge concluded that N.S.’s beliefs were not sufficiently strong. However, at this stage the focus is on sincerity rather than strength of belief.
The second question is: would permitting the witness to wear the niqab while testifying create a serious risk to trial fairness? There is a deeply rooted presumption in our legal system that seeing a witness’s face is important to a fair trial, by enabling effective cross-examination and credibility assessment. The record before us has not shown this presumption to be unfounded or erroneous. However, whether being unable to see the witness’s face threatens trial fairness in any particular case will depend on the evidence that the witness is to provide. Where evidence is uncontested, credibility assessment and cross-examination are not in issue. Therefore, being unable to see the witness’s face will not impinge on trial fairness. If wearing the niqab poses no serious risk to trial fairness, a witness who wishes to wear it for sincere religious reasons may do so.
If both freedom of religion and trial fairness are engaged on the facts, a third question must be answered: is there a way to accommodate both rights and avoid the conflict between them? The judge must consider whether there are reasonably available alternative measures that would conform to the witness’s religious convictions while still preventing a serious risk to trial fairness.   
If no accommodation is possible, then a fourth question must be answered: do the salutary effects of requiring the witness to remove the niqab outweigh the deleterious effects of doing so? Deleterious effects include the harm done by limiting the witness’s sincerely held religious practice. The judge should consider the importance of the religious practice to the witness, the degree of state interference with that practice, and the actual situation in the courtroom – such as the people present and any measures to limit facial exposure. The judge should also consider broader societal harms, such as discouraging niqab-wearing women from reporting offences and participating in the justice system. These deleterious effects must be weighed against the salutary effects of requiring the witness to remove the niqab. Salutary effects include preventing harm to the fair trial interest of the accused and safeguarding the repute of the administration of justice. When assessing potential harm to the accused’s fair trial interest, the judge should consider whether the witness’s evidence is peripheral or central to the case, the extent to which effective cross-examination and credibility assessment of the witness are central to the case, and the nature of the proceedings. Where the liberty of the accused is at stake, the witness’s evidence central and her credibility vital, the possibility of a wrongful conviction must weigh heavily in the balance. The judge must assess all these factors and determine whether the salutary effects of requiring the witness to remove the niqab outweigh the deleterious effects of doing so.

Thursday, December 20, 2012

The Brouhaha Over Replicability of Experimental Results in Psychology

Perspectives on Psychological Science (Nov. 2012):

Editors’ Introduction to the Special Section on Replicability in Psychological Science

A Crisis of Confidence?

Hal Pashler (University of California, San Diego) and Eric-Jan Wagenmakers (University of Amsterdam, The Netherlands)
Is there currently a crisis of confidence in psychological science reflecting an unprecedented level of doubt among practitioners about the reliability of research findings in the field? It would certainly appear that there is. These doubts emerged and grew as a series of unhappy events unfolded in 2011: the Diederik Stapel fraud case (see Stroebe, Postmes, & Spears, 2012, this issue), the publication in a major social psychology journal of an article purporting to show evidence of extrasensory perception (Bem, 2011) followed by widespread public mockery (see Galak, LeBoeuf, Nelson, & Simmons, in press; Wagenmakers, Wetzels, Borsboom, & van der Maas, 2011), reports by Wicherts and colleagues that psychologists are often unwilling or unable to share their published data for reanalysis (Wicherts, Bakker, & Molenaar, 2011; see also Wicherts, Borsboom, Kats, & Molenaar, 2006), and the publication of an important article in Psychological Science showing how easily researchers can, in the absence of any real effects, nonetheless obtain statistically significant differences through various questionable research practices (QRPs) such as exploring multiple dependent variables or covariates and only reporting these when they yield significant results (Simmons, Nelson, & Simonsohn, 2011).
For those psychologists who expected that the embarrassments of 2011 would soon recede into memory, 2012 offered instead a quick plunge from bad to worse, with new indications of outright fraud in the field of social cognition (Simonsohn, 2012), an article in Psychological Science showing that many psychologists admit to engaging in at least some of the QRPs examined by Simmons and colleagues (John, Loewenstein, & Prelec, 2012), troubling new meta-analytic evidence suggesting that the QRPs described by Simmons and colleagues may even be leaving telltale signs visible in the distribution of p values in the psychological literature (Masicampo & Lalande, in press; Simonsohn, 2012), and an acrimonious dust-up in science magazines and blogs centered around the problems some investigators were having in replicating well-known results from the field of social cognition (Bower, 2012; Yong, 2012).
Although the very public problems experienced by psychology over this 2-year period are embarrassing to those of us working in the field, some have found comfort in the fact that, over the same period, similar concerns have been arising across the scientific landscape (triggered by revelations that will be described shortly). Some of the suspected causes of unreplicability, such as publication bias (the tendency to publish only positive findings) have been discussed for years; in fact, the phrase file-drawer problem was first coined by a distinguished psychologist several decades ago (Rosenthal, 1979). However, many have speculated that these problems have been exacerbated in recent years as academia reaps the harvest of a hypercompetitive academic climate and an incentive scheme that provides rich rewards for overselling one’s work and few rewards at all for caution and circumspection (see Giner-Sorolla, 2012, this issue). Equally disturbing, investigators seem to be replicating each others’ work even less often than they did in the past, again presumably reflecting an incentive scheme gone askew (a point discussed in several articles in this issue, e.g., Makel, Plucker, & Hegarty, 2012).
The frequency with which errors appear in the psychological literature is not presently known, but a number of facts suggest it might be disturbingly high. Ioannidis (2005) has shown through simple mathematical modeling that any scientific field that ignores replication can easily come to the miserable state wherein (as the title of his most famous article puts it) “most published research findings are false” (see also Ioannidis, 2012, this issue, and Pashler & Harris, 2012, this issue). Meanwhile, reports emerging from cancer research have made such grim scenarios seem more plausible: In 2012, several large pharmaceutical companies revealed that their efforts to replicate exciting preclinical findings from published academic studies in cancer biology were only rarely verifying the original results (Begley & Ellis, 2012; see also Osherovich, 2011; Prinz, Schlange, & Asadullah, 2011).
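The core of Ioannidis's modeling is a short calculation with Bayes' rule. The sketch below follows the structure of his 2005 model (the parameter names are mine): the probability that a claimed positive finding is true—the positive predictive value—depends on the pre-study odds that the tested relationship is real, the statistical power, the significance level, and a bias term for the fraction of would-be negative analyses that are turned into positives by questionable research practices.

```python
def ppv(prior_odds, power=0.8, alpha=0.05, bias=0.0):
    # Positive predictive value of a "significant" finding, following
    # the structure of Ioannidis's (2005) model: expected true positives
    # over all claimed positives, with bias converting a fraction of
    # would-be negatives into reported positives.
    true_pos = prior_odds * (power + bias * (1 - power))
    false_pos = alpha + bias * (1 - alpha)
    return true_pos / (true_pos + false_pos)

# With long-shot hypotheses (pre-study odds of 1:10) and no bias, a
# significant finding is more likely true than not -- but only barely:
print(round(ppv(0.1), 2))            # ~0.62
# Add modest bias from QRPs, and most significant findings are false:
print(round(ppv(0.1, bias=0.2), 2))  # ~0.26
```

The point of the exercise is how quickly the predictive value collapses: nothing exotic is needed to reach the state where most published positive findings are false, only unlikely hypotheses plus a little analytic flexibility.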
Closer to home, the replicability of published findings in psychology may become clearer with the Reproducibility Project (Open Science Collaboration, 2012, this issue; see also Carpenter, 2012). Individuals and small groups of service-minded psychologists are each contributing their time to conducting a replication of a published result following a structured protocol. The aggregated results will provide the first empirical evidence of reproducibility and its predictors. The open project is still accepting volunteers. With small contributions from many of us, the Reproducibility Project will provide an empirical basis for assessing our reproducibility as a field (to find out more, or sign up yourself, visit:
This special section brings together a set of articles that analyze the causes and extent of the replicability problems in psychology and ask what can be done about it. The first nine articles focus principally on diagnosis; the following six articles focus principally on treatment. Those readers who need further motivation to change their research practices are referred to the illustration provided by Neuroskeptic (2012). The section ends with a stimulating overview by John Ioannidis, the biostatistician whose work has led the way in exposing problems of replicability and bias across the fields of medicine and the life sciences.
Many of the articles in this special issue make it clear why the replicability problems will not be so easily overcome, as they reflect deep-seated human biases and well-entrenched incentives that shape the behavior of individuals and institutions. Nevertheless, the problems are surely not insurmountable, and the contributors to this special section offer a great variety of ideas for how practices can be improved.
In the opinion of the editors of this special section, it would be a mistake to try to rely upon any single solution to such a complex problem. Rather, it seems to us that psychological science should be instituting parallel reforms across the whole range of academic practices—from journals and journal reviewing to academic reward structures to research practices within individual labs—and finding out which of these prove effective and which do not. We hope that the articles in this special section will not only be stimulating and pleasurable to read, but that they will also promote much wider discussion and, ultimately, collective actions that we can take to make our science more reliable and more reputable. Having found ourselves in the very unwelcome position of being (to some degree at least) the public face for the replicability problems of science in the early 21st century, psychological science has the opportunity to rise to the occasion and provide leadership in finding better ways to overcome bias and error in science generally.

Article Notes

  • Declaration of Conflicting Interests The authors declared that they had no conflicts of interest with respect to their authorship or the publication of this article.
