Trial by Mathematics - Reconsidered
(footnotes omitted; rough draft; please do not quote)
by Peter Tillers
(for eventual publication in Law, Probability and Risk, http://lpr.oxfordjournals.org/)
In 1970 Michael O. Finkelstein (with William B. Fairley) proposed that under some circumstances a jury in a criminal trial might be invited to use Bayes' Theorem to address the issue of the identity of the criminal perpetrator. In 1971 Laurence Tribe of Harvard Law School responded to this proposal with a rhetorically powerful and multipronged attack on what he called "trial by mathematics." Professor Tribe, who went on to have a distinguished career as a scholar of American (U.S.) constitutional law, argued that any use of probability theory in trials (particularly in criminal trials) to regulate the drawing of inferences from evidence has a variety of vices.
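The arithmetic that Finkelstein and Fairley had in mind is easiest to see in the odds form of Bayes' Theorem. (The following sketch and the numbers in it are mine, offered only as illustration; they are not taken from the Finkelstein-Fairley article.)

\[
\frac{P(H \mid E)}{P(\lnot H \mid E)} \;=\; \frac{P(E \mid H)}{P(E \mid \lnot H)} \times \frac{P(H)}{P(\lnot H)}
\]

Here H is the hypothesis that the defendant is the perpetrator and E is the identification evidence. If, for example, a juror's prior odds on H were 1 to 4 and the evidence were judged to be 50 times more probable if H were true than if it were false, the posterior odds would be 50 × (1/4), that is, 12.5 to 1, a posterior probability of roughly 0.93. The proposal, as I understand it, was that jurors might be invited to combine their own prior assessments with such a likelihood ratio in just this way.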
Tribe's article focused on the probability calculus in part because he was responding to a proposal to use a theorem of probability theory, Bayes' Theorem. But in that same article Tribe said that his objections to the use of Bayes' Theorem in legal trials had broader implications. The subject matter of his article was, as he put it, "the entire family of formal techniques of analysis that build on explicit axiomatic foundations, employ rigorous principles of deduction to construct chains of argument, and rely on symbolic modes of expression calculated to reduce ambiguity to a minimum."
Tribe argued that the use of mathematics to model factual inference and proof in trials (particularly in criminal trials) is a bad idea because:
1. Bayes' Theorem makes precise what is inherently imprecise;
2. Bayes' Theorem makes objective what is subjective;
3. Trial by mathematics and statistics is morally and socially offensive;
4. Lay triers of fact cannot understand matters such as Bayes' Theorem; and
5. Numbers tend to dwarf soft variables: considerations expressible in numbers swamp unquantifiable considerations, doubts, and uncertainties.
The debate about "trial by mathematics" – or, more broadly, the debate about the use of formal analysis to model of evidence and inference in legal proceedings – took many twists and turns after Tribe's formidable, rhetorically-powerful assault.
Finkelstein and Fairley published several brief rejoinders to Tribe. One of the more interesting rejoinders was made jointly by Fairley and the eminent statistician Frederick Mosteller of Harvard University. Fairley and Mosteller argued that Tribe's most technical objection to Bayesian analysis of identification – his claim that Bayesian analysis cannot accommodate uncertain evidential premises – was incorrect; they argued that the product rule for dependent conditional events can accommodate the sorts of uncertainties (and the redundancies) that Tribe mentioned in his article. These attempted rebuttals had little effect. It seemed to most American legal scholars at the time (to those legal scholars, in any event, who were not entirely mystified by the debate) that Tribe had killed the baby (Bayesian analysis of evidence in legal trials) practically at the moment of its birth; it seemed that the Bayes-Baby was stillborn.
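The technical nub of the Fairley-Mosteller point can be stated compactly. (The notation below is mine, not theirs.) For a body of evidence E1, E2, ..., En bearing on a hypothesis H, the product (chain) rule for dependent events gives

\[
P(E_1, E_2, \ldots, E_n \mid H) \;=\; P(E_1 \mid H)\, P(E_2 \mid H, E_1) \cdots P(E_n \mid H, E_1, \ldots, E_{n-1}),
\]

in which each successive factor is conditioned on the evidence already taken into account. Uncertainty about, and redundancy among, the individual evidential premises therefore shows up in the later conditional probabilities; it is not assumed away.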
But it turned out that the baby that Tribe had attacked was hard to kill. In 1975 Professor Richard Lempert published an influential article that did much to resurrect interest in the use of mathematics and probability theory to model factual inference and proof in legal proceedings. Although Lempert said he agreed with Tribe that the "costs of attempting to integrate mathematics into the factfinding process of a legal trial outweigh the benefits," Lempert argued that Tribe had overlooked the possibility of heuristic use of mathematical models of inference. Lempert argued that judges and legal scholars, for example, could and should use subjective Bayesian logic to explore their own thinking and reasoning about inferences from evidence.
But two years after Lempert published his influential article, the debate took a different turn. L. Jonathan Cohen, an Oxford philosopher, published an influential book, The Probable and the Provable (Oxford, 1977). In that book and elsewhere he called accounts of inference and proof that rest on the standard probability calculus "Pascalian," and he argued that another way of thinking about inference, induction, and proof, which he called "Baconian," was also, at a minimum, valid and important.
Six years later the debate about the nature of inference and factual proof took yet another direction. In 1983 three social psychologists -- Reid Hastie, Steven D. Penrod, and Nancy Pennington -- published the book Inside the Jury (Harvard, 1983). There and elsewhere they advanced what they later called the "story model" of proof: They argued that American juries typically evaluate evidence in part by constructing stories.
Some observers who were uncomfortable with Bayesian accounts of inference and proof in legal trials found solace and support both in the work of L.J. Cohen and in the work of Hastie, Penrod, and Pennington. Many of these observers thought that Cohen's theory was anti-mathematical, and many thought that an account of inference that emphasizes story-telling is incompatible with, or at least fundamentally different from, a Bayesian model of evidential inference.
Many of the protagonists and participants in these debates and discussions, together with some others, came together at a conference at Boston University School of Law in 1986. At this conference many of the participants discovered, or so they said, that the differences between their various approaches and theories were not really as stark or as fundamental as many observers had supposed. However, some of the participants in the conference -- in the main several newcomers to the debate -- stuck to their guns (or picked them up) and insisted that mathematical analysis of evidence in trials, while not necessarily invidious, is radically incomplete and imperfect. In particular, Professor Ronald Allen argued that only a non-mathematical theory that emphasizes stories and storytelling can provide an accurate model or picture of juridical proof. (Professor Allen invoked both L.J. Cohen's theory and Inside the Jury to support his theory.)
The debate and the discussion about trial by mathematics, of course, continued after the 1986 conference. (It did so in part at several conferences that I organized.) However, as the years passed, it seemed increasingly apparent to some observers that the debate about trial by mathematics was becoming unproductive and sterile. I was one of those people.
It seemed to me that many of the opponents of Bayesian analysis of inference in legal trials and, more broadly, of mathematical analysis of proof in trials never really understood (and still fail to understand) what at least some of the proposals for mathematical analysis were all about, and that because of this failure of understanding most of the counterattacks (against mathematical and formal models of factual inference) were directed against a straw theory. For example, it was a mistake for critics of trial by mathematics to suppose that Cohen's Baconian theory is anti- or non-mathematical; it was a mistake to suppose that story-telling is inconsistent with Bayesian or mathematical analysis of evidence; it was a mistake to suppose that Bayesian analysis is equivalent to objective or statistical analysis; it was a mistake to suppose that formal or mathematical analysis is necessarily "mechanical" or that formal mathematical analysis necessarily amounts to an "algorithm"; it was perhaps even a mistake to suppose that non-mathematicians and ordinary human beings could never be made to understand Bayesian logic or any kind of mathematically grounded account of inference and proof; and, finally, it was a profound mistake to suppose that the debate over trial by mathematics was really only a debate about the uses of mathematics in or about factfinding in the legal process rather than (as Tribe himself recognized) a debate about the broader question of the uses and limits of formal argument about evidence, factual inference, and factual proof in legal proceedings. (These points, it seemed to me, had all been made very clearly by the unjustifiably modest polymath David Schum. I had also tried to make some of these points. I simply could not understand why law professors seemed unable to grasp them.)
At about the time that I was thinking such sour and possibly arrogant thoughts, I began thinking again, and at more length, about Lotfi Zadeh's theory of fuzzy sets. I was convinced (and I am still convinced) that fuzzy logic potentially has much to say about how human beings do and should reason about evidence in legal contexts (and about legal reasoning in general). Even so, I could not escape the suspicion that fuzzy logic does not address some important features of argument about evidence in American trials and legal proceedings. Although I was not sure then, and am still not sure, that my understanding of fuzzy logic is correct, I had the sense that fuzzy logic is in the main a theory about the natural behavior of concepts and words, in much the way that meteorology is a theory of the behavior of the atmosphere. I could not readily see how fuzzy logic could be used to portray the sort of argument one often hears in courtrooms and trials about the inferences to be drawn from some collection of evidence, some set of evidential premises. As I thought about this question -- as I puzzled over both the power and the limits of fuzzy logic -- I became even more convinced than I had been before that it is very important to keep in mind that formal argument about evidence (whether the argument uses numbers or not) can serve quite distinct purposes.
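To make concrete the contrast I was worried about, here is a toy illustration of my own (not Zadeh's). Fuzzy logic assigns a vague predicate such as "tall" a graded membership function, for example

\[
\mu_{\text{tall}}(x) \;=\;
\begin{cases}
0 & \text{if } x \le 160 \text{ cm} \\
(x - 160)/30 & \text{if } 160 \text{ cm} < x < 190 \text{ cm} \\
1 & \text{if } x \ge 190 \text{ cm.}
\end{cases}
\]

On this toy definition a witness who is 175 cm tall is "tall" to degree 0.5. That number measures the vagueness of the word "tall"; it does not measure anyone's uncertainty about how tall the witness actually is, and it is not the probability that some disputed factual hypothesis is true. It was this difference that fed my suspicion that fuzzy logic, by itself, does not capture the kind of inconclusive argument from evidence that one hears in courtrooms.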
Much of the fear that many legal scholars and judges have of mathematical and formal argument may be rooted in two intuitions, one of which seems valid to me and one of which does not. The valid intuition or sentiment that quite possibly lies at the root of much of the distrust by "legal professionals" of mathematical and formal analysis of evidence is the belief that in legal proceedings argument from and about evidence must be "transparent" to "ordinary" people such as judges and jurors. This intuition, or "prejudice," is in part rooted in the sentiment that the ultimate decision makers in legal proceedings must be human beings and in the correlative sentiment or belief that decision making about evidential inferences cannot be handed over to a logic that ordinary judges and jurors cannot follow and whose trustworthiness such judges and jurors therefore cannot assess.
The invalid intuition or suspicion on which legal professionals' fear of formal analysis rests is the notion that formal analysis is necessarily mechanical -- "mechanical" in the sense that mathematical or formal analysis is necessarily removed from and impenetrable to ordinary human judgment and intuition and therefore necessarily runs "on its own," beyond the control or effective supervision of ordinary mortals.
But the responsibility for this "mistake" -- for the mistaken notion that formal argument necessarily runs on its own and beyond the control of the personal judgments of ordinary human beings -- is not entirely or even primarily attributable to the supposed naiveté of ordinary people. It is in fact the case that most complex argument about inferences from evidence rests on almost innumerable personal or subjective judgments. The mistake in thinking that formal argument necessarily works irrespective of or independent of such personal or subjective judgments and intuitions is probably largely attributable to the failure of the practitioners of formal analysis to show how formal arguments can be made intelligible to ordinary people (i.e., to non-logicians and non-mathematicians) and their failure therefore to show how at least some formal arguments can be rooted in and made responsive to subjective human sentiments and judgments of ordinary people.
These are the views and sentiments that led me to develop the following table or list of possible purposes of mathematical and formal argument about inference from evidence:
1. To predict how judges and jurors will resolve factual issues in litigation.
2. To devise methods that can replace existing methods of argument and deliberation in legal settings about factual issues.
3. To devise methods that mimic conventional methods of argument about factual issues in legal settings.
4. To devise methods that support, or facilitate, existing, or ordinary, argument and deliberation about factual issues in legal settings by legal actors (such as judges, lawyers, and jurors) who are generally illiterate in mathematical and formal analysis and argument.
5. To devise methods that would capture some but not all ingredients of argument in legal settings about factual questions.
6. To devise methods that perfect – that better express, that increase the transparency of – the logic or logics that are immanent, or present, in existing ordinary inconclusive reasoning about uncertain factual hypotheses that arise in legal settings.
7. To devise methods that have no practical purpose – and whose validity cannot be empirically tested – but that serve only to advance understanding – possibly contemplative understanding – of the nature of inconclusive argument about uncertain factual hypotheses in legal settings.
In the abstract for this talk and paper I explained the purpose of my list in the following way:
Before any further major research project on "trial by mathematics" is begun, interested researchers in mathematics, probability, logic, and related fields, on the one hand, and interested legal professionals, on the other hand, should try to reach agreement about the possible distinct purposes that any given mathematical or formal analysis of inconclusive argument about uncertain factual hypotheses might serve. Putting aside the special (and comparatively trivial) case of mathematical and formal methods that make their appearance in legal settings because they are accoutrements of admissible forensic scientific evidence, I propose that discussants, researchers, and scholars of every stripe begin by carefully considering the possibility that mathematical and formal analysis of inconclusive argument about uncertain factual questions in legal proceedings could have any one (or more) of the ... distinct purposes [that I enumerate in my abstract].
Over the years I happen to have been most interested in the fourth possible purpose of formal argument about and from evidence: "To devise methods that support or facilitate existing, or ordinary, argument and deliberation about factual issues in legal settings by legal actors (such as judges, lawyers, and jurors) who are generally illiterate in mathematical and formal analysis and argument." Making and assessing arguments is hard work. Probing the strengths and weaknesses of arguments, including arguments about evidence and inference, is also hard work. Nothing will ever change that. But I believe that people who study mathematics and formal logic have it in their power to make many of their propositions about logic and to make many of their formal arguments intelligible to people such as judges and jurors. For example, I believe that pictures and picture-thinking may be one way in which the worlds of the formal and informal sciences can learn to communicate effectively with each other. If that is the case, the day may yet come when rigorous formal argument about evidence, factual inference, and factual proof looks and feels warm and friendly to ordinary and mathematically illiterate people such as me.
A comment on this post:
Commonwealth v. Ferreira, 2011 Mass. LEXIS 977 (Oct. 21, 2011): "The prosecutor also erred in equating proof beyond a reasonable doubt with a numerical percentage of the probability of guilt, in this case, ninety-eight per cent. '[T]o attempt to quantify proof beyond a reasonable doubt changes the nature of the legal concept of "beyond a reasonable doubt," which seeks "abiding conviction" or "moral certainty" rather than statistical probability.' Commonwealth v. Rosa, 422 Mass. 18, 28 (1996). 'The idea of reasonable doubt is not susceptible to quantification; it is inherently qualitative.' Commonwealth v. Sullivan, 20 Mass. App. Ct. 802, 806 (1985). See Commonwealth v. Mack, 423 Mass. 288, 291 (1996) ('the concept of reasonable doubt is not a mathematical one')."