Wednesday, December 30, 2009
Tuesday, December 29, 2009
The second generation of the New Evidence Scholarship focused on mathematically laden problems of scientific evidence (e.g., DNA evidence) and on problems of factual inference that seem tractable to statistical analysis.
The third generation of the New Evidence Scholarship (NES) also uses mathematical argument and analysis. But this variant of NES does not require or expect consumers of mathematical analysis to do computations. Instead, NES-3rd uses mathematics and computations to develop tools for deliberation about inference, tools that do not require or expect the user of the tool to do computations.
A key premise of this third generation of NES is this: rigorous analysis (including mathematical analysis) is required to design a tool that promotes or supports or facilitates logical inference by ordinary people about ordinary [non-scientific] problems but the tool thus produced must not require such ordinary people to do mathematical computations.

Two major practitioners of NES-3rd are Douglas Walton and Tim van Gelder. (There are others.) Of course, the third approach to factual inference was, so to speak, there all along, at least in a germinal form: Wigmore's charting method (which appeared in print in 1937) anticipated key ingredients of the third approach. William Twining refurbished and modernized Wigmore's charting notations (and was among the very first to defend the importance of Wigmorean-style charting of evidential inference). David Schum married Wigmorean charting with mathematics and produced probabilistic inference networks. Working from left field (i.e., not starting within NES-1st or NES-2nd), Tim van Gelder is now effectively taking this progression to the final and critical stage. He is doing so by emphasizing how important it is that math- and logic-generated charts, diagrams, pictures, images, and, in general, conceptual tools present and portray problems of inference in a way that is intuitive and natural and intelligible to "ordinary" human beings (whose reasoning capacities are in fact extraordinary).
The most exciting and revolutionary developments in NES are yet to come. And some of the most exciting of these exciting developments are bubbling up from down under.
The difference between false memories and true ones is the same as for jewels: it is always the false ones that look the most real, the most brilliant.
Thursday, December 24, 2009
State v. Guinn, 114 Idaho 30, 39-41 (Idaho Ct. App. 1988) (Burnett, J., concurring):
I join the Court in setting aside the judgment of conviction and in remanding the case for a new trial. For guidance on remand, the Court has discussed the defendant's attempt to discredit a prosecution witness by showing that the witness used marijuana. The discussion focuses on I.R.E. 608(b). I write separately to offer a critical evaluation of Rule 608(b) and to comment on its application, together with other rules, in the present case.
Impeachment is a vexatious subject because it brings into conflict several objectives of our judicial system. On one hand, we seek to ascertain the truth in factual disputes. If there is reason to doubt the credibility of a witness, the triers of fact should be so informed in order to make an intelligent assessment of the testimony. On the other hand, we also strive for judicial efficiency. If challenges to the credibility of witnesses are not regulated in some fashion, trials may become sidetracked by the pursuit of collateral issues. In addition, we seek to uphold the integrity of judicial processes and to protect the dignity of persons who participate in them. If attacks on witnesses are unrestrained, citizen respect for -- and cooperation with -- the courts may be impaired.
Before the Idaho Rules of Evidence were adopted, the scope of impeachment was tightly confined. It reflected a dominant concern for efficiency, court decorum and witness protection. Impeachment was regulated by Rule 43(b)(6), I.R.C.P. This civil rule prohibited impeachment of a witness "by evidence of particular wrongful acts, except . . . [by] prior conviction of a felony . . . relevant to his credibility . . . ." Thus, it was impermissible to attack the credibility of a witness by attempting to show that he had engaged in bad acts, other than felony convictions, which called his character for truthfulness into question. Of course, if evidence of such bad acts was relevant to another issue in the case, it could be admitted -- but only for that limited purpose. E.g., State v. Dayley, 96 Idaho 527, 531 P.2d 1172 (1975); see generally REPORT OF THE IDAHO STATE BAR EVIDENCE COMMITTEE at C 608, p. 3 (December 16, 1983).
Today the Idaho Rules of Evidence and the similar federal rules reflect an increased concern for the truth-seeking objective of a trial. They broaden the opportunity to challenge the credibility of a witness on the basis of his prior misconduct. Rule 608(b) authorizes the trial judge, in the exercise of discretion, to allow inquiry upon cross-examination into any specific acts which are probative of the witness's character for untruthfulness.
The rule must be read carefully. It is important not only for what it says but also for what it does not say. It says that impeachment to show character for untruthfulness is limited to an inquiry upon cross-examination. The impeaching party may not introduce extrinsic evidence of prior bad acts for this purpose. However, the rule is silent regarding impeachment by specific acts to challenge credibility on other grounds. The rule says nothing, for example, about impeachment to show bias or improper motive for testifying. Commentators on the federal rule have treated this silence as pregnant -- that is, as an indication that extrinsic evidence can be used to show bias or improper motive on the part of a witness. See, e.g., 1 G. JOSEPH & S. SALTZBURG, EVIDENCE IN AMERICA Section 42.3 (1987). Indeed, some jurisdictions have added language to their rules of evidence explicitly distinguishing between impeachment to show character for untruthfulness and impeachment to show bias or improper motive. See, e.g., Rule 609.1, Hawaii Rules of Evidence.
The distinction, simply restated, is between a propensity to lie and a reason to lie. The witness with a character for untruthfulness has a propensity to lie; the witness with a bias or improper motive has a reason to lie. By allowing the issue of untruthful character to be raised only in cross-examination, but allowing bias or improper motive to be shown by extrinsic evidence, Rule 608(b) reveals a hidden hypothesis. The hypothesis is that the truthseeking objective of a trial is threatened less by a propensity to lie than by a reason to lie.
This hypothesis is grounded in the conventional wisdom that a propensity to lie is a general trait; it may or may not be exhibited on a particular occasion or on a particular subject. In contrast, a reason to lie is specific; it may be triggered by the occasion and subject matter of the trial itself. Accordingly, it is thought to be a more direct threat to the truth-seeking process.
Such conventional wisdom is valid in the abstract; but it breaks down when a propensity to lie actually manifests itself in the courtroom. If a witness on cross-examination denies a prior bad act which indicated a character for untruthfulness, and if it can be shown that the denial is false, the witness's lack of credibility is confirmed. He has demonstrated his willingness to lie under oath during the trial itself. The triers of fact would have strong reason to doubt his testimony on any issue. Their skepticism would be no less abiding than if the impeaching party had presented facts from which a possible bias or improper motive might be inferred. Nevertheless, Rule 608(b) prevents the triers of fact from learning that the witness has testified falsely about a fact relating to his character for untruthfulness. Extrinsic evidence to contradict the false testimony may not be presented. The cross-examiner must accept the witness's answer.
The anomaly is obvious, yet the limitation persists. One reason, perhaps, is that a crafty lawyer may be able to impugn a witness's character on cross-examination, without resorting to extrinsic evidence. As a distinguished federal judge has noted:
[T]he very question itself can convey the theoretically barred information to the jury. A skillful but unscrupulous cross-examiner can, with a great flourish of impressive-looking papers, ask the witness about incidents in his life in such detail as to time and place as to render his denials completely suspect.

3 J. WEINSTEIN, WEINSTEIN'S EVIDENCE (1984), at 608-25 and 608-26.

But if Rule 608(b) is intended, at least in part, to uphold the decorum of the court and to protect the dignity of witnesses, then Judge Weinstein's observation tells us that the rule is not working. It would be a strange logic that justifies a rule-imposed limitation on the ground that the rule can be circumvented anyway.
The other rationale for Rule 608(b) is judicial efficiency. The rule shortens a trial by avoiding a dispute over extrinsic facts relating to a witness's character for untruthfulness. Efficiency is a valid purpose. However, an absolute bar on extrinsic evidence accomplishes this purpose at a cost. It withholds from the triers of fact evidence that the witness has lied on cross-examination. Of course, this cost may be insignificant where the cross-examiner has succeeded in casting aspersions upon the witness by the use of thespian techniques. But in many cases the cross-examiner is neither so skillful nor so unscrupulous. In those cases the cost of an absolute rule is high.
Do we need the rigid restriction imposed by Rule 608(b) in order to achieve judicial efficiency? I think not. Elsewhere in the Idaho Rules of Evidence are provisions granting judges discretionary authority to prevent trials from becoming embroiled in collateral matters. Rule 403 authorizes a judge to exclude evidence, although relevant, "if its probative value is substantially outweighed by the danger of unfair prejudice, confusion of the issues, or misleading the jury, or by considerations of undue delay, waste of time, or needless presentation of cumulative evidence." In addition, Rule 611(a) empowers the judge to "exercise reasonable control over the mode and order of interrogating witnesses and presenting evidence so as to (1) make the interrogation and presentation effective for the ascertainment of truth, (2) avoid needless consumption of time, and (3) protect witnesses from harassment or undue embarrassment."
Both of these rules authorize the judge to limit questions or presentations of evidence where the probative value is outweighed by other considerations. In my view, these rules are sufficient to serve the objective of judicial efficiency. There is no persuasive reason to add the narrow and absolute prohibition against extrinsic evidence of untruthful character now found in Rule 608(b). Although this prohibition has an historical lineage, and exists in the rules of many other jurisdictions, we should consider abolishing it in Idaho. It does not serve well the purposes ascribed to it, and it is not consistent with the flexible tenor of the Idaho Rules of Evidence, taken as a whole.
Friday, December 18, 2009
But the junk science of repressed memory may finally be dead in some states. See generally cases on repressed memory evidence gathered in Spindle Law's evidence module.
In this case the nature of the claim is one which "by its very nature [is] subject to long-repressed memories." Diocese of Dallas, 379 Ill. App. 3d at 793, 885 N.E.2d at 385, citing Pedigo v. Pedigo, 292 Ill. App. 3d 831, 839, 686 N.E.2d at 1185 (1997). Any statute of repose applicable to incidents of childhood sexual abuse inherently fails to recognize that the nature of the claim is subject to long-repressed memories. Applying the repeal of the statute of repose retroactively to allow plaintiffs to bring suit long after the alleged abuse occurred would correct that problem and bring the current application of the law into line with the nature of the claim.

This opinion, by itself, leaves a theoretical possibility that in a sexual abuse case that is considered on the merits, a trial judge might condemn expert evidence to support a claim of allegedly long-repressed memories as irrelevant, unscientific, unreliable, or, more simply put, as junk science. Let's hope that this possibility materializes (or has materialized) in the courts of the State of Illinois.
Wednesday, December 16, 2009
In an address I am giving in early January at a conference in Munich -- a talk called "Trial by Mathematics - Reconsidered" -- I will make roughly the following comments at the end:
My thoughts about the possible purposes of formal analysis of factual inference and proof have a variety of causes and sources. But among the most important influences on my thinking about the purposes of formal theorizing about factual inference were my (feeble) efforts to discern the possible implications of fuzzy probability for uncertainty in law. Lotfi Zadeh's revolutionary way of looking at uncertainty struck me as both a powerful window and an inadequate one. In trying to puzzle out the reason for these opposing sentiments, I realized that fuzzy probability (to the extent that I understand it) addresses some types of uncertainty in law in an extremely enlightening way but that it does not address some other types of uncertainty in a way that I find useful or enlightening. For example, I found that I could imagine that fuzzy logic, with the appropriate semantic data, might someday usefully describe and predict -- to some degree -- the behavior of some legal concepts and words in the real judicial world, but I had a much harder time imagining -- and I still do -- how fuzzy logic could serve as a reasonably complete guide to argument directed at a court about the appropriate interpretation of some legal word or concept.1 [1. However, the distinction between predicting the behavior of legal concepts and making arguments about legal concepts in a forum such as a courtroom is neither sharp nor simple. For example, there is the simple fact that predictions about the behavior of legal concepts are usually an important part of legal argument addressed to a legal decision maker such as a judge.] This awareness (though quite possibly rooted in a mistaken premise) made me more acutely aware of the variousness of the ways in which the standard probability calculus (and other types of formal theory) might be used to deal with uncertain inference in law.
Another matter has strongly influenced my thinking about "trial by mathematics." Although in this talk [this paper] I have not said which of the possible purposes of formal analysis that I previously mentioned are most likely "viable," I do wish to suggest that it may be very fruitful to develop conceptual tools that combine the function of inference support with the function of increasing the transparency of inference. There is a reasonable chance that human beings can use formal analysis to (i) make more transparent to themselves some of their own cognitive or mental processes and (ii) thereby improve (at least occasionally) the workings of "the logic or logics that are immanent, or present, in existing ordinary inconclusive reasoning about uncertain factual hypotheses that arise in legal settings." If one shares my view that much human reasoning is subconscious but that there are degrees of awareness (rather than a crisp disjunction between awareness and non-awareness),2 [2. The important work of Timothy van Gelder draws on this insight. See his web site (with references and discussion) at http://timvangelder.com/] there is some reason to believe or hope that some formal conceptual tools (such as diagrams of arguments) can lead human beings to better understand what they think subliminally (to some degree) and thereby put them in a position to evaluate, critique, and improve some reasoning of theirs that was previously largely subliminal. Of course, even if we have such tools, there is no guarantee that "facilitated human awareness" will improve the accuracy of human inference in legal proceedings such as trials. That's in part because some human mental processes will forever -- or at least for the foreseeable future -- remain inaccessible to introspection. Nonetheless, human progress in the sciences (and in some other "intellectual fields"?) suggests that a hope that rigorous introspection can and will improve the accuracy of human factual inferences is not irrational. And with such a rational hope, I think, we can and should be content.
Law on Display: The Digital Transformation of Legal Persuasion and Judgment (2009)
Neal Feigenson and Christina Spiesel
“This is a widely informed, wisely reasoned, accessible analysis of how, for good or for evil, digital visual technology is transforming the conduct of trials and the very meaning of truth in the courtroom. It is essential reading alike for litigators and for everyone concerned with the legal fall-out of our culture’s accelerating shift from verbal to multimedia communication and comprehension.”
- Anthony G. Amsterdam, NYU School of Law
“Feigenson and Spiesel combine their impressive talents in law and visual persuasion to provide us with an insightful account of how new media are transforming legal advocacy in powerful new directions. Their critical analyses of fascinating case studies illustrate how cutting-edge lawyers are employing visual and digital media. The authors alert us to the new media's transformative capacity yet also its manipulative potential, and cogently discuss the ethical and legal quandaries that new media present for the courts. Highly recommended.”
- Valerie P. Hans
Sunday, December 13, 2009
I normally have a charitable intellectual attitude toward many modern but possibly-quixotic intellectual endeavors. For example, although I think the explicit (crude) materialism of some versions of Artificial Intelligence is incorrect (and naive & oxymoronic), I believe much can be learned through the study of the work of AI scholars who say they embrace (crude) materialism. However, the (few) examples of research in law & sociobiology (L & S) I have seen leave me, at last, with the firm impression that L & S is little more than warmed-over social Darwinism. (One prominent L & S theorist has moved into neuroscience and law. If past performance predicts future performance, there is reason to doubt the prospects for this new gambit.)
Bicycling enthusiast Chris Bray, a Bergen-Lafayette resident for the past five years, recalled at the meeting a conversation with Mayor Jerramiah Healy about making Jersey City more amenable to bicycle riding.
According to Bray, “[Healy] said, and this is a direct quote, ‘You bike around here, are you crazy? I want people to use public transportation.’ ”
Friday, December 11, 2009
Wednesday, December 09, 2009
What Does the Rule of Law Mean in New Jersey? (It Means that Ms. Lopez Gets to Keep Her Public Office.)
In the meantime Jersey City is governed by the Hon. Jerramiah Healy, Public Official 1 or 4 (so identified in indictments and criminal complaints), that fervent admirer of the presumption of innocence (for his appointed and indicted deputy mayor and former treasurer of his election campaign, in any event). Mr. Healy of course had utterly no idea that the fake bribes his deputy mayor took from the remarkable Mr. Solomon Dwek for the mayor's election campaign committee were illegal (even though Mr. Mayor was present on two occasions when Mr. Dwek offered the money to the deputy mayor in exchange for some help with a fake real estate development).
Oh yes, the indicted president of the Jersey City council also remains in office -- and votes on real estate tax abatement proposals for major developers and on other such unimportant matters. That's heartening too -- for it shows that he too believes in the presumption of innocence.
Well, let's see. I wonder how many unindicted Jersey City council members are left. One, two, ....
Mr. Putin should come to New Jersey and see how the rule of law works. He would find much to emulate here.
Tuesday, December 08, 2009
Psychologist Lenore Terr, a defender of repressed memory therapy, argues that repression occurs for repeated or multiple traumas, such as those suffered by a repeatedly abused child. Schacter notes that "hundreds of studies have shown that repetition of information leads to improved memory, not loss of memory, for that information." He also notes that people who have experienced repeated traumas in war, even children, generally remember their experiences. A person who suffers a great trauma often finds that she cannot get the event out of her mind or dreams. Terr's theory is that the child becomes practiced at repression to banish the awful events from awareness, and forgetting might aid in the child's survival. Her dissociative theory, however, is based on speculation rather than scientific evidence.
See the still-sparse authority on repressed memory in the node on repressed memory in Spindle Law's evidence module.
See also cases collected in the Advanced Evidence web site.
Monday, December 07, 2009
Enter your thoughts & news here and, please, in the evidence module of Spindle Law. Go in particular to this node (reliable vel non?) of the evidence module. (I am interested in legal developments abroad as well as domestic legal developments.)
It’s marvelous ... when you think of the hundreds and hundreds of priests and how very few have even been accused, and how very few have even come close to having anyone prove anything.

The NYTimes apparently thinks that Cardinal Egan's own words condemn him.
But what if what Cardinal Egan said was true?
In the Middle Ages it took a lot of witnesses to overcome the testimony of a single bishop (or so it is sometimes said, perhaps by the same people who say that folks in the Middle Ages thought that the earth is flat). Today -- as the New York Times would apparently have it -- not even the testimony of a hundred or a thousand bishops can overcome the testimony of even a single money-bedazzled plaintiff.
I am tired of anti-religious bigotry. It is time to attack this sort of bigotry. Perhaps the power of the "new media" can overcome the power of the "old media" in this arena? I surely hope so. Ye believers in religious freedom, unite!
P.S. I am against sexual predators. However, I do not favor the idea that every accuser of a priest should be believed -- and paid off. (Yes, Virginia, there are some liars out there.)
There are obvious difficulties with presenting the arguments in the original works of Derrida or Lacan, or Baudrillard. They do not write in any natural language, they do not put the premises before the conclusion, the conclusion is distributed over the text rather than appearing in any one sentence, positions are assumed to have been established outside the texts one is actually reading, in previous texts, or perhaps future ones, and so on.

James Franklin, What Science Knows and How It Knows It 42 (Encounter Books 2009).
For some unknown reason, Franklin's comments about postmodern folk put me in mind of a different kind of strange philosophy -- J.L. Austin's. Austin's "ordinary language" philosophy is still thought of as having been a respectable sort of thing. But some of Austin's extraordinary ordinary language can make one wonder why:
Are cans constitutionally iffy? Whenever, that is, we say that we can do something, or could do something, or could have done something, is there an if in the offing—suppressed, it may be, but due nevertheless to appear when we set out our sentence in full or when we give an explanation of its meaning?

J.L. Austin, “Ifs and Cans,” Proceedings of the British Academy (1956), in Philosophical Papers, p. 205 (Oxford: 2nd ed., 1970)
Sometimes I'm quite glad I decided to become a law professor rather than a modern (or, worse yet, postmodern) philosopher.
Saturday, December 05, 2009
by Peter Tillers
In 1970 Michael O. Finkelstein (with William B. Fairley) proposed that under some circumstances a jury in a criminal trial might be invited to use Bayes' Theorem to address the issue of the identity of the criminal perpetrator. In 1971 Laurence Tribe responded to this proposal with a rhetorically-powerful and wide-ranging attack on what he called "trial by mathematics." Finkelstein responded to Tribe's attack by further explaining, refining, and defending his proposal. After a brief rejoinder, Tribe fell silent – forever – on the issue of the use of mathematical and formal methods to dissect or regulate uncertain factual proof in legal proceedings. However, Tribe's silence did not end the debate about "trial by mathematics." Tribe's attack on "trial by mathematics" had exactly the opposite effect: Tribe's attack precipitated a decades-long debate about mathematical analysis of factual inference and proof. However, that debate, which continues to this day, became generally (but not uniformly) unproductive, sterile, and repetitive long ago. Although surely a variety of factors led to this unfortunate condition, the debate about "trial by mathematics" was doomed to die with a whimper rather than a bang because two misunderstandings plagued much of the debate from the very beginning. The first misunderstanding was a widespread failure to appreciate that mathematics (including the probability calculus and Bayes' Theorem) is part of a broader family, or class, of rigorous methods of reasoning, a family of methods that is often called "formal." The second misunderstanding was a widespread failure to appreciate that mathematical and formal analyses (including but not only analyses that use numbers) can have a large variety of purposes. However, it is not too late to right this upended ship. 
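For readers unfamiliar with the mechanics of the Finkelstein-Fairley style of argument, the basic arithmetic can be sketched in the odds form of Bayes' Theorem: posterior odds = prior odds × likelihood ratio. The numbers below are purely hypothetical illustrations of my own, not figures from the Finkelstein-Fairley proposal or from any actual case:

```python
# A minimal sketch of a Bayesian update on the identity of a perpetrator.
# All numbers are hypothetical and chosen only for illustration.

def posterior_probability(prior_prob: float, likelihood_ratio: float) -> float:
    """Update a prior probability of identity given a piece of evidence.

    likelihood_ratio = P(evidence | accused is the source) /
                       P(evidence | accused is not the source)
    """
    prior_odds = prior_prob / (1.0 - prior_prob)        # convert probability to odds
    posterior_odds = prior_odds * likelihood_ratio      # Bayes' Theorem, odds form
    return posterior_odds / (1.0 + posterior_odds)      # convert odds back to probability

# Suppose the non-mathematical evidence alone supported a prior probability
# of 0.25, and a forensic match is 50 times more likely if the accused is
# the source of the trace than if someone else is.
print(round(posterior_probability(0.25, 50.0), 3))  # 0.943
```

The example also hints at why the debate became so heated: the output is only as good as the prior probability and the likelihood ratio fed into it, and it is precisely the provenance of those inputs in a courtroom that Tribe attacked.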
Before any further major research project on "trial by mathematics" is begun, interested researchers in mathematics, probability, logic, and related fields, on the one hand, and interested legal professionals, on the other hand, should try to reach agreement about the possible distinct purposes that any given mathematical or formal analysis of inconclusive argument about uncertain factual hypotheses might serve. Putting aside the special (and comparatively trivial) case of mathematical and formal methods that make their appearance in legal settings because they are accoutrements of admissible forensic scientific evidence, I propose that discussants, researchers, and scholars of every stripe begin by carefully considering the possibility that mathematical and formal analysis of inconclusive argument about uncertain factual questions in legal proceedings could have any one (or more) of the following distinct purposes:
1. To predict how judges and jurors will resolve factual issues in litigation.
2. To devise methods that can replace existing methods of argument and deliberation in legal settings about factual issues.
3. To devise methods that mimic conventional methods of argument about factual issues in legal settings.
4. To devise methods that would capture some but not all ingredients of argument in legal settings about factual questions.
5. To devise methods that support or facilitate existing, or ordinary, argument and deliberation about factual issues in legal settings by legal actors (such as judges, lawyers, and jurors) who are generally illiterate in mathematical and formal analysis and argument.
6. To devise methods that clarify – that better express and increase the transparency of – the logic or logics that are immanent, or already present, in existing ordinary human inconclusive reasoning about uncertain factual hypotheses that arise in legal settings.
7. To devise methods that have no practical purpose – and whose validity cannot be empirically tested – but that serve only to advance understanding – possibly contemplative understanding – of the nature of inconclusive argument about uncertain factual hypotheses in legal settings.
Tuesday, December 01, 2009
My congratulations to Henry Prakken, Bart Verheij, and Floris Bex, who have collaborated in meticulous and pathbreaking research on the nature of factual inference and proof. Their work will dictate many research agendas for decades to come.
Saturday, November 28, 2009
Friday, November 27, 2009
We talked some more and she said to me, "After the war I really believed 'never again!'." But, of course (we agreed), it has happened again -- in Biafra, in Indonesia (the Sukarno years), in Cambodia (Pol Pot), Uganda (Idi Amin), Rwanda, the Sudan, and so on.
Ach weh! Human depravity!
I remember telling my high school teachers of 55,000 Latvians -- that's the number I had heard -- having been deported to Siberia by the Stalinists before WWII. I think my teachers thought of me as a rabid right-wing nut. I hope my former teachers later read Solzhenitsyn -- or, at least, heard of his accounts of the Soviet Gulag Archipelago.
I also remember the high school teacher who tossed me out of his course ("Problems of Democracy") when he said, "Anyone in America can become President," and I moaned, "No, that's not true." That former teacher of mine taught "Moral Rearmament" in his course (a public high school course). He had ordered me to keep my mouth shut after I had questioned (politely, I thought) some of his strange claims. It was after that happened that he threw me out of his class. And I was a conservative-minded fellow at the time. But not conservative enough.
Solzhenitsyn. He got a Nobel Prize. And he was invited to give a commencement address at Harvard. He disappointed his hosts by saying (in 1978), for example:
But the persisting blindness of superiority continues to hold the belief that all of the vast regions of our planet should develop and mature to the level of contemporary Western systems, the best in theory and the most attractive in practice; that all those other worlds are but temporarily prevented (by wicked leaders or by severe crises or by their own barbarity and incomprehension) from pursuing Western pluralistic democracy and adopting the Western way of life. Countries are judged on the merit of their progress in that direction. But in fact such a conception is a fruit of Western incomprehension of the essence of other worlds, a result of mistakenly measuring them all with a Western yardstick. The real picture of our planet's development bears little resemblance to all this.

The Solzhenitsyn Reader pp. 563-564 (Edward E. Ericson, Jr. & Daniel J. Mahoney, eds., ISI Books, 2006).
His Harvard hosts had apparently expected this champion of liberty to be a liberal campaigner against tyranny.
Tyrants victimize all sorts of people. Stalin tormented all sorts of people -- right-wing, left-wing, middle of the road, Jews, non-Jews, Poles, Chechens. It rained on both the poor and the rich, the good and the evil. He repressed, killed, and imprisoned them all. Consider Vasily Grossman, Life and Fate (Harper & Row: Robert Chandler, trans., 1985) (Translator's Introduction, p. 8: "In February 1953, however, as a new series of purges, directed particularly at Jews, gathered momentum, Grossman was again attacked, possibly at the instigation of Stalin himself. During the following months he was repeatedly and hysterically denounced as a Jewish nationalist, a reactionary idealist alienated from Soviet society..."). See also John Garrard & Carol Garrard, The Bones of Berdichev: The Life and Fate of Vasily Grossman (Free Press, 1996) (the blurb on the book's flap states: "Born a Russian Jew and an ardent patriot of the Soviet motherland, Vasily Grossman rationalized away the Stalinist horror of his time as he chronicled the Red Army's westward sweep during World War II, becoming the Soviet Army's premier wartime correspondent. It was not until he discovered 30,000 victims were massacred by Nazi forces in his hometown of Berdichev--including his own mother--that he confronted his own Jewishness and the genocidal horror of the Holocaust. Determined to tell the story of Soviet complicity with the Nazi extermination of Russian Jewry, Grossman was labeled an enemy of the state by both Stalin and Kruschev--barely escaping Stalin's death squads--and his exposes were suppressed and buried deep within the Communist Party's archives.")
I stand against tyranny: right, left, or middle.
Sunday, November 22, 2009
1. Judicial hostility towards ‘quantification’ of reasonable doubt
The U.S. constitutional guarantee of due process permits an accused to be convicted of a crime after trial, only if the evidence presented at the trial proves the accused’s guilt beyond a reasonable doubt in the eyes of the trier of fact. Occasionally, an actor in a criminal trial—typically a prosecutor, but sometimes a trial judge—will use numbers of one kind or another in an attempt to explain or clarify the reasonable doubt standard in some fashion. Appellate courts have condemned such attempts at quantification of reasonable doubt whenever they have encountered them. For example, in one case, a court condemned a prosecutor’s use of a bar graph that displayed, in percentages, the prosecutor’s view of the numerical equivalents of various levels of proof....
It could be argued that judicial hostility to quantification of reasonable doubt is only a transitory state of affairs. Many American judges now accept that mathematical and quantitative methods can shed light on many legal problems. All American judges now either accept or must accept that the results generated by mathematical and quantitative methods are often admissible at trial. Perhaps the ever-increasing use of mathematical and quantitative methods in litigation both foreshadows and reflects a transformation in judicial attitudes towards the hard sciences. Perhaps, such a change in the intellectual culture of the judiciary will create fertile judicial soil for the eventual ‘mathematization’ of the reasonable doubt standard. Perhaps so, but solid evidence that mathematization of the reasonable doubt standard will come to pass is hard to find.
Consider Judge Weinstein, a leading authority on the American law of evidence. He has long advocated more extensive forensic use of statistical methods. If any reputable judge were to advocate quantification of the reasonable doubt standard, one might expect that Weinstein would be the one to do so. A search of Weinstein’s judicial record does show that Weinstein has written two opinions that discuss quantification of the reasonable doubt standard. See United States v. Fatico, 458 F.Supp. 388, 409–11 (E.D.N.Y. 1978) and Vargas v. Keane, 86 F.3d 1273, 1281–84 (2nd Cir. 1996) (Weinstein, concurring, sitting ‘by designation’—i.e. temporarily—on the United States Court of Appeals for the Second Circuit). However, neither of these opinions directly embraces quantification of the reasonable doubt standard. ... If even math-friendly judges such as Jack Weinstein do not endorse the use of numbers in criminal trials to clarify or reformulate the reasonable doubt standard, the prospects for mathematical quantification at trial of the reasonable doubt standard would seem to be virtually nonexistent.
But what are we to make of another decision by the very same Jack B. Weinstein: United States v. Copeland, 369 F. Supp. 2d 275 (E.D.N.Y. 2005)? In Copeland, Weinstein used a numerical probability (expressed as a percentage) to quantify a standard of persuasion (‘reasonable probability’). Is Copeland compatible with the prevailing rule that reasonable doubt cannot be quantified in trials? If ‘reasonable probability’ can and should be quantified, why cannot and why should not ‘reasonable doubt’ be quantified? Does Copeland amount to a collateral attack on the rule prohibiting quantification of the reasonable doubt standard?
3. Myths about quantification of reasonable doubt
The myth of ‘trial by mathematics (or statistics)’
The language of some judicial opinions suggests that some judges believe that quantification of the reasonable doubt standard entails the vice of trial by statistics. Now trial by statistics—whatever it is—might or might not be a bad thing. But it is important to understand that ‘trial by mathematics’ does not necessarily entail ‘trial by statistics’. Assume that the phrase trial by mathematics refers to trials in which decisions at trial are governed by the (use of the methods of) probability calculus. Assume further that a judicial trial becomes a trial by mathematics, if the law quantifies burdens of persuasion in criminal trials by informing triers of fact that they may find a defendant guilty of crime (or find facts essential to criminal guilt) if and only if they believe that the probability of criminal guilt (or of each fact essential to guilt) exceeds some specified numerical probability. This sort of trial by mathematics—if it be trial by mathematics—does not necessarily involve statistics. Probabilities are not the same thing as statistically grounded probabilities. Yes, modern statistical analysis does involve the probability calculus. But, as the word ‘statistics’ implies, statistical analysis involves and requires systematic collection of data or observations, data and observations that can be summarized in the form of statistics. It is possible to talk—and talk coherently—about odds or probabilities without systematically gathering data, compiling statistics or analysing systematically gathered collections of data. In short, although it is not possible to do statistics without doing probability, it is possible to do probability without doing statistics. Hence, any uneasiness about the use of statistical methods in criminal trials does not explain the judiciary’s uneasiness about quantification of the reasonable doubt standard.
A scholarly debate about the virtues and vices of mathematical analysis of evidence has raged for more than three decades. The outcome of that debate remains unclear: it is unclear whether the proponents or the opponents of mathematical analysis of evidence and inference will ultimately prevail. (Tillers confesses that he’s betting on the advocates of heuristic mathematical analysis.) But one thing about that long-running and often acrimonious debate is relatively clear: most of that debate is immaterial to the question of quantification of the reasonable doubt standard. Scholarly arguments about mathematical analysis of evidence and inference largely have to do with the logic or structure of argument about and from evidence—i.e. the logic or structure of factual or evidential inference or evidential argument. Like other forms of inference, evidential inference involves at least one step—a step, e.g. from an evidential premise to a factual conclusion. (That a step is required is the reason why we call the step ‘inference’.) Disagreements about mathematical analysis of evidence and inference mainly involve disagreements about how inferences are or should be drawn. The sorts of quantitatively phrased standards of persuasion under discussion here do not implicate controversies about the structure of evidential inference because the type of quantification under discussion here specifies only how much uncertainty is acceptable at the end of the day—after the trier has used whatever logic it chooses to use to draw inferences from and about the available evidence. Quantified standards of persuasion of this sort appear to say nothing about the kind of logic or reasoning the trier should use to reach its final (uncertain) conclusion.
Quantification of bottom-line inferences does contemplate that the trier of fact will measure and express its uncertainty by using the language of probabilities and odds. But it is hard to see why a trier’s use of the language of probabilities and odds to describe the extent of its uncertainty about its ultimate factual conclusions compels the trier to use any particular method for drawing inferences from and about evidence, let alone a method of inferential analysis that is rooted in the standard probability calculus. ...
If numerical quantification of a standard of persuasion does not require that mathematics or numbers be used to analyse evidential inference, not much is left of the claim that quantification of a standard of persuasion amounts to trial by mathematics. It must be granted, of course, that quantification of the reasonable doubt standard in terms of odds, probabilities or chances...would require a trier such as a juror to use numbers when interrogating itself about the sufficiency and strength of the evidence against an accused. ...But so what? Numbers are not inherently evil things. The use of numbers to express the degree of a person’s uncertainty about a factual possibility does not require the use of higher mathematics—or even intermediate mathematics. Arithmetic will do. ...
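The arithmetic the quoted passage has in mind really is elementary. As a purely illustrative computation (the 0.95 figure is hypothetical, not a threshold endorsed by the authors or by any court), a probability can be restated as odds with nothing more than division:

```latex
% Converting a probability threshold p into odds requires only arithmetic:
%   odds = p / (1 - p)
% Illustration (hypothetical number, not a legally endorsed threshold):
\[
\text{odds} = \frac{p}{1-p}, \qquad
p = 0.95 \;\Rightarrow\; \text{odds} = \frac{0.95}{0.05} = 19,
\ \text{i.e. } 19\!:\!1 \text{ in favour.}
\]
```

Nothing in this conversion involves statistics, data collection, or higher mathematics; it is exactly the kind of "arithmetic will do" point the passage makes.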
The curious myth of ‘mathematical certainty’
Occasionally, it is said that mathematical analysis of evidence or mathematical accounts of inference is unacceptable because mathematical analysis aims for a kind of certainty—mathematical certainty—that is unattainable in ordinary affairs or in inferential deliberation. Is it conceivable that this sort of argument would be made about quantification of the reasonable doubt standard—that quantification of the reasonable doubt standard would somehow convert the standard into one that requires mathematical certainty of guilt? We hope not. But if the argument were to be made, it would be so preposterous that it might be difficult to know what to say about it.
The objection that mathematical analysis of evidence and inference entails a (spurious) mathematical certainty about evidence and inference fundamentally misconceives the entire point of using probability theory to analyse factual proof. ... The entire point of using probability theory is to talk coherently about uncertainty—not to eliminate uncertainty.
The myth of excessive mathematical precision
Courts often suggest that quantification of the reasonable doubt standard entails precise quantification of the standard—and that such precise quantification would be a bad thing because a quantitatively precise formulation of the burden of persuasion in criminal trials would be excessively precise. ... The objection to quantification of standards of persuasion on the ground that quantified standards are precise may seem to require no explanation. The notion of ‘precise quantification’, however, has various connotations, and each of these connotations seems to have different wellsprings.
In some instances, the thesis (or suspicion) that quantification of matters such as probable cause and reasonable doubt necessarily produces an excessive and spurious degree of precision about uncertainty may be rooted in the following two related assumptions:
(i) Any quantification of the reasonable doubt standard in terms of probabilities would have to use relatively precise rather than relatively coarse probabilities—such as probabilities that run to three or even to five or more decimal places, e.g. the probability 0.953 or the probability 0.95312, and
(ii) The degree of doubt and uncertainty about matters such as criminal guilt is necessarily, relatively imprecise; it is always comparatively coarse.
The objection to quantification of standards of persuasion is not well grounded if it rests only on these two propositions. It is very probably true that triers’ uncertainty about many types of facts that are legally essential to a finding of criminal guilt—about possible facts such as ‘intent to kill’—is ordinarily relatively coarse. However, nothing in mathematical logic or in probability theory dictates that mathematical measures of uncertainty must be highly granular. Today there is an entire family of mathematical theories of uncertainty that are dedicated to the study of ‘imprecise probabilities’. Even before the advent of nonstandard mathematical approaches to uncertainty, it was well known that probabilities can be imprecise. ...
The objection to ‘precise quantification’ of burdens of persuasion sometimes may have a basis entirely different from the (erroneous) notion that mathematical probabilities must be granular. Consider again the passage by the U.S. Supreme Court quoted above. In part of that passage, the Court emphasized that precision about probable cause is bad because probable cause ‘depends on the totality of the circumstances’. Pringle, 540 U.S. at 371. The evil hinted at by this part of the Court’s language is not any excessive granularity of probability judgements, but the invariability of the degree of probability that, the Court suggests, would be required for a finding of ‘probable cause’ were the probable cause requirement quantified.
The notion that mere use of the language of mathematical probability to describe the relationship between uncertainty and probable cause requires that ‘probable cause’ be assigned some invariant numerical (‘mathematical’) probability is almost silly beyond words. ...
The myth of an absolute disjunction between qualitative and quantitative judgements
Courts frequently declare that the reasonable doubt standard requires the trier of fact to make qualitative rather than quantitative judgements. To make sense of this proposition—to make it amount to more than the tautology that a verbal formulation of the reasonable doubt standard is not a numerical formulation—it is necessary to understand it as an assertion that judgements about states of the world are either qualitative or quantitative, but not both. If this is the kind of notion that is at work here, it is hard to understand.
Perhaps the thesis of a disjunction between quantitative and qualitative judgements rests on the premise that numbers somehow speak for themselves—and that, thus, no qualitative human thinking is required when numbers are involved in an argument or assessment. There are any number of difficulties with this idea. The first is that numbers often do not come into existence ‘on their own’. That is the case here, where numbers are not being used to tally—to enumerate—the number of entities (such as automobiles) in some domain (such as some street or city). Furthermore, even after numbers have appeared or have been made to appear, they must usually be interpreted by human actors and often arguments about the significance of the available numbers for the thesis in question must be constructed and assessed. Such activities seem to involve ‘qualitative’ mental processes as well as quantitative ones.
Perhaps the thesis of a disjunction between quantitative and qualitative judgements about evidence involves the notion that mathematical procedures for the assessment of evidence amount to mechanical recipes—‘algorithms’—that automatically—or, in any event, in a machine-like fashion—determine the probative value of evidence. But debates about the advantages and disadvantages of ‘algorithmic’ methods of analysing evidence are beside the point here: Algorithmic reasoning would not be required by a quantified legal standard of persuasion that merely specifies the level of certitude that must exist in the mind of a trier of fact if the trier is to take some action such as casting a vote in favour of verdict of guilty in a criminal case. A quantified legal rule of this sort assumes that the trier somehow reaches a conclusion about his level of certitude. It does not describe the type of reasoning that the trier should use to reach a conclusion about his or her level of certitude or incertitude. See discussion [above].
The myth of the unquantifiability of degrees of belief
More than half a century ago, the dean of all scholars of the Anglo-American law of evidence—John Henry Wigmore—wrote:
The truth is that no one has yet invented or discovered a mode of measurement for the intensity of human belief. Hence there can be yet no successful method of communicating intelligibly to a jury a sound method of self-analysis for one’s belief. If this truth be appreciated, courts will cease to treat any particular form of words as necessary or decisive in the law for that purpose; for the law cannot expect to do what logic and psychology have not yet done.9 John H. Wigmore, EVIDENCE IN TRIALS AT COMMON LAW Section 2497 (3d ed. 1940)
Wigmore’s language is sweeping. The sentiment it expresses is practically hypermodern. Just as Kenneth Arrow argued that interpersonal comparisons of preferences are impossible, Wigmore seemed to suggest that interpersonal comparisons of the strength of credal states—interpersonal comparisons of the strength of beliefs about states of the world—are impossible. Indeed, Wigmore seemed to go yet further: he seemed to assert that ‘intrapersonal’ comparisons of the strength of credal states are also impossible—that individuals cannot compare the degree of their own uncertainty about the truth or falsity of different propositions about the world. In short, Wigmore seemed to suggest that in the end, we just feel that this or that proposition is true or false and that we cannot tell others or even ourselves just how strongly we feel that this or that proposition is in fact true or false.
If it is true that both intrapersonal and interpersonal comparisons of degrees of persuasion or degrees of uncertainties are impossible, it seems to follow that all legal rules mandating a certain level of certitude on the part of the trier of fact in specified situations are both meaningless and useless. ...
But American law on standards of persuasion does not bear traces of such hyper-skepticism. A legally mandated standard of persuasion for criminal trials—the reasonable doubt standard—does exist. Furthermore, American law mandates the use of various other standards of persuasion for other kinds of cases and situations. ...
That’s the way things stand. But do legal standards of persuasion amount to a shell game? Do they amount to a kind of verbal sound and fury signifying nothing?
The thesis that the strength of human credal states is not knowable or communicable cannot be comprehensively evaluated in a paper such as this; this comment would have to become a treatise. However, it should be noted that it is not self-evident that Wigmore’s radical thesis about credal states is true. ...
The immediate impetus for Wigmore’s expression of skepticism about the ability of people to determine and describe the degree of their uncertainty was not Wigmore’s wish to demonstrate the futility of using numbers to quantify standards of persuasion: the immediate impetus for Wigmore’s skeptical outburst was instead his desire to demonstrate the futility of using words to explain the reasonable doubt standard. Of course, had Wigmore been asked, he would also have condemned the use of numbers to describe the meaning of the reasonable doubt standard. But the point remains that Wigmore’s critique cuts at least as much against verbalization as against quantification of the reasonable doubt standard. We emphasize this point because it suggests an important insight about the true nature of debates about quantification of standards of persuasion such as the reasonable doubt standard.
The true question is not whether a standard such as the reasonable doubt standard should be quantified or not quantified. The question of quantification is tied up with the more general question of the advantages and disadvantages of using both words and numbers to describe a standard of persuasion such as the reasonable doubt standard. When the question of quantification is framed in this way, we can more readily appreciate that words as well as numbers can be used and are used to grade—quantify!—degrees of certainty or uncertainty. The debate about quantification is not really about quantification. If we reject (as we should) the radical thesis that uncertainty is not subject to any discernible gradations, the debate about quantification is really about the kind of language that should be used to grade and quantify uncertainty and to communicate to triers of fact in legal proceedings, society’s judgement about the kind and amount of factual uncertainty that society views as acceptable or unacceptable in criminal trials.
The myth of the (allegedly) necessary—but (allegedly) spurious—objectivity of quantifications of reasonable doubt
This myth is a noxious but hardy weed. It first erupted—in modern legal memory—in 1971, when Laurence Tribe made his renowned attack on trial by mathematics. Mathematical analysis of evidence, he argued, can perhaps do a nice job of handling ‘hard variables’, but quantitative analysis (in the form of probability theory) either cannot quantify soft variables or does a lousy job of quantifying them. Although Tribe does not define hard variables, he intimates that they amount to readily enumerable—readily countable—phenomena.
The notion that soft variables cannot be quantified is a myth. For example, I can and do make uncertain judgements about how my neighbour will feel next time I see her—and, if asked, I can and will tell you what I think are the chances that I am right. ...
Given the withering scholarly criticism that has been directed at the myth that probability theory deals with objective or hard facts and therefore cannot regulate uncertain (or inconclusive) reasoning about nonobjective phenomena, one might think that even judges would now refrain from asserting that quantitative methods cannot deal with ‘soft variables’. But it is not so—at least not universally so: the wrong-headed notion that mathematical measures of the strength of evidence can measure only the strength of evidence (or judgements about the strength of evidence) about ‘objective’ phenomena has resurfaced in judicial discussions of the reasonable doubt standard.
To see what the authors have to say in the remainder of the article -- in particular, to see what they have to say about what "genuine issues" are raised by proposals for mathematical formulations of the reasonable doubt standard -- you will have to read the article itself. I suspect that what they have to say will both surprise and interest you -- but, then, I am biased on this point.
Go here for legal material on proof beyond a reasonable doubt.
Thursday, November 19, 2009
N.B. Perhaps it is fitting that Wigmore now has an ethereal existence as well as a material one: Wigmore takes up a substantial chunk of the Loislaw databases. Perhaps John Henry W is lurking thereabouts as well.
Monday, November 16, 2009
I’m often asked when we’re going to “release” Spindle Law, and I always give a too-long answer, not only because I’m unfortunately in the habit of answering questions that way, but also because there’s no single event that I equate with Spindle’s “release.” There are many steps in the process of making the site more and more accessible, and more and more useful to more and more people, and since we don’t plan a big marketing campaign to accompany any of these steps, we don’t have a very good reason to label any one of them in particular our “release.”
We’re now, though, preparing to take a step that’s probably as close as any other to what people are thinking when they ask the release question. Joel is building features that will allow us to open the site, partially, to people who are not signed in (that’s what’s called “anonymous” access to the site), and to allow those who want to try out the whole of the site to sign up on their own (“self-registration”). More specifically, the implementation we’re planning will allow anonymous researchers to view our whole hierarchy of topics and rules, and a few other things, too. We hope lawyers searching the web for answers to legal questions will find us this way. (Many lawyers begin their research with a web search, it turns out.) Until they sign up and sign in, they won’t be able to view authorities or contribute, nor will they have access to SpinDoc, Spindle’s research-collection and writing tool. Signing up and then signing in are easy, though, and once that’s done they’ll have access to all of what we have to offer.
At least that’s what we’re planning right now. When it’s done, one of us will have more to say about it, I’m sure. And, of course, there will still be many other steps of “release” thereafter: We’ll keep releasing new content and new features, at some point we’ll take “alpha” off the top of each page on the site (maybe for a while we’ll replace it with “beta,” maybe not), and I hope it won’t be too long before we execute a plan to make some money (about which I’ll also post something; it’s not a secret). Among other things.
The initial upgrade, or quasi-release, of Spindle Law should take effect tonight.
In an effort finally to master the mysteries of Crawford, I began to print out the comprehensive (and characteristically irreverent) discussion of Crawford by Kenneth Graham, Jr., in the 2009 "pocket part" of 30A C. Wright (deceased) & K. Graham, Jr., Federal Practice & Procedure. I emphasize the word "began": after I ordered the computer printer to print, I noticed it was churning for quite some time. On closer inspection, I realized that I had ordered a print job of approximately 255 pages. And these 255 pages include only Graham's discussion of Crawford; Graham's discussion of the Court's sequelae to Crawford is found elsewhere. For fear of decimating the forests of the world, I ordered the printer to stop. I wept. I wept not about Graham's wordiness (he is indeed a bit wordy), but about a Supreme Court opinion that demands so much explication -- explication that mostly consists of passages that point out the insoluble riddles and paradoxes that Crawford presents.
Why do so many of my colleagues seemingly relish the task of talking about Crawford and its successors? The labors of those who work at explaining Crawford are very much like the labors of Sisyphus: such labors are endless and fruitless! (I exaggerate, of course -- but only slightly.) One does begin to wonder whether so much human intelligence should be devoted to such a (largely) pointless task. However, law teachers are probably incapable of doing anything other than law teaching. So perhaps it's just as well that they are consigned to such labors: they believe they are doing something useful, and this feeling of being useful perhaps impedes the development of serious revolutionary (i.e., rabble-rousing) sentiments among at least a portion of the intelligentsia (and one hopes that most of the rest of the intelligentsia has genuinely useful work to do).
Go here for more (and more serious) material on the Sixth Amendment Right of Confrontation.
Sunday, November 15, 2009
Temporal logic is essential for the analysis of fact investigation in (or for) litigation, because fact investigation is a dynamic process.
Friday, November 13, 2009
Inter-University Conference on Justice and Fairness at TUM Munich
On Proportionality and Justice – Quantitative Aspects of Justice and Fairness
Technische Universität München & Ludwig-Maximilians-Universität München, January 6 – 7, 2010
December 11th, 2009: Abstract submission deadline
December 16th, 2009: Notification of acceptance
December 20th, 2009: Early registration deadline
January 6 – 7, 2010: Conference
February 15th, 2010: Paper (e-book) submission deadline
April 30th, 2010: Paper (hard copy) submission deadline
From Wednesday, January 6 to Thursday, January 7, 2010, an inter-university and interdisciplinary conference on justice and fairness will be held at the TUM – Technische Universität München. This is the first analytically and quantitatively oriented inter-university conference on fairness and justice in Germany, and the first of its kind at TUM and LMU.
The Technical University of Munich (TUM) and the University of Munich (LMU) – both among the highest-ranked universities in Germany (see below) – cordially invite you to this inter-university and interdisciplinary conference on justice and fairness.
The conference is interdisciplinary: we invite papers from philosophy, didactics, computer science, media science, literature, social science, economics, and related disciplines. Justice has at least two sides. At present, great effort is focused on institutionalization; whether this, with its associated costs and bureaucracy, is worthwhile is one question among others. This view, especially in Germany, sometimes overlooks the fact that in the end a human being must make a decision – a decision for which he or she must take responsibility. A glance at history shows that the quantification of law went hand in hand with the humanization and equalization of society. Since scholars from diverse faculties engage extensively with questions of justice, we invite them to an interdisciplinary discussion. Our thoughts and actions, our perception, imagination, and experience depend more and more on informational, computational, and robotic systems of increasing complexity and autonomy. What are their epistemic, ethical, and societal challenges for the future of mankind? The workshop will promote scholarly dialogue on all aspects of this turn of society.
Ruth Hagengruber (University of Paderborn)
Klaus Mainzer (TUM, Munich)
Lothar Philipps (LMU, Munich)
Peter Tillers (Cardozo Law School, New York)
RELEVANT RESEARCH AREAS
We call for papers that cover topics pertaining to analytical and quantitative aspects of fairness and justice from the following list (but not restricted to this list):
Analytical philosophy: from quantity to quality, sets, structures, processes
Ontology / Theory of Mind: limits of revaluation …
Metaphysics: foundation of quantification …
Ethics: institutional vs. human-centered view, problem of comparison …
Law: compensation & degree of penalty …
Economics: insurance, the value of humans as capital goods ...
Business studies: benefits, earnings & compensation …
Representation in media: representation of humans and life in economic, quantitative terms
Literature: cross-culture comparisons & concepts
Healthcare: value of quality time and care
Medicine: transplantation, ranking in palliative care, emergency medical aid …
Theology: concepts in Judaism, Christianity, Islam, Buddhism …
Computer Science: how to allocate scarce resources intelligently and fairly, e.g. under constraints?
Mathematics: game theory, geometry, etc.
Didactics: how to teach quantification understandably?
We invite (analytical) philosophers, mathematicians, lawyers, economists, computer scientists, theologians, physicians, didacticians, media scientists, and other scholars interested in discussing these or related topics to join the workshop for an exchange of ideas on these subjects. Contributions focused on quantitative and analytical issues of fairness are especially encouraged, but papers on any aspect of justice are welcome. Final papers should be no longer than 10 to 15 pages (3,000 to 5,000 words), to be presented in at most 20 minutes, with up to 10 minutes for discussion. All accepted papers will be published in an e-book.
Selected papers from the conference will be considered for publication (hard copy). Contributions in English as well as in German are welcome.
Submissions should include (a) a title and (b) an extended abstract of 300–500 words. Submissions should preferably be sent as PDF (or MS Word or RTF if PDF is not possible) attached to an email containing the author's name, affiliation, contact details, and the title of the submission.
Submissions should be made electronically, as PDF, RTF, or Word, to: firstname.lastname@example.org
Registration fees (in EURO): 20 €; after December 20th, 2009: 30 €.
To book accommodation, please visit the official conference web site. The TUM campus has no accommodation facilities (hotels) of its own, but its central location and good public transport connections make it possible to stay anywhere in Munich.
Technische Universitaet Muenchen, Chair for Philosophy and Philosophy of Science and Technology & Carl von Linde-Academy, Munich
Ludwig-Maximilians Universitaet Muenchen, Chair for Philosophy of Law, Munich
CORRESPONDENCE AND SUBMISSIONS
Rainhard Z. Bengez (email@example.com) Department for Philosophy and Philosophy of Science and Technology, TUM – Technical University of Munich, Arcisstr. 21, D-80333 Muenchen, Germany
Rainhard Z. Bengez, Technical University of Munich, Germany
Lilija Mieliauskiene, Kaunas Technology College, Lithuania
Lothar Philipps, University of Munich, Germany
Wolfgang Pietsch, Technical University of Munich, Germany
Gerhard Spilgies, Institute for Anthropotechnical Studies, Spain
Carsten Stolz, University of Ingolstadt, Germany
Fu Ching Wang, National Yunlin University of Science & Technology, Taiwan
The Technische Universität München (TUM: http://www.tum.de ) and the Ludwig-Maximilians-Universität (LMU: http://www.lmu.de ) are German Universities of Excellence and, according to the QS World University Ranking 2009, the best German universities (http://www.topuniversities.com/university-rankings/world-university-rankings/2009/results ); both have long historical traditions. They operate several famous research centers, e.g., in Garching, Weihenstephan, Rechts der Isar, and Martinsried. The conference will take place in TUM's central buildings (Stammgelände) in the center of Munich, near beautiful museums, Schwabing, and TUM's robotics center. This workshop is co-organized by TUM's interdisciplinary center: http://www.cvl-a.de
Wednesday, November 11, 2009
Alpha is an ambulatory person, a pedestrian. Beta is a rich boor. C is a chauffeur. He works for Beta.
One day Alpha ambulates; he goes for a walk. Beta has a cold, so he stays home in bed. However, Beta orders C to get some groceries.
On his way to the grocery store C runs into A -- he literally and actually runs into A -- with B’s Rolls Royce.
After the accident, C tells O, an onlooker, “I’m about to kick the bucket. I’m gonna have a heart attack. I just ran into A. And I’m injured. My arm is broken and my leg is broken.”
On hearing of the accident, B said to Z, “M’gosh, C was reckless.”
Lawsuits, naturally, ensue, including the following federal civil actions:
A brings a civil action against B. A seeks to recover, on a theory of respondeat superior, for injuries inflicted by the negligent acts of B's employee, C, while acting in the course of his employment by B.

A brings a separate civil action against C. He seeks to recover for personal injuries negligently inflicted by C.

C brings a civil action against B. He seeks to recover damages for injuries sustained while on the job.

B's pretrial statement "M'gosh, C was reckless" is offered in the trial of A v. B. B objects that the statement is hearsay. The objection is overruled. Explain why.

B's pretrial statement is offered in the trial of A v. C. C objects on the ground of hearsay. A replies that B's statement is an admission. A is wrong. Please explain why.

A tries again: He states, "Your Honor, B's statement is plainly against his interest. It is admissible under the exception for statements against interest." The trial judge responds, "No it isn't." Explain the trial judge's reply. Give two reasons why the trial judge is correct.

C's pretrial statement "I'm about to kick the bucket. I'm gonna have a heart attack. I just ran into A. And I'm injured. My arm is broken and my leg is broken" is offered in the trial of C v. B. B responds, "It's hearsay, Your Honor." C replies, "It's a dying declaration, Your Honor." B responds, "This isn't a murder case. And C isn't dead. His statement isn't a dying declaration. And he's offering the statement on his own behalf." The trial judge replies, "C's statement is not admissible as a dying declaration. But not for the reasons you gave, counsel." The trial judge is correct. Please explain. Does C's statement nevertheless overcome the hearsay hurdle? If so, please explain why.
Sunday, November 08, 2009
N.B. #1: I am a resolute heterosexual. So my questioning and doubting are not motivated by a personal "sexual orientation" agenda.

N.B. #2: I am not a Roman Catholic. I have always been a Lutheran and for the time being I remain one.

I think it is very unlikely that wrongful sexual conduct by members of the clergy in Protestant churches is less common than wrongful sexual conduct by Roman Catholic clergy. I find support for this guess in (i) my beliefs about human nature and human sexual activity, (ii) anecdotal evidence, (iii) some (sparse) scholarly literature, and (iv) reports such as the following:
Daniel Burke, "Study: 3 Percent of Women Victims of Clergy Sexual Advances," Ethicsdaily.com (September 11, 2009):

More than 3 percent of adult women who attend religious services at least once a month have been victims of clergy sexual misconduct, according to researchers at Baylor University. Put another way: in a congregation of 400 people, seven adult women have been targets of sexual advances by clergy, the study says. In addition, in one of 50 cases, the religious leader was married, according to the report. Four percent of respondents said they knew of a close friend or family member who had experienced a sexual advance by a clergy member in their own congregation, the study says. Baylor researchers said their report is the largest scientific study into clergy sexual misconduct with adults in the U.S.

"Because many people are familiar with some of the high-profile cases of sexual misconduct, most people assume that it is just a matter of a few charismatic leaders preying on vulnerable followers," said Diana Garland, dean of the School of Social Work at Baylor University and lead researcher in the study. "What this research tells us, however, is that clergy sexual misconduct with adults is a widespread problem in congregations of all sizes and occurs across denominations."

While the sexual abuse of children, particularly by Catholic priests, has received outsize attention in the media and academia, the abuse of adults has received relatively little notice, according to Baylor researchers.

"We hope these findings will prompt congregations to consider adopting policies and procedures designed to protect their members from leaders who abuse their power," said Garland. "Many people -- including the victims themselves -- often label incidences of clergy sexual misconduct with adults as 'affairs.' In reality, they are an abuse of spiritual power by the religious leader."

The research was conducted using questions included in the National Opinion Research Center's 2008 General Social Survey of more than 3,500 American adults and followed up by interviews with respondents.

So what accounts for the focus by media outlets such as the Boston Globe on sexual abuse by Catholic clergy?
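As an aside, the study's "put another way" illustration can be sanity-checked with a quick calculation. The sketch below assumes (these are my assumptions, not figures stated in the article) that roughly half of a 400-person congregation consists of adult women and that "more than 3 percent" means about 3.5 percent:

```python
# Sanity check of the illustrative figure: "in a congregation of 400 people,
# seven adult women have been targets of sexual advances by clergy."

congregation_size = 400
share_adult_women = 0.5    # assumption: about half the attendees are adult women
victimization_rate = 0.035  # assumption: "more than 3 percent" read as 3.5%

adult_women = congregation_size * share_adult_women       # 200 adult women
expected_victims = adult_women * victimization_rate       # 200 * 0.035 = 7.0

print(round(expected_victims))  # -> 7
```

Under those assumptions the reported figure of seven women per 400-person congregation is arithmetically consistent with the reported rate.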
These are some possibilities:
1. One possibility is simple ignorance -- ignorance that Protestant clergy (and clerics in other kinds of religious organizations) "do it too."
But if that's part of the explanation, I am inclined to view such ignorance as wilful. The reporters at the Boston Globe, for example, were probably quite familiar with popular literature such as Sinclair Lewis's Elmer Gantry, in which the "hero" -- a Protestant preacher -- engages in sexual misconduct of at least a sort.

2. Another possibility is that media outlets such as the Boston Globe have a degree of "homophobia."
This may explain some of the behavior of some media outlets. But it does not readily explain the behavior of the Boston Globe (which, ironically, once championed the activities of the gay "street priest" Paul Shanley [a central figure in the Boston Catholic clergy sex abuse scandal that broke in 2002] and decried the efforts of the Archdiocese of Boston to suppress Shanley's "street ministry").

3. It is also possible that anti-Catholic bias played (and still plays) a role in the behavior of media outlets such as the Boston Globe and the New York Times.
However, anti-homosexual attitudes very probably play a large part in explaining the depth of the public outrage about homosexual abuse of minors by Catholic clergy.
The thesis that the Boston Globe in particular has suffered from anti-Catholic bias should not be too readily dismissed.

4. Some or much of the media focus on the Catholic clergy was probably the result of the activities of plaintiffs' lawyers who brought lawsuits on behalf of victims of clergy sex abuse. Those lawsuits have been brought almost exclusively against Catholic clergy and branches of the Roman Catholic Church.
I do not think it likely that a disproportionate number of those lawyers were either anti-Catholic or anti-gay (although I would not be astonished to find that I am wrong about this). My guess is that plaintiffs' lawyers focused on Catholic targets of lawsuits for damages because Catholic clergy and the Roman Catholic Church were the most alluring targets for financial reasons: the hierarchical structure of the Roman Catholic Church made it possible -- after some tinkering was done with immunity rules pertaining to non-profits -- to bring actions for damages against entities with relatively deep pockets. (I deeply discount the hypothesis that the beneficent intentions of plaintiffs' lawyers explain why those lawyers so fervently pursued Catholic targets instead of Protestant targets. Plaintiffs' lawyers -- like many of the rest of us -- are primarily interested in money.)

Now that financially rewarding Roman Catholic targets are drying up -- they have almost been exhausted -- we can expect to see -- and I think we are seeing -- an increasing number of sex abuse lawsuits against Protestant clergy and churches, Jewish clergy and organizations, other religious organizations and their clergy (e.g., Mormons, Muslims, Seventh Day Adventists), and -- eventually -- educators (regardless of religious persuasion) and those who employ educators (school boards, universities, and the like).