A while ago I posted a note about the theory underlying the software. I have now written an expanded and revised note that I plan to embed in the MarshalPlan software. See below for a rough version of this forthcoming note. The punctuation in this material is a bit inconsistent because I had started to change the punctuation in order to make the note work as speech in MarshalPlan. I trust you will forgive me for that.
This is the still-rough version of the note:
My motivations for developing MarshalPlan have been theoretical as much as practical. But I did not and I do not see a tension between my theoretical and practical ambitions. My neo-empiricist inclinations lead me to conclude that a sound theory of inference must be able to prove itself in the world. By developing MarshalPlan I wanted to both explore and illustrate some basic hypotheses about the nature of (wo)man's acquisition of knowledge about his (her) world.
Since my "constructivist" agenda was positive rather than negative, until now I have not used the notes embedded in MarshalPlan to make arguments against views of evidential inference that I think are mistaken. I thought that the success or failure of MarshalPlan could be the primary test of whether my own views of evidential inference (in legal settings) are or are not mistaken.
But now that I have largely completed the outline of a working model of MarshalPlan, I think it might be useful for me to identify the theoretical premises and perspectives that I do not have and that do not undergird the MarshalPlan project. So I will do that. However, I will describe those rejected perspectives only in a shorthand way. This is why I use the word "dogmas" below to refer to my views about several theoretical perspectives that I find wanting. After identifying some theoretical premises and perspectives that I do not embrace, I describe some of my central affirmative hunches about the nature and foundations of empirical human knowledge. These are either theses that have supported the development of MarshalPlan or they are theses that are supported by MarshalPlan.
There is a long-running debate in the American legal academy and elsewhere about the use of mathematics to analyze evidence in trials. This debate is a red herring; it misses the boat; it does not address the fundamental issues about inference. This is not to say that questions about when mathematics might or should play a role in evidential argument in legal settings are uninteresting or unimportant; such questions are plainly both interesting and important. I also do not claim that discussions that focus on the role of mathematics can say nothing or have said nothing about fundamental epistemological issues. That's not the case. Such discussions can and sometimes do touch on fundamental questions and in this way shed light on key facets of inference and reasoning from evidence. But this defense of the debate about mathematical analysis of evidence is a bit like saying that WWII was a good thing because it led to the development of V-2 rockets. The debate about mathematical analysis of evidence has caused more intellectual havoc than enlightenment.
There have been arguments, both within the literature on evidential inference narrowly conceived and in the broader philosophical literature, that inference is fundamentally "subjective." I myself have occasionally made such arguments.
The claim that human inference is subjective is true but trivial.
Human beings (and sharks [see below]) do have some capacity to draw accurate inferences about the world. If they did not, they could not have survived as long as they have -- unless they had an extraordinary amount of dumb luck. In any case, accomplishments such as the development of the internal combustion engine, the development of nuclear weapons, the development of the microchip, and the construction of cathedrals that often manage to stand for decades and sometimes for centuries are evidence enough that accurate factual inference is sometimes possible. Complete epistemological or inferential relativism or skepticism is a non-starter for any serious student of human evidential inference.
The opposing thesis -- the thesis that there are objective methods of drawing inferences about human behavior -- is, however, also incorrect. This thesis is incorrect if by "objective" we mean self-standing (or autonomous or largely-autonomous) methods of reasoning, artificial methods of reasoning that can replace ordinary, seemingly-sloppy, and commonsense human methods of reasoning about matters such as human behavior.
Usually such objective methods are not available. This is true even though it is also true that some problems or questions in our world are now configured in such a way that artificial and autonomous methods of ratiocination, rumination, or computation can yield solutions that are less likely to be wrong than those reached by unaided human common sense.
As you can see, this dogma -- Dogma Number 3 -- is closely related to Dogma Number 1, my dogma about the irrelevance of much of the longstanding debate about mathematical analysis of evidence and inference.
The brain -- or the neurobiological system -- or possibly we will have to call it the neuro-magneto-electro-biological system -- is a very complex mechanism. Some persons say that it is the most complex mechanism in the universe. They may be right about that. Whether or not they are right about that, it is unlikely that artificial methods of computation (such as computer-based computation) can replace the human brain anytime soon. True, as I said earlier, in some domains computers can outperform humans; for example, computers can now play chess and checkers better than even the most extraordinary humans can. But in most arenas computers do far less well than human beings do. That's likely to be the case for some time to come.
It does not follow from Dogma Number 3 that theorizing about inference is pointless. On the contrary: it is possible for theorizing about inference to have both theoretical value and cash-value: it is possible that theorizing about inference can lead to improvements both in our understanding of evidential inference and in the quality of real-world inference in legal settings.
Given our present understanding of evidential inference and given the limitations on our current understanding of evidential inference, there is reason to believe and hope that images of reasoning about evidence can improve human inference if such images are used together with ordinary human reasoning and common sense logic. More precisely stated, it is possible for images (or pictures) of inference to be a useful tool of common sense; it is possible for images of evidential inference to support, facilitate, and enhance natural, or pre-existing, methods of human reasoning about evidence. This is roughly the fundamental insight that Timothy van Gelder holds and purveys, and I align myself with him. (However, Tim bears no responsibility for the details of the MarshalPlan system or for its many defects and failings.)
(No more dogmas.)
Those are some of my theoretical beliefs and dogmas. But now I must tackle a hard question rather than an easy one:
I begin my attack on this question by making a few more comments about the general direction that theorizing about inference might take and why this general tack might turn out to be a profitable one.
The brain -- or the human neurobiological system, or possibly the human electro-magneto-neuro-biological system -- is a computational mechanism of enormous complexity, subtlety, and power. A model of inference -- an artificial construct -- might try to capitalize on the power of this natural mechanism and make it function more effectively. How could an image or images of inference do that?
Stated most abstractly, my answer is this: an image or model of evidential inference could improve the quality of human inferential performance if it could trigger natural computational mechanisms and processes (such as the brain) and make it possible for human beings to use their native, or inbred, computational mechanisms and processes more efficiently, more effectively, and more productively.
That artificial devices might be helpful or useful in this derivative way -- that images or models of evidential inference could serve as handmaidens of natural human reason -- that some artificial constructs depicting inference might be useful cognitive tools, or helpful cognitive crutches -- that appropriately-drawn images of inference might function as supports for native human reasoning -- is suggested by two considerations.
First, any particular line of reasoning about any real-world problem almost inevitably involves multiple steps. Properly designed artificial devices -- cognitive tools, heuristic devices, "inference support tools," whatever they are called -- might well improve the ability of humans literally to keep in mind -- to keep in conscious thought, to be more aware of, to have more awareness of -- the steps in any train of reasoning that they decide to follow when considering any particular factual hypothesis.
Second, human beings reason about evidence and the world along multiple tracks, in a multitude of ways. Even though the brain is a very powerful mechanism, it is not an infallible one, and the different ways of thinking or reasoning or the different tracks the mind takes are difficult to keep in mind (so to speak) at the same time. But these different tracks, these different ways of thinking about a problem, influence each other. So keeping multiple lines of reasoning in mind at the same time is important; indeed, it is essential. Cognitive crutches can help mortals keep in mind the many different tracks along which their minds are running.
In sum, there is reason to think or hope that artificial tools (including, for example, simple diagrams and checklists drawn on paper) can make it easier for human beings to literally better keep in mind their various ways of thinking about a factual question and the numerous steps that human beings characteristically take and construct within each track of the many tracks of thinking that they follow.
The next question is what particular sorts of images or models of evidential inference are likely to be useful and necessary.
Much recent theoretical work on inference centers on inference networks. Such work is very important and it must continue. However, MarshalPlan has relatively little to say about inference networks. It focuses on other methods of marshaling or organizing evidence. MarshalPlan emphasizes comparatively simple evidence marshaling strategies such as event time lines, scenarios, and marshaling of evidence by legal rules.
Card Number 2 of the stack Network Manager -- the card in which this note is embedded -- serves in part as an outline of the evidence marshaling methods found in MarshalPlan.
The evidence marshaling strategies listed on Card Number 2 can be sorted into several broad categories.
In one set of methods time plays a central role. This is true of event chronologies, or time lines. It is true of scenarios. And it is true, in a more complicated way, of narrative and story-telling.
Nota Bene: There are several sub-categories of time lines: (1) time lines of the events at issue in a case, (2) time lines that show the history of sources of evidence (both "real evidence" and human sources, or witnesses), and (3) time lines showing the order in which evidence is collected, handled, and presented.

Another set of evidence marshaling methods deals with the influence of legal doctrines and norms on evidence marshaling, analysis, collection, and assessment in legal settings. I am now referring, for example, to the red buttons (or links) called "Legal Rules," "Legal Argument," "Legal Source Material," and "Evidence and Material Facts." These and other stacks deal, in the aggregate, with "legal marshaling," which is my shorthand for the way that legal doctrines and legal norms influence the gathering and assessment of evidence.

Each of these categories can have subcategories or subdivisions. For example, time lines for events at issue include time lines that focus on the actors in the possible events at issue.
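The three time-line sub-categories noted above could be sketched as a simple data structure. This is purely a hypothetical illustration of the idea -- MarshalPlan itself is a HyperCard-style stack system, and the record type, field names, and sample entries below are my own inventions, not MarshalPlan's:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical sketch: one record type serves all three time-line
# sub-categories, distinguished by a 'kind' tag.
@dataclass
class TimelineEntry:
    when: date
    description: str
    kind: str  # "event-at-issue", "source-history", or "evidence-handling"

entries = [
    TimelineEntry(date(2008, 3, 1), "Alleged contract signed", "event-at-issue"),
    TimelineEntry(date(2008, 6, 15), "Witness W first interviewed", "source-history"),
    TimelineEntry(date(2009, 1, 10), "Exhibit A logged into evidence", "evidence-handling"),
]

# Each time line is just the entries of one kind, in temporal order.
def timeline(entries, kind):
    return sorted((e for e in entries if e.kind == kind), key=lambda e: e.when)

for e in timeline(entries, "event-at-issue"):
    print(e.when, e.description)
```

The point of the tag is only that the same chronological machinery can serve the events at issue, the history of evidentiary sources, and the handling of the evidence itself, while keeping the three time lines distinct.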
Another set of evidence marshaling methods amounts to a system for filing evidence and information. These are the methods (or stacks) called "Raw Evidence," "Legal Source Material," "Persons," "Analysts," "Legal Actors," and so on. It is probably true that the filing of information on the basis of such categories ordinarily does not require great intellectual labor. Nonetheless, the filing of evidence and information on the basis of such categories is not a trivial act. Evidence and information can be more easily accessed, and recalled, and they are also more suggestive if they are stored according to categories or classifications that are meaningful to the user.
Another group of strategies in MarshalPlan's collection of evidence marshaling strategies inches toward the development of inference networks. Thinking of inference as a network or web of inference is mainly (but not exclusively) useful when the factual questions are stable and the available evidence is known. In situations such as this -- in situations in which the facts in issue seem relatively stable -- a decision maker is most likely to want to focus on evidence sorting methods such as "Evidence of Material Facts," "Evidence for and against Material Facts," "Witness Credibility," and "Argument about Evidence," or "Probative Value."
The above catalogue of evidence marshaling strategies leaves out two of the main types of strategies that appear on Card Number 2 of Network Manager.
One important strategy not yet discussed in this note is the cognitive strategy or process here called "Case Theory."
As it now stands, the stack "Case Theory" is less a picture of how this evidence marshaling strategy works than it is a general reminder of three points: first, there is a very important synthetic or constructive aspect to fact finding and evidential inference; second, the various evidence marshaling strategies identified and described by MarshalPlan influence each other and depend on each other; and, third, the strength of a claim to have correctly or plausibly determined the important legally-material facts depends in large part on the extent to which the various evidence marshaling strategies that a decision maker uses are in harmony with each other and reinforce each other -- and, thus, on the extent to which all evidence marshaling strategies taken together generate a state of mind of epistemic equanimity, an epistemic reflective equilibrium. (If I were a brilliant programmer, which I am not, I could figure out how to develop a "Case Theory" stack that would allow the user to rotate through all of the evidence marshaling strategies shown in the Network Manager stack while still keeping, to some substantial degree, all evidence marshaling strategies in the mind's eye.)
Another group of stacks (or evidence marshaling strategies) lies at almost the opposite pole from the case theory stack (that is, at the opposite pole from thinking about "the whole ball of wax"). Case theory development involves synthetic thinking -- which in this instance involves the attempt to view the various parts of evidence marshaling in relationship to each other and the attempt to sense the degree to which the results of various evidence marshaling strategies are consistent with each other. This kind of synthetic and global thinking tends to become most explicit once the key ingredients of an inferential puzzle have been identified and studied. But reasoning about evidence also involves and requires exploratory thinking. Several stacks in Network Manager are designed to facilitate and support imaginative thinking about possibilities. See, for example, the stack "Possibilities" and the stack "Loose Thoughts."
You now have a general sketch of the evidence marshaling strategies that are collected in MarshalPlan. This collection of strategies looks a little bit like a network. But the collection of strategies found in MarshalPlan is not a true network; it is a quasi-network. This is a central feature of MarshalPlan.
In my picture of evidence marshaling the results of any one kind of evidence marshaling -- e.g., a time line, legal marshaling, etc. -- do not have determinate, or "computable," implications for any other evidence marshaling strategy; for example, any specific time line is logically compatible with innumerable scenarios. However, it is my hypothesis that the evidence marshaling strategies in my collection nevertheless do influence each other. For example, an assessment of the plausibility of some scenario may be affected, and is likely to be affected, by my assessment of the credibility of this or that witness. This is why I call MarshalPlan's collection of strategies a quasi-network rather than a true network.
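The distinction between a true network and a quasi-network can be made concrete in code: in a quasi-network the links between marshaling strategies carry suggestions, not computable implications. The following is a hypothetical sketch; the strategy names echo the stacks mentioned in this note, but the particular links are my own guesses, not MarshalPlan's actual structure:

```python
# Hypothetical sketch of a quasi-network: an edge records that work on
# one marshaling strategy *suggests* revisiting another, but no edge
# computes the other strategy's content.
influences = {
    "Time Lines": ["Scenarios"],
    "Scenarios": ["Witness Credibility", "Case Theory"],
    "Witness Credibility": ["Scenarios"],  # influence can be mutual
    "Legal Rules": ["Evidence and Material Facts"],
}

def suggested_revisits(strategy):
    """Strategies worth re-examining after working on `strategy`.

    Note: these are reminders, not conclusions -- any particular time
    line remains logically compatible with innumerable scenarios, so
    nothing here is derived or computed from anything else.
    """
    return influences.get(strategy, [])

print(suggested_revisits("Scenarios"))
```

In a true inference network the edges would carry determinate implications (for example, probabilities to be propagated); here they carry only evocative cross-references, which is the sense in which the collection is a quasi-network.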
On this point, I entirely embrace David Schum's thesis (advanced in a different context) that marshaling evidence in one particular way may be evocative of or suggestive of evidence marshaling that has a different axis and follows a different logic.

I believe that a quasi-network better portrays how the mind -- the accessible part of the human mind, in any event -- works when it ruminates about evidence in legal settings (and, putting aside legal marshaling, how the mind works in other settings as well).
How, you might ask, did I arrive at the evidence marshaling strategies and methods that are included in MarshalPlan, that are found in the catalogues of evidence marshaling operations or methods found in places such as Cards 2 and 3 of Network Manager and in places such as the stack Loose Thoughts?
A variety of considerations -- a bit of logic, a bit of philosophy, some personal legal experiences, and so on -- led me to the list of evidence marshaling strategies found in MarshalPlan. But it is very important for me to say and forthrightly admit that subjective introspection was a critical source of my catalogue of evidence marshaling strategies; that is, I peered into my own mind and I tried to see how I think about evidence and how I organize evidence in "legal contexts" such as litigation; I tried to identify the different ways that I, Peter Tillers, think about evidence when I try to understand evidence and assess its implications.
So it is fair to say that in many respects MarshalPlan has an affinity with "mind maps." But a mind map isn't worth much if it's just a map of one person's idiosyncratic mind. You and I may think in different ways about evidence. The fact that I think one way may just demonstrate that I have an enormous capacity for self-delusion or that I am very stupid. It is also possible that things I do not understand or see drive me to think the way I do. But there is reason to think and hope that MarshalPlan is more than just a map of the way one particular human creature, Peter Tillers, thinks.
I do have a quasi-objective explanation or justification for some of the methods on my list of evidence marshaling strategies. For example, I believe that plausible ontological considerations support the thesis that almost every factual issue either explicitly or implicitly also presents a question about scenarios. I have similar quasi-objective explanations and justifications for several other evidence marshaling strategies. Beyond that, I appeal to common experience -- both your subjective sense of how you think when you think about evidence and how society (e.g., particular legal rules) tends to say that evidence should be marshaled and analyzed.
But if I purport to be thinking rigorously, I cannot ignore the type of challenge laid down by some very serious students of artificial intelligence, brain science, and consciousness: What is the justification or explanation for focusing on conscious mental processes rather than the "real logic" that perhaps drives the workings of our minds and brains?
My general answer is this: although I entirely agree that at least some subterranean brain processes may help to shape the way we think, it does not follow that conscious mental processes are nothing more than epiphenomena. In any case, we do not yet understand subterranean brain processes well enough to show in detail how they make us think as we do. So the thesis of the reality and potency of mental processes that are visible to our consciousness, to introspection, is, at a minimum, a good working hypothesis. In the long run this working hypothesis might even turn out to be true, and it might turn out that the influence of subterranean neural processes on human thinking has been exaggerated by critics of "folk logic."
The evidence marshaling methods found in MarshalPlan are both varied and relatively simple. In some quarters, these features of MarshalPlan might be considered defects because, first, it might be supposed that the process of drawing inferences cannot be that messy and inelegant and because, second, it might be supposed that the process of assessing evidence and drawing inferences from evidence can't really be as simple as I seem to suggest or suppose.
My general answer would be that the real-world drawing of inferences about real-world factual questions is in fact a very messy business -- a process that involves a large variety of ways of thinking.
If someone were to ask me, "What is the key to factual inference?," I might give a variety of answers. But my first response should be to say that there is no magic key to factual inference. I should begin by saying that drawing inferences requires the use of many keys. If someone were to say to me that one logic (e.g., Bayesian logic) animates or underlies all valid factual inference, I should then say, "Even if that is true -- even granting your premise -- it does not follow that only that one logic is needed to do inference. It is as if you said to me, 'A trip to Mars requires the equation F = MA.' In response, I would say, 'Yes, perhaps you're right, but making a trip to Mars requires a great many other things as well. In any event, although some of the things I must do to get to Mars -- e.g., get astronauts to read dials carefully or get machines to record sensory signals to a certain degree of accuracy -- may well be governed by F = MA (or by some other universal equation or equations of your choice), I don't yet understand precisely how reading dials is governed by that equation and, until [and unless] I do, I will have to use something other than F = MA to teach astronauts (or machines) how to read dials carefully and accurately.' So, you see, in addition to a rule such as Bayes' Theorem, I need procedures for storing legal rules, making legal arguments, constructing time lines, keeping track of persons, thinking about possibilities, and so on, and on, and on."
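A minimal worked instance of the Bayes' Theorem that the hypothetical interlocutor invokes helps make the point: the theorem supplies one arithmetic step, while everything else (framing the hypothesis, gathering the likelihoods, keeping track of the evidence) still requires other procedures. All of the numbers below are invented purely for illustration:

```python
# Bayes' Theorem: P(H|E) = P(E|H) * P(H) / P(E),
# where P(E) = P(E|H)*P(H) + P(E|~H)*P(~H).
# All numbers are invented for illustration.
prior = 0.30            # P(H): prior probability of the hypothesis
p_e_given_h = 0.80      # P(E|H): probability of the evidence if H is true
p_e_given_not_h = 0.10  # P(E|~H): probability of the evidence otherwise

p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
posterior = p_e_given_h * prior / p_e

# This single line of arithmetic is all the theorem itself supplies;
# where the three input numbers come from is another matter entirely.
print(round(posterior, 3))
```

The computation is trivial once the three inputs exist; the labor of marshaling evidence lies in producing and justifying those inputs, which is the work the non-Bayesian "keys" must do.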
This point moves us to the second major feature of the evidence marshaling strategies found in MarshalPlan: their seeming simplicity. Those methods seem to be little more than common sense. Is that a defect?
Common sense is, yes, aw shucks!, quite common. But it does not follow that common sense lacks intelligence. If common sense and intuitive sense were not "intelligent," (wo)man would long since have perished from the earth. (I grant you that this argument suggests that sharks are quite intelligent. Yes, in certain respects, they are quite intelligent. That is one reason why they have existed -- apparently -- for hundreds of millions of years.)
The miracle of the human mind is in some respects like the miracle of human life: we do not understand very well how we manage to think as well as we do, but in fact our seemingly shoddy and shabby and sloppy and simple methods of thinking often work quite well, thank you. So if the evidence marshaling strategies found in MarshalPlan look and are relatively simple, that does not necessarily count against them. Those simple methods may be effective tools for evoking simple but intelligent, or effective, ways of thinking. Yes, I grant you, it almost surely must be the case that very complex processes produce, or underlie, these simple forms of conscious thinking and ordering, and it may also be the case that if we could grasp and explicitly describe those complex processes, we could think and infer much better than we do at present. But we cannot wait until heaven arrives. We must make our best guesses now.
I have often puzzled over fuzzy logic. Despite occasional claims to the contrary, I have the sense that fuzzy logic is sometimes a powerful tool for the management (control) of real-world processes. That this should be so may seem a mystery -- because fuzzy logic, to the extent that I understand it, is far more akin to a semantic theory than to a causal theory; that is, although fuzzy logic largely or entirely abjures causal accounts of natural processes, it often seems to control those selfsame natural processes quite nicely, thank you. How is this possible?
My guess is that the power of fuzzy logic in the world of nature is possible because (i) fuzzy logic is indeed at heart a semantic theory and (ii) our words and concepts (including our ordinary words and concepts) somehow harbor, in a way we do not understand, much knowledge about our world. An analogous notion may explain why the "ordinary" and "commonsense" procedures found in MarshalPlan work -- and why they work as well as they do (if, that is, they do indeed work well, which remains to be seen): carefully disassembling and then reassembling some of our common ways of making good guesses about our world may lead to important advances in our general understanding of how human beings manage to understand the world to the extent that they do.
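To make the "semantic theory" point concrete, here is a minimal sketch of fuzzy logic's central device, graded membership: a linguistic term is modeled as a matter of degree rather than as a true/false predicate. This is a generic textbook-style example (the term "warm" and the ramp endpoints are my own choices), not anything drawn from MarshalPlan:

```python
# Minimal fuzzy-logic sketch: a linguistic term ("warm") is modeled
# as a graded membership function rather than a true/false predicate.
def warm(temp_c):
    """Degree (0..1) to which temp_c counts as 'warm'.

    Linear ramp from 15 C (not warm at all) to 25 C (fully warm).
    """
    if temp_c <= 15:
        return 0.0
    if temp_c >= 25:
        return 1.0
    return (temp_c - 15) / 10

# A fuzzy controller acts on such graded degrees; the word "warm"
# itself encodes rough, practical knowledge of the domain.
print(warm(20))  # 0.5
```

The membership function says nothing about the causal physics of temperature; it only formalizes what the word "warm" means in use -- which is precisely the sense in which fuzzy logic is semantic rather than causal, yet can still steer real-world processes.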
I cannot claim sole credit for MarshalPlan. I have hesitated to identify my collaborators because I don't know if they want to take credit or blame (as the case may be) for the current version of MarshalPlan. But I do feel impelled to note that the current version of MarshalPlan grew out of a joint NSF research project that David Schum and I conducted years ago. We summarized many of the major results of our research in P. Tillers & D. Schum, "A Theory of Preliminary Fact Investigation," 24 University of California at Davis Law Review 931 (1991).
I may not know much about evidential inference or about matters such as investigative discovery. But if I know anything worthwhile about such things, it is largely because I had a master teacher, David A. Schum. (I am also deeply indebted to William Twining, Richard Lempert, David Kaye, and many other luminous intellects and generous human beings. I hope my many mentors & teachers will forgive me for failing to name all of them here.)
Coming soon: the law of evidence on Spindle Law