On the other hand, one might try to design a device that has many of the ingredients or components that the human neurobiological system has. One might call this approach "biological engineering."
Sometimes, it seems to me, the logicist approach has predominated in AI. But I have the impression that at other times there has been considerable stress on the study of the architecture of the human brain (and connected biological matter).
These two approaches are not necessarily in conflict with each other. For example, if one takes a biological engineering approach to AI, one might emphasize the notion that the brain (or the neural system or what have you) either is or must be understood as a "physical symbol system." In this event, it might not be long before a researcher's preoccupation with the physical architecture of the brain becomes largely supplanted by an interest in a type of formal reasoning (albeit a particular type of formal reasoning).
So what? Is that a bad thing? Isn't it true that there must be, ultimately, some kind of logic (or set of operations) that ties the parts of the brain together and allows them to cooperate and work together?
It's a fair guess that there must be some such underlying logic. But the hooker in the hypothesis is the word "ultimately."
There is a reason why it might make sense for AI researchers to study the different components of the brain with great care and postpone the search for the brain's underlying logic. That reason is: ignorance.
It will likely be a long time before the logic that ties the different parts of the brain together is reasonably well understood. A necessary prelude would seem to be an understanding of the logic of different parts of the brain. When that understanding is in hand, perhaps human beings can finally grasp the metalogic of the logics of the various parts of the brain. Or perhaps progress in these two arenas must and will proceed contemporaneously (more or less). In either case, it will probably be a very long time before the brain's underlying metalogic comes close to being understood.
For years, I have been fiddling with a procedure that I call MarshalPlan. See A Theory of Preliminary Fact Investigation and MarshalPlan 2.5. But from a certain vantage point, MarshalPlan looks more like an ensemble of procedures than "a procedure." For example, I argue that a person who reasons about evidence in legal settings often should or might use operations or procedures such as the development of time lines (of various kinds), the development of scenarios (of various kinds), the marshaling of evidence on the basis of legal rules (of various kinds), and legal reasoning (of various kinds). These various forms of reasoning or thinking, and others, do seem different from each other, and they are not reducible to some single form of reasoning or cognitive operation. So the spirit of MarshalPlan seems closer to the spirit of AI biological engineering than to the spirit of AI logicism.
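To make the "ensemble" point concrete, here is a purely illustrative sketch, in Python, of how such distinct marshaling strategies might be modeled as separate, side-by-side procedures. Every name and data field below (build_time_line, the evidence records, and so on) is my own invention for illustration; MarshalPlan is not a piece of software in this form.

```python
# Illustrative sketch only: distinct evidence-marshaling strategies
# modeled as separate, side-by-side procedures. All names and fields
# here are hypothetical, not part of any actual MarshalPlan system.

# A toy body of evidence: each item has a date, the scenarios it
# supports, and the legal-rule elements it bears on.
evidence = [
    {"date": "2020-12-15", "supports": {"presence"}, "elements": {"act"}},
    {"date": "2021-03-01", "supports": {"alibi"}, "elements": {"intent"}},
]

def build_time_line(items):
    """One kind of time line: order evidentiary items chronologically."""
    return sorted(items, key=lambda item: item["date"])

def build_scenario(items, story):
    """One kind of scenario: gather items that fit a hypothesized story."""
    return [item for item in items if story in item["supports"]]

def marshal_by_rule(items, element):
    """Rule-based marshaling: gather items bearing on one rule element."""
    return [item for item in items if element in item["elements"]]

# The strategies simply sit side by side. Nothing here reduces one to
# another, and no master operation combines their outputs.
strategies = [build_time_line, build_scenario, marshal_by_rule]
```

Note that the list at the end merely collects the procedures; it does not (and, on this essay's view, cannot yet) say how they interact.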
It is indeed true that many of the procedures identified by MarshalPlan were inspired by what people thought they saw when they peered into their own heads in an attempt to see how the thinking in their heads actually works when they ponder problems of evidence, inference, persuasion, and proof in legal settings.
Of course, there is one very striking difference between AI biological engineering and MarshalPlan: the ingredients of MarshalPlan do not purport to be the processes that regulate particular physical parts or sectors of the human brain. (I know next to nothing about, for example, the "logic" regulating the hypothalamus -- or even if there is such a logic.) Nonetheless, there is an interesting analogy between MarshalPlan and the biological engineering approach to "artificial intelligence."
Ultimately there may be -- and, I suppose, there must be -- some underlying logic that connects the different parts of MarshalPlan, the different mental operations that human beings use or should use when they reason or deliberate about evidence in legal settings. But, as before, the hooker is the word "ultimately." Perhaps there is some such underlying logic. Perhaps, for example, it is some kind of Bayesian logic. Or perhaps it is some form of fuzzy logic. Or perhaps the underlying logic will prove to be of yet a different kind. But we do not yet know what that underlying logic is. Nor are we close to discovering it, much less to identifying its properties. So here again a large measure of ignorance is our lot.
But it does not follow that this general perspective on reasoning about evidence -- a perspective that emphasizes our ignorance of the underlying cause or ground of mental "phenomena" about evidence -- is useless. First, this perspective perhaps explains and justifies why the "network" of evidence marshaling strategies in MarshalPlan is not a true network. The collection of mental operations in MarshalPlan is not and cannot be a true network because we do not yet know what sort of logic ties those individual evidence marshaling strategies together and what sort of metalogic makes it possible for them to "cooperate" and influence each other. Second, our awareness that an actual, real-world actor -- the human organism -- uses the various strategies in MarshalPlan helps to explain why it is sensible to believe or hypothesize that the individual evidence marshaling strategies in an account such as MarshalPlan, while not forming a strict or true network, nevertheless must form a quasi-network -- why, that is, it is sensible and perhaps even necessary to believe that separate evidence marshaling strategies do, in some presently-ineffable way, influence each other and feed into each other. The reason: if a human organism is doing the thinking, there probably must be -- ultimately -- some underlying logic, some sort of metalogic, that the human organism or brain uses to combine the various mental operations that the human animal or brain sometimes uses when thinking about evidence problems in legal settings.
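One way to picture the quasi-network idea is a graph whose edges record only *that* one strategy plausibly feeds into another, while the nature of that influence is deliberately left unspecified. The sketch below is hypothetical throughout; the strategy names and edges are my own illustration, not a claim about MarshalPlan's actual structure.

```python
# Illustrative sketch of a quasi-network: edges say only *that* one
# strategy is hypothesized to influence another, not *how*. The
# combining metalogic is represented as an explicit, acknowledged gap.

quasi_network = {
    "time line": {"scenario"},               # a time line can suggest scenarios
    "scenario": {"rule-based marshaling"},   # a scenario can point to rule elements
    "rule-based marshaling": {"time line"},  # rule analysis can refocus a time line
}

def influences(a, b):
    """True if strategy a is hypothesized to feed into strategy b."""
    return b in quasi_network.get(a, set())

def combine(outputs):
    """Placeholder for the undiscovered metalogic that would let the
    strategies genuinely cooperate: we can name the gap without filling it."""
    raise NotImplementedError("the underlying metalogic is not yet known")
```

The deliberate NotImplementedError mirrors the essay's point: we can assert that the strategies must somehow combine without pretending to know the combining logic.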
Coming soon: the law of evidence on Spindle Law