The field of AI and Law is at least thirty years old.4 It has had a vibrant history.5
Its concerns have often mirrored, and sometimes anticipated, streams of research in AI at large: from logic to expert systems and logic programming, from frames and scripts to cases, CBR, and hybrid systems, from theorem proving to defeasible and nonmonotonic reasoning and to agents for e-commerce.
In their 1970 Stanford Law Review article "Some Speculation about Artificial Intelligence and Legal Reasoning", Buchanan and Headrick discussed the possibilities of modeling legal research and reasoning, particularly for advice-giving, legal analysis and argument construction; even though they envisioned using goal-directed rule-based approaches, they presciently pointed out the importance of analogical reasoning.6 Many years before that, Layman Allen had begun his research program on using logic as a tool to improve drafting and interpretation of legal documents.7 In 1977, the Harvard Law Review published a landmark paper by L. Thorne McCarty on his TAXMAN system, which pursued a theorem-proving approach to reasoning with issues in corporate tax law. Based on his experiences with this early system, he began his research program to address problems of open texture and develop deep models of legal concepts, like stock ownership in the context of tax law. Both of these lines of research are ongoing. In 1978, Carole Hafner published her doctoral research on a system that used an AI approach to improve legal information retrieval (IR) in the domain of negotiable instruments; it used semantic net representations to push beyond purely keyword-based approaches. At about this time, the Norwegian Center for Computers and Law, founded in 1971 by Knut Selmer and Jon Bing, extended its focus on IR to include intelligent techniques. With the advent of the web, research on intelligent legal IR is once again flourishing.
In the 1980s, work in AI and Law intensified tremendously. By 1981, Donald Waterman and Mark Peterson at the RAND Corporation's Center for Civil Justice had built an expert system for legal decision making in the settlement of product liability cases in tort law; they later explored the use of expert systems in the specific area of asbestosis cases. Marek Sergot, Robert Kowalski and their colleagues at Imperial College London used logic programming to model part of the British Nationality Act, a large, self-contained statute; in an important paper in the Communications of the ACM, they reflected on their project and discussed a few problematic aspects of the rule-based approach: the open-textured nature of legal predicates and the difficulties in modeling negation, exceptions, and counterfactual conditionals. Waterman and Peterson had encountered similar problems. The project at Imperial College also demonstrated how such an approach could be used to help "debug" a statute while it is being drafted, for instance, by finding rule conflicts and ambiguities. The use of executable logical models (especially in PROLOG) was extended to larger, more complex statutes in a large collaborative project centered on UK welfare benefits law. By the mid-1990s these techniques would be sufficiently mature to form the basis of operational systems used in local and central administrative governmental agencies, especially in the Netherlands and Australia. In the early 1980s, the Istituto per la Documentazione Giuridica (the "IDG") in Florence, originally founded in 1968, began, under the directorship of Antonio Martino, to expand its activities to include AI techniques and to host a series of international conferences on expert systems and law.
Anne Gardner's 1984 doctoral dissertation at Stanford focused on the problem of what happens "when the rules run out", that is, when the antecedent of a rule uses a predicate that is not defined by further rules, particularly due to the inherent open-textured nature of legal concepts and to problems involving the relationship between the technical and common-sense meanings of words. It drew attention to the fact, well known in the law, that one cannot reason by rules alone and that, in response to failure, indeterminacy, or simply the desire for a sanity check, one examines examples. In Gardner's system, which analyzed so-called "issue spotter" questions from law school and bar exams in the offer-and-acceptance area of contract law, the examples were not actual specific precedents but general, prototypical fact patterns. Her work sought a principled computational model of the distinction between "hard" and "easy" cases, much discussed in jurisprudence.8 She framed her discussion in terms of defeasible reasoning, a topic of intense interest today.
While progress continued on rule-based reasoning (RBR) systems in the 1980s, there began to emerge a community of AI researchers who focused on reasoning with cases and analogies, that is, case-based reasoning. In the early 1980s, Rissland had investigated reasoning with hypothetical cases, particularly in Socratic law school interchanges. In 1984, she and Ashley first reported on the legal argument program HYPO and the mechanism of "dimensions". This line of research had grown out of Rissland's earlier work on example-based reasoning and "constrained example generation" in mathematics.9
Initially concerned with the problem of generating hypotheticals (hence its name), HYPO reached full maturity as a case-based argumentation program in Ashley's doctoral dissertation. It was the first true CBR system in AI and Law, and one of the pioneering systems in CBR in general. Thus, by the mid-1980s, RBR and CBR approaches were making themselves felt in AI and Law.
In her excellent review article, Anne Gardner points out that this bifurcation between rule-based and case-based approaches is longstanding. We note that champions of one approach often appreciate full well the importance of or need for the other (e.g., Buchanan), switch their focus (e.g., McCarty), seek to bridge the gap between them (e.g., Gardner), attempt to reconcile them through reconstruction (e.g., Prakken, Sartor and Bench-Capon), or are intrigued by hybrid approaches (e.g., Rissland).
In the mid 1980s, a few leading American law schools began conducting seminars on AI and Law. The first was given at Stanford Law School in 1984 by three law professors: Paul Brest (later to become Dean), Tom Heller and Bob Mnookin. Rissland launched her seminar on AI and Legal Reasoning at the Harvard Law School in 1985, and Berman and Hafner theirs at Northeastern in 1987. Over the years, such seminars have proliferated and have served as forums bringing together the AI and legal communities.