Papers
-
-
Informational Richness and its Impact on Algorithmic Fairness
Philosophical Studies, forthcoming
The literature on algorithmic fairness has examined exogenous sources of bias, such as shortcomings in the data and structural injustices in society. It has also examined internal sources of bias, as evidenced by a number of impossibility theorems showing that no algorithm can concurrently satisfy multiple criteria of fairness. This paper contributes to the literature stemming from the impossibility theorems by examining how informational richness affects the accuracy and fairness of predictive algorithms. With the aid of a computer simulation, we show that informational richness is the engine that drives improvements in the performance of a predictive algorithm, in terms of both accuracy and fairness. The centrality of informational richness suggests that classification parity, a popular criterion of algorithmic fairness, should be given relatively little weight. But we caution that the centrality of informational richness should be taken with a grain of salt in light of practical limitations, in particular the so-called bias-variance trade-off. [co-authored with Ruobin Gong]
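By way of rough illustration only (a toy sketch with synthetic data, invented features, and a generic classifier, not the simulation reported in the paper), the basic mechanism can be seen by training a predictor on progressively richer feature sets and tracking its performance for each group:

import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy sketch only: synthetic data, invented features, simple metrics (not the paper's simulation).
rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, size=n)                    # protected attribute (0 or 1)
risk = rng.normal(size=n) + 0.5 * group               # underlying risk score
y = (risk + rng.normal(scale=0.5, size=n) > 0.5).astype(int)   # observed outcome

# Progressively richer information: each column is a noisier or cleaner proxy for the risk score.
features = np.column_stack([risk + rng.normal(scale=s, size=n) for s in (2.0, 1.0, 0.5)])

for k in (1, 2, 3):
    model = LogisticRegression().fit(features[:, :k], y)
    pred = model.predict(features[:, :k])
    acc_all = (pred == y).mean()
    acc_g0 = (pred[group == 0] == y[group == 0]).mean()
    acc_g1 = (pred[group == 1] == y[group == 1]).mean()
    print(f"{k} feature(s): accuracy={acc_all:.3f} (group 0: {acc_g0:.3f}, group 1: {acc_g1:.3f})")

On synthetic data of this kind, the per-group accuracies typically rise as informative features are added; how such gains interact with specific fairness criteria is the question the paper's simulation addresses.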
-
A Probabilistic Analysis of Cross-Examination Using Bayesian Networks
Philosophical Issues, 31(1): 2021
The legal scholar John Henry Wigmore asserted that cross-examination is ‘the greatest legal engine ever invented for the discovery of truth.’ Was Wigmore right? Instead of addressing this question upfront, this paper offers a conceptual ground clearing. It is difficult to say whether Wigmore was right or wrong without becoming clear about what we mean by cross-examination, how it operates at trial, and what it is intended to accomplish. Despite the growing importance of legal epistemology, there is virtually no philosophical work that discusses cross-examination, its scope and function at trial. This paper makes a first attempt at clearing the ground by articulating an analysis of cross-examination using probability theory and Bayesian networks. This analysis relies on the distinction between undercutting and rebutting evidence. A preliminary assessment of the truth-seeking function of cross-examination is offered at the end of the paper.
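For illustration only (this is not the network or the numbers used in the paper; all values below are invented), a toy Bayesian network shows how an undercutting question, one that attacks the witness's reliability rather than the fact testified to, lowers the probability of the hypothesis:

from itertools import product

# Toy Bayesian network for cross-examination; all probabilities are made up for illustration.
# Nodes: H = hypothesis testified to, R = witness is reliable,
#        E = witness asserts H, U = undercutting evidence aimed at reliability.
p_h = 0.5                                   # prior probability of the hypothesis
p_r = 0.8                                   # prior probability that the witness is reliable
p_e = {(True, True): 0.95, (False, True): 0.05,    # P(E | H, R): a reliable witness tracks the facts
       (True, False): 0.50, (False, False): 0.50}  # an unreliable witness asserts H at chance
p_u = {True: 0.1, False: 0.7}               # P(U | R): the undercutter is likelier if the witness is unreliable

def posterior_h(condition_on_u):
    """P(H | E) or P(H | E, U), by brute-force enumeration over H and R."""
    num = den = 0.0
    for h, r in product([True, False], repeat=2):
        joint = (p_h if h else 1 - p_h) * (p_r if r else 1 - p_r) * p_e[(h, r)]
        if condition_on_u:
            joint *= p_u[r]
        den += joint
        if h:
            num += joint
    return num / den

print(f"P(H | testimony)              = {posterior_h(False):.3f}")   # about 0.86
print(f"P(H | testimony, undercutter) = {posterior_h(True):.3f}")    # about 0.66: reliability is attacked

A rebutting question would instead bear directly on H; keeping the two apart is the point of the undercutting/rebutting distinction the paper relies on.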
-
When Statistical Evidence Is Not Specific Enough
Synthese, 199: 2021
Many philosophers have pointed out that statistical evidence, or at least some forms of it, lack desirable epistemic or non-epistemic properties, and that this should make us wary of litigations in which the case against the defendant rests in whole or in part on statistical evidence. Others have responded that such broad reservations about statistical evidence are overly restrictive since appellate courts have expressed nuanced views about statistical evidence. In an effort to clarify and reconcile, I put forward an interpretive analysis of why statistical evidence should raise concerns in some cases but not others. I argue that when there is a mismatch between the specificity of the evidence and the expected specificity of the accusation, statistical evidence—as any other kind of evidence—should be considered insufficient to sustain a conviction. I rely on different stylized court cases to illustrate the explanatory power of this analysis.
-
Legal Probabilism
Stanford Encyclopedia of Philosophy, 2021
Legal probabilism is a research program that relies on probability theory to analyze, model and improve the evaluation of evidence and the process of decision-making in trial proceedings. This entry surveys the existing literature on legal probabilism and the major objections against it. [co-authored with Rafal Urbaniak]
-
Profile Evidence, Fairness, and the Risks of Mistaken Convictions
Ethics, 130: 2020
Many oppose the use of profile evidence against defendants at trial, even when the statistical correlations are reliable and the jury is free from prejudice. The literature has struggled to justify this opposition. We argue that admitting profile evidence is objectionable because it violates what we call “equal protection”—that is, a right of innocent defendants not to be exposed to higher ex ante risks of mistaken conviction compared to other innocent defendants facing similar charges. We also show why admitting other forms of evidence, such as eyewitness, trace, and motive evidence, does not violate equal protection. [co-authored with Collin O'Neil]
-
Plausibility and Reasonable Doubt in the Simonshaven Case
Topics in Cognitive Science, 12(4): 2020
I comment on two analyses of the Simonshaven case: one by Prakken (2019), based on arguments, and the other by van Koppen and Mackor (2019), based on scenarios (or stories, narratives). I argue that both analyses lack a clear account of proof beyond a reasonable doubt because they lack a clear account of the notion of plausibility. To illustrate this point, I focus on the defense argument during the appeal trial and show that both analyses face difficulties in modeling key features of this argument.
-
Proof Paradoxes and Normic Support: Socializing or Relativizing?
Mind, 129(516): 2020
Smith (2018) argues that, unlike other forms of evidence, naked statistical evidence fails to satisfy normic support. This is his solution to the puzzles of statistical evidence in legal proof. This paper focuses on Smith's claim that DNA evidence in cold-hit cases does not satisfy normic support. I argue that if this claim is correct, virtually no other form of evidence used at trial can satisfy normic support. This is troublesome. I discuss a few ways in which Smith can respond.
-
Trial by Statistics: Is a High Probability of Guilt Enough to Convict?
Mind, 128(512): 2019
Suppose one hundred prisoners are in a yard under the supervision of a guard, and at some point, ninety-nine of them collectively kill the guard. If, after the fact, a prisoner is picked at random and tried, the probability of his guilt is 99%. But despite the high probability, the statistical chances, by themselves, seem insufficient to justify a conviction. The question is why. Two arguments are offered. The first, decision-theoretic argument shows that a conviction solely based on the statistics in the prisoner scenario is unacceptable so long as the goal of expected utility maximization is combined with fairness constraints. The second, risk-based argument shows that a conviction solely based on the statistics in the prisoner scenario lets the risk of mistaken conviction rise to a potentially unacceptable level. The same, by contrast, cannot be said of convictions solely based on DNA evidence or eyewitness testimony. A noteworthy feature of the two arguments in the paper is that they are not confined to criminal trials and can in fact be extended to civil trials.
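As a back-of-the-envelope illustration of the setup (not an argument from the paper, just the arithmetic of the scenario): the bare statistic makes any tried prisoner 99% likely to be guilty, yet a policy of convicting on that statistic alone is certain to convict the one innocent prisoner whenever he is tried.

# Arithmetic of the prisoner scenario (illustrative only).
n_prisoners = 100
n_guilty = 99

# Probability that a randomly selected prisoner is guilty.
p_guilt = n_guilty / n_prisoners
print(f"P(guilt | randomly selected prisoner) = {p_guilt:.2f}")   # 0.99

# Under a convict-on-the-statistic policy, everyone tried is convicted,
# so the innocent prisoner faces a mistaken conviction with certainty if tried.
p_convicted_given_innocent = 1.0
print(f"P(conviction | innocent prisoner tried) = {p_convicted_given_innocent:.2f}")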
-
Plausibility and Probability in Juridical Reasoning
The International Journal of Evidence and Proof, 23(1/2): 2019
This note discusses three issues that Allen and Pardo believe to be especially problematic for a probabilistic interpretation of standards of proof: (1) the subjectivity of probability assignments; (2) the conjunction paradox; and (3) the non-comparative nature of probabilistic standards. I offer a reading of probabilistic standards that avoids these criticisms.
-
Can Probability Theory Explain Why Closure Is Both Intuitive and Prone to Counterexamples?
Philosophical Studies, 175(9): 2018
Epistemic closure under known implication is the principle that knowledge of A and knowledge of "if A, then B", together, imply knowledge of B. This principle is intuitive, yet several putative counterexamples have been formulated against it. This paper addresses the question, why is epistemic closure both intuitive and prone to counterexamples? In particular, the paper examines whether probability theory can offer an answer to this question based on four strategies. The first probability-based strategy rests on the accumulation of risks. The problem with this strategy is that risk accumulation cannot accommodate certain counterexamples to epistemic closure. The second strategy is based on the idea of evidential support, that is, a piece of evidence supports a proposition whenever it increases the probability of the proposition. This strategy makes progress and can accommodate certain putative counterexamples to closure. However, this strategy also gives rise to a number of counterintuitive results. Finally, there are two broadly probabilistic strategies, one based on the idea of resilient probability and the other on the idea of assumptions that are taken for granted. These strategies are promising but are prone to some of the shortcomings of the second strategy. All in all, I conclude that each strategy fails. Probability theory, then, is unlikely to offer the account we need.
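To make the first, risk-accumulation strategy concrete (an illustrative computation with invented numbers, not an example from the paper): if knowledge required a probability at or above some threshold, knowing A and knowing "if A, then B" would only guarantee a lower bound on the probability of B, and that bound can fall below the threshold.

# Illustrative risk-accumulation computation (threshold and probabilities are invented).
threshold = 0.95            # suppose knowledge requires probability >= 0.95
p_a = 0.96                  # probability of A
p_a_implies_b = 0.96        # probability of the material conditional "if A, then B"

# Standard bound: P(B) >= P(A and (A -> B)) >= P(A) + P(A -> B) - 1.
lower_bound_on_p_b = p_a + p_a_implies_b - 1
print(f"Guaranteed lower bound on P(B): {lower_bound_on_p_b:.2f}")                     # 0.92
print(f"Bound falls below the knowledge threshold: {lower_bound_on_p_b < threshold}")  # True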
-
Epistemic Closure, Assumptions and Topics of Inquiry
Synthese, 191(16): 2014
According to the principle of epistemic closure, knowledge is closed under known implication. The principle is intuitive but it is problematic in some cases. Suppose you know you have hands and you know that 'I have hands' implies 'I am not a brain-in-a-vat'. Does it follow that you know you are not a brain-in-a-vat? It seems not; it should not be so easy to refute skepticism. In this and similar cases, we are confronted with a puzzle: epistemic closure is an intuitive principle, but at times, it does not seem that we know by implication. In response to this puzzle, the literature has been mostly polarized between those who are willing to do away with epistemic closure and those who think we cannot live without it. But there is a third way. Here I formulate a restricted version of the principle of epistemic closure. In the standard version, the principle can range over any proposition; in the restricted version, it can only range over those propositions that are within the limits of a given epistemic inquiry and that do not constitute the underlying assumptions of the inquiry. If we adopt the restricted version, I argue, we can preserve the advantages associated with closure, while at the same time avoiding the puzzle I've described. My discussion also yields an insight into the nature of knowledge. I argue that knowledge is best understood as a topic-restricted notion, and that such a conception is a natural one given our limited cognitive resources.
Book chapters
-
-
Evidential Reasoning
In Bongiovanni, Postema, Rotolo, Sartor, Valentini and Walton (eds). Handbook of Legal Reasoning and Argumentation, Springer, 2018
The primary aim of this chapter is to explain the nature of evidential reasoning, the characteristic difficulties encountered, and the tools to address these difficulties. Our focus is on evidential reasoning in criminal cases. There is an extensive scholarly literature on these topics, and it is a secondary aim of the chapter to provide readers the means to find their way in historical and ongoing debates. [co-authored with Bart Verheij]
-
Betraying Davidson: A Quest for the Incommensurable
In Carrara and Morato (eds). Language, Knowledge, and Metaphysics, College Publications, 2009
Talk of incommensurable conceptual schemes typically alludes to the following picture: there is reality that is inaccessible per se, on the one hand, and there is us, or different groups of people, accessing reality by means of incommensurable conceptual schemes, on the other. But what does it mean for two schemes to be incommensurable? And is incommensurability at all intelligible to us? Donald Davidson famously argued that incommensurability is unintelligible. His argument first provides a definition of incommensurability, and then shows that incommensurability thus defined is unintelligible. I will argue, contra Davidson, that his definition of incommensurability does not lend any support to the unintelligibility claim.
-
LKIF Core: Principled Ontology Development for the Legal Domain
In Breuker, Casanovas, Klein and Francesconi (eds). Law, Ontology, and the Semantic Web, IOS Press, 2009 (with Hoekstra, Breuker, and Boer)
In this paper we describe a legal core ontology that is part of the Legal Knowledge Interchange Format: a knowledge representation formalism that enables the translation of legal knowledge bases written in different representation formats and formalisms. A legal (core) ontology can play an important role in the translation of existing legal knowledge bases to other representation formats, in particular as the basis for articulate knowledge serving. This requires that the ontology has a firm grounding in commonsense and is developed in a principled manner. We describe the theory and methodology underlying the LKIF core ontology, compare it with other ontologies, introduce the concepts it defines, and discuss its use in the formalisation of an EU directive.
Reviews
Theses
-
-
Statistics and Probability in Criminal Trials: The Good, the Bad, and the Ugly
Ph.D. Thesis, Stanford University, 2013
Is a high probability of guilt, in and of itself, enough to convict? I maintain that the correct answer is No. I argue that the prosecutor's burden of proof does not consist only in establishing the high probability of the defendant's guilt; it also consists in (1) establishing guilt with a resiliently high probability, and in (2) offering a reasonably specific and detailed narrative of the crime. My account has applications to debates in epistemology about lottery propositions, and to questions in legal scholarship regarding the use of statistical evidence in criminal trials.
-
Formalizing Legislation in the Event Calculus: The Case of the Italian Citizenship Law
M.Sc. Thesis, University of Amsterdam (ILLC), 2007
I explore to what extent legal knowledge and legal reasoning can be encoded in the Event Calculus and how the Event Calculus needs to be extended (if at all) to accommodate legal reasoning and knowledge. My working hypothesis is that a considerable portion of legal knowledge and reasoning is contained in legislative texts (laws, decrees, regulations, directives, etc.). So my primary task will be to formalize a piece of legislation in the language of the Event Calculus.
Fall 2024
-
-
Legal Probabilism
Legal probabilism is an ongoing research program that aims to harness the powers of probability theory to analyze, model and improve the evaluation of evidence and the process of decision-making in trial proceedings. Despite its promise, significant hurdles exist in carrying out this program, and many have criticized it. These critiques range from difficulties in assessing the probability of someone’s criminal or civil liability, to the dehumanization of trial decisions, to misconstruing how the process of evidence evaluation and decision-making takes place in trial proceedings. The seminar examines these current debates and their relevance to larger questions about AI and the human future.
Past courses
-
-
Race Causality Discrimination
Claims of racial discrimination and structural racism are ubiquitous today. On one hand, studies in the social sciences show that, after controlling for variables other than race, racial disparities still persist in incarceration, health outcomes and wealth. Thus, race appears to be a key variable for explaining racial disparities. On the other, many point out that, again after controlling for variables other than race, the same racial disparities significantly decrease. Thus, they think that the explanatory role of race has been exaggerated. This debate leaves out a crucial question, however. What causal and explanatory role (if any) can race play in the first place? If, as many hold, race is a social construct emerging from the fabric of social interactions, does it make sense to isolate it and control for other variables? The course aims to untangle the assumptions about the causality of race that inform the ongoing debate about racial discrimination and structural racism. Readings will be drawn from the literature in the social sciences on racial inequalities, the statistical and philosophical literature on causality, and the philosophical literature on theories of race.
-
The Common Ground
This seminar examines the idea of the common ground as a condition for meaningful disagreement, valid reasoning, rational inquiry about matters of fact, and political decision-making. The motivation for thinking about the common ground stems from the growing political polarization on issues such as climate change, vaccines, abortion, race, the 1619 Project, #MeToo, and more. This polarization in people’s opinions, commitments, and values exists alongside deep divisions in wealth, education, housing, health, and lived experiences. Seeking a common ground is sometimes touted as the solution to the problem of polarization, and the lack of a common ground as its cause. On the other hand, attempting to overcome polarization by seeking a common ground is sometimes considered the wrong way of addressing the problem. But what is a common ground, in the first place? If it is the solution to polarization, what kind of common ground should we be seeking? Is a common ground even possible or does it necessarily lead to the exclusion of people’s identities? This seminar examines these questions from a philosophical standpoint. It will not offer answers, but rather, a framework of analysis.
-
AI for Judges
This module examines how tools from AI can help judges make decisions. Judges make decisions at several junctures of the justice system, pre- and post-trial. Their decisions concern matters of fact as well as matters of law. We will focus on decisions about matters of fact and examine three topics: (1) machine learning tools for risk assessment applied to decisions about bail, preventive detention and sentencing; (2) multiagent systems for deciding about the relevance and admissibility of evidence; and (3) argumentation structures and Bayesian networks for representing complex bodies of evidence. The module will also examine the different ways in which the judiciary can rely on findings about matters of fact that are entirely or partially based on automated elements, as well as the current legislative policies and case-law on the topic.
-
Algorithmic Fairness
Public and private sector entities rely on algorithms to streamline decisions about healthcare, loans, social services, sentencing and more. This course examines the ongoing debate among computer scientists, legal scholars and philosophers about the fairness of algorithmic decision-making. What does it mean for an algorithm to be fair? How does algorithmic fairness relate to other ideas such as anti-discrimination and fair equality of opportunity? Can fairness be measured quantitatively? What other values and goals should inform the design of ethical algorithms, such as minimizing the harm toward disadvantaged minorities or respecting each person’s individuality?
-
Bayesian Networks in Philosophy
An exploration of the philosophical applications of Bayesian networks. The course consists of three parts: (a) Probability theory refresher; (b) Intro to Bayesian networks; and (c) Philosophical applications.
-
Introduction to Logic
Spring 2019; Spring 2014
Syntax and semantics of propositional and predicate logic; modal and inductive logic.
-
Asian Philosophies
Spring 2019
Study of the history of Indian philosophy, especially its early beginnings. Readings from key texts in the Hindu and Buddhist philosophical traditions.
-
Is the Criminal Justice System Racist?
Examination of how and to what extent the US criminal justice system is racist.
-
Legal Reasoning
Fall 2019; Fall 2017; Fall 2015
Examination of how judges and lawyers reason, in particular, how legal rules apply to individual cases; how judges interpret the text of the law; how conclusions about “matters of fact” are reached in court; and how past court decisions influence current decisions.
-
Problems of Philosophy
Fall 2019; Fall 2017; Fall 2015; Fall 2014
Fundamental questions about ourselves and the world. What is justice? What is our place in the world and in society? What makes us happy? What is the relationship between mind and body? Are we free? Does God exist and can we prove it?
-
Philosophy of Law/Jurisprudence
Spring 2020; Spring 2016; Spring 2015
An introduction to philosophical questions about law, legal reasoning and the justice system. What makes a written text a piece of law? Are laws mere human conventions? How should judges decide difficult cases? Should we obey immoral laws? Why should we obey the law in the first place? How are algorithms used in the justice system today?
-
Critical Reasoning
Fall 2018; Spring 2016; Fall 2014
Introduction to the concepts and methods of thinking, reading and writing analytically.
-
Convicting the Innocent
Summer 2016
Adversarial v. inquisitorial trials; causes of mistaken convictions; remedies; modeling error and truth in the criminal trial.
-
Math on Trial
Summer 2015
Uses and abuses of statistics and probability in the criminal trial; examination of infamous cases, such as Collins and Lucia de Berk; cold-hit DNA matches.
-
Probability and the Law
Winter 2014; Winter 2013
What does it mean to prove guilt beyond a reasonable doubt? Can we interpret legal standards of proof probabilistically? What is the role of probability and statistics in the courtroom? How are quantitative methods changing legal proceedings? No statistical or legal background is expected.
-
Introduction to Philosophy
Summer 2013
Questions about the essence of reality; the limits of knowledge; happiness and a good life; mind and body; free will and determinism.
-
Introduction to Statistics
Summer 2013
Basic statistical concepts. Uses, misuses and limitations of statistical methods. Emphasis on the concepts rather than the computations. No mathematical background is needed, except high school algebra and willingness to think carefully.
-
Modal Logic and its Applications
Summer 2007
Introduction to modal logic and its applications to metaphysics, epistemology, formal semantics and artificial intelligence. No previous knowledge of logic or philosophy is expected.