Mental Representation · Philosophy of Biology

A Critical and Systematic Analysis of Andrew Rubner’s 2024 Rutgers Dissertation

With reconstructed arguments, original objections, and a sustained assessment of the nearly-all theory and its consequences for teleosemantics

ABSTRACT

Andrew Rubner’s 2024 doctoral dissertation develops two interrelated philosophical theories: an ahistorical, statistical account of natural function which he calls the nearly-all theory, and a teleosemantic account of perceptual content grounded in that function theory. The present analysis provides a systematic critical examination of both projects. Part I reconstructs the formal architecture of the nearly-all theory — its goals, circumstances, typicality condition, and interventionist-causal analysis of contribution — and assesses its advantages over etiological theories and the prior statistical account of Garson and Piccinini. Part II examines how the theory is deployed to ground a theory of perceptual content, focusing on the Reichardt-detector case study, the informativity criterion for resolving content indeterminacy, the treatment of the Swampman case, and the novel three-place representation relation proposed to handle reliable systematic misperception. The analysis concludes with original objections, a discursive comparison with major competitors, and a sustained account of the open research problems the dissertation’s commitments generate. We argue that the nearly-all theory constitutes the most formally rigorous ahistorical treatment of biological function in the current literature. Its application to perceptual content is philosophically serious but incomplete in several important respects, particularly in the characterization of the approximation metric and the logical form of the three-place representation relation. The dissertation opens a productive research program whose most significant legacy may be methodological: demonstrating that the etiological assumption pervading teleosemantics is, contrary to widespread belief, not indispensable.

Keywords: teleological function · perceptual content · teleosemantics · nearly-all theory · content indeterminacy · reliable misperception · ahistorical function · Reichardt detectors · Bayesian perception · Swampman

✶ ✶ ✶

§1 Introduction and Orientation

There are few problems in analytic philosophy of mind where the tensions between naturalistic ambition, descriptive adequacy, and formal precision are felt more acutely than in the attempt to explain mental representation without invoking irreducibly intentional, semantic, or phenomenal primitives. The difficulty begins with a deceptively simple observation: mental states are about things. When I perceive the coffee cup on my desk, my perceptual state is about the cup — it represents the cup as being there, as being brown, as occupying a certain region of space. When a frog snaps its tongue at a moving fly, its neural state is about the fly — it represents something as present, as edible, as requiring a particular motor response. What is peculiar and philosophically challenging about this is that no physical state, considered purely as a physical state, is intrinsically about anything. A neural firing pattern, a retinal image, a molecular concentration — none of these are about anything in the way that mental states seem to be. Yet we are natural organisms, and if mental states are physical states, then somehow the physical must give rise to, or ground, or constitute, the aboutness of mind. The project of explaining how this is possible without invoking primitive intentional or semantic notions is the naturalization project, and it has occupied a significant fraction of analytic philosophy of mind for the past half-century.

The dominant strategy in this naturalization project, at least since Fred Dretske’s Knowledge and the Flow of Information (1981) and Ruth Millikan’s Language, Thought, and Other Biological Categories (1984), has been to ground representation in biological function. The basic proposal has an elegant simplicity to it: a perceptual state has a content — represents some property F — because the mechanism that produces it has the biological function to covary with instances of F in the organism’s environment. When the mechanism produces the state in the absence of an F, or in the presence of something that is not an F, we have misrepresentation, and this misrepresentation is understood as a kind of function-failure: the mechanism is not doing what it is biologically supposed to do. The normative standard against which perceptual accuracy is evaluated is thus provided not by any intrinsic feature of the perceptual state itself but by the biological function of the mechanism that produces it. This is the core idea of teleosemantics, and it is an idea with considerable philosophical appeal, because it promises to derive normative, intentional notions from purely natural, causal-historical facts about organisms and their evolutionary history.

Andrew Rubner’s dissertation, submitted to Rutgers University in May 2024 and written under the direction of Susanna Schellenberg, belongs squarely to this teleosemantic tradition. Its central and original ambition, however, is to show that the tradition has been operating under an unnecessary and philosophically costly assumption: the assumption that the relevant notion of biological function must be etiological — that is, grounded in the selection history of the organism’s lineage. Rubner argues, with considerable care and formal precision, that an ahistorical account of biological function — what he calls the nearly-all theory — can underwrite the teleosemantic project just as well as its historical predecessors, while avoiding the well-known difficulties that beset those predecessors. These difficulties include the so-called Swampman problem, the novel-function problem, and a persistent tension between etiological theories and the actual practice of biologists who attribute functions to items without any knowledge of, or appeal to, the evolutionary history of those items.

The dissertation is organized in two parts that mirror its two theoretical ambitions. Chapters One and Two constitute Part I and develop the theory of natural function, situating it carefully within the existing landscape of philosophical theories of teleological function and making the case for the nearly-all approach on both philosophical and scientific grounds. Chapter One — which appeared in the British Journal for the Philosophy of Science in 2023 — contains the core theoretical contribution, developing the formal apparatus of the nearly-all theory and pressing a precise technical objection against the most sophisticated recent competitor. Chapter Two — published in Philosophy Compass — surveys the role of function-theoretic notions in the philosophy of mind, providing the conceptual background necessary for Part II. Chapters Three and Four constitute Part II and develop the theory of perceptual content. Chapter Three proposes the nearly-all+ account of perceptual content and applies it to a detailed case study in vision science. Chapter Four — targeting Philosophy and Phenomenological Research — confronts what Rubner takes to be the hardest problem for any teleosemantic account: the phenomenon of reliable systematic misperception, where the visual system produces systematically wrong perceptual states not as a result of any malfunction but as the predictable output of its normal operation. Rubner’s solution to this problem is the most conceptually innovative contribution of the dissertation: a proposal to reconceive the representation relation itself as three-place rather than two-place.

The present analysis proceeds as follows. Sections Two through Four examine Part I of the dissertation — the background problem of teleological function, the two main competing approaches prior to Rubner, and the nearly-all theory itself in its formal detail. Sections Five through Seven examine Part II — the teleosemantic bridge from function to content, the central case study of Reichardt detectors, the Swampman case, and the treatment of reliable misperception. Sections Eight through Eleven provide critical assessment: what the dissertation establishes, where its arguments are most vulnerable, a discursive comparison with the major competing positions in the literature, and a sustained account of the open research problems that Rubner’s commitments generate. Section Twelve concludes.

§2 The Teleological Function Problem: Background and Stakes

To understand what Rubner is trying to do and why it matters, one needs to understand the problem of teleological function in biology and the philosophical significance of that problem for the theory of mental content. The problem begins with a simple observation about biological explanation. When biologists explain how the circulatory system works, they do not merely describe the causal sequence by which the heart contracts, forcing blood through the chambers and into the arteries, which carry it to the capillary beds, where oxygen and nutrients are exchanged with the surrounding tissue. They also say what each component of the system is for. The heart is for pumping blood. The arteries are for carrying oxygenated blood to the tissues. The capillaries are for facilitating exchange. These for-locutions are not mere rhetorical decoration that can be paraphrased away without loss. They are doing genuine explanatory work: they identify the function of each component within the hierarchical organization of the whole system, and it is in terms of these functions that the system’s normal operation, its pathological failures, and the relationship between structure and activity are all understood.

What makes these functional attributions philosophically interesting — and philosophically problematic — is that they carry a kind of normativity that ordinary causal descriptions lack. When we say that the function of the heart is to pump blood, we are not merely saying that hearts cause blood to circulate, which is a causal claim. We are saying that hearts are supposed to pump blood, that a heart which fails to pump is malfunctioning, that what explains the presence of hearts in organisms is their blood-pumping activity rather than any of the other things hearts do. This normative dimension — the ‘supposed to’, the standard of correctness, the distinction between fulfilling and failing to fulfill a function — is what makes teleological function so philosophically interesting and what makes it so useful for the philosophy of mind. For the normativity that teleological function provides is exactly the kind of normativity that mental representation seems to require: a perceptual state is accurate or inaccurate relative to a standard, and that standard needs to come from somewhere.

Rubner, following earlier work by Neander (2017) and others, codifies this normative demand in two desiderata that any adequate theory of teleological function must satisfy. The first, which he calls the function-malfunction distinction, requires that a theory allow for the possibility that a token item has a function that it is currently unable to fulfill. A diseased heart that can no longer pump blood does not thereby lose its function; it still has the function to pump blood, and its inability to do so is precisely what makes it a malfunctioning heart rather than merely a non-functioning piece of tissue. This desideratum is not difficult for most theories to satisfy, since it essentially amounts to requiring that function be a type-level rather than a token-level property: the function of the heart is determined by what items of the heart type are supposed to do, not by what this particular heart happens to be doing.

The second desideratum is philosophically more demanding and more interesting. Rubner calls it the function-accident distinction, and it requires that a theory distinguish, for any item, between those activities of that item that constitute its functions and those activities that, while causally beneficial in some circumstances, are merely accidental or coincidental contributions to the organism’s welfare. The paradigm case that Rubner uses throughout the dissertation is the heart’s production of a thump-thump noise with each contraction. This noise is causally beneficial in a medical examination context: a physician who auscultates an irregular heartbeat can diagnose a disease and thereby contribute to the patient’s survival. The heart’s noise-making thus contributes, in this context, to the organism’s welfare. Yet it would be deeply counterintuitive and biologically incorrect to say that the function of the heart is to aid in medical diagnosis. The activity of noise-making is an accidental accompaniment of the heart’s genuine function to pump blood. The function-accident distinction demands that a theory of teleological function explain this difference — that it explain why pumping blood is a function of the heart while aiding diagnosis is not, despite the fact that both activities are causally beneficial in appropriate circumstances.

The Etiological Theory and Its Difficulties

The dominant philosophical theory of teleological function, the one that has shaped teleosemantics for the past four decades, is the etiological theory, associated primarily with Wright (1973), Millikan (1984, 1989), and Neander (1991, 1995). In its canonical formulation, the theory holds that the function of a biological item is determined by what it was selected for in the evolutionary history of the lineage to which it belongs. More precisely, the proper function of an item x is whatever activity ϕ explains why items of x’s type are present — or have been retained — in the relevant population. The heart pumps blood, and it is because hearts that pump blood were preferentially retained over hearts that did not pump blood — or rather, over organisms whose hearts did not pump blood — that we can say the function of the heart is to pump blood. The noise-making is not a function because it did not explain the retention of hearts; whether or not a heart makes a particular kind of noise has no appreciable effect on evolutionary fitness under ordinary conditions.

The etiological theory satisfies both desiderata. It satisfies the function-malfunction distinction because the proper function is determined at the type level by the selection history, not by what any particular token currently does: a heart that cannot pump blood still has the function to pump blood, because the type was selected for pumping, and this remains true regardless of the current state of the token. It satisfies the function-accident distinction because it ties function to the explanation of evolutionary retention: a heart that occasionally aids in diagnosis does not acquire a diagnostic function thereby, because the occasional diagnostic utility of hearts does not explain why hearts were selected for.

These are genuine philosophical virtues, and they go a long way toward explaining why the etiological theory has been so influential. But Rubner argues — and here he is working within a tradition of criticism that includes Walsh (1996), Schlosser (1998), and Weber (2005), among others — that the etiological theory pays for these virtues with costs that are, on reflection, unacceptable. The first cost concerns the relationship between the theory and actual biological practice. Scientists attribute functions to biological items on the basis of experimental and structural evidence, and they do so without any reference to, or knowledge of, evolutionary selection history. Rubner’s paradigm case is the discovery of the thymus’s function in the early 1960s. Burnet, Miller, and their colleagues demonstrated through neonatal thymectomy experiments — removing the thymus gland from neonatal animals and observing the resulting immune deficiencies — that the thymus has the function of producing lymphocytes and thereby enabling adaptive immunity. As Schaffner documents in his history of immunology, no evolutionary argument was offered, consulted, or apparently deemed necessary by the researchers making this attribution. The function was inferred entirely from current causal-structural evidence: from what the thymus does when it is present, from what fails when it is absent, and from the interventional consequences of its removal. If the etiological theory is correct, then the thymus’s function is grounded in a selection history that the researchers who discovered it had no knowledge of and made no appeal to. This is metaphysically possible, but it creates an uncomfortable gap between the theory and the epistemic practices of the discipline whose attributions the theory purports to explicate.

The second cost is more philosophically dramatic and concerns the phenomenon of novel biological functions. The etiological theory requires a selection history in order to ground any function attribution, but there are clearly cases where new functions arise in biological systems without any selection history backing them. Weber’s (2005) illustrative case is a bacterial enzyme: suppose a random mutation produces a novel enzyme that enables a bacterium to digest a previously indigestible sugar. The enzyme has never been selected for this activity; it is the very first instance of this type. Yet it seems entirely natural and biologically appropriate to say that this enzyme has the function of metabolizing sugar — it does it reliably, it contributes to the bacterium’s reproductive success, and when the bacterium reproduces, the descendant organisms will have enzymes with the same function. On the etiological theory, however, the function attribution is unavailable at this initial stage: since no selection has yet acted on the enzyme, there is no selection history to ground the attribution. The function, on the etiological account, can only come later, after selection has operated across multiple generations. This is a counterintuitive prediction, and it suggests that the etiological theory ties the concept of biological function too closely to the mechanism of natural selection, when in fact the concept seems to apply more broadly to any item whose activity systematically contributes to the goals of the system of which it is a part.

The Garson-Piccinini Account and Its Technical Failure

The difficulties with the etiological theory have motivated several philosophers to develop ahistorical alternatives, attempting to ground teleological function in current statistical or structural facts rather than in evolutionary history. The most developed and philosophically sophisticated of these recent ahistorical accounts is due to Garson and Piccinini (2014), who propose what they call a biostatistical theory. Their core idea is that the function of an item x is to ϕ if, within the relevant reference class, items of x’s type contribute to the relevant biological goal (survival, reproduction, or inclusive fitness) with non-negligible probability, and do so by ϕ-ing with non-negligible probability given that they are contributing at all. The appeal to non-negligibility rather than typicality or high probability is deliberate: Garson and Piccinini want to accommodate biological functions that are realized only rarely, such as the function of sperm to fertilize ova, while still maintaining a statistical grip on the concept.
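Rendered schematically — the regimentation and variable names are ours, not Garson and Piccinini’s — the biostatistical proposal has the following shape:

```latex
% Garson–Piccinini biostatistical account (our regimentation).
% F(x, \phi): x has the function to \phi; G: the relevant biological
% goal; \epsilon: the non-negligibility threshold.
\begin{align*}
F(x,\phi) \iff {} & \Pr\big(\text{items of } x\text{'s type contribute to } G\big) > \epsilon \\
\text{and } {}    & \Pr\big(\text{they do so by } \phi\text{-ing} \,\big|\, \text{they contribute to } G\big) > \epsilon
\end{align*}
```

Both conjuncts demand only that a probability clear the low bar ε; nothing requires that ϕ-ing be the typical mode of contribution.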

Rubner’s central technical objection to the Garson-Piccinini account is precise, clean, and — in our assessment — decisive. The objection targets the second desideratum, the function-accident distinction, and establishes that the non-negligibility threshold is simply too weak to draw the required line. The argument runs as follows. Consider again the claim that the function of the heart is to aid in diagnosing various diseases. Garson and Piccinini’s first condition is satisfied: hearts clearly contribute to human survival with non-negligible probability. Their second condition asks: given that a heart is contributing to survival or fitness, what is the probability that it does so by aiding diagnosis? This probability is non-negligible. If it were negligible, physicians would not routinely listen to patients’ hearts; the diagnostic utility of cardiac auscultation is real and well-established. Therefore, on Garson and Piccinini’s account, the heart has a function to aid in diagnosing disease. But this is precisely the kind of accidental benefit that any adequate theory of biological function should exclude. The diagnostic utility of the heart is a contingent benefit that arises only in the specific context of medical examination; it does not explain the presence or retention of hearts in organisms, it does not capture what hearts are biologically for, and it is not the kind of thing that biologists mean when they speak of the heart’s function.

The structural diagnosis of this failure is important for understanding what the nearly-all theory does differently. Rubner argues that the problem with the non-negligibility threshold is that it admits too much: it includes any activity that is even occasionally and non-trivially beneficial, regardless of whether that activity is the characteristic or typical mode of beneficial activity for items of the relevant type. Nearly all hearts, when they are contributing to an organism’s welfare, do so by pumping blood; only a small fraction of hearts, in specific medical contexts, contribute by making diagnostically significant sounds. The difference between ‘nearly all’ and ‘some non-negligible fraction’ is exactly the difference between a genuine function and an accidental benefit, and it is a difference that the Garson-Piccinini account, precisely because it eschews typicality in favor of non-negligibility, cannot capture.
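The gap between the two thresholds can be made vivid with a small numerical sketch. The probabilities and threshold values below are purely illustrative assumptions of ours, not figures from Rubner or from Garson and Piccinini; the point is structural: a mode of benefit can clear a non-negligibility bar ε while falling far short of a typicality bar.

```python
# Toy illustration (our invented numbers) of why a non-negligibility
# threshold over-generates function attributions while a typicality
# ("nearly all") threshold does not.

EPSILON = 0.01      # non-negligibility threshold (assumed)
NEARLY_ALL = 0.95   # typicality threshold (assumed)

# Hypothetical conditional probabilities: given that a heart is
# contributing to the organism's welfare, the probability that it
# does so by each mode of activity.
p_by_mode = {
    "pumping blood": 0.99,     # virtually every contributing heart
    "aiding diagnosis": 0.03,  # small but non-negligible fraction
}

def gp_function(p: float) -> bool:
    # Garson-Piccinini: non-negligible probability suffices.
    return p > EPSILON

def nearly_all_function(p: float) -> bool:
    # Nearly-all theory: the mode must be typical in the reference class.
    return p > NEARLY_ALL

for mode, p in p_by_mode.items():
    print(f"{mode}: G&P verdict={gp_function(p)}, "
          f"nearly-all verdict={nearly_all_function(p)}")
```

On these numbers, both pumping and diagnosis count as functions for Garson and Piccinini, while only pumping survives the typicality test — exactly the asymmetry the function-accident distinction demands.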

§3 The Nearly-All Theory: Formal Construction and Philosophical Motivation

The nearly-all theory that Rubner develops in Chapter One takes its name from the central normative concept at its core: the claim that a function of an item x is to ϕ when nearly all items of x’s type that are making a contribution to the relevant goal in the relevant circumstances do so by ϕ-ing. The theory is built on a small number of carefully defined primitives, and its virtue lies not just in the central insight about typicality but in the precision with which Rubner analyzes the supporting concepts of goal, circumstance, and contribution. Each of these deserves extended discussion.

Goals, Circumstances, and the Scope of Function Attributions

A goal of a system, in Rubner’s framework, is an observable capacity exercised at some level of biological organization. Goals in this sense are not the subjective aims or intentions of agents — they are the objective functional capacities of biological systems at whatever level of organization is relevant for the explanatory project at hand. The circulatory system has the goal of transmitting nutrients and oxygen to tissues and removing metabolic waste. The immune system has the goal of identifying and neutralizing pathogens and damaged cells. At the level of the whole organism, the overarching goal is typically characterized in terms of what Wouters (2005) calls the life-state: the maintenance of the complex organization that distinguishes living organisms from their non-living surroundings. This plurality of goals is important because it means that the same item can have multiple functions relative to different goals: the circulatory system contributes both to nutrient transmission and to thermoregulation, and these are different goals, grounding potentially different function attributions.

A circumstance, in the theory, is a class of spatio-temporal regions. The point of introducing this parameter is to relativize function attributions to the conditions under which an item typically operates and contributes. Polar bear fur contributes to maintaining the organism’s life-state in arctic conditions by reducing heat loss; it does not contribute in this way in tropical conditions. The function of polar bear fur to reduce heat loss is therefore a function relative to arctic-like circumstances, and a theory that ignored this relativization would be unable to capture the fact that function attributions are always implicitly indexed to some range of normal operating conditions. Crucially, and this is one of the most important features of the theory, circumstances are defined as classes of possible spatio-temporal regions rather than as actual historical events. An item can have a function relative to a circumstance it has never actually occupied, so long as it would contribute in the relevant way if it were in that circumstance. This ahistorical characterization of circumstances is part of what makes the theory genuinely ahistorical: it does not require that any actual selection event have occurred in the relevant circumstances, only that the item be the kind of thing that would contribute in the relevant way there.

The Analysis of Contribution

The most technically sophisticated part of the dissertation is the analysis of the contribution relation, and it is here that Rubner makes his most original and precise formal contribution to the philosophy of biology. Prior accounts of biological function that invoke contribution — including Cummins’s (1975) influential functional analysis account, Craver’s (2001) mechanistic account, and Garson and Piccinini’s statistical account — all treat contribution as a primitive or near-primitive concept. They invoke it without analyzing it, which means that whatever work contribution does in distinguishing functions from accidents is work that is done without explanation. Rubner provides a two-condition reductive analysis.

The first condition is probabilistic and counterfactual. Item x contributes to goal G of system S in circumstance C by ϕ-ing, relative to rate-intervals R and R′, only if the following holds: were x to ϕ at a rate within R, the objective probability of S’s fulfilling G in C would be v; were x to ϕ at a rate within R′ (an inappropriate rate), that probability would be v′; and v exceeds v′. The rate-relativization is motivated by attention to the biological detail: a heart contributes to circulatory function when it pumps within the normal resting range of roughly sixty to one hundred beats per minute; pumping dramatically above or below this rate does not contribute to circulatory function and may actively impede it. The probabilistic interpretation of this condition is objectivist — it invokes propensities or objective frequencies rather than degrees of belief, which is necessary to preserve the biological objectivity of function attributions.
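Rendered schematically — the notation is ours, with □→ for the counterfactual conditional — the condition reads:

```latex
% Condition 1 (probabilistic-counterfactual), our regimentation.
% x contributes to goal G of system S in circumstance C by \phi-ing,
% relative to an appropriate rate-interval R and an inappropriate
% one R', only if:
\begin{align*}
  \mathrm{rate}(\phi_x) \in R  \;\Box\!\!\rightarrow\; & \Pr\big(S \text{ fulfills } G \text{ in } C\big) = v \\
  \mathrm{rate}(\phi_x) \in R' \;\Box\!\!\rightarrow\; & \Pr\big(S \text{ fulfills } G \text{ in } C\big) = v' \\
  & v > v'
\end{align*}
```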

The second condition is explicitly causal, and it is the one that does the work of distinguishing pumping from thumping. Item x contributes to G by ϕ-ing only if there is an interventionist causal connection, in the sense developed by Woodward (2003), between x’s ϕ-ing at the appropriate rate and S’s achieving G. Woodward’s interventionism defines a causal connection between two variables in terms of what would happen under ideal interventions: X causes Y if and only if there exists an ideal intervention on X that changes the value of Y. Crucially, ideal interventions need not be nomologically possible; they are conceptually defined as any manipulation that varies X without affecting Y except through X. This means that we can consider worlds in which cardiac anatomy is radically different from actuality — worlds in which the physical link between pumping and noise-making is broken — and ask what would happen in such worlds.

The discriminatory force of the causal condition can be seen most clearly by considering three contrasting scenarios. In the actual world, the heart both pumps blood and makes noise, and these two activities are physically inseparable given actual cardiac anatomy. In a counterfactual world where the heart makes noise but does not pump blood — a world that is nomologically impossible but conceptually coherent — the circulatory system fails entirely: nutrients are not delivered, waste is not removed, and the organism dies. In a second counterfactual world where the heart pumps blood silently, without producing any sounds, the circulatory system functions normally. The interventionist test asks: what would happen under an ideal intervention that holds pumping fixed and eliminates noise? Circulatory function would be unaffected. What would happen under an ideal intervention that holds noise fixed and eliminates pumping? Circulatory function would collapse. Therefore, there is an interventionist causal connection between pumping and circulatory goal-fulfillment, but not between noise-making and circulatory goal-fulfillment. Only pumping satisfies the causal condition, and therefore only pumping is a genuine contribution to the system’s goal.
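The three-scenario test can be miniaturized as a structural model. The sketch below is ours, not Rubner’s or Woodward’s formalism: its single structural equation encodes the biological fact that goal-fulfillment depends on pumping alone, and the “ideal interventions” simply set the two variables directly, bypassing the actual anatomical coupling between contraction, pumping, and noise.

```python
# Minimal structural-model sketch (ours) of the interventionist test
# distinguishing pumping from noise-making. In the actual mechanism,
# one upstream variable (cardiac contraction) drives both pumping and
# noise; an ideal intervention severs that link and sets each variable
# independently.

def circulatory_goal(pumping: bool, noise: bool) -> bool:
    # Structural equation: goal-fulfillment depends only on pumping;
    # noise is causally idle with respect to the circulatory goal.
    return pumping

def ideal_intervention(pumping: bool, noise: bool) -> bool:
    """Set both variables directly, bypassing cardiac anatomy."""
    return circulatory_goal(pumping, noise)

# Intervention 1: hold pumping fixed, eliminate noise -> goal still met.
assert ideal_intervention(pumping=True, noise=False) is True

# Intervention 2: hold noise fixed, eliminate pumping -> goal collapses.
assert ideal_intervention(pumping=False, noise=True) is False
```

Varying noise under intervention never changes the outcome; varying pumping always does. On the interventionist criterion, pumping, and not noise-making, bears a causal connection to goal-fulfillment.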

The Complete Theory and Its Empirical Test Cases

Bringing these elements together, the nearly-all theory holds that a function of item-type X in system S is to ϕ, relative to goal G and circumstance C, if and only if G is a goal of S and a typical way in which items of type X make a contribution, in the full causal-probabilistic sense just analyzed, to G in C is by ϕ-ing. The typicality condition is drawn from Wilhelm’s (2022) formal theory of typicality and is understood in terms of nearly-universal exemplification within the reference class: it is not merely that some non-negligible fraction of contributing hearts pump blood, but that nearly all of them do, in the sense that pumping blood is the characteristic or typical mode of beneficial activity for items of the heart type. This is importantly different from saying that all contributing hearts pump blood — the theory allows for exceptions, for hearts that contribute in unusual ways in unusual circumstances — while still providing a normative standard that excludes accidental benefits.
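Putting the pieces together in schematic form — again our regimentation, with Typ standing in for Wilhelm’s (2022) typicality operator:

```latex
% The nearly-all theory, schematically (our regimentation):
\mathrm{Fn}(X, \phi \mid S, G, C) \iff
  G \text{ is a goal of } S \;\wedge\;
  \mathrm{Typ}\big(\phi\text{-ing} \,\big|\, X\text{-items contributing to } G \text{ in } C\big)
% where Typ(A | K) holds iff nearly all members of the reference
% class K do A -- near-universal, but not universal, exemplification.
```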

The theory’s predictions on the standard test cases are clean and intuitively correct. The heart has the function of pumping blood because nearly all hearts-in-contribution-contexts — nearly all hearts that are making a causal-probabilistic contribution to the organism’s circulatory function — do so by pumping blood. This condition is trivially satisfied for any organism with a functioning circulatory system, so the function attribution is immediate and stable. The heart does not have the function to aid in diagnosis because, even within a medical examination context, nearly all hearts-in-contribution-contexts contribute to the organism’s welfare by pumping blood; only a subset of hearts, the diseased ones whose irregular rhythms are diagnostically significant, contribute in the diagnostic way. The diagnostic contribution is not typical within the reference class.

The sperm case, which has been a persistent challenge for statistical theories because sperm have an extraordinarily low success rate at fertilization, receives an elegant solution through the reference-class restriction. The relevant reference class is not all sperm simpliciter, but sperm that are making a contribution to the organism’s reproductive goals in the relevant circumstance — roughly, sperm in the presence of a viable ovum under appropriate conditions for fertilization. Within this restricted class, nearly all sperm that are contributing do so by fertilizing the ovum; it is not as if some sperm contribute by doing something other than fertilizing. The reference-class restriction is what allows the theory to accommodate the vast majority of sperm, which never make a contribution of any kind, without having those non-contributing sperm dilute the typicality claim. This is a philosophically significant move: the normative work that selection history does in etiological accounts — explaining why we should look at what the successful sperm do rather than what all sperm do — is done instead by the restriction to contributing items in the appropriate circumstance.
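The effect of the restriction can again be put numerically. The figures below are invented for illustration; what matters is the contrast between typicality computed over all sperm and typicality computed over the contributing subclass.

```python
# Toy sketch (our illustrative numbers) of the reference-class
# restriction that rescues low-success-rate functions like fertilization.

# Model each sperm as a (status, mode-of-contribution-or-None) pair.
population = (
    [("contributing", "fertilizing ovum")] * 3     # the rare successes
    + [("non-contributing", None)] * 1_000_000     # the vast majority
)

# Unrestricted typicality: fertilizers are a vanishing fraction of all
# sperm, so a naive statistical theory finds no function here.
all_sperm = len(population)
fertilizers = sum(1 for _, mode in population if mode == "fertilizing ovum")
print(fertilizers / all_sperm)  # astronomically small

# Restricted reference class: only sperm that are contributing at all.
contributing = [(s, m) for s, m in population if s == "contributing"]
share = sum(1 for _, m in contributing
            if m == "fertilizing ovum") / len(contributing)
print(share)  # 1.0 -- all contributors contribute by fertilizing
```

The non-contributing sperm simply drop out of the reference class, so they cannot dilute the typicality claim — the formal analogue of the point that selection history is not needed to tell us to look at the successful cases.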

Dysfunction and Its Conditions

A theory of biological function is not complete without an account of dysfunction, and Rubner provides one that integrates naturally with the nearly-all framework. A token item x in a token system s is dysfunctional relative to a function ϕ, a goal G, and a circumstance C, if and only if three conditions are met. First, G must be a goal of systems of the relevant type: there must be a genuine functional standard that applies to x’s type. Second, x must be unable to contribute by ϕ-ing in C, where ϕ-ing is a typical contribution-mode for items of x’s type: x must fail to fulfill the very activity that constitutes the function. Third, x’s inability must not be the result of a functional trade-off, where the system is prioritizing one functional demand over another in response to limited resources. This third condition is necessary to handle cases that might otherwise appear to be malfunctions but are actually instances of normal functional regulation: the digestive system’s reduced activity during vigorous exercise, when blood is redirected to the muscles, is not a malfunction of the digestive system but a consequence of competing functional demands within a resource-limited organism.
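The three-condition definition has a simple logical form, which can be made explicit as a boolean schema (an illustrative sketch; the truth of each condition is taken as given rather than analyzed):

```python
def is_dysfunctional(goal_applies, can_contribute, due_to_tradeoff):
    """Rubner's three conditions for token dysfunction, as a schema.
    goal_applies:    G is a genuine goal of systems of x's type (condition 1)
    can_contribute:  x is able to contribute by phi-ing in C
                     (condition 2 is the negation of this)
    due_to_tradeoff: the inability results from a functional trade-off
                     (condition 3 is the negation of this)"""
    return goal_applies and (not can_contribute) and (not due_to_tradeoff)

# A heart that cannot pump, with no trade-off in play: dysfunctional.
assert is_dysfunctional(True, False, False)
# Digestion suppressed during exercise (a trade-off): not dysfunctional.
assert not is_dysfunctional(True, False, True)
```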

§4 Ahistoricity: Arguments, Responses, and Residual Difficulties

The most philosophically distinctive feature of the nearly-all theory is its ahistoricity, and it is worth being precise about what this means. The theory is ahistorical in the sense that neither the statement of the theory nor any of its supporting definitions makes reference to the selection history of the items assigned functions. Circumstances are defined as classes of possible spatio-temporal regions, not as actual evolutionary environments or historical epochs. Goals are characterized as current observable capacities of biological systems, not as the historically specific adaptive challenges that shaped those capacities. Contribution is analyzed in terms of interventionist causal connections and objective probabilities, with no indexing to evolutionary time. This means that the theory’s verdicts about what items have what functions are determined entirely by present structural and causal-statistical facts, not by historical facts about selection.

Three Arguments from Cases

This ahistorical character generates three categories of cases where the nearly-all theory gives intuitively correct verdicts that the etiological theory cannot reach. The first and perhaps most straightforwardly compelling is the novel function case, illustrated by Weber’s bacterial enzyme. When a mutation creates a novel enzyme that enables a bacterium to metabolize a previously indigestible sugar, the nearly-all theory immediately assigns the function of sugar metabolism to that enzyme: given that there is only one such enzyme, trivially nearly all enzymes of that structural type that contribute to the bacterium’s reproductive capacity do so by metabolizing sugar, and there is a direct interventionist causal connection between the metabolic activity and the relevant goal. The etiological theory cannot reach this verdict until selection has acted across multiple generations, which means the etiological theory has the strange implication that a biological item can be doing precisely what it does, benefiting the organism in precisely the way it benefits it, contributing to precisely the goals it contributes to, and yet have no function, simply because it is the first item of its type. This implication is at odds with the straightforward biological judgment that novel beneficial biochemical activities have functions from the moment they arise.

The second category of case is Davidson’s Swampman, the philosophical thought experiment in which a lightning bolt assembles, by improbable chance, a molecule-for-molecule duplicate of a human being from the organic matter of a swamp. Swampman has no evolutionary history, no ancestors, no developmental origins in any normal biological sense. On the etiological theory, Swampman’s organs have no proper functions, because their presence is not explained by any selection process. Therefore, on any teleosemantic theory grounded in etiological function, Swampman’s perceptual states have no representational content. This is a conclusion that many philosophers find deeply counterintuitive: Swampman processes light in the same way I do, in the same environment, with what appears to be the same functional organization, and its behavior is guided by its states in the same ways that my behavior is guided by mine. To say that its states represent nothing seems to go against everything we ordinarily mean by representation. The nearly-all theory handles Swampman straightforwardly: since the theory’s verdicts depend on structural type-membership and current causal-statistical facts about contribution rather than on selection history, and since Swampman’s organs are structurally of the same type as mine, the theory assigns identical functions to Swampman’s organs and mine from the moment of assembly.

The third category of case is perhaps the most practically significant: cases where biological functions are attributed by scientists on the basis of experimental intervention evidence, without any knowledge of or appeal to evolutionary history. The thymus case is Rubner’s paradigm, but it is representative of a much broader pattern in biological and medical research. Pharmacologists attribute functions to receptor proteins on the basis of binding affinities and downstream signaling effects, not on the basis of phylogenetic analysis. Developmental biologists attribute functions to transcription factors on the basis of loss-of-function and gain-of-function experiments, not on the basis of selection arguments. The etiological theory implies that all these attributions are epistemically grounded in selection histories that the researchers making the attributions have no access to and make no reference to, which is an uncomfortable implication for a theory that purports to capture what biological function attributions mean.

The Etiologist’s Best Response and Its Failure

The most sophisticated response available to the etiologist to these cases is not to deny that the nearly-all theory makes the correct predictions, but to insist that merely making correct predictions is insufficient: an account of biological function needs to capture not just the extension of the function concept but its normative character. The argument runs as follows. What makes something a function rather than merely a typical activity is that there is a standard of correct performance — that items are supposed to do the relevant thing, that failing to do it constitutes malfunction, that the function-failure can be meaningfully distinguished from normal non-performance. And the only plausible metaphysical ground for this kind of normativity, the argument continues, is evolutionary history: what makes a heart supposed to pump blood, as opposed to merely typically pumping blood, is that hearts were selected for this activity and retained in populations because of it. Without this historical ground, the typicality claim is just a statistical fact about typical behavior, and statistical facts are not norms. They describe how things are; they do not say how things ought to be.

Rubner’s response is to argue that this objection begs the question. The argument for the etiological theory is offered as a reason to adopt the etiological account over the nearly-all account, but it presupposes that ahistorical typicality cannot ground genuine teleological normativity. Since the nearly-all theory claims precisely that it can, and offers the function-malfunction and function-accident distinctions as evidence that it does, the etiologist cannot assume the contrary without independent argument. Rubner presses a deeper point: the normativity of function attributions consists precisely in the evaluative gap between what an item does and what it is supposed to do, and this gap is perfectly well-defined on the nearly-all theory. An item is supposed to ϕ if and only if ϕ-ing is the typical contribution-mode of items of that structural type in contribution-contexts. A dysfunction occurs when a token item fails to ϕ despite ϕ-ing being the typical mode for its type. The evaluative standard is provided by synchronic typicality within the reference class, and this is a genuine standard — not merely a statistical description — because it applies to individual tokens via their membership in a type whose members characteristically behave in a certain way. The ‘supposed to’ is grounded in what is normal for items of that kind, and what is normal in this sense is a normative notion even if it is cashed out in terms of typicality rather than history.

The Pandemic Disease Objection and Reference-Class Sensitivity

Rubner does not claim that the nearly-all theory is without difficulties, and two residual problems merit careful attention, particularly since his treatment of each is less than fully satisfying. The first is Neander’s (1991) pandemic disease objection. Suppose that a pandemic disease infects all members of a species such that their hearts can no longer pump blood. On Boorse’s (1977) statistical account, which identifies function with typical activity in the population, this pandemic would immediately imply that pumping blood is no longer the function of those hearts, since pumping blood is no longer the statistically normal activity for items of that type in that population. The counterintuitive consequence is that the organisms are no longer diseased — they merely have hearts with different functions. Rubner argues that this objection, while devastating for Boorse, does not apply to his own account, because the relevant reference class is not all hearts in the population but hearts that are making a contribution to the organism’s circulatory goals in the relevant circumstances. In a pandemic, this reference class becomes very small — perhaps only the hearts of the few organisms that have not yet been infected — but within that restricted class, contributing hearts presumably still contribute by pumping blood. The pandemic reduces the size of the contributing class; it does not change what items in that class do when they are contributing.

This response is plausible as far as it goes, but it does not fully resolve the worry. In the extreme case where the pandemic has infected all members of the species and no heart is capable of pumping blood, the reference class of contributing hearts is empty, and the nearly-all theory appears to make no prediction at all — since the typicality claim is vacuously true for any activity if the reference class contains no members. This is a different kind of counterintuitive result, and Rubner does not adequately address it. One natural response would be to index function attributions to possible as well as actual contribution: the function of the heart is to pump blood because if the pathology were removed, the items of that structural type would contribute by pumping blood in the relevant circumstances. But this counterfactual extension of the account is not developed in the dissertation, and it would need careful handling to avoid smuggling in historical assumptions through the back door.

The second residual difficulty is reference-class sensitivity: the theory’s predictions depend on how the reference class is individuated, and different individuation criteria may yield different predictions. The relevant class is defined as items of a given type making a contribution in the relevant circumstances, but what determines the type? Rubner’s implicit answer, which draws on Amundson and Lauder’s (1994) work in evolutionary morphology, is that biological types are individuated by morphological and structural criteria rather than by functional criteria: the type heart is defined by a cluster of structural and developmental characteristics — its tissue composition, its developmental origin from particular embryonic precursors, its position in the overall morphological pattern of the cardiovascular system — not by the activity of pumping. This prevents circularity: if types were individuated by function, then the typicality claim would be trivially true (all items of the pumping-blood type pump blood), and the theory would have no content. But while the morphological individuation criterion is the right kind of answer, the dissertation does not defend it in detail against alternatives, and reference-class sensitivity is a well-known problem for all statistical theories of function that deserves more sustained attention.

§5 From Function to Content: The Teleosemantic Bridge

With the nearly-all theory of natural function in place, Rubner’s Part II turns to the central question of perceptual content: how do we get from the claim that a perceptual mechanism has a biological function to the claim that the states it produces represent a particular property? The teleosemantic bridge between function and content looks straightforward: a state represents property F because the mechanism that produces it has the function to covary with F-instances. But this simple bridge is almost immediately blocked by what is perhaps the most serious and persistent challenge to teleosemantic theories: the content indeterminacy problem.

The Content Indeterminacy Problem

The content indeterminacy problem arises because any perceptual mechanism will have functions to produce states in response to multiple properties simultaneously, for two structurally distinct reasons. The first is the logical structure of entailment: if a mechanism has the function to respond to motion, then since motion from A to B entails that something was in A and then in B, the mechanism will also have a function to respond to that entailed property — something in A, then something in B — because responding to motion just is, in part, responding to temporal succession in the relevant region. The second reason is the ecological structure of property co-instantiation: whatever property a mechanism is directly sensitive to, that property will typically co-occur with many others in the organism’s natural environment, and this co-occurrence means that the mechanism will also contribute in appropriate ways with respect to those co-occurring properties, thereby potentially acquiring functions with respect to them as well. The naïve teleosemantic account — states of type R represent property F just in case the mechanism M that produces R has the function to produce R in response to F — is therefore insufficient to determine a unique content for any given state, because mechanisms typically have multiple functions corresponding to multiple candidate properties.

The problem is classically illustrated by the toad-fly case that Neander (2017) has used to great effect in her own teleosemantic work. Toads have a class of neural states — the T5-2 cells first described by Ewert in the 1980s — that are activated by small, elongated, moving objects in the visual field and that trigger tongue-snapping behavior. These neural states plausibly have representational content: they represent the toad as being in a situation calling for a tongue-snap. But which property, exactly, is their content? There are at least three plausible candidates. The states could represent the property small-elongated-moving, which is what the cells are directly causally sensitive to at the level of their input circuitry. They could represent the property toad food, which is the ecologically relevant property that the mechanism has been organized around detecting. Or they could represent the property worm, a specific type of toad food that is frequently present in the environment and that co-occurs reliably with the other two properties. On the naïve theory, the mechanism has the function to produce T5-2 states in response to all three properties, and so all three are candidate contents. Yet the theory as stated provides no resources for determining which of the three is the actual content.

Reichardt Detectors: A Case Study in Vision Science

Rubner’s approach to the indeterminacy problem, which constitutes the theoretical core of Chapter Three, is developed not through the toad-fly case but through a more detailed and scientifically grounded case study: the motion-detection mechanism known as the Reichardt detector. This choice reflects a methodological commitment that runs throughout the dissertation and that distinguishes it from much of the theoretical literature on teleosemantics: Rubner consistently grounds his philosophical arguments in detailed engagement with actual neuroscience, rather than relying on idealized or speculative examples. Reichardt detectors are the neural circuits, first described mathematically by Werner Reichardt in the early 1960s and subsequently identified anatomically in a variety of species, that are believed to implement motion detection in the human and animal visual system. Their computational structure is well understood: two photoreceptors in adjacent spatial regions are connected through a temporal delay filter on one of them, with the two channels combined at a multiplicative (AND-like) stage, so that the circuit produces an output signal when the two photoreceptors are stimulated in temporal succession consistent with motion from one region to the other.
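The computational structure just described can be reproduced in a toy sketch of the classic delay-and-multiply architecture (not the dissertation’s own formalism; the binary signal encoding and unit delay are illustrative assumptions):

```python
def reichardt_output(a_signal, b_signal, delay=1):
    """Toy delay-and-correlate motion detector: the A channel is delayed
    by `delay` time steps and multiplied against the B channel, so the
    summed product is large exactly when stimulation in region A is
    followed by stimulation in region B at the right interval."""
    return sum(a_signal[t - delay] * b_signal[t]
               for t in range(delay, len(b_signal)))

# An object crossing from region A to region B one step later triggers it...
assert reichardt_output([1, 0, 0, 0], [0, 1, 0, 0]) == 1
# ...motion in the reverse direction (B then A) does not...
assert reichardt_output([0, 1, 0, 0], [1, 0, 0, 0]) == 0
# ...and two independent flashes timed A-then-B yield exactly the same
# input pattern, and hence the same response, as genuine motion — the
# root of the apparent-motion indeterminacy discussed below.
```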

This architecture creates the indeterminacy problem in a particularly clear form, because it makes the Reichardt detector causally sensitive to three distinct distal properties. First, the detector responds to something in region A, then something in region B, because the temporal delay filter and the AND gate together are sensitive to any pattern of temporal succession between the two photoreceptors, regardless of whether the stimulation is caused by a single object moving from A to B. Second, and this is a special case of the first, the detector responds to motion from A to B, because any object moving from A to B will produce exactly the kind of temporal succession that activates the circuit. Third, the detector responds to catchable objects in certain circumstances: since catchable objects are, by definition, moving objects that can be intercepted, any environment in which objects are regularly propelled through the air will be one in which the Reichardt detector is activated by catchable objects with systematic regularity. Rubner argues that the nearly-all theory of function, applied carefully to these three properties and to the relevant circumstances, assigns functions to the Reichardt detector with respect to all three. This is precisely the indeterminacy that the theory needs to resolve.

The resolution of the indeterminacy requires grasping two logically independent moves that Rubner makes. The first move uses the circumstance parameter to eliminate the catchable-object candidate without even getting to the informativity condition. Rubner introduces the concept of the broadest circumstance relative to which a mechanism has any functions, written 𝔽(M) — the most general class of conditions in which the mechanism can be in operation at all. For visual mechanisms, this broadest circumstance is the class of regions in which light can reach the subject’s retinas appropriately, which Rubner calls Cvis_data. The Reichardt detector’s function to respond to catchable objects is a function relative to the narrower circumstance in which objects are actually being propelled through the air, which Rubner calls Cair. Since Cair is a proper subclass of Cvis_data, and since the theory’s content conditions are formulated with respect to the broadest circumstance 𝔽(M) rather than narrower circumstances, the catchable-object function drops out of the content-determination process: the detector does not have a function to respond to catchable objects relative to 𝔽(M) = Cvis_data, only relative to the more restricted Cair. This move elegantly dispatches an entire family of action-oriented properties — not just catchable, but avoidable, throwable, and any other properties whose ecological relevance is tied to specific and restricted circumstances — without needing to invoke the informativity condition at all.
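The circumstance-filtering move has a simple set-theoretic shape, which the following sketch makes explicit (the circumstance labels and the function list are illustrative assumptions):

```python
def functions_for_content(functions, broadest):
    """Only functions the mechanism has relative to the broadest
    circumstance F(M) enter content determination; functions held only
    relative to proper subclasses of F(M) drop out."""
    return [mode for (mode, circumstance) in functions
            if circumstance == broadest]

# C_air is a proper subclass of the broadest visual circumstance.
funcs = [("respond to motion A->B", "C_vis_data"),
         ("respond to temporal succession", "C_vis_data"),
         ("respond to catchable objects", "C_air")]
assert functions_for_content(funcs, "C_vis_data") == [
    "respond to motion A->B", "respond to temporal succession"]
```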

The Informativity Criterion and the Nearly-All+ Theory

The remaining indeterminacy — between motion from A to B and the more general property of temporal succession — is resolved by the informativity criterion that Rubner introduces as the second component of the nearly-all+ theory. The key observation is that motion from A to B is a more informative property than temporal succession, in the sense that motion entails succession but succession does not entail motion. Rubner develops a detailed account of what he calls informational nesting: property G is informationally nested in property F just in case F is more informative than G, where this is grounded in the logical or nomological relationship between the properties. In the clearest case, F logically entails G but G does not logically entail F, which means that representing something as F carries more information than representing it as G. More generally, informational nesting can be grounded in natural laws rather than logical entailment, and Rubner adds a probabilistic generalization: G is informationally nested in F if the conditional probability of G given F is higher than the conditional probability of F given G, which holds whenever being F makes it more likely that something is G than vice versa.
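The probabilistic clause of informational nesting can be checked mechanically against a toy joint distribution (the distribution and property labels are illustrative assumptions; both properties are assumed to have positive probability):

```python
def nested_in(joint, F, G):
    """G is informationally nested in F (probabilistic clause, sketch):
    P(G|F) > P(F|G), computed from a toy joint distribution mapping
    frozensets of co-instantiated properties to probabilities."""
    pF = sum(p for props, p in joint.items() if F in props)
    pG = sum(p for props, p in joint.items() if G in props)
    pFG = sum(p for props, p in joint.items() if F in props and G in props)
    return pFG / pF > pFG / pG

# Toy world: every motion event involves succession, but not vice versa
# (alternating flashes give succession without motion).
joint = {
    frozenset({"motion", "succession"}): 0.2,
    frozenset({"succession"}): 0.3,
    frozenset(): 0.5,
}
# P(succession|motion) = 1 > P(motion|succession) = 0.4, so succession
# is nested in motion; motion is the more informative property.
assert nested_in(joint, "motion", "succession")
```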

The nearly-all+ theory holds that states of type R represent property F just in case the mechanism M has a function, relative to 𝔽(M), to produce R in response to F, and F is the most informative property among those with respect to which M has such a function relative to 𝔽(M). Applied to the Reichardt case: both motion from A to B and temporal succession satisfy the function condition relative to Cvis_data, but motion from A to B is more informative than succession because motion logically entails succession but not vice versa. The theory therefore predicts that the content of the Reichardt detector’s output state type, RV, is motion from A to B. This prediction is confirmed by the apparent motion phenomenon: when two flashing lights alternate in regions A and B without any single object moving, the Reichardt detector is triggered by the temporal succession pattern, and the perceiver undergoes an illusion of motion. The fact that this is an illusion — that we readily recognize it as a perceptual error rather than a veridical experience — confirms that the state’s content is motion from A to B, not merely succession: if the content were succession, no illusion would occur, since the succession of lights in A and B is accurately represented. The normativity of illusoriness is thus itself a piece of evidence for the content attribution, and Rubner uses it as such. This is a methodologically important move: rather than treating content attribution as purely theoretical, he grounds it in the phenomenology of perceptual experience, specifically in our pre-theoretical judgments about when percepts are accurate and when they are not.
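Putting the function condition and the informativity condition together, the selection rule of the nearly-all+ theory has the following shape (a schematic sketch; the entailment table is an illustrative stand-in for a full informational-nesting relation):

```python
def content(candidates, more_informative):
    """Nearly-all+ selection rule (sketch): among the properties the
    mechanism has a function to respond to relative to the broadest
    circumstance, the content is the property more informative than all
    the others; if no property dominates, indeterminacy is unresolved."""
    for f in candidates:
        if all(more_informative(f, g) for g in candidates if g != f):
            return f
    return None

# Reichardt case: motion entails succession but not conversely, so
# motion is the more informative of the two surviving candidates.
entails = {("motion from A to B", "temporal succession")}
mi = lambda f, g: (f, g) in entails
assert content(["temporal succession", "motion from A to B"], mi) \
    == "motion from A to B"
```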

The toad case receives a clean resolution under the same framework. The property worm is eliminated by the function condition: since toads eat a diverse array of prey — beetles, slugs, millipedes, earthworms, and various insects — not nearly all T5-2-cells-in-contribution-contexts are responding to worms. The cells have functions with respect to a much broader category of prey items. Between small-elongated-moving and toad food, the informativity condition selects via the probabilistic nesting clause: given normal background conditions in a toad’s natural environment, the probability that a small-elongated-moving object is toad food is higher than the probability that a piece of toad food is small-elongated-moving, because many items that count as toad food are at rest at any given moment. Small-elongated-moving objects in the toad’s visual field are therefore reliably toad food, making small-elongated-moving the more informative property relative to the ecological distribution of toad food in the toad’s natural environment. The content of T5-2 cell states is therefore small-elongated-moving, not toad food — a verdict that aligns with the dominant position in the empirical literature on toad visuo-motor behavior.

§6 The Swampman Case Revisited

The Swampman case receives its full force in the context of perceptual content rather than biological function, because it is the content implications of the case that are most philosophically vivid. Millikan’s etiological teleosemantics implies that Swampman’s perceptual states have no content: since they are not produced by mechanisms with any proper functions — functions grounded in selection history — they represent nothing, mean nothing, are not even genuine perceptual states in the philosophically relevant sense. This is a bullet that Millikan is willing to bite, and she has defended the conclusion with considerable ingenuity. But the cost is substantial: it requires maintaining that functional organization, in the absence of the right kind of history, is entirely irrelevant to the representational properties of states, and this seems to run against everything we ordinarily mean by saying that a system represents something.

Rubner handles the Swampman case without special pleading and without revising the core theory. Since the nearly-all+ theory grounds content in structural type-membership and current causal-statistical facts about contribution, and since Swampman’s perceptual mechanisms are of the same structural types as mine — they have the same morphological organization, the same signal-processing architecture, the same causal sensitivity to environmental features — the theory assigns identical functions and thereby identical contents to Swampman’s states and mine. A Swampman assembled with functioning Reichardt detectors will have visual states with content motion from A to B, because the mechanisms that produce those states are of the Reichardt-detector type, and nearly all mechanisms of that type that contribute to guiding action in Cvis_data do so by producing states in response to motion from A to B. The ahistorical character of the theory makes this the natural result rather than an ad hoc accommodation.

§7 Reliable Systematic Misperception: The Problem and Rubner’s Solution

The Phenomenon and Its Philosophical Significance

The most difficult and conceptually innovative part of the dissertation is Chapter Four, which confronts the phenomenon of reliable systematic misperception. This is the phenomenon, well-documented in psychophysics, where the visual system, operating normally and without any malfunction, produces systematically incorrect perceptual states as a predictable consequence of its normal operation. This is not ordinary misperception — the misperception that occurs when lighting conditions are unusual, when stimuli are presented at the limits of the perceptual system’s resolution, or when some element of the visual pathway has been damaged. Reliable systematic misperception occurs under standard conditions, in neurologically intact observers, and is highly consistent across individuals and across trials. It represents not the failure of the perceptual system but in some sense its normal mode of operation in the context of certain stimulus configurations.

Rubner develops his discussion around two empirically well-documented cases. The first, drawn from the work of Nundy and colleagues (2000), concerns the misperception of angles. Psychophysical experiments have established that human observers systematically overestimate the size of acute angles and underestimate the size of obtuse angles. The bias follows a smooth curve as a function of the stimulus angle: a 30-degree angle is perceived as approximately 33 degrees, a 150-degree angle as approximately 147 degrees. This bias is not caused by any pathological condition; it is a universal feature of human angle perception, it underlies a family of classic visual illusions including the Zöllner and tilt illusions, and it is robust across variations in viewing condition. The second case, drawn from Hibbard and colleagues (2012), concerns the misperception of surface aspect ratios. Subjects asked to judge the shape of surfaces viewed at various slants consistently misperceive the aspect ratios of those surfaces in a systematic way that can be described by the equation A′ = 0.87 · cos(0.66 · S), where A′ is the perceived aspect ratio and S is the surface slant. Crucially, this misperception persists even when subjects are provided with full binocular information about the slant of the surface — information that would in principle allow a geometrically accurate inference about the true aspect ratio — showing that the error is not simply due to insufficient information but to something in the visual system’s processing of that information.
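The two quantitative claims can be reproduced directly (the aspect-ratio equation is as quoted in the text, with degrees assumed as the unit of slant; the smooth angle-bias curve below is my own illustrative construction fitted to the two data points quoted, not Nundy et al.’s published function):

```python
import math

def perceived_aspect_ratio(slant_deg):
    """Hibbard et al.'s fitted relation as quoted in the text:
    perceived aspect ratio = 0.87 * cos(0.66 * S), slant S in degrees
    (the unit convention is an assumption here)."""
    return 0.87 * math.cos(math.radians(0.66 * slant_deg))

def perceived_angle(theta_deg):
    """Purely illustrative smooth bias curve consistent with the two
    data points quoted: acute angles overestimated, obtuse angles
    underestimated, zero bias at 90 degrees."""
    return theta_deg + 3.46 * math.sin(math.radians(2 * theta_deg))

assert round(perceived_angle(30)) == 33    # acute: overestimated
assert round(perceived_angle(150)) == 147  # obtuse: underestimated
assert round(perceived_angle(90)) == 90    # right angle: unbiased
```

Even at zero slant the fitted aspect-ratio curve returns 0.87 rather than 1, which is one way of seeing that the bias belongs to the processing itself rather than to degraded viewing conditions.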

The philosophical significance of these cases is that they create a direct conflict with the central explanatory commitment of teleosemantic theories of perceptual content. All such theories, including Rubner’s own nearly-all+ account, rely at some level on a lawlike or nomic correlation between the content-property of a state and the stimulus conditions that normally produce that state. The basic structure of any teleosemantic account requires that if a state has the content F, then F-instances must be among the normal causes of that state — must be positively nomically correlated with tokenings of the state — because it is only in virtue of such a correlation that the mechanism can have a function to covary with F. Reliable systematic misperception severs precisely this correlation: the perceptual state with the content 33 degrees does not have 33-degree angles among its normal causes. Its normal causes are 30-degree angles, since it is 30-degree stimuli that normally produce states with the content 33 degrees; the state representing 33 degrees is thus normally caused not by 33-degree angles but by 30-degree angles. The causal chain from stimulus to content has been displaced: the state reliably and systematically represents a magnitude other than the one that normally produces it.

Why Standard Teleosemantic Theories Cannot Handle These Cases

Rubner demonstrates in careful detail that this problem affects all the major teleosemantic and informational theories of perceptual content, not just the ahistorical ones. Fodor’s (1990) asymmetric dependency theory requires, as its first condition, that there be a nomic relation between the property F and the property of being a cause of states with content F: in Fodor’s terms, it must be a law that F’s cause F-content states. But the psychophysical data show that 33-degree angles do not cause 33-degree-content states under normal conditions; 30-degree angles do. There is no law relating 33-degree angles to 33-degree-content states. Neander’s (2017) informational teleosemantics requires that states of type R have the function to carry information about the property F they represent, where carrying information is understood in terms of nomic correlation. But R33° is not nomically correlated with 33-degree angles; it is nomically correlated with 30-degree angles. Schellenberg’s (2018) capacitism holds that the content of a perceptual state is determined by the perceptual capacity that is employed in producing it, and that a capacity has a function to single out and discriminate particulars of a certain kind. But the capacity that produces R33° tokens does not appear to successfully single out 33-degree angles under any normal conditions; it singles out 30-degree angles. The challenge is structurally identical for all three theories, and it extends to Rubner’s own nearly-all+ theory for the same reason: the theory grounds content in function, function requires that the mechanism have the capacity to covary with the relevant property in normal circumstances, and reliable systematic misperception shows that the mechanism does not, in fact, covary with the property that is supposed to be its content.

Rubner is admirably candid about the self-referential character of this problem — that it undermines his own theory just as much as it undermines his competitors’. This candor is both a philosophical virtue and a dialectical strength: by acknowledging that the problem is general rather than parochially targeting competing views, he positions his proposed solution as a contribution to the field rather than a move in a theoretical debate. The solution he develops is an amendment to the basic teleosemantic approach that can, in principle, be adopted by any theory of the relevant type — any theory that attempts to provide sufficient conditions for representation in non-semantic, non-intentional, non-phenomenal terms.

The Three-Place Representation Relation

Rubner’s proposed solution reconceives the representation relation itself. Standard teleosemantic theories treat representation as a two-place relation between a state and a property: state S represents property F. Rubner proposes that representation is better understood as a three-place relation: state S represents property F relative to property F*, where F* is the property that the state actually and reliably co-varies with and that approximates F. The three components of the relation play distinct theoretical roles. The content F is grounded in the biological function of the mechanism, just as in standard teleosemantic accounts: F is the property with respect to which the mechanism has the nearly-all function to covary. The approximating property F* is grounded in the actual nomic structure of the environment and the causal-probabilistic behavior of the mechanism: F* is what the mechanism in fact reliably tracks, as established by the psychophysical data. The approximation relation between F and F* captures the sense in which reliable misperception is a genuine form of misperception rather than accurate perception of a different property: the state is inaccurate because F* ≠ F, but F* is close enough to F — in some to-be-specified metric — that the state can be said to represent F rather than representing F* or representing nothing.

The philosophical payoff of this reconception is precisely the dissolution of the dilemma that reliable misperception creates for standard theories. On the standard two-place view, the theorist is forced to choose: either the content of the 33-degree-detection state is 33 degrees (in which case the content is not nomically correlated with the stimulus, undermining the function-theoretic grounding) or the content is 30 degrees (in which case the perceptual system’s angle perception is not systematically biased, which conflicts with the psychophysical evidence). Neither horn is satisfactory. On the three-place view, there is no dilemma. The content is 33 degrees — this is grounded in the mechanism’s function to covary with 33-degree angles, a function that is attributed on the basis of the mechanism’s structural organization and its role in the visual system’s goal of guiding action. The approximating property is 30 degrees — this is what the mechanism actually, reliably co-varies with under normal stimulus conditions, as established by Nundy et al.’s psychophysical data. The state is inaccurate because 30 degrees ≠ 33 degrees. The misperception is real and systematic. Both the function-theoretic grounding of content and the psychophysical evidence about reliable covariation are accommodated simultaneously, rather than being forced into conflict.

The Bayesian Explanation of Why Misperception Occurs

Beyond the formal proposal, Rubner provides an explanation of why reliable systematic misperceptions occur and why their occurrence is compatible with the claim that the perceptual system is functioning normally rather than malfunctioning. The explanation draws on the Bayesian framework for perceptual inference that has become central to computational neuroscience. On this framework, the visual system is understood as performing probabilistic inference over possible states of the distal environment given the pattern of stimulation at the retina. The system has internalized prior probability distributions over the statistical structure of natural environments — distributions that encode regularities like the fact that acute angles in natural scenes tend to be projections of right-angle corners seen at oblique viewing angles, or the fact that vertical line segments are, in typical natural environments, more likely to be extended in depth than horizontal line segments of the same length. These priors are combined with likelihood functions derived from the current sensory input to generate posterior estimates of scene properties, and these posterior estimates are the perceptual outputs.

The systematic biases documented by Nundy et al. and Hibbard et al. are, on this account, not errors in the Bayesian inference process but rather the correct outputs of a Bayesian inference process using priors that are ecologically valid for natural environments but are being applied to artificial stimuli that do not share those statistical regularities. An isolated acute angle drawn on a plain white sheet of paper does not share the statistical structure of a natural scene in which acute-angle projections are typically produced by right-angle corners. The visual system’s priors about how angles in natural scenes relate to the underlying three-dimensional structure of those scenes lead it to a posterior estimate that is systematically displaced from the true angle value when applied to the decontextualized experimental stimulus. The system is doing exactly what a Bayesian reasoner should do given its priors; the problem is that the stimulus is not drawn from the distribution over which those priors were calibrated. This is what Rubner calls an ecological mismatch: the normal operation of the visual system, applied to a stimulus that falls outside the ecological conditions in which that system evolved and operates, produces a reliable bias.
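The ecological-mismatch explanation lends itself to a simple numerical sketch. The Python fragment below is our own illustration, not the dissertation’s: it uses a conjugate Gaussian prior and likelihood with invented parameter values to show how a prior calibrated for natural scenes displaces the posterior estimate of an artificial stimulus.

```python
# Illustrative sketch only: all magnitudes below (prior mean, prior and
# noise widths) are invented for this analysis, not taken from the
# dissertation or from Nundy et al.'s data.

def posterior_mean(true_angle, prior_mu, prior_sigma, noise_sigma):
    """Conjugate Gaussian update: the posterior mean is a precision-weighted
    average of the prior mean and the sensory measurement."""
    w_prior = 1.0 / prior_sigma ** 2   # precision of the natural-scene prior
    w_like = 1.0 / noise_sigma ** 2    # precision of the sensory likelihood
    return (w_prior * prior_mu + w_like * true_angle) / (w_prior + w_like)

# A prior encoding "acute angles in natural scenes tend to project from
# larger corners" pulls the estimate above the true stimulus value.
estimate = posterior_mean(true_angle=30.0, prior_mu=45.0, prior_sigma=15.0, noise_sigma=5.0)
print(round(estimate, 1))  # prints 31.5: displaced above the true 30.0
```

The structure, not the particular numbers, carries the point: whenever the ecologically calibrated prior mean exceeds the true value of a decontextualized stimulus, the precision-weighted posterior is pulled above that value, and the displacement is the normal output of the inference rather than a malfunction.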

The philosophical significance of this Bayesian explanation for Rubner’s theory is that it justifies the claim that reliable systematic misperception is not dysfunction in the sense defined by the nearly-all theory. The perceptual mechanism is not unable to fulfill its function; it is fulfilling its function perfectly well in the broad range of natural ecological conditions that define the circumstance 𝔽(M) relative to which the function is attributed. The systematic bias in psychophysical experiments arises in a restricted subset of conditions where the stimuli are specifically designed to probe the limits of the natural prior, and in those conditions the mechanism continues to operate normally as a Bayesian inference engine — it produces the posterior estimate that a rational Bayesian agent would produce given those priors and that input. The mechanism is, in this precise sense, doing what it is supposed to do. The misrepresentation is not a failure of the mechanism but a consequence of applying it outside its natural domain of ecological validity.

§8 What the Dissertation Establishes

Having laid out the arguments in detail, we are in a position to offer a considered assessment of what the dissertation actually establishes, as distinguished from what it gestures at, argues for inconclusively, or leaves as open problems for future research. We identify five results that are established with sufficient rigor and originality to constitute genuine contributions to their respective debates.

The first established result is the demonstration that the Garson-Piccinini non-negligibility threshold fails the function-accident distinction. The heart-diagnosis counterexample is not a vague intuitive objection but a precise derivation of an unacceptable prediction from the specific formal commitments of the Garson-Piccinini account. It establishes that any ahistorical statistical theory of biological function must invoke typicality, or something equivalent to it, rather than mere non-negligibility, if it is to satisfy D2. This is a contribution to the philosophy of biology that is independent of the broader teleosemantic project.

The second established result is the interventionist-causal analysis of the contribution relation. This is the most formally precise treatment of contribution in the philosophy of biology literature. Previous accounts — including Cummins’s functional analysis, Craver’s mechanistic account, and Garson and Piccinini’s statistical account — invoke contribution without analyzing it. Rubner provides a reductive analysis in terms of rate-relativized probabilistic counterfactuals combined with interventionist causal connections, and he demonstrates the discriminatory force of this analysis through the pumping-thumping case. Whether or not one accepts the nearly-all theory as a whole, this analysis of contribution is a substantive philosophical achievement that stands on its own.

The third established result is the deployment of the apparent motion phenomenon as a non-negotiable adequacy constraint on theories of perceptual content. Rubner establishes that any theory of perceptual content that assigns a content to the output of Reichardt detectors must assign the content motion from A to B rather than temporal succession, on pain of being unable to account for apparent motion as a genuine illusion. This methodological move — using the phenomenological and behavioral normativity of perceptual error as a constraint on theoretical content attributions — is philosophically important and insufficiently appreciated in the literature.

The fourth established result is the precise characterization of the reliable misperception problem and the demonstration that the problem is structurally general — that it undermines not just etiological teleosemantics but informational teleosemantics, asymmetric dependency theories, and capacitism, as well as the nearly-all+ theory itself. By establishing the generality of the problem, Rubner motivates the development of a structural solution rather than a piecemeal fix, and positions the three-place representation relation as addressing a genuine gap in the architecture of teleosemantic theories.

The fifth established result is the three-place representation relation itself. Whether or not one accepts all the details of Rubner’s implementation, the conceptual proposal is original and philosophically substantive. It provides a framework within which reliable systematic misperception can be accommodated without either revising content attributions or abandoning the link between content and function. This is a genuine advance, and the basic architecture of the proposal — distinguishing the function-grounded content F from the nomic-correlation-grounded approximating property F* — will likely prove fruitful for future work even if the specific formulation requires development.

§9 Internal Tensions and Original Objections

The Approximation Problem

The most pressing unresolved problem in the dissertation is the specification of the approximation relation in the three-place representation theory. The proposal that state S represents F relative to F* requires F* to approximate F, but Rubner provides no metric for approximation. For the specific psychophysical cases he discusses — the 30-degree-versus-33-degree case, the 0.87-versus-1.0 aspect ratio case — the approximation seems intuitive enough: the magnitudes are close together, the deviations are small, and it seems natural to say that one approximates the other. But the theoretical proposal requires a principled account of approximation that goes beyond intuition.

The problem is philosophically substantial, not merely technical. Consider the following challenge. Suppose a perceptual state representing F is in fact reliably caused by a stimulus that differs from F by a substantial amount — say, by 20 degrees rather than 3 degrees. Is a 20-degree displacement within the approximation radius? What about 30 degrees? At some point, the displacement becomes large enough that it seems more natural to say the state has a different content — that it represents the actual stimulus magnitude rather than the function-grounded magnitude F — than to say it represents F inaccurately. But without a principled account of where that threshold lies, the three-place theory is indeterminate in a troubling way: for any state and any property, one can always find a property that is ‘approximately’ equal to it in some informal sense, which means the theory threatens to assign arbitrary contents to arbitrary states simply by invoking the approximation relation.

The most natural way to address this problem, and the one that seems most consistent with Rubner’s broader theoretical commitments, is to integrate the approximation metric with the Bayesian framework that he uses to explain why reliable misperceptions occur. On the Bayesian model, the visual system maintains a posterior probability distribution over possible stimulus magnitudes given the current sensory input. The width of this posterior distribution — its variance, or equivalently its standard deviation — provides a natural measure of how much displacement from the true stimulus value is consistent with the system’s own uncertainty. One could then define F* as approximating F within the approximation radius if and only if the absolute difference between the two is no greater than one or two standard deviations of the posterior distribution, where the posterior is the one the visual system computes over the relevant stimulus dimension given typical natural sensory input. This would tie the approximation metric to the system’s own computational structure rather than to any externally imposed criterion, and it would make the approximation radius a principled function of the stimulus’s ecological salience and the system’s representational precision.
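As a minimal sketch of how this proposal could be operationalized, the fragment below defines the approximation radius as k posterior standard deviations. The value of k and the posterior width are assumptions of ours, not parameters given in the dissertation.

```python
# Minimal sketch of the proposed Bayesian approximation metric.
# The posterior standard deviation and the choice k = 2 are hypothetical
# placeholders, not values defended in the dissertation.

def within_approximation_radius(f, f_star, posterior_sd, k=2.0):
    """F* approximates F iff |F - F*| is at most k posterior standard
    deviations of the system's posterior over the stimulus dimension."""
    return abs(f - f_star) <= k * posterior_sd

# With a posterior SD of 2 degrees, the 30-versus-33-degree case falls
# inside a two-SD radius, while a 20-degree displacement falls outside it.
print(within_approximation_radius(33.0, 30.0, posterior_sd=2.0))  # prints True
print(within_approximation_radius(33.0, 13.0, posterior_sd=2.0))  # prints False
```

On this construal the 3-degree displacement in the angle case counts as approximation while a 20-degree displacement does not, which is the verdict the threshold worry raised above demands; whether one or two standard deviations is the right choice of k is exactly the sort of question the full development would have to settle.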

The Representational Form Problem

A distinct and deeper difficulty with the three-place representation relation concerns its logical form and its implications for the interface between perceptual representation and cognitive representation. Standard teleosemantic theories, which treat representation as a two-place relation, have a clear logical form: R(s, F). This form integrates naturally with the standard semantic apparatus for propositional attitudes: the content of a perceptual state is a property, and when that perceptual state grounds a belief, the belief is a two-place relation between the believer and a proposition whose truth condition is given by the property F. The two-place theory of perceptual content plugs into the two-place theory of belief content in a philosophically well-understood way.

The three-place representation relation has the form R(s, F, F*). This is no longer a simple property-attribution, and its implications for the perceptual-cognitive interface are unclear and potentially troublesome. Consider what happens when a perceiver forms a belief on the basis of a perceptual state that represents 33 degrees relative to 30 degrees. What is the content of that belief? If the belief inherits the full three-place content of the perceptual state, then the belief is about the pair (33°, 30°) rather than simply about 33°, and a world in which the angle is 33° makes the belief true only if the perceptual state is also triggered by 30° stimuli — which is a bizarre condition for a simple angular measurement belief. If, on the other hand, the belief strips out the approximating property and takes on only the function-grounded content F, then we need an account of how this projection happens: how the perceptual system delivers a three-place content to the doxastic system, and how the doxastic system reduces this to a two-place content. Rubner does not address this interface, and its complexity is not trivial. A full account of reliable misperception and its implications for knowledge and justified belief will require sustained engagement with the perceptual-cognitive interface question that goes beyond what the dissertation provides.

The Parasitic Action-Guidance Problem

A third original objection targets the informativity criterion itself, and specifically the claim that catchable is excluded as a content candidate because its action-guiding role is ‘parasitic’ on its co-instantiation with the motion property. The word ‘parasitic’ is doing significant philosophical work here, and Rubner does not fully analyze it. The idea is that catchable’s capacity to guide action — its ability to indicate to the organism what actions are appropriate, in particular the action of intercepting the object — depends on the fact that catchable objects are moving objects, so that any action that is appropriate in response to a catchable object is appropriate in virtue of the object’s motion rather than in virtue of its catchability per se. But this claim seems to presuppose that motion is the ‘real’ content of the relevant states, because it is only if we already know that the system is representing motion that we can say catchable’s guidance role is derivative on, rather than foundational for, the system’s representational activity. If we did not already know the content, we could equally well describe the situation by saying that motion’s action-guiding role is ‘parasitic’ on its co-instantiation with catchable: any action that is appropriate in response to a moving object is appropriate because moving objects are the kind of thing that can hit you or that you can intercept, which is their catchability. The parasitism claim seems to run the risk of assuming what it is supposed to establish.

A non-circular account of parasitic action-guidance would need to be grounded in something other than the presumed content. The most promising approach, we suggest, is a causal-constitutional account: G’s action-guiding role is parasitic on its co-instantiation with F just in case the mechanism’s causal sensitivity to G is entirely mediated by its causal sensitivity to properties in G’s constitutive basis — where the constitutive basis of an action-oriented property like catchable is the set of non-action-oriented physical properties whose co-instantiation is nomologically necessary and sufficient for catchability. The Reichardt detector’s sensitivity to catchable objects is entirely mediated by its sensitivity to motion, which is a member of catchable’s constitutive basis. This causal-mediation account does not presuppose any prior knowledge of which property is the content; it is grounded entirely in the causal architecture of the mechanism and the metaphysics of property constitution. Whether this is exactly what Rubner has in mind is not entirely clear from the text, but it provides the non-circular grounding that the informativity criterion requires.

The Retraction Problem

A final original objection concerns what we call the retraction problem for novel function cases. On the nearly-all theory, the novel bacterial enzyme immediately acquires the function of sugar metabolism because nearly all enzymes of that structural type that contribute to the bacterium’s reproductive success do so by metabolizing sugar, a claim that is trivially true since there is only one such enzyme. But suppose the gene encoding this enzyme subsequently spreads through the population, and in the course of spreading, a significant fraction of the copies acquire secondary mutations that prevent them from metabolizing sugar. Now it is not the case that nearly all enzymes of that structural type that are contributing to reproductive success do so by metabolizing sugar; many of them contribute through other pathways enabled by their other biochemical activities. The typicality claim is no longer true, and the function attribution must be retracted.

Rubner acknowledges this possibility in a brief passage and argues that it is ‘counterbalanced’ by the advantage of being able to make function attributions early, at the moment when a novel function arises. But this response is not fully satisfying. Biologists do not regard established function attributions as hostage to future population events of this kind. The function of the thymus to produce lymphocytes was established in the 1960s and has not been revised in response to any subsequent population genetics. Function attributions are treated in biological practice as relatively stable, not as provisional statistical claims that can be overturned by changes in the frequency distribution of structural variants in a population. This discrepancy between the theory’s implications and biological practice is precisely the kind of evidence that the practice-alignment argument, which Rubner himself deploys against the etiological theory, would count against the nearly-all theory. One possible response is to add a temporal stability condition to the theory: perhaps the typicality claim needs to hold not just at the current instant but across some historical window of the item’s existence in the relevant population. This would partially re-introduce historical considerations, but in a more modest and principled way than full-blown etiological theories require. Working out the details of this extension is a task for future research.

§10 The Dissertation in the Field: A Discursive Comparison

The dissertation’s most important theoretical competitors can be grouped into two broad families: historical theories, including Millikan’s (1984) proper function account and Neander’s (2017) selectionist teleosemantics; and ahistorical theories, including Fodor’s (1990) asymmetric dependency theory, Schellenberg’s (2018) capacitism, and the Garson-Piccinini biostatistical account. The nearly-all+ theory has a distinctive profile among these options, and understanding that profile requires engaging with each family in some depth.

Against the historical theories, the nearly-all+ account has the obvious advantage of ahistoricity: it handles Swampman, novel functions, and the practice of function attribution in biology without difficulty, whereas Millikan and Neander must either bite counterintuitive bullets or develop complex auxiliary theories to address these cases. Neander’s (2017) most recent work on teleosemantics is the closest ancestor of Rubner’s Part II, and it is worth noting the depth of the relationship. Neander’s account uses the indeterminacy problem as a central organizing theme, and her treatment of the toad-fly case introduces much of the conceptual vocabulary that Rubner inherits and refines. The key difference is that Neander’s function theory remains etiological: she holds that only selection history can ground the normative dimension of biological function that teleosemantics requires. Rubner’s argument against this commitment, as we have examined, is that the normative dimension is equally well grounded in typicality, and that the etiological theory therefore imposes unnecessary historical requirements on a project that does not need them.

The comparison with Schellenberg’s (2018) capacitism is philosophically the most interesting and the least adequately developed in the dissertation. This is somewhat ironic given the advisor-advisee relationship between Rubner and Schellenberg, which suggests a deep familiarity with the capacitist view. Schellenberg’s account holds that perceptual content is constituted by the exercise of perceptual capacities — abilities to discriminate and single out particulars of a certain kind — and that the content of a perceptual state is the kind of particular that the relevant capacity functions to discriminate. Like Rubner’s account, Schellenberg’s is ahistorical, grounded in current functional organization, and deeply connected to actual perceptual psychology. The two accounts share many structural features and are in some respects more similar to each other than either is to its nearest historical competitors. The critical question is whether the function-theoretic notion that grounds Schellenberg’s account — the notion of a capacity that functions to discriminate a kind of particular — is itself adequately analyzed by the nearly-all theory, or whether it requires an independent theoretical foundation. Rubner discusses this question in Chapter Two but does not reach a definitive conclusion, and the relationship between the two accounts remains underexplored.

Fodor’s (1990) asymmetric dependency theory occupies a peculiar position in this landscape. It is ahistorical and provides a clear account of misrepresentation, but it handles content indeterminacy through the asymmetric dependency relation — the claim that if cats cause CAT-tokens and if there are other non-cat causes of CAT-tokens, then the non-cat causal relations depend asymmetrically on the cat-causal relation. This mechanism for determining content is quite different from Rubner’s informativity criterion, and the two approaches have different strengths and weaknesses. Fodor’s account has been criticized on the grounds that the asymmetric dependency relation is difficult to cash out in a way that generates intuitively correct verdicts across a wide range of cases, particularly for biological content that is evolutionarily primitive. Rubner’s informativity criterion faces the different challenge of specifying informational nesting in a way that is both non-circular and general enough to handle the full range of perceptual content. A sustained comparison of the two approaches — examining which kinds of cases each handles better and worse — would be philosophically productive but lies beyond the scope of the dissertation.

One important class of ahistorical function theories that goes largely unexamined in the dissertation is the organizational tradition, associated with Mossio, Saborido, and Moreno (2009) and the broader literature on biological autonomy and self-organization. On organizational accounts, biological functions are grounded not in typicality or selection history but in the self-maintaining organizational closure of living systems: an item has a function if and only if its contribution to the system’s activity is part of the closed causal loop that enables the system to maintain itself in existence. This is an ahistorical account, it aligns naturally with biological practice, and it grounds teleological normativity in the structural organization of the system itself rather than in either statistical regularities or historical selection events. Rubner acknowledges the existence of organizational theories in his survey chapter but does not engage with them in the course of developing or defending the nearly-all theory. Since both theories are ahistorical and both claim to capture what is philosophically essential about biological function, a sustained comparison would seem to be exactly the kind of philosophical work that the nearly-all theory’s development calls for.

§11 Open Research Problems

Philosophical dissertations of genuine quality are often distinguished not by the completeness of what they settle but by the clarity with which they open new problems. Rubner’s dissertation opens several research problems whose investigation will advance both the philosophy of biology and the philosophy of mind. We discuss the six most pressing.

The approximation metric is the most urgent outstanding problem, for the reasons discussed in the previous section. The Bayesian integration sketched above provides a promising direction, but the details need to be worked out carefully. In particular, any adequate approximation metric needs to satisfy at least three desiderata: it must be principled, in the sense that the radius is not chosen ad hoc to accommodate specific examples; it must be non-circular, in the sense that the metric does not presuppose knowledge of the content in order to determine whether F* falls within the approximation radius of F; and it must scale naturally with the psychological and ecological context, so that the same physical displacement may be within the approximation radius in some stimulus domains and outside it in others. The Bayesian posterior-distribution approach satisfies all three desiderata in principle, and its development would constitute a significant advance in the theory of reliable misperception.

The formal development of the informativity criterion is a second pressing problem. The current formulation in terms of informational nesting provides an ordering relation among properties but does not yield a computable criterion in the general case, particularly for the probabilistic nesting clause which requires knowledge of natural-law conditional probabilities that may be difficult to determine empirically. A more tractable formulation might draw on information-theoretic concepts: the action-guiding information of a property F could be defined as the reduction in entropy over the organism’s action space given that F is represented, where the action space is the set of possible responses weighted by their biological relevance. Properties that reduce action-space entropy more dramatically are more informative in the relevant sense, and the informativity ordering could be grounded in this entropy-reduction measure. This would connect the informativity criterion to the broader literature on information-theoretic approaches to perception and action, providing a firmer theoretical foundation while also making the criterion more directly testable.

The extension of the theory to non-visual modalities is a third open problem that the dissertation does not address. The nearly-all+ theory is developed and tested exclusively against visual mechanisms, and many of its specific theoretical moves — the identification of Cvis_data as the broadest circumstance for visual mechanisms, the use of Reichardt detectors as the primary case study — are vision-specific. It remains to be shown that the theory generalizes naturally to audition, proprioception, interoception, and olfaction, each of which has a distinctive processing architecture, a different relationship between the stimulus and the representation, and different ecological statistics governing the distribution of relevant properties in the organism’s sensory environment. Auditory motion perception, particularly the precedence effect and binaural localization, might serve as a natural next case study, since auditory motion detection shares structural features with visual motion detection that would allow principled comparisons.

The perceptual-cognitive interface question, raised in the discussion of the representational form problem, is a fourth open problem. Any adequate theory of the three-place representation relation needs to specify how perceptual contents involving the property pair (F, F*) are integrated into the two-place structure of propositional attitudes. Does systematic perceptual bias propagate into systematically biased beliefs? Under what conditions does the approximating property F* enter into the believer’s doxastic state, and under what conditions is it suppressed in favor of the function-grounded content F? How does the three-place character of perceptual content interact with the epistemic role of perceptual experience in grounding perceptual knowledge and justified belief? These questions connect the dissertation’s proposal to debates in epistemology and philosophy of action that Rubner does not address, and engaging with them would considerably deepen the theoretical significance of the three-place relation.

The retraction problem, as discussed above, requires the development of either a temporal stability condition or some alternative mechanism for insulating established function attributions from revision in response to future population events. This is a relatively focused technical problem rather than a broad research direction, but its resolution is important for the near-term credibility of the nearly-all theory as an account of scientific practice.

Finally, the comparison with organizational theories of biological function — which are, as noted above, both ahistorical and systematic — represents a significant unaddressed theoretical challenge. If organizational accounts can ground teleological function in the self-maintaining closure of biological systems, and if this grounding is adequate for the teleosemantic project, then the nearly-all theory must do one of three things: show that it is superior to organizational accounts; show that it is compatible with them, perhaps as a refinement or specification of the organizational approach; or show that the two approaches capture different aspects of the biological function concept. None of these options is pursued in the dissertation, and the intellectual landscape of the problem is sufficiently complex that a thorough engagement would likely require a sustained research project rather than a single paper.

§12 Conclusion

Andrew Rubner’s dissertation stands as a philosophically serious, formally rigorous, and empirically grounded contribution to the philosophical problems of biological function and perceptual content. Its most important claim — that an ahistorical account of natural function can underwrite the teleosemantic project without the costs that have plagued etiological approaches — is established with a degree of precision and argumentative care that is uncommon in this literature. The causal-interventionist analysis of contribution, the typicality-versus-non-negligibility distinction, the Reichardt-detector case study, and the three-place representation relation are each substantial contributions in their own right, and the dissertation as a whole represents a coherent and ambitious theoretical program rather than a collection of independent essays.

The most significant weakness is the incomplete treatment of the approximation metric in the three-place theory. This is not a superficial gap; as we have argued, the absence of a principled approximation measure leaves the theory’s predictions about what counts as reliable misperception, as distinguished from accurate tracking of a different property, indeterminate in a philosophically problematic way. The Bayesian integration we have sketched provides a promising direction for resolving this, but the details have yet to be worked out, and until they are, the three-place theory remains more of a theoretical framework than a fully implemented account.

What may ultimately prove most significant about the dissertation, however, is not any particular theorem or case study but a methodological demonstration: that the etiological commitment that pervades teleosemantics from Millikan to Neander is not forced on us by the logic of the teleosemantic program. An ahistorical teleosemantics is philosophically defensible, biologically motivated, and empirically tractable. The obstacles that have seemed to make the historical approach unavoidable — the need to ground teleological normativity, the need to handle novel functions and Swampman, the need to explain scientific function attribution practice — can all be addressed by a theory grounded in current structural and causal-statistical facts. Whether the nearly-all+ theory in its current form is the best implementation of this ahistorical approach, or whether it will be refined or supplanted by successors, remains to be seen. But the space that Rubner has opened — the space of ahistorical teleosemantic theories that take biological function seriously without depending on the particular causal-historical story that etiological theories tell — is philosophically important and underexplored. The dissertation has made a compelling case that this space is worth inhabiting.

Understanding perception requires not only understanding the conditions under which a percept provides a veridical representation of the world, but also the various conditions under which it fails to do so.

— Andrew Rubner, Dissertation Introduction

References

Amundson, R. and Lauder, G.V. (1994). Function without purpose: The uses of causal role function in evolutionary biology. Biology and Philosophy, 9(4), 443–469.

Artiga, M. (2019). The biological origin of representational content. Synthese, 196(6), 2271–2295.

Boorse, C. (1977). Health as a theoretical concept. Philosophy of Science, 44(4), 542–573.

Craver, C.F. (2001). Role functions, mechanisms, and hierarchy. Philosophy of Science, 68(1), 53–74.

Craven, B.J. (1993). Orientation dependence of human line-length judgements matches statistical structure in real scenes. Proceedings of the Royal Society B, 253, 101–106.

Dretske, F. (1981). Knowledge and the Flow of Information. MIT Press.

Ewert, J.P. (1987). Neuroethology of releasing mechanisms: prey-catching in toads. Behavioral and Brain Sciences, 10(3), 337–405.

Fodor, J.A. (1990). A theory of content. In A Theory of Content and Other Essays. MIT Press, 51–136.

Garson, J. and Piccinini, G. (2014). Functions must be performed at appropriate rates in appropriate situations. British Journal for the Philosophy of Science, 65(1), 1–20.

Godfrey-Smith, P. (1994). A modern history theory of functions. Noûs, 28(3), 344–362.

Green, E.J. (2017). A layered view of shape perception. British Journal for the Philosophy of Science, 68(2), 355–387.

Hibbard, P.B. et al. (2012). Misperceptions of aspect ratio in smoothly curved surfaces. Journal of Vision, 12(10), 1–11.

Howe, C.Q. and Purves, D. (2005). Perceiving Geometry: Geometrical Illusions Explained by Natural Scene Statistics. Springer.

McLaughlin, B. (2016). Systematic perceptual errors as a challenge for theories of perceptual content. Unpublished manuscript.

Mendelovici, A. (2013). Reliable misrepresentation and tracking theories of mental representation. Philosophical Studies, 165(2), 421–443.

Millikan, R.G. (1984). Language, Thought, and Other Biological Categories. MIT Press.

Millikan, R.G. (1989). Biosemantics. Journal of Philosophy, 86(6), 281–297.

Mossio, M., Saborido, C., and Moreno, A. (2009). An organizational account of biological functions. British Journal for the Philosophy of Science, 60(4), 813–841.

Neander, K. (1991). Functions as selected effects: The conceptual analyst’s defense. Philosophy of Science, 58(2), 168–184.

Neander, K. (1995). Misrepresenting and malfunctioning. Philosophical Studies, 79(2), 109–141.

Neander, K. (2017). A Mark of the Mental: In Defense of Informational Teleosemantics. MIT Press.

Nundy, S. et al. (2000). Why are angles misperceived? Proceedings of the National Academy of Sciences, 97(10), 5592–5597.

Piccinini, G. (2020). Neurocognitive Mechanisms: Explaining Biological Cognition. Oxford University Press.

Price, N.S.C. and Born, R.T. (2009). Timescales of sensory- and decision-related activity in MT and MST. Journal of Neuroscience, 29(46), 14424–14433.

Rubner, A. (2024). Natural Function and Perceptual Content. Doctoral Dissertation, Rutgers University.

Schaffner, K.F. (1993). Discovery and Explanation in Biology and Medicine. University of Chicago Press.

Schellenberg, S. (2018). The Unity of Perception: Content, Consciousness, Evidence. Oxford University Press.

Schlosser, G. (1998). Self-re-production and functionality: A systems-theoretical approach to teleological explanation. Synthese, 116(3), 303–354.

Walsh, D.M. (1996). Fitness and function. British Journal for the Philosophy of Science, 47(4), 553–574.

Weber, M. (2005). Philosophy of Experimental Biology. Cambridge University Press.

Wilhelm, I. (2022). Typical: A theory of typicality and typicality explanation. British Journal for the Philosophy of Science, 73(2), 561–581.

Woodward, J. (2003). Making Things Happen: A Theory of Causal Explanation. Oxford University Press.

Wouters, A. (2005). The function debate in philosophy. Acta Biotheoretica, 53(2), 123–151.

Wright, L. (1973). Functions. Philosophical Review, 82(2), 139–168.