Agentive logic

Field of philosophical and mathematical logic studying agency and action

Agentive logic (also called the logic of action or logic of agency) is the field of philosophical logic and logic in computer science that studies formal representations of agents, their actions, and their abilities. An agentive logic in the narrower sense is a formal system whose primitive operators express that an agent does something, can do something, or sees to it that something is the case.[1][2]

Agentive logics generalise modal logic by adding modalities indexed to agents and to actions. Typical examples include:

  • STIT logics (from sees to it that) with operators of the form [a stit: φ], meaning that agent a sees to it that φ holds;[3]
  • dynamic logics of action with program-like modalities [α]φ and ⟨α⟩φ meaning, roughly, that after every (respectively, some) terminating execution of action α, φ holds;[4]
  • logics with explicit agentive operators such as "can do", "brings about", or "is able to ensure".[5]

Agentive logics are used in action theory in philosophy, in the semantics of natural language, in the theory of program verification, and in artificial intelligence, where they underpin formalisms for reasoning about actions, planning, and intelligent agents.[6]

Terminology and scope

The adjective agentive derives from the Latin agens ("one who acts") and originally referred to the grammatical agent of a verb. In logical contexts it designates operators or predicates whose primary argument position is an agent rather than a proposition alone, for example Does(a, φ) ("agent a does φ") or Can(a, φ) ("agent a can bring about φ").[5]

In contemporary literature, agentive logic is sometimes used narrowly for formal reconstructions of St. Anselm's modal account of facere ("to do").[5][7] More broadly, the term is used interchangeably with logic of action or logic of agency to cover a family of modal and dynamic logics designed to capture the structure of action and choice.[1][2]

Historical background

Medieval and early modern roots

Medieval logicians already explored analogies between modalities of action and alethic modalities such as possibility and necessity, for instance in discussions of obligation and power.[8]

An influential early agentive analysis is due to St. Anselm (11th century), who treated "doing p" as a kind of modal operator on propositions, anticipating later modal logics of agency. Modern reconstructions of Anselm's theory show that the resulting "agentive logic" can be modelled with neighbourhood semantics and satisfies a recognisable square of opposition.[9][5]

Modern logic of action

The modern study of the logic of action began in the mid-20th century, in parallel with developments in deontic logic and tense logic. Early systems were proposed by Georg Henrik von Wright, Stig Kanger, and others, often motivated by questions about norms and responsibility.[10][11]

From the 1960s onward, two largely independent but eventually converging traditions emerged:[2]

  • a branching-time tradition, culminating in STIT logics, emphasising agents' choices among possible futures; and
  • dynamic logics of programs and actions, developed within computer science to reason about program execution.

In the 1990s and 2000s, action logics were further developed in connection with knowledge representation, planning, and multi-agent systems in AI, and with dynamic and update semantics in linguistics.[6]

Core ideas

Despite their diversity, most agentive logics share some general themes:[2]

  • Agents are treated as explicit indices of modal operators, as in [a stit: φ] or Does(a, φ).
  • Actions are represented either implicitly, via changes between possible worlds along an accessibility relation, or explicitly, as terms denoting primitive and composite actions.
  • Choice and ability are captured by modalities describing what an agent can ensure, usually relative to assumptions about the environment and other agents.
  • Formal properties such as closure under composition, interaction between different agents, and connections to obligation (what an agent ought to do) and knowledge (what an agent knows how to do) are investigated.

STIT logics

STIT ("sees to it that") logics, originating in work by Nuel Belnap and collaborators, treat agency in a branching-time framework. A STIT model consists of a partially ordered set of moments with a tree-like structure, sets of histories (maximal branches through the tree), and for each agent at each moment, a partition of the histories through that moment representing the choices available to the agent.[3][2]

Intuitively, an agent's action at a moment determines which equivalence class (choice cell) of histories becomes actual; a formula [a stit: φ] is true at a history–moment pair if φ holds on all histories in the choice cell corresponding to the agent's current action; a toy model of this condition is sketched after the list below. Different STIT operators have been distinguished, notably:

  • the Chellas STIT operator, often written [a cstit: φ], which requires only that the agent's choice guarantees φ; and
  • the deliberative STIT operator, [a dstit: φ], which additionally requires that φ is not already historically necessary.[12]
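
As an illustration of these truth conditions, the following toy model is a minimal sketch, not drawn from the cited literature: a single moment with four histories, one agent with two choice cells, and one proposition. All names and values are invented for the example.

    # Toy single-moment STIT model: histories are labelled outcomes, and the
    # agent's available choices partition the histories into choice cells.
    histories = {"h1", "h2", "h3", "h4"}

    # Choice cells for one agent at this moment (a partition of the histories).
    choice_cells = [{"h1", "h2"}, {"h3", "h4"}]

    # Valuation: the histories at which the proposition p holds.
    p_holds_at = {"h1", "h2", "h3"}

    def cell_of(history):
        """Return the choice cell containing the given history."""
        return next(cell for cell in choice_cells if history in cell)

    def cstit(history, prop):
        """Chellas STIT: the agent's actual choice guarantees prop."""
        return cell_of(history) <= prop

    def dstit(history, prop):
        """Deliberative STIT: cstit holds and prop is not settled true
        at the moment (the negative condition)."""
        return cstit(history, prop) and not histories <= prop

    print(cstit("h1", p_holds_at))  # True: the cell {h1, h2} is contained in p
    print(dstit("h1", p_holds_at))  # True: p fails at h4, so p is not settled true
    print(dstit("h3", p_holds_at))  # False: the cell {h3, h4} does not guarantee p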

STIT frameworks have been extended with group agency operators, temporal modalities, epistemic operators, and deontic operators to study responsibility, collective action, and obligations under indeterminism.[3][13]

Dynamic logics of action

Dynamic logic was originally developed to reason about the behaviour of computer programs, treating program execution as a kind of action.[4] In propositional dynamic logic (PDL), action terms denote abstract programs or actions, and formulas of the form [α]φ and ⟨α⟩φ express that all, respectively some, terminating executions of α lead to states where φ holds.

From the standpoint of agentive logic, dynamic logic provides:

  • a language for building complex actions from primitives via sequencing, choice, and iteration (e.g. α;β, α∪β, α*), as illustrated in the sketch following this list;
  • a Kripke semantics in which actions correspond to labelled accessibility relations; and
  • proof systems (such as Hoare logic and weakest precondition calculi) for reasoning about the correctness of action sequences.[14]
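
The relational reading of these operators can be made concrete with a small, hand-rolled finite model. The following sketch is illustrative only: the states, the two atomic actions a and b, and the bounded approximation of iteration are assumptions made for the example, not part of any standard library.

    # A tiny Kripke model for PDL-style evaluation: atomic actions are binary
    # relations on states, complex actions are built by sequencing, choice, and
    # (bounded) iteration, and [alpha]phi / <alpha>phi are checked by inspecting
    # alpha-successors.
    states = {0, 1, 2}

    # Atomic actions as sets of (source, target) pairs.
    atomic = {
        "a": {(0, 1), (1, 2)},
        "b": {(0, 2)},
    }

    def seq(r1, r2):
        """Sequential composition alpha;beta as relational composition."""
        return {(x, z) for (x, y1) in r1 for (y2, z) in r2 if y1 == y2}

    def choice(r1, r2):
        """Nondeterministic choice as union of relations."""
        return r1 | r2

    def star(r, bound=5):
        """Iteration alpha*, approximated up to a fixed bound (enough for this toy model)."""
        result = {(x, x) for x in states}   # zero iterations
        current = result
        for _ in range(bound):
            current = seq(current, r)
            result |= current
        return result

    def box(rel, prop):
        """States where [alpha]phi holds: every rel-successor satisfies phi."""
        return {x for x in states if all(y in prop for (x2, y) in rel if x2 == x)}

    def diamond(rel, prop):
        """States where <alpha>phi holds: some rel-successor satisfies phi."""
        return {x for x in states if any(y in prop for (x2, y) in rel if x2 == x)}

    phi = {2}                                                # phi holds only at state 2
    print(box(seq(atomic["a"], atomic["a"]), phi))           # {0, 1, 2} (1 and 2 vacuously)
    print(diamond(choice(atomic["a"], atomic["b"]), phi))    # {0, 1}
    print(diamond(star(atomic["a"]), phi))                   # {0, 1, 2}: 2 reaches itself in zero steps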

Extensions such as concurrent dynamic logic add operators for parallel composition, allowing reasoning about interacting processes and concurrent actions.[1] John-Jules Ch. Meyer and others have argued that dynamic logic is a natural base for logics of agents, by adding modalities for knowledge, belief, and ability on top of the action modalities.[15]

Dynamic logics have also been applied to normative reasoning, yielding dynamic deontic logics where actions are related to obligations and permissions, and to dynamic epistemic logics in which information-changing actions such as announcements are modelled as programs.[16]

Situation calculus and other action formalisms

In artificial intelligence, reasoning about action and change is often based on first-order languages that explicitly represent situations, events, and fluents (time-varying properties). The best known is situation calculus, introduced by John McCarthy and developed extensively by Raymond Reiter.[17][18]

In such formalisms:

  • action terms name primitive actions;
  • a function symbol (often written do) maps an action a and a situation s to a successor situation do(a, s); and
  • axioms describe which fluents hold in which situations and how actions change them.

Reiter's successor state axioms give compact specifications of how each fluent changes under all actions, and precondition axioms specify when actions are possible.[18] Related formalisms include the event calculus and fluent calculus, which provide alternative ways of representing events and their effects.[19]
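
The flavour of a successor state axiom can be conveyed with a small executable sketch. The fluent holding, the actions pickup and drop, and the representation of situations as sequences of actions below are hypothetical choices made for illustration; they are not Reiter's notation.

    # A single Reiter-style successor state axiom, unrolled as a recursive check.
    # Situations are represented as tuples of the actions performed so far.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Action:
        name: str
        arg: str

    def do(action, situation):
        """The successor situation reached by performing action in situation."""
        return situation + (action,)

    def holding(x, situation):
        """holding(x, do(a, s)) iff a = pickup(x), or holding(x, s) and a != drop(x)."""
        if not situation:                   # the initial situation S0: holding nothing
            return False
        *rest, a = situation
        return a == Action("pickup", x) or (holding(x, tuple(rest)) and a != Action("drop", x))

    S0 = ()
    s1 = do(Action("pickup", "block1"), S0)
    s2 = do(Action("drop", "block1"), s1)
    print(holding("block1", s1))   # True: block1 was just picked up
    print(holding("block1", s2))   # False: it has since been dropped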

While these systems are often first-order rather than modal, they are closely related to agentive logics: their action terms and transition structures can be seen as providing models for dynamic or STIT-style modalities, and conversely dynamic logics can be used as abstract specification languages for such AI formalisms.[6]

Ability and agency

Many agentive logics introduce explicit operators for ability or "can-do" notions. For example, bimodal logics have been studied that combine an action modality (for what an agent actually does) with an ability modality capturing what the agent could do in the current circumstances.[1]

Reconstructed versions of Anselm's agentive logic interpret expressions of the form Does(a, p) as saying that agent a performs (or brings about) p, and investigate interaction principles such as:

  • distribution of agency over conjunctions, and
  • constraints linking agency to possibility and responsibility.[5]

In modern STIT and dynamic-epistemic frameworks, ability is often analysed in terms of the existence of a strategy or plan that ensures a desired outcome against various patterns of environmental behaviour, leading to connections with game logic, coalition logic, and alternating-time temporal logic (ATL).[20][21]
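
The strategic reading of ability can be illustrated with a one-shot, coalition-logic-style sketch: a coalition can ensure an outcome if some joint choice of its members guarantees it against every combination of moves by the remaining agents. The two-agent game matrix and all names below are invented for the example.

    # One-step effectivity check in the style of coalition logic / ATL.
    from itertools import product

    agents = ["row", "col"]
    moves = {"row": ["top", "bottom"], "col": ["left", "right"]}

    # Outcome of each joint move profile (row move, col move).
    outcome = {
        ("top", "left"): "win", ("top", "right"): "win",
        ("bottom", "left"): "lose", ("bottom", "right"): "win",
    }

    def full_profile(coalition, own, others, rest):
        """Assemble a complete move profile in the fixed agent order."""
        assignment = {**dict(zip(coalition, own)), **dict(zip(others, rest))}
        return tuple(assignment[ag] for ag in agents)

    def can_ensure(coalition, goal_states):
        """True if the coalition has a joint choice forcing an outcome in goal_states,
        whatever the agents outside the coalition do."""
        others = [ag for ag in agents if ag not in coalition]
        for own in product(*(moves[ag] for ag in coalition)):
            if all(outcome[full_profile(coalition, own, others, rest)] in goal_states
                   for rest in product(*(moves[ag] for ag in others))):
                return True
        return False

    print(can_ensure(["row"], {"win"}))   # True: playing "top" wins whatever "col" does
    print(can_ensure(["col"], {"lose"}))  # False: "col" alone cannot force a loss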

Applications

Philosophy

In philosophical action theory and ethics, agentive logics are used to analyse notions such as free will, intentional action, and moral responsibility. STIT logics in particular were developed partly to formalise debates about whether agents can be responsible in an indeterministic world, and how to model "could have done otherwise" claims.[3]

Agentive operators also interact with deontic operators: logics combining "ought" and "can bring about" have been proposed to capture principles such as "ought implies can" and to reason about what agents are obliged to make true.[22]

Linguistics and speech acts

In linguistics, logical tools for agency are used in the semantics of verbs of action, intention, and control, and in the analysis of speech acts, which are themselves actions performed by uttering sentences. Dynamic and agentive logics contribute to dynamic semantics and theories of discourse representation, where meaning is understood in terms of context-changing actions on an information state.[23][2]

Computer science and verification

In computer science, logics of action underpin formal methods for specifying and verifying programs and reactive systems. Hoare logics and dynamic logics are standard tools for proving that a program, seen as a complex action, has certain input–output or temporal properties.[4]

Temporal logics and their extensions (such as CTL and ATL) can be seen as endogenous agentive logics, where the "system" and its environment are treated as agents whose possible behaviours are quantified over by temporal operators. These logics are the basis of model checking techniques widely used in hardware and software verification.[24]

Artificial intelligence and multi-agent systems

Logic-based AI uses agentive logics to model reasoning about action, planning, and belief–desire–intention (BDI) agents. Dynamic logics enriched with modalities for belief, knowledge, goal and intention have been proposed as specification languages for rational agents and multi-agent systems.[25][15]

In multi-agent settings, action and ability modalities are combined with epistemic and deontic modalities to formalise teamwork, joint intentions, and norms governing agent societies.[26][27]

Relationship to deontic and epistemic logics

Agentive logic is closely connected to deontic logic, which studies obligation and permission, and to epistemic logic, which studies knowledge and belief. Many questions about what agents ought to do, or what they know how to do, require combining these perspectives.

For example:

  • dynamic deontic logics model obligations to perform certain actions and the effects of normative updates;[28] and
  • dynamic epistemic logics treat public announcements and observations as epistemic actions that transform agents' knowledge states.[29]

The combined study of how obligations, knowledge, and abilities interact—for instance, whether agents are obliged to know how to fulfill their obligations—is an active area of research at the intersection of these logics.[2]
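
The idea of an information-changing action can be illustrated with a minimal public-announcement-style update: announcing a true proposition removes the worlds where it fails, which can turn ignorance into knowledge. The three-world model and the single agent below are invented for the example.

    # Public-announcement-style update on a toy epistemic model.
    worlds = {"w1", "w2", "w3"}
    p_true_at = {"w1", "w2"}                      # the announced proposition p

    # Before the announcement the agent cannot distinguish any of the worlds.
    indistinguishable = {w: set(worlds) for w in worlds}

    def knows(world, prop, live_worlds, relation):
        """The agent knows prop at world iff prop holds at every indistinguishable
        world that is still live in the current model."""
        return all(v in prop for v in relation[world] & live_worlds)

    def announce(prop, live_worlds):
        """Public announcement of prop: keep only the worlds where prop is true."""
        return live_worlds & prop

    print(knows("w1", p_true_at, worlds, indistinguishable))     # False: p fails at w3
    updated = announce(p_true_at, worlds)
    print(knows("w1", p_true_at, updated, indistinguishable))    # True after the update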

John Horty's ought-to-do logic

A prominent attempt to connect agentive logics with deontic and epistemic logics is John F. Horty's "ought-to-do" programme. In his book Agency and Deontic Logic (2001) Horty develops a deontic logic whose primary operator is an agent-indexed "ought to do" (what an agent ought to see to at a time) rather than the more familiar "ought to be" operator of standard deontic logic.[30] The semantics is built on a branching-time STIT framework of agency in indeterministic time, together with decision-theoretic ideas: at each moment, an agent faces a set of available choices (choice cells), and these choices are ordered by a value or utility ranking over their possible continuations (histories). Roughly, an agent "ought to do" an action when that choice is among the best options according to this ordering.[31][32]
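
The following sketch gives a deliberately simplified, purely illustrative stand-in for this idea: choice cells are assigned sets of valued histories, and a choice counts as optimal when no alternative does better on a worst-case comparison. Horty's own account uses a dominance ordering over histories rather than this maximin shortcut, and the names and numbers here are invented.

    # Simplified "ought to do" check over choice cells with valued histories.
    choice_cells = {
        "help":   {"h1": 10, "h2": 8},   # histories in the cell and their values
        "ignore": {"h3": 6,  "h4": 1},
    }

    def worst(cell):
        """Worst-case value of a choice cell."""
        return min(choice_cells[cell].values())

    def optimal_choices():
        """Choices not strictly beaten by any alternative on the worst-case ordering."""
        best = max(worst(c) for c in choice_cells)
        return {c for c in choice_cells if worst(c) == best}

    # On this reading, the agent ought to see to whatever every optimal choice guarantees.
    print(optimal_choices())   # {'help'}: its worst outcome (8) beats ignoring (1)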

On Horty's picture, many traditional paradoxes of deontic logic – such as Good Samaritan–style cases and contrary-to-duty scenarios – arise from conflating propositions about how the world ought to be with claims about what agents ought to do. By defining a deontic operator directly over agentive choices (within STIT semantics) and by using utilitarian dominance or value-maximization over those choices, his "ought-to-do" logic offers alternative treatments of such paradoxes while remaining compatible with standard modal techniques.[30][33] This has made Horty's system a central reference point for later work in deontic STIT logics and logics of action more generally.[34]

Horty has also argued that a satisfactory account of "what an agent ought to do" must be sensitive to the agent's epistemic situation, not just to the objective structure of outcomes. In his later work on "epistemic oughts" within STIT semantics, he extends the basic ought-to-do framework with epistemic accessibility relations, and distinguishes objective obligations from knowledge-dependent, subjective obligations (what an agent ought to do given what they know).[35] This explicitly combines deontic and epistemic modalities: the same branching-time STIT models are enriched with epistemic indistinguishability relations, and the definition of the relevant "best" choices is restricted to the histories compatible with the agent's knowledge state. The resulting logics aim to capture the kind of knowledge-relative obligations highlighted in work on combined deontic–epistemic systems, such as obligations that depend on what an agent knows or ought to know in a situation.[36]

Horty's framework has inspired a broader research programme in epistemic and doxastic deontic STIT logics, in which various "ought-to-do" operators are studied alongside epistemic or belief operators to analyse notions such as knowingly doing, objective versus subjective oughts, and responsibility under uncertainty.[37][38] In this way, Horty's ought-to-do logic provides a bridge between formal theories of agency, traditional deontic logic, and contemporary epistemic logics.

See also

References

Further reading
