Argument (linguistics)

from Wikipedia

In linguistics, an argument is an expression that helps complete the meaning of a predicate,[1] the latter referring in this context to a main verb and its auxiliaries. In this regard, the complement is a closely related concept. Most predicates take one, two, or three arguments. A predicate and its arguments form a predicate-argument structure. The discussion of predicates and arguments is associated most with (content) verbs and noun phrases (NPs), although other syntactic categories can also be construed as predicates and as arguments. Arguments must be distinguished from adjuncts. While a predicate needs its arguments to complete its meaning, the adjuncts that appear with a predicate are optional; they are not necessary to complete the meaning of the predicate.[2] Most theories of syntax and semantics acknowledge arguments and adjuncts, although the terminology varies, and the distinction is generally believed to exist in all languages. Dependency grammars sometimes call arguments actants, following Lucien Tesnière (1959).

The area of grammar that explores the nature of predicates, their arguments, and adjuncts is called valency theory. Predicates have a valence; they determine the number and type of arguments that can or must appear in their environment. The valence of predicates is also investigated in terms of subcategorization.

Arguments and adjuncts

The basic analysis of the syntax and semantics of clauses relies heavily on the distinction between arguments and adjuncts. The clause predicate, which is often a content verb, demands certain arguments. That is, the arguments are necessary in order to complete the meaning of the verb. The adjuncts that appear, in contrast, are not necessary in this sense. The subject phrase and object phrase are the two most frequently occurring arguments of verbal predicates.[3] For instance:

Jill likes Jack.
Sam fried the vegetables.
The old man helped the young man.

Each of these sentences contains two arguments, the first noun (phrase) being the subject argument, and the second the object argument. Jill, for example, is the subject argument of the predicate likes, and Jack is its object argument. Verbal predicates that demand just a subject argument (e.g. sleep, work, relax) are intransitive, verbal predicates that demand an object argument as well (e.g. like, fry, help) are transitive, and verbal predicates that demand two object arguments are ditransitive (e.g. give, lend).
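The valency-based classification just described can be sketched as a simple lookup. The lexicon below is a toy, illustrative set of entries for the example verbs, not drawn from any real lexical database:

```python
# Illustrative valency lexicon: number of arguments each verbal
# predicate demands (toy entries for the examples in the text).
VALENCY = {
    "sleep": 1, "work": 1, "relax": 1,   # intransitive: subject only
    "like": 2, "fry": 2, "help": 2,      # transitive: subject + object
    "give": 3, "lend": 3,                # ditransitive: subject + two objects
}

def classify(verb: str) -> str:
    """Classify a verb by the number of arguments it demands."""
    n = VALENCY[verb]
    return {1: "intransitive", 2: "transitive", 3: "ditransitive"}[n]

print(classify("fry"))   # transitive
print(classify("give"))  # ditransitive
```

A real valency lexicon would also record the type of each argument (NP, PP, clause), not just the count; the point here is only the three-way intransitive/transitive/ditransitive split.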

When additional information is added to the three example sentences above, the added material consists of adjuncts, e.g.

Jill really likes Jack.
Jill likes Jack most of the time.
Jill likes Jack when the sun shines.
Jill likes Jack because he's friendly.

The added phrases are adjuncts; they provide additional information that is not necessary to complete the meaning of the predicate likes. One key difference between arguments and adjuncts is that the appearance of a given argument is often obligatory, whereas adjuncts appear optionally. While typical verb arguments are subject or object nouns or noun phrases as in the examples above, they can also be prepositional phrases (PPs) (or even other categories). The PPs in the following sentences are arguments:

Sam put the pen on the chair.
Larry does not put up with that.
Bill is getting on my case.

We know that these PPs are (or contain) arguments because when we attempt to omit them, the result is unacceptable:

*Sam put the pen.
*Larry does not put up.
*Bill is getting.

Subject and object arguments are known as core arguments; core arguments can be suppressed, added, or exchanged in different ways, using voice operations like passivization, antipassivization, applicativization, incorporation, etc. Prepositional arguments, which are also called oblique arguments, however, do not tend to undergo the same processes.
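One such voice operation, passivization, can be illustrated with a minimal sketch. The dictionary representation of a predicate-argument structure here is a hypothetical simplification, showing only how the core arguments are exchanged:

```python
# A predicate-argument structure as a small dict; passivization is
# sketched as a voice operation that promotes the object to subject
# and demotes the subject to an optional oblique (by-phrase).
def passivize(pas: dict) -> dict:
    """Toy voice operation over a predicate-argument structure."""
    return {
        "predicate": pas["predicate"] + " (passive)",
        "subject": pas["object"],           # object promoted to subject
        "oblique": "by " + pas["subject"],  # agent demoted to a by-phrase
    }

active = {"predicate": "fry", "subject": "Sam", "object": "the vegetables"}
passive = passivize(active)
print(passive["subject"])  # the vegetables
print(passive["oblique"])  # by Sam
```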

Psycholinguistics (arguments vs. adjuncts)

Psycholinguistic theories must explain how syntactic representations are built incrementally during sentence comprehension. One view that has sprung from psycholinguistics is the argument structure hypothesis (ASH), which explains the distinct cognitive operations for argument and adjunct attachment: arguments are attached via the lexical mechanism, but adjuncts are attached using general (non-lexical) grammatical knowledge that is represented as phrase structure rules or the equivalent.

Argument status determines the cognitive mechanism in which a phrase will be attached to the developing syntactic representations of a sentence. Psycholinguistic evidence supports a formal distinction between arguments and adjuncts, for any questions about the argument status of a phrase are, in effect, questions about learned mental representations of the lexical heads.[citation needed]

Syntactic vs. semantic arguments

An important distinction acknowledges both syntactic and semantic arguments. Content verbs determine the number and type of syntactic arguments that can or must appear in their environment; they impose specific syntactic functions (e.g. subject, object, oblique, specific preposition, possessor, etc.) onto their arguments. These syntactic functions will vary as the form of the predicate varies (e.g. active verb, passive participle, gerund, nominal, etc.). In languages that have morphological case, the arguments of a predicate must appear with the correct case markings (e.g. nominative, accusative, dative, genitive, etc.) imposed on them by their predicate. The semantic arguments of the predicate, in contrast, remain consistent, e.g.

Jack is liked by Jill.
Jill's liking Jack
Jack's being liked by Jill
the liking of Jack by Jill
Jill's like for Jack

The predicate 'like' appears in various forms in these examples, which means that the syntactic functions of the arguments associated with Jack and Jill vary. The object of the active sentence, for instance, becomes the subject of the passive sentence. Despite this variation in syntactic functions, the arguments remain semantically consistent. In each case, Jill is the experiencer (= the one doing the liking) and Jack is the one being experienced (= the one being liked). In other words, the syntactic arguments are subject to syntactic variation in terms of syntactic functions, whereas the thematic roles of the arguments of the given predicate remain consistent as the form of that predicate changes.

The syntactic arguments of a given verb can also vary across languages. For example, the verb put in English requires three syntactic arguments: subject, object, locative (e.g. He put the book into the box). These syntactic arguments correspond to the three semantic arguments agent, theme, and goal. The Japanese verb oku 'put', in contrast, has the same three semantic arguments, but the syntactic arguments differ: Japanese does not require the locative argument, so it is correct to say Kare ga hon o oita ("He put the book"). The equivalent sentence in English is ungrammatical without the required locative argument, as the examples involving put above demonstrate. For this reason, a slight paraphrase is required to render the nearest grammatical equivalent in English: He positioned the book or He deposited the book.

Distinguishing between arguments and adjuncts

Arguments vs. adjuncts

A large body of literature has been devoted to distinguishing arguments from adjuncts.[4] Numerous syntactic tests have been devised for this purpose. One such test is the relative clause diagnostic. If the test constituent can appear after the combination which occurred/happened in a relative clause, it is an adjunct, not an argument, e.g.

Bill left on Tuesday. → Bill left, which happened on Tuesday. – on Tuesday is an adjunct.
Susan stopped due to the weather. → Susan stopped, which occurred due to the weather. – due to the weather is an adjunct.
Fred tried to say something twice. → Fred tried to say something, which occurred twice. – twice is an adjunct.

The same diagnostic results in unacceptable relative clauses (and sentences) when the test constituent is an argument, e.g.

Bill left home. → *Bill left, which happened home. – home is an argument.
Susan stopped her objections. → *Susan stopped, which occurred her objections. – her objections is an argument.
Fred tried to say something. → *Fred tried to say, which happened something. – something is an argument.

This test succeeds in identifying prepositional arguments as well:

We are waiting for Susan. → *We are waiting, which is happening for Susan. – for Susan is an argument.
Tom put the knife in the drawer. → *Tom put the knife, which occurred in the drawer. – in the drawer is an argument.
We laughed at you. → *We laughed, which occurred at you. – at you is an argument.

The utility of the relative clause test is, however, limited. It incorrectly suggests, for instance, that modal adverbs (e.g. probably, certainly, maybe) and manner expressions (e.g. quickly, carefully, totally) are arguments. If a constituent passes the relative clause test, however, one can be sure that it is not an argument.
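The relative clause diagnostic can only be half mechanized: code can build the test sentence, but the acceptability judgment must still come from a speaker. The helper below is a sketch of that sentence-building step:

```python
# Build the 'which happened ...' test sentence for a candidate phrase.
# Whether the result is acceptable (adjunct) or not (argument) is a
# grammaticality judgment a human speaker must supply.
def relative_clause_test(clause_without: str, candidate: str) -> str:
    """Return the diagnostic sentence for the candidate constituent."""
    return f"{clause_without.rstrip('.')}, which happened {candidate}."

print(relative_clause_test("Bill left", "on Tuesday"))
# Bill left, which happened on Tuesday.   -> acceptable, so adjunct
print(relative_clause_test("Bill left", "home"))
# Bill left, which happened home.         -> unacceptable, so argument
```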

Obligatory vs. optional arguments

A further division blurs the line between arguments and adjuncts. Many arguments behave like adjuncts with respect to another diagnostic, the omission diagnostic. Adjuncts can always be omitted from the phrase, clause, or sentence in which they appear without rendering the resulting expression unacceptable. Some arguments (obligatory ones), in contrast, cannot be omitted. There are many other arguments, however, that are identified as arguments by the relative clause diagnostic but that can nevertheless be omitted, e.g.

a. She cleaned the kitchen.
b. She cleaned. – the kitchen is an optional argument.
a. We are waiting for Larry.
b. We are waiting. – for Larry is an optional argument.
a. Susan was working on the model.
b. Susan was working. – on the model is an optional argument.

The relative clause diagnostic would identify these phrases as arguments. The omission diagnostic here, however, demonstrates that they are not obligatory arguments. They are, rather, optional. The insight, then, is that a three-way division is needed. On the one hand, one distinguishes between arguments and adjuncts, and on the other hand, one allows for a further division between obligatory and optional arguments.
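The three-way division can be sketched as a lexical frame that separates obligatory from optional arguments; anything outside the frame is an adjunct. The entries below are illustrative, built from the examples in this section:

```python
# Toy frames: each verb lists its obligatory and optional arguments.
# Roles not listed at all are treated as adjuncts.
FRAMES = {
    "put":   {"obligatory": ["subject", "object", "locative"], "optional": []},
    "clean": {"obligatory": ["subject"], "optional": ["object"]},
    "wait":  {"obligatory": ["subject"], "optional": ["for-PP"]},
}

def omissible(verb: str, role: str) -> bool:
    """A constituent may be dropped unless it is an obligatory argument."""
    return role not in FRAMES[verb]["obligatory"]

print(omissible("clean", "object"))    # True: 'She cleaned.' is fine
print(omissible("put", "locative"))    # False: '*Sam put the pen.'
```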

Arguments and adjuncts in noun phrases

Most work on the distinction between arguments and adjuncts has been conducted at the clause level and has focused on arguments and adjuncts to verbal predicates. The distinction is crucial for the analysis of noun phrases as well, however. If it is altered somewhat, the relative clause diagnostic can also be used to distinguish arguments from adjuncts in noun phrases, e.g.

Bill's bold reading of the poem after lunch
*bold reading of the poem after lunch that was Bill's – Bill's is an argument.
Bill's reading of the poem after lunch that was bold – bold is an adjunct.
*Bill's bold reading after lunch that was of the poem – of the poem is an argument.
Bill's bold reading of the poem that was after lunch – after lunch is an adjunct.

The diagnostic identifies Bill's and of the poem as arguments, and bold and after lunch as adjuncts.

Representing arguments and adjuncts

The distinction between arguments and adjuncts is often indicated in the tree structures used to represent syntactic structure. In phrase structure grammars, an adjunct is "adjoined" to a projection of its head predicate in such a manner that distinguishes it from the arguments of that predicate. The distinction is quite visible in theories that employ the X-bar schema, e.g.

[Tree diagram: X-bar structure showing the complement, specifier, and adjunct positions]

The complement argument appears as a sister of the head X, and the specifier argument appears as a daughter of XP. The optional adjuncts appear in one of a number of positions adjoined to a bar-projection of X or to XP.

Theories of syntax that acknowledge n-ary branching structures and hence construe syntactic structure as being flatter than the layered structures associated with the X-bar schema must employ some other means to distinguish between arguments and adjuncts. In this regard, some dependency grammars employ an arrow convention. Arguments receive a "normal" dependency edge, whereas adjuncts receive an arrow edge.[5] In the following tree, an arrow points away from an adjunct toward the governor of that adjunct:

[Dependency tree: arrow edges marking the adjuncts of the predicate wanted to send]

The arrow edges in the tree identify four constituents (= complete subtrees) as adjuncts: At one time, actually, in congress, and for fun. The normal dependency edges (= non-arrows) identify the other constituents as arguments of their heads. Thus Sam, a duck, and to his representative in congress are identified as arguments of the verbal predicate wanted to send.

Relevant theories

Argumentation theory examines how logical reasoning proceeds from premises, through a method of reasoning, to a conclusion. Many varieties of argumentation relate to this theory, including conversational, mathematical, scientific, interpretive, legal, and political argumentation.

Functional theories of grammar treat the functions of language as central to linguistic analysis, relating grammatical elements to their functions and purposes.

A variety of theories exist regarding the structure of syntax, including generative grammar, categorial grammar, and dependency grammar.

Modern theories of semantics include formal semantics, lexical semantics, and computational semantics. Formal semantics focuses on truth conditions. Lexical semantics studies word meanings in relation to their contexts, and computational semantics uses algorithms and architectures to investigate linguistic meaning.

Valence is the number and type of arguments linked to a predicate, in particular to a verb. In valency theory, a verb's arguments also include the argument expressed by its subject.

History of argument linguistics

The notion of argument structure was first conceived in the 1980s by researchers working in the government–binding framework to help address controversies about arguments.[6]

Importance

The distinction between arguments and adjuncts is crucial to most theories of syntax and grammar. Arguments behave differently from adjuncts in numerous ways. Theories of binding, coordination, discontinuities, ellipsis, etc. must acknowledge and build on the distinction. When one examines these areas of syntax, one finds that arguments consistently behave differently from adjuncts, and without the distinction, our ability to investigate and understand these phenomena would be seriously hindered. In everyday language the distinction often goes unnoticed: it is the difference between obligatory phrases and phrases that merely embellish a sentence. In "Tim punched the stuffed animal", the phrase the stuffed animal is an argument because it is required to complete the meaning of punched. In "Tim punched the stuffed animal with glee", the phrase with glee is an adjunct because it merely enhances the sentence, which can stand alone without it.[7]

from Grokipedia
In linguistics, an argument is a syntactic constituent, such as a noun phrase, that a predicate (typically a verb) selects as obligatory to express its core semantic relations, distinguishing it from optional adjuncts that merely modify the event description. These arguments participate in the event described by the predicate, often realizing thematic roles like agent (the doer), theme (the affected entity), or goal (the endpoint). For example, in the sentence "The chef baked the cake," "the chef" and "the cake" are arguments of the verb "baked," fulfilling agent and theme roles, respectively, while an adjunct such as a locative phrase provides additional circumstantial information without being required. The concept of argumenthood bridges syntax and semantics, where arguments are lexically encoded in the verb's subcategorization frame and must satisfy the predicate's valency—the number and type of complements it requires. Semantically, arguments are core participants essential to the predicate's meaning, as per neo-Davidsonian event semantics, in which verbs denote properties of events and arguments link individuals to those events via functional heads like Voice (for external arguments such as agents) or little v (for internal arguments like themes). Syntactically, arguments are projected within the verb phrase or clause structure, adhering to principles like the Theta-Criterion, which ensures each argument receives a unique thematic role and every role is assigned. Distinguishing arguments from adjuncts relies on diagnostics such as obligatoriness, lexical selection, and processing ease: arguments cannot be omitted without altering grammaticality or meaning (e.g., "*John baked" is incomplete), whereas adjuncts like locatives or manner phrases can be. Psycholinguistic evidence supports this divide, showing faster integration of arguments during sentence comprehension due to their pre-specified lexical ties, as opposed to the more flexible attachment of adjuncts.
Theoretical frameworks, including Role and Reference Grammar among others, formalize these properties, though challenges arise in borderline cases like instruments or possessives, which may exhibit hybrid semantic-syntactic behaviors. Argument structure thus forms a foundational element of syntactic theory, influencing phenomena like passivization, where arguments shift positions (e.g., agent demoted in "The cake was baked by the chef"), and cross-linguistic variation in valency patterns.

Core Concepts

Definition and Role in Predication

In linguistics, an argument is a syntactic or semantic constituent required to complete the meaning of a predicate, such as a verb, by specifying the participants in the event, state, or relation denoted by that predicate. Predicates, which include verbs, adjectives, or nouns functioning predicatively, express properties, actions, or relations that inherently demand certain entities to instantiate them fully, with arguments serving as those entities in roles like agent, patient, or theme. For instance, in the sentence "John devours the book," the verb "devours" is the predicate, and "John" (agent) and "the book" (patient) are its arguments, without which the predicate's meaning remains incomplete. The role of arguments in predication lies in saturating the predicate's valency—the fixed number and type of participants it requires to form a grammatically and semantically well-formed clause. This saturation process ensures that the predicate's thematic slots, which represent conceptual roles in the described scenario, are filled, thereby yielding a complete proposition about the world. Valency varies across predicates: zero-valent (or avalent) predicates like "rain" require no arguments and describe impersonal states; one-valent (monovalent or intransitive) predicates like "sleep" take a single argument, typically the subject; two-valent (divalent or transitive) predicates like "eat" require a subject and object; and three-valent (trivalent) predicates like "give" demand a subject, direct object, and indirect object. These valency patterns stem from the predicate's lexical semantics, as outlined in valency theory pioneered by Lucien Tesnière, who likened a verb's need for arguments to a chemical element's bonding requirements. Predicates differ from arguments in that predicates denote the core relational or attributive content, whereas arguments supply the specific referents that occupy the predicate's positions, often aligned with thematic roles such as agent (initiator of action) or patient (affected entity).
Verbs typically specify subcategorization frames in their lexical entries, which detail the syntactic categories and positions of required arguments, such as a noun phrase (NP) for the subject and another NP for the direct object in transitive verbs. For example, the verb "put" has a frame requiring an NP subject, an NP object, and a prepositional phrase (PP) indicating location, as in "She put the vase on the table." This framework ensures syntactic well-formedness while linking to the semantic roles that arguments fulfill. Semantic arguments emphasize the thematic contributions to meaning, while syntactic arguments focus on structural realization, though the two views often align in core cases.
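A subcategorization frame like that of "put" can be sketched as a list of required phrase categories together with a saturation check. The representation is illustrative only; real lexical entries carry far more information:

```python
# Toy subcategorization frame for 'put': subject NP, object NP,
# and a locative PP, in that order.
PUT_FRAME = ["NP", "NP", "PP"]

def saturates(frame: list, phrases: list) -> bool:
    """True if the supplied phrase categories match the frame exactly."""
    return phrases == frame

print(saturates(PUT_FRAME, ["NP", "NP", "PP"]))  # 'She put the vase on the table.'
print(saturates(PUT_FRAME, ["NP", "NP"]))        # '*She put the vase.'
```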

Arguments Versus Adjuncts

In linguistics, arguments and adjuncts represent a fundamental distinction in how constituents contribute to predicate and sentence meaning. Arguments are the obligatory participants required to complete the valency of a predicate, such as a verb, and they typically bear core thematic roles like agent, patient, or theme, as specified in the predicate's argument structure. In contrast, adjuncts are optional modifiers that provide additional circumstantial information, such as manner, time, location, or reason, without fulfilling the predicate's core requirements. This opposition ensures that predicates achieve semantic completeness through arguments alone, while adjuncts enrich the description of the event without altering its basic participants. A clear example is the sentence "She ran to the store quickly." Here, "she" functions as the agent argument, obligatory as the external argument of the intransitive verb "run." Both "to the store" (goal adjunct) and the adverb "quickly" (manner adjunct), however, add optional information about the direction of motion and how the running occurred without being required for the verb's meaning. Removing the argument would render the sentence incomplete or semantically anomalous (e.g., "*Ran to the store quickly" lacks the agent), whereas omitting the adjuncts preserves core interpretability ("She ran"). Semantically, arguments fill designated slots in the predicate's argument structure, encoding essential relations to the event, such as causation or affectedness, which are entailed by the predicate. Adjuncts, by comparison, do not occupy these slots but instead modify the event as a whole, contributing non-entailed, descriptive content that can often be integrated across multiple predicates. For instance, in "John built the house in the summer," "the house" is a theme argument entailing an affected entity, while "in the summer" is a temporal adjunct that adds context without specifying a core role.
Behaviorally, arguments influence predicate selection through subcategorization, where verbs impose syntactic and semantic restrictions on their complements (e.g., transitive verbs like "devour" require a direct object denoting consumable items). Adjuncts lack such selectional ties and can be iterated indefinitely without violating grammaticality (e.g., "She ran to the store quickly in the morning yesterday"), highlighting their modifier status. This initial distinction underscores arguments' role in defining the predicate's core event frame, distinct from adjuncts' peripheral enrichment.

Types of Arguments

Syntactic Arguments

In generative syntax, syntactic arguments are defined as syntactic phrases—such as noun phrases (NPs), prepositional phrases (PPs), or complementizer phrases (CPs)—that occupy designated argument positions within the hierarchical phrase structure of a sentence, typically as specifiers or complements selected by a head, particularly a verb. These positions are crucial for satisfying the head's syntactic requirements and ensuring grammatical well-formedness. Common structural positions for arguments include the subject in the specifier of the Tense Phrase (Spec-TP), the direct object as the complement of the verb (within V'), and indirect objects in dative constructions, often realized as PPs or NPs adjacent to the verb. For instance, in English sentences like "Alex washed the car," the NP "Alex" occupies Spec-TP as the subject, while "the car" serves as the NP complement of the verb. In dative constructions such as "Birch gave Brooke a book," the indirect object "Brooke" appears as an NP preceding the direct object in double object constructions or as a PP complement like "to Brooke." Verbs specify their syntactic argument requirements through subcategorization, which dictates the number and type of arguments they require; for example, transitive verbs like "hit" subcategorize for an NP complement, as in "hit the ball," while ditransitive verbs like "give" select for two NPs or an NP and a PP, as in "give a book to John." This subcategorization frame is part of the verb's lexical entry and ensures that sentences conform to the verb's structural demands. Cross-linguistically, variations arise in how these argument positions are realized; in ergative languages like Dyirbal, the patient (transitive object) may occupy the Spec-TP position for case assignment, while the agent remains in the VP specifier, inverting the nominative-accusative pattern of languages like English. Argument alternations demonstrate how syntactic roles can shift without altering the underlying structure in certain ways.
In dative shift, a construction like "give a book to John" alternates with "give John a book," where the indirect object moves from a PP to an NP position adjacent to the verb. Passivization similarly affects syntactic roles by promoting the direct object to subject position in Spec-TP, as in the shift from active "The dog attacked the cat" to passive "The cat was attacked by the dog," with the original subject demoted to a by-phrase. These alternations highlight the flexibility of syntactic argument realization across constructions.
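The dative-shift alternation can be sketched as a mapping between the two frames of "give". The dictionary encoding is a hypothetical simplification used only to show how the recipient moves next to the verb:

```python
# 'give NP to NP' -> 'give NP NP': the recipient in the to-phrase
# becomes the first (bare NP) object of the double object construction.
def dative_shift(args: dict) -> dict:
    """Toy mapping from the PP frame to the double-object frame."""
    return {
        "verb": args["verb"],
        "first_object": args["to_phrase"],   # recipient, now a bare NP
        "second_object": args["object"],     # theme follows the recipient
    }

pp_frame = {"verb": "give", "object": "a book", "to_phrase": "John"}
shifted = dative_shift(pp_frame)
print(shifted["first_object"], shifted["second_object"])  # John a book
```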

Semantic Arguments

In linguistics, semantic arguments refer to the entities that participate in the situation or event described by a predicate, such as a verb, with each argument fulfilling a specific interpretive role relative to that predicate. These roles, often termed theta roles or thematic roles, capture the semantic relations between the predicate and its participants, including prototypical categories like agent (the instigator of an action), theme (the entity affected or undergoing change), and goal (the endpoint or beneficiary of the action). This conceptualization originates from case grammar frameworks, where deep structural cases encode the essential semantic contributions of arguments to the proposition, independent of surface syntactic form. A key constraint governing semantic arguments is the theta criterion, which stipulates that each argument must receive exactly one theta role, and each theta role assigned by the predicate must be realized by exactly one argument. Predicates thus project a fixed number of theta roles determined by their lexical semantics, ensuring a bijective mapping between arguments and roles without redundancy or omission. For instance, in the sentence "The chef sent the cake to the guests," the verb "send" assigns the agent role to "the chef" (the willful initiator), the theme role to "the cake" (the entity affected), and the goal role to "the guests" (the intended recipients). This criterion enforces the completeness and uniqueness of semantic interpretation in predication. Cross-linguistically, while the syntactic expression of arguments varies—such as through word order, case marking, or agreement—core theta roles like agent, theme, experiencer (the entity perceiving or undergoing a psychological state), and goal exhibit remarkable consistency as universal components of event semantics.
These roles provide a stable semantic-syntactic interface that structures argument participation across languages, as evidenced in typological patterns and acquisition data where children reliably distinguish initiators from affected entities early on. For example, psych verbs like "like" universally mark the experiencer as a core participant, despite diverse realizations in languages like English (subject) versus Latin (dative). The syntactic realization of these roles, such as subject or object positions, is addressed in discussions of syntactic arguments.
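The theta criterion amounts to a bijection between arguments and theta roles, which can be checked mechanically. The encoding below is an illustrative sketch, not a formalization from any particular framework:

```python
# The theta criterion as a bijection check: every argument bears exactly
# one theta role, and every role the predicate projects is assigned once.
def satisfies_theta_criterion(assignments: dict, roles: set) -> bool:
    """assignments maps each argument to its single theta role."""
    assigned = list(assignments.values())
    # every projected role assigned, and no role assigned twice
    return set(assigned) == roles and len(assigned) == len(set(assigned))

send = {"the chef": "agent", "the cake": "theme", "the guests": "goal"}
print(satisfies_theta_criterion(send, {"agent", "theme", "goal"}))  # True
# A role left unassigned (or doubled up) violates the criterion:
print(satisfies_theta_criterion({"the chef": "agent"}, {"agent", "theme"}))  # False
```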

Distinction Criteria

Obligatoriness and Optionality

In linguistics, obligatoriness serves as a key diagnostic for distinguishing arguments from adjuncts, with arguments being those elements required by a predicate's valency to form a complete syntactic construction. Obligatory arguments must be realized in the sentence; their omission typically results in ungrammaticality. For instance, the English verb give requires both a theme (direct object) and a goal (indirect object) as arguments: "John gave the book to Mary" is grammatical, whereas "*John gave the book" is not, as the goal argument is missing. This requirement stems from the verb's subcategorization frame, which specifies the number and type of complements needed to satisfy its valency. Optional arguments, by contrast, can be omitted without rendering the sentence ungrammatical, though their absence may lead to default or contextually inferred interpretations. Such arguments are selected by the predicate but not syntactically mandated in every occurrence. For example, the English verb claim, a reporting verb, optionally takes a clausal complement: "He claimed" is grammatical on its own, implying a proposition that can be recovered from context, while "He claimed that it was true" explicitly realizes the complement for added clarity. This optionality highlights how arguments contribute to semantic completeness without always enforcing syntactic presence, differing from adjuncts that add peripheral information freely. Valency reduction mechanisms allow obligatory arguments to become implicit, suppressing their overt realization while preserving their semantic role. In English middle constructions, the external (agentive) argument is demoted to an implicit status, reducing the verb's syntactic valency: "The book reads easily" implies an arbitrary agent capable of reading it, but "*The book reads" without the adverb is infelicitous, as the construction requires adverbial support to license the implicit argument.
Similarly, zero anaphora in pro-drop languages permits phonologically null but semantically obligatory arguments, where rich verbal agreement recovers the argument's identity. Cross-linguistically, languages like Italian exemplify this through null subjects in pro-drop systems, where the subject argument is obligatory for predication but optionally unrealized morphologically. In "Parla italiano" ('(He/She) speaks Italian'), the null subject pro fills the obligatory subject position, licensed by the verb's agreement morphology, without affecting grammaticality; overt pronouns like "Lui parla italiano" are used for emphasis or contrast but are not required. This contrasts with non-pro-drop languages like English, where subjects must be overt, underscoring how obligatoriness interacts with language-specific licensing conditions for arguments.

Structural and Behavioral Tests

Structural and behavioral tests provide syntactic diagnostics to distinguish arguments from adjuncts, complementing preliminary criteria such as obligatoriness. These tests examine how phrases interact with core syntactic operations like binding, movement, and coordination, revealing differences in structural integration. Arguments typically occupy positions within the verb phrase that allow them to participate in these operations, whereas adjuncts, adjoined externally, exhibit distinct behaviors. One key structural test involves c-command relations, particularly through binding asymmetries under Principle C of the Binding Theory. Arguments c-command their dependents, enabling coreference restrictions, while adjuncts, adjoined higher in the structure, do not. For instance, in the argument structure "*[Which picture of John_i] did he_i like?", the possessive phrase "of John" reconstructs under movement, violating Principle C because the pronoun "he" c-commands the R-expression "John". In contrast, the adjunct example "[Which picture taken by John_i] did he_i like?" is grammatical, as the adjunct does not reconstruct, avoiding the c-command violation. This asymmetry holds experimentally, with arguments obligatorily reconstructing (mean acceptability rating 2.19) unlike adjuncts (3.24). Movement tests further differentiate arguments by assessing their ability to undergo passivization or raising, operations that promote internal arguments to subject position while suppressing external ones. Arguments, as core complements, can shift positions without degradation, as in "The book was given to Mary" from "Someone gave the book to Mary", where "to Mary" (an indirect object argument) alternates with the subject in related constructions. Adjuncts resist such promotion; for example, "*Quickly was the race run by John" is ungrammatical, unlike the acceptable "The race was run quickly by John", because the manner adverb "quickly" cannot serve as the derived subject.
Similarly, in VP-preposing, arguments must move with the verb ("*Draw she did a picture" is ill-formed), whereas adjuncts can remain behind ("Leave she did on Monday"). These patterns confirm arguments' tighter integration into the verb's projection. Coordination and iteration tests highlight behavioral differences in phrase combination. Arguments coordinate readily with other core phrases of the same type, as in "John studied French and German", where both conjuncts are direct object arguments. Mixing an argument with an adjunct fails, yielding ungrammaticality like "*John studied French and last night". Adjuncts, however, coordinate easily among themselves and can stack freely, as in "John left in the morning on Tuesday", where multiple temporal adjuncts iterate without a fixed limit ("John left in the morning, on Tuesday, and quietly" is possible). Arguments resist such iteration; "*John saw the man the cookie" is impossible. These tests underscore adjuncts' looser, modifier-like attachment, allowing flexible stacking and scoping over the predicate. Cross-linguistically, diagnostics like the stranding test in Germanic languages treat particles in verb-particle constructions as argument-like elements. In English and Dutch, objects can strand the particle under movement, as when "call the meeting off" yields "the meeting, call off" patterns in questions, indicating that the particle occupies a position akin to a verbal argument rather than a peripheral adjunct. This stranding is unavailable for true adjunct prepositional phrases, which do not permit object extraction past them without island effects. Such behavior supports analyzing particles as integrated into the verb's argument structure across Germanic varieties.

Arguments and adjuncts in noun phrases

In noun phrases, arguments function as core participants essential to the semantic interpretation of the head noun, often realized through genitive constructions or prepositional phrases headed by "of". For instance, in the phrase "the destruction of the city", the prepositional phrase "of the city" serves as an internal argument of the noun "destruction", specifying the theme affected by the event denoted by the noun. This pattern is particularly evident in complex event nominalizations, where the noun inherits the argument structure of its verbal base, making such phrases obligatory for full semantic coherence. In contrast, adjuncts in noun phrases provide additional, non-essential descriptive content and are typically optional modifiers, including attributive adjectives or prepositional phrases. Examples include "the old destruction of the city" (with "old" as an attributive adjective) or "the destruction in 1945 of the city" (with "in 1945" as a temporal adjunct). These elements can be iterated or omitted without rendering the noun phrase semantically incomplete, distinguishing them from arguments through tests like optionality and iterability. Relational nouns, such as "difference" or "friendship", inherently denote binary or multi-participant relations and thus require arguments to express their full meaning, often via possessives or prepositions. In "the difference between the teams", the prepositional phrase "between the teams" realizes the relational arguments, which are obligatory for the noun's interpretive completeness. Within nominalizations, arguments can vary in obligatoriness: eventive nominals like "the examination of the students" demand their arguments (e.g., "of the students" as theme), while result nominals like "the examination" treat them as optional. Cross-linguistically, similar distinctions appear with relational adjectives, particularly in Romance languages, where they behave like argument-taking predicates with obligatory complements.
In Spanish, the relational adjective "enemigo de" in phrases like "enemigo de Roma" ('enemy of Rome') requires the "de"-phrase as an argument to specify the relational participant, akin to genitive arguments in English nominals. This structure highlights how relational elements in noun phrases parallel verbal argumenthood, though adapted to nominal syntax.

Representation methods

In syntactic frameworks

In syntactic frameworks, arguments are typically represented as structural sisters to predicates or in specifier positions within phrase structure trees, ensuring that they directly saturate the valence requirements of the head. For instance, in verb phrases, core arguments like subjects and objects occupy complement or specifier slots adjacent to the verb, while adjuncts are adjoined to higher phrasal nodes, such as VP or IP levels, to modify the predicate without fulfilling its valence needs. This distinction maintains hierarchical organization, where arguments integrate into the core projection of the predicate, as seen in ditransitive constructions employing VP shells: the indirect object serves as a specifier in an inner vP, with the direct object as its complement, and the verb raising to a higher head. X-bar theory formalizes this representation by positing that arguments saturate intermediate bar-level projections (X') of the head, forming the recursive backbone of phrases through complementation and specification. Complements, as arguments, attach as sisters to the head within X' to complete the head's selectional properties, whereas adjuncts adjoin to X' or XP nodes, introducing optional modification without saturation. This binary structure—head with complement forming X', then specifier attaching to yield XP—ensures that arguments are obligatory in argument-taking categories like verbs and relational nouns, distinguishing them from adjuncts that expand phrases peripherally. Dependency grammar, in contrast, eschews phrase-level embedding in favor of head-dependent relations, portraying arguments as primary dependents directly subordinated to the predicate to satisfy its valency. The predicate governs its arguments via labeled arcs indicating grammatical functions, such as subject or object, while adjuncts appear as secondary, peripheral dependents with looser attachment, often linked via modifier or circumstantial relations.
This flatter structure highlights arguments' centrality to the core of the dependency tree, with adjuncts branching outward to encode additional, non-essential information. In Lexical Functional Grammar (LFG), constituent structure (c-structure) trees explicitly annotate argument positions with functional equations, such as (↑ SUBJ) = ↓ for subjects in Spec-IP and (↑ OBJ) = ↓ for objects in the complement position within VP, linking surface form to abstract grammatical functions. These annotations ensure that arguments map to f-structure slots like SUBJ or OBJ, distinguishing them from adjuncts labeled as ADJ, which adjoin without functional integration. Similarly, the Minimalist Program enforces binary branching in Merge operations, positioning arguments in specifier or complement roles within extended projections, such as little-vP for external arguments and VP for internal ones, to derive theta-linked positions economically.
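The dependency-style representation described above can be sketched as a set of labeled head-dependent arcs, with arguments distinguished from adjuncts by their relation type. This is an illustrative toy encoding, not any particular parser's format; the sentence, relation labels, and `is_argument` flag are assumptions for demonstration.

```python
from collections import namedtuple

# One labeled arc of a dependency tree: the head governs the dependent
# under a grammatical relation; arguments are primary dependents.
Arc = namedtuple("Arc", ["head", "dependent", "relation", "is_argument"])

# "John quickly read the book": "read" governs a subject and an object
# argument, while "quickly" attaches as a peripheral modifier (adjunct).
tree = [
    Arc("read", "John", "subj", True),
    Arc("read", "book", "obj", True),
    Arc("read", "quickly", "mod", False),
]

arguments = [a.dependent for a in tree if a.is_argument]
adjuncts = [a.dependent for a in tree if not a.is_argument]
print(arguments, adjuncts)  # → ['John', 'book'] ['quickly']
```

The flat list of arcs mirrors dependency grammar's avoidance of phrase-level embedding: valency is checked by inspecting the predicate's labeled dependents rather than tree configuration.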

In semantic frameworks

In semantic frameworks, theta grids provide a lexical representation of a predicate's argument structure by specifying slots filled by arguments bearing particular theta roles, such as agent, theme, and goal. For instance, the verb give is associated with the theta grid <agent, theme, goal>, where the agent initiates the action, the theme is transferred, and the goal receives it; this notation encodes the number and types of obligatory arguments required for semantic well-formedness. Theta grids also distinguish external arguments, typically mapped to the subject position and bearing roles like agent or experiencer, from internal arguments, which are objects within the verb phrase and often themes or patients. Predicate logic formalizes arguments as variables bound by predicates or lambda operators, enabling precise composition of meaning. In this approach, a sentence like "x eats y" is represented as eat(x, y), where x and y are argument variables denoting the eater and the eaten, respectively, and lambda abstraction λxλy.eat(x, y) abstracts over these variables to form a functor that combines with actual noun phrases during interpretation. This variable-binding mechanism ensures that arguments saturate predicate positions, contributing to the truth-conditional semantics of the sentence. Event semantics, particularly in the neo-Davidsonian framework, treats arguments as participants in events rather than direct complements of predicates, reifying events as entities with their own argument structure. For example, the verb run is analyzed as run(e), with the theme argument linked separately as theme(e, x), where e is the event variable and x the runner; this allows modifiers like adverbs to attach to the event while thematic relations specify participant roles. Such representations facilitate the decomposition of complex sentences into atomic events and their arguments, supporting analyses of aspect and causation.
Montague grammar employs compositional semantics in which arguments occupy functor-argument positions in a typed lambda calculus, ensuring that syntactic structure mirrors semantic combination. Predicates act as higher-type functions that take arguments as inputs to yield truth values; for instance, a transitive verb like love is λyλx.love'(x, y), applying first to the object argument y to form a one-place predicate, then to the subject x to complete the proposition. This functor-argument application captures how quantified noun phrases, such as "every man", bind variables over argument slots, enabling scope resolution in sentences with multiple arguments.
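The curried functor-argument application described above can be illustrated directly with Python lambdas. This is a hedged toy model: the tiny "facts" set and the individual names are invented for demonstration, standing in for a model-theoretic interpretation of love'(x, y).

```python
# Toy model: the extension of love' over a two-individual domain (assumed facts).
LOVES = {("john", "mary")}

# λyλx.love'(x, y): a transitive verb takes the object first,
# returning a one-place predicate that then takes the subject.
love = lambda y: lambda x: (x, y) in LOVES

# Applying to the object yields λx.love'(x, mary), a one-place predicate.
loves_mary = love("mary")

print(loves_mary("john"))  # → True:  love'(john, mary) holds
print(loves_mary("mary"))  # → False: love'(mary, mary) does not
```

Each application saturates one argument slot, mirroring how the verb combines first with its object and then with its subject to produce a truth value.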

Theoretical frameworks

Theta theory and argument structure

Theta theory forms a core component of the Government and Binding (GB) framework in generative linguistics, regulating the assignment of semantic roles—known as theta roles—to the syntactic arguments of a predicate. The theory's foundational principle is the Theta Criterion, which enforces a one-to-one correspondence between the theta roles specified in a predicate's lexical entry and the arguments that saturate them in the sentence structure. This ensures that every argument receives exactly one theta role and that no theta role remains unassigned, preventing over- or under-generation in argument realization. Complementing this is the Projection Principle, which mandates that lexical properties, including argument structures, project onto the deep structure (D-structure) and remain invariant through subsequent syntactic transformations, thereby linking thematic requirements directly to syntactic representation. In GB theory, argument structure is represented lexically for each predicate, often involving decomposition into atomic semantic predicates to capture the verb's meaning and its required arguments. For instance, verbs like "break" participate in alternations such as the causative-inchoative, where the transitive form ("John broke the window") introduces an external agent theta role alongside a theme, while the inchoative form ("The window broke") suppresses the agent, leaving only the theme. This alternation reflects systematic variations in argument realization, constrained by theta theory to maintain the one-to-one role-argument mapping without violating the Theta Criterion. Such decompositions highlight how lexical entries encode not just roles but also the predicate's event structure, influencing possible syntactic frames. Theta grids provide a formal mechanism for representing argument structure, listing the theta roles a predicate assigns along with their ordering and selectional restrictions.
Typically, the agent role occupies the highest position in the thematic hierarchy, followed by beneficiary or goal, experiencer, theme, and location or instrument at lower levels, reflecting proto-agent and proto-patient properties that guide linking to syntactic positions. Assignment occurs at D-structure under government by the predicate, subject to the Visibility Condition: a chain must bear a Case feature to be visible for theta-marking, ensuring that only Case-bearing nominals can receive roles. For example, the theta grid for the verb "give" might specify an agent (external argument), a theme (direct object), and a goal (indirect object), as illustrated below:
Theta role | Obligatory? | Hierarchical position
Agent      | Yes         | 1 (highest)
Theme      | Yes         | 2
Goal       | Yes         | 3
This grid enforces the Theta Criterion by requiring all positions to be filled by distinct arguments in surface structure. Despite its explanatory power for configurational languages like English, theta theory faces criticism for its limited applicability to non-configurational languages, where free word order and null arguments challenge the strict D-structure assignment and hierarchical projections assumed in GB. Languages such as Warlpiri exhibit polysynthesis and flat structures that do not align neatly with theta grid projections, requiring ad hoc adjustments like base-generated adjuncts or revised conditions for theta-role assignment. Post-1980s extensions have attempted cross-linguistic adaptations, but the framework remains incomplete for typological diversity, particularly in polysynthetic or head-marking languages, highlighting gaps in universal theta mechanisms.
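The Theta Criterion's one-to-one requirement can be sketched as a simple check that a clause's realized arguments exactly saturate the predicate's theta grid. The grid contents follow the article's "give" and "break" examples; the checking function itself is an illustrative simplification, not a formal GB implementation.

```python
# Theta grids from the text: <agent, theme, goal> for "give",
# <agent, theme> for transitive "break".
THETA_GRIDS = {
    "give": ["agent", "theme", "goal"],
    "break": ["agent", "theme"],
}

def satisfies_theta_criterion(verb, arguments):
    """One-to-one check: every grid role is assigned exactly once,
    and each (distinct) argument bears exactly one role."""
    grid = THETA_GRIDS[verb]
    roles = [role for _, role in arguments]
    distinct_args = len({arg for arg, _ in arguments}) == len(arguments)
    return sorted(roles) == sorted(grid) and distinct_args

# "John gave the book to Mary": all three roles saturated.
ok = satisfies_theta_criterion(
    "give", [("John", "agent"), ("the book", "theme"), ("Mary", "goal")])
# "*John gave the book": the goal role remains unassigned.
bad = satisfies_theta_criterion(
    "give", [("John", "agent"), ("the book", "theme")])
print(ok, bad)  # → True False
```

The same check rejects over-generation (an extra argument with no role to receive) as well as under-generation, mirroring the bidirectional force of the criterion.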

Lexical Functional Grammar and arguments

Lexical Functional Grammar (LFG), developed by Joan Bresnan and Ronald M. Kaplan, posits that arguments are mapped from lexical argument structures (a-structures) to functional structures (f-structures), where they are represented by grammatical functions (GFs) such as subject (SUBJ) and object (OBJ). A-structures encode the lexical properties of predicates, including the number, syntactic type, and hierarchical organization of arguments, while f-structures capture the abstract syntactic relations among these arguments without reliance on underlying deep structures. This parallel architecture separates constituent structure (c-structure), which handles surface word order, from f-structure, allowing arguments to be identified independently of linear position. In LFG, argument realization distinguishes external arguments, typically mapped to SUBJ, from internal arguments, such as direct objects (OBJ) or obliques (OBL), through a lexical mapping theory that assigns features like [±restricted] and [±objective] to govern these associations. For non-canonical word orders, functional uncertainty equations in f-structures permit flexible mappings, ensuring that arguments cohere with predicate requirements regardless of surface arrangement. This approach contrasts with configurational theories by emphasizing functional relations over hierarchical phrase structure, facilitating cross-linguistic uniformity in argument encoding. A representative example is Warlpiri, a non-configurational Australian language with free word order, where arguments like subjects and objects are identified via case marking (e.g., ergative for agents, absolutive for patients) rather than fixed positions relative to the verb.
In LFG, the flat c-structure of Warlpiri sentences yields a consistent f-structure that assigns GFs based on morphological cues. In the sentence Kurdu-jarra-rlu ka-pala maliki wajili-pi-nyi wita-jarra-rlu ('The two small children are chasing the dog'), the discontinuous ergative-marked NPs map to SUBJ and the absolutive NP to OBJ, irrespective of order. This functional abstraction preserves argument roles without positing empty categories or movements. LFG thus offers advantages over derivational frameworks for non-configurational languages, avoiding the need for rigid X-bar hierarchies or null elements to explain free ordering and instead deriving such phenomena directly from the independence of c- and f-structures. This perspective better accommodates languages like Warlpiri, where discourse-driven variations do not disrupt core argument relations, providing a more parsimonious account of syntactic diversity than transformational approaches.
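The case-driven assignment of grammatical functions described above can be sketched as a function that builds a flat f-structure-like record from morphological cues alone, ignoring word order. This is a drastic simplification of LFG (no unification, no functional equations); the case-to-GF table and the PRED value are assumptions for illustration.

```python
# Simplified mapping from Warlpiri case marking to grammatical functions,
# in the spirit of the article's example (ergative → SUBJ, absolutive → OBJ).
CASE_TO_GF = {"ergative": "SUBJ", "absolutive": "OBJ"}

def f_structure(pred, case_marked_nps):
    """Assign GFs from case morphology; discontinuous NPs with the
    same case accumulate under one GF slot, regardless of position."""
    fs = {"PRED": pred}
    for np, case in case_marked_nps:
        fs.setdefault(CASE_TO_GF[case], []).append(np)
    return fs

# Two different surface orders of the same clause...
a = f_structure("chase", [("kurdu-jarra-rlu", "ergative"), ("maliki", "absolutive")])
b = f_structure("chase", [("maliki", "absolutive"), ("kurdu-jarra-rlu", "ergative")])
print(a == b)  # → True: free word order yields the same f-structure
```

The equality of the two results captures the key point: argument identification rests on morphology, not linear position, so scrambling leaves the functional analysis unchanged.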

Other relevant theories

In Head-Driven Phrase Structure Grammar (HPSG), arguments are represented as valence features within the lexical signs of predicates, specifying the complements that a head subcategorizes for in a structured list known as the SUBCAT list. This approach treats subcategorization as a constraint-based mechanism in which unification operations merge the valence requirements of a head with the categories of its arguments, ensuring syntactic compatibility and licensing the overall phrase structure without relying on transformations. For instance, a verb like "put" has a valence feature requiring a subject, a direct object, and a locative complement, which unification enforces during structure building. Construction Grammar posits that argument roles do not derive exclusively from lexical items but emerge holistically from form-meaning pairings in argument structure constructions, allowing verbs to participate in patterns beyond their inherent semantics. In this framework, constructions themselves contribute profiled semantic roles, as seen in the ditransitive construction (e.g., "She gave him a book"), where the pattern assigns a transfer-of-possession meaning and roles like recipient to the first object, even for verbs like "send" that might otherwise subcategorize differently. This emergent view integrates lexical and constructional specifications through the fusion of verb-specific participant roles with constructional argument roles, where the construction's semantics can override or extend the verb's, fostering systematic generalizations across sentence types. Role and Reference Grammar (RRG) employs a layered clause structure to link arguments semantically and syntactically, organizing the clause into a nucleus (the predicate), core (nucleus plus arguments), and periphery (adjuncts). Central to this are macroroles—actor (the most agent-like argument, typically initiating action) and undergoer (the most patient-like, affected by it)—which generalize across thematic roles and map arguments to syntactic positions via the Actor-Undergoer Hierarchy.
For example, in a transitive clause like "The boy kicked the ball", the boy maps to actor in the core, while the ball maps to undergoer, with the layering ensuring that all semantic arguments are realized under the Completeness Constraint. Recent developments in usage-based models within Construction Grammar emphasize corpus-derived data to model probabilistic preferences in argument realization, viewing grammar as an inventory of entrenched form-function units shaped by frequency and context. These approaches analyze large corpora to quantify alternation biases, such as the dative alternation's preference for animate recipients over inanimate ones, revealing constraints that influence speaker choices beyond categorical rules. By integrating experimental validation with corpus frequencies, post-2010 work highlights how repeated exposure strengthens constructional schemas, accounting for individual and typological variation in argument structure.
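The HPSG-style valence cancellation described in this section can be sketched as checking a head's SUBCAT list against the complements it combines with. Real HPSG unifies typed feature structures; here unification is reduced to simple category matching, and the category labels are assumptions for illustration.

```python
# Lexical signs with SUBCAT lists, following the article's "put" example:
# "put" requires a subject NP, an object NP, and a locative PP.
LEXICON = {
    "put": {"SUBCAT": ["NP[subj]", "NP[obj]", "PP[loc]"]},
    "sleep": {"SUBCAT": ["NP[subj]"]},
}

def saturate(head, complements):
    """License a phrase only if the complements match the head's
    SUBCAT list exactly (a stand-in for feature-structure unification)."""
    return complements == LEXICON[head]["SUBCAT"]

print(saturate("put", ["NP[subj]", "NP[obj]", "PP[loc]"]))  # → True: fully saturated
print(saturate("put", ["NP[subj]", "NP[obj]"]))             # → False: locative missing
```

Failure when the locative is absent models why "*She put the book" is ill-formed: an unsaturated valence requirement blocks licensing of the clause.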

Historical development

Early foundations

The concept of arguments in linguistics traces its roots to early structuralist approaches in European linguistics, where scholars emphasized the functional dependencies within sentences. André Martinet, building on the Prague School's functionalist tradition, advanced a view of syntax that highlighted the obligatory relationships between linguistic elements, influencing later valency theories by treating verbal functions as requiring specific complements to fulfill communicative purposes. This European structuralism, particularly through Martinet's work in the mid-20th century, provided a foundation for distinguishing core sentence elements from optional modifiers, underscoring the role of functional necessity in syntactic organization. A pivotal development came with Lucien Tesnière's Éléments de syntaxe structurale (1959), which formalized the notion of valency—the capacity of a verb to require a fixed number of dependents, termed actants. Tesnière likened verbs to atoms with electrons, where actants represent obligatory arguments such as subjects and objects, while circumstants (adjuncts) are optional. This framework shifted focus from linear order to hierarchical dependencies, establishing arguments as essential syntactic slots anchored in the verb's semantic properties. Tesnière's ideas, though published posthumously, bridged European functionalism and emerging syntactic theories by emphasizing the verb's governing role over its arguments. In the American structuralist tradition, Leonard Bloomfield's Language (1933) laid groundwork by analyzing syntax through observable form classes and required complements, viewing sentences as constructions in which certain elements, like direct objects, are indispensable for well-formedness. Bloomfield's distributional approach treated these complements as integral to predicate structures, distinguishing them from free adjuncts on the basis of co-occurrence patterns.
Zellig Harris extended this in Methods in Structural Linguistics (1951), introducing rigorous procedures to identify obligatory elements through substitution tests and distributional classes, thereby formalizing arguments as non-optional constituents in syntactic analysis. These pre-generative foundations culminated in Noam Chomsky's Syntactic Structures (1957), which introduced kernel sentences as a level where underlying arguments are represented before transformations apply. Chomsky argued that such underlying representations capture the core relational content of sentences, such as agent and patient roles, distinguishing them from surface variations. This work marked a transitional step, integrating structuralist notions of obligatoriness with generative mechanisms to explain argument realization. Building on this, Chomsky's Aspects of the Theory of Syntax (1965) introduced subcategorization frames, where verbs lexically specify the types and number of arguments they require, providing a formal basis for argument selection in the lexicon.

Modern developments

In the late 1960s and 1970s, generative semantics emerged as a significant advance in understanding argument structures through lexical decomposition, with key contributions from George Lakoff, John Robert Ross, and James McCawley, who argued that verbs' semantic representations involve breaking meanings down into primitive predicates to account for argument realization and syntactic irregularities. This approach posited deep semantic structures in which arguments are linked via abstract causative or inchoative components, influencing later theories of how lexical items select and project their arguments in syntax. Around the same time, Fillmore's case grammar (1968) proposed that sentences have a deep structure with a verb and one or more case roles (e.g., agentive, instrumental, objective) filled by arguments, emphasizing semantic relations over purely syntactic ones and paving the way for thematic role theories. During the 1980s Government and Binding (GB) era, Chomsky formalized theta theory as a module constraining argument linking, requiring each argument to receive a unique thematic role (such as agent or theme) from the predicate while ensuring that structural positions align with these roles through government relations. This framework addressed the mapping between lexical semantics and syntax by integrating Case theory with theta-role assignment, preventing mismatches like unassigned roles or overgeneration of arguments. Complementing this, Beth Levin's 1993 classification of English verbs into alternation classes highlighted how shared argument structure behaviors—such as the causative-inchoative alternation (e.g., break the window vs. the window breaks)—reveal systematic patterns in verb semantics and syntax. The Minimalist Program, developed by Chomsky from the 1990s onward, streamlined argument licensing by emphasizing uninterpretable features on functional heads (like v and T) that drive the Agree operation, allowing phi-features (person, number, gender) to match and be checked between probes and goals without heavy reliance on movement for case assignment.
This shift prioritized economy in derivations, where arguments are licensed locally via feature valuation, reducing the apparatus of earlier models while preserving theta-role projections from little v. Distributed Morphology, developed by Morris Halle and Alec Marantz in the 1990s, further integrated lexical argument structure by distributing vocabulary insertion across syntactic and post-syntactic levels, treating roots as category-neutral elements that combine with functional heads to realize arguments morphologically. In recent decades, research has extended argument structure analysis to sign languages, where studies show that classifiers and spatial mappings encode thematic roles modally yet conform to universal principles like the Theta Criterion. Similarly, investigations into creoles reveal hybrid argument structures influenced by substrate languages, with Atlantic creoles exemplifying features like serial verb constructions that allow argument sharing across predicates, reflecting African substrate influences. Computational linguistics has advanced neural models for semantic role labeling using transformer-based architectures trained on datasets like PropBank, achieving high performance through attention mechanisms that model predicate-argument dependencies.

Applications and significance

Psycholinguistic perspectives

Psycholinguistic research has demonstrated an asymmetry in how the parser integrates arguments and adjuncts during sentence comprehension, with arguments being incorporated more rapidly and efficiently than adjuncts. This distinction arises because arguments are lexically specified by the predicate and obligatory for semantic completeness, whereas adjuncts provide optional modification. Reading time studies show that phrases interpreted as arguments elicit shorter latencies than those interpreted as adjuncts, as seen in experiments where prepositional phrases following verbs were disambiguated by plausibility, leading to faster integration when aligned with the verb's argument structure. Event-related potential (ERP) studies further support this asymmetry, revealing distinct neural responses to disruptions in argument versus adjunct integration. Specifically, omissions or violations of expected arguments elicit a robust N400 component, a negative-going waveform peaking around 400 ms post-stimulus, reflecting difficulties in semantic integration and thematic role assignment. For instance, when a verb's required argument is missing, the N400 amplitude increases over posterior sites, indicating heightened effort to resolve the incomplete event structure. In contrast, similar disruptions to adjuncts do not consistently trigger the N400, suggesting that the brain prioritizes obligatory elements in building sentence meaning. The complexity of a verb's argument structure also influences processing, particularly in cases of syntactic ambiguity that lead to garden-path effects—temporary misanalyses requiring re-parsing. A classic example is the sentence "The horse raced past the barn fell", where "raced" is initially parsed as the main verb with "past the barn" as its argument (a directional phrase), but the correct structure treats "raced" as a past participle in a reduced relative clause, making the PP an adjunct. This misparse arises from the verb's subcategorization preferences and results in delayed reading times and increased error rates during reanalysis.
Frequency effects on valency expectations exacerbate such effects; verbs with frequent transitive frames bias parsers toward transitive interpretations, leading to stronger garden-path disruptions when a low-frequency intransitive frame is required. In language acquisition, children demonstrate sensitivity to argument structure by initially overgeneralizing frames, treating verbs as more flexible than adult grammars allow. Bowerman's longitudinal studies of young children's speech reveal errors in locative alternations, such as extending a locative frame like "put the socks in the drawer" to verbs that do not permit it, producing ungrammatical utterances like "I'm opening the drawer the socks". These overgeneralizations stem from children's productivity rules based on semantic similarity, but retreat occurs through mechanisms like preemption from input evidence, where correct alternatives block erroneous frames. By age 4–5, children restrict verbs to their canonical argument structures, reflecting the acquisition of verb-specific lexical constraints. Neuroimaging data from the 2010s point to distinct cortical regions for argument versus adjunct integration, addressing gaps in earlier behavioral work. Functional magnetic resonance imaging (fMRI) studies show that processing argument relations activates the posterior superior temporal sulcus (pSTS) and anterior inferior frontal gyrus (aIFG) more strongly than adjunct attachments, with overlapping regions involved in unifying event concepts across both but showing parametric increases for obligatory theta-role assignments. For example, verb phrases requiring argument integration elicit bilateral inferior parietal activation, contrasting with adjuncts' reliance on general syntactic mechanisms in left perisylvian areas. These findings indicate specialized neural circuitry for the syntax-semantics interface in argument processing.

Implications for language processing and typology

In natural language processing, the concept of argument structure plays a crucial role in tasks such as semantic role labeling (SRL), where systems identify and annotate the roles that phrases play in relation to predicates in sentences. PropBank, a key resource for English, provides verb-specific annotations of core and peripheral arguments, enabling models to predict roles like agent or patient with accuracies exceeding 80% in supervised settings, as demonstrated in dependency-based SRL systems. This annotation framework supports downstream applications in natural language understanding by mapping syntactic dependencies to semantic arguments, facilitating more robust interpretation of complex sentences. In machine translation, argument structure helps address valency mismatches between source and target languages, where verbs differ in the number or type of required arguments. For instance, translation systems must handle cases where a transitive verb in one language corresponds to an intransitive form in another, often requiring structural adjustments to preserve semantic fidelity; studies on frame-semantic mismatches show that such shifts can involve adding or dropping arguments, impacting translation quality in up to 20% of verb instances in parallel corpora. Typological studies reveal significant variations in argument structure across languages, influencing how arguments are encoded and realized. In Bantu languages, applicative constructions use a verbal affix to increase valency, promoting a peripheral argument (such as a beneficiary) to core object status, thereby expanding the base verb's argument frame without altering its inherent transitivity. This results in symmetrical or asymmetrical object behavior, where both the original and applied objects may alternate as passivized elements, as seen in languages like Ndendeule. Ergative alignment systems further diverge from accusative patterns by coding the single argument of intransitive verbs identically to the patient of transitives, affecting argument realization in case and agreement morphology.
In split-ergative languages, this leads to context-dependent marking, such as ergative alignment conditioned by tense or aspect, which complicates cross-linguistic comparisons and requires decoupled models of argument realization to capture prominence hierarchies. In second language acquisition (SLA), learners frequently produce errors in argument realization due to transfer from their first language's argument structure, particularly with verbs exhibiting variable valency. Korean-English learners, for example, overgeneralize intransitive patterns to transitive verbs or misuse passive forms with unaccusative verbs, reflecting challenges in mapping L1 argument frames to L2 constructions. These errors persist into intermediate stages, highlighting the need for targeted instruction on predicate-argument alternations. Aphasia research underscores argument omission as a hallmark deficit, especially in agrammatic aphasia, where patients produce verbs without their obligatory arguments, reducing sentence complexity. Production studies show that individuals with agrammatism exhibit greater difficulty producing unaccusative verbs than unergative ones due to argument structure complexity, suggesting impairments in thematic role assignment rather than verb selection alone. The integration of argument structure into large language models (LLMs) enhances their ability to generate coherent text by enforcing predicate-argument consistency, as evidenced by analyses of BERT's handling of argument structure constructions (ASCs). In argument mining tasks, LLMs like GPT variants achieve F1 scores above 0.75 for extracting argument components from texts. Societally, this informs the interpretation of contractual or statutory language in legal contexts, where precise encoding of predicate relations helps resolve ambiguities.
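A PropBank-style predicate-argument record, as used in the SRL systems discussed above, can be sketched as follows. The sentence and role fillers are invented for illustration; the labeling convention (numbered core arguments ARG0-ARG2 plus ARGM-* modifier tags for adjunct-like elements) follows PropBank's general scheme, with give.01 as the frame's sense label.

```python
# PropBank-style annotation for "The teacher gave the student a book yesterday":
# numbered args are core (ARG0 ≈ agent, ARG1 ≈ theme, ARG2 ≈ recipient),
# ARGM-* tags mark adjunct-like modifiers.
annotation = {
    "predicate": "give.01",
    "args": {
        "ARG0": "the teacher",    # giver
        "ARG1": "a book",         # thing given
        "ARG2": "the student",    # recipient
        "ARGM-TMP": "yesterday",  # temporal modifier (adjunct-like)
    },
}

# Separating core arguments from modifiers mirrors the
# argument/adjunct distinction that runs through this article.
core = {k: v for k, v in annotation["args"].items() if not k.startswith("ARGM")}
modifiers = {k: v for k, v in annotation["args"].items() if k.startswith("ARGM")}
print(sorted(core), sorted(modifiers))  # → ['ARG0', 'ARG1', 'ARG2'] ['ARGM-TMP']
```

An SRL system's job is to recover exactly this structure from raw text: identify the predicate, then label each dependent span as a numbered core argument or an ARGM modifier.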

References
