Lexical Semantics
Lexical semantics (also known as lexicosemantics), as a subfield of linguistic semantics, is the study of word meanings.[1][2] It includes the study of how words structure their meaning, how they act in grammar and compositionality,[1] and the relationships between the distinct senses and uses of a word.[2]
The units of analysis in lexical semantics are lexical units, which include not only words but also sub-units such as affixes, as well as compound words and phrases. Lexical units make up the catalogue of words in a language, the lexicon. Lexical semantics looks at how the meaning of lexical units correlates with the structure of the language or syntax. This is referred to as the syntax-semantics interface.[3]
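As an informal illustration (not drawn from the cited sources), a lexicon can be sketched in Python as a mapping from lexical units, including affixes and multiword phrases, to semantic information; every entry, field name, and gloss below is invented for the example.

```python
# A toy lexicon: lexical units (a word, an affix, a phrase) mapped to glosses.
# All entries and the gloss format are invented for illustration.
lexicon = {
    "dog": {"unit_type": "word", "gloss": "domesticated canine"},
    "-er": {"unit_type": "affix", "gloss": "one who performs the action of the base verb"},
    "kick the bucket": {"unit_type": "phrase", "gloss": "to die (idiomatic)"},
}

def look_up(unit):
    """Return the gloss for a lexical unit, or a note if it is not listed."""
    entry = lexicon.get(unit)
    return entry["gloss"] if entry else "not in this toy lexicon"

print(look_up("-er"))  # one who performs the action of the base verb
print(look_up("cat"))  # not in this toy lexicon
```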
Cognitive semantics is the linguistic paradigm/framework that since the 1980s has generated the most studies in lexical semantics, introducing innovations like prototype theory, conceptual metaphors, and frame semantics.[5]
Lexical items contain information about category (lexical and syntactic), form, and meaning. The semantics associated with these categories is then related to each lexical item in the lexicon.[6] Lexical items can also be semantically classified based on whether their meanings are derived from single lexical units or from their surrounding environment.
Lexical semantics also explores whether the meaning of a lexical unit is established by looking at its neighbourhood in the semantic net (the words it occurs with in natural sentences), or whether the meaning is already locally contained in the lexical unit.
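The "neighbourhood" view can be given a rough computational reading: approximate a word's meaning by the words it co-occurs with in sentences. The following Python sketch, with an invented three-sentence corpus, is illustrative only and not a method proposed in the sources cited here.

```python
from collections import Counter

# A minimal sketch of the "neighbourhood" view: characterize a word by
# the words it co-occurs with in sentences. The corpus is invented.
corpus = [
    "the chef will boil the rice",
    "the chef will fry the rice",
    "boil the water before you fry the egg",
]

def neighbourhood(word):
    """Count the words that co-occur with `word` in the same sentence."""
    counts = Counter()
    for sentence in corpus:
        tokens = sentence.split()
        if word in tokens:
            counts.update(t for t in tokens if t != word)
    return counts

print(neighbourhood("boil").most_common(3))
# e.g. [('the', 4), ('chef', 1), ('will', 1)]
```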
First proposed by Trier in the 1930s,[7] semantic field theory proposes that a group of words with interrelated meanings can be categorized under a larger conceptual domain; such a grouping is known as a semantic field. The words boil, bake, fry, and roast, for example, would fall under the larger semantic category of cooking. Semantic field theory asserts that lexical meaning cannot be fully understood by looking at a word in isolation, but only by looking at a group of semantically related words.[8] Semantic relations can refer to any relationship in meaning between lexemes, including synonymy (big and large), antonymy (big and small), hypernymy and hyponymy (rose and flower), converseness (buy and sell), and incompatibility. Semantic field theory does not have concrete guidelines that determine the extent of semantic relations between lexemes, and the abstract validity of the theory is a subject of debate.[7]
Knowing the meaning of a lexical item therefore means knowing the semantic entailments the word brings with it. However, it is also possible to understand only one word of a semantic field without understanding other related words. Take, for example, a taxonomy of plants and animals: it is possible to understand the words rose and rabbit without knowing what a marigold or a muskrat is. This applies to colors as well: one can understand the word red without knowing the meaning of scarlet, but understanding scarlet without knowing the meaning of red is less likely. A semantic field can thus be very large or very small, depending on the level of contrast being made between lexical items. While cat and dog both fall under the larger semantic field of animal, including a breed of dog, such as German shepherd, would require contrasts with other dog breeds (e.g. corgi or poodle), thus expanding the semantic field further.[9]
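These fields and relations lend themselves to a simple data-structure sketch. The Python example below is an informal illustration with invented, non-exhaustive word lists, not a formalization endorsed by the sources.

```python
# A sketch of semantic fields and the relations listed above.
# Word lists are illustrative, not exhaustive.
semantic_fields = {
    "cooking": {"boil", "bake", "fry", "roast"},
    "animal": {"cat", "dog"},
    "dog breeds": {"German shepherd", "corgi", "poodle"},
}

relations = {
    "synonymy":     [("big", "large")],
    "antonymy":     [("big", "small")],
    "hyponymy":     [("rose", "flower")],  # (hyponym, hypernym)
    "converseness": [("buy", "sell")],
}

def contrast_set(word):
    """Return the other members of any semantic field containing `word`."""
    return {w for members in semantic_fields.values() if word in members
            for w in members if w != word}

print(contrast_set("boil"))   # {'bake', 'fry', 'roast'}
print(contrast_set("corgi"))  # {'German shepherd', 'poodle'}
```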
The analysis of these different lexical units had a decisive role in the field of "generative linguistics" during the 1960s.[12] The term generative was proposed by Noam Chomsky in his book Syntactic Structures, published in 1957. Generative linguistics was based on Chomsky's generative grammar, a linguistic theory that states that systematic sets of rules (X' theory) can predict grammatical phrases within a natural language.[13] A later development within generative linguistics is Government and Binding theory. Generative linguists of the 1960s, including Noam Chomsky and Ernst von Glasersfeld, believed that the semantic relations between transitive and intransitive verbs were tied to their independent syntactic organization.[12] This meant that they saw a simple verb phrase as encompassing a more complex syntactic structure.[12]
Lexicalist theories became popular during the 1980s and emphasized that a word's internal structure is a question of morphology, not of syntax.[14] Lexicalist theories emphasized that complex words (resulting from compounding and the derivation of affixes) have lexical entries that are derived through morphology, rather than resulting from overlapping syntactic and phonological properties, as generative linguistics predicts. The distinction can be illustrated by the relationship between destroy and destruction: on a generative analysis, destruction is derived from destroy by syntactic transformation, whereas on a lexicalist analysis destruction has its own morphologically derived lexical entry.
A lexical entry lists the basic properties of either the whole word or the individual morphemes that make up the word itself. The properties of lexical items include their category selection (c-selection), selectional properties (s-selection, also known as semantic selection),[12] phonological properties, and features. The properties of lexical items are idiosyncratic and unpredictable, and contain specific information about the lexical items they describe.[12]
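A lexical entry of this kind can be mocked up as a record type. The Python sketch below is an informal illustration under invented field names; the sample values for the verb put are assumptions for the example, not an analysis from the cited literature.

```python
from dataclasses import dataclass, field

# A toy lexical entry recording the property types named above. The field
# names and the sample values for "put" are invented for illustration.
@dataclass
class LexicalEntry:
    form: str                  # phonological form
    category: str              # lexical/syntactic category
    c_selection: list = field(default_factory=list)  # categories selected, e.g. DP
    s_selection: list = field(default_factory=list)  # semantic roles selected
    features: dict = field(default_factory=dict)     # idiosyncratic information

put = LexicalEntry(
    form="put",
    category="V",
    c_selection=["DP", "PP"],                   # "put [the book] [on the shelf]"
    s_selection=["agent", "theme", "location"],
    features={"transitive": True},
)
print(put.c_selection)  # ['DP', 'PP']
```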
By the early 1990s, Chomsky's minimalist framework on language structure led to sophisticated probing techniques for investigating languages.[15] These probing techniques analyzed negative data over prescriptive grammars, and because of Chomsky's Extended Projection Principle (EPP), proposed in 1986, probing techniques could show where the specifiers of a sentence had moved in order to fulfill the EPP. This allowed syntacticians to hypothesize that lexical items with complex syntactic features (such as ditransitive, inchoative, and causative verbs) could select their own specifier element within a syntax tree construction. (For more on probing techniques, see Suci, G., Gammon, P., & Gamlin, P. (1979).)
This brought the focus back to the syntax-lexical semantics interface; however, syntacticians still sought to understand the relationship between complex verbs and their related syntactic structure, and to what degree the syntax was projected from the lexicon, as lexicalist theories argued.
In the mid-1990s, the linguists Heidi Harley, Samuel Jay Keyser, and Kenneth Hale addressed some of the implications posed by complex verbs and a lexically derived syntax. Their proposals indicated that the predicates CAUSE and BECOME, referred to as subunits within a verb phrase, acted as a lexical semantic template.[16] Predicates are verbs and state or affirm something about the subject or the argument of the sentence: for example, the predicates went and is here affirm, respectively, an action and a state of the subject.
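The CAUSE/BECOME template can be sketched as nested structures. The Python example below uses an invented tuple notation and an invented example verb (open); it is a simplified reading of the decomposition idea, not the authors' own formalism.

```python
# A sketch of CAUSE and BECOME as subunits of a verb phrase, written as
# nested tuples. The notation is invented for illustration.
def become(state, theme):
    """Change-of-state subevent: the theme comes to be in the state."""
    return ("BECOME", state, theme)

def cause(agent, event):
    """Causing subevent: the agent brings the embedded event about."""
    return ("CAUSE", agent, event)

# "The door opened."      -> BECOME(open, the door)
# "Mary opened the door." -> CAUSE(Mary, BECOME(open, the door))
print(become("open", "the door"))
print(cause("Mary", become("open", "the door")))
```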
Kenneth Hale and Samuel Jay Keyser introduced their thesis on lexical argument structure during the early 1990s.[19] They argue that a predicate's argument structure is represented in the syntax, and that the syntactic representation of the predicate is a lexical projection of its arguments. Thus, the structure of a predicate is strictly a lexical representation, where each phrasal head projects its argument onto a phrasal level within the syntax tree. The selection of this phrasal head is based on Chomsky's Empty Category Principle. This lexical projection of the predicate's argument onto the syntactic structure is the foundation for the Argument Structure Hypothesis.[19] The idea coincides with Chomsky's Projection Principle, because it forces a VP to be selected locally and to be selected by a Tense Phrase (TP).
Based on the interaction between lexical properties, locality, and the properties of the EPP (where a phrasal head selects another phrasal element locally), Hale and Keyser claim that the specifier position and the complement are the only two semantic relations that can project a predicate's argument. In 2003, Hale and Keyser put forward this hypothesis, arguing that a lexical unit must have one or the other, specifier or complement, but cannot have both.[20]
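The specifier-or-complement constraint can be mocked up as an invariant on a binary-branching node. The Python class below is invented for illustration; the head labels are placeholders, and the check simply enforces the claim stated above.

```python
from dataclasses import dataclass
from typing import Optional

# A toy binary-branching node enforcing the claim above: a head projects
# either a specifier or a complement, not both. The class is invented.
@dataclass
class Projection:
    head: str
    specifier: Optional["Projection"] = None
    complement: Optional["Projection"] = None

    def __post_init__(self):
        if self.specifier is not None and self.complement is not None:
            raise ValueError(f"{self.head!r} projects a specifier or a "
                             "complement, but not both")

# A verb head taking only a prepositional complement satisfies the constraint:
vp = Projection(head="V:shelve", complement=Projection(head="P:on"))
print(vp.head, "->", vp.complement.head)
```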
Morris Halle and Alec Marantz introduced the notion of distributed morphology in 1993.[21] This theory views the syntactic structure of words as a result of morphology and semantics, instead of the morpho-semantic interface being predicted by the syntax. Essentially, the idea is that, under the Extended Projection Principle, there is a local boundary under which a special meaning occurs; this meaning can arise only if a head-projecting morpheme is present within the local domain of the syntactic structure.[22] In the tree structure proposed by distributed morphology for the phrase "John's destroying the city", destroy is the root, v-1 represents verbalization, and D represents nominalization.[22]
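Since the tree diagram itself is not reproduced here, the structure just described can be rendered as labelled brackets. The Python sketch below is an informal, simplified stand-in for that diagram; the exact bracketing is an assumption based on the description above.

```python
# An informal stand-in for the tree diagram described above, written as
# labelled brackets: the root DESTROY is verbalized by v-1 and then
# nominalized by D. The bracketing is a simplified sketch.
tree = ("DP",
        ("Spec", "John's"),
        ("D",                    # nominalizing head
         ("v-1",                 # verbalizing head
          ("Root", "destroy"),
          ("DP", "the city"))))

def brackets(node):
    """Render a nested tuple as labelled brackets, e.g. [Root destroy]."""
    if isinstance(node, str):
        return node
    label, *children = node
    return "[" + label + " " + " ".join(brackets(c) for c in children) + "]"

print(brackets(tree))
# [DP [Spec John's] [D [v-1 [Root destroy] [DP the city]]]]
```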
In her 2008 book, Verb Meaning and the Lexicon: A First Phase Syntax, the linguist Gillian Ramchand acknowledges the roles of lexical entries in the selection of complex verbs and their arguments.[23] 'First-phase' syntax proposes that event structure and event participants are directly represented in the syntax by means of binary branching. This branching ensures that the specifier is consistently the subject, even when investigating the projection of a complex verb's lexical entry and its corresponding syntactic construction. This generalization is also present in Ramchand's theory that the complement of a head for a complex verb phrase must co-describe the verb's event.
The change-of-state property of verb phrases (VPs) is a significant observation for the syntax of lexical semantics because it provides evidence that subunits are embedded in the VP structure, and that the meaning of the entire VP is influenced by this internal grammatical structure. (For example, the VP the vase broke carries a change-of-state meaning of the vase becoming broken, and thus has a silent BECOME subunit within its underlying structure.) There are two types of change-of-state predicates: inchoative and causative.
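The inchoative/causative contrast can be shown with simplified underlying decompositions. The Python sketch below uses the same invented tuple notation as the earlier CAUSE/BECOME example; the logical forms are a rough illustration of the silent BECOME subunit, not a formal analysis from the sources.

```python
# The inchoative/causative pair for "break", with simplified underlying
# decompositions showing the silent BECOME subunit; notation is invented.
alternation = {
    "The vase broke.":      ("BECOME", "broken", "the vase"),
    "John broke the vase.": ("CAUSE", "John", ("BECOME", "broken", "the vase")),
}

for sentence, logical_form in alternation.items():
    print(f"{sentence:22} => {logical_form}")
```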