Semantic role labeling

In natural language processing, semantic role labeling (also called shallow semantic parsing or slot-filling) is the process of assigning labels to words or phrases in a sentence that indicate their semantic role in the sentence, such as agent, goal, or result.

The task helps recover the shallow meaning of a sentence: it detects the arguments associated with the predicate or verb of a sentence and classifies them into their specific roles. A common example is the sentence "Mary sold the book to John." The agent is "Mary," the predicate is "sold" (or rather, "to sell"), the theme is "the book," and the recipient is "John." As another example, "the book belongs to me" would need labels such as "possessor" and "possessed," while "the book was sold to John" would need labels such as "theme" and "recipient," even though the corresponding grammatical functions (subject and object) are similar in both clauses.[1]
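The example above can be made concrete with a small sketch of what a labeler's output might look like. The following Python snippet is purely illustrative: the Argument and Frame classes and the hand-filled annotation for "Mary sold the book to John" are hypothetical, intended only to show how predicate–argument structure is commonly represented (a predicate anchoring a set of labeled argument spans), not the interface of any particular semantic role labeling system.

```python
from dataclasses import dataclass


@dataclass
class Argument:
    """A labeled argument of a predicate (hypothetical representation)."""
    text: str  # the words filling the role
    role: str  # the semantic role label, e.g. "agent", "theme", "recipient"


@dataclass
class Frame:
    """A predicate together with its labeled arguments."""
    predicate: str
    arguments: list[Argument]


# Hand-written annotation for "Mary sold the book to John."
# A real SRL system would predict these spans and labels automatically.
frame = Frame(
    predicate="sold",
    arguments=[
        Argument(text="Mary", role="agent"),
        Argument(text="the book", role="theme"),
        Argument(text="John", role="recipient"),
    ],
)

for argument in frame.arguments:
    print(f"{argument.role:>10}: {argument.text}")
```

Running the sketch simply prints each argument next to its role (agent: Mary, theme: the book, recipient: John).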

History

In 1968, the first idea for semantic role labeling was proposed by Charles J. Fillmore.[2] His proposal led to the FrameNet project, which produced the first major computational lexicon that systematically described many predicates and their corresponding roles. Daniel Gildea (currently at the University of Rochester, previously at the University of California, Berkeley, and the International Computer Science Institute) and Daniel Jurafsky (currently at Stanford University, previously at the University of Colorado and UC Berkeley) developed the first automatic semantic role labeling system based on FrameNet. The PropBank corpus added manually created semantic role annotations to the Penn Treebank corpus of Wall Street Journal texts. Many automatic semantic role labeling systems have used PropBank as a training dataset to learn how to annotate new sentences automatically.[3]

Uses

Semantic role labeling is used primarily so that machines can understand the roles that words play within sentences.[4] This benefits natural language processing applications that need to understand not only the words of a language, but also how those words are used in different sentences.[5] Advances in semantic role labeling could lead to improvements in question answering, information extraction, automatic text summarization, text data mining, and speech recognition.[6]

References

  1. ^ Laux, Michael (2019-01-13). "If you did not already know". SunJackson Blog (in Simplified Chinese). Retrieved 2020-12-08.
  2. ^ Boas, Hans; Dux, Ryan. "From the past into the present: From case frames to semantic frames" (PDF).
  3. ^ Gildea, Daniel; Jurafsky, Daniel (2000). "Automatic labeling of semantic roles". Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics - ACL '00. Hong Kong: Association for Computational Linguistics: 512–520. doi:10.3115/1075218.1075283.
  4. ^ Nizamani, Sarwat; Memon, Nasrullah; Nizamani, Saad; Nizamani, Sehrish (August 2017). "TDC: Typed Dependencies-Based Chunking Model". Arabian Journal for Science and Engineering. 42 (8): 3585–3595. doi:10.1007/s13369-017-2587-y. ISSN 2193-567X. S2CID 67233431.
  5. ^ Park, Jaehui (2019). "Selectively Connected Self-Attentions for Semantic Role Labeling". Applied Sciences. 9 (8) – via ProQuest.
  6. ^ Gildea, Daniel; Jurafsky, Daniel (2002). "Automatic Labeling of Semantic Roles" (PDF). Computational Linguistics. 28 (3): 245–288.