{"id":942,"date":"2021-08-09T05:17:54","date_gmt":"2021-08-09T05:17:54","guid":{"rendered":"https:\/\/achrafothman.net\/site\/?p=942"},"modified":"2021-10-24T03:43:18","modified_gmt":"2021-10-24T03:43:18","slug":"overview-of-text-to-gloss-in-computational-sign-language-processing-slp","status":"publish","type":"post","link":"https:\/\/achrafothman.net\/site\/overview-of-text-to-gloss-in-computational-sign-language-processing-slp\/","title":{"rendered":"Overview of Text-to-Gloss in Computational Sign Language Processing (SLP)"},"content":{"rendered":"<p><span style=\"font-size: 10pt;\">Authors: Achraf Othman<\/span><\/p>\n<hr class=\"hrline\" \/>\n<p><span style=\"font-size: 10pt;\">Research and Innovation Letters \u2022 Volume 1 \u2022 Issue 1 \u2022 August 2021 \u2022 Published: August 9, 2021 \u2022 <a href=\"https:\/\/achrafothman.net\/site\/wp-content\/uploads\/Overview-of-Text-to-Gloss-in-Computational-Sign-Language-Processing-SLP-1.pdf\" target=\"_blank\" rel=\"noopener\">PDF<\/a><\/span><\/p>\n<hr class=\"hrline\" \/>\n<div class=\"abstract\"><strong>Abstract- <\/strong><\/div>\n<p>Digital accessibility to web content for people who are deaf or hard of hearing, particularly those with a low level of literacy, is becoming increasingly critical (<span style=\"color: #0000ff;\">Al Thani et al., 2019, Lahiri et al., 2020<\/span>). Several applications have been developed to address this challenge (<span style=\"color: #0000ff;\">Othman et al., 2019<\/span>). Today, the solutions proposed in the literature to improve digital accessibility to web content and media for people with hearing disabilities remain limited, and for Sign Language (SL) they are often nonexistent (<span style=\"color: #0000ff;\">Jemni et al., 2013<\/span>). Some of these applications set up conversational agents based on three-dimensional animated characters, called avatars, that translate written text into SL. 
All these tools fail to take into account the intonation and rhythm of the uttered words (<span style=\"color: #0000ff;\">Kipp et al., 2011<\/span>). This greatly reduces the quality of the translated web content and can even make it incomprehensible; the alternative, videos of the content signed by interpreters or Deaf signers, comes at a very high cost. Moreover, every Sign Language raises further challenges rooted in the nature of the language itself: moving from written text to SL requires several levels of linguistic processing to reach the cognitive meaning of the sentence.<\/p>\n<p>Transcription is the operation that substitutes a grapheme, or a group of graphemes, of a writing system for every phoneme or every sound. It therefore depends on the target language: a single phoneme may correspond to different graphemes depending on the language considered. In short, transcription is the writing of spoken words or sentences in a given system. Transcription also aims to be lossless: ideally, it should be possible to reconstitute the original pronunciation from the transcript by knowing the transcription rules.<\/p>\n<p>Text-to-gloss, also known as sign language translation, is the task of translating between spoken language text and sign language glosses. <span style=\"color: #0000ff;\">Zhao et al. (2000)<\/span> used a Tree Adjoining Grammar (TAG) based system for translating between English sentences and American Sign Language glosses. They parse the English text and simultaneously assemble an American Sign Language gloss tree, using Synchronous TAGs (<span style=\"color: #0000ff;\">Shieber and Schabes 1990; Shieber 1994<\/span>), by associating the ASL elementary trees with the English elementary trees and associating the nodes at which subsequent substitutions or adjunctions can take place. 
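To make the gloss notation concrete, here is a minimal, purely illustrative sketch of the text-to-gloss idea. It is not Zhao et al.'s system nor any production pipeline: glosses are conventionally written as uppercase lemmas, and this toy converter merely drops a few English function words and uppercases the rest, using a tiny hand-written lemma table in place of real morphological and syntactic analysis.

```python
# Illustrative only: a naive rule-based English -> ASL-gloss sketch.
# Real systems use full syntactic transfer (e.g., Synchronous TAGs),
# not word-by-word substitution; word order is left unchanged here.

DROPPED = {"a", "an", "the", "is", "are", "am", "was", "were", "to", "be"}

# Tiny hand-written lemma table standing in for morphological analysis.
LEMMAS = {"likes": "LIKE", "books": "BOOK", "going": "GO"}

def text_to_gloss(sentence: str) -> str:
    """Uppercase content words, drop listed function words, keep order."""
    glosses = []
    for token in sentence.lower().rstrip(".?!").split():
        if token in DROPPED:
            continue
        glosses.append(LEMMAS.get(token, token.upper()))
    return " ".join(glosses)

print(text_to_gloss("The boy likes the books."))  # BOY LIKE BOOK
```

Even this toy example shows why the task is hard: real glossing must also handle non-manual features, classifier constructions, and reordering, which is exactly the linguistic processing discussed above.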
Synchronous TAGs have been used for machine translation between spoken languages (<span style=\"color: #0000ff;\">Abeill\u00e9, Schabes, and Joshi 1991<\/span>), but this was the first application to a signed language.<\/p>\n<p>For automatic translation between spoken language text and glosses, <span style=\"color: #0000ff;\">Othman and Jemni (2012)<\/span> identified the need for a large parallel corpus of sign language glosses and spoken language text. They developed a part-of-speech-based grammar to transform English sentences taken from the Project Gutenberg ebook collection (<span style=\"color: #0000ff;\">Lebert 2008<\/span>) into American Sign Language gloss. Their final corpus contains over 100 million synthetic sentences and 800 million words, and is the largest English-ASL gloss corpus that we know of. Unfortunately, it is hard to attest to the quality of the corpus: the method was not evaluated on real English-ASL gloss pairs, and only a small sample of the corpus is available online.<\/p>\n<h2>Acknowledgment<\/h2>\n<p><a href=\"https:\/\/www.pexels.com\/photo\/woman-in-yellow-long-sleeve-shirt-sitting-beside-man-in-yellow-long-sleeve-shirt-6322051\/\" target=\"_blank\" rel=\"noopener\">Photo by cottonbro from Pexels<\/a><\/p>\n<h2>References<\/h2>\n<div id=\"ref-lebert2008project\">\n<div id=\"ref-abeille1991using\">\n<ul>\n<li>Abeill\u00e9, Anne, Yves Schabes, and Aravind K Joshi. 1991. \u201cUsing Lexicalized Tags for Machine Translation.\u201d <span style=\"font-size: 10pt; color: #0000ff;\"><a style=\"color: #0000ff;\" href=\"https:\/\/scholar.google.fr\/scholar?cluster=15057618694470041028&amp;hl=en&amp;as_sdt=0,5\" target=\"_blank\" rel=\"noopener\">Google Scholar<\/a><\/span><\/li>\n<li>Al Thani, D., Al Tamimi, A., Othman, A., Habib, A., Lahiri, A., &amp; Ahmed, S. (2019, December). Mada Innovation Program: A Go-to-Market ecosystem for Arabic Accessibility Solutions. 
In <i style=\"font-size: 19px;\">2019 7th International Conference on ICT &amp; Accessibility (ICTA)<\/i><span style=\"font-size: 19px;\"> (pp. 1-3). IEEE. <span style=\"font-size: 10pt; color: #0000ff;\"><a style=\"color: #0000ff;\" href=\"https:\/\/scholar.google.fr\/scholar?hl=en&amp;as_sdt=0%2C5&amp;q=Mada+Innovation+Program%3A+A+Go-to-Market+ecosystem+for+Arabic+Accessibility+Solutions&amp;btnG=\" target=\"_blank\" rel=\"noopener\">Google Scholar<\/a><\/span><\/span><\/li>\n<li>Jemni, M., Semreen, S., Othman, A., Tmar, Z., &amp; Aouiti, N. (2013, October). Toward the creation of an Arab Gloss for Arabic Sign Language annotation. In <i style=\"font-size: 19px;\">Fourth International Conference on Information and Communication Technology and Accessibility (ICTA)<\/i><span style=\"font-size: 19px;\"> (pp. 1-5). IEEE. <span style=\"color: #0000ff; font-size: 10pt;\"><a style=\"color: #0000ff;\" href=\"https:\/\/scholar.google.fr\/scholar?cluster=7399695397053325555&amp;hl=en&amp;as_sdt=0,5\" target=\"_blank\" rel=\"noopener\">Google Scholar<\/a><\/span><\/span><\/li>\n<li>Kipp, M., Heloir, A., &amp; Nguyen, Q. (2011). Sign Language Avatars: Animation and Comprehensibility. In International Workshop on Intelligent Virtual Agents (pp. 113-126). <span style=\"color: #0000ff; font-size: 10pt;\"><a style=\"color: #0000ff;\" href=\"https:\/\/scholar.google.fr\/scholar?cluster=18372178401152792786&amp;hl=en&amp;as_sdt=0,5\" target=\"_blank\" rel=\"noopener\">Google Scholar<\/a><\/span><\/li>\n<li>Lahiri, A., Othman, A., Al-Thani, D. A., &amp; Al-Tamimi, A. (2020, September). Mada Accessibility and Assistive Technology Glossary: A Digital Resource of Specialized Terms. In <i style=\"font-size: 19px;\">ICCHP<\/i><span style=\"font-size: 19px;\"> (p. 207). 
<span style=\"color: #0000ff; font-size: 10pt;\"><a style=\"color: #0000ff;\" href=\"https:\/\/scholar.google.fr\/scholar?cluster=3285415367295847696&amp;hl=en&amp;as_sdt=0,5\" target=\"_blank\" rel=\"noopener\">Google Scholar<\/a><\/span><\/span><\/li>\n<li>Lebert, Marie. 2008. \u201cProject Gutenberg (1971-2008).\u201d Project Gutenberg.<\/li>\n<li>Othman, Achraf, and Mohamed Jemni. 2012. \u201cEnglish-ASL Gloss Parallel Corpus 2012: ASLG-PC12.\u201d In <em style=\"font-size: 19px;\">5th Workshop on the Representation and Processing of Sign Languages: Interactions Between Corpus and Lexicon LREC<\/em><span style=\"font-size: 19px;\">. <span style=\"color: #0000ff; font-size: 10pt;\"><a style=\"color: #0000ff;\" href=\"https:\/\/scholar.google.fr\/scholar?cluster=3851256601789815156&amp;hl=en&amp;as_sdt=0,5\" target=\"_blank\" rel=\"noopener\">Google Scholar<\/a><\/span><\/span><\/li>\n<li>Othman, A., and Jemni, M. 2019. \u201cDesigning High Accuracy Statistical Machine Translation for Sign Language Using Parallel Corpus: Case Study English and American Sign Language.\u201d Journal of Information Technology Research (JITR) 12 (2). <span style=\"color: #0000ff; font-size: 10pt;\"><a style=\"color: #0000ff;\" href=\"https:\/\/scholar.google.fr\/scholar?cluster=13466601975042375962&amp;hl=en&amp;as_sdt=0,5\" target=\"_blank\" rel=\"noopener\">Google Scholar<\/a><\/span><\/li>\n<li>Shieber, Stuart M. 1994. \u201cRestricting the Weak-Generative Capacity of Synchronous Tree-Adjoining Grammars.\u201d <em style=\"font-size: 19px;\">Computational Intelligence<\/em><span style=\"font-size: 19px;\"> 10 (4): 371\u201385. <span style=\"color: #0000ff; font-size: 10pt;\"><a style=\"color: #0000ff;\" href=\"https:\/\/scholar.google.fr\/scholar?cluster=3775236245839411305&amp;hl=en&amp;as_sdt=0,5\" target=\"_blank\" rel=\"noopener\">Google Scholar<\/a><\/span><\/span><\/li>\n<li>Shieber, Stuart, and Yves Schabes. 1990. 
\u201cSynchronous Tree-Adjoining Grammars.\u201d In\u00a0<em style=\"font-size: 19px;\">Proceedings of the 13th International Conference on Computational Linguistics<\/em><span style=\"font-size: 19px;\">. Association for Computational Linguistics. <span style=\"color: #0000ff; font-size: 10pt;\"><a style=\"color: #0000ff;\" href=\"https:\/\/scholar.google.fr\/scholar?cluster=2815623651347222045&amp;hl=en&amp;as_sdt=0,5\" target=\"_blank\" rel=\"noopener\">Google Scholar<\/a><\/span><\/span><\/li>\n<li>Zhao, Liwei, Karin Kipper, William Schuler, Christian Vogler, Norman Badler, and Martha Palmer. 2000. \u201cA Machine Translation System from English to American Sign Language.\u201d In\u00a0<em style=\"font-size: 19px;\">Conference of the Association for Machine Translation in the Americas<\/em><span style=\"font-size: 19px;\">, 54\u201367. Springer. <span style=\"color: #0000ff; font-size: 10pt;\"><a style=\"color: #0000ff;\" href=\"https:\/\/scholar.google.fr\/scholar?cluster=1569718222624679816&amp;hl=en&amp;as_sdt=0,5\" target=\"_blank\" rel=\"noopener\">Google Scholar<\/a><\/span><\/span><\/li>\n<\/ul>\n<\/div>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>Authors: Achraf Othman Research and Innovation Letters \u2022 Volume 1 \u2022 Issue 1 \u2022 August 2021 \u2022 Published: August 9, 2021 \u2022 PDF Abstract- Digital Accessibility to the content in web environments for people with hearing disabilities and with hearing impairment with a low level of literacy is becoming increasingly critical (Dena et al., 
2020,<\/p>\n","protected":false},"author":1,"featured_media":943,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_uag_custom_page_level_css":""},"categories":[6],"tags":[141,142,83,87],"aioseo_notices":[],"jetpack_featured_media_url":"https:\/\/i0.wp.com\/achrafothman.net\/site\/wp-content\/uploads\/signlanguagegloss.png?fit=1024%2C683&ssl=1","uagb_featured_image_src":{"full":["https:\/\/i0.wp.com\/achrafothman.net\/site\/wp-content\/uploads\/signlanguagegloss.png?fit=1024%2C683&ssl=1",1024,683,false],"thumbnail":["https:\/\/i0.wp.com\/achrafothman.net\/site\/wp-content\/uploads\/signlanguagegloss.png?resize=150%2C150&ssl=1",150,150,true],"medium":["https:\/\/i0.wp.com\/achrafothman.net\/site\/wp-content\/uploads\/signlanguagegloss.png?fit=300%2C200&ssl=1",300,200,true],"medium_large":["https:\/\/i0.wp.com\/achrafothman.net\/site\/wp-content\/uploads\/signlanguagegloss.png?fit=768%2C512&ssl=1",768,512,true],"large":["https:\/\/i0.wp.com\/achrafothman.net\/site\/wp-content\/uploads\/signlanguagegloss.png?fit=770%2C514&ssl=1",770,514,true],"1536x1536":["https:\/\/i0.wp.com\/achrafothman.net\/site\/wp-content\/uploads\/signlanguagegloss.png?fit=1024%2C683&ssl=1",1024,683,true],"2048x2048":["https:\/\/i0.wp.com\/achrafothman.net\/site\/wp-content\/uploads\/signlanguagegloss.png?fit=1024%2C683&ssl=1",1024,683,true],"post-thumbnail":["https:\/\/i0.wp.com\/achrafothman.net\/site\/wp-content\/uploads\/signlanguagegloss.png?resize=270%2C180&ssl=1",270,180,true],"contentberg-main":["https:\/\/i0.wp.com\/achrafothman.net\/site\/wp-content\/uploads\/signlanguagegloss.png?resize=770%2C515&ssl=1",770,515,true],"contentberg-main-full":["https:\/\/i0.wp.com\/achrafothman.net\/site\/wp-content\/uploads\/signlanguagegloss.png?resize=1024%2C508&ssl=1",1024,508,true],"contentberg-slider-stylish":["https:\/\/i0.wp.com\/achrafothman.net\/site\/wp-content\/uploads\/signlanguagegloss.png?resize=900%2C515&ssl=1",900,515,true],"contentbe
rg-slider-carousel":["https:\/\/i0.wp.com\/achrafothman.net\/site\/wp-content\/uploads\/signlanguagegloss.png?resize=370%2C370&ssl=1",370,370,true],"contentberg-slider-grid-b":["https:\/\/i0.wp.com\/achrafothman.net\/site\/wp-content\/uploads\/signlanguagegloss.png?resize=554%2C466&ssl=1",554,466,true],"contentberg-slider-grid-b-sm":["https:\/\/i0.wp.com\/achrafothman.net\/site\/wp-content\/uploads\/signlanguagegloss.png?resize=306%2C466&ssl=1",306,466,true],"contentberg-slider-bold-sm":["https:\/\/i0.wp.com\/achrafothman.net\/site\/wp-content\/uploads\/signlanguagegloss.png?resize=150%2C150&ssl=1",150,150,true],"contentberg-grid":["https:\/\/i0.wp.com\/achrafothman.net\/site\/wp-content\/uploads\/signlanguagegloss.png?resize=370%2C245&ssl=1",370,245,true],"contentberg-list":["https:\/\/i0.wp.com\/achrafothman.net\/site\/wp-content\/uploads\/signlanguagegloss.png?resize=260%2C200&ssl=1",260,200,true],"contentberg-list-b":["https:\/\/i0.wp.com\/achrafothman.net\/site\/wp-content\/uploads\/signlanguagegloss.png?resize=370%2C305&ssl=1",370,305,true],"contentberg-thumb":["https:\/\/i0.wp.com\/achrafothman.net\/site\/wp-content\/uploads\/signlanguagegloss.png?resize=87%2C67&ssl=1",87,67,true],"contentberg-thumb-alt":["https:\/\/i0.wp.com\/achrafothman.net\/site\/wp-content\/uploads\/signlanguagegloss.png?resize=150%2C150&ssl=1",150,150,true]},"uagb_author_info":{"display_name":"Achraf Othman","author_link":"https:\/\/achrafothman.net\/site\/author\/achraf-othman\/"},"uagb_comment_info":1,"uagb_excerpt":"Authors: Achraf Othman Research and Innovation Letters \u2022 Volume 1 \u2022 Issue 1 \u2022 August 2021 \u2022 Published: August 9, 2021 \u2022 PDF Abstract- Digital Accessibility to the content in web environments for people with hearing disabilities and with hearing impairment with a low level of literacy is becoming increasingly critical (Dena et al., 
2020,","jetpack_shortlink":"https:\/\/wp.me\/p8KjJN-fc","jetpack-related-posts":[{"id":2006,"url":"https:\/\/achrafothman.net\/site\/qatar-sign-language-avatar-arabic\/","url_meta":{"origin":942,"position":0},"title":"Meet the First Qatari Sign Language Avatar: A 3D Realistic Virtual Conversational Agent","date":"September 28, 2021","format":false,"excerpt":"Authors: Achraf Othman Research and Innovation Letters \u2022 Volume 1 \u2022 Issue 2 \u2022 September 2021 \u2022 Published: September 28, 2021 \u2022 PDF Abstract- When it comes to inventions and technological assistance for the hearing impaired, science has come a long way, there is absolutely no doubt about that. However,\u2026","rel":"","context":"In &quot;Research and Innovation Letters&quot;","img":{"alt_text":"BuHamad Qatari Sign Language Avatar","src":"https:\/\/i0.wp.com\/achrafothman.net\/site\/wp-content\/uploads\/BuHamad-Gif.gif?fit=644%2C480&ssl=1&resize=350%2C200","width":350,"height":200},"classes":[]},{"id":3705,"url":"https:\/\/achrafothman.net\/site\/unveiling-my-latest-book-sign-language-processing-from-gesture-to-meaning\/","url_meta":{"origin":942,"position":1},"title":"Unveiling My Latest Book: Sign Language Processing\u2014From Gesture to Meaning","date":"October 1, 2024","format":false,"excerpt":"It is with great excitement that I announce the release of my latest book, Sign Language Processing: From Gesture to Meaning. 
This work is the culmination of years of dedication, research, and a deep commitment to understanding the intricacies of sign languages, an area that bridges language, culture, and technology.\u2026","rel":"","context":"In &quot;Blog&quot;","img":{"alt_text":"Sign Language Processing Springer Book","src":"https:\/\/i0.wp.com\/achrafothman.net\/site\/wp-content\/uploads\/WhatsApp-Image-2024-09-15-at-10.55.50-AM-e1727749850619.jpeg?fit=1183%2C1044&ssl=1&resize=350%2C200","width":350,"height":200},"classes":[]},{"id":3356,"url":"https:\/\/achrafothman.net\/site\/new-dataset-jumla-qsl-22-a-dataset-of-qatari-sign-language-sentences\/","url_meta":{"origin":942,"position":2},"title":"New dataset: JUMLA-QSL-22: A DATASET OF QATARI SIGN LANGUAGE SENTENCES","date":"February 20, 2023","format":false,"excerpt":"ABSTRACT Sign languages are the most common mode of communication with and between hearing-impaired individuals. In the Arab world, Arabic sign language is used with different dialects supporting a distinct set of rules for the gestures used. With research on natural language processing advancing, models have been developed to translate\u2026","rel":"","context":"In &quot;Publications&quot;","img":{"alt_text":"","src":"https:\/\/i0.wp.com\/achrafothman.net\/site\/wp-content\/uploads\/dataset.jpg?fit=1200%2C903&ssl=1&resize=350%2C200","width":350,"height":200},"classes":[]},{"id":118,"url":"https:\/\/achrafothman.net\/site\/phd-defense\/","url_meta":{"origin":942,"position":3},"title":"PhD Defense: \u201cMachine Translation for Sign Language based on Statistical Approach\u201c","date":"May 18, 2017","format":"image","excerpt":"I am happy to report that on March\u00a010, 2017, I had my doctoral dissertation defense, as part of the WebSign project, and that the committee found my research to be worthy. 
My dissertation was titled \u201cMachine Translation for Sign Language based on Statistical Approach\u201d and was based on translation between\u2026","rel":"","context":"In &quot;Blog&quot;","img":{"alt_text":"PhD Defence Dr. Achraf Othman","src":"https:\/\/i0.wp.com\/achrafothman.net\/site\/wp-content\/uploads\/17157361_10211187722864810_4042370163602591668_o.jpg?fit=1200%2C900&ssl=1&resize=350%2C200","width":350,"height":200},"classes":[]},{"id":848,"url":"https:\/\/achrafothman.net\/site\/a-i-for-accessibility-hackathon-2021\/","url_meta":{"origin":942,"position":4},"title":"A.I. for Accessibility Hackathon 2021","date":"July 9, 2021","format":false,"excerpt":"According to the World Health Organization, 1 billion individuals, or 15% of the world population, are considered to have disabilities that exclude them from education and the workplace, weaken their potential to connect and communicate effectively, and limit their ability for independent living. With the shift to online living, more\u2026","rel":"","context":"In &quot;Blog&quot;","img":{"alt_text":"","src":"https:\/\/i0.wp.com\/achrafothman.net\/site\/wp-content\/uploads\/able.png?fit=1080%2C1080&ssl=1&resize=350%2C200","width":350,"height":200},"classes":[]},{"id":337,"url":"https:\/\/achrafothman.net\/site\/machine-translation-for-sign-language\/","url_meta":{"origin":942,"position":5},"title":"New Journal Publication: Designing High Accuracy Statistical Machine Translation for Sign Language","date":"March 12, 2019","format":"image","excerpt":"In this article, the authors deal with the machine translation of written English text to sign language. 
They study the existing systems and issues in order to propose an implantation of a statistical machine translation from written English text to American Sign Language (English\/ASL) taking care of several features of\u2026","rel":"","context":"In &quot;Blog&quot;","img":{"alt_text":"","src":"https:\/\/i0.wp.com\/achrafothman.net\/site\/wp-content\/uploads\/dd.png?fit=750%2C438&ssl=1&resize=350%2C200","width":350,"height":200},"classes":[]}],"_links":{"self":[{"href":"https:\/\/achrafothman.net\/site\/wp-json\/wp\/v2\/posts\/942"}],"collection":[{"href":"https:\/\/achrafothman.net\/site\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/achrafothman.net\/site\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/achrafothman.net\/site\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/achrafothman.net\/site\/wp-json\/wp\/v2\/comments?post=942"}],"version-history":[{"count":12,"href":"https:\/\/achrafothman.net\/site\/wp-json\/wp\/v2\/posts\/942\/revisions"}],"predecessor-version":[{"id":2489,"href":"https:\/\/achrafothman.net\/site\/wp-json\/wp\/v2\/posts\/942\/revisions\/2489"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/achrafothman.net\/site\/wp-json\/wp\/v2\/media\/943"}],"wp:attachment":[{"href":"https:\/\/achrafothman.net\/site\/wp-json\/wp\/v2\/media?parent=942"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/achrafothman.net\/site\/wp-json\/wp\/v2\/categories?post=942"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/achrafothman.net\/site\/wp-json\/wp\/v2\/tags?post=942"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}